linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT
@ 2015-05-25 16:09 Andy Shevchenko
  2015-05-25 16:09 ` [PATCH v2 1/8] PM / QoS: Make it possible to expose device latency tolerance to userspace Andy Shevchenko
                   ` (8 more replies)
  0 siblings, 9 replies; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-25 16:09 UTC (permalink / raw)
  To: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula
  Cc: Andy Shevchenko

The new coming Intel platforms such as Skylake will contain Sunrisepoint PCH.

The driver is based on MFD framework since the main device, i.e. serial bus
controller, contains register space for itself, DMA part, and an additional
address space (convergence layer). 

The public specification of the register map is avaiable in [3].

This is second generation of the patch series to bring support LPSS devices
found on Intel Sunrisepoint (Intel Skylake PCH). First one can be found here
[2].

The series has few logical parts:
- patches 1-3 prepares PM core, ACPI, and driver core (PM) to handle our case
- patches 4-6 introduce unregistering platform devices in MFD in reversed
  order
- patch 7 implements iDMA 64-bit driver
- patch 8 introduces an MFD driver for LPSS devices

The patch 8 depends on clkdev_create() helper that has been introduced by
Russel King in [3].

The driver has been tested with SPI and UART on Intel Skylake PCH.

[1] https://download.01.org/future-platform-configuration-hub/skylake/register-definitions/332219_001_Final.pdf
[2] https://lkml.org/lkml/2015/3/31/255
[3] https://patchwork.linuxtv.org/patch/28464/

Changelog v2:
- new DMA driver to fully support iDMA 64-bit IP
- patch 3 is added to wake up parent devices when ->probe(), ->remove(), or
  ->shutdown()
- MFD core is unregistering devices in reversed order
- address few Lee's comments on v1
- address Russel's comment, therefore use clkdev_create() helper
- intel-lpss{,-acpi,-pci} are modified regarding to above changes

Andy Shevchenko (5):
  klist: implement klist_prev()
  driver core: implement device_for_each_child_reverse()
  mfd: make mfd_remove_devices() iterate in reverse order
  dmaengine: add a driver for Intel integrated DMA 64-bit
  mfd: Add support for Intel Sunrisepoint LPSS devices

Heikki Krogerus (1):
  core: platform: wakeup the parent before trying any driver operations

Mika Westerberg (2):
  PM / QoS: Make it possible to expose device latency tolerance to
    userspace
  ACPI / PM: Attach ACPI power domain only once

 drivers/acpi/device_pm.c      |   8 +
 drivers/acpi/internal.h       |   2 +
 drivers/acpi/scan.c           |  46 ++-
 drivers/base/core.c           |  43 +++
 drivers/base/platform.c       |  21 +-
 drivers/base/power/power.h    |   2 +
 drivers/base/power/qos.c      |  37 +++
 drivers/base/power/sysfs.c    |  11 +
 drivers/dma/Kconfig           |   5 +
 drivers/dma/Makefile          |   1 +
 drivers/dma/idma64.c          | 749 ++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/idma64.h          | 233 +++++++++++++
 drivers/mfd/Kconfig           |  24 ++
 drivers/mfd/Makefile          |   3 +
 drivers/mfd/intel-lpss-acpi.c |  84 +++++
 drivers/mfd/intel-lpss-pci.c  | 113 +++++++
 drivers/mfd/intel-lpss.c      | 534 ++++++++++++++++++++++++++++++
 drivers/mfd/intel-lpss.h      |  62 ++++
 drivers/mfd/mfd-core.c        |   2 +-
 include/linux/device.h        |   2 +
 include/linux/klist.h         |   1 +
 include/linux/pm_qos.h        |   5 +
 lib/klist.c                   |  41 +++
 23 files changed, 2011 insertions(+), 18 deletions(-)
 create mode 100644 drivers/dma/idma64.c
 create mode 100644 drivers/dma/idma64.h
 create mode 100644 drivers/mfd/intel-lpss-acpi.c
 create mode 100644 drivers/mfd/intel-lpss-pci.c
 create mode 100644 drivers/mfd/intel-lpss.c
 create mode 100644 drivers/mfd/intel-lpss.h

-- 
2.1.4


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [PATCH v2 1/8] PM / QoS: Make it possible to expose device latency tolerance to userspace
  2015-05-25 16:09 [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Andy Shevchenko
@ 2015-05-25 16:09 ` Andy Shevchenko
  2015-05-25 16:09 ` [PATCH v2 2/8] ACPI / PM: Attach ACPI power domain only once Andy Shevchenko
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-25 16:09 UTC (permalink / raw)
  To: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula
  Cc: Andy Shevchenko

From: Mika Westerberg <mika.westerberg@linux.intel.com>

Typically when a device is created the bus core it belongs to (for example
PCI) does not know if the device supports things like latency tolerance.
This is left to the driver that binds to the device in question. However,
at that time the device has already been created and there is no way to set
its dev->power.set_latency_tolerance anymore.

So follow what has been done for other PM QoS attributes as well and allow
drivers to expose and hide latency tolerance from userspace, if the device
supports it.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
 drivers/base/power/power.h |  2 ++
 drivers/base/power/qos.c   | 37 +++++++++++++++++++++++++++++++++++++
 drivers/base/power/sysfs.c | 11 +++++++++++
 include/linux/pm_qos.h     |  5 +++++
 4 files changed, 55 insertions(+)

diff --git a/drivers/base/power/power.h b/drivers/base/power/power.h
index b6b8a27..0e62fb2 100644
--- a/drivers/base/power/power.h
+++ b/drivers/base/power/power.h
@@ -33,6 +33,8 @@ extern int pm_qos_sysfs_add_resume_latency(struct device *dev);
 extern void pm_qos_sysfs_remove_resume_latency(struct device *dev);
 extern int pm_qos_sysfs_add_flags(struct device *dev);
 extern void pm_qos_sysfs_remove_flags(struct device *dev);
+extern int pm_qos_sysfs_add_latency_tolerance(struct device *dev);
+extern void pm_qos_sysfs_remove_latency_tolerance(struct device *dev);
 
 #else /* CONFIG_PM */
 
diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c
index e56d538..7f3646e 100644
--- a/drivers/base/power/qos.c
+++ b/drivers/base/power/qos.c
@@ -883,3 +883,40 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
 	mutex_unlock(&dev_pm_qos_mtx);
 	return ret;
 }
+
+/**
+ * dev_pm_qos_expose_latency_tolerance - Expose latency tolerance to userspace
+ * @dev: Device whose latency tolerance to expose
+ */
+int dev_pm_qos_expose_latency_tolerance(struct device *dev)
+{
+	int ret;
+
+	if (!dev->power.set_latency_tolerance)
+		return -EINVAL;
+
+	mutex_lock(&dev_pm_qos_sysfs_mtx);
+	ret = pm_qos_sysfs_add_latency_tolerance(dev);
+	mutex_unlock(&dev_pm_qos_sysfs_mtx);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_qos_expose_latency_tolerance);
+
+/**
+ * dev_pm_qos_hide_latency_tolerance - Hide latency tolerance from userspace
+ * @dev: Device whose latency tolerance to hide
+ */
+void dev_pm_qos_hide_latency_tolerance(struct device *dev)
+{
+	mutex_lock(&dev_pm_qos_sysfs_mtx);
+	pm_qos_sysfs_remove_latency_tolerance(dev);
+	mutex_unlock(&dev_pm_qos_sysfs_mtx);
+
+	/* Remove the request from user space now */
+	pm_runtime_get_sync(dev);
+	dev_pm_qos_update_user_latency_tolerance(dev,
+		PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT);
+	pm_runtime_put(dev);
+}
+EXPORT_SYMBOL_GPL(dev_pm_qos_hide_latency_tolerance);
diff --git a/drivers/base/power/sysfs.c b/drivers/base/power/sysfs.c
index d2be3f9..a7b4679 100644
--- a/drivers/base/power/sysfs.c
+++ b/drivers/base/power/sysfs.c
@@ -738,6 +738,17 @@ void pm_qos_sysfs_remove_flags(struct device *dev)
 	sysfs_unmerge_group(&dev->kobj, &pm_qos_flags_attr_group);
 }
 
+int pm_qos_sysfs_add_latency_tolerance(struct device *dev)
+{
+	return sysfs_merge_group(&dev->kobj,
+				 &pm_qos_latency_tolerance_attr_group);
+}
+
+void pm_qos_sysfs_remove_latency_tolerance(struct device *dev)
+{
+	sysfs_unmerge_group(&dev->kobj, &pm_qos_latency_tolerance_attr_group);
+}
+
 void rpm_sysfs_remove(struct device *dev)
 {
 	sysfs_unmerge_group(&dev->kobj, &pm_runtime_attr_group);
diff --git a/include/linux/pm_qos.h b/include/linux/pm_qos.h
index 7b3ae0c..0f65d36 100644
--- a/include/linux/pm_qos.h
+++ b/include/linux/pm_qos.h
@@ -161,6 +161,8 @@ void dev_pm_qos_hide_flags(struct device *dev);
 int dev_pm_qos_update_flags(struct device *dev, s32 mask, bool set);
 s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev);
 int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val);
+int dev_pm_qos_expose_latency_tolerance(struct device *dev);
+void dev_pm_qos_hide_latency_tolerance(struct device *dev);
 
 static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev)
 {
@@ -229,6 +231,9 @@ static inline s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev)
 			{ return PM_QOS_LATENCY_TOLERANCE_NO_CONSTRAINT; }
 static inline int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val)
 			{ return 0; }
+static inline int dev_pm_qos_expose_latency_tolerance(struct device *dev)
+			{ return 0; }
+static inline void dev_pm_qos_hide_latency_tolerance(struct device *dev) {}
 
 static inline s32 dev_pm_qos_requested_resume_latency(struct device *dev) { return 0; }
 static inline s32 dev_pm_qos_requested_flags(struct device *dev) { return 0; }
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 2/8] ACPI / PM: Attach ACPI power domain only once
  2015-05-25 16:09 [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Andy Shevchenko
  2015-05-25 16:09 ` [PATCH v2 1/8] PM / QoS: Make it possible to expose device latency tolerance to userspace Andy Shevchenko
@ 2015-05-25 16:09 ` Andy Shevchenko
  2015-05-25 16:09 ` [PATCH v2 3/8] core: platform: wakeup the parent before trying any driver operations Andy Shevchenko
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-25 16:09 UTC (permalink / raw)
  To: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula
  Cc: Andy Shevchenko

From: Mika Westerberg <mika.westerberg@linux.intel.com>

Some devices, like MFD subdevices, share a single ACPI companion device so
that they are able to access their resources and children. However,
currently all these subdevices are attached to the ACPI power domain and
this might cause that the power methods for the companion device get called
more than once.

In order to solve this we attach the ACPI power domain only to the first
physical device that is bound to the ACPI companion device. In case of MFD
devices, this is the parent MFD device itself.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
 drivers/acpi/device_pm.c |  8 ++++++++
 drivers/acpi/internal.h  |  2 ++
 drivers/acpi/scan.c      | 46 ++++++++++++++++++++++++++++++----------------
 3 files changed, 40 insertions(+), 16 deletions(-)

diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
index 735db11..7d0c7e9 100644
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -1103,6 +1103,14 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on)
 	if (dev->pm_domain)
 		return -EEXIST;
 
+	/*
+	 * Only attach the power domain to the first device if the
+	 * companion is shared by multiple. This is to prevent doing power
+	 * management twice.
+	 */
+	if (!acpi_device_is_first_physical_node(adev, dev))
+		return -EBUSY;
+
 	acpi_add_pm_notifier(adev, dev, acpi_pm_notify_work_func);
 	dev->pm_domain = &acpi_general_pm_domain;
 	if (power_on) {
diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
index ba4a61e..c26a951 100644
--- a/drivers/acpi/internal.h
+++ b/drivers/acpi/internal.h
@@ -96,6 +96,8 @@ void acpi_device_add_finalize(struct acpi_device *device);
 void acpi_free_pnp_ids(struct acpi_device_pnp *pnp);
 bool acpi_device_is_present(struct acpi_device *adev);
 bool acpi_device_is_battery(struct acpi_device *adev);
+bool acpi_device_is_first_physical_node(struct acpi_device *adev,
+					const struct device *dev);
 
 /* --------------------------------------------------------------------------
                                   Power Resource
diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
index 03141aa..37f8f74 100644
--- a/drivers/acpi/scan.c
+++ b/drivers/acpi/scan.c
@@ -223,6 +223,35 @@ static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
 	return len;
 }
 
+/**
+ * acpi_device_is_first_physical_node - Is given dev first physical node
+ * @adev: ACPI companion device
+ * @dev: Physical device to check
+ *
+ * Function checks if given @dev is the first physical devices attached to
+ * the ACPI companion device. This distinction is needed in some cases
+ * where the same companion device is shared between many physical devices.
+ *
+ * Note that the caller have to provide valid @adev pointer.
+ */
+bool acpi_device_is_first_physical_node(struct acpi_device *adev,
+					const struct device *dev)
+{
+	bool ret = false;
+
+	mutex_lock(&adev->physical_node_lock);
+	if (!list_empty(&adev->physical_node_list)) {
+		const struct acpi_device_physical_node *node;
+
+		node = list_first_entry(&adev->physical_node_list,
+					struct acpi_device_physical_node, node);
+		ret = node->dev == dev;
+	}
+	mutex_unlock(&adev->physical_node_lock);
+
+	return ret;
+}
+
 /*
  * acpi_companion_match() - Can we match via ACPI companion device
  * @dev: Device in question
@@ -247,7 +276,6 @@ static int create_of_modalias(struct acpi_device *acpi_dev, char *modalias,
 static struct acpi_device *acpi_companion_match(const struct device *dev)
 {
 	struct acpi_device *adev;
-	struct mutex *physical_node_lock;
 
 	adev = ACPI_COMPANION(dev);
 	if (!adev)
@@ -256,21 +284,7 @@ static struct acpi_device *acpi_companion_match(const struct device *dev)
 	if (list_empty(&adev->pnp.ids))
 		return NULL;
 
-	physical_node_lock = &adev->physical_node_lock;
-	mutex_lock(physical_node_lock);
-	if (list_empty(&adev->physical_node_list)) {
-		adev = NULL;
-	} else {
-		const struct acpi_device_physical_node *node;
-
-		node = list_first_entry(&adev->physical_node_list,
-					struct acpi_device_physical_node, node);
-		if (node->dev != dev)
-			adev = NULL;
-	}
-	mutex_unlock(physical_node_lock);
-
-	return adev;
+	return acpi_device_is_first_physical_node(adev, dev) ? adev : NULL;
 }
 
 static int __acpi_device_uevent_modalias(struct acpi_device *adev,
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 3/8] core: platform: wakeup the parent before trying any driver operations
  2015-05-25 16:09 [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Andy Shevchenko
  2015-05-25 16:09 ` [PATCH v2 1/8] PM / QoS: Make it possible to expose device latency tolerance to userspace Andy Shevchenko
  2015-05-25 16:09 ` [PATCH v2 2/8] ACPI / PM: Attach ACPI power domain only once Andy Shevchenko
@ 2015-05-25 16:09 ` Andy Shevchenko
  2015-05-25 17:36   ` Alan Stern
  2015-05-26  4:04   ` Vinod Koul
  2015-05-25 16:09 ` [PATCH v2 4/8] klist: implement klist_prev() Andy Shevchenko
                   ` (5 subsequent siblings)
  8 siblings, 2 replies; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-25 16:09 UTC (permalink / raw)
  To: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula
  Cc: Andy Shevchenko

From: Heikki Krogerus <heikki.krogerus@linux.intel.com>

If the parent is still suspended when a driver probe,
remove or shutdown is attempted, the result may be a
failure.

For example, if the parent is a PCI MFD device that has been
suspended when we try to probe our device, any register
reads will return 0xffffffff.

To fix the problem, making sure the parent is always awake
before using driver probe, remove or shutdown.

Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
 drivers/base/platform.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/base/platform.c b/drivers/base/platform.c
index ebf034b..59fcda5 100644
--- a/drivers/base/platform.c
+++ b/drivers/base/platform.c
@@ -518,9 +518,15 @@ static int platform_drv_probe(struct device *_dev)
 
 	ret = dev_pm_domain_attach(_dev, true);
 	if (ret != -EPROBE_DEFER) {
+		if (_dev->parent != &platform_bus)
+			pm_runtime_get_sync(_dev->parent);
+
 		ret = drv->probe(dev);
 		if (ret)
 			dev_pm_domain_detach(_dev, true);
+
+		if (_dev->parent != &platform_bus)
+			pm_runtime_put(_dev->parent);
 	}
 
 	if (drv->prevent_deferred_probe && ret == -EPROBE_DEFER) {
@@ -542,9 +548,15 @@ static int platform_drv_remove(struct device *_dev)
 	struct platform_device *dev = to_platform_device(_dev);
 	int ret;
 
+	if (_dev->parent != &platform_bus)
+		pm_runtime_get_sync(_dev->parent);
+
 	ret = drv->remove(dev);
-	dev_pm_domain_detach(_dev, true);
 
+	if (_dev->parent != &platform_bus)
+		pm_runtime_put(_dev->parent);
+
+	dev_pm_domain_detach(_dev, true);
 	return ret;
 }
 
@@ -553,7 +565,14 @@ static void platform_drv_shutdown(struct device *_dev)
 	struct platform_driver *drv = to_platform_driver(_dev->driver);
 	struct platform_device *dev = to_platform_device(_dev);
 
+	if (_dev->parent != &platform_bus)
+		pm_runtime_get_sync(_dev->parent);
+
 	drv->shutdown(dev);
+
+	if (_dev->parent != &platform_bus)
+		pm_runtime_put(_dev->parent);
+
 	dev_pm_domain_detach(_dev, true);
 }
 
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 4/8] klist: implement klist_prev()
  2015-05-25 16:09 [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Andy Shevchenko
                   ` (2 preceding siblings ...)
  2015-05-25 16:09 ` [PATCH v2 3/8] core: platform: wakeup the parent before trying any driver operations Andy Shevchenko
@ 2015-05-25 16:09 ` Andy Shevchenko
  2015-06-01  1:21   ` Greg Kroah-Hartman
  2015-05-25 16:09 ` [PATCH v2 5/8] driver core: implement device_for_each_child_reverse() Andy Shevchenko
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-25 16:09 UTC (permalink / raw)
  To: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula
  Cc: Andy Shevchenko

klist_prev() gets the previous element in the list. It is useful to traverse
through the list in reverse order, for example, to provide LIFO (last in first
out) variant of access.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
 include/linux/klist.h |  1 +
 lib/klist.c           | 41 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/include/linux/klist.h b/include/linux/klist.h
index 61e5b72..953f283 100644
--- a/include/linux/klist.h
+++ b/include/linux/klist.h
@@ -63,6 +63,7 @@ extern void klist_iter_init(struct klist *k, struct klist_iter *i);
 extern void klist_iter_init_node(struct klist *k, struct klist_iter *i,
 				 struct klist_node *n);
 extern void klist_iter_exit(struct klist_iter *i);
+extern struct klist_node *klist_prev(struct klist_iter *i);
 extern struct klist_node *klist_next(struct klist_iter *i);
 
 #endif
diff --git a/lib/klist.c b/lib/klist.c
index 89b485a..d74cf7a 100644
--- a/lib/klist.c
+++ b/lib/klist.c
@@ -324,6 +324,47 @@ static struct klist_node *to_klist_node(struct list_head *n)
 }
 
 /**
+ * klist_prev - Ante up prev node in list.
+ * @i: Iterator structure.
+ *
+ * First grab list lock. Decrement the reference count of the previous
+ * node, if there was one. Grab the prev node, increment its reference
+ * count, drop the lock, and return that prev node.
+ */
+struct klist_node *klist_prev(struct klist_iter *i)
+{
+	void (*put)(struct klist_node *) = i->i_klist->put;
+	struct klist_node *last = i->i_cur;
+	struct klist_node *prev;
+
+	spin_lock(&i->i_klist->k_lock);
+
+	if (last) {
+		prev = to_klist_node(last->n_node.prev);
+		if (!klist_dec_and_del(last))
+			put = NULL;
+	} else
+		prev = to_klist_node(i->i_klist->k_list.prev);
+
+	i->i_cur = NULL;
+	while (prev != to_klist_node(&i->i_klist->k_list)) {
+		if (likely(!knode_dead(prev))) {
+			kref_get(&prev->n_ref);
+			i->i_cur = prev;
+			break;
+		}
+		prev = to_klist_node(prev->n_node.prev);
+	}
+
+	spin_unlock(&i->i_klist->k_lock);
+
+	if (put && last)
+		put(last);
+	return i->i_cur;
+}
+EXPORT_SYMBOL_GPL(klist_prev);
+
+/**
  * klist_next - Ante up next node in list.
  * @i: Iterator structure.
  *
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 5/8] driver core: implement device_for_each_child_reverse()
  2015-05-25 16:09 [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Andy Shevchenko
                   ` (3 preceding siblings ...)
  2015-05-25 16:09 ` [PATCH v2 4/8] klist: implement klist_prev() Andy Shevchenko
@ 2015-05-25 16:09 ` Andy Shevchenko
  2015-06-01  1:21   ` Greg Kroah-Hartman
  2015-05-25 16:09 ` [PATCH v2 6/8] mfd: make mfd_remove_devices() iterate in reverse order Andy Shevchenko
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-25 16:09 UTC (permalink / raw)
  To: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula
  Cc: Andy Shevchenko

The new function device_for_each_child_reverse() is helpful to traverse the
registered devices in a reversed order, e.g. in the case when an operation on
each device should be done first on the last added device, then on one before
last and so on.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
 drivers/base/core.c    | 43 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/device.h |  2 ++
 2 files changed, 45 insertions(+)

diff --git a/drivers/base/core.c b/drivers/base/core.c
index 21d1303..69b2acc 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -1252,6 +1252,19 @@ void device_unregister(struct device *dev)
 }
 EXPORT_SYMBOL_GPL(device_unregister);
 
+static struct device *prev_device(struct klist_iter *i)
+{
+	struct klist_node *n = klist_prev(i);
+	struct device *dev = NULL;
+	struct device_private *p;
+
+	if (n) {
+		p = to_device_private_parent(n);
+		dev = p->device;
+	}
+	return dev;
+}
+
 static struct device *next_device(struct klist_iter *i)
 {
 	struct klist_node *n = klist_next(i);
@@ -1342,6 +1355,36 @@ int device_for_each_child(struct device *parent, void *data,
 EXPORT_SYMBOL_GPL(device_for_each_child);
 
 /**
+ * device_for_each_child_reverse - device child iterator in reversed order.
+ * @parent: parent struct device.
+ * @fn: function to be called for each device.
+ * @data: data for the callback.
+ *
+ * Iterate over @parent's child devices, and call @fn for each,
+ * passing it @data.
+ *
+ * We check the return of @fn each time. If it returns anything
+ * other than 0, we break out and return that value.
+ */
+int device_for_each_child_reverse(struct device *parent, void *data,
+				  int (*fn)(struct device *dev, void *data))
+{
+	struct klist_iter i;
+	struct device *child;
+	int error = 0;
+
+	if (!parent->p)
+		return 0;
+
+	klist_iter_init(&parent->p->klist_children, &i);
+	while ((child = prev_device(&i)) && !error)
+		error = fn(child, data);
+	klist_iter_exit(&i);
+	return error;
+}
+EXPORT_SYMBOL_GPL(device_for_each_child_reverse);
+
+/**
  * device_find_child - device iterator for locating a particular device.
  * @parent: parent struct device
  * @match: Callback function to check device
diff --git a/include/linux/device.h b/include/linux/device.h
index 6558af9..cf404a0 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -928,6 +928,8 @@ extern int __must_check device_add(struct device *dev);
 extern void device_del(struct device *dev);
 extern int device_for_each_child(struct device *dev, void *data,
 		     int (*fn)(struct device *dev, void *data));
+extern int device_for_each_child_reverse(struct device *dev, void *data,
+		     int (*fn)(struct device *dev, void *data));
 extern struct device *device_find_child(struct device *dev, void *data,
 				int (*match)(struct device *dev, void *data));
 extern int device_rename(struct device *dev, const char *new_name);
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 6/8] mfd: make mfd_remove_devices() iterate in reverse order
  2015-05-25 16:09 [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Andy Shevchenko
                   ` (4 preceding siblings ...)
  2015-05-25 16:09 ` [PATCH v2 5/8] driver core: implement device_for_each_child_reverse() Andy Shevchenko
@ 2015-05-25 16:09 ` Andy Shevchenko
  2015-05-25 16:09 ` [PATCH v2 7/8] dmaengine: add a driver for Intel integrated DMA 64-bit Andy Shevchenko
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-25 16:09 UTC (permalink / raw)
  To: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula
  Cc: Andy Shevchenko

The newly introduced device_for_each_child_reverse() would be used when MFD
core removes the device.

After this patch applied the devices will be removed in a reversed order. This
behaviour is useful when devices have implicit dependency on order, i.e.
consider MFD device with serial bus controller, such as SPI, and DMA IP that is
attached to serial bus controller: before remove the DMA driver we have to be
ensured that no DMA transfers is ongoing and the requested channel are unused.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
 drivers/mfd/mfd-core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
index 1aed3b7..79eeaa5 100644
--- a/drivers/mfd/mfd-core.c
+++ b/drivers/mfd/mfd-core.c
@@ -300,7 +300,7 @@ void mfd_remove_devices(struct device *parent)
 {
 	atomic_t *cnts = NULL;
 
-	device_for_each_child(parent, &cnts, mfd_remove_devices_fn);
+	device_for_each_child_reverse(parent, &cnts, mfd_remove_devices_fn);
 	kfree(cnts);
 }
 EXPORT_SYMBOL(mfd_remove_devices);
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 7/8] dmaengine: add a driver for Intel integrated DMA 64-bit
  2015-05-25 16:09 [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Andy Shevchenko
                   ` (5 preceding siblings ...)
  2015-05-25 16:09 ` [PATCH v2 6/8] mfd: make mfd_remove_devices() iterate in reverse order Andy Shevchenko
@ 2015-05-25 16:09 ` Andy Shevchenko
  2015-05-26  4:06   ` Vinod Koul
  2015-05-25 16:09 ` [PATCH v2 8/8] mfd: Add support for Intel Sunrisepoint LPSS devices Andy Shevchenko
  2015-05-26  3:51 ` [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Vinod Koul
  8 siblings, 1 reply; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-25 16:09 UTC (permalink / raw)
  To: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula
  Cc: Andy Shevchenko

Intel integrated DMA (iDMA) 64-bit is a specific IP that is used as a part of
LPSS devices such as HSUART or SPI. The iDMA IP is attached for private
usage on each host controller independently.

While it has similarities with Synopsys DesignWare DMA, the following
distinctions doesn't allow to use the existing driver:
- 64-bit mode with corresponding changes in Hardware Linked List data structure
- many slight differences in the channel registers

Moreover this driver is based on the DMA virtual channels framework that helps
to make the driver cleaner and easy to understand.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
 drivers/dma/Kconfig  |   5 +
 drivers/dma/Makefile |   1 +
 drivers/dma/idma64.c | 749 +++++++++++++++++++++++++++++++++++++++++++++++++++
 drivers/dma/idma64.h | 233 ++++++++++++++++
 4 files changed, 988 insertions(+)
 create mode 100644 drivers/dma/idma64.c
 create mode 100644 drivers/dma/idma64.h

diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index bda2cb0..e4257e9 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -85,6 +85,11 @@ config INTEL_IOP_ADMA
 	help
 	  Enable support for the Intel(R) IOP Series RAID engines.
 
+config IDMA64
+	tristate "Intel integrated DMA 64-bit support"
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+
 source "drivers/dma/dw/Kconfig"
 
 config AT_HDMAC
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 69f77d5..c1fe119 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -14,6 +14,7 @@ obj-$(CONFIG_HSU_DMA) += hsu/
 obj-$(CONFIG_MPC512X_DMA) += mpc512x_dma.o
 obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
 obj-$(CONFIG_MV_XOR) += mv_xor.o
+obj-$(CONFIG_IDMA64) += idma64.o
 obj-$(CONFIG_DW_DMAC_CORE) += dw/
 obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
 obj-$(CONFIG_AT_XDMAC) += at_xdmac.o
diff --git a/drivers/dma/idma64.c b/drivers/dma/idma64.c
new file mode 100644
index 0000000..3119bdf
--- /dev/null
+++ b/drivers/dma/idma64.c
@@ -0,0 +1,749 @@
+/*
+ * Core driver for the Intel integrated DMA 64-bit
+ *
+ * Copyright (C) 2015 Intel Corporation
+ * Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmapool.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/platform_device.h>
+
+#include "idma64.h"
+
+/* Platform driver name */
+#define DRV_NAME		"idma64"
+
+/* For now we support only two channels */
+#define IDMA64_NR_CHAN		2
+
+/* ---------------------------------------------------------------------- */
+
+static struct device *chan2dev(struct dma_chan *chan)
+{
+	return &chan->dev->device;
+}
+
+/* ---------------------------------------------------------------------- */
+
+static void idma64_off(struct idma64 *idma64)
+{
+	unsigned short count = 100;
+
+	dma_writel(idma64, CFG, 0);
+
+	channel_clear_bit(idma64, MASK(XFER), idma64->all_chan_mask);
+	channel_clear_bit(idma64, MASK(BLOCK), idma64->all_chan_mask);
+	channel_clear_bit(idma64, MASK(SRC_TRAN), idma64->all_chan_mask);
+	channel_clear_bit(idma64, MASK(DST_TRAN), idma64->all_chan_mask);
+	channel_clear_bit(idma64, MASK(ERROR), idma64->all_chan_mask);
+
+	do {
+		cpu_relax();
+	} while (dma_readl(idma64, CFG) & IDMA64_CFG_DMA_EN && --count);
+}
+
+static void idma64_on(struct idma64 *idma64)
+{
+	dma_writel(idma64, CFG, IDMA64_CFG_DMA_EN);
+}
+
+/* ---------------------------------------------------------------------- */
+
+static void idma64_chan_init(struct idma64 *idma64, struct idma64_chan *idma64c)
+{
+	u32 cfghi = IDMA64C_CFGH_SRC_PER(1) | IDMA64C_CFGH_DST_PER(0);
+	u32 cfglo = 0;
+
+	/* Enforce FIFO drain when channel is suspended */
+	cfglo |= IDMA64C_CFGL_CH_DRAIN;
+
+	/* Set default burst alignment */
+	cfglo |= IDMA64C_CFGL_DST_BURST_ALIGN | IDMA64C_CFGL_SRC_BURST_ALIGN;
+
+	channel_writel(idma64c, CFG_LO, cfglo);
+	channel_writel(idma64c, CFG_HI, cfghi);
+
+	/* Enable interrupts */
+	channel_set_bit(idma64, MASK(XFER), idma64c->mask);
+	channel_set_bit(idma64, MASK(ERROR), idma64c->mask);
+
+	/*
+	 * Enforce the controller to be turned on.
+	 *
+	 * The iDMA is turned off in ->probe() and looses context during system
+	 * suspend / resume cycle. That's why we have to enable it each time we
+	 * use it.
+	 */
+	idma64_on(idma64);
+}
+
+static void idma64_chan_stop(struct idma64 *idma64, struct idma64_chan *idma64c)
+{
+	channel_clear_bit(idma64, CH_EN, idma64c->mask);
+}
+
+static void idma64_chan_start(struct idma64 *idma64, struct idma64_chan *idma64c)
+{
+	struct idma64_desc *desc = idma64c->desc;
+	struct idma64_hw_desc *hw = &desc->hw[0];
+
+	channel_writeq(idma64c, SAR, 0);
+	channel_writeq(idma64c, DAR, 0);
+
+	channel_writel(idma64c, CTL_HI, IDMA64C_CTLH_BLOCK_TS(~0UL));
+	channel_writel(idma64c, CTL_LO, IDMA64C_CTLL_LLP_S_EN | IDMA64C_CTLL_LLP_D_EN);
+
+	channel_writeq(idma64c, LLP, hw->llp);
+
+	channel_set_bit(idma64, CH_EN, idma64c->mask);
+}
+
+static void idma64_init_channel(struct idma64_chan *idma64c)
+{
+	struct idma64 *idma64 = to_idma64(idma64c->vchan.chan.device);
+	unsigned long flags;
+
+	spin_lock_irqsave(&idma64c->lock, flags);
+	idma64_chan_init(idma64, idma64c);
+	spin_unlock_irqrestore(&idma64c->lock, flags);
+}
+
+static void idma64_stop_channel(struct idma64_chan *idma64c)
+{
+	struct idma64 *idma64 = to_idma64(idma64c->vchan.chan.device);
+	unsigned long flags;
+
+	spin_lock_irqsave(&idma64c->lock, flags);
+	idma64_chan_stop(idma64, idma64c);
+	spin_unlock_irqrestore(&idma64c->lock, flags);
+}
+
+static void idma64_start_channel(struct idma64_chan *idma64c)
+{
+	struct idma64 *idma64 = to_idma64(idma64c->vchan.chan.device);
+	unsigned long flags;
+
+	spin_lock_irqsave(&idma64c->lock, flags);
+	idma64_chan_start(idma64, idma64c);
+	spin_unlock_irqrestore(&idma64c->lock, flags);
+}
+
+static void idma64_start_transfer(struct idma64_chan *idma64c)
+{
+	struct virt_dma_desc *vdesc;
+
+	/* Get the next descriptor */
+	vdesc = vchan_next_desc(&idma64c->vchan);
+	if (!vdesc) {
+		idma64c->desc = NULL;
+		return;
+	}
+
+	list_del(&vdesc->node);
+	idma64c->desc = to_idma64_desc(vdesc);
+
+	/* Configure the channel */
+	idma64_init_channel(idma64c);
+
+	/* Start the channel with a new descriptor */
+	idma64_start_channel(idma64c);
+}
+
+/* ---------------------------------------------------------------------- */
+
+static void idma64_chan_irq(struct idma64 *idma64, unsigned short c,
+		u32 status_err, u32 status_xfer)
+{
+	struct idma64_chan *idma64c = &idma64->chan[c];
+	struct idma64_desc *desc;
+	unsigned long flags;
+
+	spin_lock_irqsave(&idma64c->vchan.lock, flags);
+	desc = idma64c->desc;
+	if (desc) {
+		if (status_err & (1 << c)) {
+			dma_writel(idma64, CLEAR(ERROR), idma64c->mask);
+			desc->status = DMA_ERROR;
+		} else if (status_xfer & (1 << c)) {
+			dma_writel(idma64, CLEAR(XFER), idma64c->mask);
+			desc->status = DMA_COMPLETE;
+			vchan_cookie_complete(&desc->vdesc);
+			idma64_start_transfer(idma64c);
+		}
+
+		/* idma64_start_transfer() updates idma64c->desc */
+		if (idma64c->desc == NULL || desc->status == DMA_ERROR)
+			idma64_stop_channel(idma64c);
+	}
+	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
+}
+
+static irqreturn_t idma64_irq(int irq, void *dev)
+{
+	struct idma64 *idma64 = dev;
+	u32 status = dma_readl(idma64, STATUS_INT);
+	u32 status_xfer;
+	u32 status_err;
+	unsigned short i;
+
+	dev_vdbg(idma64->dma.dev, "%s: status=%#x\n", __func__, status);
+
+	/* Check if we have any interrupt from the DMA controller */
+	if (!status)
+		return IRQ_NONE;
+
+	/* Disable interrupts */
+	channel_clear_bit(idma64, MASK(XFER), idma64->all_chan_mask);
+	channel_clear_bit(idma64, MASK(ERROR), idma64->all_chan_mask);
+
+	status_xfer = dma_readl(idma64, RAW(XFER));
+	status_err = dma_readl(idma64, RAW(ERROR));
+
+	for (i = 0; i < idma64->dma.chancnt; i++)
+		idma64_chan_irq(idma64, i, status_err, status_xfer);
+
+	/* Re-enable interrupts */
+	channel_set_bit(idma64, MASK(XFER), idma64->all_chan_mask);
+	channel_set_bit(idma64, MASK(ERROR), idma64->all_chan_mask);
+
+	return IRQ_HANDLED;
+}
+
+/* ---------------------------------------------------------------------- */
+
+static struct idma64_desc *idma64_alloc_desc(unsigned int ndesc)
+{
+	struct idma64_desc *desc;
+
+	desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
+	if (!desc)
+		return NULL;
+
+	desc->hw = kcalloc(ndesc, sizeof(*desc->hw), GFP_NOWAIT);
+	if (!desc->hw) {
+		kfree(desc);
+		return NULL;
+	}
+
+	return desc;
+}
+
+static void idma64_desc_free(struct idma64_chan *idma64c,
+		struct idma64_desc *desc)
+{
+	struct idma64_hw_desc *hw;
+
+	if (desc->ndesc) {
+		unsigned int i = desc->ndesc;
+
+		do {
+			hw = &desc->hw[--i];
+			dma_pool_free(idma64c->pool, hw->lli, hw->llp);
+		} while (i);
+	}
+
+	kfree(desc->hw);
+	kfree(desc);
+}
+
+static void idma64_vdesc_free(struct virt_dma_desc *vdesc)
+{
+	struct idma64_chan *idma64c = to_idma64_chan(vdesc->tx.chan);
+
+	idma64_desc_free(idma64c, to_idma64_desc(vdesc));
+}
+
+static void idma64_hw_desc_fill(struct idma64_hw_desc *hw,
+		struct dma_slave_config *config,
+		enum dma_transfer_direction direction, u64 llp)
+{
+	struct idma64_lli *lli = hw->lli;
+	u64 sar, dar;
+	u32 ctlhi = IDMA64C_CTLH_BLOCK_TS(hw->len);
+	u32 ctllo = IDMA64C_CTLL_LLP_S_EN | IDMA64C_CTLL_LLP_D_EN;
+	u32 src_width, dst_width;
+
+	if (direction == DMA_MEM_TO_DEV) {
+		sar = hw->phys;
+		dar = config->dst_addr;
+		ctllo |= IDMA64C_CTLL_DST_FIX | IDMA64C_CTLL_SRC_INC |
+			 IDMA64C_CTLL_FC_M2P;
+		src_width = min_t(u32, 2, __ffs(sar | hw->len));
+		dst_width = __fls(config->dst_addr_width);
+	} else {	/* DMA_DEV_TO_MEM */
+		sar = config->src_addr;
+		dar = hw->phys;
+		ctllo |= IDMA64C_CTLL_DST_INC | IDMA64C_CTLL_SRC_FIX |
+			 IDMA64C_CTLL_FC_P2M;
+		src_width = __fls(config->src_addr_width);
+		dst_width = min_t(u32, 2, __ffs(dar | hw->len));
+	}
+
+	lli->sar = sar;
+	lli->dar = dar;
+
+	lli->ctlhi = ctlhi;
+	lli->ctllo = ctllo |
+		     IDMA64C_CTLL_SRC_MSIZE(config->src_maxburst) |
+		     IDMA64C_CTLL_DST_MSIZE(config->dst_maxburst) |
+		     IDMA64C_CTLL_DST_WIDTH(dst_width) |
+		     IDMA64C_CTLL_SRC_WIDTH(src_width);
+
+	lli->llp = llp;
+}
+
+static void idma64_desc_fill(struct idma64_chan *idma64c,
+		struct idma64_desc *desc)
+{
+	struct dma_slave_config *config = &idma64c->config;
+	struct idma64_hw_desc *hw = &desc->hw[desc->ndesc - 1];
+	struct idma64_lli *lli = hw->lli;
+	u64 llp = 0;
+	unsigned int i = desc->ndesc;
+
+	/* Fill the hardware descriptors and link them to a list */
+	do {
+		hw = &desc->hw[--i];
+		idma64_hw_desc_fill(hw, config, desc->direction, llp);
+		llp = hw->llp;
+	} while (i);
+
+	/* Trigger interrupt after last block */
+	lli->ctllo |= IDMA64C_CTLL_INT_EN;
+}
+
+static struct dma_async_tx_descriptor *idma64_prep_slave_sg(
+		struct dma_chan *chan, struct scatterlist *sgl,
+		unsigned int sg_len, enum dma_transfer_direction direction,
+		unsigned long flags, void *context)
+{
+	struct idma64_chan *idma64c = to_idma64_chan(chan);
+	struct idma64_desc *desc;
+	struct scatterlist *sg;
+	unsigned int i;
+
+	desc = idma64_alloc_desc(sg_len);
+	if (!desc)
+		return NULL;
+
+	for_each_sg(sgl, sg, sg_len, i) {
+		struct idma64_hw_desc *hw = &desc->hw[i];
+
+		/* Allocate DMA capable memory for hardware descriptor */
+		hw->lli = dma_pool_alloc(idma64c->pool, GFP_NOWAIT, &hw->llp);
+		if (!hw->lli) {
+			desc->ndesc = i;
+			idma64_desc_free(idma64c, desc);
+			return NULL;
+		}
+
+		hw->phys = sg_dma_address(sg);
+		hw->len = sg_dma_len(sg);
+	}
+
+	desc->ndesc = sg_len;
+	desc->direction = direction;
+	desc->status = DMA_IN_PROGRESS;
+
+	idma64_desc_fill(idma64c, desc);
+	return vchan_tx_prep(&idma64c->vchan, &desc->vdesc, flags);
+}
+
+static void idma64_issue_pending(struct dma_chan *chan)
+{
+	struct idma64_chan *idma64c = to_idma64_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&idma64c->vchan.lock, flags);
+	if (vchan_issue_pending(&idma64c->vchan) && !idma64c->desc)
+		idma64_start_transfer(idma64c);
+	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
+}
+
+static size_t idma64_desc_size(struct idma64_desc *desc, unsigned int active)
+{
+	size_t bytes = 0;
+	unsigned int i;
+
+	for (i = active; i < desc->ndesc; i++)
+		bytes += desc->hw[i].len;
+
+	return bytes;
+}
+
+static size_t idma64_pending_desc_size(struct idma64_desc *desc)
+{
+	return idma64_desc_size(desc, 0);
+}
+
+static size_t idma64_active_desc_size(struct idma64_chan *idma64c)
+{
+	struct idma64_desc *desc = idma64c->desc;
+	struct idma64_hw_desc *hw;
+	size_t bytes;
+	u64 llp;
+	u32 ctlhi;
+	unsigned int i = 0;
+	unsigned long flags;
+
+	spin_lock_irqsave(&idma64c->lock, flags);
+	llp = channel_readq(idma64c, LLP);
+	spin_unlock_irqrestore(&idma64c->lock, flags);
+
+	do {
+		hw = &desc->hw[i];
+	} while (llp != hw->llp && ++i < desc->ndesc);
+
+	bytes = idma64_desc_size(desc, i);
+	if (!i)
+		return bytes;
+
+	hw = &desc->hw[--i];
+
+	spin_lock_irqsave(&idma64c->lock, flags);
+	ctlhi = channel_readl(idma64c, CTL_HI);
+	spin_unlock_irqrestore(&idma64c->lock, flags);
+
+	return bytes + hw->len - IDMA64C_CTLH_BLOCK_TS(ctlhi);
+}
+
+static enum dma_status idma64_tx_status(struct dma_chan *chan,
+		dma_cookie_t cookie, struct dma_tx_state *state)
+{
+	struct idma64_chan *idma64c = to_idma64_chan(chan);
+	struct virt_dma_desc *vdesc;
+	enum dma_status status;
+	size_t bytes;
+	unsigned long flags;
+
+	status = dma_cookie_status(chan, cookie, state);
+	if (status == DMA_COMPLETE)
+		return status;
+
+	spin_lock_irqsave(&idma64c->vchan.lock, flags);
+	vdesc = vchan_find_desc(&idma64c->vchan, cookie);
+	if (idma64c->desc && cookie == idma64c->desc->vdesc.tx.cookie) {
+		bytes = idma64_active_desc_size(idma64c);
+		dma_set_residue(state, bytes);
+		status = idma64c->desc->status;
+	} else if (vdesc) {
+		bytes = idma64_pending_desc_size(to_idma64_desc(vdesc));
+		dma_set_residue(state, bytes);
+	}
+	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
+
+	return status;
+}
+
+static void convert_burst(u32 *maxburst)
+{
+	if (*maxburst)
+		*maxburst = __fls(*maxburst);
+	else
+		*maxburst = 0;
+}
+
+static int idma64_slave_config(struct dma_chan *chan,
+		struct dma_slave_config *config)
+{
+	struct idma64_chan *idma64c = to_idma64_chan(chan);
+
+	/* Check if chan will be configured for slave transfers */
+	if (!is_slave_direction(config->direction))
+		return -EINVAL;
+
+	memcpy(&idma64c->config, config, sizeof(idma64c->config));
+
+	convert_burst(&idma64c->config.src_maxburst);
+	convert_burst(&idma64c->config.dst_maxburst);
+
+	return 0;
+}
+
+static void idma64_chan_deactivate(struct idma64_chan *idma64c)
+{
+	unsigned short count = 100;
+	u32 cfglo;
+	unsigned long flags;
+
+	spin_lock_irqsave(&idma64c->lock, flags);
+	cfglo = channel_readl(idma64c, CFG_LO);
+	channel_writel(idma64c, CFG_LO, cfglo | IDMA64C_CFGL_CH_SUSP);
+	do {
+		udelay(1);
+		cfglo = channel_readl(idma64c, CFG_LO);
+	} while (!(cfglo & IDMA64C_CFGL_FIFO_EMPTY) && --count);
+	spin_unlock_irqrestore(&idma64c->lock, flags);
+}
+
+static void idma64_chan_activate(struct idma64_chan *idma64c)
+{
+	u32 cfglo;
+	unsigned long flags;
+
+	spin_lock_irqsave(&idma64c->lock, flags);
+	cfglo = channel_readl(idma64c, CFG_LO);
+	channel_writel(idma64c, CFG_LO, cfglo & ~IDMA64C_CFGL_CH_SUSP);
+	spin_unlock_irqrestore(&idma64c->lock, flags);
+}
+
+static int idma64_pause(struct dma_chan *chan)
+{
+	struct idma64_chan *idma64c = to_idma64_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&idma64c->vchan.lock, flags);
+	if (idma64c->desc && idma64c->desc->status == DMA_IN_PROGRESS) {
+		idma64_chan_deactivate(idma64c);
+		idma64c->desc->status = DMA_PAUSED;
+	}
+	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
+
+	return 0;
+}
+
+static int idma64_resume(struct dma_chan *chan)
+{
+	struct idma64_chan *idma64c = to_idma64_chan(chan);
+	unsigned long flags;
+
+	spin_lock_irqsave(&idma64c->vchan.lock, flags);
+	if (idma64c->desc && idma64c->desc->status == DMA_PAUSED) {
+		idma64c->desc->status = DMA_IN_PROGRESS;
+		idma64_chan_activate(idma64c);
+	}
+	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
+
+	return 0;
+}
+
+static int idma64_terminate_all(struct dma_chan *chan)
+{
+	struct idma64_chan *idma64c = to_idma64_chan(chan);
+	unsigned long flags;
+	LIST_HEAD(head);
+
+	spin_lock_irqsave(&idma64c->vchan.lock, flags);
+	idma64_stop_channel(idma64c);
+	if (idma64c->desc) {
+		idma64_vdesc_free(&idma64c->desc->vdesc);
+		idma64c->desc = NULL;
+	}
+	vchan_get_all_descriptors(&idma64c->vchan, &head);
+	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
+
+	vchan_dma_desc_free_list(&idma64c->vchan, &head);
+	return 0;
+}
+
+static int idma64_alloc_chan_resources(struct dma_chan *chan)
+{
+	struct idma64_chan *idma64c = to_idma64_chan(chan);
+
+	/* Create a pool of consistent memory blocks for hardware descriptors */
+	idma64c->pool = dma_pool_create(dev_name(chan2dev(chan)),
+					chan->device->dev,
+					sizeof(struct idma64_lli), 8, 0);
+	if (!idma64c->pool) {
+		dev_err(chan2dev(chan), "No memory for descriptors\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void idma64_free_chan_resources(struct dma_chan *chan)
+{
+	struct idma64_chan *idma64c = to_idma64_chan(chan);
+
+	dma_pool_destroy(idma64c->pool);
+	vchan_free_chan_resources(to_virt_chan(chan));
+}
+
+static int idma64_probe(struct idma64_chip *chip)
+{
+	struct idma64 *idma64;
+	unsigned short nr_chan = IDMA64_NR_CHAN;
+	unsigned short i;
+	int ret;
+
+	idma64 = devm_kzalloc(chip->dev, sizeof(*idma64), GFP_KERNEL);
+	if (!idma64)
+		return -ENOMEM;
+
+	idma64->regs = chip->regs;
+	chip->idma64 = idma64;
+
+	idma64->chan = devm_kcalloc(chip->dev, nr_chan, sizeof(*idma64->chan),
+				    GFP_KERNEL);
+	if (!idma64->chan)
+		return -ENOMEM;
+
+	idma64->all_chan_mask = (1 << nr_chan) - 1;
+
+	/* Turn off iDMA controller */
+	idma64_off(idma64);
+
+	ret = devm_request_irq(chip->dev, chip->irq, idma64_irq, IRQF_SHARED,
+			       dev_name(chip->dev), idma64);
+	if (ret)
+		return ret;
+
+	INIT_LIST_HEAD(&idma64->dma.channels);
+	for (i = 0; i < nr_chan; i++) {
+		struct idma64_chan *idma64c = &idma64->chan[i];
+
+		idma64c->vchan.desc_free = idma64_vdesc_free;
+		vchan_init(&idma64c->vchan, &idma64->dma);
+
+		idma64c->regs = idma64->regs + i * IDMA64_CH_LENGTH;
+		idma64c->mask = BIT(i);
+
+		spin_lock_init(&idma64c->lock);
+	}
+
+	dma_cap_set(DMA_SLAVE, idma64->dma.cap_mask);
+	dma_cap_set(DMA_PRIVATE, idma64->dma.cap_mask);
+
+	idma64->dma.device_alloc_chan_resources = idma64_alloc_chan_resources;
+	idma64->dma.device_free_chan_resources = idma64_free_chan_resources;
+
+	idma64->dma.device_prep_slave_sg = idma64_prep_slave_sg;
+
+	idma64->dma.device_issue_pending = idma64_issue_pending;
+	idma64->dma.device_tx_status = idma64_tx_status;
+
+	idma64->dma.device_config = idma64_slave_config;
+	idma64->dma.device_pause = idma64_pause;
+	idma64->dma.device_resume = idma64_resume;
+	idma64->dma.device_terminate_all = idma64_terminate_all;
+
+	idma64->dma.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	idma64->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
+
+	idma64->dma.dev = chip->dev;
+
+	ret = dma_async_device_register(&idma64->dma);
+	if (ret)
+		return ret;
+
+	dev_info(chip->dev, "Found Intel integrated DMA 64-bit\n");
+	return 0;
+}
+
+static int idma64_remove(struct idma64_chip *chip)
+{
+	struct idma64 *idma64 = chip->idma64;
+	unsigned short i;
+
+	dma_async_device_unregister(&idma64->dma);
+
+	/*
+	 * Explicitly call devm_request_irq() to avoid the side effects with
+	 * the scheduled tasklets.
+	 */
+	devm_free_irq(chip->dev, chip->irq, idma64);
+
+	for (i = 0; i < idma64->dma.chancnt; i++) {
+		struct idma64_chan *idma64c = &idma64->chan[i];
+
+		tasklet_kill(&idma64c->vchan.task);
+	}
+
+	return 0;
+}
+
+/* ---------------------------------------------------------------------- */
+
+static int idma64_platform_probe(struct platform_device *pdev)
+{
+	struct idma64_chip *chip;
+	struct device *dev = &pdev->dev;
+	struct resource *mem;
+	int ret;
+
+	chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
+	if (!chip)
+		return -ENOMEM;
+
+	chip->irq = platform_get_irq(pdev, 0);
+	if (chip->irq < 0)
+		return chip->irq;
+
+	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	chip->regs = devm_ioremap_resource(dev, mem);
+	if (IS_ERR(chip->regs))
+		return PTR_ERR(chip->regs);
+
+	ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (ret)
+		return ret;
+
+	chip->dev = dev;
+
+	ret = idma64_probe(chip);
+	if (ret)
+		return ret;
+
+	platform_set_drvdata(pdev, chip);
+	return 0;
+}
+
+static int idma64_platform_remove(struct platform_device *pdev)
+{
+	struct idma64_chip *chip = platform_get_drvdata(pdev);
+
+	return idma64_remove(chip);
+}
+
+#ifdef CONFIG_PM_SLEEP
+
+static int idma64_suspend_late(struct device *dev)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct idma64_chip *chip = platform_get_drvdata(pdev);
+
+	idma64_off(chip->idma64);
+	return 0;
+}
+
+static int idma64_resume_early(struct device *dev)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+	struct idma64_chip *chip = platform_get_drvdata(pdev);
+
+	idma64_on(chip->idma64);
+	return 0;
+}
+
+#endif /* CONFIG_PM_SLEEP */
+
+static const struct dev_pm_ops idma64_dev_pm_ops = {
+	SET_LATE_SYSTEM_SLEEP_PM_OPS(idma64_suspend_late, idma64_resume_early)
+};
+
+static struct platform_driver idma64_platform_driver = {
+	.probe		= idma64_platform_probe,
+	.remove		= idma64_platform_remove,
+	.driver = {
+		.name	= DRV_NAME,
+		.pm	= &idma64_dev_pm_ops,
+	},
+};
+
+module_platform_driver(idma64_platform_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("iDMA64 core driver");
+MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
+MODULE_ALIAS("platform:" DRV_NAME);
diff --git a/drivers/dma/idma64.h b/drivers/dma/idma64.h
new file mode 100644
index 0000000..63be77f
--- /dev/null
+++ b/drivers/dma/idma64.h
@@ -0,0 +1,233 @@
+/*
+ * Driver for the Intel integrated DMA 64-bit
+ *
+ * Copyright (C) 2015 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __DMA_IDMA64_H__
+#define __DMA_IDMA64_H__
+
+#include <linux/device.h>
+#include <linux/io.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+
+#include "virt-dma.h"
+
+/* Channel registers */
+
+#define IDMA64_CH_SAR		0x00	/* Source Address Register */
+#define IDMA64_CH_DAR		0x08	/* Destination Address Register */
+#define IDMA64_CH_LLP		0x10	/* Linked List Pointer */
+#define IDMA64_CH_CTL_LO	0x18	/* Control Register Low */
+#define IDMA64_CH_CTL_HI	0x1c	/* Control Register High */
+#define IDMA64_CH_SSTAT		0x20
+#define IDMA64_CH_DSTAT		0x28
+#define IDMA64_CH_SSTATAR	0x30
+#define IDMA64_CH_DSTATAR	0x38
+#define IDMA64_CH_CFG_LO	0x40	/* Configuration Register Low */
+#define IDMA64_CH_CFG_HI	0x44	/* Configuration Register High */
+#define IDMA64_CH_SGR		0x48
+#define IDMA64_CH_DSR		0x50
+
+#define IDMA64_CH_LENGTH	0x58
+
+/* Bitfields in CTL_LO */
+#define IDMA64C_CTLL_INT_EN		(1 << 0)	/* irqs enabled? */
+#define IDMA64C_CTLL_DST_WIDTH(x)	((x) << 1)	/* bytes per element */
+#define IDMA64C_CTLL_SRC_WIDTH(x)	((x) << 4)
+#define IDMA64C_CTLL_DST_INC		(0 << 8)	/* DAR update/not */
+#define IDMA64C_CTLL_DST_FIX		(1 << 8)
+#define IDMA64C_CTLL_SRC_INC		(0 << 10)	/* SAR update/not */
+#define IDMA64C_CTLL_SRC_FIX		(1 << 10)
+#define IDMA64C_CTLL_DST_MSIZE(x)	((x) << 11)	/* burst, #elements */
+#define IDMA64C_CTLL_SRC_MSIZE(x)	((x) << 14)
+#define IDMA64C_CTLL_FC_M2P		(1 << 20)	/* mem-to-periph */
+#define IDMA64C_CTLL_FC_P2M		(2 << 20)	/* periph-to-mem */
+#define IDMA64C_CTLL_LLP_D_EN		(1 << 27)	/* dest block chain */
+#define IDMA64C_CTLL_LLP_S_EN		(1 << 28)	/* src block chain */
+
+/* Bitfields in CTL_HI */
+#define IDMA64C_CTLH_BLOCK_TS(x)	((x) & ((1 << 17) - 1))
+#define IDMA64C_CTLH_DONE		(1 << 17)
+
+/* Bitfields in CFG_LO */
+#define IDMA64C_CFGL_DST_BURST_ALIGN	(1 << 0)	/* dst burst align */
+#define IDMA64C_CFGL_SRC_BURST_ALIGN	(1 << 1)	/* src burst align */
+#define IDMA64C_CFGL_CH_SUSP		(1 << 8)
+#define IDMA64C_CFGL_FIFO_EMPTY		(1 << 9)
+#define IDMA64C_CFGL_CH_DRAIN		(1 << 10)	/* drain FIFO */
+#define IDMA64C_CFGL_DST_OPT_BL		(1 << 20)	/* optimize dst burst length */
+#define IDMA64C_CFGL_SRC_OPT_BL		(1 << 21)	/* optimize src burst length */
+
+/* Bitfields in CFG_HI */
+#define IDMA64C_CFGH_SRC_PER(x)		((x) << 0)	/* src peripheral */
+#define IDMA64C_CFGH_DST_PER(x)		((x) << 4)	/* dst peripheral */
+#define IDMA64C_CFGH_RD_ISSUE_THD(x)	((x) << 8)
+#define IDMA64C_CFGH_RW_ISSUE_THD(x)	((x) << 18)
+
+/* Interrupt registers */
+
+#define IDMA64_INT_XFER		0x00
+#define IDMA64_INT_BLOCK	0x08
+#define IDMA64_INT_SRC_TRAN	0x10
+#define IDMA64_INT_DST_TRAN	0x18
+#define IDMA64_INT_ERROR	0x20
+
+#define IDMA64_RAW(x)		(0x2c0 + IDMA64_INT_##x)	/* r */
+#define IDMA64_STATUS(x)	(0x2e8 + IDMA64_INT_##x)	/* r (raw & mask) */
+#define IDMA64_MASK(x)		(0x310 + IDMA64_INT_##x)	/* rw (set = irq enabled) */
+#define IDMA64_CLEAR(x)		(0x338 + IDMA64_INT_##x)	/* w (ack, affects "raw") */
+
+/* Common registers */
+
+#define IDMA64_STATUS_INT	0x360	/* r */
+#define IDMA64_CFG		0x398
+#define IDMA64_CH_EN		0x3a0
+
+/* Bitfields in CFG */
+#define IDMA64_CFG_DMA_EN		(1 << 0)
+
+/* Hardware descriptor for Linked LIst transfers */
+struct idma64_lli {
+	u64		sar;
+	u64		dar;
+	u64		llp;
+	u32		ctllo;
+	u32		ctlhi;
+	u32		sstat;
+	u32		dstat;
+};
+
+struct idma64_hw_desc {
+	struct idma64_lli *lli;
+	dma_addr_t llp;
+	dma_addr_t phys;
+	unsigned int len;
+};
+
+struct idma64_desc {
+	struct virt_dma_desc vdesc;
+	enum dma_transfer_direction direction;
+	struct idma64_hw_desc *hw;
+	unsigned int ndesc;
+	enum dma_status status;
+};
+
+static inline struct idma64_desc *to_idma64_desc(struct virt_dma_desc *vdesc)
+{
+	return container_of(vdesc, struct idma64_desc, vdesc);
+}
+
+struct idma64_chan {
+	struct virt_dma_chan vchan;
+
+	void __iomem *regs;
+	spinlock_t lock;
+
+	/* hardware configuration */
+	enum dma_transfer_direction direction;
+	unsigned int mask;
+	struct dma_slave_config config;
+
+	void *pool;
+	struct idma64_desc *desc;
+};
+
+static inline struct idma64_chan *to_idma64_chan(struct dma_chan *chan)
+{
+	return container_of(chan, struct idma64_chan, vchan.chan);
+}
+
+#define channel_set_bit(idma64, reg, mask)	\
+	dma_writel(idma64, reg, ((mask) << 8) | (mask))
+#define channel_clear_bit(idma64, reg, mask)	\
+	dma_writel(idma64, reg, ((mask) << 8) | 0)
+
+static inline u32 idma64c_readl(struct idma64_chan *idma64c, int offset)
+{
+	return readl(idma64c->regs + offset);
+}
+
+static inline void idma64c_writel(struct idma64_chan *idma64c, int offset,
+				  u32 value)
+{
+	writel(value, idma64c->regs + offset);
+}
+
+#define channel_readl(idma64c, reg)		\
+	idma64c_readl(idma64c, IDMA64_CH_##reg)
+#define channel_writel(idma64c, reg, value)	\
+	idma64c_writel(idma64c, IDMA64_CH_##reg, (value))
+
+static inline u64 idma64c_readq(struct idma64_chan *idma64c, int offset)
+{
+	u64 l, h;
+
+	l = idma64c_readl(idma64c, offset);
+	h = idma64c_readl(idma64c, offset + 4);
+
+	return l | (h << 32);
+}
+
+static inline void idma64c_writeq(struct idma64_chan *idma64c, int offset,
+				  u64 value)
+{
+	idma64c_writel(idma64c, offset, value);
+	idma64c_writel(idma64c, offset + 4, value >> 32);
+}
+
+#define channel_readq(idma64c, reg)		\
+	idma64c_readq(idma64c, IDMA64_CH_##reg)
+#define channel_writeq(idma64c, reg, value)	\
+	idma64c_writeq(idma64c, IDMA64_CH_##reg, (value))
+
+struct idma64 {
+	struct dma_device dma;
+
+	void __iomem *regs;
+
+	/* channels */
+	unsigned short all_chan_mask;
+	struct idma64_chan *chan;
+};
+
+static inline struct idma64 *to_idma64(struct dma_device *ddev)
+{
+	return container_of(ddev, struct idma64, dma);
+}
+
+static inline u32 idma64_readl(struct idma64 *idma64, int offset)
+{
+	return readl(idma64->regs + offset);
+}
+
+static inline void idma64_writel(struct idma64 *idma64, int offset, u32 value)
+{
+	writel(value, idma64->regs + offset);
+}
+
+#define dma_readl(idma64, reg)			\
+	idma64_readl(idma64, IDMA64_##reg)
+#define dma_writel(idma64, reg, value)		\
+	idma64_writel(idma64, IDMA64_##reg, (value))
+
+/**
+ * struct idma64_chip - representation of DesignWare DMA controller hardware
+ * @dev:		struct device of the DMA controller
+ * @irq:		irq line
+ * @regs:		memory mapped I/O space
+ * @idma64:		struct idma64 that is filed by idma64_probe()
+ */
+struct idma64_chip {
+	struct device	*dev;
+	int		irq;
+	void __iomem	*regs;
+	struct idma64	*idma64;
+};
+
+#endif /* __DMA_IDMA64_H__ */
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH v2 8/8] mfd: Add support for Intel Sunrisepoint LPSS devices
  2015-05-25 16:09 [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Andy Shevchenko
                   ` (6 preceding siblings ...)
  2015-05-25 16:09 ` [PATCH v2 7/8] dmaengine: add a driver for Intel integrated DMA 64-bit Andy Shevchenko
@ 2015-05-25 16:09 ` Andy Shevchenko
  2015-05-27 10:22   ` Lee Jones
  2015-05-26  3:51 ` [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Vinod Koul
  8 siblings, 1 reply; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-25 16:09 UTC (permalink / raw)
  To: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula
  Cc: Andy Shevchenko

The new coming Intel platforms such as Skylake will contain Sunrisepoint PCH.
The main difference to the previous platforms is that the LPSS devices are
compound devices where usually main (SPI, HSUART, or I2C) and DMA IPs are
present.

This patch brings the driver for such devices found on Sunrisepoint PCH.

Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
 drivers/mfd/Kconfig           |  24 ++
 drivers/mfd/Makefile          |   3 +
 drivers/mfd/intel-lpss-acpi.c |  84 +++++++
 drivers/mfd/intel-lpss-pci.c  | 113 +++++++++
 drivers/mfd/intel-lpss.c      | 534 ++++++++++++++++++++++++++++++++++++++++++
 drivers/mfd/intel-lpss.h      |  62 +++++
 6 files changed, 820 insertions(+)
 create mode 100644 drivers/mfd/intel-lpss-acpi.c
 create mode 100644 drivers/mfd/intel-lpss-pci.c
 create mode 100644 drivers/mfd/intel-lpss.c
 create mode 100644 drivers/mfd/intel-lpss.h

diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
index d5ad04d..8cdaa12 100644
--- a/drivers/mfd/Kconfig
+++ b/drivers/mfd/Kconfig
@@ -325,6 +325,30 @@ config INTEL_SOC_PMIC
 	  thermal, charger and related power management functions
 	  on these systems.
 
+config MFD_INTEL_LPSS
+	tristate
+	depends on X86
+	select COMMON_CLK
+	select MFD_CORE
+
+config MFD_INTEL_LPSS_ACPI
+	tristate "Intel Low Power Subsystem support in ACPI mode"
+	select MFD_INTEL_LPSS
+	depends on ACPI
+	help
+	  This driver supports Intel Low Power Subsystem (LPSS) devices such as
+	  I2C, SPI and HS-UART starting from Intel Sunrisepoint (Intel Skylake
+	  PCH) in ACPI mode.
+
+config MFD_INTEL_LPSS_PCI
+	tristate "Intel Low Power Subsystem support in PCI mode"
+	select MFD_INTEL_LPSS
+	depends on PCI
+	help
+	  This driver supports Intel Low Power Subsystem (LPSS) devices such as
+	  I2C, SPI and HS-UART starting from Intel Sunrisepoint (Intel Skylake
+	  PCH) in PCI mode.
+
 config MFD_INTEL_MSIC
 	bool "Intel MSIC"
 	depends on INTEL_SCU_IPC
diff --git a/drivers/mfd/Makefile b/drivers/mfd/Makefile
index 0e5cfeb..cdf29b9 100644
--- a/drivers/mfd/Makefile
+++ b/drivers/mfd/Makefile
@@ -161,6 +161,9 @@ obj-$(CONFIG_TPS65911_COMPARATOR)	+= tps65911-comparator.o
 obj-$(CONFIG_MFD_TPS65090)	+= tps65090.o
 obj-$(CONFIG_MFD_AAT2870_CORE)	+= aat2870-core.o
 obj-$(CONFIG_MFD_ATMEL_HLCDC)	+= atmel-hlcdc.o
+obj-$(CONFIG_MFD_INTEL_LPSS)	+= intel-lpss.o
+obj-$(CONFIG_MFD_INTEL_LPSS_PCI)	+= intel-lpss-pci.o
+obj-$(CONFIG_MFD_INTEL_LPSS_ACPI)	+= intel-lpss-acpi.o
 obj-$(CONFIG_MFD_INTEL_MSIC)	+= intel_msic.o
 obj-$(CONFIG_MFD_PALMAS)	+= palmas.o
 obj-$(CONFIG_MFD_VIPERBOARD)    += viperboard.o
diff --git a/drivers/mfd/intel-lpss-acpi.c b/drivers/mfd/intel-lpss-acpi.c
new file mode 100644
index 0000000..0d92d73
--- /dev/null
+++ b/drivers/mfd/intel-lpss-acpi.c
@@ -0,0 +1,84 @@
+/*
+ * Intel LPSS ACPI support.
+ *
+ * Copyright (C) 2015, Intel Corporation
+ *
+ * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *          Mika Westerberg <mika.westerberg@linux.intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/acpi.h>
+#include <linux/ioport.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pm.h>
+#include <linux/pm_runtime.h>
+#include <linux/platform_device.h>
+
+#include "intel-lpss.h"
+
+static const struct intel_lpss_platform_info spt_info = {
+	.clk_rate = 120000000,
+};
+
+static const struct acpi_device_id intel_lpss_acpi_ids[] = {
+	/* SPT */
+	{ "INT3446", (kernel_ulong_t)&spt_info },
+	{ "INT3447", (kernel_ulong_t)&spt_info },
+	{ }
+};
+MODULE_DEVICE_TABLE(acpi, intel_lpss_acpi_ids);
+
+static int intel_lpss_acpi_probe(struct platform_device *pdev)
+{
+	struct intel_lpss_platform_info *info;
+	const struct acpi_device_id *id;
+
+	id = acpi_match_device(intel_lpss_acpi_ids, &pdev->dev);
+	if (!id)
+		return -ENODEV;
+
+	info = devm_kmemdup(&pdev->dev, (void *)id->driver_data, sizeof(*info),
+			    GFP_KERNEL);
+	if (!info)
+		return -ENOMEM;
+
+	info->mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	info->irq = platform_get_irq(pdev, 0);
+
+	pm_runtime_set_active(&pdev->dev);
+	pm_runtime_enable(&pdev->dev);
+
+	return intel_lpss_probe(&pdev->dev, info);
+}
+
+static int intel_lpss_acpi_remove(struct platform_device *pdev)
+{
+	intel_lpss_remove(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+
+	return 0;
+}
+
+static INTEL_LPSS_PM_OPS(intel_lpss_acpi_pm_ops);
+
+static struct platform_driver intel_lpss_acpi_driver = {
+	.probe = intel_lpss_acpi_probe,
+	.remove = intel_lpss_acpi_remove,
+	.driver = {
+		.name = "intel-lpss",
+		.acpi_match_table = intel_lpss_acpi_ids,
+		.pm = &intel_lpss_acpi_pm_ops,
+	},
+};
+
+module_platform_driver(intel_lpss_acpi_driver);
+
+MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
+MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
+MODULE_DESCRIPTION("Intel LPSS ACPI driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/mfd/intel-lpss-pci.c b/drivers/mfd/intel-lpss-pci.c
new file mode 100644
index 0000000..9236dff
--- /dev/null
+++ b/drivers/mfd/intel-lpss-pci.c
@@ -0,0 +1,113 @@
+/*
+ * Intel LPSS PCI support.
+ *
+ * Copyright (C) 2015, Intel Corporation
+ *
+ * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *          Mika Westerberg <mika.westerberg@linux.intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/ioport.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/pm.h>
+#include <linux/pm_runtime.h>
+
+#include "intel-lpss.h"
+
+static int intel_lpss_pci_probe(struct pci_dev *pdev,
+				const struct pci_device_id *id)
+{
+	struct intel_lpss_platform_info *info;
+	int ret;
+
+	ret = pcim_enable_device(pdev);
+	if (ret)
+		return ret;
+
+	info = devm_kmemdup(&pdev->dev, (void *)id->driver_data, sizeof(*info),
+			    GFP_KERNEL);
+	if (!info)
+		return -ENOMEM;
+
+	info->mem = &pdev->resource[0];
+	info->irq = pdev->irq;
+
+	/* Probably it is enough to set this for iDMA capable devices only */
+	pci_set_master(pdev);
+
+	ret = intel_lpss_probe(&pdev->dev, info);
+	if (ret)
+		return ret;
+
+	pm_runtime_put(&pdev->dev);
+	pm_runtime_allow(&pdev->dev);
+
+	return 0;
+}
+
+static void intel_lpss_pci_remove(struct pci_dev *pdev)
+{
+	pm_runtime_forbid(&pdev->dev);
+	pm_runtime_get_sync(&pdev->dev);
+
+	intel_lpss_remove(&pdev->dev);
+}
+
+static INTEL_LPSS_PM_OPS(intel_lpss_pci_pm_ops);
+
+static const struct intel_lpss_platform_info spt_info = {
+	.clk_rate = 120000000,
+};
+
+static const struct intel_lpss_platform_info spt_uart_info = {
+	.clk_rate = 120000000,
+	.clk_con_id = "baudclk",
+};
+
+static const struct pci_device_id intel_lpss_pci_ids[] = {
+	/* SPT-LP */
+	{ PCI_VDEVICE(INTEL, 0x9d27), (kernel_ulong_t)&spt_uart_info },
+	{ PCI_VDEVICE(INTEL, 0x9d28), (kernel_ulong_t)&spt_uart_info },
+	{ PCI_VDEVICE(INTEL, 0x9d29), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0x9d2a), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0x9d60), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0x9d61), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0x9d62), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0x9d63), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0x9d64), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0x9d65), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0x9d66), (kernel_ulong_t)&spt_uart_info },
+	/* SPT-H */
+	{ PCI_VDEVICE(INTEL, 0xa127), (kernel_ulong_t)&spt_uart_info },
+	{ PCI_VDEVICE(INTEL, 0xa128), (kernel_ulong_t)&spt_uart_info },
+	{ PCI_VDEVICE(INTEL, 0xa129), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0xa12a), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0xa160), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0xa161), (kernel_ulong_t)&spt_info },
+	{ PCI_VDEVICE(INTEL, 0xa166), (kernel_ulong_t)&spt_uart_info },
+	{ }
+};
+MODULE_DEVICE_TABLE(pci, intel_lpss_pci_ids);
+
+static struct pci_driver intel_lpss_pci_driver = {
+	.name = "intel-lpss",
+	.id_table = intel_lpss_pci_ids,
+	.probe = intel_lpss_pci_probe,
+	.remove = intel_lpss_pci_remove,
+	.driver = {
+		.pm = &intel_lpss_pci_pm_ops,
+	},
+};
+
+module_pci_driver(intel_lpss_pci_driver);
+
+MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
+MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
+MODULE_DESCRIPTION("Intel LPSS PCI driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/mfd/intel-lpss.c b/drivers/mfd/intel-lpss.c
new file mode 100644
index 0000000..28bebf6
--- /dev/null
+++ b/drivers/mfd/intel-lpss.c
@@ -0,0 +1,534 @@
+/*
+ * Intel Sunrisepoint LPSS core support.
+ *
+ * Copyright (C) 2015, Intel Corporation
+ *
+ * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *          Mika Westerberg <mika.westerberg@linux.intel.com>
+ *          Heikki Krogerus <heikki.krogerus@linux.intel.com>
+ *          Jarkko Nikula <jarkko.nikula@linux.intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/clk.h>
+#include <linux/clkdev.h>
+#include <linux/clk-provider.h>
+#include <linux/debugfs.h>
+#include <linux/idr.h>
+#include <linux/ioport.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mfd/core.h>
+#include <linux/pm_qos.h>
+#include <linux/pm_runtime.h>
+#include <linux/seq_file.h>
+
+#include "intel-lpss.h"
+
+#define LPSS_DEV_OFFSET		0x000
+#define LPSS_DEV_SIZE		0x200
+#define LPSS_PRIV_OFFSET	0x200
+#define LPSS_PRIV_SIZE		0x100
+#define LPSS_IDMA_OFFSET	0x800
+#define LPSS_IDMA_SIZE		0x800
+
+/* Offsets from lpss->priv */
+#define LPSS_PRIV_RESETS		0x04
+#define LPSS_PRIV_RESETS_FUNC		BIT(2)
+#define LPSS_PRIV_RESETS_IDMA		0x3
+
+#define LPSS_PRIV_ACTIVELTR		0x10
+#define LPSS_PRIV_IDLELTR		0x14
+
+#define LPSS_PRIV_LTR_REQ		BIT(15)
+#define LPSS_PRIV_LTR_SCALE_MASK	0xc00
+#define LPSS_PRIV_LTR_SCALE_1US		0x800
+#define LPSS_PRIV_LTR_SCALE_32US	0xc00
+#define LPSS_PRIV_LTR_VALUE_MASK	0x3ff
+
+#define LPSS_PRIV_SSP_REG		0x20
+#define LPSS_PRIV_SSP_REG_DIS_DMA_FIN	BIT(0)
+
+#define LPSS_PRIV_REMAP_ADDR_LO		0x40
+#define LPSS_PRIV_REMAP_ADDR_HI		0x44
+
+#define LPSS_PRIV_CAPS			0xfc
+#define LPSS_PRIV_CAPS_NO_IDMA		BIT(8)
+#define LPSS_PRIV_CAPS_TYPE_SHIFT	4
+#define LPSS_PRIV_CAPS_TYPE_MASK	(0xf << LPSS_PRIV_CAPS_TYPE_SHIFT)
+
+/* This matches the type field in CAPS register */
+enum intel_lpss_dev_type {
+	LPSS_DEV_I2C = 0,
+	LPSS_DEV_UART,
+	LPSS_DEV_SPI,
+};
+
+struct intel_lpss {
+	const struct intel_lpss_platform_info *info;
+	enum intel_lpss_dev_type type;
+	struct clk *clk;
+	struct clk_lookup *clock;
+	const struct mfd_cell *devs;
+	struct device *dev;
+	void __iomem *priv;
+	int devid;
+	u32 caps;
+	u32 active_ltr;
+	u32 idle_ltr;
+	struct dentry *debugfs;
+};
+
+static const struct resource intel_lpss_dev_resources[] = {
+	DEFINE_RES_MEM_NAMED(LPSS_DEV_OFFSET, LPSS_DEV_SIZE, "lpss_dev"),
+	DEFINE_RES_MEM_NAMED(LPSS_PRIV_OFFSET, LPSS_PRIV_SIZE, "lpss_priv"),
+	DEFINE_RES_IRQ(0),
+};
+
+static const struct resource intel_lpss_idma_resources[] = {
+	DEFINE_RES_MEM(LPSS_IDMA_OFFSET, LPSS_IDMA_SIZE),
+	DEFINE_RES_IRQ(0),
+};
+
+#define LPSS_IDMA_DRIVER_NAME		"idma64"
+
+/*
+ * Cells needs to be ordered so that the iDMA is created first. This is
+ * because we need to be sure the DMA is available when the host controller
+ * driver is probed.
+ */
+static const struct mfd_cell intel_lpss_i2c_devs[] = {
+	{
+		.name = LPSS_IDMA_DRIVER_NAME,
+		.num_resources = ARRAY_SIZE(intel_lpss_idma_resources),
+		.resources = intel_lpss_idma_resources,
+	},
+	{
+		.name = "i2c_designware",
+		.num_resources = ARRAY_SIZE(intel_lpss_dev_resources),
+		.resources = intel_lpss_dev_resources,
+	},
+};
+
+static const struct mfd_cell intel_lpss_uart_devs[] = {
+	{
+		.name = LPSS_IDMA_DRIVER_NAME,
+		.num_resources = ARRAY_SIZE(intel_lpss_idma_resources),
+		.resources = intel_lpss_idma_resources,
+	},
+	{
+		.name = "dw-apb-uart",
+		.num_resources = ARRAY_SIZE(intel_lpss_dev_resources),
+		.resources = intel_lpss_dev_resources,
+	},
+};
+
+static const struct mfd_cell intel_lpss_spi_devs[] = {
+	{
+		.name = LPSS_IDMA_DRIVER_NAME,
+		.num_resources = ARRAY_SIZE(intel_lpss_idma_resources),
+		.resources = intel_lpss_idma_resources,
+	},
+	{
+		.name = "pxa2xx-spi",
+		.num_resources = ARRAY_SIZE(intel_lpss_dev_resources),
+		.resources = intel_lpss_dev_resources,
+	},
+};
+
+static DEFINE_IDA(intel_lpss_devid_ida);
+static struct dentry *intel_lpss_debugfs;
+
+static int intel_lpss_request_dma_module(const char *name)
+{
+	static bool intel_lpss_dma_requested;
+
+	if (intel_lpss_dma_requested)
+		return 0;
+
+	intel_lpss_dma_requested = true;
+	return request_module("%s", name);
+}
+
+static void intel_lpss_cache_ltr(struct intel_lpss *lpss)
+{
+	lpss->active_ltr = readl(lpss->priv + LPSS_PRIV_ACTIVELTR);
+	lpss->idle_ltr = readl(lpss->priv + LPSS_PRIV_IDLELTR);
+}
+
+static int intel_lpss_debugfs_add(struct intel_lpss *lpss)
+{
+	struct dentry *dir;
+
+	dir = debugfs_create_dir(dev_name(lpss->dev), intel_lpss_debugfs);
+	if (IS_ERR(dir))
+		return PTR_ERR(dir);
+
+	/* Cache the values into lpss structure */
+	intel_lpss_cache_ltr(lpss);
+
+	debugfs_create_x32("capabilities", S_IRUGO, dir, &lpss->caps);
+	debugfs_create_x32("active_ltr", S_IRUGO, dir, &lpss->active_ltr);
+	debugfs_create_x32("idle_ltr", S_IRUGO, dir, &lpss->idle_ltr);
+
+	lpss->debugfs = dir;
+	return 0;
+}
+
+static void intel_lpss_debugfs_remove(struct intel_lpss *lpss)
+{
+	debugfs_remove_recursive(lpss->debugfs);
+}
+
+static void intel_lpss_ltr_set(struct device *dev, s32 val)
+{
+	struct intel_lpss *lpss = dev_get_drvdata(dev);
+	u32 ltr;
+
+	/*
+	 * Program latency tolerance (LTR) accordingly what has been asked
+	 * by the PM QoS layer or disable it in case we were passed
+	 * negative value or PM_QOS_LATENCY_ANY.
+	 */
+	ltr = readl(lpss->priv + LPSS_PRIV_ACTIVELTR);
+
+	if (val == PM_QOS_LATENCY_ANY || val < 0) {
+		ltr &= ~LPSS_PRIV_LTR_REQ;
+	} else {
+		ltr |= LPSS_PRIV_LTR_REQ;
+		ltr &= ~LPSS_PRIV_LTR_SCALE_MASK;
+		ltr &= ~LPSS_PRIV_LTR_VALUE_MASK;
+
+		if (val > LPSS_PRIV_LTR_VALUE_MASK)
+			ltr |= LPSS_PRIV_LTR_SCALE_32US | val >> 5;
+		else
+			ltr |= LPSS_PRIV_LTR_SCALE_1US | val;
+	}
+
+	if (ltr == lpss->active_ltr)
+		return;
+
+	writel(ltr, lpss->priv + LPSS_PRIV_ACTIVELTR);
+	writel(ltr, lpss->priv + LPSS_PRIV_IDLELTR);
+
+	/* Cache the values into lpss structure */
+	intel_lpss_cache_ltr(lpss);
+}
+
+static void intel_lpss_ltr_expose(struct intel_lpss *lpss)
+{
+	lpss->dev->power.set_latency_tolerance = intel_lpss_ltr_set;
+	dev_pm_qos_expose_latency_tolerance(lpss->dev);
+}
+
+static void intel_lpss_ltr_hide(struct intel_lpss *lpss)
+{
+	dev_pm_qos_hide_latency_tolerance(lpss->dev);
+	lpss->dev->power.set_latency_tolerance = NULL;
+}
+
+static int intel_lpss_assign_devs(struct intel_lpss *lpss)
+{
+	unsigned int type;
+
+	type = lpss->caps & LPSS_PRIV_CAPS_TYPE_MASK;
+	type >>= LPSS_PRIV_CAPS_TYPE_SHIFT;
+
+	switch (type) {
+	case LPSS_DEV_I2C:
+		lpss->devs = intel_lpss_i2c_devs;
+		break;
+	case LPSS_DEV_UART:
+		lpss->devs = intel_lpss_uart_devs;
+		break;
+	case LPSS_DEV_SPI:
+		lpss->devs = intel_lpss_spi_devs;
+		break;
+	default:
+		return -ENODEV;
+	}
+
+	lpss->type = type;
+
+	return 0;
+}
+
+static bool intel_lpss_has_idma(const struct intel_lpss *lpss)
+{
+	return (lpss->caps & LPSS_PRIV_CAPS_NO_IDMA) == 0;
+}
+
+static void intel_lpss_set_remap_addr(const struct intel_lpss *lpss)
+{
+	resource_size_t addr = lpss->info->mem->start;
+
+	writel(addr, lpss->priv + LPSS_PRIV_REMAP_ADDR_LO);
+#if BITS_PER_LONG > 32
+	writel(addr >> 32, lpss->priv + LPSS_PRIV_REMAP_ADDR_HI);
+#else
+	writel(0, lpss->priv + LPSS_PRIV_REMAP_ADDR_HI);
+#endif
+}
+
+static void intel_lpss_deassert_reset(const struct intel_lpss *lpss)
+{
+	u32 value = LPSS_PRIV_RESETS_FUNC | LPSS_PRIV_RESETS_IDMA;
+
+	/* Bring out the device from reset */
+	writel(value, lpss->priv + LPSS_PRIV_RESETS);
+}
+
+static void intel_lpss_init_dev(const struct intel_lpss *lpss)
+{
+	u32 value = LPSS_PRIV_SSP_REG_DIS_DMA_FIN;
+
+	intel_lpss_deassert_reset(lpss);
+
+	if (!intel_lpss_has_idma(lpss))
+		return;
+
+	intel_lpss_set_remap_addr(lpss);
+
+	/* Make sure that SPI multiblock DMA transfers are re-enabled */
+	if (lpss->type == LPSS_DEV_SPI)
+		writel(value, lpss->priv + LPSS_PRIV_SSP_REG);
+}
+
+static void intel_lpss_unregister_clock_tree(struct clk *clk)
+{
+	struct clk *parent;
+
+	while (clk) {
+		parent = clk_get_parent(clk);
+		clk_unregister(clk);
+		clk = parent;
+	}
+}
+
+static int intel_lpss_register_clock_divider(struct intel_lpss *lpss,
+					     const char *devname,
+					     struct clk **clk)
+{
+	char name[32];
+	struct clk *tmp = *clk;
+
+	snprintf(name, sizeof(name), "%s-enable", devname);
+	tmp = clk_register_gate(NULL, name, __clk_get_name(tmp), 0,
+				lpss->priv, 0, 0, NULL);
+	if (IS_ERR(tmp))
+		return PTR_ERR(tmp);
+
+	snprintf(name, sizeof(name), "%s-div", devname);
+	tmp = clk_register_fractional_divider(NULL, name, __clk_get_name(tmp),
+					      0, lpss->priv, 1, 15, 16, 15, 0,
+					      NULL);
+	if (IS_ERR(tmp))
+		return PTR_ERR(tmp);
+	*clk = tmp;
+
+	snprintf(name, sizeof(name), "%s-update", devname);
+	tmp = clk_register_gate(NULL, name, __clk_get_name(tmp),
+				CLK_SET_RATE_PARENT, lpss->priv, 31, 0, NULL);
+	if (IS_ERR(tmp))
+		return PTR_ERR(tmp);
+	*clk = tmp;
+
+	return 0;
+}
+
+static int intel_lpss_register_clock(struct intel_lpss *lpss)
+{
+	const struct mfd_cell *devs = lpss->devs;
+	struct clk *clk;
+	char devname[24];
+	int ret = -ENOMEM;
+
+	if (!lpss->info->clk_rate)
+		return 0;
+
+	/* Root clock */
+	clk = clk_register_fixed_rate(NULL, dev_name(lpss->dev), NULL,
+				      CLK_IS_ROOT, lpss->info->clk_rate);
+	if (IS_ERR(clk))
+		return PTR_ERR(clk);
+
+	snprintf(devname, sizeof(devname), "%s.%d", devs[1].name, lpss->devid);
+
+	/*
+	 * Support for clock divider only if it has some preset value.
+	 * Otherwise we assume that the divider is not used.
+	 */
+	if (lpss->type != LPSS_DEV_I2C) {
+		ret = intel_lpss_register_clock_divider(lpss, devname, &clk);
+		if (ret)
+			goto err_clk_register;
+	}
+
+	/* Clock for the host controller */
+	lpss->clock = clkdev_create(clk, lpss->info->clk_con_id, "%s", devname);
+	if (!lpss->clock)
+		goto err_clk_register;
+
+	lpss->clk = clk;
+
+	return 0;
+
+err_clk_register:
+	intel_lpss_unregister_clock_tree(clk);
+
+	return ret;
+}
+
+static void intel_lpss_unregister_clock(struct intel_lpss *lpss)
+{
+	if (IS_ERR_OR_NULL(lpss->clk))
+		return;
+
+	clkdev_drop(lpss->clock);
+	intel_lpss_unregister_clock_tree(lpss->clk);
+}
+
+int intel_lpss_probe(struct device *dev,
+		     const struct intel_lpss_platform_info *info)
+{
+	struct intel_lpss *lpss;
+	int ret;
+
+	if (!info || !info->mem || info->irq <= 0)
+		return -EINVAL;
+
+	lpss = devm_kzalloc(dev, sizeof(*lpss), GFP_KERNEL);
+	if (!lpss)
+		return -ENOMEM;
+
+	lpss->priv = devm_ioremap(dev, info->mem->start + LPSS_PRIV_OFFSET,
+				  LPSS_PRIV_SIZE);
+	if (!lpss->priv)
+		return -ENOMEM;
+
+	lpss->info = info;
+	lpss->dev = dev;
+	lpss->caps = readl(lpss->priv + LPSS_PRIV_CAPS);
+
+	dev_set_drvdata(dev, lpss);
+
+	ret = intel_lpss_assign_devs(lpss);
+	if (ret)
+		return ret;
+
+	intel_lpss_init_dev(lpss);
+
+	lpss->devid = ida_simple_get(&intel_lpss_devid_ida, 0, 0, GFP_KERNEL);
+	if (lpss->devid < 0)
+		return lpss->devid;
+
+	ret = intel_lpss_register_clock(lpss);
+	if (ret < 0)
+		goto err_clk_register;
+
+	intel_lpss_ltr_expose(lpss);
+
+	ret = intel_lpss_debugfs_add(lpss);
+	if (ret)
+		dev_warn(lpss->dev, "Failed to create debugfs entries\n");
+
+	if (intel_lpss_has_idma(lpss)) {
+		/*
+		 * Ensure the DMA driver is loaded before the host
+		 * controller device appears, so that the host controller
+		 * driver can request its DMA channels as early as
+		 * possible.
+		 *
+		 * If the DMA module is not there that's OK as well.
+		 */
+		intel_lpss_request_dma_module(LPSS_IDMA_DRIVER_NAME);
+
+		ret = mfd_add_devices(dev, lpss->devid, lpss->devs, 2,
+				      info->mem, info->irq, NULL);
+	} else {
+		ret = mfd_add_devices(dev, lpss->devid, lpss->devs + 1, 1,
+				      info->mem, info->irq, NULL);
+	}
+	if (ret < 0)
+		goto err_remove_ltr;
+
+	return 0;
+
+err_remove_ltr:
+	intel_lpss_debugfs_remove(lpss);
+	intel_lpss_ltr_hide(lpss);
+
+err_clk_register:
+	ida_simple_remove(&intel_lpss_devid_ida, lpss->devid);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(intel_lpss_probe);
+
+void intel_lpss_remove(struct device *dev)
+{
+	struct intel_lpss *lpss = dev_get_drvdata(dev);
+
+	mfd_remove_devices(dev);
+	intel_lpss_debugfs_remove(lpss);
+	intel_lpss_ltr_hide(lpss);
+	intel_lpss_unregister_clock(lpss);
+	ida_simple_remove(&intel_lpss_devid_ida, lpss->devid);
+}
+EXPORT_SYMBOL_GPL(intel_lpss_remove);
+
+static int resume_lpss_device(struct device *dev, void *data)
+{
+	pm_runtime_resume(dev);
+	return 0;
+}
+
+int intel_lpss_prepare(struct device *dev)
+{
+	/*
+	 * Resume both child devices before entering system sleep. This
+	 * ensures that they are in proper state before they get suspended.
+	 */
+	device_for_each_child_reverse(dev, NULL, resume_lpss_device);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(intel_lpss_prepare);
+
+int intel_lpss_suspend(struct device *dev)
+{
+	return 0;
+}
+EXPORT_SYMBOL_GPL(intel_lpss_suspend);
+
+int intel_lpss_resume(struct device *dev)
+{
+	struct intel_lpss *lpss = dev_get_drvdata(dev);
+
+	intel_lpss_init_dev(lpss);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(intel_lpss_resume);
+
+static int __init intel_lpss_init(void)
+{
+	intel_lpss_debugfs = debugfs_create_dir("intel_lpss", NULL);
+	return 0;
+}
+module_init(intel_lpss_init);
+
+static void __exit intel_lpss_exit(void)
+{
+	debugfs_remove(intel_lpss_debugfs);
+}
+module_exit(intel_lpss_exit);
+
+MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
+MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
+MODULE_AUTHOR("Heikki Krogerus <heikki.krogerus@linux.intel.com>");
+MODULE_AUTHOR("Jarkko Nikula <jarkko.nikula@linux.intel.com>");
+MODULE_DESCRIPTION("Intel LPSS core driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/mfd/intel-lpss.h b/drivers/mfd/intel-lpss.h
new file mode 100644
index 0000000..f28cb28a
--- /dev/null
+++ b/drivers/mfd/intel-lpss.h
@@ -0,0 +1,62 @@
+/*
+ * Intel LPSS core support.
+ *
+ * Copyright (C) 2015, Intel Corporation
+ *
+ * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *          Mika Westerberg <mika.westerberg@linux.intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __MFD_INTEL_LPSS_H
+#define __MFD_INTEL_LPSS_H
+
+struct device;
+struct resource;
+
+struct intel_lpss_platform_info {
+	struct resource *mem;
+	int irq;
+	unsigned long clk_rate;
+	const char *clk_con_id;
+};
+
+int intel_lpss_probe(struct device *dev,
+		     const struct intel_lpss_platform_info *info);
+void intel_lpss_remove(struct device *dev);
+
+#ifdef CONFIG_PM
+int intel_lpss_prepare(struct device *dev);
+int intel_lpss_suspend(struct device *dev);
+int intel_lpss_resume(struct device *dev);
+
+#ifdef CONFIG_PM_SLEEP
+#define INTEL_LPSS_SLEEP_PM_OPS			\
+	.prepare = intel_lpss_prepare,		\
+	.suspend = intel_lpss_suspend,		\
+	.resume = intel_lpss_resume,		\
+	.freeze = intel_lpss_suspend,		\
+	.thaw = intel_lpss_resume,		\
+	.poweroff = intel_lpss_suspend,		\
+	.restore = intel_lpss_resume,
+#endif
+
+#define INTEL_LPSS_RUNTIME_PM_OPS		\
+	.runtime_suspend = intel_lpss_suspend,	\
+	.runtime_resume = intel_lpss_resume,
+
+#else /* !CONFIG_PM */
+#define INTEL_LPSS_SLEEP_PM_OPS
+#define INTEL_LPSS_RUNTIME_PM_OPS
+#endif /* CONFIG_PM */
+
+#define INTEL_LPSS_PM_OPS(name)			\
+const struct dev_pm_ops name = {		\
+	INTEL_LPSS_SLEEP_PM_OPS			\
+	INTEL_LPSS_RUNTIME_PM_OPS		\
+}
+
+#endif /* __MFD_INTEL_LPSS_H */
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/8] core: platform: wakeup the parent before trying any driver operations
  2015-05-25 16:09 ` [PATCH v2 3/8] core: platform: wakeup the parent before trying any driver operations Andy Shevchenko
@ 2015-05-25 17:36   ` Alan Stern
  2015-05-26 13:28     ` Heikki Krogerus
  2015-05-26  4:04   ` Vinod Koul
  1 sibling, 1 reply; 24+ messages in thread
From: Alan Stern @ 2015-05-25 17:36 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula

On Mon, 25 May 2015, Andy Shevchenko wrote:

> From: Heikki Krogerus <heikki.krogerus@linux.intel.com>
> 
> If the parent is still suspended when a driver probe,
> remove or shutdown is attempted, the result may be a
> failure.
> 
> For example, if the parent is a PCI MFD device that has been
> suspended when we try to probe our device, any register
> reads will return 0xffffffff.
> 
> To fix the problem, making sure the parent is always awake
> before using driver probe, remove or shutdown.
> 
> Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> ---
>  drivers/base/platform.c | 21 ++++++++++++++++++++-

Why make the changes here rather than in dd.c?  Is there something 
special about platform devices?

Alan Stern


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT
  2015-05-25 16:09 [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Andy Shevchenko
                   ` (7 preceding siblings ...)
  2015-05-25 16:09 ` [PATCH v2 8/8] mfd: Add support for Intel Sunrisepoint LPSS devices Andy Shevchenko
@ 2015-05-26  3:51 ` Vinod Koul
  2015-05-26  6:51   ` Andy Shevchenko
  8 siblings, 1 reply; 24+ messages in thread
From: Vinod Koul @ 2015-05-26  3:51 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Lee Jones, Andrew Morton, Mika Westerberg, linux-kernel,
	dmaengine, Heikki Krogerus, Jarkko Nikula

On Mon, May 25, 2015 at 07:09:24PM +0300, Andy Shevchenko wrote:
> The new coming Intel platforms such as Skylake will contain Sunrisepoint PCH.
> 
> The driver is based on MFD framework since the main device, i.e. serial bus
> controller, contains register space for itself, DMA part, and an additional
> address space (convergence layer). 
> 
> The public specification of the register map is avaiable in [3].
or [1]...?

-- 
~Vinod

> 
> This is second generation of the patch series to bring support LPSS devices
> found on Intel Sunrisepoint (Intel Skylake PCH). First one can be found here
> [2].
> 
> The series has few logical parts:
> - patches 1-3 prepares PM core, ACPI, and driver core (PM) to handle our case
> - patches 4-6 introduce unregistering platform devices in MFD in reversed
>   order
> - patch 7 implements iDMA 64-bit driver
> - patch 8 introduces an MFD driver for LPSS devices
> 
> The patch 8 depends on clkdev_create() helper that has been introduced by
> Russel King in [3].
> 
> The driver has been tested with SPI and UART on Intel Skylake PCH.
> 
> [1] https://download.01.org/future-platform-configuration-hub/skylake/register-definitions/332219_001_Final.pdf
> [2] https://lkml.org/lkml/2015/3/31/255
> [3] https://patchwork.linuxtv.org/patch/28464/
> 
> Changelog v2:
> - new DMA driver to fully support iDMA 64-bit IP
> - patch 3 is added to wake up parent devices when ->probe(), ->remove(), or
>   ->shutdown()
> - MFD core is unregistering devices in reversed order
> - address few Lee's comments on v1
> - address Russel's comment, therefore use clkdev_create() helper
> - intel-lpss{,-acpi,-pci} are modified regarding to above changes
> 
> Andy Shevchenko (5):
>   klist: implement klist_prev()
>   driver core: implement device_for_each_child_reverse()
>   mfd: make mfd_remove_devices() iterate in reverse order
>   dmaengine: add a driver for Intel integrated DMA 64-bit
>   mfd: Add support for Intel Sunrisepoint LPSS devices
> 
> Heikki Krogerus (1):
>   core: platform: wakeup the parent before trying any driver operations
> 
> Mika Westerberg (2):
>   PM / QoS: Make it possible to expose device latency tolerance to
>     userspace
>   ACPI / PM: Attach ACPI power domain only once
> 
>  drivers/acpi/device_pm.c      |   8 +
>  drivers/acpi/internal.h       |   2 +
>  drivers/acpi/scan.c           |  46 ++-
>  drivers/base/core.c           |  43 +++
>  drivers/base/platform.c       |  21 +-
>  drivers/base/power/power.h    |   2 +
>  drivers/base/power/qos.c      |  37 +++
>  drivers/base/power/sysfs.c    |  11 +
>  drivers/dma/Kconfig           |   5 +
>  drivers/dma/Makefile          |   1 +
>  drivers/dma/idma64.c          | 749 ++++++++++++++++++++++++++++++++++++++++++
>  drivers/dma/idma64.h          | 233 +++++++++++++
>  drivers/mfd/Kconfig           |  24 ++
>  drivers/mfd/Makefile          |   3 +
>  drivers/mfd/intel-lpss-acpi.c |  84 +++++
>  drivers/mfd/intel-lpss-pci.c  | 113 +++++++
>  drivers/mfd/intel-lpss.c      | 534 ++++++++++++++++++++++++++++++
>  drivers/mfd/intel-lpss.h      |  62 ++++
>  drivers/mfd/mfd-core.c        |   2 +-
>  include/linux/device.h        |   2 +
>  include/linux/klist.h         |   1 +
>  include/linux/pm_qos.h        |   5 +
>  lib/klist.c                   |  41 +++
>  23 files changed, 2011 insertions(+), 18 deletions(-)
>  create mode 100644 drivers/dma/idma64.c
>  create mode 100644 drivers/dma/idma64.h
>  create mode 100644 drivers/mfd/intel-lpss-acpi.c
>  create mode 100644 drivers/mfd/intel-lpss-pci.c
>  create mode 100644 drivers/mfd/intel-lpss.c
>  create mode 100644 drivers/mfd/intel-lpss.h
> 
> -- 
> 2.1.4
> 

-- 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/8] core: platform: wakeup the parent before trying any driver operations
  2015-05-25 16:09 ` [PATCH v2 3/8] core: platform: wakeup the parent before trying any driver operations Andy Shevchenko
  2015-05-25 17:36   ` Alan Stern
@ 2015-05-26  4:04   ` Vinod Koul
  1 sibling, 0 replies; 24+ messages in thread
From: Vinod Koul @ 2015-05-26  4:04 UTC (permalink / raw)
  To: Andy Shevchenko, Alan Stern
  Cc: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Lee Jones, Andrew Morton, Mika Westerberg, linux-kernel,
	dmaengine, Heikki Krogerus, Jarkko Nikula

On Mon, May 25, 2015 at 01:36:43PM -0400, Alan Stern wrote:
> On Mon, 25 May 2015, Andy Shevchenko wrote:
> 
> > From: Heikki Krogerus <heikki.krogerus@linux.intel.com>
> > 
> > If the parent is still suspended when a driver probe,
> > remove or shutdown is attempted, the result may be a
> > failure.
> > 
> > For example, if the parent is a PCI MFD device that has been
> > suspended when we try to probe our device, any register
> > reads will return 0xffffffff.
> > 
> > To fix the problem, making sure the parent is always awake
> > before using driver probe, remove or shutdown.
> > 
> > Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
> > Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> > ---
> >  drivers/base/platform.c | 21 ++++++++++++++++++++-
> 
> Why make the changes here rather than in dd.c?  Is there something 
> special about platform devices?
Right, also isn't there an assumption in runtime pm that
pm_runtime_get_sync() will wake up its parent. So why should platform core
do special handling here, if parent is set properly when creating this
device.

Or should the device probe "assume" that it is suspended and then do
pm_runtime_get_sync() which wakes its parent and ensures devices accesses are
okay

-- 
~Vinod

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 7/8] dmaengine: add a driver for Intel integrated DMA 64-bit
  2015-05-25 16:09 ` [PATCH v2 7/8] dmaengine: add a driver for Intel integrated DMA 64-bit Andy Shevchenko
@ 2015-05-26  4:06   ` Vinod Koul
  2015-05-26  6:49     ` Andy Shevchenko
  0 siblings, 1 reply; 24+ messages in thread
From: Vinod Koul @ 2015-05-26  4:06 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Lee Jones, Andrew Morton, Mika Westerberg, linux-kernel,
	dmaengine, Heikki Krogerus, Jarkko Nikula

On Mon, May 25, 2015 at 07:09:31PM +0300, Andy Shevchenko wrote:
> Intel integrated DMA (iDMA) 64-bit is a specific IP that is used as a part of
> LPSS devices such as HSUART or SPI. The iDMA IP is attached for private
> usage on each host controller independently.
> 
> While it has similarities with Synopsys DesignWare DMA, the following
> distinctions doesn't allow to use the existing driver:
> - 64-bit mode with corresponding changes in Hardware Linked List data structure
> - many slight differences in the channel registers
> 
> Moreover this driver is based on the DMA virtual channels framework that helps
> to make the driver cleaner and easy to understand.
> 
Looking at code and iDMA controllers (if this is the same as I have used), we
have register compatibility with DW controller, so why new driver and why not
use and enhance dw driver ?

-- 
~Vinod

> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> ---
>  drivers/dma/Kconfig  |   5 +
>  drivers/dma/Makefile |   1 +
>  drivers/dma/idma64.c | 749 +++++++++++++++++++++++++++++++++++++++++++++++++++
>  drivers/dma/idma64.h | 233 ++++++++++++++++
>  4 files changed, 988 insertions(+)
>  create mode 100644 drivers/dma/idma64.c
>  create mode 100644 drivers/dma/idma64.h
> 
> diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
> index bda2cb0..e4257e9 100644
> --- a/drivers/dma/Kconfig
> +++ b/drivers/dma/Kconfig
> @@ -85,6 +85,11 @@ config INTEL_IOP_ADMA
>  	help
>  	  Enable support for the Intel(R) IOP Series RAID engines.
>  
> +config IDMA64
> +	tristate "Intel integrated DMA 64-bit support"
> +	select DMA_ENGINE
> +	select DMA_VIRTUAL_CHANNELS
> +
>  source "drivers/dma/dw/Kconfig"
>  
>  config AT_HDMAC
> diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
> index 69f77d5..c1fe119 100644
> --- a/drivers/dma/Makefile
> +++ b/drivers/dma/Makefile
> @@ -14,6 +14,7 @@ obj-$(CONFIG_HSU_DMA) += hsu/
>  obj-$(CONFIG_MPC512X_DMA) += mpc512x_dma.o
>  obj-$(CONFIG_PPC_BESTCOMM) += bestcomm/
>  obj-$(CONFIG_MV_XOR) += mv_xor.o
> +obj-$(CONFIG_IDMA64) += idma64.o
>  obj-$(CONFIG_DW_DMAC_CORE) += dw/
>  obj-$(CONFIG_AT_HDMAC) += at_hdmac.o
>  obj-$(CONFIG_AT_XDMAC) += at_xdmac.o
> diff --git a/drivers/dma/idma64.c b/drivers/dma/idma64.c
> new file mode 100644
> index 0000000..3119bdf
> --- /dev/null
> +++ b/drivers/dma/idma64.c
> @@ -0,0 +1,749 @@
> +/*
> + * Core driver for the Intel integrated DMA 64-bit
> + *
> + * Copyright (C) 2015 Intel Corporation
> + * Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include <linux/delay.h>
> +#include <linux/dmaengine.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/dmapool.h>
> +#include <linux/init.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/platform_device.h>
> +
> +#include "idma64.h"
> +
> +/* Platform driver name */
> +#define DRV_NAME		"idma64"
> +
> +/* For now we support only two channels */
> +#define IDMA64_NR_CHAN		2
> +
> +/* ---------------------------------------------------------------------- */
> +
> +static struct device *chan2dev(struct dma_chan *chan)
> +{
> +	return &chan->dev->device;
> +}
> +
> +/* ---------------------------------------------------------------------- */
> +
> +static void idma64_off(struct idma64 *idma64)
> +{
> +	unsigned short count = 100;
> +
> +	dma_writel(idma64, CFG, 0);
> +
> +	channel_clear_bit(idma64, MASK(XFER), idma64->all_chan_mask);
> +	channel_clear_bit(idma64, MASK(BLOCK), idma64->all_chan_mask);
> +	channel_clear_bit(idma64, MASK(SRC_TRAN), idma64->all_chan_mask);
> +	channel_clear_bit(idma64, MASK(DST_TRAN), idma64->all_chan_mask);
> +	channel_clear_bit(idma64, MASK(ERROR), idma64->all_chan_mask);
> +
> +	do {
> +		cpu_relax();
> +	} while (dma_readl(idma64, CFG) & IDMA64_CFG_DMA_EN && --count);
> +}
> +
> +static void idma64_on(struct idma64 *idma64)
> +{
> +	dma_writel(idma64, CFG, IDMA64_CFG_DMA_EN);
> +}
> +
> +/* ---------------------------------------------------------------------- */
> +
> +static void idma64_chan_init(struct idma64 *idma64, struct idma64_chan *idma64c)
> +{
> +	u32 cfghi = IDMA64C_CFGH_SRC_PER(1) | IDMA64C_CFGH_DST_PER(0);
> +	u32 cfglo = 0;
> +
> +	/* Enforce FIFO drain when channel is suspended */
> +	cfglo |= IDMA64C_CFGL_CH_DRAIN;
> +
> +	/* Set default burst alignment */
> +	cfglo |= IDMA64C_CFGL_DST_BURST_ALIGN | IDMA64C_CFGL_SRC_BURST_ALIGN;
> +
> +	channel_writel(idma64c, CFG_LO, cfglo);
> +	channel_writel(idma64c, CFG_HI, cfghi);
> +
> +	/* Enable interrupts */
> +	channel_set_bit(idma64, MASK(XFER), idma64c->mask);
> +	channel_set_bit(idma64, MASK(ERROR), idma64c->mask);
> +
> +	/*
> +	 * Enforce the controller to be turned on.
> +	 *
> +	 * The iDMA is turned off in ->probe() and looses context during system
> +	 * suspend / resume cycle. That's why we have to enable it each time we
> +	 * use it.
> +	 */
> +	idma64_on(idma64);
> +}
> +
> +static void idma64_chan_stop(struct idma64 *idma64, struct idma64_chan *idma64c)
> +{
> +	channel_clear_bit(idma64, CH_EN, idma64c->mask);
> +}
> +
> +static void idma64_chan_start(struct idma64 *idma64, struct idma64_chan *idma64c)
> +{
> +	struct idma64_desc *desc = idma64c->desc;
> +	struct idma64_hw_desc *hw = &desc->hw[0];
> +
> +	channel_writeq(idma64c, SAR, 0);
> +	channel_writeq(idma64c, DAR, 0);
> +
> +	channel_writel(idma64c, CTL_HI, IDMA64C_CTLH_BLOCK_TS(~0UL));
> +	channel_writel(idma64c, CTL_LO, IDMA64C_CTLL_LLP_S_EN | IDMA64C_CTLL_LLP_D_EN);
> +
> +	channel_writeq(idma64c, LLP, hw->llp);
> +
> +	channel_set_bit(idma64, CH_EN, idma64c->mask);
> +}
> +
> +static void idma64_init_channel(struct idma64_chan *idma64c)
> +{
> +	struct idma64 *idma64 = to_idma64(idma64c->vchan.chan.device);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&idma64c->lock, flags);
> +	idma64_chan_init(idma64, idma64c);
> +	spin_unlock_irqrestore(&idma64c->lock, flags);
> +}
> +
> +static void idma64_stop_channel(struct idma64_chan *idma64c)
> +{
> +	struct idma64 *idma64 = to_idma64(idma64c->vchan.chan.device);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&idma64c->lock, flags);
> +	idma64_chan_stop(idma64, idma64c);
> +	spin_unlock_irqrestore(&idma64c->lock, flags);
> +}
> +
> +static void idma64_start_channel(struct idma64_chan *idma64c)
> +{
> +	struct idma64 *idma64 = to_idma64(idma64c->vchan.chan.device);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&idma64c->lock, flags);
> +	idma64_chan_start(idma64, idma64c);
> +	spin_unlock_irqrestore(&idma64c->lock, flags);
> +}
> +
> +static void idma64_start_transfer(struct idma64_chan *idma64c)
> +{
> +	struct virt_dma_desc *vdesc;
> +
> +	/* Get the next descriptor */
> +	vdesc = vchan_next_desc(&idma64c->vchan);
> +	if (!vdesc) {
> +		idma64c->desc = NULL;
> +		return;
> +	}
> +
> +	list_del(&vdesc->node);
> +	idma64c->desc = to_idma64_desc(vdesc);
> +
> +	/* Configure the channel */
> +	idma64_init_channel(idma64c);
> +
> +	/* Start the channel with a new descriptor */
> +	idma64_start_channel(idma64c);
> +}
> +
> +/* ---------------------------------------------------------------------- */
> +
> +static void idma64_chan_irq(struct idma64 *idma64, unsigned short c,
> +		u32 status_err, u32 status_xfer)
> +{
> +	struct idma64_chan *idma64c = &idma64->chan[c];
> +	struct idma64_desc *desc;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&idma64c->vchan.lock, flags);
> +	desc = idma64c->desc;
> +	if (desc) {
> +		if (status_err & (1 << c)) {
> +			dma_writel(idma64, CLEAR(ERROR), idma64c->mask);
> +			desc->status = DMA_ERROR;
> +		} else if (status_xfer & (1 << c)) {
> +			dma_writel(idma64, CLEAR(XFER), idma64c->mask);
> +			desc->status = DMA_COMPLETE;
> +			vchan_cookie_complete(&desc->vdesc);
> +			idma64_start_transfer(idma64c);
> +		}
> +
> +		/* idma64_start_transfer() updates idma64c->desc */
> +		if (idma64c->desc == NULL || desc->status == DMA_ERROR)
> +			idma64_stop_channel(idma64c);
> +	}
> +	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
> +}
> +
> +static irqreturn_t idma64_irq(int irq, void *dev)
> +{
> +	struct idma64 *idma64 = dev;
> +	u32 status = dma_readl(idma64, STATUS_INT);
> +	u32 status_xfer;
> +	u32 status_err;
> +	unsigned short i;
> +
> +	dev_vdbg(idma64->dma.dev, "%s: status=%#x\n", __func__, status);
> +
> +	/* Check if we have any interrupt from the DMA controller */
> +	if (!status)
> +		return IRQ_NONE;
> +
> +	/* Disable interrupts */
> +	channel_clear_bit(idma64, MASK(XFER), idma64->all_chan_mask);
> +	channel_clear_bit(idma64, MASK(ERROR), idma64->all_chan_mask);
> +
> +	status_xfer = dma_readl(idma64, RAW(XFER));
> +	status_err = dma_readl(idma64, RAW(ERROR));
> +
> +	for (i = 0; i < idma64->dma.chancnt; i++)
> +		idma64_chan_irq(idma64, i, status_err, status_xfer);
> +
> +	/* Re-enable interrupts */
> +	channel_set_bit(idma64, MASK(XFER), idma64->all_chan_mask);
> +	channel_set_bit(idma64, MASK(ERROR), idma64->all_chan_mask);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +/* ---------------------------------------------------------------------- */
> +
> +static struct idma64_desc *idma64_alloc_desc(unsigned int ndesc)
> +{
> +	struct idma64_desc *desc;
> +
> +	desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
> +	if (!desc)
> +		return NULL;
> +
> +	desc->hw = kcalloc(ndesc, sizeof(*desc->hw), GFP_NOWAIT);
> +	if (!desc->hw) {
> +		kfree(desc);
> +		return NULL;
> +	}
> +
> +	return desc;
> +}
> +
> +static void idma64_desc_free(struct idma64_chan *idma64c,
> +		struct idma64_desc *desc)
> +{
> +	struct idma64_hw_desc *hw;
> +
> +	if (desc->ndesc) {
> +		unsigned int i = desc->ndesc;
> +
> +		do {
> +			hw = &desc->hw[--i];
> +			dma_pool_free(idma64c->pool, hw->lli, hw->llp);
> +		} while (i);
> +	}
> +
> +	kfree(desc->hw);
> +	kfree(desc);
> +}
> +
> +static void idma64_vdesc_free(struct virt_dma_desc *vdesc)
> +{
> +	struct idma64_chan *idma64c = to_idma64_chan(vdesc->tx.chan);
> +
> +	idma64_desc_free(idma64c, to_idma64_desc(vdesc));
> +}
> +
> +static void idma64_hw_desc_fill(struct idma64_hw_desc *hw,
> +		struct dma_slave_config *config,
> +		enum dma_transfer_direction direction, u64 llp)
> +{
> +	struct idma64_lli *lli = hw->lli;
> +	u64 sar, dar;
> +	u32 ctlhi = IDMA64C_CTLH_BLOCK_TS(hw->len);
> +	u32 ctllo = IDMA64C_CTLL_LLP_S_EN | IDMA64C_CTLL_LLP_D_EN;
> +	u32 src_width, dst_width;
> +
> +	if (direction == DMA_MEM_TO_DEV) {
> +		sar = hw->phys;
> +		dar = config->dst_addr;
> +		ctllo |= IDMA64C_CTLL_DST_FIX | IDMA64C_CTLL_SRC_INC |
> +			 IDMA64C_CTLL_FC_M2P;
> +		src_width = min_t(u32, 2, __ffs(sar | hw->len));
> +		dst_width = __fls(config->dst_addr_width);
> +	} else {	/* DMA_DEV_TO_MEM */
> +		sar = config->src_addr;
> +		dar = hw->phys;
> +		ctllo |= IDMA64C_CTLL_DST_INC | IDMA64C_CTLL_SRC_FIX |
> +			 IDMA64C_CTLL_FC_P2M;
> +		src_width = __fls(config->src_addr_width);
> +		dst_width = min_t(u32, 2, __ffs(dar | hw->len));
> +	}
> +
> +	lli->sar = sar;
> +	lli->dar = dar;
> +
> +	lli->ctlhi = ctlhi;
> +	lli->ctllo = ctllo |
> +		     IDMA64C_CTLL_SRC_MSIZE(config->src_maxburst) |
> +		     IDMA64C_CTLL_DST_MSIZE(config->dst_maxburst) |
> +		     IDMA64C_CTLL_DST_WIDTH(dst_width) |
> +		     IDMA64C_CTLL_SRC_WIDTH(src_width);
> +
> +	lli->llp = llp;
> +}
> +
> +static void idma64_desc_fill(struct idma64_chan *idma64c,
> +		struct idma64_desc *desc)
> +{
> +	struct dma_slave_config *config = &idma64c->config;
> +	struct idma64_hw_desc *hw = &desc->hw[desc->ndesc - 1];
> +	struct idma64_lli *lli = hw->lli;
> +	u64 llp = 0;
> +	unsigned int i = desc->ndesc;
> +
> +	/* Fill the hardware descriptors and link them to a list */
> +	do {
> +		hw = &desc->hw[--i];
> +		idma64_hw_desc_fill(hw, config, desc->direction, llp);
> +		llp = hw->llp;
> +	} while (i);
> +
> +	/* Trigger interrupt after last block */
> +	lli->ctllo |= IDMA64C_CTLL_INT_EN;
> +}
> +
> +static struct dma_async_tx_descriptor *idma64_prep_slave_sg(
> +		struct dma_chan *chan, struct scatterlist *sgl,
> +		unsigned int sg_len, enum dma_transfer_direction direction,
> +		unsigned long flags, void *context)
> +{
> +	struct idma64_chan *idma64c = to_idma64_chan(chan);
> +	struct idma64_desc *desc;
> +	struct scatterlist *sg;
> +	unsigned int i;
> +
> +	desc = idma64_alloc_desc(sg_len);
> +	if (!desc)
> +		return NULL;
> +
> +	for_each_sg(sgl, sg, sg_len, i) {
> +		struct idma64_hw_desc *hw = &desc->hw[i];
> +
> +		/* Allocate DMA capable memory for hardware descriptor */
> +		hw->lli = dma_pool_alloc(idma64c->pool, GFP_NOWAIT, &hw->llp);
> +		if (!hw->lli) {
> +			desc->ndesc = i;
> +			idma64_desc_free(idma64c, desc);
> +			return NULL;
> +		}
> +
> +		hw->phys = sg_dma_address(sg);
> +		hw->len = sg_dma_len(sg);
> +	}
> +
> +	desc->ndesc = sg_len;
> +	desc->direction = direction;
> +	desc->status = DMA_IN_PROGRESS;
> +
> +	idma64_desc_fill(idma64c, desc);
> +	return vchan_tx_prep(&idma64c->vchan, &desc->vdesc, flags);
> +}
> +
> +static void idma64_issue_pending(struct dma_chan *chan)
> +{
> +	struct idma64_chan *idma64c = to_idma64_chan(chan);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&idma64c->vchan.lock, flags);
> +	if (vchan_issue_pending(&idma64c->vchan) && !idma64c->desc)
> +		idma64_start_transfer(idma64c);
> +	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
> +}
> +
> +static size_t idma64_desc_size(struct idma64_desc *desc, unsigned int active)
> +{
> +	size_t bytes = 0;
> +	unsigned int i;
> +
> +	for (i = active; i < desc->ndesc; i++)
> +		bytes += desc->hw[i].len;
> +
> +	return bytes;
> +}
> +
> +static size_t idma64_pending_desc_size(struct idma64_desc *desc)
> +{
> +	return idma64_desc_size(desc, 0);
> +}
> +
> +static size_t idma64_active_desc_size(struct idma64_chan *idma64c)
> +{
> +	struct idma64_desc *desc = idma64c->desc;
> +	struct idma64_hw_desc *hw;
> +	size_t bytes;
> +	u64 llp;
> +	u32 ctlhi;
> +	unsigned int i = 0;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&idma64c->lock, flags);
> +	llp = channel_readq(idma64c, LLP);
> +	spin_unlock_irqrestore(&idma64c->lock, flags);
> +
> +	do {
> +		hw = &desc->hw[i];
> +	} while (llp != hw->llp && ++i < desc->ndesc);
> +
> +	bytes = idma64_desc_size(desc, i);
> +	if (!i)
> +		return bytes;
> +
> +	hw = &desc->hw[--i];
> +
> +	spin_lock_irqsave(&idma64c->lock, flags);
> +	ctlhi = channel_readl(idma64c, CTL_HI);
> +	spin_unlock_irqrestore(&idma64c->lock, flags);
> +
> +	return bytes + hw->len - IDMA64C_CTLH_BLOCK_TS(ctlhi);
> +}
> +
> +static enum dma_status idma64_tx_status(struct dma_chan *chan,
> +		dma_cookie_t cookie, struct dma_tx_state *state)
> +{
> +	struct idma64_chan *idma64c = to_idma64_chan(chan);
> +	struct virt_dma_desc *vdesc;
> +	enum dma_status status;
> +	size_t bytes;
> +	unsigned long flags;
> +
> +	status = dma_cookie_status(chan, cookie, state);
> +	if (status == DMA_COMPLETE)
> +		return status;
> +
> +	spin_lock_irqsave(&idma64c->vchan.lock, flags);
> +	vdesc = vchan_find_desc(&idma64c->vchan, cookie);
> +	if (idma64c->desc && cookie == idma64c->desc->vdesc.tx.cookie) {
> +		bytes = idma64_active_desc_size(idma64c);
> +		dma_set_residue(state, bytes);
> +		status = idma64c->desc->status;
> +	} else if (vdesc) {
> +		bytes = idma64_pending_desc_size(to_idma64_desc(vdesc));
> +		dma_set_residue(state, bytes);
> +	}
> +	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
> +
> +	return status;
> +}
> +
> +static void convert_burst(u32 *maxburst)
> +{
> +	if (*maxburst)
> +		*maxburst = __fls(*maxburst);
> +	else
> +		*maxburst = 0;
> +}
> +
> +static int idma64_slave_config(struct dma_chan *chan,
> +		struct dma_slave_config *config)
> +{
> +	struct idma64_chan *idma64c = to_idma64_chan(chan);
> +
> +	/* Check if chan will be configured for slave transfers */
> +	if (!is_slave_direction(config->direction))
> +		return -EINVAL;
> +
> +	memcpy(&idma64c->config, config, sizeof(idma64c->config));
> +
> +	convert_burst(&idma64c->config.src_maxburst);
> +	convert_burst(&idma64c->config.dst_maxburst);
> +
> +	return 0;
> +}
> +
> +static void idma64_chan_deactivate(struct idma64_chan *idma64c)
> +{
> +	unsigned short count = 100;
> +	u32 cfglo;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&idma64c->lock, flags);
> +	cfglo = channel_readl(idma64c, CFG_LO);
> +	channel_writel(idma64c, CFG_LO, cfglo | IDMA64C_CFGL_CH_SUSP);
> +	do {
> +		udelay(1);
> +		cfglo = channel_readl(idma64c, CFG_LO);
> +	} while (!(cfglo & IDMA64C_CFGL_FIFO_EMPTY) && --count);
> +	spin_unlock_irqrestore(&idma64c->lock, flags);
> +}
> +
> +static void idma64_chan_activate(struct idma64_chan *idma64c)
> +{
> +	u32 cfglo;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&idma64c->lock, flags);
> +	cfglo = channel_readl(idma64c, CFG_LO);
> +	channel_writel(idma64c, CFG_LO, cfglo & ~IDMA64C_CFGL_CH_SUSP);
> +	spin_unlock_irqrestore(&idma64c->lock, flags);
> +}
> +
> +static int idma64_pause(struct dma_chan *chan)
> +{
> +	struct idma64_chan *idma64c = to_idma64_chan(chan);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&idma64c->vchan.lock, flags);
> +	if (idma64c->desc && idma64c->desc->status == DMA_IN_PROGRESS) {
> +		idma64_chan_deactivate(idma64c);
> +		idma64c->desc->status = DMA_PAUSED;
> +	}
> +	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
> +
> +	return 0;
> +}
> +
> +static int idma64_resume(struct dma_chan *chan)
> +{
> +	struct idma64_chan *idma64c = to_idma64_chan(chan);
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&idma64c->vchan.lock, flags);
> +	if (idma64c->desc && idma64c->desc->status == DMA_PAUSED) {
> +		idma64c->desc->status = DMA_IN_PROGRESS;
> +		idma64_chan_activate(idma64c);
> +	}
> +	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
> +
> +	return 0;
> +}
> +
> +static int idma64_terminate_all(struct dma_chan *chan)
> +{
> +	struct idma64_chan *idma64c = to_idma64_chan(chan);
> +	unsigned long flags;
> +	LIST_HEAD(head);
> +
> +	spin_lock_irqsave(&idma64c->vchan.lock, flags);
> +	idma64_stop_channel(idma64c);
> +	if (idma64c->desc) {
> +		idma64_vdesc_free(&idma64c->desc->vdesc);
> +		idma64c->desc = NULL;
> +	}
> +	vchan_get_all_descriptors(&idma64c->vchan, &head);
> +	spin_unlock_irqrestore(&idma64c->vchan.lock, flags);
> +
> +	vchan_dma_desc_free_list(&idma64c->vchan, &head);
> +	return 0;
> +}
> +
> +static int idma64_alloc_chan_resources(struct dma_chan *chan)
> +{
> +	struct idma64_chan *idma64c = to_idma64_chan(chan);
> +
> +	/* Create a pool of consistent memory blocks for hardware descriptors */
> +	idma64c->pool = dma_pool_create(dev_name(chan2dev(chan)),
> +					chan->device->dev,
> +					sizeof(struct idma64_lli), 8, 0);
> +	if (!idma64c->pool) {
> +		dev_err(chan2dev(chan), "No memory for descriptors\n");
> +		return -ENOMEM;
> +	}
> +
> +	return 0;
> +}
> +
> +static void idma64_free_chan_resources(struct dma_chan *chan)
> +{
> +	struct idma64_chan *idma64c = to_idma64_chan(chan);
> +
> +	dma_pool_destroy(idma64c->pool);
> +	vchan_free_chan_resources(to_virt_chan(chan));
> +}
> +
> +static int idma64_probe(struct idma64_chip *chip)
> +{
> +	struct idma64 *idma64;
> +	unsigned short nr_chan = IDMA64_NR_CHAN;
> +	unsigned short i;
> +	int ret;
> +
> +	idma64 = devm_kzalloc(chip->dev, sizeof(*idma64), GFP_KERNEL);
> +	if (!idma64)
> +		return -ENOMEM;
> +
> +	idma64->regs = chip->regs;
> +	chip->idma64 = idma64;
> +
> +	idma64->chan = devm_kcalloc(chip->dev, nr_chan, sizeof(*idma64->chan),
> +				    GFP_KERNEL);
> +	if (!idma64->chan)
> +		return -ENOMEM;
> +
> +	idma64->all_chan_mask = (1 << nr_chan) - 1;
> +
> +	/* Turn off iDMA controller */
> +	idma64_off(idma64);
> +
> +	ret = devm_request_irq(chip->dev, chip->irq, idma64_irq, IRQF_SHARED,
> +			       dev_name(chip->dev), idma64);
> +	if (ret)
> +		return ret;
> +
> +	INIT_LIST_HEAD(&idma64->dma.channels);
> +	for (i = 0; i < nr_chan; i++) {
> +		struct idma64_chan *idma64c = &idma64->chan[i];
> +
> +		idma64c->vchan.desc_free = idma64_vdesc_free;
> +		vchan_init(&idma64c->vchan, &idma64->dma);
> +
> +		idma64c->regs = idma64->regs + i * IDMA64_CH_LENGTH;
> +		idma64c->mask = BIT(i);
> +
> +		spin_lock_init(&idma64c->lock);
> +	}
> +
> +	dma_cap_set(DMA_SLAVE, idma64->dma.cap_mask);
> +	dma_cap_set(DMA_PRIVATE, idma64->dma.cap_mask);
> +
> +	idma64->dma.device_alloc_chan_resources = idma64_alloc_chan_resources;
> +	idma64->dma.device_free_chan_resources = idma64_free_chan_resources;
> +
> +	idma64->dma.device_prep_slave_sg = idma64_prep_slave_sg;
> +
> +	idma64->dma.device_issue_pending = idma64_issue_pending;
> +	idma64->dma.device_tx_status = idma64_tx_status;
> +
> +	idma64->dma.device_config = idma64_slave_config;
> +	idma64->dma.device_pause = idma64_pause;
> +	idma64->dma.device_resume = idma64_resume;
> +	idma64->dma.device_terminate_all = idma64_terminate_all;
> +
> +	idma64->dma.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
> +	idma64->dma.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
> +
> +	idma64->dma.dev = chip->dev;
> +
> +	ret = dma_async_device_register(&idma64->dma);
> +	if (ret)
> +		return ret;
> +
> +	dev_info(chip->dev, "Found Intel integrated DMA 64-bit\n");
> +	return 0;
> +}
> +
> +static int idma64_remove(struct idma64_chip *chip)
> +{
> +	struct idma64 *idma64 = chip->idma64;
> +	unsigned short i;
> +
> +	dma_async_device_unregister(&idma64->dma);
> +
> +	/*
> +	 * Explicitly call devm_request_irq() to avoid the side effects with
> +	 * the scheduled tasklets.
> +	 */
> +	devm_free_irq(chip->dev, chip->irq, idma64);
> +
> +	for (i = 0; i < idma64->dma.chancnt; i++) {
> +		struct idma64_chan *idma64c = &idma64->chan[i];
> +
> +		tasklet_kill(&idma64c->vchan.task);
> +	}
> +
> +	return 0;
> +}
> +
> +/* ---------------------------------------------------------------------- */
> +
> +static int idma64_platform_probe(struct platform_device *pdev)
> +{
> +	struct idma64_chip *chip;
> +	struct device *dev = &pdev->dev;
> +	struct resource *mem;
> +	int ret;
> +
> +	chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
> +	if (!chip)
> +		return -ENOMEM;
> +
> +	chip->irq = platform_get_irq(pdev, 0);
> +	if (chip->irq < 0)
> +		return chip->irq;
> +
> +	mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +	chip->regs = devm_ioremap_resource(dev, mem);
> +	if (IS_ERR(chip->regs))
> +		return PTR_ERR(chip->regs);
> +
> +	ret = dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
> +	if (ret)
> +		return ret;
> +
> +	chip->dev = dev;
> +
> +	ret = idma64_probe(chip);
> +	if (ret)
> +		return ret;
> +
> +	platform_set_drvdata(pdev, chip);
> +	return 0;
> +}
> +
> +static int idma64_platform_remove(struct platform_device *pdev)
> +{
> +	struct idma64_chip *chip = platform_get_drvdata(pdev);
> +
> +	return idma64_remove(chip);
> +}
> +
> +#ifdef CONFIG_PM_SLEEP
> +
> +static int idma64_suspend_late(struct device *dev)
> +{
> +	struct platform_device *pdev = to_platform_device(dev);
> +	struct idma64_chip *chip = platform_get_drvdata(pdev);
> +
> +	idma64_off(chip->idma64);
> +	return 0;
> +}
> +
> +static int idma64_resume_early(struct device *dev)
> +{
> +	struct platform_device *pdev = to_platform_device(dev);
> +	struct idma64_chip *chip = platform_get_drvdata(pdev);
> +
> +	idma64_on(chip->idma64);
> +	return 0;
> +}
> +
> +#endif /* CONFIG_PM_SLEEP */
> +
> +static const struct dev_pm_ops idma64_dev_pm_ops = {
> +	SET_LATE_SYSTEM_SLEEP_PM_OPS(idma64_suspend_late, idma64_resume_early)
> +};
> +
> +static struct platform_driver idma64_platform_driver = {
> +	.probe		= idma64_platform_probe,
> +	.remove		= idma64_platform_remove,
> +	.driver = {
> +		.name	= DRV_NAME,
> +		.pm	= &idma64_dev_pm_ops,
> +	},
> +};
> +
> +module_platform_driver(idma64_platform_driver);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("iDMA64 core driver");
> +MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
> +MODULE_ALIAS("platform:" DRV_NAME);
> diff --git a/drivers/dma/idma64.h b/drivers/dma/idma64.h
> new file mode 100644
> index 0000000..63be77f
> --- /dev/null
> +++ b/drivers/dma/idma64.h
> @@ -0,0 +1,233 @@
> +/*
> + * Driver for the Intel integrated DMA 64-bit
> + *
> + * Copyright (C) 2015 Intel Corporation
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#ifndef __DMA_IDMA64_H__
> +#define __DMA_IDMA64_H__
> +
> +#include <linux/device.h>
> +#include <linux/io.h>
> +#include <linux/spinlock.h>
> +#include <linux/types.h>
> +
> +#include "virt-dma.h"
> +
> +/* Channel registers */
> +
> +#define IDMA64_CH_SAR		0x00	/* Source Address Register */
> +#define IDMA64_CH_DAR		0x08	/* Destination Address Register */
> +#define IDMA64_CH_LLP		0x10	/* Linked List Pointer */
> +#define IDMA64_CH_CTL_LO	0x18	/* Control Register Low */
> +#define IDMA64_CH_CTL_HI	0x1c	/* Control Register High */
> +#define IDMA64_CH_SSTAT		0x20
> +#define IDMA64_CH_DSTAT		0x28
> +#define IDMA64_CH_SSTATAR	0x30
> +#define IDMA64_CH_DSTATAR	0x38
> +#define IDMA64_CH_CFG_LO	0x40	/* Configuration Register Low */
> +#define IDMA64_CH_CFG_HI	0x44	/* Configuration Register High */
> +#define IDMA64_CH_SGR		0x48
> +#define IDMA64_CH_DSR		0x50
> +
> +#define IDMA64_CH_LENGTH	0x58
> +
> +/* Bitfields in CTL_LO */
> +#define IDMA64C_CTLL_INT_EN		(1 << 0)	/* irqs enabled? */
> +#define IDMA64C_CTLL_DST_WIDTH(x)	((x) << 1)	/* bytes per element */
> +#define IDMA64C_CTLL_SRC_WIDTH(x)	((x) << 4)
> +#define IDMA64C_CTLL_DST_INC		(0 << 8)	/* DAR update/not */
> +#define IDMA64C_CTLL_DST_FIX		(1 << 8)
> +#define IDMA64C_CTLL_SRC_INC		(0 << 10)	/* SAR update/not */
> +#define IDMA64C_CTLL_SRC_FIX		(1 << 10)
> +#define IDMA64C_CTLL_DST_MSIZE(x)	((x) << 11)	/* burst, #elements */
> +#define IDMA64C_CTLL_SRC_MSIZE(x)	((x) << 14)
> +#define IDMA64C_CTLL_FC_M2P		(1 << 20)	/* mem-to-periph */
> +#define IDMA64C_CTLL_FC_P2M		(2 << 20)	/* periph-to-mem */
> +#define IDMA64C_CTLL_LLP_D_EN		(1 << 27)	/* dest block chain */
> +#define IDMA64C_CTLL_LLP_S_EN		(1 << 28)	/* src block chain */
> +
> +/* Bitfields in CTL_HI */
> +#define IDMA64C_CTLH_BLOCK_TS(x)	((x) & ((1 << 17) - 1))
> +#define IDMA64C_CTLH_DONE		(1 << 17)
> +
> +/* Bitfields in CFG_LO */
> +#define IDMA64C_CFGL_DST_BURST_ALIGN	(1 << 0)	/* dst burst align */
> +#define IDMA64C_CFGL_SRC_BURST_ALIGN	(1 << 1)	/* src burst align */
> +#define IDMA64C_CFGL_CH_SUSP		(1 << 8)
> +#define IDMA64C_CFGL_FIFO_EMPTY		(1 << 9)
> +#define IDMA64C_CFGL_CH_DRAIN		(1 << 10)	/* drain FIFO */
> +#define IDMA64C_CFGL_DST_OPT_BL		(1 << 20)	/* optimize dst burst length */
> +#define IDMA64C_CFGL_SRC_OPT_BL		(1 << 21)	/* optimize src burst length */
> +
> +/* Bitfields in CFG_HI */
> +#define IDMA64C_CFGH_SRC_PER(x)		((x) << 0)	/* src peripheral */
> +#define IDMA64C_CFGH_DST_PER(x)		((x) << 4)	/* dst peripheral */
> +#define IDMA64C_CFGH_RD_ISSUE_THD(x)	((x) << 8)
> +#define IDMA64C_CFGH_RW_ISSUE_THD(x)	((x) << 18)
> +
> +/* Interrupt registers */
> +
> +#define IDMA64_INT_XFER		0x00
> +#define IDMA64_INT_BLOCK	0x08
> +#define IDMA64_INT_SRC_TRAN	0x10
> +#define IDMA64_INT_DST_TRAN	0x18
> +#define IDMA64_INT_ERROR	0x20
> +
> +#define IDMA64_RAW(x)		(0x2c0 + IDMA64_INT_##x)	/* r */
> +#define IDMA64_STATUS(x)	(0x2e8 + IDMA64_INT_##x)	/* r (raw & mask) */
> +#define IDMA64_MASK(x)		(0x310 + IDMA64_INT_##x)	/* rw (set = irq enabled) */
> +#define IDMA64_CLEAR(x)		(0x338 + IDMA64_INT_##x)	/* w (ack, affects "raw") */
> +
> +/* Common registers */
> +
> +#define IDMA64_STATUS_INT	0x360	/* r */
> +#define IDMA64_CFG		0x398
> +#define IDMA64_CH_EN		0x3a0
> +
> +/* Bitfields in CFG */
> +#define IDMA64_CFG_DMA_EN		(1 << 0)
> +
> +/* Hardware descriptor for Linked LIst transfers */
> +struct idma64_lli {
> +	u64		sar;
> +	u64		dar;
> +	u64		llp;
> +	u32		ctllo;
> +	u32		ctlhi;
> +	u32		sstat;
> +	u32		dstat;
> +};
> +
> +struct idma64_hw_desc {
> +	struct idma64_lli *lli;
> +	dma_addr_t llp;
> +	dma_addr_t phys;
> +	unsigned int len;
> +};
> +
> +struct idma64_desc {
> +	struct virt_dma_desc vdesc;
> +	enum dma_transfer_direction direction;
> +	struct idma64_hw_desc *hw;
> +	unsigned int ndesc;
> +	enum dma_status status;
> +};
> +
> +static inline struct idma64_desc *to_idma64_desc(struct virt_dma_desc *vdesc)
> +{
> +	return container_of(vdesc, struct idma64_desc, vdesc);
> +}
> +
> +struct idma64_chan {
> +	struct virt_dma_chan vchan;
> +
> +	void __iomem *regs;
> +	spinlock_t lock;
> +
> +	/* hardware configuration */
> +	enum dma_transfer_direction direction;
> +	unsigned int mask;
> +	struct dma_slave_config config;
> +
> +	void *pool;
> +	struct idma64_desc *desc;
> +};
> +
> +static inline struct idma64_chan *to_idma64_chan(struct dma_chan *chan)
> +{
> +	return container_of(chan, struct idma64_chan, vchan.chan);
> +}
> +
> +#define channel_set_bit(idma64, reg, mask)	\
> +	dma_writel(idma64, reg, ((mask) << 8) | (mask))
> +#define channel_clear_bit(idma64, reg, mask)	\
> +	dma_writel(idma64, reg, ((mask) << 8) | 0)
> +
> +static inline u32 idma64c_readl(struct idma64_chan *idma64c, int offset)
> +{
> +	return readl(idma64c->regs + offset);
> +}
> +
> +static inline void idma64c_writel(struct idma64_chan *idma64c, int offset,
> +				  u32 value)
> +{
> +	writel(value, idma64c->regs + offset);
> +}
> +
> +#define channel_readl(idma64c, reg)		\
> +	idma64c_readl(idma64c, IDMA64_CH_##reg)
> +#define channel_writel(idma64c, reg, value)	\
> +	idma64c_writel(idma64c, IDMA64_CH_##reg, (value))
> +
> +static inline u64 idma64c_readq(struct idma64_chan *idma64c, int offset)
> +{
> +	u64 l, h;
> +
> +	l = idma64c_readl(idma64c, offset);
> +	h = idma64c_readl(idma64c, offset + 4);
> +
> +	return l | (h << 32);
> +}
> +
> +static inline void idma64c_writeq(struct idma64_chan *idma64c, int offset,
> +				  u64 value)
> +{
> +	idma64c_writel(idma64c, offset, value);
> +	idma64c_writel(idma64c, offset + 4, value >> 32);
> +}
> +
> +#define channel_readq(idma64c, reg)		\
> +	idma64c_readq(idma64c, IDMA64_CH_##reg)
> +#define channel_writeq(idma64c, reg, value)	\
> +	idma64c_writeq(idma64c, IDMA64_CH_##reg, (value))
> +
> +struct idma64 {
> +	struct dma_device dma;
> +
> +	void __iomem *regs;
> +
> +	/* channels */
> +	unsigned short all_chan_mask;
> +	struct idma64_chan *chan;
> +};
> +
> +static inline struct idma64 *to_idma64(struct dma_device *ddev)
> +{
> +	return container_of(ddev, struct idma64, dma);
> +}
> +
> +static inline u32 idma64_readl(struct idma64 *idma64, int offset)
> +{
> +	return readl(idma64->regs + offset);
> +}
> +
> +static inline void idma64_writel(struct idma64 *idma64, int offset, u32 value)
> +{
> +	writel(value, idma64->regs + offset);
> +}
> +
> +#define dma_readl(idma64, reg)			\
> +	idma64_readl(idma64, IDMA64_##reg)
> +#define dma_writel(idma64, reg, value)		\
> +	idma64_writel(idma64, IDMA64_##reg, (value))
> +
> +/**
> + * struct idma64_chip - representation of DesignWare DMA controller hardware
> + * @dev:		struct device of the DMA controller
> + * @irq:		irq line
> + * @regs:		memory mapped I/O space
> + * @idma64:		struct idma64 that is filed by idma64_probe()
> + */
> +struct idma64_chip {
> +	struct device	*dev;
> +	int		irq;
> +	void __iomem	*regs;
> +	struct idma64	*idma64;
> +};
> +
> +#endif /* __DMA_IDMA64_H__ */
> -- 
> 2.1.4
> 

-- 

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 7/8] dmaengine: add a driver for Intel integrated DMA 64-bit
  2015-05-26  4:06   ` Vinod Koul
@ 2015-05-26  6:49     ` Andy Shevchenko
  2015-06-02 12:49       ` Vinod Koul
  0 siblings, 1 reply; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-26  6:49 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Andy Shevchenko, Rafael J. Wysocki, linux-acpi, linux-pm,
	Greg Kroah-Hartman, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula

On Tue, May 26, 2015 at 7:06 AM, Vinod Koul <vinod.koul@intel.com> wrote:
> On Mon, May 25, 2015 at 07:09:31PM +0300, Andy Shevchenko wrote:
>> Intel integrated DMA (iDMA) 64-bit is a specific IP that is used as a part of
>> LPSS devices such as HSUART or SPI. The iDMA IP is attached for private
>> usage on each host controller independently.
>>
>> While it has similarities with Synopsys DesignWare DMA, the following
>> distinctions doesn't allow to use the existing driver:
>> - 64-bit mode with corresponding changes in Hardware Linked List data structure
>> - many slight differences in the channel registers
>>
>> Moreover this driver is based on the DMA virtual channels framework that helps
>> to make the driver cleaner and easy to understand.
>>
> Looking at code and iDMA controllers (if this is the same as I have used), we
> have register compatibility with DW controller, so why new driver and why not
> use and enhance dw driver ?

Take a look closer. There are many, like I mentioned, slight but not
least changes in the registers, besides *64-bit mode*:
- ctl_hi represents bytes, not items
- 2 bytes of burst is supported (dw has no gap there)
- shuffling bits between ctl_* and cfg_*
- new bits with different meaning in ctl_* and cfg_*.

Preliminary we did a patchset for dw_dmac, but above hw changes blows
up and messes the driver code. I really would prefer to have those two
separate.

However, the 32-bit iDMA which is used in Baytrail might be driven by dw_dmac.

>> @@ -0,0 +1,233 @@
>> +/*
>> + * Driver for the Intel integrated DMA 64-bit
>> + *
>> + * Copyright (C) 2015 Intel Corporation
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + */
>> +
>> +#ifndef __DMA_IDMA64_H__
>> +#define __DMA_IDMA64_H__
>> +
>> +#include <linux/device.h>
>> +#include <linux/io.h>
>> +#include <linux/spinlock.h>
>> +#include <linux/types.h>
>> +
>> +#include "virt-dma.h"
>> +
>> +/* Channel registers */
>> +
>> +#define IDMA64_CH_SAR                0x00    /* Source Address Register */
>> +#define IDMA64_CH_DAR                0x08    /* Destination Address Register */
>> +#define IDMA64_CH_LLP                0x10    /* Linked List Pointer */
>> +#define IDMA64_CH_CTL_LO     0x18    /* Control Register Low */
>> +#define IDMA64_CH_CTL_HI     0x1c    /* Control Register High */
>> +#define IDMA64_CH_SSTAT              0x20
>> +#define IDMA64_CH_DSTAT              0x28
>> +#define IDMA64_CH_SSTATAR    0x30
>> +#define IDMA64_CH_DSTATAR    0x38
>> +#define IDMA64_CH_CFG_LO     0x40    /* Configuration Register Low */
>> +#define IDMA64_CH_CFG_HI     0x44    /* Configuration Register High */
>> +#define IDMA64_CH_SGR                0x48
>> +#define IDMA64_CH_DSR                0x50
>> +
>> +#define IDMA64_CH_LENGTH     0x58
>> +
>> +/* Bitfields in CTL_LO */
>> +#define IDMA64C_CTLL_INT_EN          (1 << 0)        /* irqs enabled? */
>> +#define IDMA64C_CTLL_DST_WIDTH(x)    ((x) << 1)      /* bytes per element */
>> +#define IDMA64C_CTLL_SRC_WIDTH(x)    ((x) << 4)
>> +#define IDMA64C_CTLL_DST_INC         (0 << 8)        /* DAR update/not */
>> +#define IDMA64C_CTLL_DST_FIX         (1 << 8)
>> +#define IDMA64C_CTLL_SRC_INC         (0 << 10)       /* SAR update/not */
>> +#define IDMA64C_CTLL_SRC_FIX         (1 << 10)
>> +#define IDMA64C_CTLL_DST_MSIZE(x)    ((x) << 11)     /* burst, #elements */
>> +#define IDMA64C_CTLL_SRC_MSIZE(x)    ((x) << 14)
>> +#define IDMA64C_CTLL_FC_M2P          (1 << 20)       /* mem-to-periph */
>> +#define IDMA64C_CTLL_FC_P2M          (2 << 20)       /* periph-to-mem */
>> +#define IDMA64C_CTLL_LLP_D_EN                (1 << 27)       /* dest block chain */
>> +#define IDMA64C_CTLL_LLP_S_EN                (1 << 28)       /* src block chain */
>> +
>> +/* Bitfields in CTL_HI */
>> +#define IDMA64C_CTLH_BLOCK_TS(x)     ((x) & ((1 << 17) - 1))
>> +#define IDMA64C_CTLH_DONE            (1 << 17)
>> +
>> +/* Bitfields in CFG_LO */
>> +#define IDMA64C_CFGL_DST_BURST_ALIGN (1 << 0)        /* dst burst align */
>> +#define IDMA64C_CFGL_SRC_BURST_ALIGN (1 << 1)        /* src burst align */
>> +#define IDMA64C_CFGL_CH_SUSP         (1 << 8)
>> +#define IDMA64C_CFGL_FIFO_EMPTY              (1 << 9)
>> +#define IDMA64C_CFGL_CH_DRAIN                (1 << 10)       /* drain FIFO */
>> +#define IDMA64C_CFGL_DST_OPT_BL              (1 << 20)       /* optimize dst burst length */
>> +#define IDMA64C_CFGL_SRC_OPT_BL              (1 << 21)       /* optimize src burst length */
>> +
>> +/* Bitfields in CFG_HI */
>> +#define IDMA64C_CFGH_SRC_PER(x)              ((x) << 0)      /* src peripheral */
>> +#define IDMA64C_CFGH_DST_PER(x)              ((x) << 4)      /* dst peripheral */
>> +#define IDMA64C_CFGH_RD_ISSUE_THD(x) ((x) << 8)
>> +#define IDMA64C_CFGH_RW_ISSUE_THD(x) ((x) << 18)
>> +
>> +/* Interrupt registers */
>> +
>> +#define IDMA64_INT_XFER              0x00
>> +#define IDMA64_INT_BLOCK     0x08
>> +#define IDMA64_INT_SRC_TRAN  0x10
>> +#define IDMA64_INT_DST_TRAN  0x18
>> +#define IDMA64_INT_ERROR     0x20
>> +
>> +#define IDMA64_RAW(x)                (0x2c0 + IDMA64_INT_##x)        /* r */
>> +#define IDMA64_STATUS(x)     (0x2e8 + IDMA64_INT_##x)        /* r (raw & mask) */
>> +#define IDMA64_MASK(x)               (0x310 + IDMA64_INT_##x)        /* rw (set = irq enabled) */
>> +#define IDMA64_CLEAR(x)              (0x338 + IDMA64_INT_##x)        /* w (ack, affects "raw") */
>> +
>> +/* Common registers */
>> +
>> +#define IDMA64_STATUS_INT    0x360   /* r */
>> +#define IDMA64_CFG           0x398
>> +#define IDMA64_CH_EN         0x3a0
>> +
>> +/* Bitfields in CFG */
>> +#define IDMA64_CFG_DMA_EN            (1 << 0)
>> +
>> +/* Hardware descriptor for Linked LIst transfers */
>> +struct idma64_lli {
>> +     u64             sar;
>> +     u64             dar;
>> +     u64             llp;
>> +     u32             ctllo;
>> +     u32             ctlhi;
>> +     u32             sstat;
>> +     u32             dstat;
>> +};
>> +
>> +struct idma64_hw_desc {
>> +     struct idma64_lli *lli;
>> +     dma_addr_t llp;
>> +     dma_addr_t phys;
>> +     unsigned int len;
>> +};
>> +
>> +struct idma64_desc {
>> +     struct virt_dma_desc vdesc;
>> +     enum dma_transfer_direction direction;
>> +     struct idma64_hw_desc *hw;
>> +     unsigned int ndesc;
>> +     enum dma_status status;
>> +};
>> +
>> +static inline struct idma64_desc *to_idma64_desc(struct virt_dma_desc *vdesc)
>> +{
>> +     return container_of(vdesc, struct idma64_desc, vdesc);
>> +}
>> +
>> +struct idma64_chan {
>> +     struct virt_dma_chan vchan;
>> +
>> +     void __iomem *regs;
>> +     spinlock_t lock;
>> +
>> +     /* hardware configuration */
>> +     enum dma_transfer_direction direction;
>> +     unsigned int mask;
>> +     struct dma_slave_config config;
>> +
>> +     void *pool;
>> +     struct idma64_desc *desc;
>> +};
>> +
>> +static inline struct idma64_chan *to_idma64_chan(struct dma_chan *chan)
>> +{
>> +     return container_of(chan, struct idma64_chan, vchan.chan);
>> +}
>> +
>> +#define channel_set_bit(idma64, reg, mask)   \
>> +     dma_writel(idma64, reg, ((mask) << 8) | (mask))
>> +#define channel_clear_bit(idma64, reg, mask) \
>> +     dma_writel(idma64, reg, ((mask) << 8) | 0)
>> +
>> +static inline u32 idma64c_readl(struct idma64_chan *idma64c, int offset)
>> +{
>> +     return readl(idma64c->regs + offset);
>> +}
>> +
>> +static inline void idma64c_writel(struct idma64_chan *idma64c, int offset,
>> +                               u32 value)
>> +{
>> +     writel(value, idma64c->regs + offset);
>> +}
>> +
>> +#define channel_readl(idma64c, reg)          \
>> +     idma64c_readl(idma64c, IDMA64_CH_##reg)
>> +#define channel_writel(idma64c, reg, value)  \
>> +     idma64c_writel(idma64c, IDMA64_CH_##reg, (value))
>> +
>> +static inline u64 idma64c_readq(struct idma64_chan *idma64c, int offset)
>> +{
>> +     u64 l, h;
>> +
>> +     l = idma64c_readl(idma64c, offset);
>> +     h = idma64c_readl(idma64c, offset + 4);
>> +
>> +     return l | (h << 32);
>> +}
>> +
>> +static inline void idma64c_writeq(struct idma64_chan *idma64c, int offset,
>> +                               u64 value)
>> +{
>> +     idma64c_writel(idma64c, offset, value);
>> +     idma64c_writel(idma64c, offset + 4, value >> 32);
>> +}
>> +
>> +#define channel_readq(idma64c, reg)          \
>> +     idma64c_readq(idma64c, IDMA64_CH_##reg)
>> +#define channel_writeq(idma64c, reg, value)  \
>> +     idma64c_writeq(idma64c, IDMA64_CH_##reg, (value))
>> +
>> +struct idma64 {
>> +     struct dma_device dma;
>> +
>> +     void __iomem *regs;
>> +
>> +     /* channels */
>> +     unsigned short all_chan_mask;
>> +     struct idma64_chan *chan;
>> +};
>> +
>> +static inline struct idma64 *to_idma64(struct dma_device *ddev)
>> +{
>> +     return container_of(ddev, struct idma64, dma);
>> +}
>> +
>> +static inline u32 idma64_readl(struct idma64 *idma64, int offset)
>> +{
>> +     return readl(idma64->regs + offset);
>> +}
>> +
>> +static inline void idma64_writel(struct idma64 *idma64, int offset, u32 value)
>> +{
>> +     writel(value, idma64->regs + offset);
>> +}
>> +
>> +#define dma_readl(idma64, reg)                       \
>> +     idma64_readl(idma64, IDMA64_##reg)
>> +#define dma_writel(idma64, reg, value)               \
>> +     idma64_writel(idma64, IDMA64_##reg, (value))
>> +
>> +/**
>> + * struct idma64_chip - representation of DesignWare DMA controller hardware
>> + * @dev:             struct device of the DMA controller
>> + * @irq:             irq line
>> + * @regs:            memory mapped I/O space
>> + * @idma64:          struct idma64 that is filed by idma64_probe()
>> + */
>> +struct idma64_chip {
>> +     struct device   *dev;
>> +     int             irq;
>> +     void __iomem    *regs;
>> +     struct idma64   *idma64;
>> +};
>> +
>> +#endif /* __DMA_IDMA64_H__ */
>> --
>> 2.1.4
>>
>
> --
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/



-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT
  2015-05-26  3:51 ` [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Vinod Koul
@ 2015-05-26  6:51   ` Andy Shevchenko
  0 siblings, 0 replies; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-26  6:51 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Andy Shevchenko, Rafael J. Wysocki, linux-acpi, linux-pm,
	Greg Kroah-Hartman, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula

On Tue, May 26, 2015 at 6:51 AM, Vinod Koul <vinod.koul@intel.com> wrote:
> On Mon, May 25, 2015 at 07:09:24PM +0300, Andy Shevchenko wrote:
>> The new coming Intel platforms such as Skylake will contain Sunrisepoint PCH.
>>
>> The driver is based on MFD framework since the main device, i.e. serial bus
>> controller, contains register space for itself, DMA part, and an additional
>> address space (convergence layer).
>>
>> The public specification of the register map is avaiable in [3].
> or [1]...?

You are correct!

>> This is second generation of the patch series to bring support LPSS devices
>> found on Intel Sunrisepoint (Intel Skylake PCH). First one can be found here
>> [2].
>>
>> The series has few logical parts:
>> - patches 1-3 prepares PM core, ACPI, and driver core (PM) to handle our case
>> - patches 4-6 introduce unregistering platform devices in MFD in reversed
>>   order
>> - patch 7 implements iDMA 64-bit driver
>> - patch 8 introduces an MFD driver for LPSS devices
>>
>> The patch 8 depends on clkdev_create() helper that has been introduced by
>> Russel King in [3].
>>
>> The driver has been tested with SPI and UART on Intel Skylake PCH.
>>
>> [1] https://download.01.org/future-platform-configuration-hub/skylake/register-definitions/332219_001_Final.pdf
>> [2] https://lkml.org/lkml/2015/3/31/255
>> [3] https://patchwork.linuxtv.org/patch/28464/

-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 3/8] core: platform: wakeup the parent before trying any driver operations
  2015-05-25 17:36   ` Alan Stern
@ 2015-05-26 13:28     ` Heikki Krogerus
  0 siblings, 0 replies; 24+ messages in thread
From: Heikki Krogerus @ 2015-05-26 13:28 UTC (permalink / raw)
  To: Alan Stern
  Cc: Andy Shevchenko, Rafael J. Wysocki, linux-acpi, linux-pm,
	Greg Kroah-Hartman, Vinod Koul, Lee Jones, Andrew Morton,
	Mika Westerberg, linux-kernel, dmaengine, Jarkko Nikula

On Mon, May 25, 2015 at 01:36:43PM -0400, Alan Stern wrote:
> On Mon, 25 May 2015, Andy Shevchenko wrote:
> 
> > From: Heikki Krogerus <heikki.krogerus@linux.intel.com>
> > 
> > If the parent is still suspended when a driver probe,
> > remove or shutdown is attempted, the result may be a
> > failure.
> > 
> > For example, if the parent is a PCI MFD device that has been
> > suspended when we try to probe our device, any register
> > reads will return 0xffffffff.
> > 
> > To fix the problem, making sure the parent is always awake
> > before using driver probe, remove or shutdown.
> > 
> > Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
> > Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> > ---
> >  drivers/base/platform.c | 21 ++++++++++++++++++++-
> 
> Why make the changes here rather than in dd.c?  Is there something 
> special about platform devices?

There isn't. That would definitely be better.


Thanks Alan,

-- 
heikki

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 8/8] mfd: Add support for Intel Sunrisepoint LPSS devices
  2015-05-25 16:09 ` [PATCH v2 8/8] mfd: Add support for Intel Sunrisepoint LPSS devices Andy Shevchenko
@ 2015-05-27 10:22   ` Lee Jones
  2015-05-27 10:41     ` Mika Westerberg
  2015-05-28 11:17     ` Andy Shevchenko
  0 siblings, 2 replies; 24+ messages in thread
From: Lee Jones @ 2015-05-27 10:22 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Andrew Morton, Mika Westerberg, linux-kernel,
	dmaengine, Heikki Krogerus, Jarkko Nikula

On Mon, 25 May 2015, Andy Shevchenko wrote:

> The new coming Intel platforms such as Skylake will contain Sunrisepoint PCH.
> The main difference to the previous platforms is that the LPSS devices are
> compound devices where usually main (SPI, HSUART, or I2C) and DMA IPs are
> present.
> 
> This patch brings the driver for such devices found on Sunrisepoint PCH.
> 
> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> ---
>  drivers/mfd/Kconfig           |  24 ++
>  drivers/mfd/Makefile          |   3 +
>  drivers/mfd/intel-lpss-acpi.c |  84 +++++++
>  drivers/mfd/intel-lpss-pci.c  | 113 +++++++++
>  drivers/mfd/intel-lpss.c      | 534 ++++++++++++++++++++++++++++++++++++++++++
>  drivers/mfd/intel-lpss.h      |  62 +++++
>  6 files changed, 820 insertions(+)
>  create mode 100644 drivers/mfd/intel-lpss-acpi.c
>  create mode 100644 drivers/mfd/intel-lpss-pci.c
>  create mode 100644 drivers/mfd/intel-lpss.c
>  create mode 100644 drivers/mfd/intel-lpss.h
> 
> diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
> index d5ad04d..8cdaa12 100644
> --- a/drivers/mfd/Kconfig
> +++ b/drivers/mfd/Kconfig
> @@ -325,6 +325,30 @@ config INTEL_SOC_PMIC
>  	  thermal, charger and related power management functions
>  	  on these systems.
>  
> +config MFD_INTEL_LPSS
> +	tristate
> +	depends on X86
> +	select COMMON_CLK
> +	select MFD_CORE
> +
> +config MFD_INTEL_LPSS_ACPI
> +	tristate "Intel Low Power Subsystem support in ACPI mode"
> +	select MFD_INTEL_LPSS
> +	depends on ACPI
> +	help
> +	  This driver supports Intel Low Power Subsystem (LPSS) devices such as
> +	  I2C, SPI and HS-UART starting from Intel Sunrisepoint (Intel Skylake
> +	  PCH) in ACPI mode.
> +
> +config MFD_INTEL_LPSS_PCI
> +	tristate "Intel Low Power Subsystem support in PCI mode"
> +	select MFD_INTEL_LPSS
> +	depends on PCI
> +	help
> +	  This driver supports Intel Low Power Subsystem (LPSS) devices such as
> +	  I2C, SPI and HS-UART starting from Intel Sunrisepoint (Intel Skylake
> +	  PCH) in PCI mode.
> +
>  config MFD_INTEL_MSIC
>  	bool "Intel MSIC"
>  	depends on INTEL_SCU_IPC
> diff --git a/drivers/mfd/Makefile b/drivers/mfd/Makefile
> index 0e5cfeb..cdf29b9 100644
> --- a/drivers/mfd/Makefile
> +++ b/drivers/mfd/Makefile
> @@ -161,6 +161,9 @@ obj-$(CONFIG_TPS65911_COMPARATOR)	+= tps65911-comparator.o
>  obj-$(CONFIG_MFD_TPS65090)	+= tps65090.o
>  obj-$(CONFIG_MFD_AAT2870_CORE)	+= aat2870-core.o
>  obj-$(CONFIG_MFD_ATMEL_HLCDC)	+= atmel-hlcdc.o
> +obj-$(CONFIG_MFD_INTEL_LPSS)	+= intel-lpss.o
> +obj-$(CONFIG_MFD_INTEL_LPSS_PCI)	+= intel-lpss-pci.o
> +obj-$(CONFIG_MFD_INTEL_LPSS_ACPI)	+= intel-lpss-acpi.o
>  obj-$(CONFIG_MFD_INTEL_MSIC)	+= intel_msic.o
>  obj-$(CONFIG_MFD_PALMAS)	+= palmas.o
>  obj-$(CONFIG_MFD_VIPERBOARD)    += viperboard.o
> diff --git a/drivers/mfd/intel-lpss-acpi.c b/drivers/mfd/intel-lpss-acpi.c
> new file mode 100644
> index 0000000..0d92d73
> --- /dev/null
> +++ b/drivers/mfd/intel-lpss-acpi.c
> @@ -0,0 +1,84 @@
> +/*
> + * Intel LPSS ACPI support.
> + *
> + * Copyright (C) 2015, Intel Corporation
> + *
> + * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> + *          Mika Westerberg <mika.westerberg@linux.intel.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#include <linux/acpi.h>
> +#include <linux/ioport.h>
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/pm.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/platform_device.h>

[...]

> +#include "intel-lpss.h"
> +int intel_lpss_probe(struct device *dev,
> +		     const struct intel_lpss_platform_info *info)
> +{
> +	struct intel_lpss *lpss;
> +	int ret;
> +
> +	if (!info || !info->mem || info->irq <= 0)
> +		return -EINVAL;
> +
> +	lpss = devm_kzalloc(dev, sizeof(*lpss), GFP_KERNEL);
> +	if (!lpss)
> +		return -ENOMEM;
> +
> +	lpss->priv = devm_ioremap(dev, info->mem->start + LPSS_PRIV_OFFSET,
> +				  LPSS_PRIV_SIZE);
> +	if (!lpss->priv)
> +		return -ENOMEM;
> +
> +	lpss->info = info;
> +	lpss->dev = dev;
> +	lpss->caps = readl(lpss->priv + LPSS_PRIV_CAPS);
> +
> +	dev_set_drvdata(dev, lpss);
> +
> +	ret = intel_lpss_assign_devs(lpss);
> +	if (ret)
> +		return ret;
> +
> +	intel_lpss_init_dev(lpss);

[...]

> +	lpss->devid = ida_simple_get(&intel_lpss_devid_ida, 0, 0, GFP_KERNEL);
> +	if (lpss->devid < 0)
> +		return lpss->devid;
> +
> +	ret = intel_lpss_register_clock(lpss);
> +	if (ret < 0)
> +		goto err_clk_register;

Still not convinced by this.  I'd like Mike (who you *still* have not
CC'ed), to review.

> +	intel_lpss_ltr_expose(lpss);
> +
> +	ret = intel_lpss_debugfs_add(lpss);
> +	if (ret)
> +		dev_warn(lpss->dev, "Failed to create debugfs entries\n");
> +
> +	if (intel_lpss_has_idma(lpss)) {
> +		/*
> +		 * Ensure the DMA driver is loaded before the host
> +		 * controller device appears, so that the host controller
> +		 * driver can request its DMA channels as early as
> +		 * possible.
> +		 *
> +		 * If the DMA module is not there that's OK as well.
> +		 */
> +		intel_lpss_request_dma_module(LPSS_IDMA_DRIVER_NAME);
> +
> +		ret = mfd_add_devices(dev, lpss->devid, lpss->devs, 2,
> +				      info->mem, info->irq, NULL);
> +	} else {
> +		ret = mfd_add_devices(dev, lpss->devid, lpss->devs + 1, 1,
> +				      info->mem, info->irq, NULL);
> +	}

I'm still not happy with the mfd_cells being manipulated in this way,
or with the duplication you have within them.  Why don't you place the
IDMA device it its own mfd_cell, then:

> +	if (intel_lpss_has_idma(lpss)) {
> +		intel_lpss_request_dma_module(LPSS_IDMA_DRIVER_NAME);
> +
> +		ret = mfd_add_devices(dev, TBC, idma_dev, ARRAY_SIZE(idma_dev),
> +				      info->mem, info->irq, NULL);
> +             /* Error check */
> +	}
> +
> +	ret = mfd_add_devices(dev, TBC, proto_dev, ARRAY_SIZE(proto_dev),
> +				      info->mem, info->irq, NULL);
> +	if (ret < 0)

if (!ret)

[...]

> +static int resume_lpss_device(struct device *dev, void *data)
> +{
> +	pm_runtime_resume(dev);
> +	return 0;
> +}
> +
> +int intel_lpss_prepare(struct device *dev)
> +{
> +	/*
> +	 * Resume both child devices before entering system sleep. This
> +	 * ensures that they are in proper state before they get suspended.
> +	 */
> +	device_for_each_child_reverse(dev, NULL, resume_lpss_device);

Why can't you do this in intel_lpss_suspend()?

Then you can get rid of all the hand-rolled nonsense you have in the
header file and use SET_SYSTEM_SLEEP_PM_OPS instead.

Does something happen after .prepare() and before .suspend() that
prevents this from working?

> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(intel_lpss_prepare);
> +
> +int intel_lpss_suspend(struct device *dev)
> +{
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(intel_lpss_suspend);
> +
> +int intel_lpss_resume(struct device *dev)
> +{
> +	struct intel_lpss *lpss = dev_get_drvdata(dev);
> +
> +	intel_lpss_init_dev(lpss);
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(intel_lpss_resume);
> +
> +static int __init intel_lpss_init(void)
> +{
> +	intel_lpss_debugfs = debugfs_create_dir("intel_lpss", NULL);

Any reason this can't be done in .probe()?

> +	return 0;
> +}
> +module_init(intel_lpss_init);
> +
> +static void __exit intel_lpss_exit(void)
> +{
> +	debugfs_remove(intel_lpss_debugfs);

.remove()?

> +}
> +module_exit(intel_lpss_exit);
> +
> +MODULE_AUTHOR("Andy Shevchenko <andriy.shevchenko@linux.intel.com>");
> +MODULE_AUTHOR("Mika Westerberg <mika.westerberg@linux.intel.com>");
> +MODULE_AUTHOR("Heikki Krogerus <heikki.krogerus@linux.intel.com>");
> +MODULE_AUTHOR("Jarkko Nikula <jarkko.nikula@linux.intel.com>");
> +MODULE_DESCRIPTION("Intel LPSS core driver");
> +MODULE_LICENSE("GPL v2");
> diff --git a/drivers/mfd/intel-lpss.h b/drivers/mfd/intel-lpss.h
> new file mode 100644
> index 0000000..f28cb28a
> --- /dev/null
> +++ b/drivers/mfd/intel-lpss.h
> @@ -0,0 +1,62 @@
> +/*
> + * Intel LPSS core support.
> + *
> + * Copyright (C) 2015, Intel Corporation
> + *
> + * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> + *          Mika Westerberg <mika.westerberg@linux.intel.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + */
> +
> +#ifndef __MFD_INTEL_LPSS_H
> +#define __MFD_INTEL_LPSS_H
> +
> +struct device;
> +struct resource;
> +
> +struct intel_lpss_platform_info {
> +	struct resource *mem;
> +	int irq;
> +	unsigned long clk_rate;
> +	const char *clk_con_id;
> +};
> +
> +int intel_lpss_probe(struct device *dev,
> +		     const struct intel_lpss_platform_info *info);
> +void intel_lpss_remove(struct device *dev);
> +
> +#ifdef CONFIG_PM
> +int intel_lpss_prepare(struct device *dev);
> +int intel_lpss_suspend(struct device *dev);
> +int intel_lpss_resume(struct device *dev);
> +
> +#ifdef CONFIG_PM_SLEEP
> +#define INTEL_LPSS_SLEEP_PM_OPS			\
> +	.prepare = intel_lpss_prepare,		\
> +	.suspend = intel_lpss_suspend,		\
> +	.resume = intel_lpss_resume,		\
> +	.freeze = intel_lpss_suspend,		\
> +	.thaw = intel_lpss_resume,		\
> +	.poweroff = intel_lpss_suspend,		\
> +	.restore = intel_lpss_resume,
> +#endif
> +
> +#define INTEL_LPSS_RUNTIME_PM_OPS		\
> +	.runtime_suspend = intel_lpss_suspend,	\
> +	.runtime_resume = intel_lpss_resume,
> +
> +#else /* !CONFIG_PM */
> +#define INTEL_LPSS_SLEEP_PM_OPS
> +#define INTEL_LPSS_RUNTIME_PM_OPS
> +#endif /* CONFIG_PM */
> +
> +#define INTEL_LPSS_PM_OPS(name)			\
> +const struct dev_pm_ops name = {		\
> +	INTEL_LPSS_SLEEP_PM_OPS			\
> +	INTEL_LPSS_RUNTIME_PM_OPS		\
> +}
> +
> +#endif /* __MFD_INTEL_LPSS_H */

If you _really_ need .prepare, then it's likely that some other
platform might too.  It will be the same amount of code to just make
this generic, so do that instead please.

-- 
Lee Jones
Linaro STMicroelectronics Landing Team Lead
Linaro.org │ Open source software for ARM SoCs
Follow Linaro: Facebook | Twitter | Blog

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 8/8] mfd: Add support for Intel Sunrisepoint LPSS devices
  2015-05-27 10:22   ` Lee Jones
@ 2015-05-27 10:41     ` Mika Westerberg
  2015-05-28 11:17     ` Andy Shevchenko
  1 sibling, 0 replies; 24+ messages in thread
From: Mika Westerberg @ 2015-05-27 10:41 UTC (permalink / raw)
  To: Lee Jones
  Cc: Andy Shevchenko, Rafael J. Wysocki, linux-acpi, linux-pm,
	Greg Kroah-Hartman, Vinod Koul, Andrew Morton, linux-kernel,
	dmaengine, Heikki Krogerus, Jarkko Nikula

On Wed, May 27, 2015 at 11:22:41AM +0100, Lee Jones wrote:
> > +static int resume_lpss_device(struct device *dev, void *data)
> > +{
> > +	pm_runtime_resume(dev);
> > +	return 0;
> > +}
> > +
> > +int intel_lpss_prepare(struct device *dev)
> > +{
> > +	/*
> > +	 * Resume both child devices before entering system sleep. This
> > +	 * ensures that they are in proper state before they get suspended.
> > +	 */
> > +	device_for_each_child_reverse(dev, NULL, resume_lpss_device);
> 
> Why can't you do this in intel_lpss_suspend()?
> 
> Then you can get rid of all the hand-rolled nonsense you have in the
> header file and use SET_SYSTEM_SLEEP_PM_OPS instead.
> 
> Does something happen after .prepare() and before .suspend() that
> prevents this from working?

At that time all children are already suspended (to system sleep) so we
cannot bring them out of runtime suspend anymore.

.prepare() is executed for all devices before suspend callbacks for
each device.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 8/8] mfd: Add support for Intel Sunrisepoint LPSS devices
  2015-05-27 10:22   ` Lee Jones
  2015-05-27 10:41     ` Mika Westerberg
@ 2015-05-28 11:17     ` Andy Shevchenko
  2015-05-28 13:10       ` Lee Jones
  1 sibling, 1 reply; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-28 11:17 UTC (permalink / raw)
  To: Lee Jones
  Cc: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Andrew Morton, Mika Westerberg, linux-kernel,
	dmaengine, Heikki Krogerus, Jarkko Nikula

On Wed, 2015-05-27 at 11:22 +0100, Lee Jones wrote:
> On Mon, 25 May 2015, Andy Shevchenko wrote:
> 
> > The new coming Intel platforms such as Skylake will contain Sunrisepoint PCH.
> > The main difference to the previous platforms is that the LPSS devices are
> > compound devices where usually main (SPI, HSUART, or I2C) and DMA IPs are
> > present.
> > 
> > This patch brings the driver for such devices found on Sunrisepoint PCH.

Thanks for comments.
My answers below.

> > --- /dev/null
> > +++ b/drivers/mfd/intel-lpss-acpi.c
> > @@ -0,0 +1,84 @@
> > +/*
> > + * Intel LPSS ACPI support.
> > + *
> > + * Copyright (C) 2015, Intel Corporation
> > + *
> > + * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> > + *          Mika Westerberg <mika.westerberg@linux.intel.com>
> > + *
> > + * This program is free software; you can redistribute it and/or modify
> > + * it under the terms of the GNU General Public License version 2 as
> > + * published by the Free Software Foundation.
> > + */
> > +
> > +#include <linux/acpi.h>
> > +#include <linux/ioport.h>
> > +#include <linux/kernel.h>
> > +#include <linux/module.h>
> > +#include <linux/pm.h>
> > +#include <linux/pm_runtime.h>
> > +#include <linux/platform_device.h>
> 
> [...]
> 
> > +#include "intel-lpss.h"
> > +int intel_lpss_probe(struct device *dev,
> > +		     const struct intel_lpss_platform_info *info)
> > +{
> > +	struct intel_lpss *lpss;
> > +	int ret;
> > +
> > +	if (!info || !info->mem || info->irq <= 0)
> > +		return -EINVAL;
> > +
> > +	lpss = devm_kzalloc(dev, sizeof(*lpss), GFP_KERNEL);
> > +	if (!lpss)
> > +		return -ENOMEM;
> > +
> > +	lpss->priv = devm_ioremap(dev, info->mem->start + LPSS_PRIV_OFFSET,
> > +				  LPSS_PRIV_SIZE);
> > +	if (!lpss->priv)
> > +		return -ENOMEM;
> > +
> > +	lpss->info = info;
> > +	lpss->dev = dev;
> > +	lpss->caps = readl(lpss->priv + LPSS_PRIV_CAPS);
> > +
> > +	dev_set_drvdata(dev, lpss);
> > +
> > +	ret = intel_lpss_assign_devs(lpss);
> > +	if (ret)
> > +		return ret;
> > +
> > +	intel_lpss_init_dev(lpss);
> 
> [...]
> 
> > +	lpss->devid = ida_simple_get(&intel_lpss_devid_ida, 0, 0, GFP_KERNEL);
> > +	if (lpss->devid < 0)
> > +		return lpss->devid;
> > +
> > +	ret = intel_lpss_register_clock(lpss);
> > +	if (ret < 0)
> > +		goto err_clk_register;
> 
> Still not convinced by this.  I'd like Mike (who you *still* have not
> CC'ed), to review.

I will include him on next iteration.

> > +	intel_lpss_ltr_expose(lpss);
> > +
> > +	ret = intel_lpss_debugfs_add(lpss);
> > +	if (ret)
> > +		dev_warn(lpss->dev, "Failed to create debugfs entries\n");
> > +
> > +	if (intel_lpss_has_idma(lpss)) {
> > +		/*
> > +		 * Ensure the DMA driver is loaded before the host
> > +		 * controller device appears, so that the host controller
> > +		 * driver can request its DMA channels as early as
> > +		 * possible.
> > +		 *
> > +		 * If the DMA module is not there that's OK as well.
> > +		 */
> > +		intel_lpss_request_dma_module(LPSS_IDMA_DRIVER_NAME);
> > +
> > +		ret = mfd_add_devices(dev, lpss->devid, lpss->devs, 2,
> > +				      info->mem, info->irq, NULL);
> > +	} else {
> > +		ret = mfd_add_devices(dev, lpss->devid, lpss->devs + 1, 1,
> > +				      info->mem, info->irq, NULL);
> > +	}
> 
> I'm still not happy with the mfd_cells being manipulated in this way,
> or with the duplication you have within them.  Why don't you place the
> IDMA device it its own mfd_cell, then:
> 
> > +	if (intel_lpss_has_idma(lpss)) {
> > +		intel_lpss_request_dma_module(LPSS_IDMA_DRIVER_NAME);
> > +
> > +		ret = mfd_add_devices(dev, TBC, idma_dev, ARRAY_SIZE(idma_dev),
> > +				      info->mem, info->irq, NULL);
> > +             /* Error check */
> > +	}
> > +
> > +	ret = mfd_add_devices(dev, TBC, proto_dev, ARRAY_SIZE(proto_dev),
> > +				      info->mem, info->irq, NULL);

Would be nicer to export mfd_add_device() in that case?

> > +	if (ret < 0)
> 
> if (!ret)

Do you mean a) if (ret) or b) if (!ret) return 0; ?

Will be fixed for option a).

> > +static int resume_lpss_device(struct device *dev, void *data)
> > +{
> > +	pm_runtime_resume(dev);
> > +	return 0;
> > +}
> > +
> > +int intel_lpss_prepare(struct device *dev)
> > +{
> > +	/*
> > +	 * Resume both child devices before entering system sleep. This
> > +	 * ensures that they are in proper state before they get suspended.
> > +	 */
> > +	device_for_each_child_reverse(dev, NULL, resume_lpss_device);
> 
> Why can't you do this in intel_lpss_suspend()?
> 
> Then you can get rid of all the hand-rolled nonsense you have in the
> header file and use SET_SYSTEM_SLEEP_PM_OPS instead.
> 
> Does something happen after .prepare() and before .suspend() that
> prevents this from working?

I rely on what Mika answered you.

> > +static int __init intel_lpss_init(void)
> > +{
> > +	intel_lpss_debugfs = debugfs_create_dir("intel_lpss", NULL);
> 
> Any reason this can't be done in .probe()?

->probe is called per device, but we have one global folder for all of them.

So,
intel_lpss/
  dev_name 1/
    capabilities
    ...
  dev_name 2/
    capabilities
    ...
  ...

I doubt debugfs_create_dir() works like 'mkdir -p'.

> > +	return 0;
> > +}
> > +module_init(intel_lpss_init);
> > +
> > +static void __exit intel_lpss_exit(void)
> > +{
> > +	debugfs_remove(intel_lpss_debugfs);
> 
> .remove()?

See above.

> > +++ b/drivers/mfd/intel-lpss.h

[]

> > +#ifdef CONFIG_PM_SLEEP
> > +#define INTEL_LPSS_SLEEP_PM_OPS			\
> > +	.prepare = intel_lpss_prepare,		\
> > +	.suspend = intel_lpss_suspend,		\
> > +	.resume = intel_lpss_resume,		\
> > +	.freeze = intel_lpss_suspend,		\
> > +	.thaw = intel_lpss_resume,		\
> > +	.poweroff = intel_lpss_suspend,		\
> > +	.restore = intel_lpss_resume,
> > +#endif
> > +
> > +#define INTEL_LPSS_RUNTIME_PM_OPS		\
> > +	.runtime_suspend = intel_lpss_suspend,	\
> > +	.runtime_resume = intel_lpss_resume,
> > +
> > +#else /* !CONFIG_PM */
> > +#define INTEL_LPSS_SLEEP_PM_OPS
> > +#define INTEL_LPSS_RUNTIME_PM_OPS
> > +#endif /* CONFIG_PM */
> > +
> > +#define INTEL_LPSS_PM_OPS(name)			\
> > +const struct dev_pm_ops name = {		\
> > +	INTEL_LPSS_SLEEP_PM_OPS			\
> > +	INTEL_LPSS_RUNTIME_PM_OPS		\

> If you _really_ need .prepare, then it's likely that some other
> platform might too.  It will be the same amount of code to just make
> this generic, so do that instead please.

In 'linux/pm.h' ->prepare() is excluded since it's quite exotic to be 
in device drivers. That is my understanding why it makes not much sense
to provide a generic definition for that.

$ git grep -n '\.prepare[ \t]*=.*pm' drivers/ | wc -l
33
$ git grep -n SET_SYSTEM_SLEEP_PM_OPS drivers/ | wc -l
114
$ git grep -n UNIVERSAL_DEV_PM_OPS drivers/ | wc -l
9
…and there are a lot of drivers (hundreds+) that do
not use mentioned macros, and has no ->prepare() callback defined.

I can try to summon up Rafael to clarify this.

-- 
Andy Shevchenko <andriy.shevchenko@intel.com>
Intel Finland Oy


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 8/8] mfd: Add support for Intel Sunrisepoint LPSS devices
  2015-05-28 11:17     ` Andy Shevchenko
@ 2015-05-28 13:10       ` Lee Jones
  2015-05-29 10:03         ` Andy Shevchenko
  0 siblings, 1 reply; 24+ messages in thread
From: Lee Jones @ 2015-05-28 13:10 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Andrew Morton, Mika Westerberg, linux-kernel,
	dmaengine, Heikki Krogerus, Jarkko Nikula

On Thu, 28 May 2015, Andy Shevchenko wrote:
> On Wed, 2015-05-27 at 11:22 +0100, Lee Jones wrote:
> > On Mon, 25 May 2015, Andy Shevchenko wrote:
> > 
> > > The new coming Intel platforms such as Skylake will contain Sunrisepoint PCH.
> > > The main difference to the previous platforms is that the LPSS devices are
> > > compound devices where usually main (SPI, HSUART, or I2C) and DMA IPs are
> > > present.
> > > 
> > > This patch brings the driver for such devices found on Sunrisepoint PCH.
> 
> Thanks for comments.
> My answers below.
> 
> > > --- /dev/null
> > > +++ b/drivers/mfd/intel-lpss-acpi.c
> > > @@ -0,0 +1,84 @@
> > > +/*
> > > + * Intel LPSS ACPI support.
> > > + *
> > > + * Copyright (C) 2015, Intel Corporation
> > > + *
> > > + * Authors: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> > > + *          Mika Westerberg <mika.westerberg@linux.intel.com>
> > > + *
> > > + * This program is free software; you can redistribute it and/or modify
> > > + * it under the terms of the GNU General Public License version 2 as
> > > + * published by the Free Software Foundation.
> > > + */
> > > +
> > > +#include <linux/acpi.h>
> > > +#include <linux/ioport.h>
> > > +#include <linux/kernel.h>
> > > +#include <linux/module.h>
> > > +#include <linux/pm.h>
> > > +#include <linux/pm_runtime.h>
> > > +#include <linux/platform_device.h>
> > 
> > [...]
> > 
> > > +	lpss->devid = ida_simple_get(&intel_lpss_devid_ida, 0, 0, GFP_KERNEL);
> > > +	if (lpss->devid < 0)
> > > +		return lpss->devid;
> > > +
> > > +	ret = intel_lpss_register_clock(lpss);
> > > +	if (ret < 0)
> > > +		goto err_clk_register;
> > 
> > Still not convinced by this.  I'd like Mike (who you *still* have not
> > CC'ed), to review.
> 
> I will include him on next iteration.
> 
> > > +	intel_lpss_ltr_expose(lpss);
> > > +
> > > +	ret = intel_lpss_debugfs_add(lpss);
> > > +	if (ret)
> > > +		dev_warn(lpss->dev, "Failed to create debugfs entries\n");
> > > +
> > > +	if (intel_lpss_has_idma(lpss)) {
> > > +		/*
> > > +		 * Ensure the DMA driver is loaded before the host
> > > +		 * controller device appears, so that the host controller
> > > +		 * driver can request its DMA channels as early as
> > > +		 * possible.
> > > +		 *
> > > +		 * If the DMA module is not there that's OK as well.
> > > +		 */
> > > +		intel_lpss_request_dma_module(LPSS_IDMA_DRIVER_NAME);
> > > +
> > > +		ret = mfd_add_devices(dev, lpss->devid, lpss->devs, 2,
> > > +				      info->mem, info->irq, NULL);
> > > +	} else {
> > > +		ret = mfd_add_devices(dev, lpss->devid, lpss->devs + 1, 1,
> > > +				      info->mem, info->irq, NULL);
> > > +	}
> > 
> > I'm still not happy with the mfd_cells being manipulated in this way,
> > or with the duplication you have within them.  Why don't you place the
> > IDMA device it its own mfd_cell, then:
> > 
> > > +	if (intel_lpss_has_idma(lpss)) {
> > > +		intel_lpss_request_dma_module(LPSS_IDMA_DRIVER_NAME);
> > > +
> > > +		ret = mfd_add_devices(dev, TBC, idma_dev, ARRAY_SIZE(idma_dev),
> > > +				      info->mem, info->irq, NULL);
> > > +             /* Error check */
> > > +	}
> > > +
> > > +	ret = mfd_add_devices(dev, TBC, proto_dev, ARRAY_SIZE(proto_dev),
> > > +				      info->mem, info->irq, NULL);
> 
> Would be nicer to export mfd_add_device() in that case?

What do you mean by export?  What's wrong with using this code
segment?

> > > +	if (ret < 0)
> > 
> > if (!ret)
> 
> Do you mean a) if (ret) or b) if (!ret) return 0; ?
> 
> Will be fixed for option a).

Right.

> > > +static int __init intel_lpss_init(void)
> > > +{
> > > +	intel_lpss_debugfs = debugfs_create_dir("intel_lpss", NULL);
> > 
> > Any reason this can't be done in .probe()?
> 
> ->probe is called per device, but we have one global folder for all of them.
> 
> So,
> intel_lpss/
>   dev_name 1/
>     capabilities
>     ...
>   dev_name 2/
>     capabilities
>     ...
>   ...
> 
> I doubt debugfs_create_dir() works like 'mkdir -p'.

Ah, multiple devices, yes good point.

[...]

> > > +#ifdef CONFIG_PM_SLEEP
> > > +#define INTEL_LPSS_SLEEP_PM_OPS			\
> > > +	.prepare = intel_lpss_prepare,		\
> > > +	.suspend = intel_lpss_suspend,		\
> > > +	.resume = intel_lpss_resume,		\
> > > +	.freeze = intel_lpss_suspend,		\
> > > +	.thaw = intel_lpss_resume,		\
> > > +	.poweroff = intel_lpss_suspend,		\
> > > +	.restore = intel_lpss_resume,
> > > +#endif
> > > +
> > > +#define INTEL_LPSS_RUNTIME_PM_OPS		\
> > > +	.runtime_suspend = intel_lpss_suspend,	\
> > > +	.runtime_resume = intel_lpss_resume,
> > > +
> > > +#else /* !CONFIG_PM */
> > > +#define INTEL_LPSS_SLEEP_PM_OPS
> > > +#define INTEL_LPSS_RUNTIME_PM_OPS
> > > +#endif /* CONFIG_PM */
> > > +
> > > +#define INTEL_LPSS_PM_OPS(name)			\
> > > +const struct dev_pm_ops name = {		\
> > > +	INTEL_LPSS_SLEEP_PM_OPS			\
> > > +	INTEL_LPSS_RUNTIME_PM_OPS		\
> 
> > If you _really_ need .prepare, then it's likely that some other
> > platform might too.  It will be the same amount of code to just make
> > this generic, so do that instead please.
> 
> In 'linux/pm.h' ->prepare() is excluded since it's quite exotic to be 
> in device drivers. That is my understanding why it makes not much sense
> to provide a generic definition for that.
> 
> $ git grep -n '\.prepare[ \t]*=.*pm' drivers/ | wc -l
> 33
> $ git grep -n SET_SYSTEM_SLEEP_PM_OPS drivers/ | wc -l
> 114
> $ git grep -n UNIVERSAL_DEV_PM_OPS drivers/ | wc -l
> 9
> …and there are a lot of drivers (hundreds+) that do
> not use mentioned macros, and has no ->prepare() callback defined.
> 
> I can try to summon up Rafael to clarify this.

Yes, let's do that, as I'd like a second opinion on this, thanks.

-- 
Lee Jones
Linaro STMicroelectronics Landing Team Lead
Linaro.org │ Open source software for ARM SoCs
Follow Linaro: Facebook | Twitter | Blog

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 8/8] mfd: Add support for Intel Sunrisepoint LPSS devices
  2015-05-28 13:10       ` Lee Jones
@ 2015-05-29 10:03         ` Andy Shevchenko
  0 siblings, 0 replies; 24+ messages in thread
From: Andy Shevchenko @ 2015-05-29 10:03 UTC (permalink / raw)
  To: Lee Jones
  Cc: Rafael J. Wysocki, linux-acpi, linux-pm, Greg Kroah-Hartman,
	Vinod Koul, Andrew Morton, Mika Westerberg, linux-kernel,
	dmaengine, Heikki Krogerus, Jarkko Nikula, Wysocki, Rafael J

On Thu, 2015-05-28 at 14:10 +0100, Lee Jones wrote:
> On Thu, 28 May 2015, Andy Shevchenko wrote:
> > On Wed, 2015-05-27 at 11:22 +0100, Lee Jones wrote:
> > > On Mon, 25 May 2015, Andy Shevchenko wrote:

[]

> > > > +	intel_lpss_ltr_expose(lpss);
> > > > +
> > > > +	ret = intel_lpss_debugfs_add(lpss);
> > > > +	if (ret)
> > > > +		dev_warn(lpss->dev, "Failed to create debugfs entries\n");
> > > > +
> > > > +	if (intel_lpss_has_idma(lpss)) {
> > > > +		/*
> > > > +		 * Ensure the DMA driver is loaded before the host
> > > > +		 * controller device appears, so that the host controller
> > > > +		 * driver can request its DMA channels as early as
> > > > +		 * possible.
> > > > +		 *
> > > > +		 * If the DMA module is not there that's OK as well.
> > > > +		 */
> > > > +		intel_lpss_request_dma_module(LPSS_IDMA_DRIVER_NAME);
> > > > +
> > > > +		ret = mfd_add_devices(dev, lpss->devid, lpss->devs, 2,
> > > > +				      info->mem, info->irq, NULL);
> > > > +	} else {
> > > > +		ret = mfd_add_devices(dev, lpss->devid, lpss->devs + 1, 1,
> > > > +				      info->mem, info->irq, NULL);
> > > > +	}
> > > 
> > > I'm still not happy with the mfd_cells being manipulated in this way,
> > > or with the duplication you have within them.  Why don't you place the
> > > IDMA device it its own mfd_cell, then:
> > > 
> > > > +	if (intel_lpss_has_idma(lpss)) {
> > > > +		intel_lpss_request_dma_module(LPSS_IDMA_DRIVER_NAME);
> > > > +
> > > > +		ret = mfd_add_devices(dev, TBC, idma_dev, ARRAY_SIZE(idma_dev),
> > > > +				      info->mem, info->irq, NULL);
> > > > +             /* Error check */
> > > > +	}
> > > > +
> > > > +	ret = mfd_add_devices(dev, TBC, proto_dev, ARRAY_SIZE(proto_dev),
> > > > +				      info->mem, info->irq, NULL);
> > 
> > Would be nicer to export mfd_add_device() in that case?
> 
> What do you mean by export?  What's wrong with using this code
> segment?

I took a closer look into this, indeed, we can call mfd_add_devices() as
many time as we want to add a new child device.

Will refactor this piece of code.

> > > > +#ifdef CONFIG_PM_SLEEP
> > > > +#define INTEL_LPSS_SLEEP_PM_OPS			\
> > > > +	.prepare = intel_lpss_prepare,		\
> > > > +	.suspend = intel_lpss_suspend,		\
> > > > +	.resume = intel_lpss_resume,		\
> > > > +	.freeze = intel_lpss_suspend,		\
> > > > +	.thaw = intel_lpss_resume,		\
> > > > +	.poweroff = intel_lpss_suspend,		\
> > > > +	.restore = intel_lpss_resume,
> > > > +#endif
> > > > +
> > > > +#define INTEL_LPSS_RUNTIME_PM_OPS		\
> > > > +	.runtime_suspend = intel_lpss_suspend,	\
> > > > +	.runtime_resume = intel_lpss_resume,
> > > > +
> > > > +#else /* !CONFIG_PM */
> > > > +#define INTEL_LPSS_SLEEP_PM_OPS
> > > > +#define INTEL_LPSS_RUNTIME_PM_OPS
> > > > +#endif /* CONFIG_PM */
> > > > +
> > > > +#define INTEL_LPSS_PM_OPS(name)			\
> > > > +const struct dev_pm_ops name = {		\
> > > > +	INTEL_LPSS_SLEEP_PM_OPS			\
> > > > +	INTEL_LPSS_RUNTIME_PM_OPS		\
> > 
> > > If you _really_ need .prepare, then it's likely that some other
> > > platform might too.  It will be the same amount of code to just make
> > > this generic, so do that instead please.
> > 
> > In 'linux/pm.h' ->prepare() is excluded since it's quite exotic to be 
> > in device drivers. That is my understanding why it makes not much sense
> > to provide a generic definition for that.
> > 
> > $ git grep -n '\.prepare[ \t]*=.*pm' drivers/ | wc -l
> > 33
> > $ git grep -n SET_SYSTEM_SLEEP_PM_OPS drivers/ | wc -l
> > 114
> > $ git grep -n UNIVERSAL_DEV_PM_OPS drivers/ | wc -l
> > 9
> > …and there are a lot of drivers (hundreds+) that do
> > not use mentioned macros, and has no ->prepare() callback defined.
> > 
> > I can try to summon up Rafael to clarify this.
> 
> Yes, let's do that, as I'd like a second opinion on this, thanks.

Rafael, it would be nice to have your input here.

-- 
Andy Shevchenko <andriy.shevchenko@intel.com>
Intel Finland Oy


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 4/8] klist: implement klist_prev()
  2015-05-25 16:09 ` [PATCH v2 4/8] klist: implement klist_prev() Andy Shevchenko
@ 2015-06-01  1:21   ` Greg Kroah-Hartman
  0 siblings, 0 replies; 24+ messages in thread
From: Greg Kroah-Hartman @ 2015-06-01  1:21 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Rafael J. Wysocki, linux-acpi, linux-pm, Vinod Koul, Lee Jones,
	Andrew Morton, Mika Westerberg, linux-kernel, dmaengine,
	Heikki Krogerus, Jarkko Nikula

On Mon, May 25, 2015 at 07:09:28PM +0300, Andy Shevchenko wrote:
> klist_prev() gets the previous element in the list. It is useful to traverse
> through the list in reverse order, for example, to provide LIFO (last in first
> out) variant of access.
> 
> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> ---

Acked-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 5/8] driver core: implement device_for_each_child_reverse()
  2015-05-25 16:09 ` [PATCH v2 5/8] driver core: implement device_for_each_child_reverse() Andy Shevchenko
@ 2015-06-01  1:21   ` Greg Kroah-Hartman
  0 siblings, 0 replies; 24+ messages in thread
From: Greg Kroah-Hartman @ 2015-06-01  1:21 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Rafael J. Wysocki, linux-acpi, linux-pm, Vinod Koul, Lee Jones,
	Andrew Morton, Mika Westerberg, linux-kernel, dmaengine,
	Heikki Krogerus, Jarkko Nikula

On Mon, May 25, 2015 at 07:09:29PM +0300, Andy Shevchenko wrote:
> The new function device_for_each_child_reverse() is helpful to traverse the
> registered devices in a reversed order, e.g. in the case when an operation on
> each device should be done first on the last added device, then on one before
> last and so on.
> 
> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>

Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH v2 7/8] dmaengine: add a driver for Intel integrated DMA 64-bit
  2015-05-26  6:49     ` Andy Shevchenko
@ 2015-06-02 12:49       ` Vinod Koul
  0 siblings, 0 replies; 24+ messages in thread
From: Vinod Koul @ 2015-06-02 12:49 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Andy Shevchenko, Rafael J. Wysocki, linux-acpi, linux-pm,
	Greg Kroah-Hartman, Lee Jones, Andrew Morton, Mika Westerberg,
	linux-kernel, dmaengine, Heikki Krogerus, Jarkko Nikula

On Tue, May 26, 2015 at 09:49:57AM +0300, Andy Shevchenko wrote:
> On Tue, May 26, 2015 at 7:06 AM, Vinod Koul <vinod.koul@intel.com> wrote:
> > On Mon, May 25, 2015 at 07:09:31PM +0300, Andy Shevchenko wrote:
> >> Intel integrated DMA (iDMA) 64-bit is a specific IP that is used as a part of
> >> LPSS devices such as HSUART or SPI. The iDMA IP is attached for private
> >> usage on each host controller independently.
> >>
> >> While it has similarities with Synopsys DesignWare DMA, the following
> >> distinctions doesn't allow to use the existing driver:
> >> - 64-bit mode with corresponding changes in Hardware Linked List data structure
> >> - many slight differences in the channel registers
> >>
> >> Moreover this driver is based on the DMA virtual channels framework that helps
> >> to make the driver cleaner and easy to understand.
> >>
> > Looking at code and iDMA controllers (if this is the same as I have used), we
> > have register compatibility with DW controller, so why new driver and why not
> > use and enhance dw driver ?
> 
> Take a look closer. There are many, like I mentioned, slight but not
> least changes in the registers, besides *64-bit mode*:
> - ctl_hi represents bytes, not items
> - 2 bytes of burst is supported (dw has no gap there)
> - shuffling bits between ctl_* and cfg_*
> - new bits with different meaning in ctl_* and cfg_*.
Yes these are the changes which I was thinking and these would impact only
calculating different values for a descriptor, so based on device probed you
cna load a specfic operation for calculating, rest of the driver code is
agnostic

> 
> Preliminary we did a patchset for dw_dmac, but above hw changes blows
> up and messes the driver code. I really would prefer to have those two
> separate
I think it should be doable and reading this patch also doesnt convince me
for that

> 
> However, the 32-bit iDMA which is used in Baytrail might be driven by dw_dmac.
Also what part here is specfic to *64* bit ?

-- 
~Vinod


^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2015-06-02 12:47 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-25 16:09 [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Andy Shevchenko
2015-05-25 16:09 ` [PATCH v2 1/8] PM / QoS: Make it possible to expose device latency tolerance to userspace Andy Shevchenko
2015-05-25 16:09 ` [PATCH v2 2/8] ACPI / PM: Attach ACPI power domain only once Andy Shevchenko
2015-05-25 16:09 ` [PATCH v2 3/8] core: platform: wakeup the parent before trying any driver operations Andy Shevchenko
2015-05-25 17:36   ` Alan Stern
2015-05-26 13:28     ` Heikki Krogerus
2015-05-26  4:04   ` Vinod Koul
2015-05-25 16:09 ` [PATCH v2 4/8] klist: implement klist_prev() Andy Shevchenko
2015-06-01  1:21   ` Greg Kroah-Hartman
2015-05-25 16:09 ` [PATCH v2 5/8] driver core: implement device_for_each_child_reverse() Andy Shevchenko
2015-06-01  1:21   ` Greg Kroah-Hartman
2015-05-25 16:09 ` [PATCH v2 6/8] mfd: make mfd_remove_devices() iterate in reverse order Andy Shevchenko
2015-05-25 16:09 ` [PATCH v2 7/8] dmaengine: add a driver for Intel integrated DMA 64-bit Andy Shevchenko
2015-05-26  4:06   ` Vinod Koul
2015-05-26  6:49     ` Andy Shevchenko
2015-06-02 12:49       ` Vinod Koul
2015-05-25 16:09 ` [PATCH v2 8/8] mfd: Add support for Intel Sunrisepoint LPSS devices Andy Shevchenko
2015-05-27 10:22   ` Lee Jones
2015-05-27 10:41     ` Mika Westerberg
2015-05-28 11:17     ` Andy Shevchenko
2015-05-28 13:10       ` Lee Jones
2015-05-29 10:03         ` Andy Shevchenko
2015-05-26  3:51 ` [PATCH v2 0/8] mfd: introduce a driver for LPSS devices on SPT Vinod Koul
2015-05-26  6:51   ` Andy Shevchenko

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).