* [RFC PATCHv5 0/7] HSI framework and drivers
@ 2011-06-10 13:38 Carlos Chinea
  2011-06-10 13:38 ` [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework Carlos Chinea
                   ` (9 more replies)
  0 siblings, 10 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-06-10 13:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-omap, sre, linus.walleij, govindraj.ti, pawel.szyszuk,
	sjur.brandeland, peter_henn

Hi !

Here you have the fifth round of the HSI framework patches.

The patch series introduces the HSI framework, an SSI driver
for OMAP and a generic character device for HSI/SSI devices.

SSI, which is a legacy version of HSI, is used to connect the application
engine with the cellular modem on the Nokia N900.

In this round we have added/fixed:

- Refactor hsi_char and remove its poll support.
- Fix RX hwbreak handling in omap_ssi.
- Add functions for clients to query the controller and port ids.

TODO:

- Move OMAP SSI to use omap hwmod and new pm functions.
- Add aio support to hsi_char.
- Add automatic character device node registration for hsi_char.
- Add priority support in the HSI framework.

I would be very glad to continue getting feedback on this proposal.

This patch series is based on 3.0-rc1.

Version 4 of the patch set: https://lkml.org/lkml/2010/12/14/73

Notes: 
 - checkpatch still reports a false positive on patch 1/7.
 - sparse reports an error in hsi_char.c:128:1 (Syntax error in unary expression)
	due to a bug in the BUILD_BUG_ON_ZERO macro when __CHECKER__ is defined.
	See https://lkml.org/lkml/2011/5/28/180

Andras Domokos (3):
  HSI: hsi_char: Add HSI char device driver
  HSI: hsi_char: Add HSI char device kernel configuration
  HSI: hsi_char: Update ioctl-number.txt

Carlos Chinea (4):
  HSI: hsi: Introducing HSI framework
  HSI: omap_ssi: Introducing OMAP SSI driver
  HSI: omap_ssi: Add OMAP SSI to the kernel configuration
  HSI: Add HSI API documentation

 Documentation/DocBook/device-drivers.tmpl |   17 +
 Documentation/ioctl/ioctl-number.txt      |    1 +
 arch/arm/mach-omap2/Makefile              |    2 +
 arch/arm/mach-omap2/ssi.c                 |  134 +++
 arch/arm/plat-omap/include/plat/ssi.h     |  204 ++++
 drivers/Kconfig                           |    2 +
 drivers/Makefile                          |    1 +
 drivers/hsi/Kconfig                       |   20 +
 drivers/hsi/Makefile                      |    6 +
 drivers/hsi/clients/Kconfig               |   13 +
 drivers/hsi/clients/Makefile              |    5 +
 drivers/hsi/clients/hsi_char.c            |  804 +++++++++++++
 drivers/hsi/controllers/Kconfig           |   23 +
 drivers/hsi/controllers/Makefile          |    5 +
 drivers/hsi/controllers/omap_ssi.c        | 1852 +++++++++++++++++++++++++++++
 drivers/hsi/hsi.c                         |  496 ++++++++
 drivers/hsi/hsi_boardinfo.c               |   64 +
 drivers/hsi/hsi_core.h                    |   37 +
 include/linux/Kbuild                      |    1 +
 include/linux/hsi/Kbuild                  |    1 +
 include/linux/hsi/hsi.h                   |  412 +++++++
 include/linux/hsi/hsi_char.h              |   65 +
 22 files changed, 4165 insertions(+), 0 deletions(-)
 create mode 100644 arch/arm/mach-omap2/ssi.c
 create mode 100644 arch/arm/plat-omap/include/plat/ssi.h
 create mode 100644 drivers/hsi/Kconfig
 create mode 100644 drivers/hsi/Makefile
 create mode 100644 drivers/hsi/clients/Kconfig
 create mode 100644 drivers/hsi/clients/Makefile
 create mode 100644 drivers/hsi/clients/hsi_char.c
 create mode 100644 drivers/hsi/controllers/Kconfig
 create mode 100644 drivers/hsi/controllers/Makefile
 create mode 100644 drivers/hsi/controllers/omap_ssi.c
 create mode 100644 drivers/hsi/hsi.c
 create mode 100644 drivers/hsi/hsi_boardinfo.c
 create mode 100644 drivers/hsi/hsi_core.h
 create mode 100644 include/linux/hsi/Kbuild
 create mode 100644 include/linux/hsi/hsi.h
 create mode 100644 include/linux/hsi/hsi_char.h



* [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-06-10 13:38 [RFC PATCHv5 0/7] HSI framework and drivers Carlos Chinea
@ 2011-06-10 13:38 ` Carlos Chinea
  2011-06-10 13:38 ` [RFC PATCHv5 2/7] HSI: omap_ssi: Introducing OMAP SSI driver Carlos Chinea
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-06-10 13:38 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-omap

Adds the HSI framework to the Linux kernel.

High Speed Synchronous Serial Interface (HSI) is a
serial interface mainly used for connecting application
engines (APE) with cellular modem engines (CMT) in cellular
handsets.

HSI provides multiplexing for up to 16 logical channels,
low latency, and full-duplex communication.
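
To give an idea of how the API is meant to be used (illustration only,
not part of this patch; the "foo" names and the static buffer are made
up for the sketch), a minimal client driver on top of this framework
could look roughly like this:

	#include <linux/hsi/hsi.h>
	#include <linux/module.h>

	static u8 foo_buf[64];

	static void foo_complete(struct hsi_msg *msg)
	{
		/* msg->status and msg->actual_len are valid here */
		hsi_free_msg(msg);
	}

	static int foo_probe(struct device *dev)
	{
		struct hsi_client *cl = to_hsi_client(dev);
		struct hsi_msg *msg;
		int err;

		err = hsi_claim_port(cl, 0);	/* exclusive port access */
		if (err < 0)
			return err;
		err = hsi_setup(cl);		/* apply cl->tx_cfg / cl->rx_cfg */
		if (err < 0)
			goto out;
		msg = hsi_alloc_msg(1, GFP_KERNEL);	/* one sg entry */
		if (!msg) {
			err = -ENOMEM;
			goto out;
		}
		sg_init_one(msg->sgt.sgl, foo_buf, sizeof(foo_buf));
		msg->channel = 0;
		msg->complete = foo_complete;
		msg->destructor = foo_complete;
		err = hsi_async_write(cl, msg);	/* non-blocking submission */
		if (err < 0)
			hsi_free_msg(msg);
	out:
		if (err < 0)
			hsi_release_port(cl);
		return err;
	}

	static struct hsi_client_driver foo_driver = {
		.driver = {
			.name	= "foo",	/* matches hsi_board_info.name */
			.owner	= THIS_MODULE,
			.probe	= foo_probe,
		},
	};

	static int __init foo_init(void)
	{
		return hsi_register_client_driver(&foo_driver);
	}
	module_init(foo_init);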

Signed-off-by: Carlos Chinea <carlos.chinea@nokia.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
---
 drivers/Kconfig             |    2 +
 drivers/Makefile            |    1 +
 drivers/hsi/Kconfig         |   17 ++
 drivers/hsi/Makefile        |    5 +
 drivers/hsi/hsi.c           |  496 +++++++++++++++++++++++++++++++++++++++++++
 drivers/hsi/hsi_boardinfo.c |   64 ++++++
 drivers/hsi/hsi_core.h      |   37 ++++
 include/linux/hsi/hsi.h     |  412 +++++++++++++++++++++++++++++++++++
 8 files changed, 1034 insertions(+), 0 deletions(-)
 create mode 100644 drivers/hsi/Kconfig
 create mode 100644 drivers/hsi/Makefile
 create mode 100644 drivers/hsi/hsi.c
 create mode 100644 drivers/hsi/hsi_boardinfo.c
 create mode 100644 drivers/hsi/hsi_core.h
 create mode 100644 include/linux/hsi/hsi.h

diff --git a/drivers/Kconfig b/drivers/Kconfig
index 3bb154d..a644aae 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -52,6 +52,8 @@ source "drivers/i2c/Kconfig"
 
 source "drivers/spi/Kconfig"
 
+source "drivers/hsi/Kconfig"
+
 source "drivers/pps/Kconfig"
 
 source "drivers/ptp/Kconfig"
diff --git a/drivers/Makefile b/drivers/Makefile
index 09f3232..09e2c10 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -51,6 +51,7 @@ obj-$(CONFIG_ATA)		+= ata/
 obj-$(CONFIG_TARGET_CORE)	+= target/
 obj-$(CONFIG_MTD)		+= mtd/
 obj-$(CONFIG_SPI)		+= spi/
+obj-y				+= hsi/
 obj-y				+= net/
 obj-$(CONFIG_ATM)		+= atm/
 obj-$(CONFIG_FUSION)		+= message/
diff --git a/drivers/hsi/Kconfig b/drivers/hsi/Kconfig
new file mode 100644
index 0000000..937062e
--- /dev/null
+++ b/drivers/hsi/Kconfig
@@ -0,0 +1,17 @@
+#
+# HSI driver configuration
+#
+menuconfig HSI
+	tristate "HSI support"
+	---help---
+	  The "High speed synchronous Serial Interface" is
+	  synchronous serial interface used mainly to connect
+	  application engines and cellular modems.
+
+if HSI
+
+config HSI_BOARDINFO
+	bool
+	default y
+
+endif # HSI
diff --git a/drivers/hsi/Makefile b/drivers/hsi/Makefile
new file mode 100644
index 0000000..ed94a3a
--- /dev/null
+++ b/drivers/hsi/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for HSI
+#
+obj-$(CONFIG_HSI_BOARDINFO)	+= hsi_boardinfo.o
+obj-$(CONFIG_HSI)		+= hsi.o
diff --git a/drivers/hsi/hsi.c b/drivers/hsi/hsi.c
new file mode 100644
index 0000000..06b5743
--- /dev/null
+++ b/drivers/hsi/hsi.c
@@ -0,0 +1,496 @@
+/*
+ * hsi.c
+ *
+ * HSI core.
+ *
+ * Copyright (C) 2010 Nokia Corporation. All rights reserved.
+ *
+ * Contact: Carlos Chinea <carlos.chinea@nokia.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ */
+#include <linux/hsi/hsi.h>
+#include <linux/compiler.h>
+#include <linux/rwsem.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/kobject.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include "hsi_core.h"
+
+static struct device_type hsi_ctrl = {
+	.name	= "hsi_controller",
+};
+
+static struct device_type hsi_cl = {
+	.name	= "hsi_client",
+};
+
+static struct device_type hsi_port = {
+	.name	= "hsi_port",
+};
+
+static ssize_t modalias_show(struct device *dev,
+			struct device_attribute *a __maybe_unused, char *buf)
+{
+	return sprintf(buf, "hsi:%s\n", dev_name(dev));
+}
+
+static struct device_attribute hsi_bus_dev_attrs[] = {
+	__ATTR_RO(modalias),
+	__ATTR_NULL,
+};
+
+static int hsi_bus_uevent(struct device *dev, struct kobj_uevent_env *env)
+{
+	if (dev->type == &hsi_cl)
+		add_uevent_var(env, "MODALIAS=hsi:%s", dev_name(dev));
+
+	return 0;
+}
+
+static int hsi_bus_match(struct device *dev, struct device_driver *driver)
+{
+	return strcmp(dev_name(dev), driver->name) == 0;
+}
+
+static struct bus_type hsi_bus_type = {
+	.name		= "hsi",
+	.dev_attrs	= hsi_bus_dev_attrs,
+	.match		= hsi_bus_match,
+	.uevent		= hsi_bus_uevent,
+};
+
+static void hsi_client_release(struct device *dev)
+{
+	kfree(to_hsi_client(dev));
+}
+
+static void hsi_new_client(struct hsi_port *port, struct hsi_board_info *info)
+{
+	struct hsi_client *cl;
+	unsigned long flags;
+
+	cl = kzalloc(sizeof(*cl), GFP_KERNEL);
+	if (!cl)
+		return;
+	cl->device.type = &hsi_cl;
+	cl->tx_cfg = info->tx_cfg;
+	cl->rx_cfg = info->rx_cfg;
+	cl->device.bus = &hsi_bus_type;
+	cl->device.parent = &port->device;
+	cl->device.release = hsi_client_release;
+	dev_set_name(&cl->device, info->name);
+	cl->device.platform_data = info->platform_data;
+	spin_lock_irqsave(&port->clock, flags);
+	list_add_tail(&cl->link, &port->clients);
+	spin_unlock_irqrestore(&port->clock, flags);
+	if (info->archdata)
+		cl->device.archdata = *info->archdata;
+	if (device_register(&cl->device) < 0) {
+		pr_err("hsi: failed to register client: %s\n", info->name);
+		kfree(cl);
+	}
+}
+
+static void hsi_scan_board_info(struct hsi_controller *hsi)
+{
+	struct hsi_cl_info *cl_info;
+	struct hsi_port	*p;
+
+	list_for_each_entry(cl_info, &hsi_board_list, list)
+		if (cl_info->info.hsi_id == hsi->id) {
+			p = hsi_find_port_num(hsi, cl_info->info.port);
+			if (!p)
+				continue;
+			hsi_new_client(p, &cl_info->info);
+		}
+}
+
+static int hsi_remove_client(struct device *dev, void *data __maybe_unused)
+{
+	struct hsi_client *cl = to_hsi_client(dev);
+	struct hsi_port *port = to_hsi_port(dev->parent);
+	unsigned long flags;
+
+	spin_lock_irqsave(&port->clock, flags);
+	list_del(&cl->link);
+	spin_unlock_irqrestore(&port->clock, flags);
+	device_unregister(dev);
+
+	return 0;
+}
+
+static int hsi_remove_port(struct device *dev, void *data __maybe_unused)
+{
+	device_for_each_child(dev, NULL, hsi_remove_client);
+	device_unregister(dev);
+
+	return 0;
+}
+
+static void hsi_controller_release(struct device *dev __maybe_unused)
+{
+}
+
+static void hsi_port_release(struct device *dev __maybe_unused)
+{
+}
+
+/**
+ * hsi_unregister_controller - Unregister an HSI controller
+ * @hsi: The HSI controller to unregister
+ */
+void hsi_unregister_controller(struct hsi_controller *hsi)
+{
+	device_for_each_child(&hsi->device, NULL, hsi_remove_port);
+	device_unregister(&hsi->device);
+}
+EXPORT_SYMBOL_GPL(hsi_unregister_controller);
+
+/**
+ * hsi_register_controller - Register an HSI controller and its ports
+ * @hsi: The HSI controller to register
+ *
+ * Returns -errno on failure, 0 on success.
+ */
+int hsi_register_controller(struct hsi_controller *hsi)
+{
+	unsigned int i;
+	int err;
+
+	hsi->device.type = &hsi_ctrl;
+	hsi->device.bus = &hsi_bus_type;
+	hsi->device.release = hsi_controller_release;
+	err = device_register(&hsi->device);
+	if (err < 0)
+		return err;
+	for (i = 0; i < hsi->num_ports; i++) {
+		hsi->port[i].device.parent = &hsi->device;
+		hsi->port[i].device.bus = &hsi_bus_type;
+		hsi->port[i].device.release = hsi_port_release;
+		hsi->port[i].device.type = &hsi_port;
+		INIT_LIST_HEAD(&hsi->port[i].clients);
+		spin_lock_init(&hsi->port[i].clock);
+		err = device_register(&hsi->port[i].device);
+		if (err < 0)
+			goto out;
+	}
+	/* Populate HSI bus with HSI clients */
+	hsi_scan_board_info(hsi);
+
+	return 0;
+out:
+	hsi_unregister_controller(hsi);
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(hsi_register_controller);
+
+/**
+ * hsi_register_client_driver - Register an HSI client to the HSI bus
+ * @drv: HSI client driver to register
+ *
+ * Returns -errno on failure, 0 on success.
+ */
+int hsi_register_client_driver(struct hsi_client_driver *drv)
+{
+	drv->driver.bus = &hsi_bus_type;
+
+	return driver_register(&drv->driver);
+}
+EXPORT_SYMBOL_GPL(hsi_register_client_driver);
+
+static inline int hsi_dummy_msg(struct hsi_msg *msg __maybe_unused)
+{
+	return 0;
+}
+
+static inline int hsi_dummy_cl(struct hsi_client *cl __maybe_unused)
+{
+	return 0;
+}
+
+/**
+ * hsi_alloc_controller - Allocate an HSI controller and its ports
+ * @n_ports: Number of ports on the HSI controller
+ * @flags: Kernel allocation flags
+ *
+ * Return NULL on failure or a pointer to an hsi_controller on success.
+ */
+struct hsi_controller *hsi_alloc_controller(unsigned int n_ports, gfp_t flags)
+{
+	struct hsi_controller	*hsi;
+	struct hsi_port		*port;
+	unsigned int		i;
+
+	if (!n_ports)
+		return NULL;
+
+	port = kzalloc(sizeof(*port)*n_ports, flags);
+	if (!port)
+		return NULL;
+	hsi = kzalloc(sizeof(*hsi), flags);
+	if (!hsi)
+		goto out;
+	for (i = 0; i < n_ports; i++) {
+		dev_set_name(&port[i].device, "port%d", i);
+		port[i].num = i;
+		port[i].async = hsi_dummy_msg;
+		port[i].setup = hsi_dummy_cl;
+		port[i].flush = hsi_dummy_cl;
+		port[i].start_tx = hsi_dummy_cl;
+		port[i].stop_tx = hsi_dummy_cl;
+		port[i].release = hsi_dummy_cl;
+		mutex_init(&port[i].lock);
+	}
+	hsi->num_ports = n_ports;
+	hsi->port = port;
+
+	return hsi;
+out:
+	kfree(port);
+
+	return NULL;
+}
+EXPORT_SYMBOL_GPL(hsi_alloc_controller);
+
+/**
+ * hsi_free_controller - Free an HSI controller
+ * @hsi: Pointer to HSI controller
+ */
+void hsi_free_controller(struct hsi_controller *hsi)
+{
+	if (!hsi)
+		return;
+
+	kfree(hsi->port);
+	kfree(hsi);
+}
+EXPORT_SYMBOL_GPL(hsi_free_controller);
+
+/**
+ * hsi_free_msg - Free an HSI message
+ * @msg: Pointer to the HSI message
+ *
+ * The client is responsible for freeing the buffers pointed to by the scatterlist.
+ */
+void hsi_free_msg(struct hsi_msg *msg)
+{
+	if (!msg)
+		return;
+	sg_free_table(&msg->sgt);
+	kfree(msg);
+}
+EXPORT_SYMBOL_GPL(hsi_free_msg);
+
+/**
+ * hsi_alloc_msg - Allocate an HSI message
+ * @nents: Number of memory entries
+ * @flags: Kernel allocation flags
+ *
+ * nents can be 0. This mainly makes sense for read transfers.
+ * In that case, HSI drivers will call the complete callback when
+ * there is data to be read without consuming it.
+ *
+ * Return NULL on failure or a pointer to an hsi_msg on success.
+ */
+struct hsi_msg *hsi_alloc_msg(unsigned int nents, gfp_t flags)
+{
+	struct hsi_msg *msg;
+	int err;
+
+	msg = kzalloc(sizeof(*msg), flags);
+	if (!msg)
+		return NULL;
+
+	if (!nents)
+		return msg;
+
+	err = sg_alloc_table(&msg->sgt, nents, flags);
+	if (unlikely(err)) {
+		kfree(msg);
+		msg = NULL;
+	}
+
+	return msg;
+}
+EXPORT_SYMBOL_GPL(hsi_alloc_msg);
+
+/**
+ * hsi_async - Submit an HSI transfer to the controller
+ * @cl: HSI client sending the transfer
+ * @msg: The HSI transfer passed to controller
+ *
+ * The HSI message must have the channel, ttype, complete and destructor
+ * fields set beforehand. If nents > 0 then the client has to initialize
+ * also the scatterlists to point to the buffers to write to or read from.
+ *
+ * HSI controllers rely on pre-allocated buffers from their clients and they
+ * do not allocate buffers on their own.
+ *
+ * Once the HSI message transfer finishes, the HSI controller calls the
+ * complete callback with the status and actual_len fields of the HSI message
+ * updated. The complete callback can be called before returning from
+ * hsi_async.
+ *
+ * Returns -errno on failure or 0 on success
+ */
+int hsi_async(struct hsi_client *cl, struct hsi_msg *msg)
+{
+	struct hsi_port *port = hsi_get_port(cl);
+
+	if (!hsi_port_claimed(cl))
+		return -EACCES;
+
+	WARN_ON_ONCE(!msg->destructor || !msg->complete);
+	msg->cl = cl;
+
+	return port->async(msg);
+}
+EXPORT_SYMBOL_GPL(hsi_async);
+
+/**
+ * hsi_claim_port - Claim the HSI client's port
+ * @cl: HSI client that wants to claim its port
+ * @share: Flag to indicate if the client wants to share the port or not.
+ *
+ * Returns -errno on failure, 0 on success.
+ */
+int hsi_claim_port(struct hsi_client *cl, unsigned int share)
+{
+	struct hsi_port *port = hsi_get_port(cl);
+	int err = 0;
+
+	mutex_lock(&port->lock);
+	if ((port->claimed) && (!port->shared || !share)) {
+		err = -EBUSY;
+		goto out;
+	}
+	if (!try_module_get(to_hsi_controller(port->device.parent)->owner)) {
+		err = -ENODEV;
+		goto out;
+	}
+	port->claimed++;
+	port->shared = !!share;
+	cl->pclaimed = 1;
+out:
+	mutex_unlock(&port->lock);
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(hsi_claim_port);
+
+/**
+ * hsi_release_port - Release the HSI client's port
+ * @cl: HSI client which previously claimed its port
+ */
+void hsi_release_port(struct hsi_client *cl)
+{
+	struct hsi_port *port = hsi_get_port(cl);
+
+	mutex_lock(&port->lock);
+	/* Allow HW driver to do some cleanup */
+	port->release(cl);
+	if (cl->pclaimed)
+		port->claimed--;
+	BUG_ON(port->claimed < 0);
+	cl->pclaimed = 0;
+	if (!port->claimed)
+		port->shared = 0;
+	module_put(to_hsi_controller(port->device.parent)->owner);
+	mutex_unlock(&port->lock);
+}
+EXPORT_SYMBOL_GPL(hsi_release_port);
+
+static int hsi_start_rx(struct hsi_client *cl, void *data __maybe_unused)
+{
+	if (cl->hsi_start_rx)
+		(*cl->hsi_start_rx)(cl);
+
+	return 0;
+}
+
+static int hsi_stop_rx(struct hsi_client *cl, void *data __maybe_unused)
+{
+	if (cl->hsi_stop_rx)
+		(*cl->hsi_stop_rx)(cl);
+
+	return 0;
+}
+
+static int hsi_port_for_each_client(struct hsi_port *port, void *data,
+				int (*fn)(struct hsi_client *cl, void *data))
+{
+	struct hsi_client *cl;
+
+	spin_lock(&port->clock);
+	list_for_each_entry(cl, &port->clients, link) {
+		spin_unlock(&port->clock);
+		(*fn)(cl, data);
+		spin_lock(&port->clock);
+	}
+	spin_unlock(&port->clock);
+
+	return 0;
+}
+
+/**
+ * hsi_event - Notifies clients about port events
+ * @port: Port where the event occurred
+ * @event: The event type
+ *
+ * Clients should not be concerned about wake line behavior. However, due
+ * to a race condition in the HSI HW protocol, clients need to be notified
+ * about wake line changes, so they can implement a workaround for it.
+ *
+ * Events:
+ * HSI_EVENT_START_RX - Incoming wake line high
+ * HSI_EVENT_STOP_RX - Incoming wake line low
+ */
+void hsi_event(struct hsi_port *port, unsigned int event)
+{
+	int (*fn)(struct hsi_client *cl, void *data);
+
+	switch (event) {
+	case HSI_EVENT_START_RX:
+		fn = hsi_start_rx;
+		break;
+	case HSI_EVENT_STOP_RX:
+		fn = hsi_stop_rx;
+		break;
+	default:
+		return;
+	}
+	hsi_port_for_each_client(port, NULL, fn);
+}
+EXPORT_SYMBOL_GPL(hsi_event);
+
+static int __init hsi_init(void)
+{
+	return bus_register(&hsi_bus_type);
+}
+postcore_initcall(hsi_init);
+
+static void __exit hsi_exit(void)
+{
+	bus_unregister(&hsi_bus_type);
+}
+module_exit(hsi_exit);
+
+MODULE_AUTHOR("Carlos Chinea <carlos.chinea@nokia.com>");
+MODULE_DESCRIPTION("High-speed Synchronous Serial Interface (HSI) framework");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/hsi/hsi_boardinfo.c b/drivers/hsi/hsi_boardinfo.c
new file mode 100644
index 0000000..3a9e4e8
--- /dev/null
+++ b/drivers/hsi/hsi_boardinfo.c
@@ -0,0 +1,64 @@
+/*
+ * hsi_boardinfo.c
+ *
+ * HSI clients registration interface
+ *
+ * Copyright (C) 2010 Nokia Corporation. All rights reserved.
+ *
+ * Contact: Carlos Chinea <carlos.chinea@nokia.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ */
+#include <linux/hsi/hsi.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include "hsi_core.h"
+
+/*
+ * hsi_board_list is only used internally by the HSI framework.
+ * No one else is allowed to make use of it.
+ */
+LIST_HEAD(hsi_board_list);
+EXPORT_SYMBOL_GPL(hsi_board_list);
+
+/**
+ * hsi_register_board_info - Register HSI clients information
+ * @info: Array of HSI clients on the board
+ * @len: Length of the array
+ *
+ * HSI clients are statically declared and registered on board files.
+ *
+ * HSI clients will be automatically registered to the HSI bus once the
+ * controller and the port where the client wishes to attach are registered
+ * to it.
+ *
+ * Return -errno on failure, 0 on success.
+ */
+int __init hsi_register_board_info(struct hsi_board_info const *info,
+							unsigned int len)
+{
+	struct hsi_cl_info *cl_info;
+
+	cl_info = kzalloc(sizeof(*cl_info) * len, GFP_KERNEL);
+	if (!cl_info)
+		return -ENOMEM;
+
+	for (; len; len--, info++, cl_info++) {
+		cl_info->info = *info;
+		list_add_tail(&cl_info->list, &hsi_board_list);
+	}
+
+	return 0;
+}
diff --git a/drivers/hsi/hsi_core.h b/drivers/hsi/hsi_core.h
new file mode 100644
index 0000000..8005509
--- /dev/null
+++ b/drivers/hsi/hsi_core.h
@@ -0,0 +1,37 @@
+/*
+ * hsi_core.h
+ *
+ * HSI framework internal interfaces.
+ *
+ * Copyright (C) 2010 Nokia Corporation. All rights reserved.
+ *
+ * Contact: Carlos Chinea <carlos.chinea@nokia.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ */
+
+#ifndef __LINUX_HSI_CORE_H__
+#define __LINUX_HSI_CORE_H__
+
+#include <linux/hsi/hsi.h>
+
+struct hsi_cl_info {
+	struct list_head	list;
+	struct hsi_board_info	info;
+};
+
+extern struct list_head hsi_board_list;
+
+#endif /* __LINUX_HSI_CORE_H__ */
diff --git a/include/linux/hsi/hsi.h b/include/linux/hsi/hsi.h
new file mode 100644
index 0000000..304a1cc
--- /dev/null
+++ b/include/linux/hsi/hsi.h
@@ -0,0 +1,412 @@
+/*
+ * hsi.h
+ *
+ * HSI core header file.
+ *
+ * Copyright (C) 2010 Nokia Corporation. All rights reserved.
+ *
+ * Contact: Carlos Chinea <carlos.chinea@nokia.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ */
+
+#ifndef __LINUX_HSI_H__
+#define __LINUX_HSI_H__
+
+#include <linux/device.h>
+#include <linux/mutex.h>
+#include <linux/scatterlist.h>
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <linux/module.h>
+
+/* HSI message ttype */
+#define HSI_MSG_READ	0
+#define HSI_MSG_WRITE	1
+
+/* HSI configuration values */
+enum {
+	HSI_MODE_STREAM	= 1,
+	HSI_MODE_FRAME,
+};
+
+enum {
+	HSI_FLOW_SYNC,	/* Synchronized flow */
+	HSI_FLOW_PIPE,	/* Pipelined flow */
+};
+
+enum {
+	HSI_ARB_RR,	/* Round-robin arbitration */
+	HSI_ARB_PRIO,	/* Channel priority arbitration */
+};
+
+#define HSI_MAX_CHANNELS	16
+
+/* HSI message status codes */
+enum {
+	HSI_STATUS_COMPLETED,	/* Message transfer is completed */
+	HSI_STATUS_PENDING,	/* Message pending to be read/written (POLL) */
+	HSI_STATUS_PROCEEDING,	/* Message transfer is ongoing */
+	HSI_STATUS_QUEUED,	/* Message waiting to be served */
+	HSI_STATUS_ERROR,	/* Error when message transfer was ongoing */
+};
+
+/* HSI port event codes */
+enum {
+	HSI_EVENT_START_RX,
+	HSI_EVENT_STOP_RX,
+};
+
+/**
+ * struct hsi_config - Configuration for RX/TX HSI modules
+ * @mode: Bit transmission mode (STREAM or FRAME)
+ * @channels: Number of channels to use [1..16]
+ * @speed: Max bit transmission speed (Kbit/s)
+ * @flow: RX flow type (SYNCHRONIZED or PIPELINE)
+ * @arb_mode: Arbitration mode for TX frame (Round robin, priority)
+ */
+struct hsi_config {
+	unsigned int	mode;
+	unsigned int	channels;
+	unsigned int	speed;
+	union {
+		unsigned int	flow;		/* RX only */
+		unsigned int	arb_mode;	/* TX only */
+	};
+};
+
+/**
+ * struct hsi_board_info - HSI client board info
+ * @name: Name for the HSI device
+ * @hsi_id: HSI controller id where the client sits
+ * @port: Port number in the controller where the client sits
+ * @tx_cfg: HSI TX configuration
+ * @rx_cfg: HSI RX configuration
+ * @platform_data: Platform related data
+ * @archdata: Architecture-dependent device data
+ */
+struct hsi_board_info {
+	const char		*name;
+	unsigned int		hsi_id;
+	unsigned int		port;
+	struct hsi_config	tx_cfg;
+	struct hsi_config	rx_cfg;
+	void			*platform_data;
+	struct dev_archdata	*archdata;
+};
+
+#ifdef CONFIG_HSI_BOARDINFO
+extern int hsi_register_board_info(struct hsi_board_info const *info,
+							unsigned int len);
+#else
+static inline int hsi_register_board_info(struct hsi_board_info const *info,
+							unsigned int len)
+{
+	return 0;
+}
+#endif /* CONFIG_HSI_BOARDINFO */
+
+/**
+ * struct hsi_client - HSI client attached to an HSI port
+ * @device: Driver model representation of the device
+ * @tx_cfg: HSI TX configuration
+ * @rx_cfg: HSI RX configuration
+ * @hsi_start_rx: Called after incoming wake line goes high
+ * @hsi_stop_rx: Called after incoming wake line goes low
+ */
+struct hsi_client {
+	struct device		device;
+	struct hsi_config	tx_cfg;
+	struct hsi_config	rx_cfg;
+	void			(*hsi_start_rx)(struct hsi_client *cl);
+	void			(*hsi_stop_rx)(struct hsi_client *cl);
+	/* private: */
+	unsigned int		pclaimed:1;
+	struct list_head	link;
+};
+
+#define to_hsi_client(dev) container_of(dev, struct hsi_client, device)
+
+static inline void hsi_client_set_drvdata(struct hsi_client *cl, void *data)
+{
+	dev_set_drvdata(&cl->device, data);
+}
+
+static inline void *hsi_client_drvdata(struct hsi_client *cl)
+{
+	return dev_get_drvdata(&cl->device);
+}
+
+/**
+ * struct hsi_client_driver - Driver associated to an HSI client
+ * @driver: Driver model representation of the driver
+ */
+struct hsi_client_driver {
+	struct device_driver	driver;
+};
+
+#define to_hsi_client_driver(drv) container_of(drv, struct hsi_client_driver,\
+									driver)
+
+int hsi_register_client_driver(struct hsi_client_driver *drv);
+
+static inline void hsi_unregister_client_driver(struct hsi_client_driver *drv)
+{
+	driver_unregister(&drv->driver);
+}
+
+/**
+ * struct hsi_msg - HSI message descriptor
+ * @link: Free to use by the current descriptor owner
+ * @cl: HSI device client that issues the transfer
+ * @sgt: Head of the scatterlist array
+ * @context: Client context data associated to the transfer
+ * @complete: Transfer completion callback
+ * @destructor: Destructor to free resources when flushing
+ * @status: Status of the transfer when completed
+ * @actual_len: Actual length of data transferred on completion
+ * @channel: Channel where to TX/RX the message
+ * @ttype: Transfer type (TX if set, RX otherwise)
+ * @break_frame: if true HSI will send/receive a break frame. Data buffers are
+ *		ignored in the request.
+ */
+struct hsi_msg {
+	struct list_head	link;
+	struct hsi_client	*cl;
+	struct sg_table		sgt;
+	void			*context;
+
+	void			(*complete)(struct hsi_msg *msg);
+	void			(*destructor)(struct hsi_msg *msg);
+
+	int			status;
+	unsigned int		actual_len;
+	unsigned int		channel;
+	unsigned int		ttype:1;
+	unsigned int		break_frame:1;
+};
+
+struct hsi_msg *hsi_alloc_msg(unsigned int n_frag, gfp_t flags);
+void hsi_free_msg(struct hsi_msg *msg);
+
+/**
+ * struct hsi_port - HSI port device
+ * @device: Driver model representation of the device
+ * @tx_cfg: Current TX path configuration
+ * @rx_cfg: Current RX path configuration
+ * @num: Port number
+ * @shared: Set when port can be shared by different clients
+ * @claimed: Reference count of clients which claimed the port
+ * @lock: Serialize port claim
+ * @async: Asynchronous transfer callback
+ * @setup: Callback to set the HSI client configuration
+ * @flush: Callback to clean the HW state and destroy all pending transfers
+ * @start_tx: Callback to inform that a client wants to TX data
+ * @stop_tx: Callback to inform that a client no longer wishes to TX data
+ * @release: Callback to inform that a client no longer uses the port
+ * @clients: List of hsi_clients using the port.
+ * @clock: Lock to serialize access to the clients list.
+ */
+struct hsi_port {
+	struct device			device;
+	struct hsi_config		tx_cfg;
+	struct hsi_config		rx_cfg;
+	unsigned int			num;
+	unsigned int			shared:1;
+	int				claimed;
+	struct mutex			lock;
+	int				(*async)(struct hsi_msg *msg);
+	int				(*setup)(struct hsi_client *cl);
+	int				(*flush)(struct hsi_client *cl);
+	int				(*start_tx)(struct hsi_client *cl);
+	int				(*stop_tx)(struct hsi_client *cl);
+	int				(*release)(struct hsi_client *cl);
+	struct list_head		clients;
+	spinlock_t			clock;
+};
+
+#define to_hsi_port(dev) container_of(dev, struct hsi_port, device)
+#define hsi_get_port(cl) to_hsi_port((cl)->device.parent)
+
+void hsi_event(struct hsi_port *port, unsigned int event);
+int hsi_claim_port(struct hsi_client *cl, unsigned int share);
+void hsi_release_port(struct hsi_client *cl);
+
+static inline int hsi_port_claimed(struct hsi_client *cl)
+{
+	return cl->pclaimed;
+}
+
+static inline void hsi_port_set_drvdata(struct hsi_port *port, void *data)
+{
+	dev_set_drvdata(&port->device, data);
+}
+
+static inline void *hsi_port_drvdata(struct hsi_port *port)
+{
+	return dev_get_drvdata(&port->device);
+}
+
+/**
+ * struct hsi_controller - HSI controller device
+ * @device: Driver model representation of the device
+ * @owner: Pointer to the module owning the controller
+ * @id: HSI controller ID
+ * @num_ports: Number of ports in the HSI controller
+ * @port: Array of HSI ports
+ */
+struct hsi_controller {
+	struct device		device;
+	struct module		*owner;
+	unsigned int		id;
+	unsigned int		num_ports;
+	struct hsi_port		*port;
+};
+
+#define to_hsi_controller(dev) container_of(dev, struct hsi_controller, device)
+
+struct hsi_controller *hsi_alloc_controller(unsigned int n_ports, gfp_t flags);
+void hsi_free_controller(struct hsi_controller *hsi);
+int hsi_register_controller(struct hsi_controller *hsi);
+void hsi_unregister_controller(struct hsi_controller *hsi);
+
+static inline void hsi_controller_set_drvdata(struct hsi_controller *hsi,
+								void *data)
+{
+	dev_set_drvdata(&hsi->device, data);
+}
+
+static inline void *hsi_controller_drvdata(struct hsi_controller *hsi)
+{
+	return dev_get_drvdata(&hsi->device);
+}
+
+static inline struct hsi_port *hsi_find_port_num(struct hsi_controller *hsi,
+							unsigned int num)
+{
+	return (num < hsi->num_ports) ? &hsi->port[num] : NULL;
+}
+
+/*
+ * API for HSI clients
+ */
+int hsi_async(struct hsi_client *cl, struct hsi_msg *msg);
+
+/**
+ * hsi_id - Get HSI controller ID associated to a client
+ * @cl: Pointer to a HSI client
+ *
+ * Return the id of the controller to which the client is attached
+ */
+static inline unsigned int hsi_id(struct hsi_client *cl)
+{
+	return	to_hsi_controller(cl->device.parent->parent)->id;
+}
+
+/**
+ * hsi_port_id - Gets the port number a client is attached to
+ * @cl: Pointer to HSI client
+ *
+ * Return the port number associated with the client
+ */
+static inline unsigned int hsi_port_id(struct hsi_client *cl)
+{
+	return	to_hsi_port(cl->device.parent)->num;
+}
+
+/**
+ * hsi_setup - Configure the client's port
+ * @cl: Pointer to the HSI client
+ *
+ * When sharing ports, clients should either rely on a single
+ * client setup or have the same setup for all of them.
+ *
+ * Return -errno on failure, 0 on success
+ */
+static inline int hsi_setup(struct hsi_client *cl)
+{
+	if (!hsi_port_claimed(cl))
+		return -EACCES;
+	return	hsi_get_port(cl)->setup(cl);
+}
+
+/**
+ * hsi_flush - Flush all pending transactions on the client's port
+ * @cl: Pointer to the HSI client
+ *
+ * This function will destroy all pending hsi_msg in the port and reset
+ * the HW port so it is ready to receive and transmit from a clean state.
+ *
+ * Return -errno on failure, 0 on success
+ */
+static inline int hsi_flush(struct hsi_client *cl)
+{
+	if (!hsi_port_claimed(cl))
+		return -EACCES;
+	return hsi_get_port(cl)->flush(cl);
+}
+
+/**
+ * hsi_async_read - Submit a read transfer
+ * @cl: Pointer to the HSI client
+ * @msg: HSI message descriptor of the transfer
+ *
+ * Return -errno on failure, 0 on success
+ */
+static inline int hsi_async_read(struct hsi_client *cl, struct hsi_msg *msg)
+{
+	msg->ttype = HSI_MSG_READ;
+	return hsi_async(cl, msg);
+}
+
+/**
+ * hsi_async_write - Submit a write transfer
+ * @cl: Pointer to the HSI client
+ * @msg: HSI message descriptor of the transfer
+ *
+ * Return -errno on failure, 0 on success
+ */
+static inline int hsi_async_write(struct hsi_client *cl, struct hsi_msg *msg)
+{
+	msg->ttype = HSI_MSG_WRITE;
+	return hsi_async(cl, msg);
+}
+
+/**
+ * hsi_start_tx - Signal the port that the client wants to start a TX
+ * @cl: Pointer to the HSI client
+ *
+ * Return -errno on failure, 0 on success
+ */
+static inline int hsi_start_tx(struct hsi_client *cl)
+{
+	if (!hsi_port_claimed(cl))
+		return -EACCES;
+	return hsi_get_port(cl)->start_tx(cl);
+}
+
+/**
+ * hsi_stop_tx - Signal the port that the client no longer wants to transmit
+ * @cl: Pointer to the HSI client
+ *
+ * Return -errno on failure, 0 on success
+ */
+static inline int hsi_stop_tx(struct hsi_client *cl)
+{
+	if (!hsi_port_claimed(cl))
+		return -EACCES;
+	return hsi_get_port(cl)->stop_tx(cl);
+}
+#endif /* __LINUX_HSI_H__ */
-- 
1.7.1



* [RFC PATCHv5 2/7] HSI: omap_ssi: Introducing OMAP SSI driver
  2011-06-10 13:38 [RFC PATCHv5 0/7] HSI framework and drivers Carlos Chinea
  2011-06-10 13:38 ` [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework Carlos Chinea
@ 2011-06-10 13:38 ` Carlos Chinea
  2011-06-13 13:21   ` Tony Lindgren
  2011-06-13 20:21   ` Kevin Hilman
  2011-06-10 13:38 ` [RFC PATCHv5 3/7] HSI: omap_ssi: Add OMAP SSI to the kernel configuration Carlos Chinea
                   ` (7 subsequent siblings)
  9 siblings, 2 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-06-10 13:38 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-omap

Introduces the OMAP SSI driver into the kernel.

The Synchronous Serial Interface (SSI) is a legacy version
of HSI. As in the case of HSI, it is mainly used to connect
Application engines (APE) with cellular modem engines (CMT)
in cellular handsets.

It provides multichannel, full-duplex, multi-core communication
with no reference clock. The OMAP SSI block is capable of reaching
speeds of 110 Mbit/s.
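
For reference only (not part of this patch), a board file would wire
the driver up roughly as follows; the client name, cawake GPIO number
and configuration values below are placeholders for this sketch:

	#include <linux/kernel.h>
	#include <linux/hsi/hsi.h>
	#include <plat/ssi.h>

	static struct omap_ssi_board_config board_ssi_config = {
		.num_ports	= 1,
		.cawake_gpio	= { 151 },	/* board specific */
	};

	static struct hsi_board_info board_ssi_clients[] = {
		{
			.name	= "foo",	/* matched against the client driver name */
			.hsi_id	= 0,		/* omap_ssi controller id */
			.port	= 0,
			.tx_cfg	= {
				.mode		= HSI_MODE_FRAME,
				.channels	= 4,
				.speed		= 55000,	/* Kbit/s */
				.arb_mode	= HSI_ARB_RR,
			},
			.rx_cfg	= {
				.mode		= HSI_MODE_FRAME,
				.channels	= 4,
				.flow		= HSI_FLOW_SYNC,
			},
		},
	};

	static void __init board_ssi_init(void)
	{
		/* Called from the board init code */
		omap_ssi_config(&board_ssi_config);
		hsi_register_board_info(board_ssi_clients,
					ARRAY_SIZE(board_ssi_clients));
	}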

Signed-off-by: Carlos Chinea <carlos.chinea@nokia.com>
---
 arch/arm/mach-omap2/ssi.c             |  134 +++
 arch/arm/plat-omap/include/plat/ssi.h |  204 ++++
 drivers/hsi/controllers/omap_ssi.c    | 1852 +++++++++++++++++++++++++++++++++
 3 files changed, 2190 insertions(+), 0 deletions(-)
 create mode 100644 arch/arm/mach-omap2/ssi.c
 create mode 100644 arch/arm/plat-omap/include/plat/ssi.h
 create mode 100644 drivers/hsi/controllers/omap_ssi.c

diff --git a/arch/arm/mach-omap2/ssi.c b/arch/arm/mach-omap2/ssi.c
new file mode 100644
index 0000000..e822a77
--- /dev/null
+++ b/arch/arm/mach-omap2/ssi.c
@@ -0,0 +1,134 @@
+/*
+ * linux/arch/arm/mach-omap2/ssi.c
+ *
+ * Copyright (C) 2010 Nokia Corporation. All rights reserved.
+ *
+ * Contact: Carlos Chinea <carlos.chinea@nokia.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/err.h>
+#include <linux/gpio.h>
+#include <linux/platform_device.h>
+#include <plat/omap-pm.h>
+#include <plat/ssi.h>
+
+static struct omap_ssi_platform_data ssi_pdata = {
+	.num_ports			= SSI_NUM_PORTS,
+	.get_dev_context_loss_count	= omap_pm_get_dev_context_loss_count,
+};
+
+static struct resource ssi_resources[] = {
+	/* SSI controller */
+	[0] = {
+		.start	= 0x48058000,
+		.end	= 0x48058fff,
+		.name	= "omap_ssi_sys",
+		.flags	= IORESOURCE_MEM,
+	},
+	/* GDD */
+	[1] = {
+		.start	= 0x48059000,
+		.end	= 0x48059fff,
+		.name	= "omap_ssi_gdd",
+		.flags	= IORESOURCE_MEM,
+	},
+	[2] = {
+		.start	= 71,
+		.end	= 71,
+		.name	= "ssi_gdd",
+		.flags	= IORESOURCE_IRQ,
+	},
+	/* SSI port 1 */
+	[3] = {
+		.start	= 0x4805a000,
+		.end	= 0x4805a7ff,
+		.name	= "omap_ssi_sst1",
+		.flags	= IORESOURCE_MEM,
+	},
+	[4] = {
+		.start	= 0x4805a800,
+		.end	= 0x4805afff,
+		.name	= "omap_ssi_ssr1",
+		.flags	= IORESOURCE_MEM,
+	},
+	[5] = {
+		.start	= 67,
+		.end	= 67,
+		.name	= "ssi_p1_mpu_irq0",
+		.flags	= IORESOURCE_IRQ,
+	},
+	[6] = {
+		.start	= 68,
+		.end	= 68,
+		.name	= "ssi_p1_mpu_irq1",
+		.flags	= IORESOURCE_IRQ,
+	},
+	[7] = {
+		.start	= 0,
+		.end	= 0,
+		.name	= "ssi_p1_cawake",
+		.flags	= IORESOURCE_IRQ | IORESOURCE_UNSET,
+	},
+};
+
+static struct platform_device ssi_pdev = {
+	.name		= "omap_ssi",
+	.id		= 0,
+	.num_resources	= ARRAY_SIZE(ssi_resources),
+	.resource	= ssi_resources,
+	.dev		= {
+				.platform_data	= &ssi_pdata,
+	},
+};
+
+int __init omap_ssi_config(struct omap_ssi_board_config *ssi_config)
+{
+	unsigned int port, offset, cawake_gpio;
+	int err;
+
+	ssi_pdata.num_ports = ssi_config->num_ports;
+	for (port = 0, offset = 7; port < ssi_config->num_ports;
+							port++, offset += 5) {
+		cawake_gpio = ssi_config->cawake_gpio[port];
+		if (!cawake_gpio)
+			continue; /* Nothing to do */
+		err = gpio_request(cawake_gpio, "cawake");
+		if (err < 0)
+			goto rback;
+		gpio_direction_input(cawake_gpio);
+		ssi_resources[offset].start = gpio_to_irq(cawake_gpio);
+		ssi_resources[offset].flags &= ~IORESOURCE_UNSET;
+		ssi_resources[offset].flags |= IORESOURCE_IRQ_HIGHEDGE |
+							IORESOURCE_IRQ_LOWEDGE;
+	}
+
+	return 0;
+rback:
+	dev_err(&ssi_pdev.dev, "Request cawake (gpio%d) failed\n", cawake_gpio);
+	while (port > 0)
+		gpio_free(ssi_config->cawake_gpio[--port]);
+
+	return err;
+}
+
+static int __init ssi_init(void)
+{
+	return platform_device_register(&ssi_pdev);
+}
+subsys_initcall(ssi_init);
diff --git a/arch/arm/plat-omap/include/plat/ssi.h b/arch/arm/plat-omap/include/plat/ssi.h
new file mode 100644
index 0000000..52f0526
--- /dev/null
+++ b/arch/arm/plat-omap/include/plat/ssi.h
@@ -0,0 +1,204 @@
+/*
+ * plat/ssi.h
+ *
+ * Hardware definitions for SSI.
+ *
+ * Copyright (C) 2010 Nokia Corporation. All rights reserved.
+ *
+ * Contact: Carlos Chinea <carlos.chinea@nokia.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ */
+
+#ifndef __OMAP_SSI_REGS_H__
+#define __OMAP_SSI_REGS_H__
+
+#define SSI_NUM_PORTS	1
+/*
+ * SSI SYS registers
+ */
+#define SSI_REVISION_REG		0
+#	define SSI_REV_MAJOR		0xf0
+#	define SSI_REV_MINOR		0xf
+#define SSI_SYSCONFIG_REG		0x10
+#	define SSI_AUTOIDLE		(1 << 0)
+#	define SSI_SOFTRESET		(1 << 1)
+#	define SSI_SIDLEMODE_FORCE	0
+#	define SSI_SIDLEMODE_NO		(1 << 3)
+#	define SSI_SIDLEMODE_SMART	(1 << 4)
+#	define SSI_SIDLEMODE_MASK	0x18
+#	define SSI_MIDLEMODE_FORCE	0
+#	define SSI_MIDLEMODE_NO		(1 << 12)
+#	define SSI_MIDLEMODE_SMART	(1 << 13)
+#	define SSI_MIDLEMODE_MASK	0x3000
+#define SSI_SYSSTATUS_REG		0x14
+#	define SSI_RESETDONE		1
+#define SSI_MPU_STATUS_REG(port, irq)	(0x808 + ((port) * 0x10) + ((irq) * 2))
+#define SSI_MPU_ENABLE_REG(port, irq)	(0x80c + ((port) * 0x10) + ((irq) * 8))
+#	define SSI_DATAACCEPT(channel)		(1 << (channel))
+#	define SSI_DATAAVAILABLE(channel)	(1 << ((channel) + 8))
+#	define SSI_DATAOVERRUN(channel)		(1 << ((channel) + 16))
+#	define SSI_ERROROCCURED			(1 << 24)
+#	define SSI_BREAKDETECTED		(1 << 25)
+#define SSI_GDD_MPU_IRQ_STATUS_REG	0x0800
+#define SSI_GDD_MPU_IRQ_ENABLE_REG	0x0804
+#	define SSI_GDD_LCH(channel)	(1 << (channel))
+#define SSI_WAKE_REG(port)		(0xc00 + ((port) * 0x10))
+#define SSI_CLEAR_WAKE_REG(port)	(0xc04 + ((port) * 0x10))
+#define SSI_SET_WAKE_REG(port)		(0xc08 + ((port) * 0x10))
+#	define SSI_WAKE(channel)	(1 << (channel))
+#	define SSI_WAKE_MASK		0xff
+
+/*
+ * SSI SST registers
+ */
+#define SSI_SST_ID_REG			0
+#define SSI_SST_MODE_REG		4
+#	define SSI_MODE_VAL_MASK	3
+#	define SSI_MODE_SLEEP		0
+#	define SSI_MODE_STREAM		1
+#	define SSI_MODE_FRAME		2
+#	define SSI_MODE_MULTIPOINTS	3
+#define SSI_SST_FRAMESIZE_REG		8
+#	define SSI_FRAMESIZE_DEFAULT	31
+#define SSI_SST_TXSTATE_REG		0xc
+#	define	SSI_TXSTATE_IDLE	0
+#define SSI_SST_BUFSTATE_REG		0x10
+#	define	SSI_FULL(channel)	(1 << (channel))
+#define SSI_SST_DIVISOR_REG		0x18
+#	define SSI_MAX_DIVISOR		127
+#define SSI_SST_BREAK_REG		0x20
+#define SSI_SST_CHANNELS_REG		0x24
+#	define SSI_CHANNELS_DEFAULT	4
+#define SSI_SST_ARBMODE_REG		0x28
+#	define SSI_ARBMODE_ROUNDROBIN	0
+#	define SSI_ARBMODE_PRIORITY	1
+#define SSI_SST_BUFFER_CH_REG(channel)	(0x80 + ((channel) * 4))
+#define SSI_SST_SWAPBUF_CH_REG(channel)	(0xc0 + ((channel) * 4))
+
+/*
+ * SSI SSR registers
+ */
+#define SSI_SSR_ID_REG			0
+#define SSI_SSR_MODE_REG		4
+#define SSI_SSR_FRAMESIZE_REG		8
+#define SSI_SSR_RXSTATE_REG		0xc
+#define SSI_SSR_BUFSTATE_REG		0x10
+#	define SSI_NOTEMPTY(channel)	(1 << (channel))
+#define SSI_SSR_BREAK_REG		0x1c
+#define SSI_SSR_ERROR_REG		0x20
+#define SSI_SSR_ERRORACK_REG		0x24
+#define SSI_SSR_OVERRUN_REG		0x2c
+#define SSI_SSR_OVERRUNACK_REG		0x30
+#define SSI_SSR_TIMEOUT_REG		0x34
+#	define SSI_TIMEOUT_DEFAULT	0
+#define SSI_SSR_CHANNELS_REG		0x28
+#define SSI_SSR_BUFFER_CH_REG(channel)	(0x80 + ((channel) * 4))
+#define SSI_SSR_SWAPBUF_CH_REG(channel)	(0xc0 + ((channel) * 4))
+
+/*
+ * SSI GDD registers
+ */
+#define SSI_GDD_HW_ID_REG		0
+#define SSI_GDD_PPORT_ID_REG		0x10
+#define SSI_GDD_MPORT_ID_REG		0x14
+#define SSI_GDD_PPORT_SR_REG		0x20
+#define SSI_GDD_MPORT_SR_REG		0x24
+#	define SSI_ACTIVE_LCH_NUM_MASK	0xff
+#define SSI_GDD_TEST_REG		0x40
+#	define SSI_TEST			1
+#define SSI_GDD_GCR_REG			0x100
+#	define	SSI_CLK_AUTOGATING_ON	(1 << 3)
+#	define	SSI_FREE		(1 << 2)
+#	define	SSI_SWITCH_OFF		(1 << 0)
+#define SSI_GDD_GRST_REG		0x200
+#	define SSI_SWRESET		1
+#define SSI_GDD_CSDP_REG(channel)	(0x800 + ((channel) * 0x40))
+#	define SSI_DST_BURST_EN_MASK	0xc000
+#	define SSI_DST_SINGLE_ACCESS0	0
+#	define SSI_DST_SINGLE_ACCESS	(1 << 14)
+#	define SSI_DST_BURST_4x32_BIT	(2 << 14)
+#	define SSI_DST_BURST_8x32_BIT	(3 << 14)
+#	define SSI_DST_MASK		0x1e00
+#	define SSI_DST_MEMORY_PORT	(8 << 9)
+#	define SSI_DST_PERIPHERAL_PORT	(9 << 9)
+#	define SSI_SRC_BURST_EN_MASK	0x180
+#	define SSI_SRC_SINGLE_ACCESS0	0
+#	define SSI_SRC_SINGLE_ACCESS	(1 << 7)
+#	define SSI_SRC_BURST_4x32_BIT	(2 << 7)
+#	define SSI_SRC_BURST_8x32_BIT	(3 << 7)
+#	define SSI_SRC_MASK		0x3c
+#	define SSI_SRC_MEMORY_PORT	(8 << 2)
+#	define SSI_SRC_PERIPHERAL_PORT	(9 << 2)
+#	define SSI_DATA_TYPE_MASK	3
+#	define SSI_DATA_TYPE_S32	2
+#define SSI_GDD_CCR_REG(channel)	(0x802 + ((channel) * 0x40))
+#	define SSI_DST_AMODE_MASK	(3 << 14)
+#	define SSI_DST_AMODE_CONST	0
+#	define SSI_DST_AMODE_POSTINC	(1 << 12)
+#	define SSI_SRC_AMODE_MASK	(3 << 12)
+#	define SSI_SRC_AMODE_CONST	0
+#	define SSI_SRC_AMODE_POSTINC	(1 << 12)
+#	define SSI_CCR_ENABLE		(1 << 7)
+#	define SSI_CCR_SYNC_MASK	0x1f
+#define SSI_GDD_CICR_REG(channel)	(0x804 + ((channel) * 0x40))
+#	define SSI_BLOCK_IE		(1 << 5)
+#	define SSI_HALF_IE		(1 << 2)
+#	define SSI_TOUT_IE		(1 << 0)
+#define SSI_GDD_CSR_REG(channel)	(0x806 + ((channel) * 0x40))
+#	define SSI_CSR_SYNC		(1 << 6)
+#	define SSI_CSR_BLOCK		(1 << 5)
+#	define SSI_CSR_HALF		(1 << 2)
+#	define SSI_CSR_TOUR		(1 << 0)
+#define SSI_GDD_CSSA_REG(channel)	(0x808 + ((channel) * 0x40))
+#define SSI_GDD_CDSA_REG(channel)	(0x80c + ((channel) * 0x40))
+#define SSI_GDD_CEN_REG(channel)	(0x810 + ((channel) * 0x40))
+#define SSI_GDD_CSAC_REG(channel)	(0x818 + ((channel) * 0x40))
+#define SSI_GDD_CDAC_REG(channel)	(0x81a + ((channel) * 0x40))
+#define SSI_GDD_CLNK_CTRL_REG(channel)	(0x828 + ((channel) * 0x40))
+#	define SSI_ENABLE_LNK		(1 << 15)
+#	define SSI_STOP_LNK		(1 << 14)
+#	define SSI_NEXT_CH_ID_MASK	0xf
+
+/**
+ * struct omap_ssi_platform_data - OMAP SSI platform data
+ * @num_ports: Number of ports on the controller
+ * @get_dev_context_loss_count: Pointer to omap_pm_get_dev_context_loss_count
+ */
+struct omap_ssi_platform_data {
+	unsigned int	num_ports;
+	u32 (*get_dev_context_loss_count)(struct device *dev);
+};
+
+/**
+ * struct omap_ssi_board_config - SSI board configuration
+ * @num_ports: Number of ports in use
+ * @cawake_gpio: Array of cawake gpio lines
+ */
+struct omap_ssi_board_config {
+	unsigned int num_ports;
+	int cawake_gpio[SSI_NUM_PORTS];
+};
+
+#ifdef CONFIG_OMAP_SSI_CONFIG
+extern int omap_ssi_config(struct omap_ssi_board_config *ssi_config);
+#else
+static inline int omap_ssi_config(struct omap_ssi_board_config *ssi_config)
+{
+	return 0;
+}
+#endif /* CONFIG_OMAP_SSI_CONFIG */
+
+#endif /* __OMAP_SSI_REGS_H__ */
diff --git a/drivers/hsi/controllers/omap_ssi.c b/drivers/hsi/controllers/omap_ssi.c
new file mode 100644
index 0000000..49cec11
--- /dev/null
+++ b/drivers/hsi/controllers/omap_ssi.c
@@ -0,0 +1,1852 @@
+/*
+ * omap_ssi.c
+ *
+ * Implements the OMAP SSI driver.
+ *
+ * Copyright (C) 2010 Nokia Corporation. All rights reserved.
+ *
+ * Contact: Carlos Chinea <carlos.chinea@nokia.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ */
+#include <linux/compiler.h>
+#include <linux/err.h>
+#include <linux/ioport.h>
+#include <linux/io.h>
+#include <linux/gpio.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/delay.h>
+#include <linux/seq_file.h>
+#include <linux/scatterlist.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/hsi/hsi.h>
+#include <linux/debugfs.h>
+#include <plat/omap-pm.h>
+#include <plat/clock.h>
+#include <plat/ssi.h>
+
+#define SSI_MAX_CHANNELS	8
+#define SSI_MAX_GDD_LCH		8
+#define SSI_BYTES_TO_FRAMES(x) ((((x) - 1) >> 2) + 1)
+
+/**
+ * struct ssi_clk_res - Device resource data for the SSI clocks
+ * @clk: Pointer to the clock
+ * @nb: Pointer to the clock notifier for clk, if any
+ */
+struct ssi_clk_res {
+	struct clk *clk;
+	struct notifier_block *nb;
+};
+
+/**
+ * struct gdd_trn - GDD transaction data
+ * @msg: Pointer to the HSI message being served
+ * @sg: Pointer to the current sg entry being served
+ */
+struct gdd_trn {
+	struct hsi_msg		*msg;
+	struct scatterlist	*sg;
+};
+
+/**
+ * struct omap_ssm_ctx - OMAP synchronous serial module (TX/RX) context
+ * @mode: Bit transmission mode
+ * @channels: Number of channels
+ * @frame_size: Frame size in bits
+ * @timeout: RX frame timeout
+ * @divisor: TX divider
+ * @arb_mode: Arbitration mode for TX frame (Round robin, priority)
+ */
+struct omap_ssm_ctx {
+	u32	mode;
+	u32	channels;
+	u32	frame_size;
+	union	{
+			u32	timeout; /* Rx Only */
+			struct	{
+					u32	arb_mode;
+					u32	divisor;
+			}; /* Tx only */
+	};
+};
+
+/**
+ * struct omap_ssi_port - OMAP SSI port data
+ * @dev: device associated to the port (HSI port)
+ * @sst_dma: SSI transmitter physical base address
+ * @ssr_dma: SSI receiver physical base address
+ * @sst_base: SSI transmitter base address
+ * @ssr_base: SSI receiver base address
+ * @wk_lock: spin lock to serialize access to the wake lines
+ * @lock: Spin lock to serialize access to the SSI port
+ * @channels: Current number of channels configured (1,2,4 or 8)
+ * @txqueue: TX message queues
+ * @rxqueue: RX message queues
+ * @brkqueue: Queue of incoming HWBREAK requests (FRAME mode)
+ * @irq: IRQ number
+ * @wake_irq: IRQ number for incoming wake line (-1 if none)
+ * @pio_tasklet: Bottom half for PIO transfers and events
+ * @wake_tasklet: Bottom half for incoming wake events
+ * @wkin_cken: Keep track of clock references due to the incoming wake line
+ * @wk_refcount: Reference count for output wake line
+ * @sys_mpu_enable: Context for the interrupt enable register for irq 0
+ * @sst: Context for the synchronous serial transmitter
+ * @ssr: Context for the synchronous serial receiver
+ */
+struct omap_ssi_port {
+	struct device		*dev;
+	dma_addr_t		sst_dma;
+	dma_addr_t		ssr_dma;
+	void __iomem		*sst_base;
+	void __iomem		*ssr_base;
+	spinlock_t		wk_lock;
+	spinlock_t		lock;
+	unsigned int		channels;
+	struct list_head	txqueue[SSI_MAX_CHANNELS];
+	struct list_head	rxqueue[SSI_MAX_CHANNELS];
+	struct list_head	brkqueue;
+	unsigned int		irq;
+	int			wake_irq;
+	struct tasklet_struct	pio_tasklet;
+	struct tasklet_struct	wake_tasklet;
+	unsigned int		wkin_cken:1; /* Workaround */
+	int			wk_refcount;
+	/* OMAP SSI port context */
+	u32			sys_mpu_enable; /* We use only one irq */
+	struct omap_ssm_ctx	sst;
+	struct omap_ssm_ctx	ssr;
+};
+
+/**
+ * struct omap_ssi_controller - OMAP SSI controller data
+ * @dev: device associated to the controller (HSI controller)
+ * @sys: SSI I/O base address
+ * @gdd: GDD I/O base address
+ * @ick: SSI interconnect clock
+ * @fck: SSI functional clock
+ * @ck_refcount: References count for clocks
+ * @gdd_irq: IRQ line for GDD
+ * @gdd_tasklet: bottom half for DMA transfers
+ * @gdd_trn: Array of GDD transaction data for ongoing GDD transfers
+ * @lock: lock to serialize access to GDD
+ * @ck_lock: lock to serialize access to the clocks
+ * @loss_count: Context loss count used to decide whether context must be restored
+ * @max_speed: Maximum TX speed (Kb/s) set by the clients.
+ * @sysconfig: SSI controller saved context
+ * @gdd_gcr: SSI GDD saved context
+ * @get_loss: Pointer to omap_pm_get_dev_context_loss_count, if any
+ * @port: Array of pointers of the ports of the controller
+ * @dir: Debugfs SSI root directory
+ */
+struct omap_ssi_controller {
+	struct device		*dev;
+	void __iomem		*sys;
+	void __iomem		*gdd;
+	struct clk		*ick;
+	struct clk		*fck;
+	int			ck_refcount;
+	unsigned int		gdd_irq;
+	struct tasklet_struct	gdd_tasklet;
+	struct gdd_trn		gdd_trn[SSI_MAX_GDD_LCH];
+	spinlock_t		lock;
+	spinlock_t		ck_lock;
+	unsigned long		fck_rate;
+	u32			loss_count;
+	u32			max_speed;
+	/* OMAP SSI Controller context */
+	u32			sysconfig;
+	u32			gdd_gcr;
+	u32			(*get_loss)(struct device *dev);
+	struct omap_ssi_port	**port;
+#ifdef CONFIG_DEBUG_FS
+	struct dentry *dir;
+#endif
+};
+
+static inline unsigned int ssi_wakein(struct hsi_port *port)
+{
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+
+	return gpio_get_value(irq_to_gpio(omap_port->wake_irq));
+}
+
+static int ssi_for_each_port(struct hsi_controller *ssi, void *data,
+			int (*fn)(struct omap_ssi_port *p, void *data))
+{
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	unsigned int i = 0;
+	int err = 0;
+
+	for (i = 0; ((i < ssi->num_ports) && !err); i++)
+		err = (*fn)(omap_ssi->port[i], data);
+
+	return err;
+}
+
+static int ssi_set_port_mode(struct omap_ssi_port *omap_port, void *data)
+{
+	u32 *mode = data;
+
+	__raw_writel(*mode, omap_port->sst_base + SSI_SST_MODE_REG);
+	__raw_writel(*mode, omap_port->ssr_base + SSI_SSR_MODE_REG);
+	/* OCP barrier */
+	*mode = __raw_readl(omap_port->ssr_base + SSI_SSR_MODE_REG);
+
+	return 0;
+}
+
+static inline void ssi_set_mode(struct hsi_controller *ssi, u32 mode)
+{
+	ssi_for_each_port(ssi, &mode, ssi_set_port_mode);
+}
+
+static int ssi_restore_port_mode(struct omap_ssi_port *omap_port,
+						void *data __maybe_unused)
+{
+	u32 mode;
+
+	__raw_writel(omap_port->sst.mode,
+				omap_port->sst_base + SSI_SST_MODE_REG);
+	__raw_writel(omap_port->ssr.mode,
+				omap_port->ssr_base + SSI_SSR_MODE_REG);
+	/* OCP barrier */
+	mode =  __raw_readl(omap_port->ssr_base + SSI_SSR_MODE_REG);
+
+	return 0;
+}
+
+static int ssi_restore_divisor(struct omap_ssi_port *omap_port,
+						void *data __maybe_unused)
+{
+	__raw_writel(omap_port->sst.divisor,
+				omap_port->sst_base + SSI_SST_DIVISOR_REG);
+
+	return 0;
+}
+
+static int ssi_restore_port_ctx(struct omap_ssi_port *omap_port,
+						void *data __maybe_unused)
+{
+	struct hsi_port *port = to_hsi_port(omap_port->dev);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	void __iomem	*base = omap_port->sst_base;
+
+	__raw_writel(omap_port->sys_mpu_enable,
+			omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	/* SST context */
+	__raw_writel(omap_port->sst.frame_size, base + SSI_SST_FRAMESIZE_REG);
+	__raw_writel(omap_port->sst.channels, base + SSI_SST_CHANNELS_REG);
+	__raw_writel(omap_port->sst.arb_mode, base + SSI_SST_ARBMODE_REG);
+	/* SSR context */
+	base = omap_port->ssr_base;
+	__raw_writel(omap_port->ssr.frame_size, base + SSI_SSR_FRAMESIZE_REG);
+	__raw_writel(omap_port->ssr.channels, base + SSI_SSR_CHANNELS_REG);
+	__raw_writel(omap_port->ssr.timeout, base + SSI_SSR_TIMEOUT_REG);
+
+	return 0;
+}
+
+static int ssi_save_port_ctx(struct omap_ssi_port *omap_port,
+						void *data __maybe_unused)
+{
+	struct hsi_port *port = to_hsi_port(omap_port->dev);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+
+	omap_port->sys_mpu_enable = __raw_readl(omap_ssi->sys +
+					SSI_MPU_ENABLE_REG(port->num, 0));
+
+	return 0;
+}
+
+static int ssi_clk_enable(struct hsi_controller *ssi)
+{
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	int err = 0;
+
+	spin_lock_bh(&omap_ssi->ck_lock);
+	if (omap_ssi->ck_refcount++)
+		goto out;
+	err = clk_enable(omap_ssi->fck);
+	if (unlikely(err < 0))
+		goto out;
+	err = clk_enable(omap_ssi->ick);
+	if (unlikely(err < 0)) {
+		clk_disable(omap_ssi->fck);
+		goto out;
+	}
+	if ((omap_ssi->get_loss) && (omap_ssi->loss_count ==
+				(*omap_ssi->get_loss)(ssi->device.parent)))
+		goto mode; /* We always need to restore the mode & TX divisor */
+
+	__raw_writel(omap_ssi->sysconfig, omap_ssi->sys + SSI_SYSCONFIG_REG);
+	__raw_writel(omap_ssi->gdd_gcr, omap_ssi->gdd + SSI_GDD_GCR_REG);
+
+	ssi_for_each_port(ssi, NULL, ssi_restore_port_ctx);
+mode:
+	ssi_for_each_port(ssi, NULL, ssi_restore_divisor);
+	ssi_for_each_port(ssi, NULL, ssi_restore_port_mode);
+out:
+	spin_unlock_bh(&omap_ssi->ck_lock);
+
+	return err;
+}
+
+static void ssi_clk_disable(struct hsi_controller *ssi)
+{
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+
+	spin_lock_bh(&omap_ssi->ck_lock);
+	WARN_ON(omap_ssi->ck_refcount <= 0);
+	if (--omap_ssi->ck_refcount)
+		goto out;
+
+	ssi_set_mode(ssi, SSI_MODE_SLEEP);
+
+	if (omap_ssi->get_loss)
+		omap_ssi->loss_count =
+				(*omap_ssi->get_loss)(ssi->device.parent);
+
+	ssi_for_each_port(ssi, NULL, ssi_save_port_ctx);
+	clk_disable(omap_ssi->ick);
+	clk_disable(omap_ssi->fck);
+out:
+	spin_unlock_bh(&omap_ssi->ck_lock);
+}
+
+#ifdef CONFIG_DEBUG_FS
+static int ssi_debug_show(struct seq_file *m, void *p __maybe_unused)
+{
+	struct hsi_controller *ssi = m->private;
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	void __iomem	*sys = omap_ssi->sys;
+
+	ssi_clk_enable(ssi);
+	seq_printf(m, "REVISION\t: 0x%08x\n",
+					__raw_readl(sys + SSI_REVISION_REG));
+	seq_printf(m, "SYSCONFIG\t: 0x%08x\n",
+					__raw_readl(sys + SSI_SYSCONFIG_REG));
+	seq_printf(m, "SYSSTATUS\t: 0x%08x\n",
+					__raw_readl(sys + SSI_SYSSTATUS_REG));
+	ssi_clk_disable(ssi);
+
+	return 0;
+}
+
+static int ssi_debug_port_show(struct seq_file *m, void *p __maybe_unused)
+{
+	struct hsi_port *port = m->private;
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	void __iomem	*base = omap_ssi->sys;
+	unsigned int ch;
+
+	ssi_clk_enable(ssi);
+	if (omap_port->wake_irq > 0)
+		seq_printf(m, "CAWAKE\t\t: %d\n", ssi_wakein(port));
+	seq_printf(m, "WAKE\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_WAKE_REG(port->num)));
+	seq_printf(m, "MPU_ENABLE_IRQ%d\t: 0x%08x\n", 0,
+			__raw_readl(base + SSI_MPU_ENABLE_REG(port->num, 0)));
+	seq_printf(m, "MPU_STATUS_IRQ%d\t: 0x%08x\n", 0,
+			__raw_readl(base + SSI_MPU_STATUS_REG(port->num, 0)));
+	/* SST */
+	base = omap_port->sst_base;
+	seq_printf(m, "\nSST\n===\n");
+	seq_printf(m, "ID SST\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SST_ID_REG));
+	seq_printf(m, "MODE\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SST_MODE_REG));
+	seq_printf(m, "FRAMESIZE\t: 0x%08x\n",
+				__raw_readl(base + SSI_SST_FRAMESIZE_REG));
+	seq_printf(m, "DIVISOR\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SST_DIVISOR_REG));
+	seq_printf(m, "CHANNELS\t: 0x%08x\n",
+				__raw_readl(base + SSI_SST_CHANNELS_REG));
+	seq_printf(m, "ARBMODE\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SST_ARBMODE_REG));
+	seq_printf(m, "TXSTATE\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SST_TXSTATE_REG));
+	seq_printf(m, "BUFSTATE\t: 0x%08x\n",
+				__raw_readl(base + SSI_SST_BUFSTATE_REG));
+	seq_printf(m, "BREAK\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SST_BREAK_REG));
+	for (ch = 0; ch < omap_port->channels; ch++) {
+		seq_printf(m, "BUFFER_CH%d\t: 0x%08x\n", ch,
+				__raw_readl(base + SSI_SST_BUFFER_CH_REG(ch)));
+	}
+	/* SSR */
+	base = omap_port->ssr_base;
+	seq_printf(m, "\nSSR\n===\n");
+	seq_printf(m, "ID SSR\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SSR_ID_REG));
+	seq_printf(m, "MODE\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SSR_MODE_REG));
+	seq_printf(m, "FRAMESIZE\t: 0x%08x\n",
+				__raw_readl(base + SSI_SSR_FRAMESIZE_REG));
+	seq_printf(m, "CHANNELS\t: 0x%08x\n",
+				__raw_readl(base + SSI_SSR_CHANNELS_REG));
+	seq_printf(m, "TIMEOUT\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SSR_TIMEOUT_REG));
+	seq_printf(m, "RXSTATE\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SSR_RXSTATE_REG));
+	seq_printf(m, "BUFSTATE\t: 0x%08x\n",
+				__raw_readl(base + SSI_SSR_BUFSTATE_REG));
+	seq_printf(m, "BREAK\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SSR_BREAK_REG));
+	seq_printf(m, "ERROR\t\t: 0x%08x\n",
+				__raw_readl(base + SSI_SSR_ERROR_REG));
+	seq_printf(m, "ERRORACK\t: 0x%08x\n",
+				__raw_readl(base + SSI_SSR_ERRORACK_REG));
+	for (ch = 0; ch < omap_port->channels; ch++) {
+		seq_printf(m, "BUFFER_CH%d\t: 0x%08x\n", ch,
+				__raw_readl(base + SSI_SSR_BUFFER_CH_REG(ch)));
+	}
+	ssi_clk_disable(ssi);
+
+	return 0;
+}
+
+static int ssi_debug_gdd_show(struct seq_file *m, void *p __maybe_unused)
+{
+	struct hsi_controller *ssi = m->private;
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	void __iomem	*gdd = omap_ssi->gdd;
+	int lch;
+
+	ssi_clk_enable(ssi);
+	seq_printf(m, "GDD_MPU_STATUS\t: 0x%08x\n",
+		__raw_readl(omap_ssi->sys + SSI_GDD_MPU_IRQ_STATUS_REG));
+	seq_printf(m, "GDD_MPU_ENABLE\t: 0x%08x\n\n",
+		__raw_readl(omap_ssi->sys + SSI_GDD_MPU_IRQ_ENABLE_REG));
+	seq_printf(m, "HW_ID\t\t: 0x%08x\n",
+				__raw_readl(gdd + SSI_GDD_HW_ID_REG));
+	seq_printf(m, "PPORT_ID\t: 0x%08x\n",
+				__raw_readl(gdd + SSI_GDD_PPORT_ID_REG));
+	seq_printf(m, "MPORT_ID\t: 0x%08x\n",
+				__raw_readl(gdd + SSI_GDD_MPORT_ID_REG));
+	seq_printf(m, "TEST\t\t: 0x%08x\n",
+				__raw_readl(gdd + SSI_GDD_TEST_REG));
+	seq_printf(m, "GCR\t\t: 0x%08x\n",
+				__raw_readl(gdd + SSI_GDD_GCR_REG));
+
+	for (lch = 0; lch < SSI_MAX_GDD_LCH; lch++) {
+		seq_printf(m, "\nGDD LCH %d\n=========\n", lch);
+		seq_printf(m, "CSDP\t\t: 0x%04x\n",
+				__raw_readw(gdd + SSI_GDD_CSDP_REG(lch)));
+		seq_printf(m, "CCR\t\t: 0x%04x\n",
+				__raw_readw(gdd + SSI_GDD_CCR_REG(lch)));
+		seq_printf(m, "CICR\t\t: 0x%04x\n",
+				__raw_readw(gdd + SSI_GDD_CICR_REG(lch)));
+		seq_printf(m, "CSR\t\t: 0x%04x\n",
+				__raw_readw(gdd + SSI_GDD_CSR_REG(lch)));
+		seq_printf(m, "CSSA\t\t: 0x%08x\n",
+				__raw_readl(gdd + SSI_GDD_CSSA_REG(lch)));
+		seq_printf(m, "CDSA\t\t: 0x%08x\n",
+				__raw_readl(gdd + SSI_GDD_CDSA_REG(lch)));
+		seq_printf(m, "CEN\t\t: 0x%04x\n",
+				__raw_readw(gdd + SSI_GDD_CEN_REG(lch)));
+		seq_printf(m, "CSAC\t\t: 0x%04x\n",
+				__raw_readw(gdd + SSI_GDD_CSAC_REG(lch)));
+		seq_printf(m, "CDAC\t\t: 0x%04x\n",
+				__raw_readw(gdd + SSI_GDD_CDAC_REG(lch)));
+		seq_printf(m, "CLNK_CTRL\t: 0x%04x\n",
+				__raw_readw(gdd + SSI_GDD_CLNK_CTRL_REG(lch)));
+	}
+	ssi_clk_disable(ssi);
+
+	return 0;
+}
+
+static int ssi_div_get(void *data, u64 *val)
+{
+	struct hsi_port *port = data;
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+
+	ssi_clk_enable(ssi);
+	*val = __raw_readl(omap_port->sst_base + SSI_SST_DIVISOR_REG);
+	ssi_clk_disable(ssi);
+
+	return 0;
+}
+
+static int ssi_div_set(void *data, u64 val)
+{
+	struct hsi_port *port = data;
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+
+	if (val > 127)
+		return -EINVAL;
+
+	ssi_clk_enable(ssi);
+	__raw_writel(val, omap_port->sst_base + SSI_SST_DIVISOR_REG);
+	omap_port->sst.divisor = val;
+	ssi_clk_disable(ssi);
+
+	return 0;
+}
+
+static int ssi_regs_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, ssi_debug_show, inode->i_private);
+}
+
+static int ssi_port_regs_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, ssi_debug_port_show, inode->i_private);
+}
+
+static int ssi_gdd_regs_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, ssi_debug_gdd_show, inode->i_private);
+}
+
+static const struct file_operations ssi_regs_fops = {
+	.open		= ssi_regs_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static const struct file_operations ssi_port_regs_fops = {
+	.open		= ssi_port_regs_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static const struct file_operations ssi_gdd_regs_fops = {
+	.open		= ssi_gdd_regs_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+DEFINE_SIMPLE_ATTRIBUTE(ssi_sst_div_fops, ssi_div_get, ssi_div_set, "%llu\n");
+
+static int __init ssi_debug_add_port(struct omap_ssi_port *omap_port,
+								void *data)
+{
+	struct hsi_port *port = to_hsi_port(omap_port->dev);
+	struct dentry *dir = data;
+
+	dir = debugfs_create_dir(dev_name(omap_port->dev), dir);
+	if (IS_ERR(dir))
+		return PTR_ERR(dir);
+	debugfs_create_file("regs", S_IRUGO, dir, port, &ssi_port_regs_fops);
+	dir = debugfs_create_dir("sst", dir);
+	if (IS_ERR(dir))
+		return PTR_ERR(dir);
+	debugfs_create_file("divisor", S_IRUGO | S_IWUSR, dir, port,
+							&ssi_sst_div_fops);
+
+	return 0;
+}
+
+static int __init ssi_debug_add_ctrl(struct hsi_controller *ssi)
+{
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	struct dentry *dir;
+	int err;
+
+	/* SSI controller */
+	omap_ssi->dir = debugfs_create_dir(dev_name(&ssi->device), NULL);
+	if (IS_ERR(omap_ssi->dir))
+		return PTR_ERR(omap_ssi->dir);
+
+	debugfs_create_file("regs", S_IRUGO, omap_ssi->dir, ssi,
+								&ssi_regs_fops);
+	/* SSI GDD (DMA) */
+	dir = debugfs_create_dir("gdd", omap_ssi->dir);
+	if (IS_ERR(dir))
+		goto rback;
+	debugfs_create_file("regs", S_IRUGO, dir, ssi, &ssi_gdd_regs_fops);
+	/* SSI ports */
+	err = ssi_for_each_port(ssi, omap_ssi->dir, ssi_debug_add_port);
+	if (err < 0)
+		goto rback;
+
+	return 0;
+rback:
+	debugfs_remove_recursive(omap_ssi->dir);
+
+	return PTR_ERR(dir);
+}
+
+static void ssi_debug_remove_ctrl(struct hsi_controller *ssi)
+{
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+
+	debugfs_remove_recursive(omap_ssi->dir);
+}
+#endif /* CONFIG_DEBUG_FS */
+
+static int ssi_claim_lch(struct hsi_msg *msg)
+{
+	struct hsi_port *port = hsi_get_port(msg->cl);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	int lch;
+
+	for (lch = 0; lch < SSI_MAX_GDD_LCH; lch++)
+		if (!omap_ssi->gdd_trn[lch].msg) {
+			omap_ssi->gdd_trn[lch].msg = msg;
+			omap_ssi->gdd_trn[lch].sg = msg->sgt.sgl;
+			return lch;
+		}
+
+	return -EBUSY;
+}
+
+static int ssi_start_pio(struct hsi_msg *msg)
+{
+	struct hsi_port *port = hsi_get_port(msg->cl);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	u32 val;
+
+	ssi_clk_enable(ssi);
+	if (msg->ttype == HSI_MSG_WRITE) {
+		val = SSI_DATAACCEPT(msg->channel);
+		ssi_clk_enable(ssi); /* Hold clocks for pio writes */
+	} else {
+		val = SSI_DATAAVAILABLE(msg->channel) | SSI_ERROROCCURED;
+	}
+	dev_dbg(&port->device, "Single %s transfer\n",
+						msg->ttype ? "write" : "read");
+	val |= __raw_readl(omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	__raw_writel(val, omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	ssi_clk_disable(ssi);
+	msg->actual_len = 0;
+	msg->status = HSI_STATUS_PROCEEDING;
+
+	return 0;
+}
+
+static int ssi_start_dma(struct hsi_msg *msg, int lch)
+{
+	struct hsi_port *port = hsi_get_port(msg->cl);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	void __iomem *gdd = omap_ssi->gdd;
+	int err;
+	u16 csdp;
+	u16 ccr;
+	u32 s_addr;
+	u32 d_addr;
+	u32 tmp;
+
+	if (msg->ttype == HSI_MSG_READ) {
+		err = dma_map_sg(&ssi->device, msg->sgt.sgl, msg->sgt.nents,
+							DMA_FROM_DEVICE);
+		if (err < 0) {
+			dev_dbg(&ssi->device, "DMA map SG failed!\n");
+			return err;
+		}
+		csdp = SSI_DST_BURST_4x32_BIT | SSI_DST_MEMORY_PORT |
+			SSI_SRC_SINGLE_ACCESS0 | SSI_SRC_PERIPHERAL_PORT |
+			SSI_DATA_TYPE_S32;
+		ccr = msg->channel + 0x10 + (port->num * 8); /* Sync */
+		ccr |= SSI_DST_AMODE_POSTINC | SSI_SRC_AMODE_CONST |
+			SSI_CCR_ENABLE;
+		s_addr = omap_port->ssr_dma +
+					SSI_SSR_BUFFER_CH_REG(msg->channel);
+		d_addr = sg_dma_address(msg->sgt.sgl);
+	} else {
+		err = dma_map_sg(&ssi->device, msg->sgt.sgl, msg->sgt.nents,
+							DMA_TO_DEVICE);
+		if (err < 0) {
+			dev_dbg(&ssi->device, "DMA map SG failed!\n");
+			return err;
+		}
+		csdp = SSI_SRC_BURST_4x32_BIT | SSI_SRC_MEMORY_PORT |
+			SSI_DST_SINGLE_ACCESS0 | SSI_DST_PERIPHERAL_PORT |
+			SSI_DATA_TYPE_S32;
+		ccr = (msg->channel + 1 + (port->num * 8)) & 0xf; /* Sync */
+		ccr |= SSI_SRC_AMODE_POSTINC | SSI_DST_AMODE_CONST |
+			SSI_CCR_ENABLE;
+		s_addr = sg_dma_address(msg->sgt.sgl);
+		d_addr = omap_port->sst_dma +
+					SSI_SST_BUFFER_CH_REG(msg->channel);
+	}
+	dev_dbg(&ssi->device, "lch %d csdp %08x ccr %04x s_addr %08x"
+			" d_addr %08x\n", lch, csdp, ccr, s_addr, d_addr);
+	ssi_clk_enable(ssi); /* Hold clocks during the transfer */
+	__raw_writew(csdp, gdd + SSI_GDD_CSDP_REG(lch));
+	__raw_writew(SSI_BLOCK_IE | SSI_TOUT_IE, gdd + SSI_GDD_CICR_REG(lch));
+	__raw_writel(d_addr, gdd + SSI_GDD_CDSA_REG(lch));
+	__raw_writel(s_addr, gdd + SSI_GDD_CSSA_REG(lch));
+	__raw_writew(SSI_BYTES_TO_FRAMES(msg->sgt.sgl->length),
+						gdd + SSI_GDD_CEN_REG(lch));
+
+	spin_lock_bh(&omap_ssi->lock);
+	tmp = __raw_readl(omap_ssi->sys + SSI_GDD_MPU_IRQ_ENABLE_REG);
+	tmp |= SSI_GDD_LCH(lch);
+	__raw_writel(tmp, omap_ssi->sys + SSI_GDD_MPU_IRQ_ENABLE_REG);
+	spin_unlock_bh(&omap_ssi->lock);
+	__raw_writew(ccr, gdd + SSI_GDD_CCR_REG(lch));
+	msg->status = HSI_STATUS_PROCEEDING;
+
+	return 0;
+}
+
+static int ssi_start_transfer(struct list_head *queue)
+{
+	struct hsi_msg *msg;
+	int lch = -1;
+
+	if (list_empty(queue))
+		return 0;
+	msg = list_first_entry(queue, struct hsi_msg, link);
+	if (msg->status != HSI_STATUS_QUEUED)
+		return 0;
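+	/*
+	 * Use DMA only for transfers longer than one 32-bit word and fall
+	 * back to PIO when no GDD channel is available.
+	 */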
+	if ((msg->sgt.nents) && (msg->sgt.sgl->length > sizeof(u32)))
+		lch = ssi_claim_lch(msg);
+	if (lch >= 0)
+		return ssi_start_dma(msg, lch);
+	else
+		return ssi_start_pio(msg);
+}
+
+static void ssi_transfer(struct omap_ssi_port *omap_port,
+							struct list_head *queue)
+{
+	struct hsi_msg *msg;
+	int err = -1;
+
+	spin_lock_bh(&omap_port->lock);
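+	/*
+	 * Keep trying until a transfer starts or the queue drains. Messages
+	 * that fail to start are completed with an error status.
+	 */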
+	while (err < 0) {
+		err = ssi_start_transfer(queue);
+		if (err < 0) {
+			msg = list_first_entry(queue, struct hsi_msg, link);
+			msg->status = HSI_STATUS_ERROR;
+			msg->actual_len = 0;
+			list_del(&msg->link);
+			spin_unlock_bh(&omap_port->lock);
+			msg->complete(msg);
+			spin_lock_bh(&omap_port->lock);
+		}
+	}
+	spin_unlock_bh(&omap_port->lock);
+}
+
+static u32 ssi_calculate_div(struct hsi_controller *ssi)
+{
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	u32 tx_fckrate = (u32) omap_ssi->fck_rate;
+
+	/* / 2 : SSI TX clock is always half of the SSI functional clock */
+	tx_fckrate >>= 1;
+	/* Round down when tx_fckrate % omap_ssi->max_speed == 0 */
+	tx_fckrate--;
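+	/*
+	 * Example: with a 96000 kHz functional clock and max_speed 4000 kb/s,
+	 * tx_fckrate = 48000 - 1 = 47999 and the divisor is 47999 / 4000 = 11.
+	 */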
+	dev_dbg(&ssi->device, "TX div %d for fck_rate %lu kHz speed %d kb/s\n",
+			tx_fckrate / omap_ssi->max_speed, omap_ssi->fck_rate,
+							omap_ssi->max_speed);
+
+	return tx_fckrate / omap_ssi->max_speed;
+}
+
+static void ssi_error(struct hsi_port *port)
+{
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	struct hsi_msg *msg;
+	unsigned int i;
+	u32 err;
+	u32 val;
+	u32 tmp;
+
+	/* ACK error */
+	err = __raw_readl(omap_port->ssr_base + SSI_SSR_ERROR_REG);
+	dev_err(&port->device, "SSI error: 0x%02x\n", err);
+	if (!err) {
+		dev_dbg(&port->device, "spurious SSI error ignored!\n");
+		return;
+	}
+	spin_lock(&omap_ssi->lock);
+	/* Cancel all GDD read transfers */
+	for (i = 0, val = 0; i < SSI_MAX_GDD_LCH; i++) {
+		msg = omap_ssi->gdd_trn[i].msg;
+		if ((msg) && (msg->ttype == HSI_MSG_READ)) {
+			__raw_writew(0, omap_ssi->gdd + SSI_GDD_CCR_REG(i));
+			val |= (1 << i);
+			omap_ssi->gdd_trn[i].msg = NULL;
+		}
+	}
+	tmp = __raw_readl(omap_ssi->sys + SSI_GDD_MPU_IRQ_ENABLE_REG);
+	tmp &= ~val;
+	__raw_writel(tmp, omap_ssi->sys + SSI_GDD_MPU_IRQ_ENABLE_REG);
+	spin_unlock(&omap_ssi->lock);
+	/* Cancel all PIO read transfers */
+	spin_lock(&omap_port->lock);
+	tmp = __raw_readl(omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	tmp &= 0xfeff00ff; /* Disable error & all dataavailable interrupts */
+	__raw_writel(tmp, omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	/* ACK error */
+	__raw_writel(err, omap_port->ssr_base + SSI_SSR_ERRORACK_REG);
+	__raw_writel(SSI_ERROROCCURED,
+			omap_ssi->sys + SSI_MPU_STATUS_REG(port->num, 0));
+	/* Signal the error to all currently pending read requests */
+	for (i = 0; i < omap_port->channels; i++) {
+		if (list_empty(&omap_port->rxqueue[i]))
+			continue;
+		msg = list_first_entry(&omap_port->rxqueue[i], struct hsi_msg,
+									link);
+		list_del(&msg->link);
+		msg->status = HSI_STATUS_ERROR;
+		spin_unlock(&omap_port->lock);
+		msg->complete(msg);
+		/* Now restart queued reads if any */
+		ssi_transfer(omap_port, &omap_port->rxqueue[i]);
+		spin_lock(&omap_port->lock);
+	}
+	spin_unlock(&omap_port->lock);
+}
+
+static void ssi_break_complete(struct hsi_port *port)
+{
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	LIST_HEAD(bq);
+	struct hsi_msg *msg;
+	struct hsi_msg *tmp;
+	u32 val;
+
+	dev_dbg(&port->device, "HWBREAK received\n");
+
+	spin_lock(&omap_port->lock);
+	val = __raw_readl(omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	val &= ~SSI_BREAKDETECTED;
+	__raw_writel(val, omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	__raw_writel(0, omap_port->ssr_base + SSI_SSR_BREAK_REG);
+	__raw_writel(SSI_BREAKDETECTED,
+			omap_ssi->sys + SSI_MPU_STATUS_REG(port->num, 0));
+	list_splice_init(&omap_port->brkqueue, &bq);
+	spin_unlock(&omap_port->lock);
+
+	list_for_each_entry_safe(msg, tmp, &bq, link) {
+		msg->status = HSI_STATUS_COMPLETED;
+		list_del(&msg->link);
+		msg->complete(msg);
+	}
+}
+
+static int ssi_async_break(struct hsi_msg *msg)
+{
+	struct hsi_port *port = hsi_get_port(msg->cl);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	int err = 0;
+	u32 tmp;
+
+	ssi_clk_enable(ssi);
+	if (msg->ttype == HSI_MSG_WRITE) {
+		if (omap_port->sst.mode != SSI_MODE_FRAME) {
+			err = -EINVAL;
+			goto out;
+		}
+		__raw_writel(1, omap_port->sst_base + SSI_SST_BREAK_REG);
+		msg->status = HSI_STATUS_COMPLETED;
+		msg->complete(msg);
+	} else {
+		if (omap_port->ssr.mode != SSI_MODE_FRAME) {
+			err = -EINVAL;
+			goto out;
+		}
+		spin_lock_bh(&omap_port->lock);
+		tmp = __raw_readl(omap_ssi->sys +
+					SSI_MPU_ENABLE_REG(port->num, 0));
+		__raw_writel(tmp | SSI_BREAKDETECTED,
+			omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+		msg->status = HSI_STATUS_PROCEEDING;
+		list_add_tail(&msg->link, &omap_port->brkqueue);
+		spin_unlock_bh(&omap_port->lock);
+	}
+out:
+	ssi_clk_disable(ssi);
+
+	return err;
+}
+
+static int ssi_async(struct hsi_msg *msg)
+{
+	struct hsi_port *port = hsi_get_port(msg->cl);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct list_head *queue;
+	int err = 0;
+
+	BUG_ON(!msg);
+
+	if (msg->sgt.nents > 1)
+		return -ENOSYS; /* TODO: Add sg support */
+
+	if (msg->break_frame)
+		return ssi_async_break(msg);
+
+	if (msg->ttype) {
+		BUG_ON(msg->channel >= omap_port->sst.channels);
+		queue = &omap_port->txqueue[msg->channel];
+	} else {
+		BUG_ON(msg->channel >= omap_port->ssr.channels);
+		queue = &omap_port->rxqueue[msg->channel];
+	}
+	msg->status = HSI_STATUS_QUEUED;
+	spin_lock_bh(&omap_port->lock);
+	list_add_tail(&msg->link, queue);
+	err = ssi_start_transfer(queue);
+	if (err < 0) {
+		list_del(&msg->link);
+		msg->status = HSI_STATUS_ERROR;
+	}
+	spin_unlock_bh(&omap_port->lock);
+	dev_dbg(&port->device, "msg status %d ttype %d ch %d\n",
+				msg->status, msg->ttype, msg->channel);
+
+	return err;
+}
+
+static void ssi_flush_queue(struct list_head *queue, struct hsi_client *cl)
+{
+	struct list_head *node, *tmp;
+	struct hsi_msg *msg;
+
+	list_for_each_safe(node, tmp, queue) {
+		msg = list_entry(node, struct hsi_msg, link);
+		if ((cl) && (cl != msg->cl))
+			continue;
+		list_del(node);
+		pr_debug("flush queue: ch %d, msg %p len %d type %d ctxt %p\n",
+			msg->channel, msg, msg->sgt.sgl->length,
+					msg->ttype, msg->context);
+		if (msg->destructor)
+			msg->destructor(msg);
+		else
+			hsi_free_msg(msg);
+	}
+}
+
+static int ssi_setup(struct hsi_client *cl)
+{
+	struct hsi_port *port = to_hsi_port(cl->device.parent);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	void __iomem *sst = omap_port->sst_base;
+	void __iomem *ssr = omap_port->ssr_base;
+	u32 div;
+	u32 val;
+	int err = 0;
+
+	ssi_clk_enable(ssi);
+	spin_lock_bh(&omap_port->lock);
+	if (cl->tx_cfg.speed)
+		omap_ssi->max_speed = cl->tx_cfg.speed;
+	div = ssi_calculate_div(ssi);
+	if (div > SSI_MAX_DIVISOR) {
+		dev_err(&cl->device, "Invalid TX speed %d kb/s (div %d)\n",
+						cl->tx_cfg.speed, div);
+		err = -EINVAL;
+		goto out;
+	}
+	/* Set TX/RX module to sleep to stop TX/RX during cfg update */
+	__raw_writel(SSI_MODE_SLEEP, sst + SSI_SST_MODE_REG);
+	__raw_writel(SSI_MODE_SLEEP, ssr + SSI_SSR_MODE_REG);
+	/* Flush posted write */
+	val = __raw_readl(ssr + SSI_SSR_MODE_REG);
+	/* TX */
+	__raw_writel(31, sst + SSI_SST_FRAMESIZE_REG);
+	__raw_writel(div, sst + SSI_SST_DIVISOR_REG);
+	__raw_writel(cl->tx_cfg.channels, sst + SSI_SST_CHANNELS_REG);
+	__raw_writel(cl->tx_cfg.arb_mode, sst + SSI_SST_ARBMODE_REG);
+	__raw_writel(cl->tx_cfg.mode, sst + SSI_SST_MODE_REG);
+	/* RX */
+	__raw_writel(31, ssr + SSI_SSR_FRAMESIZE_REG);
+	__raw_writel(cl->rx_cfg.channels, ssr + SSI_SSR_CHANNELS_REG);
+	__raw_writel(0, ssr + SSI_SSR_TIMEOUT_REG);
+	/* Cleanup the break queue if we leave FRAME mode */
+	if ((omap_port->ssr.mode == SSI_MODE_FRAME) &&
+		(cl->rx_cfg.mode != SSI_MODE_FRAME))
+		ssi_flush_queue(&omap_port->brkqueue, cl);
+	__raw_writel(cl->rx_cfg.mode, ssr + SSI_SSR_MODE_REG);
+	omap_port->channels = max(cl->rx_cfg.channels, cl->tx_cfg.channels);
+	/* Save shadow registers for OFF mode */
+	/* SST */
+	omap_port->sst.divisor = div;
+	omap_port->sst.frame_size = 31;
+	omap_port->sst.channels = cl->tx_cfg.channels;
+	omap_port->sst.arb_mode = cl->tx_cfg.arb_mode;
+	omap_port->sst.mode = cl->tx_cfg.mode;
+	/* SSR */
+	omap_port->ssr.frame_size = 31;
+	omap_port->ssr.timeout = 0;
+	omap_port->ssr.channels = cl->rx_cfg.channels;
+	omap_port->ssr.mode = cl->rx_cfg.mode;
+out:
+	spin_unlock_bh(&omap_port->lock);
+	ssi_clk_disable(ssi);
+
+	return err;
+}
+
+static void ssi_cleanup_queues(struct hsi_client *cl)
+{
+	struct hsi_port *port = hsi_get_port(cl);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	struct hsi_msg *msg;
+	unsigned int i;
+	u32 rxbufstate = 0;
+	u32 txbufstate = 0;
+	u32 status = SSI_ERROROCCURED;
+	u32 tmp;
+
+	ssi_flush_queue(&omap_port->brkqueue, cl);
+	if (list_empty(&omap_port->brkqueue))
+		status |= SSI_BREAKDETECTED;
+
+	for (i = 0; i < omap_port->channels; i++) {
+		if (list_empty(&omap_port->txqueue[i]))
+			continue;
+		msg = list_first_entry(&omap_port->txqueue[i], struct hsi_msg,
+									link);
+		if ((msg->cl == cl) && (msg->status == HSI_STATUS_PROCEEDING)) {
+			txbufstate |= (1 << i);
+			status |= SSI_DATAACCEPT(i);
+			/* Release the clock reference taken for writes, GDD ones too */
+			ssi_clk_disable(ssi);
+		}
+		ssi_flush_queue(&omap_port->txqueue[i], cl);
+	}
+	for (i = 0; i < omap_port->channels; i++) {
+		if (list_empty(&omap_port->rxqueue[i]))
+			continue;
+		msg = list_first_entry(&omap_port->rxqueue[i], struct hsi_msg,
+									link);
+		if ((msg->cl == cl) && (msg->status == HSI_STATUS_PROCEEDING)) {
+			rxbufstate |= (1 << i);
+			status |= SSI_DATAAVAILABLE(i);
+		}
+		ssi_flush_queue(&omap_port->rxqueue[i], cl);
+		/* Check if we keep the error detection interrupt armed */
+		if (!list_empty(&omap_port->rxqueue[i]))
+			status &= ~SSI_ERROROCCURED;
+	}
+	/* Cleanup write buffers */
+	tmp = __raw_readl(omap_port->sst_base + SSI_SST_BUFSTATE_REG);
+	tmp &= ~txbufstate;
+	__raw_writel(tmp, omap_port->sst_base + SSI_SST_BUFSTATE_REG);
+	/* Cleanup read buffers */
+	tmp = __raw_readl(omap_port->ssr_base + SSI_SSR_BUFSTATE_REG);
+	tmp &= ~rxbufstate;
+	__raw_writel(tmp, omap_port->ssr_base + SSI_SSR_BUFSTATE_REG);
+	/* Disarm and ack pending interrupts */
+	tmp = __raw_readl(omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	tmp &= ~status;
+	__raw_writel(tmp, omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	__raw_writel(status, omap_ssi->sys + SSI_MPU_STATUS_REG(port->num, 0));
+}
+
+static void ssi_cleanup_gdd(struct hsi_controller *ssi, struct hsi_client *cl)
+{
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	struct hsi_msg *msg;
+	unsigned int i;
+	u32 val = 0;
+	u32 tmp;
+
+	for (i = 0; i < SSI_MAX_GDD_LCH; i++) {
+		msg = omap_ssi->gdd_trn[i].msg;
+		if ((!msg) || (msg->cl != cl))
+			continue;
+		__raw_writew(0, omap_ssi->gdd + SSI_GDD_CCR_REG(i));
+		val |= (1 << i);
+		/*
+		 * Clock references for write will be handled in
+		 * ssi_cleanup_queues
+		 */
+		if (msg->ttype == HSI_MSG_READ)
+			ssi_clk_disable(ssi);
+		omap_ssi->gdd_trn[i].msg = NULL;
+	}
+	tmp = __raw_readl(omap_ssi->sys + SSI_GDD_MPU_IRQ_ENABLE_REG);
+	tmp &= ~val;
+	__raw_writel(tmp, omap_ssi->sys + SSI_GDD_MPU_IRQ_ENABLE_REG);
+	__raw_writel(val, omap_ssi->sys + SSI_GDD_MPU_IRQ_STATUS_REG);
+}
+
+static int ssi_release(struct hsi_client *cl)
+{
+	struct hsi_port *port = hsi_get_port(cl);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+
+	spin_lock_bh(&omap_port->lock);
+	ssi_clk_enable(ssi);
+	/* Stop all the pending DMA requests for that client */
+	ssi_cleanup_gdd(ssi, cl);
+	/* Now cleanup all the queues */
+	ssi_cleanup_queues(cl);
+	ssi_clk_disable(ssi);
+	/* If it is the last client of the port, do extra checks and cleanup */
+	if (port->claimed <= 1) {
+		/*
+		 * Drop the clock reference for the incoming wake line
+		 * if it is still kept high by the other side.
+		 */
+		if (omap_port->wkin_cken) {
+			ssi_clk_disable(ssi);
+			omap_port->wkin_cken = 0;
+		}
+		ssi_clk_enable(ssi);
+		/* Stop any SSI TX/RX without a client */
+		ssi_set_mode(ssi, SSI_MODE_SLEEP);
+		omap_port->sst.mode = SSI_MODE_SLEEP;
+		omap_port->ssr.mode = SSI_MODE_SLEEP;
+		ssi_clk_disable(ssi);
+		WARN_ON(omap_port->wk_refcount != 0);
+		WARN_ON(omap_ssi->ck_refcount != 0);
+	}
+	spin_unlock_bh(&omap_port->lock);
+
+	return 0;
+}
+
+static int ssi_flush(struct hsi_client *cl)
+{
+	struct hsi_port *port = hsi_get_port(cl);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	struct hsi_msg *msg;
+	void __iomem *sst = omap_port->sst_base;
+	void __iomem *ssr = omap_port->ssr_base;
+	unsigned int i;
+	u32 err;
+
+	ssi_clk_enable(ssi);
+	spin_lock_bh(&omap_port->lock);
+	/* Stop all DMA transfers */
+	for (i = 0; i < SSI_MAX_GDD_LCH; i++) {
+		msg = omap_ssi->gdd_trn[i].msg;
+		if (!msg || (port != hsi_get_port(msg->cl)))
+			continue;
+		__raw_writew(0, omap_ssi->gdd + SSI_GDD_CCR_REG(i));
+		if (msg->ttype == HSI_MSG_READ)
+			ssi_clk_disable(ssi);
+		omap_ssi->gdd_trn[i].msg = NULL;
+	}
+	/* Flush all SST buffers */
+	__raw_writel(0, sst + SSI_SST_BUFSTATE_REG);
+	__raw_writel(0, sst + SSI_SST_TXSTATE_REG);
+	/* Flush all SSR buffers */
+	__raw_writel(0, ssr + SSI_SSR_RXSTATE_REG);
+	__raw_writel(0, ssr + SSI_SSR_BUFSTATE_REG);
+	/* Flush all errors */
+	err = __raw_readl(ssr + SSI_SSR_ERROR_REG);
+	__raw_writel(err, ssr + SSI_SSR_ERRORACK_REG);
+	/* Flush break */
+	__raw_writel(0, ssr + SSI_SSR_BREAK_REG);
+	/* Clear interrupts */
+	__raw_writel(0, omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	__raw_writel(0xffffff00,
+			omap_ssi->sys + SSI_MPU_STATUS_REG(port->num, 0));
+	__raw_writel(0, omap_ssi->sys + SSI_GDD_MPU_IRQ_ENABLE_REG);
+	__raw_writel(0xff, omap_ssi->sys + SSI_GDD_MPU_IRQ_STATUS_REG);
+	/* Dequeue all pending requests */
+	for (i = 0; i < omap_port->channels; i++) {
+		/* Release write clocks */
+		if (!list_empty(&omap_port->txqueue[i]))
+			ssi_clk_disable(ssi);
+		ssi_flush_queue(&omap_port->txqueue[i], NULL);
+		ssi_flush_queue(&omap_port->rxqueue[i], NULL);
+	}
+	ssi_flush_queue(&omap_port->brkqueue, NULL);
+	spin_unlock_bh(&omap_port->lock);
+	ssi_clk_disable(ssi);
+
+	return 0;
+}
+
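+/*
+ * Assert the outgoing wake line and take a clock reference on the first
+ * start_tx; further callers only increase the refcount.
+ */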
+static int ssi_start_tx(struct hsi_client *cl)
+{
+	struct hsi_port *port = hsi_get_port(cl);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+
+	dev_dbg(&port->device, "Wake out high %d\n", omap_port->wk_refcount);
+
+	spin_lock_bh(&omap_port->wk_lock);
+	if (omap_port->wk_refcount++) {
+		spin_unlock_bh(&omap_port->wk_lock);
+		return 0;
+	}
+	ssi_clk_enable(ssi); /* Grab clocks */
+	__raw_writel(SSI_WAKE(0), omap_ssi->sys + SSI_SET_WAKE_REG(port->num));
+	spin_unlock_bh(&omap_port->wk_lock);
+
+	return 0;
+}
+
+static int ssi_stop_tx(struct hsi_client *cl)
+{
+	struct hsi_port *port = hsi_get_port(cl);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+
+	dev_dbg(&port->device, "Wake out low %d\n", omap_port->wk_refcount);
+
+	spin_lock_bh(&omap_port->wk_lock);
+	BUG_ON(!omap_port->wk_refcount);
+	if (--omap_port->wk_refcount) {
+		spin_unlock_bh(&omap_port->wk_lock);
+		return 0;
+	}
+	__raw_writel(SSI_WAKE(0),
+				omap_ssi->sys + SSI_CLEAR_WAKE_REG(port->num));
+	ssi_clk_disable(ssi); /* Release clocks */
+	spin_unlock_bh(&omap_port->wk_lock);
+
+	return 0;
+}
+
+static void ssi_pio_complete(struct hsi_port *port, struct list_head *queue)
+{
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct hsi_msg *msg;
+	u32 *buf;
+	u32 reg;
+	u32 val;
+
+	spin_lock(&omap_port->lock);
+	msg = list_first_entry(queue, struct hsi_msg, link);
+	if ((!msg->sgt.nents) || (!msg->sgt.sgl->length)) {
+		msg->actual_len = 0;
+		msg->status = HSI_STATUS_PENDING;
+	}
+	if (msg->ttype == HSI_MSG_WRITE)
+		val = SSI_DATAACCEPT(msg->channel);
+	else
+		val = SSI_DATAAVAILABLE(msg->channel);
+	if (msg->status == HSI_STATUS_PROCEEDING) {
+		buf = sg_virt(msg->sgt.sgl) + msg->actual_len;
+		if (msg->ttype == HSI_MSG_WRITE)
+			__raw_writel(*buf, omap_port->sst_base +
+					SSI_SST_BUFFER_CH_REG(msg->channel));
+		else
+			*buf = __raw_readl(omap_port->ssr_base +
+					SSI_SSR_BUFFER_CH_REG(msg->channel));
+		dev_dbg(&port->device, "ch %d ttype %d 0x%08x\n", msg->channel,
+							msg->ttype, *buf);
+		msg->actual_len += sizeof(*buf);
+		if (msg->actual_len >= msg->sgt.sgl->length)
+			msg->status = HSI_STATUS_COMPLETED;
+		/*
+		 * Wait for the last written frame to be really sent before
+		 * we call the complete callback
+		 */
+		if ((msg->status == HSI_STATUS_PROCEEDING) ||
+				((msg->status == HSI_STATUS_COMPLETED) &&
+					(msg->ttype == HSI_MSG_WRITE))) {
+			__raw_writel(val, omap_ssi->sys +
+					SSI_MPU_STATUS_REG(port->num, 0));
+			spin_unlock(&omap_port->lock);
+
+			return;
+		}
+	}
+	/* Transfer completed at this point */
+	reg = __raw_readl(omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	if (msg->ttype == HSI_MSG_WRITE)
+		ssi_clk_disable(ssi); /* Release clocks for write transfer */
+	reg &= ~val;
+	__raw_writel(reg, omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	__raw_writel(val, omap_ssi->sys + SSI_MPU_STATUS_REG(port->num, 0));
+	list_del(&msg->link);
+	spin_unlock(&omap_port->lock);
+	msg->complete(msg);
+	ssi_transfer(omap_port, queue);
+}
+
+static void ssi_gdd_complete(struct hsi_controller *ssi, unsigned int lch)
+{
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	struct hsi_msg *msg = omap_ssi->gdd_trn[lch].msg;
+	struct hsi_port *port = to_hsi_port(msg->cl->device.parent);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	unsigned int dir;
+	u32 csr;
+	u32 val;
+
+	spin_lock(&omap_ssi->lock);
+
+	val = __raw_readl(omap_ssi->sys + SSI_GDD_MPU_IRQ_ENABLE_REG);
+	val &= ~SSI_GDD_LCH(lch);
+	__raw_writel(val, omap_ssi->sys + SSI_GDD_MPU_IRQ_ENABLE_REG);
+
+	if (msg->ttype == HSI_MSG_READ) {
+		dir = DMA_FROM_DEVICE;
+		val = SSI_DATAAVAILABLE(msg->channel);
+		ssi_clk_disable(ssi);
+	} else {
+		dir = DMA_TO_DEVICE;
+		val = SSI_DATAACCEPT(msg->channel);
+		/* Keep the clock reference for the write PIO completion event */
+	}
+	dma_unmap_sg(&ssi->device, msg->sgt.sgl, msg->sgt.nents, dir);
+	csr = __raw_readw(omap_ssi->gdd + SSI_GDD_CSR_REG(lch));
+	omap_ssi->gdd_trn[lch].msg = NULL; /* release GDD lch */
+	dev_dbg(&port->device, "DMA completed ch %d ttype %d\n",
+				msg->channel, msg->ttype);
+	spin_unlock(&omap_ssi->lock);
+	if (csr & SSI_CSR_TOUR) { /* Timeout error */
+		msg->status = HSI_STATUS_ERROR;
+		msg->actual_len = 0;
+		spin_lock(&omap_port->lock);
+		list_del(&msg->link); /* Dequeue msg */
+		spin_unlock(&omap_port->lock);
+		msg->complete(msg);
+		return;
+	}
+	spin_lock(&omap_port->lock);
+	val |= __raw_readl(omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	__raw_writel(val, omap_ssi->sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	spin_unlock(&omap_port->lock);
+
+	msg->status = HSI_STATUS_COMPLETED;
+	msg->actual_len = sg_dma_len(msg->sgt.sgl);
+}
+
+static void ssi_gdd_tasklet(unsigned long dev)
+{
+	struct hsi_controller *ssi = (struct hsi_controller *)dev;
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	void __iomem *sys = omap_ssi->sys;
+	unsigned int lch;
+	u32 status_reg;
+
+	ssi_clk_enable(ssi);
+
+	status_reg = __raw_readl(sys + SSI_GDD_MPU_IRQ_STATUS_REG);
+	for (lch = 0; lch < SSI_MAX_GDD_LCH; lch++) {
+		if (status_reg & SSI_GDD_LCH(lch))
+			ssi_gdd_complete(ssi, lch);
+	}
+	__raw_writel(status_reg, sys + SSI_GDD_MPU_IRQ_STATUS_REG);
+	status_reg = __raw_readl(sys + SSI_GDD_MPU_IRQ_STATUS_REG);
+	ssi_clk_disable(ssi);
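+	/*
+	 * Reschedule the tasklet if new events arrived while processing,
+	 * otherwise let the GDD interrupt line fire again.
+	 */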
+	if (status_reg)
+		tasklet_hi_schedule(&omap_ssi->gdd_tasklet);
+	else
+		enable_irq(omap_ssi->gdd_irq);
+}
+
+static irqreturn_t ssi_gdd_isr(int irq, void *ssi)
+{
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+
+	tasklet_hi_schedule(&omap_ssi->gdd_tasklet);
+	disable_irq_nosync(irq);
+
+	return IRQ_HANDLED;
+}
+
+static void ssi_pio_tasklet(unsigned long ssi_port)
+{
+	struct hsi_port *port = (struct hsi_port *)ssi_port;
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	void __iomem *sys = omap_ssi->sys;
+	unsigned int ch;
+	u32 status_reg;
+
+	ssi_clk_enable(ssi);
+	status_reg = __raw_readl(sys + SSI_MPU_STATUS_REG(port->num, 0));
+	status_reg &= __raw_readl(sys + SSI_MPU_ENABLE_REG(port->num, 0));
+
+	for (ch = 0; ch < omap_port->channels; ch++) {
+		if (status_reg & SSI_DATAACCEPT(ch))
+			ssi_pio_complete(port, &omap_port->txqueue[ch]);
+		if (status_reg & SSI_DATAAVAILABLE(ch))
+			ssi_pio_complete(port, &omap_port->rxqueue[ch]);
+	}
+	if (status_reg & SSI_BREAKDETECTED)
+		ssi_break_complete(port);
+	if (status_reg & SSI_ERROROCCURED)
+		ssi_error(port);
+
+	status_reg = __raw_readl(sys + SSI_MPU_STATUS_REG(port->num, 0));
+	status_reg &= __raw_readl(sys + SSI_MPU_ENABLE_REG(port->num, 0));
+	ssi_clk_disable(ssi);
+
+	if (status_reg)
+		tasklet_hi_schedule(&omap_port->pio_tasklet);
+	else
+		enable_irq(omap_port->irq);
+}
+
+static irqreturn_t ssi_pio_isr(int irq, void *port)
+{
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+
+	tasklet_hi_schedule(&omap_port->pio_tasklet);
+	disable_irq_nosync(irq);
+
+	return IRQ_HANDLED;
+}
+
+static void ssi_wake_tasklet(unsigned long ssi_port)
+{
+	struct hsi_port *port = (struct hsi_port *)ssi_port;
+	struct hsi_controller *ssi = to_hsi_controller(port->device.parent);
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+
+	if (ssi_wakein(port)) {
+		/*
+		 * We can have a quick High-Low-High transition on the line.
+		 * In such a case, if we have long interrupt latencies,
+		 * we can miss the low event or get the high event twice.
+		 * This workaround avoids breaking the clock reference
+		 * count when such a situation occurs.
+		 */
+		spin_lock(&omap_port->lock);
+		if (!omap_port->wkin_cken) {
+			omap_port->wkin_cken = 1;
+			ssi_clk_enable(ssi);
+		}
+		spin_unlock(&omap_port->lock);
+		dev_dbg(&ssi->device, "Wake in high\n");
+		hsi_event(port, HSI_EVENT_START_RX);
+	} else {
+		dev_dbg(&ssi->device, "Wake in low\n");
+		hsi_event(port, HSI_EVENT_STOP_RX);
+		spin_lock(&omap_port->lock);
+		if (omap_port->wkin_cken) {
+			ssi_clk_disable(ssi);
+			omap_port->wkin_cken = 0;
+		}
+		spin_unlock(&omap_port->lock);
+	}
+}
+
+static irqreturn_t ssi_wake_isr(int irq __maybe_unused, void *ssi_port)
+{
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(ssi_port);
+
+	tasklet_hi_schedule(&omap_port->wake_tasklet);
+
+	return IRQ_HANDLED;
+}
+
+static int __init ssi_port_irq(struct hsi_port *port,
+						struct platform_device *pd)
+{
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct resource *irq;
+	int err;
+
+	irq = platform_get_resource(pd, IORESOURCE_IRQ, (port->num * 3) + 1);
+	if (!irq) {
+		dev_err(&port->device, "Port IRQ resource missing\n");
+		return -ENXIO;
+	}
+	omap_port->irq = irq->start;
+	tasklet_init(&omap_port->pio_tasklet, ssi_pio_tasklet,
+							(unsigned long)port);
+	err = devm_request_irq(&pd->dev, omap_port->irq, ssi_pio_isr,
+						IRQF_DISABLED, irq->name, port);
+	if (err < 0)
+		dev_err(&port->device, "Request IRQ %d failed (%d)\n",
+							omap_port->irq, err);
+	return err;
+}
+
+static int __init ssi_wake_irq(struct hsi_port *port,
+						struct platform_device *pd)
+{
+	struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
+	struct resource *irq;
+	int err;
+
+	irq = platform_get_resource(pd, IORESOURCE_IRQ, (port->num * 3) + 3);
+	if (!irq) {
+		dev_err(&port->device, "Wake in IRQ resource missing\n");
+		return -ENXIO;
+	}
+	if (irq->flags & IORESOURCE_UNSET) {
+		dev_info(&port->device, "No Wake in support\n");
+		omap_port->wake_irq = -1;
+		return 0;
+	}
+	omap_port->wake_irq = irq->start;
+	tasklet_init(&omap_port->wake_tasklet, ssi_wake_tasklet,
+							(unsigned long)port);
+	err = devm_request_irq(&pd->dev, omap_port->wake_irq, ssi_wake_isr,
+		IRQF_DISABLED | IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
+							irq->name, port);
+	if (err < 0) {
+		dev_err(&port->device, "Request Wake in IRQ %d failed %d\n",
+						omap_port->wake_irq, err);
+		return err;
+	}
+	err = enable_irq_wake(omap_port->wake_irq);
+	if (err < 0)
+		dev_err(&port->device, "Enable wake on the wake in IRQ %d"
+				" failed %d\n", omap_port->wake_irq, err);
+
+	return err;
+}
+
+static void __init ssi_queues_init(struct omap_ssi_port *omap_port)
+{
+	unsigned int ch;
+
+	for (ch = 0; ch < SSI_MAX_CHANNELS; ch++) {
+		INIT_LIST_HEAD(&omap_port->txqueue[ch]);
+		INIT_LIST_HEAD(&omap_port->rxqueue[ch]);
+	}
+	INIT_LIST_HEAD(&omap_port->brkqueue);
+}
+
+static int __init ssi_get_iomem(struct platform_device *pd,
+		unsigned int num, void __iomem **pbase, dma_addr_t *phy)
+{
+	struct resource *mem;
+	struct resource *ioarea;
+	void __iomem *base;
+
+	mem = platform_get_resource(pd, IORESOURCE_MEM, num);
+	if (!mem) {
+		dev_err(&pd->dev, "IO memory region missing (%d)\n", num);
+		return -ENXIO;
+	}
+	ioarea = devm_request_mem_region(&pd->dev, mem->start,
+					resource_size(mem), dev_name(&pd->dev));
+	if (!ioarea) {
+		dev_err(&pd->dev, "%s IO memory region request failed\n",
+								mem->name);
+		return -ENXIO;
+	}
+	base = devm_ioremap(&pd->dev, mem->start, resource_size(mem));
+	if (!base) {
+		dev_err(&pd->dev, "%s IO remap failed\n", mem->name);
+		return -ENXIO;
+	}
+	*pbase = base;
+
+	if (phy)
+		*phy = mem->start;
+
+	return 0;
+}
+
+static int __init ssi_ports_init(struct hsi_controller *ssi,
+					struct platform_device *pd)
+{
+	struct hsi_port *port;
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	struct omap_ssi_port *omap_port;
+	unsigned int i;
+	int err;
+
+	omap_ssi->port = devm_kzalloc(&pd->dev,
+				sizeof(omap_port) * ssi->num_ports, GFP_KERNEL);
+	if (!omap_ssi->port)
+		return -ENOMEM;
+
+	for (i = 0; i < ssi->num_ports; i++) {
+		port = &ssi->port[i];
+		omap_port = devm_kzalloc(&pd->dev, sizeof(*omap_port),
+								GFP_KERNEL);
+		if (!omap_port)
+			return -ENOMEM;
+		port->async = ssi_async;
+		port->setup = ssi_setup;
+		port->flush = ssi_flush;
+		port->start_tx = ssi_start_tx;
+		port->stop_tx = ssi_stop_tx;
+		port->release = ssi_release;
+		hsi_port_set_drvdata(port, omap_port);
+		/* Get SST base addresses*/
+		err = ssi_get_iomem(pd, ((i * 2) + 2), &omap_port->sst_base,
+							&omap_port->sst_dma);
+		if (err < 0)
+			return err;
+		/* Get SSR base addresses */
+		err = ssi_get_iomem(pd, ((i * 2) + 3), &omap_port->ssr_base,
+							&omap_port->ssr_dma);
+		if (err < 0)
+			return err;
+		err = ssi_port_irq(port, pd);
+		if (err < 0)
+			return err;
+		err = ssi_wake_irq(port, pd);
+		if (err < 0)
+			return err;
+		ssi_queues_init(omap_port);
+		spin_lock_init(&omap_port->lock);
+		spin_lock_init(&omap_port->wk_lock);
+		omap_port->dev = &port->device;
+		omap_ssi->port[i] = omap_port;
+	}
+
+	return 0;
+}
+
+static void ssi_ports_exit(struct hsi_controller *ssi)
+{
+	struct omap_ssi_port *omap_port;
+	unsigned int i;
+
+	for (i = 0; i < ssi->num_ports; i++) {
+		omap_port = hsi_port_drvdata(&ssi->port[i]);
+		tasklet_kill(&omap_port->wake_tasklet);
+		tasklet_kill(&omap_port->pio_tasklet);
+	}
+}
+
+static void ssi_clk_release(struct device *dev __maybe_unused, void *res)
+{
+	struct ssi_clk_res *r = res;
+
+	clk_put(r->clk);
+}
+
+static struct clk *__init ssi_devm_clk_get(struct device *dev, const char *id)
+{
+	struct ssi_clk_res *pclk;
+	struct clk *clk;
+
+	pclk = devres_alloc(ssi_clk_release, sizeof(*pclk), GFP_KERNEL);
+	if (!pclk) {
+		dev_err(dev, "Could not allocate the device resource entry\n");
+		return ERR_PTR(-ENOMEM);
+	}
+	clk = clk_get(dev, id);
+	if (IS_ERR(clk)) {
+		dev_err(dev, "clock get %s failed %li\n", id, PTR_ERR(clk));
+		devres_free(pclk);
+	} else {
+		pclk->clk = clk;
+		devres_add(dev, pclk);
+	}
+
+	return clk;
+}
+
+static int __init ssi_add_controller(struct hsi_controller *ssi,
+						struct platform_device *pd)
+{
+	struct omap_ssi_platform_data *omap_ssi_pdata = pd->dev.platform_data;
+	struct omap_ssi_controller *omap_ssi;
+	struct resource *irq;
+	int err;
+
+	omap_ssi = devm_kzalloc(&pd->dev, sizeof(*omap_ssi), GFP_KERNEL);
+	if (!omap_ssi) {
+		dev_err(&pd->dev, "not enough memory for omap ssi\n");
+		return -ENOMEM;
+	}
+	ssi->id = pd->id;
+	ssi->owner = THIS_MODULE;
+	ssi->device.parent = &pd->dev;
+	dev_set_name(&ssi->device, "ssi%d", ssi->id);
+	hsi_controller_set_drvdata(ssi, omap_ssi);
+	omap_ssi->dev = &ssi->device;
+	err = ssi_get_iomem(pd, 0, &omap_ssi->sys, NULL);
+	if (err < 0)
+		return err;
+	err = ssi_get_iomem(pd, 1, &omap_ssi->gdd, NULL);
+	if (err < 0)
+		return err;
+	irq = platform_get_resource(pd, IORESOURCE_IRQ, 0);
+	if (!irq) {
+		dev_err(&pd->dev, "GDD IRQ resource missing\n");
+		return -ENXIO;
+	}
+	omap_ssi->gdd_irq = irq->start;
+	tasklet_init(&omap_ssi->gdd_tasklet, ssi_gdd_tasklet,
+							(unsigned long)ssi);
+	err = devm_request_irq(&pd->dev, omap_ssi->gdd_irq, ssi_gdd_isr,
+						IRQF_DISABLED, irq->name, ssi);
+	if (err < 0) {
+		dev_err(&ssi->device, "Request GDD IRQ %d failed (%d)",
+							omap_ssi->gdd_irq, err);
+		return err;
+	}
+	err = ssi_ports_init(ssi, pd);
+	if (err < 0)
+		return err;
+	omap_ssi->get_loss = omap_ssi_pdata->get_dev_context_loss_count;
+	omap_ssi->max_speed = UINT_MAX;
+	spin_lock_init(&omap_ssi->lock);
+	spin_lock_init(&omap_ssi->ck_lock);
+	omap_ssi->ick = ssi_devm_clk_get(&pd->dev, "ssi_ick");
+	if (IS_ERR(omap_ssi->ick))
+		return PTR_ERR(omap_ssi->ick);
+	omap_ssi->fck = ssi_devm_clk_get(&pd->dev, "ssi_ssr_fck");
+	if (IS_ERR(omap_ssi->fck))
+		return PTR_ERR(omap_ssi->fck);
+	err = hsi_register_controller(ssi);
+
+	return err;
+}
+
+static int __init ssi_hw_init(struct hsi_controller *ssi)
+{
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+	unsigned int i;
+	u32 val;
+	int err;
+
+	err = ssi_clk_enable(ssi);
+	if (err < 0) {
+		dev_err(&ssi->device, "Failed to enable the clocks %d\n", err);
+		return err;
+	}
+	/* Resetting the SSI controller */
+	__raw_writel(SSI_SOFTRESET, omap_ssi->sys + SSI_SYSCONFIG_REG);
+	val = __raw_readl(omap_ssi->sys + SSI_SYSSTATUS_REG);
+	for (i = 0; ((i < 20) && !(val & SSI_RESETDONE)); i++) {
+		msleep(20);
+		val = __raw_readl(omap_ssi->sys + SSI_SYSSTATUS_REG);
+	}
+	if (!(val & SSI_RESETDONE)) {
+		dev_err(&ssi->device, "SSI HW reset failed\n");
+		ssi_clk_disable(ssi);
+		return -EIO;
+	}
+	/* Resetting the GDD */
+	__raw_writel(SSI_SWRESET, omap_ssi->gdd + SSI_GDD_GRST_REG);
+	/* Get FCK rate */
+	omap_ssi->fck_rate = clk_get_rate(omap_ssi->fck) / 1000; /* kHz */
+	dev_dbg(&ssi->device, "SSI fck rate %lu kHz\n", omap_ssi->fck_rate);
+	/* Set default PM settings */
+	val = SSI_AUTOIDLE | SSI_SIDLEMODE_SMART | SSI_MIDLEMODE_SMART;
+	__raw_writel(val, omap_ssi->sys + SSI_SYSCONFIG_REG);
+	omap_ssi->sysconfig = val;
+	__raw_writel(SSI_CLK_AUTOGATING_ON, omap_ssi->sys + SSI_GDD_GCR_REG);
+	omap_ssi->gdd_gcr = SSI_CLK_AUTOGATING_ON;
+	ssi_clk_disable(ssi);
+
+	return 0;
+}
+
+static void ssi_remove_controller(struct hsi_controller *ssi)
+{
+	struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
+
+	ssi_ports_exit(ssi);
+	tasklet_kill(&omap_ssi->gdd_tasklet);
+	hsi_unregister_controller(ssi);
+}
+
+static int __init ssi_probe(struct platform_device *pd)
+{
+	struct omap_ssi_platform_data *omap_ssi_pdata = pd->dev.platform_data;
+	struct hsi_controller *ssi;
+	int err;
+
+	if (!omap_ssi_pdata) {
+		dev_err(&pd->dev, "No OMAP SSI platform data\n");
+		return -EINVAL;
+	}
+	ssi = hsi_alloc_controller(omap_ssi_pdata->num_ports, GFP_KERNEL);
+	if (!ssi) {
+		dev_err(&pd->dev, "No memory for controller\n");
+		return -ENOMEM;
+	}
+	platform_set_drvdata(pd, ssi);
+	err = ssi_add_controller(ssi, pd);
+	if (err < 0)
+		goto out1;
+	err = ssi_hw_init(ssi);
+	if (err < 0)
+		goto out2;
+#ifdef CONFIG_DEBUG_FS
+	err = ssi_debug_add_ctrl(ssi);
+	if (err < 0)
+		goto out2;
+#endif
+	return err;
+out2:
+	ssi_remove_controller(ssi);
+out1:
+	platform_set_drvdata(pd, NULL);
+	hsi_free_controller(ssi);
+
+	return err;
+}
+
+static int __exit ssi_remove(struct platform_device *pd)
+{
+	struct hsi_controller *ssi = platform_get_drvdata(pd);
+
+#ifdef CONFIG_DEBUG_FS
+	ssi_debug_remove_ctrl(ssi);
+#endif
+	ssi_remove_controller(ssi);
+	platform_set_drvdata(pd, NULL);
+	hsi_free_controller(ssi);
+
+	return 0;
+}
+
+static struct platform_driver ssi_pdriver = {
+	.remove	= __exit_p(ssi_remove),
+	.driver	= {
+			.name	= "omap_ssi",
+			.owner	= THIS_MODULE,
+	},
+};
+
+static int __init omap_ssi_init(void)
+{
+	pr_info("OMAP SSI hw driver loaded\n");
+	return platform_driver_probe(&ssi_pdriver, ssi_probe);
+}
+module_init(omap_ssi_init);
+
+static void __exit omap_ssi_exit(void)
+{
+	platform_driver_unregister(&ssi_pdriver);
+	pr_info("OMAP SSI driver removed\n");
+}
+module_exit(omap_ssi_exit);
+
+MODULE_ALIAS("platform:omap_ssi");
+MODULE_AUTHOR("Carlos Chinea <carlos.chinea@nokia.com>");
+MODULE_DESCRIPTION("Synchronous Serial Interface Driver");
+MODULE_LICENSE("GPL v2");
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC PATCHv5 3/7] HSI: omap_ssi: Add OMAP SSI to the kernel configuration
  2011-06-10 13:38 [RFC PATCHv5 0/7] HSI framework and drivers Carlos Chinea
  2011-06-10 13:38 ` [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework Carlos Chinea
  2011-06-10 13:38 ` [RFC PATCHv5 2/7] HSI: omap_ssi: Introducing OMAP SSI driver Carlos Chinea
@ 2011-06-10 13:38 ` Carlos Chinea
  2011-06-10 13:38 ` [RFC PATCHv5 4/7] HSI: hsi_char: Add HSI char device driver Carlos Chinea
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-06-10 13:38 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-omap

Add OMAP SSI device and driver to the kernel configuration

Signed-off-by: Carlos Chinea <carlos.chinea@nokia.com>
---
 arch/arm/mach-omap2/Makefile     |    2 ++
 drivers/hsi/Kconfig              |    2 ++
 drivers/hsi/Makefile             |    1 +
 drivers/hsi/controllers/Kconfig  |   23 +++++++++++++++++++++++
 drivers/hsi/controllers/Makefile |    5 +++++
 5 files changed, 33 insertions(+), 0 deletions(-)
 create mode 100644 drivers/hsi/controllers/Kconfig
 create mode 100644 drivers/hsi/controllers/Makefile

diff --git a/arch/arm/mach-omap2/Makefile b/arch/arm/mach-omap2/Makefile
index b148077..b192954 100644
--- a/arch/arm/mach-omap2/Makefile
+++ b/arch/arm/mach-omap2/Makefile
@@ -171,6 +171,8 @@ obj-y					+= $(i2c-omap-m) $(i2c-omap-y)
 ifneq ($(CONFIG_TIDSPBRIDGE),)
 obj-y					+= dsp.o
 endif
+omap-ssi-$(CONFIG_OMAP_SSI)		:= ssi.o
+obj-y					+= $(omap-ssi-m) $(omap-ssi-y)
 
 # Specific board support
 obj-$(CONFIG_MACH_OMAP_GENERIC)		+= board-generic.o
diff --git a/drivers/hsi/Kconfig b/drivers/hsi/Kconfig
index 937062e..bba73ea 100644
--- a/drivers/hsi/Kconfig
+++ b/drivers/hsi/Kconfig
@@ -14,4 +14,6 @@ config HSI_BOARDINFO
 	bool
 	default y
 
+source "drivers/hsi/controllers/Kconfig"
+
 endif # HSI
diff --git a/drivers/hsi/Makefile b/drivers/hsi/Makefile
index ed94a3a..0de87bd 100644
--- a/drivers/hsi/Makefile
+++ b/drivers/hsi/Makefile
@@ -3,3 +3,4 @@
 #
 obj-$(CONFIG_HSI_BOARDINFO)	+= hsi_boardinfo.o
 obj-$(CONFIG_HSI)		+= hsi.o
+obj-y				+= controllers/
diff --git a/drivers/hsi/controllers/Kconfig b/drivers/hsi/controllers/Kconfig
new file mode 100644
index 0000000..3efe0f0
--- /dev/null
+++ b/drivers/hsi/controllers/Kconfig
@@ -0,0 +1,23 @@
+#
+# HSI controllers configuration
+#
+comment "HSI controllers"
+
+config OMAP_SSI
+	tristate "OMAP SSI hardware driver"
+	depends on ARCH_OMAP && HSI
+	default n
+	---help---
+	  SSI is a legacy version of HSI. It is usually used to connect
+	  an application engine with a cellular modem.
+	  If you say Y here, you will enable the OMAP SSI hardware driver.
+
+	  If unsure, say N.
+
+if OMAP_SSI
+
+config OMAP_SSI_CONFIG
+	boolean
+	default y
+
+endif # OMAP_SSI
diff --git a/drivers/hsi/controllers/Makefile b/drivers/hsi/controllers/Makefile
new file mode 100644
index 0000000..c4ba2c2
--- /dev/null
+++ b/drivers/hsi/controllers/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for HSI controllers drivers
+#
+
+obj-$(CONFIG_OMAP_SSI)		+= omap_ssi.o
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC PATCHv5 4/7] HSI: hsi_char: Add HSI char device driver
  2011-06-10 13:38 [RFC PATCHv5 0/7] HSI framework and drivers Carlos Chinea
                   ` (2 preceding siblings ...)
  2011-06-10 13:38 ` [RFC PATCHv5 3/7] HSI: omap_ssi: Add OMAP SSI to the kernel configuration Carlos Chinea
@ 2011-06-10 13:38 ` Carlos Chinea
  2011-06-22 19:37   ` Sjur Brændeland
  2011-06-10 13:38 ` [RFC PATCHv5 5/7] HSI: hsi_char: Add HSI char device kernel configuration Carlos Chinea
                   ` (5 subsequent siblings)
  9 siblings, 1 reply; 35+ messages in thread
From: Carlos Chinea @ 2011-06-10 13:38 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-omap, Andras Domokos, Alan Cox

From: Andras Domokos <andras.domokos@nokia.com>

Add HSI char device driver to the kernel.

Signed-off-by: Andras Domokos <andras.domokos@nokia.com>
Signed-off-by: Carlos Chinea <carlos.chinea@nokia.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
---
 drivers/hsi/clients/hsi_char.c |  804 ++++++++++++++++++++++++++++++++++++++++
 include/linux/hsi/hsi_char.h   |   65 ++++
 2 files changed, 869 insertions(+), 0 deletions(-)
 create mode 100644 drivers/hsi/clients/hsi_char.c
 create mode 100644 include/linux/hsi/hsi_char.h
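
For reference, here is a minimal userspace sketch of how a client might drive
one hsi_char channel. It assumes a device node for the driver's dynamically
allocated major has been created by hand (the name /dev/hsi_char0 below is
hypothetical) and that the port is already configured. Transfers are 32-bit
framed, so buffer sizes should be multiples of 4 and no larger than the
max_data_size module parameter.

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		uint32_t tx[4] = { 0xcafecafe, 1, 2, 3 };
		uint32_t rx[4];
		int fd;

		fd = open("/dev/hsi_char0", O_RDWR); /* hypothetical node name */
		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* Both calls block until the request completes or fails */
		if (write(fd, tx, sizeof(tx)) < 0)
			perror("write");
		if (read(fd, rx, sizeof(rx)) < 0)
			perror("read");
		close(fd);

		return 0;
	}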

diff --git a/drivers/hsi/clients/hsi_char.c b/drivers/hsi/clients/hsi_char.c
new file mode 100644
index 0000000..ad56bfd
--- /dev/null
+++ b/drivers/hsi/clients/hsi_char.c
@@ -0,0 +1,804 @@
+/*
+ * hsi_char.c
+ *
+ * HSI character device driver, implements the character device
+ * interface.
+ *
+ * Copyright (C) 2010 Nokia Corporation. All rights reserved.
+ *
+ * Contact: Andras Domokos <andras.domokos@nokia.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ */
+
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/atomic.h>
+#include <linux/kernel.h>
+#include <linux/kmemleak.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/ioctl.h>
+#include <linux/wait.h>
+#include <linux/fs.h>
+#include <linux/sched.h>
+#include <linux/device.h>
+#include <linux/cdev.h>
+#include <linux/uaccess.h>
+#include <linux/scatterlist.h>
+#include <linux/stat.h>
+#include <linux/hsi/hsi.h>
+#include <linux/hsi/hsi_char.h>
+
+#define HSC_DEVS		16 /* Num of channels */
+#define HSC_MSGS		4
+
+#define HSC_RXBREAK		0
+
+#define HSC_ID_BITS		6
+#define HSC_PORT_ID_BITS	4
+#define HSC_ID_MASK		3
+#define HSC_PORT_ID_MASK	3
+#define HSC_CH_MASK		0xf
+
+/*
+ * We support up to 4 controllers that can have up to 4
+ * ports, which should currently be more than enough.
+ */
+#define HSC_BASEMINOR(id, port_id) \
+		((((id) & HSC_ID_MASK) << HSC_ID_BITS) | \
+		(((port_id) & HSC_PORT_ID_MASK) << HSC_PORT_ID_BITS))
+
+enum {
+	HSC_CH_OPEN,
+	HSC_CH_READ,
+	HSC_CH_WRITE,
+	HSC_CH_WLINE,
+};
+
+enum {
+	HSC_RX,
+	HSC_TX,
+};
+
+struct hsc_client_data;
+/**
+ * struct hsc_channel - hsi_char internal channel data
+ * @ch: channel number
+ * @flags: Keeps state of the channel (open/close, reading, writing)
+ * @free_msgs_list: List of free HSI messages/requests
+ * @rx_msgs_queue: List of pending RX requests
+ * @tx_msgs_queue: List of pending TX requests
+ * @lock: Serialize access to the lists
+ * @cl: reference to the associated hsi_client
+ * @cl_data: reference to the client data that this channel belongs to
+ * @rx_wait: RX requests wait queue
+ * @tx_wait: TX requests wait queue
+ */
+struct hsc_channel {
+	unsigned int		ch;
+	unsigned long		flags;
+	struct list_head	free_msgs_list;
+	struct list_head	rx_msgs_queue;
+	struct list_head	tx_msgs_queue;
+	spinlock_t		lock;
+	struct hsi_client	*cl;
+	struct hsc_client_data *cl_data;
+	wait_queue_head_t	rx_wait;
+	wait_queue_head_t	tx_wait;
+};
+
+/**
+ * struct hsc_client_data - hsi_char internal client data
+ * @cdev: Character device associated with the hsi_client
+ * @lock: Lock to serialize open/close access
+ * @flags: Keeps track of port state (rx hwbreak armed)
+ * @usecnt: Use count for claiming the HSI port (mutex protected)
+ * @cl: Reference to the HSI client
+ * @channels: Array of channels accessible by the client
+ */
+struct hsc_client_data {
+	struct cdev		cdev;
+	struct mutex		lock;
+	unsigned long		flags;
+	unsigned int		usecnt;
+	struct hsi_client	*cl;
+	struct hsc_channel	channels[HSC_DEVS];
+};
+
+/* Stores the major number dynamically allocated for hsi_char */
+static unsigned int hsc_major;
+/* Maximum buffer size that hsi_char will accept from userspace */
+static unsigned int max_data_size = 0x1000;
+module_param(max_data_size, uint, S_IRUSR | S_IWUSR);
+MODULE_PARM_DESC(max_data_size, "max read/write data size [4,8..65536] (^2)");
+
+static void hsc_add_tail(struct hsc_channel *channel, struct hsi_msg *msg,
+							struct list_head *queue)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&channel->lock, flags);
+	list_add_tail(&msg->link, queue);
+	spin_unlock_irqrestore(&channel->lock, flags);
+}
+
+static struct hsi_msg *hsc_get_first_msg(struct hsc_channel *channel,
+							struct list_head *queue)
+{
+	struct hsi_msg *msg = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&channel->lock, flags);
+
+	if (list_empty(queue))
+		goto out;
+
+	msg = list_first_entry(queue, struct hsi_msg, link);
+	list_del(&msg->link);
+out:
+	spin_unlock_irqrestore(&channel->lock, flags);
+
+	return msg;
+}
+
+static inline void hsc_msg_free(struct hsi_msg *msg)
+{
+	kfree(sg_virt(msg->sgt.sgl));
+	hsi_free_msg(msg);
+}
+
+static void hsc_free_list(struct list_head *list)
+{
+	struct hsi_msg *msg, *tmp;
+
+	list_for_each_entry_safe(msg, tmp, list, link) {
+		list_del(&msg->link);
+		hsc_msg_free(msg);
+	}
+}
+
+static void hsc_reset_list(struct hsc_channel *channel, struct list_head *l)
+{
+	unsigned long flags;
+	LIST_HEAD(list);
+
+	spin_lock_irqsave(&channel->lock, flags);
+	list_splice_init(l, &list);
+	spin_unlock_irqrestore(&channel->lock, flags);
+
+	hsc_free_list(&list);
+}
+
+static inline struct hsi_msg *hsc_msg_alloc(unsigned int alloc_size)
+{
+	struct hsi_msg *msg;
+	void *buf;
+
+	msg = hsi_alloc_msg(1, GFP_KERNEL);
+	if (!msg)
+		goto out;
+	buf = kmalloc(alloc_size, GFP_KERNEL);
+	if (!buf) {
+		hsi_free_msg(msg);
+		goto out;
+	}
+	sg_init_one(msg->sgt.sgl, buf, alloc_size);
+	/* Ignore false positive, due to sg pointer handling */
+	kmemleak_ignore(buf);
+
+	return msg;
+out:
+	return NULL;
+}
+
+static inline int hsc_msgs_alloc(struct hsc_channel *channel)
+{
+	struct hsi_msg *msg;
+	int i;
+
+	for (i = 0; i < HSC_MSGS; i++) {
+		msg = hsc_msg_alloc(max_data_size);
+		if (!msg)
+			goto out;
+		msg->channel = channel->ch;
+		list_add_tail(&msg->link, &channel->free_msgs_list);
+	}
+
+	return 0;
+out:
+	hsc_free_list(&channel->free_msgs_list);
+
+	return -ENOMEM;
+}
+
+static inline unsigned int hsc_msg_len_get(struct hsi_msg *msg)
+{
+	return msg->sgt.sgl->length;
+}
+
+static inline void hsc_msg_len_set(struct hsi_msg *msg, unsigned int len)
+{
+	msg->sgt.sgl->length = len;
+}
+
+static void hsc_rx_completed(struct hsi_msg *msg)
+{
+	struct hsc_client_data *cl_data = hsi_client_drvdata(msg->cl);
+	struct hsc_channel *channel = cl_data->channels + msg->channel;
+
+	if (test_bit(HSC_CH_READ, &channel->flags)) {
+		hsc_add_tail(channel, msg, &channel->rx_msgs_queue);
+		wake_up(&channel->rx_wait);
+	} else {
+		hsc_add_tail(channel, msg, &channel->free_msgs_list);
+	}
+}
+
+static void hsc_rx_msg_destructor(struct hsi_msg *msg)
+{
+	msg->status = HSI_STATUS_ERROR;
+	hsc_msg_len_set(msg, 0);
+	hsc_rx_completed(msg);
+}
+
+static void hsc_tx_completed(struct hsi_msg *msg)
+{
+	struct hsc_client_data *cl_data = hsi_client_drvdata(msg->cl);
+	struct hsc_channel *channel = cl_data->channels + msg->channel;
+
+	if (test_bit(HSC_CH_WRITE, &channel->flags)) {
+		hsc_add_tail(channel, msg, &channel->tx_msgs_queue);
+		wake_up(&channel->tx_wait);
+	} else {
+		hsc_add_tail(channel, msg, &channel->free_msgs_list);
+	}
+}
+
+static void hsc_tx_msg_destructor(struct hsi_msg *msg)
+{
+	msg->status = HSI_STATUS_ERROR;
+	hsc_msg_len_set(msg, 0);
+	hsc_tx_completed(msg);
+}
+
+static void hsc_break_req_destructor(struct hsi_msg *msg)
+{
+	struct hsc_client_data *cl_data = hsi_client_drvdata(msg->cl);
+
+	hsi_free_msg(msg);
+	clear_bit(HSC_RXBREAK, &cl_data->flags);
+}
+
+static void hsc_break_received(struct hsi_msg *msg)
+{
+	struct hsc_client_data *cl_data = hsi_client_drvdata(msg->cl);
+	struct hsc_channel *channel = cl_data->channels;
+	int i, ret;
+
+	/* Broadcast HWBREAK on all channels */
+	for (i = 0; i < HSC_DEVS; i++, channel++) {
+		struct hsi_msg *msg2;
+
+		if (!test_bit(HSC_CH_READ, &channel->flags))
+			continue;
+		msg2 = hsc_get_first_msg(channel, &channel->free_msgs_list);
+		if (!msg2)
+			continue;
+		clear_bit(HSC_CH_READ, &channel->flags);
+		hsc_msg_len_set(msg2, 0);
+		msg2->status = HSI_STATUS_COMPLETED;
+		hsc_add_tail(channel, msg2, &channel->rx_msgs_queue);
+		wake_up(&channel->rx_wait);
+	}
+	hsi_flush(msg->cl);
+	ret = hsi_async_read(msg->cl, msg);
+	if (ret < 0)
+		hsc_break_req_destructor(msg);
+}
+
+static int hsc_break_request(struct hsi_client *cl)
+{
+	struct hsc_client_data *cl_data = hsi_client_drvdata(cl);
+	struct hsi_msg *msg;
+	int ret;
+
+	if (test_and_set_bit(HSC_RXBREAK, &cl_data->flags))
+		return -EBUSY;
+
+	msg = hsi_alloc_msg(0, GFP_KERNEL);
+	if (!msg) {
+		clear_bit(HSC_RXBREAK, &cl_data->flags);
+		return -ENOMEM;
+	}
+	msg->break_frame = 1;
+	msg->complete = hsc_break_received;
+	msg->destructor = hsc_break_req_destructor;
+	ret = hsi_async_read(cl, msg);
+	if (ret < 0)
+		hsc_break_req_destructor(msg);
+
+	return ret;
+}
+
+static int hsc_break_send(struct hsi_client *cl)
+{
+	struct hsi_msg *msg;
+	int ret;
+
+	msg = hsi_alloc_msg(0, GFP_ATOMIC);
+	if (!msg)
+		return -ENOMEM;
+	msg->break_frame = 1;
+	msg->complete = hsi_free_msg;
+	msg->destructor = hsi_free_msg;
+	ret = hsi_async_write(cl, msg);
+	if (ret < 0)
+		hsi_free_msg(msg);
+
+	return ret;
+}
+
+static int hsc_rx_set(struct hsi_client *cl, struct hsc_rx_config *rxc)
+{
+	struct hsi_config tmp;
+	int ret;
+
+	if ((rxc->mode != HSI_MODE_STREAM) && (rxc->mode != HSI_MODE_FRAME))
+		return -EINVAL;
+	if ((rxc->channels == 0) || (rxc->channels > HSC_DEVS))
+		return -EINVAL;
+	if (rxc->channels & (rxc->channels - 1))
+		return -EINVAL;
+	if ((rxc->flow != HSI_FLOW_SYNC) && (rxc->flow != HSI_FLOW_PIPE))
+		return -EINVAL;
+	tmp = cl->rx_cfg;
+	cl->rx_cfg.mode = rxc->mode;
+	cl->rx_cfg.channels = rxc->channels;
+	cl->rx_cfg.flow = rxc->flow;
+	ret = hsi_setup(cl);
+	if (ret < 0) {
+		cl->rx_cfg = tmp;
+		return ret;
+	}
+	if (rxc->mode == HSI_MODE_FRAME)
+		hsc_break_request(cl);
+
+	return ret;
+}
+
+static inline void hsc_rx_get(struct hsi_client *cl, struct hsc_rx_config *rxc)
+{
+	rxc->mode = cl->rx_cfg.mode;
+	rxc->channels = cl->rx_cfg.channels;
+	rxc->flow = cl->rx_cfg.flow;
+}
+
+static int hsc_tx_set(struct hsi_client *cl, struct hsc_tx_config *txc)
+{
+	struct hsi_config tmp;
+	int ret;
+
+	if ((txc->mode != HSI_MODE_STREAM) && (txc->mode != HSI_MODE_FRAME))
+		return -EINVAL;
+	if ((txc->channels == 0) || (txc->channels > HSC_DEVS))
+		return -EINVAL;
+	if (txc->channels & (txc->channels - 1))
+		return -EINVAL;
+	if ((txc->arb_mode != HSI_ARB_RR) && (txc->arb_mode != HSI_ARB_PRIO))
+		return -EINVAL;
+	tmp = cl->tx_cfg;
+	cl->tx_cfg.mode = txc->mode;
+	cl->tx_cfg.channels = txc->channels;
+	cl->tx_cfg.speed = txc->speed;
+	cl->tx_cfg.arb_mode = txc->arb_mode;
+	ret = hsi_setup(cl);
+	if (ret < 0) {
+		cl->tx_cfg = tmp;
+		return ret;
+	}
+
+	return ret;
+}
+
+static inline void hsc_tx_get(struct hsi_client *cl, struct hsc_tx_config *txc)
+{
+	txc->mode = cl->tx_cfg.mode;
+	txc->channels = cl->tx_cfg.channels;
+	txc->speed = cl->tx_cfg.speed;
+	txc->arb_mode = cl->tx_cfg.arb_mode;
+}
+
+static ssize_t hsc_read(struct file *file, char __user *buf, size_t len,
+						loff_t *ppos __maybe_unused)
+{
+	struct hsc_channel *channel = file->private_data;
+	struct hsi_msg *msg;
+	ssize_t ret;
+
+	if (len == 0)
+		return 0;
+	if (!IS_ALIGNED(len, sizeof(u32)))
+		return -EINVAL;
+	if (len > max_data_size)
+		len = max_data_size;
+	if (channel->ch >= channel->cl->rx_cfg.channels)
+		return -ECHRNG;
+	if (test_and_set_bit(HSC_CH_READ, &channel->flags))
+		return -EBUSY;
+	msg = hsc_get_first_msg(channel, &channel->free_msgs_list);
+	if (!msg) {
+		ret = -ENOSPC;
+		goto out;
+	}
+	hsc_msg_len_set(msg, len);
+	msg->complete = hsc_rx_completed;
+	msg->destructor = hsc_rx_msg_destructor;
+	ret = hsi_async_read(channel->cl, msg);
+	if (ret < 0) {
+		hsc_add_tail(channel, msg, &channel->free_msgs_list);
+		goto out;
+	}
+
+	ret = wait_event_interruptible(channel->rx_wait,
+					!list_empty(&channel->rx_msgs_queue));
+	if (ret < 0) {
+		clear_bit(HSC_CH_READ, &channel->flags);
+		hsi_flush(channel->cl);
+		return -EINTR;
+	}
+
+	msg = hsc_get_first_msg(channel, &channel->rx_msgs_queue);
+	if (msg) {
+		if (msg->status != HSI_STATUS_ERROR) {
+			ret = copy_to_user((void __user *)buf,
+			sg_virt(msg->sgt.sgl), hsc_msg_len_get(msg));
+			if (ret)
+				ret = -EFAULT;
+			else
+				ret = hsc_msg_len_get(msg);
+		} else {
+			ret = -EIO;
+		}
+		hsc_add_tail(channel, msg, &channel->free_msgs_list);
+	}
+out:
+	clear_bit(HSC_CH_READ, &channel->flags);
+
+	return ret;
+}
+
+static ssize_t hsc_write(struct file *file, const char __user *buf, size_t len,
+						loff_t *ppos __maybe_unused)
+{
+	struct hsc_channel *channel = file->private_data;
+	struct hsi_msg *msg;
+	ssize_t ret;
+
+	if ((len == 0) || !IS_ALIGNED(len, sizeof(u32)))
+		return -EINVAL;
+	if (len > max_data_size)
+		len = max_data_size;
+	if (channel->ch >= channel->cl->tx_cfg.channels)
+		return -ECHRNG;
+	if (test_and_set_bit(HSC_CH_WRITE, &channel->flags))
+		return -EBUSY;
+	msg = hsc_get_first_msg(channel, &channel->free_msgs_list);
+	if (!msg) {
+		clear_bit(HSC_CH_WRITE, &channel->flags);
+		return -ENOSPC;
+	}
+	if (copy_from_user(sg_virt(msg->sgt.sgl), (void __user *)buf, len)) {
+		ret = -EFAULT;
+		goto out;
+	}
+	hsc_msg_len_set(msg, len);
+	msg->complete = hsc_tx_completed;
+	msg->destructor = hsc_tx_msg_destructor;
+	ret = hsi_async_write(channel->cl, msg);
+	if (ret < 0)
+		goto out;
+
+	ret = wait_event_interruptible(channel->tx_wait,
+					!list_empty(&channel->tx_msgs_queue));
+	if (ret < 0) {
+		clear_bit(HSC_CH_WRITE, &channel->flags);
+		hsi_flush(channel->cl);
+		return -EINTR;
+	}
+
+	msg = hsc_get_first_msg(channel, &channel->tx_msgs_queue);
+	if (msg) {
+		if (msg->status == HSI_STATUS_ERROR)
+			ret = -EIO;
+		else
+			ret = hsc_msg_len_get(msg);
+
+		hsc_add_tail(channel, msg, &channel->free_msgs_list);
+	}
+out:
+	clear_bit(HSC_CH_WRITE, &channel->flags);
+
+	return ret;
+}
+
+static long hsc_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+	struct hsc_channel *channel = file->private_data;
+	unsigned int state;
+	struct hsc_rx_config rxc;
+	struct hsc_tx_config txc;
+	long ret = 0;
+
+	switch (cmd) {
+	case HSC_RESET:
+		hsi_flush(channel->cl);
+		break;
+	case HSC_SET_PM:
+		if (copy_from_user(&state, (void __user *)arg, sizeof(state)))
+			return -EFAULT;
+		if (state == HSC_PM_DISABLE) {
+			if (test_and_set_bit(HSC_CH_WLINE, &channel->flags))
+				return -EINVAL;
+			ret = hsi_start_tx(channel->cl);
+		} else if (state == HSC_PM_ENABLE) {
+			if (!test_and_clear_bit(HSC_CH_WLINE, &channel->flags))
+				return -EINVAL;
+			ret = hsi_stop_tx(channel->cl);
+		} else {
+			ret = -EINVAL;
+		}
+		break;
+	case HSC_SEND_BREAK:
+		return hsc_break_send(channel->cl);
+	case HSC_SET_RX:
+		if (copy_from_user(&rxc, (void __user *)arg, sizeof(rxc)))
+			return -EFAULT;
+		return hsc_rx_set(channel->cl, &rxc);
+	case HSC_GET_RX:
+		hsc_rx_get(channel->cl, &rxc);
+		if (copy_to_user((void __user *)arg, &rxc, sizeof(rxc)))
+			return -EFAULT;
+		break;
+	case HSC_SET_TX:
+		if (copy_from_user(&txc, (void __user *)arg, sizeof(txc)))
+			return -EFAULT;
+		return hsc_tx_set(channel->cl, &txc);
+	case HSC_GET_TX:
+		hsc_tx_get(channel->cl, &txc);
+		if (copy_to_user((void __user *)arg, &txc, sizeof(txc)))
+			return -EFAULT;
+		break;
+	default:
+		return -ENOIOCTLCMD;
+	}
+
+	return ret;
+}
+
+static inline void __hsc_port_release(struct hsc_client_data *cl_data)
+{
+	BUG_ON(cl_data->usecnt == 0);
+
+	if (--cl_data->usecnt == 0) {
+		hsi_flush(cl_data->cl);
+		hsi_release_port(cl_data->cl);
+	}
+}
+
+static int hsc_open(struct inode *inode, struct file *file)
+{
+	struct hsc_client_data *cl_data;
+	struct hsc_channel *channel;
+	int ret = 0;
+
+	pr_debug("open, minor = %d\n", iminor(inode));
+
+	cl_data = container_of(inode->i_cdev, struct hsc_client_data, cdev);
+	mutex_lock(&cl_data->lock);
+	channel = cl_data->channels + (iminor(inode) & HSC_CH_MASK);
+
+	if (test_and_set_bit(HSC_CH_OPEN, &channel->flags)) {
+		ret = -EBUSY;
+		goto out;
+	}
+	/*
+	 * Check if we have already claimed the port associated with the HSI
+	 * client. If not, try to claim it; otherwise just increase its refcount.
+	 */
+	if (cl_data->usecnt == 0) {
+		ret = hsi_claim_port(cl_data->cl, 0);
+		if (ret < 0)
+			goto out;
+		hsi_setup(cl_data->cl);
+	}
+	cl_data->usecnt++;
+
+	ret = hsc_msgs_alloc(channel);
+	if (ret < 0) {
+		__hsc_port_release(cl_data);
+		goto out;
+	}
+
+	file->private_data = channel;
+	mutex_unlock(&cl_data->lock);
+
+	return ret;
+out:
+	mutex_unlock(&cl_data->lock);
+
+	return ret;
+}
+
+static int hsc_release(struct inode *inode __maybe_unused, struct file *file)
+{
+	struct hsc_channel *channel = file->private_data;
+	struct hsc_client_data *cl_data = channel->cl_data;
+
+	mutex_lock(&cl_data->lock);
+	file->private_data = NULL;
+	if (test_and_clear_bit(HSC_CH_WLINE, &channel->flags))
+		hsi_stop_tx(channel->cl);
+	__hsc_port_release(cl_data);
+	hsc_reset_list(channel, &channel->rx_msgs_queue);
+	hsc_reset_list(channel, &channel->tx_msgs_queue);
+	hsc_reset_list(channel, &channel->free_msgs_list);
+	clear_bit(HSC_CH_READ, &channel->flags);
+	clear_bit(HSC_CH_WRITE, &channel->flags);
+	clear_bit(HSC_CH_OPEN, &channel->flags);
+	wake_up(&channel->rx_wait);
+	wake_up(&channel->tx_wait);
+	mutex_unlock(&cl_data->lock);
+
+	return 0;
+}
+
+static const struct file_operations hsc_fops = {
+	.owner		= THIS_MODULE,
+	.read		= hsc_read,
+	.write		= hsc_write,
+	.unlocked_ioctl	= hsc_ioctl,
+	.open		= hsc_open,
+	.release	= hsc_release,
+};
+
+static void __devinit hsc_channel_init(struct hsc_channel *channel)
+{
+	init_waitqueue_head(&channel->rx_wait);
+	init_waitqueue_head(&channel->tx_wait);
+	spin_lock_init(&channel->lock);
+	INIT_LIST_HEAD(&channel->free_msgs_list);
+	INIT_LIST_HEAD(&channel->rx_msgs_queue);
+	INIT_LIST_HEAD(&channel->tx_msgs_queue);
+}
+
+static int __devinit hsc_probe(struct device *dev)
+{
+	const char devname[] = "hsi_char";
+	struct hsc_client_data *cl_data;
+	struct hsc_channel *channel;
+	struct hsi_client *cl = to_hsi_client(dev);
+	unsigned int hsc_baseminor;
+	dev_t hsc_dev;
+	int ret;
+	int i;
+
+	cl_data = kzalloc(sizeof(*cl_data), GFP_KERNEL);
+	if (!cl_data) {
+		dev_err(dev, "Could not allocate hsc_client_data\n");
+		return -ENOMEM;
+	}
+	hsc_baseminor = HSC_BASEMINOR(hsi_id(cl), hsi_port_id(cl));
+	if (!hsc_major) {
+		ret = alloc_chrdev_region(&hsc_dev, hsc_baseminor,
+						HSC_DEVS, devname);
+		if (ret == 0)
+			hsc_major = MAJOR(hsc_dev);
+	} else {
+		hsc_dev = MKDEV(hsc_major, hsc_baseminor);
+		ret = register_chrdev_region(hsc_dev, HSC_DEVS, devname);
+	}
+	if (ret < 0) {
+		dev_err(dev, "Device %s allocation failed %d\n",
+					hsc_major ? "minor" : "major", ret);
+		goto out1;
+	}
+	mutex_init(&cl_data->lock);
+	hsi_client_set_drvdata(cl, cl_data);
+	cdev_init(&cl_data->cdev, &hsc_fops);
+	cl_data->cdev.owner = THIS_MODULE;
+	cl_data->cl = cl;
+	for (i = 0, channel = cl_data->channels; i < HSC_DEVS; i++, channel++) {
+		hsc_channel_init(channel);
+		channel->ch = i;
+		channel->cl = cl;
+		channel->cl_data = cl_data;
+	}
+
+	/* 1 hsi client -> N char devices (one for each channel) */
+	ret = cdev_add(&cl_data->cdev, hsc_dev, HSC_DEVS);
+	if (ret) {
+		dev_err(dev, "Could not add char device %d\n", ret);
+		goto out2;
+	}
+
+	return 0;
+out2:
+	unregister_chrdev_region(hsc_dev, HSC_DEVS);
+out1:
+	kfree(cl_data);
+
+	return ret;
+}
+
+static int __devexit hsc_remove(struct device *dev)
+{
+	struct hsi_client *cl = to_hsi_client(dev);
+	struct hsc_client_data *cl_data = hsi_client_drvdata(cl);
+	dev_t hsc_dev = cl_data->cdev.dev;
+
+	cdev_del(&cl_data->cdev);
+	unregister_chrdev_region(hsc_dev, HSC_DEVS);
+	hsi_client_set_drvdata(cl, NULL);
+	kfree(cl_data);
+
+	return 0;
+}
+
+static struct hsi_client_driver hsc_driver = {
+	.driver = {
+		.name	= "hsi_char",
+		.owner	= THIS_MODULE,
+		.probe	= hsc_probe,
+		.remove	= __devexit_p(hsc_remove),
+	},
+};
+
+static int __init hsc_init(void)
+{
+	int ret;
+
+	if ((max_data_size < 4) || (max_data_size > 0x10000) ||
+		(max_data_size & (max_data_size - 1))) {
+		pr_err("Invalid max read/write data size");
+		return -EINVAL;
+	}
+
+	ret = hsi_register_client_driver(&hsc_driver);
+	if (ret) {
+		pr_err("Error while registering HSI/SSI driver %d", ret);
+		return ret;
+	}
+
+	pr_info("HSI/SSI char device loaded\n");
+
+	return 0;
+}
+module_init(hsc_init);
+
+static void __exit hsc_exit(void)
+{
+	hsi_unregister_client_driver(&hsc_driver);
+	pr_info("HSI char device removed\n");
+}
+module_exit(hsc_exit);
+
+MODULE_AUTHOR("Andras Domokos <andras.domokos@nokia.com>");
+MODULE_ALIAS("hsi:hsi_char");
+MODULE_DESCRIPTION("HSI character device");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/hsi/hsi_char.h b/include/linux/hsi/hsi_char.h
new file mode 100644
index 0000000..fc49897
--- /dev/null
+++ b/include/linux/hsi/hsi_char.h
@@ -0,0 +1,65 @@
+/*
+ * hsi_char.h
+ *
+ * Part of the HSI character device driver.
+ *
+ * Copyright (C) 2010 Nokia Corporation. All rights reserved.
+ *
+ * Contact: Andras Domokos <andras.domokos at nokia.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
+ * 02110-1301 USA
+ */
+
+
+#ifndef __HSI_CHAR_H
+#define __HSI_CHAR_H
+
+#define HSI_CHAR_MAGIC		'k'
+#define HSC_IOW(num, dtype)	_IOW(HSI_CHAR_MAGIC, num, dtype)
+#define HSC_IOR(num, dtype)	_IOR(HSI_CHAR_MAGIC, num, dtype)
+#define HSC_IOWR(num, dtype)	_IOWR(HSI_CHAR_MAGIC, num, dtype)
+#define HSC_IO(num)		_IO(HSI_CHAR_MAGIC, num)
+
+#define HSC_RESET		HSC_IO(16)
+#define HSC_SET_PM		HSC_IO(17)
+#define HSC_SEND_BREAK		HSC_IO(18)
+#define HSC_SET_RX		HSC_IOW(19, struct hsc_rx_config)
+#define HSC_GET_RX		HSC_IOW(20, struct hsc_rx_config)
+#define HSC_SET_TX		HSC_IOW(21, struct hsc_tx_config)
+#define HSC_GET_TX		HSC_IOW(22, struct hsc_tx_config)
+
+#define HSC_PM_DISABLE		0
+#define HSC_PM_ENABLE		1
+
+#define HSC_MODE_STREAM		1
+#define HSC_MODE_FRAME		2
+#define HSC_FLOW_SYNC		0
+#define HSC_ARB_RR		0
+#define HSC_ARB_PRIO		1
+
+struct hsc_rx_config {
+	uint32_t mode;
+	uint32_t flow;
+	uint32_t channels;
+};
+
+struct hsc_tx_config {
+	uint32_t mode;
+	uint32_t channels;
+	uint32_t speed;
+	uint32_t arb_mode;
+};
+
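+/*
+ * Usage sketch (not part of this header): configuring the RX side of a
+ * channel from user space. The device node name below is hypothetical,
+ * since device node creation is not automated yet.
+ *
+ *	struct hsc_rx_config rxc = {
+ *		.mode		= HSC_MODE_FRAME,
+ *		.flow		= HSC_FLOW_SYNC,
+ *		.channels	= 4,
+ *	};
+ *	int fd = open("/dev/hsi_char0", O_RDWR);
+ *
+ *	if (fd >= 0 && ioctl(fd, HSC_SET_RX, &rxc) < 0)
+ *		perror("HSC_SET_RX");
+ */
+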
+#endif /* __HSI_CHAR_H */
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC PATCHv5 5/7] HSI: hsi_char: Add HSI char device kernel configuration
  2011-06-10 13:38 [RFC PATCHv5 0/7] HSI framework and drivers Carlos Chinea
                   ` (3 preceding siblings ...)
  2011-06-10 13:38 ` [RFC PATCHv5 4/7] HSI: hsi_char: Add HSI char device driver Carlos Chinea
@ 2011-06-10 13:38 ` Carlos Chinea
  2011-06-10 13:38 ` [RFC PATCHv5 6/7] HSI: Add HSI API documentation Carlos Chinea
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-06-10 13:38 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-omap, Andras Domokos

From: Andras Domokos <andras.domokos@nokia.com>

Add HSI character device kernel configuration

Signed-off-by: Andras Domokos <andras.domokos@nokia.com>
Signed-off-by: Carlos Chinea <carlos.chinea@nokia.com>
---
 drivers/hsi/Kconfig          |    1 +
 drivers/hsi/Makefile         |    2 +-
 drivers/hsi/clients/Kconfig  |   13 +++++++++++++
 drivers/hsi/clients/Makefile |    5 +++++
 include/linux/Kbuild         |    1 +
 include/linux/hsi/Kbuild     |    1 +
 6 files changed, 22 insertions(+), 1 deletions(-)
 create mode 100644 drivers/hsi/clients/Kconfig
 create mode 100644 drivers/hsi/clients/Makefile
 create mode 100644 include/linux/hsi/Kbuild

diff --git a/drivers/hsi/Kconfig b/drivers/hsi/Kconfig
index bba73ea..2c76de4 100644
--- a/drivers/hsi/Kconfig
+++ b/drivers/hsi/Kconfig
@@ -15,5 +15,6 @@ config HSI_BOARDINFO
 	default y
 
 source "drivers/hsi/controllers/Kconfig"
+source "drivers/hsi/clients/Kconfig"
 
 endif # HSI
diff --git a/drivers/hsi/Makefile b/drivers/hsi/Makefile
index 0de87bd..d47ca5d 100644
--- a/drivers/hsi/Makefile
+++ b/drivers/hsi/Makefile
@@ -3,4 +3,4 @@
 #
 obj-$(CONFIG_HSI_BOARDINFO)	+= hsi_boardinfo.o
 obj-$(CONFIG_HSI)		+= hsi.o
-obj-y				+= controllers/
+obj-y				+= controllers/ clients/
diff --git a/drivers/hsi/clients/Kconfig b/drivers/hsi/clients/Kconfig
new file mode 100644
index 0000000..3bacd27
--- /dev/null
+++ b/drivers/hsi/clients/Kconfig
@@ -0,0 +1,13 @@
+#
+# HSI clients configuration
+#
+
+comment "HSI clients"
+
+config HSI_CHAR
+	tristate "HSI/SSI character driver"
+	depends on HSI
+	---help---
+	  If you say Y here, you will enable the HSI/SSI character driver.
+	  This driver provides a simple character device interface for
+	  serial communication with the cellular modem over the HSI/SSI bus.
diff --git a/drivers/hsi/clients/Makefile b/drivers/hsi/clients/Makefile
new file mode 100644
index 0000000..327c0e2
--- /dev/null
+++ b/drivers/hsi/clients/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for HSI clients
+#
+
+obj-$(CONFIG_HSI_CHAR)	+= hsi_char.o
diff --git a/include/linux/Kbuild b/include/linux/Kbuild
index 01f6362..d41e127 100644
--- a/include/linux/Kbuild
+++ b/include/linux/Kbuild
@@ -3,6 +3,7 @@ header-y += can/
 header-y += caif/
 header-y += dvb/
 header-y += hdlc/
+header-y += hsi/
 header-y += isdn/
 header-y += mmc/
 header-y += nfsd/
diff --git a/include/linux/hsi/Kbuild b/include/linux/hsi/Kbuild
new file mode 100644
index 0000000..271a770
--- /dev/null
+++ b/include/linux/hsi/Kbuild
@@ -0,0 +1 @@
+header-y += hsi_char.h
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC PATCHv5 6/7] HSI: Add HSI API documentation
  2011-06-10 13:38 [RFC PATCHv5 0/7] HSI framework and drivers Carlos Chinea
                   ` (4 preceding siblings ...)
  2011-06-10 13:38 ` [RFC PATCHv5 5/7] HSI: hsi_char: Add HSI char device kernel configuration Carlos Chinea
@ 2011-06-10 13:38 ` Carlos Chinea
  2011-06-10 13:38 ` [RFC PATCHv5 7/7] HSI: hsi_char: Update ioctl-number.txt Carlos Chinea
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-06-10 13:38 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-omap

Add an entry for HSI in the device-drivers section of
the kernel documentation.

Signed-off-by: Carlos Chinea <carlos.chinea@nokia.com>
---
 Documentation/DocBook/device-drivers.tmpl |   17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index b638e50..5f70f73 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -437,4 +437,21 @@ X!Idrivers/video/console/fonts.c
 !Edrivers/i2c/i2c-core.c
   </chapter>
 
+  <chapter id="hsi">
+     <title>High Speed Synchronous Serial Interface (HSI)</title>
+
+     <para>
+	High Speed Synchronous Serial Interface (HSI) is a
+	serial interface mainly used for connecting application
+	engines (APE) with cellular modem engines (CMT) in cellular
+	handsets.
+
+	HSI provides multiplexing for up to 16 logical channels,
+	low latency and full-duplex communication.
+     </para>
+
+!Iinclude/linux/hsi/hsi.h
+!Edrivers/hsi/hsi.c
+  </chapter>
+
 </book>
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [RFC PATCHv5 7/7] HSI: hsi_char: Update ioctl-number.txt
  2011-06-10 13:38 [RFC PATCHv5 0/7] HSI framework and drivers Carlos Chinea
                   ` (5 preceding siblings ...)
  2011-06-10 13:38 ` [RFC PATCHv5 6/7] HSI: Add HSI API documentation Carlos Chinea
@ 2011-06-10 13:38 ` Carlos Chinea
  2011-06-14  9:35 ` [RFC PATCHv5 0/7] HSI framework and drivers Alan Cox
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-06-10 13:38 UTC (permalink / raw)
  To: linux-kernel; +Cc: linux-omap, Andras Domokos

From: Andras Domokos <andras.domokos@nokia.com>

Added ioctl range for HSI char devices to the documentation

Signed-off-by: Andras Domokos <andras.domokos@nokia.com>
Signed-off-by: Carlos Chinea <carlos.chinea@nokia.com>
---
 Documentation/ioctl/ioctl-number.txt |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/Documentation/ioctl/ioctl-number.txt b/Documentation/ioctl/ioctl-number.txt
index 3a46e36..016ee45 100644
--- a/Documentation/ioctl/ioctl-number.txt
+++ b/Documentation/ioctl/ioctl-number.txt
@@ -223,6 +223,7 @@ Code  Seq#(hex)	Include File		Comments
 'j'	00-3F	linux/joystick.h
 'k'	00-0F	linux/spi/spidev.h	conflict!
 'k'	00-05	video/kyro.h		conflict!
+'k'	10-17	linux/hsi/hsi_char.h	HSI character device
 'l'	00-3F	linux/tcfs_fs.h		transparent cryptographic file system
 					<http://web.archive.org/web/*/http://mikonos.dia.unisa.it/tcfs>
 'l'	40-7F	linux/udf_fs_i.h	in development:
-- 
1.7.1


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 2/7] HSI: omap_ssi: Introducing OMAP SSI driver
  2011-06-10 13:38 ` [RFC PATCHv5 2/7] HSI: omap_ssi: Introducing OMAP SSI driver Carlos Chinea
@ 2011-06-13 13:21   ` Tony Lindgren
  2011-06-14 12:09     ` Carlos Chinea
  2011-06-13 20:21   ` Kevin Hilman
  1 sibling, 1 reply; 35+ messages in thread
From: Tony Lindgren @ 2011-06-13 13:21 UTC (permalink / raw)
  To: Carlos Chinea; +Cc: linux-kernel, linux-omap

* Carlos Chinea <carlos.chinea@nokia.com> [110610 06:41]:
> --- /dev/null
> +++ b/arch/arm/mach-omap2/ssi.c
> +static struct platform_device ssi_pdev = {
> +	.name		= "omap_ssi",
> +	.id		= 0,
> +	.num_resources	= ARRAY_SIZE(ssi_resources),
> +	.resource	= ssi_resources,
> +	.dev		= {
> +				.platform_data	= &ssi_pdata,
> +	},
> +};
> +
> +int __init omap_ssi_config(struct omap_ssi_board_config *ssi_config)
> +{
> +	unsigned int port, offset, cawake_gpio;
> +	int err;
> +
> +	ssi_pdata.num_ports = ssi_config->num_ports;
> +	for (port = 0, offset = 7; port < ssi_config->num_ports;
> +							port++, offset += 5) {
> +		cawake_gpio = ssi_config->cawake_gpio[port];
> +		if (!cawake_gpio)
> +			continue; /* Nothing to do */
> +		err = gpio_request(cawake_gpio, "cawake");
> +		if (err < 0)
> +			goto rback;
> +		gpio_direction_input(cawake_gpio);
> +		ssi_resources[offset].start = gpio_to_irq(cawake_gpio);
> +		ssi_resources[offset].flags &= ~IORESOURCE_UNSET;
> +		ssi_resources[offset].flags |= IORESOURCE_IRQ_HIGHEDGE |
> +							IORESOURCE_IRQ_LOWEDGE;
> +	}
> +
> +	return 0;
> +rback:
> +	dev_err(&ssi_pdev.dev, "Request cawake (gpio%d) failed\n", cawake_gpio);
> +	while (port > 0)
> +		gpio_free(ssi_config->cawake_gpio[--port]);
> +
> +	return err;
> +}
> +
> +static int __init ssi_init(void)
> +{
> +	return platform_device_register(&ssi_pdev);
> +}
> +subsys_initcall(ssi_init);

Looks like you also need something here to prevent this subsys_initcall
from running on all boards. Maybe have a pointer to ssi_pdev that only
gets initialized after omap_ssi_config?

Then you can have ssi_init fail if no configuration is called:

	if (!pdev)
		return -ENODEV;

	return platform_device_register(pdev);
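
I.e. something like this (untested sketch; it reuses ssi_pdev from the
patch, and "ssi_pdev_ptr" is just a name I made up):

	static struct platform_device *ssi_pdev_ptr;

	int __init omap_ssi_config(struct omap_ssi_board_config *ssi_config)
	{
		/* ... existing cawake gpio/irq setup unchanged ... */
		ssi_pdev_ptr = &ssi_pdev;	/* set only once a board configured SSI */

		return 0;
	}

	static int __init ssi_init(void)
	{
		if (!ssi_pdev_ptr)
			return -ENODEV;

		return platform_device_register(ssi_pdev_ptr);
	}
	subsys_initcall(ssi_init);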

Regards,

Tony

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 2/7] HSI: omap_ssi: Introducing OMAP SSI driver
  2011-06-10 13:38 ` [RFC PATCHv5 2/7] HSI: omap_ssi: Introducing OMAP SSI driver Carlos Chinea
  2011-06-13 13:21   ` Tony Lindgren
@ 2011-06-13 20:21   ` Kevin Hilman
  2011-06-14 12:12     ` Carlos Chinea
  1 sibling, 1 reply; 35+ messages in thread
From: Kevin Hilman @ 2011-06-13 20:21 UTC (permalink / raw)
  To: Carlos Chinea; +Cc: linux-kernel, linux-omap

Carlos Chinea <carlos.chinea@nokia.com> writes:

> Introduces the OMAP SSI driver in the kernel.
>
> The Synchronous Serial Interface (SSI) is a legacy version
> of HSI. As in the case of HSI, it is mainly used to connect
> Application engines (APE) with cellular modem engines (CMT)
> in cellular handsets.
>
> It provides a multichannel, full-duplex, multi-core communication
> with no reference clock. The OMAP SSI block is capable of reaching
> speeds of 110 Mbit/s.
>
> Signed-off-by: Carlos Chinea <carlos.chinea@nokia.com>
> ---
>  arch/arm/mach-omap2/ssi.c             |  134 +++
>  arch/arm/plat-omap/include/plat/ssi.h |  204 ++++
>  drivers/hsi/controllers/omap_ssi.c    | 1852 +++++++++++++++++++++++++++++++++
>  3 files changed, 2190 insertions(+), 0 deletions(-)
>  create mode 100644 arch/arm/mach-omap2/ssi.c
>  create mode 100644 arch/arm/plat-omap/include/plat/ssi.h
>  create mode 100644 drivers/hsi/controllers/omap_ssi.c
>
> diff --git a/arch/arm/mach-omap2/ssi.c b/arch/arm/mach-omap2/ssi.c
> new file mode 100644
> index 0000000..e822a77
> --- /dev/null
> +++ b/arch/arm/mach-omap2/ssi.c
> @@ -0,0 +1,134 @@
> +/*
> + * linux/arch/arm/mach-omap2/ssi.c

Minor: Please don't include filenames in the comments.  Files tend to move
around and these comments don't get updated.

> + * Copyright (C) 2010 Nokia Corporation. All rights reserved.
> + *
> + * Contact: Carlos Chinea <carlos.chinea@nokia.com>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License
> + * version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful, but
> + * WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
> + * 02110-1301 USA
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/init.h>
> +#include <linux/err.h>
> +#include <linux/gpio.h>
> +#include <linux/platform_device.h>
> +#include <plat/omap-pm.h>
> +#include <plat/ssi.h>
> +
> +static struct omap_ssi_platform_data ssi_pdata = {
> +	.num_ports			= SSI_NUM_PORTS,
> +	.get_dev_context_loss_count	= omap_pm_get_dev_context_loss_count,
> +};
> +
> +static struct resource ssi_resources[] = {
> +	/* SSI controller */
> +	[0] = {
> +		.start	= 0x48058000,
> +		.end	= 0x48058fff,
> +		.name	= "omap_ssi_sys",
> +		.flags	= IORESOURCE_MEM,
> +	},
> +	/* GDD */
> +	[1] = {
> +		.start	= 0x48059000,
> +		.end	= 0x48059fff,
> +		.name	= "omap_ssi_gdd",
> +		.flags	= IORESOURCE_MEM,
> +	},
> +	[2] = {
> +		.start	= 71,
> +		.end	= 71,
> +		.name	= "ssi_gdd",
> +		.flags	= IORESOURCE_IRQ,
> +	},
> +	/* SSI port 1 */
> +	[3] = {
> +		.start	= 0x4805a000,
> +		.end	= 0x4805a7ff,
> +		.name	= "omap_ssi_sst1",
> +		.flags	= IORESOURCE_MEM,
> +	},
> +	[4] = {
> +		.start	= 0x4805a800,
> +		.end	= 0x4805afff,
> +		.name	= "omap_ssi_ssr1",
> +		.flags	= IORESOURCE_MEM,
> +	},
> +	[5] = {
> +		.start	= 67,
> +		.end	= 67,
> +		.name	= "ssi_p1_mpu_irq0",
> +		.flags	= IORESOURCE_IRQ,
> +	},
> +	[6] = {
> +		.start	= 68,
> +		.end	= 68,
> +		.name	= "ssi_p1_mpu_irq1",
> +		.flags	= IORESOURCE_IRQ,
> +	},
> +	[7] = {
> +		.start	= 0,
> +		.end	= 0,
> +		.name	= "ssi_p1_cawake",
> +		.flags	= IORESOURCE_IRQ | IORESOURCE_UNSET,
> +	},
> +};
> +
> +static struct platform_device ssi_pdev = {
> +	.name		= "omap_ssi",
> +	.id		= 0,
> +	.num_resources	= ARRAY_SIZE(ssi_resources),
> +	.resource	= ssi_resources,
> +	.dev		= {
> +				.platform_data	= &ssi_pdata,
> +	},
> +};

omap_hwmod has all the base address and IRQ data, will construct the
struct resources and the platform_devices for you.  Please use
omap_hwmod + omap_device for this part of the code.

Kevin



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 0/7] HSI framework and drivers
  2011-06-10 13:38 [RFC PATCHv5 0/7] HSI framework and drivers Carlos Chinea
                   ` (6 preceding siblings ...)
  2011-06-10 13:38 ` [RFC PATCHv5 7/7] HSI: hsi_char: Update ioctl-number.txt Carlos Chinea
@ 2011-06-14  9:35 ` Alan Cox
  2011-06-15  9:27   ` Andras Domokos
  2011-06-22 19:11 ` [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework Sjur Brændeland
  2011-10-20 12:57 ` [RFC PATCHv5 0/7] HSI framework and drivers Sebastian Reichel
  9 siblings, 1 reply; 35+ messages in thread
From: Alan Cox @ 2011-06-14  9:35 UTC (permalink / raw)
  To: Carlos Chinea
  Cc: linux-kernel, linux-omap, sre, linus.walleij, govindraj.ti,
	pawel.szyszuk, sjur.brandeland, peter_henn

On Fri, 10 Jun 2011 16:38:37 +0300
Carlos Chinea <carlos.chinea@nokia.com> wrote:

> Hi !
> 
> Here you have the fifth round of the HSI framework patches.

Looks good to me - the only other oddity I found is that to build hsi_char
on x86 you need slab.h before kmemleak.h, otherwise the build errors out

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 2/7] HSI: omap_ssi: Introducing OMAP SSI driver
  2011-06-13 13:21   ` Tony Lindgren
@ 2011-06-14 12:09     ` Carlos Chinea
  0 siblings, 0 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-06-14 12:09 UTC (permalink / raw)
  To: ext Tony Lindgren; +Cc: linux-kernel, linux-omap

Hi,
-- cut--
> > +
> > +static int __init ssi_init(void)
> > +{
> > +	return platform_device_register(&ssi_pdev);
> > +}
> > +subsys_initcall(ssi_init);
> 
> Looks like you need something here also to prevent this subsys_initcall
> on running on all boards. Maybe have a pointer to ssi_pdev that only
> gets initialized after omap_ssi_config?
> 
> Then you can have ssi_init fail if no configuration is called:
> 
> 	if (!pdev)
> 		return -ENODEV;
> 
> 	return platform_device_register(pdev);
> 

Right. I'll do those changes.

Thanks,
Carlos


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 2/7] HSI: omap_ssi: Introducing OMAP SSI driver
  2011-06-13 20:21   ` Kevin Hilman
@ 2011-06-14 12:12     ` Carlos Chinea
  2011-06-15 15:37       ` Kevin Hilman
  0 siblings, 1 reply; 35+ messages in thread
From: Carlos Chinea @ 2011-06-14 12:12 UTC (permalink / raw)
  To: ext Kevin Hilman; +Cc: linux-kernel, linux-omap

On Mon, 2011-06-13 at 13:21 -0700, ext Kevin Hilman wrote:
> Carlos Chinea <carlos.chinea@nokia.com> writes:
> 
> > Introduces the OMAP SSI driver in the kernel.
> >
> > The Synchronous Serial Interface (SSI) is a legacy version
> > of HSI. As in the case of HSI, it is mainly used to connect
> > Application engines (APE) with cellular modem engines (CMT)
> > in cellular handsets.
> >
> > It provides a multichannel, full-duplex, multi-core communication
> > with no reference clock. The OMAP SSI block is capable of reaching
> > speeds of 110 Mbit/s.
> >
> > Signed-off-by: Carlos Chinea <carlos.chinea@nokia.com>
> > ---
> >  arch/arm/mach-omap2/ssi.c             |  134 +++
> >  arch/arm/plat-omap/include/plat/ssi.h |  204 ++++
> >  drivers/hsi/controllers/omap_ssi.c    | 1852 +++++++++++++++++++++++++++++++++
> >  3 files changed, 2190 insertions(+), 0 deletions(-)
> >  create mode 100644 arch/arm/mach-omap2/ssi.c
> >  create mode 100644 arch/arm/plat-omap/include/plat/ssi.h
> >  create mode 100644 drivers/hsi/controllers/omap_ssi.c
> >
> > diff --git a/arch/arm/mach-omap2/ssi.c b/arch/arm/mach-omap2/ssi.c
> > new file mode 100644
> > index 0000000..e822a77
> > --- /dev/null
> > +++ b/arch/arm/mach-omap2/ssi.c
> > @@ -0,0 +1,134 @@
> > +/*
> > + * linux/arch/arm/mach-omap2/ssi.c
> 
> Minor: Please don't include filenames in the comments.  Files tend to move
> around and these comments don't get updated.

Yep. I'll remove this from all the comments.

> 
> > + * Copyright (C) 2010 Nokia Corporation. All rights reserved.
> > + *
> > + * Contact: Carlos Chinea <carlos.chinea@nokia.com>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public License
> > + * version 2 as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope that it will be useful, but
> > + * WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > + * General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU General Public License
> > + * along with this program; if not, write to the Free Software
> > + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
> > + * 02110-1301 USA
> > + */
-- cut --

> > +static struct platform_device ssi_pdev = {
> > +	.name		= "omap_ssi",
> > +	.id		= 0,
> > +	.num_resources	= ARRAY_SIZE(ssi_resources),
> > +	.resource	= ssi_resources,
> > +	.dev		= {
> > +				.platform_data	= &ssi_pdata,
> > +	},
> > +};
> 
> omap_hwmod has all the base address and IRQ data, will construct the
> struct resources and the platform_devices for you.  Please use
> omap_hwmod + omap_device for this part of the code.
> 

Yes, it is already on my TODO list ;)

Thanks,
Carlos



^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 0/7] HSI framework and drivers
  2011-06-14  9:35 ` [RFC PATCHv5 0/7] HSI framework and drivers Alan Cox
@ 2011-06-15  9:27   ` Andras Domokos
  0 siblings, 0 replies; 35+ messages in thread
From: Andras Domokos @ 2011-06-15  9:27 UTC (permalink / raw)
  To: Alan Cox
  Cc: linux-kernel, linux-omap, sre, linus.walleij, govindraj.ti,
	pawel.szyszuk, sjur.brandeland, peter_henn, Carlos Chinea,
	Andras Domokos

Hi,

On 2011-06-14 09:35, ext Alan Cox wrote:
> On Fri, 10 Jun 2011 16:38:37 +0300
> Carlos Chinea <carlos.chinea@nokia.com> wrote:
>
>> Hi !
>>
>> Here you have the fifth round of the HSI framework patches.
>
> Looks good to me - the only other oddity I found is that to build hsi_char
> on x86 you need slab.h before kmemleak.h, otherwise the build errors out

We checked with Carlos why hsi_char was building fine for the ARM
architecture in our environment.
We found the reason: it was a kernel configuration issue after all. We
didn't have the CONFIG_DEBUG_KMEMLEAK option enabled in our .config file,
which is why the oddity you mentioned didn't show up in our case. But
once the kernel option was turned on, we started seeing the compilation
errors.

We'll fix the problem based on your comments.

Thank you!

Regards,
Andras

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 2/7] HSI: omap_ssi: Introducing OMAP SSI driver
  2011-06-14 12:12     ` Carlos Chinea
@ 2011-06-15 15:37       ` Kevin Hilman
  0 siblings, 0 replies; 35+ messages in thread
From: Kevin Hilman @ 2011-06-15 15:37 UTC (permalink / raw)
  To: Carlos Chinea; +Cc: linux-kernel, linux-omap

Carlos Chinea <carlos.chinea@nokia.com> writes:

> On Mon, 2011-06-13 at 13:21 -0700, ext Kevin Hilman wrote:
>> Carlos Chinea <carlos.chinea@nokia.com> writes:
>> 

[...]

>> > + * Copyright (C) 2010 Nokia Corporation. All rights reserved.
>> > + *
>> > + * Contact: Carlos Chinea <carlos.chinea@nokia.com>
>> > + *
>> > + * This program is free software; you can redistribute it and/or
>> > + * modify it under the terms of the GNU General Public License
>> > + * version 2 as published by the Free Software Foundation.
>> > + *
>> > + * This program is distributed in the hope that it will be useful, but
>> > + * WITHOUT ANY WARRANTY; without even the implied warranty of
>> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>> > + * General Public License for more details.
>> > + *
>> > + * You should have received a copy of the GNU General Public License
>> > + * along with this program; if not, write to the Free Software
>> > + * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
>> > + * 02110-1301 USA
>> > + */
> -- cut --
>
>> > +static struct platform_device ssi_pdev = {
>> > +	.name		= "omap_ssi",
>> > +	.id		= 0,
>> > +	.num_resources	= ARRAY_SIZE(ssi_resources),
>> > +	.resource	= ssi_resources,
>> > +	.dev		= {
>> > +				.platform_data	= &ssi_pdata,
>> > +	},
>> > +};
>> 
>> omap_hwmod has all the base address and IRQ data, will construct the
>> struct resources and the platform_devices for you.  Please use
>> omap_hwmod + omap_device for this part of the code.
>> 
>
> Yes, it  is already in my TODO list ;)
>

Great.  Just so you know, we are not taking new OMAP drivers unless they
are using omap_hwmod + omap_device to create the devices.

Kevin

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-06-10 13:38 [RFC PATCHv5 0/7] HSI framework and drivers Carlos Chinea
                   ` (7 preceding siblings ...)
  2011-06-14  9:35 ` [RFC PATCHv5 0/7] HSI framework and drivers Alan Cox
@ 2011-06-22 19:11 ` Sjur Brændeland
  2011-06-22 19:25   ` Linus Walleij
  2011-06-23 13:08   ` Carlos Chinea
  2011-10-20 12:57 ` [RFC PATCHv5 0/7] HSI framework and drivers Sebastian Reichel
  9 siblings, 2 replies; 35+ messages in thread
From: Sjur Brændeland @ 2011-06-22 19:11 UTC (permalink / raw)
  To: Carlos Chinea, sjurbren; +Cc: linux-omap, linux-kernel, Linus Walleij

Hi Carlos,

Some weeks ago I submitted a CAIF-HSI protocol driver for
Linux 3.0.1, located in drivers/net/caif in David Miller's net-next-2.6.
This driver depends on a platform specific "glue-layer".
It would be nice to adapt it to a generic HSI API, so I'm looking forward
to seeing this patch go upstream.

I have tried to investigate whether this API proposal fulfills the
needs of the CAIF HSI protocol used by the ST-Ericsson modems.

Please find my review comments below.


>+/**
>+ * struct hsi_config - Configuration for RX/TX HSI modules
>+ * @mode: Bit transmission mode (STREAM or FRAME)
>+ * @channels: Number of channels to use [1..16]
>+ * @speed: Max bit transmission speed (Kbit/s)

Just for clarity, maybe you should say whether you mean 1000 bit/s or 1024 bit/s?

>+ * @flow: RX flow type (SYNCHRONIZED or PIPELINE)
>+ * @arb_mode: Arbitration mode for TX frame (Round robin, priority)
>+ */
>+struct hsi_config {
>+	unsigned int	mode;
>+	unsigned int	channels;
>+	unsigned int	speed;

frame_size: Can we assume 32 bits is the only supported frame size,
or should this be configurable?

>+	union {
>+		unsigned int	flow;		/* RX only */
>+		unsigned int	arb_mode;	/* TX only */
>+	};

It would be useful to have the following RX counters:
                      unsigned int    frame_timeout_counter:16;
                      unsigned int    tailing_bit_counter:8;
                      unsigned int    frame_burst_counter:8;
>+};
...
>+
>+/**
>+ * struct hsi_board_info - HSI client board info
>+ * @name: Name for the HSI device
>+ * @hsi_id: HSI controller id where the client sits
>+ * @port: Port number in the controller where the client sits
>+ * @tx_cfg: HSI TX configuration
>+ * @rx_cfg: HSI RX configuration
>+ * @platform_data: Platform related data
>+ * @archdata: Architecture-dependent device data
>+ */
>+struct hsi_board_info {
>+	const char		*name;
>+	unsigned int		hsi_id;
>+	unsigned int		port;
>+	struct hsi_config	tx_cfg;
>+	struct hsi_config	rx_cfg;
>+	void			*platform_data;
>+	struct dev_archdata	*archdata;
>+};

What about information about the supported transmission speeds?
Is there any way to obtain this information, so we can know the legal
values for speed in hsi_config?

Can we really assume that all HW supports linked DMA jobs (scatter list),
or do we need some information about DMA support in board_info as well?

...
>+/**
>+ * struct hsi_msg - HSI message descriptor
>+ * @link: Free to use by the current descriptor owner
>+ * @cl: HSI device client that issues the transfer
>+ * @sgt: Head of the scatterlist array
>+ * @context: Client context data associated to the transfer
>+ * @complete: Transfer completion callback
>+ * @destructor: Destructor to free resources when flushing
>+ * @status: Status of the transfer when completed

I guess you refer to the enum "HSI message status codes".
I think this would be more readable if you didn't use anonymous enums,
but used the enum name to reference the enum here in the documentation.

>+ * @actual_len: Actual length of data transfered on completion
>+ * @channel: Channel were to TX/RX the message
>+ * @ttype: Transfer type (TX if set, RX otherwise)
>+ * @break_frame: if true HSI will send/receive a break frame. Data buffers are
>+ *		ignored in the request.
>+ */
>+struct hsi_msg {
>+	struct list_head	link;
>+	struct hsi_client	*cl;
>+	struct sg_table		sgt;
>+	void			*context;
>+
>+	void			(*complete)(struct hsi_msg *msg);
>+	void			(*destructor)(struct hsi_msg *msg);
>+
>+	int			status;
>+	unsigned int		actual_len;

size_t ?

>+	unsigned int		channel;
>+	unsigned int		ttype:1;
>+	unsigned int		break_frame:1;
>+};
...
>+/*
>+ * API for HSI clients
>+ */
>+int hsi_async(struct hsi_client *cl, struct hsi_msg *msg);
>+
>+/**

I'm pleased to see scatter list support. But is this supported by all HW?
What is the behavior if the HW doesn't support this: will hsi_async fail,
or is the scatter list handled 'under the hood'?

Q: Can multiple Read operations be queued?
Will multiple queued read operations result in chained DMA operations, or in single read operations?

...
>+/**
>+ * hsi_flush - Flush all pending transactions on the client's port
>+ * @cl: Pointer to the HSI client
>+ *
>+ * This function will destroy all pending hsi_msg in the port and reset
>+ * the HW port so it is ready to receive and transmit from a clean state.
>+ *
>+ * Return -errno on failure, 0 on success
>+ */
>+static inline int hsi_flush(struct hsi_client *cl)
>+{
>+	if (!hsi_port_claimed(cl))
>+		return -EACCES;
>+	return hsi_get_port(cl)->flush(cl);
>+}

For CAIF we need to have independent RX and TX flush operations.

Flushing the TX FIFO can be a long-duration operation due to HSI flow control
if the counterpart is not receiving data. I would prefer to see a callback here.

...

>+/**
>+ * hsi_start_tx - Signal the port that the client wants to start a TX
>+ * @cl: Pointer to the HSI client
>+ *
>+ * Return -errno on failure, 0 on success
>+ */
>+static inline int hsi_start_tx(struct hsi_client *cl)
>+{
>+	if (!hsi_port_claimed(cl))
>+		return -EACCES;
>+	return hsi_get_port(cl)->start_tx(cl);
>+}
>+
>+/**
>+ * hsi_stop_tx - Signal the port that the client no longer wants to transmit
>+ * @cl: Pointer to the HSI client
>+ *
>+ * Return -errno on failure, 0 on success
>+ */
>+static inline int hsi_stop_tx(struct hsi_client *cl)
>+{
>+	if (!hsi_port_claimed(cl))
>+		return -EACCES;
>+	return hsi_get_port(cl)->stop_tx(cl);
>+}

What exactly do the hsi_start_tx and hsi_stop_tx functions do?
Do they set the ACWAKE_UP and ACWAKE_DOWN lines high?

*Missing function*: hsi_reset()
I would also like to see a hsi_reset function.

In modem-restart scenarios, or when coming up from a low power state, we need the
ability to perform a SW reset in order to discard any garbage received in these states.
We also need to force the DATA and FLAG (and READY) lines low;
this could be done by the hsi_reset function as well.
It would be nice to have a callback function that is called upon completion, too.

*Missing function*: hsi_rx_fifo_occupancy()
Before putting the link to sleep we need to know whether the FIFO is empty or not,
so we would like a way to read out the number of bytes in the RX FIFO.

Regards,
Sjur


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-06-22 19:11 ` [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework Sjur Brændeland
@ 2011-06-22 19:25   ` Linus Walleij
  2011-06-23 13:08   ` Carlos Chinea
  1 sibling, 0 replies; 35+ messages in thread
From: Linus Walleij @ 2011-06-22 19:25 UTC (permalink / raw)
  To: Sjur Brændeland; +Cc: Carlos Chinea, sjurbren, linux-omap, linux-kernel

On Wed, Jun 22, 2011 at 9:11 PM, Sjur Brændeland
<sjur.brandeland@stericsson.com> wrote:
>>+/*
>>+ * API for HSI clients
>>+ */
>>+int hsi_async(struct hsi_client *cl, struct hsi_msg *msg);
>>+
>>+/**
>
> I'm pleased to see scatter list support. But is this supported by all HW?
> What is the behavior if HW doesn't support this, will hsi_async fail, or
> is the scatter-list handled 'under the hood'?

I think it's pretty straightforward even if you have to use PIO (IRQs and
CPU-filling of the FIFO). You simply use a memory iterator (sg_miter): for
example, the MMC layer can only handle SG lists, and in mmci.c we
use sg_miter_start/next/stop to do this in the PIO case.

If that kind of hardware would ever be fast enough for anyone using
a HSI link is a good question though :-)
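
For a TX FIFO fill it roughly boils down to something like this (untested
sketch; msg is an hsi_msg from the proposed API and "fifo_reg" is just a
placeholder for the controller's data register):

	struct sg_mapping_iter miter;
	u32 *buf;
	unsigned int i;

	sg_miter_start(&miter, msg->sgt.sgl, msg->sgt.nents,
		       SG_MITER_ATOMIC | SG_MITER_FROM_SG);
	while (sg_miter_next(&miter)) {
		buf = miter.addr;
		for (i = 0; i < miter.length / sizeof(*buf); i++)
			writel(buf[i], fifo_reg);
	}
	sg_miter_stop(&miter);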

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 4/7] HSI: hsi_char: Add HSI char device driver
  2011-06-10 13:38 ` [RFC PATCHv5 4/7] HSI: hsi_char: Add HSI char device driver Carlos Chinea
@ 2011-06-22 19:37   ` Sjur Brændeland
  2011-06-23  9:12     ` Carlos Chinea
  0 siblings, 1 reply; 35+ messages in thread
From: Sjur Brændeland @ 2011-06-22 19:37 UTC (permalink / raw)
  To: Carlos Chinea, sjurbren
  Cc: Andras Domokos, Alan Cox, linux-omap, linux-kernel, Linus Walleij

Hi Carlos,

...
>+static ssize_t hsc_read(struct file *file, char __user *buf, size_t len,
>+						loff_t *ppos __maybe_unused)
>+{
...
>+	ret = hsi_async_read(channel->cl, msg);
>+
>+	ret = wait_event_interruptible(channel->rx_wait,
>+					!list_empty(&channel->rx_msgs_queue));
...

>+}
>+
>+static ssize_t hsc_write(struct file *file, const char __user *buf, size_t len,
>+						loff_t *ppos __maybe_unused)
>+{
>+	ret = hsi_async_write(channel->cl, msg);
>+	if (ret < 0)
>+		goto out;
>+
>+	ret = wait_event_interruptible(channel->tx_wait,
>+					!list_empty(&channel->tx_msgs_queue));

I would really like to see support for non-blocking read/write operation here.

...
>+
>+static const struct file_operations hsc_fops = {
>+	.owner		= THIS_MODULE,
>+	.read		= hsc_read,
>+	.write		= hsc_write,
>+	.unlocked_ioctl	= hsc_ioctl,
>+	.open		= hsc_open,
>+	.release	= hsc_release,
>+};

No poll?
Currently we do bulk read/write operations upon modem crash or firmware upload,
and would prefer to be able to do asynchronous IO (select/poll) in order to
receive other system events or timeouts during HSI bulk transfers.

Regards,
Sjur

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 4/7] HSI: hsi_char: Add HSI char device driver
  2011-06-22 19:37   ` Sjur Brændeland
@ 2011-06-23  9:12     ` Carlos Chinea
  0 siblings, 0 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-06-23  9:12 UTC (permalink / raw)
  To: ext Sjur Brændeland
  Cc: sjurbren, Andras Domokos, Alan Cox, linux-omap, linux-kernel,
	Linus Walleij

Hi,

On Wed, 2011-06-22 at 21:37 +0200, ext Sjur Brændeland wrote:
> Hi Carlos,
> 
> ...
> >+static ssize_t hsc_read(struct file *file, char __user *buf, size_t len,
> >+						loff_t *ppos __maybe_unused)
> >+{
> ...
> >+	ret = hsi_async_read(channel->cl, msg);
> >+
> >+	ret = wait_event_interruptible(channel->rx_wait,
> >+					!list_empty(&channel->rx_msgs_queue));
> ...
> 
> >+}
> >+
> >+static ssize_t hsc_write(struct file *file, const char __user *buf, size_t len,
> >+						loff_t *ppos __maybe_unused)
> >+{
> >+	ret = hsi_async_write(channel->cl, msg);
> >+	if (ret < 0)
> >+		goto out;
> >+
> >+	ret = wait_event_interruptible(channel->tx_wait,
> >+					!list_empty(&channel->tx_msgs_queue));
> 
> I would really like to see support for non-blocking read/write operation here.
> 

Non-blocking support will not be supported in hsi_char.

> ...
> >+
> >+static const struct file_operations hsc_fops = {
> >+	.owner		= THIS_MODULE,
> >+	.read		= hsc_read,
> >+	.write		= hsc_write,
> >+	.unlocked_ioctl	= hsc_ioctl,
> >+	.open		= hsc_open,
> >+	.release	= hsc_release,
> >+};
> 
> No poll?

We did have some "kind" of support for poll in previous versions, but we
did not properly honor the expected poll behavior, as we always need to
"block" waiting for the complete callback to be called. At least for
reads this was an issue.

However, to compensate for this, we are planning to add AIO support in the
future, which maps a lot better onto the HSI behavior.

> Currently we do bulk read/write operations upon modem crash or firmware upload,
> and would prefer to be able to do asynchronous IO (select/poll) in order to
> receive other system events or timeouts during HSI bulk transfers.
> 


Br,
-- 
Carlos Chinea <carlos.chinea@nokia.com>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-06-22 19:11 ` [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework Sjur Brændeland
  2011-06-22 19:25   ` Linus Walleij
@ 2011-06-23 13:08   ` Carlos Chinea
  2011-06-28 13:05       ` Sjur BRENDELAND
  1 sibling, 1 reply; 35+ messages in thread
From: Carlos Chinea @ 2011-06-23 13:08 UTC (permalink / raw)
  To: ext Sjur Brændeland
  Cc: sjurbren, linux-omap, Linus Walleij, linux-kernel

Hi Sjur,

On Wed, 2011-06-22 at 21:11 +0200, ext Sjur Brændeland wrote:
> Hi Carlos,
> 
> Some weeks ago I submitted a CAIF-HSI protocol driver for
> Linux 3.0.1, located in drivers/net/caif in David Miller's net-next-2.6.
> This driver depends on a platform specific "glue-layer".
> It would be nice to adapt it to a generic HSI API, so I'm looking forward
> to seeing this patch go upstream.
> 
> I have tried to investigate whether this API proposal fulfills the
> needs of the CAIF HSI protocol used by the ST-Ericsson modems.
> 
> Please find my review comments below.
> 
> 
> >+/**
> >+ * struct hsi_config - Configuration for RX/TX HSI modules 
> >+ * @mode: Bit transmission mode (STREAM or FRAME)
> >+ * @channels: Number of channels to use [1..16]
> >+ * @speed: Max bit transmission speed (Kbit/s)
> 
> Just for clarity, maybe you should say whether you mean 1000 bit/s or 1024 bit/s?
> 

I meant kbit/s -> 10^3 bit/s. I will change the capital K.

> >+ * @flow: RX flow type (SYNCHRONIZED or PIPELINE)
> >+ * @arb_mode: Arbitration mode for TX frame (Round robin, priority)
> >+ */
> >+struct hsi_config {
> >+	unsigned int	mode;
> >+	unsigned int	channels;
> >+	unsigned int	speed;
> 
> frame_size: Can we assume 32 bits is the only supported frame size,
> or should this be configurable?

Correct me if I am wrong, but the HSI spec sets the 32-bit frame size in
stone. I know some HW (like OMAP SSI) allows changing the frame size,
but I cannot foresee why an hsi_client would need a lower frame size. If
a weird client needs this, then this information should be passed to the
controller through other means.

> 
> >+	union {
> >+		unsigned int	flow;		/* RX only */
> >+		unsigned int	arb_mode;	/* TX only */
> >+	};
> 
> It would be useful to have the following RX counters:
>                       unsigned int    frame_timeout_counter:16;
>                       unsigned int    tailing_bit_counter:8;
>                       unsigned int    frame_burst_counter:8;
> >+};

IMHO clients do not need to care about these HW-specific values. I will
pass this to the controller in some other way (e.g. platform_data)
and use the same values for all the clients.

> ...
> >+
> >+/**
> >+ * struct hsi_board_info - HSI client board info
> >+ * @name: Name for the HSI device
> >+ * @hsi_id: HSI controller id where the client sits
> >+ * @port: Port number in the controller where the client sits
> >+ * @tx_cfg: HSI TX configuration
> >+ * @rx_cfg: HSI RX configuration
> >+ * @platform_data: Platform related data
> >+ * @archdata: Architecture-dependent device data
> >+ */
> >+struct hsi_board_info {
> >+	const char		*name;
> >+	unsigned int		hsi_id;
> >+	unsigned int		port;
> >+	struct hsi_config	tx_cfg;
> >+	struct hsi_config	rx_cfg;
> >+	void			*platform_data;
> >+	struct dev_archdata	*archdata;
> >+};
> 
> What about information about the supported transmission speeds?
> Is there any way to obtain this information, so we can know the legal
> values for speed in hsi_config?

For TX, speed sets the max TX speed the HSI should use. The controller
driver will try to get as close as possible to that value without
going over.

For the RX path it sets the min RX speed the HSI should use. The controller
should ensure that it does not drop below that value, to avoid breaking
the communication.

All these values can and need to be known beforehand, and they are platform
specific.
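
For example, a client entry in the board file ends up looking something
like this (illustrative only; the name and the numbers are made up):

	static struct hsi_board_info my_modem_info = {
		.name	= "my_modem",
		.hsi_id	= 0,
		.port	= 0,
		.tx_cfg	= {
			.mode		= HSI_MODE_FRAME,
			.channels	= 4,
			.speed		= 96000,	/* max TX speed, kbit/s */
			.arb_mode	= HSI_ARB_RR,
		},
		.rx_cfg	= {
			.mode		= HSI_MODE_FRAME,
			.channels	= 4,
			.speed		= 96000,	/* min RX speed, kbit/s */
			.flow		= HSI_FLOW_SYNC,
		},
	};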

> 
> Can we really assume that all HW supports linked DMA jobs (scatter list),
> or do we need some information about DMA support in board_info as well?
> 

No, we don't assume anything and we do not need DMA support info in the
board_info. The hsi_controller will know. I think Linus Walleij already
made a comment on this ;)

> ...
> >+/**
> >+ * struct hsi_msg - HSI message descriptor
> >+ * @link: Free to use by the current descriptor owner
> >+ * @cl: HSI device client that issues the transfer
> >+ * @sgt: Head of the scatterlist array
> >+ * @context: Client context data associated to the transfer
> >+ * @complete: Transfer completion callback
> >+ * @destructor: Destructor to free resources when flushing
> >+ * @status: Status of the transfer when completed
> 
> I guess you refer to the enum "HSI message status codes".
> I think this would be more readable if you don't use anynomus enums,
> but use enum-name to reference the enum here in the documentation.

Hmm, I think HSI_STATUS_X is quite self-explanatory.
> 
> >+ * @actual_len: Actual length of data transfered on completion
> >+ * @channel: Channel were to TX/RX the message
> >+ * @ttype: Transfer type (TX if set, RX otherwise)
> >+ * @break_frame: if true HSI will send/receive a break frame. Data buffers are
> >+ *		ignored in the request.
> >+ */
> >+struct hsi_msg {
> >+	struct list_head	link;
> >+	struct hsi_client	*cl;
> >+	struct sg_table		sgt;
> >+	void			*context;
> >+
> >+	void			(*complete)(struct hsi_msg *msg);
> >+	void			(*destructor)(struct hsi_msg *msg);
> >+
> >+	int			status;
> >+	unsigned int		actual_len;
> 
> size_t ?

Yeah, maybe better, but I am following the scatterlist API here (see length
and dma_length) and I'll continue using unsigned int as long as the
scatterlist API uses it.

> 
> >+	unsigned int		channel;
> >+	unsigned int		ttype:1;
> >+	unsigned int		break_frame:1;
> >+};
> ...
> >+/*
> >+ * API for HSI clients
> >+ */
> >+int hsi_async(struct hsi_client *cl, struct hsi_msg *msg);
> >+
> >+/**
> 
> I'm pleased to see scatter list support. But is this supported by all HW?
> What is the behavior if HW doesn't support this, will hsi_async fail, or
> is the scatter-list handled 'under the hood'?

Linus Walleij already explained quite well ;)

> 
> Q: Can multiple Read operations be queued?

Yes

> Will multiple read queued operations result in chained DMA operations, or single read operations?
> 

Well, it is up to you, but my initial idea is that the complete callback
is called right after the request has been fulfilled. This may not be
possible if you chain several read requests.
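
To make the queuing model concrete, a client could do something like the
sketch below (assuming the hsi_alloc_msg()/hsi_free_msg()/hsi_async_read()
helpers from this series; buffer handling is simplified):

static void cl_rx_destructor(struct hsi_msg *msg)
{
	hsi_free_msg(msg);		/* called when the port is flushed */
}

static void cl_rx_complete(struct hsi_msg *msg)
{
	/* consume msg->sgt / msg->actual_len here, then requeue */
	hsi_async_read(msg->cl, msg);
}

/* Queue 'n' independent RX requests; each one gets its own
 * complete() callback as soon as its transfer is fulfilled. */
static int cl_queue_reads(struct hsi_client *cl, void *bufs[], int n)
{
	struct hsi_msg *msg;
	int i, err;

	for (i = 0; i < n; i++) {
		msg = hsi_alloc_msg(1, GFP_KERNEL);
		if (!msg)
			return -ENOMEM;
		sg_init_one(msg->sgt.sgl, bufs[i], PAGE_SIZE);
		msg->channel = 0;
		msg->complete = cl_rx_complete;
		msg->destructor = cl_rx_destructor;
		err = hsi_async_read(cl, msg);
		if (err < 0) {
			hsi_free_msg(msg);
			return err;
		}
	}
	return 0;
}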

> ...
> >+/**
> >+ * hsi_flush - Flush all pending transactions on the client's port
> >+ * @cl: Pointer to the HSI client
> >+ *
> >+ * This function will destroy all pending hsi_msg in the port and reset
> >+ * the HW port so it is ready to receive and transmit from a clean state.
> >+ *
> >+ * Return -errno on failure, 0 on success
> >+ */
> >+static inline int hsi_flush(struct hsi_client *cl)
> >+{
> >+	if (!hsi_port_claimed(cl))
> >+		return -EACCES;
> >+	return hsi_get_port(cl)->flush(cl);
> >+}
> 
> For CAIF we need to have independent RX and TX flush operations.
> 

The current framework assumes that in the unlikely case of an error, or
whenever you need to do some cleanup, you will end up cleaning up both
sides anyway. Moreover, you will also reset the state of the HW.

In exactly which case will CAIF only need to clean up the TX path but
not the RX, or vice versa?

> Flushing the TX FIFO can be long duration operation due to HSI flow control,
> if counterpart is not receiving data. I would prefer to see a callback here.
> 

No, flush should not wait for an ongoing TX transfer to finish. You
should stop all ongoing transfers, call the destructor callback on all
the requests (queued and ongoing) and clean up the HW state.
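
In other words, a controller's flush hook would look roughly like this
(the port structure and the my_port_* helpers below are hypothetical):

static int my_port_flush(struct hsi_client *cl)
{
	struct my_port *p = to_my_port(hsi_get_port(cl));
	struct hsi_msg *msg, *tmp;

	my_port_abort_dma(p);		/* do not wait for TX to drain */
	list_for_each_entry_safe(msg, tmp, &p->queue, link) {
		list_del(&msg->link);
		msg->destructor(msg);	/* client releases its buffers */
	}
	my_port_reset_hw(p);		/* clean RX/TX state, FIFOs empty */
	return 0;
}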

> ...
> 
> >+/**
> >+ * hsi_start_tx - Signal the port that the client wants to start a TX
> >+ * @cl: Pointer to the HSI client
> >+ *
> >+ * Return -errno on failure, 0 on success
> >+ */
> >+static inline int hsi_start_tx(struct hsi_client *cl)
> >+{
> >+	if (!hsi_port_claimed(cl))
> >+		return -EACCES;
> >+	return hsi_get_port(cl)->start_tx(cl);
> >+}
> >+
> >+/**
> >+ * hsi_stop_tx - Signal the port that the client no longer wants to transmit
> >+ * @cl: Pointer to the HSI client
> >+ *
> >+ * Return -errno on failure, 0 on success
> >+ */
> >+static inline int hsi_stop_tx(struct hsi_client *cl)
> >+{
> >+	if (!hsi_port_claimed(cl))
> >+		return -EACCES;
> >+	return hsi_get_port(cl)->stop_tx(cl);
> >+}
> 
> What exactly do hsi_start_tx and hsi_stop_tx functions do?
> Do they set ACWAKE_UP and ACWAKE_DOWN lines high?
> 

Yes.

> *Missing function*: hsi_reset()
> I would also like to see a hsi_reset function.

This is currently done also in the hsi_flush. See my previous comment.

> 
> In modem-restart scenarios or when coming up from low power state we need the ability
> to perform SW reset in order to discard any garbage received in these states.
> We also have the need to force the lines DATA and FLAG (and READY) low,
> this could be done by the hsi_reset function as well.
> It would be nice to have a callback function to be called upon completion as well.
> 

Coming up from a low power state should be handled by the hsi_controller,
and the hsi_client should not be concerned about this. I also think the
workaround of setting the DATA and FLAG lines low can be implemented
just in the hsi_controller without hsi_client intervention.

> *Missing function*: hsi_rx_fifo_occupancy()
> Before putting the link asleep we need to know if the fifo is empty or not.
> So we would like to have a way to read out the number of bytes in the RX fifo.
> 

This should be handled only by hsi_controller. Clients should not care
about this.

> Regards,
> Sjur
> 

Br,

Carlos Chinea <carlos.chinea@nokia.com>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-06-23 13:08   ` Carlos Chinea
@ 2011-06-28 13:05       ` Sjur BRENDELAND
  0 siblings, 0 replies; 35+ messages in thread
From: Sjur BRENDELAND @ 2011-06-28 13:05 UTC (permalink / raw)
  To: Carlos Chinea; +Cc: sjurbren, linux-omap, Linus Walleij, linux-kernel

Hi Carlos,

> > What about information about the supported transmission speeds?
> > Is it any way to obtain this information, so we can know the legal 
> > values for speed in hsi_config?
> 
> For TX speed sets the max tx speed the HSI should go. The controller 
> driver will try to get as close as it possible to that value without 
> going over.
> 
> For the RX path it sets min RX speed the HSI should use. The 
> controller should ensure that it does not drop under that value to 
> avoid breaking the communication.
> 
> All this values need to be and can be known beforehand and are 
> platform specific.

Ok fine, so the controller adjusts the speed according to the HW-supported frequencies.

> > Q: Can multiple Read operations be queued?
> 
> Yes
> 
> > Will multiple read queued operations result in chained DMA
> operations, or single read operations?
> >
> 
> Well, it is up to you, but my initial idea is that the complete 
> callback is called right after the request has been fulfilled. This 
> may be not possible if you chain several read requests.

I think our concern is to squeeze every bit of bandwidth out of the link.
Perhaps we can utilize bandwidth better by chaining the DMA jobs.
But due to latency we need the complete callback for each DMA job.
If the DMA cannot handle this, the HSI controller should handle the queuing
of IO requests.

> > ...
> > >+/**
> > >+ * hsi_flush - Flush all pending transactions on the client's port
> > >+ * @cl: Pointer to the HSI client
> > >+ *
> > >+ * This function will destroy all pending hsi_msg in the port and
> reset
> > >+ * the HW port so it is ready to receive and transmit from a clean
> state.
> > >+ *
> > >+ * Return -errno on failure, 0 on success  */ static inline int 
> > >+hsi_flush(struct hsi_client *cl) {
> > >+	if (!hsi_port_claimed(cl))
> > >+		return -EACCES;
> > >+	return hsi_get_port(cl)->flush(cl); }
> >
> > For CAIF we need to have independent RX and TX flush operations.
> >
> 
> The current framework assumes that in the unlikely case of an error or 
> whenever you need to to do some cleanup, you will end up cleaning up 
> the two sides anyway. Moreover, you will reset the state of the HW 
> also.
> 
> Exactly, in which case CAIF will only need to cleanup the TX path but 
> not the RX or vice versa ?
> 
> > Flushing the TX FIFO can be long duration operation due to HSI flow
> control,
> > if counterpart is not receiving data. I would prefer to see a
> callback here.
> >
> 
> No, flush should not wait for an ongoing TX transfer to finish. You 
> should stop all ongoing transfers, call the destructor callback on all 
> the requests (queued and ongoing) and clean up the HW state.
...
> > *Missing function*: hsi_reset()
> > I would also like to see a hsi_reset function.
> 
> This is currently done also in the hsi_flush. See my previous comment.

Sorry, I misunderstood your API description here. hsi_flush() seems to work
like the hsi_reset I was looking for. I would prefer if you renamed this
function to hsi_reset for clarity (see flush_work() in workqueue.c, where
flush waits for the work to finish).

Anyway, I still see a need for ensuring fifos are empty or reading the
number of bytes in the fifos.

CAIF is using the wake lines to control when the Modem and Host can
power down the HSI blocks. In order to go to low power mode, the Host sets
AC_WAKE low and waits for the modem to respond by setting CA_WAKE low. The
host cannot set AC_WAKE high again before the modem has set CA_WAKE low
(in order to avoid races).

When coming up from low power either side can set its WAKE line high, and
wait for the other side to respond by setting its WAKE line high.

So CAIF implements the following protocol for handshaking before going to
low-power mode (a minimal state sketch follows the list):
1. Inactivity timeout expires on Host, i.e. the host has nothing to send and
   no RX has happened for the last couple of seconds.
2. Host requests the low-power state by setting AC_WAKE low. In this state
   the Host side can still receive data, but is not allowed to send data.
3. Modem responds by setting CA_WAKE low, and cannot send data either.
4. When both AC_WAKE and CA_WAKE are low, the host must set AC_FLAG, AC_DATA
   and AC_READY low.
5. When the Host side RX-fifo is empty and DMA jobs are completed,
   ongoing RX requests are cancelled.
6. HSI block can be powered down.
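
Roughly, as a state machine (the enum below is only illustrative, it is
not taken from the CAIF-HSI driver):

enum cfhsi_pwr_state {
	CFHSI_PWR_AWAKE,	/* both WAKE lines high, traffic allowed */
	CFHSI_PWR_AC_DOWN,	/* step 2: AC_WAKE low, host may only receive */
	CFHSI_PWR_CA_DOWN,	/* steps 3-4: CA_WAKE low, data/flag/ready low */
	CFHSI_PWR_DRAIN,	/* step 5: waiting for RX FIFO/DMA to drain */
	CFHSI_PWR_OFF,		/* step 6: HSI block powered down */
};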
 
After AC_WAKE is set low the Host must guarantee that the modem does not receive
data until AC_WAKE is high again. This implies that the Host must know that the
TX FIFO is empty before setting the AC_WAKE low. So we need some way to know that
the TX fifo is empty.

I think the cleanest design here is that hsi_stop_tx() handles this.
So hsi_stop_tx() should wait for any pending TX jobs and for the TX
FIFO to be empty, and then set AC_WAKE low.

As described above, when going down to low power mode the host has set AC_WAKE low.
The Host should then set AC_FLAG, AC_DATA and AC_READY low.

>...I think also the
>workaround of setting the DATA and FLAG lines low can be implemented
>just in the hsi_controller without hsi_client intervention. 

Great :-) Perhaps a function hsi_stop_rx()?

> > *Missing function*: hsi_rx_fifo_occupancy()
> > Before putting the link asleep we need to know if the fifo is empty
> > or not.
> > So we would like to have a way to read out the number of bytes in the
> > RX fifo.
> 
> This should be handled only by hsi_controller. Clients should not care
> about this.

There is a corner case when going to low power mode where both sides have
put the WAKE line low, but an RX DMA job is ongoing or the RX-fifo is not
empty. In this case the host side must wait for the DMA job to complete
and the RX-fifo to be empty before cancelling any pending RX jobs.

One option would be to provide a function hsi_rx_sync() that guarantees that
any ongoing RX-job is completed and that the RX FIFO is empty. Another
option could be to provide an API for reading RX-job states and RX-fifo
occupancy.

Regards,
Sjur

^ permalink raw reply	[flat|nested] 35+ messages in thread

* RE: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-06-28 13:05       ` Sjur BRENDELAND
@ 2011-07-22 10:43       ` Carlos Chinea
  2011-07-22 11:01         ` Felipe Balbi
  -1 siblings, 1 reply; 35+ messages in thread
From: Carlos Chinea @ 2011-07-22 10:43 UTC (permalink / raw)
  To: ext Sjur BRENDELAND; +Cc: sjurbren, linux-omap, Linus Walleij, linux-kernel

Hi Sjur,

Sorry for the long delay. My comments below:

On Tue, 2011-06-28 at 15:05 +0200, ext Sjur BRENDELAND wrote:
-- cut--
> > 
> > > Will multiple read queued operations result in chained DMA
> > operations, or single read operations?
> > >
> > 
> > Well, it is up to you, but my initial idea is that the complete 
> > callback is called right after the request has been fulfilled. This 
> > may be not possible if you chain several read requests.
> 
> I think our concern is to squeeze every bit of bandwidth out of the link.
> Perhaps we can utilize bandwidth better by chaining the DMA jobs.
> But due to latency we need the complete callback for each DMA job.
> If the DMA cannot handle this, the HSI controller should handle the queuing
> of IO requests.
> 

Exactly.

> > > ...
> > > >+/**
> > > >+ * hsi_flush - Flush all pending transactions on the client's port
> > > >+ * @cl: Pointer to the HSI client
> > > >+ *
> > > >+ * This function will destroy all pending hsi_msg in the port and
> > reset
> > > >+ * the HW port so it is ready to receive and transmit from a clean
> > state.
> > > >+ *
> > > >+ * Return -errno on failure, 0 on success  */ static inline int 
> > > >+hsi_flush(struct hsi_client *cl) {
> > > >+	if (!hsi_port_claimed(cl))
> > > >+		return -EACCES;
> > > >+	return hsi_get_port(cl)->flush(cl); }
> > >
> > > For CAIF we need to have independent RX and TX flush operations.
> > >
> > 
> > The current framework assumes that in the unlikely case of an error or 
> > whenever you need to to do some cleanup, you will end up cleaning up 
> > the two sides anyway. Moreover, you will reset the state of the HW 
> > also.
> > 
> > Exactly, in which case CAIF will only need to cleanup the TX path but 
> > not the RX or vice versa ?
> > 
> > > Flushing the TX FIFO can be long duration operation due to HSI flow
> > control,
> > > if counterpart is not receiving data. I would prefer to see a
> > callback here.
> > >
> > 
> > No, flush should not wait for an ongoing TX transfer to finish. You 
> > should stop all ongoing transfers, call the destructor callback on all 
> > the requests (queued and ongoing) and clean up the HW state.
> ...
> > > *Missing function*: hsi_reset()
> > > I would also like to see a hsi_reset function.
> > 
> > This is currently done also in the hsi_flush. See my previous comment.
> 
> Sorry, I misunderstood your API description here. hsi_flush() seems to work
> like the hsi_reset I was looking for. I would prefer if you rename this
> function to hsi_reset for clarity (see flush_work() in workqueue.c, where
> flush wait for the work to finish).
> 
> Anyway, I still see a need for ensuring fifos are empty or reading the
> number of bytes in the fifos.
> 
> CAIF is using the wake lines for controlling when the Modem and Host can
> power down the HSI blocks. In order to go to low power mode, the Host set
> AC_WAKE low, and wait for modem to respond by setting CA_WAKE low. The host
> cannot set AC_WAKE high again before modem has done CA_WAKE low (in order
> to avoid races).
> 
> When going up from low power anyone could set the WAKE line high, and wait for
> the other side to respond by setting WAKE line high.
> 
> So CAIF implements the following protocol for handshaking before going to
> low-power mode:
> 1. Inactivity timeout expires on Host, i.e the host has nothing to send and no
>    RX has happened the last couple of seconds.
> 2. Host request low-power state by setting AC_WAKE low. In this state Host
>    side can still receive data, but is not allowed to send data.
> 3. Modem responds with setting CA_WAKE low, and cannot send data either.
> 4. When both AC_WAKE and CA_WAKE are low, the host must set AC_FLAG, AC_DATA
>    and AC_READY low.
> 5. When Host side RX-fifo is empty and DMA jobs are completed,
>    ongoing RX requests are cancelled.
> 6. HSI block can be powered down.
>  
> After AC_WAKE is set low the Host must guarantee that the modem does not receive
> data until AC_WAKE is high again. This implies that the Host must know that the
> TX FIFO is empty before setting the AC_WAKE low. So we need some way to know that
> the TX fifo is empty.
> 
> I think the cleanest design here is that hsi_stop_tx() handles this.
> So his_stop_tx() should wait for any pending TX jobs and wait for the TX
> FIFO to be empty, and then set the AC_WAKE low. 

Hmmm, I don't like this.  My initial idea is that you should be able to call
these functions from interrupt context, and waiting there would prevent
that. However, nothing prevents you from scheduling a delayed work which is
in charge of checking that the last frame has gone through the wire and then
bringing down the AC_WAKE line.
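
Something along these lines, as a sketch (the port struct and the ssi_*
helpers are made up here):

static void ssi_release_wake_work(struct work_struct *work)
{
	struct delayed_work *dwork = to_delayed_work(work);
	struct my_ssi_port *p = container_of(dwork, struct my_ssi_port,
					     release_wake);

	if (!ssi_tx_fifo_empty(p)) {
		/* last frames still going out, check again shortly */
		schedule_delayed_work(&p->release_wake,
				      msecs_to_jiffies(1));
		return;
	}
	ssi_set_acwake(p, 0);	/* safe to bring AC_WAKE down now */
}

/* probe():   INIT_DELAYED_WORK(&p->release_wake, ssi_release_wake_work);
 * stop_tx(): schedule_delayed_work(&p->release_wake, 0);  (non-blocking) */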

> 
> As described above, when going down to low power mode the host has set AC_WAKE low.
> The Host should then set AC_FLAG, AC_DATA and AC_READY low.
> 
> >...I think also the
> >workaround of setting the DATA and FLAG lines low can be implemented
> >just in the hsi_controller without hsi_client intervention. 
> 
> Great :-) Perhaps a function hsi_stop_rx()?

No need for a new function. The hsi_controller driver knows when it goes
to "low power mode", so it can set the DATA and FLAG lines low just right
before that.

> 
> > > *Missing function*: hsi_rx_fifo_occupancy()
> > > Before putting the link asleep we need to know if the fifo is empty
> > > or not.
> > > So we would like to have a way to read out the number of bytes in the
> > > RX fifo.
> > 
> > This should be handled only by hsi_controller. Clients should not care
> > about this.
> 
> There is a corner case when going to low power mode and both side has put the WAKE line low,
> but a RX DMA job is ongoing or the RX-fifo is not empty.
> In this case the host side must wait for the DMA job to complete and RX-
> fifo to be empty, before canceling any pending RX jobs. 
> 
> One option would be to provide a function hsi_rx_sync() that guarantees that any ongoing
> RX-job is completed and that the RX FIFO is empty. Another option could be to be
> able to provide API for reading RX-job states and RX-fifo occupancy.
> 

I think we don't need another function to do this either. The
hsi_controller driver should implement a usecount scheme to know when
the HW can be switched off. IMO it is not a good idea to rely just on the
wakelines to power the device on/off, exactly because of this kind of
issue.
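
A sketch of what I mean (controller structure and power helpers are
hypothetical):

static void ssi_get_hw(struct my_ssi_ctrl *ssi)
{
	/* any queued msg, claimed port or raised wake line takes a ref */
	if (atomic_inc_return(&ssi->usecount) == 1)
		ssi_power_on(ssi);	/* enable clocks, restore context */
}

static void ssi_put_hw(struct my_ssi_ctrl *ssi)
{
	if (atomic_dec_and_test(&ssi->usecount))
		ssi_power_off(ssi);	/* nothing in flight, safe to gate */
}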

Br,
-- 
Carlos Chinea <carlos.chinea@nokia.com>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-07-22 10:43       ` Carlos Chinea
@ 2011-07-22 11:01         ` Felipe Balbi
  2011-07-22 11:51           ` Carlos Chinea
  0 siblings, 1 reply; 35+ messages in thread
From: Felipe Balbi @ 2011-07-22 11:01 UTC (permalink / raw)
  To: Carlos Chinea
  Cc: ext Sjur BRENDELAND, sjurbren, linux-omap, Linus Walleij, linux-kernel

Hi,

On Fri, Jul 22, 2011 at 01:43:36PM +0300, Carlos Chinea wrote:
> > > > >+/**
> > > > >+ * hsi_flush - Flush all pending transactions on the client's port
> > > > >+ * @cl: Pointer to the HSI client
> > > > >+ *
> > > > >+ * This function will destroy all pending hsi_msg in the port and
> > > reset
> > > > >+ * the HW port so it is ready to receive and transmit from a clean
> > > state.
> > > > >+ *
> > > > >+ * Return -errno on failure, 0 on success  */ static inline int 
> > > > >+hsi_flush(struct hsi_client *cl) {
> > > > >+	if (!hsi_port_claimed(cl))
> > > > >+		return -EACCES;
> > > > >+	return hsi_get_port(cl)->flush(cl); }
> > > >
> > > > For CAIF we need to have independent RX and TX flush operations.
> > > >
> > > 
> > > The current framework assumes that in the unlikely case of an error or 
> > > whenever you need to to do some cleanup, you will end up cleaning up 
> > > the two sides anyway. Moreover, you will reset the state of the HW 
> > > also.
> > > 
> > > Exactly, in which case CAIF will only need to cleanup the TX path but 
> > > not the RX or vice versa ?
> > > 
> > > > Flushing the TX FIFO can be long duration operation due to HSI flow
> > > control,
> > > > if counterpart is not receiving data. I would prefer to see a
> > > callback here.
> > > >
> > > 
> > > No, flush should not wait for an ongoing TX transfer to finish. You 
> > > should stop all ongoing transfers, call the destructor callback on all 
> > > the requests (queued and ongoing) and clean up the HW state.
> > ...
> > > > *Missing function*: hsi_reset()
> > > > I would also like to see a hsi_reset function.
> > > 
> > > This is currently done also in the hsi_flush. See my previous comment.
> > 
> > Sorry, I misunderstood your API description here. hsi_flush() seems to work
> > like the hsi_reset I was looking for. I would prefer if you rename this
> > function to hsi_reset for clarity (see flush_work() in workqueue.c, where
> > flush wait for the work to finish).
> > 
> > Anyway, I still see a need for ensuring fifos are empty or reading the
> > number of bytes in the fifos.
> > 
> > CAIF is using the wake lines for controlling when the Modem and Host can
> > power down the HSI blocks. In order to go to low power mode, the Host set
> > AC_WAKE low, and wait for modem to respond by setting CA_WAKE low. The host
> > cannot set AC_WAKE high again before modem has done CA_WAKE low (in order
> > to avoid races).
> > 
> > When going up from low power anyone could set the WAKE line high, and wait for
> > the other side to respond by setting WAKE line high.
> > 
> > So CAIF implements the following protocol for handshaking before going to
> > low-power mode:
> > 1. Inactivity timeout expires on Host, i.e the host has nothing to send and no
> >    RX has happened the last couple of seconds.
> > 2. Host request low-power state by setting AC_WAKE low. In this state Host
> >    side can still receive data, but is not allowed to send data.
> > 3. Modem responds with setting CA_WAKE low, and cannot send data either.
> > 4. When both AC_WAKE and CA_WAKE are low, the host must set AC_FLAG, AC_DATA
> >    and AC_READY low.
> > 5. When Host side RX-fifo is empty and DMA jobs are completed,
> >    ongoing RX requests are cancelled.
> > 6. HSI block can be powered down.
> >  
> > After AC_WAKE is set low the Host must guarantee that the modem does not receive
> > data until AC_WAKE is high again. This implies that the Host must know that the
> > TX FIFO is empty before setting the AC_WAKE low. So we need some way to know that
> > the TX fifo is empty.
> > 
> > I think the cleanest design here is that hsi_stop_tx() handles this.
> > So his_stop_tx() should wait for any pending TX jobs and wait for the TX
> > FIFO to be empty, and then set the AC_WAKE low. 
> 
> Hmmm, I don't like this.  My initial idea is that you could call this
> functions on interrupt context. This will prevent this. However, nothing
> prevents you from schedule a delayed work, which will be in charge of
> checking that the last frame has gone through the wire and then bring
> down the AC_WAKE line.

why don't you use a threaded IRQ? It's higher priority than a delayed
work and you would be able to use something like
wait_for_completion_timeout() to wait for the TX FIFOs to become empty (??)
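
Roughly (names are made up, just to show the idea):

/* Thread context may sleep, so it can wait for the TX-empty event
 * signalled from the hard IRQ before dropping ACWAKE. */
static irqreturn_t ssi_wake_thread(int irq, void *data)
{
	struct my_ssi_port *p = data;

	if (wait_for_completion_timeout(&p->tx_empty,
					msecs_to_jiffies(100)))
		ssi_set_acwake(p, 0);
	else
		dev_warn(p->dev, "TX FIFO did not drain\n");

	return IRQ_HANDLED;
}

/* request_threaded_irq(irq, NULL, ssi_wake_thread, IRQF_ONESHOT,
 *			"ssi_wake", p); */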

> > As described above, when going down to low power mode the host has set AC_WAKE low.
> > The Host should then set AC_FLAG, AC_DATA and AC_READY low.

but, if I read correctly, only when TX FIFO is known to be empty, right?

> > >...I think also the
> > >workaround of setting the DATA and FLAG lines low can be implemented
> > >just in the hsi_controller without hsi_client intervention. 
> > 
> > Great :-) Perhaps a function hsi_stop_rx()?
> 
> No need for a new function. The hsi_controller driver knows when it goes
> to "low power mode" so it can set the DATA and FLAG lines down just righ
> before that.

are you already using pm_runtime? You could group that logic in the
runtime_suspend()/runtime_resume() calls and, from the driver, just call
pm_runtime_put()/pm_runtime_get_sync() whenever you want to
suspend/resume.
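
i.e. something like this (the callbacks and ssi_* helpers below are only
a sketch, not from any existing driver):

static int ssi_runtime_suspend(struct device *dev)
{
	struct my_ssi_port *p = dev_get_drvdata(dev);

	ssi_force_lines_low(p);		/* DATA/FLAG workaround lives here */
	ssi_save_context(p);
	return 0;
}

static int ssi_runtime_resume(struct device *dev)
{
	struct my_ssi_port *p = dev_get_drvdata(dev);

	ssi_restore_context(p);
	return 0;
}

static const struct dev_pm_ops ssi_pm_ops = {
	SET_RUNTIME_PM_OPS(ssi_runtime_suspend, ssi_runtime_resume, NULL)
};

/* driver paths just do pm_runtime_get_sync(dev) before traffic
 * and pm_runtime_put(dev) when idle. */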

> > > > *Missing function*: hsi_rx_fifo_occupancy()
> > > > Before putting the link asleep we need to know if the fifo is empty
> > > > or not.
> > > > So we would like to have a way to read out the number of bytes in the
> > > > RX fifo.
> > > 
> > > This should be handled only by hsi_controller. Clients should not care
> > > about this.
> > 
> > There is a corner case when going to low power mode and both side has put the WAKE line low,
> > but a RX DMA job is ongoing or the RX-fifo is not empty.
> > In this case the host side must wait for the DMA job to complete and RX-
> > fifo to be empty, before canceling any pending RX jobs. 
> > 
> > One option would be to provide a function hsi_rx_sync() that guarantees that any ongoing
> > RX-job is completed and that the RX FIFO is empty. Another option could be to be
> > able to provide API for reading RX-job states and RX-fifo occupancy.
> > 
> 
> I think we don't need another function to do this neither. The
> hsi_controller driver should implement a usecount scheme to know when
> the HW can be switch off. IMO it is not a good idea to relay just on the
> wakelines to power on/off the device, exactly because of this kind of
> issues.

true, but I'm not sure a usecount is a good way to go. Why don't you
just remove the sync API altogether and use only the async one? Then the
OMAP HSI controller driver is supposed to know when it can go to sleep.
If you receive some data before a client queues a request, you just
defer processing of that data until a new request is queued, or
something...

-- 
balbi


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-07-22 11:01         ` Felipe Balbi
@ 2011-07-22 11:51           ` Carlos Chinea
  2011-07-22 12:05             ` Felipe Balbi
  0 siblings, 1 reply; 35+ messages in thread
From: Carlos Chinea @ 2011-07-22 11:51 UTC (permalink / raw)
  To: balbi
  Cc: ext Sjur BRENDELAND, sjurbren, linux-omap, Linus Walleij, linux-kernel

Hi Felipe, 

:)

On Fri, 2011-07-22 at 14:01 +0300, ext Felipe Balbi wrote:
> Hi,
> 
> On Fri, Jul 22, 2011 at 01:43:36PM +0300, Carlos Chinea wrote:
> > > > > >+/**
> > > > > >+ * hsi_flush - Flush all pending transactions on the client's port
> > > > > >+ * @cl: Pointer to the HSI client
> > > > > >+ *
> > > > > >+ * This function will destroy all pending hsi_msg in the port and
> > > > reset
> > > > > >+ * the HW port so it is ready to receive and transmit from a clean
> > > > state.
> > > > > >+ *
> > > > > >+ * Return -errno on failure, 0 on success  */ static inline int 
> > > > > >+hsi_flush(struct hsi_client *cl) {
> > > > > >+	if (!hsi_port_claimed(cl))
> > > > > >+		return -EACCES;
> > > > > >+	return hsi_get_port(cl)->flush(cl); }
> > > > >
> > > > > For CAIF we need to have independent RX and TX flush operations.
> > > > >
> > > > 
> > > > The current framework assumes that in the unlikely case of an error or 
> > > > whenever you need to to do some cleanup, you will end up cleaning up 
> > > > the two sides anyway. Moreover, you will reset the state of the HW 
> > > > also.
> > > > 
> > > > Exactly, in which case CAIF will only need to cleanup the TX path but 
> > > > not the RX or vice versa ?
> > > > 
> > > > > Flushing the TX FIFO can be long duration operation due to HSI flow
> > > > control,
> > > > > if counterpart is not receiving data. I would prefer to see a
> > > > callback here.
> > > > >
> > > > 
> > > > No, flush should not wait for an ongoing TX transfer to finish. You 
> > > > should stop all ongoing transfers, call the destructor callback on all 
> > > > the requests (queued and ongoing) and clean up the HW state.
> > > ...
> > > > > *Missing function*: hsi_reset()
> > > > > I would also like to see a hsi_reset function.
> > > > 
> > > > This is currently done also in the hsi_flush. See my previous comment.
> > > 
> > > Sorry, I misunderstood your API description here. hsi_flush() seems to work
> > > like the hsi_reset I was looking for. I would prefer if you rename this
> > > function to hsi_reset for clarity (see flush_work() in workqueue.c, where
> > > flush wait for the work to finish).
> > > 
> > > Anyway, I still see a need for ensuring fifos are empty or reading the
> > > number of bytes in the fifos.
> > > 
> > > CAIF is using the wake lines for controlling when the Modem and Host can
> > > power down the HSI blocks. In order to go to low power mode, the Host set
> > > AC_WAKE low, and wait for modem to respond by setting CA_WAKE low. The host
> > > cannot set AC_WAKE high again before modem has done CA_WAKE low (in order
> > > to avoid races).
> > > 
> > > When going up from low power anyone could set the WAKE line high, and wait for
> > > the other side to respond by setting WAKE line high.
> > > 
> > > So CAIF implements the following protocol for handshaking before going to
> > > low-power mode:
> > > 1. Inactivity timeout expires on Host, i.e the host has nothing to send and no
> > >    RX has happened the last couple of seconds.
> > > 2. Host request low-power state by setting AC_WAKE low. In this state Host
> > >    side can still receive data, but is not allowed to send data.
> > > 3. Modem responds with setting CA_WAKE low, and cannot send data either.
> > > 4. When both AC_WAKE and CA_WAKE are low, the host must set AC_FLAG, AC_DATA
> > >    and AC_READY low.
> > > 5. When Host side RX-fifo is empty and DMA jobs are completed,
> > >    ongoing RX requests are cancelled.
> > > 6. HSI block can be powered down.
> > >  
> > > After AC_WAKE is set low the Host must guarantee that the modem does not receive
> > > data until AC_WAKE is high again. This implies that the Host must know that the
> > > TX FIFO is empty before setting the AC_WAKE low. So we need some way to know that
> > > the TX fifo is empty.
> > > 
> > > I think the cleanest design here is that hsi_stop_tx() handles this.
> > > So his_stop_tx() should wait for any pending TX jobs and wait for the TX
> > > FIFO to be empty, and then set the AC_WAKE low. 
> > 
> > Hmmm, I don't like this.  My initial idea is that you could call this
> > functions on interrupt context. This will prevent this. However, nothing
> > prevents you from schedule a delayed work, which will be in charge of
> > checking that the last frame has gone through the wire and then bring
> > down the AC_WAKE line.
> 
> why don't you use threaded IRQ ? It's higher priority than a delayed
> work and you would be able to use something like
> wait_for_completion_timeout() to wait for TX FIFOs to become empty (??)
> 
> > > As described above, when going down to low power mode the host has set AC_WAKE low.
> > > The Host should then set AC_FLAG, AC_DATA and AC_READY low.
> 
> but, if I read correctly, only when TX FIFO is known to be empty, right?
> 
> > > >...I think also the
> > > >workaround of setting the DATA and FLAG lines low can be implemented
> > > >just in the hsi_controller without hsi_client intervention. 
> > > 
> > > Great :-) Perhaps a function hsi_stop_rx()?
> > 
> > No need for a new function. The hsi_controller driver knows when it goes
> > to "low power mode" so it can set the DATA and FLAG lines down just righ
> > before that.
> 
> are you already using pm_runtime ? You could put that logic grouped in
> runtime_suspend()/runtime_resume() calls and from the driver, just call
> pm_runtime_put()/pm_runtime_get_sync() whenever you want to
> suspend/resume.

In the case of the omap_ssi driver, not yet, but hopefully the
implementation will come soon ;) Anyway, the DATA/FLAG workaround is not
something I'm going to add to omap_ssi, at least not now. It seems
to be needed by ST-Ericsson in their HSI controller.

> 
> > > > > *Missing function*: hsi_rx_fifo_occupancy()
> > > > > Before putting the link asleep we need to know if the fifo is empty
> > > > > or not.
> > > > > So we would like to have a way to read out the number of bytes in the
> > > > > RX fifo.
> > > > 
> > > > This should be handled only by hsi_controller. Clients should not care
> > > > about this.
> > > 
> > > There is a corner case when going to low power mode and both side has put the WAKE line low,
> > > but a RX DMA job is ongoing or the RX-fifo is not empty.
> > > In this case the host side must wait for the DMA job to complete and RX-
> > > fifo to be empty, before canceling any pending RX jobs. 
> > > 
> > > One option would be to provide a function hsi_rx_sync() that guarantees that any ongoing
> > > RX-job is completed and that the RX FIFO is empty. Another option could be to be
> > > able to provide API for reading RX-job states and RX-fifo occupancy.
> > > 
> > 
> > I think we don't need another function to do this neither. The
> > hsi_controller driver should implement a usecount scheme to know when
> > the HW can be switch off. IMO it is not a good idea to relay just on the
> > wakelines to power on/off the device, exactly because of this kind of
> > issues.
> 
> true, but I'm not sure a usecount is a good way to go. Why don't you
> just remove the sync API altogether and use only async, then the OMAP
> HSI controller driver is supposed to know when it can go to sleep. If
> you receive some data before a client queues a request, you just defer
> processing of that data until a new request is queued, or something...

Hmmm, do you mean that I remove hsi_start_tx() and hsi_stop_tx()
completely, or that I just create an async version of them?

The truth is that they should not even exist in the first place, and the
wakelines should have been handled silently by the hsi controllers.
But they are needed because hsi_clients/protocols, like CAIF or the
ssi_protocol on the N900, need access to the wakelines to implement some
workarounds due to some races in the HSI/SSI HW spec.

Br,
-- 
Carlos Chinea <carlos.chinea@nokia.com>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-07-22 11:51           ` Carlos Chinea
@ 2011-07-22 12:05             ` Felipe Balbi
  2011-07-22 13:02               ` Carlos Chinea
  2011-07-24 21:56               ` Sjur Brændeland
  0 siblings, 2 replies; 35+ messages in thread
From: Felipe Balbi @ 2011-07-22 12:05 UTC (permalink / raw)
  To: Carlos Chinea
  Cc: balbi, ext Sjur BRENDELAND, sjurbren, linux-omap, Linus Walleij,
	linux-kernel

Hi,

On Fri, Jul 22, 2011 at 02:51:12PM +0300, Carlos Chinea wrote:
> Hi Felipe, 

hello there :-)

> > are you already using pm_runtime ? You could put that logic grouped in
> > runtime_suspend()/runtime_resume() calls and from the driver, just call
> > pm_runtime_put()/pm_runtime_get_sync() whenever you want to
> > suspend/resume.
> 
> In the case of the omap_ssi driver not yet, but hopefully the
> implementation will come soon ;) Anyway, the DATA/FLAG workaround is not
> something I'm going to add to the omap_ssi, at least not now. This seems
> to be needed by ST-Ericcson in their HSI controller.

aha, I see. Thanks for the clarification.

> > true, but I'm not sure a usecount is a good way to go. Why don't you
> > just remove the sync API altogether and use only async, then the OMAP
> > HSI controller driver is supposed to know when it can go to sleep. If
> > you receive some data before a client queues a request, you just defer
> > processing of that data until a new request is queued, or something...
> 
> Hmmm, Do you mean I remove the hsi_start_tx() and hsi_stop_tx()
> completely ? Or Do I just create an async version of them ?

I would say remove completely and add async-only version.

> The truth is that they should not even exist in the first place, and the
> wakelines should have been handled silently by the hsi controllers.  
> But they are need because the hsi_clients/protocols, like CAIF or the
> ssi_protocol in N900, need access to the wakelines to implement some
> workarounds due to some races in the HSI/SSI HW spec.

that's quite a nasty situation :-)

I still think, though, that if the clients don't really start a transfer
straight away (meaning it's all ASYNC), you might be able to handle the
workarounds in the HSI framework itself by having some extra flags on
your hsi request structure (?).

See for example the use of the short_not_ok flag in the gadget framework.
Gadget drivers which can't handle short transfers (as of today only
the Mass Storage gadget) set that flag to tell the controller that
we're not expecting a short packet and that, if one arrives, it should be
treated as an error. In case of such an error, the gadget driver is
required to dequeue any queued transfer, flush the FIFO and restart ;-)

Maybe you could have something similar ? Depending on the workaround
you're talking about, this might be feasible with few lines of code on
the async API.
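
For instance, a single request flag could express "complete only once the
frame is on the wire" semantics. The wire_sync bit below is purely
hypothetical, the rest of the struct is as in the patch:

struct hsi_msg {
	struct list_head	link;
	struct hsi_client	*cl;
	struct sg_table		sgt;
	void			*context;

	void			(*complete)(struct hsi_msg *msg);
	void			(*destructor)(struct hsi_msg *msg);

	int			status;
	unsigned int		actual_len;
	unsigned int		channel;
	unsigned int		ttype:1;
	unsigned int		break_frame:1;
	unsigned int		wire_sync:1;	/* complete() only after the
						 * last frame is on the wire */
};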

-- 
balbi


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-07-22 12:05             ` Felipe Balbi
@ 2011-07-22 13:02               ` Carlos Chinea
  2011-07-24 21:56               ` Sjur Brændeland
  1 sibling, 0 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-07-22 13:02 UTC (permalink / raw)
  To: balbi
  Cc: ext Sjur BRENDELAND, sjurbren, linux-omap, Linus Walleij, linux-kernel

On Fri, 2011-07-22 at 15:05 +0300, ext Felipe Balbi wrote:
> Hi,
> 
> On Fri, Jul 22, 2011 at 02:51:12PM +0300, Carlos Chinea wrote:
> > Hi Felipe, 
> 
> hello there :-)
> 
> > > are you already using pm_runtime ? You could put that logic grouped in
> > > runtime_suspend()/runtime_resume() calls and from the driver, just call
> > > pm_runtime_put()/pm_runtime_get_sync() whenever you want to
> > > suspend/resume.
> > 
> > In the case of the omap_ssi driver not yet, but hopefully the
> > implementation will come soon ;) Anyway, the DATA/FLAG workaround is not
> > something I'm going to add to the omap_ssi, at least not now. This seems
> > to be needed by ST-Ericcson in their HSI controller.
> 
> aha, I see. Thanks for the clarification.
> 
> > > true, but I'm not sure a usecount is a good way to go. Why don't you
> > > just remove the sync API altogether and use only async, then the OMAP
> > > HSI controller driver is supposed to know when it can go to sleep. If
> > > you receive some data before a client queues a request, you just defer
> > > processing of that data until a new request is queued, or something...
> > 
> > Hmmm, Do you mean I remove the hsi_start_tx() and hsi_stop_tx()
> > completely ? Or Do I just create an async version of them ?
> 
> I would say remove completely and add async-only version.
> 
> > The truth is that they should not even exist in the first place, and the
> > wakelines should have been handled silently by the hsi controllers.  
> > But they are need because the hsi_clients/protocols, like CAIF or the
> > ssi_protocol in N900, need access to the wakelines to implement some
> > workarounds due to some races in the HSI/SSI HW spec.
> 
> that's quite nasty situation :-)
> 
> I still think, though, that if the clients don't really start a transfer
> straight away (meaning it's all ASYNC), you might be able to handle the
> workarounds in the HSI framework itself by having some extra flags on
> your hsi request structure (?).
> 
> See for example the use of short_not_ok flags on the gadget framework.
> The gadget drivers which can't handle short transfers (as of today only
> the Mass Storage gadget) will set that flag to tell the controller that
> we're not expecting a short packet and if we do, treat it as error. In
> case of such error, gadget driver is required to dequeue any queued
> transfer, flush the FIFO and restart ;-)
> 
> Maybe you could have something similar ? Depending on the workaround
> you're talking about, this might be feasible with few lines of code on
> the async API.
> 

The problem is that there is no standard way to handle this situation :(

For example, ssi_protocol tries to deal with the wakeline race problem
by raising the wakeline and then waiting for the other end to send a
"READY to receive" data frame before starting to send. But as you can see
in the case of CAIF, they went for a solution where they raise their
wakeline and wait for the other end to raise its wakeline to signal that
it is "READY to receive". Both solutions have their pros and cons.

Br,
-- 
Carlos Chinea <carlos.chinea@nokia.com>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-07-22 12:05             ` Felipe Balbi
  2011-07-22 13:02               ` Carlos Chinea
@ 2011-07-24 21:56               ` Sjur Brændeland
  2011-07-25  9:17                   ` Carlos Chinea
  1 sibling, 1 reply; 35+ messages in thread
From: Sjur Brændeland @ 2011-07-24 21:56 UTC (permalink / raw)
  To: balbi, Carlos Chinea
  Cc: linux-omap, Linus Walleij, linux-kernel, dmitry.tarnyagin

Hi Carlos,

>Sorry for the long delay. My comments below:
No worries, I will probably be very slow when responding to you as well
for the next couple of weeks...

>+ * @flow: RX flow type (SYNCHRONIZED or PIPELINE)
>+ * @arb_mode: Arbitration mode for TX frame (Round robin, priority)
>+ */
>+struct hsi_config {
>+	unsigned int	mode;
>+	unsigned int	channels;
>+	unsigned int	speed;

I have to pick up on one issue I missed earlier. The CAIF-HSI protocol
is going to use separate RX and TX speeds, where the modem and host side
look at the throughput and TX queues and request their TX speeds
accordingly. So I would prefer to be able to set the RX and TX speed in
each direction individually.

...
>>>... Why don't you
>>> just remove the sync API altogether and use only async, then the OMAP
>>> HSI controller driver is supposed to know when it can go to sleep. If
>>> you receive some data before a client queues a request, you just defer
>>> processing of that data until a new request is queued, or something...
>>
>> Hmmm, Do you mean I remove the hsi_start_tx() and hsi_stop_tx()
>> completely ? Or Do I just create an async version of them ?
>
> I would say remove completely and add async-only version.

Yes, this is probably the best way, but I'm not too concerned about how this
is done, as long as the API provides some way to ensure that the TX FIFO is
empty before putting the WAKE line low.


> > > > > *Missing function*: hsi_rx_fifo_occupancy()
> > > > > Before putting the link asleep we need to know if the fifo is empty
> > > > > or not.
> > > > > So we would like to have a way to read out the number of bytes in the
> > > > > RX fifo.
> > > >
> > > > This should be handled only by hsi_controller. Clients should not care
> > > > about this.
> > >
> > > There is a corner case when going to low power mode and both side has put the WAKE line low,
> > > but a RX DMA job is ongoing or the RX-fifo is not empty.
> > > In this case the host side must wait for the DMA job to complete and RX-
> > > fifo to be empty, before canceling any pending RX jobs.
> > >
> > > One option would be to provide a function hsi_rx_sync() that guarantees that any ongoing
> > > RX-job is completed and that the RX FIFO is empty. Another option could be to be
> > > able to provide API for reading RX-job states and RX-fifo occupancy.
> > >
> >
> > I think we don't need another function to do this neither. The
> > hsi_controller driver should implement a usecount scheme to know when
> > the HW can be switch off. IMO it is not a good idea to relay just on the
> > wakelines to power on/off the device, exactly because of this kind of
> > issues.

For the RX FIFO maybe you are right that the controller can handle the
power-down issues on its own. However, I'm uneasy about not having the
possibility to read out the RX FIFO occupancy from the HSI controller.
In the CAIF HSI implementation queued for the 3.1 kernel
(git.kernel.org/?p=linux/kernel/git/davem/net.git;a=blob;f=drivers/net/caif/caif_hsi.c)
we use this both in probe() and when the wake line goes low.
There may also be other corner cases related to wakeline handling or
speed change where we need this in the future. So from my point of view
I think we still need to be able to read out the RX FIFO occupancy.

Regards,
Sjur

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
  2011-07-24 21:56               ` Sjur Brændeland
@ 2011-07-25  9:17                   ` Carlos Chinea
  0 siblings, 0 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-07-25  9:17 UTC (permalink / raw)
  To: ext Sjur Brændeland
  Cc: balbi, linux-omap, Linus Walleij, linux-kernel, dmitry.tarnyagin

Hi,

On Sun, 2011-07-24 at 23:56 +0200, ext Sjur Brændeland wrote:
> Hi Carlos,
> 
> >Sorry for the long delay. My comments below:
> No worries, I will probably be very slow when responding to you as
> well for the next
> couple of weeks...
> 
> >+ * @flow: RX flow type (SYNCHRONIZED or PIPELINE)
> >+ * @arb_mode: Arbitration mode for TX frame (Round robin, priority)
> >+ */
> >+struct hsi_config {
> >+	unsigned int	mode;
> >+	unsigned int	channels;
> >+	unsigned int	speed;
> 
> I have to pick up on one issue I missed earlier. The CAIF-HSI protocol
> is going to use
> separate RX and TX speeds, where modem and host side looks at the
> throughput and
> TX-queues and request their TX speeds accordingly. So I would prefer
> to be able to set
> the RX and TX speed in each direction individually.
> 

You can already do that ;) The RX and TX configurations are separate:
/**
 * struct hsi_client - HSI client attached to an HSI port
 * @device: Driver model representation of the device
 * @tx_cfg: HSI TX configuration
 * @rx_cfg: HSI RX configuration
 * @hsi_start_rx: Called after incoming wake line goes high
 * @hsi_stop_rx: Called after incoming wake line goes low
 */
struct hsi_client {
	struct device		device;
	struct hsi_config	tx_cfg;
	struct hsi_config	rx_cfg;
...

> ...
> >>>... Why don't you
> >>> just remove the sync API altogether and use only async, then the OMAP
> >>> HSI controller driver is supposed to know when it can go to sleep. If
> >>> you receive some data before a client queues a request, you just defer
> >>> processing of that data until a new request is queued, or something...
> >>
> >> Hmmm, Do you mean I remove the hsi_start_tx() and hsi_stop_tx()
> >> completely ? Or Do I just create an async version of them ?
> >
> > I would say remove completely and add async-only version.
> 
> Yes, this is probably the best way, but I'm not too concerned how this is done,
> as long as the API provides some way to assure that the TX FIFO is empty
> before putting the WAKE line low.

Ok, let's see. We can rephrase this problem as: you want to be
certain that the last TX frame has gone through the wires before doing
something, like bringing the wake line down.

This can and should be done in the hsi_controller. It is just a
matter of calling the complete() callback at the right time. Meaning that
the hsi_controller does not call the complete() callback when the
DMA transfer for TX has completed, but when the last TX frame is already
on the wires. As an optimization, you may also do this only when there are
no more TX requests in the hsi_controller driver queue.
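
As a sketch of that controller-side behaviour (queue, register and helper
names are invented here; HSI_STATUS_COMPLETED is the completion code I
assume from the framework):

static void ssi_tx_dma_done(struct my_ssi_port *p, struct hsi_msg *msg)
{
	if (list_empty(&p->txqueue)) {
		/* last request: hold back complete() until the TX-empty
		 * interrupt confirms the frame left the serializer */
		p->pending_tx = msg;
		ssi_enable_irq(p, MY_SSI_IRQ_TX_EMPTY);
		return;
	}
	msg->status = HSI_STATUS_COMPLETED;
	msg->complete(msg);
}

static void ssi_tx_empty_irq(struct my_ssi_port *p)
{
	struct hsi_msg *msg = p->pending_tx;

	p->pending_tx = NULL;
	msg->status = HSI_STATUS_COMPLETED;
	msg->complete(msg);
}
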

> 
> 
> > > > > > *Missing function*: hsi_rx_fifo_occupancy()
> > > > > > Before putting the link asleep we need to know if the fifo is empty
> > > > > > or not.
> > > > > > So we would like to have a way to read out the number of bytes in the
> > > > > > RX fifo.
> > > > >
> > > > > This should be handled only by hsi_controller. Clients should not care
> > > > > about this.
> > > >
> > > > There is a corner case when going to low power mode and both side has put the WAKE line low,
> > > > but a RX DMA job is ongoing or the RX-fifo is not empty.
> > > > In this case the host side must wait for the DMA job to complete and RX-
> > > > fifo to be empty, before canceling any pending RX jobs.
> > > >
> > > > One option would be to provide a function hsi_rx_sync() that guarantees that any ongoing
> > > > RX-job is completed and that the RX FIFO is empty. Another option could be to be
> > > > able to provide API for reading RX-job states and RX-fifo occupancy.
> > > >
> > >
> > > I think we don't need another function to do this neither. The
> > > hsi_controller driver should implement a usecount scheme to know when
> > > the HW can be switch off. IMO it is not a good idea to relay just on the
> > > wakelines to power on/off the device, exactly because of this kind of
> > > issues.
> 
> For the RX FIFO maybe you are right that the controller can handle the
> power down issues
> on it's own. However I'm uneasy about not having the possibility to
> read out the RX
> FIFO-occupancy from the HSI-controller. In the CAIF HSI implementation
> queued for the 3.1
> kernel (git.kernel.org/?p=linux/kernel/git/davem/net.git;a=blob;f=drivers/net/caif/caif_hsi.c
> )
> we use this both in probe() and when wakeline go low.
> There may also be other corner-cases related to wakeline handling or
> speed change,
> where we need this in the future. So from my point of view think we
> still need to be able read
> out the RX FIFO occupancy.

In the case of wake line handling:
- The HSI HW block should be kept powered on as long as there is some
activity, regardless of the wake line state (see the usecount sketch
below).

In the case of a speed change:
- Could you explain a little bit more what the problem is?
How can data already stored in the RX FIFO be affected by changes
in the RX and/or TX HSI functional clock?
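
For what it is worth, a minimal sketch of the kind of usecount scheme I
mean (struct ssi_port, ssi_hw_enable(), ssi_hw_disable() and the field
names are hypothetical, not omap_ssi code):

static void ssi_activity_get(struct ssi_port *port)
{
	spin_lock_bh(&port->lock);
	if (port->usecount++ == 0)
		ssi_hw_enable(port);	/* clocks on, HW block powered */
	spin_unlock_bh(&port->lock);
}

static void ssi_activity_put(struct ssi_port *port)
{
	spin_lock_bh(&port->lock);
	/* Power off only once nothing is in flight and both wake lines are low */
	if (--port->usecount == 0 && !port->wake_in && !port->wake_out)
		ssi_hw_disable(port);
	spin_unlock_bh(&port->lock);
}

Every queued transfer, ongoing DMA job and non-empty FIFO would hold a
reference, so the block cannot be switched off while the RX FIFO still
holds data, and clients never need to see the occupancy themselves.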

Br,
-- 
Carlos Chinea <carlos.chinea@nokia.com>


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework
@ 2011-07-25  9:17                   ` Carlos Chinea
  0 siblings, 0 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-07-25  9:17 UTC (permalink / raw)
  To: ext Sjur Brændeland
  Cc: balbi, linux-omap, Linus Walleij, linux-kernel, dmitry.tarnyagin

Hi,

On Sun, 2011-07-24 at 23:56 +0200, ext Sjur Brændeland wrote:
> Hi Carlos,
> 
> >Sorry for the long delay. My comments below:
> No worries, I will probably be very slow when responding to you as well
> for the next couple of weeks...
> 
> >+ * @flow: RX flow type (SYNCHRONIZED or PIPELINE)
> >+ * @arb_mode: Arbitration mode for TX frame (Round robin, priority)
> >+ */
> >+struct hsi_config {
> >+	unsigned int	mode;
> >+	unsigned int	channels;
> >+	unsigned int	speed;
> 
> I have to pick up on one issue I missed earlier. The CAIF-HSI protocol
> is going to use separate RX and TX speeds, where the modem and host side
> each look at their throughput and TX queues and request their TX speed
> accordingly. So I would prefer to be able to set the RX and TX speed in
> each direction individually.
> 

You can already do that ;) The RX and TX configurations are separate:
/**
 * struct hsi_client - HSI client attached to an HSI port
 * @device: Driver model representation of the device
 * @tx_cfg: HSI TX configuration
 * @rx_cfg: HSI RX configuration
 * @hsi_start_rx: Called after incoming wake line goes high
 * @hsi_stop_rx: Called after incoming wake line goes low
 */
struct hsi_client {
	struct device		device;
	struct hsi_config	tx_cfg;
	struct hsi_config	rx_cfg;
...
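
So a client that wants asymmetric speeds simply fills in both structures,
roughly like this (the values and the constant names are illustrative only
and may differ from what this patch set actually defines):

static void my_client_set_speeds(struct hsi_client *cl)
{
	cl->tx_cfg.mode     = HSI_MODE_FRAME;
	cl->tx_cfg.channels = 2;
	cl->tx_cfg.speed    = 96000;	/* requested TX speed */
	cl->tx_cfg.arb_mode = HSI_ARB_RR;

	cl->rx_cfg.mode     = HSI_MODE_FRAME;
	cl->rx_cfg.channels = 2;
	cl->rx_cfg.speed    = 48000;	/* independent RX speed */
	cl->rx_cfg.flow     = HSI_FLOW_SYNC;
}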

> ...
> >>>... Why don't you
> >>> just remove the sync API altogether and use only async, then the OMAP
> >>> HSI controller driver is supposed to know when it can go to sleep. If
> >>> you receive some data before a client queues a request, you just defer
> >>> processing of that data until a new request is queued, or something...
> >>
> >> Hmmm, do you mean I remove hsi_start_tx() and hsi_stop_tx()
> >> completely? Or do I just create an async version of them?
> >
> > I would say remove completely and add async-only version.
> 
> Yes, this is probably the best way, but I'm not too concerned about how
> this is done, as long as the API provides some way to ensure that the TX
> FIFO is empty before putting the WAKE line low.

Ok, let's see. We can rephrase the problem: you want to be certain that
the last TX frame has gone out on the wire before doing something like
bringing the wake line down.

This can and should be done in the hsi_controller. It is just a matter
of calling the complete() callback at the right time. Meaning that the
hsi_controller does not call the complete() callback when the DMA
transfer for TX has completed, but only when the last TX frame is
already on the wire. As an optimization, you may also do this only when
there are no more TX requests in the hsi_controller driver queue.

> 
> 
> > > > > > *Missing function*: hsi_rx_fifo_occupancy()
> > > > > > Before putting the link to sleep we need to know if the FIFO is empty
> > > > > > or not.
> > > > > > So we would like to have a way to read out the number of bytes in the
> > > > > > RX FIFO.
> > > > >
> > > > > This should be handled only by hsi_controller. Clients should not care
> > > > > about this.
> > > >
> > > > There is a corner case when going to low-power mode and both sides have
> > > > put the WAKE line low, but an RX DMA job is ongoing or the RX FIFO is not
> > > > empty. In this case the host side must wait for the DMA job to complete
> > > > and the RX FIFO to be empty before canceling any pending RX jobs.
> > > >
> > > > One option would be to provide a function hsi_rx_sync() that guarantees
> > > > that any ongoing RX job is completed and that the RX FIFO is empty.
> > > > Another option would be to provide an API for reading RX-job states and
> > > > RX FIFO occupancy.
> > > >
> > >
> > > I don't think we need another function for this either. The
> > > hsi_controller driver should implement a usecount scheme to know when
> > > the HW can be switched off. IMO it is not a good idea to rely just on
> > > the wake lines to power the device on/off, exactly because of this kind
> > > of issue.
> 
> For the RX FIFO maybe you are right that the controller can handle the
> power-down issues on its own. However, I'm uneasy about not having the
> possibility to read out the RX FIFO occupancy from the HSI controller.
> In the CAIF HSI implementation queued for the 3.1 kernel
> (git.kernel.org/?p=linux/kernel/git/davem/net.git;a=blob;f=drivers/net/caif/caif_hsi.c)
> we use this both in probe() and when the wake line goes low.
> There may also be other corner cases related to wake line handling or
> speed change where we need this in the future. So from my point of view
> I think we still need to be able to read out the RX FIFO occupancy.

In the case of wake line handling:
- The HSI HW block should be kept powered on as long as there is some
activity, regardless of the wake line state.

In the case of a speed change:
- Could you explain a little bit more what the problem is?
How can data already stored in the RX FIFO be affected by changes
in the RX and/or TX HSI functional clock?

Br,
-- 
Carlos Chinea <carlos.chinea@nokia.com>

--
To unsubscribe from this list: send the line "unsubscribe linux-omap" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 0/7] HSI framework and drivers
  2011-06-10 13:38 [RFC PATCHv5 0/7] HSI framework and drivers Carlos Chinea
                   ` (8 preceding siblings ...)
  2011-06-22 19:11 ` [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework Sjur Brændeland
@ 2011-10-20 12:57 ` Sebastian Reichel
  2011-10-21  9:54   ` Linus Walleij
  9 siblings, 1 reply; 35+ messages in thread
From: Sebastian Reichel @ 2011-10-20 12:57 UTC (permalink / raw)
  To: Carlos Chinea
  Cc: linux-kernel, linux-omap, linus.walleij, govindraj.ti,
	pawel.szyszuk, sjur.brandeland, peter_henn

[-- Attachment #1: Type: text/plain, Size: 90 bytes --]

Hi,

What's the status of this patch? Are there plans to merge it into
3.2?

-- Sebastian

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 0/7] HSI framework and drivers
  2011-10-20 12:57 ` [RFC PATCHv5 0/7] HSI framework and drivers Sebastian Reichel
@ 2011-10-21  9:54   ` Linus Walleij
  2011-10-21 10:28     ` Carlos Chinea
  0 siblings, 1 reply; 35+ messages in thread
From: Linus Walleij @ 2011-10-21  9:54 UTC (permalink / raw)
  To: Carlos Chinea, Sebastian Reichel
  Cc: linux-kernel, linux-omap, govindraj.ti, pawel.szyszuk,
	sjur.brandeland, peter_henn

On Thu, Oct 20, 2011 at 2:57 PM, Sebastian Reichel <sre@debian.org> wrote:

> What's the status of this patch? Are there plans to merge it into
> 3.2?

If there were, it'd be part of linux-next by now, wouldn't it?

Carlos, can you tell us what's happening? This is starting to
look pretty solid, and it sort of hurts to have it out-of-tree.

Can't you create a for-next branch and ask Stephen to pull
that to linux-next so it can be merged for linux 3.3?

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 0/7] HSI framework and drivers
  2011-10-21  9:54   ` Linus Walleij
@ 2011-10-21 10:28     ` Carlos Chinea
  2011-10-21 12:19       ` Linus Walleij
  2011-10-21 13:36       ` Alan Cox
  0 siblings, 2 replies; 35+ messages in thread
From: Carlos Chinea @ 2011-10-21 10:28 UTC (permalink / raw)
  To: ext Linus Walleij
  Cc: Sebastian Reichel, linux-kernel, linux-omap, govindraj.ti,
	pawel.szyszuk, sjur.brandeland, peter_henn

On Fri, 2011-10-21 at 11:54 +0200, ext Linus Walleij wrote:
> On Thu, Oct 20, 2011 at 2:57 PM, Sebastian Reichel <sre@debian.org> wrote:
> 
> > What's the status of this patch? Are there plans to merge it into
> > 3.2?
> 
> If there were, it'd be part of linux-next by now, wouldn't it?
> 
> Carlos, can you tell us what's happening? This is starting to
> look pretty solid, and it sort of hurts to have it out-of-tree.
> 

Yes, I agree. I want to have the omap_ssi controller driver ready for
acceptance into linux-omap. It is currently the only thing blocking the
last patch set. Unfortunately, I am behind on that, but I am still
committed to delivering the changes for it.

> Can't you create a for-next branch and ask Stephen to pull
> that to linux-next so it can be merged for linux 3.3?
> 

Yes I can, but currently not with the changes needed in the omap_ssi
controller driver. Basically, we have 2 alternatives for that:

A. I create the branch with just the hsi_char and the hsi framework
B. The same as A but with STE or another hsi/ssi controller driver.

Any more ideas? Preferences?

Br,
Carlos


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 0/7] HSI framework and drivers
  2011-10-21 10:28     ` Carlos Chinea
@ 2011-10-21 12:19       ` Linus Walleij
  2011-10-21 13:36       ` Alan Cox
  1 sibling, 0 replies; 35+ messages in thread
From: Linus Walleij @ 2011-10-21 12:19 UTC (permalink / raw)
  To: Carlos Chinea
  Cc: Sebastian Reichel, linux-kernel, linux-omap, govindraj.ti,
	pawel.szyszuk, sjur.brandeland, peter_henn

On Fri, Oct 21, 2011 at 12:28 PM, Carlos Chinea <carlos.chinea@nokia.com> wrote:

(...)
> Basically, we have 2 alternatives for that:
>
> A. I create the branch with just the hsi_char and the hsi framework
> B. The same as A but with STE or another hsi/ssi controller driver.

I'd say (A), and then when someone (like me) fixes up the
STE HSI controller you can just git am that on top of the
framework. Possibly you can also stack the needed OMAP
changes in the same HSI tree, provided you get an ACK
from the subsystem maintainer.

Yours,
Linus Walleij

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [RFC PATCHv5 0/7] HSI framework and drivers
  2011-10-21 10:28     ` Carlos Chinea
  2011-10-21 12:19       ` Linus Walleij
@ 2011-10-21 13:36       ` Alan Cox
  1 sibling, 0 replies; 35+ messages in thread
From: Alan Cox @ 2011-10-21 13:36 UTC (permalink / raw)
  To: Carlos Chinea
  Cc: ext Linus Walleij, Sebastian Reichel, linux-kernel, linux-omap,
	govindraj.ti, pawel.szyszuk, sjur.brandeland, peter_henn

> A. I create the branch with just the hsi_char and the hsi framework
> B. The same as A but with STE or another hsi/ssi controller driver.

With my Intel hat on, we've also got stuff using this framework and
controllers that we'd want to sync with the current version of the core
code (I think it has diverged a bit).

Alan

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2011-10-21 13:36 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-06-10 13:38 [RFC PATCHv5 0/7] HSI framework and drivers Carlos Chinea
2011-06-10 13:38 ` [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework Carlos Chinea
2011-06-10 13:38 ` [RFC PATCHv5 2/7] HSI: omap_ssi: Introducing OMAP SSI driver Carlos Chinea
2011-06-13 13:21   ` Tony Lindgren
2011-06-14 12:09     ` Carlos Chinea
2011-06-13 20:21   ` Kevin Hilman
2011-06-14 12:12     ` Carlos Chinea
2011-06-15 15:37       ` Kevin Hilman
2011-06-10 13:38 ` [RFC PATCHv5 3/7] HSI: omap_ssi: Add OMAP SSI to the kernel configuration Carlos Chinea
2011-06-10 13:38 ` [RFC PATCHv5 4/7] HSI: hsi_char: Add HSI char device driver Carlos Chinea
2011-06-22 19:37   ` Sjur Brændeland
2011-06-23  9:12     ` Carlos Chinea
2011-06-10 13:38 ` [RFC PATCHv5 5/7] HSI: hsi_char: Add HSI char device kernel configuration Carlos Chinea
2011-06-10 13:38 ` [RFC PATCHv5 6/7] HSI: Add HSI API documentation Carlos Chinea
2011-06-10 13:38 ` [RFC PATCHv5 7/7] HSI: hsi_char: Update ioctl-number.txt Carlos Chinea
2011-06-14  9:35 ` [RFC PATCHv5 0/7] HSI framework and drivers Alan Cox
2011-06-15  9:27   ` Andras Domokos
2011-06-22 19:11 ` [RFC PATCHv5 1/7] HSI: hsi: Introducing HSI framework Sjur Brændeland
2011-06-22 19:25   ` Linus Walleij
2011-06-23 13:08   ` Carlos Chinea
2011-06-28 13:05     ` Sjur BRENDELAND
2011-06-28 13:05       ` Sjur BRENDELAND
2011-07-22 10:43       ` Carlos Chinea
2011-07-22 11:01         ` Felipe Balbi
2011-07-22 11:51           ` Carlos Chinea
2011-07-22 12:05             ` Felipe Balbi
2011-07-22 13:02               ` Carlos Chinea
2011-07-24 21:56               ` Sjur Brændeland
2011-07-25  9:17                 ` Carlos Chinea
2011-07-25  9:17                   ` Carlos Chinea
2011-10-20 12:57 ` [RFC PATCHv5 0/7] HSI framework and drivers Sebastian Reichel
2011-10-21  9:54   ` Linus Walleij
2011-10-21 10:28     ` Carlos Chinea
2011-10-21 12:19       ` Linus Walleij
2011-10-21 13:36       ` Alan Cox
