* MHI initial design review
@ 2018-04-27  2:23 Sujeev Dias
From: Sujeev Dias @ 2018-04-27  2:23 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: Sujeev Dias, linux-kernel, linux-arm-msm, Tony Truong

Hi Greg Kroah-Hartman, all,

This is the initial submission of the Modem Host Interface (MHI) stack for
upstream consideration. MHI is a communication protocol used to communicate
with external Qualcomm modems and Wi-Fi chipsets over high-speed peripheral
buses. Even though MHI does not dictate the underlying physical layer, the
protocol and MHI stack are structured for PCIe-based devices.

For additional details on the MHI interface, please see Documentation/mhi.txt.

The MHI stack is partitioned into three main components:
1. Core layer - handles all MHI protocol specific actions such as firmware
   download and data transfer
   /drivers/bus/mhi/core/*
2. Control layer - the bus master; manages power transitions of the external
   modem
   /drivers/bus/mhi/controllers/*
3. Device drivers - MHI channels (physical transport channels) exposed as MHI
   devices for clients to send and receive data
   /drivers/bus/mhi/device/*

There are three ways in which clients can interface with the MHI framework to
send data to and receive data from the external modem.

1. Register directly with the MHI core layer as an MHI device driver
2. User space clients can interface via the mhi_uci driver.
3. For network traffic, the mhi_netdev driver can be used.

Can you please do a high-level design review of the MHI driver and let me know
if I need to make any design changes before the drivers can be considered for
upstream?

Thanks
Sujeev

   


* [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-04-27  2:23 MHI initial design review Sujeev Dias
@ 2018-04-27  2:23 ` Sujeev Dias
From: Sujeev Dias @ 2018-04-27  2:23 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: Sujeev Dias, linux-kernel, linux-arm-msm, Tony Truong

MHI Host Interface is a communication protocol used by the host
to control and communicate with a modem over a high speed peripheral bus.
This module allows the host to communicate with external devices that
support the MHI protocol.

Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
---
 Documentation/00-INDEX                        |    2 +
 Documentation/devicetree/bindings/bus/mhi.txt |  141 +++
 Documentation/mhi.txt                         |  235 ++++
 drivers/bus/Kconfig                           |   17 +
 drivers/bus/Makefile                          |    1 +
 drivers/bus/mhi/Makefile                      |    8 +
 drivers/bus/mhi/core/Makefile                 |    1 +
 drivers/bus/mhi/core/mhi_boot.c               |  593 ++++++++++
 drivers/bus/mhi/core/mhi_dtr.c                |  177 +++
 drivers/bus/mhi/core/mhi_init.c               | 1290 +++++++++++++++++++++
 drivers/bus/mhi/core/mhi_internal.h           |  732 ++++++++++++
 drivers/bus/mhi/core/mhi_main.c               | 1476 +++++++++++++++++++++++++
 drivers/bus/mhi/core/mhi_pm.c                 | 1177 ++++++++++++++++++++
 include/linux/mhi.h                           |  694 ++++++++++++
 include/linux/mod_devicetable.h               |   11 +
 15 files changed, 6555 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/bus/mhi.txt
 create mode 100644 Documentation/mhi.txt
 create mode 100644 drivers/bus/mhi/Makefile
 create mode 100644 drivers/bus/mhi/core/Makefile
 create mode 100644 drivers/bus/mhi/core/mhi_boot.c
 create mode 100644 drivers/bus/mhi/core/mhi_dtr.c
 create mode 100644 drivers/bus/mhi/core/mhi_init.c
 create mode 100644 drivers/bus/mhi/core/mhi_internal.h
 create mode 100644 drivers/bus/mhi/core/mhi_main.c
 create mode 100644 drivers/bus/mhi/core/mhi_pm.c
 create mode 100644 include/linux/mhi.h

diff --git a/Documentation/00-INDEX b/Documentation/00-INDEX
index 708dc4c..44e2c6b 100644
--- a/Documentation/00-INDEX
+++ b/Documentation/00-INDEX
@@ -270,6 +270,8 @@ memory-hotplug.txt
 	- Hotpluggable memory support, how to use and current status.
 men-chameleon-bus.txt
 	- info on MEN chameleon bus.
+mhi.txt
+	- Modem Host Interface
 mic/
 	- Intel Many Integrated Core (MIC) architecture device driver.
 mips/
diff --git a/Documentation/devicetree/bindings/bus/mhi.txt b/Documentation/devicetree/bindings/bus/mhi.txt
new file mode 100644
index 0000000..ea1b620
--- /dev/null
+++ b/Documentation/devicetree/bindings/bus/mhi.txt
@@ -0,0 +1,141 @@
+MHI Host Interface
+
+MHI is used by the host to control and communicate with the modem over a
+high speed peripheral bus.
+
+==============
+Node Structure
+==============
+
+Main node properties:
+
+- mhi,max-channels
+  Usage: required
+  Value type: <u32>
+  Definition: Maximum number of channels supported by this controller
+
+- mhi,chan-cfg
+  Usage: required
+  Value type: Array of <u32>
+  Definition: Array of tuples describing the channel configuration.
+	1st element: Physical channel number
+	2nd element: Transfer ring length in elements
+	3rd element: Event ring associated with this channel
+	4th element: Channel direction as defined by enum dma_data_direction
+		1 = UL data transfer
+		2 = DL data transfer
+	5th element: Channel doorbell mode configuration as defined by
+	enum MHI_BRSTMODE
+		2 = burst mode disabled
+		3 = burst mode enabled
+	6th element: MHI doorbell configuration; valid only when burst mode is
+	enabled.
+		0 = Use default (device specific) polling configuration
+		For UL channels, the value specifies the timer to poll the MHI
+		context in milliseconds.
+		For DL channels, the threshold to poll the MHI context in
+		multiples of eight ring elements.
+	7th element: Channel execution environment as defined by enum MHI_EE
+		1 = Bootloader stage
+		2 = AMSS mode
+	8th element: data transfer type accepted as defined by enum
+	MHI_XFER_TYPE
+		0 = accept cpu address for buffer
+		1 = accept skb
+		2 = accept scatterlist
+		3 = offload channel, does not accept any transfer type
+	9th element: Bitwise configuration settings for the channel
+		Bit mask:
+		BIT(0) : LPM notify; the channel master requires LPM enter/exit
+		notifications.
+		BIT(1) : Offload channel; the MHI host is only involved in
+		setting up the data pipe, not in active data transfer.
+		BIT(2) : Must switch to doorbell mode whenever MHI M0 state
+		transition happens.
+		BIT(3) : The MHI bus driver pre-allocates buffers for this
+		channel. If set, clients are not allowed to queue buffers.
+		Valid only for the DL direction.
+
+- mhi,chan-names
+  Usage: required
+  Value type: Array of <string>
+  Definition: Channel names configured in mhi,chan-cfg.
+
+- mhi,ev-cfg
+  Usage: required
+  Value type: Array of <u32>
+  Definition: Array of tuples describing the event configuration.
+	1st element: Event ring length in elements
+	2nd element: Interrupt moderation time in ms
+	3rd element: MSI associated with this event ring
+	4th element: Dedicated channel number, if it's a dedicated event ring
+	5th element: Event ring priority, set to 1 for now
+	6th element: Event doorbell mode configuration as defined by
+	enum MHI_BRSTMODE
+		2 = burst mode disabled
+		3 = burst mode enabled
+	7th element: Bitwise configuration settings for the event ring
+		Bit mask:
+		BIT(0) : Event ring associated with hardware channels
+		BIT(1) : Client manages the event ring (used by napi_poll)
+		BIT(2) : Event ring associated with offload channel
+		BIT(3) : Event ring dedicated to control events only
+
+- mhi,timeout
+  Usage: optional
+  Value type: <u32>
+  Definition: Maximum timeout in ms to wait for state and command completion
+
+- mhi,fw-name
+  Usage: optional
+  Value type: <string>
+  Definition: Firmware image name to upload
+
+- mhi,edl-name
+  Usage: optional
+  Value type: <string>
+  Definition: Firmware image name for emergency download
+
+- mhi,fbc-download
+  Usage: optional
+  Value type: <bool>
+  Definition: If set true, the image specified by mhi,fw-name is the full image
+
+- mhi,sbl-size
+  Usage: optional
+  Value type: <u32>
+  Definition: Size of SBL image in bytes
+
+- mhi,seg-len
+  Usage: optional
+  Value type: <u32>
+  Definition: Size of each segment to allocate for BHIe vector table
+
+Children node properties:
+
+MHI drivers that require DT can add driver-specific information as a child node.
+
+- mhi,chan
+  Usage: Required
+  Value type: <string>
+  Definition: Channel name
+
+========
+Example:
+========
+mhi_controller {
+	mhi,max-channels = <105>;
+	mhi,chan-cfg = <0 64 2 1 2 1 2 0 0>, <1 64 2 2 2 1 2 0 0>,
+		       <2 64 1 1 2 1 1 0 0>, <3 64 1 2 2 1 1 0 0>;
+	mhi,chan-names = "LOOPBACK", "LOOPBACK",
+			 "SAHARA", "SAHARA";
+	mhi,ev-cfg = <64 1 1 0 1 2 8>,
+		     <64 1 2 0 1 2 0>;
+	mhi,fw-name = "sbl1.mbn";
+	mhi,timeout = <500>;
+
+	children_node {
+		mhi,chan = "LOOPBACK"
+		<driver specific properties>
+	};
+};
diff --git a/Documentation/mhi.txt b/Documentation/mhi.txt
new file mode 100644
index 0000000..1c501f1
--- /dev/null
+++ b/Documentation/mhi.txt
@@ -0,0 +1,235 @@
+Overview of Linux kernel MHI support
+====================================
+
+Modem-Host Interface (MHI)
+=========================
+MHI is used by the host to control and communicate with the modem over a high
+speed peripheral bus. Even though MHI can be easily adapted to any peripheral
+bus, it is primarily used with PCIe based devices. The host has one or more
+PCIe root ports connected to the modem device. The host has limited access to
+the device memory space, including register configuration and control of the
+device operation. Data transfers are invoked from the device.
+
+All data structures used by MHI are in the host system memory. The device
+accesses those data structures over the PCIe interface. MHI data structures
+and data buffers in the host system memory regions are mapped for the device.
+
+Memory spaces
+-------------
+PCIe configuration space : Used for enumeration and resource management, such
+as interrupts and base addresses.  This is done by the MHI controller driver.
+
+MMIO
+----
+MHI MMIO : Memory mapped IO consists of a set of registers in the device
+hardware, which are mapped to the host memory space through a PCIe base
+address register (BAR).
+
+MHI control registers : Access to MHI configuration registers
+(struct mhi_controller.regs).
+
+MHI BHI register: Boot host interface registers (struct mhi_controller.bhi) used
+for firmware download before MHI initialization.
+
+Channel db array : Doorbell registers (struct mhi_chan.tre_ring.db_addr) used
+by the host to notify the device there is new work to do.
+
+Event db array : Associated with the event context array
+(struct mhi_event.ring.db_addr); used by the host to notify the device that
+free event ring elements are available.
+
+Data structures
+---------------
+Host memory : Directly accessed by the host to manage the MHI data structures
+and buffers. The device accesses the host memory over the PCIe interface.
+
+Channel context array : All channel configurations are organized in channel
+context data array.
+
+struct __packed mhi_chan_ctxt;
+struct mhi_ctxt.chan_ctxt;
+
+Transfer rings : Used by host to schedule work items for a channel and organized
+as a circular queue of transfer descriptors (TD).
+
+struct __packed mhi_tre;
+struct mhi_chan.tre_ring;
+
+Event context array : All event configurations are organized in event context
+data array.
+
+struct mhi_ctxt.er_ctxt;
+struct __packed mhi_event_ctxt;
+
+Event rings: Used by device to send completion and state transition messages to
+host
+
+struct mhi_event.ring;
+struct __packed mhi_tre;
+
+Command context array: All command configurations are organized in command
+context data array.
+
+struct __packed mhi_cmd_ctxt;
+struct mhi_ctxt.cmd_ctxt;
+
+Command rings: Used by host to send MHI commands to device
+
+struct __packed mhi_tre;
+struct mhi_cmd.ring;
+
+Transfer rings
+--------------
+MHI channels are logical, unidirectional data pipes between host and device.
+Each channel is associated with a single transfer ring.  The data direction
+can be either inbound (device to host) or outbound (host to device).  Transfer
+descriptors are managed by using transfer rings, which are defined for each
+channel between device and host and reside in host memory.
+
+Transfer ring Pointer:	  	Transfer Ring Array
+[Read Pointer (RP)] ----------->[Ring Element] } TD
+[Write Pointer (WP)]-		[Ring Element]
+                     -		[Ring Element]
+		      --------->[Ring Element]
+				[Ring Element]
+
+1. Host allocates memory for the transfer ring
+2. Host sets base, read pointer, and write pointer in the corresponding
+   channel context
+3. Ring is considered empty when RP == WP
+4. Ring is considered full when WP + 1 == RP
+5. RP indicates the next element to be serviced by the device
+6. When the host has a new buffer to send, it updates the ring element with
+   buffer information
+7. Host increments the WP to the next element
+8. Host rings the associated channel DB.
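+
+A minimal sketch of the occupancy rules above (hypothetical helpers for
+illustration only; the core driver walks the ring pointers directly):
+
+	/* rp/wp are element indexes into a ring of len elements */
+	static bool mhi_ring_is_empty(u32 rp, u32 wp)
+	{
+		return rp == wp;
+	}
+
+	static bool mhi_ring_is_full(u32 rp, u32 wp, u32 len)
+	{
+		/* one element is kept unused to tell full from empty */
+		return ((wp + 1) % len) == rp;
+	}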
+
+Event rings
+-----------
+Events from the device to the host are organized in event rings and defined
+in event descriptors.  Event rings are arrays of EDs that reside in host
+memory.
+
+Event ring Pointer:	  	Event Ring Array
+[Read Pointer (RP)] ----------->[Ring Element] } ED
+[Write Pointer (WP)]-		[Ring Element]
+                     -		[Ring Element]
+		      --------->[Ring Element]
+				[Ring Element]
+
+1. Host allocates memory for the event ring
+2. Host sets base, read pointer, and write pointer in the corresponding event
+   context
+3. Both host and device have a local copy of RP and WP
+4. Ring is considered empty (no events to service) when WP + 1 == RP
+5. Ring is full of events when RP == WP
+6. RP - 1 is the last event the device programmed
+7. When the device has a new event to send, it updates the ED pointed to by RP
+8. Device increments RP to the next element
+9. Device triggers an interrupt
+
+Example operation for data transfer:
+
+1. Host prepares a TD with buffer information
+2. Host increments Chan[id].ctxt.WP
+3. Host rings the channel DB register
+4. Device wakes up and processes the TD
+5. Device generates a completion event for that TD by updating the ED
+6. Device increments Event[id].ctxt.RP
+7. Device triggers an MSI to wake the host
+8. Host wakes up and checks the event ring for the completion event
+9. Host updates Event[id].ctxt.WP to indicate the completion event has been
+   processed
+
+MHI States
+----------
+
+enum MHI_STATE {
+MHI_STATE_RESET : MHI is in reset (POR) state. Host is not allowed to
+		  access device MMIO register space.
+MHI_STATE_READY : Device is ready for initialization. Host can start MHI
+		  initialization by programming MMIO.
+MHI_STATE_M0 : MHI is in fully active state, data transfer is active.
+MHI_STATE_M1 : Device is in a suspended state.
+MHI_STATE_M2 : MHI is in low power mode, the device may enter a lower power
+	       mode.
+MHI_STATE_M3 : Both host and device are in suspended state.  The PCIe link is
+	       not accessible to the device.
+};
+
+MHI Initialization
+------------------
+
+1. After the system boots, the device is enumerated over the PCIe interface
+2. Host allocates MHI context for event, channel and command arrays
+3. Host initializes the context arrays and prepares interrupts
+4. Host waits until the device enters READY state
+5. Host programs MHI MMIO registers and sets the device into MHI_M0 state
+6. Host waits for the device to enter M0 state
+
+Linux Software Architecture
+===========================
+
+MHI Controller
+--------------
+The MHI controller is also the MHI bus master, in charge of managing the
+physical link between host and device.  It is not involved in actual data
+transfer.  Currently only PCIe based buses are supported; support for other
+bus types can be added.
+
+Roles:
+1. Turn on the PCIe bus and configure the link
+2. Configure MSI, SMMU, and IOMEM
+3. Allocate struct mhi_controller and register with the MHI bus framework
+4. Initiate the power on and shutdown sequences
+5. Initiate suspend and resume
+
+Usage
+-----
+
+1. Allocate control data structure by calling mhi_alloc_controller()
+2. Initialize mhi_controller with all the known information such as:
+   - Device Topology
+   - IOMMU window
+   - IOMEM mapping
+   - Device to use for memory allocation, and of_node with DT configuration
+   - Configure asynchronous callback functions
+3. Register MHI controller with MHI bus framework by calling
+   of_register_mhi_controller()
+
+After successful registration, the controller can initiate any of these power
+sequences:
+
+1. Power up sequence
+   - mhi_prepare_for_power_up()
+   - mhi_async_power_up()
+   - mhi_sync_power_up()
+2. Power down sequence
+   - mhi_power_down()
+   - mhi_unprepare_after_power_down()
+3. Initiate suspend
+   - mhi_pm_suspend()
+4. Initiate resume
+   - mhi_pm_resume()
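+
+A minimal bring-up sketch combining the steps above (error handling trimmed;
+argument lists are assumed from context, see include/linux/mhi.h in this
+patch for the exact prototypes):
+
+	struct mhi_controller *mhi_cntrl;
+	int ret;
+
+	mhi_cntrl = mhi_alloc_controller(0);
+	mhi_cntrl->dev = dev;		/* device used for memory allocation */
+	mhi_cntrl->regs = mmio_base;	/* IOMEM mapping */
+	mhi_cntrl->of_node = of_node;	/* DT configuration */
+	/* ... topology, IOMMU window, asynchronous callbacks ... */
+
+	ret = of_register_mhi_controller(mhi_cntrl);
+	if (!ret)
+		ret = mhi_prepare_for_power_up(mhi_cntrl);
+	if (!ret)
+		ret = mhi_sync_power_up(mhi_cntrl);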
+
+MHI Devices
+-----------
+A logical device that binds to a maximum of two physical MHI channels. Once
+MHI is in the powered on state, each channel supported by the controller will
+be allocated as an mhi_device.
+
+Each supported device is enumerated under
+/sys/bus/mhi/devices/
+
+struct mhi_device;
+
+MHI Driver
+----------
+Each MHI driver can bind to one or more MHI devices. The MHI host driver will
+bind an mhi_device to a matching mhi_driver.
+
+All registered drivers are visible under
+/sys/bus/mhi/drivers/
+
+struct mhi_driver;
+
+Usage
+-----
+
+1. Register the driver using mhi_driver_register()
+2. Before sending data, prepare the device for transfer by calling
+   mhi_prepare_for_transfer()
+3. Initiate data transfer by calling mhi_queue_transfer()
+4. When finished, call mhi_unprepare_from_transfer() to end the data transfer
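+
+A minimal client driver sketch, modeled on the DTR driver in this series
+(the "SAMPLE" channel name, buf, and len are placeholders; argument lists
+are assumed from context, see include/linux/mhi.h for exact prototypes):
+
+	static int sample_probe(struct mhi_device *mhi_dev,
+				const struct mhi_device_id *id)
+	{
+		/* move the device's channels to enabled state */
+		return mhi_prepare_for_transfer(mhi_dev);
+	}
+
+	static void sample_remove(struct mhi_device *mhi_dev)
+	{
+	}
+
+	static void sample_xfer_cb(struct mhi_device *mhi_dev,
+				   struct mhi_result *mhi_result)
+	{
+		/* buffer consumed (UL) or filled (DL) by the device */
+	}
+
+	static const struct mhi_device_id sample_table[] = {
+		{ .chan = "SAMPLE" },
+		{},
+	};
+
+	static struct mhi_driver sample_driver = {
+		.id_table = sample_table,
+		.probe = sample_probe,
+		.remove = sample_remove,
+		.ul_xfer_cb = sample_xfer_cb,
+		.dl_xfer_cb = sample_xfer_cb,
+		.driver = { .name = "SAMPLE" },
+	};
+
+	/* from module init: */
+	ret = mhi_driver_register(&sample_driver);
+
+	/* once probed, queue an outbound buffer: */
+	ret = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, buf, len, MHI_EOT);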
diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
index d1c0b60..e15d56d 100644
--- a/drivers/bus/Kconfig
+++ b/drivers/bus/Kconfig
@@ -171,6 +171,23 @@ config DA8XX_MSTPRI
 	  configuration. Allows to adjust the priorities of all master
 	  peripherals.
 
+config MHI_BUS
+	tristate "Modem Host Interface"
+	help
+	  MHI Host Interface is a communication protocol used by the host
+	  to control and communicate with a modem over a high speed peripheral
+	  bus. Enabling this module will allow the host to communicate with
+	  external devices that support the MHI protocol.
+
+config MHI_DEBUG
+	 bool "MHI debug support"
+	 depends on MHI_BUS
+	 help
+	   Say yes here to enable debugging support in the MHI transport
+	   and individual MHI client drivers. This option will impact
+	   throughput as individual MHI packets and state transitions
+	   will be logged.
+
 source "drivers/bus/fsl-mc/Kconfig"
 
 endmenu
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index b8f036c..8fc0b3b 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -31,3 +31,4 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS)	+= uniphier-system-bus.o
 obj-$(CONFIG_VEXPRESS_CONFIG)	+= vexpress-config.o
 
 obj-$(CONFIG_DA8XX_MSTPRI)	+= da8xx-mstpri.o
+obj-$(CONFIG_MHI_BUS) += mhi/
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
new file mode 100644
index 0000000..9f8f3ac
--- /dev/null
+++ b/drivers/bus/mhi/Makefile
@@ -0,0 +1,8 @@
+#
+# Makefile for the MHI stack
+#
+
+# core layer
+obj-y += core/
+#obj-y += controllers/
+#obj-y += devices/
diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
new file mode 100644
index 0000000..a743fbf
--- /dev/null
+++ b/drivers/bus/mhi/core/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_MHI_BUS) += mhi_init.o mhi_main.o mhi_pm.o mhi_boot.o mhi_dtr.o
diff --git a/drivers/bus/mhi/core/mhi_boot.c b/drivers/bus/mhi/core/mhi_boot.c
new file mode 100644
index 0000000..47276a3
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_boot.c
@@ -0,0 +1,593 @@
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/firmware.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/module.h>
+#include <linux/random.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include <linux/mhi.h>
+#include "mhi_internal.h"
+
+
+/* setup rddm vector table for rddm transfer */
+static void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
+			     struct image_info *img_info)
+{
+	struct mhi_buf *mhi_buf = img_info->mhi_buf;
+	struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
+	int i = 0;
+
+	for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
+		MHI_VERB("Setting vector:%pad size:%zu\n",
+			 &mhi_buf->dma_addr, mhi_buf->len);
+		bhi_vec->dma_addr = mhi_buf->dma_addr;
+		bhi_vec->size = mhi_buf->len;
+	}
+}
+
+/* collect rddm during kernel panic */
+static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+	struct mhi_buf *mhi_buf;
+	u32 sequence_id;
+	u32 rx_status;
+	enum MHI_EE ee;
+	struct image_info *rddm_image = mhi_cntrl->rddm_image;
+	const u32 delayus = 100;
+	u32 retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
+	void __iomem *base = mhi_cntrl->bhi;
+
+	MHI_LOG("Entered with pm_state:%s dev_state:%s ee:%s\n",
+		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		TO_MHI_EXEC_STR(mhi_cntrl->ee));
+
+	/*
+	 * This should only execute during a kernel panic; we expect all
+	 * other cores to shut down while we're collecting the rddm buffer.
+	 * After returning from this function, we expect the device to reset.
+	 *
+	 * Normally, we would read/write pm_state only after grabbing
+	 * pm_lock; since we're in a panic, we skip it.
+	 */
+
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+		return -EIO;
+
+	/*
+	 * There is no guarantee this state change will take effect since
+	 * we're setting it without grabbing pm_lock; it's best effort
+	 */
+	mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT;
+	/* update should take the effect immediately */
+	smp_wmb();
+
+	/* setup the RX vector table */
+	mhi_rddm_prepare(mhi_cntrl, rddm_image);
+	mhi_buf = &rddm_image->mhi_buf[rddm_image->entries - 1];
+
+	MHI_LOG("Starting BHIe programming for RDDM\n");
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
+		      upper_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS,
+		      lower_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
+	sequence_id = prandom_u32() & BHIE_RXVECSTATUS_SEQNUM_BMSK;
+
+	if (unlikely(!sequence_id))
+		sequence_id = 1;
+
+
+	mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
+			    BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
+			    sequence_id);
+
+	MHI_LOG("Trigger device into RDDM mode\n");
+	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
+
+	MHI_LOG("Waiting for image download completion\n");
+	while (retry--) {
+		ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
+					 BHIE_RXVECSTATUS_STATUS_BMSK,
+					 BHIE_RXVECSTATUS_STATUS_SHFT,
+					 &rx_status);
+		if (ret)
+			return -EIO;
+
+		if (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) {
+			MHI_LOG("RDDM successfully collected\n");
+			return 0;
+		}
+
+		udelay(delayus);
+	}
+
+	ee = mhi_get_exec_env(mhi_cntrl);
+	ret = mhi_read_reg(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS, &rx_status);
+
+	MHI_ERR("Did not complete RDDM transfer\n");
+	MHI_ERR("Current EE:%s\n", TO_MHI_EXEC_STR(ee));
+	MHI_ERR("RXVEC_STATUS:0x%x, ret:%d\n", rx_status, ret);
+
+	return -EIO;
+}
+
+/* download ramdump image from device */
+int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic)
+{
+	void __iomem *base = mhi_cntrl->bhi;
+	rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
+	struct image_info *rddm_image = mhi_cntrl->rddm_image;
+	struct mhi_buf *mhi_buf;
+	int ret;
+	u32 rx_status;
+	u32 sequence_id;
+
+	if (!rddm_image)
+		return -ENOMEM;
+
+	if (in_panic)
+		return __mhi_download_rddm_in_panic(mhi_cntrl);
+
+	MHI_LOG("Waiting for device to enter RDDM state from EE:%s\n",
+		TO_MHI_EXEC_STR(mhi_cntrl->ee));
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->ee == MHI_EE_RDDM ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		MHI_ERR("MHI is not in valid state, pm_state:%s ee:%s\n",
+			to_mhi_pm_state_str(mhi_cntrl->pm_state),
+			TO_MHI_EXEC_STR(mhi_cntrl->ee));
+		return -EIO;
+	}
+
+	mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image);
+
+	/* vector table is the last entry */
+	mhi_buf = &rddm_image->mhi_buf[rddm_image->entries - 1];
+
+	read_lock_bh(pm_lock);
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+		read_unlock_bh(pm_lock);
+		return -EIO;
+	}
+
+	MHI_LOG("Starting BHIe Programming for RDDM\n");
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
+		      upper_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS,
+		      lower_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
+
+	sequence_id = prandom_u32() & BHIE_RXVECSTATUS_SEQNUM_BMSK;
+	mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
+			    BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
+			    sequence_id);
+	read_unlock_bh(pm_lock);
+
+	MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
+		upper_32_bits(mhi_buf->dma_addr),
+		lower_32_bits(mhi_buf->dma_addr),
+		mhi_buf->len, sequence_id);
+	MHI_LOG("Waiting for image download completion\n");
+
+	/* waiting for image download completion */
+	wait_event_timeout(mhi_cntrl->state_event,
+			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
+			   mhi_read_reg_field(mhi_cntrl, base,
+					      BHIE_RXVECSTATUS_OFFS,
+					      BHIE_RXVECSTATUS_STATUS_BMSK,
+					      BHIE_RXVECSTATUS_STATUS_SHFT,
+					      &rx_status) || rx_status,
+			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+		return -EIO;
+
+	return (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
+}
+EXPORT_SYMBOL(mhi_download_rddm_img);
+
+static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
+			    const struct mhi_buf *mhi_buf)
+{
+	void __iomem *base = mhi_cntrl->bhi;
+	rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
+	u32 tx_status;
+
+	read_lock_bh(pm_lock);
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+		read_unlock_bh(pm_lock);
+		return -EIO;
+	}
+
+	MHI_LOG("Starting BHIe Programming\n");
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_HIGH_OFFS,
+		      upper_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_LOW_OFFS,
+		      lower_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
+
+	mhi_cntrl->sequence_id = prandom_u32() & BHIE_TXVECSTATUS_SEQNUM_BMSK;
+	mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
+			    BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
+			    mhi_cntrl->sequence_id);
+	read_unlock_bh(pm_lock);
+
+	MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
+		upper_32_bits(mhi_buf->dma_addr),
+		lower_32_bits(mhi_buf->dma_addr),
+		mhi_buf->len, mhi_cntrl->sequence_id);
+	MHI_LOG("Waiting for image transfer completion\n");
+
+	/* waiting for image download completion */
+	wait_event_timeout(mhi_cntrl->state_event,
+			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
+			   mhi_read_reg_field(mhi_cntrl, base,
+					      BHIE_TXVECSTATUS_OFFS,
+					      BHIE_TXVECSTATUS_STATUS_BMSK,
+					      BHIE_TXVECSTATUS_STATUS_SHFT,
+					      &tx_status) || tx_status,
+			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+		return -EIO;
+
+	return (tx_status == BHIE_TXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
+}
+
+static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
+			   void *buf,
+			   size_t size)
+{
+	u32 tx_status, val;
+	int i, ret;
+	void __iomem *base = mhi_cntrl->bhi;
+	rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
+	dma_addr_t phys = dma_map_single(mhi_cntrl->dev, buf, size,
+					 DMA_TO_DEVICE);
+	struct {
+		char *name;
+		u32 offset;
+	} error_reg[] = {
+		{ "ERROR_CODE", BHI_ERRCODE },
+		{ "ERROR_DBG1", BHI_ERRDBG1 },
+		{ "ERROR_DBG2", BHI_ERRDBG2 },
+		{ "ERROR_DBG3", BHI_ERRDBG3 },
+		{ NULL },
+	};
+
+	if (dma_mapping_error(mhi_cntrl->dev, phys))
+		return -ENOMEM;
+
+	MHI_LOG("Starting BHI programming\n");
+
+	/* start SBL download via the BHI protocol */
+	read_lock_bh(pm_lock);
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+		read_unlock_bh(pm_lock);
+		goto invalid_pm_state;
+	}
+
+	mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
+	mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH, upper_32_bits(phys));
+	mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW, lower_32_bits(phys));
+	mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
+	mhi_cntrl->session_id = prandom_u32() & BHI_TXDB_SEQNUM_BMSK;
+	mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, mhi_cntrl->session_id);
+	read_unlock_bh(pm_lock);
+
+	MHI_LOG("Waiting for image transfer completion\n");
+
+	/* waiting for image download completion */
+	wait_event_timeout(mhi_cntrl->state_event,
+			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
+			   mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
+					      BHI_STATUS_MASK, BHI_STATUS_SHIFT,
+					      &tx_status) || tx_status,
+			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+		goto invalid_pm_state;
+
+	if (tx_status == BHI_STATUS_ERROR) {
+		MHI_ERR("Image transfer failed\n");
+		read_lock_bh(pm_lock);
+		if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+			for (i = 0; error_reg[i].name; i++) {
+				ret = mhi_read_reg(mhi_cntrl, base,
+						   error_reg[i].offset, &val);
+				if (ret)
+					break;
+				MHI_ERR("reg:%s value:0x%x\n",
+					error_reg[i].name, val);
+			}
+		}
+		read_unlock_bh(pm_lock);
+		goto invalid_pm_state;
+	}
+
+	dma_unmap_single(mhi_cntrl->dev, phys, size, DMA_TO_DEVICE);
+
+	return (tx_status == BHI_STATUS_SUCCESS) ? 0 : -ETIMEDOUT;
+
+invalid_pm_state:
+	dma_unmap_single(mhi_cntrl->dev, phys, size, DMA_TO_DEVICE);
+
+	return -EIO;
+}
+
+void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
+			 struct image_info *image_info)
+{
+	int i;
+	struct mhi_buf *mhi_buf = image_info->mhi_buf;
+
+	for (i = 0; i < image_info->entries; i++, mhi_buf++)
+		mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
+				  mhi_buf->dma_addr);
+
+	kfree(image_info->mhi_buf);
+	kfree(image_info);
+}
+
+int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
+			 struct image_info **image_info,
+			 size_t alloc_size)
+{
+	size_t seg_size = mhi_cntrl->seg_len;
+	/* require an additional entry for the vector table */
+	int segments = DIV_ROUND_UP(alloc_size, seg_size) + 1;
+	int i;
+	struct image_info *img_info;
+	struct mhi_buf *mhi_buf;
+
+	MHI_LOG("Allocating bytes:%zu seg_size:%zu total_seg:%u\n",
+		alloc_size, seg_size, segments);
+
+	img_info = kzalloc(sizeof(*img_info), GFP_KERNEL);
+	if (!img_info)
+		return -ENOMEM;
+
+	/* allocate memory for entries */
+	img_info->mhi_buf = kcalloc(segments, sizeof(*img_info->mhi_buf),
+				    GFP_KERNEL);
+	if (!img_info->mhi_buf)
+		goto error_alloc_mhi_buf;
+
+	/* allocate and populate vector table */
+	mhi_buf = img_info->mhi_buf;
+	for (i = 0; i < segments; i++, mhi_buf++) {
+		size_t vec_size = seg_size;
+
+		/* last entry is for vector table */
+		if (i == segments - 1)
+			vec_size = sizeof(struct __packed bhi_vec_entry) * i;
+
+		mhi_buf->len = vec_size;
+		mhi_buf->buf = mhi_alloc_coherent(mhi_cntrl, vec_size,
+					&mhi_buf->dma_addr, GFP_KERNEL);
+		if (!mhi_buf->buf)
+			goto error_alloc_segment;
+
+		MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
+			mhi_buf->dma_addr, mhi_buf->len);
+	}
+
+	img_info->bhi_vec = img_info->mhi_buf[segments - 1].buf;
+	img_info->entries = segments;
+	*image_info = img_info;
+
+	MHI_LOG("Successfully allocated bhi vec table\n");
+
+	return 0;
+
+error_alloc_segment:
+	for (--i, --mhi_buf; i >= 0; i--, mhi_buf--)
+		mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
+				  mhi_buf->dma_addr);
+
+error_alloc_mhi_buf:
+	kfree(img_info);
+
+	return -ENOMEM;
+}
+
+static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
+			      const struct firmware *firmware,
+			      struct image_info *img_info)
+{
+	size_t remainder = firmware->size;
+	size_t to_cpy;
+	const u8 *buf = firmware->data;
+	int i = 0;
+	struct mhi_buf *mhi_buf = img_info->mhi_buf;
+	struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
+
+	while (remainder) {
+		MHI_ASSERT(i >= img_info->entries, "malformed vector table");
+
+		to_cpy = min(remainder, mhi_buf->len);
+		memcpy(mhi_buf->buf, buf, to_cpy);
+		bhi_vec->dma_addr = mhi_buf->dma_addr;
+		bhi_vec->size = to_cpy;
+
+		MHI_VERB("Setting Vector:0x%llx size: %llu\n",
+			 bhi_vec->dma_addr, bhi_vec->size);
+		buf += to_cpy;
+		remainder -= to_cpy;
+		i++;
+		bhi_vec++;
+		mhi_buf++;
+	}
+}
+
+void mhi_fw_load_worker(struct work_struct *work)
+{
+	int ret;
+	struct mhi_controller *mhi_cntrl;
+	const char *fw_name;
+	const struct firmware *firmware;
+	struct image_info *image_info;
+	void *buf;
+	size_t size;
+
+	mhi_cntrl = container_of(work, struct mhi_controller, fw_worker);
+
+	MHI_LOG("Waiting for device to enter PBL from EE:%s\n",
+		TO_MHI_EXEC_STR(mhi_cntrl->ee));
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 MHI_IN_PBL(mhi_cntrl->ee) ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		MHI_ERR("MHI is not in valid state\n");
+		return;
+	}
+
+	MHI_LOG("Device current EE:%s\n", TO_MHI_EXEC_STR(mhi_cntrl->ee));
+
+	/* if the device is in pass thru, we do not have to load firmware */
+	if (mhi_cntrl->ee == MHI_EE_PTHRU)
+		return;
+
+	fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ?
+		mhi_cntrl->edl_image : mhi_cntrl->fw_image;
+
+	if (!fw_name || (mhi_cntrl->fbc_download && (!mhi_cntrl->sbl_size ||
+						     !mhi_cntrl->seg_len))) {
+		MHI_ERR("No firmware image defined or !sbl_size || !seg_len\n");
+		return;
+	}
+
+	ret = request_firmware(&firmware, fw_name, mhi_cntrl->dev);
+	if (ret) {
+		MHI_ERR("Error loading firmware, ret:%d\n", ret);
+		return;
+	}
+
+	size = (mhi_cntrl->fbc_download) ? mhi_cntrl->sbl_size : firmware->size;
+
+	/* the sbl size provided is the maximum, not necessarily the image size */
+	if (size > firmware->size)
+		size = firmware->size;
+
+	buf = kmalloc(size, GFP_KERNEL);
+	if (!buf) {
+		MHI_ERR("Could not allocate memory for image\n");
+		release_firmware(firmware);
+		return;
+	}
+
+	/* load sbl image */
+	memcpy(buf, firmware->data, size);
+	ret = mhi_fw_load_sbl(mhi_cntrl, buf, size);
+	kfree(buf);
+
+	if (!mhi_cntrl->fbc_download || ret || mhi_cntrl->ee == MHI_EE_EDL)
+		release_firmware(firmware);
+
+	/* error or in edl, we're done */
+	if (ret || mhi_cntrl->ee == MHI_EE_EDL)
+		return;
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	mhi_cntrl->dev_state = MHI_STATE_RESET;
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	/*
+	 * if we're doing fbc, populate the vector tables while the
+	 * device is transitioning into MHI READY state
+	 */
+	if (mhi_cntrl->fbc_download) {
+		ret = mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->fbc_image,
+					   firmware->size);
+		if (ret) {
+			MHI_ERR("Error alloc size of %zu\n", firmware->size);
+			goto error_alloc_fw_table;
+		}
+
+		MHI_LOG("Copying firmware image into vector table\n");
+
+		/* load the firmware into BHIE vec table */
+		mhi_firmware_copy(mhi_cntrl, firmware, mhi_cntrl->fbc_image);
+	}
+
+	/* transitioning into MHI RESET->READY state */
+	ret = mhi_ready_state_transition(mhi_cntrl);
+
+	MHI_LOG("To Reset->Ready PM_STATE:%s MHI_STATE:%s EE:%s, ret:%d\n",
+		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		TO_MHI_EXEC_STR(mhi_cntrl->ee), ret);
+
+	if (!mhi_cntrl->fbc_download)
+		return;
+
+	if (ret) {
+		MHI_ERR("Did not transition to READY state\n");
+		goto error_read;
+	}
+
+	/* wait for BHIE event */
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->ee == MHI_EE_BHIE ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		MHI_ERR("MHI did not enter BHIE\n");
+		goto error_read;
+	}
+
+	/* start full firmware image download */
+	image_info = mhi_cntrl->fbc_image;
+	ret = mhi_fw_load_amss(mhi_cntrl,
+			       /* last entry is vec table */
+			       &image_info->mhi_buf[image_info->entries - 1]);
+
+	MHI_LOG("amss fw_load, ret:%d\n", ret);
+
+	release_firmware(firmware);
+
+	return;
+
+error_read:
+	mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+	mhi_cntrl->fbc_image = NULL;
+
+error_alloc_fw_table:
+	release_firmware(firmware);
+}
diff --git a/drivers/bus/mhi/core/mhi_dtr.c b/drivers/bus/mhi/core/mhi_dtr.c
new file mode 100644
index 0000000..bc17359
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_dtr.c
@@ -0,0 +1,177 @@
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/termios.h>
+#include <linux/wait.h>
+#include <linux/mhi.h>
+#include "mhi_internal.h"
+
+struct __packed dtr_ctrl_msg {
+	u32 preamble;
+	u32 msg_id;
+	u32 dest_id;
+	u32 size;
+	u32 msg;
+};
+
+#define CTRL_MAGIC (0x4C525443)
+#define CTRL_MSG_DTR BIT(0)
+#define CTRL_MSG_ID (0x10)
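+
+/*
+ * Illustrative wire format (values other than the defines above are
+ * placeholders): asserting DTR for a client channel queues a message of
+ * the form
+ *
+ *	{ .preamble = CTRL_MAGIC, .msg_id = CTRL_MSG_ID,
+ *	  .dest_id = <client channel number>, .size = sizeof(u32),
+ *	  .msg = CTRL_MSG_DTR }
+ *
+ * on the dedicated IP_CTRL channel (see mhi_dtr_tiocmset() below).
+ */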
+
+static int mhi_dtr_tiocmset(struct mhi_controller *mhi_cntrl,
+			    struct mhi_chan *mhi_chan,
+			    u32 tiocm)
+{
+	struct dtr_ctrl_msg *dtr_msg = NULL;
+	struct mhi_chan *dtr_chan = mhi_cntrl->dtr_dev->ul_chan;
+	int ret = 0;
+
+	tiocm &= TIOCM_DTR;
+	if (mhi_chan->tiocm == tiocm)
+		return 0;
+
+	mutex_lock(&dtr_chan->mutex);
+
+	dtr_msg = kzalloc(sizeof(*dtr_msg), GFP_KERNEL);
+	if (!dtr_msg) {
+		ret = -ENOMEM;
+		goto tiocm_exit;
+	}
+
+	dtr_msg->preamble = CTRL_MAGIC;
+	dtr_msg->msg_id = CTRL_MSG_ID;
+	dtr_msg->dest_id = mhi_chan->chan;
+	dtr_msg->size = sizeof(u32);
+	if (tiocm & TIOCM_DTR)
+		dtr_msg->msg |= CTRL_MSG_DTR;
+
+	reinit_completion(&dtr_chan->completion);
+	ret = mhi_queue_transfer(mhi_cntrl->dtr_dev, DMA_TO_DEVICE, dtr_msg,
+				 sizeof(*dtr_msg), MHI_EOT);
+	if (ret)
+		goto tiocm_exit;
+
+	ret = wait_for_completion_timeout(&dtr_chan->completion,
+				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret) {
+		MHI_ERR("Failed to receive transfer callback\n");
+		ret = -EIO;
+		goto tiocm_exit;
+	}
+
+	ret = 0;
+	mhi_chan->tiocm = tiocm;
+
+tiocm_exit:
+	kfree(dtr_msg);
+	mutex_unlock(&dtr_chan->mutex);
+
+	return ret;
+}
+
+long mhi_ioctl(struct mhi_device *mhi_dev, unsigned int cmd, unsigned long arg)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan = mhi_dev->ul_chan;
+	int ret;
+
+	/* ioctl not supported by this controller */
+	if (!mhi_cntrl->dtr_dev)
+		return -EIO;
+
+	switch (cmd) {
+	case TIOCMGET:
+		return mhi_chan->tiocm;
+	case TIOCMSET:
+	{
+		u32 tiocm;
+
+		ret = get_user(tiocm, (u32 *)arg);
+		if (ret)
+			return ret;
+
+		return mhi_dtr_tiocmset(mhi_cntrl, mhi_chan, tiocm);
+	}
+	default:
+		break;
+	}
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL(mhi_ioctl);
+
+static void mhi_dtr_xfer_cb(struct mhi_device *mhi_dev,
+			    struct mhi_result *mhi_result)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *dtr_chan = mhi_cntrl->dtr_dev->ul_chan;
+
+	MHI_VERB("Received with status:%d\n", mhi_result->transaction_status);
+	if (!mhi_result->transaction_status)
+		complete(&dtr_chan->completion);
+}
+
+static void mhi_dtr_remove(struct mhi_device *mhi_dev)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+
+	mhi_cntrl->dtr_dev = NULL;
+}
+
+static int mhi_dtr_probe(struct mhi_device *mhi_dev,
+			 const struct mhi_device_id *id)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	int ret;
+
+	MHI_LOG("Enter for DTR control channel\n");
+
+	ret = mhi_prepare_for_transfer(mhi_dev);
+	if (!ret)
+		mhi_cntrl->dtr_dev = mhi_dev;
+
+	MHI_LOG("Exit with ret:%d\n", ret);
+
+	return ret;
+}
+
+static const struct mhi_device_id mhi_dtr_table[] = {
+	{ .chan = "IP_CTRL" },
+	{ NULL },
+};
+
+static struct mhi_driver mhi_dtr_driver = {
+	.id_table = mhi_dtr_table,
+	.remove = mhi_dtr_remove,
+	.probe = mhi_dtr_probe,
+	.ul_xfer_cb = mhi_dtr_xfer_cb,
+	.dl_xfer_cb = mhi_dtr_xfer_cb,
+	.driver = {
+		.name = "MHI_DTR",
+		.owner = THIS_MODULE,
+	}
+};
+
+int __init mhi_dtr_init(void)
+{
+	return mhi_driver_register(&mhi_dtr_driver);
+}
diff --git a/drivers/bus/mhi/core/mhi_init.c b/drivers/bus/mhi/core/mhi_init.c
new file mode 100644
index 0000000..8f103ed
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_init.c
@@ -0,0 +1,1290 @@
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include <linux/mhi.h>
+#include "mhi_internal.h"
+
+const char * const mhi_ee_str[MHI_EE_MAX] = {
+	[MHI_EE_PBL] = "PBL",
+	[MHI_EE_SBL] = "SBL",
+	[MHI_EE_AMSS] = "AMSS",
+	[MHI_EE_BHIE] = "BHIE",
+	[MHI_EE_RDDM] = "RDDM",
+	[MHI_EE_PTHRU] = "PASS THRU",
+	[MHI_EE_EDL] = "EDL",
+	[MHI_EE_DISABLE_TRANSITION] = "DISABLE",
+};
+
+const char * const mhi_state_tran_str[MHI_ST_TRANSITION_MAX] = {
+	[MHI_ST_TRANSITION_PBL] = "PBL",
+	[MHI_ST_TRANSITION_READY] = "READY",
+	[MHI_ST_TRANSITION_SBL] = "SBL",
+	[MHI_ST_TRANSITION_AMSS] = "AMSS",
+	[MHI_ST_TRANSITION_BHIE] = "BHIE",
+};
+
+const char * const mhi_state_str[MHI_STATE_MAX] = {
+	[MHI_STATE_RESET] = "RESET",
+	[MHI_STATE_READY] = "READY",
+	[MHI_STATE_M0] = "M0",
+	[MHI_STATE_M1] = "M1",
+	[MHI_STATE_M2] = "M2",
+	[MHI_STATE_M3] = "M3",
+	[MHI_STATE_BHI] = "BHI",
+	[MHI_STATE_SYS_ERR] = "SYS_ERR",
+};
+
+static const char * const mhi_pm_state_str[] = {
+	"DISABLE",
+	"POR",
+	"M0",
+	"M1",
+	"M1->M2",
+	"M2",
+	"M?->M3",
+	"M3",
+	"M3->M0",
+	"FW DL Error",
+	"SYS_ERR Detect",
+	"SYS_ERR Process",
+	"SHUTDOWN Process",
+	"LD or Error Fatal Detect",
+};
+
+struct mhi_bus mhi_bus;
+
+const char *to_mhi_pm_state_str(enum MHI_PM_STATE state)
+{
+	int index = find_last_bit((unsigned long *)&state, 32);
+
+	if (index >= ARRAY_SIZE(mhi_pm_state_str))
+		return "Invalid State";
+
+	return mhi_pm_state_str[index];
+}
+
+/* MHI protocol requires the transfer ring to be aligned to the ring length */
+static int mhi_alloc_aligned_ring(struct mhi_controller *mhi_cntrl,
+				  struct mhi_ring *ring,
+				  u64 len)
+{
+	/* allocate 2 * len - 1 bytes so a len-aligned window always fits */
+	ring->alloc_size = len + (len - 1);
+	ring->pre_aligned = mhi_alloc_coherent(mhi_cntrl, ring->alloc_size,
+					       &ring->dma_handle, GFP_KERNEL);
+	if (!ring->pre_aligned)
+		return -ENOMEM;
+
+	/*
+	 * align the device-visible base to the ring length (the mask assumes
+	 * a power-of-2 length) and offset the CPU-visible base to match
+	 */
+	ring->iommu_base = (ring->dma_handle + (len - 1)) & ~(len - 1);
+	ring->base = ring->pre_aligned + (ring->iommu_base - ring->dma_handle);
+	return 0;
+}
+
+void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl)
+{
+	int i;
+	struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		free_irq(mhi_cntrl->irq[mhi_event->msi], mhi_event);
+	}
+
+	free_irq(mhi_cntrl->irq[0], mhi_cntrl);
+}
+
+int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl)
+{
+	int i;
+	int ret;
+	struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+
+	/* for BHI INTVEC msi */
+	ret = request_threaded_irq(mhi_cntrl->irq[0], mhi_intvec_handlr,
+				   mhi_intvec_threaded_handlr, IRQF_ONESHOT,
+				   "mhi", mhi_cntrl);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		ret = request_irq(mhi_cntrl->irq[mhi_event->msi],
+				  mhi_msi_handlr, IRQF_SHARED, "mhi",
+				  mhi_event);
+		if (ret) {
+			MHI_ERR("Error requesting irq:%d for ev:%d\n",
+				mhi_cntrl->irq[mhi_event->msi], i);
+			goto error_request;
+		}
+	}
+
+	return 0;
+
+error_request:
+	for (--i, --mhi_event; i >= 0; i--, mhi_event--) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		free_irq(mhi_cntrl->irq[mhi_event->msi], mhi_event);
+	}
+	free_irq(mhi_cntrl->irq[0], mhi_cntrl);
+
+	return ret;
+}
+
+void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl)
+{
+	int i;
+	struct mhi_ctxt *mhi_ctxt = mhi_cntrl->mhi_ctxt;
+	struct mhi_cmd *mhi_cmd;
+	struct mhi_event *mhi_event;
+	struct mhi_ring *ring;
+
+	mhi_cmd = mhi_cntrl->mhi_cmd;
+	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++) {
+		ring = &mhi_cmd->ring;
+		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+				  ring->pre_aligned, ring->dma_handle);
+		ring->base = NULL;
+		ring->iommu_base = 0;
+	}
+
+	mhi_free_coherent(mhi_cntrl,
+			  sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
+			  mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr);
+
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		ring = &mhi_event->ring;
+		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+				  ring->pre_aligned, ring->dma_handle);
+		ring->base = NULL;
+		ring->iommu_base = 0;
+	}
+
+	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) *
+			  mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt,
+			  mhi_ctxt->er_ctxt_addr);
+
+	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) *
+			  mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt,
+			  mhi_ctxt->chan_ctxt_addr);
+
+	kfree(mhi_ctxt);
+	mhi_cntrl->mhi_ctxt = NULL;
+}
+
+static int mhi_init_debugfs_mhi_states_open(struct inode *inode,
+					    struct file *fp)
+{
+	return single_open(fp, mhi_debugfs_mhi_states_show, inode->i_private);
+}
+
+static int mhi_init_debugfs_mhi_event_open(struct inode *inode, struct file *fp)
+{
+	return single_open(fp, mhi_debugfs_mhi_event_show, inode->i_private);
+}
+
+static int mhi_init_debugfs_mhi_chan_open(struct inode *inode, struct file *fp)
+{
+	return single_open(fp, mhi_debugfs_mhi_chan_show, inode->i_private);
+}
+
+static const struct file_operations debugfs_state_ops = {
+	.open = mhi_init_debugfs_mhi_states_open,
+	.release = single_release,
+	.read = seq_read,
+};
+
+static const struct file_operations debugfs_ev_ops = {
+	.open = mhi_init_debugfs_mhi_event_open,
+	.release = single_release,
+	.read = seq_read,
+};
+
+static const struct file_operations debugfs_chan_ops = {
+	.open = mhi_init_debugfs_mhi_chan_open,
+	.release = single_release,
+	.read = seq_read,
+};
+
+DEFINE_SIMPLE_ATTRIBUTE(debugfs_trigger_reset_fops, NULL,
+			mhi_debugfs_trigger_reset, "%llu\n");
+
+void mhi_init_debugfs(struct mhi_controller *mhi_cntrl)
+{
+	struct dentry *dentry;
+	char node[32];
+
+	if (!mhi_cntrl->parent)
+		return;
+
+	snprintf(node, sizeof(node), "%04x_%02u:%02u.%02u",
+		 mhi_cntrl->dev_id, mhi_cntrl->domain, mhi_cntrl->bus,
+		 mhi_cntrl->slot);
+
+	dentry = debugfs_create_dir(node, mhi_cntrl->parent);
+	if (IS_ERR_OR_NULL(dentry))
+		return;
+
+	debugfs_create_file("states", 0444, dentry, mhi_cntrl,
+			    &debugfs_state_ops);
+	debugfs_create_file("events", 0444, dentry, mhi_cntrl,
+			    &debugfs_ev_ops);
+	debugfs_create_file("chan", 0444, dentry, mhi_cntrl, &debugfs_chan_ops);
+	debugfs_create_file("reset", 0444, dentry, mhi_cntrl,
+			    &debugfs_trigger_reset_fops);
+	mhi_cntrl->dentry = dentry;
+}
+
+void mhi_deinit_debugfs(struct mhi_controller *mhi_cntrl)
+{
+	debugfs_remove_recursive(mhi_cntrl->dentry);
+	mhi_cntrl->dentry = NULL;
+}
+
+int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_ctxt *mhi_ctxt;
+	struct mhi_chan_ctxt *chan_ctxt;
+	struct mhi_event_ctxt *er_ctxt;
+	struct mhi_cmd_ctxt *cmd_ctxt;
+	struct mhi_chan *mhi_chan;
+	struct mhi_event *mhi_event;
+	struct mhi_cmd *mhi_cmd;
+	int ret = -ENOMEM, i;
+
+	atomic_set(&mhi_cntrl->dev_wake, 0);
+	atomic_set(&mhi_cntrl->alloc_size, 0);
+
+	mhi_ctxt = kzalloc(sizeof(*mhi_ctxt), GFP_KERNEL);
+	if (!mhi_ctxt)
+		return -ENOMEM;
+
+	/* setup channel ctxt */
+	mhi_ctxt->chan_ctxt = mhi_alloc_coherent(mhi_cntrl,
+			sizeof(*mhi_ctxt->chan_ctxt) * mhi_cntrl->max_chan,
+			&mhi_ctxt->chan_ctxt_addr, GFP_KERNEL);
+	if (!mhi_ctxt->chan_ctxt)
+		goto error_alloc_chan_ctxt;
+
+	mhi_chan = mhi_cntrl->mhi_chan;
+	chan_ctxt = mhi_ctxt->chan_ctxt;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, chan_ctxt++, mhi_chan++) {
+		/* If it's an offload channel, skip this step */
+		if (mhi_chan->offload_ch)
+			continue;
+
+		chan_ctxt->chstate = MHI_CH_STATE_DISABLED;
+		chan_ctxt->brstmode = mhi_chan->db_cfg.brstmode;
+		chan_ctxt->pollcfg = mhi_chan->db_cfg.pollcfg;
+		chan_ctxt->chtype = mhi_chan->dir;
+		chan_ctxt->erindex = mhi_chan->er_index;
+
+		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+		mhi_chan->tre_ring.db_addr = &chan_ctxt->wp;
+	}
+
+	/* setup event context */
+	mhi_ctxt->er_ctxt = mhi_alloc_coherent(mhi_cntrl,
+			sizeof(*mhi_ctxt->er_ctxt) * mhi_cntrl->total_ev_rings,
+			&mhi_ctxt->er_ctxt_addr, GFP_KERNEL);
+	if (!mhi_ctxt->er_ctxt)
+		goto error_alloc_er_ctxt;
+
+	er_ctxt = mhi_ctxt->er_ctxt;
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
+		     mhi_event++) {
+		struct mhi_ring *ring = &mhi_event->ring;
+
+		/* it's a satellite ev, we do not touch it */
+		if (mhi_event->offload_ev)
+			continue;
+
+		er_ctxt->intmodc = 0;
+		er_ctxt->intmodt = mhi_event->intmod;
+		er_ctxt->ertype = MHI_ER_TYPE_VALID;
+		er_ctxt->msivec = mhi_event->msi;
+		mhi_event->db_cfg.db_mode = true;
+
+		ring->el_size = sizeof(struct __packed mhi_tre);
+		ring->len = ring->el_size * ring->elements;
+		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
+		if (ret)
+			goto error_alloc_er;
+
+		ring->rp = ring->wp = ring->base;
+		er_ctxt->rbase = ring->iommu_base;
+		er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase;
+		er_ctxt->rlen = ring->len;
+		ring->ctxt_wp = &er_ctxt->wp;
+	}
+
+	/* setup cmd context */
+	mhi_ctxt->cmd_ctxt = mhi_alloc_coherent(mhi_cntrl,
+				sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
+				&mhi_ctxt->cmd_ctxt_addr, GFP_KERNEL);
+	if (!mhi_ctxt->cmd_ctxt)
+		goto error_alloc_er;
+
+	mhi_cmd = mhi_cntrl->mhi_cmd;
+	cmd_ctxt = mhi_ctxt->cmd_ctxt;
+	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
+		struct mhi_ring *ring = &mhi_cmd->ring;
+
+		ring->el_size = sizeof(struct __packed mhi_tre);
+		ring->elements = CMD_EL_PER_RING;
+		ring->len = ring->el_size * ring->elements;
+		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
+		if (ret)
+			goto error_alloc_cmd;
+
+		ring->rp = ring->wp = ring->base;
+		cmd_ctxt->rbase = ring->iommu_base;
+		cmd_ctxt->rp = cmd_ctxt->wp = cmd_ctxt->rbase;
+		cmd_ctxt->rlen = ring->len;
+		ring->ctxt_wp = &cmd_ctxt->wp;
+	}
+
+	mhi_cntrl->mhi_ctxt = mhi_ctxt;
+
+	return 0;
+
+error_alloc_cmd:
+	for (--i, --mhi_cmd; i >= 0; i--, mhi_cmd--) {
+		struct mhi_ring *ring = &mhi_cmd->ring;
+
+		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+				  ring->pre_aligned, ring->dma_handle);
+	}
+	mhi_free_coherent(mhi_cntrl,
+			  sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
+			  mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr);
+	i = mhi_cntrl->total_ev_rings;
+	mhi_event = mhi_cntrl->mhi_event + i;
+
+error_alloc_er:
+	for (--i, --mhi_event; i >= 0; i--, mhi_event--) {
+		struct mhi_ring *ring = &mhi_event->ring;
+
+		if (mhi_event->offload_ev)
+			continue;
+
+		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+				  ring->pre_aligned, ring->dma_handle);
+	}
+	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) *
+			  mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt,
+			  mhi_ctxt->er_ctxt_addr);
+
+error_alloc_er_ctxt:
+	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) *
+			  mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt,
+			  mhi_ctxt->chan_ctxt_addr);
+
+error_alloc_chan_ctxt:
+	kfree(mhi_ctxt);
+
+	return ret;
+}
+
+int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
+{
+	u32 val;
+	int i, ret;
+	struct mhi_chan *mhi_chan;
+	struct mhi_event *mhi_event;
+	void __iomem *base = mhi_cntrl->regs;
+	struct {
+		u32 offset;
+		u32 mask;
+		u32 shift;
+		u32 val;
+	} reg_info[] = {
+		{
+			CCABAP_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
+		},
+		{
+			CCABAP_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
+		},
+		{
+			ECABAP_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
+		},
+		{
+			ECABAP_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
+		},
+		{
+			CRCBAP_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
+		},
+		{
+			CRCBAP_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
+		},
+		{
+			MHICFG, MHICFG_NER_MASK, MHICFG_NER_SHIFT,
+			mhi_cntrl->total_ev_rings,
+		},
+		{
+			MHICFG, MHICFG_NHWER_MASK, MHICFG_NHWER_SHIFT,
+			mhi_cntrl->hw_ev_rings,
+		},
+		{
+			MHICTRLBASE_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->iova_start),
+		},
+		{
+			MHICTRLBASE_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->iova_start),
+		},
+		{
+			MHIDATABASE_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->iova_start),
+		},
+		{
+			MHIDATABASE_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->iova_start),
+		},
+		{
+			MHICTRLLIMIT_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->iova_stop),
+		},
+		{
+			MHICTRLLIMIT_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->iova_stop),
+		},
+		{
+			MHIDATALIMIT_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->iova_stop),
+		},
+		{
+			MHIDATALIMIT_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->iova_stop),
+		},
+		{ 0, 0, 0 }
+	};
+
+	MHI_LOG("Initializing MMIO\n");
+
+	/* set up DB register for all the chan rings */
+	ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK,
+				 CHDBOFF_CHDBOFF_SHIFT, &val);
+	if (ret)
+		return -EIO;
+
+	MHI_LOG("CHDBOFF:0x%x\n", val);
+
+	/* setup wake db */
+	mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
+	mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 4, 0);
+	mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 0, 0);
+	mhi_cntrl->wake_set = false;
+
+	/* setup channel db addresses */
+	mhi_chan = mhi_cntrl->mhi_chan;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, val += 8, mhi_chan++)
+		mhi_chan->tre_ring.db_addr = base + val;
+
+	/* setup event ring db addresses */
+	ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK,
+				 ERDBOFF_ERDBOFF_SHIFT, &val);
+	if (ret)
+		return -EIO;
+
+	MHI_LOG("ERDBOFF:0x%x\n", val);
+
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, val += 8, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		mhi_event->ring.db_addr = base + val;
+	}
+
+	/* set up DB register for primary CMD rings */
+	mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING].ring.db_addr = base + CRDB_LOWER;
+
+	MHI_LOG("Programming all MMIO values.\n");
+	for (i = 0; reg_info[i].offset; i++)
+		mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
+				    reg_info[i].mask, reg_info[i].shift,
+				    reg_info[i].val);
+
+	return 0;
+}
+
+void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+			  struct mhi_chan *mhi_chan)
+{
+	struct mhi_ring *buf_ring;
+	struct mhi_ring *tre_ring;
+	struct mhi_chan_ctxt *chan_ctxt;
+
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+	chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
+
+	mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
+			  tre_ring->pre_aligned, tre_ring->dma_handle);
+	kfree(buf_ring->base);
+
+	buf_ring->base = tre_ring->base = NULL;
+	chan_ctxt->rbase = 0;
+}
+
+int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+		       struct mhi_chan *mhi_chan)
+{
+	struct mhi_ring *buf_ring;
+	struct mhi_ring *tre_ring;
+	struct mhi_chan_ctxt *chan_ctxt;
+	int ret;
+
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+	tre_ring->el_size = sizeof(struct __packed mhi_tre);
+	tre_ring->len = tre_ring->el_size * tre_ring->elements;
+	chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
+	ret = mhi_alloc_aligned_ring(mhi_cntrl, tre_ring, tre_ring->len);
+	if (ret)
+		return -ENOMEM;
+
+	buf_ring->el_size = sizeof(struct mhi_buf_info);
+	buf_ring->len = buf_ring->el_size * buf_ring->elements;
+	buf_ring->base = kzalloc(buf_ring->len, GFP_KERNEL);
+
+	if (!buf_ring->base) {
+		mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
+				  tre_ring->pre_aligned, tre_ring->dma_handle);
+		return -ENOMEM;
+	}
+
+	chan_ctxt->chstate = MHI_CH_STATE_ENABLED;
+	chan_ctxt->rbase = tre_ring->iommu_base;
+	chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase;
+	chan_ctxt->rlen = tre_ring->len;
+	tre_ring->ctxt_wp = &chan_ctxt->wp;
+
+	tre_ring->rp = tre_ring->wp = tre_ring->base;
+	buf_ring->rp = buf_ring->wp = buf_ring->base;
+	mhi_chan->db_cfg.db_mode = 1;
+
+	/* make the channel context updates visible to all cores */
+	smp_wmb();
+
+	return 0;
+}
+
+int mhi_device_configure(struct mhi_device *mhi_dev,
+			 enum dma_data_direction dir,
+			 struct mhi_buf *cfg_tbl,
+			 int elements)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan;
+	struct mhi_event_ctxt *er_ctxt;
+	struct mhi_chan_ctxt *ch_ctxt;
+	int er_index, chan;
+
+	mhi_chan = (dir == DMA_TO_DEVICE) ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+	er_index = mhi_chan->er_index;
+	chan = mhi_chan->chan;
+
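+	/*
+	 * each table entry is matched by name: "ECA" overwrites the event
+	 * context array entry, "CCA" the channel context array entry
+	 */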
+	for (; elements > 0; elements--, cfg_tbl++) {
+		/* update event context array */
+		if (!strcmp(cfg_tbl->name, "ECA")) {
+			er_ctxt = &mhi_cntrl->mhi_ctxt->er_ctxt[er_index];
+			if (sizeof(*er_ctxt) != cfg_tbl->len) {
+				MHI_ERR(
+					"Invalid ECA size, expected:%zu actual:%zu\n",
+					sizeof(*er_ctxt), cfg_tbl->len);
+				return -EINVAL;
+			}
+			memcpy((void *)er_ctxt, cfg_tbl->buf, sizeof(*er_ctxt));
+			continue;
+		}
+
+		/* update channel context array */
+		if (!strcmp(cfg_tbl->name, "CCA")) {
+			ch_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[chan];
+			if (cfg_tbl->len != sizeof(*ch_ctxt)) {
+				MHI_ERR(
+					"Invalid CCA size, expected:%zu actual:%zu\n",
+					sizeof(*ch_ctxt), cfg_tbl->len);
+				return -EINVAL;
+			}
+			memcpy((void *)ch_ctxt, cfg_tbl->buf, sizeof(*ch_ctxt));
+			continue;
+		}
+
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+#if defined(CONFIG_OF)
+static int of_parse_ev_cfg(struct mhi_controller *mhi_cntrl,
+			   struct device_node *of_node)
+{
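+	/*
+	 * wrap the raw u32 tuple in a struct so
+	 * of_property_count_elems_of_size() counts whole "mhi,ev-cfg"
+	 * entries rather than individual u32 cells
+	 */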
+	struct {
+		u32 ev_cfg[MHI_EV_CFG_MAX];
+	} *ev_cfg;
+	int num, i, ret;
+	struct mhi_event *mhi_event;
+	u32 bit_cfg;
+
+	num = of_property_count_elems_of_size(of_node, "mhi,ev-cfg",
+					      sizeof(*ev_cfg));
+	if (num <= 0)
+		return -EINVAL;
+
+	ev_cfg = kcalloc(num, sizeof(*ev_cfg), GFP_KERNEL);
+	if (!ev_cfg)
+		return -ENOMEM;
+
+	mhi_cntrl->total_ev_rings = num;
+	mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event),
+				       GFP_KERNEL);
+	if (!mhi_cntrl->mhi_event) {
+		kfree(ev_cfg);
+		return -ENOMEM;
+	}
+
+	ret = of_property_read_u32_array(of_node, "mhi,ev-cfg", (u32 *)ev_cfg,
+					 num * sizeof(*ev_cfg) / sizeof(u32));
+	if (ret)
+		goto error_ev_cfg;
+
+	/* populate ev ring */
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		mhi_event->er_index = i;
+		mhi_event->ring.elements =
+			ev_cfg[i].ev_cfg[MHI_EV_CFG_ELEMENTS];
+		mhi_event->intmod = ev_cfg[i].ev_cfg[MHI_EV_CFG_INTMOD];
+		mhi_event->msi = ev_cfg[i].ev_cfg[MHI_EV_CFG_MSI];
+		mhi_event->chan = ev_cfg[i].ev_cfg[MHI_EV_CFG_CHAN];
+		if (mhi_event->chan >= mhi_cntrl->max_chan)
+			goto error_ev_cfg;
+
+		/* this event ring has a dedicated channel */
+		if (mhi_event->chan)
+			mhi_event->mhi_chan =
+				&mhi_cntrl->mhi_chan[mhi_event->chan];
+
+		mhi_event->priority = ev_cfg[i].ev_cfg[MHI_EV_CFG_PRIORITY];
+		mhi_event->db_cfg.brstmode =
+			ev_cfg[i].ev_cfg[MHI_EV_CFG_BRSTMODE];
+		if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
+			goto error_ev_cfg;
+
+		mhi_event->db_cfg.process_db =
+			(mhi_event->db_cfg.brstmode == MHI_BRSTMODE_ENABLE) ?
+			mhi_db_brstmode : mhi_db_brstmode_disable;
+
+		bit_cfg = ev_cfg[i].ev_cfg[MHI_EV_CFG_BITCFG];
+		if (bit_cfg & MHI_EV_CFG_BIT_HW_EV) {
+			mhi_event->hw_ring = true;
+			mhi_cntrl->hw_ev_rings++;
+		} else {
+			mhi_cntrl->sw_ev_rings++;
+		}
+
+		mhi_event->cl_manage = !!(bit_cfg & MHI_EV_CFG_BIT_CL_MANAGE);
+		mhi_event->offload_ev = !!(bit_cfg & MHI_EV_CFG_BIT_OFFLOAD_EV);
+		mhi_event->ctrl_ev = !!(bit_cfg & MHI_EV_CFG_BIT_CTRL_EV);
+	}
+
+	kfree(ev_cfg);
+
+	/* we need one MSI per event ring plus an additional one for BHI */
+	mhi_cntrl->msi_required = mhi_cntrl->total_ev_rings + 1;
+
+	return 0;
+
+ error_ev_cfg:
+	kfree(ev_cfg);
+	kfree(mhi_cntrl->mhi_event);
+	return -EINVAL;
+}
+
+static int of_parse_ch_cfg(struct mhi_controller *mhi_cntrl,
+			   struct device_node *of_node)
+{
+	int num, i, ret;
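+	/* wrapper struct so we count whole tuples, as in of_parse_ev_cfg() */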
+	struct {
+		u32 chan_cfg[MHI_CH_CFG_MAX];
+	} *chan_cfg;
+
+	ret = of_property_read_u32(of_node, "mhi,max-channels",
+				   &mhi_cntrl->max_chan);
+	if (ret)
+		return ret;
+	num = of_property_count_elems_of_size(of_node, "mhi,chan-cfg",
+					      sizeof(*chan_cfg));
+	if (num <= 0 || num >= mhi_cntrl->max_chan)
+		return -EINVAL;
+	if (of_property_count_strings(of_node, "mhi,chan-names") != num)
+		return -EINVAL;
+
+	mhi_cntrl->mhi_chan = kcalloc(mhi_cntrl->max_chan,
+				      sizeof(*mhi_cntrl->mhi_chan), GFP_KERNEL);
+	if (!mhi_cntrl->mhi_chan)
+		return -ENOMEM;
+	chan_cfg = kcalloc(num, sizeof(*chan_cfg), GFP_KERNEL);
+	if (!chan_cfg) {
+		kfree(mhi_cntrl->mhi_chan);
+		return -ENOMEM;
+	}
+
+	ret = of_property_read_u32_array(of_node, "mhi,chan-cfg",
+					 (u32 *)chan_cfg,
+					 num * sizeof(*chan_cfg) / sizeof(u32));
+	if (ret)
+		goto error_chan_cfg;
+
+	INIT_LIST_HEAD(&mhi_cntrl->lpm_chans);
+
+	/* populate channel configurations */
+	for (i = 0; i < num; i++) {
+		struct mhi_chan *mhi_chan;
+		int chan = chan_cfg[i].chan_cfg[MHI_CH_CFG_CHAN_ID];
+		u32 bit_cfg = chan_cfg[i].chan_cfg[MHI_CH_CFG_BITCFG];
+
+		if (chan >= mhi_cntrl->max_chan)
+			goto error_chan_cfg;
+
+		mhi_chan = &mhi_cntrl->mhi_chan[chan];
+		mhi_chan->chan = chan;
+		mhi_chan->buf_ring.elements =
+			chan_cfg[i].chan_cfg[MHI_CH_CFG_ELEMENTS];
+		mhi_chan->tre_ring.elements = mhi_chan->buf_ring.elements;
+		mhi_chan->er_index = chan_cfg[i].chan_cfg[MHI_CH_CFG_ER_INDEX];
+		mhi_chan->dir = chan_cfg[i].chan_cfg[MHI_CH_CFG_DIRECTION];
+
+		mhi_chan->db_cfg.pollcfg =
+			chan_cfg[i].chan_cfg[MHI_CH_CFG_POLLCFG];
+		mhi_chan->ee = chan_cfg[i].chan_cfg[MHI_CH_CFG_EE];
+		if (mhi_chan->ee >= MHI_EE_MAX_SUPPORTED)
+			goto error_chan_cfg;
+
+		mhi_chan->xfer_type =
+			chan_cfg[i].chan_cfg[MHI_CH_CFG_XFER_TYPE];
+
+		switch (mhi_chan->xfer_type) {
+		case MHI_XFER_BUFFER:
+			mhi_chan->gen_tre = mhi_gen_tre;
+			mhi_chan->queue_xfer = mhi_queue_buf;
+			break;
+		case MHI_XFER_SKB:
+			mhi_chan->queue_xfer = mhi_queue_skb;
+			break;
+		case MHI_XFER_SCLIST:
+			mhi_chan->gen_tre = mhi_gen_tre;
+			mhi_chan->queue_xfer = mhi_queue_sclist;
+			break;
+		case MHI_XFER_NOP:
+			mhi_chan->queue_xfer = mhi_queue_nop;
+			break;
+		default:
+			goto error_chan_cfg;
+		}
+
+		mhi_chan->lpm_notify = !!(bit_cfg & MHI_CH_CFG_BIT_LPM_NOTIFY);
+		mhi_chan->offload_ch = !!(bit_cfg & MHI_CH_CFG_BIT_OFFLOAD_CH);
+		mhi_chan->db_cfg.reset_req =
+			!!(bit_cfg & MHI_CH_CFG_BIT_DBMODE_RESET_CH);
+		mhi_chan->pre_alloc = !!(bit_cfg & MHI_CH_CFG_BIT_PRE_ALLOC);
+
+		if (mhi_chan->pre_alloc &&
+		    (mhi_chan->dir != DMA_FROM_DEVICE ||
+		     mhi_chan->xfer_type != MHI_XFER_BUFFER))
+			goto error_chan_cfg;
+
+		/* if MHI host allocates the buffers, the client cannot queue */
+		if (mhi_chan->pre_alloc)
+			mhi_chan->queue_xfer = mhi_queue_nop;
+
+		ret = of_property_read_string_index(of_node, "mhi,chan-names",
+						    i, &mhi_chan->name);
+		if (ret)
+			goto error_chan_cfg;
+
+		if (!mhi_chan->offload_ch) {
+			mhi_chan->db_cfg.brstmode =
+				chan_cfg[i].chan_cfg[MHI_CH_CFG_BRSTMODE];
+			if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode))
+				goto error_chan_cfg;
+
+			mhi_chan->db_cfg.process_db =
+				(mhi_chan->db_cfg.brstmode ==
+				 MHI_BRSTMODE_ENABLE) ?
+				mhi_db_brstmode : mhi_db_brstmode_disable;
+		}
+		mhi_chan->configured = true;
+
+		if (mhi_chan->lpm_notify)
+			list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans);
+	}
+
+	kfree(chan_cfg);
+
+	return 0;
+
+error_chan_cfg:
+	kfree(mhi_cntrl->mhi_chan);
+	kfree(chan_cfg);
+
+	return -EINVAL;
+}
+
+static int of_parse_dt(struct mhi_controller *mhi_cntrl,
+		       struct device_node *of_node)
+{
+	int ret;
+
+	/* parse firmware image info (optional parameters) */
+	of_property_read_string(of_node, "mhi,fw-name", &mhi_cntrl->fw_image);
+	of_property_read_string(of_node, "mhi,edl-name",
+				&mhi_cntrl->edl_image);
+	mhi_cntrl->fbc_download = of_property_read_bool(of_node, "mhi,dl-fbc");
+	of_property_read_u32(of_node, "mhi,sbl-size",
+			     (u32 *)&mhi_cntrl->sbl_size);
+	of_property_read_u32(of_node, "mhi,seg-len",
+			     (u32 *)&mhi_cntrl->seg_len);
+
+	/* parse MHI channel configuration */
+	ret = of_parse_ch_cfg(mhi_cntrl, of_node);
+	if (ret)
+		return ret;
+
+	/* parse MHI event configuration */
+	ret = of_parse_ev_cfg(mhi_cntrl, of_node);
+	if (ret)
+		goto error_ev_cfg;
+
+	ret = of_property_read_u32(of_node, "mhi,timeout",
+				   &mhi_cntrl->timeout_ms);
+	if (ret)
+		mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
+
+	return 0;
+
+ error_ev_cfg:
+	kfree(mhi_cntrl->mhi_chan);
+
+	return ret;
+}
+#else
+static int of_parse_dt(struct mhi_controller *mhi_cntrl,
+		       struct device_node *of_node)
+{
+	return -EINVAL;
+}
+#endif
+
+int of_register_mhi_controller(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+	int i;
+	struct mhi_event *mhi_event;
+	struct mhi_chan *mhi_chan;
+	struct mhi_cmd *mhi_cmd;
+
+	if (!mhi_cntrl->of_node)
+		return -EINVAL;
+
+	if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
+		return -EINVAL;
+
+	if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
+		return -EINVAL;
+
+	ret = of_parse_dt(mhi_cntrl, mhi_cntrl->of_node);
+	if (ret)
+		return -EINVAL;
+
+	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
+				     sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
+	if (!mhi_cntrl->mhi_cmd)
+		goto error_alloc_cmd;
+
+	INIT_LIST_HEAD(&mhi_cntrl->transition_list);
+	mutex_init(&mhi_cntrl->pm_mutex);
+	rwlock_init(&mhi_cntrl->pm_lock);
+	spin_lock_init(&mhi_cntrl->transition_lock);
+	spin_lock_init(&mhi_cntrl->wlock);
+	INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker);
+	INIT_WORK(&mhi_cntrl->fw_worker, mhi_fw_load_worker);
+	INIT_WORK(&mhi_cntrl->m1_worker, mhi_pm_m1_worker);
+	INIT_WORK(&mhi_cntrl->syserr_worker, mhi_pm_sys_err_worker);
+	init_waitqueue_head(&mhi_cntrl->state_event);
+
+	mhi_cmd = mhi_cntrl->mhi_cmd;
+	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
+		spin_lock_init(&mhi_cmd->lock);
+
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		mhi_event->mhi_cntrl = mhi_cntrl;
+		spin_lock_init(&mhi_event->lock);
+		if (mhi_event->ctrl_ev)
+			tasklet_init(&mhi_event->task, mhi_ctrl_ev_task,
+				     (ulong)mhi_event);
+		else
+			tasklet_init(&mhi_event->task, mhi_ev_task,
+				     (ulong)mhi_event);
+	}
+
+	mhi_chan = mhi_cntrl->mhi_chan;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+		mutex_init(&mhi_chan->mutex);
+		init_completion(&mhi_chan->completion);
+		rwlock_init(&mhi_chan->lock);
+	}
+
+	mhi_cntrl->parent = mhi_bus.dentry;
+	mhi_cntrl->klog_lvl = MHI_MSG_LVL_ERROR;
+
+	/* add to list */
+	mutex_lock(&mhi_bus.lock);
+	list_add_tail(&mhi_cntrl->node, &mhi_bus.controller_list);
+	mutex_unlock(&mhi_bus.lock);
+
+	return 0;
+
+error_alloc_cmd:
+	kfree(mhi_cntrl->mhi_chan);
+	kfree(mhi_cntrl->mhi_event);
+
+	return -ENOMEM;
+}
+EXPORT_SYMBOL(of_register_mhi_controller);
+
+void mhi_unregister_mhi_controller(struct mhi_controller *mhi_cntrl)
+{
+	/* remove from the controller list before freeing members */
+	mutex_lock(&mhi_bus.lock);
+	list_del(&mhi_cntrl->node);
+	mutex_unlock(&mhi_bus.lock);
+
+	kfree(mhi_cntrl->mhi_cmd);
+	kfree(mhi_cntrl->mhi_event);
+	kfree(mhi_cntrl->mhi_chan);
+}
+
+/* set ptr to controller private data */
+static inline void mhi_controller_set_devdata(struct mhi_controller *mhi_cntrl,
+					      void *priv)
+{
+	mhi_cntrl->priv_data = priv;
+}
+
+/* allocate mhi controller to register */
+struct mhi_controller *mhi_alloc_controller(size_t size)
+{
+	struct mhi_controller *mhi_cntrl;
+
+	mhi_cntrl = kzalloc(size + sizeof(*mhi_cntrl), GFP_KERNEL);
+
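+	/*
+	 * if extra space was requested, the caller's private data is
+	 * carved out of the same allocation, right after the controller
+	 */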
+	if (mhi_cntrl && size)
+		mhi_controller_set_devdata(mhi_cntrl, mhi_cntrl + 1);
+
+	return mhi_cntrl;
+}
+EXPORT_SYMBOL(mhi_alloc_controller);
+
+int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+
+	ret = mhi_init_dev_ctxt(mhi_cntrl);
+	if (ret) {
+		MHI_ERR("Error with init dev_ctxt\n");
+		goto error_dev_ctxt;
+	}
+
+	ret = mhi_init_irq_setup(mhi_cntrl);
+	if (ret) {
+		MHI_ERR("Error setting up irq\n");
+		goto error_setup_irq;
+	}
+
+	/*
+	 * allocate rddm table if specified; this table is for debugging
+	 * purposes, so we'll ignore errors
+	 */
+	if (mhi_cntrl->rddm_size)
+		mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->rddm_image,
+				     mhi_cntrl->rddm_size);
+
+	mhi_cntrl->pre_init = true;
+
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	return 0;
+
+error_setup_irq:
+	mhi_deinit_dev_ctxt(mhi_cntrl);
+
+error_dev_ctxt:
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	return ret;
+}
+EXPORT_SYMBOL(mhi_prepare_for_power_up);
+
+void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl)
+{
+	if (mhi_cntrl->fbc_image) {
+		mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+		mhi_cntrl->fbc_image = NULL;
+	}
+
+	if (mhi_cntrl->rddm_image) {
+		mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->rddm_image);
+		mhi_cntrl->rddm_image = NULL;
+	}
+
+	mhi_deinit_free_irq(mhi_cntrl);
+	mhi_deinit_dev_ctxt(mhi_cntrl);
+	mhi_cntrl->pre_init = false;
+}
+
+/* match dev to drv */
+static int mhi_match(struct device *dev, struct device_driver *drv)
+{
+	struct mhi_device *mhi_dev = to_mhi_device(dev);
+	struct mhi_driver *mhi_drv = to_mhi_driver(drv);
+	const struct mhi_device_id *id;
+
+	for (id = mhi_drv->id_table; id->chan; id++)
+		if (!strcmp(mhi_dev->chan_name, id->chan)) {
+			mhi_dev->id = id;
+			return 1;
+		}
+
+	return 0;
+}
+
+static void mhi_release_device(struct device *dev)
+{
+	struct mhi_device *mhi_dev = to_mhi_device(dev);
+
+	kfree(mhi_dev);
+}
+
+struct bus_type mhi_bus_type = {
+	.name = "mhi",
+	.dev_name = "mhi",
+	.match = mhi_match,
+};
+
+static int mhi_driver_probe(struct device *dev)
+{
+	struct mhi_device *mhi_dev = to_mhi_device(dev);
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct device_driver *drv = dev->driver;
+	struct mhi_driver *mhi_drv = to_mhi_driver(drv);
+	struct mhi_event *mhi_event;
+	struct mhi_chan *ul_chan = mhi_dev->ul_chan;
+	struct mhi_chan *dl_chan = mhi_dev->dl_chan;
+	bool offload_ch = ((ul_chan && ul_chan->offload_ch) ||
+			   (dl_chan && dl_chan->offload_ch));
+
+	/* all offload channels require status_cb to be defined */
+	if (offload_ch) {
+		if (!mhi_drv->status_cb)
+			return -EINVAL;
+		mhi_dev->status_cb = mhi_drv->status_cb;
+	}
+
+	if (ul_chan && !offload_ch) {
+		if (!mhi_drv->ul_xfer_cb)
+			return -EINVAL;
+		ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
+	}
+
+	if (dl_chan && !offload_ch) {
+		if (!mhi_drv->dl_xfer_cb)
+			return -EINVAL;
+		dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
+		mhi_event = &mhi_cntrl->mhi_event[dl_chan->er_index];
+
+		/*
+		 * if this channel's event ring is managed by the client,
+		 * then status_cb must be defined so we can send an async
+		 * callback whenever there is pending data
+		 */
+		if (mhi_event->cl_manage && !mhi_drv->status_cb)
+			return -EINVAL;
+		mhi_dev->status_cb = mhi_drv->status_cb;
+	}
+
+	return mhi_drv->probe(mhi_dev, mhi_dev->id);
+}
+
+static int mhi_driver_remove(struct device *dev)
+{
+	struct mhi_device *mhi_dev = to_mhi_device(dev);
+	struct mhi_driver *mhi_drv = to_mhi_driver(dev->driver);
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan;
+	enum MHI_CH_STATE ch_state[] = {
+		MHI_CH_STATE_DISABLED,
+		MHI_CH_STATE_DISABLED
+	};
+	int dir;
+
+	MHI_LOG("Removing device for chan:%s\n", mhi_dev->chan_name);
+
+	/* reset both channels */
+	for (dir = 0; dir < 2; dir++) {
+		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		/* wake all threads waiting for completion */
+		write_lock_irq(&mhi_chan->lock);
+		mhi_chan->ccs = MHI_EV_CC_INVALID;
+		complete_all(&mhi_chan->completion);
+		write_unlock_irq(&mhi_chan->lock);
+
+		/* move channel state to disabled, no more processing */
+		mutex_lock(&mhi_chan->mutex);
+		write_lock_irq(&mhi_chan->lock);
+		ch_state[dir] = mhi_chan->ch_state;
+		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+		write_unlock_irq(&mhi_chan->lock);
+
+		/* reset the channel */
+		if (!mhi_chan->offload_ch)
+			mhi_reset_chan(mhi_cntrl, mhi_chan);
+	}
+
+	/* let the client driver clean up (channel mutexes still held) */
+	mhi_drv->remove(mhi_dev);
+
+	/* de-init channels that were enabled */
+	for (dir = 0; dir < 2; dir++) {
+		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		if (ch_state[dir] == MHI_CH_STATE_ENABLED &&
+		    !mhi_chan->offload_ch)
+			mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+
+		mutex_unlock(&mhi_chan->mutex);
+	}
+
+	/* relinquish any pending votes */
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	while (atomic_read(&mhi_dev->dev_wake))
+		mhi_device_put(mhi_dev);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return 0;
+}
+
+int mhi_driver_register(struct mhi_driver *mhi_drv)
+{
+	struct device_driver *driver = &mhi_drv->driver;
+
+	if (!mhi_drv->probe || !mhi_drv->remove)
+		return -EINVAL;
+
+	driver->bus = &mhi_bus_type;
+	driver->probe = mhi_driver_probe;
+	driver->remove = mhi_driver_remove;
+	return driver_register(driver);
+}
+EXPORT_SYMBOL(mhi_driver_register);
+
+void mhi_driver_unregister(struct mhi_driver *mhi_drv)
+{
+	driver_unregister(&mhi_drv->driver);
+}
+EXPORT_SYMBOL(mhi_driver_unregister);
+
+struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_device *mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
+	struct device *dev;
+
+	if (!mhi_dev)
+		return NULL;
+
+	dev = &mhi_dev->dev;
+	device_initialize(dev);
+	dev->bus = &mhi_bus_type;
+	dev->release = mhi_release_device;
+	dev->parent = mhi_cntrl->dev;
+	mhi_dev->mhi_cntrl = mhi_cntrl;
+	mhi_dev->dev_id = mhi_cntrl->dev_id;
+	mhi_dev->domain = mhi_cntrl->domain;
+	mhi_dev->bus = mhi_cntrl->bus;
+	mhi_dev->slot = mhi_cntrl->slot;
+	mhi_dev->mtu = MHI_MAX_MTU;
+	atomic_set(&mhi_dev->dev_wake, 0);
+
+	return mhi_dev;
+}
+
+static int __init mhi_init(void)
+{
+	struct dentry *dentry;
+	int ret;
+
+	mutex_init(&mhi_bus.lock);
+	INIT_LIST_HEAD(&mhi_bus.controller_list);
+	dentry = debugfs_create_dir("mhi", NULL);
+	if (!IS_ERR_OR_NULL(dentry))
+		mhi_bus.dentry = dentry;
+
+	ret = bus_register(&mhi_bus_type);
+
+	if (!ret)
+		mhi_dtr_init();
+	return ret;
+}
+postcore_initcall(mhi_init);
+
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("MHI_CORE");
+MODULE_DESCRIPTION("MHI Host Interface");
diff --git a/drivers/bus/mhi/core/mhi_internal.h b/drivers/bus/mhi/core/mhi_internal.h
new file mode 100644
index 0000000..7adc7963
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_internal.h
@@ -0,0 +1,732 @@
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef _MHI_INT_H
+#define _MHI_INT_H
+
+extern struct bus_type mhi_bus_type;
+
+/* MHI mmio register mapping */
+#define PCI_INVALID_READ(val) ((val) == U32_MAX)
+
+#define MHIREGLEN (0x0)
+#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
+#define MHIREGLEN_MHIREGLEN_SHIFT (0)
+
+#define MHIVER (0x8)
+#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
+#define MHIVER_MHIVER_SHIFT (0)
+
+#define MHICFG (0x10)
+#define MHICFG_NHWER_MASK (0xFF000000)
+#define MHICFG_NHWER_SHIFT (24)
+#define MHICFG_NER_MASK (0xFF0000)
+#define MHICFG_NER_SHIFT (16)
+#define MHICFG_NHWCH_MASK (0xFF00)
+#define MHICFG_NHWCH_SHIFT (8)
+#define MHICFG_NCH_MASK (0xFF)
+#define MHICFG_NCH_SHIFT (0)
+
+#define CHDBOFF (0x18)
+#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
+#define CHDBOFF_CHDBOFF_SHIFT (0)
+
+#define ERDBOFF (0x20)
+#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
+#define ERDBOFF_ERDBOFF_SHIFT (0)
+
+#define BHIOFF (0x28)
+#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
+#define BHIOFF_BHIOFF_SHIFT (0)
+
+#define DEBUGOFF (0x30)
+#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
+#define DEBUGOFF_DEBUGOFF_SHIFT (0)
+
+#define MHICTRL (0x38)
+#define MHICTRL_MHISTATE_MASK (0x0000FF00)
+#define MHICTRL_MHISTATE_SHIFT (8)
+#define MHICTRL_RESET_MASK (0x2)
+#define MHICTRL_RESET_SHIFT (1)
+
+#define MHISTATUS (0x48)
+#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
+#define MHISTATUS_MHISTATE_SHIFT (8)
+#define MHISTATUS_SYSERR_MASK (0x4)
+#define MHISTATUS_SYSERR_SHIFT (2)
+#define MHISTATUS_READY_MASK (0x1)
+#define MHISTATUS_READY_SHIFT (0)
+
+#define CCABAP_LOWER (0x58)
+#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
+#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
+
+#define CCABAP_HIGHER (0x5C)
+#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
+#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
+
+#define ECABAP_LOWER (0x60)
+#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
+#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
+
+#define ECABAP_HIGHER (0x64)
+#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
+#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
+
+#define CRCBAP_LOWER (0x68)
+#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
+#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
+
+#define CRCBAP_HIGHER (0x6C)
+#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
+#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
+
+#define CRDB_LOWER (0x70)
+#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
+#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
+
+#define CRDB_HIGHER (0x74)
+#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
+#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
+
+#define MHICTRLBASE_LOWER (0x80)
+#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
+#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
+
+#define MHICTRLBASE_HIGHER (0x84)
+#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
+#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
+
+#define MHICTRLLIMIT_LOWER (0x88)
+#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
+#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
+
+#define MHICTRLLIMIT_HIGHER (0x8C)
+#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
+#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
+
+#define MHIDATABASE_LOWER (0x98)
+#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
+#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
+
+#define MHIDATABASE_HIGHER (0x9C)
+#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
+#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
+
+#define MHIDATALIMIT_LOWER (0xA0)
+#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
+#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
+
+#define MHIDATALIMIT_HIGHER (0xA4)
+#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
+#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
+
+/* MHI BHI offsets */
+#define BHI_BHIVERSION_MINOR (0x00)
+#define BHI_BHIVERSION_MAJOR (0x04)
+#define BHI_IMGADDR_LOW (0x08)
+#define BHI_IMGADDR_HIGH (0x0C)
+#define BHI_IMGSIZE (0x10)
+#define BHI_RSVD1 (0x14)
+#define BHI_IMGTXDB (0x18)
+#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHI_TXDB_SEQNUM_SHFT (0)
+#define BHI_RSVD2 (0x1C)
+#define BHI_INTVEC (0x20)
+#define BHI_RSVD3 (0x24)
+#define BHI_EXECENV (0x28)
+#define BHI_STATUS (0x2C)
+#define BHI_ERRCODE (0x30)
+#define BHI_ERRDBG1 (0x34)
+#define BHI_ERRDBG2 (0x38)
+#define BHI_ERRDBG3 (0x3C)
+#define BHI_SERIALNU (0x40)
+#define BHI_SBLANTIROLLVER (0x44)
+#define BHI_NUMSEG (0x48)
+#define BHI_MSMHWID(n) (0x4C + (0x4 * n))
+#define BHI_OEMPKHASH(n) (0x64 + (0x4 * n))
+#define BHI_RSVD5 (0xC4)
+#define BHI_STATUS_MASK (0xC0000000)
+#define BHI_STATUS_SHIFT (30)
+#define BHI_STATUS_ERROR (3)
+#define BHI_STATUS_SUCCESS (2)
+#define BHI_STATUS_RESET (0)
+
+/* MHI BHIE offsets */
+#define BHIE_OFFSET (0x0124) /* BHIE register space offset from BHI base */
+#define BHIE_MSMSOCID_OFFS (BHIE_OFFSET + 0x0000)
+#define BHIE_TXVECADDR_LOW_OFFS (BHIE_OFFSET + 0x002C)
+#define BHIE_TXVECADDR_HIGH_OFFS (BHIE_OFFSET + 0x0030)
+#define BHIE_TXVECSIZE_OFFS (BHIE_OFFSET + 0x0034)
+#define BHIE_TXVECDB_OFFS (BHIE_OFFSET + 0x003C)
+#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_TXVECDB_SEQNUM_SHFT (0)
+#define BHIE_TXVECSTATUS_OFFS (BHIE_OFFSET + 0x0044)
+#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
+#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
+#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
+#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
+#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
+#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
+#define BHIE_RXVECADDR_LOW_OFFS (BHIE_OFFSET + 0x0060)
+#define BHIE_RXVECADDR_HIGH_OFFS (BHIE_OFFSET + 0x0064)
+#define BHIE_RXVECSIZE_OFFS (BHIE_OFFSET + 0x0068)
+#define BHIE_RXVECDB_OFFS (BHIE_OFFSET + 0x0070)
+#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_RXVECDB_SEQNUM_SHFT (0)
+#define BHIE_RXVECSTATUS_OFFS (BHIE_OFFSET + 0x0078)
+#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
+#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
+#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
+#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
+#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
+#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
+
+struct __packed mhi_event_ctxt {
+	u32 reserved : 8;
+	u32 intmodc : 8;
+	u32 intmodt : 16;
+	u32 ertype;
+	u32 msivec;
+	u64 rbase;
+	u64 rlen;
+	u64 rp;
+	u64 wp;
+};
+
+struct __packed mhi_chan_ctxt {
+	u32 chstate : 8;
+	u32 brstmode : 2;
+	u32 pollcfg : 6;
+	u32 reserved : 16;
+	u32 chtype;
+	u32 erindex;
+	u64 rbase;
+	u64 rlen;
+	u64 rp;
+	u64 wp;
+};
+
+struct __packed mhi_cmd_ctxt {
+	u32 reserved0;
+	u32 reserved1;
+	u32 reserved2;
+	u64 rbase;
+	u64 rlen;
+	u64 rp;
+	u64 wp;
+};
+
+struct __packed mhi_tre {
+	u64 ptr;
+	u32 dword[2];
+};
+
+struct __packed bhi_vec_entry {
+	u64 dma_addr;
+	u64 size;
+};
+
+/* no operation command */
+#define MHI_TRE_CMD_NOOP_PTR (0)
+#define MHI_TRE_CMD_NOOP_DWORD0 (0)
+#define MHI_TRE_CMD_NOOP_DWORD1 (1 << 16)
+
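+/*
+ * in the DWORD1 encodings below, bits 31:24 carry the channel id and
+ * bits 23:16 the MHI command packet type (16 = reset, 17 = stop,
+ * 18 = start; see enum MHI_PKT_TYPE)
+ */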
+/* channel reset command */
+#define MHI_TRE_CMD_RESET_PTR (0)
+#define MHI_TRE_CMD_RESET_DWORD0 (0)
+#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | (16 << 16))
+
+/* channel stop command */
+#define MHI_TRE_CMD_STOP_PTR (0)
+#define MHI_TRE_CMD_STOP_DWORD0 (0)
+#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | (17 << 16))
+
+/* channel start command */
+#define MHI_TRE_CMD_START_PTR (0)
+#define MHI_TRE_CMD_START_DWORD0 (0)
+#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | (18 << 16))
+
+#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+
+/* event descriptor macros */
+#define MHI_TRE_EV_PTR(ptr) (ptr)
+#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
+#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
+#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
+#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
+#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
+
+/* transfer descriptor macros */
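+/*
+ * DWORD1 flag bits: bei = block event interrupt, ieot = interrupt on end
+ * of transfer, ieob = interrupt on end of block, chain = chained TRE;
+ * bits 23:16 hold the packet type (2 = transfer)
+ */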
+#define MHI_TRE_DATA_PTR(ptr) (ptr)
+#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
+#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
+	| (ieot << 9) | (ieob << 8) | chain)
+
+enum MHI_CMD {
+	MHI_CMD_NOOP = 0x0,
+	MHI_CMD_RESET_CHAN = 0x1,
+	MHI_CMD_STOP_CHAN = 0x2,
+	MHI_CMD_START_CHAN = 0x3,
+	MHI_CMD_RESUME_CHAN = 0x4,
+};
+
+enum MHI_PKT_TYPE {
+	MHI_PKT_TYPE_INVALID = 0x0,
+	MHI_PKT_TYPE_NOOP_CMD = 0x1,
+	MHI_PKT_TYPE_TRANSFER = 0x2,
+	MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
+	MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
+	MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
+	MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
+	MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
+	MHI_PKT_TYPE_TX_EVENT = 0x22,
+	MHI_PKT_TYPE_EE_EVENT = 0x40,
+	MHI_PKT_TYPE_STALE_EVENT, /* internal event */
+};
+
+/* MHI transfer completion events */
+enum MHI_EV_CCS {
+	MHI_EV_CC_INVALID = 0x0,
+	MHI_EV_CC_SUCCESS = 0x1,
+	MHI_EV_CC_EOT = 0x2,
+	MHI_EV_CC_OVERFLOW = 0x3,
+	MHI_EV_CC_EOB = 0x4,
+	MHI_EV_CC_OOB = 0x5,
+	MHI_EV_CC_DB_MODE = 0x6,
+	MHI_EV_CC_UNDEFINED_ERR = 0x10,
+	MHI_EV_CC_BAD_TRE = 0x11,
+};
+
+enum MHI_CH_STATE {
+	MHI_CH_STATE_DISABLED = 0x0,
+	MHI_CH_STATE_ENABLED = 0x1,
+	MHI_CH_STATE_RUNNING = 0x2,
+	MHI_CH_STATE_SUSPENDED = 0x3,
+	MHI_CH_STATE_STOP = 0x4,
+	MHI_CH_STATE_ERROR = 0x5,
+};
+
+enum MHI_CH_CFG {
+	MHI_CH_CFG_CHAN_ID = 0,
+	MHI_CH_CFG_ELEMENTS = 1,
+	MHI_CH_CFG_ER_INDEX = 2,
+	MHI_CH_CFG_DIRECTION = 3,
+	MHI_CH_CFG_BRSTMODE = 4,
+	MHI_CH_CFG_POLLCFG = 5,
+	MHI_CH_CFG_EE = 6,
+	MHI_CH_CFG_XFER_TYPE = 7,
+	MHI_CH_CFG_BITCFG = 8,
+	MHI_CH_CFG_MAX
+};
+
+#define MHI_CH_CFG_BIT_LPM_NOTIFY BIT(0) /* require LPM notification */
+#define MHI_CH_CFG_BIT_OFFLOAD_CH BIT(1) /* satellite mhi devices */
+#define MHI_CH_CFG_BIT_DBMODE_RESET_CH BIT(2) /* require db mode to reset */
+#define MHI_CH_CFG_BIT_PRE_ALLOC BIT(3) /* host allocate buffers for DL */
+
+enum MHI_EV_CFG {
+	MHI_EV_CFG_ELEMENTS = 0,
+	MHI_EV_CFG_INTMOD = 1,
+	MHI_EV_CFG_MSI = 2,
+	MHI_EV_CFG_CHAN = 3,
+	MHI_EV_CFG_PRIORITY = 4,
+	MHI_EV_CFG_BRSTMODE = 5,
+	MHI_EV_CFG_BITCFG = 6,
+	MHI_EV_CFG_MAX
+};
+
+#define MHI_EV_CFG_BIT_HW_EV BIT(0) /* hw event ring */
+#define MHI_EV_CFG_BIT_CL_MANAGE BIT(1) /* client manages the event ring */
+#define MHI_EV_CFG_BIT_OFFLOAD_EV BIT(2) /* satellite driver manages it */
+#define MHI_EV_CFG_BIT_CTRL_EV BIT(3) /* ctrl event ring */
+
+enum MHI_BRSTMODE {
+	MHI_BRSTMODE_DISABLE = 0x2,
+	MHI_BRSTMODE_ENABLE = 0x3,
+};
+
+#define MHI_INVALID_BRSTMODE(mode) ((mode) != MHI_BRSTMODE_DISABLE && \
+				    (mode) != MHI_BRSTMODE_ENABLE)
+
+enum MHI_EE {
+	MHI_EE_PBL = 0x0,
+	MHI_EE_SBL = 0x1,
+	MHI_EE_AMSS = 0x2,
+	MHI_EE_BHIE = 0x3,
+	MHI_EE_RDDM = 0x4,
+	MHI_EE_PTHRU = 0x5,
+	MHI_EE_EDL = 0x6,
+	MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
+	MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
+	MHI_EE_MAX,
+};
+
+extern const char * const mhi_ee_str[MHI_EE_MAX];
+#define TO_MHI_EXEC_STR(ee) (((ee) >= MHI_EE_MAX) ? \
+			     "INVALID_EE" : mhi_ee_str[ee])
+
+#define MHI_IN_PBL(ee) ((ee) == MHI_EE_PBL || (ee) == MHI_EE_PTHRU || \
+			(ee) == MHI_EE_EDL)
+
+enum MHI_ST_TRANSITION {
+	MHI_ST_TRANSITION_PBL,
+	MHI_ST_TRANSITION_READY,
+	MHI_ST_TRANSITION_SBL,
+	MHI_ST_TRANSITION_AMSS,
+	MHI_ST_TRANSITION_BHIE,
+	MHI_ST_TRANSITION_MAX,
+};
+
+extern const char * const mhi_state_tran_str[MHI_ST_TRANSITION_MAX];
+#define TO_MHI_STATE_TRANS_STR(state) (((state) >= MHI_ST_TRANSITION_MAX) ? \
+				"INVALID_STATE" : mhi_state_tran_str[state])
+
+enum MHI_STATE {
+	MHI_STATE_RESET = 0x0,
+	MHI_STATE_READY = 0x1,
+	MHI_STATE_M0 = 0x2,
+	MHI_STATE_M1 = 0x3,
+	MHI_STATE_M2 = 0x4,
+	MHI_STATE_M3 = 0x5,
+	MHI_STATE_BHI  = 0x7,
+	MHI_STATE_SYS_ERR  = 0xFF,
+	MHI_STATE_MAX,
+};
+
+extern const char * const mhi_state_str[MHI_STATE_MAX];
+#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
+				  !mhi_state_str[state]) ? \
+				"INVALID_STATE" : mhi_state_str[state])
+
+/* internal power states */
+enum MHI_PM_STATE {
+	MHI_PM_DISABLE = BIT(0), /* MHI is not enabled */
+	MHI_PM_POR = BIT(1), /* reset state */
+	MHI_PM_M0 = BIT(2),
+	MHI_PM_M1 = BIT(3),
+	MHI_PM_M1_M2_TRANSITION = BIT(4), /* register access not allowed */
+	MHI_PM_M2 = BIT(5),
+	MHI_PM_M3_ENTER = BIT(6),
+	MHI_PM_M3 = BIT(7),
+	MHI_PM_M3_EXIT = BIT(8),
+	MHI_PM_FW_DL_ERR = BIT(9), /* firmware download failure state */
+	MHI_PM_SYS_ERR_DETECT = BIT(10),
+	MHI_PM_SYS_ERR_PROCESS = BIT(11),
+	MHI_PM_SHUTDOWN_PROCESS = BIT(12),
+	MHI_PM_LD_ERR_FATAL_DETECT = BIT(13), /* link not accessible */
+};
+
+#define MHI_REG_ACCESS_VALID(pm_state) ((pm_state & (MHI_PM_POR | MHI_PM_M0 | \
+		MHI_PM_M1 | MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_M3_EXIT | \
+		MHI_PM_SYS_ERR_DETECT | MHI_PM_SYS_ERR_PROCESS | \
+		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_FW_DL_ERR)))
+#define MHI_PM_IN_ERROR_STATE(pm_state) (pm_state >= MHI_PM_FW_DL_ERR)
+#define MHI_PM_IN_FATAL_STATE(pm_state) (pm_state == MHI_PM_LD_ERR_FATAL_DETECT)
+#define MHI_DB_ACCESS_VALID(pm_state) (pm_state & (MHI_PM_M0 | MHI_PM_M1))
+#define MHI_WAKE_DB_ACCESS_VALID(pm_state) (pm_state & (MHI_PM_M0 | \
+							MHI_PM_M1 | MHI_PM_M2))
+#define MHI_EVENT_ACCESS_INVALID(pm_state) (pm_state == MHI_PM_DISABLE || \
+					    MHI_PM_IN_ERROR_STATE(pm_state))
+#define MHI_PM_IN_SUSPEND_STATE(pm_state) (pm_state & \
+					   (MHI_PM_M3_ENTER | MHI_PM_M3))
+
+/* accepted buffer type for the channel */
+enum MHI_XFER_TYPE {
+	MHI_XFER_BUFFER,
+	MHI_XFER_SKB,
+	MHI_XFER_SCLIST,
+	MHI_XFER_NOP, /* CPU offload channel, host does not accept transfer */
+};
+
+#define NR_OF_CMD_RINGS (1)
+#define CMD_EL_PER_RING (128)
+#define PRIMARY_CMD_RING (0)
+#define MHI_DEV_WAKE_DB (127)
+#define MHI_M2_DEBOUNCE_TMR_US (10000)
+#define MHI_MAX_MTU (0xffff)
+
+enum MHI_ER_TYPE {
+	MHI_ER_TYPE_INVALID = 0x0,
+	MHI_ER_TYPE_VALID = 0x1,
+};
+
+struct db_cfg {
+	bool reset_req;
+	bool db_mode;
+	u32 pollcfg;
+	enum MHI_BRSTMODE brstmode;
+	dma_addr_t db_val;
+	void (*process_db)(struct mhi_controller *mhi_cntrl,
+			   struct db_cfg *db_cfg, void __iomem *io_addr,
+			   dma_addr_t db_val);
+};
+
+struct mhi_pm_transitions {
+	enum MHI_PM_STATE from_state;
+	u32 to_states;
+};
+
+struct state_transition {
+	struct list_head node;
+	enum MHI_ST_TRANSITION state;
+};
+
+struct mhi_ctxt {
+	struct mhi_event_ctxt *er_ctxt;
+	struct mhi_chan_ctxt *chan_ctxt;
+	struct mhi_cmd_ctxt *cmd_ctxt;
+	dma_addr_t er_ctxt_addr;
+	dma_addr_t chan_ctxt_addr;
+	dma_addr_t cmd_ctxt_addr;
+};
+
+struct mhi_ring {
+	dma_addr_t dma_handle;
+	dma_addr_t iommu_base;
+	u64 *ctxt_wp; /* points to wp in the shared ring context */
+	void *pre_aligned;
+	void *base;
+	void *rp;
+	void *wp;
+	size_t el_size;
+	size_t len;
+	size_t elements;
+	size_t alloc_size;
+	void __iomem *db_addr;
+};
+
+struct mhi_cmd {
+	struct mhi_ring ring;
+	spinlock_t lock;
+};
+
+struct mhi_buf_info {
+	dma_addr_t p_addr;
+	void *v_addr;
+	void *wp;
+	size_t len;
+	void *cb_buf;
+	enum dma_data_direction dir;
+};
+
+struct mhi_event {
+	u32 er_index;
+	u32 intmod;
+	u32 msi;
+	int chan; /* this event ring is dedicated to a channel */
+	u32 priority;
+	struct mhi_ring ring;
+	struct db_cfg db_cfg;
+	bool hw_ring;
+	bool cl_manage;
+	bool offload_ev; /* managed by a device driver */
+	bool ctrl_ev;
+	spinlock_t lock;
+	struct mhi_chan *mhi_chan; /* dedicated to channel */
+	struct tasklet_struct task;
+	struct mhi_controller *mhi_cntrl;
+};
+
+struct mhi_chan {
+	u32 chan;
+	const char *name;
+	/*
+	 * important: when consuming, increment tre_ring first; when
+	 * releasing, decrement buf_ring first. If tre_ring has space,
+	 * buf_ring is guaranteed to have space, so only one check is needed.
+	 */
+	struct mhi_ring buf_ring;
+	struct mhi_ring tre_ring;
+	u32 er_index;
+	u32 intmod;
+	u32 tiocm;
+	enum dma_data_direction dir;
+	struct db_cfg db_cfg;
+	enum MHI_EE ee;
+	enum MHI_XFER_TYPE xfer_type;
+	enum MHI_CH_STATE ch_state;
+	enum MHI_EV_CCS ccs;
+	bool lpm_notify;
+	bool configured;
+	bool offload_ch;
+	bool pre_alloc;
+	/* functions that generate the transfer ring elements */
+	int (*gen_tre)(struct mhi_controller *mhi_cntrl,
+		       struct mhi_chan *mhi_chan, void *buf, void *cb,
+		       size_t len, enum MHI_FLAGS flags);
+	int (*queue_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+			  void *buf, size_t len, enum MHI_FLAGS flags);
+	/* xfer call back */
+	struct mhi_device *mhi_dev;
+	void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *res);
+	struct mutex mutex;
+	struct completion completion;
+	rwlock_t lock;
+	struct list_head node;
+};
+
+struct mhi_bus {
+	struct list_head controller_list;
+	struct mutex lock;
+	struct dentry *dentry;
+};
+
+/* default MHI timeout */
+#define MHI_TIMEOUT_MS (1000)
+extern struct mhi_bus mhi_bus;
+
+/* debug fs related functions */
+int mhi_debugfs_mhi_chan_show(struct seq_file *m, void *d);
+int mhi_debugfs_mhi_event_show(struct seq_file *m, void *d);
+int mhi_debugfs_mhi_states_show(struct seq_file *m, void *d);
+int mhi_debugfs_trigger_reset(void *data, u64 val);
+
+void mhi_deinit_debugfs(struct mhi_controller *mhi_cntrl);
+void mhi_init_debugfs(struct mhi_controller *mhi_cntrl);
+
+/* power management apis */
+enum MHI_PM_STATE __must_check mhi_tryset_pm_state(
+					struct mhi_controller *mhi_cntrl,
+					enum MHI_PM_STATE state);
+const char *to_mhi_pm_state_str(enum MHI_PM_STATE state);
+void mhi_reset_chan(struct mhi_controller *mhi_cntrl,
+		    struct mhi_chan *mhi_chan);
+enum MHI_EE mhi_get_exec_env(struct mhi_controller *mhi_cntrl);
+enum MHI_STATE mhi_get_m_state(struct mhi_controller *mhi_cntrl);
+int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
+			       enum MHI_ST_TRANSITION state);
+void mhi_pm_st_worker(struct work_struct *work);
+void mhi_fw_load_worker(struct work_struct *work);
+void mhi_pm_m1_worker(struct work_struct *work);
+void mhi_pm_sys_err_worker(struct work_struct *work);
+int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
+void mhi_ctrl_ev_task(unsigned long data);
+int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
+void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
+int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
+void mhi_notify(struct mhi_device *mhi_dev, enum MHI_CB cb_reason);
+
+/* queue transfer buffer */
+int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+		void *buf, void *cb, size_t buf_len, enum MHI_FLAGS flags);
+int mhi_queue_buf(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		  void *buf, size_t len, enum MHI_FLAGS mflags);
+int mhi_queue_skb(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		  void *buf, size_t len, enum MHI_FLAGS mflags);
+int mhi_queue_sclist(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		  void *buf, size_t len, enum MHI_FLAGS mflags);
+int mhi_queue_nop(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		  void *buf, size_t len, enum MHI_FLAGS mflags);
+
+/* register access methods */
+void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
+		     void __iomem *db_addr, dma_addr_t wp);
+void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
+			     struct db_cfg *db_mode, void __iomem *db_addr,
+			     dma_addr_t wp);
+int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
+			      void __iomem *base, u32 offset, u32 *out);
+int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
+				    void __iomem *base, u32 offset, u32 mask,
+				    u32 shift, u32 *out);
+void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
+		   u32 offset, u32 val);
+void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
+			 u32 offset, u32 mask, u32 shift, u32 val);
+void mhi_ring_er_db(struct mhi_event *mhi_event);
+void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
+		  dma_addr_t wp);
+void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd);
+void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
+		      struct mhi_chan *mhi_chan);
+void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum MHI_STATE state);
+
+/* memory allocation methods */
+static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
+				       size_t size,
+				       dma_addr_t *dma_handle,
+				       gfp_t gfp)
+{
+	void *buf = dma_zalloc_coherent(mhi_cntrl->dev, size, dma_handle, gfp);
+
+	if (buf)
+		atomic_add(size, &mhi_cntrl->alloc_size);
+
+	return buf;
+}
+static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl,
+				     size_t size,
+				     void *vaddr,
+				     dma_addr_t dma_handle)
+{
+	atomic_sub(size, &mhi_cntrl->alloc_size);
+	dma_free_coherent(mhi_cntrl->dev, size, vaddr, dma_handle);
+}
+struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
+static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
+				      struct mhi_device *mhi_dev)
+{
+	kfree(mhi_dev);
+}
+int mhi_destroy_device(struct device *dev, void *data);
+void mhi_create_devices(struct mhi_controller *mhi_cntrl);
+int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
+			 struct image_info **image_info, size_t alloc_size);
+void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
+			 struct image_info *image_info);
+
+/* initialization methods */
+int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+		       struct mhi_chan *mhi_chan);
+void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+			  struct mhi_chan *mhi_chan);
+int mhi_init_mmio(struct mhi_controller *mhi_cntrl);
+int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl);
+void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl);
+int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
+void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
+int mhi_dtr_init(void);
+
+/* isr handlers */
+irqreturn_t mhi_msi_handlr(int irq_number, void *dev);
+irqreturn_t mhi_intvec_threaded_handlr(int irq_number, void *dev);
+irqreturn_t mhi_intvec_handlr(int irq_number, void *dev);
+void mhi_ev_task(unsigned long data);
+
+#ifdef CONFIG_MHI_DEBUG
+
+#define MHI_ASSERT(cond, msg) do { \
+	if (cond) \
+		panic(msg); \
+} while (0)
+
+#else
+
+#define MHI_ASSERT(cond, msg) do { \
+	if (cond) { \
+		MHI_ERR(msg); \
+		WARN_ON(cond); \
+	} \
+} while (0)
+
+#endif
+
+#endif /* _MHI_INT_H */
diff --git a/drivers/bus/mhi/core/mhi_main.c b/drivers/bus/mhi/core/mhi_main.c
new file mode 100644
index 0000000..761c060
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_main.c
@@ -0,0 +1,1476 @@
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include <linux/mhi.h>
+#include "mhi_internal.h"
+
+static void __mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
+				    struct mhi_chan *mhi_chan);
+
+int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
+			      void __iomem *base,
+			      u32 offset,
+			      u32 *out)
+{
+	u32 tmp = readl_relaxed(base + offset);
+
+	/* unexpected value, query the link status */
+	if (PCI_INVALID_READ(tmp) &&
+	    mhi_cntrl->link_status(mhi_cntrl, mhi_cntrl->priv_data))
+		return -EIO;
+
+	*out = tmp;
+
+	return 0;
+}
+
+int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
+				    void __iomem *base,
+				    u32 offset,
+				    u32 mask,
+				    u32 shift,
+				    u32 *out)
+{
+	u32 tmp;
+	int ret;
+
+	ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
+	if (ret)
+		return ret;
+
+	*out = (tmp & mask) >> shift;
+
+	return 0;
+}
+
+void mhi_write_reg(struct mhi_controller *mhi_cntrl,
+		   void __iomem *base,
+		   u32 offset,
+		   u32 val)
+{
+	writel_relaxed(val, base + offset);
+}
+
+void mhi_write_reg_field(struct mhi_controller *mhi_cntrl,
+			 void __iomem *base,
+			 u32 offset,
+			 u32 mask,
+			 u32 shift,
+			 u32 val)
+{
+	int ret;
+	u32 tmp;
+
+	ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
+	if (ret)
+		return;
+
+	tmp &= ~mask;
+	tmp |= (val << shift);
+	mhi_write_reg(mhi_cntrl, base, offset, tmp);
+}
+
+void mhi_write_db(struct mhi_controller *mhi_cntrl,
+		  void __iomem *db_addr,
+		  dma_addr_t wp)
+{
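+	/*
+	 * write the upper dword first; the device is expected to latch
+	 * the full doorbell value on the lower-dword write
+	 */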
+	mhi_write_reg(mhi_cntrl, db_addr, 4, upper_32_bits(wp));
+	mhi_write_reg(mhi_cntrl, db_addr, 0, lower_32_bits(wp));
+}
+
+void mhi_db_brstmode(struct mhi_controller *mhi_cntrl,
+		     struct db_cfg *db_cfg,
+		     void __iomem *db_addr,
+		     dma_addr_t wp)
+{
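+	/*
+	 * ring the doorbell only while db_mode is set; further writes are
+	 * absorbed until db_mode is re-armed (e.g. on a DB_MODE completion
+	 * event)
+	 */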
+	if (db_cfg->db_mode) {
+		db_cfg->db_val = wp;
+		mhi_write_db(mhi_cntrl, db_addr, wp);
+		db_cfg->db_mode = 0;
+	}
+}
+
+void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
+			     struct db_cfg *db_cfg,
+			     void __iomem *db_addr,
+			     dma_addr_t wp)
+{
+	db_cfg->db_val = wp;
+	mhi_write_db(mhi_cntrl, db_addr, wp);
+}
+
+void mhi_ring_er_db(struct mhi_event *mhi_event)
+{
+	struct mhi_ring *ring = &mhi_event->ring;
+
+	mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
+				     ring->db_addr, *ring->ctxt_wp);
+}
+
+void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
+{
+	dma_addr_t db;
+	struct mhi_ring *ring = &mhi_cmd->ring;
+
+	db = ring->iommu_base + (ring->wp - ring->base);
+	*ring->ctxt_wp = db;
+	mhi_write_db(mhi_cntrl, ring->db_addr, db);
+}
+
+void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
+		      struct mhi_chan *mhi_chan)
+{
+	struct mhi_ring *ring = &mhi_chan->tre_ring;
+	dma_addr_t db;
+
+	db = ring->iommu_base + (ring->wp - ring->base);
+	*ring->ctxt_wp = db;
+	mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg, ring->db_addr,
+				    db);
+}
+
+enum MHI_EE mhi_get_exec_env(struct mhi_controller *mhi_cntrl)
+{
+	u32 exec;
+	int ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_EXECENV, &exec);
+
+	return (ret) ? MHI_EE_MAX : exec;
+}
+
+enum MHI_STATE mhi_get_m_state(struct mhi_controller *mhi_cntrl)
+{
+	u32 state;
+	int ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
+				     MHISTATUS_MHISTATE_MASK,
+				     MHISTATUS_MHISTATE_SHIFT, &state);
+	return ret ? MHI_STATE_MAX : state;
+}
+
+int mhi_queue_sclist(struct mhi_device *mhi_dev,
+		     struct mhi_chan *mhi_chan,
+		     void *buf,
+		     size_t len,
+		     enum MHI_FLAGS mflags)
+{
+	return -EINVAL;
+}
+
+int mhi_queue_nop(struct mhi_device *mhi_dev,
+		  struct mhi_chan *mhi_chan,
+		  void *buf,
+		  size_t len,
+		  enum MHI_FLAGS mflags)
+{
+	return -EINVAL;
+}
+
+static void mhi_add_ring_element(struct mhi_controller *mhi_cntrl,
+				 struct mhi_ring *ring)
+{
+	ring->wp += ring->el_size;
+	if (ring->wp >= (ring->base + ring->len))
+		ring->wp = ring->base;
+	/* make the write-pointer update visible to all CPUs */
+	smp_wmb();
+}
+
+static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
+				 struct mhi_ring *ring)
+{
+	ring->rp += ring->el_size;
+	if (ring->rp >= (ring->base + ring->len))
+		ring->rp = ring->base;
+	/* make the read-pointer update visible to all CPUs */
+	smp_wmb();
+}
+
+static int get_nr_avail_ring_elements(struct mhi_controller *mhi_cntrl,
+				      struct mhi_ring *ring)
+{
+	int nr_el;
+
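+	/*
+	 * one slot is always left unused so that a full ring (wp just
+	 * behind rp) can be distinguished from an empty ring (wp == rp);
+	 * hence the "- 1" terms below
+	 */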
+	if (ring->wp < ring->rp)
+		nr_el = ((ring->rp - ring->wp) / ring->el_size) - 1;
+	else {
+		nr_el = (ring->rp - ring->base) / ring->el_size;
+		nr_el += ((ring->base + ring->len - ring->wp) /
+			  ring->el_size) - 1;
+	}
+	return nr_el;
+}
+
+static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr)
+{
+	return (addr - ring->iommu_base) + ring->base;
+}
+
+dma_addr_t mhi_to_physical(struct mhi_ring *ring, void *addr)
+{
+	return (addr - ring->base) + ring->iommu_base;
+}
+
+static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
+					struct mhi_ring *ring)
+{
+	dma_addr_t ctxt_wp;
+
+	/* update the WP */
+	ring->wp += ring->el_size;
+	ctxt_wp = *ring->ctxt_wp + ring->el_size;
+
+	if (ring->wp >= (ring->base + ring->len)) {
+		ring->wp = ring->base;
+		ctxt_wp = ring->iommu_base;
+	}
+
+	*ring->ctxt_wp = ctxt_wp;
+
+	/* update the RP */
+	ring->rp += ring->el_size;
+	if (ring->rp >= (ring->base + ring->len))
+		ring->rp = ring->base;
+
+	/* visible to other cores */
+	smp_wmb();
+}
+
+static bool mhi_is_ring_full(struct mhi_controller *mhi_cntrl,
+			     struct mhi_ring *ring)
+{
+	void *tmp = ring->wp + ring->el_size;
+
+	if (tmp >= (ring->base + ring->len))
+		tmp = ring->base;
+
+	return (tmp == ring->rp);
+}
+
+int mhi_queue_skb(struct mhi_device *mhi_dev,
+		  struct mhi_chan *mhi_chan,
+		  void *buf,
+		  size_t len,
+		  enum MHI_FLAGS mflags)
+{
+	struct sk_buff *skb = buf;
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+	struct mhi_ring *buf_ring = &mhi_chan->buf_ring;
+	struct mhi_buf_info *buf_info;
+	struct mhi_tre *mhi_tre;
+
+	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+		return -ENOMEM;
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
+		MHI_ERR("MHI is not in active state, pm_state:%s\n",
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+
+		return -EIO;
+	}
+
+	/* in M3 or transitioning to M3: cycle a runtime vote to resume */
+	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+		mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+		mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+	}
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+
+	/* generate the tre */
+	buf_info = buf_ring->wp;
+	buf_info->v_addr = skb->data;
+	buf_info->cb_buf = skb;
+	buf_info->wp = tre_ring->wp;
+	buf_info->dir = mhi_chan->dir;
+	buf_info->len = len;
+	buf_info->p_addr = dma_map_single(mhi_cntrl->dev, buf_info->v_addr, len,
+					  buf_info->dir);
+
+	if (dma_mapping_error(mhi_cntrl->dev, buf_info->p_addr))
+		goto map_error;
+
+	mhi_tre = tre_ring->wp;
+	mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
+	mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_info->len);
+	mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(1, 1, 0, 0);
+
+	MHI_VERB("chan:%d WP:0x%llx TRE:0x%llx 0x%08x 0x%08x\n", mhi_chan->chan,
+		 (u64)mhi_to_physical(tre_ring, mhi_tre), mhi_tre->ptr,
+		 mhi_tre->dword[0], mhi_tre->dword[1]);
+
+	/* increment WP */
+	mhi_add_ring_element(mhi_cntrl, tre_ring);
+	mhi_add_ring_element(mhi_cntrl, buf_ring);
+
+	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state))) {
+		read_lock_bh(&mhi_chan->lock);
+		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+		read_unlock_bh(&mhi_chan->lock);
+	}
+
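+	/*
+	 * inbound (DL) buffers may sit on the ring indefinitely, so drop
+	 * the wake vote now rather than holding it until completion
+	 */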
+	if (mhi_chan->dir == DMA_FROM_DEVICE) {
+		bool override = (mhi_cntrl->pm_state != MHI_PM_M0);
+
+		mhi_cntrl->wake_put(mhi_cntrl, override);
+	}
+
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return 0;
+
+map_error:
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return -ENOMEM;
+}
+
+int mhi_gen_tre(struct mhi_controller *mhi_cntrl,
+		struct mhi_chan *mhi_chan,
+		void *buf,
+		void *cb,
+		size_t buf_len,
+		enum MHI_FLAGS flags)
+{
+	struct mhi_ring *buf_ring, *tre_ring;
+	struct mhi_tre *mhi_tre;
+	struct mhi_buf_info *buf_info;
+	int eot, eob, chain, bei;
+
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+
+	buf_info = buf_ring->wp;
+	buf_info->v_addr = buf;
+	buf_info->cb_buf = cb;
+	buf_info->wp = tre_ring->wp;
+	buf_info->dir = mhi_chan->dir;
+	buf_info->len = buf_len;
+	buf_info->p_addr = dma_map_single(mhi_cntrl->dev, buf, buf_len,
+					  buf_info->dir);
+
+	if (dma_mapping_error(mhi_cntrl->dev, buf_info->p_addr))
+		return -ENOMEM;
+
+	eob = !!(flags & MHI_EOB);
+	eot = !!(flags & MHI_EOT);
+	chain = !!(flags & MHI_CHAIN);
+	bei = !!(mhi_chan->intmod);
+
+	mhi_tre = tre_ring->wp;
+	mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
+	mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_len);
+	mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(bei, eot, eob, chain);
+
+	MHI_VERB("chan:%d WP:0x%llx TRE:0x%llx 0x%08x 0x%08x\n", mhi_chan->chan,
+		 (u64)mhi_to_physical(tre_ring, mhi_tre), mhi_tre->ptr,
+		 mhi_tre->dword[0], mhi_tre->dword[1]);
+
+	/* increment WP */
+	mhi_add_ring_element(mhi_cntrl, tre_ring);
+	mhi_add_ring_element(mhi_cntrl, buf_ring);
+
+	return 0;
+}
+
+int mhi_queue_buf(struct mhi_device *mhi_dev,
+		  struct mhi_chan *mhi_chan,
+		  void *buf,
+		  size_t len,
+		  enum MHI_FLAGS mflags)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_ring *tre_ring;
+	unsigned long flags;
+	int ret;
+
+	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
+		MHI_ERR("MHI is not in active state, pm_state:%s\n",
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+
+		return -EIO;
+	}
+
+	/* in M3 or transitioning to M3: cycle a runtime vote to resume */
+	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+		mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+		mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+	}
+
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+	read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+
+	tre_ring = &mhi_chan->tre_ring;
+	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+		goto error_queue;
+
+	ret = mhi_chan->gen_tre(mhi_cntrl, mhi_chan, buf, buf, len, mflags);
+	if (unlikely(ret))
+		goto error_queue;
+
+	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state))) {
+		unsigned long flags;
+
+		read_lock_irqsave(&mhi_chan->lock, flags);
+		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+		read_unlock_irqrestore(&mhi_chan->lock, flags);
+	}
+
+	if (mhi_chan->dir == DMA_FROM_DEVICE) {
+		bool override = (mhi_cntrl->pm_state != MHI_PM_M0);
+
+		mhi_cntrl->wake_put(mhi_cntrl, override);
+	}
+
+	read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+
+	return 0;
+
+error_queue:
+	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+
+	return -ENOMEM;
+}
+
+/* destroy specific device */
+int mhi_destroy_device(struct device *dev, void *data)
+{
+	struct mhi_device *mhi_dev;
+	struct mhi_driver *mhi_drv;
+	struct mhi_controller *mhi_cntrl;
+	struct mhi_chan *mhi_chan;
+	int dir;
+
+	if (dev->bus != &mhi_bus_type)
+		return 0;
+
+	mhi_dev = to_mhi_device(dev);
+	mhi_drv = to_mhi_driver(dev->driver);
+	mhi_cntrl = mhi_dev->mhi_cntrl;
+
+	MHI_LOG("destroy device for chan:%s\n", mhi_dev->chan_name);
+
+	for (dir = 0; dir < 2; dir++) {
+		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		/* remove device associated with the channel */
+		mutex_lock(&mhi_chan->mutex);
+		mhi_chan->mhi_dev = NULL;
+		mutex_unlock(&mhi_chan->mutex);
+	}
+
+	/* notify the client and remove the device from mhi bus */
+	device_del(dev);
+	put_device(dev);
+
+	return 0;
+}
+
+void mhi_notify(struct mhi_device *mhi_dev, enum MHI_CB cb_reason)
+{
+	struct mhi_driver *mhi_drv;
+
+	if (!mhi_dev->dev.driver)
+		return;
+
+	mhi_drv = to_mhi_driver(mhi_dev->dev.driver);
+
+	if (mhi_drv->status_cb)
+		mhi_drv->status_cb(mhi_dev, cb_reason);
+}
+
+/* bind mhi channels into mhi devices */
+void mhi_create_devices(struct mhi_controller *mhi_cntrl)
+{
+	int i;
+	struct mhi_chan *mhi_chan;
+	struct mhi_device *mhi_dev;
+	struct device_node *controller, *node;
+	const char *dt_name;
+	int ret;
+
+	mhi_chan = mhi_cntrl->mhi_chan;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+		if (!mhi_chan->configured || mhi_chan->ee != mhi_cntrl->ee)
+			continue;
+		mhi_dev = mhi_alloc_device(mhi_cntrl);
+		if (!mhi_dev)
+			return;
+
+		if (mhi_chan->dir == DMA_TO_DEVICE) {
+			mhi_dev->ul_chan = mhi_chan;
+			mhi_dev->ul_chan_id = mhi_chan->chan;
+			mhi_dev->ul_xfer = mhi_chan->queue_xfer;
+			mhi_dev->ul_event_id = mhi_chan->er_index;
+		} else {
+			mhi_dev->dl_chan = mhi_chan;
+			mhi_dev->dl_chan_id = mhi_chan->chan;
+			mhi_dev->dl_xfer = mhi_chan->queue_xfer;
+			mhi_dev->dl_event_id = mhi_chan->er_index;
+		}
+
+		mhi_chan->mhi_dev = mhi_dev;
+
+		/* check if the next channel is the matching UL/DL pair */
+		if ((i + 1) < mhi_cntrl->max_chan && mhi_chan[1].configured) {
+			if (!strcmp(mhi_chan[1].name, mhi_chan->name)) {
+				i++;
+				mhi_chan++;
+				if (mhi_chan->dir == DMA_TO_DEVICE) {
+					mhi_dev->ul_chan = mhi_chan;
+					mhi_dev->ul_chan_id = mhi_chan->chan;
+					mhi_dev->ul_xfer = mhi_chan->queue_xfer;
+					mhi_dev->ul_event_id =
+						mhi_chan->er_index;
+				} else {
+					mhi_dev->dl_chan = mhi_chan;
+					mhi_dev->dl_chan_id = mhi_chan->chan;
+					mhi_dev->dl_xfer = mhi_chan->queue_xfer;
+					mhi_dev->dl_event_id =
+						mhi_chan->er_index;
+				}
+				mhi_chan->mhi_dev = mhi_dev;
+			}
+		}
+
+		mhi_dev->chan_name = mhi_chan->name;
+		dev_set_name(&mhi_dev->dev, "%04x_%02u.%02u.%02u_%s",
+			     mhi_dev->dev_id, mhi_dev->domain, mhi_dev->bus,
+			     mhi_dev->slot, mhi_dev->chan_name);
+
+		/* attach a matching DT node, if one exists */
+		controller = mhi_cntrl->of_node;
+		for_each_available_child_of_node(controller, node) {
+			ret = of_property_read_string(node, "mhi,chan",
+						      &dt_name);
+			if (ret)
+				continue;
+			if (!strcmp(mhi_dev->chan_name, dt_name))
+				mhi_dev->dev.of_node = node;
+		}
+
+		ret = device_add(&mhi_dev->dev);
+		if (ret) {
+			MHI_ERR("Failed to register dev for chan:%s\n",
+				mhi_dev->chan_name);
+			mhi_dealloc_device(mhi_cntrl, mhi_dev);
+		}
+	}
+}
+
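+/*
+ * parse_xfer_event - handle a transfer completion event. For EOT/EOB/
+ * OVERFLOW completions, walk the transfer ring from the local read
+ * pointer up to the element the event points to, unmapping each buffer
+ * and invoking the client's xfer_cb. OOB/DB_MODE completions put the
+ * channel back into doorbell mode and re-ring the channel doorbell.
+ */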
+static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+			    struct mhi_tre *event,
+			    struct mhi_chan *mhi_chan)
+{
+	struct mhi_ring *buf_ring, *tre_ring;
+	u32 ev_code;
+	struct mhi_result result;
+	unsigned long flags = 0;
+
+	ev_code = MHI_TRE_GET_EV_CODE(event);
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+
+	result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
+		-EOVERFLOW : 0;
+
+	/*
+	 * if it's a DB event we need to grab the lock as a writer with
+	 * preemption disabled, because we have to update the db register
+	 * and another thread could be doing the same.
+	 */
+	if (ev_code >= MHI_EV_CC_OOB)
+		write_lock_irqsave(&mhi_chan->lock, flags);
+	else
+		read_lock_bh(&mhi_chan->lock);
+
+	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
+		goto end_process_tx_event;
+
+	switch (ev_code) {
+	case MHI_EV_CC_OVERFLOW:
+	case MHI_EV_CC_EOB:
+	case MHI_EV_CC_EOT:
+	{
+		dma_addr_t ptr = MHI_TRE_GET_EV_PTR(event);
+		struct mhi_tre *local_rp, *ev_tre;
+		void *dev_rp;
+		struct mhi_buf_info *buf_info;
+		u16 xfer_len;
+
+		/* get the TRE this event points to */
+		ev_tre = mhi_to_virtual(tre_ring, ptr);
+
+		/* device rp after servicing the TREs */
+		dev_rp = ev_tre + 1;
+		if (dev_rp >= (tre_ring->base + tre_ring->len))
+			dev_rp = tre_ring->base;
+
+		result.dir = mhi_chan->dir;
+
+		/* local rp */
+		local_rp = tre_ring->rp;
+		while (local_rp != dev_rp) {
+			buf_info = buf_ring->rp;
+			/* if it's the last TRE, get the length from the event */
+			if (local_rp == ev_tre)
+				xfer_len = MHI_TRE_GET_EV_LEN(event);
+			else
+				xfer_len = buf_info->len;
+
+			dma_unmap_single(mhi_cntrl->dev, buf_info->p_addr,
+					 buf_info->len, buf_info->dir);
+
+			result.buf_addr = buf_info->cb_buf;
+			result.bytes_xferd = xfer_len;
+			mhi_del_ring_element(mhi_cntrl, buf_ring);
+			mhi_del_ring_element(mhi_cntrl, tre_ring);
+			local_rp = tre_ring->rp;
+
+			/* notify client */
+			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+			if (mhi_chan->dir == DMA_TO_DEVICE) {
+				read_lock_bh(&mhi_cntrl->pm_lock);
+				mhi_cntrl->wake_put(mhi_cntrl, false);
+				read_unlock_bh(&mhi_cntrl->pm_lock);
+			}
+
+			/*
+			 * recycle the buffer if it was pre-allocated; on
+			 * error there is not much we can do apart from
+			 * dropping the packet
+			 */
+			if (mhi_chan->pre_alloc) {
+				if (mhi_queue_buf(mhi_chan->mhi_dev, mhi_chan,
+						  buf_info->cb_buf,
+						  buf_info->len, MHI_EOT)) {
+					MHI_ERR(
+						"Error recycling buffer for chan:%d\n",
+						mhi_chan->chan);
+					kfree(buf_info->cb_buf);
+				}
+			}
+		}
+		break;
+	} /* CC_EOT */
+	case MHI_EV_CC_OOB:
+	case MHI_EV_CC_DB_MODE:
+	{
+		unsigned long flags;
+
+		MHI_VERB("DB_MODE/OOB Detected chan %d.\n", mhi_chan->chan);
+		mhi_chan->db_cfg.db_mode = 1;
+		read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+		if (tre_ring->wp != tre_ring->rp &&
+		    MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)) {
+			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+		}
+		read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+		break;
+	}
+	case MHI_EV_CC_BAD_TRE:
+		MHI_ASSERT(1, "Received BAD TRE event for ring");
+		break;
+	default:
+		MHI_CRITICAL("Unknown TX completion.\n");
+
+		break;
+	} /* switch(MHI_EV_READ_CODE(EV_TRB_CODE,event)) */
+
+end_process_tx_event:
+	if (ev_code >= MHI_EV_CC_OOB)
+		write_unlock_irqrestore(&mhi_chan->lock, flags);
+	else
+		read_unlock_bh(&mhi_chan->lock);
+
+	return 0;
+}
+
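+/*
+ * mhi_process_event_ring - drain an event ring, bounded by event_quota
+ * transfer events. Dispatches transfer completions, MHI state changes,
+ * command completions, and EE change events. Returns the number of ring
+ * elements processed, or -EIO if event ring access is not permitted in
+ * the current pm state.
+ */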
+static int mhi_process_event_ring(struct mhi_controller *mhi_cntrl,
+				  struct mhi_event *mhi_event,
+				  u32 event_quota)
+{
+	struct mhi_tre *dev_rp, *local_rp;
+	struct mhi_ring *ev_ring = &mhi_event->ring;
+	struct mhi_event_ctxt *er_ctxt =
+		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+	int count = 0;
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state))) {
+		MHI_ERR("No EV access, PM_STATE:%s\n",
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		return -EIO;
+	}
+
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+	local_rp = ev_ring->rp;
+
+	while (dev_rp != local_rp && event_quota > 0) {
+		enum MHI_PKT_TYPE type = MHI_TRE_GET_EV_TYPE(local_rp);
+
+		MHI_VERB("Processing Event:0x%llx 0x%08x 0x%08x\n",
+			local_rp->ptr, local_rp->dword[0], local_rp->dword[1]);
+
+		switch (type) {
+		case MHI_PKT_TYPE_TX_EVENT:
+		{
+			u32 chan;
+			struct mhi_chan *mhi_chan;
+
+			chan = MHI_TRE_GET_EV_CHID(local_rp);
+			mhi_chan = &mhi_cntrl->mhi_chan[chan];
+			parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
+			event_quota--;
+			break;
+		}
+		case MHI_PKT_TYPE_STATE_CHANGE_EVENT:
+		{
+			enum MHI_STATE new_state;
+
+			new_state = MHI_TRE_GET_EV_STATE(local_rp);
+
+			MHI_LOG("MHI state change event to state:%s\n",
+				TO_MHI_STATE_STR(new_state));
+
+			switch (new_state) {
+			case MHI_STATE_M0:
+				mhi_pm_m0_transition(mhi_cntrl);
+				break;
+			case MHI_STATE_M1:
+				mhi_pm_m1_transition(mhi_cntrl);
+				break;
+			case MHI_STATE_M3:
+				mhi_pm_m3_transition(mhi_cntrl);
+				break;
+			case MHI_STATE_SYS_ERR:
+			{
+				enum MHI_PM_STATE new_state;
+
+				MHI_ERR("MHI system error detected\n");
+				write_lock_irq(&mhi_cntrl->pm_lock);
+				new_state = mhi_tryset_pm_state(mhi_cntrl,
+							MHI_PM_SYS_ERR_DETECT);
+				write_unlock_irq(&mhi_cntrl->pm_lock);
+				if (new_state == MHI_PM_SYS_ERR_DETECT)
+					schedule_work(
+						&mhi_cntrl->syserr_worker);
+				break;
+			}
+			default:
+				MHI_ERR("Unsupported STE:%s\n",
+					TO_MHI_STATE_STR(new_state));
+			}
+
+			break;
+		}
+		case MHI_PKT_TYPE_CMD_COMPLETION_EVENT:
+		{
+			dma_addr_t ptr = MHI_TRE_GET_EV_PTR(local_rp);
+			struct mhi_cmd *cmd_ring =
+				&mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+			struct mhi_ring *mhi_ring = &cmd_ring->ring;
+			struct mhi_tre *cmd_pkt;
+			struct mhi_chan *mhi_chan;
+			u32 chan;
+
+			cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
+
+			/* out of order completion received */
+			MHI_ASSERT(cmd_pkt != mhi_ring->rp,
+				   "Out of order cmd completion");
+
+			chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
+
+			mhi_chan = &mhi_cntrl->mhi_chan[chan];
+			write_lock_bh(&mhi_chan->lock);
+			mhi_chan->ccs = MHI_TRE_GET_EV_CODE(local_rp);
+			complete(&mhi_chan->completion);
+			write_unlock_bh(&mhi_chan->lock);
+			mhi_del_ring_element(mhi_cntrl, mhi_ring);
+			break;
+		}
+		case MHI_PKT_TYPE_EE_EVENT:
+		{
+			enum MHI_ST_TRANSITION st = MHI_ST_TRANSITION_MAX;
+			enum MHI_EE event = MHI_TRE_GET_EV_EXECENV(local_rp);
+
+			MHI_LOG("MHI EE received event:%s\n",
+				TO_MHI_EXEC_STR(event));
+			switch (event) {
+			case MHI_EE_SBL:
+				st = MHI_ST_TRANSITION_SBL;
+				break;
+			case MHI_EE_AMSS:
+				st = MHI_ST_TRANSITION_AMSS;
+				break;
+			case MHI_EE_RDDM:
+				mhi_cntrl->status_cb(mhi_cntrl,
+						     mhi_cntrl->priv_data,
+						     MHI_CB_EE_RDDM);
+				/* fall through to update ee and wake up waiters */
+			case MHI_EE_BHIE:
+				write_lock_irq(&mhi_cntrl->pm_lock);
+				mhi_cntrl->ee = event;
+				write_unlock_irq(&mhi_cntrl->pm_lock);
+				wake_up(&mhi_cntrl->state_event);
+				break;
+			default:
+				MHI_ERR("Unhandled EE event:%s\n",
+					TO_MHI_EXEC_STR(event));
+			}
+			if (st != MHI_ST_TRANSITION_MAX)
+				mhi_queue_state_transition(mhi_cntrl, st);
+
+			break;
+		}
+		case MHI_PKT_TYPE_STALE_EVENT:
+			MHI_VERB("Stale Event received for chan:%u\n",
+				 MHI_TRE_GET_EV_CHID(local_rp));
+			break;
+		default:
+			MHI_ERR("Unsupported packet type code 0x%x\n", type);
+			break;
+		}
+
+		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+		local_rp = ev_ring->rp;
+		dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+		count++;
+	}
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)))
+		mhi_ring_er_db(mhi_event);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	MHI_VERB("exit er_index:%u\n", mhi_event->er_index);
+	return count;
+}
+
+void mhi_ev_task(unsigned long data)
+{
+	struct mhi_event *mhi_event = (struct mhi_event *)data;
+	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+
+	MHI_VERB("Enter for ev_index:%d\n", mhi_event->er_index);
+
+	/* process all pending events */
+	spin_lock_bh(&mhi_event->lock);
+	mhi_process_event_ring(mhi_cntrl, mhi_event, U32_MAX);
+	spin_unlock_bh(&mhi_event->lock);
+}
+
+void mhi_ctrl_ev_task(unsigned long data)
+{
+	struct mhi_event *mhi_event = (struct mhi_event *)data;
+	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+	enum MHI_STATE state = MHI_STATE_MAX;
+	enum MHI_PM_STATE pm_state = 0;
+	int ret;
+
+	MHI_VERB("Enter for ev_index:%d\n", mhi_event->er_index);
+
+	/* process ctrl events */
+	ret = mhi_process_event_ring(mhi_cntrl, mhi_event, U32_MAX);
+
+	/*
+	 * we received an MSI but have no events to process; the device may
+	 * have entered the SYS_ERR state, so check the state
+	 */
+	if (!ret) {
+		write_lock_irq(&mhi_cntrl->pm_lock);
+		if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+			state = mhi_get_m_state(mhi_cntrl);
+		if (state == MHI_STATE_SYS_ERR) {
+			MHI_ERR("MHI system error detected\n");
+			pm_state = mhi_tryset_pm_state(mhi_cntrl,
+						       MHI_PM_SYS_ERR_DETECT);
+		}
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		if (pm_state == MHI_PM_SYS_ERR_DETECT)
+			schedule_work(&mhi_cntrl->syserr_worker);
+	}
+}
+
+irqreturn_t mhi_msi_handlr(int irq_number, void *dev)
+{
+	struct mhi_event *mhi_event = dev;
+	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+	struct mhi_event_ctxt *er_ctxt =
+		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+	struct mhi_ring *ev_ring = &mhi_event->ring;
+	void *dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+
+	/* confirm ER has pending events to process before scheduling work */
+	if (ev_ring->rp == dev_rp)
+		return IRQ_HANDLED;
+
+	/* client managed event ring, notify pending data */
+	if (mhi_event->cl_manage) {
+		struct mhi_chan *mhi_chan = mhi_event->mhi_chan;
+		struct mhi_device *mhi_dev = mhi_chan->mhi_dev;
+
+		if (mhi_dev)
+			mhi_dev->status_cb(mhi_dev, MHI_CB_PENDING_DATA);
+	} else
+		tasklet_schedule(&mhi_event->task);
+
+	return IRQ_HANDLED;
+}
+
+/* threaded interrupt handler */
+irqreturn_t mhi_intvec_threaded_handlr(int irq_number, void *dev)
+{
+	struct mhi_controller *mhi_cntrl = dev;
+	enum MHI_STATE state = MHI_STATE_MAX;
+	enum MHI_PM_STATE pm_state = 0;
+
+	MHI_VERB("Enter\n");
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+		state = mhi_get_m_state(mhi_cntrl);
+	if (state == MHI_STATE_SYS_ERR) {
+		MHI_ERR("MHI system error detected\n");
+		pm_state = mhi_tryset_pm_state(mhi_cntrl,
+					       MHI_PM_SYS_ERR_DETECT);
+	}
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	if (pm_state == MHI_PM_SYS_ERR_DETECT)
+		schedule_work(&mhi_cntrl->syserr_worker);
+
+	MHI_VERB("Exit\n");
+
+	return IRQ_HANDLED;
+}
+
+irqreturn_t mhi_intvec_handlr(int irq_number, void *dev)
+{
+	struct mhi_controller *mhi_cntrl = dev;
+
+	/* wake up any events waiting for state change */
+	MHI_VERB("Enter\n");
+	wake_up(&mhi_cntrl->state_event);
+	MHI_VERB("Exit\n");
+
+	return IRQ_WAKE_THREAD;
+}
+
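+/*
+ * mhi_send_cmd - place a START or RESET channel command on the primary
+ * command ring and ring the command doorbell. The result arrives
+ * asynchronously as a command completion event which signals
+ * mhi_chan->completion.
+ */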
+static int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
+			struct mhi_chan *mhi_chan,
+			enum MHI_CMD cmd)
+{
+	struct mhi_tre *cmd_tre = NULL;
+	struct mhi_cmd *mhi_cmd = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+	struct mhi_ring *ring = &mhi_cmd->ring;
+	int chan = mhi_chan->chan;
+
+	MHI_VERB("Entered, MHI pm_state:%s dev_state:%s ee:%s\n",
+		 to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		 TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		 TO_MHI_EXEC_STR(mhi_cntrl->ee));
+
+	/* the MHI host currently handles only RESET and START cmds */
+	if (cmd != MHI_CMD_START_CHAN && cmd != MHI_CMD_RESET_CHAN)
+		return -EINVAL;
+
+	spin_lock_bh(&mhi_cmd->lock);
+	if (!get_nr_avail_ring_elements(mhi_cntrl, ring)) {
+		spin_unlock_bh(&mhi_cmd->lock);
+		return -ENOMEM;
+	}
+
+	/* prepare the cmd tre */
+	cmd_tre = ring->wp;
+	if (cmd == MHI_CMD_START_CHAN) {
+		cmd_tre->ptr = MHI_TRE_CMD_START_PTR;
+		cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0;
+		cmd_tre->dword[1] = MHI_TRE_CMD_START_DWORD1(chan);
+	} else {
+		cmd_tre->ptr = MHI_TRE_CMD_RESET_PTR;
+		cmd_tre->dword[0] = MHI_TRE_CMD_RESET_DWORD0;
+		cmd_tre->dword[1] = MHI_TRE_CMD_RESET_DWORD1(chan);
+	}
+
+	MHI_VERB("WP:0x%llx TRE: 0x%llx 0x%08x 0x%08x\n",
+		 (u64)mhi_to_physical(ring, cmd_tre), cmd_tre->ptr,
+		 cmd_tre->dword[0], cmd_tre->dword[1]);
+
+	/* queue to hardware */
+	mhi_add_ring_element(mhi_cntrl, ring);
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)))
+		mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+	spin_unlock_bh(&mhi_cmd->lock);
+
+	return 0;
+}
+
+static int __mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
+				 struct mhi_chan *mhi_chan)
+{
+	int ret = 0;
+
+	MHI_LOG("Entered: preparing channel:%d\n", mhi_chan->chan);
+
+	if (mhi_cntrl->ee != mhi_chan->ee) {
+		MHI_ERR("Current EE:%s Required EE:%s for chan:%s\n",
+			TO_MHI_EXEC_STR(mhi_cntrl->ee),
+			TO_MHI_EXEC_STR(mhi_chan->ee),
+			mhi_chan->name);
+		return -ENOTCONN;
+	}
+
+	mutex_lock(&mhi_chan->mutex);
+	/* client manages channel context for offload channels */
+	if (!mhi_chan->offload_ch) {
+		ret = mhi_init_chan_ctxt(mhi_cntrl, mhi_chan);
+		if (ret) {
+			MHI_ERR("Error with init chan\n");
+			goto error_init_chan;
+		}
+	}
+
+	reinit_completion(&mhi_chan->completion);
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		MHI_ERR("MHI host is not in active state\n");
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		ret = -EIO;
+		goto error_pm_state;
+	}
+
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+	mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+
+	ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_START_CHAN);
+	if (ret) {
+		MHI_ERR("Failed to send start chan cmd\n");
+		goto error_send_cmd;
+	}
+
+	ret = wait_for_completion_timeout(&mhi_chan->completion,
+				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+	if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
+		MHI_ERR("Failed to receive cmd completion for chan:%d\n",
+			mhi_chan->chan);
+		ret = -EIO;
+		goto error_send_cmd;
+	}
+
+	write_lock_irq(&mhi_chan->lock);
+	mhi_chan->ch_state = MHI_CH_STATE_ENABLED;
+	write_unlock_irq(&mhi_chan->lock);
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	/* pre-allocate buffers for the xfer ring */
+	if (mhi_chan->pre_alloc) {
+		struct mhi_device *mhi_dev = mhi_chan->mhi_dev;
+		int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
+						       &mhi_chan->tre_ring);
+
+		while (nr_el--) {
+			void *buf;
+
+			buf = kmalloc(MHI_MAX_MTU, GFP_KERNEL);
+			if (!buf) {
+				ret = -ENOMEM;
+				goto error_pre_alloc;
+			}
+
+			ret = mhi_queue_buf(mhi_dev, mhi_chan, buf, MHI_MAX_MTU,
+					    MHI_EOT);
+			if (ret) {
+				MHI_ERR("Chan:%d error queue buffer\n",
+					mhi_chan->chan);
+				kfree(buf);
+				goto error_pre_alloc;
+			}
+		}
+	}
+
+	mutex_unlock(&mhi_chan->mutex);
+
+	MHI_LOG("Chan:%d successfully moved to start state\n", mhi_chan->chan);
+
+	return 0;
+
+error_send_cmd:
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+error_pm_state:
+	if (!mhi_chan->offload_ch)
+		mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+
+error_init_chan:
+	mutex_unlock(&mhi_chan->mutex);
+
+	return ret;
+
+error_pre_alloc:
+	mutex_unlock(&mhi_chan->mutex);
+	__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+
+	return ret;
+}
+
+void mhi_reset_chan(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan)
+{
+	struct mhi_tre *dev_rp, *local_rp;
+	struct mhi_event_ctxt *er_ctxt;
+	struct mhi_event *mhi_event;
+	struct mhi_ring *ev_ring, *buf_ring, *tre_ring;
+	unsigned long flags;
+	int chan = mhi_chan->chan;
+	struct mhi_result result;
+
+	/* nothing to reset; the client doesn't queue buffers */
+	if (mhi_chan->offload_ch)
+		return;
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+	ev_ring = &mhi_event->ring;
+	er_ctxt = &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_chan->er_index];
+
+	MHI_LOG("Marking all events for chan:%d as stale\n", chan);
+
+	/* mark all pending events for this channel as STALE */
+	spin_lock_irqsave(&mhi_event->lock, flags);
+	dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+	if (!mhi_event->mhi_chan) {
+		local_rp = ev_ring->rp;
+		while (dev_rp != local_rp) {
+			if (MHI_TRE_GET_EV_TYPE(local_rp) ==
+			    MHI_PKT_TYPE_TX_EVENT &&
+			    chan == MHI_TRE_GET_EV_CHID(local_rp))
+				local_rp->dword[1] = MHI_TRE_EV_DWORD1(chan,
+						MHI_PKT_TYPE_STALE_EVENT);
+			local_rp++;
+			if (local_rp == (ev_ring->base + ev_ring->len))
+				local_rp = ev_ring->base;
+		}
+	} else {
+		/* dedicated event ring so move the ptr to end */
+		ev_ring->rp = dev_rp;
+		ev_ring->wp = ev_ring->rp - ev_ring->el_size;
+		if (ev_ring->wp < ev_ring->base)
+			ev_ring->wp = ev_ring->base + ev_ring->len -
+				ev_ring->el_size;
+		if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)))
+			mhi_ring_er_db(mhi_event);
+	}
+
+	MHI_LOG("Finished marking events as stale events\n");
+	spin_unlock_irqrestore(&mhi_event->lock, flags);
+
+	/* reset any pending buffers */
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+	result.transaction_status = -ENOTCONN;
+	result.bytes_xferd = 0;
+	while (tre_ring->rp != tre_ring->wp) {
+		struct mhi_buf_info *buf_info = buf_ring->rp;
+
+		if (mhi_chan->dir == DMA_TO_DEVICE)
+			mhi_cntrl->wake_put(mhi_cntrl, false);
+
+		dma_unmap_single(mhi_cntrl->dev, buf_info->p_addr,
+				 buf_info->len, buf_info->dir);
+		mhi_del_ring_element(mhi_cntrl, buf_ring);
+		mhi_del_ring_element(mhi_cntrl, tre_ring);
+
+		if (mhi_chan->pre_alloc) {
+			kfree(buf_info->cb_buf);
+		} else {
+			result.buf_addr = buf_info->cb_buf;
+			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+		}
+	}
+
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+	MHI_LOG("Reset complete.\n");
+}
+
+static void __mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
+				    struct mhi_chan *mhi_chan)
+{
+	int ret;
+
+	MHI_LOG("Entered: unprepare channel:%d\n", mhi_chan->chan);
+
+	/* no more processing events for this channel */
+	mutex_lock(&mhi_chan->mutex);
+	write_lock_irq(&mhi_chan->lock);
+	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
+		MHI_LOG("chan:%d is already disabled\n", mhi_chan->chan);
+		write_unlock_irq(&mhi_chan->lock);
+		mutex_unlock(&mhi_chan->mutex);
+		return;
+	}
+
+	mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+	write_unlock_irq(&mhi_chan->lock);
+
+	reinit_completion(&mhi_chan->completion);
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		goto error_invalid_state;
+	}
+
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+	mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+	ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_RESET_CHAN);
+	if (ret) {
+		MHI_ERR("Failed to send reset chan cmd\n");
+		goto error_completion;
+	}
+
+	/* even if it fails we will still reset */
+	ret = wait_for_completion_timeout(&mhi_chan->completion,
+				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+	if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS)
+		MHI_ERR("Failed to receive cmd completion, still resetting\n");
+
+error_completion:
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+error_invalid_state:
+	if (!mhi_chan->offload_ch) {
+		mhi_reset_chan(mhi_cntrl, mhi_chan);
+		mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+	}
+	MHI_LOG("chan:%d successfully reset\n", mhi_chan->chan);
+	mutex_unlock(&mhi_chan->mutex);
+}
+
+int mhi_debugfs_mhi_states_show(struct seq_file *m, void *d)
+{
+	struct mhi_controller *mhi_cntrl = m->private;
+
+	seq_printf(m,
+		   "pm_state:%s dev_state:%s EE:%s M0:%u M1:%u M2:%u M3:%u wake:%d dev_wake:%u alloc_size:%u\n",
+		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		   TO_MHI_EXEC_STR(mhi_cntrl->ee),
+		   mhi_cntrl->M0, mhi_cntrl->M1, mhi_cntrl->M2, mhi_cntrl->M3,
+		   mhi_cntrl->wake_set,
+		   atomic_read(&mhi_cntrl->dev_wake),
+		   atomic_read(&mhi_cntrl->alloc_size));
+	return 0;
+}
+
+int mhi_debugfs_mhi_event_show(struct seq_file *m, void *d)
+{
+	struct mhi_controller *mhi_cntrl = m->private;
+	struct mhi_event *mhi_event;
+	struct mhi_event_ctxt *er_ctxt;
+
+	int i;
+
+	er_ctxt = mhi_cntrl->mhi_ctxt->er_ctxt;
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
+		     mhi_event++) {
+		struct mhi_ring *ring = &mhi_event->ring;
+
+		if (mhi_event->offload_ev) {
+			seq_printf(m, "Index:%d offload event ring\n", i);
+		} else {
+			seq_printf(m,
+				   "Index:%d modc:%d modt:%d base:0x%0llx len:0x%llx",
+				   i, er_ctxt->intmodc, er_ctxt->intmodt,
+				   er_ctxt->rbase, er_ctxt->rlen);
+			seq_printf(m,
+				   " rp:0x%llx wp:0x%llx local_rp:0x%llx db:0x%llx\n",
+				   er_ctxt->rp, er_ctxt->wp,
+				   mhi_to_physical(ring, ring->rp),
+				   mhi_event->db_cfg.db_val);
+		}
+	}
+
+	return 0;
+}
+
+int mhi_debugfs_mhi_chan_show(struct seq_file *m, void *d)
+{
+	struct mhi_controller *mhi_cntrl = m->private;
+	struct mhi_chan *mhi_chan;
+	struct mhi_chan_ctxt *chan_ctxt;
+	int i;
+
+	mhi_chan = mhi_cntrl->mhi_chan;
+	chan_ctxt = mhi_cntrl->mhi_ctxt->chan_ctxt;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, chan_ctxt++, mhi_chan++) {
+		struct mhi_ring *ring = &mhi_chan->tre_ring;
+
+		if (mhi_chan->offload_ch) {
+			seq_printf(m, "%s(%u) offload channel\n",
+				   mhi_chan->name, mhi_chan->chan);
+		} else if (mhi_chan->mhi_dev) {
+			seq_printf(m,
+				   "%s(%u) state:0x%x brstmode:0x%x pllcfg:0x%x type:0x%x erindex:%u",
+				   mhi_chan->name, mhi_chan->chan,
+				   chan_ctxt->chstate, chan_ctxt->brstmode,
+				   chan_ctxt->pollcfg, chan_ctxt->chtype,
+				   chan_ctxt->erindex);
+			seq_printf(m,
+				   " base:0x%llx len:0x%llx wp:0x%llx local_rp:0x%llx local_wp:0x%llx db:0x%llx\n",
+				   chan_ctxt->rbase, chan_ctxt->rlen,
+				   chan_ctxt->wp,
+				   mhi_to_physical(ring, ring->rp),
+				   mhi_to_physical(ring, ring->wp),
+				   mhi_chan->db_cfg.db_val);
+		}
+	}
+
+	return 0;
+}
+
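+/*
+ * Illustrative client flow (a sketch only; the queue entry point and
+ * error handling shown here are assumptions, not a fixed API):
+ *
+ *	ret = mhi_prepare_for_transfer(mhi_dev);
+ *	if (ret)
+ *		return ret;
+ *	ret = mhi_queue_buf(mhi_dev, mhi_dev->ul_chan, buf, len, MHI_EOT);
+ *	...
+ *	mhi_unprepare_from_transfer(mhi_dev);
+ */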
+/* move channel to start state */
+int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
+{
+	int ret, dir;
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan;
+
+	for (dir = 0; dir < 2; dir++) {
+		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		ret = __mhi_prepare_channel(mhi_cntrl, mhi_chan);
+		if (ret) {
+			MHI_ERR("Error moving chan %s,%d to START state\n",
+				mhi_chan->name, mhi_chan->chan);
+			goto error_open_chan;
+		}
+	}
+
+	return 0;
+
+error_open_chan:
+	for (--dir; dir >= 0; dir--) {
+		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(mhi_prepare_for_transfer);
+
+void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan;
+	int dir;
+
+	for (dir = 0; dir < 2; dir++) {
+		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+	}
+}
+EXPORT_SYMBOL(mhi_unprepare_from_transfer);
+
+int mhi_get_no_free_descriptors(struct mhi_device *mhi_dev,
+				enum dma_data_direction dir)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ?
+		mhi_dev->ul_chan : mhi_dev->dl_chan;
+	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+
+	return get_nr_avail_ring_elements(mhi_cntrl, tre_ring);
+}
+EXPORT_SYMBOL(mhi_get_no_free_descriptors);
+
+struct mhi_controller *mhi_bdf_to_controller(u32 domain,
+					     u32 bus,
+					     u32 slot,
+					     u32 dev_id)
+{
+	struct mhi_controller *itr, *tmp;
+
+	list_for_each_entry_safe(itr, tmp, &mhi_bus.controller_list, node)
+		if (itr->domain == domain && itr->bus == bus &&
+		    itr->slot == slot && itr->dev_id == dev_id)
+			return itr;
+
+	return NULL;
+}
+EXPORT_SYMBOL(mhi_bdf_to_controller);
+
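+/*
+ * mhi_poll - client-driven processing of the DL channel's event ring,
+ * bounded by budget (e.g. from a netdev poll handler). Returns the
+ * number of events processed or a negative error code.
+ */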
+int mhi_poll(struct mhi_device *mhi_dev,
+	     u32 budget)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan = mhi_dev->dl_chan;
+	struct mhi_event *mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+	int ret;
+
+	spin_lock_bh(&mhi_event->lock);
+	ret = mhi_process_event_ring(mhi_cntrl, mhi_event, budget);
+	spin_unlock_bh(&mhi_event->lock);
+
+	return ret;
+}
+EXPORT_SYMBOL(mhi_poll);
diff --git a/drivers/bus/mhi/core/mhi_pm.c b/drivers/bus/mhi/core/mhi_pm.c
new file mode 100644
index 0000000..c476a20
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_pm.c
@@ -0,0 +1,1177 @@
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include <linux/mhi.h>
+#include "mhi_internal.h"
+
+/*
+ * Not all MHI state transitions are synchronous. Linkdown, SSR, and
+ * shutdown can happen asynchronously at any time; mhi_tryset_pm_state()
+ * moves to a new state only if the transition is allowed.
+ *
+ * Priority increases as we go down: while in any L0 state, a state from
+ * L1, L2, or L3 can be entered. The notable exception is the DISABLE
+ * state, from which we can only transition to POR. Likewise, while in an
+ * L2 state we cannot jump back to L1 or L0 states.
+ * Valid transitions:
+ * L0: DISABLE <--> POR
+ *     POR <--> POR
+ *     POR -> M0 -> M1 -> M1_M2 -> M2 --> M0
+ *     POR -> FW_DL_ERR
+ *     FW_DL_ERR <--> FW_DL_ERR
+ *     M0 -> FW_DL_ERR
+ *     M1_M2 -> M0 (Device can trigger it)
+ *     M0 -> M3_ENTER -> M3 -> M3_EXIT --> M0
+ *     M1 -> M3_ENTER --> M3
+ * L1: SYS_ERR_DETECT -> SYS_ERR_PROCESS --> POR
+ * L2: SHUTDOWN_PROCESS -> DISABLE
+ * L3: LD_ERR_FATAL_DETECT <--> LD_ERR_FATAL_DETECT
+ *     LD_ERR_FATAL_DETECT -> SHUTDOWN_PROCESS
+ */
+static struct mhi_pm_transitions const mhi_state_transitions[] = {
+	/* L0 States */
+	{
+		MHI_PM_DISABLE,
+		MHI_PM_POR
+	},
+	{
+		MHI_PM_POR,
+		MHI_PM_POR | MHI_PM_DISABLE | MHI_PM_M0 |
+		MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR
+	},
+	{
+		MHI_PM_M0,
+		MHI_PM_M1 | MHI_PM_M3_ENTER | MHI_PM_SYS_ERR_DETECT |
+		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT |
+		MHI_PM_FW_DL_ERR
+	},
+	{
+		MHI_PM_M1,
+		MHI_PM_M1_M2_TRANSITION | MHI_PM_M3_ENTER |
+		MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_M1_M2_TRANSITION,
+		MHI_PM_M2 | MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT |
+		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_M2,
+		MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_M3_ENTER,
+		MHI_PM_M3 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_M3,
+		MHI_PM_M3_EXIT | MHI_PM_SYS_ERR_DETECT |
+		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_M3_EXIT,
+		MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_FW_DL_ERR,
+		MHI_PM_FW_DL_ERR | MHI_PM_SYS_ERR_DETECT |
+		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	/* L1 States */
+	{
+		MHI_PM_SYS_ERR_DETECT,
+		MHI_PM_SYS_ERR_PROCESS | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_SYS_ERR_PROCESS,
+		MHI_PM_POR | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	/* L2 States */
+	{
+		MHI_PM_SHUTDOWN_PROCESS,
+		MHI_PM_DISABLE | MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	/* L3 States */
+	{
+		MHI_PM_LD_ERR_FATAL_DETECT,
+		MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_SHUTDOWN_PROCESS
+	},
+};
+
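+/*
+ * mhi_tryset_pm_state - attempt a pm state transition against the table
+ * above. Returns the new state if the transition is allowed, otherwise
+ * the unchanged current state; callers hold pm_lock as a writer.
+ */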
+enum MHI_PM_STATE __must_check mhi_tryset_pm_state(
+				struct mhi_controller *mhi_cntrl,
+				enum MHI_PM_STATE state)
+{
+	unsigned long cur_state = mhi_cntrl->pm_state;
+	int index = find_last_bit(&cur_state, 32);
+
+	if (unlikely(index >= ARRAY_SIZE(mhi_state_transitions))) {
+		MHI_CRITICAL("cur_state:%s is not a valid pm_state\n",
+			     to_mhi_pm_state_str(cur_state));
+		return cur_state;
+	}
+
+	if (unlikely(mhi_state_transitions[index].from_state != cur_state)) {
+		MHI_ERR("index:%u cur_state:%s != actual_state: %s\n",
+			index, to_mhi_pm_state_str(cur_state),
+			to_mhi_pm_state_str
+			(mhi_state_transitions[index].from_state));
+		return cur_state;
+	}
+
+	if (unlikely(!(mhi_state_transitions[index].to_states & state))) {
+		MHI_LOG(
+			"Not allowing pm state transition from:%s to:%s state\n",
+			to_mhi_pm_state_str(cur_state),
+			to_mhi_pm_state_str(state));
+		return cur_state;
+	}
+
+	MHI_VERB("Transition to pm state from:%s to:%s\n",
+		 to_mhi_pm_state_str(cur_state), to_mhi_pm_state_str(state));
+
+	mhi_cntrl->pm_state = state;
+	return mhi_cntrl->pm_state;
+}
+
+void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum MHI_STATE state)
+{
+	if (state == MHI_STATE_RESET) {
+		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
+				    MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 1);
+	} else {
+		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
+			MHICTRL_MHISTATE_MASK, MHICTRL_MHISTATE_SHIFT, state);
+	}
+}
+
+/* set device wake */
+void mhi_assert_dev_wake(struct mhi_controller *mhi_cntrl, bool force)
+{
+	unsigned long flags;
+
+	/* if forced, assert the wake doorbell regardless of the current count */
+	if (unlikely(force)) {
+		spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+		atomic_inc(&mhi_cntrl->dev_wake);
+		if (MHI_WAKE_DB_ACCESS_VALID(mhi_cntrl->pm_state) &&
+		    !mhi_cntrl->wake_set) {
+			mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
+			mhi_cntrl->wake_set = true;
+		}
+		spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+	} else {
+		/* if resources requested already, then increment and exit */
+		if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, 1, 0)))
+			return;
+
+		spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+		if ((atomic_inc_return(&mhi_cntrl->dev_wake) == 1) &&
+		    MHI_WAKE_DB_ACCESS_VALID(mhi_cntrl->pm_state) &&
+		    !mhi_cntrl->wake_set) {
+			mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
+			mhi_cntrl->wake_set = true;
+		}
+		spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+	}
+}
+
+/* clear device wake */
+void mhi_deassert_dev_wake(struct mhi_controller *mhi_cntrl, bool override)
+{
+	unsigned long flags;
+
+	MHI_ASSERT(atomic_read(&mhi_cntrl->dev_wake) == 0, "dev_wake == 0");
+
+	/* resources not dropping to 0, decrement and exit */
+	if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, -1, 1)))
+		return;
+
+	spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+	if ((atomic_dec_return(&mhi_cntrl->dev_wake) == 0) &&
+	    MHI_WAKE_DB_ACCESS_VALID(mhi_cntrl->pm_state) && !override &&
+	    mhi_cntrl->wake_set) {
+		mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 0);
+		mhi_cntrl->wake_set = false;
+	}
+	spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+}
+
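+/*
+ * mhi_ready_state_transition - wait for the device to clear RESET and
+ * assert READY, move the pm state to POR, program the MMIO registers,
+ * prime all sw event rings, and request the M0 state.
+ */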
+int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
+{
+	void __iomem *base = mhi_cntrl->regs;
+	u32 reset = 1, ready = 0;
+	struct mhi_event *mhi_event;
+	enum MHI_PM_STATE cur_state;
+	int ret, i;
+
+	MHI_LOG("Waiting to enter READY state\n");
+
+	/* wait for RESET to be cleared and READY bit to be set */
+	wait_event_timeout(mhi_cntrl->state_event,
+			   MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
+			   mhi_read_reg_field(mhi_cntrl, base, MHICTRL,
+					      MHICTRL_RESET_MASK,
+					      MHICTRL_RESET_SHIFT, &reset) ||
+			   mhi_read_reg_field(mhi_cntrl, base, MHISTATUS,
+					      MHISTATUS_READY_MASK,
+					      MHISTATUS_READY_SHIFT, &ready) ||
+			   (!reset && ready),
+			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	/* device entered an error state */
+	if (MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state))
+		return -EIO;
+
+	/* device did not transition to ready state */
+	if (reset || !ready)
+		return -ETIMEDOUT;
+
+	MHI_LOG("Device in READY State\n");
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_POR);
+	mhi_cntrl->dev_state = MHI_STATE_READY;
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	if (cur_state != MHI_PM_POR) {
+		MHI_ERR("Error moving to state %s from %s\n",
+			to_mhi_pm_state_str(MHI_PM_POR),
+			to_mhi_pm_state_str(cur_state));
+		return -EIO;
+	}
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+		goto error_mmio;
+
+	ret = mhi_init_mmio(mhi_cntrl);
+	if (ret) {
+		MHI_ERR("Error programming mmio registers\n");
+		goto error_mmio;
+	}
+
+	/* add elements to all sw event rings */
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		struct mhi_ring *ring = &mhi_event->ring;
+
+		if (mhi_event->offload_ev || mhi_event->hw_ring)
+			continue;
+
+		ring->wp = ring->base + ring->len - ring->el_size;
+		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+		/* ring update must be visible to all cores */
+		smp_wmb();
+
+		/* ring the db for event rings */
+		spin_lock_irq(&mhi_event->lock);
+		mhi_ring_er_db(mhi_event);
+		spin_unlock_irq(&mhi_event->lock);
+	}
+
+	/* set device into M0 state */
+	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return 0;
+
+error_mmio:
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return -EIO;
+}
+
+int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl)
+{
+	enum MHI_PM_STATE cur_state;
+	struct mhi_chan *mhi_chan;
+	int i;
+
+	MHI_LOG("Entered With State:%s PM_STATE:%s\n",
+		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		to_mhi_pm_state_str(mhi_cntrl->pm_state));
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	mhi_cntrl->dev_state = MHI_STATE_M0;
+	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M0);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	if (unlikely(cur_state != MHI_PM_M0)) {
+		MHI_ERR("Failed to transition to state %s from %s\n",
+			to_mhi_pm_state_str(MHI_PM_M0),
+			to_mhi_pm_state_str(cur_state));
+		return -EIO;
+	}
+	mhi_cntrl->M0++;
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_get(mhi_cntrl, true);
+
+	/* ring all event rings and CMD ring only if we're in AMSS */
+	if (mhi_cntrl->ee == MHI_EE_AMSS) {
+		struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+		struct mhi_cmd *mhi_cmd =
+			&mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+
+		for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+			if (mhi_event->offload_ev)
+				continue;
+
+			spin_lock_irq(&mhi_event->lock);
+			mhi_ring_er_db(mhi_event);
+			spin_unlock_irq(&mhi_event->lock);
+		}
+
+		/* only ring primary cmd ring */
+		spin_lock_irq(&mhi_cmd->lock);
+		if (mhi_cmd->ring.rp != mhi_cmd->ring.wp)
+			mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
+		spin_unlock_irq(&mhi_cmd->lock);
+	}
+
+	/* ring channel db registers */
+	mhi_chan = mhi_cntrl->mhi_chan;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+		struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+
+		write_lock_irq(&mhi_chan->lock);
+		if (mhi_chan->db_cfg.reset_req)
+			mhi_chan->db_cfg.db_mode = true;
+
+		/* only ring DB if ring is not empty */
+		if (tre_ring->base && tre_ring->wp != tre_ring->rp)
+			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+		write_unlock_irq(&mhi_chan->lock);
+	}
+
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+	wake_up(&mhi_cntrl->state_event);
+	MHI_VERB("Exited\n");
+
+	return 0;
+}
+
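+/*
+ * mhi_pm_m1_worker - handle the M1 -> M2 transition. DB registers cannot
+ * be accessed while the transition is in flight, so debounce for
+ * MHI_M2_DEBOUNCE_TMR_US before committing to M2. If wake votes arrived
+ * in the meantime, exit M2 immediately; otherwise notify the controller
+ * that the link is idle.
+ */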
+void mhi_pm_m1_worker(struct work_struct *work)
+{
+	enum MHI_PM_STATE cur_state;
+	struct mhi_controller *mhi_cntrl;
+
+	mhi_cntrl = container_of(work, struct mhi_controller, m1_worker);
+
+	MHI_LOG("M1 state transition from dev_state:%s pm_state:%s\n",
+		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		to_mhi_pm_state_str(mhi_cntrl->pm_state));
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+	write_lock_irq(&mhi_cntrl->pm_lock);
+
+	/* bail if the state moved on, e.g. we entered M3 or did an M3->M0 exit */
+	if (mhi_cntrl->pm_state != MHI_PM_M1)
+		goto invalid_pm_state;
+
+	MHI_LOG("Transitioning to M2 Transition\n");
+	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M1_M2_TRANSITION);
+	if (unlikely(cur_state != MHI_PM_M1_M2_TRANSITION)) {
+		MHI_ERR("Failed to transition to state %s from %s\n",
+			to_mhi_pm_state_str(MHI_PM_M1_M2_TRANSITION),
+			to_mhi_pm_state_str(cur_state));
+		goto invalid_pm_state;
+	}
+
+	mhi_cntrl->dev_state = MHI_STATE_M2;
+	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M2);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	mhi_cntrl->M2++;
+
+	/* DB registers cannot be accessed during the M2 transition, so sleep */
+	usleep_range(MHI_M2_DEBOUNCE_TMR_US, MHI_M2_DEBOUNCE_TMR_US + 50);
+	write_lock_irq(&mhi_cntrl->pm_lock);
+
+	/* an M0 event may have arrived during the debounce time */
+	if (mhi_cntrl->pm_state == MHI_PM_M1_M2_TRANSITION) {
+		MHI_LOG("Entered M2 State\n");
+		cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M2);
+		if (unlikely(cur_state != MHI_PM_M2)) {
+			MHI_ERR("Failed to transition to state %s from %s\n",
+				to_mhi_pm_state_str(MHI_PM_M2),
+				to_mhi_pm_state_str(cur_state));
+			goto invalid_pm_state;
+		}
+	}
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	/* transfer pending, exit M2 */
+	if (unlikely(atomic_read(&mhi_cntrl->dev_wake))) {
+		MHI_VERB("Exiting M2 Immediately, count:%d\n",
+			atomic_read(&mhi_cntrl->dev_wake));
+		read_lock_bh(&mhi_cntrl->pm_lock);
+		mhi_cntrl->wake_get(mhi_cntrl, true);
+		mhi_cntrl->wake_put(mhi_cntrl, false);
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+	} else
+		mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data,
+				     MHI_CB_IDLE);
+
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+	return;
+
+invalid_pm_state:
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+}
+
+void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl)
+{
+	enum MHI_PM_STATE state;
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	mhi_cntrl->dev_state = mhi_get_m_state(mhi_cntrl);
+	if (mhi_cntrl->dev_state == MHI_STATE_M1) {
+		state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M1);
+
+		/* schedule M1->M2 transition */
+		if (state == MHI_PM_M1) {
+			schedule_work(&mhi_cntrl->m1_worker);
+			mhi_cntrl->M1++;
+		}
+	}
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+}
+
+int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl)
+{
+	enum MHI_PM_STATE state;
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	mhi_cntrl->dev_state = MHI_STATE_M3;
+	state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	if (state != MHI_PM_M3) {
+		MHI_ERR("Failed to transition to state %s from %s\n",
+			to_mhi_pm_state_str(MHI_PM_M3),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+	wake_up(&mhi_cntrl->state_event);
+	mhi_cntrl->M3++;
+
+	MHI_LOG("Entered mhi_state:%s pm_state:%s\n",
+		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		to_mhi_pm_state_str(mhi_cntrl->pm_state));
+	return 0;
+}
+
+static int mhi_pm_amss_transition(struct mhi_controller *mhi_cntrl)
+{
+	int i;
+	struct mhi_event *mhi_event;
+
+	MHI_LOG("Processing AMSS Transition\n");
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	mhi_cntrl->ee = MHI_EE_AMSS;
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	wake_up(&mhi_cntrl->state_event);
+
+	/* add elements to all HW event rings */
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		return -EIO;
+	}
+
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		struct mhi_ring *ring = &mhi_event->ring;
+
+		if (mhi_event->offload_ev || !mhi_event->hw_ring)
+			continue;
+
+		ring->wp = ring->base + ring->len - ring->el_size;
+		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+		/* all ring updates must be visible to all cores immediately */
+		smp_wmb();
+
+		spin_lock_irq(&mhi_event->lock);
+		if (MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state))
+			mhi_ring_er_db(mhi_event);
+		spin_unlock_irq(&mhi_event->lock);
+
+	}
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	MHI_LOG("Adding new devices\n");
+
+	/* add supported devices */
+	mhi_create_devices(mhi_cntrl);
+
+	MHI_LOG("Exited\n");
+
+	return 0;
+}
+
+/* handles both sys_err and shutdown transitions */
+static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
+				      enum MHI_PM_STATE transition_state)
+{
+	enum MHI_PM_STATE cur_state, prev_state;
+	struct mhi_event *mhi_event;
+	struct mhi_cmd_ctxt *cmd_ctxt;
+	struct mhi_cmd *mhi_cmd;
+	struct mhi_event_ctxt *er_ctxt;
+	int ret, i;
+
+	MHI_LOG("Enter with from pm_state:%s MHI_STATE:%s to pm_state:%s\n",
+		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		to_mhi_pm_state_str(transition_state));
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	prev_state = mhi_cntrl->pm_state;
+	cur_state = mhi_tryset_pm_state(mhi_cntrl, transition_state);
+	if (cur_state == transition_state) {
+		mhi_cntrl->ee = MHI_EE_DISABLE_TRANSITION;
+		mhi_cntrl->dev_state = MHI_STATE_RESET;
+	}
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	/* not handling sys_err; we could be in the middle of a shutdown */
+	if (cur_state != transition_state) {
+		MHI_LOG("Failed to transition to state:0x%x from:0x%x\n",
+			transition_state, cur_state);
+		mutex_unlock(&mhi_cntrl->pm_mutex);
+		return;
+	}
+
+	/* trigger MHI RESET so the device will not access host DDR */
+	if (MHI_REG_ACCESS_VALID(prev_state)) {
+		u32 in_reset = -1;
+		unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms);
+
+		MHI_LOG("Trigger device into MHI_RESET\n");
+		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
+
+		/* wait for reset to be cleared */
+		ret = wait_event_timeout(mhi_cntrl->state_event,
+					 mhi_read_reg_field(mhi_cntrl,
+						mhi_cntrl->regs, MHICTRL,
+						MHICTRL_RESET_MASK,
+						MHICTRL_RESET_SHIFT, &in_reset)
+					 || !in_reset, timeout);
+		if ((!ret || in_reset) && cur_state == MHI_PM_SYS_ERR_PROCESS) {
+			MHI_CRITICAL("Device failed to exit RESET state\n");
+			mutex_unlock(&mhi_cntrl->pm_mutex);
+			return;
+		}
+
+		/*
+		 * the device clears INTVEC as part of RESET processing;
+		 * re-program it
+		 */
+		mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+	}
+
+	MHI_LOG("Waiting for all pending event ring processing to complete\n");
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+		tasklet_kill(&mhi_event->task);
+	}
+
+	MHI_LOG("Reset all active channels and remove mhi devices\n");
+	device_for_each_child(mhi_cntrl->dev, NULL, mhi_destroy_device);
+
+	MHI_LOG("Finish resetting channels\n");
+
+	/* release the lock and wait for all pending threads to complete */
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+	MHI_LOG("Waiting for all pending threads to complete\n");
+	wake_up(&mhi_cntrl->state_event);
+	flush_work(&mhi_cntrl->m1_worker);
+	flush_work(&mhi_cntrl->st_worker);
+	flush_work(&mhi_cntrl->fw_worker);
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+
+	MHI_ASSERT(atomic_read(&mhi_cntrl->dev_wake), "dev_wake != 0");
+
+	/* reset the ev rings and cmd rings */
+	MHI_LOG("Resetting EV CTXT and CMD CTXT\n");
+	mhi_cmd = mhi_cntrl->mhi_cmd;
+	cmd_ctxt = mhi_cntrl->mhi_ctxt->cmd_ctxt;
+	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
+		struct mhi_ring *ring = &mhi_cmd->ring;
+
+		ring->rp = ring->base;
+		ring->wp = ring->base;
+		cmd_ctxt->rp = cmd_ctxt->rbase;
+		cmd_ctxt->wp = cmd_ctxt->rbase;
+	}
+
+	mhi_event = mhi_cntrl->mhi_event;
+	er_ctxt = mhi_cntrl->mhi_ctxt->er_ctxt;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
+		     mhi_event++) {
+		struct mhi_ring *ring = &mhi_event->ring;
+
+		/* do not touch offload er */
+		if (mhi_event->offload_ev)
+			continue;
+
+		ring->rp = ring->base;
+		ring->wp = ring->base;
+		er_ctxt->rp = er_ctxt->rbase;
+		er_ctxt->wp = er_ctxt->rbase;
+	}
+
+	if (cur_state == MHI_PM_SYS_ERR_PROCESS) {
+		mhi_ready_state_transition(mhi_cntrl);
+	} else {
+		/* move to disable state */
+		write_lock_irq(&mhi_cntrl->pm_lock);
+		cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_DISABLE);
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		if (unlikely(cur_state != MHI_PM_DISABLE))
+			MHI_ERR("Error moving from pm state:%s to state:%s\n",
+				to_mhi_pm_state_str(cur_state),
+				to_mhi_pm_state_str(MHI_PM_DISABLE));
+	}
+
+	MHI_LOG("Exit with pm_state:%s mhi_state:%s\n",
+		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+}
+
+int mhi_debugfs_trigger_reset(void *data, u64 val)
+{
+	struct mhi_controller *mhi_cntrl = data;
+	enum MHI_PM_STATE cur_state;
+	int ret;
+
+	MHI_LOG("Trigger MHI Reset\n");
+
+	/* exit lpm first */
+	mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+	mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		MHI_ERR("Did not enter M0 state, cur_state:%s pm_state:%s\n",
+			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_SYS_ERR_DETECT);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	if (cur_state == MHI_PM_SYS_ERR_DETECT)
+		schedule_work(&mhi_cntrl->syserr_worker);
+
+	return 0;
+}
+
+/* queue a new work item and schedule the state transition worker */
+int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
+			       enum MHI_ST_TRANSITION state)
+{
+	struct state_transition *item = kmalloc(sizeof(*item), GFP_ATOMIC);
+	unsigned long flags;
+
+	if (!item)
+		return -ENOMEM;
+
+	item->state = state;
+	spin_lock_irqsave(&mhi_cntrl->transition_lock, flags);
+	list_add_tail(&item->node, &mhi_cntrl->transition_list);
+	spin_unlock_irqrestore(&mhi_cntrl->transition_lock, flags);
+
+	schedule_work(&mhi_cntrl->st_worker);
+
+	return 0;
+}
+
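+/*
+ * mhi_pm_sys_err_worker - recover from a detected system error by
+ * running the disable transition with MHI_PM_SYS_ERR_PROCESS, which
+ * resets all rings and then re-runs the READY state transition.
+ */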
+void mhi_pm_sys_err_worker(struct work_struct *work)
+{
+	struct mhi_controller *mhi_cntrl = container_of(work,
+							struct mhi_controller,
+							syserr_worker);
+
+	MHI_LOG("Enter with pm_state:%s MHI_STATE:%s\n",
+		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+
+	mhi_pm_disable_transition(mhi_cntrl, MHI_PM_SYS_ERR_PROCESS);
+}
+
+void mhi_pm_st_worker(struct work_struct *work)
+{
+	struct state_transition *itr, *tmp;
+	LIST_HEAD(head);
+	struct mhi_controller *mhi_cntrl = container_of(work,
+							struct mhi_controller,
+							st_worker);
+
+	spin_lock_irq(&mhi_cntrl->transition_lock);
+	list_splice_tail_init(&mhi_cntrl->transition_list, &head);
+	spin_unlock_irq(&mhi_cntrl->transition_lock);
+
+	list_for_each_entry_safe(itr, tmp, &head, node) {
+		list_del(&itr->node);
+		MHI_LOG("Transition to state:%s\n",
+			TO_MHI_STATE_TRANS_STR(itr->state));
+
+		switch (itr->state) {
+		case MHI_ST_TRANSITION_PBL:
+			write_lock_irq(&mhi_cntrl->pm_lock);
+			if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+				mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
+			write_unlock_irq(&mhi_cntrl->pm_lock);
+			if (MHI_IN_PBL(mhi_cntrl->ee))
+				wake_up(&mhi_cntrl->state_event);
+			break;
+		case MHI_ST_TRANSITION_SBL:
+			write_lock_irq(&mhi_cntrl->pm_lock);
+			mhi_cntrl->ee = MHI_EE_SBL;
+			write_unlock_irq(&mhi_cntrl->pm_lock);
+			mhi_create_devices(mhi_cntrl);
+			break;
+		case MHI_ST_TRANSITION_AMSS:
+			mhi_pm_amss_transition(mhi_cntrl);
+			break;
+		default:
+			break;
+		}
+		kfree(itr);
+	}
+}
+
+int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+	u32 bhi_offset;
+	enum MHI_EE current_ee;
+	enum MHI_ST_TRANSITION next_state;
+
+	MHI_LOG("Requested to power on\n");
+
+	if (mhi_cntrl->msi_allocated < mhi_cntrl->total_ev_rings)
+		return -EINVAL;
+
+	/* set to default wake if not set */
+	if (!mhi_cntrl->wake_get || !mhi_cntrl->wake_put) {
+		mhi_cntrl->wake_get = mhi_assert_dev_wake;
+		mhi_cntrl->wake_put = mhi_deassert_dev_wake;
+	}
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+	mhi_cntrl->pm_state = MHI_PM_DISABLE;
+
+	if (!mhi_cntrl->pre_init) {
+		/* setup device context */
+		ret = mhi_init_dev_ctxt(mhi_cntrl);
+		if (ret) {
+			MHI_ERR("Error setting dev_context\n");
+			goto error_dev_ctxt;
+		}
+
+		ret = mhi_init_irq_setup(mhi_cntrl);
+		if (ret) {
+			MHI_ERR("Error setting up irq\n");
+			goto error_setup_irq;
+		}
+	}
+
+	/* setup bhi offset & intvec */
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIOFF, &bhi_offset);
+	if (ret) {
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		MHI_ERR("Error getting bhi offset\n");
+		goto error_bhi_offset;
+	}
+
+	mhi_cntrl->bhi = mhi_cntrl->regs + bhi_offset;
+	mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+	mhi_cntrl->pm_state = MHI_PM_POR;
+	mhi_cntrl->ee = MHI_EE_MAX;
+	current_ee = mhi_get_exec_env(mhi_cntrl);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	/* confirm device is in valid exec env */
+	if (!MHI_IN_PBL(current_ee) && current_ee != MHI_EE_AMSS) {
+		MHI_ERR("Not a valid ee for power on\n");
+		ret = -EIO;
+		goto error_bhi_offset;
+	}
+
+	/* transition to next state */
+	next_state = MHI_IN_PBL(current_ee) ?
+		MHI_ST_TRANSITION_PBL : MHI_ST_TRANSITION_READY;
+
+	if (next_state == MHI_ST_TRANSITION_PBL)
+		schedule_work(&mhi_cntrl->fw_worker);
+
+	mhi_queue_state_transition(mhi_cntrl, next_state);
+
+	mhi_init_debugfs(mhi_cntrl);
+
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	MHI_LOG("Power on setup success\n");
+
+	return 0;
+
+error_bhi_offset:
+	if (!mhi_cntrl->pre_init)
+		mhi_deinit_free_irq(mhi_cntrl);
+
+error_setup_irq:
+	if (!mhi_cntrl->pre_init)
+		mhi_deinit_dev_ctxt(mhi_cntrl);
+
+error_dev_ctxt:
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	return ret;
+}
+EXPORT_SYMBOL(mhi_async_power_up);
+
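+/*
+ * mhi_power_down - shut the device down. A non-graceful shutdown first
+ * forces the pm state to LD_ERR_FATAL_DETECT so the shutdown path skips
+ * device register access. Resources set up by mhi_async_power_up are
+ * released unless the controller pre-initialized them.
+ */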
+void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
+{
+	enum MHI_PM_STATE cur_state;
+
+	/* if it's not graceful shutdown, force MHI to a linkdown state */
+	if (!graceful) {
+		mutex_lock(&mhi_cntrl->pm_mutex);
+		write_lock_irq(&mhi_cntrl->pm_lock);
+		cur_state = mhi_tryset_pm_state(mhi_cntrl,
+						MHI_PM_LD_ERR_FATAL_DETECT);
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		mutex_unlock(&mhi_cntrl->pm_mutex);
+		if (cur_state != MHI_PM_LD_ERR_FATAL_DETECT)
+			MHI_ERR("Failed to move to state:%s from:%s\n",
+				to_mhi_pm_state_str(MHI_PM_LD_ERR_FATAL_DETECT),
+				to_mhi_pm_state_str(mhi_cntrl->pm_state));
+	}
+	mhi_pm_disable_transition(mhi_cntrl, MHI_PM_SHUTDOWN_PROCESS);
+
+	mhi_deinit_debugfs(mhi_cntrl);
+
+	if (!mhi_cntrl->pre_init) {
+		/* free all allocated resources */
+		if (mhi_cntrl->fbc_image) {
+			mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+			mhi_cntrl->fbc_image = NULL;
+		}
+		mhi_deinit_free_irq(mhi_cntrl);
+		mhi_deinit_dev_ctxt(mhi_cntrl);
+	}
+}
+EXPORT_SYMBOL(mhi_power_down);
+
+int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
+{
+	int ret = mhi_async_power_up(mhi_cntrl);
+
+	if (ret)
+		return ret;
+
+	wait_event_timeout(mhi_cntrl->state_event,
+			   mhi_cntrl->ee == MHI_EE_AMSS ||
+			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	return (mhi_cntrl->ee == MHI_EE_AMSS) ? 0 : -EIO;
+}
+EXPORT_SYMBOL(mhi_sync_power_up);
+
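+/*
+ * mhi_pm_suspend - move the device to M3. Aborts with -EBUSY if wake
+ * votes are held, waits for the device to settle in M0 or M1, requests
+ * M3, and notifies lpm-capable clients once the device confirms M3.
+ */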
+int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+	enum MHI_PM_STATE new_state;
+	struct mhi_chan *itr, *tmp;
+
+	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
+		return -EINVAL;
+
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+		return -EIO;
+
+	/* quick check for pending data; if busy, abort the suspend */
+	if (atomic_read(&mhi_cntrl->dev_wake)) {
+		MHI_VERB("Busy, aborting M3\n");
+		return -EBUSY;
+	}
+
+	/* bring MHI out of the M2 state */
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
+				 mhi_cntrl->dev_state == MHI_STATE_M1 ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		MHI_ERR(
+			"Did not enter M0||M1 state, cur_state:%s pm_state:%s\n",
+			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+
+	if (atomic_read(&mhi_cntrl->dev_wake)) {
+		MHI_VERB("Busy, aborting M3\n");
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		return -EBUSY;
+	}
+
+	/* any resume after this point goes through the runtime PM framework */
+	MHI_LOG("Allowing M3 transition\n");
+	new_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_ENTER);
+	if (new_state != MHI_PM_M3_ENTER) {
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		MHI_ERR("Error setting to pm_state:%s from pm_state:%s\n",
+			to_mhi_pm_state_str(MHI_PM_M3_ENTER),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+
+	/* set dev to M3 and wait for completion */
+	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	MHI_LOG("Wait for M3 completion\n");
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->dev_state == MHI_STATE_M3 ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		MHI_ERR("Did not enter M3 state, cur_state:%s pm_state:%s\n",
+			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+
+	/* notify clients that we are entering lpm */
+	list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
+		mutex_lock(&itr->mutex);
+		if (itr->mhi_dev)
+			mhi_notify(itr->mhi_dev, MHI_CB_LPM_ENTER);
+		mutex_unlock(&itr->mutex);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(mhi_pm_suspend);
+
+int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
+{
+	enum MHI_PM_STATE cur_state;
+	int ret;
+	struct mhi_chan *itr, *tmp;
+
+	MHI_LOG("Entered with pm_state:%s dev_state:%s\n",
+		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+
+	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
+		return 0;
+
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+		return -EIO;
+
+	MHI_ASSERT(mhi_cntrl->pm_state != MHI_PM_M3, "mhi_pm_state != M3");
+
+	/* notify any clients we exit lpm */
+	list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
+		mutex_lock(&itr->mutex);
+		if (itr->mhi_dev)
+			mhi_notify(itr->mhi_dev, MHI_CB_LPM_EXIT);
+		mutex_unlock(&itr->mutex);
+	}
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_EXIT);
+	if (cur_state != MHI_PM_M3_EXIT) {
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		MHI_ERR("Error setting to pm_state:%s from pm_state:%s\n",
+			to_mhi_pm_state_str(MHI_PM_M3_EXIT),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+
+	/* set dev to M0 and wait for completion */
+	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		MHI_ERR("Did not enter M0 state, cur_state:%s pm_state:%s\n",
+			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+		mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+		mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+	}
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		MHI_ERR("Did not enter M0 state, cur_state:%s pm_state:%s\n",
+			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		read_lock_bh(&mhi_cntrl->pm_lock);
+		mhi_cntrl->wake_put(mhi_cntrl, false);
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+void mhi_device_get(struct mhi_device *mhi_dev)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+
+	atomic_inc(&mhi_dev->dev_wake);
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+}
+EXPORT_SYMBOL(mhi_device_get);
+
+int mhi_device_get_sync(struct mhi_device *mhi_dev)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	int ret;
+
+	ret = __mhi_device_get_sync(mhi_cntrl);
+	if (!ret)
+		atomic_inc(&mhi_dev->dev_wake);
+
+	return ret;
+}
+EXPORT_SYMBOL(mhi_device_get_sync);
+
+void mhi_device_put(struct mhi_device *mhi_dev)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+
+	atomic_dec(&mhi_dev->dev_wake);
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+}
+EXPORT_SYMBOL(mhi_device_put);
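
For illustration, a client would typically bracket a latency-sensitive
transaction with these wake votes. A minimal sketch follows; the function
name, buffer ownership, and the MHI_EOT flag choice are illustrative
assumptions:

	static int sample_transaction(struct mhi_device *mhi_dev,
				      void *buf, size_t len)
	{
		/* vote against low power modes and wait for M0 */
		int ret = mhi_device_get_sync(mhi_dev);

		if (ret)
			return ret;

		/* device is awake; queue an UL buffer */
		ret = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, buf, len,
					 MHI_EOT);

		/* drop the vote so low power modes are allowed again */
		mhi_device_put(mhi_dev);

		return ret;
	}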
+
+int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+
+	MHI_LOG("Enter with pm_state:%s ee:%s\n",
+		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		TO_MHI_EXEC_STR(mhi_cntrl->ee));
+
+	/* before rddm mode, we need to enter M0 state */
+	ret = __mhi_device_get_sync(mhi_cntrl);
+	if (ret)
+		return ret;
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+		goto no_reg_access;
+
+	MHI_LOG("Triggering SYS_ERR to force rddm state\n");
+
+	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	/* wait for rddm event */
+	MHI_LOG("Waiting for device to enter RDDM state\n");
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->ee == MHI_EE_RDDM,
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+	ret = (mhi_cntrl->ee == MHI_EE_RDDM) ? 0 : -EIO;
+
+	MHI_LOG("Exiting with pm_state:%s ee:%s ret:%d\n",
+		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		TO_MHI_EXEC_STR(mhi_cntrl->ee), ret);
+
+	return ret;
+
+no_reg_access:
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	return -EIO;
+}
+EXPORT_SYMBOL(mhi_force_rddm_mode);
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
new file mode 100644
index 0000000..6eb780c
--- /dev/null
+++ b/include/linux/mhi.h
@@ -0,0 +1,694 @@
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#ifndef _MHI_H_
+#define _MHI_H_
+
+struct mhi_chan;
+struct mhi_event;
+struct mhi_ctxt;
+struct mhi_cmd;
+struct image_info;
+struct bhi_vec_entry;
+
+/**
+ * enum MHI_CB - MHI callback
+ * @MHI_CB_IDLE: MHI entered idle state
+ * @MHI_CB_PENDING_DATA: New data available for client to process
+ * @MHI_CB_LPM_ENTER: MHI host entered low power mode
+ * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
+ * @MHI_CB_EE_RDDM: MHI device entered RDDM execution environment
+ */
+enum MHI_CB {
+	MHI_CB_IDLE,
+	MHI_CB_PENDING_DATA,
+	MHI_CB_LPM_ENTER,
+	MHI_CB_LPM_EXIT,
+	MHI_CB_EE_RDDM,
+};
+
+/**
+ * enum MHI_DEBUG_LEVEL - debug message verbosity levels
+ */
+enum MHI_DEBUG_LEVEL {
+	MHI_MSG_LVL_VERBOSE,
+	MHI_MSG_LVL_INFO,
+	MHI_MSG_LVL_ERROR,
+	MHI_MSG_LVL_CRITICAL,
+	MHI_MSG_LVL_MASK_ALL,
+};
+
+/**
+ * enum MHI_FLAGS - Transfer flags
+ * @MHI_EOB: End of buffer for bulk transfer
+ * @MHI_EOT: End of transfer
+ * @MHI_CHAIN: Linked transfer
+ */
+enum MHI_FLAGS {
+	MHI_EOB,
+	MHI_EOT,
+	MHI_CHAIN,
+};
+
+/**
+ * struct image_info - firmware and rddm table
+ * @mhi_buf: Buffer list that contains the device firmware or rddm table
+ * @bhi_vec: BHI vector table
+ * @entries: # of entries in table
+ */
+struct image_info {
+	struct mhi_buf *mhi_buf;
+	struct bhi_vec_entry *bhi_vec;
+	u32 entries;
+};
+
+/**
+ * struct mhi_controller - Master controller structure for external modem
+ * @dev: Device associated with this controller
+ * @of_node: DT that has MHI configuration information
+ * @regs: Points to base of MHI MMIO register space
+ * @bhi: Points to base of MHI BHI register space
+ * @wake_db: MHI WAKE doorbell register address
+ * @dev_id: PCIe device id of the external device
+ * @domain: PCIe domain the device is connected to
+ * @bus: PCIe bus the device is assigned to
+ * @slot: PCIe slot for the modem
+ * @iova_start: IOMMU starting address for data
+ * @iova_stop: IOMMU stop address for data
+ * @fw_image: Firmware image name for normal booting
+ * @edl_image: Firmware image name for emergency download mode
+ * @fbc_download: MHI host needs to do complete image transfer
+ * @rddm_size: RAM dump size that host should allocate for debugging purposes
+ * @sbl_size: SBL image size
+ * @seg_len: BHIe vector size
+ * @fbc_image: Points to firmware image buffer
+ * @rddm_image: Points to RAM dump buffer
+ * @max_chan: Maximum number of channels the controller supports
+ * @mhi_chan: Points to channel configuration table
+ * @lpm_chans: List of channels that require LPM notifications
+ * @total_ev_rings: Total # of event rings allocated
+ * @hw_ev_rings: Number of hardware event rings
+ * @sw_ev_rings: Number of software event rings
+ * @msi_required: Number of MSI vectors required to operate
+ * @msi_allocated: Number of MSI vectors allocated by bus master
+ * @irq: base irq # to request
+ * @mhi_event: MHI event ring configurations table
+ * @mhi_cmd: MHI command ring configurations table
+ * @mhi_ctxt: MHI device context, shared memory between host and device
+ * @timeout_ms: Timeout in ms for state transitions
+ * @pm_state: Power management state
+ * @ee: MHI device execution environment
+ * @dev_state: MHI device state
+ * @status_cb: CB function to notify various power states to bus master
+ * @link_status: Query link status in case of abnormal value read from device
+ * @runtime_get: Async runtime resume function
+ * @runtime_put: Release votes
+ * @priv_data: Points to bus master's private data
+ */
+struct mhi_controller {
+	struct list_head node;
+
+	/* device node for iommu ops */
+	struct device *dev;
+	struct device_node *of_node;
+
+	/* mmio base */
+	void __iomem *regs;
+	void __iomem *bhi;
+	void __iomem *wake_db;
+
+	/* device topology */
+	u32 dev_id;
+	u32 domain;
+	u32 bus;
+	u32 slot;
+
+	/* addressing window */
+	dma_addr_t iova_start;
+	dma_addr_t iova_stop;
+
+	/* fw images */
+	const char *fw_image;
+	const char *edl_image;
+
+	/* mhi host manages downloading entire fbc images */
+	bool fbc_download;
+	size_t rddm_size;
+	size_t sbl_size;
+	size_t seg_len;
+	u32 session_id;
+	u32 sequence_id;
+	struct image_info *fbc_image;
+	struct image_info *rddm_image;
+
+	/* physical channel config data */
+	u32 max_chan;
+	struct mhi_chan *mhi_chan;
+	struct list_head lpm_chans; /* these chan require lpm notification */
+
+	/* physical event config data */
+	u32 total_ev_rings;
+	u32 hw_ev_rings;
+	u32 sw_ev_rings;
+	u32 msi_required;
+	u32 msi_allocated;
+	int *irq; /* interrupt table */
+	struct mhi_event *mhi_event;
+
+	/* cmd rings */
+	struct mhi_cmd *mhi_cmd;
+
+	/* mhi context (shared with device) */
+	struct mhi_ctxt *mhi_ctxt;
+
+	u32 timeout_ms;
+
+	/* caller should grab pm_mutex for suspend/resume operations */
+	struct mutex pm_mutex;
+	bool pre_init;
+	rwlock_t pm_lock;
+	u32 pm_state;
+	u32 ee;
+	u32 dev_state;
+	bool wake_set;
+	atomic_t dev_wake;
+	atomic_t alloc_size;
+	struct list_head transition_list;
+	spinlock_t transition_lock;
+	spinlock_t wlock;
+
+	/* debug counters */
+	u32 M0, M1, M2, M3;
+
+	/* worker for different state transitions */
+	struct work_struct st_worker;
+	struct work_struct fw_worker;
+	struct work_struct m1_worker;
+	struct work_struct syserr_worker;
+	wait_queue_head_t state_event;
+
+	/* shadow functions */
+	void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
+			  enum MHI_CB reason);
+	int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
+	void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
+	void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
+	int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
+	void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
+
+	/* channel to control DTR messaging */
+	struct mhi_device *dtr_dev;
+
+	/* kernel log level */
+	enum MHI_DEBUG_LEVEL klog_lvl;
+
+	/* private log level controller driver to set */
+	enum MHI_DEBUG_LEVEL log_lvl;
+
+	/* controller specific data */
+	void *priv_data;
+	void *log_buf;
+	struct dentry *dentry;
+	struct dentry *parent;
+};
+
+/**
+ * struct mhi_device - mhi device structure bound to MHI channel(s)
+ * @dev: Device associated with the channels
+ * @mtu: Maximum # of bytes the controller supports
+ * @ul_chan_id: MHI channel id for UL transfer
+ * @dl_chan_id: MHI channel id for DL transfer
+ * @priv: Driver private data
+ */
+struct mhi_device {
+	struct device dev;
+	u32 dev_id;
+	u32 domain;
+	u32 bus;
+	u32 slot;
+	size_t mtu;
+	int ul_chan_id;
+	int dl_chan_id;
+	int ul_event_id;
+	int dl_event_id;
+	const struct mhi_device_id *id;
+	const char *chan_name;
+	struct mhi_controller *mhi_cntrl;
+	struct mhi_chan *ul_chan;
+	struct mhi_chan *dl_chan;
+	atomic_t dev_wake;
+	void *priv_data;
+	int (*ul_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		       void *buf, size_t len, enum MHI_FLAGS flags);
+	int (*dl_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		       void *buf, size_t len, enum MHI_FLAGS flags);
+	void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB reason);
+};
+
+/**
+ * struct mhi_result - Completed buffer information
+ * @buf_addr: Address of data buffer
+ * @dir: Channel direction
+ * @bytes_xferd: # of bytes transferred
+ * @transaction_status: Status of the last transfer
+ */
+struct mhi_result {
+	void *buf_addr;
+	enum dma_data_direction dir;
+	size_t bytes_xferd;
+	int transaction_status;
+};
+
+/**
+ * struct mhi_buf - Describes the buffer
+ * @buf: cpu address for the buffer
+ * @phys_addr: physical address of the buffer
+ * @dma_addr: iommu address for the buffer
+ * @len: # of bytes
+ * @name: Buffer label, for offload channel configurations name must be:
+ * ECA - Event context array data
+ * CCA - Channel context array data
+ */
+struct mhi_buf {
+	void *buf;
+	phys_addr_t phys_addr;
+	dma_addr_t dma_addr;
+	size_t len;
+	const char *name; /* ECA, CCA */
+};
+
+/**
+ * struct mhi_driver - mhi driver information
+ * @id_table: NULL terminated channel ID names
+ * @ul_xfer_cb: UL data transfer callback
+ * @dl_xfer_cb: DL data transfer callback
+ * @status_cb: Asynchronous status callback
+ */
+struct mhi_driver {
+	const struct mhi_device_id *id_table;
+	int (*probe)(struct mhi_device *mhi_dev,
+		     const struct mhi_device_id *id);
+	void (*remove)(struct mhi_device *mhi_dev);
+	void (*ul_xfer_cb)(struct mhi_device *mhi_dev,
+			   struct mhi_result *result);
+	void (*dl_xfer_cb)(struct mhi_device *mhi_dev,
+			   struct mhi_result *result);
+	void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB mhi_cb);
+	struct device_driver driver;
+};
+
+#define to_mhi_driver(drv) container_of(drv, struct mhi_driver, driver)
+#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
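
To show how these pieces fit together, here is a minimal client driver
sketch; the "SAMPLE" channel and all sample_* names are hypothetical:

	static const struct mhi_device_id sample_id_table[] = {
		{ .chan = "SAMPLE" },
		{ },
	};

	static int sample_probe(struct mhi_device *mhi_dev,
				const struct mhi_device_id *id)
	{
		/* move UL/DL channels from RESET to START */
		return mhi_prepare_for_transfer(mhi_dev);
	}

	static void sample_remove(struct mhi_device *mhi_dev)
	{
		/* move channels back to RESET */
		mhi_unprepare_from_transfer(mhi_dev);
	}

	static void sample_dl_xfer_cb(struct mhi_device *mhi_dev,
				      struct mhi_result *result)
	{
		/* consume result->buf_addr / result->bytes_xferd here */
	}

	static struct mhi_driver sample_driver = {
		.id_table = sample_id_table,
		.probe = sample_probe,
		.remove = sample_remove,
		.dl_xfer_cb = sample_dl_xfer_cb,
		.driver = {
			.name = "mhi_sample",
		},
	};

The driver is then registered from module init with
mhi_driver_register(&sample_driver) and torn down with
mhi_driver_unregister().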
+
+static inline void mhi_device_set_devdata(struct mhi_device *mhi_dev,
+					  void *priv)
+{
+	mhi_dev->priv_data = priv;
+}
+
+static inline void *mhi_device_get_devdata(struct mhi_device *mhi_dev)
+{
+	return mhi_dev->priv_data;
+}
+
+/**
+ * mhi_queue_transfer - Queue a buffer to hardware
+ * All transfers are asynchronous
+ * @mhi_dev: Device associated with the channels
+ * @dir: Data direction
+ * @buf: Data buffer (skb for hardware channels)
+ * @len: Size in bytes
+ * @mflags: Transfer flags (see enum MHI_FLAGS)
+ */
+static inline int mhi_queue_transfer(struct mhi_device *mhi_dev,
+				     enum dma_data_direction dir,
+				     void *buf,
+				     size_t len,
+				     enum MHI_FLAGS mflags)
+{
+	if (dir == DMA_TO_DEVICE)
+		return mhi_dev->ul_xfer(mhi_dev, mhi_dev->ul_chan, buf, len,
+					mflags);
+	else
+		return mhi_dev->dl_xfer(mhi_dev, mhi_dev->dl_chan, buf, len,
+					mflags);
+}
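
As an example, a client replenishing its receive path can queue one DL
buffer per free descriptor. This sketch assumes a software channel using
plain kmalloc buffers (hardware channels expect an skb, per the
documentation above); completions arrive via the client's dl_xfer_cb:

	static int sample_queue_rx(struct mhi_device *mhi_dev, size_t len)
	{
		int nr_desc = mhi_get_no_free_descriptors(mhi_dev,
							  DMA_FROM_DEVICE);
		int i, ret = 0;

		for (i = 0; i < nr_desc; i++) {
			void *buf = kmalloc(len, GFP_KERNEL);

			if (!buf)
				return -ENOMEM;

			ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE,
						 buf, len, MHI_EOT);
			if (ret) {
				kfree(buf);
				break;
			}
		}

		return ret;
	}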
+
+static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
+{
+	return mhi_cntrl->priv_data;
+}
+
+static inline void mhi_free_controller(struct mhi_controller *mhi_cntrl)
+{
+	kfree(mhi_cntrl);
+}
+
+#if defined(CONFIG_MHI_BUS)
+
+/**
+ * mhi_driver_register - Register driver with MHI framework
+ * @mhi_drv: mhi_driver structure
+ */
+int mhi_driver_register(struct mhi_driver *mhi_drv);
+
+/**
+ * mhi_driver_unregister - Unregister a driver for mhi_devices
+ * @mhi_drv: mhi_driver structure
+ */
+void mhi_driver_unregister(struct mhi_driver *mhi_drv);
+
+/**
+ * mhi_device_configure - configure ECA or CCA context
+ * For offload channels that the client manages, call this
+ * function to configure the channel context or event context
+ * array associated with the channel
+ * @mhi_dev: Device associated with the channels
+ * @dir: Direction of the channel
+ * @mhi_buf: Configuration data
+ * @elements: # of configuration elements
+ */
+int mhi_device_configure(struct mhi_device *mhi_dev,
+			 enum dma_data_direction dir,
+			 struct mhi_buf *mhi_buf,
+			 int elements);
+
+/**
+ * mhi_device_get - disable all low power modes
+ * Only disables lpm; does not immediately exit low power mode
+ * if the controller is already in a low power mode
+ * @mhi_dev: Device associated with the channels
+ */
+void mhi_device_get(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_device_get_sync - disable all low power modes
+ * Synchronously disable all low power modes and exit low power
+ * mode if the controller is already in a low power state
+ * @mhi_dev: Device associated with the channels
+ */
+int mhi_device_get_sync(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_device_put - re-enable low power modes
+ * @mhi_dev: Device associated with the channels
+ */
+void mhi_device_put(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_prepare_for_transfer - setup channel for data transfer
+ * Moves both UL and DL channel from RESET to START state
+ * @mhi_dev: Device associated with the channels
+ */
+int mhi_prepare_for_transfer(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_unprepare_from_transfer - unprepare the channels
+ * Moves both UL and DL channels to RESET state
+ * @mhi_dev: Device associated with the channels
+ */
+void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_get_no_free_descriptors - Get transfer ring length
+ * Get # of TDs available to queue buffers
+ * @mhi_dev: Device associated with the channels
+ * @dir: Direction of the channel
+ */
+int mhi_get_no_free_descriptors(struct mhi_device *mhi_dev,
+				enum dma_data_direction dir);
+
+/**
+ * mhi_poll - poll for any available data to consume
+ * This is only applicable for DL direction
+ * @mhi_dev: Device associated with the channels
+ * @budget: Max # of descriptors to service before returning
+ */
+int mhi_poll(struct mhi_device *mhi_dev, u32 budget);
+
+/**
+ * mhi_ioctl - user space IOCTL support for MHI channels
+ * Native support for setting TIOCM
+ * @mhi_dev: Device associated with the channels
+ * @cmd: IOCTL cmd
+ * @arg: Optional parameter, ioctl cmd specific
+ */
+long mhi_ioctl(struct mhi_device *mhi_dev, unsigned int cmd, unsigned long arg);
+
+/**
+ * mhi_alloc_controller - Allocate mhi_controller structure
+ * Allocate controller structure and additional data for controller
+ * private data. You may get the private data pointer by calling
+ * mhi_controller_get_devdata
+ * @size: # of additional bytes to allocate
+ */
+struct mhi_controller *mhi_alloc_controller(size_t size);
+
+/**
+ * of_register_mhi_controller - Register MHI controller
+ * Registers the MHI controller with the MHI bus framework; requires DT support
+ * @mhi_cntrl: MHI controller to register
+ */
+int of_register_mhi_controller(struct mhi_controller *mhi_cntrl);
+
+void mhi_unregister_mhi_controller(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_bdf_to_controller - Look up a registered controller
+ * Search for controller based on device identification
+ * @domain: RC domain of the device
+ * @bus: Bus device connected to
+ * @slot: Slot device assigned to
+ * @dev_id: Device Identification
+ */
+struct mhi_controller *mhi_bdf_to_controller(u32 domain, u32 bus, u32 slot,
+					     u32 dev_id);
+
+/**
+ * mhi_prepare_for_power_up - Do pre-initialization before power up
+ * This is optional; call it before power up if the controller does not
+ * want the bus framework to automatically free allocated memory during
+ * the shutdown process.
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_async_power_up - Starts MHI power up sequence
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_async_power_up(struct mhi_controller *mhi_cntrl);
+int mhi_sync_power_up(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_power_down - Start MHI power down sequence
+ * @mhi_cntrl: MHI controller
+ * @graceful: link is still accessible, so do a graceful shutdown; otherwise
+ * the host shuts down without putting the device into RESET state
+ */
+void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful);
+
+/**
+ * mhi_unprepare_after_power_down - free any memory allocated for power up
+ * @mhi_cntrl: MHI controller
+ */
+void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_pm_suspend - Move MHI into a suspended state
+ * Transition to MHI state M3 state from M0||M1||M2 state
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_pm_suspend(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_pm_resume - Resume MHI from suspended state
+ * Transition to MHI state M0 state from M3 state
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_pm_resume(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_download_rddm_img - Download ramdump image from device for
+ * debugging purpose.
+ * @mhi_cntrl: MHI controller
+ * @in_panic: Set if trying to capture the image while in kernel panic
+ */
+int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic);
+
+/**
+ * mhi_force_rddm_mode - Force external device into rddm mode
+ * to collect device ramdump. This is useful if the host driver asserts
+ * and we need to see the device state as well.
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl);
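
Taken together, a bus master would typically drive these entry points in
the order below; a sketch assuming the controller has already been
registered via of_register_mhi_controller():

	static int sample_controller_power_up(struct mhi_controller *mhi_cntrl)
	{
		int ret;

		/* optional: keep bus-managed allocations across power cycles */
		ret = mhi_prepare_for_power_up(mhi_cntrl);
		if (ret)
			return ret;

		/* blocks until the device reaches AMSS or timeout_ms expires */
		ret = mhi_sync_power_up(mhi_cntrl);
		if (ret)
			mhi_unprepare_after_power_down(mhi_cntrl);

		return ret;
	}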
+
+#else
+
+static inline int mhi_driver_register(struct mhi_driver *mhi_drv)
+{
+	return -EINVAL;
+}
+
+static inline void mhi_driver_unregister(struct mhi_driver *mhi_drv)
+{
+}
+
+static inline int mhi_device_configure(struct mhi_device *mhi_dev,
+				       enum dma_data_direction dir,
+				       struct mhi_buf *mhi_buf,
+				       int elements)
+{
+	return -EINVAL;
+}
+
+static inline void mhi_device_get(struct mhi_device *mhi_dev)
+{
+}
+
+static inline int mhi_device_get_sync(struct mhi_device *mhi_dev)
+{
+	return -EINVAL;
+}
+
+static inline void mhi_device_put(struct mhi_device *mhi_dev)
+{
+}
+
+static inline int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
+{
+	return -EINVAL;
+}
+
+static inline void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
+{
+
+}
+
+static inline int mhi_get_no_free_descriptors(struct mhi_device *mhi_dev,
+					      enum dma_data_direction dir)
+{
+	return -EINVAL;
+}
+
+static inline int mhi_poll(struct mhi_device *mhi_dev, u32 budget)
+{
+	return -EINVAL;
+}
+
+static inline long mhi_ioctl(struct mhi_device *mhi_dev,
+			     unsigned int cmd,
+			     unsigned long arg)
+{
+	return -EINVAL;
+}
+
+static inline struct mhi_controller *mhi_alloc_controller(size_t size)
+{
+	return NULL;
+}
+
+static inline int of_register_mhi_controller(struct mhi_controller *mhi_cntrl)
+{
+	return -EINVAL;
+}
+
+static inline void mhi_unregister_mhi_controller(
+					struct mhi_controller *mhi_cntrl)
+{
+}
+
+static inline struct mhi_controller *mhi_bdf_to_controller(u32 domain,
+							   u32 bus,
+							   u32 slot,
+							   u32 dev_id)
+{
+	return NULL;
+}
+
+static inline int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
+{
+	return -EINVAL;
+}
+
+static inline int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+{
+	return -EINVAL;
+}
+
+static inline int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
+{
+	return -EINVAL;
+}
+
+static inline void mhi_power_down(struct mhi_controller *mhi_cntrl,
+				  bool graceful)
+{
+}
+
+static inline void mhi_unprepare_after_power_down(
+					struct mhi_controller *mhi_cntrl)
+{
+}
+
+static inline int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
+{
+	return -EINVAL;
+}
+
+static inline int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
+{
+	return -EINVAL;
+}
+
+static inline int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl,
+					bool in_panic)
+{
+	return -EINVAL;
+}
+
+static inline int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
+{
+	return -EINVAL;
+}
+
+#endif
+
+#ifdef CONFIG_MHI_DEBUG
+
+#define MHI_VERB(fmt, ...) do { \
+		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
+			pr_debug("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
+} while (0)
+
+#else
+
+#define MHI_VERB(fmt, ...)
+
+#endif
+
+#define MHI_LOG(fmt, ...) do {	\
+		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
+			pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
+} while (0)
+
+#define MHI_ERR(fmt, ...) do {	\
+		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_ERROR) \
+			pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
+} while (0)
+
+#define MHI_CRITICAL(fmt, ...) do { \
+		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_CRITICAL) \
+			pr_alert("[C][%s] " fmt, __func__, ##__VA_ARGS__); \
+} while (0)
+
+
+#endif /* _MHI_H_ */
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index 7d361be..1e11e30 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -734,4 +734,15 @@ struct tb_service_id {
 #define TBSVC_MATCH_PROTOCOL_VERSION	0x0004
 #define TBSVC_MATCH_PROTOCOL_REVISION	0x0008
 
+
+/**
+ * struct mhi_device_id - MHI device identification
+ * @chan: MHI channel name
+ * @driver_data: driver data
+ */
+struct mhi_device_id {
+	const char *chan;
+	kernel_ulong_t driver_data;
+};
+
 #endif /* LINUX_MOD_DEVICETABLE_H */
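
For illustration, driver_data can carry per-channel settings that a
client reads back in probe via mhi_dev->id; the channel names below are
placeholders:

	static const struct mhi_device_id sample_match_table[] = {
		{ .chan = "LOOPBACK", .driver_data = 0 },
		{ .chan = "SAMPLE",   .driver_data = 1 },
		{ },	/* table is NULL terminated */
	};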
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v1 2/4] mhi_bus: controller: MHI support for QCOM modems
  2018-04-27  2:23 MHI initial design review Sujeev Dias
  2018-04-27  2:23 ` [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface Sujeev Dias
@ 2018-04-27  2:23 ` Sujeev Dias
  2018-04-27 11:32   ` Arnd Bergmann
                     ` (2 more replies)
  2018-04-27  2:23 ` [PATCH v1 3/4] mhi_bus: dev: netdev: add network interface driver Sujeev Dias
                   ` (2 subsequent siblings)
  4 siblings, 3 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-04-27  2:23 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: Sujeev Dias, linux-kernel, linux-arm-msm, Tony Truong

QCOM PCIe based modems use MHI as the communication protocol.
The MHI control driver is the bus master for such modems. As the bus
master driver, it oversees power management operations
such as suspend, resume, powering on and off the device.

Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
---
 Documentation/devicetree/bindings/bus/mhi_qcom.txt | 110 ++++
 drivers/bus/Kconfig                                |   1 +
 drivers/bus/mhi/Makefile                           |   2 +-
 drivers/bus/mhi/controllers/Kconfig                |  10 +
 drivers/bus/mhi/controllers/Makefile               |   1 +
 drivers/bus/mhi/controllers/mhi_qcom.c             | 686 +++++++++++++++++++++
 drivers/bus/mhi/controllers/mhi_qcom.h             |  85 +++
 7 files changed, 894 insertions(+), 1 deletion(-)
 create mode 100644 Documentation/devicetree/bindings/bus/mhi_qcom.txt
 create mode 100644 drivers/bus/mhi/controllers/Kconfig
 create mode 100644 drivers/bus/mhi/controllers/Makefile
 create mode 100644 drivers/bus/mhi/controllers/mhi_qcom.c
 create mode 100644 drivers/bus/mhi/controllers/mhi_qcom.h

diff --git a/Documentation/devicetree/bindings/bus/mhi_qcom.txt b/Documentation/devicetree/bindings/bus/mhi_qcom.txt
new file mode 100644
index 0000000..c0f8d86
--- /dev/null
+++ b/Documentation/devicetree/bindings/bus/mhi_qcom.txt
@@ -0,0 +1,110 @@
+Qualcomm Technologies Inc MHI Bus controller
+
+MHI control driver enables clients to communicate with the external modem
+using the MHI protocol.
+
+==============
+Node Structure
+==============
+
+Main node properties:
+
+- compatible
+  Usage: required
+  Value type: <string>
+  Definition: "qcom,mhi"
+
+- qcom,pci-dev-id
+  Usage: optional
+  Value type: <u32>
+  Definition: PCIe device id of external modem to bind. If not set, any
+	device is compatible with this node.
+
+- qcom,pci-domain
+  Usage: required
+  Value type: <u32>
+  Definition: PCIe root complex the external modem is connected to
+
+- qcom,pci-bus
+  Usage: required
+  Value type: <u32>
+  Definition: PCIe bus the external modem is connected to
+
+- qcom,pci-slot
+  Usage: required
+  Value type: <u32>
+  Definition: PCIe slot as assigned by pci framework to external modem
+
+- qcom,smmu-cfg
+  Usage: required
+  Value type: <u32>
+  Definition: Required SMMU configuration bitmask for PCIe bus.
+	BIT mask:
+	BIT(0) : Attach address mapping to endpoint device
+	BIT(1) : Set attribute S1_BYPASS
+	BIT(2) : Set attribute FAST
+	BIT(3) : Set attribute ATOMIC
+	BIT(4) : Set attribute FORCE_COHERENT
+
+- qcom,addr-win
+  Usage: required if SMMU S1 translation is enabled
+  Value type: Array of <u64>
+  Definition: Pair of values describing iova start and stop address
+
+- qcom,msm-bus,name
+  Usage: required
+  Value type: <string>
+  Definition: string representing the bus scale client name to register
+
+- qcom,msm-bus,num-cases
+  Usage: required
+  Value type: <u32>
+  Definition: Must be set to two; MHI supports two scales
+
+- qcom,msm-bus,num-paths
+  Usage: required
+  Value type: <u32>
+  Definition: Total number of master-slave pairs the MHI host will vote for. Must be set
+	to one.
+
+- qcom,msm-bus,vectors-KBps
+  Usage: required
+  Value type: Array of <u32>
+  Definition: Array of tuples which define the bus bandwidth requirements.
+	Each tuple is of length 4, values are master-id, slave-id,
+	arbitrated bandwidth in KBps, and instantaneous bandwidth in
+	KBps.
+
+- esoc-names
+  Usage: optional
+  Value type: <string>
+  Definition: if the external modem is managed by the esoc framework, set this string to "mdm"
+
+- esoc-0
+  Usage: required if device is managed by esoc framework
+  Value type: phandle
+  Definition: An esoc phandle pointing to the external modem
+
+- MHI bus settings
+  Usage: required
+  Values: as defined by mhi.txt
+  Definition: Per definition of devicetree/bindings/bus/mhi.txt, define device
+	specific MHI configuration parameters.
+
+========
+Example:
+========
+qcom,mhi {
+	compatible = "qcom,mhi";
+	qcom,pci-domain = <0>;
+	qcom,pci-bus = <1>;
+	qcom,pci-slot = <0>;
+	qcom,smmu-cfg = <0x3d>;
+	qcom,addr-win = <0x0 0x20000000 0x0 0x3fffffff>;
+	qcom,msm-bus,name = "mhi";
+	qcom,msm-bus,num-cases = <2>;
+	qcom,msm-bus,num-paths = <1>;
+	qcom,msm-bus,vectors-KBps = <45 512 0 0>,
+				    <45 512 1200000000 650000000>;
+	<mhi bus configurations>
+};
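
For reference, the qcom,smmu-cfg value in this example decodes against
the bitmask above as follows (BIT(5) is not defined by this binding; the
core debugfs help text refers to it as GEOMETRY):

	0x3d = BIT(0) | BIT(2) | BIT(3) | BIT(4) | BIT(5)
	     = ATTACH | FAST | ATOMIC | FORCE_COHERENT | BIT(5)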
diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
index e15d56d..fb28002 100644
--- a/drivers/bus/Kconfig
+++ b/drivers/bus/Kconfig
@@ -189,5 +189,6 @@ config MHI_DEBUG
 	   will be logged.
 
 source "drivers/bus/fsl-mc/Kconfig"
+source "drivers/bus/mhi/controllers/Kconfig"
 
 endmenu
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
index 9f8f3ac..c6a2a91 100644
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -4,5 +4,5 @@
 
 # core layer
 obj-y += core/
-#obj-y += controllers/
+obj-y += controllers/
 #obj-y += devices/
diff --git a/drivers/bus/mhi/controllers/Kconfig b/drivers/bus/mhi/controllers/Kconfig
new file mode 100644
index 0000000..e8a90b6
--- /dev/null
+++ b/drivers/bus/mhi/controllers/Kconfig
@@ -0,0 +1,10 @@
+menu "MHI controllers"
+
+config MHI_QCOM
+       tristate "MHI QCOM"
+       depends on MHI_BUS
+       help
+	  If you say yes to this option, MHI bus support for QCOM modem chipsets
+	  will be enabled.
+
+endmenu
diff --git a/drivers/bus/mhi/controllers/Makefile b/drivers/bus/mhi/controllers/Makefile
new file mode 100644
index 0000000..292f696
--- /dev/null
+++ b/drivers/bus/mhi/controllers/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_MHI_QCOM) += mhi_qcom.o
diff --git a/drivers/bus/mhi/controllers/mhi_qcom.c b/drivers/bus/mhi/controllers/mhi_qcom.c
new file mode 100644
index 0000000..642981a
--- /dev/null
+++ b/drivers/bus/mhi/controllers/mhi_qcom.c
@@ -0,0 +1,686 @@
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/memblock.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/pm_runtime.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/mhi.h>
+#include "mhi_qcom.h"
+
+static struct pci_device_id mhi_pcie_device_id[] = {
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0300)},
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0301)},
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0302)},
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0303)},
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0304)},
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0305)},
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, MHI_PCIE_DEBUG_ID)},
+	{0},
+};
+
+static struct pci_driver mhi_pcie_driver;
+
+void mhi_deinit_pci_dev(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+	struct pci_dev *pci_dev = mhi_dev->pci_dev;
+
+	pci_free_irq_vectors(pci_dev);
+	kfree(mhi_cntrl->irq);
+	mhi_cntrl->irq = NULL;
+	iounmap(mhi_cntrl->regs);
+	mhi_cntrl->regs = NULL;
+	pci_clear_master(pci_dev);
+	pci_release_region(pci_dev, mhi_dev->resn);
+	pci_disable_device(pci_dev);
+}
+
+static int mhi_init_pci_dev(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+	struct pci_dev *pci_dev = mhi_dev->pci_dev;
+	int ret;
+	resource_size_t start, len;
+	int i;
+
+	mhi_dev->resn = MHI_PCI_BAR_NUM;
+	ret = pci_assign_resource(pci_dev, mhi_dev->resn);
+	if (ret) {
+		MHI_ERR("Error assign pci resources, ret:%d\n", ret);
+		return ret;
+	}
+
+	ret = pci_enable_device(pci_dev);
+	if (ret) {
+		MHI_ERR("Error enabling device, ret:%d\n", ret);
+		goto error_enable_device;
+	}
+
+	ret = pci_request_region(pci_dev, mhi_dev->resn, "mhi");
+	if (ret) {
+		MHI_ERR("Error pci_request_region, ret:%d\n", ret);
+		goto error_request_region;
+	}
+
+	pci_set_master(pci_dev);
+
+	start = pci_resource_start(pci_dev, mhi_dev->resn);
+	len = pci_resource_len(pci_dev, mhi_dev->resn);
+	mhi_cntrl->regs = ioremap_nocache(start, len);
+	if (!mhi_cntrl->regs) {
+		MHI_ERR("Error ioremap region\n");
+		ret = -EIO;
+		goto error_ioremap;
+	}
+
+	ret = pci_alloc_irq_vectors(pci_dev, mhi_cntrl->msi_required,
+				    mhi_cntrl->msi_required, PCI_IRQ_MSI);
+	if (ret < 0 || ret < mhi_cntrl->msi_required) {
+		MHI_ERR("Failed to enable MSI, ret:%d\n", ret);
+		goto error_req_msi;
+	}
+
+	mhi_cntrl->msi_allocated = ret;
+	mhi_cntrl->irq = kmalloc_array(mhi_cntrl->msi_allocated,
+				       sizeof(*mhi_cntrl->irq), GFP_KERNEL);
+	if (!mhi_cntrl->irq) {
+		ret = -ENOMEM;
+		goto error_alloc_msi_vec;
+	}
+
+	for (i = 0; i < mhi_cntrl->msi_allocated; i++) {
+		mhi_cntrl->irq[i] = pci_irq_vector(pci_dev, i);
+		if (mhi_cntrl->irq[i] < 0) {
+			ret = mhi_cntrl->irq[i];
+			goto error_get_irq_vec;
+		}
+	}
+
+	dev_set_drvdata(&pci_dev->dev, mhi_cntrl);
+
+	/* configure runtime pm */
+	pm_runtime_set_autosuspend_delay(&pci_dev->dev, MHI_RPM_SUSPEND_TMR_MS);
+	pm_runtime_use_autosuspend(&pci_dev->dev);
+	pm_suspend_ignore_children(&pci_dev->dev, true);
+
+	/*
+	 * pci framework will increment usage count (twice) before
+	 * calling local device driver probe function.
+	 * 1st pci.c pci_pm_init() calls pm_runtime_forbid
+	 * 2nd pci-driver.c local_pci_probe calls pm_runtime_get_sync
+	 * The framework expects the pci device driver to call
+	 * pm_runtime_put_noidle to decrement the usage count after a
+	 * successful probe and pm_runtime_allow to enable runtime
+	 * suspend.
+	 */
+	pm_runtime_mark_last_busy(&pci_dev->dev);
+	pm_runtime_put_noidle(&pci_dev->dev);
+
+	return 0;
+
+error_get_irq_vec:
+	kfree(mhi_cntrl->irq);
+	mhi_cntrl->irq = NULL;
+
+error_alloc_msi_vec:
+	pci_free_irq_vectors(pci_dev);
+
+error_req_msi:
+	iounmap(mhi_cntrl->regs);
+
+error_ioremap:
+	pci_clear_master(pci_dev);
+	pci_release_region(pci_dev, mhi_dev->resn);
+
+error_request_region:
+	pci_disable_device(pci_dev);
+
+error_enable_device:
+	return ret;
+}
+
+static int mhi_runtime_suspend(struct device *dev)
+{
+	int ret = 0;
+	struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
+
+	MHI_LOG("Enter\n");
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+
+	ret = mhi_pm_suspend(mhi_cntrl);
+	if (ret) {
+		MHI_LOG("Abort due to ret:%d\n", ret);
+		goto exit_runtime_suspend;
+	}
+
+	ret = mhi_arch_link_off(mhi_cntrl, true);
+	if (ret)
+		MHI_ERR("Failed to Turn off link ret:%d\n", ret);
+
+exit_runtime_suspend:
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+	MHI_LOG("Exited with ret:%d\n", ret);
+
+	return ret;
+}
+
+static int mhi_runtime_idle(struct device *dev)
+{
+	struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
+
+	MHI_LOG("Entered returning -EBUSY\n");
+
+	/*
+	 * When the dev.power usage_count drops to 0, the RPM framework
+	 * calls the rpm_idle callback to check whether the device is
+	 * ready to suspend.  If the callback returns 0, or is not
+	 * defined, the framework assumes the driver is ready and
+	 * schedules a runtime suspend.
+	 * In MHI power management, the host shall go to runtime suspend
+	 * only after entering MHI state M2, even if the usage count is
+	 * 0.  Return -EBUSY to disable automatic suspend.
+	 */
+	return -EBUSY;
+}
+
+static int mhi_runtime_resume(struct device *dev)
+{
+	int ret = 0;
+	struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
+	struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+
+	MHI_LOG("Enter\n");
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+
+	if (!mhi_dev->powered_on) {
+		MHI_LOG("Not fully powered, return success\n");
+		mutex_unlock(&mhi_cntrl->pm_mutex);
+		return 0;
+	}
+
+	/* turn on link */
+	ret = mhi_arch_link_on(mhi_cntrl);
+	if (ret)
+		goto rpm_resume_exit;
+
+	/* enter M0 state */
+	ret = mhi_pm_resume(mhi_cntrl);
+
+rpm_resume_exit:
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+	MHI_LOG("Exited with :%d\n", ret);
+
+	return ret;
+}
+
+static int mhi_system_resume(struct device *dev)
+{
+	int ret = 0;
+	struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
+
+	ret = mhi_runtime_resume(dev);
+	if (ret) {
+		MHI_ERR("Failed to resume link\n");
+	} else {
+		pm_runtime_set_active(dev);
+		pm_runtime_enable(dev);
+	}
+
+	return ret;
+}
+
+int mhi_system_suspend(struct device *dev)
+{
+	struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
+
+	MHI_LOG("Entered\n");
+
+	/* if rpm status still active then force suspend */
+	if (!pm_runtime_status_suspended(dev))
+		return mhi_runtime_suspend(dev);
+
+	pm_runtime_set_suspended(dev);
+	pm_runtime_disable(dev);
+
+	MHI_LOG("Exit\n");
+	return 0;
+}
+
+/* checks if link is down */
+static int mhi_link_status(struct mhi_controller *mhi_cntrl, void *priv)
+{
+	struct mhi_dev *mhi_dev = priv;
+	u16 dev_id;
+	int ret;
+
+	/* try reading device id; if it doesn't match, the link is down */
+	ret = pci_read_config_word(mhi_dev->pci_dev, PCI_DEVICE_ID, &dev_id);
+
+	return (ret || dev_id != mhi_cntrl->dev_id) ? -EIO : 0;
+}
+
+static int mhi_runtime_get(struct mhi_controller *mhi_cntrl, void *priv)
+{
+	struct mhi_dev *mhi_dev = priv;
+	struct device *dev = &mhi_dev->pci_dev->dev;
+
+	return pm_runtime_get(dev);
+}
+
+static void mhi_runtime_put(struct mhi_controller *mhi_cntrl, void *priv)
+{
+	struct mhi_dev *mhi_dev = priv;
+	struct device *dev = &mhi_dev->pci_dev->dev;
+
+	pm_runtime_put_noidle(dev);
+}
+
+static void mhi_status_cb(struct mhi_controller *mhi_cntrl,
+			  void *priv,
+			  enum MHI_CB reason)
+{
+	struct mhi_dev *mhi_dev = priv;
+	struct device *dev = &mhi_dev->pci_dev->dev;
+
+	if (reason == MHI_CB_IDLE) {
+		MHI_LOG("Schedule runtime suspend 1\n");
+		pm_runtime_mark_last_busy(dev);
+		pm_request_autosuspend(dev);
+	}
+}
+
+int mhi_debugfs_trigger_m0(void *data, u64 val)
+{
+	struct mhi_controller *mhi_cntrl = data;
+	struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+
+	MHI_LOG("Trigger M3 Exit\n");
+	pm_runtime_get(&mhi_dev->pci_dev->dev);
+	pm_runtime_put(&mhi_dev->pci_dev->dev);
+
+	return 0;
+}
+
+int mhi_debugfs_trigger_m3(void *data, u64 val)
+{
+	struct mhi_controller *mhi_cntrl = data;
+	struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+
+	MHI_LOG("Trigger M3 Entry\n");
+	pm_runtime_mark_last_busy(&mhi_dev->pci_dev->dev);
+	pm_request_autosuspend(&mhi_dev->pci_dev->dev);
+
+	return 0;
+}
+
+DEFINE_SIMPLE_ATTRIBUTE(debugfs_trigger_m0_fops, NULL,
+			mhi_debugfs_trigger_m0, "%llu\n");
+
+DEFINE_SIMPLE_ATTRIBUTE(debugfs_trigger_m3_fops, NULL,
+			mhi_debugfs_trigger_m3, "%llu\n");
+
+static int mhi_init_debugfs_trigger_go(void *data, u64 val)
+{
+	struct mhi_controller *mhi_cntrl = data;
+
+	MHI_LOG("Trigger power up sequence\n");
+
+	mhi_async_power_up(mhi_cntrl);
+
+	return 0;
+}
+DEFINE_SIMPLE_ATTRIBUTE(mhi_init_debugfs_trigger_go_fops, NULL,
+			mhi_init_debugfs_trigger_go, "%llu\n");
+
+
+int mhi_init_debugfs_debug_show(struct seq_file *m, void *d)
+{
+	seq_puts(m, "Enable debug mode to debug  external soc\n");
+	seq_puts(m,
+		 "Usage:  echo 'devid,timeout,domain,smmu_cfg' > debug_mode\n");
+	seq_puts(m, "No spaces between parameters\n");
+	seq_puts(m, "\t1.  devid : 0 or pci device id to register\n");
+	seq_puts(m, "\t2.  timeout: mhi cmd/state transition timeout\n");
+	seq_puts(m, "\t3.  domain: Rootcomplex\n");
+	seq_puts(m, "\t4.  smmu_cfg: smmu configuration mask:\n");
+	seq_puts(m, "\t\t- BIT0: ATTACH\n");
+	seq_puts(m, "\t\t- BIT1: S1 BYPASS\n");
+	seq_puts(m, "\t\t-BIT2: FAST_MAP\n");
+	seq_puts(m, "\t\t-BIT3: ATOMIC\n");
+	seq_puts(m, "\t\t-BIT4: FORCE_COHERENT\n");
+	seq_puts(m, "\t\t-BIT5: GEOMETRY\n");
+	seq_puts(m, "\tAll timeout are in ms, enter 0 to keep default\n");
+	seq_puts(m, "Examples inputs: '0x307,10000'\n");
+	seq_puts(m, "\techo '0,10000,1'\n");
+	seq_puts(m, "\techo '0x307,10000,0,0x3d'\n");
+	seq_puts(m, "firmware image name will be changed to debug.mbn\n");
+
+	return 0;
+}
+
+static int mhi_init_debugfs_debug_open(struct inode *node, struct file *file)
+{
+	return single_open(file, mhi_init_debugfs_debug_show, NULL);
+}
+
+static ssize_t mhi_init_debugfs_debug_write(struct file *fp,
+					    const char __user *ubuf,
+					    size_t count,
+					    loff_t *pos)
+{
+	char *buf = kmalloc(count + 1, GFP_KERNEL);
+	/* #,devid,timeout,domain,smmu-cfg */
+	int args[5] = {0};
+	static char const *dbg_fw = "debug.mbn";
+	int ret;
+	struct mhi_controller *mhi_cntrl = fp->f_inode->i_private;
+	struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+	struct pci_device_id *id;
+
+	if (!buf)
+		return -ENOMEM;
+
+	ret = copy_from_user(buf, ubuf, count);
+	if (ret) {
+		ret = -EFAULT;
+		goto error_read;
+	}
+	buf[count] = 0;
+	get_options(buf, ARRAY_SIZE(args), args);
+	kfree(buf);
+
+	/* override default parameters */
+	mhi_cntrl->fw_image = dbg_fw;
+	mhi_cntrl->edl_image = dbg_fw;
+
+	if (args[0] >= 2 && args[2])
+		mhi_cntrl->timeout_ms = args[2];
+
+	if (args[0] >= 3 && args[3])
+		mhi_cntrl->domain = args[3];
+
+	if (args[0] >= 4 && args[4])
+		mhi_dev->smmu_cfg = args[4];
+
+	/* If it's a new device id register it */
+	if (args[0] && args[1]) {
+		/* find the debug_id and overwrite it */
+		for (id = mhi_pcie_device_id; id->vendor; id++)
+			if (id->device == MHI_PCIE_DEBUG_ID) {
+				id->device = args[1];
+				pci_unregister_driver(&mhi_pcie_driver);
+				ret = pci_register_driver(&mhi_pcie_driver);
+			}
+	}
+
+	mhi_dev->debug_mode = true;
+	debugfs_create_file("go", 0444, mhi_cntrl->parent, mhi_cntrl,
+			    &mhi_init_debugfs_trigger_go_fops);
+	pr_info(
+		"%s: ret:%d pcidev:0x%x smm_cfg:%u timeout:%u\n",
+		__func__, ret, args[1], mhi_dev->smmu_cfg,
+		mhi_cntrl->timeout_ms);
+	return count;
+
+error_read:
+	kfree(buf);
+	return ret;
+}
+
+static const struct file_operations debugfs_debug_ops = {
+	.open = mhi_init_debugfs_debug_open,
+	.release = single_release,
+	.read = seq_read,
+	.write = mhi_init_debugfs_debug_write,
+};
+
+int mhi_pci_probe(struct pci_dev *pci_dev,
+		  const struct pci_device_id *device_id)
+{
+	struct mhi_controller *mhi_cntrl = NULL;
+	u32 domain = pci_domain_nr(pci_dev->bus);
+	u32 bus = pci_dev->bus->number;
+	/* first try to match the exact DT node, then fall back to any free DT node */
+	u32 dev_id[] = {pci_dev->device, PCI_ANY_ID};
+	u32 slot = PCI_SLOT(pci_dev->devfn);
+	struct mhi_dev *mhi_dev;
+	int i, ret;
+
+	/* find a matching controller */
+	for (i = 0; i < ARRAY_SIZE(dev_id); i++) {
+		mhi_cntrl = mhi_bdf_to_controller(domain, bus, slot, dev_id[i]);
+		if (mhi_cntrl)
+			break;
+	}
+
+	if (!mhi_cntrl)
+		return -EPROBE_DEFER;
+
+	mhi_cntrl->dev_id = pci_dev->device;
+	mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+	mhi_dev->pci_dev = pci_dev;
+	mhi_dev->powered_on = true;
+
+	ret = mhi_arch_pcie_init(mhi_cntrl);
+	if (ret)
+		return ret;
+
+	ret = mhi_arch_iommu_init(mhi_cntrl);
+	if (ret)
+		goto error_iommu_init;
+
+	ret = mhi_init_pci_dev(mhi_cntrl);
+	if (ret)
+		goto error_init_pci;
+
+	/* start power up sequence if not in debug mode */
+	if (!mhi_dev->debug_mode) {
+		ret = mhi_async_power_up(mhi_cntrl);
+		if (ret)
+			goto error_power_up;
+	}
+
+	pm_runtime_mark_last_busy(&pci_dev->dev);
+	pm_runtime_allow(&pci_dev->dev);
+
+	if (mhi_cntrl->dentry) {
+		debugfs_create_file("m0", 0444, mhi_cntrl->dentry, mhi_cntrl,
+				    &debugfs_trigger_m0_fops);
+		debugfs_create_file("m3", 0444, mhi_cntrl->dentry, mhi_cntrl,
+				    &debugfs_trigger_m3_fops);
+	}
+
+	MHI_LOG("Return successful\n");
+
+	return 0;
+
+error_power_up:
+	mhi_deinit_pci_dev(mhi_cntrl);
+
+error_init_pci:
+	mhi_arch_iommu_deinit(mhi_cntrl);
+
+error_iommu_init:
+	mhi_arch_pcie_deinit(mhi_cntrl);
+
+	return ret;
+}
+
+static const struct of_device_id mhi_plat_match[] = {
+	{ .compatible = "qcom,mhi" },
+	{},
+};
+
+static int mhi_platform_probe(struct platform_device *pdev)
+{
+	struct mhi_controller *mhi_cntrl;
+	struct mhi_dev *mhi_dev;
+	struct device_node *of_node = pdev->dev.of_node;
+	u64 addr_win[2];
+	int ret;
+
+	if (!of_node)
+		return -ENODEV;
+
+	mhi_cntrl = mhi_alloc_controller(sizeof(*mhi_dev));
+	if (!mhi_cntrl)
+		return -ENOMEM;
+
+	mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+
+	/* get pci bus topology for this node */
+	ret = of_property_read_u32(of_node, "qcom,pci-dev-id",
+				   &mhi_cntrl->dev_id);
+	if (ret)
+		mhi_cntrl->dev_id = PCI_ANY_ID;
+
+	ret = of_property_read_u32(of_node, "qcom,pci-domain",
+				   &mhi_cntrl->domain);
+	if (ret)
+		goto error_probe;
+
+	ret = of_property_read_u32(of_node, "qcom,pci-bus", &mhi_cntrl->bus);
+	if (ret)
+		goto error_probe;
+
+	ret = of_property_read_u32(of_node, "qcom,pci-slot", &mhi_cntrl->slot);
+	if (ret)
+		goto error_probe;
+
+	ret = of_property_read_u32(of_node, "qcom,smmu-cfg",
+				   &mhi_dev->smmu_cfg);
+	if (ret)
+		goto error_probe;
+
+	/* if s1 translation enabled pull iova addr from dt */
+	if (mhi_dev->smmu_cfg & MHI_SMMU_ATTACH &&
+	    !(mhi_dev->smmu_cfg & MHI_SMMU_S1_BYPASS)) {
+		ret = of_property_count_elems_of_size(of_node, "qcom,addr-win",
+						      sizeof(addr_win));
+		if (ret != 1) {
+			ret = -EINVAL;
+			goto error_probe;
+		}
+		ret = of_property_read_u64_array(of_node, "qcom,addr-win",
+						 addr_win, 2);
+		if (ret)
+			goto error_probe;
+	} else {
+		addr_win[0] = memblock_start_of_DRAM();
+		addr_win[1] = memblock_end_of_DRAM();
+	}
+
+	mhi_dev->iova_start = addr_win[0];
+	mhi_dev->iova_stop = addr_win[1];
+
+	/*
+	 * if S1 is enabled, set MHI_CTRL start address to 0 so we can use low
+	 * level mapping api to map buffers outside of smmu domain
+	 */
+	if (mhi_dev->smmu_cfg & MHI_SMMU_ATTACH &&
+	    !(mhi_dev->smmu_cfg & MHI_SMMU_S1_BYPASS))
+		mhi_cntrl->iova_start = 0;
+	else
+		mhi_cntrl->iova_start = addr_win[0];
+
+	mhi_cntrl->iova_stop = mhi_dev->iova_stop;
+	mhi_cntrl->of_node = of_node;
+
+	/* setup power management apis */
+	mhi_cntrl->status_cb = mhi_status_cb;
+	mhi_cntrl->runtime_get = mhi_runtime_get;
+	mhi_cntrl->runtime_put = mhi_runtime_put;
+	mhi_cntrl->link_status = mhi_link_status;
+
+	mhi_dev->pdev = pdev;
+
+	ret = mhi_arch_platform_init(mhi_dev);
+	if (ret)
+		goto error_probe;
+
+	ret = of_register_mhi_controller(mhi_cntrl);
+	if (ret)
+		goto error_register;
+
+	if (mhi_cntrl->parent)
+		debugfs_create_file("debug_mode", 0444, mhi_cntrl->parent,
+				    mhi_cntrl, &debugfs_debug_ops);
+
+	return 0;
+
+error_register:
+	mhi_arch_platform_deinit(mhi_dev);
+
+error_probe:
+	mhi_free_controller(mhi_cntrl);
+
+	return ret;
+}
+
+static struct platform_driver mhi_platform_driver = {
+	.probe = mhi_platform_probe,
+	.driver = {
+		.name = "mhi",
+		.owner = THIS_MODULE,
+		.of_match_table = mhi_plat_match,
+	},
+};
+
+static const struct dev_pm_ops pm_ops = {
+	SET_RUNTIME_PM_OPS(mhi_runtime_suspend,
+			   mhi_runtime_resume,
+			   mhi_runtime_idle)
+	SET_SYSTEM_SLEEP_PM_OPS(mhi_system_suspend, mhi_system_resume)
+};
+
+static struct pci_driver mhi_pcie_driver = {
+	.name = "mhi",
+	.id_table = mhi_pcie_device_id,
+	.probe = mhi_pci_probe,
+	.driver = {
+		.pm = &pm_ops
+	}
+};
+
+static int __init mhi_init(void)
+{
+	int ret;
+
+	ret = platform_driver_register(&mhi_platform_driver);
+	if (ret)
+		return ret;
+
+	ret = pci_register_driver(&mhi_pcie_driver);
+	if (ret)
+		goto pci_reg_error;
+
+	return ret;
+
+pci_reg_error:
+	platform_driver_unregister(&mhi_platform_driver);
+
+	return ret;
+}
+module_init(mhi_init);
+
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("MHI_CORE");
+MODULE_DESCRIPTION("MHI Host Driver");
diff --git a/drivers/bus/mhi/controllers/mhi_qcom.h b/drivers/bus/mhi/controllers/mhi_qcom.h
new file mode 100644
index 0000000..4b90916
--- /dev/null
+++ b/drivers/bus/mhi/controllers/mhi_qcom.h
@@ -0,0 +1,85 @@
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+#ifndef _MHI_QCOM_
+#define _MHI_QCOM_
+
+/* iova cfg bitmask */
+#define MHI_SMMU_ATTACH BIT(0)
+#define MHI_SMMU_S1_BYPASS BIT(1)
+#define MHI_SMMU_FAST BIT(2)
+#define MHI_SMMU_ATOMIC BIT(3)
+#define MHI_SMMU_FORCE_COHERENT BIT(4)
+
+#define MHI_PCIE_VENDOR_ID (0x17cb)
+#define MHI_PCIE_DEBUG_ID (0xffff)
+#define MHI_RPM_SUSPEND_TMR_MS (1000)
+#define MHI_PCI_BAR_NUM (0)
+
+struct mhi_dev {
+	struct platform_device *pdev;
+	struct pci_dev *pci_dev;
+	u32 smmu_cfg;
+	int resn;
+	void *arch_info;
+	bool powered_on;
+	bool debug_mode;
+	dma_addr_t iova_start;
+	dma_addr_t iova_stop;
+};
+
+void mhi_deinit_pci_dev(struct mhi_controller *mhi_cntrl);
+int mhi_pci_probe(struct pci_dev *pci_dev,
+		  const struct pci_device_id *device_id);
+
+static inline int mhi_arch_iommu_init(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+
+	mhi_cntrl->dev = &mhi_dev->pci_dev->dev;
+
+	return dma_set_mask_and_coherent(mhi_cntrl->dev, DMA_BIT_MASK(64));
+}
+
+static inline void mhi_arch_iommu_deinit(struct mhi_controller *mhi_cntrl)
+{
+}
+
+static inline int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl)
+{
+	return 0;
+}
+
+static inline void mhi_arch_pcie_deinit(struct mhi_controller *mhi_cntrl)
+{
+}
+
+static inline int mhi_arch_platform_init(struct mhi_dev *mhi_dev)
+{
+	return 0;
+}
+
+static inline void mhi_arch_platform_deinit(struct mhi_dev *mhi_dev)
+{
+}
+
+static inline int mhi_arch_link_off(struct mhi_controller *mhi_cntrl,
+				    bool graceful)
+{
+	return 0;
+}
+
+static inline int mhi_arch_link_on(struct mhi_controller *mhi_cntrl)
+{
+	return 0;
+}
+
+#endif /* _MHI_QCOM_ */
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v1 3/4] mhi_bus: dev: netdev: add network interface driver
  2018-04-27  2:23 MHI initial design review Sujeev Dias
  2018-04-27  2:23 ` [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface Sujeev Dias
  2018-04-27  2:23 ` [PATCH v1 2/4] mhi_bus: controller: MHI support for QCOM modems Sujeev Dias
@ 2018-04-27  2:23 ` Sujeev Dias
  2018-04-27 11:19   ` Arnd Bergmann
  2018-04-27  2:23 ` [PATCH v1 4/4] mhi_bus: dev: uci: add user space " Sujeev Dias
  2018-07-09 20:08 ` MHI code review Sujeev Dias
  4 siblings, 1 reply; 54+ messages in thread
From: Sujeev Dias @ 2018-04-27  2:23 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: Sujeev Dias, linux-kernel, linux-arm-msm, Tony Truong

The MHI based net device driver is used for transferring IP
traffic between the host and modem. The driver allows clients to
transfer data using a standard network interface.

Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
---
 Documentation/devicetree/bindings/bus/mhi.txt |  36 ++
 drivers/bus/Kconfig                           |   1 +
 drivers/bus/mhi/Makefile                      |   2 +-
 drivers/bus/mhi/devices/Kconfig               |  10 +
 drivers/bus/mhi/devices/Makefile              |   1 +
 drivers/bus/mhi/devices/mhi_netdev.c          | 893 ++++++++++++++++++++++++++
 6 files changed, 942 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/mhi/devices/Kconfig
 create mode 100644 drivers/bus/mhi/devices/Makefile
 create mode 100644 drivers/bus/mhi/devices/mhi_netdev.c

diff --git a/Documentation/devicetree/bindings/bus/mhi.txt b/Documentation/devicetree/bindings/bus/mhi.txt
index ea1b620..172ae7b 100644
--- a/Documentation/devicetree/bindings/bus/mhi.txt
+++ b/Documentation/devicetree/bindings/bus/mhi.txt
@@ -139,3 +139,39 @@ mhi_controller {
 		<driver specific properties>
 	};
 };
+
+================
+Children Devices
+================
+
+MHI netdev properties
+
+- mhi,chan
+  Usage: required
+  Value type: <string>
+  Definition: Channel name the MHI netdev supports
+
+- mhi,mru
+  Usage: required
+  Value type: <u32>
+  Definition: Largest packet size the interface can receive, in bytes.
+
+- mhi,interface-name
+  Usage: optional
+  Value type: <string>
+  Definition: Interface name to be given so clients can identify it
+
+- mhi,recycle-buf
+  Usage: optional
+  Value type: <bool>
+  Definition: Set true if the interface supports recycling buffers.
+
+========
+Example:
+========
+
+mhi_rmnet@0 {
+	mhi,chan = "IP_HW0";
+	mhi,interface-name = "rmnet_mhi";
+	mhi,mru = <0x4000>;
+};
diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
index fb28002..cc03762 100644
--- a/drivers/bus/Kconfig
+++ b/drivers/bus/Kconfig
@@ -190,5 +190,6 @@ config MHI_DEBUG
 
 source "drivers/bus/fsl-mc/Kconfig"
 source "drivers/bus/mhi/controllers/Kconfig"
+source "drivers/bus/mhi/devices/Kconfig"
 
 endmenu
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
index c6a2a91..2382e04 100644
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -5,4 +5,4 @@
 # core layer
 obj-y += core/
 obj-y += controllers/
-#obj-y += devices/
+obj-y += devices/
diff --git a/drivers/bus/mhi/devices/Kconfig b/drivers/bus/mhi/devices/Kconfig
new file mode 100644
index 0000000..40f964d
--- /dev/null
+++ b/drivers/bus/mhi/devices/Kconfig
@@ -0,0 +1,10 @@
+menu "MHI device support"
+
+config MHI_NETDEV
+       tristate "MHI NETDEV"
+       depends on MHI_BUS
+       help
+	  MHI based net device driver for transferring IP traffic
+	  between host and modem. By enabling this driver, clients
+	  can transfer data using standard network interface.
+endmenu
diff --git a/drivers/bus/mhi/devices/Makefile b/drivers/bus/mhi/devices/Makefile
new file mode 100644
index 0000000..ee12a64
--- /dev/null
+++ b/drivers/bus/mhi/devices/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_MHI_NETDEV) += mhi_netdev.o
diff --git a/drivers/bus/mhi/devices/mhi_netdev.c b/drivers/bus/mhi/devices/mhi_netdev.c
new file mode 100644
index 0000000..23881a9
--- /dev/null
+++ b/drivers/bus/mhi/devices/mhi_netdev.c
@@ -0,0 +1,893 @@
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/if_arp.h>
+#include <linux/dma-mapping.h>
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/errno.h>
+#include <linux/of_device.h>
+#include <linux/rtnetlink.h>
+#include <linux/mhi.h>
+
+#define MHI_NETDEV_DRIVER_NAME "mhi_netdev"
+#define WATCHDOG_TIMEOUT (30 * HZ)
+
+#ifdef CONFIG_MHI_DEBUG
+
+#define MHI_ASSERT(cond, msg) do { \
+	if (cond) \
+		panic(msg); \
+} while (0)
+
+#define MSG_VERB(fmt, ...) do { \
+	if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_VERBOSE) \
+		pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
+} while (0)
+
+#else
+
+#define MHI_ASSERT(cond, msg) do { \
+	if (cond) { \
+		MSG_ERR(msg); \
+		WARN_ON(cond); \
+	} \
+} while (0)
+
+#define MSG_VERB(fmt, ...)
+
+#endif
+
+#define MSG_LOG(fmt, ...) do { \
+	if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_INFO) \
+		pr_err("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
+} while (0)
+
+#define MSG_ERR(fmt, ...) do { \
+	if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_ERROR) \
+		pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
+} while (0)
+
+struct mhi_stats {
+	u32 rx_int;
+	u32 tx_full;
+	u32 tx_pkts;
+	u32 rx_budget_overflow;
+	u32 rx_frag;
+	u32 alloc_failed;
+};
+
+/* important: must not exceed the size of sk_buff->cb (48 bytes) */
+struct mhi_skb_priv {
+	void *buf;
+	size_t size;
+	struct mhi_netdev *mhi_netdev;
+};
+
+struct mhi_netdev {
+	int alias;
+	struct mhi_device *mhi_dev;
+	spinlock_t rx_lock;
+	bool enabled;
+	rwlock_t pm_lock; /* state change lock */
+	int (*rx_queue)(struct mhi_netdev *mhi_netdev, gfp_t gfp_t);
+	struct work_struct alloc_work;
+	int wake;
+
+	struct sk_buff_head rx_allocated;
+
+	u32 mru;
+	const char *interface_name;
+	struct napi_struct napi;
+	struct net_device *ndev;
+	struct sk_buff *frag_skb;
+	bool recycle_buf;
+
+	struct mhi_stats stats;
+	struct dentry *dentry;
+	enum MHI_DEBUG_LEVEL msg_lvl;
+};
+
+struct mhi_netdev_priv {
+	struct mhi_netdev *mhi_netdev;
+};
+
+static struct mhi_driver mhi_netdev_driver;
+static void mhi_netdev_create_debugfs(struct mhi_netdev *mhi_netdev);
+
+static __be16 mhi_netdev_ip_type_trans(struct sk_buff *skb)
+{
+	__be16 protocol = 0;
+
+	/* determine L3 protocol */
+	switch (skb->data[0] & 0xf0) {
+	case 0x40:
+		protocol = htons(ETH_P_IP);
+		break;
+	case 0x60:
+		protocol = htons(ETH_P_IPV6);
+		break;
+	default:
+		/* default is QMAP */
+		protocol = htons(ETH_P_MAP);
+		break;
+	}
+	return protocol;
+}
+
+static void mhi_netdev_skb_destructor(struct sk_buff *skb)
+{
+	struct mhi_skb_priv *skb_priv = (struct mhi_skb_priv *)(skb->cb);
+	struct mhi_netdev *mhi_netdev = skb_priv->mhi_netdev;
+
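+	/* reset the skb to its freshly allocated state and park it for reuse */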
+	skb->data = skb->head;
+	skb_reset_tail_pointer(skb);
+	skb->len = 0;
+	MHI_ASSERT(skb->data != skb_priv->buf, "incorrect buf");
+	skb_queue_tail(&mhi_netdev->rx_allocated, skb);
+}
+
+static int mhi_netdev_alloc_skb(struct mhi_netdev *mhi_netdev, gfp_t gfp_t)
+{
+	u32 cur_mru = mhi_netdev->mru;
+	struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
+	struct mhi_skb_priv *skb_priv;
+	int ret;
+	struct sk_buff *skb;
+	int no_tre = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
+	int i;
+
+	for (i = 0; i < no_tre; i++) {
+		skb = alloc_skb(cur_mru, gfp_t);
+		if (!skb)
+			return -ENOMEM;
+
+		read_lock_bh(&mhi_netdev->pm_lock);
+		if (unlikely(!mhi_netdev->enabled)) {
+			MSG_ERR("Interface not enabled\n");
+			ret = -EIO;
+			goto error_queue;
+		}
+
+		skb_priv = (struct mhi_skb_priv *)skb->cb;
+		skb_priv->buf = skb->data;
+		skb_priv->size = cur_mru;
+		skb_priv->mhi_netdev = mhi_netdev;
+		skb->dev = mhi_netdev->ndev;
+
+		if (mhi_netdev->recycle_buf)
+			skb->destructor = mhi_netdev_skb_destructor;
+
+		spin_lock_bh(&mhi_netdev->rx_lock);
+		ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, skb,
+					 skb_priv->size, MHI_EOT);
+		spin_unlock_bh(&mhi_netdev->rx_lock);
+
+		if (ret) {
+			MSG_ERR("Failed to queue skb, ret:%d\n", ret);
+			ret = -EIO;
+			goto error_queue;
+		}
+
+		read_unlock_bh(&mhi_netdev->pm_lock);
+	}
+
+	return 0;
+
+error_queue:
+	skb->destructor = NULL;
+	read_unlock_bh(&mhi_netdev->pm_lock);
+	dev_kfree_skb_any(skb);
+
+	return ret;
+}
+
+static void mhi_netdev_alloc_work(struct work_struct *work)
+{
+	struct mhi_netdev *mhi_netdev = container_of(work, struct mhi_netdev,
+						   alloc_work);
+	/* sleep about 1 sec and retry; that should be enough time
+	 * for the system to reclaim freed memory
+	 */
+	const int sleep_ms = 1000;
+	int retry = 60;
+	int ret;
+
+	MSG_LOG("Entered\n");
+	do {
+		ret = mhi_netdev_alloc_skb(mhi_netdev, GFP_KERNEL);
+		/* sleep and try again */
+		if (ret == -ENOMEM) {
+			msleep(sleep_ms);
+			retry--;
+		}
+	} while (ret == -ENOMEM && retry);
+
+	MSG_LOG("Exit with status:%d retry:%d\n", ret, retry);
+}
+
+/* we will recycle buffers */
+static int mhi_netdev_skb_recycle(struct mhi_netdev *mhi_netdev, gfp_t gfp_t)
+{
+	struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
+	int no_tre;
+	int ret = 0;
+	struct sk_buff *skb;
+	struct mhi_skb_priv *skb_priv;
+
+	read_lock_bh(&mhi_netdev->pm_lock);
+	if (!mhi_netdev->enabled) {
+		read_unlock_bh(&mhi_netdev->pm_lock);
+		return -EIO;
+	}
+
+	no_tre = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
+
+	spin_lock_bh(&mhi_netdev->rx_lock);
+	while (no_tre) {
+		skb = skb_dequeue(&mhi_netdev->rx_allocated);
+
+		/* no free buffers to recycle, reschedule work */
+		if (!skb) {
+			ret = -ENOMEM;
+			goto error_queue;
+		}
+
+		skb_priv = (struct mhi_skb_priv *)(skb->cb);
+		ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, skb,
+					 skb_priv->size, MHI_EOT);
+
+		/* failed to queue buffer */
+		if (ret) {
+			MSG_ERR("Failed to queue skb, ret:%d\n", ret);
+			skb_queue_tail(&mhi_netdev->rx_allocated, skb);
+			goto error_queue;
+		}
+
+		no_tre--;
+	}
+
+error_queue:
+	spin_unlock_bh(&mhi_netdev->rx_lock);
+	read_unlock_bh(&mhi_netdev->pm_lock);
+
+	return ret;
+}
+
+static void mhi_netdev_dealloc(struct mhi_netdev *mhi_netdev)
+{
+	struct sk_buff *skb;
+
+	skb = skb_dequeue(&mhi_netdev->rx_allocated);
+	while (skb) {
+		skb->destructor = NULL;
+		kfree_skb(skb);
+		skb = skb_dequeue(&mhi_netdev->rx_allocated);
+	}
+}
+
+static int mhi_netdev_poll(struct napi_struct *napi, int budget)
+{
+	struct net_device *dev = napi->dev;
+	struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
+	struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
+	struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
+	int rx_work = 0;
+	int ret;
+
+	MSG_VERB("Entered\n");
+
+	read_lock_bh(&mhi_netdev->pm_lock);
+
+	if (!mhi_netdev->enabled) {
+		MSG_LOG("interface is disabled!\n");
+		napi_complete(napi);
+		read_unlock_bh(&mhi_netdev->pm_lock);
+		return 0;
+	}
+
+	mhi_device_get(mhi_dev);
+
+	rx_work = mhi_poll(mhi_dev, budget);
+	if (rx_work < 0) {
+		MSG_ERR("Error polling ret:%d\n", rx_work);
+		rx_work = 0;
+		napi_complete(napi);
+		goto exit_poll;
+	}
+
+	/* queue new buffers */
+	ret = mhi_netdev->rx_queue(mhi_netdev, GFP_ATOMIC);
+	if (ret == -ENOMEM) {
+		MSG_LOG("out of tre, queuing bg worker\n");
+		mhi_netdev->stats.alloc_failed++;
+		schedule_work(&mhi_netdev->alloc_work);
+	}
+
+	/* complete work if # of packets processed is less than the budget */
+	if (rx_work < budget)
+		napi_complete(napi);
+	else
+		mhi_netdev->stats.rx_budget_overflow++;
+
+exit_poll:
+	mhi_device_put(mhi_dev);
+	read_unlock_bh(&mhi_netdev->pm_lock);
+
+	MSG_VERB("polled %d pkts\n", rx_work);
+
+	return rx_work;
+}
+
+static int mhi_netdev_open(struct net_device *dev)
+{
+	struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
+	struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
+	struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
+
+	MSG_LOG("Opened net dev interface\n");
+
+	/* tx queue may not necessarily be stopped already
+	 * so stop the queue if tx path is not enabled
+	 */
+	if (!mhi_dev->ul_chan)
+		netif_stop_queue(dev);
+	else
+		netif_start_queue(dev);
+
+	return 0;
+}
+
+static int mhi_netdev_change_mtu(struct net_device *dev, int new_mtu)
+{
+	struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
+	struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
+	struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
+
+	if (new_mtu < 0 || mhi_dev->mtu < new_mtu)
+		return -EINVAL;
+
+	dev->mtu = new_mtu;
+	return 0;
+}
+
+static int mhi_netdev_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+	struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
+	struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
+	struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
+	int res = 0;
+	struct mhi_skb_priv *tx_priv;
+
+	MSG_VERB("Entered\n");
+
+	tx_priv = (struct mhi_skb_priv *)(skb->cb);
+	tx_priv->mhi_netdev = mhi_netdev;
+	read_lock_bh(&mhi_netdev->pm_lock);
+
+	if (unlikely(!mhi_netdev->enabled)) {
+		/* The only reason the interface could be disabled while we
+		 * still receive data is an SSR (subsystem restart). We do
+		 * not want to stop the queue and return an error; instead,
+		 * flush all the uplink packets and report success.
+		 */
+		res = NETDEV_TX_OK;
+		dev_kfree_skb_any(skb);
+		goto mhi_xmit_exit;
+	}
+
+	res = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, skb, skb->len,
+				 MHI_EOT);
+	if (res) {
+		MSG_VERB("Failed to queue with reason:%d\n", res);
+		netif_stop_queue(dev);
+		mhi_netdev->stats.tx_full++;
+		res = NETDEV_TX_BUSY;
+		goto mhi_xmit_exit;
+	}
+
+	mhi_netdev->stats.tx_pkts++;
+
+mhi_xmit_exit:
+	read_unlock_bh(&mhi_netdev->pm_lock);
+	MSG_VERB("Exited\n");
+
+	return res;
+}
+
+static int mhi_netdev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+	int rc = 0;
+
+	switch (cmd) {
+	default:
+		/* don't fail any IOCTL right now */
+		rc = 0;
+		break;
+	}
+
+	return rc;
+}
+
+static const struct net_device_ops mhi_netdev_ops_ip = {
+	.ndo_open = mhi_netdev_open,
+	.ndo_start_xmit = mhi_netdev_xmit,
+	.ndo_do_ioctl = mhi_netdev_ioctl,
+	.ndo_change_mtu = mhi_netdev_change_mtu,
+};
+
+static void mhi_netdev_setup(struct net_device *dev)
+{
+	dev->netdev_ops = &mhi_netdev_ops_ip;
+	ether_setup(dev);
+
+	/* set this after calling ether_setup */
+	dev->header_ops = NULL;  /* no header */
+	dev->type = ARPHRD_RAWIP;
+	dev->hard_header_len = 0;
+	dev->addr_len = 0;
+	dev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
+	dev->watchdog_timeo = WATCHDOG_TIMEOUT;
+}
+
+/* enable mhi_netdev netdev, call only after grabbing mhi_netdev.mutex */
+static int mhi_netdev_enable_iface(struct mhi_netdev *mhi_netdev)
+{
+	int ret = 0;
+	char ifalias[IFALIASZ];
+	char ifname[IFNAMSIZ];
+	struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
+	int no_tre;
+
+	MSG_LOG("Prepare the channels for transfer\n");
+
+	ret = mhi_prepare_for_transfer(mhi_dev);
+	if (ret) {
+		MSG_ERR("Failed to start TX chan ret %d\n", ret);
+		goto mhi_failed_to_start;
+	}
+
+	/* first time enabling the node */
+	if (!mhi_netdev->ndev) {
+		struct mhi_netdev_priv *mhi_netdev_priv;
+
+		snprintf(ifalias, sizeof(ifalias), "%s_%04x_%02u.%02u.%02u_%u",
+			 mhi_netdev->interface_name, mhi_dev->dev_id,
+			 mhi_dev->domain, mhi_dev->bus, mhi_dev->slot,
+			 mhi_netdev->alias);
+
+		snprintf(ifname, sizeof(ifname), "%s%%d",
+			 mhi_netdev->interface_name);
+
+		rtnl_lock();
+		mhi_netdev->ndev = alloc_netdev(sizeof(*mhi_netdev_priv),
+					ifname, NET_NAME_PREDICTABLE,
+					mhi_netdev_setup);
+
+		if (!mhi_netdev->ndev) {
+			ret = -ENOMEM;
+			rtnl_unlock();
+			goto net_dev_alloc_fail;
+		}
+
+		mhi_netdev->ndev->mtu = mhi_dev->mtu;
+		SET_NETDEV_DEV(mhi_netdev->ndev, &mhi_dev->dev);
+		dev_set_alias(mhi_netdev->ndev, ifalias, strlen(ifalias));
+		mhi_netdev_priv = netdev_priv(mhi_netdev->ndev);
+		mhi_netdev_priv->mhi_netdev = mhi_netdev;
+		rtnl_unlock();
+
+		netif_napi_add(mhi_netdev->ndev, &mhi_netdev->napi,
+			       mhi_netdev_poll, NAPI_POLL_WEIGHT);
+		ret = register_netdev(mhi_netdev->ndev);
+		if (ret) {
+			MSG_ERR("Network device registration failed\n");
+			goto net_dev_reg_fail;
+		}
+
+		skb_queue_head_init(&mhi_netdev->rx_allocated);
+	}
+
+	write_lock_irq(&mhi_netdev->pm_lock);
+	mhi_netdev->enabled = true;
+	write_unlock_irq(&mhi_netdev->pm_lock);
+
+	/* queue buffer for rx path */
+	no_tre = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
+	ret = mhi_netdev_alloc_skb(mhi_netdev, GFP_KERNEL);
+	if (ret)
+		schedule_work(&mhi_netdev->alloc_work);
+
+	/* if we recycle, prepare one more set of buffers */
+	if (mhi_netdev->recycle_buf)
+		for (; no_tre >= 0; no_tre--) {
+			struct sk_buff *skb = alloc_skb(mhi_netdev->mru,
+							GFP_KERNEL);
+			struct mhi_skb_priv *skb_priv;
+
+			if (!skb)
+				break;
+
+			skb_priv = (struct mhi_skb_priv *)skb->cb;
+			skb_priv->buf = skb->data;
+			skb_priv->size = mhi_netdev->mru;
+			skb_priv->mhi_netdev = mhi_netdev;
+			skb->dev = mhi_netdev->ndev;
+			skb->destructor = mhi_netdev_skb_destructor;
+			skb_queue_tail(&mhi_netdev->rx_allocated, skb);
+		}
+
+	napi_enable(&mhi_netdev->napi);
+
+	MSG_LOG("Exited.\n");
+
+	return 0;
+
+net_dev_reg_fail:
+	netif_napi_del(&mhi_netdev->napi);
+	free_netdev(mhi_netdev->ndev);
+	mhi_netdev->ndev = NULL;
+
+net_dev_alloc_fail:
+	mhi_unprepare_from_transfer(mhi_dev);
+
+mhi_failed_to_start:
+	MSG_ERR("Exited ret %d.\n", ret);
+
+	return ret;
+}
+
+static void mhi_netdev_xfer_ul_cb(struct mhi_device *mhi_dev,
+				  struct mhi_result *mhi_result)
+{
+	struct mhi_netdev *mhi_netdev = mhi_device_get_devdata(mhi_dev);
+	struct sk_buff *skb = mhi_result->buf_addr;
+	struct net_device *ndev = mhi_netdev->ndev;
+
+	ndev->stats.tx_packets++;
+	ndev->stats.tx_bytes += skb->len;
+	dev_kfree_skb(skb);
+
+	if (netif_queue_stopped(ndev))
+		netif_wake_queue(ndev);
+}
+
+static int mhi_netdev_process_fragment(struct mhi_netdev *mhi_netdev,
+				      struct sk_buff *skb)
+{
+	struct sk_buff *temp_skb;
+
+	if (mhi_netdev->frag_skb) {
+		/* merge the new skb into the old fragment */
+		temp_skb = skb_copy_expand(mhi_netdev->frag_skb, 0, skb->len,
+					   GFP_ATOMIC);
+		if (!temp_skb) {
+			dev_kfree_skb(mhi_netdev->frag_skb);
+			mhi_netdev->frag_skb = NULL;
+			return -ENOMEM;
+		}
+
+		dev_kfree_skb_any(mhi_netdev->frag_skb);
+		mhi_netdev->frag_skb = temp_skb;
+		memcpy(skb_put(mhi_netdev->frag_skb, skb->len), skb->data,
+		       skb->len);
+	} else {
+		mhi_netdev->frag_skb = skb_copy(skb, GFP_ATOMIC);
+		if (!mhi_netdev->frag_skb)
+			return -ENOMEM;
+	}
+
+	mhi_netdev->stats.rx_frag++;
+
+	return 0;
+}
+
+static void mhi_netdev_xfer_dl_cb(struct mhi_device *mhi_dev,
+				  struct mhi_result *mhi_result)
+{
+	struct mhi_netdev *mhi_netdev = mhi_device_get_devdata(mhi_dev);
+	struct sk_buff *skb = mhi_result->buf_addr;
+	struct net_device *dev = mhi_netdev->ndev;
+	int ret = 0;
+
+	if (mhi_result->transaction_status == -ENOTCONN) {
+		dev_kfree_skb(skb);
+		return;
+	}
+
+	skb_put(skb, mhi_result->bytes_xferd);
+	dev->stats.rx_packets++;
+	dev->stats.rx_bytes += mhi_result->bytes_xferd;
+
+	/* merge skb's together, it's a chain transfer */
+	if (mhi_result->transaction_status == -EOVERFLOW ||
+	    mhi_netdev->frag_skb) {
+		ret = mhi_netdev_process_fragment(mhi_netdev, skb);
+
+		/* recycle the skb */
+		if (mhi_netdev->recycle_buf)
+			mhi_netdev_skb_destructor(skb);
+		else
+			dev_kfree_skb(skb);
+
+		if (ret)
+			return;
+	}
+
+	/* more data will come, don't submit the buffer */
+	if (mhi_result->transaction_status == -EOVERFLOW)
+		return;
+
+	if (mhi_netdev->frag_skb) {
+		skb = mhi_netdev->frag_skb;
+		skb->dev = dev;
+		mhi_netdev->frag_skb = NULL;
+	}
+
+	skb->protocol = mhi_netdev_ip_type_trans(skb);
+	netif_receive_skb(skb);
+}
+
+static void mhi_netdev_status_cb(struct mhi_device *mhi_dev, enum MHI_CB mhi_cb)
+{
+	struct mhi_netdev *mhi_netdev = mhi_device_get_devdata(mhi_dev);
+
+	if (mhi_cb != MHI_CB_PENDING_DATA)
+		return;
+
+	if (napi_schedule_prep(&mhi_netdev->napi)) {
+		__napi_schedule(&mhi_netdev->napi);
+		mhi_netdev->stats.rx_int++;
+		return;
+	}
+}
+
+#ifdef CONFIG_DEBUG_FS
+
+static struct dentry *dentry;
+
+static int mhi_netdev_debugfs_trigger_reset(void *data, u64 val)
+{
+	struct mhi_netdev *mhi_netdev = data;
+	struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
+	int ret;
+
+	MSG_LOG("Triggering channel reset\n");
+
+	/* disable the interface so no data processing */
+	write_lock_irq(&mhi_netdev->pm_lock);
+	mhi_netdev->enabled = false;
+	write_unlock_irq(&mhi_netdev->pm_lock);
+	napi_disable(&mhi_netdev->napi);
+
+	/* disable all hardware channels */
+	mhi_unprepare_from_transfer(mhi_dev);
+
+	/* clean up all allocated buffers */
+	mhi_netdev_dealloc(mhi_netdev);
+
+	MSG_LOG("Restarting iface\n");
+
+	ret = mhi_netdev_enable_iface(mhi_netdev);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+DEFINE_SIMPLE_ATTRIBUTE(mhi_netdev_debugfs_trigger_reset_fops, NULL,
+			mhi_netdev_debugfs_trigger_reset, "%llu\n");
+
+static void mhi_netdev_create_debugfs(struct mhi_netdev *mhi_netdev)
+{
+	char node_name[32];
+	int i;
+	const umode_t mode = 0600;
+	struct dentry *file;
+	struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
+
+	const struct {
+		char *name;
+		u32 *ptr;
+	} debugfs_table[] = {
+		{
+			"rx_int",
+			&mhi_netdev->stats.rx_int
+		},
+		{
+			"tx_full",
+			&mhi_netdev->stats.tx_full
+		},
+		{
+			"tx_pkts",
+			&mhi_netdev->stats.tx_pkts
+		},
+		{
+			"rx_budget_overflow",
+			&mhi_netdev->stats.rx_budget_overflow
+		},
+		{
+			"rx_fragmentation",
+			&mhi_netdev->stats.rx_frag
+		},
+		{
+			"alloc_failed",
+			&mhi_netdev->stats.alloc_failed
+		},
+		{
+			NULL, NULL
+		},
+	};
+
+	/* both TX and RX client handles contain the same device info */
+	snprintf(node_name, sizeof(node_name), "%s_%04x_%02u.%02u.%02u_%u",
+		 mhi_netdev->interface_name, mhi_dev->dev_id, mhi_dev->domain,
+		 mhi_dev->bus, mhi_dev->slot, mhi_netdev->alias);
+
+	if (IS_ERR_OR_NULL(dentry))
+		return;
+
+	mhi_netdev->dentry = debugfs_create_dir(node_name, dentry);
+	if (IS_ERR_OR_NULL(mhi_netdev->dentry))
+		return;
+
+	file = debugfs_create_u32("msg_lvl", mode, mhi_netdev->dentry,
+				  (u32 *)&mhi_netdev->msg_lvl);
+	if (IS_ERR_OR_NULL(file))
+		return;
+
+	/* Add debug stats table */
+	for (i = 0; debugfs_table[i].name; i++) {
+		file = debugfs_create_u32(debugfs_table[i].name, mode,
+					  mhi_netdev->dentry,
+					  debugfs_table[i].ptr);
+		if (IS_ERR_OR_NULL(file))
+			return;
+	}
+
+	debugfs_create_file("reset", mode, mhi_netdev->dentry, mhi_netdev,
+			    &mhi_netdev_debugfs_trigger_reset_fops);
+}
+
+static void mhi_netdev_create_debugfs_dir(void)
+{
+	dentry = debugfs_create_dir(MHI_NETDEV_DRIVER_NAME, NULL);
+}
+
+#else
+
+static void mhi_netdev_create_debugfs(struct mhi_netdev *mhi_netdev)
+{
+}
+
+static void mhi_netdev_create_debugfs_dir(void)
+{
+}
+
+#endif
+
+static void mhi_netdev_remove(struct mhi_device *mhi_dev)
+{
+	struct mhi_netdev *mhi_netdev = mhi_device_get_devdata(mhi_dev);
+
+	MSG_LOG("Remove notification received\n");
+
+	write_lock_irq(&mhi_netdev->pm_lock);
+	mhi_netdev->enabled = false;
+	write_unlock_irq(&mhi_netdev->pm_lock);
+
+	napi_disable(&mhi_netdev->napi);
+	netif_napi_del(&mhi_netdev->napi);
+	mhi_netdev_dealloc(mhi_netdev);
+	unregister_netdev(mhi_netdev->ndev);
+	free_netdev(mhi_netdev->ndev);
+	flush_work(&mhi_netdev->alloc_work);
+
+	if (!IS_ERR_OR_NULL(mhi_netdev->dentry))
+		debugfs_remove_recursive(mhi_netdev->dentry);
+}
+
+static int mhi_netdev_probe(struct mhi_device *mhi_dev,
+			    const struct mhi_device_id *id)
+{
+	int ret;
+	struct mhi_netdev *mhi_netdev;
+	struct device_node *of_node = mhi_dev->dev.of_node;
+	char node_name[32];
+
+	if (!of_node)
+		return -ENODEV;
+
+	mhi_netdev = devm_kzalloc(&mhi_dev->dev, sizeof(*mhi_netdev),
+				  GFP_KERNEL);
+	if (!mhi_netdev)
+		return -ENOMEM;
+
+	mhi_netdev->alias = of_alias_get_id(of_node, "mhi_netdev");
+	if (mhi_netdev->alias < 0)
+		return -ENODEV;
+
+	mhi_netdev->mhi_dev = mhi_dev;
+	mhi_device_set_devdata(mhi_dev, mhi_netdev);
+
+	ret = of_property_read_u32(of_node, "mhi,mru", &mhi_netdev->mru);
+	if (ret)
+		return -ENODEV;
+
+	ret = of_property_read_string(of_node, "mhi,interface-name",
+				      &mhi_netdev->interface_name);
+	if (ret)
+		mhi_netdev->interface_name = mhi_netdev_driver.driver.name;
+
+	mhi_netdev->recycle_buf = of_property_read_bool(of_node,
+							"mhi,recycle-buf");
+	mhi_netdev->rx_queue = mhi_netdev->recycle_buf ?
+		mhi_netdev_skb_recycle : mhi_netdev_alloc_skb;
+
+	spin_lock_init(&mhi_netdev->rx_lock);
+	rwlock_init(&mhi_netdev->pm_lock);
+	INIT_WORK(&mhi_netdev->alloc_work, mhi_netdev_alloc_work);
+
+	/* create ipc log buffer */
+	snprintf(node_name, sizeof(node_name), "%s_%04x_%02u.%02u.%02u_%u",
+		 mhi_netdev->interface_name, mhi_dev->dev_id, mhi_dev->domain,
+		 mhi_dev->bus, mhi_dev->slot, mhi_netdev->alias);
+
+	mhi_netdev->msg_lvl = MHI_MSG_LVL_ERROR;
+
+	/* setup network interface */
+	ret = mhi_netdev_enable_iface(mhi_netdev);
+	if (ret)
+		return ret;
+
+	mhi_netdev_create_debugfs(mhi_netdev);
+
+	return 0;
+}
+
+static const struct mhi_device_id mhi_netdev_match_table[] = {
+	{ .chan = "IP_HW0" },
+	{ .chan = "IP_HW_ADPL" },
+	{ NULL },
+};
+
+static struct mhi_driver mhi_netdev_driver = {
+	.id_table = mhi_netdev_match_table,
+	.probe = mhi_netdev_probe,
+	.remove = mhi_netdev_remove,
+	.ul_xfer_cb = mhi_netdev_xfer_ul_cb,
+	.dl_xfer_cb = mhi_netdev_xfer_dl_cb,
+	.status_cb = mhi_netdev_status_cb,
+	.driver = {
+		.name = "mhi_netdev",
+		.owner = THIS_MODULE,
+	}
+};
+
+static int __init mhi_netdev_init(void)
+{
+	mhi_netdev_create_debugfs_dir();
+
+	return mhi_driver_register(&mhi_netdev_driver);
+}
+module_init(mhi_netdev_init);
+
+MODULE_DESCRIPTION("MHI NETDEV Network Interface");
+MODULE_LICENSE("GPL v2");
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* [PATCH v1 4/4] mhi_bus: dev: uci: add user space interface driver
  2018-04-27  2:23 MHI initial design review Sujeev Dias
                   ` (2 preceding siblings ...)
  2018-04-27  2:23 ` [PATCH v1 3/4] mhi_bus: dev: netdev: add network interface driver Sujeev Dias
@ 2018-04-27  2:23 ` Sujeev Dias
  2018-04-27 11:36   ` Arnd Bergmann
                     ` (3 more replies)
  2018-07-09 20:08 ` MHI code review Sujeev Dias
  4 siblings, 4 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-04-27  2:23 UTC
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: Sujeev Dias, linux-kernel, linux-arm-msm, Tony Truong

This module allows user space clients to transfer data
between the external modem and the host using standard
file operations.
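
A minimal user space sketch of this flow (the device node name below
is hypothetical; real names encode the device id, domain, bus, slot
and UL channel id):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[256];
		ssize_t n;
		/* hypothetical node; real names look like
		 * /dev/mhi_<dev_id>_<dom>.<bus>.<slot>_pipe_<chan>
		 */
		int fd = open("/dev/mhi_0304_00.01.00_pipe_14", O_RDWR);

		if (fd < 0)
			return 1;

		write(fd, "ping", 4);           /* queued on the UL channel */
		n = read(fd, buf, sizeof(buf)); /* blocks until DL data arrives */
		printf("read %zd bytes\n", n);
		close(fd);

		return 0;
	}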

Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
---
 drivers/bus/mhi/devices/Kconfig   |   9 +
 drivers/bus/mhi/devices/Makefile  |   1 +
 drivers/bus/mhi/devices/mhi_uci.c | 662 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 672 insertions(+)
 create mode 100644 drivers/bus/mhi/devices/mhi_uci.c

diff --git a/drivers/bus/mhi/devices/Kconfig b/drivers/bus/mhi/devices/Kconfig
index 40f964d..83b9673 100644
--- a/drivers/bus/mhi/devices/Kconfig
+++ b/drivers/bus/mhi/devices/Kconfig
@@ -7,4 +7,13 @@ config MHI_NETDEV
 	  MHI-based network device driver for transferring IP traffic
 	  between the host and modem. By enabling this driver, clients
 	  can transfer data using a standard network interface.
+
+config MHI_UCI
+       tristate "MHI UCI"
+       depends on MHI_BUS
+       help
+	  MHI-based UCI driver for transferring data between the host and
+	  modem using standard file operations from user space. Open, read,
+	  write, poll, ioctl, and close operations are supported by this driver.
+
 endmenu
diff --git a/drivers/bus/mhi/devices/Makefile b/drivers/bus/mhi/devices/Makefile
index ee12a64..300eed1 100644
--- a/drivers/bus/mhi/devices/Makefile
+++ b/drivers/bus/mhi/devices/Makefile
@@ -1 +1,2 @@
 obj-$(CONFIG_MHI_NETDEV) += mhi_netdev.o
+obj-$(CONFIG_MHI_UCI) += mhi_uci.o
diff --git a/drivers/bus/mhi/devices/mhi_uci.c b/drivers/bus/mhi/devices/mhi_uci.c
new file mode 100644
index 0000000..11b7b1f
--- /dev/null
+++ b/drivers/bus/mhi/devices/mhi_uci.c
@@ -0,0 +1,662 @@
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/cdev.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/poll.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+#include <linux/uaccess.h>
+#include <linux/mhi.h>
+
+#define DEVICE_NAME "mhi"
+#define MHI_UCI_DRIVER_NAME "mhi_uci"
+
+struct uci_chan {
+	wait_queue_head_t wq;
+	spinlock_t lock;
+	struct list_head pending; /* user space waiting to read */
+	struct uci_buf *cur_buf; /* current buffer user space reading */
+	size_t rx_size;
+};
+
+struct uci_buf {
+	void *data;
+	size_t len;
+	struct list_head node;
+};
+
+struct uci_dev {
+	struct list_head node;
+	dev_t devt;
+	struct device *dev;
+	struct mhi_device *mhi_dev;
+	const char *chan;
+	struct mutex mutex; /* sync open and close */
+	struct uci_chan ul_chan;
+	struct uci_chan dl_chan;
+	size_t mtu;
+	int ref_count;
+	bool enabled;
+};
+
+struct mhi_uci_drv {
+	struct list_head head;
+	struct mutex lock;
+	struct class *class;
+	int major;
+	dev_t dev_t;
+};
+
+static enum MHI_DEBUG_LEVEL msg_lvl = MHI_MSG_LVL_ERROR;
+
+#ifdef CONFIG_MHI_DEBUG
+
+#define MSG_VERB(fmt, ...) do { \
+		if (msg_lvl <= MHI_MSG_LVL_VERBOSE) \
+			pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__); \
+	} while (0)
+
+#else
+
+#define MSG_VERB(fmt, ...)
+
+#endif
+
+#define MSG_LOG(fmt, ...) do { \
+		if (msg_lvl <= MHI_MSG_LVL_INFO) \
+			pr_err("[I][%s] " fmt, __func__, ##__VA_ARGS__); \
+	} while (0)
+
+#define MSG_ERR(fmt, ...) do { \
+		if (msg_lvl <= MHI_MSG_LVL_ERROR) \
+			pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
+	} while (0)
+
+#define MAX_UCI_DEVICES (64)
+
+static DECLARE_BITMAP(uci_minors, MAX_UCI_DEVICES);
+static struct mhi_uci_drv mhi_uci_drv;
+
+static int mhi_queue_inbound(struct uci_dev *uci_dev)
+{
+	struct mhi_device *mhi_dev = uci_dev->mhi_dev;
+	int nr_trbs = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
+	size_t mtu = uci_dev->mtu;
+	void *buf;
+	struct uci_buf *uci_buf;
+	int ret = -EIO, i;
+
+	for (i = 0; i < nr_trbs; i++) {
+		buf = kmalloc(mtu + sizeof(*uci_buf), GFP_KERNEL);
+		if (!buf)
+			return -ENOMEM;
+
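+		/* the uci_buf bookkeeping struct lives at the tail of the buffer */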
+		uci_buf = buf + mtu;
+		uci_buf->data = buf;
+
+		MSG_VERB("Allocated buf %d of %d size %zu\n", i, nr_trbs, mtu);
+
+		ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, buf, mtu,
+					 MHI_EOT);
+		if (ret) {
+			kfree(buf);
+			MSG_ERR("Failed to queue buffer %d\n", i);
+			return ret;
+		}
+	}
+
+	return ret;
+}
+
+static long mhi_uci_ioctl(struct file *file,
+			  unsigned int cmd,
+			  unsigned long arg)
+{
+	struct uci_dev *uci_dev = file->private_data;
+	struct mhi_device *mhi_dev = uci_dev->mhi_dev;
+	long ret = -ERESTARTSYS;
+
+	mutex_lock(&uci_dev->mutex);
+	if (uci_dev->enabled)
+		ret = mhi_ioctl(mhi_dev, cmd, arg);
+	mutex_unlock(&uci_dev->mutex);
+
+	return ret;
+}
+
+static int mhi_uci_release(struct inode *inode, struct file *file)
+{
+	struct uci_dev *uci_dev = file->private_data;
+
+	mutex_lock(&uci_dev->mutex);
+	uci_dev->ref_count--;
+	if (!uci_dev->ref_count) {
+		struct uci_buf *itr, *tmp;
+		struct uci_chan *uci_chan;
+
+		MSG_LOG("Last client left, closing node\n");
+
+		if (uci_dev->enabled)
+			mhi_unprepare_from_transfer(uci_dev->mhi_dev);
+
+		/* clean inbound channel */
+		uci_chan = &uci_dev->dl_chan;
+		list_for_each_entry_safe(itr, tmp, &uci_chan->pending, node) {
+			list_del(&itr->node);
+			kfree(itr->data);
+		}
+		if (uci_chan->cur_buf)
+			kfree(uci_chan->cur_buf->data);
+
+		uci_chan->cur_buf = NULL;
+
+		if (!uci_dev->enabled) {
+			MSG_LOG("Node is deleted, freeing dev node\n");
+			mutex_unlock(&uci_dev->mutex);
+			mutex_destroy(&uci_dev->mutex);
+			clear_bit(MINOR(uci_dev->devt), uci_minors);
+			kfree(uci_dev);
+			return 0;
+		}
+	}
+
+	mutex_unlock(&uci_dev->mutex);
+
+	MSG_LOG("exit: ref_count:%d\n", uci_dev->ref_count);
+
+	return 0;
+}
+
+static unsigned int mhi_uci_poll(struct file *file, poll_table *wait)
+{
+	struct uci_dev *uci_dev = file->private_data;
+	struct mhi_device *mhi_dev = uci_dev->mhi_dev;
+	struct uci_chan *uci_chan;
+	unsigned int mask = 0;
+
+	poll_wait(file, &uci_dev->dl_chan.wq, wait);
+	poll_wait(file, &uci_dev->ul_chan.wq, wait);
+
+	uci_chan = &uci_dev->dl_chan;
+	spin_lock_bh(&uci_chan->lock);
+	if (!uci_dev->enabled) {
+		mask = POLLERR;
+	} else if (!list_empty(&uci_chan->pending) || uci_chan->cur_buf) {
+		MSG_VERB("Client can read from node\n");
+		mask |= POLLIN | POLLRDNORM;
+	}
+	spin_unlock_bh(&uci_chan->lock);
+
+	uci_chan = &uci_dev->ul_chan;
+	spin_lock_bh(&uci_chan->lock);
+	if (!uci_dev->enabled) {
+		mask |= POLLERR;
+	} else if (mhi_get_no_free_descriptors(mhi_dev, DMA_TO_DEVICE) > 0) {
+		MSG_VERB("Client can write to node\n");
+		mask |= POLLOUT | POLLWRNORM;
+	}
+	spin_unlock_bh(&uci_chan->lock);
+
+	MSG_LOG("Client attempted to poll, returning mask 0x%x\n", mask);
+
+	return mask;
+}
+
+static ssize_t mhi_uci_write(struct file *file,
+			     const char __user *buf,
+			     size_t count,
+			     loff_t *offp)
+{
+	struct uci_dev *uci_dev = file->private_data;
+	struct mhi_device *mhi_dev = uci_dev->mhi_dev;
+	struct uci_chan *uci_chan = &uci_dev->ul_chan;
+	size_t bytes_xfered = 0;
+	int ret;
+
+	if (!buf || !count)
+		return -EINVAL;
+
+	/* confirm channel is active */
+	spin_lock_bh(&uci_chan->lock);
+	if (!uci_dev->enabled) {
+		spin_unlock_bh(&uci_chan->lock);
+		return -ERESTARTSYS;
+	}
+
+	MSG_VERB("Enter: to xfer:%zu bytes\n", count);
+
+	while (count) {
+		size_t xfer_size;
+		void *kbuf;
+		enum MHI_FLAGS flags;
+
+		spin_unlock_bh(&uci_chan->lock);
+
+		/* wait for free descriptors */
+		ret = wait_event_interruptible(uci_chan->wq,
+			(!uci_dev->enabled) ||
+			mhi_get_no_free_descriptors
+					       (mhi_dev, DMA_TO_DEVICE) > 0);
+
+		if (ret == -ERESTARTSYS) {
+			MSG_LOG("Exit signal caught for node\n");
+			return -ERESTARTSYS;
+		}
+
+		xfer_size = min_t(size_t, count, uci_dev->mtu);
+		kbuf = kmalloc(xfer_size, GFP_KERNEL);
+		if (!kbuf) {
+			MSG_ERR("Failed to allocate memory %zu\n", xfer_size);
+			return -ENOMEM;
+		}
+
+		ret = copy_from_user(kbuf, buf, xfer_size);
+		if (unlikely(ret)) {
+			kfree(kbuf);
+			return -EFAULT;
+		}
+
+		spin_lock_bh(&uci_chan->lock);
+		flags = (count - xfer_size) ? MHI_EOB : MHI_EOT;
+		if (uci_dev->enabled)
+			ret = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, kbuf,
+						 xfer_size, flags);
+		else
+			ret = -ERESTARTSYS;
+
+		if (ret) {
+			kfree(kbuf);
+			goto sys_interrupt;
+		}
+
+		bytes_xfered += xfer_size;
+		count -= xfer_size;
+		buf += xfer_size;
+	}
+
+	spin_unlock_bh(&uci_chan->lock);
+	MSG_VERB("Exit: Number of bytes xferred:%zu\n", bytes_xfered);
+
+	return bytes_xfered;
+
+sys_interrupt:
+	spin_unlock_bh(&uci_chan->lock);
+
+	return ret;
+}
+
+static ssize_t mhi_uci_read(struct file *file,
+			    char __user *buf,
+			    size_t count,
+			    loff_t *ppos)
+{
+	struct uci_dev *uci_dev = file->private_data;
+	struct mhi_device *mhi_dev = uci_dev->mhi_dev;
+	struct uci_chan *uci_chan = &uci_dev->dl_chan;
+	struct uci_buf *uci_buf;
+	char *ptr;
+	size_t to_copy;
+	int ret = 0;
+
+	if (!buf)
+		return -EINVAL;
+
+	MSG_VERB("Client provided buf len:%zu\n", count);
+
+	/* confirm channel is active */
+	spin_lock_bh(&uci_chan->lock);
+	if (!uci_dev->enabled) {
+		spin_unlock_bh(&uci_chan->lock);
+		return -ERESTARTSYS;
+	}
+
+	/* No data available to read, wait */
+	if (!uci_chan->cur_buf && list_empty(&uci_chan->pending)) {
+		MSG_VERB("No data available to read, waiting\n");
+
+		spin_unlock_bh(&uci_chan->lock);
+		ret = wait_event_interruptible(uci_chan->wq,
+				(!uci_dev->enabled ||
+				 !list_empty(&uci_chan->pending)));
+		if (ret == -ERESTARTSYS) {
+			MSG_LOG("Exit signal caught for node\n");
+			return -ERESTARTSYS;
+		}
+
+		spin_lock_bh(&uci_chan->lock);
+		if (!uci_dev->enabled) {
+			MSG_LOG("node is disabled\n");
+			ret = -ERESTARTSYS;
+			goto read_error;
+		}
+	}
+
+	/* new read, get the next descriptor from the list */
+	if (!uci_chan->cur_buf) {
+		uci_buf = list_first_entry_or_null(&uci_chan->pending,
+						   struct uci_buf, node);
+		if (unlikely(!uci_buf)) {
+			ret = -EIO;
+			goto read_error;
+		}
+
+		list_del(&uci_buf->node);
+		uci_chan->cur_buf = uci_buf;
+		uci_chan->rx_size = uci_buf->len;
+		MSG_VERB("Got pkt of size:%zu\n", uci_chan->rx_size);
+	}
+
+	uci_buf = uci_chan->cur_buf;
+	spin_unlock_bh(&uci_chan->lock);
+
+	/* Copy the buffer to user space */
+	to_copy = min_t(size_t, count, uci_chan->rx_size);
+	ptr = uci_buf->data + (uci_buf->len - uci_chan->rx_size);
+	ret = copy_to_user(buf, ptr, to_copy);
+	if (ret)
+		return -EFAULT;
+
+	MSG_VERB("Copied %zu of %zu bytes\n", to_copy, uci_chan->rx_size);
+	uci_chan->rx_size -= to_copy;
+
+	/* we finished with this buffer, queue it back to hardware */
+	if (!uci_chan->rx_size) {
+		spin_lock_bh(&uci_chan->lock);
+		uci_chan->cur_buf = NULL;
+
+		if (uci_dev->enabled)
+			ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE,
+						 uci_buf->data, uci_dev->mtu,
+						 MHI_EOT);
+		else
+			ret = -ERESTARTSYS;
+
+		if (ret) {
+			MSG_ERR("Failed to recycle element\n");
+			kfree(uci_buf->data);
+			goto read_error;
+		}
+
+		spin_unlock_bh(&uci_chan->lock);
+	}
+
+	MSG_VERB("Returning %zu bytes\n", to_copy);
+
+	return to_copy;
+
+read_error:
+	spin_unlock_bh(&uci_chan->lock);
+
+	return ret;
+}
+
+static int mhi_uci_open(struct inode *inode, struct file *filp)
+{
+	struct uci_dev *uci_dev;
+	int ret = -EIO;
+	struct uci_buf *buf_itr, *tmp;
+	struct uci_chan *dl_chan;
+
+	mutex_lock(&mhi_uci_drv.lock);
+	list_for_each_entry(uci_dev, &mhi_uci_drv.head, node) {
+		if (uci_dev->devt == inode->i_rdev) {
+			ret = 0;
+			break;
+		}
+	}
+	mutex_unlock(&mhi_uci_drv.lock);
+
+	/* could not find a minor node */
+	if (ret)
+		return ret;
+
+	mutex_lock(&uci_dev->mutex);
+	if (!uci_dev->enabled) {
+		MSG_ERR("Node exists, but not in active state!\n");
+		goto error_open_chan;
+	}
+
+	uci_dev->ref_count++;
+
+	MSG_LOG("Node open, ref count %d\n", uci_dev->ref_count);
+
+	if (uci_dev->ref_count == 1) {
+		MSG_LOG("Starting channel\n");
+		ret = mhi_prepare_for_transfer(uci_dev->mhi_dev);
+		if (ret) {
+			MSG_ERR("Error starting transfer channels\n");
+			uci_dev->ref_count--;
+			goto error_open_chan;
+		}
+
+		ret = mhi_queue_inbound(uci_dev);
+		if (ret)
+			goto error_rx_queue;
+	}
+
+	filp->private_data = uci_dev;
+	mutex_unlock(&uci_dev->mutex);
+
+	return 0;
+
+ error_rx_queue:
+	dl_chan = &uci_dev->dl_chan;
+	mhi_unprepare_from_transfer(uci_dev->mhi_dev);
+	list_for_each_entry_safe(buf_itr, tmp, &dl_chan->pending, node) {
+		list_del(&buf_itr->node);
+		kfree(buf_itr->data);
+	}
+
+ error_open_chan:
+	mutex_unlock(&uci_dev->mutex);
+
+	return ret;
+}
+
+static const struct file_operations mhidev_fops = {
+	.open = mhi_uci_open,
+	.release = mhi_uci_release,
+	.read = mhi_uci_read,
+	.write = mhi_uci_write,
+	.poll = mhi_uci_poll,
+	.unlocked_ioctl = mhi_uci_ioctl,
+};
+
+static void mhi_uci_remove(struct mhi_device *mhi_dev)
+{
+	struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
+
+	MSG_LOG("Enter\n");
+
+	/* disable the node */
+	mutex_lock(&uci_dev->mutex);
+	spin_lock_irq(&uci_dev->dl_chan.lock);
+	spin_lock_irq(&uci_dev->ul_chan.lock);
+	uci_dev->enabled = false;
+	spin_unlock_irq(&uci_dev->ul_chan.lock);
+	spin_unlock_irq(&uci_dev->dl_chan.lock);
+	wake_up(&uci_dev->dl_chan.wq);
+	wake_up(&uci_dev->ul_chan.wq);
+
+	/* delete the node to prevent new opens */
+	device_destroy(mhi_uci_drv.class, uci_dev->devt);
+	uci_dev->dev = NULL;
+	mutex_lock(&mhi_uci_drv.lock);
+	list_del(&uci_dev->node);
+	mutex_unlock(&mhi_uci_drv.lock);
+
+	/* safe to free memory only if all file nodes are closed */
+	if (!uci_dev->ref_count) {
+		mutex_unlock(&uci_dev->mutex);
+		mutex_destroy(&uci_dev->mutex);
+		clear_bit(MINOR(uci_dev->devt), uci_minors);
+		kfree(uci_dev);
+		return;
+	}
+
+	mutex_unlock(&uci_dev->mutex);
+	MSG_LOG("Exit\n");
+}
+
+static int mhi_uci_probe(struct mhi_device *mhi_dev,
+			 const struct mhi_device_id *id)
+{
+	struct uci_dev *uci_dev;
+	int minor;
+	int dir;
+
+	uci_dev = kzalloc(sizeof(*uci_dev), GFP_KERNEL);
+	if (!uci_dev)
+		return -ENOMEM;
+
+	mutex_init(&uci_dev->mutex);
+	uci_dev->mhi_dev = mhi_dev;
+
+	minor = find_first_zero_bit(uci_minors, MAX_UCI_DEVICES);
+	if (minor >= MAX_UCI_DEVICES) {
+		kfree(uci_dev);
+		return -ENOSPC;
+	}
+
+	mutex_lock(&uci_dev->mutex);
+	mutex_lock(&mhi_uci_drv.lock);
+
+	uci_dev->devt = MKDEV(mhi_uci_drv.major, minor);
+	uci_dev->dev = device_create(mhi_uci_drv.class, &mhi_dev->dev,
+				     uci_dev->devt, uci_dev,
+				     DEVICE_NAME "_%04x_%02u.%02u.%02u%s%d",
+				     mhi_dev->dev_id, mhi_dev->domain,
+				     mhi_dev->bus, mhi_dev->slot, "_pipe_",
+				     mhi_dev->ul_chan_id);
+	set_bit(minor, uci_minors);
+
+	for (dir = 0; dir < 2; dir++) {
+		struct uci_chan *uci_chan = (dir) ?
+			&uci_dev->ul_chan : &uci_dev->dl_chan;
+		spin_lock_init(&uci_chan->lock);
+		init_waitqueue_head(&uci_chan->wq);
+		INIT_LIST_HEAD(&uci_chan->pending);
+	}
+
+	uci_dev->mtu = id->driver_data;
+	mhi_device_set_devdata(mhi_dev, uci_dev);
+	uci_dev->enabled = true;
+
+	list_add(&uci_dev->node, &mhi_uci_drv.head);
+	mutex_unlock(&mhi_uci_drv.lock);
+	mutex_unlock(&uci_dev->mutex);
+
+	MSG_LOG("channel:%s successfully probed\n", mhi_dev->chan_name);
+
+	return 0;
+}
+
+static void mhi_ul_xfer_cb(struct mhi_device *mhi_dev,
+			   struct mhi_result *mhi_result)
+{
+	struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
+	struct uci_chan *uci_chan = &uci_dev->ul_chan;
+
+	MSG_VERB("status:%d xfer_len:%zu\n", mhi_result->transaction_status,
+		 mhi_result->bytes_xferd);
+
+	kfree(mhi_result->buf_addr);
+	if (!mhi_result->transaction_status)
+		wake_up(&uci_chan->wq);
+}
+
+static void mhi_dl_xfer_cb(struct mhi_device *mhi_dev,
+			   struct mhi_result *mhi_result)
+{
+	struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
+	struct uci_chan *uci_chan = &uci_dev->dl_chan;
+	unsigned long flags;
+	struct uci_buf *buf;
+
+	MSG_VERB("status:%d receive_len:%zu\n", mhi_result->transaction_status,
+		 mhi_result->bytes_xferd);
+
+	if (mhi_result->transaction_status == -ENOTCONN) {
+		kfree(mhi_result->buf_addr);
+		return;
+	}
+
+	spin_lock_irqsave(&uci_chan->lock, flags);
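+	/* recover the uci_buf struct placed at the tail of the DMA buffer */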
+	buf = mhi_result->buf_addr + uci_dev->mtu;
+	buf->data = mhi_result->buf_addr;
+	buf->len = mhi_result->bytes_xferd;
+	list_add_tail(&buf->node, &uci_chan->pending);
+	spin_unlock_irqrestore(&uci_chan->lock, flags);
+
+	wake_up(&uci_chan->wq);
+}
+
+/* .driver_data stores max mtu */
+static const struct mhi_device_id mhi_uci_match_table[] = {
+	{ .chan = "LOOPBACK", .driver_data = 0x1000 },
+	{ .chan = "SAHARA", .driver_data = 0x8000 },
+	{ .chan = "EFS", .driver_data = 0x1000 },
+	{ .chan = "QMI0", .driver_data = 0x1000 },
+	{ .chan = "QMI1", .driver_data = 0x1000 },
+	{ .chan = "TF", .driver_data = 0x1000 },
+	{ .chan = "BL", .driver_data = 0x1000 },
+	{ .chan = "DUN", .driver_data = 0x1000 },
+	{ NULL },
+};
+
+static struct mhi_driver mhi_uci_driver = {
+	.id_table = mhi_uci_match_table,
+	.remove = mhi_uci_remove,
+	.probe = mhi_uci_probe,
+	.ul_xfer_cb = mhi_ul_xfer_cb,
+	.dl_xfer_cb = mhi_dl_xfer_cb,
+	.driver = {
+		.name = MHI_UCI_DRIVER_NAME,
+		.owner = THIS_MODULE,
+	},
+};
+
+static int __init mhi_uci_init(void)
+{
+	int ret;
+
+	ret = register_chrdev(0, MHI_UCI_DRIVER_NAME, &mhidev_fops);
+	if (ret < 0)
+		return ret;
+
+	mhi_uci_drv.major = ret;
+	mhi_uci_drv.class = class_create(THIS_MODULE, MHI_UCI_DRIVER_NAME);
+	if (IS_ERR(mhi_uci_drv.class))
+		return -ENODEV;
+
+	mutex_init(&mhi_uci_drv.lock);
+	INIT_LIST_HEAD(&mhi_uci_drv.head);
+
+	ret = mhi_driver_register(&mhi_uci_driver);
+	if (ret)
+		class_destroy(mhi_uci_drv.class);
+
+	return ret;
+}
+
+module_init(mhi_uci_init);
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("MHI_UCI");
+MODULE_DESCRIPTION("MHI UCI Driver");
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-04-27  2:23 ` [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface Sujeev Dias
@ 2018-04-27  7:22   ` Greg Kroah-Hartman
  2018-04-28 14:28     ` Sujeev Dias
  2018-04-27  7:23   ` Greg Kroah-Hartman
                     ` (5 subsequent siblings)
  6 siblings, 1 reply; 54+ messages in thread
From: Greg Kroah-Hartman @ 2018-04-27  7:22 UTC
  To: Sujeev Dias; +Cc: Arnd Bergmann, linux-kernel, linux-arm-msm, Tony Truong

On Thu, Apr 26, 2018 at 07:23:28PM -0700, Sujeev Dias wrote:
> MHI Host Interface is a communication protocol to be used by the host
> to control and communcate with modem over a high speed peripheral bus.
> This module will allow host to communicate with external devices that
> support MHI protocol.
> 
> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>

No one else has ever reviewed this code before?  That's not good, please
at the very least, have someone else at your company go over it first.
I don't want to be the ones having to point out all of the "obvious"
issues :)

> ---
>  Documentation/00-INDEX                        |    2 +
>  Documentation/devicetree/bindings/bus/mhi.txt |  141 +++
>  Documentation/mhi.txt                         |  235 ++++
>  drivers/bus/Kconfig                           |   17 +
>  drivers/bus/Makefile                          |    1 +
>  drivers/bus/mhi/Makefile                      |    8 +
>  drivers/bus/mhi/core/Makefile                 |    1 +
>  drivers/bus/mhi/core/mhi_boot.c               |  593 ++++++++++
>  drivers/bus/mhi/core/mhi_dtr.c                |  177 +++
>  drivers/bus/mhi/core/mhi_init.c               | 1290 +++++++++++++++++++++
>  drivers/bus/mhi/core/mhi_internal.h           |  732 ++++++++++++
>  drivers/bus/mhi/core/mhi_main.c               | 1476 +++++++++++++++++++++++++
>  drivers/bus/mhi/core/mhi_pm.c                 | 1177 ++++++++++++++++++++
>  include/linux/mhi.h                           |  694 ++++++++++++
>  include/linux/mod_devicetable.h               |   11 +
>  15 files changed, 6555 insertions(+)

And a 6555 line patch is a bit hard to consume all at once.  Can't this
be split up into much more reviewable chunks?  Look at how some of the
other new bus subsystems got added to the tree recently.  They were
submitted in longer patch series, but smaller sized patches
individually.  That makes things much easier to review.

For example, there is no reason your debugfs stuff needs to be in this
initial patch.  That should be in a separate one, right?  Same for
firmware download.  Please take the time to break this up into logical
steps.

Like my son's math teacher keeps telling him, "show your work, not just
an answer at the bottom of the page".

Also, it is required by the DT maintainers to split that file alone up
into a separate patch to be even considered for merging.

One thing I can tell you right now that isn't acceptable:

> +#ifdef CONFIG_MHI_DEBUG

Don't have a separate config option for debugging.  No one will enable
it, which makes it pointless.   Everything has to be dynamic these days.

> +
> +#define MHI_VERB(fmt, ...) do { \
> +		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
> +			pr_debug("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
> +} while (0)
> +
> +#else
> +
> +#define MHI_VERB(fmt, ...)
> +
> +#endif
> +
> +#define MHI_LOG(fmt, ...) do {	\
> +		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
> +			pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
> +} while (0)
> +
> +#define MHI_ERR(fmt, ...) do {	\
> +		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_ERROR) \
> +			pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
> +} while (0)
> +
> +#define MHI_CRITICAL(fmt, ...) do { \
> +		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_CRITICAL) \
> +			pr_alert("[C][%s] " fmt, __func__, ##__VA_ARGS__); \
> +} while (0)
> +

And do not roll your own debugging/logging macros.  Use what is given to
you (dev_info(), dev_err(), dev_dbg()), they are there for a reason.  By
going around them, you circumvent the whole of the kernel logging
infrastructure and declare that your tiny bus is somehow more "special"
than it.

And I doubt you want to make such a statement :)
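
For reference, a rough sketch of what those could collapse into,
assuming a struct device is reachable (the driver already has
&mhi_dev->dev); dev_dbg() is controlled at run time via dynamic
debug, so no driver-specific config option is needed:

	dev_dbg(&mhi_dev->dev, "queued %d rx descriptors\n", no_tre);
	dev_err(&mhi_dev->dev, "failed to queue skb, ret %d\n", ret);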

thanks,

greg k-h


* Re: [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-04-27  2:23 ` [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface Sujeev Dias
  2018-04-27  7:22   ` Greg Kroah-Hartman
@ 2018-04-27  7:23   ` Greg Kroah-Hartman
  2018-04-27 12:18   ` Arnd Bergmann
                     ` (4 subsequent siblings)
  6 siblings, 0 replies; 54+ messages in thread
From: Greg Kroah-Hartman @ 2018-04-27  7:23 UTC
  To: Sujeev Dias; +Cc: Arnd Bergmann, linux-kernel, linux-arm-msm, Tony Truong

On Thu, Apr 26, 2018 at 07:23:28PM -0700, Sujeev Dias wrote:
> --- /dev/null
> +++ b/drivers/bus/mhi/core/mhi_boot.c
> @@ -0,0 +1,593 @@
> +/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 and
> + * only version 2 as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
> + * GNU General Public License for more details.
> + */

Also, please use the proper SPDX headers on all new files, and do not
put the "boiler-plate" license crud here either.
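
For example, something along these lines at the top of each new .c
file (GPL-2.0 being the identifier for "GPL v2 only"):

	// SPDX-License-Identifier: GPL-2.0
	/* Copyright (c) 2018, The Linux Foundation. All rights reserved. */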

thanks,

greg k-h


* Re: [PATCH v1 3/4] mhi_bus: dev: netdev: add network interface driver
  2018-04-27  2:23 ` [PATCH v1 3/4] mhi_bus: dev: netdev: add network interface driver Sujeev Dias
@ 2018-04-27 11:19   ` Arnd Bergmann
  2018-04-28 15:25     ` Sujeev Dias
  0 siblings, 1 reply; 54+ messages in thread
From: Arnd Bergmann @ 2018-04-27 11:19 UTC
  To: Sujeev Dias
  Cc: Greg Kroah-Hartman, Linux Kernel Mailing List, linux-arm-msm,
	Tony Truong

On Fri, Apr 27, 2018 at 4:23 AM, Sujeev Dias <sdias@codeaurora.org> wrote:
> MHI based net device driver is used for transferring IP
> traffic between host and modem. Driver allows clients to
> transfer data using standard network interface.
>
> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
> ---
>  Documentation/devicetree/bindings/bus/mhi.txt |  36 ++
>  drivers/bus/Kconfig                           |   1 +
>  drivers/bus/mhi/Makefile                      |   2 +-
>  drivers/bus/mhi/devices/Kconfig               |  10 +
>  drivers/bus/mhi/devices/Makefile              |   1 +
>  drivers/bus/mhi/devices/mhi_netdev.c          | 893 ++++++++++++++++++++++++++

Network drivers go into drivers/net/, not into the bus subsystem.
You also need to Cc the netdev mailing list for review.

>  6 files changed, 942 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/bus/mhi/devices/Kconfig
>  create mode 100644 drivers/bus/mhi/devices/Makefile
>  create mode 100644 drivers/bus/mhi/devices/mhi_netdev.c
>
> diff --git a/Documentation/devicetree/bindings/bus/mhi.txt b/Documentation/devicetree/bindings/bus/mhi.txt
> index ea1b620..172ae7b 100644
> --- a/Documentation/devicetree/bindings/bus/mhi.txt
> +++ b/Documentation/devicetree/bindings/bus/mhi.txt
> @@ -139,3 +139,39 @@ mhi_controller {
>                 <driver specific properties>
>         };
>  };
> +
> +=============
> +Child Devices
> +=============
> +
> +MHI netdev properties
> +
> +- mhi,chan
> +  Usage: required
> +  Value type: <string>
> +  Definition: Name of the MHI channel this netdev driver supports
>
> +- mhi,mru
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Largest packet size, in bytes, that the interface can receive.
> +
> +- mhi,interface-name
> +  Usage: optional
> +  Value type: <string>
> +  Definition: Interface name to assign so clients can identify it
> +
> +- mhi,recycle-buf
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Set to true if the interface supports recycling buffers.


> +========
> +Example:
> +========
> +
> +mhi_rmnet@0 {
> +       mhi,chan = "IP_HW0";
> +       mhi,interface-name = "rmnet_mhi";
> +       mhi,mru = <0x4000>;
> +};

Who defines the "IP_HW0" and "rmnet_mhi" strings?

Shouldn't there be a "compatible" property?


> +#define MHI_NETDEV_DRIVER_NAME "mhi_netdev"
> +#define WATCHDOG_TIMEOUT (30 * HZ)
> +
> +#ifdef CONFIG_MHI_DEBUG
> +
> +#define MHI_ASSERT(cond, msg) do { \
> +       if (cond) \
> +               panic(msg); \
> +} while (0)
> +
> +#define MSG_VERB(fmt, ...) do { \
> +       if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_VERBOSE) \
> +               pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
> +} while (0)
> +
> +#define MSG_LOG(fmt, ...) do { \
> +       if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_INFO) \
> +               pr_err("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
> +} while (0)
> +
> +#define MSG_ERR(fmt, ...) do { \
> +       if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_ERROR) \
> +               pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
> +} while (0)
> +

Please remove these macros and use the normal kernel
helpers we have.

> +struct mhi_stats {
> +       u32 rx_int;
> +       u32 tx_full;
> +       u32 tx_pkts;
> +       u32 rx_budget_overflow;
> +       u32 rx_frag;
> +       u32 alloc_failed;
> +};
> +
> +/* important: must not exceed the size of sk_buff->cb (48 bytes) */
> +struct mhi_skb_priv {
> +       void *buf;
> +       size_t size;
> +       struct mhi_netdev *mhi_netdev;
> +};

Shouldn't all three members already be part of an skb?

You initialize them as

> +       u32 cur_mru = mhi_netdev->mru;
> +               skb_priv = (struct mhi_skb_priv *)skb->cb;
> +               skb_priv->buf = skb->data;
> +               skb_priv->size = cur_mru;
> +               skb_priv->mhi_netdev = mhi_netdev;

so I think you can remove the structure completely.

> +struct mhi_netdev {
> +       int alias;
> +       struct mhi_device *mhi_dev;
> +       spinlock_t rx_lock;
> +       bool enabled;
> +       rwlock_t pm_lock; /* state change lock */
> +       int (*rx_queue)(struct mhi_netdev *mhi_netdev, gfp_t gfp_t);
> +       struct work_struct alloc_work;
> +       int wake;
> +
> +       struct sk_buff_head rx_allocated;
> +
> +       u32 mru;
> +       const char *interface_name;
> +       struct napi_struct napi;
> +       struct net_device *ndev;
> +       struct sk_buff *frag_skb;
> +       bool recycle_buf;
> +
> +       struct mhi_stats stats;
> +       struct dentry *dentry;
> +       enum MHI_DEBUG_LEVEL msg_lvl;
> +};
> +
> +struct mhi_netdev_priv {
> +       struct mhi_netdev *mhi_netdev;
> +};

Please kill this intermediate structure and use the mhi_netdev
directly.
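
Roughly like this (a sketch; it assumes the devm_kzalloc() in probe
moves into the netdev allocation so that netdev_priv() returns the
driver state directly):

	ndev = alloc_netdev(sizeof(struct mhi_netdev), ifname,
			    NET_NAME_PREDICTABLE, mhi_netdev_setup);
	if (!ndev)
		return -ENOMEM;
	mhi_netdev = netdev_priv(ndev);

	/* ...and in the data path: */
	struct mhi_netdev *mhi_netdev = netdev_priv(dev);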

> +static struct mhi_driver mhi_netdev_driver;
> +static void mhi_netdev_create_debugfs(struct mhi_netdev *mhi_netdev);

Better remove forward declarations and sort the symbols in their
natural order.

> +static int mhi_netdev_alloc_skb(struct mhi_netdev *mhi_netdev, gfp_t gfp_t)
> +{
> +       u32 cur_mru = mhi_netdev->mru;
> +       struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
> +       struct mhi_skb_priv *skb_priv;
> +       int ret;
> +       struct sk_buff *skb;
> +       int no_tre = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
> +       int i;
> +
> +       for (i = 0; i < no_tre; i++) {
> +               skb = alloc_skb(cur_mru, gfp_t);
> +               if (!skb)
> +                       return -ENOMEM;
> +
> +               read_lock_bh(&mhi_netdev->pm_lock);
> +               if (unlikely(!mhi_netdev->enabled)) {
> +                       MSG_ERR("Interface not enabled\n");
> +                       ret = -EIO;
> +                       goto error_queue;
> +               }

You shouldn't normally get here when the device is disabled,
I think you can remove the pm_lock and enabled members
and instead make sure the netdev itself is stopped at that
point.

> +               spin_lock_bh(&mhi_netdev->rx_lock);
> +               ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, skb,
> +                                        skb_priv->size, MHI_EOT);
> +               spin_unlock_bh(&mhi_netdev->rx_lock);

What does this spinlock protect?

> +static void mhi_netdev_alloc_work(struct work_struct *work)
> +{
> +       struct mhi_netdev *mhi_netdev = container_of(work, struct mhi_netdev,
> +                                                  alloc_work);
> +       /* sleep about 1 sec and retry; that should be enough time
> +        * for the system to reclaim freed memory
> +        */
> +       const int sleep_ms = 1000;

That is a long time to wait for in the middle of a work function!

Have you tested that it's actually sufficient to make the driver survive
an out-of-memory situation?

If you do need it at all, maybe use a delayed work queue that reschedules
itself rather than retrying in the work queue.
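
A rough sketch, assuming alloc_work becomes a struct delayed_work
initialized with INIT_DELAYED_WORK():

	static void mhi_netdev_alloc_work(struct work_struct *work)
	{
		struct mhi_netdev *mhi_netdev =
			container_of(to_delayed_work(work),
				     struct mhi_netdev, alloc_work);

		/* retry in a second instead of sleeping in the worker */
		if (mhi_netdev_alloc_skb(mhi_netdev, GFP_KERNEL) == -ENOMEM)
			schedule_delayed_work(&mhi_netdev->alloc_work, HZ);
	}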

> +static int mhi_netdev_poll(struct napi_struct *napi, int budget)
> +{
> +       struct net_device *dev = napi->dev;
> +       struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
> +       struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
> +       struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
> +       int rx_work = 0;
> +       int ret;
> +
> +       MSG_VERB("Entered\n");
> +
> +       read_lock_bh(&mhi_netdev->pm_lock);
> +
> +       if (!mhi_netdev->enabled) {
> +               MSG_LOG("interface is disabled!\n");
> +               napi_complete(napi);
> +               read_unlock_bh(&mhi_netdev->pm_lock);
> +               return 0;
> +       }

Again, this shouldn't be required.

> +static int mhi_netdev_xmit(struct sk_buff *skb, struct net_device *dev)
> +{
> +       struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
> +       struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
> +       struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
> +       int res = 0;
> +       struct mhi_skb_priv *tx_priv;
> +
> +       MSG_VERB("Entered\n");
> +
> +       tx_priv = (struct mhi_skb_priv *)(skb->cb);
> +       tx_priv->mhi_netdev = mhi_netdev;
> +       read_lock_bh(&mhi_netdev->pm_lock);
> +
> +       if (unlikely(!mhi_netdev->enabled)) {
> +               /* The only reason the interface could be disabled while we
> +                * still receive data is an SSR (subsystem restart). We do
> +                * not want to stop the queue and return an error; instead,
> +                * flush all the uplink packets and report success.
> +                */
> +               res = NETDEV_TX_OK;
> +               dev_kfree_skb_any(skb);
> +               goto mhi_xmit_exit;
> +       }
> +
> +       res = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, skb, skb->len,
> +                                MHI_EOT);
> +       if (res) {
> +               MSG_VERB("Failed to queue with reason:%d\n", res);
> +               netif_stop_queue(dev);
> +               mhi_netdev->stats.tx_full++;
> +               res = NETDEV_TX_BUSY;
> +               goto mhi_xmit_exit;
> +       }

You don't seem to have any limit on the number of queued
transfers here. Since this is for a modem, it seems absolutely
essential that you keep track of how many packets have been
queued but not sent over the air yet.

Please add the necessary calls to netdev_sent_queue() here
and netdev_completed_queue() in the mhi_netdev_xfer_ul_cb
callback respectively to prevent uncontrolled buffer bloat.
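
Something like this (a sketch; completions could also be batched per
NAPI cycle rather than per packet):

	/* in mhi_netdev_xmit(), after mhi_queue_transfer() succeeds */
	netdev_sent_queue(dev, skb->len);

	/* in mhi_netdev_xfer_ul_cb(), before freeing the skb */
	netdev_completed_queue(ndev, 1, skb->len);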

Can you clarify whether mhi_netdev_xfer_ul_cb() is called
after the data has been read by the modem and queued
again in another buffer, or if it only gets called after we know
that the packet has been sent over the air?

> +static int mhi_netdev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
> +{
> +       int rc = 0;
> +
> +       switch (cmd) {
> +       default:
> +               /* don't fail any IOCTL right now */
> +               rc = 0;
> +               break;
> +       }
> +
> +       return rc;
> +}

That looks wrong. You should return an error for any unknown ioctl,
which is the default behavior you get without an ioctl function.
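
In other words, simply dropping the callback should do:

	static const struct net_device_ops mhi_netdev_ops_ip = {
		.ndo_open = mhi_netdev_open,
		.ndo_start_xmit = mhi_netdev_xmit,
		.ndo_change_mtu = mhi_netdev_change_mtu,
		/* no .ndo_do_ioctl: the core rejects unknown ioctls */
	};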

> +static void mhi_netdev_create_debugfs(struct mhi_netdev *mhi_netdev)
> +{
> +       char node_name[32];
> +       int i;
> +       const umode_t mode = 0600;
> +       struct dentry *file;
> +       struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
> +
> +       const struct {
> +               char *name;
> +               u32 *ptr;
> +       } debugfs_table[] = {
> +               {
> +                       "rx_int",
> +                       &mhi_netdev->stats.rx_int
> +               },
> +               {
> +                       "tx_full",
> +                       &mhi_netdev->stats.tx_full
> +               },
> +               {
> +                       "tx_pkts",
> +                       &mhi_netdev->stats.tx_pkts
> +               },
> +               {
> +                       "rx_budget_overflow",
> +                       &mhi_netdev->stats.rx_budget_overflow
> +               },
> +               {
> +                       "rx_fragmentation",
> +                       &mhi_netdev->stats.rx_frag
> +               },
> +               {
> +                       "alloc_failed",
> +                       &mhi_netdev->stats.alloc_failed
> +               },
> +               {
> +                       NULL, NULL
> +               },
> +       };

Please use the regular network statistics interfaces for this,
don't roll your own.
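
A minimal sketch of what that could look like, assuming the counters
move into (or get mirrored from) your existing stats struct; the field
mapping below is only illustrative:

static void mhi_netdev_get_stats64(struct net_device *dev,
                                  struct rtnl_link_stats64 *stats)
{
       struct mhi_netdev_priv *priv = netdev_priv(dev);
       struct mhi_netdev *mhi_netdev = priv->mhi_netdev;

       stats->tx_packets = mhi_netdev->stats.tx_pkts;
       stats->tx_dropped = mhi_netdev->stats.tx_full;
}

hooked up via .ndo_get_stats64 in your net_device_ops, plus ethtool
statistics for anything that doesn't map to a standard counter.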

> +       /* Both tx & rx client handle contain same device info */
> +       snprintf(node_name, sizeof(node_name), "%s_%04x_%02u.%02u.%02u_%u",
> +                mhi_netdev->interface_name, mhi_dev->dev_id, mhi_dev->domain,
> +                mhi_dev->bus, mhi_dev->slot, mhi_netdev->alias);
> +
> +       if (IS_ERR_OR_NULL(dentry))
> +               return;

IS_ERR_OR_NULL() is basically always a bug. debugfs returns NULL
only in the success case where debugfs is completely disabled,
which is what you were trying to handle separately.

> +static int mhi_netdev_probe(struct mhi_device *mhi_dev,
> +                           const struct mhi_device_id *id)
> +{
> +       int ret;
> +       struct mhi_netdev *mhi_netdev;
> +       struct device_node *of_node = mhi_dev->dev.of_node;
> +       char node_name[32];
> +
> +       if (!of_node)
> +               return -ENODEV;
> +
> +       mhi_netdev = devm_kzalloc(&mhi_dev->dev, sizeof(*mhi_netdev),
> +                                 GFP_KERNEL);
> +       if (!mhi_netdev)
> +               return -ENOMEM;
> +
> +       mhi_netdev->alias = of_alias_get_id(of_node, "mhi_netdev");
> +       if (mhi_netdev->alias < 0)
> +               return -ENODEV;

Where is that alias documented?

> +       ret = of_property_read_string(of_node, "mhi,interface-name",
> +                                     &mhi_netdev->interface_name);
> +       if (ret)
> +               mhi_netdev->interface_name = mhi_netdev_driver.driver.name;

Don't couple the Linux interface name to what the hardware is
called.

      Arnd

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 2/4] mhi_bus: controller: MHI support for QCOM modems
  2018-04-27  2:23 ` [PATCH v1 2/4] mhi_bus: controller: MHI support for QCOM modems Sujeev Dias
@ 2018-04-27 11:32   ` Arnd Bergmann
  2018-04-28 15:40     ` Sujeev Dias
  2018-04-28  3:05     ` kbuild test robot
  2018-04-28  3:12     ` kbuild test robot
  2 siblings, 1 reply; 54+ messages in thread
From: Arnd Bergmann @ 2018-04-27 11:32 UTC (permalink / raw)
  To: Sujeev Dias
  Cc: Greg Kroah-Hartman, Linux Kernel Mailing List, linux-arm-msm,
	Tony Truong

On Fri, Apr 27, 2018 at 4:23 AM, Sujeev Dias <sdias@codeaurora.org> wrote:
> QCOM PCIe based modems use MHI as the communication protocol.
> MHI control driver is the bus master for such modems. As the bus
> master driver, it oversees power management operations
> such as suspend, resume, powering on and off the device.
>

> +- compatible
> +  Usage: required
> +  Value type: <string>
> +  Definition: "qcom,mhi"
> +
> +- qcom,pci-dev-id
> +  Usage: optional
> +  Value type: <u32>
> +  Definition: PCIe device id of external modem to bind. If not set, any
> +       device is compatible with this node.
> +
> +- qcom,pci-domain
> +  Usage: required
> +  Value type: <u32>
> +  Definition: PCIe root complex the external modem is connected to
> +
> +- qcom,pci-bus
> +  Usage: required
> +  Value type: <u32>
> +  Definition: PCIe bus the external modem is connected to
> +
> +- qcom,pci-slot
> +  Usage: required
> +  Value type: <u32>
> +  Definition: PCIe slot as assigned by pci framework to external modem

These don't seem to make any sense: You seem to have access to
a regular pci_device already, so why do you need to duplicate the
information about it in DT?

> +- qcom,smmu-cfg
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Required SMMU configuration bitmask for PCIe bus.
> +       BIT mask:
> +       BIT(0) : Attach address mapping to endpoint device
> +       BIT(1) : Set attribute S1_BYPASS
> +       BIT(2) : Set attribute FAST
> +       BIT(3) : Set attribute ATOMIC
> +       BIT(4) : Set attribute FORCE_COHERENT
> +
> +- qcom,addr-win
> +  Usage: required if SMMU S1 translation is enabled
> +  Value type: Array of <u64>
> +  Definition: Pair of values describing iova start and stop address

Why do you need these? Can't that be handled by the PCI
layer?

> +- qcom,msm-bus,name
> +  Usage: required
> +  Value type: <string>
> +  Definition: string representing the bus scale client name to register

This probably belongs into a separate binding for the bus
scale driver, right?

> +static struct pci_driver mhi_pcie_driver;

Please try to reorder the symbols to avoid forward declarations.

> +static int mhi_platform_probe(struct platform_device *pdev)
> +{
> +       struct mhi_controller *mhi_cntrl;
> +       struct mhi_dev *mhi_dev;
> +       struct device_node *of_node = pdev->dev.of_node;
> +       u64 addr_win[2];
> +       int ret;
> +
> +       if (!of_node)
> +               return -ENODEV;
> +
> +       mhi_cntrl = mhi_alloc_controller(sizeof(*mhi_dev));
> +       if (!mhi_cntrl)
> +               return -ENOMEM;
> +
> +       mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
> +
> +       /* get pci bus topology for this node */
> +       ret = of_property_read_u32(of_node, "qcom,pci-dev-id",
> +                                  &mhi_cntrl->dev_id);
> +       if (ret)
> +               mhi_cntrl->dev_id = PCI_ANY_ID;
> +
> +       ret = of_property_read_u32(of_node, "qcom,pci-domain",
> +                                  &mhi_cntrl->domain);
> +       if (ret)
> +               goto error_probe;
> +
> +       ret = of_property_read_u32(of_node, "qcom,pci-bus", &mhi_cntrl->bus);
> +       if (ret)
> +               goto error_probe;
> +
> +       ret = of_property_read_u32(of_node, "qcom,pci-slot", &mhi_cntrl->slot);
> +       if (ret)
> +               goto error_probe;

Please explain what you are trying to do here, why do you register
two device drivers? It looks like they both refer to the same
hardware, so why isn't it sufficient to have the pci_driver?

       Arnd

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 4/4] mhi_bus: dev: uci: add user space interface driver
  2018-04-27  2:23 ` [PATCH v1 4/4] mhi_bus: dev: uci: add user space " Sujeev Dias
@ 2018-04-27 11:36   ` Arnd Bergmann
  2018-04-28  1:03     ` kbuild test robot
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 54+ messages in thread
From: Arnd Bergmann @ 2018-04-27 11:36 UTC (permalink / raw)
  To: Sujeev Dias
  Cc: Greg Kroah-Hartman, Linux Kernel Mailing List, linux-arm-msm,
	Tony Truong

On Fri, Apr 27, 2018 at 4:23 AM, Sujeev Dias <sdias@codeaurora.org> wrote:
> This module allows user space clients to transfer data
> between external modem and host using standard file
> operations.
>
> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
> ---
>  drivers/bus/mhi/devices/Kconfig   |   9 +
>  drivers/bus/mhi/devices/Makefile  |   1 +
>  drivers/bus/mhi/devices/mhi_uci.c | 662 ++++++++++++++++++++++++++++++++++++++
>  3 files changed, 672 insertions(+)
>  create mode 100644 drivers/bus/mhi/devices/mhi_uci.c
>
> diff --git a/drivers/bus/mhi/devices/Kconfig b/drivers/bus/mhi/devices/Kconfig
> index 40f964d..83b9673 100644
> --- a/drivers/bus/mhi/devices/Kconfig
> +++ b/drivers/bus/mhi/devices/Kconfig
> @@ -7,4 +7,13 @@ config MHI_NETDEV
>           MHI based net device driver for transferring IP traffic
>           between host and modem. By enabling this driver, clients
>           can transfer data using standard network interface.
> +
> +config MHI_UCI
> +       tristate "MHI UCI"
> +       depends on MHI_BUS
> +       help
> +         MHI based uci driver is for transferring data between host and
> +         modem using standard file operations from user space. Open, read,
> +         write, ioctl, and close operations are supported by this driver.
> +

Can you expand this description both in Kconfig and the patch
changelog? It's not clear at all what this is good for and what the
actual data is that you send over it.

What tools do you use to talk to them, can you point to a source
repository here? I think that would be useful in order to decide
whether we actually need a low-level interface at all, or if it
should all be accessed using high-level interfaces like the
network driver.

       Arnd

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-04-27  2:23 ` [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface Sujeev Dias
  2018-04-27  7:22   ` Greg Kroah-Hartman
  2018-04-27  7:23   ` Greg Kroah-Hartman
@ 2018-04-27 12:18   ` Arnd Bergmann
  2018-04-28 16:08     ` Sujeev Dias
  2018-04-28  0:28     ` kbuild test robot
                     ` (3 subsequent siblings)
  6 siblings, 1 reply; 54+ messages in thread
From: Arnd Bergmann @ 2018-04-27 12:18 UTC (permalink / raw)
  To: Sujeev Dias
  Cc: Greg Kroah-Hartman, Linux Kernel Mailing List, linux-arm-msm,
	Tony Truong

On Fri, Apr 27, 2018 at 4:23 AM, Sujeev Dias <sdias@codeaurora.org> wrote:

> diff --git a/Documentation/devicetree/bindings/bus/mhi.txt b/Documentation/devicetree/bindings/bus/mhi.txt
> new file mode 100644
> index 0000000..ea1b620
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/bus/mhi.txt
> @@ -0,0 +1,141 @@
> +MHI Host Interface
> +
> +MHI used by the host to control and communicate with modem over
> +high speed peripheral bus.
> +
> +==============
> +Node Structure
> +==============
> +
> +Main node properties:
> +
> +- mhi,max-channels
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Maximum number of channels supported by this controller
> +
> +- mhi,chan-cfg
> +  Usage: required
> +  Value type: Array of <u32>
> +  Definition: Array of tuples describing the channel configuration.
> +       1st element: Physical channel number
> +       2nd element: Transfer ring length in elements
> +       3rd element: Event ring associated with this channel
> +       4th element: Channel direction as defined by enum dma_data_direction
> +               1 = UL data transfer
> +               2 = DL data transfer
> +       5th element: Channel doorbell mode configuration as defined by
> +       enum MHI_BRSTMODE
> +               2 = burst mode disabled
> +               3 = burst mode enabled
> +       6th element: mhi doorbell configuration, valid only when burst mode
> +       enabled.
> +               0 = Use default (device specific) polling configuration
> +               For UL channels, value specifies the timer to poll MHI context
> +               in milliseconds.
> +               For DL channels, the threshold to poll the MHI context
> +               in multiple of eight ring element.
> +       7th element: Channel execution environment as defined by enum MHI_EE
> +               1 = Bootloader stage
> +               2 = AMSS mode
> +       8th element: data transfer type accepted as defined by enum
> +       MHI_XFER_TYPE
> +               0 = accept cpu address for buffer
> +               1 = accept skb
> +               2 = accept scatterlist
> +               3 = offload channel, does not accept any transfer type
> +       9th element: Bitwise configuration settings for the channel
> +               Bit mask:
> +               BIT(0) : LPM notify, the channel master requires LPM
> +               enter/exit notifications.
> +               BIT(1) : Offload channel, MHI host only involved in setting up
> +               the data pipe. Not involved in active data transfer.
> +               BIT(2) : Must switch to doorbell mode whenever MHI M0 state
> +               transition happens.
> +               BIT(3) : MHI bus driver pre-allocates buffers for this
> +               channel. If set, clients are not allowed to queue buffers.
> +               Valid only for DL direction.
> +
> +- mhi,chan-names
> +  Usage: required
> +  Value type: Array of <string>
> +  Definition: Channel names configured in mhi,chan-cfg.

I think the top-level structure would be better done by requiring
a child for each channel and moving the mhi,chan-cfg
into the children nodes, either as separate properties, or
some of the fields combined into a 'reg' property.
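
Roughly along these lines on the parsing side (the "reg" and
"mhi,ring-len" names are just examples, not a proposal):

       struct device_node *child;

       for_each_available_child_of_node(of_node, child) {
               u32 chan_nr, ring_len;

               if (of_property_read_u32(child, "reg", &chan_nr))
                       continue;
               of_property_read_u32(child, "mhi,ring-len", &ring_len);
               /* pick up the remaining per-channel fields the same way */
       }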

> +- mhi,fw-name
> +  Usage: optional
> +  Value type: <string>
> +  Definition: Firmware image name to upload

The device tree should have no knowledge of a firmware file name.

What you should do instead is to have the driver generate the file
name from the "compatible" property. You could use a lookup table
in the driver, or have some algorithmic way of doing that, but you
want the driver to have some form of intelligence to let it pick
a different firmware image, e.g. when you have added new
capabilities to the driver and the firmware but want to use an
existing device tree.
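
E.g. a sketch with made-up compatible strings and firmware paths:

static const struct of_device_id mhi_of_match[] = {
       { .compatible = "qcom,mhi-sdx", .data = "qcom/sdx/amss.mbn" },
       { /* sentinel */ },
};

and then in probe:

       const char *fw_name = of_device_get_match_data(&pdev->dev);

so a newer driver can pick a different image for the same device tree.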

> +Data structures
> +---------------
> +Host memory : Directly accessed by the host to manage the MHI data structures
> +and buffers. The device accesses the host memory over the PCIe interface.
> +
> +Channel context array : All channel configurations are organized in channel
> +context data array.
> +
> +struct __packed mhi_chan_ctxt;
> +struct mhi_ctxt.chan_ctxt;
> +
> +Transfer rings : Used by host to schedule work items for a channel and organized
> +as a circular queue of transfer descriptors (TD).
> +
> +struct __packed mhi_tre;
> +struct mhi_chan.tre_ring;
> +

This looks like you reinvented virtio. What are the reasons for not just
using virtio directly as we do for other modem interfaces?

> +static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
> +                          void *buf,
> +                          size_t size)
> +{
> +       u32 tx_status, val;
> +       int i, ret;
> +       void __iomem *base = mhi_cntrl->bhi;
> +       rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
> +       dma_addr_t phys = dma_map_single(mhi_cntrl->dev, buf, size,
> +                                        DMA_TO_DEVICE);

Please don't name a dma address 'phys', it's very confusing ;-)

> +struct mhi_bus mhi_bus;

One global instance? I suspect this duplicates data that is already
in the bus_type structure, so just use that instead.

> +void mhi_init_debugfs(struct mhi_controller *mhi_cntrl)
> +{
> +       struct dentry *dentry;
> +       char node[32];
> +
> +       if (!mhi_cntrl->parent)
> +               return;
> +
> +       snprintf(node, sizeof(node), "%04x_%02u:%02u.%02u",
> +                mhi_cntrl->dev_id, mhi_cntrl->domain, mhi_cntrl->bus,
> +                mhi_cntrl->slot);
> +
> +       dentry = debugfs_create_dir(node, mhi_cntrl->parent);
> +       if (IS_ERR_OR_NULL(dentry))
> +               return;

Using IS_ERR_OR_NULL() just means that you have misunderstood
the interface, you want IS_ERR() here.
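
I.e. just:

       dentry = debugfs_create_dir(node, mhi_cntrl->parent);
       if (IS_ERR(dentry))
               return;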

> +#if defined(CONFIG_OF)
> +static int of_parse_ev_cfg(struct mhi_controller *mhi_cntrl,
> +                          struct device_node *of_node)

Why the #ifdef? Do you ever build this on architectures that don't support
CONFIG_OF yet? Which ones?

> +struct __packed mhi_event_ctxt {
> +       u32 reserved : 8;
> +       u32 intmodc : 8;
> +       u32 intmodt : 16;
> +       u32 ertype;
> +       u32 msivec;
> +       u64 rbase;
> +       u64 rlen;
> +       u64 rp;
> +       u64 wp;
> +};

I would generally only mark those members as __packed that
don't already have natural alignment, such as

struct mhi_event_ctxt {
       u32 reserved : 8;
       u32 intmodc : 8;
       u32 intmodt : 16;
       u32 ertype;
       u32 msivec;
       u64 rbase __packed __aligned(4);
       u64 rlen __packed __aligned(4);
       u64 rp __packed __aligned(4);
       u64 wp __packed __aligned(4);
};

This clarifies where the hardware designers screwed up
without forcing bytewise access to the structures.

> +
> +#ifdef CONFIG_MHI_DEBUG
> +
> +#define MHI_ASSERT(cond, msg) do { \
> +       if (cond) \
> +               panic(msg); \
> +} while (0)
> +
> +#else
> +
> +#define MHI_ASSERT(cond, msg) do { \
> +       if (cond) { \
> +               MHI_ERR(msg); \
> +               WARN_ON(cond); \
> +       } \
> +} while (0)

Remove these.

> +struct mhi_controller {
> +       struct list_head node;
> +
> +       /* device node for iommu ops */
> +       struct device *dev;
> +       struct device_node *of_node;

Hmm, shouldn't the mhi_controller itself be a device that
is a child of ->dev?

> +       /* mmio base */
> +       void __iomem *regs;
> +       void __iomem *bhi;
> +       void __iomem *wake_db;
> +
> +       /* device topology */
> +       u32 dev_id;
> +       u32 domain;
> +       u32 bus;
> +       u32 slot;

These are even stranger. This looks PCI specific, but if you have
a pci_dev then you already know all of these values, while in the
case where you don't have a pci_dev they don't make sense.

> +       /* kernel log level */
> +       enum MHI_DEBUG_LEVEL klog_lvl;
> +
> +       /* private log level controller driver to set */
> +       enum MHI_DEBUG_LEVEL log_lvl;
> +
> +       /* controller specific data */
> +       void *priv_data;
> +       void *log_buf;
> +       struct dentry *dentry;
> +       struct dentry *parent;

All the debugfs stuff is a bit worrying.  It seems so pervasive in
the driver that it's hard to tell if any of it is actually used for more
than just debugging. It might be better to split that out of the
series and add back the debug files one at a time in separate
patches that each explain why the files are still required.
If you have successfully debugged a problem with one
file, it may have seemed useful but maybe it's not needed
for any future bugs.

You might also be able to get more value out of tracepoints
than debugfs files.

> +};
> +
> +/**
> + * struct mhi_device - mhi device structure associated bind to channel
> + * @dev: Device associated with the channels
> + * @mtu: Maximum # of bytes controller support
> + * @ul_chan_id: MHI channel id for UL transfer
> + * @dl_chan_id: MHI channel id for DL transfer
> + * @priv: Driver private data
> + */
> +struct mhi_device {
> +       struct device dev;
> +       u32 dev_id;
> +       u32 domain;
> +       u32 bus;
> +       u32 slot;

Again, shouldn't the pci stuff be part of the parent or grandparent
device?

> +
> +#if defined(CONFIG_MHI_BUS)

In what situation would you include this header without enabling
CONFIG_MHI_BUS?

> +
> +#ifdef CONFIG_MHI_DEBUG
> +
> +#define MHI_VERB(fmt, ...) do { \
> +               if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
> +                       pr_debug("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
> +} while (0)
> +
> +#else
> +
> +#define MHI_VERB(fmt, ...)
> +
> +#endif
> +
> +#define MHI_LOG(fmt, ...) do { \
> +               if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
> +                       pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
> +} while (0)
> +
> +#define MHI_ERR(fmt, ...) do { \
> +               if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_ERROR) \
> +                       pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
> +} while (0)
> +
> +#define MHI_CRITICAL(fmt, ...) do { \
> +               if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_CRITICAL) \
> +                       pr_alert("[C][%s] " fmt, __func__, ##__VA_ARGS__); \
> +} while (0)
> +

Again, these all should be removed. Just use dev_err() etc.
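
With the device pointer you get the context for free, along the lines
of (messages invented for illustration):

       dev_dbg(&mhi_dev->dev, "queued %d bytes\n", len);
       dev_err(&mhi_dev->dev, "firmware load failed: %d\n", ret);

and dynamic debug already gives you runtime control over the dev_dbg()
ones, so the klog_lvl machinery can go away too.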

> diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
> index 7d361be..1e11e30 100644
> --- a/include/linux/mod_devicetable.h
> +++ b/include/linux/mod_devicetable.h
> @@ -734,4 +734,15 @@ struct tb_service_id {
>  #define TBSVC_MATCH_PROTOCOL_VERSION   0x0004
>  #define TBSVC_MATCH_PROTOCOL_REVISION  0x0008
>
> +
> +/**
> + * struct mhi_device_id - MHI device identification
> + * @chan: MHI channel name
> + * @driver_data: driver data;
> + */
> +struct mhi_device_id {
> +       const char *chan;
> +       kernel_ulong_t driver_data;
> +};

I think you have to use a fixed-length string here, otherwise
module autoloading cannot work.
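
Something like this, with the size picked arbitrarily for the example:

#define MHI_NAME_SIZE 32

struct mhi_device_id {
       const char chan[MHI_NAME_SIZE];
       kernel_ulong_t driver_data;
};

so that file2alias can emit the modalias entries from a fixed layout.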

      Arnd

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-04-27  2:23 ` [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface Sujeev Dias
@ 2018-04-28  0:28     ` kbuild test robot
  2018-04-27  7:23   ` Greg Kroah-Hartman
                       ` (5 subsequent siblings)
  6 siblings, 0 replies; 54+ messages in thread
From: kbuild test robot @ 2018-04-28  0:28 UTC (permalink / raw)
  Cc: kbuild-all, Greg Kroah-Hartman, Arnd Bergmann, Sujeev Dias,
	linux-kernel, linux-arm-msm, Tony Truong

[-- Attachment #1: Type: text/plain, Size: 21610 bytes --]

Hi Sujeev,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.17-rc2 next-20180426]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Sujeev-Dias/mhi_bus-core-Add-support-for-MHI-host-interface/20180428-065959
config: i386-allmodconfig (attached as .config)
compiler: gcc-7 (Debian 7.3.0-16) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All error/warnings (new ones prefixed by >>):

   In file included from drivers/bus/mhi/core/mhi_init.c:23:0:
>> include/linux/mhi.h:658:15: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'int'
    static inlint int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
                  ^~~
>> drivers/bus/mhi/core/mhi_init.c:608:5: error: redefinition of 'mhi_device_configure'
    int mhi_device_configure(struct mhi_device *mhi_dev,
        ^~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_init.c:23:0:
   include/linux/mhi.h:545:19: note: previous definition of 'mhi_device_configure' was here
    static inline int mhi_device_configure(struct mhi_device *mhi_div,
                      ^~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_init.c:914:5: error: redefinition of 'of_register_mhi_controller'
    int of_register_mhi_controller(struct mhi_controller *mhi_cntrl)
        ^~~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_init.c:23:0:
   include/linux/mhi.h:599:19: note: previous definition of 'of_register_mhi_controller' was here
    static inline int of_register_mhi_controller(struct mhi_controller *mhi_cntrl)
                      ^~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_init.c:995:6: error: redefinition of 'mhi_unregister_mhi_controller'
    void mhi_unregister_mhi_controller(struct mhi_controller *mhi_cntrl)
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_init.c:23:0:
   include/linux/mhi.h:604:20: note: previous definition of 'mhi_unregister_mhi_controller' was here
    static inline void mhi_unregister_mhi_controller(
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_init.c:1015:24: error: redefinition of 'mhi_alloc_controller'
    struct mhi_controller *mhi_alloc_controller(size_t size)
                           ^~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_init.c:23:0:
   include/linux/mhi.h:594:38: note: previous definition of 'mhi_alloc_controller' was here
    static inline struct mhi_controller *mhi_alloc_controller(size_t size)
                                         ^~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_init.c:1028:5: error: redefinition of 'mhi_prepare_for_power_up'
    int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
        ^~~~~~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_init.c:23:0:
   include/linux/mhi.h:617:19: note: previous definition of 'mhi_prepare_for_power_up' was here
    static inline int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
                      ^~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_init.c:1070:6: error: redefinition of 'mhi_unprepare_after_power_down'
    void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl)
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_init.c:23:0:
   include/linux/mhi.h:637:20: note: previous definition of 'mhi_unprepare_after_power_down' was here
    static inline void mhi_unprepare_after_power_down(
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_init.c:1225:5: error: redefinition of 'mhi_driver_register'
    int mhi_driver_register(struct mhi_driver *mhi_drv)
        ^~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_init.c:23:0:
   include/linux/mhi.h:536:19: note: previous definition of 'mhi_driver_register' was here
    static inline int mhi_driver_register(struct mhi_driver *mhi_drv)
                      ^~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_init.c:1239:6: error: redefinition of 'mhi_driver_unregister'
    void mhi_driver_unregister(struct mhi_driver *mhi_drv)
         ^~~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_init.c:23:0:
   include/linux/mhi.h:541:20: note: previous definition of 'mhi_driver_unregister' was here
    static inline void mhi_driver_unregister(struct mhi_driver *mhi_drv)
                       ^~~~~~~~~~~~~~~~~~~~~
--
   In file included from drivers/bus/mhi/core/mhi_main.c:23:0:
>> include/linux/mhi.h:658:15: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'int'
    static inlint int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
                  ^~~
   drivers/bus/mhi/core/mhi_main.c: In function 'mhi_debugfs_mhi_event_show':
>> drivers/bus/mhi/core/mhi_main.c:1335:44: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 5 has type 'dma_addr_t {aka unsigned int}' [-Wformat=]
           " rp:0x%llx wp:0x%llx local_rp:0x%llx db:0x%llx\n",
                                            ~~~^
                                            %x
   drivers/bus/mhi/core/mhi_main.c:1337:8:
           mhi_to_physical(ring, ring->rp),
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~      
   drivers/bus/mhi/core/mhi_main.c:1335:54: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 6 has type 'dma_addr_t {aka unsigned int}' [-Wformat=]
           " rp:0x%llx wp:0x%llx local_rp:0x%llx db:0x%llx\n",
                                                      ~~~^
                                                      %x
   drivers/bus/mhi/core/mhi_main.c:1338:8:
           mhi_event->db_cfg.db_val);
           ~~~~~~~~~~~~~~~~~~~~~~~~                       
   drivers/bus/mhi/core/mhi_main.c: In function 'mhi_debugfs_mhi_chan_show':
   drivers/bus/mhi/core/mhi_main.c:1368:57: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 6 has type 'dma_addr_t {aka unsigned int}' [-Wformat=]
           " base:0x%llx len:0x%llx wp:0x%llx local_rp:0x%llx local_wp:0x%llx db:0x%llx\n",
                                                         ~~~^
                                                         %x
   drivers/bus/mhi/core/mhi_main.c:1371:8:
           mhi_to_physical(ring, ring->rp),
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~                   
   drivers/bus/mhi/core/mhi_main.c:1368:73: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 7 has type 'dma_addr_t {aka unsigned int}' [-Wformat=]
           " base:0x%llx len:0x%llx wp:0x%llx local_rp:0x%llx local_wp:0x%llx db:0x%llx\n",
                                                                         ~~~^
                                                                         %x
   drivers/bus/mhi/core/mhi_main.c:1372:8:
           mhi_to_physical(ring, ring->wp),
           ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~                                   
   drivers/bus/mhi/core/mhi_main.c:1368:83: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 8 has type 'dma_addr_t {aka unsigned int}' [-Wformat=]
           " base:0x%llx len:0x%llx wp:0x%llx local_rp:0x%llx local_wp:0x%llx db:0x%llx\n",
                                                                                   ~~~^
                                                                                   %x
   drivers/bus/mhi/core/mhi_main.c:1373:8:
           mhi_chan->db_cfg.db_val);
           ~~~~~~~~~~~~~~~~~~~~~~~                                                     
   drivers/bus/mhi/core/mhi_main.c: At top level:
>> drivers/bus/mhi/core/mhi_main.c:1381:5: error: redefinition of 'mhi_prepare_for_transfer'
    int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
        ^~~~~~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_main.c:23:0:
   include/linux/mhi.h:566:19: note: previous definition of 'mhi_prepare_for_transfer' was here
    static inline int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
                      ^~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_main.c:1417:6: error: redefinition of 'mhi_unprepare_from_transfer'
    void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
         ^~~~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_main.c:23:0:
   include/linux/mhi.h:571:20: note: previous definition of 'mhi_unprepare_from_transfer' was here
    static inline void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_main.c:1434:5: error: redefinition of 'mhi_get_no_free_descriptors'
    int mhi_get_no_free_descriptors(struct mhi_device *mhi_dev,
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_main.c:23:0:
   include/linux/mhi.h:576:19: note: previous definition of 'mhi_get_no_free_descriptors' was here
    static inline int mhi_get_no_free_descriptors(struct mhi_device *mhi_dev,
                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_main.c:1446:24: error: redefinition of 'mhi_bdf_to_controller'
    struct mhi_controller *mhi_bdf_to_controller(u32 domain,
                           ^~~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_main.c:23:0:
   include/linux/mhi.h:609:38: note: previous definition of 'mhi_bdf_to_controller' was here
    static inline struct mhi_controller *mhi_bdf_to_controller(u32 domain,
                                         ^~~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_main.c:1462:5: error: redefinition of 'mhi_poll'
    int mhi_poll(struct mhi_device *mhi_dev,
        ^~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_main.c:23:0:
   include/linux/mhi.h:582:19: note: previous definition of 'mhi_poll' was here
    static inline int mhi_poll(struct mhi_device *mhi_dev, u32 budget)
                      ^~~~~~~~
--
   In file included from drivers/bus/mhi/core/mhi_pm.c:24:0:
>> include/linux/mhi.h:658:15: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'int'
    static inlint int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
                  ^~~
>> drivers/bus/mhi/core/mhi_pm.c:782:5: error: redefinition of 'mhi_async_power_up'
    int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
        ^~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_pm.c:24:0:
   include/linux/mhi.h:622:19: note: previous definition of 'mhi_async_power_up' was here
    static inline int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
                      ^~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_pm.c:873:6: error: redefinition of 'mhi_power_down'
    void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
         ^~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_pm.c:24:0:
   include/linux/mhi.h:632:20: note: previous definition of 'mhi_power_down' was here
    static inline void mhi_power_down(struct mhi_controller *mhi_cntrl,
                       ^~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_pm.c:907:5: error: redefinition of 'mhi_sync_power_up'
    int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
        ^~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_pm.c:24:0:
   include/linux/mhi.h:627:19: note: previous definition of 'mhi_sync_power_up' was here
    static inline int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
                      ^~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_pm.c:923:5: error: redefinition of 'mhi_pm_suspend'
    int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
        ^~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_pm.c:24:0:
   include/linux/mhi.h:642:19: note: previous definition of 'mhi_pm_suspend' was here
    static inline int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
                      ^~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_pm.c:1012:5: error: redefinition of 'mhi_pm_resume'
    int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
        ^~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_pm.c:24:0:
   include/linux/mhi.h:647:19: note: previous definition of 'mhi_pm_resume' was here
    static inline int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
                      ^~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_pm.c:1097:6: error: redefinition of 'mhi_device_get'
    void mhi_device_get(struct mhi_device *mhi_dev)
         ^~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_pm.c:24:0:
   include/linux/mhi.h:553:20: note: previous definition of 'mhi_device_get' was here
    static inline void mhi_device_get(struct mhi_device *mhi_dev)
                       ^~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_pm.c:1108:5: error: redefinition of 'mhi_device_get_sync'
    int mhi_device_get_sync(struct mhi_device *mhi_dev)
        ^~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_pm.c:24:0:
   include/linux/mhi.h:557:19: note: previous definition of 'mhi_device_get_sync' was here
    static inline int mhi_device_get_sync(struct mhi_device *mhi_dev)
                      ^~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/core/mhi_pm.c:1121:6: error: redefinition of 'mhi_device_put'
    void mhi_device_put(struct mhi_device *mhi_dev)
         ^~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_pm.c:24:0:
   include/linux/mhi.h:562:20: note: previous definition of 'mhi_device_put' was here
    static inline void mhi_device_put(struct mhi_device *mhi_dev)
                       ^~~~~~~~~~~~~~
--
   In file included from drivers/bus/mhi/core/mhi_boot.c:26:0:
>> include/linux/mhi.h:658:15: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'int'
    static inlint int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
                  ^~~
>> drivers/bus/mhi/core/mhi_boot.c:138:5: error: redefinition of 'mhi_download_rddm_img'
    int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic)
        ^~~~~~~~~~~~~~~~~~~~~
   In file included from drivers/bus/mhi/core/mhi_boot.c:26:0:
   include/linux/mhi.h:652:19: note: previous definition of 'mhi_download_rddm_img' was here
    static inline int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl,
                      ^~~~~~~~~~~~~~~~~~~~~
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/wait.h:7,
                    from include/linux/wait_bit.h:8,
                    from include/linux/fs.h:6,
                    from include/linux/debugfs.h:15,
                    from drivers/bus/mhi/core/mhi_boot.c:13:
   drivers/bus/mhi/core/mhi_boot.c: In function 'mhi_download_rddm_img':
   include/linux/kern_levels.h:5:18: warning: format '%lx' expects argument of type 'long unsigned int', but argument 5 has type 'size_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:14:19: note: in expansion of macro 'KERN_SOH'
    #define KERN_INFO KERN_SOH "6" /* informational */
                      ^~~~~~~~
   include/linux/printk.h:311:9: note: in expansion of macro 'KERN_INFO'
     printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~~
>> include/linux/mhi.h:680:4: note: in expansion of macro 'pr_info'
       pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
       ^~~~~~~
>> drivers/bus/mhi/core/mhi_boot.c:196:2: note: in expansion of macro 'MHI_LOG'
     MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
     ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:196:41: note: format string is defined here
     MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
                                          ~~^
                                          %x
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/wait.h:7,
                    from include/linux/wait_bit.h:8,
                    from include/linux/fs.h:6,
                    from include/linux/debugfs.h:15,
                    from drivers/bus/mhi/core/mhi_boot.c:13:
   drivers/bus/mhi/core/mhi_boot.c: In function 'mhi_fw_load_amss':
>> include/linux/kern_levels.h:5:18: warning: format '%lx' expects argument of type 'long unsigned int', but argument 5 has type 'size_t {aka const unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:14:19: note: in expansion of macro 'KERN_SOH'
    #define KERN_INFO KERN_SOH "6" /* informational */
                      ^~~~~~~~
   include/linux/printk.h:311:9: note: in expansion of macro 'KERN_INFO'
     printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~~
>> include/linux/mhi.h:680:4: note: in expansion of macro 'pr_info'
       pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
       ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:248:2: note: in expansion of macro 'MHI_LOG'
     MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
     ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:248:41: note: format string is defined here
     MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
                                          ~~^
                                          %x
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/wait.h:7,
                    from include/linux/wait_bit.h:8,
                    from include/linux/fs.h:6,
                    from include/linux/debugfs.h:15,
                    from drivers/bus/mhi/core/mhi_boot.c:13:
   drivers/bus/mhi/core/mhi_boot.c: In function 'mhi_alloc_bhie_table':
   include/linux/kern_levels.h:5:18: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 4 has type 'dma_addr_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:14:19: note: in expansion of macro 'KERN_SOH'
    #define KERN_INFO KERN_SOH "6" /* informational */
                      ^~~~~~~~
   include/linux/printk.h:311:9: note: in expansion of macro 'KERN_INFO'
     printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~~
>> include/linux/mhi.h:680:4: note: in expansion of macro 'pr_info'
       pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
       ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:403:3: note: in expansion of macro 'MHI_LOG'
      MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
      ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:403:34: note: format string is defined here
      MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
                                  ~~~^
                                  %x
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/wait.h:7,
                    from include/linux/wait_bit.h:8,
                    from include/linux/fs.h:6,
                    from include/linux/debugfs.h:15,
                    from drivers/bus/mhi/core/mhi_boot.c:13:
   include/linux/kern_levels.h:5:18: warning: format '%lu' expects argument of type 'long unsigned int', but argument 5 has type 'size_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:14:19: note: in expansion of macro 'KERN_SOH'
    #define KERN_INFO KERN_SOH "6" /* informational */
                      ^~~~~~~~
   include/linux/printk.h:311:9: note: in expansion of macro 'KERN_INFO'
     printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~~
>> include/linux/mhi.h:680:4: note: in expansion of macro 'pr_info'
       pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
       ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:403:3: note: in expansion of macro 'MHI_LOG'
      MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
      ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:403:43: note: format string is defined here
      MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
                                            ~~^
                                            %u
..

vim +658 include/linux/mhi.h

   657	
 > 658	static inlint int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
   659	{
   660		return -EINVAL;
   661	}
   662	
   663	#endif
   664	
   665	#ifdef CONFIG_MHI_DEBUG
   666	
   667	#define MHI_VERB(fmt, ...) do { \
   668			if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
   669				pr_debug("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
   670	} while (0)
   671	
   672	#else
   673	
   674	#define MHI_VERB(fmt, ...)
   675	
   676	#endif
   677	
   678	#define MHI_LOG(fmt, ...) do {	\
   679			if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
 > 680				pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
   681	} while (0)
   682	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 62932 bytes --]

^ permalink raw reply	[flat|nested] 54+ messages in thread

    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:14:19: note: in expansion of macro 'KERN_SOH'
    #define KERN_INFO KERN_SOH "6" /* informational */
                      ^~~~~~~~
   include/linux/printk.h:311:9: note: in expansion of macro 'KERN_INFO'
     printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~~
>> include/linux/mhi.h:680:4: note: in expansion of macro 'pr_info'
       pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
       ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:248:2: note: in expansion of macro 'MHI_LOG'
     MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
     ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:248:41: note: format string is defined here
     MHI_LOG("Upper:0x%x Lower:0x%x len:0x%lx sequence:%u\n",
                                          ~~^
                                          %x
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/wait.h:7,
                    from include/linux/wait_bit.h:8,
                    from include/linux/fs.h:6,
                    from include/linux/debugfs.h:15,
                    from drivers/bus/mhi/core/mhi_boot.c:13:
   drivers/bus/mhi/core/mhi_boot.c: In function 'mhi_alloc_bhie_table':
   include/linux/kern_levels.h:5:18: warning: format '%llx' expects argument of type 'long long unsigned int', but argument 4 has type 'dma_addr_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:14:19: note: in expansion of macro 'KERN_SOH'
    #define KERN_INFO KERN_SOH "6" /* informational */
                      ^~~~~~~~
   include/linux/printk.h:311:9: note: in expansion of macro 'KERN_INFO'
     printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~~
>> include/linux/mhi.h:680:4: note: in expansion of macro 'pr_info'
       pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
       ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:403:3: note: in expansion of macro 'MHI_LOG'
      MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
      ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:403:34: note: format string is defined here
      MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
                                  ~~~^
                                  %x
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/wait.h:7,
                    from include/linux/wait_bit.h:8,
                    from include/linux/fs.h:6,
                    from include/linux/debugfs.h:15,
                    from drivers/bus/mhi/core/mhi_boot.c:13:
   include/linux/kern_levels.h:5:18: warning: format '%lu' expects argument of type 'long unsigned int', but argument 5 has type 'size_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:14:19: note: in expansion of macro 'KERN_SOH'
    #define KERN_INFO KERN_SOH "6" /* informational */
                      ^~~~~~~~
   include/linux/printk.h:311:9: note: in expansion of macro 'KERN_INFO'
     printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~~
>> include/linux/mhi.h:680:4: note: in expansion of macro 'pr_info'
       pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
       ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:403:3: note: in expansion of macro 'MHI_LOG'
      MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
      ^~~~~~~
   drivers/bus/mhi/core/mhi_boot.c:403:43: note: format string is defined here
      MHI_LOG("Entry:%d Address:0x%llx size:%lu\n", i,
                                            ~~^
                                            %u
..
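
The -Wformat warnings above come from printing size_t values with
%lx/%lu/%ld, which only matches on 64-bit builds. Per the kernel's
Documentation/printk-formats.txt, the portable specifier for size_t is
%zu (or %zx for hex), and dma_addr_t is printed via %pad, which takes
the address of the variable. A sketch with illustrative argument names
(the actual arguments are not shown in this report):

	MHI_LOG("Upper:0x%x Lower:0x%x len:0x%zx sequence:%u\n",
		upper, lower, len, sequence);

	MHI_LOG("Entry:%d Address:%pad size:%zu\n", i, &dma_addr, size);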

vim +658 include/linux/mhi.h

   657	
 > 658	static inlint int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
   659	{
   660		return -EINVAL;
   661	}
   662	
   663	#endif
   664	
   665	#ifdef CONFIG_MHI_DEBUG
   666	
   667	#define MHI_VERB(fmt, ...) do { \
   668			if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
   669				pr_debug("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
   670	} while (0)
   671	
   672	#else
   673	
   674	#define MHI_VERB(fmt, ...)
   675	
   676	#endif
   677	
   678	#define MHI_LOG(fmt, ...) do {	\
   679			if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
 > 680				pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
   681	} while (0)
   682	
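
For reference, the error at include/linux/mhi.h:658 is a plain typo
("inlint" for "inline"); the corrected stub would read:

	static inline int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
	{
		return -EINVAL;
	}

The long run of redefinition errors in this report also suggests the
inline stubs in mhi.h are compiled even when the real implementations
in drivers/bus/mhi/core/ are built. A sketch of the usual guard
pattern follows; the exact Kconfig symbol (CONFIG_MHI_BUS here) is an
assumption, not taken from the patch:

	#if IS_ENABLED(CONFIG_MHI_BUS)
	int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl);
	#else
	/* stubs are only visible when the MHI core is not built */
	static inline int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
	{
		return -EINVAL;
	}
	#endif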

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 62932 bytes --]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 4/4] mhi_bus: dev: uci: add user space interface driver
  2018-04-27  2:23 ` [PATCH v1 4/4] mhi_bus: dev: uci: add user space " Sujeev Dias
@ 2018-04-28  1:03     ` kbuild test robot
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 54+ messages in thread
From: kbuild test robot @ 2018-04-28  1:03 UTC (permalink / raw)
  Cc: kbuild-all, Greg Kroah-Hartman, Arnd Bergmann, Sujeev Dias,
	linux-kernel, linux-arm-msm, Tony Truong

[-- Attachment #1: Type: text/plain, Size: 19534 bytes --]

Hi Sujeev,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v4.17-rc2 next-20180426]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Sujeev-Dias/mhi_bus-core-Add-support-for-MHI-host-interface/20180428-065959
config: i386-allmodconfig (attached as .config)
compiler: gcc-7 (Debian 7.3.0-16) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All warnings (new ones prefixed by >>):

   In file included from drivers/bus/mhi/devices/mhi_uci.c:26:0:
   include/linux/mhi.h:658:15: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'int'
    static inlint int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
                  ^~~
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/kobject.h:19,
                    from include/linux/cdev.h:5,
                    from drivers/bus/mhi/devices/mhi_uci.c:13:
   drivers/bus/mhi/devices/mhi_uci.c: In function 'mhi_queue_inbound':
>> include/linux/kern_levels.h:5:18: warning: format '%ld' expects argument of type 'long int', but argument 5 has type 'size_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:11:18: note: in expansion of macro 'KERN_SOH'
    #define KERN_ERR KERN_SOH "3" /* error conditions */
                     ^~~~~~~~
   include/linux/printk.h:304:9: note: in expansion of macro 'KERN_ERR'
     printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~
>> drivers/bus/mhi/devices/mhi_uci.c:73:4: note: in expansion of macro 'pr_err'
       pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__); \
       ^~~~~~
>> drivers/bus/mhi/devices/mhi_uci.c:114:3: note: in expansion of macro 'MSG_VERB'
      MSG_VERB("Allocated buf %d of %d size %ld\n", i, nr_trbs, mtu);
      ^~~~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:114:43: note: format string is defined here
      MSG_VERB("Allocated buf %d of %d size %ld\n", i, nr_trbs, mtu);
                                            ~~^
                                            %d
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/kobject.h:19,
                    from include/linux/cdev.h:5,
                    from drivers/bus/mhi/devices/mhi_uci.c:13:
   drivers/bus/mhi/devices/mhi_uci.c: In function 'mhi_uci_write':
   include/linux/kern_levels.h:5:18: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:11:18: note: in expansion of macro 'KERN_SOH'
    #define KERN_ERR KERN_SOH "3" /* error conditions */
                     ^~~~~~~~
   include/linux/printk.h:304:9: note: in expansion of macro 'KERN_ERR'
     printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~
>> drivers/bus/mhi/devices/mhi_uci.c:73:4: note: in expansion of macro 'pr_err'
       pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__); \
       ^~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:243:2: note: in expansion of macro 'MSG_VERB'
     MSG_VERB("Enter: to xfer:%lu bytes\n", count);
     ^~~~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:243:29: note: format string is defined here
     MSG_VERB("Enter: to xfer:%lu bytes\n", count);
                              ~~^
                              %u
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/kobject.h:19,
                    from include/linux/cdev.h:5,
                    from drivers/bus/mhi/devices/mhi_uci.c:13:
   include/linux/kern_levels.h:5:18: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:11:18: note: in expansion of macro 'KERN_SOH'
    #define KERN_ERR KERN_SOH "3" /* error conditions */
                     ^~~~~~~~
   include/linux/printk.h:304:9: note: in expansion of macro 'KERN_ERR'
     printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:89:4: note: in expansion of macro 'pr_err'
       pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
       ^~~~~~
>> drivers/bus/mhi/devices/mhi_uci.c:266:4: note: in expansion of macro 'MSG_ERR'
       MSG_ERR("Failed to allocate memory %lu\n", xfer_size);
       ^~~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:266:41: note: format string is defined here
       MSG_ERR("Failed to allocate memory %lu\n", xfer_size);
                                          ~~^
                                          %u
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/kobject.h:19,
                    from include/linux/cdev.h:5,
                    from drivers/bus/mhi/devices/mhi_uci.c:13:
   include/linux/kern_levels.h:5:18: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:11:18: note: in expansion of macro 'KERN_SOH'
    #define KERN_ERR KERN_SOH "3" /* error conditions */
                     ^~~~~~~~
   include/linux/printk.h:304:9: note: in expansion of macro 'KERN_ERR'
     printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~
>> drivers/bus/mhi/devices/mhi_uci.c:73:4: note: in expansion of macro 'pr_err'
       pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__); \
       ^~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:295:2: note: in expansion of macro 'MSG_VERB'
     MSG_VERB("Exit: Number of bytes xferred:%lu\n", bytes_xfered);
     ^~~~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:295:44: note: format string is defined here
     MSG_VERB("Exit: Number of bytes xferred:%lu\n", bytes_xfered);
                                             ~~^
                                             %u
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/kobject.h:19,
                    from include/linux/cdev.h:5,
                    from drivers/bus/mhi/devices/mhi_uci.c:13:
   drivers/bus/mhi/devices/mhi_uci.c: In function 'mhi_uci_read':
   include/linux/kern_levels.h:5:18: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:11:18: note: in expansion of macro 'KERN_SOH'
    #define KERN_ERR KERN_SOH "3" /* error conditions */
                     ^~~~~~~~
   include/linux/printk.h:304:9: note: in expansion of macro 'KERN_ERR'
     printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~
>> drivers/bus/mhi/devices/mhi_uci.c:73:4: note: in expansion of macro 'pr_err'
       pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__); \
       ^~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:321:2: note: in expansion of macro 'MSG_VERB'
     MSG_VERB("Client provided buf len:%lu\n", count);
     ^~~~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:321:38: note: format string is defined here
     MSG_VERB("Client provided buf len:%lu\n", count);
                                       ~~^
                                       %u
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/kobject.h:19,
                    from include/linux/cdev.h:5,
                    from drivers/bus/mhi/devices/mhi_uci.c:13:
   include/linux/kern_levels.h:5:18: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:11:18: note: in expansion of macro 'KERN_SOH'
    #define KERN_ERR KERN_SOH "3" /* error conditions */
                     ^~~~~~~~
   include/linux/printk.h:304:9: note: in expansion of macro 'KERN_ERR'
     printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~
>> drivers/bus/mhi/devices/mhi_uci.c:73:4: note: in expansion of macro 'pr_err'
       pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__); \
       ^~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:376:2: note: in expansion of macro 'MSG_VERB'
     MSG_VERB("Copied %lu of %lu bytes\n", to_copy, uci_chan->rx_size);
     ^~~~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:376:21: note: format string is defined here
     MSG_VERB("Copied %lu of %lu bytes\n", to_copy, uci_chan->rx_size);
                      ~~^
                      %u
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/kobject.h:19,
                    from include/linux/cdev.h:5,
                    from drivers/bus/mhi/devices/mhi_uci.c:13:
   include/linux/kern_levels.h:5:18: warning: format '%lu' expects argument of type 'long unsigned int', but argument 4 has type 'size_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:11:18: note: in expansion of macro 'KERN_SOH'
    #define KERN_ERR KERN_SOH "3" /* error conditions */
                     ^~~~~~~~
   include/linux/printk.h:304:9: note: in expansion of macro 'KERN_ERR'
     printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~
>> drivers/bus/mhi/devices/mhi_uci.c:73:4: note: in expansion of macro 'pr_err'
       pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__); \
       ^~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:376:2: note: in expansion of macro 'MSG_VERB'
     MSG_VERB("Copied %lu of %lu bytes\n", to_copy, uci_chan->rx_size);
     ^~~~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:376:28: note: format string is defined here
     MSG_VERB("Copied %lu of %lu bytes\n", to_copy, uci_chan->rx_size);
                             ~~^
                             %u
   In file included from include/linux/printk.h:7:0,
                    from include/linux/kernel.h:14,
                    from include/linux/list.h:9,
                    from include/linux/kobject.h:19,
                    from include/linux/cdev.h:5,
                    from drivers/bus/mhi/devices/mhi_uci.c:13:
   include/linux/kern_levels.h:5:18: warning: format '%lu' expects argument of type 'long unsigned int', but argument 3 has type 'size_t {aka unsigned int}' [-Wformat=]
    #define KERN_SOH "\001"  /* ASCII Start Of Header */
                     ^
   include/linux/kern_levels.h:11:18: note: in expansion of macro 'KERN_SOH'
    #define KERN_ERR KERN_SOH "3" /* error conditions */
                     ^~~~~~~~
   include/linux/printk.h:304:9: note: in expansion of macro 'KERN_ERR'
     printk(KERN_ERR pr_fmt(fmt), ##__VA_ARGS__)
            ^~~~~~~~
>> drivers/bus/mhi/devices/mhi_uci.c:73:4: note: in expansion of macro 'pr_err'
       pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__); \
       ^~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:400:2: note: in expansion of macro 'MSG_VERB'
     MSG_VERB("Returning %lu bytes\n", to_copy);
     ^~~~~~~~
   drivers/bus/mhi/devices/mhi_uci.c:400:24: note: format string is defined here
     MSG_VERB("Returning %lu bytes\n", to_copy);
                         ~~^
                         %u

vim +/pr_err +73 drivers/bus/mhi/devices/mhi_uci.c

    70	
    71	#define MSG_VERB(fmt, ...) do { \
    72			if (msg_lvl <= MHI_MSG_LVL_VERBOSE) \
  > 73				pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__); \
    74		} while (0)
    75	
    76	#else
    77	
    78	#define MSG_VERB(fmt, ...)
    79	
    80	#endif
    81	
    82	#define MSG_LOG(fmt, ...) do { \
    83			if (msg_lvl <= MHI_MSG_LVL_INFO) \
    84				pr_err("[I][%s] " fmt, __func__, ##__VA_ARGS__); \
    85		} while (0)
    86	
    87	#define MSG_ERR(fmt, ...) do { \
    88			if (msg_lvl <= MHI_MSG_LVL_ERROR) \
  > 89				pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
    90		} while (0)
    91	
    92	#define MAX_UCI_DEVICES (64)
    93	
    94	static DECLARE_BITMAP(uci_minors, MAX_UCI_DEVICES);
    95	static struct mhi_uci_drv mhi_uci_drv;
    96	
    97	static int mhi_queue_inbound(struct uci_dev *uci_dev)
    98	{
    99		struct mhi_device *mhi_dev = uci_dev->mhi_dev;
   100		int nr_trbs = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
   101		size_t mtu = uci_dev->mtu;
   102		void *buf;
   103		struct uci_buf *uci_buf;
   104		int ret = -EIO, i;
   105	
   106		for (i = 0; i < nr_trbs; i++) {
   107			buf = kmalloc(mtu + sizeof(*uci_buf), GFP_KERNEL);
   108			if (!buf)
   109				return -ENOMEM;
   110	
   111			uci_buf = buf + mtu;
   112			uci_buf->data = buf;
   113	
 > 114			MSG_VERB("Allocated buf %d of %d size %ld\n", i, nr_trbs, mtu);
   115	
   116			ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, buf, mtu,
   117						 MHI_EOT);
   118			if (ret) {
   119				kfree(buf);
   120				MSG_ERR("Failed to queue buffer %d\n", i);
   121				return ret;
   122			}
   123		}
   124	
   125		return ret;
   126	}
   127	
   128	static long mhi_uci_ioctl(struct file *file,
   129				  unsigned int cmd,
   130				  unsigned long arg)
   131	{
   132		struct uci_dev *uci_dev = file->private_data;
   133		struct mhi_device *mhi_dev = uci_dev->mhi_dev;
   134		long ret = -ERESTARTSYS;
   135	
   136		mutex_lock(&uci_dev->mutex);
   137		if (uci_dev->enabled)
   138			ret = mhi_ioctl(mhi_dev, cmd, arg);
   139		mutex_unlock(&uci_dev->mutex);
   140	
   141		return ret;
   142	}
   143	
   144	static int mhi_uci_release(struct inode *inode, struct file *file)
   145	{
   146		struct uci_dev *uci_dev = file->private_data;
   147	
   148		mutex_lock(&uci_dev->mutex);
   149		uci_dev->ref_count--;
   150		if (!uci_dev->ref_count) {
   151			struct uci_buf *itr, *tmp;
   152			struct uci_chan *uci_chan;
   153	
   154			MSG_LOG("Last client left, closing node\n");
   155	
   156			if (uci_dev->enabled)
   157				mhi_unprepare_from_transfer(uci_dev->mhi_dev);
   158	
   159			/* clean inbound channel */
   160			uci_chan = &uci_dev->dl_chan;
   161			list_for_each_entry_safe(itr, tmp, &uci_chan->pending, node) {
   162				list_del(&itr->node);
   163				kfree(itr->data);
   164			}
   165			if (uci_chan->cur_buf)
   166				kfree(uci_chan->cur_buf->data);
   167	
   168			uci_chan->cur_buf = NULL;
   169	
   170			if (!uci_dev->enabled) {
   171				MSG_LOG("Node is deleted, freeing dev node\n");
   172				mutex_unlock(&uci_dev->mutex);
   173				mutex_destroy(&uci_dev->mutex);
   174				clear_bit(MINOR(uci_dev->devt), uci_minors);
   175				kfree(uci_dev);
   176				return 0;
   177			}
   178		}
   179	
   180		mutex_unlock(&uci_dev->mutex);
   181	
   182		MSG_LOG("exit: ref_count:%d\n", uci_dev->ref_count);
   183	
   184		return 0;
   185	}
   186	
   187	static unsigned int mhi_uci_poll(struct file *file, poll_table *wait)
   188	{
   189		struct uci_dev *uci_dev = file->private_data;
   190		struct mhi_device *mhi_dev = uci_dev->mhi_dev;
   191		struct uci_chan *uci_chan;
   192		unsigned int mask = 0;
   193	
   194		poll_wait(file, &uci_dev->dl_chan.wq, wait);
   195		poll_wait(file, &uci_dev->ul_chan.wq, wait);
   196	
   197		uci_chan = &uci_dev->dl_chan;
   198		spin_lock_bh(&uci_chan->lock);
   199		if (!uci_dev->enabled) {
   200			mask = POLLERR;
   201		} else if (!list_empty(&uci_chan->pending) || uci_chan->cur_buf) {
   202			MSG_VERB("Client can read from node\n");
   203			mask |= POLLIN | POLLRDNORM;
   204		}
   205		spin_unlock_bh(&uci_chan->lock);
   206	
   207		uci_chan = &uci_dev->ul_chan;
   208		spin_lock_bh(&uci_chan->lock);
   209		if (!uci_dev->enabled) {
   210			mask |= POLLERR;
   211		} else if (mhi_get_no_free_descriptors(mhi_dev, DMA_TO_DEVICE) > 0) {
   212			MSG_VERB("Client can write to node\n");
   213			mask |= POLLOUT | POLLWRNORM;
   214		}
   215		spin_unlock_bh(&uci_chan->lock);
   216	
   217		MSG_LOG("Client attempted to poll, returning mask 0x%x\n", mask);
   218	
   219		return mask;
   220	}
   221	
   222	static ssize_t mhi_uci_write(struct file *file,
   223				     const char __user *buf,
   224				     size_t count,
   225				     loff_t *offp)
   226	{
   227		struct uci_dev *uci_dev = file->private_data;
   228		struct mhi_device *mhi_dev = uci_dev->mhi_dev;
   229		struct uci_chan *uci_chan = &uci_dev->ul_chan;
   230		size_t bytes_xfered = 0;
   231		int ret;
   232	
   233		if (!buf || !count)
   234			return -EINVAL;
   235	
   236		/* confirm channel is active */
   237		spin_lock_bh(&uci_chan->lock);
   238		if (!uci_dev->enabled) {
   239			spin_unlock_bh(&uci_chan->lock);
   240			return -ERESTARTSYS;
   241		}
   242	
   243		MSG_VERB("Enter: to xfer:%lu bytes\n", count);
   244	
   245		while (count) {
   246			size_t xfer_size;
   247			void *kbuf;
   248			enum MHI_FLAGS flags;
   249	
   250			spin_unlock_bh(&uci_chan->lock);
   251	
   252			/* wait for free descriptors */
   253			ret = wait_event_interruptible(uci_chan->wq,
   254				(!uci_dev->enabled) ||
   255				mhi_get_no_free_descriptors
   256						       (mhi_dev, DMA_TO_DEVICE) > 0);
   257	
   258			if (ret == -ERESTARTSYS) {
   259				MSG_LOG("Exit signal caught for node\n");
   260				return -ERESTARTSYS;
   261			}
   262	
   263			xfer_size = min_t(size_t, count, uci_dev->mtu);
   264			kbuf = kmalloc(xfer_size, GFP_KERNEL);
   265			if (!kbuf) {
 > 266				MSG_ERR("Failed to allocate memory %lu\n", xfer_size);
   267				return -ENOMEM;
   268			}
   269	
   270			ret = copy_from_user(kbuf, buf, xfer_size);
   271			if (unlikely(ret)) {
   272				kfree(kbuf);
   273				return ret;
   274			}
   275	
   276			spin_lock_bh(&uci_chan->lock);
   277			flags = (count - xfer_size) ? MHI_EOB : MHI_EOT;
   278			if (uci_dev->enabled)
   279				ret = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, kbuf,
   280							 xfer_size, flags);
   281			else
   282				ret = -ERESTARTSYS;
   283	
   284			if (ret) {
   285				kfree(kbuf);
   286				goto sys_interrupt;
   287			}
   288	
   289			bytes_xfered += xfer_size;
   290			count -= xfer_size;
   291			buf += xfer_size;
   292		}
   293	
   294		spin_unlock_bh(&uci_chan->lock);
   295		MSG_VERB("Exit: Number of bytes xferred:%lu\n", bytes_xfered);
   296	
   297		return bytes_xfered;
   298	
   299	sys_interrupt:
   300		spin_unlock_bh(&uci_chan->lock);
   301	
   302		return ret;
   303	}
   304	
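
Two side observations on the excerpt above, offered as sketches rather
than as part of the patch. First, MSG_VERB and MSG_LOG route verbose
and informational messages through pr_err(), so they are emitted at
KERN_ERR severity regardless of msg_lvl; matching the printk level to
the intent would look like:

	#define MSG_VERB(fmt, ...) do { \
			if (msg_lvl <= MHI_MSG_LVL_VERBOSE) \
				pr_debug("[D][%s] " fmt, __func__, ##__VA_ARGS__); \
		} while (0)

	#define MSG_LOG(fmt, ...) do { \
			if (msg_lvl <= MHI_MSG_LVL_INFO) \
				pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__); \
		} while (0)

Second, copy_from_user() returns the number of bytes it could not
copy, so the "return ret;" at excerpt line 273 propagates a positive
count to user space on a partial fault; the conventional form is:

	if (copy_from_user(kbuf, buf, xfer_size)) {
		kfree(kbuf);
		return -EFAULT;
	}

The size_t format warnings in this report take the same %zu fix noted
for the earlier report.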

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 62952 bytes --]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-04-27  2:23 ` [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface Sujeev Dias
@ 2018-04-28  2:52     ` kbuild test robot
  2018-04-27  7:23   ` Greg Kroah-Hartman
                       ` (5 subsequent siblings)
  6 siblings, 0 replies; 54+ messages in thread
From: kbuild test robot @ 2018-04-28  2:52 UTC (permalink / raw)
  Cc: kbuild-all, Greg Kroah-Hartman, Arnd Bergmann, Sujeev Dias,
	linux-kernel, linux-arm-msm, Tony Truong

[-- Attachment #1: Type: text/plain, Size: 2324 bytes --]

Hi Sujeev,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.17-rc2 next-20180426]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Sujeev-Dias/mhi_bus-core-Add-support-for-MHI-host-interface/20180428-065959
config: sparc64-allyesconfig (attached as .config)
compiler: sparc64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=sparc64 

All error/warnings (new ones prefixed by >>):

   drivers/bus/mhi/core/mhi_dtr.c: In function 'mhi_ioctl':
>> drivers/bus/mhi/core/mhi_dtr.c:108:9: error: implicit declaration of function 'get_user'; did you mean 'get_super'? [-Werror=implicit-function-declaration]
      ret = get_user(tiocm, (u32 *)arg);
            ^~~~~~~~
            get_super
>> drivers/bus/mhi/core/mhi_dtr.c:108:7: warning: 'tiocm' may be used uninitialized in this function [-Wmaybe-uninitialized]
      ret = get_user(tiocm, (u32 *)arg);
      ~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors

vim +108 drivers/bus/mhi/core/mhi_dtr.c

    90	
    91	long mhi_ioctl(struct mhi_device *mhi_dev, unsigned int cmd, unsigned long arg)
    92	{
    93		struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
    94		struct mhi_chan *mhi_chan = mhi_dev->ul_chan;
    95		int ret;
    96	
    97		/* ioctl not supported by this controller */
    98		if (!mhi_cntrl->dtr_dev)
    99			return -EIO;
   100	
   101		switch (cmd) {
   102		case TIOCMGET:
   103			return mhi_chan->tiocm;
   104		case TIOCMSET:
   105		{
   106			u32 tiocm;
   107	
 > 108			ret = get_user(tiocm, (u32 *)arg);
   109			if (ret)
   110				return ret;
   111	
   112			return mhi_dtr_tiocmset(mhi_cntrl, mhi_chan, tiocm);
   113		}
   114		default:
   115			break;
   116		}
   117	
   118		return -EINVAL;
   119	}
   120	EXPORT_SYMBOL(mhi_ioctl);
   121	
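
The implicit-declaration error above usually means mhi_dtr.c relies on
some other header to pull in the get_user() declaration, which does
not happen on sparc64. A minimal sketch of the fix, assuming nothing
else in the file already provides it:

	#include <linux/uaccess.h>	/* declares get_user()/put_user() */

The "tiocm may be used uninitialized" warning is a cascade of the same
error: with get_user() implicitly declared as a plain function, the
compiler sees tiocm passed by value before it is ever written.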

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 53272 bytes --]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 2/4] mhi_bus: controller: MHI support for QCOM modems
  2018-04-27  2:23 ` [PATCH v1 2/4] mhi_bus: controller: MHI support for QCOM modems Sujeev Dias
@ 2018-04-28  3:05     ` kbuild test robot
  2018-04-28  3:05     ` kbuild test robot
  2018-04-28  3:12     ` kbuild test robot
  2 siblings, 0 replies; 54+ messages in thread
From: kbuild test robot @ 2018-04-28  3:05 UTC (permalink / raw)
  Cc: kbuild-all, Greg Kroah-Hartman, Arnd Bergmann, Sujeev Dias,
	linux-kernel, linux-arm-msm, Tony Truong

[-- Attachment #1: Type: text/plain, Size: 5710 bytes --]

Hi Sujeev,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.17-rc2 next-20180426]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Sujeev-Dias/mhi_bus-core-Add-support-for-MHI-host-interface/20180428-065959
config: alpha-allmodconfig (attached as .config)
compiler: alpha-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=alpha 

All errors (new ones prefixed by >>):

   In file included from drivers/bus/mhi/controllers/mhi_qcom.c:25:0:
   include/linux/mhi.h:658:15: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'int'
    static inlint int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
                  ^~~
   drivers/bus/mhi/controllers/mhi_qcom.c: In function 'mhi_platform_probe':
>> drivers/bus/mhi/controllers/mhi_qcom.c:587:17: error: implicit declaration of function 'memblock_start_of_DRAM'; did you mean 'memblock_alloc'? [-Werror=implicit-function-declaration]
      addr_win[0] = memblock_start_of_DRAM();
                    ^~~~~~~~~~~~~~~~~~~~~~
                    memblock_alloc
>> drivers/bus/mhi/controllers/mhi_qcom.c:588:17: error: implicit declaration of function 'memblock_end_of_DRAM'; did you mean 'memblock_alloc'? [-Werror=implicit-function-declaration]
      addr_win[1] = memblock_end_of_DRAM();
                    ^~~~~~~~~~~~~~~~~~~~
                    memblock_alloc
   At top level:
   drivers/bus/mhi/controllers/mhi_qcom.c:238:12: warning: 'mhi_system_resume' defined but not used [-Wunused-function]
    static int mhi_system_resume(struct device *dev)
               ^~~~~~~~~~~~~~~~~
   drivers/bus/mhi/controllers/mhi_qcom.c:186:12: warning: 'mhi_runtime_idle' defined but not used [-Wunused-function]
    static int mhi_runtime_idle(struct device *dev)
               ^~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors

vim +587 drivers/bus/mhi/controllers/mhi_qcom.c

   533	
   534	static int mhi_platform_probe(struct platform_device *pdev)
   535	{
   536		struct mhi_controller *mhi_cntrl;
   537		struct mhi_dev *mhi_dev;
   538		struct device_node *of_node = pdev->dev.of_node;
   539		u64 addr_win[2];
   540		int ret;
   541	
   542		if (!of_node)
   543			return -ENODEV;
   544	
   545		mhi_cntrl = mhi_alloc_controller(sizeof(*mhi_dev));
   546		if (!mhi_cntrl)
   547			return -ENOMEM;
   548	
   549		mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
   550	
   551		/* get pci bus topology for this node */
   552		ret = of_property_read_u32(of_node, "qcom,pci-dev-id",
   553					   &mhi_cntrl->dev_id);
   554		if (ret)
   555			mhi_cntrl->dev_id = PCI_ANY_ID;
   556	
   557		ret = of_property_read_u32(of_node, "qcom,pci-domain",
   558					   &mhi_cntrl->domain);
   559		if (ret)
   560			goto error_probe;
   561	
   562		ret = of_property_read_u32(of_node, "qcom,pci-bus", &mhi_cntrl->bus);
   563		if (ret)
   564			goto error_probe;
   565	
   566		ret = of_property_read_u32(of_node, "qcom,pci-slot", &mhi_cntrl->slot);
   567		if (ret)
   568			goto error_probe;
   569	
   570		ret = of_property_read_u32(of_node, "qcom,smmu-cfg",
   571					   &mhi_dev->smmu_cfg);
   572		if (ret)
   573			goto error_probe;
   574	
   575		/* if s1 translation enabled pull iova addr from dt */
   576		if (mhi_dev->smmu_cfg & MHI_SMMU_ATTACH &&
   577		    !(mhi_dev->smmu_cfg & MHI_SMMU_S1_BYPASS)) {
   578			ret = of_property_count_elems_of_size(of_node, "qcom,addr-win",
   579							      sizeof(addr_win));
   580			if (ret != 1)
   581				goto error_probe;
   582			ret = of_property_read_u64_array(of_node, "qcom,addr-win",
   583							 addr_win, 2);
   584			if (ret)
   585				goto error_probe;
   586		} else {
 > 587			addr_win[0] = memblock_start_of_DRAM();
 > 588			addr_win[1] = memblock_end_of_DRAM();
   589		}
   590	
   591		mhi_dev->iova_start = addr_win[0];
   592		mhi_dev->iova_stop = addr_win[1];
   593	
   594		/*
   595		 * if S1 is enabled, set MHI_CTRL start address to 0 so we can use low
   596		 * level mapping api to map buffers outside of smmu domain
   597		 */
   598		if (mhi_dev->smmu_cfg & MHI_SMMU_ATTACH &&
   599		    !(mhi_dev->smmu_cfg & MHI_SMMU_S1_BYPASS))
   600			mhi_cntrl->iova_start = 0;
   601		else
   602			mhi_cntrl->iova_start = addr_win[0];
   603	
   604		mhi_cntrl->iova_stop = mhi_dev->iova_stop;
   605		mhi_cntrl->of_node = of_node;
   606	
   607		/* setup power management apis */
   608		mhi_cntrl->status_cb = mhi_status_cb;
   609		mhi_cntrl->runtime_get = mhi_runtime_get;
   610		mhi_cntrl->runtime_put = mhi_runtime_put;
   611		mhi_cntrl->link_status = mhi_link_status;
   612	
   613		mhi_dev->pdev = pdev;
   614	
   615		ret = mhi_arch_platform_init(mhi_dev);
   616		if (ret)
   617			goto error_probe;
   618	
   619		ret = of_register_mhi_controller(mhi_cntrl);
   620		if (ret)
   621			goto error_register;
   622	
   623		if (mhi_cntrl->parent)
   624			debugfs_create_file("debug_mode", 0444, mhi_cntrl->parent,
   625					    mhi_cntrl, &debugfs_debug_ops);
   626	
   627		return 0;
   628	
   629	error_register:
   630		mhi_arch_platform_deinit(mhi_dev);
   631	
   632	error_probe:
   633		mhi_free_controller(mhi_cntrl);
   634	
   635		return -EINVAL;
   636	};
   637	
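The include/linux/mhi.h error above is a plain "inlint" -> "inline"
typo. The memblock errors point at a missing #include
<linux/memblock.h>; note, though, that memblock_start_of_DRAM() and
memblock_end_of_DRAM() are not provided on every architecture, so an
allmodconfig build like this one may also need a Kconfig dependency or
a non-memblock fallback for the default address window. Illustrative
hunk only:

    --- a/drivers/bus/mhi/controllers/mhi_qcom.c
    +++ b/drivers/bus/mhi/controllers/mhi_qcom.c
    @@
     #include <linux/mhi.h>
    +#include <linux/memblock.h>	/* memblock_{start,end}_of_DRAM() */
     #include "mhi_qcom.h"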

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 52366 bytes --]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 2/4] mhi_bus: controller: MHI support for QCOM modems
  2018-04-27  2:23 ` [PATCH v1 2/4] mhi_bus: controller: MHI support for QCOM modems Sujeev Dias
@ 2018-04-28  3:12     ` kbuild test robot
  2018-04-28  3:05     ` kbuild test robot
  2018-04-28  3:12     ` kbuild test robot
  2 siblings, 0 replies; 54+ messages in thread
From: kbuild test robot @ 2018-04-28  3:12 UTC (permalink / raw)
  Cc: kbuild-all, Greg Kroah-Hartman, Arnd Bergmann, Sujeev Dias,
	linux-kernel, linux-arm-msm, Tony Truong

[-- Attachment #1: Type: text/plain, Size: 8141 bytes --]

Hi Sujeev,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v4.17-rc2 next-20180426]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Sujeev-Dias/mhi_bus-core-Add-support-for-MHI-host-interface/20180428-065959
config: openrisc-allmodconfig (attached as .config)
compiler: or1k-linux-gcc (GCC) 6.0.0 20160327 (experimental)
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=openrisc 

All errors (new ones prefixed by >>):

   In file included from drivers/bus/mhi/controllers/mhi_qcom.c:25:0:
   include/linux/mhi.h:658:15: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'int'
    static inlint int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
                  ^~~
   drivers/bus/mhi/controllers/mhi_qcom.c: In function 'mhi_deinit_pci_dev':
>> drivers/bus/mhi/controllers/mhi_qcom.c:46:2: error: implicit declaration of function 'pci_free_irq_vectors' [-Werror=implicit-function-declaration]
     pci_free_irq_vectors(pci_dev);
     ^~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/controllers/mhi_qcom.c:51:2: error: implicit declaration of function 'pci_clear_master' [-Werror=implicit-function-declaration]
     pci_clear_master(pci_dev);
     ^~~~~~~~~~~~~~~~
>> drivers/bus/mhi/controllers/mhi_qcom.c:52:2: error: implicit declaration of function 'pci_release_region' [-Werror=implicit-function-declaration]
     pci_release_region(pci_dev, mhi_dev->resn);
     ^~~~~~~~~~~~~~~~~~
   drivers/bus/mhi/controllers/mhi_qcom.c: In function 'mhi_init_pci_dev':
>> drivers/bus/mhi/controllers/mhi_qcom.c:77:8: error: implicit declaration of function 'pci_request_region' [-Werror=implicit-function-declaration]
     ret = pci_request_region(pci_dev, mhi_dev->resn, "mhi");
           ^~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/controllers/mhi_qcom.c:93:8: error: implicit declaration of function 'pci_alloc_irq_vectors' [-Werror=implicit-function-declaration]
     ret = pci_alloc_irq_vectors(pci_dev, mhi_cntrl->msi_required,
           ^~~~~~~~~~~~~~~~~~~~~
>> drivers/bus/mhi/controllers/mhi_qcom.c:94:34: error: 'PCI_IRQ_MSI' undeclared (first use in this function)
            mhi_cntrl->msi_required, PCI_IRQ_MSI);
                                     ^~~~~~~~~~~
   drivers/bus/mhi/controllers/mhi_qcom.c:94:34: note: each undeclared identifier is reported only once for each function it appears in
>> drivers/bus/mhi/controllers/mhi_qcom.c:109:23: error: implicit declaration of function 'pci_irq_vector' [-Werror=implicit-function-declaration]
      mhi_cntrl->irq[i] = pci_irq_vector(pci_dev, i);
                          ^~~~~~~~~~~~~~
   At top level:
   drivers/bus/mhi/controllers/mhi_qcom.c:238:12: warning: 'mhi_system_resume' defined but not used [-Wunused-function]
    static int mhi_system_resume(struct device *dev)
               ^~~~~~~~~~~~~~~~~
   drivers/bus/mhi/controllers/mhi_qcom.c:186:12: warning: 'mhi_runtime_idle' defined but not used [-Wunused-function]
    static int mhi_runtime_idle(struct device *dev)
               ^~~~~~~~~~~~~~~~
   cc1: some warnings being treated as errors

vim +/pci_free_irq_vectors +46 drivers/bus/mhi/controllers/mhi_qcom.c

  > 25	#include <linux/mhi.h>
    26	#include "mhi_qcom.h"
    27	
    28	static struct pci_device_id mhi_pcie_device_id[] = {
    29		{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0300)},
    30		{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0301)},
    31		{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0302)},
    32		{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0303)},
    33		{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0304)},
    34		{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0305)},
    35		{PCI_DEVICE(MHI_PCIE_VENDOR_ID, MHI_PCIE_DEBUG_ID)},
    36		{0},
    37	};
    38	
    39	static struct pci_driver mhi_pcie_driver;
    40	
    41	void mhi_deinit_pci_dev(struct mhi_controller *mhi_cntrl)
    42	{
    43		struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
    44		struct pci_dev *pci_dev = mhi_dev->pci_dev;
    45	
  > 46		pci_free_irq_vectors(pci_dev);
    47		kfree(mhi_cntrl->irq);
    48		mhi_cntrl->irq = NULL;
    49		iounmap(mhi_cntrl->regs);
    50		mhi_cntrl->regs = NULL;
  > 51		pci_clear_master(pci_dev);
  > 52		pci_release_region(pci_dev, mhi_dev->resn);
    53		pci_disable_device(pci_dev);
    54	}
    55	
    56	static int mhi_init_pci_dev(struct mhi_controller *mhi_cntrl)
    57	{
    58		struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
    59		struct pci_dev *pci_dev = mhi_dev->pci_dev;
    60		int ret;
    61		resource_size_t start, len;
    62		int i;
    63	
    64		mhi_dev->resn = MHI_PCI_BAR_NUM;
    65		ret = pci_assign_resource(pci_dev, mhi_dev->resn);
    66		if (ret) {
    67			MHI_ERR("Error assign pci resources, ret:%d\n", ret);
    68			return ret;
    69		}
    70	
    71		ret = pci_enable_device(pci_dev);
    72		if (ret) {
    73			MHI_ERR("Error enabling device, ret:%d\n", ret);
    74			goto error_enable_device;
    75		}
    76	
  > 77		ret = pci_request_region(pci_dev, mhi_dev->resn, "mhi");
    78		if (ret) {
    79			MHI_ERR("Error pci_request_region, ret:%d\n", ret);
    80			goto error_request_region;
    81		}
    82	
    83		pci_set_master(pci_dev);
    84	
    85		start = pci_resource_start(pci_dev, mhi_dev->resn);
    86		len = pci_resource_len(pci_dev, mhi_dev->resn);
    87		mhi_cntrl->regs = ioremap_nocache(start, len);
    88		if (!mhi_cntrl->regs) {
    89			MHI_ERR("Error ioremap region\n");
    90			goto error_ioremap;
    91		}
    92	
  > 93		ret = pci_alloc_irq_vectors(pci_dev, mhi_cntrl->msi_required,
  > 94					    mhi_cntrl->msi_required, PCI_IRQ_MSI);
    95		if (IS_ERR_VALUE((ulong)ret) || ret < mhi_cntrl->msi_required) {
    96			MHI_ERR("Failed to enable MSI, ret:%d\n", ret);
    97			goto error_req_msi;
    98		}
    99	
   100		mhi_cntrl->msi_allocated = ret;
   101		mhi_cntrl->irq = kmalloc_array(mhi_cntrl->msi_allocated,
   102					       sizeof(*mhi_cntrl->irq), GFP_KERNEL);
   103		if (!mhi_cntrl->irq) {
   104			ret = -ENOMEM;
   105			goto error_alloc_msi_vec;
   106		}
   107	
   108		for (i = 0; i < mhi_cntrl->msi_allocated; i++) {
 > 109			mhi_cntrl->irq[i] = pci_irq_vector(pci_dev, i);
   110			if (mhi_cntrl->irq[i] < 0) {
   111				ret = mhi_cntrl->irq[i];
   112				goto error_get_irq_vec;
   113			}
   114		}
   115	
   116		dev_set_drvdata(&pci_dev->dev, mhi_cntrl);
   117	
   118		/* configure runtime pm */
   119		pm_runtime_set_autosuspend_delay(&pci_dev->dev, MHI_RPM_SUSPEND_TMR_MS);
   120		pm_runtime_use_autosuspend(&pci_dev->dev);
   121		pm_suspend_ignore_children(&pci_dev->dev, true);
   122	
   123		/*
   124		 * pci framework will increment usage count (twice) before
   125		 * calling local device driver probe function.
   126		 * 1st pci.c pci_pm_init() calls pm_runtime_forbid
   127		 * 2nd pci-driver.c local_pci_probe calls pm_runtime_get_sync
   128		 * Framework expect pci device driver to call
   129		 * pm_runtime_put_noidle to decrement usage count after
   130		 * successful probe and and call pm_runtime_allow to enable
   131		 * runtime suspend.
   132		 */
   133		pm_runtime_mark_last_busy(&pci_dev->dev);
   134		pm_runtime_put_noidle(&pci_dev->dev);
   135	
   136		return 0;
   137	
   138	error_get_irq_vec:
   139		kfree(mhi_cntrl->irq);
   140		mhi_cntrl->irq = NULL;
   141	
   142	error_alloc_msi_vec:
   143		pci_free_irq_vectors(pci_dev);
   144	
   145	error_req_msi:
   146		iounmap(mhi_cntrl->regs);
   147	
   148	error_ioremap:
   149		pci_clear_master(pci_dev);
   150	
   151	error_request_region:
   152		pci_disable_device(pci_dev);
   153	
   154	error_enable_device:
   155		pci_release_region(pci_dev, mhi_dev->resn);
   156	
   157		return ret;
   158	}
   159	
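All of the missing pci_*() helpers are declared only when CONFIG_PCI is
enabled, which openrisc allmodconfig does not provide, so the usual fix
is a build dependency rather than a code change. A sketch, assuming the
Kconfig symbols are named MHI_BUS and MHI_QCOM:

    # hypothetical entry in drivers/bus/mhi/controllers/Kconfig
    config MHI_QCOM
    	tristate "MHI controller driver for QCOM modems"
    	depends on MHI_BUS && PCI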

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 44455 bytes --]

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 4/4] mhi_bus: dev: uci: add user space interface driver
  2018-04-27  2:23 ` [PATCH v1 4/4] mhi_bus: dev: uci: add user space " Sujeev Dias
@ 2018-04-28  5:16     ` kbuild test robot
  2018-04-28  1:03     ` kbuild test robot
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 54+ messages in thread
From: kbuild test robot @ 2018-04-28  5:16 UTC (permalink / raw)
  Cc: kbuild-all, Greg Kroah-Hartman, Arnd Bergmann, Sujeev Dias,
	linux-kernel, linux-arm-msm, Tony Truong

Hi Sujeev,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v4.17-rc2 next-20180426]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Sujeev-Dias/mhi_bus-core-Add-support-for-MHI-host-interface/20180428-065959


coccinelle warnings: (new ones prefixed by >>)

>> drivers/bus/mhi/devices/mhi_uci.c:556:2-3: Unneeded semicolon

Please review and possibly fold the followup patch.

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

^ permalink raw reply	[flat|nested] 54+ messages in thread

* [PATCH] mhi_bus: dev: uci: fix semicolon.cocci warnings
  2018-04-27  2:23 ` [PATCH v1 4/4] mhi_bus: dev: uci: add user space " Sujeev Dias
@ 2018-04-28  5:16     ` kbuild test robot
  2018-04-28  1:03     ` kbuild test robot
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 54+ messages in thread
From: kbuild test robot @ 2018-04-28  5:16 UTC (permalink / raw)
  Cc: kbuild-all, Greg Kroah-Hartman, Arnd Bergmann, Sujeev Dias,
	linux-kernel, linux-arm-msm, Tony Truong

From: Fengguang Wu <fengguang.wu@intel.com>

drivers/bus/mhi/devices/mhi_uci.c:556:2-3: Unneeded semicolon


 Remove unneeded semicolon.

Generated by: scripts/coccinelle/misc/semicolon.cocci

Fixes: 2064c7ebc5cb ("mhi_bus: dev: uci: add user space interface driver")
CC: Sujeev Dias <sdias@codeaurora.org>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
---

 mhi_uci.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/bus/mhi/devices/mhi_uci.c
+++ b/drivers/bus/mhi/devices/mhi_uci.c
@@ -553,7 +553,7 @@ static int mhi_uci_probe(struct mhi_devi
 		spin_lock_init(&uci_chan->lock);
 		init_waitqueue_head(&uci_chan->wq);
 		INIT_LIST_HEAD(&uci_chan->pending);
-	};
+	}
 
 	uci_dev->mtu = id->driver_data;
 	mhi_device_set_devdata(mhi_dev, uci_dev);

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-04-27  7:22   ` Greg Kroah-Hartman
@ 2018-04-28 14:28     ` Sujeev Dias
  2018-04-28 15:50       ` Greg Kroah-Hartman
  0 siblings, 1 reply; 54+ messages in thread
From: Sujeev Dias @ 2018-04-28 14:28 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Arnd Bergmann, linux-kernel, linux-arm-msm, Tony Truong

Thanks for the quick feedback.


On 04/27/2018 12:22 AM, Greg Kroah-Hartman wrote:
> On Thu, Apr 26, 2018 at 07:23:28PM -0700, Sujeev Dias wrote:
>> MHI Host Interface is a communication protocol to be used by the host
>> to control and communcate with modem over a high speed peripheral bus.
>> This module will allow host to communicate with external devices that
>> support MHI protocol.
>>
>> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
> No one else has ever reviewed this code before?  That's not good, please
> at the very least, have someone else at your company go over it first.
> I don't want to be the ones having to point out all of the "obvious"
> issues :)
>
This code has gone through rigorous code review and testing. Before I
submit the next patch, I will have multiple people sign off on it.
>> ---
>>   Documentation/00-INDEX                        |    2 +
>>   Documentation/devicetree/bindings/bus/mhi.txt |  141 +++
>>   Documentation/mhi.txt                         |  235 ++++
>>   drivers/bus/Kconfig                           |   17 +
>>   drivers/bus/Makefile                          |    1 +
>>   drivers/bus/mhi/Makefile                      |    8 +
>>   drivers/bus/mhi/core/Makefile                 |    1 +
>>   drivers/bus/mhi/core/mhi_boot.c               |  593 ++++++++++
>>   drivers/bus/mhi/core/mhi_dtr.c                |  177 +++
>>   drivers/bus/mhi/core/mhi_init.c               | 1290 +++++++++++++++++++++
>>   drivers/bus/mhi/core/mhi_internal.h           |  732 ++++++++++++
>>   drivers/bus/mhi/core/mhi_main.c               | 1476 +++++++++++++++++++++++++
>>   drivers/bus/mhi/core/mhi_pm.c                 | 1177 ++++++++++++++++++++
>>   include/linux/mhi.h                           |  694 ++++++++++++
>>   include/linux/mod_devicetable.h               |   11 +
>>   15 files changed, 6555 insertions(+)
> And a 6555 line patch is a bit hard to consume all at once.  Can't this
> be split up into much more reviewable chunks?  Look at how some of the
> other new bus subsystems got added to the tree recently.  They were
> submitted in longer patch series, but smaller sized patches
> individually.  That makes things much easier to review.
>
> For example, there is no reason your debugfs stuff needs to be in this
> initial patch.  That should be in a separate one, right?  Same for
> firmware download.  Please take the time to break this up into logical
> steps.
>
> Like my son's math teacher keeps telling him, "show your work, not just
> an answer at the bottom of the page".
>
> Also, it is required by the DT maintainers to split that file alone up
> into a separate patch to be even considered for merging.
>
> One thing I can tell you right now that isn't acceptable:
That is interesting, because internally it is separated; I squashed the
patches thinking that was preferred. I will separate them out into
functional blocks.
>> +#ifdef CONFIG_MHI_DEBUG
> Don't have a separate config option for debugging.  No one will enable
> it, which makes it pointless.   Everything has to be dynamic these days.
The intention was to completely compile out the MHI_VERB messages,
because we have those messages in the data path. For release builds we
wanted to save as many MIPS as possible; for debugging, however, these
messages are extremely helpful.

I will look into tracepoints...
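For reference, dev_dbg() already has the compile-out property described
here: with neither DEBUG nor CONFIG_DYNAMIC_DEBUG enabled it generates
no code in the data path, and with dynamic debug every callsite can be
toggled at runtime. A rough sketch of what an MHI_VERB site could
become -- the format string and arguments are illustrative only:

	/* costs nothing unless DEBUG or CONFIG_DYNAMIC_DEBUG is set;
	 * with dynamic debug, enable at runtime with e.g.
	 * echo 'file mhi_main.c +p' > /sys/kernel/debug/dynamic_debug/control
	 */
	dev_dbg(&mhi_dev->dev, "queued transfer on chan %d, len %zu\n",
		mhi_chan->chan, len);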
>> +
>> +#define MHI_VERB(fmt, ...) do { \
>> +		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
>> +			pr_debug("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
>> +} while (0)
>> +
>> +#else
>> +
>> +#define MHI_VERB(fmt, ...)
>> +
>> +#endif
>> +
>> +#define MHI_LOG(fmt, ...) do {	\
>> +		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
>> +			pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
>> +} while (0)
>> +
>> +#define MHI_ERR(fmt, ...) do {	\
>> +		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_ERROR) \
>> +			pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
>> +} while (0)
>> +
>> +#define MHI_CRITICAL(fmt, ...) do { \
>> +		if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_CRITICAL) \
>> +			pr_alert("[C][%s] " fmt, __func__, ##__VA_ARGS__); \
>> +} while (0)
>> +
> And do not roll your own debugging/logging macros.  Use what is given to
> you (dev_info(), dev_err(), dev_dbg()), they are there for a reason.  By
> going around them, you circumvent the whole of the kernel logging
> infrastructure and declare that your tiny bus is somehow more "special"
> than it.
>
> And I doubt you want to make such a statement :)

Well :) ... I will remove them in the next revision.
> thanks,
>
> greg k-h
> --
> To unsubscribe from this list: send the line "unsubscribe linux-arm-msm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

Thanks
Sujeev
-- 

Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 3/4] mhi_bus: dev: netdev: add network interface driver
  2018-04-27 11:19   ` Arnd Bergmann
@ 2018-04-28 15:25     ` Sujeev Dias
  0 siblings, 0 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-04-28 15:25 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Greg Kroah-Hartman, Linux Kernel Mailing List, linux-arm-msm,
	Tony Truong



On 04/27/2018 04:19 AM, Arnd Bergmann wrote:
> On Fri, Apr 27, 2018 at 4:23 AM, Sujeev Dias <sdias@codeaurora.org> wrote:
>> MHI based net device driver is used for transferring IP
>> traffic between host and modem. Driver allows clients to
>> transfer data using standard network interface.
>>
>> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
>> ---
>>   Documentation/devicetree/bindings/bus/mhi.txt |  36 ++
>>   drivers/bus/Kconfig                           |   1 +
>>   drivers/bus/mhi/Makefile                      |   2 +-
>>   drivers/bus/mhi/devices/Kconfig               |  10 +
>>   drivers/bus/mhi/devices/Makefile              |   1 +
>>   drivers/bus/mhi/devices/mhi_netdev.c          | 893 ++++++++++++++++++++++++++
> Network drivers go into drivers/net/, not into the bus subsystem.
> You also need to Cc the netdev mailing list for review.
Will do, thanks.
>>   6 files changed, 942 insertions(+), 1 deletion(-)
>>   create mode 100644 drivers/bus/mhi/devices/Kconfig
>>   create mode 100644 drivers/bus/mhi/devices/Makefile
>>   create mode 100644 drivers/bus/mhi/devices/mhi_netdev.c
>>
>> diff --git a/Documentation/devicetree/bindings/bus/mhi.txt b/Documentation/devicetree/bindings/bus/mhi.txt
>> index ea1b620..172ae7b 100644
>> --- a/Documentation/devicetree/bindings/bus/mhi.txt
>> +++ b/Documentation/devicetree/bindings/bus/mhi.txt
>> @@ -139,3 +139,39 @@ mhi_controller {
>>                  <driver specific properties>
>>          };
>>   };
>> +
>> +================
>> +Children Devices
>> +================
>> +
>> +MHI netdev properties
>> +
>> +- mhi,chan
>> +  Usage: required
>> +  Value type: <string>
>> +  Definition: Channel name MHI netdev support
>>
>> +- mhi,mru
>> +  Usage: required
>> +  Value type: <u32>
>> +  Definition: Largest packet size interface can receive in bytes.
>> +
>> +- mhi,interface-name
>> +  Usage: optional
>> +  Value type: <string>
>> +  Definition: Interface name to be given so clients can identify it
>> +
>> +- mhi,recycle-buf
>> +  Usage: optional
>> +  Value type: <bool>
>> +  Definition: Set true if interface support recycling buffers.
>
>> +========
>> +Example:
>> +========
>> +
>> +mhi_rmnet@0 {
>> +       mhi,chan = "IP_HW0";
>> +       mhi,interface-name = "rmnet_mhi";
>> +       mhi,mru = <0x4000>;
>> +};
> Who defines the "IP_HW0" and "rmnet_mhi" strings?
>
> Shouldn't there be a "compatible" property?
>
IP_HW0 is a dedicated channel for IP traffic, defined by the MHI
specification. The supported channel names are fixed and dedicated per
device. The interface name is the network iface name assigned so that
clients can identify the netdevice under /net/.

Similar to how the PCI framework binds a DT node to a PCI endpoint
driver based on its BDF, the MHI bus uses the mhi,chan property to bind
a child DT node before calling probe, as in mhi_create_device() in
mhi_main.c:
	/* add if there is a matching DT node */
	controller = mhi_cntrl->of_node;
	for_each_available_child_of_node(controller, node) {
		ret = of_property_read_string(node, "mhi,chan",
					      &dt_name);
		if (ret)
			continue;
		if (!strcmp(mhi_dev->chan_name, dt_name))
			mhi_dev->dev.of_node = node;
	}
>> +#define MHI_NETDEV_DRIVER_NAME "mhi_netdev"
>> +#define WATCHDOG_TIMEOUT (30 * HZ)
>> +
>> +#ifdef CONFIG_MHI_DEBUG
>> +
>> +#define MHI_ASSERT(cond, msg) do { \
>> +       if (cond) \
>> +               panic(msg); \
>> +} while (0)
>> +
>> +#define MSG_VERB(fmt, ...) do { \
>> +       if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_VERBOSE) \
>> +               pr_err("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
>> +} while (0)
>> +
>> +#define MSG_LOG(fmt, ...) do { \
>> +       if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_INFO) \
>> +               pr_err("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
>> +} while (0)
>> +
>> +#define MSG_ERR(fmt, ...) do { \
>> +       if (mhi_netdev->msg_lvl <= MHI_MSG_LVL_ERROR) \
>> +               pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
>> +} while (0)
>> +
> Please remove these macros and use the normal kernel
> helpers we have.
Will do in the next patch set.
>> +struct mhi_stats {
>> +       u32 rx_int;
>> +       u32 tx_full;
>> +       u32 tx_pkts;
>> +       u32 rx_budget_overflow;
>> +       u32 rx_frag;
>> +       u32 alloc_failed;
>> +};
>> +
>> +/* important: do not exceed sk_buf->cb (48 bytes) */
>> +struct mhi_skb_priv {
>> +       void *buf;
>> +       size_t size;
>> +       struct mhi_netdev *mhi_netdev;
>> +};
> Shouldn't all three members already be part of an skb?
>
> You initialize them as
>
>> +       u32 cur_mru = mhi_netdev->mru;
>> +               skb_priv = (struct mhi_skb_priv *)skb->cb;
>> +               skb_priv->buf = skb->data;
>> +               skb_priv->size = cur_mru;
>> +               skb_priv->mhi_netdev = mhi_netdev;
> so I think you can remove the structure completely.
Will confirm and remove it if it's not needed.
>> +struct mhi_netdev {
>> +       int alias;
>> +       struct mhi_device *mhi_dev;
>> +       spinlock_t rx_lock;
>> +       bool enabled;
>> +       rwlock_t pm_lock; /* state change lock */
>> +       int (*rx_queue)(struct mhi_netdev *mhi_netdev, gfp_t gfp_t);
>> +       struct work_struct alloc_work;
>> +       int wake;
>> +
>> +       struct sk_buff_head rx_allocated;
>> +
>> +       u32 mru;
>> +       const char *interface_name;
>> +       struct napi_struct napi;
>> +       struct net_device *ndev;
>> +       struct sk_buff *frag_skb;
>> +       bool recycle_buf;
>> +
>> +       struct mhi_stats stats;
>> +       struct dentry *dentry;
>> +       enum MHI_DEBUG_LEVEL msg_lvl;
>> +};
>> +
>> +struct mhi_netdev_priv {
>> +       struct mhi_netdev *mhi_netdev;
>> +};
> Please kill this intermediate structure and use the mhi_netdev
> directly.
>
>> +static struct mhi_driver mhi_netdev_driver;
>> +static void mhi_netdev_create_debugfs(struct mhi_netdev *mhi_netdev);
> Better remove forward declarations and sort the symbols in their
> natural order.
>
>> +static int mhi_netdev_alloc_skb(struct mhi_netdev *mhi_netdev, gfp_t gfp_t)
>> +{
>> +       u32 cur_mru = mhi_netdev->mru;
>> +       struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
>> +       struct mhi_skb_priv *skb_priv;
>> +       int ret;
>> +       struct sk_buff *skb;
>> +       int no_tre = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
>> +       int i;
>> +
>> +       for (i = 0; i < no_tre; i++) {
>> +               skb = alloc_skb(cur_mru, gfp_t);
>> +               if (!skb)
>> +                       return -ENOMEM;
>> +
>> +               read_lock_bh(&mhi_netdev->pm_lock);
>> +               if (unlikely(!mhi_netdev->enabled)) {
>> +                       MSG_ERR("Interface not enabled\n");
>> +                       ret = -EIO;
>> +                       goto error_queue;
>> +               }
> You shouldn't normally get here when the device is disabled,
> I think you can remove the pm_lock and enabled members
> and instead make sure the netdev itself is stopped at that
> point.
I will need to think about this a bit more; I will address it in the
next patch set.
>> +               spin_lock_bh(&mhi_netdev->rx_lock);
>> +               ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, skb,
>> +                                        skb_priv->size, MHI_EOT);
>> +               spin_unlock_bh(&mhi_netdev->rx_lock);
> What does this spinlock protect?
That should be answered by your next question. The reason I have this
lock is that mhi_netdev_alloc_work() could be running in parallel while
mhi_netdev_poll() is also calling mhi_netdev_alloc_skb(). I couldn't
think of a simpler way for the two paths to protect each other than
this lock. I will add a comment.
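Something along these lines, for example (wording illustrative):

	/*
	 * rx_lock serializes rx buffer queueing between the allocation
	 * worker (mhi_netdev_alloc_work) and the NAPI poll path
	 * (mhi_netdev_poll), both of which call mhi_netdev_alloc_skb().
	 */
	spinlock_t rx_lock;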
>> +static void mhi_netdev_alloc_work(struct work_struct *work)
>> +{
>> +       struct mhi_netdev *mhi_netdev = container_of(work, struct mhi_netdev,
>> +                                                  alloc_work);
>> +       /* sleep about 1 sec and retry, that should be enough time
>> +        * for system to reclaim freed memory back.
>> +        */
>> +       const int sleep_ms =  1000;
> That is a long time to wait for in the middle of a work function!
>
> Have you tested that it's actually sufficient to make the driver survive
> an out-of-memory situation?
>
> If you do need it at all, maybe use a delayed work queue that reschedules
> itself rather than retrying in the work queue.
When the system has been running for days, we have seen out-of-memory
issues, and retrying does help. I like your suggestion; I will use a
delayed work queue.
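A minimal sketch of that approach, assuming alloc_work is converted
from a work_struct to a delayed_work (names and the retry condition are
illustrative):

	static void mhi_netdev_alloc_work(struct work_struct *work)
	{
		/* assumes alloc_work is now a struct delayed_work */
		struct delayed_work *dwork = to_delayed_work(work);
		struct mhi_netdev *mhi_netdev = container_of(dwork,
					struct mhi_netdev, alloc_work);

		/* retry later rather than sleeping in the work function */
		if (mhi_netdev_alloc_skb(mhi_netdev, GFP_KERNEL) == -ENOMEM)
			schedule_delayed_work(&mhi_netdev->alloc_work,
					      msecs_to_jiffies(1000));
	}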
>> +static int mhi_netdev_poll(struct napi_struct *napi, int budget)
>> +{
>> +       struct net_device *dev = napi->dev;
>> +       struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
>> +       struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
>> +       struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
>> +       int rx_work = 0;
>> +       int ret;
>> +
>> +       MSG_VERB("Entered\n");
>> +
>> +       read_lock_bh(&mhi_netdev->pm_lock);
>> +
>> +       if (!mhi_netdev->enabled) {
>> +               MSG_LOG("interface is disabled!\n");
>> +               napi_complete(napi);
>> +               read_unlock_bh(&mhi_netdev->pm_lock);
>> +               return 0;
>> +       }
> Again, this shouldn't be required.
>
Without the lock, how can I synchronize mhi_netdev_poll() with
remove()? Remove can happen at any time, asynchronously, whenever the
modem powers off. I may not need the check, but I may need to keep the
lock.
>> +static int mhi_netdev_xmit(struct sk_buff *skb, struct net_device *dev)
>> +{
>> +       struct mhi_netdev_priv *mhi_netdev_priv = netdev_priv(dev);
>> +       struct mhi_netdev *mhi_netdev = mhi_netdev_priv->mhi_netdev;
>> +       struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
>> +       int res = 0;
>> +       struct mhi_skb_priv *tx_priv;
>> +
>> +       MSG_VERB("Entered\n");
>> +
>> +       tx_priv = (struct mhi_skb_priv *)(skb->cb);
>> +       tx_priv->mhi_netdev = mhi_netdev;
>> +       read_lock_bh(&mhi_netdev->pm_lock);
>> +
>> +       if (unlikely(!mhi_netdev->enabled)) {
>> +               /* Only reason interface could be disabled and we get data
>> +                * is due to an SSR. We do not want to stop the queue and
>> +                * return error. Instead we will flush all the uplink packets
>> +                * and return successful
>> +                */
>> +               res = NETDEV_TX_OK;
>> +               dev_kfree_skb_any(skb);
>> +               goto mhi_xmit_exit;
>> +       }
>> +
>> +       res = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, skb, skb->len,
>> +                                MHI_EOT);
>> +       if (res) {
>> +               MSG_VERB("Failed to queue with reason:%d\n", res);
>> +               netif_stop_queue(dev);
>> +               mhi_netdev->stats.tx_full++;
>> +               res = NETDEV_TX_BUSY;
>> +               goto mhi_xmit_exit;
>> +       }
> You don't seem to have any limit on the number of queued
> transfers here. Since this is for a modem, it seems absolutely
> essential that you keep track of how many packets have been
> queued but not sent over the air yet.
>
> Please add the necessary calls to netdev_sent_queue() here
> and netdev_completed_queue() in the mhi_netdev_xfer_ul_cb
> callback respectively to prevent uncontrolled buffer bloat.
>
> Can you clarify whether mhi_netdev_xfer_ul_cb() is called
> after the data has been read by the modem and queued
> again in another buffer, or if it only gets called after we know
> that the packet has been sent over the air?
The callback comes immediately after the modem has read the buffer from
host DDR into its local DDR. Data goes through multiple layers in the
modem before it actually goes out over the air.
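Given that, the netdev_sent_queue()/netdev_completed_queue() pair Arnd
suggests would bracket exactly those two points. A rough sketch of just
the two calls (surrounding code elided):

	/* in mhi_netdev_xmit(), once mhi_queue_transfer() succeeds */
	netdev_sent_queue(dev, skb->len);

	/* in mhi_netdev_xfer_ul_cb(), after the modem consumes the buffer */
	netdev_completed_queue(mhi_netdev->ndev, 1, skb->len);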
>> +static int mhi_netdev_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
>> +{
>> +       int rc = 0;
>> +
>> +       switch (cmd) {
>> +       default:
>> +               /* don't fail any IOCTL right now */
>> +               rc = 0;
>> +               break;
>> +       }
>> +
>> +       return rc;
>> +}
> That looks wrong. You should return an error for any unknown ioctl,
> which is the default behavior you get without an ioctl function.
I will remove the ioctl and add a separate patch for the supported
IOCTLs.
>> +static void mhi_netdev_create_debugfs(struct mhi_netdev *mhi_netdev)
>> +{
>> +       char node_name[32];
>> +       int i;
>> +       const umode_t mode = 0600;
>> +       struct dentry *file;
>> +       struct mhi_device *mhi_dev = mhi_netdev->mhi_dev;
>> +
>> +       const struct {
>> +               char *name;
>> +               u32 *ptr;
>> +       } debugfs_table[] = {
>> +               {
>> +                       "rx_int",
>> +                       &mhi_netdev->stats.rx_int
>> +               },
>> +               {
>> +                       "tx_full",
>> +                       &mhi_netdev->stats.tx_full
>> +               },
>> +               {
>> +                       "tx_pkts",
>> +                       &mhi_netdev->stats.tx_pkts
>> +               },
>> +               {
>> +                       "rx_budget_overflow",
>> +                       &mhi_netdev->stats.rx_budget_overflow
>> +               },
>> +               {
>> +                       "rx_fragmentation",
>> +                       &mhi_netdev->stats.rx_frag
>> +               },
>> +               {
>> +                       "alloc_failed",
>> +                       &mhi_netdev->stats.alloc_failed
>> +               },
>> +               {
>> +                       NULL, NULL
>> +               },
>> +       };
> Please use the regular network statistics interfaces for this,
> don't roll your own.
>> +       /* Both tx & rx client handle contain same device info */
>> +       snprintf(node_name, sizeof(node_name), "%s_%04x_%02u.%02u.%02u_%u",
>> +                mhi_netdev->interface_name, mhi_dev->dev_id, mhi_dev->domain,
>> +                mhi_dev->bus, mhi_dev->slot, mhi_netdev->alias);
>> +
>> +       if (IS_ERR_OR_NULL(dentry))
>> +               return;
> IS_ERR_OR_NULL() is basically always a bug. debugfs returns NULL
> only on success when it is completely disabled, which you were
> trying to handle separately.
>
>> +static int mhi_netdev_probe(struct mhi_device *mhi_dev,
>> +                           const struct mhi_device_id *id)
>> +{
>> +       int ret;
>> +       struct mhi_netdev *mhi_netdev;
>> +       struct device_node *of_node = mhi_dev->dev.of_node;
>> +       char node_name[32];
>> +
>> +       if (!of_node)
>> +               return -ENODEV;
>> +
>> +       mhi_netdev = devm_kzalloc(&mhi_dev->dev, sizeof(*mhi_netdev),
>> +                                 GFP_KERNEL);
>> +       if (!mhi_netdev)
>> +               return -ENOMEM;
>> +
>> +       mhi_netdev->alias = of_alias_get_id(of_node, "mhi_netdev");
>> +       if (mhi_netdev->alias < 0)
>> +               return -ENODEV;
> Where is that alias documented?
Will add it to the documentation.
>> +       ret = of_property_read_string(of_node, "mhi,interface-name",
>> +                                     &mhi_netdev->interface_name);
>> +       if (ret)
>> +               mhi_netdev->interface_name = mhi_netdev_driver.driver.name;
> Don't couple the Linux interface name to what the hardware is
> called.
>
>        Arnd

Thanks for all the comments, Arnd.

Sujeev
> --
> To unsubscribe from this list: send the line "unsubscribe linux-arm-msm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v1 2/4] mhi_bus: controller: MHI support for QCOM modems
  2018-04-27 11:32   ` Arnd Bergmann
@ 2018-04-28 15:40     ` Sujeev Dias
  0 siblings, 0 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-04-28 15:40 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Greg Kroah-Hartman, Linux Kernel Mailing List, linux-arm-msm,
	Tony Truong



On 04/27/2018 04:32 AM, Arnd Bergmann wrote:
> On Fri, Apr 27, 2018 at 4:23 AM, Sujeev Dias <sdias@codeaurora.org> wrote:
>> QCOM PCIe based modems uses MHI as the communication protocol.
>> MHI control driver is the bus master for such modems. As the bus
>> master driver, it oversees power management operations
>> such as suspend, resume, powering on and off the device.
>>
>> +- compatible
>> +  Usage: required
>> +  Value type: <string>
>> +  Definition: "qcom,mhi"
>> +
>> +- qcom,pci-dev-id
>> +  Usage: optional
>> +  Value type: <u32>
>> +  Definition: PCIe device id of external modem to bind. If not set, any
>> +       device is compatible with this node.
>> +
>> +- qcom,pci-domain
>> +  Usage: required
>> +  Value type: <u32>
>> +  Definition: PCIe root complex external modem connected to
>> +
>> +- qcom,pci-bus
>> +  Usage: required
>> +  Value type: <u32>
>> +  Definition: PCIe bus external modem connected to
>> +
>> +- qcom,pci-slot
>> +  Usage: required
>> +  Value type: <u32>
>> +  Definition: PCIe slot as assigned by pci framework to external modem
> These don't seem to make any sense: You seem to have access to
> a regular pci_device already, so why do you need to duplicate the
> information about it in DT?
>
I will remove the platform device. In the original hardware design we
had a complicated power-on sequence that required the platform device
to come up first and follow a strict power-on sequence to power on the
modem before the PCI device could enumerate. I stored the BDF in DT to
correlate the platform device with the PCI device. The platform device
is no longer needed, so I can remove it.
>> +- qcom,smmu-cfg
>> +  Usage: required
>> +  Value type: <u32>
>> +  Definition: Required SMMU configuration bitmask for PCIe bus.
>> +       BIT mask:
>> +       BIT(0) : Attach address mapping to endpoint device
>> +       BIT(1) : Set attribute S1_BYPASS
>> +       BIT(2) : Set attribute FAST
>> +       BIT(3) : Set attribute ATOMIC
>> +       BIT(4) : Set attribute FORCE_COHERENT
>> +
>> +- qcom,addr-win
>> +  Usage: required if SMMU S1 translation is enabled
>> +  Value type: Array of <u64>
>> +  Definition: Pair of values describing iova start and stop address
> Why do you need these? Can't that be handled by the PCI
> layer?
I will move this to the endpoint DT. The PCIe endpoint driver does the IOMMU
configuration.
>> +- qcom,msm-bus,name
>> +  Usage: required
>> +  Value type: <string>
>> +  Definition: string representing the bus scale client name to register
> This probably belongs into a separate binding for the bus
> scale driver, right?
Yes, this is for the qcom bus scale driver, which I don't think is upstreamed
yet. Will confirm.
>> +static struct pci_driver mhi_pcie_driver;
> Please try to reorder the symbols to avoid forward declarations.
>
>> +static int mhi_platform_probe(struct platform_device *pdev)
>> +{
>> +       struct mhi_controller *mhi_cntrl;
>> +       struct mhi_dev *mhi_dev;
>> +       struct device_node *of_node = pdev->dev.of_node;
>> +       u64 addr_win[2];
>> +       int ret;
>> +
>> +       if (!of_node)
>> +               return -ENODEV;
>> +
>> +       mhi_cntrl = mhi_alloc_controller(sizeof(*mhi_dev));
>> +       if (!mhi_cntrl)
>> +               return -ENOMEM;
>> +
>> +       mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
>> +
>> +       /* get pci bus topology for this node */
>> +       ret = of_property_read_u32(of_node, "qcom,pci-dev-id",
>> +                                  &mhi_cntrl->dev_id);
>> +       if (ret)
>> +               mhi_cntrl->dev_id = PCI_ANY_ID;
>> +
>> +       ret = of_property_read_u32(of_node, "qcom,pci-domain",
>> +                                  &mhi_cntrl->domain);
>> +       if (ret)
>> +               goto error_probe;
>> +
>> +       ret = of_property_read_u32(of_node, "qcom,pci-bus", &mhi_cntrl->bus);
>> +       if (ret)
>> +               goto error_probe;
>> +
>> +       ret = of_property_read_u32(of_node, "qcom,pci-slot", &mhi_cntrl->slot);
>> +       if (ret)
>> +               goto error_probe;
> Please explain what you are trying to do here, why do you register
> two device drivers? It looks like they both refer to the same
> hardware, so why isn't it sufficient to have the pci_driver?
As I explained earlier, it is no longer needed. In the original hardware
design we had a chicken-and-egg situation where some driver had to come up
and power on the device before PCIe enumeration could take place.

>
>         Arnd
Thanks again
Sujeev

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-04-28 14:28     ` Sujeev Dias
@ 2018-04-28 15:50       ` Greg Kroah-Hartman
  0 siblings, 0 replies; 54+ messages in thread
From: Greg Kroah-Hartman @ 2018-04-28 15:50 UTC (permalink / raw)
  To: Sujeev Dias; +Cc: Arnd Bergmann, linux-kernel, linux-arm-msm, Tony Truong

On Sat, Apr 28, 2018 at 07:28:17AM -0700, Sujeev Dias wrote:
> Thanks for the quick feedback
> 
> 
> On 04/27/2018 12:22 AM, Greg Kroah-Hartman wrote:
> > On Thu, Apr 26, 2018 at 07:23:28PM -0700, Sujeev Dias wrote:
> > > MHI Host Interface is a communication protocol to be used by the host
> > > to control and communicate with a modem over a high speed peripheral bus.
> > > This module will allow host to communicate with external devices that
> > > support MHI protocol.
> > > 
> > > Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
> > No one else has ever reviewed this code before?  That's not good, please
> > at the very least, have someone else at your company go over it first.
> > I don't want to be the ones having to point out all of the "obvious"
> > issues :)
> > 
> This code has gone through rigorous code review and testing; before I submit
> the next patch I will have multiple people sign off on it.

Thank you.

> The intention was to completely compile out MHI_VERB messages because we have
> those messages in the data path. For release builds, we wanted to save as
> many MIPS as possible. However, for debugging these messages are
> extremely helpful.

That's exactly what dev_dbg() does today.  So please use it.
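
For illustration, a converted call site would look something like this
(names taken from your patch, message contents illustrative):

	dev_dbg(mhi_cntrl->dev, "chan %u: queued %zu bytes\n",
		mhi_chan->chan, len);

dev_dbg() compiles away unless dynamic debug (or DEBUG) is enabled, so
the data-path cost concern is already covered.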

> I will look into tracepoints...

Or that, if you want them, they can be useful on data paths.

thanks,

greg k-h


* Re: [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-04-27 12:18   ` Arnd Bergmann
@ 2018-04-28 16:08     ` Sujeev Dias
  0 siblings, 0 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-04-28 16:08 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Greg Kroah-Hartman, Linux Kernel Mailing List, linux-arm-msm,
	Tony Truong



On 04/27/2018 05:18 AM, Arnd Bergmann wrote:
> On Fri, Apr 27, 2018 at 4:23 AM, Sujeev Dias <sdias@codeaurora.org> wrote:
>
>> diff --git a/Documentation/devicetree/bindings/bus/mhi.txt b/Documentation/devicetree/bindings/bus/mhi.txt
>> new file mode 100644
>> index 0000000..ea1b620
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/bus/mhi.txt
>> @@ -0,0 +1,141 @@
>> +MHI Host Interface
>> +
>> +MHI used by the host to control and communicate with modem over
>> +high speed peripheral bus.
>> +
>> +==============
>> +Node Structure
>> +==============
>> +
>> +Main node properties:
>> +
>> +- mhi,max-channels
>> +  Usage: required
>> +  Value type: <u32>
>> +  Definition: Maximum number of channels supported by this controller
>> +
>> +- mhi,chan-cfg
>> +  Usage: required
>> +  Value type: Array of <u32>
>> +  Definition: Array of tuples describe channel configuration.
>> +       1st element: Physical channel number
>> +       2nd element: Transfer ring length in elements
>> +       3rd element: Event ring associated with this channel
>> +       4th element: Channel direction as defined by enum dma_data_direction
>> +               1 = UL data transfer
>> +               2 = DL data transfer
>> +       5th element: Channel doorbell mode configuration as defined by
>> +       enum MHI_BRSTMODE
>> +               2 = burst mode disabled
>> +               3 = burst mode enabled
>> +       6th element: mhi doorbell configuration, valid only when burst mode
>> +       enabled.
>> +               0 = Use default (device specific) polling configuration
>> +               For UL channels, value specifies the timer to poll MHI context
>> +               in milliseconds.
>> +               For DL channels, the threshold to poll the MHI context
>> +               in multiple of eight ring element.
>> +       7th element: Channel execution environment as defined by enum MHI_EE
>> +               1 = Bootloader stage
>> +               2 = AMSS mode
>> +       8th element: data transfer type accepted as defined by enum
>> +       MHI_XFER_TYPE
>> +               0 = accept cpu address for buffer
>> +               1 = accept skb
>> +               2 = accept scatterlist
>> +               3 = offload channel, does not accept any transfer type
>> +       9th element: Bitwise configuration settings for the channel
>> +               Bit mask:
>> +               BIT(0) : LPM notify, this channel master requires lpm enter/exit
>> +               notifications.
>> +               BIT(1) : Offload channel, MHI host only involved in setting up
>> +               the data pipe. Not involved in active data transfer.
>> +               BIT(2) : Must switch to doorbell mode whenever MHI M0 state
>> +               transition happens.
>> +               BIT(3) : MHI bus driver pre-allocate buffer for this channel.
>> +               If set, clients not allowed to queue buffers. Valid only for DL
>> +               direction.
>> +
>> +- mhi,chan-names
>> +  Usage: required
>> +  Value type: Array of <string>
>> +  Definition: Channel names configured in mhi,chan-cfg.
> I think the top-level structure would be better done by requiring
> a child for each channel and moving the mhi,chan-cfg
> into the children nodes, either as separate properties, or
> some of the fields combined into a 'reg' property.
>
>> +- mhi,fw-name
>> +  Usage: optional
>> +  Value type: <string>
>> +  Definition: Firmware image name to upload
> The device tree should have no knowledge of a firmware file name.
>
> What you should do instead is to have the driver generate the file
> name from the "compatible" property. You could use a lookup table
> in the driver, or have some algorithmic way of doing that, but you
> want the driver to have some form of intelligence to let it pick
> a different firmware image, e.g. when you have added new
> capabilities to the driver and the firmware but want to use an
> existing device tree.
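>
> For illustration only (the firmware file name is made up; "qcom,mhi" is
> the compatible string from your binding):
>
> 	static const struct of_device_id mhi_of_match[] = {
> 		{ .compatible = "qcom,mhi", .data = "qcom/mhi-default.mbn" },
> 		{ }
> 	};
>
> 	/* in probe, pdev being the probing device: */
> 	const char *fw_name = of_device_get_match_data(&pdev->dev);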
>
>> +Data structures
>> +---------------
>> +Host memory : Directly accessed by the host to manage the MHI data structures
>> +and buffers. The device accesses the host memory over the PCIe interface.
>> +
>> +Channel context array : All channel configurations are organized in channel
>> +context data array.
>> +
>> +struct __packed mhi_chan_ctxt;
>> +struct mhi_ctxt.chan_ctxt;
>> +
>> +Transfer rings : Used by host to schedule work items for a channel and organized
>> +as a circular queue of transfer descriptors (TD).
>> +
>> +struct __packed mhi_tre;
>> +struct mhi_chan.tre_ring;
>> +
> This looks like you reinvented virtio. What are the reasons for not just
> using virtio directly as we do for other modem interfaces?
>
>> +static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
>> +                          void *buf,
>> +                          size_t size)
>> +{
>> +       u32 tx_status, val;
>> +       int i, ret;
>> +       void __iomem *base = mhi_cntrl->bhi;
>> +       rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
>> +       dma_addr_t phys = dma_map_single(mhi_cntrl->dev, buf, size,
>> +                                        DMA_TO_DEVICE);
> Please don't name a dma address 'phys', it's very confusing ;-)
>
>> +struct mhi_bus mhi_bus;
> One global instance? I suspec this duplicates data that is already
> in the bus_type structure, so just use that instead.
>
>> +void mhi_init_debugfs(struct mhi_controller *mhi_cntrl)
>> +{
>> +       struct dentry *dentry;
>> +       char node[32];
>> +
>> +       if (!mhi_cntrl->parent)
>> +               return;
>> +
>> +       snprintf(node, sizeof(node), "%04x_%02u:%02u.%02u",
>> +                mhi_cntrl->dev_id, mhi_cntrl->domain, mhi_cntrl->bus,
>> +                mhi_cntrl->slot);
>> +
>> +       dentry = debugfs_create_dir(node, mhi_cntrl->parent);
>> +       if (IS_ERR_OR_NULL(dentry))
>> +               return;
> Using IS_ERR_OR_NULL() just means that you have misunderstood
> the interface, you want IS_ERR() here.
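>
> That is (sketch):
>
> 	dentry = debugfs_create_dir(node, mhi_cntrl->parent);
> 	if (IS_ERR(dentry))
> 		return;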
>
>> +#if defined(CONFIG_OF)
>> +static int of_parse_ev_cfg(struct mhi_controller *mhi_cntrl,
>> +                          struct device_node *of_node)
> Why the #ifdef? Do you ever build this on architectures that don't support
> CONFIG_OF yet? Which ones?
>
>> +struct __packed mhi_event_ctxt {
>> +       u32 reserved : 8;
>> +       u32 intmodc : 8;
>> +       u32 intmodt : 16;
>> +       u32 ertype;
>> +       u32 msivec;
>> +       u64 rbase;
>> +       u64 rlen;
>> +       u64 rp;
>> +       u64 wp;
>> +};
> I would generally only mark those members as __packed that
> don't already have natural alignment, such as
>
> struct mhi_event_ctxt {
>         u32 reserved : 8;
>         u32 intmodc : 8;
>         u32 intmodt : 16;
>         u32 ertype;
>         u32 msivec;
>         u64 rbase __packed __aligned(4);
>         u64 rlen __packed __aligned(4);
>         u64 rp __packed __aligned(4);
>         u64 wp __packed __aligned(4);
> };
>
> This clarifies where the hardware designers screwed up
> without forcing bytewise access to the structures.
>
>> +
>> +#ifdef CONFIG_MHI_DEBUG
>> +
>> +#define MHI_ASSERT(cond, msg) do { \
>> +       if (cond) \
>> +               panic(msg); \
>> +} while (0)
>> +
>> +#else
>> +
>> +#define MHI_ASSERT(cond, msg) do { \
>> +       if (cond) { \
>> +               MHI_ERR(msg); \
>> +               WARN_ON(cond); \
>> +       } \
>> +} while (0)
> Remove these.
>
>> +struct mhi_controller {
>> +       struct list_head node;
>> +
>> +       /* device node for iommu ops */
>> +       struct device *dev;
>> +       struct device_node *of_node;
> Hmm, shouldn't the mhi_controller itself be a device that
> is a child of ->dev? It feels like
Originally I did think about it, but I did not see any value in doing that.
Not only that, I would need the parent device for all memory allocation and
mapping, since child devices do not inherit the parent's iommu settings.
>> +       /* mmio base */
>> +       void __iomem *regs;
>> +       void __iomem *bhi;
>> +       void __iomem *wake_db;
>> +
>> +       /* device topology */
>> +       u32 dev_id;
>> +       u32 domain;
>> +       u32 bus;
>> +       u32 slot;
> These are even stranger. This looks PCI specific, but if you have
> a pci_dev then you know all of them while for the case you don't have
> a pci_dev, they don't make sense.
From the MHI bus point of view, this is a memory-mapped device, not
technically tied to a PCI device. I would like to keep that abstraction.
>> +       /* kernel log level */
>> +       enum MHI_DEBUG_LEVEL klog_lvl;
>> +
>> +       /* private log level controller driver to set */
>> +       enum MHI_DEBUG_LEVEL log_lvl;
>> +
>> +       /* controller specific data */
>> +       void *priv_data;
>> +       void *log_buf;
>> +       struct dentry *dentry;
>> +       struct dentry *parent;
> All the debugfs stuff is a bit worrying.  It seems so pervasive in
> the driver that it's hard to tell if any of it is actually used for more
> than just debugging. It might be better to split that out of the
> series and add back the debug files one at a time in separate
> patches that each explain why the files are still required.
> If you have successfully debugged a problem with one
> file, it may have seemed useful but maybe it's not needed
> for any future bugs.
>
> You might also be able to get more value out of tracepoints
> than debugfs files.
>
>> +};
>> +
>> +/**
>> + * struct mhi_device - mhi device structure associated bind to channel
>> + * @dev: Device associated with the channels
>> + * @mtu: Maximum # of bytes controller support
>> + * @ul_chan_id: MHI channel id for UL transfer
>> + * @dl_chan_id: MHI channel id for DL transfer
>> + * @priv: Driver private data
>> + */
>> +struct mhi_device {
>> +       struct device dev;
>> +       u32 dev_id;
>> +       u32 domain;
>> +       u32 bus;
>> +       u32 slot;
> Again, shouldn't the pci stuff be part of the parent or grandparent
> device?
This is for clients; they are not PCIe aware at all. To them this is just a
unique way to identify the physical device.
>> +
>> +#if defined(CONFIG_MHI_BUS)
> In what situation would you include this header without enabling
> CONFIG_MHI_BUS?
>   
>> +
>> +#ifdef CONFIG_MHI_DEBUG
>> +
>> +#define MHI_VERB(fmt, ...) do { \
>> +               if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_VERBOSE) \
>> +                       pr_debug("[D][%s] " fmt, __func__, ##__VA_ARGS__);\
>> +} while (0)
>> +
>> +#else
>> +
>> +#define MHI_VERB(fmt, ...)
>> +
>> +#endif
>> +
>> +#define MHI_LOG(fmt, ...) do { \
>> +               if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_INFO) \
>> +                       pr_info("[I][%s] " fmt, __func__, ##__VA_ARGS__);\
>> +} while (0)
>> +
>> +#define MHI_ERR(fmt, ...) do { \
>> +               if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_ERROR) \
>> +                       pr_err("[E][%s] " fmt, __func__, ##__VA_ARGS__); \
>> +} while (0)
>> +
>> +#define MHI_CRITICAL(fmt, ...) do { \
>> +               if (mhi_cntrl->klog_lvl <= MHI_MSG_LVL_CRITICAL) \
>> +                       pr_alert("[C][%s] " fmt, __func__, ##__VA_ARGS__); \
>> +} while (0)
>> +
> Again, these all should be removed. Just use dev_err() etc.
>
>> diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
>> index 7d361be..1e11e30 100644
>> --- a/include/linux/mod_devicetable.h
>> +++ b/include/linux/mod_devicetable.h
>> @@ -734,4 +734,15 @@ struct tb_service_id {
>>   #define TBSVC_MATCH_PROTOCOL_VERSION   0x0004
>>   #define TBSVC_MATCH_PROTOCOL_REVISION  0x0008
>>
>> +
>> +/**
>> + * struct mhi_device_id - MHI device identification
>> + * @chan: MHI channel name
>> + * @driver_data: driver data;
>> + */
>> +struct mhi_device_id {
>> +       const char *chan;
>> +       kernel_ulong_t driver_data;
>> +};
> I think you have to use a fixed-length string here, otherwise
> module autoloading cannot work.
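>
> For example (the array size is illustrative):
>
> 	struct mhi_device_id {
> 		char chan[32];
> 		kernel_ulong_t driver_data;
> 	};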
>
>        Arnd
Thank you
Sujeev

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-04-27  2:23 ` [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface Sujeev Dias
                     ` (4 preceding siblings ...)
  2018-04-28  2:52     ` kbuild test robot
@ 2018-05-03 19:21   ` Pavel Machek
  2018-05-04  3:05     ` Sujeev Dias
  2018-06-22 23:03   ` Randy Dunlap
  6 siblings, 1 reply; 54+ messages in thread
From: Pavel Machek @ 2018-05-03 19:21 UTC (permalink / raw)
  To: Sujeev Dias
  Cc: Greg Kroah-Hartman, Arnd Bergmann, linux-kernel, linux-arm-msm,
	Tony Truong

Hi!

> MHI Host Interface is a communication protocol to be used by the host
> to control and communicate with a modem over a high speed peripheral bus.
> This module will allow host to communicate with external devices that
> support MHI protocol.

I have a Motorola Droid 4 cellphone here, with a Qualcomm GSM modem. But
it talks over a serial line and USB. I guess MHI is not applicable to my
hardware?

> +MHI Devices
> +-----------
> +Logical device that bind to maximum of two physical MHI channels. Once MHI is in
> +powered on state, each supported channel by controller will be allocated as a
> +mhi_device.

What kind of protocol is running over MHI? I guess it's not AT
commands. QMI? Or something entirely different?
									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


* Re: [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-05-03 19:21   ` Pavel Machek
@ 2018-05-04  3:05     ` Sujeev Dias
  0 siblings, 0 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-05-04  3:05 UTC (permalink / raw)
  To: Pavel Machek
  Cc: Greg Kroah-Hartman, Arnd Bergmann, linux-kernel, linux-arm-msm,
	Tony Truong

Hi Pavel


On 05/03/2018 12:21 PM, Pavel Machek wrote:
> Hi!
>
>> MHI Host Interface is a communication protocol to be used by the host
>> to control and communcate with modem over a high speed peripheral bus.
>> This module will allow host to communicate with external devices that
>> support MHI protocol.
> I have Motorola Droid 4 cellphone here, with Qualcomm GSM modem. But
> it talks over serial line and USB. I guess MHI is not applicable to my
> hardware?
>
MHI is for PCIe based modems; in any case I can't comment on
commercial devices.
>> +MHI Devices
>> +-----------
>> +Logical device that bind to maximum of two physical MHI channels. Once MHI is in
>> +powered on state, each supported channel by controller will be allocated as a
>> +mhi_device.
> What kind of protocol is running over MHI? I guess its not AT
> commands. QMI? Or something entirely different?
All modem services between external modem and host go over MHI.
So QMI, AT, and all other data and control services go through the MHI host driver.
> 									Pavel

Thanks
Sujeev

-- 
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project


* Re: [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface
  2018-04-27  2:23 ` [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface Sujeev Dias
                     ` (5 preceding siblings ...)
  2018-05-03 19:21   ` Pavel Machek
@ 2018-06-22 23:03   ` Randy Dunlap
  6 siblings, 0 replies; 54+ messages in thread
From: Randy Dunlap @ 2018-06-22 23:03 UTC (permalink / raw)
  To: Sujeev Dias, Greg Kroah-Hartman, Arnd Bergmann
  Cc: linux-kernel, linux-arm-msm, Tony Truong

Hi,

On 04/26/2018 07:23 PM, Sujeev Dias wrote:
> diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
> index d1c0b60..e15d56d 100644
> --- a/drivers/bus/Kconfig
> +++ b/drivers/bus/Kconfig
> @@ -171,6 +171,23 @@ config DA8XX_MSTPRI
>  	  configuration. Allows to adjust the priorities of all master
>  	  peripherals.
>  
> +config MHI_BUS
> +	tristate "Modem Host Interface"
> +	help
> +	  MHI Host Interface is a communication protocol to be used by the host

	  MHI (Modem Host Interface) is

> +	  to control and communcate with modem over a high speed peripheral bus.

	                 communicate

> +	  Enabling this module will allow host to communicate with external
> +	  devices that support MHI protocol.
> +
> +config MHI_DEBUG
> +	 bool "MHI debug support"
> +	 depends on MHI_BUS
> +	 help
> +	   Say yes here to enable debugging support in the MHI transport
> +	   and individual MHI client drivers. This option will impact
> +	   throughput as individual MHI packets and state transitions
> +	   will be logged.
> +
>  source "drivers/bus/fsl-mc/Kconfig"
>  
>  endmenu


-- 
~Randy


* MHI code review
  2018-04-27  2:23 MHI initial design review Sujeev Dias
                   ` (3 preceding siblings ...)
  2018-04-27  2:23 ` [PATCH v1 4/4] mhi_bus: dev: uci: add user space " Sujeev Dias
@ 2018-07-09 20:08 ` Sujeev Dias
  2018-07-09 20:08   ` [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver Sujeev Dias
                     ` (7 more replies)
  4 siblings, 8 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-07-09 20:08 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: linux-kernel, linux-arm-msm, devicetree, Sujeev Dias, Tony Truong

Hi Greg Kroah-Hartman\Arnd Bergmann and community

Thank you for all the feedback; I believe I have addressed all the comments from the previous
patches. Also, I am excluding the mhi network driver from this series, as I still have some
modifications to do.

Please review the new patch series and share your feedback.

Thanks again

Sincerely,
Sujeev


* [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver
  2018-07-09 20:08 ` MHI code review Sujeev Dias
@ 2018-07-09 20:08   ` Sujeev Dias
  2018-07-09 20:50     ` Greg Kroah-Hartman
                       ` (4 more replies)
  2018-07-09 20:08   ` [PATCH v2 2/7] mhi_bus: core: add power management support Sujeev Dias
                     ` (6 subsequent siblings)
  7 siblings, 5 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-07-09 20:08 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: linux-kernel, linux-arm-msm, devicetree, Sujeev Dias,
	Tony Truong, Siddartha Mohanadoss

This is the initial skeleton driver for the mhi bus stack. MHI Host
Interface is a communication protocol to be used by the host to
control and communicate with a modem over a high speed peripheral bus.
This module will allow the host to communicate with external devices that
support the MHI protocol.

Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
Reviewed-by: Tony Truong <truong@codeaurora.org>
Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
---
 Documentation/00-INDEX                        |   2 +
 Documentation/devicetree/bindings/bus/mhi.txt | 258 ++++++++++++
 Documentation/mhi.txt                         | 235 +++++++++++
 drivers/bus/Kconfig                           |   8 +
 drivers/bus/Makefile                          |   1 +
 drivers/bus/mhi/Makefile                      |   6 +
 drivers/bus/mhi/core/Makefile                 |   1 +
 drivers/bus/mhi/core/mhi_init.c               | 538 ++++++++++++++++++++++++++
 drivers/bus/mhi/core/mhi_internal.h           | 238 ++++++++++++
 drivers/bus/mhi/core/mhi_main.c               | 122 ++++++
 include/linux/mhi.h                           | 341 ++++++++++++++++
 include/linux/mod_devicetable.h               |  12 +
 12 files changed, 1762 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/bus/mhi.txt
 create mode 100644 Documentation/mhi.txt
 create mode 100644 drivers/bus/mhi/Makefile
 create mode 100644 drivers/bus/mhi/core/Makefile
 create mode 100644 drivers/bus/mhi/core/mhi_init.c
 create mode 100644 drivers/bus/mhi/core/mhi_internal.h
 create mode 100644 drivers/bus/mhi/core/mhi_main.c
 create mode 100644 include/linux/mhi.h

diff --git a/Documentation/00-INDEX b/Documentation/00-INDEX
index 708dc4c..44e2c6b 100644
--- a/Documentation/00-INDEX
+++ b/Documentation/00-INDEX
@@ -270,6 +270,8 @@ memory-hotplug.txt
 	- Hotpluggable memory support, how to use and current status.
 men-chameleon-bus.txt
 	- info on MEN chameleon bus.
+mhi.txt
+	- Modem Host Interface
 mic/
 	- Intel Many Integrated Core (MIC) architecture device driver.
 mips/
diff --git a/Documentation/devicetree/bindings/bus/mhi.txt b/Documentation/devicetree/bindings/bus/mhi.txt
new file mode 100644
index 0000000..19deb84
--- /dev/null
+++ b/Documentation/devicetree/bindings/bus/mhi.txt
@@ -0,0 +1,258 @@
+MHI Host Interface
+
+MHI is used by the host to control and communicate with a modem over a
+high speed peripheral bus.
+
+==============
+Node Structure
+==============
+
+Main node properties:
+
+- mhi,max-channels
+  Usage: required
+  Value type: <u32>
+  Definition: Maximum number of channels supported by this controller
+
+- mhi,timeout
+  Usage: optional
+  Value type: <u32>
+  Definition: Maximum timeout in ms to wait for state and command completion
+
+- mhi,use-bb
+  Usage: optional
+  Value type: <bool>
+  Definition: Set to true if the PCIe controller does not have full access to
+	host DDR and a dedicated memory pool such as CMA or a carveout pool is
+	used. The pool must support atomic allocation.
+
+- mhi,buffer-len
+  Usage: optional
+  Value type: <u32>
+  Definition: MHI automatically pre-allocates buffers for some channels.
+	Sets the length of the buffers to allocate. If not set, the default
+	size MHI_MAX_MTU will be used.
+
+============================
+mhi channel node properties:
+============================
+
+- reg
+  Usage: required
+  Value type: <u32>
+  Definition: physical channel number
+
+- label
+  Usage: required
+  Value type: <string>
+  Definition: given name for the channel
+
+- mhi,num-elements
+  Usage: optional
+  Value type: <u32>
+  Definition: Number of elements the transfer ring supports
+
+- mhi,event-ring
+  Usage: required
+  Value type: <u32>
+  Definition: Event ring index associated with this channel
+
+- mhi,chan-dir
+  Usage: required
+  Value type: <u32>
+  Definition: Channel direction as defined by enum dma_data_direction
+	0 = Bidirectional data transfer
+	1 = UL data transfer
+	2 = DL data transfer
+	3 = No direction, not a regular data transfer channel
+
+- mhi,ee
+  Usage: required
+  Value type: <u32>
+  Definition: Channel execution environment as defined by enum MHI_EE
+	1 = Bootloader stage
+	2 = AMSS mode
+
+- mhi,pollcfg
+  Usage: optional
+  Value type: <u32>
+  Definition: MHI poll configuration, valid only when burst mode is enabled
+	0 = Use default (device specific) polling configuration
+	For UL channels, value specifies the timer to poll MHI context in
+	milliseconds.
+	For DL channels, the threshold to poll the MHI context in multiples of
+	eight ring elements.
+
+- mhi,data-type
+  Usage: required
+  Value type: <u32>
+  Definition: Data transfer type accepted as defined by enum MHI_XFER_TYPE
+	0 = accept cpu address for buffer
+	1 = accept skb
+	2 = accept scatterlist
+	3 = offload channel, does not accept any transfer type
+
+- mhi,doorbell-mode
+  Usage: required
+  Value type: <u32>
+  Definition: Channel doorbell mode configuration as defined by enum
+	MHI_BRSTMODE
+	2 = burst mode disabled
+	3 = burst mode enabled
+
+- mhi,lpm-notify
+  Usage: optional
+  Value type: <bool>
+  Definition: This channel's master requires low power mode enter and exit
+	notifications from the mhi bus master.
+
+- mhi,offload-chan
+  Usage: optional
+  Value type: <bool>
+  Definition: Client managed channel; the MHI host is only involved in setting
+	up the data path, not in the active data path.
+
+- mhi,db-mode-switch
+  Usage: optional
+  Value type: <bool>
+  Definition: Must switch to doorbell mode whenever MHI M0 state transition
+	happens.
+
+- mhi,auto-queue
+  Usage: optional
+  Value type: <bool>
+  Definition: MHI bus driver will pre-allocate buffers for this channel and
+	queue them to hardware. If set, clients are not allowed to queue
+	buffers. Valid only for the downlink direction.
+
+- mhi,auto-start
+  Usage: optional
+  Value type: <bool>
+  Definition: MHI host driver automatically starts channels once the mhi
+	device driver probe is complete. This should only be set true if the
+	initial handshake is initiated by the external modem.
+
+==========================
+mhi event node properties:
+==========================
+
+- mhi,num-elements
+  Usage: required
+  Value type: <u32>
+  Definition: Number of elements the event ring supports
+
+- mhi,intmod
+  Usage: required
+  Value type: <u32>
+  Definition: Interrupt moderation time in ms
+
+- mhi,msi
+  Usage: required
+  Value type: <u32>
+  Definition: MSI associated with this event ring
+
+- mhi,chan
+  Usage: optional
+  Value type: <u32>
+  Definition: Dedicated channel number, if it's a dedicated event ring
+
+- mhi,priority
+  Usage: required
+  Value type: <u32>
+  Definition: Event ring priority, set to 1 for now
+
+- mhi,brstmode
+  Usage: required
+  Value type: <u32>
+  Definition: Event doorbell mode configuration as defined by
+	enum MHI_BRSTMODE
+		2 = burst mode disabled
+		3 = burst mode enabled
+
+- mhi,data-type
+  Usage: optional
+  Value type: <u32>
+  Definition: Type of data this event ring will process as defined
+	by enum mhi_er_data_type
+		0 = process data packets (default)
+		1 = process mhi control packets
+
+- mhi,hw-ev
+  Usage: optional
+  Value type: <bool>
+  Definition: Event ring associated with hardware channels
+
+- mhi,client-manage
+  Usage: optional
+  Value type: <bool>
+  Definition: Client manages the event ring (used by napi_poll)
+
+- mhi,offload
+  Usage: optional
+  Value type: <bool>
+  Definition: Event ring associated with offload channel
+
+
+Children node properties:
+
+MHI drivers that require DT can add driver specific information as a child node.
+
+- mhi,chan
+  Usage: Required
+  Value type: <string>
+  Definition: Channel name
+
+========
+Example:
+========
+mhi_controller {
+	mhi,max-channels = <105>;
+
+	mhi_chan@0 {
+		reg = <0>;
+		label = "LOOPBACK";
+		mhi,num-elements = <64>;
+		mhi,event-ring = <2>;
+		mhi,chan-dir = <1>;
+		mhi,data-type = <0>;
+		mhi,doorbell-mode = <2>;
+		mhi,ee = <2>;
+	};
+
+	mhi_chan@1 {
+		reg = <1>;
+		label = "LOOPBACK";
+		mhi,num-elements = <64>;
+		mhi,event-ring = <2>;
+		mhi,chan-dir = <2>;
+		mhi,data-type = <0>;
+		mhi,doorbell-mode = <2>;
+		mhi,ee = <2>;
+	};
+
+	mhi_event@0 {
+		mhi,num-elements = <32>;
+		mhi,intmod = <1>;
+		mhi,msi = <1>;
+		mhi,chan = <0>;
+		mhi,priority = <1>;
+		mhi,brstmode = <2>;
+		mhi,data-type = <1>;
+	};
+
+	mhi_event@1 {
+		mhi,num-elements = <256>;
+		mhi,intmod = <1>;
+		mhi,msi = <2>;
+		mhi,chan = <0>;
+		mhi,priority = <1>;
+		mhi,brstmode = <2>;
+	};
+
+	mhi,timeout = <500>;
+
+	children_node {
+		mhi,chan = "LOOPBACK";
+		<driver specific properties>
+	};
+};
diff --git a/Documentation/mhi.txt b/Documentation/mhi.txt
new file mode 100644
index 0000000..1c501f1
--- /dev/null
+++ b/Documentation/mhi.txt
@@ -0,0 +1,235 @@
+Overview of Linux kernel MHI support
+====================================
+
+Modem-Host Interface (MHI)
+=========================
+MHI is used by the host to control and communicate with a modem over a high
+speed peripheral bus. Even though MHI can easily be adapted to any peripheral
+bus, it is primarily used with PCIe based devices. The host has one or more
+PCIe root ports connected to the modem device. The host has limited access to
+the device memory space, including register configuration and control of the
+device operation. Data transfers are invoked from the device.
+
+All data structures used by MHI are in the host system memory. The device
+accesses those data structures over the PCIe interface. MHI data structures
+and data buffers in the host system memory regions are mapped for the device.
+
+Memory spaces
+-------------
+PCIe Configurations : Used for enumeration and resource management, such as
+interrupts and base addresses. This is done by the mhi control driver.
+
+MMIO
+----
+MHI MMIO : Memory mapped IO consists of a set of registers in the device
+hardware, which are mapped to the host memory space through the PCIe base
+address register (BAR).
+
+MHI control registers : Access to MHI configurations registers
+(struct mhi_controller.regs).
+
+MHI BHI register: Boot host interface registers (struct mhi_controller.bhi) used
+for firmware download before MHI initialization.
+
+Channel db array : Doorbell registers (struct mhi_chan.tre_ring.db_addr) used
+by the host to notify the device there is new work to do.
+
+Event db array : Associated with the event context array
+(struct mhi_event.ring.db_addr), used by the host to notify the device that
+free events are available.
+
+Data structures
+---------------
+Host memory : Directly accessed by the host to manage the MHI data structures
+and buffers. The device accesses the host memory over the PCIe interface.
+
+Channel context array : All channel configurations are organized in channel
+context data array.
+
+struct __packed mhi_chan_ctxt;
+struct mhi_ctxt.chan_ctxt;
+
+Transfer rings : Used by host to schedule work items for a channel and organized
+as a circular queue of transfer descriptors (TD).
+
+struct __packed mhi_tre;
+struct mhi_chan.tre_ring;
+
+Event context array : All event configurations are organized in event context
+data array.
+
+struct mhi_ctxt.er_ctxt;
+struct __packed mhi_event_ctxt;
+
+Event rings: Used by device to send completion and state transition messages to
+host
+
+struct mhi_event.ring;
+struct __packed mhi_tre;
+
+Command context array: All command configurations are organized in command
+context data array.
+
+struct __packed mhi_cmd_ctxt;
+struct mhi_ctxt.cmd_ctxt;
+
+Command rings: Used by host to send MHI commands to device
+
+struct __packed mhi_tre;
+struct mhi_cmd.ring;
+
+Transfer rings
+--------------
+MHI channels are logical, unidirectional data pipes between host and device.
+Each channel is associated with a single transfer ring. The data direction
+can be either inbound (device to host) or outbound (host to device). Transfer
+descriptors are managed by using transfer rings, which are defined for each
+channel between device and host and reside in the host memory.
+
+Transfer ring Pointer:	  	Transfer Ring Array
+[Read Pointer (RP)] ----------->[Ring Element] } TD
+[Write Pointer (WP)]-		[Ring Element]
+                     -		[Ring Element]
+		      --------->[Ring Element]
+				[Ring Element]
+
+1. Host allocates memory for the transfer ring
+2. Host sets base, read pointer, and write pointer in the corresponding
+   channel context
+3. Ring is considered empty when RP == WP
+4. Ring is considered full when WP + 1 == RP
+5. RP indicates the next element to be serviced by the device
+6. When the host has a new buffer to send, it updates the ring element with
+   buffer information
+7. Host increments the WP to the next element
+8. Host rings the associated channel DB
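+
+As a sketch, the empty/full rules above (assuming RP and WP are element
+indices into a ring of 'elements' slots):
+
+	bool empty = (rp == wp);
+	bool full  = (((wp + 1) % elements) == rp);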
+
+Event rings
+-----------
+Events from the device to host are organized in event rings and defined in event
+descriptors. Event rings are arrays of EDs that reside in the host memory.
+
+Event ring Pointer:	  	Event Ring Array
+[Read Pointer (RP)] ----------->[Ring Element] } ED
+[Write Pointer (WP)]-		[Ring Element]
+                     -		[Ring Element]
+		      --------->[Ring Element]
+				[Ring Element]
+
+1. Host allocates memory for the event ring
+2. Host sets base, read pointer, and write pointer in the corresponding
+   event context
+3. Both host and device have a local copy of RP and WP
+4. Ring is considered empty (no events to service) when WP + 1 == RP
+5. Ring is full of events when RP == WP
+6. RP - 1 = last event the device programmed
+7. When there is a new event the device needs to send, the device updates
+   the ED pointed to by RP
+8. Device increments RP to the next element
+9. Device triggers an interrupt
+
+Example operation for data transfer:
+
+1. Host prepares a TD with buffer information
+2. Host increments Chan[id].ctxt.WP
+3. Host rings the channel DB register
+4. Device wakes up and processes the TD
+5. Device generates a completion event for that TD by updating the ED
+6. Device increments Event[id].ctxt.RP
+7. Device triggers an MSI to wake the host
+8. Host wakes up and checks the event ring for the completion event
+9. Host updates Event[id].ctxt.WP to indicate the completion event has been
+   processed
+
+MHI States
+----------
+
+enum MHI_STATE {
+MHI_STATE_RESET : MHI is in reset state, POR state. Host is not allowed to
+		  access device MMIO register space.
+MHI_STATE_READY : Device is ready for initialization. Host can start MHI
+		  initialization by programming MMIO
+MHI_STATE_M0 : MHI is in fully active state, data transfer is active
+MHI_STATE_M1 : Device in a suspended state
+MHI_STATE_M2 : MHI in low power mode, device may enter lower power mode.
+MHI_STATE_M3 : Both host and device in suspended state.  PCIe link is not
+	       accessible to device.
+
+MHI Initialization
+------------------
+
+1. After the system boots, the device is enumerated over the PCIe interface
+2. Host allocates MHI context for event, channel, and command arrays
+3. Host initializes the context arrays and prepares interrupts
+4. Host waits until the device enters READY state
+5. Host programs MHI MMIO registers and sets the device into MHI_M0 state
+6. Host waits for the device to enter M0 state
+
+Linux Software Architecture
+===========================
+
+MHI Controller
+--------------
+The MHI controller is also the MHI bus master, in charge of managing the
+physical link between host and device. It is not involved in actual data
+transfer. Currently this covers PCIe based buses; support for other bus
+types can be added later.
+
+Roles:
+1. Turn on the PCIe bus and configure the link
+2. Configure MSI, SMMU, and IOMEM
+3. Allocate struct mhi_controller and register with the MHI bus framework
+4. Initiate power on and shutdown sequence
+5. Initiate suspend and resume
+
+Usage
+-----
+
+1. Allocate the controller data structure by calling mhi_alloc_controller()
+2. Initialize mhi_controller with all the known information such as:
+   - Device Topology
+   - IOMMU window
+   - IOMEM mapping
+   - Device to use for memory allocation, and of_node with DT configuration
+   - Configure asynchronous callback functions
+3. Register MHI controller with MHI bus framework by calling
+   of_register_mhi_controller()
+
+After successful registration, the controller can initiate any of these power
+modes:
+
+1. Power up sequence
+   - mhi_prepare_for_power_up()
+   - mhi_async_power_up()
+   - mhi_sync_power_up()
+2. Power down sequence
+   - mhi_power_down()
+   - mhi_unprepare_after_power_down()
+3. Initiate suspend
+   - mhi_pm_suspend()
+4. Initiate resume
+   - mhi_pm_resume()
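+
+A minimal registration sketch (pdev and the my_* callbacks are placeholders
+for a controller driver's own device and implementations; error handling
+elided):
+
+	struct mhi_controller *mhi_cntrl = mhi_alloc_controller(0);
+
+	mhi_cntrl->dev = &pdev->dev;
+	mhi_cntrl->of_node = pdev->dev.of_node;
+	mhi_cntrl->runtime_get = my_runtime_get;
+	mhi_cntrl->runtime_put = my_runtime_put;
+	mhi_cntrl->status_cb = my_status_cb;
+	mhi_cntrl->link_status = my_link_status;
+
+	int ret = of_register_mhi_controller(mhi_cntrl);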
+
+MHI Devices
+-----------
+A logical device that binds to a maximum of two physical MHI channels. Once
+MHI is in the powered on state, each channel supported by the controller will
+be allocated as an mhi_device.
+
+Each supported device is enumerated under
+/sys/bus/mhi/devices/
+
+struct mhi_device;
+
+MHI Driver
+----------
+Each MHI driver can bind to one or more MHI devices. The MHI host driver
+binds an mhi_device to an mhi_driver.
+
+All registered drivers are visible under
+/sys/bus/mhi/drivers/
+
+struct mhi_driver;
+
+Usage
+-----
+
+1. Register the driver using mhi_driver_register
+2. Before sending data, prepare the device for transfer by calling
+   mhi_prepare_for_transfer
+3. Initiate data transfer by calling mhi_queue_transfer
+4. After finishing, call mhi_unprepare_from_transfer to end the data transfer
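+
+A minimal client sketch (my_probe and my_remove are placeholder callbacks):
+
+	static const struct mhi_device_id my_id_table[] = {
+		{ .chan = "LOOPBACK" },
+		{},
+	};
+
+	static struct mhi_driver my_driver = {
+		.id_table = my_id_table,
+		.probe = my_probe,
+		.remove = my_remove,
+		.driver = {
+			.name = "my_mhi_client",
+		},
+	};
+
+	mhi_driver_register(&my_driver);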
diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
index d1c0b60..080d3c2 100644
--- a/drivers/bus/Kconfig
+++ b/drivers/bus/Kconfig
@@ -171,6 +171,14 @@ config DA8XX_MSTPRI
 	  configuration. Allows to adjust the priorities of all master
 	  peripherals.
 
+config MHI_BUS
+	tristate "Modem Host Interface"
+	help
+	  MHI (Modem Host Interface) is a communication protocol to be used by
+	  the host to control and communicate with a modem over a high speed
+	  peripheral bus. Enabling this module will allow the host to
+	  communicate with external devices that support the MHI protocol.
+
 source "drivers/bus/fsl-mc/Kconfig"
 
 endmenu
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index b8f036c..8fc0b3b 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -31,3 +31,4 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS)	+= uniphier-system-bus.o
 obj-$(CONFIG_VEXPRESS_CONFIG)	+= vexpress-config.o
 
 obj-$(CONFIG_DA8XX_MSTPRI)	+= da8xx-mstpri.o
+obj-$(CONFIG_MHI_BUS) += mhi/
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
new file mode 100644
index 0000000..f8f14f7
--- /dev/null
+++ b/drivers/bus/mhi/Makefile
@@ -0,0 +1,6 @@
+#
+# Makefile for the MHI stack
+#
+
+# core layer
+obj-y += core/
diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
new file mode 100644
index 0000000..a015809
--- /dev/null
+++ b/drivers/bus/mhi/core/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_MHI_BUS) += mhi_init.o mhi_main.o
diff --git a/drivers/bus/mhi/core/mhi_init.c b/drivers/bus/mhi/core/mhi_init.c
new file mode 100644
index 0000000..b8c30f8
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_init.c
@@ -0,0 +1,538 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include <linux/mhi.h>
+#include "mhi_internal.h"
+
+static int of_parse_ev_cfg(struct mhi_controller *mhi_cntrl,
+			   struct device_node *of_node)
+{
+	int i, ret, num = 0;
+	struct mhi_event *mhi_event;
+	struct device_node *child;
+
+	for_each_available_child_of_node(of_node, child) {
+		if (!strcmp(child->name, "mhi_event"))
+			num++;
+	}
+
+	if (!num)
+		return -EINVAL;
+
+	mhi_cntrl->total_ev_rings = num;
+	mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event),
+				       GFP_KERNEL);
+	if (!mhi_cntrl->mhi_event)
+		return -ENOMEM;
+
+	/* populate ev ring */
+	mhi_event = mhi_cntrl->mhi_event;
+	i = 0;
+	for_each_available_child_of_node(of_node, child) {
+		if (strcmp(child->name, "mhi_event"))
+			continue;
+
+		mhi_event->er_index = i++;
+		ret = of_property_read_u32(child, "mhi,num-elements",
+					   (u32 *)&mhi_event->ring.elements);
+		if (ret)
+			goto error_ev_cfg;
+
+		ret = of_property_read_u32(child, "mhi,intmod",
+					   &mhi_event->intmod);
+		if (ret)
+			goto error_ev_cfg;
+
+		ret = of_property_read_u32(child, "mhi,msi",
+					   &mhi_event->msi);
+		if (ret)
+			goto error_ev_cfg;
+
+		ret = of_property_read_u32(child, "mhi,chan",
+					   &mhi_event->chan);
+		if (!ret) {
+			if (mhi_event->chan >= mhi_cntrl->max_chan)
+				goto error_ev_cfg;
+			/* this event ring has a dedicated channel */
+			mhi_event->mhi_chan =
+				&mhi_cntrl->mhi_chan[mhi_event->chan];
+		}
+
+		ret = of_property_read_u32(child, "mhi,priority",
+					   &mhi_event->priority);
+		if (ret)
+			goto error_ev_cfg;
+
+		ret = of_property_read_u32(child, "mhi,brstmode",
+					   &mhi_event->db_cfg.brstmode);
+		if (ret || MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
+			goto error_ev_cfg;
+
+		mhi_event->db_cfg.process_db =
+			(mhi_event->db_cfg.brstmode == MHI_BRSTMODE_ENABLE) ?
+			mhi_db_brstmode : mhi_db_brstmode_disable;
+
+		ret = of_property_read_u32(child, "mhi,data-type",
+					   &mhi_event->data_type);
+		if (ret)
+			mhi_event->data_type = MHI_ER_DATA_ELEMENT_TYPE;
+
+		if (mhi_event->data_type > MHI_ER_DATA_TYPE_MAX)
+			goto error_ev_cfg;
+
+		switch (mhi_event->data_type) {
+		case MHI_ER_DATA_ELEMENT_TYPE:
+			mhi_event->process_event = mhi_process_data_event_ring;
+			break;
+		case MHI_ER_CTRL_ELEMENT_TYPE:
+			mhi_event->process_event = mhi_process_ctrl_ev_ring;
+			break;
+		}
+
+		mhi_event->hw_ring = of_property_read_bool(child, "mhi,hw-ev");
+		if (mhi_event->hw_ring)
+			mhi_cntrl->hw_ev_rings++;
+		else
+			mhi_cntrl->sw_ev_rings++;
+		mhi_event->cl_manage = of_property_read_bool(child,
+							"mhi,client-manage");
+		mhi_event->offload_ev = of_property_read_bool(child,
+							      "mhi,offload");
+		mhi_event++;
+	}
+
+	/* we need msi for each event ring + additional one for BHI */
+	mhi_cntrl->msi_required = mhi_cntrl->total_ev_rings + 1;
+
+	return 0;
+
+error_ev_cfg:
+
+	kfree(mhi_cntrl->mhi_event);
+	return -EINVAL;
+}
+
+static int of_parse_ch_cfg(struct mhi_controller *mhi_cntrl,
+			   struct device_node *of_node)
+{
+	int ret;
+	struct device_node *child;
+	u32 chan;
+
+	ret = of_property_read_u32(of_node, "mhi,max-channels",
+				   &mhi_cntrl->max_chan);
+	if (ret)
+		return ret;
+
+	mhi_cntrl->mhi_chan = kcalloc(mhi_cntrl->max_chan,
+				      sizeof(*mhi_cntrl->mhi_chan), GFP_KERNEL);
+	if (!mhi_cntrl->mhi_chan)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&mhi_cntrl->lpm_chans);
+
+	/* populate channel configurations */
+	for_each_available_child_of_node(of_node, child) {
+		struct mhi_chan *mhi_chan;
+
+		if (strcmp(child->name, "mhi_chan"))
+			continue;
+
+		ret = of_property_read_u32(child, "reg", &chan);
+		if (ret || chan >= mhi_cntrl->max_chan)
+			goto error_chan_cfg;
+
+		mhi_chan = &mhi_cntrl->mhi_chan[chan];
+
+		ret = of_property_read_string(child, "label",
+					      &mhi_chan->name);
+		if (ret)
+			goto error_chan_cfg;
+
+		mhi_chan->chan = chan;
+
+		ret = of_property_read_u32(child, "mhi,num-elements",
+					   (u32 *)&mhi_chan->buf_ring.elements);
+		if (!ret && !mhi_chan->buf_ring.elements)
+			goto error_chan_cfg;
+
+		mhi_chan->tre_ring.elements = mhi_chan->buf_ring.elements;
+
+		ret = of_property_read_u32(child, "mhi,event-ring",
+					   &mhi_chan->er_index);
+		if (ret)
+			goto error_chan_cfg;
+
+		ret = of_property_read_u32(child, "mhi,chan-dir",
+					   &mhi_chan->dir);
+		if (ret)
+			goto error_chan_cfg;
+
+		ret = of_property_read_u32(child, "mhi,ee", &mhi_chan->ee);
+		if (ret || mhi_chan->ee >= MHI_EE_MAX_SUPPORTED)
+			goto error_chan_cfg;
+
+		of_property_read_u32(child, "mhi,pollcfg",
+				     &mhi_chan->db_cfg.pollcfg);
+
+		ret = of_property_read_u32(child, "mhi,data-type",
+					   &mhi_chan->xfer_type);
+		if (ret)
+			goto error_chan_cfg;
+
+		switch (mhi_chan->xfer_type) {
+		case MHI_XFER_BUFFER:
+			mhi_chan->gen_tre = mhi_gen_tre;
+			mhi_chan->queue_xfer = mhi_queue_buf;
+			break;
+		case MHI_XFER_SKB:
+			mhi_chan->queue_xfer = mhi_queue_skb;
+			break;
+		case MHI_XFER_SCLIST:
+			mhi_chan->gen_tre = mhi_gen_tre;
+			mhi_chan->queue_xfer = mhi_queue_sclist;
+			break;
+		case MHI_XFER_NOP:
+			mhi_chan->queue_xfer = mhi_queue_nop;
+			break;
+		default:
+			goto error_chan_cfg;
+		}
+
+		mhi_chan->lpm_notify = of_property_read_bool(child,
+							     "mhi,lpm-notify");
+		mhi_chan->offload_ch = of_property_read_bool(child,
+							"mhi,offload-chan");
+		mhi_chan->db_cfg.reset_req = of_property_read_bool(child,
+							"mhi,db-mode-switch");
+		mhi_chan->pre_alloc = of_property_read_bool(child,
+							    "mhi,auto-queue");
+		mhi_chan->auto_start = of_property_read_bool(child,
+							     "mhi,auto-start");
+
+		if (mhi_chan->pre_alloc &&
+		    (mhi_chan->dir != DMA_FROM_DEVICE ||
+		     mhi_chan->xfer_type != MHI_XFER_BUFFER))
+			goto error_chan_cfg;
+
+		/* bi-dir and directionless channels must be an offload chan */
+		if ((mhi_chan->dir == DMA_BIDIRECTIONAL ||
+		     mhi_chan->dir == DMA_NONE) && !mhi_chan->offload_ch)
+			goto error_chan_cfg;
+
+		/* if mhi host allocate the buffers then client cannot queue */
+		if (mhi_chan->pre_alloc)
+			mhi_chan->queue_xfer = mhi_queue_nop;
+
+		if (!mhi_chan->offload_ch) {
+			ret = of_property_read_u32(child, "mhi,doorbell-mode",
+						   &mhi_chan->db_cfg.brstmode);
+			if (ret ||
+			    MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode))
+				goto error_chan_cfg;
+
+			mhi_chan->db_cfg.process_db =
+				(mhi_chan->db_cfg.brstmode ==
+				 MHI_BRSTMODE_ENABLE) ?
+				mhi_db_brstmode : mhi_db_brstmode_disable;
+		}
+
+		mhi_chan->configured = true;
+
+		if (mhi_chan->lpm_notify)
+			list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans);
+	}
+
+	return 0;
+
+error_chan_cfg:
+	kfree(mhi_cntrl->mhi_chan);
+
+	return -EINVAL;
+}
+
+static int of_parse_dt(struct mhi_controller *mhi_cntrl,
+		       struct device_node *of_node)
+{
+	int ret;
+
+	/* parse MHI channel configuration */
+	ret = of_parse_ch_cfg(mhi_cntrl, of_node);
+	if (ret)
+		return ret;
+
+	/* parse MHI event configuration */
+	ret = of_parse_ev_cfg(mhi_cntrl, of_node);
+	if (ret)
+		goto error_ev_cfg;
+
+	ret = of_property_read_u32(of_node, "mhi,timeout",
+				   &mhi_cntrl->timeout_ms);
+	if (ret)
+		mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
+
+	mhi_cntrl->bounce_buf = of_property_read_bool(of_node, "mhi,use-bb");
+	ret = of_property_read_u32(of_node, "mhi,buffer-len",
+				   (u32 *)&mhi_cntrl->buffer_len);
+	if (ret)
+		mhi_cntrl->buffer_len = MHI_MAX_MTU;
+
+	return 0;
+
+error_ev_cfg:
+	kfree(mhi_cntrl->mhi_chan);
+
+	return ret;
+}
+
+int of_register_mhi_controller(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+	int i;
+	struct mhi_event *mhi_event;
+	struct mhi_chan *mhi_chan;
+	struct mhi_cmd *mhi_cmd;
+	struct mhi_device *mhi_dev;
+
+	if (!mhi_cntrl->of_node)
+		return -EINVAL;
+
+	if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
+		return -EINVAL;
+
+	if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
+		return -EINVAL;
+
+	ret = of_parse_dt(mhi_cntrl, mhi_cntrl->of_node);
+	if (ret)
+		return -EINVAL;
+
+	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
+				     sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
+	if (!mhi_cntrl->mhi_cmd) {
+		ret = -ENOMEM;
+		goto error_alloc_cmd;
+	}
+
+	INIT_LIST_HEAD(&mhi_cntrl->transition_list);
+	mutex_init(&mhi_cntrl->pm_mutex);
+	rwlock_init(&mhi_cntrl->pm_lock);
+	spin_lock_init(&mhi_cntrl->transition_lock);
+	spin_lock_init(&mhi_cntrl->wlock);
+	init_waitqueue_head(&mhi_cntrl->state_event);
+
+	mhi_cmd = mhi_cntrl->mhi_cmd;
+	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
+		spin_lock_init(&mhi_cmd->lock);
+
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		mhi_event->mhi_cntrl = mhi_cntrl;
+		spin_lock_init(&mhi_event->lock);
+		if (mhi_event->data_type == MHI_ER_CTRL_ELEMENT_TYPE)
+			tasklet_init(&mhi_event->task, mhi_ctrl_ev_task,
+				     (ulong)mhi_event);
+		else
+			tasklet_init(&mhi_event->task, mhi_ev_task,
+				     (ulong)mhi_event);
+	}
+
+	mhi_chan = mhi_cntrl->mhi_chan;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+		mutex_init(&mhi_chan->mutex);
+		init_completion(&mhi_chan->completion);
+		rwlock_init(&mhi_chan->lock);
+	}
+
+	if (mhi_cntrl->bounce_buf) {
+		mhi_cntrl->map_single = mhi_map_single_use_bb;
+		mhi_cntrl->unmap_single = mhi_unmap_single_use_bb;
+	} else {
+		mhi_cntrl->map_single = mhi_map_single_no_bb;
+		mhi_cntrl->unmap_single = mhi_unmap_single_no_bb;
+	}
+
+	/* register controller with mhi_bus */
+	mhi_dev = mhi_alloc_device(mhi_cntrl);
+	if (!mhi_dev) {
+		ret = -ENOMEM;
+		goto error_alloc_dev;
+	}
+
+	mhi_dev->dev_type = MHI_CONTROLLER_TYPE;
+	mhi_dev->mhi_cntrl = mhi_cntrl;
+	dev_set_name(&mhi_dev->dev, "%04x_%02u.%02u.%02u", mhi_dev->dev_id,
+		     mhi_dev->domain, mhi_dev->bus, mhi_dev->slot);
+	ret = device_add(&mhi_dev->dev);
+	if (ret)
+		goto error_add_dev;
+
+	mhi_cntrl->mhi_dev = mhi_dev;
+
+	mhi_cntrl->parent = debugfs_lookup(mhi_bus_type.name, NULL);
+
+	return 0;
+
+error_add_dev:
+	mhi_dealloc_device(mhi_cntrl, mhi_dev);
+
+error_alloc_dev:
+	kfree(mhi_cntrl->mhi_cmd);
+
+error_alloc_cmd:
+	kfree(mhi_cntrl->mhi_chan);
+	kfree(mhi_cntrl->mhi_event);
+
+	return ret;
+}
+EXPORT_SYMBOL(of_register_mhi_controller);
+
+void mhi_unregister_mhi_controller(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
+
+	kfree(mhi_cntrl->mhi_cmd);
+	kfree(mhi_cntrl->mhi_event);
+	kfree(mhi_cntrl->mhi_chan);
+
+	device_del(&mhi_dev->dev);
+	put_device(&mhi_dev->dev);
+}
+
+/* set ptr to control private data */
+static inline void mhi_controller_set_devdata(struct mhi_controller *mhi_cntrl,
+					 void *priv)
+{
+	mhi_cntrl->priv_data = priv;
+}
+
+
+/* allocate mhi controller to register */
+struct mhi_controller *mhi_alloc_controller(size_t size)
+{
+	struct mhi_controller *mhi_cntrl;
+
+	mhi_cntrl = kzalloc(size + sizeof(*mhi_cntrl), GFP_KERNEL);
+
+	if (mhi_cntrl && size)
+		mhi_controller_set_devdata(mhi_cntrl, mhi_cntrl + 1);
+
+	return mhi_cntrl;
+}
+EXPORT_SYMBOL(mhi_alloc_controller);
+
+/* match dev to drv */
+static int mhi_match(struct device *dev, struct device_driver *drv)
+{
+	struct mhi_device *mhi_dev = to_mhi_device(dev);
+	struct mhi_driver *mhi_drv = to_mhi_driver(drv);
+	const struct mhi_device_id *id;
+
+	/* a controller-type device has no client driver associated with it */
+	if (mhi_dev->dev_type == MHI_CONTROLLER_TYPE)
+		return 0;
+
+	for (id = mhi_drv->id_table; id->chan[0]; id++)
+		if (!strcmp(mhi_dev->chan_name, id->chan)) {
+			mhi_dev->id = id;
+			return 1;
+		}
+
+	return 0;
+}
+
+static void mhi_release_device(struct device *dev)
+{
+	struct mhi_device *mhi_dev = to_mhi_device(dev);
+
+	kfree(mhi_dev);
+}
+
+struct bus_type mhi_bus_type = {
+	.name = "mhi",
+	.dev_name = "mhi",
+	.match = mhi_match,
+};
+
+static int mhi_driver_probe(struct device *dev)
+{
+	return -EINVAL;
+}
+
+static int mhi_driver_remove(struct device *dev)
+{
+	return 0;
+}
+
+int mhi_driver_register(struct mhi_driver *mhi_drv)
+{
+	struct device_driver *driver = &mhi_drv->driver;
+
+	if (!mhi_drv->probe || !mhi_drv->remove)
+		return -EINVAL;
+
+	driver->bus = &mhi_bus_type;
+	driver->probe = mhi_driver_probe;
+	driver->remove = mhi_driver_remove;
+
+	return driver_register(driver);
+}
+EXPORT_SYMBOL(mhi_driver_register);
+
+void mhi_driver_unregister(struct mhi_driver *mhi_drv)
+{
+	driver_unregister(&mhi_drv->driver);
+}
+EXPORT_SYMBOL(mhi_driver_unregister);
+
+struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_device *mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
+	struct device *dev;
+
+	if (!mhi_dev)
+		return NULL;
+
+	dev = &mhi_dev->dev;
+	device_initialize(dev);
+	dev->bus = &mhi_bus_type;
+	dev->release = mhi_release_device;
+	dev->parent = mhi_cntrl->dev;
+	mhi_dev->mhi_cntrl = mhi_cntrl;
+	mhi_dev->dev_id = mhi_cntrl->dev_id;
+	mhi_dev->domain = mhi_cntrl->domain;
+	mhi_dev->bus = mhi_cntrl->bus;
+	mhi_dev->slot = mhi_cntrl->slot;
+	mhi_dev->mtu = MHI_MAX_MTU;
+	atomic_set(&mhi_dev->dev_wake, 0);
+
+	return mhi_dev;
+}
+
+static int __init mhi_init(void)
+{
+	/* parent directory */
+	debugfs_create_dir(mhi_bus_type.name, NULL);
+
+	return bus_register(&mhi_bus_type);
+}
+postcore_initcall(mhi_init);
+
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("MHI_CORE");
+MODULE_DESCRIPTION("MHI Host Interface");
diff --git a/drivers/bus/mhi/core/mhi_internal.h b/drivers/bus/mhi/core/mhi_internal.h
new file mode 100644
index 0000000..90c40de
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_internal.h
@@ -0,0 +1,238 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#ifndef _MHI_INT_H
+#define _MHI_INT_H
+
+extern struct bus_type mhi_bus_type;
+
+/* MHI transfer completion events */
+enum MHI_EV_CCS {
+	MHI_EV_CC_INVALID = 0x0,
+	MHI_EV_CC_SUCCESS = 0x1,
+	MHI_EV_CC_EOT = 0x2,
+	MHI_EV_CC_OVERFLOW = 0x3,
+	MHI_EV_CC_EOB = 0x4,
+	MHI_EV_CC_OOB = 0x5,
+	MHI_EV_CC_DB_MODE = 0x6,
+	MHI_EV_CC_UNDEFINED_ERR = 0x10,
+	MHI_EV_CC_BAD_TRE = 0x11,
+};
+
+enum MHI_CH_STATE {
+	MHI_CH_STATE_DISABLED = 0x0,
+	MHI_CH_STATE_ENABLED = 0x1,
+	MHI_CH_STATE_RUNNING = 0x2,
+	MHI_CH_STATE_SUSPENDED = 0x3,
+	MHI_CH_STATE_STOP = 0x4,
+	MHI_CH_STATE_ERROR = 0x5,
+};
+
+enum MHI_BRSTMODE {
+	MHI_BRSTMODE_DISABLE = 0x2,
+	MHI_BRSTMODE_ENABLE = 0x3,
+};
+
+#define MHI_INVALID_BRSTMODE(mode) ((mode) != MHI_BRSTMODE_DISABLE && \
+				    (mode) != MHI_BRSTMODE_ENABLE)
+
+enum MHI_EE {
+	MHI_EE_PBL = 0x0,
+	MHI_EE_SBL = 0x1,
+	MHI_EE_AMSS = 0x2,
+	MHI_EE_BHIE = 0x3,
+	MHI_EE_RDDM = 0x4,
+	MHI_EE_PTHRU = 0x5,
+	MHI_EE_EDL = 0x6,
+	MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
+	MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
+	MHI_EE_MAX,
+};
+
+/* accepted buffer type for the channel */
+enum MHI_XFER_TYPE {
+	MHI_XFER_BUFFER,
+	MHI_XFER_SKB,
+	MHI_XFER_SCLIST,
+	MHI_XFER_NOP, /* CPU offload channel, host does not accept transfer */
+};
+
+#define NR_OF_CMD_RINGS (1)
+#define CMD_EL_PER_RING (128)
+#define PRIMARY_CMD_RING (0)
+#define MHI_MAX_MTU (0xffff)
+
+enum MHI_ER_TYPE {
+	MHI_ER_TYPE_INVALID = 0x0,
+	MHI_ER_TYPE_VALID = 0x1,
+};
+
+enum mhi_er_data_type {
+	MHI_ER_DATA_ELEMENT_TYPE,
+	MHI_ER_CTRL_ELEMENT_TYPE,
+	MHI_ER_DATA_TYPE_MAX = MHI_ER_CTRL_ELEMENT_TYPE,
+};
+
+struct db_cfg {
+	bool reset_req;
+	bool db_mode;
+	u32 pollcfg;
+	enum MHI_BRSTMODE brstmode;
+	dma_addr_t db_val;
+	void (*process_db)(struct mhi_controller *mhi_cntrl,
+			   struct db_cfg *db_cfg, void __iomem *io_addr,
+			   dma_addr_t db_val);
+};
+
+struct mhi_ring {
+	dma_addr_t dma_handle;
+	dma_addr_t iommu_base;
+	u64 *ctxt_wp; /* points to ctxt wp */
+	void *pre_aligned;
+	void *base;
+	void *rp;
+	void *wp;
+	size_t el_size;
+	size_t len;
+	size_t elements;
+	size_t alloc_size;
+	void __iomem *db_addr;
+};
+
+struct mhi_cmd {
+	struct mhi_ring ring;
+	spinlock_t lock;
+};
+
+struct mhi_buf_info {
+	dma_addr_t p_addr;
+	void *v_addr;
+	void *bb_addr;
+	void *wp;
+	size_t len;
+	void *cb_buf;
+	enum dma_data_direction dir;
+};
+
+struct mhi_event {
+	u32 er_index;
+	u32 intmod;
+	u32 msi;
+	int chan; /* this event ring is dedicated to a channel */
+	u32 priority;
+	enum mhi_er_data_type data_type;
+	struct mhi_ring ring;
+	struct db_cfg db_cfg;
+	bool hw_ring;
+	bool cl_manage;
+	bool offload_ev; /* managed by a device driver */
+	spinlock_t lock;
+	struct mhi_chan *mhi_chan; /* dedicated to channel */
+	struct tasklet_struct task;
+	int (*process_event)(struct mhi_controller *mhi_cntrl,
+			     struct mhi_event *mhi_event,
+			     u32 event_quota);
+	struct mhi_controller *mhi_cntrl;
+};
+
+struct mhi_chan {
+	u32 chan;
+	const char *name;
+	/*
+	 * Important: when consuming, increment tre_ring first; when releasing,
+	 * decrement buf_ring first. If tre_ring has space, buf_ring is
+	 * guaranteed to have space, so we do not need to check both rings.
+	 */
+	struct mhi_ring buf_ring;
+	struct mhi_ring tre_ring;
+	u32 er_index;
+	u32 intmod;
+	enum dma_data_direction dir;
+	struct db_cfg db_cfg;
+	enum MHI_EE ee;
+	enum MHI_XFER_TYPE xfer_type;
+	enum MHI_CH_STATE ch_state;
+	enum MHI_EV_CCS ccs;
+	bool lpm_notify;
+	bool configured;
+	bool offload_ch;
+	bool pre_alloc;
+	bool auto_start;
+	/* functions that generate the transfer ring elements */
+	int (*gen_tre)(struct mhi_controller *mhi_cntrl,
+		       struct mhi_chan *mhi_chan, void *buf, void *cb,
+		       size_t len, enum MHI_FLAGS flags);
+	int (*queue_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+			  void *buf, size_t len, enum MHI_FLAGS mflags);
+	/* xfer callback */
+	struct mhi_device *mhi_dev;
+	void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
+	struct mutex mutex;
+	struct completion completion;
+	rwlock_t lock;
+	struct list_head node;
+};
+
+/* default MHI timeout */
+#define MHI_TIMEOUT_MS (1000)
+
+/* power management apis */
+void mhi_pm_st_worker(struct work_struct *work);
+void mhi_fw_load_worker(struct work_struct *work);
+void mhi_pm_sys_err_worker(struct work_struct *work);
+
+/* queue transfer buffer */
+int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+		void *buf, void *cb, size_t buf_len, enum MHI_FLAGS flags);
+int mhi_queue_buf(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		  void *buf, size_t len, enum MHI_FLAGS mflags);
+int mhi_queue_skb(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		  void *buf, size_t len, enum MHI_FLAGS mflags);
+int mhi_queue_sclist(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		  void *buf, size_t len, enum MHI_FLAGS mflags);
+int mhi_queue_nop(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		  void *buf, size_t len, enum MHI_FLAGS mflags);
+
+/* register access methods */
+void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
+		     void __iomem *db_addr, dma_addr_t wp);
+void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
+			     struct db_cfg *db_mode, void __iomem *db_addr,
+			     dma_addr_t wp);
+
+struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
+static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
+				      struct mhi_device *mhi_dev)
+{
+	kfree(mhi_dev);
+}
+int mhi_destroy_device(struct device *dev, void *data);
+void mhi_create_devices(struct mhi_controller *mhi_cntrl);
+
+int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
+			 struct mhi_buf_info *buf_info);
+int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
+			  struct mhi_buf_info *buf_info);
+void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
+			    struct mhi_buf_info *buf_info);
+void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
+			     struct mhi_buf_info *buf_info);
+void mhi_ctrl_ev_task(unsigned long data);
+int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+				struct mhi_event *mhi_event, u32 event_quota);
+int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+			     struct mhi_event *mhi_event, u32 event_quota);
+
+/* initialization methods */
+int mhi_dtr_init(void);
+
+/* isr handlers */
+irqreturn_t mhi_msi_handlr(int irq_number, void *dev);
+irqreturn_t mhi_intvec_threaded_handlr(int irq_number, void *dev);
+irqreturn_t mhi_intvec_handlr(int irq_number, void *dev);
+void mhi_ev_task(unsigned long data);
+
+#endif /* _MHI_INT_H */
diff --git a/drivers/bus/mhi/core/mhi_main.c b/drivers/bus/mhi/core/mhi_main.c
new file mode 100644
index 0000000..4df4cd0
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_main.c
@@ -0,0 +1,122 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include <linux/mhi.h>
+#include "mhi_internal.h"
+
+void mhi_db_brstmode(struct mhi_controller *mhi_cntrl,
+		     struct db_cfg *db_cfg,
+		     void __iomem *db_addr,
+		     dma_addr_t wp)
+{
+}
+
+void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
+			     struct db_cfg *db_cfg,
+			     void __iomem *db_addr,
+			     dma_addr_t wp)
+{
+}
+
+int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
+			 struct mhi_buf_info *buf_info)
+{
+	return -ENOMEM;
+}
+
+int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
+			  struct mhi_buf_info *buf_info)
+{
+	return -ENOMEM;
+}
+
+void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
+			    struct mhi_buf_info *buf_info)
+{
+}
+
+void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
+			    struct mhi_buf_info *buf_info)
+{
+}
+
+int mhi_queue_sclist(struct mhi_device *mhi_dev,
+		     struct mhi_chan *mhi_chan,
+		     void *buf,
+		     size_t len,
+		     enum MHI_FLAGS mflags)
+{
+	return -EINVAL;
+}
+
+int mhi_queue_nop(struct mhi_device *mhi_dev,
+		  struct mhi_chan *mhi_chan,
+		  void *buf,
+		  size_t len,
+		  enum MHI_FLAGS mflags)
+{
+	return -EINVAL;
+}
+
+int mhi_queue_skb(struct mhi_device *mhi_dev,
+		  struct mhi_chan *mhi_chan,
+		  void *buf,
+		  size_t len,
+		  enum MHI_FLAGS mflags)
+{
+	return -EINVAL;
+}
+
+int mhi_gen_tre(struct mhi_controller *mhi_cntrl,
+		struct mhi_chan *mhi_chan,
+		void *buf,
+		void *cb,
+		size_t buf_len,
+		enum MHI_FLAGS flags)
+{
+	return -EINVAL;
+}
+
+int mhi_queue_buf(struct mhi_device *mhi_dev,
+		  struct mhi_chan *mhi_chan,
+		  void *buf,
+		  size_t len,
+		  enum MHI_FLAGS mflags)
+{
+	return -EINVAL;
+}
+
+int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+			     struct mhi_event *mhi_event,
+			     u32 event_quota)
+{
+	return -EIO;
+}
+
+int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+				struct mhi_event *mhi_event,
+				u32 event_quota)
+{
+	return -EIO;
+}
+
+void mhi_ev_task(unsigned long data)
+{
+}
+
+void mhi_ctrl_ev_task(unsigned long data)
+{
+}
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
new file mode 100644
index 0000000..c80685e9
--- /dev/null
+++ b/include/linux/mhi.h
@@ -0,0 +1,341 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ */
+#ifndef _MHI_H_
+#define _MHI_H_
+
+struct mhi_chan;
+struct mhi_event;
+struct mhi_ctxt;
+struct mhi_cmd;
+struct mhi_buf_info;
+
+/**
+ * enum MHI_CB - MHI callback
+ * @MHI_CB_IDLE: MHI entered idle state
+ * @MHI_CB_PENDING_DATA: New data available for client to process
+ * @MHI_CB_LPM_ENTER: MHI host entered low power mode
+ * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
+ * @MHI_CB_EE_RDDM: MHI device entered RDDM execution environment
+ */
+enum MHI_CB {
+	MHI_CB_IDLE,
+	MHI_CB_PENDING_DATA,
+	MHI_CB_LPM_ENTER,
+	MHI_CB_LPM_EXIT,
+	MHI_CB_EE_RDDM,
+};
+
+/**
+ * enum MHI_FLAGS - Transfer flags
+ * @MHI_EOB: End of buffer for bulk transfer
+ * @MHI_EOT: End of transfer
+ * @MHI_CHAIN: Linked transfer
+ */
+enum MHI_FLAGS {
+	MHI_EOB,
+	MHI_EOT,
+	MHI_CHAIN,
+};
+
+/**
+ * enum mhi_device_type - Device types
+ * @MHI_XFER_TYPE: Handles data transfer
+ * @MHI_TIMESYNC_TYPE: Used for the timesync feature
+ * @MHI_CONTROLLER_TYPE: Control device
+ */
+enum mhi_device_type {
+	MHI_XFER_TYPE,
+	MHI_TIMESYNC_TYPE,
+	MHI_CONTROLLER_TYPE,
+};
+
+/**
+ * struct mhi_controller - Master controller structure for external modem
+ * @dev: Device associated with this controller
+ * @of_node: DT that has MHI configuration information
+ * @regs: Points to base of MHI MMIO register space
+ * @dev_id: PCIe device id of the external device
+ * @domain: PCIe domain the device is connected to
+ * @bus: PCIe bus the device is assigned to
+ * @slot: PCIe slot for the modem
+ * @iova_start: IOMMU starting address for data
+ * @iova_stop: IOMMU stop address for data
+ * @fw_image: Firmware image name for normal booting
+ * @edl_image: Firmware image name for emergency download mode
+ * @fbc_download: MHI host needs to do complete image transfer
+ * @rddm_size: RAM dump size the host should allocate for debugging purposes
+ * @sbl_size: SBL image size
+ * @seg_len: BHIe vector size
+ * @fbc_image: Points to firmware image buffer
+ * @rddm_image: Points to RAM dump buffer
+ * @max_chan: Maximum number of channels the controller supports
+ * @mhi_chan: Points to channel configuration table
+ * @lpm_chans: List of channels that require LPM notifications
+ * @total_ev_rings: Total # of event rings allocated
+ * @hw_ev_rings: Number of hardware event rings
+ * @sw_ev_rings: Number of software event rings
+ * @msi_required: Number of MSI vectors required to operate
+ * @msi_allocated: Number of MSI vectors allocated by the bus master
+ * @irq: Base IRQ # to request
+ * @mhi_event: MHI event ring configurations table
+ * @mhi_cmd: MHI command ring configurations table
+ * @mhi_ctxt: MHI device context, shared memory between host and device
+ * @timeout_ms: Timeout in ms for state transitions
+ * @pm_state: Power management state
+ * @ee: MHI device execution environment
+ * @dev_state: MHI STATE
+ * @status_cb: CB function to notify various power states to the bus master
+ * @link_status: Query link status in case of abnormal value read from device
+ * @runtime_get: Async runtime resume function
+ * @runtime_put: Release votes
+ * @lpm_disable: Request controller to disable link level low power modes
+ * @lpm_enable: Controller may enable link level low power modes again
+ * @priv_data: Points to bus master's private data
+ */
+struct mhi_controller {
+	struct list_head node;
+	struct mhi_device *mhi_dev;
+
+	/* device node for iommu ops */
+	struct device *dev;
+	struct device_node *of_node;
+
+	/* mmio base */
+	void __iomem *regs;
+
+	/* device topology */
+	u32 dev_id;
+	u32 domain;
+	u32 bus;
+	u32 slot;
+
+	/* addressing window */
+	dma_addr_t iova_start;
+	dma_addr_t iova_stop;
+
+	/* fw images */
+	const char *fw_image;
+	const char *edl_image;
+
+	/* mhi host manages downloading entire fbc images */
+	bool fbc_download;
+	size_t rddm_size;
+	size_t sbl_size;
+	size_t seg_len;
+	u32 session_id;
+	u32 sequence_id;
+
+	/* physical channel config data */
+	u32 max_chan;
+	struct mhi_chan *mhi_chan;
+	struct list_head lpm_chans; /* these channels require lpm notification */
+
+	/* physical event config data */
+	u32 total_ev_rings;
+	u32 hw_ev_rings;
+	u32 sw_ev_rings;
+	u32 msi_required;
+	u32 msi_allocated;
+	int *irq; /* interrupt table */
+	struct mhi_event *mhi_event;
+
+	/* cmd rings */
+	struct mhi_cmd *mhi_cmd;
+
+	/* mhi context (shared with device) */
+	struct mhi_ctxt *mhi_ctxt;
+
+	u32 timeout_ms;
+
+	/* caller should grab pm_mutex for suspend/resume operations */
+	struct mutex pm_mutex;
+	bool pre_init;
+	rwlock_t pm_lock;
+	u32 pm_state;
+	u32 ee;
+	u32 dev_state;
+	bool wake_set;
+	atomic_t dev_wake;
+	atomic_t alloc_size;
+	struct list_head transition_list;
+	spinlock_t transition_lock;
+	spinlock_t wlock;
+
+	/* debug counters */
+	u32 M0, M2, M3;
+
+	/* worker for different state transitions */
+	struct work_struct st_worker;
+	struct work_struct fw_worker;
+	struct work_struct syserr_worker;
+	wait_queue_head_t state_event;
+
+	/* shadow functions */
+	void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
+			  enum MHI_CB cb);
+	int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
+	void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
+	void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
+	int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
+	void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
+	void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
+	void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
+	int (*map_single)(struct mhi_controller *mhi_cntrl,
+			  struct mhi_buf_info *buf);
+	void (*unmap_single)(struct mhi_controller *mhi_cntrl,
+			     struct mhi_buf_info *buf);
+
+	/* channel to control DTR messaging */
+	struct mhi_device *dtr_dev;
+
+	/* bounce buffer settings */
+	bool bounce_buf;
+	size_t buffer_len;
+
+	/* controller specific data */
+	void *priv_data;
+	void *log_buf;
+	struct dentry *dentry;
+	struct dentry *parent;
+};
+
+/**
+ * struct mhi_device - mhi device structure bound to a channel
+ * @dev: Device associated with the channels
+ * @mtu: Maximum # of bytes the controller supports
+ * @ul_chan_id: MHI channel id for UL transfer
+ * @dl_chan_id: MHI channel id for DL transfer
+ * @tiocm: Device current terminal settings
+ * @priv: Driver private data
+ */
+struct mhi_device {
+	struct device dev;
+	u32 dev_id;
+	u32 domain;
+	u32 bus;
+	u32 slot;
+	size_t mtu;
+	int ul_chan_id;
+	int dl_chan_id;
+	int ul_event_id;
+	int dl_event_id;
+	u32 tiocm;
+	const struct mhi_device_id *id;
+	const char *chan_name;
+	struct mhi_controller *mhi_cntrl;
+	struct mhi_chan *ul_chan;
+	struct mhi_chan *dl_chan;
+	atomic_t dev_wake;
+	enum mhi_device_type dev_type;
+	void *priv_data;
+	int (*ul_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		       void *buf, size_t len, enum MHI_FLAGS flags);
+	int (*dl_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+		       void *buf, size_t len, enum MHI_FLAGS flags);
+	void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB cb);
+};
+
+/**
+ * struct mhi_result - Completed buffer information
+ * @buf_addr: Address of data buffer
+ * @dir: Channel direction
+ * @bytes_xferd: # of bytes transferred
+ * @transaction_status: Status of the last transfer
+ */
+struct mhi_result {
+	void *buf_addr;
+	enum dma_data_direction dir;
+	size_t bytes_xferd;
+	int transaction_status;
+};
+
+/**
+ * struct mhi_driver - mhi driver information
+ * @id_table: NULL-terminated channel ID table
+ * @ul_xfer_cb: UL data transfer callback
+ * @dl_xfer_cb: DL data transfer callback
+ * @status_cb: Asynchronous status callback
+ */
+struct mhi_driver {
+	const struct mhi_device_id *id_table;
+	int (*probe)(struct mhi_device *mhi_dev,
+		     const struct mhi_device_id *id);
+	void (*remove)(struct mhi_device *mhi_dev);
+	void (*ul_xfer_cb)(struct mhi_device *mhi_dev,
+			   struct mhi_result *result);
+	void (*dl_xfer_cb)(struct mhi_device *mhi_dev,
+			   struct mhi_result *result);
+	void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB mhi_cb);
+	struct device_driver driver;
+};
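+
+/*
+ * Illustrative registration sketch ("EXAMPLE" is a hypothetical channel
+ * name; the my_* callbacks are placeholders):
+ *
+ *	static const struct mhi_device_id my_id_table[] = {
+ *		{ .chan = "EXAMPLE" },
+ *		{},
+ *	};
+ *
+ *	static struct mhi_driver my_driver = {
+ *		.id_table = my_id_table,
+ *		.probe = my_probe,
+ *		.remove = my_remove,
+ *		.ul_xfer_cb = my_ul_cb,
+ *		.dl_xfer_cb = my_dl_cb,
+ *		.driver = { .name = "my_mhi_client" },
+ *	};
+ *
+ *	ret = mhi_driver_register(&my_driver);
+ */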
+
+#define to_mhi_driver(drv) container_of(drv, struct mhi_driver, driver)
+#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
+
+static inline void mhi_device_set_devdata(struct mhi_device *mhi_dev,
+					  void *priv)
+{
+	mhi_dev->priv_data = priv;
+}
+
+static inline void *mhi_device_get_devdata(struct mhi_device *mhi_dev)
+{
+	return mhi_dev->priv_data;
+}
+
+static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
+{
+	return mhi_cntrl->priv_data;
+}
+
+static inline void mhi_free_controller(struct mhi_controller *mhi_cntrl)
+{
+	kfree(mhi_cntrl);
+}
+
+/**
+ * mhi_driver_register - Register driver with MHI framework
+ * @mhi_drv: mhi_driver structure
+ */
+int mhi_driver_register(struct mhi_driver *mhi_drv);
+
+/**
+ * mhi_driver_unregister - Unregister a driver for mhi_devices
+ * @mhi_drv: mhi_driver structure
+ */
+void mhi_driver_unregister(struct mhi_driver *mhi_drv);
+
+/**
+ * mhi_alloc_controller - Allocate mhi_controller structure
+ * Allocate the controller structure plus additional space for controller
+ * private data. The private data pointer can be obtained by calling
+ * mhi_controller_get_devdata.
+ * @size: # of additional bytes to allocate
+ */
+struct mhi_controller *mhi_alloc_controller(size_t size);
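+
+/*
+ * Illustrative sketch (struct my_priv is a hypothetical controller
+ * private structure):
+ *
+ *	struct my_priv *priv;
+ *	struct mhi_controller *mhi_cntrl;
+ *
+ *	mhi_cntrl = mhi_alloc_controller(sizeof(*priv));
+ *	if (!mhi_cntrl)
+ *		return -ENOMEM;
+ *	priv = mhi_controller_get_devdata(mhi_cntrl);
+ */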
+
+/**
+ * of_register_mhi_controller - Register MHI controller
+ * Registers an MHI controller with the MHI bus framework. DT must be supported.
+ * @mhi_cntrl: MHI controller to register
+ */
+int of_register_mhi_controller(struct mhi_controller *mhi_cntrl);
+
+void mhi_unregister_mhi_controller(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_bdf_to_controller - Look up a registered controller
+ * Search for controller based on device identification
+ * @domain: RC domain of the device
+ * @bus: Bus the device is connected to
+ * @slot: Slot the device is assigned to
+ * @dev_id: Device Identification
+ */
+struct mhi_controller *mhi_bdf_to_controller(u32 domain, u32 bus, u32 slot,
+					     u32 dev_id);
+
+#endif /* _MHI_H_ */
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index 7d361be..453ec01 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -734,4 +734,16 @@ struct tb_service_id {
 #define TBSVC_MATCH_PROTOCOL_VERSION	0x0004
 #define TBSVC_MATCH_PROTOCOL_REVISION	0x0008
 
+#define MHI_NAME_SIZE 32
+
+/**
+ * struct mhi_device_id - MHI device identification
+ * @chan: MHI channel name
+ * @driver_data: Driver data
+ */
+struct mhi_device_id {
+	const char chan[MHI_NAME_SIZE];
+	kernel_ulong_t driver_data;
+};
+
 #endif /* LINUX_MOD_DEVICETABLE_H */
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v2 2/7] mhi_bus: core: add power management support
  2018-07-09 20:08 ` MHI code review Sujeev Dias
  2018-07-09 20:08   ` [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver Sujeev Dias
@ 2018-07-09 20:08   ` Sujeev Dias
  2018-07-09 20:08   ` [PATCH v2 3/7] mhi_bus: core: add support for data transfer Sujeev Dias
                     ` (5 subsequent siblings)
  7 siblings, 0 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-07-09 20:08 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: linux-kernel, linux-arm-msm, devicetree, Sujeev Dias,
	Tony Truong, Siddartha Mohanadoss

Add support for MHI power management operations such as
power on, off, suspend, and resume.

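A minimal usage sketch (error handling trimmed; both helpers are
introduced below) of how a bus master brackets device operation with
the prepare/unprepare calls:

	ret = mhi_prepare_for_power_up(mhi_cntrl);
	if (ret)
		return ret;
	/* ... power up and operate the device ... */
	mhi_unprepare_after_power_down(mhi_cntrl);
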
Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
Reviewed-by: Tony Truong <truong@codeaurora.org>
Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
---
 drivers/bus/mhi/core/Makefile       |    2 +-
 drivers/bus/mhi/core/mhi_boot.c     |  533 ++++++++++++++++++
 drivers/bus/mhi/core/mhi_init.c     |  695 +++++++++++++++++++++++-
 drivers/bus/mhi/core/mhi_internal.h |  491 +++++++++++++++++
 drivers/bus/mhi/core/mhi_main.c     |  528 +++++++++++++++++-
 drivers/bus/mhi/core/mhi_pm.c       | 1027 +++++++++++++++++++++++++++++++++++
 include/linux/mhi.h                 |  121 +++++
 7 files changed, 3394 insertions(+), 3 deletions(-)
 create mode 100644 drivers/bus/mhi/core/mhi_boot.c
 create mode 100644 drivers/bus/mhi/core/mhi_pm.c

diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
index a015809..a6015ab 100644
--- a/drivers/bus/mhi/core/Makefile
+++ b/drivers/bus/mhi/core/Makefile
@@ -1 +1 @@
-obj-$(CONFIG_MHI_BUS) +=mhi_init.o mhi_main.o
+obj-$(CONFIG_MHI_BUS) +=mhi_init.o mhi_main.o mhi_pm.o mhi_boot.o
diff --git a/drivers/bus/mhi/core/mhi_boot.c b/drivers/bus/mhi/core/mhi_boot.c
new file mode 100644
index 0000000..a8e4a15
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_boot.c
@@ -0,0 +1,533 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#include <linux/debugfs.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/firmware.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/module.h>
+#include <linux/random.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include <linux/mhi.h>
+#include "mhi_internal.h"
+
+
+/* setup rddm vector table for rddm transfer */
+static void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
+			     struct image_info *img_info)
+{
+	struct mhi_buf *mhi_buf = img_info->mhi_buf;
+	struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
+	int i = 0;
+
+	for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
+		bhi_vec->dma_addr = mhi_buf->dma_addr;
+		bhi_vec->size = mhi_buf->len;
+	}
+}
+
+/* collect rddm during kernel panic */
+static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+	struct mhi_buf *mhi_buf;
+	u32 sequence_id;
+	u32 rx_status;
+	enum MHI_EE ee;
+	struct image_info *rddm_image = mhi_cntrl->rddm_image;
+	const u32 delayus = 100;
+	u32 retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
+	void __iomem *base = mhi_cntrl->bhie;
+
+	dev_info(mhi_cntrl->dev,
+		 "Entered with pm_state:%s dev_state:%s ee:%s\n",
+		to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		TO_MHI_EXEC_STR(mhi_cntrl->ee));
+
+	/*
+	 * This should only execute during a kernel panic; we expect all
+	 * other cores to shut down while we're collecting the rddm buffer.
+	 * After returning from this function, we expect the device to reset.
+	 *
+	 * Normally, we would read/write pm_state only after grabbing
+	 * pm_lock; since we're in a panic, we skip it.
+	 */
+
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+		return -EIO;
+
+	/*
+	 * There is no guarantee this state change will take effect since
+	 * we're setting it without grabbing pm_lock; it's best effort
+	 */
+	mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT;
+	/* the update should take effect immediately */
+	smp_wmb();
+
+	/* setup the RX vector table */
+	mhi_rddm_prepare(mhi_cntrl, rddm_image);
+	mhi_buf = &rddm_image->mhi_buf[rddm_image->entries - 1];
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
+		      upper_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS,
+		      lower_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
+	sequence_id = prandom_u32() & BHIE_RXVECSTATUS_SEQNUM_BMSK;
+
+	if (unlikely(!sequence_id))
+		sequence_id = 1;
+
+
+	mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
+			    BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
+			    sequence_id);
+
+	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
+
+	while (retry--) {
+		ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
+					 BHIE_RXVECSTATUS_STATUS_BMSK,
+					 BHIE_RXVECSTATUS_STATUS_SHFT,
+					 &rx_status);
+		if (ret)
+			return -EIO;
+
+		if (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL)
+			return 0;
+
+		udelay(delayus);
+	}
+
+	ee = mhi_get_exec_env(mhi_cntrl);
+	ret = mhi_read_reg(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS, &rx_status);
+
+	dev_err(mhi_cntrl->dev, "Did not complete RDDM transfer\n");
+	dev_err(mhi_cntrl->dev, "Current EE:%s\n", TO_MHI_EXEC_STR(ee));
+	dev_err(mhi_cntrl->dev, "RXVEC_STATUS:0x%x, ret:%d\n", rx_status, ret);
+
+	return -EIO;
+}
+
+/* download ramdump image from device */
+int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic)
+{
+	void __iomem *base = mhi_cntrl->bhie;
+	rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
+	struct image_info *rddm_image = mhi_cntrl->rddm_image;
+	struct mhi_buf *mhi_buf;
+	int ret;
+	u32 rx_status;
+	u32 sequence_id;
+
+	if (!rddm_image)
+		return -ENOMEM;
+
+	if (in_panic)
+		return __mhi_download_rddm_in_panic(mhi_cntrl);
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->ee == MHI_EE_RDDM ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		dev_err(mhi_cntrl->dev,
+			"MHI is not in valid state, pm_state:%s ee:%s\n",
+			to_mhi_pm_state_str(mhi_cntrl->pm_state),
+			TO_MHI_EXEC_STR(mhi_cntrl->ee));
+		return -EIO;
+	}
+
+	mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image);
+
+	/* vector table is the last entry */
+	mhi_buf = &rddm_image->mhi_buf[rddm_image->entries - 1];
+
+	read_lock_bh(pm_lock);
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+		read_unlock_bh(pm_lock);
+		return -EIO;
+	}
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
+		      upper_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS,
+		      lower_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
+
+	sequence_id = prandom_u32() & BHIE_RXVECSTATUS_SEQNUM_BMSK;
+	mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
+			    BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
+			    sequence_id);
+	read_unlock_bh(pm_lock);
+
+	/* waiting for image download completion */
+	wait_event_timeout(mhi_cntrl->state_event,
+			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
+			   mhi_read_reg_field(mhi_cntrl, base,
+					      BHIE_RXVECSTATUS_OFFS,
+					      BHIE_RXVECSTATUS_STATUS_BMSK,
+					      BHIE_RXVECSTATUS_STATUS_SHFT,
+					      &rx_status) || rx_status,
+			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+		return -EIO;
+
+	return (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
+}
+EXPORT_SYMBOL(mhi_download_rddm_img);
+
+static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
+			    const struct mhi_buf *mhi_buf)
+{
+	void __iomem *base = mhi_cntrl->bhie;
+	rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
+	u32 tx_status;
+
+	read_lock_bh(pm_lock);
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+		read_unlock_bh(pm_lock);
+		return -EIO;
+	}
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_HIGH_OFFS,
+		      upper_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_LOW_OFFS,
+		      lower_32_bits(mhi_buf->dma_addr));
+
+	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
+
+	mhi_cntrl->sequence_id = prandom_u32() & BHIE_TXVECSTATUS_SEQNUM_BMSK;
+	mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
+			    BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
+			    mhi_cntrl->sequence_id);
+	read_unlock_bh(pm_lock);
+
+	/* waiting for image download completion */
+	wait_event_timeout(mhi_cntrl->state_event,
+			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
+			   mhi_read_reg_field(mhi_cntrl, base,
+					      BHIE_TXVECSTATUS_OFFS,
+					      BHIE_TXVECSTATUS_STATUS_BMSK,
+					      BHIE_TXVECSTATUS_STATUS_SHFT,
+					      &tx_status) || tx_status,
+			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+		return -EIO;
+
+	return (tx_status == BHIE_TXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
+}
+
+static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
+			   void *buf,
+			   size_t size)
+{
+	u32 tx_status, val;
+	int i, ret;
+	void __iomem *base = mhi_cntrl->bhi;
+	rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
+	dma_addr_t dma_addr = dma_map_single(mhi_cntrl->dev, buf, size,
+					     DMA_TO_DEVICE);
+	struct {
+		char *name;
+		u32 offset;
+	} error_reg[] = {
+		{ "ERROR_CODE", BHI_ERRCODE },
+		{ "ERROR_DBG1", BHI_ERRDBG1 },
+		{ "ERROR_DBG2", BHI_ERRDBG2 },
+		{ "ERROR_DBG3", BHI_ERRDBG3 },
+		{ NULL },
+	};
+
+	if (dma_mapping_error(mhi_cntrl->dev, dma_addr))
+		return -ENOMEM;
+
+	/* program and start SBL download via the BHI protocol */
+	read_lock_bh(pm_lock);
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+		read_unlock_bh(pm_lock);
+		goto invalid_pm_state;
+	}
+
+	mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
+	mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH,
+		      upper_32_bits(dma_addr));
+	mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW,
+		      lower_32_bits(dma_addr));
+	mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
+	mhi_cntrl->session_id = prandom_u32() & BHI_TXDB_SEQNUM_BMSK;
+	mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, mhi_cntrl->session_id);
+	read_unlock_bh(pm_lock);
+
+	/* waiting for image download completion */
+	wait_event_timeout(mhi_cntrl->state_event,
+			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
+			   mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
+					      BHI_STATUS_MASK, BHI_STATUS_SHIFT,
+					      &tx_status) || tx_status,
+			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+		goto invalid_pm_state;
+
+	if (tx_status == BHI_STATUS_ERROR) {
+		dev_err(mhi_cntrl->dev, "Image transfer failed\n");
+		read_lock_bh(pm_lock);
+		if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+			for (i = 0; error_reg[i].name; i++) {
+				ret = mhi_read_reg(mhi_cntrl, base,
+						   error_reg[i].offset, &val);
+				if (ret)
+					break;
+				dev_err(mhi_cntrl->dev, "reg:%s value:0x%x\n",
+					error_reg[i].name, val);
+			}
+		}
+		read_unlock_bh(pm_lock);
+		goto invalid_pm_state;
+	}
+
+	dma_unmap_single(mhi_cntrl->dev, dma_addr, size, DMA_TO_DEVICE);
+
+	return (tx_status == BHI_STATUS_SUCCESS) ? 0 : -ETIMEDOUT;
+
+invalid_pm_state:
+	dma_unmap_single(mhi_cntrl->dev, dma_addr, size, DMA_TO_DEVICE);
+
+	return -EIO;
+}
+
+void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
+			 struct image_info *image_info)
+{
+	int i;
+	struct mhi_buf *mhi_buf = image_info->mhi_buf;
+
+	for (i = 0; i < image_info->entries; i++, mhi_buf++)
+		mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
+				  mhi_buf->dma_addr);
+
+	kfree(image_info->mhi_buf);
+	kfree(image_info);
+}
+
+int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
+			 struct image_info **image_info,
+			 size_t alloc_size)
+{
+	size_t seg_size = mhi_cntrl->seg_len;
+	/* require an additional entry for the vector table */
+	int segments = DIV_ROUND_UP(alloc_size, seg_size) + 1;
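+	/*
+	 * e.g. (illustrative): alloc_size = 2.5 * seg_size yields three data
+	 * segments plus the trailing vector-table entry, four entries total
+	 */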
+	int i;
+	struct image_info *img_info;
+	struct mhi_buf *mhi_buf;
+
+	img_info = kzalloc(sizeof(*img_info), GFP_KERNEL);
+	if (!img_info)
+		return -ENOMEM;
+
+	/* allocate memory for entries */
+	img_info->mhi_buf = kcalloc(segments, sizeof(*img_info->mhi_buf),
+				    GFP_KERNEL);
+	if (!img_info->mhi_buf)
+		goto error_alloc_mhi_buf;
+
+	/* allocate and populate vector table */
+	mhi_buf = img_info->mhi_buf;
+	for (i = 0; i < segments; i++, mhi_buf++) {
+		size_t vec_size = seg_size;
+
+		/* last entry is for vector table */
+		if (i == segments - 1)
+			vec_size = sizeof(struct bhi_vec_entry) * i;
+
+		mhi_buf->len = vec_size;
+		mhi_buf->buf = mhi_alloc_coherent(mhi_cntrl, vec_size,
+					&mhi_buf->dma_addr, GFP_KERNEL);
+		if (!mhi_buf->buf)
+			goto error_alloc_segment;
+	}
+
+	img_info->bhi_vec = img_info->mhi_buf[segments - 1].buf;
+	img_info->entries = segments;
+	*image_info = img_info;
+
+	return 0;
+
+error_alloc_segment:
+	for (--i, --mhi_buf; i >= 0; i--, mhi_buf--)
+		mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
+				  mhi_buf->dma_addr);
+
+error_alloc_mhi_buf:
+	kfree(img_info);
+
+	return -ENOMEM;
+}
+
+static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
+			      const struct firmware *firmware,
+			      struct image_info *img_info)
+{
+	size_t remainder = firmware->size;
+	size_t to_cpy;
+	const u8 *buf = firmware->data;
+	int i = 0;
+	struct mhi_buf *mhi_buf = img_info->mhi_buf;
+	struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
+
+	while (remainder) {
+		to_cpy = min(remainder, mhi_buf->len);
+		memcpy(mhi_buf->buf, buf, to_cpy);
+		bhi_vec->dma_addr = mhi_buf->dma_addr;
+		bhi_vec->size = to_cpy;
+
+		buf += to_cpy;
+		remainder -= to_cpy;
+		i++;
+		bhi_vec++;
+		mhi_buf++;
+	}
+}
+
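+/*
+ * Firmware download flow, as implemented below: wait for the device to
+ * enter PBL, load the SBL image over BHI, then, for fbc-capable devices,
+ * copy the full image into a BHIE vector table and start the BHIE
+ * transfer once the device reports the BHIE execution environment.
+ */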
+void mhi_fw_load_worker(struct work_struct *work)
+{
+	int ret;
+	struct mhi_controller *mhi_cntrl;
+	const char *fw_name;
+	const struct firmware *firmware;
+	struct image_info *image_info;
+	void *buf;
+	size_t size;
+
+	mhi_cntrl = container_of(work, struct mhi_controller, fw_worker);
+
+	dev_info(mhi_cntrl->dev, "Waiting for device to enter PBL from EE:%s\n",
+		TO_MHI_EXEC_STR(mhi_cntrl->ee));
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 MHI_IN_PBL(mhi_cntrl->ee) ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		dev_err(mhi_cntrl->dev, "MHI is not in valid state\n");
+		return;
+	}
+
+	/* if the device is in pass-through, we do not need to load firmware */
+	if (mhi_cntrl->ee == MHI_EE_PTHRU)
+		return;
+
+	fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ?
+		mhi_cntrl->edl_image : mhi_cntrl->fw_image;
+
+	if (!fw_name || (mhi_cntrl->fbc_download && (!mhi_cntrl->sbl_size ||
+						     !mhi_cntrl->seg_len))) {
+		dev_err(mhi_cntrl->dev,
+			"No firmware image defined or !sbl_size || !seg_len\n");
+		return;
+	}
+
+	ret = request_firmware(&firmware, fw_name, mhi_cntrl->dev);
+	if (ret) {
+		dev_err(mhi_cntrl->dev, "Error loading firmware, ret:%d\n",
+			ret);
+		return;
+	}
+
+	size = (mhi_cntrl->fbc_download) ? mhi_cntrl->sbl_size : firmware->size;
+
+	/* the SBL size provided is a maximum, not necessarily the image size */
+	if (size > firmware->size)
+		size = firmware->size;
+
+	buf = kmemdup(firmware->data, size, GFP_KERNEL);
+	if (!buf) {
+		release_firmware(firmware);
+		return;
+	}
+
+	/* load sbl image */
+	ret = mhi_fw_load_sbl(mhi_cntrl, buf, size);
+	kfree(buf);
+
+	if (!mhi_cntrl->fbc_download || ret || mhi_cntrl->ee == MHI_EE_EDL)
+		release_firmware(firmware);
+
+	/* error or in edl, we're done */
+	if (ret || mhi_cntrl->ee == MHI_EE_EDL)
+		return;
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	mhi_cntrl->dev_state = MHI_STATE_RESET;
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	/*
+	 * if we're doing fbc, populate the vector tables while the
+	 * device is transitioning into the MHI READY state
+	 */
+	if (mhi_cntrl->fbc_download) {
+		ret = mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->fbc_image,
+					   firmware->size);
+		if (ret)
+			goto error_alloc_fw_table;
+
+		/* load the firmware into BHIE vec table */
+		mhi_firmware_copy(mhi_cntrl, firmware, mhi_cntrl->fbc_image);
+	}
+
+	/* transitioning into MHI RESET->READY state */
+	ret = mhi_ready_state_transition(mhi_cntrl);
+
+	if (!mhi_cntrl->fbc_download)
+		return;
+
+	if (ret)
+		goto error_read;
+
+	/* wait for BHIE event */
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->ee == MHI_EE_BHIE ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		dev_err(mhi_cntrl->dev, "MHI did not enter BHIE\n");
+		goto error_read;
+	}
+
+	/* start full firmware image download */
+	image_info = mhi_cntrl->fbc_image;
+	ret = mhi_fw_load_amss(mhi_cntrl,
+			       /* last entry is vec table */
+			       &image_info->mhi_buf[image_info->entries - 1]);
+
+	release_firmware(firmware);
+
+	return;
+
+error_read:
+	mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+	mhi_cntrl->fbc_image = NULL;
+
+error_alloc_fw_table:
+	release_firmware(firmware);
+}
diff --git a/drivers/bus/mhi/core/mhi_init.c b/drivers/bus/mhi/core/mhi_init.c
index b8c30f8..7b3d12a 100644
--- a/drivers/bus/mhi/core/mhi_init.c
+++ b/drivers/bus/mhi/core/mhi_init.c
@@ -17,6 +17,537 @@
 #include <linux/mhi.h>
 #include "mhi_internal.h"
 
+const char * const mhi_ee_str[MHI_EE_MAX] = {
+	[MHI_EE_PBL] = "PBL",
+	[MHI_EE_SBL] = "SBL",
+	[MHI_EE_AMSS] = "AMSS",
+	[MHI_EE_BHIE] = "BHIE",
+	[MHI_EE_RDDM] = "RDDM",
+	[MHI_EE_PTHRU] = "PASS THRU",
+	[MHI_EE_EDL] = "EDL",
+	[MHI_EE_DISABLE_TRANSITION] = "DISABLE",
+};
+
+const char * const mhi_state_tran_str[MHI_ST_TRANSITION_MAX] = {
+	[MHI_ST_TRANSITION_PBL] = "PBL",
+	[MHI_ST_TRANSITION_READY] = "READY",
+	[MHI_ST_TRANSITION_SBL] = "SBL",
+	[MHI_ST_TRANSITION_AMSS] = "AMSS",
+	[MHI_ST_TRANSITION_BHIE] = "BHIE",
+};
+
+const char * const mhi_state_str[MHI_STATE_MAX] = {
+	[MHI_STATE_RESET] = "RESET",
+	[MHI_STATE_READY] = "READY",
+	[MHI_STATE_M0] = "M0",
+	[MHI_STATE_M1] = "M1",
+	[MHI_STATE_M2] = "M2",
+	[MHI_STATE_M3] = "M3",
+	[MHI_STATE_BHI] = "BHI",
+	[MHI_STATE_SYS_ERR] = "SYS_ERR",
+};
+
+static const char * const mhi_pm_state_str[] = {
+	[MHI_PM_BIT_DISABLE] = "DISABLE",
+	[MHI_PM_BIT_POR] = "POR",
+	[MHI_PM_BIT_M0] = "M0",
+	[MHI_PM_BIT_M2] = "M2",
+	[MHI_PM_BIT_M3_ENTER] = "M?->M3",
+	[MHI_PM_BIT_M3] = "M3",
+	[MHI_PM_BIT_M3_EXIT] = "M3->M0",
+	[MHI_PM_BIT_FW_DL_ERR] = "FW DL Error",
+	[MHI_PM_BIT_SYS_ERR_DETECT] = "SYS_ERR Detect",
+	[MHI_PM_BIT_SYS_ERR_PROCESS] = "SYS_ERR Process",
+	[MHI_PM_BIT_SHUTDOWN_PROCESS] = "SHUTDOWN Process",
+	[MHI_PM_BIT_LD_ERR_FATAL_DETECT] = "LD or Error Fatal Detect",
+};
+
+const char *to_mhi_pm_state_str(enum MHI_PM_STATE state)
+{
+	int index = find_last_bit((unsigned long *)&state, 32);
+
+	if (index >= ARRAY_SIZE(mhi_pm_state_str))
+		return "Invalid State";
+
+	return mhi_pm_state_str[index];
+}
+
+/* MHI protocol requires the transfer ring to be aligned to the ring length */
+static int mhi_alloc_aligned_ring(struct mhi_controller *mhi_cntrl,
+				  struct mhi_ring *ring,
+				  u64 len)
+{
+	ring->alloc_size = len + (len - 1);
+	ring->pre_aligned = mhi_alloc_coherent(mhi_cntrl, ring->alloc_size,
+					       &ring->dma_handle, GFP_KERNEL);
+	if (!ring->pre_aligned)
+		return -ENOMEM;
+
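+	/*
+	 * Worked example (illustrative): len = 0x1000, dma_handle = 0x1234
+	 * gives iommu_base = (0x1234 + 0xfff) & ~0xfff = 0x2000; base points
+	 * at the matching offset inside the pre-aligned allocation.
+	 */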
+	ring->iommu_base = (ring->dma_handle + (len - 1)) & ~(len - 1);
+	ring->base = ring->pre_aligned + (ring->iommu_base - ring->dma_handle);
+
+	return 0;
+}
+
+void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl)
+{
+	int i;
+	struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		free_irq(mhi_cntrl->irq[mhi_event->msi], mhi_event);
+	}
+
+	free_irq(mhi_cntrl->irq[0], mhi_cntrl);
+}
+
+int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl)
+{
+	int i;
+	int ret;
+	struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+
+	/* for BHI INTVEC msi */
+	ret = request_threaded_irq(mhi_cntrl->irq[0], mhi_intvec_handlr,
+				   mhi_intvec_threaded_handlr, IRQF_ONESHOT,
+				   "mhi", mhi_cntrl);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		ret = request_irq(mhi_cntrl->irq[mhi_event->msi],
+				  mhi_msi_handlr, IRQF_SHARED, "mhi",
+				  mhi_event);
+		if (ret) {
+			dev_err(mhi_cntrl->dev,
+				"Error requesting irq:%d for ev:%d\n",
+				mhi_cntrl->irq[mhi_event->msi], i);
+			goto error_request;
+		}
+	}
+
+	return 0;
+
+error_request:
+	for (--i, --mhi_event; i >= 0; i--, mhi_event--) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		free_irq(mhi_cntrl->irq[mhi_event->msi], mhi_event);
+	}
+	free_irq(mhi_cntrl->irq[0], mhi_cntrl);
+
+	return ret;
+}
+
+void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl)
+{
+	int i;
+	struct mhi_ctxt *mhi_ctxt = mhi_cntrl->mhi_ctxt;
+	struct mhi_cmd *mhi_cmd;
+	struct mhi_event *mhi_event;
+	struct mhi_ring *ring;
+
+	mhi_cmd = mhi_cntrl->mhi_cmd;
+	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++) {
+		ring = &mhi_cmd->ring;
+		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+				  ring->pre_aligned, ring->dma_handle);
+		ring->base = NULL;
+		ring->iommu_base = 0;
+	}
+
+	mhi_free_coherent(mhi_cntrl,
+			  sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
+			  mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr);
+
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		ring = &mhi_event->ring;
+		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+				  ring->pre_aligned, ring->dma_handle);
+		ring->base = NULL;
+		ring->iommu_base = 0;
+	}
+
+	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) *
+			  mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt,
+			  mhi_ctxt->er_ctxt_addr);
+
+	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) *
+			  mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt,
+			  mhi_ctxt->chan_ctxt_addr);
+
+	kfree(mhi_ctxt);
+	mhi_cntrl->mhi_ctxt = NULL;
+}
+
+void mhi_init_debugfs(struct mhi_controller *mhi_cntrl)
+{
+	struct dentry *dentry;
+	char node[32];
+
+	if (!mhi_cntrl->parent)
+		return;
+
+	snprintf(node, sizeof(node), "%04x_%02u:%02u.%02u",
+		 mhi_cntrl->dev_id, mhi_cntrl->domain, mhi_cntrl->bus,
+		 mhi_cntrl->slot);
+
+	dentry = debugfs_create_dir(node, mhi_cntrl->parent);
+	if (IS_ERR(dentry))
+		return;
+
+	mhi_cntrl->dentry = dentry;
+}
+
+void mhi_deinit_debugfs(struct mhi_controller *mhi_cntrl)
+{
+	debugfs_remove_recursive(mhi_cntrl->dentry);
+	mhi_cntrl->dentry = NULL;
+}
+
+int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_ctxt *mhi_ctxt;
+	struct mhi_chan_ctxt *chan_ctxt;
+	struct mhi_event_ctxt *er_ctxt;
+	struct mhi_cmd_ctxt *cmd_ctxt;
+	struct mhi_chan *mhi_chan;
+	struct mhi_event *mhi_event;
+	struct mhi_cmd *mhi_cmd;
+	int ret = -ENOMEM, i;
+
+	atomic_set(&mhi_cntrl->dev_wake, 0);
+	atomic_set(&mhi_cntrl->alloc_size, 0);
+
+	mhi_ctxt = kzalloc(sizeof(*mhi_ctxt), GFP_KERNEL);
+	if (!mhi_ctxt)
+		return -ENOMEM;
+
+	/* setup channel ctxt */
+	mhi_ctxt->chan_ctxt = mhi_alloc_coherent(mhi_cntrl,
+			sizeof(*mhi_ctxt->chan_ctxt) * mhi_cntrl->max_chan,
+			&mhi_ctxt->chan_ctxt_addr, GFP_KERNEL);
+	if (!mhi_ctxt->chan_ctxt)
+		goto error_alloc_chan_ctxt;
+
+	mhi_chan = mhi_cntrl->mhi_chan;
+	chan_ctxt = mhi_ctxt->chan_ctxt;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, chan_ctxt++, mhi_chan++) {
+		/* if it's an offload channel, skip this step */
+		if (mhi_chan->offload_ch)
+			continue;
+
+		chan_ctxt->chstate = MHI_CH_STATE_DISABLED;
+		chan_ctxt->brstmode = mhi_chan->db_cfg.brstmode;
+		chan_ctxt->pollcfg = mhi_chan->db_cfg.pollcfg;
+		chan_ctxt->chtype = mhi_chan->dir;
+		chan_ctxt->erindex = mhi_chan->er_index;
+
+		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+		mhi_chan->tre_ring.db_addr = &chan_ctxt->wp;
+	}
+
+	/* setup event context */
+	mhi_ctxt->er_ctxt = mhi_alloc_coherent(mhi_cntrl,
+			sizeof(*mhi_ctxt->er_ctxt) * mhi_cntrl->total_ev_rings,
+			&mhi_ctxt->er_ctxt_addr, GFP_KERNEL);
+	if (!mhi_ctxt->er_ctxt)
+		goto error_alloc_er_ctxt;
+
+	er_ctxt = mhi_ctxt->er_ctxt;
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
+		     mhi_event++) {
+		struct mhi_ring *ring = &mhi_event->ring;
+
+		/* it's a satellite event ring; we do not touch it */
+		if (mhi_event->offload_ev)
+			continue;
+
+		er_ctxt->intmodc = 0;
+		er_ctxt->intmodt = mhi_event->intmod;
+		er_ctxt->ertype = MHI_ER_TYPE_VALID;
+		er_ctxt->msivec = mhi_event->msi;
+		mhi_event->db_cfg.db_mode = true;
+
+		ring->el_size = sizeof(struct mhi_tre);
+		ring->len = ring->el_size * ring->elements;
+		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
+		if (ret)
+			goto error_alloc_er;
+
+		ring->rp = ring->wp = ring->base;
+		er_ctxt->rbase = ring->iommu_base;
+		er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase;
+		er_ctxt->rlen = ring->len;
+		ring->ctxt_wp = &er_ctxt->wp;
+	}
+
+	/* setup cmd context */
+	mhi_ctxt->cmd_ctxt = mhi_alloc_coherent(mhi_cntrl,
+				sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
+				&mhi_ctxt->cmd_ctxt_addr, GFP_KERNEL);
+	if (!mhi_ctxt->cmd_ctxt)
+		goto error_alloc_er;
+
+	mhi_cmd = mhi_cntrl->mhi_cmd;
+	cmd_ctxt = mhi_ctxt->cmd_ctxt;
+	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
+		struct mhi_ring *ring = &mhi_cmd->ring;
+
+		ring->el_size = sizeof(struct mhi_tre);
+		ring->elements = CMD_EL_PER_RING;
+		ring->len = ring->el_size * ring->elements;
+		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
+		if (ret)
+			goto error_alloc_cmd;
+
+		ring->rp = ring->wp = ring->base;
+		cmd_ctxt->rbase = ring->iommu_base;
+		cmd_ctxt->rp = cmd_ctxt->wp = cmd_ctxt->rbase;
+		cmd_ctxt->rlen = ring->len;
+		ring->ctxt_wp = &cmd_ctxt->wp;
+	}
+
+	mhi_cntrl->mhi_ctxt = mhi_ctxt;
+
+	return 0;
+
+error_alloc_cmd:
+	for (--i, --mhi_cmd; i >= 0; i--, mhi_cmd--) {
+		struct mhi_ring *ring = &mhi_cmd->ring;
+
+		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+				  ring->pre_aligned, ring->dma_handle);
+	}
+	mhi_free_coherent(mhi_cntrl,
+			  sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
+			  mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr);
+	i = mhi_cntrl->total_ev_rings;
+	mhi_event = mhi_cntrl->mhi_event + i;
+
+error_alloc_er:
+	for (--i, --mhi_event; i >= 0; i--, mhi_event--) {
+		struct mhi_ring *ring = &mhi_event->ring;
+
+		if (mhi_event->offload_ev)
+			continue;
+
+		mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+				  ring->pre_aligned, ring->dma_handle);
+	}
+	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) *
+			  mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt,
+			  mhi_ctxt->er_ctxt_addr);
+
+error_alloc_er_ctxt:
+	mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) *
+			  mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt,
+			  mhi_ctxt->chan_ctxt_addr);
+
+error_alloc_chan_ctxt:
+	kfree(mhi_ctxt);
+
+	return ret;
+}
+
+int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
+{
+	u32 val;
+	int i, ret;
+	struct mhi_chan *mhi_chan;
+	struct mhi_event *mhi_event;
+	void __iomem *base = mhi_cntrl->regs;
+	struct {
+		u32 offset;
+		u32 mask;
+		u32 shift;
+		u32 val;
+	} reg_info[] = {
+		{
+			CCABAP_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
+		},
+		{
+			CCABAP_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
+		},
+		{
+			ECABAP_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
+		},
+		{
+			ECABAP_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
+		},
+		{
+			CRCBAP_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
+		},
+		{
+			CRCBAP_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
+		},
+		{
+			MHICFG, MHICFG_NER_MASK, MHICFG_NER_SHIFT,
+			mhi_cntrl->total_ev_rings,
+		},
+		{
+			MHICFG, MHICFG_NHWER_MASK, MHICFG_NHWER_SHIFT,
+			mhi_cntrl->hw_ev_rings,
+		},
+		{
+			MHICTRLBASE_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->iova_start),
+		},
+		{
+			MHICTRLBASE_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->iova_start),
+		},
+		{
+			MHIDATABASE_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->iova_start),
+		},
+		{
+			MHIDATABASE_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->iova_start),
+		},
+		{
+			MHICTRLLIMIT_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->iova_stop),
+		},
+		{
+			MHICTRLLIMIT_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->iova_stop),
+		},
+		{
+			MHIDATALIMIT_HIGHER, U32_MAX, 0,
+			upper_32_bits(mhi_cntrl->iova_stop),
+		},
+		{
+			MHIDATALIMIT_LOWER, U32_MAX, 0,
+			lower_32_bits(mhi_cntrl->iova_stop),
+		},
+		{ 0, 0, 0 }
+	};
+
+	dev_info(mhi_cntrl->dev, "Initializing MHI registers\n");
+
+	/* set up DB register for all the chan rings */
+	ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK,
+				 CHDBOFF_CHDBOFF_SHIFT, &val);
+	if (ret)
+		return -EIO;
+
+	/* setup wake db */
+	mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
+	mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 4, 0);
+	mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 0, 0);
+	mhi_cntrl->wake_set = false;
+
+	/* setup channel db addresses */
+	mhi_chan = mhi_cntrl->mhi_chan;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, val += 8, mhi_chan++)
+		mhi_chan->tre_ring.db_addr = base + val;
+
+	/* setup event ring db addresses */
+	ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK,
+				 ERDBOFF_ERDBOFF_SHIFT, &val);
+	if (ret)
+		return -EIO;
+
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, val += 8, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+
+		mhi_event->ring.db_addr = base + val;
+	}
+
+	/* set up DB register for primary CMD rings */
+	mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING].ring.db_addr = base + CRDB_LOWER;
+
+	for (i = 0; reg_info[i].offset; i++)
+		mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
+				    reg_info[i].mask, reg_info[i].shift,
+				    reg_info[i].val);
+
+	return 0;
+}
+
+void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+			  struct mhi_chan *mhi_chan)
+{
+	struct mhi_ring *buf_ring;
+	struct mhi_ring *tre_ring;
+	struct mhi_chan_ctxt *chan_ctxt;
+
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+	chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
+
+	mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
+			  tre_ring->pre_aligned, tre_ring->dma_handle);
+	kfree(buf_ring->base);
+
+	buf_ring->base = tre_ring->base = NULL;
+	chan_ctxt->rbase = 0;
+}
+
+int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+		       struct mhi_chan *mhi_chan)
+{
+	struct mhi_ring *buf_ring;
+	struct mhi_ring *tre_ring;
+	struct mhi_chan_ctxt *chan_ctxt;
+	int ret;
+
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+	tre_ring->el_size = sizeof(struct mhi_tre);
+	tre_ring->len = tre_ring->el_size * tre_ring->elements;
+	chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
+	ret = mhi_alloc_aligned_ring(mhi_cntrl, tre_ring, tre_ring->len);
+	if (ret)
+		return -ENOMEM;
+
+	buf_ring->el_size = sizeof(struct mhi_buf_info);
+	buf_ring->len = buf_ring->el_size * buf_ring->elements;
+	buf_ring->base = kzalloc(buf_ring->len, GFP_KERNEL);
+
+	if (!buf_ring->base) {
+		mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
+				  tre_ring->pre_aligned, tre_ring->dma_handle);
+		return -ENOMEM;
+	}
+
+	chan_ctxt->chstate = MHI_CH_STATE_ENABLED;
+	chan_ctxt->rbase = tre_ring->iommu_base;
+	chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase;
+	chan_ctxt->rlen = tre_ring->len;
+	tre_ring->ctxt_wp = &chan_ctxt->wp;
+
+	tre_ring->rp = tre_ring->wp = tre_ring->base;
+	buf_ring->rp = buf_ring->wp = buf_ring->base;
+	mhi_chan->db_cfg.db_mode = 1;
+
+	/* make the update visible to all cores */
+	smp_wmb();
+
+	return 0;
+}
+
 static int of_parse_ev_cfg(struct mhi_controller *mhi_cntrl,
 			   struct device_node *of_node)
 {
@@ -331,6 +862,9 @@ int of_register_mhi_controller(struct mhi_controller *mhi_cntrl)
 	rwlock_init(&mhi_cntrl->pm_lock);
 	spin_lock_init(&mhi_cntrl->transition_lock);
 	spin_lock_init(&mhi_cntrl->wlock);
+	INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker);
+	INIT_WORK(&mhi_cntrl->fw_worker, mhi_fw_load_worker);
+	INIT_WORK(&mhi_cntrl->syserr_worker, mhi_pm_sys_err_worker);
 	init_waitqueue_head(&mhi_cntrl->state_event);
 
 	mhi_cmd = mhi_cntrl->mhi_cmd;
@@ -436,6 +970,61 @@ struct mhi_controller *mhi_alloc_controller(size_t size)
 }
 EXPORT_SYMBOL(mhi_alloc_controller);
 
+int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+
+	ret = mhi_init_dev_ctxt(mhi_cntrl);
+	if (ret)
+		goto error_dev_ctxt;
+
+	ret = mhi_init_irq_setup(mhi_cntrl);
+	if (ret)
+		goto error_setup_irq;
+
+	/*
+	 * allocate the rddm table if specified; this table is for debug
+	 * purposes, so we'll ignore errors
+	 */
+	if (mhi_cntrl->rddm_size)
+		mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->rddm_image,
+				     mhi_cntrl->rddm_size);
+
+	mhi_cntrl->pre_init = true;
+
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	return 0;
+
+error_setup_irq:
+	mhi_deinit_dev_ctxt(mhi_cntrl);
+
+error_dev_ctxt:
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	return ret;
+}
+EXPORT_SYMBOL(mhi_prepare_for_power_up);
+
+void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl)
+{
+	if (mhi_cntrl->fbc_image) {
+		mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+		mhi_cntrl->fbc_image = NULL;
+	}
+
+	if (mhi_cntrl->rddm_image) {
+		mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->rddm_image);
+		mhi_cntrl->rddm_image = NULL;
+	}
+
+	mhi_deinit_free_irq(mhi_cntrl);
+	mhi_deinit_dev_ctxt(mhi_cntrl);
+	mhi_cntrl->pre_init = false;
+}
+
 /* match dev to drv */
 static int mhi_match(struct device *dev, struct device_driver *drv)
 {
@@ -471,11 +1060,115 @@ struct bus_type mhi_bus_type = {
 
 static int mhi_driver_probe(struct device *dev)
 {
-	return -EINVAL;
+	struct mhi_device *mhi_dev = to_mhi_device(dev);
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct device_driver *drv = dev->driver;
+	struct mhi_driver *mhi_drv = to_mhi_driver(drv);
+	struct mhi_event *mhi_event;
+	struct mhi_chan *ul_chan = mhi_dev->ul_chan;
+	struct mhi_chan *dl_chan = mhi_dev->dl_chan;
+
+	if (ul_chan) {
+		/* lpm notification requires status_cb */
+		if (ul_chan->lpm_notify && !mhi_drv->status_cb)
+			return -EINVAL;
+
+		if (!ul_chan->offload_ch && !mhi_drv->ul_xfer_cb)
+			return -EINVAL;
+
+		ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
+		mhi_dev->status_cb = mhi_drv->status_cb;
+	}
+
+	if (dl_chan) {
+		if (dl_chan->lpm_notify && !mhi_drv->status_cb)
+			return -EINVAL;
+
+		if (!dl_chan->offload_ch && !mhi_drv->dl_xfer_cb)
+			return -EINVAL;
+
+		mhi_event = &mhi_cntrl->mhi_event[dl_chan->er_index];
+
+		/*
+		 * if this channel's event ring is managed by the client, then
+		 * status_cb must be defined so we can send the async
+		 * cb whenever there is pending data
+		 */
+		if (mhi_event->cl_manage && !mhi_drv->status_cb)
+			return -EINVAL;
+
+		dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
+
+		/* ul & dl use the same status cb */
+		mhi_dev->status_cb = mhi_drv->status_cb;
+	}
+
+	return mhi_drv->probe(mhi_dev, mhi_dev->id);
 }
 
 static int mhi_driver_remove(struct device *dev)
 {
+	struct mhi_device *mhi_dev = to_mhi_device(dev);
+	struct mhi_driver *mhi_drv = to_mhi_driver(dev->driver);
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan;
+	enum MHI_CH_STATE ch_state[] = {
+		MHI_CH_STATE_DISABLED,
+		MHI_CH_STATE_DISABLED
+	};
+	int dir;
+
+	/* control device has no work to do */
+	if (mhi_dev->dev_type == MHI_CONTROLLER_TYPE)
+		return 0;
+
+	/* reset both channels */
+	for (dir = 0; dir < 2; dir++) {
+		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		/* wake all threads waiting for completion */
+		write_lock_irq(&mhi_chan->lock);
+		mhi_chan->ccs = MHI_EV_CC_INVALID;
+		complete_all(&mhi_chan->completion);
+		write_unlock_irq(&mhi_chan->lock);
+
+		/* move the channel state to disabled, no more processing */
+		mutex_lock(&mhi_chan->mutex);
+		write_lock_irq(&mhi_chan->lock);
+		ch_state[dir] = mhi_chan->ch_state;
+		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+		write_unlock_irq(&mhi_chan->lock);
+	}
+
+	/* destroy the device */
+	mhi_drv->remove(mhi_dev);
+
+	/* de-init the channel if it was enabled */
+	for (dir = 0; dir < 2; dir++) {
+		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		if (ch_state[dir] == MHI_CH_STATE_ENABLED &&
+		    !mhi_chan->offload_ch)
+			mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+
+		/* remove associated device */
+		mhi_chan->mhi_dev = NULL;
+
+		mutex_unlock(&mhi_chan->mutex);
+	}
+
+	/* relinquish any pending votes */
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	while (atomic_read(&mhi_dev->dev_wake))
+		mhi_device_put(mhi_dev);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
 	return 0;
 }
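
The probe path above enforces a contract on clients: a channel marked
lpm_notify, or served by a client-managed event ring, must supply a
status_cb, and any non-offload channel must supply the matching xfer_cb.
A minimal client sketch against that contract (illustrative only: the
xfer_cb signature and the mhi_device_id/mhi_result types are assumptions
not visible in this hunk, and registration is left to whatever
driver-model helper the final API exposes):

    static int my_probe(struct mhi_device *mhi_dev,
                        const struct mhi_device_id *id)
    {
            return 0;
    }

    static void my_remove(struct mhi_device *mhi_dev)
    {
    }

    /* required because we claim lpm_notify / client-managed channels */
    static void my_status_cb(struct mhi_device *mhi_dev, enum MHI_CB reason)
    {
    }

    /* assumed signature: completion callback per transfer direction */
    static void my_xfer_cb(struct mhi_device *mhi_dev,
                           struct mhi_result *result)
    {
    }

    static struct mhi_driver my_mhi_driver = {
            .probe = my_probe,
            .remove = my_remove,
            .status_cb = my_status_cb,
            .ul_xfer_cb = my_xfer_cb,
            .dl_xfer_cb = my_xfer_cb,
    };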
 
diff --git a/drivers/bus/mhi/core/mhi_internal.h b/drivers/bus/mhi/core/mhi_internal.h
index 90c40de..0091245 100644
--- a/drivers/bus/mhi/core/mhi_internal.h
+++ b/drivers/bus/mhi/core/mhi_internal.h
@@ -9,6 +9,315 @@
 
 extern struct bus_type mhi_bus_type;
 
+/* MHI mmio register mapping */
+#define PCI_INVALID_READ(val) (val == U32_MAX)
+
+#define MHIREGLEN (0x0)
+#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
+#define MHIREGLEN_MHIREGLEN_SHIFT (0)
+
+#define MHIVER (0x8)
+#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
+#define MHIVER_MHIVER_SHIFT (0)
+
+#define MHICFG (0x10)
+#define MHICFG_NHWER_MASK (0xFF000000)
+#define MHICFG_NHWER_SHIFT (24)
+#define MHICFG_NER_MASK (0xFF0000)
+#define MHICFG_NER_SHIFT (16)
+#define MHICFG_NHWCH_MASK (0xFF00)
+#define MHICFG_NHWCH_SHIFT (8)
+#define MHICFG_NCH_MASK (0xFF)
+#define MHICFG_NCH_SHIFT (0)
+
+#define CHDBOFF (0x18)
+#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
+#define CHDBOFF_CHDBOFF_SHIFT (0)
+
+#define ERDBOFF (0x20)
+#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
+#define ERDBOFF_ERDBOFF_SHIFT (0)
+
+#define BHIOFF (0x28)
+#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
+#define BHIOFF_BHIOFF_SHIFT (0)
+
+#define BHIEOFF (0x2C)
+#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
+#define BHIEOFF_BHIEOFF_SHIFT (0)
+
+#define DEBUGOFF (0x30)
+#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
+#define DEBUGOFF_DEBUGOFF_SHIFT (0)
+
+#define MHICTRL (0x38)
+#define MHICTRL_MHISTATE_MASK (0x0000FF00)
+#define MHICTRL_MHISTATE_SHIFT (8)
+#define MHICTRL_RESET_MASK (0x2)
+#define MHICTRL_RESET_SHIFT (1)
+
+#define MHISTATUS (0x48)
+#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
+#define MHISTATUS_MHISTATE_SHIFT (8)
+#define MHISTATUS_SYSERR_MASK (0x4)
+#define MHISTATUS_SYSERR_SHIFT (2)
+#define MHISTATUS_READY_MASK (0x1)
+#define MHISTATUS_READY_SHIFT (0)
+
+#define CCABAP_LOWER (0x58)
+#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
+#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
+
+#define CCABAP_HIGHER (0x5C)
+#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
+#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
+
+#define ECABAP_LOWER (0x60)
+#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
+#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
+
+#define ECABAP_HIGHER (0x64)
+#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
+#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
+
+#define CRCBAP_LOWER (0x68)
+#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
+#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
+
+#define CRCBAP_HIGHER (0x6C)
+#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
+#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
+
+#define CRDB_LOWER (0x70)
+#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
+#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
+
+#define CRDB_HIGHER (0x74)
+#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
+#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
+
+#define MHICTRLBASE_LOWER (0x80)
+#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
+#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
+
+#define MHICTRLBASE_HIGHER (0x84)
+#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
+#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
+
+#define MHICTRLLIMIT_LOWER (0x88)
+#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
+#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
+
+#define MHICTRLLIMIT_HIGHER (0x8C)
+#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
+#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
+
+#define MHIDATABASE_LOWER (0x98)
+#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
+#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
+
+#define MHIDATABASE_HIGHER (0x9C)
+#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
+#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
+
+#define MHIDATALIMIT_LOWER (0xA0)
+#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
+#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
+
+#define MHIDATALIMIT_HIGHER (0xA4)
+#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
+#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
+
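
Each register above pairs a *_MASK with a *_SHIFT so fields can be
extracted with the mhi_read_reg_field() helper declared further down in
this header. A representative read, e.g. the event-ring count from MHICFG
(sketch only):

    u32 nr_er;
    int ret;

    ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICFG,
                             MHICFG_NER_MASK, MHICFG_NER_SHIFT, &nr_er);
    if (ret)
            return ret; /* link may be down, see PCI_INVALID_READ() */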
+/* MHI BHI offsets */
+#define BHI_BHIVERSION_MINOR (0x00)
+#define BHI_BHIVERSION_MAJOR (0x04)
+#define BHI_IMGADDR_LOW (0x08)
+#define BHI_IMGADDR_HIGH (0x0C)
+#define BHI_IMGSIZE (0x10)
+#define BHI_RSVD1 (0x14)
+#define BHI_IMGTXDB (0x18)
+#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHI_TXDB_SEQNUM_SHFT (0)
+#define BHI_RSVD2 (0x1C)
+#define BHI_INTVEC (0x20)
+#define BHI_RSVD3 (0x24)
+#define BHI_EXECENV (0x28)
+#define BHI_STATUS (0x2C)
+#define BHI_ERRCODE (0x30)
+#define BHI_ERRDBG1 (0x34)
+#define BHI_ERRDBG2 (0x38)
+#define BHI_ERRDBG3 (0x3C)
+#define BHI_SERIALNU (0x40)
+#define BHI_SBLANTIROLLVER (0x44)
+#define BHI_NUMSEG (0x48)
+#define BHI_MSMHWID(n) (0x4C + (0x4 * n))
+#define BHI_OEMPKHASH(n) (0x64 + (0x4 * n))
+#define BHI_RSVD5 (0xC4)
+#define BHI_STATUS_MASK (0xC0000000)
+#define BHI_STATUS_SHIFT (30)
+#define BHI_STATUS_ERROR (3)
+#define BHI_STATUS_SUCCESS (2)
+#define BHI_STATUS_RESET (0)
+
+/* MHI BHIE offsets */
+#define BHIE_MSMSOCID_OFFS (0x0000)
+#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
+#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
+#define BHIE_TXVECSIZE_OFFS (0x0034)
+#define BHIE_TXVECDB_OFFS (0x003C)
+#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_TXVECDB_SEQNUM_SHFT (0)
+#define BHIE_TXVECSTATUS_OFFS (0x0044)
+#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
+#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
+#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
+#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
+#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
+#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
+#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
+#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
+#define BHIE_RXVECSIZE_OFFS (0x0068)
+#define BHIE_RXVECDB_OFFS (0x0070)
+#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_RXVECDB_SEQNUM_SHFT (0)
+#define BHIE_RXVECSTATUS_OFFS (0x0078)
+#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
+#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
+#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
+#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
+#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
+#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
+
+struct mhi_event_ctxt {
+	u32 reserved : 8;
+	u32 intmodc : 8;
+	u32 intmodt : 16;
+	u32 ertype;
+	u32 msivec;
+
+	u64 rbase __packed __aligned(4);
+	u64 rlen __packed __aligned(4);
+	u64 rp __packed __aligned(4);
+	u64 wp __packed __aligned(4);
+};
+
+struct mhi_chan_ctxt {
+	u32 chstate : 8;
+	u32 brstmode : 2;
+	u32 pollcfg : 6;
+	u32 reserved : 16;
+	u32 chtype;
+	u32 erindex;
+
+	u64 rbase __packed __aligned(4);
+	u64 rlen __packed __aligned(4);
+	u64 rp __packed __aligned(4);
+	u64 wp __packed __aligned(4);
+};
+
+struct mhi_cmd_ctxt {
+	u32 reserved0;
+	u32 reserved1;
+	u32 reserved2;
+
+	u64 rbase __packed __aligned(4);
+	u64 rlen __packed __aligned(4);
+	u64 rp __packed __aligned(4);
+	u64 wp __packed __aligned(4);
+};
+
+struct mhi_tre {
+	u64 ptr;
+	u32 dword[2];
+};
+
+struct bhi_vec_entry {
+	u64 dma_addr;
+	u64 size;
+};
+
+enum mhi_cmd_type {
+	MHI_CMD_TYPE_NOP = 1,
+	MHI_CMD_TYPE_RESET = 16,
+	MHI_CMD_TYPE_STOP = 17,
+	MHI_CMD_TYPE_START = 18,
+	MHI_CMD_TYPE_TSYNC = 24,
+};
+
+/* no operation command */
+#define MHI_TRE_CMD_NOOP_PTR (0)
+#define MHI_TRE_CMD_NOOP_DWORD0 (0)
+#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_TYPE_NOP << 16)
+
+/* channel reset command */
+#define MHI_TRE_CMD_RESET_PTR (0)
+#define MHI_TRE_CMD_RESET_DWORD0 (0)
+#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
+					(MHI_CMD_TYPE_RESET << 16))
+
+/* channel stop command */
+#define MHI_TRE_CMD_STOP_PTR (0)
+#define MHI_TRE_CMD_STOP_DWORD0 (0)
+#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | (MHI_CMD_TYPE_STOP << 16))
+
+/* channel start command */
+#define MHI_TRE_CMD_START_PTR (0)
+#define MHI_TRE_CMD_START_DWORD0 (0)
+#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
+					(MHI_CMD_TYPE_START << 16))
+
+/* time sync cfg command */
+#define MHI_TRE_CMD_TSYNC_CFG_PTR (0)
+#define MHI_TRE_CMD_TSYNC_CFG_DWORD0 (0)
+#define MHI_TRE_CMD_TSYNC_CFG_DWORD1(er) ((MHI_CMD_TYPE_TSYNC << 16) | \
+					  (er << 24))
+
+#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+
+/* event descriptor macros */
+#define MHI_TRE_EV_PTR(ptr) (ptr)
+#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
+#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
+#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
+#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
+#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
+#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
+
+/* transfer descriptor macros */
+#define MHI_TRE_DATA_PTR(ptr) (ptr)
+#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
+#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
+	| (ieot << 9) | (ieob << 8) | chain)
+
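These macros pack one ring element; mhi_gen_tre() presumably builds a
transfer descriptor along these lines (a sketch, with dma_addr and len
standing in for the caller's buffer info):

    struct mhi_tre *tre = tre_ring->wp;

    tre->ptr = MHI_TRE_DATA_PTR(dma_addr);
    tre->dword[0] = MHI_TRE_DATA_DWORD0(len);
    /* bei=0, ieot=1 (interrupt on end of transfer), ieob=0, no chain */
    tre->dword[1] = MHI_TRE_DATA_DWORD1(0, 1, 0, 0);
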
+enum MHI_CMD {
+	MHI_CMD_RESET_CHAN,
+	MHI_CMD_START_CHAN,
+	MHI_CMD_TIMSYNC_CFG,
+};
+
+enum MHI_PKT_TYPE {
+	MHI_PKT_TYPE_INVALID = 0x0,
+	MHI_PKT_TYPE_NOOP_CMD = 0x1,
+	MHI_PKT_TYPE_TRANSFER = 0x2,
+	MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
+	MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
+	MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
+	MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
+	MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
+	MHI_PKT_TYPE_TX_EVENT = 0x22,
+	MHI_PKT_TYPE_EE_EVENT = 0x40,
+	MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
+	MHI_PKT_TYPE_STALE_EVENT, /* internal event */
+};
+
 /* MHI transfer completion events */
 enum MHI_EV_CCS {
 	MHI_EV_CC_INVALID = 0x0,
@@ -52,6 +361,93 @@ enum MHI_EE {
 	MHI_EE_MAX,
 };
 
+extern const char * const mhi_ee_str[MHI_EE_MAX];
+#define TO_MHI_EXEC_STR(ee) (((ee) >= MHI_EE_MAX) ? \
+			     "INVALID_EE" : mhi_ee_str[ee])
+
+#define MHI_IN_PBL(ee) (ee == MHI_EE_PBL || ee == MHI_EE_PTHRU || \
+			ee == MHI_EE_EDL)
+
+enum MHI_ST_TRANSITION {
+	MHI_ST_TRANSITION_PBL,
+	MHI_ST_TRANSITION_READY,
+	MHI_ST_TRANSITION_SBL,
+	MHI_ST_TRANSITION_AMSS,
+	MHI_ST_TRANSITION_BHIE,
+	MHI_ST_TRANSITION_MAX,
+};
+
+extern const char * const mhi_state_tran_str[MHI_ST_TRANSITION_MAX];
+#define TO_MHI_STATE_TRANS_STR(state) (((state) >= MHI_ST_TRANSITION_MAX) ? \
+				"INVALID_STATE" : mhi_state_tran_str[state])
+
+enum MHI_STATE {
+	MHI_STATE_RESET = 0x0,
+	MHI_STATE_READY = 0x1,
+	MHI_STATE_M0 = 0x2,
+	MHI_STATE_M1 = 0x3,
+	MHI_STATE_M2 = 0x4,
+	MHI_STATE_M3 = 0x5,
+	MHI_STATE_BHI  = 0x7,
+	MHI_STATE_SYS_ERR  = 0xFF,
+	MHI_STATE_MAX,
+};
+
+extern const char * const mhi_state_str[MHI_STATE_MAX];
+#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
+				  !mhi_state_str[state]) ? \
+				"INVALID_STATE" : mhi_state_str[state])
+
+enum {
+	MHI_PM_BIT_DISABLE,
+	MHI_PM_BIT_POR,
+	MHI_PM_BIT_M0,
+	MHI_PM_BIT_M2,
+	MHI_PM_BIT_M3_ENTER,
+	MHI_PM_BIT_M3,
+	MHI_PM_BIT_M3_EXIT,
+	MHI_PM_BIT_FW_DL_ERR,
+	MHI_PM_BIT_SYS_ERR_DETECT,
+	MHI_PM_BIT_SYS_ERR_PROCESS,
+	MHI_PM_BIT_SHUTDOWN_PROCESS,
+	MHI_PM_BIT_LD_ERR_FATAL_DETECT,
+	MHI_PM_BIT_MAX
+};
+
+/* internal power states */
+enum MHI_PM_STATE {
+	MHI_PM_DISABLE = BIT(MHI_PM_BIT_DISABLE), /* MHI is not enabled */
+	MHI_PM_POR = BIT(MHI_PM_BIT_POR), /* reset state */
+	MHI_PM_M0 = BIT(MHI_PM_BIT_M0),
+	MHI_PM_M2 = BIT(MHI_PM_BIT_M2),
+	MHI_PM_M3_ENTER = BIT(MHI_PM_BIT_M3_ENTER),
+	MHI_PM_M3 = BIT(MHI_PM_BIT_M3),
+	MHI_PM_M3_EXIT = BIT(MHI_PM_BIT_M3_EXIT),
+	/* firmware download failure state */
+	MHI_PM_FW_DL_ERR = BIT(MHI_PM_BIT_FW_DL_ERR),
+	MHI_PM_SYS_ERR_DETECT = BIT(MHI_PM_BIT_SYS_ERR_DETECT),
+	MHI_PM_SYS_ERR_PROCESS = BIT(MHI_PM_BIT_SYS_ERR_PROCESS),
+	MHI_PM_SHUTDOWN_PROCESS = BIT(MHI_PM_BIT_SHUTDOWN_PROCESS),
+	/* link not accessible */
+	MHI_PM_LD_ERR_FATAL_DETECT = BIT(MHI_PM_BIT_LD_ERR_FATAL_DETECT),
+};
+
+#define MHI_REG_ACCESS_VALID(pm_state) ((pm_state & (MHI_PM_POR | MHI_PM_M0 | \
+		MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_M3_EXIT | \
+		MHI_PM_SYS_ERR_DETECT | MHI_PM_SYS_ERR_PROCESS | \
+		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_FW_DL_ERR)))
+#define MHI_PM_IN_ERROR_STATE(pm_state) (pm_state >= MHI_PM_FW_DL_ERR)
+#define MHI_PM_IN_FATAL_STATE(pm_state) (pm_state == MHI_PM_LD_ERR_FATAL_DETECT)
+#define MHI_DB_ACCESS_VALID(pm_state) (pm_state & MHI_PM_M0)
+#define MHI_WAKE_DB_CLEAR_VALID(pm_state) (pm_state & (MHI_PM_M0 | \
+						       MHI_PM_M2))
+#define MHI_WAKE_DB_SET_VALID(pm_state) (pm_state & MHI_PM_M2)
+#define MHI_WAKE_DB_FORCE_SET_VALID(pm_state) MHI_WAKE_DB_CLEAR_VALID(pm_state)
+#define MHI_EVENT_ACCESS_INVALID(pm_state) (pm_state == MHI_PM_DISABLE || \
+					    MHI_PM_IN_ERROR_STATE(pm_state))
+#define MHI_PM_IN_SUSPEND_STATE(pm_state) (pm_state & \
+					   (MHI_PM_M3_ENTER | MHI_PM_M3))
+
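These predicates are meant to be evaluated with pm_lock held; every
register or doorbell access in the core is gated on them. The canonical
pattern, as used when ringing event-ring doorbells in mhi_main.c:

    read_lock_bh(&mhi_cntrl->pm_lock);
    if (MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state))
            mhi_ring_er_db(mhi_event);
    read_unlock_bh(&mhi_cntrl->pm_lock);
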
 /* accepted buffer type for the channel */
 enum MHI_XFER_TYPE {
 	MHI_XFER_BUFFER,
@@ -63,6 +459,7 @@ enum MHI_XFER_TYPE {
 #define NR_OF_CMD_RINGS (1)
 #define CMD_EL_PER_RING (128)
 #define PRIMARY_CMD_RING (0)
+#define MHI_DEV_WAKE_DB (127)
 #define MHI_MAX_MTU (0xffff)
 
 enum MHI_ER_TYPE {
@@ -87,6 +484,25 @@ struct db_cfg {
 			   dma_addr_t db_val);
 };
 
+struct mhi_pm_transitions {
+	enum MHI_PM_STATE from_state;
+	u32 to_states;
+};
+
+struct state_transition {
+	struct list_head node;
+	enum MHI_ST_TRANSITION state;
+};
+
+struct mhi_ctxt {
+	struct mhi_event_ctxt *er_ctxt;
+	struct mhi_chan_ctxt *chan_ctxt;
+	struct mhi_cmd_ctxt *cmd_ctxt;
+	dma_addr_t er_ctxt_addr;
+	dma_addr_t chan_ctxt_addr;
+	dma_addr_t cmd_ctxt_addr;
+};
+
 struct mhi_ring {
 	dma_addr_t dma_handle;
 	dma_addr_t iommu_base;
@@ -179,10 +595,33 @@ struct mhi_chan {
 /* default MHI timeout */
 #define MHI_TIMEOUT_MS (1000)
 
+/* debug fs related functions */
+void mhi_deinit_debugfs(struct mhi_controller *mhi_cntrl);
+void mhi_init_debugfs(struct mhi_controller *mhi_cntrl);
+
 /* power management apis */
+enum MHI_PM_STATE __must_check mhi_tryset_pm_state(
+					struct mhi_controller *mhi_cntrl,
+					enum MHI_PM_STATE state);
+const char *to_mhi_pm_state_str(enum MHI_PM_STATE state);
+void mhi_reset_chan(struct mhi_controller *mhi_cntrl,
+		    struct mhi_chan *mhi_chan);
+enum MHI_EE mhi_get_exec_env(struct mhi_controller *mhi_cntrl);
+enum MHI_STATE mhi_get_m_state(struct mhi_controller *mhi_cntrl);
+int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
+			       enum MHI_ST_TRANSITION state);
 void mhi_pm_st_worker(struct work_struct *work);
 void mhi_fw_load_worker(struct work_struct *work);
 void mhi_pm_sys_err_worker(struct work_struct *work);
+int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
+void mhi_ctrl_ev_task(unsigned long data);
+int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
+void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
+int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
+void mhi_notify(struct mhi_device *mhi_dev, enum MHI_CB cb_reason);
+int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+		 enum MHI_CMD cmd);
+int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
 
 /* queue transfer buffer */
 int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
@@ -202,7 +641,46 @@ void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
 void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
 			     struct db_cfg *db_mode, void __iomem *db_addr,
 			     dma_addr_t wp);
+int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
+			      void __iomem *base, u32 offset, u32 *out);
+int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
+				    void __iomem *base, u32 offset, u32 mask,
+				    u32 shift, u32 *out);
+void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
+		   u32 offset, u32 val);
+void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
+			 u32 offset, u32 mask, u32 shift, u32 val);
+void mhi_ring_er_db(struct mhi_event *mhi_event);
+void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
+		  dma_addr_t wp);
+void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd);
+void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
+		      struct mhi_chan *mhi_chan);
+void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum MHI_STATE state);
+int mhi_get_capability_offset(struct mhi_controller *mhi_cntrl, u32 capability,
+			      u32 *offset);
+
+/* memory allocation methods */
+static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
+				       size_t size,
+				       dma_addr_t *dma_handle,
+				       gfp_t gfp)
+{
+	void *buf = dma_zalloc_coherent(mhi_cntrl->dev, size, dma_handle, gfp);
+
+	if (buf)
+		atomic_add(size, &mhi_cntrl->alloc_size);
 
+	return buf;
+}
+static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl,
+				     size_t size,
+				     void *vaddr,
+				     dma_addr_t dma_handle)
+{
+	atomic_sub(size, &mhi_cntrl->alloc_size);
+	dma_free_coherent(mhi_cntrl->dev, size, vaddr, dma_handle);
+}
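
Note the alloc_size accounting stays balanced only if the caller passes
the same size to both helpers; a typical pairing (sketch):

    void *buf;
    dma_addr_t dma_handle;

    buf = mhi_alloc_coherent(mhi_cntrl, size, &dma_handle, GFP_KERNEL);
    if (!buf)
            return -ENOMEM;

    /* ... use the buffer for a ring or context array ... */

    mhi_free_coherent(mhi_cntrl, size, buf, dma_handle);
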
 struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
 static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
 				      struct mhi_device *mhi_dev)
@@ -211,6 +689,10 @@ static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
 }
 int mhi_destroy_device(struct device *dev, void *data);
 void mhi_create_devices(struct mhi_controller *mhi_cntrl);
+int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
+			 struct image_info **image_info, size_t alloc_size);
+void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
+			 struct image_info *image_info);
 
 int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
 			 struct mhi_buf_info *buf_info);
@@ -227,6 +709,15 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 			     struct mhi_event *mhi_event, u32 event_quota);
 
 /* initialization methods */
+int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+		       struct mhi_chan *mhi_chan);
+void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+			  struct mhi_chan *mhi_chan);
+int mhi_init_mmio(struct mhi_controller *mhi_cntrl);
+int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl);
+void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl);
+int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
+void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
 int mhi_dtr_init(void);
 
 /* isr handlers */
diff --git a/drivers/bus/mhi/core/mhi_main.c b/drivers/bus/mhi/core/mhi_main.c
index 4df4cd0..81cda31 100644
--- a/drivers/bus/mhi/core/mhi_main.c
+++ b/drivers/bus/mhi/core/mhi_main.c
@@ -17,11 +17,87 @@
 #include <linux/mhi.h>
 #include "mhi_internal.h"
 
+int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
+			      void __iomem *base,
+			      u32 offset,
+			      u32 *out)
+{
+	u32 tmp = readl_relaxed(base + offset);
+
+	/* unexpected value, query the link status */
+	if (PCI_INVALID_READ(tmp) &&
+	    mhi_cntrl->link_status(mhi_cntrl, mhi_cntrl->priv_data))
+		return -EIO;
+
+	*out = tmp;
+
+	return 0;
+}
+
+int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
+				    void __iomem *base,
+				    u32 offset,
+				    u32 mask,
+				    u32 shift,
+				    u32 *out)
+{
+	u32 tmp;
+	int ret;
+
+	ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
+	if (ret)
+		return ret;
+
+	*out = (tmp & mask) >> shift;
+
+	return 0;
+}
+
+void mhi_write_reg(struct mhi_controller *mhi_cntrl,
+		   void __iomem *base,
+		   u32 offset,
+		   u32 val)
+{
+	writel_relaxed(val, base + offset);
+}
+
+void mhi_write_reg_field(struct mhi_controller *mhi_cntrl,
+			 void __iomem *base,
+			 u32 offset,
+			 u32 mask,
+			 u32 shift,
+			 u32 val)
+{
+	int ret;
+	u32 tmp;
+
+	ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
+	if (ret)
+		return;
+
+	tmp &= ~mask;
+	tmp |= (val << shift);
+	mhi_write_reg(mhi_cntrl, base, offset, tmp);
+}
+
+void mhi_write_db(struct mhi_controller *mhi_cntrl,
+		  void __iomem *db_addr,
+		  dma_addr_t wp)
+{
+	mhi_write_reg(mhi_cntrl, db_addr, 4, upper_32_bits(wp));
+	mhi_write_reg(mhi_cntrl, db_addr, 0, lower_32_bits(wp));
+}
+
 void mhi_db_brstmode(struct mhi_controller *mhi_cntrl,
 		     struct db_cfg *db_cfg,
 		     void __iomem *db_addr,
 		     dma_addr_t wp)
 {
+	if (db_cfg->db_mode) {
+		db_cfg->db_val = wp;
+		mhi_write_db(mhi_cntrl, db_addr, wp);
+		db_cfg->db_mode = 0;
+	}
 }
 
 void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
@@ -29,6 +105,55 @@ void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
 			     void __iomem *db_addr,
 			     dma_addr_t wp)
 {
+	db_cfg->db_val = wp;
+	mhi_write_db(mhi_cntrl, db_addr, wp);
+}
+
+void mhi_ring_er_db(struct mhi_event *mhi_event)
+{
+	struct mhi_ring *ring = &mhi_event->ring;
+
+	mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
+				     ring->db_addr, *ring->ctxt_wp);
+}
+
+void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
+{
+	dma_addr_t db;
+	struct mhi_ring *ring = &mhi_cmd->ring;
+
+	db = ring->iommu_base + (ring->wp - ring->base);
+	*ring->ctxt_wp = db;
+	mhi_write_db(mhi_cntrl, ring->db_addr, db);
+}
+
+void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
+		      struct mhi_chan *mhi_chan)
+{
+	struct mhi_ring *ring = &mhi_chan->tre_ring;
+	dma_addr_t db;
+
+	db = ring->iommu_base + (ring->wp - ring->base);
+	*ring->ctxt_wp = db;
+	mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg, ring->db_addr,
+				    db);
+}
+
+enum MHI_EE mhi_get_exec_env(struct mhi_controller *mhi_cntrl)
+{
+	u32 exec;
+	int ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_EXECENV, &exec);
+
+	return (ret) ? MHI_EE_MAX : exec;
+}
+
+enum MHI_STATE mhi_get_m_state(struct mhi_controller *mhi_cntrl)
+{
+	u32 state;
+	int ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
+				     MHISTATUS_MHISTATE_MASK,
+				     MHISTATUS_MHISTATE_SHIFT, &state);
+	return ret ? MHI_STATE_MAX : state;
 }
 
 int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
@@ -71,6 +196,41 @@ int mhi_queue_nop(struct mhi_device *mhi_dev,
 	return -EINVAL;
 }
 
+static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr)
+{
+	return (addr - ring->iommu_base) + ring->base;
+}
+
+dma_addr_t mhi_to_physical(struct mhi_ring *ring, void *addr)
+{
+	return (addr - ring->base) + ring->iommu_base;
+}
+
+static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
+					struct mhi_ring *ring)
+{
+	dma_addr_t ctxt_wp;
+
+	/* update the WP */
+	ring->wp += ring->el_size;
+	ctxt_wp = *ring->ctxt_wp + ring->el_size;
+
+	if (ring->wp >= (ring->base + ring->len)) {
+		ring->wp = ring->base;
+		ctxt_wp = ring->iommu_base;
+	}
+
+	*ring->ctxt_wp = ctxt_wp;
+
+	/* update the RP */
+	ring->rp += ring->el_size;
+	if (ring->rp >= (ring->base + ring->len))
+		ring->rp = ring->base;
+
+	/* visible to other cores */
+	smp_wmb();
+}
+
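The ring math above works because the host-virtual and DMA views of a
ring element differ by a constant (iommu_base - base); the two
translation helpers above capture both directions. For example (sketch):

    /* DMA view of the current write pointer, e.g. for a doorbell */
    dma_addr_t db = mhi_to_physical(ring, ring->wp);

    /* host view of the device's read pointer from the ER context */
    struct mhi_tre *dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
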
 int mhi_queue_skb(struct mhi_device *mhi_dev,
 		  struct mhi_chan *mhi_chan,
 		  void *buf,
@@ -99,11 +259,254 @@ int mhi_queue_buf(struct mhi_device *mhi_dev,
 	return -EINVAL;
 }
 
+/* destroy specific device */
+int mhi_destroy_device(struct device *dev, void *data)
+{
+	struct mhi_device *mhi_dev;
+	struct mhi_controller *mhi_cntrl;
+
+	if (dev->bus != &mhi_bus_type)
+		return 0;
+
+	mhi_dev = to_mhi_device(dev);
+	mhi_cntrl = mhi_dev->mhi_cntrl;
+
+	/* only destroy virtual devices that are attached to the bus */
+	if (mhi_dev->dev_type ==  MHI_CONTROLLER_TYPE)
+		return 0;
+
+	dev_info(mhi_cntrl->dev, "destroy device for chan:%s\n",
+		 mhi_dev->chan_name);
+
+	/* notify the client and remove the device from mhi bus */
+	device_del(dev);
+	put_device(dev);
+
+	return 0;
+}
+
+void mhi_notify(struct mhi_device *mhi_dev, enum MHI_CB cb_reason)
+{
+	struct mhi_driver *mhi_drv;
+
+	if (!mhi_dev->dev.driver)
+		return;
+
+	mhi_drv = to_mhi_driver(mhi_dev->dev.driver);
+
+	if (mhi_drv->status_cb)
+		mhi_drv->status_cb(mhi_dev, cb_reason);
+}
+
+static void mhi_assign_of_node(struct mhi_controller *mhi_cntrl,
+			       struct mhi_device *mhi_dev)
+{
+	struct device_node *controller, *node;
+	const char *dt_name;
+	int ret;
+
+	controller = mhi_cntrl->of_node;
+	for_each_available_child_of_node(controller, node) {
+		ret = of_property_read_string(node, "mhi,chan", &dt_name);
+		if (ret)
+			continue;
+		if (!strcmp(mhi_dev->chan_name, dt_name)) {
+			mhi_dev->dev.of_node = node;
+			break;
+		}
+	}
+}
+
+/* bind mhi channels to mhi devices */
+void mhi_create_devices(struct mhi_controller *mhi_cntrl)
+{
+	int i;
+	struct mhi_chan *mhi_chan;
+	struct mhi_device *mhi_dev;
+	int ret;
+
+	mhi_chan = mhi_cntrl->mhi_chan;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+		if (!mhi_chan->configured || mhi_chan->ee != mhi_cntrl->ee)
+			continue;
+		mhi_dev = mhi_alloc_device(mhi_cntrl);
+		if (!mhi_dev)
+			return;
+
+		mhi_dev->dev_type = MHI_XFER_TYPE;
+		switch (mhi_chan->dir) {
+		case DMA_TO_DEVICE:
+			mhi_dev->ul_chan = mhi_chan;
+			mhi_dev->ul_chan_id = mhi_chan->chan;
+			mhi_dev->ul_xfer = mhi_chan->queue_xfer;
+			mhi_dev->ul_event_id = mhi_chan->er_index;
+			break;
+		case DMA_NONE:
+		case DMA_BIDIRECTIONAL:
+			mhi_dev->ul_chan_id = mhi_chan->chan;
+			/* fall through */
+		case DMA_FROM_DEVICE:
+			/* we use dl_chan for offload channels */
+			mhi_dev->dl_chan = mhi_chan;
+			mhi_dev->dl_chan_id = mhi_chan->chan;
+			mhi_dev->dl_xfer = mhi_chan->queue_xfer;
+			mhi_dev->dl_event_id = mhi_chan->er_index;
+		}
+
+		mhi_chan->mhi_dev = mhi_dev;
+
+		/* if the next channel shares this name, bind it to the same device */
+		if ((i + 1) < mhi_cntrl->max_chan && mhi_chan[1].configured) {
+			if (!strcmp(mhi_chan[1].name, mhi_chan->name)) {
+				i++;
+				mhi_chan++;
+				if (mhi_chan->dir == DMA_TO_DEVICE) {
+					mhi_dev->ul_chan = mhi_chan;
+					mhi_dev->ul_chan_id = mhi_chan->chan;
+					mhi_dev->ul_xfer = mhi_chan->queue_xfer;
+					mhi_dev->ul_event_id =
+						mhi_chan->er_index;
+				} else {
+					mhi_dev->dl_chan = mhi_chan;
+					mhi_dev->dl_chan_id = mhi_chan->chan;
+					mhi_dev->dl_xfer = mhi_chan->queue_xfer;
+					mhi_dev->dl_event_id =
+						mhi_chan->er_index;
+				}
+				mhi_chan->mhi_dev = mhi_dev;
+			}
+		}
+
+		mhi_dev->chan_name = mhi_chan->name;
+		dev_set_name(&mhi_dev->dev, "%04x_%02u.%02u.%02u_%s",
+			     mhi_dev->dev_id, mhi_dev->domain, mhi_dev->bus,
+			     mhi_dev->slot, mhi_dev->chan_name);
+
+		/* add if there is a matching DT node */
+		mhi_assign_of_node(mhi_cntrl, mhi_dev);
+
+		ret = device_add(&mhi_dev->dev);
+		if (ret)
+			mhi_dealloc_device(mhi_cntrl, mhi_dev);
+	}
+}
+
 int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 			     struct mhi_event *mhi_event,
 			     u32 event_quota)
 {
-	return -EIO;
+	struct mhi_tre *dev_rp, *local_rp;
+	struct mhi_ring *ev_ring = &mhi_event->ring;
+	struct mhi_event_ctxt *er_ctxt =
+		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+	int count = 0;
+
+	/*
+	 * this is a quick check to avoid unnecessary event processing
+	 * in case we are already in an error state; it's still possible
+	 * to transition to an error state while processing events
+	 */
+	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+		return -EIO;
+
+	dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+	local_rp = ev_ring->rp;
+
+	while (dev_rp != local_rp) {
+		enum MHI_PKT_TYPE type = MHI_TRE_GET_EV_TYPE(local_rp);
+
+		switch (type) {
+		case MHI_PKT_TYPE_STATE_CHANGE_EVENT:
+		{
+			enum MHI_STATE new_state;
+
+			new_state = MHI_TRE_GET_EV_STATE(local_rp);
+
+			dev_info(mhi_cntrl->dev,
+				 "MHI state change event to state:%s\n",
+				 TO_MHI_STATE_STR(new_state));
+
+			switch (new_state) {
+			case MHI_STATE_M0:
+				mhi_pm_m0_transition(mhi_cntrl);
+				break;
+			case MHI_STATE_M1:
+				mhi_pm_m1_transition(mhi_cntrl);
+				break;
+			case MHI_STATE_M3:
+				mhi_pm_m3_transition(mhi_cntrl);
+				break;
+			case MHI_STATE_SYS_ERR:
+			{
+				enum MHI_PM_STATE new_state;
+
+				dev_info(mhi_cntrl->dev,
+					 "MHI system error detected\n");
+				write_lock_irq(&mhi_cntrl->pm_lock);
+				new_state = mhi_tryset_pm_state(mhi_cntrl,
+							MHI_PM_SYS_ERR_DETECT);
+				write_unlock_irq(&mhi_cntrl->pm_lock);
+				if (new_state == MHI_PM_SYS_ERR_DETECT)
+					schedule_work(
+						&mhi_cntrl->syserr_worker);
+				break;
+			}
+			default:
+				dev_err(mhi_cntrl->dev, "Unsupported STE:%s\n",
+					TO_MHI_STATE_STR(new_state));
+			}
+
+			break;
+		}
+		case MHI_PKT_TYPE_CMD_COMPLETION_EVENT:
+			break;
+		case MHI_PKT_TYPE_EE_EVENT:
+		{
+			enum MHI_ST_TRANSITION st = MHI_ST_TRANSITION_MAX;
+			enum MHI_EE event = MHI_TRE_GET_EV_EXECENV(local_rp);
+
+			dev_info(mhi_cntrl->dev, "MHI EE received event:%s\n",
+				 TO_MHI_EXEC_STR(event));
+			switch (event) {
+			case MHI_EE_SBL:
+				st = MHI_ST_TRANSITION_SBL;
+				break;
+			case MHI_EE_AMSS:
+				st = MHI_ST_TRANSITION_AMSS;
+				break;
+			case MHI_EE_RDDM:
+				mhi_cntrl->status_cb(mhi_cntrl,
+						     mhi_cntrl->priv_data,
+						     MHI_CB_EE_RDDM);
+				/* fall through to wake up the event */
+			case MHI_EE_BHIE:
+				write_lock_irq(&mhi_cntrl->pm_lock);
+				mhi_cntrl->ee = event;
+				write_unlock_irq(&mhi_cntrl->pm_lock);
+				wake_up(&mhi_cntrl->state_event);
+				break;
+			default:
+				dev_err(mhi_cntrl->dev, "Unhandled EE event\n");
+			}
+			if (st != MHI_ST_TRANSITION_MAX)
+				mhi_queue_state_transition(mhi_cntrl, st);
+
+			break;
+		}
+		default:
+			break;
+		}
+
+		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+		local_rp = ev_ring->rp;
+		dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+		count++;
+	}
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)))
+		mhi_ring_er_db(mhi_event);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return count;
 }
 
 int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
@@ -119,4 +522,127 @@ void mhi_ev_task(unsigned long data)
 
 void mhi_ctrl_ev_task(unsigned long data)
 {
+	struct mhi_event *mhi_event = (struct mhi_event *)data;
+	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+	enum MHI_STATE state = MHI_STATE_MAX;
+	enum MHI_PM_STATE pm_state = 0;
+	int ret;
+
+	/* process ctrl events */
+	ret = mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
+
+	/*
+	 * we received an MSI but have no events to process; the device may
+	 * have gone to SYS_ERR state, so check the state
+	 */
+	if (!ret) {
+		write_lock_irq(&mhi_cntrl->pm_lock);
+		if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+			state = mhi_get_m_state(mhi_cntrl);
+		if (state == MHI_STATE_SYS_ERR) {
+			dev_info(mhi_cntrl->dev, "MHI system error detected\n");
+			pm_state = mhi_tryset_pm_state(mhi_cntrl,
+						       MHI_PM_SYS_ERR_DETECT);
+		}
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		if (pm_state == MHI_PM_SYS_ERR_DETECT)
+			schedule_work(&mhi_cntrl->syserr_worker);
+	}
+}
+
+irqreturn_t mhi_msi_handlr(int irq_number, void *dev)
+{
+	struct mhi_event *mhi_event = dev;
+	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+	struct mhi_event_ctxt *er_ctxt =
+		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+	struct mhi_ring *ev_ring = &mhi_event->ring;
+	void *dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+
+	/* confirm ER has pending events to process before scheduling work */
+	if (ev_ring->rp == dev_rp)
+		return IRQ_HANDLED;
+
+	/* client managed event ring, notify pending data */
+	if (mhi_event->cl_manage) {
+		struct mhi_chan *mhi_chan = mhi_event->mhi_chan;
+		struct mhi_device *mhi_dev = mhi_chan->mhi_dev;
+
+		if (mhi_dev)
+			mhi_dev->status_cb(mhi_dev, MHI_CB_PENDING_DATA);
+	} else
+		tasklet_schedule(&mhi_event->task);
+
+	return IRQ_HANDLED;
+}
+
+/* this is the threaded irq handler */
+irqreturn_t mhi_intvec_threaded_handlr(int irq_number, void *dev)
+{
+	struct mhi_controller *mhi_cntrl = dev;
+	enum MHI_STATE state = MHI_STATE_MAX;
+	enum MHI_PM_STATE pm_state = 0;
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+		state = mhi_get_m_state(mhi_cntrl);
+	if (state == MHI_STATE_SYS_ERR) {
+		dev_info(mhi_cntrl->dev, "MHI system error detected\n");
+		pm_state = mhi_tryset_pm_state(mhi_cntrl,
+					       MHI_PM_SYS_ERR_DETECT);
+	}
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	if (pm_state == MHI_PM_SYS_ERR_DETECT)
+		schedule_work(&mhi_cntrl->syserr_worker);
+
+	return IRQ_HANDLED;
+}
+
+irqreturn_t mhi_intvec_handlr(int irq_number, void *dev)
+{
+	struct mhi_controller *mhi_cntrl = dev;
+
+	/* wake up any events waiting for state change */
+	wake_up(&mhi_cntrl->state_event);
+
+	return IRQ_WAKE_THREAD;
+}
+
+static int __mhi_bdf_to_controller(struct device *dev, void *tmp)
+{
+	struct mhi_device *mhi_dev = to_mhi_device(dev);
+	struct mhi_device *match = tmp;
+
+	/* return any non-zero value on a match */
+	if (mhi_dev->dev_type == MHI_CONTROLLER_TYPE &&
+	    mhi_dev->domain == match->domain && mhi_dev->bus == match->bus &&
+	    mhi_dev->slot == match->slot && mhi_dev->dev_id == match->dev_id)
+		return 1;
+
+	return 0;
+}
+
+struct mhi_controller *mhi_bdf_to_controller(u32 domain,
+					     u32 bus,
+					     u32 slot,
+					     u32 dev_id)
+{
+	struct mhi_device tmp, *mhi_dev;
+	struct device *dev;
+
+	tmp.domain = domain;
+	tmp.bus = bus;
+	tmp.slot = slot;
+	tmp.dev_id = dev_id;
+
+	dev = bus_find_device(&mhi_bus_type, NULL, &tmp,
+			      __mhi_bdf_to_controller);
+	if (!dev)
+		return NULL;
+
+	mhi_dev = to_mhi_device(dev);
+
+	return mhi_dev->mhi_cntrl;
 }
+EXPORT_SYMBOL(mhi_bdf_to_controller);
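
mhi_bdf_to_controller() lets a caller recover the mhi_controller from
PCI topology alone; a usage sketch (domain/bus/slot/dev_id here are
whatever the caller already knows about the endpoint):

    struct mhi_controller *mhi_cntrl;

    mhi_cntrl = mhi_bdf_to_controller(domain, bus, slot, dev_id);
    if (!mhi_cntrl)
            return -ENODEV;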
diff --git a/drivers/bus/mhi/core/mhi_pm.c b/drivers/bus/mhi/core/mhi_pm.c
new file mode 100644
index 0000000..5220cfa
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_pm.c
@@ -0,0 +1,1027 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#include <linux/debugfs.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include <linux/mhi.h>
+#include "mhi_internal.h"
+
+/*
+ * Not all MHI state transitions are sync transitions. Link down, SSR, and
+ * shutdown can happen anytime asynchronously. This function will transition
+ * to a new state only if the transition is allowed.
+ *
+ * Priority increases as we go down; for example, while in any L0 state, a
+ * state from L1, L2, or L3 can be set.  A notable exception to this rule is
+ * the DISABLE state: from DISABLE we can only transition to the POR state.
+ * Also, for example, while in an L2 state, we cannot jump back to L1 or L0
+ * states.
+ * Valid transitions:
+ * L0: DISABLE <--> POR
+ *     POR <--> POR
+ *     POR -> M0 -> M2 --> M0
+ *     POR -> FW_DL_ERR
+ *     FW_DL_ERR <--> FW_DL_ERR
+ *     M0 -> FW_DL_ERR
+ *     M0 -> M3_ENTER -> M3 -> M3_EXIT --> M0
+ * L1: SYS_ERR_DETECT -> SYS_ERR_PROCESS --> POR
+ * L2: SHUTDOWN_PROCESS -> DISABLE
+ * L3: LD_ERR_FATAL_DETECT <--> LD_ERR_FATAL_DETECT
+ *     LD_ERR_FATAL_DETECT -> SHUTDOWN_PROCESS
+ */
+static struct mhi_pm_transitions const mhi_state_transitions[] = {
+	/* L0 States */
+	{
+		MHI_PM_DISABLE,
+		MHI_PM_POR
+	},
+	{
+		MHI_PM_POR,
+		MHI_PM_POR | MHI_PM_DISABLE | MHI_PM_M0 |
+		MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR
+	},
+	{
+		MHI_PM_M0,
+		MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_SYS_ERR_DETECT |
+		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT |
+		MHI_PM_FW_DL_ERR
+	},
+	{
+		MHI_PM_M2,
+		MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_M3_ENTER,
+		MHI_PM_M3 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_M3,
+		MHI_PM_M3_EXIT | MHI_PM_SYS_ERR_DETECT |
+		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_M3_EXIT,
+		MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_FW_DL_ERR,
+		MHI_PM_FW_DL_ERR | MHI_PM_SYS_ERR_DETECT |
+		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	/* L1 States */
+	{
+		MHI_PM_SYS_ERR_DETECT,
+		MHI_PM_SYS_ERR_PROCESS | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	{
+		MHI_PM_SYS_ERR_PROCESS,
+		MHI_PM_POR | MHI_PM_SHUTDOWN_PROCESS |
+		MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	/* L2 States */
+	{
+		MHI_PM_SHUTDOWN_PROCESS,
+		MHI_PM_DISABLE | MHI_PM_LD_ERR_FATAL_DETECT
+	},
+	/* L3 States */
+	{
+		MHI_PM_LD_ERR_FATAL_DETECT,
+		MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_SHUTDOWN_PROCESS
+	},
+};
+
+enum MHI_PM_STATE __must_check mhi_tryset_pm_state(
+				struct mhi_controller *mhi_cntrl,
+				enum MHI_PM_STATE state)
+{
+	unsigned long cur_state = mhi_cntrl->pm_state;
+	int index = find_last_bit(&cur_state, 32);
+
+	if (unlikely(index >= ARRAY_SIZE(mhi_state_transitions)))
+		return cur_state;
+
+	if (unlikely(mhi_state_transitions[index].from_state != cur_state))
+		return cur_state;
+
+	if (unlikely(!(mhi_state_transitions[index].to_states & state)))
+		return cur_state;
+
+	mhi_cntrl->pm_state = state;
+	return mhi_cntrl->pm_state;
+}
+
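Since every MHI_PM_STATE is a single bit, find_last_bit() maps the
current state straight to its row in mhi_state_transitions[], which means
the table must stay in MHI_PM_BIT_* order. A debug-only sanity sketch
(not part of this patch) that would catch an accidental reordering:

    int i;

    for (i = 0; i < ARRAY_SIZE(mhi_state_transitions); i++)
            WARN_ON(mhi_state_transitions[i].from_state != BIT(i));
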
+void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum MHI_STATE state)
+{
+	if (state == MHI_STATE_RESET) {
+		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
+				    MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 1);
+	} else {
+		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
+			MHICTRL_MHISTATE_MASK, MHICTRL_MHISTATE_SHIFT, state);
+	}
+}
+
+/* set device wake */
+void mhi_assert_dev_wake(struct mhi_controller *mhi_cntrl, bool force)
+{
+	unsigned long flags;
+
+	/* if forced, set the wake doorbell regardless of the count, if not already set */
+	if (unlikely(force)) {
+		spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+		atomic_inc(&mhi_cntrl->dev_wake);
+		if (MHI_WAKE_DB_FORCE_SET_VALID(mhi_cntrl->pm_state) &&
+		    !mhi_cntrl->wake_set) {
+			mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
+			mhi_cntrl->wake_set = true;
+		}
+		spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+	} else {
+		/* if resources are already requested, just increment and exit */
+		if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, 1, 0)))
+			return;
+
+		spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+		if ((atomic_inc_return(&mhi_cntrl->dev_wake) == 1) &&
+		    MHI_WAKE_DB_SET_VALID(mhi_cntrl->pm_state) &&
+		    !mhi_cntrl->wake_set) {
+			mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
+			mhi_cntrl->wake_set = true;
+		}
+		spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+	}
+}
+
+/* clear device wake */
+void mhi_deassert_dev_wake(struct mhi_controller *mhi_cntrl, bool override)
+{
+	unsigned long flags;
+
+	/* if the count will not drop to 0, just decrement and exit */
+	if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, -1, 1)))
+		return;
+
+	spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+	if ((atomic_dec_return(&mhi_cntrl->dev_wake) == 0) &&
+	    MHI_WAKE_DB_CLEAR_VALID(mhi_cntrl->pm_state) && !override &&
+	    mhi_cntrl->wake_set) {
+		mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 0);
+		mhi_cntrl->wake_set = false;
+	}
+	spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+}
+
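Callers are expected to hold a wake vote (under pm_lock) around any burst
of ring or doorbell activity, mirroring what the M0 transition below does:

    read_lock_bh(&mhi_cntrl->pm_lock);
    mhi_cntrl->wake_get(mhi_cntrl, false);

    /* ... ring doorbells, queue work ... */

    mhi_cntrl->wake_put(mhi_cntrl, false);
    read_unlock_bh(&mhi_cntrl->pm_lock);
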
+int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
+{
+	void __iomem *base = mhi_cntrl->regs;
+	u32 reset = 1, ready = 0;
+	struct mhi_event *mhi_event;
+	enum MHI_PM_STATE cur_state;
+	int ret, i;
+
+	/* wait for RESET to be cleared and READY bit to be set */
+	wait_event_timeout(mhi_cntrl->state_event,
+			   MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
+			   mhi_read_reg_field(mhi_cntrl, base, MHICTRL,
+					      MHICTRL_RESET_MASK,
+					      MHICTRL_RESET_SHIFT, &reset) ||
+			   mhi_read_reg_field(mhi_cntrl, base, MHISTATUS,
+					      MHISTATUS_READY_MASK,
+					      MHISTATUS_READY_SHIFT, &ready) ||
+			   (!reset && ready),
+			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	/* device entered an error state */
+	if (MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state))
+		return -EIO;
+
+	/* device did not transition to ready state */
+	if (reset || !ready)
+		return -ETIMEDOUT;
+
+	dev_info(mhi_cntrl->dev, "Device in READY State\n");
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_POR);
+	mhi_cntrl->dev_state = MHI_STATE_READY;
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	if (cur_state != MHI_PM_POR) {
+		dev_err(mhi_cntrl->dev, "Error moving to state %s from %s\n",
+			to_mhi_pm_state_str(MHI_PM_POR),
+			to_mhi_pm_state_str(cur_state));
+		return -EIO;
+	}
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+		goto error_mmio;
+
+	ret = mhi_init_mmio(mhi_cntrl);
+	if (ret) {
+		dev_err(mhi_cntrl->dev, "Error programming mmio registers\n");
+		goto error_mmio;
+	}
+
+	/* add elements to all sw event rings */
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		struct mhi_ring *ring = &mhi_event->ring;
+
+		if (mhi_event->offload_ev || mhi_event->hw_ring)
+			continue;
+
+		ring->wp = ring->base + ring->len - ring->el_size;
+		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+		/* make the update visible to all cores */
+		smp_wmb();
+
+		/* ring the db for event rings */
+		spin_lock_irq(&mhi_event->lock);
+		mhi_ring_er_db(mhi_event);
+		spin_unlock_irq(&mhi_event->lock);
+	}
+
+	/* set device into M0 state */
+	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return 0;
+
+error_mmio:
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return -EIO;
+}
+
+int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl)
+{
+	enum MHI_PM_STATE cur_state;
+	struct mhi_chan *mhi_chan;
+	int i;
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	mhi_cntrl->dev_state = MHI_STATE_M0;
+	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M0);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	if (unlikely(cur_state != MHI_PM_M0))
+		return -EIO;
+
+	mhi_cntrl->M0++;
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+
+	/* ring all event rings and CMD ring only if we're in AMSS */
+	if (mhi_cntrl->ee == MHI_EE_AMSS) {
+		struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+		struct mhi_cmd *mhi_cmd =
+			&mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+
+		for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+			if (mhi_event->offload_ev)
+				continue;
+
+			spin_lock_irq(&mhi_event->lock);
+			mhi_ring_er_db(mhi_event);
+			spin_unlock_irq(&mhi_event->lock);
+		}
+
+		/* only ring primary cmd ring */
+		spin_lock_irq(&mhi_cmd->lock);
+		if (mhi_cmd->ring.rp != mhi_cmd->ring.wp)
+			mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
+		spin_unlock_irq(&mhi_cmd->lock);
+	}
+
+	/* ring channel db registers */
+	mhi_chan = mhi_cntrl->mhi_chan;
+	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+		struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+
+		write_lock_irq(&mhi_chan->lock);
+		if (mhi_chan->db_cfg.reset_req)
+			mhi_chan->db_cfg.db_mode = true;
+
+		/* only ring DB if ring is not empty */
+		if (tre_ring->base && tre_ring->wp  != tre_ring->rp)
+			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+		write_unlock_irq(&mhi_chan->lock);
+	}
+
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+	wake_up(&mhi_cntrl->state_event);
+
+	return 0;
+}
+
+void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl)
+{
+	enum MHI_PM_STATE state;
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	/* if this fails, it means we are transitioning to M3 */
+	state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M2);
+	if (state == MHI_PM_M2) {
+		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M2);
+		mhi_cntrl->dev_state = MHI_STATE_M2;
+		mhi_cntrl->M2++;
+
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+
+		/* transfer pending, exit M2 immediately */
+		if (unlikely(atomic_read(&mhi_cntrl->dev_wake))) {
+			read_lock_bh(&mhi_cntrl->pm_lock);
+			mhi_cntrl->wake_get(mhi_cntrl, true);
+			mhi_cntrl->wake_put(mhi_cntrl, false);
+			read_unlock_bh(&mhi_cntrl->pm_lock);
+		} else {
+			mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data,
+					     MHI_CB_IDLE);
+		}
+	} else {
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+	}
+}
+
+int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl)
+{
+	enum MHI_PM_STATE state;
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	mhi_cntrl->dev_state = MHI_STATE_M3;
+	state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	if (state != MHI_PM_M3)
+		return -EIO;
+
+	wake_up(&mhi_cntrl->state_event);
+	mhi_cntrl->M3++;
+
+	return 0;
+}
+
+static int mhi_pm_amss_transition(struct mhi_controller *mhi_cntrl)
+{
+	int i;
+	struct mhi_event *mhi_event;
+
+	dev_info(mhi_cntrl->dev, "Processing AMSS Transition\n");
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	mhi_cntrl->ee = MHI_EE_AMSS;
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	wake_up(&mhi_cntrl->state_event);
+
+	/* add elements to all HW event rings */
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		return -EIO;
+	}
+
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		struct mhi_ring *ring = &mhi_event->ring;
+
+		if (mhi_event->offload_ev || !mhi_event->hw_ring)
+			continue;
+
+		ring->wp = ring->base + ring->len - ring->el_size;
+		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+		/* ring updates must be visible to all cores immediately */
+		smp_wmb();
+
+		spin_lock_irq(&mhi_event->lock);
+		if (MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state))
+			mhi_ring_er_db(mhi_event);
+		spin_unlock_irq(&mhi_event->lock);
+
+	}
+
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	/* add supported devices */
+	mhi_create_devices(mhi_cntrl);
+
+	return 0;
+}
+
+/* handles both sys_err and shutdown transitions */
+static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
+				      enum MHI_PM_STATE transition_state)
+{
+	enum MHI_PM_STATE cur_state, prev_state;
+	struct mhi_event *mhi_event;
+	struct mhi_cmd_ctxt *cmd_ctxt;
+	struct mhi_cmd *mhi_cmd;
+	struct mhi_event_ctxt *er_ctxt;
+	int ret, i;
+
+	dev_info(mhi_cntrl->dev,
+		 "Enter with from pm_state:%s MHI_STATE:%s to pm_state:%s\n",
+		 to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		 TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		 to_mhi_pm_state_str(transition_state));
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	prev_state = mhi_cntrl->pm_state;
+	cur_state = mhi_tryset_pm_state(mhi_cntrl, transition_state);
+	if (cur_state == transition_state) {
+		mhi_cntrl->ee = MHI_EE_DISABLE_TRANSITION;
+		mhi_cntrl->dev_state = MHI_STATE_RESET;
+	}
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	/* not handling sys_err; we could be in the middle of a shutdown */
+	if (cur_state != transition_state) {
+		dev_info(mhi_cntrl->dev,
+			 "Failed to transition to state:0x%x from:0x%x\n",
+			 transition_state, cur_state);
+		mutex_unlock(&mhi_cntrl->pm_mutex);
+		return;
+	}
+
+	/* trigger MHI RESET so the device will not access host DDR */
+	if (MHI_REG_ACCESS_VALID(prev_state)) {
+		u32 in_reset = -1;
+		unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms);
+
+		dev_info(mhi_cntrl->dev, "Trigger device into MHI_RESET\n");
+		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
+
+		/* wait for reset to be cleared */
+		ret = wait_event_timeout(mhi_cntrl->state_event,
+					 mhi_read_reg_field(mhi_cntrl,
+						mhi_cntrl->regs, MHICTRL,
+						MHICTRL_RESET_MASK,
+						MHICTRL_RESET_SHIFT, &in_reset)
+					 || !in_reset, timeout);
+		if ((!ret || in_reset) && cur_state == MHI_PM_SYS_ERR_PROCESS) {
+			dev_err(mhi_cntrl->dev,
+				"Device failed to exit RESET state\n");
+			mutex_unlock(&mhi_cntrl->pm_mutex);
+			return;
+		}
+
+		/*
+		 * the device clears INTVEC as part of RESET processing,
+		 * so re-program it
+		 */
+		mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+	}
+
+	dev_info(mhi_cntrl->dev,
+		 "Waiting for all pending event ring processing to complete\n");
+	mhi_event = mhi_cntrl->mhi_event;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+		if (mhi_event->offload_ev)
+			continue;
+		tasklet_kill(&mhi_event->task);
+	}
+
+	dev_info(mhi_cntrl->dev,
+		 "Reset all active channels and remove mhi devices\n");
+	device_for_each_child(mhi_cntrl->dev, NULL, mhi_destroy_device);
+
+	dev_info(mhi_cntrl->dev, "Finish resetting channels\n");
+
+	/* release lock and wait for all pending threads to complete */
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+	dev_info(mhi_cntrl->dev,
+		 "Waiting for all pending threads to complete\n");
+	wake_up(&mhi_cntrl->state_event);
+	flush_work(&mhi_cntrl->st_worker);
+	flush_work(&mhi_cntrl->fw_worker);
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+
+	/* reset the ev rings and cmd rings */
+	dev_info(mhi_cntrl->dev, "Resetting EV CTXT and CMD CTXT\n");
+	mhi_cmd = mhi_cntrl->mhi_cmd;
+	cmd_ctxt = mhi_cntrl->mhi_ctxt->cmd_ctxt;
+	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
+		struct mhi_ring *ring = &mhi_cmd->ring;
+
+		ring->rp = ring->base;
+		ring->wp = ring->base;
+		cmd_ctxt->rp = cmd_ctxt->rbase;
+		cmd_ctxt->wp = cmd_ctxt->rbase;
+	}
+
+	mhi_event = mhi_cntrl->mhi_event;
+	er_ctxt = mhi_cntrl->mhi_ctxt->er_ctxt;
+	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
+		     mhi_event++) {
+		struct mhi_ring *ring = &mhi_event->ring;
+
+		/* do not touch offload er */
+		if (mhi_event->offload_ev)
+			continue;
+
+		ring->rp = ring->base;
+		ring->wp = ring->base;
+		er_ctxt->rp = er_ctxt->rbase;
+		er_ctxt->wp = er_ctxt->rbase;
+	}
+
+	if (cur_state == MHI_PM_SYS_ERR_PROCESS) {
+		mhi_ready_state_transition(mhi_cntrl);
+	} else {
+		/* move to disable state */
+		write_lock_irq(&mhi_cntrl->pm_lock);
+		cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_DISABLE);
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		if (unlikely(cur_state != MHI_PM_DISABLE))
+			dev_err(mhi_cntrl->dev,
+				"Error moving from pm state:%s to state:%s\n",
+				to_mhi_pm_state_str(cur_state),
+				to_mhi_pm_state_str(MHI_PM_DISABLE));
+	}
+
+	dev_info(mhi_cntrl->dev, "Exit with pm_state:%s mhi_state:%s\n",
+		 to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		 TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+}
+
+/* queue a new work item and schedule the worker */
+int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
+			       enum MHI_ST_TRANSITION state)
+{
+	struct state_transition *item = kmalloc(sizeof(*item), GFP_ATOMIC);
+	unsigned long flags;
+
+	if (!item)
+		return -ENOMEM;
+
+	item->state = state;
+	spin_lock_irqsave(&mhi_cntrl->transition_lock, flags);
+	list_add_tail(&item->node, &mhi_cntrl->transition_list);
+	spin_unlock_irqrestore(&mhi_cntrl->transition_lock, flags);
+
+	schedule_work(&mhi_cntrl->st_worker);
+
+	return 0;
+}
+
+void mhi_pm_sys_err_worker(struct work_struct *work)
+{
+	struct mhi_controller *mhi_cntrl = container_of(work,
+							struct mhi_controller,
+							syserr_worker);
+
+	mhi_pm_disable_transition(mhi_cntrl, MHI_PM_SYS_ERR_PROCESS);
+}
+
+void mhi_pm_st_worker(struct work_struct *work)
+{
+	struct state_transition *itr, *tmp;
+	LIST_HEAD(head);
+	struct mhi_controller *mhi_cntrl = container_of(work,
+							struct mhi_controller,
+							st_worker);
+	spin_lock_irq(&mhi_cntrl->transition_lock);
+	list_splice_tail_init(&mhi_cntrl->transition_list, &head);
+	spin_unlock_irq(&mhi_cntrl->transition_lock);
+
+	list_for_each_entry_safe(itr, tmp, &head, node) {
+		list_del(&itr->node);
+		dev_info(mhi_cntrl->dev, "Transition to state:%s\n",
+			 TO_MHI_STATE_TRANS_STR(itr->state));
+
+		switch (itr->state) {
+		case MHI_ST_TRANSITION_PBL:
+			write_lock_irq(&mhi_cntrl->pm_lock);
+			if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+				mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
+			write_unlock_irq(&mhi_cntrl->pm_lock);
+			if (MHI_IN_PBL(mhi_cntrl->ee))
+				wake_up(&mhi_cntrl->state_event);
+			break;
+		case MHI_ST_TRANSITION_SBL:
+			write_lock_irq(&mhi_cntrl->pm_lock);
+			mhi_cntrl->ee = MHI_EE_SBL;
+			write_unlock_irq(&mhi_cntrl->pm_lock);
+			mhi_create_devices(mhi_cntrl);
+			break;
+		case MHI_ST_TRANSITION_AMSS:
+			mhi_pm_amss_transition(mhi_cntrl);
+			break;
+		case MHI_ST_TRANSITION_READY:
+			mhi_ready_state_transition(mhi_cntrl);
+			break;
+		default:
+			break;
+		}
+		kfree(itr);
+	}
+}
+
+int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+	u32 val;
+	enum MHI_EE current_ee;
+	enum MHI_ST_TRANSITION next_state;
+
+	dev_info(mhi_cntrl->dev, "Requested to power on\n");
+
+	if (mhi_cntrl->msi_allocated < mhi_cntrl->total_ev_rings)
+		return -EINVAL;
+
+	/* set to default wake if not set */
+	if (!mhi_cntrl->wake_get || !mhi_cntrl->wake_put) {
+		mhi_cntrl->wake_get = mhi_assert_dev_wake;
+		mhi_cntrl->wake_put = mhi_deassert_dev_wake;
+	}
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+	mhi_cntrl->pm_state = MHI_PM_DISABLE;
+
+	if (!mhi_cntrl->pre_init) {
+		/* setup device context */
+		ret = mhi_init_dev_ctxt(mhi_cntrl);
+		if (ret)
+			goto error_dev_ctxt;
+
+		ret = mhi_init_irq_setup(mhi_cntrl);
+		if (ret)
+			goto error_setup_irq;
+	}
+
+	/* setup bhi offset & intvec */
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIOFF, &val);
+	if (ret) {
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		goto error_bhi_offset;
+	}
+
+	mhi_cntrl->bhi = mhi_cntrl->regs + val;
+
+	/* setup bhie offset */
+	if (mhi_cntrl->fbc_download) {
+		ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF, &val);
+		if (ret) {
+			write_unlock_irq(&mhi_cntrl->pm_lock);
+			dev_err(mhi_cntrl->dev, "Error reading BHIE offset\n");
+			goto error_bhi_offset;
+		}
+
+		mhi_cntrl->bhie = mhi_cntrl->regs + val;
+	}
+
+	mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+	mhi_cntrl->pm_state = MHI_PM_POR;
+	mhi_cntrl->ee = MHI_EE_MAX;
+	current_ee = mhi_get_exec_env(mhi_cntrl);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	/* confirm device is in valid exec env */
+	if (!MHI_IN_PBL(current_ee) && current_ee != MHI_EE_AMSS) {
+		dev_info(mhi_cntrl->dev, "Not a valid ee for power on\n");
+		ret = -EIO;
+		goto error_bhi_offset;
+	}
+
+	/* transition to next state */
+	next_state = MHI_IN_PBL(current_ee) ?
+		MHI_ST_TRANSITION_PBL : MHI_ST_TRANSITION_READY;
+
+	if (next_state == MHI_ST_TRANSITION_PBL)
+		schedule_work(&mhi_cntrl->fw_worker);
+
+	mhi_queue_state_transition(mhi_cntrl, next_state);
+
+	mhi_init_debugfs(mhi_cntrl);
+
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	dev_info(mhi_cntrl->dev, "Power on setup success\n");
+
+	return 0;
+
+error_bhi_offset:
+	if (!mhi_cntrl->pre_init)
+		mhi_deinit_free_irq(mhi_cntrl);
+
+error_setup_irq:
+	if (!mhi_cntrl->pre_init)
+		mhi_deinit_dev_ctxt(mhi_cntrl);
+
+error_dev_ctxt:
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	return ret;
+}
+EXPORT_SYMBOL(mhi_async_power_up);
+
+void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
+{
+	enum MHI_PM_STATE cur_state;
+
+	/* if it's not graceful shutdown, force MHI to a linkdown state */
+	if (!graceful) {
+		mutex_lock(&mhi_cntrl->pm_mutex);
+		write_lock_irq(&mhi_cntrl->pm_lock);
+		cur_state = mhi_tryset_pm_state(mhi_cntrl,
+						MHI_PM_LD_ERR_FATAL_DETECT);
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		mutex_unlock(&mhi_cntrl->pm_mutex);
+		if (cur_state != MHI_PM_LD_ERR_FATAL_DETECT)
+			dev_info(mhi_cntrl->dev,
+			"Failed to move to state:%s from:%s\n",
+			to_mhi_pm_state_str(MHI_PM_LD_ERR_FATAL_DETECT),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+	}
+	mhi_pm_disable_transition(mhi_cntrl, MHI_PM_SHUTDOWN_PROCESS);
+
+	mhi_deinit_debugfs(mhi_cntrl);
+
+	if (!mhi_cntrl->pre_init) {
+		/* free all allocated resources */
+		if (mhi_cntrl->fbc_image) {
+			mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+			mhi_cntrl->fbc_image = NULL;
+		}
+		mhi_deinit_free_irq(mhi_cntrl);
+		mhi_deinit_dev_ctxt(mhi_cntrl);
+	}
+}
+EXPORT_SYMBOL(mhi_power_down);
+
+int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
+{
+	int ret = mhi_async_power_up(mhi_cntrl);
+
+	if (ret)
+		return ret;
+
+	wait_event_timeout(mhi_cntrl->state_event,
+			   mhi_cntrl->ee == MHI_EE_AMSS ||
+			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	return (mhi_cntrl->ee == MHI_EE_AMSS) ? 0 : -EIO;
+}
+EXPORT_SYMBOL(mhi_sync_power_up);
+
+int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+	enum MHI_PM_STATE new_state;
+	struct mhi_chan *itr, *tmp;
+
+	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
+		return -EINVAL;
+
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+		return -EIO;
+
+	/* do a quick check to see if any pending data, then exit */
+	if (atomic_read(&mhi_cntrl->dev_wake))
+		return -EBUSY;
+
+	/* exit MHI out of M2 state */
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
+				 mhi_cntrl->dev_state == MHI_STATE_M1 ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		dev_err(mhi_cntrl->dev,
+			"Did not enter M0||M1 state, cur_state:%s pm_state:%s\n",
+			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+
+	if (atomic_read(&mhi_cntrl->dev_wake)) {
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		return -EBUSY;
+	}
+
+	/* any time after this, we will resume via the runtime pm framework */
+	dev_info(mhi_cntrl->dev, "Allowing M3 transition\n");
+	new_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_ENTER);
+	if (new_state != MHI_PM_M3_ENTER) {
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		dev_err(mhi_cntrl->dev,
+			"Error setting to pm_state:%s from pm_state:%s\n",
+			to_mhi_pm_state_str(MHI_PM_M3_ENTER),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+
+	/* set dev to M3 and wait for completion */
+	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	dev_info(mhi_cntrl->dev, "Wait for M3 completion\n");
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->dev_state == MHI_STATE_M3 ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		dev_err(mhi_cntrl->dev,
+			"Did not enter M3 state, cur_state:%s pm_state:%s\n",
+			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+
+	/* notify any clients we enter lpm */
+	list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
+		mutex_lock(&itr->mutex);
+		if (itr->mhi_dev)
+			mhi_notify(itr->mhi_dev, MHI_CB_LPM_ENTER);
+		mutex_unlock(&itr->mutex);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(mhi_pm_suspend);
+
+int mhi_pm_resume(struct mhi_controller *mhi_cntrl)
+{
+	enum MHI_PM_STATE cur_state;
+	int ret;
+	struct mhi_chan *itr, *tmp;
+
+	dev_info(mhi_cntrl->dev, "Entered with pm_state:%s dev_state:%s\n",
+		 to_mhi_pm_state_str(mhi_cntrl->pm_state),
+		 TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+
+	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
+		return 0;
+
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+		return -EIO;
+
+	/* notify any clients we are exiting lpm */
+	list_for_each_entry_safe(itr, tmp, &mhi_cntrl->lpm_chans, node) {
+		mutex_lock(&itr->mutex);
+		if (itr->mhi_dev)
+			mhi_notify(itr->mhi_dev, MHI_CB_LPM_EXIT);
+		mutex_unlock(&itr->mutex);
+	}
+
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3_EXIT);
+	if (cur_state != MHI_PM_M3_EXIT) {
+		write_unlock_irq(&mhi_cntrl->pm_lock);
+		dev_info(mhi_cntrl->dev,
+			 "Error setting to pm_state:%s from pm_state:%s\n",
+			 to_mhi_pm_state_str(MHI_PM_M3_EXIT),
+			 to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+
+	/* set dev to M0 and wait for completion */
+	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		dev_err(mhi_cntrl->dev,
+			"Did not enter M0 state, cur_state:%s pm_state:%s\n",
+			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			to_mhi_pm_state_str(mhi_cntrl->pm_state));
+		return -EIO;
+	}
+
+	return 0;
+}
+
+int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_get(mhi_cntrl, true);
+	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+		mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+		mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+	}
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->dev_state == MHI_STATE_M0 ||
+				 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		read_lock_bh(&mhi_cntrl->pm_lock);
+		mhi_cntrl->wake_put(mhi_cntrl, false);
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+void mhi_device_get(struct mhi_device *mhi_dev)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+
+	atomic_inc(&mhi_dev->dev_wake);
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_get(mhi_cntrl, true);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+}
+EXPORT_SYMBOL(mhi_device_get);
+
+int mhi_device_get_sync(struct mhi_device *mhi_dev)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	int ret;
+
+	ret = __mhi_device_get_sync(mhi_cntrl);
+	if (!ret)
+		atomic_inc(&mhi_dev->dev_wake);
+
+	return ret;
+}
+EXPORT_SYMBOL(mhi_device_get_sync);
+
+void mhi_device_put(struct mhi_device *mhi_dev)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+
+	atomic_dec(&mhi_dev->dev_wake);
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+}
+EXPORT_SYMBOL(mhi_device_put);
+
+int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
+{
+	int ret;
+
+	/* before rddm mode, we need to enter M0 state */
+	ret = __mhi_device_get_sync(mhi_cntrl);
+	if (ret)
+		return ret;
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+	write_lock_irq(&mhi_cntrl->pm_lock);
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+		goto no_reg_access;
+
+	dev_info(mhi_cntrl->dev, "Triggering SYS_ERR to force rddm state\n");
+
+	mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	/* wait for rddm event */
+	ret = wait_event_timeout(mhi_cntrl->state_event,
+				 mhi_cntrl->ee == MHI_EE_RDDM,
+				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
+	/* wait_event_timeout() returns 0 on timeout */
+	ret = ret ? 0 : -EIO;
+
+	return ret;
+
+no_reg_access:
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	write_unlock_irq(&mhi_cntrl->pm_lock);
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	return -EIO;
+}
+EXPORT_SYMBOL(mhi_force_rddm_mode);
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index c80685e9..ed7cea8 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -10,6 +10,8 @@
 struct mhi_event;
 struct mhi_ctxt;
 struct mhi_cmd;
+struct image_info;
+struct bhi_vec_entry;
 struct mhi_buf_info;
 
 /**
@@ -53,10 +55,24 @@ enum mhi_device_type {
 };
 
 /**
+ * struct image_info - firmware and rddm table
+ * @mhi_buf - Contains device firmware and rddm table
+ * @bhi_vec - BHI vector table describing each segment
+ * @entries - # of entries in table
+ */
+struct image_info {
+	struct mhi_buf *mhi_buf;
+	struct bhi_vec_entry *bhi_vec;
+	u32 entries;
+};
+
+/**
  * struct mhi_controller - Master controller structure for external modem
  * @dev: Device associated with this controller
  * @of_node: DT that has MHI configuration information
  * @regs: Points to base of MHI MMIO register space
+ * @bhi: Points to base of MHI BHI register space
+ * @bhie: Points to base of MHI BHIe register space
+ * @wake_db: MHI WAKE doorbell register address
  * @dev_id: PCIe device id of the external device
  * @domain: PCIe domain the device connected to
  * @bus: PCIe bus the device assigned to
@@ -106,6 +122,9 @@ struct mhi_controller {
 
 	/* mmio base */
 	void __iomem *regs;
+	void __iomem *bhi;
+	void __iomem *bhie;
+	void __iomem *wake_db;
 
 	/* device topology */
 	u32 dev_id;
@@ -128,6 +147,8 @@ struct mhi_controller {
 	size_t seg_len;
 	u32 session_id;
 	u32 sequence_id;
+	struct image_info *fbc_image;
+	struct image_info *rddm_image;
 
 	/* physical channel config data */
 	u32 max_chan;
@@ -254,6 +275,24 @@ struct mhi_result {
 };
 
 /**
+ * struct mhi_buf - Describes the buffer
+ * @buf: cpu address for the buffer
+ * @phys_addr: physical address of the buffer
+ * @dma_addr: iommu address for the buffer
+ * @len: # of bytes
+ * @name: Buffer label, for offload channel configurations name must be:
+ * ECA - Event context array data
+ * CCA - Channel context array data
+ */
+struct mhi_buf {
+	void *buf;
+	phys_addr_t phys_addr;
+	dma_addr_t dma_addr;
+	size_t len;
+	const char *name; /* ECA, CCA */
+};
+
+/**
  * struct mhi_driver - mhi driver information
  * @id_table: NULL terminated channel ID names
  * @ul_xfer_cb: UL data transfer callback
@@ -310,6 +349,28 @@ static inline void mhi_free_controller(struct mhi_controller *mhi_cntrl)
 void mhi_driver_unregister(struct mhi_driver *mhi_drv);
 
 /**
+ * mhi_device_get - disable all low power modes
+ * Only disables lpm; does not immediately force an exit if the
+ * controller is already in a low power mode
+ * @mhi_dev: Device associated with the channels
+ */
+void mhi_device_get(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_device_get_sync - disable all low power modes
+ * Synchronously disable all low power modes and exit low power
+ * mode if the controller is already in a low power state
+ * @mhi_dev: Device associated with the channels
+ */
+int mhi_device_get_sync(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_device_put - re-enable low power modes
+ * @mhi_dev: Device associated with the channels
+ */
+void mhi_device_put(struct mhi_device *mhi_dev);
+
+/**
  * mhi_alloc_controller - Allocate mhi_controller structure
  * Allocate controller structure and additional data for controller
  * private data. You may get the private data pointer by calling
@@ -338,4 +399,64 @@ static inline void mhi_free_controller(struct mhi_controller *mhi_cntrl)
 struct mhi_controller *mhi_bdf_to_controller(u32 domain, u32 bus, u32 slot,
 					     u32 dev_id);
 
+/**
+ * mhi_prepare_for_power_up - Do pre-initialization before power up
+ * This is optional; call it before power up if the controller does not
+ * want the bus framework to automatically free allocated memory during the
+ * shutdown process.
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_async_power_up - Starts MHI power up sequence
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_async_power_up(struct mhi_controller *mhi_cntrl);
+int mhi_sync_power_up(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_power_down - Start MHI power down sequence
+ * @mhi_cntrl: MHI controller
+ * @graceful: link is still accessible, so do a graceful shutdown; otherwise
+ * the host shuts down without putting the device into RESET state
+ */
+void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful);
+
+/**
+ * mhi_unprepare_after_power_down - free any memory allocated for power up
+ * @mhi_cntrl: MHI controller
+ */
+void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_pm_suspend - Move MHI into a suspended state
+ * Transition to MHI state M3 state from M0||M1||M2 state
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_pm_suspend(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_pm_resume - Resume MHI from suspended state
+ * Transition to MHI state M0 state from M3 state
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_pm_resume(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_download_rddm_img - Download ramdump image from device for
+ * debugging purposes.
+ * @mhi_cntrl: MHI controller
+ * @in_panic: true if trying to capture the image during a kernel panic
+ */
+int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic);
+
+/**
+ * mhi_force_rddm_mode - Force external device into rddm mode
+ * to collect device ramdump. This is useful if the host driver asserts
+ * and we need to see the device state as well.
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl);
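+
+/*
+ * Illustrative controller power sequence (a sketch, not a mandated flow;
+ * controller setup and error handling are omitted):
+ *
+ *	mhi_prepare_for_power_up(mhi_cntrl);      (optional pre-allocation)
+ *	mhi_sync_power_up(mhi_cntrl);             (boot device into AMSS)
+ *	mhi_pm_suspend(mhi_cntrl);                (enter M3 when link idle)
+ *	mhi_pm_resume(mhi_cntrl);                 (return to M0 on activity)
+ *	mhi_power_down(mhi_cntrl, true);          (graceful shutdown)
+ *	mhi_unprepare_after_power_down(mhi_cntrl);
+ */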
+
 #endif /* _MHI_H_ */
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v2 3/7] mhi_bus: core: add support for data transfer
  2018-07-09 20:08 ` MHI code review Sujeev Dias
  2018-07-09 20:08   ` [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver Sujeev Dias
  2018-07-09 20:08   ` [PATCH v2 2/7] mhi_bus: core: add power management support Sujeev Dias
@ 2018-07-09 20:08   ` Sujeev Dias
  2018-07-10  6:29     ` Greg Kroah-Hartman
  2018-07-09 20:08   ` [PATCH v2 4/7] mhi_bus: core: add support for handling ioctl cmds Sujeev Dias
                     ` (4 subsequent siblings)
  7 siblings, 1 reply; 54+ messages in thread
From: Sujeev Dias @ 2018-07-09 20:08 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: linux-kernel, linux-arm-msm, devicetree, Sujeev Dias,
	Tony Truong, Siddartha Mohanadoss

Add support for transferring data between external
modem and MHI host using the MHI protocol.
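
An illustrative client-side sketch of the new transfer path (not part of
this patch; the probe function, buffer and length names are hypothetical):

	static int example_probe(struct mhi_device *mhi_dev,
				 const struct mhi_device_id *id)
	{
		int ret;

		/* move UL and DL channels to the start state */
		ret = mhi_prepare_for_transfer(mhi_dev);
		if (ret)
			return ret;

		/* queue a receive buffer; completion arrives via dl_xfer_cb;
		 * example_buf and EXAMPLE_BUF_LEN are placeholders
		 */
		return mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE,
					  example_buf, EXAMPLE_BUF_LEN,
					  MHI_EOT);
	}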

Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
Reviewed-by: Tony Truong <truong@codeaurora.org>
Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
---
 drivers/bus/mhi/core/mhi_init.c     |  76 +++-
 drivers/bus/mhi/core/mhi_internal.h |   8 -
 drivers/bus/mhi/core/mhi_main.c     | 803 +++++++++++++++++++++++++++++++++++-
 include/linux/mhi.h                 |  78 ++++
 4 files changed, 950 insertions(+), 15 deletions(-)

diff --git a/drivers/bus/mhi/core/mhi_init.c b/drivers/bus/mhi/core/mhi_init.c
index 7b3d12a..43a700d 100644
--- a/drivers/bus/mhi/core/mhi_init.c
+++ b/drivers/bus/mhi/core/mhi_init.c
@@ -548,6 +548,66 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
 	return 0;
 }
 
+int mhi_device_configure(struct mhi_device *mhi_dev,
+			 enum dma_data_direction dir,
+			 struct mhi_buf *cfg_tbl,
+			 int elements)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan;
+	struct mhi_event_ctxt *er_ctxt;
+	struct mhi_chan_ctxt *ch_ctxt;
+	int er_index, chan;
+
+	switch (dir) {
+	case DMA_TO_DEVICE:
+		mhi_chan = mhi_dev->ul_chan;
+		break;
+	case DMA_BIDIRECTIONAL:
+	case DMA_FROM_DEVICE:
+	case DMA_NONE:
+		mhi_chan = mhi_dev->dl_chan;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	er_index = mhi_chan->er_index;
+	chan = mhi_chan->chan;
+
+	for (; elements > 0; elements--, cfg_tbl++) {
+		/* update event context array */
+		if (!strcmp(cfg_tbl->name, "ECA")) {
+			er_ctxt = &mhi_cntrl->mhi_ctxt->er_ctxt[er_index];
+			if (sizeof(*er_ctxt) != cfg_tbl->len) {
+				dev_err(mhi_cntrl->dev,
+					"Invalid ECA size, expected:%zu actual%zu\n",
+					sizeof(*er_ctxt), cfg_tbl->len);
+				return -EINVAL;
+			}
+			memcpy((void *)er_ctxt, cfg_tbl->buf, sizeof(*er_ctxt));
+			continue;
+		}
+
+		/* update channel context array */
+		if (!strcmp(cfg_tbl->name, "CCA")) {
+			ch_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[chan];
+			if (cfg_tbl->len != sizeof(*ch_ctxt)) {
+				dev_err(mhi_cntrl->dev,
+					"Invalid CCA size, expected:%zu actual:%zu\n",
+					sizeof(*ch_ctxt), cfg_tbl->len);
+				return -EINVAL;
+			}
+			memcpy((void *)ch_ctxt, cfg_tbl->buf, sizeof(*ch_ctxt));
+			continue;
+		}
+
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
 static int of_parse_ev_cfg(struct mhi_controller *mhi_cntrl,
 			   struct device_node *of_node)
 {
@@ -1067,6 +1127,9 @@ static int mhi_driver_probe(struct device *dev)
 	struct mhi_event *mhi_event;
 	struct mhi_chan *ul_chan = mhi_dev->ul_chan;
 	struct mhi_chan *dl_chan = mhi_dev->dl_chan;
+	bool auto_start = false;
+	int ret;
 
 	if (ul_chan) {
 		/* lpm notification require status_cb */
@@ -1078,6 +1141,7 @@ static int mhi_driver_probe(struct device *dev)
 
 		ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
 		mhi_dev->status_cb = mhi_drv->status_cb;
+		auto_start = ul_chan->auto_start;
 	}
 
 	if (dl_chan) {
@@ -1101,9 +1165,15 @@ static int mhi_driver_probe(struct device *dev)
 
 		/* ul & dl uses same status cb */
 		mhi_dev->status_cb = mhi_drv->status_cb;
+		auto_start = (auto_start || dl_chan->auto_start);
 	}
 
-	return mhi_drv->probe(mhi_dev, mhi_dev->id);
+	ret = mhi_drv->probe(mhi_dev, mhi_dev->id);
+
+	if (!ret && auto_start)
+		mhi_prepare_for_transfer(mhi_dev);
+
+	return ret;
 }
 
 static int mhi_driver_remove(struct device *dev)
@@ -1141,6 +1211,10 @@ static int mhi_driver_remove(struct device *dev)
 		ch_state[dir] = mhi_chan->ch_state;
 		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
 		write_unlock_irq(&mhi_chan->lock);
+
+		/* reset the channel */
+		if (!mhi_chan->offload_ch)
+			mhi_reset_chan(mhi_cntrl, mhi_chan);
 	}
 
 	/* destroy the device */
diff --git a/drivers/bus/mhi/core/mhi_internal.h b/drivers/bus/mhi/core/mhi_internal.h
index 0091245..1167d75 100644
--- a/drivers/bus/mhi/core/mhi_internal.h
+++ b/drivers/bus/mhi/core/mhi_internal.h
@@ -243,7 +243,6 @@ enum mhi_cmd_type {
 	MHI_CMD_TYPE_RESET = 16,
 	MHI_CMD_TYPE_STOP = 17,
 	MHI_CMD_TYPE_START = 18,
-	MHI_CMD_TYPE_TSYNC = 24,
 };
 
 /* no operation command */
@@ -268,12 +267,6 @@ enum mhi_cmd_type {
 #define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
 					(MHI_CMD_TYPE_START << 16))
 
-/* time sync cfg command */
-#define MHI_TRE_CMD_TSYNC_CFG_PTR (0)
-#define MHI_TRE_CMD_TSYNC_CFG_DWORD0 (0)
-#define MHI_TRE_CMD_TSYNC_CFG_DWORD1(er) ((MHI_CMD_TYPE_TSYNC << 16) | \
-					  (er << 24))
-
 #define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
 #define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
 
@@ -300,7 +293,6 @@ enum mhi_cmd_type {
 enum MHI_CMD {
 	MHI_CMD_RESET_CHAN,
 	MHI_CMD_START_CHAN,
-	MHI_CMD_TIMSYNC_CFG,
 };
 
 enum MHI_PKT_TYPE {
diff --git a/drivers/bus/mhi/core/mhi_main.c b/drivers/bus/mhi/core/mhi_main.c
index 81cda31..3e7077a8 100644
--- a/drivers/bus/mhi/core/mhi_main.c
+++ b/drivers/bus/mhi/core/mhi_main.c
@@ -159,23 +159,46 @@ enum MHI_STATE mhi_get_m_state(struct mhi_controller *mhi_cntrl)
 int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
 			 struct mhi_buf_info *buf_info)
 {
-	return -ENOMEM;
+	buf_info->p_addr = dma_map_single(mhi_cntrl->dev, buf_info->v_addr,
+					  buf_info->len, buf_info->dir);
+	if (dma_mapping_error(mhi_cntrl->dev, buf_info->p_addr))
+		return -ENOMEM;
+
+	return 0;
 }
 
 int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
 			  struct mhi_buf_info *buf_info)
 {
-	return -ENOMEM;
+	void *buf = mhi_alloc_coherent(mhi_cntrl, buf_info->len,
+				       &buf_info->p_addr, GFP_ATOMIC);
+
+	if (!buf)
+		return -ENOMEM;
+
+	if (buf_info->dir == DMA_TO_DEVICE)
+		memcpy(buf, buf_info->v_addr, buf_info->len);
+
+	buf_info->bb_addr = buf;
+
+	return 0;
 }
 
 void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
 			    struct mhi_buf_info *buf_info)
 {
+	dma_unmap_single(mhi_cntrl->dev, buf_info->p_addr, buf_info->len,
+			 buf_info->dir);
 }
 
 void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
 			    struct mhi_buf_info *buf_info)
 {
+	if (buf_info->dir == DMA_FROM_DEVICE)
+		memcpy(buf_info->v_addr, buf_info->bb_addr, buf_info->len);
+
+	mhi_free_coherent(mhi_cntrl, buf_info->len, buf_info->bb_addr,
+			  buf_info->p_addr);
 }
 
 int mhi_queue_sclist(struct mhi_device *mhi_dev,
@@ -196,6 +219,41 @@ int mhi_queue_nop(struct mhi_device *mhi_dev,
 	return -EINVAL;
 }
 
+static void mhi_add_ring_element(struct mhi_controller *mhi_cntrl,
+				 struct mhi_ring *ring)
+{
+	ring->wp += ring->el_size;
+	if (ring->wp >= (ring->base + ring->len))
+		ring->wp = ring->base;
+	/* ensure the updated wp is visible to other cores */
+	smp_wmb();
+}
+
+static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
+				 struct mhi_ring *ring)
+{
+	ring->rp += ring->el_size;
+	if (ring->rp >= (ring->base + ring->len))
+		ring->rp = ring->base;
+	/* ensure the updated rp is visible to other cores */
+	smp_wmb();
+}
+
+static int get_nr_avail_ring_elements(struct mhi_controller *mhi_cntrl,
+				      struct mhi_ring *ring)
+{
+	int nr_el;
+
+	if (ring->wp < ring->rp)
+		nr_el = ((ring->rp - ring->wp) / ring->el_size) - 1;
+	else {
+		nr_el = (ring->rp - ring->base) / ring->el_size;
+		nr_el += ((ring->base + ring->len - ring->wp) /
+			  ring->el_size) - 1;
+	}
+	return nr_el;
+}
+
 static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr)
 {
 	return (addr - ring->iommu_base) + ring->base;
@@ -231,13 +289,97 @@ static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
 	smp_wmb();
 }
 
+static bool mhi_is_ring_full(struct mhi_controller *mhi_cntrl,
+			     struct mhi_ring *ring)
+{
+	void *tmp = ring->wp + ring->el_size;
+
+	if (tmp >= (ring->base + ring->len))
+		tmp = ring->base;
+
+	return (tmp == ring->rp);
+}
+
 int mhi_queue_skb(struct mhi_device *mhi_dev,
 		  struct mhi_chan *mhi_chan,
 		  void *buf,
 		  size_t len,
 		  enum MHI_FLAGS mflags)
 {
-	return -EINVAL;
+	struct sk_buff *skb = buf;
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+	struct mhi_ring *buf_ring = &mhi_chan->buf_ring;
+	struct mhi_buf_info *buf_info;
+	struct mhi_tre *mhi_tre;
+	bool assert_wake = false;
+	int ret;
+
+	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+		return -ENOMEM;
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		return -EIO;
+	}
+
+	/* we're in M3 or transitioning to M3 */
+	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+		mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+		mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+	}
+
+	/*
+	 * For UL channels always assert WAKE until work is done,
+	 * For DL channels only assert if MHI is in a low power mode
+	 */
+	if (mhi_chan->dir == DMA_TO_DEVICE ||
+	    (mhi_chan->dir == DMA_FROM_DEVICE &&
+	     mhi_cntrl->pm_state != MHI_PM_M0)) {
+		assert_wake = true;
+		mhi_cntrl->wake_get(mhi_cntrl, false);
+	}
+
+	/* generate the tre */
+	buf_info = buf_ring->wp;
+	buf_info->v_addr = skb->data;
+	buf_info->cb_buf = skb;
+	buf_info->wp = tre_ring->wp;
+	buf_info->dir = mhi_chan->dir;
+	buf_info->len = len;
+	ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
+	if (ret)
+		goto map_error;
+
+	mhi_tre = tre_ring->wp;
+	mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
+	mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_info->len);
+	mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(1, 1, 0, 0);
+
+	/* increment WP */
+	mhi_add_ring_element(mhi_cntrl, tre_ring);
+	mhi_add_ring_element(mhi_cntrl, buf_ring);
+
+	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state))) {
+		read_lock_bh(&mhi_chan->lock);
+		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+		read_unlock_bh(&mhi_chan->lock);
+	}
+
+	if (mhi_chan->dir == DMA_FROM_DEVICE && assert_wake)
+		mhi_cntrl->wake_put(mhi_cntrl, true);
+
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return 0;
+
+map_error:
+	if (assert_wake)
+		mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return ret;
 }
 
 int mhi_gen_tre(struct mhi_controller *mhi_cntrl,
@@ -247,7 +389,41 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl,
 		size_t buf_len,
 		enum MHI_FLAGS flags)
 {
-	return -EINVAL;
+	struct mhi_ring *buf_ring, *tre_ring;
+	struct mhi_tre *mhi_tre;
+	struct mhi_buf_info *buf_info;
+	int eot, eob, chain, bei;
+	int ret;
+
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+
+	buf_info = buf_ring->wp;
+	buf_info->v_addr = buf;
+	buf_info->cb_buf = cb;
+	buf_info->wp = tre_ring->wp;
+	buf_info->dir = mhi_chan->dir;
+	buf_info->len = buf_len;
+
+	ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
+	if (ret)
+		return ret;
+
+	eob = !!(flags & MHI_EOB);
+	eot = !!(flags & MHI_EOT);
+	chain = !!(flags & MHI_CHAIN);
+	bei = !!(mhi_chan->intmod);
+
+	mhi_tre = tre_ring->wp;
+	mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
+	mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_len);
+	mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(bei, eot, eob, chain);
+
+	/* increment WP */
+	mhi_add_ring_element(mhi_cntrl, tre_ring);
+	mhi_add_ring_element(mhi_cntrl, buf_ring);
+
+	return 0;
 }
 
 int mhi_queue_buf(struct mhi_device *mhi_dev,
@@ -256,7 +432,61 @@ int mhi_queue_buf(struct mhi_device *mhi_dev,
 		  size_t len,
 		  enum MHI_FLAGS mflags)
 {
-	return -EINVAL;
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_ring *tre_ring;
+	unsigned long flags;
+	bool assert_wake = false;
+	int ret;
+
+	/*
+	 * this check here only as a guard, it's always
+	 * possible mhi can enter error while executing rest of function,
+	 * which is not fatal so we do not need to hold pm_lock
+	 */
+	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
+		return -EIO;
+
+	/* we're in M3 or transitioning to M3 */
+	if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+		mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+		mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+	}
+
+	tre_ring = &mhi_chan->tre_ring;
+	if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+		return -ENOMEM;
+
+	ret = mhi_chan->gen_tre(mhi_cntrl, mhi_chan, buf, buf, len, mflags);
+	if (unlikely(ret))
+		return ret;
+
+	read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+
+	/*
+	 * For UL channels always assert WAKE until work is done,
+	 * For DL channels only assert if MHI is in a low power mode
+	 */
+	if (mhi_chan->dir == DMA_TO_DEVICE ||
+	    (mhi_chan->dir == DMA_FROM_DEVICE &&
+	     mhi_cntrl->pm_state != MHI_PM_M0)) {
+		assert_wake = true;
+		mhi_cntrl->wake_get(mhi_cntrl, false);
+	}
+
+	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state))) {
+		unsigned long flags;
+
+		read_lock_irqsave(&mhi_chan->lock, flags);
+		mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+		read_unlock_irqrestore(&mhi_chan->lock, flags);
+	}
+
+	if (mhi_chan->dir == DMA_FROM_DEVICE && assert_wake)
+		mhi_cntrl->wake_put(mhi_cntrl, true);
+
+	read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+
+	return 0;
 }
 
 /* destroy specific device */
@@ -389,6 +619,154 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl)
 			mhi_dealloc_device(mhi_cntrl, mhi_dev);
 	}
 }
+
+static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+			    struct mhi_tre *event,
+			    struct mhi_chan *mhi_chan)
+{
+	struct mhi_ring *buf_ring, *tre_ring;
+	u32 ev_code;
+	struct mhi_result result;
+	unsigned long flags = 0;
+
+	ev_code = MHI_TRE_GET_EV_CODE(event);
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+
+	result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
+		-EOVERFLOW : 0;
+
+	/*
+	 * if it's a DB event, grab the lock as a writer with
+	 * preemption disabled because we have to update the db
+	 * register and another thread could be doing the same.
+	 */
+	if (ev_code >= MHI_EV_CC_OOB)
+		write_lock_irqsave(&mhi_chan->lock, flags);
+	else
+		read_lock_bh(&mhi_chan->lock);
+
+	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
+		goto end_process_tx_event;
+
+	switch (ev_code) {
+	case MHI_EV_CC_OVERFLOW:
+	case MHI_EV_CC_EOB:
+	case MHI_EV_CC_EOT:
+	{
+		dma_addr_t ptr = MHI_TRE_GET_EV_PTR(event);
+		struct mhi_tre *local_rp, *ev_tre;
+		void *dev_rp;
+		struct mhi_buf_info *buf_info;
+		u16 xfer_len;
+
+		/* Get the TRB this event points to */
+		ev_tre = mhi_to_virtual(tre_ring, ptr);
+
+		/* device rp after servicing the TREs */
+		dev_rp = ev_tre + 1;
+		if (dev_rp >= (tre_ring->base + tre_ring->len))
+			dev_rp = tre_ring->base;
+
+		result.dir = mhi_chan->dir;
+
+		/* local rp */
+		local_rp = tre_ring->rp;
+		while (local_rp != dev_rp) {
+			buf_info = buf_ring->rp;
+			/* if it's last tre get len from the event */
+			if (local_rp == ev_tre)
+				xfer_len = MHI_TRE_GET_EV_LEN(event);
+			else
+				xfer_len = buf_info->len;
+
+			mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
+
+			result.buf_addr = buf_info->cb_buf;
+			result.bytes_xferd = xfer_len;
+			mhi_del_ring_element(mhi_cntrl, buf_ring);
+			mhi_del_ring_element(mhi_cntrl, tre_ring);
+			local_rp = tre_ring->rp;
+
+			/* notify client */
+			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+			if (mhi_chan->dir == DMA_TO_DEVICE) {
+				read_lock_bh(&mhi_cntrl->pm_lock);
+				mhi_cntrl->wake_put(mhi_cntrl, false);
+				read_unlock_bh(&mhi_cntrl->pm_lock);
+			}
+
+			/*
+			 * recycle the buffer if it is pre-allocated;
+			 * on error there is not much we can do apart
+			 * from dropping the packet
+			 */
+			if (mhi_chan->pre_alloc) {
+				if (mhi_queue_buf(mhi_chan->mhi_dev, mhi_chan,
+						  buf_info->cb_buf,
+						  buf_info->len, MHI_EOT)) {
+					dev_err(mhi_cntrl->dev,
+						"Error recycling buffer for chan:%d\n",
+						mhi_chan->chan);
+					kfree(buf_info->cb_buf);
+				}
+			}
+		}
+		break;
+	} /* CC_EOT */
+	case MHI_EV_CC_OOB:
+	case MHI_EV_CC_DB_MODE:
+	{
+		unsigned long flags;
+
+		mhi_chan->db_cfg.db_mode = 1;
+		read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+		if (tre_ring->wp != tre_ring->rp &&
+		    MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)) {
+			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+		}
+		read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+		break;
+	}
+	case MHI_EV_CC_BAD_TRE:
+	default:
+		dev_err(mhi_cntrl->dev, "Unknown event 0x%x\n", ev_code);
+		break;
+	} /* switch(MHI_EV_READ_CODE(EV_TRB_CODE,event)) */
+
+end_process_tx_event:
+	if (ev_code >= MHI_EV_CC_OOB)
+		write_unlock_irqrestore(&mhi_chan->lock, flags);
+	else
+		read_unlock_bh(&mhi_chan->lock);
+
+	return 0;
+}
+
+static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
+				       struct mhi_tre *tre)
+{
+	dma_addr_t ptr = MHI_TRE_GET_EV_PTR(tre);
+	struct mhi_cmd *cmd_ring = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+	struct mhi_ring *mhi_ring = &cmd_ring->ring;
+	struct mhi_tre *cmd_pkt;
+	struct mhi_chan *mhi_chan;
+	u32 chan;
+
+	cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
+
+	chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
+	mhi_chan = &mhi_cntrl->mhi_chan[chan];
+	write_lock_bh(&mhi_chan->lock);
+	mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
+	complete(&mhi_chan->completion);
+	write_unlock_bh(&mhi_chan->lock);
+
+	mhi_del_ring_element(mhi_cntrl, mhi_ring);
+}
+
 int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 			     struct mhi_event *mhi_event,
 			     u32 event_quota)
@@ -457,6 +835,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 			break;
 		}
 		case MHI_PKT_TYPE_CMD_COMPLETION_EVENT:
+			mhi_process_cmd_completion(mhi_cntrl, local_rp);
 			break;
 		case MHI_PKT_TYPE_EE_EVENT:
 		{
@@ -513,11 +892,52 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
 				struct mhi_event *mhi_event,
 				u32 event_quota)
 {
-	return -EIO;
+	struct mhi_tre *dev_rp, *local_rp;
+	struct mhi_ring *ev_ring = &mhi_event->ring;
+	struct mhi_event_ctxt *er_ctxt =
+		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+	int count = 0;
+	u32 chan;
+	struct mhi_chan *mhi_chan;
+
+	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+		return -EIO;
+
+	dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+	local_rp = ev_ring->rp;
+
+	while (dev_rp != local_rp && event_quota > 0) {
+		enum MHI_PKT_TYPE type = MHI_TRE_GET_EV_TYPE(local_rp);
+
+		if (likely(type == MHI_PKT_TYPE_TX_EVENT)) {
+			chan = MHI_TRE_GET_EV_CHID(local_rp);
+			mhi_chan = &mhi_cntrl->mhi_chan[chan];
+			parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
+			event_quota--;
+		}
+
+		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+		local_rp = ev_ring->rp;
+		dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+		count++;
+	}
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)))
+		mhi_ring_er_db(mhi_event);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return count;
 }
 
 void mhi_ev_task(unsigned long data)
 {
+	struct mhi_event *mhi_event = (struct mhi_event *)data;
+	struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+
+	/* process all pending events */
+	spin_lock_bh(&mhi_event->lock);
+	mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
+	spin_unlock_bh(&mhi_event->lock);
 }
 
 void mhi_ctrl_ev_task(unsigned long data)
@@ -609,6 +1029,361 @@ irqreturn_t mhi_intvec_handlr(int irq_number, void *dev)
 	return IRQ_WAKE_THREAD;
 }
 
+int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
+		 struct mhi_chan *mhi_chan,
+		 enum MHI_CMD cmd)
+{
+	struct mhi_tre *cmd_tre = NULL;
+	struct mhi_cmd *mhi_cmd = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+	struct mhi_ring *ring = &mhi_cmd->ring;
+	int chan = 0;
+
+	if (mhi_chan)
+		chan = mhi_chan->chan;
+
+	spin_lock_bh(&mhi_cmd->lock);
+	if (!get_nr_avail_ring_elements(mhi_cntrl, ring)) {
+		spin_unlock_bh(&mhi_cmd->lock);
+		return -ENOMEM;
+	}
+
+	/* prepare the cmd tre */
+	cmd_tre = ring->wp;
+	switch (cmd) {
+	case MHI_CMD_RESET_CHAN:
+		cmd_tre->ptr = MHI_TRE_CMD_RESET_PTR;
+		cmd_tre->dword[0] = MHI_TRE_CMD_RESET_DWORD0;
+		cmd_tre->dword[1] = MHI_TRE_CMD_RESET_DWORD1(chan);
+		break;
+	case MHI_CMD_START_CHAN:
+		cmd_tre->ptr = MHI_TRE_CMD_START_PTR;
+		cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0;
+		cmd_tre->dword[1] = MHI_TRE_CMD_START_DWORD1(chan);
+		break;
+	}
+
+	/* queue to hardware */
+	mhi_add_ring_element(mhi_cntrl, ring);
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)))
+		mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+	spin_unlock_bh(&mhi_cmd->lock);
+
+	return 0;
+}
+
+static void __mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
+				    struct mhi_chan *mhi_chan)
+{
+	int ret;
+
+	dev_info(mhi_cntrl->dev, "Entered: unprepare channel:%d\n",
+		 mhi_chan->chan);
+
+	/* no more processing events for this channel */
+	mutex_lock(&mhi_chan->mutex);
+	write_lock_irq(&mhi_chan->lock);
+	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
+		write_unlock_irq(&mhi_chan->lock);
+		mutex_unlock(&mhi_chan->mutex);
+		return;
+	}
+
+	mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+	write_unlock_irq(&mhi_chan->lock);
+
+	reinit_completion(&mhi_chan->completion);
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		goto error_invalid_state;
+	}
+
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+	mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+	ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_RESET_CHAN);
+	if (ret)
+		goto error_completion;
+
+	/* even if it fails we will still reset */
+	ret = wait_for_completion_timeout(&mhi_chan->completion,
+				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+	if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS)
+		dev_err(mhi_cntrl->dev,
+			"Failed to receive cmd completion, still resetting\n");
+
+error_completion:
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+error_invalid_state:
+	if (!mhi_chan->offload_ch) {
+		mhi_reset_chan(mhi_cntrl, mhi_chan);
+		mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+	}
+	dev_info(mhi_cntrl->dev, "chan:%d successfully resetted\n",
+		 mhi_chan->chan);
+	mutex_unlock(&mhi_chan->mutex);
+}
+
+static int __mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
+				 struct mhi_chan *mhi_chan)
+{
+	int ret = 0;
+
+	dev_info(mhi_cntrl->dev, "Entered: preparing channel:%d\n",
+		 mhi_chan->chan);
+
+	if (mhi_cntrl->ee != mhi_chan->ee)
+		return -ENOTCONN;
+
+	mutex_lock(&mhi_chan->mutex);
+	/* client manages channel context for offload channels */
+	if (!mhi_chan->offload_ch) {
+		ret = mhi_init_chan_ctxt(mhi_cntrl, mhi_chan);
+		if (ret)
+			goto error_init_chan;
+	}
+
+	reinit_completion(&mhi_chan->completion);
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		ret = -EIO;
+		goto error_pm_state;
+	}
+
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+	mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+
+	ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_START_CHAN);
+	if (ret)
+		goto error_send_cmd;
+
+	ret = wait_for_completion_timeout(&mhi_chan->completion,
+				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+	if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
+		ret = -EIO;
+		goto error_send_cmd;
+	}
+
+	write_lock_irq(&mhi_chan->lock);
+	mhi_chan->ch_state = MHI_CH_STATE_ENABLED;
+	write_unlock_irq(&mhi_chan->lock);
+
+	/* pre allocate buffer for xfer ring */
+	if (mhi_chan->pre_alloc) {
+		int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
+						       &mhi_chan->tre_ring);
+		size_t len = mhi_cntrl->buffer_len;
+
+		while (nr_el--) {
+			void *buf;
+
+			buf = kmalloc(len, GFP_KERNEL);
+			if (!buf) {
+				ret = -ENOMEM;
+				goto error_pre_alloc;
+			}
+
+			/* prepare transfer descriptors */
+			ret = mhi_chan->gen_tre(mhi_cntrl, mhi_chan, buf, buf,
+						len, MHI_EOT);
+			if (ret) {
+				kfree(buf);
+				goto error_pre_alloc;
+			}
+		}
+
+		read_lock_bh(&mhi_cntrl->pm_lock);
+		if (MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)) {
+			read_lock_irq(&mhi_chan->lock);
+			mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+			read_unlock_irq(&mhi_chan->lock);
+		}
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+	}
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	mutex_unlock(&mhi_chan->mutex);
+
+	dev_info(mhi_cntrl->dev, "Chan:%d successfully moved to start state\n",
+		 mhi_chan->chan);
+
+	return 0;
+
+error_send_cmd:
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+error_pm_state:
+	if (!mhi_chan->offload_ch)
+		mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+
+error_init_chan:
+	mutex_unlock(&mhi_chan->mutex);
+
+	return ret;
+
+error_pre_alloc:
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	mutex_unlock(&mhi_chan->mutex);
+	__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+
+	return ret;
+}
+
+void mhi_reset_chan(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan)
+{
+	struct mhi_tre *dev_rp, *local_rp;
+	struct mhi_event_ctxt *er_ctxt;
+	struct mhi_event *mhi_event;
+	struct mhi_ring *ev_ring, *buf_ring, *tre_ring;
+	unsigned long flags;
+	int chan = mhi_chan->chan;
+	struct mhi_result result;
+
+	/* nothing to reset, client doesn't queue buffers */
+	if (mhi_chan->offload_ch)
+		return;
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+	ev_ring = &mhi_event->ring;
+	er_ctxt = &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_chan->er_index];
+
+	/* mark all pending TX events for this channel as stale */
+	spin_lock_irqsave(&mhi_event->lock, flags);
+	dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+	if (!mhi_event->mhi_chan) {
+		local_rp = ev_ring->rp;
+		while (dev_rp != local_rp) {
+			if (MHI_TRE_GET_EV_TYPE(local_rp) ==
+			    MHI_PKT_TYPE_TX_EVENT &&
+			    chan == MHI_TRE_GET_EV_CHID(local_rp))
+				local_rp->dword[1] = MHI_TRE_EV_DWORD1(chan,
+						MHI_PKT_TYPE_STALE_EVENT);
+			local_rp++;
+			if (local_rp == (ev_ring->base + ev_ring->len))
+				local_rp = ev_ring->base;
+		}
+	} else {
+		/* dedicated event ring so move the ptr to end */
+		ev_ring->rp = dev_rp;
+		ev_ring->wp = ev_ring->rp - ev_ring->el_size;
+		if (ev_ring->wp < ev_ring->base)
+			ev_ring->wp = ev_ring->base + ev_ring->len -
+				ev_ring->el_size;
+		if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)))
+			mhi_ring_er_db(mhi_event);
+	}
+
+	spin_unlock_irqrestore(&mhi_event->lock, flags);
+
+	/* reset any pending buffers */
+	buf_ring = &mhi_chan->buf_ring;
+	tre_ring = &mhi_chan->tre_ring;
+	result.transaction_status = -ENOTCONN;
+	result.bytes_xferd = 0;
+	while (tre_ring->rp != tre_ring->wp) {
+		struct mhi_buf_info *buf_info = buf_ring->rp;
+
+		if (mhi_chan->dir == DMA_TO_DEVICE)
+			mhi_cntrl->wake_put(mhi_cntrl, false);
+
+		mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
+		mhi_del_ring_element(mhi_cntrl, buf_ring);
+		mhi_del_ring_element(mhi_cntrl, tre_ring);
+
+		if (mhi_chan->pre_alloc) {
+			kfree(buf_info->cb_buf);
+		} else {
+			result.buf_addr = buf_info->cb_buf;
+			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+		}
+	}
+
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+}
+
+/* move channel to start state */
+int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
+{
+	int ret, dir;
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan;
+
+	for (dir = 0; dir < 2; dir++) {
+		mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		ret = __mhi_prepare_channel(mhi_cntrl, mhi_chan);
+		if (ret)
+			goto error_open_chan;
+	}
+
+	return 0;
+
+error_open_chan:
+	for (--dir; dir >= 0; dir--) {
+		mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+	}
+
+	return ret;
+}
+EXPORT_SYMBOL(mhi_prepare_for_transfer);
+
+void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan;
+	int dir;
+
+	for (dir = 0; dir < 2; dir++) {
+		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		__mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+	}
+}
+EXPORT_SYMBOL(mhi_unprepare_from_transfer);
+
+int mhi_get_no_free_descriptors(struct mhi_device *mhi_dev,
+				enum dma_data_direction dir)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan = (dir == DMA_TO_DEVICE) ?
+		mhi_dev->ul_chan : mhi_dev->dl_chan;
+	struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+
+	return get_nr_avail_ring_elements(mhi_cntrl, tre_ring);
+}
+EXPORT_SYMBOL(mhi_get_no_free_descriptors);
+
 static int __mhi_bdf_to_controller(struct device *dev, void *tmp)
 {
 	struct mhi_device *mhi_dev = to_mhi_device(dev);
@@ -646,3 +1421,19 @@ struct mhi_controller *mhi_bdf_to_controller(u32 domain,
 	return mhi_dev->mhi_cntrl;
 }
 EXPORT_SYMBOL(mhi_bdf_to_controller);
+
+int mhi_poll(struct mhi_device *mhi_dev,
+	     u32 budget)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *mhi_chan = mhi_dev->dl_chan;
+	struct mhi_event *mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+	int ret;
+
+	spin_lock_bh(&mhi_event->lock);
+	ret = mhi_event->process_event(mhi_cntrl, mhi_event, budget);
+	spin_unlock_bh(&mhi_event->lock);
+
+	return ret;
+}
+EXPORT_SYMBOL(mhi_poll);
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index ed7cea8..308d12b 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -326,6 +326,29 @@ static inline void *mhi_device_get_devdata(struct mhi_device *mhi_dev)
 	return mhi_dev->priv_data;
 }
 
+/**
+ * mhi_queue_transfer - Queue a buffer to hardware
+ * All transfers are asynchronous
+ * @mhi_dev: Device associated with the channels
+ * @dir: Data direction
+ * @buf: Data buffer (skb for hardware channels)
+ * @len: Size in bytes
+ * @mflags: Interrupt flags for the device
+ */
+static inline int mhi_queue_transfer(struct mhi_device *mhi_dev,
+				     enum dma_data_direction dir,
+				     void *buf,
+				     size_t len,
+				     enum MHI_FLAGS mflags)
+{
+	if (dir == DMA_TO_DEVICE)
+		return mhi_dev->ul_xfer(mhi_dev, mhi_dev->ul_chan, buf, len,
+					mflags);
+	else
+		return mhi_dev->dl_xfer(mhi_dev, mhi_dev->dl_chan, buf, len,
+					mflags);
+}
+
 static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
 {
 	return mhi_cntrl->priv_data;
@@ -349,6 +372,21 @@ static inline void mhi_free_controller(struct mhi_controller *mhi_cntrl)
 void mhi_driver_unregister(struct mhi_driver *mhi_drv);
 
 /**
+ * mhi_device_configure - configure ECA or CCA context
+ * For offload channels that the client manages, call this
+ * function to configure channel context or event context
+ * array associated with the channel
+ * @mhi_dev: Device associated with the channels
+ * @dir: Direction of the channel
+ * @mhi_buf: Configuration data
+ * @elements: # of configuration elements
+ */
+int mhi_device_configure(struct mhi_device *mhi_dev,
+			 enum dma_data_direction dir,
+			 struct mhi_buf *mhi_buf,
+			 int elements);
+
+/**
  * mhi_device_get - disable all low power modes
  * Only disables lpm, does not immediately exit low power mode
  * if controller already in a low power mode
@@ -371,6 +409,46 @@ static inline void mhi_free_controller(struct mhi_controller *mhi_cntrl)
 void mhi_device_put(struct mhi_device *mhi_dev);
 
 /**
+ * mhi_prepare_for_transfer - setup channel for data transfer
+ * Moves both UL and DL channel from RESET to START state
+ * @mhi_dev: Device associated with the channels
+ */
+int mhi_prepare_for_transfer(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_unprepare_from_transfer -unprepare the channels
+ * Moves both UL and DL channels to RESET state
+ * @mhi_dev: Device associated with the channels
+ */
+void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_get_no_free_descriptors - Get transfer ring length
+ * Get # of TDs available to queue buffers
+ * @mhi_dev: Device associated with the channels
+ * @dir: Direction of the channel
+ */
+int mhi_get_no_free_descriptors(struct mhi_device *mhi_dev,
+				enum dma_data_direction dir);
+
+/**
+ * mhi_poll - poll for any available data to consume
+ * This is only applicable for DL direction
+ * @mhi_dev: Device associated with the channels
+ * @budget: Max number of descriptors to service before returning
+ */
+int mhi_poll(struct mhi_device *mhi_dev, u32 budget);
+
+/**
+ * mhi_ioctl - user space IOCTL support for MHI channels
+ * Native support for setting TIOCM
+ * @mhi_dev: Device associated with the channels
+ * @cmd: IOCTL cmd
+ * @arg: Optional parameter, ioctl cmd specific
+ */
+long mhi_ioctl(struct mhi_device *mhi_dev, unsigned int cmd, unsigned long arg);
+
+/**
  * mhi_alloc_controller - Allocate mhi_controller structure
  * Allocate controller structure and additional data for controller
  * private data. You may get the private data pointer by calling
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v2 4/7] mhi_bus: core: add support for handling ioctl cmds
  2018-07-09 20:08 ` MHI code review Sujeev Dias
                     ` (2 preceding siblings ...)
  2018-07-09 20:08   ` [PATCH v2 3/7] mhi_bus: core: add support for data transfer Sujeev Dias
@ 2018-07-09 20:08   ` Sujeev Dias
  2018-07-09 20:08   ` [PATCH v2 5/7] mhi_bus: core: add support to get external modem time Sujeev Dias
                     ` (3 subsequent siblings)
  7 siblings, 0 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-07-09 20:08 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: linux-kernel, linux-arm-msm, devicetree, Sujeev Dias,
	Tony Truong, Siddartha Mohanadoss

User space clients use the RS232 control signaling mechanism to
communicate call status between host and modem. Add support to
handle ioctl commands from user space.
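
An illustrative user space sketch of the resulting semantics (the device
node name is hypothetical; the uci driver exposing it is a separate patch):

	#include <fcntl.h>
	#include <sys/ioctl.h>
	#include <termios.h>
	#include <unistd.h>

	static int assert_dtr(void)
	{
		int fd = open("/dev/mhi_pipe_14", O_RDWR);
		int bits = TIOCM_DTR | TIOCM_RTS;

		if (fd < 0)
			return -1;
		ioctl(fd, TIOCMSET, &bits);	/* assert DTR/RTS to modem */
		ioctl(fd, TIOCMGET, &bits);	/* read DCD/DSR/RI back */
		close(fd);
		return 0;
	}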

Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
Reviewed-by: Tony Truong <truong@codeaurora.org>
Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
---
 drivers/bus/mhi/core/Makefile   |   2 +-
 drivers/bus/mhi/core/mhi_dtr.c  | 218 ++++++++++++++++++++++++++++++++++++++++
 drivers/bus/mhi/core/mhi_init.c |   8 +-
 3 files changed, 226 insertions(+), 2 deletions(-)
 create mode 100644 drivers/bus/mhi/core/mhi_dtr.c

diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
index a6015ab..a743fbf 100644
--- a/drivers/bus/mhi/core/Makefile
+++ b/drivers/bus/mhi/core/Makefile
@@ -1 +1 @@
-obj-$(CONFIG_MHI_BUS) +=mhi_init.o mhi_main.o mhi_pm.o mhi_boot.o
+obj-$(CONFIG_MHI_BUS) +=mhi_init.o mhi_main.o mhi_pm.o mhi_boot.o mhi_dtr.o
diff --git a/drivers/bus/mhi/core/mhi_dtr.c b/drivers/bus/mhi/core/mhi_dtr.c
new file mode 100644
index 0000000..5c2409b
--- /dev/null
+++ b/drivers/bus/mhi/core/mhi_dtr.c
@@ -0,0 +1,218 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/termios.h>
+#include <linux/wait.h>
+#include <linux/mhi.h>
+#include "mhi_internal.h"
+
+struct __packed dtr_ctrl_msg {
+	u32 preamble;
+	u32 msg_id;
+	u32 dest_id;
+	u32 size;
+	u32 msg;
+};
+
+#define CTRL_MAGIC (0x4C525443)
+#define CTRL_MSG_DTR BIT(0)
+#define CTRL_MSG_RTS BIT(1)
+#define CTRL_MSG_DCD BIT(0)
+#define CTRL_MSG_DSR BIT(1)
+#define CTRL_MSG_RI BIT(3)
+#define CTRL_HOST_STATE (0x10)
+#define CTRL_DEVICE_STATE (0x11)
+#define CTRL_GET_CHID(dtr) (dtr->dest_id & 0xFF)
+
+static int mhi_dtr_tiocmset(struct mhi_controller *mhi_cntrl,
+			    struct mhi_device *mhi_dev,
+			    u32 tiocm)
+{
+	struct dtr_ctrl_msg *dtr_msg = NULL;
+	struct mhi_chan *dtr_chan = mhi_cntrl->dtr_dev->ul_chan;
+	spinlock_t *res_lock = &mhi_dev->dev.devres_lock;
+	u32 cur_tiocm;
+	int ret = 0;
+
+	cur_tiocm = mhi_dev->tiocm & ~(TIOCM_CD | TIOCM_DSR | TIOCM_RI);
+
+	tiocm &= (TIOCM_DTR | TIOCM_RTS);
+
+	/* state did not change */
+	if (cur_tiocm == tiocm)
+		return 0;
+
+	mutex_lock(&dtr_chan->mutex);
+
+	dtr_msg = kzalloc(sizeof(*dtr_msg), GFP_KERNEL);
+	if (!dtr_msg) {
+		ret = -ENOMEM;
+		goto tiocm_exit;
+	}
+
+	dtr_msg->preamble = CTRL_MAGIC;
+	dtr_msg->msg_id = CTRL_HOST_STATE;
+	dtr_msg->dest_id = mhi_dev->ul_chan_id;
+	dtr_msg->size = sizeof(u32);
+	if (tiocm & TIOCM_DTR)
+		dtr_msg->msg |= CTRL_MSG_DTR;
+	if (tiocm & TIOCM_RTS)
+		dtr_msg->msg |= CTRL_MSG_RTS;
+
+	reinit_completion(&dtr_chan->completion);
+	ret = mhi_queue_transfer(mhi_cntrl->dtr_dev, DMA_TO_DEVICE, dtr_msg,
+				 sizeof(*dtr_msg), MHI_EOT);
+	if (ret)
+		goto tiocm_exit;
+
+	ret = wait_for_completion_timeout(&dtr_chan->completion,
+				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+	if (!ret) {
+		dev_err(mhi_cntrl->dev, "Failed to recv transfer callback\n");
+		ret = -EIO;
+		goto tiocm_exit;
+	}
+
+	ret = 0;
+	spin_lock_irq(res_lock);
+	mhi_dev->tiocm &= ~(TIOCM_DTR | TIOCM_RTS);
+	mhi_dev->tiocm |= tiocm;
+	spin_unlock_irq(res_lock);
+
+tiocm_exit:
+	kfree(dtr_msg);
+	mutex_unlock(&dtr_chan->mutex);
+
+	return ret;
+}
+
+long mhi_ioctl(struct mhi_device *mhi_dev, unsigned int cmd, unsigned long arg)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	int ret;
+
+	/* ioctl not supported by this controller */
+	if (!mhi_cntrl->dtr_dev)
+		return -EIO;
+
+	switch (cmd) {
+	case TIOCMGET:
+		return mhi_dev->tiocm;
+	case TIOCMSET:
+	{
+		u32 tiocm;
+
+		ret = get_user(tiocm, (u32 __user *)arg);
+		if (ret)
+			return ret;
+
+		return mhi_dtr_tiocmset(mhi_cntrl, mhi_dev, tiocm);
+	}
+	default:
+		break;
+	}
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL(mhi_ioctl);
+
+static void mhi_dtr_dl_xfer_cb(struct mhi_device *mhi_dev,
+			       struct mhi_result *mhi_result)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct dtr_ctrl_msg *dtr_msg = mhi_result->buf_addr;
+	u32 chan;
+	spinlock_t *res_lock;
+
+	if (mhi_result->bytes_xferd != sizeof(*dtr_msg)) {
+		dev_err(mhi_cntrl->dev, "Unexpected length %zu received\n",
+			mhi_result->bytes_xferd);
+		return;
+	}
+
+	chan = CTRL_GET_CHID(dtr_msg);
+	if (chan >= mhi_cntrl->max_chan)
+		return;
+
+	mhi_dev = mhi_cntrl->mhi_chan[chan].mhi_dev;
+	if (!mhi_dev)
+		return;
+
+	res_lock = &mhi_dev->dev.devres_lock;
+	spin_lock_irq(res_lock);
+	mhi_dev->tiocm &= ~(TIOCM_CD | TIOCM_DSR | TIOCM_RI);
+
+	if (dtr_msg->msg & CTRL_MSG_DCD)
+		mhi_dev->tiocm |= TIOCM_CD;
+
+	if (dtr_msg->msg & CTRL_MSG_DSR)
+		mhi_dev->tiocm |= TIOCM_DSR;
+
+	if (dtr_msg->msg & CTRL_MSG_RI)
+		mhi_dev->tiocm |= TIOCM_RI;
+	spin_unlock_irq(res_lock);
+}
+
+static void mhi_dtr_ul_xfer_cb(struct mhi_device *mhi_dev,
+			       struct mhi_result *mhi_result)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_chan *dtr_chan = mhi_cntrl->dtr_dev->ul_chan;
+
+	if (!mhi_result->transaction_status)
+		complete(&dtr_chan->completion);
+}
+
+static void mhi_dtr_remove(struct mhi_device *mhi_dev)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+
+	mhi_cntrl->dtr_dev = NULL;
+}
+
+static int mhi_dtr_probe(struct mhi_device *mhi_dev,
+			 const struct mhi_device_id *id)
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	int ret;
+
+	ret = mhi_prepare_for_transfer(mhi_dev);
+	if (!ret)
+		mhi_cntrl->dtr_dev = mhi_dev;
+
+	return ret;
+}
+
+static const struct mhi_device_id mhi_dtr_table[] = {
+	{ .chan = "IP_CTRL" },
+	{},
+};
+
+static struct mhi_driver mhi_dtr_driver = {
+	.id_table = mhi_dtr_table,
+	.remove = mhi_dtr_remove,
+	.probe = mhi_dtr_probe,
+	.ul_xfer_cb = mhi_dtr_ul_xfer_cb,
+	.dl_xfer_cb = mhi_dtr_dl_xfer_cb,
+	.driver = {
+		.name = "MHI_DTR",
+		.owner = THIS_MODULE,
+	}
+};
+
+int __init mhi_dtr_init(void)
+{
+	return mhi_driver_register(&mhi_dtr_driver);
+}
diff --git a/drivers/bus/mhi/core/mhi_init.c b/drivers/bus/mhi/core/mhi_init.c
index 43a700d..b09767a 100644
--- a/drivers/bus/mhi/core/mhi_init.c
+++ b/drivers/bus/mhi/core/mhi_init.c
@@ -1293,10 +1293,16 @@ struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
 
 static int __init mhi_init(void)
 {
+	int ret;
+
 	/* parent directory */
 	debugfs_create_dir(mhi_bus_type.name, NULL);
 
-	return bus_register(&mhi_bus_type);
+	ret = bus_register(&mhi_bus_type);
+	if (!ret)
+		mhi_dtr_init();
+
+	return ret;
 }
 postcore_initcall(mhi_init);
 
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v2 5/7] mhi_bus: core: add support to get external modem time
  2018-07-09 20:08 ` MHI code review Sujeev Dias
                     ` (3 preceding siblings ...)
  2018-07-09 20:08   ` [PATCH v2 4/7] mhi_bus: core: add support for handling ioctl cmds Sujeev Dias
@ 2018-07-09 20:08   ` Sujeev Dias
  2018-07-11 19:32     ` Rob Herring
  2018-08-09 20:17     ` Randy Dunlap
  2018-07-09 20:08   ` [PATCH v2 6/7] mhi_bus: controller: MHI support for QCOM modems Sujeev Dias
                     ` (2 subsequent siblings)
  7 siblings, 2 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-07-09 20:08 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: linux-kernel, linux-arm-msm, devicetree, Sujeev Dias,
	Tony Truong, Siddartha Mohanadoss

For accurate synchronization between the external modem and
host processor, the MHI host will capture modem time relative
to host time. Clients may use these time measurements to
adjust for any drift between host and modem.
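
An illustrative drift computation from two (host, modem) sample pairs
(the variable names are hypothetical; kernel integer types assumed):

	/* samples taken at T1 (tx, ty) and T2 (txx, tyy) */
	u64 host_delta  = txx - tx;	/* host ticks elapsed  */
	u64 modem_delta = tyy - ty;	/* modem ticks elapsed */
	s64 drift = (s64)host_delta - (s64)modem_delta;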

Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
Reviewed-by: Tony Truong <truong@codeaurora.org>
Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
---
 Documentation/devicetree/bindings/bus/mhi.txt |   7 +
 Documentation/mhi.txt                         |  41 ++++
 drivers/bus/mhi/core/mhi_init.c               | 111 +++++++++++
 drivers/bus/mhi/core/mhi_internal.h           |  57 +++++-
 drivers/bus/mhi/core/mhi_main.c               | 263 +++++++++++++++++++++++++-
 drivers/bus/mhi/core/mhi_pm.c                 |   7 +
 include/linux/mhi.h                           |   7 +
 7 files changed, 486 insertions(+), 7 deletions(-)

diff --git a/Documentation/devicetree/bindings/bus/mhi.txt b/Documentation/devicetree/bindings/bus/mhi.txt
index 19deb84..1012ff2 100644
--- a/Documentation/devicetree/bindings/bus/mhi.txt
+++ b/Documentation/devicetree/bindings/bus/mhi.txt
@@ -19,6 +19,13 @@ Main node properties:
   Value type: <u32>
   Definition: Maximum timeout in ms wait for state and cmd completion
 
+- mhi,time-sync
+  Usage: optional
+  Value type: <bool>
+  Definition: Set true if the external device supports the MHI get time
+	feature for time synchronization between the host processor and
+	external modem.
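+
+	Example (illustrative): a node that supports the feature simply
+	carries the boolean property:
+		mhi,time-sync;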
+
 - mhi,use-bb
   Usage: optional
   Value type: <bool>
diff --git a/Documentation/mhi.txt b/Documentation/mhi.txt
index 1c501f1..9287899 100644
--- a/Documentation/mhi.txt
+++ b/Documentation/mhi.txt
@@ -137,6 +137,47 @@ Example Operation for data transfer:
 8. Host wakes up and checks event ring for completion event
 9. Host updates Event[i].ctxt.WP to indicate the completion event was processed.
 
+Time sync
+---------
+To synchronize two applications running on the host and the external modem,
+MHI provides native support for reading the external modem's free-running
+timer value in a fast, reliable manner. MHI clients do not need to implement
+client-specific methods to get the modem time.
+
+When a client requests the modem time, the MHI host automatically captures
+the host time at that moment so clients can do an accurate drift adjustment.
+
+Example:
+
+Client requests time @ time T1
+
+Host Time: Tx
+Modem Time: Ty
+
+Client requests time @ time T2
+Host Time: Txx
+Modem Time: Tyy
+
+The drift then satisfies:
+(Tyy - Ty) + <drift> == (Txx - Tx)
+
+Clients are free to implement their own drift algorithms; what the MHI host
+provides is a way to accurately correlate host time with external modem time.
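+
+As an illustrative example (hypothetical numbers): suppose Tx = 1000 us and
+Ty = 5000 us at T1, and Txx = 2000 us and Tyy = 5990 us at T2. The host saw
+1000 us elapse while the modem counted 990 us, so the modem clock drifted by
+10 us relative to the host over that interval.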
+
+To avoid link-level latencies, the controller must be capable of disabling
+any link-level latency sources.
+
+During time capture, the host will:
+	1. Capture the host time
+	2. Trigger the doorbell to capture the modem time
+
+It is important that the time between Step 1 and Step 2 is as deterministic
+as possible. Therefore, the MHI host will:
+	1. Disable any MHI-related low power modes
+	2. Disable preemption
+	3. Request the bus master to disable any link-level latencies. The
+	controller should disable all low power modes such as L0s, L1, L1ss.
+
 MHI States
 ----------
 
diff --git a/drivers/bus/mhi/core/mhi_init.c b/drivers/bus/mhi/core/mhi_init.c
index b09767a..e1e1b8c 100644
--- a/drivers/bus/mhi/core/mhi_init.c
+++ b/drivers/bus/mhi/core/mhi_init.c
@@ -362,6 +362,69 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 	return ret;
 }
 
+int mhi_init_timesync(struct mhi_controller *mhi_cntrl)
+{
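+	/*
+	 * Discover the device's TIMESYNC capability and cache its doorbell
+	 * address: issue a TIMSYNC_CFG command binding the configured event
+	 * ring, then walk the capability chain for the time sync doorbell,
+	 * which mhi_get_remote_time() later rings to latch the modem's
+	 * free-running counter.
+	 */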
+	struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
+	u32 time_offset, db_offset;
+	int ret;
+
+	reinit_completion(&mhi_tsync->completion);
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		return -EIO;
+	}
+
+	mhi_cntrl->wake_get(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+	mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+
+	ret = mhi_send_cmd(mhi_cntrl, NULL, MHI_CMD_TIMSYNC_CFG);
+	if (ret)
+		goto error_send_cmd;
+
+	ret = wait_for_completion_timeout(&mhi_tsync->completion,
+				msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+	if (!ret || mhi_tsync->ccs != MHI_EV_CC_SUCCESS) {
+		ret = -EIO;
+		goto error_send_cmd;
+	}
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+
+	ret = -EIO;
+
+	if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+		goto error_sync_cap;
+
+	ret = mhi_get_capability_offset(mhi_cntrl, TIMESYNC_CAP_ID,
+					&time_offset);
+	if (ret)
+		goto error_sync_cap;
+
+	ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs,
+			   time_offset + TIMESYNC_DB_OFFSET, &db_offset);
+	if (ret)
+		goto error_sync_cap;
+
+	mhi_tsync->db = mhi_cntrl->regs + db_offset;
+
+error_sync_cap:
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return ret;
+
+error_send_cmd:
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return ret;
+}
+
 int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 {
 	u32 val;
@@ -691,6 +754,9 @@ static int of_parse_ev_cfg(struct mhi_controller *mhi_cntrl,
 		case MHI_ER_CTRL_ELEMENT_TYPE:
 			mhi_event->process_event = mhi_process_ctrl_ev_ring;
 			break;
+		case MHI_ER_TSYNC_ELEMENT_TYPE:
+			mhi_event->process_event = mhi_process_tsync_event_ring;
+			break;
 		}
 
 		mhi_event->hw_ring = of_property_read_bool(child, "mhi,hw-ev");
@@ -858,6 +924,7 @@ static int of_parse_dt(struct mhi_controller *mhi_cntrl,
 		       struct device_node *of_node)
 {
 	int ret;
+	struct mhi_timesync *mhi_tsync;
 
 	/* parse MHI channel configuration */
 	ret = of_parse_ch_cfg(mhi_cntrl, of_node);
@@ -874,6 +941,28 @@ static int of_parse_dt(struct mhi_controller *mhi_cntrl,
 	if (ret)
 		mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
 
+	mhi_cntrl->time_sync = of_property_read_bool(of_node, "mhi,time-sync");
+
+	if (mhi_cntrl->time_sync) {
+		mhi_tsync = kzalloc(sizeof(*mhi_tsync), GFP_KERNEL);
+		if (!mhi_tsync) {
+			ret = -ENOMEM;
+			goto error_time_sync;
+		}
+
+		ret = of_property_read_u32(of_node, "mhi,tsync-er",
+					   &mhi_tsync->er_index);
+		if (ret)
+			goto error_time_sync;
+
+		if (mhi_tsync->er_index >= mhi_cntrl->total_ev_rings) {
+			ret = -EINVAL;
+			goto error_time_sync;
+		}
+
+		mhi_cntrl->mhi_tsync = mhi_tsync;
+	}
+
 	mhi_cntrl->bounce_buf = of_property_read_bool(of_node, "mhi,use-bb");
 	ret = of_property_read_u32(of_node, "mhi,buffer-len",
 				   (u32 *)&mhi_cntrl->buffer_len);
@@ -882,6 +971,10 @@ static int of_parse_dt(struct mhi_controller *mhi_cntrl,
 
 	return 0;
 
+error_time_sync:
+	kfree(mhi_tsync);
+	kfree(mhi_cntrl->mhi_event);
+
 error_ev_cfg:
 	kfree(mhi_cntrl->mhi_chan);
 
@@ -910,6 +1003,11 @@ int of_register_mhi_controller(struct mhi_controller *mhi_cntrl)
 	if (ret)
 		return -EINVAL;
 
+	if (mhi_cntrl->time_sync &&
+	    (!mhi_cntrl->time_get || !mhi_cntrl->lpm_disable ||
+	     !mhi_cntrl->lpm_enable))
+		return -EINVAL;
+
 	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
 				     sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
 	if (!mhi_cntrl->mhi_cmd) {
@@ -953,6 +1051,14 @@ int of_register_mhi_controller(struct mhi_controller *mhi_cntrl)
 		rwlock_init(&mhi_chan->lock);
 	}
 
+	if (mhi_cntrl->mhi_tsync) {
+		struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
+
+		spin_lock_init(&mhi_tsync->lock);
+		INIT_LIST_HEAD(&mhi_tsync->head);
+		init_completion(&mhi_tsync->completion);
+	}
+
 	if (mhi_cntrl->bounce_buf) {
 		mhi_cntrl->map_single = mhi_map_single_use_bb;
 		mhi_cntrl->unmap_single = mhi_unmap_single_use_bb;
@@ -1003,6 +1109,7 @@ void mhi_unregister_mhi_controller(struct mhi_controller *mhi_cntrl)
 	kfree(mhi_cntrl->mhi_cmd);
 	kfree(mhi_cntrl->mhi_event);
 	kfree(mhi_cntrl->mhi_chan);
+	kfree(mhi_cntrl->mhi_tsync);
 
 	device_del(&mhi_dev->dev);
 	put_device(&mhi_dev->dev);
@@ -1237,6 +1344,10 @@ static int mhi_driver_remove(struct device *dev)
 		mutex_unlock(&mhi_chan->mutex);
 	}
 
+	if (mhi_cntrl->tsync_dev == mhi_dev)
+		mhi_cntrl->tsync_dev = NULL;
+
 	/* relinquish any pending votes */
 	read_lock_bh(&mhi_cntrl->pm_lock);
 	while (atomic_read(&mhi_dev->dev_wake))
diff --git a/drivers/bus/mhi/core/mhi_internal.h b/drivers/bus/mhi/core/mhi_internal.h
index 1167d75..47d258a 100644
--- a/drivers/bus/mhi/core/mhi_internal.h
+++ b/drivers/bus/mhi/core/mhi_internal.h
@@ -128,6 +128,30 @@
 #define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
 #define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
 
+/* MHI misc capability registers */
+#define MISC_OFFSET (0x24)
+#define MISC_CAP_MASK (0xFFFFFFFF)
+#define MISC_CAP_SHIFT (0)
+
+#define CAP_CAPID_MASK (0xFF000000)
+#define CAP_CAPID_SHIFT (24)
+#define CAP_NEXT_CAP_MASK (0x00FFF000)
+#define CAP_NEXT_CAP_SHIFT (12)
+
+/* MHI Timesync offsets */
+#define TIMESYNC_CFG_OFFSET (0x00)
+#define TIMESYNC_CFG_CAPID_MASK (CAP_CAPID_MASK)
+#define TIMESYNC_CFG_CAPID_SHIFT (CAP_CAPID_SHIFT)
+#define TIMESYNC_CFG_NEXT_OFF_MASK (CAP_NEXT_CAP_MASK)
+#define TIMESYNC_CFG_NEXT_OFF_SHIFT (CAP_NEXT_CAP_SHIFT)
+#define TIMESYNC_CFG_NUMCMD_MASK (0xFF)
+#define TIMESYNC_CFG_NUMCMD_SHIFT (0)
+#define TIMESYNC_TIME_LOW_OFFSET (0x4)
+#define TIMESYNC_TIME_HIGH_OFFSET (0x8)
+#define TIMESYNC_DB_OFFSET (0xC)
+
+#define TIMESYNC_CAP_ID (2)
+
 /* MHI BHI offsets */
 #define BHI_BHIVERSION_MINOR (0x00)
 #define BHI_BHIVERSION_MAJOR (0x04)
@@ -243,6 +267,7 @@ enum mhi_cmd_type {
 	MHI_CMD_TYPE_RESET = 16,
 	MHI_CMD_TYPE_STOP = 17,
 	MHI_CMD_TYPE_START = 18,
+	MHI_CMD_TYPE_TSYNC = 24,
 };
 
 /* no operation command */
@@ -267,6 +292,12 @@ enum mhi_cmd_type {
 #define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
 					(MHI_CMD_TYPE_START << 16))
 
+/* time sync cfg command */
+#define MHI_TRE_CMD_TSYNC_CFG_PTR (0)
+#define MHI_TRE_CMD_TSYNC_CFG_DWORD0 (0)
+#define MHI_TRE_CMD_TSYNC_CFG_DWORD1(er) ((MHI_CMD_TYPE_TSYNC << 16) | \
+					  (er << 24))
+
 #define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
 #define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
 
@@ -293,6 +324,7 @@ enum mhi_cmd_type {
 enum MHI_CMD {
 	MHI_CMD_RESET_CHAN,
 	MHI_CMD_START_CHAN,
+	MHI_CMD_TIMSYNC_CFG,
 };
 
 enum MHI_PKT_TYPE {
@@ -462,7 +494,8 @@ enum MHI_ER_TYPE {
 enum mhi_er_data_type {
 	MHI_ER_DATA_ELEMENT_TYPE,
 	MHI_ER_CTRL_ELEMENT_TYPE,
-	MHI_ER_DATA_TYPE_MAX = MHI_ER_CTRL_ELEMENT_TYPE,
+	MHI_ER_TSYNC_ELEMENT_TYPE,
+	MHI_ER_DATA_TYPE_MAX = MHI_ER_TSYNC_ELEMENT_TYPE,
 };
 
 struct db_cfg {
@@ -584,6 +617,25 @@ struct mhi_chan {
 	struct list_head node;
 };
 
+struct tsync_node {
+	struct list_head node;
+	u32 sequence;
+	u64 local_time;
+	u64 remote_time;
+	struct mhi_device *mhi_dev;
+	void (*cb_func)(struct mhi_device *mhi_dev, u32 sequence,
+			u64 local_time, u64 remote_time);
+};
+
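+/*
+ * per-controller time sync state; 'head' holds outstanding tsync_node
+ * requests awaiting a time sync event completion
+ */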
+struct mhi_timesync {
+	u32 er_index;
+	void __iomem *db;
+	enum MHI_EV_CCS ccs;
+	struct completion completion;
+	spinlock_t lock;
+	struct list_head head;
+};
+
 /* default MHI timeout */
 #define MHI_TIMEOUT_MS (1000)
 
@@ -651,6 +703,7 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
 void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum MHI_STATE state);
 int mhi_get_capability_offset(struct mhi_controller *mhi_cntrl, u32 capability,
 			      u32 *offset);
+int mhi_init_timesync(struct mhi_controller *mhi_cntrl);
 
 /* memory allocation methods */
 static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
@@ -699,6 +752,8 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
 				struct mhi_event *mhi_event, u32 event_quota);
 int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 			     struct mhi_event *mhi_event, u32 event_quota);
+int mhi_process_tsync_event_ring(struct mhi_controller *mhi_cntrl,
+				 struct mhi_event *mhi_event, u32 event_quota);
 
 /* initialization methods */
 int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
diff --git a/drivers/bus/mhi/core/mhi_main.c b/drivers/bus/mhi/core/mhi_main.c
index 3e7077a8..8a0a7e1 100644
--- a/drivers/bus/mhi/core/mhi_main.c
+++ b/drivers/bus/mhi/core/mhi_main.c
@@ -53,6 +53,40 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
 	return 0;
 }
 
+int mhi_get_capability_offset(struct mhi_controller *mhi_cntrl,
+			      u32 capability,
+			      u32 *offset)
+{
+	u32 cur_cap, next_offset;
+	int ret;
+
+	/* get the 1st supported capability offset */
+	ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MISC_OFFSET,
+				 MISC_CAP_MASK, MISC_CAP_SHIFT, offset);
+	if (ret)
+		return ret;
+	do {
+		ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, *offset,
+					 CAP_CAPID_MASK, CAP_CAPID_SHIFT,
+					 &cur_cap);
+		if (ret)
+			return ret;
+
+		if (cur_cap == capability)
+			return 0;
+
+		ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, *offset,
+					 CAP_NEXT_CAP_MASK, CAP_NEXT_CAP_SHIFT,
+					 &next_offset);
+		if (ret)
+			return ret;
+
+		*offset += next_offset;
+	} while (next_offset);
+
+	return -ENXIO;
+}
+
 void mhi_write_reg(struct mhi_controller *mhi_cntrl,
 		   void __iomem *base,
 		   u32 offset,
@@ -547,6 +581,42 @@ static void mhi_assign_of_node(struct mhi_controller *mhi_cntrl,
 	}
 }
 
+static void mhi_create_time_sync_dev(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_device *mhi_dev;
+	struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
+	int ret;
+
+	if (!mhi_tsync || !mhi_tsync->db)
+		return;
+
+	if (mhi_cntrl->ee != MHI_EE_AMSS)
+		return;
+
+	mhi_dev = mhi_alloc_device(mhi_cntrl);
+	if (!mhi_dev)
+		return;
+
+	mhi_dev->dev_type = MHI_TIMESYNC_TYPE;
+	mhi_dev->chan_name = "TIME_SYNC";
+	dev_set_name(&mhi_dev->dev, "%04x_%02u.%02u.%02u_%s", mhi_dev->dev_id,
+		     mhi_dev->domain, mhi_dev->bus, mhi_dev->slot,
+		     mhi_dev->chan_name);
+
+	/* add if there is a matching DT node */
+	mhi_assign_of_node(mhi_cntrl, mhi_dev);
+
+	ret = device_add(&mhi_dev->dev);
+	if (ret) {
+		dev_err(mhi_cntrl->dev, "Failed to register dev for chan:%s\n",
+			mhi_dev->chan_name);
+		mhi_dealloc_device(mhi_cntrl, mhi_dev);
+		return;
+	}
+
+	mhi_cntrl->tsync_dev = mhi_dev;
+}
+
 /* bind mhi channels into mhi devices */
 void mhi_create_devices(struct mhi_controller *mhi_cntrl)
 {
@@ -555,6 +625,13 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl)
 	struct mhi_device *mhi_dev;
 	int ret;
 
+	/*
+	 * we need to create the time sync device before creating other
+	 * devices, because a client may try to capture time during
+	 * client probe.
+	 */
+	mhi_create_time_sync_dev(mhi_cntrl);
+
 	mhi_chan = mhi_cntrl->mhi_chan;
 	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
 		if (!mhi_chan->configured || mhi_chan->ee != mhi_cntrl->ee)
@@ -753,16 +830,26 @@ static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
 	struct mhi_ring *mhi_ring = &cmd_ring->ring;
 	struct mhi_tre *cmd_pkt;
 	struct mhi_chan *mhi_chan;
+	struct mhi_timesync *mhi_tsync;
+	enum mhi_cmd_type type;
 	u32 chan;
 
 	cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
 
-	chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
-	mhi_chan = &mhi_cntrl->mhi_chan[chan];
-	write_lock_bh(&mhi_chan->lock);
-	mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
-	complete(&mhi_chan->completion);
-	write_unlock_bh(&mhi_chan->lock);
+	type = MHI_TRE_GET_CMD_TYPE(cmd_pkt);
+
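+	/*
+	 * timesync cfg commands are not bound to a channel, so their
+	 * completion is signaled on the controller's tsync struct
+	 */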
+	if (type == MHI_CMD_TYPE_TSYNC) {
+		mhi_tsync = mhi_cntrl->mhi_tsync;
+		mhi_tsync->ccs = MHI_TRE_GET_EV_CODE(tre);
+		complete(&mhi_tsync->completion);
+	} else {
+		chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
+		mhi_chan = &mhi_cntrl->mhi_chan[chan];
+		write_lock_bh(&mhi_chan->lock);
+		mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
+		complete(&mhi_chan->completion);
+		write_unlock_bh(&mhi_chan->lock);
+	}
 
 	mhi_del_ring_element(mhi_cntrl, mhi_ring);
 }
@@ -929,6 +1016,73 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
 	return count;
 }
 
+int mhi_process_tsync_event_ring(struct mhi_controller *mhi_cntrl,
+				 struct mhi_event *mhi_event,
+				 u32 event_quota)
+{
+	struct mhi_tre *dev_rp, *local_rp;
+	struct mhi_ring *ev_ring = &mhi_event->ring;
+	struct mhi_event_ctxt *er_ctxt =
+		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+	struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
+	int count = 0;
+	u32 sequence;
+	u64 remote_time;
+
+	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state))) {
+		read_unlock_bh(&mhi_cntrl->pm_lock);
+		return -EIO;
+	}
+
+	dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+	local_rp = ev_ring->rp;
+
+	while (dev_rp != local_rp) {
+		struct tsync_node *tsync_node;
+
+		sequence = MHI_TRE_GET_EV_SEQ(local_rp);
+		remote_time = MHI_TRE_GET_EV_TIME(local_rp);
+
+		do {
+			spin_lock_irq(&mhi_tsync->lock);
+			tsync_node = list_first_entry_or_null(&mhi_tsync->head,
+						      struct tsync_node, node);
+			if (unlikely(!tsync_node)) {
+				/* don't leak the lock once the list drains */
+				spin_unlock_irq(&mhi_tsync->lock);
+				break;
+			}
+
+			list_del(&tsync_node->node);
+			spin_unlock_irq(&mhi_tsync->lock);
+
+			/*
+			 * the device may not be able to process every time
+			 * sync command the host issues, and may only respond
+			 * to the last command it received
+			 */
+			if (tsync_node->sequence == sequence)
+				tsync_node->cb_func(tsync_node->mhi_dev,
+						    sequence,
+						    tsync_node->local_time,
+						    remote_time);
+			kfree(tsync_node);
+		} while (true);
+
+		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+		local_rp = ev_ring->rp;
+		dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+		count++;
+	}
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)))
+		mhi_ring_er_db(mhi_event);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return count;
+}
+
 void mhi_ev_task(unsigned long data)
 {
 	struct mhi_event *mhi_event = (struct mhi_event *)data;
@@ -1060,6 +1214,12 @@ int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
 		cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0;
 		cmd_tre->dword[1] = MHI_TRE_CMD_START_DWORD1(chan);
 		break;
+	case MHI_CMD_TIMSYNC_CFG:
+		cmd_tre->ptr = MHI_TRE_CMD_TSYNC_CFG_PTR;
+		cmd_tre->dword[0] = MHI_TRE_CMD_TSYNC_CFG_DWORD0;
+		cmd_tre->dword[1] = MHI_TRE_CMD_TSYNC_CFG_DWORD1
+			(mhi_cntrl->mhi_tsync->er_index);
+		break;
 	}
 
 	/* queue to hardware */
@@ -1437,3 +1597,94 @@ int mhi_poll(struct mhi_device *mhi_dev,
 	return ret;
 }
 EXPORT_SYMBOL(mhi_poll);
+
+/**
+ * mhi_get_remote_time - Get external modem time relative to host time
+ * Trigger an event to capture the modem time, and also capture the host
+ * time so the client can do a relative drift comparison.
+ * It is recommended that only the tsync device calls this method, and it
+ * must not be called from atomic context.
+ * @mhi_dev: Device associated with the channels
+ * @sequence: unique sequence id to track the event
+ * @cb_func: callback function invoked when the measurement completes
+ */
+int mhi_get_remote_time(struct mhi_device *mhi_dev,
+			u32 sequence,
+			void (*cb_func)(struct mhi_device *mhi_dev,
+					u32 sequence,
+					u64 local_time,
+					u64 remote_time))
+{
+	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
+	struct tsync_node *tsync_node;
+	int ret;
+
+	/* not all devices support time feature */
+	if (!mhi_tsync)
+		return -EIO;
+
+	/* tsync db can only be rung in M0 state */
+	ret = __mhi_device_get_sync(mhi_cntrl);
+	if (ret)
+		return ret;
+
+	/*
+	 * technically we could use GFP_KERNEL, but we want to avoid
+	 * being scheduled out repeatedly
+	 */
+	tsync_node = kzalloc(sizeof(*tsync_node), GFP_ATOMIC);
+	if (!tsync_node) {
+		ret = -ENOMEM;
+		goto error_no_mem;
+	}
+
+	tsync_node->sequence = sequence;
+	tsync_node->cb_func = cb_func;
+	tsync_node->mhi_dev = mhi_dev;
+
+	/* disable link level low power modes */
+	mhi_cntrl->lpm_disable(mhi_cntrl, mhi_cntrl->priv_data);
+
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
+		ret = -EIO;
+		goto error_invalid_state;
+	}
+
+	spin_lock_irq(&mhi_tsync->lock);
+	list_add_tail(&tsync_node->node, &mhi_tsync->head);
+	spin_unlock_irq(&mhi_tsync->lock);
+
+	/*
+	 * time critical code; the delay between these two steps should be
+	 * as deterministic as possible.
+	 */
+	preempt_disable();
+	local_irq_disable();
+
+	tsync_node->local_time =
+		mhi_cntrl->time_get(mhi_cntrl, mhi_cntrl->priv_data);
+	writel_relaxed(tsync_node->sequence, mhi_tsync->db);
+	/* write must go thru immediately */
+	wmb();
+
+	local_irq_enable();
+	preempt_enable();
+
+	ret = 0;
+
+error_invalid_state:
+	if (ret)
+		kfree(tsync_node);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->lpm_enable(mhi_cntrl, mhi_cntrl->priv_data);
+
+error_no_mem:
+	read_lock_bh(&mhi_cntrl->pm_lock);
+	mhi_cntrl->wake_put(mhi_cntrl, false);
+	read_unlock_bh(&mhi_cntrl->pm_lock);
+
+	return ret;
+}
+EXPORT_SYMBOL(mhi_get_remote_time);
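
For reference, a TIME_SYNC client might invoke this API roughly as follows
(a minimal sketch; the callback name and sequence handling are illustrative,
not part of this patch):

	static void tsync_cb(struct mhi_device *mhi_dev, u32 sequence,
			     u64 local_time, u64 remote_time)
	{
		/* correlate host time (local_time) with modem time */
	}

	...
	static u32 seq;

	ret = mhi_get_remote_time(mhi_dev, seq++, tsync_cb);
	if (ret)
		dev_err(&mhi_dev->dev, "time sync not supported\n");
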
diff --git a/drivers/bus/mhi/core/mhi_pm.c b/drivers/bus/mhi/core/mhi_pm.c
index 5220cfa..f3a4965 100644
--- a/drivers/bus/mhi/core/mhi_pm.c
+++ b/drivers/bus/mhi/core/mhi_pm.c
@@ -415,6 +415,10 @@ static int mhi_pm_amss_transition(struct mhi_controller *mhi_cntrl)
 
 	read_unlock_bh(&mhi_cntrl->pm_lock);
 
+	/* setup support for time sync */
+	if (mhi_cntrl->time_sync)
+		mhi_init_timesync(mhi_cntrl);
+
 	/* add supported devices */
 	mhi_create_devices(mhi_cntrl);
 
@@ -764,6 +768,9 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
 		mhi_deinit_free_irq(mhi_cntrl);
 		mhi_deinit_dev_ctxt(mhi_cntrl);
 	}
+
+	if (mhi_cntrl->mhi_tsync)
+		mhi_cntrl->mhi_tsync->db = NULL;
 }
 EXPORT_SYMBOL(mhi_power_down);
 
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 308d12b..25de226 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -12,6 +12,7 @@
 struct mhi_cmd;
 struct image_info;
 struct bhi_vec_entry;
+struct mhi_timesync;
 struct mhi_buf_info;
 
 /**
@@ -203,6 +204,7 @@ struct mhi_controller {
 	void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
 	int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
 	void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
+	u64 (*time_get)(struct mhi_controller *mhi_cntrl, void *priv);
 	void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
 	void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
 	int (*map_single)(struct mhi_controller *mhi_cntrl,
@@ -217,6 +219,11 @@ struct mhi_controller {
 	bool bounce_buf;
 	size_t buffer_len;
 
+	/* supports time sync feature */
+	bool time_sync;
+	struct mhi_timesync *mhi_tsync;
+	struct mhi_device *tsync_dev;
+
 	/* controller specific data */
 	void *priv_data;
 	void *log_buf;
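
A controller that sets mhi,time-sync must also supply the three new hooks
above; a minimal sketch of that wiring (function names are illustrative, and
ktime_get_ns() is just one possible host time source):

	static u64 my_time_get(struct mhi_controller *mhi_cntrl, void *priv)
	{
		/* any monotonic, high-resolution host clock will do */
		return ktime_get_ns();
	}

	static void my_lpm_disable(struct mhi_controller *mhi_cntrl, void *priv)
	{
		/* e.g. disable PCIe L0s/L1/L1ss on the link */
	}

	static void my_lpm_enable(struct mhi_controller *mhi_cntrl, void *priv)
	{
		/* re-enable link low power modes */
	}

	...
	mhi_cntrl->time_get = my_time_get;
	mhi_cntrl->lpm_disable = my_lpm_disable;
	mhi_cntrl->lpm_enable = my_lpm_enable;

Without these, of_register_mhi_controller() rejects a time-sync enabled
controller with -EINVAL (see the check added in mhi_init.c above).
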
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v2 6/7] mhi_bus: controller: MHI support for QCOM modems
  2018-07-09 20:08 ` MHI code review Sujeev Dias
                     ` (4 preceding siblings ...)
  2018-07-09 20:08   ` [PATCH v2 5/7] mhi_bus: core: add support to get external modem time Sujeev Dias
@ 2018-07-09 20:08   ` Sujeev Dias
  2018-07-11 19:36     ` Rob Herring
  2018-07-09 20:08   ` [PATCH v2 7/7] mhi_bus: dev: uci: add user space interface driver Sujeev Dias
  2019-04-30 15:10   ` MHI code review Daniele Palmas
  7 siblings, 1 reply; 54+ messages in thread
From: Sujeev Dias @ 2018-07-09 20:08 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: linux-kernel, linux-arm-msm, devicetree, Sujeev Dias,
	Tony Truong, Siddartha Mohanadoss

QCOM PCIe based modems use MHI as the communication protocol.
The MHI control driver is the bus master for such modems. As the
bus master driver, it oversees power management operations
such as suspend, resume, and powering the device on and off.

Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
Reviewed-by: Tony Truong <truong@codeaurora.org>
Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
---
 Documentation/devicetree/bindings/bus/mhi_qcom.txt |  58 +++
 arch/arm64/configs/defconfig                       |   1 +
 drivers/bus/Kconfig                                |   1 +
 drivers/bus/mhi/Makefile                           |   1 +
 drivers/bus/mhi/controllers/Kconfig                |  13 +
 drivers/bus/mhi/controllers/Makefile               |   1 +
 drivers/bus/mhi/controllers/mhi_qcom.c             | 461 +++++++++++++++++++++
 drivers/bus/mhi/controllers/mhi_qcom.h             |  67 +++
 8 files changed, 603 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/bus/mhi_qcom.txt
 create mode 100644 drivers/bus/mhi/controllers/Kconfig
 create mode 100644 drivers/bus/mhi/controllers/Makefile
 create mode 100644 drivers/bus/mhi/controllers/mhi_qcom.c
 create mode 100644 drivers/bus/mhi/controllers/mhi_qcom.h

diff --git a/Documentation/devicetree/bindings/bus/mhi_qcom.txt b/Documentation/devicetree/bindings/bus/mhi_qcom.txt
new file mode 100644
index 0000000..0a48a50
--- /dev/null
+++ b/Documentation/devicetree/bindings/bus/mhi_qcom.txt
@@ -0,0 +1,58 @@
+Qualcomm Technologies Inc MHI Bus controller
+
+The MHI control driver enables clients to communicate with an external modem
+using the MHI protocol.
+
+==============
+Node Structure
+==============
+
+Main node properties:
+
+- reg
+  Usage: required
+  Value type: Array (5-cell PCI resource) of <u32>
+  Definition: First cell is devfn, which is determined by pci bus topology.
+	Assign the other cells 0 since they are not used.
+
+- qcom,smmu-cfg
+  Usage: required
+  Value type: <u32>
+  Definition: Required SMMU configuration bitmask for PCIe bus.
+	BIT mask:
+	BIT(0) : Attach address mapping to endpoint device
+	BIT(1) : Set attribute S1_BYPASS
+	BIT(2) : Set attribute FAST
+	BIT(3) : Set attribute ATOMIC
+	BIT(4) : Set attribute FORCE_COHERENT
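+
+	As an illustrative example, a value of 0x3 (BIT(0) | BIT(1)) attaches
+	the address mapping and bypasses S1 translation.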
+
+- qcom,addr-win
+  Usage: required if SMMU S1 translation is enabled
+  Value type: Array of <u64>
+  Definition: Pair of values describing iova start and stop address
+
+- MHI bus settings
+  Usage: required
+  Values: as defined by mhi.txt
+  Definition: Per definition of devicetree/bindings/bus/mhi.txt, define device
+	specific MHI configuration parameters.
+
+========
+Example:
+========
+
+/* pcie domain (root complex) modem connected to */
+&pcie1 {
+	/* pcie bus modem connected to */
+	pci,bus@1 {
+		reg = <0 0 0 0 0>;
+
+		qcom,mhi {
+			reg = <0 0 0 0 0>;
+			qcom,smmu-cfg = <0x3d>;
+			qcom,addr-win = <0x0 0x20000000 0x0 0x3fffffff>;
+
+			<mhi bus configurations>
+		};
+	};
+};
diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index 3562026..fd36426 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -168,6 +168,7 @@ CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_DMA_CMA=y
 CONFIG_MHI_BUS=y
+CONFIG_MHI_QCOM=y
 CONFIG_MTD=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_M25P80=y
diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
index 080d3c2..8c6827a 100644
--- a/drivers/bus/Kconfig
+++ b/drivers/bus/Kconfig
@@ -180,5 +180,6 @@ config MHI_BUS
 	  devices that support MHI protocol.
 
 source "drivers/bus/fsl-mc/Kconfig"
+source "drivers/bus/mhi/controllers/Kconfig"
 
 endmenu
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
index f8f14f7..3458ba3 100644
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -4,3 +4,4 @@
 
 # core layer
 obj-y += core/
+obj-y += controllers/
\ No newline at end of file
diff --git a/drivers/bus/mhi/controllers/Kconfig b/drivers/bus/mhi/controllers/Kconfig
new file mode 100644
index 0000000..2586cc4
--- /dev/null
+++ b/drivers/bus/mhi/controllers/Kconfig
@@ -0,0 +1,13 @@
+menu "MHI controllers"
+
+config MHI_QCOM
+       tristate "MHI QCOM"
+       depends on MHI_BUS
+       help
+	  If you say yes to this option, MHI bus support for QCOM modem chipsets
+	  will be enabled. QCOM PCIe based modems use MHI as the communication
+	  protocol. The MHI control driver is the bus master for such modems. As
+	  the bus master driver, it oversees power management operations such as
+	  suspend, resume, and powering the device on and off.
+
+endmenu
diff --git a/drivers/bus/mhi/controllers/Makefile b/drivers/bus/mhi/controllers/Makefile
new file mode 100644
index 0000000..292f696
--- /dev/null
+++ b/drivers/bus/mhi/controllers/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_MHI_QCOM) += mhi_qcom.o
diff --git a/drivers/bus/mhi/controllers/mhi_qcom.c b/drivers/bus/mhi/controllers/mhi_qcom.c
new file mode 100644
index 0000000..62620d1
--- /dev/null
+++ b/drivers/bus/mhi/controllers/mhi_qcom.c
@@ -0,0 +1,461 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/list.h>
+#include <linux/of.h>
+#include <linux/memblock.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/mhi.h>
+#include "mhi_qcom.h"
+
+struct firmware_info {
+	u32 dev_id;
+	const char *fw_image;
+	const char *edl_image;
+};
+
+static const struct firmware_info firmware_table[] = {
+	{.dev_id = 0x305, .fw_image = "sdx50m/sbl1.mbn"},
+	{.dev_id = 0x304, .fw_image = "sbl.mbn", .edl_image = "edl.mbn"},
+	/* default, set to debug.mbn */
+	{.fw_image = "debug.mbn"},
+};
+
+void mhi_deinit_pci_dev(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+	struct pci_dev *pci_dev = mhi_dev->pci_dev;
+
+	pci_free_irq_vectors(pci_dev);
+	kfree(mhi_cntrl->irq);
+	mhi_cntrl->irq = NULL;
+	iounmap(mhi_cntrl->regs);
+	mhi_cntrl->regs = NULL;
+	pci_clear_master(pci_dev);
+	pci_release_region(pci_dev, mhi_dev->resn);
+	pci_disable_device(pci_dev);
+}
+
+static int mhi_init_pci_dev(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+	struct pci_dev *pci_dev = mhi_dev->pci_dev;
+	int ret;
+	resource_size_t start, len;
+	int i;
+
+	mhi_dev->resn = MHI_PCI_BAR_NUM;
+	ret = pci_assign_resource(pci_dev, mhi_dev->resn);
+	if (ret)
+		return ret;
+
+	ret = pci_enable_device(pci_dev);
+	if (ret)
+		goto error_enable_device;
+
+	ret = pci_request_region(pci_dev, mhi_dev->resn, "mhi");
+	if (ret)
+		goto error_request_region;
+
+	pci_set_master(pci_dev);
+
+	start = pci_resource_start(pci_dev, mhi_dev->resn);
+	len = pci_resource_len(pci_dev, mhi_dev->resn);
+	mhi_cntrl->regs = ioremap_nocache(start, len);
+	if (!mhi_cntrl->regs)
+		goto error_ioremap;
+
+	ret = pci_alloc_irq_vectors(pci_dev, mhi_cntrl->msi_required,
+				    mhi_cntrl->msi_required, PCI_IRQ_MSI);
+	if (IS_ERR_VALUE((ulong)ret) || ret < mhi_cntrl->msi_required)
+		goto error_req_msi;
+
+	mhi_cntrl->msi_allocated = ret;
+	mhi_cntrl->irq = kmalloc_array(mhi_cntrl->msi_allocated,
+				       sizeof(*mhi_cntrl->irq), GFP_KERNEL);
+	if (!mhi_cntrl->irq) {
+		ret = -ENOMEM;
+		goto error_alloc_msi_vec;
+	}
+
+	for (i = 0; i < mhi_cntrl->msi_allocated; i++) {
+		mhi_cntrl->irq[i] = pci_irq_vector(pci_dev, i);
+		if (mhi_cntrl->irq[i] < 0) {
+			ret = mhi_cntrl->irq[i];
+			goto error_get_irq_vec;
+		}
+	}
+
+	dev_set_drvdata(&pci_dev->dev, mhi_cntrl);
+
+	/* configure runtime pm */
+	pm_runtime_set_autosuspend_delay(&pci_dev->dev, MHI_RPM_SUSPEND_TMR_MS);
+	pm_runtime_use_autosuspend(&pci_dev->dev);
+	pm_suspend_ignore_children(&pci_dev->dev, true);
+
+	/*
+	 * The pci framework will increment the usage count (twice) before
+	 * calling the local device driver probe function:
+	 * 1st, pci.c pci_pm_init() calls pm_runtime_forbid
+	 * 2nd, pci-driver.c local_pci_probe calls pm_runtime_get_sync
+	 * The framework expects the pci device driver to call
+	 * pm_runtime_put_noidle to decrement the usage count after a
+	 * successful probe and to call pm_runtime_allow to enable
+	 * runtime suspend.
+	 */
+	pm_runtime_mark_last_busy(&pci_dev->dev);
+	pm_runtime_put_noidle(&pci_dev->dev);
+
+	return 0;
+
+error_get_irq_vec:
+	kfree(mhi_cntrl->irq);
+	mhi_cntrl->irq = NULL;
+
+error_alloc_msi_vec:
+	pci_free_irq_vectors(pci_dev);
+
+error_req_msi:
+	iounmap(mhi_cntrl->regs);
+
+error_ioremap:
+	pci_clear_master(pci_dev);
+	pci_release_region(pci_dev, mhi_dev->resn);
+
+error_request_region:
+	pci_disable_device(pci_dev);
+
+error_enable_device:
+	return ret;
+}
+
+static int mhi_runtime_suspend(struct device *dev)
+{
+	int ret = 0;
+	struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
+
+	dev_info(mhi_cntrl->dev, "Enter\n");
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+
+	ret = mhi_pm_suspend(mhi_cntrl);
+	if (ret)
+		goto exit_runtime_suspend;
+
+	ret = mhi_arch_link_off(mhi_cntrl, true);
+
+exit_runtime_suspend:
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	return ret;
+}
+
+static int mhi_runtime_idle(struct device *dev)
+{
+	/*
+	 * During runtime resume, the RPM framework always calls
+	 * rpm_idle to see if the device is ready to suspend:
+	 * if the dev.power usage_count is 0, the framework invokes the
+	 * rpm_idle cb to check whether the device is ready to suspend;
+	 * if the cb returns 0, or no cb is defined, the framework
+	 * assumes the device driver is ready to suspend and
+	 * therefore schedules a runtime suspend.
+	 * In MHI power management, the MHI host shall go to
+	 * runtime suspend only after entering MHI state M2, even if the
+	 * usage count is 0. Return -EBUSY to disable automatic suspend.
+	 */
+	return -EBUSY;
+}
+
+static int mhi_runtime_resume(struct device *dev)
+{
+	int ret = 0;
+	struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
+	struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+
+	dev_info(mhi_cntrl->dev, "Enter\n");
+
+	mutex_lock(&mhi_cntrl->pm_mutex);
+
+	if (!mhi_dev->powered_on) {
+		mutex_unlock(&mhi_cntrl->pm_mutex);
+		return 0;
+	}
+
+	/* turn on link */
+	ret = mhi_arch_link_on(mhi_cntrl);
+	if (ret)
+		goto rpm_resume_exit;
+
+	/* enter M0 state */
+	ret = mhi_pm_resume(mhi_cntrl);
+
+rpm_resume_exit:
+	mutex_unlock(&mhi_cntrl->pm_mutex);
+
+	return ret;
+}
+
+static int mhi_system_resume(struct device *dev)
+{
+	int ret = 0;
+	struct mhi_controller *mhi_cntrl = dev_get_drvdata(dev);
+
+	ret = mhi_runtime_resume(dev);
+	if (ret) {
+		dev_err(mhi_cntrl->dev, "Failed to resume link\n");
+	} else {
+		pm_runtime_set_active(dev);
+		pm_runtime_enable(dev);
+	}
+
+	return ret;
+}
+
+static int mhi_system_suspend(struct device *dev)
+{
+	/* if rpm status still active then force suspend */
+	if (!pm_runtime_status_suspended(dev))
+		return mhi_runtime_suspend(dev);
+
+	pm_runtime_set_suspended(dev);
+	pm_runtime_disable(dev);
+
+	return 0;
+}
+
+/* checks if link is down */
+static int mhi_link_status(struct mhi_controller *mhi_cntrl, void *priv)
+{
+	struct mhi_dev *mhi_dev = priv;
+	u16 dev_id;
+	int ret;
+
+	/* try reading the device id; if it doesn't match, the link is down */
+	ret = pci_read_config_word(mhi_dev->pci_dev, PCI_DEVICE_ID, &dev_id);
+
+	return (ret || dev_id != mhi_cntrl->dev_id) ? -EIO : 0;
+}
+
+static int mhi_runtime_get(struct mhi_controller *mhi_cntrl, void *priv)
+{
+	struct mhi_dev *mhi_dev = priv;
+	struct device *dev = &mhi_dev->pci_dev->dev;
+
+	return pm_runtime_get(dev);
+}
+
+static void mhi_runtime_put(struct mhi_controller *mhi_cntrl, void *priv)
+{
+	struct mhi_dev *mhi_dev = priv;
+	struct device *dev = &mhi_dev->pci_dev->dev;
+
+	pm_runtime_put_noidle(dev);
+}
+
+static void mhi_status_cb(struct mhi_controller *mhi_cntrl,
+			  void *priv,
+			  enum MHI_CB reason)
+{
+	struct mhi_dev *mhi_dev = priv;
+	struct device *dev = &mhi_dev->pci_dev->dev;
+
+	if (reason == MHI_CB_IDLE) {
+		pm_runtime_mark_last_busy(dev);
+		pm_request_autosuspend(dev);
+	}
+}
+
+static struct mhi_controller *mhi_register_controller(struct pci_dev *pci_dev)
+{
+	struct mhi_controller *mhi_cntrl;
+	struct mhi_dev *mhi_dev;
+	struct device_node *of_node = pci_dev->dev.of_node;
+	const struct firmware_info *firmware_info;
+	bool use_bb;
+	u64 addr_win[2];
+	int ret, i;
+
+	if (!of_node)
+		return ERR_PTR(-ENODEV);
+
+	mhi_cntrl = mhi_alloc_controller(sizeof(*mhi_dev));
+	if (!mhi_cntrl)
+		return ERR_PTR(-ENOMEM);
+
+	mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+
+	mhi_cntrl->domain = pci_domain_nr(pci_dev->bus);
+	mhi_cntrl->dev_id = pci_dev->device;
+	mhi_cntrl->bus = pci_dev->bus->number;
+	mhi_cntrl->slot = PCI_SLOT(pci_dev->devfn);
+
+	ret = of_property_read_u32(of_node, "qcom,smmu-cfg",
+				   &mhi_dev->smmu_cfg);
+	if (ret)
+		goto error_register;
+
+	use_bb = of_property_read_bool(of_node, "mhi,use-bb");
+
+	/*
+	 * if s1 translation is enabled or a bounce buffer is in use, pull
+	 * the iova address range from dt
+	 */
+	if (use_bb || (mhi_dev->smmu_cfg & MHI_SMMU_ATTACH &&
+		       !(mhi_dev->smmu_cfg & MHI_SMMU_S1_BYPASS))) {
+		ret = of_property_count_elems_of_size(of_node, "qcom,addr-win",
+						      sizeof(addr_win));
+		if (ret != 1)
+			goto error_register;
+		ret = of_property_read_u64_array(of_node, "qcom,addr-win",
+						 addr_win, 2);
+		if (ret)
+			goto error_register;
+	} else {
+		addr_win[0] = memblock_start_of_DRAM();
+		addr_win[1] = memblock_end_of_DRAM();
+	}
+
+	mhi_dev->iova_start = addr_win[0];
+	mhi_dev->iova_stop = addr_win[1];
+
+	/*
+	 * If S1 is enabled, set MHI_CTRL start address to 0 so we can use low
+	 * level mapping api to map buffers outside of smmu domain
+	 */
+	if (mhi_dev->smmu_cfg & MHI_SMMU_ATTACH &&
+	    !(mhi_dev->smmu_cfg & MHI_SMMU_S1_BYPASS))
+		mhi_cntrl->iova_start = 0;
+	else
+		mhi_cntrl->iova_start = addr_win[0];
+
+	mhi_cntrl->iova_stop = mhi_dev->iova_stop;
+	mhi_cntrl->of_node = of_node;
+
+	mhi_dev->pci_dev = pci_dev;
+
+	/* setup power management apis */
+	mhi_cntrl->status_cb = mhi_status_cb;
+	mhi_cntrl->runtime_get = mhi_runtime_get;
+	mhi_cntrl->runtime_put = mhi_runtime_put;
+	mhi_cntrl->link_status = mhi_link_status;
+
+	ret = of_register_mhi_controller(mhi_cntrl);
+	if (ret)
+		goto error_register;
+
+	for (i = 0; i < ARRAY_SIZE(firmware_table); i++) {
+		firmware_info = firmware_table + i;
+		if (mhi_cntrl->dev_id == firmware_info->dev_id)
+			break;
+	}
+
+	mhi_cntrl->fw_image = firmware_info->fw_image;
+	mhi_cntrl->edl_image = firmware_info->edl_image;
+
+	return mhi_cntrl;
+
+error_register:
+	mhi_free_controller(mhi_cntrl);
+
+	return ERR_PTR(-EINVAL);
+}
+
+int mhi_pci_probe(struct pci_dev *pci_dev,
+		  const struct pci_device_id *device_id)
+{
+	struct mhi_controller *mhi_cntrl;
+	u32 domain = pci_domain_nr(pci_dev->bus);
+	u32 bus = pci_dev->bus->number;
+	u32 dev_id = pci_dev->device;
+	u32 slot = PCI_SLOT(pci_dev->devfn);
+	struct mhi_dev *mhi_dev;
+	int ret;
+
+	/* see if we already registered */
+	mhi_cntrl = mhi_bdf_to_controller(domain, bus, slot, dev_id);
+	if (!mhi_cntrl)
+		mhi_cntrl = mhi_register_controller(pci_dev);
+
+	if (IS_ERR(mhi_cntrl))
+		return PTR_ERR(mhi_cntrl);
+
+	mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+	mhi_dev->powered_on = true;
+
+	ret = mhi_arch_pcie_init(mhi_cntrl);
+	if (ret)
+		return ret;
+
+	ret = mhi_arch_iommu_init(mhi_cntrl);
+	if (ret)
+		goto error_iommu_init;
+
+	ret = mhi_init_pci_dev(mhi_cntrl);
+	if (ret)
+		goto error_init_pci;
+
+	/* start power up sequence */
+	ret = mhi_async_power_up(mhi_cntrl);
+	if (ret)
+		goto error_power_up;
+
+	pm_runtime_mark_last_busy(&pci_dev->dev);
+	pm_runtime_allow(&pci_dev->dev);
+
+	return 0;
+
+error_power_up:
+	mhi_deinit_pci_dev(mhi_cntrl);
+
+error_init_pci:
+	mhi_arch_iommu_deinit(mhi_cntrl);
+
+error_iommu_init:
+	mhi_arch_pcie_deinit(mhi_cntrl);
+
+	return ret;
+}
+
+static const struct dev_pm_ops pm_ops = {
+	SET_RUNTIME_PM_OPS(mhi_runtime_suspend,
+			   mhi_runtime_resume,
+			   mhi_runtime_idle)
+	SET_SYSTEM_SLEEP_PM_OPS(mhi_system_suspend, mhi_system_resume)
+};
+
+static struct pci_device_id mhi_pcie_device_id[] = {
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0300)},
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0301)},
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0302)},
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0303)},
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0304)},
+	{PCI_DEVICE(MHI_PCIE_VENDOR_ID, 0x0305)},
+	{0},
+};
+
+static struct pci_driver mhi_pcie_driver = {
+	.name = "mhi",
+	.id_table = mhi_pcie_device_id,
+	.probe = mhi_pci_probe,
+	.driver = {
+		.pm = &pm_ops
+	}
+};
+
+module_pci_driver(mhi_pcie_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("MHI_CORE");
+MODULE_DESCRIPTION("MHI Host Driver");
diff --git a/drivers/bus/mhi/controllers/mhi_qcom.h b/drivers/bus/mhi/controllers/mhi_qcom.h
new file mode 100644
index 0000000..41acd20
--- /dev/null
+++ b/drivers/bus/mhi/controllers/mhi_qcom.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ */
+#ifndef _MHI_QCOM_
+#define _MHI_QCOM_
+
+/* iova cfg bitmask */
+#define MHI_SMMU_ATTACH BIT(0)
+#define MHI_SMMU_S1_BYPASS BIT(1)
+#define MHI_SMMU_FAST BIT(2)
+#define MHI_SMMU_ATOMIC BIT(3)
+#define MHI_SMMU_FORCE_COHERENT BIT(4)
+
+#define MHI_PCIE_VENDOR_ID (0x17cb)
+#define MHI_RPM_SUSPEND_TMR_MS (1000)
+#define MHI_PCI_BAR_NUM (0)
+
+struct mhi_dev {
+	struct pci_dev *pci_dev;
+	u32 smmu_cfg;
+	int resn;
+	void *arch_info;
+	bool powered_on;
+	dma_addr_t iova_start;
+	dma_addr_t iova_stop;
+};
+
+void mhi_deinit_pci_dev(struct mhi_controller *mhi_cntrl);
+int mhi_pci_probe(struct pci_dev *pci_dev,
+		  const struct pci_device_id *device_id);
+
+static inline int mhi_arch_iommu_init(struct mhi_controller *mhi_cntrl)
+{
+	struct mhi_dev *mhi_dev = mhi_controller_get_devdata(mhi_cntrl);
+
+	mhi_cntrl->dev = &mhi_dev->pci_dev->dev;
+
+	return dma_set_mask_and_coherent(mhi_cntrl->dev, DMA_BIT_MASK(64));
+}
+
+static inline void mhi_arch_iommu_deinit(struct mhi_controller *mhi_cntrl)
+{
+}
+
+static inline int mhi_arch_pcie_init(struct mhi_controller *mhi_cntrl)
+{
+	return 0;
+}
+
+static inline void mhi_arch_pcie_deinit(struct mhi_controller *mhi_cntrl)
+{
+}
+
+static inline int mhi_arch_link_off(struct mhi_controller *mhi_cntrl,
+				    bool graceful)
+{
+	return 0;
+}
+
+static inline int mhi_arch_link_on(struct mhi_controller *mhi_cntrl)
+{
+	return 0;
+}
+
+#endif /* _MHI_QCOM_ */
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v2 7/7] mhi_bus: dev: uci: add user space interface driver
  2018-07-09 20:08 ` MHI code review Sujeev Dias
                     ` (5 preceding siblings ...)
  2018-07-09 20:08   ` [PATCH v2 6/7] mhi_bus: controller: MHI support for QCOM modems Sujeev Dias
@ 2018-07-09 20:08   ` Sujeev Dias
  2019-04-30 15:10   ` MHI code review Daniele Palmas
  7 siblings, 0 replies; 54+ messages in thread
From: Sujeev Dias @ 2018-07-09 20:08 UTC (permalink / raw)
  To: Greg Kroah-Hartman, Arnd Bergmann
  Cc: linux-kernel, linux-arm-msm, devicetree, Sujeev Dias,
	Tony Truong, Siddartha Mohanadoss

This module allows user space clients to transfer data
between the external modem and the host using standard file
operations.

Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
Reviewed-by: Tony Truong <truong@codeaurora.org>
Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
---
 arch/arm64/configs/defconfig      |   1 +
 drivers/bus/Kconfig               |   1 +
 drivers/bus/mhi/Makefile          |   3 +-
 drivers/bus/mhi/devices/Kconfig   |  13 +
 drivers/bus/mhi/devices/Makefile  |   1 +
 drivers/bus/mhi/devices/mhi_uci.c | 590 ++++++++++++++++++++++++++++++++++++++
 6 files changed, 608 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/mhi/devices/Kconfig
 create mode 100644 drivers/bus/mhi/devices/Makefile
 create mode 100644 drivers/bus/mhi/devices/mhi_uci.c
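
As a sketch of the intended user space usage (the device node path below is
hypothetical; real names follow the device_create() naming pattern used in
mhi_uci_probe()):

	int fd = open("/dev/mhi_0304_00.01.00_pipe_14", O_RDWR);
	char cmd[] = "AT\r";
	char resp[256];
	ssize_t n;

	write(fd, cmd, sizeof(cmd) - 1);	/* queued on the UL channel */
	n = read(fd, resp, sizeof(resp));	/* blocks until DL data arrives */
	close(fd);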

diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index fd36426..7f68d706 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -169,6 +169,7 @@ CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_DMA_CMA=y
 CONFIG_MHI_BUS=y
 CONFIG_MHI_QCOM=y
+CONFIG_MHI_UCI=y
 CONFIG_MTD=y
 CONFIG_MTD_BLOCK=y
 CONFIG_MTD_M25P80=y
diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
index 8c6827a..cb10366 100644
--- a/drivers/bus/Kconfig
+++ b/drivers/bus/Kconfig
@@ -181,5 +181,6 @@ config MHI_BUS
 
 source "drivers/bus/fsl-mc/Kconfig"
 source "drivers/bus/mhi/controllers/Kconfig"
+source "drivers/bus/mhi/devices/Kconfig"
 
 endmenu
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
index 3458ba3..2382e04 100644
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -4,4 +4,5 @@
 
 # core layer
 obj-y += core/
-obj-y += controllers/
\ No newline at end of file
+obj-y += controllers/
+obj-y += devices/
diff --git a/drivers/bus/mhi/devices/Kconfig b/drivers/bus/mhi/devices/Kconfig
new file mode 100644
index 0000000..45cd742
--- /dev/null
+++ b/drivers/bus/mhi/devices/Kconfig
@@ -0,0 +1,13 @@
+menu "MHI device support"
+
+config MHI_UCI
+       tristate "MHI UCI"
+       depends on MHI_BUS
+       help
+	  The MHI based UCI driver transfers data between the host and the
+	  modem using standard file operations from user space. Open, read,
+	  write, ioctl, and close operations are supported by this driver.
+	  Please check mhi_uci_match_table for all supported channels that
+	  are exposed to userspace.
+
+endmenu
diff --git a/drivers/bus/mhi/devices/Makefile b/drivers/bus/mhi/devices/Makefile
new file mode 100644
index 0000000..ddc0613
--- /dev/null
+++ b/drivers/bus/mhi/devices/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_MHI_UCI) += mhi_uci.o
diff --git a/drivers/bus/mhi/devices/mhi_uci.c b/drivers/bus/mhi/devices/mhi_uci.c
new file mode 100644
index 0000000..ad60a15
--- /dev/null
+++ b/drivers/bus/mhi/devices/mhi_uci.c
@@ -0,0 +1,590 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#include <linux/cdev.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/poll.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+#include <linux/uaccess.h>
+#include <linux/mhi.h>
+
+#define DEVICE_NAME "mhi"
+#define MHI_UCI_DRIVER_NAME "mhi_uci"
+
+struct uci_chan {
+	wait_queue_head_t wq;
+	spinlock_t lock;
+	struct list_head pending; /* user space waiting to read */
+	struct uci_buf *cur_buf; /* current buffer user space reading */
+	size_t rx_size;
+};
+
+struct uci_buf {
+	void *data;
+	size_t len;
+	struct list_head node;
+};
+
+struct uci_dev {
+	struct list_head node;
+	dev_t devt;
+	struct device *dev;
+	struct mhi_device *mhi_dev;
+	const char *chan;
+	struct mutex mutex; /* sync open and close */
+	struct uci_chan ul_chan;
+	struct uci_chan dl_chan;
+	size_t mtu;
+	int ref_count;
+	bool enabled;
+};
+
+struct mhi_uci_drv {
+	struct list_head head;
+	struct mutex lock;
+	struct class *class;
+	int major;
+	dev_t dev_t;
+};
+
+#define MAX_UCI_DEVICES (64)
+
+static DECLARE_BITMAP(uci_minors, MAX_UCI_DEVICES);
+static struct mhi_uci_drv mhi_uci_drv;
+
+static int mhi_queue_inbound(struct uci_dev *uci_dev)
+{
+	struct mhi_device *mhi_dev = uci_dev->mhi_dev;
+	int nr_trbs = mhi_get_no_free_descriptors(mhi_dev, DMA_FROM_DEVICE);
+	size_t mtu = uci_dev->mtu;
+	void *buf;
+	struct uci_buf *uci_buf;
+	int ret = -EIO, i;
+
+	for (i = 0; i < nr_trbs; i++) {
+		buf = kmalloc(mtu + sizeof(*uci_buf), GFP_KERNEL);
+		if (!buf)
+			return -ENOMEM;
+
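+		/* the uci_buf bookkeeping struct lives at the tail of the
+		 * allocation, just past the mtu-sized data area
+		 */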
+		uci_buf = buf + mtu;
+		uci_buf->data = buf;
+
+		ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE, buf, mtu,
+					 MHI_EOT);
+		if (ret) {
+			kfree(buf);
+			return ret;
+		}
+	}
+
+	return ret;
+}
+
+static long mhi_uci_ioctl(struct file *file,
+			  unsigned int cmd,
+			  unsigned long arg)
+{
+	struct uci_dev *uci_dev = file->private_data;
+	struct mhi_device *mhi_dev = uci_dev->mhi_dev;
+	long ret = -ERESTARTSYS;
+
+	mutex_lock(&uci_dev->mutex);
+	if (uci_dev->enabled)
+		ret = mhi_ioctl(mhi_dev, cmd, arg);
+	mutex_unlock(&uci_dev->mutex);
+
+	return ret;
+}
+
+static int mhi_uci_release(struct inode *inode, struct file *file)
+{
+	struct uci_dev *uci_dev = file->private_data;
+
+	mutex_lock(&uci_dev->mutex);
+	uci_dev->ref_count--;
+	if (!uci_dev->ref_count) {
+		struct uci_buf *itr, *tmp;
+		struct uci_chan *uci_chan;
+
+		if (uci_dev->enabled)
+			mhi_unprepare_from_transfer(uci_dev->mhi_dev);
+
+		/* clean inbound channel */
+		uci_chan = &uci_dev->dl_chan;
+		list_for_each_entry_safe(itr, tmp, &uci_chan->pending, node) {
+			list_del(&itr->node);
+			kfree(itr->data);
+		}
+		if (uci_chan->cur_buf)
+			kfree(uci_chan->cur_buf->data);
+
+		uci_chan->cur_buf = NULL;
+
+		if (!uci_dev->enabled) {
+			mutex_unlock(&uci_dev->mutex);
+			mutex_destroy(&uci_dev->mutex);
+			clear_bit(MINOR(uci_dev->devt), uci_minors);
+			kfree(uci_dev);
+			return 0;
+		}
+	}
+
+	mutex_unlock(&uci_dev->mutex);
+
+	return 0;
+}
+
+static unsigned int mhi_uci_poll(struct file *file, poll_table *wait)
+{
+	struct uci_dev *uci_dev = file->private_data;
+	struct mhi_device *mhi_dev = uci_dev->mhi_dev;
+	struct uci_chan *uci_chan;
+	unsigned int mask = 0;
+
+	poll_wait(file, &uci_dev->dl_chan.wq, wait);
+	poll_wait(file, &uci_dev->ul_chan.wq, wait);
+
+	uci_chan = &uci_dev->dl_chan;
+	spin_lock_bh(&uci_chan->lock);
+	if (!uci_dev->enabled)
+		mask = POLLERR;
+	else if (!list_empty(&uci_chan->pending) || uci_chan->cur_buf)
+		mask |= POLLIN | POLLRDNORM;
+
+	spin_unlock_bh(&uci_chan->lock);
+
+	uci_chan = &uci_dev->ul_chan;
+	spin_lock_bh(&uci_chan->lock);
+	if (!uci_dev->enabled)
+		mask |= POLLERR;
+	else if (mhi_get_no_free_descriptors(mhi_dev, DMA_TO_DEVICE) > 0)
+		mask |= POLLOUT | POLLWRNORM;
+
+	spin_unlock_bh(&uci_chan->lock);
+
+	return mask;
+}
+
+static ssize_t mhi_uci_write(struct file *file,
+			     const char __user *buf,
+			     size_t count,
+			     loff_t *offp)
+{
+	struct uci_dev *uci_dev = file->private_data;
+	struct mhi_device *mhi_dev = uci_dev->mhi_dev;
+	struct uci_chan *uci_chan = &uci_dev->ul_chan;
+	size_t bytes_xfered = 0;
+	int ret, nr_avail;
+
+	if (!buf || !count)
+		return -EINVAL;
+
+	/* confirm channel is active */
+	spin_lock_bh(&uci_chan->lock);
+	if (!uci_dev->enabled) {
+		spin_unlock_bh(&uci_chan->lock);
+		return -ERESTARTSYS;
+	}
+
+	while (count) {
+		size_t xfer_size;
+		void *kbuf;
+		enum MHI_FLAGS flags;
+
+		spin_unlock_bh(&uci_chan->lock);
+
+		/* wait for free descriptors */
+		ret = wait_event_interruptible(uci_chan->wq,
+			(!uci_dev->enabled) ||
+			(nr_avail = mhi_get_no_free_descriptors(mhi_dev,
+							DMA_TO_DEVICE)) > 0);
+
+		if (ret == -ERESTARTSYS || !uci_dev->enabled)
+			return -ERESTARTSYS;
+
+		xfer_size = min_t(size_t, count, uci_dev->mtu);
+		kbuf = kmalloc(xfer_size, GFP_KERNEL);
+		if (!kbuf)
+			return -ENOMEM;
+
+		ret = copy_from_user(kbuf, buf, xfer_size);
+		if (unlikely(ret)) {
+			/* copy_from_user returns bytes left, not an errno */
+			kfree(kbuf);
+			return -EFAULT;
+		}
+
+		spin_lock_bh(&uci_chan->lock);
+
+		/* if ring is full after this force EOT */
+		if (nr_avail > 1 && (count - xfer_size))
+			flags = MHI_CHAIN;
+		else
+			flags = MHI_EOT;
+
+		if (uci_dev->enabled)
+			ret = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, kbuf,
+						 xfer_size, flags);
+		else
+			ret = -ERESTARTSYS;
+
+		if (ret) {
+			kfree(kbuf);
+			goto sys_interrupt;
+		}
+
+		bytes_xfered += xfer_size;
+		count -= xfer_size;
+		buf += xfer_size;
+	}
+
+	spin_unlock_bh(&uci_chan->lock);
+
+	return bytes_xfered;
+
+sys_interrupt:
+	spin_unlock_bh(&uci_chan->lock);
+
+	return ret;
+}
+
+static ssize_t mhi_uci_read(struct file *file,
+			    char __user *buf,
+			    size_t count,
+			    loff_t *ppos)
+{
+	struct uci_dev *uci_dev = file->private_data;
+	struct mhi_device *mhi_dev = uci_dev->mhi_dev;
+	struct uci_chan *uci_chan = &uci_dev->dl_chan;
+	struct uci_buf *uci_buf;
+	char *ptr;
+	size_t to_copy;
+	int ret = 0;
+
+	if (!buf)
+		return -EINVAL;
+
+	/* confirm channel is active */
+	spin_lock_bh(&uci_chan->lock);
+	if (!uci_dev->enabled) {
+		spin_unlock_bh(&uci_chan->lock);
+		return -ERESTARTSYS;
+	}
+
+	/* No data available to read, wait */
+	if (!uci_chan->cur_buf && list_empty(&uci_chan->pending)) {
+		spin_unlock_bh(&uci_chan->lock);
+		ret = wait_event_interruptible(uci_chan->wq,
+				(!uci_dev->enabled ||
+				 !list_empty(&uci_chan->pending)));
+		if (ret == -ERESTARTSYS)
+			return -ERESTARTSYS;
+
+		spin_lock_bh(&uci_chan->lock);
+		if (!uci_dev->enabled) {
+			ret = -ERESTARTSYS;
+			goto read_error;
+		}
+	}
+
+	/* new read, get the next descriptor from the list */
+	if (!uci_chan->cur_buf) {
+		uci_buf = list_first_entry_or_null(&uci_chan->pending,
+						   struct uci_buf, node);
+		if (unlikely(!uci_buf)) {
+			ret = -EIO;
+			goto read_error;
+		}
+
+		list_del(&uci_buf->node);
+		uci_chan->cur_buf = uci_buf;
+		uci_chan->rx_size = uci_buf->len;
+	}
+
+	uci_buf = uci_chan->cur_buf;
+	spin_unlock_bh(&uci_chan->lock);
+
+	/* Copy the buffer to user space */
+	to_copy = min_t(size_t, count, uci_chan->rx_size);
+	ptr = uci_buf->data + (uci_buf->len - uci_chan->rx_size);
+	ret = copy_to_user(buf, ptr, to_copy);
+	if (ret)
+		return -EFAULT; /* copy_to_user returns bytes not copied */
+
+	uci_chan->rx_size -= to_copy;
+
+	/* we finished with this buffer, queue it back to hardware */
+	if (!uci_chan->rx_size) {
+		spin_lock_bh(&uci_chan->lock);
+		uci_chan->cur_buf = NULL;
+
+		if (uci_dev->enabled)
+			ret = mhi_queue_transfer(mhi_dev, DMA_FROM_DEVICE,
+						 uci_buf->data, uci_dev->mtu,
+						 MHI_EOT);
+		else
+			ret = -ERESTARTSYS;
+
+		if (ret) {
+			kfree(uci_buf->data);
+			goto read_error;
+		}
+
+		spin_unlock_bh(&uci_chan->lock);
+	}
+
+	return to_copy;
+
+read_error:
+	spin_unlock_bh(&uci_chan->lock);
+
+	return ret;
+}
+
+static int mhi_uci_open(struct inode *inode, struct file *filp)
+{
+	struct uci_dev *uci_dev;
+	int ret = -EIO;
+	struct uci_buf *buf_itr, *tmp;
+	struct uci_chan *dl_chan;
+
+	mutex_lock(&mhi_uci_drv.lock);
+	list_for_each_entry(uci_dev, &mhi_uci_drv.head, node) {
+		if (uci_dev->devt == inode->i_rdev) {
+			ret = 0;
+			break;
+		}
+	}
+	mutex_unlock(&mhi_uci_drv.lock);
+
+	/* could not find a minor node */
+	if (ret)
+		return ret;
+
+	mutex_lock(&uci_dev->mutex);
+	if (!uci_dev->enabled)
+		goto error_open_chan;
+
+	uci_dev->ref_count++;
+
+	if (uci_dev->ref_count == 1) {
+		ret = mhi_prepare_for_transfer(uci_dev->mhi_dev);
+		if (ret) {
+			uci_dev->ref_count--;
+			goto error_open_chan;
+		}
+
+		ret = mhi_queue_inbound(uci_dev);
+		if (ret)
+			goto error_rx_queue;
+	}
+
+	filp->private_data = uci_dev;
+	mutex_unlock(&uci_dev->mutex);
+
+	return 0;
+
+ error_rx_queue:
+	dl_chan = &uci_dev->dl_chan;
+	mhi_unprepare_from_transfer(uci_dev->mhi_dev);
+	list_for_each_entry_safe(buf_itr, tmp, &dl_chan->pending, node) {
+		list_del(&buf_itr->node);
+		kfree(buf_itr->data);
+	}
+
+ error_open_chan:
+	mutex_unlock(&uci_dev->mutex);
+
+	return ret;
+}
+
+static const struct file_operations mhidev_fops = {
+	.open = mhi_uci_open,
+	.release = mhi_uci_release,
+	.read = mhi_uci_read,
+	.write = mhi_uci_write,
+	.poll = mhi_uci_poll,
+	.unlocked_ioctl = mhi_uci_ioctl,
+};
+
+static void mhi_uci_remove(struct mhi_device *mhi_dev)
+{
+	struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
+
+	/* disable the node */
+	mutex_lock(&uci_dev->mutex);
+	spin_lock_irq(&uci_dev->dl_chan.lock);
+	spin_lock_irq(&uci_dev->ul_chan.lock);
+	uci_dev->enabled = false;
+	spin_unlock_irq(&uci_dev->ul_chan.lock);
+	spin_unlock_irq(&uci_dev->dl_chan.lock);
+	wake_up(&uci_dev->dl_chan.wq);
+	wake_up(&uci_dev->ul_chan.wq);
+
+	/* delete the node to prevent new opens */
+	device_destroy(mhi_uci_drv.class, uci_dev->devt);
+	uci_dev->dev = NULL;
+	mutex_lock(&mhi_uci_drv.lock);
+	list_del(&uci_dev->node);
+	mutex_unlock(&mhi_uci_drv.lock);
+
+	/* safe to free memory only if all file nodes are closed */
+	if (!uci_dev->ref_count) {
+		mutex_unlock(&uci_dev->mutex);
+		mutex_destroy(&uci_dev->mutex);
+		clear_bit(MINOR(uci_dev->devt), uci_minors);
+		kfree(uci_dev);
+		return;
+	}
+
+	mutex_unlock(&uci_dev->mutex);
+}
+
+static int mhi_uci_probe(struct mhi_device *mhi_dev,
+			 const struct mhi_device_id *id)
+{
+	struct uci_dev *uci_dev;
+	int minor;
+	int dir;
+
+	uci_dev = kzalloc(sizeof(*uci_dev), GFP_KERNEL);
+	if (!uci_dev)
+		return -ENOMEM;
+
+	mutex_init(&uci_dev->mutex);
+	uci_dev->mhi_dev = mhi_dev;
+
+	minor = find_first_zero_bit(uci_minors, MAX_UCI_DEVICES);
+	if (minor >= MAX_UCI_DEVICES) {
+		kfree(uci_dev);
+		return -ENOSPC;
+	}
+
+	mutex_lock(&uci_dev->mutex);
+	mutex_lock(&mhi_uci_drv.lock);
+
+	uci_dev->devt = MKDEV(mhi_uci_drv.major, minor);
+	uci_dev->dev = device_create(mhi_uci_drv.class, &mhi_dev->dev,
+				     uci_dev->devt, uci_dev,
+				     DEVICE_NAME "_%04x_%02u.%02u.%02u%s%d",
+				     mhi_dev->dev_id, mhi_dev->domain,
+				     mhi_dev->bus, mhi_dev->slot, "_pipe_",
+				     mhi_dev->ul_chan_id);
+	set_bit(minor, uci_minors);
+
+	for (dir = 0; dir < 2; dir++) {
+		struct uci_chan *uci_chan = (dir) ?
+			&uci_dev->ul_chan : &uci_dev->dl_chan;
+		spin_lock_init(&uci_chan->lock);
+		init_waitqueue_head(&uci_chan->wq);
+		INIT_LIST_HEAD(&uci_chan->pending);
+	}
+
+	uci_dev->mtu = min_t(size_t, id->driver_data, mhi_dev->mtu);
+	mhi_device_set_devdata(mhi_dev, uci_dev);
+	uci_dev->enabled = true;
+
+	list_add(&uci_dev->node, &mhi_uci_drv.head);
+	mutex_unlock(&mhi_uci_drv.lock);
+	mutex_unlock(&uci_dev->mutex);
+
+	return 0;
+}
+
+static void mhi_ul_xfer_cb(struct mhi_device *mhi_dev,
+			   struct mhi_result *mhi_result)
+{
+	struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
+	struct uci_chan *uci_chan = &uci_dev->ul_chan;
+
+	kfree(mhi_result->buf_addr);
+	if (!mhi_result->transaction_status)
+		wake_up(&uci_chan->wq);
+}
+
+static void mhi_dl_xfer_cb(struct mhi_device *mhi_dev,
+			   struct mhi_result *mhi_result)
+{
+	struct uci_dev *uci_dev = mhi_device_get_devdata(mhi_dev);
+	struct uci_chan *uci_chan = &uci_dev->dl_chan;
+	unsigned long flags;
+	struct uci_buf *buf;
+
+	if (mhi_result->transaction_status == -ENOTCONN) {
+		kfree(mhi_result->buf_addr);
+		return;
+	}
+
+	spin_lock_irqsave(&uci_chan->lock, flags);
+	buf = mhi_result->buf_addr + uci_dev->mtu;
+	buf->data = mhi_result->buf_addr;
+	buf->len = mhi_result->bytes_xferd;
+	list_add_tail(&buf->node, &uci_chan->pending);
+	spin_unlock_irqrestore(&uci_chan->lock, flags);
+
+	wake_up(&uci_chan->wq);
+}
+
+/* .driver_data stores max mtu */
+static const struct mhi_device_id mhi_uci_match_table[] = {
+	{ .chan = "LOOPBACK", .driver_data = 0x1000 },
+	{ .chan = "SAHARA", .driver_data = 0x8000 },
+	{ .chan = "EFS", .driver_data = 0x1000 },
+	{ .chan = "QMI0", .driver_data = 0x1000 },
+	{ .chan = "QMI1", .driver_data = 0x1000 },
+	{ .chan = "TF", .driver_data = 0x1000 },
+	{ .chan = "BL", .driver_data = 0x1000 },
+	{ .chan = "DUN", .driver_data = 0x1000 },
+	{},
+};
+
+static struct mhi_driver mhi_uci_driver = {
+	.id_table = mhi_uci_match_table,
+	.remove = mhi_uci_remove,
+	.probe = mhi_uci_probe,
+	.ul_xfer_cb = mhi_ul_xfer_cb,
+	.dl_xfer_cb = mhi_dl_xfer_cb,
+	.driver = {
+		.name = MHI_UCI_DRIVER_NAME,
+		.owner = THIS_MODULE,
+	},
+};
+
+static int mhi_uci_init(void)
+{
+	int ret;
+
+	ret = register_chrdev(0, MHI_UCI_DRIVER_NAME, &mhidev_fops);
+	if (ret < 0)
+		return ret;
+
+	mhi_uci_drv.major = ret;
+	mhi_uci_drv.class = class_create(THIS_MODULE, MHI_UCI_DRIVER_NAME);
+	if (IS_ERR(mhi_uci_drv.class)) {
+		/* release the char device region reserved above */
+		unregister_chrdev(mhi_uci_drv.major, MHI_UCI_DRIVER_NAME);
+		return PTR_ERR(mhi_uci_drv.class);
+	}
+
+	mutex_init(&mhi_uci_drv.lock);
+	INIT_LIST_HEAD(&mhi_uci_drv.head);
+
+	ret = mhi_driver_register(&mhi_uci_driver);
+	if (ret) {
+		class_destroy(mhi_uci_drv.class);
+		unregister_chrdev(mhi_uci_drv.major, MHI_UCI_DRIVER_NAME);
+	}
+
+	return ret;
+}
+
+module_init(mhi_uci_init);
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("MHI_UCI");
+MODULE_DESCRIPTION("MHI UCI Driver");
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project

^ permalink raw reply related	[flat|nested] 54+ messages in thread

* Re: [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver
  2018-07-09 20:08   ` [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver Sujeev Dias
@ 2018-07-09 20:50     ` Greg Kroah-Hartman
  2018-07-09 20:52     ` Greg Kroah-Hartman
                       ` (3 subsequent siblings)
  4 siblings, 0 replies; 54+ messages in thread
From: Greg Kroah-Hartman @ 2018-07-09 20:50 UTC (permalink / raw)
  To: Sujeev Dias
  Cc: Arnd Bergmann, linux-kernel, linux-arm-msm, devicetree,
	Tony Truong, Siddartha Mohanadoss

On Mon, Jul 09, 2018 at 01:08:08PM -0700, Sujeev Dias wrote:
> This is the initial skeleton driver for mhi bus stack. MHI Host
> Interface is a communication protocol to be used by the host to
> control and communcate with modem over a high speed peripheral bus.
> This module will allow host to communicate with external devices that
> support MHI protocol.
> 
> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
> Reviewed-by: Tony Truong <truong@codeaurora.org>
> Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
> ---
>  Documentation/00-INDEX                        |   2 +
>  Documentation/devicetree/bindings/bus/mhi.txt | 258 ++++++++++++
>  Documentation/mhi.txt                         | 235 +++++++++++
>  drivers/bus/Kconfig                           |   8 +
>  drivers/bus/Makefile                          |   1 +
>  drivers/bus/mhi/Makefile                      |   6 +
>  drivers/bus/mhi/core/Makefile                 |   1 +
>  drivers/bus/mhi/core/mhi_init.c               | 538 ++++++++++++++++++++++++++
>  drivers/bus/mhi/core/mhi_internal.h           | 238 ++++++++++++
>  drivers/bus/mhi/core/mhi_main.c               | 122 ++++++
>  include/linux/mhi.h                           | 341 ++++++++++++++++
>  include/linux/mod_devicetable.h               |  12 +
>  12 files changed, 1762 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/bus/mhi.txt
>  create mode 100644 Documentation/mhi.txt
>  create mode 100644 drivers/bus/mhi/Makefile
>  create mode 100644 drivers/bus/mhi/core/Makefile
>  create mode 100644 drivers/bus/mhi/core/mhi_init.c
>  create mode 100644 drivers/bus/mhi/core/mhi_internal.h
>  create mode 100644 drivers/bus/mhi/core/mhi_main.c
>  create mode 100644 include/linux/mhi.h

You don't say what changed from v1 here :(

Please do so...

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver
  2018-07-09 20:08   ` [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver Sujeev Dias
  2018-07-09 20:50     ` Greg Kroah-Hartman
@ 2018-07-09 20:52     ` Greg Kroah-Hartman
  2018-07-10  6:36     ` Greg Kroah-Hartman
                       ` (2 subsequent siblings)
  4 siblings, 0 replies; 54+ messages in thread
From: Greg Kroah-Hartman @ 2018-07-09 20:52 UTC (permalink / raw)
  To: Sujeev Dias
  Cc: Arnd Bergmann, linux-kernel, linux-arm-msm, devicetree,
	Tony Truong, Siddartha Mohanadoss

On Mon, Jul 09, 2018 at 01:08:08PM -0700, Sujeev Dias wrote:
> This is the initial skeleton driver for mhi bus stack. MHI Host
> Interface is a communication protocol to be used by the host to
> control and communcate with modem over a high speed peripheral bus.
> This module will allow host to communicate with external devices that
> support MHI protocol.
> 
> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
> Reviewed-by: Tony Truong <truong@codeaurora.org>
> Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
> ---
>  Documentation/00-INDEX                        |   2 +
>  Documentation/devicetree/bindings/bus/mhi.txt | 258 ++++++++++++
>  Documentation/mhi.txt                         | 235 +++++++++++

Do not put .txt files in the root of the documentation directory for no
reason.  They should be in .rst and hopefully in the proper location for
where the documentation belongs.

And dt bindings always belong in a separate patch, you all know better
than to mush it into a large one like this...

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v2 3/7] mhi_bus: core: add support for data transfer
  2018-07-09 20:08   ` [PATCH v2 3/7] mhi_bus: core: add support for data transfer Sujeev Dias
@ 2018-07-10  6:29     ` Greg Kroah-Hartman
  0 siblings, 0 replies; 54+ messages in thread
From: Greg Kroah-Hartman @ 2018-07-10  6:29 UTC (permalink / raw)
  To: Sujeev Dias
  Cc: Arnd Bergmann, linux-kernel, linux-arm-msm, devicetree,
	Tony Truong, Siddartha Mohanadoss

On Mon, Jul 09, 2018 at 01:08:10PM -0700, Sujeev Dias wrote:
> +static void mhi_add_ring_element(struct mhi_controller *mhi_cntrl,
> +				 struct mhi_ring *ring)
> +{
> +	ring->wp += ring->el_size;
> +	if (ring->wp >= (ring->base + ring->len))
> +		ring->wp = ring->base;
> +	/* smp update */
> +	smp_wmb();

Why do this?  You need to comment the heck out of odd stuff like this.

Also, why are you using your own ring buffer code?  What's wrong with
the code that is in the kernel already for this type of thing?
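
For reference, <linux/circ_buf.h> already provides CIRC_SPACE()/CIRC_CNT()
for exactly this pattern; a minimal producer-side sketch (illustrative
only, the "ring" layout here is hypothetical and not the one in the
patch):

	#include <linux/circ_buf.h>

	/* size must be a power of two for the CIRC_* macros */
	unsigned long head = ring->head;
	unsigned long tail = READ_ONCE(ring->tail);

	if (CIRC_SPACE(head, tail, ring->size) >= 1) {
		ring->buf[head] = item;
		/* publish the element before moving head */
		smp_store_release(&ring->head,
				  (head + 1) & (ring->size - 1));
	}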

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver
  2018-07-09 20:08   ` [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver Sujeev Dias
  2018-07-09 20:50     ` Greg Kroah-Hartman
  2018-07-09 20:52     ` Greg Kroah-Hartman
@ 2018-07-10  6:36     ` Greg Kroah-Hartman
  2018-07-11 19:30     ` Rob Herring
  2018-08-09 18:39     ` Randy Dunlap
  4 siblings, 0 replies; 54+ messages in thread
From: Greg Kroah-Hartman @ 2018-07-10  6:36 UTC (permalink / raw)
  To: Sujeev Dias
  Cc: Arnd Bergmann, linux-kernel, linux-arm-msm, devicetree,
	Tony Truong, Siddartha Mohanadoss

On Mon, Jul 09, 2018 at 01:08:08PM -0700, Sujeev Dias wrote:
> +void mhi_driver_unregister(struct mhi_driver *mhi_drv)
> +{
> +	driver_unregister(&mhi_drv->driver);
> +}
> +EXPORT_SYMBOL(mhi_driver_unregister);

As you are just "wrapping" a core symbol here, please use
EXPORT_SYMBOL_GPL().  Actually, you should do that for all of the
exports here in this new bus, right?
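
That is, roughly (same body as quoted above, only the export macro
changes):

	void mhi_driver_unregister(struct mhi_driver *mhi_drv)
	{
		driver_unregister(&mhi_drv->driver);
	}
	EXPORT_SYMBOL_GPL(mhi_driver_unregister);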


> +static int __init mhi_init(void)
> +{
> +	/* parent directory */
> +	debugfs_create_dir(mhi_bus_type.name, NULL);

What do you do with this debugfs directory that you never save the
dentry for?  It looks like you forgot to remove it when this module is
removed from the system :(
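
One way to handle it (a minimal sketch; the mhi_exit() routine and the
bus_register() placement are assumptions, not code from the patch):

	static struct dentry *mhi_debugfs_root;

	static int __init mhi_init(void)
	{
		/* keep the dentry so it can be removed on module exit */
		mhi_debugfs_root = debugfs_create_dir(mhi_bus_type.name, NULL);
		return bus_register(&mhi_bus_type);
	}

	static void __exit mhi_exit(void)
	{
		bus_unregister(&mhi_bus_type);
		debugfs_remove_recursive(mhi_debugfs_root);
	}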

> +++ b/drivers/bus/mhi/core/mhi_main.c
> @@ -0,0 +1,122 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2018, The Linux Foundation. All rights reserved.
> + *
> + */
> +
> +#include <linux/debugfs.h>
> +#include <linux/device.h>
> +#include <linux/dma-direction.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/interrupt.h>
> +#include <linux/list.h>
> +#include <linux/of.h>
> +#include <linux/module.h>
> +#include <linux/skbuff.h>
> +#include <linux/slab.h>
> +#include <linux/mhi.h>
> +#include "mhi_internal.h"
> +
> +void mhi_db_brstmode(struct mhi_controller *mhi_cntrl,
> +		     struct db_cfg *db_cfg,
> +		     void __iomem *db_addr,
> +		     dma_addr_t wp)
> +{
> +}

<snip>

You have a lot of empty functions in this file, why?

> +/**
> + * struct mhi_controller - Master controller structure for external modem
> + * @dev: Device associated with this controller
> + * @of_node: DT that has MHI configuration information
> + * @regs: Points to base of MHI MMIO register space
> + * @dev_id: PCIe device id of the external device
> + * @domain: PCIe domain the device connected to
> + * @bus: PCIe bus the device assigned to
> + * @slot: PCIe slot for the modem

Why not just save a pointer to the pci device?  That way you don't have
to duplicate all of these things.
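
Roughly (sketch; the pci_dev field is an assumption about how the
controller driver would wire it up):

	struct mhi_controller {
		struct pci_dev *pci_dev;	/* set by the bus master */
		/* ... */
	};

	/* topology is then derived on demand instead of duplicated: */
	u32 domain = pci_domain_nr(mhi_cntrl->pci_dev->bus);
	u32 bus    = mhi_cntrl->pci_dev->bus->number;
	u32 slot   = PCI_SLOT(mhi_cntrl->pci_dev->devfn);
	u16 dev_id = mhi_cntrl->pci_dev->device;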

> + * @iova_start: IOMMU starting address for data
> + * @iova_stop: IOMMU stop address for data
> + * @fw_image: Firmware image name for normal booting
> + * @edl_image: Firmware image name for emergency download mode
> + * @fbc_download: MHI host needs to do complete image transfer
> + * @rddm_size: RAM dump size that host should allocate for debugging purpose
> + * @sbl_size: SBL image size
> + * @seg_len: BHIe vector size
> + * @fbc_image: Points to firmware image buffer
> + * @rddm_image: Points to RAM dump buffer
> + * @max_chan: Maximum number of channels controller support
> + * @mhi_chan: Points to channel configuration table
> + * @lpm_chans: List of channels that require LPM notifications
> + * @total_ev_rings: Total # of event rings allocated
> + * @hw_ev_rings: Number of hardware event rings
> + * @sw_ev_rings: Number of software event rings
> + * @msi_required: Number of msi required to operate
> + * @msi_allocated: Number of msi allocated by bus master
> + * @irq: base irq # to request
> + * @mhi_event: MHI event ring configurations table
> + * @mhi_cmd: MHI command ring configurations table
> + * @mhi_ctxt: MHI device context, shared memory between host and device
> + * @timeout_ms: Timeout in ms for state transitions
> + * @pm_state: Power management state
> + * @ee: MHI device execution environment
> + * @dev_state: MHI STATE
> + * @status_cb: CB function to notify various power states to bus master
> + * @link_status: Query link status in case of abnormal value read from device
> + * @runtime_get: Async runtime resume function
> + * @runtime_put: Release votes
> + * @time_get: Return host time in us
> + * @lpm_disable: Request controller to disable link level low power modes
> + * @lpm_enable: Controller may enable link level low power modes again
> + * @priv_data: Points to bus master's private data

You list a lot of the structure's fields here, but not all of them, why?

> + */
> +struct mhi_controller {
> +	struct list_head node;
> +	struct mhi_device *mhi_dev;
> +
> +	/* device node for iommu ops */
> +	struct device *dev;
> +	struct device_node *of_node;
> +
> +	/* mmio base */
> +	void __iomem *regs;
> +
> +	/* device topology */
> +	u32 dev_id;
> +	u32 domain;
> +	u32 bus;
> +	u32 slot;
> +
> +	/* addressing window */
> +	dma_addr_t iova_start;
> +	dma_addr_t iova_stop;
> +
> +	/* fw images */
> +	const char *fw_image;
> +	const char *edl_image;
> +
> +	/* mhi host manages downloading entire fbc images */
> +	bool fbc_download;
> +	size_t rddm_size;
> +	size_t sbl_size;
> +	size_t seg_len;
> +	u32 session_id;
> +	u32 sequence_id;
> +
> +	/* physical channel config data */
> +	u32 max_chan;
> +	struct mhi_chan *mhi_chan;
> +	struct list_head lpm_chans; /* these chan require lpm notification */
> +
> +	/* physical event config data */
> +	u32 total_ev_rings;
> +	u32 hw_ev_rings;
> +	u32 sw_ev_rings;
> +	u32 msi_required;
> +	u32 msi_allocated;
> +	int *irq; /* interrupt table */
> +	struct mhi_event *mhi_event;
> +
> +	/* cmd rings */
> +	struct mhi_cmd *mhi_cmd;
> +
> +	/* mhi context (shared with device) */
> +	struct mhi_ctxt *mhi_ctxt;
> +
> +	u32 timeout_ms;
> +
> +	/* caller should grab pm_mutex for suspend/resume operations */
> +	struct mutex pm_mutex;
> +	bool pre_init;
> +	rwlock_t pm_lock;
> +	u32 pm_state;
> +	u32 ee;
> +	u32 dev_state;
> +	bool wake_set;
> +	atomic_t dev_wake;
> +	atomic_t alloc_size;
> +	struct list_head transition_list;
> +	spinlock_t transition_lock;
> +	spinlock_t wlock;
> +
> +	/* debug counters */
> +	u32 M0, M2, M3;
> +
> +	/* worker for different state transitions */
> +	struct work_struct st_worker;
> +	struct work_struct fw_worker;
> +	struct work_struct syserr_worker;
> +	wait_queue_head_t state_event;
> +
> +	/* shadow functions */
> +	void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
> +			  enum MHI_CB cb);
> +	int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
> +	void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
> +	void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
> +	int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
> +	void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
> +	void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
> +	void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
> +	int (*map_single)(struct mhi_controller *mhi_cntrl,
> +			  struct mhi_buf_info *buf);
> +	void (*unmap_single)(struct mhi_controller *mhi_cntrl,
> +			     struct mhi_buf_info *buf);
> +
> +	/* channel to control DTR messaging */
> +	struct mhi_device *dtr_dev;
> +
> +	/* bounce buffer settings */
> +	bool bounce_buf;
> +	size_t buffer_len;
> +
> +	/* controller specific data */
> +	void *priv_data;
> +	void *log_buf;
> +	struct dentry *dentry;
> +	struct dentry *parent;
> +};
> +
> +/**
> + * struct mhi_device - mhi device structure associated bind to channel
> + * @dev: Device associated with the channels
> + * @mtu: Maximum # of bytes controller support
> + * @ul_chan_id: MHI channel id for UL transfer
> + * @dl_chan_id: MHI channel id for DL transfer
> + * @tiocm: Device current terminal settings
> + * @priv: Driver private data
> + */
> +struct mhi_device {
> +	struct device dev;
> +	u32 dev_id;
> +	u32 domain;
> +	u32 bus;
> +	u32 slot;

Same comment as above, why duplicate this and just not save the pointer
to the pci device?

> +	size_t mtu;
> +	int ul_chan_id;
> +	int dl_chan_id;
> +	int ul_event_id;
> +	int dl_event_id;
> +	u32 tiocm;
> +	const struct mhi_device_id *id;
> +	const char *chan_name;
> +	struct mhi_controller *mhi_cntrl;
> +	struct mhi_chan *ul_chan;
> +	struct mhi_chan *dl_chan;
> +	atomic_t dev_wake;

What is dev_wake for?

> +	enum mhi_device_type dev_type;
> +	void *priv_data;

Why do you need priv_data?  What's wrong with using the space in struct
device instead?

> +	int (*ul_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
> +		       void *buf, size_t len, enum MHI_FLAGS flags);
> +	int (*dl_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
> +		       void *buf, size_t len, enum MHI_FLAGS flags);
> +	void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB cb);

Why does a device have function call backs in it?  Shouldn't those
belong to a driver?

> +};
> +
> +/**
> + * struct mhi_result - Completed buffer information
> + * @buf_addr: Address of data buffer
> + * @dir: Channel direction
> + * @bytes_xfer: # of bytes transferred
> + * @transaction_status: Status of last trasnferred
> + */
> +struct mhi_result {
> +	void *buf_addr;
> +	enum dma_data_direction dir;
> +	size_t bytes_xferd;
> +	int transaction_status;
> +};
> +
> +/**
> + * struct mhi_driver - mhi driver information
> + * @id_table: NULL terminated channel ID names
> + * @ul_xfer_cb: UL data transfer callback
> + * @dl_xfer_cb: DL data transfer callback
> + * @status_cb: Asynchronous status callback
> + */
> +struct mhi_driver {
> +	const struct mhi_device_id *id_table;
> +	int (*probe)(struct mhi_device *mhi_dev,
> +		     const struct mhi_device_id *id);
> +	void (*remove)(struct mhi_device *mhi_dev);
> +	void (*ul_xfer_cb)(struct mhi_device *mhi_dev,
> +			   struct mhi_result *result);
> +	void (*dl_xfer_cb)(struct mhi_device *mhi_dev,
> +			   struct mhi_result *result);
> +	void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB mhi_cb);

See, they are here in a driver!

> +static inline void mhi_device_set_devdata(struct mhi_device *mhi_dev,
> +					  void *priv)
> +{
> +	mhi_dev->priv_data = priv;

Again, what's wrong with the device's private data pointer?
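
i.e. the wrappers could simply be (sketch):

	static inline void mhi_device_set_devdata(struct mhi_device *mhi_dev,
						  void *priv)
	{
		dev_set_drvdata(&mhi_dev->dev, priv);
	}

	static inline void *mhi_device_get_devdata(struct mhi_device *mhi_dev)
	{
		return dev_get_drvdata(&mhi_dev->dev);
	}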

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver
  2018-07-09 20:08   ` [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver Sujeev Dias
                       ` (2 preceding siblings ...)
  2018-07-10  6:36     ` Greg Kroah-Hartman
@ 2018-07-11 19:30     ` Rob Herring
  2018-08-09 18:39     ` Randy Dunlap
  4 siblings, 0 replies; 54+ messages in thread
From: Rob Herring @ 2018-07-11 19:30 UTC (permalink / raw)
  To: Sujeev Dias
  Cc: Greg Kroah-Hartman, Arnd Bergmann, linux-kernel, linux-arm-msm,
	devicetree, Tony Truong, Siddartha Mohanadoss

On Mon, Jul 09, 2018 at 01:08:08PM -0700, Sujeev Dias wrote:
> This is the initial skeleton driver for mhi bus stack. MHI Host
> Interface is a communication protocol to be used by the host to
> control and communcate with modem over a high speed peripheral bus.
> This module will allow host to communicate with external devices that
> support MHI protocol.
> 
> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
> Reviewed-by: Tony Truong <truong@codeaurora.org>
> Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
> ---
>  Documentation/00-INDEX                        |   2 +
>  Documentation/devicetree/bindings/bus/mhi.txt | 258 ++++++++++++

As Greg said, separate patch.

>  Documentation/mhi.txt                         | 235 +++++++++++
>  drivers/bus/Kconfig                           |   8 +
>  drivers/bus/Makefile                          |   1 +
>  drivers/bus/mhi/Makefile                      |   6 +
>  drivers/bus/mhi/core/Makefile                 |   1 +
>  drivers/bus/mhi/core/mhi_init.c               | 538 ++++++++++++++++++++++++++
>  drivers/bus/mhi/core/mhi_internal.h           | 238 ++++++++++++
>  drivers/bus/mhi/core/mhi_main.c               | 122 ++++++
>  include/linux/mhi.h                           | 341 ++++++++++++++++
>  include/linux/mod_devicetable.h               |  12 +
>  12 files changed, 1762 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/bus/mhi.txt
>  create mode 100644 Documentation/mhi.txt
>  create mode 100644 drivers/bus/mhi/Makefile
>  create mode 100644 drivers/bus/mhi/core/Makefile
>  create mode 100644 drivers/bus/mhi/core/mhi_init.c
>  create mode 100644 drivers/bus/mhi/core/mhi_internal.h
>  create mode 100644 drivers/bus/mhi/core/mhi_main.c
>  create mode 100644 include/linux/mhi.h

> diff --git a/Documentation/devicetree/bindings/bus/mhi.txt b/Documentation/devicetree/bindings/bus/mhi.txt
> new file mode 100644
> index 0000000..19deb84
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/bus/mhi.txt
> @@ -0,0 +1,258 @@
> +MHI Host Interface
> +
> +MHI used by the host to control and communicate with modem over
> +high speed peripheral bus.
> +
> +==============
> +Node Structure
> +==============
> +
> +Main node properties:
> +
> +- mhi,max-channels

mhi is not a vendor. mhi-max-channels. Same for rest. Or maybe just drop 
'mhi' completely.

> +  Usage: required
> +  Value type: <u32>
> +  Definition: Maximum number of channels supported by this controller
> +
> +- mhi,timeout
> +  Usage: optional
> +  Value type: <u32>
> +  Definition: Maximum timeout in ms wait for state and cmd completion

Needs a unit suffix.

> +
> +- mhi,use-bb
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Set true, if PCIe controller does not have full access to host
> +	DDR, and we're using a dedicated memory pool like cma, or
> +	carveout pool. Pool must support atomic allocation.

How is this related to PCI?

"atomic allocation" has nothing to do with bindings.

> +
> +- mhi,buffer-len
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: MHI automatically pre-allocate buffers for some channel.
> +	Set the length of buffer size to allocate. If not default
> +	size MHI_MAX_MTU will be used.
> +
> +============================
> +mhi channel node properties:
> +============================
> +
> +- reg
> +  Usage: required
> +  Value type: <u32>
> +  Definition: physical channel number
> +
> +- label
> +  Usage: required
> +  Value type: <string>
> +  Definition: given name for the channel

Why does the user need this?

> +
> +- mhi,num-elements
> +  Usage: optional
> +  Value type: <u32>
> +  Definition: Number of elements transfer ring support
> +
> +- mhi,event-ring
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Event ring index associated with this channel
> +
> +- mhi,chan-dir
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Channel direction as defined by enum dma_data_direction
> +	0 = Bidirectional data transfer
> +	1 = UL data transfer
> +	2 = DL data transfer
> +	3 = No direction, not a regular data transfer channel
> +
> +- mhi,ee
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Channel execution enviornment as defined by enum MHI_EE
> +	1 = Bootloader stage
> +	2 = AMSS mode
> +
> +- mhi,pollcfg
> +  Usage: optional
> +  Value type: <u32>
> +  Definition: MHI poll configuration, valid only when burst mode is enabled
> +	0 = Use default (device specific) polling configuration
> +	For UL channels, value specifies the timer to poll MHI context in
> +	milliseconds.
> +	For DL channels, the threshold to poll the MHI context in multiple of
> +	eight ring element.
> +
> +- mhi,data-type
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Data transfer type accepted as defined by enum MHI_XFER_TYPE
> +	0 = accept cpu address for buffer
> +	1 = accept skb
> +	2 = accept scatterlist
> +	3 = offload channel, does not accept any transfer type
> +
> +- mhi,doorbell-mode
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Channel doorbell mode configuration as defined by enum
> +	MHI_BRSTMODE
> +	2 = burst mode disabled
> +	3 = burst mode enabled
> +
> +- mhi,lpm-notify
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: This channel master require low power mode enter and exit
> +  notifications from mhi bus master.
> +
> +- mhi,offload-chan
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Client managed channel, MHI host only involved in setting up
> +	the data path, not involved in active data path.
> +
> +- mhi,db-mode-switch
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Must switch to doorbell mode whenever MHI M0 state transition
> +	happens.
> +
> +- mhi,auto-queue
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: MHI bus driver will pre-allocate buffers for this channel and
> +	queue to hardware. If set, client not allowed to queue buffers. Valid
> +	only for downlink direction.
> +
> +- mhi,auto-start
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: MHI host driver to automatically start channels once mhi device
> +	driver probe is complete. This should be only set true if initial
> +	handshake iniaitead by external modem.

typo

> +
> +==========================
> +mhi event node properties:
> +==========================
> +
> +- mhi,num-elements
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Number of elements event ring support
> +
> +- mhi,intmod
> +  Usage: required
> +  Value type: <u32>
> +  Definition: interrupt moderation time in ms
> +
> +- mhi,msi
> +  Usage: required
> +  Value type: <u32>
> +  Definition: MSI associated with this event ring
> +
> +- mhi,chan
> +  Usage: optional
> +  Value type: <u32>
> +  Definition: Dedicated channel number, if it's a dedicated event ring
> +
> +- mhi,priority
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Event ring priority, set to 1 for now
> +
> +- mhi,brstmode
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Event doorbell mode configuration as defined by
> +	enum MHI_BRSTMODE
> +		2 = burst mode disabled
> +		3 = burst mode enabled
> +
> +- mhi,data-type
> +  Usage: optional
> +  Value type: <u32>
> +  Definition: Type of data this event ring will process as defined
> +	by enum mhi_er_data_type
> +		0 = process data packets (default)
> +		1 = process mhi control packets
> +
> +- mhi,hw-ev
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Event ring associated with hardware channels
> +
> +- mhi,client-manage
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Client manages the event ring (use by napi_poll)
> +
> +- mhi,offload
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Event ring associated with offload channel

This is a lot of properties. I suspect that many of these should be 
implied by the modem device or be driver settings. If the modems are PCI 
devices, then you should be able to determine a lot from the PCI 
VID/PIDs.
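
For example (sketch; the device ID value and the config struct are
hypothetical, not taken from the patch):

	static const struct mhi_dev_info sdx_info = {
		.max_channels = 105,
		.timeout_ms   = 500,
		/* channel/event layout implied by the product, not DT */
	};

	static const struct pci_device_id mhi_pci_ids[] = {
		{ PCI_DEVICE(PCI_VENDOR_ID_QCOM, 0x0306),
		  .driver_data = (kernel_ulong_t)&sdx_info },
		{ }
	};
	MODULE_DEVICE_TABLE(pci, mhi_pci_ids);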

I'm not sure that a bus is appropriate here either. This looks like the 
next layer up. How's it different than a programming model for some 
ethernet NIC?

Rob

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v2 5/7] mhi_bus: core: add support to get external modem time
  2018-07-09 20:08   ` [PATCH v2 5/7] mhi_bus: core: add support to get external modem time Sujeev Dias
@ 2018-07-11 19:32     ` Rob Herring
  2018-08-09 20:17     ` Randy Dunlap
  1 sibling, 0 replies; 54+ messages in thread
From: Rob Herring @ 2018-07-11 19:32 UTC (permalink / raw)
  To: Sujeev Dias
  Cc: Greg Kroah-Hartman, Arnd Bergmann, linux-kernel, linux-arm-msm,
	devicetree, Tony Truong, Siddartha Mohanadoss

On Mon, Jul 09, 2018 at 01:08:12PM -0700, Sujeev Dias wrote:
> For accurate synchronizations between external modem and
> host processor, mhi host will capture modem time relative
> to host time. Client may use time measurements for adjusting
> any drift between host and modem.
> 
> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
> Reviewed-by: Tony Truong <truong@codeaurora.org>
> Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
> ---
>  Documentation/devicetree/bindings/bus/mhi.txt |   7 +

Put all the binding in one patch.
 
>  Documentation/mhi.txt                         |  41 ++++
>  drivers/bus/mhi/core/mhi_init.c               | 111 +++++++++++
>  drivers/bus/mhi/core/mhi_internal.h           |  57 +++++-
>  drivers/bus/mhi/core/mhi_main.c               | 263 +++++++++++++++++++++++++-
>  drivers/bus/mhi/core/mhi_pm.c                 |   7 +
>  include/linux/mhi.h                           |   7 +
>  7 files changed, 486 insertions(+), 7 deletions(-)

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v2 6/7] mhi_bus: controller: MHI support for QCOM modems
  2018-07-09 20:08   ` [PATCH v2 6/7] mhi_bus: controller: MHI support for QCOM modems Sujeev Dias
@ 2018-07-11 19:36     ` Rob Herring
  0 siblings, 0 replies; 54+ messages in thread
From: Rob Herring @ 2018-07-11 19:36 UTC (permalink / raw)
  To: Sujeev Dias
  Cc: Greg Kroah-Hartman, Arnd Bergmann, linux-kernel, linux-arm-msm,
	devicetree, Tony Truong, Siddartha Mohanadoss

On Mon, Jul 09, 2018 at 01:08:13PM -0700, Sujeev Dias wrote:
> QCOM PCIe based modems uses MHI as the communication protocol.
> MHI control driver is the bus master for such modems. As the bus
> master driver, it oversees power management operations
> such as suspend, resume, powering on and off the device.
> 
> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
> Reviewed-by: Tony Truong <truong@codeaurora.org>
> Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
> ---
>  Documentation/devicetree/bindings/bus/mhi_qcom.txt |  58 +++

And this one in its own patch.

>  arch/arm64/configs/defconfig                       |   1 +
>  drivers/bus/Kconfig                                |   1 +
>  drivers/bus/mhi/Makefile                           |   1 +
>  drivers/bus/mhi/controllers/Kconfig                |  13 +
>  drivers/bus/mhi/controllers/Makefile               |   1 +
>  drivers/bus/mhi/controllers/mhi_qcom.c             | 461 +++++++++++++++++++++
>  drivers/bus/mhi/controllers/mhi_qcom.h             |  67 +++
>  8 files changed, 603 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/bus/mhi_qcom.txt
>  create mode 100644 drivers/bus/mhi/controllers/Kconfig
>  create mode 100644 drivers/bus/mhi/controllers/Makefile
>  create mode 100644 drivers/bus/mhi/controllers/mhi_qcom.c
>  create mode 100644 drivers/bus/mhi/controllers/mhi_qcom.h
> 
> diff --git a/Documentation/devicetree/bindings/bus/mhi_qcom.txt b/Documentation/devicetree/bindings/bus/mhi_qcom.txt
> new file mode 100644
> index 0000000..0a48a50
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/bus/mhi_qcom.txt
> @@ -0,0 +1,58 @@
> +Qualcomm Technologies Inc MHI Bus controller
> +
> +MHI control driver enables clients to communicate with external modem
> +using MHI protocol.
> +
> +==============
> +Node Structure
> +==============
> +
> +Main node properties:
> +
> +- reg
> +  Usage: required
> +  Value type: Array (5-cell PCI resource) of <u32>
> +  Definition: First cell is devfn, which is determined by pci bus topology.
> +	Assign the other cells 0 since they are not used.
> +
> +- qcom,smmu-cfg

This should probably be part of the SMMU binding?

> +  Usage: required
> +  Value type: <u32>
> +  Definition: Required SMMU configuration bitmask for PCIe bus.
> +	BIT mask:
> +	BIT(0) : Attach address mapping to endpoint device
> +	BIT(1) : Set attribute S1_BYPASS
> +	BIT(2) : Set attribute FAST
> +	BIT(3) : Set attribute ATOMIC
> +	BIT(4) : Set attribute FORCE_COHERENT
> +
> +- qcom,addr-win
> +  Usage: required if SMMU S1 translation is enabled
> +  Value type: Array of <u64>
> +  Definition: Pair of values describing iova start and stop address
> +
> +- MHI bus settings
> +  Usage: required
> +  Values: as defined by mhi.txt
> +  Definition: Per definition of devicetree/bindings/bus/mhi.txt, define device
> +	specific MHI configuration parameters.
> +
> +========
> +Example:
> +========
> +
> +/* pcie domain (root complex) modem connected to */
> +&pcie1 {
> +	/* pcie bus modem connected to */
> +	pci,bus@1 {
> +		reg = <0 0 0 0 0>;
> +
> +		qcom,mhi {
> +			reg = <0 0 0 0 0>;

2 levels of PCI addresses doesn't look right, but I can't really tell in 
your example. The addresses don't look valid.

> +			qcom,smmu-cfg = <0x3d>;
> +			qcom,addr-win = <0x0 0x20000000 0x0 0x3fffffff>;
> +
> +			<mhi bus configurations>
> +		};
> +	};
> +};

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver
  2018-07-09 20:08   ` [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver Sujeev Dias
                       ` (3 preceding siblings ...)
  2018-07-11 19:30     ` Rob Herring
@ 2018-08-09 18:39     ` Randy Dunlap
  4 siblings, 0 replies; 54+ messages in thread
From: Randy Dunlap @ 2018-08-09 18:39 UTC (permalink / raw)
  To: Sujeev Dias, Greg Kroah-Hartman, Arnd Bergmann
  Cc: linux-kernel, linux-arm-msm, devicetree, Tony Truong,
	Siddartha Mohanadoss

On 07/09/2018 01:08 PM, Sujeev Dias wrote:
> This is the initial skeleton driver for mhi bus stack. MHI Host
> Interface is a communication protocol to be used by the host to
> control and communcate with modem over a high speed peripheral bus.

              communicate

> This module will allow host to communicate with external devices that
> support MHI protocol.
> 
> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
> Reviewed-by: Tony Truong <truong@codeaurora.org>
> Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
> ---
>  Documentation/00-INDEX                        |   2 +
>  Documentation/devicetree/bindings/bus/mhi.txt | 258 ++++++++++++
>  Documentation/mhi.txt                         | 235 +++++++++++
>  drivers/bus/Kconfig                           |   8 +
>  drivers/bus/Makefile                          |   1 +
>  drivers/bus/mhi/Makefile                      |   6 +
>  drivers/bus/mhi/core/Makefile                 |   1 +
>  drivers/bus/mhi/core/mhi_init.c               | 538 ++++++++++++++++++++++++++
>  drivers/bus/mhi/core/mhi_internal.h           | 238 ++++++++++++
>  drivers/bus/mhi/core/mhi_main.c               | 122 ++++++
>  include/linux/mhi.h                           | 341 ++++++++++++++++
>  include/linux/mod_devicetable.h               |  12 +
>  12 files changed, 1762 insertions(+)
>  create mode 100644 Documentation/devicetree/bindings/bus/mhi.txt
>  create mode 100644 Documentation/mhi.txt
>  create mode 100644 drivers/bus/mhi/Makefile
>  create mode 100644 drivers/bus/mhi/core/Makefile
>  create mode 100644 drivers/bus/mhi/core/mhi_init.c
>  create mode 100644 drivers/bus/mhi/core/mhi_internal.h
>  create mode 100644 drivers/bus/mhi/core/mhi_main.c
>  create mode 100644 include/linux/mhi.h
> 

> diff --git a/Documentation/devicetree/bindings/bus/mhi.txt b/Documentation/devicetree/bindings/bus/mhi.txt
> new file mode 100644
> index 0000000..19deb84
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/bus/mhi.txt
> @@ -0,0 +1,258 @@
> +MHI Host Interface
> +
> +MHI used by the host to control and communicate with modem over

   MHI is used

> +high speed peripheral bus.
> +
> +==============
> +Node Structure
> +==============
> +
> +Main node properties:
> +
> +- mhi,max-channels
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Maximum number of channels supported by this controller
> +
> +- mhi,timeout
> +  Usage: optional
> +  Value type: <u32>
> +  Definition: Maximum timeout in ms wait for state and cmd completion
> +
> +- mhi,use-bb
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Set true, if PCIe controller does not have full access to host
> +	DDR, and we're using a dedicated memory pool like cma, or
> +	carveout pool. Pool must support atomic allocation.
> +
> +- mhi,buffer-len
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: MHI automatically pre-allocate buffers for some channel.
> +	Set the length of buffer size to allocate. If not default
> +	size MHI_MAX_MTU will be used.
> +
> +============================
> +mhi channel node properties:
> +============================
> +
> +- reg
> +  Usage: required
> +  Value type: <u32>
> +  Definition: physical channel number
> +
> +- label
> +  Usage: required
> +  Value type: <string>
> +  Definition: given name for the channel
> +
> +- mhi,num-elements
> +  Usage: optional
> +  Value type: <u32>
> +  Definition: Number of elements transfer ring support
> +
> +- mhi,event-ring
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Event ring index associated with this channel
> +
> +- mhi,chan-dir
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Channel direction as defined by enum dma_data_direction
> +	0 = Bidirectional data transfer
> +	1 = UL data transfer
> +	2 = DL data transfer
> +	3 = No direction, not a regular data transfer channel
> +
> +- mhi,ee
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Channel execution enviornment as defined by enum MHI_EE

                                   environment

> +	1 = Bootloader stage
> +	2 = AMSS mode
> +
> +- mhi,pollcfg
> +  Usage: optional
> +  Value type: <u32>
> +  Definition: MHI poll configuration, valid only when burst mode is enabled
> +	0 = Use default (device specific) polling configuration
> +	For UL channels, value specifies the timer to poll MHI context in
> +	milliseconds.
> +	For DL channels, the threshold to poll the MHI context in multiple of
> +	eight ring element.
> +
> +- mhi,data-type
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Data transfer type accepted as defined by enum MHI_XFER_TYPE
> +	0 = accept cpu address for buffer
> +	1 = accept skb
> +	2 = accept scatterlist
> +	3 = offload channel, does not accept any transfer type
> +
> +- mhi,doorbell-mode
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Channel doorbell mode configuration as defined by enum
> +	MHI_BRSTMODE
> +	2 = burst mode disabled
> +	3 = burst mode enabled
> +
> +- mhi,lpm-notify
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: This channel master require low power mode enter and exit
> +  notifications from mhi bus master.
> +
> +- mhi,offload-chan
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Client managed channel, MHI host only involved in setting up
> +	the data path, not involved in active data path.
> +
> +- mhi,db-mode-switch
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Must switch to doorbell mode whenever MHI M0 state transition
> +	happens.
> +
> +- mhi,auto-queue
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: MHI bus driver will pre-allocate buffers for this channel and
> +	queue to hardware. If set, client not allowed to queue buffers. Valid
> +	only for downlink direction.
> +
> +- mhi,auto-start
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: MHI host driver to automatically start channels once mhi device
> +	driver probe is complete. This should be only set true if initial
> +	handshake iniaitead by external modem.

	          initiated

> +
> +==========================
> +mhi event node properties:
> +==========================
> +
> +- mhi,num-elements
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Number of elements event ring support
> +
> +- mhi,intmod
> +  Usage: required
> +  Value type: <u32>
> +  Definition: interrupt moderation time in ms
> +
> +- mhi,msi
> +  Usage: required
> +  Value type: <u32>
> +  Definition: MSI associated with this event ring
> +
> +- mhi,chan
> +  Usage: optional
> +  Value type: <u32>
> +  Definition: Dedicated channel number, if it's a dedicated event ring
> +
> +- mhi,priority
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Event ring priority, set to 1 for now
> +
> +- mhi,brstmode
> +  Usage: required
> +  Value type: <u32>
> +  Definition: Event doorbell mode configuration as defined by
> +	enum MHI_BRSTMODE
> +		2 = burst mode disabled
> +		3 = burst mode enabled
> +
> +- mhi,data-type
> +  Usage: optional
> +  Value type: <u32>
> +  Definition: Type of data this event ring will process as defined
> +	by enum mhi_er_data_type
> +		0 = process data packets (default)
> +		1 = process mhi control packets
> +
> +- mhi,hw-ev
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Event ring associated with hardware channels
> +
> +- mhi,client-manage
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Client manages the event ring (use by napi_poll)
> +
> +- mhi,offload
> +  Usage: optional
> +  Value type: <bool>
> +  Definition: Event ring associated with offload channel
> +
> +
> +Children node properties:
> +
> +MHI drivers that require DT can add driver specific information as a child node.
> +
> +- mhi,chan
> +  Usage: Required
> +  Value type: <string>
> +  Definition: Channel name
> +
> +========
> +Example:
> +========
> +mhi_controller {
> +	mhi,max-channels = <105>;
> +
> +	mhi_chan@0 {
> +		reg = <0>;
> +		label = "LOOPBACK";
> +		mhi,num-elements = <64>;
> +		mhi,event-ring = <2>;
> +		mhi,chan-dir = <1>;
> +		mhi,data-type = <0>;
> +		mhi,doorbell-mode = <2>;
> +		mhi,ee = <2>;
> +	};
> +
> +	mhi_chan@1 {
> +		reg = <1>;
> +		label = "LOOPBACK";
> +		mhi,num-elements = <64>;
> +		mhi,event-ring = <2>;
> +		mhi,chan-dir = <2>;
> +		mhi,data-type = <0>;
> +		mhi,doorbell-mode = <2>;
> +		mhi,ee = <2>;
> +	};
> +
> +	mhi_event@0 {
> +		mhi,num-elements = <32>;
> +		mhi,intmod = <1>;
> +		mhi,msi = <1>;
> +		mhi,chan = <0>;
> +		mhi,priority = <1>;
> +		mhi,bstmode = <2>;
> +		mhi,data-type = <1>;
> +	};
> +
> +	mhi_event@1 {
> +		mhi,num-elements = <256>;
> +		mhi,intmod = <1>;
> +		mhi,msi = <2>;
> +		mhi,chan = <0>;
> +		mhi,priority = <1>;
> +		mhi,bstmode = <2>;
> +	};
> +
> +	mhi,timeout = <500>;
> +
> +	children_node {
> +		mhi,chan = "LOOPBACK"
> +		<driver specific properties>
> +	};
> +};
> diff --git a/Documentation/mhi.txt b/Documentation/mhi.txt
> new file mode 100644
> index 0000000..1c501f1
> --- /dev/null
> +++ b/Documentation/mhi.txt
> @@ -0,0 +1,235 @@
> +Overview of Linux kernel MHI support
> +====================================
> +
> +Modem-Host Interface (MHI)
> +=========================
> +MHI used by the host to control and communicate with modem over high speed

   MHI is used

> +peripheral bus. Even though MHI can be easily adapt to any peripheral buses,

                                                 adapted

> +primarily used with PCIe based devices. The host has one or more PCIe root

   it is primarily used with

> +ports connected to modem device. The host has limited access to device memory
> +space, including register configuration and control the device operation.

                                           and control of the

> +Data transfers are invoked from the device.
> +
> +All data structures used by MHI are in the host system memory. Using PCIe
> +interface, the device accesses those data structures. MHI data structures and
> +data buffers in the host system memory regions are mapped for device.
> +
> +Memory spaces
> +-------------
> +PCIe Configurations : Used for enumeration and resource management, such as
> +interrupt and base addresses.  This is done by mhi control driver.
> +
> +MMIO
> +----
> +MHI MMIO : Memory mapped IO consists of set of registers in the device hardware,
> +which are mapped to the host memory space through PCIe base address register
> +(BAR)

   (BAR).

> +
> +MHI control registers : Access to MHI configurations registers
> +(struct mhi_controller.regs).
> +
> +MHI BHI register: Boot host interface registers (struct mhi_controller.bhi) used
> +for firmware download before MHI initialization.
> +
> +Channel db array : Doorbell registers (struct mhi_chan.tre_ring.db_addr) used by
> +host to notify device there is new work to do.
> +
> +Event db array : Associated with event context array
> +(struct mhi_event.ring.db_addr), host uses to notify device free events are
> +available.
> +
> +Data structures
> +---------------
> +Host memory : Directly accessed by the host to manage the MHI data structures
> +and buffers. The device accesses the host memory over the PCIe interface.
> +
> +Channel context array : All channel configurations are organized in channel
> +context data array.
> +
> +struct __packed mhi_chan_ctxt;
> +struct mhi_ctxt.chan_ctxt;
> +
> +Transfer rings : Used by host to schedule work items for a channel and organized
> +as a circular queue of transfer descriptors (TD).
> +
> +struct __packed mhi_tre;
> +struct mhi_chan.tre_ring;
> +
> +Event context array : All event configurations are organized in event context
> +data array.
> +
> +struct mhi_ctxt.er_ctxt;
> +struct __packed mhi_event_ctxt;
> +
> +Event rings: Used by device to send completion and state transition messages to
> +host
> +
> +struct mhi_event.ring;
> +struct __packed mhi_tre;
> +
> +Command context array: All command configurations are organized in command
> +context data array.
> +
> +struct __packed mhi_cmd_ctxt;
> +struct mhi_ctxt.cmd_ctxt;
> +
> +Command rings: Used by host to send MHI commands to device
> +
> +struct __packed mhi_tre;
> +struct mhi_cmd.ring;
> +
> +Transfer rings
> +--------------
> +MHI channels are logical, unidirectional data pipes between host and device.
> +Each channel associated with a single transfer ring.  The data direction can be

        channel is associated

> +either inbound (device to host) or outbound (host to device).  Transfer
> +descriptors are managed by using transfer rings, which are defined for each
> +channel between device and host and resides in the host memory.
> +
> +Transfer ring Pointer:	  	Transfer Ring Array
> +[Read Pointer (RP)] ----------->[Ring Element] } TD
> +[Write Pointer (WP)]-		[Ring Element]
> +                     -		[Ring Element]
> +		      --------->[Ring Element]
> +				[Ring Element]
> +
> +1. Host allocate memory for transfer ring

           allocates

> +2. Host sets base, read pointer, write pointer in corresponding channel context
> +3. Ring is considered empty when RP == WP
> +4. Ring is considered full when WP + 1 == RP
> +4. RP indicates the next element to be serviced by device
> +4. When host new buffer to send, host update the Ring element with buffer

           host has new buffer to send, host updates

> +   information
> +5. Host increment the WP to next element

           increments

> +6. Ring the associated channel DB.
> +
> +Event rings
> +-----------
> +Events from the device to host are organized in event rings and defined in event
> +descriptors.  Event rings are array of EDs that resides in the host memory.
> +
> +Event ring Pointer:	  	Event Ring Array
> +[Read Pointer (RP)] ----------->[Ring Element] } ED
> +[Write Pointer (WP)]-		[Ring Element]
> +                     -		[Ring Element]
> +		      --------->[Ring Element]
> +				[Ring Element]
> +
> +1. Host allocate memory for event ring

           allocates

> +2. Host sets base, read pointer, write pointer in corresponding channel context
> +3. Both host and device has local copy of RP, WP
> +3. Ring is considered empty (no events to service) when WP + 1 == RP
> +4. Ring is full of events when RP == WP
> +4. RP - 1 = last event device programmed
> +4. When there is a new event device need to send, device update ED pointed by RP

                                       needs to send, device updates


> +5. Device increment RP to next element

             increments

> +6. Device trigger and interrupt

             triggers an interrupt

> +
> +Example Operation for data transfer:
> +
> +1. Host prepare TD with buffer information

           prepares

> +2. Host increment Chan[id].ctxt.WP

           increments

> +3. Host ring channel DB register

           rings

> +4. Device wakes up process the TD

             wakes up to process the TD

> +5. Device generate a completion event for that TD by updating ED

             generates

> +6. Device increment Event[id].ctxt.RP

             increments

> +7. Device trigger MSI to wake host

             triggers

> +8. Host wakes up and check event ring for completion event

                    and checks

> +9. Host update the Event[i].ctxt.WP to indicate processed of completion event.

           updates                                 processing

> +
> +MHI States
> +----------
> +
> +enum MHI_STATE {
> +MHI_STATE_RESET : MHI is in reset state, POR state. Host is not allowed to
> +		  access device MMIO register space.
> +MHI_STATE_READY : Device is ready for initialization. Host can start MHI
> +		  initialization by programming MMIO

		                                MMIO.

> +MHI_STATE_M0 : MHI is in fully active state, data transfer is active
> +MHI_STATE_M1 : Device in a suspended state
> +MHI_STATE_M2 : MHI in low power mode, device may enter lower power mode.
> +MHI_STATE_M3 : Both host and device in suspended state.  PCIe link is not
> +	       accessible to device.
> +
> +MHI Initialization
> +------------------
> +
> +1. After system boots, the device is enumerated over PCIe interface
> +2. Host allocate MHI context for event, channel and command arrays

           allocates

> +3. Initialize context array, and prepare interrupts
> +3. Host waits until device enter READY state

                              enters

> +4. Program MHI MMIO registers and set device into MHI_M0 state
> +5. Wait for device to enter M0 state
> +
> +Linux Software Architecture
> +===========================
> +
> +MHI Controller
> +--------------
> +MHI controller is also the MHI bus master. In charge of managing the physical
> +link between host and device.  Not involved in actual data transfer.  At least
> +for PCIe based buses, for other type of bus, we can expand to add support.

^^^^^ Too many sentence fragments...

> +
> +Roles:
> +1. Turn on PCIe bus and configure the link
> +2. Configure MSI, SMMU, and IOMEM
> +3. Allocate struct mhi_controller and register with MHI bus framework
> +2. Initiate power on and shutdown sequence
> +3. Initiate suspend and resume
> +
> +Usage
> +-----
> +
> +1. Allocate control data structure by calling mhi_alloc_controller()
> +2. Initialize mhi_controller with all the known information such as:
> +   - Device Topology
> +   - IOMMU window
> +   - IOMEM mapping
> +   - Device to use for memory allocation, and of_node with DT configuration
> +   - Configure asynchronous callback functions
> +3. Register MHI controller with MHI bus framework by calling
> +   of_register_mhi_controller()
> +
> +After successfully registering controller can initiate any of these power modes:
> +
> +1. Power up sequence
> +   - mhi_prepare_for_power_up()
> +   - mhi_async_power_up()
> +   - mhi_sync_power_up()
> +2. Power down sequence
> +   - mhi_power_down()
> +   - mhi_unprepare_after_power_down()
> +3. Initiate suspend
> +   - mhi_pm_suspend()
> +4. Initiate resume
> +   - mhi_pm_resume()
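
Put together, the registration half of this (using only the calls named
above; struct my_priv, pdev and mmio_base are placeholders, and error
handling is abbreviated) might look like:

	struct mhi_controller *mhi_cntrl;
	int ret;

	mhi_cntrl = mhi_alloc_controller(sizeof(struct my_priv));
	if (!mhi_cntrl)
		return -ENOMEM;

	mhi_cntrl->dev = &pdev->dev;	/* device used for allocations */
	mhi_cntrl->regs = mmio_base;	/* MHI MMIO mapping */
	/* ... topology, IOMMU window, status/runtime callbacks ... */

	ret = of_register_mhi_controller(mhi_cntrl);
	if (ret)
		mhi_free_controller(mhi_cntrl);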
> +
> +MHI Devices
> +-----------
> +Logical device that bind to maximum of two physical MHI channels. Once MHI is in

^^^^^ not-a-sentence.

> +powered on state, each supported channel by controller will be allocated as a

                                                                            as an

> +mhi_device.
> +
> +Each supported device would be enumerated under

how about:
Each supported device is enumerated in /sys/bus/mhi/devices

> +/sys/bus/mhi/devices/
> +
> +struct mhi_device;
> +
> +MHI Driver
> +----------
> +Each MHI driver can bind to one or more MHI devices. MHI host driver will bind
> +mhi_device to mhi_driver.
> +
> +All registered drivers are visible under
> +/sys/bus/mhi/drivers/
> +
> +struct mhi_driver;
> +
> +Usage
> +-----
> +
> +1. Register driver using mhi_driver_register
> +2. Before sending data, prepare device for transfer by calling
> +   mhi_prepare_for_transfer
> +3. Initiate data transfer by calling mhi_queue_transfer
> +4. After finish, call mhi_unprepare_from_transfer to end data transfer
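
A minimal client-side sketch of those steps (the mhi_queue_transfer()
signature is assumed from the series, and buf/len are placeholders):

	static int my_probe(struct mhi_device *mhi_dev,
			    const struct mhi_device_id *id)
	{
		int ret;

		ret = mhi_prepare_for_transfer(mhi_dev);
		if (ret)
			return ret;

		/* queue one outbound buffer; completion arrives in ul_xfer_cb */
		return mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE,
					  buf, len, MHI_EOT);
	}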


> diff --git a/include/linux/mhi.h b/include/linux/mhi.h
> new file mode 100644
> index 0000000..c80685e9
> --- /dev/null
> +++ b/include/linux/mhi.h
> @@ -0,0 +1,341 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2018, The Linux Foundation. All rights reserved.
> + *
> + */
> +#ifndef _MHI_H_
> +#define _MHI_H_
> +
> +struct mhi_chan;
> +struct mhi_event;
> +struct mhi_ctxt;
> +struct mhi_cmd;
> +struct mhi_buf_info;
> +
> +/**
> + * enum MHI_CB - MHI callback
> + * @MHI_CB_IDLE: MHI entered idle state
> + * @MHI_CB_PENDING_DATA: New data available for client to process
> + * @MHI_CB_LPM_ENTER: MHI host entered low power mode
> + * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
> + * @MHI_CB_EE_RDDM: MHI device entered RDDM execution enviornment
> + */
> +enum MHI_CB {
> +	MHI_CB_IDLE,
> +	MHI_CB_PENDING_DATA,
> +	MHI_CB_LPM_ENTER,
> +	MHI_CB_LPM_EXIT,
> +	MHI_CB_EE_RDDM,
> +};
> +
> +/**
> + * enum MHI_FLAGS - Transfer flags
> + * @MHI_EOB: End of buffer for bulk transfer
> + * @MHI_EOT: End of transfer
> + * @MHI_CHAIN: Linked transfer
> + */
> +enum MHI_FLAGS {
> +	MHI_EOB,
> +	MHI_EOT,
> +	MHI_CHAIN,
> +};
> +
> +/**
> + * enum mhi_device_type - Device types
> + * @MHI_XFER_TYPE: Handles data transfer
> + * @MHI_TIMESYNC_TYPE: Use for timesync feature
> + * @MHI_CONTROLLER_TYPE: Control device
> + */
> +enum mhi_device_type {
> +	MHI_XFER_TYPE,
> +	MHI_TIMESYNC_TYPE,
> +	MHI_CONTROLLER_TYPE,
> +};
> +

> +/**
> + * struct mhi_device - mhi device structure associated bind to channel

                                               ^^^ I don't understand what that
                                               is trying to say.

> + * @dev: Device associated with the channels
> + * @mtu: Maximum # of bytes controller support
> + * @ul_chan_id: MHI channel id for UL transfer
> + * @dl_chan_id: MHI channel id for DL transfer
> + * @tiocm: Device current terminal settings
> + * @priv: Driver private data
> + */
> +struct mhi_device {
> +	struct device dev;
> +	u32 dev_id;
> +	u32 domain;
> +	u32 bus;
> +	u32 slot;
> +	size_t mtu;
> +	int ul_chan_id;
> +	int dl_chan_id;
> +	int ul_event_id;
> +	int dl_event_id;
> +	u32 tiocm;
> +	const struct mhi_device_id *id;
> +	const char *chan_name;
> +	struct mhi_controller *mhi_cntrl;
> +	struct mhi_chan *ul_chan;
> +	struct mhi_chan *dl_chan;
> +	atomic_t dev_wake;
> +	enum mhi_device_type dev_type;
> +	void *priv_data;
> +	int (*ul_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
> +		       void *buf, size_t len, enum MHI_FLAGS flags);
> +	int (*dl_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
> +		       void *buf, size_t len, enum MHI_FLAGS flags);
> +	void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB cb);
> +};
> +
> +/**
> + * struct mhi_result - Completed buffer information
> + * @buf_addr: Address of data buffer
> + * @dir: Channel direction
> + * @bytes_xfer: # of bytes transferred
> + * @transaction_status: Status of last trasnferred

                                  of last transfer

> + */
> +struct mhi_result {
> +	void *buf_addr;
> +	enum dma_data_direction dir;
> +	size_t bytes_xferd;
> +	int transaction_status;
> +};
> +
> +/**
> + * struct mhi_driver - mhi driver information
> + * @id_table: NULL terminated channel ID names
> + * @ul_xfer_cb: UL data transfer callback
> + * @dl_xfer_cb: DL data transfer callback
> + * @status_cb: Asynchronous status callback
> + */
> +struct mhi_driver {
> +	const struct mhi_device_id *id_table;
> +	int (*probe)(struct mhi_device *mhi_dev,
> +		     const struct mhi_device_id *id);
> +	void (*remove)(struct mhi_device *mhi_dev);
> +	void (*ul_xfer_cb)(struct mhi_device *mhi_dev,
> +			   struct mhi_result *result);
> +	void (*dl_xfer_cb)(struct mhi_device *mhi_dev,
> +			   struct mhi_result *result);
> +	void (*status_cb)(struct mhi_device *mhi_dev, enum MHI_CB mhi_cb);
> +	struct device_driver driver;
> +};
> +
> +#define to_mhi_driver(drv) container_of(drv, struct mhi_driver, driver)
> +#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
> +
> +static inline void mhi_device_set_devdata(struct mhi_device *mhi_dev,
> +					  void *priv)
> +{
> +	mhi_dev->priv_data = priv;
> +}
> +
> +static inline void *mhi_device_get_devdata(struct mhi_device *mhi_dev)
> +{
> +	return mhi_dev->priv_data;
> +}
> +
> +static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
> +{
> +	return mhi_cntrl->priv_data;
> +}
> +
> +static inline void mhi_free_controller(struct mhi_controller *mhi_cntrl)
> +{
> +	kfree(mhi_cntrl);
> +}
> +
> +/**
> + * mhi_driver_register - Register driver with MHI framework
> + * @mhi_drv: mhi_driver structure
> + */
> +int mhi_driver_register(struct mhi_driver *mhi_drv);
> +
> +/**
> + * mhi_driver_unregister - Unregister a driver for mhi_devices
> + * @mhi_drv: mhi_driver structure
> + */
> +void mhi_driver_unregister(struct mhi_driver *mhi_drv);
> +
> +/**
> + * mhi_alloc_controller - Allocate mhi_controller structure
> + * Allocate controller structure and additional data for controller
> + * private data. You may get the private data pointer by calling
> + * mhi_controller_get_devdata
> + * @size: # of additional bytes to allocate
> + */
> +struct mhi_controller *mhi_alloc_controller(size_t size);
> +
> +/**
> + * of_register_mhi_controller - Register MHI controller
> + * Registers MHI controller with MHI bus framework. DT must be supported
> + * @mhi_cntrl: MHI controller to register
> + */
> +int of_register_mhi_controller(struct mhi_controller *mhi_cntrl);
> +
> +void mhi_unregister_mhi_controller(struct mhi_controller *mhi_cntrl);
> +
> +/**
> + * mhi_bdf_to_controller - Look up a registered controller
> + * Search for controller based on device identification
> + * @domain: RC domain of the device

What is an RC domain?

> + * @bus: Bus device connected to
> + * @slot: Slot device assigned to
> + * @dev_id: Device Identification
> + */
> +struct mhi_controller *mhi_bdf_to_controller(u32 domain, u32 bus, u32 slot,
> +					     u32 dev_id);
> +
> +#endif /* _MHI_H_ */


thanks,
-- 
~Randy

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v2 5/7] mhi_bus: core: add support to get external modem time
  2018-07-09 20:08   ` [PATCH v2 5/7] mhi_bus: core: add support to get external modem time Sujeev Dias
  2018-07-11 19:32     ` Rob Herring
@ 2018-08-09 20:17     ` Randy Dunlap
  1 sibling, 0 replies; 54+ messages in thread
From: Randy Dunlap @ 2018-08-09 20:17 UTC (permalink / raw)
  To: Sujeev Dias, Greg Kroah-Hartman, Arnd Bergmann
  Cc: linux-kernel, linux-arm-msm, devicetree, Tony Truong,
	Siddartha Mohanadoss

On 07/09/2018 01:08 PM, Sujeev Dias wrote:
> For accurate synchronizations between external modem and
> host processor, mhi host will capture modem time relative
> to host time. Client may use time measurements for adjusting
> any drift between host and modem.
> 
> Signed-off-by: Sujeev Dias <sdias@codeaurora.org>
> Reviewed-by: Tony Truong <truong@codeaurora.org>
> Signed-off-by: Siddartha Mohanadoss <smohanad@codeaurora.org>
> ---
>  Documentation/devicetree/bindings/bus/mhi.txt |   7 +
>  Documentation/mhi.txt                         |  41 ++++
>  drivers/bus/mhi/core/mhi_init.c               | 111 +++++++++++
>  drivers/bus/mhi/core/mhi_internal.h           |  57 +++++-
>  drivers/bus/mhi/core/mhi_main.c               | 263 +++++++++++++++++++++++++-
>  drivers/bus/mhi/core/mhi_pm.c                 |   7 +
>  include/linux/mhi.h                           |   7 +
>  7 files changed, 486 insertions(+), 7 deletions(-)
> 

Hi,

More corrections for you...


> diff --git a/Documentation/mhi.txt b/Documentation/mhi.txt
> index 1c501f1..9287899 100644
> --- a/Documentation/mhi.txt
> +++ b/Documentation/mhi.txt
> @@ -137,6 +137,47 @@ Example Operation for data transfer:
>  8. Host wakes up and check event ring for completion event
>  9. Host update the Event[i].ctxt.WP to indicate processed of completion event.
>  
> +Time sync
> +---------
> +To synchronize two applications between host and external modem, MHI provide

                                                                        provides

> +native support to get the external modem's free running timer value in a
> +fast, reliable manner. MHI clients do not need to create client specific
> +methods to get the modem time.
> +
> +When a client requests the modem time, the MHI host will automatically
> +capture the host time at that moment, so clients can make accurate drift
> +adjustments.
> +
> +Example:
> +
> +Client request time @ time T1
> +
> +Host Time: Tx
> +Modem Time: Ty
> +
> +Client request time @ time T2
> +Host Time: Txx
> +Modem Time: Tyy
> +
> +Then drift is:
> +Tyy - Ty + <drift> == Txx - Tx
> +
> +Clients are free to implement their own drift algorithms, what MHI host provide

                                                 algorithms. What MHI host provides

> +is a way to accurately correlate host time with external modem time.
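
The arithmetic may be worth spelling out; on the client side it would be
something like this (sketch only -- assumes both clocks have already been
converted to the same unit, e.g. nanoseconds):

/*
 * Two (host, modem) sample pairs, per the example above:
 * T1 gives (tx, ty), T2 gives (txx, tyy).
 */
static s64 sample_drift(u64 tx, u64 ty, u64 txx, u64 tyy)
{
	/* positive result: host clock advanced more than the modem clock */
	return (s64)(txx - tx) - (s64)(tyy - ty);
}
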
> +
> +To avoid link level latencies, the controller must support disabling
> +them for the duration of the capture.
> +
> +During Time capture host will:
> +	1. Capture host time
> +	2. Trigger doorbell to capture modem time
> +
> +It's important time between Step 2 to Step 1 is deterministic as possible.

It's important that the time between Step 1 and Step 2 is as deterministic as possible.


> +Therefore, the MHI host will:
> +	1. Disable any MHI related low power modes.
> +	2. Disable preemption
> +	3. Request bus master to disable any link level latencies. Controller
> +	should disable all low power modes such as L0s, L1, L1ss.
> +
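
So, roughly, the capture path has to have this shape (not the actual patch
code -- mhi_ring_tsync_db() is a stand-in name for however the doorbell is
rung -- just an illustration of the ordering):

	u64 host_time;

	preempt_disable();		/* item 2 of the list above */
	host_time = ktime_get_ns();	/* capture step 1: host time */
	mhi_ring_tsync_db(mhi_cntrl);	/* capture step 2: modem latches its time */
	preempt_enable();
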
>  MHI States
>  ----------
>  

> diff --git a/drivers/bus/mhi/core/mhi_internal.h b/drivers/bus/mhi/core/mhi_internal.h
> index 1167d75..47d258a 100644
> --- a/drivers/bus/mhi/core/mhi_internal.h
> +++ b/drivers/bus/mhi/core/mhi_internal.h
> @@ -128,6 +128,30 @@
>  #define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
>  #define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)

ugh.  Way too long for a name.


> diff --git a/drivers/bus/mhi/core/mhi_main.c b/drivers/bus/mhi/core/mhi_main.c
> index 3e7077a8..8a0a7e1 100644
> --- a/drivers/bus/mhi/core/mhi_main.c
> +++ b/drivers/bus/mhi/core/mhi_main.c
> @@ -53,6 +53,40 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
>  	return 0;
>  }
>  
> +int mhi_get_capability_offset(struct mhi_controller *mhi_cntrl,
> +			      u32 capability,
> +			      u32 *offset)
> +{
> +	u32 cur_cap, next_offset;
> +	int ret;
> +
> +	/* get the 1st supported capability offset */
> +	ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MISC_OFFSET,
> +				 MISC_CAP_MASK, MISC_CAP_SHIFT, offset);
> +	if (ret)
> +		return ret;
> +	do {
> +		ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, *offset,
> +					 CAP_CAPID_MASK, CAP_CAPID_SHIFT,
> +					 &cur_cap);
> +		if (ret)
> +			return ret;
> +
> +		if (cur_cap == capability)
> +			return 0;
> +
> +		ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, *offset,
> +					 CAP_NEXT_CAP_MASK, CAP_NEXT_CAP_SHIFT,
> +					 &next_offset);
> +		if (ret)
> +			return ret;
> +
> +		*offset += next_offset;
> +	} while (next_offset);
> +
> +	return -ENXIO;
> +}
> +
>  void mhi_write_reg(struct mhi_controller *mhi_cntrl,
>  		   void __iomem *base,
>  		   u32 offset,
> @@ -547,6 +581,42 @@ static void mhi_assign_of_node(struct mhi_controller *mhi_cntrl,
>  	}
>  }
>  
> +static void mhi_create_time_sync_dev(struct mhi_controller *mhi_cntrl)
> +{
> +	struct mhi_device *mhi_dev;
> +	struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
> +	int ret;
> +
> +	if (!mhi_tsync || !mhi_tsync->db)
> +		return;
> +
> +	if (mhi_cntrl->ee != MHI_EE_AMSS)
> +		return;
> +
> +	mhi_dev = mhi_alloc_device(mhi_cntrl);
> +	if (!mhi_dev)
> +		return;
> +
> +	mhi_dev->dev_type = MHI_TIMESYNC_TYPE;
> +	mhi_dev->chan_name = "TIME_SYNC";
> +	dev_set_name(&mhi_dev->dev, "%04x_%02u.%02u.%02u_%s", mhi_dev->dev_id,
> +		     mhi_dev->domain, mhi_dev->bus, mhi_dev->slot,
> +		     mhi_dev->chan_name);
> +
> +	/* add if there is a matching DT node */
> +	mhi_assign_of_node(mhi_cntrl, mhi_dev);
> +
> +	ret = device_add(&mhi_dev->dev);
> +	if (ret) {
> +		dev_err(mhi_cntrl->dev, "Failed to register dev for  chan:%s\n",
> +			mhi_dev->chan_name);
> +		mhi_dealloc_device(mhi_cntrl, mhi_dev);
> +		return;
> +	}
> +
> +	mhi_cntrl->tsync_dev = mhi_dev;
> +}
> +
>  /* bind mhi channels into mhi devices */
>  void mhi_create_devices(struct mhi_controller *mhi_cntrl)
>  {
> @@ -555,6 +625,13 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl)
>  	struct mhi_device *mhi_dev;
>  	int ret;
>  
> +	/*
> +	 * we need to create time sync device before creating other
> +	 * devices, because client may try to capture time during
> +	 * clint probe.

	   client

> +	 */
> +	mhi_create_time_sync_dev(mhi_cntrl);
> +
>  	mhi_chan = mhi_cntrl->mhi_chan;
>  	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
>  		if (!mhi_chan->configured || mhi_chan->ee != mhi_cntrl->ee)
> @@ -753,16 +830,26 @@ static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
>  	struct mhi_ring *mhi_ring = &cmd_ring->ring;
>  	struct mhi_tre *cmd_pkt;
>  	struct mhi_chan *mhi_chan;
> +	struct mhi_timesync *mhi_tsync;
> +	enum mhi_cmd_type type;
>  	u32 chan;
>  
>  	cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
>  
> -	chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
> -	mhi_chan = &mhi_cntrl->mhi_chan[chan];
> -	write_lock_bh(&mhi_chan->lock);
> -	mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
> -	complete(&mhi_chan->completion);
> -	write_unlock_bh(&mhi_chan->lock);
> +	type = MHI_TRE_GET_CMD_TYPE(cmd_pkt);
> +
> +	if (type == MHI_CMD_TYPE_TSYNC) {
> +		mhi_tsync = mhi_cntrl->mhi_tsync;
> +		mhi_tsync->ccs = MHI_TRE_GET_EV_CODE(tre);
> +		complete(&mhi_tsync->completion);
> +	} else {
> +		chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
> +		mhi_chan = &mhi_cntrl->mhi_chan[chan];
> +		write_lock_bh(&mhi_chan->lock);
> +		mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
> +		complete(&mhi_chan->completion);
> +		write_unlock_bh(&mhi_chan->lock);
> +	}
>  
>  	mhi_del_ring_element(mhi_cntrl, mhi_ring);
>  }
> @@ -929,6 +1016,73 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
>  	return count;
>  }
>  
> +int mhi_process_tsync_event_ring(struct mhi_controller *mhi_cntrl,
> +				 struct mhi_event *mhi_event,
> +				 u32 event_quota)
> +{
> +	struct mhi_tre *dev_rp, *local_rp;
> +	struct mhi_ring *ev_ring = &mhi_event->ring;
> +	struct mhi_event_ctxt *er_ctxt =
> +		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
> +	struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
> +	int count = 0;
> +	u32 sequence;
> +	u64 remote_time;
> +
> +	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state))) {
> +		read_unlock_bh(&mhi_cntrl->pm_lock);
> +		return -EIO;
> +	}
> +
> +	dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
> +	local_rp = ev_ring->rp;
> +
> +	while (dev_rp != local_rp) {
> +		struct tsync_node *tsync_node;
> +
> +		sequence = MHI_TRE_GET_EV_SEQ(local_rp);
> +		remote_time = MHI_TRE_GET_EV_TIME(local_rp);
> +
> +		do {
> +			spin_lock_irq(&mhi_tsync->lock);
> +			tsync_node = list_first_entry_or_null(&mhi_tsync->head,
> +						      struct tsync_node, node);
> +
> +			if (unlikely(!tsync_node))
> +				break;
> +
> +			list_del(&tsync_node->node);
> +			spin_unlock_irq(&mhi_tsync->lock);
> +
> +			/*
> +			 * the device may not be able to process every time
> +			 * sync command the host issues; it may only process
> +			 * the last command it received
> +			 */
> +			if (tsync_node->sequence == sequence) {
> +				tsync_node->cb_func(tsync_node->mhi_dev,
> +						    sequence,
> +						    tsync_node->local_time,
> +						    remote_time);
> +				kfree(tsync_node);
> +			} else {
> +				kfree(tsync_node);
> +			}
> +		} while (true);
> +
> +		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
> +		local_rp = ev_ring->rp;
> +		dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
> +		count++;
> +	}
> +
> +	read_lock_bh(&mhi_cntrl->pm_lock);
> +	if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl->pm_state)))
> +		mhi_ring_er_db(mhi_event);
> +	read_unlock_bh(&mhi_cntrl->pm_lock);
> +
> +	return count;
> +}
> +
>  void mhi_ev_task(unsigned long data)
>  {
>  	struct mhi_event *mhi_event = (struct mhi_event *)data;
> @@ -1060,6 +1214,12 @@ int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
>  		cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0;
>  		cmd_tre->dword[1] = MHI_TRE_CMD_START_DWORD1(chan);
>  		break;
> +	case MHI_CMD_TIMSYNC_CFG:
> +		cmd_tre->ptr = MHI_TRE_CMD_TSYNC_CFG_PTR;
> +		cmd_tre->dword[0] = MHI_TRE_CMD_TSYNC_CFG_DWORD0;
> +		cmd_tre->dword[1] = MHI_TRE_CMD_TSYNC_CFG_DWORD1
> +			(mhi_cntrl->mhi_tsync->er_index);
> +		break;
>  	}
>  
>  	/* queue to hardware */
> @@ -1437,3 +1597,94 @@ int mhi_poll(struct mhi_device *mhi_dev,
>  	return ret;
>  }
>  EXPORT_SYMBOL(mhi_poll);
> +
> +/**
> + * mhi_get_remote_time - Get external modem time relative to host time
> + * Trigger event to capture modem time, also capture host time so client
> + * can do a relative drift comparison.
> + * It is recommended that only the tsync device call this method, and
> + * never from atomic context.
> + * @mhi_dev: Device associated with the channels
> + * @sequence: unique sequence id to track the event
> + * @cb_func: callback invoked with the captured times
> + */
> +int mhi_get_remote_time(struct mhi_device *mhi_dev,
> +			u32 sequence,
> +			void (*cb_func)(struct mhi_device *mhi_dev,
> +					u32 sequence,
> +					u64 local_time,
> +					u64 remote_time))
> +{
> +	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
> +	struct mhi_timesync *mhi_tsync = mhi_cntrl->mhi_tsync;
> +	struct tsync_node *tsync_node;
> +	int ret;
> +
> +	/* not all devices support time feature */
> +	if (!mhi_tsync)
> +		return -EIO;
> +
> +	/* tsync db can only be rung in M0 state */
> +	ret = __mhi_device_get_sync(mhi_cntrl);
> +	if (ret)
> +		return ret;
> +
> +	/*
> +	 * technically we can use GFP_KERNEL, but wants to avoid

	                                          want

> +	 * # of times scheduling out
> +	 */
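
One more aside: a client call would presumably look like the fragment below
(illustrative only; sample_time_cb and the seq counter are invented):

static void sample_time_cb(struct mhi_device *mhi_dev, u32 sequence,
			   u64 local_time, u64 remote_time)
{
	/* local_time is the host capture, remote_time the modem's clock */
	pr_info("seq %u: host %llu modem %llu\n",
		sequence, local_time, remote_time);
}

	/* from sleepable (non-atomic) context, per the kernel-doc */
	ret = mhi_get_remote_time(mhi_dev, seq++, sample_time_cb);
	if (ret)
		return ret;	/* e.g. -EIO if the device lacks the feature */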




-- 
~Randy

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: MHI code review
  2018-07-09 20:08 ` MHI code review Sujeev Dias
                     ` (6 preceding siblings ...)
  2018-07-09 20:08   ` [PATCH v2 7/7] mhi_bus: dev: uci: add user space interface driver Sujeev Dias
@ 2019-04-30 15:10   ` Daniele Palmas
  2019-06-12 17:54     ` Sujeev Dias
  2019-06-12 18:00     ` Sujeev Dias
  7 siblings, 2 replies; 54+ messages in thread
From: Daniele Palmas @ 2019-04-30 15:10 UTC (permalink / raw)
  To: sdias; +Cc: linux-kernel, truong, dnlplm

Hi Sujeev,

> Hi Greg Kroah-Hartman\Arnd Bergmann and community
>
> Thank you for all the feedback, I believe I have addressed all the comments from previous
> patches. Also, I am excluding mhi network driver in this series. I still have some modifications
> to do.
>
> Please review the new patch series and share your feedback.
>
> Thanks again
>
> Sincerely,
> Sujeev

are you going to continue working on this series?

Can this series be used with PCIe SDX20/24/55 based modems?

If yes, it would really be important to have this integrated into the mainline kernel.

Thanks,
Daniele

^ permalink raw reply	[flat|nested] 54+ messages in thread

* RE: MHI code review
  2019-04-30 15:10   ` MHI code review Daniele Palmas
@ 2019-06-12 17:54     ` Sujeev Dias
  2019-06-12 20:58       ` Daniele Palmas
  2019-06-12 18:00     ` Sujeev Dias
  1 sibling, 1 reply; 54+ messages in thread
From: Sujeev Dias @ 2019-06-12 17:54 UTC (permalink / raw)
  To: 'Daniele Palmas'; +Cc: linux-kernel, truong

Hi Daniele

Sorry for the delayed response.  Yes, we will be pushing a new patch series
very soon that will have support for the SDX55 as well.  The series already
pushed should work for the SDX20, 24 and 55.  There are some new features
related to the 55 that are not yet in the series.

Thanks
Sujeev 

-----Original Message-----
From: Daniele Palmas <dnlplm@gmail.com> 
Sent: Tuesday, April 30, 2019 8:11 AM
To: sdias@codeaurora.org
Cc: linux-kernel@vger.kernel.org; truong@codeaurora.org; dnlplm@gmail.com
Subject: Re: MHI code review

Hi Sujeev,

> Hi Greg Kroah-Hartman\Arnd Bergmann and community
>
> Thank you for all the feedback, I believe I have addressed all the 
> comments from previous patches. Also, I am excluding mhi network 
> driver in this series. I still have some modifications to do.
>
> Please review the new patch series and share your feedback.
>
> Thanks again
>
> Sincerely,
> Sujeev

are you going to continue working on this series?

Can this series be used with PCIe SDX20/24/55 based modems?

If yes, it would really be important to have this integrated into the
mainline kernel.

Thanks,
Daniele


^ permalink raw reply	[flat|nested] 54+ messages in thread

* RE: MHI code review
  2019-04-30 15:10   ` MHI code review Daniele Palmas
  2019-06-12 17:54     ` Sujeev Dias
@ 2019-06-12 18:00     ` Sujeev Dias
  1 sibling, 0 replies; 54+ messages in thread
From: Sujeev Dias @ 2019-06-12 18:00 UTC (permalink / raw)
  To: 'Daniele Palmas'; +Cc: linux-kernel, truong

Our team is also monitoring the following thread, since it has
implications for MHI as well:
https://lkml.kernel.org/lkml/20190531035348.7194-1-elder@linaro.org/

Thanks
Sujeev

-----Original Message-----
From: Sujeev Dias <sdias@codeaurora.org> 
Sent: Wednesday, June 12, 2019 10:55 AM
To: 'Daniele Palmas' <dnlplm@gmail.com>
Cc: 'linux-kernel@vger.kernel.org' <linux-kernel@vger.kernel.org>;
'truong@codeaurora.org' <truong@codeaurora.org>
Subject: RE: MHI code review

Hi Daniele

Sorry for the delayed response.  Yes, we will be pushing a new patch series
very soon that will have support for the SDX55 as well.  The series already
pushed should work for the SDX20, 24 and 55.  There are some new features
related to the 55 that are not yet in the series.

Thanks
Sujeev 

-----Original Message-----
From: Daniele Palmas <dnlplm@gmail.com>
Sent: Tuesday, April 30, 2019 8:11 AM
To: sdias@codeaurora.org
Cc: linux-kernel@vger.kernel.org; truong@codeaurora.org; dnlplm@gmail.com
Subject: Re: MHI code review

Hi Sujeev,

> Hi Greg Kroah-Hartman\Arnd Bergmann and community
>
> Thank you for all the feedback, I believe I have addressed all the 
> comments from previous patches. Also, I am excluding mhi network 
> driver in this series. I still have some modifications to do.
>
> Please review the new patch series and share your feedback.
>
> Thanks again
>
> Sincerely,
> Sujeev

are you going to continue working on this series?

Can this series be used with PCIe SDX20/24/55 based modems?

If yes, it would really be important to have this integrated into the
mainline kernel.

Thanks,
Daniele


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: MHI code review
  2019-06-12 17:54     ` Sujeev Dias
@ 2019-06-12 20:58       ` Daniele Palmas
  0 siblings, 0 replies; 54+ messages in thread
From: Daniele Palmas @ 2019-06-12 20:58 UTC (permalink / raw)
  To: Sujeev Dias; +Cc: linux-kernel, truong

Hi Sujeev,

On Wed, Jun 12, 2019 at 7:54 PM Sujeev Dias
<sdias@codeaurora.org> wrote:
>
> Hi Daniele
>
> Sorry for the delayed response.  Yes, we will be pushing a new patch series
> very soon that will have support for the SDX55 as well.  The series already
> pushed should work for the SDX20, 24 and 55.  There are some new features
> related to the 55 that are not yet in the series.
>

great, thanks for the update! I'll wait for you new patch-set.

Thanks,
Daniele

> Thanks
> Sujeev
>
> -----Original Message-----
> From: Daniele Palmas <dnlplm@gmail.com>
> Sent: Tuesday, April 30, 2019 8:11 AM
> To: sdias@codeaurora.org
> Cc: linux-kernel@vger.kernel.org; truong@codeaurora.org; dnlplm@gmail.com
> Subject: Re: MHI code review
>
> Hi Sujeev,
>
> > Hi Greg Kroah-Hartman\Arnd Bergmann and community
> >
> > Thank you for all the feedback, I believe I have addressed all the
> > comments from previous patches. Also, I am excluding mhi network
> > driver in this series. I still have some modifications to do.
> >
> > Please review the new patch series and share your feedback.
> >
> > Thanks again
> >
> > Sincerely,
> > Sujeev
>
> are you going to continue working on this series?
>
> Can this series be used with PCIe SDX20/24/55 based modems?
>
> If yes, it would really be important to have this integrated into the
> mainline kernel.
>
> Thanks,
> Daniele
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

end of thread, other threads:[~2019-06-12 20:59 UTC | newest]

Thread overview: 54+ messages
2018-04-27  2:23 MHI initial design review Sujeev Dias
2018-04-27  2:23 ` [PATCH v1 1/4] mhi_bus: core: Add support for MHI host interface Sujeev Dias
2018-04-27  7:22   ` Greg Kroah-Hartman
2018-04-28 14:28     ` Sujeev Dias
2018-04-28 15:50       ` Greg Kroah-Hartman
2018-04-27  7:23   ` Greg Kroah-Hartman
2018-04-27 12:18   ` Arnd Bergmann
2018-04-28 16:08     ` Sujeev Dias
2018-04-28  0:28   ` kbuild test robot
2018-04-28  0:28     ` kbuild test robot
2018-04-28  2:52   ` kbuild test robot
2018-04-28  2:52     ` kbuild test robot
2018-05-03 19:21   ` Pavel Machek
2018-05-04  3:05     ` Sujeev Dias
2018-06-22 23:03   ` Randy Dunlap
2018-04-27  2:23 ` [PATCH v1 2/4] mhi_bus: controller: MHI support for QCOM modems Sujeev Dias
2018-04-27 11:32   ` Arnd Bergmann
2018-04-28 15:40     ` Sujeev Dias
2018-04-28  3:05   ` kbuild test robot
2018-04-28  3:05     ` kbuild test robot
2018-04-28  3:12   ` kbuild test robot
2018-04-28  3:12     ` kbuild test robot
2018-04-27  2:23 ` [PATCH v1 3/4] mhi_bus: dev: netdev: add network interface driver Sujeev Dias
2018-04-27 11:19   ` Arnd Bergmann
2018-04-28 15:25     ` Sujeev Dias
2018-04-27  2:23 ` [PATCH v1 4/4] mhi_bus: dev: uci: add user space " Sujeev Dias
2018-04-27 11:36   ` Arnd Bergmann
2018-04-28  1:03   ` kbuild test robot
2018-04-28  1:03     ` kbuild test robot
2018-04-28  5:16   ` [PATCH] mhi_bus: dev: uci: fix semicolon.cocci warnings kbuild test robot
2018-04-28  5:16     ` kbuild test robot
2018-04-28  5:16   ` [PATCH v1 4/4] mhi_bus: dev: uci: add user space interface driver kbuild test robot
2018-04-28  5:16     ` kbuild test robot
2018-07-09 20:08 ` MHI code review Sujeev Dias
2018-07-09 20:08   ` [PATCH v2 1/7] mhi_bus: core: initial checkin for modem host interface bus driver Sujeev Dias
2018-07-09 20:50     ` Greg Kroah-Hartman
2018-07-09 20:52     ` Greg Kroah-Hartman
2018-07-10  6:36     ` Greg Kroah-Hartman
2018-07-11 19:30     ` Rob Herring
2018-08-09 18:39     ` Randy Dunlap
2018-07-09 20:08   ` [PATCH v2 2/7] mhi_bus: core: add power management support Sujeev Dias
2018-07-09 20:08   ` [PATCH v2 3/7] mhi_bus: core: add support for data transfer Sujeev Dias
2018-07-10  6:29     ` Greg Kroah-Hartman
2018-07-09 20:08   ` [PATCH v2 4/7] mhi_bus: core: add support for handling ioctl cmds Sujeev Dias
2018-07-09 20:08   ` [PATCH v2 5/7] mhi_bus: core: add support to get external modem time Sujeev Dias
2018-07-11 19:32     ` Rob Herring
2018-08-09 20:17     ` Randy Dunlap
2018-07-09 20:08   ` [PATCH v2 6/7] mhi_bus: controller: MHI support for QCOM modems Sujeev Dias
2018-07-11 19:36     ` Rob Herring
2018-07-09 20:08   ` [PATCH v2 7/7] mhi_bus: dev: uci: add user space interface driver Sujeev Dias
2019-04-30 15:10   ` MHI code review Daniele Palmas
2019-06-12 17:54     ` Sujeev Dias
2019-06-12 20:58       ` Daniele Palmas
2019-06-12 18:00     ` Sujeev Dias
