mhi.lists.linux.dev archive mirror
* [PATCH v4 00/27] Add initial support for MHI endpoint stack
@ 2022-02-28 12:43 Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 01/27] bus: mhi: Fix pm_state conversion to string Manivannan Sadhasivam
                   ` (28 more replies)
  0 siblings, 29 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Hello,

This series adds initial support for the Qualcomm-specific Modem Host
Interface (MHI) bus in endpoint devices like SDX55 modems. The MHI bus in
endpoint devices communicates with the MHI bus in host machines like x86
over any physical bus like PCIe. The MHI host support is already in
mainline [1] and has been used by PCIe based modems and WLAN devices
running vendor code (downstream).

Overview
========

This series aims to add MHI support to endpoint devices, with the goal of
getting data connectivity using the mainline kernel running on the modems.
Modems here refer to the combination of an APPS processor (Cortex-A grade)
and a baseband processor (DSP). The MHI bus is located in the APPS
processor and transfers data packets from the baseband processor to the
host machine.

The MHI Endpoint (MHI EP) stack proposed here is inspired by the
downstream code written by Qualcomm, but the complete stack has mostly
been rewritten to adapt to the "bus" framework and made modular so that it
can work with upstream subsystems like "PCI Endpoint". The code structure
of the MHI endpoint stack follows the MHI host stack to maintain
uniformity.

With this initial MHI EP stack (along with a few other drivers), we can
establish a network interface between host and endpoint over the MHI
software channels (IP_SW0) and do things like IP forwarding, SSH, etc.

Stack Organization
==================

The MHI EP stack has the concept of controller and device drivers, just
like the MHI host stack. The MHI EP controller driver can be a PCI
Endpoint Function driver, and the MHI device driver can be an MHI EP
networking driver or a QRTR driver. The MHI EP controller driver is tied
to the PCI Endpoint subsystem and handles all bus related activities like
mapping the host memory, raising IRQs, passing link specific events, etc.
The MHI EP networking driver is tied to the networking stack and handles
all networking related activities like sending/receiving SKBs from netdev,
statistics collection, etc.
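
For illustration, below is a minimal sketch of what an MHI EP client
(device) driver could look like on top of this stack. It is only meant to
show the controller/device driver split; the sketch_* names are
hypothetical, and the structures and callbacks follow the
mhi_ep_driver/mhi_ep_device interfaces proposed in include/linux/mhi_ep.h
by this series (the real networking and QRTR clients live in [2]):

  #include <linux/mhi_ep.h>
  #include <linux/mod_devicetable.h>
  #include <linux/module.h>

  /* Invoked by the MHI EP core on transfer completion for our channels */
  static void sketch_ul_xfer_cb(struct mhi_ep_device *mhi_dev,
                                struct mhi_result *result)
  {
          /* hand result->buf_addr / result->bytes_xferd to the upper layer */
  }

  static void sketch_dl_xfer_cb(struct mhi_ep_device *mhi_dev,
                                struct mhi_result *result)
  {
          /* completion of data previously queued towards the host */
  }

  static int sketch_probe(struct mhi_ep_device *mhi_dev,
                          const struct mhi_device_id *id)
  {
          /* allocate per-device state (netdev, QRTR endpoint, ...) */
          return 0;
  }

  static void sketch_remove(struct mhi_ep_device *mhi_dev)
  {
  }

  static const struct mhi_device_id sketch_id_table[] = {
          { .chan = "IP_SW0" },   /* software data path channel */
          {},
  };

  static struct mhi_ep_driver sketch_ep_driver = {
          .id_table = sketch_id_table,
          .probe = sketch_probe,
          .remove = sketch_remove,
          .ul_xfer_cb = sketch_ul_xfer_cb,
          .dl_xfer_cb = sketch_dl_xfer_cb,
          .driver = {
                  .name = "mhi_ep_sketch",
          },
  };
  module_mhi_ep_driver(sketch_ep_driver);

  MODULE_LICENSE("GPL");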

This series only contains the MHI EP code; the PCIe EPF driver and MHI EP
networking drivers are not yet submitted and can be found here [2]. Though
the MHI EP stack doesn't have a build time dependency on them, it cannot
function without them.
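
For completeness, on the controller side the PCI EPF driver essentially
fills in a struct mhi_ep_cntrl (MMIO region, doorbell IRQ and the bus
callbacks) and registers it. Below is a heavily trimmed sketch only;
channel numbers, ring sizes and the MHI version are placeholders, and the
real configuration lives in the EPF driver in [2]:

  #include <linux/dma-direction.h>
  #include <linux/kernel.h>
  #include <linux/mhi_ep.h>

  static const struct mhi_ep_channel_config demo_mhi_channels[] = {
          /* channel numbers and element counts are placeholders */
          { .name = "IP_SW0", .num = 46, .num_elements = 128, .dir = DMA_TO_DEVICE },
          { .name = "IP_SW0", .num = 47, .num_elements = 128, .dir = DMA_FROM_DEVICE },
  };

  static const struct mhi_ep_cntrl_config demo_mhi_config = {
          .mhi_version = 0x1000000,       /* placeholder */
          .max_channels = 128,
          .num_channels = ARRAY_SIZE(demo_mhi_channels),
          .ch_cfg = demo_mhi_channels,
  };

  /* Called from the EPF driver once BARs and MSIs are set up */
  static int demo_epf_mhi_register(struct mhi_ep_cntrl *mhi_cntrl,
                                   void __iomem *mmio, int irq)
  {
          mhi_cntrl->mmio = mmio;         /* MMIO region exposed to the host */
          mhi_cntrl->irq = irq;
          /* ...plus the callbacks for raising IRQs to the host and for
           * mapping/reading/writing host memory, provided by the EPF side
           */
          return mhi_ep_register_controller(mhi_cntrl, &demo_mhi_config);
  }

mhi_ep_power_up() then brings up the stack once the host side is ready
(e.g. on link up).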

Test setup
==========

This series has been tested on the Telit FN980 TLB board powered by the
Qualcomm SDX55 (a.k.a. X55 modem) and on a Qualcomm SM8450 based dev
board.

For testing stability and performance, networking tools such as iperf,
ssh and ping were used.

Limitations
===========

We are not _yet_ able to get data packets from the modem, as that requires
integrating the Qualcomm IP Accelerator (IPA) with the MHI endpoint stack.
We are planning to add support for it in the coming days.

References
==========

MHI bus: https://www.kernel.org/doc/html/latest/mhi/mhi.html
Linaro connect presentation around this topic: https://connect.linaro.org/resources/lvc21f/lvc21f-222/

Thanks,
Mani

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/bus/mhi
[2] https://git.linaro.org/landing-teams/working/qualcomm/kernel.git/log/?h=tracking-qcomlt-sdx55-drivers

Changes in v4:

* Collected reviews from Hemant and Alex.
* Removed the A7 suffix from register names and functions.
* Added a couple of cleanup patches.
* Reworked the mhi_ep_queue_skb() API.
* Switched to separate workers for command and transfer rings.
* Used a common workqueue for state and ring management.
* Reworked the channel ring management.
* Other misc changes as per review from Alex.

Changes in v3:

* Split patch 20/23 into two.
* Fixed the error handling in patch 21/23.
* Removed spurious change in patch 01/23.
* Added check for xfer callbacks in client driver probe.

Changes in v2:

v2 mostly addresses the issues seen while testing the stack on SM8450,
which is an SMP platform, and also incorporates the review comments from
Alex.

Major changes are:

* Added a cleanup patch for getting rid of SHIFT macros and used bitfield
  operations.
* Added the endianness patches that were submitted to the MHI list and
  used the endianness conversion in the EP patches as well.
* Added support for multiple event rings.
* Fixed the MSI generation based on the event ring index.
* Fixed the doorbell list handling by making use of list splice and not locking
  the entire list manipulation.
* Added new APIs for wrapping the reading and writing to host memory (Dmitry).
* Optimized the read_channel and queue_skb function logic.
* Added Hemant's R-o-b tag.

Manivannan Sadhasivam (25):
  bus: mhi: Move host MHI code to "host" directory
  bus: mhi: Use bitfield operations for register read and write
  bus: mhi: Use bitfield operations for handling DWORDs of ring elements
  bus: mhi: Cleanup the register definitions used in headers
  bus: mhi: host: Rename "struct mhi_tre" to "struct mhi_ring_element"
  bus: mhi: Move common MHI definitions out of host directory
  bus: mhi: Make mhi_state_str[] array static inline and move to
    common.h
  bus: mhi: ep: Add support for registering MHI endpoint controllers
  bus: mhi: ep: Add support for registering MHI endpoint client drivers
  bus: mhi: ep: Add support for creating and destroying MHI EP devices
  bus: mhi: ep: Add support for managing MMIO registers
  bus: mhi: ep: Add support for ring management
  bus: mhi: ep: Add support for sending events to the host
  bus: mhi: ep: Add support for managing MHI state machine
  bus: mhi: ep: Add support for processing MHI endpoint interrupts
  bus: mhi: ep: Add support for powering up the MHI endpoint stack
  bus: mhi: ep: Add support for powering down the MHI endpoint stack
  bus: mhi: ep: Add support for handling MHI_RESET
  bus: mhi: ep: Add support for handling SYS_ERR condition
  bus: mhi: ep: Add support for processing command rings
  bus: mhi: ep: Add support for reading from the host
  bus: mhi: ep: Add support for processing channel rings
  bus: mhi: ep: Add support for queueing SKBs to the host
  bus: mhi: ep: Add support for suspending and resuming channels
  bus: mhi: ep: Add uevent support for module autoloading

Paul Davey (2):
  bus: mhi: Fix pm_state conversion to string
  bus: mhi: Fix MHI DMA structure endianness

 drivers/bus/Makefile                     |    2 +-
 drivers/bus/mhi/Kconfig                  |   28 +-
 drivers/bus/mhi/Makefile                 |    9 +-
 drivers/bus/mhi/common.h                 |  326 +++++
 drivers/bus/mhi/core/internal.h          |  722 ----------
 drivers/bus/mhi/ep/Kconfig               |   10 +
 drivers/bus/mhi/ep/Makefile              |    2 +
 drivers/bus/mhi/ep/internal.h            |  222 +++
 drivers/bus/mhi/ep/main.c                | 1623 ++++++++++++++++++++++
 drivers/bus/mhi/ep/mmio.c                |  272 ++++
 drivers/bus/mhi/ep/ring.c                |  197 +++
 drivers/bus/mhi/ep/sm.c                  |  148 ++
 drivers/bus/mhi/host/Kconfig             |   31 +
 drivers/bus/mhi/{core => host}/Makefile  |    4 +-
 drivers/bus/mhi/{core => host}/boot.c    |   17 +-
 drivers/bus/mhi/{core => host}/debugfs.c |   40 +-
 drivers/bus/mhi/{core => host}/init.c    |  131 +-
 drivers/bus/mhi/host/internal.h          |  382 +++++
 drivers/bus/mhi/{core => host}/main.c    |   66 +-
 drivers/bus/mhi/{ => host}/pci_generic.c |    0
 drivers/bus/mhi/{core => host}/pm.c      |   36 +-
 include/linux/mhi_ep.h                   |  284 ++++
 include/linux/mod_devicetable.h          |    2 +
 scripts/mod/file2alias.c                 |   10 +
 24 files changed, 3649 insertions(+), 915 deletions(-)
 create mode 100644 drivers/bus/mhi/common.h
 delete mode 100644 drivers/bus/mhi/core/internal.h
 create mode 100644 drivers/bus/mhi/ep/Kconfig
 create mode 100644 drivers/bus/mhi/ep/Makefile
 create mode 100644 drivers/bus/mhi/ep/internal.h
 create mode 100644 drivers/bus/mhi/ep/main.c
 create mode 100644 drivers/bus/mhi/ep/mmio.c
 create mode 100644 drivers/bus/mhi/ep/ring.c
 create mode 100644 drivers/bus/mhi/ep/sm.c
 create mode 100644 drivers/bus/mhi/host/Kconfig
 rename drivers/bus/mhi/{core => host}/Makefile (54%)
 rename drivers/bus/mhi/{core => host}/boot.c (96%)
 rename drivers/bus/mhi/{core => host}/debugfs.c (90%)
 rename drivers/bus/mhi/{core => host}/init.c (92%)
 create mode 100644 drivers/bus/mhi/host/internal.h
 rename drivers/bus/mhi/{core => host}/main.c (97%)
 rename drivers/bus/mhi/{ => host}/pci_generic.c (100%)
 rename drivers/bus/mhi/{core => host}/pm.c (97%)
 create mode 100644 include/linux/mhi_ep.h

-- 
2.25.1


* [PATCH v4 01/27] bus: mhi: Fix pm_state conversion to string
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 15:30   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 02/27] bus: mhi: Fix MHI DMA structure endianness Manivannan Sadhasivam
                   ` (27 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder, Paul Davey,
	Manivannan Sadhasivam, Hemant Kumar, stable,
	Manivannan Sadhasivam

From: Paul Davey <paul.davey@alliedtelesis.co.nz>

On big endian architectures the mhi debugfs files which report pm state
give "Invalid State" for all states. This is caused by using
find_last_bit, which takes an unsigned long *, while the state is passed
in as an enum mhi_pm_state, which will be of int size.

Fix this by passing the value of state to __fls instead of using
find_last_bit.

Also, the current API expects an "mhi_pm_state" enumerator as the
function argument, but the function only works with bitmasks. So, as Alex
suggested, let's change the argument to u32 to avoid confusion.

Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
Reviewed-by: Manivannan Sadhasivam <mani@kernel.org>
Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
Cc: stable@vger.kernel.org
[mani: changed the function argument to u32]
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/core/init.c     | 10 ++++++----
 drivers/bus/mhi/core/internal.h |  2 +-
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 046f407dc5d6..09394a1c29ec 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -77,12 +77,14 @@ static const char * const mhi_pm_state_str[] = {
 	[MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "Linkdown or Error Fatal Detect",
 };
 
-const char *to_mhi_pm_state_str(enum mhi_pm_state state)
+const char *to_mhi_pm_state_str(u32 state)
 {
-	unsigned long pm_state = state;
-	int index = find_last_bit(&pm_state, 32);
+	int index;
 
-	if (index >= ARRAY_SIZE(mhi_pm_state_str))
+	if (state)
+		index = __fls(state);
+
+	if (!state || index >= ARRAY_SIZE(mhi_pm_state_str))
 		return "Invalid State";
 
 	return mhi_pm_state_str[index];
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index e2e10474a9d9..3508cbbf555d 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -622,7 +622,7 @@ void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
 enum mhi_pm_state __must_check mhi_tryset_pm_state(
 					struct mhi_controller *mhi_cntrl,
 					enum mhi_pm_state state);
-const char *to_mhi_pm_state_str(enum mhi_pm_state state);
+const char *to_mhi_pm_state_str(u32 state);
 int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
 			       enum dev_st_transition state);
 void mhi_pm_st_worker(struct work_struct *work);
-- 
2.25.1


* [PATCH v4 02/27] bus: mhi: Fix MHI DMA structure endianness
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 01/27] bus: mhi: Fix pm_state conversion to string Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 15:40   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 03/27] bus: mhi: Move host MHI code to "host" directory Manivannan Sadhasivam
                   ` (26 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder, Paul Davey,
	Manivannan Sadhasivam, stable

From: Paul Davey <paul.davey@alliedtelesis.co.nz>

The MHI driver does not work on big endian architectures.  The
controller never transitions into mission mode.  This appears to be due
to the modem device expecting the various contexts and transfer rings to
have fields in little endian order in memory, but the driver constructs
them in native endianness.

Fix MHI event, channel and command contexts and TRE handling macros to
use explicit conversion to little endian.  Mark fields in relevant
structures as little endian to document this requirement.
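
As a generic illustration of the pattern used throughout this patch (a
sketch, not code lifted from the driver): fields that the device
interprets as little endian are declared __le32/__le64 and are only
touched through the conversion helpers, which also lets sparse
("make C=1") flag any direct native-endian access:

  #include <asm/byteorder.h>
  #include <linux/types.h>

  /* hypothetical context structure shared with the device in memory */
  struct demo_ctxt {
          __le32 cfg;
          __le64 rbase;
  };

  static void demo_ctxt_init(struct demo_ctxt *ctxt, u32 cfg, u64 rbase)
  {
          /* CPU-native values are converted once, at the point of write */
          ctxt->cfg = cpu_to_le32(cfg);
          ctxt->rbase = cpu_to_le64(rbase);
  }

  static u32 demo_ctxt_cfg(const struct demo_ctxt *ctxt)
  {
          /* ...and converted back at the point of read */
          return le32_to_cpu(ctxt->cfg);
  }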

Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
Fixes: 6cd330ae76ff ("bus: mhi: core: Add support for ringing channel/event ring doorbells")
Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
Cc: stable@vger.kernel.org
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/core/debugfs.c  |  26 +++----
 drivers/bus/mhi/core/init.c     |  36 +++++-----
 drivers/bus/mhi/core/internal.h | 119 ++++++++++++++++----------------
 drivers/bus/mhi/core/main.c     |  22 +++---
 drivers/bus/mhi/core/pm.c       |   4 +-
 5 files changed, 104 insertions(+), 103 deletions(-)

diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/core/debugfs.c
index 858d7516410b..d818586c229d 100644
--- a/drivers/bus/mhi/core/debugfs.c
+++ b/drivers/bus/mhi/core/debugfs.c
@@ -60,16 +60,16 @@ static int mhi_debugfs_events_show(struct seq_file *m, void *d)
 		}
 
 		seq_printf(m, "Index: %d intmod count: %lu time: %lu",
-			   i, (er_ctxt->intmod & EV_CTX_INTMODC_MASK) >>
+			   i, (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODC_MASK) >>
 			   EV_CTX_INTMODC_SHIFT,
-			   (er_ctxt->intmod & EV_CTX_INTMODT_MASK) >>
+			   (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODT_MASK) >>
 			   EV_CTX_INTMODT_SHIFT);
 
-		seq_printf(m, " base: 0x%0llx len: 0x%llx", er_ctxt->rbase,
-			   er_ctxt->rlen);
+		seq_printf(m, " base: 0x%0llx len: 0x%llx", le64_to_cpu(er_ctxt->rbase),
+			   le64_to_cpu(er_ctxt->rlen));
 
-		seq_printf(m, " rp: 0x%llx wp: 0x%llx", er_ctxt->rp,
-			   er_ctxt->wp);
+		seq_printf(m, " rp: 0x%llx wp: 0x%llx", le64_to_cpu(er_ctxt->rp),
+			   le64_to_cpu(er_ctxt->wp));
 
 		seq_printf(m, " local rp: 0x%pK db: 0x%pad\n", ring->rp,
 			   &mhi_event->db_cfg.db_val);
@@ -106,18 +106,18 @@ static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
 
 		seq_printf(m,
 			   "%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
-			   mhi_chan->name, mhi_chan->chan, (chan_ctxt->chcfg &
+			   mhi_chan->name, mhi_chan->chan, (le32_to_cpu(chan_ctxt->chcfg) &
 			   CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
-			   (chan_ctxt->chcfg & CHAN_CTX_BRSTMODE_MASK) >>
-			   CHAN_CTX_BRSTMODE_SHIFT, (chan_ctxt->chcfg &
+			   (le32_to_cpu(chan_ctxt->chcfg) & CHAN_CTX_BRSTMODE_MASK) >>
+			   CHAN_CTX_BRSTMODE_SHIFT, (le32_to_cpu(chan_ctxt->chcfg) &
 			   CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
 
-		seq_printf(m, " type: 0x%x event ring: %u", chan_ctxt->chtype,
-			   chan_ctxt->erindex);
+		seq_printf(m, " type: 0x%x event ring: %u", le32_to_cpu(chan_ctxt->chtype),
+			   le32_to_cpu(chan_ctxt->erindex));
 
 		seq_printf(m, " base: 0x%llx len: 0x%llx rp: 0x%llx wp: 0x%llx",
-			   chan_ctxt->rbase, chan_ctxt->rlen, chan_ctxt->rp,
-			   chan_ctxt->wp);
+			   le64_to_cpu(chan_ctxt->rbase), le64_to_cpu(chan_ctxt->rlen),
+			   le64_to_cpu(chan_ctxt->rp), le64_to_cpu(chan_ctxt->wp));
 
 		seq_printf(m, " local rp: 0x%pK local wp: 0x%pK db: 0x%pad\n",
 			   ring->rp, ring->wp,
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 09394a1c29ec..d8787aaa176b 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -293,17 +293,17 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 		if (mhi_chan->offload_ch)
 			continue;
 
-		tmp = chan_ctxt->chcfg;
+		tmp = le32_to_cpu(chan_ctxt->chcfg);
 		tmp &= ~CHAN_CTX_CHSTATE_MASK;
 		tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
 		tmp &= ~CHAN_CTX_BRSTMODE_MASK;
 		tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
 		tmp &= ~CHAN_CTX_POLLCFG_MASK;
 		tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
-		chan_ctxt->chcfg = tmp;
+		chan_ctxt->chcfg = cpu_to_le32(tmp);
 
-		chan_ctxt->chtype = mhi_chan->type;
-		chan_ctxt->erindex = mhi_chan->er_index;
+		chan_ctxt->chtype = cpu_to_le32(mhi_chan->type);
+		chan_ctxt->erindex = cpu_to_le32(mhi_chan->er_index);
 
 		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
 		mhi_chan->tre_ring.db_addr = (void __iomem *)&chan_ctxt->wp;
@@ -328,14 +328,14 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 		if (mhi_event->offload_ev)
 			continue;
 
-		tmp = er_ctxt->intmod;
+		tmp = le32_to_cpu(er_ctxt->intmod);
 		tmp &= ~EV_CTX_INTMODC_MASK;
 		tmp &= ~EV_CTX_INTMODT_MASK;
 		tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);
-		er_ctxt->intmod = tmp;
+		er_ctxt->intmod = cpu_to_le32(tmp);
 
-		er_ctxt->ertype = MHI_ER_TYPE_VALID;
-		er_ctxt->msivec = mhi_event->irq;
+		er_ctxt->ertype = cpu_to_le32(MHI_ER_TYPE_VALID);
+		er_ctxt->msivec = cpu_to_le32(mhi_event->irq);
 		mhi_event->db_cfg.db_mode = true;
 
 		ring->el_size = sizeof(struct mhi_tre);
@@ -349,9 +349,9 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 		 * ring is empty
 		 */
 		ring->rp = ring->wp = ring->base;
-		er_ctxt->rbase = ring->iommu_base;
+		er_ctxt->rbase = cpu_to_le64(ring->iommu_base);
 		er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase;
-		er_ctxt->rlen = ring->len;
+		er_ctxt->rlen = cpu_to_le64(ring->len);
 		ring->ctxt_wp = &er_ctxt->wp;
 	}
 
@@ -378,9 +378,9 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 			goto error_alloc_cmd;
 
 		ring->rp = ring->wp = ring->base;
-		cmd_ctxt->rbase = ring->iommu_base;
+		cmd_ctxt->rbase = cpu_to_le64(ring->iommu_base);
 		cmd_ctxt->rp = cmd_ctxt->wp = cmd_ctxt->rbase;
-		cmd_ctxt->rlen = ring->len;
+		cmd_ctxt->rlen = cpu_to_le64(ring->len);
 		ring->ctxt_wp = &cmd_ctxt->wp;
 	}
 
@@ -581,10 +581,10 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
 	chan_ctxt->rp = 0;
 	chan_ctxt->wp = 0;
 
-	tmp = chan_ctxt->chcfg;
+	tmp = le32_to_cpu(chan_ctxt->chcfg);
 	tmp &= ~CHAN_CTX_CHSTATE_MASK;
 	tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
-	chan_ctxt->chcfg = tmp;
+	chan_ctxt->chcfg = cpu_to_le32(tmp);
 
 	/* Update to all cores */
 	smp_wmb();
@@ -618,14 +618,14 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
 		return -ENOMEM;
 	}
 
-	tmp = chan_ctxt->chcfg;
+	tmp = le32_to_cpu(chan_ctxt->chcfg);
 	tmp &= ~CHAN_CTX_CHSTATE_MASK;
 	tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
-	chan_ctxt->chcfg = tmp;
+	chan_ctxt->chcfg = cpu_to_le32(tmp);
 
-	chan_ctxt->rbase = tre_ring->iommu_base;
+	chan_ctxt->rbase = cpu_to_le64(tre_ring->iommu_base);
 	chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase;
-	chan_ctxt->rlen = tre_ring->len;
+	chan_ctxt->rlen = cpu_to_le64(tre_ring->len);
 	tre_ring->ctxt_wp = &chan_ctxt->wp;
 
 	tre_ring->rp = tre_ring->wp = tre_ring->base;
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index 3508cbbf555d..37c39bf1c7a9 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -209,14 +209,14 @@ extern struct bus_type mhi_bus_type;
 #define EV_CTX_INTMODT_MASK GENMASK(31, 16)
 #define EV_CTX_INTMODT_SHIFT 16
 struct mhi_event_ctxt {
-	__u32 intmod;
-	__u32 ertype;
-	__u32 msivec;
-
-	__u64 rbase __packed __aligned(4);
-	__u64 rlen __packed __aligned(4);
-	__u64 rp __packed __aligned(4);
-	__u64 wp __packed __aligned(4);
+	__le32 intmod;
+	__le32 ertype;
+	__le32 msivec;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
 };
 
 #define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
@@ -227,25 +227,25 @@ struct mhi_event_ctxt {
 #define CHAN_CTX_POLLCFG_SHIFT 10
 #define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
 struct mhi_chan_ctxt {
-	__u32 chcfg;
-	__u32 chtype;
-	__u32 erindex;
-
-	__u64 rbase __packed __aligned(4);
-	__u64 rlen __packed __aligned(4);
-	__u64 rp __packed __aligned(4);
-	__u64 wp __packed __aligned(4);
+	__le32 chcfg;
+	__le32 chtype;
+	__le32 erindex;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
 };
 
 struct mhi_cmd_ctxt {
-	__u32 reserved0;
-	__u32 reserved1;
-	__u32 reserved2;
-
-	__u64 rbase __packed __aligned(4);
-	__u64 rlen __packed __aligned(4);
-	__u64 rp __packed __aligned(4);
-	__u64 wp __packed __aligned(4);
+	__le32 reserved0;
+	__le32 reserved1;
+	__le32 reserved2;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
 };
 
 struct mhi_ctxt {
@@ -258,8 +258,8 @@ struct mhi_ctxt {
 };
 
 struct mhi_tre {
-	u64 ptr;
-	u32 dword[2];
+	__le64 ptr;
+	__le32 dword[2];
 };
 
 struct bhi_vec_entry {
@@ -277,57 +277,58 @@ enum mhi_cmd_type {
 /* No operation command */
 #define MHI_TRE_CMD_NOOP_PTR (0)
 #define MHI_TRE_CMD_NOOP_DWORD0 (0)
-#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
+#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
 
 /* Channel reset command */
 #define MHI_TRE_CMD_RESET_PTR (0)
 #define MHI_TRE_CMD_RESET_DWORD0 (0)
-#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
-					(MHI_CMD_RESET_CHAN << 16))
+#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
+					(MHI_CMD_RESET_CHAN << 16)))
 
 /* Channel stop command */
 #define MHI_TRE_CMD_STOP_PTR (0)
 #define MHI_TRE_CMD_STOP_DWORD0 (0)
-#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
-				       (MHI_CMD_STOP_CHAN << 16))
+#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
+				       (MHI_CMD_STOP_CHAN << 16)))
 
 /* Channel start command */
 #define MHI_TRE_CMD_START_PTR (0)
 #define MHI_TRE_CMD_START_DWORD0 (0)
-#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
-					(MHI_CMD_START_CHAN << 16))
+#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
+					(MHI_CMD_START_CHAN << 16)))
 
-#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
+#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
 
 /* Event descriptor macros */
-#define MHI_TRE_EV_PTR(ptr) (ptr)
-#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
-#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
-#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
-#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
-#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
-#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
-#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
-#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
+#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
+#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
+#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
+#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
+#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
+#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
+#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
+#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
+#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
 
 /* Transfer descriptor macros */
-#define MHI_TRE_DATA_PTR(ptr) (ptr)
-#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
-#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
-	| (ieot << 9) | (ieob << 8) | chain)
+#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
+#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
+#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
+	| (ieot << 9) | (ieob << 8) | chain))
 
 /* RSC transfer descriptor macros */
-#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
-#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
-#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
+#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
+#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
+#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
 
 enum mhi_pkt_type {
 	MHI_PKT_TYPE_INVALID = 0x0,
@@ -500,7 +501,7 @@ struct state_transition {
 struct mhi_ring {
 	dma_addr_t dma_handle;
 	dma_addr_t iommu_base;
-	u64 *ctxt_wp; /* point to ctxt wp */
+	__le64 *ctxt_wp; /* point to ctxt wp */
 	void *pre_aligned;
 	void *base;
 	void *rp;
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index ffde617f93a3..85f4f7c8d7c6 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -114,7 +114,7 @@ void mhi_ring_er_db(struct mhi_event *mhi_event)
 	struct mhi_ring *ring = &mhi_event->ring;
 
 	mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
-				     ring->db_addr, *ring->ctxt_wp);
+				     ring->db_addr, le64_to_cpu(*ring->ctxt_wp));
 }
 
 void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
@@ -123,7 +123,7 @@ void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
 	struct mhi_ring *ring = &mhi_cmd->ring;
 
 	db = ring->iommu_base + (ring->wp - ring->base);
-	*ring->ctxt_wp = db;
+	*ring->ctxt_wp = cpu_to_le64(db);
 	mhi_write_db(mhi_cntrl, ring->db_addr, db);
 }
 
@@ -140,7 +140,7 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
 	 * before letting h/w know there is new element to fetch.
 	 */
 	dma_wmb();
-	*ring->ctxt_wp = db;
+	*ring->ctxt_wp = cpu_to_le64(db);
 
 	mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
 				    ring->db_addr, db);
@@ -432,7 +432,7 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
 	struct mhi_event_ctxt *er_ctxt =
 		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
 	struct mhi_ring *ev_ring = &mhi_event->ring;
-	dma_addr_t ptr = er_ctxt->rp;
+	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
 	void *dev_rp;
 
 	if (!is_valid_ring_ptr(ev_ring, ptr)) {
@@ -537,14 +537,14 @@ static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
 
 	/* Update the WP */
 	ring->wp += ring->el_size;
-	ctxt_wp = *ring->ctxt_wp + ring->el_size;
+	ctxt_wp = le64_to_cpu(*ring->ctxt_wp) + ring->el_size;
 
 	if (ring->wp >= (ring->base + ring->len)) {
 		ring->wp = ring->base;
 		ctxt_wp = ring->iommu_base;
 	}
 
-	*ring->ctxt_wp = ctxt_wp;
+	*ring->ctxt_wp = cpu_to_le64(ctxt_wp);
 
 	/* Update the RP */
 	ring->rp += ring->el_size;
@@ -801,7 +801,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
 	u32 chan;
 	int count = 0;
-	dma_addr_t ptr = er_ctxt->rp;
+	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
 
 	/*
 	 * This is a quick check to avoid unnecessary event processing
@@ -940,7 +940,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
 		local_rp = ev_ring->rp;
 
-		ptr = er_ctxt->rp;
+		ptr = le64_to_cpu(er_ctxt->rp);
 		if (!is_valid_ring_ptr(ev_ring, ptr)) {
 			dev_err(&mhi_cntrl->mhi_dev->dev,
 				"Event ring rp points outside of the event ring\n");
@@ -970,7 +970,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
 	int count = 0;
 	u32 chan;
 	struct mhi_chan *mhi_chan;
-	dma_addr_t ptr = er_ctxt->rp;
+	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
 
 	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
 		return -EIO;
@@ -1011,7 +1011,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
 		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
 		local_rp = ev_ring->rp;
 
-		ptr = er_ctxt->rp;
+		ptr = le64_to_cpu(er_ctxt->rp);
 		if (!is_valid_ring_ptr(ev_ring, ptr)) {
 			dev_err(&mhi_cntrl->mhi_dev->dev,
 				"Event ring rp points outside of the event ring\n");
@@ -1533,7 +1533,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
 	/* mark all stale events related to channel as STALE event */
 	spin_lock_irqsave(&mhi_event->lock, flags);
 
-	ptr = er_ctxt->rp;
+	ptr = le64_to_cpu(er_ctxt->rp);
 	if (!is_valid_ring_ptr(ev_ring, ptr)) {
 		dev_err(&mhi_cntrl->mhi_dev->dev,
 			"Event ring rp points outside of the event ring\n");
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
index 4aae0baea008..c35c5ddc7220 100644
--- a/drivers/bus/mhi/core/pm.c
+++ b/drivers/bus/mhi/core/pm.c
@@ -218,7 +218,7 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
 			continue;
 
 		ring->wp = ring->base + ring->len - ring->el_size;
-		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
 		/* Update all cores */
 		smp_wmb();
 
@@ -420,7 +420,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
 			continue;
 
 		ring->wp = ring->base + ring->len - ring->el_size;
-		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
 		/* Update to all cores */
 		smp_wmb();
 
-- 
2.25.1


* [PATCH v4 03/27] bus: mhi: Move host MHI code to "host" directory
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 01/27] bus: mhi: Fix pm_state conversion to string Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 02/27] bus: mhi: Fix MHI DMA structure endianness Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 04/27] bus: mhi: Use bitfield operations for register read and write Manivannan Sadhasivam
                   ` (25 subsequent siblings)
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam, Hemant Kumar

In preparation for the endpoint MHI support, let's move the host MHI code
to its own "host" directory and adjust the top-level MHI Kconfig & Makefile.

While at it, let's also move the "pci_generic" driver to the "host"
directory as it is a host MHI controller driver.

Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/Makefile                      |  2 +-
 drivers/bus/mhi/Kconfig                   | 27 ++------------------
 drivers/bus/mhi/Makefile                  |  8 ++----
 drivers/bus/mhi/host/Kconfig              | 31 +++++++++++++++++++++++
 drivers/bus/mhi/{core => host}/Makefile   |  4 ++-
 drivers/bus/mhi/{core => host}/boot.c     |  0
 drivers/bus/mhi/{core => host}/debugfs.c  |  0
 drivers/bus/mhi/{core => host}/init.c     |  0
 drivers/bus/mhi/{core => host}/internal.h |  0
 drivers/bus/mhi/{core => host}/main.c     |  0
 drivers/bus/mhi/{ => host}/pci_generic.c  |  0
 drivers/bus/mhi/{core => host}/pm.c       |  0
 12 files changed, 39 insertions(+), 33 deletions(-)
 create mode 100644 drivers/bus/mhi/host/Kconfig
 rename drivers/bus/mhi/{core => host}/Makefile (54%)
 rename drivers/bus/mhi/{core => host}/boot.c (100%)
 rename drivers/bus/mhi/{core => host}/debugfs.c (100%)
 rename drivers/bus/mhi/{core => host}/init.c (100%)
 rename drivers/bus/mhi/{core => host}/internal.h (100%)
 rename drivers/bus/mhi/{core => host}/main.c (100%)
 rename drivers/bus/mhi/{ => host}/pci_generic.c (100%)
 rename drivers/bus/mhi/{core => host}/pm.c (100%)

diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 52c2f35a26a9..16da51130d1a 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -39,4 +39,4 @@ obj-$(CONFIG_VEXPRESS_CONFIG)	+= vexpress-config.o
 obj-$(CONFIG_DA8XX_MSTPRI)	+= da8xx-mstpri.o
 
 # MHI
-obj-$(CONFIG_MHI_BUS)		+= mhi/
+obj-y				+= mhi/
diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
index da5cd0c9fc62..4748df7f9cd5 100644
--- a/drivers/bus/mhi/Kconfig
+++ b/drivers/bus/mhi/Kconfig
@@ -2,30 +2,7 @@
 #
 # MHI bus
 #
-# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+# Copyright (c) 2021, Linaro Ltd.
 #
 
-config MHI_BUS
-	tristate "Modem Host Interface (MHI) bus"
-	help
-	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
-	  communication protocol used by the host processors to control
-	  and communicate with modem devices over a high speed peripheral
-	  bus or shared memory.
-
-config MHI_BUS_DEBUG
-	bool "Debugfs support for the MHI bus"
-	depends on MHI_BUS && DEBUG_FS
-	help
-	  Enable debugfs support for use with the MHI transport. Allows
-	  reading and/or modifying some values within the MHI controller
-	  for debug and test purposes.
-
-config MHI_BUS_PCI_GENERIC
-	tristate "MHI PCI controller driver"
-	depends on MHI_BUS
-	depends on PCI
-	help
-	  This driver provides MHI PCI controller driver for devices such as
-	  Qualcomm SDX55 based PCIe modems.
-
+source "drivers/bus/mhi/host/Kconfig"
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
index 0a2d778d6fb4..5f5708a249f5 100644
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -1,6 +1,2 @@
-# core layer
-obj-y += core/
-
-obj-$(CONFIG_MHI_BUS_PCI_GENERIC) += mhi_pci_generic.o
-mhi_pci_generic-y += pci_generic.o
-
+# Host MHI stack
+obj-y += host/
diff --git a/drivers/bus/mhi/host/Kconfig b/drivers/bus/mhi/host/Kconfig
new file mode 100644
index 000000000000..da5cd0c9fc62
--- /dev/null
+++ b/drivers/bus/mhi/host/Kconfig
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# MHI bus
+#
+# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+#
+
+config MHI_BUS
+	tristate "Modem Host Interface (MHI) bus"
+	help
+	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
+	  communication protocol used by the host processors to control
+	  and communicate with modem devices over a high speed peripheral
+	  bus or shared memory.
+
+config MHI_BUS_DEBUG
+	bool "Debugfs support for the MHI bus"
+	depends on MHI_BUS && DEBUG_FS
+	help
+	  Enable debugfs support for use with the MHI transport. Allows
+	  reading and/or modifying some values within the MHI controller
+	  for debug and test purposes.
+
+config MHI_BUS_PCI_GENERIC
+	tristate "MHI PCI controller driver"
+	depends on MHI_BUS
+	depends on PCI
+	help
+	  This driver provides MHI PCI controller driver for devices such as
+	  Qualcomm SDX55 based PCIe modems.
+
diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/host/Makefile
similarity index 54%
rename from drivers/bus/mhi/core/Makefile
rename to drivers/bus/mhi/host/Makefile
index c3feb4130aa3..859c2f38451c 100644
--- a/drivers/bus/mhi/core/Makefile
+++ b/drivers/bus/mhi/host/Makefile
@@ -1,4 +1,6 @@
 obj-$(CONFIG_MHI_BUS) += mhi.o
-
 mhi-y := init.o main.o pm.o boot.o
 mhi-$(CONFIG_MHI_BUS_DEBUG) += debugfs.o
+
+obj-$(CONFIG_MHI_BUS_PCI_GENERIC) += mhi_pci_generic.o
+mhi_pci_generic-y += pci_generic.o
diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/host/boot.c
similarity index 100%
rename from drivers/bus/mhi/core/boot.c
rename to drivers/bus/mhi/host/boot.c
diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/host/debugfs.c
similarity index 100%
rename from drivers/bus/mhi/core/debugfs.c
rename to drivers/bus/mhi/host/debugfs.c
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/host/init.c
similarity index 100%
rename from drivers/bus/mhi/core/init.c
rename to drivers/bus/mhi/host/init.c
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/host/internal.h
similarity index 100%
rename from drivers/bus/mhi/core/internal.h
rename to drivers/bus/mhi/host/internal.h
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/host/main.c
similarity index 100%
rename from drivers/bus/mhi/core/main.c
rename to drivers/bus/mhi/host/main.c
diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
similarity index 100%
rename from drivers/bus/mhi/pci_generic.c
rename to drivers/bus/mhi/host/pci_generic.c
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/host/pm.c
similarity index 100%
rename from drivers/bus/mhi/core/pm.c
rename to drivers/bus/mhi/host/pm.c
-- 
2.25.1


* [PATCH v4 04/27] bus: mhi: Use bitfield operations for register read and write
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (2 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 03/27] bus: mhi: Move host MHI code to "host" directory Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 05/27] bus: mhi: Use bitfield operations for handling DWORDs of ring elements Manivannan Sadhasivam
                   ` (24 subsequent siblings)
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Functions like mhi_read_reg_field(), mhi_poll_reg_field() and
mhi_write_reg_field() could be modified to not depend on the shift value
passed as an argument. Instead, the bitfield operation could be used to
extract the shift value from the mask itself.

This eliminates the need to define _SHIFT (and _SHFT) macros and
simplifies the code a bit. For shift values that cannot be determined at
build time, the "__ffs()" helper is used to find the shift value at
runtime.

While at it, let's also get rid of 32-bit masks like CHDBOFF_CHDBOFF_MASK
by doing a full 32-bit register read.
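
As a generic illustration of the helpers this patch switches to (a
sketch, not code from the patch): FIELD_PREP()/FIELD_GET() from
<linux/bitfield.h> derive the shift from a compile-time constant mask,
and __ffs() covers masks that are only known at runtime:

  #include <linux/bitfield.h>
  #include <linux/bitops.h>
  #include <linux/bits.h>

  #define DEMO_NER_MASK   GENMASK(23, 16)  /* example field in a register */

  static u32 demo_pack(u32 reg, u32 ner)
  {
          /* replaces (reg & ~MASK) | (ner << SHIFT) with a hand-kept SHIFT */
          reg &= ~DEMO_NER_MASK;
          reg |= FIELD_PREP(DEMO_NER_MASK, ner);
          return reg;
  }

  static u32 demo_unpack(u32 reg)
  {
          return FIELD_GET(DEMO_NER_MASK, reg);
  }

  static u32 demo_unpack_runtime(u32 reg, u32 mask)
  {
          /* FIELD_GET() needs a constant mask, so for masks only known at
           * runtime the shift is derived with __ffs() instead
           */
          return (reg & mask) >> __ffs(mask);
  }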

Suggested-by: Alex Elder <elder@linaro.org>
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/host/boot.c     |  15 ++--
 drivers/bus/mhi/host/debugfs.c  |  10 +--
 drivers/bus/mhi/host/init.c     |  67 ++++++++----------
 drivers/bus/mhi/host/internal.h | 120 +++++++-------------------------
 drivers/bus/mhi/host/main.c     |  16 ++---
 drivers/bus/mhi/host/pm.c       |  18 ++---
 6 files changed, 76 insertions(+), 170 deletions(-)

diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
index 74295d3cc662..d5ba3c7efb61 100644
--- a/drivers/bus/mhi/host/boot.c
+++ b/drivers/bus/mhi/host/boot.c
@@ -46,8 +46,7 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
 	sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_RXVECSTATUS_SEQNUM_BMSK);
 
 	mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
-			    BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
-			    sequence_id);
+			    BHIE_RXVECDB_SEQNUM_BMSK, sequence_id);
 
 	dev_dbg(dev, "Address: %p and len: 0x%zx sequence: %u\n",
 		&mhi_buf->dma_addr, mhi_buf->len, sequence_id);
@@ -127,9 +126,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
 
 	while (retry--) {
 		ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
-					 BHIE_RXVECSTATUS_STATUS_BMSK,
-					 BHIE_RXVECSTATUS_STATUS_SHFT,
-					 &rx_status);
+					 BHIE_RXVECSTATUS_STATUS_BMSK, &rx_status);
 		if (ret)
 			return -EIO;
 
@@ -168,7 +165,6 @@ int mhi_download_rddm_image(struct mhi_controller *mhi_cntrl, bool in_panic)
 			   mhi_read_reg_field(mhi_cntrl, base,
 					      BHIE_RXVECSTATUS_OFFS,
 					      BHIE_RXVECSTATUS_STATUS_BMSK,
-					      BHIE_RXVECSTATUS_STATUS_SHFT,
 					      &rx_status) || rx_status,
 			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
 
@@ -203,8 +199,7 @@ static int mhi_fw_load_bhie(struct mhi_controller *mhi_cntrl,
 	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
 
 	mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
-			    BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
-			    sequence_id);
+			    BHIE_TXVECDB_SEQNUM_BMSK, sequence_id);
 	read_unlock_bh(pm_lock);
 
 	/* Wait for the image download to complete */
@@ -213,7 +208,6 @@ static int mhi_fw_load_bhie(struct mhi_controller *mhi_cntrl,
 				 mhi_read_reg_field(mhi_cntrl, base,
 						   BHIE_TXVECSTATUS_OFFS,
 						   BHIE_TXVECSTATUS_STATUS_BMSK,
-						   BHIE_TXVECSTATUS_STATUS_SHFT,
 						   &tx_status) || tx_status,
 				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
 	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
@@ -265,8 +259,7 @@ static int mhi_fw_load_bhi(struct mhi_controller *mhi_cntrl,
 	ret = wait_event_timeout(mhi_cntrl->state_event,
 			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
 			   mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
-					      BHI_STATUS_MASK, BHI_STATUS_SHIFT,
-					      &tx_status) || tx_status,
+					      BHI_STATUS_MASK, &tx_status) || tx_status,
 			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
 	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
 		goto invalid_pm_state;
diff --git a/drivers/bus/mhi/host/debugfs.c b/drivers/bus/mhi/host/debugfs.c
index d818586c229d..bdc875d7bd4d 100644
--- a/drivers/bus/mhi/host/debugfs.c
+++ b/drivers/bus/mhi/host/debugfs.c
@@ -61,9 +61,9 @@ static int mhi_debugfs_events_show(struct seq_file *m, void *d)
 
 		seq_printf(m, "Index: %d intmod count: %lu time: %lu",
 			   i, (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODC_MASK) >>
-			   EV_CTX_INTMODC_SHIFT,
+			   __ffs(EV_CTX_INTMODC_MASK),
 			   (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODT_MASK) >>
-			   EV_CTX_INTMODT_SHIFT);
+			   __ffs(EV_CTX_INTMODT_MASK));
 
 		seq_printf(m, " base: 0x%0llx len: 0x%llx", le64_to_cpu(er_ctxt->rbase),
 			   le64_to_cpu(er_ctxt->rlen));
@@ -107,10 +107,10 @@ static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
 		seq_printf(m,
 			   "%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
 			   mhi_chan->name, mhi_chan->chan, (le32_to_cpu(chan_ctxt->chcfg) &
-			   CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
+			   CHAN_CTX_CHSTATE_MASK) >> __ffs(CHAN_CTX_CHSTATE_MASK),
 			   (le32_to_cpu(chan_ctxt->chcfg) & CHAN_CTX_BRSTMODE_MASK) >>
-			   CHAN_CTX_BRSTMODE_SHIFT, (le32_to_cpu(chan_ctxt->chcfg) &
-			   CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
+			   __ffs(CHAN_CTX_BRSTMODE_MASK), (le32_to_cpu(chan_ctxt->chcfg) &
+			   CHAN_CTX_POLLCFG_MASK) >> __ffs(CHAN_CTX_POLLCFG_MASK));
 
 		seq_printf(m, " type: 0x%x event ring: %u", le32_to_cpu(chan_ctxt->chtype),
 			   le32_to_cpu(chan_ctxt->erindex));
diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
index d8787aaa176b..ca068a017a42 100644
--- a/drivers/bus/mhi/host/init.c
+++ b/drivers/bus/mhi/host/init.c
@@ -4,6 +4,7 @@
  *
  */
 
+#include <linux/bitfield.h>
 #include <linux/debugfs.h>
 #include <linux/device.h>
 #include <linux/dma-direction.h>
@@ -295,11 +296,11 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 
 		tmp = le32_to_cpu(chan_ctxt->chcfg);
 		tmp &= ~CHAN_CTX_CHSTATE_MASK;
-		tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_DISABLED);
 		tmp &= ~CHAN_CTX_BRSTMODE_MASK;
-		tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
+		tmp |= FIELD_PREP(CHAN_CTX_BRSTMODE_MASK, mhi_chan->db_cfg.brstmode);
 		tmp &= ~CHAN_CTX_POLLCFG_MASK;
-		tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
+		tmp |= FIELD_PREP(CHAN_CTX_POLLCFG_MASK, mhi_chan->db_cfg.pollcfg);
 		chan_ctxt->chcfg = cpu_to_le32(tmp);
 
 		chan_ctxt->chtype = cpu_to_le32(mhi_chan->type);
@@ -331,7 +332,7 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 		tmp = le32_to_cpu(er_ctxt->intmod);
 		tmp &= ~EV_CTX_INTMODC_MASK;
 		tmp &= ~EV_CTX_INTMODT_MASK;
-		tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);
+		tmp |= FIELD_PREP(EV_CTX_INTMODT_MASK, mhi_event->intmod);
 		er_ctxt->intmod = cpu_to_le32(tmp);
 
 		er_ctxt->ertype = cpu_to_le32(MHI_ER_TYPE_VALID);
@@ -437,71 +438,70 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 	struct {
 		u32 offset;
 		u32 mask;
-		u32 shift;
 		u32 val;
 	} reg_info[] = {
 		{
-			CCABAP_HIGHER, U32_MAX, 0,
+			CCABAP_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
 		},
 		{
-			CCABAP_LOWER, U32_MAX, 0,
+			CCABAP_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
 		},
 		{
-			ECABAP_HIGHER, U32_MAX, 0,
+			ECABAP_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
 		},
 		{
-			ECABAP_LOWER, U32_MAX, 0,
+			ECABAP_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
 		},
 		{
-			CRCBAP_HIGHER, U32_MAX, 0,
+			CRCBAP_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
 		},
 		{
-			CRCBAP_LOWER, U32_MAX, 0,
+			CRCBAP_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
 		},
 		{
-			MHICFG, MHICFG_NER_MASK, MHICFG_NER_SHIFT,
+			MHICFG, MHICFG_NER_MASK,
 			mhi_cntrl->total_ev_rings,
 		},
 		{
-			MHICFG, MHICFG_NHWER_MASK, MHICFG_NHWER_SHIFT,
+			MHICFG, MHICFG_NHWER_MASK,
 			mhi_cntrl->hw_ev_rings,
 		},
 		{
-			MHICTRLBASE_HIGHER, U32_MAX, 0,
+			MHICTRLBASE_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->iova_start),
 		},
 		{
-			MHICTRLBASE_LOWER, U32_MAX, 0,
+			MHICTRLBASE_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->iova_start),
 		},
 		{
-			MHIDATABASE_HIGHER, U32_MAX, 0,
+			MHIDATABASE_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->iova_start),
 		},
 		{
-			MHIDATABASE_LOWER, U32_MAX, 0,
+			MHIDATABASE_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->iova_start),
 		},
 		{
-			MHICTRLLIMIT_HIGHER, U32_MAX, 0,
+			MHICTRLLIMIT_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->iova_stop),
 		},
 		{
-			MHICTRLLIMIT_LOWER, U32_MAX, 0,
+			MHICTRLLIMIT_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->iova_stop),
 		},
 		{
-			MHIDATALIMIT_HIGHER, U32_MAX, 0,
+			MHIDATALIMIT_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->iova_stop),
 		},
 		{
-			MHIDATALIMIT_LOWER, U32_MAX, 0,
+			MHIDATALIMIT_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->iova_stop),
 		},
 		{ 0, 0, 0 }
@@ -510,8 +510,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 	dev_dbg(dev, "Initializing MHI registers\n");
 
 	/* Read channel db offset */
-	ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK,
-				 CHDBOFF_CHDBOFF_SHIFT, &val);
+	ret = mhi_read_reg(mhi_cntrl, base, CHDBOFF, &val);
 	if (ret) {
 		dev_err(dev, "Unable to read CHDBOFF register\n");
 		return -EIO;
@@ -527,8 +526,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 		mhi_chan->tre_ring.db_addr = base + val;
 
 	/* Read event ring db offset */
-	ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK,
-				 ERDBOFF_ERDBOFF_SHIFT, &val);
+	ret = mhi_read_reg(mhi_cntrl, base, ERDBOFF, &val);
 	if (ret) {
 		dev_err(dev, "Unable to read ERDBOFF register\n");
 		return -EIO;
@@ -549,8 +547,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 	/* Write to MMIO registers */
 	for (i = 0; reg_info[i].offset; i++)
 		mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
-				    reg_info[i].mask, reg_info[i].shift,
-				    reg_info[i].val);
+				    reg_info[i].mask, reg_info[i].val);
 
 	return 0;
 }
@@ -583,7 +580,7 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
 
 	tmp = le32_to_cpu(chan_ctxt->chcfg);
 	tmp &= ~CHAN_CTX_CHSTATE_MASK;
-	tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+	tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_DISABLED);
 	chan_ctxt->chcfg = cpu_to_le32(tmp);
 
 	/* Update to all cores */
@@ -620,7 +617,7 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
 
 	tmp = le32_to_cpu(chan_ctxt->chcfg);
 	tmp &= ~CHAN_CTX_CHSTATE_MASK;
-	tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
+	tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_ENABLED);
 	chan_ctxt->chcfg = cpu_to_le32(tmp);
 
 	chan_ctxt->rbase = cpu_to_le64(tre_ring->iommu_base);
@@ -964,14 +961,10 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
 	if (ret)
 		goto err_destroy_wq;
 
-	mhi_cntrl->family_number = (soc_info & SOC_HW_VERSION_FAM_NUM_BMSK) >>
-					SOC_HW_VERSION_FAM_NUM_SHFT;
-	mhi_cntrl->device_number = (soc_info & SOC_HW_VERSION_DEV_NUM_BMSK) >>
-					SOC_HW_VERSION_DEV_NUM_SHFT;
-	mhi_cntrl->major_version = (soc_info & SOC_HW_VERSION_MAJOR_VER_BMSK) >>
-					SOC_HW_VERSION_MAJOR_VER_SHFT;
-	mhi_cntrl->minor_version = (soc_info & SOC_HW_VERSION_MINOR_VER_BMSK) >>
-					SOC_HW_VERSION_MINOR_VER_SHFT;
+	mhi_cntrl->family_number = FIELD_GET(SOC_HW_VERSION_FAM_NUM_BMSK, soc_info);
+	mhi_cntrl->device_number = FIELD_GET(SOC_HW_VERSION_DEV_NUM_BMSK, soc_info);
+	mhi_cntrl->major_version = FIELD_GET(SOC_HW_VERSION_MAJOR_VER_BMSK, soc_info);
+	mhi_cntrl->minor_version = FIELD_GET(SOC_HW_VERSION_MINOR_VER_BMSK, soc_info);
 
 	mhi_cntrl->index = ida_alloc(&mhi_controller_ida, GFP_KERNEL);
 	if (mhi_cntrl->index < 0) {
diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index 37c39bf1c7a9..156bf65b6810 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -12,120 +12,65 @@
 extern struct bus_type mhi_bus_type;
 
 #define MHIREGLEN (0x0)
-#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
-#define MHIREGLEN_MHIREGLEN_SHIFT (0)
 
 #define MHIVER (0x8)
-#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
-#define MHIVER_MHIVER_SHIFT (0)
 
 #define MHICFG (0x10)
-#define MHICFG_NHWER_MASK (0xFF000000)
-#define MHICFG_NHWER_SHIFT (24)
-#define MHICFG_NER_MASK (0xFF0000)
-#define MHICFG_NER_SHIFT (16)
-#define MHICFG_NHWCH_MASK (0xFF00)
-#define MHICFG_NHWCH_SHIFT (8)
-#define MHICFG_NCH_MASK (0xFF)
-#define MHICFG_NCH_SHIFT (0)
+#define MHICFG_NHWER_MASK (GENMASK(31, 24))
+#define MHICFG_NER_MASK (GENMASK(23, 16))
+#define MHICFG_NHWCH_MASK (GENMASK(15, 8))
+#define MHICFG_NCH_MASK (GENMASK(7, 0))
 
 #define CHDBOFF (0x18)
-#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
-#define CHDBOFF_CHDBOFF_SHIFT (0)
 
 #define ERDBOFF (0x20)
-#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
-#define ERDBOFF_ERDBOFF_SHIFT (0)
 
 #define BHIOFF (0x28)
-#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
-#define BHIOFF_BHIOFF_SHIFT (0)
 
 #define BHIEOFF (0x2C)
-#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
-#define BHIEOFF_BHIEOFF_SHIFT (0)
 
 #define DEBUGOFF (0x30)
-#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
-#define DEBUGOFF_DEBUGOFF_SHIFT (0)
 
 #define MHICTRL (0x38)
-#define MHICTRL_MHISTATE_MASK (0x0000FF00)
-#define MHICTRL_MHISTATE_SHIFT (8)
-#define MHICTRL_RESET_MASK (0x2)
-#define MHICTRL_RESET_SHIFT (1)
+#define MHICTRL_MHISTATE_MASK (GENMASK(15, 8))
+#define MHICTRL_RESET_MASK (BIT(1))
 
 #define MHISTATUS (0x48)
-#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
-#define MHISTATUS_MHISTATE_SHIFT (8)
-#define MHISTATUS_SYSERR_MASK (0x4)
-#define MHISTATUS_SYSERR_SHIFT (2)
-#define MHISTATUS_READY_MASK (0x1)
-#define MHISTATUS_READY_SHIFT (0)
+#define MHISTATUS_MHISTATE_MASK (GENMASK(15, 8))
+#define MHISTATUS_SYSERR_MASK (BIT(2))
+#define MHISTATUS_READY_MASK (BIT(0))
 
 #define CCABAP_LOWER (0x58)
-#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
-#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
 
 #define CCABAP_HIGHER (0x5C)
-#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
-#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
 
 #define ECABAP_LOWER (0x60)
-#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
-#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
 
 #define ECABAP_HIGHER (0x64)
-#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
-#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
 
 #define CRCBAP_LOWER (0x68)
-#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
-#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
 
 #define CRCBAP_HIGHER (0x6C)
-#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
-#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
 
 #define CRDB_LOWER (0x70)
-#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
-#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
 
 #define CRDB_HIGHER (0x74)
-#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
-#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
 
 #define MHICTRLBASE_LOWER (0x80)
-#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
-#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
 
 #define MHICTRLBASE_HIGHER (0x84)
-#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
-#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
 
 #define MHICTRLLIMIT_LOWER (0x88)
-#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
-#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
 
 #define MHICTRLLIMIT_HIGHER (0x8C)
-#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
-#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
 
 #define MHIDATABASE_LOWER (0x98)
-#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
-#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
 
 #define MHIDATABASE_HIGHER (0x9C)
-#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
-#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
 
 #define MHIDATALIMIT_LOWER (0xA0)
-#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
-#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
 
 #define MHIDATALIMIT_HIGHER (0xA4)
-#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
-#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
 
 /* Host request register */
 #define MHI_SOC_RESET_REQ_OFFSET (0xB0)
@@ -139,8 +84,7 @@ extern struct bus_type mhi_bus_type;
 #define BHI_IMGSIZE (0x10)
 #define BHI_RSVD1 (0x14)
 #define BHI_IMGTXDB (0x18)
-#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHI_TXDB_SEQNUM_SHFT (0)
+#define BHI_TXDB_SEQNUM_BMSK (GENMASK(29, 0))
 #define BHI_RSVD2 (0x1C)
 #define BHI_INTVEC (0x20)
 #define BHI_RSVD3 (0x24)
@@ -156,8 +100,7 @@ extern struct bus_type mhi_bus_type;
 #define BHI_MSMHWID(n) (0x4C + (0x4 * (n)))
 #define BHI_OEMPKHASH(n) (0x64 + (0x4 * (n)))
 #define BHI_RSVD5 (0xC4)
-#define BHI_STATUS_MASK (0xC0000000)
-#define BHI_STATUS_SHIFT (30)
+#define BHI_STATUS_MASK (GENMASK(31, 30))
 #define BHI_STATUS_ERROR (3)
 #define BHI_STATUS_SUCCESS (2)
 #define BHI_STATUS_RESET (0)
@@ -168,13 +111,10 @@ extern struct bus_type mhi_bus_type;
 #define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
 #define BHIE_TXVECSIZE_OFFS (0x0034)
 #define BHIE_TXVECDB_OFFS (0x003C)
-#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_TXVECDB_SEQNUM_SHFT (0)
+#define BHIE_TXVECDB_SEQNUM_BMSK (GENMASK(29, 0))
 #define BHIE_TXVECSTATUS_OFFS (0x0044)
-#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
-#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
-#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
+#define BHIE_TXVECSTATUS_SEQNUM_BMSK (GENMASK(29, 0))
+#define BHIE_TXVECSTATUS_STATUS_BMSK (GENMASK(31, 30))
 #define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
 #define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
 #define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
@@ -182,32 +122,23 @@ extern struct bus_type mhi_bus_type;
 #define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
 #define BHIE_RXVECSIZE_OFFS (0x0068)
 #define BHIE_RXVECDB_OFFS (0x0070)
-#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_RXVECDB_SEQNUM_SHFT (0)
+#define BHIE_RXVECDB_SEQNUM_BMSK (GENMASK(29, 0))
 #define BHIE_RXVECSTATUS_OFFS (0x0078)
-#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
-#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
-#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
+#define BHIE_RXVECSTATUS_SEQNUM_BMSK (GENMASK(29, 0))
+#define BHIE_RXVECSTATUS_STATUS_BMSK (GENMASK(31, 30))
 #define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
 #define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
 #define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
 
 #define SOC_HW_VERSION_OFFS (0x224)
-#define SOC_HW_VERSION_FAM_NUM_BMSK (0xF0000000)
-#define SOC_HW_VERSION_FAM_NUM_SHFT (28)
-#define SOC_HW_VERSION_DEV_NUM_BMSK (0x0FFF0000)
-#define SOC_HW_VERSION_DEV_NUM_SHFT (16)
-#define SOC_HW_VERSION_MAJOR_VER_BMSK (0x0000FF00)
-#define SOC_HW_VERSION_MAJOR_VER_SHFT (8)
-#define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
-#define SOC_HW_VERSION_MINOR_VER_SHFT (0)
+#define SOC_HW_VERSION_FAM_NUM_BMSK (GENMASK(31, 28))
+#define SOC_HW_VERSION_DEV_NUM_BMSK (GENMASK(27, 16))
+#define SOC_HW_VERSION_MAJOR_VER_BMSK (GENMASK(15, 8))
+#define SOC_HW_VERSION_MINOR_VER_BMSK (GENMASK(7, 0))
 
 #define EV_CTX_RESERVED_MASK GENMASK(7, 0)
 #define EV_CTX_INTMODC_MASK GENMASK(15, 8)
-#define EV_CTX_INTMODC_SHIFT 8
 #define EV_CTX_INTMODT_MASK GENMASK(31, 16)
-#define EV_CTX_INTMODT_SHIFT 16
 struct mhi_event_ctxt {
 	__le32 intmod;
 	__le32 ertype;
@@ -220,11 +151,8 @@ struct mhi_event_ctxt {
 };
 
 #define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
-#define CHAN_CTX_CHSTATE_SHIFT 0
 #define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
-#define CHAN_CTX_BRSTMODE_SHIFT 8
 #define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
-#define CHAN_CTX_POLLCFG_SHIFT 10
 #define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
 struct mhi_chan_ctxt {
 	__le32 chcfg;
@@ -659,14 +587,14 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
 			      void __iomem *base, u32 offset, u32 *out);
 int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
 				    void __iomem *base, u32 offset, u32 mask,
-				    u32 shift, u32 *out);
+				    u32 *out);
 int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
 				    void __iomem *base, u32 offset, u32 mask,
-				    u32 shift, u32 val, u32 delayus);
+				    u32 val, u32 delayus);
 void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
 		   u32 offset, u32 val);
 void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
-			 u32 offset, u32 mask, u32 shift, u32 val);
+			 u32 offset, u32 mask, u32 val);
 void mhi_ring_er_db(struct mhi_event *mhi_event);
 void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
 		  dma_addr_t db_val);
diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index 85f4f7c8d7c6..3e6e615466b7 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -24,7 +24,7 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
 
 int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
 				    void __iomem *base, u32 offset,
-				    u32 mask, u32 shift, u32 *out)
+				    u32 mask, u32 *out)
 {
 	u32 tmp;
 	int ret;
@@ -33,21 +33,20 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
 	if (ret)
 		return ret;
 
-	*out = (tmp & mask) >> shift;
+	*out = (tmp & mask) >> __ffs(mask);
 
 	return 0;
 }
 
 int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
 				    void __iomem *base, u32 offset,
-				    u32 mask, u32 shift, u32 val, u32 delayus)
+				    u32 mask, u32 val, u32 delayus)
 {
 	int ret;
 	u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
 
 	while (retry--) {
-		ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, shift,
-					 &out);
+		ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, &out);
 		if (ret)
 			return ret;
 
@@ -67,7 +66,7 @@ void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
 }
 
 void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
-			 u32 offset, u32 mask, u32 shift, u32 val)
+			 u32 offset, u32 mask, u32 val)
 {
 	int ret;
 	u32 tmp;
@@ -77,7 +76,7 @@ void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
 		return;
 
 	tmp &= ~mask;
-	tmp |= (val << shift);
+	tmp |= (val << __ffs(mask));
 	mhi_write_reg(mhi_cntrl, base, offset, tmp);
 }
 
@@ -159,8 +158,7 @@ enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl)
 {
 	u32 state;
 	int ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
-				     MHISTATUS_MHISTATE_MASK,
-				     MHISTATUS_MHISTATE_SHIFT, &state);
+				     MHISTATUS_MHISTATE_MASK, &state);
 	return ret ? MHI_STATE_MAX : state;
 }
 EXPORT_SYMBOL_GPL(mhi_get_mhi_state);
diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
index c35c5ddc7220..bb8a23e80e19 100644
--- a/drivers/bus/mhi/host/pm.c
+++ b/drivers/bus/mhi/host/pm.c
@@ -131,11 +131,10 @@ void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state)
 {
 	if (state == MHI_STATE_RESET) {
 		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
-				    MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 1);
+				    MHICTRL_RESET_MASK, 1);
 	} else {
 		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
-				    MHICTRL_MHISTATE_MASK,
-				    MHICTRL_MHISTATE_SHIFT, state);
+				    MHICTRL_MHISTATE_MASK, state);
 	}
 }
 
@@ -167,16 +166,14 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
 
 	/* Wait for RESET to be cleared and READY bit to be set by the device */
 	ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
-				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
-				 interval_us);
+				 MHICTRL_RESET_MASK, 0, interval_us);
 	if (ret) {
 		dev_err(dev, "Device failed to clear MHI Reset\n");
 		return ret;
 	}
 
 	ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
-				 MHISTATUS_READY_MASK, MHISTATUS_READY_SHIFT, 1,
-				 interval_us);
+				 MHISTATUS_READY_MASK, 1, interval_us);
 	if (ret) {
 		dev_err(dev, "Device failed to enter MHI Ready\n");
 		return ret;
@@ -470,8 +467,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
 
 		/* Wait for the reset bit to be cleared by the device */
 		ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
-				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
-				 25000);
+				 MHICTRL_RESET_MASK, 0, 25000);
 		if (ret)
 			dev_err(dev, "Device failed to clear MHI Reset\n");
 
@@ -602,7 +598,6 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
 							    mhi_cntrl->regs,
 							    MHICTRL,
 							    MHICTRL_RESET_MASK,
-							    MHICTRL_RESET_SHIFT,
 							    &in_reset) ||
 					!in_reset, timeout);
 		if (!ret || in_reset) {
@@ -1093,8 +1088,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
 	if (state == MHI_STATE_SYS_ERR) {
 		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
 		ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
-				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
-				 interval_us);
+				 MHICTRL_RESET_MASK, 0, interval_us);
 		if (ret) {
 			dev_info(dev, "Failed to reset MHI due to syserr state\n");
 			goto error_exit;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread
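
A minimal userspace sketch of the pattern in the diff above, i.e. deriving
the shift from the mask instead of passing a separate *_SHIFT value
(illustrative only, not part of the patch: the field_get()/field_set()
helper names are made up and __builtin_ctzl() stands in for the kernel's
__ffs()):

#include <stdio.h>
#include <stdint.h>

/* Same 0x0000FF00 field as MHICTRL_MHISTATE_MASK / MHISTATUS_MHISTATE_MASK */
#define MHISTATE_MASK	0x0000FF00UL

static uint32_t field_get(uint32_t reg, unsigned long mask)
{
	/* shift recovered from the mask, as mhi_read_reg_field() now does */
	return (reg & mask) >> __builtin_ctzl(mask);
}

static uint32_t field_set(uint32_t reg, unsigned long mask, uint32_t val)
{
	/* read-modify-write, mirroring mhi_write_reg_field(); callers are
	 * trusted to pass in-range values, as in the driver */
	reg &= ~mask;
	reg |= val << __builtin_ctzl(mask);
	return reg;
}

int main(void)
{
	uint32_t mhictrl = 0;

	mhictrl = field_set(mhictrl, MHISTATE_MASK, 0x2);
	printf("MHICTRL = 0x%08x, state field = %u\n",
	       mhictrl, field_get(mhictrl, MHISTATE_MASK));
	return 0;
}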

* [PATCH v4 05/27] bus: mhi: Use bitfield operations for handling DWORDs of ring elements
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (3 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 04/27] bus: mhi: Use bitfield operations for register read and write Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 14:00   ` David Laight
  2022-02-28 12:43 ` [PATCH v4 06/27] bus: mhi: Cleanup the register definitions used in headers Manivannan Sadhasivam
                   ` (23 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Instead of open-coding the shifts and masks in the DWORD definitions, let's
use the bitfield operations (FIELD_PREP()/FIELD_GET()) to make it clearer
how the DWORDs are structured.

Suggested-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/host/internal.h | 58 +++++++++++++++++++--------------
 1 file changed, 33 insertions(+), 25 deletions(-)

diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index 156bf65b6810..1d1790e83a93 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -7,6 +7,7 @@
 #ifndef _MHI_INT_H
 #define _MHI_INT_H
 
+#include <linux/bitfield.h>
 #include <linux/mhi.h>
 
 extern struct bus_type mhi_bus_type;
@@ -205,58 +206,65 @@ enum mhi_cmd_type {
 /* No operation command */
 #define MHI_TRE_CMD_NOOP_PTR (0)
 #define MHI_TRE_CMD_NOOP_DWORD0 (0)
-#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
+#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_CMD_NOP)))
 
 /* Channel reset command */
 #define MHI_TRE_CMD_RESET_PTR (0)
 #define MHI_TRE_CMD_RESET_DWORD0 (0)
-#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
-					(MHI_CMD_RESET_CHAN << 16)))
+#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
+					FIELD_PREP(GENMASK(23, 16), MHI_CMD_RESET_CHAN))
 
 /* Channel stop command */
 #define MHI_TRE_CMD_STOP_PTR (0)
 #define MHI_TRE_CMD_STOP_DWORD0 (0)
-#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
-				       (MHI_CMD_STOP_CHAN << 16)))
+#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
+					FIELD_PREP(GENMASK(23, 16), MHI_CMD_STOP_CHAN))
 
 /* Channel start command */
 #define MHI_TRE_CMD_START_PTR (0)
 #define MHI_TRE_CMD_START_DWORD0 (0)
-#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
-					(MHI_CMD_START_CHAN << 16)))
+#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
+					FIELD_PREP(GENMASK(23, 16), MHI_CMD_START_CHAN))
 
 #define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
-#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
-#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
+#define MHI_TRE_GET_CMD_CHID(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
+#define MHI_TRE_GET_CMD_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
 
 /* Event descriptor macros */
 #define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
-#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
-#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
+#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), code) | \
+						FIELD_PREP(GENMASK(15, 0), len)))
+#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
+						FIELD_PREP(GENMASK(23, 16), type)))
 #define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
-#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
-#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_CODE(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
+#define MHI_TRE_GET_EV_LEN(tre) (FIELD_GET(GENMASK(15, 0), (MHI_TRE_GET_DWORD(tre, 0))))
+#define MHI_TRE_GET_EV_CHID(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
+#define MHI_TRE_GET_EV_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
+#define MHI_TRE_GET_EV_STATE(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
+#define MHI_TRE_GET_EV_EXECENV(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
 #define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
 #define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
 #define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
-#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
+#define MHI_TRE_GET_EV_VEID(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 0))))
+#define MHI_TRE_GET_EV_LINKSPEED(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
+#define MHI_TRE_GET_EV_LINKWIDTH(tre) (FIELD_GET(GENMASK(7, 0), (MHI_TRE_GET_DWORD(tre, 0))))
 
 /* Transfer descriptor macros */
 #define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
-#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
-#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
-	| (ieot << 9) | (ieob << 8) | chain))
+#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len)))
+#define MHI_TRE_TYPE_TRANSFER 2
+#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
+							MHI_TRE_TYPE_TRANSFER) | \
+							FIELD_PREP(BIT(10), bei) | \
+							FIELD_PREP(BIT(9), ieot) | \
+							FIELD_PREP(BIT(8), ieob) | \
+							FIELD_PREP(BIT(0), chain)))
 
 /* RSC transfer descriptor macros */
-#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
+#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr))
 #define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
-#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
+#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_PKT_TYPE_COALESCING)
 
 enum mhi_pkt_type {
 	MHI_PKT_TYPE_INVALID = 0x0,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread
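
The FIELD_PREP()/FIELD_GET() usage above can be illustrated with a small
standalone sketch (not part of the patch: GENMASK/FIELD_PREP/FIELD_GET
below are simplified userspace stand-ins for <linux/bitfield.h>, which
additionally performs compile-time checks, and channel 20 is just an
arbitrary example value):

#include <stdio.h>
#include <stdint.h>

#define GENMASK(h, l)		((~0ULL >> (63 - (h))) & (~0ULL << (l)))
#define FIELD_PREP(mask, val)	(((uint64_t)(val) << __builtin_ctzll(mask)) & (mask))
#define FIELD_GET(mask, reg)	(((uint64_t)(reg) & (mask)) >> __builtin_ctzll(mask))

/* Command ring DWORD1 layout: chid in bits 31:24, command type in 23:16 */
#define CMD_CHID_MASK		GENMASK(31, 24)
#define CMD_TYPE_MASK		GENMASK(23, 16)
#define MHI_CMD_RESET_CHAN	16	/* value from enum mhi_cmd_type */

int main(void)
{
	uint32_t dword1 = FIELD_PREP(CMD_CHID_MASK, 20) |
			  FIELD_PREP(CMD_TYPE_MASK, MHI_CMD_RESET_CHAN);

	printf("dword1 = 0x%08x, chid = %llu, type = %llu\n", dword1,
	       (unsigned long long)FIELD_GET(CMD_CHID_MASK, dword1),
	       (unsigned long long)FIELD_GET(CMD_TYPE_MASK, dword1));
	return 0;
}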

* [PATCH v4 06/27] bus: mhi: Cleanup the register definitions used in headers
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (4 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 05/27] bus: mhi: Use bitfield operations for handling DWORDs of ring elements Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 07/27] bus: mhi: host: Rename "struct mhi_tre" to "struct mhi_ring_element" Manivannan Sadhasivam
                   ` (22 subsequent siblings)
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam, Hemant Kumar

Cleanup includes:

1. Using the GENMASK macro for masks
2. Removing brackets for single values
3. Using lowercase for hex values
4. Using two digits for hex values where applicable
5. Aligning the defines in the same column

Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/host/internal.h | 413 +++++++++++++++-----------------
 1 file changed, 199 insertions(+), 214 deletions(-)

diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index 1d1790e83a93..1c7a48be033f 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -12,134 +12,116 @@
 
 extern struct bus_type mhi_bus_type;
 
-#define MHIREGLEN (0x0)
-
-#define MHIVER (0x8)
-
-#define MHICFG (0x10)
-#define MHICFG_NHWER_MASK (GENMASK(31, 24))
-#define MHICFG_NER_MASK (GENMASK(23, 16))
-#define MHICFG_NHWCH_MASK (GENMASK(15, 8))
-#define MHICFG_NCH_MASK (GENMASK(7, 0))
-
-#define CHDBOFF (0x18)
-
-#define ERDBOFF (0x20)
-
-#define BHIOFF (0x28)
-
-#define BHIEOFF (0x2C)
-
-#define DEBUGOFF (0x30)
-
-#define MHICTRL (0x38)
-#define MHICTRL_MHISTATE_MASK (GENMASK(15, 8))
-#define MHICTRL_RESET_MASK (BIT(1))
-
-#define MHISTATUS (0x48)
-#define MHISTATUS_MHISTATE_MASK (GENMASK(15, 8))
-#define MHISTATUS_SYSERR_MASK (BIT(2))
-#define MHISTATUS_READY_MASK (BIT(0))
-
-#define CCABAP_LOWER (0x58)
-
-#define CCABAP_HIGHER (0x5C)
-
-#define ECABAP_LOWER (0x60)
-
-#define ECABAP_HIGHER (0x64)
-
-#define CRCBAP_LOWER (0x68)
-
-#define CRCBAP_HIGHER (0x6C)
-
-#define CRDB_LOWER (0x70)
-
-#define CRDB_HIGHER (0x74)
-
-#define MHICTRLBASE_LOWER (0x80)
-
-#define MHICTRLBASE_HIGHER (0x84)
-
-#define MHICTRLLIMIT_LOWER (0x88)
-
-#define MHICTRLLIMIT_HIGHER (0x8C)
-
-#define MHIDATABASE_LOWER (0x98)
-
-#define MHIDATABASE_HIGHER (0x9C)
-
-#define MHIDATALIMIT_LOWER (0xA0)
-
-#define MHIDATALIMIT_HIGHER (0xA4)
+/* MHI registers */
+#define MHIREGLEN					0x00
+#define MHIVER						0x08
+#define MHICFG						0x10
+#define CHDBOFF						0x18
+#define ERDBOFF						0x20
+#define BHIOFF						0x28
+#define BHIEOFF						0x2c
+#define DEBUGOFF					0x30
+#define MHICTRL						0x38
+#define MHISTATUS					0x48
+#define CCABAP_LOWER					0x58
+#define CCABAP_HIGHER					0x5c
+#define ECABAP_LOWER					0x60
+#define ECABAP_HIGHER					0x64
+#define CRCBAP_LOWER					0x68
+#define CRCBAP_HIGHER					0x6c
+#define CRDB_LOWER					0x70
+#define CRDB_HIGHER					0x74
+#define MHICTRLBASE_LOWER				0x80
+#define MHICTRLBASE_HIGHER				0x84
+#define MHICTRLLIMIT_LOWER				0x88
+#define MHICTRLLIMIT_HIGHER				0x8c
+#define MHIDATABASE_LOWER				0x98
+#define MHIDATABASE_HIGHER				0x9c
+#define MHIDATALIMIT_LOWER				0xa0
+#define MHIDATALIMIT_HIGHER				0xa4
 
 /* Host request register */
-#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
-#define MHI_SOC_RESET_REQ BIT(0)
-
-/* MHI BHI offfsets */
-#define BHI_BHIVERSION_MINOR (0x00)
-#define BHI_BHIVERSION_MAJOR (0x04)
-#define BHI_IMGADDR_LOW (0x08)
-#define BHI_IMGADDR_HIGH (0x0C)
-#define BHI_IMGSIZE (0x10)
-#define BHI_RSVD1 (0x14)
-#define BHI_IMGTXDB (0x18)
-#define BHI_TXDB_SEQNUM_BMSK (GENMASK(29, 0))
-#define BHI_RSVD2 (0x1C)
-#define BHI_INTVEC (0x20)
-#define BHI_RSVD3 (0x24)
-#define BHI_EXECENV (0x28)
-#define BHI_STATUS (0x2C)
-#define BHI_ERRCODE (0x30)
-#define BHI_ERRDBG1 (0x34)
-#define BHI_ERRDBG2 (0x38)
-#define BHI_ERRDBG3 (0x3C)
-#define BHI_SERIALNU (0x40)
-#define BHI_SBLANTIROLLVER (0x44)
-#define BHI_NUMSEG (0x48)
-#define BHI_MSMHWID(n) (0x4C + (0x4 * (n)))
-#define BHI_OEMPKHASH(n) (0x64 + (0x4 * (n)))
-#define BHI_RSVD5 (0xC4)
-#define BHI_STATUS_MASK (GENMASK(31, 30))
-#define BHI_STATUS_ERROR (3)
-#define BHI_STATUS_SUCCESS (2)
-#define BHI_STATUS_RESET (0)
-
-/* MHI BHIE offsets */
-#define BHIE_MSMSOCID_OFFS (0x0000)
-#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
-#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
-#define BHIE_TXVECSIZE_OFFS (0x0034)
-#define BHIE_TXVECDB_OFFS (0x003C)
-#define BHIE_TXVECDB_SEQNUM_BMSK (GENMASK(29, 0))
-#define BHIE_TXVECSTATUS_OFFS (0x0044)
-#define BHIE_TXVECSTATUS_SEQNUM_BMSK (GENMASK(29, 0))
-#define BHIE_TXVECSTATUS_STATUS_BMSK (GENMASK(31, 30))
-#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
-#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
-#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
-#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
-#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
-#define BHIE_RXVECSIZE_OFFS (0x0068)
-#define BHIE_RXVECDB_OFFS (0x0070)
-#define BHIE_RXVECDB_SEQNUM_BMSK (GENMASK(29, 0))
-#define BHIE_RXVECSTATUS_OFFS (0x0078)
-#define BHIE_RXVECSTATUS_SEQNUM_BMSK (GENMASK(29, 0))
-#define BHIE_RXVECSTATUS_STATUS_BMSK (GENMASK(31, 30))
-#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
-#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
-#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
-
-#define SOC_HW_VERSION_OFFS (0x224)
-#define SOC_HW_VERSION_FAM_NUM_BMSK (GENMASK(31, 28))
-#define SOC_HW_VERSION_DEV_NUM_BMSK (GENMASK(27, 16))
-#define SOC_HW_VERSION_MAJOR_VER_BMSK (GENMASK(15, 8))
-#define SOC_HW_VERSION_MINOR_VER_BMSK (GENMASK(7, 0))
-
-#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
-#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
-#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
+#define MHI_SOC_RESET_REQ_OFFSET			0xb0
+#define MHI_SOC_RESET_REQ				BIT(0)
+
+/* MHI register bits */
+#define MHICFG_NHWER_MASK				GENMASK(31, 24)
+#define MHICFG_NER_MASK					GENMASK(23, 16)
+#define MHICFG_NHWCH_MASK				GENMASK(15, 8)
+#define MHICFG_NCH_MASK					GENMASK(7, 0)
+#define MHICTRL_MHISTATE_MASK				GENMASK(15, 8)
+#define MHICTRL_RESET_MASK				BIT(1)
+#define MHISTATUS_MHISTATE_MASK				GENMASK(15, 8)
+#define MHISTATUS_SYSERR_MASK				BIT(2)
+#define MHISTATUS_READY_MASK				BIT(0)
+
+/* MHI BHI registers */
+#define BHI_BHIVERSION_MINOR				0x00
+#define BHI_BHIVERSION_MAJOR				0x04
+#define BHI_IMGADDR_LOW					0x08
+#define BHI_IMGADDR_HIGH				0x0c
+#define BHI_IMGSIZE					0x10
+#define BHI_RSVD1					0x14
+#define BHI_IMGTXDB					0x18
+#define BHI_RSVD2					0x1c
+#define BHI_INTVEC					0x20
+#define BHI_RSVD3					0x24
+#define BHI_EXECENV					0x28
+#define BHI_STATUS					0x2c
+#define BHI_ERRCODE					0x30
+#define BHI_ERRDBG1					0x34
+#define BHI_ERRDBG2					0x38
+#define BHI_ERRDBG3					0x3c
+#define BHI_SERIALNU					0x40
+#define BHI_SBLANTIROLLVER				0x44
+#define BHI_NUMSEG					0x48
+#define BHI_MSMHWID(n)					(0x4c + (0x4 * (n)))
+#define BHI_OEMPKHASH(n)				(0x64 + (0x4 * (n)))
+#define BHI_RSVD5					0xc4
+
+/* BHI register bits */
+#define BHI_TXDB_SEQNUM_BMSK				GENMASK(29, 0)
+#define BHI_STATUS_MASK					GENMASK(31, 30)
+#define BHI_STATUS_ERROR				0x03
+#define BHI_STATUS_SUCCESS				0x02
+#define BHI_STATUS_RESET				0x00
+
+/* MHI BHIE registers */
+#define BHIE_MSMSOCID_OFFS				0x00
+#define BHIE_TXVECADDR_LOW_OFFS				0x2c
+#define BHIE_TXVECADDR_HIGH_OFFS			0x30
+#define BHIE_TXVECSIZE_OFFS				0x34
+#define BHIE_TXVECDB_OFFS				0x3c
+#define BHIE_TXVECSTATUS_OFFS				0x44
+#define BHIE_RXVECADDR_LOW_OFFS				0x60
+#define BHIE_RXVECADDR_HIGH_OFFS			0x64
+#define BHIE_RXVECSIZE_OFFS				0x68
+#define BHIE_RXVECDB_OFFS				0x70
+#define BHIE_RXVECSTATUS_OFFS				0x78
+
+/* BHIE register bits */
+#define BHIE_TXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
+#define BHIE_TXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
+#define BHIE_TXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
+#define BHIE_TXVECSTATUS_STATUS_RESET			0x00
+#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL		0x02
+#define BHIE_TXVECSTATUS_STATUS_ERROR			0x03
+#define BHIE_RXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
+#define BHIE_RXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
+#define BHIE_RXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
+#define BHIE_RXVECSTATUS_STATUS_RESET			0x00
+#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL		0x02
+#define BHIE_RXVECSTATUS_STATUS_ERROR			0x03
+
+#define SOC_HW_VERSION_OFFS				0x224
+#define SOC_HW_VERSION_FAM_NUM_BMSK			GENMASK(31, 28)
+#define SOC_HW_VERSION_DEV_NUM_BMSK			GENMASK(27, 16)
+#define SOC_HW_VERSION_MAJOR_VER_BMSK			GENMASK(15, 8)
+#define SOC_HW_VERSION_MINOR_VER_BMSK			GENMASK(7, 0)
+
+#define EV_CTX_RESERVED_MASK				GENMASK(7, 0)
+#define EV_CTX_INTMODC_MASK				GENMASK(15, 8)
+#define EV_CTX_INTMODT_MASK				GENMASK(31, 16)
 struct mhi_event_ctxt {
 	__le32 intmod;
 	__le32 ertype;
@@ -151,10 +133,10 @@ struct mhi_event_ctxt {
 	__le64 wp __packed __aligned(4);
 };
 
-#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
-#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
-#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
-#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
+#define CHAN_CTX_CHSTATE_MASK				GENMASK(7, 0)
+#define CHAN_CTX_BRSTMODE_MASK				GENMASK(9, 8)
+#define CHAN_CTX_POLLCFG_MASK				GENMASK(15, 10)
+#define CHAN_CTX_RESERVED_MASK				GENMASK(31, 16)
 struct mhi_chan_ctxt {
 	__le32 chcfg;
 	__le32 chtype;
@@ -204,67 +186,71 @@ enum mhi_cmd_type {
 };
 
 /* No operation command */
-#define MHI_TRE_CMD_NOOP_PTR (0)
-#define MHI_TRE_CMD_NOOP_DWORD0 (0)
-#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_CMD_NOP)))
+#define MHI_TRE_CMD_NOOP_PTR		0
+#define MHI_TRE_CMD_NOOP_DWORD0		0
+#define MHI_TRE_CMD_NOOP_DWORD1		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_CMD_NOP))
 
 /* Channel reset command */
-#define MHI_TRE_CMD_RESET_PTR (0)
-#define MHI_TRE_CMD_RESET_DWORD0 (0)
-#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
-					FIELD_PREP(GENMASK(23, 16), MHI_CMD_RESET_CHAN))
+#define MHI_TRE_CMD_RESET_PTR		0
+#define MHI_TRE_CMD_RESET_DWORD0	0
+#define MHI_TRE_CMD_RESET_DWORD1(chid)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
+						    FIELD_PREP(GENMASK(23, 16),         \
+							       MHI_CMD_RESET_CHAN))
 
 /* Channel stop command */
-#define MHI_TRE_CMD_STOP_PTR (0)
-#define MHI_TRE_CMD_STOP_DWORD0 (0)
-#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
-					FIELD_PREP(GENMASK(23, 16), MHI_CMD_STOP_CHAN))
+#define MHI_TRE_CMD_STOP_PTR		0
+#define MHI_TRE_CMD_STOP_DWORD0		0
+#define MHI_TRE_CMD_STOP_DWORD1(chid)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
+						    FIELD_PREP(GENMASK(23, 16),         \
+							       MHI_CMD_STOP_CHAN))
 
 /* Channel start command */
-#define MHI_TRE_CMD_START_PTR (0)
-#define MHI_TRE_CMD_START_DWORD0 (0)
-#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
-					FIELD_PREP(GENMASK(23, 16), MHI_CMD_START_CHAN))
+#define MHI_TRE_CMD_START_PTR		0
+#define MHI_TRE_CMD_START_DWORD0	0
+#define MHI_TRE_CMD_START_DWORD1(chid)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
+						    FIELD_PREP(GENMASK(23, 16),         \
+							       MHI_CMD_START_CHAN))
 
-#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
-#define MHI_TRE_GET_CMD_CHID(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
-#define MHI_TRE_GET_CMD_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
+#define MHI_TRE_GET_DWORD(tre, word)	le32_to_cpu((tre)->dword[(word)])
+#define MHI_TRE_GET_CMD_CHID(tre)	FIELD_GET(GENMASK(31, 24), MHI_TRE_GET_DWORD(tre, 1))
+#define MHI_TRE_GET_CMD_TYPE(tre)	FIELD_GET(GENMASK(23, 16), MHI_TRE_GET_DWORD(tre, 1))
 
 /* Event descriptor macros */
-#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
-#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), code) | \
-						FIELD_PREP(GENMASK(15, 0), len)))
-#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
-						FIELD_PREP(GENMASK(23, 16), type)))
-#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
-#define MHI_TRE_GET_EV_CODE(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
-#define MHI_TRE_GET_EV_LEN(tre) (FIELD_GET(GENMASK(15, 0), (MHI_TRE_GET_DWORD(tre, 0))))
-#define MHI_TRE_GET_EV_CHID(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
-#define MHI_TRE_GET_EV_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
-#define MHI_TRE_GET_EV_STATE(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
-#define MHI_TRE_GET_EV_EXECENV(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
-#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
-#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
-#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
-#define MHI_TRE_GET_EV_VEID(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 0))))
-#define MHI_TRE_GET_EV_LINKSPEED(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
-#define MHI_TRE_GET_EV_LINKWIDTH(tre) (FIELD_GET(GENMASK(7, 0), (MHI_TRE_GET_DWORD(tre, 0))))
+#define MHI_TRE_EV_PTR(ptr)		cpu_to_le64(ptr)
+#define MHI_TRE_EV_DWORD0(code, len)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), code | \
+						    FIELD_PREP(GENMASK(15, 0), len)))
+#define MHI_TRE_EV_DWORD1(chid, type)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid | \
+						    FIELD_PREP(GENMASK(23, 16), type)))
+#define MHI_TRE_GET_EV_PTR(tre)		le64_to_cpu((tre)->ptr)
+#define MHI_TRE_GET_EV_CODE(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0)))
+#define MHI_TRE_GET_EV_LEN(tre)		FIELD_GET(GENMASK(15, 0), (MHI_TRE_GET_DWORD(tre, 0)))
+#define MHI_TRE_GET_EV_CHID(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1)))
+#define MHI_TRE_GET_EV_TYPE(tre)	FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1)))
+#define MHI_TRE_GET_EV_STATE(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0)))
+#define MHI_TRE_GET_EV_EXECENV(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0)))
+#define MHI_TRE_GET_EV_SEQ(tre)		MHI_TRE_GET_DWORD(tre, 0)
+#define MHI_TRE_GET_EV_TIME(tre)	MHI_TRE_GET_EV_PTR(tre)
+#define MHI_TRE_GET_EV_COOKIE(tre)	lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
+#define MHI_TRE_GET_EV_VEID(tre)	FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 0)))
+#define MHI_TRE_GET_EV_LINKSPEED(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1)))
+#define MHI_TRE_GET_EV_LINKWIDTH(tre)	FIELD_GET(GENMASK(7, 0), (MHI_TRE_GET_DWORD(tre, 0)))
 
 /* Transfer descriptor macros */
-#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
-#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len)))
-#define MHI_TRE_TYPE_TRANSFER 2
-#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
-							MHI_TRE_TYPE_TRANSFER) | \
-							FIELD_PREP(BIT(10), bei) | \
-							FIELD_PREP(BIT(9), ieot) | \
-							FIELD_PREP(BIT(8), ieob) | \
-							FIELD_PREP(BIT(0), chain)))
+#define MHI_TRE_DATA_PTR(ptr)		cpu_to_le64(ptr)
+#define MHI_TRE_DATA_DWORD0(len)	cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len))
+#define MHI_TRE_TYPE_TRANSFER		2
+#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
+								MHI_TRE_TYPE_TRANSFER) |    \
+								FIELD_PREP(BIT(10), bei) |  \
+								FIELD_PREP(BIT(9), ieot) |  \
+								FIELD_PREP(BIT(8), ieob) |  \
+								FIELD_PREP(BIT(0), chain))
 
 /* RSC transfer descriptor macros */
-#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr))
-#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
-#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_PKT_TYPE_COALESCING)
+#define MHI_RSCTRE_DATA_PTR(ptr, len)	cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr)
+#define MHI_RSCTRE_DATA_DWORD0(cookie)	cpu_to_le32(cookie)
+#define MHI_RSCTRE_DATA_DWORD1		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
+							       MHI_PKT_TYPE_COALESCING))
 
 enum mhi_pkt_type {
 	MHI_PKT_TYPE_INVALID = 0x0,
@@ -369,44 +355,43 @@ enum mhi_pm_state {
 	MHI_PM_STATE_MAX
 };
 
-#define MHI_PM_DISABLE			BIT(0)
-#define MHI_PM_POR			BIT(1)
-#define MHI_PM_M0			BIT(2)
-#define MHI_PM_M2			BIT(3)
-#define MHI_PM_M3_ENTER			BIT(4)
-#define MHI_PM_M3			BIT(5)
-#define MHI_PM_M3_EXIT			BIT(6)
+#define MHI_PM_DISABLE					BIT(0)
+#define MHI_PM_POR					BIT(1)
+#define MHI_PM_M0					BIT(2)
+#define MHI_PM_M2					BIT(3)
+#define MHI_PM_M3_ENTER					BIT(4)
+#define MHI_PM_M3					BIT(5)
+#define MHI_PM_M3_EXIT					BIT(6)
 /* firmware download failure state */
-#define MHI_PM_FW_DL_ERR		BIT(7)
-#define MHI_PM_SYS_ERR_DETECT		BIT(8)
-#define MHI_PM_SYS_ERR_PROCESS		BIT(9)
-#define MHI_PM_SHUTDOWN_PROCESS		BIT(10)
+#define MHI_PM_FW_DL_ERR				BIT(7)
+#define MHI_PM_SYS_ERR_DETECT				BIT(8)
+#define MHI_PM_SYS_ERR_PROCESS				BIT(9)
+#define MHI_PM_SHUTDOWN_PROCESS				BIT(10)
 /* link not accessible */
-#define MHI_PM_LD_ERR_FATAL_DETECT	BIT(11)
-
-#define MHI_REG_ACCESS_VALID(pm_state) ((pm_state & (MHI_PM_POR | MHI_PM_M0 | \
-		MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_M3_EXIT | \
-		MHI_PM_SYS_ERR_DETECT | MHI_PM_SYS_ERR_PROCESS | \
-		MHI_PM_SHUTDOWN_PROCESS | MHI_PM_FW_DL_ERR)))
-#define MHI_PM_IN_ERROR_STATE(pm_state) (pm_state >= MHI_PM_FW_DL_ERR)
-#define MHI_PM_IN_FATAL_STATE(pm_state) (pm_state == MHI_PM_LD_ERR_FATAL_DETECT)
-#define MHI_DB_ACCESS_VALID(mhi_cntrl) (mhi_cntrl->pm_state & \
-					mhi_cntrl->db_access)
-#define MHI_WAKE_DB_CLEAR_VALID(pm_state) (pm_state & (MHI_PM_M0 | \
-						MHI_PM_M2 | MHI_PM_M3_EXIT))
-#define MHI_WAKE_DB_SET_VALID(pm_state) (pm_state & MHI_PM_M2)
-#define MHI_WAKE_DB_FORCE_SET_VALID(pm_state) MHI_WAKE_DB_CLEAR_VALID(pm_state)
-#define MHI_EVENT_ACCESS_INVALID(pm_state) (pm_state == MHI_PM_DISABLE || \
-					    MHI_PM_IN_ERROR_STATE(pm_state))
-#define MHI_PM_IN_SUSPEND_STATE(pm_state) (pm_state & \
-					   (MHI_PM_M3_ENTER | MHI_PM_M3))
-
-#define NR_OF_CMD_RINGS			1
-#define CMD_EL_PER_RING			128
-#define PRIMARY_CMD_RING		0
-#define MHI_DEV_WAKE_DB			127
-#define MHI_MAX_MTU			0xffff
-#define MHI_RANDOM_U32_NONZERO(bmsk)	(prandom_u32_max(bmsk) + 1)
+#define MHI_PM_LD_ERR_FATAL_DETECT			BIT(11)
+
+#define MHI_REG_ACCESS_VALID(pm_state)			((pm_state & (MHI_PM_POR | MHI_PM_M0 | \
+						MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_M3_EXIT | \
+						MHI_PM_SYS_ERR_DETECT | MHI_PM_SYS_ERR_PROCESS | \
+						MHI_PM_SHUTDOWN_PROCESS | MHI_PM_FW_DL_ERR)))
+#define MHI_PM_IN_ERROR_STATE(pm_state)			(pm_state >= MHI_PM_FW_DL_ERR)
+#define MHI_PM_IN_FATAL_STATE(pm_state)			(pm_state == MHI_PM_LD_ERR_FATAL_DETECT)
+#define MHI_DB_ACCESS_VALID(mhi_cntrl)			(mhi_cntrl->pm_state & mhi_cntrl->db_access)
+#define MHI_WAKE_DB_CLEAR_VALID(pm_state)		(pm_state & (MHI_PM_M0 | \
+							MHI_PM_M2 | MHI_PM_M3_EXIT))
+#define MHI_WAKE_DB_SET_VALID(pm_state)			(pm_state & MHI_PM_M2)
+#define MHI_WAKE_DB_FORCE_SET_VALID(pm_state)		MHI_WAKE_DB_CLEAR_VALID(pm_state)
+#define MHI_EVENT_ACCESS_INVALID(pm_state)		(pm_state == MHI_PM_DISABLE || \
+							MHI_PM_IN_ERROR_STATE(pm_state))
+#define MHI_PM_IN_SUSPEND_STATE(pm_state)		(pm_state & \
+							(MHI_PM_M3_ENTER | MHI_PM_M3))
+
+#define NR_OF_CMD_RINGS					1
+#define CMD_EL_PER_RING					128
+#define PRIMARY_CMD_RING				0
+#define MHI_DEV_WAKE_DB					127
+#define MHI_MAX_MTU					0xffff
+#define MHI_RANDOM_U32_NONZERO(bmsk)			(prandom_u32_max(bmsk) + 1)
 
 enum mhi_er_type {
 	MHI_ER_TYPE_INVALID = 0x0,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 07/27] bus: mhi: host: Rename "struct mhi_tre" to "struct mhi_ring_element"
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (5 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 06/27] bus: mhi: Cleanup the register definitions used in headers Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 15:52   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 08/27] bus: mhi: Move common MHI definitions out of host directory Manivannan Sadhasivam
                   ` (21 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Structure "struct mhi_tre" is representing a generic MHI ring element and
not specifically a Transfer Ring Element (TRE). Fix the naming.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/host/init.c     |  6 +++---
 drivers/bus/mhi/host/internal.h |  2 +-
 drivers/bus/mhi/host/main.c     | 20 ++++++++++----------
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
index ca068a017a42..016dcc35db80 100644
--- a/drivers/bus/mhi/host/init.c
+++ b/drivers/bus/mhi/host/init.c
@@ -339,7 +339,7 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 		er_ctxt->msivec = cpu_to_le32(mhi_event->irq);
 		mhi_event->db_cfg.db_mode = true;
 
-		ring->el_size = sizeof(struct mhi_tre);
+		ring->el_size = sizeof(struct mhi_ring_element);
 		ring->len = ring->el_size * ring->elements;
 		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
 		if (ret)
@@ -371,7 +371,7 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
 		struct mhi_ring *ring = &mhi_cmd->ring;
 
-		ring->el_size = sizeof(struct mhi_tre);
+		ring->el_size = sizeof(struct mhi_ring_element);
 		ring->elements = CMD_EL_PER_RING;
 		ring->len = ring->el_size * ring->elements;
 		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
@@ -598,7 +598,7 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
 
 	buf_ring = &mhi_chan->buf_ring;
 	tre_ring = &mhi_chan->tre_ring;
-	tre_ring->el_size = sizeof(struct mhi_tre);
+	tre_ring->el_size = sizeof(struct mhi_ring_element);
 	tre_ring->len = tre_ring->el_size * tre_ring->elements;
 	chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
 	ret = mhi_alloc_aligned_ring(mhi_cntrl, tre_ring, tre_ring->len);
diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index 1c7a48be033f..5860cd326db6 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -168,7 +168,7 @@ struct mhi_ctxt {
 	dma_addr_t cmd_ctxt_addr;
 };
 
-struct mhi_tre {
+struct mhi_ring_element {
 	__le64 ptr;
 	__le32 dword[2];
 };
diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index 3e6e615466b7..dabf85b92a84 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -554,7 +554,7 @@ static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
 }
 
 static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
-			    struct mhi_tre *event,
+			    struct mhi_ring_element *event,
 			    struct mhi_chan *mhi_chan)
 {
 	struct mhi_ring *buf_ring, *tre_ring;
@@ -590,7 +590,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
 	case MHI_EV_CC_EOT:
 	{
 		dma_addr_t ptr = MHI_TRE_GET_EV_PTR(event);
-		struct mhi_tre *local_rp, *ev_tre;
+		struct mhi_ring_element *local_rp, *ev_tre;
 		void *dev_rp;
 		struct mhi_buf_info *buf_info;
 		u16 xfer_len;
@@ -689,7 +689,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
 }
 
 static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
-			   struct mhi_tre *event,
+			   struct mhi_ring_element *event,
 			   struct mhi_chan *mhi_chan)
 {
 	struct mhi_ring *buf_ring, *tre_ring;
@@ -753,12 +753,12 @@ static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
 }
 
 static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
-				       struct mhi_tre *tre)
+				       struct mhi_ring_element *tre)
 {
 	dma_addr_t ptr = MHI_TRE_GET_EV_PTR(tre);
 	struct mhi_cmd *cmd_ring = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
 	struct mhi_ring *mhi_ring = &cmd_ring->ring;
-	struct mhi_tre *cmd_pkt;
+	struct mhi_ring_element *cmd_pkt;
 	struct mhi_chan *mhi_chan;
 	u32 chan;
 
@@ -791,7 +791,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 			     struct mhi_event *mhi_event,
 			     u32 event_quota)
 {
-	struct mhi_tre *dev_rp, *local_rp;
+	struct mhi_ring_element *dev_rp, *local_rp;
 	struct mhi_ring *ev_ring = &mhi_event->ring;
 	struct mhi_event_ctxt *er_ctxt =
 		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
@@ -961,7 +961,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
 				struct mhi_event *mhi_event,
 				u32 event_quota)
 {
-	struct mhi_tre *dev_rp, *local_rp;
+	struct mhi_ring_element *dev_rp, *local_rp;
 	struct mhi_ring *ev_ring = &mhi_event->ring;
 	struct mhi_event_ctxt *er_ctxt =
 		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
@@ -1185,7 +1185,7 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
 			struct mhi_buf_info *info, enum mhi_flags flags)
 {
 	struct mhi_ring *buf_ring, *tre_ring;
-	struct mhi_tre *mhi_tre;
+	struct mhi_ring_element *mhi_tre;
 	struct mhi_buf_info *buf_info;
 	int eot, eob, chain, bei;
 	int ret;
@@ -1256,7 +1256,7 @@ int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
 		 struct mhi_chan *mhi_chan,
 		 enum mhi_cmd_type cmd)
 {
-	struct mhi_tre *cmd_tre = NULL;
+	struct mhi_ring_element *cmd_tre = NULL;
 	struct mhi_cmd *mhi_cmd = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
 	struct mhi_ring *ring = &mhi_cmd->ring;
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
@@ -1518,7 +1518,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
 				  int chan)
 
 {
-	struct mhi_tre *dev_rp, *local_rp;
+	struct mhi_ring_element *dev_rp, *local_rp;
 	struct mhi_ring *ev_ring;
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
 	unsigned long flags;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread
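
As a quick layout check of the renamed structure (standalone sketch, not
part of the patch; plain fixed-width integers stand in for the kernel's
little-endian __le64/__le32 types):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Userspace mirror of struct mhi_ring_element */
struct mhi_ring_element {
	uint64_t ptr;
	uint32_t dword[2];
};

int main(void)
{
	/* Transfer, event and command rings are all arrays of this one
	 * 16-byte element, hence the generic name. */
	static_assert(sizeof(struct mhi_ring_element) == 16,
		      "MHI ring element must be 16 bytes");
	printf("MHI ring element size: %zu bytes\n",
	       sizeof(struct mhi_ring_element));
	return 0;
}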

* [PATCH v4 08/27] bus: mhi: Move common MHI definitions out of host directory
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (6 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 07/27] bus: mhi: host: Rename "struct mhi_tre" to "struct mhi_ring_element" Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 09/27] bus: mhi: Make mhi_state_str[] array static inline and move to common.h Manivannan Sadhasivam
                   ` (20 subsequent siblings)
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam, Hemant Kumar

Move the common MHI definitions from the host "internal.h" to "common.h" so
that the endpoint code can make use of them. This also avoids duplicating
the definitions in the endpoint stack.

Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/common.h        | 283 ++++++++++++++++++++++++++++++++
 drivers/bus/mhi/host/internal.h | 264 +----------------------------
 2 files changed, 284 insertions(+), 263 deletions(-)
 create mode 100644 drivers/bus/mhi/common.h

diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
new file mode 100644
index 000000000000..f2690bf11c99
--- /dev/null
+++ b/drivers/bus/mhi/common.h
@@ -0,0 +1,283 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2022, Linaro Ltd.
+ *
+ */
+
+#ifndef _MHI_COMMON_H
+#define _MHI_COMMON_H
+
+#include <linux/bitfield.h>
+#include <linux/mhi.h>
+
+/* MHI registers */
+#define MHIREGLEN			0x00
+#define MHIVER				0x08
+#define MHICFG				0x10
+#define CHDBOFF				0x18
+#define ERDBOFF				0x20
+#define BHIOFF				0x28
+#define BHIEOFF				0x2c
+#define DEBUGOFF			0x30
+#define MHICTRL				0x38
+#define MHISTATUS			0x48
+#define CCABAP_LOWER			0x58
+#define CCABAP_HIGHER			0x5c
+#define ECABAP_LOWER			0x60
+#define ECABAP_HIGHER			0x64
+#define CRCBAP_LOWER			0x68
+#define CRCBAP_HIGHER			0x6c
+#define CRDB_LOWER			0x70
+#define CRDB_HIGHER			0x74
+#define MHICTRLBASE_LOWER		0x80
+#define MHICTRLBASE_HIGHER		0x84
+#define MHICTRLLIMIT_LOWER		0x88
+#define MHICTRLLIMIT_HIGHER		0x8c
+#define MHIDATABASE_LOWER		0x98
+#define MHIDATABASE_HIGHER		0x9c
+#define MHIDATALIMIT_LOWER		0xa0
+#define MHIDATALIMIT_HIGHER		0xa4
+
+/* MHI BHI registers */
+#define BHI_BHIVERSION_MINOR		0x00
+#define BHI_BHIVERSION_MAJOR		0x04
+#define BHI_IMGADDR_LOW			0x08
+#define BHI_IMGADDR_HIGH		0x0c
+#define BHI_IMGSIZE			0x10
+#define BHI_RSVD1			0x14
+#define BHI_IMGTXDB			0x18
+#define BHI_RSVD2			0x1c
+#define BHI_INTVEC			0x20
+#define BHI_RSVD3			0x24
+#define BHI_EXECENV			0x28
+#define BHI_STATUS			0x2c
+#define BHI_ERRCODE			0x30
+#define BHI_ERRDBG1			0x34
+#define BHI_ERRDBG2			0x38
+#define BHI_ERRDBG3			0x3c
+#define BHI_SERIALNU			0x40
+#define BHI_SBLANTIROLLVER		0x44
+#define BHI_NUMSEG			0x48
+#define BHI_MSMHWID(n)			(0x4c + (0x4 * (n)))
+#define BHI_OEMPKHASH(n)		(0x64 + (0x4 * (n)))
+#define BHI_RSVD5			0xc4
+
+/* BHI register bits */
+#define BHI_TXDB_SEQNUM_BMSK		GENMASK(29, 0)
+#define BHI_TXDB_SEQNUM_SHFT		0
+#define BHI_STATUS_MASK			GENMASK(31, 30)
+#define BHI_STATUS_ERROR		0x03
+#define BHI_STATUS_SUCCESS		0x02
+#define BHI_STATUS_RESET		0x00
+
+/* MHI BHIE registers */
+#define BHIE_MSMSOCID_OFFS		0x00
+#define BHIE_TXVECADDR_LOW_OFFS		0x2c
+#define BHIE_TXVECADDR_HIGH_OFFS	0x30
+#define BHIE_TXVECSIZE_OFFS		0x34
+#define BHIE_TXVECDB_OFFS		0x3c
+#define BHIE_TXVECSTATUS_OFFS		0x44
+#define BHIE_RXVECADDR_LOW_OFFS		0x60
+#define BHIE_RXVECADDR_HIGH_OFFS	0x64
+#define BHIE_RXVECSIZE_OFFS		0x68
+#define BHIE_RXVECDB_OFFS		0x70
+#define BHIE_RXVECSTATUS_OFFS		0x78
+
+/* BHIE register bits */
+#define BHIE_TXVECDB_SEQNUM_BMSK	GENMASK(29, 0)
+#define BHIE_TXVECDB_SEQNUM_SHFT	0
+#define BHIE_TXVECSTATUS_SEQNUM_BMSK	GENMASK(29, 0)
+#define BHIE_TXVECSTATUS_SEQNUM_SHFT	0
+#define BHIE_TXVECSTATUS_STATUS_BMSK	GENMASK(31, 30)
+#define BHIE_TXVECSTATUS_STATUS_SHFT	30
+#define BHIE_TXVECSTATUS_STATUS_RESET	0x00
+#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL	0x02
+#define BHIE_TXVECSTATUS_STATUS_ERROR	0x03
+#define BHIE_RXVECDB_SEQNUM_BMSK	GENMASK(29, 0)
+#define BHIE_RXVECDB_SEQNUM_SHFT	0
+#define BHIE_RXVECSTATUS_SEQNUM_BMSK	GENMASK(29, 0)
+#define BHIE_RXVECSTATUS_SEQNUM_SHFT	0
+#define BHIE_RXVECSTATUS_STATUS_BMSK	GENMASK(31, 30)
+#define BHIE_RXVECSTATUS_STATUS_SHFT	30
+#define BHIE_RXVECSTATUS_STATUS_RESET	0x00
+#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL	0x02
+#define BHIE_RXVECSTATUS_STATUS_ERROR	0x03
+
+/* MHI register bits */
+#define MHICFG_NHWER_MASK		GENMASK(31, 24)
+#define MHICFG_NER_MASK			GENMASK(23, 16)
+#define MHICFG_NHWCH_MASK		GENMASK(15, 8)
+#define MHICFG_NCH_MASK			GENMASK(7, 0)
+#define MHICTRL_MHISTATE_MASK		GENMASK(15, 8)
+#define MHICTRL_RESET_MASK		BIT(1)
+#define MHISTATUS_MHISTATE_MASK		GENMASK(15, 8)
+#define MHISTATUS_SYSERR_MASK		BIT(2)
+#define MHISTATUS_READY_MASK		BIT(0)
+
+/* Command Ring Element macros */
+/* No operation command */
+#define MHI_TRE_CMD_NOOP_PTR		0
+#define MHI_TRE_CMD_NOOP_DWORD0		0
+#define MHI_TRE_CMD_NOOP_DWORD1		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_CMD_NOP))
+
+/* Channel reset command */
+#define MHI_TRE_CMD_RESET_PTR		0
+#define MHI_TRE_CMD_RESET_DWORD0	0
+#define MHI_TRE_CMD_RESET_DWORD1(chid)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
+						    FIELD_PREP(GENMASK(23, 16),         \
+							       MHI_CMD_RESET_CHAN))
+
+/* Channel stop command */
+#define MHI_TRE_CMD_STOP_PTR		0
+#define MHI_TRE_CMD_STOP_DWORD0		0
+#define MHI_TRE_CMD_STOP_DWORD1(chid)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
+						    FIELD_PREP(GENMASK(23, 16),         \
+							       MHI_CMD_STOP_CHAN))
+
+/* Channel start command */
+#define MHI_TRE_CMD_START_PTR		0
+#define MHI_TRE_CMD_START_DWORD0	0
+#define MHI_TRE_CMD_START_DWORD1(chid)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
+						    FIELD_PREP(GENMASK(23, 16),         \
+							       MHI_CMD_START_CHAN))
+
+#define MHI_TRE_GET_DWORD(tre, word)	le32_to_cpu((tre)->dword[(word)])
+#define MHI_TRE_GET_CMD_CHID(tre)	FIELD_GET(GENMASK(31, 24), MHI_TRE_GET_DWORD(tre, 1))
+#define MHI_TRE_GET_CMD_TYPE(tre)	FIELD_GET(GENMASK(23, 16), MHI_TRE_GET_DWORD(tre, 1))
+
+/* Event descriptor macros */
+#define MHI_TRE_EV_PTR(ptr)		cpu_to_le64(ptr)
+#define MHI_TRE_EV_DWORD0(code, len)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), code) | \
+						    FIELD_PREP(GENMASK(15, 0), len))
+#define MHI_TRE_EV_DWORD1(chid, type)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
+						    FIELD_PREP(GENMASK(23, 16), type))
+#define MHI_TRE_GET_EV_PTR(tre)		le64_to_cpu((tre)->ptr)
+#define MHI_TRE_GET_EV_CODE(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0)))
+#define MHI_TRE_GET_EV_LEN(tre)		FIELD_GET(GENMASK(15, 0), (MHI_TRE_GET_DWORD(tre, 0)))
+#define MHI_TRE_GET_EV_CHID(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1)))
+#define MHI_TRE_GET_EV_TYPE(tre)	FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1)))
+#define MHI_TRE_GET_EV_STATE(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0)))
+#define MHI_TRE_GET_EV_EXECENV(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0)))
+#define MHI_TRE_GET_EV_SEQ(tre)		MHI_TRE_GET_DWORD(tre, 0)
+#define MHI_TRE_GET_EV_TIME(tre)	MHI_TRE_GET_EV_PTR(tre)
+#define MHI_TRE_GET_EV_COOKIE(tre)	lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
+#define MHI_TRE_GET_EV_VEID(tre)	FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 0)))
+#define MHI_TRE_GET_EV_LINKSPEED(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1)))
+#define MHI_TRE_GET_EV_LINKWIDTH(tre)	FIELD_GET(GENMASK(7, 0), (MHI_TRE_GET_DWORD(tre, 0)))
+
+/* Transfer descriptor macros */
+#define MHI_TRE_DATA_PTR(ptr)		cpu_to_le64(ptr)
+#define MHI_TRE_DATA_DWORD0(len)	cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len))
+#define MHI_TRE_TYPE_TRANSFER		2
+#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
+								MHI_TRE_TYPE_TRANSFER) |    \
+								FIELD_PREP(BIT(10), bei) |  \
+								FIELD_PREP(BIT(9), ieot) |  \
+								FIELD_PREP(BIT(8), ieob) |  \
+								FIELD_PREP(BIT(0), chain))
+
+/* RSC transfer descriptor macros */
+#define MHI_RSCTRE_DATA_PTR(ptr, len)	cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr)
+#define MHI_RSCTRE_DATA_DWORD0(cookie)	cpu_to_le32(cookie)
+#define MHI_RSCTRE_DATA_DWORD1		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
+							       MHI_PKT_TYPE_COALESCING))
+
+enum mhi_pkt_type {
+	MHI_PKT_TYPE_INVALID = 0x0,
+	MHI_PKT_TYPE_NOOP_CMD = 0x1,
+	MHI_PKT_TYPE_TRANSFER = 0x2,
+	MHI_PKT_TYPE_COALESCING = 0x8,
+	MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
+	MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
+	MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
+	MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
+	MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
+	MHI_PKT_TYPE_TX_EVENT = 0x22,
+	MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
+	MHI_PKT_TYPE_EE_EVENT = 0x40,
+	MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
+	MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
+	MHI_PKT_TYPE_STALE_EVENT, /* internal event */
+};
+
+/* MHI transfer completion events */
+enum mhi_ev_ccs {
+	MHI_EV_CC_INVALID = 0x0,
+	MHI_EV_CC_SUCCESS = 0x1,
+	MHI_EV_CC_EOT = 0x2, /* End of transfer event */
+	MHI_EV_CC_OVERFLOW = 0x3,
+	MHI_EV_CC_EOB = 0x4, /* End of block event */
+	MHI_EV_CC_OOB = 0x5, /* Out of block event */
+	MHI_EV_CC_DB_MODE = 0x6,
+	MHI_EV_CC_UNDEFINED_ERR = 0x10,
+	MHI_EV_CC_BAD_TRE = 0x11,
+};
+
+/* Channel state */
+enum mhi_ch_state {
+	MHI_CH_STATE_DISABLED,
+	MHI_CH_STATE_ENABLED,
+	MHI_CH_STATE_RUNNING,
+	MHI_CH_STATE_SUSPENDED,
+	MHI_CH_STATE_STOP,
+	MHI_CH_STATE_ERROR,
+};
+
+enum mhi_cmd_type {
+	MHI_CMD_NOP = 1,
+	MHI_CMD_RESET_CHAN = 16,
+	MHI_CMD_STOP_CHAN = 17,
+	MHI_CMD_START_CHAN = 18,
+};
+
+#define EV_CTX_RESERVED_MASK		GENMASK(7, 0)
+#define EV_CTX_INTMODC_MASK		GENMASK(15, 8)
+#define EV_CTX_INTMODT_MASK		GENMASK(31, 16)
+struct mhi_event_ctxt {
+	__le32 intmod;
+	__le32 ertype;
+	__le32 msivec;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
+};
+
+#define CHAN_CTX_CHSTATE_MASK		GENMASK(7, 0)
+#define CHAN_CTX_BRSTMODE_MASK		GENMASK(9, 8)
+#define CHAN_CTX_POLLCFG_MASK		GENMASK(15, 10)
+#define CHAN_CTX_RESERVED_MASK		GENMASK(31, 16)
+struct mhi_chan_ctxt {
+	__le32 chcfg;
+	__le32 chtype;
+	__le32 erindex;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
+};
+
+struct mhi_cmd_ctxt {
+	__le32 reserved0;
+	__le32 reserved1;
+	__le32 reserved2;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
+};
+
+struct mhi_ring_element {
+	__le64 ptr;
+	__le32 dword[2];
+};
+
+extern const char * const mhi_state_str[MHI_STATE_MAX];
+#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
+				  !mhi_state_str[state]) ? \
+				"INVALID_STATE" : mhi_state_str[state])
+
+#endif /* _MHI_COMMON_H */
diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index 5860cd326db6..b47d8ef2624a 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -7,158 +7,20 @@
 #ifndef _MHI_INT_H
 #define _MHI_INT_H
 
-#include <linux/bitfield.h>
-#include <linux/mhi.h>
+#include "../common.h"
 
 extern struct bus_type mhi_bus_type;
 
-/* MHI registers */
-#define MHIREGLEN					0x00
-#define MHIVER						0x08
-#define MHICFG						0x10
-#define CHDBOFF						0x18
-#define ERDBOFF						0x20
-#define BHIOFF						0x28
-#define BHIEOFF						0x2c
-#define DEBUGOFF					0x30
-#define MHICTRL						0x38
-#define MHISTATUS					0x48
-#define CCABAP_LOWER					0x58
-#define CCABAP_HIGHER					0x5c
-#define ECABAP_LOWER					0x60
-#define ECABAP_HIGHER					0x64
-#define CRCBAP_LOWER					0x68
-#define CRCBAP_HIGHER					0x6c
-#define CRDB_LOWER					0x70
-#define CRDB_HIGHER					0x74
-#define MHICTRLBASE_LOWER				0x80
-#define MHICTRLBASE_HIGHER				0x84
-#define MHICTRLLIMIT_LOWER				0x88
-#define MHICTRLLIMIT_HIGHER				0x8c
-#define MHIDATABASE_LOWER				0x98
-#define MHIDATABASE_HIGHER				0x9c
-#define MHIDATALIMIT_LOWER				0xa0
-#define MHIDATALIMIT_HIGHER				0xa4
-
 /* Host request register */
 #define MHI_SOC_RESET_REQ_OFFSET			0xb0
 #define MHI_SOC_RESET_REQ				BIT(0)
 
-/* MHI register bits */
-#define MHICFG_NHWER_MASK				GENMASK(31, 24)
-#define MHICFG_NER_MASK					GENMASK(23, 16)
-#define MHICFG_NHWCH_MASK				GENMASK(15, 8)
-#define MHICFG_NCH_MASK					GENMASK(7, 0)
-#define MHICTRL_MHISTATE_MASK				GENMASK(15, 8)
-#define MHICTRL_RESET_MASK				BIT(1)
-#define MHISTATUS_MHISTATE_MASK				GENMASK(15, 8)
-#define MHISTATUS_SYSERR_MASK				BIT(2)
-#define MHISTATUS_READY_MASK				BIT(0)
-
-/* MHI BHI registers */
-#define BHI_BHIVERSION_MINOR				0x00
-#define BHI_BHIVERSION_MAJOR				0x04
-#define BHI_IMGADDR_LOW					0x08
-#define BHI_IMGADDR_HIGH				0x0c
-#define BHI_IMGSIZE					0x10
-#define BHI_RSVD1					0x14
-#define BHI_IMGTXDB					0x18
-#define BHI_RSVD2					0x1c
-#define BHI_INTVEC					0x20
-#define BHI_RSVD3					0x24
-#define BHI_EXECENV					0x28
-#define BHI_STATUS					0x2c
-#define BHI_ERRCODE					0x30
-#define BHI_ERRDBG1					0x34
-#define BHI_ERRDBG2					0x38
-#define BHI_ERRDBG3					0x3c
-#define BHI_SERIALNU					0x40
-#define BHI_SBLANTIROLLVER				0x44
-#define BHI_NUMSEG					0x48
-#define BHI_MSMHWID(n)					(0x4c + (0x4 * (n)))
-#define BHI_OEMPKHASH(n)				(0x64 + (0x4 * (n)))
-#define BHI_RSVD5					0xc4
-
-/* BHI register bits */
-#define BHI_TXDB_SEQNUM_BMSK				GENMASK(29, 0)
-#define BHI_STATUS_MASK					GENMASK(31, 30)
-#define BHI_STATUS_ERROR				0x03
-#define BHI_STATUS_SUCCESS				0x02
-#define BHI_STATUS_RESET				0x00
-
-/* MHI BHIE registers */
-#define BHIE_MSMSOCID_OFFS				0x00
-#define BHIE_TXVECADDR_LOW_OFFS				0x2c
-#define BHIE_TXVECADDR_HIGH_OFFS			0x30
-#define BHIE_TXVECSIZE_OFFS				0x34
-#define BHIE_TXVECDB_OFFS				0x3c
-#define BHIE_TXVECSTATUS_OFFS				0x44
-#define BHIE_RXVECADDR_LOW_OFFS				0x60
-#define BHIE_RXVECADDR_HIGH_OFFS			0x64
-#define BHIE_RXVECSIZE_OFFS				0x68
-#define BHIE_RXVECDB_OFFS				0x70
-#define BHIE_RXVECSTATUS_OFFS				0x78
-
-/* BHIE register bits */
-#define BHIE_TXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
-#define BHIE_TXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
-#define BHIE_TXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
-#define BHIE_TXVECSTATUS_STATUS_RESET			0x00
-#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL		0x02
-#define BHIE_TXVECSTATUS_STATUS_ERROR			0x03
-#define BHIE_RXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
-#define BHIE_RXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
-#define BHIE_RXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
-#define BHIE_RXVECSTATUS_STATUS_RESET			0x00
-#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL		0x02
-#define BHIE_RXVECSTATUS_STATUS_ERROR			0x03
-
 #define SOC_HW_VERSION_OFFS				0x224
 #define SOC_HW_VERSION_FAM_NUM_BMSK			GENMASK(31, 28)
 #define SOC_HW_VERSION_DEV_NUM_BMSK			GENMASK(27, 16)
 #define SOC_HW_VERSION_MAJOR_VER_BMSK			GENMASK(15, 8)
 #define SOC_HW_VERSION_MINOR_VER_BMSK			GENMASK(7, 0)
 
-#define EV_CTX_RESERVED_MASK				GENMASK(7, 0)
-#define EV_CTX_INTMODC_MASK				GENMASK(15, 8)
-#define EV_CTX_INTMODT_MASK				GENMASK(31, 16)
-struct mhi_event_ctxt {
-	__le32 intmod;
-	__le32 ertype;
-	__le32 msivec;
-
-	__le64 rbase __packed __aligned(4);
-	__le64 rlen __packed __aligned(4);
-	__le64 rp __packed __aligned(4);
-	__le64 wp __packed __aligned(4);
-};
-
-#define CHAN_CTX_CHSTATE_MASK				GENMASK(7, 0)
-#define CHAN_CTX_BRSTMODE_MASK				GENMASK(9, 8)
-#define CHAN_CTX_POLLCFG_MASK				GENMASK(15, 10)
-#define CHAN_CTX_RESERVED_MASK				GENMASK(31, 16)
-struct mhi_chan_ctxt {
-	__le32 chcfg;
-	__le32 chtype;
-	__le32 erindex;
-
-	__le64 rbase __packed __aligned(4);
-	__le64 rlen __packed __aligned(4);
-	__le64 rp __packed __aligned(4);
-	__le64 wp __packed __aligned(4);
-};
-
-struct mhi_cmd_ctxt {
-	__le32 reserved0;
-	__le32 reserved1;
-	__le32 reserved2;
-
-	__le64 rbase __packed __aligned(4);
-	__le64 rlen __packed __aligned(4);
-	__le64 rp __packed __aligned(4);
-	__le64 wp __packed __aligned(4);
-};
-
 struct mhi_ctxt {
 	struct mhi_event_ctxt *er_ctxt;
 	struct mhi_chan_ctxt *chan_ctxt;
@@ -168,130 +30,11 @@ struct mhi_ctxt {
 	dma_addr_t cmd_ctxt_addr;
 };
 
-struct mhi_ring_element {
-	__le64 ptr;
-	__le32 dword[2];
-};
-
 struct bhi_vec_entry {
 	u64 dma_addr;
 	u64 size;
 };
 
-enum mhi_cmd_type {
-	MHI_CMD_NOP = 1,
-	MHI_CMD_RESET_CHAN = 16,
-	MHI_CMD_STOP_CHAN = 17,
-	MHI_CMD_START_CHAN = 18,
-};
-
-/* No operation command */
-#define MHI_TRE_CMD_NOOP_PTR		0
-#define MHI_TRE_CMD_NOOP_DWORD0		0
-#define MHI_TRE_CMD_NOOP_DWORD1		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_CMD_NOP))
-
-/* Channel reset command */
-#define MHI_TRE_CMD_RESET_PTR		0
-#define MHI_TRE_CMD_RESET_DWORD0	0
-#define MHI_TRE_CMD_RESET_DWORD1(chid)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
-						    FIELD_PREP(GENMASK(23, 16),         \
-							       MHI_CMD_RESET_CHAN))
-
-/* Channel stop command */
-#define MHI_TRE_CMD_STOP_PTR		0
-#define MHI_TRE_CMD_STOP_DWORD0		0
-#define MHI_TRE_CMD_STOP_DWORD1(chid)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
-						    FIELD_PREP(GENMASK(23, 16),         \
-							       MHI_CMD_STOP_CHAN))
-
-/* Channel start command */
-#define MHI_TRE_CMD_START_PTR		0
-#define MHI_TRE_CMD_START_DWORD0	0
-#define MHI_TRE_CMD_START_DWORD1(chid)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
-						    FIELD_PREP(GENMASK(23, 16),         \
-							       MHI_CMD_START_CHAN))
-
-#define MHI_TRE_GET_DWORD(tre, word)	le32_to_cpu((tre)->dword[(word)])
-#define MHI_TRE_GET_CMD_CHID(tre)	FIELD_GET(GENMASK(31, 24), MHI_TRE_GET_DWORD(tre, 1))
-#define MHI_TRE_GET_CMD_TYPE(tre)	FIELD_GET(GENMASK(23, 16), MHI_TRE_GET_DWORD(tre, 1))
-
-/* Event descriptor macros */
-#define MHI_TRE_EV_PTR(ptr)		cpu_to_le64(ptr)
-#define MHI_TRE_EV_DWORD0(code, len)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), code | \
-						    FIELD_PREP(GENMASK(15, 0), len)))
-#define MHI_TRE_EV_DWORD1(chid, type)	cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid | \
-						    FIELD_PREP(GENMASK(23, 16), type)))
-#define MHI_TRE_GET_EV_PTR(tre)		le64_to_cpu((tre)->ptr)
-#define MHI_TRE_GET_EV_CODE(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0)))
-#define MHI_TRE_GET_EV_LEN(tre)		FIELD_GET(GENMASK(15, 0), (MHI_TRE_GET_DWORD(tre, 0)))
-#define MHI_TRE_GET_EV_CHID(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1)))
-#define MHI_TRE_GET_EV_TYPE(tre)	FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1)))
-#define MHI_TRE_GET_EV_STATE(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0)))
-#define MHI_TRE_GET_EV_EXECENV(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0)))
-#define MHI_TRE_GET_EV_SEQ(tre)		MHI_TRE_GET_DWORD(tre, 0)
-#define MHI_TRE_GET_EV_TIME(tre)	MHI_TRE_GET_EV_PTR(tre)
-#define MHI_TRE_GET_EV_COOKIE(tre)	lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
-#define MHI_TRE_GET_EV_VEID(tre)	FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 0)))
-#define MHI_TRE_GET_EV_LINKSPEED(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1)))
-#define MHI_TRE_GET_EV_LINKWIDTH(tre)	FIELD_GET(GENMASK(7, 0), (MHI_TRE_GET_DWORD(tre, 0)))
-
-/* Transfer descriptor macros */
-#define MHI_TRE_DATA_PTR(ptr)		cpu_to_le64(ptr)
-#define MHI_TRE_DATA_DWORD0(len)	cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len))
-#define MHI_TRE_TYPE_TRANSFER		2
-#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
-								MHI_TRE_TYPE_TRANSFER) |    \
-								FIELD_PREP(BIT(10), bei) |  \
-								FIELD_PREP(BIT(9), ieot) |  \
-								FIELD_PREP(BIT(8), ieob) |  \
-								FIELD_PREP(BIT(0), chain))
-
-/* RSC transfer descriptor macros */
-#define MHI_RSCTRE_DATA_PTR(ptr, len)	cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr)
-#define MHI_RSCTRE_DATA_DWORD0(cookie)	cpu_to_le32(cookie)
-#define MHI_RSCTRE_DATA_DWORD1		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
-							       MHI_PKT_TYPE_COALESCING))
-
-enum mhi_pkt_type {
-	MHI_PKT_TYPE_INVALID = 0x0,
-	MHI_PKT_TYPE_NOOP_CMD = 0x1,
-	MHI_PKT_TYPE_TRANSFER = 0x2,
-	MHI_PKT_TYPE_COALESCING = 0x8,
-	MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
-	MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
-	MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
-	MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
-	MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
-	MHI_PKT_TYPE_TX_EVENT = 0x22,
-	MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
-	MHI_PKT_TYPE_EE_EVENT = 0x40,
-	MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
-	MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
-	MHI_PKT_TYPE_STALE_EVENT, /* internal event */
-};
-
-/* MHI transfer completion events */
-enum mhi_ev_ccs {
-	MHI_EV_CC_INVALID = 0x0,
-	MHI_EV_CC_SUCCESS = 0x1,
-	MHI_EV_CC_EOT = 0x2, /* End of transfer event */
-	MHI_EV_CC_OVERFLOW = 0x3,
-	MHI_EV_CC_EOB = 0x4, /* End of block event */
-	MHI_EV_CC_OOB = 0x5, /* Out of block event */
-	MHI_EV_CC_DB_MODE = 0x6,
-	MHI_EV_CC_UNDEFINED_ERR = 0x10,
-	MHI_EV_CC_BAD_TRE = 0x11,
-};
-
-enum mhi_ch_state {
-	MHI_CH_STATE_DISABLED = 0x0,
-	MHI_CH_STATE_ENABLED = 0x1,
-	MHI_CH_STATE_RUNNING = 0x2,
-	MHI_CH_STATE_SUSPENDED = 0x3,
-	MHI_CH_STATE_STOP = 0x4,
-	MHI_CH_STATE_ERROR = 0x5,
-};
-
 enum mhi_ch_state_type {
 	MHI_CH_STATE_TYPE_RESET,
 	MHI_CH_STATE_TYPE_STOP,
@@ -333,11 +76,6 @@ extern const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX];
 #define TO_DEV_STATE_TRANS_STR(state) (((state) >= DEV_ST_TRANSITION_MAX) ? \
 				"INVALID_STATE" : dev_state_tran_str[state])
 
-extern const char * const mhi_state_str[MHI_STATE_MAX];
-#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
-				  !mhi_state_str[state]) ? \
-				"INVALID_STATE" : mhi_state_str[state])
-
 /* internal power states */
 enum mhi_pm_state {
 	MHI_PM_STATE_DISABLE,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 09/27] bus: mhi: Make mhi_state_str[] array static inline and move to common.h
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (7 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 08/27] bus: mhi: Move common MHI definitions out of host directory Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 15:56   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 10/27] bus: mhi: ep: Add support for registering MHI endpoint controllers Manivannan Sadhasivam
                   ` (19 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam, Hemant Kumar

The mhi_state_str[] array could also be used by the MHI endpoint stack. So
let's convert the array into a "static inline" function and move it to the
"common.h" header so that the endpoint stack can make use of it as well.

Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/common.h       | 29 +++++++++++++++++++++++++----
 drivers/bus/mhi/host/boot.c    |  2 +-
 drivers/bus/mhi/host/debugfs.c |  6 +++---
 drivers/bus/mhi/host/init.c    | 12 ------------
 drivers/bus/mhi/host/main.c    |  8 ++++----
 drivers/bus/mhi/host/pm.c      | 14 +++++++-------
 6 files changed, 40 insertions(+), 31 deletions(-)

diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index f2690bf11c99..ec75ba1e6686 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -275,9 +275,30 @@ struct mhi_ring_element {
 	__le32 dword[2];
 };
 
-extern const char * const mhi_state_str[MHI_STATE_MAX];
-#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
-				  !mhi_state_str[state]) ? \
-				"INVALID_STATE" : mhi_state_str[state])
+static inline const char * const mhi_state_str(enum mhi_state state)
+{
+	switch (state) {
+	case MHI_STATE_RESET:
+		return "RESET";
+	case MHI_STATE_READY:
+		return "READY";
+	case MHI_STATE_M0:
+		return "M0";
+	case MHI_STATE_M1:
+		return "M1";
+	case MHI_STATE_M2:
+		return "M2";
+	case MHI_STATE_M3:
+		return "M3";
+	case MHI_STATE_M3_FAST:
+		return "M3 FAST";
+	case MHI_STATE_BHI:
+		return "BHI";
+	case MHI_STATE_SYS_ERR:
+		return "SYS ERROR";
+	default:
+		return "Unknown state";
+	}
+};
 
 #endif /* _MHI_COMMON_H */
diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
index d5ba3c7efb61..b0da7ca4519c 100644
--- a/drivers/bus/mhi/host/boot.c
+++ b/drivers/bus/mhi/host/boot.c
@@ -67,7 +67,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
 
 	dev_dbg(dev, "Entered with pm_state:%s dev_state:%s ee:%s\n",
 		to_mhi_pm_state_str(mhi_cntrl->pm_state),
-		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		mhi_state_str(mhi_cntrl->dev_state),
 		TO_MHI_EXEC_STR(mhi_cntrl->ee));
 
 	/*
diff --git a/drivers/bus/mhi/host/debugfs.c b/drivers/bus/mhi/host/debugfs.c
index bdc875d7bd4d..cfec7811dfbb 100644
--- a/drivers/bus/mhi/host/debugfs.c
+++ b/drivers/bus/mhi/host/debugfs.c
@@ -20,7 +20,7 @@ static int mhi_debugfs_states_show(struct seq_file *m, void *d)
 	seq_printf(m, "PM state: %s Device: %s MHI state: %s EE: %s wake: %s\n",
 		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
 		   mhi_is_active(mhi_cntrl) ? "Active" : "Inactive",
-		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		   mhi_state_str(mhi_cntrl->dev_state),
 		   TO_MHI_EXEC_STR(mhi_cntrl->ee),
 		   mhi_cntrl->wake_set ? "true" : "false");
 
@@ -206,13 +206,13 @@ static int mhi_debugfs_regdump_show(struct seq_file *m, void *d)
 
 	seq_printf(m, "Host PM state: %s Device state: %s EE: %s\n",
 		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
-		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		   mhi_state_str(mhi_cntrl->dev_state),
 		   TO_MHI_EXEC_STR(mhi_cntrl->ee));
 
 	state = mhi_get_mhi_state(mhi_cntrl);
 	ee = mhi_get_exec_env(mhi_cntrl);
 	seq_printf(m, "Device EE: %s state: %s\n", TO_MHI_EXEC_STR(ee),
-		   TO_MHI_STATE_STR(state));
+		   mhi_state_str(state));
 
 	for (i = 0; regs[i].name; i++) {
 		if (!regs[i].base)
diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
index 016dcc35db80..a665b8e92408 100644
--- a/drivers/bus/mhi/host/init.c
+++ b/drivers/bus/mhi/host/init.c
@@ -45,18 +45,6 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
 	[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
 };
 
-const char * const mhi_state_str[MHI_STATE_MAX] = {
-	[MHI_STATE_RESET] = "RESET",
-	[MHI_STATE_READY] = "READY",
-	[MHI_STATE_M0] = "M0",
-	[MHI_STATE_M1] = "M1",
-	[MHI_STATE_M2] = "M2",
-	[MHI_STATE_M3] = "M3",
-	[MHI_STATE_M3_FAST] = "M3 FAST",
-	[MHI_STATE_BHI] = "BHI",
-	[MHI_STATE_SYS_ERR] = "SYS ERROR",
-};
-
 const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
 	[MHI_CH_STATE_TYPE_RESET] = "RESET",
 	[MHI_CH_STATE_TYPE_STOP] = "STOP",
diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index dabf85b92a84..9021be7f2359 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -477,8 +477,8 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
 	ee = mhi_get_exec_env(mhi_cntrl);
 	dev_dbg(dev, "local ee: %s state: %s device ee: %s state: %s\n",
 		TO_MHI_EXEC_STR(mhi_cntrl->ee),
-		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
-		TO_MHI_EXEC_STR(ee), TO_MHI_STATE_STR(state));
+		mhi_state_str(mhi_cntrl->dev_state),
+		TO_MHI_EXEC_STR(ee), mhi_state_str(state));
 
 	if (state == MHI_STATE_SYS_ERR) {
 		dev_dbg(dev, "System error detected\n");
@@ -844,7 +844,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 			new_state = MHI_TRE_GET_EV_STATE(local_rp);
 
 			dev_dbg(dev, "State change event to state: %s\n",
-				TO_MHI_STATE_STR(new_state));
+				mhi_state_str(new_state));
 
 			switch (new_state) {
 			case MHI_STATE_M0:
@@ -871,7 +871,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 			}
 			default:
 				dev_err(dev, "Invalid state: %s\n",
-					TO_MHI_STATE_STR(new_state));
+					mhi_state_str(new_state));
 			}
 
 			break;
diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
index bb8a23e80e19..3d90b8ecd3d9 100644
--- a/drivers/bus/mhi/host/pm.c
+++ b/drivers/bus/mhi/host/pm.c
@@ -541,7 +541,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
 
 	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
 		to_mhi_pm_state_str(mhi_cntrl->pm_state),
-		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+		mhi_state_str(mhi_cntrl->dev_state));
 
 	mutex_unlock(&mhi_cntrl->pm_mutex);
 }
@@ -684,7 +684,7 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
 exit_sys_error_transition:
 	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
 		to_mhi_pm_state_str(mhi_cntrl->pm_state),
-		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+		mhi_state_str(mhi_cntrl->dev_state));
 
 	mutex_unlock(&mhi_cntrl->pm_mutex);
 }
@@ -859,7 +859,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
 	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
 		dev_err(dev,
 			"Did not enter M3 state, MHI state: %s, PM state: %s\n",
-			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			mhi_state_str(mhi_cntrl->dev_state),
 			to_mhi_pm_state_str(mhi_cntrl->pm_state));
 		return -EIO;
 	}
@@ -885,7 +885,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
 
 	dev_dbg(dev, "Entered with PM state: %s, MHI state: %s\n",
 		to_mhi_pm_state_str(mhi_cntrl->pm_state),
-		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+		mhi_state_str(mhi_cntrl->dev_state));
 
 	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
 		return 0;
@@ -895,7 +895,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
 
 	if (mhi_get_mhi_state(mhi_cntrl) != MHI_STATE_M3) {
 		dev_warn(dev, "Resuming from non M3 state (%s)\n",
-			 TO_MHI_STATE_STR(mhi_get_mhi_state(mhi_cntrl)));
+			 mhi_state_str(mhi_get_mhi_state(mhi_cntrl)));
 		if (!force)
 			return -EINVAL;
 	}
@@ -932,7 +932,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
 	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
 		dev_err(dev,
 			"Did not enter M0 state, MHI state: %s, PM state: %s\n",
-			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			mhi_state_str(mhi_cntrl->dev_state),
 			to_mhi_pm_state_str(mhi_cntrl->pm_state));
 		return -EIO;
 	}
@@ -1083,7 +1083,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
 
 	state = mhi_get_mhi_state(mhi_cntrl);
 	dev_dbg(dev, "Attempting power on with EE: %s, state: %s\n",
-		TO_MHI_EXEC_STR(current_ee), TO_MHI_STATE_STR(state));
+		TO_MHI_EXEC_STR(current_ee), mhi_state_str(state));
 
 	if (state == MHI_STATE_SYS_ERR) {
 		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 10/27] bus: mhi: ep: Add support for registering MHI endpoint controllers
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (8 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 09/27] bus: mhi: Make mhi_state_str[] array static inline and move to common.h Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:06   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 11/27] bus: mhi: ep: Add support for registering MHI endpoint client drivers Manivannan Sadhasivam
                   ` (18 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

This commit adds support for registering MHI endpoint controller drivers
with the MHI endpoint stack. MHI endpoint controller drivers manage
the interaction with the host machines (such as x86). They also act as
the MHI endpoint bus master, in charge of managing the physical link
between the host and the endpoint device. Even though the MHI spec is
bus agnostic, the current implementation is entirely based on the PCIe
bus.

The endpoint controller driver encapsulates all information about the
underlying physical bus, such as PCIe. The registration process involves
parsing the channel configuration and allocating an MHI EP device.

Channels used in the endpoint stack follow the perspective of the MHI
host stack, i.e.:

UL - From host to endpoint
DL - From endpoint to host
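
As a rough sketch of how a controller driver (e.g. a PCI EPF driver) would
use this API; the channel names, numbers, element counts and MHI version
below are illustrative only, what matters is the shape of the call:

    static const struct mhi_ep_channel_config ch_cfg[] = {
        {
            .name = "IP_SW0",
            .num = 2,                /* UL: host to endpoint */
            .num_elements = 64,
            .dir = DMA_TO_DEVICE,
        },
        {
            .name = "IP_SW0",
            .num = 3,                /* DL: endpoint to host */
            .num_elements = 64,
            .dir = DMA_FROM_DEVICE,
        },
    };

    static const struct mhi_ep_cntrl_config ep_cfg = {
        .mhi_version = 0x1000000,
        .max_channels = 128,
        .num_channels = ARRAY_SIZE(ch_cfg),
        .ch_cfg = ch_cfg,
    };

    /* mhi_cntrl->cntrl_dev must already point to the bus device (PCI EPF) */
    ret = mhi_ep_register_controller(mhi_cntrl, &ep_cfg);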

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/Kconfig       |   1 +
 drivers/bus/mhi/Makefile      |   3 +
 drivers/bus/mhi/ep/Kconfig    |  10 ++
 drivers/bus/mhi/ep/Makefile   |   2 +
 drivers/bus/mhi/ep/internal.h | 154 ++++++++++++++++++++++
 drivers/bus/mhi/ep/main.c     | 236 ++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h        | 143 ++++++++++++++++++++
 7 files changed, 549 insertions(+)
 create mode 100644 drivers/bus/mhi/ep/Kconfig
 create mode 100644 drivers/bus/mhi/ep/Makefile
 create mode 100644 drivers/bus/mhi/ep/internal.h
 create mode 100644 drivers/bus/mhi/ep/main.c
 create mode 100644 include/linux/mhi_ep.h

diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
index 4748df7f9cd5..b39a11e6c624 100644
--- a/drivers/bus/mhi/Kconfig
+++ b/drivers/bus/mhi/Kconfig
@@ -6,3 +6,4 @@
 #
 
 source "drivers/bus/mhi/host/Kconfig"
+source "drivers/bus/mhi/ep/Kconfig"
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
index 5f5708a249f5..46981331b38f 100644
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -1,2 +1,5 @@
 # Host MHI stack
 obj-y += host/
+
+# Endpoint MHI stack
+obj-y += ep/
diff --git a/drivers/bus/mhi/ep/Kconfig b/drivers/bus/mhi/ep/Kconfig
new file mode 100644
index 000000000000..90ab3b040672
--- /dev/null
+++ b/drivers/bus/mhi/ep/Kconfig
@@ -0,0 +1,10 @@
+config MHI_BUS_EP
+	tristate "Modem Host Interface (MHI) bus Endpoint implementation"
+	help
+	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
+	  communication protocol used by a host processor to control
+	  and communicate a modem device over a high speed peripheral
+	  bus or shared memory.
+
+	  MHI_BUS_EP implements the MHI protocol for the endpoint devices,
+	  such as SDX55 modem connected to the host machine over PCIe.
diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
new file mode 100644
index 000000000000..64e29252b608
--- /dev/null
+++ b/drivers/bus/mhi/ep/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
+mhi_ep-y := main.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
new file mode 100644
index 000000000000..58ec5fdc503f
--- /dev/null
+++ b/drivers/bus/mhi/ep/internal.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2022, Linaro Ltd.
+ *
+ */
+
+#ifndef _MHI_EP_INTERNAL_
+#define _MHI_EP_INTERNAL_
+
+#include <linux/bitfield.h>
+
+#include "../common.h"
+
+extern struct bus_type mhi_ep_bus_type;
+
+#define MHI_REG_OFFSET				0x100
+#define BHI_REG_OFFSET				0x200
+
+/* MHI registers */
+#define EP_MHIREGLEN				(MHI_REG_OFFSET + MHIREGLEN)
+#define EP_MHIVER				(MHI_REG_OFFSET + MHIVER)
+#define EP_MHICFG				(MHI_REG_OFFSET + MHICFG)
+#define EP_CHDBOFF				(MHI_REG_OFFSET + CHDBOFF)
+#define EP_ERDBOFF				(MHI_REG_OFFSET + ERDBOFF)
+#define EP_BHIOFF				(MHI_REG_OFFSET + BHIOFF)
+#define EP_BHIEOFF				(MHI_REG_OFFSET + BHIEOFF)
+#define EP_DEBUGOFF				(MHI_REG_OFFSET + DEBUGOFF)
+#define EP_MHICTRL				(MHI_REG_OFFSET + MHICTRL)
+#define EP_MHISTATUS				(MHI_REG_OFFSET + MHISTATUS)
+#define EP_CCABAP_LOWER				(MHI_REG_OFFSET + CCABAP_LOWER)
+#define EP_CCABAP_HIGHER			(MHI_REG_OFFSET + CCABAP_HIGHER)
+#define EP_ECABAP_LOWER				(MHI_REG_OFFSET + ECABAP_LOWER)
+#define EP_ECABAP_HIGHER			(MHI_REG_OFFSET + ECABAP_HIGHER)
+#define EP_CRCBAP_LOWER				(MHI_REG_OFFSET + CRCBAP_LOWER)
+#define EP_CRCBAP_HIGHER			(MHI_REG_OFFSET + CRCBAP_HIGHER)
+#define EP_CRDB_LOWER				(MHI_REG_OFFSET + CRDB_LOWER)
+#define EP_CRDB_HIGHER				(MHI_REG_OFFSET + CRDB_HIGHER)
+#define EP_MHICTRLBASE_LOWER			(MHI_REG_OFFSET + MHICTRLBASE_LOWER)
+#define EP_MHICTRLBASE_HIGHER			(MHI_REG_OFFSET + MHICTRLBASE_HIGHER)
+#define EP_MHICTRLLIMIT_LOWER			(MHI_REG_OFFSET + MHICTRLLIMIT_LOWER)
+#define EP_MHICTRLLIMIT_HIGHER			(MHI_REG_OFFSET + MHICTRLLIMIT_HIGHER)
+#define EP_MHIDATABASE_LOWER			(MHI_REG_OFFSET + MHIDATABASE_LOWER)
+#define EP_MHIDATABASE_HIGHER			(MHI_REG_OFFSET + MHIDATABASE_HIGHER)
+#define EP_MHIDATALIMIT_LOWER			(MHI_REG_OFFSET + MHIDATALIMIT_LOWER)
+#define EP_MHIDATALIMIT_HIGHER			(MHI_REG_OFFSET + MHIDATALIMIT_HIGHER)
+
+/* MHI BHI registers */
+#define EP_BHI_INTVEC				(BHI_REG_OFFSET + BHI_INTVEC)
+#define EP_BHI_EXECENV				(BHI_REG_OFFSET + BHI_EXECENV)
+
+/* MHI Doorbell registers */
+#define CHDB_LOWER_n(n)				(0x400 + 0x8 * (n))
+#define CHDB_HIGHER_n(n)			(0x404 + 0x8 * (n))
+#define ERDB_LOWER_n(n)				(0x800 + 0x8 * (n))
+#define ERDB_HIGHER_n(n)			(0x804 + 0x8 * (n))
+
+#define MHI_CTRL_INT_STATUS			0x4
+#define MHI_CTRL_INT_STATUS_MSK			BIT(0)
+#define MHI_CTRL_INT_STATUS_CRDB_MSK		BIT(1)
+#define MHI_CHDB_INT_STATUS_n(n)		(0x28 + 0x4 * (n))
+#define MHI_ERDB_INT_STATUS_n(n)		(0x38 + 0x4 * (n))
+
+#define MHI_CTRL_INT_CLEAR			0x4c
+#define MHI_CTRL_INT_MMIO_WR_CLEAR		BIT(2)
+#define MHI_CTRL_INT_CRDB_CLEAR			BIT(1)
+#define MHI_CTRL_INT_CRDB_MHICTRL_CLEAR		BIT(0)
+
+#define MHI_CHDB_INT_CLEAR_n(n)			(0x70 + 0x4 * (n))
+#define MHI_CHDB_INT_CLEAR_n_CLEAR_ALL		GENMASK(31, 0)
+#define MHI_ERDB_INT_CLEAR_n(n)			(0x80 + 0x4 * (n))
+#define MHI_ERDB_INT_CLEAR_n_CLEAR_ALL		GENMASK(31, 0)
+
+/*
+ * Unlike the usual "masking" convention, writing "1" to a bit in this register
+ * enables the interrupt and writing "0" will disable it..
+ */
+#define MHI_CTRL_INT_MASK			0x94
+#define MHI_CTRL_INT_MASK_MASK			GENMASK(1, 0)
+#define MHI_CTRL_MHICTRL_MASK			BIT(0)
+#define MHI_CTRL_CRDB_MASK			BIT(1)
+
+#define MHI_CHDB_INT_MASK_n(n)			(0xb8 + 0x4 * (n))
+#define MHI_CHDB_INT_MASK_n_EN_ALL		GENMASK(31, 0)
+#define MHI_ERDB_INT_MASK_n(n)			(0xc8 + 0x4 * (n))
+#define MHI_ERDB_INT_MASK_n_EN_ALL		GENMASK(31, 0)
+
+#define NR_OF_CMD_RINGS				1
+#define MHI_MASK_ROWS_CH_EV_DB			4
+#define MHI_MASK_CH_EV_LEN			32
+
+/* Generic context */
+struct mhi_generic_ctx {
+	__le32 reserved0;
+	__le32 reserved1;
+	__le32 reserved2;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
+};
+
+enum mhi_ep_ring_type {
+	RING_TYPE_CMD,
+	RING_TYPE_ER,
+	RING_TYPE_CH,
+};
+
+/* Ring element */
+union mhi_ep_ring_ctx {
+	struct mhi_cmd_ctxt cmd;
+	struct mhi_event_ctxt ev;
+	struct mhi_chan_ctxt ch;
+	struct mhi_generic_ctx generic;
+};
+
+struct mhi_ep_ring {
+	struct mhi_ep_cntrl *mhi_cntrl;
+	union mhi_ep_ring_ctx *ring_ctx;
+	struct mhi_ring_element *ring_cache;
+	enum mhi_ep_ring_type type;
+	u64 rbase;
+	size_t rd_offset;
+	size_t wr_offset;
+	size_t ring_size;
+	u32 db_offset_h;
+	u32 db_offset_l;
+	u32 ch_id;
+};
+
+struct mhi_ep_cmd {
+	struct mhi_ep_ring ring;
+};
+
+struct mhi_ep_event {
+	struct mhi_ep_ring ring;
+};
+
+struct mhi_ep_chan {
+	char *name;
+	struct mhi_ep_device *mhi_dev;
+	struct mhi_ep_ring ring;
+	struct mutex lock;
+	void (*xfer_cb)(struct mhi_ep_device *mhi_dev, struct mhi_result *result);
+	enum mhi_ch_state state;
+	enum dma_data_direction dir;
+	u64 tre_loc;
+	u32 tre_size;
+	u32 tre_bytes_left;
+	u32 chan;
+	bool skip_td;
+};
+
+#endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
new file mode 100644
index 000000000000..87ca42c7b067
--- /dev/null
+++ b/drivers/bus/mhi/ep/main.c
@@ -0,0 +1,236 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * MHI Endpoint bus stack
+ *
+ * Copyright (C) 2022 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/dma-direction.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/mhi_ep.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include "internal.h"
+
+static DEFINE_IDA(mhi_ep_cntrl_ida);
+
+static void mhi_ep_release_device(struct device *dev)
+{
+	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+
+	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+		mhi_dev->mhi_cntrl->mhi_dev = NULL;
+
+	/*
+	 * We need to set the mhi_chan->mhi_dev to NULL here since the MHI
+	 * devices for the channels will only get created in mhi_ep_create_device()
+	 * if the mhi_dev associated with it is NULL.
+	 */
+	if (mhi_dev->ul_chan)
+		mhi_dev->ul_chan->mhi_dev = NULL;
+
+	if (mhi_dev->dl_chan)
+		mhi_dev->dl_chan->mhi_dev = NULL;
+
+	kfree(mhi_dev);
+}
+
+static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl,
+						 enum mhi_device_type dev_type)
+{
+	struct mhi_ep_device *mhi_dev;
+	struct device *dev;
+
+	mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
+	if (!mhi_dev)
+		return ERR_PTR(-ENOMEM);
+
+	dev = &mhi_dev->dev;
+	device_initialize(dev);
+	dev->bus = &mhi_ep_bus_type;
+	dev->release = mhi_ep_release_device;
+
+	/* Controller device is always allocated first */
+	if (dev_type == MHI_DEVICE_CONTROLLER)
+		/* for MHI controller device, parent is the bus device (e.g. PCI EPF) */
+		dev->parent = mhi_cntrl->cntrl_dev;
+	else
+		/* for MHI client devices, parent is the MHI controller device */
+		dev->parent = &mhi_cntrl->mhi_dev->dev;
+
+	mhi_dev->mhi_cntrl = mhi_cntrl;
+	mhi_dev->dev_type = dev_type;
+
+	return mhi_dev;
+}
+
+static int mhi_ep_chan_init(struct mhi_ep_cntrl *mhi_cntrl,
+			    const struct mhi_ep_cntrl_config *config)
+{
+	const struct mhi_ep_channel_config *ch_cfg;
+	struct device *dev = mhi_cntrl->cntrl_dev;
+	u32 chan, i;
+	int ret = -EINVAL;
+
+	mhi_cntrl->max_chan = config->max_channels;
+
+	/*
+	 * Allocate max_channels supported by the MHI endpoint and populate
+	 * only the defined channels
+	 */
+	mhi_cntrl->mhi_chan = kcalloc(mhi_cntrl->max_chan, sizeof(*mhi_cntrl->mhi_chan),
+				      GFP_KERNEL);
+	if (!mhi_cntrl->mhi_chan)
+		return -ENOMEM;
+
+	for (i = 0; i < config->num_channels; i++) {
+		struct mhi_ep_chan *mhi_chan;
+
+		ch_cfg = &config->ch_cfg[i];
+
+		chan = ch_cfg->num;
+		if (chan >= mhi_cntrl->max_chan) {
+			dev_err(dev, "Channel (%u) exceeds maximum available channels (%u)\n",
+				chan, mhi_cntrl->max_chan);
+			goto error_chan_cfg;
+		}
+
+		/* Bi-directional and direction less channels are not supported */
+		if (ch_cfg->dir == DMA_BIDIRECTIONAL || ch_cfg->dir == DMA_NONE) {
+			dev_err(dev, "Invalid direction (%u) for channel (%u)\n",
+				ch_cfg->dir, chan);
+			goto error_chan_cfg;
+		}
+
+		mhi_chan = &mhi_cntrl->mhi_chan[chan];
+		mhi_chan->name = ch_cfg->name;
+		mhi_chan->chan = chan;
+		mhi_chan->dir = ch_cfg->dir;
+		mutex_init(&mhi_chan->lock);
+	}
+
+	return 0;
+
+error_chan_cfg:
+	kfree(mhi_cntrl->mhi_chan);
+
+	return ret;
+}
+
+/*
+ * Allocate channel and command rings here. Event rings will be allocated
+ * in mhi_ep_power_up() as the config comes from the host.
+ */
+int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
+				const struct mhi_ep_cntrl_config *config)
+{
+	struct mhi_ep_device *mhi_dev;
+	int ret;
+
+	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
+		return -EINVAL;
+
+	ret = mhi_ep_chan_init(mhi_cntrl, config);
+	if (ret)
+		return ret;
+
+	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS, sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
+	if (!mhi_cntrl->mhi_cmd) {
+		ret = -ENOMEM;
+		goto err_free_ch;
+	}
+
+	/* Set controller index */
+	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
+	if (mhi_cntrl->index < 0) {
+		ret = mhi_cntrl->index;
+		goto err_free_cmd;
+	}
+
+	/* Allocate the controller device */
+	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_CONTROLLER);
+	if (IS_ERR(mhi_dev)) {
+		dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
+		ret = PTR_ERR(mhi_dev);
+		goto err_ida_free;
+	}
+
+	dev_set_name(&mhi_dev->dev, "mhi_ep%u", mhi_cntrl->index);
+	mhi_dev->name = dev_name(&mhi_dev->dev);
+	mhi_cntrl->mhi_dev = mhi_dev;
+
+	ret = device_add(&mhi_dev->dev);
+	if (ret)
+		goto err_put_dev;
+
+	dev_dbg(&mhi_dev->dev, "MHI EP Controller registered\n");
+
+	return 0;
+
+err_put_dev:
+	put_device(&mhi_dev->dev);
+err_ida_free:
+	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+err_free_cmd:
+	kfree(mhi_cntrl->mhi_cmd);
+err_free_ch:
+	kfree(mhi_cntrl->mhi_chan);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
+
+void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
+
+	kfree(mhi_cntrl->mhi_cmd);
+	kfree(mhi_cntrl->mhi_chan);
+
+	device_del(&mhi_dev->dev);
+	put_device(&mhi_dev->dev);
+
+	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
+
+static int mhi_ep_match(struct device *dev, struct device_driver *drv)
+{
+	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+
+	/*
+	 * If the device is a controller type then there is no client driver
+	 * associated with it
+	 */
+	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+		return 0;
+
+	return 0;
+};
+
+struct bus_type mhi_ep_bus_type = {
+	.name = "mhi_ep",
+	.dev_name = "mhi_ep",
+	.match = mhi_ep_match,
+};
+
+static int __init mhi_ep_init(void)
+{
+	return bus_register(&mhi_ep_bus_type);
+}
+
+static void __exit mhi_ep_exit(void)
+{
+	bus_unregister(&mhi_ep_bus_type);
+}
+
+postcore_initcall(mhi_ep_init);
+module_exit(mhi_ep_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("MHI Bus Endpoint stack");
+MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>");
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
new file mode 100644
index 000000000000..9c58938371e2
--- /dev/null
+++ b/include/linux/mhi_ep.h
@@ -0,0 +1,143 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2022, Linaro Ltd.
+ *
+ */
+#ifndef _MHI_EP_H_
+#define _MHI_EP_H_
+
+#include <linux/dma-direction.h>
+#include <linux/mhi.h>
+
+#define MHI_EP_DEFAULT_MTU 0x8000
+
+/**
+ * struct mhi_ep_channel_config - Channel configuration structure for controller
+ * @name: The name of this channel
+ * @num: The number assigned to this channel
+ * @num_elements: The number of elements that can be queued to this channel
+ * @dir: Direction that data may flow on this channel
+ */
+struct mhi_ep_channel_config {
+	char *name;
+	u32 num;
+	u32 num_elements;
+	enum dma_data_direction dir;
+};
+
+/**
+ * struct mhi_ep_cntrl_config - MHI Endpoint controller configuration
+ * @mhi_version: MHI spec version supported by the controller
+ * @max_channels: Maximum number of channels supported
+ * @num_channels: Number of channels defined in @ch_cfg
+ * @ch_cfg: Array of defined channels
+ */
+struct mhi_ep_cntrl_config {
+	u32 mhi_version;
+	u32 max_channels;
+	u32 num_channels;
+	const struct mhi_ep_channel_config *ch_cfg;
+};
+
+/**
+ * struct mhi_ep_db_info - MHI Endpoint doorbell info
+ * @mask: Mask of the doorbell interrupt
+ * @status: Status of the doorbell interrupt
+ */
+struct mhi_ep_db_info {
+	u32 mask;
+	u32 status;
+};
+
+/**
+ * struct mhi_ep_cntrl - MHI Endpoint controller structure
+ * @cntrl_dev: Pointer to the struct device of physical bus acting as the MHI
+ *             Endpoint controller
+ * @mhi_dev: MHI Endpoint device instance for the controller
+ * @mmio: MMIO region containing the MHI registers
+ * @mhi_chan: Points to the channel configuration table
+ * @mhi_event: Points to the event ring configurations table
+ * @mhi_cmd: Points to the command ring configurations table
+ * @sm: MHI Endpoint state machine
+ * @raise_irq: CB function for raising IRQ to the host
+ * @alloc_addr: CB function for allocating memory in endpoint for storing host context
+ * @map_addr: CB function for mapping host context to endpoint
+ * @free_addr: CB function to free the allocated memory in endpoint for storing host context
+ * @unmap_addr: CB function to unmap the host context in endpoint
+ * @read_from_host: CB function for reading from host memory from endpoint
+ * @write_to_host: CB function for writing to host memory from endpoint
+ * @mhi_state: MHI Endpoint state
+ * @max_chan: Maximum channels supported by the endpoint controller
+ * @mru: MRU (Maximum Receive Unit) value of the endpoint controller
+ * @index: MHI Endpoint controller index
+ */
+struct mhi_ep_cntrl {
+	struct device *cntrl_dev;
+	struct mhi_ep_device *mhi_dev;
+	void __iomem *mmio;
+
+	struct mhi_ep_chan *mhi_chan;
+	struct mhi_ep_event *mhi_event;
+	struct mhi_ep_cmd *mhi_cmd;
+	struct mhi_ep_sm *sm;
+
+	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
+	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,
+		       size_t size);
+	int (*map_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr, u64 pci_addr,
+			size_t size);
+	void (*free_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr,
+			  void __iomem *virt_addr, size_t size);
+	void (*unmap_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr);
+	int (*read_from_host)(struct mhi_ep_cntrl *mhi_cntrl, u64 from, void __iomem *to,
+			      size_t size);
+	int (*write_to_host)(struct mhi_ep_cntrl *mhi_cntrl, void __iomem *from, u64 to,
+			     size_t size);
+
+	enum mhi_state mhi_state;
+
+	u32 max_chan;
+	u32 mru;
+	u32 index;
+};
+
+/**
+ * struct mhi_ep_device - Structure representing an MHI Endpoint device that binds
+ *                     to channels or is associated with controllers
+ * @dev: Driver model device node for the MHI Endpoint device
+ * @mhi_cntrl: Controller the device belongs to
+ * @id: Pointer to MHI Endpoint device ID struct
+ * @name: Name of the associated MHI Endpoint device
+ * @ul_chan: UL channel for the device
+ * @dl_chan: DL channel for the device
+ * @dev_type: MHI device type
+ */
+struct mhi_ep_device {
+	struct device dev;
+	struct mhi_ep_cntrl *mhi_cntrl;
+	const struct mhi_device_id *id;
+	const char *name;
+	struct mhi_ep_chan *ul_chan;
+	struct mhi_ep_chan *dl_chan;
+	enum mhi_device_type dev_type;
+};
+
+#define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
+
+/**
+ * mhi_ep_register_controller - Register MHI Endpoint controller
+ * @mhi_cntrl: MHI Endpoint controller to register
+ * @config: Configuration to use for the controller
+ *
+ * Return: 0 if controller registrations succeeds, a negative error code otherwise.
+ */
+int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
+			       const struct mhi_ep_cntrl_config *config);
+
+/**
+ * mhi_ep_unregister_controller - Unregister MHI Endpoint controller
+ * @mhi_cntrl: MHI Endpoint controller to unregister
+ */
+void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
+
+#endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 11/27] bus: mhi: ep: Add support for registering MHI endpoint client drivers
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (9 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 10/27] bus: mhi: ep: Add support for registering MHI endpoint controllers Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:09   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 12/27] bus: mhi: ep: Add support for creating and destroying MHI EP devices Manivannan Sadhasivam
                   ` (17 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam, Hemant Kumar

This commit adds support for registering MHI endpoint client drivers
with the MHI endpoint stack. MHI endpoint client drivers bind to one
or more MHI endpoint devices in order to send and receive upper-layer
protocol packets like IP packets, modem control messages, and
diagnostics messages over the MHI bus.
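
A minimal client driver registration would then look roughly like the
sketch below; all "foo" names are hypothetical, only the mhi_ep_driver
wiring and the channel-name based matching are what this patch provides:

    static const struct mhi_device_id foo_ep_id_table[] = {
        { .chan = "FOO" },
        {},
    };

    static struct mhi_ep_driver foo_ep_driver = {
        .id_table = foo_ep_id_table,
        .probe = foo_ep_probe,          /* bind to the matched device */
        .remove = foo_ep_remove,
        .ul_xfer_cb = foo_ep_ul_cb,     /* UL (host to endpoint) data */
        .dl_xfer_cb = foo_ep_dl_cb,     /* DL (endpoint to host) data */
        .driver = {
            .name = "foo_mhi_ep",
        },
    };
    module_mhi_ep_driver(foo_ep_driver);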

Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 85 +++++++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h    | 57 +++++++++++++++++++++++++-
 2 files changed, 140 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 87ca42c7b067..2bdcf1657479 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -198,9 +198,88 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
 }
 EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
 
+static int mhi_ep_driver_probe(struct device *dev)
+{
+	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
+	struct mhi_ep_chan *ul_chan = mhi_dev->ul_chan;
+	struct mhi_ep_chan *dl_chan = mhi_dev->dl_chan;
+
+	ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
+	dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
+
+	return mhi_drv->probe(mhi_dev, mhi_dev->id);
+}
+
+static int mhi_ep_driver_remove(struct device *dev)
+{
+	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
+	struct mhi_result result = {};
+	struct mhi_ep_chan *mhi_chan;
+	int dir;
+
+	/* Skip if it is a controller device */
+	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+		return 0;
+
+	/* Disconnect the channels associated with the driver */
+	for (dir = 0; dir < 2; dir++) {
+		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		mutex_lock(&mhi_chan->lock);
+		/* Send channel disconnect status to the client driver */
+		if (mhi_chan->xfer_cb) {
+			result.transaction_status = -ENOTCONN;
+			result.bytes_xferd = 0;
+			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+		}
+
+		mhi_chan->state = MHI_CH_STATE_DISABLED;
+		mhi_chan->xfer_cb = NULL;
+		mutex_unlock(&mhi_chan->lock);
+	}
+
+	/* Remove the client driver now */
+	mhi_drv->remove(mhi_dev);
+
+	return 0;
+}
+
+int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner)
+{
+	struct device_driver *driver = &mhi_drv->driver;
+
+	if (!mhi_drv->probe || !mhi_drv->remove)
+		return -EINVAL;
+
+	/* Client drivers should have callbacks defined for both channels */
+	if (!mhi_drv->ul_xfer_cb || !mhi_drv->dl_xfer_cb)
+		return -EINVAL;
+
+	driver->bus = &mhi_ep_bus_type;
+	driver->owner = owner;
+	driver->probe = mhi_ep_driver_probe;
+	driver->remove = mhi_ep_driver_remove;
+
+	return driver_register(driver);
+}
+EXPORT_SYMBOL_GPL(__mhi_ep_driver_register);
+
+void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv)
+{
+	driver_unregister(&mhi_drv->driver);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_driver_unregister);
+
 static int mhi_ep_match(struct device *dev, struct device_driver *drv)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(drv);
+	const struct mhi_device_id *id;
 
 	/*
 	 * If the device is a controller type then there is no client driver
@@ -209,6 +288,12 @@ static int mhi_ep_match(struct device *dev, struct device_driver *drv)
 	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
 		return 0;
 
+	for (id = mhi_drv->id_table; id->chan[0]; id++)
+		if (!strcmp(mhi_dev->name, id->chan)) {
+			mhi_dev->id = id;
+			return 1;
+		}
+
 	return 0;
 };
 
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 9c58938371e2..efcbdc51464f 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -108,8 +108,8 @@ struct mhi_ep_cntrl {
  * @mhi_cntrl: Controller the device belongs to
  * @id: Pointer to MHI Endpoint device ID struct
  * @name: Name of the associated MHI Endpoint device
- * @ul_chan: UL channel for the device
- * @dl_chan: DL channel for the device
+ * @ul_chan: UL (from host to endpoint) channel for the device
+ * @dl_chan: DL (from endpoint to host) channel for the device
  * @dev_type: MHI device type
  */
 struct mhi_ep_device {
@@ -122,7 +122,60 @@ struct mhi_ep_device {
 	enum mhi_device_type dev_type;
 };
 
+/**
+ * struct mhi_ep_driver - Structure representing a MHI Endpoint client driver
+ * @id_table: Pointer to MHI Endpoint device ID table
+ * @driver: Device driver model driver
+ * @probe: CB function for client driver probe function
+ * @remove: CB function for client driver remove function
+ * @ul_xfer_cb: CB function for UL (from host to endpoint) data transfer
+ * @dl_xfer_cb: CB function for DL (from endpoint to host) data transfer
+ */
+struct mhi_ep_driver {
+	const struct mhi_device_id *id_table;
+	struct device_driver driver;
+	int (*probe)(struct mhi_ep_device *mhi_ep,
+		     const struct mhi_device_id *id);
+	void (*remove)(struct mhi_ep_device *mhi_ep);
+	void (*ul_xfer_cb)(struct mhi_ep_device *mhi_dev,
+			   struct mhi_result *result);
+	void (*dl_xfer_cb)(struct mhi_ep_device *mhi_dev,
+			   struct mhi_result *result);
+};
+
 #define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
+#define to_mhi_ep_driver(drv) container_of(drv, struct mhi_ep_driver, driver)
+
+/*
+ * module_mhi_ep_driver() - Helper macro for drivers that don't do
+ * anything special other than using default mhi_ep_driver_register() and
+ * mhi_ep_driver_unregister().  This eliminates a lot of boilerplate.
+ * Each module may only use this macro once.
+ */
+#define module_mhi_ep_driver(mhi_drv) \
+	module_driver(mhi_drv, mhi_ep_driver_register, \
+		      mhi_ep_driver_unregister)
+
+/*
+ * Macro to avoid include chaining to get THIS_MODULE
+ */
+#define mhi_ep_driver_register(mhi_drv) \
+	__mhi_ep_driver_register(mhi_drv, THIS_MODULE)
+
+/**
+ * __mhi_ep_driver_register - Register a driver with MHI Endpoint bus
+ * @mhi_drv: Driver to be associated with the device
+ * @owner: The module owner
+ *
+ * Return: 0 if driver registrations succeeds, a negative error code otherwise.
+ */
+int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner);
+
+/**
+ * mhi_ep_driver_unregister - Unregister a driver from MHI Endpoint bus
+ * @mhi_drv: Driver associated with the device
+ */
+void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv);
 
 /**
  * mhi_ep_register_controller - Register MHI Endpoint controller
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 12/27] bus: mhi: ep: Add support for creating and destroying MHI EP devices
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (10 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 11/27] bus: mhi: ep: Add support for registering MHI endpoint client drivers Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:10   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 13/27] bus: mhi: ep: Add support for managing MMIO registers Manivannan Sadhasivam
                   ` (16 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

This commit adds support for creating and destroying MHI endpoint devices.
The MHI endpoint devices bind to the MHI endpoint channels and are used
to transfer data between the MHI host and the endpoint device.

There is a single MHI EP device for each channel pair. The devices will be
created when the corresponding channels have been started by the host and
will be destroyed during MHI EP power down and reset.
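
In other words, for a given UL (even numbered) channel index the DL channel
is simply the next entry in the channel array, as done by
mhi_ep_create_device() below (sketch):

    /* ch_id is the UL (even numbered) channel of the pair */
    struct mhi_ep_chan *ul_chan = &mhi_cntrl->mhi_chan[ch_id];
    struct mhi_ep_chan *dl_chan = &mhi_cntrl->mhi_chan[ch_id + 1];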

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 83 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 83 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 2bdcf1657479..3afae0bfd83c 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -68,6 +68,89 @@ static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl,
 	return mhi_dev;
 }
 
+/*
+ * MHI channels are always defined in pairs with UL as the even numbered
+ * channel and DL as odd numbered one. This function gets UL channel (primary)
+ * as the ch_id and always looks after the next entry in channel list for
+ * the corresponding DL channel (secondary).
+ */
+static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id)
+{
+	struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
+	struct device *dev = mhi_cntrl->cntrl_dev;
+	struct mhi_ep_device *mhi_dev;
+	int ret;
+
+	/* Check if the channel name is same for both UL and DL */
+	if (strcmp(mhi_chan->name, mhi_chan[1].name)) {
+		dev_err(dev, "UL and DL channel names are not same: (%s) != (%s)\n",
+			mhi_chan->name, mhi_chan[1].name);
+		return -EINVAL;
+	}
+
+	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_XFER);
+	if (IS_ERR(mhi_dev))
+		return PTR_ERR(mhi_dev);
+
+	/* Configure primary channel */
+	mhi_dev->ul_chan = mhi_chan;
+	get_device(&mhi_dev->dev);
+	mhi_chan->mhi_dev = mhi_dev;
+
+	/* Configure secondary channel as well */
+	mhi_chan++;
+	mhi_dev->dl_chan = mhi_chan;
+	get_device(&mhi_dev->dev);
+	mhi_chan->mhi_dev = mhi_dev;
+
+	/* Channel name is same for both UL and DL */
+	mhi_dev->name = mhi_chan->name;
+	dev_set_name(&mhi_dev->dev, "%s_%s",
+		     dev_name(&mhi_cntrl->mhi_dev->dev),
+		     mhi_dev->name);
+
+	ret = device_add(&mhi_dev->dev);
+	if (ret)
+		put_device(&mhi_dev->dev);
+
+	return ret;
+}
+
+static int mhi_ep_destroy_device(struct device *dev, void *data)
+{
+	struct mhi_ep_device *mhi_dev;
+	struct mhi_ep_cntrl *mhi_cntrl;
+	struct mhi_ep_chan *ul_chan, *dl_chan;
+
+	if (dev->bus != &mhi_ep_bus_type)
+		return 0;
+
+	mhi_dev = to_mhi_ep_device(dev);
+	mhi_cntrl = mhi_dev->mhi_cntrl;
+
+	/* Only destroy devices created for channels */
+	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+		return 0;
+
+	ul_chan = mhi_dev->ul_chan;
+	dl_chan = mhi_dev->dl_chan;
+
+	if (ul_chan)
+		put_device(&ul_chan->mhi_dev->dev);
+
+	if (dl_chan)
+		put_device(&dl_chan->mhi_dev->dev);
+
+	dev_dbg(&mhi_cntrl->mhi_dev->dev, "Destroying device for chan:%s\n",
+		 mhi_dev->name);
+
+	/* Notify the client and remove the device from MHI bus */
+	device_del(dev);
+	put_device(dev);
+
+	return 0;
+}
+
 static int mhi_ep_chan_init(struct mhi_ep_cntrl *mhi_cntrl,
 			    const struct mhi_ep_cntrl_config *config)
 {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 13/27] bus: mhi: ep: Add support for managing MMIO registers
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (11 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 12/27] bus: mhi: ep: Add support for creating and destroying MHI EP devices Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:23   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 14/27] bus: mhi: ep: Add support for ring management Manivannan Sadhasivam
                   ` (15 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for managing the Memory Mapped Input Output (MMIO) registers
of the MHI bus. All MHI operations are carried out using the MMIO registers
by both the host and the endpoint device.

The MMIO registers reside inside the endpoint device memory (at a fixed
location that depends on the platform) and their base address is passed by
the MHI EP controller driver during its registration.
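
The masked accessors added here take care of shifting values into and out
of a register field. For instance, the MHI state can be read either with
the dedicated helper or with the generic masked read (a sketch based on
the functions below):

    enum mhi_state state;
    bool mhi_reset;

    /* Dedicated helper: reads the MHISTATE and RESET fields of EP_MHICTRL */
    mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);

    /* Equivalent field read using the generic masked accessor */
    state = mhi_ep_mmio_masked_read(mhi_cntrl, EP_MHICTRL, MHICTRL_MHISTATE_MASK);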

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/Makefile   |   2 +-
 drivers/bus/mhi/ep/internal.h |  26 ++++
 drivers/bus/mhi/ep/main.c     |   6 +-
 drivers/bus/mhi/ep/mmio.c     | 272 ++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h        |  18 +++
 5 files changed, 322 insertions(+), 2 deletions(-)
 create mode 100644 drivers/bus/mhi/ep/mmio.c

diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
index 64e29252b608..a1555ae287ad 100644
--- a/drivers/bus/mhi/ep/Makefile
+++ b/drivers/bus/mhi/ep/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
-mhi_ep-y := main.o
+mhi_ep-y := main.o mmio.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 58ec5fdc503f..139e939fcf57 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -151,4 +151,30 @@ struct mhi_ep_chan {
 	bool skip_td;
 };
 
+/* MMIO related functions */
+u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
+void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
+void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask, u32 val);
+u32 mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset, u32 mask);
+void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_enable_chdb(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id);
+void mhi_ep_mmio_disable_chdb(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id);
+void mhi_ep_mmio_enable_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
+bool mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_mask_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_chc_base(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_erc_base(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_crc_base(struct mhi_ep_cntrl *mhi_cntrl);
+u64 mhi_ep_mmio_get_db(struct mhi_ep_ring *ring);
+void mhi_ep_mmio_set_env(struct mhi_ep_cntrl *mhi_cntrl, u32 value);
+void mhi_ep_mmio_clear_reset(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_reset(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
+			       bool *mhi_reset);
+void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
+
 #endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 3afae0bfd83c..d76387c4d5fa 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -214,7 +214,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	struct mhi_ep_device *mhi_dev;
 	int ret;
 
-	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
+	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio)
 		return -EINVAL;
 
 	ret = mhi_ep_chan_init(mhi_cntrl, config);
@@ -227,6 +227,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 		goto err_free_ch;
 	}
 
+	/* Set MHI version and AMSS EE before enumeration */
+	mhi_ep_mmio_write(mhi_cntrl, EP_MHIVER, config->mhi_version);
+	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EE_AMSS);
+
 	/* Set controller index */
 	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
 	if (mhi_cntrl->index < 0) {
diff --git a/drivers/bus/mhi/ep/mmio.c b/drivers/bus/mhi/ep/mmio.c
new file mode 100644
index 000000000000..311c5d94c4d2
--- /dev/null
+++ b/drivers/bus/mhi/ep/mmio.c
@@ -0,0 +1,272 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/io.h>
+#include <linux/mhi_ep.h>
+
+#include "internal.h"
+
+u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset)
+{
+	return readl(mhi_cntrl->mmio + offset);
+}
+
+void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val)
+{
+	writel(val, mhi_cntrl->mmio + offset);
+}
+
+void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask, u32 val)
+{
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, offset);
+	regval &= ~mask;
+	regval |= (val << __ffs(mask)) & mask;
+	mhi_ep_mmio_write(mhi_cntrl, offset, regval);
+}
+
+u32 mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset, u32 mask)
+{
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(dev, offset);
+	regval &= mask;
+	regval >>= __ffs(mask);
+
+	return regval;
+}
+
+void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
+				bool *mhi_reset)
+{
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, EP_MHICTRL);
+	*state = FIELD_GET(MHICTRL_MHISTATE_MASK, regval);
+	*mhi_reset = !!FIELD_GET(MHICTRL_RESET_MASK, regval);
+}
+
+static void mhi_ep_mmio_set_chdb(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id, bool enable)
+{
+	u32 chid_mask, chid_shift, chdb_idx, val;
+
+	chid_shift = ch_id % 32;
+	chid_mask = BIT(chid_shift);
+	chdb_idx = ch_id / 32;
+
+	val = enable ? 1 : 0;
+
+	mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CHDB_INT_MASK_n(chdb_idx), chid_mask, val);
+
+	/* Update the local copy of the channel mask */
+	mhi_cntrl->chdb[chdb_idx].mask &= ~chid_mask;
+	mhi_cntrl->chdb[chdb_idx].mask |= val << chid_shift;
+}
+
+void mhi_ep_mmio_enable_chdb(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id)
+{
+	mhi_ep_mmio_set_chdb(mhi_cntrl, ch_id, true);
+}
+
+void mhi_ep_mmio_disable_chdb(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id)
+{
+	mhi_ep_mmio_set_chdb(mhi_cntrl, ch_id, false);
+}
+
+static void mhi_ep_mmio_set_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl, bool enable)
+{
+	u32 val, i;
+
+	val = enable ? MHI_CHDB_INT_MASK_n_EN_ALL : 0;
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
+		mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_MASK_n(i), val);
+		mhi_cntrl->chdb[i].mask = val;
+	}
+}
+
+void mhi_ep_mmio_enable_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_set_chdb_interrupts(mhi_cntrl, true);
+}
+
+static void mhi_ep_mmio_mask_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_set_chdb_interrupts(mhi_cntrl, false);
+}
+
+bool mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	bool chdb = 0;
+	u32 i;
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
+		mhi_cntrl->chdb[i].status = mhi_ep_mmio_read(mhi_cntrl, MHI_CHDB_INT_STATUS_n(i));
+		chdb |= !!mhi_cntrl->chdb[i].status;
+	}
+
+	/* Return whether a channel doorbell interrupt occurred or not */
+	return chdb;
+}
+
+static void mhi_ep_mmio_set_erdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl, bool enable)
+{
+	u32 val, i;
+
+	val = enable ? MHI_ERDB_INT_MASK_n_EN_ALL : 0;
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++)
+		mhi_ep_mmio_write(mhi_cntrl, MHI_ERDB_INT_MASK_n(i), val);
+}
+
+static void mhi_ep_mmio_mask_erdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_set_erdb_interrupts(mhi_cntrl, false);
+}
+
+void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK,
+				  MHI_CTRL_MHICTRL_MASK, 1);
+}
+
+void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK,
+				  MHI_CTRL_MHICTRL_MASK, 0);
+}
+
+void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK,
+				  MHI_CTRL_CRDB_MASK, 1);
+}
+
+void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK,
+				  MHI_CTRL_CRDB_MASK, 0);
+}
+
+void mhi_ep_mmio_mask_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_disable_ctrl_interrupt(mhi_cntrl);
+	mhi_ep_mmio_disable_cmdb_interrupt(mhi_cntrl);
+	mhi_ep_mmio_mask_chdb_interrupts(mhi_cntrl);
+	mhi_ep_mmio_mask_erdb_interrupts(mhi_cntrl);
+}
+
+static void mhi_ep_mmio_clear_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 i;
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++)
+		mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_CLEAR_n(i),
+				   MHI_CHDB_INT_CLEAR_n_CLEAR_ALL);
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++)
+		mhi_ep_mmio_write(mhi_cntrl, MHI_ERDB_INT_CLEAR_n(i),
+				   MHI_ERDB_INT_CLEAR_n_CLEAR_ALL);
+
+	mhi_ep_mmio_write(mhi_cntrl, MHI_CTRL_INT_CLEAR,
+			   MHI_CTRL_INT_MMIO_WR_CLEAR |
+			   MHI_CTRL_INT_CRDB_CLEAR |
+			   MHI_CTRL_INT_CRDB_MHICTRL_CLEAR);
+}
+
+void mhi_ep_mmio_get_chc_base(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, EP_CCABAP_HIGHER);
+	mhi_cntrl->ch_ctx_host_pa = regval;
+	mhi_cntrl->ch_ctx_host_pa <<= 32;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, EP_CCABAP_LOWER);
+	mhi_cntrl->ch_ctx_host_pa |= regval;
+}
+
+void mhi_ep_mmio_get_erc_base(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, EP_ECABAP_HIGHER);
+	mhi_cntrl->ev_ctx_host_pa = regval;
+	mhi_cntrl->ev_ctx_host_pa <<= 32;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, EP_ECABAP_LOWER);
+	mhi_cntrl->ev_ctx_host_pa |= regval;
+}
+
+void mhi_ep_mmio_get_crc_base(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, EP_CRCBAP_HIGHER);
+	mhi_cntrl->cmd_ctx_host_pa = regval;
+	mhi_cntrl->cmd_ctx_host_pa <<= 32;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, EP_CRCBAP_LOWER);
+	mhi_cntrl->cmd_ctx_host_pa |= regval;
+}
+
+u64 mhi_ep_mmio_get_db(struct mhi_ep_ring *ring)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+	u64 db_offset;
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_h);
+	db_offset = regval;
+	db_offset <<= 32;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_l);
+	db_offset |= regval;
+
+	return db_offset;
+}
+
+void mhi_ep_mmio_set_env(struct mhi_ep_cntrl *mhi_cntrl, u32 value)
+{
+	mhi_ep_mmio_write(mhi_cntrl, EP_BHI_EXECENV, value);
+}
+
+void mhi_ep_mmio_clear_reset(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_masked_write(mhi_cntrl, EP_MHICTRL, MHICTRL_RESET_MASK, 0);
+}
+
+void mhi_ep_mmio_reset(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_write(mhi_cntrl, EP_MHICTRL, 0);
+	mhi_ep_mmio_write(mhi_cntrl, EP_MHISTATUS, 0);
+	mhi_ep_mmio_clear_interrupts(mhi_cntrl);
+}
+
+void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 regval;
+
+	mhi_cntrl->chdb_offset = mhi_ep_mmio_read(mhi_cntrl, EP_CHDBOFF);
+	mhi_cntrl->erdb_offset = mhi_ep_mmio_read(mhi_cntrl, EP_ERDBOFF);
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, EP_MHICFG);
+	mhi_cntrl->event_rings = FIELD_GET(MHICFG_NER_MASK, regval);
+	mhi_cntrl->hw_event_rings = FIELD_GET(MHICFG_NHWER_MASK, regval);
+
+	mhi_ep_mmio_reset(mhi_cntrl);
+}
+
+void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, EP_MHICFG);
+	mhi_cntrl->event_rings = FIELD_GET(MHICFG_NER_MASK, regval);
+	mhi_cntrl->hw_event_rings = FIELD_GET(MHICFG_NHWER_MASK, regval);
+}
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index efcbdc51464f..8e1de062f820 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -59,6 +59,10 @@ struct mhi_ep_db_info {
  * @mhi_event: Points to the event ring configurations table
  * @mhi_cmd: Points to the command ring configurations table
  * @sm: MHI Endpoint state machine
+ * @ch_ctx_host_pa: Physical address of host channel context data structure
+ * @ev_ctx_host_pa: Physical address of host event context data structure
+ * @cmd_ctx_host_pa: Physical address of host command context data structure
+ * @chdb: Array of channel doorbell interrupt info
  * @raise_irq: CB function for raising IRQ to the host
  * @alloc_addr: CB function for allocating memory in endpoint for storing host context
  * @map_addr: CB function for mapping host context to endpoint
@@ -69,6 +73,10 @@ struct mhi_ep_db_info {
  * @mhi_state: MHI Endpoint state
  * @max_chan: Maximum channels supported by the endpoint controller
  * @mru: MRU (Maximum Receive Unit) value of the endpoint controller
+ * @event_rings: Number of event rings supported by the endpoint controller
+ * @hw_event_rings: Number of hardware event rings supported by the endpoint controller
+ * @chdb_offset: Channel doorbell offset set by the host
+ * @erdb_offset: Event ring doorbell offset set by the host
  * @index: MHI Endpoint controller index
  */
 struct mhi_ep_cntrl {
@@ -81,6 +89,12 @@ struct mhi_ep_cntrl {
 	struct mhi_ep_cmd *mhi_cmd;
 	struct mhi_ep_sm *sm;
 
+	u64 ch_ctx_host_pa;
+	u64 ev_ctx_host_pa;
+	u64 cmd_ctx_host_pa;
+
+	struct mhi_ep_db_info chdb[4];
+
 	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
 	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,
 		       size_t size);
@@ -98,6 +112,10 @@ struct mhi_ep_cntrl {
 
 	u32 max_chan;
 	u32 mru;
+	u32 event_rings;
+	u32 hw_event_rings;
+	u32 chdb_offset;
+	u32 erdb_offset;
 	u32 index;
 };
 
-- 
2.25.1



* [PATCH v4 14/27] bus: mhi: ep: Add support for ring management
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (12 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 13/27] bus: mhi: ep: Add support for managing MMIO registers Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:27   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 15/27] bus: mhi: ep: Add support for sending events to the host Manivannan Sadhasivam
                   ` (14 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for managing the MHI ring. The MHI ring is a circular queue
of data structures used to pass information between the host and the
endpoint.

MHI supports 3 types of rings:

1. Transfer ring
2. Event ring
3. Command ring

All rings reside in host memory and the MHI EP device maps them to the
device memory using hardware blocks like the PCIe iATU. The mapping is
handled by the MHI EP controller driver itself.
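
For illustration only (not part of this patch), a small user-space sketch
of the ring index arithmetic used below, assuming 16-byte ring elements
and made-up addresses:

#include <stdint.h>
#include <stdio.h>

#define RING_EL_SIZE	16	/* assumed size of one ring element */

/* Convert a host ring pointer into an element offset from the ring base */
static size_t ring_addr2offset(uint64_t rbase, uint64_t ptr)
{
	return (ptr - rbase) / RING_EL_SIZE;
}

/* Advance the read offset, wrapping around the ring size */
static size_t ring_inc_index(size_t rd_offset, size_t ring_size)
{
	return (rd_offset + 1) % ring_size;
}

int main(void)
{
	uint64_t rbase = 0x1000;
	size_t rd = ring_addr2offset(rbase, rbase + 5 * RING_EL_SIZE);

	printf("rd_offset %zu -> %zu\n", rd, ring_inc_index(rd, 128));
	return 0;
}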

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/Makefile   |   2 +-
 drivers/bus/mhi/ep/internal.h |  18 ++++
 drivers/bus/mhi/ep/ring.c     | 197 ++++++++++++++++++++++++++++++++++
 3 files changed, 216 insertions(+), 1 deletion(-)
 create mode 100644 drivers/bus/mhi/ep/ring.c

diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
index a1555ae287ad..7ba0e04801eb 100644
--- a/drivers/bus/mhi/ep/Makefile
+++ b/drivers/bus/mhi/ep/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
-mhi_ep-y := main.o mmio.o
+mhi_ep-y := main.o mmio.o ring.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 139e939fcf57..b3b8770f2f4e 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -114,6 +114,11 @@ union mhi_ep_ring_ctx {
 	struct mhi_generic_ctx generic;
 };
 
+struct mhi_ep_ring_item {
+	struct list_head node;
+	struct mhi_ep_ring *ring;
+};
+
 struct mhi_ep_ring {
 	struct mhi_ep_cntrl *mhi_cntrl;
 	union mhi_ep_ring_ctx *ring_ctx;
@@ -126,6 +131,9 @@ struct mhi_ep_ring {
 	u32 db_offset_h;
 	u32 db_offset_l;
 	u32 ch_id;
+	u32 er_index;
+	u32 irq_vector;
+	bool started;
 };
 
 struct mhi_ep_cmd {
@@ -151,6 +159,16 @@ struct mhi_ep_chan {
 	bool skip_td;
 };
 
+/* MHI Ring related functions */
+void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id);
+void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring);
+int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
+		      union mhi_ep_ring_ctx *ctx);
+size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr);
+int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ring_element *element);
+void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring);
+int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring);
+
 /* MMIO related functions */
 u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
 void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
diff --git a/drivers/bus/mhi/ep/ring.c b/drivers/bus/mhi/ep/ring.c
new file mode 100644
index 000000000000..1029eed2cc28
--- /dev/null
+++ b/drivers/bus/mhi/ep/ring.c
@@ -0,0 +1,197 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ */
+
+#include <linux/mhi_ep.h>
+#include "internal.h"
+
+size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr)
+{
+	return (ptr - ring->rbase) / sizeof(struct mhi_ring_element);
+}
+
+static u32 mhi_ep_ring_num_elems(struct mhi_ep_ring *ring)
+{
+	return le64_to_cpu(ring->ring_ctx->generic.rlen) / sizeof(struct mhi_ring_element);
+}
+
+void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring)
+{
+	ring->rd_offset = (ring->rd_offset + 1) % ring->ring_size;
+}
+
+static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	size_t start, copy_size;
+	int ret;
+
+	/* Don't proceed in the case of event ring. This happens during mhi_ep_ring_start(). */
+	if (ring->type == RING_TYPE_ER)
+		return 0;
+
+	/* No need to cache the ring if write pointer is unmodified */
+	if (ring->wr_offset == end)
+		return 0;
+
+	start = ring->wr_offset;
+	if (start < end) {
+		copy_size = (end - start) * sizeof(struct mhi_ring_element);
+		ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase +
+						(start * sizeof(struct mhi_ring_element)),
+						&ring->ring_cache[start], copy_size);
+		if (ret < 0)
+			return ret;
+	} else {
+		copy_size = (ring->ring_size - start) * sizeof(struct mhi_ring_element);
+		ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase +
+						(start * sizeof(struct mhi_ring_element)),
+						&ring->ring_cache[start], copy_size);
+		if (ret < 0)
+			return ret;
+
+		if (end) {
+			ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase,
+							&ring->ring_cache[0],
+							end * sizeof(struct mhi_ring_element));
+			if (ret < 0)
+				return ret;
+		}
+	}
+
+	dev_dbg(dev, "Cached ring: start %zu end %zu size %zu\n", start, end, copy_size);
+
+	return 0;
+}
+
+static int mhi_ep_cache_ring(struct mhi_ep_ring *ring, u64 wr_ptr)
+{
+	size_t wr_offset;
+	int ret;
+
+	wr_offset = mhi_ep_ring_addr2offset(ring, wr_ptr);
+
+	/* Cache the host ring till write offset */
+	ret = __mhi_ep_cache_ring(ring, wr_offset);
+	if (ret)
+		return ret;
+
+	ring->wr_offset = wr_offset;
+
+	return 0;
+}
+
+int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring)
+{
+	u64 wr_ptr;
+
+	wr_ptr = mhi_ep_mmio_get_db(ring);
+
+	return mhi_ep_cache_ring(ring, wr_ptr);
+}
+
+/* TODO: Support for adding multiple ring elements to the ring */
+int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ring_element *el)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	size_t old_offset = 0;
+	u32 num_free_elem;
+	int ret;
+
+	ret = mhi_ep_update_wr_offset(ring);
+	if (ret) {
+		dev_err(dev, "Error updating write pointer\n");
+		return ret;
+	}
+
+	if (ring->rd_offset < ring->wr_offset)
+		num_free_elem = (ring->wr_offset - ring->rd_offset) - 1;
+	else
+		num_free_elem = ((ring->ring_size - ring->rd_offset) + ring->wr_offset) - 1;
+
+	/* Check if there is space in ring for adding at least an element */
+	if (!num_free_elem) {
+		dev_err(dev, "No space left in the ring\n");
+		return -ENOSPC;
+	}
+
+	old_offset = ring->rd_offset;
+	mhi_ep_ring_inc_index(ring);
+
+	dev_dbg(dev, "Adding an element to ring at offset (%zu)\n", ring->rd_offset);
+
+	/* Update rp in ring context */
+	ring->ring_ctx->generic.rp = cpu_to_le64((ring->rd_offset * sizeof(*el)) + ring->rbase);
+
+	ret = mhi_cntrl->write_to_host(mhi_cntrl, el, ring->rbase + (old_offset * sizeof(*el)),
+				       sizeof(*el));
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id)
+{
+	ring->type = type;
+	if (ring->type == RING_TYPE_CMD) {
+		ring->db_offset_h = EP_CRDB_HIGHER;
+		ring->db_offset_l = EP_CRDB_LOWER;
+	} else if (ring->type == RING_TYPE_CH) {
+		ring->db_offset_h = CHDB_HIGHER_n(id);
+		ring->db_offset_l = CHDB_LOWER_n(id);
+		ring->ch_id = id;
+	} else {
+		ring->db_offset_h = ERDB_HIGHER_n(id);
+		ring->db_offset_l = ERDB_LOWER_n(id);
+	}
+}
+
+int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
+			union mhi_ep_ring_ctx *ctx)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	int ret;
+
+	ring->mhi_cntrl = mhi_cntrl;
+	ring->ring_ctx = ctx;
+	ring->ring_size = mhi_ep_ring_num_elems(ring);
+	ring->rbase = le64_to_cpu(ring->ring_ctx->generic.rbase);
+
+	if (ring->type == RING_TYPE_CH)
+		ring->er_index = le32_to_cpu(ring->ring_ctx->ch.erindex);
+
+	if (ring->type == RING_TYPE_ER)
+		ring->irq_vector = le32_to_cpu(ring->ring_ctx->ev.msivec);
+
+	/* During ring init, both rp and wp are equal */
+	ring->rd_offset = mhi_ep_ring_addr2offset(ring, le64_to_cpu(ring->ring_ctx->generic.rp));
+	ring->wr_offset = mhi_ep_ring_addr2offset(ring, le64_to_cpu(ring->ring_ctx->generic.rp));
+
+	/* Allocate ring cache memory for holding the copy of host ring */
+	ring->ring_cache = kcalloc(ring->ring_size, sizeof(struct mhi_ring_element), GFP_KERNEL);
+	if (!ring->ring_cache)
+		return -ENOMEM;
+
+	ret = mhi_ep_cache_ring(ring, le64_to_cpu(ring->ring_ctx->generic.wp));
+	if (ret) {
+		dev_err(dev, "Failed to cache ring\n");
+		kfree(ring->ring_cache);
+		return ret;
+	}
+
+	ring->started = true;
+
+	return 0;
+}
+
+void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring)
+{
+	ring->started = false;
+	kfree(ring->ring_cache);
+	ring->ring_cache = NULL;
+}
-- 
2.25.1



* [PATCH v4 15/27] bus: mhi: ep: Add support for sending events to the host
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (13 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 14/27] bus: mhi: ep: Add support for ring management Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:37   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 16/27] bus: mhi: ep: Add support for managing MHI state machine Manivannan Sadhasivam
                   ` (13 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for sending events to the host over the MHI bus from the
endpoint. The following events are supported:

1. Transfer completion event
2. Command completion event
3. State change event
4. Execution Environment (EE) change event

An event is sent whenever an operation has been completed in the MHI EP
device. The event is sent using the MHI event ring, and the host is
additionally notified using an IRQ if required.
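
As a quick illustration (not part of this patch), a user-space sketch of
how a transfer completion event element is packed, mirroring the
FIELD_PREP() based macros added below; the field values are placeholders
and the cpu_to_le32()/cpu_to_le64() conversions are omitted:

#include <stdint.h>
#include <stdio.h>

struct ev_element {		/* stand-in for struct mhi_ring_element */
	uint64_t ptr;
	uint32_t dword[2];
};

int main(void)
{
	struct ev_element ev = { 0 };
	uint32_t code = 0x1;	/* placeholder completion code */
	uint32_t type = 0x22;	/* placeholder packet type */
	uint32_t chid = 2;	/* placeholder channel id */
	uint32_t len = 512;	/* bytes transferred */

	ev.ptr = 0x12340000ULL;				/* host address of the TRE */
	ev.dword[0] = (code << 24) | (len & 0xffff);	/* code: bits 31:24, len: 15:0 */
	ev.dword[1] = (chid << 24) | ((type & 0xff) << 16);	/* chid: 31:24, type: 23:16 */

	printf("dword0=0x%08x dword1=0x%08x\n", ev.dword[0], ev.dword[1]);
	return 0;
}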

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/common.h      | 22 +++++++++
 drivers/bus/mhi/ep/internal.h |  4 ++
 drivers/bus/mhi/ep/main.c     | 90 +++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h        |  8 ++++
 4 files changed, 124 insertions(+)

diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index ec75ba1e6686..5b30e2d0832e 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -165,6 +165,22 @@
 #define MHI_TRE_GET_EV_LINKSPEED(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1)))
 #define MHI_TRE_GET_EV_LINKWIDTH(tre)	FIELD_GET(GENMASK(7, 0), (MHI_TRE_GET_DWORD(tre, 0)))
 
+/* State change event */
+#define MHI_SC_EV_PTR			0
+#define MHI_SC_EV_DWORD0(state)		cpu_to_le32(FIELD_PREP(GENMASK(31, 24), state))
+#define MHI_SC_EV_DWORD1(type)		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), type))
+
+/* EE event */
+#define MHI_EE_EV_PTR			0
+#define MHI_EE_EV_DWORD0(ee)		cpu_to_le32(FIELD_PREP(GENMASK(31, 24), ee))
+#define MHI_EE_EV_DWORD1(type)		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), type))
+
+/* Command Completion event */
+#define MHI_CC_EV_PTR(ptr)		cpu_to_le64(ptr)
+#define MHI_CC_EV_DWORD0(code)		cpu_to_le32(FIELD_PREP(GENMASK(31, 24), code))
+#define MHI_CC_EV_DWORD1(type)		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), type))
+
 /* Transfer descriptor macros */
 #define MHI_TRE_DATA_PTR(ptr)		cpu_to_le64(ptr)
 #define MHI_TRE_DATA_DWORD0(len)	cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len))
@@ -175,6 +191,12 @@
 								FIELD_PREP(BIT(9), ieot) |  \
 								FIELD_PREP(BIT(8), ieob) |  \
 								FIELD_PREP(BIT(0), chain))
+#define MHI_TRE_DATA_GET_PTR(tre)	le64_to_cpu((tre)->ptr)
+#define MHI_TRE_DATA_GET_LEN(tre)	FIELD_GET(GENMASK(15, 0), MHI_TRE_GET_DWORD(tre, 0))
+#define MHI_TRE_DATA_GET_CHAIN(tre)	FIELD_GET(BIT(0), MHI_TRE_GET_DWORD(tre, 1))
+#define MHI_TRE_DATA_GET_IEOB(tre)	FIELD_GET(BIT(8), MHI_TRE_GET_DWORD(tre, 1))
+#define MHI_TRE_DATA_GET_IEOT(tre)	FIELD_GET(BIT(9), MHI_TRE_GET_DWORD(tre, 1))
+#define MHI_TRE_DATA_GET_BEI(tre)	FIELD_GET(BIT(10), MHI_TRE_GET_DWORD(tre, 1))
 
 /* RSC transfer descriptor macros */
 #define MHI_RSCTRE_DATA_PTR(ptr, len)	cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr)
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index b3b8770f2f4e..8753ae93eda3 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -195,4 +195,8 @@ void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *s
 void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
 void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
 
+/* MHI EP core functions */
+int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
+int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ee_type exec_env);
+
 #endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index d76387c4d5fa..903f9bd3e03d 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -18,6 +18,94 @@
 
 static DEFINE_IDA(mhi_ep_cntrl_ida);
 
+static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
+			     struct mhi_ring_element *el, bool bei)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	union mhi_ep_ring_ctx *ctx;
+	struct mhi_ep_ring *ring;
+	int ret;
+
+	mutex_lock(&mhi_cntrl->event_lock);
+	ring = &mhi_cntrl->mhi_event[ring_idx].ring;
+	ctx = (union mhi_ep_ring_ctx *)&mhi_cntrl->ev_ctx_cache[ring_idx];
+	if (!ring->started) {
+		ret = mhi_ep_ring_start(mhi_cntrl, ring, ctx);
+		if (ret) {
+			dev_err(dev, "Error starting event ring (%u)\n", ring_idx);
+			goto err_unlock;
+		}
+	}
+
+	/* Add element to the event ring */
+	ret = mhi_ep_ring_add_element(ring, el);
+	if (ret) {
+		dev_err(dev, "Error adding element to event ring (%u)\n", ring_idx);
+		goto err_unlock;
+	}
+
+	mutex_unlock(&mhi_cntrl->event_lock);
+
+	/*
+	 * Raise IRQ to host only if the BEI flag is not set in TRE. Host might
+	 * set this flag for interrupt moderation as per MHI protocol.
+	 */
+	if (!bei)
+		mhi_cntrl->raise_irq(mhi_cntrl, ring->irq_vector);
+
+	return 0;
+
+err_unlock:
+	mutex_unlock(&mhi_cntrl->event_lock);
+
+	return ret;
+}
+
+static int mhi_ep_send_completion_event(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
+					struct mhi_ring_element *tre, u32 len, enum mhi_ev_ccs code)
+{
+	struct mhi_ring_element event = {};
+
+	event.ptr = cpu_to_le64(ring->rbase + (ring->rd_offset * (sizeof(*tre))));
+	event.dword[0] = MHI_TRE_EV_DWORD0(code, len);
+	event.dword[1] = MHI_TRE_EV_DWORD1(ring->ch_id, MHI_PKT_TYPE_TX_EVENT);
+
+	return mhi_ep_send_event(mhi_cntrl, ring->er_index, &event, !!MHI_TRE_DATA_GET_BEI(tre));
+}
+
+int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state)
+{
+	struct mhi_ring_element event = {};
+
+	event.dword[0] = MHI_SC_EV_DWORD0(state);
+	event.dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_STATE_CHANGE_EVENT);
+
+	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
+}
+
+int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ee_type exec_env)
+{
+	struct mhi_ring_element event = {};
+
+	event.dword[0] = MHI_EE_EV_DWORD0(exec_env);
+	event.dword[1] = MHI_EE_EV_DWORD1(MHI_PKT_TYPE_EE_EVENT);
+
+	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
+}
+
+static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ev_ccs code)
+{
+	struct mhi_ep_ring *ring = &mhi_cntrl->mhi_cmd->ring;
+	struct mhi_ring_element event = {};
+
+	event.ptr = cpu_to_le64(ring->rbase + (ring->rd_offset *
+					       (sizeof(struct mhi_ring_element))));
+	event.dword[0] = MHI_CC_EV_DWORD0(code);
+	event.dword[1] = MHI_CC_EV_DWORD1(MHI_PKT_TYPE_CMD_COMPLETION_EVENT);
+
+	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
+}
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -227,6 +315,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 		goto err_free_ch;
 	}
 
+	mutex_init(&mhi_cntrl->event_lock);
+
 	/* Set MHI version and AMSS EE before enumeration */
 	mhi_ep_mmio_write(mhi_cntrl, EP_MHIVER, config->mhi_version);
 	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EE_AMSS);
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 8e1de062f820..44a4669382ad 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -59,10 +59,14 @@ struct mhi_ep_db_info {
  * @mhi_event: Points to the event ring configurations table
  * @mhi_cmd: Points to the command ring configurations table
  * @sm: MHI Endpoint state machine
+ * @ch_ctx_cache: Cache of host channel context data structure
+ * @ev_ctx_cache: Cache of host event context data structure
+ * @cmd_ctx_cache: Cache of host command context data structure
  * @ch_ctx_host_pa: Physical address of host channel context data structure
  * @ev_ctx_host_pa: Physical address of host event context data structure
  * @cmd_ctx_host_pa: Physical address of host command context data structure
  * @chdb: Array of channel doorbell interrupt info
+ * @event_lock: Lock for protecting event rings
  * @raise_irq: CB function for raising IRQ to the host
  * @alloc_addr: CB function for allocating memory in endpoint for storing host context
  * @map_addr: CB function for mapping host context to endpoint
@@ -89,11 +93,15 @@ struct mhi_ep_cntrl {
 	struct mhi_ep_cmd *mhi_cmd;
 	struct mhi_ep_sm *sm;
 
+	struct mhi_chan_ctxt *ch_ctx_cache;
+	struct mhi_event_ctxt *ev_ctx_cache;
+	struct mhi_cmd_ctxt *cmd_ctx_cache;
 	u64 ch_ctx_host_pa;
 	u64 ev_ctx_host_pa;
 	u64 cmd_ctx_host_pa;
 
 	struct mhi_ep_db_info chdb[4];
+	struct mutex event_lock;
 
 	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
 	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,
-- 
2.25.1



* [PATCH v4 16/27] bus: mhi: ep: Add support for managing MHI state machine
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (14 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 15/27] bus: mhi: ep: Add support for sending events to the host Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:41   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 17/27] bus: mhi: ep: Add support for processing MHI endpoint interrupts Manivannan Sadhasivam
                   ` (12 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for managing the MHI state machine by controlling the state
transitions. Only transitions to the following MHI states are supported:

1. Ready state
2. M0 state
3. M3 state
4. SYS_ERR state
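
For illustration only (not part of this patch), a small user-space sketch
of the allowed-transition check implemented in sm.c below; the state names
are simplified stand-ins for the enum mhi_state values:

#include <stdbool.h>
#include <stdio.h>

enum state { ST_RESET, ST_READY, ST_M0, ST_M3, ST_SYS_ERR };

/* SYS_ERR is reachable from any state, READY only from RESET,
 * M0 from READY or M3, and M3 only from M0 */
static bool transition_allowed(enum state cur, enum state next)
{
	switch (next) {
	case ST_SYS_ERR:
		return true;
	case ST_READY:
		return cur == ST_RESET;
	case ST_M0:
		return cur == ST_READY || cur == ST_M3;
	case ST_M3:
		return cur == ST_M0;
	default:
		return false;
	}
}

int main(void)
{
	printf("READY->M0: %d, M3->READY: %d\n",
	       transition_allowed(ST_READY, ST_M0),
	       transition_allowed(ST_M3, ST_READY));
	return 0;
}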

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/Makefile   |   2 +-
 drivers/bus/mhi/ep/internal.h |  11 +++
 drivers/bus/mhi/ep/main.c     |  54 +++++++++++++-
 drivers/bus/mhi/ep/sm.c       | 136 ++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h        |  12 +++
 5 files changed, 213 insertions(+), 2 deletions(-)
 create mode 100644 drivers/bus/mhi/ep/sm.c

diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
index 7ba0e04801eb..aad85f180b70 100644
--- a/drivers/bus/mhi/ep/Makefile
+++ b/drivers/bus/mhi/ep/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
-mhi_ep-y := main.o mmio.o ring.o
+mhi_ep-y := main.o mmio.o ring.o sm.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 8753ae93eda3..536351218685 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -144,6 +144,11 @@ struct mhi_ep_event {
 	struct mhi_ep_ring ring;
 };
 
+struct mhi_ep_state_transition {
+	struct list_head node;
+	enum mhi_state state;
+};
+
 struct mhi_ep_chan {
 	char *name;
 	struct mhi_ep_device *mhi_dev;
@@ -198,5 +203,11 @@ void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
 /* MHI EP core functions */
 int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
 int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ee_type exec_env);
+bool mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state cur_mhi_state,
+			    enum mhi_state mhi_state);
+int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state);
+int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
+int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
+int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
 
 #endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 903f9bd3e03d..7a29543586d0 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -106,6 +106,43 @@ static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_e
 	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
 }
 
+static void mhi_ep_state_worker(struct work_struct *work)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	struct mhi_ep_state_transition *itr, *tmp;
+	unsigned long flags;
+	LIST_HEAD(head);
+	int ret;
+
+	spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
+	list_splice_tail_init(&mhi_cntrl->st_transition_list, &head);
+	spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
+
+	list_for_each_entry_safe(itr, tmp, &head, node) {
+		list_del(&itr->node);
+		dev_dbg(dev, "Handling MHI state transition to %s\n",
+			 mhi_state_str(itr->state));
+
+		switch (itr->state) {
+		case MHI_STATE_M0:
+			ret = mhi_ep_set_m0_state(mhi_cntrl);
+			if (ret)
+				dev_err(dev, "Failed to transition to M0 state\n");
+			break;
+		case MHI_STATE_M3:
+			ret = mhi_ep_set_m3_state(mhi_cntrl);
+			if (ret)
+				dev_err(dev, "Failed to transition to M3 state\n");
+			break;
+		default:
+			dev_err(dev, "Invalid MHI state transition: %d\n", itr->state);
+			break;
+		}
+		kfree(itr);
+	}
+}
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -315,6 +352,17 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 		goto err_free_ch;
 	}
 
+	INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
+
+	mhi_cntrl->wq = alloc_workqueue("mhi_ep_wq", 0, 0);
+	if (!mhi_cntrl->wq) {
+		ret = -ENOMEM;
+		goto err_free_cmd;
+	}
+
+	INIT_LIST_HEAD(&mhi_cntrl->st_transition_list);
+	spin_lock_init(&mhi_cntrl->state_lock);
+	spin_lock_init(&mhi_cntrl->list_lock);
 	mutex_init(&mhi_cntrl->event_lock);
 
 	/* Set MHI version and AMSS EE before enumeration */
@@ -325,7 +373,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
 	if (mhi_cntrl->index < 0) {
 		ret = mhi_cntrl->index;
-		goto err_free_cmd;
+		goto err_destroy_wq;
 	}
 
 	/* Allocate the controller device */
@@ -352,6 +400,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	put_device(&mhi_dev->dev);
 err_ida_free:
 	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+err_destroy_wq:
+	destroy_workqueue(mhi_cntrl->wq);
 err_free_cmd:
 	kfree(mhi_cntrl->mhi_cmd);
 err_free_ch:
@@ -365,6 +415,8 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
 
+	destroy_workqueue(mhi_cntrl->wq);
+
 	kfree(mhi_cntrl->mhi_cmd);
 	kfree(mhi_cntrl->mhi_chan);
 
diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
new file mode 100644
index 000000000000..ad49276ec044
--- /dev/null
+++ b/drivers/bus/mhi/ep/sm.c
@@ -0,0 +1,136 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2022 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ */
+
+#include <linux/errno.h>
+#include <linux/mhi_ep.h>
+#include "internal.h"
+
+bool __must_check mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl,
+					 enum mhi_state cur_mhi_state,
+					 enum mhi_state mhi_state)
+{
+	if (mhi_state == MHI_STATE_SYS_ERR)
+		return true;    /* Allowed in any state */
+
+	if (mhi_state == MHI_STATE_READY)
+		return cur_mhi_state == MHI_STATE_RESET;
+
+	if (mhi_state == MHI_STATE_M0)
+		return (cur_mhi_state == MHI_STATE_M3 || cur_mhi_state == MHI_STATE_READY);
+
+	if (mhi_state == MHI_STATE_M3)
+		return cur_mhi_state == MHI_STATE_M0;
+
+	return false;
+}
+
+int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+
+	if (!mhi_ep_check_mhi_state(mhi_cntrl, mhi_cntrl->mhi_state, mhi_state)) {
+		dev_err(dev, "MHI state change to %s from %s is not allowed!\n",
+			mhi_state_str(mhi_state),
+			mhi_state_str(mhi_cntrl->mhi_state));
+		return -EACCES;
+	}
+
+	/* TODO */
+	if (mhi_state == MHI_STATE_M1 || mhi_state == MHI_STATE_M2) {
+		dev_err(dev, "MHI state (%s) not supported\n", mhi_state_str(mhi_state));
+		return -EOPNOTSUPP;
+	}
+
+	mhi_ep_mmio_masked_write(mhi_cntrl, EP_MHISTATUS, MHISTATUS_MHISTATE_MASK, mhi_state);
+	mhi_cntrl->mhi_state = mhi_state;
+
+	if (mhi_state == MHI_STATE_READY)
+		mhi_ep_mmio_masked_write(mhi_cntrl, EP_MHISTATUS, MHISTATUS_READY_MASK, 1);
+
+	if (mhi_state == MHI_STATE_SYS_ERR)
+		mhi_ep_mmio_masked_write(mhi_cntrl, EP_MHISTATUS, MHISTATUS_SYSERR_MASK, 1);
+
+	return 0;
+}
+
+int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	enum mhi_state old_state;
+	int ret;
+
+	spin_lock_bh(&mhi_cntrl->state_lock);
+	old_state = mhi_cntrl->mhi_state;
+
+	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
+	spin_unlock_bh(&mhi_cntrl->state_lock);
+
+	if (ret)
+		return ret;
+
+	/* Signal host that the device moved to M0 */
+	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M0);
+	if (ret) {
+		dev_err(dev, "Failed sending M0 state change event\n");
+		return ret;
+	}
+
+	if (old_state == MHI_STATE_READY) {
+		/* Send AMSS EE event to host */
+		ret = mhi_ep_send_ee_event(mhi_cntrl, MHI_EE_AMSS);
+		if (ret) {
+			dev_err(dev, "Failed sending AMSS EE event\n");
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	int ret;
+
+	spin_lock_bh(&mhi_cntrl->state_lock);
+	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
+	spin_unlock_bh(&mhi_cntrl->state_lock);
+
+	if (ret)
+		return ret;
+
+	/* Signal host that the device moved to M3 */
+	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);
+	if (ret) {
+		dev_err(dev, "Failed sending M3 state change event\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	enum mhi_state mhi_state;
+	int ret, is_ready;
+
+	spin_lock_bh(&mhi_cntrl->state_lock);
+	/* Ensure that the MHISTATUS is set to RESET by host */
+	mhi_state = mhi_ep_mmio_masked_read(mhi_cntrl, EP_MHISTATUS, MHISTATUS_MHISTATE_MASK);
+	is_ready = mhi_ep_mmio_masked_read(mhi_cntrl, EP_MHISTATUS, MHISTATUS_READY_MASK);
+
+	if (mhi_state != MHI_STATE_RESET || is_ready) {
+		dev_err(dev, "READY state transition failed. MHI host not in RESET state\n");
+		spin_unlock_bh(&mhi_cntrl->state_lock);
+		return -EIO;
+	}
+
+	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_READY);
+	spin_unlock_bh(&mhi_cntrl->state_lock);
+
+	return ret;
+}
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 44a4669382ad..dc27a5de7d3c 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -67,6 +67,11 @@ struct mhi_ep_db_info {
  * @cmd_ctx_host_pa: Physical address of host command context data structure
  * @chdb: Array of channel doorbell interrupt info
  * @event_lock: Lock for protecting event rings
+ * @list_lock: Lock for protecting state transition and channel doorbell lists
+ * @state_lock: Lock for protecting state transitions
+ * @st_transition_list: List of state transitions
+ * @wq: Dedicated workqueue for handling rings and state changes
+ * @state_work: State transition worker
  * @raise_irq: CB function for raising IRQ to the host
  * @alloc_addr: CB function for allocating memory in endpoint for storing host context
  * @map_addr: CB function for mapping host context to endpoint
@@ -102,6 +107,13 @@ struct mhi_ep_cntrl {
 
 	struct mhi_ep_db_info chdb[4];
 	struct mutex event_lock;
+	spinlock_t list_lock;
+	spinlock_t state_lock;
+
+	struct list_head st_transition_list;
+
+	struct workqueue_struct *wq;
+	struct work_struct state_work;
 
 	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
 	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,
-- 
2.25.1



* [PATCH v4 17/27] bus: mhi: ep: Add support for processing MHI endpoint interrupts
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (15 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 16/27] bus: mhi: ep: Add support for managing MHI state machine Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:45   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 18/27] bus: mhi: ep: Add support for powering up the MHI endpoint stack Manivannan Sadhasivam
                   ` (11 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for processing MHI endpoint interrupts such as the control
interrupt, command interrupt and channel interrupts from the host.

These interrupts are generated in the endpoint device whenever the host
writes to the corresponding doorbell registers. The doorbell logic is
handled internally by the hardware.
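
As an illustration only (not part of this patch), a user-space sketch of
how the four 32-bit channel doorbell status registers map to channel
numbers (register i, bit b corresponds to channel i * 32 + b); the status
values below are made up:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t status[4] = { 0x5, 0x0, 0x80000000, 0x0 };
	int i, b;

	for (i = 0; i < 4; i++) {
		for (b = 0; b < 32; b++) {
			if (status[i] & (1u << b))
				printf("doorbell rung for channel %d\n", i * 32 + b);
		}
	}

	return 0;
}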

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 123 +++++++++++++++++++++++++++++++++++++-
 include/linux/mhi_ep.h    |   4 ++
 2 files changed, 125 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 7a29543586d0..ce690b1aeace 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -143,6 +143,112 @@ static void mhi_ep_state_worker(struct work_struct *work)
 	}
 }
 
+static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl, unsigned long ch_int,
+				    u32 ch_idx)
+{
+	struct mhi_ep_ring_item *item;
+	struct mhi_ep_ring *ring;
+	bool work = !!ch_int;
+	LIST_HEAD(head);
+	u32 i;
+
+	/* First add the ring items to a local list */
+	for_each_set_bit(i, &ch_int, 32) {
+		/* Channel index varies for each register: 0, 32, 64, 96 */
+		u32 ch_id = ch_idx + i;
+
+		ring = &mhi_cntrl->mhi_chan[ch_id].ring;
+		item = kzalloc(sizeof(*item), GFP_ATOMIC);
+		if (!item)
+			return;
+
+		item->ring = ring;
+		list_add_tail(&item->node, &head);
+	}
+
+	/* Now, splice the local list into ch_db_list and queue the work item */
+	if (work) {
+		spin_lock(&mhi_cntrl->list_lock);
+		list_splice_tail_init(&head, &mhi_cntrl->ch_db_list);
+		spin_unlock(&mhi_cntrl->list_lock);
+	}
+}
+
+/*
+ * Channel interrupt statuses are contained in 4 registers each of 32bit length.
+ * For checking all interrupts, we need to loop through each registers and then
+ * check for bits set.
+ */
+static void mhi_ep_check_channel_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 ch_int, ch_idx, i;
+
+	/* Bail out if there is no channel doorbell interrupt */
+	if (!mhi_ep_mmio_read_chdb_status_interrupts(mhi_cntrl))
+		return;
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
+		ch_idx = i * MHI_MASK_CH_EV_LEN;
+
+		/* Only process channel interrupt if the mask is enabled */
+		ch_int = mhi_cntrl->chdb[i].status & mhi_cntrl->chdb[i].mask;
+		if (ch_int) {
+			mhi_ep_queue_channel_db(mhi_cntrl, ch_int, ch_idx);
+			mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_CLEAR_n(i),
+							mhi_cntrl->chdb[i].status);
+		}
+	}
+}
+
+static void mhi_ep_process_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl,
+					 enum mhi_state state)
+{
+	struct mhi_ep_state_transition *item;
+
+	item = kzalloc(sizeof(*item), GFP_ATOMIC);
+	if (!item)
+		return;
+
+	item->state = state;
+	spin_lock(&mhi_cntrl->list_lock);
+	list_add_tail(&item->node, &mhi_cntrl->st_transition_list);
+	spin_unlock(&mhi_cntrl->list_lock);
+
+	queue_work(mhi_cntrl->wq, &mhi_cntrl->state_work);
+}
+
+/*
+ * Interrupt handler that services interrupts raised by the host writing to
+ * MHICTRL and Command ring doorbell (CRDB) registers for state change and
+ * channel interrupts.
+ */
+static irqreturn_t mhi_ep_irq(int irq, void *data)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = data;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	enum mhi_state state;
+	u32 int_value;
+	bool mhi_reset;
+
+	/* Acknowledge the ctrl interrupt */
+	int_value = mhi_ep_mmio_read(mhi_cntrl, MHI_CTRL_INT_STATUS);
+	mhi_ep_mmio_write(mhi_cntrl, MHI_CTRL_INT_CLEAR, int_value);
+
+	/* Check for ctrl interrupt */
+	if (FIELD_GET(MHI_CTRL_INT_STATUS_MSK, int_value)) {
+		dev_dbg(dev, "Processing ctrl interrupt\n");
+		/* Read the MHI state requested by the host before processing */
+		mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);
+		mhi_ep_process_ctrl_interrupt(mhi_cntrl, state);
+	}
+
+	/* Check for command doorbell interrupt */
+	if (FIELD_GET(MHI_CTRL_INT_STATUS_CRDB_MSK, int_value))
+		dev_dbg(dev, "Processing command doorbell interrupt\n");
+
+	/* Check for channel interrupts */
+	mhi_ep_check_channel_interrupt(mhi_cntrl);
+
+	return IRQ_HANDLED;
+}
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -339,7 +445,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	struct mhi_ep_device *mhi_dev;
 	int ret;
 
-	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio)
+	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio || !mhi_cntrl->irq)
 		return -EINVAL;
 
 	ret = mhi_ep_chan_init(mhi_cntrl, config);
@@ -361,6 +467,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	}
 
 	INIT_LIST_HEAD(&mhi_cntrl->st_transition_list);
+	INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
 	spin_lock_init(&mhi_cntrl->state_lock);
 	spin_lock_init(&mhi_cntrl->list_lock);
 	mutex_init(&mhi_cntrl->event_lock);
@@ -376,12 +483,20 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 		goto err_destroy_wq;
 	}
 
+	irq_set_status_flags(mhi_cntrl->irq, IRQ_NOAUTOEN);
+	ret = request_irq(mhi_cntrl->irq, mhi_ep_irq, IRQF_TRIGGER_HIGH,
+			  "doorbell_irq", mhi_cntrl);
+	if (ret) {
+		dev_err(mhi_cntrl->cntrl_dev, "Failed to request Doorbell IRQ\n");
+		goto err_ida_free;
+	}
+
 	/* Allocate the controller device */
 	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_CONTROLLER);
 	if (IS_ERR(mhi_dev)) {
 		dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
 		ret = PTR_ERR(mhi_dev);
-		goto err_ida_free;
+		goto err_free_irq;
 	}
 
 	dev_set_name(&mhi_dev->dev, "mhi_ep%u", mhi_cntrl->index);
@@ -398,6 +513,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 
 err_put_dev:
 	put_device(&mhi_dev->dev);
+err_free_irq:
+	free_irq(mhi_cntrl->irq, mhi_cntrl);
 err_ida_free:
 	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
 err_destroy_wq:
@@ -417,6 +534,8 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
 
 	destroy_workqueue(mhi_cntrl->wq);
 
+	free_irq(mhi_cntrl->irq, mhi_cntrl);
+
 	kfree(mhi_cntrl->mhi_cmd);
 	kfree(mhi_cntrl->mhi_chan);
 
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index dc27a5de7d3c..43aa9b133db4 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -70,6 +70,7 @@ struct mhi_ep_db_info {
  * @list_lock: Lock for protecting state transition and channel doorbell lists
  * @state_lock: Lock for protecting state transitions
  * @st_transition_list: List of state transitions
+ * @ch_db_list: List of queued channel doorbells
  * @wq: Dedicated workqueue for handling rings and state changes
  * @state_work: State transition worker
  * @raise_irq: CB function for raising IRQ to the host
@@ -87,6 +88,7 @@ struct mhi_ep_db_info {
  * @chdb_offset: Channel doorbell offset set by the host
  * @erdb_offset: Event ring doorbell offset set by the host
  * @index: MHI Endpoint controller index
+ * @irq: IRQ used by the endpoint controller
  */
 struct mhi_ep_cntrl {
 	struct device *cntrl_dev;
@@ -111,6 +113,7 @@ struct mhi_ep_cntrl {
 	spinlock_t state_lock;
 
 	struct list_head st_transition_list;
+	struct list_head ch_db_list;
 
 	struct workqueue_struct *wq;
 	struct work_struct state_work;
@@ -137,6 +140,7 @@ struct mhi_ep_cntrl {
 	u32 chdb_offset;
 	u32 erdb_offset;
 	u32 index;
+	int irq;
 };
 
 /**
-- 
2.25.1



* [PATCH v4 18/27] bus: mhi: ep: Add support for powering up the MHI endpoint stack
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (16 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 17/27] bus: mhi: ep: Add support for processing MHI endpoint interrupts Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:47   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 19/27] bus: mhi: ep: Add support for powering down " Manivannan Sadhasivam
                   ` (10 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for MHI endpoint power_up that includes initializing the MMIO
and rings, caching the host MHI registers, and setting the MHI state to M0.
After registering the MHI EP controller, the stack has to be powered up
for usage.
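
A rough usage sketch (not part of this patch) of how a controller driver,
e.g. a PCIe endpoint function driver, is expected to bring the stack up;
error handling is trimmed, the function name is hypothetical and the
mhi_ep_cntrl/config setup is assumed to be done elsewhere:

#include <linux/mhi_ep.h>

static int example_mhi_ep_start(struct mhi_ep_cntrl *mhi_cntrl,
				const struct mhi_ep_cntrl_config *config)
{
	int ret;

	ret = mhi_ep_register_controller(mhi_cntrl, config);
	if (ret)
		return ret;

	/* Sets up MMIO and rings, signals READY and waits for the host M0 */
	return mhi_ep_power_up(mhi_cntrl);
}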

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/internal.h |   6 +
 drivers/bus/mhi/ep/main.c     | 237 ++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h        |  16 +++
 3 files changed, 259 insertions(+)

diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 536351218685..a2ec4169a4b2 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -210,4 +210,10 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
 
+/* MHI EP memory management functions */
+int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
+		     phys_addr_t *phys_ptr, void __iomem **virt);
+void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
+		       void __iomem *virt, size_t size);
+
 #endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index ce690b1aeace..47807102baad 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -16,6 +16,9 @@
 #include <linux/module.h>
 #include "internal.h"
 
+#define MHI_SUSPEND_MIN			100
+#define MHI_SUSPEND_TIMEOUT		600
+
 static DEFINE_IDA(mhi_ep_cntrl_ida);
 
 static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
@@ -106,6 +109,186 @@ static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_e
 	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
 }
 
+int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
+		     phys_addr_t *phys_ptr, void __iomem **virt)
+{
+	size_t offset = pci_addr % 0x1000;
+	void __iomem *buf;
+	phys_addr_t phys;
+	int ret;
+
+	size += offset;
+
+	buf = mhi_cntrl->alloc_addr(mhi_cntrl, &phys, size);
+	if (!buf)
+		return -ENOMEM;
+
+	ret = mhi_cntrl->map_addr(mhi_cntrl, phys, pci_addr - offset, size);
+	if (ret) {
+		mhi_cntrl->free_addr(mhi_cntrl, phys, buf, size);
+		return ret;
+	}
+
+	*phys_ptr = phys + offset;
+	*virt = buf + offset;
+
+	return 0;
+}
+
+void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
+			void __iomem *virt, size_t size)
+{
+	size_t offset = pci_addr % 0x1000;
+
+	size += offset;
+
+	mhi_cntrl->unmap_addr(mhi_cntrl, phys - offset);
+	mhi_cntrl->free_addr(mhi_cntrl, phys - offset, virt - offset, size);
+}
+
+static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	size_t cmd_ctx_host_size, ch_ctx_host_size, ev_ctx_host_size;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	int ret;
+
+	/* Update the number of event rings (NER) programmed by the host */
+	mhi_ep_mmio_update_ner(mhi_cntrl);
+
+	dev_dbg(dev, "Number of Event rings: %u, HW Event rings: %u\n",
+		 mhi_cntrl->event_rings, mhi_cntrl->hw_event_rings);
+
+	ch_ctx_host_size = sizeof(struct mhi_chan_ctxt) * mhi_cntrl->max_chan;
+	ev_ctx_host_size = sizeof(struct mhi_event_ctxt) * mhi_cntrl->event_rings;
+	cmd_ctx_host_size = sizeof(struct mhi_cmd_ctxt) * NR_OF_CMD_RINGS;
+
+	/* Get the channel context base pointer from host */
+	mhi_ep_mmio_get_chc_base(mhi_cntrl);
+
+	/* Allocate and map memory for caching host channel context */
+	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, ch_ctx_host_size,
+				&mhi_cntrl->ch_ctx_cache_phys,
+				(void __iomem **)&mhi_cntrl->ch_ctx_cache);
+	if (ret) {
+		dev_err(dev, "Failed to allocate and map ch_ctx_cache\n");
+		return ret;
+	}
+
+	/* Get the event context base pointer from host */
+	mhi_ep_mmio_get_erc_base(mhi_cntrl);
+
+	/* Allocate and map memory for caching host event context */
+	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, ev_ctx_host_size,
+				&mhi_cntrl->ev_ctx_cache_phys,
+				(void __iomem **)&mhi_cntrl->ev_ctx_cache);
+	if (ret) {
+		dev_err(dev, "Failed to allocate and map ev_ctx_cache\n");
+		goto err_ch_ctx;
+	}
+
+	/* Get the command context base pointer from host */
+	mhi_ep_mmio_get_crc_base(mhi_cntrl);
+
+	/* Allocate and map memory for caching host command context */
+	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, cmd_ctx_host_size,
+				&mhi_cntrl->cmd_ctx_cache_phys,
+				(void __iomem **)&mhi_cntrl->cmd_ctx_cache);
+	if (ret) {
+		dev_err(dev, "Failed to allocate and map cmd_ctx_cache\n");
+		goto err_ev_ctx;
+	}
+
+	/* Initialize command ring */
+	ret = mhi_ep_ring_start(mhi_cntrl, &mhi_cntrl->mhi_cmd->ring,
+				(union mhi_ep_ring_ctx *)mhi_cntrl->cmd_ctx_cache);
+	if (ret) {
+		dev_err(dev, "Failed to start the command ring\n");
+		goto err_cmd_ctx;
+	}
+
+	return ret;
+
+err_cmd_ctx:
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, mhi_cntrl->cmd_ctx_cache_phys,
+			mhi_cntrl->cmd_ctx_cache, cmd_ctx_host_size);
+
+err_ev_ctx:
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, mhi_cntrl->ev_ctx_cache_phys,
+			mhi_cntrl->ev_ctx_cache, ev_ctx_host_size);
+
+err_ch_ctx:
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, mhi_cntrl->ch_ctx_cache_phys,
+			mhi_cntrl->ch_ctx_cache, ch_ctx_host_size);
+
+	return ret;
+}
+
+static void mhi_ep_free_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	size_t cmd_ctx_host_size, ch_ctx_host_size, ev_ctx_host_size;
+
+	ch_ctx_host_size = sizeof(struct mhi_chan_ctxt) * mhi_cntrl->max_chan;
+	ev_ctx_host_size = sizeof(struct mhi_event_ctxt) * mhi_cntrl->event_rings;
+	cmd_ctx_host_size = sizeof(struct mhi_cmd_ctxt) * NR_OF_CMD_RINGS;
+
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, mhi_cntrl->cmd_ctx_cache_phys,
+			mhi_cntrl->cmd_ctx_cache, cmd_ctx_host_size);
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, mhi_cntrl->ev_ctx_cache_phys,
+			mhi_cntrl->ev_ctx_cache, ev_ctx_host_size);
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, mhi_cntrl->ch_ctx_cache_phys,
+			mhi_cntrl->ch_ctx_cache, ch_ctx_host_size);
+}
+
+static void mhi_ep_enable_int(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	/*
+	 * Doorbell interrupts are enabled when the corresponding channel gets started.
+	 * Enabling all interrupts here triggers spurious irqs as some of the interrupts
+	 * associated with hw channels always get triggered.
+	 */
+	mhi_ep_mmio_enable_ctrl_interrupt(mhi_cntrl);
+	mhi_ep_mmio_enable_cmdb_interrupt(mhi_cntrl);
+}
+
+static int mhi_ep_enable(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	enum mhi_state state;
+	u32 max_cnt = 0;
+	bool mhi_reset;
+	int ret;
+
+	/* Wait for Host to set the M0 state */
+	do {
+		msleep(MHI_SUSPEND_MIN);
+		mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);
+		if (mhi_reset) {
+			/* Clear the MHI reset if host is in reset state */
+			mhi_ep_mmio_clear_reset(mhi_cntrl);
+			dev_dbg(dev, "Host initiated reset while waiting for M0\n");
+		}
+		max_cnt++;
+	} while (state != MHI_STATE_M0 && max_cnt < MHI_SUSPEND_TIMEOUT);
+
+	if (state != MHI_STATE_M0) {
+		dev_err(dev, "Host failed to enter M0\n");
+		return -ETIMEDOUT;
+	}
+
+	ret = mhi_ep_cache_host_cfg(mhi_cntrl);
+	if (ret) {
+		dev_err(dev, "Failed to cache host config\n");
+		return ret;
+	}
+
+	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EE_AMSS);
+
+	/* Enable all interrupts now */
+	mhi_ep_enable_int(mhi_cntrl);
+
+	return 0;
+}
+
 static void mhi_ep_state_worker(struct work_struct *work)
 {
 	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
@@ -249,6 +432,60 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
+int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	int ret, i;
+
+	/*
+	 * Mask all interrupts until the state machine is ready. Interrupts will
+	 * be enabled later with mhi_ep_enable().
+	 */
+	mhi_ep_mmio_mask_interrupts(mhi_cntrl);
+	mhi_ep_mmio_init(mhi_cntrl);
+
+	mhi_cntrl->mhi_event = kzalloc(mhi_cntrl->event_rings * (sizeof(*mhi_cntrl->mhi_event)),
+					GFP_KERNEL);
+	if (!mhi_cntrl->mhi_event)
+		return -ENOMEM;
+
+	/* Initialize command, channel and event rings */
+	mhi_ep_ring_init(&mhi_cntrl->mhi_cmd->ring, RING_TYPE_CMD, 0);
+	for (i = 0; i < mhi_cntrl->max_chan; i++)
+		mhi_ep_ring_init(&mhi_cntrl->mhi_chan[i].ring, RING_TYPE_CH, i);
+	for (i = 0; i < mhi_cntrl->event_rings; i++)
+		mhi_ep_ring_init(&mhi_cntrl->mhi_event[i].ring, RING_TYPE_ER, i);
+
+	mhi_cntrl->mhi_state = MHI_STATE_RESET;
+
+	/* Set AMSS EE before signaling ready state */
+	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EE_AMSS);
+
+	/* All set, notify the host that we are ready */
+	ret = mhi_ep_set_ready_state(mhi_cntrl);
+	if (ret)
+		goto err_free_event;
+
+	dev_dbg(dev, "READY state notification sent to the host\n");
+
+	ret = mhi_ep_enable(mhi_cntrl);
+	if (ret) {
+		dev_err(dev, "Failed to enable MHI endpoint\n");
+		goto err_free_event;
+	}
+
+	enable_irq(mhi_cntrl->irq);
+	mhi_cntrl->enabled = true;
+
+	return 0;
+
+err_free_event:
+	kfree(mhi_cntrl->mhi_event);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_ep_power_up);
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 43aa9b133db4..1b7dec859a5e 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -65,6 +65,9 @@ struct mhi_ep_db_info {
  * @ch_ctx_host_pa: Physical address of host channel context data structure
  * @ev_ctx_host_pa: Physical address of host event context data structure
  * @cmd_ctx_host_pa: Physical address of host command context data structure
+ * @ch_ctx_cache_phys: Physical address of the host channel context cache
+ * @ev_ctx_cache_phys: Physical address of the host event context cache
+ * @cmd_ctx_cache_phys: Physical address of the host command context cache
  * @chdb: Array of channel doorbell interrupt info
  * @event_lock: Lock for protecting event rings
  * @list_lock: Lock for protecting state transition and channel doorbell lists
@@ -89,6 +92,7 @@ struct mhi_ep_db_info {
  * @erdb_offset: Event ring doorbell offset set by the host
  * @index: MHI Endpoint controller index
  * @irq: IRQ used by the endpoint controller
+ * @enabled: Flag indicating whether the endpoint controller is enabled
  */
 struct mhi_ep_cntrl {
 	struct device *cntrl_dev;
@@ -106,6 +110,9 @@ struct mhi_ep_cntrl {
 	u64 ch_ctx_host_pa;
 	u64 ev_ctx_host_pa;
 	u64 cmd_ctx_host_pa;
+	phys_addr_t ch_ctx_cache_phys;
+	phys_addr_t ev_ctx_cache_phys;
+	phys_addr_t cmd_ctx_cache_phys;
 
 	struct mhi_ep_db_info chdb[4];
 	struct mutex event_lock;
@@ -141,6 +148,7 @@ struct mhi_ep_cntrl {
 	u32 erdb_offset;
 	u32 index;
 	int irq;
+	bool enabled;
 };
 
 /**
@@ -235,4 +243,12 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
  */
 void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
 
+/**
+ * mhi_ep_power_up - Power up the MHI endpoint stack
+ * @mhi_cntrl: MHI Endpoint controller
+ *
+ * Return: 0 if power up succeeds, a negative error code otherwise.
+ */
+int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
+
 #endif
-- 
2.25.1



* [PATCH v4 19/27] bus: mhi: ep: Add support for powering down the MHI endpoint stack
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (17 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 18/27] bus: mhi: ep: Add support for powering up the MHI endpoint stack Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:49   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 20/27] bus: mhi: ep: Add support for handling MHI_RESET Manivannan Sadhasivam
                   ` (9 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for MHI endpoint power_down, which includes stopping all
available channels, destroying the devices associated with them, resetting
the event and transfer rings, and freeing the cached host context.

The stack will be powered down whenever the physical bus link goes down.
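
A rough teardown sketch (not part of this patch); only the mhi_ep_* calls
come from this series, the surrounding function is hypothetical:

#include <linux/mhi_ep.h>

static void example_mhi_ep_stop(struct mhi_ep_cntrl *mhi_cntrl)
{
	/* Stop channels, reset rings and free the cached host context */
	mhi_ep_power_down(mhi_cntrl);

	/* Power down must happen before unregistering the controller */
	mhi_ep_unregister_controller(mhi_cntrl);
}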

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 78 +++++++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h    |  6 +++
 2 files changed, 84 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 47807102baad..4956440273ad 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -21,6 +21,8 @@
 
 static DEFINE_IDA(mhi_ep_cntrl_ida);
 
+static int mhi_ep_destroy_device(struct device *dev, void *data);
+
 static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
 			     struct mhi_ring_element *el, bool bei)
 {
@@ -432,6 +434,68 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
+static void mhi_ep_abort_transfer(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct mhi_ep_ring *ch_ring, *ev_ring;
+	struct mhi_result result = {};
+	struct mhi_ep_chan *mhi_chan;
+	int i;
+
+	/* Stop all the channels */
+	for (i = 0; i < mhi_cntrl->max_chan; i++) {
+		mhi_chan = &mhi_cntrl->mhi_chan[i];
+		if (!mhi_chan->ring.started)
+			continue;
+
+		mutex_lock(&mhi_chan->lock);
+		/* Send channel disconnect status to client drivers */
+		if (mhi_chan->xfer_cb) {
+			result.transaction_status = -ENOTCONN;
+			result.bytes_xferd = 0;
+			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+		}
+
+		mhi_chan->state = MHI_CH_STATE_DISABLED;
+		mutex_unlock(&mhi_chan->lock);
+	}
+
+	flush_workqueue(mhi_cntrl->wq);
+
+	/* Destroy devices associated with all channels */
+	device_for_each_child(&mhi_cntrl->mhi_dev->dev, NULL, mhi_ep_destroy_device);
+
+	/* Stop and reset the transfer rings */
+	for (i = 0; i < mhi_cntrl->max_chan; i++) {
+		mhi_chan = &mhi_cntrl->mhi_chan[i];
+		if (!mhi_chan->ring.started)
+			continue;
+
+		ch_ring = &mhi_cntrl->mhi_chan[i].ring;
+		mutex_lock(&mhi_chan->lock);
+		mhi_ep_ring_reset(mhi_cntrl, ch_ring);
+		mutex_unlock(&mhi_chan->lock);
+	}
+
+	/* Stop and reset the event rings */
+	for (i = 0; i < mhi_cntrl->event_rings; i++) {
+		ev_ring = &mhi_cntrl->mhi_event[i].ring;
+		if (!ev_ring->started)
+			continue;
+
+		mutex_lock(&mhi_cntrl->event_lock);
+		mhi_ep_ring_reset(mhi_cntrl, ev_ring);
+		mutex_unlock(&mhi_cntrl->event_lock);
+	}
+
+	/* Stop and reset the command ring */
+	mhi_ep_ring_reset(mhi_cntrl, &mhi_cntrl->mhi_cmd->ring);
+
+	mhi_ep_free_host_cfg(mhi_cntrl);
+	mhi_ep_mmio_mask_interrupts(mhi_cntrl);
+
+	mhi_cntrl->enabled = false;
+}
+
 int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
@@ -486,6 +550,16 @@ int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
 }
 EXPORT_SYMBOL_GPL(mhi_ep_power_up);
 
+void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	if (mhi_cntrl->enabled)
+		mhi_ep_abort_transfer(mhi_cntrl);
+
+	kfree(mhi_cntrl->mhi_event);
+	disable_irq(mhi_cntrl->irq);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_power_down);
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -765,6 +839,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 }
 EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
 
+/*
+ * It is expected that the controller drivers will power down the MHI EP stack
+ * using "mhi_ep_power_down()" before calling this function to unregister themselves.
+ */
 void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 1b7dec859a5e..8e062a4c84f4 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -251,4 +251,10 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
  */
 int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
 
+/**
+ * mhi_ep_power_down - Power down the MHI endpoint stack
+ * @mhi_cntrl: MHI Endpoint controller
+ */
+void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
+
 #endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 20/27] bus: mhi: ep: Add support for handling MHI_RESET
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (18 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 19/27] bus: mhi: ep: Add support for powering down " Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 21/27] bus: mhi: ep: Add support for handling SYS_ERR condition Manivannan Sadhasivam
                   ` (8 subsequent siblings)
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for handling MHI_RESET in the MHI endpoint stack. MHI_RESET
will be issued by the host during shutdown and in error scenarios so that
it can recover the endpoint device without restarting the whole device.

MHI_RESET handling involves resetting the internal MHI registers, data
structures and state machines, resetting all channels/rings, and clearing
the MHICTRL.RESET bit. Additionally, the device will move to the READY
state if the reset was due to SYS_ERR.

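For reference, a minimal sketch of how the RESET bit and MHI state could be
decoded with the bitfield helpers. The mask names, the assumed register
layout and the helper below are for illustration only and are not the MMIO
helpers used by this series:

	#include <linux/bitfield.h>
	#include <linux/bits.h>

	/* Assumed layout: MHICTRL.RESET at bit 1, MHISTATUS.MHISTATE in bits 15:8 */
	#define MY_MHICTRL_RESET	BIT(1)
	#define MY_MHISTATUS_MHISTATE	GENMASK(15, 8)

	static void my_decode_ctrl_regs(u32 mhictrl, u32 mhistatus, u32 *state, bool *mhi_reset)
	{
		*mhi_reset = FIELD_GET(MY_MHICTRL_RESET, mhictrl);
		*state = FIELD_GET(MY_MHISTATUS_MHISTATE, mhistatus);
	}
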
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 53 +++++++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h    |  2 ++
 2 files changed, 55 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 4956440273ad..99cbad2a94c9 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -413,6 +413,7 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
 	enum mhi_state state;
 	u32 int_value;
+	bool mhi_reset;
 
 	/* Acknowledge the ctrl interrupt */
 	int_value = mhi_ep_mmio_read(mhi_cntrl, MHI_CTRL_INT_STATUS);
@@ -421,6 +422,14 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
 	/* Check for ctrl interrupt */
 	if (FIELD_GET(MHI_CTRL_INT_STATUS_MSK, int_value)) {
 		dev_dbg(dev, "Processing ctrl interrupt\n");
+		mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);
+		if (mhi_reset) {
+			dev_info(dev, "Host triggered MHI reset!\n");
+			disable_irq_nosync(mhi_cntrl->irq);
+			schedule_work(&mhi_cntrl->reset_work);
+			return IRQ_HANDLED;
+		}
+
 		mhi_ep_process_ctrl_interrupt(mhi_cntrl, state);
 	}
 
@@ -496,6 +505,49 @@ static void mhi_ep_abort_transfer(struct mhi_ep_cntrl *mhi_cntrl)
 	mhi_cntrl->enabled = false;
 }
 
+static void mhi_ep_reset_worker(struct work_struct *work)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, reset_work);
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	enum mhi_state cur_state;
+	int ret;
+
+	mhi_ep_abort_transfer(mhi_cntrl);
+
+	spin_lock_bh(&mhi_cntrl->state_lock);
+	/* Reset MMIO to signal host that the MHI_RESET is completed in endpoint */
+	mhi_ep_mmio_reset(mhi_cntrl);
+	cur_state = mhi_cntrl->mhi_state;
+	spin_unlock_bh(&mhi_cntrl->state_lock);
+
+	/*
+	 * Only proceed further if the reset is due to SYS_ERR. The host will
+	 * issue reset during shutdown also and we don't need to do re-init in
+	 * that case.
+	 */
+	if (cur_state == MHI_STATE_SYS_ERR) {
+		mhi_ep_mmio_init(mhi_cntrl);
+
+		/* Set AMSS EE before signaling ready state */
+		mhi_ep_mmio_set_env(mhi_cntrl, MHI_EE_AMSS);
+
+		/* All set, notify the host that we are ready */
+		ret = mhi_ep_set_ready_state(mhi_cntrl);
+		if (ret)
+			return;
+
+		dev_dbg(dev, "READY state notification sent to the host\n");
+
+		ret = mhi_ep_enable(mhi_cntrl);
+		if (ret) {
+			dev_err(dev, "Failed to enable MHI endpoint: %d\n", ret);
+			return;
+		}
+
+		enable_irq(mhi_cntrl->irq);
+	}
+}
+
 int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
@@ -770,6 +822,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	}
 
 	INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
+	INIT_WORK(&mhi_cntrl->reset_work, mhi_ep_reset_worker);
 
 	mhi_cntrl->wq = alloc_workqueue("mhi_ep_wq", 0, 0);
 	if (!mhi_cntrl->wq) {
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 8e062a4c84f4..e77a7b025430 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -76,6 +76,7 @@ struct mhi_ep_db_info {
  * @ch_db_list: List of queued channel doorbells
  * @wq: Dedicated workqueue for handling rings and state changes
  * @state_work: State transition worker
+ * @reset_work: Worker for MHI Endpoint reset
  * @raise_irq: CB function for raising IRQ to the host
  * @alloc_addr: CB function for allocating memory in endpoint for storing host context
  * @map_addr: CB function for mapping host context to endpoint
@@ -124,6 +125,7 @@ struct mhi_ep_cntrl {
 
 	struct workqueue_struct *wq;
 	struct work_struct state_work;
+	struct work_struct reset_work;
 
 	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
 	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 21/27] bus: mhi: ep: Add support for handling SYS_ERR condition
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (19 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 20/27] bus: mhi: ep: Add support for handling MHI_RESET Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 22/27] bus: mhi: ep: Add support for processing command rings Manivannan Sadhasivam
                   ` (7 subsequent siblings)
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for handling the SYS_ERR (System Error) condition in the MHI
endpoint stack. The SYS_ERR flag will be asserted by the endpoint device
when it detects an internal error. The host will then issue a reset and
reinitialize MHI to recover from the error state.

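A minimal sketch of the intended escalation pattern (simplified from the
sm.c changes below, not new functionality): any failure to complete a
required MHI state transition reports SYS_ERR so that the host can recover
the device.

	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
	if (ret) {
		/* Could not honour the transition; ask the host to recover us */
		mhi_ep_handle_syserr(mhi_cntrl);
		return ret;
	}
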
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/internal.h |  1 +
 drivers/bus/mhi/ep/main.c     | 20 ++++++++++++++++++++
 drivers/bus/mhi/ep/sm.c       | 11 +++++++++--
 3 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index a2ec4169a4b2..a229d8b70227 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -209,6 +209,7 @@ int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_stat
 int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl);
 
 /* MHI EP memory management functions */
 int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 99cbad2a94c9..132fd9f51a1f 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -548,6 +548,26 @@ static void mhi_ep_reset_worker(struct work_struct *work)
 	}
 }
 
+/*
+ * We don't need to do anything special other than setting the MHI SYS_ERR
+ * state. The host will reset all contexts and issue MHI RESET so that we
+ * could also recover from error state.
+ */
+void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	int ret;
+
+	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
+	if (ret)
+		return;
+
+	/* Signal host that the device went to SYS_ERR state */
+	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_SYS_ERR);
+	if (ret)
+		dev_err(dev, "Failed sending SYS_ERR state change event: %d\n", ret);
+}
+
 int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
index ad49276ec044..4d6e8c2d615c 100644
--- a/drivers/bus/mhi/ep/sm.c
+++ b/drivers/bus/mhi/ep/sm.c
@@ -68,8 +68,10 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
 	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
 	spin_unlock_bh(&mhi_cntrl->state_lock);
 
-	if (ret)
+	if (ret) {
+		mhi_ep_handle_syserr(mhi_cntrl);
 		return ret;
+	}
 
 	/* Signal host that the device moved to M0 */
 	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M0);
@@ -99,8 +101,10 @@ int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
 	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
 	spin_unlock_bh(&mhi_cntrl->state_lock);
 
-	if (ret)
+	if (ret) {
+		mhi_ep_handle_syserr(mhi_cntrl);
 		return ret;
+	}
 
 	/* Signal host that the device moved to M3 */
 	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);
@@ -132,5 +136,8 @@ int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl)
 	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_READY);
 	spin_unlock_bh(&mhi_cntrl->state_lock);
 
+	if (ret)
+		mhi_ep_handle_syserr(mhi_cntrl);
+
 	return ret;
 }
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 22/27] bus: mhi: ep: Add support for processing command rings
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (20 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 21/27] bus: mhi: ep: Add support for handling SYS_ERR condition Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 23/27] bus: mhi: ep: Add support for reading from the host Manivannan Sadhasivam
                   ` (6 subsequent siblings)
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for processing the command ring. The command ring is used by
the host to issue channel specific commands to the endpoint device. The
following commands are supported:

1. Start channel
2. Stop channel
3. Reset channel

Once the device receives the command doorbell interrupt from the host, it
executes the command and generates a command completion event for the
host in the primary event ring.

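For orientation, a command ring element carries the target channel ID and
the command type in its second DWORD; a hedged sketch of decoding it with
the bitfield helpers (the field positions mirror the host-side macros
reworked earlier in this series, but treat them as assumptions here):

	/* el is a struct mhi_ring_element pointing at a command ring entry */
	u32 dword1 = le32_to_cpu(el->dword[1]);
	u32 ch_id  = FIELD_GET(GENMASK(31, 24), dword1);	/* channel the command targets */
	u32 type   = FIELD_GET(GENMASK(23, 16), dword1);	/* e.g. MHI_PKT_TYPE_START_CHAN_CMD */
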
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 190 +++++++++++++++++++++++++++++++++++++-
 include/linux/mhi_ep.h    |   2 +
 2 files changed, 191 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 132fd9f51a1f..1d4a9f6db8a3 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -21,6 +21,7 @@
 
 static DEFINE_IDA(mhi_ep_cntrl_ida);
 
+static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id);
 static int mhi_ep_destroy_device(struct device *dev, void *data);
 
 static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
@@ -148,6 +149,156 @@ void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t
 	mhi_cntrl->free_addr(mhi_cntrl, phys - offset, virt - offset, size);
 }
 
+int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ring_element *el)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	struct mhi_result result = {};
+	struct mhi_ep_chan *mhi_chan;
+	struct mhi_ep_ring *ch_ring;
+	u32 tmp, ch_id;
+	int ret;
+
+	ch_id = MHI_TRE_GET_CMD_CHID(el);
+	mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
+	ch_ring = &mhi_cntrl->mhi_chan[ch_id].ring;
+
+	switch (MHI_TRE_GET_CMD_TYPE(el)) {
+	case MHI_PKT_TYPE_START_CHAN_CMD:
+		dev_dbg(dev, "Received START command for channel (%u)\n", ch_id);
+
+		mutex_lock(&mhi_chan->lock);
+		/* Initialize and configure the corresponding channel ring */
+		if (!ch_ring->started) {
+			ret = mhi_ep_ring_start(mhi_cntrl, ch_ring,
+				(union mhi_ep_ring_ctx *)&mhi_cntrl->ch_ctx_cache[ch_id]);
+			if (ret) {
+				dev_err(dev, "Failed to start ring for channel (%u)\n", ch_id);
+				ret = mhi_ep_send_cmd_comp_event(mhi_cntrl,
+							MHI_EV_CC_UNDEFINED_ERR);
+				if (ret)
+					dev_err(dev, "Error sending completion event: %d\n", ret);
+
+				goto err_unlock;
+			}
+		}
+
+		/* Set channel state to RUNNING */
+		mhi_chan->state = MHI_CH_STATE_RUNNING;
+		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[ch_id].chcfg);
+		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_RUNNING);
+		mhi_cntrl->ch_ctx_cache[ch_id].chcfg = cpu_to_le32(tmp);
+
+		ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
+		if (ret) {
+			dev_err(dev, "Error sending command completion event (%u)\n",
+				MHI_EV_CC_SUCCESS);
+			goto err_unlock;
+		}
+
+		mutex_unlock(&mhi_chan->lock);
+
+		/*
+		 * Create MHI device only during UL channel start. Since the MHI
+		 * channels operate in a pair, we'll associate both UL and DL
+		 * channels to the same device.
+		 *
+		 * We also need to check for mhi_dev != NULL because, the host
+		 * will issue START_CHAN command during resume and we don't
+		 * destroy the device during suspend.
+		 */
+		if (!(ch_id % 2) && !mhi_chan->mhi_dev) {
+			ret = mhi_ep_create_device(mhi_cntrl, ch_id);
+			if (ret) {
+				dev_err(dev, "Error creating device for channel (%u)\n", ch_id);
+				mhi_ep_handle_syserr(mhi_cntrl);
+				return ret;
+			}
+		}
+
+		/* Finally, enable DB for the channel */
+		mhi_ep_mmio_enable_chdb(mhi_cntrl, ch_id);
+
+		break;
+	case MHI_PKT_TYPE_STOP_CHAN_CMD:
+		dev_dbg(dev, "Received STOP command for channel (%u)\n", ch_id);
+		if (!ch_ring->started) {
+			dev_err(dev, "Channel (%u) not opened\n", ch_id);
+			return -ENODEV;
+		}
+
+		mutex_lock(&mhi_chan->lock);
+		/* Disable DB for the channel */
+		mhi_ep_mmio_disable_chdb(mhi_cntrl, ch_id);
+
+		/* Send channel disconnect status to client drivers */
+		result.transaction_status = -ENOTCONN;
+		result.bytes_xferd = 0;
+		mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+		/* Set channel state to STOP */
+		mhi_chan->state = MHI_CH_STATE_STOP;
+		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[ch_id].chcfg);
+		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_STOP);
+		mhi_cntrl->ch_ctx_cache[ch_id].chcfg = cpu_to_le32(tmp);
+
+		ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
+		if (ret) {
+			dev_err(dev, "Error sending command completion event (%u)\n",
+				MHI_EV_CC_SUCCESS);
+			goto err_unlock;
+		}
+
+		mutex_unlock(&mhi_chan->lock);
+		break;
+	case MHI_PKT_TYPE_RESET_CHAN_CMD:
+		dev_dbg(dev, "Received RESET command for channel (%u)\n", ch_id);
+		if (!ch_ring->started) {
+			dev_err(dev, "Channel (%u) not opened\n", ch_id);
+			return -ENODEV;
+		}
+
+		mutex_lock(&mhi_chan->lock);
+		/* Stop and reset the transfer ring */
+		mhi_ep_ring_reset(mhi_cntrl, ch_ring);
+
+		/* Send channel disconnect status to client driver */
+		result.transaction_status = -ENOTCONN;
+		result.bytes_xferd = 0;
+		mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+		/* Set channel state to DISABLED */
+		mhi_chan->state = MHI_CH_STATE_DISABLED;
+		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[ch_id].chcfg);
+		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_DISABLED);
+		mhi_cntrl->ch_ctx_cache[ch_id].chcfg = cpu_to_le32(tmp);
+
+		ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
+		if (ret) {
+			dev_err(dev, "Error sending command completion event (%u)\n",
+				MHI_EV_CC_SUCCESS);
+			goto err_unlock;
+		}
+
+		mutex_unlock(&mhi_chan->lock);
+		break;
+	default:
+		dev_err(dev, "Invalid command received: %lu for channel (%u)\n",
+			MHI_TRE_GET_CMD_TYPE(el), ch_id);
+		return -EINVAL;
+	}
+
+	return 0;
+
+err_unlock:
+	mutex_unlock(&mhi_chan->lock);
+
+	return ret;
+}
+
 static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	size_t cmd_ctx_host_size, ch_ctx_host_size, ev_ctx_host_size;
@@ -291,6 +442,40 @@ static int mhi_ep_enable(struct mhi_ep_cntrl *mhi_cntrl)
 	return 0;
 }
 
+static void mhi_ep_cmd_ring_worker(struct work_struct *work)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, cmd_ring_work);
+	struct mhi_ep_ring *ring = &mhi_cntrl->mhi_cmd->ring;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	struct mhi_ring_element *el;
+	int ret;
+
+	/* Update the write offset for the ring */
+	ret = mhi_ep_update_wr_offset(ring);
+	if (ret) {
+		dev_err(dev, "Error updating write offset for ring\n");
+		return;
+	}
+
+	/* Sanity check to make sure there are elements in the ring */
+	if (ring->rd_offset == ring->wr_offset)
+		return;
+
+	/*
+	 * Process command ring element till write offset. In case of an error, just try to
+	 * process next element.
+	 */
+	while (ring->rd_offset != ring->wr_offset) {
+		el = &ring->ring_cache[ring->rd_offset];
+
+		ret = mhi_ep_process_cmd_ring(ring, el);
+		if (ret)
+			dev_err(dev, "Error processing cmd ring element: %zu\n", ring->rd_offset);
+
+		mhi_ep_ring_inc_index(ring);
+	}
+}
+
 static void mhi_ep_state_worker(struct work_struct *work)
 {
 	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
@@ -434,8 +619,10 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
 	}
 
 	/* Check for command doorbell interrupt */
-	if (FIELD_GET(MHI_CTRL_INT_STATUS_CRDB_MSK, int_value))
+	if (FIELD_GET(MHI_CTRL_INT_STATUS_CRDB_MSK, int_value)) {
 		dev_dbg(dev, "Processing command doorbell interrupt\n");
+		queue_work(mhi_cntrl->wq, &mhi_cntrl->cmd_ring_work);
+	}
 
 	/* Check for channel interrupts */
 	mhi_ep_check_channel_interrupt(mhi_cntrl);
@@ -843,6 +1030,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 
 	INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
 	INIT_WORK(&mhi_cntrl->reset_work, mhi_ep_reset_worker);
+	INIT_WORK(&mhi_cntrl->cmd_ring_work, mhi_ep_cmd_ring_worker);
 
 	mhi_cntrl->wq = alloc_workqueue("mhi_ep_wq", 0, 0);
 	if (!mhi_cntrl->wq) {
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index e77a7b025430..681c638833ff 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -77,6 +77,7 @@ struct mhi_ep_db_info {
  * @wq: Dedicated workqueue for handling rings and state changes
  * @state_work: State transition worker
  * @reset_work: Worker for MHI Endpoint reset
+ * @cmd_ring_work: Worker for processing command rings
  * @raise_irq: CB function for raising IRQ to the host
  * @alloc_addr: CB function for allocating memory in endpoint for storing host context
  * @map_addr: CB function for mapping host context to endpoint
@@ -126,6 +127,7 @@ struct mhi_ep_cntrl {
 	struct workqueue_struct *wq;
 	struct work_struct state_work;
 	struct work_struct reset_work;
+	struct work_struct cmd_ring_work;
 
 	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
 	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 23/27] bus: mhi: ep: Add support for reading from the host
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (21 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 22/27] bus: mhi: ep: Add support for processing command rings Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 24/27] bus: mhi: ep: Add support for processing channel rings Manivannan Sadhasivam
                   ` (5 subsequent siblings)
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Data transfer between the host and the endpoint device happens over the
transfer ring associated with each bi-directional channel pair. The host
defines the transfer ring by allocating memory for it. The read and write
pointer addresses of the transfer ring are stored in the channel context.

Once the host places elements in the transfer ring, it increments the
write pointer and rings the channel doorbell. The device will receive the
doorbell interrupt and process the transfer ring elements.

This commit adds support for reading the transfer ring elements up to the
write pointer, incrementing the read pointer, and finally sending the
completion event to the host through the corresponding event ring.

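As an illustration (names are assumptions, not part of this patch), a client
driver's UL transfer callback would consume the data handed up by this read
path roughly as follows:

	/* Hypothetical UL callback of an MHI EP client driver */
	static void my_ep_client_ul_xfer_cb(struct mhi_ep_device *mhi_dev,
					    struct mhi_result *result)
	{
		/*
		 * result->buf_addr holds the bytes copied from the host TREs and
		 * result->bytes_xferd the number of valid bytes in that buffer.
		 * my_client_consume() is a placeholder for the driver's own logic.
		 */
		my_client_consume(result->buf_addr, result->bytes_xferd);
	}
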
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 121 ++++++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h    |   9 +++
 2 files changed, 130 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 1d4a9f6db8a3..e7c0ef9f281b 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -299,6 +299,127 @@ int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ring_element *e
 	return ret;
 }
 
+bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir)
+{
+	struct mhi_ep_chan *mhi_chan = (dir == DMA_FROM_DEVICE) ? mhi_dev->dl_chan :
+								mhi_dev->ul_chan;
+	struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_ep_ring *ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
+
+	return !!(ring->rd_offset == ring->wr_offset);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_queue_is_empty);
+
+static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
+				struct mhi_ep_ring *ring,
+				struct mhi_result *result,
+				u32 len)
+{
+	struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	size_t tr_len, read_offset, write_offset;
+	struct mhi_ring_element *el;
+	bool tr_done = false;
+	void *write_addr;
+	u64 read_addr;
+	u32 buf_left;
+	int ret;
+
+	buf_left = len;
+
+	do {
+		/* Don't process the transfer ring if the channel is not in RUNNING state */
+		if (mhi_chan->state != MHI_CH_STATE_RUNNING) {
+			dev_err(dev, "Channel not available\n");
+			return -ENODEV;
+		}
+
+		el = &ring->ring_cache[ring->rd_offset];
+
+		/* Check if there is data pending to be read from previous read operation */
+		if (mhi_chan->tre_bytes_left) {
+			dev_dbg(dev, "TRE bytes remaining: %u\n", mhi_chan->tre_bytes_left);
+			tr_len = min(buf_left, mhi_chan->tre_bytes_left);
+		} else {
+			mhi_chan->tre_loc = MHI_TRE_DATA_GET_PTR(el);
+			mhi_chan->tre_size = MHI_TRE_DATA_GET_LEN(el);
+			mhi_chan->tre_bytes_left = mhi_chan->tre_size;
+
+			tr_len = min(buf_left, mhi_chan->tre_size);
+		}
+
+		read_offset = mhi_chan->tre_size - mhi_chan->tre_bytes_left;
+		write_offset = len - buf_left;
+		read_addr = mhi_chan->tre_loc + read_offset;
+		write_addr = result->buf_addr + write_offset;
+
+		dev_dbg(dev, "Reading %zd bytes from channel (%u)\n", tr_len, ring->ch_id);
+		ret = mhi_cntrl->read_from_host(mhi_cntrl, read_addr, write_addr, tr_len);
+		if (ret < 0) {
+			dev_err(&mhi_chan->mhi_dev->dev, "Error reading from channel\n");
+			return ret;
+		}
+
+		buf_left -= tr_len;
+		mhi_chan->tre_bytes_left -= tr_len;
+
+		/*
+		 * Once the TRE (Transfer Ring Element) of a TD (Transfer Descriptor) has been
+		 * read completely:
+		 *
+		 * 1. Send completion event to the host based on the flags set in TRE.
+		 * 2. Increment the local read offset of the transfer ring.
+		 */
+		if (!mhi_chan->tre_bytes_left) {
+			/*
+			 * The host will split the data packet into multiple TREs if it can't fit
+			 * the packet in a single TRE. In that case, CHAIN flag will be set by the
+			 * host for all TREs except the last one.
+			 */
+			if (MHI_TRE_DATA_GET_CHAIN(el)) {
+				/*
+				 * IEOB (Interrupt on End of Block) flag will be set by the host if
+				 * it expects the completion event for all TREs of a TD.
+				 */
+				if (MHI_TRE_DATA_GET_IEOB(el)) {
+					ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el,
+								     MHI_TRE_DATA_GET_LEN(el),
+								     MHI_EV_CC_EOB);
+					if (ret < 0) {
+						dev_err(&mhi_chan->mhi_dev->dev,
+							"Error sending transfer compl. event\n");
+						return ret;
+					}
+				}
+			} else {
+				/*
+				 * IEOT (Interrupt on End of Transfer) flag will be set by the host
+				 * for the last TRE of the TD and expects the completion event for
+				 * the same.
+				 */
+				if (MHI_TRE_DATA_GET_IEOT(el)) {
+					ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el,
+								     MHI_TRE_DATA_GET_LEN(el),
+								     MHI_EV_CC_EOT);
+					if (ret < 0) {
+						dev_err(&mhi_chan->mhi_dev->dev,
+							"Error sending transfer compl. event\n");
+						return ret;
+					}
+				}
+
+				tr_done = true;
+			}
+
+			mhi_ep_ring_inc_index(ring);
+		}
+
+		result->bytes_xferd += tr_len;
+	} while (buf_left && !tr_done);
+
+	return 0;
+}
+
 static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	size_t cmd_ctx_host_size, ch_ctx_host_size, ev_ctx_host_size;
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 681c638833ff..45d12a55b435 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -261,4 +261,13 @@ int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
  */
 void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
 
+/**
+ * mhi_ep_queue_is_empty - Determine whether the transfer queue is empty
+ * @mhi_dev: Device associated with the channels
+ * @dir: DMA direction for the channel
+ *
+ * Return: true if the queue is empty, false otherwise.
+ */
+bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir);
+
 #endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 24/27] bus: mhi: ep: Add support for processing channel rings
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (22 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 23/27] bus: mhi: ep: Add support for reading from the host Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 25/27] bus: mhi: ep: Add support for queueing SKBs to the host Manivannan Sadhasivam
                   ` (4 subsequent siblings)
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for processing the channel rings from the host. For the
channel ring associated with a DL channel, the xfer callback will simply
be invoked. For a UL channel, the ring elements will be read into a
buffer up to the write pointer and then passed to the client driver
through the xfer callback.

The client drivers should provide the callbacks for both UL and DL
channels during registration.

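A hedged sketch of such a registration (the callback names, channel name
and probe/remove stubs are placeholders; the struct fields follow the
client driver support added earlier in this series):

	static const struct mhi_device_id my_ep_client_id_table[] = {
		{ .chan = "IP_SW0" },
		{}
	};

	static struct mhi_ep_driver my_ep_client_driver = {
		.id_table   = my_ep_client_id_table,
		.probe      = my_ep_client_probe,	/* allocates per-device state */
		.remove     = my_ep_client_remove,
		.ul_xfer_cb = my_ep_client_ul_xfer_cb,	/* data read from the host */
		.dl_xfer_cb = my_ep_client_dl_xfer_cb,	/* completions for queued data */
		.driver = {
			.name = "my_ep_client",
		},
	};
	module_mhi_ep_driver(my_ep_client_driver);
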
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 108 ++++++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h    |   2 +
 2 files changed, 110 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index e7c0ef9f281b..63e14d55aa06 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -420,6 +420,57 @@ static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
 	return 0;
 }
 
+int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring, struct mhi_ring_element *el)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+	struct mhi_result result = {};
+	u32 len = MHI_EP_DEFAULT_MTU;
+	struct mhi_ep_chan *mhi_chan;
+	int ret;
+
+	mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+
+	/*
+	 * Bail out if transfer callback is not registered for the channel.
+	 * This is most likely due to the client driver not loaded at this point.
+	 */
+	if (!mhi_chan->xfer_cb) {
+		dev_err(&mhi_chan->mhi_dev->dev, "Client driver not available\n");
+		return -ENODEV;
+	}
+
+	if (ring->ch_id % 2) {
+		/* DL channel */
+		result.dir = mhi_chan->dir;
+		mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+	} else {
+		/* UL channel */
+		result.buf_addr = kzalloc(len, GFP_KERNEL);
+		if (!result.buf_addr)
+			return -ENOMEM;
+
+		do {
+			ret = mhi_ep_read_channel(mhi_cntrl, ring, &result, len);
+			if (ret < 0) {
+				dev_err(&mhi_chan->mhi_dev->dev, "Failed to read channel\n");
+				kfree(result.buf_addr);
+				return ret;
+			}
+
+			result.dir = mhi_chan->dir;
+			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+			result.bytes_xferd = 0;
+			memset(result.buf_addr, 0, len);
+
+			/* Read until the ring becomes empty */
+		} while (!mhi_ep_queue_is_empty(mhi_chan->mhi_dev, DMA_TO_DEVICE));
+
+		kfree(result.buf_addr);
+	}
+
+	return 0;
+}
+
 static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	size_t cmd_ctx_host_size, ch_ctx_host_size, ev_ctx_host_size;
@@ -597,6 +648,60 @@ static void mhi_ep_cmd_ring_worker(struct work_struct *work)
 	}
 }
 
+static void mhi_ep_ch_ring_worker(struct work_struct *work)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, ch_ring_work);
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	struct mhi_ep_ring_item *itr, *tmp;
+	struct mhi_ring_element *el;
+	struct mhi_ep_ring *ring;
+	struct mhi_ep_chan *chan;
+	unsigned long flags;
+	LIST_HEAD(head);
+	int ret;
+
+	spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
+	list_splice_tail_init(&mhi_cntrl->ch_db_list, &head);
+	spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
+
+	/* Process each queued channel ring. In case of an error, just process next element. */
+	list_for_each_entry_safe(itr, tmp, &head, node) {
+		list_del(&itr->node);
+		ring = itr->ring;
+
+		/* Update the write offset for the ring */
+		ret = mhi_ep_update_wr_offset(ring);
+		if (ret) {
+			dev_err(dev, "Error updating write offset for ring\n");
+			kfree(itr);
+			continue;
+		}
+
+		/* Sanity check to make sure there are elements in the ring */
+		if (ring->rd_offset == ring->wr_offset) {
+			kfree(itr);
+			continue;
+		}
+
+		el = &ring->ring_cache[ring->rd_offset];
+		chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+
+		mutex_lock(&chan->lock);
+		dev_dbg(dev, "Processing the ring for channel (%u)\n", ring->ch_id);
+		ret = mhi_ep_process_ch_ring(ring, el);
+		if (ret) {
+			dev_err(dev, "Error processing ring for channel (%u): %d\n",
+				ring->ch_id, ret);
+			mutex_unlock(&chan->lock);
+			kfree(itr);
+			continue;
+		}
+
+		mutex_unlock(&chan->lock);
+		kfree(itr);
+	}
+}
+
 static void mhi_ep_state_worker(struct work_struct *work)
 {
 	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
@@ -662,6 +767,8 @@ static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl, unsigned lon
 		spin_lock(&mhi_cntrl->list_lock);
 		list_splice_tail_init(&head, &mhi_cntrl->ch_db_list);
 		spin_unlock(&mhi_cntrl->list_lock);
+
+		queue_work(mhi_cntrl->wq, &mhi_cntrl->ch_ring_work);
 	}
 }
 
@@ -1152,6 +1259,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
 	INIT_WORK(&mhi_cntrl->reset_work, mhi_ep_reset_worker);
 	INIT_WORK(&mhi_cntrl->cmd_ring_work, mhi_ep_cmd_ring_worker);
+	INIT_WORK(&mhi_cntrl->ch_ring_work, mhi_ep_ch_ring_worker);
 
 	mhi_cntrl->wq = alloc_workqueue("mhi_ep_wq", 0, 0);
 	if (!mhi_cntrl->wq) {
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 45d12a55b435..74170dad09f6 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -78,6 +78,7 @@ struct mhi_ep_db_info {
  * @state_work: State transition worker
  * @reset_work: Worker for MHI Endpoint reset
  * @cmd_ring_work: Worker for processing command rings
+ * @ch_ring_work: Worker for processing channel rings
  * @raise_irq: CB function for raising IRQ to the host
  * @alloc_addr: CB function for allocating memory in endpoint for storing host context
  * @map_addr: CB function for mapping host context to endpoint
@@ -128,6 +129,7 @@ struct mhi_ep_cntrl {
 	struct work_struct state_work;
 	struct work_struct reset_work;
 	struct work_struct cmd_ring_work;
+	struct work_struct ch_ring_work;
 
 	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
 	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 25/27] bus: mhi: ep: Add support for queueing SKBs to the host
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (23 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 24/27] bus: mhi: ep: Add support for processing channel rings Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:51   ` Alex Elder
  2022-02-28 12:43 ` [PATCH v4 26/27] bus: mhi: ep: Add support for suspending and resuming channels Manivannan Sadhasivam
                   ` (3 subsequent siblings)
  28 siblings, 1 reply; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for queueing SKBs to the host over the transfer ring of the
relevant channel. The mhi_ep_queue_skb() API will be used by the client
networking drivers to queue SKBs to the host over the MHI bus.

The host will periodically add ring elements to the transfer ring for
the device and the device will write SKBs to those ring elements. If a
single SKB doesn't fit in a ring element (TRE), it will be placed in
multiple ring elements and the OVERFLOW event will be sent for all ring
elements except the last one. For the last ring element, the EOT event
will be sent to indicate the packet boundary.

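As a usage illustration (not part of this patch; the private struct and
netdev details are assumptions), a client networking driver would call
this API from its transmit path roughly like so:

	/* Hypothetical ndo_start_xmit of an MHI EP networking driver */
	static netdev_tx_t my_ep_net_xmit(struct sk_buff *skb, struct net_device *ndev)
	{
		struct my_ep_net_priv *priv = netdev_priv(ndev);	/* holds the mhi_ep_device */
		int ret;

		ret = mhi_ep_queue_skb(priv->mhi_dev, skb);
		if (ret) {
			/* No TREs queued by the host yet; drop for simplicity */
			ndev->stats.tx_dropped++;
			kfree_skb(skb);
			return NETDEV_TX_OK;
		}

		ndev->stats.tx_packets++;
		ndev->stats.tx_bytes += skb->len;
		consume_skb(skb);

		return NETDEV_TX_OK;
	}
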
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 82 +++++++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h    |  9 +++++
 2 files changed, 91 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 63e14d55aa06..25d34cf26fd7 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -471,6 +471,88 @@ int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring, struct mhi_ring_element *el
 	return 0;
 }
 
+/* TODO: Handle partially formed TDs */
+int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, struct sk_buff *skb)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_ep_chan *mhi_chan = mhi_dev->dl_chan;
+	struct device *dev = &mhi_chan->mhi_dev->dev;
+	struct mhi_ring_element *el;
+	u32 buf_left, read_offset;
+	struct mhi_ep_ring *ring;
+	enum mhi_ev_ccs code;
+	void *read_addr;
+	u64 write_addr;
+	size_t tr_len;
+	u32 tre_len;
+	int ret;
+
+	buf_left = skb->len;
+	ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
+
+	mutex_lock(&mhi_chan->lock);
+
+	do {
+		/* Don't process the transfer ring if the channel is not in RUNNING state */
+		if (mhi_chan->state != MHI_CH_STATE_RUNNING) {
+			dev_err(dev, "Channel not available\n");
+			ret = -ENODEV;
+			goto err_exit;
+		}
+
+		if (mhi_ep_queue_is_empty(mhi_dev, DMA_FROM_DEVICE)) {
+			dev_err(dev, "TRE not available!\n");
+			ret = -ENOSPC;
+			goto err_exit;
+		}
+
+		el = &ring->ring_cache[ring->rd_offset];
+		tre_len = MHI_TRE_DATA_GET_LEN(el);
+
+		tr_len = min(buf_left, tre_len);
+		read_offset = skb->len - buf_left;
+		read_addr = skb->data + read_offset;
+		write_addr = MHI_TRE_DATA_GET_PTR(el);
+
+		dev_dbg(dev, "Writing %zd bytes to channel (%u)\n", tr_len, ring->ch_id);
+		ret = mhi_cntrl->write_to_host(mhi_cntrl, read_addr, write_addr, tr_len);
+		if (ret < 0) {
+			dev_err(dev, "Error writing to the channel\n");
+			goto err_exit;
+		}
+
+		buf_left -= tr_len;
+		/*
+		 * For all TREs queued by the host for DL channel, only the EOT flag will be set.
+		 * If the packet doesn't fit into a single TRE, send the OVERFLOW event to
+		 * the host so that the host can adjust the packet boundary to next TREs. Else send
+		 * the EOT event to the host indicating the packet boundary.
+		 */
+		if (buf_left)
+			code = MHI_EV_CC_OVERFLOW;
+		else
+			code = MHI_EV_CC_EOT;
+
+		ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el, tr_len, code);
+		if (ret) {
+			dev_err(dev, "Error sending transfer completion event\n");
+			goto err_exit;
+		}
+
+		mhi_ep_ring_inc_index(ring);
+	} while (buf_left);
+
+	mutex_unlock(&mhi_chan->lock);
+
+	return 0;
+
+err_exit:
+	mutex_unlock(&mhi_chan->lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_ep_queue_skb);
+
 static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	size_t cmd_ctx_host_size, ch_ctx_host_size, ev_ctx_host_size;
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 74170dad09f6..bd3ffde01f04 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -272,4 +272,13 @@ void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
  */
 bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir);
 
+/**
+ * mhi_ep_queue_skb - Send SKBs to host over MHI Endpoint
+ * @mhi_dev: Device associated with the DL channel
+ * @skb: SKB to be queued
+ *
+ * Return: 0 if the SKB has been sent successfully, a negative error code otherwise.
+ */
+int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, struct sk_buff *skb);
+
 #endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 26/27] bus: mhi: ep: Add support for suspending and resuming channels
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (24 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 25/27] bus: mhi: ep: Add support for queueing SKBs to the host Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 12:43 ` [PATCH v4 27/27] bus: mhi: ep: Add uevent support for module autoloading Manivannan Sadhasivam
                   ` (2 subsequent siblings)
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for suspending and resuming the channels in the MHI endpoint
stack. The channels will be moved to the SUSPENDED state during the M3
state transition and will be resumed during the M0 transition.

Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/internal.h |  2 ++
 drivers/bus/mhi/ep/main.c     | 58 +++++++++++++++++++++++++++++++++++
 drivers/bus/mhi/ep/sm.c       |  5 +++
 3 files changed, 65 insertions(+)

diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index a229d8b70227..14fbf4e41ebf 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -210,6 +210,8 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
 void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_resume_channels(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_suspend_channels(struct mhi_ep_cntrl *mhi_cntrl);
 
 /* MHI EP memory management functions */
 int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 25d34cf26fd7..3efdbf924076 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -1129,6 +1129,64 @@ void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl)
 }
 EXPORT_SYMBOL_GPL(mhi_ep_power_down);
 
+void mhi_ep_suspend_channels(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct mhi_ep_chan *mhi_chan;
+	u32 tmp;
+	int i;
+
+	for (i = 0; i < mhi_cntrl->max_chan; i++) {
+		mhi_chan = &mhi_cntrl->mhi_chan[i];
+
+		if (!mhi_chan->mhi_dev)
+			continue;
+
+		mutex_lock(&mhi_chan->lock);
+		/* Skip if the channel is not currently running */
+		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[i].chcfg);
+		if (FIELD_GET(CHAN_CTX_CHSTATE_MASK, tmp) != MHI_CH_STATE_RUNNING) {
+			mutex_unlock(&mhi_chan->lock);
+			continue;
+		}
+
+		dev_dbg(&mhi_chan->mhi_dev->dev, "Suspending channel\n");
+		/* Set channel state to SUSPENDED */
+		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_SUSPENDED);
+		mhi_cntrl->ch_ctx_cache[i].chcfg = cpu_to_le32(tmp);
+		mutex_unlock(&mhi_chan->lock);
+	}
+}
+
+void mhi_ep_resume_channels(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct mhi_ep_chan *mhi_chan;
+	u32 tmp;
+	int i;
+
+	for (i = 0; i < mhi_cntrl->max_chan; i++) {
+		mhi_chan = &mhi_cntrl->mhi_chan[i];
+
+		if (!mhi_chan->mhi_dev)
+			continue;
+
+		mutex_lock(&mhi_chan->lock);
+		/* Skip if the channel is not currently suspended */
+		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[i].chcfg);
+		if (FIELD_GET(CHAN_CTX_CHSTATE_MASK, tmp) != MHI_CH_STATE_SUSPENDED) {
+			mutex_unlock(&mhi_chan->lock);
+			continue;
+		}
+
+		dev_dbg(&mhi_chan->mhi_dev->dev, "Resuming channel\n");
+		/* Set channel state to RUNNING */
+		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_RUNNING);
+		mhi_cntrl->ch_ctx_cache[i].chcfg = cpu_to_le32(tmp);
+		mutex_unlock(&mhi_chan->lock);
+	}
+}
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
index 4d6e8c2d615c..22b578bd851b 100644
--- a/drivers/bus/mhi/ep/sm.c
+++ b/drivers/bus/mhi/ep/sm.c
@@ -62,8 +62,11 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
 	enum mhi_state old_state;
 	int ret;
 
+	/* If MHI is in M3, resume suspended channels */
 	spin_lock_bh(&mhi_cntrl->state_lock);
 	old_state = mhi_cntrl->mhi_state;
+	if (old_state == MHI_STATE_M3)
+		mhi_ep_resume_channels(mhi_cntrl);
 
 	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
 	spin_unlock_bh(&mhi_cntrl->state_lock);
@@ -106,6 +109,8 @@ int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
 		return ret;
 	}
 
+	mhi_ep_suspend_channels(mhi_cntrl);
+
 	/* Signal host that the device moved to M3 */
 	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);
 	if (ret) {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v4 27/27] bus: mhi: ep: Add uevent support for module autoloading
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (25 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 26/27] bus: mhi: ep: Add support for suspending and resuming channels Manivannan Sadhasivam
@ 2022-02-28 12:43 ` Manivannan Sadhasivam
  2022-02-28 16:57 ` [PATCH v4 00/27] Add initial support for MHI endpoint stack Alex Elder
  2022-03-01  8:50 ` Manivannan Sadhasivam
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-28 12:43 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add uevent support to the MHI endpoint bus so that the client drivers can
be autoloaded by udev when the MHI endpoint devices get created. The
client drivers are expected to provide MODULE_DEVICE_TABLE with the MHI
id_table struct so that the module alias can be exported.

The MHI endpoint stack reuses the mhi_device_id structure of the MHI bus.

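A hedged example of what a client driver would carry so that udev can
autoload it (the channel name is chosen purely for illustration):

	static const struct mhi_device_id my_ep_client_id_table[] = {
		{ .chan = "IP_SW0" },
		{}
	};
	MODULE_DEVICE_TABLE(mhi_ep, my_ep_client_id_table);

This exports a module alias of the form "mhi_ep:IP_SW0", matching the
MODALIAS value emitted via the bus uevent callback below.
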
Reviewed-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c       |  9 +++++++++
 include/linux/mod_devicetable.h |  2 ++
 scripts/mod/file2alias.c        | 10 ++++++++++
 3 files changed, 21 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 3efdbf924076..ce59f38b59a7 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -1568,6 +1568,14 @@ void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv)
 }
 EXPORT_SYMBOL_GPL(mhi_ep_driver_unregister);
 
+static int mhi_ep_uevent(struct device *dev, struct kobj_uevent_env *env)
+{
+	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+
+	return add_uevent_var(env, "MODALIAS=" MHI_EP_DEVICE_MODALIAS_FMT,
+					mhi_dev->name);
+}
+
 static int mhi_ep_match(struct device *dev, struct device_driver *drv)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -1594,6 +1602,7 @@ struct bus_type mhi_ep_bus_type = {
 	.name = "mhi_ep",
 	.dev_name = "mhi_ep",
 	.match = mhi_ep_match,
+	.uevent = mhi_ep_uevent,
 };
 
 static int __init mhi_ep_init(void)
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index 4bb71979a8fd..0cff19bd72bf 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -835,6 +835,8 @@ struct wmi_device_id {
 #define MHI_DEVICE_MODALIAS_FMT "mhi:%s"
 #define MHI_NAME_SIZE 32
 
+#define MHI_EP_DEVICE_MODALIAS_FMT "mhi_ep:%s"
+
 /**
  * struct mhi_device_id - MHI device identification
  * @chan: MHI channel name
diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
index 5258247d78ac..d9d6a31446ea 100644
--- a/scripts/mod/file2alias.c
+++ b/scripts/mod/file2alias.c
@@ -1391,6 +1391,15 @@ static int do_mhi_entry(const char *filename, void *symval, char *alias)
 	return 1;
 }
 
+/* Looks like: mhi_ep:S */
+static int do_mhi_ep_entry(const char *filename, void *symval, char *alias)
+{
+	DEF_FIELD_ADDR(symval, mhi_device_id, chan);
+	sprintf(alias, MHI_EP_DEVICE_MODALIAS_FMT, *chan);
+
+	return 1;
+}
+
 /* Looks like: ishtp:{guid} */
 static int do_ishtp_entry(const char *filename, void *symval, char *alias)
 {
@@ -1519,6 +1528,7 @@ static const struct devtable devtable[] = {
 	{"tee", SIZE_tee_client_device_id, do_tee_entry},
 	{"wmi", SIZE_wmi_device_id, do_wmi_entry},
 	{"mhi", SIZE_mhi_device_id, do_mhi_entry},
+	{"mhi_ep", SIZE_mhi_device_id, do_mhi_ep_entry},
 	{"auxiliary", SIZE_auxiliary_device_id, do_auxiliary_entry},
 	{"ssam", SIZE_ssam_device_id, do_ssam_entry},
 	{"dfl", SIZE_dfl_device_id, do_dfl_entry},
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* RE: [PATCH v4 05/27] bus: mhi: Use bitfield operations for handling DWORDs of ring elements
  2022-02-28 12:43 ` [PATCH v4 05/27] bus: mhi: Use bitfield operations for handling DWORDs of ring elements Manivannan Sadhasivam
@ 2022-02-28 14:00   ` David Laight
  2022-02-28 14:43     ` 'Manivannan Sadhasivam'
  0 siblings, 1 reply; 52+ messages in thread
From: David Laight @ 2022-02-28 14:00 UTC (permalink / raw)
  To: 'Manivannan Sadhasivam', mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder

From: Manivannan Sadhasivam
> Sent: 28 February 2022 12:43
> 
> Instead of using the hardcoded bits in DWORD definitions, let's use the
> bitfield operations to make it more clear how the DWORDs are structured.

That all makes it as clear as mud.
Try reading it!

	David

> 
> Suggested-by: Alex Elder <elder@linaro.org>
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>  drivers/bus/mhi/host/internal.h | 58 +++++++++++++++++++--------------
>  1 file changed, 33 insertions(+), 25 deletions(-)
> 
> diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
> index 156bf65b6810..1d1790e83a93 100644
> --- a/drivers/bus/mhi/host/internal.h
> +++ b/drivers/bus/mhi/host/internal.h
> @@ -7,6 +7,7 @@
>  #ifndef _MHI_INT_H
>  #define _MHI_INT_H
> 
> +#include <linux/bitfield.h>
>  #include <linux/mhi.h>
> 
>  extern struct bus_type mhi_bus_type;
> @@ -205,58 +206,65 @@ enum mhi_cmd_type {
>  /* No operation command */
>  #define MHI_TRE_CMD_NOOP_PTR (0)
>  #define MHI_TRE_CMD_NOOP_DWORD0 (0)
> -#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
> +#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_CMD_NOP)))
> 
>  /* Channel reset command */
>  #define MHI_TRE_CMD_RESET_PTR (0)
>  #define MHI_TRE_CMD_RESET_DWORD0 (0)
> -#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> -					(MHI_CMD_RESET_CHAN << 16)))
> +#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
> +					FIELD_PREP(GENMASK(23, 16), MHI_CMD_RESET_CHAN))
> 
>  /* Channel stop command */
>  #define MHI_TRE_CMD_STOP_PTR (0)
>  #define MHI_TRE_CMD_STOP_DWORD0 (0)
> -#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> -				       (MHI_CMD_STOP_CHAN << 16)))
> +#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
> +					FIELD_PREP(GENMASK(23, 16), MHI_CMD_STOP_CHAN))
> 
>  /* Channel start command */
>  #define MHI_TRE_CMD_START_PTR (0)
>  #define MHI_TRE_CMD_START_DWORD0 (0)
> -#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> -					(MHI_CMD_START_CHAN << 16)))
> +#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
> +					FIELD_PREP(GENMASK(23, 16), MHI_CMD_START_CHAN))
> 
>  #define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
> -#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> -#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> +#define MHI_TRE_GET_CMD_CHID(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
> +#define MHI_TRE_GET_CMD_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
> 
>  /* Event descriptor macros */
>  #define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
> -#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
> -#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
> +#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), code) | \
> +						FIELD_PREP(GENMASK(15, 0), len)))
> +#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
> +						FIELD_PREP(GENMASK(23, 16), type)))
>  #define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
> -#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> -#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_CODE(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
> +#define MHI_TRE_GET_EV_LEN(tre) (FIELD_GET(GENMASK(15, 0), (MHI_TRE_GET_DWORD(tre, 0))))
> +#define MHI_TRE_GET_EV_CHID(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
> +#define MHI_TRE_GET_EV_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
> +#define MHI_TRE_GET_EV_STATE(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
> +#define MHI_TRE_GET_EV_EXECENV(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
>  #define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
>  #define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
>  #define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> -#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
> +#define MHI_TRE_GET_EV_VEID(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 0))))
> +#define MHI_TRE_GET_EV_LINKSPEED(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
> +#define MHI_TRE_GET_EV_LINKWIDTH(tre) (FIELD_GET(GENMASK(7, 0), (MHI_TRE_GET_DWORD(tre, 0))))
> 
>  /* Transfer descriptor macros */
>  #define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
> -#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
> -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
> -	| (ieot << 9) | (ieob << 8) | chain))
> +#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len)))
> +#define MHI_TRE_TYPE_TRANSFER 2
> +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
> +							MHI_TRE_TYPE_TRANSFER) | \
> +							FIELD_PREP(BIT(10), bei) | \
> +							FIELD_PREP(BIT(9), ieot) | \
> +							FIELD_PREP(BIT(8), ieob) | \
> +							FIELD_PREP(BIT(0), chain)))
> 
>  /* RSC transfer descriptor macros */
> -#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
> +#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr))
>  #define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
> -#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
> +#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_PKT_TYPE_COALESCING)
> 
>  enum mhi_pkt_type {
>  	MHI_PKT_TYPE_INVALID = 0x0,
> --
> 2.25.1

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 05/27] bus: mhi: Use bitfield operations for handling DWORDs of ring elements
  2022-02-28 14:00   ` David Laight
@ 2022-02-28 14:43     ` 'Manivannan Sadhasivam'
  2022-02-28 15:11       ` Alex Elder
  2022-02-28 15:40       ` David Laight
  0 siblings, 2 replies; 52+ messages in thread
From: 'Manivannan Sadhasivam' @ 2022-02-28 14:43 UTC (permalink / raw)
  To: David Laight
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder

On Mon, Feb 28, 2022 at 02:00:07PM +0000, David Laight wrote:
> From: Manivannan Sadhasivam
> > Sent: 28 February 2022 12:43
> > 
> > Instead of using the hardcoded bits in DWORD definitions, let's use the
> > bitfield operations to make it more clear how the DWORDs are structured.
> 
> That all makes it as clear as mud.

It depends on how you see it ;)

For instance,

#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)

vs

#define MHI_TRE_GET_CMD_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))

The latter one makes it more obvious that the "type" field resides between bits 23
and 16. Plus it avoids the extra masking.
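
For anyone not used to <linux/bitfield.h>, the two forms boil down to the same
thing; roughly (a sketch, not the real macro definitions):

	GENMASK(23, 16)                  /* == 0x00ff0000 */
	FIELD_GET(GENMASK(23, 16), x)    /* == (x & 0x00ff0000) >> 16, i.e. (x >> 16) & 0xff */
	FIELD_PREP(GENMASK(23, 16), v)   /* == (v << 16) & 0x00ff0000 */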

> Try reading it!
> 

Well I did before sending the patch.

Thanks,
Mani

> 	David
> 
> > 
> > Suggested-by: Alex Elder <elder@linaro.org>
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > ---
> >  drivers/bus/mhi/host/internal.h | 58 +++++++++++++++++++--------------
> >  1 file changed, 33 insertions(+), 25 deletions(-)
> > 
> > diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
> > index 156bf65b6810..1d1790e83a93 100644
> > --- a/drivers/bus/mhi/host/internal.h
> > +++ b/drivers/bus/mhi/host/internal.h
> > @@ -7,6 +7,7 @@
> >  #ifndef _MHI_INT_H
> >  #define _MHI_INT_H
> > 
> > +#include <linux/bitfield.h>
> >  #include <linux/mhi.h>
> > 
> >  extern struct bus_type mhi_bus_type;
> > @@ -205,58 +206,65 @@ enum mhi_cmd_type {
> >  /* No operation command */
> >  #define MHI_TRE_CMD_NOOP_PTR (0)
> >  #define MHI_TRE_CMD_NOOP_DWORD0 (0)
> > -#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
> > +#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_CMD_NOP)))
> > 
> >  /* Channel reset command */
> >  #define MHI_TRE_CMD_RESET_PTR (0)
> >  #define MHI_TRE_CMD_RESET_DWORD0 (0)
> > -#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > -					(MHI_CMD_RESET_CHAN << 16)))
> > +#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
> > +					FIELD_PREP(GENMASK(23, 16), MHI_CMD_RESET_CHAN))
> > 
> >  /* Channel stop command */
> >  #define MHI_TRE_CMD_STOP_PTR (0)
> >  #define MHI_TRE_CMD_STOP_DWORD0 (0)
> > -#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > -				       (MHI_CMD_STOP_CHAN << 16)))
> > +#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
> > +					FIELD_PREP(GENMASK(23, 16), MHI_CMD_STOP_CHAN))
> > 
> >  /* Channel start command */
> >  #define MHI_TRE_CMD_START_PTR (0)
> >  #define MHI_TRE_CMD_START_DWORD0 (0)
> > -#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > -					(MHI_CMD_START_CHAN << 16)))
> > +#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
> > +					FIELD_PREP(GENMASK(23, 16), MHI_CMD_START_CHAN))
> > 
> >  #define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
> > -#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > -#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> > +#define MHI_TRE_GET_CMD_CHID(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
> > +#define MHI_TRE_GET_CMD_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
> > 
> >  /* Event descriptor macros */
> >  #define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
> > -#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
> > -#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
> > +#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), code) | \
> > +						FIELD_PREP(GENMASK(15, 0), len)))
> > +#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
> > +						FIELD_PREP(GENMASK(23, 16), type)))
> >  #define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
> > -#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> > -#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> > -#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_EV_CODE(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
> > +#define MHI_TRE_GET_EV_LEN(tre) (FIELD_GET(GENMASK(15, 0), (MHI_TRE_GET_DWORD(tre, 0))))
> > +#define MHI_TRE_GET_EV_CHID(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
> > +#define MHI_TRE_GET_EV_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
> > +#define MHI_TRE_GET_EV_STATE(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
> > +#define MHI_TRE_GET_EV_EXECENV(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
> >  #define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
> >  #define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
> >  #define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> > -#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> > -#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
> > +#define MHI_TRE_GET_EV_VEID(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 0))))
> > +#define MHI_TRE_GET_EV_LINKSPEED(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
> > +#define MHI_TRE_GET_EV_LINKWIDTH(tre) (FIELD_GET(GENMASK(7, 0), (MHI_TRE_GET_DWORD(tre, 0))))
> > 
> >  /* Transfer descriptor macros */
> >  #define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
> > -#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
> > -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
> > -	| (ieot << 9) | (ieob << 8) | chain))
> > +#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len)))
> > +#define MHI_TRE_TYPE_TRANSFER 2
> > +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
> > +							MHI_TRE_TYPE_TRANSFER) | \
> > +							FIELD_PREP(BIT(10), bei) | \
> > +							FIELD_PREP(BIT(9), ieot) | \
> > +							FIELD_PREP(BIT(8), ieob) | \
> > +							FIELD_PREP(BIT(0), chain)))
> > 
> >  /* RSC transfer descriptor macros */
> > -#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
> > +#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr))
> >  #define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
> > -#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
> > +#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_PKT_TYPE_COALESCING)
> > 
> >  enum mhi_pkt_type {
> >  	MHI_PKT_TYPE_INVALID = 0x0,
> > --
> > 2.25.1
> 
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)
> 

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 05/27] bus: mhi: Use bitfield operations for handling DWORDs of ring elements
  2022-02-28 14:43     ` 'Manivannan Sadhasivam'
@ 2022-02-28 15:11       ` Alex Elder
  2022-02-28 15:40       ` David Laight
  1 sibling, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 15:11 UTC (permalink / raw)
  To: 'Manivannan Sadhasivam', David Laight
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 8:43 AM, 'Manivannan Sadhasivam' wrote:
> On Mon, Feb 28, 2022 at 02:00:07PM +0000, David Laight wrote:
>> From: Manivannan Sadhasivam
>>> Sent: 28 February 2022 12:43
>>>
>>> Instead of using the hardcoded bits in DWORD definitions, let's use the
>>> bitfield operations to make it more clear how the DWORDs are structured.
>>
>> That all makes it as clear as mud.
> 
> It depends on how you see it ;)

It's possible David was commenting on the description, but I'm not sure.

> For instance,
> 
> #define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> 
> vs
> 
> #define MHI_TRE_GET_CMD_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))

Maybe you should create a static inline function to
encapsulate this (and the same for others).

In other words, something like:

#define MHI_TRE_DWORD_1_CMD_TYPE_MASK	GENMASK(23, 16)

static inline enum mhi_pkt_type
mhi_tre_cmd_type(struct mhi_ring_element *el)
{
	u32 dword = le32_to_cpu(el->dword[1]);

	return FIELD_GET(MHI_TRE_DWORD_1_CMD_TYPE_MASK, dword);
}

It's still a little messy, but breaking it out makes it a
little easier to understand, and the function makes the
types involved a little more obvious.
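
A call site would then read something like this (sketch only, assuming a
helper like the one above and the existing MHI_CMD_RESET_CHAN value):

	/* sketch: instead of MHI_TRE_GET_CMD_TYPE(cmd_pkt) == MHI_CMD_RESET_CHAN */
	if (mhi_tre_cmd_type(cmd_pkt) == MHI_CMD_RESET_CHAN) {
		/* handle the channel reset completion */
	}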

					-Alex

> The latter one makes it more obvious that the "type" field resides between bits 23
> and 16. Plus it avoids the extra masking.
> 
>> Try reading it!
>>
> 
> Well I did before sending the patch.
> 
> Thanks,
> Mani
> 
>> 	David
>>
>>>
>>> Suggested-by: Alex Elder <elder@linaro.org>
>>> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>> ---
>>>   drivers/bus/mhi/host/internal.h | 58 +++++++++++++++++++--------------
>>>   1 file changed, 33 insertions(+), 25 deletions(-)
>>>
>>> diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
>>> index 156bf65b6810..1d1790e83a93 100644
>>> --- a/drivers/bus/mhi/host/internal.h
>>> +++ b/drivers/bus/mhi/host/internal.h
>>> @@ -7,6 +7,7 @@
>>>   #ifndef _MHI_INT_H
>>>   #define _MHI_INT_H
>>>
>>> +#include <linux/bitfield.h>
>>>   #include <linux/mhi.h>
>>>
>>>   extern struct bus_type mhi_bus_type;
>>> @@ -205,58 +206,65 @@ enum mhi_cmd_type {
>>>   /* No operation command */
>>>   #define MHI_TRE_CMD_NOOP_PTR (0)
>>>   #define MHI_TRE_CMD_NOOP_DWORD0 (0)
>>> -#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
>>> +#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_CMD_NOP)))
>>>
>>>   /* Channel reset command */
>>>   #define MHI_TRE_CMD_RESET_PTR (0)
>>>   #define MHI_TRE_CMD_RESET_DWORD0 (0)
>>> -#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
>>> -					(MHI_CMD_RESET_CHAN << 16)))
>>> +#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
>>> +					FIELD_PREP(GENMASK(23, 16), MHI_CMD_RESET_CHAN))
>>>
>>>   /* Channel stop command */
>>>   #define MHI_TRE_CMD_STOP_PTR (0)
>>>   #define MHI_TRE_CMD_STOP_DWORD0 (0)
>>> -#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
>>> -				       (MHI_CMD_STOP_CHAN << 16)))
>>> +#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
>>> +					FIELD_PREP(GENMASK(23, 16), MHI_CMD_STOP_CHAN))
>>>
>>>   /* Channel start command */
>>>   #define MHI_TRE_CMD_START_PTR (0)
>>>   #define MHI_TRE_CMD_START_DWORD0 (0)
>>> -#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
>>> -					(MHI_CMD_START_CHAN << 16)))
>>> +#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid)) | \
>>> +					FIELD_PREP(GENMASK(23, 16), MHI_CMD_START_CHAN))
>>>
>>>   #define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
>>> -#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
>>> -#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
>>> +#define MHI_TRE_GET_CMD_CHID(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
>>> +#define MHI_TRE_GET_CMD_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
>>>
>>>   /* Event descriptor macros */
>>>   #define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
>>> -#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
>>> -#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
>>> +#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), code) | \
>>> +						FIELD_PREP(GENMASK(15, 0), len)))
>>> +#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32(FIELD_PREP(GENMASK(31, 24), chid) | \
>>> +						FIELD_PREP(GENMASK(23, 16), type)))
>>>   #define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
>>> -#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
>>> -#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
>>> -#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
>>> -#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
>>> -#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
>>> -#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
>>> +#define MHI_TRE_GET_EV_CODE(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
>>> +#define MHI_TRE_GET_EV_LEN(tre) (FIELD_GET(GENMASK(15, 0), (MHI_TRE_GET_DWORD(tre, 0))))
>>> +#define MHI_TRE_GET_EV_CHID(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
>>> +#define MHI_TRE_GET_EV_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
>>> +#define MHI_TRE_GET_EV_STATE(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
>>> +#define MHI_TRE_GET_EV_EXECENV(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 0))))
>>>   #define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
>>>   #define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
>>>   #define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
>>> -#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
>>> -#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
>>> -#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
>>> +#define MHI_TRE_GET_EV_VEID(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 0))))
>>> +#define MHI_TRE_GET_EV_LINKSPEED(tre) (FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1))))
>>> +#define MHI_TRE_GET_EV_LINKWIDTH(tre) (FIELD_GET(GENMASK(7, 0), (MHI_TRE_GET_DWORD(tre, 0))))
>>>
>>>   /* Transfer descriptor macros */
>>>   #define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
>>> -#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
>>> -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
>>> -	| (ieot << 9) | (ieob << 8) | chain))
>>> +#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len)))
>>> +#define MHI_TRE_TYPE_TRANSFER 2
>>> +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), \
>>> +							MHI_TRE_TYPE_TRANSFER) | \
>>> +							FIELD_PREP(BIT(10), bei) | \
>>> +							FIELD_PREP(BIT(9), ieot) | \
>>> +							FIELD_PREP(BIT(8), ieob) | \
>>> +							FIELD_PREP(BIT(0), chain)))
>>>
>>>   /* RSC transfer descriptor macros */
>>> -#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
>>> +#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr))
>>>   #define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
>>> -#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
>>> +#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(FIELD_PREP(GENMASK(23, 16), MHI_PKT_TYPE_COALESCING)
>>>
>>>   enum mhi_pkt_type {
>>>   	MHI_PKT_TYPE_INVALID = 0x0,
>>> --
>>> 2.25.1
>>
>> -
>> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
>> Registration No: 1397386 (Wales)
>>


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 01/27] bus: mhi: Fix pm_state conversion to string
  2022-02-28 12:43 ` [PATCH v4 01/27] bus: mhi: Fix pm_state conversion to string Manivannan Sadhasivam
@ 2022-02-28 15:30   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 15:30 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, Paul Davey,
	Manivannan Sadhasivam, Hemant Kumar, stable

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> From: Paul Davey <paul.davey@alliedtelesis.co.nz>
> 
> On big endian architectures the mhi debugfs files which report pm state
> give "Invalid State" for all states.  This is caused by using
> find_last_bit which takes an unsigned long* while the state is passed in
> as an enum mhi_pm_state which will be of int size.
> 
> Fix by using __fls to pass the value of state instead of find_last_bit.
> 
> Also the current API expects "mhi_pm_state" enumerator as the function
> argument but the function only works with bitmasks. So as Alex suggested,
> let's change the argument to u32 to avoid confusion.
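
As a side note, a minimal sketch (not part of the patch) of why __fls() fits
here: the pm_state values this gets called with are single-bit masks, so the
index of the highest set bit picks the string directly, and __fls(0) is
undefined, which is what the added !state check guards against.

	__fls(BIT(0));	/* == 0 -> mhi_pm_state_str[0] */
	__fls(BIT(5));	/* == 5 -> mhi_pm_state_str[5] */
	/* __fls(0) is undefined, hence the new "if (state)" / "!state" checks */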

(Grumble grumble too much static data in header file.)

Reviewed-by: Alex Elder <elder@linaro.org>

> Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
> Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
> Reviewed-by: Manivannan Sadhasivam <mani@kernel.org>
> Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
> Cc: stable@vger.kernel.org
> [mani: changed the function argument to u32]
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>   drivers/bus/mhi/core/init.c     | 10 ++++++----
>   drivers/bus/mhi/core/internal.h |  2 +-
>   2 files changed, 7 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
> index 046f407dc5d6..09394a1c29ec 100644
> --- a/drivers/bus/mhi/core/init.c
> +++ b/drivers/bus/mhi/core/init.c
> @@ -77,12 +77,14 @@ static const char * const mhi_pm_state_str[] = {
>   	[MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "Linkdown or Error Fatal Detect",
>   };
>   
> -const char *to_mhi_pm_state_str(enum mhi_pm_state state)
> +const char *to_mhi_pm_state_str(u32 state)
>   {
> -	unsigned long pm_state = state;
> -	int index = find_last_bit(&pm_state, 32);
> +	int index;
>   
> -	if (index >= ARRAY_SIZE(mhi_pm_state_str))
> +	if (state)
> +		index = __fls(state);
> +
> +	if (!state || index >= ARRAY_SIZE(mhi_pm_state_str))
>   		return "Invalid State";
>   
>   	return mhi_pm_state_str[index];
> diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
> index e2e10474a9d9..3508cbbf555d 100644
> --- a/drivers/bus/mhi/core/internal.h
> +++ b/drivers/bus/mhi/core/internal.h
> @@ -622,7 +622,7 @@ void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
>   enum mhi_pm_state __must_check mhi_tryset_pm_state(
>   					struct mhi_controller *mhi_cntrl,
>   					enum mhi_pm_state state);
> -const char *to_mhi_pm_state_str(enum mhi_pm_state state);
> +const char *to_mhi_pm_state_str(u32 state);
>   int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
>   			       enum dev_st_transition state);
>   void mhi_pm_st_worker(struct work_struct *work);


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 02/27] bus: mhi: Fix MHI DMA structure endianness
  2022-02-28 12:43 ` [PATCH v4 02/27] bus: mhi: Fix MHI DMA structure endianness Manivannan Sadhasivam
@ 2022-02-28 15:40   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 15:40 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, Paul Davey, stable

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> From: Paul Davey <paul.davey@alliedtelesis.co.nz>
> 
> The MHI driver does not work on big endian architectures.  The
> controller never transitions into mission mode.  This appears to be due
> to the modem device expecting the various contexts and transfer rings to
> have fields in little endian order in memory, but the driver constructs
> them in native endianness.
> 
> Fix MHI event, channel and command contexts and TRE handling macros to
> use explicit conversion to little endian.  Mark fields in relevant
> structures as little endian to document this requirement.
> 
> Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
> Fixes: 6cd330ae76ff ("bus: mhi: core: Add support for ringing channel/event ring doorbells")
> Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
> Cc: stable@vger.kernel.org
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
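
In short, the conversion pattern applied throughout is (a sketch assembled
from the hunks below, not a literal hunk from the patch):

	__le32 erindex;					/* field declared explicitly little-endian */
	chan_ctxt->erindex = cpu_to_le32(mhi_chan->er_index);	/* write: CPU -> LE */
	u32 index = le32_to_cpu(chan_ctxt->erindex);		/* read:  LE -> CPU */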

I didn't review it as carefully this time around, but this looks
good enough for me.

Reviewed-by: Alex Elder <elder@linaro.org>


> ---
>   drivers/bus/mhi/core/debugfs.c  |  26 +++----
>   drivers/bus/mhi/core/init.c     |  36 +++++-----
>   drivers/bus/mhi/core/internal.h | 119 ++++++++++++++++----------------
>   drivers/bus/mhi/core/main.c     |  22 +++---
>   drivers/bus/mhi/core/pm.c       |   4 +-
>   5 files changed, 104 insertions(+), 103 deletions(-)
> 
> diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/core/debugfs.c
> index 858d7516410b..d818586c229d 100644
> --- a/drivers/bus/mhi/core/debugfs.c
> +++ b/drivers/bus/mhi/core/debugfs.c
> @@ -60,16 +60,16 @@ static int mhi_debugfs_events_show(struct seq_file *m, void *d)
>   		}
>   
>   		seq_printf(m, "Index: %d intmod count: %lu time: %lu",
> -			   i, (er_ctxt->intmod & EV_CTX_INTMODC_MASK) >>
> +			   i, (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODC_MASK) >>
>   			   EV_CTX_INTMODC_SHIFT,
> -			   (er_ctxt->intmod & EV_CTX_INTMODT_MASK) >>
> +			   (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODT_MASK) >>
>   			   EV_CTX_INTMODT_SHIFT);
>   
> -		seq_printf(m, " base: 0x%0llx len: 0x%llx", er_ctxt->rbase,
> -			   er_ctxt->rlen);
> +		seq_printf(m, " base: 0x%0llx len: 0x%llx", le64_to_cpu(er_ctxt->rbase),
> +			   le64_to_cpu(er_ctxt->rlen));
>   
> -		seq_printf(m, " rp: 0x%llx wp: 0x%llx", er_ctxt->rp,
> -			   er_ctxt->wp);
> +		seq_printf(m, " rp: 0x%llx wp: 0x%llx", le64_to_cpu(er_ctxt->rp),
> +			   le64_to_cpu(er_ctxt->wp));
>   
>   		seq_printf(m, " local rp: 0x%pK db: 0x%pad\n", ring->rp,
>   			   &mhi_event->db_cfg.db_val);
> @@ -106,18 +106,18 @@ static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
>   
>   		seq_printf(m,
>   			   "%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
> -			   mhi_chan->name, mhi_chan->chan, (chan_ctxt->chcfg &
> +			   mhi_chan->name, mhi_chan->chan, (le32_to_cpu(chan_ctxt->chcfg) &
>   			   CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
> -			   (chan_ctxt->chcfg & CHAN_CTX_BRSTMODE_MASK) >>
> -			   CHAN_CTX_BRSTMODE_SHIFT, (chan_ctxt->chcfg &
> +			   (le32_to_cpu(chan_ctxt->chcfg) & CHAN_CTX_BRSTMODE_MASK) >>
> +			   CHAN_CTX_BRSTMODE_SHIFT, (le32_to_cpu(chan_ctxt->chcfg) &
>   			   CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
>   
> -		seq_printf(m, " type: 0x%x event ring: %u", chan_ctxt->chtype,
> -			   chan_ctxt->erindex);
> +		seq_printf(m, " type: 0x%x event ring: %u", le32_to_cpu(chan_ctxt->chtype),
> +			   le32_to_cpu(chan_ctxt->erindex));
>   
>   		seq_printf(m, " base: 0x%llx len: 0x%llx rp: 0x%llx wp: 0x%llx",
> -			   chan_ctxt->rbase, chan_ctxt->rlen, chan_ctxt->rp,
> -			   chan_ctxt->wp);
> +			   le64_to_cpu(chan_ctxt->rbase), le64_to_cpu(chan_ctxt->rlen),
> +			   le64_to_cpu(chan_ctxt->rp), le64_to_cpu(chan_ctxt->wp));
>   
>   		seq_printf(m, " local rp: 0x%pK local wp: 0x%pK db: 0x%pad\n",
>   			   ring->rp, ring->wp,
> diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
> index 09394a1c29ec..d8787aaa176b 100644
> --- a/drivers/bus/mhi/core/init.c
> +++ b/drivers/bus/mhi/core/init.c
> @@ -293,17 +293,17 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
>   		if (mhi_chan->offload_ch)
>   			continue;
>   
> -		tmp = chan_ctxt->chcfg;
> +		tmp = le32_to_cpu(chan_ctxt->chcfg);
>   		tmp &= ~CHAN_CTX_CHSTATE_MASK;
>   		tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
>   		tmp &= ~CHAN_CTX_BRSTMODE_MASK;
>   		tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
>   		tmp &= ~CHAN_CTX_POLLCFG_MASK;
>   		tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
> -		chan_ctxt->chcfg = tmp;
> +		chan_ctxt->chcfg = cpu_to_le32(tmp);
>   
> -		chan_ctxt->chtype = mhi_chan->type;
> -		chan_ctxt->erindex = mhi_chan->er_index;
> +		chan_ctxt->chtype = cpu_to_le32(mhi_chan->type);
> +		chan_ctxt->erindex = cpu_to_le32(mhi_chan->er_index);
>   
>   		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
>   		mhi_chan->tre_ring.db_addr = (void __iomem *)&chan_ctxt->wp;
> @@ -328,14 +328,14 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
>   		if (mhi_event->offload_ev)
>   			continue;
>   
> -		tmp = er_ctxt->intmod;
> +		tmp = le32_to_cpu(er_ctxt->intmod);
>   		tmp &= ~EV_CTX_INTMODC_MASK;
>   		tmp &= ~EV_CTX_INTMODT_MASK;
>   		tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);
> -		er_ctxt->intmod = tmp;
> +		er_ctxt->intmod = cpu_to_le32(tmp);
>   
> -		er_ctxt->ertype = MHI_ER_TYPE_VALID;
> -		er_ctxt->msivec = mhi_event->irq;
> +		er_ctxt->ertype = cpu_to_le32(MHI_ER_TYPE_VALID);
> +		er_ctxt->msivec = cpu_to_le32(mhi_event->irq);
>   		mhi_event->db_cfg.db_mode = true;
>   
>   		ring->el_size = sizeof(struct mhi_tre);
> @@ -349,9 +349,9 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
>   		 * ring is empty
>   		 */
>   		ring->rp = ring->wp = ring->base;
> -		er_ctxt->rbase = ring->iommu_base;
> +		er_ctxt->rbase = cpu_to_le64(ring->iommu_base);
>   		er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase;
> -		er_ctxt->rlen = ring->len;
> +		er_ctxt->rlen = cpu_to_le64(ring->len);
>   		ring->ctxt_wp = &er_ctxt->wp;
>   	}
>   
> @@ -378,9 +378,9 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
>   			goto error_alloc_cmd;
>   
>   		ring->rp = ring->wp = ring->base;
> -		cmd_ctxt->rbase = ring->iommu_base;
> +		cmd_ctxt->rbase = cpu_to_le64(ring->iommu_base);
>   		cmd_ctxt->rp = cmd_ctxt->wp = cmd_ctxt->rbase;
> -		cmd_ctxt->rlen = ring->len;
> +		cmd_ctxt->rlen = cpu_to_le64(ring->len);
>   		ring->ctxt_wp = &cmd_ctxt->wp;
>   	}
>   
> @@ -581,10 +581,10 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
>   	chan_ctxt->rp = 0;
>   	chan_ctxt->wp = 0;
>   
> -	tmp = chan_ctxt->chcfg;
> +	tmp = le32_to_cpu(chan_ctxt->chcfg);
>   	tmp &= ~CHAN_CTX_CHSTATE_MASK;
>   	tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
> -	chan_ctxt->chcfg = tmp;
> +	chan_ctxt->chcfg = cpu_to_le32(tmp);
>   
>   	/* Update to all cores */
>   	smp_wmb();
> @@ -618,14 +618,14 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
>   		return -ENOMEM;
>   	}
>   
> -	tmp = chan_ctxt->chcfg;
> +	tmp = le32_to_cpu(chan_ctxt->chcfg);
>   	tmp &= ~CHAN_CTX_CHSTATE_MASK;
>   	tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
> -	chan_ctxt->chcfg = tmp;
> +	chan_ctxt->chcfg = cpu_to_le32(tmp);
>   
> -	chan_ctxt->rbase = tre_ring->iommu_base;
> +	chan_ctxt->rbase = cpu_to_le64(tre_ring->iommu_base);
>   	chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase;
> -	chan_ctxt->rlen = tre_ring->len;
> +	chan_ctxt->rlen = cpu_to_le64(tre_ring->len);
>   	tre_ring->ctxt_wp = &chan_ctxt->wp;
>   
>   	tre_ring->rp = tre_ring->wp = tre_ring->base;
> diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
> index 3508cbbf555d..37c39bf1c7a9 100644
> --- a/drivers/bus/mhi/core/internal.h
> +++ b/drivers/bus/mhi/core/internal.h
> @@ -209,14 +209,14 @@ extern struct bus_type mhi_bus_type;
>   #define EV_CTX_INTMODT_MASK GENMASK(31, 16)
>   #define EV_CTX_INTMODT_SHIFT 16
>   struct mhi_event_ctxt {
> -	__u32 intmod;
> -	__u32 ertype;
> -	__u32 msivec;
> -
> -	__u64 rbase __packed __aligned(4);
> -	__u64 rlen __packed __aligned(4);
> -	__u64 rp __packed __aligned(4);
> -	__u64 wp __packed __aligned(4);
> +	__le32 intmod;
> +	__le32 ertype;
> +	__le32 msivec;
> +
> +	__le64 rbase __packed __aligned(4);
> +	__le64 rlen __packed __aligned(4);
> +	__le64 rp __packed __aligned(4);
> +	__le64 wp __packed __aligned(4);
>   };
>   
>   #define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
> @@ -227,25 +227,25 @@ struct mhi_event_ctxt {
>   #define CHAN_CTX_POLLCFG_SHIFT 10
>   #define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
>   struct mhi_chan_ctxt {
> -	__u32 chcfg;
> -	__u32 chtype;
> -	__u32 erindex;
> -
> -	__u64 rbase __packed __aligned(4);
> -	__u64 rlen __packed __aligned(4);
> -	__u64 rp __packed __aligned(4);
> -	__u64 wp __packed __aligned(4);
> +	__le32 chcfg;
> +	__le32 chtype;
> +	__le32 erindex;
> +
> +	__le64 rbase __packed __aligned(4);
> +	__le64 rlen __packed __aligned(4);
> +	__le64 rp __packed __aligned(4);
> +	__le64 wp __packed __aligned(4);
>   };
>   
>   struct mhi_cmd_ctxt {
> -	__u32 reserved0;
> -	__u32 reserved1;
> -	__u32 reserved2;
> -
> -	__u64 rbase __packed __aligned(4);
> -	__u64 rlen __packed __aligned(4);
> -	__u64 rp __packed __aligned(4);
> -	__u64 wp __packed __aligned(4);
> +	__le32 reserved0;
> +	__le32 reserved1;
> +	__le32 reserved2;
> +
> +	__le64 rbase __packed __aligned(4);
> +	__le64 rlen __packed __aligned(4);
> +	__le64 rp __packed __aligned(4);
> +	__le64 wp __packed __aligned(4);
>   };
>   
>   struct mhi_ctxt {
> @@ -258,8 +258,8 @@ struct mhi_ctxt {
>   };
>   
>   struct mhi_tre {
> -	u64 ptr;
> -	u32 dword[2];
> +	__le64 ptr;
> +	__le32 dword[2];
>   };
>   
>   struct bhi_vec_entry {
> @@ -277,57 +277,58 @@ enum mhi_cmd_type {
>   /* No operation command */
>   #define MHI_TRE_CMD_NOOP_PTR (0)
>   #define MHI_TRE_CMD_NOOP_DWORD0 (0)
> -#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
> +#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
>   
>   /* Channel reset command */
>   #define MHI_TRE_CMD_RESET_PTR (0)
>   #define MHI_TRE_CMD_RESET_DWORD0 (0)
> -#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
> -					(MHI_CMD_RESET_CHAN << 16))
> +#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> +					(MHI_CMD_RESET_CHAN << 16)))
>   
>   /* Channel stop command */
>   #define MHI_TRE_CMD_STOP_PTR (0)
>   #define MHI_TRE_CMD_STOP_DWORD0 (0)
> -#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
> -				       (MHI_CMD_STOP_CHAN << 16))
> +#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> +				       (MHI_CMD_STOP_CHAN << 16)))
>   
>   /* Channel start command */
>   #define MHI_TRE_CMD_START_PTR (0)
>   #define MHI_TRE_CMD_START_DWORD0 (0)
> -#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
> -					(MHI_CMD_START_CHAN << 16))
> +#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> +					(MHI_CMD_START_CHAN << 16)))
>   
> -#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
> -#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
> +#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
> +#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
>   
>   /* Event descriptor macros */
> -#define MHI_TRE_EV_PTR(ptr) (ptr)
> -#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
> -#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
> -#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
> -#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
> -#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
> -#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
> -#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
> -#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
> +#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
> +#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
> +#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
> +#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
> +#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> +#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> +#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
> +#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
> +#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> +#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> +#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
>   
>   /* Transfer descriptor macros */
> -#define MHI_TRE_DATA_PTR(ptr) (ptr)
> -#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
> -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
> -	| (ieot << 9) | (ieob << 8) | chain)
> +#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
> +#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
> +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
> +	| (ieot << 9) | (ieob << 8) | chain))
>   
>   /* RSC transfer descriptor macros */
> -#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
> -#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
> -#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
> +#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
> +#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
> +#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
>   
>   enum mhi_pkt_type {
>   	MHI_PKT_TYPE_INVALID = 0x0,
> @@ -500,7 +501,7 @@ struct state_transition {
>   struct mhi_ring {
>   	dma_addr_t dma_handle;
>   	dma_addr_t iommu_base;
> -	u64 *ctxt_wp; /* point to ctxt wp */
> +	__le64 *ctxt_wp; /* point to ctxt wp */
>   	void *pre_aligned;
>   	void *base;
>   	void *rp;
> diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
> index ffde617f93a3..85f4f7c8d7c6 100644
> --- a/drivers/bus/mhi/core/main.c
> +++ b/drivers/bus/mhi/core/main.c
> @@ -114,7 +114,7 @@ void mhi_ring_er_db(struct mhi_event *mhi_event)
>   	struct mhi_ring *ring = &mhi_event->ring;
>   
>   	mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
> -				     ring->db_addr, *ring->ctxt_wp);
> +				     ring->db_addr, le64_to_cpu(*ring->ctxt_wp));
>   }
>   
>   void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
> @@ -123,7 +123,7 @@ void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
>   	struct mhi_ring *ring = &mhi_cmd->ring;
>   
>   	db = ring->iommu_base + (ring->wp - ring->base);
> -	*ring->ctxt_wp = db;
> +	*ring->ctxt_wp = cpu_to_le64(db);
>   	mhi_write_db(mhi_cntrl, ring->db_addr, db);
>   }
>   
> @@ -140,7 +140,7 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
>   	 * before letting h/w know there is new element to fetch.
>   	 */
>   	dma_wmb();
> -	*ring->ctxt_wp = db;
> +	*ring->ctxt_wp = cpu_to_le64(db);
>   
>   	mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
>   				    ring->db_addr, db);
> @@ -432,7 +432,7 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
>   	struct mhi_event_ctxt *er_ctxt =
>   		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
>   	struct mhi_ring *ev_ring = &mhi_event->ring;
> -	dma_addr_t ptr = er_ctxt->rp;
> +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
>   	void *dev_rp;
>   
>   	if (!is_valid_ring_ptr(ev_ring, ptr)) {
> @@ -537,14 +537,14 @@ static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
>   
>   	/* Update the WP */
>   	ring->wp += ring->el_size;
> -	ctxt_wp = *ring->ctxt_wp + ring->el_size;
> +	ctxt_wp = le64_to_cpu(*ring->ctxt_wp) + ring->el_size;
>   
>   	if (ring->wp >= (ring->base + ring->len)) {
>   		ring->wp = ring->base;
>   		ctxt_wp = ring->iommu_base;
>   	}
>   
> -	*ring->ctxt_wp = ctxt_wp;
> +	*ring->ctxt_wp = cpu_to_le64(ctxt_wp);
>   
>   	/* Update the RP */
>   	ring->rp += ring->el_size;
> @@ -801,7 +801,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
>   	u32 chan;
>   	int count = 0;
> -	dma_addr_t ptr = er_ctxt->rp;
> +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
>   
>   	/*
>   	 * This is a quick check to avoid unnecessary event processing
> @@ -940,7 +940,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>   		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
>   		local_rp = ev_ring->rp;
>   
> -		ptr = er_ctxt->rp;
> +		ptr = le64_to_cpu(er_ctxt->rp);
>   		if (!is_valid_ring_ptr(ev_ring, ptr)) {
>   			dev_err(&mhi_cntrl->mhi_dev->dev,
>   				"Event ring rp points outside of the event ring\n");
> @@ -970,7 +970,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
>   	int count = 0;
>   	u32 chan;
>   	struct mhi_chan *mhi_chan;
> -	dma_addr_t ptr = er_ctxt->rp;
> +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
>   
>   	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
>   		return -EIO;
> @@ -1011,7 +1011,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
>   		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
>   		local_rp = ev_ring->rp;
>   
> -		ptr = er_ctxt->rp;
> +		ptr = le64_to_cpu(er_ctxt->rp);
>   		if (!is_valid_ring_ptr(ev_ring, ptr)) {
>   			dev_err(&mhi_cntrl->mhi_dev->dev,
>   				"Event ring rp points outside of the event ring\n");
> @@ -1533,7 +1533,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
>   	/* mark all stale events related to channel as STALE event */
>   	spin_lock_irqsave(&mhi_event->lock, flags);
>   
> -	ptr = er_ctxt->rp;
> +	ptr = le64_to_cpu(er_ctxt->rp);
>   	if (!is_valid_ring_ptr(ev_ring, ptr)) {
>   		dev_err(&mhi_cntrl->mhi_dev->dev,
>   			"Event ring rp points outside of the event ring\n");
> diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
> index 4aae0baea008..c35c5ddc7220 100644
> --- a/drivers/bus/mhi/core/pm.c
> +++ b/drivers/bus/mhi/core/pm.c
> @@ -218,7 +218,7 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
>   			continue;
>   
>   		ring->wp = ring->base + ring->len - ring->el_size;
> -		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
> +		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
>   		/* Update all cores */
>   		smp_wmb();
>   
> @@ -420,7 +420,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
>   			continue;
>   
>   		ring->wp = ring->base + ring->len - ring->el_size;
> -		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
> +		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
>   		/* Update to all cores */
>   		smp_wmb();
>   


^ permalink raw reply	[flat|nested] 52+ messages in thread

* RE: [PATCH v4 05/27] bus: mhi: Use bitfield operations for handling DWORDs of ring elements
  2022-02-28 14:43     ` 'Manivannan Sadhasivam'
  2022-02-28 15:11       ` Alex Elder
@ 2022-02-28 15:40       ` David Laight
  2022-02-28 15:51         ` Alex Elder
  1 sibling, 1 reply; 52+ messages in thread
From: David Laight @ 2022-02-28 15:40 UTC (permalink / raw)
  To: 'Manivannan Sadhasivam'
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder

From: 'Manivannan Sadhasivam'
> Sent: 28 February 2022 14:44
> 
> On Mon, Feb 28, 2022 at 02:00:07PM +0000, David Laight wrote:
> > From: Manivannan Sadhasivam
> > > Sent: 28 February 2022 12:43
> > >
> > > Instead of using the hardcoded bits in DWORD definitions, let's use the
> > > bitfield operations to make it more clear how the DWORDs are structured.
> >
> > That all makes it as clear as mud.
> 
> It depends on how you see it ;)
> 
> For instance,
> 
> #define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> 
> vs
> 
> #define MHI_TRE_GET_CMD_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
> 
> The latter one makes it more obvious that the "type" field resides between bits 23
> and 16. Plus it avoids the extra masking.

No, (x >> 16) & 0xff is obviously bits 23 to 16.
I can guess or try to remember what FIELD_GET() and GENMASK() do
but it is really hard work.

Both lines are actually too long to read - especially given the
number of times they are repeated with very minor changes.

I actually wonder if you shouldn't just have a struct like:
struct mhi_cmd {
	__le64   address;
	__le16   len;
	u8       state;
	u8       vid;
	__le16   xxx; /* I can't see what this is */
	u8       chid;
	u8       cmd;
};

although you might need the odd anonymous union/struct
to get the overlays in.

Even using something like:
#define MAKE_WORD0(len, state, vid) (htole16(len) | state << 16 | vid << 16)
would make for easier reading.
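
To make that concrete, reading a field through such an overlay comes down to a
plain byte access -- sketch only, with the field layout guessed from the macros
above (type in bits 23:16 of dword[1], chid in bits 31:24) and assuming the
element really is stored little-endian in memory:

struct mhi_cmd_tre {
	__le64	ptr;
	__le32	dword0;
	u8	pad[2];
	u8	type;	/* bits 23:16 of dword[1] */
	u8	chid;	/* bits 31:24 of dword[1] */
};

	/* then simply tre->type and tre->chid, no shifting or masking */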

Oh yes, there are some 64bit fields here.
So a 'word' is 64 bits, so a 'double word' would be 128 bits!

WTF is a DWORD anyway????
Are you going to start using DWORD_PTR as well ?????

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 05/27] bus: mhi: Use bitfield operations for handling DWORDs of ring elements
  2022-02-28 15:40       ` David Laight
@ 2022-02-28 15:51         ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 15:51 UTC (permalink / raw)
  To: David Laight, 'Manivannan Sadhasivam'
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 9:40 AM, David Laight wrote:
> From: 'Manivannan Sadhasivam'
>> Sent: 28 February 2022 14:44
>>
>> On Mon, Feb 28, 2022 at 02:00:07PM +0000, David Laight wrote:
>>> From: Manivannan Sadhasivam
>>>> Sent: 28 February 2022 12:43
>>>>
>>>> Instead of using the hardcoded bits in DWORD definitions, let's use the
>>>> bitfield operations to make it more clear how the DWORDs are structured.
>>>
>>> That all makes it as clear as mud.
>>
>> It depends on how you see it ;)
>>
>> For instance,
>>
>> #define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
>>
>> vs
>>
>> #define MHI_TRE_GET_CMD_TYPE(tre) (FIELD_GET(GENMASK(23, 16), (MHI_TRE_GET_DWORD(tre, 1))))
>>
>> The latter one makes it more obvious that the "type" field resides between bits 23
>> and 16. Plus it avoids the extra masking.
> 
> No, (x >> 16) & 0xff is obviously bits 23 to 16.
> I can guess or try to remember what FIELD_GET() and GENMASK() do
> but it is really hard work.

Although I suggested the use of the bitfield functions, I don't
disagree with the above statement.

The intent was to simplify some code using some standard
helpers.  One benefit of those is that you don't need to
define the shift, because the mask already defines that
(so there is no chance of them mismatching).
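
For example, with one of the existing definitions from this series (sketch
only, to show the point):

	/* today: two definitions that must be kept in sync by hand */
	#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
	#define EV_CTX_INTMODT_SHIFT 16
	tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);

	/* with the helpers: the shift is derived from the mask itself */
	tmp |= FIELD_PREP(EV_CTX_INTMODT_MASK, mhi_event->intmod);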

The way this got implemented did not line up with what I had
envisioned though (and I had some discussion with Mani about
this earlier).  So this result ended up being messier than
I expected it would.

> Both lines are actually too long to read - especially given the
> number of times they are repeated with very minor changes.

I agree with that.

> I actually wonder if you shouldn't just have a struct like:
> struct mhi_cmd {
> 	__le64   address;
> 	__le16   len;
> 	u8       state;
> 	u8       vid;
> 	__le16   xxx; /* I can't see what this is */
> 	u8       chid;
> 	u8       cmd;
> };

I suggested something similar, and maybe more.  But here
too, Mani felt what he was doing was the right way and
that his way made things simpler overall.

I'm satisfied with the code, and frankly don't want to
delay it getting accepted any further if possible.

So I'm going to say this:

Reviewed-by: Alex Elder <elder@linaro.org>

However, Mani, please consider how you can make this
more readable, and have a plan to update things after
this gets accepted.  I suggested using inline functions
to help break it down a bit.  Or perhaps you could go
back to something like David suggests.

I don't need to review this again; I assume any changes
you make will improve the readability but will not change
the effect of the code.

					-Alex

> although you might need the odd anonymous union/struct
> to get the overlays in.
> 
> Even using something like:
> #define MAKE_WORD0(len, state, vid) (htole16(len) | state << 16 | vid << 16)
> would make for easier reading.
> 
> Oh yes, there are some 64bit fields here.
> So a 'word' is 64 bits, so a 'double word' would be 128 bits!
> 
> WTF is a DWORD anyway????
> Are you going to start using DWORD_PTR as well ?????
> 
> 	David
> 
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)
> 


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 07/27] bus: mhi: host: Rename "struct mhi_tre" to "struct mhi_ring_element"
  2022-02-28 12:43 ` [PATCH v4 07/27] bus: mhi: host: Rename "struct mhi_tre" to "struct mhi_ring_element" Manivannan Sadhasivam
@ 2022-02-28 15:52   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 15:52 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> Structure "struct mhi_tre" is representing a generic MHI ring element and
> not specifically a Transfer Ring Element (TRE). Fix the naming.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Looks good.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/host/init.c     |  6 +++---
>   drivers/bus/mhi/host/internal.h |  2 +-
>   drivers/bus/mhi/host/main.c     | 20 ++++++++++----------
>   3 files changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
> index ca068a017a42..016dcc35db80 100644
> --- a/drivers/bus/mhi/host/init.c
> +++ b/drivers/bus/mhi/host/init.c
> @@ -339,7 +339,7 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
>   		er_ctxt->msivec = cpu_to_le32(mhi_event->irq);
>   		mhi_event->db_cfg.db_mode = true;
>   
> -		ring->el_size = sizeof(struct mhi_tre);
> +		ring->el_size = sizeof(struct mhi_ring_element);
>   		ring->len = ring->el_size * ring->elements;
>   		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
>   		if (ret)
> @@ -371,7 +371,7 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
>   	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
>   		struct mhi_ring *ring = &mhi_cmd->ring;
>   
> -		ring->el_size = sizeof(struct mhi_tre);
> +		ring->el_size = sizeof(struct mhi_ring_element);
>   		ring->elements = CMD_EL_PER_RING;
>   		ring->len = ring->el_size * ring->elements;
>   		ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
> @@ -598,7 +598,7 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
>   
>   	buf_ring = &mhi_chan->buf_ring;
>   	tre_ring = &mhi_chan->tre_ring;
> -	tre_ring->el_size = sizeof(struct mhi_tre);
> +	tre_ring->el_size = sizeof(struct mhi_ring_element);
>   	tre_ring->len = tre_ring->el_size * tre_ring->elements;
>   	chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
>   	ret = mhi_alloc_aligned_ring(mhi_cntrl, tre_ring, tre_ring->len);
> diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
> index 1c7a48be033f..5860cd326db6 100644
> --- a/drivers/bus/mhi/host/internal.h
> +++ b/drivers/bus/mhi/host/internal.h
> @@ -168,7 +168,7 @@ struct mhi_ctxt {
>   	dma_addr_t cmd_ctxt_addr;
>   };
>   
> -struct mhi_tre {
> +struct mhi_ring_element {
>   	__le64 ptr;
>   	__le32 dword[2];
>   };
> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
> index 3e6e615466b7..dabf85b92a84 100644
> --- a/drivers/bus/mhi/host/main.c
> +++ b/drivers/bus/mhi/host/main.c
> @@ -554,7 +554,7 @@ static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
>   }
>   
>   static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
> -			    struct mhi_tre *event,
> +			    struct mhi_ring_element *event,
>   			    struct mhi_chan *mhi_chan)
>   {
>   	struct mhi_ring *buf_ring, *tre_ring;
> @@ -590,7 +590,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>   	case MHI_EV_CC_EOT:
>   	{
>   		dma_addr_t ptr = MHI_TRE_GET_EV_PTR(event);
> -		struct mhi_tre *local_rp, *ev_tre;
> +		struct mhi_ring_element *local_rp, *ev_tre;
>   		void *dev_rp;
>   		struct mhi_buf_info *buf_info;
>   		u16 xfer_len;
> @@ -689,7 +689,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>   }
>   
>   static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
> -			   struct mhi_tre *event,
> +			   struct mhi_ring_element *event,
>   			   struct mhi_chan *mhi_chan)
>   {
>   	struct mhi_ring *buf_ring, *tre_ring;
> @@ -753,12 +753,12 @@ static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
>   }
>   
>   static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
> -				       struct mhi_tre *tre)
> +				       struct mhi_ring_element *tre)
>   {
>   	dma_addr_t ptr = MHI_TRE_GET_EV_PTR(tre);
>   	struct mhi_cmd *cmd_ring = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
>   	struct mhi_ring *mhi_ring = &cmd_ring->ring;
> -	struct mhi_tre *cmd_pkt;
> +	struct mhi_ring_element *cmd_pkt;
>   	struct mhi_chan *mhi_chan;
>   	u32 chan;
>   
> @@ -791,7 +791,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>   			     struct mhi_event *mhi_event,
>   			     u32 event_quota)
>   {
> -	struct mhi_tre *dev_rp, *local_rp;
> +	struct mhi_ring_element *dev_rp, *local_rp;
>   	struct mhi_ring *ev_ring = &mhi_event->ring;
>   	struct mhi_event_ctxt *er_ctxt =
>   		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
> @@ -961,7 +961,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
>   				struct mhi_event *mhi_event,
>   				u32 event_quota)
>   {
> -	struct mhi_tre *dev_rp, *local_rp;
> +	struct mhi_ring_element *dev_rp, *local_rp;
>   	struct mhi_ring *ev_ring = &mhi_event->ring;
>   	struct mhi_event_ctxt *er_ctxt =
>   		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
> @@ -1185,7 +1185,7 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>   			struct mhi_buf_info *info, enum mhi_flags flags)
>   {
>   	struct mhi_ring *buf_ring, *tre_ring;
> -	struct mhi_tre *mhi_tre;
> +	struct mhi_ring_element *mhi_tre;
>   	struct mhi_buf_info *buf_info;
>   	int eot, eob, chain, bei;
>   	int ret;
> @@ -1256,7 +1256,7 @@ int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
>   		 struct mhi_chan *mhi_chan,
>   		 enum mhi_cmd_type cmd)
>   {
> -	struct mhi_tre *cmd_tre = NULL;
> +	struct mhi_ring_element *cmd_tre = NULL;
>   	struct mhi_cmd *mhi_cmd = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
>   	struct mhi_ring *ring = &mhi_cmd->ring;
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> @@ -1518,7 +1518,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
>   				  int chan)
>   
>   {
> -	struct mhi_tre *dev_rp, *local_rp;
> +	struct mhi_ring_element *dev_rp, *local_rp;
>   	struct mhi_ring *ev_ring;
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
>   	unsigned long flags;


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 09/27] bus: mhi: Make mhi_state_str[] array static inline and move to common.h
  2022-02-28 12:43 ` [PATCH v4 09/27] bus: mhi: Make mhi_state_str[] array static inline and move to common.h Manivannan Sadhasivam
@ 2022-02-28 15:56   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 15:56 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, Hemant Kumar

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> The mhi_state_str[] array could be used by the MHI endpoint stack also. So let's
> convert the array into a "static inline" function and move it inside the
> "common.h" header so that the endpoint stack could also make use of it.
> 
> Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

I guess my grumbling on patch 1 belonged here.  I prefer your use
of a switch statement though, and that alleviates my concern.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/common.h       | 29 +++++++++++++++++++++++++----
>   drivers/bus/mhi/host/boot.c    |  2 +-
>   drivers/bus/mhi/host/debugfs.c |  6 +++---
>   drivers/bus/mhi/host/init.c    | 12 ------------
>   drivers/bus/mhi/host/main.c    |  8 ++++----
>   drivers/bus/mhi/host/pm.c      | 14 +++++++-------
>   6 files changed, 40 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> index f2690bf11c99..ec75ba1e6686 100644
> --- a/drivers/bus/mhi/common.h
> +++ b/drivers/bus/mhi/common.h
> @@ -275,9 +275,30 @@ struct mhi_ring_element {
>   	__le32 dword[2];
>   };
>   
> -extern const char * const mhi_state_str[MHI_STATE_MAX];
> -#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
> -				  !mhi_state_str[state]) ? \
> -				"INVALID_STATE" : mhi_state_str[state])
> +static inline const char * const mhi_state_str(enum mhi_state state)
> +{
> +	switch (state) {
> +	case MHI_STATE_RESET:
> +		return "RESET";
> +	case MHI_STATE_READY:
> +		return "READY";
> +	case MHI_STATE_M0:
> +		return "M0";
> +	case MHI_STATE_M1:
> +		return "M1";
> +	case MHI_STATE_M2:
> +		return "M2";
> +	case MHI_STATE_M3:
> +		return "M3";
> +	case MHI_STATE_M3_FAST:
> +		return "M3 FAST";
> +	case MHI_STATE_BHI:
> +		return "BHI";
> +	case MHI_STATE_SYS_ERR:
> +		return "SYS ERROR";
> +	default:
> +		return "Unknown state";
> +	}
> +};
>   
>   #endif /* _MHI_COMMON_H */
> diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
> index d5ba3c7efb61..b0da7ca4519c 100644
> --- a/drivers/bus/mhi/host/boot.c
> +++ b/drivers/bus/mhi/host/boot.c
> @@ -67,7 +67,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
>   
>   	dev_dbg(dev, "Entered with pm_state:%s dev_state:%s ee:%s\n",
>   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> +		mhi_state_str(mhi_cntrl->dev_state),
>   		TO_MHI_EXEC_STR(mhi_cntrl->ee));
>   
>   	/*
> diff --git a/drivers/bus/mhi/host/debugfs.c b/drivers/bus/mhi/host/debugfs.c
> index bdc875d7bd4d..cfec7811dfbb 100644
> --- a/drivers/bus/mhi/host/debugfs.c
> +++ b/drivers/bus/mhi/host/debugfs.c
> @@ -20,7 +20,7 @@ static int mhi_debugfs_states_show(struct seq_file *m, void *d)
>   	seq_printf(m, "PM state: %s Device: %s MHI state: %s EE: %s wake: %s\n",
>   		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
>   		   mhi_is_active(mhi_cntrl) ? "Active" : "Inactive",
> -		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> +		   mhi_state_str(mhi_cntrl->dev_state),
>   		   TO_MHI_EXEC_STR(mhi_cntrl->ee),
>   		   mhi_cntrl->wake_set ? "true" : "false");
>   
> @@ -206,13 +206,13 @@ static int mhi_debugfs_regdump_show(struct seq_file *m, void *d)
>   
>   	seq_printf(m, "Host PM state: %s Device state: %s EE: %s\n",
>   		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
> -		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> +		   mhi_state_str(mhi_cntrl->dev_state),
>   		   TO_MHI_EXEC_STR(mhi_cntrl->ee));
>   
>   	state = mhi_get_mhi_state(mhi_cntrl);
>   	ee = mhi_get_exec_env(mhi_cntrl);
>   	seq_printf(m, "Device EE: %s state: %s\n", TO_MHI_EXEC_STR(ee),
> -		   TO_MHI_STATE_STR(state));
> +		   mhi_state_str(state));
>   
>   	for (i = 0; regs[i].name; i++) {
>   		if (!regs[i].base)
> diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
> index 016dcc35db80..a665b8e92408 100644
> --- a/drivers/bus/mhi/host/init.c
> +++ b/drivers/bus/mhi/host/init.c
> @@ -45,18 +45,6 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
>   	[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
>   };
>   
> -const char * const mhi_state_str[MHI_STATE_MAX] = {
> -	[MHI_STATE_RESET] = "RESET",
> -	[MHI_STATE_READY] = "READY",
> -	[MHI_STATE_M0] = "M0",
> -	[MHI_STATE_M1] = "M1",
> -	[MHI_STATE_M2] = "M2",
> -	[MHI_STATE_M3] = "M3",
> -	[MHI_STATE_M3_FAST] = "M3 FAST",
> -	[MHI_STATE_BHI] = "BHI",
> -	[MHI_STATE_SYS_ERR] = "SYS ERROR",
> -};
> -
>   const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
>   	[MHI_CH_STATE_TYPE_RESET] = "RESET",
>   	[MHI_CH_STATE_TYPE_STOP] = "STOP",
> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
> index dabf85b92a84..9021be7f2359 100644
> --- a/drivers/bus/mhi/host/main.c
> +++ b/drivers/bus/mhi/host/main.c
> @@ -477,8 +477,8 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
>   	ee = mhi_get_exec_env(mhi_cntrl);
>   	dev_dbg(dev, "local ee: %s state: %s device ee: %s state: %s\n",
>   		TO_MHI_EXEC_STR(mhi_cntrl->ee),
> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> -		TO_MHI_EXEC_STR(ee), TO_MHI_STATE_STR(state));
> +		mhi_state_str(mhi_cntrl->dev_state),
> +		TO_MHI_EXEC_STR(ee), mhi_state_str(state));
>   
>   	if (state == MHI_STATE_SYS_ERR) {
>   		dev_dbg(dev, "System error detected\n");
> @@ -844,7 +844,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>   			new_state = MHI_TRE_GET_EV_STATE(local_rp);
>   
>   			dev_dbg(dev, "State change event to state: %s\n",
> -				TO_MHI_STATE_STR(new_state));
> +				mhi_state_str(new_state));
>   
>   			switch (new_state) {
>   			case MHI_STATE_M0:
> @@ -871,7 +871,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>   			}
>   			default:
>   				dev_err(dev, "Invalid state: %s\n",
> -					TO_MHI_STATE_STR(new_state));
> +					mhi_state_str(new_state));
>   			}
>   
>   			break;
> diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
> index bb8a23e80e19..3d90b8ecd3d9 100644
> --- a/drivers/bus/mhi/host/pm.c
> +++ b/drivers/bus/mhi/host/pm.c
> @@ -541,7 +541,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
>   
>   	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
>   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
> +		mhi_state_str(mhi_cntrl->dev_state));
>   
>   	mutex_unlock(&mhi_cntrl->pm_mutex);
>   }
> @@ -684,7 +684,7 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
>   exit_sys_error_transition:
>   	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
>   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
> +		mhi_state_str(mhi_cntrl->dev_state));
>   
>   	mutex_unlock(&mhi_cntrl->pm_mutex);
>   }
> @@ -859,7 +859,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
>   	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
>   		dev_err(dev,
>   			"Did not enter M3 state, MHI state: %s, PM state: %s\n",
> -			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> +			mhi_state_str(mhi_cntrl->dev_state),
>   			to_mhi_pm_state_str(mhi_cntrl->pm_state));
>   		return -EIO;
>   	}
> @@ -885,7 +885,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
>   
>   	dev_dbg(dev, "Entered with PM state: %s, MHI state: %s\n",
>   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
> +		mhi_state_str(mhi_cntrl->dev_state));
>   
>   	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
>   		return 0;
> @@ -895,7 +895,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
>   
>   	if (mhi_get_mhi_state(mhi_cntrl) != MHI_STATE_M3) {
>   		dev_warn(dev, "Resuming from non M3 state (%s)\n",
> -			 TO_MHI_STATE_STR(mhi_get_mhi_state(mhi_cntrl)));
> +			 mhi_state_str(mhi_get_mhi_state(mhi_cntrl)));
>   		if (!force)
>   			return -EINVAL;
>   	}
> @@ -932,7 +932,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
>   	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
>   		dev_err(dev,
>   			"Did not enter M0 state, MHI state: %s, PM state: %s\n",
> -			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> +			mhi_state_str(mhi_cntrl->dev_state),
>   			to_mhi_pm_state_str(mhi_cntrl->pm_state));
>   		return -EIO;
>   	}
> @@ -1083,7 +1083,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
>   
>   	state = mhi_get_mhi_state(mhi_cntrl);
>   	dev_dbg(dev, "Attempting power on with EE: %s, state: %s\n",
> -		TO_MHI_EXEC_STR(current_ee), TO_MHI_STATE_STR(state));
> +		TO_MHI_EXEC_STR(current_ee), mhi_state_str(state));
>   
>   	if (state == MHI_STATE_SYS_ERR) {
>   		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 10/27] bus: mhi: ep: Add support for registering MHI endpoint controllers
  2022-02-28 12:43 ` [PATCH v4 10/27] bus: mhi: ep: Add support for registering MHI endpoint controllers Manivannan Sadhasivam
@ 2022-02-28 16:06   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:06 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> This commit adds support for registering MHI endpoint controller drivers
> with the MHI endpoint stack. MHI endpoint controller drivers manage
> the interaction with the host machines (such as x86). They are also the
> MHI endpoint bus master in charge of managing the physical link between
> the host and endpoint device. Even though the MHI spec is bus agnostic,
> the current implementation is entirely based on the PCIe bus.
> 
> The endpoint controller driver encloses all information about the
> underlying physical bus like PCIe. The registration process involves
> parsing the channel configuration and allocating an MHI EP device.
> 
> Channels used in the endpoint stack follow the perspective of the MHI
> host stack, i.e.:
> 
> UL - From host to endpoint
> DL - From endpoint to host
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Looks good to me.  I am partially relying on the more thorough review
I did earlier (here and on other patches I'm reviewing today).
This has clearly improved though.
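
As an aside for anyone following along: a minimal controller
registration built on this API might look roughly like the sketch
below.  The channel name, numbers and values are hypothetical, and the
directions are assumed to follow the host's perspective as the commit
message describes (UL even / host to endpoint, DL odd / endpoint to host):

	/* Hypothetical channel pair, for illustration only */
	static const struct mhi_ep_channel_config test_channels[] = {
		{
			.name = "TEST_CHAN",
			.num = 0,			/* even = UL */
			.num_elements = 64,
			.dir = DMA_TO_DEVICE,		/* assumed UL direction */
		},
		{
			.name = "TEST_CHAN",
			.num = 1,			/* odd = DL */
			.num_elements = 64,
			.dir = DMA_FROM_DEVICE,		/* assumed DL direction */
		},
	};

	static const struct mhi_ep_cntrl_config test_config = {
		.mhi_version = 0x1000000,		/* assumed MHI v1.0 encoding */
		.max_channels = 128,
		.num_channels = ARRAY_SIZE(test_channels),
		.ch_cfg = test_channels,
	};

The controller driver would then fill in mhi_cntrl->cntrl_dev and the
bus specific callbacks before calling
mhi_ep_register_controller(mhi_cntrl, &test_config).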

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/Kconfig       |   1 +
>   drivers/bus/mhi/Makefile      |   3 +
>   drivers/bus/mhi/ep/Kconfig    |  10 ++
>   drivers/bus/mhi/ep/Makefile   |   2 +
>   drivers/bus/mhi/ep/internal.h | 154 ++++++++++++++++++++++
>   drivers/bus/mhi/ep/main.c     | 236 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        | 143 ++++++++++++++++++++
>   7 files changed, 549 insertions(+)
>   create mode 100644 drivers/bus/mhi/ep/Kconfig
>   create mode 100644 drivers/bus/mhi/ep/Makefile
>   create mode 100644 drivers/bus/mhi/ep/internal.h
>   create mode 100644 drivers/bus/mhi/ep/main.c
>   create mode 100644 include/linux/mhi_ep.h
> 
> diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
> index 4748df7f9cd5..b39a11e6c624 100644
> --- a/drivers/bus/mhi/Kconfig
> +++ b/drivers/bus/mhi/Kconfig
> @@ -6,3 +6,4 @@
>   #
>   
>   source "drivers/bus/mhi/host/Kconfig"
> +source "drivers/bus/mhi/ep/Kconfig"
> diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
> index 5f5708a249f5..46981331b38f 100644
> --- a/drivers/bus/mhi/Makefile
> +++ b/drivers/bus/mhi/Makefile
> @@ -1,2 +1,5 @@
>   # Host MHI stack
>   obj-y += host/
> +
> +# Endpoint MHI stack
> +obj-y += ep/
> diff --git a/drivers/bus/mhi/ep/Kconfig b/drivers/bus/mhi/ep/Kconfig
> new file mode 100644
> index 000000000000..90ab3b040672
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/Kconfig
> @@ -0,0 +1,10 @@
> +config MHI_BUS_EP
> +	tristate "Modem Host Interface (MHI) bus Endpoint implementation"
> +	help
> +	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
> +	  communication protocol used by a host processor to control
> +	  and communicate a modem device over a high speed peripheral
> +	  bus or shared memory.
> +
> +	  MHI_BUS_EP implements the MHI protocol for the endpoint devices,
> +	  such as SDX55 modem connected to the host machine over PCIe.
> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> new file mode 100644
> index 000000000000..64e29252b608
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -0,0 +1,2 @@
> +obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> +mhi_ep-y := main.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> new file mode 100644
> index 000000000000..58ec5fdc503f
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -0,0 +1,154 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2022, Linaro Ltd.
> + *
> + */
> +
> +#ifndef _MHI_EP_INTERNAL_
> +#define _MHI_EP_INTERNAL_
> +
> +#include <linux/bitfield.h>
> +
> +#include "../common.h"
> +
> +extern struct bus_type mhi_ep_bus_type;
> +
> +#define MHI_REG_OFFSET				0x100
> +#define BHI_REG_OFFSET				0x200
> +
> +/* MHI registers */
> +#define EP_MHIREGLEN				(MHI_REG_OFFSET + MHIREGLEN)
> +#define EP_MHIVER				(MHI_REG_OFFSET + MHIVER)
> +#define EP_MHICFG				(MHI_REG_OFFSET + MHICFG)
> +#define EP_CHDBOFF				(MHI_REG_OFFSET + CHDBOFF)
> +#define EP_ERDBOFF				(MHI_REG_OFFSET + ERDBOFF)
> +#define EP_BHIOFF				(MHI_REG_OFFSET + BHIOFF)
> +#define EP_BHIEOFF				(MHI_REG_OFFSET + BHIEOFF)
> +#define EP_DEBUGOFF				(MHI_REG_OFFSET + DEBUGOFF)
> +#define EP_MHICTRL				(MHI_REG_OFFSET + MHICTRL)
> +#define EP_MHISTATUS				(MHI_REG_OFFSET + MHISTATUS)
> +#define EP_CCABAP_LOWER				(MHI_REG_OFFSET + CCABAP_LOWER)
> +#define EP_CCABAP_HIGHER			(MHI_REG_OFFSET + CCABAP_HIGHER)
> +#define EP_ECABAP_LOWER				(MHI_REG_OFFSET + ECABAP_LOWER)
> +#define EP_ECABAP_HIGHER			(MHI_REG_OFFSET + ECABAP_HIGHER)
> +#define EP_CRCBAP_LOWER				(MHI_REG_OFFSET + CRCBAP_LOWER)
> +#define EP_CRCBAP_HIGHER			(MHI_REG_OFFSET + CRCBAP_HIGHER)
> +#define EP_CRDB_LOWER				(MHI_REG_OFFSET + CRDB_LOWER)
> +#define EP_CRDB_HIGHER				(MHI_REG_OFFSET + CRDB_HIGHER)
> +#define EP_MHICTRLBASE_LOWER			(MHI_REG_OFFSET + MHICTRLBASE_LOWER)
> +#define EP_MHICTRLBASE_HIGHER			(MHI_REG_OFFSET + MHICTRLBASE_HIGHER)
> +#define EP_MHICTRLLIMIT_LOWER			(MHI_REG_OFFSET + MHICTRLLIMIT_LOWER)
> +#define EP_MHICTRLLIMIT_HIGHER			(MHI_REG_OFFSET + MHICTRLLIMIT_HIGHER)
> +#define EP_MHIDATABASE_LOWER			(MHI_REG_OFFSET + MHIDATABASE_LOWER)
> +#define EP_MHIDATABASE_HIGHER			(MHI_REG_OFFSET + MHIDATABASE_HIGHER)
> +#define EP_MHIDATALIMIT_LOWER			(MHI_REG_OFFSET + MHIDATALIMIT_LOWER)
> +#define EP_MHIDATALIMIT_HIGHER			(MHI_REG_OFFSET + MHIDATALIMIT_HIGHER)
> +
> +/* MHI BHI registers */
> +#define EP_BHI_INTVEC				(BHI_REG_OFFSET + BHI_INTVEC)
> +#define EP_BHI_EXECENV				(BHI_REG_OFFSET + BHI_EXECENV)
> +
> +/* MHI Doorbell registers */
> +#define CHDB_LOWER_n(n)				(0x400 + 0x8 * (n))
> +#define CHDB_HIGHER_n(n)			(0x404 + 0x8 * (n))
> +#define ERDB_LOWER_n(n)				(0x800 + 0x8 * (n))
> +#define ERDB_HIGHER_n(n)			(0x804 + 0x8 * (n))
> +
> +#define MHI_CTRL_INT_STATUS			0x4
> +#define MHI_CTRL_INT_STATUS_MSK			BIT(0)
> +#define MHI_CTRL_INT_STATUS_CRDB_MSK		BIT(1)
> +#define MHI_CHDB_INT_STATUS_n(n)		(0x28 + 0x4 * (n))
> +#define MHI_ERDB_INT_STATUS_n(n)		(0x38 + 0x4 * (n))
> +
> +#define MHI_CTRL_INT_CLEAR			0x4c
> +#define MHI_CTRL_INT_MMIO_WR_CLEAR		BIT(2)
> +#define MHI_CTRL_INT_CRDB_CLEAR			BIT(1)
> +#define MHI_CTRL_INT_CRDB_MHICTRL_CLEAR		BIT(0)
> +
> +#define MHI_CHDB_INT_CLEAR_n(n)			(0x70 + 0x4 * (n))
> +#define MHI_CHDB_INT_CLEAR_n_CLEAR_ALL		GENMASK(31, 0)
> +#define MHI_ERDB_INT_CLEAR_n(n)			(0x80 + 0x4 * (n))
> +#define MHI_ERDB_INT_CLEAR_n_CLEAR_ALL		GENMASK(31, 0)
> +
> +/*
> + * Unlike the usual "masking" convention, writing "1" to a bit in this register
> + * enables the interrupt and writing "0" will disable it.
> + */
> +#define MHI_CTRL_INT_MASK			0x94
> +#define MHI_CTRL_INT_MASK_MASK			GENMASK(1, 0)
> +#define MHI_CTRL_MHICTRL_MASK			BIT(0)
> +#define MHI_CTRL_CRDB_MASK			BIT(1)
> +
> +#define MHI_CHDB_INT_MASK_n(n)			(0xb8 + 0x4 * (n))
> +#define MHI_CHDB_INT_MASK_n_EN_ALL		GENMASK(31, 0)
> +#define MHI_ERDB_INT_MASK_n(n)			(0xc8 + 0x4 * (n))
> +#define MHI_ERDB_INT_MASK_n_EN_ALL		GENMASK(31, 0)
> +
> +#define NR_OF_CMD_RINGS				1
> +#define MHI_MASK_ROWS_CH_EV_DB			4
> +#define MHI_MASK_CH_EV_LEN			32
> +
> +/* Generic context */
> +struct mhi_generic_ctx {
> +	__le32 reserved0;
> +	__le32 reserved1;
> +	__le32 reserved2;
> +
> +	__le64 rbase __packed __aligned(4);
> +	__le64 rlen __packed __aligned(4);
> +	__le64 rp __packed __aligned(4);
> +	__le64 wp __packed __aligned(4);
> +};
> +
> +enum mhi_ep_ring_type {
> +	RING_TYPE_CMD,
> +	RING_TYPE_ER,
> +	RING_TYPE_CH,
> +};
> +
> +/* Ring element */
> +union mhi_ep_ring_ctx {
> +	struct mhi_cmd_ctxt cmd;
> +	struct mhi_event_ctxt ev;
> +	struct mhi_chan_ctxt ch;
> +	struct mhi_generic_ctx generic;
> +};
> +
> +struct mhi_ep_ring {
> +	struct mhi_ep_cntrl *mhi_cntrl;
> +	union mhi_ep_ring_ctx *ring_ctx;
> +	struct mhi_ring_element *ring_cache;
> +	enum mhi_ep_ring_type type;
> +	u64 rbase;
> +	size_t rd_offset;
> +	size_t wr_offset;
> +	size_t ring_size;
> +	u32 db_offset_h;
> +	u32 db_offset_l;
> +	u32 ch_id;
> +};
> +
> +struct mhi_ep_cmd {
> +	struct mhi_ep_ring ring;
> +};
> +
> +struct mhi_ep_event {
> +	struct mhi_ep_ring ring;
> +};
> +
> +struct mhi_ep_chan {
> +	char *name;
> +	struct mhi_ep_device *mhi_dev;
> +	struct mhi_ep_ring ring;
> +	struct mutex lock;
> +	void (*xfer_cb)(struct mhi_ep_device *mhi_dev, struct mhi_result *result);
> +	enum mhi_ch_state state;
> +	enum dma_data_direction dir;
> +	u64 tre_loc;
> +	u32 tre_size;
> +	u32 tre_bytes_left;
> +	u32 chan;
> +	bool skip_td;
> +};
> +
> +#endif
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> new file mode 100644
> index 000000000000..87ca42c7b067
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -0,0 +1,236 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * MHI Endpoint bus stack
> + *
> + * Copyright (C) 2022 Linaro Ltd.
> + * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> + */
> +
> +#include <linux/bitfield.h>
> +#include <linux/delay.h>
> +#include <linux/dma-direction.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/mhi_ep.h>
> +#include <linux/mod_devicetable.h>
> +#include <linux/module.h>
> +#include "internal.h"
> +
> +static DEFINE_IDA(mhi_ep_cntrl_ida);
> +
> +static void mhi_ep_release_device(struct device *dev)
> +{
> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +
> +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> +		mhi_dev->mhi_cntrl->mhi_dev = NULL;
> +
> +	/*
> +	 * We need to set the mhi_chan->mhi_dev to NULL here since the MHI
> +	 * devices for the channels will only get created in mhi_ep_create_device()
> +	 * if the mhi_dev associated with it is NULL.
> +	 */
> +	if (mhi_dev->ul_chan)
> +		mhi_dev->ul_chan->mhi_dev = NULL;
> +
> +	if (mhi_dev->dl_chan)
> +		mhi_dev->dl_chan->mhi_dev = NULL;
> +
> +	kfree(mhi_dev);
> +}
> +
> +static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl,
> +						 enum mhi_device_type dev_type)
> +{
> +	struct mhi_ep_device *mhi_dev;
> +	struct device *dev;
> +
> +	mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
> +	if (!mhi_dev)
> +		return ERR_PTR(-ENOMEM);
> +
> +	dev = &mhi_dev->dev;
> +	device_initialize(dev);
> +	dev->bus = &mhi_ep_bus_type;
> +	dev->release = mhi_ep_release_device;
> +
> +	/* Controller device is always allocated first */
> +	if (dev_type == MHI_DEVICE_CONTROLLER)
> +		/* for MHI controller device, parent is the bus device (e.g. PCI EPF) */
> +		dev->parent = mhi_cntrl->cntrl_dev;
> +	else
> +		/* for MHI client devices, parent is the MHI controller device */
> +		dev->parent = &mhi_cntrl->mhi_dev->dev;
> +
> +	mhi_dev->mhi_cntrl = mhi_cntrl;
> +	mhi_dev->dev_type = dev_type;
> +
> +	return mhi_dev;
> +}
> +
> +static int mhi_ep_chan_init(struct mhi_ep_cntrl *mhi_cntrl,
> +			    const struct mhi_ep_cntrl_config *config)
> +{
> +	const struct mhi_ep_channel_config *ch_cfg;
> +	struct device *dev = mhi_cntrl->cntrl_dev;
> +	u32 chan, i;
> +	int ret = -EINVAL;
> +
> +	mhi_cntrl->max_chan = config->max_channels;
> +
> +	/*
> +	 * Allocate max_channels supported by the MHI endpoint and populate
> +	 * only the defined channels
> +	 */
> +	mhi_cntrl->mhi_chan = kcalloc(mhi_cntrl->max_chan, sizeof(*mhi_cntrl->mhi_chan),
> +				      GFP_KERNEL);
> +	if (!mhi_cntrl->mhi_chan)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < config->num_channels; i++) {
> +		struct mhi_ep_chan *mhi_chan;
> +
> +		ch_cfg = &config->ch_cfg[i];
> +
> +		chan = ch_cfg->num;
> +		if (chan >= mhi_cntrl->max_chan) {
> +			dev_err(dev, "Channel (%u) exceeds maximum available channels (%u)\n",
> +				chan, mhi_cntrl->max_chan);
> +			goto error_chan_cfg;
> +		}
> +
> +		/* Bi-directional and direction less channels are not supported */
> +		if (ch_cfg->dir == DMA_BIDIRECTIONAL || ch_cfg->dir == DMA_NONE) {
> +			dev_err(dev, "Invalid direction (%u) for channel (%u)\n",
> +				ch_cfg->dir, chan);
> +			goto error_chan_cfg;
> +		}
> +
> +		mhi_chan = &mhi_cntrl->mhi_chan[chan];
> +		mhi_chan->name = ch_cfg->name;
> +		mhi_chan->chan = chan;
> +		mhi_chan->dir = ch_cfg->dir;
> +		mutex_init(&mhi_chan->lock);
> +	}
> +
> +	return 0;
> +
> +error_chan_cfg:
> +	kfree(mhi_cntrl->mhi_chan);
> +
> +	return ret;
> +}
> +
> +/*
> + * Allocate channel and command rings here. Event rings will be allocated
> + * in mhi_ep_power_up() as the config comes from the host.
> + */
> +int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> +				const struct mhi_ep_cntrl_config *config)
> +{
> +	struct mhi_ep_device *mhi_dev;
> +	int ret;
> +
> +	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
> +		return -EINVAL;
> +
> +	ret = mhi_ep_chan_init(mhi_cntrl, config);
> +	if (ret)
> +		return ret;
> +
> +	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS, sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
> +	if (!mhi_cntrl->mhi_cmd) {
> +		ret = -ENOMEM;
> +		goto err_free_ch;
> +	}
> +
> +	/* Set controller index */
> +	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
> +	if (mhi_cntrl->index < 0) {
> +		ret = mhi_cntrl->index;
> +		goto err_free_cmd;
> +	}
> +
> +	/* Allocate the controller device */
> +	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_CONTROLLER);
> +	if (IS_ERR(mhi_dev)) {
> +		dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
> +		ret = PTR_ERR(mhi_dev);
> +		goto err_ida_free;
> +	}
> +
> +	dev_set_name(&mhi_dev->dev, "mhi_ep%u", mhi_cntrl->index);
> +	mhi_dev->name = dev_name(&mhi_dev->dev);
> +	mhi_cntrl->mhi_dev = mhi_dev;
> +
> +	ret = device_add(&mhi_dev->dev);
> +	if (ret)
> +		goto err_put_dev;
> +
> +	dev_dbg(&mhi_dev->dev, "MHI EP Controller registered\n");
> +
> +	return 0;
> +
> +err_put_dev:
> +	put_device(&mhi_dev->dev);
> +err_ida_free:
> +	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
> +err_free_cmd:
> +	kfree(mhi_cntrl->mhi_cmd);
> +err_free_ch:
> +	kfree(mhi_cntrl->mhi_chan);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
> +
> +void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
> +
> +	kfree(mhi_cntrl->mhi_cmd);
> +	kfree(mhi_cntrl->mhi_chan);
> +
> +	device_del(&mhi_dev->dev);
> +	put_device(&mhi_dev->dev);
> +
> +	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
> +
> +static int mhi_ep_match(struct device *dev, struct device_driver *drv)
> +{
> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +
> +	/*
> +	 * If the device is a controller type then there is no client driver
> +	 * associated with it
> +	 */
> +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> +		return 0;
> +
> +	return 0;
> +};
> +
> +struct bus_type mhi_ep_bus_type = {
> +	.name = "mhi_ep",
> +	.dev_name = "mhi_ep",
> +	.match = mhi_ep_match,
> +};
> +
> +static int __init mhi_ep_init(void)
> +{
> +	return bus_register(&mhi_ep_bus_type);
> +}
> +
> +static void __exit mhi_ep_exit(void)
> +{
> +	bus_unregister(&mhi_ep_bus_type);
> +}
> +
> +postcore_initcall(mhi_ep_init);
> +module_exit(mhi_ep_exit);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("MHI Bus Endpoint stack");
> +MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>");
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> new file mode 100644
> index 000000000000..9c58938371e2
> --- /dev/null
> +++ b/include/linux/mhi_ep.h
> @@ -0,0 +1,143 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2022, Linaro Ltd.
> + *
> + */
> +#ifndef _MHI_EP_H_
> +#define _MHI_EP_H_
> +
> +#include <linux/dma-direction.h>
> +#include <linux/mhi.h>
> +
> +#define MHI_EP_DEFAULT_MTU 0x8000
> +
> +/**
> + * struct mhi_ep_channel_config - Channel configuration structure for controller
> + * @name: The name of this channel
> + * @num: The number assigned to this channel
> + * @num_elements: The number of elements that can be queued to this channel
> + * @dir: Direction that data may flow on this channel
> + */
> +struct mhi_ep_channel_config {
> +	char *name;
> +	u32 num;
> +	u32 num_elements;
> +	enum dma_data_direction dir;
> +};
> +
> +/**
> + * struct mhi_ep_cntrl_config - MHI Endpoint controller configuration
> + * @mhi_version: MHI spec version supported by the controller
> + * @max_channels: Maximum number of channels supported
> + * @num_channels: Number of channels defined in @ch_cfg
> + * @ch_cfg: Array of defined channels
> + */
> +struct mhi_ep_cntrl_config {
> +	u32 mhi_version;
> +	u32 max_channels;
> +	u32 num_channels;
> +	const struct mhi_ep_channel_config *ch_cfg;
> +};
> +
> +/**
> + * struct mhi_ep_db_info - MHI Endpoint doorbell info
> + * @mask: Mask of the doorbell interrupt
> + * @status: Status of the doorbell interrupt
> + */
> +struct mhi_ep_db_info {
> +	u32 mask;
> +	u32 status;
> +};
> +
> +/**
> + * struct mhi_ep_cntrl - MHI Endpoint controller structure
> + * @cntrl_dev: Pointer to the struct device of physical bus acting as the MHI
> + *             Endpoint controller
> + * @mhi_dev: MHI Endpoint device instance for the controller
> + * @mmio: MMIO region containing the MHI registers
> + * @mhi_chan: Points to the channel configuration table
> + * @mhi_event: Points to the event ring configurations table
> + * @mhi_cmd: Points to the command ring configurations table
> + * @sm: MHI Endpoint state machine
> + * @raise_irq: CB function for raising IRQ to the host
> + * @alloc_addr: CB function for allocating memory in endpoint for storing host context
> + * @map_addr: CB function for mapping host context to endpoint
> + * @free_addr: CB function to free the allocated memory in endpoint for storing host context
> + * @unmap_addr: CB function to unmap the host context in endpoint
> + * @read_from_host: CB function for reading from host memory from endpoint
> + * @write_to_host: CB function for writing to host memory from endpoint
> + * @mhi_state: MHI Endpoint state
> + * @max_chan: Maximum channels supported by the endpoint controller
> + * @mru: MRU (Maximum Receive Unit) value of the endpoint controller
> + * @index: MHI Endpoint controller index
> + */
> +struct mhi_ep_cntrl {
> +	struct device *cntrl_dev;
> +	struct mhi_ep_device *mhi_dev;
> +	void __iomem *mmio;
> +
> +	struct mhi_ep_chan *mhi_chan;
> +	struct mhi_ep_event *mhi_event;
> +	struct mhi_ep_cmd *mhi_cmd;
> +	struct mhi_ep_sm *sm;
> +
> +	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
> +	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,
> +		       size_t size);
> +	int (*map_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr, u64 pci_addr,
> +			size_t size);
> +	void (*free_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr,
> +			  void __iomem *virt_addr, size_t size);
> +	void (*unmap_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr);
> +	int (*read_from_host)(struct mhi_ep_cntrl *mhi_cntrl, u64 from, void __iomem *to,
> +			      size_t size);
> +	int (*write_to_host)(struct mhi_ep_cntrl *mhi_cntrl, void __iomem *from, u64 to,
> +			     size_t size);
> +
> +	enum mhi_state mhi_state;
> +
> +	u32 max_chan;
> +	u32 mru;
> +	u32 index;
> +};
> +
> +/**
> + * struct mhi_ep_device - Structure representing an MHI Endpoint device that binds
> + *                     to channels or is associated with controllers
> + * @dev: Driver model device node for the MHI Endpoint device
> + * @mhi_cntrl: Controller the device belongs to
> + * @id: Pointer to MHI Endpoint device ID struct
> + * @name: Name of the associated MHI Endpoint device
> + * @ul_chan: UL channel for the device
> + * @dl_chan: DL channel for the device
> + * @dev_type: MHI device type
> + */
> +struct mhi_ep_device {
> +	struct device dev;
> +	struct mhi_ep_cntrl *mhi_cntrl;
> +	const struct mhi_device_id *id;
> +	const char *name;
> +	struct mhi_ep_chan *ul_chan;
> +	struct mhi_ep_chan *dl_chan;
> +	enum mhi_device_type dev_type;
> +};
> +
> +#define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
> +
> +/**
> + * mhi_ep_register_controller - Register MHI Endpoint controller
> + * @mhi_cntrl: MHI Endpoint controller to register
> + * @config: Configuration to use for the controller
> + *
> + * Return: 0 if controller registrations succeeds, a negative error code otherwise.
> + */
> +int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> +			       const struct mhi_ep_cntrl_config *config);
> +
> +/**
> + * mhi_ep_unregister_controller - Unregister MHI Endpoint controller
> + * @mhi_cntrl: MHI Endpoint controller to unregister
> + */
> +void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
> +
> +#endif


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 11/27] bus: mhi: ep: Add support for registering MHI endpoint client drivers
  2022-02-28 12:43 ` [PATCH v4 11/27] bus: mhi: ep: Add support for registering MHI endpoint client drivers Manivannan Sadhasivam
@ 2022-02-28 16:09   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:09 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, Hemant Kumar

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> This commit adds support for registering MHI endpoint client drivers
> with the MHI endpoint stack. MHI endpoint client drivers bind to one
> or more MHI endpoint devices in order to send and receive the upper-layer
> protocol packets like IP packets, modem control messages, and
> diagnostics messages over the MHI bus.
> 
> Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Looks good.
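
For reference, a client driver built on top of this boils down to
something like the sketch below (all names here are hypothetical, just
to illustrate the API):

	static int test_ep_probe(struct mhi_ep_device *mhi_dev,
				 const struct mhi_device_id *id)
	{
		return 0;
	}

	static void test_ep_remove(struct mhi_ep_device *mhi_dev)
	{
	}

	/* UL (host to endpoint) transfer callback */
	static void test_ep_ul_xfer_cb(struct mhi_ep_device *mhi_dev,
				       struct mhi_result *result)
	{
	}

	/* DL (endpoint to host) transfer callback */
	static void test_ep_dl_xfer_cb(struct mhi_ep_device *mhi_dev,
				       struct mhi_result *result)
	{
	}

	static const struct mhi_device_id test_ep_id_table[] = {
		{ .chan = "TEST_CHAN" },	/* hypothetical channel name */
		{},
	};

	static struct mhi_ep_driver test_ep_driver = {
		.id_table = test_ep_id_table,
		.probe = test_ep_probe,
		.remove = test_ep_remove,
		.ul_xfer_cb = test_ep_ul_xfer_cb,
		.dl_xfer_cb = test_ep_dl_xfer_cb,
		.driver = {
			.name = "test_mhi_ep_client",
		},
	};
	module_mhi_ep_driver(test_ep_driver);

And since __mhi_ep_driver_register() rejects drivers that omit probe,
remove or either xfer callback, even a trivial client has to provide
all four, which seems like the right call.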

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/main.c | 85 +++++++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h    | 57 +++++++++++++++++++++++++-
>   2 files changed, 140 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 87ca42c7b067..2bdcf1657479 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -198,9 +198,88 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
>   }
>   EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
>   
> +static int mhi_ep_driver_probe(struct device *dev)
> +{
> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
> +	struct mhi_ep_chan *ul_chan = mhi_dev->ul_chan;
> +	struct mhi_ep_chan *dl_chan = mhi_dev->dl_chan;
> +
> +	ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
> +	dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
> +
> +	return mhi_drv->probe(mhi_dev, mhi_dev->id);
> +}
> +
> +static int mhi_ep_driver_remove(struct device *dev)
> +{
> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
> +	struct mhi_result result = {};
> +	struct mhi_ep_chan *mhi_chan;
> +	int dir;
> +
> +	/* Skip if it is a controller device */
> +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> +		return 0;
> +
> +	/* Disconnect the channels associated with the driver */
> +	for (dir = 0; dir < 2; dir++) {
> +		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
> +
> +		if (!mhi_chan)
> +			continue;
> +
> +		mutex_lock(&mhi_chan->lock);
> +		/* Send channel disconnect status to the client driver */
> +		if (mhi_chan->xfer_cb) {
> +			result.transaction_status = -ENOTCONN;
> +			result.bytes_xferd = 0;
> +			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> +		}
> +
> +		mhi_chan->state = MHI_CH_STATE_DISABLED;
> +		mhi_chan->xfer_cb = NULL;
> +		mutex_unlock(&mhi_chan->lock);
> +	}
> +
> +	/* Remove the client driver now */
> +	mhi_drv->remove(mhi_dev);
> +
> +	return 0;
> +}
> +
> +int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner)
> +{
> +	struct device_driver *driver = &mhi_drv->driver;
> +
> +	if (!mhi_drv->probe || !mhi_drv->remove)
> +		return -EINVAL;
> +
> +	/* Client drivers should have callbacks defined for both channels */
> +	if (!mhi_drv->ul_xfer_cb || !mhi_drv->dl_xfer_cb)
> +		return -EINVAL;
> +
> +	driver->bus = &mhi_ep_bus_type;
> +	driver->owner = owner;
> +	driver->probe = mhi_ep_driver_probe;
> +	driver->remove = mhi_ep_driver_remove;
> +
> +	return driver_register(driver);
> +}
> +EXPORT_SYMBOL_GPL(__mhi_ep_driver_register);
> +
> +void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv)
> +{
> +	driver_unregister(&mhi_drv->driver);
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_driver_unregister);
> +
>   static int mhi_ep_match(struct device *dev, struct device_driver *drv)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(drv);
> +	const struct mhi_device_id *id;
>   
>   	/*
>   	 * If the device is a controller type then there is no client driver
> @@ -209,6 +288,12 @@ static int mhi_ep_match(struct device *dev, struct device_driver *drv)
>   	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
>   		return 0;
>   
> +	for (id = mhi_drv->id_table; id->chan[0]; id++)
> +		if (!strcmp(mhi_dev->name, id->chan)) {
> +			mhi_dev->id = id;
> +			return 1;
> +		}
> +
>   	return 0;
>   };
>   
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 9c58938371e2..efcbdc51464f 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -108,8 +108,8 @@ struct mhi_ep_cntrl {
>    * @mhi_cntrl: Controller the device belongs to
>    * @id: Pointer to MHI Endpoint device ID struct
>    * @name: Name of the associated MHI Endpoint device
> - * @ul_chan: UL channel for the device
> - * @dl_chan: DL channel for the device
> + * @ul_chan: UL (from host to endpoint) channel for the device
> + * @dl_chan: DL (from endpoint to host) channel for the device
>    * @dev_type: MHI device type
>    */
>   struct mhi_ep_device {
> @@ -122,7 +122,60 @@ struct mhi_ep_device {
>   	enum mhi_device_type dev_type;
>   };
>   
> +/**
> + * struct mhi_ep_driver - Structure representing a MHI Endpoint client driver
> + * @id_table: Pointer to MHI Endpoint device ID table
> + * @driver: Device driver model driver
> + * @probe: CB function for client driver probe function
> + * @remove: CB function for client driver remove function
> + * @ul_xfer_cb: CB function for UL (from host to endpoint) data transfer
> + * @dl_xfer_cb: CB function for DL (from endpoint to host) data transfer
> + */
> +struct mhi_ep_driver {
> +	const struct mhi_device_id *id_table;
> +	struct device_driver driver;
> +	int (*probe)(struct mhi_ep_device *mhi_ep,
> +		     const struct mhi_device_id *id);
> +	void (*remove)(struct mhi_ep_device *mhi_ep);
> +	void (*ul_xfer_cb)(struct mhi_ep_device *mhi_dev,
> +			   struct mhi_result *result);
> +	void (*dl_xfer_cb)(struct mhi_ep_device *mhi_dev,
> +			   struct mhi_result *result);
> +};
> +
>   #define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
> +#define to_mhi_ep_driver(drv) container_of(drv, struct mhi_ep_driver, driver)
> +
> +/*
> + * module_mhi_ep_driver() - Helper macro for drivers that don't do
> + * anything special other than using default mhi_ep_driver_register() and
> + * mhi_ep_driver_unregister().  This eliminates a lot of boilerplate.
> + * Each module may only use this macro once.
> + */
> +#define module_mhi_ep_driver(mhi_drv) \
> +	module_driver(mhi_drv, mhi_ep_driver_register, \
> +		      mhi_ep_driver_unregister)
> +
> +/*
> + * Macro to avoid include chaining to get THIS_MODULE
> + */
> +#define mhi_ep_driver_register(mhi_drv) \
> +	__mhi_ep_driver_register(mhi_drv, THIS_MODULE)
> +
> +/**
> + * __mhi_ep_driver_register - Register a driver with MHI Endpoint bus
> + * @mhi_drv: Driver to be associated with the device
> + * @owner: The module owner
> + *
> + * Return: 0 if driver registrations succeeds, a negative error code otherwise.
> + */
> +int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner);
> +
> +/**
> + * mhi_ep_driver_unregister - Unregister a driver from MHI Endpoint bus
> + * @mhi_drv: Driver associated with the device
> + */
> +void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv);
>   
>   /**
>    * mhi_ep_register_controller - Register MHI Endpoint controller


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 12/27] bus: mhi: ep: Add support for creating and destroying MHI EP devices
  2022-02-28 12:43 ` [PATCH v4 12/27] bus: mhi: ep: Add support for creating and destroying MHI EP devices Manivannan Sadhasivam
@ 2022-02-28 16:10   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:10 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> This commit adds support for creating and destroying MHI endpoint devices.
> The MHI endpoint devices bind to the MHI endpoint channels and are used
> to transfer data between the MHI host and the endpoint device.
> 
> There is a single MHI EP device for each channel pair. The devices will be
> created when the corresponding channels have been started by the host and
> will be destroyed during MHI EP power down and reset.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Looks good.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/main.c | 83 +++++++++++++++++++++++++++++++++++++++
>   1 file changed, 83 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 2bdcf1657479..3afae0bfd83c 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -68,6 +68,89 @@ static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl,
>   	return mhi_dev;
>   }
>   
> +/*
> + * MHI channels are always defined in pairs with UL as the even numbered
> + * channel and DL as odd numbered one. This function gets UL channel (primary)
> + * as the ch_id and always looks after the next entry in channel list for
> + * the corresponding DL channel (secondary).
> + */
> +static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id)
> +{
> +	struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
> +	struct device *dev = mhi_cntrl->cntrl_dev;
> +	struct mhi_ep_device *mhi_dev;
> +	int ret;
> +
> +	/* Check if the channel name is same for both UL and DL */
> +	if (strcmp(mhi_chan->name, mhi_chan[1].name)) {
> +		dev_err(dev, "UL and DL channel names are not same: (%s) != (%s)\n",
> +			mhi_chan->name, mhi_chan[1].name);
> +		return -EINVAL;
> +	}
> +
> +	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_XFER);
> +	if (IS_ERR(mhi_dev))
> +		return PTR_ERR(mhi_dev);
> +
> +	/* Configure primary channel */
> +	mhi_dev->ul_chan = mhi_chan;
> +	get_device(&mhi_dev->dev);
> +	mhi_chan->mhi_dev = mhi_dev;
> +
> +	/* Configure secondary channel as well */
> +	mhi_chan++;
> +	mhi_dev->dl_chan = mhi_chan;
> +	get_device(&mhi_dev->dev);
> +	mhi_chan->mhi_dev = mhi_dev;
> +
> +	/* Channel name is same for both UL and DL */
> +	mhi_dev->name = mhi_chan->name;
> +	dev_set_name(&mhi_dev->dev, "%s_%s",
> +		     dev_name(&mhi_cntrl->mhi_dev->dev),
> +		     mhi_dev->name);
> +
> +	ret = device_add(&mhi_dev->dev);
> +	if (ret)
> +		put_device(&mhi_dev->dev);
> +
> +	return ret;
> +}
> +
> +static int mhi_ep_destroy_device(struct device *dev, void *data)
> +{
> +	struct mhi_ep_device *mhi_dev;
> +	struct mhi_ep_cntrl *mhi_cntrl;
> +	struct mhi_ep_chan *ul_chan, *dl_chan;
> +
> +	if (dev->bus != &mhi_ep_bus_type)
> +		return 0;
> +
> +	mhi_dev = to_mhi_ep_device(dev);
> +	mhi_cntrl = mhi_dev->mhi_cntrl;
> +
> +	/* Only destroy devices created for channels */
> +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> +		return 0;
> +
> +	ul_chan = mhi_dev->ul_chan;
> +	dl_chan = mhi_dev->dl_chan;
> +
> +	if (ul_chan)
> +		put_device(&ul_chan->mhi_dev->dev);
> +
> +	if (dl_chan)
> +		put_device(&dl_chan->mhi_dev->dev);
> +
> +	dev_dbg(&mhi_cntrl->mhi_dev->dev, "Destroying device for chan:%s\n",
> +		 mhi_dev->name);
> +
> +	/* Notify the client and remove the device from MHI bus */
> +	device_del(dev);
> +	put_device(dev);
> +
> +	return 0;
> +}
> +
>   static int mhi_ep_chan_init(struct mhi_ep_cntrl *mhi_cntrl,
>   			    const struct mhi_ep_cntrl_config *config)
>   {


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 13/27] bus: mhi: ep: Add support for managing MMIO registers
  2022-02-28 12:43 ` [PATCH v4 13/27] bus: mhi: ep: Add support for managing MMIO registers Manivannan Sadhasivam
@ 2022-02-28 16:23   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:23 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> Add support for managing the Memory Mapped Input Output (MMIO) registers
> of the MHI bus. All MHI operations are carried out using the MMIO registers
> by both the host and the endpoint device.
> 
> The MMIO registers reside inside the endpoint device memory (fixed
> location based on the platform) and the address is passed by the MHI EP
> controller driver during its registration.

I thought it might have been a mistake that MHI_MASK_ROWS_CH_EV_DB
was used when iterating over channels and events.  Now I see it
represents the number of "rows" of 32-bit doorbell registers for
either events or channels.
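
(With MHI_MASK_ROWS_CH_EV_DB = 4 and MHI_MASK_CH_EV_LEN = 32, that
works out to 4 x 32 = 128 doorbells covered for each type.)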

I guess it might be reasonable to assume the number of event "rows"
is the same as the number of channel rows.  But *maybe* consider
defining them separately, like:
   MHI_MASK_ROWS_CH_DB
   MHI_MASK_ROWS_EV_DB
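
A minimal sketch of that, assuming both keep the current value of 4:

	#define MHI_MASK_ROWS_CH_DB			4
	#define MHI_MASK_ROWS_EV_DB			4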

I also have one more comment below.

Whether or not you implement one or both of these suggestions:

Reviewed-by: Alex Elder <elder@linaro.org>

> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>   drivers/bus/mhi/ep/Makefile   |   2 +-
>   drivers/bus/mhi/ep/internal.h |  26 ++++
>   drivers/bus/mhi/ep/main.c     |   6 +-
>   drivers/bus/mhi/ep/mmio.c     | 272 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        |  18 +++
>   5 files changed, 322 insertions(+), 2 deletions(-)
>   create mode 100644 drivers/bus/mhi/ep/mmio.c
> 

. . .

> diff --git a/drivers/bus/mhi/ep/mmio.c b/drivers/bus/mhi/ep/mmio.c
> new file mode 100644
> index 000000000000..311c5d94c4d2
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/mmio.c
> @@ -0,0 +1,272 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2022 Linaro Ltd.
> + * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> + */
> +
> +#include <linux/bitfield.h>
> +#include <linux/io.h>
> +#include <linux/mhi_ep.h>
> +
> +#include "internal.h"
> +

. . .

> +bool mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	bool chdb = 0;
> +	u32 i;
> +
> +	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
> +		mhi_cntrl->chdb[i].status = mhi_ep_mmio_read(mhi_cntrl, MHI_CHDB_INT_STATUS_n(i));
> +		chdb |= !!mhi_cntrl->chdb[i].status;

This is fine, but I think I'd prefer this to be:

		if (mhi_cntrl->chdb[i].status)
			chdb = true;

Because you're using a bitwise operator to set a Boolean value.
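
So with "bool chdb = false;" up top, the loop would read something like:

	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
		mhi_cntrl->chdb[i].status = mhi_ep_mmio_read(mhi_cntrl, MHI_CHDB_INT_STATUS_n(i));
		if (mhi_cntrl->chdb[i].status)
			chdb = true;
	}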


> +	}
> +
> +	/* Return whether a channel doorbell interrupt occurred or not */
> +	return chdb;
> +}
> +

. . .

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 14/27] bus: mhi: ep: Add support for ring management
  2022-02-28 12:43 ` [PATCH v4 14/27] bus: mhi: ep: Add support for ring management Manivannan Sadhasivam
@ 2022-02-28 16:27   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:27 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> Add support for managing the MHI ring. The MHI ring is a circular queue
> of data structures used to pass information between the host and the
> endpoint.
> 
> MHI supports 3 types of rings:
> 
> 1. Transfer ring
> 2. Event ring
> 3. Command ring
> 
> All rings reside inside the host memory and the MHI EP device maps them to
> the device memory using blocks like the PCIe iATU. The mapping is handled in
> the MHI EP controller driver itself.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Looks good.
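
One thing I did double-check is the wrap-around math in
mhi_ep_ring_add_element(): with, say, ring_size = 128, rd_offset = 125
and wr_offset = 3, the else branch gives (128 - 125) + 3 - 1 = 5 free
elements, which matches the usual "keep one slot reserved" ring
convention.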

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/Makefile   |   2 +-
>   drivers/bus/mhi/ep/internal.h |  18 ++++
>   drivers/bus/mhi/ep/ring.c     | 197 ++++++++++++++++++++++++++++++++++
>   3 files changed, 216 insertions(+), 1 deletion(-)
>   create mode 100644 drivers/bus/mhi/ep/ring.c
> 
> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> index a1555ae287ad..7ba0e04801eb 100644
> --- a/drivers/bus/mhi/ep/Makefile
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -1,2 +1,2 @@
>   obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> -mhi_ep-y := main.o mmio.o
> +mhi_ep-y := main.o mmio.o ring.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index 139e939fcf57..b3b8770f2f4e 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -114,6 +114,11 @@ union mhi_ep_ring_ctx {
>   	struct mhi_generic_ctx generic;
>   };
>   
> +struct mhi_ep_ring_item {
> +	struct list_head node;
> +	struct mhi_ep_ring *ring;
> +};
> +
>   struct mhi_ep_ring {
>   	struct mhi_ep_cntrl *mhi_cntrl;
>   	union mhi_ep_ring_ctx *ring_ctx;
> @@ -126,6 +131,9 @@ struct mhi_ep_ring {
>   	u32 db_offset_h;
>   	u32 db_offset_l;
>   	u32 ch_id;
> +	u32 er_index;
> +	u32 irq_vector;
> +	bool started;
>   };
>   
>   struct mhi_ep_cmd {
> @@ -151,6 +159,16 @@ struct mhi_ep_chan {
>   	bool skip_td;
>   };
>   
> +/* MHI Ring related functions */
> +void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id);
> +void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring);
> +int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
> +		      union mhi_ep_ring_ctx *ctx);
> +size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr);
> +int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ring_element *element);
> +void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring);
> +int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring);
> +
>   /* MMIO related functions */
>   u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
>   void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
> diff --git a/drivers/bus/mhi/ep/ring.c b/drivers/bus/mhi/ep/ring.c
> new file mode 100644
> index 000000000000..1029eed2cc28
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/ring.c
> @@ -0,0 +1,197 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2022 Linaro Ltd.
> + * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> + */
> +
> +#include <linux/mhi_ep.h>
> +#include "internal.h"
> +
> +size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr)
> +{
> +	return (ptr - ring->rbase) / sizeof(struct mhi_ring_element);
> +}
> +
> +static u32 mhi_ep_ring_num_elems(struct mhi_ep_ring *ring)
> +{
> +	return le64_to_cpu(ring->ring_ctx->generic.rlen) / sizeof(struct mhi_ring_element);
> +}
> +
> +void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring)
> +{
> +	ring->rd_offset = (ring->rd_offset + 1) % ring->ring_size;
> +}
> +
> +static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	size_t start, copy_size;
> +	int ret;
> +
> +	/* Don't proceed in the case of event ring. This happens during mhi_ep_ring_start(). */
> +	if (ring->type == RING_TYPE_ER)
> +		return 0;
> +
> +	/* No need to cache the ring if write pointer is unmodified */
> +	if (ring->wr_offset == end)
> +		return 0;
> +
> +	start = ring->wr_offset;
> +	if (start < end) {
> +		copy_size = (end - start) * sizeof(struct mhi_ring_element);
> +		ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase +
> +						(start * sizeof(struct mhi_ring_element)),
> +						&ring->ring_cache[start], copy_size);
> +		if (ret < 0)
> +			return ret;
> +	} else {
> +		copy_size = (ring->ring_size - start) * sizeof(struct mhi_ring_element);
> +		ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase +
> +						(start * sizeof(struct mhi_ring_element)),
> +						&ring->ring_cache[start], copy_size);
> +		if (ret < 0)
> +			return ret;
> +
> +		if (end) {
> +			ret = mhi_cntrl->read_from_host(mhi_cntrl, ring->rbase,
> +							&ring->ring_cache[0],
> +							end * sizeof(struct mhi_ring_element));
> +			if (ret < 0)
> +				return ret;
> +		}
> +	}
> +
> +	dev_dbg(dev, "Cached ring: start %zu end %zu size %zu\n", start, end, copy_size);
> +
> +	return 0;
> +}
> +
> +static int mhi_ep_cache_ring(struct mhi_ep_ring *ring, u64 wr_ptr)
> +{
> +	size_t wr_offset;
> +	int ret;
> +
> +	wr_offset = mhi_ep_ring_addr2offset(ring, wr_ptr);
> +
> +	/* Cache the host ring till write offset */
> +	ret = __mhi_ep_cache_ring(ring, wr_offset);
> +	if (ret)
> +		return ret;
> +
> +	ring->wr_offset = wr_offset;
> +
> +	return 0;
> +}
> +
> +int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring)
> +{
> +	u64 wr_ptr;
> +
> +	wr_ptr = mhi_ep_mmio_get_db(ring);
> +
> +	return mhi_ep_cache_ring(ring, wr_ptr);
> +}
> +
> +/* TODO: Support for adding multiple ring elements to the ring */
> +int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ring_element *el)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	size_t old_offset = 0;
> +	u32 num_free_elem;
> +	int ret;
> +
> +	ret = mhi_ep_update_wr_offset(ring);
> +	if (ret) {
> +		dev_err(dev, "Error updating write pointer\n");
> +		return ret;
> +	}
> +
> +	if (ring->rd_offset < ring->wr_offset)
> +		num_free_elem = (ring->wr_offset - ring->rd_offset) - 1;
> +	else
> +		num_free_elem = ((ring->ring_size - ring->rd_offset) + ring->wr_offset) - 1;
> +
> +	/* Check if there is space in ring for adding at least an element */
> +	if (!num_free_elem) {
> +		dev_err(dev, "No space left in the ring\n");
> +		return -ENOSPC;
> +	}
> +
> +	old_offset = ring->rd_offset;
> +	mhi_ep_ring_inc_index(ring);
> +
> +	dev_dbg(dev, "Adding an element to ring at offset (%zu)\n", ring->rd_offset);
> +
> +	/* Update rp in ring context */
> +	ring->ring_ctx->generic.rp = cpu_to_le64((ring->rd_offset * sizeof(*el)) + ring->rbase);
> +
> +	ret = mhi_cntrl->write_to_host(mhi_cntrl, el, ring->rbase + (old_offset * sizeof(*el)),
> +				       sizeof(*el));
> +	if (ret < 0)
> +		return ret;
> +
> +	return 0;
> +}
> +
> +void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id)
> +{
> +	ring->type = type;
> +	if (ring->type == RING_TYPE_CMD) {
> +		ring->db_offset_h = EP_CRDB_HIGHER;
> +		ring->db_offset_l = EP_CRDB_LOWER;
> +	} else if (ring->type == RING_TYPE_CH) {
> +		ring->db_offset_h = CHDB_HIGHER_n(id);
> +		ring->db_offset_l = CHDB_LOWER_n(id);
> +		ring->ch_id = id;
> +	} else {
> +		ring->db_offset_h = ERDB_HIGHER_n(id);
> +		ring->db_offset_l = ERDB_LOWER_n(id);
> +	}
> +}
> +
> +int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
> +			union mhi_ep_ring_ctx *ctx)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	int ret;
> +
> +	ring->mhi_cntrl = mhi_cntrl;
> +	ring->ring_ctx = ctx;
> +	ring->ring_size = mhi_ep_ring_num_elems(ring);
> +	ring->rbase = le64_to_cpu(ring->ring_ctx->generic.rbase);
> +
> +	if (ring->type == RING_TYPE_CH)
> +		ring->er_index = le32_to_cpu(ring->ring_ctx->ch.erindex);
> +
> +	if (ring->type == RING_TYPE_ER)
> +		ring->irq_vector = le32_to_cpu(ring->ring_ctx->ev.msivec);
> +
> +	/* During ring init, both rp and wp are equal */
> +	ring->rd_offset = mhi_ep_ring_addr2offset(ring, le64_to_cpu(ring->ring_ctx->generic.rp));
> +	ring->wr_offset = mhi_ep_ring_addr2offset(ring, le64_to_cpu(ring->ring_ctx->generic.rp));
> +
> +	/* Allocate ring cache memory for holding the copy of host ring */
> +	ring->ring_cache = kcalloc(ring->ring_size, sizeof(struct mhi_ring_element), GFP_KERNEL);
> +	if (!ring->ring_cache)
> +		return -ENOMEM;
> +
> +	ret = mhi_ep_cache_ring(ring, le64_to_cpu(ring->ring_ctx->generic.wp));
> +	if (ret) {
> +		dev_err(dev, "Failed to cache ring\n");
> +		kfree(ring->ring_cache);
> +		return ret;
> +	}
> +
> +	ring->started = true;
> +
> +	return 0;
> +}
> +
> +void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring)
> +{
> +	ring->started = false;
> +	kfree(ring->ring_cache);
> +	ring->ring_cache = NULL;
> +}


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 15/27] bus: mhi: ep: Add support for sending events to the host
  2022-02-28 12:43 ` [PATCH v4 15/27] bus: mhi: ep: Add support for sending events to the host Manivannan Sadhasivam
@ 2022-02-28 16:37   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:37 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> Add support for sending events to the host over the MHI bus from the
> endpoint. The following events are supported:
> 
> 1. Transfer completion event
> 2. Command completion event
> 3. State change event
> 4. Execution Environment (EE) change event
> 
> An event is sent whenever an operation has been completed in the MHI EP
> device. The event is sent using the MHI event ring, and the host is
> additionally notified using an IRQ if required.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

This code involves some of the same sort of bitfield manipulation
as was commented on in patch 5.  Whatever you do there, plan to
do something similar here.

I have minor suggestions/comments below, but this looks good to me.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/common.h      | 22 +++++++++
>   drivers/bus/mhi/ep/internal.h |  4 ++
>   drivers/bus/mhi/ep/main.c     | 90 +++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        |  8 ++++
>   4 files changed, 124 insertions(+)
> 
> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> index ec75ba1e6686..5b30e2d0832e 100644
> --- a/drivers/bus/mhi/common.h
> +++ b/drivers/bus/mhi/common.h
> @@ -165,6 +165,22 @@
>   #define MHI_TRE_GET_EV_LINKSPEED(tre)	FIELD_GET(GENMASK(31, 24), (MHI_TRE_GET_DWORD(tre, 1)))
>   #define MHI_TRE_GET_EV_LINKWIDTH(tre)	FIELD_GET(GENMASK(7, 0), (MHI_TRE_GET_DWORD(tre, 0)))
>   
> +/* State change event */
> +#define MHI_SC_EV_PTR			0
> +#define MHI_SC_EV_DWORD0(state)		cpu_to_le32(FIELD_PREP(GENMASK(31, 24), state))
> +#define MHI_SC_EV_DWORD1(type)		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), type))
> +
> +/* EE event */
> +#define MHI_EE_EV_PTR			0
> +#define MHI_EE_EV_DWORD0(ee)		cpu_to_le32(FIELD_PREP(GENMASK(31, 24), ee))
> +#define MHI_EE_EV_DWORD1(type)		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), type))
> +
> +
> +/* Command Completion event */
> +#define MHI_CC_EV_PTR(ptr)		cpu_to_le64(ptr)
> +#define MHI_CC_EV_DWORD0(code)		cpu_to_le32(FIELD_PREP(GENMASK(31, 24), code))
> +#define MHI_CC_EV_DWORD1(type)		cpu_to_le32(FIELD_PREP(GENMASK(23, 16), type))
> +
>   /* Transfer descriptor macros */
>   #define MHI_TRE_DATA_PTR(ptr)		cpu_to_le64(ptr)
>   #define MHI_TRE_DATA_DWORD0(len)	cpu_to_le32(FIELD_PREP(GENMASK(15, 0), len))
> @@ -175,6 +191,12 @@
>   								FIELD_PREP(BIT(9), ieot) |  \
>   								FIELD_PREP(BIT(8), ieob) |  \
>   								FIELD_PREP(BIT(0), chain))
> +#define MHI_TRE_DATA_GET_PTR(tre)	le64_to_cpu((tre)->ptr)
> +#define MHI_TRE_DATA_GET_LEN(tre)	FIELD_GET(GENMASK(15, 0), MHI_TRE_GET_DWORD(tre, 0))

You might consider making these macros produce Boolean results.
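
For example, one possible form (just a sketch):

	#define MHI_TRE_DATA_GET_CHAIN(tre)	(!!FIELD_GET(BIT(0), MHI_TRE_GET_DWORD(tre, 1)))

and similarly for the IEOB/IEOT/BEI ones.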

> +#define MHI_TRE_DATA_GET_CHAIN(tre)	FIELD_GET(BIT(0), MHI_TRE_GET_DWORD(tre, 1))
> +#define MHI_TRE_DATA_GET_IEOB(tre)	FIELD_GET(BIT(8), MHI_TRE_GET_DWORD(tre, 1))
> +#define MHI_TRE_DATA_GET_IEOT(tre)	FIELD_GET(BIT(9), MHI_TRE_GET_DWORD(tre, 1))
> +#define MHI_TRE_DATA_GET_BEI(tre)	FIELD_GET(BIT(10), MHI_TRE_GET_DWORD(tre, 1))
>   
>   /* RSC transfer descriptor macros */
>   #define MHI_RSCTRE_DATA_PTR(ptr, len)	cpu_to_le64(FIELD_PREP(GENMASK(64, 48), len) | ptr)
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index b3b8770f2f4e..8753ae93eda3 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -195,4 +195,8 @@ void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *s
>   void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
>   void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
>   
> +/* MHI EP core functions */
> +int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
> +int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ee_type exec_env);
> +
>   #endif
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index d76387c4d5fa..903f9bd3e03d 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -18,6 +18,94 @@
>   
>   static DEFINE_IDA(mhi_ep_cntrl_ida);
>   
> +static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
> +			     struct mhi_ring_element *el, bool bei)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	union mhi_ep_ring_ctx *ctx;
> +	struct mhi_ep_ring *ring;
> +	int ret;
> +
> +	mutex_lock(&mhi_cntrl->event_lock);
> +	ring = &mhi_cntrl->mhi_event[ring_idx].ring;
> +	ctx = (union mhi_ep_ring_ctx *)&mhi_cntrl->ev_ctx_cache[ring_idx];
> +	if (!ring->started) {
> +		ret = mhi_ep_ring_start(mhi_cntrl, ring, ctx);
> +		if (ret) {
> +			dev_err(dev, "Error starting event ring (%u)\n", ring_idx);
> +			goto err_unlock;
> +		}
> +	}
> +
> +	/* Add element to the event ring */
> +	ret = mhi_ep_ring_add_element(ring, el);
> +	if (ret) {
> +		dev_err(dev, "Error adding element to event ring (%u)\n", ring_idx);
> +		goto err_unlock;
> +	}
> +
> +	mutex_unlock(&mhi_cntrl->event_lock);
> +
> +	/*
> +	 * Raise IRQ to host only if the BEI flag is not set in TRE. Host might
> +	 * set this flag for interrupt moderation as per MHI protocol.
> +	 */
> +	if (!bei)
> +		mhi_cntrl->raise_irq(mhi_cntrl, ring->irq_vector);
> +
> +	return 0;
> +
> +err_unlock:
> +	mutex_unlock(&mhi_cntrl->event_lock);
> +
> +	return ret;
> +}
> +
> +static int mhi_ep_send_completion_event(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
> +					struct mhi_ring_element *tre, u32 len, enum mhi_ev_ccs code)
> +{
> +	struct mhi_ring_element event = {};
> +
> +	event.ptr = cpu_to_le64(ring->rbase + (ring->rd_offset * (sizeof(*tre))));

The parentheses around the sizeof are unnecessary; so are the
parentheses around the factors of the multiplication.
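In other words, assuming I'm reading it right, this could simply be:

	event.ptr = cpu_to_le64(ring->rbase + ring->rd_offset * sizeof(*tre));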

> +	event.dword[0] = MHI_TRE_EV_DWORD0(code, len);
> +	event.dword[1] = MHI_TRE_EV_DWORD1(ring->ch_id, MHI_PKT_TYPE_TX_EVENT);
> +
> +	return mhi_ep_send_event(mhi_cntrl, ring->er_index, &event, !!MHI_TRE_DATA_GET_BEI(tre));
> +}
> +
> +int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state)
> +{
> +	struct mhi_ring_element event = {};
> +
> +	event.dword[0] = MHI_SC_EV_DWORD0(state);
> +	event.dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_STATE_CHANGE_EVENT);
> +
> +	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
> +}
> +
> +int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ee_type exec_env)
> +{
> +	struct mhi_ring_element event = {};
> +
> +	event.dword[0] = MHI_EE_EV_DWORD0(exec_env);
> +	event.dword[1] = MHI_SC_EV_DWORD1(MHI_PKT_TYPE_EE_EVENT);
> +
> +	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
> +}
> +
> +static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ev_ccs code)
> +{
> +	struct mhi_ep_ring *ring = &mhi_cntrl->mhi_cmd->ring;
> +	struct mhi_ring_element event = {};
> +
> +	event.ptr = cpu_to_le64(ring->rbase + (ring->rd_offset *
> +					       (sizeof(struct mhi_ring_element))));
> +	event.dword[0] = MHI_CC_EV_DWORD0(code);
> +	event.dword[1] = MHI_CC_EV_DWORD1(MHI_PKT_TYPE_CMD_COMPLETION_EVENT);
> +
> +	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
> +}
> +
>   static void mhi_ep_release_device(struct device *dev)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> @@ -227,6 +315,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   		goto err_free_ch;
>   	}
>   
> +	mutex_init(&mhi_cntrl->event_lock);
> +
>   	/* Set MHI version and AMSS EE before enumeration */
>   	mhi_ep_mmio_write(mhi_cntrl, EP_MHIVER, config->mhi_version);
>   	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EE_AMSS);
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 8e1de062f820..44a4669382ad 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -59,10 +59,14 @@ struct mhi_ep_db_info {
>    * @mhi_event: Points to the event ring configurations table
>    * @mhi_cmd: Points to the command ring configurations table
>    * @sm: MHI Endpoint state machine
> + * @ch_ctx_cache: Cache of host channel context data structure
> + * @ev_ctx_cache: Cache of host event context data structure
> + * @cmd_ctx_cache: Cache of host command context data structure
>    * @ch_ctx_host_pa: Physical address of host channel context data structure
>    * @ev_ctx_host_pa: Physical address of host event context data structure
>    * @cmd_ctx_host_pa: Physical address of host command context data structure
>    * @chdb: Array of channel doorbell interrupt info
> + * @event_lock: Lock for protecting event rings
>    * @raise_irq: CB function for raising IRQ to the host
>    * @alloc_addr: CB function for allocating memory in endpoint for storing host context
>    * @map_addr: CB function for mapping host context to endpoint
> @@ -89,11 +93,15 @@ struct mhi_ep_cntrl {
>   	struct mhi_ep_cmd *mhi_cmd;
>   	struct mhi_ep_sm *sm;
>   
> +	struct mhi_chan_ctxt *ch_ctx_cache;
> +	struct mhi_event_ctxt *ev_ctx_cache;
> +	struct mhi_cmd_ctxt *cmd_ctx_cache;
>   	u64 ch_ctx_host_pa;
>   	u64 ev_ctx_host_pa;
>   	u64 cmd_ctx_host_pa;
>   
>   	struct mhi_ep_db_info chdb[4];
> +	struct mutex event_lock;
>   
>   	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
>   	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,



* Re: [PATCH v4 16/27] bus: mhi: ep: Add support for managing MHI state machine
  2022-02-28 12:43 ` [PATCH v4 16/27] bus: mhi: ep: Add support for managing MHI state machine Manivannan Sadhasivam
@ 2022-02-28 16:41   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:41 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> Add support for managing the MHI state machine by controlling the state
> transitions. Only the following MHI state transitions are supported:
> 
> 1. Ready state
> 2. M0 state
> 3. M3 state
> 4. SYS_ERR state
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Some minor comments below, but otherwise:

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/Makefile   |   2 +-
>   drivers/bus/mhi/ep/internal.h |  11 +++
>   drivers/bus/mhi/ep/main.c     |  54 +++++++++++++-
>   drivers/bus/mhi/ep/sm.c       | 136 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        |  12 +++
>   5 files changed, 213 insertions(+), 2 deletions(-)
>   create mode 100644 drivers/bus/mhi/ep/sm.c
> 
> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> index 7ba0e04801eb..aad85f180b70 100644
> --- a/drivers/bus/mhi/ep/Makefile
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -1,2 +1,2 @@
>   obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> -mhi_ep-y := main.o mmio.o ring.o
> +mhi_ep-y := main.o mmio.o ring.o sm.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index 8753ae93eda3..536351218685 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -144,6 +144,11 @@ struct mhi_ep_event {
>   	struct mhi_ep_ring ring;
>   };
>   
> +struct mhi_ep_state_transition {
> +	struct list_head node;
> +	enum mhi_state state;
> +};
> +
>   struct mhi_ep_chan {
>   	char *name;
>   	struct mhi_ep_device *mhi_dev;
> @@ -198,5 +203,11 @@ void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
>   /* MHI EP core functions */
>   int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
>   int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ee_type exec_env);
> +bool mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state cur_mhi_state,
> +			    enum mhi_state mhi_state);
> +int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state);
> +int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
> +int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
> +int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
>   
>   #endif
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 903f9bd3e03d..7a29543586d0 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -106,6 +106,43 @@ static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_e
>   	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
>   }
>   
> +static void mhi_ep_state_worker(struct work_struct *work)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	struct mhi_ep_state_transition *itr, *tmp;
> +	unsigned long flags;
> +	LIST_HEAD(head);
> +	int ret;
> +
> +	spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
> +	list_splice_tail_init(&mhi_cntrl->st_transition_list, &head);
> +	spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
> +
> +	list_for_each_entry_safe(itr, tmp, &head, node) {
> +		list_del(&itr->node);
> +		dev_dbg(dev, "Handling MHI state transition to %s\n",
> +			 mhi_state_str(itr->state));
> +
> +		switch (itr->state) {
> +		case MHI_STATE_M0:
> +			ret = mhi_ep_set_m0_state(mhi_cntrl);
> +			if (ret)
> +				dev_err(dev, "Failed to transition to M0 state\n");
> +			break;
> +		case MHI_STATE_M3:
> +			ret = mhi_ep_set_m3_state(mhi_cntrl);
> +			if (ret)
> +				dev_err(dev, "Failed to transition to M3 state\n");
> +			break;
> +		default:
> +			dev_err(dev, "Invalid MHI state transition: %d\n", itr->state);
> +			break;
> +		}
> +		kfree(itr);
> +	}
> +}
> +
>   static void mhi_ep_release_device(struct device *dev)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> @@ -315,6 +352,17 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   		goto err_free_ch;
>   	}
>   
> +	INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
> +
> +	mhi_cntrl->wq = alloc_workqueue("mhi_ep_wq", 0, 0);
> +	if (!mhi_cntrl->wq) {
> +		ret = -ENOMEM;
> +		goto err_free_cmd;
> +	}
> +
> +	INIT_LIST_HEAD(&mhi_cntrl->st_transition_list);
> +	spin_lock_init(&mhi_cntrl->state_lock);
> +	spin_lock_init(&mhi_cntrl->list_lock);
>   	mutex_init(&mhi_cntrl->event_lock);
>   
>   	/* Set MHI version and AMSS EE before enumeration */
> @@ -325,7 +373,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
>   	if (mhi_cntrl->index < 0) {
>   		ret = mhi_cntrl->index;
> -		goto err_free_cmd;
> +		goto err_destroy_wq;
>   	}
>   
>   	/* Allocate the controller device */
> @@ -352,6 +400,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   	put_device(&mhi_dev->dev);
>   err_ida_free:
>   	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
> +err_destroy_wq:
> +	destroy_workqueue(mhi_cntrl->wq);
>   err_free_cmd:
>   	kfree(mhi_cntrl->mhi_cmd);
>   err_free_ch:
> @@ -365,6 +415,8 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
>   
> +	destroy_workqueue(mhi_cntrl->wq);
> +
>   	kfree(mhi_cntrl->mhi_cmd);
>   	kfree(mhi_cntrl->mhi_chan);
>   
> diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
> new file mode 100644
> index 000000000000..ad49276ec044
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/sm.c
> @@ -0,0 +1,136 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2022 Linaro Ltd.
> + * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> + */
> +
> +#include <linux/errno.h>
> +#include <linux/mhi_ep.h>
> +#include "internal.h"
> +
> +bool __must_check mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl,
> +					 enum mhi_state cur_mhi_state,
> +					 enum mhi_state mhi_state)
> +{
> +	if (mhi_state == MHI_STATE_SYS_ERR)
> +		return true;    /* Allowed in any state */
> +
> +	if (mhi_state == MHI_STATE_READY)
> +		return cur_mhi_state == MHI_STATE_RESET;
> +
> +	if (mhi_state == MHI_STATE_M0)
> +		return (cur_mhi_state == MHI_STATE_M3 || cur_mhi_state == MHI_STATE_READY);

Parentheses not required here.
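That is, this could simply be:

	return cur_mhi_state == MHI_STATE_M3 || cur_mhi_state == MHI_STATE_READY;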

> +
> +	if (mhi_state == MHI_STATE_M3)
> +		return cur_mhi_state == MHI_STATE_M0;
> +
> +	return false;
> +}
> +
> +int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +
> +	if (!mhi_ep_check_mhi_state(mhi_cntrl, mhi_cntrl->mhi_state, mhi_state)) {
> +		dev_err(dev, "MHI state change to %s from %s is not allowed!\n",
> +			mhi_state_str(mhi_state),
> +			mhi_state_str(mhi_cntrl->mhi_state));
> +		return -EACCES;
> +	}
> +
> +	/* TODO */

What is TODO here?  It probably doesn't belong, but if you're going
to keep it, at least say what's expected...

> +	if (mhi_state == MHI_STATE_M1 || mhi_state == MHI_STATE_M2) {
> +		dev_err(dev, "MHI state (%s) not supported\n", mhi_state_str(mhi_state));
> +		return -EOPNOTSUPP;
> +	}
> +
> +	mhi_ep_mmio_masked_write(mhi_cntrl, EP_MHISTATUS, MHISTATUS_MHISTATE_MASK, mhi_state);
> +	mhi_cntrl->mhi_state = mhi_state;
> +
> +	if (mhi_state == MHI_STATE_READY)
> +		mhi_ep_mmio_masked_write(mhi_cntrl, EP_MHISTATUS, MHISTATUS_READY_MASK, 1);
> +
> +	if (mhi_state == MHI_STATE_SYS_ERR)
> +		mhi_ep_mmio_masked_write(mhi_cntrl, EP_MHISTATUS, MHISTATUS_SYSERR_MASK, 1);
> +
> +	return 0;
> +}
> +
> +int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	enum mhi_state old_state;
> +	int ret;
> +
> +	spin_lock_bh(&mhi_cntrl->state_lock);
> +	old_state = mhi_cntrl->mhi_state;
> +
> +	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
> +	spin_unlock_bh(&mhi_cntrl->state_lock);
> +
> +	if (ret)
> +		return ret;
> +
> +	/* Signal host that the device moved to M0 */
> +	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M0);
> +	if (ret) {
> +		dev_err(dev, "Failed sending M0 state change event\n");
> +		return ret;
> +	}
> +
> +	if (old_state == MHI_STATE_READY) {
> +		/* Send AMSS EE event to host */
> +		ret = mhi_ep_send_ee_event(mhi_cntrl, MHI_EE_AMSS);
> +		if (ret) {
> +			dev_err(dev, "Failed sending AMSS EE event\n");
> +			return ret;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	int ret;
> +
> +	spin_lock_bh(&mhi_cntrl->state_lock);
> +	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
> +	spin_unlock_bh(&mhi_cntrl->state_lock);
> +
> +	if (ret)
> +		return ret;
> +
> +	/* Signal host that the device moved to M3 */
> +	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);
> +	if (ret) {
> +		dev_err(dev, "Failed sending M3 state change event\n");
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	enum mhi_state mhi_state;
> +	int ret, is_ready;
> +
> +	spin_lock_bh(&mhi_cntrl->state_lock);
> +	/* Ensure that the MHISTATUS is set to RESET by host */
> +	mhi_state = mhi_ep_mmio_masked_read(mhi_cntrl, EP_MHISTATUS, MHISTATUS_MHISTATE_MASK);
> +	is_ready = mhi_ep_mmio_masked_read(mhi_cntrl, EP_MHISTATUS, MHISTATUS_READY_MASK);
> +
> +	if (mhi_state != MHI_STATE_RESET || is_ready) {
> +		dev_err(dev, "READY state transition failed. MHI host not in RESET state\n");
> +		spin_unlock_bh(&mhi_cntrl->state_lock);
> +		return -EIO;
> +	}
> +
> +	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_READY);
> +	spin_unlock_bh(&mhi_cntrl->state_lock);
> +
> +	return ret;
> +}
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 44a4669382ad..dc27a5de7d3c 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -67,6 +67,11 @@ struct mhi_ep_db_info {
>    * @cmd_ctx_host_pa: Physical address of host command context data structure
>    * @chdb: Array of channel doorbell interrupt info
>    * @event_lock: Lock for protecting event rings
> + * @list_lock: Lock for protecting state transition and channel doorbell lists
> + * @state_lock: Lock for protecting state transitions
> + * @st_transition_list: List of state transitions
> + * @wq: Dedicated workqueue for handling rings and state changes
> + * @state_work: State transition worker
>    * @raise_irq: CB function for raising IRQ to the host
>    * @alloc_addr: CB function for allocating memory in endpoint for storing host context
>    * @map_addr: CB function for mapping host context to endpoint
> @@ -102,6 +107,13 @@ struct mhi_ep_cntrl {
>   
>   	struct mhi_ep_db_info chdb[4];
>   	struct mutex event_lock;
> +	spinlock_t list_lock;
> +	spinlock_t state_lock;
> +
> +	struct list_head st_transition_list;
> +
> +	struct workqueue_struct *wq;
> +	struct work_struct state_work;
>   
>   	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
>   	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,



* Re: [PATCH v4 17/27] bus: mhi: ep: Add support for processing MHI endpoint interrupts
  2022-02-28 12:43 ` [PATCH v4 17/27] bus: mhi: ep: Add support for processing MHI endpoint interrupts Manivannan Sadhasivam
@ 2022-02-28 16:45   ` Alex Elder
  2022-03-01  6:41     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:45 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> Add support for processing MHI endpoint interrupts such as control
> interrupt, command interrupt and channel interrupt from the host.
> 
> The interrupts will be generated in the endpoint device whenever the
> host writes to the corresponding doorbell registers. The doorbell
> logic is handled internally by the hardware.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

One suggestion for future work, but otherwise this looks good.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/main.c | 123 +++++++++++++++++++++++++++++++++++++-
>   include/linux/mhi_ep.h    |   4 ++
>   2 files changed, 125 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 7a29543586d0..ce690b1aeace 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -143,6 +143,112 @@ static void mhi_ep_state_worker(struct work_struct *work)
>   	}
>   }
>   
> +static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl, unsigned long ch_int,
> +				    u32 ch_idx)
> +{
> +	struct mhi_ep_ring_item *item;
> +	struct mhi_ep_ring *ring;
> +	bool work = !!ch_int;
> +	LIST_HEAD(head);
> +	u32 i;
> +
> +	/* First add the ring items to a local list */
> +	for_each_set_bit(i, &ch_int, 32) {
> +		/* Channel index varies for each register: 0, 32, 64, 96 */
> +		u32 ch_id = ch_idx + i;
> +
> +		ring = &mhi_cntrl->mhi_chan[ch_id].ring;
> +		item = kzalloc(sizeof(*item), GFP_ATOMIC);

It looks like this will be used a lot, so I suggest you
consider creating a slab cache of ring items to allocate
from.  I haven't suggested that elsewhere, but it's
possible there are other frequently-allocated structures
that would warrant that.
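A rough sketch of what I mean (untested, and the cache pointer name is
made up):

	/* e.g. created once in mhi_ep_register_controller() */
	mhi_cntrl->ring_item_cache = kmem_cache_create("mhi_ep_ring_item",
						       sizeof(struct mhi_ep_ring_item),
						       0, 0, NULL);

and then here:

	item = kmem_cache_zalloc(mhi_cntrl->ring_item_cache, GFP_ATOMIC);

with matching kmem_cache_free() calls where the items are consumed and
kmem_cache_destroy() when the controller is unregistered.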

> +		if (!item)
> +			return;
> +
> +		item->ring = ring;
> +		list_add_tail(&item->node, &head);
> +	}
> +
> +	/* Now, splice the local list into ch_db_list and queue the work item */
> +	if (work) {
> +		spin_lock(&mhi_cntrl->list_lock);
> +		list_splice_tail_init(&head, &mhi_cntrl->ch_db_list);
> +		spin_unlock(&mhi_cntrl->list_lock);
> +	}
> +}
> +
> +/*
> + * Channel interrupt statuses are contained in 4 registers each of 32bit length.
> + * For checking all interrupts, we need to loop through each registers and then
> + * check for bits set.
> + */
> +static void mhi_ep_check_channel_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	u32 ch_int, ch_idx, i;
> +
> +	/* Bail out if there is no channel doorbell interrupt */
> +	if (!mhi_ep_mmio_read_chdb_status_interrupts(mhi_cntrl))
> +		return;
> +
> +	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
> +		ch_idx = i * MHI_MASK_CH_EV_LEN;
> +
> +		/* Only process channel interrupt if the mask is enabled */
> +		ch_int = mhi_cntrl->chdb[i].status & mhi_cntrl->chdb[i].mask;
> +		if (ch_int) {
> +			mhi_ep_queue_channel_db(mhi_cntrl, ch_int, ch_idx);
> +			mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_CLEAR_n(i),
> +							mhi_cntrl->chdb[i].status);
> +		}
> +	}
> +}
> +
> +static void mhi_ep_process_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl,
> +					 enum mhi_state state)
> +{
> +	struct mhi_ep_state_transition *item;
> +
> +	item = kzalloc(sizeof(*item), GFP_ATOMIC);
> +	if (!item)
> +		return;
> +
> +	item->state = state;
> +	spin_lock(&mhi_cntrl->list_lock);
> +	list_add_tail(&item->node, &mhi_cntrl->st_transition_list);
> +	spin_unlock(&mhi_cntrl->list_lock);
> +
> +	queue_work(mhi_cntrl->wq, &mhi_cntrl->state_work);
> +}
> +
> +/*
> + * Interrupt handler that services interrupts raised by the host writing to
> + * MHICTRL and Command ring doorbell (CRDB) registers for state change and
> + * channel interrupts.
> + */
> +static irqreturn_t mhi_ep_irq(int irq, void *data)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = data;
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	enum mhi_state state;
> +	u32 int_value;
> +
> +	/* Acknowledge the ctrl interrupt */
> +	int_value = mhi_ep_mmio_read(mhi_cntrl, MHI_CTRL_INT_STATUS);
> +	mhi_ep_mmio_write(mhi_cntrl, MHI_CTRL_INT_CLEAR, int_value);
> +
> +	/* Check for ctrl interrupt */
> +	if (FIELD_GET(MHI_CTRL_INT_STATUS_MSK, int_value)) {
> +		dev_dbg(dev, "Processing ctrl interrupt\n");
> +		mhi_ep_process_ctrl_interrupt(mhi_cntrl, state);
> +	}
> +
> +	/* Check for command doorbell interrupt */
> +	if (FIELD_GET(MHI_CTRL_INT_STATUS_CRDB_MSK, int_value))
> +		dev_dbg(dev, "Processing command doorbell interrupt\n");
> +
> +	/* Check for channel interrupts */
> +	mhi_ep_check_channel_interrupt(mhi_cntrl);
> +
> +	return IRQ_HANDLED;
> +}
> +
>   static void mhi_ep_release_device(struct device *dev)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> @@ -339,7 +445,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   	struct mhi_ep_device *mhi_dev;
>   	int ret;
>   
> -	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio)
> +	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio || !mhi_cntrl->irq)
>   		return -EINVAL;
>   
>   	ret = mhi_ep_chan_init(mhi_cntrl, config);
> @@ -361,6 +467,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   	}
>   
>   	INIT_LIST_HEAD(&mhi_cntrl->st_transition_list);
> +	INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
>   	spin_lock_init(&mhi_cntrl->state_lock);
>   	spin_lock_init(&mhi_cntrl->list_lock);
>   	mutex_init(&mhi_cntrl->event_lock);
> @@ -376,12 +483,20 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   		goto err_destroy_wq;
>   	}
>   
> +	irq_set_status_flags(mhi_cntrl->irq, IRQ_NOAUTOEN);
> +	ret = request_irq(mhi_cntrl->irq, mhi_ep_irq, IRQF_TRIGGER_HIGH,
> +			  "doorbell_irq", mhi_cntrl);
> +	if (ret) {
> +		dev_err(mhi_cntrl->cntrl_dev, "Failed to request Doorbell IRQ\n");
> +		goto err_ida_free;
> +	}
> +
>   	/* Allocate the controller device */
>   	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_CONTROLLER);
>   	if (IS_ERR(mhi_dev)) {
>   		dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
>   		ret = PTR_ERR(mhi_dev);
> -		goto err_ida_free;
> +		goto err_free_irq;
>   	}
>   
>   	dev_set_name(&mhi_dev->dev, "mhi_ep%u", mhi_cntrl->index);
> @@ -398,6 +513,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   
>   err_put_dev:
>   	put_device(&mhi_dev->dev);
> +err_free_irq:
> +	free_irq(mhi_cntrl->irq, mhi_cntrl);
>   err_ida_free:
>   	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
>   err_destroy_wq:
> @@ -417,6 +534,8 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
>   
>   	destroy_workqueue(mhi_cntrl->wq);
>   
> +	free_irq(mhi_cntrl->irq, mhi_cntrl);
> +
>   	kfree(mhi_cntrl->mhi_cmd);
>   	kfree(mhi_cntrl->mhi_chan);
>   
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index dc27a5de7d3c..43aa9b133db4 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -70,6 +70,7 @@ struct mhi_ep_db_info {
>    * @list_lock: Lock for protecting state transition and channel doorbell lists
>    * @state_lock: Lock for protecting state transitions
>    * @st_transition_list: List of state transitions
> + * @ch_db_list: List of queued channel doorbells
>    * @wq: Dedicated workqueue for handling rings and state changes
>    * @state_work: State transition worker
>    * @raise_irq: CB function for raising IRQ to the host
> @@ -87,6 +88,7 @@ struct mhi_ep_db_info {
>    * @chdb_offset: Channel doorbell offset set by the host
>    * @erdb_offset: Event ring doorbell offset set by the host
>    * @index: MHI Endpoint controller index
> + * @irq: IRQ used by the endpoint controller
>    */
>   struct mhi_ep_cntrl {
>   	struct device *cntrl_dev;
> @@ -111,6 +113,7 @@ struct mhi_ep_cntrl {
>   	spinlock_t state_lock;
>   
>   	struct list_head st_transition_list;
> +	struct list_head ch_db_list;
>   
>   	struct workqueue_struct *wq;
>   	struct work_struct state_work;
> @@ -137,6 +140,7 @@ struct mhi_ep_cntrl {
>   	u32 chdb_offset;
>   	u32 erdb_offset;
>   	u32 index;
> +	int irq;
>   };
>   
>   /**



* Re: [PATCH v4 18/27] bus: mhi: ep: Add support for powering up the MHI endpoint stack
  2022-02-28 12:43 ` [PATCH v4 18/27] bus: mhi: ep: Add support for powering up the MHI endpoint stack Manivannan Sadhasivam
@ 2022-02-28 16:47   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:47 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> Add support for MHI endpoint power_up that includes initializing the MMIO
> and rings, caching the host MHI registers, and setting the MHI state to M0.
> After registering the MHI EP controller, the stack has to be powered up
> before it can be used.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Looks good.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/internal.h |   6 +
>   drivers/bus/mhi/ep/main.c     | 237 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        |  16 +++
>   3 files changed, 259 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index 536351218685..a2ec4169a4b2 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -210,4 +210,10 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
>   int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
>   int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
>   
> +/* MHI EP memory management functions */
> +int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
> +		     phys_addr_t *phys_ptr, void __iomem **virt);
> +void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
> +		       void __iomem *virt, size_t size);
> +
>   #endif
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index ce690b1aeace..47807102baad 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -16,6 +16,9 @@
>   #include <linux/module.h>
>   #include "internal.h"
>   
> +#define MHI_SUSPEND_MIN			100
> +#define MHI_SUSPEND_TIMEOUT		600
> +
>   static DEFINE_IDA(mhi_ep_cntrl_ida);
>   
>   static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
> @@ -106,6 +109,186 @@ static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_e
>   	return mhi_ep_send_event(mhi_cntrl, 0, &event, 0);
>   }
>   
> +int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
> +		     phys_addr_t *phys_ptr, void __iomem **virt)
> +{
> +	size_t offset = pci_addr % 0x1000;
> +	void __iomem *buf;
> +	phys_addr_t phys;
> +	int ret;
> +
> +	size += offset;
> +
> +	buf = mhi_cntrl->alloc_addr(mhi_cntrl, &phys, size);
> +	if (!buf)
> +		return -ENOMEM;
> +
> +	ret = mhi_cntrl->map_addr(mhi_cntrl, phys, pci_addr - offset, size);
> +	if (ret) {
> +		mhi_cntrl->free_addr(mhi_cntrl, phys, buf, size);
> +		return ret;
> +	}
> +
> +	*phys_ptr = phys + offset;
> +	*virt = buf + offset;
> +
> +	return 0;
> +}
> +
> +void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
> +			void __iomem *virt, size_t size)
> +{
> +	size_t offset = pci_addr % 0x1000;
> +
> +	size += offset;
> +
> +	mhi_cntrl->unmap_addr(mhi_cntrl, phys - offset);
> +	mhi_cntrl->free_addr(mhi_cntrl, phys - offset, virt - offset, size);
> +}
> +
> +static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	size_t cmd_ctx_host_size, ch_ctx_host_size, ev_ctx_host_size;
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	int ret;
> +
> +	/* Update the number of event rings (NER) programmed by the host */
> +	mhi_ep_mmio_update_ner(mhi_cntrl);
> +
> +	dev_dbg(dev, "Number of Event rings: %u, HW Event rings: %u\n",
> +		 mhi_cntrl->event_rings, mhi_cntrl->hw_event_rings);
> +
> +	ch_ctx_host_size = sizeof(struct mhi_chan_ctxt) * mhi_cntrl->max_chan;
> +	ev_ctx_host_size = sizeof(struct mhi_event_ctxt) * mhi_cntrl->event_rings;
> +	cmd_ctx_host_size = sizeof(struct mhi_cmd_ctxt) * NR_OF_CMD_RINGS;
> +
> +	/* Get the channel context base pointer from host */
> +	mhi_ep_mmio_get_chc_base(mhi_cntrl);
> +
> +	/* Allocate and map memory for caching host channel context */
> +	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, ch_ctx_host_size,
> +				&mhi_cntrl->ch_ctx_cache_phys,
> +				(void __iomem **)&mhi_cntrl->ch_ctx_cache);
> +	if (ret) {
> +		dev_err(dev, "Failed to allocate and map ch_ctx_cache\n");
> +		return ret;
> +	}
> +
> +	/* Get the event context base pointer from host */
> +	mhi_ep_mmio_get_erc_base(mhi_cntrl);
> +
> +	/* Allocate and map memory for caching host event context */
> +	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, ev_ctx_host_size,
> +				&mhi_cntrl->ev_ctx_cache_phys,
> +				(void __iomem **)&mhi_cntrl->ev_ctx_cache);
> +	if (ret) {
> +		dev_err(dev, "Failed to allocate and map ev_ctx_cache\n");
> +		goto err_ch_ctx;
> +	}
> +
> +	/* Get the command context base pointer from host */
> +	mhi_ep_mmio_get_crc_base(mhi_cntrl);
> +
> +	/* Allocate and map memory for caching host command context */
> +	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, cmd_ctx_host_size,
> +				&mhi_cntrl->cmd_ctx_cache_phys,
> +				(void __iomem **)&mhi_cntrl->cmd_ctx_cache);
> +	if (ret) {
> +		dev_err(dev, "Failed to allocate and map cmd_ctx_cache\n");
> +		goto err_ev_ctx;
> +	}
> +
> +	/* Initialize command ring */
> +	ret = mhi_ep_ring_start(mhi_cntrl, &mhi_cntrl->mhi_cmd->ring,
> +				(union mhi_ep_ring_ctx *)mhi_cntrl->cmd_ctx_cache);
> +	if (ret) {
> +		dev_err(dev, "Failed to start the command ring\n");
> +		goto err_cmd_ctx;
> +	}
> +
> +	return ret;
> +
> +err_cmd_ctx:
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, mhi_cntrl->cmd_ctx_cache_phys,
> +			mhi_cntrl->cmd_ctx_cache, cmd_ctx_host_size);
> +
> +err_ev_ctx:
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, mhi_cntrl->ev_ctx_cache_phys,
> +			mhi_cntrl->ev_ctx_cache, ev_ctx_host_size);
> +
> +err_ch_ctx:
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, mhi_cntrl->ch_ctx_cache_phys,
> +			mhi_cntrl->ch_ctx_cache, ch_ctx_host_size);
> +
> +	return ret;
> +}
> +
> +static void mhi_ep_free_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	size_t cmd_ctx_host_size, ch_ctx_host_size, ev_ctx_host_size;
> +
> +	ch_ctx_host_size = sizeof(struct mhi_chan_ctxt) * mhi_cntrl->max_chan;
> +	ev_ctx_host_size = sizeof(struct mhi_event_ctxt) * mhi_cntrl->event_rings;
> +	cmd_ctx_host_size = sizeof(struct mhi_cmd_ctxt) * NR_OF_CMD_RINGS;
> +
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, mhi_cntrl->cmd_ctx_cache_phys,
> +			mhi_cntrl->cmd_ctx_cache, cmd_ctx_host_size);
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, mhi_cntrl->ev_ctx_cache_phys,
> +			mhi_cntrl->ev_ctx_cache, ev_ctx_host_size);
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, mhi_cntrl->ch_ctx_cache_phys,
> +			mhi_cntrl->ch_ctx_cache, ch_ctx_host_size);
> +}
> +
> +static void mhi_ep_enable_int(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	/*
> +	 * Doorbell interrupts are enabled when the corresponding channel gets started.
> +	 * Enabling all interrupts here triggers spurious irqs as some of the interrupts
> +	 * associated with hw channels always get triggered.
> +	 */
> +	mhi_ep_mmio_enable_ctrl_interrupt(mhi_cntrl);
> +	mhi_ep_mmio_enable_cmdb_interrupt(mhi_cntrl);
> +}
> +
> +static int mhi_ep_enable(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	enum mhi_state state;
> +	u32 max_cnt = 0;
> +	bool mhi_reset;
> +	int ret;
> +
> +	/* Wait for Host to set the M0 state */
> +	do {
> +		msleep(MHI_SUSPEND_MIN);
> +		mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);
> +		if (mhi_reset) {
> +			/* Clear the MHI reset if host is in reset state */
> +			mhi_ep_mmio_clear_reset(mhi_cntrl);
> +			dev_dbg(dev, "Host initiated reset while waiting for M0\n");
> +		}
> +		max_cnt++;
> +	} while (state != MHI_STATE_M0 && max_cnt < MHI_SUSPEND_TIMEOUT);
> +
> +	if (state != MHI_STATE_M0) {
> +		dev_err(dev, "Host failed to enter M0\n");
> +		return -ETIMEDOUT;
> +	}
> +
> +	ret = mhi_ep_cache_host_cfg(mhi_cntrl);
> +	if (ret) {
> +		dev_err(dev, "Failed to cache host config\n");
> +		return ret;
> +	}
> +
> +	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EE_AMSS);
> +
> +	/* Enable all interrupts now */
> +	mhi_ep_enable_int(mhi_cntrl);
> +
> +	return 0;
> +}
> +
>   static void mhi_ep_state_worker(struct work_struct *work)
>   {
>   	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
> @@ -249,6 +432,60 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
>   	return IRQ_HANDLED;
>   }
>   
> +int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	int ret, i;
> +
> +	/*
> +	 * Mask all interrupts until the state machine is ready. Interrupts will
> +	 * be enabled later with mhi_ep_enable().
> +	 */
> +	mhi_ep_mmio_mask_interrupts(mhi_cntrl);
> +	mhi_ep_mmio_init(mhi_cntrl);
> +
> +	mhi_cntrl->mhi_event = kzalloc(mhi_cntrl->event_rings * (sizeof(*mhi_cntrl->mhi_event)),
> +					GFP_KERNEL);
> +	if (!mhi_cntrl->mhi_event)
> +		return -ENOMEM;
> +
> +	/* Initialize command, channel and event rings */
> +	mhi_ep_ring_init(&mhi_cntrl->mhi_cmd->ring, RING_TYPE_CMD, 0);
> +	for (i = 0; i < mhi_cntrl->max_chan; i++)
> +		mhi_ep_ring_init(&mhi_cntrl->mhi_chan[i].ring, RING_TYPE_CH, i);
> +	for (i = 0; i < mhi_cntrl->event_rings; i++)
> +		mhi_ep_ring_init(&mhi_cntrl->mhi_event[i].ring, RING_TYPE_ER, i);
> +
> +	mhi_cntrl->mhi_state = MHI_STATE_RESET;
> +
> +	/* Set AMSS EE before signaling ready state */
> +	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EE_AMSS);
> +
> +	/* All set, notify the host that we are ready */
> +	ret = mhi_ep_set_ready_state(mhi_cntrl);
> +	if (ret)
> +		goto err_free_event;
> +
> +	dev_dbg(dev, "READY state notification sent to the host\n");
> +
> +	ret = mhi_ep_enable(mhi_cntrl);
> +	if (ret) {
> +		dev_err(dev, "Failed to enable MHI endpoint\n");
> +		goto err_free_event;
> +	}
> +
> +	enable_irq(mhi_cntrl->irq);
> +	mhi_cntrl->enabled = true;
> +
> +	return 0;
> +
> +err_free_event:
> +	kfree(mhi_cntrl->mhi_event);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_power_up);
> +
>   static void mhi_ep_release_device(struct device *dev)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 43aa9b133db4..1b7dec859a5e 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -65,6 +65,9 @@ struct mhi_ep_db_info {
>    * @ch_ctx_host_pa: Physical address of host channel context data structure
>    * @ev_ctx_host_pa: Physical address of host event context data structure
>    * @cmd_ctx_host_pa: Physical address of host command context data structure
> + * @ch_ctx_cache_phys: Physical address of the host channel context cache
> + * @ev_ctx_cache_phys: Physical address of the host event context cache
> + * @cmd_ctx_cache_phys: Physical address of the host command context cache
>    * @chdb: Array of channel doorbell interrupt info
>    * @event_lock: Lock for protecting event rings
>    * @list_lock: Lock for protecting state transition and channel doorbell lists
> @@ -89,6 +92,7 @@ struct mhi_ep_db_info {
>    * @erdb_offset: Event ring doorbell offset set by the host
>    * @index: MHI Endpoint controller index
>    * @irq: IRQ used by the endpoint controller
> + * @enabled: Check if the endpoint controller is enabled or not
>    */
>   struct mhi_ep_cntrl {
>   	struct device *cntrl_dev;
> @@ -106,6 +110,9 @@ struct mhi_ep_cntrl {
>   	u64 ch_ctx_host_pa;
>   	u64 ev_ctx_host_pa;
>   	u64 cmd_ctx_host_pa;
> +	phys_addr_t ch_ctx_cache_phys;
> +	phys_addr_t ev_ctx_cache_phys;
> +	phys_addr_t cmd_ctx_cache_phys;
>   
>   	struct mhi_ep_db_info chdb[4];
>   	struct mutex event_lock;
> @@ -141,6 +148,7 @@ struct mhi_ep_cntrl {
>   	u32 erdb_offset;
>   	u32 index;
>   	int irq;
> +	bool enabled;
>   };
>   
>   /**
> @@ -235,4 +243,12 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>    */
>   void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
>   
> +/**
> + * mhi_ep_power_up - Power up the MHI endpoint stack
> + * @mhi_cntrl: MHI Endpoint controller
> + *
> + * Return: 0 if power up succeeds, a negative error code otherwise.
> + */
> +int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
> +
>   #endif



* Re: [PATCH v4 19/27] bus: mhi: ep: Add support for powering down the MHI endpoint stack
  2022-02-28 12:43 ` [PATCH v4 19/27] bus: mhi: ep: Add support for powering down " Manivannan Sadhasivam
@ 2022-02-28 16:49   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:49 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> Add support for MHI endpoint power_down that includes stopping all
> available channels, destroying the channels, resetting the event and
> transfer rings and freeing the host cache.
> 
> The stack will be powered down whenever the physical bus link goes down.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Looks good.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/main.c | 78 +++++++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h    |  6 +++
>   2 files changed, 84 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 47807102baad..4956440273ad 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -21,6 +21,8 @@
>   
>   static DEFINE_IDA(mhi_ep_cntrl_ida);
>   
> +static int mhi_ep_destroy_device(struct device *dev, void *data);
> +
>   static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
>   			     struct mhi_ring_element *el, bool bei)
>   {
> @@ -432,6 +434,68 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
>   	return IRQ_HANDLED;
>   }
>   
> +static void mhi_ep_abort_transfer(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct mhi_ep_ring *ch_ring, *ev_ring;
> +	struct mhi_result result = {};
> +	struct mhi_ep_chan *mhi_chan;
> +	int i;
> +
> +	/* Stop all the channels */
> +	for (i = 0; i < mhi_cntrl->max_chan; i++) {
> +		mhi_chan = &mhi_cntrl->mhi_chan[i];
> +		if (!mhi_chan->ring.started)
> +			continue;
> +
> +		mutex_lock(&mhi_chan->lock);
> +		/* Send channel disconnect status to client drivers */
> +		if (mhi_chan->xfer_cb) {
> +			result.transaction_status = -ENOTCONN;
> +			result.bytes_xferd = 0;
> +			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> +		}
> +
> +		mhi_chan->state = MHI_CH_STATE_DISABLED;
> +		mutex_unlock(&mhi_chan->lock);
> +	}
> +
> +	flush_workqueue(mhi_cntrl->wq);
> +
> +	/* Destroy devices associated with all channels */
> +	device_for_each_child(&mhi_cntrl->mhi_dev->dev, NULL, mhi_ep_destroy_device);
> +
> +	/* Stop and reset the transfer rings */
> +	for (i = 0; i < mhi_cntrl->max_chan; i++) {
> +		mhi_chan = &mhi_cntrl->mhi_chan[i];
> +		if (!mhi_chan->ring.started)
> +			continue;
> +
> +		ch_ring = &mhi_cntrl->mhi_chan[i].ring;
> +		mutex_lock(&mhi_chan->lock);
> +		mhi_ep_ring_reset(mhi_cntrl, ch_ring);
> +		mutex_unlock(&mhi_chan->lock);
> +	}
> +
> +	/* Stop and reset the event rings */
> +	for (i = 0; i < mhi_cntrl->event_rings; i++) {
> +		ev_ring = &mhi_cntrl->mhi_event[i].ring;
> +		if (!ev_ring->started)
> +			continue;
> +
> +		mutex_lock(&mhi_cntrl->event_lock);
> +		mhi_ep_ring_reset(mhi_cntrl, ev_ring);
> +		mutex_unlock(&mhi_cntrl->event_lock);
> +	}
> +
> +	/* Stop and reset the command ring */
> +	mhi_ep_ring_reset(mhi_cntrl, &mhi_cntrl->mhi_cmd->ring);
> +
> +	mhi_ep_free_host_cfg(mhi_cntrl);
> +	mhi_ep_mmio_mask_interrupts(mhi_cntrl);
> +
> +	mhi_cntrl->enabled = false;
> +}
> +
>   int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> @@ -486,6 +550,16 @@ int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
>   }
>   EXPORT_SYMBOL_GPL(mhi_ep_power_up);
>   
> +void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	if (mhi_cntrl->enabled)
> +		mhi_ep_abort_transfer(mhi_cntrl);
> +
> +	kfree(mhi_cntrl->mhi_event);
> +	disable_irq(mhi_cntrl->irq);
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_power_down);
> +
>   static void mhi_ep_release_device(struct device *dev)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> @@ -765,6 +839,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   }
>   EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
>   
> +/*
> + * It is expected that the controller drivers will power down the MHI EP stack
> + * using "mhi_ep_power_down()" before calling this function to unregister themselves.
> + */
>   void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 1b7dec859a5e..8e062a4c84f4 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -251,4 +251,10 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
>    */
>   int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
>   
> +/**
> + * mhi_ep_power_down - Power down the MHI endpoint stack
> + * @mhi_cntrl: MHI controller
> + */
> +void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
> +
>   #endif



* Re: [PATCH v4 25/27] bus: mhi: ep: Add support for queueing SKBs to the host
  2022-02-28 12:43 ` [PATCH v4 25/27] bus: mhi: ep: Add support for queueing SKBs to the host Manivannan Sadhasivam
@ 2022-02-28 16:51   ` Alex Elder
  0 siblings, 0 replies; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:51 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> Add support for queueing SKBs to the host over the transfer ring of the
> relevant channel. The mhi_ep_queue_skb() API will be used by the client
> networking drivers to queue the SKBs to the host over the MHI bus.
> 
> The host will add ring elements to the transfer ring periodically for
> the device and the device will write SKBs to the ring elements. If a
> single SKB doesn't fit in a ring element (TRE), it will be placed in
> multiple ring elements and the overflow event will be sent for all ring
> elements except the last one. For the last ring element, the EOT event
> will be sent indicating the packet boundary.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Looks good.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/main.c | 82 +++++++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h    |  9 +++++
>   2 files changed, 91 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 63e14d55aa06..25d34cf26fd7 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -471,6 +471,88 @@ int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring, struct mhi_ring_element *el
>   	return 0;
>   }
>   
> +/* TODO: Handle partially formed TDs */
> +int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, struct sk_buff *skb)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
> +	struct mhi_ep_chan *mhi_chan = mhi_dev->dl_chan;
> +	struct device *dev = &mhi_chan->mhi_dev->dev;
> +	struct mhi_ring_element *el;
> +	u32 buf_left, read_offset;
> +	struct mhi_ep_ring *ring;
> +	enum mhi_ev_ccs code;
> +	void *read_addr;
> +	u64 write_addr;
> +	size_t tr_len;
> +	u32 tre_len;
> +	int ret;
> +
> +	buf_left = skb->len;
> +	ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
> +
> +	mutex_lock(&mhi_chan->lock);
> +
> +	do {
> +		/* Don't process the transfer ring if the channel is not in RUNNING state */
> +		if (mhi_chan->state != MHI_CH_STATE_RUNNING) {
> +			dev_err(dev, "Channel not available\n");
> +			ret = -ENODEV;
> +			goto err_exit;
> +		}
> +
> +		if (mhi_ep_queue_is_empty(mhi_dev, DMA_FROM_DEVICE)) {
> +			dev_err(dev, "TRE not available!\n");
> +			ret = -ENOSPC;
> +			goto err_exit;
> +		}
> +
> +		el = &ring->ring_cache[ring->rd_offset];
> +		tre_len = MHI_TRE_DATA_GET_LEN(el);
> +
> +		tr_len = min(buf_left, tre_len);
> +		read_offset = skb->len - buf_left;
> +		read_addr = skb->data + read_offset;
> +		write_addr = MHI_TRE_DATA_GET_PTR(el);
> +
> +		dev_dbg(dev, "Writing %zd bytes to channel (%u)\n", tr_len, ring->ch_id);
> +		ret = mhi_cntrl->write_to_host(mhi_cntrl, read_addr, write_addr, tr_len);
> +		if (ret < 0) {
> +			dev_err(dev, "Error writing to the channel\n");
> +			goto err_exit;
> +		}
> +
> +		buf_left -= tr_len;
> +		/*
> +		 * For all TREs queued by the host for DL channel, only the EOT flag will be set.
> +		 * If the packet doesn't fit into a single TRE, send the OVERFLOW event to
> +		 * the host so that the host can adjust the packet boundary to next TREs. Else send
> +		 * the EOT event to the host indicating the packet boundary.
> +		 */
> +		if (buf_left)
> +			code = MHI_EV_CC_OVERFLOW;
> +		else
> +			code = MHI_EV_CC_EOT;
> +
> +		ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el, tr_len, code);
> +		if (ret) {
> +			dev_err(dev, "Error sending transfer completion event\n");
> +			goto err_exit;
> +		}
> +
> +		mhi_ep_ring_inc_index(ring);
> +	} while (buf_left);
> +
> +	mutex_unlock(&mhi_chan->lock);
> +
> +	return 0;
> +
> +err_exit:
> +	mutex_unlock(&mhi_chan->lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_queue_skb);
> +
>   static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	size_t cmd_ctx_host_size, ch_ctx_host_size, ev_ctx_host_size;
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 74170dad09f6..bd3ffde01f04 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -272,4 +272,13 @@ void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
>    */
>   bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir);
>   
> +/**
> + * mhi_ep_queue_skb - Send SKBs to host over MHI Endpoint
> + * @mhi_dev: Device associated with the DL channel
> + * @skb: SKBs to be queued
> + *
> + * Return: 0 if the SKBs has been sent successfully, a negative error code otherwise.
> + */
> +int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, struct sk_buff *skb);
> +
>   #endif



* Re: [PATCH v4 00/27] Add initial support for MHI endpoint stack
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (26 preceding siblings ...)
  2022-02-28 12:43 ` [PATCH v4 27/27] bus: mhi: ep: Add uevent support for module autoloading Manivannan Sadhasivam
@ 2022-02-28 16:57 ` Alex Elder
  2022-03-01  6:15   ` Manivannan Sadhasivam
  2022-03-01  8:50 ` Manivannan Sadhasivam
  28 siblings, 1 reply; 52+ messages in thread
From: Alex Elder @ 2022-02-28 16:57 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> Hello,
> 
> This series adds initial support for the Qualcomm specific Modem Host Interface
> (MHI) bus in endpoint devices like SDX55 modems. The MHI bus in endpoint devices
> communicates with the MHI bus in host machines like x86 over any physical bus
> like PCIe. The MHI host support is already in mainline [1] and been used by PCIe
> based modems and WLAN devices running vendor code (downstream).

I believe I have provided a "Reviewed-by" tag for all patches in
this series.  I've made a few minor suggestions, but nothing I
saw deserves issuing a new version of the series.  The only
"big thing" is whether you want to rework the stuff that David
Laight commented on in patch 5 (and 15 too).  I agree with him
that the code there isn't very pretty and could be improved,
but as I said in my review, my preference would be to get this
accepted with a promise from you to revisit that.  Improving
that would improve readability and maintainability, and that's
important.  But there's too much *other* code in this series
and I hate to see its acceptance delayed further.

So anyway, I'm done reviewing this, and in general I trust that
you will tell me (and drop my Reviewed-by tag) if you change
anything substantive in a new version of the series.

					-Alex

> 
> Overview
> ========
> 
> This series aims at adding the MHI support in the endpoint devices with the goal
> of getting data connectivity using the mainline kernel running on the modems.
> Modems here refer to the combination of an APPS processor (Cortex A grade) and
> a baseband processor (DSP). The MHI bus is located in the APPS processor and it
> transfers data packets from the baseband processor to the host machine.
> 
> The MHI Endpoint (MHI EP) stack proposed here is inspired by the downstream
> code written by Qualcomm. But the complete stack is mostly re-written to adapt
> to the "bus" framework and made it modular so that it can work with the upstream
> subsystems like "PCI Endpoint". The code structure of the MHI endpoint stack
> follows the MHI host stack to maintain uniformity.
> 
> With this initial MHI EP stack (along with few other drivers), we can establish
> the network interface between host and endpoint over the MHI software channels
> (IP_SW0) and can do things like IP forwarding, SSH, etc...
> 
> Stack Organization
> ==================
> 
> The MHI EP stack has the concept of controller and device drivers as like the
> MHI host stack. The MHI EP controller driver can be a PCI Endpoint Function
> driver and the MHI device driver can be a MHI EP Networking driver or QRTR
> driver. The MHI EP controller driver is tied to the PCI Endpoint subsystem and
> handles all bus related activities like mapping the host memory, raising IRQ,
> passing link specific events etc... The MHI EP networking driver is tied to the
> Networking stack and handles all networking related activities like
> sending/receiving the SKBs from netdev, statistics collection etc...
> 
> This series only contains the MHI EP code, whereas the PCIe EPF driver and MHI
> EP Networking drivers are not yet submitted and can be found here [2]. Though
> the MHI EP stack doesn't have the build time dependency, it cannot function
> without them.
> 
> Test setup
> ==========
> 
> This series has been tested on Telit FN980 TLB board powered by Qualcomm SDX55
> (a.k.a X55 modem) and Qualcomm SM8450 based dev board.
> 


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 00/27] Add initial support for MHI endpoint stack
  2022-02-28 16:57 ` [PATCH v4 00/27] Add initial support for MHI endpoint stack Alex Elder
@ 2022-03-01  6:15   ` Manivannan Sadhasivam
  0 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-03-01  6:15 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Mon, Feb 28, 2022 at 10:57:48AM -0600, Alex Elder wrote:
> On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> > Hello,
> > 
> > This series adds initial support for the Qualcomm specific Modem Host Interface
> > (MHI) bus in endpoint devices like SDX55 modems. The MHI bus in endpoint devices
> > communicates with the MHI bus in host machines like x86 over any physical bus
> > like PCIe. The MHI host support is already in mainline [1] and been used by PCIe
> > based modems and WLAN devices running vendor code (downstream).
> 
> I believe I have provided a "Reviewed-by" tag for all patches in
> this series.  I've made a few minor suggestions, but nothing I
> saw deserves issuing a new version of the series.

Thanks a lot for your time, Alex! Much appreciated.

> The only "big thing" is whether you want to rework the stuff that David
> Laight commented on in patch 5 (and 15 too).  I agree with him
> that the code there isn't very pretty and could be improved,
> but as I said in my review, my preference would be to get this
> accepted with a promise from you to revisit that.  Improving
> that would improve readability and maintainability, and that's
> important.  But there's too much *other* code in this series
> and I hate to see its acceptance delayed further.
> 

As I replied during the v3 review, the ring element structure differs
between the command, transfer and event rings. Even within the command
ring, each command uses a different structure. That makes a single,
readable definition hard to come up with.
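
To illustrate (purely a sketch -- the generic element below is the one this
series uses, but the bit positions in the example macros are placeholders,
not the real layouts):

struct mhi_ring_element {
	__le64 ptr;
	__le32 dword[2];
};

/* Transfer ring: ptr is a data buffer address, dword[0] holds the length */
#define MHI_TRE_GET_LEN(tre)      FIELD_GET(GENMASK(15, 0), le32_to_cpu((tre)->dword[0]))

/* Event ring: the same dwords get reinterpreted as completion code/type */
#define MHI_TRE_GET_EV_CODE(tre)  FIELD_GET(GENMASK(31, 24), le32_to_cpu((tre)->dword[0]))

/* Command ring: dword[1] is reinterpreted again, as command type and channel */
#define MHI_TRE_GET_CMD_TYPE(tre) FIELD_GET(GENMASK(23, 16), le32_to_cpu((tre)->dword[1]))

So it is one element layout with three (or more) interpretations, which is
why the accessors end up macro-heavy.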

Anyway, I'll take another look once this series gets merged.

> So anyway, I'm done reviewing this, and in general I trust that
> you will tell me (and drop my Reviewed-by tag) if you change
> anything substantive in a new version of the series.
> 

Sure.

Thanks,
Mani

> 					-Alex
> 
^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 17/27] bus: mhi: ep: Add support for processing MHI endpoint interrupts
  2022-02-28 16:45   ` Alex Elder
@ 2022-03-01  6:41     ` Manivannan Sadhasivam
  0 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-03-01  6:41 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Mon, Feb 28, 2022 at 10:45:15AM -0600, Alex Elder wrote:
> On 2/28/22 6:43 AM, Manivannan Sadhasivam wrote:
> > Add support for processing MHI endpoint interrupts such as control
> > interrupt, command interrupt and channel interrupt from the host.
> > 
> > The interrupts will be generated in the endpoint device whenever host
> > writes to the corresponding doorbell registers. The doorbell logic
> > is handled inside the hardware internally.
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> 
> One suggestion for future work, but otherwise this looks good.
> 
> Reviewed-by: Alex Elder <elder@linaro.org>
> 
> > ---
> >   drivers/bus/mhi/ep/main.c | 123 +++++++++++++++++++++++++++++++++++++-
> >   include/linux/mhi_ep.h    |   4 ++
> >   2 files changed, 125 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index 7a29543586d0..ce690b1aeace 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -143,6 +143,112 @@ static void mhi_ep_state_worker(struct work_struct *work)
> >   	}
> >   }
> > +static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl, unsigned long ch_int,
> > +				    u32 ch_idx)
> > +{
> > +	struct mhi_ep_ring_item *item;
> > +	struct mhi_ep_ring *ring;
> > +	bool work = !!ch_int;
> > +	LIST_HEAD(head);
> > +	u32 i;
> > +
> > +	/* First add the ring items to a local list */
> > +	for_each_set_bit(i, &ch_int, 32) {
> > +		/* Channel index varies for each register: 0, 32, 64, 96 */
> > +		u32 ch_id = ch_idx + i;
> > +
> > +		ring = &mhi_cntrl->mhi_chan[ch_id].ring;
> > +		item = kzalloc(sizeof(*item), GFP_ATOMIC);
> 
> It looks like this will be used a lot, so I suggest you
> consider creating a slab cache of ring items to allocate
> from.  I haven't suggested that elsewhere, but it's
> possible there are other frequently-allocated structures
> that would warrant that.
> 

Sure.
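
Something like the following is what that could look like -- an untested
sketch only, and mhi_ep_ring_item_cache is just a name used here for
illustration:

/* Dedicated slab cache for the frequently allocated ring items */
static struct kmem_cache *mhi_ep_ring_item_cache;

/* Created once, e.g. during controller registration */
mhi_ep_ring_item_cache = kmem_cache_create("mhi_ep_ring_item",
					   sizeof(struct mhi_ep_ring_item),
					   0, SLAB_HWCACHE_ALIGN, NULL);

/* The GFP_ATOMIC kzalloc() above would then become */
item = kmem_cache_zalloc(mhi_ep_ring_item_cache, GFP_ATOMIC);

/* ...and the consumer of the doorbell list frees items back to the cache */
kmem_cache_free(mhi_ep_ring_item_cache, item);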

Thanks,
Mani

> > +		if (!item)
> > +			return;
> > +
> > +		item->ring = ring;
> > +		list_add_tail(&item->node, &head);
> > +	}
> > +
> > +	/* Now, splice the local list into ch_db_list and queue the work item */
> > +	if (work) {
> > +		spin_lock(&mhi_cntrl->list_lock);
> > +		list_splice_tail_init(&head, &mhi_cntrl->ch_db_list);
> > +		spin_unlock(&mhi_cntrl->list_lock);
> > +	}
> > +}
> > +
> > +/*
> > + * Channel interrupt statuses are contained in 4 registers each of 32bit length.
> > + * For checking all interrupts, we need to loop through each registers and then
> > + * check for bits set.
> > + */
> > +static void mhi_ep_check_channel_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
> > +{
> > +	u32 ch_int, ch_idx, i;
> > +
> > +	/* Bail out if there is no channel doorbell interrupt */
> > +	if (!mhi_ep_mmio_read_chdb_status_interrupts(mhi_cntrl))
> > +		return;
> > +
> > +	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
> > +		ch_idx = i * MHI_MASK_CH_EV_LEN;
> > +
> > +		/* Only process channel interrupt if the mask is enabled */
> > +		ch_int = mhi_cntrl->chdb[i].status & mhi_cntrl->chdb[i].mask;
> > +		if (ch_int) {
> > +			mhi_ep_queue_channel_db(mhi_cntrl, ch_int, ch_idx);
> > +			mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_CLEAR_n(i),
> > +							mhi_cntrl->chdb[i].status);
> > +		}
> > +	}
> > +}
> > +
> > +static void mhi_ep_process_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl,
> > +					 enum mhi_state state)
> > +{
> > +	struct mhi_ep_state_transition *item;
> > +
> > +	item = kzalloc(sizeof(*item), GFP_ATOMIC);
> > +	if (!item)
> > +		return;
> > +
> > +	item->state = state;
> > +	spin_lock(&mhi_cntrl->list_lock);
> > +	list_add_tail(&item->node, &mhi_cntrl->st_transition_list);
> > +	spin_unlock(&mhi_cntrl->list_lock);
> > +
> > +	queue_work(mhi_cntrl->wq, &mhi_cntrl->state_work);
> > +}
> > +
> > +/*
> > + * Interrupt handler that services interrupts raised by the host writing to
> > + * MHICTRL and Command ring doorbell (CRDB) registers for state change and
> > + * channel interrupts.
> > + */
> > +static irqreturn_t mhi_ep_irq(int irq, void *data)
> > +{
> > +	struct mhi_ep_cntrl *mhi_cntrl = data;
> > +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > +	enum mhi_state state;
> > +	u32 int_value;
> > +
> > +	/* Acknowledge the ctrl interrupt */
> > +	int_value = mhi_ep_mmio_read(mhi_cntrl, MHI_CTRL_INT_STATUS);
> > +	mhi_ep_mmio_write(mhi_cntrl, MHI_CTRL_INT_CLEAR, int_value);
> > +
> > +	/* Check for ctrl interrupt */
> > +	if (FIELD_GET(MHI_CTRL_INT_STATUS_MSK, int_value)) {
> > +		dev_dbg(dev, "Processing ctrl interrupt\n");
> > +		mhi_ep_process_ctrl_interrupt(mhi_cntrl, state);
> > +	}
> > +
> > +	/* Check for command doorbell interrupt */
> > +	if (FIELD_GET(MHI_CTRL_INT_STATUS_CRDB_MSK, int_value))
> > +		dev_dbg(dev, "Processing command doorbell interrupt\n");
> > +
> > +	/* Check for channel interrupts */
> > +	mhi_ep_check_channel_interrupt(mhi_cntrl);
> > +
> > +	return IRQ_HANDLED;
> > +}
> > +
> >   static void mhi_ep_release_device(struct device *dev)
> >   {
> >   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> > @@ -339,7 +445,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> >   	struct mhi_ep_device *mhi_dev;
> >   	int ret;
> > -	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio)
> > +	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio || !mhi_cntrl->irq)
> >   		return -EINVAL;
> >   	ret = mhi_ep_chan_init(mhi_cntrl, config);
> > @@ -361,6 +467,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> >   	}
> >   	INIT_LIST_HEAD(&mhi_cntrl->st_transition_list);
> > +	INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
> >   	spin_lock_init(&mhi_cntrl->state_lock);
> >   	spin_lock_init(&mhi_cntrl->list_lock);
> >   	mutex_init(&mhi_cntrl->event_lock);
> > @@ -376,12 +483,20 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> >   		goto err_destroy_wq;
> >   	}
> > +	irq_set_status_flags(mhi_cntrl->irq, IRQ_NOAUTOEN);
> > +	ret = request_irq(mhi_cntrl->irq, mhi_ep_irq, IRQF_TRIGGER_HIGH,
> > +			  "doorbell_irq", mhi_cntrl);
> > +	if (ret) {
> > +		dev_err(mhi_cntrl->cntrl_dev, "Failed to request Doorbell IRQ\n");
> > +		goto err_ida_free;
> > +	}
> > +
> >   	/* Allocate the controller device */
> >   	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_CONTROLLER);
> >   	if (IS_ERR(mhi_dev)) {
> >   		dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
> >   		ret = PTR_ERR(mhi_dev);
> > -		goto err_ida_free;
> > +		goto err_free_irq;
> >   	}
> >   	dev_set_name(&mhi_dev->dev, "mhi_ep%u", mhi_cntrl->index);
> > @@ -398,6 +513,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> >   err_put_dev:
> >   	put_device(&mhi_dev->dev);
> > +err_free_irq:
> > +	free_irq(mhi_cntrl->irq, mhi_cntrl);
> >   err_ida_free:
> >   	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
> >   err_destroy_wq:
> > @@ -417,6 +534,8 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
> >   	destroy_workqueue(mhi_cntrl->wq);
> > +	free_irq(mhi_cntrl->irq, mhi_cntrl);
> > +
> >   	kfree(mhi_cntrl->mhi_cmd);
> >   	kfree(mhi_cntrl->mhi_chan);
> > diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> > index dc27a5de7d3c..43aa9b133db4 100644
> > --- a/include/linux/mhi_ep.h
> > +++ b/include/linux/mhi_ep.h
> > @@ -70,6 +70,7 @@ struct mhi_ep_db_info {
> >    * @list_lock: Lock for protecting state transition and channel doorbell lists
> >    * @state_lock: Lock for protecting state transitions
> >    * @st_transition_list: List of state transitions
> > + * @ch_db_list: List of queued channel doorbells
> >    * @wq: Dedicated workqueue for handling rings and state changes
> >    * @state_work: State transition worker
> >    * @raise_irq: CB function for raising IRQ to the host
> > @@ -87,6 +88,7 @@ struct mhi_ep_db_info {
> >    * @chdb_offset: Channel doorbell offset set by the host
> >    * @erdb_offset: Event ring doorbell offset set by the host
> >    * @index: MHI Endpoint controller index
> > + * @irq: IRQ used by the endpoint controller
> >    */
> >   struct mhi_ep_cntrl {
> >   	struct device *cntrl_dev;
> > @@ -111,6 +113,7 @@ struct mhi_ep_cntrl {
> >   	spinlock_t state_lock;
> >   	struct list_head st_transition_list;
> > +	struct list_head ch_db_list;
> >   	struct workqueue_struct *wq;
> >   	struct work_struct state_work;
> > @@ -137,6 +140,7 @@ struct mhi_ep_cntrl {
> >   	u32 chdb_offset;
> >   	u32 erdb_offset;
> >   	u32 index;
> > +	int irq;
> >   };
> >   /**
> 

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v4 00/27] Add initial support for MHI endpoint stack
  2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (27 preceding siblings ...)
  2022-02-28 16:57 ` [PATCH v4 00/27] Add initial support for MHI endpoint stack Alex Elder
@ 2022-03-01  8:50 ` Manivannan Sadhasivam
  28 siblings, 0 replies; 52+ messages in thread
From: Manivannan Sadhasivam @ 2022-03-01  8:50 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder

On Mon, Feb 28, 2022 at 06:13:17PM +0530, Manivannan Sadhasivam wrote:
> Hello,
> 
> This series adds initial support for the Qualcomm specific Modem Host Interface
> (MHI) bus in endpoint devices like SDX55 modems. The MHI bus in endpoint devices
> communicates with the MHI bus in host machines like x86 over any physical bus
> like PCIe. The MHI host support is already in mainline [1] and been used by PCIe
> based modems and WLAN devices running vendor code (downstream).
> 

Series applied to mhi-next with Alex's Reviewed-by tag. Also incorporated a few
suggestions from Alex.

Thanks,
Mani


^ permalink raw reply	[flat|nested] 52+ messages in thread

end of thread, other threads:[~2022-03-01  8:50 UTC | newest]

Thread overview: 52+ messages
2022-02-28 12:43 [PATCH v4 00/27] Add initial support for MHI endpoint stack Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 01/27] bus: mhi: Fix pm_state conversion to string Manivannan Sadhasivam
2022-02-28 15:30   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 02/27] bus: mhi: Fix MHI DMA structure endianness Manivannan Sadhasivam
2022-02-28 15:40   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 03/27] bus: mhi: Move host MHI code to "host" directory Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 04/27] bus: mhi: Use bitfield operations for register read and write Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 05/27] bus: mhi: Use bitfield operations for handling DWORDs of ring elements Manivannan Sadhasivam
2022-02-28 14:00   ` David Laight
2022-02-28 14:43     ` 'Manivannan Sadhasivam'
2022-02-28 15:11       ` Alex Elder
2022-02-28 15:40       ` David Laight
2022-02-28 15:51         ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 06/27] bus: mhi: Cleanup the register definitions used in headers Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 07/27] bus: mhi: host: Rename "struct mhi_tre" to "struct mhi_ring_element" Manivannan Sadhasivam
2022-02-28 15:52   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 08/27] bus: mhi: Move common MHI definitions out of host directory Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 09/27] bus: mhi: Make mhi_state_str[] array static inline and move to common.h Manivannan Sadhasivam
2022-02-28 15:56   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 10/27] bus: mhi: ep: Add support for registering MHI endpoint controllers Manivannan Sadhasivam
2022-02-28 16:06   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 11/27] bus: mhi: ep: Add support for registering MHI endpoint client drivers Manivannan Sadhasivam
2022-02-28 16:09   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 12/27] bus: mhi: ep: Add support for creating and destroying MHI EP devices Manivannan Sadhasivam
2022-02-28 16:10   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 13/27] bus: mhi: ep: Add support for managing MMIO registers Manivannan Sadhasivam
2022-02-28 16:23   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 14/27] bus: mhi: ep: Add support for ring management Manivannan Sadhasivam
2022-02-28 16:27   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 15/27] bus: mhi: ep: Add support for sending events to the host Manivannan Sadhasivam
2022-02-28 16:37   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 16/27] bus: mhi: ep: Add support for managing MHI state machine Manivannan Sadhasivam
2022-02-28 16:41   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 17/27] bus: mhi: ep: Add support for processing MHI endpoint interrupts Manivannan Sadhasivam
2022-02-28 16:45   ` Alex Elder
2022-03-01  6:41     ` Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 18/27] bus: mhi: ep: Add support for powering up the MHI endpoint stack Manivannan Sadhasivam
2022-02-28 16:47   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 19/27] bus: mhi: ep: Add support for powering down " Manivannan Sadhasivam
2022-02-28 16:49   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 20/27] bus: mhi: ep: Add support for handling MHI_RESET Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 21/27] bus: mhi: ep: Add support for handling SYS_ERR condition Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 22/27] bus: mhi: ep: Add support for processing command rings Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 23/27] bus: mhi: ep: Add support for reading from the host Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 24/27] bus: mhi: ep: Add support for processing channel rings Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 25/27] bus: mhi: ep: Add support for queueing SKBs to the host Manivannan Sadhasivam
2022-02-28 16:51   ` Alex Elder
2022-02-28 12:43 ` [PATCH v4 26/27] bus: mhi: ep: Add support for suspending and resuming channels Manivannan Sadhasivam
2022-02-28 12:43 ` [PATCH v4 27/27] bus: mhi: ep: Add uevent support for module autoloading Manivannan Sadhasivam
2022-02-28 16:57 ` [PATCH v4 00/27] Add initial support for MHI endpoint stack Alex Elder
2022-03-01  6:15   ` Manivannan Sadhasivam
2022-03-01  8:50 ` Manivannan Sadhasivam
