linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 00/25] Add initial support for MHI endpoint stack
@ 2022-02-12 18:20 Manivannan Sadhasivam
  2022-02-12 18:20 ` [PATCH v3 01/25] bus: mhi: Fix pm_state conversion to string Manivannan Sadhasivam
                   ` (25 more replies)
  0 siblings, 26 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:20 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Hello,

This series adds initial support for the Qualcomm-specific Modem Host Interface
(MHI) bus in endpoint devices like SDX55 modems. The MHI bus in endpoint devices
communicates with the MHI bus in host machines (like x86) over a physical bus
such as PCIe. MHI host support is already in mainline [1] and has been used by
PCIe based modems and WLAN devices running vendor (downstream) code.

Overview
========

This series aims to add MHI support to endpoint devices, with the goal of
getting data connectivity using the mainline kernel running on the modems.
Modems here refer to the combination of an APPS processor (Cortex-A grade) and
a baseband processor (DSP). The MHI bus runs on the APPS processor and
transfers data packets from the baseband processor to the host machine.

The MHI Endpoint (MHI EP) stack proposed here is inspired by the downstream
code written by Qualcomm, but the complete stack has been mostly rewritten to
adapt to the "bus" framework and made modular so that it can work with upstream
subsystems like "PCI Endpoint". The code structure of the MHI endpoint stack
follows the MHI host stack to maintain uniformity.

With this initial MHI EP stack (along with a few other drivers), we can
establish a network interface between host and endpoint over the MHI software
channels (IP_SW0) and do things like IP forwarding, SSH, etc.

Stack Organization
==================

The MHI EP stack has the concept of controller and device drivers, like the
MHI host stack. The MHI EP controller driver can be a PCI Endpoint Function
driver and the MHI device driver can be an MHI EP networking driver or QRTR
driver. The MHI EP controller driver is tied to the PCI Endpoint subsystem and
handles all bus-related activities like mapping the host memory, raising IRQs,
passing link-specific events, etc. The MHI EP networking driver is tied to the
networking stack and handles all networking-related activities like
sending/receiving SKBs from netdev, statistics collection, etc.

This series contains only the MHI EP code; the PCIe EPF driver and the MHI
EP networking drivers are not yet submitted and can be found here [2]. Though
the MHI EP stack has no build-time dependency on them, it cannot function
without them.

Test setup
==========

This series has been tested on the Telit FN980 TLB board powered by the
Qualcomm SDX55 (a.k.a. X55 modem) and on a Qualcomm SM8450 based dev board.

For testing stability and performance, networking tools such as iperf, ssh
and ping were used.

Limitations
===========

We are not _yet_ able to get data packets from the modem, as that involves
integrating the Qualcomm IP Accelerator (IPA) with the MHI endpoint stack. We
are planning to add support for it in the coming days.

References
==========

MHI bus: https://www.kernel.org/doc/html/latest/mhi/mhi.html
Linaro connect presentation around this topic: https://connect.linaro.org/resources/lvc21f/lvc21f-222/

Thanks,
Mani

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/bus/mhi
[2] https://git.linaro.org/landing-teams/working/qualcomm/kernel.git/log/?h=tracking-qcomlt-sdx55-drivers

Changes in v3:

* Split patch 20/23 into two.
* Fixed the error handling in patch 21/23.
* Removed a spurious change in patch 01/23.
* Added check for xfer callbacks in client driver probe.

Changes in v2:

v2 mostly addresses the issues seen while testing the stack on SM8450, which
is an SMP platform, and also incorporates the review comments from Alex.

Major changes are:

* Added a cleanup patch for getting rid of SHIFT macros and used bitfield
  operations instead.
* Added the endianness patches that were submitted to the MHI list and used
  the endianness conversion in the EP patches as well.
* Added support for multiple event rings.
* Fixed the MSI generation based on the event ring index.
* Fixed the doorbell list handling by making use of list splice and not locking
  the entire list manipulation.
* Added new APIs for wrapping the reading and writing to host memory (Dmitry).
* Optimized the read_channel and queue_skb function logic.
* Added Hemant's R-o-b tag.

Manivannan Sadhasivam (23):
  bus: mhi: Move host MHI code to "host" directory
  bus: mhi: Move common MHI definitions out of host directory
  bus: mhi: Make mhi_state_str[] array static inline and move to
    common.h
  bus: mhi: Cleanup the register definitions used in headers
  bus: mhi: Get rid of SHIFT macros and use bitfield operations
  bus: mhi: ep: Add support for registering MHI endpoint controllers
  bus: mhi: ep: Add support for registering MHI endpoint client drivers
  bus: mhi: ep: Add support for creating and destroying MHI EP devices
  bus: mhi: ep: Add support for managing MMIO registers
  bus: mhi: ep: Add support for ring management
  bus: mhi: ep: Add support for sending events to the host
  bus: mhi: ep: Add support for managing MHI state machine
  bus: mhi: ep: Add support for processing MHI endpoint interrupts
  bus: mhi: ep: Add support for powering up the MHI endpoint stack
  bus: mhi: ep: Add support for powering down the MHI endpoint stack
  bus: mhi: ep: Add support for handling MHI_RESET
  bus: mhi: ep: Add support for handling SYS_ERR condition
  bus: mhi: ep: Add support for processing command ring
  bus: mhi: ep: Add support for reading from the host
  bus: mhi: ep: Add support for processing transfer ring
  bus: mhi: ep: Add support for queueing SKBs to the host
  bus: mhi: ep: Add support for suspending and resuming channels
  bus: mhi: ep: Add uevent support for module autoloading

Paul Davey (2):
  bus: mhi: Fix pm_state conversion to string
  bus: mhi: Fix MHI DMA structure endianness

 drivers/bus/Makefile                      |    2 +-
 drivers/bus/mhi/Kconfig                   |   28 +-
 drivers/bus/mhi/Makefile                  |    9 +-
 drivers/bus/mhi/common.h                  |  319 ++++
 drivers/bus/mhi/ep/Kconfig                |   10 +
 drivers/bus/mhi/ep/Makefile               |    2 +
 drivers/bus/mhi/ep/internal.h             |  254 ++++
 drivers/bus/mhi/ep/main.c                 | 1601 +++++++++++++++++++++
 drivers/bus/mhi/ep/mmio.c                 |  274 ++++
 drivers/bus/mhi/ep/ring.c                 |  267 ++++
 drivers/bus/mhi/ep/sm.c                   |  174 +++
 drivers/bus/mhi/host/Kconfig              |   31 +
 drivers/bus/mhi/{core => host}/Makefile   |    4 +-
 drivers/bus/mhi/{core => host}/boot.c     |   17 +-
 drivers/bus/mhi/{core => host}/debugfs.c  |   40 +-
 drivers/bus/mhi/{core => host}/init.c     |  123 +-
 drivers/bus/mhi/{core => host}/internal.h |  427 +-----
 drivers/bus/mhi/{core => host}/main.c     |   46 +-
 drivers/bus/mhi/{ => host}/pci_generic.c  |    0
 drivers/bus/mhi/{core => host}/pm.c       |   36 +-
 include/linux/mhi_ep.h                    |  293 ++++
 include/linux/mod_devicetable.h           |    2 +
 scripts/mod/file2alias.c                  |   10 +
 23 files changed, 3442 insertions(+), 527 deletions(-)
 create mode 100644 drivers/bus/mhi/common.h
 create mode 100644 drivers/bus/mhi/ep/Kconfig
 create mode 100644 drivers/bus/mhi/ep/Makefile
 create mode 100644 drivers/bus/mhi/ep/internal.h
 create mode 100644 drivers/bus/mhi/ep/main.c
 create mode 100644 drivers/bus/mhi/ep/mmio.c
 create mode 100644 drivers/bus/mhi/ep/ring.c
 create mode 100644 drivers/bus/mhi/ep/sm.c
 create mode 100644 drivers/bus/mhi/host/Kconfig
 rename drivers/bus/mhi/{core => host}/Makefile (54%)
 rename drivers/bus/mhi/{core => host}/boot.c (96%)
 rename drivers/bus/mhi/{core => host}/debugfs.c (90%)
 rename drivers/bus/mhi/{core => host}/init.c (93%)
 rename drivers/bus/mhi/{core => host}/internal.h (50%)
 rename drivers/bus/mhi/{core => host}/main.c (98%)
 rename drivers/bus/mhi/{ => host}/pci_generic.c (100%)
 rename drivers/bus/mhi/{core => host}/pm.c (97%)
 create mode 100644 include/linux/mhi_ep.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 92+ messages in thread

* [PATCH v3 01/25] bus: mhi: Fix pm_state conversion to string
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
@ 2022-02-12 18:20 ` Manivannan Sadhasivam
  2022-02-15 20:01   ` Alex Elder
  2022-02-12 18:20 ` [PATCH v3 02/25] bus: mhi: Fix MHI DMA structure endianness Manivannan Sadhasivam
                   ` (24 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:20 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder, Paul Davey,
	Manivannan Sadhasivam, Hemant Kumar, stable,
	Manivannan Sadhasivam

From: Paul Davey <paul.davey@alliedtelesis.co.nz>

On big endian architectures the mhi debugfs files which report pm state
give "Invalid State" for all states.  This is caused by using
find_last_bit, which takes an unsigned long *, while the state is passed
in as an enum mhi_pm_state, which will be of int size.

Fix by using __fls to pass the value of state instead of find_last_bit.

Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
Reviewed-by: Manivannan Sadhasivam <mani@kernel.org>
Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
Cc: stable@vger.kernel.org
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/core/init.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 046f407dc5d6..af484b03558a 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -79,10 +79,12 @@ static const char * const mhi_pm_state_str[] = {
 
 const char *to_mhi_pm_state_str(enum mhi_pm_state state)
 {
-	unsigned long pm_state = state;
-	int index = find_last_bit(&pm_state, 32);
+	int index;
 
-	if (index >= ARRAY_SIZE(mhi_pm_state_str))
+	if (state)
+		index = __fls(state);
+
+	if (!state || index >= ARRAY_SIZE(mhi_pm_state_str))
 		return "Invalid State";
 
 	return mhi_pm_state_str[index];
-- 
2.25.1



* [PATCH v3 02/25] bus: mhi: Fix MHI DMA structure endianness
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
  2022-02-12 18:20 ` [PATCH v3 01/25] bus: mhi: Fix pm_state conversion to string Manivannan Sadhasivam
@ 2022-02-12 18:20 ` Manivannan Sadhasivam
  2022-02-15 20:02   ` Alex Elder
  2022-02-12 18:20 ` [PATCH v3 03/25] bus: mhi: Move host MHI code to "host" directory Manivannan Sadhasivam
                   ` (23 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:20 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder, Paul Davey,
	stable, Manivannan Sadhasivam

From: Paul Davey <paul.davey@alliedtelesis.co.nz>

The MHI driver does not work on big endian architectures.  The
controller never transitions into mission mode.  This appears to be due
to the modem device expecting the various contexts and transfer rings to
have fields in little endian order in memory, but the driver constructs
them in native endianness.

Fix MHI event, channel and command contexts and TRE handling macros to
use explicit conversion to little endian.  Mark fields in relevant
structures as little endian to document this requirement.

Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
Fixes: 6cd330ae76ff ("bus: mhi: core: Add support for ringing channel/event ring doorbells")
Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
Cc: stable@vger.kernel.org
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/core/debugfs.c  |  26 +++----
 drivers/bus/mhi/core/init.c     |  36 +++++-----
 drivers/bus/mhi/core/internal.h | 119 ++++++++++++++++----------------
 drivers/bus/mhi/core/main.c     |  22 +++---
 drivers/bus/mhi/core/pm.c       |   4 +-
 5 files changed, 104 insertions(+), 103 deletions(-)

diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/core/debugfs.c
index 858d7516410b..d818586c229d 100644
--- a/drivers/bus/mhi/core/debugfs.c
+++ b/drivers/bus/mhi/core/debugfs.c
@@ -60,16 +60,16 @@ static int mhi_debugfs_events_show(struct seq_file *m, void *d)
 		}
 
 		seq_printf(m, "Index: %d intmod count: %lu time: %lu",
-			   i, (er_ctxt->intmod & EV_CTX_INTMODC_MASK) >>
+			   i, (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODC_MASK) >>
 			   EV_CTX_INTMODC_SHIFT,
-			   (er_ctxt->intmod & EV_CTX_INTMODT_MASK) >>
+			   (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODT_MASK) >>
 			   EV_CTX_INTMODT_SHIFT);
 
-		seq_printf(m, " base: 0x%0llx len: 0x%llx", er_ctxt->rbase,
-			   er_ctxt->rlen);
+		seq_printf(m, " base: 0x%0llx len: 0x%llx", le64_to_cpu(er_ctxt->rbase),
+			   le64_to_cpu(er_ctxt->rlen));
 
-		seq_printf(m, " rp: 0x%llx wp: 0x%llx", er_ctxt->rp,
-			   er_ctxt->wp);
+		seq_printf(m, " rp: 0x%llx wp: 0x%llx", le64_to_cpu(er_ctxt->rp),
+			   le64_to_cpu(er_ctxt->wp));
 
 		seq_printf(m, " local rp: 0x%pK db: 0x%pad\n", ring->rp,
 			   &mhi_event->db_cfg.db_val);
@@ -106,18 +106,18 @@ static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
 
 		seq_printf(m,
 			   "%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
-			   mhi_chan->name, mhi_chan->chan, (chan_ctxt->chcfg &
+			   mhi_chan->name, mhi_chan->chan, (le32_to_cpu(chan_ctxt->chcfg) &
 			   CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
-			   (chan_ctxt->chcfg & CHAN_CTX_BRSTMODE_MASK) >>
-			   CHAN_CTX_BRSTMODE_SHIFT, (chan_ctxt->chcfg &
+			   (le32_to_cpu(chan_ctxt->chcfg) & CHAN_CTX_BRSTMODE_MASK) >>
+			   CHAN_CTX_BRSTMODE_SHIFT, (le32_to_cpu(chan_ctxt->chcfg) &
 			   CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
 
-		seq_printf(m, " type: 0x%x event ring: %u", chan_ctxt->chtype,
-			   chan_ctxt->erindex);
+		seq_printf(m, " type: 0x%x event ring: %u", le32_to_cpu(chan_ctxt->chtype),
+			   le32_to_cpu(chan_ctxt->erindex));
 
 		seq_printf(m, " base: 0x%llx len: 0x%llx rp: 0x%llx wp: 0x%llx",
-			   chan_ctxt->rbase, chan_ctxt->rlen, chan_ctxt->rp,
-			   chan_ctxt->wp);
+			   le64_to_cpu(chan_ctxt->rbase), le64_to_cpu(chan_ctxt->rlen),
+			   le64_to_cpu(chan_ctxt->rp), le64_to_cpu(chan_ctxt->wp));
 
 		seq_printf(m, " local rp: 0x%pK local wp: 0x%pK db: 0x%pad\n",
 			   ring->rp, ring->wp,
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index af484b03558a..4bd62f32695d 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -293,17 +293,17 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 		if (mhi_chan->offload_ch)
 			continue;
 
-		tmp = chan_ctxt->chcfg;
+		tmp = le32_to_cpu(chan_ctxt->chcfg);
 		tmp &= ~CHAN_CTX_CHSTATE_MASK;
 		tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
 		tmp &= ~CHAN_CTX_BRSTMODE_MASK;
 		tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
 		tmp &= ~CHAN_CTX_POLLCFG_MASK;
 		tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
-		chan_ctxt->chcfg = tmp;
+		chan_ctxt->chcfg = cpu_to_le32(tmp);
 
-		chan_ctxt->chtype = mhi_chan->type;
-		chan_ctxt->erindex = mhi_chan->er_index;
+		chan_ctxt->chtype = cpu_to_le32(mhi_chan->type);
+		chan_ctxt->erindex = cpu_to_le32(mhi_chan->er_index);
 
 		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
 		mhi_chan->tre_ring.db_addr = (void __iomem *)&chan_ctxt->wp;
@@ -328,14 +328,14 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 		if (mhi_event->offload_ev)
 			continue;
 
-		tmp = er_ctxt->intmod;
+		tmp = le32_to_cpu(er_ctxt->intmod);
 		tmp &= ~EV_CTX_INTMODC_MASK;
 		tmp &= ~EV_CTX_INTMODT_MASK;
 		tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);
-		er_ctxt->intmod = tmp;
+		er_ctxt->intmod = cpu_to_le32(tmp);
 
-		er_ctxt->ertype = MHI_ER_TYPE_VALID;
-		er_ctxt->msivec = mhi_event->irq;
+		er_ctxt->ertype = cpu_to_le32(MHI_ER_TYPE_VALID);
+		er_ctxt->msivec = cpu_to_le32(mhi_event->irq);
 		mhi_event->db_cfg.db_mode = true;
 
 		ring->el_size = sizeof(struct mhi_tre);
@@ -349,9 +349,9 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 		 * ring is empty
 		 */
 		ring->rp = ring->wp = ring->base;
-		er_ctxt->rbase = ring->iommu_base;
+		er_ctxt->rbase = cpu_to_le64(ring->iommu_base);
 		er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase;
-		er_ctxt->rlen = ring->len;
+		er_ctxt->rlen = cpu_to_le64(ring->len);
 		ring->ctxt_wp = &er_ctxt->wp;
 	}
 
@@ -378,9 +378,9 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 			goto error_alloc_cmd;
 
 		ring->rp = ring->wp = ring->base;
-		cmd_ctxt->rbase = ring->iommu_base;
+		cmd_ctxt->rbase = cpu_to_le64(ring->iommu_base);
 		cmd_ctxt->rp = cmd_ctxt->wp = cmd_ctxt->rbase;
-		cmd_ctxt->rlen = ring->len;
+		cmd_ctxt->rlen = cpu_to_le64(ring->len);
 		ring->ctxt_wp = &cmd_ctxt->wp;
 	}
 
@@ -581,10 +581,10 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
 	chan_ctxt->rp = 0;
 	chan_ctxt->wp = 0;
 
-	tmp = chan_ctxt->chcfg;
+	tmp = le32_to_cpu(chan_ctxt->chcfg);
 	tmp &= ~CHAN_CTX_CHSTATE_MASK;
 	tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
-	chan_ctxt->chcfg = tmp;
+	chan_ctxt->chcfg = cpu_to_le32(tmp);
 
 	/* Update to all cores */
 	smp_wmb();
@@ -618,14 +618,14 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
 		return -ENOMEM;
 	}
 
-	tmp = chan_ctxt->chcfg;
+	tmp = le32_to_cpu(chan_ctxt->chcfg);
 	tmp &= ~CHAN_CTX_CHSTATE_MASK;
 	tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
-	chan_ctxt->chcfg = tmp;
+	chan_ctxt->chcfg = cpu_to_le32(tmp);
 
-	chan_ctxt->rbase = tre_ring->iommu_base;
+	chan_ctxt->rbase = cpu_to_le64(tre_ring->iommu_base);
 	chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase;
-	chan_ctxt->rlen = tre_ring->len;
+	chan_ctxt->rlen = cpu_to_le64(tre_ring->len);
 	tre_ring->ctxt_wp = &chan_ctxt->wp;
 
 	tre_ring->rp = tre_ring->wp = tre_ring->base;
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index e2e10474a9d9..fa64340a8997 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -209,14 +209,14 @@ extern struct bus_type mhi_bus_type;
 #define EV_CTX_INTMODT_MASK GENMASK(31, 16)
 #define EV_CTX_INTMODT_SHIFT 16
 struct mhi_event_ctxt {
-	__u32 intmod;
-	__u32 ertype;
-	__u32 msivec;
-
-	__u64 rbase __packed __aligned(4);
-	__u64 rlen __packed __aligned(4);
-	__u64 rp __packed __aligned(4);
-	__u64 wp __packed __aligned(4);
+	__le32 intmod;
+	__le32 ertype;
+	__le32 msivec;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
 };
 
 #define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
@@ -227,25 +227,25 @@ struct mhi_event_ctxt {
 #define CHAN_CTX_POLLCFG_SHIFT 10
 #define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
 struct mhi_chan_ctxt {
-	__u32 chcfg;
-	__u32 chtype;
-	__u32 erindex;
-
-	__u64 rbase __packed __aligned(4);
-	__u64 rlen __packed __aligned(4);
-	__u64 rp __packed __aligned(4);
-	__u64 wp __packed __aligned(4);
+	__le32 chcfg;
+	__le32 chtype;
+	__le32 erindex;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
 };
 
 struct mhi_cmd_ctxt {
-	__u32 reserved0;
-	__u32 reserved1;
-	__u32 reserved2;
-
-	__u64 rbase __packed __aligned(4);
-	__u64 rlen __packed __aligned(4);
-	__u64 rp __packed __aligned(4);
-	__u64 wp __packed __aligned(4);
+	__le32 reserved0;
+	__le32 reserved1;
+	__le32 reserved2;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
 };
 
 struct mhi_ctxt {
@@ -258,8 +258,8 @@ struct mhi_ctxt {
 };
 
 struct mhi_tre {
-	u64 ptr;
-	u32 dword[2];
+	__le64 ptr;
+	__le32 dword[2];
 };
 
 struct bhi_vec_entry {
@@ -277,57 +277,58 @@ enum mhi_cmd_type {
 /* No operation command */
 #define MHI_TRE_CMD_NOOP_PTR (0)
 #define MHI_TRE_CMD_NOOP_DWORD0 (0)
-#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
+#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
 
 /* Channel reset command */
 #define MHI_TRE_CMD_RESET_PTR (0)
 #define MHI_TRE_CMD_RESET_DWORD0 (0)
-#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
-					(MHI_CMD_RESET_CHAN << 16))
+#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
+					(MHI_CMD_RESET_CHAN << 16)))
 
 /* Channel stop command */
 #define MHI_TRE_CMD_STOP_PTR (0)
 #define MHI_TRE_CMD_STOP_DWORD0 (0)
-#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
-				       (MHI_CMD_STOP_CHAN << 16))
+#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
+				       (MHI_CMD_STOP_CHAN << 16)))
 
 /* Channel start command */
 #define MHI_TRE_CMD_START_PTR (0)
 #define MHI_TRE_CMD_START_DWORD0 (0)
-#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
-					(MHI_CMD_START_CHAN << 16))
+#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
+					(MHI_CMD_START_CHAN << 16)))
 
-#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
+#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
 
 /* Event descriptor macros */
-#define MHI_TRE_EV_PTR(ptr) (ptr)
-#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
-#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
-#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
-#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
-#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
-#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
-#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
-#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
+#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
+#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
+#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
+#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
+#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
+#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
+#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
+#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
+#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
 
 /* Transfer descriptor macros */
-#define MHI_TRE_DATA_PTR(ptr) (ptr)
-#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
-#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
-	| (ieot << 9) | (ieob << 8) | chain)
+#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
+#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
+#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
+	| (ieot << 9) | (ieob << 8) | chain))
 
 /* RSC transfer descriptor macros */
-#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
-#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
-#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
+#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
+#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
+#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
 
 enum mhi_pkt_type {
 	MHI_PKT_TYPE_INVALID = 0x0,
@@ -500,7 +501,7 @@ struct state_transition {
 struct mhi_ring {
 	dma_addr_t dma_handle;
 	dma_addr_t iommu_base;
-	u64 *ctxt_wp; /* point to ctxt wp */
+	__le64 *ctxt_wp; /* point to ctxt wp */
 	void *pre_aligned;
 	void *base;
 	void *rp;
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index ffde617f93a3..85f4f7c8d7c6 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -114,7 +114,7 @@ void mhi_ring_er_db(struct mhi_event *mhi_event)
 	struct mhi_ring *ring = &mhi_event->ring;
 
 	mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
-				     ring->db_addr, *ring->ctxt_wp);
+				     ring->db_addr, le64_to_cpu(*ring->ctxt_wp));
 }
 
 void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
@@ -123,7 +123,7 @@ void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
 	struct mhi_ring *ring = &mhi_cmd->ring;
 
 	db = ring->iommu_base + (ring->wp - ring->base);
-	*ring->ctxt_wp = db;
+	*ring->ctxt_wp = cpu_to_le64(db);
 	mhi_write_db(mhi_cntrl, ring->db_addr, db);
 }
 
@@ -140,7 +140,7 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
 	 * before letting h/w know there is new element to fetch.
 	 */
 	dma_wmb();
-	*ring->ctxt_wp = db;
+	*ring->ctxt_wp = cpu_to_le64(db);
 
 	mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
 				    ring->db_addr, db);
@@ -432,7 +432,7 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
 	struct mhi_event_ctxt *er_ctxt =
 		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
 	struct mhi_ring *ev_ring = &mhi_event->ring;
-	dma_addr_t ptr = er_ctxt->rp;
+	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
 	void *dev_rp;
 
 	if (!is_valid_ring_ptr(ev_ring, ptr)) {
@@ -537,14 +537,14 @@ static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
 
 	/* Update the WP */
 	ring->wp += ring->el_size;
-	ctxt_wp = *ring->ctxt_wp + ring->el_size;
+	ctxt_wp = le64_to_cpu(*ring->ctxt_wp) + ring->el_size;
 
 	if (ring->wp >= (ring->base + ring->len)) {
 		ring->wp = ring->base;
 		ctxt_wp = ring->iommu_base;
 	}
 
-	*ring->ctxt_wp = ctxt_wp;
+	*ring->ctxt_wp = cpu_to_le64(ctxt_wp);
 
 	/* Update the RP */
 	ring->rp += ring->el_size;
@@ -801,7 +801,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
 	u32 chan;
 	int count = 0;
-	dma_addr_t ptr = er_ctxt->rp;
+	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
 
 	/*
 	 * This is a quick check to avoid unnecessary event processing
@@ -940,7 +940,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
 		local_rp = ev_ring->rp;
 
-		ptr = er_ctxt->rp;
+		ptr = le64_to_cpu(er_ctxt->rp);
 		if (!is_valid_ring_ptr(ev_ring, ptr)) {
 			dev_err(&mhi_cntrl->mhi_dev->dev,
 				"Event ring rp points outside of the event ring\n");
@@ -970,7 +970,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
 	int count = 0;
 	u32 chan;
 	struct mhi_chan *mhi_chan;
-	dma_addr_t ptr = er_ctxt->rp;
+	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
 
 	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
 		return -EIO;
@@ -1011,7 +1011,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
 		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
 		local_rp = ev_ring->rp;
 
-		ptr = er_ctxt->rp;
+		ptr = le64_to_cpu(er_ctxt->rp);
 		if (!is_valid_ring_ptr(ev_ring, ptr)) {
 			dev_err(&mhi_cntrl->mhi_dev->dev,
 				"Event ring rp points outside of the event ring\n");
@@ -1533,7 +1533,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
 	/* mark all stale events related to channel as STALE event */
 	spin_lock_irqsave(&mhi_event->lock, flags);
 
-	ptr = er_ctxt->rp;
+	ptr = le64_to_cpu(er_ctxt->rp);
 	if (!is_valid_ring_ptr(ev_ring, ptr)) {
 		dev_err(&mhi_cntrl->mhi_dev->dev,
 			"Event ring rp points outside of the event ring\n");
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
index 4aae0baea008..c35c5ddc7220 100644
--- a/drivers/bus/mhi/core/pm.c
+++ b/drivers/bus/mhi/core/pm.c
@@ -218,7 +218,7 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
 			continue;
 
 		ring->wp = ring->base + ring->len - ring->el_size;
-		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
 		/* Update all cores */
 		smp_wmb();
 
@@ -420,7 +420,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
 			continue;
 
 		ring->wp = ring->base + ring->len - ring->el_size;
-		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
 		/* Update to all cores */
 		smp_wmb();
 
-- 
2.25.1



* [PATCH v3 03/25] bus: mhi: Move host MHI code to "host" directory
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
  2022-02-12 18:20 ` [PATCH v3 01/25] bus: mhi: Fix pm_state conversion to string Manivannan Sadhasivam
  2022-02-12 18:20 ` [PATCH v3 02/25] bus: mhi: Fix MHI DMA structure endianness Manivannan Sadhasivam
@ 2022-02-12 18:20 ` Manivannan Sadhasivam
  2022-02-15 20:02   ` Alex Elder
  2022-02-12 18:20 ` [PATCH v3 04/25] bus: mhi: Move common MHI definitions out of host directory Manivannan Sadhasivam
                   ` (22 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:20 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam, Hemant Kumar

In preparation of the endpoint MHI support, let's move the host MHI code
to its own "host" directory and adjust the toplevel MHI Kconfig & Makefile.

While at it, let's also move the "pci_generic" driver to "host" directory
as it is a host MHI controller driver.

Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/Makefile                      |  2 +-
 drivers/bus/mhi/Kconfig                   | 27 ++------------------
 drivers/bus/mhi/Makefile                  |  8 ++----
 drivers/bus/mhi/host/Kconfig              | 31 +++++++++++++++++++++++
 drivers/bus/mhi/{core => host}/Makefile   |  4 ++-
 drivers/bus/mhi/{core => host}/boot.c     |  0
 drivers/bus/mhi/{core => host}/debugfs.c  |  0
 drivers/bus/mhi/{core => host}/init.c     |  0
 drivers/bus/mhi/{core => host}/internal.h |  0
 drivers/bus/mhi/{core => host}/main.c     |  0
 drivers/bus/mhi/{ => host}/pci_generic.c  |  0
 drivers/bus/mhi/{core => host}/pm.c       |  0
 12 files changed, 39 insertions(+), 33 deletions(-)
 create mode 100644 drivers/bus/mhi/host/Kconfig
 rename drivers/bus/mhi/{core => host}/Makefile (54%)
 rename drivers/bus/mhi/{core => host}/boot.c (100%)
 rename drivers/bus/mhi/{core => host}/debugfs.c (100%)
 rename drivers/bus/mhi/{core => host}/init.c (100%)
 rename drivers/bus/mhi/{core => host}/internal.h (100%)
 rename drivers/bus/mhi/{core => host}/main.c (100%)
 rename drivers/bus/mhi/{ => host}/pci_generic.c (100%)
 rename drivers/bus/mhi/{core => host}/pm.c (100%)

diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 52c2f35a26a9..16da51130d1a 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -39,4 +39,4 @@ obj-$(CONFIG_VEXPRESS_CONFIG)	+= vexpress-config.o
 obj-$(CONFIG_DA8XX_MSTPRI)	+= da8xx-mstpri.o
 
 # MHI
-obj-$(CONFIG_MHI_BUS)		+= mhi/
+obj-y				+= mhi/
diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
index da5cd0c9fc62..4748df7f9cd5 100644
--- a/drivers/bus/mhi/Kconfig
+++ b/drivers/bus/mhi/Kconfig
@@ -2,30 +2,7 @@
 #
 # MHI bus
 #
-# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+# Copyright (c) 2021, Linaro Ltd.
 #
 
-config MHI_BUS
-	tristate "Modem Host Interface (MHI) bus"
-	help
-	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
-	  communication protocol used by the host processors to control
-	  and communicate with modem devices over a high speed peripheral
-	  bus or shared memory.
-
-config MHI_BUS_DEBUG
-	bool "Debugfs support for the MHI bus"
-	depends on MHI_BUS && DEBUG_FS
-	help
-	  Enable debugfs support for use with the MHI transport. Allows
-	  reading and/or modifying some values within the MHI controller
-	  for debug and test purposes.
-
-config MHI_BUS_PCI_GENERIC
-	tristate "MHI PCI controller driver"
-	depends on MHI_BUS
-	depends on PCI
-	help
-	  This driver provides MHI PCI controller driver for devices such as
-	  Qualcomm SDX55 based PCIe modems.
-
+source "drivers/bus/mhi/host/Kconfig"
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
index 0a2d778d6fb4..5f5708a249f5 100644
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -1,6 +1,2 @@
-# core layer
-obj-y += core/
-
-obj-$(CONFIG_MHI_BUS_PCI_GENERIC) += mhi_pci_generic.o
-mhi_pci_generic-y += pci_generic.o
-
+# Host MHI stack
+obj-y += host/
diff --git a/drivers/bus/mhi/host/Kconfig b/drivers/bus/mhi/host/Kconfig
new file mode 100644
index 000000000000..da5cd0c9fc62
--- /dev/null
+++ b/drivers/bus/mhi/host/Kconfig
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# MHI bus
+#
+# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+#
+
+config MHI_BUS
+	tristate "Modem Host Interface (MHI) bus"
+	help
+	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
+	  communication protocol used by the host processors to control
+	  and communicate with modem devices over a high speed peripheral
+	  bus or shared memory.
+
+config MHI_BUS_DEBUG
+	bool "Debugfs support for the MHI bus"
+	depends on MHI_BUS && DEBUG_FS
+	help
+	  Enable debugfs support for use with the MHI transport. Allows
+	  reading and/or modifying some values within the MHI controller
+	  for debug and test purposes.
+
+config MHI_BUS_PCI_GENERIC
+	tristate "MHI PCI controller driver"
+	depends on MHI_BUS
+	depends on PCI
+	help
+	  This driver provides MHI PCI controller driver for devices such as
+	  Qualcomm SDX55 based PCIe modems.
+
diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/host/Makefile
similarity index 54%
rename from drivers/bus/mhi/core/Makefile
rename to drivers/bus/mhi/host/Makefile
index c3feb4130aa3..859c2f38451c 100644
--- a/drivers/bus/mhi/core/Makefile
+++ b/drivers/bus/mhi/host/Makefile
@@ -1,4 +1,6 @@
 obj-$(CONFIG_MHI_BUS) += mhi.o
-
 mhi-y := init.o main.o pm.o boot.o
 mhi-$(CONFIG_MHI_BUS_DEBUG) += debugfs.o
+
+obj-$(CONFIG_MHI_BUS_PCI_GENERIC) += mhi_pci_generic.o
+mhi_pci_generic-y += pci_generic.o
diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/host/boot.c
similarity index 100%
rename from drivers/bus/mhi/core/boot.c
rename to drivers/bus/mhi/host/boot.c
diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/host/debugfs.c
similarity index 100%
rename from drivers/bus/mhi/core/debugfs.c
rename to drivers/bus/mhi/host/debugfs.c
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/host/init.c
similarity index 100%
rename from drivers/bus/mhi/core/init.c
rename to drivers/bus/mhi/host/init.c
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/host/internal.h
similarity index 100%
rename from drivers/bus/mhi/core/internal.h
rename to drivers/bus/mhi/host/internal.h
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/host/main.c
similarity index 100%
rename from drivers/bus/mhi/core/main.c
rename to drivers/bus/mhi/host/main.c
diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
similarity index 100%
rename from drivers/bus/mhi/pci_generic.c
rename to drivers/bus/mhi/host/pci_generic.c
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/host/pm.c
similarity index 100%
rename from drivers/bus/mhi/core/pm.c
rename to drivers/bus/mhi/host/pm.c
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 04/25] bus: mhi: Move common MHI definitions out of host directory
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (2 preceding siblings ...)
  2022-02-12 18:20 ` [PATCH v3 03/25] bus: mhi: Move host MHI code to "host" directory Manivannan Sadhasivam
@ 2022-02-12 18:20 ` Manivannan Sadhasivam
  2022-02-15  0:28   ` Hemant Kumar
  2022-02-15 20:02   ` Alex Elder
  2022-02-12 18:20 ` [PATCH v3 05/25] bus: mhi: Make mhi_state_str[] array static inline and move to common.h Manivannan Sadhasivam
                   ` (21 subsequent siblings)
  25 siblings, 2 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:20 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Move the common MHI definitions from the host "internal.h" to "common.h" so
that the endpoint code can make use of them. This also avoids duplicating
the definitions in the endpoint stack.

The MHI register definitions are not moved, though, since the register
offsets differ between the host and the endpoint.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/common.h        | 167 ++++++++++++++++++++++++++++++++
 drivers/bus/mhi/host/internal.h | 155 +----------------------------
 2 files changed, 168 insertions(+), 154 deletions(-)
 create mode 100644 drivers/bus/mhi/common.h

diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
new file mode 100644
index 000000000000..0d13a202d334
--- /dev/null
+++ b/drivers/bus/mhi/common.h
@@ -0,0 +1,167 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2021, Linaro Ltd.
+ *
+ */
+
+#ifndef _MHI_COMMON_H
+#define _MHI_COMMON_H
+
+#include <linux/mhi.h>
+
+/* Command Ring Element macros */
+/* No operation command */
+#define MHI_TRE_CMD_NOOP_PTR (0)
+#define MHI_TRE_CMD_NOOP_DWORD0 (0)
+#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
+
+/* Channel reset command */
+#define MHI_TRE_CMD_RESET_PTR (0)
+#define MHI_TRE_CMD_RESET_DWORD0 (0)
+#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
+					(MHI_CMD_RESET_CHAN << 16)))
+
+/* Channel stop command */
+#define MHI_TRE_CMD_STOP_PTR (0)
+#define MHI_TRE_CMD_STOP_DWORD0 (0)
+#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
+				       (MHI_CMD_STOP_CHAN << 16)))
+
+/* Channel start command */
+#define MHI_TRE_CMD_START_PTR (0)
+#define MHI_TRE_CMD_START_DWORD0 (0)
+#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
+					(MHI_CMD_START_CHAN << 16)))
+
+#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
+#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
+
+/* Event descriptor macros */
+#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
+#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
+#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
+#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
+#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
+#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
+#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
+#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
+#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
+
+/* Transfer descriptor macros */
+#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
+#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
+#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
+	| (ieot << 9) | (ieob << 8) | chain))
+
+/* RSC transfer descriptor macros */
+#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
+#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
+#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
+
+enum mhi_pkt_type {
+	MHI_PKT_TYPE_INVALID = 0x0,
+	MHI_PKT_TYPE_NOOP_CMD = 0x1,
+	MHI_PKT_TYPE_TRANSFER = 0x2,
+	MHI_PKT_TYPE_COALESCING = 0x8,
+	MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
+	MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
+	MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
+	MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
+	MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
+	MHI_PKT_TYPE_TX_EVENT = 0x22,
+	MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
+	MHI_PKT_TYPE_EE_EVENT = 0x40,
+	MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
+	MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
+	MHI_PKT_TYPE_STALE_EVENT, /* internal event */
+};
+
+/* MHI transfer completion events */
+enum mhi_ev_ccs {
+	MHI_EV_CC_INVALID = 0x0,
+	MHI_EV_CC_SUCCESS = 0x1,
+	MHI_EV_CC_EOT = 0x2, /* End of transfer event */
+	MHI_EV_CC_OVERFLOW = 0x3,
+	MHI_EV_CC_EOB = 0x4, /* End of block event */
+	MHI_EV_CC_OOB = 0x5, /* Out of block event */
+	MHI_EV_CC_DB_MODE = 0x6,
+	MHI_EV_CC_UNDEFINED_ERR = 0x10,
+	MHI_EV_CC_BAD_TRE = 0x11,
+};
+
+/* Channel state */
+enum mhi_ch_state {
+	MHI_CH_STATE_DISABLED,
+	MHI_CH_STATE_ENABLED,
+	MHI_CH_STATE_RUNNING,
+	MHI_CH_STATE_SUSPENDED,
+	MHI_CH_STATE_STOP,
+	MHI_CH_STATE_ERROR,
+};
+
+enum mhi_cmd_type {
+	MHI_CMD_NOP = 1,
+	MHI_CMD_RESET_CHAN = 16,
+	MHI_CMD_STOP_CHAN = 17,
+	MHI_CMD_START_CHAN = 18,
+};
+
+#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
+#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
+#define EV_CTX_INTMODC_SHIFT 8
+#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
+#define EV_CTX_INTMODT_SHIFT 16
+struct mhi_event_ctxt {
+	__le32 intmod;
+	__le32 ertype;
+	__le32 msivec;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
+};
+
+#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
+#define CHAN_CTX_CHSTATE_SHIFT 0
+#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
+#define CHAN_CTX_BRSTMODE_SHIFT 8
+#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
+#define CHAN_CTX_POLLCFG_SHIFT 10
+#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
+struct mhi_chan_ctxt {
+	__le32 chcfg;
+	__le32 chtype;
+	__le32 erindex;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
+};
+
+struct mhi_cmd_ctxt {
+	__le32 reserved0;
+	__le32 reserved1;
+	__le32 reserved2;
+
+	__le64 rbase __packed __aligned(4);
+	__le64 rlen __packed __aligned(4);
+	__le64 rp __packed __aligned(4);
+	__le64 wp __packed __aligned(4);
+};
+
+extern const char * const mhi_state_str[MHI_STATE_MAX];
+#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
+				  !mhi_state_str[state]) ? \
+				"INVALID_STATE" : mhi_state_str[state])
+
+#endif /* _MHI_COMMON_H */
diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index fa64340a8997..622de6ba1a0b 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -7,7 +7,7 @@
 #ifndef _MHI_INT_H
 #define _MHI_INT_H
 
-#include <linux/mhi.h>
+#include "../common.h"
 
 extern struct bus_type mhi_bus_type;
 
@@ -203,51 +203,6 @@ extern struct bus_type mhi_bus_type;
 #define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
 #define SOC_HW_VERSION_MINOR_VER_SHFT (0)
 
-#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
-#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
-#define EV_CTX_INTMODC_SHIFT 8
-#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
-#define EV_CTX_INTMODT_SHIFT 16
-struct mhi_event_ctxt {
-	__le32 intmod;
-	__le32 ertype;
-	__le32 msivec;
-
-	__le64 rbase __packed __aligned(4);
-	__le64 rlen __packed __aligned(4);
-	__le64 rp __packed __aligned(4);
-	__le64 wp __packed __aligned(4);
-};
-
-#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
-#define CHAN_CTX_CHSTATE_SHIFT 0
-#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
-#define CHAN_CTX_BRSTMODE_SHIFT 8
-#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
-#define CHAN_CTX_POLLCFG_SHIFT 10
-#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
-struct mhi_chan_ctxt {
-	__le32 chcfg;
-	__le32 chtype;
-	__le32 erindex;
-
-	__le64 rbase __packed __aligned(4);
-	__le64 rlen __packed __aligned(4);
-	__le64 rp __packed __aligned(4);
-	__le64 wp __packed __aligned(4);
-};
-
-struct mhi_cmd_ctxt {
-	__le32 reserved0;
-	__le32 reserved1;
-	__le32 reserved2;
-
-	__le64 rbase __packed __aligned(4);
-	__le64 rlen __packed __aligned(4);
-	__le64 rp __packed __aligned(4);
-	__le64 wp __packed __aligned(4);
-};
-
 struct mhi_ctxt {
 	struct mhi_event_ctxt *er_ctxt;
 	struct mhi_chan_ctxt *chan_ctxt;
@@ -267,109 +222,6 @@ struct bhi_vec_entry {
 	u64 size;
 };
 
-enum mhi_cmd_type {
-	MHI_CMD_NOP = 1,
-	MHI_CMD_RESET_CHAN = 16,
-	MHI_CMD_STOP_CHAN = 17,
-	MHI_CMD_START_CHAN = 18,
-};
-
-/* No operation command */
-#define MHI_TRE_CMD_NOOP_PTR (0)
-#define MHI_TRE_CMD_NOOP_DWORD0 (0)
-#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
-
-/* Channel reset command */
-#define MHI_TRE_CMD_RESET_PTR (0)
-#define MHI_TRE_CMD_RESET_DWORD0 (0)
-#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
-					(MHI_CMD_RESET_CHAN << 16)))
-
-/* Channel stop command */
-#define MHI_TRE_CMD_STOP_PTR (0)
-#define MHI_TRE_CMD_STOP_DWORD0 (0)
-#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
-				       (MHI_CMD_STOP_CHAN << 16)))
-
-/* Channel start command */
-#define MHI_TRE_CMD_START_PTR (0)
-#define MHI_TRE_CMD_START_DWORD0 (0)
-#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
-					(MHI_CMD_START_CHAN << 16)))
-
-#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
-#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
-#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
-
-/* Event descriptor macros */
-#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
-#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
-#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
-#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
-#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
-#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
-#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
-#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
-#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
-
-/* Transfer descriptor macros */
-#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
-#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
-#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
-	| (ieot << 9) | (ieob << 8) | chain))
-
-/* RSC transfer descriptor macros */
-#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
-#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
-#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
-
-enum mhi_pkt_type {
-	MHI_PKT_TYPE_INVALID = 0x0,
-	MHI_PKT_TYPE_NOOP_CMD = 0x1,
-	MHI_PKT_TYPE_TRANSFER = 0x2,
-	MHI_PKT_TYPE_COALESCING = 0x8,
-	MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
-	MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
-	MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
-	MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
-	MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
-	MHI_PKT_TYPE_TX_EVENT = 0x22,
-	MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
-	MHI_PKT_TYPE_EE_EVENT = 0x40,
-	MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
-	MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
-	MHI_PKT_TYPE_STALE_EVENT, /* internal event */
-};
-
-/* MHI transfer completion events */
-enum mhi_ev_ccs {
-	MHI_EV_CC_INVALID = 0x0,
-	MHI_EV_CC_SUCCESS = 0x1,
-	MHI_EV_CC_EOT = 0x2, /* End of transfer event */
-	MHI_EV_CC_OVERFLOW = 0x3,
-	MHI_EV_CC_EOB = 0x4, /* End of block event */
-	MHI_EV_CC_OOB = 0x5, /* Out of block event */
-	MHI_EV_CC_DB_MODE = 0x6,
-	MHI_EV_CC_UNDEFINED_ERR = 0x10,
-	MHI_EV_CC_BAD_TRE = 0x11,
-};
-
-enum mhi_ch_state {
-	MHI_CH_STATE_DISABLED = 0x0,
-	MHI_CH_STATE_ENABLED = 0x1,
-	MHI_CH_STATE_RUNNING = 0x2,
-	MHI_CH_STATE_SUSPENDED = 0x3,
-	MHI_CH_STATE_STOP = 0x4,
-	MHI_CH_STATE_ERROR = 0x5,
-};
-
 enum mhi_ch_state_type {
 	MHI_CH_STATE_TYPE_RESET,
 	MHI_CH_STATE_TYPE_STOP,
@@ -411,11 +263,6 @@ extern const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX];
 #define TO_DEV_STATE_TRANS_STR(state) (((state) >= DEV_ST_TRANSITION_MAX) ? \
 				"INVALID_STATE" : dev_state_tran_str[state])
 
-extern const char * const mhi_state_str[MHI_STATE_MAX];
-#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
-				  !mhi_state_str[state]) ? \
-				"INVALID_STATE" : mhi_state_str[state])
-
 /* internal power states */
 enum mhi_pm_state {
 	MHI_PM_STATE_DISABLE,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 05/25] bus: mhi: Make mhi_state_str[] array static inline and move to common.h
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (3 preceding siblings ...)
  2022-02-12 18:20 ` [PATCH v3 04/25] bus: mhi: Move common MHI definitions out of host directory Manivannan Sadhasivam
@ 2022-02-12 18:20 ` Manivannan Sadhasivam
  2022-02-15  0:31   ` Hemant Kumar
  2022-02-15 20:02   ` Alex Elder
  2022-02-12 18:20 ` [PATCH v3 06/25] bus: mhi: Cleanup the register definitions used in headers Manivannan Sadhasivam
                   ` (20 subsequent siblings)
  25 siblings, 2 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:20 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

The mhi_state_str[] array could be used by the MHI endpoint stack as well.
So let's convert the array into a "static inline" function and move it to
the "common.h" header so that the endpoint stack can also make use of it.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/common.h       | 29 +++++++++++++++++++++++++----
 drivers/bus/mhi/host/boot.c    |  2 +-
 drivers/bus/mhi/host/debugfs.c |  6 +++---
 drivers/bus/mhi/host/init.c    | 12 ------------
 drivers/bus/mhi/host/main.c    |  8 ++++----
 drivers/bus/mhi/host/pm.c      | 14 +++++++-------
 6 files changed, 40 insertions(+), 31 deletions(-)

diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index 0d13a202d334..288e47168649 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -159,9 +159,30 @@ struct mhi_cmd_ctxt {
 	__le64 wp __packed __aligned(4);
 };
 
-extern const char * const mhi_state_str[MHI_STATE_MAX];
-#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
-				  !mhi_state_str[state]) ? \
-				"INVALID_STATE" : mhi_state_str[state])
+static inline const char *mhi_state_str(enum mhi_state state)
+{
+	switch (state) {
+	case MHI_STATE_RESET:
+		return "RESET";
+	case MHI_STATE_READY:
+		return "READY";
+	case MHI_STATE_M0:
+		return "M0";
+	case MHI_STATE_M1:
+		return "M1";
+	case MHI_STATE_M2:
+		return "M2";
+	case MHI_STATE_M3:
+		return "M3";
+	case MHI_STATE_M3_FAST:
+		return "M3 FAST";
+	case MHI_STATE_BHI:
+		return "BHI";
+	case MHI_STATE_SYS_ERR:
+		return "SYS ERROR";
+	default:
+		return "Unknown state";
+	}
+}
 
 #endif /* _MHI_COMMON_H */
diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
index 74295d3cc662..93cb705614c6 100644
--- a/drivers/bus/mhi/host/boot.c
+++ b/drivers/bus/mhi/host/boot.c
@@ -68,7 +68,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
 
 	dev_dbg(dev, "Entered with pm_state:%s dev_state:%s ee:%s\n",
 		to_mhi_pm_state_str(mhi_cntrl->pm_state),
-		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		mhi_state_str(mhi_cntrl->dev_state),
 		TO_MHI_EXEC_STR(mhi_cntrl->ee));
 
 	/*
diff --git a/drivers/bus/mhi/host/debugfs.c b/drivers/bus/mhi/host/debugfs.c
index d818586c229d..399d0db1f1eb 100644
--- a/drivers/bus/mhi/host/debugfs.c
+++ b/drivers/bus/mhi/host/debugfs.c
@@ -20,7 +20,7 @@ static int mhi_debugfs_states_show(struct seq_file *m, void *d)
 	seq_printf(m, "PM state: %s Device: %s MHI state: %s EE: %s wake: %s\n",
 		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
 		   mhi_is_active(mhi_cntrl) ? "Active" : "Inactive",
-		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		   mhi_state_str(mhi_cntrl->dev_state),
 		   TO_MHI_EXEC_STR(mhi_cntrl->ee),
 		   mhi_cntrl->wake_set ? "true" : "false");
 
@@ -206,13 +206,13 @@ static int mhi_debugfs_regdump_show(struct seq_file *m, void *d)
 
 	seq_printf(m, "Host PM state: %s Device state: %s EE: %s\n",
 		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
-		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+		   mhi_state_str(mhi_cntrl->dev_state),
 		   TO_MHI_EXEC_STR(mhi_cntrl->ee));
 
 	state = mhi_get_mhi_state(mhi_cntrl);
 	ee = mhi_get_exec_env(mhi_cntrl);
 	seq_printf(m, "Device EE: %s state: %s\n", TO_MHI_EXEC_STR(ee),
-		   TO_MHI_STATE_STR(state));
+		   mhi_state_str(state));
 
 	for (i = 0; regs[i].name; i++) {
 		if (!regs[i].base)
diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
index 4bd62f32695d..0e301f3f305e 100644
--- a/drivers/bus/mhi/host/init.c
+++ b/drivers/bus/mhi/host/init.c
@@ -44,18 +44,6 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
 	[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
 };
 
-const char * const mhi_state_str[MHI_STATE_MAX] = {
-	[MHI_STATE_RESET] = "RESET",
-	[MHI_STATE_READY] = "READY",
-	[MHI_STATE_M0] = "M0",
-	[MHI_STATE_M1] = "M1",
-	[MHI_STATE_M2] = "M2",
-	[MHI_STATE_M3] = "M3",
-	[MHI_STATE_M3_FAST] = "M3 FAST",
-	[MHI_STATE_BHI] = "BHI",
-	[MHI_STATE_SYS_ERR] = "SYS ERROR",
-};
-
 const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
 	[MHI_CH_STATE_TYPE_RESET] = "RESET",
 	[MHI_CH_STATE_TYPE_STOP] = "STOP",
diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index 85f4f7c8d7c6..e436c2993d97 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -479,8 +479,8 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
 	ee = mhi_get_exec_env(mhi_cntrl);
 	dev_dbg(dev, "local ee: %s state: %s device ee: %s state: %s\n",
 		TO_MHI_EXEC_STR(mhi_cntrl->ee),
-		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
-		TO_MHI_EXEC_STR(ee), TO_MHI_STATE_STR(state));
+		mhi_state_str(mhi_cntrl->dev_state),
+		TO_MHI_EXEC_STR(ee), mhi_state_str(state));
 
 	if (state == MHI_STATE_SYS_ERR) {
 		dev_dbg(dev, "System error detected\n");
@@ -846,7 +846,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 			new_state = MHI_TRE_GET_EV_STATE(local_rp);
 
 			dev_dbg(dev, "State change event to state: %s\n",
-				TO_MHI_STATE_STR(new_state));
+				mhi_state_str(new_state));
 
 			switch (new_state) {
 			case MHI_STATE_M0:
@@ -873,7 +873,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
 			}
 			default:
 				dev_err(dev, "Invalid state: %s\n",
-					TO_MHI_STATE_STR(new_state));
+					mhi_state_str(new_state));
 			}
 
 			break;
diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
index c35c5ddc7220..088ade0f3e0b 100644
--- a/drivers/bus/mhi/host/pm.c
+++ b/drivers/bus/mhi/host/pm.c
@@ -545,7 +545,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
 
 	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
 		to_mhi_pm_state_str(mhi_cntrl->pm_state),
-		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+		mhi_state_str(mhi_cntrl->dev_state));
 
 	mutex_unlock(&mhi_cntrl->pm_mutex);
 }
@@ -689,7 +689,7 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
 exit_sys_error_transition:
 	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
 		to_mhi_pm_state_str(mhi_cntrl->pm_state),
-		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+		mhi_state_str(mhi_cntrl->dev_state));
 
 	mutex_unlock(&mhi_cntrl->pm_mutex);
 }
@@ -864,7 +864,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
 	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
 		dev_err(dev,
 			"Did not enter M3 state, MHI state: %s, PM state: %s\n",
-			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			mhi_state_str(mhi_cntrl->dev_state),
 			to_mhi_pm_state_str(mhi_cntrl->pm_state));
 		return -EIO;
 	}
@@ -890,7 +890,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
 
 	dev_dbg(dev, "Entered with PM state: %s, MHI state: %s\n",
 		to_mhi_pm_state_str(mhi_cntrl->pm_state),
-		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+		mhi_state_str(mhi_cntrl->dev_state));
 
 	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
 		return 0;
@@ -900,7 +900,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
 
 	if (mhi_get_mhi_state(mhi_cntrl) != MHI_STATE_M3) {
 		dev_warn(dev, "Resuming from non M3 state (%s)\n",
-			 TO_MHI_STATE_STR(mhi_get_mhi_state(mhi_cntrl)));
+			 mhi_state_str(mhi_get_mhi_state(mhi_cntrl)));
 		if (!force)
 			return -EINVAL;
 	}
@@ -937,7 +937,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
 	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
 		dev_err(dev,
 			"Did not enter M0 state, MHI state: %s, PM state: %s\n",
-			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+			mhi_state_str(mhi_cntrl->dev_state),
 			to_mhi_pm_state_str(mhi_cntrl->pm_state));
 		return -EIO;
 	}
@@ -1088,7 +1088,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
 
 	state = mhi_get_mhi_state(mhi_cntrl);
 	dev_dbg(dev, "Attempting power on with EE: %s, state: %s\n",
-		TO_MHI_EXEC_STR(current_ee), TO_MHI_STATE_STR(state));
+		TO_MHI_EXEC_STR(current_ee), mhi_state_str(state));
 
 	if (state == MHI_STATE_SYS_ERR) {
 		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 06/25] bus: mhi: Cleanup the register definitions used in headers
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (4 preceding siblings ...)
  2022-02-12 18:20 ` [PATCH v3 05/25] bus: mhi: Make mhi_state_str[] array static inline and move to common.h Manivannan Sadhasivam
@ 2022-02-12 18:20 ` Manivannan Sadhasivam
  2022-02-15  0:37   ` Hemant Kumar
  2022-02-15 20:02   ` Alex Elder
  2022-02-12 18:20 ` [PATCH v3 07/25] bus: mhi: Get rid of SHIFT macros and use bitfield operations Manivannan Sadhasivam
                   ` (19 subsequent siblings)
  25 siblings, 2 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:20 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Cleanup includes:

1. Moving the MHI register definitions to the common.h header with a REG_
   prefix and aliasing them in the host/internal.h file. This makes it
   possible to reuse the register definitions in the EP stack, which differ
   by a fixed offset.
2. Using the GENMASK macro for masks
3. Removing brackets for single values
4. Using lowercase for hex values
5. Using two digits for hex values where applicable

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/common.h        | 243 ++++++++++++++++++++++++-----
 drivers/bus/mhi/host/internal.h | 265 +++++++++-----------------------
 2 files changed, 278 insertions(+), 230 deletions(-)

diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index 288e47168649..f226f06d4ff9 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -9,62 +9,223 @@
 
 #include <linux/mhi.h>
 
+/* MHI registers */
+#define REG_MHIREGLEN					0x00
+#define REG_MHIVER					0x08
+#define REG_MHICFG					0x10
+#define REG_CHDBOFF					0x18
+#define REG_ERDBOFF					0x20
+#define REG_BHIOFF					0x28
+#define REG_BHIEOFF					0x2c
+#define REG_DEBUGOFF					0x30
+#define REG_MHICTRL					0x38
+#define REG_MHISTATUS					0x48
+#define REG_CCABAP_LOWER				0x58
+#define REG_CCABAP_HIGHER				0x5c
+#define REG_ECABAP_LOWER				0x60
+#define REG_ECABAP_HIGHER				0x64
+#define REG_CRCBAP_LOWER				0x68
+#define REG_CRCBAP_HIGHER				0x6c
+#define REG_CRDB_LOWER					0x70
+#define REG_CRDB_HIGHER					0x74
+#define REG_MHICTRLBASE_LOWER				0x80
+#define REG_MHICTRLBASE_HIGHER				0x84
+#define REG_MHICTRLLIMIT_LOWER				0x88
+#define REG_MHICTRLLIMIT_HIGHER				0x8c
+#define REG_MHIDATABASE_LOWER				0x98
+#define REG_MHIDATABASE_HIGHER				0x9c
+#define REG_MHIDATALIMIT_LOWER				0xa0
+#define REG_MHIDATALIMIT_HIGHER				0xa4
+
+/* MHI BHI registers */
+#define REG_BHI_BHIVERSION_MINOR			0x00
+#define REG_BHI_BHIVERSION_MAJOR			0x04
+#define REG_BHI_IMGADDR_LOW				0x08
+#define REG_BHI_IMGADDR_HIGH				0x0c
+#define REG_BHI_IMGSIZE					0x10
+#define REG_BHI_RSVD1					0x14
+#define REG_BHI_IMGTXDB					0x18
+#define REG_BHI_RSVD2					0x1c
+#define REG_BHI_INTVEC					0x20
+#define REG_BHI_RSVD3					0x24
+#define REG_BHI_EXECENV					0x28
+#define REG_BHI_STATUS					0x2c
+#define REG_BHI_ERRCODE					0x30
+#define REG_BHI_ERRDBG1					0x34
+#define REG_BHI_ERRDBG2					0x38
+#define REG_BHI_ERRDBG3					0x3c
+#define REG_BHI_SERIALNU				0x40
+#define REG_BHI_SBLANTIROLLVER				0x44
+#define REG_BHI_NUMSEG					0x48
+#define REG_BHI_MSMHWID(n)				(0x4c + (0x4 * (n)))
+#define REG_BHI_OEMPKHASH(n)				(0x64 + (0x4 * (n)))
+#define REG_BHI_RSVD5					0xc4
+
+/* BHI register bits */
+#define BHI_TXDB_SEQNUM_BMSK				GENMASK(29, 0)
+#define BHI_TXDB_SEQNUM_SHFT				0
+#define BHI_STATUS_MASK					GENMASK(31, 30)
+#define BHI_STATUS_SHIFT				30
+#define BHI_STATUS_ERROR				0x03
+#define BHI_STATUS_SUCCESS				0x02
+#define BHI_STATUS_RESET				0x00
+
+/* MHI BHIE registers */
+#define REG_BHIE_MSMSOCID_OFFS				0x00
+#define REG_BHIE_TXVECADDR_LOW_OFFS			0x2c
+#define REG_BHIE_TXVECADDR_HIGH_OFFS			0x30
+#define REG_BHIE_TXVECSIZE_OFFS				0x34
+#define REG_BHIE_TXVECDB_OFFS				0x3c
+#define REG_BHIE_TXVECSTATUS_OFFS			0x44
+#define REG_BHIE_RXVECADDR_LOW_OFFS			0x60
+#define REG_BHIE_RXVECADDR_HIGH_OFFS			0x64
+#define REG_BHIE_RXVECSIZE_OFFS				0x68
+#define REG_BHIE_RXVECDB_OFFS				0x70
+#define REG_BHIE_RXVECSTATUS_OFFS			0x78
+
+/* BHIE register bits */
+#define BHIE_TXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
+#define BHIE_TXVECDB_SEQNUM_SHFT			0
+#define BHIE_TXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
+#define BHIE_TXVECSTATUS_SEQNUM_SHFT			0
+#define BHIE_TXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
+#define BHIE_TXVECSTATUS_STATUS_SHFT			30
+#define BHIE_TXVECSTATUS_STATUS_RESET			0x00
+#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL		0x02
+#define BHIE_TXVECSTATUS_STATUS_ERROR			0x03
+#define BHIE_RXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
+#define BHIE_RXVECDB_SEQNUM_SHFT			0
+#define BHIE_RXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
+#define BHIE_RXVECSTATUS_SEQNUM_SHFT			0
+#define BHIE_RXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
+#define BHIE_RXVECSTATUS_STATUS_SHFT			30
+#define BHIE_RXVECSTATUS_STATUS_RESET			0x00
+#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL		0x02
+#define BHIE_RXVECSTATUS_STATUS_ERROR			0x03
+
+/* MHI register bits */
+#define MHIREGLEN_MHIREGLEN_MASK			GENMASK(31, 0)
+#define MHIREGLEN_MHIREGLEN_SHIFT			0
+#define MHIVER_MHIVER_MASK				GENMASK(31, 0)
+#define MHIVER_MHIVER_SHIFT				0
+#define MHICFG_NHWER_MASK				GENMASK(31, 24)
+#define MHICFG_NHWER_SHIFT				24
+#define MHICFG_NER_MASK					GENMASK(23, 16)
+#define MHICFG_NER_SHIFT				16
+#define MHICFG_NHWCH_MASK				GENMASK(15, 8)
+#define MHICFG_NHWCH_SHIFT				8
+#define MHICFG_NCH_MASK					GENMASK(7, 0)
+#define MHICFG_NCH_SHIFT				0
+#define CHDBOFF_CHDBOFF_MASK				GENMASK(31, 0)
+#define CHDBOFF_CHDBOFF_SHIFT				0
+#define ERDBOFF_ERDBOFF_MASK				GENMASK(31, 0)
+#define ERDBOFF_ERDBOFF_SHIFT				0
+#define BHIOFF_BHIOFF_MASK				GENMASK(31, 0)
+#define BHIOFF_BHIOFF_SHIFT				0
+#define BHIEOFF_BHIEOFF_MASK				GENMASK(31, 0)
+#define BHIEOFF_BHIEOFF_SHIFT				0
+#define DEBUGOFF_DEBUGOFF_MASK				GENMASK(31, 0)
+#define DEBUGOFF_DEBUGOFF_SHIFT				0
+#define MHICTRL_MHISTATE_MASK				GENMASK(15, 8)
+#define MHICTRL_MHISTATE_SHIFT				8
+#define MHICTRL_RESET_MASK				BIT(1)
+#define MHICTRL_RESET_SHIFT				1
+#define MHISTATUS_MHISTATE_MASK				GENMASK(15, 8)
+#define MHISTATUS_MHISTATE_SHIFT			8
+#define MHISTATUS_SYSERR_MASK				BIT(2)
+#define MHISTATUS_SYSERR_SHIFT				2
+#define MHISTATUS_READY_MASK				BIT(0)
+#define MHISTATUS_READY_SHIFT				0
+#define CCABAP_LOWER_CCABAP_LOWER_MASK			GENMASK(31, 0)
+#define CCABAP_LOWER_CCABAP_LOWER_SHIFT			0
+#define CCABAP_HIGHER_CCABAP_HIGHER_MASK		GENMASK(31, 0)
+#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT		0
+#define ECABAP_LOWER_ECABAP_LOWER_MASK			GENMASK(31, 0)
+#define ECABAP_LOWER_ECABAP_LOWER_SHIFT			0
+#define ECABAP_HIGHER_ECABAP_HIGHER_MASK		GENMASK(31, 0)
+#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT		0
+#define CRCBAP_LOWER_CRCBAP_LOWER_MASK			GENMASK(31, 0)
+#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT			0
+#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK		GENMASK(31, 0)
+#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT		0
+#define CRDB_LOWER_CRDB_LOWER_MASK			GENMASK(31, 0)
+#define CRDB_LOWER_CRDB_LOWER_SHIFT			0
+#define CRDB_HIGHER_CRDB_HIGHER_MASK			GENMASK(31, 0)
+#define CRDB_HIGHER_CRDB_HIGHER_SHIFT			0
+#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK	GENMASK(31, 0)
+#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT	0
+#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK	GENMASK(31, 0)
+#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT	0
+#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK	GENMASK(31, 0)
+#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT	0
+#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK	GENMASK(31, 0)
+#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT	0
+#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK	GENMASK(31, 0)
+#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT	0
+#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK	GENMASK(31, 0)
+#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT	0
+#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK	GENMASK(31, 0)
+#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT	0
+#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK	GENMASK(31, 0)
+#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT	0
+
 /* Command Ring Element macros */
 /* No operation command */
-#define MHI_TRE_CMD_NOOP_PTR (0)
-#define MHI_TRE_CMD_NOOP_DWORD0 (0)
-#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
+#define MHI_TRE_CMD_NOOP_PTR				0
+#define MHI_TRE_CMD_NOOP_DWORD0				0
+#define MHI_TRE_CMD_NOOP_DWORD1				cpu_to_le32(MHI_CMD_NOP << 16)
 
 /* Channel reset command */
-#define MHI_TRE_CMD_RESET_PTR (0)
-#define MHI_TRE_CMD_RESET_DWORD0 (0)
-#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
-					(MHI_CMD_RESET_CHAN << 16)))
+#define MHI_TRE_CMD_RESET_PTR				0
+#define MHI_TRE_CMD_RESET_DWORD0			0
+#define MHI_TRE_CMD_RESET_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
+							(MHI_CMD_RESET_CHAN << 16)))
 
 /* Channel stop command */
-#define MHI_TRE_CMD_STOP_PTR (0)
-#define MHI_TRE_CMD_STOP_DWORD0 (0)
-#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
-				       (MHI_CMD_STOP_CHAN << 16)))
+#define MHI_TRE_CMD_STOP_PTR				0
+#define MHI_TRE_CMD_STOP_DWORD0				0
+#define MHI_TRE_CMD_STOP_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
+							(MHI_CMD_STOP_CHAN << 16)))
 
 /* Channel start command */
-#define MHI_TRE_CMD_START_PTR (0)
-#define MHI_TRE_CMD_START_DWORD0 (0)
-#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
-					(MHI_CMD_START_CHAN << 16)))
+#define MHI_TRE_CMD_START_PTR				0
+#define MHI_TRE_CMD_START_DWORD0			0
+#define MHI_TRE_CMD_START_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
+							(MHI_CMD_START_CHAN << 16)))
 
-#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
-#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
-#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
+#define MHI_TRE_GET_DWORD(tre, word)			le32_to_cpu((tre)->dword[(word)])
+#define MHI_TRE_GET_CMD_CHID(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_CMD_TYPE(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
 
 /* Event descriptor macros */
-#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
-#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
-#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
-#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
-#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
-#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
-#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
-#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
-#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
+/* Transfer completion event */
+#define MHI_TRE_EV_PTR(ptr)				cpu_to_le64(ptr)
+#define MHI_TRE_EV_DWORD0(code, len)			cpu_to_le32((code << 24) | len)
+#define MHI_TRE_EV_DWORD1(chid, type)			cpu_to_le32((chid << 24) | (type << 16))
+#define MHI_TRE_GET_EV_PTR(tre)				le64_to_cpu((tre)->ptr)
+#define MHI_TRE_GET_EV_CODE(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LEN(tre)				(MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
+#define MHI_TRE_GET_EV_CHID(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_TYPE(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_STATE(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_EXECENV(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_SEQ(tre)				MHI_TRE_GET_DWORD(tre, 0)
+#define MHI_TRE_GET_EV_TIME(tre)			MHI_TRE_GET_EV_PTR(tre)
+#define MHI_TRE_GET_EV_COOKIE(tre)			lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
+#define MHI_TRE_GET_EV_VEID(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_LINKSPEED(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LINKWIDTH(tre)			(MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
 
 /* Transfer descriptor macros */
-#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
-#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
-#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
-	| (ieot << 9) | (ieob << 8) | chain))
+#define MHI_TRE_DATA_PTR(ptr)				cpu_to_le64(ptr)
+#define MHI_TRE_DATA_DWORD0(len)			cpu_to_le32(len & MHI_MAX_MTU)
+#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain)	(cpu_to_le32((2 << 16) | (bei << 10) \
+							| (ieot << 9) | (ieob << 8) | chain))
 
 /* RSC transfer descriptor macros */
-#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
-#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
-#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
+#define MHI_RSCTRE_DATA_PTR(ptr, len)			cpu_to_le64(((u64)len << 48) | ptr)
+#define MHI_RSCTRE_DATA_DWORD0(cookie)			cpu_to_le32(cookie)
+#define MHI_RSCTRE_DATA_DWORD1				cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16)
 
 enum mhi_pkt_type {
 	MHI_PKT_TYPE_INVALID = 0x0,
diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index 622de6ba1a0b..762055a6ec9f 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -11,197 +11,84 @@
 
 extern struct bus_type mhi_bus_type;
 
-#define MHIREGLEN (0x0)
-#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
-#define MHIREGLEN_MHIREGLEN_SHIFT (0)
-
-#define MHIVER (0x8)
-#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
-#define MHIVER_MHIVER_SHIFT (0)
-
-#define MHICFG (0x10)
-#define MHICFG_NHWER_MASK (0xFF000000)
-#define MHICFG_NHWER_SHIFT (24)
-#define MHICFG_NER_MASK (0xFF0000)
-#define MHICFG_NER_SHIFT (16)
-#define MHICFG_NHWCH_MASK (0xFF00)
-#define MHICFG_NHWCH_SHIFT (8)
-#define MHICFG_NCH_MASK (0xFF)
-#define MHICFG_NCH_SHIFT (0)
-
-#define CHDBOFF (0x18)
-#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
-#define CHDBOFF_CHDBOFF_SHIFT (0)
-
-#define ERDBOFF (0x20)
-#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
-#define ERDBOFF_ERDBOFF_SHIFT (0)
-
-#define BHIOFF (0x28)
-#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
-#define BHIOFF_BHIOFF_SHIFT (0)
-
-#define BHIEOFF (0x2C)
-#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
-#define BHIEOFF_BHIEOFF_SHIFT (0)
-
-#define DEBUGOFF (0x30)
-#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
-#define DEBUGOFF_DEBUGOFF_SHIFT (0)
-
-#define MHICTRL (0x38)
-#define MHICTRL_MHISTATE_MASK (0x0000FF00)
-#define MHICTRL_MHISTATE_SHIFT (8)
-#define MHICTRL_RESET_MASK (0x2)
-#define MHICTRL_RESET_SHIFT (1)
-
-#define MHISTATUS (0x48)
-#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
-#define MHISTATUS_MHISTATE_SHIFT (8)
-#define MHISTATUS_SYSERR_MASK (0x4)
-#define MHISTATUS_SYSERR_SHIFT (2)
-#define MHISTATUS_READY_MASK (0x1)
-#define MHISTATUS_READY_SHIFT (0)
-
-#define CCABAP_LOWER (0x58)
-#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
-#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
-
-#define CCABAP_HIGHER (0x5C)
-#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
-#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
-
-#define ECABAP_LOWER (0x60)
-#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
-#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
-
-#define ECABAP_HIGHER (0x64)
-#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
-#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
-
-#define CRCBAP_LOWER (0x68)
-#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
-#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
-
-#define CRCBAP_HIGHER (0x6C)
-#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
-#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
-
-#define CRDB_LOWER (0x70)
-#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
-#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
-
-#define CRDB_HIGHER (0x74)
-#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
-#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
-
-#define MHICTRLBASE_LOWER (0x80)
-#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
-#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
-
-#define MHICTRLBASE_HIGHER (0x84)
-#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
-#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
-
-#define MHICTRLLIMIT_LOWER (0x88)
-#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
-#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
-
-#define MHICTRLLIMIT_HIGHER (0x8C)
-#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
-#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
-
-#define MHIDATABASE_LOWER (0x98)
-#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
-#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
-
-#define MHIDATABASE_HIGHER (0x9C)
-#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
-#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
-
-#define MHIDATALIMIT_LOWER (0xA0)
-#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
-#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
-
-#define MHIDATALIMIT_HIGHER (0xA4)
-#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
-#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
+/* MHI registers */
+#define MHIREGLEN			REG_MHIREGLEN
+#define MHIVER				REG_MHIVER
+#define MHICFG				REG_MHICFG
+#define CHDBOFF				REG_CHDBOFF
+#define ERDBOFF				REG_ERDBOFF
+#define BHIOFF				REG_BHIOFF
+#define BHIEOFF				REG_BHIEOFF
+#define DEBUGOFF			REG_DEBUGOFF
+#define MHICTRL				REG_MHICTRL
+#define MHISTATUS			REG_MHISTATUS
+#define CCABAP_LOWER			REG_CCABAP_LOWER
+#define CCABAP_HIGHER			REG_CCABAP_HIGHER
+#define ECABAP_LOWER			REG_ECABAP_LOWER
+#define ECABAP_HIGHER			REG_ECABAP_HIGHER
+#define CRCBAP_LOWER			REG_CRCBAP_LOWER
+#define CRCBAP_HIGHER			REG_CRCBAP_HIGHER
+#define CRDB_LOWER			REG_CRDB_LOWER
+#define CRDB_HIGHER			REG_CRDB_HIGHER
+#define MHICTRLBASE_LOWER		REG_MHICTRLBASE_LOWER
+#define MHICTRLBASE_HIGHER		REG_MHICTRLBASE_HIGHER
+#define MHICTRLLIMIT_LOWER		REG_MHICTRLLIMIT_LOWER
+#define MHICTRLLIMIT_HIGHER		REG_MHICTRLLIMIT_HIGHER
+#define MHIDATABASE_LOWER		REG_MHIDATABASE_LOWER
+#define MHIDATABASE_HIGHER		REG_MHIDATABASE_HIGHER
+#define MHIDATALIMIT_LOWER		REG_MHIDATALIMIT_LOWER
+#define MHIDATALIMIT_HIGHER		REG_MHIDATALIMIT_HIGHER
 
 /* Host request register */
-#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
-#define MHI_SOC_RESET_REQ BIT(0)
-
-/* MHI BHI offfsets */
-#define BHI_BHIVERSION_MINOR (0x00)
-#define BHI_BHIVERSION_MAJOR (0x04)
-#define BHI_IMGADDR_LOW (0x08)
-#define BHI_IMGADDR_HIGH (0x0C)
-#define BHI_IMGSIZE (0x10)
-#define BHI_RSVD1 (0x14)
-#define BHI_IMGTXDB (0x18)
-#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHI_TXDB_SEQNUM_SHFT (0)
-#define BHI_RSVD2 (0x1C)
-#define BHI_INTVEC (0x20)
-#define BHI_RSVD3 (0x24)
-#define BHI_EXECENV (0x28)
-#define BHI_STATUS (0x2C)
-#define BHI_ERRCODE (0x30)
-#define BHI_ERRDBG1 (0x34)
-#define BHI_ERRDBG2 (0x38)
-#define BHI_ERRDBG3 (0x3C)
-#define BHI_SERIALNU (0x40)
-#define BHI_SBLANTIROLLVER (0x44)
-#define BHI_NUMSEG (0x48)
-#define BHI_MSMHWID(n) (0x4C + (0x4 * (n)))
-#define BHI_OEMPKHASH(n) (0x64 + (0x4 * (n)))
-#define BHI_RSVD5 (0xC4)
-#define BHI_STATUS_MASK (0xC0000000)
-#define BHI_STATUS_SHIFT (30)
-#define BHI_STATUS_ERROR (3)
-#define BHI_STATUS_SUCCESS (2)
-#define BHI_STATUS_RESET (0)
-
-/* MHI BHIE offsets */
-#define BHIE_MSMSOCID_OFFS (0x0000)
-#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
-#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
-#define BHIE_TXVECSIZE_OFFS (0x0034)
-#define BHIE_TXVECDB_OFFS (0x003C)
-#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_TXVECDB_SEQNUM_SHFT (0)
-#define BHIE_TXVECSTATUS_OFFS (0x0044)
-#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
-#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
-#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
-#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
-#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
-#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
-#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
-#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
-#define BHIE_RXVECSIZE_OFFS (0x0068)
-#define BHIE_RXVECDB_OFFS (0x0070)
-#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_RXVECDB_SEQNUM_SHFT (0)
-#define BHIE_RXVECSTATUS_OFFS (0x0078)
-#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
-#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
-#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
-#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
-#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
-#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
-
-#define SOC_HW_VERSION_OFFS (0x224)
-#define SOC_HW_VERSION_FAM_NUM_BMSK (0xF0000000)
-#define SOC_HW_VERSION_FAM_NUM_SHFT (28)
-#define SOC_HW_VERSION_DEV_NUM_BMSK (0x0FFF0000)
-#define SOC_HW_VERSION_DEV_NUM_SHFT (16)
-#define SOC_HW_VERSION_MAJOR_VER_BMSK (0x0000FF00)
-#define SOC_HW_VERSION_MAJOR_VER_SHFT (8)
-#define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
-#define SOC_HW_VERSION_MINOR_VER_SHFT (0)
+#define MHI_SOC_RESET_REQ_OFFSET	0xb0
+#define MHI_SOC_RESET_REQ		BIT(0)
+
+/* MHI BHI registers */
+#define BHI_BHIVERSION_MINOR		REG_BHI_BHIVERSION_MINOR
+#define BHI_BHIVERSION_MAJOR		REG_BHI_BHIVERSION_MAJOR
+#define BHI_IMGADDR_LOW			REG_BHI_IMGADDR_LOW
+#define BHI_IMGADDR_HIGH		REG_BHI_IMGADDR_HIGH
+#define BHI_IMGSIZE			REG_BHI_IMGSIZE
+#define BHI_RSVD1			REG_BHI_RSVD1
+#define BHI_IMGTXDB			REG_BHI_IMGTXDB
+#define BHI_RSVD2			REG_BHI_RSVD2
+#define BHI_INTVEC			REG_BHI_INTVEC
+#define BHI_RSVD3			REG_BHI_RSVD3
+#define BHI_EXECENV			REG_BHI_EXECENV
+#define BHI_STATUS			REG_BHI_STATUS
+#define BHI_ERRCODE			REG_BHI_ERRCODE
+#define BHI_ERRDBG1			REG_BHI_ERRDBG1
+#define BHI_ERRDBG2			REG_BHI_ERRDBG2
+#define BHI_ERRDBG3			REG_BHI_ERRDBG3
+#define BHI_SERIALNU			REG_BHI_SERIALNU
+#define BHI_SBLANTIROLLVER		REG_BHI_SBLANTIROLLVER
+#define BHI_NUMSEG			REG_BHI_NUMSEG
+#define BHI_MSMHWID(n)			REG_BHI_MSMHWID(n)
+#define BHI_OEMPKHASH(n)		REG_BHI_OEMPKHASH(n)
+#define BHI_RSVD5			REG_BHI_RSVD5
+
+/* MHI BHIE registers */
+#define BHIE_MSMSOCID_OFFS		REG_BHIE_MSMSOCID_OFFS
+#define BHIE_TXVECADDR_LOW_OFFS		REG_BHIE_TXVECADDR_LOW_OFFS
+#define BHIE_TXVECADDR_HIGH_OFFS	REG_BHIE_TXVECADDR_HIGH_OFFS
+#define BHIE_TXVECSIZE_OFFS		REG_BHIE_TXVECSIZE_OFFS
+#define BHIE_TXVECDB_OFFS		REG_BHIE_TXVECDB_OFFS
+#define BHIE_TXVECSTATUS_OFFS		REG_BHIE_TXVECSTATUS_OFFS
+#define BHIE_RXVECADDR_LOW_OFFS		REG_BHIE_RXVECADDR_LOW_OFFS
+#define BHIE_RXVECADDR_HIGH_OFFS	REG_BHIE_RXVECADDR_HIGH_OFFS
+#define BHIE_RXVECSIZE_OFFS		REG_BHIE_RXVECSIZE_OFFS
+#define BHIE_RXVECDB_OFFS		REG_BHIE_RXVECDB_OFFS
+#define BHIE_RXVECSTATUS_OFFS		REG_BHIE_RXVECSTATUS_OFFS
+
+#define SOC_HW_VERSION_OFFS		0x224
+#define SOC_HW_VERSION_FAM_NUM_BMSK	GENMASK(31, 28)
+#define SOC_HW_VERSION_FAM_NUM_SHFT	28
+#define SOC_HW_VERSION_DEV_NUM_BMSK	GENMASK(27, 16)
+#define SOC_HW_VERSION_DEV_NUM_SHFT	16
+#define SOC_HW_VERSION_MAJOR_VER_BMSK	GENMASK(15, 8)
+#define SOC_HW_VERSION_MAJOR_VER_SHFT	8
+#define SOC_HW_VERSION_MINOR_VER_BMSK	GENMASK(7, 0)
+#define SOC_HW_VERSION_MINOR_VER_SHFT	0
 
 struct mhi_ctxt {
 	struct mhi_event_ctxt *er_ctxt;
-- 
2.25.1



* [PATCH v3 07/25] bus: mhi: Get rid of SHIFT macros and use bitfield operations
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (5 preceding siblings ...)
  2022-02-12 18:20 ` [PATCH v3 06/25] bus: mhi: Cleanup the register definitions used in headers Manivannan Sadhasivam
@ 2022-02-12 18:20 ` Manivannan Sadhasivam
  2022-02-15 20:02   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers Manivannan Sadhasivam
                   ` (18 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:20 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Instead of using hardcoded SHIFT values, use the bitfield macros to
derive the shift value from the mask at build time.

For shift values that cannot be determined at build time, the "__ffs()"
helper is used to find the shift value at runtime.

Suggested-by: Alex Elder <elder@linaro.org>
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/common.h        | 45 ----------------------
 drivers/bus/mhi/host/boot.c     | 15 ++------
 drivers/bus/mhi/host/debugfs.c  | 10 ++---
 drivers/bus/mhi/host/init.c     | 67 +++++++++++++++------------------
 drivers/bus/mhi/host/internal.h | 10 ++---
 drivers/bus/mhi/host/main.c     | 16 ++++----
 drivers/bus/mhi/host/pm.c       | 18 +++------
 7 files changed, 55 insertions(+), 126 deletions(-)

diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index f226f06d4ff9..728c82928d8d 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -63,9 +63,7 @@
 
 /* BHI register bits */
 #define BHI_TXDB_SEQNUM_BMSK				GENMASK(29, 0)
-#define BHI_TXDB_SEQNUM_SHFT				0
 #define BHI_STATUS_MASK					GENMASK(31, 30)
-#define BHI_STATUS_SHIFT				30
 #define BHI_STATUS_ERROR				0x03
 #define BHI_STATUS_SUCCESS				0x02
 #define BHI_STATUS_RESET				0x00
@@ -85,89 +83,51 @@
 
 /* BHIE register bits */
 #define BHIE_TXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
-#define BHIE_TXVECDB_SEQNUM_SHFT			0
 #define BHIE_TXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
-#define BHIE_TXVECSTATUS_SEQNUM_SHFT			0
 #define BHIE_TXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
-#define BHIE_TXVECSTATUS_STATUS_SHFT			30
 #define BHIE_TXVECSTATUS_STATUS_RESET			0x00
 #define BHIE_TXVECSTATUS_STATUS_XFER_COMPL		0x02
 #define BHIE_TXVECSTATUS_STATUS_ERROR			0x03
 #define BHIE_RXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
-#define BHIE_RXVECDB_SEQNUM_SHFT			0
 #define BHIE_RXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
-#define BHIE_RXVECSTATUS_SEQNUM_SHFT			0
 #define BHIE_RXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
-#define BHIE_RXVECSTATUS_STATUS_SHFT			30
 #define BHIE_RXVECSTATUS_STATUS_RESET			0x00
 #define BHIE_RXVECSTATUS_STATUS_XFER_COMPL		0x02
 #define BHIE_RXVECSTATUS_STATUS_ERROR			0x03
 
 /* MHI register bits */
 #define MHIREGLEN_MHIREGLEN_MASK			GENMASK(31, 0)
-#define MHIREGLEN_MHIREGLEN_SHIFT			0
 #define MHIVER_MHIVER_MASK				GENMASK(31, 0)
-#define MHIVER_MHIVER_SHIFT				0
 #define MHICFG_NHWER_MASK				GENMASK(31, 24)
-#define MHICFG_NHWER_SHIFT				24
 #define MHICFG_NER_MASK					GENMASK(23, 16)
-#define MHICFG_NER_SHIFT				16
 #define MHICFG_NHWCH_MASK				GENMASK(15, 8)
-#define MHICFG_NHWCH_SHIFT				8
 #define MHICFG_NCH_MASK					GENMASK(7, 0)
-#define MHICFG_NCH_SHIFT				0
 #define CHDBOFF_CHDBOFF_MASK				GENMASK(31, 0)
-#define CHDBOFF_CHDBOFF_SHIFT				0
 #define ERDBOFF_ERDBOFF_MASK				GENMASK(31, 0)
-#define ERDBOFF_ERDBOFF_SHIFT				0
 #define BHIOFF_BHIOFF_MASK				GENMASK(31, 0)
-#define BHIOFF_BHIOFF_SHIFT				0
 #define BHIEOFF_BHIEOFF_MASK				GENMASK(31, 0)
-#define BHIEOFF_BHIEOFF_SHIFT				0
 #define DEBUGOFF_DEBUGOFF_MASK				GENMASK(31, 0)
-#define DEBUGOFF_DEBUGOFF_SHIFT				0
 #define MHICTRL_MHISTATE_MASK				GENMASK(15, 8)
-#define MHICTRL_MHISTATE_SHIFT				8
 #define MHICTRL_RESET_MASK				BIT(1)
-#define MHICTRL_RESET_SHIFT				1
 #define MHISTATUS_MHISTATE_MASK				GENMASK(15, 8)
-#define MHISTATUS_MHISTATE_SHIFT			8
 #define MHISTATUS_SYSERR_MASK				BIT(2)
-#define MHISTATUS_SYSERR_SHIFT				2
 #define MHISTATUS_READY_MASK				BIT(0)
-#define MHISTATUS_READY_SHIFT				0
 #define CCABAP_LOWER_CCABAP_LOWER_MASK			GENMASK(31, 0)
-#define CCABAP_LOWER_CCABAP_LOWER_SHIFT			0
 #define CCABAP_HIGHER_CCABAP_HIGHER_MASK		GENMASK(31, 0)
-#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT		0
 #define ECABAP_LOWER_ECABAP_LOWER_MASK			GENMASK(31, 0)
-#define ECABAP_LOWER_ECABAP_LOWER_SHIFT			0
 #define ECABAP_HIGHER_ECABAP_HIGHER_MASK		GENMASK(31, 0)
-#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT		0
 #define CRCBAP_LOWER_CRCBAP_LOWER_MASK			GENMASK(31, 0)
-#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT			0
 #define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK		GENMASK(31, 0)
-#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT		0
 #define CRDB_LOWER_CRDB_LOWER_MASK			GENMASK(31, 0)
-#define CRDB_LOWER_CRDB_LOWER_SHIFT			0
 #define CRDB_HIGHER_CRDB_HIGHER_MASK			GENMASK(31, 0)
-#define CRDB_HIGHER_CRDB_HIGHER_SHIFT			0
 #define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK	GENMASK(31, 0)
-#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT	0
 #define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK	GENMASK(31, 0)
-#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT	0
 #define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK	GENMASK(31, 0)
-#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT	0
 #define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK	GENMASK(31, 0)
-#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT	0
 #define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK	GENMASK(31, 0)
-#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT	0
 #define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK	GENMASK(31, 0)
-#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT	0
 #define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK	GENMASK(31, 0)
-#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT	0
 #define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK	GENMASK(31, 0)
-#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT	0
 
 /* Command Ring Element macros */
 /* No operation command */
@@ -277,9 +237,7 @@ enum mhi_cmd_type {
 
 #define EV_CTX_RESERVED_MASK GENMASK(7, 0)
 #define EV_CTX_INTMODC_MASK GENMASK(15, 8)
-#define EV_CTX_INTMODC_SHIFT 8
 #define EV_CTX_INTMODT_MASK GENMASK(31, 16)
-#define EV_CTX_INTMODT_SHIFT 16
 struct mhi_event_ctxt {
 	__le32 intmod;
 	__le32 ertype;
@@ -292,11 +250,8 @@ struct mhi_event_ctxt {
 };
 
 #define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
-#define CHAN_CTX_CHSTATE_SHIFT 0
 #define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
-#define CHAN_CTX_BRSTMODE_SHIFT 8
 #define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
-#define CHAN_CTX_POLLCFG_SHIFT 10
 #define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
 struct mhi_chan_ctxt {
 	__le32 chcfg;
diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
index 93cb705614c6..b0da7ca4519c 100644
--- a/drivers/bus/mhi/host/boot.c
+++ b/drivers/bus/mhi/host/boot.c
@@ -46,8 +46,7 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
 	sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_RXVECSTATUS_SEQNUM_BMSK);
 
 	mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
-			    BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
-			    sequence_id);
+			    BHIE_RXVECDB_SEQNUM_BMSK, sequence_id);
 
 	dev_dbg(dev, "Address: %p and len: 0x%zx sequence: %u\n",
 		&mhi_buf->dma_addr, mhi_buf->len, sequence_id);
@@ -127,9 +126,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
 
 	while (retry--) {
 		ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
-					 BHIE_RXVECSTATUS_STATUS_BMSK,
-					 BHIE_RXVECSTATUS_STATUS_SHFT,
-					 &rx_status);
+					 BHIE_RXVECSTATUS_STATUS_BMSK, &rx_status);
 		if (ret)
 			return -EIO;
 
@@ -168,7 +165,6 @@ int mhi_download_rddm_image(struct mhi_controller *mhi_cntrl, bool in_panic)
 			   mhi_read_reg_field(mhi_cntrl, base,
 					      BHIE_RXVECSTATUS_OFFS,
 					      BHIE_RXVECSTATUS_STATUS_BMSK,
-					      BHIE_RXVECSTATUS_STATUS_SHFT,
 					      &rx_status) || rx_status,
 			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
 
@@ -203,8 +199,7 @@ static int mhi_fw_load_bhie(struct mhi_controller *mhi_cntrl,
 	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
 
 	mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
-			    BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
-			    sequence_id);
+			    BHIE_TXVECDB_SEQNUM_BMSK, sequence_id);
 	read_unlock_bh(pm_lock);
 
 	/* Wait for the image download to complete */
@@ -213,7 +208,6 @@ static int mhi_fw_load_bhie(struct mhi_controller *mhi_cntrl,
 				 mhi_read_reg_field(mhi_cntrl, base,
 						   BHIE_TXVECSTATUS_OFFS,
 						   BHIE_TXVECSTATUS_STATUS_BMSK,
-						   BHIE_TXVECSTATUS_STATUS_SHFT,
 						   &tx_status) || tx_status,
 				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
 	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
@@ -265,8 +259,7 @@ static int mhi_fw_load_bhi(struct mhi_controller *mhi_cntrl,
 	ret = wait_event_timeout(mhi_cntrl->state_event,
 			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
 			   mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
-					      BHI_STATUS_MASK, BHI_STATUS_SHIFT,
-					      &tx_status) || tx_status,
+					      BHI_STATUS_MASK, &tx_status) || tx_status,
 			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
 	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
 		goto invalid_pm_state;
diff --git a/drivers/bus/mhi/host/debugfs.c b/drivers/bus/mhi/host/debugfs.c
index 399d0db1f1eb..cfec7811dfbb 100644
--- a/drivers/bus/mhi/host/debugfs.c
+++ b/drivers/bus/mhi/host/debugfs.c
@@ -61,9 +61,9 @@ static int mhi_debugfs_events_show(struct seq_file *m, void *d)
 
 		seq_printf(m, "Index: %d intmod count: %lu time: %lu",
 			   i, (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODC_MASK) >>
-			   EV_CTX_INTMODC_SHIFT,
+			   __ffs(EV_CTX_INTMODC_MASK),
 			   (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODT_MASK) >>
-			   EV_CTX_INTMODT_SHIFT);
+			   __ffs(EV_CTX_INTMODT_MASK));
 
 		seq_printf(m, " base: 0x%0llx len: 0x%llx", le64_to_cpu(er_ctxt->rbase),
 			   le64_to_cpu(er_ctxt->rlen));
@@ -107,10 +107,10 @@ static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
 		seq_printf(m,
 			   "%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
 			   mhi_chan->name, mhi_chan->chan, (le32_to_cpu(chan_ctxt->chcfg) &
-			   CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
+			   CHAN_CTX_CHSTATE_MASK) >> __ffs(CHAN_CTX_CHSTATE_MASK),
 			   (le32_to_cpu(chan_ctxt->chcfg) & CHAN_CTX_BRSTMODE_MASK) >>
-			   CHAN_CTX_BRSTMODE_SHIFT, (le32_to_cpu(chan_ctxt->chcfg) &
-			   CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
+			   __ffs(CHAN_CTX_BRSTMODE_MASK), (le32_to_cpu(chan_ctxt->chcfg) &
+			   CHAN_CTX_POLLCFG_MASK) >> __ffs(CHAN_CTX_POLLCFG_MASK));
 
 		seq_printf(m, " type: 0x%x event ring: %u", le32_to_cpu(chan_ctxt->chtype),
 			   le32_to_cpu(chan_ctxt->erindex));
diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
index 0e301f3f305e..05e457d12446 100644
--- a/drivers/bus/mhi/host/init.c
+++ b/drivers/bus/mhi/host/init.c
@@ -4,6 +4,7 @@
  *
  */
 
+#include <linux/bitfield.h>
 #include <linux/debugfs.h>
 #include <linux/device.h>
 #include <linux/dma-direction.h>
@@ -283,11 +284,11 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 
 		tmp = le32_to_cpu(chan_ctxt->chcfg);
 		tmp &= ~CHAN_CTX_CHSTATE_MASK;
-		tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_DISABLED);
 		tmp &= ~CHAN_CTX_BRSTMODE_MASK;
-		tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
+		tmp |= FIELD_PREP(CHAN_CTX_BRSTMODE_MASK, mhi_chan->db_cfg.brstmode);
 		tmp &= ~CHAN_CTX_POLLCFG_MASK;
-		tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
+		tmp |= FIELD_PREP(CHAN_CTX_POLLCFG_MASK, mhi_chan->db_cfg.pollcfg);
 		chan_ctxt->chcfg = cpu_to_le32(tmp);
 
 		chan_ctxt->chtype = cpu_to_le32(mhi_chan->type);
@@ -319,7 +320,7 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
 		tmp = le32_to_cpu(er_ctxt->intmod);
 		tmp &= ~EV_CTX_INTMODC_MASK;
 		tmp &= ~EV_CTX_INTMODT_MASK;
-		tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);
+		tmp |= FIELD_PREP(EV_CTX_INTMODT_MASK, mhi_event->intmod);
 		er_ctxt->intmod = cpu_to_le32(tmp);
 
 		er_ctxt->ertype = cpu_to_le32(MHI_ER_TYPE_VALID);
@@ -425,71 +426,70 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 	struct {
 		u32 offset;
 		u32 mask;
-		u32 shift;
 		u32 val;
 	} reg_info[] = {
 		{
-			CCABAP_HIGHER, U32_MAX, 0,
+			CCABAP_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
 		},
 		{
-			CCABAP_LOWER, U32_MAX, 0,
+			CCABAP_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
 		},
 		{
-			ECABAP_HIGHER, U32_MAX, 0,
+			ECABAP_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
 		},
 		{
-			ECABAP_LOWER, U32_MAX, 0,
+			ECABAP_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
 		},
 		{
-			CRCBAP_HIGHER, U32_MAX, 0,
+			CRCBAP_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
 		},
 		{
-			CRCBAP_LOWER, U32_MAX, 0,
+			CRCBAP_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
 		},
 		{
-			MHICFG, MHICFG_NER_MASK, MHICFG_NER_SHIFT,
+			MHICFG, MHICFG_NER_MASK,
 			mhi_cntrl->total_ev_rings,
 		},
 		{
-			MHICFG, MHICFG_NHWER_MASK, MHICFG_NHWER_SHIFT,
+			MHICFG, MHICFG_NHWER_MASK,
 			mhi_cntrl->hw_ev_rings,
 		},
 		{
-			MHICTRLBASE_HIGHER, U32_MAX, 0,
+			MHICTRLBASE_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->iova_start),
 		},
 		{
-			MHICTRLBASE_LOWER, U32_MAX, 0,
+			MHICTRLBASE_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->iova_start),
 		},
 		{
-			MHIDATABASE_HIGHER, U32_MAX, 0,
+			MHIDATABASE_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->iova_start),
 		},
 		{
-			MHIDATABASE_LOWER, U32_MAX, 0,
+			MHIDATABASE_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->iova_start),
 		},
 		{
-			MHICTRLLIMIT_HIGHER, U32_MAX, 0,
+			MHICTRLLIMIT_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->iova_stop),
 		},
 		{
-			MHICTRLLIMIT_LOWER, U32_MAX, 0,
+			MHICTRLLIMIT_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->iova_stop),
 		},
 		{
-			MHIDATALIMIT_HIGHER, U32_MAX, 0,
+			MHIDATALIMIT_HIGHER, U32_MAX,
 			upper_32_bits(mhi_cntrl->iova_stop),
 		},
 		{
-			MHIDATALIMIT_LOWER, U32_MAX, 0,
+			MHIDATALIMIT_LOWER, U32_MAX,
 			lower_32_bits(mhi_cntrl->iova_stop),
 		},
 		{ 0, 0, 0 }
@@ -498,8 +498,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 	dev_dbg(dev, "Initializing MHI registers\n");
 
 	/* Read channel db offset */
-	ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK,
-				 CHDBOFF_CHDBOFF_SHIFT, &val);
+	ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK, &val);
 	if (ret) {
 		dev_err(dev, "Unable to read CHDBOFF register\n");
 		return -EIO;
@@ -515,8 +514,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 		mhi_chan->tre_ring.db_addr = base + val;
 
 	/* Read event ring db offset */
-	ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK,
-				 ERDBOFF_ERDBOFF_SHIFT, &val);
+	ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK, &val);
 	if (ret) {
 		dev_err(dev, "Unable to read ERDBOFF register\n");
 		return -EIO;
@@ -537,8 +535,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
 	/* Write to MMIO registers */
 	for (i = 0; reg_info[i].offset; i++)
 		mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
-				    reg_info[i].mask, reg_info[i].shift,
-				    reg_info[i].val);
+				    reg_info[i].mask, reg_info[i].val);
 
 	return 0;
 }
@@ -571,7 +568,7 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
 
 	tmp = le32_to_cpu(chan_ctxt->chcfg);
 	tmp &= ~CHAN_CTX_CHSTATE_MASK;
-	tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+	tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_DISABLED);
 	chan_ctxt->chcfg = cpu_to_le32(tmp);
 
 	/* Update to all cores */
@@ -608,7 +605,7 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
 
 	tmp = le32_to_cpu(chan_ctxt->chcfg);
 	tmp &= ~CHAN_CTX_CHSTATE_MASK;
-	tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
+	tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_ENABLED);
 	chan_ctxt->chcfg = cpu_to_le32(tmp);
 
 	chan_ctxt->rbase = cpu_to_le64(tre_ring->iommu_base);
@@ -952,14 +949,10 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
 	if (ret)
 		goto err_destroy_wq;
 
-	mhi_cntrl->family_number = (soc_info & SOC_HW_VERSION_FAM_NUM_BMSK) >>
-					SOC_HW_VERSION_FAM_NUM_SHFT;
-	mhi_cntrl->device_number = (soc_info & SOC_HW_VERSION_DEV_NUM_BMSK) >>
-					SOC_HW_VERSION_DEV_NUM_SHFT;
-	mhi_cntrl->major_version = (soc_info & SOC_HW_VERSION_MAJOR_VER_BMSK) >>
-					SOC_HW_VERSION_MAJOR_VER_SHFT;
-	mhi_cntrl->minor_version = (soc_info & SOC_HW_VERSION_MINOR_VER_BMSK) >>
-					SOC_HW_VERSION_MINOR_VER_SHFT;
+	mhi_cntrl->family_number = FIELD_GET(SOC_HW_VERSION_FAM_NUM_BMSK, soc_info);
+	mhi_cntrl->device_number = FIELD_GET(SOC_HW_VERSION_DEV_NUM_BMSK, soc_info);
+	mhi_cntrl->major_version = FIELD_GET(SOC_HW_VERSION_MAJOR_VER_BMSK, soc_info);
+	mhi_cntrl->minor_version = FIELD_GET(SOC_HW_VERSION_MINOR_VER_BMSK, soc_info);
 
 	mhi_cntrl->index = ida_alloc(&mhi_controller_ida, GFP_KERNEL);
 	if (mhi_cntrl->index < 0) {
diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index 762055a6ec9f..21381781d7c5 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -82,13 +82,9 @@ extern struct bus_type mhi_bus_type;
 
 #define SOC_HW_VERSION_OFFS		0x224
 #define SOC_HW_VERSION_FAM_NUM_BMSK	GENMASK(31, 28)
-#define SOC_HW_VERSION_FAM_NUM_SHFT	28
 #define SOC_HW_VERSION_DEV_NUM_BMSK	GENMASK(27, 16)
-#define SOC_HW_VERSION_DEV_NUM_SHFT	16
 #define SOC_HW_VERSION_MAJOR_VER_BMSK	GENMASK(15, 8)
-#define SOC_HW_VERSION_MAJOR_VER_SHFT	8
 #define SOC_HW_VERSION_MINOR_VER_BMSK	GENMASK(7, 0)
-#define SOC_HW_VERSION_MINOR_VER_SHFT	0
 
 struct mhi_ctxt {
 	struct mhi_event_ctxt *er_ctxt;
@@ -393,14 +389,14 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
 			      void __iomem *base, u32 offset, u32 *out);
 int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
 				    void __iomem *base, u32 offset, u32 mask,
-				    u32 shift, u32 *out);
+				    u32 *out);
 int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
 				    void __iomem *base, u32 offset, u32 mask,
-				    u32 shift, u32 val, u32 delayus);
+				    u32 val, u32 delayus);
 void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
 		   u32 offset, u32 val);
 void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
-			 u32 offset, u32 mask, u32 shift, u32 val);
+			 u32 offset, u32 mask, u32 val);
 void mhi_ring_er_db(struct mhi_event *mhi_event);
 void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
 		  dma_addr_t db_val);
diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index e436c2993d97..02ac5faf9178 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -24,7 +24,7 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
 
 int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
 				    void __iomem *base, u32 offset,
-				    u32 mask, u32 shift, u32 *out)
+				    u32 mask, u32 *out)
 {
 	u32 tmp;
 	int ret;
@@ -33,21 +33,20 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
 	if (ret)
 		return ret;
 
-	*out = (tmp & mask) >> shift;
+	*out = (tmp & mask) >> __ffs(mask);
 
 	return 0;
 }
 
 int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
 				    void __iomem *base, u32 offset,
-				    u32 mask, u32 shift, u32 val, u32 delayus)
+				    u32 mask, u32 val, u32 delayus)
 {
 	int ret;
 	u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
 
 	while (retry--) {
-		ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, shift,
-					 &out);
+		ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, &out);
 		if (ret)
 			return ret;
 
@@ -67,7 +66,7 @@ void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
 }
 
 void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
-			 u32 offset, u32 mask, u32 shift, u32 val)
+			 u32 offset, u32 mask, u32 val)
 {
 	int ret;
 	u32 tmp;
@@ -77,7 +76,7 @@ void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
 		return;
 
 	tmp &= ~mask;
-	tmp |= (val << shift);
+	tmp |= (val << __ffs(mask));
 	mhi_write_reg(mhi_cntrl, base, offset, tmp);
 }
 
@@ -159,8 +158,7 @@ enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl)
 {
 	u32 state;
 	int ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
-				     MHISTATUS_MHISTATE_MASK,
-				     MHISTATUS_MHISTATE_SHIFT, &state);
+				     MHISTATUS_MHISTATE_MASK, &state);
 	return ret ? MHI_STATE_MAX : state;
 }
 EXPORT_SYMBOL_GPL(mhi_get_mhi_state);
diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
index 088ade0f3e0b..3d90b8ecd3d9 100644
--- a/drivers/bus/mhi/host/pm.c
+++ b/drivers/bus/mhi/host/pm.c
@@ -131,11 +131,10 @@ void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state)
 {
 	if (state == MHI_STATE_RESET) {
 		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
-				    MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 1);
+				    MHICTRL_RESET_MASK, 1);
 	} else {
 		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
-				    MHICTRL_MHISTATE_MASK,
-				    MHICTRL_MHISTATE_SHIFT, state);
+				    MHICTRL_MHISTATE_MASK, state);
 	}
 }
 
@@ -167,16 +166,14 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
 
 	/* Wait for RESET to be cleared and READY bit to be set by the device */
 	ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
-				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
-				 interval_us);
+				 MHICTRL_RESET_MASK, 0, interval_us);
 	if (ret) {
 		dev_err(dev, "Device failed to clear MHI Reset\n");
 		return ret;
 	}
 
 	ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
-				 MHISTATUS_READY_MASK, MHISTATUS_READY_SHIFT, 1,
-				 interval_us);
+				 MHISTATUS_READY_MASK, 1, interval_us);
 	if (ret) {
 		dev_err(dev, "Device failed to enter MHI Ready\n");
 		return ret;
@@ -470,8 +467,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
 
 		/* Wait for the reset bit to be cleared by the device */
 		ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
-				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
-				 25000);
+				 MHICTRL_RESET_MASK, 0, 25000);
 		if (ret)
 			dev_err(dev, "Device failed to clear MHI Reset\n");
 
@@ -602,7 +598,6 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
 							    mhi_cntrl->regs,
 							    MHICTRL,
 							    MHICTRL_RESET_MASK,
-							    MHICTRL_RESET_SHIFT,
 							    &in_reset) ||
 					!in_reset, timeout);
 		if (!ret || in_reset) {
@@ -1093,8 +1088,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
 	if (state == MHI_STATE_SYS_ERR) {
 		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
 		ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
-				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
-				 interval_us);
+				 MHICTRL_RESET_MASK, 0, interval_us);
 		if (ret) {
 			dev_info(dev, "Failed to reset MHI due to syserr state\n");
 			goto error_exit;
-- 
2.25.1



* [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (6 preceding siblings ...)
  2022-02-12 18:20 ` [PATCH v3 07/25] bus: mhi: Get rid of SHIFT macros and use bitfield operations Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15  1:04   ` Hemant Kumar
  2022-02-15 20:02   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers Manivannan Sadhasivam
                   ` (17 subsequent siblings)
  25 siblings, 2 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

This commit adds support for registering MHI endpoint controller drivers
with the MHI endpoint stack. MHI endpoint controller drivers manage the
interaction with host machines such as x86. They also act as the MHI
endpoint bus master, in charge of managing the physical link between the
host and the endpoint device.

The endpoint controller driver encapsulates all information about the
underlying physical bus, such as PCIe. The registration process involves
parsing the channel configuration and allocating an MHI EP device.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/Kconfig       |   1 +
 drivers/bus/mhi/Makefile      |   3 +
 drivers/bus/mhi/ep/Kconfig    |  10 ++
 drivers/bus/mhi/ep/Makefile   |   2 +
 drivers/bus/mhi/ep/internal.h | 160 +++++++++++++++++++++++
 drivers/bus/mhi/ep/main.c     | 234 ++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h        | 143 +++++++++++++++++++++
 7 files changed, 553 insertions(+)
 create mode 100644 drivers/bus/mhi/ep/Kconfig
 create mode 100644 drivers/bus/mhi/ep/Makefile
 create mode 100644 drivers/bus/mhi/ep/internal.h
 create mode 100644 drivers/bus/mhi/ep/main.c
 create mode 100644 include/linux/mhi_ep.h

diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
index 4748df7f9cd5..b39a11e6c624 100644
--- a/drivers/bus/mhi/Kconfig
+++ b/drivers/bus/mhi/Kconfig
@@ -6,3 +6,4 @@
 #
 
 source "drivers/bus/mhi/host/Kconfig"
+source "drivers/bus/mhi/ep/Kconfig"
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
index 5f5708a249f5..46981331b38f 100644
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -1,2 +1,5 @@
 # Host MHI stack
 obj-y += host/
+
+# Endpoint MHI stack
+obj-y += ep/
diff --git a/drivers/bus/mhi/ep/Kconfig b/drivers/bus/mhi/ep/Kconfig
new file mode 100644
index 000000000000..229c71397b30
--- /dev/null
+++ b/drivers/bus/mhi/ep/Kconfig
@@ -0,0 +1,10 @@
+config MHI_BUS_EP
+	tristate "Modem Host Interface (MHI) bus Endpoint implementation"
+	help
+	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
+	  communication protocol used by the host processors to control
+	  and communicate with modem devices over a high speed peripheral
+	  bus or shared memory.
+
+	  MHI_BUS_EP implements the MHI protocol for the endpoint devices
+	  like SDX55 modem connected to the host machine over PCIe.
diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
new file mode 100644
index 000000000000..64e29252b608
--- /dev/null
+++ b/drivers/bus/mhi/ep/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
+mhi_ep-y := main.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
new file mode 100644
index 000000000000..e313a2546664
--- /dev/null
+++ b/drivers/bus/mhi/ep/internal.h
@@ -0,0 +1,160 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2021, Linaro Ltd.
+ *
+ */
+
+#ifndef _MHI_EP_INTERNAL_
+#define _MHI_EP_INTERNAL_
+
+#include <linux/bitfield.h>
+
+#include "../common.h"
+
+extern struct bus_type mhi_ep_bus_type;
+
+#define MHI_REG_OFFSET				0x100
+#define BHI_REG_OFFSET				0x200
+
+/* MHI registers */
+#define MHIREGLEN				(MHI_REG_OFFSET + REG_MHIREGLEN)
+#define MHIVER					(MHI_REG_OFFSET + REG_MHIVER)
+#define MHICFG					(MHI_REG_OFFSET + REG_MHICFG)
+#define CHDBOFF					(MHI_REG_OFFSET + REG_CHDBOFF)
+#define ERDBOFF					(MHI_REG_OFFSET + REG_ERDBOFF)
+#define BHIOFF					(MHI_REG_OFFSET + REG_BHIOFF)
+#define BHIEOFF					(MHI_REG_OFFSET + REG_BHIEOFF)
+#define DEBUGOFF				(MHI_REG_OFFSET + REG_DEBUGOFF)
+#define MHICTRL					(MHI_REG_OFFSET + REG_MHICTRL)
+#define MHISTATUS				(MHI_REG_OFFSET + REG_MHISTATUS)
+#define CCABAP_LOWER				(MHI_REG_OFFSET + REG_CCABAP_LOWER)
+#define CCABAP_HIGHER				(MHI_REG_OFFSET + REG_CCABAP_HIGHER)
+#define ECABAP_LOWER				(MHI_REG_OFFSET + REG_ECABAP_LOWER)
+#define ECABAP_HIGHER				(MHI_REG_OFFSET + REG_ECABAP_HIGHER)
+#define CRCBAP_LOWER				(MHI_REG_OFFSET + REG_CRCBAP_LOWER)
+#define CRCBAP_HIGHER				(MHI_REG_OFFSET + REG_CRCBAP_HIGHER)
+#define CRDB_LOWER				(MHI_REG_OFFSET + REG_CRDB_LOWER)
+#define CRDB_HIGHER				(MHI_REG_OFFSET + REG_CRDB_HIGHER)
+#define MHICTRLBASE_LOWER			(MHI_REG_OFFSET + REG_MHICTRLBASE_LOWER)
+#define MHICTRLBASE_HIGHER			(MHI_REG_OFFSET + REG_MHICTRLBASE_HIGHER)
+#define MHICTRLLIMIT_LOWER			(MHI_REG_OFFSET + REG_MHICTRLLIMIT_LOWER)
+#define MHICTRLLIMIT_HIGHER			(MHI_REG_OFFSET + REG_MHICTRLLIMIT_HIGHER)
+#define MHIDATABASE_LOWER			(MHI_REG_OFFSET + REG_MHIDATABASE_LOWER)
+#define MHIDATABASE_HIGHER			(MHI_REG_OFFSET + REG_MHIDATABASE_HIGHER)
+#define MHIDATALIMIT_LOWER			(MHI_REG_OFFSET + REG_MHIDATALIMIT_LOWER)
+#define MHIDATALIMIT_HIGHER			(MHI_REG_OFFSET + REG_MHIDATALIMIT_HIGHER)
+
+/* MHI BHI registers */
+#define BHI_IMGTXDB				(BHI_REG_OFFSET + REG_BHI_IMGTXDB)
+#define BHI_EXECENV				(BHI_REG_OFFSET + REG_BHI_EXECENV)
+#define BHI_INTVEC				(BHI_REG_OFFSET + REG_BHI_INTVEC)
+
+/* MHI Doorbell registers */
+#define CHDB_LOWER_n(n)				(0x400 + 0x8 * (n))
+#define CHDB_HIGHER_n(n)			(0x404 + 0x8 * (n))
+#define ERDB_LOWER_n(n)				(0x800 + 0x8 * (n))
+#define ERDB_HIGHER_n(n)			(0x804 + 0x8 * (n))
+
+#define MHI_CTRL_INT_STATUS_A7			0x4
+#define MHI_CTRL_INT_STATUS_A7_MSK		BIT(0)
+#define MHI_CTRL_INT_STATUS_CRDB_MSK		BIT(1)
+#define MHI_CHDB_INT_STATUS_A7_n(n)		(0x28 + 0x4 * (n))
+#define MHI_ERDB_INT_STATUS_A7_n(n)		(0x38 + 0x4 * (n))
+
+#define MHI_CTRL_INT_CLEAR_A7			0x4c
+#define MHI_CTRL_INT_MMIO_WR_CLEAR		BIT(2)
+#define MHI_CTRL_INT_CRDB_CLEAR			BIT(1)
+#define MHI_CTRL_INT_CRDB_MHICTRL_CLEAR		BIT(0)
+
+#define MHI_CHDB_INT_CLEAR_A7_n(n)		(0x70 + 0x4 * (n))
+#define MHI_CHDB_INT_CLEAR_A7_n_CLEAR_ALL	GENMASK(31, 0)
+#define MHI_ERDB_INT_CLEAR_A7_n(n)		(0x80 + 0x4 * (n))
+#define MHI_ERDB_INT_CLEAR_A7_n_CLEAR_ALL	GENMASK(31, 0)
+
+/*
+ * Unlike the usual "masking" convention, writing "1" to a bit in this register
+ * enables the interrupt and writing "0" disables it.
+ */
+#define MHI_CTRL_INT_MASK_A7			0x94
+#define MHI_CTRL_INT_MASK_A7_MASK		GENMASK(1, 0)
+#define MHI_CTRL_MHICTRL_MASK			BIT(0)
+#define MHI_CTRL_CRDB_MASK			BIT(1)
+
+#define MHI_CHDB_INT_MASK_A7_n(n)		(0xb8 + 0x4 * (n))
+#define MHI_CHDB_INT_MASK_A7_n_EN_ALL		GENMASK(31, 0)
+#define MHI_ERDB_INT_MASK_A7_n(n)		(0xc8 + 0x4 * (n))
+#define MHI_ERDB_INT_MASK_A7_n_EN_ALL		GENMASK(31, 0)
+
+#define NR_OF_CMD_RINGS				1
+#define MHI_MASK_ROWS_CH_EV_DB			4
+#define MHI_MASK_CH_EV_LEN			32
+
+/* Generic context */
+struct mhi_generic_ctx {
+	__u32 reserved0;
+	__u32 reserved1;
+	__u32 reserved2;
+
+	__u64 rbase __packed __aligned(4);
+	__u64 rlen __packed __aligned(4);
+	__u64 rp __packed __aligned(4);
+	__u64 wp __packed __aligned(4);
+};
+
+enum mhi_ep_ring_type {
+	RING_TYPE_CMD = 0,
+	RING_TYPE_ER,
+	RING_TYPE_CH,
+};
+
+struct mhi_ep_ring_element {
+	u64 ptr;
+	u32 dword[2];
+};
+
+/* Ring element */
+union mhi_ep_ring_ctx {
+	struct mhi_cmd_ctxt cmd;
+	struct mhi_event_ctxt ev;
+	struct mhi_chan_ctxt ch;
+	struct mhi_generic_ctx generic;
+};
+
+struct mhi_ep_ring {
+	struct mhi_ep_cntrl *mhi_cntrl;
+	int (*ring_cb)(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
+	union mhi_ep_ring_ctx *ring_ctx;
+	struct mhi_ep_ring_element *ring_cache;
+	enum mhi_ep_ring_type type;
+	size_t rd_offset;
+	size_t wr_offset;
+	size_t ring_size;
+	u32 db_offset_h;
+	u32 db_offset_l;
+	u32 ch_id;
+};
+
+struct mhi_ep_cmd {
+	struct mhi_ep_ring ring;
+};
+
+struct mhi_ep_event {
+	struct mhi_ep_ring ring;
+};
+
+struct mhi_ep_chan {
+	char *name;
+	struct mhi_ep_device *mhi_dev;
+	struct mhi_ep_ring ring;
+	struct mutex lock;
+	void (*xfer_cb)(struct mhi_ep_device *mhi_dev, struct mhi_result *result);
+	enum mhi_ch_state state;
+	enum dma_data_direction dir;
+	u64 tre_loc;
+	u32 tre_size;
+	u32 tre_bytes_left;
+	u32 chan;
+	bool skip_td;
+};
+
+#endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
new file mode 100644
index 000000000000..b006011d025d
--- /dev/null
+++ b/drivers/bus/mhi/ep/main.c
@@ -0,0 +1,234 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * MHI Bus Endpoint stack
+ *
+ * Copyright (C) 2021 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/dma-direction.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/mhi_ep.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include "internal.h"
+
+static DEFINE_IDA(mhi_ep_cntrl_ida);
+
+static void mhi_ep_release_device(struct device *dev)
+{
+	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+
+	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+		mhi_dev->mhi_cntrl->mhi_dev = NULL;
+
+	/*
+	 * We need to set the mhi_chan->mhi_dev to NULL here since the MHI
+	 * devices for the channels will only get created during start
+	 * channel if the mhi_dev associated with it is NULL.
+	 */
+	if (mhi_dev->ul_chan)
+		mhi_dev->ul_chan->mhi_dev = NULL;
+
+	if (mhi_dev->dl_chan)
+		mhi_dev->dl_chan->mhi_dev = NULL;
+
+	kfree(mhi_dev);
+}
+
+static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl,
+						 enum mhi_device_type dev_type)
+{
+	struct mhi_ep_device *mhi_dev;
+	struct device *dev;
+
+	mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
+	if (!mhi_dev)
+		return ERR_PTR(-ENOMEM);
+
+	dev = &mhi_dev->dev;
+	device_initialize(dev);
+	dev->bus = &mhi_ep_bus_type;
+	dev->release = mhi_ep_release_device;
+
+	if (dev_type == MHI_DEVICE_CONTROLLER)
+		/* for MHI controller device, parent is the bus device (e.g. PCI EPF) */
+		dev->parent = mhi_cntrl->cntrl_dev;
+	else
+		/* for MHI client devices, parent is the MHI controller device */
+		dev->parent = &mhi_cntrl->mhi_dev->dev;
+
+	mhi_dev->mhi_cntrl = mhi_cntrl;
+	mhi_dev->dev_type = dev_type;
+
+	return mhi_dev;
+}
+
+static int parse_ch_cfg(struct mhi_ep_cntrl *mhi_cntrl,
+			const struct mhi_ep_cntrl_config *config)
+{
+	const struct mhi_ep_channel_config *ch_cfg;
+	struct device *dev = mhi_cntrl->cntrl_dev;
+	u32 chan, i;
+	int ret = -EINVAL;
+
+	mhi_cntrl->max_chan = config->max_channels;
+
+	/*
+	 * Allocate max_channels supported by the MHI endpoint and populate
+	 * only the defined channels
+	 */
+	mhi_cntrl->mhi_chan = kcalloc(mhi_cntrl->max_chan, sizeof(*mhi_cntrl->mhi_chan),
+				      GFP_KERNEL);
+	if (!mhi_cntrl->mhi_chan)
+		return -ENOMEM;
+
+	for (i = 0; i < config->num_channels; i++) {
+		struct mhi_ep_chan *mhi_chan;
+
+		ch_cfg = &config->ch_cfg[i];
+
+		chan = ch_cfg->num;
+		if (chan >= mhi_cntrl->max_chan) {
+			dev_err(dev, "Channel %d not available\n", chan);
+			goto error_chan_cfg;
+		}
+
+		/* Bi-directional and directionless channels are not supported */
+		if (ch_cfg->dir == DMA_BIDIRECTIONAL || ch_cfg->dir == DMA_NONE) {
+			dev_err(dev, "Invalid channel configuration\n");
+			goto error_chan_cfg;
+		}
+
+		mhi_chan = &mhi_cntrl->mhi_chan[chan];
+		mhi_chan->name = ch_cfg->name;
+		mhi_chan->chan = chan;
+		mhi_chan->dir = ch_cfg->dir;
+		mutex_init(&mhi_chan->lock);
+	}
+
+	return 0;
+
+error_chan_cfg:
+	kfree(mhi_cntrl->mhi_chan);
+
+	return ret;
+}
+
+/*
+ * Allocate channel and command rings here. Event rings will be allocated
+ * in mhi_ep_power_up() as the config comes from the host.
+ */
+int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
+				const struct mhi_ep_cntrl_config *config)
+{
+	struct mhi_ep_device *mhi_dev;
+	int ret;
+
+	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
+		return -EINVAL;
+
+	ret = parse_ch_cfg(mhi_cntrl, config);
+	if (ret)
+		return ret;
+
+	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS, sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
+	if (!mhi_cntrl->mhi_cmd) {
+		ret = -ENOMEM;
+		goto err_free_ch;
+	}
+
+	/* Set controller index */
+	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
+	if (mhi_cntrl->index < 0) {
+		ret = mhi_cntrl->index;
+		goto err_free_cmd;
+	}
+
+	/* Allocate the controller device */
+	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_CONTROLLER);
+	if (IS_ERR(mhi_dev)) {
+		dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
+		ret = PTR_ERR(mhi_dev);
+		goto err_ida_free;
+	}
+
+	dev_set_name(&mhi_dev->dev, "mhi_ep%d", mhi_cntrl->index);
+	mhi_dev->name = dev_name(&mhi_dev->dev);
+
+	ret = device_add(&mhi_dev->dev);
+	if (ret)
+		goto err_put_dev;
+
+	mhi_cntrl->mhi_dev = mhi_dev;
+
+	dev_dbg(&mhi_dev->dev, "MHI EP Controller registered\n");
+
+	return 0;
+
+err_put_dev:
+	put_device(&mhi_dev->dev);
+err_ida_free:
+	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+err_free_cmd:
+	kfree(mhi_cntrl->mhi_cmd);
+err_free_ch:
+	kfree(mhi_cntrl->mhi_chan);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
+
+void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
+
+	kfree(mhi_cntrl->mhi_cmd);
+	kfree(mhi_cntrl->mhi_chan);
+
+	device_del(&mhi_dev->dev);
+	put_device(&mhi_dev->dev);
+
+	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
+
+static int mhi_ep_match(struct device *dev, struct device_driver *drv)
+{
+	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+
+	/*
+	 * If the device is a controller type then there is no client driver
+	 * associated with it
+	 */
+	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+		return 0;
+
+	return 0;
+};
+
+struct bus_type mhi_ep_bus_type = {
+	.name = "mhi_ep",
+	.dev_name = "mhi_ep",
+	.match = mhi_ep_match,
+};
+
+static int __init mhi_ep_init(void)
+{
+	return bus_register(&mhi_ep_bus_type);
+}
+
+static void __exit mhi_ep_exit(void)
+{
+	bus_unregister(&mhi_ep_bus_type);
+}
+
+postcore_initcall(mhi_ep_init);
+module_exit(mhi_ep_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("MHI Bus Endpoint stack");
+MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>");
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
new file mode 100644
index 000000000000..20238e9df1b3
--- /dev/null
+++ b/include/linux/mhi_ep.h
@@ -0,0 +1,143 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2021, Linaro Ltd.
+ *
+ */
+#ifndef _MHI_EP_H_
+#define _MHI_EP_H_
+
+#include <linux/dma-direction.h>
+#include <linux/mhi.h>
+
+#define MHI_EP_DEFAULT_MTU 0x8000
+
+/**
+ * struct mhi_ep_channel_config - Channel configuration structure for controller
+ * @name: The name of this channel
+ * @num: The number assigned to this channel
+ * @num_elements: The number of elements that can be queued to this channel
+ * @dir: Direction that data may flow on this channel
+ */
+struct mhi_ep_channel_config {
+	char *name;
+	u32 num;
+	u32 num_elements;
+	enum dma_data_direction dir;
+};
+
+/**
+ * struct mhi_ep_cntrl_config - MHI Endpoint controller configuration
+ * @max_channels: Maximum number of channels supported
+ * @num_channels: Number of channels defined in @ch_cfg
+ * @ch_cfg: Array of defined channels
+ * @mhi_version: MHI spec version supported by the controller
+ */
+struct mhi_ep_cntrl_config {
+	u32 max_channels;
+	u32 num_channels;
+	const struct mhi_ep_channel_config *ch_cfg;
+	u32 mhi_version;
+};
+
+/**
+ * struct mhi_ep_db_info - MHI Endpoint doorbell info
+ * @mask: Mask of the doorbell interrupt
+ * @status: Status of the doorbell interrupt
+ */
+struct mhi_ep_db_info {
+	u32 mask;
+	u32 status;
+};
+
+/**
+ * struct mhi_ep_cntrl - MHI Endpoint controller structure
+ * @cntrl_dev: Pointer to the struct device of physical bus acting as the MHI
+ *             Endpoint controller
+ * @mhi_dev: MHI Endpoint device instance for the controller
+ * @mmio: MMIO region containing the MHI registers
+ * @mhi_chan: Points to the channel configuration table
+ * @mhi_event: Points to the event ring configurations table
+ * @mhi_cmd: Points to the command ring configurations table
+ * @sm: MHI Endpoint state machine
+ * @raise_irq: CB function for raising IRQ to the host
+ * @alloc_addr: CB function for allocating memory in endpoint for storing host context
+ * @map_addr: CB function for mapping host context to endpoint
+ * @free_addr: CB function to free the allocated memory in endpoint for storing host context
+ * @unmap_addr: CB function to unmap the host context in endpoint
+ * @read_from_host: CB function for reading from host memory from endpoint
+ * @write_to_host: CB function for writing to host memory from endpoint
+ * @mhi_state: MHI Endpoint state
+ * @max_chan: Maximum channels supported by the endpoint controller
+ * @mru: MRU (Maximum Receive Unit) value of the endpoint controller
+ * @index: MHI Endpoint controller index
+ */
+struct mhi_ep_cntrl {
+	struct device *cntrl_dev;
+	struct mhi_ep_device *mhi_dev;
+	void __iomem *mmio;
+
+	struct mhi_ep_chan *mhi_chan;
+	struct mhi_ep_event *mhi_event;
+	struct mhi_ep_cmd *mhi_cmd;
+	struct mhi_ep_sm *sm;
+
+	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
+	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,
+		       size_t size);
+	int (*map_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr, u64 pci_addr,
+			size_t size);
+	void (*free_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr,
+			  void __iomem *virt_addr, size_t size);
+	void (*unmap_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr);
+	int (*read_from_host)(struct mhi_ep_cntrl *mhi_cntrl, u64 from, void __iomem *to,
+			      size_t size);
+	int (*write_to_host)(struct mhi_ep_cntrl *mhi_cntrl, void __iomem *from, u64 to,
+			     size_t size);
+
+	enum mhi_state mhi_state;
+
+	u32 max_chan;
+	u32 mru;
+	int index;
+};
+
+/**
+ * struct mhi_ep_device - Structure representing an MHI Endpoint device that binds
+ *                     to channels or is associated with controllers
+ * @dev: Driver model device node for the MHI Endpoint device
+ * @mhi_cntrl: Controller the device belongs to
+ * @id: Pointer to MHI Endpoint device ID struct
+ * @name: Name of the associated MHI Endpoint device
+ * @ul_chan: UL channel for the device
+ * @dl_chan: DL channel for the device
+ * @dev_type: MHI device type
+ */
+struct mhi_ep_device {
+	struct device dev;
+	struct mhi_ep_cntrl *mhi_cntrl;
+	const struct mhi_device_id *id;
+	const char *name;
+	struct mhi_ep_chan *ul_chan;
+	struct mhi_ep_chan *dl_chan;
+	enum mhi_device_type dev_type;
+};
+
+#define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
+
+/**
+ * mhi_ep_register_controller - Register MHI Endpoint controller
+ * @mhi_cntrl: MHI Endpoint controller to register
+ * @config: Configuration to use for the controller
+ *
+ * Return: 0 if controller registrations succeeds, a negative error code otherwise.
+ */
+int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
+			       const struct mhi_ep_cntrl_config *config);
+
+/**
+ * mhi_ep_unregister_controller - Unregister MHI Endpoint controller
+ * @mhi_cntrl: MHI Endpoint controller to unregister
+ */
+void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
+
+#endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (7 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-12 18:32   ` Manivannan Sadhasivam
                     ` (2 more replies)
  2022-02-12 18:21 ` [PATCH v3 10/25] bus: mhi: ep: Add support for creating and destroying MHI EP devices Manivannan Sadhasivam
                   ` (16 subsequent siblings)
  25 siblings, 3 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

This commit adds support for registering MHI endpoint client drivers
with the MHI endpoint stack. MHI endpoint client drivers bind to one
or more MHI endpoint devices in order to send and receive the upper-layer
protocol packets like IP packets, modem control messages, and diagnostic
messages over the MHI bus.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 86 +++++++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h    | 53 ++++++++++++++++++++++++
 2 files changed, 139 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index b006011d025d..f66404181972 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -196,9 +196,89 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
 }
 EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
 
+static int mhi_ep_driver_probe(struct device *dev)
+{
+	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
+	struct mhi_ep_chan *ul_chan = mhi_dev->ul_chan;
+	struct mhi_ep_chan *dl_chan = mhi_dev->dl_chan;
+
+	/* Client drivers should have callbacks for both channels */
+	if (!mhi_drv->ul_xfer_cb || !mhi_drv->dl_xfer_cb)
+		return -EINVAL;
+
+	ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
+	dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
+
+	return mhi_drv->probe(mhi_dev, mhi_dev->id);
+}
+
+static int mhi_ep_driver_remove(struct device *dev)
+{
+	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
+	struct mhi_result result = {};
+	struct mhi_ep_chan *mhi_chan;
+	int dir;
+
+	/* Skip if it is a controller device */
+	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+		return 0;
+
+	/* Disconnect the channels associated with the driver */
+	for (dir = 0; dir < 2; dir++) {
+		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+		if (!mhi_chan)
+			continue;
+
+		mutex_lock(&mhi_chan->lock);
+		/* Send channel disconnect status to the client driver */
+		if (mhi_chan->xfer_cb) {
+			result.transaction_status = -ENOTCONN;
+			result.bytes_xferd = 0;
+			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+		}
+
+		/* Set channel state to DISABLED */
+		mhi_chan->state = MHI_CH_STATE_DISABLED;
+		mhi_chan->xfer_cb = NULL;
+		mutex_unlock(&mhi_chan->lock);
+	}
+
+	/* Remove the client driver now */
+	mhi_drv->remove(mhi_dev);
+
+	return 0;
+}
+
+int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner)
+{
+	struct device_driver *driver = &mhi_drv->driver;
+
+	if (!mhi_drv->probe || !mhi_drv->remove)
+		return -EINVAL;
+
+	driver->bus = &mhi_ep_bus_type;
+	driver->owner = owner;
+	driver->probe = mhi_ep_driver_probe;
+	driver->remove = mhi_ep_driver_remove;
+
+	return driver_register(driver);
+}
+EXPORT_SYMBOL_GPL(__mhi_ep_driver_register);
+
+void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv)
+{
+	driver_unregister(&mhi_drv->driver);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_driver_unregister);
+
 static int mhi_ep_match(struct device *dev, struct device_driver *drv)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(drv);
+	const struct mhi_device_id *id;
 
 	/*
 	 * If the device is a controller type then there is no client driver
@@ -207,6 +287,12 @@ static int mhi_ep_match(struct device *dev, struct device_driver *drv)
 	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
 		return 0;
 
+	for (id = mhi_drv->id_table; id->chan[0]; id++)
+		if (!strcmp(mhi_dev->name, id->chan)) {
+			mhi_dev->id = id;
+			return 1;
+		}
+
 	return 0;
 };
 
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 20238e9df1b3..da865f9d3646 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -122,7 +122,60 @@ struct mhi_ep_device {
 	enum mhi_device_type dev_type;
 };
 
+/**
+ * struct mhi_ep_driver - Structure representing an MHI Endpoint client driver
+ * @id_table: Pointer to MHI Endpoint device ID table
+ * @driver: Device driver model driver
+ * @probe: CB function for client driver probe function
+ * @remove: CB function for client driver remove function
+ * @ul_xfer_cb: CB function for UL data transfer
+ * @dl_xfer_cb: CB function for DL data transfer
+ */
+struct mhi_ep_driver {
+	const struct mhi_device_id *id_table;
+	struct device_driver driver;
+	int (*probe)(struct mhi_ep_device *mhi_ep,
+		     const struct mhi_device_id *id);
+	void (*remove)(struct mhi_ep_device *mhi_ep);
+	void (*ul_xfer_cb)(struct mhi_ep_device *mhi_dev,
+			   struct mhi_result *result);
+	void (*dl_xfer_cb)(struct mhi_ep_device *mhi_dev,
+			   struct mhi_result *result);
+};
+
 #define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
+#define to_mhi_ep_driver(drv) container_of(drv, struct mhi_ep_driver, driver)
+
+/*
+ * module_mhi_ep_driver() - Helper macro for drivers that don't do
+ * anything special other than using default mhi_ep_driver_register() and
+ * mhi_ep_driver_unregister().  This eliminates a lot of boilerplate.
+ * Each module may only use this macro once.
+ */
+#define module_mhi_ep_driver(mhi_drv) \
+	module_driver(mhi_drv, mhi_ep_driver_register, \
+		      mhi_ep_driver_unregister)
+
+/*
+ * Macro to avoid include chaining to get THIS_MODULE
+ */
+#define mhi_ep_driver_register(mhi_drv) \
+	__mhi_ep_driver_register(mhi_drv, THIS_MODULE)
+
+/**
+ * __mhi_ep_driver_register - Register a driver with MHI Endpoint bus
+ * @mhi_drv: Driver to be associated with the device
+ * @owner: The module owner
+ *
+ * Return: 0 if driver registration succeeds, a negative error code otherwise.
+ */
+int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner);
+
+/**
+ * mhi_ep_driver_unregister - Unregister a driver from MHI Endpoint bus
+ * @mhi_drv: Driver associated with the device
+ */
+void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv);
 
 /**
  * mhi_ep_register_controller - Register MHI Endpoint controller
-- 
2.25.1



* [PATCH v3 10/25] bus: mhi: ep: Add support for creating and destroying MHI EP devices
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (8 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 20:02   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 11/25] bus: mhi: ep: Add support for managing MMIO registers Manivannan Sadhasivam
                   ` (15 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

This commit adds support for creating and destroying MHI endpoint devices.
The MHI endpoint devices bind to the MHI endpoint channels and are used
to transfer data between the MHI host and the endpoint device.

There is a single MHI EP device for each channel pair. The devices will be
created when the corresponding channels have been started by the host and
will be destroyed during MHI EP power down and reset.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 77 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 77 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index f66404181972..fcaacf9ddbd1 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -67,6 +67,83 @@ static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl,
 	return mhi_dev;
 }
 
+/*
+ * MHI channels are always defined in pairs with UL as the even numbered
+ * channel and DL as odd numbered one.
+ */
+static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id)
+{
+	struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
+	struct mhi_ep_device *mhi_dev;
+	int ret;
+
+	/* Check if the channel name is the same for both UL and DL */
+	if (strcmp(mhi_chan->name, mhi_chan[1].name))
+		return -EINVAL;
+
+	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_XFER);
+	if (IS_ERR(mhi_dev))
+		return PTR_ERR(mhi_dev);
+
+	/* Configure primary channel */
+	mhi_dev->ul_chan = mhi_chan;
+	get_device(&mhi_dev->dev);
+	mhi_chan->mhi_dev = mhi_dev;
+
+	/* Configure secondary channel as well */
+	mhi_chan++;
+	mhi_dev->dl_chan = mhi_chan;
+	get_device(&mhi_dev->dev);
+	mhi_chan->mhi_dev = mhi_dev;
+
+	/* Channel name is the same for both UL and DL */
+	mhi_dev->name = mhi_chan->name;
+	dev_set_name(&mhi_dev->dev, "%s_%s",
+		     dev_name(&mhi_cntrl->mhi_dev->dev),
+		     mhi_dev->name);
+
+	ret = device_add(&mhi_dev->dev);
+	if (ret)
+		put_device(&mhi_dev->dev);
+
+	return ret;
+}
+
+static int mhi_ep_destroy_device(struct device *dev, void *data)
+{
+	struct mhi_ep_device *mhi_dev;
+	struct mhi_ep_cntrl *mhi_cntrl;
+	struct mhi_ep_chan *ul_chan, *dl_chan;
+
+	if (dev->bus != &mhi_ep_bus_type)
+		return 0;
+
+	mhi_dev = to_mhi_ep_device(dev);
+	mhi_cntrl = mhi_dev->mhi_cntrl;
+
+	/* Only destroy devices created for channels */
+	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+		return 0;
+
+	ul_chan = mhi_dev->ul_chan;
+	dl_chan = mhi_dev->dl_chan;
+
+	if (ul_chan)
+		put_device(&ul_chan->mhi_dev->dev);
+
+	if (dl_chan)
+		put_device(&dl_chan->mhi_dev->dev);
+
+	dev_dbg(&mhi_cntrl->mhi_dev->dev, "Destroying device for chan:%s\n",
+		 mhi_dev->name);
+
+	/* Notify the client and remove the device from MHI bus */
+	device_del(dev);
+	put_device(dev);
+
+	return 0;
+}
+
 static int parse_ch_cfg(struct mhi_ep_cntrl *mhi_cntrl,
 			const struct mhi_ep_cntrl_config *config)
 {
-- 
2.25.1



* [PATCH v3 11/25] bus: mhi: ep: Add support for managing MMIO registers
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (9 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 10/25] bus: mhi: ep: Add support for creating and destroying MHI EP devices Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15  1:14   ` Hemant Kumar
  2022-02-15 20:03   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 12/25] bus: mhi: ep: Add support for ring management Manivannan Sadhasivam
                   ` (14 subsequent siblings)
  25 siblings, 2 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for managing the Memory Mapped Input Output (MMIO) registers
of the MHI bus. All MHI operations are carried out using the MMIO registers
by both the host and the endpoint device.

The MMIO registers reside inside the endpoint device memory (fixed
location based on the platform) and the address is passed by the MHI EP
controller driver during its registration.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/Makefile   |   2 +-
 drivers/bus/mhi/ep/internal.h |  37 +++++
 drivers/bus/mhi/ep/main.c     |   6 +-
 drivers/bus/mhi/ep/mmio.c     | 274 ++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h        |  18 +++
 5 files changed, 335 insertions(+), 2 deletions(-)
 create mode 100644 drivers/bus/mhi/ep/mmio.c

diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
index 64e29252b608..a1555ae287ad 100644
--- a/drivers/bus/mhi/ep/Makefile
+++ b/drivers/bus/mhi/ep/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
-mhi_ep-y := main.o
+mhi_ep-y := main.o mmio.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index e313a2546664..2c756a90774c 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -101,6 +101,17 @@ struct mhi_generic_ctx {
 	__u64 wp __packed __aligned(4);
 };
 
+/**
+ * enum mhi_ep_execenv - MHI Endpoint Execution Environment
+ * @MHI_EP_SBL_EE: Secondary Bootloader
+ * @MHI_EP_AMSS_EE: Advanced Mode Subscriber Software
+ */
+enum mhi_ep_execenv {
+	MHI_EP_SBL_EE = 1,
+	MHI_EP_AMSS_EE = 2,
+	MHI_EP_UNRESERVED
+};
+
 enum mhi_ep_ring_type {
 	RING_TYPE_CMD = 0,
 	RING_TYPE_ER,
@@ -157,4 +168,30 @@ struct mhi_ep_chan {
 	bool skip_td;
 };
 
+/* MMIO related functions */
+u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
+void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
+void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask, u32 val);
+u32 mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset, u32 mask);
+void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_enable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
+void mhi_ep_mmio_disable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
+void mhi_ep_mmio_enable_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_mask_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_chc_base(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_erc_base(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_crc_base(struct mhi_ep_cntrl *mhi_cntrl);
+u64 mhi_ep_mmio_get_db(struct mhi_ep_ring *ring);
+void mhi_ep_mmio_set_env(struct mhi_ep_cntrl *mhi_cntrl, u32 value);
+void mhi_ep_mmio_clear_reset(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_reset(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
+			       bool *mhi_reset);
+void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
+
 #endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index fcaacf9ddbd1..950b5bcabe18 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -205,7 +205,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	struct mhi_ep_device *mhi_dev;
 	int ret;
 
-	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
+	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio)
 		return -EINVAL;
 
 	ret = parse_ch_cfg(mhi_cntrl, config);
@@ -218,6 +218,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 		goto err_free_ch;
 	}
 
+	/* Set MHI version and AMSS EE before enumeration */
+	mhi_ep_mmio_write(mhi_cntrl, MHIVER, config->mhi_version);
+	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
+
 	/* Set controller index */
 	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
 	if (mhi_cntrl->index < 0) {
diff --git a/drivers/bus/mhi/ep/mmio.c b/drivers/bus/mhi/ep/mmio.c
new file mode 100644
index 000000000000..58e887beb050
--- /dev/null
+++ b/drivers/bus/mhi/ep/mmio.c
@@ -0,0 +1,274 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/io.h>
+#include <linux/mhi_ep.h>
+
+#include "internal.h"
+
+u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset)
+{
+	return readl(mhi_cntrl->mmio + offset);
+}
+
+void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val)
+{
+	writel(val, mhi_cntrl->mmio + offset);
+}
+
+void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask, u32 val)
+{
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, offset);
+	regval &= ~mask;
+	regval |= ((val << __ffs(mask)) & mask);
+	mhi_ep_mmio_write(mhi_cntrl, offset, regval);
+}
+
+u32 mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset, u32 mask)
+{
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(dev, offset);
+	regval &= mask;
+	regval >>= __ffs(mask);
+
+	return regval;
+}
+
+void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
+				bool *mhi_reset)
+{
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, MHICTRL);
+	*state = FIELD_GET(MHICTRL_MHISTATE_MASK, regval);
+	*mhi_reset = !!FIELD_GET(MHICTRL_RESET_MASK, regval);
+}
+
+static void mhi_ep_mmio_mask_set_chdb_int_a7(struct mhi_ep_cntrl *mhi_cntrl,
+						u32 chdb_id, bool enable)
+{
+	u32 chid_mask, chid_idx, chid_shift, val = 0;
+
+	chid_shift = chdb_id % 32;
+	chid_mask = BIT(chid_shift);
+	chid_idx = chdb_id / 32;
+
+	WARN_ON(chid_idx >= MHI_MASK_ROWS_CH_EV_DB);
+
+	if (enable)
+		val = 1;
+
+	mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(chid_idx),
+				  chid_mask, val);
+
+	/* Update the local copy of the channel mask */
+	mhi_cntrl->chdb[chid_idx].mask &= ~chid_mask;
+	mhi_cntrl->chdb[chid_idx].mask |= val << chid_shift;
+}
+
+void mhi_ep_mmio_enable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id)
+{
+	mhi_ep_mmio_mask_set_chdb_int_a7(mhi_cntrl, chdb_id, true);
+}
+
+void mhi_ep_mmio_disable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id)
+{
+	mhi_ep_mmio_mask_set_chdb_int_a7(mhi_cntrl, chdb_id, false);
+}
+
+static void mhi_ep_mmio_set_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl, bool enable)
+{
+	u32 val = 0, i;
+
+	if (enable)
+		val = MHI_CHDB_INT_MASK_A7_n_EN_ALL;
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
+		mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(i), val);
+		mhi_cntrl->chdb[i].mask = val;
+	}
+}
+
+void mhi_ep_mmio_enable_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_set_chdb_interrupts(mhi_cntrl, true);
+}
+
+static void mhi_ep_mmio_mask_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_set_chdb_interrupts(mhi_cntrl, false);
+}
+
+void mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 i;
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++)
+		mhi_cntrl->chdb[i].status = mhi_ep_mmio_read(mhi_cntrl,
+							     MHI_CHDB_INT_STATUS_A7_n(i));
+}
+
+static void mhi_ep_mmio_set_erdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl, bool enable)
+{
+	u32 val = 0, i;
+
+	if (enable)
+		val = MHI_ERDB_INT_MASK_A7_n_EN_ALL;
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++)
+		mhi_ep_mmio_write(mhi_cntrl, MHI_ERDB_INT_MASK_A7_n(i), val);
+}
+
+static void mhi_ep_mmio_mask_erdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_set_erdb_interrupts(mhi_cntrl, false);
+}
+
+void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK_A7,
+				  MHI_CTRL_MHICTRL_MASK, 1);
+}
+
+void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK_A7,
+				  MHI_CTRL_MHICTRL_MASK, 0);
+}
+
+void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK_A7,
+				  MHI_CTRL_CRDB_MASK, 1);
+}
+
+void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK_A7,
+				  MHI_CTRL_CRDB_MASK, 0);
+}
+
+void mhi_ep_mmio_mask_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_disable_ctrl_interrupt(mhi_cntrl);
+	mhi_ep_mmio_disable_cmdb_interrupt(mhi_cntrl);
+	mhi_ep_mmio_mask_chdb_interrupts(mhi_cntrl);
+	mhi_ep_mmio_mask_erdb_interrupts(mhi_cntrl);
+}
+
+static void mhi_ep_mmio_clear_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 i = 0;
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++)
+		mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_CLEAR_A7_n(i),
+				   MHI_CHDB_INT_CLEAR_A7_n_CLEAR_ALL);
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++)
+		mhi_ep_mmio_write(mhi_cntrl, MHI_ERDB_INT_CLEAR_A7_n(i),
+				   MHI_ERDB_INT_CLEAR_A7_n_CLEAR_ALL);
+
+	mhi_ep_mmio_write(mhi_cntrl, MHI_CTRL_INT_CLEAR_A7,
+			   MHI_CTRL_INT_MMIO_WR_CLEAR |
+			   MHI_CTRL_INT_CRDB_CLEAR |
+			   MHI_CTRL_INT_CRDB_MHICTRL_CLEAR);
+}
+
+void mhi_ep_mmio_get_chc_base(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 ccabap_value;
+
+	ccabap_value = mhi_ep_mmio_read(mhi_cntrl, CCABAP_HIGHER);
+	mhi_cntrl->ch_ctx_host_pa = ccabap_value;
+	mhi_cntrl->ch_ctx_host_pa <<= 32;
+
+	ccabap_value = mhi_ep_mmio_read(mhi_cntrl, CCABAP_LOWER);
+	mhi_cntrl->ch_ctx_host_pa |= ccabap_value;
+}
+
+void mhi_ep_mmio_get_erc_base(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 ecabap_value;
+
+	ecabap_value = mhi_ep_mmio_read(mhi_cntrl, ECABAP_HIGHER);
+	mhi_cntrl->ev_ctx_host_pa = ecabap_value;
+	mhi_cntrl->ev_ctx_host_pa <<= 32;
+
+	ecabap_value = mhi_ep_mmio_read(mhi_cntrl, ECABAP_LOWER);
+	mhi_cntrl->ev_ctx_host_pa |= ecabap_value;
+}
+
+void mhi_ep_mmio_get_crc_base(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 crcbap_value;
+
+	crcbap_value = mhi_ep_mmio_read(mhi_cntrl, CRCBAP_HIGHER);
+	mhi_cntrl->cmd_ctx_host_pa = crcbap_value;
+	mhi_cntrl->cmd_ctx_host_pa <<= 32;
+
+	crcbap_value = mhi_ep_mmio_read(mhi_cntrl, CRCBAP_LOWER);
+	mhi_cntrl->cmd_ctx_host_pa |= crcbap_value;
+}
+
+u64 mhi_ep_mmio_get_db(struct mhi_ep_ring *ring)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+	u64 db_offset;
+	u32 regval;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_h);
+	db_offset = regval;
+	db_offset <<= 32;
+
+	regval = mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_l);
+	db_offset |= regval;
+
+	return db_offset;
+}
+
+void mhi_ep_mmio_set_env(struct mhi_ep_cntrl *mhi_cntrl, u32 value)
+{
+	mhi_ep_mmio_write(mhi_cntrl, BHI_EXECENV, value);
+}
+
+void mhi_ep_mmio_clear_reset(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_masked_write(mhi_cntrl, MHICTRL, MHICTRL_RESET_MASK, 0);
+}
+
+void mhi_ep_mmio_reset(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_write(mhi_cntrl, MHICTRL, 0);
+	mhi_ep_mmio_write(mhi_cntrl, MHISTATUS, 0);
+	mhi_ep_mmio_clear_interrupts(mhi_cntrl);
+}
+
+void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	int mhi_cfg;
+
+	mhi_cntrl->chdb_offset = mhi_ep_mmio_read(mhi_cntrl, CHDBOFF);
+	mhi_cntrl->erdb_offset = mhi_ep_mmio_read(mhi_cntrl, ERDBOFF);
+
+	mhi_cfg = mhi_ep_mmio_read(mhi_cntrl, MHICFG);
+	mhi_cntrl->event_rings = FIELD_GET(MHICFG_NER_MASK, mhi_cfg);
+	mhi_cntrl->hw_event_rings = FIELD_GET(MHICFG_NHWER_MASK, mhi_cfg);
+
+	mhi_ep_mmio_reset(mhi_cntrl);
+}
+
+void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	int mhi_cfg;
+
+	mhi_cfg = mhi_ep_mmio_read(mhi_cntrl, MHICFG);
+	mhi_cntrl->event_rings = FIELD_GET(MHICFG_NER_MASK, mhi_cfg);
+	mhi_cntrl->hw_event_rings = FIELD_GET(MHICFG_NHWER_MASK, mhi_cfg);
+}
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index da865f9d3646..3d2ab7a5ccd7 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -59,6 +59,10 @@ struct mhi_ep_db_info {
  * @mhi_event: Points to the event ring configurations table
  * @mhi_cmd: Points to the command ring configurations table
  * @sm: MHI Endpoint state machine
+ * @ch_ctx_host_pa: Physical address of host channel context data structure
+ * @ev_ctx_host_pa: Physical address of host event context data structure
+ * @cmd_ctx_host_pa: Physical address of host command context data structure
+ * @chdb: Array of channel doorbell interrupt info
  * @raise_irq: CB function for raising IRQ to the host
  * @alloc_addr: CB function for allocating memory in endpoint for storing host context
  * @map_addr: CB function for mapping host context to endpoint
@@ -69,6 +73,10 @@ struct mhi_ep_db_info {
  * @mhi_state: MHI Endpoint state
  * @max_chan: Maximum channels supported by the endpoint controller
  * @mru: MRU (Maximum Receive Unit) value of the endpoint controller
+ * @event_rings: Number of event rings supported by the endpoint controller
+ * @hw_event_rings: Number of hardware event rings supported by the endpoint controller
+ * @chdb_offset: Channel doorbell offset set by the host
+ * @erdb_offset: Event ring doorbell offset set by the host
  * @index: MHI Endpoint controller index
  */
 struct mhi_ep_cntrl {
@@ -81,6 +89,12 @@ struct mhi_ep_cntrl {
 	struct mhi_ep_cmd *mhi_cmd;
 	struct mhi_ep_sm *sm;
 
+	u64 ch_ctx_host_pa;
+	u64 ev_ctx_host_pa;
+	u64 cmd_ctx_host_pa;
+
+	struct mhi_ep_db_info chdb[4];
+
 	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
 	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,
 		       size_t size);
@@ -98,6 +112,10 @@ struct mhi_ep_cntrl {
 
 	u32 max_chan;
 	u32 mru;
+	u32 event_rings;
+	u32 hw_event_rings;
+	u32 chdb_offset;
+	u32 erdb_offset;
 	int index;
 };
 
-- 
2.25.1



* [PATCH v3 12/25] bus: mhi: ep: Add support for ring management
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (10 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 11/25] bus: mhi: ep: Add support for managing MMIO registers Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 20:03   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 13/25] bus: mhi: ep: Add support for sending events to the host Manivannan Sadhasivam
                   ` (13 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for managing the MHI rings. An MHI ring is a circular queue
of data structures used to pass information between the host and the
endpoint.

MHI supports three types of rings:

1. Transfer ring
2. Event ring
3. Command ring

All rings reside in the host memory and the MHI EP device maps them to
the device memory using hardware blocks like the PCIe iATU. The mapping
is handled in the MHI EP controller driver itself.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/Makefile   |   2 +-
 drivers/bus/mhi/ep/internal.h |  33 +++++
 drivers/bus/mhi/ep/main.c     |  59 +++++++-
 drivers/bus/mhi/ep/ring.c     | 267 ++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h        |  11 ++
 5 files changed, 370 insertions(+), 2 deletions(-)
 create mode 100644 drivers/bus/mhi/ep/ring.c

diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
index a1555ae287ad..7ba0e04801eb 100644
--- a/drivers/bus/mhi/ep/Makefile
+++ b/drivers/bus/mhi/ep/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
-mhi_ep-y := main.o mmio.o
+mhi_ep-y := main.o mmio.o ring.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 2c756a90774c..48d6e9667d55 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -112,6 +112,18 @@ enum mhi_ep_execenv {
 	MHI_EP_UNRESERVED
 };
 
+/* Transfer Ring Element macros */
+#define MHI_EP_TRE_PTR(ptr) (ptr)
+#define MHI_EP_TRE_DWORD0(len) (len & MHI_MAX_MTU)
+#define MHI_EP_TRE_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
+	| (ieot << 9) | (ieob << 8) | chain)
+#define MHI_EP_TRE_GET_PTR(tre) ((tre)->ptr)
+#define MHI_EP_TRE_GET_LEN(tre) ((tre)->dword[0] & 0xffff)
+#define MHI_EP_TRE_GET_CHAIN(tre) FIELD_GET(BIT(0), (tre)->dword[1])
+#define MHI_EP_TRE_GET_IEOB(tre) FIELD_GET(BIT(8), (tre)->dword[1])
+#define MHI_EP_TRE_GET_IEOT(tre) FIELD_GET(BIT(9), (tre)->dword[1])
+#define MHI_EP_TRE_GET_BEI(tre) FIELD_GET(BIT(10), (tre)->dword[1])
+
 enum mhi_ep_ring_type {
 	RING_TYPE_CMD = 0,
 	RING_TYPE_ER,
@@ -131,6 +143,11 @@ union mhi_ep_ring_ctx {
 	struct mhi_generic_ctx generic;
 };
 
+struct mhi_ep_ring_item {
+	struct list_head node;
+	struct mhi_ep_ring *ring;
+};
+
 struct mhi_ep_ring {
 	struct mhi_ep_cntrl *mhi_cntrl;
 	int (*ring_cb)(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
@@ -143,6 +160,9 @@ struct mhi_ep_ring {
 	u32 db_offset_h;
 	u32 db_offset_l;
 	u32 ch_id;
+	u32 er_index;
+	u32 irq_vector;
+	bool started;
 };
 
 struct mhi_ep_cmd {
@@ -168,6 +188,19 @@ struct mhi_ep_chan {
 	bool skip_td;
 };
 
+/* MHI Ring related functions */
+void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id);
+void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring);
+int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
+		      union mhi_ep_ring_ctx *ctx);
+size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr);
+int mhi_ep_process_ring(struct mhi_ep_ring *ring);
+int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *element);
+void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring);
+int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
+int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
+int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring);
+
 /* MMIO related functions */
 u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
 void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 950b5bcabe18..2c8045766292 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -18,6 +18,48 @@
 
 static DEFINE_IDA(mhi_ep_cntrl_ida);
 
+static void mhi_ep_ring_worker(struct work_struct *work)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
+				struct mhi_ep_cntrl, ring_work);
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	struct mhi_ep_ring_item *itr, *tmp;
+	struct mhi_ep_ring *ring;
+	struct mhi_ep_chan *chan;
+	unsigned long flags;
+	LIST_HEAD(head);
+	int ret;
+
+	/* Process the command ring first */
+	ret = mhi_ep_process_ring(&mhi_cntrl->mhi_cmd->ring);
+	if (ret) {
+		dev_err(dev, "Error processing command ring: %d\n", ret);
+		return;
+	}
+
+	spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
+	list_splice_tail_init(&mhi_cntrl->ch_db_list, &head);
+	spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
+
+	/* Process the channel rings now */
+	list_for_each_entry_safe(itr, tmp, &head, node) {
+		list_del(&itr->node);
+		ring = itr->ring;
+		chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+		mutex_lock(&chan->lock);
+		dev_dbg(dev, "Processing the ring for channel (%d)\n", ring->ch_id);
+		ret = mhi_ep_process_ring(ring);
+		if (ret) {
+			dev_err(dev, "Error processing ring for channel (%d): %d\n",
+				ring->ch_id, ret);
+			mutex_unlock(&chan->lock);
+			return;
+		}
+		mutex_unlock(&chan->lock);
+		kfree(itr);
+	}
+}
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -218,6 +260,17 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 		goto err_free_ch;
 	}
 
+	INIT_WORK(&mhi_cntrl->ring_work, mhi_ep_ring_worker);
+
+	mhi_cntrl->ring_wq = alloc_workqueue("mhi_ep_ring_wq", 0, 0);
+	if (!mhi_cntrl->ring_wq) {
+		ret = -ENOMEM;
+		goto err_free_cmd;
+	}
+
+	INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
+	spin_lock_init(&mhi_cntrl->list_lock);
+
 	/* Set MHI version and AMSS EE before enumeration */
 	mhi_ep_mmio_write(mhi_cntrl, MHIVER, config->mhi_version);
 	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
@@ -226,7 +279,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
 	if (mhi_cntrl->index < 0) {
 		ret = mhi_cntrl->index;
-		goto err_free_cmd;
+		goto err_destroy_ring_wq;
 	}
 
 	/* Allocate the controller device */
@@ -254,6 +307,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	put_device(&mhi_dev->dev);
 err_ida_free:
 	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+err_destroy_ring_wq:
+	destroy_workqueue(mhi_cntrl->ring_wq);
 err_free_cmd:
 	kfree(mhi_cntrl->mhi_cmd);
 err_free_ch:
@@ -267,6 +322,8 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
 
+	destroy_workqueue(mhi_cntrl->ring_wq);
+
 	kfree(mhi_cntrl->mhi_cmd);
 	kfree(mhi_cntrl->mhi_chan);
 
diff --git a/drivers/bus/mhi/ep/ring.c b/drivers/bus/mhi/ep/ring.c
new file mode 100644
index 000000000000..3eb02c9be5eb
--- /dev/null
+++ b/drivers/bus/mhi/ep/ring.c
@@ -0,0 +1,267 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ */
+
+#include <linux/mhi_ep.h>
+#include "internal.h"
+
+size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr)
+{
+	u64 rbase;
+
+	rbase = le64_to_cpu(ring->ring_ctx->generic.rbase);
+
+	return (ptr - rbase) / sizeof(struct mhi_ep_ring_element);
+}
+
+static u32 mhi_ep_ring_num_elems(struct mhi_ep_ring *ring)
+{
+	return le64_to_cpu(ring->ring_ctx->generic.rlen) / sizeof(struct mhi_ep_ring_element);
+}
+
+void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring)
+{
+	ring->rd_offset++;
+	if (ring->rd_offset == ring->ring_size)
+		ring->rd_offset = 0;
+}
+
+static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	size_t start, copy_size;
+	int ret;
+
+	/* No need to cache event rings */
+	if (ring->type == RING_TYPE_ER)
+		return 0;
+
+	/* No need to cache the ring if write pointer is unmodified */
+	if (ring->wr_offset == end)
+		return 0;
+
+	start = ring->wr_offset;
+	if (start < end) {
+		copy_size = (end - start) * sizeof(struct mhi_ep_ring_element);
+		ret = mhi_cntrl->read_from_host(mhi_cntrl,
+						(le64_to_cpu(ring->ring_ctx->generic.rbase) +
+						(start * sizeof(struct mhi_ep_ring_element))),
+						&ring->ring_cache[start], copy_size);
+		if (ret < 0)
+			return ret;
+	} else {
+		copy_size = (ring->ring_size - start) * sizeof(struct mhi_ep_ring_element);
+		ret = mhi_cntrl->read_from_host(mhi_cntrl,
+						(le64_to_cpu(ring->ring_ctx->generic.rbase) +
+						(start * sizeof(struct mhi_ep_ring_element))),
+						&ring->ring_cache[start], copy_size);
+		if (ret < 0)
+			return ret;
+
+		if (end) {
+			ret = mhi_cntrl->read_from_host(mhi_cntrl,
+							le64_to_cpu(ring->ring_ctx->generic.rbase),
+							&ring->ring_cache[0],
+							end * sizeof(struct mhi_ep_ring_element));
+			if (ret < 0)
+				return ret;
+		}
+	}
+
+	dev_dbg(dev, "Cached ring: start %zu end %zu size %zu\n", start, end, copy_size);
+
+	return 0;
+}
+
+static int mhi_ep_cache_ring(struct mhi_ep_ring *ring, u64 wr_ptr)
+{
+	size_t wr_offset;
+	int ret;
+
+	wr_offset = mhi_ep_ring_addr2offset(ring, wr_ptr);
+
+	/* Cache the host ring up to the write offset */
+	ret = __mhi_ep_cache_ring(ring, wr_offset);
+	if (ret)
+		return ret;
+
+	ring->wr_offset = wr_offset;
+
+	return 0;
+}
+
+int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring)
+{
+	u64 wr_ptr;
+
+	wr_ptr = mhi_ep_mmio_get_db(ring);
+
+	return mhi_ep_cache_ring(ring, wr_ptr);
+}
+
+static int mhi_ep_process_ring_element(struct mhi_ep_ring *ring, size_t offset)
+{
+	struct mhi_ep_ring_element *el;
+
+	/* Get the element and invoke the respective callback */
+	el = &ring->ring_cache[offset];
+
+	return ring->ring_cb(ring, el);
+}
+
+int mhi_ep_process_ring(struct mhi_ep_ring *ring)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	int ret = 0;
+
+	/* Event rings should not be processed */
+	if (ring->type == RING_TYPE_ER)
+		return -EINVAL;
+
+	dev_dbg(dev, "Processing ring of type: %d\n", ring->type);
+
+	/* Update the write offset for the ring */
+	ret = mhi_ep_update_wr_offset(ring);
+	if (ret) {
+		dev_err(dev, "Error updating write offset for ring\n");
+		return ret;
+	}
+
+	/* Sanity check to make sure there are elements in the ring */
+	if (ring->rd_offset == ring->wr_offset)
+		return 0;
+
+	/* Process channel ring first */
+	if (ring->type == RING_TYPE_CH) {
+		ret = mhi_ep_process_ring_element(ring, ring->rd_offset);
+		if (ret)
+			dev_err(dev, "Error processing ch ring element: %zu\n", ring->rd_offset);
+
+		return ret;
+	}
+
+	/* Process command ring now */
+	while (ring->rd_offset != ring->wr_offset) {
+		ret = mhi_ep_process_ring_element(ring, ring->rd_offset);
+		if (ret) {
+			dev_err(dev, "Error processing cmd ring element: %zu\n", ring->rd_offset);
+			return ret;
+		}
+
+		mhi_ep_ring_inc_index(ring);
+	}
+
+	return 0;
+}
+
+/* TODO: Support for adding multiple ring elements to the ring */
+int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	__le64 rbase = ring->ring_ctx->generic.rbase;
+	size_t old_offset = 0;
+	u32 num_free_elem;
+	int ret;
+
+	ret = mhi_ep_update_wr_offset(ring);
+	if (ret) {
+		dev_err(dev, "Error updating write pointer\n");
+		return ret;
+	}
+
+	if (ring->rd_offset < ring->wr_offset)
+		num_free_elem = (ring->wr_offset - ring->rd_offset) - 1;
+	else
+		num_free_elem = ((ring->ring_size - ring->rd_offset) + ring->wr_offset) - 1;
+
+	/* Check if there is space in the ring for adding at least one element */
+	if (!num_free_elem) {
+		dev_err(dev, "No space left in the ring\n");
+		return -ENOSPC;
+	}
+
+	old_offset = ring->rd_offset;
+	mhi_ep_ring_inc_index(ring);
+
+	dev_dbg(dev, "Adding an element to ring at offset (%zu)\n", ring->rd_offset);
+
+	/* Update rp in ring context */
+	ring->ring_ctx->generic.rp = cpu_to_le64(ring->rd_offset * sizeof(*el) + le64_to_cpu(rbase));
+
+	/* Ensure that the ring pointer gets updated before writing the element to ring */
+	smp_wmb();
+
+	ret = mhi_cntrl->write_to_host(mhi_cntrl, el, (le64_to_cpu(rbase) +
+				       (old_offset * sizeof(*el))), sizeof(*el));
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
+void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id)
+{
+	ring->type = type;
+	if (ring->type == RING_TYPE_CMD) {
+		ring->ring_cb = mhi_ep_process_cmd_ring;
+		ring->db_offset_h = CRDB_HIGHER;
+		ring->db_offset_l = CRDB_LOWER;
+	} else if (ring->type == RING_TYPE_CH) {
+		ring->ring_cb = mhi_ep_process_tre_ring;
+		ring->db_offset_h = CHDB_HIGHER_n(id);
+		ring->db_offset_l = CHDB_LOWER_n(id);
+		ring->ch_id = id;
+	} else {
+		ring->db_offset_h = ERDB_HIGHER_n(id);
+		ring->db_offset_l = ERDB_LOWER_n(id);
+	}
+}
+
+int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
+			union mhi_ep_ring_ctx *ctx)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	int ret;
+
+	ring->mhi_cntrl = mhi_cntrl;
+	ring->ring_ctx = ctx;
+	ring->ring_size = mhi_ep_ring_num_elems(ring);
+
+	if (ring->type == RING_TYPE_CH)
+		ring->er_index = le32_to_cpu(ring->ring_ctx->ch.erindex);
+
+	if (ring->type == RING_TYPE_ER)
+		ring->irq_vector = le32_to_cpu(ring->ring_ctx->ev.msivec);
+
+	/* During ring init, both rp and wp are equal */
+	ring->rd_offset = mhi_ep_ring_addr2offset(ring, le64_to_cpu(ring->ring_ctx->generic.rp));
+	ring->wr_offset = mhi_ep_ring_addr2offset(ring, le64_to_cpu(ring->ring_ctx->generic.rp));
+
+	/* Allocate ring cache memory for holding the copy of host ring */
+	ring->ring_cache = kcalloc(ring->ring_size, sizeof(struct mhi_ep_ring_element),
+				   GFP_KERNEL);
+	if (!ring->ring_cache)
+		return -ENOMEM;
+
+	ret = mhi_ep_cache_ring(ring, le64_to_cpu(ring->ring_ctx->generic.wp));
+	if (ret) {
+		dev_err(dev, "Failed to cache ring\n");
+		kfree(ring->ring_cache);
+		return ret;
+	}
+
+	ring->started = true;
+
+	return 0;
+}
+
+void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring)
+{
+	ring->started = false;
+	kfree(ring->ring_cache);
+}
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 3d2ab7a5ccd7..33828a6c4e63 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -62,6 +62,11 @@ struct mhi_ep_db_info {
  * @ch_ctx_host_pa: Physical address of host channel context data structure
  * @ev_ctx_host_pa: Physical address of host event context data structure
  * @cmd_ctx_host_pa: Physical address of host command context data structure
+ * @ring_wq: Dedicated workqueue for processing MHI rings
+ * @ring_work: Ring worker
+ * @ch_db_list: List of queued channel doorbells
+ * @st_transition_list: List of state transitions
+ * @list_lock: Lock for protecting state transition and channel doorbell lists
  * @chdb: Array of channel doorbell interrupt info
  * @raise_irq: CB function for raising IRQ to the host
  * @alloc_addr: CB function for allocating memory in endpoint for storing host context
@@ -93,6 +98,12 @@ struct mhi_ep_cntrl {
 	u64 ev_ctx_host_pa;
 	u64 cmd_ctx_host_pa;
 
+	struct workqueue_struct	*ring_wq;
+	struct work_struct ring_work;
+
+	struct list_head ch_db_list;
+	struct list_head st_transition_list;
+	spinlock_t list_lock;
 	struct mhi_ep_db_info chdb[4];
 
 	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 13/25] bus: mhi: ep: Add support for sending events to the host
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (11 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 12/25] bus: mhi: ep: Add support for ring management Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:39   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 14/25] bus: mhi: ep: Add support for managing MHI state machine Manivannan Sadhasivam
                   ` (12 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for sending events to the host over the MHI bus from the
endpoint. The following events are supported:

1. Transfer completion event
2. Command completion event
3. State change event
4. Execution Environment (EE) change event

An event is sent whenever an operation completes in the MHI EP
device. Events are sent over the MHI event ring, and the host is
additionally notified using an IRQ if required.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/common.h      |  15 ++++
 drivers/bus/mhi/ep/internal.h |   8 ++-
 drivers/bus/mhi/ep/main.c     | 126 ++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h        |   8 +++
 4 files changed, 155 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index 728c82928d8d..26d94ed52b34 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -176,6 +176,21 @@
 #define MHI_TRE_GET_EV_LINKSPEED(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
 #define MHI_TRE_GET_EV_LINKWIDTH(tre)			(MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
 
+/* State change event */
+#define MHI_SC_EV_PTR					0
+#define MHI_SC_EV_DWORD0(state)				cpu_to_le32((state) << 24)
+#define MHI_SC_EV_DWORD1(type)				cpu_to_le32((type) << 16)
+
+/* EE event */
+#define MHI_EE_EV_PTR					0
+#define MHI_EE_EV_DWORD0(ee)				cpu_to_le32((ee) << 24)
+#define MHI_EE_EV_DWORD1(type)				cpu_to_le32((type) << 16)
+
+/* Command Completion event */
+#define MHI_CC_EV_PTR(ptr)				cpu_to_le64(ptr)
+#define MHI_CC_EV_DWORD0(code)				cpu_to_le32((code) << 24)
+#define MHI_CC_EV_DWORD1(type)				cpu_to_le32((type) << 16)
+
 /* Transfer descriptor macros */
 #define MHI_TRE_DATA_PTR(ptr)				cpu_to_le64(ptr)
 #define MHI_TRE_DATA_DWORD0(len)			cpu_to_le32(len & MHI_MAX_MTU)
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 48d6e9667d55..fd63f79c6aec 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -131,8 +131,8 @@ enum mhi_ep_ring_type {
 };
 
 struct mhi_ep_ring_element {
-	u64 ptr;
-	u32 dword[2];
+	__le64 ptr;
+	__le32 dword[2];
 };
 
 /* Ring element */
@@ -227,4 +227,8 @@ void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *s
 void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
 void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
 
+/* MHI EP core functions */
+int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
+int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env);
+
 #endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 2c8045766292..61f066c6286b 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -18,6 +18,131 @@
 
 static DEFINE_IDA(mhi_ep_cntrl_ida);
 
+static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
+			     struct mhi_ep_ring_element *el)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	union mhi_ep_ring_ctx *ctx;
+	struct mhi_ep_ring *ring;
+	int ret;
+
+	mutex_lock(&mhi_cntrl->event_lock);
+	ring = &mhi_cntrl->mhi_event[ring_idx].ring;
+	ctx = (union mhi_ep_ring_ctx *)&mhi_cntrl->ev_ctx_cache[ring_idx];
+	if (!ring->started) {
+		ret = mhi_ep_ring_start(mhi_cntrl, ring, ctx);
+		if (ret) {
+			dev_err(dev, "Error starting event ring (%d)\n", ring_idx);
+			goto err_unlock;
+		}
+	}
+
+	/* Add element to the event ring */
+	ret = mhi_ep_ring_add_element(ring, el);
+	if (ret) {
+		dev_err(dev, "Error adding element to event ring (%d)\n", ring_idx);
+		goto err_unlock;
+	}
+
+	/* Ensure that the ring pointer gets updated in host memory before triggering IRQ */
+	smp_wmb();
+
+	mutex_unlock(&mhi_cntrl->event_lock);
+
+	/*
+	 * Raise IRQ to host only if the BEI flag is not set in TRE. Host might
+	 * set this flag for interrupt moderation as per MHI protocol.
+	 */
+	if (!MHI_EP_TRE_GET_BEI(el))
+		mhi_cntrl->raise_irq(mhi_cntrl, ring->irq_vector);
+
+	return 0;
+
+err_unlock:
+	mutex_unlock(&mhi_cntrl->event_lock);
+
+	return ret;
+}
+
+static int mhi_ep_send_completion_event(struct mhi_ep_cntrl *mhi_cntrl,
+					struct mhi_ep_ring *ring, u32 len,
+					enum mhi_ev_ccs code)
+{
+	struct mhi_ep_ring_element event = {};
+	__le32 tmp;
+
+	event.ptr = cpu_to_le64(le64_to_cpu(ring->ring_ctx->generic.rbase) +
+			ring->rd_offset * sizeof(struct mhi_ep_ring_element));
+
+	tmp = event.dword[0];
+	tmp |= MHI_TRE_EV_DWORD0(code, len);
+	event.dword[0] = tmp;
+
+	tmp = event.dword[1];
+	tmp |= MHI_TRE_EV_DWORD1(ring->ch_id, MHI_PKT_TYPE_TX_EVENT);
+	event.dword[1] = tmp;
+
+	return mhi_ep_send_event(mhi_cntrl, ring->er_index, &event);
+}
+
+int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state)
+{
+	struct mhi_ep_ring_element event = {};
+	__le32 tmp;
+
+	tmp = event.dword[0];
+	tmp |= MHI_SC_EV_DWORD0(state);
+	event.dword[0] = tmp;
+
+	tmp = event.dword[1];
+	tmp |= MHI_SC_EV_DWORD1(MHI_PKT_TYPE_STATE_CHANGE_EVENT);
+	event.dword[1] = tmp;
+
+	return mhi_ep_send_event(mhi_cntrl, 0, &event);
+}
+
+int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env)
+{
+	struct mhi_ep_ring_element event = {};
+	__le32 tmp;
+
+	tmp = event.dword[0];
+	tmp |= MHI_EE_EV_DWORD0(exec_env);
+	event.dword[0] = tmp;
+
+	tmp = event.dword[1];
+	tmp |= MHI_EE_EV_DWORD1(MHI_PKT_TYPE_EE_EVENT);
+	event.dword[1] = tmp;
+
+	return mhi_ep_send_event(mhi_cntrl, 0, &event);
+}
+
+static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ev_ccs code)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	struct mhi_ep_ring_element event = {};
+	__le32 tmp;
+
+	if (code > MHI_EV_CC_BAD_TRE) {
+		dev_err(dev, "Invalid command completion code (%d)\n", code);
+		return -EINVAL;
+	}
+
+	event.ptr = cpu_to_le64(le64_to_cpu(mhi_cntrl->cmd_ctx_cache->rbase)
+			+ (mhi_cntrl->mhi_cmd->ring.rd_offset *
+			(sizeof(struct mhi_ep_ring_element))));
+
+	tmp = event.dword[0];
+	tmp |= MHI_CC_EV_DWORD0(code);
+	event.dword[0] = tmp;
+
+	tmp = event.dword[1];
+	tmp |= MHI_CC_EV_DWORD1(MHI_PKT_TYPE_CMD_COMPLETION_EVENT);
+	event.dword[1] = tmp;
+
+	return mhi_ep_send_event(mhi_cntrl, 0, &event);
+}
+
 static void mhi_ep_ring_worker(struct work_struct *work)
 {
 	struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
@@ -270,6 +395,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 
 	INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
 	spin_lock_init(&mhi_cntrl->list_lock);
+	mutex_init(&mhi_cntrl->event_lock);
 
 	/* Set MHI version and AMSS EE before enumeration */
 	mhi_ep_mmio_write(mhi_cntrl, MHIVER, config->mhi_version);
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 33828a6c4e63..062133a68118 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -59,6 +59,9 @@ struct mhi_ep_db_info {
  * @mhi_event: Points to the event ring configurations table
  * @mhi_cmd: Points to the command ring configurations table
  * @sm: MHI Endpoint state machine
+ * @ch_ctx_cache: Cache of host channel context data structure
+ * @ev_ctx_cache: Cache of host event context data structure
+ * @cmd_ctx_cache: Cache of host command context data structure
  * @ch_ctx_host_pa: Physical address of host channel context data structure
  * @ev_ctx_host_pa: Physical address of host event context data structure
  * @cmd_ctx_host_pa: Physical address of host command context data structure
@@ -67,6 +70,7 @@ struct mhi_ep_db_info {
  * @ch_db_list: List of queued channel doorbells
  * @st_transition_list: List of state transitions
  * @list_lock: Lock for protecting state transition and channel doorbell lists
+ * @event_lock: Lock for protecting event rings
  * @chdb: Array of channel doorbell interrupt info
  * @raise_irq: CB function for raising IRQ to the host
  * @alloc_addr: CB function for allocating memory in endpoint for storing host context
@@ -94,6 +98,9 @@ struct mhi_ep_cntrl {
 	struct mhi_ep_cmd *mhi_cmd;
 	struct mhi_ep_sm *sm;
 
+	struct mhi_chan_ctxt *ch_ctx_cache;
+	struct mhi_event_ctxt *ev_ctx_cache;
+	struct mhi_cmd_ctxt *cmd_ctx_cache;
 	u64 ch_ctx_host_pa;
 	u64 ev_ctx_host_pa;
 	u64 cmd_ctx_host_pa;
@@ -104,6 +111,7 @@ struct mhi_ep_cntrl {
 	struct list_head ch_db_list;
 	struct list_head st_transition_list;
 	spinlock_t list_lock;
+	struct mutex event_lock;
 	struct mhi_ep_db_info chdb[4];
 
 	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 14/25] bus: mhi: ep: Add support for managing MHI state machine
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (12 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 13/25] bus: mhi: ep: Add support for sending events to the host Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:39   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 15/25] bus: mhi: ep: Add support for processing MHI endpoint interrupts Manivannan Sadhasivam
                   ` (11 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for managing the MHI state machine by controlling the state
transitions. Only transitions to the following MHI states are supported:

1. Ready state
2. M0 state
3. M3 state
4. SYS_ERR state

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/Makefile   |   2 +-
 drivers/bus/mhi/ep/internal.h |  11 +++
 drivers/bus/mhi/ep/main.c     |  51 ++++++++++-
 drivers/bus/mhi/ep/sm.c       | 168 ++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h        |   6 ++
 5 files changed, 236 insertions(+), 2 deletions(-)
 create mode 100644 drivers/bus/mhi/ep/sm.c

diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
index 7ba0e04801eb..aad85f180b70 100644
--- a/drivers/bus/mhi/ep/Makefile
+++ b/drivers/bus/mhi/ep/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
-mhi_ep-y := main.o mmio.o ring.o
+mhi_ep-y := main.o mmio.o ring.o sm.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index fd63f79c6aec..e4e8f06c2898 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -173,6 +173,11 @@ struct mhi_ep_event {
 	struct mhi_ep_ring ring;
 };
 
+struct mhi_ep_state_transition {
+	struct list_head node;
+	enum mhi_state state;
+};
+
 struct mhi_ep_chan {
 	char *name;
 	struct mhi_ep_device *mhi_dev;
@@ -230,5 +235,11 @@ void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
 /* MHI EP core functions */
 int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
 int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env);
+bool mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state cur_mhi_state,
+			    enum mhi_state mhi_state);
+int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state);
+int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
+int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
+int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
 
 #endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 61f066c6286b..ccb3c2795041 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -185,6 +185,43 @@ static void mhi_ep_ring_worker(struct work_struct *work)
 	}
 }
 
+static void mhi_ep_state_worker(struct work_struct *work)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	struct mhi_ep_state_transition *itr, *tmp;
+	unsigned long flags;
+	LIST_HEAD(head);
+	int ret;
+
+	spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
+	list_splice_tail_init(&mhi_cntrl->st_transition_list, &head);
+	spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
+
+	list_for_each_entry_safe(itr, tmp, &head, node) {
+		list_del(&itr->node);
+		dev_dbg(dev, "Handling MHI state transition to %s\n",
+			 mhi_state_str(itr->state));
+
+		switch (itr->state) {
+		case MHI_STATE_M0:
+			ret = mhi_ep_set_m0_state(mhi_cntrl);
+			if (ret)
+				dev_err(dev, "Failed to transition to M0 state\n");
+			break;
+		case MHI_STATE_M3:
+			ret = mhi_ep_set_m3_state(mhi_cntrl);
+			if (ret)
+				dev_err(dev, "Failed to transition to M3 state\n");
+			break;
+		default:
+			dev_err(dev, "Invalid MHI state transition: %d\n", itr->state);
+			break;
+		}
+		kfree(itr);
+	}
+}
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -386,6 +423,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	}
 
 	INIT_WORK(&mhi_cntrl->ring_work, mhi_ep_ring_worker);
+	INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
 
 	mhi_cntrl->ring_wq = alloc_workqueue("mhi_ep_ring_wq", 0, 0);
 	if (!mhi_cntrl->ring_wq) {
@@ -393,8 +431,16 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 		goto err_free_cmd;
 	}
 
+	mhi_cntrl->state_wq = alloc_workqueue("mhi_ep_state_wq", 0, 0);
+	if (!mhi_cntrl->state_wq) {
+		ret = -ENOMEM;
+		goto err_destroy_ring_wq;
+	}
+
 	INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
+	INIT_LIST_HEAD(&mhi_cntrl->st_transition_list);
 	spin_lock_init(&mhi_cntrl->list_lock);
+	spin_lock_init(&mhi_cntrl->state_lock);
 	mutex_init(&mhi_cntrl->event_lock);
 
 	/* Set MHI version and AMSS EE before enumeration */
@@ -405,7 +451,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
 	if (mhi_cntrl->index < 0) {
 		ret = mhi_cntrl->index;
-		goto err_destroy_ring_wq;
+		goto err_destroy_state_wq;
 	}
 
 	/* Allocate the controller device */
@@ -433,6 +479,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	put_device(&mhi_dev->dev);
 err_ida_free:
 	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+err_destroy_state_wq:
+	destroy_workqueue(mhi_cntrl->state_wq);
 err_destroy_ring_wq:
 	destroy_workqueue(mhi_cntrl->ring_wq);
 err_free_cmd:
@@ -448,6 +496,7 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
 
+	destroy_workqueue(mhi_cntrl->state_wq);
 	destroy_workqueue(mhi_cntrl->ring_wq);
 
 	kfree(mhi_cntrl->mhi_cmd);
diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
new file mode 100644
index 000000000000..68e7f99b9137
--- /dev/null
+++ b/drivers/bus/mhi/ep/sm.c
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ */
+
+#include <linux/delay.h>
+#include <linux/errno.h>
+#include <linux/mhi_ep.h>
+#include "internal.h"
+
+bool __must_check mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl,
+					 enum mhi_state cur_mhi_state,
+					 enum mhi_state mhi_state)
+{
+	bool valid = false;
+
+	switch (mhi_state) {
+	case MHI_STATE_READY:
+		valid = (cur_mhi_state == MHI_STATE_RESET);
+		break;
+	case MHI_STATE_M0:
+		valid = (cur_mhi_state == MHI_STATE_READY ||
+			  cur_mhi_state == MHI_STATE_M3);
+		break;
+	case MHI_STATE_M3:
+		valid = (cur_mhi_state == MHI_STATE_M0);
+		break;
+	case MHI_STATE_SYS_ERR:
+		/* Transition to SYS_ERR state is allowed at any time */
+		valid = true;
+		break;
+	default:
+		break;
+	}
+
+	return valid;
+}
+
+int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+
+	if (!mhi_ep_check_mhi_state(mhi_cntrl, mhi_cntrl->mhi_state, mhi_state)) {
+		dev_err(dev, "MHI state change to %s from %s is not allowed!\n",
+			mhi_state_str(mhi_state),
+			mhi_state_str(mhi_cntrl->mhi_state));
+		return -EACCES;
+	}
+
+	switch (mhi_state) {
+	case MHI_STATE_READY:
+		mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
+				MHISTATUS_READY_MASK, 1);
+
+		mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
+				MHISTATUS_MHISTATE_MASK, mhi_state);
+		break;
+	case MHI_STATE_SYS_ERR:
+		mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
+				MHISTATUS_SYSERR_MASK, 1);
+
+		mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
+				MHISTATUS_MHISTATE_MASK, mhi_state);
+		break;
+	case MHI_STATE_M1:
+	case MHI_STATE_M2:
+		dev_err(dev, "MHI state (%s) not supported\n", mhi_state_str(mhi_state));
+		return -EOPNOTSUPP;
+	case MHI_STATE_M0:
+	case MHI_STATE_M3:
+		mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
+					  MHISTATUS_MHISTATE_MASK, mhi_state);
+		break;
+	default:
+		dev_err(dev, "Invalid MHI state (%d)\n", mhi_state);
+		return -EINVAL;
+	}
+
+	mhi_cntrl->mhi_state = mhi_state;
+
+	return 0;
+}
+
+int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	enum mhi_state old_state;
+	int ret;
+
+	spin_lock_bh(&mhi_cntrl->state_lock);
+	old_state = mhi_cntrl->mhi_state;
+
+	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
+	if (ret) {
+		spin_unlock_bh(&mhi_cntrl->state_lock);
+		return ret;
+	}
+
+	spin_unlock_bh(&mhi_cntrl->state_lock);
+	/* Signal host that the device moved to M0 */
+	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M0);
+	if (ret) {
+		dev_err(dev, "Failed sending M0 state change event\n");
+		return ret;
+	}
+
+	if (old_state == MHI_STATE_READY) {
+		/* Allow the host to process state change event */
+		mdelay(1);
+
+		/* Send AMSS EE event to host */
+		ret = mhi_ep_send_ee_event(mhi_cntrl, MHI_EP_AMSS_EE);
+		if (ret) {
+			dev_err(dev, "Failed sending AMSS EE event\n");
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	int ret;
+
+	spin_lock_bh(&mhi_cntrl->state_lock);
+	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
+	if (ret) {
+		spin_unlock_bh(&mhi_cntrl->state_lock);
+		return ret;
+	}
+
+	spin_unlock_bh(&mhi_cntrl->state_lock);
+
+	/* Signal host that the device moved to M3 */
+	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);
+	if (ret) {
+		dev_err(dev, "Failed sending M3 state change event\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	enum mhi_state mhi_state;
+	int ret, is_ready;
+
+	spin_lock_bh(&mhi_cntrl->state_lock);
+	/* Ensure that the MHISTATUS is set to RESET by host */
+	mhi_state = mhi_ep_mmio_masked_read(mhi_cntrl, MHISTATUS, MHISTATUS_MHISTATE_MASK);
+	is_ready = mhi_ep_mmio_masked_read(mhi_cntrl, MHISTATUS, MHISTATUS_READY_MASK);
+
+	if (mhi_state != MHI_STATE_RESET || is_ready) {
+		dev_err(dev, "READY state transition failed. MHI host not in RESET state\n");
+		spin_unlock_bh(&mhi_cntrl->state_lock);
+		return -EFAULT;
+	}
+
+	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_READY);
+	spin_unlock_bh(&mhi_cntrl->state_lock);
+
+	return ret;
+}
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 062133a68118..72ce30cbe87e 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -65,11 +65,14 @@ struct mhi_ep_db_info {
  * @ch_ctx_host_pa: Physical address of host channel context data structure
  * @ev_ctx_host_pa: Physical address of host event context data structure
  * @cmd_ctx_host_pa: Physical address of host command context data structure
+ * @state_wq: Dedicated workqueue for handling MHI state transitions
  * @ring_wq: Dedicated workqueue for processing MHI rings
+ * @state_work: State transition worker
  * @ring_work: Ring worker
  * @ch_db_list: List of queued channel doorbells
  * @st_transition_list: List of state transitions
  * @list_lock: Lock for protecting state transition and channel doorbell lists
+ * @state_lock: Lock for protecting state transitions
  * @event_lock: Lock for protecting event rings
  * @chdb: Array of channel doorbell interrupt info
  * @raise_irq: CB function for raising IRQ to the host
@@ -105,12 +108,15 @@ struct mhi_ep_cntrl {
 	u64 ev_ctx_host_pa;
 	u64 cmd_ctx_host_pa;
 
+	struct workqueue_struct *state_wq;
 	struct workqueue_struct	*ring_wq;
+	struct work_struct state_work;
 	struct work_struct ring_work;
 
 	struct list_head ch_db_list;
 	struct list_head st_transition_list;
 	spinlock_t list_lock;
+	spinlock_t state_lock;
 	struct mutex event_lock;
 	struct mhi_ep_db_info chdb[4];
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 15/25] bus: mhi: ep: Add support for processing MHI endpoint interrupts
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (13 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 14/25] bus: mhi: ep: Add support for managing MHI state machine Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:39   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 16/25] bus: mhi: ep: Add support for powering up the MHI endpoint stack Manivannan Sadhasivam
                   ` (10 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for processing MHI endpoint interrupts such as the control,
command, and channel interrupts from the host.

These interrupts are generated in the endpoint device whenever the host
writes to the corresponding doorbell registers. The doorbell logic is
handled internally by the hardware.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 113 +++++++++++++++++++++++++++++++++++++-
 include/linux/mhi_ep.h    |   2 +
 2 files changed, 113 insertions(+), 2 deletions(-)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index ccb3c2795041..072b872e735b 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -185,6 +185,56 @@ static void mhi_ep_ring_worker(struct work_struct *work)
 	}
 }
 
+static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl,
+				    unsigned long ch_int, u32 ch_idx)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	struct mhi_ep_ring_item *item;
+	struct mhi_ep_ring *ring;
+	unsigned int i;
+
+	for_each_set_bit(i, &ch_int, 32) {
+		/* Channel index varies for each register: 0, 32, 64, 96 */
+		i += ch_idx;
+		ring = &mhi_cntrl->mhi_chan[i].ring;
+
+		item = kmalloc(sizeof(*item), GFP_ATOMIC);
+		item->ring = ring;
+
+		dev_dbg(dev, "Queuing doorbell interrupt for channel (%d)\n", i);
+		spin_lock(&mhi_cntrl->list_lock);
+		list_add_tail(&item->node, &mhi_cntrl->ch_db_list);
+		spin_unlock(&mhi_cntrl->list_lock);
+
+		queue_work(mhi_cntrl->ring_wq, &mhi_cntrl->ring_work);
+	}
+}
+
+/*
+ * Channel interrupt statuses are contained in 4 registers, each 32 bits wide.
+ * To check all interrupts, we need to loop through each register and process
+ * the bits that are set.
+ */
+static void mhi_ep_check_channel_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	u32 ch_int, ch_idx;
+	int i;
+
+	mhi_ep_mmio_read_chdb_status_interrupts(mhi_cntrl);
+
+	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
+		ch_idx = i * MHI_MASK_CH_EV_LEN;
+
+		/* Only process channel interrupt if the mask is enabled */
+		ch_int = (mhi_cntrl->chdb[i].status & mhi_cntrl->chdb[i].mask);
+		if (ch_int) {
+			mhi_ep_queue_channel_db(mhi_cntrl, ch_int, ch_idx);
+			mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_CLEAR_A7_n(i),
+							mhi_cntrl->chdb[i].status);
+		}
+	}
+}
+
 static void mhi_ep_state_worker(struct work_struct *work)
 {
 	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
@@ -222,6 +272,53 @@ static void mhi_ep_state_worker(struct work_struct *work)
 	}
 }
 
+static void mhi_ep_process_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl,
+					 enum mhi_state state)
+{
+	struct mhi_ep_state_transition *item = kmalloc(sizeof(*item), GFP_ATOMIC);
+
+	item->state = state;
+	spin_lock(&mhi_cntrl->list_lock);
+	list_add_tail(&item->node, &mhi_cntrl->st_transition_list);
+	spin_unlock(&mhi_cntrl->list_lock);
+
+	queue_work(mhi_cntrl->state_wq, &mhi_cntrl->state_work);
+}
+
+/*
+ * Interrupt handler that services interrupts raised by the host writing to
+ * MHICTRL and Command ring doorbell (CRDB) registers for state change and
+ * channel interrupts.
+ */
+static irqreturn_t mhi_ep_irq(int irq, void *data)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = data;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	enum mhi_state state;
+	u32 int_value;
+
+	/* Acknowledge the interrupts */
+	int_value = mhi_ep_mmio_read(mhi_cntrl, MHI_CTRL_INT_STATUS_A7);
+	mhi_ep_mmio_write(mhi_cntrl, MHI_CTRL_INT_CLEAR_A7, int_value);
+
+	/* Check for ctrl interrupt */
+	if (FIELD_GET(MHI_CTRL_INT_STATUS_A7_MSK, int_value)) {
+		dev_dbg(dev, "Processing ctrl interrupt\n");
+		mhi_ep_process_ctrl_interrupt(mhi_cntrl, state);
+	}
+
+	/* Check for command doorbell interrupt */
+	if (FIELD_GET(MHI_CTRL_INT_STATUS_CRDB_MSK, int_value)) {
+		dev_dbg(dev, "Processing command doorbell interrupt\n");
+		queue_work(mhi_cntrl->ring_wq, &mhi_cntrl->ring_work);
+	}
+
+	/* Check for channel interrupts */
+	mhi_ep_check_channel_interrupt(mhi_cntrl);
+
+	return IRQ_HANDLED;
+}
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -409,7 +506,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 	struct mhi_ep_device *mhi_dev;
 	int ret;
 
-	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio)
+	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio || !mhi_cntrl->irq)
 		return -EINVAL;
 
 	ret = parse_ch_cfg(mhi_cntrl, config);
@@ -454,12 +551,20 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 		goto err_destroy_state_wq;
 	}
 
+	irq_set_status_flags(mhi_cntrl->irq, IRQ_NOAUTOEN);
+	ret = request_irq(mhi_cntrl->irq, mhi_ep_irq, IRQF_TRIGGER_HIGH,
+			  "doorbell_irq", mhi_cntrl);
+	if (ret) {
+		dev_err(mhi_cntrl->cntrl_dev, "Failed to request Doorbell IRQ\n");
+		goto err_ida_free;
+	}
+
 	/* Allocate the controller device */
 	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_CONTROLLER);
 	if (IS_ERR(mhi_dev)) {
 		dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
 		ret = PTR_ERR(mhi_dev);
-		goto err_ida_free;
+		goto err_free_irq;
 	}
 
 	dev_set_name(&mhi_dev->dev, "mhi_ep%d", mhi_cntrl->index);
@@ -477,6 +582,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 
 err_put_dev:
 	put_device(&mhi_dev->dev);
+err_free_irq:
+	free_irq(mhi_cntrl->irq, mhi_cntrl);
 err_ida_free:
 	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
 err_destroy_state_wq:
@@ -499,6 +606,8 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
 	destroy_workqueue(mhi_cntrl->state_wq);
 	destroy_workqueue(mhi_cntrl->ring_wq);
 
+	free_irq(mhi_cntrl->irq, mhi_cntrl);
+
 	kfree(mhi_cntrl->mhi_cmd);
 	kfree(mhi_cntrl->mhi_chan);
 
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 72ce30cbe87e..a207058a4991 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -90,6 +90,7 @@ struct mhi_ep_db_info {
  * @chdb_offset: Channel doorbell offset set by the host
  * @erdb_offset: Event ring doorbell offset set by the host
  * @index: MHI Endpoint controller index
+ * @irq: IRQ used by the endpoint controller
  */
 struct mhi_ep_cntrl {
 	struct device *cntrl_dev;
@@ -142,6 +143,7 @@ struct mhi_ep_cntrl {
 	u32 chdb_offset;
 	u32 erdb_offset;
 	int index;
+	int irq;
 };
 
 /**
-- 
2.25.1



* [PATCH v3 16/25] bus: mhi: ep: Add support for powering up the MHI endpoint stack
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (14 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 15/25] bus: mhi: ep: Add support for processing MHI endpoint interrupts Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:39   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 17/25] bus: mhi: ep: Add support for powering down " Manivannan Sadhasivam
                   ` (9 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for MHI endpoint power_up, which includes initializing the MMIO
and rings, caching the host MHI registers, and setting the MHI state to M0.
After registering the MHI EP controller, the stack has to be powered up
before use.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/internal.h |   6 +
 drivers/bus/mhi/ep/main.c     | 229 ++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h        |  22 ++++
 3 files changed, 257 insertions(+)

diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index e4e8f06c2898..ee8c5974f0c0 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -242,4 +242,10 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
 
+/* MHI EP memory management functions */
+int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
+		     phys_addr_t *phys_ptr, void __iomem **virt);
+void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
+		       void __iomem *virt, size_t size);
+
 #endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 072b872e735b..016e819f640a 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -16,6 +16,9 @@
 #include <linux/module.h>
 #include "internal.h"
 
+#define MHI_SUSPEND_MIN			100
+#define MHI_SUSPEND_TIMEOUT		600
+
 static DEFINE_IDA(mhi_ep_cntrl_ida);
 
 static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
@@ -143,6 +146,176 @@ static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_e
 	return mhi_ep_send_event(mhi_cntrl, 0, &event);
 }
 
+int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
+		     phys_addr_t *phys_ptr, void __iomem **virt)
+{
+	size_t offset = pci_addr % 0x1000;
+	void __iomem *buf;
+	phys_addr_t phys;
+	int ret;
+
+	size += offset;
+
+	buf = mhi_cntrl->alloc_addr(mhi_cntrl, &phys, size);
+	if (!buf)
+		return -ENOMEM;
+
+	ret = mhi_cntrl->map_addr(mhi_cntrl, phys, pci_addr - offset, size);
+	if (ret) {
+		mhi_cntrl->free_addr(mhi_cntrl, phys, buf, size);
+		return ret;
+	}
+
+	*phys_ptr = phys + offset;
+	*virt = buf + offset;
+
+	return 0;
+}
+
+void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
+			void __iomem *virt, size_t size)
+{
+	size_t offset = pci_addr % 0x1000;
+
+	size += offset;
+
+	mhi_cntrl->unmap_addr(mhi_cntrl, phys - offset);
+	mhi_cntrl->free_addr(mhi_cntrl, phys - offset, virt - offset, size);
+}
+
+static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	int ret;
+
+	/* Update the number of event rings (NER) programmed by the host */
+	mhi_ep_mmio_update_ner(mhi_cntrl);
+
+	dev_dbg(dev, "Number of Event rings: %d, HW Event rings: %d\n",
+		 mhi_cntrl->event_rings, mhi_cntrl->hw_event_rings);
+
+	mhi_cntrl->ch_ctx_host_size = sizeof(struct mhi_chan_ctxt) *
+					mhi_cntrl->max_chan;
+	mhi_cntrl->ev_ctx_host_size = sizeof(struct mhi_event_ctxt) *
+					mhi_cntrl->event_rings;
+	mhi_cntrl->cmd_ctx_host_size = sizeof(struct mhi_cmd_ctxt);
+
+	/* Get the channel context base pointer from host */
+	mhi_ep_mmio_get_chc_base(mhi_cntrl);
+
+	/* Allocate and map memory for caching host channel context */
+	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, mhi_cntrl->ch_ctx_host_size,
+				&mhi_cntrl->ch_ctx_cache_phys,
+				(void __iomem **)&mhi_cntrl->ch_ctx_cache);
+	if (ret) {
+		dev_err(dev, "Failed to allocate and map ch_ctx_cache\n");
+		return ret;
+	}
+
+	/* Get the event context base pointer from host */
+	mhi_ep_mmio_get_erc_base(mhi_cntrl);
+
+	/* Allocate and map memory for caching host event context */
+	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, mhi_cntrl->ev_ctx_host_size,
+				&mhi_cntrl->ev_ctx_cache_phys,
+				(void __iomem **)&mhi_cntrl->ev_ctx_cache);
+	if (ret) {
+		dev_err(dev, "Failed to allocate and map ev_ctx_cache\n");
+		goto err_ch_ctx;
+	}
+
+	/* Get the command context base pointer from host */
+	mhi_ep_mmio_get_crc_base(mhi_cntrl);
+
+	/* Allocate and map memory for caching host command context */
+	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, mhi_cntrl->cmd_ctx_host_size,
+				&mhi_cntrl->cmd_ctx_cache_phys,
+				(void __iomem **)&mhi_cntrl->cmd_ctx_cache);
+	if (ret) {
+		dev_err(dev, "Failed to allocate and map cmd_ctx_cache\n");
+		goto err_ev_ctx;
+	}
+
+	/* Initialize command ring */
+	ret = mhi_ep_ring_start(mhi_cntrl, &mhi_cntrl->mhi_cmd->ring,
+				(union mhi_ep_ring_ctx *)mhi_cntrl->cmd_ctx_cache);
+	if (ret) {
+		dev_err(dev, "Failed to start the command ring\n");
+		goto err_cmd_ctx;
+	}
+
+	return ret;
+
+err_cmd_ctx:
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, mhi_cntrl->cmd_ctx_cache_phys,
+			mhi_cntrl->cmd_ctx_cache, mhi_cntrl->cmd_ctx_host_size);
+
+err_ev_ctx:
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, mhi_cntrl->ev_ctx_cache_phys,
+			mhi_cntrl->ev_ctx_cache, mhi_cntrl->ev_ctx_host_size);
+
+err_ch_ctx:
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, mhi_cntrl->ch_ctx_cache_phys,
+			mhi_cntrl->ch_ctx_cache, mhi_cntrl->ch_ctx_host_size);
+
+	return ret;
+}
+
+static void mhi_ep_free_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, mhi_cntrl->cmd_ctx_cache_phys,
+			mhi_cntrl->cmd_ctx_cache, mhi_cntrl->cmd_ctx_host_size);
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, mhi_cntrl->ev_ctx_cache_phys,
+			mhi_cntrl->ev_ctx_cache, mhi_cntrl->ev_ctx_host_size);
+	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, mhi_cntrl->ch_ctx_cache_phys,
+			mhi_cntrl->ch_ctx_cache, mhi_cntrl->ch_ctx_host_size);
+}
+
+static void mhi_ep_enable_int(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	mhi_ep_mmio_enable_ctrl_interrupt(mhi_cntrl);
+	mhi_ep_mmio_enable_cmdb_interrupt(mhi_cntrl);
+}
+
+static int mhi_ep_enable(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	enum mhi_state state;
+	u32 max_cnt = 0;
+	bool mhi_reset;
+	int ret;
+
+	/* Wait for Host to set the M0 state */
+	do {
+		msleep(MHI_SUSPEND_MIN);
+		mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);
+		if (mhi_reset) {
+			/* Clear the MHI reset if host is in reset state */
+			mhi_ep_mmio_clear_reset(mhi_cntrl);
+			dev_dbg(dev, "Host initiated reset while waiting for M0\n");
+		}
+		max_cnt++;
+	} while (state != MHI_STATE_M0 && max_cnt < MHI_SUSPEND_TIMEOUT);
+
+	if (state == MHI_STATE_M0) {
+		ret = mhi_ep_cache_host_cfg(mhi_cntrl);
+		if (ret) {
+			dev_err(dev, "Failed to cache host config\n");
+			return ret;
+		}
+
+		mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
+	} else {
+		dev_err(dev, "Host failed to enter M0\n");
+		return -ETIMEDOUT;
+	}
+
+	/* Enable all interrupts now */
+	mhi_ep_enable_int(mhi_cntrl);
+
+	return 0;
+}
+
 static void mhi_ep_ring_worker(struct work_struct *work)
 {
 	struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
@@ -319,6 +492,62 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
+int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	int ret, i;
+
+	/*
+	 * Mask all interrupts until the state machine is ready. Interrupts will
+	 * be enabled later with mhi_ep_enable().
+	 */
+	mhi_ep_mmio_mask_interrupts(mhi_cntrl);
+	mhi_ep_mmio_init(mhi_cntrl);
+
+	mhi_cntrl->mhi_event = kzalloc(mhi_cntrl->event_rings * (sizeof(*mhi_cntrl->mhi_event)),
+					GFP_KERNEL);
+	if (!mhi_cntrl->mhi_event)
+		return -ENOMEM;
+
+	/* Initialize command, channel and event rings */
+	mhi_ep_ring_init(&mhi_cntrl->mhi_cmd->ring, RING_TYPE_CMD, 0);
+	for (i = 0; i < mhi_cntrl->max_chan; i++)
+		mhi_ep_ring_init(&mhi_cntrl->mhi_chan[i].ring, RING_TYPE_CH, i);
+	for (i = 0; i < mhi_cntrl->event_rings; i++)
+		mhi_ep_ring_init(&mhi_cntrl->mhi_event[i].ring, RING_TYPE_ER, i);
+
+	spin_lock_bh(&mhi_cntrl->state_lock);
+	mhi_cntrl->mhi_state = MHI_STATE_RESET;
+	spin_unlock_bh(&mhi_cntrl->state_lock);
+
+	/* Set AMSS EE before signaling ready state */
+	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
+
+	/* All set, notify the host that we are ready */
+	ret = mhi_ep_set_ready_state(mhi_cntrl);
+	if (ret)
+		goto err_free_event;
+
+	dev_dbg(dev, "READY state notification sent to the host\n");
+
+	ret = mhi_ep_enable(mhi_cntrl);
+	if (ret) {
+		dev_err(dev, "Failed to enable MHI endpoint\n");
+		goto err_free_event;
+	}
+
+	enable_irq(mhi_cntrl->irq);
+	mhi_cntrl->is_enabled = true;
+
+	return 0;
+
+err_free_event:
+	kfree(mhi_cntrl->mhi_event);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_ep_power_up);
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index a207058a4991..53895f1c68e1 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -65,6 +65,12 @@ struct mhi_ep_db_info {
  * @ch_ctx_host_pa: Physical address of host channel context data structure
  * @ev_ctx_host_pa: Physical address of host event context data structure
  * @cmd_ctx_host_pa: Physical address of host command context data structure
+ * @ch_ctx_cache_phys: Physical address of the host channel context cache
+ * @ev_ctx_cache_phys: Physical address of the host event context cache
+ * @cmd_ctx_cache_phys: Physical address of the host command context cache
+ * @ch_ctx_host_size: Size of the host channel context data structure
+ * @ev_ctx_host_size: Size of the host event context data structure
+ * @cmd_ctx_host_size: Size of the host command context data structure
  * @state_wq: Dedicated workqueue for handling MHI state transitions
  * @ring_wq: Dedicated workqueue for processing MHI rings
  * @state_work: State transition worker
@@ -91,6 +97,7 @@ struct mhi_ep_db_info {
  * @erdb_offset: Event ring doorbell offset set by the host
  * @index: MHI Endpoint controller index
  * @irq: IRQ used by the endpoint controller
+ * @is_enabled: Flag indicating whether the endpoint controller is enabled
  */
 struct mhi_ep_cntrl {
 	struct device *cntrl_dev;
@@ -108,6 +115,12 @@ struct mhi_ep_cntrl {
 	u64 ch_ctx_host_pa;
 	u64 ev_ctx_host_pa;
 	u64 cmd_ctx_host_pa;
+	phys_addr_t ch_ctx_cache_phys;
+	phys_addr_t ev_ctx_cache_phys;
+	phys_addr_t cmd_ctx_cache_phys;
+	size_t ch_ctx_host_size;
+	size_t ev_ctx_host_size;
+	size_t cmd_ctx_host_size;
 
 	struct workqueue_struct *state_wq;
 	struct workqueue_struct	*ring_wq;
@@ -144,6 +157,7 @@ struct mhi_ep_cntrl {
 	u32 erdb_offset;
 	int index;
 	int irq;
+	bool is_enabled;
 };
 
 /**
@@ -238,4 +252,12 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
  */
 void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
 
+/**
+ * mhi_ep_power_up - Power up the MHI endpoint stack
+ * @mhi_cntrl: MHI Endpoint controller
+ *
+ * Return: 0 if power up succeeds, a negative error code otherwise.
+ */
+int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
+
 #endif
-- 
2.25.1



* [PATCH v3 17/25] bus: mhi: ep: Add support for powering down the MHI endpoint stack
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (15 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 16/25] bus: mhi: ep: Add support for powering up the MHI endpoint stack Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:39   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 18/25] bus: mhi: ep: Add support for handling MHI_RESET Manivannan Sadhasivam
                   ` (8 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for MHI endpoint power_down, which includes stopping all
available channels, destroying the channel devices, resetting the event and
transfer rings, and freeing the host cache.

The stack will be powered down whenever the physical bus link goes down.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 81 +++++++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h    |  6 +++
 2 files changed, 87 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 016e819f640a..14cb08de4263 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -21,6 +21,8 @@
 
 static DEFINE_IDA(mhi_ep_cntrl_ida);
 
+static int mhi_ep_destroy_device(struct device *dev, void *data);
+
 static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
 			     struct mhi_ep_ring_element *el)
 {
@@ -492,6 +494,71 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
+static void mhi_ep_abort_transfer(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct mhi_ep_ring *ch_ring, *ev_ring;
+	struct mhi_result result = {};
+	struct mhi_ep_chan *mhi_chan;
+	int i;
+
+	/* Stop all the channels */
+	for (i = 0; i < mhi_cntrl->max_chan; i++) {
+		ch_ring = &mhi_cntrl->mhi_chan[i].ring;
+		if (!ch_ring->started)
+			continue;
+
+		mhi_chan = &mhi_cntrl->mhi_chan[i];
+		mutex_lock(&mhi_chan->lock);
+		/* Send channel disconnect status to client drivers */
+		if (mhi_chan->xfer_cb) {
+			result.transaction_status = -ENOTCONN;
+			result.bytes_xferd = 0;
+			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+		}
+
+		/* Set channel state to DISABLED */
+		mhi_chan->state = MHI_CH_STATE_DISABLED;
+		mutex_unlock(&mhi_chan->lock);
+	}
+
+	flush_workqueue(mhi_cntrl->ring_wq);
+	flush_workqueue(mhi_cntrl->state_wq);
+
+	/* Destroy devices associated with all channels */
+	device_for_each_child(&mhi_cntrl->mhi_dev->dev, NULL, mhi_ep_destroy_device);
+
+	/* Stop and reset the transfer rings */
+	for (i = 0; i < mhi_cntrl->max_chan; i++) {
+		ch_ring = &mhi_cntrl->mhi_chan[i].ring;
+		if (!ch_ring->started)
+			continue;
+
+		mhi_chan = &mhi_cntrl->mhi_chan[i];
+		mutex_lock(&mhi_chan->lock);
+		mhi_ep_ring_reset(mhi_cntrl, ch_ring);
+		mutex_unlock(&mhi_chan->lock);
+	}
+
+	/* Stop and reset the event rings */
+	for (i = 0; i < mhi_cntrl->event_rings; i++) {
+		ev_ring = &mhi_cntrl->mhi_event[i].ring;
+		if (!ev_ring->started)
+			continue;
+
+		mutex_lock(&mhi_cntrl->event_lock);
+		mhi_ep_ring_reset(mhi_cntrl, ev_ring);
+		mutex_unlock(&mhi_cntrl->event_lock);
+	}
+
+	/* Stop and reset the command ring */
+	mhi_ep_ring_reset(mhi_cntrl, &mhi_cntrl->mhi_cmd->ring);
+
+	mhi_ep_free_host_cfg(mhi_cntrl);
+	mhi_ep_mmio_mask_interrupts(mhi_cntrl);
+
+	mhi_cntrl->is_enabled = false;
+}
+
 int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
@@ -548,6 +615,16 @@ int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
 }
 EXPORT_SYMBOL_GPL(mhi_ep_power_up);
 
+void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	if (mhi_cntrl->is_enabled)
+		mhi_ep_abort_transfer(mhi_cntrl);
+
+	kfree(mhi_cntrl->mhi_event);
+	disable_irq(mhi_cntrl->irq);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_power_down);
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -828,6 +905,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 }
 EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
 
+/*
+ * It is expected that the controller drivers will power down the MHI EP stack
+ * using "mhi_ep_power_down()" before calling this function to unregister themselves.
+ */
 void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 53895f1c68e1..4f86e7986c93 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -260,4 +260,10 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
  */
 int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
 
+/**
+ * mhi_ep_power_down - Power down the MHI endpoint stack
+ * @mhi_cntrl: MHI controller
+ */
+void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
+
 #endif
-- 
2.25.1



* [PATCH v3 18/25] bus: mhi: ep: Add support for handling MHI_RESET
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (16 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 17/25] bus: mhi: ep: Add support for powering down " Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:39   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 19/25] bus: mhi: ep: Add support for handling SYS_ERR condition Manivannan Sadhasivam
                   ` (7 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for handling MHI_RESET in the MHI endpoint stack. MHI_RESET
will be issued by the host during shutdown and in error scenarios so that
the endpoint device can be recovered without restarting the whole device.

MHI_RESET handling involves resetting the internal MHI registers, data
structures, and state machines, resetting all channels/rings, and clearing
the MHICTRL.RESET bit. Additionally, the device moves to the READY state
if the reset was due to SYS_ERR.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 53 +++++++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h    |  2 ++
 2 files changed, 55 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 14cb08de4263..ddedd0fb19aa 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -471,6 +471,7 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
 	enum mhi_state state;
 	u32 int_value;
+	bool mhi_reset;
 
 	/* Acknowledge the interrupts */
 	int_value = mhi_ep_mmio_read(mhi_cntrl, MHI_CTRL_INT_STATUS_A7);
@@ -479,6 +480,14 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
 	/* Check for ctrl interrupt */
 	if (FIELD_GET(MHI_CTRL_INT_STATUS_A7_MSK, int_value)) {
 		dev_dbg(dev, "Processing ctrl interrupt\n");
+		mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);
+		if (mhi_reset) {
+			dev_info(dev, "Host triggered MHI reset!\n");
+			disable_irq_nosync(mhi_cntrl->irq);
+			schedule_work(&mhi_cntrl->reset_work);
+			return IRQ_HANDLED;
+		}
+
 		mhi_ep_process_ctrl_interrupt(mhi_cntrl, state);
 	}
 
@@ -559,6 +568,49 @@ static void mhi_ep_abort_transfer(struct mhi_ep_cntrl *mhi_cntrl)
 	mhi_cntrl->is_enabled = false;
 }
 
+static void mhi_ep_reset_worker(struct work_struct *work)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, reset_work);
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	enum mhi_state cur_state;
+	int ret;
+
+	mhi_ep_abort_transfer(mhi_cntrl);
+
+	spin_lock_bh(&mhi_cntrl->state_lock);
+	/* Reset MMIO to signal host that the MHI_RESET is completed in endpoint */
+	mhi_ep_mmio_reset(mhi_cntrl);
+	cur_state = mhi_cntrl->mhi_state;
+	spin_unlock_bh(&mhi_cntrl->state_lock);
+
+	/*
+	 * Only proceed further if the reset is due to SYS_ERR. The host will
+	 * issue reset during shutdown also and we don't need to do re-init in
+	 * that case.
+	 */
+	if (cur_state == MHI_STATE_SYS_ERR) {
+		mhi_ep_mmio_init(mhi_cntrl);
+
+		/* Set AMSS EE before signaling ready state */
+		mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
+
+		/* All set, notify the host that we are ready */
+		ret = mhi_ep_set_ready_state(mhi_cntrl);
+		if (ret)
+			return;
+
+		dev_dbg(dev, "READY state notification sent to the host\n");
+
+		ret = mhi_ep_enable(mhi_cntrl);
+		if (ret) {
+			dev_err(dev, "Failed to enable MHI endpoint: %d\n", ret);
+			return;
+		}
+
+		enable_irq(mhi_cntrl->irq);
+	}
+}
+
 int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
@@ -827,6 +879,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
 
 	INIT_WORK(&mhi_cntrl->ring_work, mhi_ep_ring_worker);
 	INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
+	INIT_WORK(&mhi_cntrl->reset_work, mhi_ep_reset_worker);
 
 	mhi_cntrl->ring_wq = alloc_workqueue("mhi_ep_ring_wq", 0, 0);
 	if (!mhi_cntrl->ring_wq) {
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 4f86e7986c93..276d29fef465 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -75,6 +75,7 @@ struct mhi_ep_db_info {
  * @ring_wq: Dedicated workqueue for processing MHI rings
  * @state_work: State transition worker
  * @ring_work: Ring worker
+ * @reset_work: Worker for MHI Endpoint reset
  * @ch_db_list: List of queued channel doorbells
  * @st_transition_list: List of state transitions
  * @list_lock: Lock for protecting state transition and channel doorbell lists
@@ -126,6 +127,7 @@ struct mhi_ep_cntrl {
 	struct workqueue_struct	*ring_wq;
 	struct work_struct state_work;
 	struct work_struct ring_work;
+	struct work_struct reset_work;
 
 	struct list_head ch_db_list;
 	struct list_head st_transition_list;
-- 
2.25.1



* [PATCH v3 19/25] bus: mhi: ep: Add support for handling SYS_ERR condition
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (17 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 18/25] bus: mhi: ep: Add support for handling MHI_RESET Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:39   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 20/25] bus: mhi: ep: Add support for processing command ring Manivannan Sadhasivam
                   ` (6 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for handling the SYS_ERR (System Error) condition in the MHI
endpoint stack. The SYS_ERR flag will be asserted by the endpoint device
when it detects an internal error. The host will then issue a reset and
reinitialize MHI to recover from the error state.

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/internal.h |  1 +
 drivers/bus/mhi/ep/main.c     | 24 ++++++++++++++++++++++++
 drivers/bus/mhi/ep/sm.c       |  2 ++
 3 files changed, 27 insertions(+)

diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index ee8c5974f0c0..8654af7caf40 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -241,6 +241,7 @@ int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_stat
 int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl);
 
 /* MHI EP memory management functions */
 int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index ddedd0fb19aa..6378ac5c7e37 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -611,6 +611,30 @@ static void mhi_ep_reset_worker(struct work_struct *work)
 	}
 }
 
+/*
+ * We don't need to do anything special other than setting the MHI SYS_ERR
+ * state. The host will then reset all contexts and issue MHI RESET so that
+ * we can recover from the error state.
+ */
+void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	int ret;
+
+	/* If MHI EP is not enabled, nothing to do */
+	if (!mhi_cntrl->is_enabled)
+		return;
+
+	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
+	if (ret)
+		return;
+
+	/* Signal host that the device went to SYS_ERR state */
+	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_SYS_ERR);
+	if (ret)
+		dev_err(dev, "Failed sending SYS_ERR state change event: %d\n", ret);
+}
+
 int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
index 68e7f99b9137..9a75ecfe1adf 100644
--- a/drivers/bus/mhi/ep/sm.c
+++ b/drivers/bus/mhi/ep/sm.c
@@ -93,6 +93,7 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
 
 	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
 	if (ret) {
+		mhi_ep_handle_syserr(mhi_cntrl);
 		spin_unlock_bh(&mhi_cntrl->state_lock);
 		return ret;
 	}
@@ -128,6 +129,7 @@ int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
 	spin_lock_bh(&mhi_cntrl->state_lock);
 	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
 	if (ret) {
+		mhi_ep_handle_syserr(mhi_cntrl);
 		spin_unlock_bh(&mhi_cntrl->state_lock);
 		return ret;
 	}
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 20/25] bus: mhi: ep: Add support for processing command ring
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (18 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 19/25] bus: mhi: ep: Add support for handling SYS_ERR condition Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:40   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 21/25] bus: mhi: ep: Add support for reading from the host Manivannan Sadhasivam
                   ` (5 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for processing the command ring. The command ring is used by
the host to issue channel specific commands to the endpoint device. The
following commands are supported:

1. Start channel
2. Stop channel
3. Reset channel

Once the device receives the command doorbell interrupt from the host, it
executes the command and generates a command completion event for the
host in the primary event ring.
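All three command handlers in this patch update the cached channel context with the same mask-and-set pattern (`tmp &= ~CHAN_CTX_CHSTATE_MASK; tmp |= FIELD_PREP(...)`). A standalone sketch of that pattern, assuming the channel state occupies the low byte of chcfg (the real mask is defined in drivers/bus/mhi/ep/internal.h):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed for illustration: channel state in the low byte of chcfg. */
#define CHAN_CTX_CHSTATE_MASK	0xFFu

enum mhi_ch_state {
	MHI_CH_STATE_DISABLED = 0,
	MHI_CH_STATE_RUNNING = 2,
	MHI_CH_STATE_STOP = 4,
};

/* Equivalent of: tmp &= ~MASK; tmp |= FIELD_PREP(MASK, state);
 * FIELD_PREP reduces to a plain OR here because the mask shift is 0. */
static uint32_t chcfg_set_state(uint32_t chcfg, enum mhi_ch_state state)
{
	chcfg &= ~CHAN_CTX_CHSTATE_MASK;	/* clear the old state bits */
	chcfg |= (uint32_t)state;		/* install the new state */
	return chcfg;
}
```

The same helper serves START (RUNNING), STOP (STOP) and RESET (DISABLED); only the state argument differs.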

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 151 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 151 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 6378ac5c7e37..4c2ee517832c 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -21,6 +21,7 @@
 
 static DEFINE_IDA(mhi_ep_cntrl_ida);
 
+static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id);
 static int mhi_ep_destroy_device(struct device *dev, void *data);
 
 static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
@@ -185,6 +186,156 @@ void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t
 	mhi_cntrl->free_addr(mhi_cntrl, phys - offset, virt - offset, size);
 }
 
+int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	struct mhi_result result = {};
+	struct mhi_ep_chan *mhi_chan;
+	struct mhi_ep_ring *ch_ring;
+	u32 tmp, ch_id;
+	int ret;
+
+	ch_id = MHI_TRE_GET_CMD_CHID(el);
+	mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
+	ch_ring = &mhi_cntrl->mhi_chan[ch_id].ring;
+
+	switch (MHI_TRE_GET_CMD_TYPE(el)) {
+	case MHI_PKT_TYPE_START_CHAN_CMD:
+		dev_dbg(dev, "Received START command for channel (%d)\n", ch_id);
+
+		mutex_lock(&mhi_chan->lock);
+		/* Initialize and configure the corresponding channel ring */
+		if (!ch_ring->started) {
+			ret = mhi_ep_ring_start(mhi_cntrl, ch_ring,
+				(union mhi_ep_ring_ctx *)&mhi_cntrl->ch_ctx_cache[ch_id]);
+			if (ret) {
+				dev_err(dev, "Failed to start ring for channel (%d)\n", ch_id);
+				ret = mhi_ep_send_cmd_comp_event(mhi_cntrl,
+							MHI_EV_CC_UNDEFINED_ERR);
+				if (ret)
+					dev_err(dev, "Error sending completion event (%d)\n",
+						MHI_EV_CC_UNDEFINED_ERR);
+
+				goto err_unlock;
+			}
+		}
+
+		/* Enable DB for the channel */
+		mhi_ep_mmio_enable_chdb_a7(mhi_cntrl, ch_id);
+
+		/* Set channel state to RUNNING */
+		mhi_chan->state = MHI_CH_STATE_RUNNING;
+		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[ch_id].chcfg);
+		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_RUNNING);
+		mhi_cntrl->ch_ctx_cache[ch_id].chcfg = cpu_to_le32(tmp);
+
+		ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
+		if (ret) {
+			dev_err(dev, "Error sending command completion event (%d)\n",
+				MHI_EV_CC_SUCCESS);
+			goto err_unlock;
+		}
+
+		mutex_unlock(&mhi_chan->lock);
+
+		/*
+		 * Create MHI device only during UL channel start. Since the MHI
+		 * channels operate in a pair, we'll associate both UL and DL
+		 * channels to the same device.
+		 *
+		 * We also need to check for mhi_dev != NULL because the host
+		 * will issue the START_CHAN command during resume, and we
+		 * don't destroy the device during suspend.
+		 */
+		if (!(ch_id % 2) && !mhi_chan->mhi_dev) {
+			ret = mhi_ep_create_device(mhi_cntrl, ch_id);
+			if (ret) {
+				dev_err(dev, "Error creating device for channel (%d)\n", ch_id);
+				return ret;
+			}
+		}
+
+		break;
+	case MHI_PKT_TYPE_STOP_CHAN_CMD:
+		dev_dbg(dev, "Received STOP command for channel (%d)\n", ch_id);
+		if (!ch_ring->started) {
+			dev_err(dev, "Channel (%d) not opened\n", ch_id);
+			return -ENODEV;
+		}
+
+		mutex_lock(&mhi_chan->lock);
+		/* Disable DB for the channel */
+		mhi_ep_mmio_disable_chdb_a7(mhi_cntrl, ch_id);
+
+		/* Send channel disconnect status to client drivers */
+		result.transaction_status = -ENOTCONN;
+		result.bytes_xferd = 0;
+		mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+		/* Set channel state to STOP */
+		mhi_chan->state = MHI_CH_STATE_STOP;
+		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[ch_id].chcfg);
+		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_STOP);
+		mhi_cntrl->ch_ctx_cache[ch_id].chcfg = cpu_to_le32(tmp);
+
+		ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
+		if (ret) {
+			dev_err(dev, "Error sending command completion event (%d)\n",
+				MHI_EV_CC_SUCCESS);
+			goto err_unlock;
+		}
+
+		mutex_unlock(&mhi_chan->lock);
+		break;
+	case MHI_PKT_TYPE_RESET_CHAN_CMD:
+		dev_dbg(dev, "Received RESET command for channel (%d)\n", ch_id);
+		if (!ch_ring->started) {
+			dev_err(dev, "Channel (%d) not opened\n", ch_id);
+			return -ENODEV;
+		}
+
+		mutex_lock(&mhi_chan->lock);
+		/* Stop and reset the transfer ring */
+		mhi_ep_ring_reset(mhi_cntrl, ch_ring);
+
+		/* Send channel disconnect status to client driver */
+		result.transaction_status = -ENOTCONN;
+		result.bytes_xferd = 0;
+		mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+		/* Set channel state to DISABLED */
+		mhi_chan->state = MHI_CH_STATE_DISABLED;
+		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[ch_id].chcfg);
+		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_DISABLED);
+		mhi_cntrl->ch_ctx_cache[ch_id].chcfg = cpu_to_le32(tmp);
+
+		ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
+		if (ret) {
+			dev_err(dev, "Error sending command completion event (%d)\n",
+				MHI_EV_CC_SUCCESS);
+			goto err_unlock;
+		}
+
+		mutex_unlock(&mhi_chan->lock);
+		break;
+	default:
+		dev_err(dev, "Invalid command received: %d for channel (%d)\n",
+			MHI_TRE_GET_CMD_TYPE(el), ch_id);
+		return -EINVAL;
+	}
+
+	return 0;
+
+err_unlock:
+	mutex_unlock(&mhi_chan->lock);
+
+	return ret;
+}
+
 static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 21/25] bus: mhi: ep: Add support for reading from the host
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (19 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 20/25] bus: mhi: ep: Add support for processing command ring Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:40   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 22/25] bus: mhi: ep: Add support for processing transfer ring Manivannan Sadhasivam
                   ` (4 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Data transfer between the host and the endpoint device happens over the
transfer ring associated with each bi-directional channel pair. The host
defines the transfer ring by allocating memory for it. The read and write
pointer addresses of the transfer ring are stored in the channel context.

Once the host places elements in the transfer ring, it increments the
write pointer and rings the channel doorbell. The device will receive the
doorbell interrupt and process the transfer ring elements.

This commit adds support for reading the transfer ring elements from the
transfer ring up to the write pointer, incrementing the read pointer, and
finally sending a completion event to the host through the corresponding
event ring.
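The pointer bookkeeping above can be sketched as follows: the ring is empty when the local read offset catches up with the cached write offset, and the read offset wraps at the ring size (mirroring mhi_ep_ring_inc_index() and mhi_ep_queue_is_empty() in the patches below). This is an illustrative model with assumed field names, not the driver's actual struct:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal model of transfer ring read/write offsets; illustrative only. */
struct ring_model {
	size_t rd_offset;	/* device's local read offset */
	size_t wr_offset;	/* cached host write offset */
	size_t ring_size;	/* number of elements in the ring */
};

static bool ring_empty(const struct ring_model *r)
{
	return r->rd_offset == r->wr_offset;
}

/* Advance the read offset, wrapping at the ring boundary. */
static void ring_inc_rd(struct ring_model *r)
{
	r->rd_offset = (r->rd_offset + 1) % r->ring_size;
}

/* Device-side consumption loop: process elements until rd meets wr. */
static unsigned int drain(struct ring_model *r)
{
	unsigned int processed = 0;

	while (!ring_empty(r)) {
		ring_inc_rd(r);
		processed++;
	}
	return processed;
}
```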

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 103 ++++++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h    |   9 ++++
 2 files changed, 112 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 4c2ee517832c..b937c6cda9ba 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -336,6 +336,109 @@ int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element
 	return ret;
 }
 
+bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir)
+{
+	struct mhi_ep_chan *mhi_chan = (dir == DMA_FROM_DEVICE) ? mhi_dev->dl_chan :
+								mhi_dev->ul_chan;
+	struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct mhi_ep_ring *ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
+
+	return !!(ring->rd_offset == ring->wr_offset);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_queue_is_empty);
+
+static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
+				struct mhi_ep_ring *ring,
+				struct mhi_result *result,
+				u32 len)
+{
+	struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+	size_t bytes_to_read, read_offset, write_offset;
+	struct device *dev = &mhi_cntrl->mhi_dev->dev;
+	struct mhi_ep_ring_element *el;
+	bool td_done = false;
+	void *write_to_loc;
+	u64 read_from_loc;
+	u32 buf_remaining;
+	int ret;
+
+	buf_remaining = len;
+
+	do {
+		/* Don't process the transfer ring if the channel is not in RUNNING state */
+		if (mhi_chan->state != MHI_CH_STATE_RUNNING)
+			return -ENODEV;
+
+		el = &ring->ring_cache[ring->rd_offset];
+
+		/* Check if there is data pending to be read from previous read operation */
+		if (mhi_chan->tre_bytes_left) {
+			dev_dbg(dev, "TRE bytes remaining: %d\n", mhi_chan->tre_bytes_left);
+			bytes_to_read = min(buf_remaining, mhi_chan->tre_bytes_left);
+		} else {
+			mhi_chan->tre_loc = MHI_EP_TRE_GET_PTR(el);
+			mhi_chan->tre_size = MHI_EP_TRE_GET_LEN(el);
+			mhi_chan->tre_bytes_left = mhi_chan->tre_size;
+
+			bytes_to_read = min(buf_remaining, mhi_chan->tre_size);
+		}
+
+		read_offset = mhi_chan->tre_size - mhi_chan->tre_bytes_left;
+		write_offset = len - buf_remaining;
+		read_from_loc = mhi_chan->tre_loc + read_offset;
+		write_to_loc = result->buf_addr + write_offset;
+
+		dev_dbg(dev, "Reading %zd bytes from channel (%d)\n", bytes_to_read, ring->ch_id);
+		ret = mhi_cntrl->read_from_host(mhi_cntrl, read_from_loc, write_to_loc,
+						bytes_to_read);
+		if (ret < 0)
+			return ret;
+
+		buf_remaining -= bytes_to_read;
+		mhi_chan->tre_bytes_left -= bytes_to_read;
+
+		/*
+		 * Once the TRE (Transfer Ring Element) of a TD (Transfer Descriptor) has been
+		 * read completely:
+		 *
+		 * 1. Send completion event to the host based on the flags set in TRE.
+		 * 2. Increment the local read offset of the transfer ring.
+		 */
+		if (!mhi_chan->tre_bytes_left) {
+			/*
+			 * The host will split the data packet into multiple TREs if it can't fit
+			 * the packet in a single TRE. In that case, CHAIN flag will be set by the
+			 * host for all TREs except the last one.
+			 */
+			if (MHI_EP_TRE_GET_CHAIN(el)) {
+				/*
+				 * IEOB (Interrupt on End of Block) flag will be set by the host if
+				 * it expects the completion event for all TREs of a TD.
+				 */
+				if (MHI_EP_TRE_GET_IEOB(el))
+					mhi_ep_send_completion_event(mhi_cntrl,
+					ring, MHI_EP_TRE_GET_LEN(el), MHI_EV_CC_EOB);
+			} else {
+				/*
+				 * IEOT (Interrupt on End of Transfer) flag will be set by the host
+				 * for the last TRE of the TD and expects the completion event for
+				 * the same.
+				 */
+				if (MHI_EP_TRE_GET_IEOT(el))
+					mhi_ep_send_completion_event(mhi_cntrl,
+					ring, MHI_EP_TRE_GET_LEN(el), MHI_EV_CC_EOT);
+				td_done = true;
+			}
+
+			mhi_ep_ring_inc_index(ring);
+		}
+
+		result->bytes_xferd += bytes_to_read;
+	} while (buf_remaining && !td_done);
+
+	return 0;
+}
+
 static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 276d29fef465..aaf4b6942037 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -268,4 +268,13 @@ int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
  */
 void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
 
+/**
+ * mhi_ep_queue_is_empty - Determine whether the transfer queue is empty
+ * @mhi_dev: Device associated with the channels
+ * @dir: DMA direction for the channel
+ *
+ * Return: true if the queue is empty, false otherwise.
+ */
+bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir);
+
 #endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 22/25] bus: mhi: ep: Add support for processing transfer ring
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (20 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 21/25] bus: mhi: ep: Add support for reading from the host Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:40   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 23/25] bus: mhi: ep: Add support for queueing SKBs to the host Manivannan Sadhasivam
                   ` (3 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for processing the transfer ring from the host. For the
transfer ring associated with a DL channel, the xfer callback will simply
be invoked. For a UL channel, the ring elements will be read into a
buffer up to the write pointer and then passed to the client driver using
the xfer callback.

The client drivers should provide the callbacks for both UL and DL
channels during registration.
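The UL/DL split in mhi_ep_process_tre_ring() is decided by channel-ID parity: even IDs are UL (host-to-device) and odd IDs are DL (device-to-host), which is also why mhi_ep_create_device() is called only for even IDs in the command-ring patch. A trivial helper capturing that convention (illustrative only):

```c
#include <assert.h>
#include <stdbool.h>

/* Channel-ID parity convention used throughout this series:
 * even = UL (host-to-device), odd = DL (device-to-host). */
static bool mhi_ch_is_ul(unsigned int ch_id)
{
	return (ch_id % 2) == 0;
}
```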

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 49 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index b937c6cda9ba..baf383a4857b 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -439,6 +439,55 @@ static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
 	return 0;
 }
 
+int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el)
+{
+	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+	struct mhi_result result = {};
+	u32 len = MHI_EP_DEFAULT_MTU;
+	struct mhi_ep_chan *mhi_chan;
+	int ret;
+
+	mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+
+	/*
+	 * Bail out if transfer callback is not registered for the channel.
+	 * This is most likely due to the client driver not loaded at this point.
+	 */
+	if (!mhi_chan->xfer_cb) {
+		dev_err(&mhi_chan->mhi_dev->dev, "Client driver not available\n");
+		return -ENODEV;
+	}
+
+	if (ring->ch_id % 2) {
+		/* DL channel */
+		result.dir = mhi_chan->dir;
+		mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+	} else {
+		/* UL channel */
+		do {
+			result.buf_addr = kzalloc(len, GFP_KERNEL);
+			if (!result.buf_addr)
+				return -ENOMEM;
+
+			ret = mhi_ep_read_channel(mhi_cntrl, ring, &result, len);
+			if (ret < 0) {
+				dev_err(&mhi_chan->mhi_dev->dev, "Failed to read channel\n");
+				kfree(result.buf_addr);
+				return ret;
+			}
+
+			result.dir = mhi_chan->dir;
+			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+			kfree(result.buf_addr);
+			result.bytes_xferd = 0;
+
+			/* Read until the ring becomes empty */
+		} while (!mhi_ep_queue_is_empty(mhi_chan->mhi_dev, DMA_TO_DEVICE));
+	}
+
+	return 0;
+}
+
 static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 23/25] bus: mhi: ep: Add support for queueing SKBs to the host
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (21 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 22/25] bus: mhi: ep: Add support for processing transfer ring Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:40   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 24/25] bus: mhi: ep: Add support for suspending and resuming channels Manivannan Sadhasivam
                   ` (2 subsequent siblings)
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for queueing SKBs to the host over the transfer ring of the
relevant channel. The mhi_ep_queue_skb() API will be used by the client
networking drivers to queue SKBs to the host over the MHI bus.

The host will periodically add ring elements to the transfer ring for the
device, and the device will write SKBs to those ring elements. If a
single SKB doesn't fit in a ring element (TRE), it will be placed in
multiple ring elements and an OVERFLOW event will be sent for all ring
elements except the last one. For the last ring element, an EOT event
will be sent, indicating the packet boundary.
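The completion-code choice above (OVERFLOW for every chunk but the last, EOT for the last) can be sketched in isolation. This is an illustrative model of the split logic, not the driver's code path, and the fixed-size TRE assumption is for simplicity:

```c
#include <assert.h>
#include <stddef.h>

/* Completion codes, mirroring MHI_EV_CC_OVERFLOW / MHI_EV_CC_EOT. */
enum cc { CC_EOT, CC_OVERFLOW };

/* Split `len` bytes across TREs of `tre_len` bytes each; record the
 * completion code sent for each chunk and return the TRE count. */
static size_t queue_packet(size_t len, size_t tre_len,
			   enum cc codes[], size_t max_tres)
{
	size_t n = 0;

	while (len && n < max_tres) {
		size_t chunk = len < tre_len ? len : tre_len;

		len -= chunk;
		/* Remaining bytes mean the packet continues in the next TRE. */
		codes[n++] = len ? CC_OVERFLOW : CC_EOT;
	}
	return n;
}
```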

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c | 102 ++++++++++++++++++++++++++++++++++++++
 include/linux/mhi_ep.h    |  13 +++++
 2 files changed, 115 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index baf383a4857b..e4186b012257 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -488,6 +488,108 @@ int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element
 	return 0;
 }
 
+int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir,
+		     struct sk_buff *skb, size_t len, enum mhi_flags mflags)
+{
+	struct mhi_ep_chan *mhi_chan = (dir == DMA_FROM_DEVICE) ? mhi_dev->dl_chan :
+								mhi_dev->ul_chan;
+	struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
+	struct device *dev = &mhi_chan->mhi_dev->dev;
+	struct mhi_ep_ring_element *el;
+	struct mhi_ep_ring *ring;
+	size_t bytes_to_write;
+	enum mhi_ev_ccs code;
+	void *read_from_loc;
+	u32 buf_remaining;
+	u64 write_to_loc;
+	u32 tre_len;
+	int ret = 0;
+
+	if (dir == DMA_TO_DEVICE)
+		return -EINVAL;
+
+	buf_remaining = len;
+	ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
+
+	mutex_lock(&mhi_chan->lock);
+
+	do {
+		/* Don't process the transfer ring if the channel is not in RUNNING state */
+		if (mhi_chan->state != MHI_CH_STATE_RUNNING) {
+			dev_err(dev, "Channel not available\n");
+			ret = -ENODEV;
+			goto err_exit;
+		}
+
+		if (mhi_ep_queue_is_empty(mhi_dev, dir)) {
+			dev_err(dev, "TRE not available!\n");
+			ret = -EINVAL;
+			goto err_exit;
+		}
+
+		el = &ring->ring_cache[ring->rd_offset];
+		tre_len = MHI_EP_TRE_GET_LEN(el);
+		if (skb->len > tre_len) {
+			dev_err(dev, "Buffer size (%d) is too large for TRE (%d)!\n",
+				skb->len, tre_len);
+			ret = -ENOMEM;
+			goto err_exit;
+		}
+
+		bytes_to_write = min(buf_remaining, tre_len);
+		read_from_loc = skb->data;
+		write_to_loc = MHI_EP_TRE_GET_PTR(el);
+
+		ret = mhi_cntrl->write_to_host(mhi_cntrl, read_from_loc, write_to_loc,
+					       bytes_to_write);
+		if (ret < 0)
+			goto err_exit;
+
+		buf_remaining -= bytes_to_write;
+		/*
+		 * For all TREs queued by the host for DL channel, only the EOT flag will be set.
+		 * If the packet doesn't fit into a single TRE, send the OVERFLOW event to
+		 * the host so that the host can adjust the packet boundary to next TREs. Else send
+		 * the EOT event to the host indicating the packet boundary.
+		 */
+		if (buf_remaining)
+			code = MHI_EV_CC_OVERFLOW;
+		else
+			code = MHI_EV_CC_EOT;
+
+		ret = mhi_ep_send_completion_event(mhi_cntrl, ring, bytes_to_write, code);
+		if (ret) {
+			dev_err(dev, "Error sending completion event\n");
+			goto err_exit;
+		}
+
+		mhi_ep_ring_inc_index(ring);
+	} while (buf_remaining);
+
+	/*
+	 * During high network traffic, sometimes the DL doorbell interrupt from the host is missed
+	 * by the endpoint. So manually check for the write pointer update here so that we don't run
+	 * out of buffer due to missing interrupts.
+	 */
+	if (ring->rd_offset + 1 == ring->wr_offset) {
+		ret = mhi_ep_update_wr_offset(ring);
+		if (ret) {
+			dev_err(dev, "Error updating write pointer\n");
+			goto err_exit;
+		}
+	}
+
+	mutex_unlock(&mhi_chan->lock);
+
+	return 0;
+
+err_exit:
+	mutex_unlock(&mhi_chan->lock);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_ep_queue_skb);
+
 static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
 {
 	struct device *dev = &mhi_cntrl->mhi_dev->dev;
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index aaf4b6942037..75cfbf0c6fb0 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -277,4 +277,17 @@ void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
  */
 bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir);
 
+/**
+ * mhi_ep_queue_skb - Send SKBs to host over MHI Endpoint
+ * @mhi_dev: Device associated with the channels
+ * @dir: DMA direction for the channel
+ * @skb: Buffer for holding SKBs
+ * @len: Buffer length
+ * @mflags: MHI Endpoint transfer flags used for the transfer
+ *
+ * Return: 0 if the SKB has been sent successfully, a negative error code otherwise.
+ */
+int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir,
+		     struct sk_buff *skb, size_t len, enum mhi_flags mflags);
+
 #endif
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 24/25] bus: mhi: ep: Add support for suspending and resuming channels
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (22 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 23/25] bus: mhi: ep: Add support for queueing SKBs to the host Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:40   ` Alex Elder
  2022-02-12 18:21 ` [PATCH v3 25/25] bus: mhi: ep: Add uevent support for module autoloading Manivannan Sadhasivam
  2022-02-15 20:01 ` [PATCH v3 00/25] Add initial support for MHI endpoint stack Alex Elder
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add support for suspending and resuming channels in the MHI endpoint
stack. The channels will be moved to the suspended state during the M3
state transition and resumed during the M0 transition.
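The channel walk in the suspend/resume paths is deliberately selective: only channels currently RUNNING are suspended on M3 entry, and only SUSPENDED ones are resumed on M0, so channels in other states (e.g. DISABLED) are left untouched. A minimal sketch of that invariant (illustrative only, not the driver's locking or context-cache handling):

```c
#include <assert.h>
#include <stddef.h>

/* Subset of channel states relevant to suspend/resume. */
enum ch_state { CH_DISABLED, CH_RUNNING, CH_SUSPENDED };

/* M3 entry: RUNNING -> SUSPENDED, everything else untouched. */
static void suspend_channels(enum ch_state ch[], size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (ch[i] == CH_RUNNING)
			ch[i] = CH_SUSPENDED;
}

/* M0 entry: SUSPENDED -> RUNNING, everything else untouched. */
static void resume_channels(enum ch_state ch[], size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (ch[i] == CH_SUSPENDED)
			ch[i] = CH_RUNNING;
}
```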

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/internal.h |  2 ++
 drivers/bus/mhi/ep/main.c     | 58 +++++++++++++++++++++++++++++++++++
 drivers/bus/mhi/ep/sm.c       |  4 +++
 3 files changed, 64 insertions(+)

diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 8654af7caf40..e23d2fd04282 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -242,6 +242,8 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
 int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
 void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_resume_channels(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_suspend_channels(struct mhi_ep_cntrl *mhi_cntrl);
 
 /* MHI EP memory management functions */
 int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index e4186b012257..315409705b91 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -1106,6 +1106,64 @@ void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl)
 }
 EXPORT_SYMBOL_GPL(mhi_ep_power_down);
 
+void mhi_ep_suspend_channels(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct mhi_ep_chan *mhi_chan;
+	u32 tmp;
+	int i;
+
+	for (i = 0; i < mhi_cntrl->max_chan; i++) {
+		mhi_chan = &mhi_cntrl->mhi_chan[i];
+
+		if (!mhi_chan->mhi_dev)
+			continue;
+
+		mutex_lock(&mhi_chan->lock);
+		/* Skip if the channel is not currently running */
+		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[i].chcfg);
+		if (FIELD_GET(CHAN_CTX_CHSTATE_MASK, tmp) != MHI_CH_STATE_RUNNING) {
+			mutex_unlock(&mhi_chan->lock);
+			continue;
+		}
+
+		dev_dbg(&mhi_chan->mhi_dev->dev, "Suspending channel\n");
+		/* Set channel state to SUSPENDED */
+		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_SUSPENDED);
+		mhi_cntrl->ch_ctx_cache[i].chcfg = cpu_to_le32(tmp);
+		mutex_unlock(&mhi_chan->lock);
+	}
+}
+
+void mhi_ep_resume_channels(struct mhi_ep_cntrl *mhi_cntrl)
+{
+	struct mhi_ep_chan *mhi_chan;
+	u32 tmp;
+	int i;
+
+	for (i = 0; i < mhi_cntrl->max_chan; i++) {
+		mhi_chan = &mhi_cntrl->mhi_chan[i];
+
+		if (!mhi_chan->mhi_dev)
+			continue;
+
+		mutex_lock(&mhi_chan->lock);
+		/* Skip if the channel is not currently suspended */
+		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[i].chcfg);
+		if (FIELD_GET(CHAN_CTX_CHSTATE_MASK, tmp) != MHI_CH_STATE_SUSPENDED) {
+			mutex_unlock(&mhi_chan->lock);
+			continue;
+		}
+
+		dev_dbg(&mhi_chan->mhi_dev->dev, "Resuming channel\n");
+		/* Set channel state to RUNNING */
+		tmp &= ~CHAN_CTX_CHSTATE_MASK;
+		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_RUNNING);
+		mhi_cntrl->ch_ctx_cache[i].chcfg = cpu_to_le32(tmp);
+		mutex_unlock(&mhi_chan->lock);
+	}
+}
+
 static void mhi_ep_release_device(struct device *dev)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
index 9a75ecfe1adf..e24ba2d85e13 100644
--- a/drivers/bus/mhi/ep/sm.c
+++ b/drivers/bus/mhi/ep/sm.c
@@ -88,8 +88,11 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
 	enum mhi_state old_state;
 	int ret;
 
+	/* If MHI is in M3, resume suspended channels */
 	spin_lock_bh(&mhi_cntrl->state_lock);
 	old_state = mhi_cntrl->mhi_state;
+	if (old_state == MHI_STATE_M3)
+		mhi_ep_resume_channels(mhi_cntrl);
 
 	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
 	if (ret) {
@@ -135,6 +138,7 @@ int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
 	}
 
 	spin_unlock_bh(&mhi_cntrl->state_lock);
+	mhi_ep_suspend_channels(mhi_cntrl);
 
 	/* Signal host that the device moved to M3 */
 	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* [PATCH v3 25/25] bus: mhi: ep: Add uevent support for module autoloading
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (23 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 24/25] bus: mhi: ep: Add support for suspending and resuming channels Manivannan Sadhasivam
@ 2022-02-12 18:21 ` Manivannan Sadhasivam
  2022-02-15 22:40   ` Alex Elder
  2022-02-15 20:01 ` [PATCH v3 00/25] Add initial support for MHI endpoint stack Alex Elder
  25 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:21 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder,
	Manivannan Sadhasivam

Add uevent support to the MHI endpoint bus so that client drivers can be
autoloaded by udev when MHI endpoint devices get created. The client
drivers are expected to provide MODULE_DEVICE_TABLE with the mhi_device_id
table so that the alias can be exported.

The MHI endpoint stack reuses the mhi_device_id structure of the MHI bus.
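The uevent hook and the file2alias entry both produce the same string: "mhi_ep:" followed by the channel/device name, so the MODALIAS uevent variable matches the alias compiled into the module. A sketch of that formatting, using the MHI_EP_DEVICE_MODALIAS_FMT string added by this patch (helper name is invented for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format string added to include/linux/mod_devicetable.h by this patch. */
#define MHI_EP_DEVICE_MODALIAS_FMT "mhi_ep:%s"

/* Build the modalias string for a device name into buf. */
static void make_modalias(char *buf, size_t len, const char *name)
{
	snprintf(buf, len, MHI_EP_DEVICE_MODALIAS_FMT, name);
}
```

With this in place, a client driver whose id table names the IP_SW0 channel gets the alias "mhi_ep:IP_SW0", which udev matches against the MODALIAS uevent to load the module.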

Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
---
 drivers/bus/mhi/ep/main.c       |  9 +++++++++
 include/linux/mod_devicetable.h |  2 ++
 scripts/mod/file2alias.c        | 10 ++++++++++
 3 files changed, 21 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 315409705b91..8889382ee8d0 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -1546,6 +1546,14 @@ void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv)
 }
 EXPORT_SYMBOL_GPL(mhi_ep_driver_unregister);
 
+static int mhi_ep_uevent(struct device *dev, struct kobj_uevent_env *env)
+{
+	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+
+	return add_uevent_var(env, "MODALIAS=" MHI_EP_DEVICE_MODALIAS_FMT,
+					mhi_dev->name);
+}
+
 static int mhi_ep_match(struct device *dev, struct device_driver *drv)
 {
 	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -1572,6 +1580,7 @@ struct bus_type mhi_ep_bus_type = {
 	.name = "mhi_ep",
 	.dev_name = "mhi_ep",
 	.match = mhi_ep_match,
+	.uevent = mhi_ep_uevent,
 };
 
 static int __init mhi_ep_init(void)
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index 4bb71979a8fd..0cff19bd72bf 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -835,6 +835,8 @@ struct wmi_device_id {
 #define MHI_DEVICE_MODALIAS_FMT "mhi:%s"
 #define MHI_NAME_SIZE 32
 
+#define MHI_EP_DEVICE_MODALIAS_FMT "mhi_ep:%s"
+
 /**
  * struct mhi_device_id - MHI device identification
  * @chan: MHI channel name
diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
index 5258247d78ac..d9d6a31446ea 100644
--- a/scripts/mod/file2alias.c
+++ b/scripts/mod/file2alias.c
@@ -1391,6 +1391,15 @@ static int do_mhi_entry(const char *filename, void *symval, char *alias)
 	return 1;
 }
 
+/* Looks like: mhi_ep:S */
+static int do_mhi_ep_entry(const char *filename, void *symval, char *alias)
+{
+	DEF_FIELD_ADDR(symval, mhi_device_id, chan);
+	sprintf(alias, MHI_EP_DEVICE_MODALIAS_FMT, *chan);
+
+	return 1;
+}
+
 /* Looks like: ishtp:{guid} */
 static int do_ishtp_entry(const char *filename, void *symval, char *alias)
 {
@@ -1519,6 +1528,7 @@ static const struct devtable devtable[] = {
 	{"tee", SIZE_tee_client_device_id, do_tee_entry},
 	{"wmi", SIZE_wmi_device_id, do_wmi_entry},
 	{"mhi", SIZE_mhi_device_id, do_mhi_entry},
+	{"mhi_ep", SIZE_mhi_device_id, do_mhi_ep_entry},
 	{"auxiliary", SIZE_auxiliary_device_id, do_auxiliary_entry},
 	{"ssam", SIZE_ssam_device_id, do_ssam_entry},
 	{"dfl", SIZE_dfl_device_id, do_dfl_entry},
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers
  2022-02-12 18:21 ` [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers Manivannan Sadhasivam
@ 2022-02-12 18:32   ` Manivannan Sadhasivam
  2022-02-15  1:10   ` Hemant Kumar
  2022-02-15 20:02   ` Alex Elder
  2 siblings, 0 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-12 18:32 UTC (permalink / raw)
  To: mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder

On Sat, Feb 12, 2022 at 11:51:01PM +0530, Manivannan Sadhasivam wrote:
> This commit adds support for registering MHI endpoint client drivers
> with the MHI endpoint stack. MHI endpoint client drivers bind to one
> or more MHI endpoint devices in order to send and receive the upper-layer
> protocol packets like IP packets, modem control messages, and diagnostics
> messages over MHI bus.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>  drivers/bus/mhi/ep/main.c | 86 +++++++++++++++++++++++++++++++++++++++
>  include/linux/mhi_ep.h    | 53 ++++++++++++++++++++++++
>  2 files changed, 139 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index b006011d025d..f66404181972 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -196,9 +196,89 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
>  }
>  EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
>  
> +static int mhi_ep_driver_probe(struct device *dev)
> +{
> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
> +	struct mhi_ep_chan *ul_chan = mhi_dev->ul_chan;
> +	struct mhi_ep_chan *dl_chan = mhi_dev->dl_chan;
> +
> +	/* Client drivers should have callbacks for both channels */
> +	if (!mhi_drv->ul_xfer_cb || !mhi_drv->dl_xfer_cb)
> +		return -EINVAL;
> +

Hmm, I had a change that moved this check to __mhi_ep_driver_register(), but I
missed applying it. Will do so in the next iteration.

Thanks,
Mani

> +	ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
> +	dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
> +
> +	return mhi_drv->probe(mhi_dev, mhi_dev->id);
> +}
> +
> +static int mhi_ep_driver_remove(struct device *dev)
> +{
> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
> +	struct mhi_result result = {};
> +	struct mhi_ep_chan *mhi_chan;
> +	int dir;
> +
> +	/* Skip if it is a controller device */
> +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> +		return 0;
> +
> +	/* Disconnect the channels associated with the driver */
> +	for (dir = 0; dir < 2; dir++) {
> +		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
> +
> +		if (!mhi_chan)
> +			continue;
> +
> +		mutex_lock(&mhi_chan->lock);
> +		/* Send channel disconnect status to the client driver */
> +		if (mhi_chan->xfer_cb) {
> +			result.transaction_status = -ENOTCONN;
> +			result.bytes_xferd = 0;
> +			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> +		}
> +
> +		/* Set channel state to DISABLED */
> +		mhi_chan->state = MHI_CH_STATE_DISABLED;
> +		mhi_chan->xfer_cb = NULL;
> +		mutex_unlock(&mhi_chan->lock);
> +	}
> +
> +	/* Remove the client driver now */
> +	mhi_drv->remove(mhi_dev);
> +
> +	return 0;
> +}
> +
> +int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner)
> +{
> +	struct device_driver *driver = &mhi_drv->driver;
> +
> +	if (!mhi_drv->probe || !mhi_drv->remove)
> +		return -EINVAL;
> +
> +	driver->bus = &mhi_ep_bus_type;
> +	driver->owner = owner;
> +	driver->probe = mhi_ep_driver_probe;
> +	driver->remove = mhi_ep_driver_remove;
> +
> +	return driver_register(driver);
> +}
> +EXPORT_SYMBOL_GPL(__mhi_ep_driver_register);
> +
> +void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv)
> +{
> +	driver_unregister(&mhi_drv->driver);
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_driver_unregister);
> +
>  static int mhi_ep_match(struct device *dev, struct device_driver *drv)
>  {
>  	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(drv);
> +	const struct mhi_device_id *id;
>  
>  	/*
>  	 * If the device is a controller type then there is no client driver
> @@ -207,6 +287,12 @@ static int mhi_ep_match(struct device *dev, struct device_driver *drv)
>  	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
>  		return 0;
>  
> +	for (id = mhi_drv->id_table; id->chan[0]; id++)
> +		if (!strcmp(mhi_dev->name, id->chan)) {
> +			mhi_dev->id = id;
> +			return 1;
> +		}
> +
>  	return 0;
>  };
>  
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 20238e9df1b3..da865f9d3646 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -122,7 +122,60 @@ struct mhi_ep_device {
>  	enum mhi_device_type dev_type;
>  };
>  
> +/**
> + * struct mhi_ep_driver - Structure representing a MHI Endpoint client driver
> + * @id_table: Pointer to MHI Endpoint device ID table
> + * @driver: Device driver model driver
> + * @probe: CB function for client driver probe function
> + * @remove: CB function for client driver remove function
> + * @ul_xfer_cb: CB function for UL data transfer
> + * @dl_xfer_cb: CB function for DL data transfer
> + */
> +struct mhi_ep_driver {
> +	const struct mhi_device_id *id_table;
> +	struct device_driver driver;
> +	int (*probe)(struct mhi_ep_device *mhi_ep,
> +		     const struct mhi_device_id *id);
> +	void (*remove)(struct mhi_ep_device *mhi_ep);
> +	void (*ul_xfer_cb)(struct mhi_ep_device *mhi_dev,
> +			   struct mhi_result *result);
> +	void (*dl_xfer_cb)(struct mhi_ep_device *mhi_dev,
> +			   struct mhi_result *result);
> +};
> +
>  #define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
> +#define to_mhi_ep_driver(drv) container_of(drv, struct mhi_ep_driver, driver)
> +
> +/*
> + * module_mhi_ep_driver() - Helper macro for drivers that don't do
> + * anything special other than using default mhi_ep_driver_register() and
> + * mhi_ep_driver_unregister().  This eliminates a lot of boilerplate.
> + * Each module may only use this macro once.
> + */
> +#define module_mhi_ep_driver(mhi_drv) \
> +	module_driver(mhi_drv, mhi_ep_driver_register, \
> +		      mhi_ep_driver_unregister)
> +
> +/*
> + * Macro to avoid include chaining to get THIS_MODULE
> + */
> +#define mhi_ep_driver_register(mhi_drv) \
> +	__mhi_ep_driver_register(mhi_drv, THIS_MODULE)
> +
> +/**
> + * __mhi_ep_driver_register - Register a driver with MHI Endpoint bus
> + * @mhi_drv: Driver to be associated with the device
> + * @owner: The module owner
> + *
> + * Return: 0 if driver registrations succeeds, a negative error code otherwise.
> + */
> +int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner);
> +
> +/**
> + * mhi_ep_driver_unregister - Unregister a driver from MHI Endpoint bus
> + * @mhi_drv: Driver associated with the device
> + */
> +void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv);
>  
>  /**
>   * mhi_ep_register_controller - Register MHI Endpoint controller
> -- 
> 2.25.1
> 


* Re: [PATCH v3 04/25] bus: mhi: Move common MHI definitions out of host directory
  2022-02-12 18:20 ` [PATCH v3 04/25] bus: mhi: Move common MHI definitions out of host directory Manivannan Sadhasivam
@ 2022-02-15  0:28   ` Hemant Kumar
  2022-02-15 20:02   ` Alex Elder
  1 sibling, 0 replies; 92+ messages in thread
From: Hemant Kumar @ 2022-02-15  0:28 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder



On 2/12/2022 10:20 AM, Manivannan Sadhasivam wrote:
> Move the common MHI definitions in host "internal.h" to "common.h" so
> that the endpoint code can make use of them. This also avoids
> duplicating the definitions in the endpoint stack.
> 
> Still, the MHI register definitions are not moved since the offsets
> vary between host and endpoint.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora 
Forum, a Linux Foundation Collaborative Project


* Re: [PATCH v3 05/25] bus: mhi: Make mhi_state_str[] array static inline and move to common.h
  2022-02-12 18:20 ` [PATCH v3 05/25] bus: mhi: Make mhi_state_str[] array static inline and move to common.h Manivannan Sadhasivam
@ 2022-02-15  0:31   ` Hemant Kumar
  2022-02-15 20:02   ` Alex Elder
  1 sibling, 0 replies; 92+ messages in thread
From: Hemant Kumar @ 2022-02-15  0:31 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder



On 2/12/2022 10:20 AM, Manivannan Sadhasivam wrote:
> The mhi_state_str[] array could be used by the MHI endpoint stack also. So
> let's convert the array into a "static inline" function and move it inside
> the "common.h" header so that the endpoint stack can also make use of it.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora 
Forum, a Linux Foundation Collaborative Project


* Re: [PATCH v3 06/25] bus: mhi: Cleanup the register definitions used in headers
  2022-02-12 18:20 ` [PATCH v3 06/25] bus: mhi: Cleanup the register definitions used in headers Manivannan Sadhasivam
@ 2022-02-15  0:37   ` Hemant Kumar
  2022-02-15 20:02   ` Alex Elder
  1 sibling, 0 replies; 92+ messages in thread
From: Hemant Kumar @ 2022-02-15  0:37 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder



On 2/12/2022 10:20 AM, Manivannan Sadhasivam wrote:
> Cleanup includes:
> 
> 1. Moving the MHI register definitions to common.h header with REG_ prefix
>     and using them in the host/internal.h file as an alias. This makes it
>     possible to reuse the register definitions in EP stack that differs by
>     a fixed offset.
> 2. Using the GENMASK macro for masks
> 3. Removing brackets for single values
> 4. Using lowercase for hex values
> 5. Using two digits for hex values where applicable
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---

Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora 
Forum, a Linux Foundation Collaborative Project


* Re: [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers
  2022-02-12 18:21 ` [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers Manivannan Sadhasivam
@ 2022-02-15  1:04   ` Hemant Kumar
  2022-02-16 17:33     ` Manivannan Sadhasivam
  2022-02-15 20:02   ` Alex Elder
  1 sibling, 1 reply; 92+ messages in thread
From: Hemant Kumar @ 2022-02-15  1:04 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder

Hi Mani,

On 2/12/2022 10:21 AM, Manivannan Sadhasivam wrote:
> This commit adds support for registering MHI endpoint controller drivers
> with the MHI endpoint stack. MHI endpoint controller drivers manage
> the interaction with host machines such as x86. They are also the
> MHI endpoint bus masters, in charge of managing the physical link between the
> host and endpoint device.
> 
> The endpoint controller driver encloses all information about the
> underlying physical bus like PCIe. The registration process involves
> parsing the channel configuration and allocating an MHI EP device.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>   drivers/bus/mhi/Kconfig       |   1 +
>   drivers/bus/mhi/Makefile      |   3 +
>   drivers/bus/mhi/ep/Kconfig    |  10 ++
>   drivers/bus/mhi/ep/Makefile   |   2 +
>   drivers/bus/mhi/ep/internal.h | 160 +++++++++++++++++++++++
>   drivers/bus/mhi/ep/main.c     | 234 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        | 143 +++++++++++++++++++++
>   7 files changed, 553 insertions(+)
>   create mode 100644 drivers/bus/mhi/ep/Kconfig
>   create mode 100644 drivers/bus/mhi/ep/Makefile
>   create mode 100644 drivers/bus/mhi/ep/internal.h
>   create mode 100644 drivers/bus/mhi/ep/main.c
>   create mode 100644 include/linux/mhi_ep.h
> 
> diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
> index 4748df7f9cd5..b39a11e6c624 100644
> --- a/drivers/bus/mhi/Kconfig
> +++ b/drivers/bus/mhi/Kconfig
> @@ -6,3 +6,4 @@
>   #
>   
>   source "drivers/bus/mhi/host/Kconfig"
> +source "drivers/bus/mhi/ep/Kconfig"
> diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
> index 5f5708a249f5..46981331b38f 100644
> --- a/drivers/bus/mhi/Makefile
> +++ b/drivers/bus/mhi/Makefile
> @@ -1,2 +1,5 @@
>   # Host MHI stack
>   obj-y += host/
> +
> +# Endpoint MHI stack
> +obj-y += ep/
> diff --git a/drivers/bus/mhi/ep/Kconfig b/drivers/bus/mhi/ep/Kconfig
> new file mode 100644
> index 000000000000..229c71397b30
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/Kconfig
> @@ -0,0 +1,10 @@
> +config MHI_BUS_EP
> +	tristate "Modem Host Interface (MHI) bus Endpoint implementation"
> +	help
> +	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
> +	  communication protocol used by the host processors to control
> +	  and communicate with modem devices over a high speed peripheral
> +	  bus or shared memory.
> +
> +	  MHI_BUS_EP implements the MHI protocol for the endpoint devices
> +	  like SDX55 modem connected to the host machine over PCIe.
> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> new file mode 100644
> index 000000000000..64e29252b608
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -0,0 +1,2 @@
> +obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> +mhi_ep-y := main.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> new file mode 100644
> index 000000000000..e313a2546664
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -0,0 +1,160 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2021, Linaro Ltd.
> + *
> + */
> +
> +#ifndef _MHI_EP_INTERNAL_
> +#define _MHI_EP_INTERNAL_
> +
> +#include <linux/bitfield.h>
> +
> +#include "../common.h"
> +
> +extern struct bus_type mhi_ep_bus_type;
> +
> +#define MHI_REG_OFFSET				0x100
> +#define BHI_REG_OFFSET				0x200
> +
> +/* MHI registers */
> +#define MHIREGLEN				(MHI_REG_OFFSET + REG_MHIREGLEN)
> +#define MHIVER					(MHI_REG_OFFSET + REG_MHIVER)
> +#define MHICFG					(MHI_REG_OFFSET + REG_MHICFG)
> +#define CHDBOFF					(MHI_REG_OFFSET + REG_CHDBOFF)
> +#define ERDBOFF					(MHI_REG_OFFSET + REG_ERDBOFF)
> +#define BHIOFF					(MHI_REG_OFFSET + REG_BHIOFF)
> +#define BHIEOFF					(MHI_REG_OFFSET + REG_BHIEOFF)
> +#define DEBUGOFF				(MHI_REG_OFFSET + REG_DEBUGOFF)
> +#define MHICTRL					(MHI_REG_OFFSET + REG_MHICTRL)
> +#define MHISTATUS				(MHI_REG_OFFSET + REG_MHISTATUS)
> +#define CCABAP_LOWER				(MHI_REG_OFFSET + REG_CCABAP_LOWER)
> +#define CCABAP_HIGHER				(MHI_REG_OFFSET + REG_CCABAP_HIGHER)
> +#define ECABAP_LOWER				(MHI_REG_OFFSET + REG_ECABAP_LOWER)
> +#define ECABAP_HIGHER				(MHI_REG_OFFSET + REG_ECABAP_HIGHER)
> +#define CRCBAP_LOWER				(MHI_REG_OFFSET + REG_CRCBAP_LOWER)
> +#define CRCBAP_HIGHER				(MHI_REG_OFFSET + REG_CRCBAP_HIGHER)
> +#define CRDB_LOWER				(MHI_REG_OFFSET + REG_CRDB_LOWER)
> +#define CRDB_HIGHER				(MHI_REG_OFFSET + REG_CRDB_HIGHER)
> +#define MHICTRLBASE_LOWER			(MHI_REG_OFFSET + REG_MHICTRLBASE_LOWER)
> +#define MHICTRLBASE_HIGHER			(MHI_REG_OFFSET + REG_MHICTRLBASE_HIGHER)
> +#define MHICTRLLIMIT_LOWER			(MHI_REG_OFFSET + REG_MHICTRLLIMIT_LOWER)
> +#define MHICTRLLIMIT_HIGHER			(MHI_REG_OFFSET + REG_MHICTRLLIMIT_HIGHER)
> +#define MHIDATABASE_LOWER			(MHI_REG_OFFSET + REG_MHIDATABASE_LOWER)
> +#define MHIDATABASE_HIGHER			(MHI_REG_OFFSET + REG_MHIDATABASE_HIGHER)
> +#define MHIDATALIMIT_LOWER			(MHI_REG_OFFSET + REG_MHIDATALIMIT_LOWER)
> +#define MHIDATALIMIT_HIGHER			(MHI_REG_OFFSET + REG_MHIDATALIMIT_HIGHER)
> +
> +/* MHI BHI registers */
> +#define BHI_IMGTXDB				(BHI_REG_OFFSET + REG_BHI_IMGTXDB)
> +#define BHI_EXECENV				(BHI_REG_OFFSET + REG_BHI_EXECENV)
> +#define BHI_INTVEC				(BHI_REG_OFFSET + REG_BHI_INTVEC)
> +
> +/* MHI Doorbell registers */
> +#define CHDB_LOWER_n(n)				(0x400 + 0x8 * (n))
> +#define CHDB_HIGHER_n(n)			(0x404 + 0x8 * (n))
> +#define ERDB_LOWER_n(n)				(0x800 + 0x8 * (n))
> +#define ERDB_HIGHER_n(n)			(0x804 + 0x8 * (n))
> +
> +#define MHI_CTRL_INT_STATUS_A7			0x4
Can we get rid of all instances of "_A7"? It corresponds to Cortex-A7, 
which can change in the future. At the MHI core layer we can avoid this 
naming convention, even though the register names include it now and may 
change to something different later. This MHI EP driver would still be 
used with those newer Cortex versions.
> +#define MHI_CTRL_INT_STATUS_A7_MSK		BIT(0)
> +#define MHI_CTRL_INT_STATUS_CRDB_MSK		BIT(1)
> +#define MHI_CHDB_INT_STATUS_A7_n(n)		(0x28 + 0x4 * (n))
> +#define MHI_ERDB_INT_STATUS_A7_n(n)		(0x38 + 0x4 * (n))
> +
[..]

-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora 
Forum, a Linux Foundation Collaborative Project


* Re: [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers
  2022-02-12 18:21 ` [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers Manivannan Sadhasivam
  2022-02-12 18:32   ` Manivannan Sadhasivam
@ 2022-02-15  1:10   ` Hemant Kumar
  2022-02-15 20:02   ` Alex Elder
  2 siblings, 0 replies; 92+ messages in thread
From: Hemant Kumar @ 2022-02-15  1:10 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder



On 2/12/2022 10:21 AM, Manivannan Sadhasivam wrote:
> This commit adds support for registering MHI endpoint client drivers
> with the MHI endpoint stack. MHI endpoint client drivers bind to one
> or more MHI endpoint devices in order to send and receive the upper-layer
> protocol packets like IP packets, modem control messages, and diagnostics
> messages over MHI bus.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora 
Forum, a Linux Foundation Collaborative Project


* Re: [PATCH v3 11/25] bus: mhi: ep: Add support for managing MMIO registers
  2022-02-12 18:21 ` [PATCH v3 11/25] bus: mhi: ep: Add support for managing MMIO registers Manivannan Sadhasivam
@ 2022-02-15  1:14   ` Hemant Kumar
  2022-02-15 20:03   ` Alex Elder
  1 sibling, 0 replies; 92+ messages in thread
From: Hemant Kumar @ 2022-02-15  1:14 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder

Hi Mani

On 2/12/2022 10:21 AM, Manivannan Sadhasivam wrote:
> Add support for managing the Memory Mapped Input Output (MMIO) registers
> of the MHI bus. All MHI operations are carried out using the MMIO registers
> by both host and the endpoint device.
> 
> The MMIO registers reside inside the endpoint device memory (fixed
> location based on the platform) and the address is passed by the MHI EP
> controller driver during its registration.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>   drivers/bus/mhi/ep/Makefile   |   2 +-
>   drivers/bus/mhi/ep/internal.h |  37 +++++
>   drivers/bus/mhi/ep/main.c     |   6 +-
>   drivers/bus/mhi/ep/mmio.c     | 274 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        |  18 +++
>   5 files changed, 335 insertions(+), 2 deletions(-)
>   create mode 100644 drivers/bus/mhi/ep/mmio.c
> 
> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> index 64e29252b608..a1555ae287ad 100644
> --- a/drivers/bus/mhi/ep/Makefile
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -1,2 +1,2 @@
>   obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> -mhi_ep-y := main.o
> +mhi_ep-y := main.o mmio.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index e313a2546664..2c756a90774c 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -101,6 +101,17 @@ struct mhi_generic_ctx {
>   	__u64 wp __packed __aligned(4);
>   };
>   
> +/**
> + * enum mhi_ep_execenv - MHI Endpoint Execution Environment
> + * @MHI_EP_SBL_EE: Secondary Bootloader
> + * @MHI_EP_AMSS_EE: Advanced Mode Subscriber Software
> + */
> +enum mhi_ep_execenv {
> +	MHI_EP_SBL_EE = 1,
> +	MHI_EP_AMSS_EE = 2,
> +	MHI_EP_UNRESERVED
> +};
Can we move these exec env definitions to the common header, or use the 
ones already there?
> +
>   enum mhi_ep_ring_type {
>   	RING_TYPE_CMD = 0,
>   	RING_TYPE_ER,
> @@ -157,4 +168,30 @@ struct mhi_ep_chan {
>   	bool skip_td;
>   };
>   
> +/* MMIO related functions */
> +u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
> +void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
> +void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask, u32 val);
> +u32 mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset, u32 mask);
> +void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_enable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
Can we get rid of "a7" from the function names and macros?
> +void mhi_ep_mmio_disable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
[..]
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora 
Forum, a Linux Foundation Collaborative Project


* Re: [PATCH v3 00/25] Add initial support for MHI endpoint stack
  2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
                   ` (24 preceding siblings ...)
  2022-02-12 18:21 ` [PATCH v3 25/25] bus: mhi: ep: Add uevent support for module autoloading Manivannan Sadhasivam
@ 2022-02-15 20:01 ` Alex Elder
  25 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:01 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> Hello,
> 
> This series adds initial support for the Qualcomm specific Modem Host Interface
> (MHI) bus in endpoint devices like SDX55 modems. The MHI bus in endpoint devices
> communicates with the MHI bus in host machines like x86 over any physical bus
> like PCIe. The MHI host support is already in mainline [1] and been used by PCIe
> based modems and WLAN devices running vendor code (downstream).

Maybe "running (downstream) vendor code".



I have a few general comments, which I'll mention here.

- This description goes out of its way to say MHI *could* be used over
   almost any transport, and PCIe just happens to be one of them.  The
   reality is, you are only supporting it over PCIe, and as far as I
   know you have no plans to go beyond that.  Even if you did, I think
   it should be clearer that you are doing MHI support over PCIe, even
   though other options are possible (and could be incorporated in the
   future).
- The first two patches are bug fixes; I think you should send those
   out right away, without waiting for the entire series to get accepted.
     - But ideally, can we get a "Tested-by" tag on these first?
- The next several, maybe up to patch 7, are also sort of preparatory
   for the "real" code you're adding.  Maybe those could be sent out
   early/separately too, knowing that the end goal is to get the MHI
   endpoint support accepted.
- Given the endianness issues (which I pointed out last time, but
   which seem to be addressed), are you able to test this code using
   a host that has different endianness than the modem CPU (SDX55)?
     - Paul Davey seems to have the ability to test this.
- I have a few very minor suggestions in the wording below.
- You really need a picture to make it easier to see at a glance what
   the hardware model is.  Here's one I did at one point, but it also
   includes the IPA in it (which is the FUUUUTURE!!!).  The SDX55 AP
   controls the PCIe endpoint.

   ....................            ..................................
   : "Intel host"     :            :             "SDX55"            :
   :                  :            :      ------------              :
   :                  :            :      | SDX55 AP |              :
   :                  :            :      ------------              :
   :                  :      | |   :           |                    :
   : -------- (root complex) |P| (endpoint) -------       --------- :
   : | Host |----------------|C|------------| IPA |-------| Modem | :
   : --------         :      |I|   :        -------       --------- :
   :..................:      |e|   :................................:
                             | |

   Something this picture does not show is that the transfer,
   command and event rings (and buffers) reside in host memory,
   while information *about* those rings (size, location, and
   current read/write pointers) reside in PCIe memory.

> Overview
> ========
> 
> This series aims at adding the MHI support in the endpoint devices with the goal

This series adds the MHI support...

> of getting data connectivity using the mainline kernel running on the modems.
> Modems here refer to the combination of an APPS processor (Cortex A grade) and
> a baseband processor (DSP). The MHI bus is located in the APPS processor and it
> transfers data packets from the baseband processor to the host machine.
> 
> The MHI Endpoint (MHI EP) stack proposed here is inspired by the downstream
> code written by Qualcomm. But the complete stack is mostly re-written to adapt
> to the "bus" framework and made it modular so that it can work with the upstream

...framework to make it modular, so that...

> subsystems like "PCI Endpoint". The code structure of the MHI endpoint stack
> follows the MHI host stack to maintain uniformity.
> 
> With this initial MHI EP stack (along with few other drivers), we can establish
> the network interface between host and endpoint over the MHI software channels
> (IP_SW0) and can do things like IP forwarding, SSH, etc...
> 
> Stack Organization
> ==================
> 
> The MHI EP stack has the concept of controller and device drivers as like the
> MHI host stack. The MHI EP controller driver can be a PCI Endpoint Function
> driver and the MHI device driver can be a MHI EP Networking driver or QRTR
> driver. The MHI EP controller driver is tied to the PCI Endpoint subsystem and
> handles all bus related activities like mapping the host memory, raising IRQ,
> passing link specific events etc... The MHI EP networking driver is tied to the
> Networking stack and handles all networking related activities like
> sending/receiving the SKBs from netdev, statistics collection etc...
> 
> This series only contains the MHI EP code, whereas the PCIe EPF driver and MHI
> EP Networking drivers are not yet submitted and can be found here [2]. Though
> the MHI EP stack doesn't have the build time dependency, it cannot function
> without them.
> 
> Test setup
> ==========
> 
> This series has been tested on Telit FN980 TLB board powered by Qualcomm SDX55
> (a.k.a X55 modem) and Qualcomm SM8450 based dev board.
> 
> For testing the stability and performance, networking tools such as iperf, ssh
> and ping are used.
> 
> Limitations
> ===========
> 
> We are not _yet_ there to get the data packets from the modem as that involves
> the Qualcomm IP Accelerator (IPA) integration with MHI endpoint stack. But we
> are planning to add support for it in the coming days.

s/days/months/

And now I'm going to send this, along with my comments on the first
half of the patches.  I'll keep going on the rest after that.

					-Alex

> 
> References
> ==========
> 
> MHI bus: https://www.kernel.org/doc/html/latest/mhi/mhi.html
> Linaro connect presentation around this topic: https://connect.linaro.org/resources/lvc21f/lvc21f-222/
> 
> Thanks,
> Mani
> 
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/bus/mhi
> [2] https://git.linaro.org/landing-teams/working/qualcomm/kernel.git/log/?h=tracking-qcomlt-sdx55-drivers
> 
> Changes in v3:
> 
> * Split patch 20/23 into two.
> * Fixed the error handling in patch 21/23.
> * Removed spurious change in patch 01/23.
> * Added check for xfer callbacks in client driver probe.
> 
> Changes in v2:
> 
> v2 mostly addresses the issues seen while testing the stack on SM8450 that is a
> SMP platform and also incorporates the review comments from Alex.
> 
> Major changes are:
> 
> * Added a cleanup patch for getting rid of SHIFT macros and used the bitfield
>    operations.
> * Added the endianness patches that were submitted to MHI list and used the
>    endianness conversion in EP patches also.
> * Added support for multiple event rings.
> * Fixed the MSI generation based on the event ring index.
> * Fixed the doorbell list handling by making use of list splice and not locking
>    the entire list manipulation.
> * Added new APIs for wrapping the reading and writing to host memory (Dmitry).
> * Optimized the read_channel and queue_skb function logics.
> * Added Hemant's R-o-b tag.
> 
> Manivannan Sadhasivam (23):
>    bus: mhi: Move host MHI code to "host" directory
>    bus: mhi: Move common MHI definitions out of host directory
>    bus: mhi: Make mhi_state_str[] array static inline and move to
>      common.h
>    bus: mhi: Cleanup the register definitions used in headers
>    bus: mhi: Get rid of SHIFT macros and use bitfield operations
>    bus: mhi: ep: Add support for registering MHI endpoint controllers
>    bus: mhi: ep: Add support for registering MHI endpoint client drivers
>    bus: mhi: ep: Add support for creating and destroying MHI EP devices
>    bus: mhi: ep: Add support for managing MMIO registers
>    bus: mhi: ep: Add support for ring management
>    bus: mhi: ep: Add support for sending events to the host
>    bus: mhi: ep: Add support for managing MHI state machine
>    bus: mhi: ep: Add support for processing MHI endpoint interrupts
>    bus: mhi: ep: Add support for powering up the MHI endpoint stack
>    bus: mhi: ep: Add support for powering down the MHI endpoint stack
>    bus: mhi: ep: Add support for handling MHI_RESET
>    bus: mhi: ep: Add support for handling SYS_ERR condition
>    bus: mhi: ep: Add support for processing command ring
>    bus: mhi: ep: Add support for reading from the host
>    bus: mhi: ep: Add support for processing transfer ring
>    bus: mhi: ep: Add support for queueing SKBs to the host
>    bus: mhi: ep: Add support for suspending and resuming channels
>    bus: mhi: ep: Add uevent support for module autoloading
> 
> Paul Davey (2):
>    bus: mhi: Fix pm_state conversion to string
>    bus: mhi: Fix MHI DMA structure endianness
> 
>   drivers/bus/Makefile                      |    2 +-
>   drivers/bus/mhi/Kconfig                   |   28 +-
>   drivers/bus/mhi/Makefile                  |    9 +-
>   drivers/bus/mhi/common.h                  |  319 ++++
>   drivers/bus/mhi/ep/Kconfig                |   10 +
>   drivers/bus/mhi/ep/Makefile               |    2 +
>   drivers/bus/mhi/ep/internal.h             |  254 ++++
>   drivers/bus/mhi/ep/main.c                 | 1601 +++++++++++++++++++++
>   drivers/bus/mhi/ep/mmio.c                 |  274 ++++
>   drivers/bus/mhi/ep/ring.c                 |  267 ++++
>   drivers/bus/mhi/ep/sm.c                   |  174 +++
>   drivers/bus/mhi/host/Kconfig              |   31 +
>   drivers/bus/mhi/{core => host}/Makefile   |    4 +-
>   drivers/bus/mhi/{core => host}/boot.c     |   17 +-
>   drivers/bus/mhi/{core => host}/debugfs.c  |   40 +-
>   drivers/bus/mhi/{core => host}/init.c     |  123 +-
>   drivers/bus/mhi/{core => host}/internal.h |  427 +-----
>   drivers/bus/mhi/{core => host}/main.c     |   46 +-
>   drivers/bus/mhi/{ => host}/pci_generic.c  |    0
>   drivers/bus/mhi/{core => host}/pm.c       |   36 +-
>   include/linux/mhi_ep.h                    |  293 ++++
>   include/linux/mod_devicetable.h           |    2 +
>   scripts/mod/file2alias.c                  |   10 +
>   23 files changed, 3442 insertions(+), 527 deletions(-)
>   create mode 100644 drivers/bus/mhi/common.h
>   create mode 100644 drivers/bus/mhi/ep/Kconfig
>   create mode 100644 drivers/bus/mhi/ep/Makefile
>   create mode 100644 drivers/bus/mhi/ep/internal.h
>   create mode 100644 drivers/bus/mhi/ep/main.c
>   create mode 100644 drivers/bus/mhi/ep/mmio.c
>   create mode 100644 drivers/bus/mhi/ep/ring.c
>   create mode 100644 drivers/bus/mhi/ep/sm.c
>   create mode 100644 drivers/bus/mhi/host/Kconfig
>   rename drivers/bus/mhi/{core => host}/Makefile (54%)
>   rename drivers/bus/mhi/{core => host}/boot.c (96%)
>   rename drivers/bus/mhi/{core => host}/debugfs.c (90%)
>   rename drivers/bus/mhi/{core => host}/init.c (93%)
>   rename drivers/bus/mhi/{core => host}/internal.h (50%)
>   rename drivers/bus/mhi/{core => host}/main.c (98%)
>   rename drivers/bus/mhi/{ => host}/pci_generic.c (100%)
>   rename drivers/bus/mhi/{core => host}/pm.c (97%)
>   create mode 100644 include/linux/mhi_ep.h
> 


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 01/25] bus: mhi: Fix pm_state conversion to string
  2022-02-12 18:20 ` [PATCH v3 01/25] bus: mhi: Fix pm_state conversion to string Manivannan Sadhasivam
@ 2022-02-15 20:01   ` Alex Elder
  2022-02-16 11:33     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:01 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, Paul Davey,
	Manivannan Sadhasivam, Hemant Kumar, stable

On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> From: Paul Davey <paul.davey@alliedtelesis.co.nz>
> 
> On big endian architectures the mhi debugfs files which report pm state
> give "Invalid State" for all states.  This is caused by using
> find_last_bit which takes an unsigned long* while the state is passed in
> as an enum mhi_pm_state which will be of int size.

I think this would have fixed it too, but your fix is better.

	int index = find_last_bit(&(unsigned long){ state }, 32);

> Fix by using __fls to pass the value of state instead of find_last_bit.
> 
> Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
> Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
> Reviewed-by: Manivannan Sadhasivam <mani@kernel.org>
> Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
> Cc: stable@vger.kernel.org
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>   drivers/bus/mhi/core/init.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
> index 046f407dc5d6..af484b03558a 100644
> --- a/drivers/bus/mhi/core/init.c
> +++ b/drivers/bus/mhi/core/init.c
> @@ -79,10 +79,12 @@ static const char * const mhi_pm_state_str[] = {
>   
>   const char *to_mhi_pm_state_str(enum mhi_pm_state state)

The mhi_pm_state enumerated type is an enumerated sequence, not
a bit mask.  So knowing which bit is the last (most significant)
set bit is not meaningful.  Or normally it shouldn't be.

If mhi_pm_state really were a bit mask, then its values should
be defined that way, i.e.,

	MHI_PM_STATE_DISABLE	= 1 << 0,
	MHI_PM_STATE_POR	= 1 << 1,
	. . .

What's really going on is that the state value passed here
*is* a bitmask, whose bit positions are those mhi_pm_state
values.  So the state argument should have type u32.

This is a *separate* bug/issue.  It could be fixed separately
(before this patch), but I'd be OK with just explaining why
this change would occur as part of this modified patch.

>   {
> -	unsigned long pm_state = state;
> -	int index = find_last_bit(&pm_state, 32);
> +	int index;
>   
> -	if (index >= ARRAY_SIZE(mhi_pm_state_str))
> +	if (state)
> +		index = __fls(state);
> +
> +	if (!state || index >= ARRAY_SIZE(mhi_pm_state_str))
>   		return "Invalid State";

Do this test and return first, and skip the additional
check for "if (state)".
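For what it's worth, the restructured function might look something
like this (an untested userspace sketch; fls_u32() stands in for the
kernel's __fls(), and the state-name table is abbreviated):

```c
/*
 * Userspace sketch of the restructuring suggested above: do the
 * "invalid" test and early return first, so the __fls() analogue is
 * never reached with a zero argument.  The state-name table here is
 * abbreviated and purely illustrative.
 */
#include <stddef.h>

static const char * const mhi_pm_state_str[] = {
	"DISABLE", "POR", "M0", "M2", "M3",
};

/* __fls() analogue: index of the most significant set bit (v != 0) */
static int fls_u32(unsigned int v)
{
	return 31 - __builtin_clz(v);
}

const char *to_mhi_pm_state_str(unsigned int state)
{
	size_t nr = sizeof(mhi_pm_state_str) / sizeof(mhi_pm_state_str[0]);

	if (!state || (size_t)fls_u32(state) >= nr)
		return "Invalid State";

	return mhi_pm_state_str[fls_u32(state)];
}
```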

					-Alex

>   	return mhi_pm_state_str[index];



* Re: [PATCH v3 02/25] bus: mhi: Fix MHI DMA structure endianness
  2022-02-12 18:20 ` [PATCH v3 02/25] bus: mhi: Fix MHI DMA structure endianness Manivannan Sadhasivam
@ 2022-02-15 20:02   ` Alex Elder
  2022-02-16  7:04     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:02 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, Paul Davey, stable

On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> From: Paul Davey <paul.davey@alliedtelesis.co.nz>
> 
> The MHI driver does not work on big endian architectures.  The
> controller never transitions into mission mode.  This appears to be due
> to the modem device expecting the various contexts and transfer rings to
> have fields in little endian order in memory, but the driver constructs
> them in native endianness.

Yes, this is true.

> Fix MHI event, channel and command contexts and TRE handling macros to
> use explicit conversion to little endian.  Mark fields in relevant
> structures as little endian to document this requirement.

Basically every field in the external interface whose size
is greater than one byte must have its endianness noted.
From what I can tell, you did that for all of the exposed
structures defined in "drivers/bus/mhi/core/internal.h",
which is good.

*However* some of the *constants* were defined the wrong way.

Basically, all of the constant values should be expressed
in host byte order.  And any needed byte swapping should be
done at the time the value is read from memory--immediately.
That way, we isolate that activity to the one place we
interface with the possibly "foreign" format, and from then
on, everything may be assumed to be in natural (CPU) byte order.

I will point out what I mean, below.
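To make the convention concrete, here is an untested userspace sketch
(host_cpu_to_le32()/host_le32_to_cpu() stand in for the kernel's
cpu_to_le32()/le32_to_cpu(), and the value of MHI_CMD_NOP is
illustrative):

```c
/*
 * Sketch of the convention described above: constants stay in CPU
 * byte order, and the byte swap happens exactly once, at the point
 * where the value crosses into "foreign" (little-endian shared)
 * memory.
 */
#include <stdint.h>

static uint32_t host_cpu_to_le32(uint32_t v)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	return __builtin_bswap32(v);
#else
	return v;
#endif
}

/* Byte swap is an involution, so the same helper reads values back. */
static uint32_t host_le32_to_cpu(uint32_t v)
{
	return host_cpu_to_le32(v);
}

#define MHI_CMD_NOP			1
#define MHI_TRE_CMD_NOOP_DWORD1		(MHI_CMD_NOP << 16)	/* CPU order */

/* Write a command DWORD to a (simulated) shared-memory ring slot. */
static void write_noop_dword1(uint32_t *slot)
{
	*slot = host_cpu_to_le32(MHI_TRE_CMD_NOOP_DWORD1);
}
```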

> Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
> Fixes: 6cd330ae76ff ("bus: mhi: core: Add support for ringing channel/event ring doorbells")
> Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
> Cc: stable@vger.kernel.org
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>   drivers/bus/mhi/core/debugfs.c  |  26 +++----
>   drivers/bus/mhi/core/init.c     |  36 +++++-----
>   drivers/bus/mhi/core/internal.h | 119 ++++++++++++++++----------------
>   drivers/bus/mhi/core/main.c     |  22 +++---
>   drivers/bus/mhi/core/pm.c       |   4 +-
>   5 files changed, 104 insertions(+), 103 deletions(-)
> 
> diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/core/debugfs.c
> index 858d7516410b..d818586c229d 100644
> --- a/drivers/bus/mhi/core/debugfs.c
> +++ b/drivers/bus/mhi/core/debugfs.c
> @@ -60,16 +60,16 @@ static int mhi_debugfs_events_show(struct seq_file *m, void *d)
>   		}

These look fine, because they're doing the conversion of the
fields just as they're read from memory.

>   		seq_printf(m, "Index: %d intmod count: %lu time: %lu",
> -			   i, (er_ctxt->intmod & EV_CTX_INTMODC_MASK) >>
> +			   i, (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODC_MASK) >>
>   			   EV_CTX_INTMODC_SHIFT,
> -			   (er_ctxt->intmod & EV_CTX_INTMODT_MASK) >>
> +			   (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODT_MASK) >>
>   			   EV_CTX_INTMODT_SHIFT);

. . .

> diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
> index af484b03558a..4bd62f32695d 100644
> --- a/drivers/bus/mhi/core/init.c
> +++ b/drivers/bus/mhi/core/init.c
> @@ -293,17 +293,17 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
>   		if (mhi_chan->offload_ch)
>   			continue;
>   
> -		tmp = chan_ctxt->chcfg;
> +		tmp = le32_to_cpu(chan_ctxt->chcfg);
>   		tmp &= ~CHAN_CTX_CHSTATE_MASK;

Note that CHAN_CTX_CHSTATE_MASK, etc. here are assumed to
be in CPU byte order.  This is good, and that pattern is
followed for a bunch more code that I've omitted.

>   		tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
>   		tmp &= ~CHAN_CTX_BRSTMODE_MASK;
>   		tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
>   		tmp &= ~CHAN_CTX_POLLCFG_MASK;
>   		tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
> -		chan_ctxt->chcfg = tmp;
> +		chan_ctxt->chcfg = cpu_to_le32(tmp);
>   
> -		chan_ctxt->chtype = mhi_chan->type;
> -		chan_ctxt->erindex = mhi_chan->er_index;
> +		chan_ctxt->chtype = cpu_to_le32(mhi_chan->type);
> +		chan_ctxt->erindex = cpu_to_le32(mhi_chan->er_index);
>   
>   		mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
>   		mhi_chan->tre_ring.db_addr = (void __iomem *)&chan_ctxt->wp;

. . .

> diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
> index e2e10474a9d9..fa64340a8997 100644
> --- a/drivers/bus/mhi/core/internal.h
> +++ b/drivers/bus/mhi/core/internal.h
> @@ -209,14 +209,14 @@ extern struct bus_type mhi_bus_type;
>   #define EV_CTX_INTMODT_MASK GENMASK(31, 16)
>   #define EV_CTX_INTMODT_SHIFT 16
>   struct mhi_event_ctxt {
> -	__u32 intmod;
> -	__u32 ertype;
> -	__u32 msivec;
> -
> -	__u64 rbase __packed __aligned(4);
> -	__u64 rlen __packed __aligned(4);
> -	__u64 rp __packed __aligned(4);
> -	__u64 wp __packed __aligned(4);

These are all good.

> +	__le32 intmod;
> +	__le32 ertype;
> +	__le32 msivec;
> +
> +	__le64 rbase __packed __aligned(4);
> +	__le64 rlen __packed __aligned(4);
> +	__le64 rp __packed __aligned(4);
> +	__le64 wp __packed __aligned(4);
>   };

This is separate from the subject of this patch, but I'm
pretty sure the entire structure (rather than all of those
fields) can be defined with the __packed and __aligned(4)
attributes to achieve the same effect.

>   #define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)

. . .

> @@ -277,57 +277,58 @@ enum mhi_cmd_type {
>   /* No operation command */
>   #define MHI_TRE_CMD_NOOP_PTR (0)
>   #define MHI_TRE_CMD_NOOP_DWORD0 (0)
> -#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
> +#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))

This just looks wrong to me.  The original definition
should be fine, but then where it's *used* it should
be passed to cpu_to_le32().  I realize this might be
a special case, where these "DWORD" values are getting
written out to command ring elements, but even so, the
byte swapping that's happening is important and should
be made obvious in the code using these symbols.

This comment applies to many more similar definitions
below.  I don't know; maybe it looks cumbersome if
it's done in the code, but I still think it's better to
consistently define symbols like this in CPU byte order
and do the conversions explicitly only when the values
are read/written to "foreign" (external interface)
memory.

Outside of this issue, the remainder of the patch looks
OK to me.

					-Alex

>   /* Channel reset command */
>   #define MHI_TRE_CMD_RESET_PTR (0)
>   #define MHI_TRE_CMD_RESET_DWORD0 (0)
> -#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
> -					(MHI_CMD_RESET_CHAN << 16))
> +#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> +					(MHI_CMD_RESET_CHAN << 16)))
>   
>   /* Channel stop command */
>   #define MHI_TRE_CMD_STOP_PTR (0)
>   #define MHI_TRE_CMD_STOP_DWORD0 (0)
> -#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
> -				       (MHI_CMD_STOP_CHAN << 16))
> +#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> +				       (MHI_CMD_STOP_CHAN << 16)))
>   
>   /* Channel start command */
>   #define MHI_TRE_CMD_START_PTR (0)
>   #define MHI_TRE_CMD_START_DWORD0 (0)
> -#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
> -					(MHI_CMD_START_CHAN << 16))
> +#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> +					(MHI_CMD_START_CHAN << 16)))
>   
> -#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
> -#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
> +#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
> +#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
>   
>   /* Event descriptor macros */
> -#define MHI_TRE_EV_PTR(ptr) (ptr)
> -#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
> -#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
> -#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
> -#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
> -#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
> -#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
> -#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
> -#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
> +#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
> +#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
> +#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
> +#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
> +#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> +#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> +#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
> +#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
> +#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> +#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> +#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
>   
>   /* Transfer descriptor macros */
> -#define MHI_TRE_DATA_PTR(ptr) (ptr)
> -#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
> -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
> -	| (ieot << 9) | (ieob << 8) | chain)
> +#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
> +#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
> +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
> +	| (ieot << 9) | (ieob << 8) | chain))
>   
>   /* RSC transfer descriptor macros */
> -#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
> -#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
> -#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
> +#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
> +#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
> +#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
>   
>   enum mhi_pkt_type {
>   	MHI_PKT_TYPE_INVALID = 0x0,
> @@ -500,7 +501,7 @@ struct state_transition {
>   struct mhi_ring {
>   	dma_addr_t dma_handle;
>   	dma_addr_t iommu_base;
> -	u64 *ctxt_wp; /* point to ctxt wp */
> +	__le64 *ctxt_wp; /* point to ctxt wp */
>   	void *pre_aligned;
>   	void *base;
>   	void *rp;
> diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
> index ffde617f93a3..85f4f7c8d7c6 100644
> --- a/drivers/bus/mhi/core/main.c
> +++ b/drivers/bus/mhi/core/main.c
> @@ -114,7 +114,7 @@ void mhi_ring_er_db(struct mhi_event *mhi_event)
>   	struct mhi_ring *ring = &mhi_event->ring;
>   
>   	mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
> -				     ring->db_addr, *ring->ctxt_wp);
> +				     ring->db_addr, le64_to_cpu(*ring->ctxt_wp));
>   }
>   
>   void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
> @@ -123,7 +123,7 @@ void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
>   	struct mhi_ring *ring = &mhi_cmd->ring;
>   
>   	db = ring->iommu_base + (ring->wp - ring->base);
> -	*ring->ctxt_wp = db;
> +	*ring->ctxt_wp = cpu_to_le64(db);
>   	mhi_write_db(mhi_cntrl, ring->db_addr, db);
>   }
>   
> @@ -140,7 +140,7 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
>   	 * before letting h/w know there is new element to fetch.
>   	 */
>   	dma_wmb();
> -	*ring->ctxt_wp = db;
> +	*ring->ctxt_wp = cpu_to_le64(db);
>   
>   	mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
>   				    ring->db_addr, db);
> @@ -432,7 +432,7 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
>   	struct mhi_event_ctxt *er_ctxt =
>   		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
>   	struct mhi_ring *ev_ring = &mhi_event->ring;
> -	dma_addr_t ptr = er_ctxt->rp;
> +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
>   	void *dev_rp;
>   
>   	if (!is_valid_ring_ptr(ev_ring, ptr)) {
> @@ -537,14 +537,14 @@ static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
>   
>   	/* Update the WP */
>   	ring->wp += ring->el_size;
> -	ctxt_wp = *ring->ctxt_wp + ring->el_size;
> +	ctxt_wp = le64_to_cpu(*ring->ctxt_wp) + ring->el_size;
>   
>   	if (ring->wp >= (ring->base + ring->len)) {
>   		ring->wp = ring->base;
>   		ctxt_wp = ring->iommu_base;
>   	}
>   
> -	*ring->ctxt_wp = ctxt_wp;
> +	*ring->ctxt_wp = cpu_to_le64(ctxt_wp);
>   
>   	/* Update the RP */
>   	ring->rp += ring->el_size;
> @@ -801,7 +801,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
>   	u32 chan;
>   	int count = 0;
> -	dma_addr_t ptr = er_ctxt->rp;
> +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
>   
>   	/*
>   	 * This is a quick check to avoid unnecessary event processing
> @@ -940,7 +940,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>   		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
>   		local_rp = ev_ring->rp;
>   
> -		ptr = er_ctxt->rp;
> +		ptr = le64_to_cpu(er_ctxt->rp);
>   		if (!is_valid_ring_ptr(ev_ring, ptr)) {
>   			dev_err(&mhi_cntrl->mhi_dev->dev,
>   				"Event ring rp points outside of the event ring\n");
> @@ -970,7 +970,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
>   	int count = 0;
>   	u32 chan;
>   	struct mhi_chan *mhi_chan;
> -	dma_addr_t ptr = er_ctxt->rp;
> +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
>   
>   	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
>   		return -EIO;
> @@ -1011,7 +1011,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
>   		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
>   		local_rp = ev_ring->rp;
>   
> -		ptr = er_ctxt->rp;
> +		ptr = le64_to_cpu(er_ctxt->rp);
>   		if (!is_valid_ring_ptr(ev_ring, ptr)) {
>   			dev_err(&mhi_cntrl->mhi_dev->dev,
>   				"Event ring rp points outside of the event ring\n");
> @@ -1533,7 +1533,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
>   	/* mark all stale events related to channel as STALE event */
>   	spin_lock_irqsave(&mhi_event->lock, flags);
>   
> -	ptr = er_ctxt->rp;
> +	ptr = le64_to_cpu(er_ctxt->rp);
>   	if (!is_valid_ring_ptr(ev_ring, ptr)) {
>   		dev_err(&mhi_cntrl->mhi_dev->dev,
>   			"Event ring rp points outside of the event ring\n");
> diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
> index 4aae0baea008..c35c5ddc7220 100644
> --- a/drivers/bus/mhi/core/pm.c
> +++ b/drivers/bus/mhi/core/pm.c
> @@ -218,7 +218,7 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
>   			continue;
>   
>   		ring->wp = ring->base + ring->len - ring->el_size;
> -		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
> +		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
>   		/* Update all cores */
>   		smp_wmb();
>   
> @@ -420,7 +420,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
>   			continue;
>   
>   		ring->wp = ring->base + ring->len - ring->el_size;
> -		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
> +		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
>   		/* Update to all cores */
>   		smp_wmb();
>   



* Re: [PATCH v3 03/25] bus: mhi: Move host MHI code to "host" directory
  2022-02-12 18:20 ` [PATCH v3 03/25] bus: mhi: Move host MHI code to "host" directory Manivannan Sadhasivam
@ 2022-02-15 20:02   ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:02 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, Hemant Kumar

On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> In preparation of the endpoint MHI support, let's move the host MHI code
> to its own "host" directory and adjust the toplevel MHI Kconfig & Makefile.
> 
> While at it, let's also move the "pci_generic" driver to "host" directory
> as it is a host MHI controller driver.
> 
> Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

This is a pretty simple rename, and it looks good to me.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/Makefile                      |  2 +-
>   drivers/bus/mhi/Kconfig                   | 27 ++------------------
>   drivers/bus/mhi/Makefile                  |  8 ++----
>   drivers/bus/mhi/host/Kconfig              | 31 +++++++++++++++++++++++
>   drivers/bus/mhi/{core => host}/Makefile   |  4 ++-
>   drivers/bus/mhi/{core => host}/boot.c     |  0
>   drivers/bus/mhi/{core => host}/debugfs.c  |  0
>   drivers/bus/mhi/{core => host}/init.c     |  0
>   drivers/bus/mhi/{core => host}/internal.h |  0
>   drivers/bus/mhi/{core => host}/main.c     |  0
>   drivers/bus/mhi/{ => host}/pci_generic.c  |  0
>   drivers/bus/mhi/{core => host}/pm.c       |  0
>   12 files changed, 39 insertions(+), 33 deletions(-)
>   create mode 100644 drivers/bus/mhi/host/Kconfig
>   rename drivers/bus/mhi/{core => host}/Makefile (54%)
>   rename drivers/bus/mhi/{core => host}/boot.c (100%)
>   rename drivers/bus/mhi/{core => host}/debugfs.c (100%)
>   rename drivers/bus/mhi/{core => host}/init.c (100%)
>   rename drivers/bus/mhi/{core => host}/internal.h (100%)
>   rename drivers/bus/mhi/{core => host}/main.c (100%)
>   rename drivers/bus/mhi/{ => host}/pci_generic.c (100%)
>   rename drivers/bus/mhi/{core => host}/pm.c (100%)
> 
> diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
> index 52c2f35a26a9..16da51130d1a 100644
> --- a/drivers/bus/Makefile
> +++ b/drivers/bus/Makefile
> @@ -39,4 +39,4 @@ obj-$(CONFIG_VEXPRESS_CONFIG)	+= vexpress-config.o
>   obj-$(CONFIG_DA8XX_MSTPRI)	+= da8xx-mstpri.o
>   
>   # MHI
> -obj-$(CONFIG_MHI_BUS)		+= mhi/
> +obj-y				+= mhi/
> diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
> index da5cd0c9fc62..4748df7f9cd5 100644
> --- a/drivers/bus/mhi/Kconfig
> +++ b/drivers/bus/mhi/Kconfig
> @@ -2,30 +2,7 @@
>   #
>   # MHI bus
>   #
> -# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> +# Copyright (c) 2021, Linaro Ltd.
>   #
>   
> -config MHI_BUS
> -	tristate "Modem Host Interface (MHI) bus"
> -	help
> -	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
> -	  communication protocol used by the host processors to control
> -	  and communicate with modem devices over a high speed peripheral
> -	  bus or shared memory.
> -
> -config MHI_BUS_DEBUG
> -	bool "Debugfs support for the MHI bus"
> -	depends on MHI_BUS && DEBUG_FS
> -	help
> -	  Enable debugfs support for use with the MHI transport. Allows
> -	  reading and/or modifying some values within the MHI controller
> -	  for debug and test purposes.
> -
> -config MHI_BUS_PCI_GENERIC
> -	tristate "MHI PCI controller driver"
> -	depends on MHI_BUS
> -	depends on PCI
> -	help
> -	  This driver provides MHI PCI controller driver for devices such as
> -	  Qualcomm SDX55 based PCIe modems.
> -
> +source "drivers/bus/mhi/host/Kconfig"
> diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
> index 0a2d778d6fb4..5f5708a249f5 100644
> --- a/drivers/bus/mhi/Makefile
> +++ b/drivers/bus/mhi/Makefile
> @@ -1,6 +1,2 @@
> -# core layer
> -obj-y += core/
> -
> -obj-$(CONFIG_MHI_BUS_PCI_GENERIC) += mhi_pci_generic.o
> -mhi_pci_generic-y += pci_generic.o
> -
> +# Host MHI stack
> +obj-y += host/
> diff --git a/drivers/bus/mhi/host/Kconfig b/drivers/bus/mhi/host/Kconfig
> new file mode 100644
> index 000000000000..da5cd0c9fc62
> --- /dev/null
> +++ b/drivers/bus/mhi/host/Kconfig
> @@ -0,0 +1,31 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# MHI bus
> +#
> +# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> +#
> +
> +config MHI_BUS
> +	tristate "Modem Host Interface (MHI) bus"
> +	help
> +	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
> +	  communication protocol used by the host processors to control
> +	  and communicate with modem devices over a high speed peripheral
> +	  bus or shared memory.
> +
> +config MHI_BUS_DEBUG
> +	bool "Debugfs support for the MHI bus"
> +	depends on MHI_BUS && DEBUG_FS
> +	help
> +	  Enable debugfs support for use with the MHI transport. Allows
> +	  reading and/or modifying some values within the MHI controller
> +	  for debug and test purposes.
> +
> +config MHI_BUS_PCI_GENERIC
> +	tristate "MHI PCI controller driver"
> +	depends on MHI_BUS
> +	depends on PCI
> +	help
> +	  This driver provides MHI PCI controller driver for devices such as
> +	  Qualcomm SDX55 based PCIe modems.
> +
> diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/host/Makefile
> similarity index 54%
> rename from drivers/bus/mhi/core/Makefile
> rename to drivers/bus/mhi/host/Makefile
> index c3feb4130aa3..859c2f38451c 100644
> --- a/drivers/bus/mhi/core/Makefile
> +++ b/drivers/bus/mhi/host/Makefile
> @@ -1,4 +1,6 @@
>   obj-$(CONFIG_MHI_BUS) += mhi.o
> -
>   mhi-y := init.o main.o pm.o boot.o
>   mhi-$(CONFIG_MHI_BUS_DEBUG) += debugfs.o
> +
> +obj-$(CONFIG_MHI_BUS_PCI_GENERIC) += mhi_pci_generic.o
> +mhi_pci_generic-y += pci_generic.o
> diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/host/boot.c
> similarity index 100%
> rename from drivers/bus/mhi/core/boot.c
> rename to drivers/bus/mhi/host/boot.c
> diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/host/debugfs.c
> similarity index 100%
> rename from drivers/bus/mhi/core/debugfs.c
> rename to drivers/bus/mhi/host/debugfs.c
> diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/host/init.c
> similarity index 100%
> rename from drivers/bus/mhi/core/init.c
> rename to drivers/bus/mhi/host/init.c
> diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/host/internal.h
> similarity index 100%
> rename from drivers/bus/mhi/core/internal.h
> rename to drivers/bus/mhi/host/internal.h
> diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/host/main.c
> similarity index 100%
> rename from drivers/bus/mhi/core/main.c
> rename to drivers/bus/mhi/host/main.c
> diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
> similarity index 100%
> rename from drivers/bus/mhi/pci_generic.c
> rename to drivers/bus/mhi/host/pci_generic.c
> diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/host/pm.c
> similarity index 100%
> rename from drivers/bus/mhi/core/pm.c
> rename to drivers/bus/mhi/host/pm.c



* Re: [PATCH v3 04/25] bus: mhi: Move common MHI definitions out of host directory
  2022-02-12 18:20 ` [PATCH v3 04/25] bus: mhi: Move common MHI definitions out of host directory Manivannan Sadhasivam
  2022-02-15  0:28   ` Hemant Kumar
@ 2022-02-15 20:02   ` Alex Elder
  1 sibling, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:02 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> Move the common MHI definitions in host "internal.h" to "common.h" so
> that the endpoint code can make use of them. This also avoids
> duplicating the definitions in the endpoint stack.
> 
> However, the MHI register definitions are not moved, since the offsets
> vary between host and endpoint.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

If you care to implement the following suggestion, great, but I'm
not going to demand it.

I see you did some work in patch 6 where you move the MHI register
definitions into "common.h".  This is along the lines of something
I suggested before.

I prefer to see that sort of change *before* a patch like this.
I.e., make changes to the way the definitions are written, *then*
move them to their new location as a large block.

The result is the same, but I just find it nicer to do work that
prepares things in early patches, making later patches simpler
transformations.

Another small example of my point: in this patch, some of the
definitions appear in a different order in their new location.
As a reviewer, I'd rather see the reordering done first, then
the move to the new location done in one big batch that is easily
verified.

Aside from making that point, this looks fine to me.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/common.h        | 167 ++++++++++++++++++++++++++++++++
>   drivers/bus/mhi/host/internal.h | 155 +----------------------------
>   2 files changed, 168 insertions(+), 154 deletions(-)
>   create mode 100644 drivers/bus/mhi/common.h
> 
> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> new file mode 100644
> index 000000000000..0d13a202d334
> --- /dev/null
> +++ b/drivers/bus/mhi/common.h
> @@ -0,0 +1,167 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2021, Linaro Ltd.
> + *
> + */
> +
> +#ifndef _MHI_COMMON_H
> +#define _MHI_COMMON_H
> +
> +#include <linux/mhi.h>
> +
> +/* Command Ring Element macros */
> +/* No operation command */
> +#define MHI_TRE_CMD_NOOP_PTR (0)
> +#define MHI_TRE_CMD_NOOP_DWORD0 (0)
> +#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
> +
> +/* Channel reset command */
> +#define MHI_TRE_CMD_RESET_PTR (0)
> +#define MHI_TRE_CMD_RESET_DWORD0 (0)
> +#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> +					(MHI_CMD_RESET_CHAN << 16)))
> +
> +/* Channel stop command */
> +#define MHI_TRE_CMD_STOP_PTR (0)
> +#define MHI_TRE_CMD_STOP_DWORD0 (0)
> +#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> +				       (MHI_CMD_STOP_CHAN << 16)))
> +
> +/* Channel start command */
> +#define MHI_TRE_CMD_START_PTR (0)
> +#define MHI_TRE_CMD_START_DWORD0 (0)
> +#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> +					(MHI_CMD_START_CHAN << 16)))
> +
> +#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
> +#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> +
> +/* Event descriptor macros */
> +#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
> +#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
> +#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
> +#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
> +#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> +#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> +#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
> +#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
> +#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> +#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> +#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
> +
> +/* Transfer descriptor macros */
> +#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
> +#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
> +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
> +	| (ieot << 9) | (ieob << 8) | chain))
> +
> +/* RSC transfer descriptor macros */
> +#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
> +#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
> +#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
> +
> +enum mhi_pkt_type {
> +	MHI_PKT_TYPE_INVALID = 0x0,
> +	MHI_PKT_TYPE_NOOP_CMD = 0x1,
> +	MHI_PKT_TYPE_TRANSFER = 0x2,
> +	MHI_PKT_TYPE_COALESCING = 0x8,
> +	MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
> +	MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
> +	MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
> +	MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
> +	MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
> +	MHI_PKT_TYPE_TX_EVENT = 0x22,
> +	MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
> +	MHI_PKT_TYPE_EE_EVENT = 0x40,
> +	MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
> +	MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
> +	MHI_PKT_TYPE_STALE_EVENT, /* internal event */
> +};
> +
> +/* MHI transfer completion events */
> +enum mhi_ev_ccs {
> +	MHI_EV_CC_INVALID = 0x0,
> +	MHI_EV_CC_SUCCESS = 0x1,
> +	MHI_EV_CC_EOT = 0x2, /* End of transfer event */
> +	MHI_EV_CC_OVERFLOW = 0x3,
> +	MHI_EV_CC_EOB = 0x4, /* End of block event */
> +	MHI_EV_CC_OOB = 0x5, /* Out of block event */
> +	MHI_EV_CC_DB_MODE = 0x6,
> +	MHI_EV_CC_UNDEFINED_ERR = 0x10,
> +	MHI_EV_CC_BAD_TRE = 0x11,
> +};
> +
> +/* Channel state */
> +enum mhi_ch_state {
> +	MHI_CH_STATE_DISABLED,
> +	MHI_CH_STATE_ENABLED,
> +	MHI_CH_STATE_RUNNING,
> +	MHI_CH_STATE_SUSPENDED,
> +	MHI_CH_STATE_STOP,
> +	MHI_CH_STATE_ERROR,
> +};
> +
> +enum mhi_cmd_type {
> +	MHI_CMD_NOP = 1,
> +	MHI_CMD_RESET_CHAN = 16,
> +	MHI_CMD_STOP_CHAN = 17,
> +	MHI_CMD_START_CHAN = 18,
> +};
> +
> +#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
> +#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
> +#define EV_CTX_INTMODC_SHIFT 8
> +#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
> +#define EV_CTX_INTMODT_SHIFT 16
> +struct mhi_event_ctxt {
> +	__le32 intmod;
> +	__le32 ertype;
> +	__le32 msivec;
> +
> +	__le64 rbase __packed __aligned(4);
> +	__le64 rlen __packed __aligned(4);
> +	__le64 rp __packed __aligned(4);
> +	__le64 wp __packed __aligned(4);
> +};
> +
> +#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
> +#define CHAN_CTX_CHSTATE_SHIFT 0
> +#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
> +#define CHAN_CTX_BRSTMODE_SHIFT 8
> +#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
> +#define CHAN_CTX_POLLCFG_SHIFT 10
> +#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
> +struct mhi_chan_ctxt {
> +	__le32 chcfg;
> +	__le32 chtype;
> +	__le32 erindex;
> +
> +	__le64 rbase __packed __aligned(4);
> +	__le64 rlen __packed __aligned(4);
> +	__le64 rp __packed __aligned(4);
> +	__le64 wp __packed __aligned(4);
> +};
> +
> +struct mhi_cmd_ctxt {
> +	__le32 reserved0;
> +	__le32 reserved1;
> +	__le32 reserved2;
> +
> +	__le64 rbase __packed __aligned(4);
> +	__le64 rlen __packed __aligned(4);
> +	__le64 rp __packed __aligned(4);
> +	__le64 wp __packed __aligned(4);
> +};
> +
> +extern const char * const mhi_state_str[MHI_STATE_MAX];
> +#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
> +				  !mhi_state_str[state]) ? \
> +				"INVALID_STATE" : mhi_state_str[state])
> +
> +#endif /* _MHI_COMMON_H */
> diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
> index fa64340a8997..622de6ba1a0b 100644
> --- a/drivers/bus/mhi/host/internal.h
> +++ b/drivers/bus/mhi/host/internal.h
> @@ -7,7 +7,7 @@
>   #ifndef _MHI_INT_H
>   #define _MHI_INT_H
>   
> -#include <linux/mhi.h>
> +#include "../common.h"
>   
>   extern struct bus_type mhi_bus_type;
>   
> @@ -203,51 +203,6 @@ extern struct bus_type mhi_bus_type;
>   #define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
>   #define SOC_HW_VERSION_MINOR_VER_SHFT (0)
>   
> -#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
> -#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
> -#define EV_CTX_INTMODC_SHIFT 8
> -#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
> -#define EV_CTX_INTMODT_SHIFT 16
> -struct mhi_event_ctxt {
> -	__le32 intmod;
> -	__le32 ertype;
> -	__le32 msivec;
> -
> -	__le64 rbase __packed __aligned(4);
> -	__le64 rlen __packed __aligned(4);
> -	__le64 rp __packed __aligned(4);
> -	__le64 wp __packed __aligned(4);
> -};
> -
> -#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
> -#define CHAN_CTX_CHSTATE_SHIFT 0
> -#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
> -#define CHAN_CTX_BRSTMODE_SHIFT 8
> -#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
> -#define CHAN_CTX_POLLCFG_SHIFT 10
> -#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
> -struct mhi_chan_ctxt {
> -	__le32 chcfg;
> -	__le32 chtype;
> -	__le32 erindex;
> -
> -	__le64 rbase __packed __aligned(4);
> -	__le64 rlen __packed __aligned(4);
> -	__le64 rp __packed __aligned(4);
> -	__le64 wp __packed __aligned(4);
> -};
> -
> -struct mhi_cmd_ctxt {
> -	__le32 reserved0;
> -	__le32 reserved1;
> -	__le32 reserved2;
> -
> -	__le64 rbase __packed __aligned(4);
> -	__le64 rlen __packed __aligned(4);
> -	__le64 rp __packed __aligned(4);
> -	__le64 wp __packed __aligned(4);
> -};
> -
>   struct mhi_ctxt {
>   	struct mhi_event_ctxt *er_ctxt;
>   	struct mhi_chan_ctxt *chan_ctxt;
> @@ -267,109 +222,6 @@ struct bhi_vec_entry {
>   	u64 size;
>   };
>   
> -enum mhi_cmd_type {
> -	MHI_CMD_NOP = 1,
> -	MHI_CMD_RESET_CHAN = 16,
> -	MHI_CMD_STOP_CHAN = 17,
> -	MHI_CMD_START_CHAN = 18,
> -};
> -
> -/* No operation command */
> -#define MHI_TRE_CMD_NOOP_PTR (0)
> -#define MHI_TRE_CMD_NOOP_DWORD0 (0)
> -#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
> -
> -/* Channel reset command */
> -#define MHI_TRE_CMD_RESET_PTR (0)
> -#define MHI_TRE_CMD_RESET_DWORD0 (0)
> -#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> -					(MHI_CMD_RESET_CHAN << 16)))
> -
> -/* Channel stop command */
> -#define MHI_TRE_CMD_STOP_PTR (0)
> -#define MHI_TRE_CMD_STOP_DWORD0 (0)
> -#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> -				       (MHI_CMD_STOP_CHAN << 16)))
> -
> -/* Channel start command */
> -#define MHI_TRE_CMD_START_PTR (0)
> -#define MHI_TRE_CMD_START_DWORD0 (0)
> -#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> -					(MHI_CMD_START_CHAN << 16)))
> -
> -#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
> -#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> -#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> -
> -/* Event descriptor macros */
> -#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
> -#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
> -#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
> -#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
> -#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> -#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
> -#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
> -#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> -#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
> -
> -/* Transfer descriptor macros */
> -#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
> -#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
> -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
> -	| (ieot << 9) | (ieob << 8) | chain))
> -
> -/* RSC transfer descriptor macros */
> -#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
> -#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
> -#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
> -
> -enum mhi_pkt_type {
> -	MHI_PKT_TYPE_INVALID = 0x0,
> -	MHI_PKT_TYPE_NOOP_CMD = 0x1,
> -	MHI_PKT_TYPE_TRANSFER = 0x2,
> -	MHI_PKT_TYPE_COALESCING = 0x8,
> -	MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
> -	MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
> -	MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
> -	MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
> -	MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
> -	MHI_PKT_TYPE_TX_EVENT = 0x22,
> -	MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
> -	MHI_PKT_TYPE_EE_EVENT = 0x40,
> -	MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
> -	MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
> -	MHI_PKT_TYPE_STALE_EVENT, /* internal event */
> -};
> -
> -/* MHI transfer completion events */
> -enum mhi_ev_ccs {
> -	MHI_EV_CC_INVALID = 0x0,
> -	MHI_EV_CC_SUCCESS = 0x1,
> -	MHI_EV_CC_EOT = 0x2, /* End of transfer event */
> -	MHI_EV_CC_OVERFLOW = 0x3,
> -	MHI_EV_CC_EOB = 0x4, /* End of block event */
> -	MHI_EV_CC_OOB = 0x5, /* Out of block event */
> -	MHI_EV_CC_DB_MODE = 0x6,
> -	MHI_EV_CC_UNDEFINED_ERR = 0x10,
> -	MHI_EV_CC_BAD_TRE = 0x11,
> -};
> -
> -enum mhi_ch_state {
> -	MHI_CH_STATE_DISABLED = 0x0,
> -	MHI_CH_STATE_ENABLED = 0x1,
> -	MHI_CH_STATE_RUNNING = 0x2,
> -	MHI_CH_STATE_SUSPENDED = 0x3,
> -	MHI_CH_STATE_STOP = 0x4,
> -	MHI_CH_STATE_ERROR = 0x5,
> -};
> -
>   enum mhi_ch_state_type {
>   	MHI_CH_STATE_TYPE_RESET,
>   	MHI_CH_STATE_TYPE_STOP,
> @@ -411,11 +263,6 @@ extern const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX];
>   #define TO_DEV_STATE_TRANS_STR(state) (((state) >= DEV_ST_TRANSITION_MAX) ? \
>   				"INVALID_STATE" : dev_state_tran_str[state])
>   
> -extern const char * const mhi_state_str[MHI_STATE_MAX];
> -#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
> -				  !mhi_state_str[state]) ? \
> -				"INVALID_STATE" : mhi_state_str[state])
> -
>   /* internal power states */
>   enum mhi_pm_state {
>   	MHI_PM_STATE_DISABLE,



* Re: [PATCH v3 05/25] bus: mhi: Make mhi_state_str[] array static inline and move to common.h
  2022-02-12 18:20 ` [PATCH v3 05/25] bus: mhi: Make mhi_state_str[] array static inline and move to common.h Manivannan Sadhasivam
  2022-02-15  0:31   ` Hemant Kumar
@ 2022-02-15 20:02   ` Alex Elder
  2022-02-16 11:39     ` Manivannan Sadhasivam
  1 sibling, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:02 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> The mhi_state_str[] array could also be used by the MHI endpoint stack. So
> let's replace the array with a "static inline function" and move it inside
> the "common.h" header so that the endpoint stack can also make use of it.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

I like the use of a function to encapsulate this rather than
using the array as before.

But I still don't like declaring this much static data in a static 
inline function in a header file.  Define it as a "real" function
somewhere common and declare it here instead.

One more minor comment below.

					-Alex

> ---
>   drivers/bus/mhi/common.h       | 29 +++++++++++++++++++++++++----
>   drivers/bus/mhi/host/boot.c    |  2 +-
>   drivers/bus/mhi/host/debugfs.c |  6 +++---
>   drivers/bus/mhi/host/init.c    | 12 ------------
>   drivers/bus/mhi/host/main.c    |  8 ++++----
>   drivers/bus/mhi/host/pm.c      | 14 +++++++-------
>   6 files changed, 40 insertions(+), 31 deletions(-)
> 
> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> index 0d13a202d334..288e47168649 100644
> --- a/drivers/bus/mhi/common.h
> +++ b/drivers/bus/mhi/common.h
> @@ -159,9 +159,30 @@ struct mhi_cmd_ctxt {
>   	__le64 wp __packed __aligned(4);
>   };
>   
> -extern const char * const mhi_state_str[MHI_STATE_MAX];
> -#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
> -				  !mhi_state_str[state]) ? \
> -				"INVALID_STATE" : mhi_state_str[state])
> +static inline const char * const mhi_state_str(enum mhi_state state)
> +{
> +	switch (state) {
> +	case MHI_STATE_RESET:
> +		return "RESET";
> +	case MHI_STATE_READY:
> +		return "READY";
> +	case MHI_STATE_M0:
> +		return "M0";
> +	case MHI_STATE_M1:
> +		return "M1";
> +	case MHI_STATE_M2:
> +		return"M2";

Add space after "return" here and in a few places below.

> +	case MHI_STATE_M3:
> +		return"M3";
> +	case MHI_STATE_M3_FAST:
> +		return"M3 FAST";
> +	case MHI_STATE_BHI:
> +		return"BHI";
> +	case MHI_STATE_SYS_ERR:
> +		return "SYS ERROR";
> +	default:
> +		return "Unknown state";
> +	}
> +};
>   
>   #endif /* _MHI_COMMON_H */
> diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
> index 74295d3cc662..93cb705614c6 100644
> --- a/drivers/bus/mhi/host/boot.c
> +++ b/drivers/bus/mhi/host/boot.c
> @@ -68,7 +68,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
>   
>   	dev_dbg(dev, "Entered with pm_state:%s dev_state:%s ee:%s\n",
>   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> +		mhi_state_str(mhi_cntrl->dev_state),
>   		TO_MHI_EXEC_STR(mhi_cntrl->ee));
>   
>   	/*
> diff --git a/drivers/bus/mhi/host/debugfs.c b/drivers/bus/mhi/host/debugfs.c
> index d818586c229d..399d0db1f1eb 100644
> --- a/drivers/bus/mhi/host/debugfs.c
> +++ b/drivers/bus/mhi/host/debugfs.c
> @@ -20,7 +20,7 @@ static int mhi_debugfs_states_show(struct seq_file *m, void *d)
>   	seq_printf(m, "PM state: %s Device: %s MHI state: %s EE: %s wake: %s\n",
>   		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
>   		   mhi_is_active(mhi_cntrl) ? "Active" : "Inactive",
> -		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> +		   mhi_state_str(mhi_cntrl->dev_state),
>   		   TO_MHI_EXEC_STR(mhi_cntrl->ee),
>   		   mhi_cntrl->wake_set ? "true" : "false");
>   
> @@ -206,13 +206,13 @@ static int mhi_debugfs_regdump_show(struct seq_file *m, void *d)
>   
>   	seq_printf(m, "Host PM state: %s Device state: %s EE: %s\n",
>   		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
> -		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> +		   mhi_state_str(mhi_cntrl->dev_state),
>   		   TO_MHI_EXEC_STR(mhi_cntrl->ee));
>   
>   	state = mhi_get_mhi_state(mhi_cntrl);
>   	ee = mhi_get_exec_env(mhi_cntrl);
>   	seq_printf(m, "Device EE: %s state: %s\n", TO_MHI_EXEC_STR(ee),
> -		   TO_MHI_STATE_STR(state));
> +		   mhi_state_str(state));
>   
>   	for (i = 0; regs[i].name; i++) {
>   		if (!regs[i].base)
> diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
> index 4bd62f32695d..0e301f3f305e 100644
> --- a/drivers/bus/mhi/host/init.c
> +++ b/drivers/bus/mhi/host/init.c
> @@ -44,18 +44,6 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
>   	[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
>   };
>   
> -const char * const mhi_state_str[MHI_STATE_MAX] = {
> -	[MHI_STATE_RESET] = "RESET",
> -	[MHI_STATE_READY] = "READY",
> -	[MHI_STATE_M0] = "M0",
> -	[MHI_STATE_M1] = "M1",
> -	[MHI_STATE_M2] = "M2",
> -	[MHI_STATE_M3] = "M3",
> -	[MHI_STATE_M3_FAST] = "M3 FAST",
> -	[MHI_STATE_BHI] = "BHI",
> -	[MHI_STATE_SYS_ERR] = "SYS ERROR",
> -};
> -
>   const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
>   	[MHI_CH_STATE_TYPE_RESET] = "RESET",
>   	[MHI_CH_STATE_TYPE_STOP] = "STOP",
> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
> index 85f4f7c8d7c6..e436c2993d97 100644
> --- a/drivers/bus/mhi/host/main.c
> +++ b/drivers/bus/mhi/host/main.c
> @@ -479,8 +479,8 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
>   	ee = mhi_get_exec_env(mhi_cntrl);
>   	dev_dbg(dev, "local ee: %s state: %s device ee: %s state: %s\n",
>   		TO_MHI_EXEC_STR(mhi_cntrl->ee),
> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> -		TO_MHI_EXEC_STR(ee), TO_MHI_STATE_STR(state));
> +		mhi_state_str(mhi_cntrl->dev_state),
> +		TO_MHI_EXEC_STR(ee), mhi_state_str(state));
>   
>   	if (state == MHI_STATE_SYS_ERR) {
>   		dev_dbg(dev, "System error detected\n");
> @@ -846,7 +846,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>   			new_state = MHI_TRE_GET_EV_STATE(local_rp);
>   
>   			dev_dbg(dev, "State change event to state: %s\n",
> -				TO_MHI_STATE_STR(new_state));
> +				mhi_state_str(new_state));
>   
>   			switch (new_state) {
>   			case MHI_STATE_M0:
> @@ -873,7 +873,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>   			}
>   			default:
>   				dev_err(dev, "Invalid state: %s\n",
> -					TO_MHI_STATE_STR(new_state));
> +					mhi_state_str(new_state));
>   			}
>   
>   			break;
> diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
> index c35c5ddc7220..088ade0f3e0b 100644
> --- a/drivers/bus/mhi/host/pm.c
> +++ b/drivers/bus/mhi/host/pm.c
> @@ -545,7 +545,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
>   
>   	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
>   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
> +		mhi_state_str(mhi_cntrl->dev_state));
>   
>   	mutex_unlock(&mhi_cntrl->pm_mutex);
>   }
> @@ -689,7 +689,7 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
>   exit_sys_error_transition:
>   	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
>   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
> +		mhi_state_str(mhi_cntrl->dev_state));
>   
>   	mutex_unlock(&mhi_cntrl->pm_mutex);
>   }
> @@ -864,7 +864,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
>   	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
>   		dev_err(dev,
>   			"Did not enter M3 state, MHI state: %s, PM state: %s\n",
> -			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> +			mhi_state_str(mhi_cntrl->dev_state),
>   			to_mhi_pm_state_str(mhi_cntrl->pm_state));
>   		return -EIO;
>   	}
> @@ -890,7 +890,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
>   
>   	dev_dbg(dev, "Entered with PM state: %s, MHI state: %s\n",
>   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
> +		mhi_state_str(mhi_cntrl->dev_state));
>   
>   	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
>   		return 0;
> @@ -900,7 +900,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
>   
>   	if (mhi_get_mhi_state(mhi_cntrl) != MHI_STATE_M3) {
>   		dev_warn(dev, "Resuming from non M3 state (%s)\n",
> -			 TO_MHI_STATE_STR(mhi_get_mhi_state(mhi_cntrl)));
> +			 mhi_state_str(mhi_get_mhi_state(mhi_cntrl)));
>   		if (!force)
>   			return -EINVAL;
>   	}
> @@ -937,7 +937,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
>   	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
>   		dev_err(dev,
>   			"Did not enter M0 state, MHI state: %s, PM state: %s\n",
> -			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> +			mhi_state_str(mhi_cntrl->dev_state),
>   			to_mhi_pm_state_str(mhi_cntrl->pm_state));
>   		return -EIO;
>   	}
> @@ -1088,7 +1088,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
>   
>   	state = mhi_get_mhi_state(mhi_cntrl);
>   	dev_dbg(dev, "Attempting power on with EE: %s, state: %s\n",
> -		TO_MHI_EXEC_STR(current_ee), TO_MHI_STATE_STR(state));
> +		TO_MHI_EXEC_STR(current_ee), mhi_state_str(state));
>   
>   	if (state == MHI_STATE_SYS_ERR) {
>   		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);



* Re: [PATCH v3 06/25] bus: mhi: Cleanup the register definitions used in headers
  2022-02-12 18:20 ` [PATCH v3 06/25] bus: mhi: Cleanup the register definitions used in headers Manivannan Sadhasivam
  2022-02-15  0:37   ` Hemant Kumar
@ 2022-02-15 20:02   ` Alex Elder
  2022-02-16 17:21     ` Manivannan Sadhasivam
  1 sibling, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:02 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> Cleanup includes:
> 
> 1. Moving the MHI register definitions to common.h header with REG_ prefix
>     and using them in the host/internal.h file as an alias. This makes it
>     possible to reuse the register definitions in EP stack that differs by
>     a fixed offset.

I like that you're doing this.  But I don't see the point of this
kind of definition, made in "drivers/bus/mhi/host/internal.h":

   #define MHIREGLEN	REG_MHIREGLEN

Just use REG_MHIREGLEN in the host code too.  (Or use MHIREGLEN in
both places, whichever you prefer.)


> 2. Using the GENMASK macro for masks

Great!

> 3. Removing brackets for single values

They're normally called "parentheses."  "Brackets" more typically
means [] (and {} are "braces", though usage varies).

> 4. Using lowercase for hex values

I think I saw a few upper case hex values in another patch.
Not a big deal, just FYI.

> 5. Using two digits for hex values where applicable

I think I suggested most of these things, so of course
they look awesome to me.

You could use bitfield accessor macros in a few more places.
For example, this:

#define MHI_TRE_CMD_RESET_DWORD1(chid)  (cpu_to_le32((chid << 24) | \
					    (MHI_CMD_RESET_CHAN << 16)))

Could use something more like this:

#define MHI_CMD_CHANNEL_MASK	GENMASK(31, 24)
#define MHI_CMD_COMMAND_MASK	GENMASK(23, 16)

#define MHI_TRE_CMD_RESET_DWORD1(chid) \
	(le32_encode_bits(chid, MHI_CMD_CHANNEL_MASK) | \
	 le32_encode_bits(MHI_CMD_RESET_CHAN, MHI_CMD_COMMAND_MASK))

(But of course I already said I preferred CPU byte order on
these values...)

I would like to see you get rid of the one-to-one definitions
I mentioned at the top.  I haven't done an exhaustive check
of all the symbols, but this looks good generally, so:

Reviewed-by: Alex Elder <elder@linaro.org>

> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>   drivers/bus/mhi/common.h        | 243 ++++++++++++++++++++++++-----
>   drivers/bus/mhi/host/internal.h | 265 +++++++++-----------------------
>   2 files changed, 278 insertions(+), 230 deletions(-)
> 
> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> index 288e47168649..f226f06d4ff9 100644
> --- a/drivers/bus/mhi/common.h
> +++ b/drivers/bus/mhi/common.h
> @@ -9,62 +9,223 @@
>   
>   #include <linux/mhi.h>
>   
> +/* MHI registers */
> +#define REG_MHIREGLEN					0x00
> +#define REG_MHIVER					0x08
> +#define REG_MHICFG					0x10
> +#define REG_CHDBOFF					0x18
> +#define REG_ERDBOFF					0x20
> +#define REG_BHIOFF					0x28
> +#define REG_BHIEOFF					0x2c
> +#define REG_DEBUGOFF					0x30
> +#define REG_MHICTRL					0x38
> +#define REG_MHISTATUS					0x48
> +#define REG_CCABAP_LOWER				0x58
> +#define REG_CCABAP_HIGHER				0x5c
> +#define REG_ECABAP_LOWER				0x60
> +#define REG_ECABAP_HIGHER				0x64
> +#define REG_CRCBAP_LOWER				0x68
> +#define REG_CRCBAP_HIGHER				0x6c
> +#define REG_CRDB_LOWER					0x70
> +#define REG_CRDB_HIGHER					0x74
> +#define REG_MHICTRLBASE_LOWER				0x80
> +#define REG_MHICTRLBASE_HIGHER				0x84
> +#define REG_MHICTRLLIMIT_LOWER				0x88
> +#define REG_MHICTRLLIMIT_HIGHER				0x8c
> +#define REG_MHIDATABASE_LOWER				0x98
> +#define REG_MHIDATABASE_HIGHER				0x9c
> +#define REG_MHIDATALIMIT_LOWER				0xa0
> +#define REG_MHIDATALIMIT_HIGHER				0xa4
> +
> +/* MHI BHI registers */
> +#define REG_BHI_BHIVERSION_MINOR			0x00
> +#define REG_BHI_BHIVERSION_MAJOR			0x04
> +#define REG_BHI_IMGADDR_LOW				0x08
> +#define REG_BHI_IMGADDR_HIGH				0x0c
> +#define REG_BHI_IMGSIZE					0x10
> +#define REG_BHI_RSVD1					0x14
> +#define REG_BHI_IMGTXDB					0x18
> +#define REG_BHI_RSVD2					0x1c
> +#define REG_BHI_INTVEC					0x20
> +#define REG_BHI_RSVD3					0x24
> +#define REG_BHI_EXECENV					0x28
> +#define REG_BHI_STATUS					0x2c
> +#define REG_BHI_ERRCODE					0x30
> +#define REG_BHI_ERRDBG1					0x34
> +#define REG_BHI_ERRDBG2					0x38
> +#define REG_BHI_ERRDBG3					0x3c
> +#define REG_BHI_SERIALNU				0x40
> +#define REG_BHI_SBLANTIROLLVER				0x44
> +#define REG_BHI_NUMSEG					0x48
> +#define REG_BHI_MSMHWID(n)				(0x4c + (0x4 * (n)))
> +#define REG_BHI_OEMPKHASH(n)				(0x64 + (0x4 * (n)))
> +#define REG_BHI_RSVD5					0xc4
> +
> +/* BHI register bits */
> +#define BHI_TXDB_SEQNUM_BMSK				GENMASK(29, 0)
> +#define BHI_TXDB_SEQNUM_SHFT				0
> +#define BHI_STATUS_MASK					GENMASK(31, 30)
> +#define BHI_STATUS_SHIFT				30
> +#define BHI_STATUS_ERROR				0x03
> +#define BHI_STATUS_SUCCESS				0x02
> +#define BHI_STATUS_RESET				0x00
> +
> +/* MHI BHIE registers */
> +#define REG_BHIE_MSMSOCID_OFFS				0x00
> +#define REG_BHIE_TXVECADDR_LOW_OFFS			0x2c
> +#define REG_BHIE_TXVECADDR_HIGH_OFFS			0x30
> +#define REG_BHIE_TXVECSIZE_OFFS				0x34
> +#define REG_BHIE_TXVECDB_OFFS				0x3c
> +#define REG_BHIE_TXVECSTATUS_OFFS			0x44
> +#define REG_BHIE_RXVECADDR_LOW_OFFS			0x60
> +#define REG_BHIE_RXVECADDR_HIGH_OFFS			0x64
> +#define REG_BHIE_RXVECSIZE_OFFS				0x68
> +#define REG_BHIE_RXVECDB_OFFS				0x70
> +#define REG_BHIE_RXVECSTATUS_OFFS			0x78
> +
> +/* BHIE register bits */
> +#define BHIE_TXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
> +#define BHIE_TXVECDB_SEQNUM_SHFT			0
> +#define BHIE_TXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
> +#define BHIE_TXVECSTATUS_SEQNUM_SHFT			0
> +#define BHIE_TXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
> +#define BHIE_TXVECSTATUS_STATUS_SHFT			30
> +#define BHIE_TXVECSTATUS_STATUS_RESET			0x00
> +#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL		0x02
> +#define BHIE_TXVECSTATUS_STATUS_ERROR			0x03
> +#define BHIE_RXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
> +#define BHIE_RXVECDB_SEQNUM_SHFT			0
> +#define BHIE_RXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
> +#define BHIE_RXVECSTATUS_SEQNUM_SHFT			0
> +#define BHIE_RXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
> +#define BHIE_RXVECSTATUS_STATUS_SHFT			30
> +#define BHIE_RXVECSTATUS_STATUS_RESET			0x00
> +#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL		0x02
> +#define BHIE_RXVECSTATUS_STATUS_ERROR			0x03
> +
> +/* MHI register bits */
> +#define MHIREGLEN_MHIREGLEN_MASK			GENMASK(31, 0)
> +#define MHIREGLEN_MHIREGLEN_SHIFT			0
> +#define MHIVER_MHIVER_MASK				GENMASK(31, 0)
> +#define MHIVER_MHIVER_SHIFT				0
> +#define MHICFG_NHWER_MASK				GENMASK(31, 24)
> +#define MHICFG_NHWER_SHIFT				24
> +#define MHICFG_NER_MASK					GENMASK(23, 16)
> +#define MHICFG_NER_SHIFT				16
> +#define MHICFG_NHWCH_MASK				GENMASK(15, 8)
> +#define MHICFG_NHWCH_SHIFT				8
> +#define MHICFG_NCH_MASK					GENMASK(7, 0)
> +#define MHICFG_NCH_SHIFT				0
> +#define CHDBOFF_CHDBOFF_MASK				GENMASK(31, 0)
> +#define CHDBOFF_CHDBOFF_SHIFT				0
> +#define ERDBOFF_ERDBOFF_MASK				GENMASK(31, 0)
> +#define ERDBOFF_ERDBOFF_SHIFT				0
> +#define BHIOFF_BHIOFF_MASK				GENMASK(31, 0)
> +#define BHIOFF_BHIOFF_SHIFT				0
> +#define BHIEOFF_BHIEOFF_MASK				GENMASK(31, 0)
> +#define BHIEOFF_BHIEOFF_SHIFT				0
> +#define DEBUGOFF_DEBUGOFF_MASK				GENMASK(31, 0)
> +#define DEBUGOFF_DEBUGOFF_SHIFT				0
> +#define MHICTRL_MHISTATE_MASK				GENMASK(15, 8)
> +#define MHICTRL_MHISTATE_SHIFT				8
> +#define MHICTRL_RESET_MASK				BIT(1)
> +#define MHICTRL_RESET_SHIFT				1
> +#define MHISTATUS_MHISTATE_MASK				GENMASK(15, 8)
> +#define MHISTATUS_MHISTATE_SHIFT			8
> +#define MHISTATUS_SYSERR_MASK				BIT(2)
> +#define MHISTATUS_SYSERR_SHIFT				2
> +#define MHISTATUS_READY_MASK				BIT(0)
> +#define MHISTATUS_READY_SHIFT				0
> +#define CCABAP_LOWER_CCABAP_LOWER_MASK			GENMASK(31, 0)
> +#define CCABAP_LOWER_CCABAP_LOWER_SHIFT			0
> +#define CCABAP_HIGHER_CCABAP_HIGHER_MASK		GENMASK(31, 0)
> +#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT		0
> +#define ECABAP_LOWER_ECABAP_LOWER_MASK			GENMASK(31, 0)
> +#define ECABAP_LOWER_ECABAP_LOWER_SHIFT			0
> +#define ECABAP_HIGHER_ECABAP_HIGHER_MASK		GENMASK(31, 0)
> +#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT		0
> +#define CRCBAP_LOWER_CRCBAP_LOWER_MASK			GENMASK(31, 0)
> +#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT			0
> +#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK		GENMASK(31, 0)
> +#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT		0
> +#define CRDB_LOWER_CRDB_LOWER_MASK			GENMASK(31, 0)
> +#define CRDB_LOWER_CRDB_LOWER_SHIFT			0
> +#define CRDB_HIGHER_CRDB_HIGHER_MASK			GENMASK(31, 0)
> +#define CRDB_HIGHER_CRDB_HIGHER_SHIFT			0
> +#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK	GENMASK(31, 0)
> +#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT	0
> +#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK	GENMASK(31, 0)
> +#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT	0
> +#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK	GENMASK(31, 0)
> +#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT	0
> +#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK	GENMASK(31, 0)
> +#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT	0
> +#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK	GENMASK(31, 0)
> +#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT	0
> +#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK	GENMASK(31, 0)
> +#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT	0
> +#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK	GENMASK(31, 0)
> +#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT	0
> +#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK	GENMASK(31, 0)
> +#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT	0
> +
>   /* Command Ring Element macros */
>   /* No operation command */
> -#define MHI_TRE_CMD_NOOP_PTR (0)
> -#define MHI_TRE_CMD_NOOP_DWORD0 (0)
> -#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
> +#define MHI_TRE_CMD_NOOP_PTR				0
> +#define MHI_TRE_CMD_NOOP_DWORD0				0
> +#define MHI_TRE_CMD_NOOP_DWORD1				cpu_to_le32(MHI_CMD_NOP << 16)
>   
>   /* Channel reset command */
> -#define MHI_TRE_CMD_RESET_PTR (0)
> -#define MHI_TRE_CMD_RESET_DWORD0 (0)
> -#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> -					(MHI_CMD_RESET_CHAN << 16)))
> +#define MHI_TRE_CMD_RESET_PTR				0
> +#define MHI_TRE_CMD_RESET_DWORD0			0
> +#define MHI_TRE_CMD_RESET_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
> +							(MHI_CMD_RESET_CHAN << 16)))
>   
>   /* Channel stop command */
> -#define MHI_TRE_CMD_STOP_PTR (0)
> -#define MHI_TRE_CMD_STOP_DWORD0 (0)
> -#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> -				       (MHI_CMD_STOP_CHAN << 16)))
> +#define MHI_TRE_CMD_STOP_PTR				0
> +#define MHI_TRE_CMD_STOP_DWORD0				0
> +#define MHI_TRE_CMD_STOP_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
> +							(MHI_CMD_STOP_CHAN << 16)))
>   
>   /* Channel start command */
> -#define MHI_TRE_CMD_START_PTR (0)
> -#define MHI_TRE_CMD_START_DWORD0 (0)
> -#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> -					(MHI_CMD_START_CHAN << 16)))
> +#define MHI_TRE_CMD_START_PTR				0
> +#define MHI_TRE_CMD_START_DWORD0			0
> +#define MHI_TRE_CMD_START_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
> +							(MHI_CMD_START_CHAN << 16)))
>   
> -#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
> -#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> -#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> +#define MHI_TRE_GET_DWORD(tre, word)			le32_to_cpu((tre)->dword[(word)])
> +#define MHI_TRE_GET_CMD_CHID(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_CMD_TYPE(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
>   
>   /* Event descriptor macros */
> -#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
> -#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
> -#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
> -#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
> -#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> -#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
> -#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
> -#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> -#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
> +/* Transfer completion event */
> +#define MHI_TRE_EV_PTR(ptr)				cpu_to_le64(ptr)
> +#define MHI_TRE_EV_DWORD0(code, len)			cpu_to_le32((code << 24) | len)
> +#define MHI_TRE_EV_DWORD1(chid, type)			cpu_to_le32((chid << 24) | (type << 16))
> +#define MHI_TRE_GET_EV_PTR(tre)				le64_to_cpu((tre)->ptr)
> +#define MHI_TRE_GET_EV_CODE(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_LEN(tre)				(MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> +#define MHI_TRE_GET_EV_CHID(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_TYPE(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> +#define MHI_TRE_GET_EV_STATE(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_EXECENV(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_SEQ(tre)				MHI_TRE_GET_DWORD(tre, 0)
> +#define MHI_TRE_GET_EV_TIME(tre)			MHI_TRE_GET_EV_PTR(tre)
> +#define MHI_TRE_GET_EV_COOKIE(tre)			lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> +#define MHI_TRE_GET_EV_VEID(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> +#define MHI_TRE_GET_EV_LINKSPEED(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_LINKWIDTH(tre)			(MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
>   
>   /* Transfer descriptor macros */
> -#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
> -#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
> -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
> -	| (ieot << 9) | (ieob << 8) | chain))
> +#define MHI_TRE_DATA_PTR(ptr)				cpu_to_le64(ptr)
> +#define MHI_TRE_DATA_DWORD0(len)			cpu_to_le32(len & MHI_MAX_MTU)
> +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain)	(cpu_to_le32((2 << 16) | (bei << 10) \
> +							| (ieot << 9) | (ieob << 8) | chain))
>   
>   /* RSC transfer descriptor macros */
> -#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
> -#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
> -#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
> +#define MHI_RSCTRE_DATA_PTR(ptr, len)			cpu_to_le64(((u64)len << 48) | ptr)
> +#define MHI_RSCTRE_DATA_DWORD0(cookie)			cpu_to_le32(cookie)
> +#define MHI_RSCTRE_DATA_DWORD1				cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16)
>   
>   enum mhi_pkt_type {
>   	MHI_PKT_TYPE_INVALID = 0x0,
> diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
> index 622de6ba1a0b..762055a6ec9f 100644
> --- a/drivers/bus/mhi/host/internal.h
> +++ b/drivers/bus/mhi/host/internal.h
> @@ -11,197 +11,84 @@
>   
>   extern struct bus_type mhi_bus_type;
>   
> -#define MHIREGLEN (0x0)
> -#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
> -#define MHIREGLEN_MHIREGLEN_SHIFT (0)
> -
> -#define MHIVER (0x8)
> -#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
> -#define MHIVER_MHIVER_SHIFT (0)
> -
> -#define MHICFG (0x10)
> -#define MHICFG_NHWER_MASK (0xFF000000)
> -#define MHICFG_NHWER_SHIFT (24)
> -#define MHICFG_NER_MASK (0xFF0000)
> -#define MHICFG_NER_SHIFT (16)
> -#define MHICFG_NHWCH_MASK (0xFF00)
> -#define MHICFG_NHWCH_SHIFT (8)
> -#define MHICFG_NCH_MASK (0xFF)
> -#define MHICFG_NCH_SHIFT (0)
> -
> -#define CHDBOFF (0x18)
> -#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
> -#define CHDBOFF_CHDBOFF_SHIFT (0)
> -
> -#define ERDBOFF (0x20)
> -#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
> -#define ERDBOFF_ERDBOFF_SHIFT (0)
> -
> -#define BHIOFF (0x28)
> -#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
> -#define BHIOFF_BHIOFF_SHIFT (0)
> -
> -#define BHIEOFF (0x2C)
> -#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
> -#define BHIEOFF_BHIEOFF_SHIFT (0)
> -
> -#define DEBUGOFF (0x30)
> -#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
> -#define DEBUGOFF_DEBUGOFF_SHIFT (0)
> -
> -#define MHICTRL (0x38)
> -#define MHICTRL_MHISTATE_MASK (0x0000FF00)
> -#define MHICTRL_MHISTATE_SHIFT (8)
> -#define MHICTRL_RESET_MASK (0x2)
> -#define MHICTRL_RESET_SHIFT (1)
> -
> -#define MHISTATUS (0x48)
> -#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
> -#define MHISTATUS_MHISTATE_SHIFT (8)
> -#define MHISTATUS_SYSERR_MASK (0x4)
> -#define MHISTATUS_SYSERR_SHIFT (2)
> -#define MHISTATUS_READY_MASK (0x1)
> -#define MHISTATUS_READY_SHIFT (0)
> -
> -#define CCABAP_LOWER (0x58)
> -#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
> -#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
> -
> -#define CCABAP_HIGHER (0x5C)
> -#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
> -#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
> -
> -#define ECABAP_LOWER (0x60)
> -#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
> -#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
> -
> -#define ECABAP_HIGHER (0x64)
> -#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
> -#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
> -
> -#define CRCBAP_LOWER (0x68)
> -#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
> -#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
> -
> -#define CRCBAP_HIGHER (0x6C)
> -#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
> -#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
> -
> -#define CRDB_LOWER (0x70)
> -#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
> -#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
> -
> -#define CRDB_HIGHER (0x74)
> -#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
> -#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
> -
> -#define MHICTRLBASE_LOWER (0x80)
> -#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
> -#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
> -
> -#define MHICTRLBASE_HIGHER (0x84)
> -#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
> -#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
> -
> -#define MHICTRLLIMIT_LOWER (0x88)
> -#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
> -#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
> -
> -#define MHICTRLLIMIT_HIGHER (0x8C)
> -#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
> -#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
> -
> -#define MHIDATABASE_LOWER (0x98)
> -#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
> -#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
> -
> -#define MHIDATABASE_HIGHER (0x9C)
> -#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
> -#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
> -
> -#define MHIDATALIMIT_LOWER (0xA0)
> -#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
> -#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
> -
> -#define MHIDATALIMIT_HIGHER (0xA4)
> -#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
> -#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
> +/* MHI registers */
> +#define MHIREGLEN			REG_MHIREGLEN
> +#define MHIVER				REG_MHIVER
> +#define MHICFG				REG_MHICFG
> +#define CHDBOFF				REG_CHDBOFF
> +#define ERDBOFF				REG_ERDBOFF
> +#define BHIOFF				REG_BHIOFF
> +#define BHIEOFF				REG_BHIEOFF
> +#define DEBUGOFF			REG_DEBUGOFF
> +#define MHICTRL				REG_MHICTRL
> +#define MHISTATUS			REG_MHISTATUS
> +#define CCABAP_LOWER			REG_CCABAP_LOWER
> +#define CCABAP_HIGHER			REG_CCABAP_HIGHER
> +#define ECABAP_LOWER			REG_ECABAP_LOWER
> +#define ECABAP_HIGHER			REG_ECABAP_HIGHER
> +#define CRCBAP_LOWER			REG_CRCBAP_LOWER
> +#define CRCBAP_HIGHER			REG_CRCBAP_HIGHER
> +#define CRDB_LOWER			REG_CRDB_LOWER
> +#define CRDB_HIGHER			REG_CRDB_HIGHER
> +#define MHICTRLBASE_LOWER		REG_MHICTRLBASE_LOWER
> +#define MHICTRLBASE_HIGHER		REG_MHICTRLBASE_HIGHER
> +#define MHICTRLLIMIT_LOWER		REG_MHICTRLLIMIT_LOWER
> +#define MHICTRLLIMIT_HIGHER		REG_MHICTRLLIMIT_HIGHER
> +#define MHIDATABASE_LOWER		REG_MHIDATABASE_LOWER
> +#define MHIDATABASE_HIGHER		REG_MHIDATABASE_HIGHER
> +#define MHIDATALIMIT_LOWER		REG_MHIDATALIMIT_LOWER
> +#define MHIDATALIMIT_HIGHER		REG_MHIDATALIMIT_HIGHER
>   
>   /* Host request register */
> -#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
> -#define MHI_SOC_RESET_REQ BIT(0)
> -
> -/* MHI BHI offfsets */
> -#define BHI_BHIVERSION_MINOR (0x00)
> -#define BHI_BHIVERSION_MAJOR (0x04)
> -#define BHI_IMGADDR_LOW (0x08)
> -#define BHI_IMGADDR_HIGH (0x0C)
> -#define BHI_IMGSIZE (0x10)
> -#define BHI_RSVD1 (0x14)
> -#define BHI_IMGTXDB (0x18)
> -#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
> -#define BHI_TXDB_SEQNUM_SHFT (0)
> -#define BHI_RSVD2 (0x1C)
> -#define BHI_INTVEC (0x20)
> -#define BHI_RSVD3 (0x24)
> -#define BHI_EXECENV (0x28)
> -#define BHI_STATUS (0x2C)
> -#define BHI_ERRCODE (0x30)
> -#define BHI_ERRDBG1 (0x34)
> -#define BHI_ERRDBG2 (0x38)
> -#define BHI_ERRDBG3 (0x3C)
> -#define BHI_SERIALNU (0x40)
> -#define BHI_SBLANTIROLLVER (0x44)
> -#define BHI_NUMSEG (0x48)
> -#define BHI_MSMHWID(n) (0x4C + (0x4 * (n)))
> -#define BHI_OEMPKHASH(n) (0x64 + (0x4 * (n)))
> -#define BHI_RSVD5 (0xC4)
> -#define BHI_STATUS_MASK (0xC0000000)
> -#define BHI_STATUS_SHIFT (30)
> -#define BHI_STATUS_ERROR (3)
> -#define BHI_STATUS_SUCCESS (2)
> -#define BHI_STATUS_RESET (0)
> -
> -/* MHI BHIE offsets */
> -#define BHIE_MSMSOCID_OFFS (0x0000)
> -#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
> -#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
> -#define BHIE_TXVECSIZE_OFFS (0x0034)
> -#define BHIE_TXVECDB_OFFS (0x003C)
> -#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
> -#define BHIE_TXVECDB_SEQNUM_SHFT (0)
> -#define BHIE_TXVECSTATUS_OFFS (0x0044)
> -#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
> -#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
> -#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
> -#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
> -#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
> -#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
> -#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
> -#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
> -#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
> -#define BHIE_RXVECSIZE_OFFS (0x0068)
> -#define BHIE_RXVECDB_OFFS (0x0070)
> -#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
> -#define BHIE_RXVECDB_SEQNUM_SHFT (0)
> -#define BHIE_RXVECSTATUS_OFFS (0x0078)
> -#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
> -#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
> -#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
> -#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
> -#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
> -#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
> -#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
> -
> -#define SOC_HW_VERSION_OFFS (0x224)
> -#define SOC_HW_VERSION_FAM_NUM_BMSK (0xF0000000)
> -#define SOC_HW_VERSION_FAM_NUM_SHFT (28)
> -#define SOC_HW_VERSION_DEV_NUM_BMSK (0x0FFF0000)
> -#define SOC_HW_VERSION_DEV_NUM_SHFT (16)
> -#define SOC_HW_VERSION_MAJOR_VER_BMSK (0x0000FF00)
> -#define SOC_HW_VERSION_MAJOR_VER_SHFT (8)
> -#define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
> -#define SOC_HW_VERSION_MINOR_VER_SHFT (0)
> +#define MHI_SOC_RESET_REQ_OFFSET	0xb0
> +#define MHI_SOC_RESET_REQ		BIT(0)
> +
> +/* MHI BHI registers */
> +#define BHI_BHIVERSION_MINOR		REG_BHI_BHIVERSION_MINOR
> +#define BHI_BHIVERSION_MAJOR		REG_BHI_BHIVERSION_MAJOR
> +#define BHI_IMGADDR_LOW			REG_BHI_IMGADDR_LOW
> +#define BHI_IMGADDR_HIGH		REG_BHI_IMGADDR_HIGH
> +#define BHI_IMGSIZE			REG_BHI_IMGSIZE
> +#define BHI_RSVD1			REG_BHI_RSVD1
> +#define BHI_IMGTXDB			REG_BHI_IMGTXDB
> +#define BHI_RSVD2			REG_BHI_RSVD2
> +#define BHI_INTVEC			REG_BHI_INTVEC
> +#define BHI_RSVD3			REG_BHI_RSVD3
> +#define BHI_EXECENV			REG_BHI_EXECENV
> +#define BHI_STATUS			REG_BHI_STATUS
> +#define BHI_ERRCODE			REG_BHI_ERRCODE
> +#define BHI_ERRDBG1			REG_BHI_ERRDBG1
> +#define BHI_ERRDBG2			REG_BHI_ERRDBG2
> +#define BHI_ERRDBG3			REG_BHI_ERRDBG3
> +#define BHI_SERIALNU			REG_BHI_SERIALNU
> +#define BHI_SBLANTIROLLVER		REG_BHI_SBLANTIROLLVER
> +#define BHI_NUMSEG			REG_BHI_NUMSEG
> +#define BHI_MSMHWID(n)			REG_BHI_MSMHWID(n)
> +#define BHI_OEMPKHASH(n)		REG_BHI_OEMPKHASH(n)
> +#define BHI_RSVD5			REG_BHI_RSVD5
> +
> +/* MHI BHIE registers */
> +#define BHIE_MSMSOCID_OFFS		REG_BHIE_MSMSOCID_OFFS
> +#define BHIE_TXVECADDR_LOW_OFFS		REG_BHIE_TXVECADDR_LOW_OFFS
> +#define BHIE_TXVECADDR_HIGH_OFFS	REG_BHIE_TXVECADDR_HIGH_OFFS
> +#define BHIE_TXVECSIZE_OFFS		REG_BHIE_TXVECSIZE_OFFS
> +#define BHIE_TXVECDB_OFFS		REG_BHIE_TXVECDB_OFFS
> +#define BHIE_TXVECSTATUS_OFFS		REG_BHIE_TXVECSTATUS_OFFS
> +#define BHIE_RXVECADDR_LOW_OFFS		REG_BHIE_RXVECADDR_LOW_OFFS
> +#define BHIE_RXVECADDR_HIGH_OFFS	REG_BHIE_RXVECADDR_HIGH_OFFS
> +#define BHIE_RXVECSIZE_OFFS		REG_BHIE_RXVECSIZE_OFFS
> +#define BHIE_RXVECDB_OFFS		REG_BHIE_RXVECDB_OFFS
> +#define BHIE_RXVECSTATUS_OFFS		REG_BHIE_RXVECSTATUS_OFFS
> +
> +#define SOC_HW_VERSION_OFFS		0x224
> +#define SOC_HW_VERSION_FAM_NUM_BMSK	GENMASK(31, 28)
> +#define SOC_HW_VERSION_FAM_NUM_SHFT	28
> +#define SOC_HW_VERSION_DEV_NUM_BMSK	GENMASK(27, 16)
> +#define SOC_HW_VERSION_DEV_NUM_SHFT	16
> +#define SOC_HW_VERSION_MAJOR_VER_BMSK	GENMASK(15, 8)
> +#define SOC_HW_VERSION_MAJOR_VER_SHFT	8
> +#define SOC_HW_VERSION_MINOR_VER_BMSK	GENMASK(7, 0)
> +#define SOC_HW_VERSION_MINOR_VER_SHFT	0
>   
>   struct mhi_ctxt {
>   	struct mhi_event_ctxt *er_ctxt;



* Re: [PATCH v3 07/25] bus: mhi: Get rid of SHIFT macros and use bitfield operations
  2022-02-12 18:20 ` [PATCH v3 07/25] bus: mhi: Get rid of SHIFT macros and use bitfield operations Manivannan Sadhasivam
@ 2022-02-15 20:02   ` Alex Elder
  2022-02-16 16:45     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:02 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> Instead of using the hardcoded SHIFT values, use the bitfield macros to
> derive the shift value from the mask at build time.

You accomplished this by changing the way mhi_read_reg_field(),
mhi_poll_reg_field(), and mhi_write_reg_field() are defined.
It would be helpful for you to point out that fact up front.
Then it's fairly clear that the _SHIFT (and _SHFT) definitions
can just go away.  Very nice to remove those though.

> For shift values that cannot be determined at build time, the "__ffs()"
> helper is used to find the shift value at runtime.

Yeah, this is an annoying feature of the bitfield functions,
but you *know* when you're working with a variable mask.
I still think the mask values that are 32 bits wide are
overkill, e.g.:

   #define MHIREGLEN_MHIREGLEN_MASK	GENMASK(31, 0)


These are full 32-bit registers, and I don't see any reason
they would ever *not* be full registers, so there's no point
in applying a mask to them.  Even if some day it did make
sense to use a mask (less than 32 bits wide, for example),
that's something that could be added when that becomes an
issue, rather than complicating the code unnecessarily now.

If you eliminate the 32-bit wide masks, great, but even if
you don't:

Reviewed-by: Alex Elder <elder@linaro.org>

> Suggested-by: Alex Elder <elder@linaro.org>
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>   drivers/bus/mhi/common.h        | 45 ----------------------
>   drivers/bus/mhi/host/boot.c     | 15 ++------
>   drivers/bus/mhi/host/debugfs.c  | 10 ++---
>   drivers/bus/mhi/host/init.c     | 67 +++++++++++++++------------------
>   drivers/bus/mhi/host/internal.h | 10 ++---
>   drivers/bus/mhi/host/main.c     | 16 ++++----
>   drivers/bus/mhi/host/pm.c       | 18 +++------
>   7 files changed, 55 insertions(+), 126 deletions(-)
> 
> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> index f226f06d4ff9..728c82928d8d 100644
> --- a/drivers/bus/mhi/common.h
> +++ b/drivers/bus/mhi/common.h
> @@ -63,9 +63,7 @@
>   
>   /* BHI register bits */
>   #define BHI_TXDB_SEQNUM_BMSK				GENMASK(29, 0)
> -#define BHI_TXDB_SEQNUM_SHFT				0
>   #define BHI_STATUS_MASK					GENMASK(31, 30)
> -#define BHI_STATUS_SHIFT				30
>   #define BHI_STATUS_ERROR				0x03
>   #define BHI_STATUS_SUCCESS				0x02
>   #define BHI_STATUS_RESET				0x00
> @@ -85,89 +83,51 @@
>   
>   /* BHIE register bits */
>   #define BHIE_TXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
> -#define BHIE_TXVECDB_SEQNUM_SHFT			0
>   #define BHIE_TXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
> -#define BHIE_TXVECSTATUS_SEQNUM_SHFT			0
>   #define BHIE_TXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
> -#define BHIE_TXVECSTATUS_STATUS_SHFT			30
>   #define BHIE_TXVECSTATUS_STATUS_RESET			0x00
>   #define BHIE_TXVECSTATUS_STATUS_XFER_COMPL		0x02
>   #define BHIE_TXVECSTATUS_STATUS_ERROR			0x03
>   #define BHIE_RXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
> -#define BHIE_RXVECDB_SEQNUM_SHFT			0
>   #define BHIE_RXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
> -#define BHIE_RXVECSTATUS_SEQNUM_SHFT			0
>   #define BHIE_RXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
> -#define BHIE_RXVECSTATUS_STATUS_SHFT			30
>   #define BHIE_RXVECSTATUS_STATUS_RESET			0x00
>   #define BHIE_RXVECSTATUS_STATUS_XFER_COMPL		0x02
>   #define BHIE_RXVECSTATUS_STATUS_ERROR			0x03
>   
>   /* MHI register bits */
>   #define MHIREGLEN_MHIREGLEN_MASK			GENMASK(31, 0)
> -#define MHIREGLEN_MHIREGLEN_SHIFT			0
>   #define MHIVER_MHIVER_MASK				GENMASK(31, 0)
> -#define MHIVER_MHIVER_SHIFT				0
>   #define MHICFG_NHWER_MASK				GENMASK(31, 24)
> -#define MHICFG_NHWER_SHIFT				24
>   #define MHICFG_NER_MASK					GENMASK(23, 16)
> -#define MHICFG_NER_SHIFT				16
>   #define MHICFG_NHWCH_MASK				GENMASK(15, 8)
> -#define MHICFG_NHWCH_SHIFT				8
>   #define MHICFG_NCH_MASK					GENMASK(7, 0)
> -#define MHICFG_NCH_SHIFT				0
>   #define CHDBOFF_CHDBOFF_MASK				GENMASK(31, 0)
> -#define CHDBOFF_CHDBOFF_SHIFT				0
>   #define ERDBOFF_ERDBOFF_MASK				GENMASK(31, 0)
> -#define ERDBOFF_ERDBOFF_SHIFT				0
>   #define BHIOFF_BHIOFF_MASK				GENMASK(31, 0)
> -#define BHIOFF_BHIOFF_SHIFT				0
>   #define BHIEOFF_BHIEOFF_MASK				GENMASK(31, 0)
> -#define BHIEOFF_BHIEOFF_SHIFT				0
>   #define DEBUGOFF_DEBUGOFF_MASK				GENMASK(31, 0)
> -#define DEBUGOFF_DEBUGOFF_SHIFT				0
>   #define MHICTRL_MHISTATE_MASK				GENMASK(15, 8)
> -#define MHICTRL_MHISTATE_SHIFT				8
>   #define MHICTRL_RESET_MASK				BIT(1)
> -#define MHICTRL_RESET_SHIFT				1
>   #define MHISTATUS_MHISTATE_MASK				GENMASK(15, 8)
> -#define MHISTATUS_MHISTATE_SHIFT			8
>   #define MHISTATUS_SYSERR_MASK				BIT(2)
> -#define MHISTATUS_SYSERR_SHIFT				2
>   #define MHISTATUS_READY_MASK				BIT(0)
> -#define MHISTATUS_READY_SHIFT				0
>   #define CCABAP_LOWER_CCABAP_LOWER_MASK			GENMASK(31, 0)
> -#define CCABAP_LOWER_CCABAP_LOWER_SHIFT			0
>   #define CCABAP_HIGHER_CCABAP_HIGHER_MASK		GENMASK(31, 0)
> -#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT		0
>   #define ECABAP_LOWER_ECABAP_LOWER_MASK			GENMASK(31, 0)
> -#define ECABAP_LOWER_ECABAP_LOWER_SHIFT			0
>   #define ECABAP_HIGHER_ECABAP_HIGHER_MASK		GENMASK(31, 0)
> -#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT		0
>   #define CRCBAP_LOWER_CRCBAP_LOWER_MASK			GENMASK(31, 0)
> -#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT			0
>   #define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK		GENMASK(31, 0)
> -#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT		0
>   #define CRDB_LOWER_CRDB_LOWER_MASK			GENMASK(31, 0)
> -#define CRDB_LOWER_CRDB_LOWER_SHIFT			0
>   #define CRDB_HIGHER_CRDB_HIGHER_MASK			GENMASK(31, 0)
> -#define CRDB_HIGHER_CRDB_HIGHER_SHIFT			0
>   #define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK	GENMASK(31, 0)
> -#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT	0
>   #define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK	GENMASK(31, 0)
> -#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT	0
>   #define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK	GENMASK(31, 0)
> -#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT	0
>   #define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK	GENMASK(31, 0)
> -#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT	0
>   #define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK	GENMASK(31, 0)
> -#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT	0
>   #define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK	GENMASK(31, 0)
> -#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT	0
>   #define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK	GENMASK(31, 0)
> -#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT	0
>   #define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK	GENMASK(31, 0)
> -#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT	0
>   
>   /* Command Ring Element macros */
>   /* No operation command */
> @@ -277,9 +237,7 @@ enum mhi_cmd_type {
>   
>   #define EV_CTX_RESERVED_MASK GENMASK(7, 0)
>   #define EV_CTX_INTMODC_MASK GENMASK(15, 8)
> -#define EV_CTX_INTMODC_SHIFT 8
>   #define EV_CTX_INTMODT_MASK GENMASK(31, 16)
> -#define EV_CTX_INTMODT_SHIFT 16
>   struct mhi_event_ctxt {
>   	__le32 intmod;
>   	__le32 ertype;
> @@ -292,11 +250,8 @@ struct mhi_event_ctxt {
>   };
>   
>   #define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
> -#define CHAN_CTX_CHSTATE_SHIFT 0
>   #define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
> -#define CHAN_CTX_BRSTMODE_SHIFT 8
>   #define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
> -#define CHAN_CTX_POLLCFG_SHIFT 10
>   #define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
>   struct mhi_chan_ctxt {
>   	__le32 chcfg;
> diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
> index 93cb705614c6..b0da7ca4519c 100644
> --- a/drivers/bus/mhi/host/boot.c
> +++ b/drivers/bus/mhi/host/boot.c
> @@ -46,8 +46,7 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
>   	sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_RXVECSTATUS_SEQNUM_BMSK);
>   
>   	mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
> -			    BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
> -			    sequence_id);
> +			    BHIE_RXVECDB_SEQNUM_BMSK, sequence_id);
>   
>   	dev_dbg(dev, "Address: %p and len: 0x%zx sequence: %u\n",
>   		&mhi_buf->dma_addr, mhi_buf->len, sequence_id);
> @@ -127,9 +126,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
>   
>   	while (retry--) {
>   		ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
> -					 BHIE_RXVECSTATUS_STATUS_BMSK,
> -					 BHIE_RXVECSTATUS_STATUS_SHFT,
> -					 &rx_status);
> +					 BHIE_RXVECSTATUS_STATUS_BMSK, &rx_status);
>   		if (ret)
>   			return -EIO;
>   
> @@ -168,7 +165,6 @@ int mhi_download_rddm_image(struct mhi_controller *mhi_cntrl, bool in_panic)
>   			   mhi_read_reg_field(mhi_cntrl, base,
>   					      BHIE_RXVECSTATUS_OFFS,
>   					      BHIE_RXVECSTATUS_STATUS_BMSK,
> -					      BHIE_RXVECSTATUS_STATUS_SHFT,
>   					      &rx_status) || rx_status,
>   			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
>   
> @@ -203,8 +199,7 @@ static int mhi_fw_load_bhie(struct mhi_controller *mhi_cntrl,
>   	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
>   
>   	mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
> -			    BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
> -			    sequence_id);
> +			    BHIE_TXVECDB_SEQNUM_BMSK, sequence_id);
>   	read_unlock_bh(pm_lock);
>   
>   	/* Wait for the image download to complete */
> @@ -213,7 +208,6 @@ static int mhi_fw_load_bhie(struct mhi_controller *mhi_cntrl,
>   				 mhi_read_reg_field(mhi_cntrl, base,
>   						   BHIE_TXVECSTATUS_OFFS,
>   						   BHIE_TXVECSTATUS_STATUS_BMSK,
> -						   BHIE_TXVECSTATUS_STATUS_SHFT,
>   						   &tx_status) || tx_status,
>   				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
>   	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
> @@ -265,8 +259,7 @@ static int mhi_fw_load_bhi(struct mhi_controller *mhi_cntrl,
>   	ret = wait_event_timeout(mhi_cntrl->state_event,
>   			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
>   			   mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
> -					      BHI_STATUS_MASK, BHI_STATUS_SHIFT,
> -					      &tx_status) || tx_status,
> +					      BHI_STATUS_MASK, &tx_status) || tx_status,
>   			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
>   	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
>   		goto invalid_pm_state;
> diff --git a/drivers/bus/mhi/host/debugfs.c b/drivers/bus/mhi/host/debugfs.c
> index 399d0db1f1eb..cfec7811dfbb 100644
> --- a/drivers/bus/mhi/host/debugfs.c
> +++ b/drivers/bus/mhi/host/debugfs.c
> @@ -61,9 +61,9 @@ static int mhi_debugfs_events_show(struct seq_file *m, void *d)
>   
>   		seq_printf(m, "Index: %d intmod count: %lu time: %lu",
>   			   i, (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODC_MASK) >>
> -			   EV_CTX_INTMODC_SHIFT,
> +			   __ffs(EV_CTX_INTMODC_MASK),
>   			   (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODT_MASK) >>
> -			   EV_CTX_INTMODT_SHIFT);
> +			   __ffs(EV_CTX_INTMODT_MASK));
>   
>   		seq_printf(m, " base: 0x%0llx len: 0x%llx", le64_to_cpu(er_ctxt->rbase),
>   			   le64_to_cpu(er_ctxt->rlen));
> @@ -107,10 +107,10 @@ static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
>   		seq_printf(m,
>   			   "%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
>   			   mhi_chan->name, mhi_chan->chan, (le32_to_cpu(chan_ctxt->chcfg) &
> -			   CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
> +			   CHAN_CTX_CHSTATE_MASK) >> __ffs(CHAN_CTX_CHSTATE_MASK),
>   			   (le32_to_cpu(chan_ctxt->chcfg) & CHAN_CTX_BRSTMODE_MASK) >>
> -			   CHAN_CTX_BRSTMODE_SHIFT, (le32_to_cpu(chan_ctxt->chcfg) &
> -			   CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
> +			   __ffs(CHAN_CTX_BRSTMODE_MASK), (le32_to_cpu(chan_ctxt->chcfg) &
> +			   CHAN_CTX_POLLCFG_MASK) >> __ffs(CHAN_CTX_POLLCFG_MASK));
>   
>   		seq_printf(m, " type: 0x%x event ring: %u", le32_to_cpu(chan_ctxt->chtype),
>   			   le32_to_cpu(chan_ctxt->erindex));
> diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
> index 0e301f3f305e..05e457d12446 100644
> --- a/drivers/bus/mhi/host/init.c
> +++ b/drivers/bus/mhi/host/init.c
> @@ -4,6 +4,7 @@
>    *
>    */
>   
> +#include <linux/bitfield.h>
>   #include <linux/debugfs.h>
>   #include <linux/device.h>
>   #include <linux/dma-direction.h>
> @@ -283,11 +284,11 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
>   
>   		tmp = le32_to_cpu(chan_ctxt->chcfg);
>   		tmp &= ~CHAN_CTX_CHSTATE_MASK;
> -		tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
> +		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_DISABLED);
>   		tmp &= ~CHAN_CTX_BRSTMODE_MASK;
> -		tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
> +		tmp |= FIELD_PREP(CHAN_CTX_BRSTMODE_MASK, mhi_chan->db_cfg.brstmode);
>   		tmp &= ~CHAN_CTX_POLLCFG_MASK;
> -		tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
> +		tmp |= FIELD_PREP(CHAN_CTX_POLLCFG_MASK, mhi_chan->db_cfg.pollcfg);
>   		chan_ctxt->chcfg = cpu_to_le32(tmp);
>   
>   		chan_ctxt->chtype = cpu_to_le32(mhi_chan->type);
> @@ -319,7 +320,7 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
>   		tmp = le32_to_cpu(er_ctxt->intmod);
>   		tmp &= ~EV_CTX_INTMODC_MASK;
>   		tmp &= ~EV_CTX_INTMODT_MASK;
> -		tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);
> +		tmp |= FIELD_PREP(EV_CTX_INTMODT_MASK, mhi_event->intmod);
>   		er_ctxt->intmod = cpu_to_le32(tmp);
>   
>   		er_ctxt->ertype = cpu_to_le32(MHI_ER_TYPE_VALID);
> @@ -425,71 +426,70 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
>   	struct {
>   		u32 offset;
>   		u32 mask;
> -		u32 shift;
>   		u32 val;
>   	} reg_info[] = {
>   		{
> -			CCABAP_HIGHER, U32_MAX, 0,
> +			CCABAP_HIGHER, U32_MAX,
>   			upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
>   		},
>   		{
> -			CCABAP_LOWER, U32_MAX, 0,
> +			CCABAP_LOWER, U32_MAX,
>   			lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
>   		},
>   		{
> -			ECABAP_HIGHER, U32_MAX, 0,
> +			ECABAP_HIGHER, U32_MAX,
>   			upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
>   		},
>   		{
> -			ECABAP_LOWER, U32_MAX, 0,
> +			ECABAP_LOWER, U32_MAX,
>   			lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
>   		},
>   		{
> -			CRCBAP_HIGHER, U32_MAX, 0,
> +			CRCBAP_HIGHER, U32_MAX,
>   			upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
>   		},
>   		{
> -			CRCBAP_LOWER, U32_MAX, 0,
> +			CRCBAP_LOWER, U32_MAX,
>   			lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
>   		},
>   		{
> -			MHICFG, MHICFG_NER_MASK, MHICFG_NER_SHIFT,
> +			MHICFG, MHICFG_NER_MASK,
>   			mhi_cntrl->total_ev_rings,
>   		},
>   		{
> -			MHICFG, MHICFG_NHWER_MASK, MHICFG_NHWER_SHIFT,
> +			MHICFG, MHICFG_NHWER_MASK,
>   			mhi_cntrl->hw_ev_rings,
>   		},
>   		{
> -			MHICTRLBASE_HIGHER, U32_MAX, 0,
> +			MHICTRLBASE_HIGHER, U32_MAX,
>   			upper_32_bits(mhi_cntrl->iova_start),
>   		},
>   		{
> -			MHICTRLBASE_LOWER, U32_MAX, 0,
> +			MHICTRLBASE_LOWER, U32_MAX,
>   			lower_32_bits(mhi_cntrl->iova_start),
>   		},
>   		{
> -			MHIDATABASE_HIGHER, U32_MAX, 0,
> +			MHIDATABASE_HIGHER, U32_MAX,
>   			upper_32_bits(mhi_cntrl->iova_start),
>   		},
>   		{
> -			MHIDATABASE_LOWER, U32_MAX, 0,
> +			MHIDATABASE_LOWER, U32_MAX,
>   			lower_32_bits(mhi_cntrl->iova_start),
>   		},
>   		{
> -			MHICTRLLIMIT_HIGHER, U32_MAX, 0,
> +			MHICTRLLIMIT_HIGHER, U32_MAX,
>   			upper_32_bits(mhi_cntrl->iova_stop),
>   		},
>   		{
> -			MHICTRLLIMIT_LOWER, U32_MAX, 0,
> +			MHICTRLLIMIT_LOWER, U32_MAX,
>   			lower_32_bits(mhi_cntrl->iova_stop),
>   		},
>   		{
> -			MHIDATALIMIT_HIGHER, U32_MAX, 0,
> +			MHIDATALIMIT_HIGHER, U32_MAX,
>   			upper_32_bits(mhi_cntrl->iova_stop),
>   		},
>   		{
> -			MHIDATALIMIT_LOWER, U32_MAX, 0,
> +			MHIDATALIMIT_LOWER, U32_MAX,
>   			lower_32_bits(mhi_cntrl->iova_stop),
>   		},
>   		{ 0, 0, 0 }
> @@ -498,8 +498,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
>   	dev_dbg(dev, "Initializing MHI registers\n");
>   
>   	/* Read channel db offset */
> -	ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK,
> -				 CHDBOFF_CHDBOFF_SHIFT, &val);
> +	ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK, &val);
>   	if (ret) {
>   		dev_err(dev, "Unable to read CHDBOFF register\n");
>   		return -EIO;
> @@ -515,8 +514,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
>   		mhi_chan->tre_ring.db_addr = base + val;
>   
>   	/* Read event ring db offset */
> -	ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK,
> -				 ERDBOFF_ERDBOFF_SHIFT, &val);
> +	ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK, &val);
>   	if (ret) {
>   		dev_err(dev, "Unable to read ERDBOFF register\n");
>   		return -EIO;
> @@ -537,8 +535,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
>   	/* Write to MMIO registers */
>   	for (i = 0; reg_info[i].offset; i++)
>   		mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
> -				    reg_info[i].mask, reg_info[i].shift,
> -				    reg_info[i].val);
> +				    reg_info[i].mask, reg_info[i].val);
>   
>   	return 0;
>   }
> @@ -571,7 +568,7 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
>   
>   	tmp = le32_to_cpu(chan_ctxt->chcfg);
>   	tmp &= ~CHAN_CTX_CHSTATE_MASK;
> -	tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
> +	tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_DISABLED);
>   	chan_ctxt->chcfg = cpu_to_le32(tmp);
>   
>   	/* Update to all cores */
> @@ -608,7 +605,7 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
>   
>   	tmp = le32_to_cpu(chan_ctxt->chcfg);
>   	tmp &= ~CHAN_CTX_CHSTATE_MASK;
> -	tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
> +	tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_ENABLED);
>   	chan_ctxt->chcfg = cpu_to_le32(tmp);
>   
>   	chan_ctxt->rbase = cpu_to_le64(tre_ring->iommu_base);
> @@ -952,14 +949,10 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
>   	if (ret)
>   		goto err_destroy_wq;
>   
> -	mhi_cntrl->family_number = (soc_info & SOC_HW_VERSION_FAM_NUM_BMSK) >>
> -					SOC_HW_VERSION_FAM_NUM_SHFT;
> -	mhi_cntrl->device_number = (soc_info & SOC_HW_VERSION_DEV_NUM_BMSK) >>
> -					SOC_HW_VERSION_DEV_NUM_SHFT;
> -	mhi_cntrl->major_version = (soc_info & SOC_HW_VERSION_MAJOR_VER_BMSK) >>
> -					SOC_HW_VERSION_MAJOR_VER_SHFT;
> -	mhi_cntrl->minor_version = (soc_info & SOC_HW_VERSION_MINOR_VER_BMSK) >>
> -					SOC_HW_VERSION_MINOR_VER_SHFT;
> +	mhi_cntrl->family_number = FIELD_GET(SOC_HW_VERSION_FAM_NUM_BMSK, soc_info);
> +	mhi_cntrl->device_number = FIELD_GET(SOC_HW_VERSION_DEV_NUM_BMSK, soc_info);
> +	mhi_cntrl->major_version = FIELD_GET(SOC_HW_VERSION_MAJOR_VER_BMSK, soc_info);
> +	mhi_cntrl->minor_version = FIELD_GET(SOC_HW_VERSION_MINOR_VER_BMSK, soc_info);
>   
>   	mhi_cntrl->index = ida_alloc(&mhi_controller_ida, GFP_KERNEL);
>   	if (mhi_cntrl->index < 0) {
> diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
> index 762055a6ec9f..21381781d7c5 100644
> --- a/drivers/bus/mhi/host/internal.h
> +++ b/drivers/bus/mhi/host/internal.h
> @@ -82,13 +82,9 @@ extern struct bus_type mhi_bus_type;
>   
>   #define SOC_HW_VERSION_OFFS		0x224
>   #define SOC_HW_VERSION_FAM_NUM_BMSK	GENMASK(31, 28)
> -#define SOC_HW_VERSION_FAM_NUM_SHFT	28
>   #define SOC_HW_VERSION_DEV_NUM_BMSK	GENMASK(27, 16)
> -#define SOC_HW_VERSION_DEV_NUM_SHFT	16
>   #define SOC_HW_VERSION_MAJOR_VER_BMSK	GENMASK(15, 8)
> -#define SOC_HW_VERSION_MAJOR_VER_SHFT	8
>   #define SOC_HW_VERSION_MINOR_VER_BMSK	GENMASK(7, 0)
> -#define SOC_HW_VERSION_MINOR_VER_SHFT	0
>   
>   struct mhi_ctxt {
>   	struct mhi_event_ctxt *er_ctxt;
> @@ -393,14 +389,14 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
>   			      void __iomem *base, u32 offset, u32 *out);
>   int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
>   				    void __iomem *base, u32 offset, u32 mask,
> -				    u32 shift, u32 *out);
> +				    u32 *out);
>   int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
>   				    void __iomem *base, u32 offset, u32 mask,
> -				    u32 shift, u32 val, u32 delayus);
> +				    u32 val, u32 delayus);
>   void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
>   		   u32 offset, u32 val);
>   void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
> -			 u32 offset, u32 mask, u32 shift, u32 val);
> +			 u32 offset, u32 mask, u32 val);
>   void mhi_ring_er_db(struct mhi_event *mhi_event);
>   void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
>   		  dma_addr_t db_val);
> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
> index e436c2993d97..02ac5faf9178 100644
> --- a/drivers/bus/mhi/host/main.c
> +++ b/drivers/bus/mhi/host/main.c
> @@ -24,7 +24,7 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
>   
>   int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
>   				    void __iomem *base, u32 offset,
> -				    u32 mask, u32 shift, u32 *out)
> +				    u32 mask, u32 *out)
>   {
>   	u32 tmp;
>   	int ret;
> @@ -33,21 +33,20 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
>   	if (ret)
>   		return ret;
>   
> -	*out = (tmp & mask) >> shift;
> +	*out = (tmp & mask) >> __ffs(mask);
>   
>   	return 0;
>   }
>   
>   int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
>   				    void __iomem *base, u32 offset,
> -				    u32 mask, u32 shift, u32 val, u32 delayus)
> +				    u32 mask, u32 val, u32 delayus)
>   {
>   	int ret;
>   	u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
>   
>   	while (retry--) {
> -		ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, shift,
> -					 &out);
> +		ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, &out);
>   		if (ret)
>   			return ret;
>   
> @@ -67,7 +66,7 @@ void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
>   }
>   
>   void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
> -			 u32 offset, u32 mask, u32 shift, u32 val)
> +			 u32 offset, u32 mask, u32 val)
>   {
>   	int ret;
>   	u32 tmp;
> @@ -77,7 +76,7 @@ void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
>   		return;
>   
>   	tmp &= ~mask;
> -	tmp |= (val << shift);
> +	tmp |= (val << __ffs(mask));
>   	mhi_write_reg(mhi_cntrl, base, offset, tmp);
>   }
>   
> @@ -159,8 +158,7 @@ enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl)
>   {
>   	u32 state;
>   	int ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
> -				     MHISTATUS_MHISTATE_MASK,
> -				     MHISTATUS_MHISTATE_SHIFT, &state);
> +				     MHISTATUS_MHISTATE_MASK, &state);
>   	return ret ? MHI_STATE_MAX : state;
>   }
>   EXPORT_SYMBOL_GPL(mhi_get_mhi_state);
> diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
> index 088ade0f3e0b..3d90b8ecd3d9 100644
> --- a/drivers/bus/mhi/host/pm.c
> +++ b/drivers/bus/mhi/host/pm.c
> @@ -131,11 +131,10 @@ void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state)
>   {
>   	if (state == MHI_STATE_RESET) {
>   		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
> -				    MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 1);
> +				    MHICTRL_RESET_MASK, 1);
>   	} else {
>   		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
> -				    MHICTRL_MHISTATE_MASK,
> -				    MHICTRL_MHISTATE_SHIFT, state);
> +				    MHICTRL_MHISTATE_MASK, state);
>   	}
>   }
>   
> @@ -167,16 +166,14 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
>   
>   	/* Wait for RESET to be cleared and READY bit to be set by the device */
>   	ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
> -				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
> -				 interval_us);
> +				 MHICTRL_RESET_MASK, 0, interval_us);
>   	if (ret) {
>   		dev_err(dev, "Device failed to clear MHI Reset\n");
>   		return ret;
>   	}
>   
>   	ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
> -				 MHISTATUS_READY_MASK, MHISTATUS_READY_SHIFT, 1,
> -				 interval_us);
> +				 MHISTATUS_READY_MASK, 1, interval_us);
>   	if (ret) {
>   		dev_err(dev, "Device failed to enter MHI Ready\n");
>   		return ret;
> @@ -470,8 +467,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
>   
>   		/* Wait for the reset bit to be cleared by the device */
>   		ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
> -				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
> -				 25000);
> +				 MHICTRL_RESET_MASK, 0, 25000);
>   		if (ret)
>   			dev_err(dev, "Device failed to clear MHI Reset\n");
>   
> @@ -602,7 +598,6 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
>   							    mhi_cntrl->regs,
>   							    MHICTRL,
>   							    MHICTRL_RESET_MASK,
> -							    MHICTRL_RESET_SHIFT,
>   							    &in_reset) ||
>   					!in_reset, timeout);
>   		if (!ret || in_reset) {
> @@ -1093,8 +1088,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
>   	if (state == MHI_STATE_SYS_ERR) {
>   		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
>   		ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
> -				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
> -				 interval_us);
> +				 MHICTRL_RESET_MASK, 0, interval_us);
>   		if (ret) {
>   			dev_info(dev, "Failed to reset MHI due to syserr state\n");
>   			goto error_exit;


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers
  2022-02-12 18:21 ` [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers Manivannan Sadhasivam
  2022-02-15  1:04   ` Hemant Kumar
@ 2022-02-15 20:02   ` Alex Elder
  2022-02-17  9:53     ` Manivannan Sadhasivam
  1 sibling, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:02 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> This commit adds support for registering MHI endpoint controller drivers
> with the MHI endpoint stack. MHI endpoint controller drivers manages

s/manages/manage/

> the interaction with the host machines such as x86. They are also the

  (such as x86)

> MHI endpoint bus master in charge of managing the physical link between the
> host and endpoint device.
> 
> The endpoint controller driver encloses all information about the
> underlying physical bus like PCIe. The registration process involves

s/like PCIe/(i.e., PCIe)/

> parsing the channel configuration and allocating an MHI EP device.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

OK!!!  On to the MHI endpoint code!

Quite a few comments below, but nothing very major.

					-Alex

> ---
>   drivers/bus/mhi/Kconfig       |   1 +
>   drivers/bus/mhi/Makefile      |   3 +
>   drivers/bus/mhi/ep/Kconfig    |  10 ++
>   drivers/bus/mhi/ep/Makefile   |   2 +
>   drivers/bus/mhi/ep/internal.h | 160 +++++++++++++++++++++++
>   drivers/bus/mhi/ep/main.c     | 234 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        | 143 +++++++++++++++++++++
>   7 files changed, 553 insertions(+)
>   create mode 100644 drivers/bus/mhi/ep/Kconfig
>   create mode 100644 drivers/bus/mhi/ep/Makefile
>   create mode 100644 drivers/bus/mhi/ep/internal.h
>   create mode 100644 drivers/bus/mhi/ep/main.c
>   create mode 100644 include/linux/mhi_ep.h
> 
> diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
> index 4748df7f9cd5..b39a11e6c624 100644
> --- a/drivers/bus/mhi/Kconfig
> +++ b/drivers/bus/mhi/Kconfig
> @@ -6,3 +6,4 @@
>   #
>   
>   source "drivers/bus/mhi/host/Kconfig"
> +source "drivers/bus/mhi/ep/Kconfig"
> diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
> index 5f5708a249f5..46981331b38f 100644
> --- a/drivers/bus/mhi/Makefile
> +++ b/drivers/bus/mhi/Makefile
> @@ -1,2 +1,5 @@
>   # Host MHI stack
>   obj-y += host/
> +
> +# Endpoint MHI stack
> +obj-y += ep/
> diff --git a/drivers/bus/mhi/ep/Kconfig b/drivers/bus/mhi/ep/Kconfig
> new file mode 100644
> index 000000000000..229c71397b30
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/Kconfig
> @@ -0,0 +1,10 @@
> +config MHI_BUS_EP
> +	tristate "Modem Host Interface (MHI) bus Endpoint implementation"
> +	help
> +	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
> +	  communication protocol used by the host processors to control

s/the host processors/a host processor/

> +	  and communicate with modem devices over a high speed peripheral

s/modem devices/a modem device/

> +	  bus or shared memory.
> +
> +	  MHI_BUS_EP implements the MHI protocol for the endpoint devices
> +	  like SDX55 modem connected to the host machine over PCIe.

s/devices like/devices, such as/

> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> new file mode 100644
> index 000000000000..64e29252b608
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -0,0 +1,2 @@
> +obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> +mhi_ep-y := main.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> new file mode 100644
> index 000000000000..e313a2546664
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -0,0 +1,160 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2021, Linaro Ltd.

Update your copyright statement (here and everywhere before you
send your next version).

> + *
> + */
> +
> +#ifndef _MHI_EP_INTERNAL_
> +#define _MHI_EP_INTERNAL_
> +
> +#include <linux/bitfield.h>
> +
> +#include "../common.h"
> +
> +extern struct bus_type mhi_ep_bus_type;
> +
> +#define MHI_REG_OFFSET				0x100
> +#define BHI_REG_OFFSET				0x200

Rather than defining the REG_OFFSET values here and adding
them to every definition below, why not have the base
address used (e.g., in mhi_write_reg_field()) be adjusted
by the constant amount?

I'm just looking at mhi_init_mmio() (in the existing code)
as an example, but for example, the base address used
comes from mhi_cntrl->regs.  Can you instead just define
a pointer somewhere that is the base of the MHI register
range, which is already offset by the appropriate amount?

> +
> +/* MHI registers */
> +#define MHIREGLEN				(MHI_REG_OFFSET + REG_MHIREGLEN)
> +#define MHIVER					(MHI_REG_OFFSET + REG_MHIVER)
> +#define MHICFG					(MHI_REG_OFFSET + REG_MHICFG)
> +#define CHDBOFF					(MHI_REG_OFFSET + REG_CHDBOFF)
> +#define ERDBOFF					(MHI_REG_OFFSET + REG_ERDBOFF)
> +#define BHIOFF					(MHI_REG_OFFSET + REG_BHIOFF)
> +#define BHIEOFF					(MHI_REG_OFFSET + REG_BHIEOFF)
> +#define DEBUGOFF				(MHI_REG_OFFSET + REG_DEBUGOFF)
> +#define MHICTRL					(MHI_REG_OFFSET + REG_MHICTRL)
> +#define MHISTATUS				(MHI_REG_OFFSET + REG_MHISTATUS)
> +#define CCABAP_LOWER				(MHI_REG_OFFSET + REG_CCABAP_LOWER)
> +#define CCABAP_HIGHER				(MHI_REG_OFFSET + REG_CCABAP_HIGHER)
> +#define ECABAP_LOWER				(MHI_REG_OFFSET + REG_ECABAP_LOWER)
> +#define ECABAP_HIGHER				(MHI_REG_OFFSET + REG_ECABAP_HIGHER)
> +#define CRCBAP_LOWER				(MHI_REG_OFFSET + REG_CRCBAP_LOWER)
> +#define CRCBAP_HIGHER				(MHI_REG_OFFSET + REG_CRCBAP_HIGHER)
> +#define CRDB_LOWER				(MHI_REG_OFFSET + REG_CRDB_LOWER)
> +#define CRDB_HIGHER				(MHI_REG_OFFSET + REG_CRDB_HIGHER)
> +#define MHICTRLBASE_LOWER			(MHI_REG_OFFSET + REG_MHICTRLBASE_LOWER)
> +#define MHICTRLBASE_HIGHER			(MHI_REG_OFFSET + REG_MHICTRLBASE_HIGHER)
> +#define MHICTRLLIMIT_LOWER			(MHI_REG_OFFSET + REG_MHICTRLLIMIT_LOWER)
> +#define MHICTRLLIMIT_HIGHER			(MHI_REG_OFFSET + REG_MHICTRLLIMIT_HIGHER)
> +#define MHIDATABASE_LOWER			(MHI_REG_OFFSET + REG_MHIDATABASE_LOWER)
> +#define MHIDATABASE_HIGHER			(MHI_REG_OFFSET + REG_MHIDATABASE_HIGHER)
> +#define MHIDATALIMIT_LOWER			(MHI_REG_OFFSET + REG_MHIDATALIMIT_LOWER)
> +#define MHIDATALIMIT_HIGHER			(MHI_REG_OFFSET + REG_MHIDATALIMIT_HIGHER)
> +
> +/* MHI BHI registers */
> +#define BHI_IMGTXDB				(BHI_REG_OFFSET + REG_BHI_IMGTXDB)
> +#define BHI_EXECENV				(BHI_REG_OFFSET + REG_BHI_EXECENV)
> +#define BHI_INTVEC				(BHI_REG_OFFSET + REG_BHI_INTVEC)
> +
> +/* MHI Doorbell registers */
> +#define CHDB_LOWER_n(n)				(0x400 + 0x8 * (n))
> +#define CHDB_HIGHER_n(n)			(0x404 + 0x8 * (n))
> +#define ERDB_LOWER_n(n)				(0x800 + 0x8 * (n))
> +#define ERDB_HIGHER_n(n)			(0x804 + 0x8 * (n))
> +
> +#define MHI_CTRL_INT_STATUS_A7			0x4
> +#define MHI_CTRL_INT_STATUS_A7_MSK		BIT(0)
> +#define MHI_CTRL_INT_STATUS_CRDB_MSK		BIT(1)
> +#define MHI_CHDB_INT_STATUS_A7_n(n)		(0x28 + 0x4 * (n))
> +#define MHI_ERDB_INT_STATUS_A7_n(n)		(0x38 + 0x4 * (n))
> +
> +#define MHI_CTRL_INT_CLEAR_A7			0x4c
> +#define MHI_CTRL_INT_MMIO_WR_CLEAR		BIT(2)
> +#define MHI_CTRL_INT_CRDB_CLEAR			BIT(1)
> +#define MHI_CTRL_INT_CRDB_MHICTRL_CLEAR		BIT(0)
> +
> +#define MHI_CHDB_INT_CLEAR_A7_n(n)		(0x70 + 0x4 * (n))
> +#define MHI_CHDB_INT_CLEAR_A7_n_CLEAR_ALL	GENMASK(31, 0)
> +#define MHI_ERDB_INT_CLEAR_A7_n(n)		(0x80 + 0x4 * (n))
> +#define MHI_ERDB_INT_CLEAR_A7_n_CLEAR_ALL	GENMASK(31, 0)
> +
> +/*
> + * Unlike the usual "masking" convention, writing "1" to a bit in this register
> + * enables the interrupt and writing "0" will disable it..
> + */
> +#define MHI_CTRL_INT_MASK_A7			0x94
> +#define MHI_CTRL_INT_MASK_A7_MASK		GENMASK(1, 0)
> +#define MHI_CTRL_MHICTRL_MASK			BIT(0)
> +#define MHI_CTRL_CRDB_MASK			BIT(1)
> +
> +#define MHI_CHDB_INT_MASK_A7_n(n)		(0xb8 + 0x4 * (n))
> +#define MHI_CHDB_INT_MASK_A7_n_EN_ALL		GENMASK(31, 0)
> +#define MHI_ERDB_INT_MASK_A7_n(n)		(0xc8 + 0x4 * (n))
> +#define MHI_ERDB_INT_MASK_A7_n_EN_ALL		GENMASK(31, 0)
> +
> +#define NR_OF_CMD_RINGS				1
> +#define MHI_MASK_ROWS_CH_EV_DB			4
> +#define MHI_MASK_CH_EV_LEN			32
> +
> +/* Generic context */
> +struct mhi_generic_ctx {
> +	__u32 reserved0;
> +	__u32 reserved1;
> +	__u32 reserved2;
> +
> +	__u64 rbase __packed __aligned(4);
> +	__u64 rlen __packed __aligned(4);
> +	__u64 rp __packed __aligned(4);
> +	__u64 wp __packed __aligned(4);
> +};

I'm pretty sure this constitutes an external interface, so
every field should have its endianness annotated.

Mentioned elsewhere, I think you can define the structure
with those attributes rather than the multiple fields.

> +
> +enum mhi_ep_ring_type {
> +	RING_TYPE_CMD = 0,
> +	RING_TYPE_ER,
> +	RING_TYPE_CH,
> +};
> +
> +struct mhi_ep_ring_element {
> +	u64 ptr;
> +	u32 dword[2];
> +};

Are these host resident rings?  Even if not, this is an external
interface, so this should be defined with explicit endianness.
The cpu_to_le64() call will be a no-op so there is no cost
to correcting this.

> +
> +/* Ring element */
> +union mhi_ep_ring_ctx {
> +	struct mhi_cmd_ctxt cmd;
> +	struct mhi_event_ctxt ev;
> +	struct mhi_chan_ctxt ch;
> +	struct mhi_generic_ctx generic;
> +};
> +
> +struct mhi_ep_ring {
> +	struct mhi_ep_cntrl *mhi_cntrl;
> +	int (*ring_cb)(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> +	union mhi_ep_ring_ctx *ring_ctx;
> +	struct mhi_ep_ring_element *ring_cache;
> +	enum mhi_ep_ring_type type;
> +	size_t rd_offset;
> +	size_t wr_offset;
> +	size_t ring_size;
> +	u32 db_offset_h;
> +	u32 db_offset_l;
> +	u32 ch_id;
> +};

Not sure about the db_offset fields, etc. here, but it's possible
they need endianness annotations.  I'm going to stop making this
comment; please make sure anything that's exposed to the host
specifies that it's little endian.  (The host and endpoint should
have a common definition of these shared structures anyway; maybe
I'm misreading this or assuming something incorrectly.)

> +
> +struct mhi_ep_cmd {
> +	struct mhi_ep_ring ring;
> +};
> +
> +struct mhi_ep_event {
> +	struct mhi_ep_ring ring;
> +};
> +
> +struct mhi_ep_chan {
> +	char *name;
> +	struct mhi_ep_device *mhi_dev;
> +	struct mhi_ep_ring ring;
> +	struct mutex lock;
> +	void (*xfer_cb)(struct mhi_ep_device *mhi_dev, struct mhi_result *result);
> +	enum mhi_ch_state state;
> +	enum dma_data_direction dir;
> +	u64 tre_loc;
> +	u32 tre_size;
> +	u32 tre_bytes_left;
> +	u32 chan;
> +	bool skip_td;
> +};
> +
> +#endif
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> new file mode 100644
> index 000000000000..b006011d025d
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -0,0 +1,234 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * MHI Bus Endpoint stack
> + *
> + * Copyright (C) 2021 Linaro Ltd.
> + * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> + */
> +
> +#include <linux/bitfield.h>
> +#include <linux/delay.h>
> +#include <linux/dma-direction.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/mhi_ep.h>
> +#include <linux/mod_devicetable.h>
> +#include <linux/module.h>
> +#include "internal.h"
> +
> +static DEFINE_IDA(mhi_ep_cntrl_ida);
> +
> +static void mhi_ep_release_device(struct device *dev)
> +{
> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +
> +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> +		mhi_dev->mhi_cntrl->mhi_dev = NULL;
> +
> +	/*
> +	 * We need to set the mhi_chan->mhi_dev to NULL here since the MHI
> +	 * devices for the channels will only get created during start
> +	 * channel if the mhi_dev associated with it is NULL.
> +	 */

Can you mention where in the code the above occurs?  Just for
reference.  Like, "will only get created in mhi_ep_create_device()
if the..." or whatever.

> +	if (mhi_dev->ul_chan)
> +		mhi_dev->ul_chan->mhi_dev = NULL;
> +
> +	if (mhi_dev->dl_chan)
> +		mhi_dev->dl_chan->mhi_dev = NULL;
> +
> +	kfree(mhi_dev);
> +}
> +
> +static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl,
> +						 enum mhi_device_type dev_type)
> +{
> +	struct mhi_ep_device *mhi_dev;
> +	struct device *dev;
> +
> +	mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
> +	if (!mhi_dev)
> +		return ERR_PTR(-ENOMEM);
> +
> +	dev = &mhi_dev->dev;
> +	device_initialize(dev);
> +	dev->bus = &mhi_ep_bus_type;
> +	dev->release = mhi_ep_release_device;
> +

Maybe mention that the controller device is always allocated
first.

> +	if (dev_type == MHI_DEVICE_CONTROLLER)
> +		/* for MHI controller device, parent is the bus device (e.g. PCI EPF) */
> +		dev->parent = mhi_cntrl->cntrl_dev;
> +	else
> +		/* for MHI client devices, parent is the MHI controller device */
> +		dev->parent = &mhi_cntrl->mhi_dev->dev;
> +
> +	mhi_dev->mhi_cntrl = mhi_cntrl;
> +	mhi_dev->dev_type = dev_type;
> +
> +	return mhi_dev;
> +}
> +

I think the name of the next function could be better.  Yes, it
parses the channel configuration, but what it *really* does is
allocate and initialize the channel array.  So maybe something
more like mhi_chan_init()?

> +static int parse_ch_cfg(struct mhi_ep_cntrl *mhi_cntrl,
> +			const struct mhi_ep_cntrl_config *config)
> +{
> +	const struct mhi_ep_channel_config *ch_cfg;
> +	struct device *dev = mhi_cntrl->cntrl_dev;
> +	u32 chan, i;
> +	int ret = -EINVAL;
> +
> +	mhi_cntrl->max_chan = config->max_channels;
> +
> +	/*
> +	 * Allocate max_channels supported by the MHI endpoint and populate
> +	 * only the defined channels
> +	 */
> +	mhi_cntrl->mhi_chan = kcalloc(mhi_cntrl->max_chan, sizeof(*mhi_cntrl->mhi_chan),
> +				      GFP_KERNEL);
> +	if (!mhi_cntrl->mhi_chan)
> +		return -ENOMEM;
> +
> +	for (i = 0; i < config->num_channels; i++) {
> +		struct mhi_ep_chan *mhi_chan;

This entire block could be encapsulated in mhi_channel_add()
or something.

> +		ch_cfg = &config->ch_cfg[i];

Move the above assignment down a few lines, to just before
where it's used.

> +
> +		chan = ch_cfg->num;
> +		if (chan >= mhi_cntrl->max_chan) {
> +			dev_err(dev, "Channel %d not available\n", chan);

Maybe report the maximum channel so it's obvious why it's
not available.

> +			goto error_chan_cfg;
> +		}
> +
> +		/* Bi-directional and direction less channels are not supported */
> +		if (ch_cfg->dir == DMA_BIDIRECTIONAL || ch_cfg->dir == DMA_NONE) {
> +			dev_err(dev, "Invalid channel configuration\n");

Maybe be more specific in your message about what's wrong here.

> +			goto error_chan_cfg;
> +		}
> +
> +		mhi_chan = &mhi_cntrl->mhi_chan[chan];
> +		mhi_chan->name = ch_cfg->name;
> +		mhi_chan->chan = chan;
> +		mhi_chan->dir = ch_cfg->dir;
> +		mutex_init(&mhi_chan->lock);
> +	}
> +
> +	return 0;
> +
> +error_chan_cfg:
> +	kfree(mhi_cntrl->mhi_chan);

I'm not sure what the caller does, but maybe null this
after it's freed, or don't assign mhi_cntrl->mhi_chan
until the initialization is successful.


> +	return ret;
> +}
> +
> +/*
> + * Allocate channel and command rings here. Event rings will be allocated
> + * in mhi_ep_power_up() as the config comes from the host.
> + */
> +int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> +				const struct mhi_ep_cntrl_config *config)
> +{
> +	struct mhi_ep_device *mhi_dev;
> +	int ret;
> +
> +	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
> +		return -EINVAL;
> +
> +	ret = parse_ch_cfg(mhi_cntrl, config);
> +	if (ret)
> +		return ret;
> +
> +	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS, sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);

I said before I thought it was silly to even define NR_OF_CMD_RINGS.
Does the MHI specification actually allow more than one command
ring for a given MHI controller?  Ever?

> +	if (!mhi_cntrl->mhi_cmd) {
> +		ret = -ENOMEM;
> +		goto err_free_ch;
> +	}
> +
> +	/* Set controller index */
> +	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
> +	if (mhi_cntrl->index < 0) {
> +		ret = mhi_cntrl->index;
> +		goto err_free_cmd;
> +	}
> +
> +	/* Allocate the controller device */
> +	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_CONTROLLER);
> +	if (IS_ERR(mhi_dev)) {
> +		dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
> +		ret = PTR_ERR(mhi_dev);
> +		goto err_ida_free;
> +	}
> +
> +	dev_set_name(&mhi_dev->dev, "mhi_ep%d", mhi_cntrl->index);
> +	mhi_dev->name = dev_name(&mhi_dev->dev);
> +
> +	ret = device_add(&mhi_dev->dev);
> +	if (ret)
> +		goto err_put_dev;
> +

Should the mhi_dev pointer be set before device_add() gets called?

> +	mhi_cntrl->mhi_dev = mhi_dev;
> +
> +	dev_dbg(&mhi_dev->dev, "MHI EP Controller registered\n");
> +
> +	return 0;
> +
> +err_put_dev:
> +	put_device(&mhi_dev->dev);
> +err_ida_free:
> +	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
> +err_free_cmd:
> +	kfree(mhi_cntrl->mhi_cmd);
> +err_free_ch:
> +	kfree(mhi_cntrl->mhi_chan);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
> +
> +void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
> +
> +	kfree(mhi_cntrl->mhi_cmd);
> +	kfree(mhi_cntrl->mhi_chan);
> +
> +	device_del(&mhi_dev->dev);
> +	put_device(&mhi_dev->dev);
> +
> +	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
> +
> +static int mhi_ep_match(struct device *dev, struct device_driver *drv)
> +{
> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +
> +	/*
> +	 * If the device is a controller type then there is no client driver
> +	 * associated with it
> +	 */
> +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> +		return 0;
> +
> +	return 0;
> +};
> +
> +struct bus_type mhi_ep_bus_type = {
> +	.name = "mhi_ep",
> +	.dev_name = "mhi_ep",
> +	.match = mhi_ep_match,
> +};
> +
> +static int __init mhi_ep_init(void)
> +{
> +	return bus_register(&mhi_ep_bus_type);
> +}
> +
> +static void __exit mhi_ep_exit(void)
> +{
> +	bus_unregister(&mhi_ep_bus_type);
> +}
> +
> +postcore_initcall(mhi_ep_init);
> +module_exit(mhi_ep_exit);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("MHI Bus Endpoint stack");
> +MODULE_AUTHOR("Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>");
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> new file mode 100644
> index 000000000000..20238e9df1b3
> --- /dev/null
> +++ b/include/linux/mhi_ep.h
> @@ -0,0 +1,143 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2021, Linaro Ltd.
> + *
> + */
> +#ifndef _MHI_EP_H_
> +#define _MHI_EP_H_
> +
> +#include <linux/dma-direction.h>
> +#include <linux/mhi.h>
> +
> +#define MHI_EP_DEFAULT_MTU 0x8000
> +
> +/**
> + * struct mhi_ep_channel_config - Channel configuration structure for controller
> + * @name: The name of this channel
> + * @num: The number assigned to this channel
> + * @num_elements: The number of elements that can be queued to this channel
> + * @dir: Direction that data may flow on this channel
> + */
> +struct mhi_ep_channel_config {
> +	char *name;
> +	u32 num;
> +	u32 num_elements;
> +	enum dma_data_direction dir;
> +};
> +
> +/**
> + * struct mhi_ep_cntrl_config - MHI Endpoint controller configuration
> + * @max_channels: Maximum number of channels supported
> + * @num_channels: Number of channels defined in @ch_cfg
> + * @ch_cfg: Array of defined channels
> + * @mhi_version: MHI spec version supported by the controller
> + */
> +struct mhi_ep_cntrl_config {
> +	u32 max_channels;
> +	u32 num_channels;
> +	const struct mhi_ep_channel_config *ch_cfg;
> +	u32 mhi_version;

Put mhi_version first?

> +};
> +
> +/**
> + * struct mhi_ep_db_info - MHI Endpoint doorbell info
> + * @mask: Mask of the doorbell interrupt
> + * @status: Status of the doorbell interrupt
> + */
> +struct mhi_ep_db_info {
> +	u32 mask;
> +	u32 status;
> +};
> +
> +/**
> + * struct mhi_ep_cntrl - MHI Endpoint controller structure
> + * @cntrl_dev: Pointer to the struct device of physical bus acting as the MHI
> + *             Endpoint controller
> + * @mhi_dev: MHI Endpoint device instance for the controller
> + * @mmio: MMIO region containing the MHI registers
> + * @mhi_chan: Points to the channel configuration table
> + * @mhi_event: Points to the event ring configurations table
> + * @mhi_cmd: Points to the command ring configurations table
> + * @sm: MHI Endpoint state machine
> + * @raise_irq: CB function for raising IRQ to the host
> + * @alloc_addr: CB function for allocating memory in endpoint for storing host context
> + * @map_addr: CB function for mapping host context to endpoint
> + * @free_addr: CB function to free the allocated memory in endpoint for storing host context
> + * @unmap_addr: CB function to unmap the host context in endpoint
> + * @read_from_host: CB function for reading from host memory from endpoint
> + * @write_to_host: CB function for writing to host memory from endpoint
> + * @mhi_state: MHI Endpoint state
> + * @max_chan: Maximum channels supported by the endpoint controller
> + * @mru: MRU (Maximum Receive Unit) value of the endpoint controller
> + * @index: MHI Endpoint controller index
> + */
> +struct mhi_ep_cntrl {
> +	struct device *cntrl_dev;
> +	struct mhi_ep_device *mhi_dev;
> +	void __iomem *mmio;
> +
> +	struct mhi_ep_chan *mhi_chan;
> +	struct mhi_ep_event *mhi_event;
> +	struct mhi_ep_cmd *mhi_cmd;
> +	struct mhi_ep_sm *sm;
> +
> +	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);
> +	void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t *phys_addr,
> +		       size_t size);
> +	int (*map_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr, u64 pci_addr,
> +			size_t size);
> +	void (*free_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr,
> +			  void __iomem *virt_addr, size_t size);
> +	void (*unmap_addr)(struct mhi_ep_cntrl *mhi_cntrl, phys_addr_t phys_addr);
> +	int (*read_from_host)(struct mhi_ep_cntrl *mhi_cntrl, u64 from, void __iomem *to,
> +			      size_t size);
> +	int (*write_to_host)(struct mhi_ep_cntrl *mhi_cntrl, void __iomem *from, u64 to,
> +			     size_t size);
> +
> +	enum mhi_state mhi_state;
> +
> +	u32 max_chan;
> +	u32 mru;
> +	int index;

Will index ever be negative?

> +};
> +
> +/**
> + * struct mhi_ep_device - Structure representing an MHI Endpoint device that binds
> + *                     to channels or is associated with controllers
> + * @dev: Driver model device node for the MHI Endpoint device
> + * @mhi_cntrl: Controller the device belongs to
> + * @id: Pointer to MHI Endpoint device ID struct
> + * @name: Name of the associated MHI Endpoint device
> + * @ul_chan: UL channel for the device
> + * @dl_chan: DL channel for the device
> + * @dev_type: MHI device type
> + */
> +struct mhi_ep_device {
> +	struct device dev;
> +	struct mhi_ep_cntrl *mhi_cntrl;
> +	const struct mhi_device_id *id;
> +	const char *name;
> +	struct mhi_ep_chan *ul_chan;
> +	struct mhi_ep_chan *dl_chan;
> +	enum mhi_device_type dev_type;

There are two device types, controller and transfer.  Unless
there is ever going to be anything more than that, I think
the distinction is better represented as a Boolean, such as:

	bool controller;

> +};
> +
> +#define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
> +
> +/**
> + * mhi_ep_register_controller - Register MHI Endpoint controller
> + * @mhi_cntrl: MHI Endpoint controller to register
> + * @config: Configuration to use for the controller
> + *
> + * Return: 0 if controller registrations succeeds, a negative error code otherwise.
> + */
> +int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> +			       const struct mhi_ep_cntrl_config *config);
> +
> +/**
> + * mhi_ep_unregister_controller - Unregister MHI Endpoint controller
> + * @mhi_cntrl: MHI Endpoint controller to unregister
> + */
> +void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
> +
> +#endif


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers
  2022-02-12 18:21 ` [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers Manivannan Sadhasivam
  2022-02-12 18:32   ` Manivannan Sadhasivam
  2022-02-15  1:10   ` Hemant Kumar
@ 2022-02-15 20:02   ` Alex Elder
  2022-02-17 10:20     ` Manivannan Sadhasivam
  2 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:02 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> This commit adds support for registering MHI endpoint client drivers
> with the MHI endpoint stack. MHI endpoint client drivers binds to one

s/binds/bind/

> or more MHI endpoint devices inorder to send and receive the upper-layer
> protocol packets like IP packets, modem control messages, and diagnostics
> messages over MHI bus.

I have a few more comments here but generally this looks good.

					-Alex

> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>   drivers/bus/mhi/ep/main.c | 86 +++++++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h    | 53 ++++++++++++++++++++++++
>   2 files changed, 139 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index b006011d025d..f66404181972 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -196,9 +196,89 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
>   }
>   EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
>   
> +static int mhi_ep_driver_probe(struct device *dev)
> +{
> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
> +	struct mhi_ep_chan *ul_chan = mhi_dev->ul_chan;
> +	struct mhi_ep_chan *dl_chan = mhi_dev->dl_chan;
> +
> +	/* Client drivers should have callbacks for both channels */
> +	if (!mhi_drv->ul_xfer_cb || !mhi_drv->dl_xfer_cb)
> +		return -EINVAL;
> +
> +	ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
> +	dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
> +
> +	return mhi_drv->probe(mhi_dev, mhi_dev->id);
> +}
> +
> +static int mhi_ep_driver_remove(struct device *dev)
> +{
> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
> +	struct mhi_result result = {};
> +	struct mhi_ep_chan *mhi_chan;
> +	int dir;
> +
> +	/* Skip if it is a controller device */
> +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> +		return 0;
> +

It would be my preference to encapsulate the body of the
following loop into a called function, then call that once
for the UL channel and once for the DL channel.
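
If it helps, here is roughly what I have in mind, as a plain
userspace C sketch (the kernel types are stubbed out, locking is
omitted, and the helper name is made up, so treat this purely as
illustration):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Stubbed-down stand-ins for the kernel types, for illustration only */
struct mhi_ep_chan;

struct mhi_result {
	int transaction_status;
	size_t bytes_xferd;
};

enum { MHI_CH_STATE_RUNNING, MHI_CH_STATE_DISABLED };

struct mhi_ep_chan {
	int state;
	void (*xfer_cb)(struct mhi_ep_chan *chan, struct mhi_result *result);
};

/* Hypothetical helper: tear down one channel, or do nothing if unused */
static void mhi_ep_chan_disconnect(struct mhi_ep_chan *chan)
{
	struct mhi_result result = { 0 };

	if (!chan)
		return;

	/* Send channel disconnect status to the client driver */
	if (chan->xfer_cb) {
		result.transaction_status = -ENOTCONN;
		chan->xfer_cb(chan, &result);
	}

	chan->state = MHI_CH_STATE_DISABLED;
	chan->xfer_cb = NULL;
}
```

The remove path then collapses to two calls, one per direction:

	mhi_ep_chan_disconnect(mhi_dev->ul_chan);
	mhi_ep_chan_disconnect(mhi_dev->dl_chan);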

> +	/* Disconnect the channels associated with the driver */
> +	for (dir = 0; dir < 2; dir++) {
> +		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
> +
> +		if (!mhi_chan)
> +			continue;
> +
> +		mutex_lock(&mhi_chan->lock);
> +		/* Send channel disconnect status to the client driver */
> +		if (mhi_chan->xfer_cb) {
> +			result.transaction_status = -ENOTCONN;
> +			result.bytes_xferd = 0;
> +			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);

It appears the result is ignored here.  If so, can we
define the xfer_cb() function so that a NULL pointer may
be supplied by the caller in cases like this?

> +		}
> +
> +		/* Set channel state to DISABLED */

That comment is a little tautological.  Just omit it.

> +		mhi_chan->state = MHI_CH_STATE_DISABLED;
> +		mhi_chan->xfer_cb = NULL;
> +		mutex_unlock(&mhi_chan->lock);
> +	}
> +
> +	/* Remove the client driver now */
> +	mhi_drv->remove(mhi_dev);
> +
> +	return 0;
> +}
> +
> +int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner)
> +{
> +	struct device_driver *driver = &mhi_drv->driver;
> +
> +	if (!mhi_drv->probe || !mhi_drv->remove)
> +		return -EINVAL;
> +
> +	driver->bus = &mhi_ep_bus_type;
> +	driver->owner = owner;
> +	driver->probe = mhi_ep_driver_probe;
> +	driver->remove = mhi_ep_driver_remove;
> +
> +	return driver_register(driver);
> +}
> +EXPORT_SYMBOL_GPL(__mhi_ep_driver_register);
> +
> +void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv)
> +{
> +	driver_unregister(&mhi_drv->driver);
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_driver_unregister);
> +
>   static int mhi_ep_match(struct device *dev, struct device_driver *drv)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(drv);
> +	const struct mhi_device_id *id;
>   
>   	/*
>   	 * If the device is a controller type then there is no client driver
> @@ -207,6 +287,12 @@ static int mhi_ep_match(struct device *dev, struct device_driver *drv)
>   	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
>   		return 0;
>   
> +	for (id = mhi_drv->id_table; id->chan[0]; id++)
> +		if (!strcmp(mhi_dev->name, id->chan)) {
> +			mhi_dev->id = id;
> +			return 1;
> +		}
> +
>   	return 0;
>   };
>   
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 20238e9df1b3..da865f9d3646 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -122,7 +122,60 @@ struct mhi_ep_device {
>   	enum mhi_device_type dev_type;
>   };
>   
> +/**
> + * struct mhi_ep_driver - Structure representing a MHI Endpoint client driver
> + * @id_table: Pointer to MHI Endpoint device ID table
> + * @driver: Device driver model driver
> + * @probe: CB function for client driver probe function
> + * @remove: CB function for client driver remove function
> + * @ul_xfer_cb: CB function for UL data transfer
> + * @dl_xfer_cb: CB function for DL data transfer
> + */
> +struct mhi_ep_driver {
> +	const struct mhi_device_id *id_table;
> +	struct device_driver driver;
> +	int (*probe)(struct mhi_ep_device *mhi_ep,
> +		     const struct mhi_device_id *id);
> +	void (*remove)(struct mhi_ep_device *mhi_ep);

I get confused by the "ul" versus "dl" naming scheme here.
Is "ul" from the perspective of the host, meaning upload
is from the host toward the WWAN network (and therefore
toward the SDX AP), and download is from the WWAN toward
the host?  Somewhere this should be stated clearly in
comments; maybe I just missed it.

> +	void (*ul_xfer_cb)(struct mhi_ep_device *mhi_dev,
> +			   struct mhi_result *result);
> +	void (*dl_xfer_cb)(struct mhi_ep_device *mhi_dev,
> +			   struct mhi_result *result);
> +};
> +
>   #define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
> +#define to_mhi_ep_driver(drv) container_of(drv, struct mhi_ep_driver, driver)
> +
> +/*
> + * module_mhi_ep_driver() - Helper macro for drivers that don't do
> + * anything special other than using default mhi_ep_driver_register() and
> + * mhi_ep_driver_unregister().  This eliminates a lot of boilerplate.
> + * Each module may only use this macro once.
> + */
> +#define module_mhi_ep_driver(mhi_drv) \
> +	module_driver(mhi_drv, mhi_ep_driver_register, \
> +		      mhi_ep_driver_unregister)
> +
> +/*
> + * Macro to avoid include chaining to get THIS_MODULE
> + */
> +#define mhi_ep_driver_register(mhi_drv) \
> +	__mhi_ep_driver_register(mhi_drv, THIS_MODULE)
> +
> +/**
> + * __mhi_ep_driver_register - Register a driver with MHI Endpoint bus
> + * @mhi_drv: Driver to be associated with the device
> + * @owner: The module owner
> + *
> + * Return: 0 if driver registrations succeeds, a negative error code otherwise.
> + */
> +int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner);
> +
> +/**
> + * mhi_ep_driver_unregister - Unregister a driver from MHI Endpoint bus
> + * @mhi_drv: Driver associated with the device
> + */
> +void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv);
>   
>   /**
>    * mhi_ep_register_controller - Register MHI Endpoint controller



* Re: [PATCH v3 10/25] bus: mhi: ep: Add support for creating and destroying MHI EP devices
  2022-02-12 18:21 ` [PATCH v3 10/25] bus: mhi: ep: Add support for creating and destroying MHI EP devices Manivannan Sadhasivam
@ 2022-02-15 20:02   ` Alex Elder
  2022-02-17 12:04     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:02 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> This commit adds support for creating and destroying MHI endpoint devices.
> The MHI endpoint devices binds to the MHI endpoint channels and are used
> to transfer data between MHI host and endpoint device.
> 
> There is a single MHI EP device for each channel pair. The devices will be
> created when the corresponding channels has been started by the host and
> will be destroyed during MHI EP power down and reset.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

A few comments again, nothing major.

					-Alex

> ---
>   drivers/bus/mhi/ep/main.c | 77 +++++++++++++++++++++++++++++++++++++++
>   1 file changed, 77 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index f66404181972..fcaacf9ddbd1 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -67,6 +67,83 @@ static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl,
>   	return mhi_dev;
>   }
>   
> +/*
> + * MHI channels are always defined in pairs with UL as the even numbered
> + * channel and DL as odd numbered one.
> + */

Awesome comment.  And it seems that the channel ID passed
here is even, and that there *must* be a second mhi_chan[]
entry after the one specified.  And UL is also called the
"primary" channel.
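
Since the pairing rule is fixed, either member of a pair can be
derived from the other with bit arithmetic.  A tiny userspace sketch
(the helper names are mine, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * MHI channels come in UL/DL pairs: UL is the even-numbered channel
 * and DL is the odd-numbered one immediately following it.
 */
static inline uint32_t mhi_pair_ul_chan(uint32_t ch_id)
{
	return ch_id & ~1u;	/* clear bit 0: the even (UL) member */
}

static inline uint32_t mhi_pair_dl_chan(uint32_t ch_id)
{
	return ch_id | 1u;	/* set bit 0: the odd (DL) member */
}
```

So the mhi_chan[1] access in the function is always the DL half of
the pair whose UL channel ID was passed in.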

> +static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id)
> +{
> +	struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
> +	struct mhi_ep_device *mhi_dev;
> +	int ret;
> +
> +	/* Check if the channel name is same for both UL and DL */
> +	if (strcmp(mhi_chan->name, mhi_chan[1].name))
> +		return -EINVAL;

Maybe log an error to say what's wrong with it?

> +
> +	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_XFER);
> +	if (IS_ERR(mhi_dev))
> +		return PTR_ERR(mhi_dev);

It looks like the only possible error is no memory, so you could
just have mhi_ep_alloc_device() return NULL.

> +
> +	/* Configure primary channel */
> +	mhi_dev->ul_chan = mhi_chan;
> +	get_device(&mhi_dev->dev);
> +	mhi_chan->mhi_dev = mhi_dev;
> +
> +	/* Configure secondary channel as well */
> +	mhi_chan++;
> +	mhi_dev->dl_chan = mhi_chan;
> +	get_device(&mhi_dev->dev);
> +	mhi_chan->mhi_dev = mhi_dev;
> +
> +	/* Channel name is same for both UL and DL */
> +	mhi_dev->name = mhi_chan->name;
> +	dev_set_name(&mhi_dev->dev, "%s_%s",
> +		     dev_name(&mhi_cntrl->mhi_dev->dev),
> +		     mhi_dev->name);
> +
> +	ret = device_add(&mhi_dev->dev);
> +	if (ret)
> +		put_device(&mhi_dev->dev);
> +
> +	return ret;
> +}
> +
> +static int mhi_ep_destroy_device(struct device *dev, void *data)
> +{
> +	struct mhi_ep_device *mhi_dev;
> +	struct mhi_ep_cntrl *mhi_cntrl;
> +	struct mhi_ep_chan *ul_chan, *dl_chan;
> +
> +	if (dev->bus != &mhi_ep_bus_type)
> +		return 0;
> +
> +	mhi_dev = to_mhi_ep_device(dev);
> +	mhi_cntrl = mhi_dev->mhi_cntrl;
> +
> +	/* Only destroy devices created for channels */
> +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> +		return 0;
> +
> +	ul_chan = mhi_dev->ul_chan;
> +	dl_chan = mhi_dev->dl_chan;

Aren't they required to supply *both* channels?  Or maybe
it's just required that there are transfer callback functions
for both channels.  Anyway, no need to check for null, because
the creation function guarantees they're both non-null I think.

> +	if (ul_chan)
> +		put_device(&ul_chan->mhi_dev->dev);
> +
> +	if (dl_chan)
> +		put_device(&dl_chan->mhi_dev->dev);
> +
> +	dev_dbg(&mhi_cntrl->mhi_dev->dev, "Destroying device for chan:%s\n",
> +		 mhi_dev->name);
> +
> +	/* Notify the client and remove the device from MHI bus */
> +	device_del(dev);
> +	put_device(dev);
> +
> +	return 0;
> +}
> +
>   static int parse_ch_cfg(struct mhi_ep_cntrl *mhi_cntrl,
>   			const struct mhi_ep_cntrl_config *config)
>   {



* Re: [PATCH v3 11/25] bus: mhi: ep: Add support for managing MMIO registers
  2022-02-12 18:21 ` [PATCH v3 11/25] bus: mhi: ep: Add support for managing MMIO registers Manivannan Sadhasivam
  2022-02-15  1:14   ` Hemant Kumar
@ 2022-02-15 20:03   ` Alex Elder
  1 sibling, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:03 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for managing the Memory Mapped Input Output (MMIO) registers
> of the MHI bus. All MHI operations are carried out using the MMIO registers
> by both host and the endpoint device.
> 
> The MMIO registers reside inside the endpoint device memory (fixed
> location based on the platform) and the address is passed by the MHI EP
> controller driver during its registration.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

This is in pretty good shape, just a few comments.

					-Alex

> ---
>   drivers/bus/mhi/ep/Makefile   |   2 +-
>   drivers/bus/mhi/ep/internal.h |  37 +++++
>   drivers/bus/mhi/ep/main.c     |   6 +-
>   drivers/bus/mhi/ep/mmio.c     | 274 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        |  18 +++
>   5 files changed, 335 insertions(+), 2 deletions(-)
>   create mode 100644 drivers/bus/mhi/ep/mmio.c
> 
> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> index 64e29252b608..a1555ae287ad 100644
> --- a/drivers/bus/mhi/ep/Makefile
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -1,2 +1,2 @@
>   obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> -mhi_ep-y := main.o
> +mhi_ep-y := main.o mmio.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index e313a2546664..2c756a90774c 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -101,6 +101,17 @@ struct mhi_generic_ctx {
>   	__u64 wp __packed __aligned(4);
>   };
>   
> +/**
> + * enum mhi_ep_execenv - MHI Endpoint Execution Environment
> + * @MHI_EP_SBL_EE: Secondary Bootloader
> + * @MHI_EP_AMSS_EE: Advanced Mode Subscriber Software
> + */
> +enum mhi_ep_execenv {
> +	MHI_EP_SBL_EE = 1,
> +	MHI_EP_AMSS_EE = 2,
> +	MHI_EP_UNRESERVED

UNRESERVED?  What does that mean?

> +};
> +
>   enum mhi_ep_ring_type {
>   	RING_TYPE_CMD = 0,
>   	RING_TYPE_ER,
> @@ -157,4 +168,30 @@ struct mhi_ep_chan {
>   	bool skip_td;
>   };
>   
> +/* MMIO related functions */
> +u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
> +void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
> +void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask, u32 val);
> +u32 mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset, u32 mask);
> +void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_enable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
> +void mhi_ep_mmio_disable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
> +void mhi_ep_mmio_enable_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_mask_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_get_chc_base(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_get_erc_base(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_get_crc_base(struct mhi_ep_cntrl *mhi_cntrl);
> +u64 mhi_ep_mmio_get_db(struct mhi_ep_ring *ring);
> +void mhi_ep_mmio_set_env(struct mhi_ep_cntrl *mhi_cntrl, u32 value);
> +void mhi_ep_mmio_clear_reset(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_reset(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
> +			       bool *mhi_reset);
> +void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
> +
>   #endif
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index fcaacf9ddbd1..950b5bcabe18 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -205,7 +205,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   	struct mhi_ep_device *mhi_dev;
>   	int ret;
>   
> -	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
> +	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio)
>   		return -EINVAL;
>   
>   	ret = parse_ch_cfg(mhi_cntrl, config);
> @@ -218,6 +218,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   		goto err_free_ch;
>   	}
>   
> +	/* Set MHI version and AMSS EE before enumeration */
> +	mhi_ep_mmio_write(mhi_cntrl, MHIVER, config->mhi_version);
> +	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
> +
>   	/* Set controller index */
>   	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
>   	if (mhi_cntrl->index < 0) {
> diff --git a/drivers/bus/mhi/ep/mmio.c b/drivers/bus/mhi/ep/mmio.c
> new file mode 100644
> index 000000000000..58e887beb050
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/mmio.c
> @@ -0,0 +1,274 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2021 Linaro Ltd.
> + * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> + */
> +
> +#include <linux/bitfield.h>
> +#include <linux/io.h>
> +#include <linux/mhi_ep.h>
> +
> +#include "internal.h"
> +
> +u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset)
> +{
> +	return readl(mhi_cntrl->mmio + offset);
> +}
> +
> +void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val)
> +{
> +	writel(val, mhi_cntrl->mmio + offset);
> +}
> +
> +void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask, u32 val)
> +{
> +	u32 regval;
> +
> +	regval = mhi_ep_mmio_read(mhi_cntrl, offset);
> +	regval &= ~mask;
> +	regval |= ((val << __ffs(mask)) & mask);

One extra set of parentheses here is not needed.  Assignment
is very low precedence in C.

> +	mhi_ep_mmio_write(mhi_cntrl, offset, regval);
> +}
> +
> +u32 mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset, u32 mask)
> +{
> +	u32 regval;
> +
> +	regval = mhi_ep_mmio_read(dev, offset);
> +	regval &= mask;
> +	regval >>= __ffs(mask);
> +
> +	return regval;
> +}
> +
> +void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
> +				bool *mhi_reset)
> +{
> +	u32 regval;
> +
> +	regval = mhi_ep_mmio_read(mhi_cntrl, MHICTRL);
> +	*state = FIELD_GET(MHICTRL_MHISTATE_MASK, regval);
> +	*mhi_reset = !!FIELD_GET(MHICTRL_RESET_MASK, regval);
> +}
> +

What does "a7" mean to you?  Is it the host, or the SDX AP?
Will "a7" always be the proper name for that CPU?  (Maybe
it will be.)

> +static void mhi_ep_mmio_mask_set_chdb_int_a7(struct mhi_ep_cntrl *mhi_cntrl,
> +						u32 chdb_id, bool enable)

I think "ch_id" would be a better name for the "chdb_id" argument.
If you agree, update other functions so it's consistent.

> +{
> +	u32 chid_mask, chid_idx, chid_shift, val = 0;
> +
> +	chid_shift = chdb_id % 32;
> +	chid_mask = BIT(chid_shift);
> +	chid_idx = chdb_id / 32;

I think "chdb_idx" would be a better name for this.

> +
> +	WARN_ON(chid_idx >= MHI_MASK_ROWS_CH_EV_DB);

Can we tell by inspection that this will never be out
of range?  If not, can we just test it once, early, so
there's no need to ever check later on?
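
For what it's worth, the decomposition in question is just splitting
the channel ID into a 32-bit register row plus a bit position, and the
range check can happen once up front.  Userspace sketch, with the row
count assumed to be 4 (i.e. 128 doorbells; the real value comes from
the driver's internal header):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MHI_MASK_ROWS_CH_EV_DB	4	/* assumed: 4 x 32 = 128 doorbells */

/* Split a channel doorbell ID into (register row, bit within that row) */
static bool mhi_chdb_decompose(uint32_t ch_id, uint32_t *row, uint32_t *bit)
{
	*row = ch_id / 32;
	*bit = ch_id % 32;

	/* One up-front range check instead of a WARN_ON() per call site */
	return *row < MHI_MASK_ROWS_CH_EV_DB;
}
```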

> +
> +	if (enable)
> +		val = 1;

	val = enable ? 1 : 0;

> +	mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(chid_idx),
> +				  chid_mask, val);
> +
> +	/* Update the local copy of the channel mask */
> +	mhi_cntrl->chdb[chid_idx].mask &= ~chid_mask;
> +	mhi_cntrl->chdb[chid_idx].mask |= val << chid_shift;
> +}
> +
> +void mhi_ep_mmio_enable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id)
> +{
> +	mhi_ep_mmio_mask_set_chdb_int_a7(mhi_cntrl, chdb_id, true);
> +}
> +
> +void mhi_ep_mmio_disable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id)
> +{
> +	mhi_ep_mmio_mask_set_chdb_int_a7(mhi_cntrl, chdb_id, false);
> +}
> +
> +static void mhi_ep_mmio_set_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl, bool enable)
> +{
> +	u32 val = 0, i;
> +
> +	if (enable)
> +		val = MHI_CHDB_INT_MASK_A7_n_EN_ALL;

	val = enable ? MHI_CHDB_INT_MASK_A7_n_EN_ALL : 0;
> +
> +	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
> +		mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(i), val);
> +		mhi_cntrl->chdb[i].mask = val;
> +	}
> +}
> +

No more comments on the rest of this file.

. . .

> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index da865f9d3646..3d2ab7a5ccd7 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h

This looks good too.

. . .


* Re: [PATCH v3 12/25] bus: mhi: ep: Add support for ring management
  2022-02-12 18:21 ` [PATCH v3 12/25] bus: mhi: ep: Add support for ring management Manivannan Sadhasivam
@ 2022-02-15 20:03   ` Alex Elder
  2022-02-18  8:07     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 20:03 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for managing the MHI ring. The MHI ring is a circular queue
> of data structures used to pass the information between host and the
> endpoint.
> 
> MHI support 3 types of rings:
> 
> 1. Transfer ring
> 2. Event ring
> 3. Command ring
> 
> All rings reside inside the host memory and the MHI EP device maps it to
> the device memory using blocks like PCIe iATU. The mapping is handled in
> the MHI EP controller driver itself.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Great explanation.  One more thing to add is that the command
and transfer rings are directed from the host to the MHI EP device,
while the event rings are directed from the EP device toward the
host.

I notice that you've improved a few things I had notes about,
and I don't recall suggesting them.  I'm very happy about that.

I have a few more comments here, some worth thinking about
at least.

					-Alex

> ---
>   drivers/bus/mhi/ep/Makefile   |   2 +-
>   drivers/bus/mhi/ep/internal.h |  33 +++++
>   drivers/bus/mhi/ep/main.c     |  59 +++++++-
>   drivers/bus/mhi/ep/ring.c     | 267 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        |  11 ++
>   5 files changed, 370 insertions(+), 2 deletions(-)
>   create mode 100644 drivers/bus/mhi/ep/ring.c
> 
> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> index a1555ae287ad..7ba0e04801eb 100644
> --- a/drivers/bus/mhi/ep/Makefile
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -1,2 +1,2 @@
>   obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> -mhi_ep-y := main.o mmio.o
> +mhi_ep-y := main.o mmio.o ring.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index 2c756a90774c..48d6e9667d55 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -112,6 +112,18 @@ enum mhi_ep_execenv {
>   	MHI_EP_UNRESERVED
>   };
>   
> +/* Transfer Ring Element macros */
> +#define MHI_EP_TRE_PTR(ptr) (ptr)
> +#define MHI_EP_TRE_DWORD0(len) (len & MHI_MAX_MTU)

The above looks funny.  This assumes MHI_MAX_MTU is
a mask value (likely one less than a power-of-2).
That doesn't seem obvious to me; use modulo if you
must, but better, just ensure len is in range rather
than silently truncating it if it's not.

> +#define MHI_EP_TRE_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
> +	| (ieot << 9) | (ieob << 8) | chain)

You should probably use FIELD_PREP() to compute the value
here, since you're using FIELD_GET() to extract the field
values below.
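
To make that concrete: with FIELD_PREP() the flag word is built from
the same masks FIELD_GET() uses, so the layout is spelled out exactly
once.  Below is a userspace approximation (the linux/bitfield.h macros
are reimplemented minimally here, and the 8-bit type field at bit 16
is my reading of the `(2 << 16)` constant, so both are assumptions):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Minimal stand-ins for the kernel's FIELD_PREP()/FIELD_GET() from
 * linux/bitfield.h; the real macros also compile-time-check the mask.
 */
#define FIELD_SHIFT(mask)	__builtin_ctz(mask)
#define FIELD_PREP(mask, val)	(((uint32_t)(val) << FIELD_SHIFT(mask)) & (mask))
#define FIELD_GET(mask, reg)	(((uint32_t)(reg) & (mask)) >> FIELD_SHIFT(mask))

/* One set of masks, shared by the producer and consumer sides */
#define TRE_FLAG_CHAIN		(1u << 0)
#define TRE_FLAG_IEOB		(1u << 8)
#define TRE_FLAG_IEOT		(1u << 9)
#define TRE_FLAG_BEI		(1u << 10)
#define TRE_TYPE_MASK		(0xffu << 16)	/* assumed field width */

/* Build dword[1] of a transfer ring element from its flag bits */
static uint32_t mhi_tre_dword1(uint32_t bei, uint32_t ieot, uint32_t ieob,
			       uint32_t chain)
{
	return FIELD_PREP(TRE_TYPE_MASK, 2) |
	       FIELD_PREP(TRE_FLAG_BEI, bei) |
	       FIELD_PREP(TRE_FLAG_IEOT, ieot) |
	       FIELD_PREP(TRE_FLAG_IEOB, ieob) |
	       FIELD_PREP(TRE_FLAG_CHAIN, chain);
}
```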

> +#define MHI_EP_TRE_GET_PTR(tre) ((tre)->ptr)
> +#define MHI_EP_TRE_GET_LEN(tre) ((tre)->dword[0] & 0xffff)
> +#define MHI_EP_TRE_GET_CHAIN(tre) FIELD_GET(BIT(0), (tre)->dword[1])

#define	TRE_FLAG_CHAIN	BIT(0)

Then just call
	chain = FIELD_GET(TRE_FLAG_CHAIN, tre->dword[1]);

But I haven't looked at the code where this is used yet.

> +#define MHI_EP_TRE_GET_IEOB(tre) FIELD_GET(BIT(8), (tre)->dword[1])
> +#define MHI_EP_TRE_GET_IEOT(tre) FIELD_GET(BIT(9), (tre)->dword[1])
> +#define MHI_EP_TRE_GET_BEI(tre) FIELD_GET(BIT(10), (tre)->dword[1])
> +

These macros should be shared/shareable between the host and endpoint.
They operate on external interfaces and so should be byte swapped
(where used) when updating actual memory.  Unlike the patches from
Paul Davey early in this series, this does *not* byte swap the
values in the right hand side of these definitions, which is good.

I'm pretty sure I mentioned this before...  I don't really like these
"DWORD" macros that simply write compute register values to write
out to the TREs.  A TRE is a structure, not a set of registers.  And
a whole TRE can be written or read in a single ARM instruction in
some cases--but most likely you need to define it as a structure
for that to happen.

struct mhi_tre {
	__le64 addr;
	__le16 len_opcode;
	__le16 reserved;
	__le32 flags;
};

Which reminds me, this shared memory area should probably be mapped
using memremap() rather than ioremap().  I haven't checked whether
it is...

>   enum mhi_ep_ring_type {
>   	RING_TYPE_CMD = 0,
>   	RING_TYPE_ER,
> @@ -131,6 +143,11 @@ union mhi_ep_ring_ctx {
>   	struct mhi_generic_ctx generic;
>   };
>   
> +struct mhi_ep_ring_item {
> +	struct list_head node;
> +	struct mhi_ep_ring *ring;
> +};
> +
>   struct mhi_ep_ring {
>   	struct mhi_ep_cntrl *mhi_cntrl;
>   	int (*ring_cb)(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> @@ -143,6 +160,9 @@ struct mhi_ep_ring {
>   	u32 db_offset_h;
>   	u32 db_offset_l;
>   	u32 ch_id;
> +	u32 er_index;
> +	u32 irq_vector;
> +	bool started;
>   };
>   
>   struct mhi_ep_cmd {
> @@ -168,6 +188,19 @@ struct mhi_ep_chan {
>   	bool skip_td;
>   };
>   
> +/* MHI Ring related functions */
> +void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id);
> +void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring);
> +int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
> +		      union mhi_ep_ring_ctx *ctx);
> +size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr);
> +int mhi_ep_process_ring(struct mhi_ep_ring *ring);
> +int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *element);
> +void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring);
> +int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> +int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> +int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring);
> +
>   /* MMIO related functions */
>   u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
>   void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 950b5bcabe18..2c8045766292 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -18,6 +18,48 @@
>   
>   static DEFINE_IDA(mhi_ep_cntrl_ida);

The following function handles command or channel interrupt work.

> +static void mhi_ep_ring_worker(struct work_struct *work)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
> +				struct mhi_ep_cntrl, ring_work);
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	struct mhi_ep_ring_item *itr, *tmp;
> +	struct mhi_ep_ring *ring;
> +	struct mhi_ep_chan *chan;
> +	unsigned long flags;
> +	LIST_HEAD(head);
> +	int ret;
> +
> +	/* Process the command ring first */
> +	ret = mhi_ep_process_ring(&mhi_cntrl->mhi_cmd->ring);
> +	if (ret) {

At the moment I'm not sure where this work gets scheduled.
But what if there is no command to process?  It looks like
you update the cached pointer unconditionally to see whether
there's anything new.  It seems like you ought to be able to
do that only when a command doorbell interrupt arrives,
rather than every time this worker runs.

> +		dev_err(dev, "Error processing command ring: %d\n", ret);
> +		return;
> +	}
> +
> +	spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
> +	list_splice_tail_init(&mhi_cntrl->ch_db_list, &head);
> +	spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);

Here it looks like you at least only process rings that
had a doorbell interrupt.

> +	/* Process the channel rings now */
> +	list_for_each_entry_safe(itr, tmp, &head, node) {
> +		list_del(&itr->node);
> +		ring = itr->ring;
> +		chan = &mhi_cntrl->mhi_chan[ring->ch_id];
> +		mutex_lock(&chan->lock);
> +		dev_dbg(dev, "Processing the ring for channel (%d)\n", ring->ch_id);

s/%d/%u/

Look for this everywhere.  It avoids printing negative values when
the high bit is set.  (Likely not a problem here.)

> +		ret = mhi_ep_process_ring(ring);
> +		if (ret) {
> +			dev_err(dev, "Error processing ring for channel (%d): %d\n",
> +				ring->ch_id, ret);
> +			mutex_unlock(&chan->lock);

I think you should report the error but continue processing
all entries (otherwise they'll get leaked).

> +			return;
> +		}
> +		mutex_unlock(&chan->lock);
> +		kfree(itr);
> +	}
> +}
> +
>   static void mhi_ep_release_device(struct device *dev)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> @@ -218,6 +260,17 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   		goto err_free_ch;
>   	}
>   
> +	INIT_WORK(&mhi_cntrl->ring_work, mhi_ep_ring_worker);
> +
> +	mhi_cntrl->ring_wq = alloc_workqueue("mhi_ep_ring_wq", 0, 0);
> +	if (!mhi_cntrl->ring_wq) {
> +		ret = -ENOMEM;
> +		goto err_free_cmd;
> +	}
> +
> +	INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
> +	spin_lock_init(&mhi_cntrl->list_lock);
> +
>   	/* Set MHI version and AMSS EE before enumeration */
>   	mhi_ep_mmio_write(mhi_cntrl, MHIVER, config->mhi_version);
>   	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
> @@ -226,7 +279,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
>   	if (mhi_cntrl->index < 0) {
>   		ret = mhi_cntrl->index;
> -		goto err_free_cmd;
> +		goto err_destroy_ring_wq;
>   	}
>   
>   	/* Allocate the controller device */
> @@ -254,6 +307,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   	put_device(&mhi_dev->dev);
>   err_ida_free:
>   	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
> +err_destroy_ring_wq:
> +	destroy_workqueue(mhi_cntrl->ring_wq);
>   err_free_cmd:
>   	kfree(mhi_cntrl->mhi_cmd);
>   err_free_ch:
> @@ -267,6 +322,8 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
>   
> +	destroy_workqueue(mhi_cntrl->ring_wq);
> +
>   	kfree(mhi_cntrl->mhi_cmd);
>   	kfree(mhi_cntrl->mhi_chan);
>   
> diff --git a/drivers/bus/mhi/ep/ring.c b/drivers/bus/mhi/ep/ring.c
> new file mode 100644
> index 000000000000..3eb02c9be5eb
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/ring.c
> @@ -0,0 +1,267 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2021 Linaro Ltd.
> + * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> + */
> +
> +#include <linux/mhi_ep.h>
> +#include "internal.h"
> +
> +size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr)
> +{
> +	u64 rbase;
> +
> +	rbase = le64_to_cpu(ring->ring_ctx->generic.rbase);
> +
> +	return (ptr - rbase) / sizeof(struct mhi_ep_ring_element);
> +}
> +
> +static u32 mhi_ep_ring_num_elems(struct mhi_ep_ring *ring)
> +{
> +	return le64_to_cpu(ring->ring_ctx->generic.rlen) / sizeof(struct mhi_ep_ring_element);
> +}
> +
> +void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring)
> +{
> +	ring->rd_offset++;
> +	if (ring->rd_offset == ring->ring_size)
> +		ring->rd_offset = 0;

Maybe:
	ring->rd_offset = (ring->rd_offset + 1) % ring->ring_size;

> +}
> +
> +static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	size_t start, copy_size;
> +	int ret;
> +
> +	/* No need to cache event rings */
> +	if (ring->type == RING_TYPE_ER)
> +		return 0;

Does this ever happen--a request to cache an event ring?
This seems pointless if we can tell by inspection it
won't happen.

> +
> +	/* No need to cache the ring if write pointer is unmodified */
> +	if (ring->wr_offset == end)
> +		return 0;
> +
> +	start = ring->wr_offset;
> +	if (start < end) {
> +		copy_size = (end - start) * sizeof(struct mhi_ep_ring_element);
> +		ret = mhi_cntrl->read_from_host(mhi_cntrl,
> +						(le64_to_cpu(ring->ring_ctx->generic.rbase) +
> +						(start * sizeof(struct mhi_ep_ring_element))),
> +						&ring->ring_cache[start], copy_size);
> +		if (ret < 0)
> +			return ret;
> +	} else {
> +		copy_size = (ring->ring_size - start) * sizeof(struct mhi_ep_ring_element);
> +		ret = mhi_cntrl->read_from_host(mhi_cntrl,
> +						(le64_to_cpu(ring->ring_ctx->generic.rbase) +
> +						(start * sizeof(struct mhi_ep_ring_element))),
> +						&ring->ring_cache[start], copy_size);
> +		if (ret < 0)
> +			return ret;
> +
> +		if (end) {
> +			ret = mhi_cntrl->read_from_host(mhi_cntrl,
> +							le64_to_cpu(ring->ring_ctx->generic.rbase),
> +							&ring->ring_cache[0],
> +							end * sizeof(struct mhi_ep_ring_element));
> +			if (ret < 0)
> +				return ret;
> +		}
> +	}
> +
> +	dev_dbg(dev, "Cached ring: start %zu end %zu size %zu\n", start, end, copy_size);
> +
> +	return 0;
> +}
> +
> +static int mhi_ep_cache_ring(struct mhi_ep_ring *ring, u64 wr_ptr)
> +{
> +	size_t wr_offset;
> +	int ret;
> +
> +	wr_offset = mhi_ep_ring_addr2offset(ring, wr_ptr);
> +
> +	/* Cache the host ring till write offset */
> +	ret = __mhi_ep_cache_ring(ring, wr_offset);
> +	if (ret)
> +		return ret;
> +
> +	ring->wr_offset = wr_offset;
> +
> +	return 0;
> +}
> +
> +int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring)
> +{
> +	u64 wr_ptr;
> +
> +	wr_ptr = mhi_ep_mmio_get_db(ring);
> +
> +	return mhi_ep_cache_ring(ring, wr_ptr);
> +}
> +
> +static int mhi_ep_process_ring_element(struct mhi_ep_ring *ring, size_t offset)
> +{
> +	struct mhi_ep_ring_element *el;
> +
> +	/* Get the element and invoke the respective callback */
> +	el = &ring->ring_cache[offset];
> +
> +	return ring->ring_cb(ring, el);
> +}
> +
> +int mhi_ep_process_ring(struct mhi_ep_ring *ring)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	int ret = 0;
> +
> +	/* Event rings should not be processed */
> +	if (ring->type == RING_TYPE_ER)
> +		return -EINVAL;
> +
> +	dev_dbg(dev, "Processing ring of type: %d\n", ring->type);
> +
> +	/* Update the write offset for the ring */
> +	ret = mhi_ep_update_wr_offset(ring);
> +	if (ret) {
> +		dev_err(dev, "Error updating write offset for ring\n");
> +		return ret;
> +	}
> +
> +	/* Sanity check to make sure there are elements in the ring */
> +	if (ring->rd_offset == ring->wr_offset)
> +		return 0;
> +
> +	/* Process channel ring first */
> +	if (ring->type == RING_TYPE_CH) {
> +		ret = mhi_ep_process_ring_element(ring, ring->rd_offset);
> +		if (ret)
> +			dev_err(dev, "Error processing ch ring element: %zu\n", ring->rd_offset);
> +
> +		return ret;
> +	}
> +
> +	/* Process command ring now */
> +	while (ring->rd_offset != ring->wr_offset) {
> +		ret = mhi_ep_process_ring_element(ring, ring->rd_offset);
> +		if (ret) {
> +			dev_err(dev, "Error processing cmd ring element: %zu\n", ring->rd_offset);
> +			return ret;
> +		}
> +
> +		mhi_ep_ring_inc_index(ring);
> +	}
> +
> +	return 0;
> +}
> +
> +/* TODO: Support for adding multiple ring elements to the ring */
> +int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	__le64 rbase = ring->ring_ctx->generic.rbase;
> +	size_t old_offset = 0;
> +	u32 num_free_elem;
> +	int ret;
> +
> +	ret = mhi_ep_update_wr_offset(ring);
> +	if (ret) {
> +		dev_err(dev, "Error updating write pointer\n");
> +		return ret;
> +	}
> +
> +	if (ring->rd_offset < ring->wr_offset)
> +		num_free_elem = (ring->wr_offset - ring->rd_offset) - 1;
> +	else
> +		num_free_elem = ((ring->ring_size - ring->rd_offset) + ring->wr_offset) - 1;
> +
> +	/* Check if there is space in ring for adding at least an element */
> +	if (!num_free_elem) {
> +		dev_err(dev, "No space left in the ring\n");
> +		return -ENOSPC;
> +	}
> +
> +	old_offset = ring->rd_offset;
> +	mhi_ep_ring_inc_index(ring);
> +
> +	dev_dbg(dev, "Adding an element to ring at offset (%zu)\n", ring->rd_offset);
> +
> +	/* Update rp in ring context */
> +	ring->ring_ctx->generic.rp = cpu_to_le64((ring->rd_offset * sizeof(*el))) + rbase;

Is it valid to add a byte swapped value to a byte swapped value?
It seems odd to me, even if the result is correct.  I think you
should add the values, then byte swap them when assigning.

> +
> +	/* Ensure that the ring pointer gets updated before writing the element to ring */
> +	smp_wmb();
> +
> +	ret = mhi_cntrl->write_to_host(mhi_cntrl, el, (le64_to_cpu(rbase) +
> +				       (old_offset * sizeof(*el))), sizeof(*el));

Unneeded extra parentheses around the third argument.

> +	if (ret < 0)
> +		return ret;
> +
> +	return 0;
> +}
> +
> +void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id)
> +{
> +	ring->type = type;
> +	if (ring->type == RING_TYPE_CMD) {
> +		ring->ring_cb = mhi_ep_process_cmd_ring;
> +		ring->db_offset_h = CRDB_HIGHER;
> +		ring->db_offset_l = CRDB_LOWER;
> +	} else if (ring->type == RING_TYPE_CH) {
> +		ring->ring_cb = mhi_ep_process_tre_ring;
> +		ring->db_offset_h = CHDB_HIGHER_n(id);
> +		ring->db_offset_l = CHDB_LOWER_n(id);
> +		ring->ch_id = id;
> +	} else {
> +		ring->db_offset_h = ERDB_HIGHER_n(id);
> +		ring->db_offset_l = ERDB_LOWER_n(id);
> +	}
> +}
> +
> +int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
> +			union mhi_ep_ring_ctx *ctx)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	int ret;
> +
> +	ring->mhi_cntrl = mhi_cntrl;
> +	ring->ring_ctx = ctx;
> +	ring->ring_size = mhi_ep_ring_num_elems(ring);
> +
> +	if (ring->type == RING_TYPE_CH)
> +		ring->er_index = le32_to_cpu(ring->ring_ctx->ch.erindex);
> +
> +	if (ring->type == RING_TYPE_ER)
> +		ring->irq_vector = le32_to_cpu(ring->ring_ctx->ev.msivec);
> +
> +	/* During ring init, both rp and wp are equal */
> +	ring->rd_offset = mhi_ep_ring_addr2offset(ring, le64_to_cpu(ring->ring_ctx->generic.rp));
> +	ring->wr_offset = mhi_ep_ring_addr2offset(ring, le64_to_cpu(ring->ring_ctx->generic.rp));
> +
> +	/* Allocate ring cache memory for holding the copy of host ring */
> +	ring->ring_cache = kcalloc(ring->ring_size, sizeof(struct mhi_ep_ring_element),
> +				   GFP_KERNEL);
> +	if (!ring->ring_cache)
> +		return -ENOMEM;
> +
> +	ret = mhi_ep_cache_ring(ring, le64_to_cpu(ring->ring_ctx->generic.wp));
> +	if (ret) {
> +		dev_err(dev, "Failed to cache ring\n");
> +		kfree(ring->ring_cache);
> +		return ret;
> +	}
> +
> +	ring->started = true;
> +
> +	return 0;
> +}
> +
> +void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring)
> +{
> +	ring->started = false;
> +	kfree(ring->ring_cache);

Maybe you'll never reuse this, but it seems that a reset
might mean we'll reuse the ring.  In that case it might
be useful to set the ring_cache pointer to NULL, so we
guarantee never to use the freed memory (we'll crash with
a null pointer dereference if there's a bug).

> +}
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 3d2ab7a5ccd7..33828a6c4e63 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -62,6 +62,11 @@ struct mhi_ep_db_info {
>    * @ch_ctx_host_pa: Physical address of host channel context data structure
>    * @ev_ctx_host_pa: Physical address of host event context data structure
>    * @cmd_ctx_host_pa: Physical address of host command context data structure
> + * @ring_wq: Dedicated workqueue for processing MHI rings
> + * @ring_work: Ring worker
> + * @ch_db_list: List of queued channel doorbells
> + * @st_transition_list: List of state transitions
> + * @list_lock: Lock for protecting state transition and channel doorbell lists
>    * @chdb: Array of channel doorbell interrupt info
>    * @raise_irq: CB function for raising IRQ to the host
>    * @alloc_addr: CB function for allocating memory in endpoint for storing host context
> @@ -93,6 +98,12 @@ struct mhi_ep_cntrl {
>   	u64 ev_ctx_host_pa;
>   	u64 cmd_ctx_host_pa;
>   
> +	struct workqueue_struct	*ring_wq;
> +	struct work_struct ring_work;
> +
> +	struct list_head ch_db_list;
> +	struct list_head st_transition_list;
> +	spinlock_t list_lock;
>   	struct mhi_ep_db_info chdb[4];
>   
>   	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 13/25] bus: mhi: ep: Add support for sending events to the host
  2022-02-12 18:21 ` [PATCH v3 13/25] bus: mhi: ep: Add support for sending events to the host Manivannan Sadhasivam
@ 2022-02-15 22:39   ` Alex Elder
  2022-02-22  6:06     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:39 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for sending the events to the host over MHI bus from the
> endpoint. Following events are supported:
> 
> 1. Transfer completion event
> 2. Command completion event
> 3. State change event
> 4. Execution Environment (EE) change event
> 
> An event is sent whenever an operation has been completed in the MHI EP
> device. Event is sent using the MHI event ring and additionally the host
> is notified using an IRQ if required.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

A few things can be simplified here.

					-Alex

> ---
>   drivers/bus/mhi/common.h      |  15 ++++
>   drivers/bus/mhi/ep/internal.h |   8 ++-
>   drivers/bus/mhi/ep/main.c     | 126 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        |   8 +++
>   4 files changed, 155 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> index 728c82928d8d..26d94ed52b34 100644
> --- a/drivers/bus/mhi/common.h
> +++ b/drivers/bus/mhi/common.h
> @@ -176,6 +176,21 @@
>   #define MHI_TRE_GET_EV_LINKSPEED(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
>   #define MHI_TRE_GET_EV_LINKWIDTH(tre)			(MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
>   
> +/* State change event */
> +#define MHI_SC_EV_PTR					0
> +#define MHI_SC_EV_DWORD0(state)				cpu_to_le32(state << 24)
> +#define MHI_SC_EV_DWORD1(type)				cpu_to_le32(type << 16)
> +
> +/* EE event */
> +#define MHI_EE_EV_PTR					0
> +#define MHI_EE_EV_DWORD0(ee)				cpu_to_le32(ee << 24)
> +#define MHI_EE_EV_DWORD1(type)				cpu_to_le32(type << 16)
> +
> +/* Command Completion event */
> +#define MHI_CC_EV_PTR(ptr)				cpu_to_le64(ptr)
> +#define MHI_CC_EV_DWORD0(code)				cpu_to_le32(code << 24)
> +#define MHI_CC_EV_DWORD1(type)				cpu_to_le32(type << 16)
> +
>   /* Transfer descriptor macros */
>   #define MHI_TRE_DATA_PTR(ptr)				cpu_to_le64(ptr)
>   #define MHI_TRE_DATA_DWORD0(len)			cpu_to_le32(len & MHI_MAX_MTU)
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index 48d6e9667d55..fd63f79c6aec 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -131,8 +131,8 @@ enum mhi_ep_ring_type {
>   };
>   
>   struct mhi_ep_ring_element {
> -	u64 ptr;
> -	u32 dword[2];
> +	__le64 ptr;
> +	__le32 dword[2];

Yay!

>   };
>   
>   /* Ring element */
> @@ -227,4 +227,8 @@ void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *s
>   void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
>   void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
>   
> +/* MHI EP core functions */
> +int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
> +int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env);
> +
>   #endif
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 2c8045766292..61f066c6286b 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -18,6 +18,131 @@
>   
>   static DEFINE_IDA(mhi_ep_cntrl_ida);
>   
> +static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
> +			     struct mhi_ep_ring_element *el)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	union mhi_ep_ring_ctx *ctx;
> +	struct mhi_ep_ring *ring;
> +	int ret;
> +
> +	mutex_lock(&mhi_cntrl->event_lock);
> +	ring = &mhi_cntrl->mhi_event[ring_idx].ring;
> +	ctx = (union mhi_ep_ring_ctx *)&mhi_cntrl->ev_ctx_cache[ring_idx];
> +	if (!ring->started) {
> +		ret = mhi_ep_ring_start(mhi_cntrl, ring, ctx);
> +		if (ret) {
> +			dev_err(dev, "Error starting event ring (%d)\n", ring_idx);
> +			goto err_unlock;
> +		}
> +	}
> +
> +	/* Add element to the event ring */
> +	ret = mhi_ep_ring_add_element(ring, el);
> +	if (ret) {
> +		dev_err(dev, "Error adding element to event ring (%d)\n", ring_idx);
> +		goto err_unlock;
> +	}
> +
> +	/* Ensure that the ring pointer gets updated in host memory before triggering IRQ */
> +	smp_wmb();

I think the barrier might already be provided by the mutex_unlock().

> +
> +	mutex_unlock(&mhi_cntrl->event_lock);
> +
> +	/*
> +	 * Raise IRQ to host only if the BEI flag is not set in TRE. Host might
> +	 * set this flag for interrupt moderation as per MHI protocol.
> +	 */

I don't think the BEI flag is meaningful in an event ring element.
You'd want to determine if it was present in the *transfer* ring
element for which this event is signaling the completion.

> +	if (!MHI_EP_TRE_GET_BEI(el))
> +		mhi_cntrl->raise_irq(mhi_cntrl, ring->irq_vector);
> +
> +	return 0;
> +
> +err_unlock:
> +	mutex_unlock(&mhi_cntrl->event_lock);
> +
> +	return ret;
> +}
> +
> +static int mhi_ep_send_completion_event(struct mhi_ep_cntrl *mhi_cntrl,
> +					struct mhi_ep_ring *ring, u32 len,
> +					enum mhi_ev_ccs code)
> +{
> +	struct mhi_ep_ring_element event = {};
> +	__le32 tmp;
> +
> +	event.ptr = le64_to_cpu(ring->ring_ctx->generic.rbase) +
> +			ring->rd_offset * sizeof(struct mhi_ep_ring_element);

I'm not sure at the moment where this will be called.  But
it might be easier to pass in the transfer channel pointer
rather than compute its address here.

> +
> +	tmp = event.dword[0];

You already know event.dword[0] is zero.  No need to read
its value here (or that of dword[1] below).

> +	tmp |= MHI_TRE_EV_DWORD0(code, len);
> +	event.dword[0] = tmp;
> +
> +	tmp = event.dword[1];
> +	tmp |= MHI_TRE_EV_DWORD1(ring->ch_id, MHI_PKT_TYPE_TX_EVENT);
> +	event.dword[1] = tmp;
> +
> +	return mhi_ep_send_event(mhi_cntrl, ring->er_index, &event);
> +}
> +
> +int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state)
> +{
> +	struct mhi_ep_ring_element event = {};
> +	__le32 tmp;
> +
> +	tmp = event.dword[0];

No need to read a known zero value.  (Fix this throughout.)

> +	tmp |= MHI_SC_EV_DWORD0(state);
> +	event.dword[0] = tmp;
> +
> +	tmp = event.dword[1];
> +	tmp |= MHI_SC_EV_DWORD1(MHI_PKT_TYPE_STATE_CHANGE_EVENT);
> +	event.dword[1] = tmp;
> +
> +	return mhi_ep_send_event(mhi_cntrl, 0, &event);
> +}
> +
> +int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env)
> +{
> +	struct mhi_ep_ring_element event = {};
> +	__le32 tmp;
> +
> +	tmp = event.dword[0];
> +	tmp |= MHI_EE_EV_DWORD0(exec_env);
> +	event.dword[0] = tmp;
> +
> +	tmp = event.dword[1];
> +	tmp |= MHI_SC_EV_DWORD1(MHI_PKT_TYPE_EE_EVENT);
> +	event.dword[1] = tmp;
> +
> +	return mhi_ep_send_event(mhi_cntrl, 0, &event);
> +}
> +
> +static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ev_ccs code)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	struct mhi_ep_ring_element event = {};
> +	__le32 tmp;
> +
> +	if (code > MHI_EV_CC_BAD_TRE) {

I think you can probably guarantee this won't ever happen.

> +		dev_err(dev, "Invalid command completion code (%d)\n", code);
> +		return -EINVAL;
> +	}
> +
> +	event.ptr = le64_to_cpu(mhi_cntrl->cmd_ctx_cache->rbase)
> +			+ (mhi_cntrl->mhi_cmd->ring.rd_offset *
> +			(sizeof(struct mhi_ep_ring_element)));

No need for the parentheses around the sizeof() call.  Here too
it might be easier and clearer to pass in the command ring element
this event is signaling the completion of.

> +
> +	tmp = event.dword[0];
> +	tmp |= MHI_CC_EV_DWORD0(code);
> +	event.dword[0] = tmp;
> +
> +	tmp = event.dword[1];
> +	tmp |= MHI_CC_EV_DWORD1(MHI_PKT_TYPE_CMD_COMPLETION_EVENT);
> +	event.dword[1] = tmp;
> +
> +	return mhi_ep_send_event(mhi_cntrl, 0, &event);
> +}
> +
>   static void mhi_ep_ring_worker(struct work_struct *work)
>   {
>   	struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
> @@ -270,6 +395,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   
>   	INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
>   	spin_lock_init(&mhi_cntrl->list_lock);
> +	mutex_init(&mhi_cntrl->event_lock);
>   
>   	/* Set MHI version and AMSS EE before enumeration */
>   	mhi_ep_mmio_write(mhi_cntrl, MHIVER, config->mhi_version);
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 33828a6c4e63..062133a68118 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -59,6 +59,9 @@ struct mhi_ep_db_info {
>    * @mhi_event: Points to the event ring configurations table
>    * @mhi_cmd: Points to the command ring configurations table
>    * @sm: MHI Endpoint state machine
> + * @ch_ctx_cache: Cache of host channel context data structure
> + * @ev_ctx_cache: Cache of host event context data structure
> + * @cmd_ctx_cache: Cache of host command context data structure
>    * @ch_ctx_host_pa: Physical address of host channel context data structure
>    * @ev_ctx_host_pa: Physical address of host event context data structure
>    * @cmd_ctx_host_pa: Physical address of host command context data structure
> @@ -67,6 +70,7 @@ struct mhi_ep_db_info {
>    * @ch_db_list: List of queued channel doorbells
>    * @st_transition_list: List of state transitions
>    * @list_lock: Lock for protecting state transition and channel doorbell lists
> + * @event_lock: Lock for protecting event rings
>    * @chdb: Array of channel doorbell interrupt info
>    * @raise_irq: CB function for raising IRQ to the host
>    * @alloc_addr: CB function for allocating memory in endpoint for storing host context
> @@ -94,6 +98,9 @@ struct mhi_ep_cntrl {
>   	struct mhi_ep_cmd *mhi_cmd;
>   	struct mhi_ep_sm *sm;
>   
> +	struct mhi_chan_ctxt *ch_ctx_cache;
> +	struct mhi_event_ctxt *ev_ctx_cache;
> +	struct mhi_cmd_ctxt *cmd_ctx_cache;
>   	u64 ch_ctx_host_pa;
>   	u64 ev_ctx_host_pa;
>   	u64 cmd_ctx_host_pa;
> @@ -104,6 +111,7 @@ struct mhi_ep_cntrl {
>   	struct list_head ch_db_list;
>   	struct list_head st_transition_list;
>   	spinlock_t list_lock;
> +	struct mutex event_lock;
>   	struct mhi_ep_db_info chdb[4];
>   
>   	void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl, u32 vector);



* Re: [PATCH v3 14/25] bus: mhi: ep: Add support for managing MHI state machine
  2022-02-12 18:21 ` [PATCH v3 14/25] bus: mhi: ep: Add support for managing MHI state machine Manivannan Sadhasivam
@ 2022-02-15 22:39   ` Alex Elder
  2022-02-22  7:03     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:39 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for managing the MHI state machine by controlling the state
> transitions. Only the following MHI state transitions are supported:
> 
> 1. Ready state
> 2. M0 state
> 3. M3 state
> 4. SYS_ERR state
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Minor suggestions here.		-Alex

> ---
>   drivers/bus/mhi/ep/Makefile   |   2 +-
>   drivers/bus/mhi/ep/internal.h |  11 +++
>   drivers/bus/mhi/ep/main.c     |  51 ++++++++++-
>   drivers/bus/mhi/ep/sm.c       | 168 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        |   6 ++
>   5 files changed, 236 insertions(+), 2 deletions(-)
>   create mode 100644 drivers/bus/mhi/ep/sm.c
> 
> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> index 7ba0e04801eb..aad85f180b70 100644
> --- a/drivers/bus/mhi/ep/Makefile
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -1,2 +1,2 @@
>   obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> -mhi_ep-y := main.o mmio.o ring.o
> +mhi_ep-y := main.o mmio.o ring.o sm.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index fd63f79c6aec..e4e8f06c2898 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -173,6 +173,11 @@ struct mhi_ep_event {
>   	struct mhi_ep_ring ring;
>   };
>   
> +struct mhi_ep_state_transition {
> +	struct list_head node;
> +	enum mhi_state state;
> +};
> +
>   struct mhi_ep_chan {
>   	char *name;
>   	struct mhi_ep_device *mhi_dev;
> @@ -230,5 +235,11 @@ void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
>   /* MHI EP core functions */
>   int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
>   int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env);
> +bool mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state cur_mhi_state,
> +			    enum mhi_state mhi_state);
> +int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state);
> +int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
> +int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
> +int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
>   
>   #endif
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 61f066c6286b..ccb3c2795041 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -185,6 +185,43 @@ static void mhi_ep_ring_worker(struct work_struct *work)
>   	}
>   }
>   
> +static void mhi_ep_state_worker(struct work_struct *work)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	struct mhi_ep_state_transition *itr, *tmp;
> +	unsigned long flags;
> +	LIST_HEAD(head);
> +	int ret;
> +
> +	spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
> +	list_splice_tail_init(&mhi_cntrl->st_transition_list, &head);
> +	spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
> +
> +	list_for_each_entry_safe(itr, tmp, &head, node) {
> +		list_del(&itr->node);
> +		dev_dbg(dev, "Handling MHI state transition to %s\n",
> +			 mhi_state_str(itr->state));
> +
> +		switch (itr->state) {
> +		case MHI_STATE_M0:
> +			ret = mhi_ep_set_m0_state(mhi_cntrl);
> +			if (ret)
> +				dev_err(dev, "Failed to transition to M0 state\n");
> +			break;
> +		case MHI_STATE_M3:
> +			ret = mhi_ep_set_m3_state(mhi_cntrl);
> +			if (ret)
> +				dev_err(dev, "Failed to transition to M3 state\n");
> +			break;
> +		default:
> +			dev_err(dev, "Invalid MHI state transition: %d\n", itr->state);
> +			break;
> +		}
> +		kfree(itr);
> +	}
> +}
> +
>   static void mhi_ep_release_device(struct device *dev)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> @@ -386,6 +423,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   	}
>   
>   	INIT_WORK(&mhi_cntrl->ring_work, mhi_ep_ring_worker);
> +	INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
>   
>   	mhi_cntrl->ring_wq = alloc_workqueue("mhi_ep_ring_wq", 0, 0);
>   	if (!mhi_cntrl->ring_wq) {
> @@ -393,8 +431,16 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   		goto err_free_cmd;
>   	}
>   
> +	mhi_cntrl->state_wq = alloc_workqueue("mhi_ep_state_wq", 0, 0);

Maybe it's not a big deal, but do we really need several separate
work queues?  Would one suffice?  Could a system workqueue be used
in some cases (such as state changes)?

> +	if (!mhi_cntrl->state_wq) {
> +		ret = -ENOMEM;
> +		goto err_destroy_ring_wq;
> +	}
> +
>   	INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
> +	INIT_LIST_HEAD(&mhi_cntrl->st_transition_list);
>   	spin_lock_init(&mhi_cntrl->list_lock);
> +	spin_lock_init(&mhi_cntrl->state_lock);
>   	mutex_init(&mhi_cntrl->event_lock);
>   
>   	/* Set MHI version and AMSS EE before enumeration */
> @@ -405,7 +451,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   	mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
>   	if (mhi_cntrl->index < 0) {
>   		ret = mhi_cntrl->index;
> -		goto err_destroy_ring_wq;
> +		goto err_destroy_state_wq;
>   	}
>   
>   	/* Allocate the controller device */
> @@ -433,6 +479,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   	put_device(&mhi_dev->dev);
>   err_ida_free:
>   	ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
> +err_destroy_state_wq:
> +	destroy_workqueue(mhi_cntrl->state_wq);
>   err_destroy_ring_wq:
>   	destroy_workqueue(mhi_cntrl->ring_wq);
>   err_free_cmd:
> @@ -448,6 +496,7 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
>   
> +	destroy_workqueue(mhi_cntrl->state_wq);
>   	destroy_workqueue(mhi_cntrl->ring_wq);
>   
>   	kfree(mhi_cntrl->mhi_cmd);
> diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
> new file mode 100644
> index 000000000000..68e7f99b9137
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/sm.c
> @@ -0,0 +1,168 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2021 Linaro Ltd.
> + * Author: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> + */
> +
> +#include <linux/delay.h>
> +#include <linux/errno.h>
> +#include <linux/mhi_ep.h>
> +#include "internal.h"
> +
> +bool __must_check mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl,
> +					 enum mhi_state cur_mhi_state,
> +					 enum mhi_state mhi_state)
> +{
> +	bool valid = false;
> +
> +	switch (mhi_state) {
> +	case MHI_STATE_READY:
> +		valid = (cur_mhi_state == MHI_STATE_RESET);

Just do:
		return cur_mhi_state == MHI_STATE_RESET;

And similar for all.  No parentheses needed.

It *might* be easier to understand if you test based
on the current state:

	if (mhi_state == MHI_STATE_SYS_ERR)
		return true;	/* Allowed in any state */

	if (cur_mhi_state == MHI_STATE_RESET)
		return mhi_state == MHI_STATE_READY;

	if (cur_mhi_state == MHI_STATE_READY)
		return mhi_state == MHI_STATE_M0;

	if (cur_mhi_state == MHI_STATE_M0)
		return mhi_state == MHI_STATE_M3;

	if (cur_mhi_state == MHI_STATE_M3)
		return mhi_state == MHI_STATE_M0;

	return false;
}
	
> +		break;
> +	case MHI_STATE_M0:
> +		valid = (cur_mhi_state == MHI_STATE_READY ||
> +			  cur_mhi_state == MHI_STATE_M3);
> +		break;
> +	case MHI_STATE_M3:
> +		valid = (cur_mhi_state == MHI_STATE_M0);
> +		break;
> +	case MHI_STATE_SYS_ERR:
> +		/* Transition to SYS_ERR state is allowed all the time */
> +		valid = true;
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return valid;
> +}
> +
> +int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +
> +	if (!mhi_ep_check_mhi_state(mhi_cntrl, mhi_cntrl->mhi_state, mhi_state)) {
> +		dev_err(dev, "MHI state change to %s from %s is not allowed!\n",
> +			mhi_state_str(mhi_state),
> +			mhi_state_str(mhi_cntrl->mhi_state));
> +		return -EACCES;
> +	}
> +

In all (valid) cases, you set the state.  Maybe do that once, in common
code outside of the switch statement.

> +	switch (mhi_state) {
> +	case MHI_STATE_READY:
> +		mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
> +				MHISTATUS_READY_MASK, 1);
> +
> +		mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
> +				MHISTATUS_MHISTATE_MASK, mhi_state);

Maybe set the state before the READY bit?

> +		break;
> +	case MHI_STATE_SYS_ERR:
> +		mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
> +				MHISTATUS_SYSERR_MASK, 1);
> +
> +		mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
> +				MHISTATUS_MHISTATE_MASK, mhi_state);

Here too, maybe set the state before the SYSERR bit.

> +		break;
> +	case MHI_STATE_M1:
> +	case MHI_STATE_M2:
> +		dev_err(dev, "MHI state (%s) not supported\n", mhi_state_str(mhi_state));
> +		return -EOPNOTSUPP;
> +	case MHI_STATE_M0:
> +	case MHI_STATE_M3:
> +		mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
> +					  MHISTATUS_MHISTATE_MASK, mhi_state);
> +		break;
> +	default:

I think you can tell by inspection that the new state passed will
always be valid.

> +		dev_err(dev, "Invalid MHI state (%d)\n", mhi_state);
> +		return -EINVAL;
> +	}
> +
> +	mhi_cntrl->mhi_state = mhi_state;
> +
> +	return 0;
> +}
> +

/* M0 state is entered only from READY or M3 state */

> +int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	enum mhi_state old_state;
> +	int ret;
> +
> +	spin_lock_bh(&mhi_cntrl->state_lock);
> +	old_state = mhi_cntrl->mhi_state;
> +
> +	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
> +	if (ret) {
> +		spin_unlock_bh(&mhi_cntrl->state_lock);
> +		return ret;
> +	}

Rearrange this:

	ret = mhi_ep_set_mhi_state();
	
	spin_unlock_bh();

	if (ret)
		return ret;

There are other instances below where I suggest the same change.

> +
> +	spin_unlock_bh(&mhi_cntrl->state_lock);
> +	/* Signal host that the device moved to M0 */
> +	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M0);
> +	if (ret) {
> +		dev_err(dev, "Failed sending M0 state change event\n");
> +		return ret;
> +	}
> +
> +	if (old_state == MHI_STATE_READY) {
> +		/* Allow the host to process state change event */
> +		mdelay(1);

Why is 1 millisecond the correct delay?  Why not microseconds,
or seconds?

> +
> +		/* Send AMSS EE event to host */
> +		ret = mhi_ep_send_ee_event(mhi_cntrl, MHI_EP_AMSS_EE);
> +		if (ret) {
> +			dev_err(dev, "Failed sending AMSS EE event\n");
> +			return ret;
> +		}
> +	}
> +
> +	return 0;
> +}
> +

/* M3 state is entered only from M0 state */

> +int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	int ret;
> +
> +	spin_lock_bh(&mhi_cntrl->state_lock);
> +	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
> +	if (ret) {
> +		spin_unlock_bh(&mhi_cntrl->state_lock);
> +		return ret;
> +	}
> +
> +	spin_unlock_bh(&mhi_cntrl->state_lock);
> +
> +	/* Signal host that the device moved to M3 */
> +	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);
> +	if (ret) {
> +		dev_err(dev, "Failed sending M3 state change event\n");
> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +

/* READY state is entered only from RESET state */

> +int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	enum mhi_state mhi_state;
> +	int ret, is_ready;
> +
> +	spin_lock_bh(&mhi_cntrl->state_lock);
> +	/* Ensure that the MHISTATUS is set to RESET by host */
> +	mhi_state = mhi_ep_mmio_masked_read(mhi_cntrl, MHISTATUS, MHISTATUS_MHISTATE_MASK);
> +	is_ready = mhi_ep_mmio_masked_read(mhi_cntrl, MHISTATUS, MHISTATUS_READY_MASK);
> +
> +	if (mhi_state != MHI_STATE_RESET || is_ready) {
> +		dev_err(dev, "READY state transition failed. MHI host not in RESET state\n");
> +		spin_unlock_bh(&mhi_cntrl->state_lock);
> +		return -EFAULT;

EFAULT means that there was a problem copying memory.  This is
not the right error code.  I'm not sure what's right, but you
could use EIO or something.

> +	}
> +
> +	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_READY);
> +	spin_unlock_bh(&mhi_cntrl->state_lock);
> +
> +	return ret;
> +}
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 062133a68118..72ce30cbe87e 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -65,11 +65,14 @@ struct mhi_ep_db_info {
>    * @ch_ctx_host_pa: Physical address of host channel context data structure
>    * @ev_ctx_host_pa: Physical address of host event context data structure
>    * @cmd_ctx_host_pa: Physical address of host command context data structure
> + * @state_wq: Dedicated workqueue for handling MHI state transitions
>    * @ring_wq: Dedicated workqueue for processing MHI rings
> + * @state_work: State transition worker
>    * @ring_work: Ring worker
>    * @ch_db_list: List of queued channel doorbells
>    * @st_transition_list: List of state transitions
>    * @list_lock: Lock for protecting state transition and channel doorbell lists
> + * @state_lock: Lock for protecting state transitions
>    * @event_lock: Lock for protecting event rings
>    * @chdb: Array of channel doorbell interrupt info
>    * @raise_irq: CB function for raising IRQ to the host
> @@ -105,12 +108,15 @@ struct mhi_ep_cntrl {
>   	u64 ev_ctx_host_pa;
>   	u64 cmd_ctx_host_pa;
>   
> +	struct workqueue_struct *state_wq;
>   	struct workqueue_struct	*ring_wq;
> +	struct work_struct state_work;
>   	struct work_struct ring_work;
>   
>   	struct list_head ch_db_list;
>   	struct list_head st_transition_list;
>   	spinlock_t list_lock;
> +	spinlock_t state_lock;
>   	struct mutex event_lock;
>   	struct mhi_ep_db_info chdb[4];
>   


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 15/25] bus: mhi: ep: Add support for processing MHI endpoint interrupts
  2022-02-12 18:21 ` [PATCH v3 15/25] bus: mhi: ep: Add support for processing MHI endpoint interrupts Manivannan Sadhasivam
@ 2022-02-15 22:39   ` Alex Elder
  2022-02-22  8:18     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:39 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for processing MHI endpoint interrupts such as control
> interrupt, command interrupt and channel interrupt from the host.
> 
> The interrupts will be generated in the endpoint device whenever host
> writes to the corresponding doorbell registers. The doorbell logic
> is handled inside the hardware internally.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Unless I'm mistaken, you have some bugs here.

Beyond that, I question whether you should be using workqueues
for handling all interrupts.  For now, it's fine, but there
might be room for improvement after this is accepted upstream
(using threaded interrupt handlers, for example).

					-Alex

> ---
>   drivers/bus/mhi/ep/main.c | 113 +++++++++++++++++++++++++++++++++++++-
>   include/linux/mhi_ep.h    |   2 +
>   2 files changed, 113 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index ccb3c2795041..072b872e735b 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -185,6 +185,56 @@ static void mhi_ep_ring_worker(struct work_struct *work)
>   	}
>   }
>   
> +static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl,
> +				    unsigned long ch_int, u32 ch_idx)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	struct mhi_ep_ring_item *item;
> +	struct mhi_ep_ring *ring;
> +	unsigned int i;

Why not u32 i?  And why is the ch_int argument unsigned long?  The value
passed in is a u32.

> +
> +	for_each_set_bit(i, &ch_int, 32) {
> +		/* Channel index varies for each register: 0, 32, 64, 96 */
> +		i += ch_idx;

This is a bug.  You should not be modifying the iterator
variable inside the loop.  Maybe do this instead:

	u32 ch_id = ch_idx + i;

	ring = &mhi_cntrl->mhi_chan[ch_id].ring;

> +		ring = &mhi_cntrl->mhi_chan[i].ring;
> +

You are initializing all fields here, so kmalloc() is fine
(rather than kzalloc()).  But if you ever add another field
to the mhi_ep_ring_item structure, that's no longer guaranteed.
I think at least a comment here explaining why you're not
using kzalloc() would be helpful.

> +		item = kmalloc(sizeof(*item), GFP_ATOMIC);

Even an ATOMIC allocation can fail.  Check the return
pointer.

> +		item->ring = ring;
> +
> +		dev_dbg(dev, "Queuing doorbell interrupt for channel (%d)\n", i);

Use ch_id (or whatever you call it) here too.

> +		spin_lock(&mhi_cntrl->list_lock);
> +		list_add_tail(&item->node, &mhi_cntrl->ch_db_list);
> +		spin_unlock(&mhi_cntrl->list_lock);

Instead, create a list head on the stack and build up
this list without using the spinlock.  Then splice
everything you added into the ch_db_list at the end.

> +
> +		queue_work(mhi_cntrl->ring_wq, &mhi_cntrl->ring_work);

Maybe there's a small amount of latency saved by
doing this repeatedly, but you're queueing work
with the same work structure over and over again.

Instead, you could set a Boolean at the top:
	work = !!ch_int;

	for_each_set_bit() {
		. . .
	}

	if (work)
		queue_work(...);


> +	}
> +}
> +
> +/*
> + * Channel interrupt statuses are contained in 4 registers each of 32bit length.
> + * For checking all interrupts, we need to loop through each registers and then
> + * check for bits set.
> + */
> +static void mhi_ep_check_channel_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	u32 ch_int, ch_idx;
> +	int i;
> +
> +	mhi_ep_mmio_read_chdb_status_interrupts(mhi_cntrl);

You could have the above function return a summary Boolean
value, which would indicate whether *any* channel interrupts
had occurred (skipping the loop below when we get just a control
or command doorbell interrupt).

> +
> +	for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
> +		ch_idx = i * MHI_MASK_CH_EV_LEN;
> +
> +		/* Only process channel interrupt if the mask is enabled */
> +		ch_int = (mhi_cntrl->chdb[i].status & mhi_cntrl->chdb[i].mask);

Parentheses not needed.

> +		if (ch_int) {
> +			mhi_ep_queue_channel_db(mhi_cntrl, ch_int, ch_idx);
> +			mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_CLEAR_A7_n(i),
> +							mhi_cntrl->chdb[i].status);
> +		}
> +	}
> +}
> +
>   static void mhi_ep_state_worker(struct work_struct *work)
>   {
>   	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
> @@ -222,6 +272,53 @@ static void mhi_ep_state_worker(struct work_struct *work)
>   	}
>   }
>   
> +static void mhi_ep_process_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl,
> +					 enum mhi_state state)
> +{
> +	struct mhi_ep_state_transition *item = kmalloc(sizeof(*item), GFP_ATOMIC);
> +

Do not assume ATOMIC allocations succeed.

I don't have any further comments on the rest.

> +	item->state = state;
> +	spin_lock(&mhi_cntrl->list_lock);
> +	list_add_tail(&item->node, &mhi_cntrl->st_transition_list);
> +	spin_unlock(&mhi_cntrl->list_lock);
> +
> +	queue_work(mhi_cntrl->state_wq, &mhi_cntrl->state_work);
> +}
> +

. . .


* Re: [PATCH v3 16/25] bus: mhi: ep: Add support for powering up the MHI endpoint stack
  2022-02-12 18:21 ` [PATCH v3 16/25] bus: mhi: ep: Add support for powering up the MHI endpoint stack Manivannan Sadhasivam
@ 2022-02-15 22:39   ` Alex Elder
  2022-02-22  9:08     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:39 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for MHI endpoint power_up that includes initializing the MMIO
> and rings, caching the host MHI registers, and setting the MHI state to M0.
> After registering the MHI EP controller, the stack has to be powered up
> for usage.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Very little to say on this one.		-Alex

> ---
>   drivers/bus/mhi/ep/internal.h |   6 +
>   drivers/bus/mhi/ep/main.c     | 229 ++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h        |  22 ++++
>   3 files changed, 257 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index e4e8f06c2898..ee8c5974f0c0 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -242,4 +242,10 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
>   int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
>   int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
>   
> +/* MHI EP memory management functions */
> +int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
> +		     phys_addr_t *phys_ptr, void __iomem **virt);
> +void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
> +		       void __iomem *virt, size_t size);
> +
>   #endif
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 072b872e735b..016e819f640a 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -16,6 +16,9 @@
>   #include <linux/module.h>
>   #include "internal.h"
>   
> +#define MHI_SUSPEND_MIN			100
> +#define MHI_SUSPEND_TIMEOUT		600
> +
>   static DEFINE_IDA(mhi_ep_cntrl_ida);
>   
>   static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
> @@ -143,6 +146,176 @@ static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_e
>   	return mhi_ep_send_event(mhi_cntrl, 0, &event);
>   }
>   
> +int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
> +		     phys_addr_t *phys_ptr, void __iomem **virt)
> +{
> +	size_t offset = pci_addr % 0x1000;
> +	void __iomem *buf;
> +	phys_addr_t phys;
> +	int ret;
> +
> +	size += offset;
> +
> +	buf = mhi_cntrl->alloc_addr(mhi_cntrl, &phys, size);
> +	if (!buf)
> +		return -ENOMEM;
> +
> +	ret = mhi_cntrl->map_addr(mhi_cntrl, phys, pci_addr - offset, size);
> +	if (ret) {
> +		mhi_cntrl->free_addr(mhi_cntrl, phys, buf, size);
> +		return ret;
> +	}
> +
> +	*phys_ptr = phys + offset;
> +	*virt = buf + offset;
> +
> +	return 0;
> +}
> +
> +void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
> +			void __iomem *virt, size_t size)
> +{
> +	size_t offset = pci_addr % 0x1000;
> +
> +	size += offset;
> +
> +	mhi_cntrl->unmap_addr(mhi_cntrl, phys - offset);
> +	mhi_cntrl->free_addr(mhi_cntrl, phys - offset, virt - offset, size);
> +}
> +
> +static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	int ret;
> +
> +	/* Update the number of event rings (NER) programmed by the host */
> +	mhi_ep_mmio_update_ner(mhi_cntrl);
> +
> +	dev_dbg(dev, "Number of Event rings: %d, HW Event rings: %d\n",
> +		 mhi_cntrl->event_rings, mhi_cntrl->hw_event_rings);
> +
> +	mhi_cntrl->ch_ctx_host_size = sizeof(struct mhi_chan_ctxt) *
> +					mhi_cntrl->max_chan;
> +	mhi_cntrl->ev_ctx_host_size = sizeof(struct mhi_event_ctxt) *
> +					mhi_cntrl->event_rings;
> +	mhi_cntrl->cmd_ctx_host_size = sizeof(struct mhi_cmd_ctxt);

If you're going to support NR_OF_CMD_RINGS command contexts,
you should maybe multiply that here too?

> +
> +	/* Get the channel context base pointer from host */
> +	mhi_ep_mmio_get_chc_base(mhi_cntrl);
> +
> +	/* Allocate and map memory for caching host channel context */
> +	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, mhi_cntrl->ch_ctx_host_size,
> +				&mhi_cntrl->ch_ctx_cache_phys,
> +				(void __iomem **)&mhi_cntrl->ch_ctx_cache);
> +	if (ret) {
> +		dev_err(dev, "Failed to allocate and map ch_ctx_cache\n");
> +		return ret;
> +	}
> +
> +	/* Get the event context base pointer from host */
> +	mhi_ep_mmio_get_erc_base(mhi_cntrl);
> +
> +	/* Allocate and map memory for caching host event context */
> +	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, mhi_cntrl->ev_ctx_host_size,
> +				&mhi_cntrl->ev_ctx_cache_phys,
> +				(void __iomem **)&mhi_cntrl->ev_ctx_cache);
> +	if (ret) {
> +		dev_err(dev, "Failed to allocate and map ev_ctx_cache\n");
> +		goto err_ch_ctx;
> +	}
> +
> +	/* Get the command context base pointer from host */
> +	mhi_ep_mmio_get_crc_base(mhi_cntrl);
> +
> +	/* Allocate and map memory for caching host command context */
> +	ret = mhi_ep_alloc_map(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, mhi_cntrl->cmd_ctx_host_size,
> +				&mhi_cntrl->cmd_ctx_cache_phys,
> +				(void __iomem **)&mhi_cntrl->cmd_ctx_cache);
> +	if (ret) {
> +		dev_err(dev, "Failed to allocate and map cmd_ctx_cache\n");
> +		goto err_ev_ctx;
> +	}
> +
> +	/* Initialize command ring */
> +	ret = mhi_ep_ring_start(mhi_cntrl, &mhi_cntrl->mhi_cmd->ring,
> +				(union mhi_ep_ring_ctx *)mhi_cntrl->cmd_ctx_cache);
> +	if (ret) {
> +		dev_err(dev, "Failed to start the command ring\n");
> +		goto err_cmd_ctx;
> +	}
> +
> +	return ret;
> +
> +err_cmd_ctx:
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, mhi_cntrl->cmd_ctx_cache_phys,
> +			mhi_cntrl->cmd_ctx_cache, mhi_cntrl->cmd_ctx_host_size);
> +
> +err_ev_ctx:
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, mhi_cntrl->ev_ctx_cache_phys,
> +			mhi_cntrl->ev_ctx_cache, mhi_cntrl->ev_ctx_host_size);
> +
> +err_ch_ctx:
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, mhi_cntrl->ch_ctx_cache_phys,
> +			mhi_cntrl->ch_ctx_cache, mhi_cntrl->ch_ctx_host_size);
> +
> +	return ret;
> +}
> +
> +static void mhi_ep_free_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->cmd_ctx_host_pa, mhi_cntrl->cmd_ctx_cache_phys,
> +			mhi_cntrl->cmd_ctx_cache, mhi_cntrl->cmd_ctx_host_size);
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ev_ctx_host_pa, mhi_cntrl->ev_ctx_cache_phys,
> +			mhi_cntrl->ev_ctx_cache, mhi_cntrl->ev_ctx_host_size);
> +	mhi_ep_unmap_free(mhi_cntrl, mhi_cntrl->ch_ctx_host_pa, mhi_cntrl->ch_ctx_cache_phys,
> +			mhi_cntrl->ch_ctx_cache, mhi_cntrl->ch_ctx_host_size);
> +}
> +
> +static void mhi_ep_enable_int(struct mhi_ep_cntrl *mhi_cntrl)
> +{

Are channel doorbell interrupts enabled separately now?
(There was previously an enable_chdb_interrupts() call.)

> +	mhi_ep_mmio_enable_ctrl_interrupt(mhi_cntrl);
> +	mhi_ep_mmio_enable_cmdb_interrupt(mhi_cntrl);
> +}
> +
> +static int mhi_ep_enable(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	enum mhi_state state;
> +	u32 max_cnt = 0;
> +	bool mhi_reset;
> +	int ret;
> +
> +	/* Wait for Host to set the M0 state */
> +	do {
> +		msleep(MHI_SUSPEND_MIN);
> +		mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);
> +		if (mhi_reset) {
> +			/* Clear the MHI reset if host is in reset state */
> +			mhi_ep_mmio_clear_reset(mhi_cntrl);
> +			dev_dbg(dev, "Host initiated reset while waiting for M0\n");
> +		}
> +		max_cnt++;
> +	} while (state != MHI_STATE_M0 && max_cnt < MHI_SUSPEND_TIMEOUT);
> +
> +	if (state == MHI_STATE_M0) {

You could rearrange this and avoid a little indentation.

	if (state != MHI_STATE_M0) {
		dev_err();
		return -ETIMEDOUT;
	}

	ret = mhi_ep_cache_host_cfg();
	. . .

> +		ret = mhi_ep_cache_host_cfg(mhi_cntrl);
> +		if (ret) {
> +			dev_err(dev, "Failed to cache host config\n");
> +			return ret;
> +		}
> +
> +		mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
> +	} else {
> +		dev_err(dev, "Host failed to enter M0\n");
> +		return -ETIMEDOUT;
> +	}
> +
> +	/* Enable all interrupts now */
> +	mhi_ep_enable_int(mhi_cntrl);
> +
> +	return 0;
> +}
> +
>   static void mhi_ep_ring_worker(struct work_struct *work)
>   {
>   	struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
> @@ -319,6 +492,62 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
>   	return IRQ_HANDLED;
>   }
>   
> +int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	int ret, i;
> +
> +	/*
> +	 * Mask all interrupts until the state machine is ready. Interrupts will
> +	 * be enabled later with mhi_ep_enable().
> +	 */
> +	mhi_ep_mmio_mask_interrupts(mhi_cntrl);
> +	mhi_ep_mmio_init(mhi_cntrl);
> +
> +	mhi_cntrl->mhi_event = kzalloc(mhi_cntrl->event_rings * (sizeof(*mhi_cntrl->mhi_event)),
> +					GFP_KERNEL);
> +	if (!mhi_cntrl->mhi_event)
> +		return -ENOMEM;
> +
> +	/* Initialize command, channel and event rings */
> +	mhi_ep_ring_init(&mhi_cntrl->mhi_cmd->ring, RING_TYPE_CMD, 0);
> +	for (i = 0; i < mhi_cntrl->max_chan; i++)
> +		mhi_ep_ring_init(&mhi_cntrl->mhi_chan[i].ring, RING_TYPE_CH, i);
> +	for (i = 0; i < mhi_cntrl->event_rings; i++)
> +		mhi_ep_ring_init(&mhi_cntrl->mhi_event[i].ring, RING_TYPE_ER, i);
> +
> +	spin_lock_bh(&mhi_cntrl->state_lock);

If we're powering up, is there anything else that could be
looking at or updating the mhi_state?

I ask because of the spinlock taken here.  Not a big deal.

But aside from that, I think this small block of code should
be done by a function in "sm.c", because it sets the state.

> +	mhi_cntrl->mhi_state = MHI_STATE_RESET;
> +	spin_unlock_bh(&mhi_cntrl->state_lock);
> +
> +	/* Set AMSS EE before signaling ready state */
> +	mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
> +
> +	/* All set, notify the host that we are ready */
> +	ret = mhi_ep_set_ready_state(mhi_cntrl);
> +	if (ret)
> +		goto err_free_event;
> +
> +	dev_dbg(dev, "READY state notification sent to the host\n");
> +
> +	ret = mhi_ep_enable(mhi_cntrl);
> +	if (ret) {
> +		dev_err(dev, "Failed to enable MHI endpoint\n");
> +		goto err_free_event;
> +	}
> +
> +	enable_irq(mhi_cntrl->irq);
> +	mhi_cntrl->is_enabled = true;
> +
> +	return 0;
> +
> +err_free_event:
> +	kfree(mhi_cntrl->mhi_event);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_power_up);
> +
>   static void mhi_ep_release_device(struct device *dev)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index a207058a4991..53895f1c68e1 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -65,6 +65,12 @@ struct mhi_ep_db_info {
>    * @ch_ctx_host_pa: Physical address of host channel context data structure
>    * @ev_ctx_host_pa: Physical address of host event context data structure
>    * @cmd_ctx_host_pa: Physical address of host command context data structure
> + * @ch_ctx_cache_phys: Physical address of the host channel context cache
> + * @ev_ctx_cache_phys: Physical address of the host event context cache
> + * @cmd_ctx_cache_phys: Physical address of the host command context cache
> + * @ch_ctx_host_size: Size of the host channel context data structure
> + * @ev_ctx_host_size: Size of the host event context data structure
> + * @cmd_ctx_host_size: Size of the host command context data structure
>    * @state_wq: Dedicated workqueue for handling MHI state transitions
>    * @ring_wq: Dedicated workqueue for processing MHI rings
>    * @state_work: State transition worker
> @@ -91,6 +97,7 @@ struct mhi_ep_db_info {
>    * @erdb_offset: Event ring doorbell offset set by the host
>    * @index: MHI Endpoint controller index
>    * @irq: IRQ used by the endpoint controller
> + * @is_enabled: Check if the endpoint controller is enabled or not

Maybe just "enabled"?

>    */
>   struct mhi_ep_cntrl {
>   	struct device *cntrl_dev;
> @@ -108,6 +115,12 @@ struct mhi_ep_cntrl {
>   	u64 ch_ctx_host_pa;
>   	u64 ev_ctx_host_pa;
>   	u64 cmd_ctx_host_pa;
> +	phys_addr_t ch_ctx_cache_phys;
> +	phys_addr_t ev_ctx_cache_phys;
> +	phys_addr_t cmd_ctx_cache_phys;

I don't think the next three fields are worth stashing in
this structure.  They can be trivially recalculated from
the size of the various context structures, and the only
one that ever varies in size is the event context size.

> +	size_t ch_ctx_host_size;
> +	size_t ev_ctx_host_size;
> +	size_t cmd_ctx_host_size;
>   
>   	struct workqueue_struct *state_wq;
>   	struct workqueue_struct	*ring_wq;
> @@ -144,6 +157,7 @@ struct mhi_ep_cntrl {
>   	u32 erdb_offset;
>   	int index;
>   	int irq;
> +	bool is_enabled;
>   };
>   
>   /**
> @@ -238,4 +252,12 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>    */
>   void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
>   
> +/**
> + * mhi_ep_power_up - Power up the MHI endpoint stack
> + * @mhi_cntrl: MHI Endpoint controller
> + *
> + * Return: 0 if power up succeeds, a negative error code otherwise.
> + */
> +int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
> +
>   #endif



* Re: [PATCH v3 17/25] bus: mhi: ep: Add support for powering down the MHI endpoint stack
  2022-02-12 18:21 ` [PATCH v3 17/25] bus: mhi: ep: Add support for powering down " Manivannan Sadhasivam
@ 2022-02-15 22:39   ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:39 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for MHI endpoint power_down that includes stopping all
> available channels, destroying the channels, resetting the event and
> transfer rings and freeing the host cache.
> 
> The stack will be powered down whenever the physical bus link goes down.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Not much to say here either, just a few suggestions.

					-Alex

> ---
>   drivers/bus/mhi/ep/main.c | 81 +++++++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h    |  6 +++
>   2 files changed, 87 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 016e819f640a..14cb08de4263 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -21,6 +21,8 @@
>   
>   static DEFINE_IDA(mhi_ep_cntrl_ida);
>   
> +static int mhi_ep_destroy_device(struct device *dev, void *data);
> +
>   static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
>   			     struct mhi_ep_ring_element *el)
>   {
> @@ -492,6 +494,71 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
>   	return IRQ_HANDLED;
>   }
>   
> +static void mhi_ep_abort_transfer(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct mhi_ep_ring *ch_ring, *ev_ring;
> +	struct mhi_result result = {};
> +	struct mhi_ep_chan *mhi_chan;
> +	int i;
> +
> +	/* Stop all the channels */
> +	for (i = 0; i < mhi_cntrl->max_chan; i++) {
		mhi_chan = &mhi_cntrl->mhi_chan[i];
		ch_ring = &mhi_chan->ring;
	
> +		ch_ring = &mhi_cntrl->mhi_chan[i].ring;
> +		if (!ch_ring->started)
> +			continue;
> +
> +		mhi_chan = &mhi_cntrl->mhi_chan[i];
> +		mutex_lock(&mhi_chan->lock);
> +		/* Send channel disconnect status to client drivers */
> +		if (mhi_chan->xfer_cb) {
> +			result.transaction_status = -ENOTCONN;
> +			result.bytes_xferd = 0;
> +			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> +		}
> +
> +		/* Set channel state to DISABLED */

Omit the above comment.

> +		mhi_chan->state = MHI_CH_STATE_DISABLED;
> +		mutex_unlock(&mhi_chan->lock);
> +	}
> +
> +	flush_workqueue(mhi_cntrl->ring_wq);
> +	flush_workqueue(mhi_cntrl->state_wq);
> +
> +	/* Destroy devices associated with all channels */
> +	device_for_each_child(&mhi_cntrl->mhi_dev->dev, NULL, mhi_ep_destroy_device);
> +
> +	/* Stop and reset the transfer rings */
> +	for (i = 0; i < mhi_cntrl->max_chan; i++) {
		mhi_chan = ...

		if (!mhi_chan->ring.started)
			continue;

> +		ch_ring = &mhi_cntrl->mhi_chan[i].ring;
> +		if (!ch_ring->started)
> +			continue;
> +
> +		mhi_chan = &mhi_cntrl->mhi_chan[i];
> +		mutex_lock(&mhi_chan->lock);
> +		mhi_ep_ring_reset(mhi_cntrl, ch_ring);
> +		mutex_unlock(&mhi_chan->lock);
> +	}
> +
> +	/* Stop and reset the event rings */
> +	for (i = 0; i < mhi_cntrl->event_rings; i++) {
> +		ev_ring = &mhi_cntrl->mhi_event[i].ring;
> +		if (!ev_ring->started)
> +			continue;
> +
> +		mutex_lock(&mhi_cntrl->event_lock);
> +		mhi_ep_ring_reset(mhi_cntrl, ev_ring);
> +		mutex_unlock(&mhi_cntrl->event_lock);
> +	}
> +
> +	/* Stop and reset the command ring */
> +	mhi_ep_ring_reset(mhi_cntrl, &mhi_cntrl->mhi_cmd->ring);
> +
> +	mhi_ep_free_host_cfg(mhi_cntrl);
> +	mhi_ep_mmio_mask_interrupts(mhi_cntrl);
> +
> +	mhi_cntrl->is_enabled = false;
> +}
> +
>   int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> @@ -548,6 +615,16 @@ int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
>   }
>   EXPORT_SYMBOL_GPL(mhi_ep_power_up);
>   
> +void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	if (mhi_cntrl->is_enabled)
> +		mhi_ep_abort_transfer(mhi_cntrl);
> +
> +	kfree(mhi_cntrl->mhi_event);
> +	disable_irq(mhi_cntrl->irq);
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_power_down);
> +
>   static void mhi_ep_release_device(struct device *dev)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> @@ -828,6 +905,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   }
>   EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
>   
> +/*
> + * It is expected that the controller drivers will power down the MHI EP stack
> + * using "mhi_ep_power_down()" before calling this function to unregister themselves.
> + */
>   void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 53895f1c68e1..4f86e7986c93 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -260,4 +260,10 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
>    */
>   int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
>   
> +/**
> + * mhi_ep_power_down - Power down the MHI endpoint stack
> + * @mhi_cntrl: MHI controller
> + */
> +void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
> +
>   #endif



* Re: [PATCH v3 18/25] bus: mhi: ep: Add support for handling MHI_RESET
  2022-02-12 18:21 ` [PATCH v3 18/25] bus: mhi: ep: Add support for handling MHI_RESET Manivannan Sadhasivam
@ 2022-02-15 22:39   ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:39 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for handling MHI_RESET in the MHI endpoint stack. MHI_RESET
> will be issued by the host during shutdown and in error scenarios so that
> the host can recover the endpoint device without restarting the whole device.
> 
> MHI_RESET handling involves resetting the internal MHI registers, data
> structures, state machines, and all channels/rings, and then clearing the
> MHICTRL.RESET bit. Additionally, the device will move to the READY state
> if the reset was due to SYS_ERR.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

I might be getting tired out...  But this looks good to me!

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/main.c | 53 +++++++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h    |  2 ++
>   2 files changed, 55 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 14cb08de4263..ddedd0fb19aa 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -471,6 +471,7 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
>   	enum mhi_state state;
>   	u32 int_value;
> +	bool mhi_reset;
>   
>   	/* Acknowledge the interrupts */
>   	int_value = mhi_ep_mmio_read(mhi_cntrl, MHI_CTRL_INT_STATUS_A7);
> @@ -479,6 +480,14 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
>   	/* Check for ctrl interrupt */
>   	if (FIELD_GET(MHI_CTRL_INT_STATUS_A7_MSK, int_value)) {
>   		dev_dbg(dev, "Processing ctrl interrupt\n");
> +		mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);
> +		if (mhi_reset) {
> +			dev_info(dev, "Host triggered MHI reset!\n");
> +			disable_irq_nosync(mhi_cntrl->irq);
> +			schedule_work(&mhi_cntrl->reset_work);
> +			return IRQ_HANDLED;
> +		}
> +
>   		mhi_ep_process_ctrl_interrupt(mhi_cntrl, state);
>   	}
>   
> @@ -559,6 +568,49 @@ static void mhi_ep_abort_transfer(struct mhi_ep_cntrl *mhi_cntrl)
>   	mhi_cntrl->is_enabled = false;
>   }
>   
> +static void mhi_ep_reset_worker(struct work_struct *work)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, reset_work);
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	enum mhi_state cur_state;
> +	int ret;
> +
> +	mhi_ep_abort_transfer(mhi_cntrl);
> +
> +	spin_lock_bh(&mhi_cntrl->state_lock);
> +	/* Reset MMIO to signal host that the MHI_RESET is completed in endpoint */
> +	mhi_ep_mmio_reset(mhi_cntrl);
> +	cur_state = mhi_cntrl->mhi_state;
> +	spin_unlock_bh(&mhi_cntrl->state_lock);
> +
> +	/*
> +	 * Only proceed further if the reset is due to SYS_ERR. The host will
> +	 * issue reset during shutdown also and we don't need to do re-init in
> +	 * that case.
> +	 */
> +	if (cur_state == MHI_STATE_SYS_ERR) {
> +		mhi_ep_mmio_init(mhi_cntrl);
> +
> +		/* Set AMSS EE before signaling ready state */
> +		mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
> +
> +		/* All set, notify the host that we are ready */
> +		ret = mhi_ep_set_ready_state(mhi_cntrl);
> +		if (ret)
> +			return;
> +
> +		dev_dbg(dev, "READY state notification sent to the host\n");
> +
> +		ret = mhi_ep_enable(mhi_cntrl);
> +		if (ret) {
> +			dev_err(dev, "Failed to enable MHI endpoint: %d\n", ret);
> +			return;
> +		}
> +
> +		enable_irq(mhi_cntrl->irq);
> +	}
> +}
> +
>   int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> @@ -827,6 +879,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>   
>   	INIT_WORK(&mhi_cntrl->ring_work, mhi_ep_ring_worker);
>   	INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
> +	INIT_WORK(&mhi_cntrl->reset_work, mhi_ep_reset_worker);
>   
>   	mhi_cntrl->ring_wq = alloc_workqueue("mhi_ep_ring_wq", 0, 0);
>   	if (!mhi_cntrl->ring_wq) {
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 4f86e7986c93..276d29fef465 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -75,6 +75,7 @@ struct mhi_ep_db_info {
>    * @ring_wq: Dedicated workqueue for processing MHI rings
>    * @state_work: State transition worker
>    * @ring_work: Ring worker
> + * @reset_work: Worker for MHI Endpoint reset
>    * @ch_db_list: List of queued channel doorbells
>    * @st_transition_list: List of state transitions
>    * @list_lock: Lock for protecting state transition and channel doorbell lists
> @@ -126,6 +127,7 @@ struct mhi_ep_cntrl {
>   	struct workqueue_struct	*ring_wq;
>   	struct work_struct state_work;
>   	struct work_struct ring_work;
> +	struct work_struct reset_work;
>   
>   	struct list_head ch_db_list;
>   	struct list_head st_transition_list;


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 19/25] bus: mhi: ep: Add support for handling SYS_ERR condition
  2022-02-12 18:21 ` [PATCH v3 19/25] bus: mhi: ep: Add support for handling SYS_ERR condition Manivannan Sadhasivam
@ 2022-02-15 22:39   ` Alex Elder
  2022-02-22 10:29     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:39 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for handling the SYS_ERR (System Error) condition in the MHI
> endpoint stack. The SYS_ERR flag will be asserted by the endpoint device
> when it detects an internal error. The host will then issue a reset and
> reinitialize MHI to recover from the error state.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

I have a few small comments, but this looks good enough for me.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/internal.h |  1 +
>   drivers/bus/mhi/ep/main.c     | 24 ++++++++++++++++++++++++
>   drivers/bus/mhi/ep/sm.c       |  2 ++
>   3 files changed, 27 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index ee8c5974f0c0..8654af7caf40 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -241,6 +241,7 @@ int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_stat
>   int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
>   int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
>   int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl);
>   
>   /* MHI EP memory management functions */
>   int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index ddedd0fb19aa..6378ac5c7e37 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -611,6 +611,30 @@ static void mhi_ep_reset_worker(struct work_struct *work)
>   	}
>   }
>   
> +/*
> + * We don't need to do anything special other than setting the MHI SYS_ERR
> + * state. The host issue will reset all contexts and issue MHI RESET so that we

s/host issue/host/

> + * could also recover from error state.
> + */
> +void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	int ret;
> +
> +	/* If MHI EP is not enabled, nothing to do */
> +	if (!mhi_cntrl->is_enabled)

Is this an expected condition?  SYS_ERR with the endpoint
disabled?

> +		return;
> +
> +	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
> +	if (ret)
> +		return;
> +
> +	/* Signal host that the device went to SYS_ERR state */
> +	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_SYS_ERR);
> +	if (ret)
> +		dev_err(dev, "Failed sending SYS_ERR state change event: %d\n", ret);
> +}
> +
>   int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
> index 68e7f99b9137..9a75ecfe1adf 100644
> --- a/drivers/bus/mhi/ep/sm.c
> +++ b/drivers/bus/mhi/ep/sm.c
> @@ -93,6 +93,7 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
>   
>   	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
>   	if (ret) {
> +		mhi_ep_handle_syserr(mhi_cntrl);
>   		spin_unlock_bh(&mhi_cntrl->state_lock);
>   		return ret;
>   	}
> @@ -128,6 +129,7 @@ int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
>   	spin_lock_bh(&mhi_cntrl->state_lock);
>   	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);

Are there any other spots that should do this?  For example, in
mhi_ep_set_ready_state() you don't check the return value of
the call to mhi_ep_set_mhi_state().  It seems to me it should
be possible to preclude bogus state changes anyway, but I'm
not completely sure.

>   	if (ret) {
> +		mhi_ep_handle_syserr(mhi_cntrl);
>   		spin_unlock_bh(&mhi_cntrl->state_lock);
>   		return ret;
>   	}


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 20/25] bus: mhi: ep: Add support for processing command ring
  2022-02-12 18:21 ` [PATCH v3 20/25] bus: mhi: ep: Add support for processing command ring Manivannan Sadhasivam
@ 2022-02-15 22:40   ` Alex Elder
  2022-02-22 10:35     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:40 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for processing the command ring. The command ring is used by
> the host to issue channel-specific commands to the endpoint device. The
> following commands are supported:
> 
> 1. Start channel
> 2. Stop channel
> 3. Reset channel
> 
> Once the device receives the command doorbell interrupt from the host, it
> executes the command and generates a command completion event for the
> host in the primary event ring.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

I'll let you consider my few comments below, but whether or not you
address them, this looks OK to me.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/main.c | 151 ++++++++++++++++++++++++++++++++++++++
>   1 file changed, 151 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 6378ac5c7e37..4c2ee517832c 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -21,6 +21,7 @@
>   
>   static DEFINE_IDA(mhi_ep_cntrl_ida);
>   
> +static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id);
>   static int mhi_ep_destroy_device(struct device *dev, void *data);
>   
>   static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
> @@ -185,6 +186,156 @@ void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t
>   	mhi_cntrl->free_addr(mhi_cntrl, phys - offset, virt - offset, size);
>   }
>   
> +int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	struct mhi_result result = {};
> +	struct mhi_ep_chan *mhi_chan;
> +	struct mhi_ep_ring *ch_ring;
> +	u32 tmp, ch_id;
> +	int ret;
> +
> +	ch_id = MHI_TRE_GET_CMD_CHID(el);
> +	mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
> +	ch_ring = &mhi_cntrl->mhi_chan[ch_id].ring;
> +
> +	switch (MHI_TRE_GET_CMD_TYPE(el)) {

No MHI_PKT_TYPE_NOOP_CMD?

> +	case MHI_PKT_TYPE_START_CHAN_CMD:
> +		dev_dbg(dev, "Received START command for channel (%d)\n", ch_id);
> +
> +		mutex_lock(&mhi_chan->lock);
> +		/* Initialize and configure the corresponding channel ring */
> +		if (!ch_ring->started) {
> +			ret = mhi_ep_ring_start(mhi_cntrl, ch_ring,
> +				(union mhi_ep_ring_ctx *)&mhi_cntrl->ch_ctx_cache[ch_id]);
> +			if (ret) {
> +				dev_err(dev, "Failed to start ring for channel (%d)\n", ch_id);
> +				ret = mhi_ep_send_cmd_comp_event(mhi_cntrl,
> +							MHI_EV_CC_UNDEFINED_ERR);
> +				if (ret)
> +					dev_err(dev, "Error sending completion event (%d)\n",
> +						MHI_EV_CC_UNDEFINED_ERR);

Print the value of ret in the above message (not UNDEFINED_ERR).

> +
> +				goto err_unlock;
> +			}
> +		}
> +
> +		/* Enable DB for the channel */
> +		mhi_ep_mmio_enable_chdb_a7(mhi_cntrl, ch_id);

If an error occurs later, this will be enabled.  Is that what
you want?  Maybe wait to enable the doorbell until everything
else succeeds.

> +
> +		/* Set channel state to RUNNING */
> +		mhi_chan->state = MHI_CH_STATE_RUNNING;
> +		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[ch_id].chcfg);
> +		tmp &= ~CHAN_CTX_CHSTATE_MASK;
> +		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_RUNNING);
> +		mhi_cntrl->ch_ctx_cache[ch_id].chcfg = cpu_to_le32(tmp);
> +
> +		ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
> +		if (ret) {
> +			dev_err(dev, "Error sending command completion event (%d)\n",
> +				MHI_EV_CC_SUCCESS);
> +			goto err_unlock;
> +		}
> +
> +		mutex_unlock(&mhi_chan->lock);
> +
> +		/*
> +		 * Create MHI device only during UL channel start. Since the MHI
> +		 * channels operate in a pair, we'll associate both UL and DL
> +		 * channels to the same device.
> +		 *
> +		 * We also need to check for mhi_dev != NULL because, the host
> +		 * will issue START_CHAN command during resume and we don't
> +		 * destroy the device during suspend.
> +		 */
> +		if (!(ch_id % 2) && !mhi_chan->mhi_dev) {
> +			ret = mhi_ep_create_device(mhi_cntrl, ch_id);
> +			if (ret) {

If this occurs, the host will already have been told the
request completed successfully.  Is that a problem that
can/should be avoided?

> +				dev_err(dev, "Error creating device for channel (%d)\n", ch_id);
> +				return ret;
> +			}
> +		}
> +
> +		break;
> +	case MHI_PKT_TYPE_STOP_CHAN_CMD:
> +		dev_dbg(dev, "Received STOP command for channel (%d)\n", ch_id);
> +		if (!ch_ring->started) {
> +			dev_err(dev, "Channel (%d) not opened\n", ch_id);
> +			return -ENODEV;
> +		}
> +
> +		mutex_lock(&mhi_chan->lock);
> +		/* Disable DB for the channel */
> +		mhi_ep_mmio_disable_chdb_a7(mhi_cntrl, ch_id);
> +
> +		/* Send channel disconnect status to client drivers */
> +		result.transaction_status = -ENOTCONN;
> +		result.bytes_xferd = 0;
> +		mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> +
> +		/* Set channel state to STOP */
> +		mhi_chan->state = MHI_CH_STATE_STOP;
> +		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[ch_id].chcfg);
> +		tmp &= ~CHAN_CTX_CHSTATE_MASK;
> +		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_STOP);
> +		mhi_cntrl->ch_ctx_cache[ch_id].chcfg = cpu_to_le32(tmp);
> +
> +		ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
> +		if (ret) {
> +			dev_err(dev, "Error sending command completion event (%d)\n",
> +				MHI_EV_CC_SUCCESS);
> +			goto err_unlock;
> +		}
> +
> +		mutex_unlock(&mhi_chan->lock);
> +		break;
> +	case MHI_PKT_TYPE_RESET_CHAN_CMD:
> +		dev_dbg(dev, "Received RESET command for channel (%d)\n", ch_id);
> +		if (!ch_ring->started) {
> +			dev_err(dev, "Channel (%d) not opened\n", ch_id);
> +			return -ENODEV;
> +		}
> +
> +		mutex_lock(&mhi_chan->lock);
> +		/* Stop and reset the transfer ring */
> +		mhi_ep_ring_reset(mhi_cntrl, ch_ring);
> +
> +		/* Send channel disconnect status to client driver */
> +		result.transaction_status = -ENOTCONN;
> +		result.bytes_xferd = 0;
> +		mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> +
> +		/* Set channel state to DISABLED */
> +		mhi_chan->state = MHI_CH_STATE_DISABLED;
> +		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[ch_id].chcfg);
> +		tmp &= ~CHAN_CTX_CHSTATE_MASK;
> +		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_DISABLED);
> +		mhi_cntrl->ch_ctx_cache[ch_id].chcfg = cpu_to_le32(tmp);
> +
> +		ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
> +		if (ret) {
> +			dev_err(dev, "Error sending command completion event (%d)\n",
> +				MHI_EV_CC_SUCCESS);
> +			goto err_unlock;
> +		}
> +
> +		mutex_unlock(&mhi_chan->lock);
> +		break;
> +	default:
> +		dev_err(dev, "Invalid command received: %d for channel (%d)\n",
> +			MHI_TRE_GET_CMD_TYPE(el), ch_id);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +
> +err_unlock:
> +	mutex_unlock(&mhi_chan->lock);
> +
> +	return ret;
> +}
> +
>   static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 21/25] bus: mhi: ep: Add support for reading from the host
  2022-02-12 18:21 ` [PATCH v3 21/25] bus: mhi: ep: Add support for reading from the host Manivannan Sadhasivam
@ 2022-02-15 22:40   ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:40 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Data transfer between the host and the endpoint device happens over the
> transfer ring associated with each bi-directional channel pair. The host
> defines the transfer ring by allocating memory for it. The read and write
> pointer addresses of the transfer ring are stored in the channel context.
> 
> Once the host places the elements in the transfer ring, it increments the
> write pointer and rings the channel doorbell. The device will receive the
> doorbell interrupt and will process the transfer ring elements.
> 
> This commit adds support for reading the transfer ring elements from
> the transfer ring up to the write pointer, incrementing the read pointer,
> and finally sending the completion event to the host through the
> corresponding event ring.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Indentation nits mentioned.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/main.c | 103 ++++++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h    |   9 ++++
>   2 files changed, 112 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 4c2ee517832c..b937c6cda9ba 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -336,6 +336,109 @@ int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element
>   	return ret;
>   }
>   
> +bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir)
> +{
> +	struct mhi_ep_chan *mhi_chan = (dir == DMA_FROM_DEVICE) ? mhi_dev->dl_chan :
> +								mhi_dev->ul_chan;
> +	struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
> +	struct mhi_ep_ring *ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
> +
> +	return !!(ring->rd_offset == ring->wr_offset);
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_queue_is_empty);
> +
> +static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
> +				struct mhi_ep_ring *ring,
> +				struct mhi_result *result,
> +				u32 len)
> +{
> +	struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
> +	size_t bytes_to_read, read_offset, write_offset;
> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> +	struct mhi_ep_ring_element *el;
> +	bool td_done = false;
> +	void *write_to_loc;
> +	u64 read_from_loc;
> +	u32 buf_remaining;
> +	int ret;
> +
> +	buf_remaining = len;
> +
> +	do {
> +		/* Don't process the transfer ring if the channel is not in RUNNING state */
> +		if (mhi_chan->state != MHI_CH_STATE_RUNNING)
> +			return -ENODEV;
> +
> +		el = &ring->ring_cache[ring->rd_offset];
> +
> +		/* Check if there is data pending to be read from previous read operation */
> +		if (mhi_chan->tre_bytes_left) {
> +			dev_dbg(dev, "TRE bytes remaining: %d\n", mhi_chan->tre_bytes_left);
> +			bytes_to_read = min(buf_remaining, mhi_chan->tre_bytes_left);
> +		} else {
> +			mhi_chan->tre_loc = MHI_EP_TRE_GET_PTR(el);
> +			mhi_chan->tre_size = MHI_EP_TRE_GET_LEN(el);
> +			mhi_chan->tre_bytes_left = mhi_chan->tre_size;
> +
> +			bytes_to_read = min(buf_remaining, mhi_chan->tre_size);
> +		}
> +
> +		read_offset = mhi_chan->tre_size - mhi_chan->tre_bytes_left;
> +		write_offset = len - buf_remaining;
> +		read_from_loc = mhi_chan->tre_loc + read_offset;
> +		write_to_loc = result->buf_addr + write_offset;
> +
> +		dev_dbg(dev, "Reading %zd bytes from channel (%d)\n", bytes_to_read, ring->ch_id);
> +		ret = mhi_cntrl->read_from_host(mhi_cntrl, read_from_loc, write_to_loc,
> +						bytes_to_read);
> +		if (ret < 0)
> +			return ret;
> +
> +		buf_remaining -= bytes_to_read;
> +		mhi_chan->tre_bytes_left -= bytes_to_read;
> +
> +		/*
> +		 * Once the TRE (Transfer Ring Element) of a TD (Transfer Descriptor) has been
> +		 * read completely:
> +		 *
> +		 * 1. Send completion event to the host based on the flags set in TRE.
> +		 * 2. Increment the local read offset of the transfer ring.

Your comments in this section explain some things that
I did not completely understand for a *very* long time.
The same flags are used in IPA, but are not as well
documented as they are for MHI.

> +		 */
> +		if (!mhi_chan->tre_bytes_left) {
> +			/*
> +			 * The host will split the data packet into multiple TREs if it can't fit
> +			 * the packet in a single TRE. In that case, CHAIN flag will be set by the
> +			 * host for all TREs except the last one.
> +			 */
> +			if (MHI_EP_TRE_GET_CHAIN(el)) {
> +				/*
> +				 * IEOB (Interrupt on End of Block) flag will be set by the host if
> +				 * it expects the completion event for all TREs of a TD.
> +				 */
> +				if (MHI_EP_TRE_GET_IEOB(el))
> +					mhi_ep_send_completion_event(mhi_cntrl,
> +					ring, MHI_EP_TRE_GET_LEN(el), MHI_EV_CC_EOB);

Check your indentation above.

> +			} else {
> +				/*
> +				 * IEOT (Interrupt on End of Transfer) flag will be set by the host
> +				 * for the last TRE of the TD and expects the completion event for
> +				 * the same.
> +				 */
> +				if (MHI_EP_TRE_GET_IEOT(el))
> +					mhi_ep_send_completion_event(mhi_cntrl,
> +					ring, MHI_EP_TRE_GET_LEN(el), MHI_EV_CC_EOT);

Indentation here too.

> +				td_done = true;
> +			}
> +
> +			mhi_ep_ring_inc_index(ring);
> +		}
> +
> +		result->bytes_xferd += bytes_to_read;
> +	} while (buf_remaining && !td_done);
> +
> +	return 0;
> +}
> +
>   static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 276d29fef465..aaf4b6942037 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -268,4 +268,13 @@ int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
>    */
>   void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
>   
> +/**
> + * mhi_ep_queue_is_empty - Determine whether the transfer queue is empty
> + * @mhi_dev: Device associated with the channels
> + * @dir: DMA direction for the channel
> + *
> + * Return: true if the queue is empty, false otherwise.
> + */
> +bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir);
> +
>   #endif


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 22/25] bus: mhi: ep: Add support for processing transfer ring
  2022-02-12 18:21 ` [PATCH v3 22/25] bus: mhi: ep: Add support for processing transfer ring Manivannan Sadhasivam
@ 2022-02-15 22:40   ` Alex Elder
  2022-02-22 10:50     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:40 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for processing the transfer ring from the host. For the
> transfer ring associated with a DL channel, the xfer callback will simply
> be invoked. For a UL channel, the ring elements will be read into a
> buffer up to the write pointer and then passed to the client driver using
> the xfer callback.
> 
> The client drivers should provide the callbacks for both UL and DL
> channels during registration.

I think you already checked and guaranteed that.

I have a question and suggestion below.  But it could
be considered an optimization that could be implemented
in the future, so:

Reviewed-by: Alex Elder <elder@linaro.org>

> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> ---
>   drivers/bus/mhi/ep/main.c | 49 +++++++++++++++++++++++++++++++++++++++
>   1 file changed, 49 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index b937c6cda9ba..baf383a4857b 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -439,6 +439,55 @@ static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
>   	return 0;
>   }
>   
> +int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el)
> +{
> +	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> +	struct mhi_result result = {};
> +	u32 len = MHI_EP_DEFAULT_MTU;
> +	struct mhi_ep_chan *mhi_chan;
> +	int ret;
> +
> +	mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
> +
> +	/*
> +	 * Bail out if transfer callback is not registered for the channel.
> +	 * This is most likely due to the client driver not loaded at this point.
> +	 */
> +	if (!mhi_chan->xfer_cb) {
> +		dev_err(&mhi_chan->mhi_dev->dev, "Client driver not available\n");
> +		return -ENODEV;
> +	}
> +
> +	if (ring->ch_id % 2) {
> +		/* DL channel */
> +		result.dir = mhi_chan->dir;
> +		mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> +	} else {
> +		/* UL channel */
> +		do {
> +			result.buf_addr = kzalloc(len, GFP_KERNEL);

So you allocate an 8KB buffer into which you copy
received data, then pass that to the ->xfer_cb()
function.  Then you free that buffer.  Repeatedly.

Two questions about this:
- This suggests that after copying the data in, the
   ->xfer_cb() function will copy it again, is that
   correct?
- If that is correct, why not just reuse the same 8KB
   buffer, allocated once outside the loop?

It might also be nice to consider whether you could
allocate the buffer here and have the ->xfer_cb()
function be responsible for freeing it (and ideally,
pass it along rather than copying it again).

> +			if (!result.buf_addr)
> +				return -ENOMEM;
> +
> +			ret = mhi_ep_read_channel(mhi_cntrl, ring, &result, len);
> +			if (ret < 0) {
> +				dev_err(&mhi_chan->mhi_dev->dev, "Failed to read channel\n");
> +				kfree(result.buf_addr);
> +				return ret;
> +			}
> +
> +			result.dir = mhi_chan->dir;
> +			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> +			kfree(result.buf_addr);
> +			result.bytes_xferd = 0;
> +
> +			/* Read until the ring becomes empty */
> +		} while (!mhi_ep_queue_is_empty(mhi_chan->mhi_dev, DMA_TO_DEVICE));
> +	}
> +
> +	return 0;
> +}
> +
>   static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 23/25] bus: mhi: ep: Add support for queueing SKBs to the host
  2022-02-12 18:21 ` [PATCH v3 23/25] bus: mhi: ep: Add support for queueing SKBs to the host Manivannan Sadhasivam
@ 2022-02-15 22:40   ` Alex Elder
  2022-02-22 14:38     ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:40 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for queueing SKBs to the host over the transfer ring of the
> relevant channel. The mhi_ep_queue_skb() API will be used by client
> networking drivers to queue SKBs to the host over the MHI bus.
> 
> The host will add ring elements to the transfer ring periodically for
> the device and the device will write SKBs to the ring elements. If a
> single SKB doesn't fit in a ring element (TRE), it will be placed in
> multiple ring elements and the overflow event will be sent for all ring
> elements except the last one. For the last ring element, the EOT event
> will be sent indicating the packet boundary.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

I'm a little confused by this, so maybe you can provide
a better explanation somehow.

					-Alex

> ---
>   drivers/bus/mhi/ep/main.c | 102 ++++++++++++++++++++++++++++++++++++++
>   include/linux/mhi_ep.h    |  13 +++++
>   2 files changed, 115 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index baf383a4857b..e4186b012257 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -488,6 +488,108 @@ int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element
>   	return 0;
>   }
>   
> +int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir,
> +		     struct sk_buff *skb, size_t len, enum mhi_flags mflags)

Why are both skb and len supplied?  Will an skb be supplied
without wanting to send all of it?  Must len be less than
skb->len?  I'm a little confused about the interface.

Also, the data direction is *out*, right?  You'll never
be queueing a "receive" SKB?

> +{
> +	struct mhi_ep_chan *mhi_chan = (dir == DMA_FROM_DEVICE) ? mhi_dev->dl_chan :
> +								mhi_dev->ul_chan;
> +	struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
> +	struct device *dev = &mhi_chan->mhi_dev->dev;
> +	struct mhi_ep_ring_element *el;
> +	struct mhi_ep_ring *ring;
> +	size_t bytes_to_write;
> +	enum mhi_ev_ccs code;
> +	void *read_from_loc;
> +	u32 buf_remaining;
> +	u64 write_to_loc;
> +	u32 tre_len;
> +	int ret = 0;
> +
> +	if (dir == DMA_TO_DEVICE)
> +		return -EINVAL;

Can't you just preclude this from happening, or
know it won't happen by inspection?

> +
> +	buf_remaining = len;
> +	ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
> +
> +	mutex_lock(&mhi_chan->lock);
> +
> +	do {
> +		/* Don't process the transfer ring if the channel is not in RUNNING state */
> +		if (mhi_chan->state != MHI_CH_STATE_RUNNING) {
> +			dev_err(dev, "Channel not available\n");
> +			ret = -ENODEV;
> +			goto err_exit;
> +		}
> +

It would be nice if the caller could know whether there
was enough room *before* you start transferring things.
It's probably a lot of work to get to that point though.

> +		if (mhi_ep_queue_is_empty(mhi_dev, dir)) {
> +			dev_err(dev, "TRE not available!\n");
> +			ret = -EINVAL;
> +			goto err_exit;
> +		}
> +
> +		el = &ring->ring_cache[ring->rd_offset];
> +		tre_len = MHI_EP_TRE_GET_LEN(el);
> +		if (skb->len > tre_len) {
> +			dev_err(dev, "Buffer size (%d) is too large for TRE (%d)!\n",
> +				skb->len, tre_len);

This means the receive buffer must be big enough to hold
any incoming SKB.  This is *without* checking for the
CHAIN flag in the TRE, so what you describe in the
patch description seems not to be true.  I.e., multiple
TREs in a TRD will *not* be consumed if the SKB data
requires more than what's left in the current TRE.

But you have some other code below, so it's likely I'm
just misunderstanding this.

> +			ret = -ENOMEM;
> +			goto err_exit;
> +		}
> +
> +		bytes_to_write = min(buf_remaining, tre_len);
> +		read_from_loc = skb->data;
> +		write_to_loc = MHI_EP_TRE_GET_PTR(el);
> +
> +		ret = mhi_cntrl->write_to_host(mhi_cntrl, read_from_loc, write_to_loc,
> +					       bytes_to_write);
> +		if (ret < 0)
> +			goto err_exit;
> +
> +		buf_remaining -= bytes_to_write;
> +		/*
> +		 * For all TREs queued by the host for DL channel, only the EOT flag will be set.
> +		 * If the packet doesn't fit into a single TRE, send the OVERFLOW event to
> +		 * the host so that the host can adjust the packet boundary to next TREs. Else send
> +		 * the EOT event to the host indicating the packet boundary.
> +		 */
> +		if (buf_remaining)
> +			code = MHI_EV_CC_OVERFLOW;
> +		else
> +			code = MHI_EV_CC_EOT;
> +
> +		ret = mhi_ep_send_completion_event(mhi_cntrl, ring, bytes_to_write, code);
> +		if (ret) {
> +			dev_err(dev, "Error sending completion event\n");
> +			goto err_exit;
> +		}
> +
> +		mhi_ep_ring_inc_index(ring);
> +	} while (buf_remaining);
> +
> +	/*
> +	 * During high network traffic, sometimes the DL doorbell interrupt from the host is missed
> +	 * by the endpoint. So manually check for the write pointer update here so that we don't run
> +	 * out of buffer due to missing interrupts.
> +	 */
> +	if (ring->rd_offset + 1 == ring->wr_offset) {
> +		ret = mhi_ep_update_wr_offset(ring);
> +		if (ret) {
> +			dev_err(dev, "Error updating write pointer\n");
> +			goto err_exit;
> +		}
> +	}
> +
> +	mutex_unlock(&mhi_chan->lock);
> +
> +	return 0;
> +
> +err_exit:
> +	mutex_unlock(&mhi_chan->lock);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_queue_skb);
> +
>   static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
>   {
>   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index aaf4b6942037..75cfbf0c6fb0 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -277,4 +277,17 @@ void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
>    */
>   bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir);
>   
> +/**
> + * mhi_ep_queue_skb - Send SKBs to host over MHI Endpoint
> + * @mhi_dev: Device associated with the channels
> + * @dir: DMA direction for the channel
> + * @skb: Buffer for holding SKBs
> + * @len: Buffer length
> + * @mflags: MHI Endpoint transfer flags used for the transfer
> + *
> + * Return: 0 if the SKBs has been sent successfully, a negative error code otherwise.
> + */
> +int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir,
> +		     struct sk_buff *skb, size_t len, enum mhi_flags mflags);
> +
>   #endif


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 24/25] bus: mhi: ep: Add support for suspending and resuming channels
  2022-02-12 18:21 ` [PATCH v3 24/25] bus: mhi: ep: Add support for suspending and resuming channels Manivannan Sadhasivam
@ 2022-02-15 22:40   ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:40 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add support for suspending and resuming the channels in MHI endpoint stack.
> The channels will be moved to the suspended state during M3 state
> transition and will be resumed during M0 transition.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Looks good.

Reviewed-by: Alex Elder <elder@linaro.org>

> ---
>   drivers/bus/mhi/ep/internal.h |  2 ++
>   drivers/bus/mhi/ep/main.c     | 58 +++++++++++++++++++++++++++++++++++
>   drivers/bus/mhi/ep/sm.c       |  4 +++
>   3 files changed, 64 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index 8654af7caf40..e23d2fd04282 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -242,6 +242,8 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
>   int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
>   int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
>   void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_resume_channels(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_suspend_channels(struct mhi_ep_cntrl *mhi_cntrl);
>   
>   /* MHI EP memory management functions */
>   int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index e4186b012257..315409705b91 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -1106,6 +1106,64 @@ void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl)
>   }
>   EXPORT_SYMBOL_GPL(mhi_ep_power_down);
>   
> +void mhi_ep_suspend_channels(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct mhi_ep_chan *mhi_chan;
> +	u32 tmp;
> +	int i;
> +
> +	for (i = 0; i < mhi_cntrl->max_chan; i++) {
> +		mhi_chan = &mhi_cntrl->mhi_chan[i];
> +
> +		if (!mhi_chan->mhi_dev)
> +			continue;
> +
> +		mutex_lock(&mhi_chan->lock);
> +		/* Skip if the channel is not currently running */
> +		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[i].chcfg);
> +		if (FIELD_GET(CHAN_CTX_CHSTATE_MASK, tmp) != MHI_CH_STATE_RUNNING) {
> +			mutex_unlock(&mhi_chan->lock);
> +			continue;
> +		}
> +
> +		dev_dbg(&mhi_chan->mhi_dev->dev, "Suspending channel\n");
> +		/* Set channel state to SUSPENDED */
> +		tmp &= ~CHAN_CTX_CHSTATE_MASK;
> +		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_SUSPENDED);

Somebody really needs to write a FIELD_UPDATE() macro to
do this read/modify/write pattern.
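A FIELD_UPDATE() of the kind suggested could be sketched like so.
FIELD_UPDATE() does not exist in today's kernel, and the FIELD_GET()/
FIELD_PREP() stand-ins and mask value below are simplified, illustrative
versions of the <linux/bitfield.h> helpers:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the <linux/bitfield.h> helpers: the divisor/
 * multiplier is the mask's lowest set bit, which shifts the field into
 * and out of position. */
#define FIELD_LSB(mask)       ((mask) & ~((mask) << 1))
#define FIELD_GET(mask, reg)  (((reg) & (mask)) / FIELD_LSB(mask))
#define FIELD_PREP(mask, val) (((val) * FIELD_LSB(mask)) & (mask))

/* The suggested macro: read/modify/write of one field in one expression */
#define FIELD_UPDATE(mask, reg, val) \
	(((reg) & ~(mask)) | FIELD_PREP(mask, val))

#define CHAN_CTX_CHSTATE_MASK 0x000000FFu	/* illustrative mask */

static uint32_t set_ch_state(uint32_t chcfg, uint32_t state)
{
	return FIELD_UPDATE(CHAN_CTX_CHSTATE_MASK, chcfg, state);
}
```

With such a helper, the quoted clear-then-prep sequence would collapse to
a single `tmp = FIELD_UPDATE(CHAN_CTX_CHSTATE_MASK, tmp, state);`.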

> +		mhi_cntrl->ch_ctx_cache[i].chcfg = cpu_to_le32(tmp);
> +		mutex_unlock(&mhi_chan->lock);
> +	}
> +}
> +
> +void mhi_ep_resume_channels(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> +	struct mhi_ep_chan *mhi_chan;
> +	u32 tmp;
> +	int i;
> +
> +	for (i = 0; i < mhi_cntrl->max_chan; i++) {
> +		mhi_chan = &mhi_cntrl->mhi_chan[i];
> +
> +		if (!mhi_chan->mhi_dev)
> +			continue;
> +
> +		mutex_lock(&mhi_chan->lock);
> +		/* Skip if the channel is not currently suspended */
> +		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[i].chcfg);
> +		if (FIELD_GET(CHAN_CTX_CHSTATE_MASK, tmp) != MHI_CH_STATE_SUSPENDED) {
> +			mutex_unlock(&mhi_chan->lock);
> +			continue;
> +		}
> +
> +		dev_dbg(&mhi_chan->mhi_dev->dev, "Resuming channel\n");
> +		/* Set channel state to RUNNING */
> +		tmp &= ~CHAN_CTX_CHSTATE_MASK;
> +		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_RUNNING);
> +		mhi_cntrl->ch_ctx_cache[i].chcfg = cpu_to_le32(tmp);
> +		mutex_unlock(&mhi_chan->lock);
> +	}
> +}
> +
>   static void mhi_ep_release_device(struct device *dev)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
> index 9a75ecfe1adf..e24ba2d85e13 100644
> --- a/drivers/bus/mhi/ep/sm.c
> +++ b/drivers/bus/mhi/ep/sm.c
> @@ -88,8 +88,11 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
>   	enum mhi_state old_state;
>   	int ret;
>   
> +	/* If MHI is in M3, resume suspended channels */
>   	spin_lock_bh(&mhi_cntrl->state_lock);
>   	old_state = mhi_cntrl->mhi_state;
> +	if (old_state == MHI_STATE_M3)
> +		mhi_ep_resume_channels(mhi_cntrl);
>   
>   	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
>   	if (ret) {
> @@ -135,6 +138,7 @@ int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
>   	}
>   
>   	spin_unlock_bh(&mhi_cntrl->state_lock);
> +	mhi_ep_suspend_channels(mhi_cntrl);
>   
>   	/* Signal host that the device moved to M3 */
>   	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);



* Re: [PATCH v3 25/25] bus: mhi: ep: Add uevent support for module autoloading
  2022-02-12 18:21 ` [PATCH v3 25/25] bus: mhi: ep: Add uevent support for module autoloading Manivannan Sadhasivam
@ 2022-02-15 22:40   ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-15 22:40 UTC (permalink / raw)
  To: Manivannan Sadhasivam, mhi
  Cc: quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> Add uevent support to MHI endpoint bus so that the client drivers can be
> autoloaded by udev when the MHI endpoint devices gets created. The client
> drivers are expected to provide MODULE_DEVICE_TABLE with the MHI id_table
> struct so that the alias can be exported.
> 
> The MHI endpoint reused the mhi_device_id structure of the MHI bus.
> 
> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Looks OK to me.

Reviewed-by: Alex Elder <elder@linaro.org>


Next time I review this, I think I'll review the code
in its entirety (i.e., with the entire series applied
rather than in steps).  At that point I'm sure it will
be nearly perfect.

					-Alex

> ---
>   drivers/bus/mhi/ep/main.c       |  9 +++++++++
>   include/linux/mod_devicetable.h |  2 ++
>   scripts/mod/file2alias.c        | 10 ++++++++++
>   3 files changed, 21 insertions(+)
> 
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index 315409705b91..8889382ee8d0 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -1546,6 +1546,14 @@ void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv)
>   }
>   EXPORT_SYMBOL_GPL(mhi_ep_driver_unregister);
>   
> +static int mhi_ep_uevent(struct device *dev, struct kobj_uevent_env *env)
> +{
> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +
> +	return add_uevent_var(env, "MODALIAS=" MHI_EP_DEVICE_MODALIAS_FMT,
> +					mhi_dev->name);
> +}
> +
>   static int mhi_ep_match(struct device *dev, struct device_driver *drv)
>   {
>   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> @@ -1572,6 +1580,7 @@ struct bus_type mhi_ep_bus_type = {
>   	.name = "mhi_ep",
>   	.dev_name = "mhi_ep",
>   	.match = mhi_ep_match,
> +	.uevent = mhi_ep_uevent,
>   };
>   
>   static int __init mhi_ep_init(void)
> diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
> index 4bb71979a8fd..0cff19bd72bf 100644
> --- a/include/linux/mod_devicetable.h
> +++ b/include/linux/mod_devicetable.h
> @@ -835,6 +835,8 @@ struct wmi_device_id {
>   #define MHI_DEVICE_MODALIAS_FMT "mhi:%s"
>   #define MHI_NAME_SIZE 32
>   
> +#define MHI_EP_DEVICE_MODALIAS_FMT "mhi_ep:%s"
> +
>   /**
>    * struct mhi_device_id - MHI device identification
>    * @chan: MHI channel name
> diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
> index 5258247d78ac..d9d6a31446ea 100644
> --- a/scripts/mod/file2alias.c
> +++ b/scripts/mod/file2alias.c
> @@ -1391,6 +1391,15 @@ static int do_mhi_entry(const char *filename, void *symval, char *alias)
>   	return 1;
>   }
>   
> +/* Looks like: mhi_ep:S */
> +static int do_mhi_ep_entry(const char *filename, void *symval, char *alias)
> +{
> +	DEF_FIELD_ADDR(symval, mhi_device_id, chan);
> +	sprintf(alias, MHI_EP_DEVICE_MODALIAS_FMT, *chan);
> +
> +	return 1;
> +}
> +
>   /* Looks like: ishtp:{guid} */
>   static int do_ishtp_entry(const char *filename, void *symval, char *alias)
>   {
> @@ -1519,6 +1528,7 @@ static const struct devtable devtable[] = {
>   	{"tee", SIZE_tee_client_device_id, do_tee_entry},
>   	{"wmi", SIZE_wmi_device_id, do_wmi_entry},
>   	{"mhi", SIZE_mhi_device_id, do_mhi_entry},
> +	{"mhi_ep", SIZE_mhi_device_id, do_mhi_ep_entry},
>   	{"auxiliary", SIZE_auxiliary_device_id, do_auxiliary_entry},
>   	{"ssam", SIZE_ssam_device_id, do_ssam_entry},
>   	{"dfl", SIZE_dfl_device_id, do_dfl_entry},



* Re: [PATCH v3 02/25] bus: mhi: Fix MHI DMA structure endianness
  2022-02-15 20:02   ` Alex Elder
@ 2022-02-16  7:04     ` Manivannan Sadhasivam
  2022-02-16 14:29       ` Alex Elder
  0 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-16  7:04 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, Paul Davey, stable

On Tue, Feb 15, 2022 at 02:02:01PM -0600, Alex Elder wrote:
> On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> > From: Paul Davey <paul.davey@alliedtelesis.co.nz>
> > 
> > The MHI driver does not work on big endian architectures.  The
> > controller never transitions into mission mode.  This appears to be due
> > to the modem device expecting the various contexts and transfer rings to
> > have fields in little endian order in memory, but the driver constructs
> > them in native endianness.
> 
> Yes, this is true.
> 
> > Fix MHI event, channel and command contexts and TRE handling macros to
> > use explicit conversion to little endian.  Mark fields in relevant
> > structures as little endian to document this requirement.
> 
> Basically every field in the external interface whose size
> is greater than one byte must have its endianness noted.
> From what I can tell, you did that for all of the exposed
> structures defined in "drivers/bus/mhi/core/internal.h",
> which is good.
> 
> *However* some of the *constants* were defined the wrong way.
> 
> Basically, all of the constant values should be expressed
> in host byte order.  And any needed byte swapping should be
> done at the time the value is read from memory--immediately.
> That way, we isolate that activity to the one place we
> interface with the possibly "foreign" format, and from then
> on, everything may be assumed to be in natural (CPU) byte order.
> 

Well, I did think about it, but I convinced myself that doing the
conversion in code rather than in the defines makes the code look messy.
In some places it also just looks complicated. More below:

> I will point out what I mean, below.
> 
> > Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
> > Fixes: 6cd330ae76ff ("bus: mhi: core: Add support for ringing channel/event ring doorbells")
> > Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > ---
> >   drivers/bus/mhi/core/debugfs.c  |  26 +++----
> >   drivers/bus/mhi/core/init.c     |  36 +++++-----
> >   drivers/bus/mhi/core/internal.h | 119 ++++++++++++++++----------------
> >   drivers/bus/mhi/core/main.c     |  22 +++---
> >   drivers/bus/mhi/core/pm.c       |   4 +-
> >   5 files changed, 104 insertions(+), 103 deletions(-)
> > 

[...]

> > @@ -277,57 +277,58 @@ enum mhi_cmd_type {
> >   /* No operation command */
> >   #define MHI_TRE_CMD_NOOP_PTR (0)
> >   #define MHI_TRE_CMD_NOOP_DWORD0 (0)
> > -#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
> > +#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
> 
> This just looks wrong to me.  The original definition
> should be fine, but then where it's *used* it should
> be passed to cpu_to_le32().  I realize this might be
> a special case, where these "DWORD" values are getting
> written out to command ring elements, but even so, the
> byte swapping that's happening is important and should
> be made obvious in the code using these symbols.
> 
> This comment applies to many more similar definitions
> below.  I don't know; maybe it looks cumbersome if
> it's done in the code, but I still think it's better to
> consistenly define symbols like this in CPU byte order
> and do the conversions explicitly only when the values
> are read/written to "foreign" (external interface)
> memory.
> 

Defines like MHI_TRE_GET_CMD_CHID make the conversion look messy
to me. In this macro we first extract the DWORD from the TRE and then do
the shifting + masking to get the CHID.

So without splitting the DWORD extraction and the GET_CHID macros into
separate steps, we can't just do the conversion in code. We may end up
doing the conversion in the defines just for these special cases, but
that would break the uniformity.

So IMO it looks better if we trust the defines to do the conversion itself.

Please let me know if you think the other way.

Thanks,
Mani
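To illustrate the two styles being debated, here is a standalone sketch.
The my_* helpers are portable stand-ins for the kernel's cpu_to_le32()/
le32_to_cpu(), and the constant values are illustrative:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for the kernel's cpu_to_le32(): stores the value in
 * little-endian byte order regardless of host endianness. */
static uint32_t my_cpu_to_le32(uint32_t v)
{
	uint8_t b[4] = { v & 0xFF, (v >> 8) & 0xFF, (v >> 16) & 0xFF, v >> 24 };
	uint32_t out;

	memcpy(&out, b, 4);
	return out;
}

static uint32_t my_le32_to_cpu(uint32_t v)
{
	uint8_t b[4];

	memcpy(b, &v, 4);
	return (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
	       ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
}

#define MHI_CMD_NOP 1

/* Style A (what the patch does): the define carries the conversion */
#define MHI_TRE_CMD_NOOP_DWORD1 (my_cpu_to_le32(MHI_CMD_NOP << 16))

/* Style B (what the review prefers): a host-order constant ... */
#define MHI_TRE_CMD_NOOP_DWORD1_CPU (MHI_CMD_NOP << 16)

/* ... with the swap made explicit at the point the TRE is written */
static uint32_t build_noop_dword1(void)
{
	return my_cpu_to_le32(MHI_TRE_CMD_NOOP_DWORD1_CPU);
}
```

Both styles produce identical bytes on the wire; the disagreement is only
about where the swap is visible to the reader.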

> Outside of this issue, the remainder of the patch looks
> OK to me.
> 
> 					-Alex
> 
> >   /* Channel reset command */
> >   #define MHI_TRE_CMD_RESET_PTR (0)
> >   #define MHI_TRE_CMD_RESET_DWORD0 (0)
> > -#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
> > -					(MHI_CMD_RESET_CHAN << 16))
> > +#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > +					(MHI_CMD_RESET_CHAN << 16)))
> >   /* Channel stop command */
> >   #define MHI_TRE_CMD_STOP_PTR (0)
> >   #define MHI_TRE_CMD_STOP_DWORD0 (0)
> > -#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
> > -				       (MHI_CMD_STOP_CHAN << 16))
> > +#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > +				       (MHI_CMD_STOP_CHAN << 16)))
> >   /* Channel start command */
> >   #define MHI_TRE_CMD_START_PTR (0)
> >   #define MHI_TRE_CMD_START_DWORD0 (0)
> > -#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
> > -					(MHI_CMD_START_CHAN << 16))
> > +#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > +					(MHI_CMD_START_CHAN << 16)))
> > -#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
> > -#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
> > +#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
> > +#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> >   /* Event descriptor macros */
> > -#define MHI_TRE_EV_PTR(ptr) (ptr)
> > -#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
> > -#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
> > -#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
> > -#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
> > -#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
> > -#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
> > -#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
> > -#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
> > -#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
> > -#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
> > +#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
> > +#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
> > +#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
> > +#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
> > +#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> > +#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> > +#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
> > +#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
> > +#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> > +#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> > +#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
> >   /* Transfer descriptor macros */
> > -#define MHI_TRE_DATA_PTR(ptr) (ptr)
> > -#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
> > -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
> > -	| (ieot << 9) | (ieob << 8) | chain)
> > +#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
> > +#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
> > +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
> > +	| (ieot << 9) | (ieob << 8) | chain))
> >   /* RSC transfer descriptor macros */
> > -#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
> > -#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
> > -#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
> > +#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
> > +#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
> > +#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
> >   enum mhi_pkt_type {
> >   	MHI_PKT_TYPE_INVALID = 0x0,
> > @@ -500,7 +501,7 @@ struct state_transition {
> >   struct mhi_ring {
> >   	dma_addr_t dma_handle;
> >   	dma_addr_t iommu_base;
> > -	u64 *ctxt_wp; /* point to ctxt wp */
> > +	__le64 *ctxt_wp; /* point to ctxt wp */
> >   	void *pre_aligned;
> >   	void *base;
> >   	void *rp;
> > diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
> > index ffde617f93a3..85f4f7c8d7c6 100644
> > --- a/drivers/bus/mhi/core/main.c
> > +++ b/drivers/bus/mhi/core/main.c
> > @@ -114,7 +114,7 @@ void mhi_ring_er_db(struct mhi_event *mhi_event)
> >   	struct mhi_ring *ring = &mhi_event->ring;
> >   	mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
> > -				     ring->db_addr, *ring->ctxt_wp);
> > +				     ring->db_addr, le64_to_cpu(*ring->ctxt_wp));
> >   }
> >   void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
> > @@ -123,7 +123,7 @@ void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
> >   	struct mhi_ring *ring = &mhi_cmd->ring;
> >   	db = ring->iommu_base + (ring->wp - ring->base);
> > -	*ring->ctxt_wp = db;
> > +	*ring->ctxt_wp = cpu_to_le64(db);
> >   	mhi_write_db(mhi_cntrl, ring->db_addr, db);
> >   }
> > @@ -140,7 +140,7 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
> >   	 * before letting h/w know there is new element to fetch.
> >   	 */
> >   	dma_wmb();
> > -	*ring->ctxt_wp = db;
> > +	*ring->ctxt_wp = cpu_to_le64(db);
> >   	mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
> >   				    ring->db_addr, db);
> > @@ -432,7 +432,7 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
> >   	struct mhi_event_ctxt *er_ctxt =
> >   		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
> >   	struct mhi_ring *ev_ring = &mhi_event->ring;
> > -	dma_addr_t ptr = er_ctxt->rp;
> > +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
> >   	void *dev_rp;
> >   	if (!is_valid_ring_ptr(ev_ring, ptr)) {
> > @@ -537,14 +537,14 @@ static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
> >   	/* Update the WP */
> >   	ring->wp += ring->el_size;
> > -	ctxt_wp = *ring->ctxt_wp + ring->el_size;
> > +	ctxt_wp = le64_to_cpu(*ring->ctxt_wp) + ring->el_size;
> >   	if (ring->wp >= (ring->base + ring->len)) {
> >   		ring->wp = ring->base;
> >   		ctxt_wp = ring->iommu_base;
> >   	}
> > -	*ring->ctxt_wp = ctxt_wp;
> > +	*ring->ctxt_wp = cpu_to_le64(ctxt_wp);
> >   	/* Update the RP */
> >   	ring->rp += ring->el_size;
> > @@ -801,7 +801,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
> >   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> >   	u32 chan;
> >   	int count = 0;
> > -	dma_addr_t ptr = er_ctxt->rp;
> > +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
> >   	/*
> >   	 * This is a quick check to avoid unnecessary event processing
> > @@ -940,7 +940,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
> >   		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
> >   		local_rp = ev_ring->rp;
> > -		ptr = er_ctxt->rp;
> > +		ptr = le64_to_cpu(er_ctxt->rp);
> >   		if (!is_valid_ring_ptr(ev_ring, ptr)) {
> >   			dev_err(&mhi_cntrl->mhi_dev->dev,
> >   				"Event ring rp points outside of the event ring\n");
> > @@ -970,7 +970,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
> >   	int count = 0;
> >   	u32 chan;
> >   	struct mhi_chan *mhi_chan;
> > -	dma_addr_t ptr = er_ctxt->rp;
> > +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
> >   	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
> >   		return -EIO;
> > @@ -1011,7 +1011,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
> >   		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
> >   		local_rp = ev_ring->rp;
> > -		ptr = er_ctxt->rp;
> > +		ptr = le64_to_cpu(er_ctxt->rp);
> >   		if (!is_valid_ring_ptr(ev_ring, ptr)) {
> >   			dev_err(&mhi_cntrl->mhi_dev->dev,
> >   				"Event ring rp points outside of the event ring\n");
> > @@ -1533,7 +1533,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
> >   	/* mark all stale events related to channel as STALE event */
> >   	spin_lock_irqsave(&mhi_event->lock, flags);
> > -	ptr = er_ctxt->rp;
> > +	ptr = le64_to_cpu(er_ctxt->rp);
> >   	if (!is_valid_ring_ptr(ev_ring, ptr)) {
> >   		dev_err(&mhi_cntrl->mhi_dev->dev,
> >   			"Event ring rp points outside of the event ring\n");
> > diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
> > index 4aae0baea008..c35c5ddc7220 100644
> > --- a/drivers/bus/mhi/core/pm.c
> > +++ b/drivers/bus/mhi/core/pm.c
> > @@ -218,7 +218,7 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
> >   			continue;
> >   		ring->wp = ring->base + ring->len - ring->el_size;
> > -		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
> > +		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
> >   		/* Update all cores */
> >   		smp_wmb();
> > @@ -420,7 +420,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
> >   			continue;
> >   		ring->wp = ring->base + ring->len - ring->el_size;
> > -		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
> > +		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
> >   		/* Update to all cores */
> >   		smp_wmb();
> 


* Re: [PATCH v3 01/25] bus: mhi: Fix pm_state conversion to string
  2022-02-15 20:01   ` Alex Elder
@ 2022-02-16 11:33     ` Manivannan Sadhasivam
  2022-02-16 13:41       ` Alex Elder
  0 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-16 11:33 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, Paul Davey,
	Hemant Kumar, stable

On Tue, Feb 15, 2022 at 02:01:54PM -0600, Alex Elder wrote:
> On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> > From: Paul Davey <paul.davey@alliedtelesis.co.nz>
> > 
> > On big endian architectures the mhi debugfs files which report pm state
> > give "Invalid State" for all states.  This is caused by using
> > find_last_bit which takes an unsigned long* while the state is passed in
> > as an enum mhi_pm_state which will be of int size.
> 
> I think this would have fixed it too, but your fix is better.
> 
> 	int index = find_last_bit(&(unsigned long)state, 32);
> 
> > Fix by using __fls to pass the value of state instead of find_last_bit.
> > 
> > Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
> > Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
> > Reviewed-by: Manivannan Sadhasivam <mani@kernel.org>
> > Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > ---
> >   drivers/bus/mhi/core/init.c | 8 +++++---
> >   1 file changed, 5 insertions(+), 3 deletions(-)
> > 
> > diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
> > index 046f407dc5d6..af484b03558a 100644
> > --- a/drivers/bus/mhi/core/init.c
> > +++ b/drivers/bus/mhi/core/init.c
> > @@ -79,10 +79,12 @@ static const char * const mhi_pm_state_str[] = {
> >   const char *to_mhi_pm_state_str(enum mhi_pm_state state)
> 
> The mhi_pm_state enumerated type is an enumerated sequence, not
> a bit mask.  So knowing its last (most significant) set bit
> is not meaningful.  Or normally it shouldn't be.
> 
> If mhi_pm_state really were a bit mask, then its values should
> be defined that way, i.e.,
> 
> 	MHI_PM_STATE_DISABLE	= 1 << 0,
> 	MHI_PM_STATE_DISABLE	= 1 << 1,
> 	. . .
> 
> What's really going on is that the state value passed here
> *is* a bitmask, whose bit positions are those mhi_pm_state
> values.  So the state argument should have type u32.
> 

I agree with you. It should be u32.

> This is a *separate* bug/issue.  It could be fixed separately
> (before this patch), but I'd be OK with just explaining why
> this change would occur as part of this modified patch.
> 

It makes sense to do it in the same patch, as the change is minimal
and this patch will also get backported to stable.

> >   {
> > -	unsigned long pm_state = state;
> > -	int index = find_last_bit(&pm_state, 32);
> > +	int index;
> > -	if (index >= ARRAY_SIZE(mhi_pm_state_str))
> > +	if (state)
> > +		index = __fls(state);
> > +
> > +	if (!state || index >= ARRAY_SIZE(mhi_pm_state_str))
> >   		return "Invalid State";
> 
> Do this test and return first, and skip the additional
> check for "if (state)".
> 

We need to calculate the index for the second check, so I guess the
current code is fine.

Thanks,
Mani
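The shape of the fix as discussed above can be modelled in a standalone
sketch. The state names and my_fls_last() below are illustrative
stand-ins; the kernel uses __fls() and its own mhi_pm_state_str[] table:

```c
#include <assert.h>
#include <string.h>

/* Stand-in for the kernel's __fls(): index of the highest set bit.
 * Undefined for 0, so callers must reject 0 first, as the patch does. */
static int my_fls_last(unsigned int word)
{
	int i = 0;

	while (word >>= 1)
		i++;
	return i;
}

/* Illustrative names -- not the real MHI PM state table */
static const char * const pm_state_str[] = {
	"DISABLE", "POR", "M0", "M2", "M3_ENTER", "M3",
};

/* 'state' is a one-hot bitmask; its bit position indexes the name table */
static const char *to_pm_state_str(unsigned int state)
{
	int index = 0;

	if (state)
		index = my_fls_last(state);

	if (!state || index >= (int)(sizeof(pm_state_str) / sizeof(pm_state_str[0])))
		return "Invalid State";

	return pm_state_str[index];
}
```

Because only the highest set bit is used, a value with stray low bits
still maps to the same name as its top bit, matching the old
find_last_bit() semantics while working identically on big-endian hosts.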

> 					-Alex
> 
> >   	return mhi_pm_state_str[index];
> 


* Re: [PATCH v3 05/25] bus: mhi: Make mhi_state_str[] array static inline and move to common.h
  2022-02-15 20:02   ` Alex Elder
@ 2022-02-16 11:39     ` Manivannan Sadhasivam
  2022-02-16 14:30       ` Alex Elder
  0 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-16 11:39 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 02:02:21PM -0600, Alex Elder wrote:
> On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> > mhi_state_str[] array could be used by MHI endpoint stack also. So let's
> > make the array as "static inline function" and move it inside the
> > "common.h" header so that the endpoint stack could also make use of it.
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> 
> I like the use of a function to encapsulate this rather than
> using the array as before.
> 
> But I still don't like declaring this much static data in a static inline
> function in a header file.  Define it as a "real" function
> somewhere common and declare it here instead.
> 

The problem is we don't have a common C file in which to define this as
a function. Even if we added one, it would be overkill.

This pattern is commonly used throughout the kernel source.

> One more minor comment below.
> 
> 					-Alex
> 
> > ---
> >   drivers/bus/mhi/common.h       | 29 +++++++++++++++++++++++++----
> >   drivers/bus/mhi/host/boot.c    |  2 +-
> >   drivers/bus/mhi/host/debugfs.c |  6 +++---
> >   drivers/bus/mhi/host/init.c    | 12 ------------
> >   drivers/bus/mhi/host/main.c    |  8 ++++----
> >   drivers/bus/mhi/host/pm.c      | 14 +++++++-------
> >   6 files changed, 40 insertions(+), 31 deletions(-)
> > 
> > diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> > index 0d13a202d334..288e47168649 100644
> > --- a/drivers/bus/mhi/common.h
> > +++ b/drivers/bus/mhi/common.h
> > @@ -159,9 +159,30 @@ struct mhi_cmd_ctxt {
> >   	__le64 wp __packed __aligned(4);
> >   };
> > -extern const char * const mhi_state_str[MHI_STATE_MAX];
> > -#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
> > -				  !mhi_state_str[state]) ? \
> > -				"INVALID_STATE" : mhi_state_str[state])
> > +static inline const char * const mhi_state_str(enum mhi_state state)
> > +{
> > +	switch (state) {
> > +	case MHI_STATE_RESET:
> > +		return "RESET";
> > +	case MHI_STATE_READY:
> > +		return "READY";
> > +	case MHI_STATE_M0:
> > +		return "M0";
> > +	case MHI_STATE_M1:
> > +		return "M1";
> > +	case MHI_STATE_M2:
> > +		return"M2";
> 
> Add space after "return" here and in a few places below.
> 

Ack.

Thanks,
Mani

> > +	case MHI_STATE_M3:
> > +		return"M3";
> > +	case MHI_STATE_M3_FAST:
> > +		return"M3 FAST";
> > +	case MHI_STATE_BHI:
> > +		return"BHI";
> > +	case MHI_STATE_SYS_ERR:
> > +		return "SYS ERROR";
> > +	default:
> > +		return "Unknown state";
> > +	}
> > +};
> >   #endif /* _MHI_COMMON_H */
> > diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
> > index 74295d3cc662..93cb705614c6 100644
> > --- a/drivers/bus/mhi/host/boot.c
> > +++ b/drivers/bus/mhi/host/boot.c
> > @@ -68,7 +68,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
> >   	dev_dbg(dev, "Entered with pm_state:%s dev_state:%s ee:%s\n",
> >   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> > -		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> > +		mhi_state_str(mhi_cntrl->dev_state),
> >   		TO_MHI_EXEC_STR(mhi_cntrl->ee));
> >   	/*
> > diff --git a/drivers/bus/mhi/host/debugfs.c b/drivers/bus/mhi/host/debugfs.c
> > index d818586c229d..399d0db1f1eb 100644
> > --- a/drivers/bus/mhi/host/debugfs.c
> > +++ b/drivers/bus/mhi/host/debugfs.c
> > @@ -20,7 +20,7 @@ static int mhi_debugfs_states_show(struct seq_file *m, void *d)
> >   	seq_printf(m, "PM state: %s Device: %s MHI state: %s EE: %s wake: %s\n",
> >   		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
> >   		   mhi_is_active(mhi_cntrl) ? "Active" : "Inactive",
> > -		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> > +		   mhi_state_str(mhi_cntrl->dev_state),
> >   		   TO_MHI_EXEC_STR(mhi_cntrl->ee),
> >   		   mhi_cntrl->wake_set ? "true" : "false");
> > @@ -206,13 +206,13 @@ static int mhi_debugfs_regdump_show(struct seq_file *m, void *d)
> >   	seq_printf(m, "Host PM state: %s Device state: %s EE: %s\n",
> >   		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
> > -		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> > +		   mhi_state_str(mhi_cntrl->dev_state),
> >   		   TO_MHI_EXEC_STR(mhi_cntrl->ee));
> >   	state = mhi_get_mhi_state(mhi_cntrl);
> >   	ee = mhi_get_exec_env(mhi_cntrl);
> >   	seq_printf(m, "Device EE: %s state: %s\n", TO_MHI_EXEC_STR(ee),
> > -		   TO_MHI_STATE_STR(state));
> > +		   mhi_state_str(state));
> >   	for (i = 0; regs[i].name; i++) {
> >   		if (!regs[i].base)
> > diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
> > index 4bd62f32695d..0e301f3f305e 100644
> > --- a/drivers/bus/mhi/host/init.c
> > +++ b/drivers/bus/mhi/host/init.c
> > @@ -44,18 +44,6 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
> >   	[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
> >   };
> > -const char * const mhi_state_str[MHI_STATE_MAX] = {
> > -	[MHI_STATE_RESET] = "RESET",
> > -	[MHI_STATE_READY] = "READY",
> > -	[MHI_STATE_M0] = "M0",
> > -	[MHI_STATE_M1] = "M1",
> > -	[MHI_STATE_M2] = "M2",
> > -	[MHI_STATE_M3] = "M3",
> > -	[MHI_STATE_M3_FAST] = "M3 FAST",
> > -	[MHI_STATE_BHI] = "BHI",
> > -	[MHI_STATE_SYS_ERR] = "SYS ERROR",
> > -};
> > -
> >   const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
> >   	[MHI_CH_STATE_TYPE_RESET] = "RESET",
> >   	[MHI_CH_STATE_TYPE_STOP] = "STOP",
> > diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
> > index 85f4f7c8d7c6..e436c2993d97 100644
> > --- a/drivers/bus/mhi/host/main.c
> > +++ b/drivers/bus/mhi/host/main.c
> > @@ -479,8 +479,8 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
> >   	ee = mhi_get_exec_env(mhi_cntrl);
> >   	dev_dbg(dev, "local ee: %s state: %s device ee: %s state: %s\n",
> >   		TO_MHI_EXEC_STR(mhi_cntrl->ee),
> > -		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> > -		TO_MHI_EXEC_STR(ee), TO_MHI_STATE_STR(state));
> > +		mhi_state_str(mhi_cntrl->dev_state),
> > +		TO_MHI_EXEC_STR(ee), mhi_state_str(state));
> >   	if (state == MHI_STATE_SYS_ERR) {
> >   		dev_dbg(dev, "System error detected\n");
> > @@ -846,7 +846,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
> >   			new_state = MHI_TRE_GET_EV_STATE(local_rp);
> >   			dev_dbg(dev, "State change event to state: %s\n",
> > -				TO_MHI_STATE_STR(new_state));
> > +				mhi_state_str(new_state));
> >   			switch (new_state) {
> >   			case MHI_STATE_M0:
> > @@ -873,7 +873,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
> >   			}
> >   			default:
> >   				dev_err(dev, "Invalid state: %s\n",
> > -					TO_MHI_STATE_STR(new_state));
> > +					mhi_state_str(new_state));
> >   			}
> >   			break;
> > diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
> > index c35c5ddc7220..088ade0f3e0b 100644
> > --- a/drivers/bus/mhi/host/pm.c
> > +++ b/drivers/bus/mhi/host/pm.c
> > @@ -545,7 +545,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
> >   	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
> >   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> > -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
> > +		mhi_state_str(mhi_cntrl->dev_state));
> >   	mutex_unlock(&mhi_cntrl->pm_mutex);
> >   }
> > @@ -689,7 +689,7 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
> >   exit_sys_error_transition:
> >   	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
> >   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> > -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
> > +		mhi_state_str(mhi_cntrl->dev_state));
> >   	mutex_unlock(&mhi_cntrl->pm_mutex);
> >   }
> > @@ -864,7 +864,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
> >   	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
> >   		dev_err(dev,
> >   			"Did not enter M3 state, MHI state: %s, PM state: %s\n",
> > -			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> > +			mhi_state_str(mhi_cntrl->dev_state),
> >   			to_mhi_pm_state_str(mhi_cntrl->pm_state));
> >   		return -EIO;
> >   	}
> > @@ -890,7 +890,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
> >   	dev_dbg(dev, "Entered with PM state: %s, MHI state: %s\n",
> >   		to_mhi_pm_state_str(mhi_cntrl->pm_state),
> > -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
> > +		mhi_state_str(mhi_cntrl->dev_state));
> >   	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
> >   		return 0;
> > @@ -900,7 +900,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
> >   	if (mhi_get_mhi_state(mhi_cntrl) != MHI_STATE_M3) {
> >   		dev_warn(dev, "Resuming from non M3 state (%s)\n",
> > -			 TO_MHI_STATE_STR(mhi_get_mhi_state(mhi_cntrl)));
> > +			 mhi_state_str(mhi_get_mhi_state(mhi_cntrl)));
> >   		if (!force)
> >   			return -EINVAL;
> >   	}
> > @@ -937,7 +937,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
> >   	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
> >   		dev_err(dev,
> >   			"Did not enter M0 state, MHI state: %s, PM state: %s\n",
> > -			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
> > +			mhi_state_str(mhi_cntrl->dev_state),
> >   			to_mhi_pm_state_str(mhi_cntrl->pm_state));
> >   		return -EIO;
> >   	}
> > @@ -1088,7 +1088,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
> >   	state = mhi_get_mhi_state(mhi_cntrl);
> >   	dev_dbg(dev, "Attempting power on with EE: %s, state: %s\n",
> > -		TO_MHI_EXEC_STR(current_ee), TO_MHI_STATE_STR(state));
> > +		TO_MHI_EXEC_STR(current_ee), mhi_state_str(state));
> >   	if (state == MHI_STATE_SYS_ERR) {
> >   		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
> 

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 01/25] bus: mhi: Fix pm_state conversion to string
  2022-02-16 11:33     ` Manivannan Sadhasivam
@ 2022-02-16 13:41       ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-16 13:41 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, Paul Davey,
	Hemant Kumar, stable

On 2/16/22 5:33 AM, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 02:01:54PM -0600, Alex Elder wrote:
>> On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
>>> From: Paul Davey <paul.davey@alliedtelesis.co.nz>
>>>
>>> On big endian architectures the mhi debugfs files which report pm state
>>> give "Invalid State" for all states.  This is caused by using
>>> find_last_bit which takes an unsigned long* while the state is passed in
>>> as an enum mhi_pm_state which will be of int size.
>>
>> I think this would have fixed it too, but your fix is better.
>>
>> 	unsigned long tmp = state;
>> 	int index = find_last_bit(&tmp, 32);
>>
>>> Fix by using __fls to pass the value of state instead of find_last_bit.
>>>
>>> Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
>>> Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
>>> Reviewed-by: Manivannan Sadhasivam <mani@kernel.org>
>>> Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>
>>> Cc: stable@vger.kernel.org
>>> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>> ---
>>>    drivers/bus/mhi/core/init.c | 8 +++++---
>>>    1 file changed, 5 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
>>> index 046f407dc5d6..af484b03558a 100644
>>> --- a/drivers/bus/mhi/core/init.c
>>> +++ b/drivers/bus/mhi/core/init.c
>>> @@ -79,10 +79,12 @@ static const char * const mhi_pm_state_str[] = {
>>>    const char *to_mhi_pm_state_str(enum mhi_pm_state state)
>>
>> The mhi_pm_state enumerated type is an enumerated sequence, not
>> a bit mask.  So knowing what the last (most significant) set bit
>> is not meaningful.  Or normally it shouldn't be.
>>
>> If mhi_pm_state really were a bit mask, then its values should
>> be defined that way, i.e.,
>>
>> 	MHI_PM_STATE_DISABLE	= 1 << 0,
>> 	MHI_PM_STATE_DISABLE	= 1 << 1,
>> 	. . .
>>
>> What's really going on is that the state value passed here
>> *is* a bitmask, whose bit positions are those mhi_pm_state
>> values.  So the state argument should have type u32.
>>
> 
> I agree with you. It should be u32.
> 
>> This is a *separate* bug/issue.  It could be fixed separately
>> (before this patch), but I'd be OK with just explaining why
>> this change would occur as part of this modified patch.
>>
> 
> It makes sense to do it in the same patch, as the change is minimal
> and, moreover, this patch will also get backported to stable.

Sounds good to me.	-Alex

>>>    {
>>> -	unsigned long pm_state = state;
>>> -	int index = find_last_bit(&pm_state, 32);
>>> +	int index;
>>> -	if (index >= ARRAY_SIZE(mhi_pm_state_str))
>>> +	if (state)
>>> +		index = __fls(state);
>>> +
>>> +	if (!state || index >= ARRAY_SIZE(mhi_pm_state_str))
>>>    		return "Invalid State";
>>
>> Do this test and return first, and skip the additional
>> check for "if (state)".
>>
> 
> We need to calculate index for the second check, so I guess the current
> code is fine.
> 
> Thanks,
> Mani
> 
>> 					-Alex
>>
>>>    	return mhi_pm_state_str[index];
>>
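As a reference point for the exchange above, a user-space sketch of the
patched control flow (the state-name table here is abbreviated and
hypothetical, and a portable loop stands in for the kernel's __fls(), whose
result is undefined for 0; that is exactly why the "if (state)" guard is
needed before the index is computed):

```c
#include <assert.h>
#include <string.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Abbreviated, hypothetical name table indexed by bit position. */
static const char *const pm_state_str[] = {
	"DISABLE",	/* bit 0 */
	"POR",		/* bit 1 */
	"M0",		/* bit 2 */
};

/* Portable stand-in for __fls(): index of the most significant set bit.
 * Like __fls(), the result is meaningless for v == 0. */
static int last_set_bit(unsigned int v)
{
	int i = -1;

	while (v) {
		i++;
		v >>= 1;
	}
	return i;
}

static const char *to_pm_state_str(unsigned int state)
{
	int index = 0;

	if (state)
		index = last_set_bit(state);

	/* Reject state == 0 and bit positions beyond the table. */
	if (!state || index >= (int)ARRAY_SIZE(pm_state_str))
		return "Invalid State";

	return pm_state_str[index];
}
```

Because the state argument is a bitmask (one bit per pm_state), the helper
maps the highest set bit to a name, and everything else to "Invalid State".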


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 02/25] bus: mhi: Fix MHI DMA structure endianness
  2022-02-16  7:04     ` Manivannan Sadhasivam
@ 2022-02-16 14:29       ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-16 14:29 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, Paul Davey, stable

On 2/16/22 1:04 AM, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 02:02:01PM -0600, Alex Elder wrote:
>> On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
>>> From: Paul Davey <paul.davey@alliedtelesis.co.nz>
>>>
>>> The MHI driver does not work on big endian architectures.  The
>>> controller never transitions into mission mode.  This appears to be due
>>> to the modem device expecting the various contexts and transfer rings to
>>> have fields in little endian order in memory, but the driver constructs
>>> them in native endianness.
>>
>> Yes, this is true.
>>
>>> Fix MHI event, channel and command contexts and TRE handling macros to
>>> use explicit conversion to little endian.  Mark fields in relevant
>>> structures as little endian to document this requirement.
>>
>> Basically every field in the external interface whose size
>> is greater than one byte must have its endianness noted.
>>  From what I can tell, you did that for all of the exposed
>> structures defined in "drivers/bus/mhi/core/internal.h",
>> which is good.
>>
>> *However* some of the *constants* were defined the wrong way.
>>
>> Basically, all of the constant values should be expressed
>> in host byte order.  And any needed byte swapping should be
>> done at the time the value is read from memory--immediately.
>> That way, we isolate that activity to the one place we
>> interface with the possibly "foreign" format, and from then
>> on, everything may be assumed to be in natural (CPU) byte order.
>>
> 
> Well, I did think about it, but I convinced myself that doing the
> conversion in code rather than in defines makes the code look messy.
> Also in some places it just makes it look complicated. More below:

I thought this might be the case.

>> I will point out what I mean, below.
>>
>>> Fixes: a6e2e3522f29 ("bus: mhi: core: Add support for PM state transitions")
>>> Fixes: 6cd330ae76ff ("bus: mhi: core: Add support for ringing channel/event ring doorbells")
>>> Signed-off-by: Paul Davey <paul.davey@alliedtelesis.co.nz>
>>> Cc: stable@vger.kernel.org
>>> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>> ---
>>>    drivers/bus/mhi/core/debugfs.c  |  26 +++----
>>>    drivers/bus/mhi/core/init.c     |  36 +++++-----
>>>    drivers/bus/mhi/core/internal.h | 119 ++++++++++++++++----------------
>>>    drivers/bus/mhi/core/main.c     |  22 +++---
>>>    drivers/bus/mhi/core/pm.c       |   4 +-
>>>    5 files changed, 104 insertions(+), 103 deletions(-)
>>>
> 
> [...]
> 
>>> @@ -277,57 +277,58 @@ enum mhi_cmd_type {
>>>    /* No operation command */
>>>    #define MHI_TRE_CMD_NOOP_PTR (0)
>>>    #define MHI_TRE_CMD_NOOP_DWORD0 (0)
>>> -#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
>>> +#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
>>
>> This just looks wrong to me.  The original definition
>> should be fine, but then where it's *used* it should
>> be passed to cpu_to_le32().  I realize this might be
>> a special case, where these "DWORD" values are getting
>> written out to command ring elements, but even so, the
>> byte swapping that's happening is important and should
>> be made obvious in the code using these symbols.
>>
>> This comment applies to many more similar definitions
>> below.  I don't know; maybe it looks cumbersome if
>> it's done in the code, but I still think it's better to
>> consistenly define symbols like this in CPU byte order
>> and do the conversions explicitly only when the values
>> are read/written to "foreign" (external interface)
>> memory.
>>
> 
> Defines like MHI_TRE_GET_CMD_CHID are making the conversion look messy
> to me. In these we first extract the DWORD from the TRE and then do
> shifting + masking to get the CHID.

I didn't say so, but I don't really like those defines either.
I personally would rather see the field values extracted in open
code instead, because they're actually pretty
simple operations.  But I understand, sometimes things just
"look complicated" if you do them certain ways (even if simple).

I did it in a certain way in the IPA code and I find that
preferable to the use of the "DWORD" definitions you're
using.  I also stand by my belief that it's preferable to
not hide the byte swaps in macro definitions.

You use this for reading/writing the command/transfer/event
ring elements (only) though, and you do that consistently.

> So without splitting the DWORD extraction and the GET_CHID macros,
> we can't just do the conversion in code. And we may end up doing the
> conversion in defines just for these special cases, but that would
> break uniformity.
> 
> So IMO it looks better if we trust the defines to do the conversion themselves.
> 
> Please let me know if you think the other way.

I'm OK with it.  I'm not convinced, but I won't object...

					-Alex
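To make the two styles being weighed here concrete, a small user-space
sketch (the field layout and macro names are hypothetical, and
cpu_to_le32()/le32_to_cpu() are approximated portably since the kernel
helpers are not available in user space):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Portable approximations of the kernel's le32 helpers: serialize
 * through an explicit byte layout so the stored bytes come out the
 * same on big- and little-endian hosts alike. */
static uint32_t my_cpu_to_le32(uint32_t v)
{
	uint8_t b[4] = { v & 0xFF, (v >> 8) & 0xFF,
			 (v >> 16) & 0xFF, (v >> 24) & 0xFF };
	uint32_t out;

	memcpy(&out, b, sizeof(out));
	return out;
}

static uint32_t my_le32_to_cpu(uint32_t v)
{
	uint8_t b[4];

	memcpy(b, &v, sizeof(v));
	return (uint32_t)b[0] | (uint32_t)b[1] << 8 |
	       (uint32_t)b[2] << 16 | (uint32_t)b[3] << 24;
}

/* Hypothetical ring element holding a little-endian dword. */
struct tre {
	uint32_t dword1;
};

/* Style A (what the patch does): the byte swap is hidden inside the
 * macro, so callers assign the result directly to shared memory. */
#define CMD_DWORD1(chid, type)	my_cpu_to_le32(((chid) << 24) | ((type) << 16))
#define GET_CMD_CHID(tre)	((my_le32_to_cpu((tre)->dword1) >> 24) & 0xFF)

/* Style B (the review's preference): constants stay in CPU byte order
 * and the swap happens visibly, exactly where memory is written. */
#define CMD_DWORD1_CPU(chid, type)	(((chid) << 24) | ((type) << 16))
```

Both styles store identical bytes in the ring element; the disagreement is
only about where the conversion is visible: `t.dword1 = CMD_DWORD1(5, 16)`
versus `t.dword1 = my_cpu_to_le32(CMD_DWORD1_CPU(5, 16))`.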

> 
> Thanks,
> Mani
> 
>> Outside of this issue, the remainder of the patch looks
>> OK to me.
>>
>> 					-Alex
>>
>>>    /* Channel reset command */
>>>    #define MHI_TRE_CMD_RESET_PTR (0)
>>>    #define MHI_TRE_CMD_RESET_DWORD0 (0)
>>> -#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
>>> -					(MHI_CMD_RESET_CHAN << 16))
>>> +#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
>>> +					(MHI_CMD_RESET_CHAN << 16)))
>>>    /* Channel stop command */
>>>    #define MHI_TRE_CMD_STOP_PTR (0)
>>>    #define MHI_TRE_CMD_STOP_DWORD0 (0)
>>> -#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
>>> -				       (MHI_CMD_STOP_CHAN << 16))
>>> +#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
>>> +				       (MHI_CMD_STOP_CHAN << 16)))
>>>    /* Channel start command */
>>>    #define MHI_TRE_CMD_START_PTR (0)
>>>    #define MHI_TRE_CMD_START_DWORD0 (0)
>>> -#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
>>> -					(MHI_CMD_START_CHAN << 16))
>>> +#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
>>> +					(MHI_CMD_START_CHAN << 16)))
>>> -#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
>>> -#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
>>> +#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
>>> +#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
>>> +#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
>>>    /* Event descriptor macros */
>>> -#define MHI_TRE_EV_PTR(ptr) (ptr)
>>> -#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
>>> -#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
>>> -#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
>>> -#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
>>> -#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
>>> -#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
>>> -#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
>>> -#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
>>> -#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
>>> -#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
>>> -#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
>>> -#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
>>> -#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
>>> -#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
>>> -#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
>>> +#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
>>> +#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
>>> +#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
>>> +#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
>>> +#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
>>> +#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
>>> +#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
>>> +#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
>>> +#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
>>> +#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
>>> +#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
>>> +#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
>>> +#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
>>> +#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
>>> +#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
>>> +#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
>>>    /* Transfer descriptor macros */
>>> -#define MHI_TRE_DATA_PTR(ptr) (ptr)
>>> -#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
>>> -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
>>> -	| (ieot << 9) | (ieob << 8) | chain)
>>> +#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
>>> +#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
>>> +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
>>> +	| (ieot << 9) | (ieob << 8) | chain))
>>>    /* RSC transfer descriptor macros */
>>> -#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
>>> -#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
>>> -#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
>>> +#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
>>> +#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
>>> +#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
>>>    enum mhi_pkt_type {
>>>    	MHI_PKT_TYPE_INVALID = 0x0,
>>> @@ -500,7 +501,7 @@ struct state_transition {
>>>    struct mhi_ring {
>>>    	dma_addr_t dma_handle;
>>>    	dma_addr_t iommu_base;
>>> -	u64 *ctxt_wp; /* point to ctxt wp */
>>> +	__le64 *ctxt_wp; /* point to ctxt wp */
>>>    	void *pre_aligned;
>>>    	void *base;
>>>    	void *rp;
>>> diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
>>> index ffde617f93a3..85f4f7c8d7c6 100644
>>> --- a/drivers/bus/mhi/core/main.c
>>> +++ b/drivers/bus/mhi/core/main.c
>>> @@ -114,7 +114,7 @@ void mhi_ring_er_db(struct mhi_event *mhi_event)
>>>    	struct mhi_ring *ring = &mhi_event->ring;
>>>    	mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
>>> -				     ring->db_addr, *ring->ctxt_wp);
>>> +				     ring->db_addr, le64_to_cpu(*ring->ctxt_wp));
>>>    }
>>>    void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
>>> @@ -123,7 +123,7 @@ void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
>>>    	struct mhi_ring *ring = &mhi_cmd->ring;
>>>    	db = ring->iommu_base + (ring->wp - ring->base);
>>> -	*ring->ctxt_wp = db;
>>> +	*ring->ctxt_wp = cpu_to_le64(db);
>>>    	mhi_write_db(mhi_cntrl, ring->db_addr, db);
>>>    }
>>> @@ -140,7 +140,7 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
>>>    	 * before letting h/w know there is new element to fetch.
>>>    	 */
>>>    	dma_wmb();
>>> -	*ring->ctxt_wp = db;
>>> +	*ring->ctxt_wp = cpu_to_le64(db);
>>>    	mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
>>>    				    ring->db_addr, db);
>>> @@ -432,7 +432,7 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev)
>>>    	struct mhi_event_ctxt *er_ctxt =
>>>    		&mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
>>>    	struct mhi_ring *ev_ring = &mhi_event->ring;
>>> -	dma_addr_t ptr = er_ctxt->rp;
>>> +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
>>>    	void *dev_rp;
>>>    	if (!is_valid_ring_ptr(ev_ring, ptr)) {
>>> @@ -537,14 +537,14 @@ static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
>>>    	/* Update the WP */
>>>    	ring->wp += ring->el_size;
>>> -	ctxt_wp = *ring->ctxt_wp + ring->el_size;
>>> +	ctxt_wp = le64_to_cpu(*ring->ctxt_wp) + ring->el_size;
>>>    	if (ring->wp >= (ring->base + ring->len)) {
>>>    		ring->wp = ring->base;
>>>    		ctxt_wp = ring->iommu_base;
>>>    	}
>>> -	*ring->ctxt_wp = ctxt_wp;
>>> +	*ring->ctxt_wp = cpu_to_le64(ctxt_wp);
>>>    	/* Update the RP */
>>>    	ring->rp += ring->el_size;
>>> @@ -801,7 +801,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>>>    	struct device *dev = &mhi_cntrl->mhi_dev->dev;
>>>    	u32 chan;
>>>    	int count = 0;
>>> -	dma_addr_t ptr = er_ctxt->rp;
>>> +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
>>>    	/*
>>>    	 * This is a quick check to avoid unnecessary event processing
>>> @@ -940,7 +940,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>>>    		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
>>>    		local_rp = ev_ring->rp;
>>> -		ptr = er_ctxt->rp;
>>> +		ptr = le64_to_cpu(er_ctxt->rp);
>>>    		if (!is_valid_ring_ptr(ev_ring, ptr)) {
>>>    			dev_err(&mhi_cntrl->mhi_dev->dev,
>>>    				"Event ring rp points outside of the event ring\n");
>>> @@ -970,7 +970,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
>>>    	int count = 0;
>>>    	u32 chan;
>>>    	struct mhi_chan *mhi_chan;
>>> -	dma_addr_t ptr = er_ctxt->rp;
>>> +	dma_addr_t ptr = le64_to_cpu(er_ctxt->rp);
>>>    	if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
>>>    		return -EIO;
>>> @@ -1011,7 +1011,7 @@ int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
>>>    		mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
>>>    		local_rp = ev_ring->rp;
>>> -		ptr = er_ctxt->rp;
>>> +		ptr = le64_to_cpu(er_ctxt->rp);
>>>    		if (!is_valid_ring_ptr(ev_ring, ptr)) {
>>>    			dev_err(&mhi_cntrl->mhi_dev->dev,
>>>    				"Event ring rp points outside of the event ring\n");
>>> @@ -1533,7 +1533,7 @@ static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
>>>    	/* mark all stale events related to channel as STALE event */
>>>    	spin_lock_irqsave(&mhi_event->lock, flags);
>>> -	ptr = er_ctxt->rp;
>>> +	ptr = le64_to_cpu(er_ctxt->rp);
>>>    	if (!is_valid_ring_ptr(ev_ring, ptr)) {
>>>    		dev_err(&mhi_cntrl->mhi_dev->dev,
>>>    			"Event ring rp points outside of the event ring\n");
>>> diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
>>> index 4aae0baea008..c35c5ddc7220 100644
>>> --- a/drivers/bus/mhi/core/pm.c
>>> +++ b/drivers/bus/mhi/core/pm.c
>>> @@ -218,7 +218,7 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
>>>    			continue;
>>>    		ring->wp = ring->base + ring->len - ring->el_size;
>>> -		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
>>> +		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
>>>    		/* Update all cores */
>>>    		smp_wmb();
>>> @@ -420,7 +420,7 @@ static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
>>>    			continue;
>>>    		ring->wp = ring->base + ring->len - ring->el_size;
>>> -		*ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
>>> +		*ring->ctxt_wp = cpu_to_le64(ring->iommu_base + ring->len - ring->el_size);
>>>    		/* Update to all cores */
>>>    		smp_wmb();
>>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 05/25] bus: mhi: Make mhi_state_str[] array static inline and move to common.h
  2022-02-16 11:39     ` Manivannan Sadhasivam
@ 2022-02-16 14:30       ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-16 14:30 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/16/22 5:39 AM, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 02:02:21PM -0600, Alex Elder wrote:
>> On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
>>> mhi_state_str[] array could be used by MHI endpoint stack also. So let's
>>> make the array as "static inline function" and move it inside the
>>> "common.h" header so that the endpoint stack could also make use of it.
>>>
>>> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>
>> I like the use of a function to encapsulate this rather than
>> using the array as before.
>>
>> But I still don't like declaring this much static data in a static inline
>> function in a header file.  Define it as a "real" function
>> somewhere common and declare it here instead.
>>
> 
> The problem is that we don't have a common C file to define this as a
> function. Even if we added one, it would be overkill.

OK, I accept that.	-Alex

> 
> This pattern is commonly used throughout the kernel source.
> 
>> One more minor comment below.
>>
>> 					-Alex
>>
>>> ---
>>>    drivers/bus/mhi/common.h       | 29 +++++++++++++++++++++++++----
>>>    drivers/bus/mhi/host/boot.c    |  2 +-
>>>    drivers/bus/mhi/host/debugfs.c |  6 +++---
>>>    drivers/bus/mhi/host/init.c    | 12 ------------
>>>    drivers/bus/mhi/host/main.c    |  8 ++++----
>>>    drivers/bus/mhi/host/pm.c      | 14 +++++++-------
>>>    6 files changed, 40 insertions(+), 31 deletions(-)
>>>
>>> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
>>> index 0d13a202d334..288e47168649 100644
>>> --- a/drivers/bus/mhi/common.h
>>> +++ b/drivers/bus/mhi/common.h
>>> @@ -159,9 +159,30 @@ struct mhi_cmd_ctxt {
>>>    	__le64 wp __packed __aligned(4);
>>>    };
>>> -extern const char * const mhi_state_str[MHI_STATE_MAX];
>>> -#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
>>> -				  !mhi_state_str[state]) ? \
>>> -				"INVALID_STATE" : mhi_state_str[state])
>>> +static inline const char * const mhi_state_str(enum mhi_state state)
>>> +{
>>> +	switch (state) {
>>> +	case MHI_STATE_RESET:
>>> +		return "RESET";
>>> +	case MHI_STATE_READY:
>>> +		return "READY";
>>> +	case MHI_STATE_M0:
>>> +		return "M0";
>>> +	case MHI_STATE_M1:
>>> +		return "M1";
>>> +	case MHI_STATE_M2:
>>> +		return"M2";
>>
>> Add space after "return" here and in a few places below.
>>
> 
> Ack.
> 
> Thanks,
> Mani
> 
>>> +	case MHI_STATE_M3:
>>> +		return"M3";
>>> +	case MHI_STATE_M3_FAST:
>>> +		return"M3 FAST";
>>> +	case MHI_STATE_BHI:
>>> +		return"BHI";
>>> +	case MHI_STATE_SYS_ERR:
>>> +		return "SYS ERROR";
>>> +	default:
>>> +		return "Unknown state";
>>> +	}
>>> +};
>>>    #endif /* _MHI_COMMON_H */
>>> diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
>>> index 74295d3cc662..93cb705614c6 100644
>>> --- a/drivers/bus/mhi/host/boot.c
>>> +++ b/drivers/bus/mhi/host/boot.c
>>> @@ -68,7 +68,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
>>>    	dev_dbg(dev, "Entered with pm_state:%s dev_state:%s ee:%s\n",
>>>    		to_mhi_pm_state_str(mhi_cntrl->pm_state),
>>> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
>>> +		mhi_state_str(mhi_cntrl->dev_state),
>>>    		TO_MHI_EXEC_STR(mhi_cntrl->ee));
>>>    	/*
>>> diff --git a/drivers/bus/mhi/host/debugfs.c b/drivers/bus/mhi/host/debugfs.c
>>> index d818586c229d..399d0db1f1eb 100644
>>> --- a/drivers/bus/mhi/host/debugfs.c
>>> +++ b/drivers/bus/mhi/host/debugfs.c
>>> @@ -20,7 +20,7 @@ static int mhi_debugfs_states_show(struct seq_file *m, void *d)
>>>    	seq_printf(m, "PM state: %s Device: %s MHI state: %s EE: %s wake: %s\n",
>>>    		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
>>>    		   mhi_is_active(mhi_cntrl) ? "Active" : "Inactive",
>>> -		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
>>> +		   mhi_state_str(mhi_cntrl->dev_state),
>>>    		   TO_MHI_EXEC_STR(mhi_cntrl->ee),
>>>    		   mhi_cntrl->wake_set ? "true" : "false");
>>> @@ -206,13 +206,13 @@ static int mhi_debugfs_regdump_show(struct seq_file *m, void *d)
>>>    	seq_printf(m, "Host PM state: %s Device state: %s EE: %s\n",
>>>    		   to_mhi_pm_state_str(mhi_cntrl->pm_state),
>>> -		   TO_MHI_STATE_STR(mhi_cntrl->dev_state),
>>> +		   mhi_state_str(mhi_cntrl->dev_state),
>>>    		   TO_MHI_EXEC_STR(mhi_cntrl->ee));
>>>    	state = mhi_get_mhi_state(mhi_cntrl);
>>>    	ee = mhi_get_exec_env(mhi_cntrl);
>>>    	seq_printf(m, "Device EE: %s state: %s\n", TO_MHI_EXEC_STR(ee),
>>> -		   TO_MHI_STATE_STR(state));
>>> +		   mhi_state_str(state));
>>>    	for (i = 0; regs[i].name; i++) {
>>>    		if (!regs[i].base)
>>> diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
>>> index 4bd62f32695d..0e301f3f305e 100644
>>> --- a/drivers/bus/mhi/host/init.c
>>> +++ b/drivers/bus/mhi/host/init.c
>>> @@ -44,18 +44,6 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
>>>    	[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
>>>    };
>>> -const char * const mhi_state_str[MHI_STATE_MAX] = {
>>> -	[MHI_STATE_RESET] = "RESET",
>>> -	[MHI_STATE_READY] = "READY",
>>> -	[MHI_STATE_M0] = "M0",
>>> -	[MHI_STATE_M1] = "M1",
>>> -	[MHI_STATE_M2] = "M2",
>>> -	[MHI_STATE_M3] = "M3",
>>> -	[MHI_STATE_M3_FAST] = "M3 FAST",
>>> -	[MHI_STATE_BHI] = "BHI",
>>> -	[MHI_STATE_SYS_ERR] = "SYS ERROR",
>>> -};
>>> -
>>>    const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
>>>    	[MHI_CH_STATE_TYPE_RESET] = "RESET",
>>>    	[MHI_CH_STATE_TYPE_STOP] = "STOP",
>>> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
>>> index 85f4f7c8d7c6..e436c2993d97 100644
>>> --- a/drivers/bus/mhi/host/main.c
>>> +++ b/drivers/bus/mhi/host/main.c
>>> @@ -479,8 +479,8 @@ irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *priv)
>>>    	ee = mhi_get_exec_env(mhi_cntrl);
>>>    	dev_dbg(dev, "local ee: %s state: %s device ee: %s state: %s\n",
>>>    		TO_MHI_EXEC_STR(mhi_cntrl->ee),
>>> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state),
>>> -		TO_MHI_EXEC_STR(ee), TO_MHI_STATE_STR(state));
>>> +		mhi_state_str(mhi_cntrl->dev_state),
>>> +		TO_MHI_EXEC_STR(ee), mhi_state_str(state));
>>>    	if (state == MHI_STATE_SYS_ERR) {
>>>    		dev_dbg(dev, "System error detected\n");
>>> @@ -846,7 +846,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>>>    			new_state = MHI_TRE_GET_EV_STATE(local_rp);
>>>    			dev_dbg(dev, "State change event to state: %s\n",
>>> -				TO_MHI_STATE_STR(new_state));
>>> +				mhi_state_str(new_state));
>>>    			switch (new_state) {
>>>    			case MHI_STATE_M0:
>>> @@ -873,7 +873,7 @@ int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
>>>    			}
>>>    			default:
>>>    				dev_err(dev, "Invalid state: %s\n",
>>> -					TO_MHI_STATE_STR(new_state));
>>> +					mhi_state_str(new_state));
>>>    			}
>>>    			break;
>>> diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
>>> index c35c5ddc7220..088ade0f3e0b 100644
>>> --- a/drivers/bus/mhi/host/pm.c
>>> +++ b/drivers/bus/mhi/host/pm.c
>>> @@ -545,7 +545,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
>>>    	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
>>>    		to_mhi_pm_state_str(mhi_cntrl->pm_state),
>>> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
>>> +		mhi_state_str(mhi_cntrl->dev_state));
>>>    	mutex_unlock(&mhi_cntrl->pm_mutex);
>>>    }
>>> @@ -689,7 +689,7 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
>>>    exit_sys_error_transition:
>>>    	dev_dbg(dev, "Exiting with PM state: %s, MHI state: %s\n",
>>>    		to_mhi_pm_state_str(mhi_cntrl->pm_state),
>>> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
>>> +		mhi_state_str(mhi_cntrl->dev_state));
>>>    	mutex_unlock(&mhi_cntrl->pm_mutex);
>>>    }
>>> @@ -864,7 +864,7 @@ int mhi_pm_suspend(struct mhi_controller *mhi_cntrl)
>>>    	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
>>>    		dev_err(dev,
>>>    			"Did not enter M3 state, MHI state: %s, PM state: %s\n",
>>> -			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
>>> +			mhi_state_str(mhi_cntrl->dev_state),
>>>    			to_mhi_pm_state_str(mhi_cntrl->pm_state));
>>>    		return -EIO;
>>>    	}
>>> @@ -890,7 +890,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
>>>    	dev_dbg(dev, "Entered with PM state: %s, MHI state: %s\n",
>>>    		to_mhi_pm_state_str(mhi_cntrl->pm_state),
>>> -		TO_MHI_STATE_STR(mhi_cntrl->dev_state));
>>> +		mhi_state_str(mhi_cntrl->dev_state));
>>>    	if (mhi_cntrl->pm_state == MHI_PM_DISABLE)
>>>    		return 0;
>>> @@ -900,7 +900,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
>>>    	if (mhi_get_mhi_state(mhi_cntrl) != MHI_STATE_M3) {
>>>    		dev_warn(dev, "Resuming from non M3 state (%s)\n",
>>> -			 TO_MHI_STATE_STR(mhi_get_mhi_state(mhi_cntrl)));
>>> +			 mhi_state_str(mhi_get_mhi_state(mhi_cntrl)));
>>>    		if (!force)
>>>    			return -EINVAL;
>>>    	}
>>> @@ -937,7 +937,7 @@ static int __mhi_pm_resume(struct mhi_controller *mhi_cntrl, bool force)
>>>    	if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
>>>    		dev_err(dev,
>>>    			"Did not enter M0 state, MHI state: %s, PM state: %s\n",
>>> -			TO_MHI_STATE_STR(mhi_cntrl->dev_state),
>>> +			mhi_state_str(mhi_cntrl->dev_state),
>>>    			to_mhi_pm_state_str(mhi_cntrl->pm_state));
>>>    		return -EIO;
>>>    	}
>>> @@ -1088,7 +1088,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
>>>    	state = mhi_get_mhi_state(mhi_cntrl);
>>>    	dev_dbg(dev, "Attempting power on with EE: %s, state: %s\n",
>>> -		TO_MHI_EXEC_STR(current_ee), TO_MHI_STATE_STR(state));
>>> +		TO_MHI_EXEC_STR(current_ee), mhi_state_str(state));
>>>    	if (state == MHI_STATE_SYS_ERR) {
>>>    		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
>>


^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 07/25] bus: mhi: Get rid of SHIFT macros and use bitfield operations
  2022-02-15 20:02   ` Alex Elder
@ 2022-02-16 16:45     ` Manivannan Sadhasivam
  0 siblings, 0 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-16 16:45 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 02:02:34PM -0600, Alex Elder wrote:
> On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> > Instead of using the hardcoded SHIFT values, use the bitfield macros to
> > derive the shift value from mask during build time.
> 
> You accomplished this by changing the way mhi_read_reg_field(),
> mhi_poll_reg_field(), and mhi_write_reg_field() are defined.
> It would be helpful for you to point out that fact up front.
> Then it's fairly clear that the _SHIFT (and _SHFT) definitions
> can just go away.  Very nice to remove those though.
> 
> > For shift values that cannot be determined at build time, the "__ffs()"
> > helper is used to find the shift value at runtime.
> 
> Yeah this is an annoying feature of the bitfield functions,
> but you *know* when you're working with a variable mask.
> 
> I still think the mask values that are 32 bits wide are
> overkill, e.g.:
> 
>   #define MHIREGLEN_MHIREGLEN_MASK	GENMASK(31, 0)
> 
> 
> These are full 32-bit registers, and I don't see any reason
> they would ever *not* be full registers, so there's no point
> in applying a mask to them.  Even if some day it did make
> sense to use a mask (less than 32 bits wide, for example),
> that's something that could be added when that becomes an
> issue, rather than complicating the code unnecessarily now.
> 

Okay. Got rid of the 32-bit masks and modified the commit message.

Thanks,
Mani

> If you eliminate the 32-bit wide masks, great, but even if
> you don't:
> 
> Reviewed-by: Alex Elder <elder@linaro.org>
> 
> > Suggested-by: Alex Elder <elder@linaro.org>
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > ---
> >   drivers/bus/mhi/common.h        | 45 ----------------------
> >   drivers/bus/mhi/host/boot.c     | 15 ++------
> >   drivers/bus/mhi/host/debugfs.c  | 10 ++---
> >   drivers/bus/mhi/host/init.c     | 67 +++++++++++++++------------------
> >   drivers/bus/mhi/host/internal.h | 10 ++---
> >   drivers/bus/mhi/host/main.c     | 16 ++++----
> >   drivers/bus/mhi/host/pm.c       | 18 +++------
> >   7 files changed, 55 insertions(+), 126 deletions(-)
> > 
> > diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> > index f226f06d4ff9..728c82928d8d 100644
> > --- a/drivers/bus/mhi/common.h
> > +++ b/drivers/bus/mhi/common.h
> > @@ -63,9 +63,7 @@
> >   /* BHI register bits */
> >   #define BHI_TXDB_SEQNUM_BMSK				GENMASK(29, 0)
> > -#define BHI_TXDB_SEQNUM_SHFT				0
> >   #define BHI_STATUS_MASK					GENMASK(31, 30)
> > -#define BHI_STATUS_SHIFT				30
> >   #define BHI_STATUS_ERROR				0x03
> >   #define BHI_STATUS_SUCCESS				0x02
> >   #define BHI_STATUS_RESET				0x00
> > @@ -85,89 +83,51 @@
> >   /* BHIE register bits */
> >   #define BHIE_TXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
> > -#define BHIE_TXVECDB_SEQNUM_SHFT			0
> >   #define BHIE_TXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
> > -#define BHIE_TXVECSTATUS_SEQNUM_SHFT			0
> >   #define BHIE_TXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
> > -#define BHIE_TXVECSTATUS_STATUS_SHFT			30
> >   #define BHIE_TXVECSTATUS_STATUS_RESET			0x00
> >   #define BHIE_TXVECSTATUS_STATUS_XFER_COMPL		0x02
> >   #define BHIE_TXVECSTATUS_STATUS_ERROR			0x03
> >   #define BHIE_RXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
> > -#define BHIE_RXVECDB_SEQNUM_SHFT			0
> >   #define BHIE_RXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
> > -#define BHIE_RXVECSTATUS_SEQNUM_SHFT			0
> >   #define BHIE_RXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
> > -#define BHIE_RXVECSTATUS_STATUS_SHFT			30
> >   #define BHIE_RXVECSTATUS_STATUS_RESET			0x00
> >   #define BHIE_RXVECSTATUS_STATUS_XFER_COMPL		0x02
> >   #define BHIE_RXVECSTATUS_STATUS_ERROR			0x03
> >   /* MHI register bits */
> >   #define MHIREGLEN_MHIREGLEN_MASK			GENMASK(31, 0)
> > -#define MHIREGLEN_MHIREGLEN_SHIFT			0
> >   #define MHIVER_MHIVER_MASK				GENMASK(31, 0)
> > -#define MHIVER_MHIVER_SHIFT				0
> >   #define MHICFG_NHWER_MASK				GENMASK(31, 24)
> > -#define MHICFG_NHWER_SHIFT				24
> >   #define MHICFG_NER_MASK					GENMASK(23, 16)
> > -#define MHICFG_NER_SHIFT				16
> >   #define MHICFG_NHWCH_MASK				GENMASK(15, 8)
> > -#define MHICFG_NHWCH_SHIFT				8
> >   #define MHICFG_NCH_MASK					GENMASK(7, 0)
> > -#define MHICFG_NCH_SHIFT				0
> >   #define CHDBOFF_CHDBOFF_MASK				GENMASK(31, 0)
> > -#define CHDBOFF_CHDBOFF_SHIFT				0
> >   #define ERDBOFF_ERDBOFF_MASK				GENMASK(31, 0)
> > -#define ERDBOFF_ERDBOFF_SHIFT				0
> >   #define BHIOFF_BHIOFF_MASK				GENMASK(31, 0)
> > -#define BHIOFF_BHIOFF_SHIFT				0
> >   #define BHIEOFF_BHIEOFF_MASK				GENMASK(31, 0)
> > -#define BHIEOFF_BHIEOFF_SHIFT				0
> >   #define DEBUGOFF_DEBUGOFF_MASK				GENMASK(31, 0)
> > -#define DEBUGOFF_DEBUGOFF_SHIFT				0
> >   #define MHICTRL_MHISTATE_MASK				GENMASK(15, 8)
> > -#define MHICTRL_MHISTATE_SHIFT				8
> >   #define MHICTRL_RESET_MASK				BIT(1)
> > -#define MHICTRL_RESET_SHIFT				1
> >   #define MHISTATUS_MHISTATE_MASK				GENMASK(15, 8)
> > -#define MHISTATUS_MHISTATE_SHIFT			8
> >   #define MHISTATUS_SYSERR_MASK				BIT(2)
> > -#define MHISTATUS_SYSERR_SHIFT				2
> >   #define MHISTATUS_READY_MASK				BIT(0)
> > -#define MHISTATUS_READY_SHIFT				0
> >   #define CCABAP_LOWER_CCABAP_LOWER_MASK			GENMASK(31, 0)
> > -#define CCABAP_LOWER_CCABAP_LOWER_SHIFT			0
> >   #define CCABAP_HIGHER_CCABAP_HIGHER_MASK		GENMASK(31, 0)
> > -#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT		0
> >   #define ECABAP_LOWER_ECABAP_LOWER_MASK			GENMASK(31, 0)
> > -#define ECABAP_LOWER_ECABAP_LOWER_SHIFT			0
> >   #define ECABAP_HIGHER_ECABAP_HIGHER_MASK		GENMASK(31, 0)
> > -#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT		0
> >   #define CRCBAP_LOWER_CRCBAP_LOWER_MASK			GENMASK(31, 0)
> > -#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT			0
> >   #define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK		GENMASK(31, 0)
> > -#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT		0
> >   #define CRDB_LOWER_CRDB_LOWER_MASK			GENMASK(31, 0)
> > -#define CRDB_LOWER_CRDB_LOWER_SHIFT			0
> >   #define CRDB_HIGHER_CRDB_HIGHER_MASK			GENMASK(31, 0)
> > -#define CRDB_HIGHER_CRDB_HIGHER_SHIFT			0
> >   #define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK	GENMASK(31, 0)
> > -#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT	0
> >   #define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK	GENMASK(31, 0)
> > -#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT	0
> >   #define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK	GENMASK(31, 0)
> > -#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT	0
> >   #define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK	GENMASK(31, 0)
> > -#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT	0
> >   #define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK	GENMASK(31, 0)
> > -#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT	0
> >   #define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK	GENMASK(31, 0)
> > -#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT	0
> >   #define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK	GENMASK(31, 0)
> > -#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT	0
> >   #define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK	GENMASK(31, 0)
> > -#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT	0
> >   /* Command Ring Element macros */
> >   /* No operation command */
> > @@ -277,9 +237,7 @@ enum mhi_cmd_type {
> >   #define EV_CTX_RESERVED_MASK GENMASK(7, 0)
> >   #define EV_CTX_INTMODC_MASK GENMASK(15, 8)
> > -#define EV_CTX_INTMODC_SHIFT 8
> >   #define EV_CTX_INTMODT_MASK GENMASK(31, 16)
> > -#define EV_CTX_INTMODT_SHIFT 16
> >   struct mhi_event_ctxt {
> >   	__le32 intmod;
> >   	__le32 ertype;
> > @@ -292,11 +250,8 @@ struct mhi_event_ctxt {
> >   };
> >   #define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
> > -#define CHAN_CTX_CHSTATE_SHIFT 0
> >   #define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
> > -#define CHAN_CTX_BRSTMODE_SHIFT 8
> >   #define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
> > -#define CHAN_CTX_POLLCFG_SHIFT 10
> >   #define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
> >   struct mhi_chan_ctxt {
> >   	__le32 chcfg;
> > diff --git a/drivers/bus/mhi/host/boot.c b/drivers/bus/mhi/host/boot.c
> > index 93cb705614c6..b0da7ca4519c 100644
> > --- a/drivers/bus/mhi/host/boot.c
> > +++ b/drivers/bus/mhi/host/boot.c
> > @@ -46,8 +46,7 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
> >   	sequence_id = MHI_RANDOM_U32_NONZERO(BHIE_RXVECSTATUS_SEQNUM_BMSK);
> >   	mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
> > -			    BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
> > -			    sequence_id);
> > +			    BHIE_RXVECDB_SEQNUM_BMSK, sequence_id);
> >   	dev_dbg(dev, "Address: %p and len: 0x%zx sequence: %u\n",
> >   		&mhi_buf->dma_addr, mhi_buf->len, sequence_id);
> > @@ -127,9 +126,7 @@ static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
> >   	while (retry--) {
> >   		ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
> > -					 BHIE_RXVECSTATUS_STATUS_BMSK,
> > -					 BHIE_RXVECSTATUS_STATUS_SHFT,
> > -					 &rx_status);
> > +					 BHIE_RXVECSTATUS_STATUS_BMSK, &rx_status);
> >   		if (ret)
> >   			return -EIO;
> > @@ -168,7 +165,6 @@ int mhi_download_rddm_image(struct mhi_controller *mhi_cntrl, bool in_panic)
> >   			   mhi_read_reg_field(mhi_cntrl, base,
> >   					      BHIE_RXVECSTATUS_OFFS,
> >   					      BHIE_RXVECSTATUS_STATUS_BMSK,
> > -					      BHIE_RXVECSTATUS_STATUS_SHFT,
> >   					      &rx_status) || rx_status,
> >   			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
> > @@ -203,8 +199,7 @@ static int mhi_fw_load_bhie(struct mhi_controller *mhi_cntrl,
> >   	mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
> >   	mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
> > -			    BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
> > -			    sequence_id);
> > +			    BHIE_TXVECDB_SEQNUM_BMSK, sequence_id);
> >   	read_unlock_bh(pm_lock);
> >   	/* Wait for the image download to complete */
> > @@ -213,7 +208,6 @@ static int mhi_fw_load_bhie(struct mhi_controller *mhi_cntrl,
> >   				 mhi_read_reg_field(mhi_cntrl, base,
> >   						   BHIE_TXVECSTATUS_OFFS,
> >   						   BHIE_TXVECSTATUS_STATUS_BMSK,
> > -						   BHIE_TXVECSTATUS_STATUS_SHFT,
> >   						   &tx_status) || tx_status,
> >   				 msecs_to_jiffies(mhi_cntrl->timeout_ms));
> >   	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
> > @@ -265,8 +259,7 @@ static int mhi_fw_load_bhi(struct mhi_controller *mhi_cntrl,
> >   	ret = wait_event_timeout(mhi_cntrl->state_event,
> >   			   MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
> >   			   mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
> > -					      BHI_STATUS_MASK, BHI_STATUS_SHIFT,
> > -					      &tx_status) || tx_status,
> > +					      BHI_STATUS_MASK, &tx_status) || tx_status,
> >   			   msecs_to_jiffies(mhi_cntrl->timeout_ms));
> >   	if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
> >   		goto invalid_pm_state;
> > diff --git a/drivers/bus/mhi/host/debugfs.c b/drivers/bus/mhi/host/debugfs.c
> > index 399d0db1f1eb..cfec7811dfbb 100644
> > --- a/drivers/bus/mhi/host/debugfs.c
> > +++ b/drivers/bus/mhi/host/debugfs.c
> > @@ -61,9 +61,9 @@ static int mhi_debugfs_events_show(struct seq_file *m, void *d)
> >   		seq_printf(m, "Index: %d intmod count: %lu time: %lu",
> >   			   i, (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODC_MASK) >>
> > -			   EV_CTX_INTMODC_SHIFT,
> > +			   __ffs(EV_CTX_INTMODC_MASK),
> >   			   (le32_to_cpu(er_ctxt->intmod) & EV_CTX_INTMODT_MASK) >>
> > -			   EV_CTX_INTMODT_SHIFT);
> > +			   __ffs(EV_CTX_INTMODT_MASK));
> >   		seq_printf(m, " base: 0x%0llx len: 0x%llx", le64_to_cpu(er_ctxt->rbase),
> >   			   le64_to_cpu(er_ctxt->rlen));
> > @@ -107,10 +107,10 @@ static int mhi_debugfs_channels_show(struct seq_file *m, void *d)
> >   		seq_printf(m,
> >   			   "%s(%u) state: 0x%lx brstmode: 0x%lx pollcfg: 0x%lx",
> >   			   mhi_chan->name, mhi_chan->chan, (le32_to_cpu(chan_ctxt->chcfg) &
> > -			   CHAN_CTX_CHSTATE_MASK) >> CHAN_CTX_CHSTATE_SHIFT,
> > +			   CHAN_CTX_CHSTATE_MASK) >> __ffs(CHAN_CTX_CHSTATE_MASK),
> >   			   (le32_to_cpu(chan_ctxt->chcfg) & CHAN_CTX_BRSTMODE_MASK) >>
> > -			   CHAN_CTX_BRSTMODE_SHIFT, (le32_to_cpu(chan_ctxt->chcfg) &
> > -			   CHAN_CTX_POLLCFG_MASK) >> CHAN_CTX_POLLCFG_SHIFT);
> > +			   __ffs(CHAN_CTX_BRSTMODE_MASK), (le32_to_cpu(chan_ctxt->chcfg) &
> > +			   CHAN_CTX_POLLCFG_MASK) >> __ffs(CHAN_CTX_POLLCFG_MASK));
> >   		seq_printf(m, " type: 0x%x event ring: %u", le32_to_cpu(chan_ctxt->chtype),
> >   			   le32_to_cpu(chan_ctxt->erindex));
> > diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
> > index 0e301f3f305e..05e457d12446 100644
> > --- a/drivers/bus/mhi/host/init.c
> > +++ b/drivers/bus/mhi/host/init.c
> > @@ -4,6 +4,7 @@
> >    *
> >    */
> > +#include <linux/bitfield.h>
> >   #include <linux/debugfs.h>
> >   #include <linux/device.h>
> >   #include <linux/dma-direction.h>
> > @@ -283,11 +284,11 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
> >   		tmp = le32_to_cpu(chan_ctxt->chcfg);
> >   		tmp &= ~CHAN_CTX_CHSTATE_MASK;
> > -		tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
> > +		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_DISABLED);
> >   		tmp &= ~CHAN_CTX_BRSTMODE_MASK;
> > -		tmp |= (mhi_chan->db_cfg.brstmode << CHAN_CTX_BRSTMODE_SHIFT);
> > +		tmp |= FIELD_PREP(CHAN_CTX_BRSTMODE_MASK, mhi_chan->db_cfg.brstmode);
> >   		tmp &= ~CHAN_CTX_POLLCFG_MASK;
> > -		tmp |= (mhi_chan->db_cfg.pollcfg << CHAN_CTX_POLLCFG_SHIFT);
> > +		tmp |= FIELD_PREP(CHAN_CTX_POLLCFG_MASK, mhi_chan->db_cfg.pollcfg);
> >   		chan_ctxt->chcfg = cpu_to_le32(tmp);
> >   		chan_ctxt->chtype = cpu_to_le32(mhi_chan->type);
> > @@ -319,7 +320,7 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
> >   		tmp = le32_to_cpu(er_ctxt->intmod);
> >   		tmp &= ~EV_CTX_INTMODC_MASK;
> >   		tmp &= ~EV_CTX_INTMODT_MASK;
> > -		tmp |= (mhi_event->intmod << EV_CTX_INTMODT_SHIFT);
> > +		tmp |= FIELD_PREP(EV_CTX_INTMODT_MASK, mhi_event->intmod);
> >   		er_ctxt->intmod = cpu_to_le32(tmp);
> >   		er_ctxt->ertype = cpu_to_le32(MHI_ER_TYPE_VALID);
> > @@ -425,71 +426,70 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
> >   	struct {
> >   		u32 offset;
> >   		u32 mask;
> > -		u32 shift;
> >   		u32 val;
> >   	} reg_info[] = {
> >   		{
> > -			CCABAP_HIGHER, U32_MAX, 0,
> > +			CCABAP_HIGHER, U32_MAX,
> >   			upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
> >   		},
> >   		{
> > -			CCABAP_LOWER, U32_MAX, 0,
> > +			CCABAP_LOWER, U32_MAX,
> >   			lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
> >   		},
> >   		{
> > -			ECABAP_HIGHER, U32_MAX, 0,
> > +			ECABAP_HIGHER, U32_MAX,
> >   			upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
> >   		},
> >   		{
> > -			ECABAP_LOWER, U32_MAX, 0,
> > +			ECABAP_LOWER, U32_MAX,
> >   			lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
> >   		},
> >   		{
> > -			CRCBAP_HIGHER, U32_MAX, 0,
> > +			CRCBAP_HIGHER, U32_MAX,
> >   			upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
> >   		},
> >   		{
> > -			CRCBAP_LOWER, U32_MAX, 0,
> > +			CRCBAP_LOWER, U32_MAX,
> >   			lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
> >   		},
> >   		{
> > -			MHICFG, MHICFG_NER_MASK, MHICFG_NER_SHIFT,
> > +			MHICFG, MHICFG_NER_MASK,
> >   			mhi_cntrl->total_ev_rings,
> >   		},
> >   		{
> > -			MHICFG, MHICFG_NHWER_MASK, MHICFG_NHWER_SHIFT,
> > +			MHICFG, MHICFG_NHWER_MASK,
> >   			mhi_cntrl->hw_ev_rings,
> >   		},
> >   		{
> > -			MHICTRLBASE_HIGHER, U32_MAX, 0,
> > +			MHICTRLBASE_HIGHER, U32_MAX,
> >   			upper_32_bits(mhi_cntrl->iova_start),
> >   		},
> >   		{
> > -			MHICTRLBASE_LOWER, U32_MAX, 0,
> > +			MHICTRLBASE_LOWER, U32_MAX,
> >   			lower_32_bits(mhi_cntrl->iova_start),
> >   		},
> >   		{
> > -			MHIDATABASE_HIGHER, U32_MAX, 0,
> > +			MHIDATABASE_HIGHER, U32_MAX,
> >   			upper_32_bits(mhi_cntrl->iova_start),
> >   		},
> >   		{
> > -			MHIDATABASE_LOWER, U32_MAX, 0,
> > +			MHIDATABASE_LOWER, U32_MAX,
> >   			lower_32_bits(mhi_cntrl->iova_start),
> >   		},
> >   		{
> > -			MHICTRLLIMIT_HIGHER, U32_MAX, 0,
> > +			MHICTRLLIMIT_HIGHER, U32_MAX,
> >   			upper_32_bits(mhi_cntrl->iova_stop),
> >   		},
> >   		{
> > -			MHICTRLLIMIT_LOWER, U32_MAX, 0,
> > +			MHICTRLLIMIT_LOWER, U32_MAX,
> >   			lower_32_bits(mhi_cntrl->iova_stop),
> >   		},
> >   		{
> > -			MHIDATALIMIT_HIGHER, U32_MAX, 0,
> > +			MHIDATALIMIT_HIGHER, U32_MAX,
> >   			upper_32_bits(mhi_cntrl->iova_stop),
> >   		},
> >   		{
> > -			MHIDATALIMIT_LOWER, U32_MAX, 0,
> > +			MHIDATALIMIT_LOWER, U32_MAX,
> >   			lower_32_bits(mhi_cntrl->iova_stop),
> >   		},
> >   		{ 0, 0, 0 }
> > @@ -498,8 +498,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
> >   	dev_dbg(dev, "Initializing MHI registers\n");
> >   	/* Read channel db offset */
> > -	ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK,
> > -				 CHDBOFF_CHDBOFF_SHIFT, &val);
> > +	ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK, &val);
> >   	if (ret) {
> >   		dev_err(dev, "Unable to read CHDBOFF register\n");
> >   		return -EIO;
> > @@ -515,8 +514,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
> >   		mhi_chan->tre_ring.db_addr = base + val;
> >   	/* Read event ring db offset */
> > -	ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK,
> > -				 ERDBOFF_ERDBOFF_SHIFT, &val);
> > +	ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK, &val);
> >   	if (ret) {
> >   		dev_err(dev, "Unable to read ERDBOFF register\n");
> >   		return -EIO;
> > @@ -537,8 +535,7 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
> >   	/* Write to MMIO registers */
> >   	for (i = 0; reg_info[i].offset; i++)
> >   		mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
> > -				    reg_info[i].mask, reg_info[i].shift,
> > -				    reg_info[i].val);
> > +				    reg_info[i].mask, reg_info[i].val);
> >   	return 0;
> >   }
> > @@ -571,7 +568,7 @@ void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
> >   	tmp = le32_to_cpu(chan_ctxt->chcfg);
> >   	tmp &= ~CHAN_CTX_CHSTATE_MASK;
> > -	tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
> > +	tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_DISABLED);
> >   	chan_ctxt->chcfg = cpu_to_le32(tmp);
> >   	/* Update to all cores */
> > @@ -608,7 +605,7 @@ int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
> >   	tmp = le32_to_cpu(chan_ctxt->chcfg);
> >   	tmp &= ~CHAN_CTX_CHSTATE_MASK;
> > -	tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
> > +	tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_ENABLED);
> >   	chan_ctxt->chcfg = cpu_to_le32(tmp);
> >   	chan_ctxt->rbase = cpu_to_le64(tre_ring->iommu_base);
> > @@ -952,14 +949,10 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
> >   	if (ret)
> >   		goto err_destroy_wq;
> > -	mhi_cntrl->family_number = (soc_info & SOC_HW_VERSION_FAM_NUM_BMSK) >>
> > -					SOC_HW_VERSION_FAM_NUM_SHFT;
> > -	mhi_cntrl->device_number = (soc_info & SOC_HW_VERSION_DEV_NUM_BMSK) >>
> > -					SOC_HW_VERSION_DEV_NUM_SHFT;
> > -	mhi_cntrl->major_version = (soc_info & SOC_HW_VERSION_MAJOR_VER_BMSK) >>
> > -					SOC_HW_VERSION_MAJOR_VER_SHFT;
> > -	mhi_cntrl->minor_version = (soc_info & SOC_HW_VERSION_MINOR_VER_BMSK) >>
> > -					SOC_HW_VERSION_MINOR_VER_SHFT;
> > +	mhi_cntrl->family_number = FIELD_GET(SOC_HW_VERSION_FAM_NUM_BMSK, soc_info);
> > +	mhi_cntrl->device_number = FIELD_GET(SOC_HW_VERSION_DEV_NUM_BMSK, soc_info);
> > +	mhi_cntrl->major_version = FIELD_GET(SOC_HW_VERSION_MAJOR_VER_BMSK, soc_info);
> > +	mhi_cntrl->minor_version = FIELD_GET(SOC_HW_VERSION_MINOR_VER_BMSK, soc_info);
> >   	mhi_cntrl->index = ida_alloc(&mhi_controller_ida, GFP_KERNEL);
> >   	if (mhi_cntrl->index < 0) {
> > diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
> > index 762055a6ec9f..21381781d7c5 100644
> > --- a/drivers/bus/mhi/host/internal.h
> > +++ b/drivers/bus/mhi/host/internal.h
> > @@ -82,13 +82,9 @@ extern struct bus_type mhi_bus_type;
> >   #define SOC_HW_VERSION_OFFS		0x224
> >   #define SOC_HW_VERSION_FAM_NUM_BMSK	GENMASK(31, 28)
> > -#define SOC_HW_VERSION_FAM_NUM_SHFT	28
> >   #define SOC_HW_VERSION_DEV_NUM_BMSK	GENMASK(27, 16)
> > -#define SOC_HW_VERSION_DEV_NUM_SHFT	16
> >   #define SOC_HW_VERSION_MAJOR_VER_BMSK	GENMASK(15, 8)
> > -#define SOC_HW_VERSION_MAJOR_VER_SHFT	8
> >   #define SOC_HW_VERSION_MINOR_VER_BMSK	GENMASK(7, 0)
> > -#define SOC_HW_VERSION_MINOR_VER_SHFT	0
> >   struct mhi_ctxt {
> >   	struct mhi_event_ctxt *er_ctxt;
> > @@ -393,14 +389,14 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
> >   			      void __iomem *base, u32 offset, u32 *out);
> >   int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
> >   				    void __iomem *base, u32 offset, u32 mask,
> > -				    u32 shift, u32 *out);
> > +				    u32 *out);
> >   int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
> >   				    void __iomem *base, u32 offset, u32 mask,
> > -				    u32 shift, u32 val, u32 delayus);
> > +				    u32 val, u32 delayus);
> >   void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
> >   		   u32 offset, u32 val);
> >   void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
> > -			 u32 offset, u32 mask, u32 shift, u32 val);
> > +			 u32 offset, u32 mask, u32 val);
> >   void mhi_ring_er_db(struct mhi_event *mhi_event);
> >   void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
> >   		  dma_addr_t db_val);
> > diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
> > index e436c2993d97..02ac5faf9178 100644
> > --- a/drivers/bus/mhi/host/main.c
> > +++ b/drivers/bus/mhi/host/main.c
> > @@ -24,7 +24,7 @@ int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
> >   int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
> >   				    void __iomem *base, u32 offset,
> > -				    u32 mask, u32 shift, u32 *out)
> > +				    u32 mask, u32 *out)
> >   {
> >   	u32 tmp;
> >   	int ret;
> > @@ -33,21 +33,20 @@ int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
> >   	if (ret)
> >   		return ret;
> > -	*out = (tmp & mask) >> shift;
> > +	*out = (tmp & mask) >> __ffs(mask);
> >   	return 0;
> >   }
> >   int __must_check mhi_poll_reg_field(struct mhi_controller *mhi_cntrl,
> >   				    void __iomem *base, u32 offset,
> > -				    u32 mask, u32 shift, u32 val, u32 delayus)
> > +				    u32 mask, u32 val, u32 delayus)
> >   {
> >   	int ret;
> >   	u32 out, retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
> >   	while (retry--) {
> > -		ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, shift,
> > -					 &out);
> > +		ret = mhi_read_reg_field(mhi_cntrl, base, offset, mask, &out);
> >   		if (ret)
> >   			return ret;
> > @@ -67,7 +66,7 @@ void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
> >   }
> >   void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
> > -			 u32 offset, u32 mask, u32 shift, u32 val)
> > +			 u32 offset, u32 mask, u32 val)
> >   {
> >   	int ret;
> >   	u32 tmp;
> > @@ -77,7 +76,7 @@ void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
> >   		return;
> >   	tmp &= ~mask;
> > -	tmp |= (val << shift);
> > +	tmp |= (val << __ffs(mask));
> >   	mhi_write_reg(mhi_cntrl, base, offset, tmp);
> >   }
> > @@ -159,8 +158,7 @@ enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl)
> >   {
> >   	u32 state;
> >   	int ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
> > -				     MHISTATUS_MHISTATE_MASK,
> > -				     MHISTATUS_MHISTATE_SHIFT, &state);
> > +				     MHISTATUS_MHISTATE_MASK, &state);
> >   	return ret ? MHI_STATE_MAX : state;
> >   }
> >   EXPORT_SYMBOL_GPL(mhi_get_mhi_state);
> > diff --git a/drivers/bus/mhi/host/pm.c b/drivers/bus/mhi/host/pm.c
> > index 088ade0f3e0b..3d90b8ecd3d9 100644
> > --- a/drivers/bus/mhi/host/pm.c
> > +++ b/drivers/bus/mhi/host/pm.c
> > @@ -131,11 +131,10 @@ void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state)
> >   {
> >   	if (state == MHI_STATE_RESET) {
> >   		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
> > -				    MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 1);
> > +				    MHICTRL_RESET_MASK, 1);
> >   	} else {
> >   		mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
> > -				    MHICTRL_MHISTATE_MASK,
> > -				    MHICTRL_MHISTATE_SHIFT, state);
> > +				    MHICTRL_MHISTATE_MASK, state);
> >   	}
> >   }
> > @@ -167,16 +166,14 @@ int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
> >   	/* Wait for RESET to be cleared and READY bit to be set by the device */
> >   	ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
> > -				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
> > -				 interval_us);
> > +				 MHICTRL_RESET_MASK, 0, interval_us);
> >   	if (ret) {
> >   		dev_err(dev, "Device failed to clear MHI Reset\n");
> >   		return ret;
> >   	}
> >   	ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
> > -				 MHISTATUS_READY_MASK, MHISTATUS_READY_SHIFT, 1,
> > -				 interval_us);
> > +				 MHISTATUS_READY_MASK, 1, interval_us);
> >   	if (ret) {
> >   		dev_err(dev, "Device failed to enter MHI Ready\n");
> >   		return ret;
> > @@ -470,8 +467,7 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl)
> >   		/* Wait for the reset bit to be cleared by the device */
> >   		ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
> > -				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
> > -				 25000);
> > +				 MHICTRL_RESET_MASK, 0, 25000);
> >   		if (ret)
> >   			dev_err(dev, "Device failed to clear MHI Reset\n");
> > @@ -602,7 +598,6 @@ static void mhi_pm_sys_error_transition(struct mhi_controller *mhi_cntrl)
> >   							    mhi_cntrl->regs,
> >   							    MHICTRL,
> >   							    MHICTRL_RESET_MASK,
> > -							    MHICTRL_RESET_SHIFT,
> >   							    &in_reset) ||
> >   					!in_reset, timeout);
> >   		if (!ret || in_reset) {
> > @@ -1093,8 +1088,7 @@ int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
> >   	if (state == MHI_STATE_SYS_ERR) {
> >   		mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
> >   		ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
> > -				 MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0,
> > -				 interval_us);
> > +				 MHICTRL_RESET_MASK, 0, interval_us);
> >   		if (ret) {
> >   			dev_info(dev, "Failed to reset MHI due to syserr state\n");
> >   			goto error_exit;
> 

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 06/25] bus: mhi: Cleanup the register definitions used in headers
  2022-02-15 20:02   ` Alex Elder
@ 2022-02-16 17:21     ` Manivannan Sadhasivam
  2022-02-16 17:43       ` Manivannan Sadhasivam
  0 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-16 17:21 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 02:02:28PM -0600, Alex Elder wrote:
> On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> > Cleanup includes:
> > 
> > 1. Moving the MHI register definitions to common.h header with REG_ prefix
> >     and using them in the host/internal.h file as an alias. This makes it
> >     possible to reuse the register definitions in EP stack that differs by
> >     a fixed offset.
> 
> I like that you're doing this.  But I don't see the point of this
> kind of definition, made in "drivers/bus/mhi/host/internal.h":
> 
>   #define MHIREGLEN	REG_MHIREGLEN
> 
> Just use REG_MHIREGLEN in the host code too.  (Or use MHIREGLEN in
> both places, whichever you prefer.)
> 

My intention is to use the original MHI register definitions in both
host and endpoint, so the REG_ prefix acts like an overlay here. Earlier I
was defining the MHI registers separately in both host and endpoint.

But I came up with this approach after your v2 review.

> 
> > 2. Using the GENMASK macro for masks
> 
> Great!
> 
> > 3. Removing brackets for single values
> 
> They're normally called "parentheses."  Brackets are more typically []
> (and {} are "braces", though that's not always the case).
> 

Ah, sorry for that. I used to call them "round brackets", so that came in ;)

> > 4. Using lowercase for hex values
> 
> I think I saw a few upper case hex values in another patch.
> Not a big deal, just FYI.
> 

Will change them.

> > 5. Using two digits for hex values where applicable
> 
> I think I suggested most of these things, so of course
> they look awesome to me.
> 
> You could use bitfield accessor macros in a few more places.
> For example, this:
> 
> #define MHI_TRE_CMD_RESET_DWORD1(chid)  (cpu_to_le32((chid << 24) | \
> 					    (MHI_CMD_RESET_CHAN << 16)))
> 
> Could use something more like this:
> 
> #define MHI_CMD_CHANNEL_MASK	GENMASK(31, 24)
> #define MHI_CMD_COMMAND_MASK	GENMASK(23, 16)
> 
> #define MHI_TRE_CMD_RESET_DWORD1(chid) \
> 	(le32_encode_bits(chid, MHI_CMD_CHANNEL_MASK) | \
> 	 le32_encode_bits(MHI_CMD_RESET_CHAN, MHI_CMD_COMMAND_MASK))
> 

This adds more code and also makes it a bit harder to read. I'd
prefer to stick with open coding for these.

Thanks,
Mani

> (But of course I already said I preferred CPU byte order on
> these values...)
> 
> I would like to see you get rid of one-to-one definitions
> I mentioned at the top.  I haven't done an exhaustive check
> of all the symbols, but this looks good generally, so:
> 
> Reviewed-by: Alex Elder <elder@linaro.org>
> 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > ---
> >   drivers/bus/mhi/common.h        | 243 ++++++++++++++++++++++++-----
> >   drivers/bus/mhi/host/internal.h | 265 +++++++++-----------------------
> >   2 files changed, 278 insertions(+), 230 deletions(-)
> > 
> > diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> > index 288e47168649..f226f06d4ff9 100644
> > --- a/drivers/bus/mhi/common.h
> > +++ b/drivers/bus/mhi/common.h
> > @@ -9,62 +9,223 @@
> >   #include <linux/mhi.h>
> > +/* MHI registers */
> > +#define REG_MHIREGLEN					0x00
> > +#define REG_MHIVER					0x08
> > +#define REG_MHICFG					0x10
> > +#define REG_CHDBOFF					0x18
> > +#define REG_ERDBOFF					0x20
> > +#define REG_BHIOFF					0x28
> > +#define REG_BHIEOFF					0x2c
> > +#define REG_DEBUGOFF					0x30
> > +#define REG_MHICTRL					0x38
> > +#define REG_MHISTATUS					0x48
> > +#define REG_CCABAP_LOWER				0x58
> > +#define REG_CCABAP_HIGHER				0x5c
> > +#define REG_ECABAP_LOWER				0x60
> > +#define REG_ECABAP_HIGHER				0x64
> > +#define REG_CRCBAP_LOWER				0x68
> > +#define REG_CRCBAP_HIGHER				0x6c
> > +#define REG_CRDB_LOWER					0x70
> > +#define REG_CRDB_HIGHER					0x74
> > +#define REG_MHICTRLBASE_LOWER				0x80
> > +#define REG_MHICTRLBASE_HIGHER				0x84
> > +#define REG_MHICTRLLIMIT_LOWER				0x88
> > +#define REG_MHICTRLLIMIT_HIGHER				0x8c
> > +#define REG_MHIDATABASE_LOWER				0x98
> > +#define REG_MHIDATABASE_HIGHER				0x9c
> > +#define REG_MHIDATALIMIT_LOWER				0xa0
> > +#define REG_MHIDATALIMIT_HIGHER				0xa4
> > +
> > +/* MHI BHI registers */
> > +#define REG_BHI_BHIVERSION_MINOR			0x00
> > +#define REG_BHI_BHIVERSION_MAJOR			0x04
> > +#define REG_BHI_IMGADDR_LOW				0x08
> > +#define REG_BHI_IMGADDR_HIGH				0x0c
> > +#define REG_BHI_IMGSIZE					0x10
> > +#define REG_BHI_RSVD1					0x14
> > +#define REG_BHI_IMGTXDB					0x18
> > +#define REG_BHI_RSVD2					0x1c
> > +#define REG_BHI_INTVEC					0x20
> > +#define REG_BHI_RSVD3					0x24
> > +#define REG_BHI_EXECENV					0x28
> > +#define REG_BHI_STATUS					0x2c
> > +#define REG_BHI_ERRCODE					0x30
> > +#define REG_BHI_ERRDBG1					0x34
> > +#define REG_BHI_ERRDBG2					0x38
> > +#define REG_BHI_ERRDBG3					0x3c
> > +#define REG_BHI_SERIALNU				0x40
> > +#define REG_BHI_SBLANTIROLLVER				0x44
> > +#define REG_BHI_NUMSEG					0x48
> > +#define REG_BHI_MSMHWID(n)				(0x4c + (0x4 * (n)))
> > +#define REG_BHI_OEMPKHASH(n)				(0x64 + (0x4 * (n)))
> > +#define REG_BHI_RSVD5					0xc4
> > +
> > +/* BHI register bits */
> > +#define BHI_TXDB_SEQNUM_BMSK				GENMASK(29, 0)
> > +#define BHI_TXDB_SEQNUM_SHFT				0
> > +#define BHI_STATUS_MASK					GENMASK(31, 30)
> > +#define BHI_STATUS_SHIFT				30
> > +#define BHI_STATUS_ERROR				0x03
> > +#define BHI_STATUS_SUCCESS				0x02
> > +#define BHI_STATUS_RESET				0x00
> > +
> > +/* MHI BHIE registers */
> > +#define REG_BHIE_MSMSOCID_OFFS				0x00
> > +#define REG_BHIE_TXVECADDR_LOW_OFFS			0x2c
> > +#define REG_BHIE_TXVECADDR_HIGH_OFFS			0x30
> > +#define REG_BHIE_TXVECSIZE_OFFS				0x34
> > +#define REG_BHIE_TXVECDB_OFFS				0x3c
> > +#define REG_BHIE_TXVECSTATUS_OFFS			0x44
> > +#define REG_BHIE_RXVECADDR_LOW_OFFS			0x60
> > +#define REG_BHIE_RXVECADDR_HIGH_OFFS			0x64
> > +#define REG_BHIE_RXVECSIZE_OFFS				0x68
> > +#define REG_BHIE_RXVECDB_OFFS				0x70
> > +#define REG_BHIE_RXVECSTATUS_OFFS			0x78
> > +
> > +/* BHIE register bits */
> > +#define BHIE_TXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
> > +#define BHIE_TXVECDB_SEQNUM_SHFT			0
> > +#define BHIE_TXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
> > +#define BHIE_TXVECSTATUS_SEQNUM_SHFT			0
> > +#define BHIE_TXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
> > +#define BHIE_TXVECSTATUS_STATUS_SHFT			30
> > +#define BHIE_TXVECSTATUS_STATUS_RESET			0x00
> > +#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL		0x02
> > +#define BHIE_TXVECSTATUS_STATUS_ERROR			0x03
> > +#define BHIE_RXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
> > +#define BHIE_RXVECDB_SEQNUM_SHFT			0
> > +#define BHIE_RXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
> > +#define BHIE_RXVECSTATUS_SEQNUM_SHFT			0
> > +#define BHIE_RXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
> > +#define BHIE_RXVECSTATUS_STATUS_SHFT			30
> > +#define BHIE_RXVECSTATUS_STATUS_RESET			0x00
> > +#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL		0x02
> > +#define BHIE_RXVECSTATUS_STATUS_ERROR			0x03
> > +
> > +/* MHI register bits */
> > +#define MHIREGLEN_MHIREGLEN_MASK			GENMASK(31, 0)
> > +#define MHIREGLEN_MHIREGLEN_SHIFT			0
> > +#define MHIVER_MHIVER_MASK				GENMASK(31, 0)
> > +#define MHIVER_MHIVER_SHIFT				0
> > +#define MHICFG_NHWER_MASK				GENMASK(31, 24)
> > +#define MHICFG_NHWER_SHIFT				24
> > +#define MHICFG_NER_MASK					GENMASK(23, 16)
> > +#define MHICFG_NER_SHIFT				16
> > +#define MHICFG_NHWCH_MASK				GENMASK(15, 8)
> > +#define MHICFG_NHWCH_SHIFT				8
> > +#define MHICFG_NCH_MASK					GENMASK(7, 0)
> > +#define MHICFG_NCH_SHIFT				0
> > +#define CHDBOFF_CHDBOFF_MASK				GENMASK(31, 0)
> > +#define CHDBOFF_CHDBOFF_SHIFT				0
> > +#define ERDBOFF_ERDBOFF_MASK				GENMASK(31, 0)
> > +#define ERDBOFF_ERDBOFF_SHIFT				0
> > +#define BHIOFF_BHIOFF_MASK				GENMASK(31, 0)
> > +#define BHIOFF_BHIOFF_SHIFT				0
> > +#define BHIEOFF_BHIEOFF_MASK				GENMASK(31, 0)
> > +#define BHIEOFF_BHIEOFF_SHIFT				0
> > +#define DEBUGOFF_DEBUGOFF_MASK				GENMASK(31, 0)
> > +#define DEBUGOFF_DEBUGOFF_SHIFT				0
> > +#define MHICTRL_MHISTATE_MASK				GENMASK(15, 8)
> > +#define MHICTRL_MHISTATE_SHIFT				8
> > +#define MHICTRL_RESET_MASK				BIT(1)
> > +#define MHICTRL_RESET_SHIFT				1
> > +#define MHISTATUS_MHISTATE_MASK				GENMASK(15, 8)
> > +#define MHISTATUS_MHISTATE_SHIFT			8
> > +#define MHISTATUS_SYSERR_MASK				BIT(2)
> > +#define MHISTATUS_SYSERR_SHIFT				2
> > +#define MHISTATUS_READY_MASK				BIT(0)
> > +#define MHISTATUS_READY_SHIFT				0
> > +#define CCABAP_LOWER_CCABAP_LOWER_MASK			GENMASK(31, 0)
> > +#define CCABAP_LOWER_CCABAP_LOWER_SHIFT			0
> > +#define CCABAP_HIGHER_CCABAP_HIGHER_MASK		GENMASK(31, 0)
> > +#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT		0
> > +#define ECABAP_LOWER_ECABAP_LOWER_MASK			GENMASK(31, 0)
> > +#define ECABAP_LOWER_ECABAP_LOWER_SHIFT			0
> > +#define ECABAP_HIGHER_ECABAP_HIGHER_MASK		GENMASK(31, 0)
> > +#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT		0
> > +#define CRCBAP_LOWER_CRCBAP_LOWER_MASK			GENMASK(31, 0)
> > +#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT			0
> > +#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK		GENMASK(31, 0)
> > +#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT		0
> > +#define CRDB_LOWER_CRDB_LOWER_MASK			GENMASK(31, 0)
> > +#define CRDB_LOWER_CRDB_LOWER_SHIFT			0
> > +#define CRDB_HIGHER_CRDB_HIGHER_MASK			GENMASK(31, 0)
> > +#define CRDB_HIGHER_CRDB_HIGHER_SHIFT			0
> > +#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK	GENMASK(31, 0)
> > +#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT	0
> > +#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK	GENMASK(31, 0)
> > +#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT	0
> > +#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK	GENMASK(31, 0)
> > +#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT	0
> > +#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK	GENMASK(31, 0)
> > +#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT	0
> > +#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK	GENMASK(31, 0)
> > +#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT	0
> > +#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK	GENMASK(31, 0)
> > +#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT	0
> > +#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK	GENMASK(31, 0)
> > +#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT	0
> > +#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK	GENMASK(31, 0)
> > +#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT	0
> > +
> >   /* Command Ring Element macros */
> >   /* No operation command */
> > -#define MHI_TRE_CMD_NOOP_PTR (0)
> > -#define MHI_TRE_CMD_NOOP_DWORD0 (0)
> > -#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
> > +#define MHI_TRE_CMD_NOOP_PTR				0
> > +#define MHI_TRE_CMD_NOOP_DWORD0				0
> > +#define MHI_TRE_CMD_NOOP_DWORD1				cpu_to_le32(MHI_CMD_NOP << 16)
> >   /* Channel reset command */
> > -#define MHI_TRE_CMD_RESET_PTR (0)
> > -#define MHI_TRE_CMD_RESET_DWORD0 (0)
> > -#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > -					(MHI_CMD_RESET_CHAN << 16)))
> > +#define MHI_TRE_CMD_RESET_PTR				0
> > +#define MHI_TRE_CMD_RESET_DWORD0			0
> > +#define MHI_TRE_CMD_RESET_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
> > +							(MHI_CMD_RESET_CHAN << 16)))
> >   /* Channel stop command */
> > -#define MHI_TRE_CMD_STOP_PTR (0)
> > -#define MHI_TRE_CMD_STOP_DWORD0 (0)
> > -#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > -				       (MHI_CMD_STOP_CHAN << 16)))
> > +#define MHI_TRE_CMD_STOP_PTR				0
> > +#define MHI_TRE_CMD_STOP_DWORD0				0
> > +#define MHI_TRE_CMD_STOP_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
> > +							(MHI_CMD_STOP_CHAN << 16)))
> >   /* Channel start command */
> > -#define MHI_TRE_CMD_START_PTR (0)
> > -#define MHI_TRE_CMD_START_DWORD0 (0)
> > -#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > -					(MHI_CMD_START_CHAN << 16)))
> > +#define MHI_TRE_CMD_START_PTR				0
> > +#define MHI_TRE_CMD_START_DWORD0			0
> > +#define MHI_TRE_CMD_START_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
> > +							(MHI_CMD_START_CHAN << 16)))
> > -#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
> > -#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > -#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> > +#define MHI_TRE_GET_DWORD(tre, word)			le32_to_cpu((tre)->dword[(word)])
> > +#define MHI_TRE_GET_CMD_CHID(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_CMD_TYPE(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> >   /* Event descriptor macros */
> > -#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
> > -#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
> > -#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
> > -#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
> > -#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> > -#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> > -#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
> > -#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
> > -#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> > -#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> > -#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > -#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
> > +/* Transfer completion event */
> > +#define MHI_TRE_EV_PTR(ptr)				cpu_to_le64(ptr)
> > +#define MHI_TRE_EV_DWORD0(code, len)			cpu_to_le32((code << 24) | len)
> > +#define MHI_TRE_EV_DWORD1(chid, type)			cpu_to_le32((chid << 24) | (type << 16))
> > +#define MHI_TRE_GET_EV_PTR(tre)				le64_to_cpu((tre)->ptr)
> > +#define MHI_TRE_GET_EV_CODE(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_EV_LEN(tre)				(MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> > +#define MHI_TRE_GET_EV_CHID(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_EV_TYPE(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> > +#define MHI_TRE_GET_EV_STATE(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_EV_EXECENV(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_EV_SEQ(tre)				MHI_TRE_GET_DWORD(tre, 0)
> > +#define MHI_TRE_GET_EV_TIME(tre)			MHI_TRE_GET_EV_PTR(tre)
> > +#define MHI_TRE_GET_EV_COOKIE(tre)			lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> > +#define MHI_TRE_GET_EV_VEID(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> > +#define MHI_TRE_GET_EV_LINKSPEED(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > +#define MHI_TRE_GET_EV_LINKWIDTH(tre)			(MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
> >   /* Transfer descriptor macros */
> > -#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
> > -#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
> > -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
> > -	| (ieot << 9) | (ieob << 8) | chain))
> > +#define MHI_TRE_DATA_PTR(ptr)				cpu_to_le64(ptr)
> > +#define MHI_TRE_DATA_DWORD0(len)			cpu_to_le32(len & MHI_MAX_MTU)
> > +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain)	(cpu_to_le32((2 << 16) | (bei << 10) \
> > +							| (ieot << 9) | (ieob << 8) | chain))
> >   /* RSC transfer descriptor macros */
> > -#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
> > -#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
> > -#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
> > +#define MHI_RSCTRE_DATA_PTR(ptr, len)			cpu_to_le64(((u64)len << 48) | ptr)
> > +#define MHI_RSCTRE_DATA_DWORD0(cookie)			cpu_to_le32(cookie)
> > +#define MHI_RSCTRE_DATA_DWORD1				cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16)
> >   enum mhi_pkt_type {
> >   	MHI_PKT_TYPE_INVALID = 0x0,
> > diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
> > index 622de6ba1a0b..762055a6ec9f 100644
> > --- a/drivers/bus/mhi/host/internal.h
> > +++ b/drivers/bus/mhi/host/internal.h
> > @@ -11,197 +11,84 @@
> >   extern struct bus_type mhi_bus_type;
> > -#define MHIREGLEN (0x0)
> > -#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
> > -#define MHIREGLEN_MHIREGLEN_SHIFT (0)
> > -
> > -#define MHIVER (0x8)
> > -#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
> > -#define MHIVER_MHIVER_SHIFT (0)
> > -
> > -#define MHICFG (0x10)
> > -#define MHICFG_NHWER_MASK (0xFF000000)
> > -#define MHICFG_NHWER_SHIFT (24)
> > -#define MHICFG_NER_MASK (0xFF0000)
> > -#define MHICFG_NER_SHIFT (16)
> > -#define MHICFG_NHWCH_MASK (0xFF00)
> > -#define MHICFG_NHWCH_SHIFT (8)
> > -#define MHICFG_NCH_MASK (0xFF)
> > -#define MHICFG_NCH_SHIFT (0)
> > -
> > -#define CHDBOFF (0x18)
> > -#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
> > -#define CHDBOFF_CHDBOFF_SHIFT (0)
> > -
> > -#define ERDBOFF (0x20)
> > -#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
> > -#define ERDBOFF_ERDBOFF_SHIFT (0)
> > -
> > -#define BHIOFF (0x28)
> > -#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
> > -#define BHIOFF_BHIOFF_SHIFT (0)
> > -
> > -#define BHIEOFF (0x2C)
> > -#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
> > -#define BHIEOFF_BHIEOFF_SHIFT (0)
> > -
> > -#define DEBUGOFF (0x30)
> > -#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
> > -#define DEBUGOFF_DEBUGOFF_SHIFT (0)
> > -
> > -#define MHICTRL (0x38)
> > -#define MHICTRL_MHISTATE_MASK (0x0000FF00)
> > -#define MHICTRL_MHISTATE_SHIFT (8)
> > -#define MHICTRL_RESET_MASK (0x2)
> > -#define MHICTRL_RESET_SHIFT (1)
> > -
> > -#define MHISTATUS (0x48)
> > -#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
> > -#define MHISTATUS_MHISTATE_SHIFT (8)
> > -#define MHISTATUS_SYSERR_MASK (0x4)
> > -#define MHISTATUS_SYSERR_SHIFT (2)
> > -#define MHISTATUS_READY_MASK (0x1)
> > -#define MHISTATUS_READY_SHIFT (0)
> > -
> > -#define CCABAP_LOWER (0x58)
> > -#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
> > -#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
> > -
> > -#define CCABAP_HIGHER (0x5C)
> > -#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
> > -#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
> > -
> > -#define ECABAP_LOWER (0x60)
> > -#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
> > -#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
> > -
> > -#define ECABAP_HIGHER (0x64)
> > -#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
> > -#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
> > -
> > -#define CRCBAP_LOWER (0x68)
> > -#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
> > -#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
> > -
> > -#define CRCBAP_HIGHER (0x6C)
> > -#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
> > -#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
> > -
> > -#define CRDB_LOWER (0x70)
> > -#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
> > -#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
> > -
> > -#define CRDB_HIGHER (0x74)
> > -#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
> > -#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
> > -
> > -#define MHICTRLBASE_LOWER (0x80)
> > -#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
> > -#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
> > -
> > -#define MHICTRLBASE_HIGHER (0x84)
> > -#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
> > -#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
> > -
> > -#define MHICTRLLIMIT_LOWER (0x88)
> > -#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
> > -#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
> > -
> > -#define MHICTRLLIMIT_HIGHER (0x8C)
> > -#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
> > -#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
> > -
> > -#define MHIDATABASE_LOWER (0x98)
> > -#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
> > -#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
> > -
> > -#define MHIDATABASE_HIGHER (0x9C)
> > -#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
> > -#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
> > -
> > -#define MHIDATALIMIT_LOWER (0xA0)
> > -#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
> > -#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
> > -
> > -#define MHIDATALIMIT_HIGHER (0xA4)
> > -#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
> > -#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
> > +/* MHI registers */
> > +#define MHIREGLEN			REG_MHIREGLEN
> > +#define MHIVER				REG_MHIVER
> > +#define MHICFG				REG_MHICFG
> > +#define CHDBOFF				REG_CHDBOFF
> > +#define ERDBOFF				REG_ERDBOFF
> > +#define BHIOFF				REG_BHIOFF
> > +#define BHIEOFF				REG_BHIEOFF
> > +#define DEBUGOFF			REG_DEBUGOFF
> > +#define MHICTRL				REG_MHICTRL
> > +#define MHISTATUS			REG_MHISTATUS
> > +#define CCABAP_LOWER			REG_CCABAP_LOWER
> > +#define CCABAP_HIGHER			REG_CCABAP_HIGHER
> > +#define ECABAP_LOWER			REG_ECABAP_LOWER
> > +#define ECABAP_HIGHER			REG_ECABAP_HIGHER
> > +#define CRCBAP_LOWER			REG_CRCBAP_LOWER
> > +#define CRCBAP_HIGHER			REG_CRCBAP_HIGHER
> > +#define CRDB_LOWER			REG_CRDB_LOWER
> > +#define CRDB_HIGHER			REG_CRDB_HIGHER
> > +#define MHICTRLBASE_LOWER		REG_MHICTRLBASE_LOWER
> > +#define MHICTRLBASE_HIGHER		REG_MHICTRLBASE_HIGHER
> > +#define MHICTRLLIMIT_LOWER		REG_MHICTRLLIMIT_LOWER
> > +#define MHICTRLLIMIT_HIGHER		REG_MHICTRLLIMIT_HIGHER
> > +#define MHIDATABASE_LOWER		REG_MHIDATABASE_LOWER
> > +#define MHIDATABASE_HIGHER		REG_MHIDATABASE_HIGHER
> > +#define MHIDATALIMIT_LOWER		REG_MHIDATALIMIT_LOWER
> > +#define MHIDATALIMIT_HIGHER		REG_MHIDATALIMIT_HIGHER
> >   /* Host request register */
> > -#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
> > -#define MHI_SOC_RESET_REQ BIT(0)
> > -
> > -/* MHI BHI offfsets */
> > -#define BHI_BHIVERSION_MINOR (0x00)
> > -#define BHI_BHIVERSION_MAJOR (0x04)
> > -#define BHI_IMGADDR_LOW (0x08)
> > -#define BHI_IMGADDR_HIGH (0x0C)
> > -#define BHI_IMGSIZE (0x10)
> > -#define BHI_RSVD1 (0x14)
> > -#define BHI_IMGTXDB (0x18)
> > -#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
> > -#define BHI_TXDB_SEQNUM_SHFT (0)
> > -#define BHI_RSVD2 (0x1C)
> > -#define BHI_INTVEC (0x20)
> > -#define BHI_RSVD3 (0x24)
> > -#define BHI_EXECENV (0x28)
> > -#define BHI_STATUS (0x2C)
> > -#define BHI_ERRCODE (0x30)
> > -#define BHI_ERRDBG1 (0x34)
> > -#define BHI_ERRDBG2 (0x38)
> > -#define BHI_ERRDBG3 (0x3C)
> > -#define BHI_SERIALNU (0x40)
> > -#define BHI_SBLANTIROLLVER (0x44)
> > -#define BHI_NUMSEG (0x48)
> > -#define BHI_MSMHWID(n) (0x4C + (0x4 * (n)))
> > -#define BHI_OEMPKHASH(n) (0x64 + (0x4 * (n)))
> > -#define BHI_RSVD5 (0xC4)
> > -#define BHI_STATUS_MASK (0xC0000000)
> > -#define BHI_STATUS_SHIFT (30)
> > -#define BHI_STATUS_ERROR (3)
> > -#define BHI_STATUS_SUCCESS (2)
> > -#define BHI_STATUS_RESET (0)
> > -
> > -/* MHI BHIE offsets */
> > -#define BHIE_MSMSOCID_OFFS (0x0000)
> > -#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
> > -#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
> > -#define BHIE_TXVECSIZE_OFFS (0x0034)
> > -#define BHIE_TXVECDB_OFFS (0x003C)
> > -#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
> > -#define BHIE_TXVECDB_SEQNUM_SHFT (0)
> > -#define BHIE_TXVECSTATUS_OFFS (0x0044)
> > -#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
> > -#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
> > -#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
> > -#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
> > -#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
> > -#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
> > -#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
> > -#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
> > -#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
> > -#define BHIE_RXVECSIZE_OFFS (0x0068)
> > -#define BHIE_RXVECDB_OFFS (0x0070)
> > -#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
> > -#define BHIE_RXVECDB_SEQNUM_SHFT (0)
> > -#define BHIE_RXVECSTATUS_OFFS (0x0078)
> > -#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
> > -#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
> > -#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
> > -#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
> > -#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
> > -#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
> > -#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
> > -
> > -#define SOC_HW_VERSION_OFFS (0x224)
> > -#define SOC_HW_VERSION_FAM_NUM_BMSK (0xF0000000)
> > -#define SOC_HW_VERSION_FAM_NUM_SHFT (28)
> > -#define SOC_HW_VERSION_DEV_NUM_BMSK (0x0FFF0000)
> > -#define SOC_HW_VERSION_DEV_NUM_SHFT (16)
> > -#define SOC_HW_VERSION_MAJOR_VER_BMSK (0x0000FF00)
> > -#define SOC_HW_VERSION_MAJOR_VER_SHFT (8)
> > -#define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
> > -#define SOC_HW_VERSION_MINOR_VER_SHFT (0)
> > +#define MHI_SOC_RESET_REQ_OFFSET	0xb0
> > +#define MHI_SOC_RESET_REQ		BIT(0)
> > +
> > +/* MHI BHI registers */
> > +#define BHI_BHIVERSION_MINOR		REG_BHI_BHIVERSION_MINOR
> > +#define BHI_BHIVERSION_MAJOR		REG_BHI_BHIVERSION_MAJOR
> > +#define BHI_IMGADDR_LOW			REG_BHI_IMGADDR_LOW
> > +#define BHI_IMGADDR_HIGH		REG_BHI_IMGADDR_HIGH
> > +#define BHI_IMGSIZE			REG_BHI_IMGSIZE
> > +#define BHI_RSVD1			REG_BHI_RSVD1
> > +#define BHI_IMGTXDB			REG_BHI_IMGTXDB
> > +#define BHI_RSVD2			REG_BHI_RSVD2
> > +#define BHI_INTVEC			REG_BHI_INTVEC
> > +#define BHI_RSVD3			REG_BHI_RSVD3
> > +#define BHI_EXECENV			REG_BHI_EXECENV
> > +#define BHI_STATUS			REG_BHI_STATUS
> > +#define BHI_ERRCODE			REG_BHI_ERRCODE
> > +#define BHI_ERRDBG1			REG_BHI_ERRDBG1
> > +#define BHI_ERRDBG2			REG_BHI_ERRDBG2
> > +#define BHI_ERRDBG3			REG_BHI_ERRDBG3
> > +#define BHI_SERIALNU			REG_BHI_SERIALNU
> > +#define BHI_SBLANTIROLLVER		REG_BHI_SBLANTIROLLVER
> > +#define BHI_NUMSEG			REG_BHI_NUMSEG
> > +#define BHI_MSMHWID(n)			REG_BHI_MSMHWID(n)
> > +#define BHI_OEMPKHASH(n)		REG_BHI_OEMPKHASH(n)
> > +#define BHI_RSVD5			REG_BHI_RSVD5
> > +
> > +/* MHI BHIE registers */
> > +#define BHIE_MSMSOCID_OFFS		REG_BHIE_MSMSOCID_OFFS
> > +#define BHIE_TXVECADDR_LOW_OFFS		REG_BHIE_TXVECADDR_LOW_OFFS
> > +#define BHIE_TXVECADDR_HIGH_OFFS	REG_BHIE_TXVECADDR_HIGH_OFFS
> > +#define BHIE_TXVECSIZE_OFFS		REG_BHIE_TXVECSIZE_OFFS
> > +#define BHIE_TXVECDB_OFFS		REG_BHIE_TXVECDB_OFFS
> > +#define BHIE_TXVECSTATUS_OFFS		REG_BHIE_TXVECSTATUS_OFFS
> > +#define BHIE_RXVECADDR_LOW_OFFS		REG_BHIE_RXVECADDR_LOW_OFFS
> > +#define BHIE_RXVECADDR_HIGH_OFFS	REG_BHIE_RXVECADDR_HIGH_OFFS
> > +#define BHIE_RXVECSIZE_OFFS		REG_BHIE_RXVECSIZE_OFFS
> > +#define BHIE_RXVECDB_OFFS		REG_BHIE_RXVECDB_OFFS
> > +#define BHIE_RXVECSTATUS_OFFS		REG_BHIE_RXVECSTATUS_OFFS
> > +
> > +#define SOC_HW_VERSION_OFFS		0x224
> > +#define SOC_HW_VERSION_FAM_NUM_BMSK	GENMASK(31, 28)
> > +#define SOC_HW_VERSION_FAM_NUM_SHFT	28
> > +#define SOC_HW_VERSION_DEV_NUM_BMSK	GENMASK(27, 16)
> > +#define SOC_HW_VERSION_DEV_NUM_SHFT	16
> > +#define SOC_HW_VERSION_MAJOR_VER_BMSK	GENMASK(15, 8)
> > +#define SOC_HW_VERSION_MAJOR_VER_SHFT	8
> > +#define SOC_HW_VERSION_MINOR_VER_BMSK	GENMASK(7, 0)
> > +#define SOC_HW_VERSION_MINOR_VER_SHFT	0
> >   struct mhi_ctxt {
> >   	struct mhi_event_ctxt *er_ctxt;
> 

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers
  2022-02-15  1:04   ` Hemant Kumar
@ 2022-02-16 17:33     ` Manivannan Sadhasivam
  0 siblings, 0 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-16 17:33 UTC (permalink / raw)
  To: Hemant Kumar
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel, elder

On Mon, Feb 14, 2022 at 05:04:23PM -0800, Hemant Kumar wrote:
> Hi Mani,
> 
> On 2/12/2022 10:21 AM, Manivannan Sadhasivam wrote:
> > This commit adds support for registering MHI endpoint controller drivers
> > with the MHI endpoint stack. MHI endpoint controller drivers manage
> > the interaction with host machines such as x86. They also act as the
> > MHI endpoint bus master, in charge of managing the physical link between the
> > host and endpoint device.
> > 
> > The endpoint controller driver encloses all information about the
> > underlying physical bus like PCIe. The registration process involves
> > parsing the channel configuration and allocating an MHI EP device.
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > ---
> >   drivers/bus/mhi/Kconfig       |   1 +
> >   drivers/bus/mhi/Makefile      |   3 +
> >   drivers/bus/mhi/ep/Kconfig    |  10 ++
> >   drivers/bus/mhi/ep/Makefile   |   2 +
> >   drivers/bus/mhi/ep/internal.h | 160 +++++++++++++++++++++++
> >   drivers/bus/mhi/ep/main.c     | 234 ++++++++++++++++++++++++++++++++++
> >   include/linux/mhi_ep.h        | 143 +++++++++++++++++++++
> >   7 files changed, 553 insertions(+)
> >   create mode 100644 drivers/bus/mhi/ep/Kconfig
> >   create mode 100644 drivers/bus/mhi/ep/Makefile
> >   create mode 100644 drivers/bus/mhi/ep/internal.h
> >   create mode 100644 drivers/bus/mhi/ep/main.c
> >   create mode 100644 include/linux/mhi_ep.h
> > 

[...]

> > +#define MHI_CTRL_INT_STATUS_A7			0x4
> can we get rid of all instances of "_A7"? This corresponds to Cortex-A7,
> which can change in the future. At the MHI core layer, we can avoid this naming
> convention, even though the register names include it now and may change
> to something different later. This MHI EP driver would still be used with
> those newer Cortex versions.

Since these registers are not documented by the spec, I just went with
the register definitions available for SDX55. But you and Alex are both
right that they may change in the future.

I'll remove the A7 suffix.

Thanks,
Mani

> > +#define MHI_CTRL_INT_STATUS_A7_MSK		BIT(0)
> > +#define MHI_CTRL_INT_STATUS_CRDB_MSK		BIT(1)
> > +#define MHI_CHDB_INT_STATUS_A7_n(n)		(0x28 + 0x4 * (n))
> > +#define MHI_ERDB_INT_STATUS_A7_n(n)		(0x38 + 0x4 * (n))
> > +
> [..]
> 
> -- 
> The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum, a
> Linux Foundation Collaborative Project

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 06/25] bus: mhi: Cleanup the register definitions used in headers
  2022-02-16 17:21     ` Manivannan Sadhasivam
@ 2022-02-16 17:43       ` Manivannan Sadhasivam
  0 siblings, 0 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-16 17:43 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Wed, Feb 16, 2022 at 10:51:45PM +0530, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 02:02:28PM -0600, Alex Elder wrote:
> > On 2/12/22 12:20 PM, Manivannan Sadhasivam wrote:
> > > Cleanup includes:
> > > 
> > > 1. Moving the MHI register definitions to common.h header with REG_ prefix
> > >     and using them in the host/internal.h file as an alias. This makes it
> > >     possible to reuse the register definitions in EP stack that differs by
> > >     a fixed offset.
> > 
> > I like that you're doing this.  But I don't see the point of this
> > kind of definition, made in "drivers/bus/mhi/host/internal.h":
> > 
> >   #define MHIREGLEN	REG_MHIREGLEN
> > 
> > Just use REG_MHIREGLEN in the host code too.  (Or use MHIREGLEN in
> > both places, whichever you prefer.)
> > 
> 
> My intention is to use the original MHI register definitions in both
> host and endpoint, so the REG_ prefix acts like an overlay here. Earlier I
> was defining the MHI registers separately in both host and endpoint.
> 
> But I came up with this approach after your v2 review.
> 

Hmm, I get your suggestion now; I just looked at patch 08/25. Please
ignore my comment above.

Thanks,
Mani

> > 
> > > 2. Using the GENMASK macro for masks
> > 
> > Great!
> > 
> > > 3. Removing brackets for single values
> > 
> > They're normally called "parentheses."  Brackets are more typically []
> > (and {} are "braces", though that's not always the case).
> > 
> 
> Ah, sorry for that. I used to call them "round brackets", so that came in ;)
> 
> > > 4. Using lowercase for hex values
> > 
> > I think I saw a few upper case hex values in another patch.
> > Not a big deal, just FYI.
> > 
> 
> Will change them.
> 
> > > 5. Using two digits for hex values where applicable
> > 
> > I think I suggested most of these things, so of course
> > they look awesome to me.
> > 
> > You could use bitfield accessor macros in a few more places.
> > For example, this:
> > 
> > #define MHI_TRE_CMD_RESET_DWORD1(chid)  (cpu_to_le32((chid << 24) | \
> > 					    (MHI_CMD_RESET_CHAN << 16)))
> > 
> > Could use something more like this:
> > 
> > #define MHI_CMD_CHANNEL_MASK	GENMASK(31, 24)
> > #define MHI_CMD_COMMAND_MASK	GENMASK(23, 16)
> > 
> > #define MHI_TRE_CMD_RESET_DWORD1(chid) \
> > 	(le32_encode_bits(chid, MHI_CMD_CHANNEL_MASK) | \	
> > 	 le32_encode_bits(MHI_CMD_RESET_CHAN, MHI_CMD_CMD_MASK))
> > 
> 
> This adds more code and also makes it a bit harder to read. I'd
> prefer to stick with open coding for these.
> 
> Thanks,
> Mani
> 
> > (But of course I already said I preferred CPU byte order on
> > these values...)
> > 
> > I would like to see you get rid of one-to-one definitions
> > I mentioned at the top.  I haven't done an exhaustive check
> > of all the symbols, but this looks good generally, so:
> > 
> > Reviewed-by: Alex Elder <elder@linaro.org>
> > 
> > > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > > ---
> > >   drivers/bus/mhi/common.h        | 243 ++++++++++++++++++++++++-----
> > >   drivers/bus/mhi/host/internal.h | 265 +++++++++-----------------------
> > >   2 files changed, 278 insertions(+), 230 deletions(-)
> > > 
> > > diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> > > index 288e47168649..f226f06d4ff9 100644
> > > --- a/drivers/bus/mhi/common.h
> > > +++ b/drivers/bus/mhi/common.h
> > > @@ -9,62 +9,223 @@
> > >   #include <linux/mhi.h>
> > > +/* MHI registers */
> > > +#define REG_MHIREGLEN					0x00
> > > +#define REG_MHIVER					0x08
> > > +#define REG_MHICFG					0x10
> > > +#define REG_CHDBOFF					0x18
> > > +#define REG_ERDBOFF					0x20
> > > +#define REG_BHIOFF					0x28
> > > +#define REG_BHIEOFF					0x2c
> > > +#define REG_DEBUGOFF					0x30
> > > +#define REG_MHICTRL					0x38
> > > +#define REG_MHISTATUS					0x48
> > > +#define REG_CCABAP_LOWER				0x58
> > > +#define REG_CCABAP_HIGHER				0x5c
> > > +#define REG_ECABAP_LOWER				0x60
> > > +#define REG_ECABAP_HIGHER				0x64
> > > +#define REG_CRCBAP_LOWER				0x68
> > > +#define REG_CRCBAP_HIGHER				0x6c
> > > +#define REG_CRDB_LOWER					0x70
> > > +#define REG_CRDB_HIGHER					0x74
> > > +#define REG_MHICTRLBASE_LOWER				0x80
> > > +#define REG_MHICTRLBASE_HIGHER				0x84
> > > +#define REG_MHICTRLLIMIT_LOWER				0x88
> > > +#define REG_MHICTRLLIMIT_HIGHER				0x8c
> > > +#define REG_MHIDATABASE_LOWER				0x98
> > > +#define REG_MHIDATABASE_HIGHER				0x9c
> > > +#define REG_MHIDATALIMIT_LOWER				0xa0
> > > +#define REG_MHIDATALIMIT_HIGHER				0xa4
> > > +
> > > +/* MHI BHI registers */
> > > +#define REG_BHI_BHIVERSION_MINOR			0x00
> > > +#define REG_BHI_BHIVERSION_MAJOR			0x04
> > > +#define REG_BHI_IMGADDR_LOW				0x08
> > > +#define REG_BHI_IMGADDR_HIGH				0x0c
> > > +#define REG_BHI_IMGSIZE					0x10
> > > +#define REG_BHI_RSVD1					0x14
> > > +#define REG_BHI_IMGTXDB					0x18
> > > +#define REG_BHI_RSVD2					0x1c
> > > +#define REG_BHI_INTVEC					0x20
> > > +#define REG_BHI_RSVD3					0x24
> > > +#define REG_BHI_EXECENV					0x28
> > > +#define REG_BHI_STATUS					0x2c
> > > +#define REG_BHI_ERRCODE					0x30
> > > +#define REG_BHI_ERRDBG1					0x34
> > > +#define REG_BHI_ERRDBG2					0x38
> > > +#define REG_BHI_ERRDBG3					0x3c
> > > +#define REG_BHI_SERIALNU				0x40
> > > +#define REG_BHI_SBLANTIROLLVER				0x44
> > > +#define REG_BHI_NUMSEG					0x48
> > > +#define REG_BHI_MSMHWID(n)				(0x4c + (0x4 * (n)))
> > > +#define REG_BHI_OEMPKHASH(n)				(0x64 + (0x4 * (n)))
> > > +#define REG_BHI_RSVD5					0xc4
> > > +
> > > +/* BHI register bits */
> > > +#define BHI_TXDB_SEQNUM_BMSK				GENMASK(29, 0)
> > > +#define BHI_TXDB_SEQNUM_SHFT				0
> > > +#define BHI_STATUS_MASK					GENMASK(31, 30)
> > > +#define BHI_STATUS_SHIFT				30
> > > +#define BHI_STATUS_ERROR				0x03
> > > +#define BHI_STATUS_SUCCESS				0x02
> > > +#define BHI_STATUS_RESET				0x00
> > > +
> > > +/* MHI BHIE registers */
> > > +#define REG_BHIE_MSMSOCID_OFFS				0x00
> > > +#define REG_BHIE_TXVECADDR_LOW_OFFS			0x2c
> > > +#define REG_BHIE_TXVECADDR_HIGH_OFFS			0x30
> > > +#define REG_BHIE_TXVECSIZE_OFFS				0x34
> > > +#define REG_BHIE_TXVECDB_OFFS				0x3c
> > > +#define REG_BHIE_TXVECSTATUS_OFFS			0x44
> > > +#define REG_BHIE_RXVECADDR_LOW_OFFS			0x60
> > > +#define REG_BHIE_RXVECADDR_HIGH_OFFS			0x64
> > > +#define REG_BHIE_RXVECSIZE_OFFS				0x68
> > > +#define REG_BHIE_RXVECDB_OFFS				0x70
> > > +#define REG_BHIE_RXVECSTATUS_OFFS			0x78
> > > +
> > > +/* BHIE register bits */
> > > +#define BHIE_TXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
> > > +#define BHIE_TXVECDB_SEQNUM_SHFT			0
> > > +#define BHIE_TXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
> > > +#define BHIE_TXVECSTATUS_SEQNUM_SHFT			0
> > > +#define BHIE_TXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
> > > +#define BHIE_TXVECSTATUS_STATUS_SHFT			30
> > > +#define BHIE_TXVECSTATUS_STATUS_RESET			0x00
> > > +#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL		0x02
> > > +#define BHIE_TXVECSTATUS_STATUS_ERROR			0x03
> > > +#define BHIE_RXVECDB_SEQNUM_BMSK			GENMASK(29, 0)
> > > +#define BHIE_RXVECDB_SEQNUM_SHFT			0
> > > +#define BHIE_RXVECSTATUS_SEQNUM_BMSK			GENMASK(29, 0)
> > > +#define BHIE_RXVECSTATUS_SEQNUM_SHFT			0
> > > +#define BHIE_RXVECSTATUS_STATUS_BMSK			GENMASK(31, 30)
> > > +#define BHIE_RXVECSTATUS_STATUS_SHFT			30
> > > +#define BHIE_RXVECSTATUS_STATUS_RESET			0x00
> > > +#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL		0x02
> > > +#define BHIE_RXVECSTATUS_STATUS_ERROR			0x03
> > > +
> > > +/* MHI register bits */
> > > +#define MHIREGLEN_MHIREGLEN_MASK			GENMASK(31, 0)
> > > +#define MHIREGLEN_MHIREGLEN_SHIFT			0
> > > +#define MHIVER_MHIVER_MASK				GENMASK(31, 0)
> > > +#define MHIVER_MHIVER_SHIFT				0
> > > +#define MHICFG_NHWER_MASK				GENMASK(31, 24)
> > > +#define MHICFG_NHWER_SHIFT				24
> > > +#define MHICFG_NER_MASK					GENMASK(23, 16)
> > > +#define MHICFG_NER_SHIFT				16
> > > +#define MHICFG_NHWCH_MASK				GENMASK(15, 8)
> > > +#define MHICFG_NHWCH_SHIFT				8
> > > +#define MHICFG_NCH_MASK					GENMASK(7, 0)
> > > +#define MHICFG_NCH_SHIFT				0
> > > +#define CHDBOFF_CHDBOFF_MASK				GENMASK(31, 0)
> > > +#define CHDBOFF_CHDBOFF_SHIFT				0
> > > +#define ERDBOFF_ERDBOFF_MASK				GENMASK(31, 0)
> > > +#define ERDBOFF_ERDBOFF_SHIFT				0
> > > +#define BHIOFF_BHIOFF_MASK				GENMASK(31, 0)
> > > +#define BHIOFF_BHIOFF_SHIFT				0
> > > +#define BHIEOFF_BHIEOFF_MASK				GENMASK(31, 0)
> > > +#define BHIEOFF_BHIEOFF_SHIFT				0
> > > +#define DEBUGOFF_DEBUGOFF_MASK				GENMASK(31, 0)
> > > +#define DEBUGOFF_DEBUGOFF_SHIFT				0
> > > +#define MHICTRL_MHISTATE_MASK				GENMASK(15, 8)
> > > +#define MHICTRL_MHISTATE_SHIFT				8
> > > +#define MHICTRL_RESET_MASK				BIT(1)
> > > +#define MHICTRL_RESET_SHIFT				1
> > > +#define MHISTATUS_MHISTATE_MASK				GENMASK(15, 8)
> > > +#define MHISTATUS_MHISTATE_SHIFT			8
> > > +#define MHISTATUS_SYSERR_MASK				BIT(2)
> > > +#define MHISTATUS_SYSERR_SHIFT				2
> > > +#define MHISTATUS_READY_MASK				BIT(0)
> > > +#define MHISTATUS_READY_SHIFT				0
> > > +#define CCABAP_LOWER_CCABAP_LOWER_MASK			GENMASK(31, 0)
> > > +#define CCABAP_LOWER_CCABAP_LOWER_SHIFT			0
> > > +#define CCABAP_HIGHER_CCABAP_HIGHER_MASK		GENMASK(31, 0)
> > > +#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT		0
> > > +#define ECABAP_LOWER_ECABAP_LOWER_MASK			GENMASK(31, 0)
> > > +#define ECABAP_LOWER_ECABAP_LOWER_SHIFT			0
> > > +#define ECABAP_HIGHER_ECABAP_HIGHER_MASK		GENMASK(31, 0)
> > > +#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT		0
> > > +#define CRCBAP_LOWER_CRCBAP_LOWER_MASK			GENMASK(31, 0)
> > > +#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT			0
> > > +#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK		GENMASK(31, 0)
> > > +#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT		0
> > > +#define CRDB_LOWER_CRDB_LOWER_MASK			GENMASK(31, 0)
> > > +#define CRDB_LOWER_CRDB_LOWER_SHIFT			0
> > > +#define CRDB_HIGHER_CRDB_HIGHER_MASK			GENMASK(31, 0)
> > > +#define CRDB_HIGHER_CRDB_HIGHER_SHIFT			0
> > > +#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK	GENMASK(31, 0)
> > > +#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT	0
> > > +#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK	GENMASK(31, 0)
> > > +#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT	0
> > > +#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK	GENMASK(31, 0)
> > > +#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT	0
> > > +#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK	GENMASK(31, 0)
> > > +#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT	0
> > > +#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK	GENMASK(31, 0)
> > > +#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT	0
> > > +#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK	GENMASK(31, 0)
> > > +#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT	0
> > > +#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK	GENMASK(31, 0)
> > > +#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT	0
> > > +#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK	GENMASK(31, 0)
> > > +#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT	0
> > > +
> > >   /* Command Ring Element macros */
> > >   /* No operation command */
> > > -#define MHI_TRE_CMD_NOOP_PTR (0)
> > > -#define MHI_TRE_CMD_NOOP_DWORD0 (0)
> > > -#define MHI_TRE_CMD_NOOP_DWORD1 (cpu_to_le32(MHI_CMD_NOP << 16))
> > > +#define MHI_TRE_CMD_NOOP_PTR				0
> > > +#define MHI_TRE_CMD_NOOP_DWORD0				0
> > > +#define MHI_TRE_CMD_NOOP_DWORD1				cpu_to_le32(MHI_CMD_NOP << 16)
> > >   /* Channel reset command */
> > > -#define MHI_TRE_CMD_RESET_PTR (0)
> > > -#define MHI_TRE_CMD_RESET_DWORD0 (0)
> > > -#define MHI_TRE_CMD_RESET_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > > -					(MHI_CMD_RESET_CHAN << 16)))
> > > +#define MHI_TRE_CMD_RESET_PTR				0
> > > +#define MHI_TRE_CMD_RESET_DWORD0			0
> > > +#define MHI_TRE_CMD_RESET_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
> > > +							(MHI_CMD_RESET_CHAN << 16)))
> > >   /* Channel stop command */
> > > -#define MHI_TRE_CMD_STOP_PTR (0)
> > > -#define MHI_TRE_CMD_STOP_DWORD0 (0)
> > > -#define MHI_TRE_CMD_STOP_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > > -				       (MHI_CMD_STOP_CHAN << 16)))
> > > +#define MHI_TRE_CMD_STOP_PTR				0
> > > +#define MHI_TRE_CMD_STOP_DWORD0				0
> > > +#define MHI_TRE_CMD_STOP_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
> > > +							(MHI_CMD_STOP_CHAN << 16)))
> > >   /* Channel start command */
> > > -#define MHI_TRE_CMD_START_PTR (0)
> > > -#define MHI_TRE_CMD_START_DWORD0 (0)
> > > -#define MHI_TRE_CMD_START_DWORD1(chid) (cpu_to_le32((chid << 24) | \
> > > -					(MHI_CMD_START_CHAN << 16)))
> > > +#define MHI_TRE_CMD_START_PTR				0
> > > +#define MHI_TRE_CMD_START_DWORD0			0
> > > +#define MHI_TRE_CMD_START_DWORD1(chid)			(cpu_to_le32((chid << 24) | \
> > > +							(MHI_CMD_START_CHAN << 16)))
> > > -#define MHI_TRE_GET_DWORD(tre, word) (le32_to_cpu((tre)->dword[(word)]))
> > > -#define MHI_TRE_GET_CMD_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > > -#define MHI_TRE_GET_CMD_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> > > +#define MHI_TRE_GET_DWORD(tre, word)			le32_to_cpu((tre)->dword[(word)])
> > > +#define MHI_TRE_GET_CMD_CHID(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > > +#define MHI_TRE_GET_CMD_TYPE(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> > >   /* Event descriptor macros */
> > > -#define MHI_TRE_EV_PTR(ptr) (cpu_to_le64(ptr))
> > > -#define MHI_TRE_EV_DWORD0(code, len) (cpu_to_le32((code << 24) | len))
> > > -#define MHI_TRE_EV_DWORD1(chid, type) (cpu_to_le32((chid << 24) | (type << 16)))
> > > -#define MHI_TRE_GET_EV_PTR(tre) (le64_to_cpu((tre)->ptr))
> > > -#define MHI_TRE_GET_EV_CODE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > > -#define MHI_TRE_GET_EV_LEN(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> > > -#define MHI_TRE_GET_EV_CHID(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > > -#define MHI_TRE_GET_EV_TYPE(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> > > -#define MHI_TRE_GET_EV_STATE(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > > -#define MHI_TRE_GET_EV_EXECENV(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > > -#define MHI_TRE_GET_EV_SEQ(tre) MHI_TRE_GET_DWORD(tre, 0)
> > > -#define MHI_TRE_GET_EV_TIME(tre) (MHI_TRE_GET_EV_PTR(tre))
> > > -#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> > > -#define MHI_TRE_GET_EV_VEID(tre) ((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> > > -#define MHI_TRE_GET_EV_LINKSPEED(tre) ((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > > -#define MHI_TRE_GET_EV_LINKWIDTH(tre) (MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
> > > +/* Transfer completion event */
> > > +#define MHI_TRE_EV_PTR(ptr)				cpu_to_le64(ptr)
> > > +#define MHI_TRE_EV_DWORD0(code, len)			cpu_to_le32((code << 24) | len)
> > > +#define MHI_TRE_EV_DWORD1(chid, type)			cpu_to_le32((chid << 24) | (type << 16))
> > > +#define MHI_TRE_GET_EV_PTR(tre)				le64_to_cpu((tre)->ptr)
> > > +#define MHI_TRE_GET_EV_CODE(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > > +#define MHI_TRE_GET_EV_LEN(tre)				(MHI_TRE_GET_DWORD(tre, 0) & 0xFFFF)
> > > +#define MHI_TRE_GET_EV_CHID(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > > +#define MHI_TRE_GET_EV_TYPE(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 16) & 0xFF)
> > > +#define MHI_TRE_GET_EV_STATE(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > > +#define MHI_TRE_GET_EV_EXECENV(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 24) & 0xFF)
> > > +#define MHI_TRE_GET_EV_SEQ(tre)				MHI_TRE_GET_DWORD(tre, 0)
> > > +#define MHI_TRE_GET_EV_TIME(tre)			MHI_TRE_GET_EV_PTR(tre)
> > > +#define MHI_TRE_GET_EV_COOKIE(tre)			lower_32_bits(MHI_TRE_GET_EV_PTR(tre))
> > > +#define MHI_TRE_GET_EV_VEID(tre)			((MHI_TRE_GET_DWORD(tre, 0) >> 16) & 0xFF)
> > > +#define MHI_TRE_GET_EV_LINKSPEED(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> > > +#define MHI_TRE_GET_EV_LINKWIDTH(tre)			(MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
> > >   /* Transfer descriptor macros */
> > > -#define MHI_TRE_DATA_PTR(ptr) (cpu_to_le64(ptr))
> > > -#define MHI_TRE_DATA_DWORD0(len) (cpu_to_le32(len & MHI_MAX_MTU))
> > > -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) (cpu_to_le32((2 << 16) | (bei << 10) \
> > > -	| (ieot << 9) | (ieob << 8) | chain))
> > > +#define MHI_TRE_DATA_PTR(ptr)				cpu_to_le64(ptr)
> > > +#define MHI_TRE_DATA_DWORD0(len)			cpu_to_le32(len & MHI_MAX_MTU)
> > > +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain)	(cpu_to_le32((2 << 16) | (bei << 10) \
> > > +							| (ieot << 9) | (ieob << 8) | chain))
> > >   /* RSC transfer descriptor macros */
> > > -#define MHI_RSCTRE_DATA_PTR(ptr, len) (cpu_to_le64(((u64)len << 48) | ptr))
> > > -#define MHI_RSCTRE_DATA_DWORD0(cookie) (cpu_to_le32(cookie))
> > > -#define MHI_RSCTRE_DATA_DWORD1 (cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16))
> > > +#define MHI_RSCTRE_DATA_PTR(ptr, len)			cpu_to_le64(((u64)len << 48) | ptr)
> > > +#define MHI_RSCTRE_DATA_DWORD0(cookie)			cpu_to_le32(cookie)
> > > +#define MHI_RSCTRE_DATA_DWORD1				cpu_to_le32(MHI_PKT_TYPE_COALESCING << 16)
> > >   enum mhi_pkt_type {
> > >   	MHI_PKT_TYPE_INVALID = 0x0,
> > > diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
> > > index 622de6ba1a0b..762055a6ec9f 100644
> > > --- a/drivers/bus/mhi/host/internal.h
> > > +++ b/drivers/bus/mhi/host/internal.h
> > > @@ -11,197 +11,84 @@
> > >   extern struct bus_type mhi_bus_type;
> > > -#define MHIREGLEN (0x0)
> > > -#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
> > > -#define MHIREGLEN_MHIREGLEN_SHIFT (0)
> > > -
> > > -#define MHIVER (0x8)
> > > -#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
> > > -#define MHIVER_MHIVER_SHIFT (0)
> > > -
> > > -#define MHICFG (0x10)
> > > -#define MHICFG_NHWER_MASK (0xFF000000)
> > > -#define MHICFG_NHWER_SHIFT (24)
> > > -#define MHICFG_NER_MASK (0xFF0000)
> > > -#define MHICFG_NER_SHIFT (16)
> > > -#define MHICFG_NHWCH_MASK (0xFF00)
> > > -#define MHICFG_NHWCH_SHIFT (8)
> > > -#define MHICFG_NCH_MASK (0xFF)
> > > -#define MHICFG_NCH_SHIFT (0)
> > > -
> > > -#define CHDBOFF (0x18)
> > > -#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
> > > -#define CHDBOFF_CHDBOFF_SHIFT (0)
> > > -
> > > -#define ERDBOFF (0x20)
> > > -#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
> > > -#define ERDBOFF_ERDBOFF_SHIFT (0)
> > > -
> > > -#define BHIOFF (0x28)
> > > -#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
> > > -#define BHIOFF_BHIOFF_SHIFT (0)
> > > -
> > > -#define BHIEOFF (0x2C)
> > > -#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
> > > -#define BHIEOFF_BHIEOFF_SHIFT (0)
> > > -
> > > -#define DEBUGOFF (0x30)
> > > -#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
> > > -#define DEBUGOFF_DEBUGOFF_SHIFT (0)
> > > -
> > > -#define MHICTRL (0x38)
> > > -#define MHICTRL_MHISTATE_MASK (0x0000FF00)
> > > -#define MHICTRL_MHISTATE_SHIFT (8)
> > > -#define MHICTRL_RESET_MASK (0x2)
> > > -#define MHICTRL_RESET_SHIFT (1)
> > > -
> > > -#define MHISTATUS (0x48)
> > > -#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
> > > -#define MHISTATUS_MHISTATE_SHIFT (8)
> > > -#define MHISTATUS_SYSERR_MASK (0x4)
> > > -#define MHISTATUS_SYSERR_SHIFT (2)
> > > -#define MHISTATUS_READY_MASK (0x1)
> > > -#define MHISTATUS_READY_SHIFT (0)
> > > -
> > > -#define CCABAP_LOWER (0x58)
> > > -#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
> > > -#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
> > > -
> > > -#define CCABAP_HIGHER (0x5C)
> > > -#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
> > > -#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
> > > -
> > > -#define ECABAP_LOWER (0x60)
> > > -#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
> > > -#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
> > > -
> > > -#define ECABAP_HIGHER (0x64)
> > > -#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
> > > -#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
> > > -
> > > -#define CRCBAP_LOWER (0x68)
> > > -#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
> > > -#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
> > > -
> > > -#define CRCBAP_HIGHER (0x6C)
> > > -#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
> > > -#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
> > > -
> > > -#define CRDB_LOWER (0x70)
> > > -#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
> > > -#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
> > > -
> > > -#define CRDB_HIGHER (0x74)
> > > -#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
> > > -#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
> > > -
> > > -#define MHICTRLBASE_LOWER (0x80)
> > > -#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
> > > -#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
> > > -
> > > -#define MHICTRLBASE_HIGHER (0x84)
> > > -#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
> > > -#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
> > > -
> > > -#define MHICTRLLIMIT_LOWER (0x88)
> > > -#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
> > > -#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
> > > -
> > > -#define MHICTRLLIMIT_HIGHER (0x8C)
> > > -#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
> > > -#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
> > > -
> > > -#define MHIDATABASE_LOWER (0x98)
> > > -#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
> > > -#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
> > > -
> > > -#define MHIDATABASE_HIGHER (0x9C)
> > > -#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
> > > -#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
> > > -
> > > -#define MHIDATALIMIT_LOWER (0xA0)
> > > -#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
> > > -#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
> > > -
> > > -#define MHIDATALIMIT_HIGHER (0xA4)
> > > -#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
> > > -#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
> > > +/* MHI registers */
> > > +#define MHIREGLEN			REG_MHIREGLEN
> > > +#define MHIVER				REG_MHIVER
> > > +#define MHICFG				REG_MHICFG
> > > +#define CHDBOFF				REG_CHDBOFF
> > > +#define ERDBOFF				REG_ERDBOFF
> > > +#define BHIOFF				REG_BHIOFF
> > > +#define BHIEOFF				REG_BHIEOFF
> > > +#define DEBUGOFF			REG_DEBUGOFF
> > > +#define MHICTRL				REG_MHICTRL
> > > +#define MHISTATUS			REG_MHISTATUS
> > > +#define CCABAP_LOWER			REG_CCABAP_LOWER
> > > +#define CCABAP_HIGHER			REG_CCABAP_HIGHER
> > > +#define ECABAP_LOWER			REG_ECABAP_LOWER
> > > +#define ECABAP_HIGHER			REG_ECABAP_HIGHER
> > > +#define CRCBAP_LOWER			REG_CRCBAP_LOWER
> > > +#define CRCBAP_HIGHER			REG_CRCBAP_HIGHER
> > > +#define CRDB_LOWER			REG_CRDB_LOWER
> > > +#define CRDB_HIGHER			REG_CRDB_HIGHER
> > > +#define MHICTRLBASE_LOWER		REG_MHICTRLBASE_LOWER
> > > +#define MHICTRLBASE_HIGHER		REG_MHICTRLBASE_HIGHER
> > > +#define MHICTRLLIMIT_LOWER		REG_MHICTRLLIMIT_LOWER
> > > +#define MHICTRLLIMIT_HIGHER		REG_MHICTRLLIMIT_HIGHER
> > > +#define MHIDATABASE_LOWER		REG_MHIDATABASE_LOWER
> > > +#define MHIDATABASE_HIGHER		REG_MHIDATABASE_HIGHER
> > > +#define MHIDATALIMIT_LOWER		REG_MHIDATALIMIT_LOWER
> > > +#define MHIDATALIMIT_HIGHER		REG_MHIDATALIMIT_HIGHER
> > >   /* Host request register */
> > > -#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
> > > -#define MHI_SOC_RESET_REQ BIT(0)
> > > -
> > > -/* MHI BHI offfsets */
> > > -#define BHI_BHIVERSION_MINOR (0x00)
> > > -#define BHI_BHIVERSION_MAJOR (0x04)
> > > -#define BHI_IMGADDR_LOW (0x08)
> > > -#define BHI_IMGADDR_HIGH (0x0C)
> > > -#define BHI_IMGSIZE (0x10)
> > > -#define BHI_RSVD1 (0x14)
> > > -#define BHI_IMGTXDB (0x18)
> > > -#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
> > > -#define BHI_TXDB_SEQNUM_SHFT (0)
> > > -#define BHI_RSVD2 (0x1C)
> > > -#define BHI_INTVEC (0x20)
> > > -#define BHI_RSVD3 (0x24)
> > > -#define BHI_EXECENV (0x28)
> > > -#define BHI_STATUS (0x2C)
> > > -#define BHI_ERRCODE (0x30)
> > > -#define BHI_ERRDBG1 (0x34)
> > > -#define BHI_ERRDBG2 (0x38)
> > > -#define BHI_ERRDBG3 (0x3C)
> > > -#define BHI_SERIALNU (0x40)
> > > -#define BHI_SBLANTIROLLVER (0x44)
> > > -#define BHI_NUMSEG (0x48)
> > > -#define BHI_MSMHWID(n) (0x4C + (0x4 * (n)))
> > > -#define BHI_OEMPKHASH(n) (0x64 + (0x4 * (n)))
> > > -#define BHI_RSVD5 (0xC4)
> > > -#define BHI_STATUS_MASK (0xC0000000)
> > > -#define BHI_STATUS_SHIFT (30)
> > > -#define BHI_STATUS_ERROR (3)
> > > -#define BHI_STATUS_SUCCESS (2)
> > > -#define BHI_STATUS_RESET (0)
> > > -
> > > -/* MHI BHIE offsets */
> > > -#define BHIE_MSMSOCID_OFFS (0x0000)
> > > -#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
> > > -#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
> > > -#define BHIE_TXVECSIZE_OFFS (0x0034)
> > > -#define BHIE_TXVECDB_OFFS (0x003C)
> > > -#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
> > > -#define BHIE_TXVECDB_SEQNUM_SHFT (0)
> > > -#define BHIE_TXVECSTATUS_OFFS (0x0044)
> > > -#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
> > > -#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
> > > -#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
> > > -#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
> > > -#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
> > > -#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
> > > -#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
> > > -#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
> > > -#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
> > > -#define BHIE_RXVECSIZE_OFFS (0x0068)
> > > -#define BHIE_RXVECDB_OFFS (0x0070)
> > > -#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
> > > -#define BHIE_RXVECDB_SEQNUM_SHFT (0)
> > > -#define BHIE_RXVECSTATUS_OFFS (0x0078)
> > > -#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
> > > -#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
> > > -#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
> > > -#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
> > > -#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
> > > -#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
> > > -#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
> > > -
> > > -#define SOC_HW_VERSION_OFFS (0x224)
> > > -#define SOC_HW_VERSION_FAM_NUM_BMSK (0xF0000000)
> > > -#define SOC_HW_VERSION_FAM_NUM_SHFT (28)
> > > -#define SOC_HW_VERSION_DEV_NUM_BMSK (0x0FFF0000)
> > > -#define SOC_HW_VERSION_DEV_NUM_SHFT (16)
> > > -#define SOC_HW_VERSION_MAJOR_VER_BMSK (0x0000FF00)
> > > -#define SOC_HW_VERSION_MAJOR_VER_SHFT (8)
> > > -#define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
> > > -#define SOC_HW_VERSION_MINOR_VER_SHFT (0)
> > > +#define MHI_SOC_RESET_REQ_OFFSET	0xb0
> > > +#define MHI_SOC_RESET_REQ		BIT(0)
> > > +
> > > +/* MHI BHI registers */
> > > +#define BHI_BHIVERSION_MINOR		REG_BHI_BHIVERSION_MINOR
> > > +#define BHI_BHIVERSION_MAJOR		REG_BHI_BHIVERSION_MAJOR
> > > +#define BHI_IMGADDR_LOW			REG_BHI_IMGADDR_LOW
> > > +#define BHI_IMGADDR_HIGH		REG_BHI_IMGADDR_HIGH
> > > +#define BHI_IMGSIZE			REG_BHI_IMGSIZE
> > > +#define BHI_RSVD1			REG_BHI_RSVD1
> > > +#define BHI_IMGTXDB			REG_BHI_IMGTXDB
> > > +#define BHI_RSVD2			REG_BHI_RSVD2
> > > +#define BHI_INTVEC			REG_BHI_INTVEC
> > > +#define BHI_RSVD3			REG_BHI_RSVD3
> > > +#define BHI_EXECENV			REG_BHI_EXECENV
> > > +#define BHI_STATUS			REG_BHI_STATUS
> > > +#define BHI_ERRCODE			REG_BHI_ERRCODE
> > > +#define BHI_ERRDBG1			REG_BHI_ERRDBG1
> > > +#define BHI_ERRDBG2			REG_BHI_ERRDBG2
> > > +#define BHI_ERRDBG3			REG_BHI_ERRDBG3
> > > +#define BHI_SERIALNU			REG_BHI_SERIALNU
> > > +#define BHI_SBLANTIROLLVER		REG_BHI_SBLANTIROLLVER
> > > +#define BHI_NUMSEG			REG_BHI_NUMSEG
> > > +#define BHI_MSMHWID(n)			REG_BHI_MSMHWID(n)
> > > +#define BHI_OEMPKHASH(n)		REG_BHI_OEMPKHASH(n)
> > > +#define BHI_RSVD5			REG_BHI_RSVD5
> > > +
> > > +/* MHI BHIE registers */
> > > +#define BHIE_MSMSOCID_OFFS		REG_BHIE_MSMSOCID_OFFS
> > > +#define BHIE_TXVECADDR_LOW_OFFS		REG_BHIE_TXVECADDR_LOW_OFFS
> > > +#define BHIE_TXVECADDR_HIGH_OFFS	REG_BHIE_TXVECADDR_HIGH_OFFS
> > > +#define BHIE_TXVECSIZE_OFFS		REG_BHIE_TXVECSIZE_OFFS
> > > +#define BHIE_TXVECDB_OFFS		REG_BHIE_TXVECDB_OFFS
> > > +#define BHIE_TXVECSTATUS_OFFS		REG_BHIE_TXVECSTATUS_OFFS
> > > +#define BHIE_RXVECADDR_LOW_OFFS		REG_BHIE_RXVECADDR_LOW_OFFS
> > > +#define BHIE_RXVECADDR_HIGH_OFFS	REG_BHIE_RXVECADDR_HIGH_OFFS
> > > +#define BHIE_RXVECSIZE_OFFS		REG_BHIE_RXVECSIZE_OFFS
> > > +#define BHIE_RXVECDB_OFFS		REG_BHIE_RXVECDB_OFFS
> > > +#define BHIE_RXVECSTATUS_OFFS		REG_BHIE_RXVECSTATUS_OFFS
> > > +
> > > +#define SOC_HW_VERSION_OFFS		0x224
> > > +#define SOC_HW_VERSION_FAM_NUM_BMSK	GENMASK(31, 28)
> > > +#define SOC_HW_VERSION_FAM_NUM_SHFT	28
> > > +#define SOC_HW_VERSION_DEV_NUM_BMSK	GENMASK(27, 16)
> > > +#define SOC_HW_VERSION_DEV_NUM_SHFT	16
> > > +#define SOC_HW_VERSION_MAJOR_VER_BMSK	GENMASK(15, 8)
> > > +#define SOC_HW_VERSION_MAJOR_VER_SHFT	8
> > > +#define SOC_HW_VERSION_MINOR_VER_BMSK	GENMASK(7, 0)
> > > +#define SOC_HW_VERSION_MINOR_VER_SHFT	0
> > >   struct mhi_ctxt {
> > >   	struct mhi_event_ctxt *er_ctxt;
> > 
> 

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers
  2022-02-15 20:02   ` Alex Elder
@ 2022-02-17  9:53     ` Manivannan Sadhasivam
  2022-02-17 14:47       ` Alex Elder
  2022-03-04 21:46       ` Jeffrey Hugo
  0 siblings, 2 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-17  9:53 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 02:02:41PM -0600, Alex Elder wrote:

[...]

> > +#define MHI_REG_OFFSET				0x100
> > +#define BHI_REG_OFFSET				0x200
> 
> Rather than defining the REG_OFFSET values here and adding
> them to every definition below, why not have the base
> address used (e.g., in mhi_write_reg_field()) be adjusted
> by the constant amount?
> 
> I'm just looking at mhi_init_mmio() (in the existing code)
> as an example, but for example, the base address used
> comes from mhi_cntrl->regs.  Can you instead just define
> a pointer somewhere that is the base of the MHI register
> range, which is already offset by the appropriate amount?
> 

I've defined two sets of APIs for MHI and BHI read/write. They will add
the respective offsets.

> > +

[...]

> > +/* Generic context */
> > +struct mhi_generic_ctx {
> > +	__u32 reserved0;
> > +	__u32 reserved1;
> > +	__u32 reserved2;
> > +
> > +	__u64 rbase __packed __aligned(4);
> > +	__u64 rlen __packed __aligned(4);
> > +	__u64 rp __packed __aligned(4);
> > +	__u64 wp __packed __aligned(4);
> > +};
> 
> I'm pretty sure this constitutes an external interface, so
> every field should have its endianness annotated.
> 
> Mentioned elsewhere, I think you can define the structure
> with those attributes rather than the multiple fields.
> 

As I said before, this was suggested by Arnd during the MHI host review.
He suggested adding the alignment and packed attributes only to the
members that require them.

But I think I should change it now...

> > +
> > +enum mhi_ep_ring_type {
> > +	RING_TYPE_CMD = 0,
> > +	RING_TYPE_ER,
> > +	RING_TYPE_CH,
> > +};
> > +
> > +struct mhi_ep_ring_element {
> > +	u64 ptr;
> > +	u32 dword[2];
> > +};
> 
> Are these host resident rings?  Even if not, this is an external
> interface, so this should be defined with explicit endianness.
> The cpu_to_le64() call will be a no-op so there is no cost
> to correcting this.
> 

Ah, this should be reusing the "struct mhi_tre" defined in the host. Will do.

> > +
> > +/* Ring element */
> > +union mhi_ep_ring_ctx {
> > +	struct mhi_cmd_ctxt cmd;
> > +	struct mhi_event_ctxt ev;
> > +	struct mhi_chan_ctxt ch;
> > +	struct mhi_generic_ctx generic;
> > +};
> > +
> > +struct mhi_ep_ring {
> > +	struct mhi_ep_cntrl *mhi_cntrl;
> > +	int (*ring_cb)(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> > +	union mhi_ep_ring_ctx *ring_ctx;
> > +	struct mhi_ep_ring_element *ring_cache;
> > +	enum mhi_ep_ring_type type;
> > +	size_t rd_offset;
> > +	size_t wr_offset;
> > +	size_t ring_size;
> > +	u32 db_offset_h;
> > +	u32 db_offset_l;
> > +	u32 ch_id;
> > +};
> 
> Not sure about the db_offset fields, etc. here, but it's possible
> they need endianness annotations.  I'm going to stop making this
> comment; please make sure anything that's exposed to the host
> specifies that it's little endian.  (The host and endpoint should
> have a common definition of these shared structures anyway; maybe
> I'm misreading this or assuming something incorrectly.)
> 

db_offset_* just hold the register offsets, so they don't require
endianness annotations. All MMIO reads/writes use the readl/writel APIs,
which handle the endianness conversion implicitly.

The rest of the host memory accesses are annotated properly.

> > +

[...]

> > +	/*
> > +	 * Allocate max_channels supported by the MHI endpoint and populate
> > +	 * only the defined channels
> > +	 */
> > +	mhi_cntrl->mhi_chan = kcalloc(mhi_cntrl->max_chan, sizeof(*mhi_cntrl->mhi_chan),
> > +				      GFP_KERNEL);
> > +	if (!mhi_cntrl->mhi_chan)
> > +		return -ENOMEM;
> > +
> > +	for (i = 0; i < config->num_channels; i++) {
> > +		struct mhi_ep_chan *mhi_chan;
> 
> This entire block could be encapsulated in mhi_channel_add()
> or something,

Wrapping it up in a function is useful if the same code is used in
different places, but I don't think it adds any value here.

> 
> > +		ch_cfg = &config->ch_cfg[i];
> 
> Move the above assignment down a few lines, to just before
> where it's used.
> 

No, ch_cfg is used just below this.

> > +
> > +		chan = ch_cfg->num;
> > +		if (chan >= mhi_cntrl->max_chan) {
> > +			dev_err(dev, "Channel %d not available\n", chan);
> 
> Maybe report the maximum channel so it's obvious why it's
> not available.
> 
> > +			goto error_chan_cfg;
> > +		}
> > +
> > +		/* Bi-directional and direction less channels are not supported */
> > +		if (ch_cfg->dir == DMA_BIDIRECTIONAL || ch_cfg->dir == DMA_NONE) {
> > +			dev_err(dev, "Invalid channel configuration\n");
> 
> Maybe be more specific in your message about what's wrong here.
> 
> > +			goto error_chan_cfg;
> > +		}
> > +
> > +		mhi_chan = &mhi_cntrl->mhi_chan[chan];
> > +		mhi_chan->name = ch_cfg->name;
> > +		mhi_chan->chan = chan;
> > +		mhi_chan->dir = ch_cfg->dir;
> > +		mutex_init(&mhi_chan->lock);
> > +	}
> > +
> > +	return 0;
> > +
> > +error_chan_cfg:
> > +	kfree(mhi_cntrl->mhi_chan);
> 
> I'm not sure what the caller does, but maybe null this
> after it's freed, or don't assign mhi_cntrll->mhi_chan
> until the initialization is successful.
> 

This is not required here, as the pointer will not be accessed after the
failure.

> 
> > +	return ret;
> > +}
> > +
> > +/*
> > + * Allocate channel and command rings here. Event rings will be allocated
> > + * in mhi_ep_power_up() as the config comes from the host.
> > + */
> > +int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> > +				const struct mhi_ep_cntrl_config *config)
> > +{
> > +	struct mhi_ep_device *mhi_dev;
> > +	int ret;
> > +
> > +	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
> > +		return -EINVAL;
> > +
> > +	ret = parse_ch_cfg(mhi_cntrl, config);
> > +	if (ret)
> > +		return ret;
> > +
> > +	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS, sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
> 
> I said before I thought it was silly to even define NR_OF_CMD_RINGS.
> Does the MHI specification actually allow more than one command
> ring for a given MHI controller?  Ever?
> 

The MHI spec doesn't limit the number of command rings. Even though I don't
envision adding more command rings in the future, I'm going to keep this
macro for now since the MHI host does the same.

[...]

> > diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> > new file mode 100644
> > index 000000000000..20238e9df1b3
> > --- /dev/null
> > +++ b/include/linux/mhi_ep.h

[...]

> > +struct mhi_ep_device {
> > +	struct device dev;
> > +	struct mhi_ep_cntrl *mhi_cntrl;
> > +	const struct mhi_device_id *id;
> > +	const char *name;
> > +	struct mhi_ep_chan *ul_chan;
> > +	struct mhi_ep_chan *dl_chan;
> > +	enum mhi_device_type dev_type;
> 
> There are two device types, controller and transfer.  Unless
> there is ever going to be anything more than that, I think
> the distinction is better represented as a Boolean, such as:
> 
> 	bool controller;

Again, this is how it is done in the MHI host too. Since I'm going to
maintain both stacks, it makes things easier for me if the similarities are
preserved. But I'll keep this suggestion and the one above in mind for later.

Thanks,
Mani

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers
  2022-02-15 20:02   ` Alex Elder
@ 2022-02-17 10:20     ` Manivannan Sadhasivam
  2022-02-17 14:50       ` Alex Elder
  0 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-17 10:20 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 02:02:50PM -0600, Alex Elder wrote:

[...]

> > +static int mhi_ep_driver_remove(struct device *dev)
> > +{
> > +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> > +	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
> > +	struct mhi_result result = {};
> > +	struct mhi_ep_chan *mhi_chan;
> > +	int dir;
> > +
> > +	/* Skip if it is a controller device */
> > +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> > +		return 0;
> > +
> 
> It would be my preference to encapsulate the body of the
> following loop into a called function, then call that once
> for the UL channel and once for the DL channel.
> 

This follows the host stack, so I'd like to keep it the same.

> > +	/* Disconnect the channels associated with the driver */
> > +	for (dir = 0; dir < 2; dir++) {
> > +		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
> > +
> > +		if (!mhi_chan)
> > +			continue;
> > +
> > +		mutex_lock(&mhi_chan->lock);
> > +		/* Send channel disconnect status to the client driver */
> > +		if (mhi_chan->xfer_cb) {
> > +			result.transaction_status = -ENOTCONN;
> > +			result.bytes_xferd = 0;
> > +			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> 
> It appears the result is ignored here.  If so, can we
> define the xfer_cb() function so that a NULL pointer may
> be supplied by the caller in cases like this?
> 

"result" is not ignored, only bytes_xferd. "transaction_status" will
be used by the client drivers for error handling.

> > +		}
> > +
> > +		/* Set channel state to DISABLED */
> 
> That comment is a little tautological.  Just omit it.
> 
> > +		mhi_chan->state = MHI_CH_STATE_DISABLED;
> > +		mhi_chan->xfer_cb = NULL;
> > +		mutex_unlock(&mhi_chan->lock);
> > +	}
> > +
> > +	/* Remove the client driver now */
> > +	mhi_drv->remove(mhi_dev);
> > +
> > +	return 0;
> > +}

[...]

> > +struct mhi_ep_driver {
> > +	const struct mhi_device_id *id_table;
> > +	struct device_driver driver;
> > +	int (*probe)(struct mhi_ep_device *mhi_ep,
> > +		     const struct mhi_device_id *id);
> > +	void (*remove)(struct mhi_ep_device *mhi_ep);
> 
> I get confused by the "ul" versus "dl" naming scheme here.
> Is "ul" from the perspective of the host, meaning upload
> is from the host toward the WWAN network (and therefore
> toward the SDX AP), and download is from the WWAN toward
> the host?  Somewhere this should be stated clearly in
> comments; maybe I just missed it.
> 

Yes, UL and DL are as per the host context. I didn't state this explicitly
since this matches the MHI host stack behaviour, but I'll add a comment for
clarity.

Thanks,
Mani


* Re: [PATCH v3 10/25] bus: mhi: ep: Add support for creating and destroying MHI EP devices
  2022-02-15 20:02   ` Alex Elder
@ 2022-02-17 12:04     ` Manivannan Sadhasivam
  0 siblings, 0 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-17 12:04 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 02:02:57PM -0600, Alex Elder wrote:

[...]

> > +
> > +	mhi_dev = mhi_ep_alloc_device(mhi_cntrl, MHI_DEVICE_XFER);
> > +	if (IS_ERR(mhi_dev))
> > +		return PTR_ERR(mhi_dev);
> 
> It looks like the only possible error is no memory, so you could
> just have mhi_ep_alloc_device() return NULL.
> 

I think returning the actual error is safer, as we may end up adding more
failure paths to this function in the future.

> > +
> > +	/* Configure primary channel */
> > +	mhi_dev->ul_chan = mhi_chan;
> > +	get_device(&mhi_dev->dev);
> > +	mhi_chan->mhi_dev = mhi_dev;
> > +
> > +	/* Configure secondary channel as well */
> > +	mhi_chan++;
> > +	mhi_dev->dl_chan = mhi_chan;
> > +	get_device(&mhi_dev->dev);
> > +	mhi_chan->mhi_dev = mhi_dev;
> > +
> > +	/* Channel name is same for both UL and DL */
> > +	mhi_dev->name = mhi_chan->name;
> > +	dev_set_name(&mhi_dev->dev, "%s_%s",
> > +		     dev_name(&mhi_cntrl->mhi_dev->dev),
> > +		     mhi_dev->name);
> > +
> > +	ret = device_add(&mhi_dev->dev);
> > +	if (ret)
> > +		put_device(&mhi_dev->dev);
> > +
> > +	return ret;
> > +}
> > +
> > +static int mhi_ep_destroy_device(struct device *dev, void *data)
> > +{
> > +	struct mhi_ep_device *mhi_dev;
> > +	struct mhi_ep_cntrl *mhi_cntrl;
> > +	struct mhi_ep_chan *ul_chan, *dl_chan;
> > +
> > +	if (dev->bus != &mhi_ep_bus_type)
> > +		return 0;
> > +
> > +	mhi_dev = to_mhi_ep_device(dev);
> > +	mhi_cntrl = mhi_dev->mhi_cntrl;
> > +
> > +	/* Only destroy devices created for channels */
> > +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> > +		return 0;
> > +
> > +	ul_chan = mhi_dev->ul_chan;
> > +	dl_chan = mhi_dev->dl_chan;
> 
> Aren't they required to supply *both* channels?  Or maybe
> it's just required that there are transfer callback functions
> for both channels.  Anyway, no need to check for null, because
> the creation function guarantees they're both non-null I think.
> 

mhi_ep_destroy_device() will be called for each device separately, so we
must check for NULL.

Thanks,
Mani


* Re: [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers
  2022-02-17  9:53     ` Manivannan Sadhasivam
@ 2022-02-17 14:47       ` Alex Elder
  2022-03-04 21:46       ` Jeffrey Hugo
  1 sibling, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-17 14:47 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/17/22 3:53 AM, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 02:02:41PM -0600, Alex Elder wrote:
> 
> [...]
> 
>>> +#define MHI_REG_OFFSET				0x100
>>> +#define BHI_REG_OFFSET				0x200

. . .

> [...]
> 
>>> +/* Generic context */
>>> +struct mhi_generic_ctx {
>>> +	__u32 reserved0;
>>> +	__u32 reserved1;
>>> +	__u32 reserved2;
>>> +
>>> +	__u64 rbase __packed __aligned(4);
>>> +	__u64 rlen __packed __aligned(4);
>>> +	__u64 rp __packed __aligned(4);
>>> +	__u64 wp __packed __aligned(4);
>>> +};
>>
>> I'm pretty sure this constitutes an external interface, so
>> every field should have its endianness annotated.
>>
>> Mentioned elsewhere, I think you can define the structure
>> with those attributes rather than the multiple fields.
>>
> 
> As I said before, this was suggested by Arnd during MHI host review. He
> suggested adding the alignment and packed to only members that require
> them.
> 
> But I think I should change it now...

Despite suggesting this more than once, I'm not 100% sure it's
even a correct suggestion.  I trust Arnd's judgement, and I
can see the value of being explicit about *which* fields have
the alignment requirement.  So I'll leave it up to you to
decide...  If you make my suggested change, be sure to test
it.  But I'm fine if you leave these as-is.

>>> +enum mhi_ep_ring_type {
>>> +	RING_TYPE_CMD = 0,
>>> +	RING_TYPE_ER,
>>> +	RING_TYPE_CH,
>>> +};
>>> +
>>> +struct mhi_ep_ring_element {
>>> +	u64 ptr;
>>> +	u32 dword[2];
>>> +};
>>
>> Are these host resident rings?  Even if not, this is an external
>> interface, so this should be defined with explicit endianness.
>> The cpu_to_le64() call will be a no-op so there is no cost
>> to correcting this.
>>
> 
> Ah, this should be reusing the "struct mhi_tre" defined in host. Will do.
> 
>>> +
>>> +/* Ring element */
>>> +union mhi_ep_ring_ctx {
>>> +	struct mhi_cmd_ctxt cmd;
>>> +	struct mhi_event_ctxt ev;
>>> +	struct mhi_chan_ctxt ch;
>>> +	struct mhi_generic_ctx generic;
>>> +};
>>> +
>>> +struct mhi_ep_ring {
>>> +	struct mhi_ep_cntrl *mhi_cntrl;
>>> +	int (*ring_cb)(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
>>> +	union mhi_ep_ring_ctx *ring_ctx;
>>> +	struct mhi_ep_ring_element *ring_cache;
>>> +	enum mhi_ep_ring_type type;
>>> +	size_t rd_offset;
>>> +	size_t wr_offset;
>>> +	size_t ring_size;
>>> +	u32 db_offset_h;
>>> +	u32 db_offset_l;
>>> +	u32 ch_id;
>>> +};
>>
>> Not sure about the db_offset fields, etc. here, but it's possible
>> they need endianness annotations.  I'm going to stop making this
>> comment; please make sure anything that's exposed to the host
>> specifies that it's little endian.  (The host and endpoint should
>> have a common definition of these shared structures anyway; maybe
>> I'm misreading this or assuming something incorrectly.)
>>
> 
> db_offset_* just holds the register offsets so they don't require
> endianness annotation. All MMIO read/write are using readl/writel APIs
> and they handle the endianness conversion implicitly.
> 
> Rest of the host memory accesses are annotated properly.

OK, good.

> 
>>> +
> 
> [...]
> 
>>> +	/*
>>> +	 * Allocate max_channels supported by the MHI endpoint and populate
>>> +	 * only the defined channels
>>> +	 */
>>> +	mhi_cntrl->mhi_chan = kcalloc(mhi_cntrl->max_chan, sizeof(*mhi_cntrl->mhi_chan),
>>> +				      GFP_KERNEL);
>>> +	if (!mhi_cntrl->mhi_chan)
>>> +		return -ENOMEM;
>>> +
>>> +	for (i = 0; i < config->num_channels; i++) {
>>> +		struct mhi_ep_chan *mhi_chan;
>>
>> This entire block could be encapsulated in mhi_channel_add()
>> or something,
> 
> Wrapping up in a function is useful if the same code is used in
> different places. But I don't think it adds any value here.
> 
>>
>>> +		ch_cfg = &config->ch_cfg[i];
>>
>> Move the above assignment down a few lines, to just before
>> where it's used.
>>
> 
> No, ch_cfg is used just below this.

Yes you're right, I missed that.

>>> +
>>> +		chan = ch_cfg->num;
>>> +		if (chan >= mhi_cntrl->max_chan) {
>>> +			dev_err(dev, "Channel %d not available\n", chan);
>>
>> Maybe report the maximum channel so it's obvious why it's
>> not available.
>>
>>> +			goto error_chan_cfg;
>>> +		}
>>> +
>>> +		/* Bi-directional and direction less channels are not supported */
>>> +		if (ch_cfg->dir == DMA_BIDIRECTIONAL || ch_cfg->dir == DMA_NONE) {
>>> +			dev_err(dev, "Invalid channel configuration\n");
>>
>> Maybe be more specific in your message about what's wrong here.
>>
>>> +			goto error_chan_cfg;
>>> +		}
>>> +
>>> +		mhi_chan = &mhi_cntrl->mhi_chan[chan];
>>> +		mhi_chan->name = ch_cfg->name;
>>> +		mhi_chan->chan = chan;
>>> +		mhi_chan->dir = ch_cfg->dir;
>>> +		mutex_init(&mhi_chan->lock);
>>> +	}
>>> +
>>> +	return 0;
>>> +
>>> +error_chan_cfg:
>>> +	kfree(mhi_cntrl->mhi_chan);
>>
>> I'm not sure what the caller does, but maybe null this
>> after it's freed, or don't assign mhi_cntrll->mhi_chan
>> until the initialization is successful.
>>
> 
> This is not required here as there will be no access to the pointer
> after failing.

OK.

>>> +	return ret;
>>> +}
>>> +
>>> +/*
>>> + * Allocate channel and command rings here. Event rings will be allocated
>>> + * in mhi_ep_power_up() as the config comes from the host.
>>> + */
>>> +int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
>>> +				const struct mhi_ep_cntrl_config *config)
>>> +{
>>> +	struct mhi_ep_device *mhi_dev;
>>> +	int ret;
>>> +
>>> +	if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
>>> +		return -EINVAL;
>>> +
>>> +	ret = parse_ch_cfg(mhi_cntrl, config);
>>> +	if (ret)
>>> +		return ret;
>>> +
>>> +	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS, sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
>>
>> I said before I thought it was silly to even define NR_OF_CMD_RINGS.
>> Does the MHI specification actually allow more than one command
>> ring for a given MHI controller?  Ever?
>>
> 
> MHI spec doesn't limit the number of command rings. Eventhough I don't
> envision adding more command rings in the future, I'm going to keep this
> macro for now as the MHI host does the same.

OK.

> [...]
> 
>>> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
>>> new file mode 100644
>>> index 000000000000..20238e9df1b3
>>> --- /dev/null
>>> +++ b/include/linux/mhi_ep.h
> 
> [...]
> 
>>> +struct mhi_ep_device {
>>> +	struct device dev;
>>> +	struct mhi_ep_cntrl *mhi_cntrl;
>>> +	const struct mhi_device_id *id;
>>> +	const char *name;
>>> +	struct mhi_ep_chan *ul_chan;
>>> +	struct mhi_ep_chan *dl_chan;
>>> +	enum mhi_device_type dev_type;
>>
>> There are two device types, controller and transfer.  Unless
>> there is ever going to be anything more than that, I think
>> the distinction is better represented as a Boolean, such as:
>>
>> 	bool controller;
> 
> Again, this is how it is done in MHI host also. Since I'm going to
> maintain both stacks, it makes it easier for me if similarities are
> maintained. But I'll keep this suggestion and the one above for later.

Sounds good.  Thanks.

					-Alex

> Thanks,
> Mani



* Re: [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers
  2022-02-17 10:20     ` Manivannan Sadhasivam
@ 2022-02-17 14:50       ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-17 14:50 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/17/22 4:20 AM, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 02:02:50PM -0600, Alex Elder wrote:
> 
> [...]
> 
>>> +static int mhi_ep_driver_remove(struct device *dev)
>>> +{
>>> +	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
>>> +	struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
>>> +	struct mhi_result result = {};
>>> +	struct mhi_ep_chan *mhi_chan;
>>> +	int dir;
>>> +
>>> +	/* Skip if it is a controller device */
>>> +	if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
>>> +		return 0;
>>> +
>>
>> It would be my preference to encapsulate the body of the
>> following loop into a called function, then call that once
>> for the UL channel and once for the DL channel.
>>
> 
> This follows the host stack, so I'd like to keep it the same.

I think you should change both, but I'll leave that up to you.

>>> +	/* Disconnect the channels associated with the driver */
>>> +	for (dir = 0; dir < 2; dir++) {
>>> +		mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
>>> +
>>> +		if (!mhi_chan)
>>> +			continue;
>>> +
>>> +		mutex_lock(&mhi_chan->lock);
>>> +		/* Send channel disconnect status to the client driver */
>>> +		if (mhi_chan->xfer_cb) {
>>> +			result.transaction_status = -ENOTCONN;
>>> +			result.bytes_xferd = 0;
>>> +			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
>>
>> It appears the result is ignored here.  If so, can we
>> define the xfer_cb() function so that a NULL pointer may
>> be supplied by the caller in cases like this?
>>
> 
> result is not ignored, only the bytes_xfered. "transaction_status" will
> be used by the client drivers for error handling.

Sorry, I was looking at the code *after* the call, and was
ignoring that it was information being passed in...  My
mistake.

>>> +		}
>>> +
>>> +		/* Set channel state to DISABLED */
>>
>> That comment is a little tautological.  Just omit it.
>>
>>> +		mhi_chan->state = MHI_CH_STATE_DISABLED;
>>> +		mhi_chan->xfer_cb = NULL;
>>> +		mutex_unlock(&mhi_chan->lock);
>>> +	}
>>> +
>>> +	/* Remove the client driver now */
>>> +	mhi_drv->remove(mhi_dev);
>>> +
>>> +	return 0;
>>> +}
> 
> [...]
> 
>>> +struct mhi_ep_driver {
>>> +	const struct mhi_device_id *id_table;
>>> +	struct device_driver driver;
>>> +	int (*probe)(struct mhi_ep_device *mhi_ep,
>>> +		     const struct mhi_device_id *id);
>>> +	void (*remove)(struct mhi_ep_device *mhi_ep);
>>
>> I get confused by the "ul" versus "dl" naming scheme here.
>> Is "ul" from the perspective of the host, meaning upload
>> is from the host toward the WWAN network (and therefore
>> toward the SDX AP), and download is from the WWAN toward
>> the host?  Somewhere this should be stated clearly in
>> comments; maybe I just missed it.
>>
> 
> Yes UL and DL are as per host context. I didn't state this explicitly
> since this is the MHI host stack behaviour but I'll add a comment for
> clarity

Sounds good, thanks.

					-Alex

> 
> Thanks,
> Mani



* Re: [PATCH v3 12/25] bus: mhi: ep: Add support for ring management
  2022-02-15 20:03   ` Alex Elder
@ 2022-02-18  8:07     ` Manivannan Sadhasivam
  2022-02-18 15:23       ` Manivannan Sadhasivam
  2022-02-18 15:39       ` Alex Elder
  0 siblings, 2 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-18  8:07 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 02:03:13PM -0600, Alex Elder wrote:
> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> > Add support for managing the MHI ring. The MHI ring is a circular queue
> > of data structures used to pass the information between host and the
> > endpoint.
> > 
> > MHI support 3 types of rings:
> > 
> > 1. Transfer ring
> > 2. Event ring
> > 3. Command ring
> > 
> > All rings reside inside the host memory and the MHI EP device maps it to
> > the device memory using blocks like PCIe iATU. The mapping is handled in
> > the MHI EP controller driver itself.
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> 
> Great explanation.  One more thing to add, is that the command
> and transfer rings are directed from the host to the MHI EP device,
> while the event rings are directed from the EP device toward the
> host.
> 

That's correct, will add.

> I notice that you've improved a few things I had notes about,
> and I don't recall suggesting them.  I'm very happy about that.
> 
> I have a few more comments here, some worth thinking about
> at least.
> 
> 					-Alex
> 
> > ---
> >   drivers/bus/mhi/ep/Makefile   |   2 +-
> >   drivers/bus/mhi/ep/internal.h |  33 +++++
> >   drivers/bus/mhi/ep/main.c     |  59 +++++++-
> >   drivers/bus/mhi/ep/ring.c     | 267 ++++++++++++++++++++++++++++++++++
> >   include/linux/mhi_ep.h        |  11 ++
> >   5 files changed, 370 insertions(+), 2 deletions(-)
> >   create mode 100644 drivers/bus/mhi/ep/ring.c
> > 
> > diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> > index a1555ae287ad..7ba0e04801eb 100644
> > --- a/drivers/bus/mhi/ep/Makefile
> > +++ b/drivers/bus/mhi/ep/Makefile
> > @@ -1,2 +1,2 @@
> >   obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> > -mhi_ep-y := main.o mmio.o
> > +mhi_ep-y := main.o mmio.o ring.o
> > diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> > index 2c756a90774c..48d6e9667d55 100644
> > --- a/drivers/bus/mhi/ep/internal.h
> > +++ b/drivers/bus/mhi/ep/internal.h
> > @@ -112,6 +112,18 @@ enum mhi_ep_execenv {
> >   	MHI_EP_UNRESERVED
> >   };
> > +/* Transfer Ring Element macros */
> > +#define MHI_EP_TRE_PTR(ptr) (ptr)
> > +#define MHI_EP_TRE_DWORD0(len) (len & MHI_MAX_MTU)
> 
> The above looks funny.  This assumes MHI_MAX_MTU is
> a mask value (likely one less than a power-of-2).
> That doesn't seem obvious to me; use modulo if you
> must, but better, just ensure len is in range rather
> than silently truncating it if it's not.
> 
> > +#define MHI_EP_TRE_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
> > +	| (ieot << 9) | (ieob << 8) | chain)
> 
> You should probably use FIELD_PREP() to compute the value
> here, since you're using FIELD_GET() to extract the field
> values below.
> 
> > +#define MHI_EP_TRE_GET_PTR(tre) ((tre)->ptr)
> > +#define MHI_EP_TRE_GET_LEN(tre) ((tre)->dword[0] & 0xffff)
> > +#define MHI_EP_TRE_GET_CHAIN(tre) FIELD_GET(BIT(0), (tre)->dword[1])
> 
> #define	TRE_FLAG_CHAIN	BIT(0)
> 
> Then just call
> 	bei = FIELD_GET(TRE_FLAG_CHAIN, tre->dword[1]);
> 
> But I haven't looked at the code where this is used yet.
> 
> > +#define MHI_EP_TRE_GET_IEOB(tre) FIELD_GET(BIT(8), (tre)->dword[1])
> > +#define MHI_EP_TRE_GET_IEOT(tre) FIELD_GET(BIT(9), (tre)->dword[1])
> > +#define MHI_EP_TRE_GET_BEI(tre) FIELD_GET(BIT(10), (tre)->dword[1])
> > +
> 
> These macros should be shared/shareable between the host and endpoint.
> They operate on external interfaces and so should be byte swapped
> (where used) when updating actual memory.  Unlike the patches from
> Paul Davey early in this series, this does *not* byte swap the
> values in the right hand side of these definitions, which is good.
> 
> I'm pretty sure I mentioned this before...  I don't really like these
> "DWORD" macros that simply write compute register values to write
> out to the TREs.  A TRE is a structure, not a set of registers.  And
> a whole TRE can be written or read in a single ARM instruction in
> some cases--but most likely you need to define it as a structure
> for that to happen.
> 
> struct mhi_tre {
> 	__le64 addr;
> 	__le16 len_opcode
> 	__le16 reserved;
> 	__le32 flags;
> };

Changing the TRE structure requires changes to both the host and endpoint
stacks, so I'll tackle this as an improvement later.

Added to TODO list.

> 
> Which reminds me, this shared memory area should probably be mapped
> using memremap() rather than ioremap().  I haven't checked whether
> it is...
> 
> >   enum mhi_ep_ring_type {
> >   	RING_TYPE_CMD = 0,
> >   	RING_TYPE_ER,
> > @@ -131,6 +143,11 @@ union mhi_ep_ring_ctx {
> >   	struct mhi_generic_ctx generic;
> >   };
> > +struct mhi_ep_ring_item {
> > +	struct list_head node;
> > +	struct mhi_ep_ring *ring;
> > +};
> > +
> >   struct mhi_ep_ring {
> >   	struct mhi_ep_cntrl *mhi_cntrl;
> >   	int (*ring_cb)(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> > @@ -143,6 +160,9 @@ struct mhi_ep_ring {
> >   	u32 db_offset_h;
> >   	u32 db_offset_l;
> >   	u32 ch_id;
> > +	u32 er_index;
> > +	u32 irq_vector;
> > +	bool started;
> >   };
> >   struct mhi_ep_cmd {
> > @@ -168,6 +188,19 @@ struct mhi_ep_chan {
> >   	bool skip_td;
> >   };
> > +/* MHI Ring related functions */
> > +void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id);
> > +void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring);
> > +int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
> > +		      union mhi_ep_ring_ctx *ctx);
> > +size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr);
> > +int mhi_ep_process_ring(struct mhi_ep_ring *ring);
> > +int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *element);
> > +void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring);
> > +int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> > +int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> > +int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring);
> > +
> >   /* MMIO related functions */
> >   u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
> >   void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index 950b5bcabe18..2c8045766292 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -18,6 +18,48 @@
> >   static DEFINE_IDA(mhi_ep_cntrl_ida);
> 
> The following function handles command or channel interrupt work.
> 

It handles both.

> > +static void mhi_ep_ring_worker(struct work_struct *work)
> > +{
> > +	struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
> > +				struct mhi_ep_cntrl, ring_work);
> > +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > +	struct mhi_ep_ring_item *itr, *tmp;
> > +	struct mhi_ep_ring *ring;
> > +	struct mhi_ep_chan *chan;
> > +	unsigned long flags;
> > +	LIST_HEAD(head);
> > +	int ret;
> > +
> > +	/* Process the command ring first */
> > +	ret = mhi_ep_process_ring(&mhi_cntrl->mhi_cmd->ring);
> > +	if (ret) {
> 
> At the moment I'm not sure where this work gets scheduled.
> But what if there is no command to process?  It looks
> like you go update the cached pointer no matter what
> to see if there's anything new.  But it seems like you
> ought to be able to do this when interrupted for a
> command rather than all the time.
> 

No, the ring cache is not updated all the time. If you look at
process_ring(), the write pointer is read from MMIO first and there is a
check to see whether there are elements in the ring. Only if that check
passes will the ring cache be updated.

Since the same work item is used for both cmd and transfer rings, this
check is necessary. The other option would be to use different work items
for command and transfer rings. This is something I want to try once
this initial version gets merged.

Thanks,
Mani


* Re: [PATCH v3 12/25] bus: mhi: ep: Add support for ring management
  2022-02-18  8:07     ` Manivannan Sadhasivam
@ 2022-02-18 15:23       ` Manivannan Sadhasivam
  2022-02-18 15:47         ` Alex Elder
  2022-02-18 15:39       ` Alex Elder
  1 sibling, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-18 15:23 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Fri, Feb 18, 2022 at 01:37:04PM +0530, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 02:03:13PM -0600, Alex Elder wrote:
> > On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> > > Add support for managing the MHI ring. The MHI ring is a circular queue
> > > of data structures used to pass the information between host and the
> > > endpoint.
> > > 
> > > MHI support 3 types of rings:
> > > 
> > > 1. Transfer ring
> > > 2. Event ring
> > > 3. Command ring
> > > 
> > > All rings reside inside the host memory and the MHI EP device maps it to
> > > the device memory using blocks like PCIe iATU. The mapping is handled in
> > > the MHI EP controller driver itself.
> > > 
> > > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > 
> > Great explanation.  One more thing to add, is that the command
> > and transfer rings are directed from the host to the MHI EP device,
> > while the event rings are directed from the EP device toward the
> > host.
> > 
> 
> That's correct, will add.
> 
> > I notice that you've improved a few things I had notes about,
> > and I don't recall suggesting them.  I'm very happy about that.
> > 
> > I have a few more comments here, some worth thinking about
> > at least.
> > 
> > 					-Alex
> > 
> > > ---
> > >   drivers/bus/mhi/ep/Makefile   |   2 +-
> > >   drivers/bus/mhi/ep/internal.h |  33 +++++
> > >   drivers/bus/mhi/ep/main.c     |  59 +++++++-
> > >   drivers/bus/mhi/ep/ring.c     | 267 ++++++++++++++++++++++++++++++++++
> > >   include/linux/mhi_ep.h        |  11 ++
> > >   5 files changed, 370 insertions(+), 2 deletions(-)
> > >   create mode 100644 drivers/bus/mhi/ep/ring.c
> > > 
> > > diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> > > index a1555ae287ad..7ba0e04801eb 100644
> > > --- a/drivers/bus/mhi/ep/Makefile
> > > +++ b/drivers/bus/mhi/ep/Makefile
> > > @@ -1,2 +1,2 @@
> > >   obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> > > -mhi_ep-y := main.o mmio.o
> > > +mhi_ep-y := main.o mmio.o ring.o
> > > diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> > > index 2c756a90774c..48d6e9667d55 100644
> > > --- a/drivers/bus/mhi/ep/internal.h
> > > +++ b/drivers/bus/mhi/ep/internal.h
> > > @@ -112,6 +112,18 @@ enum mhi_ep_execenv {
> > >   	MHI_EP_UNRESERVED
> > >   };
> > > +/* Transfer Ring Element macros */
> > > +#define MHI_EP_TRE_PTR(ptr) (ptr)
> > > +#define MHI_EP_TRE_DWORD0(len) (len & MHI_MAX_MTU)
> > 
> > The above looks funny.  This assumes MHI_MAX_MTU is
> > a mask value (likely one less than a power-of-2).
> > That doesn't seem obvious to me; use modulo if you
> > must, but better, just ensure len is in range rather
> > than silently truncating it if it's not.
> > 
> > > +#define MHI_EP_TRE_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
> > > +	| (ieot << 9) | (ieob << 8) | chain)
> > 
> > You should probably use FIELD_PREP() to compute the value
> > here, since you're using FIELD_GET() to extract the field
> > values below.
> > 
> > > +#define MHI_EP_TRE_GET_PTR(tre) ((tre)->ptr)
> > > +#define MHI_EP_TRE_GET_LEN(tre) ((tre)->dword[0] & 0xffff)
> > > +#define MHI_EP_TRE_GET_CHAIN(tre) FIELD_GET(BIT(0), (tre)->dword[1])
> > 
> > #define	TRE_FLAG_CHAIN	BIT(0)
> > 
> > Then just call
> > 	bei = FIELD_GET(TRE_FLAG_CHAIN, tre->dword[1]);
> > 
> > But I haven't looked at the code where this is used yet.
> > 
> > > +#define MHI_EP_TRE_GET_IEOB(tre) FIELD_GET(BIT(8), (tre)->dword[1])
> > > +#define MHI_EP_TRE_GET_IEOT(tre) FIELD_GET(BIT(9), (tre)->dword[1])
> > > +#define MHI_EP_TRE_GET_BEI(tre) FIELD_GET(BIT(10), (tre)->dword[1])
> > > +
> > 
> > These macros should be shared/shareable between the host and endpoint.
> > They operate on external interfaces and so should be byte swapped
> > (where used) when updating actual memory.  Unlike the patches from
> > Paul Davey early in this series, this does *not* byte swap the
> > values in the right hand side of these definitions, which is good.
> > 
> > I'm pretty sure I mentioned this before...  I don't really like these
> > "DWORD" macros that simply compute register values to write
> > out to the TREs.  A TRE is a structure, not a set of registers.  And
> > a whole TRE can be written or read in a single ARM instruction in
> > some cases--but most likely you need to define it as a structure
> > for that to happen.
> > 
> > struct mhi_tre {
> > 	__le64 addr;
> > 	__le16 len_opcode;
> > 	__le16 reserved;
> > 	__le32 flags;
> > };
> 
> Changing the TRE structure requires changes to both host and endpoint
> stack. So I'll tackle this as an improvement later.
> 
> Added to TODO list.

Just did a comparison w/ the IPA code and convinced myself that this conversion
should happen now. So please ignore my above comment.

Thanks,
Mani

> 
> > 
> > Which reminds me, this shared memory area should probably be mapped
> > using memremap() rather than ioremap().  I haven't checked whether
> > it is...
> > 
> > >   enum mhi_ep_ring_type {
> > >   	RING_TYPE_CMD = 0,
> > >   	RING_TYPE_ER,
> > > @@ -131,6 +143,11 @@ union mhi_ep_ring_ctx {
> > >   	struct mhi_generic_ctx generic;
> > >   };
> > > +struct mhi_ep_ring_item {
> > > +	struct list_head node;
> > > +	struct mhi_ep_ring *ring;
> > > +};
> > > +
> > >   struct mhi_ep_ring {
> > >   	struct mhi_ep_cntrl *mhi_cntrl;
> > >   	int (*ring_cb)(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> > > @@ -143,6 +160,9 @@ struct mhi_ep_ring {
> > >   	u32 db_offset_h;
> > >   	u32 db_offset_l;
> > >   	u32 ch_id;
> > > +	u32 er_index;
> > > +	u32 irq_vector;
> > > +	bool started;
> > >   };
> > >   struct mhi_ep_cmd {
> > > @@ -168,6 +188,19 @@ struct mhi_ep_chan {
> > >   	bool skip_td;
> > >   };
> > > +/* MHI Ring related functions */
> > > +void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id);
> > > +void mhi_ep_ring_reset(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring);
> > > +int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
> > > +		      union mhi_ep_ring_ctx *ctx);
> > > +size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr);
> > > +int mhi_ep_process_ring(struct mhi_ep_ring *ring);
> > > +int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *element);
> > > +void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring);
> > > +int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> > > +int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> > > +int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring);
> > > +
> > >   /* MMIO related functions */
> > >   u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
> > >   void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
> > > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > > index 950b5bcabe18..2c8045766292 100644
> > > --- a/drivers/bus/mhi/ep/main.c
> > > +++ b/drivers/bus/mhi/ep/main.c
> > > @@ -18,6 +18,48 @@
> > >   static DEFINE_IDA(mhi_ep_cntrl_ida);
> > 
> > The following function handles command or channel interrupt work.
> > 
> 
> Both
> 
> > > +static void mhi_ep_ring_worker(struct work_struct *work)
> > > +{
> > > +	struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
> > > +				struct mhi_ep_cntrl, ring_work);
> > > +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > > +	struct mhi_ep_ring_item *itr, *tmp;
> > > +	struct mhi_ep_ring *ring;
> > > +	struct mhi_ep_chan *chan;
> > > +	unsigned long flags;
> > > +	LIST_HEAD(head);
> > > +	int ret;
> > > +
> > > +	/* Process the command ring first */
> > > +	ret = mhi_ep_process_ring(&mhi_cntrl->mhi_cmd->ring);
> > > +	if (ret) {
> > 
> > At the moment I'm not sure where this work gets scheduled.
> > But what if there is no command to process?  It looks
> > like you go update the cached pointer no matter what
> > to see if there's anything new.  But it seems like you
> > ought to be able to do this when interrupted for a
> > command rather than all the time.
> > 
> 
> No, ring cache is not getting updated all the time. If you look into
> process_ring(), first the write pointer is read from MMIO and there is a
> check to see if there are elements in the ring or not. Only if that
> check passes, the ring cache will get updated.
> 
> Since the same work item is used for both cmd and transfer rings, this
> check is necessary. The other option would be to use different work items
> for command and transfer rings. This is something I want to try once
> this initial version gets merged.
> 
> Thanks,
> Mani

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 12/25] bus: mhi: ep: Add support for ring management
  2022-02-18  8:07     ` Manivannan Sadhasivam
  2022-02-18 15:23       ` Manivannan Sadhasivam
@ 2022-02-18 15:39       ` Alex Elder
  1 sibling, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-18 15:39 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/18/22 2:07 AM, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 02:03:13PM -0600, Alex Elder wrote:
>> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
>>> Add support for managing the MHI ring. The MHI ring is a circular queue
>>> of data structures used to pass information between the host and the
>>> endpoint.
>>>
>>> MHI supports 3 types of rings:
>>>
>>> 1. Transfer ring
>>> 2. Event ring
>>> 3. Command ring
>>>
>>> All rings reside inside the host memory and the MHI EP device maps them to
>>> the device memory using hardware blocks like the PCIe iATU. The mapping is
>>> handled in the MHI EP controller driver itself.
>>>
>>> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>
>> Great explanation.  One more thing to add, is that the command
>> and transfer rings are directed from the host to the MHI EP device,
>> while the event rings are directed from the EP device toward the
>> host.
>>
> 
> That's correct, will add.
> 
>> I notice that you've improved a few things I had notes about,
>> and I don't recall suggesting them.  I'm very happy about that.
>>
>> I have a few more comments here, some worth thinking about
>> at least.
>>
>> 					-Alex
>>
>>> ---
>>>    drivers/bus/mhi/ep/Makefile   |   2 +-
>>>    drivers/bus/mhi/ep/internal.h |  33 +++++
>>>    drivers/bus/mhi/ep/main.c     |  59 +++++++-
>>>    drivers/bus/mhi/ep/ring.c     | 267 ++++++++++++++++++++++++++++++++++
>>>    include/linux/mhi_ep.h        |  11 ++
>>>    5 files changed, 370 insertions(+), 2 deletions(-)
>>>    create mode 100644 drivers/bus/mhi/ep/ring.c
>>>
>>> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
>>> index a1555ae287ad..7ba0e04801eb 100644
>>> --- a/drivers/bus/mhi/ep/Makefile
>>> +++ b/drivers/bus/mhi/ep/Makefile

. . .

>>> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
>>> index 950b5bcabe18..2c8045766292 100644
>>> --- a/drivers/bus/mhi/ep/main.c
>>> +++ b/drivers/bus/mhi/ep/main.c
>>> @@ -18,6 +18,48 @@
>>>    static DEFINE_IDA(mhi_ep_cntrl_ida);
>>
>> The following function handles command or channel interrupt work.
>>
> 
> Both

What I meant was to suggest a comment stating that it
is used for both of those.  Not really a big deal though.

>>> +static void mhi_ep_ring_worker(struct work_struct *work)
>>> +{
>>> +	struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
>>> +				struct mhi_ep_cntrl, ring_work);
>>> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
>>> +	struct mhi_ep_ring_item *itr, *tmp;
>>> +	struct mhi_ep_ring *ring;
>>> +	struct mhi_ep_chan *chan;
>>> +	unsigned long flags;
>>> +	LIST_HEAD(head);
>>> +	int ret;
>>> +
>>> +	/* Process the command ring first */
>>> +	ret = mhi_ep_process_ring(&mhi_cntrl->mhi_cmd->ring);
>>> +	if (ret) {
>>
>> At the moment I'm not sure where this work gets scheduled.
>> But what if there is no command to process?  It looks
>> like you go update the cached pointer no matter what
>> to see if there's anything new.  But it seems like you
>> ought to be able to do this when interrupted for a
>> command rather than all the time.
>>
> 
> No, ring cache is not getting updated all the time. If you look into
> process_ring(), first the write pointer is read from MMIO and there is a
> check to see if there are elements in the ring or not. Only if that
> check passes, the ring cache will get updated.
> 
> Since the same work item is used for both cmd and transfer rings, this
> check is necessary. The other option would be to use different work items
> for command and transfer rings. This is something I want to try once
> this initial version gets merged.

OK.  I accept your explanation (even though I confess I did not
go back and look at the code again...).

Thanks Mani.

					-Alex

> Thanks,
> Mani



* Re: [PATCH v3 12/25] bus: mhi: ep: Add support for ring management
  2022-02-18 15:23       ` Manivannan Sadhasivam
@ 2022-02-18 15:47         ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-18 15:47 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/18/22 9:23 AM, Manivannan Sadhasivam wrote:
>>>
>>> I'm pretty sure I mentioned this before...  I don't really like these
>>> "DWORD" macros that simply compute register values to write
>>> out to the TREs.  A TRE is a structure, not a set of registers.  And
>>> a whole TRE can be written or read in a single ARM instruction in
>>> some cases--but most likely you need to define it as a structure
>>> for that to happen.
>>>
>>> struct mhi_tre {
>>> 	__le64 addr;
>>> 	__le16 len_opcode;
>>> 	__le16 reserved;
>>> 	__le32 flags;
>>> };
>> Changing the TRE structure requires changes to both host and endpoint
>> stack. So I'll tackle this as an improvement later.
>>
>> Added to TODO list.
> Just did a comparison w/ the IPA code and convinced myself that this conversion
> should happen now. So please ignore my above comment.

This might not be that much work, but if it is, I somewhat
apologize for that.  Still, I believe the code will be better
as a result, so I'm not *that* sorry.

If you do this though, I would recommend you do it as a
separate, prerequisite bit of work.  Your series is too
long, and making it longer by adding this will just delay
*everything* a bit more.  So, I'd advise updating the
existing host code this way first, then adapt your patch
series to do things the new way.

Alternatively, do this later (as you earlier said you would),
and don't delay this series any more.  If it works, it works,
and you can always improve it in the future.

And now that your series is getting closer to golden, maybe
you can break it into a few smaller series?  I don't know,
that also can lead to some confusion, so I won't strongly
advocate that.  But it's something to consider for future
work regardless.

					-Alex


* Re: [PATCH v3 13/25] bus: mhi: ep: Add support for sending events to the host
  2022-02-15 22:39   ` Alex Elder
@ 2022-02-22  6:06     ` Manivannan Sadhasivam
  2022-02-22 13:41       ` Alex Elder
  0 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-22  6:06 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 04:39:17PM -0600, Alex Elder wrote:
> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> > Add support for sending events to the host over the MHI bus from the
> > endpoint. The following events are supported:
> > 
> > 1. Transfer completion event
> > 2. Command completion event
> > 3. State change event
> > 4. Execution Environment (EE) change event
> > 
> > An event is sent whenever an operation has been completed in the MHI EP
> > device. The event is sent using the MHI event ring, and additionally the
> > host is notified using an IRQ if required.
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> 
> A few things can be simplified here.
> 
> 					-Alex
> 
> > ---
> >   drivers/bus/mhi/common.h      |  15 ++++
> >   drivers/bus/mhi/ep/internal.h |   8 ++-
> >   drivers/bus/mhi/ep/main.c     | 126 ++++++++++++++++++++++++++++++++++
> >   include/linux/mhi_ep.h        |   8 +++
> >   4 files changed, 155 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> > index 728c82928d8d..26d94ed52b34 100644
> > --- a/drivers/bus/mhi/common.h
> > +++ b/drivers/bus/mhi/common.h
> > @@ -176,6 +176,21 @@
> >   #define MHI_TRE_GET_EV_LINKSPEED(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
> >   #define MHI_TRE_GET_EV_LINKWIDTH(tre)			(MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
> > +/* State change event */
> > +#define MHI_SC_EV_PTR					0
> > +#define MHI_SC_EV_DWORD0(state)				cpu_to_le32(state << 24)
> > +#define MHI_SC_EV_DWORD1(type)				cpu_to_le32(type << 16)
> > +
> > +/* EE event */
> > +#define MHI_EE_EV_PTR					0
> > +#define MHI_EE_EV_DWORD0(ee)				cpu_to_le32(ee << 24)
> > +#define MHI_EE_EV_DWORD1(type)				cpu_to_le32(type << 16)
> > +
> > +/* Command Completion event */
> > +#define MHI_CC_EV_PTR(ptr)				cpu_to_le64(ptr)
> > +#define MHI_CC_EV_DWORD0(code)				cpu_to_le32(code << 24)
> > +#define MHI_CC_EV_DWORD1(type)				cpu_to_le32(type << 16)
> > +
> >   /* Transfer descriptor macros */
> >   #define MHI_TRE_DATA_PTR(ptr)				cpu_to_le64(ptr)
> >   #define MHI_TRE_DATA_DWORD0(len)			cpu_to_le32(len & MHI_MAX_MTU)
> > diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> > index 48d6e9667d55..fd63f79c6aec 100644
> > --- a/drivers/bus/mhi/ep/internal.h
> > +++ b/drivers/bus/mhi/ep/internal.h
> > @@ -131,8 +131,8 @@ enum mhi_ep_ring_type {
> >   };
> >   struct mhi_ep_ring_element {
> > -	u64 ptr;
> > -	u32 dword[2];
> > +	__le64 ptr;
> > +	__le32 dword[2];
> 
> Yay!
> 
> >   };
> >   /* Ring element */
> > @@ -227,4 +227,8 @@ void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *s
> >   void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
> >   void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
> > +/* MHI EP core functions */
> > +int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
> > +int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env);
> > +
> >   #endif
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index 2c8045766292..61f066c6286b 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c

[...]

> > +static int mhi_ep_send_completion_event(struct mhi_ep_cntrl *mhi_cntrl,
> > +					struct mhi_ep_ring *ring, u32 len,
> > +					enum mhi_ev_ccs code)
> > +{
> > +	struct mhi_ep_ring_element event = {};
> > +	__le32 tmp;
> > +
> > +	event.ptr = le64_to_cpu(ring->ring_ctx->generic.rbase) +
> > +			ring->rd_offset * sizeof(struct mhi_ep_ring_element);
> 
> I'm not sure at the moment where this will be called.  But
> it might be easier to pass in the transfer channel pointer
> rather than compute its address here.
> 

Passing the ring element to these functions won't help, because the ring
element only has the address of the buffer it points to. What we need here
is the address of the ring element itself, and that can only be found in the
ring context.

Thanks,
Mani


* Re: [PATCH v3 14/25] bus: mhi: ep: Add support for managing MHI state machine
  2022-02-15 22:39   ` Alex Elder
@ 2022-02-22  7:03     ` Manivannan Sadhasivam
  0 siblings, 0 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-22  7:03 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 04:39:24PM -0600, Alex Elder wrote:
> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> > Add support for managing the MHI state machine by controlling the state
> > transitions. Only the following MHI state transitions are supported:
> > 
> > 1. Ready state
> > 2. M0 state
> > 3. M3 state
> > 4. SYS_ERR state
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> 
> Minor suggestions here.		-Alex
> 
> > ---
> >   drivers/bus/mhi/ep/Makefile   |   2 +-
> >   drivers/bus/mhi/ep/internal.h |  11 +++
> >   drivers/bus/mhi/ep/main.c     |  51 ++++++++++-
> >   drivers/bus/mhi/ep/sm.c       | 168 ++++++++++++++++++++++++++++++++++
> >   include/linux/mhi_ep.h        |   6 ++
> >   5 files changed, 236 insertions(+), 2 deletions(-)
> >   create mode 100644 drivers/bus/mhi/ep/sm.c
> > 
> > diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> > index 7ba0e04801eb..aad85f180b70 100644
> > --- a/drivers/bus/mhi/ep/Makefile
> > +++ b/drivers/bus/mhi/ep/Makefile
> > @@ -1,2 +1,2 @@
> >   obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> > -mhi_ep-y := main.o mmio.o ring.o
> > +mhi_ep-y := main.o mmio.o ring.o sm.o
> > diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> > index fd63f79c6aec..e4e8f06c2898 100644
> > --- a/drivers/bus/mhi/ep/internal.h
> > +++ b/drivers/bus/mhi/ep/internal.h
> > @@ -173,6 +173,11 @@ struct mhi_ep_event {
> >   	struct mhi_ep_ring ring;
> >   };
> > +struct mhi_ep_state_transition {
> > +	struct list_head node;
> > +	enum mhi_state state;
> > +};
> > +
> >   struct mhi_ep_chan {
> >   	char *name;
> >   	struct mhi_ep_device *mhi_dev;
> > @@ -230,5 +235,11 @@ void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
> >   /* MHI EP core functions */
> >   int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
> >   int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env);
> > +bool mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state cur_mhi_state,
> > +			    enum mhi_state mhi_state);
> > +int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state);
> > +int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
> > +int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
> > +int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
> >   #endif
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index 61f066c6286b..ccb3c2795041 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -185,6 +185,43 @@ static void mhi_ep_ring_worker(struct work_struct *work)
> >   	}
> >   }
> > +static void mhi_ep_state_worker(struct work_struct *work)
> > +{
> > +	struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
> > +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > +	struct mhi_ep_state_transition *itr, *tmp;
> > +	unsigned long flags;
> > +	LIST_HEAD(head);
> > +	int ret;
> > +
> > +	spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
> > +	list_splice_tail_init(&mhi_cntrl->st_transition_list, &head);
> > +	spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
> > +
> > +	list_for_each_entry_safe(itr, tmp, &head, node) {
> > +		list_del(&itr->node);
> > +		dev_dbg(dev, "Handling MHI state transition to %s\n",
> > +			 mhi_state_str(itr->state));
> > +
> > +		switch (itr->state) {
> > +		case MHI_STATE_M0:
> > +			ret = mhi_ep_set_m0_state(mhi_cntrl);
> > +			if (ret)
> > +				dev_err(dev, "Failed to transition to M0 state\n");
> > +			break;
> > +		case MHI_STATE_M3:
> > +			ret = mhi_ep_set_m3_state(mhi_cntrl);
> > +			if (ret)
> > +				dev_err(dev, "Failed to transition to M3 state\n");
> > +			break;
> > +		default:
> > +			dev_err(dev, "Invalid MHI state transition: %d\n", itr->state);
> > +			break;
> > +		}
> > +		kfree(itr);
> > +	}
> > +}
> > +
> >   static void mhi_ep_release_device(struct device *dev)
> >   {
> >   	struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> > @@ -386,6 +423,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> >   	}
> >   	INIT_WORK(&mhi_cntrl->ring_work, mhi_ep_ring_worker);
> > +	INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
> >   	mhi_cntrl->ring_wq = alloc_workqueue("mhi_ep_ring_wq", 0, 0);
> >   	if (!mhi_cntrl->ring_wq) {
> > @@ -393,8 +431,16 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> >   		goto err_free_cmd;
> >   	}
> > +	mhi_cntrl->state_wq = alloc_workqueue("mhi_ep_state_wq", 0, 0);
> 
> Maybe it's not a big deal, but do we really need several separate
> work queues?  Would one suffice?  Could a system workqueue be used
> in some cases (such as state changes)?

Good question. The reason for having 2 separate workqueues was to avoid running
the two work items in parallel during bringup. But the code has changed a lot
since then, so this work could go into the other workqueue.

Thanks,
Mani


* Re: [PATCH v3 15/25] bus: mhi: ep: Add support for processing MHI endpoint interrupts
  2022-02-15 22:39   ` Alex Elder
@ 2022-02-22  8:18     ` Manivannan Sadhasivam
  2022-02-22 14:08       ` Alex Elder
  0 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-22  8:18 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 04:39:30PM -0600, Alex Elder wrote:
> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> > Add support for processing MHI endpoint interrupts such as control
> > interrupt, command interrupt and channel interrupt from the host.
> > 
> > The interrupts will be generated in the endpoint device whenever the host
> > writes to the corresponding doorbell registers. The doorbell logic
> > is handled inside the hardware internally.
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> 
> Unless I'm mistaken, you have some bugs here.
> 
> Beyond that, I question whether you should be using workqueues
> for handling all interrupts.  For now, it's fine, but there
> might be room for improvement after this is accepted upstream
> (using threaded interrupt handlers, for example).
> 

The only reason I didn't use bottom halves is that the memory for TRE buffers
needs to be allocated each time, and that allocation can sleep, which a bottom
half must not do.

This is currently a limitation of the iATU, which has only 8 windows for
mapping host memory, and each mapped region's size is also limited.

> 					-Alex
> 
> > ---
> >   drivers/bus/mhi/ep/main.c | 113 +++++++++++++++++++++++++++++++++++++-
> >   include/linux/mhi_ep.h    |   2 +
> >   2 files changed, 113 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index ccb3c2795041..072b872e735b 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -185,6 +185,56 @@ static void mhi_ep_ring_worker(struct work_struct *work)
> >   	}
> >   }
> > +static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl,
> > +				    unsigned long ch_int, u32 ch_idx)
> > +{
> > +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > +	struct mhi_ep_ring_item *item;
> > +	struct mhi_ep_ring *ring;
> > +	unsigned int i;
> 
> Why not u32 i?  And why is the ch_int argument unsigned long?  The value
> passed in is a u32.
> 

for_each_set_bit() expects the 2nd argument to be a pointer to "unsigned long",
so ch_int has to be declared as unsigned long.

Thanks,
Mani


* Re: [PATCH v3 16/25] bus: mhi: ep: Add support for powering up the MHI endpoint stack
  2022-02-15 22:39   ` Alex Elder
@ 2022-02-22  9:08     ` Manivannan Sadhasivam
  2022-02-22 14:10       ` Alex Elder
  0 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-22  9:08 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 04:39:37PM -0600, Alex Elder wrote:
> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> > Add support for MHI endpoint power_up that includes initializing the MMIO
> > and rings, caching the host MHI registers, and setting the MHI state to M0.
> > After registering the MHI EP controller, the stack has to be powered up
> > for usage.
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> 
> Very little to say on this one.		-Alex
> 
> > ---
> >   drivers/bus/mhi/ep/internal.h |   6 +
> >   drivers/bus/mhi/ep/main.c     | 229 ++++++++++++++++++++++++++++++++++
> >   include/linux/mhi_ep.h        |  22 ++++
> >   3 files changed, 257 insertions(+)
> > 
> > diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> > index e4e8f06c2898..ee8c5974f0c0 100644
> > --- a/drivers/bus/mhi/ep/internal.h
> > +++ b/drivers/bus/mhi/ep/internal.h
> > @@ -242,4 +242,10 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
> >   int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
> >   int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
> > +/* MHI EP memory management functions */
> > +int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
> > +		     phys_addr_t *phys_ptr, void __iomem **virt);
> > +void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
> > +		       void __iomem *virt, size_t size);
> > +
> >   #endif
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c

[...]

> > +
> > +static void mhi_ep_enable_int(struct mhi_ep_cntrl *mhi_cntrl)
> > +{
> 
> Are channel doorbell interrupts enabled separately now?
> (There was previously an enable_chdb_interrupts() call.)
> 

Doorbell interrupts are enabled when the corresponding channel gets started.
Enabling all interrupts here triggers spurious irqs as some of the interrupts
associated with hw channels always get triggered.

Thanks,
Mani


* Re: [PATCH v3 19/25] bus: mhi: ep: Add support for handling SYS_ERR condition
  2022-02-15 22:39   ` Alex Elder
@ 2022-02-22 10:29     ` Manivannan Sadhasivam
  0 siblings, 0 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-22 10:29 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 04:39:55PM -0600, Alex Elder wrote:
> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> > Add support for handling SYS_ERR (System Error) condition in the MHI
> > endpoint stack. The SYS_ERR flag will be asserted by the endpoint device
> > when it detects an internal error. The host will then issue reset and
> > reinitializes MHI to recover from the error state.
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> 
> I have a few small comments, but this look good enough for me.
> 
> Reviewed-by: Alex Elder <elder@linaro.org>
> 
> > ---
> >   drivers/bus/mhi/ep/internal.h |  1 +
> >   drivers/bus/mhi/ep/main.c     | 24 ++++++++++++++++++++++++
> >   drivers/bus/mhi/ep/sm.c       |  2 ++
> >   3 files changed, 27 insertions(+)
> > 
> > diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> > index ee8c5974f0c0..8654af7caf40 100644
> > --- a/drivers/bus/mhi/ep/internal.h
> > +++ b/drivers/bus/mhi/ep/internal.h
> > @@ -241,6 +241,7 @@ int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_stat
> >   int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
> >   int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
> >   int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl);
> >   /* MHI EP memory management functions */
> >   int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index ddedd0fb19aa..6378ac5c7e37 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -611,6 +611,30 @@ static void mhi_ep_reset_worker(struct work_struct *work)
> >   	}
> >   }
> > +/*
> > + * We don't need to do anything special other than setting the MHI SYS_ERR
> > + * state. The host issue will reset all contexts and issue MHI RESET so that we
> 
> s/host issue/host/
> 
> > + * could also recover from error state.
> > + */
> > +void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl)
> > +{
> > +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > +	int ret;
> > +
> > +	/* If MHI EP is not enabled, nothing to do */
> > +	if (!mhi_cntrl->is_enabled)
> 
> Is this an expected condition?  SYS_ERR with the endpoint
> disabled?
> 

I hit a case during bringup, but I don't remember exactly where. So I'll
probably remove this check.

> > +		return;
> > +
> > +	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
> > +	if (ret)
> > +		return;
> > +
> > +	/* Signal host that the device went to SYS_ERR state */
> > +	ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_SYS_ERR);
> > +	if (ret)
> > +		dev_err(dev, "Failed sending SYS_ERR state change event: %d\n", ret);
> > +}
> > +
> >   int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
> >   {
> >   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
> > index 68e7f99b9137..9a75ecfe1adf 100644
> > --- a/drivers/bus/mhi/ep/sm.c
> > +++ b/drivers/bus/mhi/ep/sm.c
> > @@ -93,6 +93,7 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
> >   	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
> >   	if (ret) {
> > +		mhi_ep_handle_syserr(mhi_cntrl);
> >   		spin_unlock_bh(&mhi_cntrl->state_lock);
> >   		return ret;
> >   	}
> > @@ -128,6 +129,7 @@ int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
> >   	spin_lock_bh(&mhi_cntrl->state_lock);
> >   	ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
> 
> Are there any other spots that should do this?  For example, in
> mhi_ep_set_ready_state() you don't check the return value of
> the call to mhi_ep_set_mhi_state().  It seems to me it should
> be possible to preclude bogus state changes anyway, but I'm
> not completely sure.
> 

The check should be there; I will add it to ready_state() as well.

Thanks,
Mani

> >   	if (ret) {
> > +		mhi_ep_handle_syserr(mhi_cntrl);
> >   		spin_unlock_bh(&mhi_cntrl->state_lock);
> >   		return ret;
> >   	}
> 

^ permalink raw reply	[flat|nested] 92+ messages in thread

* Re: [PATCH v3 20/25] bus: mhi: ep: Add support for processing command ring
  2022-02-15 22:40   ` Alex Elder
@ 2022-02-22 10:35     ` Manivannan Sadhasivam
  0 siblings, 0 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-22 10:35 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 04:40:01PM -0600, Alex Elder wrote:
> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> > Add support for processing the command ring. The command ring is used by the
> > host to issue channel-specific commands to the EP device. The following
> > commands are supported:
> > 
> > 1. Start channel
> > 2. Stop channel
> > 3. Reset channel
> > 
> > Once the device receives the command doorbell interrupt from host, it
> > executes the command and generates a command completion event to the
> > host in the primary event ring.
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> 
> I'll let you consider my few comments below, but whether or not you
> address them, this looks OK to me.
> 
> Reviewed-by: Alex Elder <elder@linaro.org>
> 
> > ---
> >   drivers/bus/mhi/ep/main.c | 151 ++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 151 insertions(+)
> > 
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index 6378ac5c7e37..4c2ee517832c 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -21,6 +21,7 @@
> >   static DEFINE_IDA(mhi_ep_cntrl_ida);
> > +static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id);
> >   static int mhi_ep_destroy_device(struct device *dev, void *data);
> >   static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 ring_idx,
> > @@ -185,6 +186,156 @@ void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t
> >   	mhi_cntrl->free_addr(mhi_cntrl, phys - offset, virt - offset, size);
> >   }
> > +int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el)
> > +{
> > +	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> > +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > +	struct mhi_result result = {};
> > +	struct mhi_ep_chan *mhi_chan;
> > +	struct mhi_ep_ring *ch_ring;
> > +	u32 tmp, ch_id;
> > +	int ret;
> > +
> > +	ch_id = MHI_TRE_GET_CMD_CHID(el);
> > +	mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
> > +	ch_ring = &mhi_cntrl->mhi_chan[ch_id].ring;
> > +
> > +	switch (MHI_TRE_GET_CMD_TYPE(el)) {
> 
> No MHI_PKT_TYPE_NOOP_CMD?
> 

Not now.

> > +	case MHI_PKT_TYPE_START_CHAN_CMD:
> > +		dev_dbg(dev, "Received START command for channel (%d)\n", ch_id);
> > +
> > +		mutex_lock(&mhi_chan->lock);
> > +		/* Initialize and configure the corresponding channel ring */
> > +		if (!ch_ring->started) {
> > +			ret = mhi_ep_ring_start(mhi_cntrl, ch_ring,
> > +				(union mhi_ep_ring_ctx *)&mhi_cntrl->ch_ctx_cache[ch_id]);
> > +			if (ret) {
> > +				dev_err(dev, "Failed to start ring for channel (%d)\n", ch_id);
> > +				ret = mhi_ep_send_cmd_comp_event(mhi_cntrl,
> > +							MHI_EV_CC_UNDEFINED_ERR);
> > +				if (ret)
> > +					dev_err(dev, "Error sending completion event (%d)\n",
> > +						MHI_EV_CC_UNDEFINED_ERR);
> 
> Print the value of ret in the above message (not UNDEFINED_ERR).
> 
> > +
> > +				goto err_unlock;
> > +			}
> > +		}
> > +
> > +		/* Enable DB for the channel */
> > +		mhi_ep_mmio_enable_chdb_a7(mhi_cntrl, ch_id);
> 
> If an error occurs later, this will be enabled.  Is that what
> you want?  Maybe wait to enable the doorbell until everything
> else succeeds.
> 

Makes sense. Will move this to the end.

> > +
> > +		/* Set channel state to RUNNING */
> > +		mhi_chan->state = MHI_CH_STATE_RUNNING;
> > +		tmp = le32_to_cpu(mhi_cntrl->ch_ctx_cache[ch_id].chcfg);
> > +		tmp &= ~CHAN_CTX_CHSTATE_MASK;
> > +		tmp |= FIELD_PREP(CHAN_CTX_CHSTATE_MASK, MHI_CH_STATE_RUNNING);
> > +		mhi_cntrl->ch_ctx_cache[ch_id].chcfg = cpu_to_le32(tmp);
> > +
> > +		ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
> > +		if (ret) {
> > +			dev_err(dev, "Error sending command completion event (%d)\n",
> > +				MHI_EV_CC_SUCCESS);
> > +			goto err_unlock;
> > +		}
> > +
> > +		mutex_unlock(&mhi_chan->lock);
> > +
> > +		/*
> > +		 * Create MHI device only during UL channel start. Since the MHI
> > +		 * channels operate in a pair, we'll associate both UL and DL
> > +		 * channels to the same device.
> > +		 *
> > +		 * We also need to check for mhi_dev != NULL because, the host
> > +		 * will issue START_CHAN command during resume and we don't
> > +		 * destroy the device during suspend.
> > +		 */
> > +		if (!(ch_id % 2) && !mhi_chan->mhi_dev) {
> > +			ret = mhi_ep_create_device(mhi_cntrl, ch_id);
> > +			if (ret) {
> 
> If this occurs, the host will already have been told the
> request completed successfully.  Is that a problem that
> can/should be avoided?
> 

This should result in SYSERR. Will handle.

Thanks,
Mani


* Re: [PATCH v3 22/25] bus: mhi: ep: Add support for processing transfer ring
  2022-02-15 22:40   ` Alex Elder
@ 2022-02-22 10:50     ` Manivannan Sadhasivam
  0 siblings, 0 replies; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-22 10:50 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 04:40:18PM -0600, Alex Elder wrote:
> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> > Add support for processing the transfer ring from the host. For the transfer
> > ring associated with the DL channel, the xfer callback will simply be invoked.
> > For the UL channel, the ring elements will be read into a buffer
> > up to the write pointer and later passed to the client driver using the
> > xfer callback.
> > 
> > The client drivers should provide the callbacks for both UL and DL
> > channels during registration.
> 
> I think you already checked and guaranteed that.
> 
> I have a question and suggestion below.  But it could
> be considered an optimization that could be implemented
> in the future, so:
> 
> Reviewed-by: Alex Elder <elder@linaro.org>
> 
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> > ---
> >   drivers/bus/mhi/ep/main.c | 49 +++++++++++++++++++++++++++++++++++++++
> >   1 file changed, 49 insertions(+)
> > 
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index b937c6cda9ba..baf383a4857b 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -439,6 +439,55 @@ static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
> >   	return 0;
> >   }
> > +int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el)
> > +{
> > +	struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> > +	struct mhi_result result = {};
> > +	u32 len = MHI_EP_DEFAULT_MTU;
> > +	struct mhi_ep_chan *mhi_chan;
> > +	int ret;
> > +
> > +	mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
> > +
> > +	/*
> > +	 * Bail out if transfer callback is not registered for the channel.
> > +	 * This is most likely due to the client driver not loaded at this point.
> > +	 */
> > +	if (!mhi_chan->xfer_cb) {
> > +		dev_err(&mhi_chan->mhi_dev->dev, "Client driver not available\n");
> > +		return -ENODEV;
> > +	}
> > +
> > +	if (ring->ch_id % 2) {
> > +		/* DL channel */
> > +		result.dir = mhi_chan->dir;
> > +		mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> > +	} else {
> > +		/* UL channel */
> > +		do {
> > +			result.buf_addr = kzalloc(len, GFP_KERNEL);
> 
> So you allocate an 8KB buffer into which you copy
> received data, then pass that to the ->xfer_cb()
> function.  Then you free that buffer.  Repeatedly.
> 
> Two questions about this:
> - This suggests that after copying the data in, the
>   ->xfer_cb() function will copy it again, is that
>   correct?
> - If that is correct, why not just reuse the same 8KB
>   buffer, allocated once outside the loop?
> 

The allocation was moved into the loop so that a buffer of the TRE length could
be allocated, but I somehow ended up allocating a buffer of the maximum length
instead. So the allocation could be moved outside of the loop.

Thanks,
Mani

> It might also be nice to consider whether you could
> allocate the buffer here and have the ->xfer_cb()
> function be responsible for freeing it (and ideally,
> pass it along rather than copying it again).
> 
> > +			if (!result.buf_addr)
> > +				return -ENOMEM;
> > +
> > +			ret = mhi_ep_read_channel(mhi_cntrl, ring, &result, len);
> > +			if (ret < 0) {
> > +				dev_err(&mhi_chan->mhi_dev->dev, "Failed to read channel\n");
> > +				kfree(result.buf_addr);
> > +				return ret;
> > +			}
> > +
> > +			result.dir = mhi_chan->dir;
> > +			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
> > +			kfree(result.buf_addr);
> > +			result.bytes_xferd = 0;
> > +
> > +			/* Read until the ring becomes empty */
> > +		} while (!mhi_ep_queue_is_empty(mhi_chan->mhi_dev, DMA_TO_DEVICE));
> > +	}
> > +
> > +	return 0;
> > +}
> > +
> >   static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
> >   {
> >   	struct device *dev = &mhi_cntrl->mhi_dev->dev;
> 


* Re: [PATCH v3 13/25] bus: mhi: ep: Add support for sending events to the host
  2022-02-22  6:06     ` Manivannan Sadhasivam
@ 2022-02-22 13:41       ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-22 13:41 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/22/22 12:06 AM, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 04:39:17PM -0600, Alex Elder wrote:
>> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
>>> Add support for sending the events to the host over MHI bus from the
>>> endpoint. Following events are supported:
>>>
>>> 1. Transfer completion event
>>> 2. Command completion event
>>> 3. State change event
>>> 4. Execution Environment (EE) change event
>>>
>>> An event is sent whenever an operation has been completed in the MHI EP
>>> device. Event is sent using the MHI event ring and additionally the host
>>> is notified using an IRQ if required.
>>>
>>> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>
>> A few things can be simplified here.
>>
>> 					-Alex
>>
>>> ---
>>>    drivers/bus/mhi/common.h      |  15 ++++
>>>    drivers/bus/mhi/ep/internal.h |   8 ++-
>>>    drivers/bus/mhi/ep/main.c     | 126 ++++++++++++++++++++++++++++++++++
>>>    include/linux/mhi_ep.h        |   8 +++
>>>    4 files changed, 155 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
>>> index 728c82928d8d..26d94ed52b34 100644
>>> --- a/drivers/bus/mhi/common.h
>>> +++ b/drivers/bus/mhi/common.h
>>> @@ -176,6 +176,21 @@
>>>    #define MHI_TRE_GET_EV_LINKSPEED(tre)			((MHI_TRE_GET_DWORD(tre, 1) >> 24) & 0xFF)
>>>    #define MHI_TRE_GET_EV_LINKWIDTH(tre)			(MHI_TRE_GET_DWORD(tre, 0) & 0xFF)
>>> +/* State change event */
>>> +#define MHI_SC_EV_PTR					0
>>> +#define MHI_SC_EV_DWORD0(state)				cpu_to_le32(state << 24)
>>> +#define MHI_SC_EV_DWORD1(type)				cpu_to_le32(type << 16)
>>> +
>>> +/* EE event */
>>> +#define MHI_EE_EV_PTR					0
>>> +#define MHI_EE_EV_DWORD0(ee)				cpu_to_le32(ee << 24)
>>> +#define MHI_EE_EV_DWORD1(type)				cpu_to_le32(type << 16)
>>> +
>>> +/* Command Completion event */
>>> +#define MHI_CC_EV_PTR(ptr)				cpu_to_le64(ptr)
>>> +#define MHI_CC_EV_DWORD0(code)				cpu_to_le32(code << 24)
>>> +#define MHI_CC_EV_DWORD1(type)				cpu_to_le32(type << 16)
>>> +
>>>    /* Transfer descriptor macros */
>>>    #define MHI_TRE_DATA_PTR(ptr)				cpu_to_le64(ptr)
>>>    #define MHI_TRE_DATA_DWORD0(len)			cpu_to_le32(len & MHI_MAX_MTU)
>>> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
>>> index 48d6e9667d55..fd63f79c6aec 100644
>>> --- a/drivers/bus/mhi/ep/internal.h
>>> +++ b/drivers/bus/mhi/ep/internal.h
>>> @@ -131,8 +131,8 @@ enum mhi_ep_ring_type {
>>>    };
>>>    struct mhi_ep_ring_element {
>>> -	u64 ptr;
>>> -	u32 dword[2];
>>> +	__le64 ptr;
>>> +	__le32 dword[2];
>>
>> Yay!
>>
>>>    };
>>>    /* Ring element */
>>> @@ -227,4 +227,8 @@ void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *s
>>>    void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
>>>    void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
>>> +/* MHI EP core functions */
>>> +int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
>>> +int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env);
>>> +
>>>    #endif
>>> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
>>> index 2c8045766292..61f066c6286b 100644
>>> --- a/drivers/bus/mhi/ep/main.c
>>> +++ b/drivers/bus/mhi/ep/main.c
> 
> [...]
> 
>>> +static int mhi_ep_send_completion_event(struct mhi_ep_cntrl *mhi_cntrl,
>>> +					struct mhi_ep_ring *ring, u32 len,
>>> +					enum mhi_ev_ccs code)
>>> +{
>>> +	struct mhi_ep_ring_element event = {};
>>> +	__le32 tmp;
>>> +
>>> +	event.ptr = le64_to_cpu(ring->ring_ctx->generic.rbase) +
>>> +			ring->rd_offset * sizeof(struct mhi_ep_ring_element);
>>
>> I'm not sure at the moment where this will be called.  But
>> it might be easier to pass in the transfer channel pointer
>> rather than compute its address here.

As I recall, I made this comment thinking that in the context of
the caller, the ring element address might be known; but I didn't
look at those calling locations to see.

In any case, what you do here looks correct, so that's fine.

					-Alex

> Passing the ring element to these functions won't help, because the ring
> element only holds the address of the buffer it points to. But what we need here
> is the address of the ring element itself, and that can only be found in the ring
> context.
> 
> Thanks,
> Mani



* Re: [PATCH v3 15/25] bus: mhi: ep: Add support for processing MHI endpoint interrupts
  2022-02-22  8:18     ` Manivannan Sadhasivam
@ 2022-02-22 14:08       ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-22 14:08 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/22/22 2:18 AM, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 04:39:30PM -0600, Alex Elder wrote:
>> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
>>> Add support for processing MHI endpoint interrupts such as control
>>> interrupt, command interrupt and channel interrupt from the host.
>>>
>>> The interrupts will be generated in the endpoint device whenever host
>>> writes to the corresponding doorbell registers. The doorbell logic
>>> is handled inside the hardware internally.
>>>
>>> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>
>> Unless I'm mistaken, you have some bugs here.
>>
>> Beyond that, I question whether you should be using workqueues
>> for handling all interrupts.  For now, it's fine, but there
>> might be room for improvement after this is accepted upstream
>> (using threaded interrupt handlers, for example).
>>
> 
> The only reason I didn't use bottom halves is that memory for the TRE buffers
> needs to be allocated each time, so the handler must run in a context that can sleep.

Threaded interrupt handlers can sleep.  If scheduled, they run
immediately after hard interrupt handlers.  For receive buffers,
yes, replacing a receive buffer just consumed would require an
allocation, but for transmit I think it might be possible to
avoid the need to do a memory allocation.  (Things to think
about at some future date.)

> This is currently a limitation of the iATU, which has only 8 windows for
> mapping host memory; the size of each mapped region is also limited.

Those are hard limitations, and probably what constrains you the most.

					-Alex
> 
>> 					-Alex
>>
>>> ---
>>>    drivers/bus/mhi/ep/main.c | 113 +++++++++++++++++++++++++++++++++++++-
>>>    include/linux/mhi_ep.h    |   2 +
>>>    2 files changed, 113 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
>>> index ccb3c2795041..072b872e735b 100644
>>> --- a/drivers/bus/mhi/ep/main.c
>>> +++ b/drivers/bus/mhi/ep/main.c
>>> @@ -185,6 +185,56 @@ static void mhi_ep_ring_worker(struct work_struct *work)
>>>    	}
>>>    }
>>> +static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl,
>>> +				    unsigned long ch_int, u32 ch_idx)
>>> +{
>>> +	struct device *dev = &mhi_cntrl->mhi_dev->dev;
>>> +	struct mhi_ep_ring_item *item;
>>> +	struct mhi_ep_ring *ring;
>>> +	unsigned int i;
>>
>> Why not u32 i?  And why is the ch_int argument unsigned long?  The value
>> passed in is a u32.
>>
> 
> for_each_set_bit() expects the 2nd argument to be of type "unsigned long".
> 
> Thanks,
> Mani



* Re: [PATCH v3 16/25] bus: mhi: ep: Add support for powering up the MHI endpoint stack
  2022-02-22  9:08     ` Manivannan Sadhasivam
@ 2022-02-22 14:10       ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-22 14:10 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/22/22 3:08 AM, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 04:39:37PM -0600, Alex Elder wrote:
>> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
>>> Add support for MHI endpoint power_up that includes initializing the MMIO
>>> and rings, caching the host MHI registers, and setting the MHI state to M0.
>>> After registering the MHI EP controller, the stack has to be powered up
>>> for usage.
>>>
>>> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>
>> Very little to say on this one.		-Alex
>>
>>> ---
>>>    drivers/bus/mhi/ep/internal.h |   6 +
>>>    drivers/bus/mhi/ep/main.c     | 229 ++++++++++++++++++++++++++++++++++
>>>    include/linux/mhi_ep.h        |  22 ++++
>>>    3 files changed, 257 insertions(+)
>>>
>>> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
>>> index e4e8f06c2898..ee8c5974f0c0 100644
>>> --- a/drivers/bus/mhi/ep/internal.h
>>> +++ b/drivers/bus/mhi/ep/internal.h
>>> @@ -242,4 +242,10 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
>>>    int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
>>>    int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
>>> +/* MHI EP memory management functions */
>>> +int mhi_ep_alloc_map(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, size_t size,
>>> +		     phys_addr_t *phys_ptr, void __iomem **virt);
>>> +void mhi_ep_unmap_free(struct mhi_ep_cntrl *mhi_cntrl, u64 pci_addr, phys_addr_t phys,
>>> +		       void __iomem *virt, size_t size);
>>> +
>>>    #endif
>>> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> 
> [...]
> 
>>> +
>>> +static void mhi_ep_enable_int(struct mhi_ep_cntrl *mhi_cntrl)
>>> +{
>>
>> Are channel doorbell interrupts enabled separately now?
>> (There was previously an enable_chdb_interrupts() call.)
>>
> 
> Doorbell interrupts are enabled when the corresponding channel gets started.
> Enabling all interrupts here triggers spurious IRQs, as some of the interrupts
> associated with the hardware channels always fire.

This is excellent.  Thanks for the explanation.	-Alex

> 
> Thanks,
> Mani



* Re: [PATCH v3 23/25] bus: mhi: ep: Add support for queueing SKBs to the host
  2022-02-15 22:40   ` Alex Elder
@ 2022-02-22 14:38     ` Manivannan Sadhasivam
  2022-02-22 15:18       ` Alex Elder
  0 siblings, 1 reply; 92+ messages in thread
From: Manivannan Sadhasivam @ 2022-02-22 14:38 UTC (permalink / raw)
  To: Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On Tue, Feb 15, 2022 at 04:40:29PM -0600, Alex Elder wrote:
> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
> > Add support for queueing SKBs to the host over the transfer ring of the
> > relevant channel. The mhi_ep_queue_skb() API will be used by the client
> > networking drivers to queue the SKBs to the host over MHI bus.
> > 
> > The host will add ring elements to the transfer ring periodically for
> > the device and the device will write SKBs to the ring elements. If a
> > single SKB doesn't fit in a ring element (TRE), it will be placed in
> > multiple ring elements and the overflow event will be sent for all ring
> > elements except the last one. For the last ring element, the EOT event
> > will be sent indicating the packet boundary.
> > 
> > Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
> 
> I'm a little confused by this, so maybe you can provide
> a better explanation somehow.
> 
> 					-Alex
> 
> > ---
> >   drivers/bus/mhi/ep/main.c | 102 ++++++++++++++++++++++++++++++++++++++
> >   include/linux/mhi_ep.h    |  13 +++++
> >   2 files changed, 115 insertions(+)
> > 
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index baf383a4857b..e4186b012257 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -488,6 +488,108 @@ int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element
> >   	return 0;
> >   }
> > +int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir,
> > +		     struct sk_buff *skb, size_t len, enum mhi_flags mflags)
> 
> Why are both skb and len supplied?  Will an skb be supplied
> without wanting to send all of it?  Must len be less than
> skb->len?  I'm a little confused about the interface.
> 
> Also, the data direction is *out*, right?  You'll never
> be queueing a "receive" SKB?
> 

This was done to be compatible with the MHI host API, where the host can queue
SKBs in both directions. But I think I should stop doing this.

> > +{
> > +	struct mhi_ep_chan *mhi_chan = (dir == DMA_FROM_DEVICE) ? mhi_dev->dl_chan :
> > +								mhi_dev->ul_chan;
> > +	struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
> > +	struct device *dev = &mhi_chan->mhi_dev->dev;
> > +	struct mhi_ep_ring_element *el;
> > +	struct mhi_ep_ring *ring;
> > +	size_t bytes_to_write;
> > +	enum mhi_ev_ccs code;
> > +	void *read_from_loc;
> > +	u32 buf_remaining;
> > +	u64 write_to_loc;
> > +	u32 tre_len;
> > +	int ret = 0;
> > +
> > +	if (dir == DMA_TO_DEVICE)
> > +		return -EINVAL;
> 
> Can't you just preclude this from happening, or
> know it won't happen by inspection?
> 
> > +
> > +	buf_remaining = len;
> > +	ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
> > +
> > +	mutex_lock(&mhi_chan->lock);
> > +
> > +	do {
> > +		/* Don't process the transfer ring if the channel is not in RUNNING state */
> > +		if (mhi_chan->state != MHI_CH_STATE_RUNNING) {
> > +			dev_err(dev, "Channel not available\n");
> > +			ret = -ENODEV;
> > +			goto err_exit;
> > +		}
> > +
> 
> It would be nice if the caller could know whether there
> was enough room *before* you start transferring things.
> It's probably a lot of work to get to that point though.
> 

No, the caller will do this check, but the check is also included here so that we
don't run out of buffers when the packet needs to be split.

> > +		if (mhi_ep_queue_is_empty(mhi_dev, dir)) {
> > +			dev_err(dev, "TRE not available!\n");
> > +			ret = -EINVAL;
> > +			goto err_exit;
> > +		}
> > +
> > +		el = &ring->ring_cache[ring->rd_offset];
> > +		tre_len = MHI_EP_TRE_GET_LEN(el);
> > +		if (skb->len > tre_len) {
> > +			dev_err(dev, "Buffer size (%d) is too large for TRE (%d)!\n",
> > +				skb->len, tre_len);
> 
> This means the receive buffer must be big enough to hold
> any incoming SKB.  This is *without* checking for the
> CHAIN flag in the TRE, so what you describe in the
> patch description seems not to be true.  I.e., multiple
> TREs in a TRD will *not* be consumed if the SKB data
> requires more than what's left in the current TRE.
> 

I think I removed this check for v3 but somehow the change got lost :/

But anyway, there is no need to check for the CHAIN flag while writing to the host.
The CHAIN flag is only used (or even makes sense) when the host writes data to the
device, so that the device knows the packet boundary; the host uses the CHAIN flag
to tell the device where the boundary lies.

But when the device writes to the host, it uses elements pre-queued by the host,
which has no idea where the packet boundary lies. So the host will have set only
EOT on all TREs and expects the device to send an OVERFLOW event for each TRE that
doesn't contain the complete packet. Finally, when the device sends the EOT event,
the host detects the boundary.

Thanks,
Mani


* Re: [PATCH v3 23/25] bus: mhi: ep: Add support for queueing SKBs to the host
  2022-02-22 14:38     ` Manivannan Sadhasivam
@ 2022-02-22 15:18       ` Alex Elder
  2022-02-22 16:05         ` Alex Elder
  0 siblings, 1 reply; 92+ messages in thread
From: Alex Elder @ 2022-02-22 15:18 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/22/22 8:38 AM, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 04:40:29PM -0600, Alex Elder wrote:
>> On 2/12/22 12:21 PM, Manivannan Sadhasivam wrote:
>>> Add support for queueing SKBs to the host over the transfer ring of the
>>> relevant channel. The mhi_ep_queue_skb() API will be used by the client
>>> networking drivers to queue the SKBs to the host over MHI bus.
>>>
>>> The host will add ring elements to the transfer ring periodically for
>>> the device and the device will write SKBs to the ring elements. If a
>>> single SKB doesn't fit in a ring element (TRE), it will be placed in
>>> multiple ring elements and the overflow event will be sent for all ring
>>> elements except the last one. For the last ring element, the EOT event
>>> will be sent indicating the packet boundary.
>>>
>>> Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
>>
>> I'm a little confused by this, so maybe you can provide
>> a better explanation somehow.
>>
>> 					-Alex
>>
>>> ---
>>>    drivers/bus/mhi/ep/main.c | 102 ++++++++++++++++++++++++++++++++++++++
>>>    include/linux/mhi_ep.h    |  13 +++++
>>>    2 files changed, 115 insertions(+)
>>>
>>> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
>>> index baf383a4857b..e4186b012257 100644
>>> --- a/drivers/bus/mhi/ep/main.c
>>> +++ b/drivers/bus/mhi/ep/main.c
>>> @@ -488,6 +488,108 @@ int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element
>>>    	return 0;
>>>    }
>>> +int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir,
>>> +		     struct sk_buff *skb, size_t len, enum mhi_flags mflags)
>>
>> Why are both skb and len supplied?  Will an skb be supplied
>> without wanting to send all of it?  Must len be less than
>> skb->len?  I'm a little confused about the interface.
>>
>> Also, the data direction is *out*, right?  You'll never
>> be queueing a "receive" SKB?
>>
> 
> This was done to be compatible with the MHI host API where the host can queue
> SKBs in both directions. But I think I should stop doing this.


OK.

>>> +{
>>> +	struct mhi_ep_chan *mhi_chan = (dir == DMA_FROM_DEVICE) ? mhi_dev->dl_chan :
>>> +								mhi_dev->ul_chan;
>>> +	struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
>>> +	struct device *dev = &mhi_chan->mhi_dev->dev;
>>> +	struct mhi_ep_ring_element *el;
>>> +	struct mhi_ep_ring *ring;
>>> +	size_t bytes_to_write;
>>> +	enum mhi_ev_ccs code;
>>> +	void *read_from_loc;
>>> +	u32 buf_remaining;
>>> +	u64 write_to_loc;
>>> +	u32 tre_len;
>>> +	int ret = 0;
>>> +
>>> +	if (dir == DMA_TO_DEVICE)
>>> +		return -EINVAL;
>>
>> Can't you just preclude this from happening, or
>> know it won't happen by inspection?
>>
>>> +
>>> +	buf_remaining = len;
>>> +	ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
>>> +
>>> +	mutex_lock(&mhi_chan->lock);
>>> +
>>> +	do {
>>> +		/* Don't process the transfer ring if the channel is not in RUNNING state */
>>> +		if (mhi_chan->state != MHI_CH_STATE_RUNNING) {
>>> +			dev_err(dev, "Channel not available\n");
>>> +			ret = -ENODEV;
>>> +			goto err_exit;
>>> +		}
>>> +
>>
>> It would be nice if the caller could know whether there
>> was enough room *before* you start transferring things.
>> It's probably a lot of work to get to that point though.
>>
> 
> No, the caller will do this check, but the check is also included here so that we
> don't run out of buffers when the packet needs to be split.
> 
>>> +		if (mhi_ep_queue_is_empty(mhi_dev, dir)) {
>>> +			dev_err(dev, "TRE not available!\n");
>>> +			ret = -EINVAL;
>>> +			goto err_exit;
>>> +		}
>>> +
>>> +		el = &ring->ring_cache[ring->rd_offset];
>>> +		tre_len = MHI_EP_TRE_GET_LEN(el);
>>> +		if (skb->len > tre_len) {
>>> +			dev_err(dev, "Buffer size (%d) is too large for TRE (%d)!\n",
>>> +				skb->len, tre_len);
>>
>> This means the receive buffer must be big enough to hold
>> any incoming SKB.  This is *without* checking for the
>> CHAIN flag in the TRE, so what you describe in the
>> patch description seems not to be true.  I.e., multiple
>> TREs in a TRD will *not* be consumed if the SKB data
>> requires more than what's left in the current TRE.
>>
> 
> I think I removed this check for v3 but somehow the change got lost :/

Looking at this now, it's possible I got confused about
which direction the data was moving; but I'm not really
sure.

From the perspective of the endpoint device, this is the
*transmit* function.  But when the device is transmitting,
it is moving data into the *receive* buffers that the host
has allocated and supplied via the transfer ring.

My statement seems to be correct though, with this logic,
the host must supply a buffer large enough to receive the
entire next SKB, or it will get an error back.  I no longer
know what happens when this function (mhi_ep_queue_skb())
returns an error--is the skb dropped?

> But anyway, there is no need to check for the CHAIN flag while writing to the host.
> The CHAIN flag is only used or even makes sense when the host writes data to the device, so

I'm not sure that's correct, but I don't want to get into that issue here.
We can talk about that separately.

> that it knows the packet boundary and can use the CHAIN flag to tell the
> device where the boundary lies.

This doesn't sound to me like what the purpose of the CHAIN flag is,
but perhaps I'm misunderstanding you.  Let's have a quick private
chat about this so we don't waste any more e-mail bandwidth.

					-Alex

> But when the device writes to the host, it uses elements pre-queued by the host,
> which has no idea where the packet boundary lies. So the host will have set only
> EOT on all TREs and expects the device to send an OVERFLOW event for each TRE that
> doesn't contain the complete packet. Finally, when the device sends the EOT event,
> the host detects the boundary.
> 
> Thanks,
> Mani



* Re: [PATCH v3 23/25] bus: mhi: ep: Add support for queueing SKBs to the host
  2022-02-22 15:18       ` Alex Elder
@ 2022-02-22 16:05         ` Alex Elder
  0 siblings, 0 replies; 92+ messages in thread
From: Alex Elder @ 2022-02-22 16:05 UTC (permalink / raw)
  To: Manivannan Sadhasivam
  Cc: mhi, quic_hemantk, quic_bbhatt, quic_jhugo, vinod.koul,
	bjorn.andersson, dmitry.baryshkov, quic_vbadigan, quic_cang,
	quic_skananth, linux-arm-msm, linux-kernel

On 2/22/22 9:18 AM, Alex Elder wrote:
> 
>> But anyway, there is no need to check for CHAIN flag while writing to 
>> host.
>> CHAIN flag is only used or even make sense when host writes data to 
>> device, so
> 
> I'm not sure that's correct, but I don't want to get into that issue here.
> We can talk about that separately.

I just wanted to send a short followup here.  My comments
were based on a misunderstanding, and Mani cleared it up
for me.  For host receives, the MHI specification states
that a packet larger than the buffer supplied by a single
TRE causes the device to generate an overflow event to the
host.  That TRE's buffer is filled, and the remaining
packet data is written to the next TRE's buffer (assuming
one is present).

This differs from one feature of IPA and its GSI transfer
rings.  I won't explain that here, to avoid any confusion.

Mani explained things to me, and he's going to send an
updated series, which I'll review.

					-Alex


* Re: [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers
  2022-02-17  9:53     ` Manivannan Sadhasivam
  2022-02-17 14:47       ` Alex Elder
@ 2022-03-04 21:46       ` Jeffrey Hugo
  1 sibling, 0 replies; 92+ messages in thread
From: Jeffrey Hugo @ 2022-03-04 21:46 UTC (permalink / raw)
  To: Manivannan Sadhasivam, Alex Elder
  Cc: mhi, quic_hemantk, quic_bbhatt, vinod.koul, bjorn.andersson,
	dmitry.baryshkov, quic_vbadigan, quic_cang, quic_skananth,
	linux-arm-msm, linux-kernel

On 2/17/2022 2:53 AM, Manivannan Sadhasivam wrote:
> On Tue, Feb 15, 2022 at 02:02:41PM -0600, Alex Elder wrote:
> 
> [...]
> 
>>> +#define MHI_REG_OFFSET				0x100
>>> +#define BHI_REG_OFFSET				0x200
>>
>> Rather than defining the REG_OFFSET values here and adding
>> them to every definition below, why not have the base
>> address used (e.g., in mhi_write_reg_field()) be adjusted
>> by the constant amount?
>>
>> I'm just looking at mhi_init_mmio() (in the existing code)
>> as an example, but for example, the base address used
>> comes from mhi_cntrl->regs.  Can you instead just define
>> a pointer somewhere that is the base of the MHI register
>> range, which is already offset by the appropriate amount?
>>
> 
> I've defined two sets of APIs for MHI and BHI read/write. They will add the
> respective offsets.
> 

While you are making changes, maybe don't have a fixed BHI_REG_OFFSET?
Sure, I think it is always 0x200, but that is a convention and nothing
I've seen in the spec mandates it.  You can derive it from the BHI
offset (BHIOFF) register.

This way, if it ever moves in some future chip, this code should just work.


end of thread, other threads:[~2022-03-04 21:47 UTC | newest]

Thread overview: 92+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-12 18:20 [PATCH v3 00/25] Add initial support for MHI endpoint stack Manivannan Sadhasivam
2022-02-12 18:20 ` [PATCH v3 01/25] bus: mhi: Fix pm_state conversion to string Manivannan Sadhasivam
2022-02-15 20:01   ` Alex Elder
2022-02-16 11:33     ` Manivannan Sadhasivam
2022-02-16 13:41       ` Alex Elder
2022-02-12 18:20 ` [PATCH v3 02/25] bus: mhi: Fix MHI DMA structure endianness Manivannan Sadhasivam
2022-02-15 20:02   ` Alex Elder
2022-02-16  7:04     ` Manivannan Sadhasivam
2022-02-16 14:29       ` Alex Elder
2022-02-12 18:20 ` [PATCH v3 03/25] bus: mhi: Move host MHI code to "host" directory Manivannan Sadhasivam
2022-02-15 20:02   ` Alex Elder
2022-02-12 18:20 ` [PATCH v3 04/25] bus: mhi: Move common MHI definitions out of host directory Manivannan Sadhasivam
2022-02-15  0:28   ` Hemant Kumar
2022-02-15 20:02   ` Alex Elder
2022-02-12 18:20 ` [PATCH v3 05/25] bus: mhi: Make mhi_state_str[] array static inline and move to common.h Manivannan Sadhasivam
2022-02-15  0:31   ` Hemant Kumar
2022-02-15 20:02   ` Alex Elder
2022-02-16 11:39     ` Manivannan Sadhasivam
2022-02-16 14:30       ` Alex Elder
2022-02-12 18:20 ` [PATCH v3 06/25] bus: mhi: Cleanup the register definitions used in headers Manivannan Sadhasivam
2022-02-15  0:37   ` Hemant Kumar
2022-02-15 20:02   ` Alex Elder
2022-02-16 17:21     ` Manivannan Sadhasivam
2022-02-16 17:43       ` Manivannan Sadhasivam
2022-02-12 18:20 ` [PATCH v3 07/25] bus: mhi: Get rid of SHIFT macros and use bitfield operations Manivannan Sadhasivam
2022-02-15 20:02   ` Alex Elder
2022-02-16 16:45     ` Manivannan Sadhasivam
2022-02-12 18:21 ` [PATCH v3 08/25] bus: mhi: ep: Add support for registering MHI endpoint controllers Manivannan Sadhasivam
2022-02-15  1:04   ` Hemant Kumar
2022-02-16 17:33     ` Manivannan Sadhasivam
2022-02-15 20:02   ` Alex Elder
2022-02-17  9:53     ` Manivannan Sadhasivam
2022-02-17 14:47       ` Alex Elder
2022-03-04 21:46       ` Jeffrey Hugo
2022-02-12 18:21 ` [PATCH v3 09/25] bus: mhi: ep: Add support for registering MHI endpoint client drivers Manivannan Sadhasivam
2022-02-12 18:32   ` Manivannan Sadhasivam
2022-02-15  1:10   ` Hemant Kumar
2022-02-15 20:02   ` Alex Elder
2022-02-17 10:20     ` Manivannan Sadhasivam
2022-02-17 14:50       ` Alex Elder
2022-02-12 18:21 ` [PATCH v3 10/25] bus: mhi: ep: Add support for creating and destroying MHI EP devices Manivannan Sadhasivam
2022-02-15 20:02   ` Alex Elder
2022-02-17 12:04     ` Manivannan Sadhasivam
2022-02-12 18:21 ` [PATCH v3 11/25] bus: mhi: ep: Add support for managing MMIO registers Manivannan Sadhasivam
2022-02-15  1:14   ` Hemant Kumar
2022-02-15 20:03   ` Alex Elder
2022-02-12 18:21 ` [PATCH v3 12/25] bus: mhi: ep: Add support for ring management Manivannan Sadhasivam
2022-02-15 20:03   ` Alex Elder
2022-02-18  8:07     ` Manivannan Sadhasivam
2022-02-18 15:23       ` Manivannan Sadhasivam
2022-02-18 15:47         ` Alex Elder
2022-02-18 15:39       ` Alex Elder
2022-02-12 18:21 ` [PATCH v3 13/25] bus: mhi: ep: Add support for sending events to the host Manivannan Sadhasivam
2022-02-15 22:39   ` Alex Elder
2022-02-22  6:06     ` Manivannan Sadhasivam
2022-02-22 13:41       ` Alex Elder
2022-02-12 18:21 ` [PATCH v3 14/25] bus: mhi: ep: Add support for managing MHI state machine Manivannan Sadhasivam
2022-02-15 22:39   ` Alex Elder
2022-02-22  7:03     ` Manivannan Sadhasivam
2022-02-12 18:21 ` [PATCH v3 15/25] bus: mhi: ep: Add support for processing MHI endpoint interrupts Manivannan Sadhasivam
2022-02-15 22:39   ` Alex Elder
2022-02-22  8:18     ` Manivannan Sadhasivam
2022-02-22 14:08       ` Alex Elder
2022-02-12 18:21 ` [PATCH v3 16/25] bus: mhi: ep: Add support for powering up the MHI endpoint stack Manivannan Sadhasivam
2022-02-15 22:39   ` Alex Elder
2022-02-22  9:08     ` Manivannan Sadhasivam
2022-02-22 14:10       ` Alex Elder
2022-02-12 18:21 ` [PATCH v3 17/25] bus: mhi: ep: Add support for powering down " Manivannan Sadhasivam
2022-02-15 22:39   ` Alex Elder
2022-02-12 18:21 ` [PATCH v3 18/25] bus: mhi: ep: Add support for handling MHI_RESET Manivannan Sadhasivam
2022-02-15 22:39   ` Alex Elder
2022-02-12 18:21 ` [PATCH v3 19/25] bus: mhi: ep: Add support for handling SYS_ERR condition Manivannan Sadhasivam
2022-02-15 22:39   ` Alex Elder
2022-02-22 10:29     ` Manivannan Sadhasivam
2022-02-12 18:21 ` [PATCH v3 20/25] bus: mhi: ep: Add support for processing command ring Manivannan Sadhasivam
2022-02-15 22:40   ` Alex Elder
2022-02-22 10:35     ` Manivannan Sadhasivam
2022-02-12 18:21 ` [PATCH v3 21/25] bus: mhi: ep: Add support for reading from the host Manivannan Sadhasivam
2022-02-15 22:40   ` Alex Elder
2022-02-12 18:21 ` [PATCH v3 22/25] bus: mhi: ep: Add support for processing transfer ring Manivannan Sadhasivam
2022-02-15 22:40   ` Alex Elder
2022-02-22 10:50     ` Manivannan Sadhasivam
2022-02-12 18:21 ` [PATCH v3 23/25] bus: mhi: ep: Add support for queueing SKBs to the host Manivannan Sadhasivam
2022-02-15 22:40   ` Alex Elder
2022-02-22 14:38     ` Manivannan Sadhasivam
2022-02-22 15:18       ` Alex Elder
2022-02-22 16:05         ` Alex Elder
2022-02-12 18:21 ` [PATCH v3 24/25] bus: mhi: ep: Add support for suspending and resuming channels Manivannan Sadhasivam
2022-02-15 22:40   ` Alex Elder
2022-02-12 18:21 ` [PATCH v3 25/25] bus: mhi: ep: Add uevent support for module autoloading Manivannan Sadhasivam
2022-02-15 22:40   ` Alex Elder
2022-02-15 20:01 ` [PATCH v3 00/25] Add initial support for MHI endpoint stack Alex Elder
