linux-wireless.vger.kernel.org archive mirror
* [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem
@ 2022-01-14  1:06 Ricardo Martinez
  2022-01-14  1:06 ` [PATCH net-next v4 01/13] list: Add list_next_entry_circular() and list_prev_entry_circular() Ricardo Martinez
                   ` (13 more replies)
  0 siblings, 14 replies; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

t7xx is the PCIe host device driver for the Intel 5G 5000 M.2 solution, which
is based on MediaTek's T700 modem and provides WWAN connectivity.
The driver uses the WWAN framework infrastructure to create the following
control ports and network interfaces:
* /dev/wwan0mbim0 - Interface conforming to the MBIM protocol.
  Applications like libmbim [1] or ModemManager [2] from v1.16 onwards
  with [3][4] can use it to enable data communication towards WWAN.
* /dev/wwan0at0 - Interface that supports AT commands.
* wwan0 - Primary network interface for IP traffic.
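
For illustration only (not part of this series), a minimal userspace sketch
that exercises the AT port could look like the following; the exact responses
depend on the modem firmware:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[256];
	ssize_t n;
	int fd;

	fd = open("/dev/wwan0at0", O_RDWR);	/* AT port registered by t7xx */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	if (write(fd, "AT\r", 3) != 3) {	/* simple liveness probe */
		perror("write");
		close(fd);
		return 1;
	}

	n = read(fd, buf, sizeof(buf) - 1);	/* modem typically answers "OK" */
	if (n > 0) {
		buf[n] = '\0';
		printf("%s\n", buf);
	}

	close(fd);
	return 0;
}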

The main blocks in t7xx driver are:
* PCIe layer - Implements probe, removal, and power management callbacks.
* Port-proxy - Provides a common interface to interact with different types
  of ports such as WWAN ports.
* Modem control & status monitor - Implements the entry point for modem
  initialization, reset and exit, as well as exception handling.
* CLDMA (Control Layer DMA) - Manages the HW used by the port layer to send
  control messages to the modem using MediaTek's CCCI (Cross-Core
  Communication Interface) protocol.
* DPMAIF (Data Plane Modem AP Interface) - Controls the HW that provides
  uplink and downlink queues for the data path. The data exchange takes
  place using circular buffers to share data buffer addresses and metadata
  to describe the packets.
* MHCCIF (Modem Host Cross-Core Interface) - Provides interrupt channels
  for bidirectional event notification such as handshake, exception, PM and
  port enumeration.

The compilation of the t7xx driver is enabled by the CONFIG_MTK_T7XX config
option which depends on CONFIG_WWAN.
This driver was originally developed by MediaTek. Intel adapted t7xx to
the WWAN framework, optimized and refactored the driver source in close
collaboration with MediaTek. This will enable getting the t7xx driver on the
Approved Vendor List for interested OEMs' and ODMs' productization plans with
the Intel 5G 5000 M.2 solution.

List of contributors:
Amir Hanania <amir.hanania@intel.com>
Andriy Shevchenko <andriy.shevchenko@linux.intel.com>
Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Dinesh Sharma <dinesh.sharma@intel.com>
Eliot Lee <eliot.lee@intel.com>
Haijun Liu <haijun.liu@mediatek.com>
M Chetan Kumar <m.chetan.kumar@intel.com>
Mika Westerberg <mika.westerberg@linux.intel.com>
Moises Veleta <moises.veleta@intel.com>
Pierre-louis Bossart <pierre-louis.bossart@intel.com>
Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
Ricardo Martinez <ricardo.martinez@linux.intel.com>
Muralidharan Sethuraman <muralidharan.sethuraman@intel.com>
Soumya Prakash Mishra <Soumya.Prakash.Mishra@intel.com>
Sreehari Kancharla <sreehari.kancharla@intel.com>
Suresh Nagaraj <suresh.nagaraj@intel.com>

[1] https://www.freedesktop.org/software/libmbim/
[2] https://www.freedesktop.org/software/ModemManager/
[3] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/582
[4] https://gitlab.freedesktop.org/mobile-broadband/ModemManager/-/merge_requests/523

v4:
- Implement list_prev_entry_circular() and list_next_entry_circular() macros.
- Remove inline from all c files.
- Define ioread32_poll_timeout_atomic() helper macro.
- Fix return code for WWAN port tx op.
- Allow AT command fragmentation, same as for MBIM commands.
- Introduce t7xx_common.h file in the first patch.
- Rename functions and variables as suggested in v3.
- Reduce code duplication by creating fsm_wait_for_event() helper function.
- Remove unneeded dev_err in t7xx_fsm_clr_event().
- Remove unused variable last_state from struct t7xx_fsm_ctl.
- Remove unused variable txq_select_times from struct dpmaif_ctrl.
- Replace ETXTBSY with EBUSY.
- Refactor t7xx_dpmaif_rx_buf_alloc() to remove an unneeded allocation.
- Fix potential leak at t7xx_dpmaif_rx_frag_alloc().
- Simplify return value handling at t7xx_dpmaif_rx_start().
- Add a helper to handle the common part of CCCI header initialization.
- Make sure interrupts are enabled during PM resume.
- Add a parameter to t7xx_fsm_append_cmd() to tell if it is in interrupt context.

v3:
- Avoid unneeded ping-pong changes between patches.
- Use t7xx_ prefix in functions.
- Use t7xx_ prefix in generic structs where mtk_ or ccci prefix was used.
- Update Authors/Contributors header.
- Remove skb pools used for control path.
- Remove skb pools used for RX data path.
- Do not use dedicated TX queue for ACK-only packets.
- Remove __packed attribute from GPD structs.
- Remove the infrastructure for test and debug ports.
- Use the skb control buffer to store metadata.
- Get the IP packet type from RX PIT.
- Merge variable declaration and simple assignments.
- Use preferred coding patterns.
- Remove global variables.
- Declare HW facing structure members as little endian.
- Rename goto tags to describe what is going to be done.
- Do not use variable length arrays.
- Remove unneeded blank lines in source code and kdoc headers.
- Use C99 initialization format for port-proxy ports.
- Clean up comments.
- Review included headers.
- Better use of 100 column limit.
- Remove unneeded mb() in CLDMA.
- Remove unneeded spin locks and atomics.
- Handle read_poll_timeout error.
- Use dev_err_ratelimited() where required.
- Fix resource leak when requesting IRQs.
- Use generic DEFAULT_TX_QUEUE_LEN instead of a custom macro.
- Use ETH_DATA_LEN instead of defining WWAN_DEFAULT_MTU.
- Use sizeof() instead of defines when the size of structures is required.
- Remove unneeded code from netdev:
    No need to configure HW address length
    No need to implement .ndo_change_mtu
    Remove random address generation
- Code simplifications by using kernel provided functions and macros such as:
    module_pci_driver
    PTR_ERR_OR_ZERO
    for_each_set_bit
    pci_device_is_present
    skb_queue_purge
    list_prev_entry
    __ffs64

v2:
- Replace pdev->driver->name with dev_driver_string(&pdev->dev).
- Replace random_ether_addr() with eth_random_addr().
- Update kernel-doc comment for enum data_policy.
- Indicate the driver is 'Supported' instead of 'Maintained'.
- Fix the Signed-off-by and Co-developed-by tags in the patches.
- Add authors and contributors to the top comment of the source files.

Ricardo Martinez (13):
  list: Add list_next_entry_circular() and list_prev_entry_circular()
  net: wwan: t7xx: Add control DMA interface
  net: wwan: t7xx: Add core components
  net: wwan: t7xx: Add port proxy infrastructure
  net: wwan: t7xx: Add control port
  net: wwan: t7xx: Add AT and MBIM WWAN ports
  net: wwan: t7xx: Data path HW layer
  net: wwan: t7xx: Add data path interface
  net: wwan: t7xx: Add WWAN network interface
  net: wwan: t7xx: Introduce power management support
  net: wwan: t7xx: Runtime PM
  net: wwan: t7xx: Device deep sleep lock/unlock
  net: wwan: t7xx: Add maintainers and documentation

 .../networking/device_drivers/wwan/index.rst  |    1 +
 .../networking/device_drivers/wwan/t7xx.rst   |  120 ++
 MAINTAINERS                                   |   11 +
 drivers/net/wwan/Kconfig                      |   14 +
 drivers/net/wwan/Makefile                     |    1 +
 drivers/net/wwan/t7xx/Makefile                |   20 +
 drivers/net/wwan/t7xx/t7xx_cldma.c            |  282 ++++
 drivers/net/wwan/t7xx/t7xx_cldma.h            |  177 ++
 drivers/net/wwan/t7xx/t7xx_common.h           |   95 ++
 drivers/net/wwan/t7xx/t7xx_dpmaif.c           | 1372 ++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_dpmaif.h           |  146 ++
 drivers/net/wwan/t7xx/t7xx_hif_cldma.c        | 1439 +++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_hif_cldma.h        |  139 ++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c       |  577 +++++++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h       |  252 +++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c    | 1251 ++++++++++++++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h    |  115 ++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c    |  754 +++++++++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h    |   89 +
 drivers/net/wwan/t7xx/t7xx_mhccif.c           |  118 ++
 drivers/net/wwan/t7xx/t7xx_mhccif.h           |   37 +
 drivers/net/wwan/t7xx/t7xx_modem_ops.c        |  703 ++++++++
 drivers/net/wwan/t7xx/t7xx_modem_ops.h        |   87 +
 drivers/net/wwan/t7xx/t7xx_netdev.c           |  433 +++++
 drivers/net/wwan/t7xx/t7xx_netdev.h           |   61 +
 drivers/net/wwan/t7xx/t7xx_pci.c              |  767 +++++++++
 drivers/net/wwan/t7xx/t7xx_pci.h              |  123 ++
 drivers/net/wwan/t7xx/t7xx_pcie_mac.c         |  277 ++++
 drivers/net/wwan/t7xx/t7xx_pcie_mac.h         |   37 +
 drivers/net/wwan/t7xx/t7xx_port.h             |  151 ++
 drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c    |  190 +++
 drivers/net/wwan/t7xx/t7xx_port_proxy.c       |  642 ++++++++
 drivers/net/wwan/t7xx/t7xx_port_proxy.h       |   87 +
 drivers/net/wwan/t7xx/t7xx_port_wwan.c        |  225 +++
 drivers/net/wwan/t7xx/t7xx_reg.h              |  379 +++++
 drivers/net/wwan/t7xx/t7xx_state_monitor.c    |  548 +++++++
 drivers/net/wwan/t7xx/t7xx_state_monitor.h    |  126 ++
 include/linux/list.h                          |   26 +
 38 files changed, 11872 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/wwan/t7xx.rst
 create mode 100644 drivers/net/wwan/t7xx/Makefile
 create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_common.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_port.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_port_wwan.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_reg.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.h

-- 
2.17.1



* [PATCH net-next v4 01/13] list: Add list_next_entry_circular() and list_prev_entry_circular()
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-01-14 13:42   ` Andy Shevchenko
  2022-01-14  1:06 ` [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface Ricardo Martinez
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

Add macros to get the next or previous entry, wrapping around if
needed. For example, calling list_next_entry_circular() on the last
element returns the first element in the list.
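
A minimal kernel-style sketch of how a caller might walk a non-empty ring
with the new macro (the struct and field names below are only illustrative):

#include <linux/list.h>

struct ring_item {
	struct list_head entry;
	int id;
};

/* Return the element after @cur, wrapping from the tail back to the head. */
static struct ring_item *ring_advance(struct ring_item *cur, struct list_head *ring)
{
	return list_next_entry_circular(cur, ring, entry);
}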

Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 include/linux/list.h | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/include/linux/list.h b/include/linux/list.h
index dd6c2041d09c..c147eeb2d39d 100644
--- a/include/linux/list.h
+++ b/include/linux/list.h
@@ -563,6 +563,19 @@ static inline void list_splice_tail_init(struct list_head *list,
 #define list_next_entry(pos, member) \
 	list_entry((pos)->member.next, typeof(*(pos)), member)
 
+/**
+ * list_next_entry_circular - get the next element in list
+ * @pos:	the type * to cursor.
+ * @head:	the list head to take the element from.
+ * @member:	the name of the list_head within the struct.
+ *
+ * Wraparound if pos is the last element (return the first element).
+ * Note that the list is expected to be non-empty.
+ */
+#define list_next_entry_circular(pos, head, member) \
+	(list_is_last(&(pos)->member, head) ? \
+	list_first_entry(head, typeof(*(pos)), member) : list_next_entry(pos, member))
+
 /**
  * list_prev_entry - get the prev element in list
  * @pos:	the type * to cursor
@@ -571,6 +584,19 @@ static inline void list_splice_tail_init(struct list_head *list,
 #define list_prev_entry(pos, member) \
 	list_entry((pos)->member.prev, typeof(*(pos)), member)
 
+/**
+ * list_prev_entry_circular - get the prev element in list
+ * @pos:	the type * to cursor.
+ * @head:	the list head to take the element from.
+ * @member:	the name of the list_head within the struct.
+ *
+ * Wraparound if pos is the first element (return the last element).
+ * Note that the list is expected to be non-empty.
+ */
+#define list_prev_entry_circular(pos, head, member) \
+	(list_is_first(&(pos)->member, head) ? \
+	list_last_entry(head, typeof(*(pos)), member) : list_prev_entry(pos, member))
+
 /**
  * list_for_each	-	iterate over a list
  * @pos:	the &struct list_head to use as a loop cursor.
-- 
2.17.1



* [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
  2022-01-14  1:06 ` [PATCH net-next v4 01/13] list: Add list_next_entry_circular() and list_prev_entry_circular() Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-01-14 14:13   ` Andy Shevchenko
                     ` (2 more replies)
  2022-01-14  1:06 ` [PATCH net-next v4 03/13] net: wwan: t7xx: Add core components Ricardo Martinez
                   ` (11 subsequent siblings)
  13 siblings, 3 replies; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

From: Haijun Liu <haijun.liu@mediatek.com>

The Cross Layer DMA (CLDMA) hardware interface (HIF) enables the control
path of Host-Modem data transfers. The CLDMA HIF layer provides a common
interface to the Port Layer.

CLDMA manages 8 independent RX/TX physical channels with data flow
control in HW queues. CLDMA uses ring buffers of General Packet
Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
data buffers (DB).

CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
interrupts, and initializes CLDMA HW registers.

CLDMA TX flow:
1. Port Layer write
2. Get DB address
3. Configure GPD
4. Trigger processing via HW register write

CLDMA RX flow:
1. CLDMA HW sends an RX "done" to the host
2. Driver starts a thread to safely read the GPD
3. DB is sent to the Port Layer
4. Driver creates a new buffer for the GPD ring
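
For illustration only, a rough sketch of the TX flow above in terms of the
helpers introduced by this patch (locking, DMA error handling and ring
bookkeeping are deliberately omitted; this is not the driver's actual xmit
path):

static void cldma_tx_flow_sketch(struct cldma_ctrl *md_ctrl,
				 struct cldma_queue *txq, struct sk_buff *skb)
{
	struct cldma_request *req = txq->tx_xmit;	/* 1. Port Layer write lands here */
	struct cldma_tgpd *tgpd = req->gpd;

	/* 2. Get DB address: map the skb data for the device */
	req->mapped_buff = dma_map_single(md_ctrl->dev, skb->data,
					  skb->len, DMA_TO_DEVICE);
	req->skb = skb;

	/* 3. Configure GPD: point it at the buffer and hand it to HW */
	t7xx_cldma_tgpd_set_data_ptr(tgpd, req->mapped_buff);
	tgpd->data_buff_len = cpu_to_le16(skb->len);
	tgpd->gpd_flags |= GPD_FLAGS_HWO;

	/* 4. Trigger processing via HW register write */
	t7xx_cldma_hw_set_start_addr(&md_ctrl->hw_info, txq->index,
				     req->gpd_addr, MTK_TX);
	t7xx_cldma_hw_start_queue(&md_ctrl->hw_info, txq->index, MTK_TX);
}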

Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 drivers/net/wwan/t7xx/t7xx_cldma.c     |  282 ++++++
 drivers/net/wwan/t7xx/t7xx_cldma.h     |  177 ++++
 drivers/net/wwan/t7xx/t7xx_common.h    |   95 ++
 drivers/net/wwan/t7xx/t7xx_hif_cldma.c | 1296 ++++++++++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_hif_cldma.h |  138 +++
 5 files changed, 1988 insertions(+)
 create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_cldma.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_common.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_cldma.h

diff --git a/drivers/net/wwan/t7xx/t7xx_cldma.c b/drivers/net/wwan/t7xx/t7xx_cldma.c
new file mode 100644
index 000000000000..e560f6ed454c
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_cldma.c
@@ -0,0 +1,282 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#include <linux/bits.h>
+#include <linux/delay.h>
+#include <linux/io.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/types.h>
+
+#include "t7xx_common.h"
+#include "t7xx_cldma.h"
+
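+/* Per-queue 64-bit address registers (32-bit low/high pair) are spaced 8 bytes apart */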
+#define ADDR_SIZE	8
+
+void t7xx_cldma_clear_ip_busy(struct t7xx_cldma_hw *hw_info)
+{
+	u32 val;
+
+	val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_IP_BUSY);
+	val |= IP_BUSY_WAKEUP;
+	iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_IP_BUSY);
+}
+
+/**
+ * t7xx_cldma_hw_restore() - Restore CLDMA HW registers.
+ * @hw_info: Pointer to struct t7xx_cldma_hw.
+ *
+ * Restore HW after resume. Writes uplink configuration for CLDMA HW.
+ */
+void t7xx_cldma_hw_restore(struct t7xx_cldma_hw *hw_info)
+{
+	u32 ul_cfg;
+
+	ul_cfg = ioread32(hw_info->ap_pdn_base + REG_CLDMA_UL_CFG);
+	ul_cfg &= ~UL_CFG_BIT_MODE_MASK;
+
+	if (hw_info->hw_mode == MODE_BIT_64)
+		ul_cfg |= UL_CFG_BIT_MODE_64;
+	else if (hw_info->hw_mode == MODE_BIT_40)
+		ul_cfg |= UL_CFG_BIT_MODE_40;
+	else if (hw_info->hw_mode == MODE_BIT_36)
+		ul_cfg |= UL_CFG_BIT_MODE_36;
+
+	iowrite32(ul_cfg, hw_info->ap_pdn_base + REG_CLDMA_UL_CFG);
+	/* Disable TX and RX invalid address check */
+	iowrite32(UL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_UL_MEM);
+	iowrite32(DL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_DL_MEM);
+}
+
+void t7xx_cldma_hw_start_queue(struct t7xx_cldma_hw *hw_info, u8 qno, enum mtk_txrx tx_rx)
+{
+	void __iomem *reg;
+	u32 val;
+
+	reg = tx_rx == MTK_RX ? hw_info->ap_pdn_base + REG_CLDMA_DL_START_CMD :
+				hw_info->ap_pdn_base + REG_CLDMA_UL_START_CMD;
+	val = qno == CLDMA_ALL_Q ? CLDMA_ALL_Q : BIT(qno);
+	iowrite32(val, reg);
+}
+
+void t7xx_cldma_hw_start(struct t7xx_cldma_hw *hw_info)
+{
+	/* Enable the TX & RX interrupts */
+	iowrite32(TXRX_STATUS_BITMASK, hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0);
+	iowrite32(TXRX_STATUS_BITMASK, hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0);
+	/* Enable the empty queue interrupt */
+	iowrite32(EMPTY_STATUS_BITMASK, hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0);
+	iowrite32(EMPTY_STATUS_BITMASK, hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0);
+}
+
+void t7xx_cldma_hw_reset(void __iomem *ao_base)
+{
+	u32 val;
+
+	val = ioread32(ao_base + REG_INFRA_RST2_SET);
+	val |= RST2_PMIC_SW_RST_SET;
+	iowrite32(val, ao_base + REG_INFRA_RST2_SET);
+	val = ioread32(ao_base + REG_INFRA_RST4_SET);
+	val |= RST4_CLDMA1_SW_RST_SET;
+	iowrite32(val, ao_base + REG_INFRA_RST4_SET);
+	udelay(1);
+
+	val = ioread32(ao_base + REG_INFRA_RST4_CLR);
+	val |= RST4_CLDMA1_SW_RST_CLR;
+	iowrite32(val, ao_base + REG_INFRA_RST4_CLR);
+	val = ioread32(ao_base + REG_INFRA_RST2_CLR);
+	val |= RST2_PMIC_SW_RST_CLR;
+	iowrite32(val, ao_base + REG_INFRA_RST2_CLR);
+}
+
+bool t7xx_cldma_tx_addr_is_set(struct t7xx_cldma_hw *hw_info, unsigned char qno)
+{
+	u32 offset = REG_CLDMA_UL_START_ADDRL_0 + qno * ADDR_SIZE;
+
+	return !!ioread64(hw_info->ap_pdn_base + offset);
+}
+
+void t7xx_cldma_hw_set_start_addr(struct t7xx_cldma_hw *hw_info, unsigned char qno, u64 address,
+				  enum mtk_txrx tx_rx)
+{
+	u32 offset = qno * ADDR_SIZE;
+	void __iomem *reg;
+
+	reg = tx_rx == MTK_RX ? hw_info->ap_ao_base + REG_CLDMA_DL_START_ADDRL_0 :
+				hw_info->ap_pdn_base + REG_CLDMA_UL_START_ADDRL_0;
+	iowrite64(address, reg + offset);
+}
+
+void t7xx_cldma_hw_resume_queue(struct t7xx_cldma_hw *hw_info, unsigned char qno,
+				enum mtk_txrx tx_rx)
+{
+	void __iomem *base = hw_info->ap_pdn_base;
+
+	if (tx_rx == MTK_RX)
+		iowrite32(BIT(qno), base + REG_CLDMA_DL_RESUME_CMD);
+	else
+		iowrite32(BIT(qno), base + REG_CLDMA_UL_RESUME_CMD);
+}
+
+unsigned int t7xx_cldma_hw_queue_status(struct t7xx_cldma_hw *hw_info, unsigned char qno,
+					enum mtk_txrx tx_rx)
+{
+	void __iomem *reg;
+	u32 mask, val;
+
+	mask = qno == CLDMA_ALL_Q ? CLDMA_ALL_Q : BIT(qno);
+	reg = tx_rx == MTK_RX ? hw_info->ap_ao_base + REG_CLDMA_DL_STATUS :
+				hw_info->ap_pdn_base + REG_CLDMA_UL_STATUS;
+	val = ioread32(reg);
+	return val & mask;
+}
+
+void t7xx_cldma_hw_tx_done(struct t7xx_cldma_hw *hw_info, unsigned int bitmask)
+{
+	unsigned int ch_id;
+
+	ch_id = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0);
+	ch_id &= bitmask;
+	/* Clear the ch IDs in the TX interrupt status register */
+	iowrite32(ch_id, hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0);
+	ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0);
+}
+
+void t7xx_cldma_hw_rx_done(struct t7xx_cldma_hw *hw_info, unsigned int bitmask)
+{
+	unsigned int ch_id;
+
+	ch_id = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0);
+	ch_id &= bitmask;
+	/* Clear the ch IDs in the RX interrupt status register */
+	iowrite32(ch_id, hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0);
+	ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0);
+}
+
+unsigned int t7xx_cldma_hw_int_status(struct t7xx_cldma_hw *hw_info, unsigned int bitmask,
+				      enum mtk_txrx tx_rx)
+{
+	void __iomem *reg;
+	u32 val;
+
+	reg = tx_rx == MTK_RX ? hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0 :
+				hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0;
+	val = ioread32(reg);
+	return val & bitmask;
+}
+
+void t7xx_cldma_hw_irq_dis_txrx(struct t7xx_cldma_hw *hw_info, unsigned char qno,
+				enum mtk_txrx tx_rx)
+{
+	void __iomem *reg;
+	u32 val;
+
+	reg = tx_rx == MTK_RX ? hw_info->ap_ao_base + REG_CLDMA_L2RIMSR0 :
+				hw_info->ap_pdn_base + REG_CLDMA_L2TIMSR0;
+	val = qno == CLDMA_ALL_Q ? CLDMA_ALL_Q : BIT(qno);
+	iowrite32(val, reg);
+}
+
+void t7xx_cldma_hw_irq_dis_eq(struct t7xx_cldma_hw *hw_info, unsigned char qno, enum mtk_txrx tx_rx)
+{
+	void __iomem *reg;
+	u32 val;
+
+	reg = tx_rx == MTK_RX ? hw_info->ap_ao_base + REG_CLDMA_L2RIMSR0 :
+				hw_info->ap_pdn_base + REG_CLDMA_L2TIMSR0;
+	val = qno == CLDMA_ALL_Q ? CLDMA_ALL_Q : BIT(qno);
+	iowrite32(val << EQ_STA_BIT_OFFSET, reg);
+}
+
+void t7xx_cldma_hw_irq_en_txrx(struct t7xx_cldma_hw *hw_info, unsigned char qno,
+			       enum mtk_txrx tx_rx)
+{
+	void __iomem *reg;
+	u32 val;
+
+	reg = tx_rx == MTK_RX ? hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0 :
+				hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0;
+	val = qno == CLDMA_ALL_Q ? CLDMA_ALL_Q : BIT(qno);
+	iowrite32(val, reg);
+}
+
+void t7xx_cldma_hw_irq_en_eq(struct t7xx_cldma_hw *hw_info, unsigned char qno, enum mtk_txrx tx_rx)
+{
+	void __iomem *reg;
+	u32 val;
+
+	reg = tx_rx == MTK_RX ? hw_info->ap_ao_base + REG_CLDMA_L2RIMCR0 :
+				hw_info->ap_pdn_base + REG_CLDMA_L2TIMCR0;
+	val = qno == CLDMA_ALL_Q ? CLDMA_ALL_Q : BIT(qno);
+	iowrite32(val << EQ_STA_BIT_OFFSET, reg);
+}
+
+/**
+ * t7xx_cldma_hw_init() - Initialize CLDMA HW.
+ * @hw_info: Pointer to struct t7xx_cldma_hw.
+ *
+ * Write uplink and downlink configuration to CLDMA HW.
+ */
+void t7xx_cldma_hw_init(struct t7xx_cldma_hw *hw_info)
+{
+	u32 ul_cfg, dl_cfg;
+
+	ul_cfg = ioread32(hw_info->ap_pdn_base + REG_CLDMA_UL_CFG);
+	dl_cfg = ioread32(hw_info->ap_ao_base + REG_CLDMA_DL_CFG);
+	/* Configure the DRAM address mode */
+	ul_cfg &= ~UL_CFG_BIT_MODE_MASK;
+	dl_cfg &= ~DL_CFG_BIT_MODE_MASK;
+
+	if (hw_info->hw_mode == MODE_BIT_64) {
+		ul_cfg |= UL_CFG_BIT_MODE_64;
+		dl_cfg |= DL_CFG_BIT_MODE_64;
+	} else if (hw_info->hw_mode == MODE_BIT_40) {
+		ul_cfg |= UL_CFG_BIT_MODE_40;
+		dl_cfg |= DL_CFG_BIT_MODE_40;
+	} else if (hw_info->hw_mode == MODE_BIT_36) {
+		ul_cfg |= UL_CFG_BIT_MODE_36;
+		dl_cfg |= DL_CFG_BIT_MODE_36;
+	}
+
+	iowrite32(ul_cfg, hw_info->ap_pdn_base + REG_CLDMA_UL_CFG);
+	dl_cfg |= DL_CFG_UP_HW_LAST;
+	iowrite32(dl_cfg, hw_info->ap_ao_base + REG_CLDMA_DL_CFG);
+	iowrite32(0, hw_info->ap_ao_base + REG_CLDMA_INT_MASK);
+	iowrite32(BUSY_MASK_MD, hw_info->ap_ao_base + REG_CLDMA_BUSY_MASK);
+	iowrite32(UL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_UL_MEM);
+	iowrite32(DL_MEM_CHECK_DIS, hw_info->ap_pdn_base + REG_CLDMA_DL_MEM);
+}
+
+void t7xx_cldma_hw_stop_queue(struct t7xx_cldma_hw *hw_info, u8 qno, enum mtk_txrx tx_rx)
+{
+	void __iomem *reg;
+	u32 val;
+
+	reg = tx_rx == MTK_RX ? hw_info->ap_pdn_base + REG_CLDMA_DL_STOP_CMD :
+				hw_info->ap_pdn_base + REG_CLDMA_UL_STOP_CMD;
+	val = qno == CLDMA_ALL_Q ? CLDMA_ALL_Q : BIT(qno);
+	iowrite32(val, reg);
+}
+
+void t7xx_cldma_hw_stop(struct t7xx_cldma_hw *hw_info, enum mtk_txrx tx_rx)
+{
+	void __iomem *reg;
+
+	reg = tx_rx == MTK_RX ? hw_info->ap_ao_base + REG_CLDMA_L2RIMSR0 :
+				hw_info->ap_pdn_base + REG_CLDMA_L2TIMSR0;
+	iowrite32(TXRX_STATUS_BITMASK, reg);
+	iowrite32(EMPTY_STATUS_BITMASK, reg);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_cldma.h b/drivers/net/wwan/t7xx/t7xx_cldma.h
new file mode 100644
index 000000000000..1b5e5bf15a3e
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_cldma.h
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#ifndef __T7XX_CLDMA_H__
+#define __T7XX_CLDMA_H__
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#include "t7xx_common.h"
+
+#define CLDMA_TXQ_NUM			8
+#define CLDMA_RXQ_NUM			8
+#define CLDMA_ALL_Q			GENMASK(7, 0)
+
+/* Interrupt status bits */
+#define EMPTY_STATUS_BITMASK		GENMASK(15, 8)
+#define TXRX_STATUS_BITMASK		GENMASK(7, 0)
+#define EQ_STA_BIT_OFFSET		8
+#define L2_INT_BIT_COUNT		16
+#define EQ_STA_BIT(index)		(BIT((index) + EQ_STA_BIT_OFFSET) & EMPTY_STATUS_BITMASK)
+
+#define TQ_ERR_INT_BITMASK		GENMASK(23, 16)
+#define TQ_ACTIVE_START_ERR_INT_BITMASK	GENMASK(31, 24)
+
+#define RQ_ERR_INT_BITMASK		GENMASK(23, 16)
+#define RQ_ACTIVE_START_ERR_INT_BITMASK	GENMASK(31, 24)
+
+#define CLDMA0_AO_BASE			0x10049000
+#define CLDMA0_PD_BASE			0x1021d000
+#define CLDMA1_AO_BASE			0x1004b000
+#define CLDMA1_PD_BASE			0x1021f000
+
+#define CLDMA_R_AO_BASE			0x10023000
+#define CLDMA_R_PD_BASE			0x1023d000
+
+/* CLDMA TX */
+#define REG_CLDMA_UL_START_ADDRL_0	0x0004
+#define REG_CLDMA_UL_START_ADDRH_0	0x0008
+#define REG_CLDMA_UL_CURRENT_ADDRL_0	0x0044
+#define REG_CLDMA_UL_CURRENT_ADDRH_0	0x0048
+#define REG_CLDMA_UL_STATUS		0x0084
+#define CLDMA_INVALID_STATUS		GENMASK(31, 0)
+#define REG_CLDMA_UL_START_CMD		0x0088
+#define REG_CLDMA_UL_RESUME_CMD		0x008c
+#define REG_CLDMA_UL_STOP_CMD		0x0090
+#define REG_CLDMA_UL_ERROR		0x0094
+#define REG_CLDMA_UL_CFG		0x0098
+#define UL_CFG_BIT_MODE_36		BIT(5)
+#define UL_CFG_BIT_MODE_40		BIT(6)
+#define UL_CFG_BIT_MODE_64		BIT(7)
+#define UL_CFG_BIT_MODE_MASK		GENMASK(7, 5)
+
+#define REG_CLDMA_UL_MEM		0x009c
+#define UL_MEM_CHECK_DIS		BIT(0)
+
+/* CLDMA RX */
+#define REG_CLDMA_DL_START_CMD		0x05bc
+#define REG_CLDMA_DL_RESUME_CMD		0x05c0
+#define REG_CLDMA_DL_STOP_CMD		0x05c4
+#define REG_CLDMA_DL_MEM		0x0508
+#define DL_MEM_CHECK_DIS		BIT(0)
+
+#define REG_CLDMA_DL_CFG		0x0404
+#define DL_CFG_UP_HW_LAST		BIT(2)
+#define DL_CFG_BIT_MODE_36		BIT(10)
+#define DL_CFG_BIT_MODE_40		BIT(11)
+#define DL_CFG_BIT_MODE_64		BIT(12)
+#define DL_CFG_BIT_MODE_MASK		GENMASK(12, 10)
+
+#define REG_CLDMA_DL_START_ADDRL_0	0x0478
+#define REG_CLDMA_DL_START_ADDRH_0	0x047c
+#define REG_CLDMA_DL_CURRENT_ADDRL_0	0x04b8
+#define REG_CLDMA_DL_CURRENT_ADDRH_0	0x04bc
+#define REG_CLDMA_DL_STATUS		0x04f8
+
+/* CLDMA MISC */
+#define REG_CLDMA_L2TISAR0		0x0810
+#define REG_CLDMA_L2TISAR1		0x0814
+#define REG_CLDMA_L2TIMR0		0x0818
+#define REG_CLDMA_L2TIMR1		0x081c
+#define REG_CLDMA_L2TIMCR0		0x0820
+#define REG_CLDMA_L2TIMCR1		0x0824
+#define REG_CLDMA_L2TIMSR0		0x0828
+#define REG_CLDMA_L2TIMSR1		0x082c
+#define REG_CLDMA_L3TISAR0		0x0830
+#define REG_CLDMA_L3TISAR1		0x0834
+#define REG_CLDMA_L2RISAR0		0x0850
+#define REG_CLDMA_L2RISAR1		0x0854
+#define REG_CLDMA_L3RISAR0		0x0870
+#define REG_CLDMA_L3RISAR1		0x0874
+#define REG_CLDMA_IP_BUSY		0x08b4
+#define IP_BUSY_WAKEUP			BIT(0)
+#define CLDMA_L2TISAR0_ALL_INT_MASK	GENMASK(15, 0)
+#define CLDMA_L2RISAR0_ALL_INT_MASK	GENMASK(15, 0)
+
+/* CLDMA MISC */
+#define REG_CLDMA_L2RIMR0		0x0858
+#define REG_CLDMA_L2RIMR1		0x085c
+#define REG_CLDMA_L2RIMCR0		0x0860
+#define REG_CLDMA_L2RIMCR1		0x0864
+#define REG_CLDMA_L2RIMSR0		0x0868
+#define REG_CLDMA_L2RIMSR1		0x086c
+#define REG_CLDMA_BUSY_MASK		0x0954
+#define BUSY_MASK_PCIE			BIT(0)
+#define BUSY_MASK_AP			BIT(1)
+#define BUSY_MASK_MD			BIT(2)
+
+#define REG_CLDMA_INT_MASK		0x0960
+
+/* CLDMA RESET */
+#define REG_INFRA_RST4_SET		0x0730
+#define RST4_CLDMA1_SW_RST_SET		BIT(20)
+
+#define REG_INFRA_RST4_CLR		0x0734
+#define RST4_CLDMA1_SW_RST_CLR		BIT(20)
+
+#define REG_INFRA_RST2_SET		0x0140
+#define RST2_PMIC_SW_RST_SET		BIT(18)
+
+#define REG_INFRA_RST2_CLR		0x0144
+#define RST2_PMIC_SW_RST_CLR		BIT(18)
+
+enum t7xx_hw_mode {
+	MODE_BIT_32,
+	MODE_BIT_36,
+	MODE_BIT_40,
+	MODE_BIT_64,
+};
+
+struct t7xx_cldma_hw {
+	enum t7xx_hw_mode		hw_mode;
+	void __iomem			*ap_ao_base;
+	void __iomem			*ap_pdn_base;
+	u32				phy_interrupt_id;
+};
+
+void t7xx_cldma_hw_irq_dis_txrx(struct t7xx_cldma_hw *hw_info, unsigned char qno,
+				enum mtk_txrx tx_rx);
+void t7xx_cldma_hw_irq_dis_eq(struct t7xx_cldma_hw *hw_info, unsigned char qno,
+			      enum mtk_txrx tx_rx);
+void t7xx_cldma_hw_irq_en_txrx(struct t7xx_cldma_hw *hw_info, unsigned char qno,
+			       enum mtk_txrx tx_rx);
+void t7xx_cldma_hw_irq_en_eq(struct t7xx_cldma_hw *hw_info, unsigned char qno, enum mtk_txrx tx_rx);
+unsigned int t7xx_cldma_hw_queue_status(struct t7xx_cldma_hw *hw_info, unsigned char qno,
+					enum mtk_txrx tx_rx);
+void t7xx_cldma_hw_init(struct t7xx_cldma_hw *hw_info);
+void t7xx_cldma_hw_resume_queue(struct t7xx_cldma_hw *hw_info, unsigned char qno,
+				enum mtk_txrx tx_rx);
+void t7xx_cldma_hw_start(struct t7xx_cldma_hw *hw_info);
+void t7xx_cldma_hw_start_queue(struct t7xx_cldma_hw *hw_info, u8 qno, enum mtk_txrx tx_rx);
+void t7xx_cldma_hw_tx_done(struct t7xx_cldma_hw *hw_info, unsigned int bitmask);
+void t7xx_cldma_hw_rx_done(struct t7xx_cldma_hw *hw_info, unsigned int bitmask);
+void t7xx_cldma_hw_stop_queue(struct t7xx_cldma_hw *hw_info, u8 qno, enum mtk_txrx tx_rx);
+void t7xx_cldma_hw_set_start_addr(struct t7xx_cldma_hw *hw_info,
+				  unsigned char qno, u64 address, enum mtk_txrx tx_rx);
+void t7xx_cldma_hw_reset(void __iomem *ao_base);
+void t7xx_cldma_hw_stop(struct t7xx_cldma_hw *hw_info, enum mtk_txrx tx_rx);
+unsigned int t7xx_cldma_hw_int_status(struct t7xx_cldma_hw *hw_info, unsigned int bitmask,
+				      enum mtk_txrx tx_rx);
+void t7xx_cldma_hw_restore(struct t7xx_cldma_hw *hw_info);
+void t7xx_cldma_clear_ip_busy(struct t7xx_cldma_hw *hw_info);
+bool t7xx_cldma_tx_addr_is_set(struct t7xx_cldma_hw *hw_info, unsigned char qno);
+#endif
diff --git a/drivers/net/wwan/t7xx/t7xx_common.h b/drivers/net/wwan/t7xx/t7xx_common.h
new file mode 100644
index 000000000000..360875a8bd96
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_common.h
@@ -0,0 +1,95 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#ifndef __T7XX_COMMON_H__
+#define __T7XX_COMMON_H__
+
+#include <linux/bits.h>
+#include <linux/skbuff.h>
+#include <linux/types.h>
+
+struct ccci_header {
+	__le32 packet_header;
+	__le32 packet_len;
+	__le32 status;
+	__le32 ex_msg;
+};
+
+enum mtk_txrx {
+	MTK_TX,
+	MTK_RX,
+};
+
+#define TXQ_TYPE_DEFAULT	0
+
+#define CLDMA_NUM 2
+
+#define MTK_SKB_64K		64528		/* 63kB + CCCI header */
+#define MTK_SKB_4K		3584		/* 3.5kB */
+#define NET_RX_BUF		MTK_SKB_4K
+
+#define HDR_FLD_AST		((u32)BIT(31))
+#define HDR_FLD_SEQ		GENMASK(30, 16)
+#define HDR_FLD_CHN		GENMASK(15, 0)
+
+#define CCCI_H_LEN		16
+/* For exception flow use CCCI_H_LEN + reserved space */
+#define CCCI_H_ELEN		128
+
+/* Coupled with HW - indicates if there is data following the CCCI header or not */
+#define CCCI_HEADER_NO_DATA	0xffffffff
+
+/* Control identification numbers for AP<->MD messages  */
+#define CTL_ID_HS1_MSG		0x0
+#define CTL_ID_HS2_MSG		0x1
+#define CTL_ID_HS3_MSG		0x2
+#define CTL_ID_MD_EX		0x4
+#define CTL_ID_DRV_VER_ERROR	0x5
+#define CTL_ID_MD_EX_ACK	0x6
+#define CTL_ID_MD_EX_PASS	0x8
+#define CTL_ID_PORT_ENUM	0x9
+
+/* Modem exception check identification code - "EXCP" */
+#define MD_EX_CHK_ID		0x45584350
+/* Modem exception check acknowledge identification code - "EREC" */
+#define MD_EX_CHK_ACK_ID	0x45524543
+
+enum md_state {
+	MD_STATE_INVALID,		/* No traffic */
+	MD_STATE_GATED,			/* No traffic */
+	MD_STATE_WAITING_FOR_HS1,
+	MD_STATE_WAITING_FOR_HS2,
+	MD_STATE_READY,
+	MD_STATE_EXCEPTION,
+	MD_STATE_RESET,			/* No traffic */
+	MD_STATE_WAITING_TO_STOP,
+	MD_STATE_STOPPED,
+};
+
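+/* Space between skb->data and the end of the data area, used as the length
+ * when DMA-mapping CLDMA buffers.
+ */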
+#ifdef NET_SKBUFF_DATA_USES_OFFSET
+static inline unsigned int t7xx_skb_data_area_size(struct sk_buff *skb)
+{
+	return skb->head + skb->end - skb->data;
+}
+#else
+static inline unsigned int t7xx_skb_data_area_size(struct sk_buff *skb)
+{
+	return skb->end - skb->data;
+}
+#endif
+
+#endif
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
new file mode 100644
index 000000000000..3b49f7b81b01
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -0,0 +1,1296 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ *
+ * Contributors:
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ */
+
+#include <linux/bits.h>
+#include <linux/bitops.h>
+#include <linux/delay.h>
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/dmapool.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-direction.h>
+#include <linux/gfp.h>
+#include <linux/io.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/iopoll.h>
+#include <linux/irqreturn.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/netdevice.h>
+#include <linux/pci.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_cldma.h"
+#include "t7xx_common.h"
+#include "t7xx_hif_cldma.h"
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+#include "t7xx_state_monitor.h"
+
+#define MAX_TX_BUDGET			16
+#define MAX_RX_BUDGET			16
+
+#define CHECK_Q_STOP_TIMEOUT_US		1000000
+#define CHECK_Q_STOP_STEP_US		10000
+
+static void md_cd_queue_struct_reset(struct cldma_queue *queue, struct cldma_ctrl *md_ctrl,
+				     enum mtk_txrx tx_rx, unsigned char index)
+{
+	queue->dir = tx_rx;
+	queue->index = index;
+	queue->md_ctrl = md_ctrl;
+	queue->tr_ring = NULL;
+	queue->tr_done = NULL;
+	queue->tx_xmit = NULL;
+}
+
+static void md_cd_queue_struct_init(struct cldma_queue *queue, struct cldma_ctrl *md_ctrl,
+				    enum mtk_txrx tx_rx, unsigned char index)
+{
+	md_cd_queue_struct_reset(queue, md_ctrl, tx_rx, index);
+	init_waitqueue_head(&queue->req_wq);
+	spin_lock_init(&queue->ring_lock);
+}
+
+static void t7xx_cldma_tgpd_set_data_ptr(struct cldma_tgpd *tgpd, dma_addr_t data_ptr)
+{
+	tgpd->data_buff_bd_ptr_h = cpu_to_le32(upper_32_bits(data_ptr));
+	tgpd->data_buff_bd_ptr_l = cpu_to_le32(lower_32_bits(data_ptr));
+}
+
+static void t7xx_cldma_tgpd_set_next_ptr(struct cldma_tgpd *tgpd, dma_addr_t next_ptr)
+{
+	tgpd->next_gpd_ptr_h = cpu_to_le32(upper_32_bits(next_ptr));
+	tgpd->next_gpd_ptr_l = cpu_to_le32(lower_32_bits(next_ptr));
+}
+
+static void t7xx_cldma_rgpd_set_data_ptr(struct cldma_rgpd *rgpd, dma_addr_t data_ptr)
+{
+	rgpd->data_buff_bd_ptr_h = cpu_to_le32(upper_32_bits(data_ptr));
+	rgpd->data_buff_bd_ptr_l = cpu_to_le32(lower_32_bits(data_ptr));
+}
+
+static void t7xx_cldma_rgpd_set_next_ptr(struct cldma_rgpd *rgpd, dma_addr_t next_ptr)
+{
+	rgpd->next_gpd_ptr_h = cpu_to_le32(upper_32_bits(next_ptr));
+	rgpd->next_gpd_ptr_l = cpu_to_le32(lower_32_bits(next_ptr));
+}
+
+static int t7xx_cldma_alloc_and_map_skb(struct cldma_ctrl *md_ctrl, struct cldma_request *req,
+					size_t size)
+{
+	req->skb = __dev_alloc_skb(size, GFP_KERNEL);
+	if (!req->skb)
+		return -ENOMEM;
+
+	req->mapped_buff = dma_map_single(md_ctrl->dev, req->skb->data,
+					  t7xx_skb_data_area_size(req->skb), DMA_FROM_DEVICE);
+	if (dma_mapping_error(md_ctrl->dev, req->mapped_buff)) {
+		dev_err(md_ctrl->dev, "DMA mapping failed\n");
+		dev_kfree_skb_any(req->skb);
+		req->skb = NULL;
+		req->mapped_buff = 0;
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static int t7xx_cldma_gpd_rx_from_q(struct cldma_queue *queue, int budget, bool *over_budget)
+{
+	struct cldma_ctrl *md_ctrl = queue->md_ctrl;
+	unsigned char hwo_polling_count = 0;
+	struct t7xx_cldma_hw *hw_info;
+	bool rx_not_done = true;
+	int count = 0;
+
+	hw_info = &md_ctrl->hw_info;
+
+	do {
+		struct cldma_request *req;
+		struct cldma_rgpd *rgpd;
+		struct sk_buff *skb;
+		int ret;
+
+		req = queue->tr_done;
+		if (!req)
+			return -ENODATA;
+
+		rgpd = req->gpd;
+		if ((rgpd->gpd_flags & GPD_FLAGS_HWO) || !req->skb) {
+			dma_addr_t gpd_addr;
+
+			if (!pci_device_is_present(to_pci_dev(md_ctrl->dev))) {
+				dev_err(md_ctrl->dev, "PCIe Link disconnected\n");
+				return -ENODEV;
+			}
+
+			gpd_addr = ioread64(hw_info->ap_pdn_base + REG_CLDMA_DL_CURRENT_ADDRL_0 +
+					    queue->index * sizeof(u64));
+			if (req->gpd_addr == gpd_addr || hwo_polling_count++ >= 100)
+				return 0;
+
+			udelay(1);
+			continue;
+		}
+
+		hwo_polling_count = 0;
+		skb = req->skb;
+
+		if (req->mapped_buff) {
+			dma_unmap_single(md_ctrl->dev, req->mapped_buff,
+					 t7xx_skb_data_area_size(skb), DMA_FROM_DEVICE);
+			req->mapped_buff = 0;
+		}
+
+		skb->len = 0;
+		skb_reset_tail_pointer(skb);
+		skb_put(skb, le16_to_cpu(rgpd->data_buff_len));
+
+		ret = md_ctrl->recv_skb(queue, skb);
+		if (ret < 0)
+			return ret;
+
+		req->skb = NULL;
+		t7xx_cldma_rgpd_set_data_ptr(rgpd, 0);
+		queue->tr_done = list_next_entry_circular(req, &queue->tr_ring->gpd_ring, entry);
+		req = queue->rx_refill;
+
+		ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, queue->tr_ring->pkt_size);
+		if (ret)
+			return ret;
+
+		rgpd = req->gpd;
+		t7xx_cldma_rgpd_set_data_ptr(rgpd, req->mapped_buff);
+		rgpd->data_buff_len = 0;
+		rgpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
+		queue->rx_refill = list_next_entry_circular(req, &queue->tr_ring->gpd_ring, entry);
+		rx_not_done = ++count < budget || !need_resched();
+	} while (rx_not_done);
+
+	*over_budget = true;
+	return 0;
+}
+
+static int t7xx_cldma_gpd_rx_collect(struct cldma_queue *queue, int budget)
+{
+	struct cldma_ctrl *md_ctrl = queue->md_ctrl;
+	bool rx_not_done, over_budget = false;
+	struct t7xx_cldma_hw *hw_info;
+	unsigned int l2_rx_int;
+	unsigned long flags;
+	int ret;
+
+	hw_info = &md_ctrl->hw_info;
+
+	do {
+		rx_not_done = false;
+
+		ret = t7xx_cldma_gpd_rx_from_q(queue, budget, &over_budget);
+		if (ret == -ENODATA)
+			return 0;
+
+		if (ret)
+			return ret;
+
+		spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+		if (md_ctrl->rxq_active & BIT(queue->index)) {
+			if (!t7xx_cldma_hw_queue_status(hw_info, queue->index, MTK_RX))
+				t7xx_cldma_hw_resume_queue(hw_info, queue->index, MTK_RX);
+
+			l2_rx_int = t7xx_cldma_hw_int_status(hw_info, BIT(queue->index), MTK_RX);
+			if (l2_rx_int) {
+				t7xx_cldma_hw_rx_done(hw_info, l2_rx_int);
+
+				if (over_budget) {
+					spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+					return -EAGAIN;
+				}
+
+				rx_not_done = true;
+			}
+		}
+
+		spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+	} while (rx_not_done);
+
+	return 0;
+}
+
+static void t7xx_cldma_rx_done(struct work_struct *work)
+{
+	struct cldma_queue *queue = container_of(work, struct cldma_queue, cldma_work);
+	struct cldma_ctrl *md_ctrl = queue->md_ctrl;
+	int value;
+
+	value = t7xx_cldma_gpd_rx_collect(queue, queue->budget);
+	if (value && md_ctrl->rxq_active & BIT(queue->index)) {
+		queue_work(queue->worker, &queue->cldma_work);
+		return;
+	}
+
+	t7xx_cldma_clear_ip_busy(&md_ctrl->hw_info);
+	t7xx_cldma_hw_irq_en_txrx(&md_ctrl->hw_info, queue->index, MTK_RX);
+	t7xx_cldma_hw_irq_en_eq(&md_ctrl->hw_info, queue->index, MTK_RX);
+}
+
+static int t7xx_cldma_gpd_tx_collect(struct cldma_queue *queue)
+{
+	struct cldma_ctrl *md_ctrl = queue->md_ctrl;
+	unsigned int dma_len, count = 0;
+	struct cldma_request *req;
+	struct cldma_tgpd *tgpd;
+	unsigned long flags;
+	dma_addr_t dma_free;
+	struct sk_buff *skb;
+
+	while (!kthread_should_stop()) {
+		spin_lock_irqsave(&queue->ring_lock, flags);
+		req = queue->tr_done;
+		if (!req) {
+			spin_unlock_irqrestore(&queue->ring_lock, flags);
+			break;
+		}
+
+		tgpd = req->gpd;
+		if ((tgpd->gpd_flags & GPD_FLAGS_HWO) || !req->skb) {
+			spin_unlock_irqrestore(&queue->ring_lock, flags);
+			break;
+		}
+
+		queue->budget++;
+		dma_free = req->mapped_buff;
+		dma_len = le16_to_cpu(tgpd->data_buff_len);
+		skb = req->skb;
+		req->skb = NULL;
+		queue->tr_done = list_next_entry_circular(req, &queue->tr_ring->gpd_ring, entry);
+		spin_unlock_irqrestore(&queue->ring_lock, flags);
+		count++;
+		dma_unmap_single(md_ctrl->dev, dma_free, dma_len, DMA_TO_DEVICE);
+		dev_kfree_skb_any(skb);
+	}
+
+	if (count)
+		wake_up_nr(&queue->req_wq, count);
+
+	return count;
+}
+
+static void t7xx_cldma_txq_empty_hndl(struct cldma_queue *queue)
+{
+	struct cldma_ctrl *md_ctrl = queue->md_ctrl;
+	struct cldma_request *req;
+	struct cldma_tgpd *tgpd;
+	dma_addr_t ul_curr_addr;
+	unsigned long flags;
+	bool pending_gpd;
+
+	if (!(md_ctrl->txq_active & BIT(queue->index)))
+		return;
+
+	spin_lock_irqsave(&queue->ring_lock, flags);
+	req = list_prev_entry_circular(queue->tx_xmit, &queue->tr_ring->gpd_ring, entry);
+	tgpd = req->gpd;
+	pending_gpd = (tgpd->gpd_flags & GPD_FLAGS_HWO) && req->skb;
+	spin_unlock_irqrestore(&queue->ring_lock, flags);
+
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	if (pending_gpd) {
+		struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
+
+		/* Check current processing TGPD, 64-bit address is in a table by Q index */
+		ul_curr_addr = ioread64(hw_info->ap_pdn_base + REG_CLDMA_UL_CURRENT_ADDRL_0 +
+					queue->index * sizeof(u64));
+		if (req->gpd_addr != ul_curr_addr) {
+			spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+			dev_err(md_ctrl->dev, "CLDMA%d queue %d is not empty\n",
+				md_ctrl->hif_id, queue->index);
+			return;
+		}
+
+		t7xx_cldma_hw_resume_queue(hw_info, queue->index, MTK_TX);
+	}
+
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static void t7xx_cldma_tx_done(struct work_struct *work)
+{
+	struct cldma_queue *queue = container_of(work, struct cldma_queue, cldma_work);
+	struct cldma_ctrl *md_ctrl = queue->md_ctrl;
+	struct t7xx_cldma_hw *hw_info;
+	unsigned int l2_tx_int;
+	unsigned long flags;
+
+	hw_info = &md_ctrl->hw_info;
+	t7xx_cldma_gpd_tx_collect(queue);
+	l2_tx_int = t7xx_cldma_hw_int_status(hw_info, BIT(queue->index) | EQ_STA_BIT(queue->index),
+					     MTK_TX);
+	if (l2_tx_int & EQ_STA_BIT(queue->index)) {
+		t7xx_cldma_hw_tx_done(hw_info, EQ_STA_BIT(queue->index));
+		t7xx_cldma_txq_empty_hndl(queue);
+	}
+
+	if (l2_tx_int & BIT(queue->index)) {
+		t7xx_cldma_hw_tx_done(hw_info, BIT(queue->index));
+		queue_work(queue->worker, &queue->cldma_work);
+		return;
+	}
+
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	if (md_ctrl->txq_active & BIT(queue->index)) {
+		t7xx_cldma_clear_ip_busy(hw_info);
+		t7xx_cldma_hw_irq_en_eq(hw_info, queue->index, MTK_TX);
+		t7xx_cldma_hw_irq_en_txrx(hw_info, queue->index, MTK_TX);
+	}
+
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static void t7xx_cldma_ring_free(struct cldma_ctrl *md_ctrl,
+				 struct cldma_ring *ring, enum dma_data_direction tx_rx)
+{
+	struct cldma_request *req_cur, *req_next;
+
+	list_for_each_entry_safe(req_cur, req_next, &ring->gpd_ring, entry) {
+		if (req_cur->mapped_buff && req_cur->skb) {
+			dma_unmap_single(md_ctrl->dev, req_cur->mapped_buff,
+					 t7xx_skb_data_area_size(req_cur->skb), tx_rx);
+			req_cur->mapped_buff = 0;
+		}
+
+		dev_kfree_skb_any(req_cur->skb);
+
+		if (req_cur->gpd)
+			dma_pool_free(md_ctrl->gpd_dmapool, req_cur->gpd, req_cur->gpd_addr);
+
+		list_del(&req_cur->entry);
+		kfree_sensitive(req_cur);
+	}
+}
+
+static struct cldma_request *t7xx_alloc_rx_request(struct cldma_ctrl *md_ctrl, size_t pkt_size)
+{
+	struct cldma_request *req;
+	int val;
+
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return NULL;
+
+	req->gpd = dma_pool_zalloc(md_ctrl->gpd_dmapool, GFP_KERNEL, &req->gpd_addr);
+	if (!req->gpd)
+		goto err_free_req;
+
+	val = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, pkt_size);
+	if (val)
+		goto err_free_pool;
+
+	return req;
+
+err_free_pool:
+	dma_pool_free(md_ctrl->gpd_dmapool, req->gpd, req->gpd_addr);
+
+err_free_req:
+	kfree(req);
+
+	return NULL;
+}
+
+static int t7xx_cldma_rx_ring_init(struct cldma_ctrl *md_ctrl, struct cldma_ring *ring)
+{
+	struct cldma_request *req, *first_req = NULL;
+	struct cldma_rgpd *prev_gpd, *gpd = NULL;
+	int i;
+
+	for (i = 0; i < ring->length; i++) {
+		req = t7xx_alloc_rx_request(md_ctrl, ring->pkt_size);
+		if (!req) {
+			t7xx_cldma_ring_free(md_ctrl, ring, DMA_FROM_DEVICE);
+			return -ENOMEM;
+		}
+
+		gpd = req->gpd;
+		t7xx_cldma_rgpd_set_data_ptr(gpd, req->mapped_buff);
+		gpd->data_allow_len = cpu_to_le16(ring->pkt_size);
+		gpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
+
+		if (i)
+			t7xx_cldma_rgpd_set_next_ptr(prev_gpd, req->gpd_addr);
+		else
+			first_req = req;
+
+		INIT_LIST_HEAD(&req->entry);
+		list_add_tail(&req->entry, &ring->gpd_ring);
+		prev_gpd = gpd;
+	}
+
+	if (first_req)
+		t7xx_cldma_rgpd_set_next_ptr(gpd, first_req->gpd_addr);
+
+	return 0;
+}
+
+static struct cldma_request *t7xx_alloc_tx_request(struct cldma_ctrl *md_ctrl)
+{
+	struct cldma_request *req;
+
+	req = kzalloc(sizeof(*req), GFP_KERNEL);
+	if (!req)
+		return NULL;
+
+	req->gpd = dma_pool_zalloc(md_ctrl->gpd_dmapool, GFP_KERNEL, &req->gpd_addr);
+	if (!req->gpd) {
+		kfree(req);
+		return NULL;
+	}
+
+	return req;
+}
+
+static int t7xx_cldma_tx_ring_init(struct cldma_ctrl *md_ctrl, struct cldma_ring *ring)
+{
+	struct cldma_request *req, *first_req = NULL;
+	struct cldma_tgpd *tgpd, *prev_gpd;
+	int i;
+
+	for (i = 0; i < ring->length; i++) {
+		req = t7xx_alloc_tx_request(md_ctrl);
+		if (!req) {
+			t7xx_cldma_ring_free(md_ctrl, ring, DMA_TO_DEVICE);
+			return -ENOMEM;
+		}
+
+		tgpd = req->gpd;
+		tgpd->gpd_flags = GPD_FLAGS_IOC;
+
+		if (!first_req)
+			first_req = req;
+		else
+			t7xx_cldma_tgpd_set_next_ptr(prev_gpd, req->gpd_addr);
+
+		INIT_LIST_HEAD(&req->entry);
+		list_add_tail(&req->entry, &ring->gpd_ring);
+		prev_gpd = tgpd;
+	}
+
+	if (first_req)
+		t7xx_cldma_tgpd_set_next_ptr(tgpd, first_req->gpd_addr);
+
+	return 0;
+}
+
+/**
+ * t7xx_cldma_q_reset() - Reset CLDMA request pointers to their initial values.
+ * @queue: Pointer to the queue structure.
+ */
+static void t7xx_cldma_q_reset(struct cldma_queue *queue)
+{
+	struct cldma_request *req;
+
+	req = list_first_entry(&queue->tr_ring->gpd_ring, struct cldma_request, entry);
+	queue->tr_done = req;
+	queue->budget = queue->tr_ring->length;
+
+	if (queue->dir == MTK_TX)
+		queue->tx_xmit = req;
+	else
+		queue->rx_refill = req;
+}
+
+static void t7xx_cldma_rxq_init(struct cldma_queue *queue)
+{
+	struct cldma_ctrl *md_ctrl = queue->md_ctrl;
+
+	queue->dir = MTK_RX;
+	queue->tr_ring = &md_ctrl->rx_ring[queue->index];
+	t7xx_cldma_q_reset(queue);
+}
+
+static void t7xx_cldma_txq_init(struct cldma_queue *queue)
+{
+	struct cldma_ctrl *md_ctrl = queue->md_ctrl;
+
+	queue->dir = MTK_TX;
+	queue->tr_ring = &md_ctrl->tx_ring[queue->index];
+	t7xx_cldma_q_reset(queue);
+}
+
+static void t7xx_cldma_enable_irq(struct cldma_ctrl *md_ctrl)
+{
+	t7xx_pcie_mac_set_int(md_ctrl->t7xx_dev, md_ctrl->hw_info.phy_interrupt_id);
+}
+
+static void t7xx_cldma_disable_irq(struct cldma_ctrl *md_ctrl)
+{
+	t7xx_pcie_mac_clear_int(md_ctrl->t7xx_dev, md_ctrl->hw_info.phy_interrupt_id);
+}
+
+static void t7xx_cldma_irq_work_cb(struct cldma_ctrl *md_ctrl)
+{
+	u32 l2_tx_int_msk, l2_rx_int_msk, l2_tx_int, l2_rx_int, val;
+	struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
+	int i;
+
+	/* L2 raw interrupt status */
+	l2_tx_int = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TISAR0);
+	l2_rx_int = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2RISAR0);
+	l2_tx_int_msk = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L2TIMR0);
+	l2_rx_int_msk = ioread32(hw_info->ap_ao_base + REG_CLDMA_L2RIMR0);
+	l2_tx_int &= ~l2_tx_int_msk;
+	l2_rx_int &= ~l2_rx_int_msk;
+
+	if (l2_tx_int) {
+		if (l2_tx_int & (TQ_ERR_INT_BITMASK | TQ_ACTIVE_START_ERR_INT_BITMASK)) {
+			/* Read and clear L3 TX interrupt status */
+			val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3TISAR0);
+			iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3TISAR0);
+			val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3TISAR1);
+			iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3TISAR1);
+		}
+
+		t7xx_cldma_hw_tx_done(hw_info, l2_tx_int);
+		if (l2_tx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) {
+			for_each_set_bit(i, (unsigned long *)&l2_tx_int, L2_INT_BIT_COUNT) {
+				if (i < CLDMA_TXQ_NUM) {
+					t7xx_cldma_hw_irq_dis_eq(hw_info, i, MTK_TX);
+					t7xx_cldma_hw_irq_dis_txrx(hw_info, i, MTK_TX);
+					queue_work(md_ctrl->txq[i].worker,
+						   &md_ctrl->txq[i].cldma_work);
+				} else {
+					t7xx_cldma_txq_empty_hndl(&md_ctrl->txq[i - CLDMA_TXQ_NUM]);
+				}
+			}
+		}
+	}
+
+	if (l2_rx_int) {
+		if (l2_rx_int & (RQ_ERR_INT_BITMASK | RQ_ACTIVE_START_ERR_INT_BITMASK)) {
+			/* Read and clear L3 RX interrupt status */
+			val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3RISAR0);
+			iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3RISAR0);
+			val = ioread32(hw_info->ap_pdn_base + REG_CLDMA_L3RISAR1);
+			iowrite32(val, hw_info->ap_pdn_base + REG_CLDMA_L3RISAR1);
+		}
+
+		t7xx_cldma_hw_rx_done(hw_info, l2_rx_int);
+		if (l2_rx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) {
+			l2_rx_int |= l2_rx_int >> CLDMA_RXQ_NUM;
+			for_each_set_bit(i, (unsigned long *)&l2_rx_int, CLDMA_RXQ_NUM) {
+				t7xx_cldma_hw_irq_dis_eq(hw_info, i, MTK_RX);
+				t7xx_cldma_hw_irq_dis_txrx(hw_info, i, MTK_RX);
+				queue_work(md_ctrl->rxq[i].worker, &md_ctrl->rxq[i].cldma_work);
+			}
+		}
+	}
+}
+
+static bool t7xx_cldma_qs_are_active(struct t7xx_cldma_hw *hw_info)
+{
+	unsigned int tx_active;
+	unsigned int rx_active;
+
+	tx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_TX);
+	rx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_RX);
+	if (tx_active == CLDMA_INVALID_STATUS || rx_active == CLDMA_INVALID_STATUS)
+		return false;
+
+	return tx_active || rx_active;
+}
+
+/**
+ * t7xx_cldma_stop() - Stop CLDMA.
+ * @md_ctrl: CLDMA context structure.
+ *
+ * Stop TX and RX queues. Disable L1 and L2 interrupts.
+ * Clear status registers.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ERROR	- Error code from polling cldma_queues_active.
+ */
+int t7xx_cldma_stop(struct cldma_ctrl *md_ctrl)
+{
+	struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
+	bool active;
+	int i, ret;
+
+	md_ctrl->rxq_active = 0;
+	t7xx_cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, MTK_RX);
+	md_ctrl->txq_active = 0;
+	t7xx_cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, MTK_TX);
+	md_ctrl->txq_started = 0;
+	t7xx_cldma_disable_irq(md_ctrl);
+	t7xx_cldma_hw_stop(hw_info, MTK_RX);
+	t7xx_cldma_hw_stop(hw_info, MTK_TX);
+	t7xx_cldma_hw_tx_done(hw_info, CLDMA_L2TISAR0_ALL_INT_MASK);
+	t7xx_cldma_hw_rx_done(hw_info, CLDMA_L2RISAR0_ALL_INT_MASK);
+
+	if (md_ctrl->is_late_init) {
+		for (i = 0; i < CLDMA_TXQ_NUM; i++)
+			flush_work(&md_ctrl->txq[i].cldma_work);
+
+		for (i = 0; i < CLDMA_RXQ_NUM; i++)
+			flush_work(&md_ctrl->rxq[i].cldma_work);
+	}
+
+	ret = read_poll_timeout(t7xx_cldma_qs_are_active, active, !active, CHECK_Q_STOP_STEP_US,
+				CHECK_Q_STOP_TIMEOUT_US, true, hw_info);
+	if (ret)
+		dev_err(md_ctrl->dev, "Could not stop CLDMA%d queues", md_ctrl->hif_id);
+
+	return ret;
+}
+
+static void t7xx_cldma_late_release(struct cldma_ctrl *md_ctrl)
+{
+	int i;
+
+	if (!md_ctrl->is_late_init)
+		return;
+
+	for (i = 0; i < CLDMA_TXQ_NUM; i++)
+		t7xx_cldma_ring_free(md_ctrl, &md_ctrl->tx_ring[i], DMA_TO_DEVICE);
+
+	for (i = 0; i < CLDMA_RXQ_NUM; i++)
+		t7xx_cldma_ring_free(md_ctrl, &md_ctrl->rx_ring[i], DMA_FROM_DEVICE);
+
+	dma_pool_destroy(md_ctrl->gpd_dmapool);
+	md_ctrl->gpd_dmapool = NULL;
+	md_ctrl->is_late_init = false;
+}
+
+void t7xx_cldma_reset(struct cldma_ctrl *md_ctrl)
+{
+	struct t7xx_modem *md = md_ctrl->t7xx_dev->md;
+	unsigned long flags;
+	int i;
+
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	md_ctrl->txq_active = 0;
+	md_ctrl->rxq_active = 0;
+	t7xx_cldma_disable_irq(md_ctrl);
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+	for (i = 0; i < CLDMA_TXQ_NUM; i++) {
+		md_ctrl->txq[i].md = md;
+		cancel_work_sync(&md_ctrl->txq[i].cldma_work);
+		spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+		md_cd_queue_struct_reset(&md_ctrl->txq[i], md_ctrl, MTK_TX, i);
+		spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+	}
+
+	for (i = 0; i < CLDMA_RXQ_NUM; i++) {
+		md_ctrl->rxq[i].md = md;
+		cancel_work_sync(&md_ctrl->rxq[i].cldma_work);
+		spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+		md_cd_queue_struct_reset(&md_ctrl->rxq[i], md_ctrl, MTK_RX, i);
+		spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+	}
+
+	t7xx_cldma_late_release(md_ctrl);
+}
+
+/**
+ * t7xx_cldma_start() - Start CLDMA.
+ * @md_ctrl: CLDMA context structure.
+ *
+ * Set TX/RX start address.
+ * Start all RX queues and enable L2 interrupt.
+ */
+void t7xx_cldma_start(struct cldma_ctrl *md_ctrl)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	if (md_ctrl->is_late_init) {
+		struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
+		int i;
+
+		t7xx_cldma_enable_irq(md_ctrl);
+
+		for (i = 0; i < CLDMA_TXQ_NUM; i++) {
+			if (md_ctrl->txq[i].tr_done)
+				t7xx_cldma_hw_set_start_addr(hw_info, i,
+							     md_ctrl->txq[i].tr_done->gpd_addr,
+							     MTK_TX);
+		}
+
+		for (i = 0; i < CLDMA_RXQ_NUM; i++) {
+			if (md_ctrl->rxq[i].tr_done)
+				t7xx_cldma_hw_set_start_addr(hw_info, i,
+							     md_ctrl->rxq[i].tr_done->gpd_addr,
+							     MTK_RX);
+		}
+
+		/* Enable L2 interrupt */
+		t7xx_cldma_hw_start_queue(hw_info, CLDMA_ALL_Q, MTK_RX);
+		t7xx_cldma_hw_start(hw_info);
+		md_ctrl->txq_started = 0;
+		md_ctrl->txq_active |= TXRX_STATUS_BITMASK;
+		md_ctrl->rxq_active |= TXRX_STATUS_BITMASK;
+	}
+
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static void t7xx_cldma_clear_txq(struct cldma_ctrl *md_ctrl, int qnum)
+{
+	struct cldma_queue *txq = &md_ctrl->txq[qnum];
+	struct cldma_request *req;
+	struct cldma_tgpd *tgpd;
+	unsigned long flags;
+
+	spin_lock_irqsave(&txq->ring_lock, flags);
+	t7xx_cldma_q_reset(txq);
+	list_for_each_entry(req, &txq->tr_ring->gpd_ring, entry) {
+		tgpd = req->gpd;
+		tgpd->gpd_flags &= ~GPD_FLAGS_HWO;
+		t7xx_cldma_tgpd_set_data_ptr(tgpd, 0);
+		tgpd->data_buff_len = 0;
+		dev_kfree_skb_any(req->skb);
+		req->skb = NULL;
+	}
+
+	spin_unlock_irqrestore(&txq->ring_lock, flags);
+}
+
+static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
+{
+	struct cldma_queue *rxq = &md_ctrl->rxq[qnum];
+	struct cldma_request *req;
+	struct cldma_rgpd *rgpd;
+	unsigned long flags;
+
+	spin_lock_irqsave(&rxq->ring_lock, flags);
+	t7xx_cldma_q_reset(rxq);
+	list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
+		rgpd = req->gpd;
+		rgpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
+		rgpd->data_buff_len = 0;
+
+		if (req->skb) {
+			req->skb->len = 0;
+			skb_reset_tail_pointer(req->skb);
+		}
+	}
+
+	spin_unlock_irqrestore(&rxq->ring_lock, flags);
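+	/* Allocate and map a fresh skb for any request that no longer has one */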
+	list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
+		int ret;
+
+		if (req->skb)
+			continue;
+
+		ret = t7xx_cldma_alloc_and_map_skb(md_ctrl, req, rxq->tr_ring->pkt_size);
+		if (ret)
+			return ret;
+
+		t7xx_cldma_rgpd_set_data_ptr(req->gpd, req->mapped_buff);
+	}
+
+	return 0;
+}
+
+static void t7xx_cldma_clear_all_qs(struct cldma_ctrl *md_ctrl, enum mtk_txrx tx_rx)
+{
+	int i;
+
+	if (tx_rx == MTK_TX) {
+		for (i = 0; i < CLDMA_TXQ_NUM; i++)
+			t7xx_cldma_clear_txq(md_ctrl, i);
+	} else {
+		for (i = 0; i < CLDMA_RXQ_NUM; i++)
+			t7xx_cldma_clear_rxq(md_ctrl, i);
+	}
+}
+
+static void t7xx_cldma_stop_q(struct cldma_ctrl *md_ctrl, unsigned char qno, enum mtk_txrx tx_rx)
+{
+	struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
+	unsigned long flags;
+
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	if (tx_rx == MTK_RX) {
+		t7xx_cldma_hw_irq_dis_eq(hw_info, qno, MTK_RX);
+		t7xx_cldma_hw_irq_dis_txrx(hw_info, qno, MTK_RX);
+
+		if (qno == CLDMA_ALL_Q)
+			md_ctrl->rxq_active &= ~TXRX_STATUS_BITMASK;
+		else
+			md_ctrl->rxq_active &= ~(TXRX_STATUS_BITMASK & BIT(qno));
+
+		t7xx_cldma_hw_stop_queue(hw_info, qno, MTK_RX);
+	} else if (tx_rx == MTK_TX) {
+		t7xx_cldma_hw_irq_dis_eq(hw_info, qno, MTK_TX);
+		t7xx_cldma_hw_irq_dis_txrx(hw_info, qno, MTK_TX);
+
+		if (qno == CLDMA_ALL_Q)
+			md_ctrl->txq_active &= ~TXRX_STATUS_BITMASK;
+		else
+			md_ctrl->txq_active &= ~(TXRX_STATUS_BITMASK & BIT(qno));
+
+		t7xx_cldma_hw_stop_queue(hw_info, qno, MTK_TX);
+	}
+
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static int t7xx_cldma_gpd_handle_tx_request(struct cldma_queue *queue, struct cldma_request *tx_req,
+					    struct sk_buff *skb)
+{
+	struct cldma_ctrl *md_ctrl = queue->md_ctrl;
+	struct cldma_tgpd *tgpd = tx_req->gpd;
+	unsigned long flags;
+
+	/* Update GPD */
+	tx_req->mapped_buff = dma_map_single(md_ctrl->dev, skb->data, skb->len, DMA_TO_DEVICE);
+
+	if (dma_mapping_error(md_ctrl->dev, tx_req->mapped_buff)) {
+		dev_err(md_ctrl->dev, "DMA mapping failed\n");
+		return -ENOMEM;
+	}
+
+	t7xx_cldma_tgpd_set_data_ptr(tgpd, tx_req->mapped_buff);
+	tgpd->data_buff_len = cpu_to_le16(skb->len);
+
+	/* This lock must cover TGPD setting, as even without a resume operation,
+	 * CLDMA can send the next GPD with HWO=1 set as soon as the previous TGPD finishes.
+	 */
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	if (md_ctrl->txq_active & BIT(queue->index))
+		tgpd->gpd_flags |= GPD_FLAGS_HWO;
+
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+	tx_req->skb = skb;
+	return 0;
+}
+
+static void t7xx_cldma_hw_start_send(struct cldma_ctrl *md_ctrl, u8 qno)
+{
+	struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
+	struct cldma_request *req;
+
+	/* Check whether the device was powered off (CLDMA start address is not set) */
+	if (!t7xx_cldma_tx_addr_is_set(hw_info, qno)) {
+		t7xx_cldma_hw_init(hw_info);
+		req = list_prev_entry_circular(md_ctrl->txq[qno].tx_xmit,
+					       &md_ctrl->txq[qno].tr_ring->gpd_ring, entry);
+		t7xx_cldma_hw_set_start_addr(hw_info, qno, req->gpd_addr, MTK_TX);
+		md_ctrl->txq_started &= ~BIT(qno);
+	}
+
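+	/* If the queue is not running, resume it if it was started before, otherwise start it */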
+	if (!t7xx_cldma_hw_queue_status(hw_info, qno, MTK_TX)) {
+		if (md_ctrl->txq_started & BIT(qno))
+			t7xx_cldma_hw_resume_queue(hw_info, qno, MTK_TX);
+		else
+			t7xx_cldma_hw_start_queue(hw_info, qno, MTK_TX);
+
+		md_ctrl->txq_started |= BIT(qno);
+	}
+}
+
+/**
+ * t7xx_cldma_set_recv_skb() - Set the callback to handle RX packets.
+ * @md_ctrl: CLDMA context structure.
+ * @recv_skb: Receiving skb callback.
+ */
+void t7xx_cldma_set_recv_skb(struct cldma_ctrl *md_ctrl,
+			     int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb))
+{
+	md_ctrl->recv_skb = recv_skb;
+}
+
+/**
+ * t7xx_cldma_send_skb() - Send control data to modem.
+ * @md_ctrl: CLDMA context structure.
+ * @qno: Queue number.
+ * @skb: Socket buffer.
+ * @blocking: True for blocking operation.
+ *
+ * Send control packet to modem using a ring buffer.
+ * If blocking is set, it will wait for completion.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ENOMEM	- Allocation failure.
+ * * -EINVAL	- Invalid queue request.
+ * * -EBUSY	- Queue inactive, or no TX budget left in non-blocking mode.
+ */
+int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb, bool blocking)
+{
+	struct cldma_request *tx_req;
+	struct cldma_queue *queue;
+	unsigned long flags;
+	int ret;
+
+	if (qno >= CLDMA_TXQ_NUM)
+		return -EINVAL;
+
+	queue = &md_ctrl->txq[qno];
+
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	if (!(md_ctrl->txq_active & BIT(qno))) {
+		ret = -EBUSY;
+		spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+		goto allow_sleep;
+	}
+
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+
+	do {
+		spin_lock_irqsave(&queue->ring_lock, flags);
+		tx_req = queue->tx_xmit;
+		if (queue->budget > 0 && !tx_req->skb) {
+			struct list_head *gpd_ring = &queue->tr_ring->gpd_ring;
+
+			queue->budget--;
+			t7xx_cldma_gpd_handle_tx_request(queue, tx_req, skb);
+			queue->tx_xmit = list_next_entry_circular(tx_req, gpd_ring, entry);
+			spin_unlock_irqrestore(&queue->ring_lock, flags);
+
+			/* Protect the access to the modem for queues operations (resume/start)
+			 * which access shared locations by all the queues.
+			 * cldma_lock is independent of ring_lock which is per queue.
+			 */
+			spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+			t7xx_cldma_hw_start_send(md_ctrl, qno);
+			spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+			break;
+		}
+
+		spin_unlock_irqrestore(&queue->ring_lock, flags);
+
+		if (!t7xx_cldma_hw_queue_status(&md_ctrl->hw_info, qno, MTK_TX)) {
+			spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+			t7xx_cldma_hw_resume_queue(&md_ctrl->hw_info, qno, MTK_TX);
+			spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+		}
+
+		if (!blocking) {
+			ret = -EBUSY;
+			break;
+		}
+
+		ret = wait_event_interruptible_exclusive(queue->req_wq, queue->budget > 0);
+	} while (!ret);
+
+allow_sleep:
+	return ret;
+}
+
+static int t7xx_cldma_late_init(struct cldma_ctrl *md_ctrl)
+{
+	char dma_pool_name[32];
+	int i, j, ret;
+
+	if (md_ctrl->is_late_init) {
+		dev_err(md_ctrl->dev, "CLDMA late init was already done\n");
+		return -EALREADY;
+	}
+
+	snprintf(dma_pool_name, sizeof(dma_pool_name), "cldma_req_hif%d", md_ctrl->hif_id);
+
+	md_ctrl->gpd_dmapool = dma_pool_create(dma_pool_name, md_ctrl->dev,
+					       sizeof(struct cldma_tgpd), GPD_DMAPOOL_ALIGN, 0);
+	if (!md_ctrl->gpd_dmapool) {
+		dev_err(md_ctrl->dev, "DMA pool alloc fail\n");
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < CLDMA_TXQ_NUM; i++) {
+		INIT_LIST_HEAD(&md_ctrl->tx_ring[i].gpd_ring);
+		md_ctrl->tx_ring[i].length = MAX_TX_BUDGET;
+
+		ret = t7xx_cldma_tx_ring_init(md_ctrl, &md_ctrl->tx_ring[i]);
+		if (ret) {
+			dev_err(md_ctrl->dev, "Control TX ring init fail\n");
+			goto err_free_tx_ring;
+		}
+	}
+
+	for (j = 0; j < CLDMA_RXQ_NUM; j++) {
+		INIT_LIST_HEAD(&md_ctrl->rx_ring[j].gpd_ring);
+		md_ctrl->rx_ring[j].length = MAX_RX_BUDGET;
+		md_ctrl->rx_ring[j].pkt_size = MTK_SKB_4K;
+
+		if (j == CLDMA_RXQ_NUM - 1)
+			md_ctrl->rx_ring[j].pkt_size = MTK_SKB_64K;
+
+		ret = t7xx_cldma_rx_ring_init(md_ctrl, &md_ctrl->rx_ring[j]);
+		if (ret) {
+			dev_err(md_ctrl->dev, "Control RX ring init fail\n");
+			goto err_free_rx_ring;
+		}
+	}
+
+	for (i = 0; i < CLDMA_TXQ_NUM; i++)
+		t7xx_cldma_txq_init(&md_ctrl->txq[i]);
+
+	for (j = 0; j < CLDMA_RXQ_NUM; j++)
+		t7xx_cldma_rxq_init(&md_ctrl->rxq[j]);
+
+	md_ctrl->is_late_init = true;
+	return 0;
+
+err_free_rx_ring:
+	while (j--)
+		t7xx_cldma_ring_free(md_ctrl, &md_ctrl->rx_ring[j], DMA_FROM_DEVICE);
+
+err_free_tx_ring:
+	while (i--)
+		t7xx_cldma_ring_free(md_ctrl, &md_ctrl->tx_ring[i], DMA_TO_DEVICE);
+
+	return ret;
+}
+
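+/* Translate a device physical register address to its host-side mapped address */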
+static void __iomem *pcie_addr_transfer(void __iomem *addr, u32 addr_trs1, u32 phy_addr)
+{
+	return addr + phy_addr - addr_trs1;
+}
+
+static void t7xx_hw_info_init(struct cldma_ctrl *md_ctrl)
+{
+	struct t7xx_cldma_hw *hw_info;
+	u32 phy_ao_base, phy_pd_base;
+	struct t7xx_addr_base *pbase;
+
+	if (md_ctrl->hif_id != ID_CLDMA1)
+		return;
+
+	phy_ao_base = CLDMA1_AO_BASE;
+	phy_pd_base = CLDMA1_PD_BASE;
+	hw_info = &md_ctrl->hw_info;
+	hw_info->phy_interrupt_id = CLDMA1_INT;
+	hw_info->hw_mode = MODE_BIT_64;
+	pbase = &md_ctrl->t7xx_dev->base_addr;
+	hw_info->ap_ao_base = pcie_addr_transfer(pbase->pcie_ext_reg_base,
+						 pbase->pcie_dev_reg_trsl_addr, phy_ao_base);
+	hw_info->ap_pdn_base = pcie_addr_transfer(pbase->pcie_ext_reg_base,
+						  pbase->pcie_dev_reg_trsl_addr, phy_pd_base);
+}
+
+static int t7xx_cldma_default_recv_skb(struct cldma_queue *queue, struct sk_buff *skb)
+{
+	dev_kfree_skb_any(skb);
+	return 0;
+}
+
+int t7xx_cldma_alloc(enum cldma_id hif_id, struct t7xx_pci_dev *t7xx_dev)
+{
+	struct device *dev = &t7xx_dev->pdev->dev;
+	struct cldma_ctrl *md_ctrl;
+
+	md_ctrl = devm_kzalloc(dev, sizeof(*md_ctrl), GFP_KERNEL);
+	if (!md_ctrl)
+		return -ENOMEM;
+
+	md_ctrl->t7xx_dev = t7xx_dev;
+	md_ctrl->dev = dev;
+	md_ctrl->hif_id = hif_id;
+	md_ctrl->recv_skb = t7xx_cldma_default_recv_skb;
+	t7xx_hw_info_init(md_ctrl);
+	t7xx_dev->md->md_ctrl[hif_id] = md_ctrl;
+	return 0;
+}
+
+/**
+ * t7xx_cldma_exception() - CLDMA exception handler.
+ * @md_ctrl: CLDMA context structure.
+ * @stage: exception stage.
+ *
+ * Part of the modem exception recovery.
+ * Stages follow one after the other, as described below:
+ * HIF_EX_INIT:		Disable and clear TXQ.
+ * HIF_EX_CLEARQ_DONE:	Disable RX, flush TX/RX workqueues and clear RX.
+ * HIF_EX_ALLQ_RESET:	HW is back in safe mode for re-initialization and restart.
+ *
+ * Modem Exception Handshake Flow
+ *
+ * Modem HW Exception interrupt received
+ *           (MD_IRQ_CCIF_EX)
+ *                   |
+ *         +---------v--------+
+ *         |   HIF_EX_INIT    | : Disable and clear TXQ
+ *         +------------------+
+ *                   |
+ *         +---------v--------+
+ *         | HIF_EX_INIT_DONE | : Wait for the init to be done
+ *         +------------------+
+ *                   |
+ *         +---------v--------+
+ *         |HIF_EX_CLEARQ_DONE| : Disable and clear RXQ
+ *         +------------------+ : Flush TX/RX workqueues
+ *                   |
+ *         +---------v--------+
+ *         |HIF_EX_ALLQ_RESET | : Restart HW and CLDMA
+ *         +------------------+
+ */
+void t7xx_cldma_exception(struct cldma_ctrl *md_ctrl, enum hif_ex_stage stage)
+{
+	switch (stage) {
+	case HIF_EX_INIT:
+		t7xx_cldma_stop_q(md_ctrl, CLDMA_ALL_Q, MTK_TX);
+		t7xx_cldma_clear_all_qs(md_ctrl, MTK_TX);
+		break;
+
+	case HIF_EX_CLEARQ_DONE:
+		/* We do not want to get CLDMA IRQ when MD is
+		 * resetting CLDMA after it got clearq_ack.
+		 */
+		t7xx_cldma_stop_q(md_ctrl, CLDMA_ALL_Q, MTK_RX);
+		t7xx_cldma_stop(md_ctrl);
+
+		if (md_ctrl->hif_id == ID_CLDMA1)
+			t7xx_cldma_hw_reset(md_ctrl->t7xx_dev->base_addr.infracfg_ao_base);
+
+		t7xx_cldma_clear_all_qs(md_ctrl, MTK_RX);
+		break;
+
+	case HIF_EX_ALLQ_RESET:
+		t7xx_cldma_hw_init(&md_ctrl->hw_info);
+		t7xx_cldma_start(md_ctrl);
+		break;
+
+	default:
+		break;
+	}
+}
+
+void t7xx_cldma_hif_hw_init(struct cldma_ctrl *md_ctrl)
+{
+	struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
+	unsigned long flags;
+
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	t7xx_cldma_hw_stop(hw_info, MTK_TX);
+	t7xx_cldma_hw_stop(hw_info, MTK_RX);
+	t7xx_cldma_hw_rx_done(hw_info, EMPTY_STATUS_BITMASK | TXRX_STATUS_BITMASK);
+	t7xx_cldma_hw_tx_done(hw_info, EMPTY_STATUS_BITMASK | TXRX_STATUS_BITMASK);
+	t7xx_cldma_hw_init(hw_info);
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static irqreturn_t t7xx_cldma_isr_handler(int irq, void *data)
+{
+	struct cldma_ctrl *md_ctrl = data;
+	u32 interrupt;
+
+	interrupt = md_ctrl->hw_info.phy_interrupt_id;
+	t7xx_pcie_mac_clear_int(md_ctrl->t7xx_dev, interrupt);
+	t7xx_cldma_irq_work_cb(md_ctrl);
+	t7xx_pcie_mac_clear_int_status(md_ctrl->t7xx_dev, interrupt);
+	t7xx_pcie_mac_set_int(md_ctrl->t7xx_dev, interrupt);
+	return IRQ_HANDLED;
+}
+
+/**
+ * t7xx_cldma_init() - Initialize CLDMA.
+ * @md: Modem context structure.
+ * @md_ctrl: CLDMA context structure.
+ *
+ * Initialize HIF TX/RX queue structure.
+ * Register CLDMA callback ISR with PCIe driver.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ERROR	- Error code from a failing sub-initialization.
+ */
+int t7xx_cldma_init(struct t7xx_modem *md, struct cldma_ctrl *md_ctrl)
+{
+	struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
+	int i;
+
+	md_ctrl->txq_active = 0;
+	md_ctrl->rxq_active = 0;
+	md_ctrl->is_late_init = false;
+
+	spin_lock_init(&md_ctrl->cldma_lock);
+	for (i = 0; i < CLDMA_TXQ_NUM; i++) {
+		md_cd_queue_struct_init(&md_ctrl->txq[i], md_ctrl, MTK_TX, i);
+		md_ctrl->txq[i].md = md;
+
+		md_ctrl->txq[i].worker =
+			alloc_workqueue("md_hif%d_tx%d_worker",
+					WQ_UNBOUND | WQ_MEM_RECLAIM | (i ? 0 : WQ_HIGHPRI),
+					1, md_ctrl->hif_id, i);
+		if (!md_ctrl->txq[i].worker)
+			return -ENOMEM;
+
+		INIT_WORK(&md_ctrl->txq[i].cldma_work, t7xx_cldma_tx_done);
+	}
+
+	for (i = 0; i < CLDMA_RXQ_NUM; i++) {
+		md_cd_queue_struct_init(&md_ctrl->rxq[i], md_ctrl, MTK_RX, i);
+		md_ctrl->rxq[i].md = md;
+		INIT_WORK(&md_ctrl->rxq[i].cldma_work, t7xx_cldma_rx_done);
+
+		md_ctrl->rxq[i].worker = alloc_workqueue("md_hif%d_rx%d_worker",
+							 WQ_UNBOUND | WQ_MEM_RECLAIM,
+							 1, md_ctrl->hif_id, i);
+		if (!md_ctrl->rxq[i].worker)
+			return -ENOMEM;
+	}
+
+	t7xx_pcie_mac_clear_int(md_ctrl->t7xx_dev, hw_info->phy_interrupt_id);
+	md_ctrl->t7xx_dev->intr_handler[hw_info->phy_interrupt_id] = t7xx_cldma_isr_handler;
+	md_ctrl->t7xx_dev->intr_thread[hw_info->phy_interrupt_id] = NULL;
+	md_ctrl->t7xx_dev->callback_param[hw_info->phy_interrupt_id] = md_ctrl;
+	t7xx_pcie_mac_clear_int_status(md_ctrl->t7xx_dev, hw_info->phy_interrupt_id);
+	return 0;
+}
+
+void t7xx_cldma_switch_cfg(struct cldma_ctrl *md_ctrl)
+{
+	t7xx_cldma_late_release(md_ctrl);
+	t7xx_cldma_late_init(md_ctrl);
+}
+
+void t7xx_cldma_exit(struct cldma_ctrl *md_ctrl)
+{
+	int i;
+
+	t7xx_cldma_stop(md_ctrl);
+	t7xx_cldma_late_release(md_ctrl);
+
+	for (i = 0; i < CLDMA_TXQ_NUM; i++) {
+		if (md_ctrl->txq[i].worker) {
+			destroy_workqueue(md_ctrl->txq[i].worker);
+			md_ctrl->txq[i].worker = NULL;
+		}
+	}
+
+	for (i = 0; i < CLDMA_RXQ_NUM; i++) {
+		if (md_ctrl->rxq[i].worker) {
+			destroy_workqueue(md_ctrl->rxq[i].worker);
+			md_ctrl->rxq[i].worker = NULL;
+		}
+	}
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.h b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
new file mode 100644
index 000000000000..5f8100c2b9bd
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
@@ -0,0 +1,138 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ */
+
+#ifndef __T7XX_HIF_CLDMA_H__
+#define __T7XX_HIF_CLDMA_H__
+
+#include <linux/bits.h>
+#include <linux/device.h>
+#include <linux/dmapool.h>
+#include <linux/pci.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_cldma.h"
+#include "t7xx_common.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+
+enum cldma_id {
+	ID_CLDMA0,
+	ID_CLDMA1,
+};
+
+struct cldma_request {
+	void *gpd;		/* Virtual address for CPU */
+	dma_addr_t gpd_addr;	/* Physical address for DMA */
+	struct sk_buff *skb;
+	dma_addr_t mapped_buff;
+	struct list_head entry;
+};
+
+struct cldma_queue;
+struct cldma_ctrl;
+
+struct cldma_ring {
+	struct list_head gpd_ring;	/* Ring of struct cldma_request */
+	int length;			/* Number of struct cldma_request */
+	int pkt_size;
+};
+
+struct cldma_queue {
+	struct t7xx_modem *md;
+	struct cldma_ctrl *md_ctrl;
+	enum mtk_txrx dir;
+	unsigned char index;
+	struct cldma_ring *tr_ring;
+	struct cldma_request *tr_done;
+	struct cldma_request *rx_refill;
+	struct cldma_request *tx_xmit;
+	int budget;			/* Same as ring buffer size by default */
+	spinlock_t ring_lock;
+	wait_queue_head_t req_wq;	/* Only for TX */
+	struct workqueue_struct *worker;
+	struct work_struct cldma_work;
+};
+
+struct cldma_ctrl {
+	enum cldma_id hif_id;
+	struct device *dev;
+	struct t7xx_pci_dev *t7xx_dev;
+	struct cldma_queue txq[CLDMA_TXQ_NUM];
+	struct cldma_queue rxq[CLDMA_RXQ_NUM];
+	unsigned short txq_active;
+	unsigned short rxq_active;
+	unsigned short txq_started;
+	spinlock_t cldma_lock; /* Protects CLDMA structure */
+	/* Assumes T/R GPD/BD/SPD have the same size */
+	struct dma_pool *gpd_dmapool;
+	struct cldma_ring tx_ring[CLDMA_TXQ_NUM];
+	struct cldma_ring rx_ring[CLDMA_RXQ_NUM];
+	struct t7xx_cldma_hw hw_info;
+	bool is_late_init;
+	int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb);
+};
+
+#define GPD_FLAGS_HWO		BIT(0)
+#define GPD_FLAGS_BDP		BIT(1)
+#define GPD_FLAGS_BPS		BIT(2)
+#define GPD_FLAGS_IOC		BIT(7)
+#define GPD_DMAPOOL_ALIGN	16
+
+struct cldma_tgpd {
+	u8 gpd_flags;
+	u8 not_used1;
+	u8 not_used2;
+	u8 debug_id;
+	__le32 next_gpd_ptr_h;
+	__le32 next_gpd_ptr_l;
+	__le32 data_buff_bd_ptr_h;
+	__le32 data_buff_bd_ptr_l;
+	__le16 data_buff_len;
+	__le16 not_used3;
+};
+
+struct cldma_rgpd {
+	u8 gpd_flags;
+	u8 not_used1;
+	__le16 data_allow_len;
+	__le32 next_gpd_ptr_h;
+	__le32 next_gpd_ptr_l;
+	__le32 data_buff_bd_ptr_h;
+	__le32 data_buff_bd_ptr_l;
+	__le16 data_buff_len;
+	u8 not_used2;
+	u8 debug_id;
+};
+
+int t7xx_cldma_alloc(enum cldma_id hif_id, struct t7xx_pci_dev *t7xx_dev);
+void t7xx_cldma_hif_hw_init(struct cldma_ctrl *md_ctrl);
+int t7xx_cldma_init(struct t7xx_modem *md, struct cldma_ctrl *md_ctrl);
+void t7xx_cldma_exception(struct cldma_ctrl *md_ctrl, enum hif_ex_stage stage);
+void t7xx_cldma_exit(struct cldma_ctrl *md_ctrl);
+void t7xx_cldma_switch_cfg(struct cldma_ctrl *md_ctrl);
+void t7xx_cldma_start(struct cldma_ctrl *md_ctrl);
+int t7xx_cldma_stop(struct cldma_ctrl *md_ctrl);
+void t7xx_cldma_reset(struct cldma_ctrl *md_ctrl);
+void t7xx_cldma_set_recv_skb(struct cldma_ctrl *md_ctrl,
+			     int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb));
+int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb, bool blocking);
+
+#endif /* __T7XX_HIF_CLDMA_H__ */
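
For orientation, a minimal sketch of how a caller is expected to drive this
interface, mirroring the calls made elsewhere in this series (error handling
omitted; the receive callback name my_recv_skb is only illustrative, real
users install their own):

	/* Probe time: allocate the context, init the queues and register the ISR */
	t7xx_cldma_alloc(ID_CLDMA1, t7xx_dev);
	t7xx_cldma_init(md, md->md_ctrl[ID_CLDMA1]);
	t7xx_cldma_set_recv_skb(md->md_ctrl[ID_CLDMA1], my_recv_skb);

	/* Handshake time: allocate the GPD rings and start the HW queues */
	t7xx_cldma_switch_cfg(md->md_ctrl[ID_CLDMA1]);
	t7xx_cldma_start(md->md_ctrl[ID_CLDMA1]);

	/* Send a control message on TX queue 0, blocking until a slot is free */
	t7xx_cldma_send_skb(md->md_ctrl[ID_CLDMA1], 0, skb, true);

	/* Teardown: stop the queues, free the rings and destroy the workqueues */
	t7xx_cldma_exit(md->md_ctrl[ID_CLDMA1]);
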
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next v4 03/13] net: wwan: t7xx: Add core components
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
  2022-01-14  1:06 ` [PATCH net-next v4 01/13] list: Add list_next_entry_circular() and list_prev_entry_circular() Ricardo Martinez
  2022-01-14  1:06 ` [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-01-16 15:37   ` kernel test robot
  2022-01-24 14:51   ` Ilpo Järvinen
  2022-01-14  1:06 ` [PATCH net-next v4 04/13] net: wwan: t7xx: Add port proxy infrastructure Ricardo Martinez
                   ` (10 subsequent siblings)
  13 siblings, 2 replies; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

From: Haijun Liu <haijun.liu@mediatek.com>

Registers the t7xx device driver with the kernel. Sets up all the core
components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
modem control operations, modem state machine, and build
infrastructure.

* PCIe layer code implements driver probe and removal.
* MHCCIF provides interrupt channels to communicate events
  such as handshake, PM and port enumeration.
* Modem control implements the entry point for modem init,
  reset and exit.
* The modem status monitor is a state machine used by modem control
  to complete initialization and stop. It is also used to propagate
  exception events reported by other components.

Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 drivers/net/wwan/Kconfig                   |  14 +
 drivers/net/wwan/Makefile                  |   1 +
 drivers/net/wwan/t7xx/Makefile             |  12 +
 drivers/net/wwan/t7xx/t7xx_mhccif.c        |  98 ++++
 drivers/net/wwan/t7xx/t7xx_mhccif.h        |  37 ++
 drivers/net/wwan/t7xx/t7xx_modem_ops.c     | 434 +++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_modem_ops.h     |  84 ++++
 drivers/net/wwan/t7xx/t7xx_pci.c           | 223 +++++++++
 drivers/net/wwan/t7xx/t7xx_pci.h           |  67 +++
 drivers/net/wwan/t7xx/t7xx_pcie_mac.c      | 277 +++++++++++
 drivers/net/wwan/t7xx/t7xx_pcie_mac.h      |  37 ++
 drivers/net/wwan/t7xx/t7xx_reg.h           | 379 +++++++++++++++
 drivers/net/wwan/t7xx/t7xx_state_monitor.c | 539 +++++++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_state_monitor.h | 123 +++++
 14 files changed, 2325 insertions(+)
 create mode 100644 drivers/net/wwan/t7xx/Makefile
 create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_mhccif.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_modem_ops.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_pci.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_pcie_mac.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_reg.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_state_monitor.h

diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig
index 609fd4a2c865..3486ffe94ac4 100644
--- a/drivers/net/wwan/Kconfig
+++ b/drivers/net/wwan/Kconfig
@@ -105,6 +105,20 @@ config IOSM
 
 	  If unsure, say N.
 
+config MTK_T7XX
+	tristate "MediaTek PCIe 5G WWAN modem T7xx device"
+	depends on PCI
+	help
+	  Enables MediaTek PCIe based 5G WWAN modem (T7xx series) device
+	  support. It uses the WWAN framework and provides a network
+	  interface like wwan0 and control ports such as wwan0at0
+	  (AT protocol) and wwan0mbim0 (MBIM protocol).
+
+	  To compile this driver as a module, choose M here: the module will be
+	  called mtk_t7xx.
+
+	  If unsure, say N.
+
 endif # WWAN
 
 endmenu
diff --git a/drivers/net/wwan/Makefile b/drivers/net/wwan/Makefile
index e722650bebea..3960c0ae2445 100644
--- a/drivers/net/wwan/Makefile
+++ b/drivers/net/wwan/Makefile
@@ -13,3 +13,4 @@ obj-$(CONFIG_MHI_WWAN_MBIM) += mhi_wwan_mbim.o
 obj-$(CONFIG_QCOM_BAM_DMUX) += qcom_bam_dmux.o
 obj-$(CONFIG_RPMSG_WWAN_CTRL) += rpmsg_wwan_ctrl.o
 obj-$(CONFIG_IOSM) += iosm/
+obj-$(CONFIG_MTK_T7XX) += t7xx/
diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
new file mode 100644
index 000000000000..6a49013bc343
--- /dev/null
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+ccflags-y += -Werror
+
+obj-$(CONFIG_MTK_T7XX) := mtk_t7xx.o
+mtk_t7xx-y :=	t7xx_pci.o \
+		t7xx_pcie_mac.o \
+		t7xx_mhccif.o \
+		t7xx_state_monitor.o  \
+		t7xx_modem_ops.o \
+		t7xx_cldma.o \
+		t7xx_hif_cldma.o  \
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.c b/drivers/net/wwan/t7xx/t7xx_mhccif.c
new file mode 100644
index 000000000000..20aae457629c
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.c
@@ -0,0 +1,98 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ */
+
+#include <linux/bits.h>
+#include <linux/completion.h>
+#include <linux/dev_printk.h>
+#include <linux/io.h>
+#include <linux/irqreturn.h>
+
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+
+static void t7xx_mhccif_clear_interrupts(struct t7xx_pci_dev *t7xx_dev, u32 mask)
+{
+	void __iomem *mhccif_pbase = t7xx_dev->base_addr.mhccif_rc_base;
+
+	/* Clear level 2 interrupt */
+	iowrite32(mask, mhccif_pbase + REG_EP2RC_SW_INT_ACK);
+	/* Ensure write is complete */
+	t7xx_mhccif_read_sw_int_sts(t7xx_dev);
+	/* Clear level 1 interrupt */
+	t7xx_pcie_mac_clear_int_status(t7xx_dev, MHCCIF_INT);
+}
+
+static irqreturn_t t7xx_mhccif_isr_thread(int irq, void *data)
+{
+	struct t7xx_pci_dev *t7xx_dev = data;
+	u32 int_sts, val;
+
+	val = L1_1_DISABLE_BIT(1) | L1_2_DISABLE_BIT(1);
+	iowrite32(val, IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+
+	int_sts = t7xx_mhccif_read_sw_int_sts(t7xx_dev);
+	if (int_sts & t7xx_dev->mhccif_bitmask)
+		t7xx_pci_mhccif_isr(t7xx_dev);
+
+	t7xx_mhccif_clear_interrupts(t7xx_dev, int_sts);
+	t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
+	return IRQ_HANDLED;
+}
+
+u32 t7xx_mhccif_read_sw_int_sts(struct t7xx_pci_dev *t7xx_dev)
+{
+	return ioread32(t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_STS);
+}
+
+void t7xx_mhccif_mask_set(struct t7xx_pci_dev *t7xx_dev, u32 val)
+{
+	iowrite32(val, t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK_SET);
+}
+
+void t7xx_mhccif_mask_clr(struct t7xx_pci_dev *t7xx_dev, u32 val)
+{
+	iowrite32(val, t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK_CLR);
+}
+
+u32 t7xx_mhccif_mask_get(struct t7xx_pci_dev *t7xx_dev)
+{
+	return ioread32(t7xx_dev->base_addr.mhccif_rc_base + REG_EP2RC_SW_INT_EAP_MASK);
+}
+
+static irqreturn_t t7xx_mhccif_isr_handler(int irq, void *data)
+{
+	return IRQ_WAKE_THREAD;
+}
+
+void t7xx_mhccif_init(struct t7xx_pci_dev *t7xx_dev)
+{
+	t7xx_dev->base_addr.mhccif_rc_base = t7xx_dev->base_addr.pcie_ext_reg_base +
+					    MHCCIF_RC_DEV_BASE -
+					    t7xx_dev->base_addr.pcie_dev_reg_trsl_addr;
+
+	t7xx_dev->intr_handler[MHCCIF_INT] = t7xx_mhccif_isr_handler;
+	t7xx_dev->intr_thread[MHCCIF_INT] = t7xx_mhccif_isr_thread;
+	t7xx_dev->callback_param[MHCCIF_INT] = t7xx_dev;
+}
+
+void t7xx_mhccif_h2d_swint_trigger(struct t7xx_pci_dev *t7xx_dev, u32 channel)
+{
+	void __iomem *mhccif_pbase = t7xx_dev->base_addr.mhccif_rc_base;
+
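+	/* Mark the channel busy, then write its number to trigger the RC-to-EP SW interrupt */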
+	iowrite32(BIT(channel), mhccif_pbase + REG_RC2EP_SW_BSY);
+	iowrite32(channel, mhccif_pbase + REG_RC2EP_SW_TCHNUM);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.h b/drivers/net/wwan/t7xx/t7xx_mhccif.h
new file mode 100644
index 000000000000..62e9afa59296
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ */
+
+#ifndef __T7XX_MHCCIF_H__
+#define __T7XX_MHCCIF_H__
+
+#include <linux/types.h>
+
+#include "t7xx_pci.h"
+#include "t7xx_reg.h"
+
+#define D2H_SW_INT_MASK (D2H_INT_EXCEPTION_INIT |		\
+			 D2H_INT_EXCEPTION_INIT_DONE |		\
+			 D2H_INT_EXCEPTION_CLEARQ_DONE |	\
+			 D2H_INT_EXCEPTION_ALLQ_RESET |		\
+			 D2H_INT_PORT_ENUM |			\
+			 D2H_INT_ASYNC_MD_HK)
+
+void t7xx_mhccif_mask_set(struct t7xx_pci_dev *t7xx_dev, u32 val);
+void t7xx_mhccif_mask_clr(struct t7xx_pci_dev *t7xx_dev, u32 val);
+u32 t7xx_mhccif_mask_get(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_mhccif_init(struct t7xx_pci_dev *t7xx_dev);
+u32 t7xx_mhccif_read_sw_int_sts(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_mhccif_h2d_swint_trigger(struct t7xx_pci_dev *t7xx_dev, u32 channel);
+
+#endif /* __T7XX_MHCCIF_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
new file mode 100644
index 000000000000..a106dbb526ea
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
@@ -0,0 +1,434 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#include <linux/acpi.h>
+#include <linux/delay.h>
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/io.h>
+#include <linux/irqreturn.h>
+#include <linux/kthread.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_hif_cldma.h"
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+#include "t7xx_state_monitor.h"
+
+#define RGU_RESET_DELAY_MS	10
+#define PORT_RESET_DELAY_MS	2000
+#define EX_HS_TIMEOUT_MS	5000
+#define EX_HS_POLL_DELAY_MS	10
+
+static unsigned int t7xx_get_interrupt_status(struct t7xx_pci_dev *t7xx_dev)
+{
+	return t7xx_mhccif_read_sw_int_sts(t7xx_dev) & D2H_SW_INT_MASK;
+}
+
+/**
+ * t7xx_pci_mhccif_isr() - Process MHCCIF interrupts.
+ * @t7xx_dev: MTK device.
+ *
+ * Check the interrupt status and queue commands accordingly.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -EINVAL	- Failure to get FSM control.
+ */
+int t7xx_pci_mhccif_isr(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct t7xx_modem *md = t7xx_dev->md;
+	struct t7xx_fsm_ctl *ctl;
+	unsigned int int_sta;
+	u32 mask;
+
+	ctl = md->fsm_ctl;
+	if (!ctl) {
+		dev_err_ratelimited(&t7xx_dev->pdev->dev,
+				    "MHCCIF interrupt received before initializing MD monitor\n");
+		return -EINVAL;
+	}
+
+	spin_lock_bh(&md->exp_lock);
+	int_sta = t7xx_get_interrupt_status(t7xx_dev);
+
+	md->exp_id |= int_sta;
+	if (md->exp_id & D2H_INT_PORT_ENUM) {
+		md->exp_id &= ~D2H_INT_PORT_ENUM;
+
+		if (ctl->curr_state == FSM_STATE_INIT || ctl->curr_state == FSM_STATE_PRE_START ||
+		    ctl->curr_state == FSM_STATE_STOPPED)
+			t7xx_fsm_recv_md_intr(ctl, MD_IRQ_PORT_ENUM);
+	}
+
+	if (md->exp_id & D2H_INT_EXCEPTION_INIT) {
+		if (ctl->md_state == MD_STATE_INVALID ||
+		    ctl->md_state == MD_STATE_WAITING_FOR_HS1 ||
+		    ctl->md_state == MD_STATE_WAITING_FOR_HS2 ||
+		    ctl->md_state == MD_STATE_READY) {
+			md->exp_id &= ~D2H_INT_EXCEPTION_INIT;
+			t7xx_fsm_recv_md_intr(ctl, MD_IRQ_CCIF_EX);
+		}
+	} else if (ctl->md_state == MD_STATE_WAITING_FOR_HS1) {
+		mask = t7xx_mhccif_mask_get(t7xx_dev);
+		if ((md->exp_id & D2H_INT_ASYNC_MD_HK) && !(mask & D2H_INT_ASYNC_MD_HK)) {
+			md->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+			queue_work(md->handshake_wq, &md->handshake_work);
+		}
+	}
+
+	spin_unlock_bh(&md->exp_lock);
+
+	return 0;
+}
+
+static void t7xx_clr_device_irq_via_pcie(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct t7xx_addr_base *pbase_addr = &t7xx_dev->base_addr;
+	void __iomem *reset_pcie_reg;
+	u32 val;
+
+	reset_pcie_reg = pbase_addr->pcie_ext_reg_base + TOPRGU_CH_PCIE_IRQ_STA -
+			  pbase_addr->pcie_dev_reg_trsl_addr;
+	val = ioread32(reset_pcie_reg);
+	iowrite32(val, reset_pcie_reg);
+}
+
+void t7xx_clear_rgu_irq(struct t7xx_pci_dev *t7xx_dev)
+{
+	/* Clear L2 */
+	t7xx_clr_device_irq_via_pcie(t7xx_dev);
+	/* Clear L1 */
+	t7xx_pcie_mac_clear_int_status(t7xx_dev, SAP_RGU_INT);
+}
+
+static int t7xx_acpi_reset(struct t7xx_pci_dev *t7xx_dev, char *fn_name)
+{
+#ifdef CONFIG_ACPI
+	struct acpi_buffer buffer = { ACPI_ALLOCATE_BUFFER, NULL };
+	struct device *dev = &t7xx_dev->pdev->dev;
+	acpi_status acpi_ret;
+	acpi_handle handle;
+
+	handle = ACPI_HANDLE(dev);
+	if (!handle) {
+		dev_err(dev, "ACPI handle not found\n");
+		return -EFAULT;
+	}
+
+	if (!acpi_has_method(handle, fn_name)) {
+		dev_err(dev, "%s method not found\n", fn_name);
+		return -EFAULT;
+	}
+
+	acpi_ret = acpi_evaluate_object(handle, fn_name, NULL, &buffer);
+	if (ACPI_FAILURE(acpi_ret)) {
+		dev_err(dev, "%s method fail: %s\n", fn_name, acpi_format_exception(acpi_ret));
+		return -EFAULT;
+	}
+
+#endif
+	return 0;
+}
+
+int t7xx_acpi_fldr_func(struct t7xx_pci_dev *t7xx_dev)
+{
+	return t7xx_acpi_reset(t7xx_dev, "_RST");
+}
+
+static void t7xx_reset_device_via_pmic(struct t7xx_pci_dev *t7xx_dev)
+{
+	u32 val;
+
+	val = ioread32(IREG_BASE(t7xx_dev) + PCIE_MISC_DEV_STATUS);
+	if (val & MISC_RESET_TYPE_PLDR)
+		t7xx_acpi_reset(t7xx_dev, "MRST._RST");
+	else if (val & MISC_RESET_TYPE_FLDR)
+		t7xx_acpi_fldr_func(t7xx_dev);
+}
+
+static irqreturn_t t7xx_rgu_isr_thread(int irq, void *data)
+{
+	struct t7xx_pci_dev *t7xx_dev = data;
+
+	msleep(RGU_RESET_DELAY_MS);
+	t7xx_reset_device_via_pmic(t7xx_dev);
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t t7xx_rgu_isr_handler(int irq, void *data)
+{
+	struct t7xx_pci_dev *t7xx_dev = data;
+	struct t7xx_modem *modem;
+
+	t7xx_clear_rgu_irq(t7xx_dev);
+	if (!t7xx_dev->rgu_pci_irq_en)
+		return IRQ_HANDLED;
+
+	modem = t7xx_dev->md;
+	modem->rgu_irq_asserted = true;
+	t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+	return IRQ_WAKE_THREAD;
+}
+
+static void t7xx_pcie_register_rgu_isr(struct t7xx_pci_dev *t7xx_dev)
+{
+	/* Registers RGU callback ISR with PCIe driver */
+	t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+	t7xx_pcie_mac_clear_int_status(t7xx_dev, SAP_RGU_INT);
+
+	t7xx_dev->intr_handler[SAP_RGU_INT] = t7xx_rgu_isr_handler;
+	t7xx_dev->intr_thread[SAP_RGU_INT] = t7xx_rgu_isr_thread;
+	t7xx_dev->callback_param[SAP_RGU_INT] = t7xx_dev;
+	t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+}
+
+static void t7xx_md_exception(struct t7xx_modem *md, enum hif_ex_stage stage)
+{
+	struct t7xx_pci_dev *t7xx_dev = md->t7xx_dev;
+
+	if (stage == HIF_EX_CLEARQ_DONE) {
+		/* Give DHL time to flush data */
+		msleep(PORT_RESET_DELAY_MS);
+	}
+
+	t7xx_cldma_exception(md->md_ctrl[ID_CLDMA1], stage);
+
+	if (stage == HIF_EX_INIT)
+		t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_EXCEPTION_ACK);
+	else if (stage == HIF_EX_CLEARQ_DONE)
+		t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_EXCEPTION_CLEARQ_ACK);
+}
+
+static int t7xx_wait_hif_ex_hk_event(struct t7xx_modem *md, int event_id)
+{
+	unsigned int waited_time_ms = 0;
+
+	do {
+		if (md->exp_id & event_id)
+			return 0;
+
+		waited_time_ms += EX_HS_POLL_DELAY_MS;
+		msleep(EX_HS_POLL_DELAY_MS);
+	} while (waited_time_ms < EX_HS_TIMEOUT_MS);
+
+	return -ETIMEDOUT;
+}
+
+static void t7xx_md_sys_sw_init(struct t7xx_pci_dev *t7xx_dev)
+{
+	/* Register the MHCCIF ISR for MD exception, port enum and
+	 * async handshake notifications.
+	 */
+	t7xx_mhccif_mask_set(t7xx_dev, D2H_SW_INT_MASK);
+	t7xx_dev->mhccif_bitmask = D2H_SW_INT_MASK;
+	t7xx_mhccif_mask_clr(t7xx_dev, D2H_INT_PORT_ENUM);
+
+	/* Register RGU IRQ handler for sAP exception notification */
+	t7xx_dev->rgu_pci_irq_en = true;
+	t7xx_pcie_register_rgu_isr(t7xx_dev);
+}
+
+static void t7xx_md_hk_wq(struct work_struct *work)
+{
+	struct t7xx_modem *md = container_of(work, struct t7xx_modem, handshake_work);
+	struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+
+	t7xx_cldma_switch_cfg(md->md_ctrl[ID_CLDMA1]);
+	t7xx_cldma_start(md->md_ctrl[ID_CLDMA1]);
+	t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS2);
+	md->core_md.ready = true;
+	wake_up(&ctl->async_hk_wq);
+}
+
+void t7xx_md_event_notify(struct t7xx_modem *md, enum md_event_id evt_id)
+{
+	struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+	void __iomem *mhccif_base;
+	unsigned int int_sta;
+	unsigned long flags;
+
+	switch (evt_id) {
+	case FSM_PRE_START:
+		t7xx_mhccif_mask_clr(md->t7xx_dev, D2H_INT_PORT_ENUM);
+		break;
+
+	case FSM_START:
+		t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_PORT_ENUM);
+		spin_lock_irqsave(&md->exp_lock, flags);
+		int_sta = t7xx_get_interrupt_status(md->t7xx_dev);
+
+		md->exp_id |= int_sta;
+		if (md->exp_id & D2H_INT_EXCEPTION_INIT) {
+			ctl->exp_flg = true;
+			md->exp_id &= ~D2H_INT_EXCEPTION_INIT;
+			md->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+		} else if (ctl->exp_flg) {
+			md->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+		} else if (md->exp_id & D2H_INT_ASYNC_MD_HK) {
+			queue_work(md->handshake_wq, &md->handshake_work);
+			md->exp_id &= ~D2H_INT_ASYNC_MD_HK;
+			mhccif_base = md->t7xx_dev->base_addr.mhccif_rc_base;
+			iowrite32(D2H_INT_ASYNC_MD_HK, mhccif_base + REG_EP2RC_SW_INT_ACK);
+			t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_ASYNC_MD_HK);
+		} else {
+			t7xx_mhccif_mask_clr(md->t7xx_dev, D2H_INT_ASYNC_MD_HK);
+		}
+
+		spin_unlock_irqrestore(&md->exp_lock, flags);
+
+		t7xx_mhccif_mask_clr(md->t7xx_dev,
+				     D2H_INT_EXCEPTION_INIT |
+				     D2H_INT_EXCEPTION_INIT_DONE |
+				     D2H_INT_EXCEPTION_CLEARQ_DONE |
+				     D2H_INT_EXCEPTION_ALLQ_RESET);
+		break;
+
+	case FSM_READY:
+		t7xx_mhccif_mask_set(md->t7xx_dev, D2H_INT_ASYNC_MD_HK);
+		break;
+
+	default:
+		break;
+	}
+}
+
+void t7xx_md_exception_handshake(struct t7xx_modem *md)
+{
+	struct device *dev = &md->t7xx_dev->pdev->dev;
+	int ret;
+
+	t7xx_md_exception(md, HIF_EX_INIT);
+	ret = t7xx_wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_INIT_DONE);
+	if (ret)
+		dev_err(dev, "EX CCIF HS timeout, RCH 0x%lx\n", D2H_INT_EXCEPTION_INIT_DONE);
+
+	t7xx_md_exception(md, HIF_EX_INIT_DONE);
+	ret = t7xx_wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_CLEARQ_DONE);
+	if (ret)
+		dev_err(dev, "EX CCIF HS timeout, RCH 0x%lx\n", D2H_INT_EXCEPTION_CLEARQ_DONE);
+
+	t7xx_md_exception(md, HIF_EX_CLEARQ_DONE);
+	ret = t7xx_wait_hif_ex_hk_event(md, D2H_INT_EXCEPTION_ALLQ_RESET);
+	if (ret)
+		dev_err(dev, "EX CCIF HS timeout, RCH 0x%lx\n", D2H_INT_EXCEPTION_ALLQ_RESET);
+
+	t7xx_md_exception(md, HIF_EX_ALLQ_RESET);
+}
+
+static struct t7xx_modem *t7xx_md_alloc(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct device *dev = &t7xx_dev->pdev->dev;
+	struct t7xx_modem *md;
+
+	md = devm_kzalloc(dev, sizeof(*md), GFP_KERNEL);
+	if (!md)
+		return NULL;
+
+	md->t7xx_dev = t7xx_dev;
+	t7xx_dev->md = md;
+	md->core_md.ready = false;
+	spin_lock_init(&md->exp_lock);
+	md->handshake_wq = alloc_workqueue("%s", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
+					   0, "md_hk_wq");
+	if (!md->handshake_wq)
+		return NULL;
+
+	INIT_WORK(&md->handshake_work, t7xx_md_hk_wq);
+	return md;
+}
+
+void t7xx_md_reset(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct t7xx_modem *md = t7xx_dev->md;
+
+	md->md_init_finish = false;
+	md->exp_id = 0;
+	spin_lock_init(&md->exp_lock);
+	t7xx_fsm_reset(md);
+	t7xx_cldma_reset(md->md_ctrl[ID_CLDMA1]);
+	md->md_init_finish = true;
+}
+
+/**
+ * t7xx_md_init() - Initialize modem.
+ * @t7xx_dev: MTK device.
+ *
+ * Allocate and initialize MD control block, and initialize data path.
+ * Register MHCCIF ISR and RGU ISR, and start the state machine.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ENOMEM	- Allocation failure.
+ */
+int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct t7xx_modem *md;
+	int ret;
+
+	md = t7xx_md_alloc(t7xx_dev);
+	if (!md)
+		return -ENOMEM;
+
+	ret = t7xx_cldma_alloc(ID_CLDMA1, t7xx_dev);
+	if (ret)
+		goto err_destroy_hswq;
+
+	ret = t7xx_fsm_init(md);
+	if (ret)
+		goto err_destroy_hswq;
+
+	ret = t7xx_cldma_init(md, md->md_ctrl[ID_CLDMA1]);
+	if (ret)
+		goto err_uninit_fsm;
+
+	t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_START, 0);
+	t7xx_md_sys_sw_init(t7xx_dev);
+	md->md_init_finish = true;
+	return 0;
+
+err_uninit_fsm:
+	t7xx_fsm_uninit(md);
+
+err_destroy_hswq:
+	destroy_workqueue(md->handshake_wq);
+	dev_err(&t7xx_dev->pdev->dev, "Modem init failed\n");
+	return ret;
+}
+
+void t7xx_md_exit(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct t7xx_modem *md = t7xx_dev->md;
+
+	t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+
+	if (!md->md_init_finish)
+		return;
+
+	t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_PRE_STOP, FSM_CMD_FLAG_WAIT_FOR_COMPLETION);
+	t7xx_cldma_exit(md->md_ctrl[ID_CLDMA1]);
+	t7xx_fsm_uninit(md);
+	destroy_workqueue(md->handshake_wq);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.h b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
new file mode 100644
index 000000000000..24d2ee5bfbda
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#ifndef __T7XX_MODEM_OPS_H__
+#define __T7XX_MODEM_OPS_H__
+
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_common.h"
+#include "t7xx_pci.h"
+
+#define FEATURE_COUNT		64
+
+/**
+ * enum hif_ex_stage -	HIF exception handshake stages with the HW.
+ * @HIF_EX_INIT:        Disable and clear TXQ.
+ * @HIF_EX_INIT_DONE:   Polling for initialization to be done.
+ * @HIF_EX_CLEARQ_DONE: Disable RX, flush TX/RX workqueues and clear RX.
+ * @HIF_EX_ALLQ_RESET:  HW is back in safe mode for re-initialization and restart.
+ */
+enum hif_ex_stage {
+	HIF_EX_INIT,
+	HIF_EX_INIT_DONE,
+	HIF_EX_CLEARQ_DONE,
+	HIF_EX_ALLQ_RESET,
+};
+
+struct mtk_runtime_feature {
+	u8				feature_id;
+	u8				support_info;
+	u8				reserved[2];
+	__le32				data_len;
+};
+
+enum md_event_id {
+	FSM_PRE_START,
+	FSM_START,
+	FSM_READY,
+};
+
+struct t7xx_sys_info {
+	bool				ready;
+};
+
+struct t7xx_modem {
+	struct cldma_ctrl		*md_ctrl[CLDMA_NUM];
+	struct t7xx_pci_dev		*t7xx_dev;
+	struct t7xx_sys_info		core_md;
+	bool				md_init_finish;
+	bool				rgu_irq_asserted;
+	struct workqueue_struct		*handshake_wq;
+	struct work_struct		handshake_work;
+	struct t7xx_fsm_ctl		*fsm_ctl;
+	struct port_proxy		*port_prox;
+	unsigned int			exp_id;
+	spinlock_t			exp_lock; /* Protects exception events */
+};
+
+void t7xx_md_exception_handshake(struct t7xx_modem *md);
+void t7xx_md_event_notify(struct t7xx_modem *md, enum md_event_id evt_id);
+void t7xx_md_reset(struct t7xx_pci_dev *t7xx_dev);
+int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_md_exit(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_clear_rgu_irq(struct t7xx_pci_dev *t7xx_dev);
+int t7xx_acpi_fldr_func(struct t7xx_pci_dev *t7xx_dev);
+int t7xx_pci_mhccif_isr(struct t7xx_pci_dev *t7xx_dev);
+
+#endif	/* __T7XX_MODEM_OPS_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
new file mode 100644
index 000000000000..6dd8897dfcbb
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -0,0 +1,223 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ */
+
+#include <linux/atomic.h>
+#include <linux/bits.h>
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/gfp.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+
+#define PCI_IREG_BASE			0
+#define PCI_EREG_BASE			2
+
+static int t7xx_request_irq(struct pci_dev *pdev)
+{
+	struct t7xx_pci_dev *t7xx_dev;
+	int ret = 0, i;
+
+	t7xx_dev = pci_get_drvdata(pdev);
+
+	for (i = 0; i < EXT_INT_NUM; i++) {
+		const char *irq_descr;
+		int irq_vec;
+
+		if (!t7xx_dev->intr_handler[i])
+			continue;
+
+		irq_descr = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s_%d",
+					   dev_driver_string(&pdev->dev), i);
+		if (!irq_descr) {
+			ret = -ENOMEM;
+			break;
+		}
+
+		irq_vec = pci_irq_vector(pdev, i);
+		ret = request_threaded_irq(irq_vec, t7xx_dev->intr_handler[i],
+					   t7xx_dev->intr_thread[i], 0, irq_descr,
+					   t7xx_dev->callback_param[i]);
+		if (ret) {
+			dev_err(&pdev->dev, "Failed to request IRQ: %d\n", ret);
+			break;
+		}
+	}
+
+	if (ret) {
+		while (i--) {
+			if (!t7xx_dev->intr_handler[i])
+				continue;
+
+			free_irq(pci_irq_vector(pdev, i), t7xx_dev->callback_param[i]);
+		}
+	}
+
+	return ret;
+}
+
+static int t7xx_setup_msix(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct pci_dev *pdev = t7xx_dev->pdev;
+	int ret;
+
+	/* Only using 6 interrupts, but HW-design requires power-of-2 IRQs allocation */
+	ret = pci_alloc_irq_vectors(pdev, EXT_INT_NUM, EXT_INT_NUM, PCI_IRQ_MSIX);
+	if (ret < 0) {
+		dev_err(&pdev->dev, "Failed to allocate MSI-X entry: %d\n", ret);
+		return ret;
+	}
+
+	ret = t7xx_request_irq(pdev);
+	if (ret) {
+		pci_free_irq_vectors(pdev);
+		return ret;
+	}
+
+	t7xx_pcie_set_mac_msix_cfg(t7xx_dev, EXT_INT_NUM);
+	return 0;
+}
+
+static int t7xx_interrupt_init(struct t7xx_pci_dev *t7xx_dev)
+{
+	int ret, i;
+
+	if (!t7xx_dev->pdev->msix_cap)
+		return -EINVAL;
+
+	ret = t7xx_setup_msix(t7xx_dev);
+	if (ret)
+		return ret;
+
+	/* IPs enable interrupts when ready */
+	for (i = EXT_INT_START; i < EXT_INT_START + EXT_INT_NUM; i++)
+		PCIE_MAC_MSIX_MSK_SET(t7xx_dev, i);
+
+	return 0;
+}
+
+static void t7xx_pci_infracfg_ao_calc(struct t7xx_pci_dev *t7xx_dev)
+{
+	t7xx_dev->base_addr.infracfg_ao_base = t7xx_dev->base_addr.pcie_ext_reg_base +
+					      INFRACFG_AO_DEV_CHIP -
+					      t7xx_dev->base_addr.pcie_dev_reg_trsl_addr;
+}
+
+static int t7xx_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+	struct t7xx_pci_dev *t7xx_dev;
+	int ret;
+
+	t7xx_dev = devm_kzalloc(&pdev->dev, sizeof(*t7xx_dev), GFP_KERNEL);
+	if (!t7xx_dev)
+		return -ENOMEM;
+
+	pci_set_drvdata(pdev, t7xx_dev);
+	t7xx_dev->pdev = pdev;
+
+	ret = pcim_enable_device(pdev);
+	if (ret)
+		return ret;
+
+	pci_set_master(pdev);
+
+	ret = pcim_iomap_regions(pdev, BIT(PCI_IREG_BASE) | BIT(PCI_EREG_BASE), pci_name(pdev));
+	if (ret) {
+		dev_err(&pdev->dev, "Could not request BARs: %d\n", ret);
+		return ret;
+	}
+
+	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
+	if (ret) {
+		dev_err(&pdev->dev, "Could not set PCI DMA mask: %d\n", ret);
+		return ret;
+	}
+
+	ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
+	if (ret) {
+		dev_err(&pdev->dev, "Could not set consistent PCI DMA mask: %d\n", ret);
+		return ret;
+	}
+
+	IREG_BASE(t7xx_dev) = pcim_iomap_table(pdev)[PCI_IREG_BASE];
+	t7xx_dev->base_addr.pcie_ext_reg_base = pcim_iomap_table(pdev)[PCI_EREG_BASE];
+
+	t7xx_pcie_mac_atr_init(t7xx_dev);
+	t7xx_pci_infracfg_ao_calc(t7xx_dev);
+	t7xx_mhccif_init(t7xx_dev);
+
+	ret = t7xx_md_init(t7xx_dev);
+	if (ret)
+		return ret;
+
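+	/* Keep interrupts masked until the MSI-X vectors and handlers are configured */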
+	t7xx_pcie_mac_interrupts_dis(t7xx_dev);
+
+	ret = t7xx_interrupt_init(t7xx_dev);
+	if (ret)
+		return ret;
+
+	t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
+	t7xx_pcie_mac_interrupts_en(t7xx_dev);
+
+	return 0;
+}
+
+static void t7xx_pci_remove(struct pci_dev *pdev)
+{
+	struct t7xx_pci_dev *t7xx_dev;
+	int i;
+
+	t7xx_dev = pci_get_drvdata(pdev);
+	t7xx_md_exit(t7xx_dev);
+
+	for (i = 0; i < EXT_INT_NUM; i++) {
+		if (!t7xx_dev->intr_handler[i])
+			continue;
+
+		free_irq(pci_irq_vector(pdev, i), t7xx_dev->callback_param[i]);
+	}
+
+	pci_free_irq_vectors(t7xx_dev->pdev);
+}
+
+static const struct pci_device_id t7xx_pci_table[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x4d75) },
+	{ }
+};
+MODULE_DEVICE_TABLE(pci, t7xx_pci_table);
+
+static struct pci_driver t7xx_pci_driver = {
+	.name = "mtk_t7xx",
+	.id_table = t7xx_pci_table,
+	.probe = t7xx_pci_probe,
+	.remove = t7xx_pci_remove,
+};
+
+module_pci_driver(t7xx_pci_driver);
+
+MODULE_AUTHOR("MediaTek Inc");
+MODULE_DESCRIPTION("MediaTek PCIe 5G WWAN modem T7xx driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h
new file mode 100644
index 000000000000..b52aaa182a10
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pci.h
@@ -0,0 +1,67 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ */
+
+#ifndef __T7XX_PCI_H__
+#define __T7XX_PCI_H__
+
+#include <linux/irqreturn.h>
+#include <linux/pci.h>
+#include <linux/types.h>
+
+#include "t7xx_reg.h"
+
+/* struct t7xx_addr_base - holds base addresses
+ * @pcie_mac_ireg_base: PCIe MAC register base
+ * @pcie_ext_reg_base: used to calculate base addresses for CLDMA, DPMAIF and MHCCIF registers
+ * @pcie_dev_reg_trsl_addr: used to calculate the register base address
+ * @infracfg_ao_base: base address used in CLDMA reset operations
+ * @mhccif_rc_base: host view of MHCCIF rc base addr
+ */
+struct t7xx_addr_base {
+	void __iomem		*pcie_mac_ireg_base;
+	void __iomem		*pcie_ext_reg_base;
+	u32			pcie_dev_reg_trsl_addr;
+	void __iomem		*infracfg_ao_base;
+	void __iomem		*mhccif_rc_base;
+};
+
+typedef irqreturn_t (*t7xx_intr_callback)(int irq, void *param);
+
+/* struct t7xx_pci_dev - MTK device context structure
+ * @intr_handler: array of handler function for request_threaded_irq
+ * @intr_thread: array of thread_fn for request_threaded_irq
+ * @callback_param: array of cookie passed back to interrupt functions
+ * @mhccif_bitmask: device to host interrupt mask
+ * @pdev: PCI device
+ * @base_addr: memory base addresses of HW components
+ * @md: modem interface
+ * @ccmni_ctlb: context structure used to control the network data path
+ * @rgu_pci_irq_en: RGU callback isr registered and active
+ */
+struct t7xx_pci_dev {
+	t7xx_intr_callback	intr_handler[EXT_INT_NUM];
+	t7xx_intr_callback	intr_thread[EXT_INT_NUM];
+	void			*callback_param[EXT_INT_NUM];
+	u32			mhccif_bitmask;
+	struct pci_dev		*pdev;
+	struct t7xx_addr_base	base_addr;
+	struct t7xx_modem	*md;
+	struct t7xx_ccmni_ctrl	*ccmni_ctlb;
+	bool			rgu_pci_irq_en;
+};
+
+#endif /* __T7XX_PCI_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_pcie_mac.c b/drivers/net/wwan/t7xx/t7xx_pcie_mac.c
new file mode 100644
index 000000000000..fa5effa4e07a
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pcie_mac.c
@@ -0,0 +1,277 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ */
+
+#include <linux/bitops.h>
+#include <linux/bits.h>
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/io-64-nonatomic-lo-hi.h>
+#include <linux/pci.h>
+#include <linux/string.h>
+#include <linux/types.h>
+
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+
+#define PCIE_REG_BAR			2
+#define PCIE_REG_PORT			ATR_SRC_PCI_WIN0
+#define PCIE_REG_TABLE_NUM		0
+#define PCIE_REG_TRSL_PORT		ATR_DST_AXIM_0
+
+#define PCIE_DEV_DMA_PORT_START		ATR_SRC_AXIS_0
+#define PCIE_DEV_DMA_PORT_END		ATR_SRC_AXIS_2
+#define PCIE_DEV_DMA_TABLE_NUM		0
+#define PCIE_DEV_DMA_TRSL_ADDR		0
+#define PCIE_DEV_DMA_SRC_ADDR		0
+#define PCIE_DEV_DMA_TRANSPARENT	1
+#define PCIE_DEV_DMA_SIZE		0
+
+enum t7xx_atr_src_port {
+	ATR_SRC_PCI_WIN0,
+	ATR_SRC_PCI_WIN1,
+	ATR_SRC_AXIS_0,
+	ATR_SRC_AXIS_1,
+	ATR_SRC_AXIS_2,
+	ATR_SRC_AXIS_3,
+};
+
+enum t7xx_atr_dst_port {
+	ATR_DST_PCI_TRX,
+	ATR_DST_PCI_CONFIG,
+	ATR_DST_AXIM_0 = 4,
+	ATR_DST_AXIM_1,
+	ATR_DST_AXIM_2,
+	ATR_DST_AXIM_3,
+};
+
+struct t7xx_atr_config {
+	u64			src_addr;
+	u64			trsl_addr;
+	u64			size;
+	u32			port;
+	u32			table;
+	enum t7xx_atr_dst_port	trsl_id;
+	u32			transparent;
+};
+
+static void t7xx_pcie_mac_atr_tables_dis(void __iomem *pbase, enum t7xx_atr_src_port port)
+{
+	void __iomem *reg;
+	int i, offset;
+
+	for (i = 0; i < ATR_TABLE_NUM_PER_ATR; i++) {
+		offset = ATR_PORT_OFFSET * port + ATR_TABLE_OFFSET * i;
+		reg = pbase + ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR + offset;
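+		/* Writing zero clears the table's source address and parameter bits, disabling it */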
+		iowrite64(0, reg);
+	}
+}
+
+static int t7xx_pcie_mac_atr_cfg(struct t7xx_pci_dev *t7xx_dev, struct t7xx_atr_config *cfg)
+{
+	struct device *dev = &t7xx_dev->pdev->dev;
+	void __iomem *pbase = IREG_BASE(t7xx_dev);
+	int atr_size, pos, offset;
+	void __iomem *reg;
+	u64 value;
+
+	if (cfg->transparent) {
+		/* No address conversion is performed */
+		atr_size = ATR_TRANSPARENT_SIZE;
+	} else {
+		if (cfg->src_addr & (cfg->size - 1)) {
+			dev_err(dev, "Source address is not aligned to size\n");
+			return -EINVAL;
+		}
+
+		if (cfg->trsl_addr & (cfg->size - 1)) {
+			dev_err(dev, "Translation address %llx is not aligned to size %llx\n",
+				cfg->trsl_addr, cfg->size);
+			return -EINVAL;
+		}
+
+		pos = __ffs64(cfg->size);
+
+		/* HW calculates the address translation space as 2^(atr_size + 1),
+		 * e.g. cfg->size = 0x1000 gives pos = 12, atr_size = 11 and 2^(11 + 1) = 0x1000.
+		 */
+		atr_size = pos - 1;
+	}
+
+	offset = ATR_PORT_OFFSET * cfg->port + ATR_TABLE_OFFSET * cfg->table;
+
+	reg = pbase + ATR_PCIE_WIN0_T0_TRSL_ADDR + offset;
+	value = cfg->trsl_addr & ATR_PCIE_WIN0_ADDR_ALGMT;
+	iowrite64(value, reg);
+
+	reg = pbase + ATR_PCIE_WIN0_T0_TRSL_PARAM + offset;
+	iowrite32(cfg->trsl_id, reg);
+
+	reg = pbase + ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR + offset;
+	value = (cfg->src_addr & ATR_PCIE_WIN0_ADDR_ALGMT) | (atr_size << 1) | BIT(0);
+	iowrite64(value, reg);
+
+	/* Ensure ATR is set */
+	ioread64(reg);
+	return 0;
+}
+
+/**
+ * t7xx_pcie_mac_atr_init() - Initialize address translation.
+ * @t7xx_dev: MTK device.
+ *
+ * Set up ATR for ports & device.
+ */
+void t7xx_pcie_mac_atr_init(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct t7xx_atr_config cfg;
+	u32 i;
+
+	/* Disable for all ports */
+	for (i = ATR_SRC_PCI_WIN0; i <= ATR_SRC_AXIS_3; i++)
+		t7xx_pcie_mac_atr_tables_dis(IREG_BASE(t7xx_dev), i);
+
+	memset(&cfg, 0, sizeof(cfg));
+	/* Configure ATR for the RC to access the device's registers */
+	cfg.src_addr = pci_resource_start(t7xx_dev->pdev, PCIE_REG_BAR);
+	cfg.size = PCIE_REG_SIZE_CHIP;
+	cfg.trsl_addr = PCIE_REG_TRSL_ADDR_CHIP;
+	cfg.port = PCIE_REG_PORT;
+	cfg.table = PCIE_REG_TABLE_NUM;
+	cfg.trsl_id = PCIE_REG_TRSL_PORT;
+	t7xx_pcie_mac_atr_tables_dis(IREG_BASE(t7xx_dev), cfg.port);
+	t7xx_pcie_mac_atr_cfg(t7xx_dev, &cfg);
+
+	t7xx_dev->base_addr.pcie_dev_reg_trsl_addr = PCIE_REG_TRSL_ADDR_CHIP;
+
+	/* Configure ATR for the EP to access the RC's memory */
+	for (i = PCIE_DEV_DMA_PORT_START; i <= PCIE_DEV_DMA_PORT_END; i++) {
+		cfg.src_addr = PCIE_DEV_DMA_SRC_ADDR;
+		cfg.size = PCIE_DEV_DMA_SIZE;
+		cfg.trsl_addr = PCIE_DEV_DMA_TRSL_ADDR;
+		cfg.port = i;
+		cfg.table = PCIE_DEV_DMA_TABLE_NUM;
+		cfg.trsl_id = ATR_DST_PCI_TRX;
+		cfg.transparent = PCIE_DEV_DMA_TRANSPARENT;
+		t7xx_pcie_mac_atr_tables_dis(IREG_BASE(t7xx_dev), cfg.port);
+		t7xx_pcie_mac_atr_cfg(t7xx_dev, &cfg);
+	}
+}
+
+/**
+ * t7xx_pcie_mac_enable_disable_int() - Enable/disable interrupts.
+ * @t7xx_dev: MTK device.
+ * @enable: Enable/disable.
+ *
+ * Enable or disable device interrupts.
+ */
+static void t7xx_pcie_mac_enable_disable_int(struct t7xx_pci_dev *t7xx_dev, bool enable)
+{
+	u32 value;
+
+	value = ioread32(IREG_BASE(t7xx_dev) + ISTAT_HST_CTRL);
+
+	if (enable)
+		value &= ~ISTAT_HST_CTRL_DIS;
+	else
+		value |= ISTAT_HST_CTRL_DIS;
+
+	iowrite32(value, IREG_BASE(t7xx_dev) + ISTAT_HST_CTRL);
+}
+
+void t7xx_pcie_mac_interrupts_en(struct t7xx_pci_dev *t7xx_dev)
+{
+	t7xx_pcie_mac_enable_disable_int(t7xx_dev, true);
+}
+
+void t7xx_pcie_mac_interrupts_dis(struct t7xx_pci_dev *t7xx_dev)
+{
+	t7xx_pcie_mac_enable_disable_int(t7xx_dev, false);
+}
+
+/**
+ * t7xx_pcie_mac_clear_set_int() - Clear/set interrupt by type.
+ * @t7xx_dev: MTK device.
+ * @int_type: Interrupt type.
+ * @clear: Clear/set.
+ *
+ * Clear or set device interrupt by type.
+ */
+static void t7xx_pcie_mac_clear_set_int(struct t7xx_pci_dev *t7xx_dev,
+					enum pcie_int int_type, bool clear)
+{
+	void __iomem *reg;
+	u32 val;
+
+	if (t7xx_dev->pdev->msix_enabled) {
+		if (clear)
+			reg = IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_CLR_GRP0_0;
+		else
+			reg = IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_SET_GRP0_0;
+	} else {
+		if (clear)
+			reg = IREG_BASE(t7xx_dev) + INT_EN_HST_CLR;
+		else
+			reg = IREG_BASE(t7xx_dev) + INT_EN_HST_SET;
+	}
+
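+	/* Interrupt sources from enum pcie_int are mapped starting at bit EXT_INT_START */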
+	val = BIT(EXT_INT_START + int_type);
+	iowrite32(val, reg);
+}
+
+void t7xx_pcie_mac_clear_int(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type)
+{
+	t7xx_pcie_mac_clear_set_int(t7xx_dev, int_type, true);
+}
+
+void t7xx_pcie_mac_set_int(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type)
+{
+	t7xx_pcie_mac_clear_set_int(t7xx_dev, int_type, false);
+}
+
+/**
+ * t7xx_pcie_mac_clear_int_status() - Clear interrupt status by type.
+ * @t7xx_dev: MTK device.
+ * @int_type: Interrupt type.
+ *
+ * Clear the device interrupt status by type.
+ */
+void t7xx_pcie_mac_clear_int_status(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type)
+{
+	void __iomem *reg;
+	u32 val;
+
+	if (t7xx_dev->pdev->msix_enabled)
+		reg = IREG_BASE(t7xx_dev) + MSIX_ISTAT_HST_GRP0_0;
+	else
+		reg = IREG_BASE(t7xx_dev) + ISTAT_HST;
+
+	val = BIT(EXT_INT_START + int_type);
+	iowrite32(val, reg);
+}
+
+/**
+ * t7xx_pcie_set_mac_msix_cfg() - Write MSIX control configuration.
+ * @t7xx_dev: MTK device.
+ * @irq_count: Number of MSIX IRQ vectors.
+ *
+ * Write IRQ count to device.
+ */
+void t7xx_pcie_set_mac_msix_cfg(struct t7xx_pci_dev *t7xx_dev, unsigned int irq_count)
+{
+	u32 val;
+
+	val = ffs(irq_count) * 2 - 1;
+	iowrite32(val, IREG_BASE(t7xx_dev) + PCIE_CFG_MSIX);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_pcie_mac.h b/drivers/net/wwan/t7xx/t7xx_pcie_mac.h
new file mode 100644
index 000000000000..7cb6942fd44f
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_pcie_mac.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ *
+ * Contributors:
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ */
+
+#ifndef __T7XX_PCIE_MAC_H__
+#define __T7XX_PCIE_MAC_H__
+
+#include <linux/bits.h>
+#include <linux/io.h>
+
+#include "t7xx_pci.h"
+#include "t7xx_reg.h"
+
+#define IREG_BASE(t7xx_dev)	((t7xx_dev)->base_addr.pcie_mac_ireg_base)
+
+#define PCIE_MAC_MSIX_MSK_SET(t7xx_dev, ext_id)	\
+	iowrite32(BIT(ext_id), IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_SET_GRP0_0)
+
+void t7xx_pcie_mac_interrupts_en(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_pcie_mac_interrupts_dis(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_pcie_mac_atr_init(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_pcie_mac_clear_int(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type);
+void t7xx_pcie_mac_set_int(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type);
+void t7xx_pcie_mac_clear_int_status(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type);
+void t7xx_pcie_set_mac_msix_cfg(struct t7xx_pci_dev *t7xx_dev, unsigned int irq_count);
+
+#endif /* __T7XX_PCIE_MAC_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_reg.h b/drivers/net/wwan/t7xx/t7xx_reg.h
new file mode 100644
index 000000000000..ba2127ca8501
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_reg.h
@@ -0,0 +1,379 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#ifndef __T7XX_REG_H__
+#define __T7XX_REG_H__
+
+#include <linux/bits.h>
+
+/* RC part */
+
+/* Device base address offset - update if reg BAR base is changed */
+#define MHCCIF_RC_DEV_BASE			0x10024000
+
+#define REG_RC2EP_SW_BSY			0x04
+#define REG_RC2EP_SW_INT_START			0x08
+
+#define REG_RC2EP_SW_TCHNUM			0x0c
+#define H2D_CH_EXCEPTION_ACK			1
+#define H2D_CH_EXCEPTION_CLEARQ_ACK		2
+#define H2D_CH_DS_LOCK				3
+/* Channels 4-8 are reserved */
+#define H2D_CH_SUSPEND_REQ			9
+#define H2D_CH_RESUME_REQ			10
+#define H2D_CH_SUSPEND_REQ_AP			11
+#define H2D_CH_RESUME_REQ_AP			12
+#define H2D_CH_DEVICE_RESET			13
+#define H2D_CH_DRM_DISABLE_AP			14
+
+#define REG_EP2RC_SW_INT_STS			0x10
+#define REG_EP2RC_SW_INT_ACK			0x14
+#define REG_EP2RC_SW_INT_EAP_MASK		0x20
+#define REG_EP2RC_SW_INT_EAP_MASK_SET		0x30
+#define REG_EP2RC_SW_INT_EAP_MASK_CLR		0x40
+
+#define D2H_INT_DS_LOCK_ACK			BIT(0)
+#define D2H_INT_EXCEPTION_INIT			BIT(1)
+#define D2H_INT_EXCEPTION_INIT_DONE		BIT(2)
+#define D2H_INT_EXCEPTION_CLEARQ_DONE		BIT(3)
+#define D2H_INT_EXCEPTION_ALLQ_RESET		BIT(4)
+#define D2H_INT_PORT_ENUM			BIT(5)
+/* Bits 6-10 are reserved */
+#define D2H_INT_SUSPEND_ACK			BIT(11)
+#define D2H_INT_RESUME_ACK			BIT(12)
+#define D2H_INT_SUSPEND_ACK_AP			BIT(13)
+#define D2H_INT_RESUME_ACK_AP			BIT(14)
+#define D2H_INT_ASYNC_SAP_HK			BIT(15)
+#define D2H_INT_ASYNC_MD_HK			BIT(16)
+
+/* EP part */
+
+/* Register base */
+#define INFRACFG_AO_DEV_CHIP			0x10001000
+
+/* ATR setting */
+#define PCIE_REG_TRSL_ADDR_CHIP			0x10000000
+#define PCIE_REG_SIZE_CHIP			0x00400000
+
+/* Reset Generic Unit (RGU) */
+#define TOPRGU_CH_PCIE_IRQ_STA			0x1000790c
+
+#define ATR_PORT_OFFSET				0x100
+#define ATR_TABLE_OFFSET			0x20
+#define ATR_TABLE_NUM_PER_ATR			8
+#define ATR_TRANSPARENT_SIZE			0x3f
+
+/* PCIE_MAC_IREG Register Definition */
+
+#define INT_EN_HST				0x0188
+#define ISTAT_HST				0x018c
+
+#define ISTAT_HST_CTRL				0x01ac
+#define ISTAT_HST_CTRL_DIS			BIT(0)
+
+#define INT_EN_HST_SET				0x01f0
+#define INT_EN_HST_CLR				0x01f4
+
+#define PCIE_MISC_CTRL				0x0348
+#define PCIE_MISC_MAC_SLEEP_DIS			BIT(7)
+
+#define PCIE_CFG_MSIX				0x03ec
+#define ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR	0x0600
+#define ATR_PCIE_WIN0_T0_TRSL_ADDR		0x0608
+#define ATR_PCIE_WIN0_T0_TRSL_PARAM		0x0610
+#define ATR_PCIE_WIN0_ADDR_ALGMT		GENMASK_ULL(63, 12)
+
+#define ATR_SRC_ADDR_INVALID			0x007f
+
+#define PCIE_PM_RESUME_STATE			0x0d0c
+
+enum t7xx_pm_resume_state {
+	PM_RESUME_REG_STATE_L3,
+	PM_RESUME_REG_STATE_L1,
+	PM_RESUME_REG_STATE_INIT,
+	PM_RESUME_REG_STATE_EXP,
+	PM_RESUME_REG_STATE_L2,
+	PM_RESUME_REG_STATE_L2_EXP,
+};
+
+#define PCIE_MISC_DEV_STATUS			0x0d1c
+#define MISC_STAGE_MASK				GENMASK(2, 0)
+#define MISC_RESET_TYPE_PLDR			BIT(26)
+#define MISC_RESET_TYPE_FLDR			BIT(27)
+#define LINUX_STAGE				4
+
+#define PCIE_RESOURCE_STATUS			0x0d28
+#define PCIE_RESOURCE_STATUS_MSK		GENMASK(4, 0)
+
+#define DIS_ASPM_LOWPWR_SET_0			0x0e50
+#define DIS_ASPM_LOWPWR_CLR_0			0x0e54
+#define DIS_ASPM_LOWPWR_SET_1			0x0e58
+#define DIS_ASPM_LOWPWR_CLR_1			0x0e5c
+#define L1_DISABLE_BIT(i)			BIT((i) * 4 + 1)
+#define L1_1_DISABLE_BIT(i)			BIT((i) * 4 + 2)
+#define L1_2_DISABLE_BIT(i)			BIT((i) * 4 + 3)
+
+#define MSIX_SW_TRIG_SET_GRP0_0			0x0e80
+#define MSIX_ISTAT_HST_GRP0_0			0x0f00
+#define IMASK_HOST_MSIX_SET_GRP0_0		0x3000
+#define IMASK_HOST_MSIX_CLR_GRP0_0		0x3080
+#define IMASK_HOST_MSIX_GRP0_0			0x3100
+#define EXT_INT_START				24
+#define EXT_INT_NUM				8
+#define MSIX_MSK_SET_ALL			GENMASK(31, 24)
+
+enum pcie_int {
+	DPMAIF_INT = 0,
+	CLDMA0_INT,
+	CLDMA1_INT,
+	CLDMA2_INT,
+	MHCCIF_INT,
+	DPMAIF2_INT,
+	SAP_RGU_INT,
+	CLDMA3_INT,
+};
+
+/* DPMAIF definitions */
+
+#define DPMAIF_PD_BASE				0x1022d000
+#define BASE_DPMAIF_UL				DPMAIF_PD_BASE
+#define BASE_DPMAIF_DL				(DPMAIF_PD_BASE + 0x100)
+#define BASE_DPMAIF_AP_MISC			(DPMAIF_PD_BASE + 0x400)
+#define BASE_DPMAIF_MMW_HPC			(DPMAIF_PD_BASE + 0x600)
+#define BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX		(DPMAIF_PD_BASE + 0x900)
+#define BASE_DPMAIF_PD_SRAM_DL			(DPMAIF_PD_BASE + 0xc00)
+#define BASE_DPMAIF_PD_SRAM_UL			(DPMAIF_PD_BASE + 0xd00)
+
+#define DPMAIF_AO_BASE				0x10014000
+#define BASE_DPMAIF_AO_UL			DPMAIF_AO_BASE
+#define BASE_DPMAIF_AO_DL			(DPMAIF_AO_BASE + 0x400)
+
+/* DPMAIF UL */
+#define DPMAIF_UL_ADD_DESC			(BASE_DPMAIF_UL + 0x00)
+#define DPMAIF_UL_CHK_BUSY			(BASE_DPMAIF_UL + 0x88)
+#define DPMAIF_UL_RESERVE_AO_RW			(BASE_DPMAIF_UL + 0xac)
+#define DPMAIF_UL_ADD_DESC_CH0			(BASE_DPMAIF_UL + 0xb0)
+
+/* DPMAIF DL */
+#define DPMAIF_DL_BAT_INIT			(BASE_DPMAIF_DL + 0x00)
+#define DPMAIF_DL_BAT_ADD			(BASE_DPMAIF_DL + 0x04)
+#define DPMAIF_DL_BAT_INIT_CON0			(BASE_DPMAIF_DL + 0x08)
+#define DPMAIF_DL_BAT_INIT_CON1			(BASE_DPMAIF_DL + 0x0c)
+#define DPMAIF_DL_BAT_INIT_CON2			(BASE_DPMAIF_DL + 0x10)
+#define DPMAIF_DL_BAT_INIT_CON3			(BASE_DPMAIF_DL + 0x50)
+#define DPMAIF_DL_CHK_BUSY			(BASE_DPMAIF_DL + 0xb4)
+
+/* DPMAIF AP misc */
+#define DPMAIF_AP_L2TISAR0			(BASE_DPMAIF_AP_MISC + 0x00)
+#define DPMAIF_AP_APDL_L2TISAR0			(BASE_DPMAIF_AP_MISC + 0x50)
+#define DPMAIF_AP_IP_BUSY			(BASE_DPMAIF_AP_MISC + 0x60)
+#define DPMAIF_AP_CG_EN				(BASE_DPMAIF_AP_MISC + 0x68)
+#define DPMAIF_AP_OVERWRITE_CFG			(BASE_DPMAIF_AP_MISC + 0x90)
+#define DPMAIF_AP_MEM_CLR			(BASE_DPMAIF_AP_MISC + 0x94)
+#define DPMAIF_AP_ALL_L2TISAR0_MASK		GENMASK(31, 0)
+#define DPMAIF_AP_APDL_ALL_L2TISAR0_MASK	GENMASK(31, 0)
+#define DPMAIF_AP_IP_BUSY_MASK			GENMASK(31, 0)
+
+/* DPMAIF AO UL */
+#define DPMAIF_AO_UL_INIT_SET			(BASE_DPMAIF_AO_UL + 0x0)
+#define DPMAIF_AO_UL_CHNL_ARB0			(BASE_DPMAIF_AO_UL + 0x1c)
+#define DPMAIF_AO_UL_AP_L2TIMR0			(BASE_DPMAIF_AO_UL + 0x80)
+#define DPMAIF_AO_UL_AP_L2TIMCR0		(BASE_DPMAIF_AO_UL + 0x84)
+#define DPMAIF_AO_UL_AP_L2TIMSR0		(BASE_DPMAIF_AO_UL + 0x88)
+#define DPMAIF_AO_UL_AP_L1TIMR0			(BASE_DPMAIF_AO_UL + 0x8c)
+#define DPMAIF_AO_UL_APDL_L2TIMR0		(BASE_DPMAIF_AO_UL + 0x90)
+#define DPMAIF_AO_UL_APDL_L2TIMCR0		(BASE_DPMAIF_AO_UL + 0x94)
+#define DPMAIF_AO_UL_APDL_L2TIMSR0		(BASE_DPMAIF_AO_UL + 0x98)
+#define DPMAIF_AO_AP_DLUL_IP_BUSY_MASK		(BASE_DPMAIF_AO_UL + 0x9c)
+
+/* DPMAIF PD SRAM UL */
+#define DPMAIF_AO_UL_CHNL0_CON0			(BASE_DPMAIF_PD_SRAM_UL + 0x10)
+#define DPMAIF_AO_UL_CHNL0_CON1			(BASE_DPMAIF_PD_SRAM_UL + 0x14)
+#define DPMAIF_AO_UL_CHNL0_CON2			(BASE_DPMAIF_PD_SRAM_UL + 0x18)
+#define DPMAIF_AO_UL_CH0_STA			(BASE_DPMAIF_PD_SRAM_UL + 0x70)
+
+/* DPMAIF AO DL */
+#define DPMAIF_AO_DL_INIT_SET			(BASE_DPMAIF_AO_DL + 0x00)
+#define DPMAIF_AO_DL_IRQ_MASK			(BASE_DPMAIF_AO_DL + 0x0c)
+#define DPMAIF_AO_DL_DLQPIT_INIT_CON5		(BASE_DPMAIF_AO_DL + 0x28)
+#define DPMAIF_AO_DL_DLQPIT_TRIG_THRES		(BASE_DPMAIF_AO_DL + 0x34)
+
+/* DPMAIF PD SRAM DL */
+#define DPMAIF_AO_DL_PKTINFO_CON0		(BASE_DPMAIF_PD_SRAM_DL + 0x00)
+#define DPMAIF_AO_DL_PKTINFO_CON1		(BASE_DPMAIF_PD_SRAM_DL + 0x04)
+#define DPMAIF_AO_DL_PKTINFO_CON2		(BASE_DPMAIF_PD_SRAM_DL + 0x08)
+#define DPMAIF_AO_DL_RDY_CHK_THRES		(BASE_DPMAIF_PD_SRAM_DL + 0x0c)
+#define DPMAIF_AO_DL_RDY_CHK_FRG_THRES		(BASE_DPMAIF_PD_SRAM_DL + 0x10)
+
+#define DPMAIF_AO_DL_DLQ_AGG_CFG		(BASE_DPMAIF_PD_SRAM_DL + 0x20)
+#define DPMAIF_AO_DL_DLQPIT_TIMEOUT0		(BASE_DPMAIF_PD_SRAM_DL + 0x24)
+#define DPMAIF_AO_DL_DLQPIT_TIMEOUT1		(BASE_DPMAIF_PD_SRAM_DL + 0x28)
+#define DPMAIF_AO_DL_HPC_CNTL			(BASE_DPMAIF_PD_SRAM_DL + 0x38)
+#define DPMAIF_AO_DL_PIT_SEQ_END		(BASE_DPMAIF_PD_SRAM_DL + 0x40)
+
+#define DPMAIF_AO_DL_BAT_RIDX			(BASE_DPMAIF_PD_SRAM_DL + 0xd8)
+#define DPMAIF_AO_DL_BAT_WRIDX			(BASE_DPMAIF_PD_SRAM_DL + 0xdc)
+#define DPMAIF_AO_DL_PIT_RIDX			(BASE_DPMAIF_PD_SRAM_DL + 0xec)
+#define DPMAIF_AO_DL_PIT_WRIDX			(BASE_DPMAIF_PD_SRAM_DL + 0x60)
+#define DPMAIF_AO_DL_FRGBAT_WRIDX		(BASE_DPMAIF_PD_SRAM_DL + 0x78)
+#define DPMAIF_AO_DL_DLQ_WRIDX			(BASE_DPMAIF_PD_SRAM_DL + 0xa4)
+
+/* DPMAIF HPC */
+#define DPMAIF_HPC_INTR_MASK			(BASE_DPMAIF_MMW_HPC + 0x0f4)
+#define DPMA_HPC_ALL_INT_MASK			GENMASK(15, 0)
+
+#define DPMAIF_HPC_DLQ_PATH_MODE		3
+#define DPMAIF_HPC_ADD_MODE_DF			0
+#define DPMAIF_HPC_TOTAL_NUM			8
+#define DPMAIF_HPC_MAX_TOTAL_NUM		8
+
+/* DPMAIF DL queue */
+#define DPMAIF_DL_DLQPIT_INIT			(BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x00)
+#define DPMAIF_DL_DLQPIT_ADD			(BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x10)
+#define DPMAIF_DL_DLQPIT_INIT_CON0		(BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x14)
+#define DPMAIF_DL_DLQPIT_INIT_CON1		(BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x18)
+#define DPMAIF_DL_DLQPIT_INIT_CON2		(BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x1c)
+#define DPMAIF_DL_DLQPIT_INIT_CON3		(BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x20)
+#define DPMAIF_DL_DLQPIT_INIT_CON4		(BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x24)
+#define DPMAIF_DL_DLQPIT_INIT_CON5		(BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x28)
+#define DPMAIF_DL_DLQPIT_INIT_CON6		(BASE_DPMAIF_DL_DLQ_REMOVEAO_IDX + 0x2c)
+
+#define DPMAIF_ULQSAR_n(q)			(DPMAIF_AO_UL_CHNL0_CON0 + 0x10 * (q))
+#define DPMAIF_UL_DRBSIZE_ADDRH_n(q)		(DPMAIF_AO_UL_CHNL0_CON1 + 0x10 * (q))
+#define DPMAIF_UL_DRB_ADDRH_n(q)		(DPMAIF_AO_UL_CHNL0_CON2 + 0x10 * (q))
+#define DPMAIF_ULQ_STA0_n(q)			(DPMAIF_AO_UL_CH0_STA + 0x04 * (q))
+#define DPMAIF_ULQ_ADD_DESC_CH_n(q)		(DPMAIF_UL_ADD_DESC_CH0 + 0x04 * (q))
+
+#define DPMAIF_UL_DRB_RIDX_OFFSET		16
+
+#define DPMAIF_AP_RGU_ASSERT			0x10001150
+#define DPMAIF_AP_RGU_DEASSERT			0x10001154
+#define DPMAIF_AP_RST_BIT			BIT(2)
+
+#define DPMAIF_AP_AO_RGU_ASSERT			0x10001140
+#define DPMAIF_AP_AO_RGU_DEASSERT		0x10001144
+#define DPMAIF_AP_AO_RST_BIT			BIT(6)
+
+/* DPMAIF init/restore */
+#define DPMAIF_UL_ADD_NOT_READY			BIT(31)
+#define DPMAIF_UL_ADD_UPDATE			BIT(31)
+#define DPMAIF_UL_ADD_COUNT_MASK		GENMASK(15, 0)
+#define DPMAIF_UL_ALL_QUE_ARB_EN		GENMASK(11, 8)
+
+#define DPMAIF_DL_ADD_UPDATE			BIT(31)
+#define DPMAIF_DL_ADD_NOT_READY			BIT(31)
+#define DPMAIF_DL_FRG_ADD_UPDATE		BIT(16)
+#define DPMAIF_DL_ADD_COUNT_MASK		GENMASK(15, 0)
+
+#define DPMAIF_DL_BAT_INIT_ALLSET		BIT(0)
+#define DPMAIF_DL_BAT_FRG_INIT			BIT(16)
+#define DPMAIF_DL_BAT_INIT_EN			BIT(31)
+#define DPMAIF_DL_BAT_INIT_NOT_READY		BIT(31)
+#define DPMAIF_DL_BAT_INIT_ONLY_ENABLE_BIT	0
+
+#define DPMAIF_DL_PIT_INIT_ALLSET		BIT(0)
+#define DPMAIF_DL_PIT_INIT_EN			BIT(31)
+#define DPMAIF_DL_PIT_INIT_NOT_READY		BIT(31)
+
+#define DPMAIF_PKT_ALIGN64_MODE			0
+#define DPMAIF_PKT_ALIGN128_MODE		1
+
+#define DPMAIF_BAT_REMAIN_SZ_BASE		16
+#define DPMAIF_BAT_BUFFER_SZ_BASE		128
+#define DPMAIF_FRG_BUFFER_SZ_BASE		128
+
+#define DLQ_PIT_IDX_SIZE			0x20
+
+#define DPMAIF_PIT_SIZE_MSK			GENMASK(17, 0)
+
+#define DPMAIF_PIT_REM_CNT_MSK			GENMASK(17, 0)
+
+#define DPMAIF_BAT_EN_MSK			BIT(16)
+#define DPMAIF_FRG_EN_MSK			BIT(28)
+#define DPMAIF_BAT_SIZE_MSK			GENMASK(15, 0)
+
+#define DPMAIF_BAT_BID_MAXCNT_MSK		GENMASK(31, 16)
+#define DPMAIF_BAT_REMAIN_MINSZ_MSK		GENMASK(15, 8)
+#define DPMAIF_PIT_CHK_NUM_MSK			GENMASK(31, 24)
+#define DPMAIF_BAT_BUF_SZ_MSK			GENMASK(16, 8)
+#define DPMAIF_FRG_BUF_SZ_MSK			GENMASK(16, 8)
+#define DPMAIF_BAT_RSV_LEN_MSK			GENMASK(7, 0)
+#define DPMAIF_PKT_ALIGN_MSK			GENMASK(23, 22)
+
+#define DPMAIF_BAT_CHECK_THRES_MSK		GENMASK(21, 16)
+#define DPMAIF_FRG_CHECK_THRES_MSK		GENMASK(7, 0)
+
+#define DPMAIF_PKT_ALIGN_EN			BIT(23)
+
+#define DPMAIF_DRB_SIZE_MSK			GENMASK(15, 0)
+
+#define DPMAIF_DL_PIT_WRIDX_MSK			GENMASK(17, 0)
+#define DPMAIF_DL_BAT_WRIDX_MSK			GENMASK(17, 0)
+#define DPMAIF_DL_FRG_WRIDX_MSK			GENMASK(17, 0)
+
+/* Bit fields of registers */
+/* DPMAIF_UL_CHK_BUSY */
+#define DPMAIF_UL_IDLE_STS			BIT(11)
+/* DPMAIF_DL_CHK_BUSY */
+#define DPMAIF_DL_IDLE_STS			BIT(23)
+/* DPMAIF_AO_DL_RDY_CHK_THRES */
+#define DPMAIF_DL_PKT_CHECKSUM_EN		BIT(31)
+#define DPMAIF_PORT_MODE_PCIE			BIT(30)
+#define DPMAIF_DL_BURST_PIT_EN			BIT(13)
+/* DPMAIF_DL_BAT_INIT_CON1 */
+#define DPMAIF_DL_BAT_CACHE_PRI			BIT(22)
+/* DPMAIF_AP_MEM_CLR */
+#define DPMAIF_MEM_CLR				BIT(0)
+/* DPMAIF_AP_OVERWRITE_CFG */
+#define DPMAIF_SRAM_SYNC			BIT(0)
+/* DPMAIF_AO_UL_INIT_SET */
+#define DPMAIF_UL_INIT_DONE			BIT(0)
+/* DPMAIF_AO_DL_INIT_SET */
+#define DPMAIF_DL_INIT_DONE			BIT(0)
+/* DPMAIF_AO_DL_PIT_SEQ_END */
+#define DPMAIF_DL_PIT_SEQ_MSK			GENMASK(7, 0)
+/* DPMAIF_UL_RESERVE_AO_RW */
+#define DPMAIF_PCIE_MODE_SET_VALUE		0x55
+/* DPMAIF_AP_CG_EN */
+#define DPMAIF_CG_EN				0x7f
+
+#define DPMAIF_UDL_IP_BUSY			BIT(0)
+#define DPMAIF_DL_INT_DLQ0_QDONE		BIT(8)
+#define DPMAIF_DL_INT_DLQ1_QDONE		BIT(9)
+#define DPMAIF_DL_INT_DLQ0_PITCNT_LEN		BIT(10)
+#define DPMAIF_DL_INT_DLQ1_PITCNT_LEN		BIT(11)
+#define DPMAIF_DL_INT_Q2TOQ1			BIT(24)
+#define DPMAIF_DL_INT_Q2APTOP			BIT(25)
+
+#define DPMAIF_DLQ_LOW_TIMEOUT_THRES_MKS	GENMASK(15, 0)
+#define DPMAIF_DLQ_HIGH_TIMEOUT_THRES_MSK	GENMASK(31, 16)
+
+/* DPMAIF DLQ HW configure */
+#define DPMAIF_AGG_MAX_LEN_DF			65535
+#define DPMAIF_AGG_TBL_ENT_NUM_DF		50
+#define DPMAIF_HASH_PRIME_DF			13
+#define DPMAIF_MID_TIMEOUT_THRES_DF		100
+#define DPMAIF_DLQ_TIMEOUT_THRES_DF		100
+#define DPMAIF_DLQ_PRS_THRES_DF			10
+#define DPMAIF_DLQ_HASH_BIT_CHOOSE_DF		0
+
+#define DPMAIF_DLQPIT_EN_MSK			BIT(20)
+#define DPMAIF_DLQPIT_CHAN_OFS			16
+#define DPMAIF_ADD_DLQ_PIT_CHAN_OFS		20
+
+#endif /* __T7XX_REG_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
new file mode 100644
index 000000000000..a353eac3e23b
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -0,0 +1,539 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#include <linux/bits.h>
+#include <linux/bitfield.h>
+#include <linux/completion.h>
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/delay.h>
+#include <linux/err.h>
+#include <linux/gfp.h>
+#include <linux/iopoll.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+
+#include "t7xx_hif_cldma.h"
+#include "t7xx_mhccif.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+#include "t7xx_reg.h"
+#include "t7xx_state_monitor.h"
+
+#define FSM_DRM_DISABLE_DELAY_MS		200
+#define FSM_EVENT_POLL_INTERVAL_MS		20
+#define FSM_MD_EX_REC_OK_TIMEOUT_MS		10000
+#define FSM_MD_EX_PASS_TIMEOUT_MS		45000
+#define FSM_CMD_TIMEOUT_MS			2000
+
+void t7xx_fsm_notifier_register(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier)
+{
+	struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ctl->notifier_lock, flags);
+	list_add_tail(&notifier->entry, &ctl->notifier_list);
+	spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+}
+
+void t7xx_fsm_notifier_unregister(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier)
+{
+	struct t7xx_fsm_notifier *notifier_cur, *notifier_next;
+	struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ctl->notifier_lock, flags);
+	list_for_each_entry_safe(notifier_cur, notifier_next, &ctl->notifier_list, entry) {
+		if (notifier_cur == notifier)
+			list_del(&notifier->entry);
+	}
+
+	spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+}
+
+static void fsm_state_notify(struct t7xx_modem *md, enum md_state state)
+{
+	struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+	struct t7xx_fsm_notifier *notifier;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ctl->notifier_lock, flags);
+	list_for_each_entry(notifier, &ctl->notifier_list, entry) {
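+		/* Drop the lock across the callback and re-acquire it afterwards */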
+		spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+		if (notifier->notifier_fn)
+			notifier->notifier_fn(state, notifier->data);
+
+		spin_lock_irqsave(&ctl->notifier_lock, flags);
+	}
+
+	spin_unlock_irqrestore(&ctl->notifier_lock, flags);
+}
+
+void t7xx_fsm_broadcast_state(struct t7xx_fsm_ctl *ctl, enum md_state state)
+{
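+	/* Only accept the READY state while waiting for handshake stage 2 */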
+	if (ctl->md_state != MD_STATE_WAITING_FOR_HS2 && state == MD_STATE_READY)
+		return;
+
+	ctl->md_state = state;
+
+	fsm_state_notify(ctl->md, state);
+}
+
+static void fsm_finish_command(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd, int result)
+{
+	if (cmd->flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+		*cmd->ret = result;
+		complete_all(cmd->done);
+	}
+
+	kfree(cmd);
+}
+
+static void fsm_del_kf_event(struct t7xx_fsm_event *event)
+{
+	list_del(&event->entry);
+	kfree(event);
+}
+
+static void fsm_flush_event_cmd_qs(struct t7xx_fsm_ctl *ctl)
+{
+	struct device *dev = &ctl->md->t7xx_dev->pdev->dev;
+	struct t7xx_fsm_event *event, *evt_next;
+	struct t7xx_fsm_command *cmd, *cmd_next;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ctl->command_lock, flags);
+	list_for_each_entry_safe(cmd, cmd_next, &ctl->command_queue, entry) {
+		dev_warn(dev, "Unhandled command %d\n", cmd->cmd_id);
+		list_del(&cmd->entry);
+		fsm_finish_command(ctl, cmd, -EINVAL);
+	}
+
+	spin_unlock_irqrestore(&ctl->command_lock, flags);
+	spin_lock_irqsave(&ctl->event_lock, flags);
+	list_for_each_entry_safe(event, evt_next, &ctl->event_queue, entry) {
+		dev_warn(dev, "Unhandled event %d\n", event->event_id);
+		fsm_del_kf_event(event);
+	}
+
+	spin_unlock_irqrestore(&ctl->event_lock, flags);
+}
+
+static void fsm_wait_for_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id,
+			       enum t7xx_fsm_event_state event_ignore, int timeout)
+{
+	struct t7xx_fsm_event *event;
+	unsigned long flags;
+	bool ackd = false;
+	int cnt = 0;
+
+	while (cnt++ < timeout / FSM_EVENT_POLL_INTERVAL_MS) {
+		if (kthread_should_stop())
+			return;
+
+		spin_lock_irqsave(&ctl->event_lock, flags);
+		event = list_first_entry_or_null(&ctl->event_queue,
+						 struct t7xx_fsm_event, entry);
+		if (event) {
+			if (event->event_id == event_ignore) {
+				fsm_del_kf_event(event);
+			} else if (event->event_id == event_id) {
+				ackd = true;
+				fsm_del_kf_event(event);
+			}
+		}
+
+		spin_unlock_irqrestore(&ctl->event_lock, flags);
+		if (ackd)
+			break;
+
+		msleep(FSM_EVENT_POLL_INTERVAL_MS);
+	}
+}
+
+static void fsm_routine_exception(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd,
+				  enum t7xx_ex_reason reason)
+{
+	struct device *dev = &ctl->md->t7xx_dev->pdev->dev;
+
+	dev_err(dev, "Exception %d, from %ps\n", reason, __builtin_return_address(0));
+
+	if (ctl->curr_state != FSM_STATE_READY && ctl->curr_state != FSM_STATE_STARTING) {
+		if (cmd)
+			fsm_finish_command(ctl, cmd, -EINVAL);
+
+		return;
+	}
+
+	ctl->curr_state = FSM_STATE_EXCEPTION;
+
+	switch (reason) {
+	case EXCEPTION_HS_TIMEOUT:
+		dev_err(dev, "BOOT_HS_FAIL\n");
+		break;
+
+	case EXCEPTION_EVENT:
+		t7xx_fsm_broadcast_state(ctl, MD_STATE_EXCEPTION);
+		t7xx_md_exception_handshake(ctl->md);
+
+		fsm_wait_for_event(ctl, FSM_EVENT_MD_EX_REC_OK, FSM_EVENT_MD_EX,
+				   FSM_MD_EX_REC_OK_TIMEOUT_MS);
+		fsm_wait_for_event(ctl, FSM_EVENT_MD_EX_PASS, FSM_EVENT_INVALID,
+				   FSM_MD_EX_PASS_TIMEOUT_MS);
+		break;
+
+	default:
+		break;
+	}
+
+	if (cmd)
+		fsm_finish_command(ctl, cmd, 0);
+}
+
+static void fsm_stopped_handler(struct t7xx_fsm_ctl *ctl)
+{
+	ctl->curr_state = FSM_STATE_STOPPED;
+
+	t7xx_fsm_broadcast_state(ctl, MD_STATE_STOPPED);
+	t7xx_md_reset(ctl->md->t7xx_dev);
+}
+
+static void fsm_routine_stopped(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd)
+{
+	if (ctl->curr_state == FSM_STATE_STOPPED) {
+		fsm_finish_command(ctl, cmd, -EINVAL);
+		return;
+	}
+
+	fsm_stopped_handler(ctl);
+	fsm_finish_command(ctl, cmd, 0);
+}
+
+static void fsm_routine_stopping(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd)
+{
+	struct t7xx_pci_dev *t7xx_dev;
+	struct cldma_ctrl *md_ctrl;
+	int err;
+
+	if (ctl->curr_state == FSM_STATE_STOPPED || ctl->curr_state == FSM_STATE_STOPPING) {
+		fsm_finish_command(ctl, cmd, -EINVAL);
+		return;
+	}
+
+	md_ctrl = ctl->md->md_ctrl[ID_CLDMA1];
+	t7xx_dev = ctl->md->t7xx_dev;
+
+	ctl->curr_state = FSM_STATE_STOPPING;
+	t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_TO_STOP);
+	t7xx_cldma_stop(md_ctrl);
+
+	if (!ctl->md->rgu_irq_asserted) {
+		t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_DRM_DISABLE_AP);
+		/* Wait for the DRM disable to take effect */
+		msleep(FSM_DRM_DISABLE_DELAY_MS);
+
+		err = t7xx_acpi_fldr_func(t7xx_dev);
+		if (err)
+			t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_DEVICE_RESET);
+	}
+
+	fsm_stopped_handler(ctl);
+	fsm_finish_command(ctl, cmd, 0);
+}
+
+static void fsm_routine_ready(struct t7xx_fsm_ctl *ctl)
+{
+	struct t7xx_modem *md = ctl->md;
+
+	ctl->curr_state = FSM_STATE_READY;
+	t7xx_fsm_broadcast_state(ctl, MD_STATE_READY);
+	t7xx_md_event_notify(md, FSM_READY);
+}
+
+static void fsm_routine_starting(struct t7xx_fsm_ctl *ctl)
+{
+	struct t7xx_modem *md = ctl->md;
+	struct device *dev;
+
+	ctl->curr_state = FSM_STATE_STARTING;
+
+	t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS1);
+	t7xx_md_event_notify(md, FSM_START);
+
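+	/* Wait up to 60 seconds for the core handshake to complete or an exception */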
+	wait_event_interruptible_timeout(ctl->async_hk_wq, md->core_md.ready || ctl->exp_flg,
+					 HZ * 60);
+	dev = &md->t7xx_dev->pdev->dev;
+
+	if (ctl->exp_flg)
+		dev_err(dev, "MD exception is captured during handshake\n");
+
+	if (!md->core_md.ready) {
+		dev_err(dev, "MD handshake timeout\n");
+		fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
+	} else {
+		fsm_routine_ready(ctl);
+	}
+}
+
+static void fsm_routine_start(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd)
+{
+	struct t7xx_modem *md = ctl->md;
+	u32 dev_status;
+	int ret;
+
+	if (!md)
+		return;
+
+	if (ctl->curr_state != FSM_STATE_INIT && ctl->curr_state != FSM_STATE_PRE_START &&
+	    ctl->curr_state != FSM_STATE_STOPPED) {
+		fsm_finish_command(ctl, cmd, -EINVAL);
+		return;
+	}
+
+	ctl->curr_state = FSM_STATE_PRE_START;
+	t7xx_md_event_notify(md, FSM_PRE_START);
+
+	ret = read_poll_timeout(ioread32, dev_status,
+				(dev_status & MISC_STAGE_MASK) == LINUX_STAGE, 20000, 2000000,
+				false, IREG_BASE(md->t7xx_dev) + PCIE_MISC_DEV_STATUS);
+	if (ret) {
+		struct device *dev = &md->t7xx_dev->pdev->dev;
+
+		dev_err(dev, "Invalid device status 0x%lx\n", dev_status & MISC_STAGE_MASK);
+		fsm_finish_command(ctl, cmd, -ETIMEDOUT);
+		return;
+	}
+
+	t7xx_cldma_hif_hw_init(md->md_ctrl[ID_CLDMA1]);
+	fsm_routine_starting(ctl);
+	fsm_finish_command(ctl, cmd, 0);
+}
+
+static int fsm_main_thread(void *data)
+{
+	struct t7xx_fsm_ctl *ctl = data;
+	struct t7xx_fsm_command *cmd;
+	unsigned long flags;
+
+	while (!kthread_should_stop()) {
+		if (wait_event_interruptible(ctl->command_wq, !list_empty(&ctl->command_queue) ||
+					     kthread_should_stop()))
+			continue;
+
+		if (kthread_should_stop())
+			break;
+
+		spin_lock_irqsave(&ctl->command_lock, flags);
+		cmd = list_first_entry(&ctl->command_queue, struct t7xx_fsm_command, entry);
+		list_del(&cmd->entry);
+		spin_unlock_irqrestore(&ctl->command_lock, flags);
+
+		switch (cmd->cmd_id) {
+		case FSM_CMD_START:
+			fsm_routine_start(ctl, cmd);
+			break;
+
+		case FSM_CMD_EXCEPTION:
+			fsm_routine_exception(ctl, cmd, FIELD_GET(FSM_CMD_EX_REASON, cmd->flag));
+			break;
+
+		case FSM_CMD_PRE_STOP:
+			fsm_routine_stopping(ctl, cmd);
+			break;
+
+		case FSM_CMD_STOP:
+			fsm_routine_stopped(ctl, cmd);
+			break;
+
+		default:
+			fsm_finish_command(ctl, cmd, -EINVAL);
+			fsm_flush_event_cmd_qs(ctl);
+			break;
+		}
+	}
+
+	return 0;
+}
+
+int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id, unsigned int flag)
+{
+	DECLARE_COMPLETION_ONSTACK(done);
+	struct t7xx_fsm_command *cmd;
+	unsigned long flags;
+	int ret;
+
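+	/* Use an atomic allocation when the command is appended from interrupt context */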
+	cmd = kzalloc(sizeof(*cmd), flag & FSM_CMD_FLAG_IN_INTERRUPT ? GFP_ATOMIC : GFP_KERNEL);
+	if (!cmd)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&cmd->entry);
+	cmd->cmd_id = cmd_id;
+	cmd->flag = flag;
+	if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+		cmd->done = &done;
+		cmd->ret = &ret;
+	}
+
+	spin_lock_irqsave(&ctl->command_lock, flags);
+	list_add_tail(&cmd->entry, &ctl->command_queue);
+	spin_unlock_irqrestore(&ctl->command_lock, flags);
+
+	wake_up(&ctl->command_wq);
+
+	if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+		unsigned long wait_ret;
+
+		wait_ret = wait_for_completion_timeout(&done,
+						       msecs_to_jiffies(FSM_CMD_TIMEOUT_MS));
+		if (!wait_ret)
+			return -ETIMEDOUT;
+
+		return ret;
+	}
+
+	return 0;
+}
+
+int t7xx_fsm_append_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id,
+			  unsigned char *data, unsigned int length)
+{
+	struct device *dev = &ctl->md->t7xx_dev->pdev->dev;
+	struct t7xx_fsm_event *event;
+	unsigned long flags;
+
+	if (event_id <= FSM_EVENT_INVALID || event_id >= FSM_EVENT_MAX) {
+		dev_err(dev, "Invalid event %d\n", event_id);
+		return -EINVAL;
+	}
+
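+	/* The event payload is stored right after the struct, hence the extra length */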
+	event = kmalloc(sizeof(*event) + length, in_interrupt() ? GFP_ATOMIC : GFP_KERNEL);
+	if (!event)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&event->entry);
+	event->event_id = event_id;
+	event->length = length;
+
+	if (data && length)
+		memcpy((void *)event + sizeof(*event), data, length);
+
+	spin_lock_irqsave(&ctl->event_lock, flags);
+	list_add_tail(&event->entry, &ctl->event_queue);
+	spin_unlock_irqrestore(&ctl->event_lock, flags);
+	wake_up_all(&ctl->event_wq);
+	return 0;
+}
+
+void t7xx_fsm_clr_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id)
+{
+	struct t7xx_fsm_event *event, *evt_next;
+	unsigned long flags;
+
+	spin_lock_irqsave(&ctl->event_lock, flags);
+	list_for_each_entry_safe(event, evt_next, &ctl->event_queue, entry) {
+		if (event->event_id == event_id)
+			fsm_del_kf_event(event);
+	}
+
+	spin_unlock_irqrestore(&ctl->event_lock, flags);
+}
+
+enum md_state t7xx_fsm_get_md_state(struct t7xx_fsm_ctl *ctl)
+{
+	if (ctl)
+		return ctl->md_state;
+
+	return MD_STATE_INVALID;
+}
+
+unsigned int t7xx_fsm_get_ctl_state(struct t7xx_fsm_ctl *ctl)
+{
+	if (ctl)
+		return ctl->curr_state;
+
+	return FSM_STATE_STOPPED;
+}
+
+void t7xx_fsm_recv_md_intr(struct t7xx_fsm_ctl *ctl, enum t7xx_md_irq_type type)
+{
+	unsigned int cmd_flags = FSM_CMD_FLAG_IN_INTERRUPT;
+
+	if (type == MD_IRQ_PORT_ENUM) {
+		t7xx_fsm_append_cmd(ctl, FSM_CMD_START, cmd_flags);
+	} else if (type == MD_IRQ_CCIF_EX) {
+		ctl->exp_flg = true;
+		wake_up(&ctl->async_hk_wq);
+		cmd_flags |= FIELD_PREP(FSM_CMD_EX_REASON, EXCEPTION_EVENT);
+		t7xx_fsm_append_cmd(ctl, FSM_CMD_EXCEPTION, cmd_flags);
+	}
+}
+
+void t7xx_fsm_reset(struct t7xx_modem *md)
+{
+	struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+
+	fsm_flush_event_cmd_qs(ctl);
+	ctl->curr_state = FSM_STATE_STOPPED;
+	ctl->exp_flg = false;
+}
+
+int t7xx_fsm_init(struct t7xx_modem *md)
+{
+	struct device *dev = &md->t7xx_dev->pdev->dev;
+	struct t7xx_fsm_ctl *ctl;
+
+	ctl = devm_kzalloc(dev, sizeof(*ctl), GFP_KERNEL);
+	if (!ctl)
+		return -ENOMEM;
+
+	md->fsm_ctl = ctl;
+	ctl->md = md;
+	ctl->curr_state = FSM_STATE_INIT;
+	INIT_LIST_HEAD(&ctl->command_queue);
+	INIT_LIST_HEAD(&ctl->event_queue);
+	init_waitqueue_head(&ctl->async_hk_wq);
+	init_waitqueue_head(&ctl->event_wq);
+	INIT_LIST_HEAD(&ctl->notifier_list);
+	init_waitqueue_head(&ctl->command_wq);
+	spin_lock_init(&ctl->event_lock);
+	spin_lock_init(&ctl->command_lock);
+	ctl->exp_flg = false;
+	spin_lock_init(&ctl->notifier_lock);
+
+	ctl->fsm_thread = kthread_run(fsm_main_thread, ctl, "t7xx_fsm");
+	return PTR_ERR_OR_ZERO(ctl->fsm_thread);
+}
+
+void t7xx_fsm_uninit(struct t7xx_modem *md)
+{
+	struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+
+	if (!ctl)
+		return;
+
+	if (ctl->fsm_thread)
+		kthread_stop(ctl->fsm_thread);
+
+	fsm_flush_event_cmd_qs(ctl);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.h b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
new file mode 100644
index 000000000000..39c89b10a3bb
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *
+ * Contributors:
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#ifndef __T7XX_MONITOR_H__
+#define __T7XX_MONITOR_H__
+
+#include <linux/bits.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+
+#include "t7xx_common.h"
+#include "t7xx_modem_ops.h"
+
+enum t7xx_fsm_state {
+	FSM_STATE_INIT,
+	FSM_STATE_PRE_START,
+	FSM_STATE_STARTING,
+	FSM_STATE_READY,
+	FSM_STATE_EXCEPTION,
+	FSM_STATE_STOPPING,
+	FSM_STATE_STOPPED,
+};
+
+enum t7xx_fsm_event_state {
+	FSM_EVENT_INVALID,
+	FSM_EVENT_MD_EX,
+	FSM_EVENT_MD_EX_REC_OK,
+	FSM_EVENT_MD_EX_PASS,
+	FSM_EVENT_MAX
+};
+
+enum t7xx_fsm_cmd_state {
+	FSM_CMD_INVALID,
+	FSM_CMD_START,
+	FSM_CMD_EXCEPTION,
+	FSM_CMD_PRE_STOP,
+	FSM_CMD_STOP,
+};
+
+enum t7xx_ex_reason {
+	EXCEPTION_HS_TIMEOUT,
+	EXCEPTION_EVENT,
+};
+
+enum t7xx_md_irq_type {
+	MD_IRQ_WDT,
+	MD_IRQ_CCIF_EX,
+	MD_IRQ_PORT_ENUM,
+};
+
+#define FSM_CMD_FLAG_WAIT_FOR_COMPLETION	BIT(0)
+#define FSM_CMD_FLAG_FLIGHT_MODE		BIT(1)
+#define FSM_CMD_FLAG_IN_INTERRUPT		BIT(2)
+#define FSM_CMD_EX_REASON			GENMASK(23, 16)
+
+struct t7xx_fsm_ctl {
+	struct t7xx_modem	*md;
+	enum md_state		md_state;
+	unsigned int		curr_state;
+	struct list_head	command_queue;
+	struct list_head	event_queue;
+	wait_queue_head_t	command_wq;
+	wait_queue_head_t	event_wq;
+	wait_queue_head_t	async_hk_wq;
+	spinlock_t		event_lock;		/* Protects event queue */
+	spinlock_t		command_lock;		/* Protects command queue */
+	struct task_struct	*fsm_thread;
+	bool			exp_flg;
+	spinlock_t		notifier_lock;		/* Protects notifier list */
+	struct list_head	notifier_list;
+};
+
+struct t7xx_fsm_event {
+	struct list_head	entry;
+	enum t7xx_fsm_event_state event_id;
+	unsigned int		length;
+};
+
+struct t7xx_fsm_command {
+	struct list_head	entry;
+	enum t7xx_fsm_cmd_state	cmd_id;
+	unsigned int		flag;
+	struct completion	*done;
+	int			*ret;
+};
+
+struct t7xx_fsm_notifier {
+	struct list_head	entry;
+	int (*notifier_fn)(enum md_state state, void *data);
+	void			*data;
+};
+
+int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id,
+			unsigned int flag);
+int t7xx_fsm_append_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id,
+			  unsigned char *data, unsigned int length);
+void t7xx_fsm_clr_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id);
+void t7xx_fsm_broadcast_state(struct t7xx_fsm_ctl *ctl, enum md_state state);
+void t7xx_fsm_reset(struct t7xx_modem *md);
+int t7xx_fsm_init(struct t7xx_modem *md);
+void t7xx_fsm_uninit(struct t7xx_modem *md);
+void t7xx_fsm_recv_md_intr(struct t7xx_fsm_ctl *ctl, enum t7xx_md_irq_type type);
+enum md_state t7xx_fsm_get_md_state(struct t7xx_fsm_ctl *ctl);
+unsigned int t7xx_fsm_get_ctl_state(struct t7xx_fsm_ctl *ctl);
+void t7xx_fsm_notifier_register(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier);
+void t7xx_fsm_notifier_unregister(struct t7xx_modem *md, struct t7xx_fsm_notifier *notifier);
+
+#endif /* __T7XX_MONITOR_H__ */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next v4 04/13] net: wwan: t7xx: Add port proxy infrastructure
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
                   ` (2 preceding siblings ...)
  2022-01-14  1:06 ` [PATCH net-next v4 03/13] net: wwan: t7xx: Add core components Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-01-25 13:38   ` Ilpo Järvinen
  2022-02-10 13:34   ` Ilpo Järvinen
  2022-01-14  1:06 ` [PATCH net-next v4 05/13] net: wwan: t7xx: Add control port Ricardo Martinez
                   ` (9 subsequent siblings)
  13 siblings, 2 replies; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

From: Haijun Liu <haijun.liu@mediatek.com>

Port-proxy provides a common interface to interact with different types
of ports. Ports export their configuration via `struct t7xx_port` and
operate as defined by `struct port_ops`.
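
For illustration, a port could be described to the proxy roughly as
follows. This is a hypothetical sketch based on the structures this
patch introduces, not a port the patch actually registers; the channel
and queue values are placeholders:

    static int example_port_init(struct t7xx_port *port)
    {
            /* Per-port setup, e.g. allocate private state */
            return 0;
    }

    static int example_port_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
    {
            /* Hand the skb to the common RX path */
            return t7xx_port_recv_skb(port, skb);
    }

    static struct port_ops example_port_ops = {
            .init = example_port_init,
            .recv_skb = example_port_recv_skb,
    };

    static struct t7xx_port_static example_port = {
            .tx_ch = PORT_CH_UART2_TX,
            .rx_ch = PORT_CH_UART2_RX,
            .txq_index = Q_IDX_AT_CMD,
            .rxq_index = Q_IDX_AT_CMD,
            .path_id = ID_CLDMA1,
            .ops = &example_port_ops,
            .name = "example",
            .port_type = WWAN_PORT_AT,
    };

The proxy creates a runtime struct t7xx_port for each such descriptor
and multiplexes all port traffic over the CLDMA queues.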

Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
Co-developed-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 drivers/net/wwan/t7xx/Makefile             |   1 +
 drivers/net/wwan/t7xx/t7xx_modem_ops.c     |  12 +
 drivers/net/wwan/t7xx/t7xx_port.h          | 151 ++++++
 drivers/net/wwan/t7xx/t7xx_port_proxy.c    | 546 +++++++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_port_proxy.h    |  71 +++
 drivers/net/wwan/t7xx/t7xx_state_monitor.c |   4 +
 6 files changed, 785 insertions(+)
 create mode 100644 drivers/net/wwan/t7xx/t7xx_port.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_port_proxy.h

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index 6a49013bc343..99f9ca3b4b51 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -10,3 +10,4 @@ mtk_t7xx-y:=	t7xx_pci.o \
 		t7xx_modem_ops.o \
 		t7xx_cldma.o \
 		t7xx_hif_cldma.o  \
+		t7xx_port_proxy.o  \
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
index a106dbb526ea..df317714ba06 100644
--- a/drivers/net/wwan/t7xx/t7xx_modem_ops.c
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
@@ -34,6 +34,8 @@
 #include "t7xx_modem_ops.h"
 #include "t7xx_pci.h"
 #include "t7xx_pcie_mac.h"
+#include "t7xx_port.h"
+#include "t7xx_port_proxy.h"
 #include "t7xx_reg.h"
 #include "t7xx_state_monitor.h"
 
@@ -212,6 +214,7 @@ static void t7xx_md_exception(struct t7xx_modem *md, enum hif_ex_stage stage)
 	if (stage == HIF_EX_CLEARQ_DONE) {
 		/* Give DHL time to flush data */
 		msleep(PORT_RESET_DELAY_MS);
+		t7xx_port_proxy_reset(md->port_prox);
 	}
 
 	t7xx_cldma_exception(md->md_ctrl[ID_CLDMA1], stage);
@@ -369,6 +372,7 @@ void t7xx_md_reset(struct t7xx_pci_dev *t7xx_dev)
 	spin_lock_init(&md->exp_lock);
 	t7xx_fsm_reset(md);
 	t7xx_cldma_reset(md->md_ctrl[ID_CLDMA1]);
+	t7xx_port_proxy_reset(md->port_prox);
 	md->md_init_finish = true;
 }
 
@@ -404,11 +408,18 @@ int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
 	if (ret)
 		goto err_uninit_fsm;
 
+	ret = t7xx_port_proxy_init(md);
+	if (ret)
+		goto err_uninit_cldma;
+
 	t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_START, 0);
 	t7xx_md_sys_sw_init(t7xx_dev);
 	md->md_init_finish = true;
 	return 0;
 
+err_uninit_cldma:
+	t7xx_cldma_exit(md->md_ctrl[ID_CLDMA1]);
+
 err_uninit_fsm:
 	t7xx_fsm_uninit(md);
 
@@ -428,6 +439,7 @@ void t7xx_md_exit(struct t7xx_pci_dev *t7xx_dev)
 		return;
 
 	t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_PRE_STOP, FSM_CMD_FLAG_WAIT_FOR_COMPLETION);
+	t7xx_port_proxy_uninit(md->port_prox);
 	t7xx_cldma_exit(md->md_ctrl[ID_CLDMA1]);
 	t7xx_fsm_uninit(md);
 	destroy_workqueue(md->handshake_wq);
diff --git a/drivers/net/wwan/t7xx/t7xx_port.h b/drivers/net/wwan/t7xx/t7xx_port.h
new file mode 100644
index 000000000000..eeddbb361f81
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ */
+
+#ifndef __T7XX_PORT_H__
+#define __T7XX_PORT_H__
+
+#include <linux/bits.h>
+#include <linux/device.h>
+#include <linux/mutex.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+#include <linux/wwan.h>
+
+#include "t7xx_hif_cldma.h"
+#include "t7xx_pci.h"
+
+#define PORT_F_RX_ALLOW_DROP	BIT(0)	/* Packet will be dropped if the port's RX buffer is full */
+#define PORT_F_RX_FULLED	BIT(1)	/* RX buffer has been detected to be full */
+#define PORT_F_USER_HEADER	BIT(2)	/* CCCI header will be provided by the user, not by CCCI */
+#define PORT_F_RX_EXCLUSIVE	BIT(3)	/* RX queue only has this one port */
+#define PORT_F_RX_ADJUST_HEADER	BIT(4)	/* Check whether the CCCI header must be removed from the received skb */
+#define PORT_F_RX_CH_TRAFFIC	BIT(5)	/* Enable port channel traffic */
+#define PORT_F_RX_CHAR_NODE	BIT(7)	/* Requires exporting char dev node to userspace */
+#define PORT_F_CHAR_NODE_SHOW	BIT(10)	/* The char dev node is shown to userspace by default */
+
+/* Reused for net TX, Data queue, same bit as RX_FULLED */
+#define PORT_F_TX_DATA_FULLED	BIT(1)
+#define PORT_F_TX_ACK_FULLED	BIT(8)
+
+#define PORT_CH_ID_MASK		GENMASK(7, 0)
+#define	PORT_INVALID_CH_ID	GENMASK(15, 0)
+
+/* Channel ID and Message ID definitions.
+ * The channel number consists of peer_id (15:12) and channel_id (11:0).
+ * peer_id:
+ * 0: reserved, 1: to sAP, 2: to MD
+ */
+enum port_ch {
+	/* to MD */
+	PORT_CH_CONTROL_RX = 0x2000,
+	PORT_CH_CONTROL_TX = 0x2001,
+	PORT_CH_UART1_RX = 0x2006,	/* META */
+	PORT_CH_UART1_TX = 0x2008,
+	PORT_CH_UART2_RX = 0x200a,	/* AT */
+	PORT_CH_UART2_TX = 0x200c,
+	PORT_CH_MD_LOG_RX = 0x202a,	/* MD logging */
+	PORT_CH_MD_LOG_TX = 0x202b,
+	PORT_CH_LB_IT_RX = 0x203e,	/* Loop back test */
+	PORT_CH_LB_IT_TX = 0x203f,
+	PORT_CH_STATUS_RX = 0x2043,	/* Status polling */
+	PORT_CH_MIPC_RX = 0x20ce,	/* MIPC */
+	PORT_CH_MIPC_TX = 0x20cf,
+	PORT_CH_MBIM_RX = 0x20d0,
+	PORT_CH_MBIM_TX = 0x20d1,
+	PORT_CH_DSS0_RX = 0x20d2,
+	PORT_CH_DSS0_TX = 0x20d3,
+	PORT_CH_DSS1_RX = 0x20d4,
+	PORT_CH_DSS1_TX = 0x20d5,
+	PORT_CH_DSS2_RX = 0x20d6,
+	PORT_CH_DSS2_TX = 0x20d7,
+	PORT_CH_DSS3_RX = 0x20d8,
+	PORT_CH_DSS3_TX = 0x20d9,
+	PORT_CH_DSS4_RX = 0x20da,
+	PORT_CH_DSS4_TX = 0x20db,
+	PORT_CH_DSS5_RX = 0x20dc,
+	PORT_CH_DSS5_TX = 0x20dd,
+	PORT_CH_DSS6_RX = 0x20de,
+	PORT_CH_DSS6_TX = 0x20df,
+	PORT_CH_DSS7_RX = 0x20e0,
+	PORT_CH_DSS7_TX = 0x20e1,
+};
+
+struct t7xx_port;
+struct port_ops {
+	int (*init)(struct t7xx_port *port);
+	int (*recv_skb)(struct t7xx_port *port, struct sk_buff *skb);
+	void (*md_state_notify)(struct t7xx_port *port, unsigned int md_state);
+	void (*uninit)(struct t7xx_port *port);
+	int (*enable_chl)(struct t7xx_port *port);
+	int (*disable_chl)(struct t7xx_port *port);
+};
+
+typedef void (*port_skb_handler)(struct t7xx_port *port, struct sk_buff *skb);
+
+struct t7xx_port_static {
+	enum port_ch		tx_ch;
+	enum port_ch		rx_ch;
+	unsigned char		txq_index;
+	unsigned char		rxq_index;
+	unsigned char		txq_exp_index;
+	unsigned char		rxq_exp_index;
+	enum cldma_id		path_id;
+	unsigned int		flags;
+	struct port_ops		*ops;
+	char			*name;
+	enum wwan_port_type	port_type;
+};
+
+struct t7xx_port {
+	/* Members not initialized in definition */
+	struct t7xx_port_static *port_static;
+	struct wwan_port	*wwan_port;
+	struct t7xx_pci_dev	*t7xx_dev;
+	struct device		*dev;
+	short			seq_nums[2];
+	atomic_t		usage_cnt;
+	struct			list_head entry;
+	struct			list_head queue_entry;
+	/* TX and RX flows are asymmetric since ports are multiplexed on
+	 * queues.
+	 *
+	 * TX: data blocks are sent directly to a queue. A port does not
+	 * maintain a TX list; instead, it only provides a wait_queue_head
+	 * for blocking writes.
+	 *
+	 * RX: each port uses an RX list to hold packets, allowing the
+	 * modem to dispatch RX packets as quickly as possible.
+	 */
+	struct sk_buff_head	rx_skb_list;
+	spinlock_t		port_update_lock; /* Protects port configuration */
+	wait_queue_head_t	rx_wq;
+	int			rx_length_th;
+	port_skb_handler	skb_handler;
+	bool			chan_enable;
+	bool			chn_crt_stat;
+	struct task_struct	*thread;
+	struct mutex		tx_mutex_lock; /* Protects the seq number operation */
+	unsigned int		flags;
+};
+
+int t7xx_port_recv_skb(struct t7xx_port *port, struct sk_buff *skb);
+int t7xx_port_send_skb_to_md(struct t7xx_port *port, struct sk_buff *skb, bool blocking);
+
+#endif /* __T7XX_PORT_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.c b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
new file mode 100644
index 000000000000..af16cb01c607
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
@@ -0,0 +1,546 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#include <linux/bits.h>
+#include <linux/bitfield.h>
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/mutex.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/wait.h>
+#include <linux/wwan.h>
+
+#include "t7xx_common.h"
+#include "t7xx_hif_cldma.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_port.h"
+#include "t7xx_port_proxy.h"
+#include "t7xx_state_monitor.h"
+
+#define CHECK_RX_SEQ_MASK		GENMASK(14, 0)
+#define Q_IDX_CTRL			0
+#define Q_IDX_MBIM			2
+#define Q_IDX_AT_CMD			5
+
+#define for_each_proxy_port(i, p, proxy)	\
+	for (i = 0, (p) = &(proxy)->ports_private[i];	\
+	     i < (proxy)->port_number;		\
+	     i++, (p) = &(proxy)->ports_private[i])
+
+static struct t7xx_port_static t7xx_md_ports[1];
+
+static struct t7xx_port *t7xx_proxy_get_port_by_ch(struct port_proxy *port_prox, enum port_ch ch)
+{
+	struct t7xx_port_static *port_static;
+	struct t7xx_port *port;
+	int i;
+
+	for_each_proxy_port(i, port, port_prox) {
+		port_static = port->port_static;
+		if (port_static->rx_ch == ch || port_static->tx_ch == ch)
+			return port;
+	}
+
+	return NULL;
+}
+
+/* Sequence numbering to detect lost packets */
+void t7xx_port_proxy_set_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h)
+{
+	if (ccci_h && port) {
+		ccci_h->status &= cpu_to_le32(~HDR_FLD_SEQ);
+		ccci_h->status |= cpu_to_le32(FIELD_PREP(HDR_FLD_SEQ, port->seq_nums[MTK_TX]));
+		ccci_h->status &= cpu_to_le32(~HDR_FLD_AST);
+		ccci_h->status |= cpu_to_le32(FIELD_PREP(HDR_FLD_AST, 1));
+	}
+}
+
+static u16 t7xx_port_check_rx_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h)
+{
+	u16 seq_num, assert_bit;
+
+	seq_num = FIELD_GET(HDR_FLD_SEQ, le32_to_cpu(ccci_h->status));
+	assert_bit = FIELD_GET(HDR_FLD_AST, le32_to_cpu(ccci_h->status));
+	if (assert_bit && port->seq_nums[MTK_RX] &&
+	    ((seq_num - port->seq_nums[MTK_RX]) & CHECK_RX_SEQ_MASK) != 1) {
+		dev_warn_ratelimited(port->dev,
+				     "seq num out-of-order %d->%d (header %X, len %X)\n",
+				     seq_num, port->seq_nums[MTK_RX],
+				     le32_to_cpu(ccci_h->packet_header),
+				     le32_to_cpu(ccci_h->packet_len));
+	}
+
+	return seq_num;
+}
+
+void t7xx_port_proxy_reset(struct port_proxy *port_prox)
+{
+	struct t7xx_port *port;
+	int i;
+
+	for_each_proxy_port(i, port, port_prox) {
+		port->seq_nums[MTK_RX] = -1;
+		port->seq_nums[MTK_TX] = 0;
+	}
+}
+
+static int t7xx_port_get_queue_no(struct t7xx_port *port)
+{
+	struct t7xx_port_static *port_static = port->port_static;
+	struct t7xx_fsm_ctl *ctl = port->t7xx_dev->md->fsm_ctl;
+
+	return t7xx_fsm_get_md_state(ctl) == MD_STATE_EXCEPTION ?
+		port_static->txq_exp_index : port_static->txq_index;
+}
+
+static void t7xx_port_struct_init(struct t7xx_port *port)
+{
+	INIT_LIST_HEAD(&port->entry);
+	INIT_LIST_HEAD(&port->queue_entry);
+	skb_queue_head_init(&port->rx_skb_list);
+	init_waitqueue_head(&port->rx_wq);
+	port->seq_nums[MTK_RX] = -1;
+	port->seq_nums[MTK_TX] = 0;
+	atomic_set(&port->usage_cnt, 0);
+}
+
+static void t7xx_port_adjust_skb(struct t7xx_port *port, struct sk_buff *skb)
+{
+	struct ccci_header *ccci_h = (struct ccci_header *)skb->data;
+	struct t7xx_port_static *port_static = port->port_static;
+
+	if (port->flags & PORT_F_USER_HEADER) {
+		if (le32_to_cpu(ccci_h->packet_header) == CCCI_HEADER_NO_DATA) {
+			if (skb->len > sizeof(*ccci_h)) {
+				dev_err_ratelimited(port->dev,
+						    "Recv unexpected data for %s, skb->len=%d\n",
+						    port_static->name, skb->len);
+				skb_trim(skb, sizeof(*ccci_h));
+			}
+		}
+	} else {
+		skb_pull(skb, sizeof(*ccci_h));
+	}
+}
+
+/**
+ * t7xx_port_recv_skb() - receive skb from modem or HIF.
+ * @port: port to use.
+ * @skb: skb to use.
+ *
+ * Used to receive native HIF RX data, which follows the same RX receive flow.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ENOBUFS	- No room left in the port's RX queue.
+ */
+int t7xx_port_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&port->rx_wq.lock, flags);
+	if (port->rx_skb_list.qlen < port->rx_length_th) {
+		struct ccci_header *ccci_h = (struct ccci_header *)skb->data;
+		u32 channel;
+
+		port->flags &= ~PORT_F_RX_FULLED;
+		if (port->flags & PORT_F_RX_ADJUST_HEADER)
+			t7xx_port_adjust_skb(port, skb);
+
+		channel = FIELD_GET(HDR_FLD_CHN, le32_to_cpu(ccci_h->status));
+		if (channel == PORT_CH_STATUS_RX) {
+			port->skb_handler(port, skb);
+		} else {
+			if (port->wwan_port)
+				wwan_port_rx(port->wwan_port, skb);
+			else
+				__skb_queue_tail(&port->rx_skb_list, skb);
+		}
+
+		spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+		wake_up_all(&port->rx_wq);
+		return 0;
+	}
+
+	port->flags |= PORT_F_RX_FULLED;
+	spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+	return -ENOBUFS;
+}
+
+static struct cldma_ctrl *get_md_ctrl(struct t7xx_port *port)
+{
+	enum cldma_id id = port->port_static->path_id;
+
+	return port->t7xx_dev->md->md_ctrl[id];
+}
+
+int t7xx_port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb)
+{
+	struct ccci_header *ccci_h = (struct ccci_header *)(skb->data);
+	struct cldma_ctrl *md_ctrl;
+	unsigned char tx_qno;
+	int ret;
+
+	tx_qno = t7xx_port_get_queue_no(port);
+	t7xx_port_proxy_set_seq_num(port, ccci_h);
+
+	md_ctrl = get_md_ctrl(port);
+	ret = t7xx_cldma_send_skb(md_ctrl, tx_qno, skb, true);
+	if (ret) {
+		dev_err(port->dev, "Failed to send skb: %d\n", ret);
+		return ret;
+	}
+
+	/* Record the port seq_num after the data is sent to HIF.
+	 * Only bits 0-14 are used, so counter wrap-around is not a concern.
+	 */
+	port->seq_nums[MTK_TX]++;
+
+	return 0;
+}
+
+int t7xx_port_send_skb_to_md(struct t7xx_port *port, struct sk_buff *skb, bool blocking)
+{
+	struct t7xx_port_static *port_static = port->port_static;
+	struct t7xx_fsm_ctl *ctl = port->t7xx_dev->md->fsm_ctl;
+	struct cldma_ctrl *md_ctrl;
+	enum md_state md_state;
+	unsigned int fsm_state;
+
+	md_state = t7xx_fsm_get_md_state(ctl);
+
+	fsm_state = t7xx_fsm_get_ctl_state(ctl);
+	if (fsm_state != FSM_STATE_PRE_START) {
+		if (md_state == MD_STATE_WAITING_FOR_HS1 || md_state == MD_STATE_WAITING_FOR_HS2)
+			return -ENODEV;
+
+		if (md_state == MD_STATE_EXCEPTION && port_static->tx_ch != PORT_CH_MD_LOG_TX &&
+		    port_static->tx_ch != PORT_CH_UART1_TX)
+			return -EBUSY;
+
+		if (md_state == MD_STATE_STOPPED || md_state == MD_STATE_WAITING_TO_STOP ||
+		    md_state == MD_STATE_INVALID)
+			return -ENODEV;
+	}
+
+	md_ctrl = get_md_ctrl(port);
+	return t7xx_cldma_send_skb(md_ctrl, t7xx_port_get_queue_no(port), skb, blocking);
+}
+
+static void t7xx_proxy_setup_ch_mapping(struct port_proxy *port_prox)
+{
+	struct t7xx_port *port;
+
+	int i, j;
+
+	for (i = 0; i < ARRAY_SIZE(port_prox->rx_ch_ports); i++)
+		INIT_LIST_HEAD(&port_prox->rx_ch_ports[i]);
+
+	for (j = 0; j < ARRAY_SIZE(port_prox->queue_ports); j++) {
+		for (i = 0; i < ARRAY_SIZE(port_prox->queue_ports[j]); i++)
+			INIT_LIST_HEAD(&port_prox->queue_ports[j][i]);
+	}
+
+	for_each_proxy_port(i, port, port_prox) {
+		struct t7xx_port_static *port_static = port->port_static;
+		enum cldma_id path_id = port_static->path_id;
+		u8 ch_id;
+
+		ch_id = FIELD_GET(PORT_CH_ID_MASK, port_static->rx_ch);
+		list_add_tail(&port->entry, &port_prox->rx_ch_ports[ch_id]);
+		list_add_tail(&port->queue_entry,
+			      &port_prox->queue_ports[path_id][port_static->rxq_index]);
+	}
+}
+
+/**
+ * t7xx_port_proxy_dispatch_recv_skb() - Dispatch received skb.
+ * @queue: CLDMA queue.
+ * @skb: Socket buffer.
+ * @drop_skb_on_err: Output flag, set when the skb should be dropped by the caller on error.
+ *
+ * If recv_skb returns 0 or drop_skb_on_err is set, it is the port's duty
+ * to free the skb and the caller should no longer reference it.
+ * If recv_skb returns any other error, the caller should free the skb.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -EINVAL	- Failed to get skb, channel out of range, or invalid MD state.
+ * * -ENETDOWN	- Network is down.
+ */
+static int t7xx_port_proxy_dispatch_recv_skb(struct cldma_queue *queue, struct sk_buff *skb,
+					     bool *drop_skb_on_err)
+{
+	struct ccci_header *ccci_h = (struct ccci_header *)skb->data;
+	struct port_proxy *port_prox = queue->md->port_prox;
+	struct t7xx_fsm_ctl *ctl = queue->md->fsm_ctl;
+	struct list_head *port_list;
+	struct t7xx_port *port;
+	u16 seq_num, channel;
+	int ret = 0;
+	u8 ch_id;
+
+	channel = FIELD_GET(HDR_FLD_CHN, le32_to_cpu(ccci_h->status));
+	ch_id = FIELD_GET(PORT_CH_ID_MASK, channel);
+
+	if (t7xx_fsm_get_md_state(ctl) == MD_STATE_INVALID) {
+		*drop_skb_on_err = true;
+		return -EINVAL;
+	}
+
+	port_list = &port_prox->rx_ch_ports[ch_id];
+	list_for_each_entry(port, port_list, entry) {
+		struct t7xx_port_static *port_static = port->port_static;
+
+		if (queue->md_ctrl->hif_id != port_static->path_id || channel !=
+		    port_static->rx_ch)
+			continue;
+
+		/* Multi-cast is not supported, because one port may free or modify
+		 * the skb before another port can process it.
+		 * However, req->state could still be used for some form of multi-cast if needed.
+		 */
+		if (port_static->ops->recv_skb) {
+			seq_num = t7xx_port_check_rx_seq_num(port, ccci_h);
+			ret = port_static->ops->recv_skb(port, skb);
+			/* If the packet is stored to RX buffer successfully or dropped,
+			 * the sequence number will be updated.
+			 */
+			if (ret == -ENETDOWN || (ret < 0 && port->flags & PORT_F_RX_ALLOW_DROP)) {
+				*drop_skb_on_err = true;
+				dev_err_ratelimited(port->dev,
+						    "port %s RX full, drop packet\n",
+						    port_static->name);
+			}
+
+			if (!ret || *drop_skb_on_err)
+				port->seq_nums[MTK_RX] = seq_num;
+		}
+
+		break;
+	}
+
+	return ret;
+}
+
+static int t7xx_port_proxy_recv_skb(struct cldma_queue *queue, struct sk_buff *skb)
+{
+	bool drop_skb_on_err = false;
+	int ret;
+
+	if (!skb)
+		return -EINVAL;
+
+	ret = t7xx_port_proxy_dispatch_recv_skb(queue, skb, &drop_skb_on_err);
+	if (ret < 0 && drop_skb_on_err) {
+		dev_kfree_skb_any(skb);
+		return 0;
+	}
+
+	return ret;
+}
+
+/**
+ * t7xx_port_proxy_md_status_notify() - Notify all ports of state.
+ * @port_prox: The port_proxy pointer.
+ * @state: State.
+ *
+ * Called by t7xx_fsm. Used to dispatch the modem state to all ports
+ * that want to know about MD state transitions.
+ */
+void t7xx_port_proxy_md_status_notify(struct port_proxy *port_prox, unsigned int state)
+{
+	struct t7xx_port *port;
+	int i;
+
+	for_each_proxy_port(i, port, port_prox) {
+		struct t7xx_port_static *port_static = port->port_static;
+
+		if (port_static->ops->md_state_notify)
+			port_static->ops->md_state_notify(port, state);
+	}
+}
+
+static void t7xx_proxy_init_all_ports(struct t7xx_modem *md)
+{
+	struct port_proxy *port_prox = md->port_prox;
+	struct t7xx_port *port;
+	int i;
+
+	for_each_proxy_port(i, port, port_prox) {
+		struct t7xx_port_static *port_static = port->port_static;
+
+		t7xx_port_struct_init(port);
+
+		port->t7xx_dev = md->t7xx_dev;
+		port->dev = &md->t7xx_dev->pdev->dev;
+		spin_lock_init(&port->port_update_lock);
+		spin_lock(&port->port_update_lock);
+		mutex_init(&port->tx_mutex_lock);
+
+		if (port->flags & PORT_F_CHAR_NODE_SHOW)
+			port->chan_enable = true;
+		else
+			port->chan_enable = false;
+
+		port->chn_crt_stat = false;
+		spin_unlock(&port->port_update_lock);
+
+		if (port_static->ops->init)
+			port_static->ops->init(port);
+	}
+
+	t7xx_proxy_setup_ch_mapping(port_prox);
+}
+
+static int t7xx_proxy_alloc(struct t7xx_modem *md)
+{
+	unsigned int port_number = ARRAY_SIZE(t7xx_md_ports);
+	struct device *dev = &md->t7xx_dev->pdev->dev;
+	struct t7xx_port *ports_private;
+	struct port_proxy *port_prox;
+	int i;
+
+	port_prox = devm_kzalloc(dev, sizeof(*port_prox), GFP_KERNEL);
+	if (!port_prox)
+		return -ENOMEM;
+
+	md->port_prox = port_prox;
+	port_prox->dev = dev;
+	port_prox->ports_shared = t7xx_md_ports;
+
+	ports_private = devm_kzalloc(dev, sizeof(*ports_private) * port_number, GFP_KERNEL);
+	if (!ports_private)
+		return -ENOMEM;
+
+	for (i = 0; i < port_number; i++) {
+		ports_private[i].port_static = &port_prox->ports_shared[i];
+		ports_private[i].flags = port_prox->ports_shared[i].flags;
+	}
+
+	port_prox->ports_private = ports_private;
+	port_prox->port_number = port_number;
+	t7xx_proxy_init_all_ports(md);
+	return 0;
+}
+
+/**
+ * t7xx_port_proxy_init() - Initialize ports.
+ * @md: Modem.
+ *
+ * Create all port instances.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ERROR	- Error code from a failed sub-initialization.
+ */
+int t7xx_port_proxy_init(struct t7xx_modem *md)
+{
+	int ret;
+
+	ret = t7xx_proxy_alloc(md);
+	if (ret)
+		return ret;
+
+	t7xx_cldma_set_recv_skb(md->md_ctrl[ID_CLDMA1], t7xx_port_proxy_recv_skb);
+	return 0;
+}
+
+void t7xx_port_proxy_uninit(struct port_proxy *port_prox)
+{
+	struct t7xx_port *port;
+	int i;
+
+	for_each_proxy_port(i, port, port_prox) {
+		struct t7xx_port_static *port_static = port->port_static;
+
+		if (port_static->ops->uninit)
+			port_static->ops->uninit(port);
+	}
+}
+
+/**
+ * t7xx_port_proxy_node_control() - Create/remove node.
+ * @md: Modem.
+ * @port_msg: Message.
+ *
+ * Used to create or remove a device node.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -EFAULT	- Message check failure.
+ */
+int t7xx_port_proxy_node_control(struct t7xx_modem *md, struct port_msg *port_msg)
+{
+	u32 *port_info_base = (void *)port_msg + sizeof(*port_msg);
+	struct device *dev = &md->t7xx_dev->pdev->dev;
+	unsigned int ports, i;
+	unsigned int version;
+
+	version = FIELD_GET(PORT_MSG_VERSION, le32_to_cpu(port_msg->info));
+	if (version != PORT_ENUM_VER ||
+	    le32_to_cpu(port_msg->head_pattern) != PORT_ENUM_HEAD_PATTERN ||
+	    le32_to_cpu(port_msg->tail_pattern) != PORT_ENUM_TAIL_PATTERN) {
+		dev_err(dev, "Port message enumeration invalid %x:%x:%x\n",
+			version, le32_to_cpu(port_msg->head_pattern),
+			le32_to_cpu(port_msg->tail_pattern));
+		return -EFAULT;
+	}
+
+	ports = FIELD_GET(PORT_MSG_PRT_CNT, le32_to_cpu(port_msg->info));
+
+	for (i = 0; i < ports; i++) {
+		struct t7xx_port_static *port_static;
+		u32 *port_info = port_info_base + i;
+		struct t7xx_port *port;
+		unsigned int ch_id;
+		bool en_flag;
+
+		ch_id = FIELD_GET(PORT_INFO_CH_ID, *port_info);
+		port = t7xx_proxy_get_port_by_ch(md->port_prox, ch_id);
+		if (!port) {
+			dev_warn(dev, "Port:%x not found\n", ch_id);
+			continue;
+		}
+
+		en_flag = !!FIELD_GET(PORT_INFO_ENFLG, *port_info);
+
+		if (t7xx_fsm_get_md_state(md->fsm_ctl) == MD_STATE_READY) {
+			port_static = port->port_static;
+
+			if (en_flag) {
+				if (port_static->ops->enable_chl)
+					port_static->ops->enable_chl(port);
+			} else {
+				if (port_static->ops->disable_chl)
+					port_static->ops->disable_chl(port);
+			}
+		} else {
+			port->chan_enable = en_flag;
+		}
+	}
+
+	return 0;
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.h b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
new file mode 100644
index 000000000000..c0d1b9636c12
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#ifndef __T7XX_PORT_PROXY_H__
+#define __T7XX_PORT_PROXY_H__
+
+#include <linux/bits.h>
+#include <linux/device.h>
+#include <linux/skbuff.h>
+#include <linux/types.h>
+
+#include "t7xx_common.h"
+#include "t7xx_modem_ops.h"
+#include "t7xx_port.h"
+
+#define MTK_QUEUES		16
+#define RX_QUEUE_MAXLEN		32
+#define CTRL_QUEUE_MAXLEN	16
+
+#define CLDMA_TXQ_MTU		MTK_SKB_4K
+
+struct port_proxy {
+	int				port_number;
+	struct t7xx_port_static		*ports_shared;
+	struct t7xx_port		*ports_private;
+	struct list_head		rx_ch_ports[PORT_CH_ID_MASK + 1];
+	struct list_head		queue_ports[CLDMA_NUM][MTK_QUEUES];
+	struct device			*dev;
+};
+
+struct port_msg {
+	__le32	head_pattern;
+	__le32	info;
+	__le32	tail_pattern;
+};
+
+#define PORT_INFO_RSRVD		GENMASK(31, 16)
+#define PORT_INFO_ENFLG		GENMASK(15, 15)
+#define PORT_INFO_CH_ID		GENMASK(14, 0)
+
+#define PORT_MSG_VERSION	GENMASK(31, 16)
+#define PORT_MSG_PRT_CNT	GENMASK(15, 0)
+
+#define PORT_ENUM_VER		0
+#define PORT_ENUM_HEAD_PATTERN	0x5a5a5a5a
+#define PORT_ENUM_TAIL_PATTERN	0xa5a5a5a5
+#define PORT_ENUM_VER_MISMATCH	0x00657272
+
+int t7xx_port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb);
+void t7xx_port_proxy_set_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h);
+int t7xx_port_proxy_node_control(struct t7xx_modem *md, struct port_msg *port_msg);
+void t7xx_port_proxy_reset(struct port_proxy *port_prox);
+void t7xx_port_proxy_uninit(struct port_proxy *port_prox);
+int t7xx_port_proxy_init(struct t7xx_modem *md);
+void t7xx_port_proxy_md_status_notify(struct port_proxy *port_prox, unsigned int state);
+
+#endif /* __T7XX_PORT_PROXY_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
index a353eac3e23b..e26a3d6a324f 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -38,6 +38,7 @@
 #include "t7xx_modem_ops.h"
 #include "t7xx_pci.h"
 #include "t7xx_pcie_mac.h"
+#include "t7xx_port_proxy.h"
 #include "t7xx_reg.h"
 #include "t7xx_state_monitor.h"
 
@@ -97,6 +98,9 @@ void t7xx_fsm_broadcast_state(struct t7xx_fsm_ctl *ctl, enum md_state state)
 
 	ctl->md_state = state;
 
+	/* Update to port first, otherwise sending message on HS2 may fail */
+	t7xx_port_proxy_md_status_notify(ctl->md->port_prox, state);
+
 	fsm_state_notify(ctl->md, state);
 }
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next v4 05/13] net: wwan: t7xx: Add control port
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
                   ` (3 preceding siblings ...)
  2022-01-14  1:06 ` [PATCH net-next v4 04/13] net: wwan: t7xx: Add port proxy infrastructure Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-01-27 10:40   ` Ilpo Järvinen
  2022-01-14  1:06 ` [PATCH net-next v4 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports Ricardo Martinez
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

From: Haijun Liu <haijun.liu@mediatek.com>

The control port implements driver control messages such as modem-host
handshaking, controls port enumeration, and handles exception messages.

The handshaking process between the driver and the modem happens during
the init sequence. It exchanges the lists of supported runtime features
so that modem and host agree on which features, including port
enumeration, are available. Further features can be enabled and
controlled as part of this handshake.
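
As a rough illustration of the feature encoding used in this handshake
(not driver code): each byte in the feature_set array carries a 4-bit
version in the high nibble and a 4-bit support mask in the low nibble,
matching the FEATURE_VER/FEATURE_MSK split added by this patch. The
sketch below is a stand-alone user-space program; pack_feature() and the
EX_FEATURE_* names are made up for the example.

#include <stdint.h>
#include <stdio.h>

/* Mirrors FEATURE_VER (bits 7-4) and FEATURE_MSK (bits 3-0) from the patch. */
#define EX_FEATURE_VER_SHIFT	4
#define EX_FEATURE_MSK		0x0f

/* Values follow enum mtk_feature_support_type in the patch. */
enum { FT_DOES_NOT_EXIST, FT_NOT_SUPPORTED, FT_MUST_BE_SUPPORTED };

static uint8_t pack_feature(uint8_t version, uint8_t support)
{
	return (uint8_t)((version << EX_FEATURE_VER_SHIFT) | (support & EX_FEATURE_MSK));
}

int main(void)
{
	uint8_t byte = pack_feature(1, FT_MUST_BE_SUPPORTED);

	printf("feature byte 0x%02x: version %u, support %u\n",
	       byte, byte >> EX_FEATURE_VER_SHIFT, byte & EX_FEATURE_MSK);
	return 0;
}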

Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 drivers/net/wwan/t7xx/Makefile             |   1 +
 drivers/net/wwan/t7xx/t7xx_modem_ops.c     | 252 ++++++++++++++++++++-
 drivers/net/wwan/t7xx/t7xx_modem_ops.h     |   3 +
 drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c | 190 ++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_port_proxy.c    |  74 +++++-
 drivers/net/wwan/t7xx/t7xx_port_proxy.h    |  15 ++
 drivers/net/wwan/t7xx/t7xx_state_monitor.c |   3 +
 drivers/net/wwan/t7xx/t7xx_state_monitor.h |   3 +
 8 files changed, 538 insertions(+), 3 deletions(-)
 create mode 100644 drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index 99f9ca3b4b51..63e1c67b82b5 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -11,3 +11,4 @@ mtk_t7xx-y:=	t7xx_pci.o \
 		t7xx_cldma.o \
 		t7xx_hif_cldma.o  \
 		t7xx_port_proxy.o  \
+		t7xx_port_ctrl_msg.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
index df317714ba06..ec55845e7ac0 100644
--- a/drivers/net/wwan/t7xx/t7xx_modem_ops.c
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
@@ -16,6 +16,8 @@
  */
 
 #include <linux/acpi.h>
+#include <linux/bits.h>
+#include <linux/bitfield.h>
 #include <linux/dev_printk.h>
 #include <linux/device.h>
 #include <linux/delay.h>
@@ -27,6 +29,7 @@
 #include <linux/spinlock.h>
 #include <linux/string.h>
 #include <linux/types.h>
+#include <linux/wait.h>
 #include <linux/workqueue.h>
 
 #include "t7xx_hif_cldma.h"
@@ -39,11 +42,24 @@
 #include "t7xx_reg.h"
 #include "t7xx_state_monitor.h"
 
+#define RT_ID_MD_PORT_ENUM	0
+/* Modem feature query identification code - "ICCC" */
+#define MD_FEATURE_QUERY_ID	0x49434343
+
+#define FEATURE_VER		GENMASK(7, 4)
+#define FEATURE_MSK		GENMASK(3, 0)
+
 #define RGU_RESET_DELAY_MS	10
 #define PORT_RESET_DELAY_MS	2000
 #define EX_HS_TIMEOUT_MS	5000
 #define EX_HS_POLL_DELAY_MS	10
 
+enum mtk_feature_support_type {
+	MTK_FEATURE_DOES_NOT_EXIST,
+	MTK_FEATURE_NOT_SUPPORTED,
+	MTK_FEATURE_MUST_BE_SUPPORTED,
+};
+
 static unsigned int t7xx_get_interrupt_status(struct t7xx_pci_dev *t7xx_dev)
 {
 	return t7xx_mhccif_read_sw_int_sts(t7xx_dev) & D2H_SW_INT_MASK;
@@ -254,16 +270,243 @@ static void t7xx_md_sys_sw_init(struct t7xx_pci_dev *t7xx_dev)
 	t7xx_pcie_register_rgu_isr(t7xx_dev);
 }
 
+struct feature_query {
+	__le32 head_pattern;
+	u8 feature_set[FEATURE_COUNT];
+	__le32 tail_pattern;
+};
+
+static void t7xx_prepare_host_rt_data_query(struct t7xx_sys_info *core)
+{
+	struct t7xx_port_static *port_static = core->ctl_port->port_static;
+	struct ctrl_msg_header *ctrl_msg_h;
+	struct feature_query *ft_query;
+	struct ccci_header *ccci_h;
+	struct sk_buff *skb;
+	size_t packet_size;
+
+	packet_size = sizeof(*ccci_h) + sizeof(*ctrl_msg_h) + sizeof(*ft_query);
+	skb = __dev_alloc_skb(packet_size, GFP_KERNEL);
+	if (!skb)
+		return;
+
+	skb_put(skb, packet_size);
+
+	ccci_h = (struct ccci_header *)skb->data;
+	t7xx_ccci_header_init(ccci_h, 0, packet_size, port_static->tx_ch, 0);
+	ccci_h->status &= cpu_to_le32(~HDR_FLD_SEQ);
+
+	ctrl_msg_h = (struct ctrl_msg_header *)(skb->data + sizeof(*ccci_h));
+	t7xx_ctrl_msg_header_init(ctrl_msg_h, CTL_ID_HS1_MSG, 0, sizeof(*ft_query));
+
+	ft_query = (struct feature_query *)(skb->data + sizeof(*ccci_h) + sizeof(*ctrl_msg_h));
+	ft_query->head_pattern = cpu_to_le32(MD_FEATURE_QUERY_ID);
+	memcpy(ft_query->feature_set, core->feature_set, FEATURE_COUNT);
+	ft_query->tail_pattern = cpu_to_le32(MD_FEATURE_QUERY_ID);
+
+	/* Send HS1 message to device */
+	t7xx_port_proxy_send_skb(core->ctl_port, skb);
+}
+
+static int t7xx_prepare_device_rt_data(struct t7xx_sys_info *core, struct device *dev,
+				       void *data, int data_length)
+{
+	struct t7xx_port_static *port_static = core->ctl_port->port_static;
+	struct mtk_runtime_feature rt_feature;
+	struct ctrl_msg_header *ctrl_msg_h;
+	struct feature_query *md_feature;
+	unsigned int total_data_len;
+	struct ccci_header *ccci_h;
+	size_t packet_size = 0;
+	struct sk_buff *skb;
+	char *rt_data;
+	int i;
+
+	skb = __dev_alloc_skb(MTK_SKB_4K, GFP_KERNEL);
+	if (!skb)
+		return -EFAULT;
+
+	ccci_h = (struct ccci_header *)skb->data;
+	t7xx_ccci_header_init(ccci_h, 0, packet_size, port_static->tx_ch, 0);
+	ccci_h->status &= cpu_to_le32(~HDR_FLD_SEQ);
+	ctrl_msg_h = (struct ctrl_msg_header *)(skb->data + sizeof(*ccci_h));
+	t7xx_ctrl_msg_header_init(ctrl_msg_h, CTL_ID_HS3_MSG, 0, 0);
+	rt_data = skb->data + sizeof(*ccci_h) + sizeof(*ctrl_msg_h);
+
+	/* Parse MD runtime data query */
+	md_feature = data;
+	if (le32_to_cpu(md_feature->head_pattern) != MD_FEATURE_QUERY_ID ||
+	    le32_to_cpu(md_feature->tail_pattern) != MD_FEATURE_QUERY_ID) {
+		dev_err(dev, "Invalid feature pattern: head 0x%x, tail 0x%x\n",
+			le32_to_cpu(md_feature->head_pattern),
+			le32_to_cpu(md_feature->tail_pattern));
+		return -EINVAL;
+	}
+
+	/* Fill runtime feature */
+	for (i = 0; i < FEATURE_COUNT; i++) {
+		u8 md_feature_mask = FIELD_GET(FEATURE_MSK, md_feature->feature_set[i]);
+
+		memset(&rt_feature, 0, sizeof(rt_feature));
+		rt_feature.feature_id = i;
+
+		switch (md_feature_mask) {
+		case MTK_FEATURE_DOES_NOT_EXIST:
+		case MTK_FEATURE_MUST_BE_SUPPORTED:
+			rt_feature.support_info = md_feature->feature_set[i];
+			break;
+
+		default:
+			break;
+		}
+
+		if (FIELD_GET(FEATURE_MSK, rt_feature.support_info) !=
+		    MTK_FEATURE_MUST_BE_SUPPORTED) {
+			memcpy(rt_data, &rt_feature, sizeof(rt_feature));
+			rt_data += sizeof(rt_feature);
+		}
+
+		packet_size += sizeof(struct mtk_runtime_feature);
+	}
+
+	ctrl_msg_h->data_length = cpu_to_le32(packet_size);
+	total_data_len = packet_size + sizeof(*ctrl_msg_h) + sizeof(*ccci_h);
+	ccci_h->packet_len = cpu_to_le32(total_data_len);
+	skb_put(skb, total_data_len);
+
+	/* Send HS3 message to device */
+	t7xx_port_proxy_send_skb(core->ctl_port, skb);
+	return 0;
+}
+
+static int t7xx_parse_host_rt_data(struct t7xx_fsm_ctl *ctl, struct t7xx_sys_info *core,
+				   struct device *dev, void *data, int data_length)
+{
+	enum mtk_feature_support_type ft_spt_st, ft_spt_cfg;
+	struct mtk_runtime_feature *rt_feature;
+	int i, offset;
+
+	offset = sizeof(struct feature_query);
+	for (i = 0; i < FEATURE_COUNT && offset < data_length; i++) {
+		rt_feature = data + offset;
+		ft_spt_st = FIELD_GET(FEATURE_MSK, rt_feature->support_info);
+		offset += sizeof(*rt_feature) + le32_to_cpu(rt_feature->data_len);
+
+		ft_spt_cfg = FIELD_GET(FEATURE_MSK, core->feature_set[i]);
+		if (ft_spt_cfg != MTK_FEATURE_MUST_BE_SUPPORTED)
+			continue;
+
+		if (ft_spt_st != MTK_FEATURE_MUST_BE_SUPPORTED)
+			return -EINVAL;
+
+		if (i == RT_ID_MD_PORT_ENUM) {
+			struct port_msg *p_msg = (void *)rt_feature + sizeof(*rt_feature);
+
+			t7xx_port_proxy_node_control(ctl->md, p_msg);
+		}
+	}
+
+	return 0;
+}
+
+static void t7xx_core_reset(struct t7xx_modem *md)
+{
+	struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
+
+	md->core_md.ready = false;
+
+	if (!ctl) {
+		struct device *dev = &md->t7xx_dev->pdev->dev;
+
+		dev_err(dev, "FSM is not initialized\n");
+		return;
+	}
+
+	if (md->core_md.handshake_ongoing)
+		t7xx_fsm_append_event(ctl, FSM_EVENT_MD_HS2_EXIT, NULL, 0);
+
+	md->core_md.handshake_ongoing = false;
+}
+
+static void t7xx_core_hk_handler(struct t7xx_modem *md, struct t7xx_fsm_ctl *ctl,
+				 enum t7xx_fsm_event_state event_id,
+				 enum t7xx_fsm_event_state err_detect)
+{
+	struct t7xx_sys_info *core_info = &md->core_md;
+	struct device *dev = &md->t7xx_dev->pdev->dev;
+	struct t7xx_fsm_event *event, *event_next;
+	unsigned long flags;
+	void *event_data;
+	int ret;
+
+	t7xx_prepare_host_rt_data_query(core_info);
+
+	while (!kthread_should_stop()) {
+		bool event_received = false;
+
+		spin_lock_irqsave(&ctl->event_lock, flags);
+		list_for_each_entry_safe(event, event_next, &ctl->event_queue, entry) {
+			if (event->event_id == err_detect) {
+				list_del(&event->entry);
+				spin_unlock_irqrestore(&ctl->event_lock, flags);
+				dev_err(dev, "Core handshake error event received\n");
+				goto err_free_event;
+			} else if (event->event_id == event_id) {
+				list_del(&event->entry);
+				event_received = true;
+				break;
+			}
+		}
+
+		spin_unlock_irqrestore(&ctl->event_lock, flags);
+
+		if (event_received)
+			break;
+
+		wait_event_interruptible(ctl->event_wq, !list_empty(&ctl->event_queue) ||
+					 kthread_should_stop());
+		if (kthread_should_stop())
+			goto err_free_event;
+	}
+
+	if (ctl->exp_flg)
+		goto err_free_event;
+
+	event_data = (void *)event + sizeof(*event);
+	ret = t7xx_parse_host_rt_data(ctl, core_info, dev, event_data, event->length);
+	if (ret) {
+		dev_err(dev, "Host failure parsing runtime data: %d\n", ret);
+		goto err_free_event;
+	}
+
+	if (ctl->exp_flg)
+		goto err_free_event;
+
+	ret = t7xx_prepare_device_rt_data(core_info, dev, event_data, event->length);
+	if (ret) {
+		dev_err(dev, "Device failure parsing runtime data: %d", ret);
+		goto err_free_event;
+	}
+
+	core_info->ready = true;
+	core_info->handshake_ongoing = false;
+	wake_up(&ctl->async_hk_wq);
+err_free_event:
+	kfree(event);
+}
+
 static void t7xx_md_hk_wq(struct work_struct *work)
 {
 	struct t7xx_modem *md = container_of(work, struct t7xx_modem, handshake_work);
 	struct t7xx_fsm_ctl *ctl = md->fsm_ctl;
 
+	/* Clear the HS2 EXIT event appended in core_reset() */
+	t7xx_fsm_clr_event(ctl, FSM_EVENT_MD_HS2_EXIT);
 	t7xx_cldma_switch_cfg(md->md_ctrl[ID_CLDMA1]);
 	t7xx_cldma_start(md->md_ctrl[ID_CLDMA1]);
 	t7xx_fsm_broadcast_state(ctl, MD_STATE_WAITING_FOR_HS2);
-	md->core_md.ready = true;
-	wake_up(&ctl->async_hk_wq);
+	md->core_md.handshake_ongoing = true;
+	t7xx_core_hk_handler(md, ctl, FSM_EVENT_MD_HS2, FSM_EVENT_MD_HS2_EXIT);
 }
 
 void t7xx_md_event_notify(struct t7xx_modem *md, enum md_event_id evt_id)
@@ -353,6 +596,7 @@ static struct t7xx_modem *t7xx_md_alloc(struct t7xx_pci_dev *t7xx_dev)
 	md->t7xx_dev = t7xx_dev;
 	t7xx_dev->md = md;
 	md->core_md.ready = false;
+	md->core_md.handshake_ongoing = false;
 	spin_lock_init(&md->exp_lock);
 	md->handshake_wq = alloc_workqueue("%s", WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI,
 					   0, "md_hk_wq");
@@ -360,6 +604,9 @@ static struct t7xx_modem *t7xx_md_alloc(struct t7xx_pci_dev *t7xx_dev)
 		return NULL;
 
 	INIT_WORK(&md->handshake_work, t7xx_md_hk_wq);
+	md->core_md.feature_set[RT_ID_MD_PORT_ENUM] &= ~FEATURE_MSK;
+	md->core_md.feature_set[RT_ID_MD_PORT_ENUM] |=
+		FIELD_PREP(FEATURE_MSK, MTK_FEATURE_MUST_BE_SUPPORTED);
 	return md;
 }
 
@@ -374,6 +621,7 @@ void t7xx_md_reset(struct t7xx_pci_dev *t7xx_dev)
 	t7xx_cldma_reset(md->md_ctrl[ID_CLDMA1]);
 	t7xx_port_proxy_reset(md->port_prox);
 	md->md_init_finish = true;
+	t7xx_core_reset(md);
 }
 
 /**
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.h b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
index 24d2ee5bfbda..842def631b21 100644
--- a/drivers/net/wwan/t7xx/t7xx_modem_ops.h
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.h
@@ -56,6 +56,9 @@ enum md_event_id {
 
 struct t7xx_sys_info {
 	bool				ready;
+	bool				handshake_ongoing;
+	u8				feature_set[FEATURE_COUNT];
+	struct t7xx_port		*ctl_port;
 };
 
 struct t7xx_modem {
diff --git a/drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c b/drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
new file mode 100644
index 000000000000..87c9c0b9c2a7
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_ctrl_msg.c
@@ -0,0 +1,190 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/kthread.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+
+#include "t7xx_common.h"
+#include "t7xx_port.h"
+#include "t7xx_port_proxy.h"
+#include "t7xx_state_monitor.h"
+
+static void fsm_ee_message_handler(struct t7xx_fsm_ctl *ctl, struct sk_buff *skb)
+{
+	struct ctrl_msg_header *ctrl_msg_h = (struct ctrl_msg_header *)skb->data;
+	struct device *dev = &ctl->md->t7xx_dev->pdev->dev;
+	struct port_proxy *port_prox = ctl->md->port_prox;
+	enum md_state md_state;
+
+	md_state = t7xx_fsm_get_md_state(ctl);
+	if (md_state != MD_STATE_EXCEPTION) {
+		dev_err(dev, "Receive invalid MD_EX %x when MD state is %d\n",
+			ctrl_msg_h->ex_msg, md_state);
+		return;
+	}
+
+	switch (le32_to_cpu(ctrl_msg_h->ctrl_msg_id)) {
+	case CTL_ID_MD_EX:
+		if (le32_to_cpu(ctrl_msg_h->ex_msg) != MD_EX_CHK_ID) {
+			dev_err(dev, "Receive invalid MD_EX %x\n", ctrl_msg_h->ex_msg);
+		} else {
+			t7xx_port_proxy_send_msg_to_md(port_prox, PORT_CH_CONTROL_TX, CTL_ID_MD_EX,
+						       MD_EX_CHK_ID);
+			t7xx_fsm_append_event(ctl, FSM_EVENT_MD_EX, NULL, 0);
+		}
+
+		break;
+
+	case CTL_ID_MD_EX_ACK:
+		if (le32_to_cpu(ctrl_msg_h->ex_msg) != MD_EX_CHK_ACK_ID)
+			dev_err(dev, "Receive invalid MD_EX_ACK %x\n", ctrl_msg_h->ex_msg);
+		else
+			t7xx_fsm_append_event(ctl, FSM_EVENT_MD_EX_REC_OK, NULL, 0);
+
+		break;
+
+	case CTL_ID_MD_EX_PASS:
+		t7xx_fsm_append_event(ctl, FSM_EVENT_MD_EX_PASS, NULL, 0);
+		break;
+
+	case CTL_ID_DRV_VER_ERROR:
+		dev_err(dev, "AP/MD driver version mismatch\n");
+	}
+}
+
+static void control_msg_handler(struct t7xx_port *port, struct sk_buff *skb)
+{
+	struct t7xx_port_static *port_static = port->port_static;
+	struct t7xx_fsm_ctl *ctl = port->t7xx_dev->md->fsm_ctl;
+	struct port_proxy *port_prox = ctl->md->port_prox;
+	struct ctrl_msg_header *ctrl_msg_h;
+	int ret = 0;
+
+	skb_pull(skb, sizeof(struct ccci_header));
+
+	ctrl_msg_h = (struct ctrl_msg_header *)skb->data;
+	switch (le32_to_cpu(ctrl_msg_h->ctrl_msg_id)) {
+	case CTL_ID_HS2_MSG:
+		skb_pull(skb, sizeof(*ctrl_msg_h));
+
+		if (port_static->rx_ch == PORT_CH_CONTROL_RX)
+			t7xx_fsm_append_event(ctl, FSM_EVENT_MD_HS2,
+					      skb->data, le32_to_cpu(ctrl_msg_h->data_length));
+
+		dev_kfree_skb_any(skb);
+		break;
+
+	case CTL_ID_MD_EX:
+	case CTL_ID_MD_EX_ACK:
+	case CTL_ID_MD_EX_PASS:
+	case CTL_ID_DRV_VER_ERROR:
+		fsm_ee_message_handler(ctl, skb);
+		dev_kfree_skb_any(skb);
+		break;
+
+	case CTL_ID_PORT_ENUM:
+		skb_pull(skb, sizeof(*ctrl_msg_h));
+		ret = t7xx_port_proxy_node_control(ctl->md, (struct port_msg *)skb->data);
+		if (!ret)
+			t7xx_port_proxy_send_msg_to_md(port_prox, PORT_CH_CONTROL_TX,
+						       CTL_ID_PORT_ENUM, 0);
+		else
+			t7xx_port_proxy_send_msg_to_md(port_prox, PORT_CH_CONTROL_TX,
+						       CTL_ID_PORT_ENUM, PORT_ENUM_VER_MISMATCH);
+
+		break;
+
+	default:
+		dev_err(port->dev, "Unknown control message ID to FSM %x\n",
+			le32_to_cpu(ctrl_msg_h->ctrl_msg_id));
+		break;
+	}
+
+	if (ret)
+		dev_err(port->dev, "%s control message handle error: %d\n", port_static->name,
+			ret);
+}
+
+static int port_ctl_rx_thread(void *arg)
+{
+	while (!kthread_should_stop()) {
+		struct t7xx_port *port = arg;
+		struct sk_buff *skb;
+		unsigned long flags;
+
+		spin_lock_irqsave(&port->rx_wq.lock, flags);
+		if (skb_queue_empty(&port->rx_skb_list) &&
+		    wait_event_interruptible_locked_irq(port->rx_wq,
+							!skb_queue_empty(&port->rx_skb_list) ||
+							kthread_should_stop())) {
+			spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+			continue;
+		}
+
+		if (kthread_should_stop()) {
+			spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+			break;
+		}
+
+		skb = __skb_dequeue(&port->rx_skb_list);
+		spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+		port->skb_handler(port, skb);
+	}
+
+	return 0;
+}
+
+static int port_ctl_init(struct t7xx_port *port)
+{
+	struct t7xx_port_static *port_static = port->port_static;
+
+	port->skb_handler = &control_msg_handler;
+	port->thread = kthread_run(port_ctl_rx_thread, port, "%s", port_static->name);
+	if (IS_ERR(port->thread)) {
+		dev_err(port->dev, "Failed to start port control thread\n");
+		return PTR_ERR(port->thread);
+	}
+
+	port->rx_length_th = CTRL_QUEUE_MAXLEN;
+	return 0;
+}
+
+static void port_ctl_uninit(struct t7xx_port *port)
+{
+	unsigned long flags;
+	struct sk_buff *skb;
+
+	if (port->thread)
+		kthread_stop(port->thread);
+
+	spin_lock_irqsave(&port->rx_wq.lock, flags);
+	while ((skb = __skb_dequeue(&port->rx_skb_list)) != NULL)
+		dev_kfree_skb_any(skb);
+
+	spin_unlock_irqrestore(&port->rx_wq.lock, flags);
+}
+
+struct port_ops ctl_port_ops = {
+	.init = &port_ctl_init,
+	.recv_skb = &t7xx_port_recv_skb,
+	.uninit = &port_ctl_uninit,
+};
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.c b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
index af16cb01c607..ec7bb31fa9ea 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.c
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
@@ -49,7 +49,20 @@
 	     i < (proxy)->port_number;		\
 	     i++, (p) = &(proxy)->ports_private[i])
 
-static struct t7xx_port_static t7xx_md_ports[1];
+static struct t7xx_port_static t7xx_md_ports[] = {
+	{
+		.tx_ch = PORT_CH_CONTROL_TX,
+		.rx_ch = PORT_CH_CONTROL_RX,
+		.txq_index = Q_IDX_CTRL,
+		.rxq_index = Q_IDX_CTRL,
+		.txq_exp_index = 0,
+		.rxq_exp_index = 0,
+		.path_id = ID_CLDMA1,
+		.flags = 0,
+		.ops = &ctl_port_ops,
+		.name = "t7xx_ctrl",
+	},
+};
 
 static struct t7xx_port *t7xx_proxy_get_port_by_ch(struct port_proxy *port_prox, enum port_ch ch)
 {
@@ -275,6 +288,62 @@ static void t7xx_proxy_setup_ch_mapping(struct port_proxy *port_prox)
 	}
 }
 
+void t7xx_ccci_header_init(struct ccci_header *ccci_h, unsigned int pkt_header,
+			   size_t pkt_len, enum port_ch ch, unsigned int ex_msg)
+{
+	ccci_h->packet_header = cpu_to_le32(pkt_header);
+	ccci_h->packet_len = cpu_to_le32(pkt_len);
+	ccci_h->status &= cpu_to_le32(~HDR_FLD_CHN);
+	ccci_h->status |= cpu_to_le32(FIELD_PREP(HDR_FLD_CHN, ch));
+	ccci_h->ex_msg = cpu_to_le32(ex_msg);
+}
+
+void t7xx_ctrl_msg_header_init(struct ctrl_msg_header *ctrl_msg_h, unsigned int msg_id,
+			       unsigned int ex_msg, unsigned int len)
+{
+	ctrl_msg_h->ctrl_msg_id = cpu_to_le32(msg_id);
+	ctrl_msg_h->ex_msg = cpu_to_le32(ex_msg);
+	ctrl_msg_h->data_length = cpu_to_le32(len);
+}
+
+void t7xx_port_proxy_send_msg_to_md(struct port_proxy *port_prox, enum port_ch ch,
+				    unsigned int msg, unsigned int ex_msg)
+{
+	struct ctrl_msg_header *ctrl_msg_h;
+	struct ccci_header *ccci_h;
+	struct t7xx_port *port;
+	struct sk_buff *skb;
+	int ret;
+
+	port = t7xx_proxy_get_port_by_ch(port_prox, ch);
+	if (!port)
+		return;
+
+	skb = __dev_alloc_skb(sizeof(*ccci_h), GFP_KERNEL);
+	if (!skb)
+		return;
+
+	if (ch == PORT_CH_CONTROL_TX) {
+		ccci_h = (struct ccci_header *)(skb->data);
+		t7xx_ccci_header_init(ccci_h, CCCI_HEADER_NO_DATA,
+				      sizeof(*ctrl_msg_h) + CCCI_H_LEN, ch, 0);
+		ctrl_msg_h = (struct ctrl_msg_header *)(skb->data + CCCI_H_LEN);
+		t7xx_ctrl_msg_header_init(ctrl_msg_h, msg, ex_msg, 0);
+		skb_put(skb, CCCI_H_LEN + sizeof(*ctrl_msg_h));
+	} else {
+		ccci_h = skb_put(skb, sizeof(*ccci_h));
+		t7xx_ccci_header_init(ccci_h, CCCI_HEADER_NO_DATA, msg, ch, ex_msg);
+	}
+
+	ret = t7xx_port_proxy_send_skb(port, skb);
+	if (ret) {
+		struct t7xx_port_static *port_static = port->port_static;
+
+		dev_err(port->dev, "port%s send to MD fail\n", port_static->name);
+		dev_kfree_skb_any(skb);
+	}
+}
+
 /**
  * t7xx_port_proxy_dispatch_recv_skb() - Dispatch received skb.
  * @queue: CLDMA queue.
@@ -394,6 +463,9 @@ static void t7xx_proxy_init_all_ports(struct t7xx_modem *md)
 
 		t7xx_port_struct_init(port);
 
+		if (port_static->tx_ch == PORT_CH_CONTROL_TX)
+			md->core_md.ctl_port = port;
+
 		port->t7xx_dev = md->t7xx_dev;
 		port->dev = &md->t7xx_dev->pdev->dev;
 		spin_lock_init(&port->port_update_lock);
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.h b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
index c0d1b9636c12..a6c51e3bb373 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.h
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
@@ -42,6 +42,12 @@ struct port_proxy {
 	struct device			*dev;
 };
 
+struct ctrl_msg_header {
+	__le32	ctrl_msg_id;
+	__le32	ex_msg;
+	__le32	data_length;
+};
+
 struct port_msg {
 	__le32	head_pattern;
 	__le32	info;
@@ -60,12 +66,21 @@ struct port_msg {
 #define PORT_ENUM_TAIL_PATTERN	0xa5a5a5a5
 #define PORT_ENUM_VER_MISMATCH	0x00657272
 
+/* Port operations mapping */
+extern struct port_ops ctl_port_ops;
+
 int t7xx_port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb);
 void t7xx_port_proxy_set_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h);
 int t7xx_port_proxy_node_control(struct t7xx_modem *md, struct port_msg *port_msg);
 void t7xx_port_proxy_reset(struct port_proxy *port_prox);
+void t7xx_port_proxy_send_msg_to_md(struct port_proxy *port_prox, enum port_ch ch,
+				    unsigned int msg, unsigned int ex_msg);
 void t7xx_port_proxy_uninit(struct port_proxy *port_prox);
 int t7xx_port_proxy_init(struct t7xx_modem *md);
 void t7xx_port_proxy_md_status_notify(struct port_proxy *port_prox, unsigned int state);
+void t7xx_ccci_header_init(struct ccci_header *ccci_h, unsigned int pkt_header,
+			   size_t pkt_len, enum port_ch ch, unsigned int ex_msg);
+void t7xx_ctrl_msg_header_init(struct ctrl_msg_header *ctrl_msg_h, unsigned int msg_id,
+			       unsigned int ex_msg, unsigned int len);
 
 #endif /* __T7XX_PORT_PROXY_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
index e26a3d6a324f..73fab28848c6 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -294,6 +294,9 @@ static void fsm_routine_starting(struct t7xx_fsm_ctl *ctl)
 
 	if (!md->core_md.ready) {
 		dev_err(dev, "MD handshake timeout\n");
+		if (md->core_md.handshake_ongoing)
+			t7xx_fsm_append_event(ctl, FSM_EVENT_MD_HS2_EXIT, NULL, 0);
+
 		fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
 	} else {
 		fsm_routine_ready(ctl);
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.h b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
index 39c89b10a3bb..197b02d4e09a 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.h
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
@@ -38,9 +38,12 @@ enum t7xx_fsm_state {
 
 enum t7xx_fsm_event_state {
 	FSM_EVENT_INVALID,
+	FSM_EVENT_MD_HS2,
 	FSM_EVENT_MD_EX,
 	FSM_EVENT_MD_EX_REC_OK,
 	FSM_EVENT_MD_EX_PASS,
+	FSM_EVENT_MD_HS2_EXIT,
+	FSM_EVENT_AP_HS2_EXIT,
 	FSM_EVENT_MAX
 };
 
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next v4 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
                   ` (4 preceding siblings ...)
  2022-01-14  1:06 ` [PATCH net-next v4 05/13] net: wwan: t7xx: Add control port Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-01-27 11:56   ` Ilpo Järvinen
  2022-01-14  1:06 ` [PATCH net-next v4 07/13] net: wwan: t7xx: Data path HW layer Ricardo Martinez
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

From: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>

Add AT and MBIM ports to the port proxy infrastructure.
The initialization method is responsible for creating the corresponding
ports using the WWAN framework infrastructure. The implemented WWAN port
operations are start, stop, and TX.
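
For reference, a minimal sketch (not the driver code) of how a port is
registered with the WWAN framework using the same wwan_create_port() API
and wwan_port_ops callbacks that this patch implements; my_start(),
my_stop(), my_tx() and register_at_port() are placeholder names.

#include <linux/device.h>
#include <linux/skbuff.h>
#include <linux/wwan.h>

static int my_start(struct wwan_port *port)
{
	return 0;	/* Claim the port; a real driver tracks usage here. */
}

static void my_stop(struct wwan_port *port)
{
}

static int my_tx(struct wwan_port *port, struct sk_buff *skb)
{
	/* A real port would prepend a CCCI header and queue the skb to HW. */
	dev_kfree_skb_any(skb);
	return 0;
}

static const struct wwan_port_ops my_ops = {
	.start = my_start,
	.stop = my_stop,
	.tx = my_tx,
};

/* Called from a probe-like path; 'parent' owns the created port. */
static struct wwan_port *register_at_port(struct device *parent, void *drvdata)
{
	return wwan_create_port(parent, WWAN_PORT_AT, &my_ops, drvdata);
}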

Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 drivers/net/wwan/t7xx/Makefile          |   1 +
 drivers/net/wwan/t7xx/t7xx_port_proxy.c |  24 +++
 drivers/net/wwan/t7xx/t7xx_port_proxy.h |   1 +
 drivers/net/wwan/t7xx/t7xx_port_wwan.c  | 225 ++++++++++++++++++++++++
 4 files changed, 251 insertions(+)
 create mode 100644 drivers/net/wwan/t7xx/t7xx_port_wwan.c

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index 63e1c67b82b5..9eec2e2472fb 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -12,3 +12,4 @@ mtk_t7xx-y:=	t7xx_pci.o \
 		t7xx_hif_cldma.o  \
 		t7xx_port_proxy.o  \
 		t7xx_port_ctrl_msg.o \
+		t7xx_port_wwan.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.c b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
index ec7bb31fa9ea..28fb4c24de57 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.c
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.c
@@ -51,6 +51,30 @@
 
 static struct t7xx_port_static t7xx_md_ports[] = {
 	{
+		.tx_ch = PORT_CH_UART2_TX,
+		.rx_ch = PORT_CH_UART2_RX,
+		.txq_index = Q_IDX_AT_CMD,
+		.rxq_index = Q_IDX_AT_CMD,
+		.txq_exp_index = 0xff,
+		.rxq_exp_index = 0xff,
+		.path_id = ID_CLDMA1,
+		.flags = PORT_F_RX_CHAR_NODE,
+		.ops = &wwan_sub_port_ops,
+		.name = "AT",
+		.port_type = WWAN_PORT_AT,
+	}, {
+		.tx_ch = PORT_CH_MBIM_TX,
+		.rx_ch = PORT_CH_MBIM_RX,
+		.txq_index = Q_IDX_MBIM,
+		.rxq_index = Q_IDX_MBIM,
+		.txq_exp_index = 0,
+		.rxq_exp_index = 0,
+		.path_id = ID_CLDMA1,
+		.flags = PORT_F_RX_CHAR_NODE,
+		.ops = &wwan_sub_port_ops,
+		.name = "MBIM",
+		.port_type = WWAN_PORT_MBIM,
+	}, {
 		.tx_ch = PORT_CH_CONTROL_TX,
 		.rx_ch = PORT_CH_CONTROL_RX,
 		.txq_index = Q_IDX_CTRL,
diff --git a/drivers/net/wwan/t7xx/t7xx_port_proxy.h b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
index a6c51e3bb373..4f2d4b2c2658 100644
--- a/drivers/net/wwan/t7xx/t7xx_port_proxy.h
+++ b/drivers/net/wwan/t7xx/t7xx_port_proxy.h
@@ -67,6 +67,7 @@ struct port_msg {
 #define PORT_ENUM_VER_MISMATCH	0x00657272
 
 /* Port operations mapping */
+extern struct port_ops wwan_sub_port_ops;
 extern struct port_ops ctl_port_ops;
 
 int t7xx_port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb);
diff --git a/drivers/net/wwan/t7xx/t7xx_port_wwan.c b/drivers/net/wwan/t7xx/t7xx_port_wwan.c
new file mode 100644
index 000000000000..3398ef8dc21a
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_port_wwan.c
@@ -0,0 +1,225 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#include <linux/atomic.h>
+#include <linux/bitfield.h>
+#include <linux/dev_printk.h>
+#include <linux/err.h>
+#include <linux/gfp.h>
+#include <linux/minmax.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/wwan.h>
+
+#include "t7xx_common.h"
+#include "t7xx_port.h"
+#include "t7xx_port_proxy.h"
+#include "t7xx_state_monitor.h"
+
+static int t7xx_port_ctrl_start(struct wwan_port *port)
+{
+	struct t7xx_port *port_mtk = wwan_port_get_drvdata(port);
+
+	if (atomic_read(&port_mtk->usage_cnt))
+		return -EBUSY;
+
+	atomic_inc(&port_mtk->usage_cnt);
+	return 0;
+}
+
+static void t7xx_port_ctrl_stop(struct wwan_port *port)
+{
+	struct t7xx_port *port_mtk = wwan_port_get_drvdata(port);
+
+	atomic_dec(&port_mtk->usage_cnt);
+}
+
+static int t7xx_port_ctrl_tx(struct wwan_port *port, struct sk_buff *skb)
+{
+	struct t7xx_port *port_private = wwan_port_get_drvdata(port);
+	size_t actual_len, alloc_size, txq_mtu = CLDMA_TXQ_MTU;
+	struct t7xx_port_static *port_static;
+	unsigned int len, i, packets;
+	struct t7xx_fsm_ctl *ctl;
+	enum md_state md_state;
+
+	len = skb->len;
+	if (!len)
+		return -EINVAL;
+
+	port_static = port_private->port_static;
+	ctl = port_private->t7xx_dev->md->fsm_ctl;
+	md_state = t7xx_fsm_get_md_state(ctl);
+	if (md_state == MD_STATE_WAITING_FOR_HS1 || md_state == MD_STATE_WAITING_FOR_HS2) {
+		dev_warn(port_private->dev, "Cannot write to %s port when md_state=%d\n",
+			 port_static->name, md_state);
+		return -ENODEV;
+	}
+
+	alloc_size = min_t(size_t, txq_mtu, len + CCCI_H_ELEN);
+	actual_len = alloc_size - CCCI_H_ELEN;
+	packets = DIV_ROUND_UP(len, txq_mtu - CCCI_H_ELEN);
+
+	for (i = 0; i < packets; i++) {
+		struct ccci_header *ccci_h;
+		struct sk_buff *skb_ccci;
+		int ret;
+
+		if (packets > 1 && packets == i + 1) {
+			actual_len = len % (txq_mtu - CCCI_H_ELEN);
+			alloc_size = actual_len + CCCI_H_ELEN;
+		}
+
+		skb_ccci = __dev_alloc_skb(alloc_size, GFP_KERNEL);
+		if (!skb_ccci)
+			return -ENOMEM;
+
+		ccci_h = skb_put(skb_ccci, CCCI_H_LEN);
+		t7xx_ccci_header_init(ccci_h, 0, actual_len + CCCI_H_LEN, port_static->tx_ch, 0);
+		memcpy(skb_put(skb_ccci, actual_len), skb->data + i * (txq_mtu - CCCI_H_ELEN),
+		       actual_len);
+
+		t7xx_port_proxy_set_seq_num(port_private, ccci_h);
+
+		ret = t7xx_port_send_skb_to_md(port_private, skb_ccci, true);
+		if (ret) {
+			dev_err(port_private->dev, "Write error on %s port, %d\n",
+				port_static->name, ret);
+			dev_kfree_skb_any(skb_ccci);
+			return ret;
+		}
+
+		port_private->seq_nums[MTK_TX]++;
+	}
+
+	kfree_skb(skb);
+	return 0;
+}
+
+static const struct wwan_port_ops wwan_ops = {
+	.start = t7xx_port_ctrl_start,
+	.stop = t7xx_port_ctrl_stop,
+	.tx = t7xx_port_ctrl_tx,
+};
+
+static int t7xx_port_wwan_init(struct t7xx_port *port)
+{
+	struct t7xx_port_static *port_static = port->port_static;
+
+	port->rx_length_th = RX_QUEUE_MAXLEN;
+	port->flags |= PORT_F_RX_ADJUST_HEADER;
+
+	if (port_static->rx_ch == PORT_CH_UART2_RX)
+		port->flags |= PORT_F_RX_CH_TRAFFIC;
+
+	if (port_static->port_type != WWAN_PORT_UNKNOWN) {
+		port->wwan_port = wwan_create_port(port->dev, port_static->port_type,
+						   &wwan_ops, port);
+		if (IS_ERR(port->wwan_port))
+			return PTR_ERR(port->wwan_port);
+	} else {
+		port->wwan_port = NULL;
+	}
+
+	return 0;
+}
+
+static void t7xx_port_wwan_uninit(struct t7xx_port *port)
+{
+	if (port->wwan_port) {
+		if (port->chn_crt_stat) {
+			spin_lock(&port->port_update_lock);
+			port->chn_crt_stat = false;
+			spin_unlock(&port->port_update_lock);
+		}
+
+		wwan_remove_port(port->wwan_port);
+		port->wwan_port = NULL;
+	}
+}
+
+static int t7xx_port_wwan_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
+{
+	struct t7xx_port_static *port_static = port->port_static;
+
+	if (port->flags & PORT_F_RX_CHAR_NODE) {
+		if (!atomic_read(&port->usage_cnt)) {
+			dev_err_ratelimited(port->dev, "Port %s is not opened, drop packets\n",
+					    port_static->name);
+			return -ENETDOWN;
+		}
+	}
+
+	return t7xx_port_recv_skb(port, skb);
+}
+
+static void port_status_update(struct t7xx_port *port)
+{
+	if (port->flags & PORT_F_RX_CHAR_NODE) {
+		if (port->chan_enable) {
+			port->flags &= ~PORT_F_RX_ALLOW_DROP;
+		} else {
+			port->flags |= PORT_F_RX_ALLOW_DROP;
+			spin_lock(&port->port_update_lock);
+			port->chn_crt_stat = false;
+			spin_unlock(&port->port_update_lock);
+		}
+	}
+}
+
+static int t7xx_port_wwan_enable_chl(struct t7xx_port *port)
+{
+	spin_lock(&port->port_update_lock);
+	port->chan_enable = true;
+	spin_unlock(&port->port_update_lock);
+
+	if (port->chn_crt_stat != port->chan_enable)
+		port_status_update(port);
+
+	return 0;
+}
+
+static int t7xx_port_wwan_disable_chl(struct t7xx_port *port)
+{
+	spin_lock(&port->port_update_lock);
+	port->chan_enable = false;
+	spin_unlock(&port->port_update_lock);
+
+	if (port->chn_crt_stat != port->chan_enable)
+		port_status_update(port);
+
+	return 0;
+}
+
+static void t7xx_port_wwan_md_state_notify(struct t7xx_port *port, unsigned int state)
+{
+	if (state == MD_STATE_READY)
+		port_status_update(port);
+}
+
+struct port_ops wwan_sub_port_ops = {
+	.init = &t7xx_port_wwan_init,
+	.recv_skb = &t7xx_port_wwan_recv_skb,
+	.uninit = &t7xx_port_wwan_uninit,
+	.enable_chl = &t7xx_port_wwan_enable_chl,
+	.disable_chl = &t7xx_port_wwan_disable_chl,
+	.md_state_notify = &t7xx_port_wwan_md_state_notify,
+};
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next v4 07/13] net: wwan: t7xx: Data path HW layer
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
                   ` (5 preceding siblings ...)
  2022-01-14  1:06 ` [PATCH net-next v4 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-02-01  9:08   ` Ilpo Järvinen
  2022-01-14  1:06 ` [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface Ricardo Martinez
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

From: Haijun Liu <haijun.liu@mediatek.com>

The Data Path Modem AP Interface (DPMAIF) HW layer provides a HW
abstraction for the upper layer (DPMAIF HIF). It implements the
functions for HW configuration, TX/RX control, and interrupt handling.
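
As background for the register programming style used throughout this
file, here is a minimal sketch (not driver code) of the
set-a-bit-then-poll pattern that functions such as
t7xx_dpmaif_sram_init() follow; MY_REG, MY_BIT, MY_TIMEOUT_US and
set_and_wait_clear() are placeholder names.

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/iopoll.h>
#include <linux/types.h>

#define MY_REG		0x0
#define MY_BIT		BIT(0)
#define MY_TIMEOUT_US	100

static int set_and_wait_clear(void __iomem *base)
{
	u32 val;

	/* Kick the HW operation by setting the control bit. */
	val = ioread32(base + MY_REG);
	iowrite32(val | MY_BIT, base + MY_REG);

	/* Busy-poll until HW clears the bit or the timeout expires. */
	return readx_poll_timeout_atomic(ioread32, base + MY_REG, val,
					 !(val & MY_BIT), 0, MY_TIMEOUT_US);
}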

Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 drivers/net/wwan/t7xx/t7xx_dpmaif.c | 1372 +++++++++++++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_dpmaif.h |  146 +++
 2 files changed, 1518 insertions(+)
 create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_dpmaif.h

diff --git a/drivers/net/wwan/t7xx/t7xx_dpmaif.c b/drivers/net/wwan/t7xx/t7xx_dpmaif.c
new file mode 100644
index 000000000000..e9ecedce70ea
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_dpmaif.c
@@ -0,0 +1,1372 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#include <linux/bits.h>
+#include <linux/bitfield.h>
+#include <linux/bitops.h>
+#include <linux/delay.h>
+#include <linux/dev_printk.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/types.h>
+
+#include "t7xx_dpmaif.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_reg.h"
+
+#define ioread32_poll_timeout_atomic(addr, val, cond, delay_us, timeout_us) \
+	readx_poll_timeout_atomic(ioread32, addr, val, cond, delay_us, timeout_us)
+
+static int t7xx_dpmaif_init_intr(struct dpmaif_hw_info *hw_info)
+{
+	struct dpmaif_isr_en_mask *isr_en_msk = &hw_info->isr_en_mask;
+	u32 value, ul_intr_enable, dl_intr_enable;
+	int ret;
+
+	ul_intr_enable = DP_UL_INT_ERR_MSK | DP_UL_INT_QDONE_MSK;
+	isr_en_msk->ap_ul_l2intr_en_msk = ul_intr_enable;
+	iowrite32(DPMAIF_AP_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+
+	/* Set interrupt enable mask */
+	iowrite32(ul_intr_enable, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMCR0);
+	iowrite32(~ul_intr_enable, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMSR0);
+
+	/* Check mask status */
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0,
+					   value, (value & ul_intr_enable) != ul_intr_enable, 0,
+					   DPMAIF_CHECK_INIT_TIMEOUT_US);
+	if (ret)
+		return ret;
+
+	dl_intr_enable = DP_DL_INT_PITCNT_LEN_ERR | DP_DL_INT_BATCNT_LEN_ERR;
+	isr_en_msk->ap_dl_l2intr_err_en_msk = dl_intr_enable;
+	ul_intr_enable = DPMAIF_DL_INT_DLQ0_QDONE | DPMAIF_DL_INT_DLQ0_PITCNT_LEN |
+		    DPMAIF_DL_INT_DLQ1_QDONE | DPMAIF_DL_INT_DLQ1_PITCNT_LEN;
+	isr_en_msk->ap_ul_l2intr_en_msk = ul_intr_enable;
+	iowrite32(DPMAIF_AP_APDL_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+
+	/* Set DL ISR PD enable mask */
+	iowrite32(~ul_intr_enable, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMR0,
+					   value, (value & ul_intr_enable) != ul_intr_enable, 0,
+					   DPMAIF_CHECK_INIT_TIMEOUT_US);
+	if (ret)
+		return ret;
+
+	isr_en_msk->ap_udl_ip_busy_en_msk = DPMAIF_UDL_IP_BUSY;
+	iowrite32(DPMAIF_AP_IP_BUSY_MASK, hw_info->pcie_base + DPMAIF_AP_IP_BUSY);
+	iowrite32(isr_en_msk->ap_udl_ip_busy_en_msk,
+		  hw_info->pcie_base + DPMAIF_AO_AP_DLUL_IP_BUSY_MASK);
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_AP_L1TIMR0);
+	value |= DPMAIF_DL_INT_Q2APTOP | DPMAIF_DL_INT_Q2TOQ1;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_UL_AP_L1TIMR0);
+	iowrite32(DPMA_HPC_ALL_INT_MASK, hw_info->pcie_base + DPMAIF_HPC_INTR_MASK);
+
+	return 0;
+}
+
+static void t7xx_dpmaif_mask_ulq_intr(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int q_num)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	struct dpmaif_isr_en_mask *isr_en_msk;
+	u32 value, ul_int_que_done;
+	int ret;
+
+	isr_en_msk = &hw_info->isr_en_mask;
+	ul_int_que_done = BIT(q_num + DP_UL_INT_DONE_OFFSET) & DP_UL_INT_QDONE_MSK;
+	isr_en_msk->ap_ul_l2intr_en_msk &= ~ul_int_que_done;
+	iowrite32(ul_int_que_done, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMSR0);
+
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0,
+					   value, (value & ul_int_que_done) == ul_int_que_done, 0,
+					   DPMAIF_CHECK_TIMEOUT_US);
+	if (ret)
+		dev_err(dpmaif_ctrl->dev,
+			"Could not mask the UL interrupt. DPMAIF_AO_UL_AP_L2TIMR0 is 0x%x\n",
+			value);
+}
+
+void t7xx_dpmaif_unmask_ulq_intr(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int q_num)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	struct dpmaif_isr_en_mask *isr_en_msk;
+	u32 value, ul_int_que_done;
+	int ret;
+
+	isr_en_msk = &hw_info->isr_en_mask;
+	ul_int_que_done = BIT(q_num + DP_UL_INT_DONE_OFFSET) & DP_UL_INT_QDONE_MSK;
+	isr_en_msk->ap_ul_l2intr_en_msk |= ul_int_que_done;
+	iowrite32(ul_int_que_done, hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMCR0);
+
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0,
+					   value, (value & ul_int_que_done) != ul_int_que_done, 0,
+					   DPMAIF_CHECK_TIMEOUT_US);
+	if (ret)
+		dev_err(dpmaif_ctrl->dev,
+			"Could not unmask the UL interrupt. DPMAIF_AO_UL_AP_L2TIMR0 is 0x%x\n",
+			value);
+}
+
+void t7xx_dpmaif_dl_unmask_batcnt_len_err_intr(struct dpmaif_hw_info *hw_info)
+{
+	hw_info->isr_en_mask.ap_dl_l2intr_en_msk |= DP_DL_INT_BATCNT_LEN_ERR;
+	iowrite32(DP_DL_INT_BATCNT_LEN_ERR, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+}
+
+void t7xx_dpmaif_dl_unmask_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info)
+{
+	hw_info->isr_en_mask.ap_dl_l2intr_en_msk |= DP_DL_INT_PITCNT_LEN_ERR;
+	iowrite32(DP_DL_INT_PITCNT_LEN_ERR, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+}
+
+static u32 t7xx_update_dlq_intr(struct dpmaif_hw_info *hw_info, u32 q_done)
+{
+	u32 value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0);
+	iowrite32(q_done, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+	return value;
+}
+
+static int t7xx_mask_dlq_intr(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char qno)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	u32 value, q_done;
+	int ret;
+
+	q_done = qno == DPF_RX_QNO0 ? DPMAIF_DL_INT_DLQ0_QDONE : DPMAIF_DL_INT_DLQ1_QDONE;
+	iowrite32(q_done, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+
+	ret = read_poll_timeout_atomic(t7xx_update_dlq_intr, value, value & q_done,
+				       0, DPMAIF_CHECK_TIMEOUT_US, false, hw_info, q_done);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev,
+			"Could not mask the DL interrupt. DPMAIF_AO_UL_AP_L2TIMR0 is 0x%x\n",
+			value);
+		return -ETIMEDOUT;
+	}
+
+	hw_info->isr_en_mask.ap_dl_l2intr_en_msk &= ~q_done;
+	return 0;
+}
+
+void t7xx_dpmaif_dlq_unmask_rx_done(struct dpmaif_hw_info *hw_info, unsigned char qno)
+{
+	u32 mask;
+
+	mask = qno == DPF_RX_QNO0 ? DPMAIF_DL_INT_DLQ0_QDONE : DPMAIF_DL_INT_DLQ1_QDONE;
+	iowrite32(mask, hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+	hw_info->isr_en_mask.ap_dl_l2intr_en_msk |= mask;
+}
+
+void t7xx_dpmaif_clr_ip_busy_sts(struct dpmaif_hw_info *hw_info)
+{
+	u32 ip_busy_sts;
+
+	ip_busy_sts = ioread32(hw_info->pcie_base + DPMAIF_AP_IP_BUSY);
+	iowrite32(ip_busy_sts, hw_info->pcie_base + DPMAIF_AP_IP_BUSY);
+}
+
+static void t7xx_dpmaif_dlq_mask_rx_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info,
+							unsigned char qno)
+{
+	if (qno == DPF_RX_QNO0)
+		iowrite32(DPMAIF_DL_INT_DLQ0_PITCNT_LEN,
+			  hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+	else
+		iowrite32(DPMAIF_DL_INT_DLQ1_PITCNT_LEN,
+			  hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+}
+
+void t7xx_dpmaif_dlq_unmask_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info,
+						unsigned char qno)
+{
+	if (qno == DPF_RX_QNO0)
+		iowrite32(DPMAIF_DL_INT_DLQ0_PITCNT_LEN,
+			  hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+	else
+		iowrite32(DPMAIF_DL_INT_DLQ1_PITCNT_LEN,
+			  hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMCR0);
+}
+
+void t7xx_dpmaif_ul_clr_all_intr(struct dpmaif_hw_info *hw_info)
+{
+	iowrite32(DPMAIF_AP_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+}
+
+void t7xx_dpmaif_dl_clr_all_intr(struct dpmaif_hw_info *hw_info)
+{
+	iowrite32(DPMAIF_AP_APDL_ALL_L2TISAR0_MASK, hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+}
+
+static void t7xx_dpmaif_set_intr_para(struct dpmaif_hw_intr_st_para *para,
+				      enum dpmaif_hw_intr_type intr_type, unsigned int intr_queue)
+{
+	para->intr_types[para->intr_cnt] = intr_type;
+	para->intr_queues[para->intr_cnt] = intr_queue;
+	para->intr_cnt++;
+}
+
+/* The para->intr_cnt counter is set to zero before this function is called.
+ * It does not check for overflow as there is no risk of overflowing intr_types or intr_queues.
+ */
+static void t7xx_dpmaif_hw_check_tx_intr(struct dpmaif_ctrl *dpmaif_ctrl,
+					 unsigned int l2_txisar0,
+					 struct dpmaif_hw_intr_st_para *para)
+{
+	unsigned long value;
+
+	value = FIELD_GET(DP_UL_INT_QDONE_MSK, l2_txisar0);
+	if (value) {
+		unsigned int index;
+
+		t7xx_dpmaif_set_intr_para(para, DPF_INTR_UL_DONE, value);
+
+		for_each_set_bit(index, &value, DPMAIF_TXQ_NUM)
+			t7xx_dpmaif_mask_ulq_intr(dpmaif_ctrl, index);
+	}
+
+	value = FIELD_GET(DP_UL_INT_EMPTY_MSK, l2_txisar0);
+	if (value)
+		t7xx_dpmaif_set_intr_para(para, DPF_INTR_UL_DRB_EMPTY, value);
+
+	value = FIELD_GET(DP_UL_INT_MD_NOTREADY_MSK, l2_txisar0);
+	if (value)
+		t7xx_dpmaif_set_intr_para(para, DPF_INTR_UL_MD_NOTREADY, value);
+
+	value = FIELD_GET(DP_UL_INT_MD_PWR_NOTREADY_MSK, l2_txisar0);
+	if (value)
+		t7xx_dpmaif_set_intr_para(para, DPF_INTR_UL_MD_PWR_NOTREADY, value);
+
+	value = FIELD_GET(DP_UL_INT_ERR_MSK, l2_txisar0);
+	if (value)
+		t7xx_dpmaif_set_intr_para(para, DPF_INTR_UL_LEN_ERR, value);
+}
+
+/* The para->intr_cnt counter is set to zero before this function is called.
+ * It does not check for overflow as there is no risk of overflowing intr_types or intr_queues.
+ */
+static void t7xx_dpmaif_hw_check_rx_intr(struct dpmaif_ctrl *dpmaif_ctrl,
+					 unsigned int *pl2_rxisar0,
+					 struct dpmaif_hw_intr_st_para *para, int qno)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	unsigned int l2_rxisar0 = *pl2_rxisar0;
+	unsigned int value;
+
+	if (qno == DPF_RX_QNO_DFT) {
+		value = l2_rxisar0 & DP_DL_INT_SKB_LEN_ERR;
+		if (value)
+			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_SKB_LEN_ERR, DPF_RX_QNO_DFT);
+
+		value = l2_rxisar0 & DP_DL_INT_BATCNT_LEN_ERR;
+		if (value) {
+			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_BATCNT_LEN_ERR, DPF_RX_QNO_DFT);
+			hw_info->isr_en_mask.ap_dl_l2intr_en_msk &= ~DP_DL_INT_BATCNT_LEN_ERR;
+			iowrite32(DP_DL_INT_BATCNT_LEN_ERR,
+				  hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+		}
+
+		value = l2_rxisar0 & DP_DL_INT_PITCNT_LEN_ERR;
+		if (value) {
+			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_PITCNT_LEN_ERR, DPF_RX_QNO_DFT);
+			hw_info->isr_en_mask.ap_dl_l2intr_en_msk &= ~DP_DL_INT_PITCNT_LEN_ERR;
+			iowrite32(DP_DL_INT_PITCNT_LEN_ERR,
+				  hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
+		}
+
+		value = l2_rxisar0 & DP_DL_INT_PKT_EMPTY_MSK;
+		if (value)
+			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_PKT_EMPTY_SET, DPF_RX_QNO_DFT);
+
+		value = l2_rxisar0 & DP_DL_INT_FRG_EMPTY_MSK;
+		if (value)
+			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_FRG_EMPTY_SET, DPF_RX_QNO_DFT);
+
+		value = l2_rxisar0 & DP_DL_INT_MTU_ERR_MSK;
+		if (value)
+			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_MTU_ERR, DPF_RX_QNO_DFT);
+
+		value = l2_rxisar0 & DP_DL_INT_FRG_LENERR_MSK;
+		if (value)
+			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_FRGCNT_LEN_ERR, DPF_RX_QNO_DFT);
+
+		value = l2_rxisar0 & DP_DL_INT_Q0_PITCNT_LEN_ERR;
+		if (value) {
+			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q0_PITCNT_LEN_ERR, BIT(qno));
+			t7xx_dpmaif_dlq_mask_rx_pitcnt_len_err_intr(hw_info, qno);
+		}
+
+		value = l2_rxisar0 & DP_DL_INT_HPC_ENT_TYPE_ERR;
+		if (value)
+			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_HPC_ENT_TYPE_ERR,
+						  DPF_RX_QNO_DFT);
+
+		value = l2_rxisar0 & DP_DL_INT_Q0_DONE;
+		if (value) {
+			/* Mask RX done interrupt immediately after it occurs */
+			if (!t7xx_mask_dlq_intr(dpmaif_ctrl, qno)) {
+				t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q0_DONE, BIT(qno));
+			} else {
+			/* Unable to clear the interrupt; the device may have entered
+			 * low power mode or hit an exception. Try again on the next one.
+			 */
+				*pl2_rxisar0 = l2_rxisar0 & ~DP_DL_INT_Q0_DONE;
+			}
+		}
+	} else {
+		value = l2_rxisar0 & DP_DL_INT_Q1_PITCNT_LEN_ERR;
+		if (value) {
+			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q1_PITCNT_LEN_ERR, BIT(qno));
+			t7xx_dpmaif_dlq_mask_rx_pitcnt_len_err_intr(hw_info, qno);
+		}
+
+		value = l2_rxisar0 & DP_DL_INT_Q1_DONE;
+		if (value) {
+			if (!t7xx_mask_dlq_intr(dpmaif_ctrl, qno))
+				t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q1_DONE, BIT(qno));
+			else
+				*pl2_rxisar0 = l2_rxisar0 & ~DP_DL_INT_Q1_DONE;
+		}
+	}
+}
+
+/**
+ * t7xx_dpmaif_hw_get_intr_cnt() - Reads interrupt status and count from HW.
+ * @dpmaif_ctrl: Pointer to struct dpmaif_ctrl.
+ * @para: Pointer to struct dpmaif_hw_intr_st_para.
+ * @qno: Queue number.
+ *
+ * Reads RX/TX interrupt status from HW and clears UL/DL status as needed.
+ *
+ * Return: Interrupt count.
+ */
+int t7xx_dpmaif_hw_get_intr_cnt(struct dpmaif_ctrl *dpmaif_ctrl,
+				struct dpmaif_hw_intr_st_para *para, int qno)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	u32 rx_intr_status, tx_intr_status = 0;
+	u32 rx_intr_qdone, tx_intr_qdone = 0;
+
+	rx_intr_status = ioread32(hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+	rx_intr_qdone = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMR0);
+
+	/* TX interrupt status */
+	if (qno == DPF_RX_QNO_DFT) {
+		/* All ULQ and DLQ0 interrupts use the same source; there is no need to
+		 * check ULQ interrupts when a DLQ1 interrupt has occurred.
+		 */
+		tx_intr_status = ioread32(hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+		tx_intr_qdone = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_AP_L2TIMR0);
+	}
+
+	t7xx_dpmaif_clr_ip_busy_sts(hw_info);
+
+	if (qno == DPF_RX_QNO_DFT) {
+		/* Do not schedule bottom half again or clear UL interrupt status when we
+		 * have already masked it.
+		 */
+		tx_intr_status &= ~tx_intr_qdone;
+		if (tx_intr_status) {
+			t7xx_dpmaif_hw_check_tx_intr(dpmaif_ctrl, tx_intr_status, para);
+			/* Clear interrupt status */
+			iowrite32(tx_intr_status, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+		}
+	}
+
+	if (rx_intr_status) {
+		if (qno == DPF_RX_QNO0) {
+			rx_intr_status &= DP_DL_Q0_STATUS_MASK;
+			if (rx_intr_qdone & DPMAIF_DL_INT_DLQ0_QDONE)
+				/* Do not schedule bottom half again or clear DL
+				 * queue done interrupt status when we have already masked it.
+				 */
+				rx_intr_status &= ~DP_DL_INT_Q0_DONE;
+		} else {
+			rx_intr_status &= DP_DL_Q1_STATUS_MASK;
+			if (rx_intr_qdone & DPMAIF_DL_INT_DLQ1_QDONE)
+				rx_intr_status &= ~DP_DL_INT_Q1_DONE;
+		}
+
+		if (rx_intr_status) {
+			t7xx_dpmaif_hw_check_rx_intr(dpmaif_ctrl, &rx_intr_status, para, qno);
+			rx_intr_status |= DP_DL_INT_BATCNT_LEN_ERR;
+			/* Clear interrupt status */
+			iowrite32(rx_intr_status, hw_info->pcie_base + DPMAIF_AP_APDL_L2TISAR0);
+		}
+	}
+
+	return para->intr_cnt;
+}
+
+static int t7xx_dpmaif_sram_init(struct dpmaif_hw_info *hw_info)
+{
+	u32 value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AP_MEM_CLR);
+	value |= DPMAIF_MEM_CLR;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AP_MEM_CLR);
+
+	return ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AP_MEM_CLR,
+					    value, !(value & DPMAIF_MEM_CLR), 0,
+					    DPMAIF_CHECK_INIT_TIMEOUT_US);
+}
+
+static void t7xx_dpmaif_hw_reset(struct dpmaif_hw_info *hw_info)
+{
+	iowrite32(DPMAIF_AP_AO_RST_BIT, hw_info->pcie_base + DPMAIF_AP_AO_RGU_ASSERT);
+	udelay(2);
+	iowrite32(DPMAIF_AP_RST_BIT, hw_info->pcie_base + DPMAIF_AP_RGU_ASSERT);
+	udelay(2);
+	iowrite32(DPMAIF_AP_AO_RST_BIT, hw_info->pcie_base + DPMAIF_AP_AO_RGU_DEASSERT);
+	udelay(2);
+	iowrite32(DPMAIF_AP_RST_BIT, hw_info->pcie_base + DPMAIF_AP_RGU_DEASSERT);
+	udelay(2);
+}
+
+static int t7xx_dpmaif_hw_config(struct dpmaif_hw_info *hw_info)
+{
+	u32 ap_port_mode;
+	int ret;
+
+	t7xx_dpmaif_hw_reset(hw_info);
+
+	ret = t7xx_dpmaif_sram_init(hw_info);
+	if (ret)
+		return ret;
+
+	ap_port_mode = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+	ap_port_mode |= DPMAIF_PORT_MODE_PCIE;
+	iowrite32(ap_port_mode, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+	iowrite32(DPMAIF_CG_EN, hw_info->pcie_base + DPMAIF_AP_CG_EN);
+	return 0;
+}
+
+static void t7xx_dpmaif_pcie_dpmaif_sign(struct dpmaif_hw_info *hw_info)
+{
+	iowrite32(DPMAIF_PCIE_MODE_SET_VALUE, hw_info->pcie_base + DPMAIF_UL_RESERVE_AO_RW);
+}
+
+static void t7xx_dpmaif_dl_performance(struct dpmaif_hw_info *hw_info)
+{
+	u32 enable_bat_cache, enable_pit_burst;
+
+	enable_bat_cache = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+	enable_bat_cache |= DPMAIF_DL_BAT_CACHE_PRI;
+	iowrite32(enable_bat_cache, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+
+	enable_pit_burst = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+	enable_pit_burst |= DPMAIF_DL_BURST_PIT_EN;
+	iowrite32(enable_pit_burst, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+ /* DPMAIF DL DLQ part HW setting */
+
+static void t7xx_dpmaif_hw_hpc_cntl_set(struct dpmaif_hw_info *hw_info)
+{
+	unsigned int value;
+
+	value = DPMAIF_HPC_DLQ_PATH_MODE | DPMAIF_HPC_ADD_MODE_DF << 2;
+	value |= DPMAIF_HASH_PRIME_DF << 4;
+	value |= DPMAIF_HPC_TOTAL_NUM << 8;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_HPC_CNTL);
+}
+
+static void t7xx_dpmaif_hw_agg_cfg_set(struct dpmaif_hw_info *hw_info)
+{
+	unsigned int value;
+
+	value = DPMAIF_AGG_MAX_LEN_DF | DPMAIF_AGG_TBL_ENT_NUM_DF << 16;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_DLQ_AGG_CFG);
+}
+
+static void t7xx_dpmaif_hw_hash_bit_choose_set(struct dpmaif_hw_info *hw_info)
+{
+	iowrite32(DPMAIF_DLQ_HASH_BIT_CHOOSE_DF,
+		  hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_INIT_CON5);
+}
+
+static void t7xx_dpmaif_hw_mid_pit_timeout_thres_set(struct dpmaif_hw_info *hw_info)
+{
+	iowrite32(DPMAIF_MID_TIMEOUT_THRES_DF, hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_TIMEOUT0);
+}
+
+static void t7xx_dpmaif_hw_dlq_timeout_thres_set(struct dpmaif_hw_info *hw_info)
+{
+	unsigned int value, i;
+
+	/* Each register holds two DLQ threshold timeout values */
+	for (i = 0; i < DPMAIF_HPC_MAX_TOTAL_NUM / 2; i++) {
+		value = FIELD_PREP(DPMAIF_DLQ_LOW_TIMEOUT_THRES_MKS, DPMAIF_DLQ_TIMEOUT_THRES_DF);
+		value |= FIELD_PREP(DPMAIF_DLQ_HIGH_TIMEOUT_THRES_MSK,
+				    DPMAIF_DLQ_TIMEOUT_THRES_DF);
+		iowrite32(value,
+			  hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_TIMEOUT1 + sizeof(u32) * i);
+	}
+}
+
+static void t7xx_dpmaif_hw_dlq_start_prs_thres_set(struct dpmaif_hw_info *hw_info)
+{
+	iowrite32(DPMAIF_DLQ_PRS_THRES_DF, hw_info->pcie_base + DPMAIF_AO_DL_DLQPIT_TRIG_THRES);
+}
+
+static void t7xx_dpmaif_dl_dlq_hpc_hw_init(struct dpmaif_hw_info *hw_info)
+{
+	t7xx_dpmaif_hw_hpc_cntl_set(hw_info);
+	t7xx_dpmaif_hw_agg_cfg_set(hw_info);
+	t7xx_dpmaif_hw_hash_bit_choose_set(hw_info);
+	t7xx_dpmaif_hw_mid_pit_timeout_thres_set(hw_info);
+	t7xx_dpmaif_hw_dlq_timeout_thres_set(hw_info);
+	t7xx_dpmaif_hw_dlq_start_prs_thres_set(hw_info);
+}
+
+static int t7xx_dpmaif_dl_bat_init_done(struct dpmaif_ctrl *dpmaif_ctrl,
+					unsigned char q_num, bool frg_en)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	u32 value, dl_bat_init = 0;
+	int ret;
+
+	if (frg_en)
+		dl_bat_init = DPMAIF_DL_BAT_FRG_INIT;
+
+	dl_bat_init |= DPMAIF_DL_BAT_INIT_ALLSET;
+	dl_bat_init |= DPMAIF_DL_BAT_INIT_EN;
+
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+					   value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY), 0,
+					   DPMAIF_CHECK_INIT_TIMEOUT_US);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "Data plane modem DL BAT is not ready\n");
+		return ret;
+	}
+
+	iowrite32(dl_bat_init, hw_info->pcie_base + DPMAIF_DL_BAT_INIT);
+
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+					   value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY), 0,
+					   DPMAIF_CHECK_INIT_TIMEOUT_US);
+	if (ret)
+		dev_err(dpmaif_ctrl->dev, "Data plane modem DL BAT initialization failed\n");
+
+	return ret;
+}
+
+static void t7xx_dpmaif_dl_set_bat_base_addr(struct dpmaif_hw_info *hw_info,
+					     unsigned char q_num, dma_addr_t addr)
+{
+	iowrite32(lower_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON0);
+	iowrite32(upper_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON3);
+}
+
+static void t7xx_dpmaif_dl_set_bat_size(struct dpmaif_hw_info *hw_info,
+					unsigned char q_num, unsigned int size)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+	value &= ~DPMAIF_BAT_SIZE_MSK;
+	value |= size & DPMAIF_BAT_SIZE_MSK;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+}
+
+static void t7xx_dpmaif_dl_bat_en(struct dpmaif_hw_info *hw_info, unsigned char q_num, bool enable)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+
+	if (enable)
+		value |= DPMAIF_BAT_EN_MSK;
+	else
+		value &= ~DPMAIF_BAT_EN_MSK;
+
+	iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+}
+
+static void t7xx_dpmaif_dl_set_ao_bid_maxcnt(struct dpmaif_hw_info *hw_info,
+					     unsigned char q_num, unsigned int cnt)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+	value &= ~DPMAIF_BAT_BID_MAXCNT_MSK;
+	value |= FIELD_PREP(DPMAIF_BAT_BID_MAXCNT_MSK, cnt);
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+}
+
+static void t7xx_dpmaif_dl_set_ao_mtu(struct dpmaif_hw_info *hw_info, unsigned int mtu_sz)
+{
+	iowrite32(mtu_sz, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON1);
+}
+
+static void t7xx_dpmaif_dl_set_ao_pit_chknum(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+					     unsigned int number)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+	value &= ~DPMAIF_PIT_CHK_NUM_MSK;
+	value |= FIELD_PREP(DPMAIF_PIT_CHK_NUM_MSK, number);
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+}
+
+static void t7xx_dpmaif_dl_set_ao_remain_minsz(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+					       size_t min_sz)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+	value &= ~DPMAIF_BAT_REMAIN_MINSZ_MSK;
+	value |= FIELD_PREP(DPMAIF_BAT_REMAIN_MINSZ_MSK, min_sz / DPMAIF_BAT_REMAIN_SZ_BASE);
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON0);
+}
+
+static void t7xx_dpmaif_dl_set_ao_bat_bufsz(struct dpmaif_hw_info *hw_info,
+					    unsigned char q_num, size_t buf_sz)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+	value &= ~DPMAIF_BAT_BUF_SZ_MSK;
+	value |= FIELD_PREP(DPMAIF_BAT_BUF_SZ_MSK, buf_sz / DPMAIF_BAT_BUFFER_SZ_BASE);
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+}
+
+static void t7xx_dpmaif_dl_set_ao_bat_rsv_length(struct dpmaif_hw_info *hw_info,
+						 unsigned char q_num, unsigned int length)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+	value &= ~DPMAIF_BAT_RSV_LEN_MSK;
+	value |= length & DPMAIF_BAT_RSV_LEN_MSK;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PKTINFO_CON2);
+}
+
+static void t7xx_dpmaif_dl_set_pkt_alignment(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+					     bool enable, unsigned int mode)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+	value &= ~DPMAIF_PKT_ALIGN_MSK;
+
+	if (enable) {
+		value |= DPMAIF_PKT_ALIGN_EN;
+		value |= FIELD_PREP(DPMAIF_PKT_ALIGN_MSK, mode);
+	}
+
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+static void t7xx_dpmaif_dl_set_pkt_checksum(struct dpmaif_hw_info *hw_info)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+	value |= DPMAIF_DL_PKT_CHECKSUM_EN;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+static void t7xx_dpmaif_dl_set_ao_frg_check_thres(struct dpmaif_hw_info *hw_info,
+						  unsigned char q_num, unsigned int size)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+	value &= ~DPMAIF_FRG_CHECK_THRES_MSK;
+	value |= (size & DPMAIF_FRG_CHECK_THRES_MSK);
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+}
+
+static void t7xx_dpmaif_dl_set_ao_frg_bufsz(struct dpmaif_hw_info *hw_info,
+					    unsigned char q_num, unsigned int buf_sz)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+	value &= ~DPMAIF_FRG_BUF_SZ_MSK;
+	value |= FIELD_PREP(DPMAIF_FRG_BUF_SZ_MSK, buf_sz / DPMAIF_FRG_BUFFER_SZ_BASE);
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+}
+
+static void t7xx_dpmaif_dl_frg_ao_en(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+				     bool enable)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+
+	if (enable)
+		value |= DPMAIF_FRG_EN_MSK;
+	else
+		value &= ~DPMAIF_FRG_EN_MSK;
+
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_FRG_THRES);
+}
+
+static void t7xx_dpmaif_dl_set_ao_bat_check_thres(struct dpmaif_hw_info *hw_info,
+						  unsigned char q_num, unsigned int size)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+	value &= ~DPMAIF_BAT_CHECK_THRES_MSK;
+	value |= FIELD_PREP(DPMAIF_BAT_CHECK_THRES_MSK, size);
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_RDY_CHK_THRES);
+}
+
+static void t7xx_dpmaif_dl_set_pit_seqnum(struct dpmaif_hw_info *hw_info,
+					  unsigned char q_num, unsigned int seq)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PIT_SEQ_END);
+	value &= ~DPMAIF_DL_PIT_SEQ_MSK;
+	value |= seq & DPMAIF_DL_PIT_SEQ_MSK;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_DL_PIT_SEQ_END);
+}
+
+static void t7xx_dpmaif_dl_set_dlq_pit_base_addr(struct dpmaif_hw_info *hw_info,
+						 unsigned char q_num, dma_addr_t addr)
+{
+	iowrite32(lower_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON0);
+	iowrite32(upper_32_bits(addr), hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON4);
+}
+
+static void t7xx_dpmaif_dl_set_dlq_pit_size(struct dpmaif_hw_info *hw_info,
+					    unsigned char q_num, unsigned int size)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON1);
+	value &= ~DPMAIF_PIT_SIZE_MSK;
+	value |= size & DPMAIF_PIT_SIZE_MSK;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON1);
+	iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON2);
+	iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON3);
+	iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON5);
+	iowrite32(0, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON6);
+}
+
+static void t7xx_dpmaif_dl_dlq_pit_en(struct dpmaif_hw_info *hw_info, unsigned char q_num,
+				      bool enable)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON3);
+
+	if (enable)
+		value |= DPMAIF_DLQPIT_EN_MSK;
+	else
+		value &= ~DPMAIF_DLQPIT_EN_MSK;
+
+	iowrite32(value, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON3);
+}
+
+static void t7xx_dpmaif_dl_dlq_pit_init_done(struct dpmaif_ctrl *dpmaif_ctrl,
+					     unsigned char q_num, unsigned int pit_idx)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	unsigned int dl_pit_init;
+	int timeout;
+	u32 value;
+
+	dl_pit_init = DPMAIF_DL_PIT_INIT_ALLSET;
+	dl_pit_init |= (pit_idx << DPMAIF_DLQPIT_CHAN_OFS);
+	dl_pit_init |= DPMAIF_DL_PIT_INIT_EN;
+
+	timeout = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT,
+					       value, !(value & DPMAIF_DL_PIT_INIT_NOT_READY),
+					       DPMAIF_CHECK_DELAY_US,
+					       DPMAIF_CHECK_INIT_TIMEOUT_US);
+	if (timeout) {
+		dev_err(dpmaif_ctrl->dev, "Data plane modem DL PIT is not ready\n");
+		return;
+	}
+
+	iowrite32(dl_pit_init, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT);
+	timeout = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT,
+					       value, !(value & DPMAIF_DL_PIT_INIT_NOT_READY),
+					       DPMAIF_CHECK_DELAY_US,
+					       DPMAIF_CHECK_INIT_TIMEOUT_US);
+	if (timeout)
+		dev_err(dpmaif_ctrl->dev, "Data plane modem DL PIT initialization failed\n");
+}
+
+static void t7xx_dpmaif_config_dlq_pit_hw(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+					  struct dpmaif_dl *dl_que)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	unsigned int pit_idx = q_num;
+
+	t7xx_dpmaif_dl_set_dlq_pit_base_addr(hw_info, q_num, dl_que->pit_base);
+	t7xx_dpmaif_dl_set_dlq_pit_size(hw_info, q_num, dl_que->pit_size_cnt);
+	t7xx_dpmaif_dl_dlq_pit_en(hw_info, q_num, true);
+	t7xx_dpmaif_dl_dlq_pit_init_done(dpmaif_ctrl, q_num, pit_idx);
+}
+
+static void t7xx_dpmaif_config_all_dlq_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	int i;
+
+	for (i = 0; i < DPMAIF_RXQ_NUM; i++)
+		t7xx_dpmaif_config_dlq_pit_hw(dpmaif_ctrl, i, &hw_info->dl_que[i]);
+}
+
+static void t7xx_dpmaif_dl_all_q_en(struct dpmaif_ctrl *dpmaif_ctrl, bool enable)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	u32 dl_bat_init, value;
+	int timeout;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+
+	if (enable)
+		value |= DPMAIF_BAT_EN_MSK;
+	else
+		value &= ~DPMAIF_BAT_EN_MSK;
+
+	iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_INIT_CON1);
+	dl_bat_init = DPMAIF_DL_BAT_INIT_ONLY_ENABLE_BIT;
+	dl_bat_init |= DPMAIF_DL_BAT_INIT_EN;
+
+	timeout = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+					       value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY), 0,
+					       DPMAIF_CHECK_TIMEOUT_US);
+	if (timeout)
+		dev_err(dpmaif_ctrl->dev, "Timeout updating BAT setting to HW\n");
+
+	iowrite32(dl_bat_init, hw_info->pcie_base + DPMAIF_DL_BAT_INIT);
+	timeout = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_BAT_INIT,
+					       value, !(value & DPMAIF_DL_BAT_INIT_NOT_READY), 0,
+					       DPMAIF_CHECK_TIMEOUT_US);
+	if (timeout)
+		dev_err(dpmaif_ctrl->dev, "Data plane modem DL BAT is not ready\n");
+}
+
+static int t7xx_dpmaif_config_dlq_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	struct dpmaif_dl_hwq *dl_hw;
+	struct dpmaif_dl *dl_que;
+	unsigned int queue = 0; /* All queues share one BAT/frag BAT table */
+	int ret;
+
+	t7xx_dpmaif_dl_dlq_hpc_hw_init(hw_info);
+	dl_hw = &hw_info->dl_que_hw[queue];
+
+	dl_que = &hw_info->dl_que[queue];
+	if (!dl_que->que_started)
+		return -EBUSY;
+
+	t7xx_dpmaif_dl_set_ao_remain_minsz(hw_info, queue, dl_hw->bat_remain_size);
+	t7xx_dpmaif_dl_set_ao_bat_bufsz(hw_info, queue, dl_hw->bat_pkt_bufsz);
+	t7xx_dpmaif_dl_set_ao_frg_bufsz(hw_info, queue, dl_hw->frg_pkt_bufsz);
+	t7xx_dpmaif_dl_set_ao_bat_rsv_length(hw_info, queue, dl_hw->bat_rsv_length);
+	t7xx_dpmaif_dl_set_ao_bid_maxcnt(hw_info, queue, dl_hw->pkt_bid_max_cnt);
+
+	if (dl_hw->pkt_alignment == 64)
+		t7xx_dpmaif_dl_set_pkt_alignment(hw_info, queue, true, DPMAIF_PKT_ALIGN64_MODE);
+	else if (dl_hw->pkt_alignment == 128)
+		t7xx_dpmaif_dl_set_pkt_alignment(hw_info, queue, true, DPMAIF_PKT_ALIGN128_MODE);
+	else
+		t7xx_dpmaif_dl_set_pkt_alignment(hw_info, queue, false, 0);
+
+	t7xx_dpmaif_dl_set_pit_seqnum(hw_info, queue, DPMAIF_DL_PIT_SEQ_VALUE);
+	t7xx_dpmaif_dl_set_ao_mtu(hw_info, dl_hw->mtu_size);
+	t7xx_dpmaif_dl_set_ao_pit_chknum(hw_info, queue, dl_hw->chk_pit_num);
+	t7xx_dpmaif_dl_set_ao_bat_check_thres(hw_info, queue, dl_hw->chk_bat_num);
+	t7xx_dpmaif_dl_set_ao_frg_check_thres(hw_info, queue, dl_hw->chk_frg_num);
+	t7xx_dpmaif_dl_frg_ao_en(hw_info, queue, true);
+
+	t7xx_dpmaif_dl_set_bat_base_addr(hw_info, queue, dl_que->frg_base);
+	t7xx_dpmaif_dl_set_bat_size(hw_info, queue, dl_que->frg_size_cnt);
+	t7xx_dpmaif_dl_bat_en(hw_info, queue, true);
+
+	ret = t7xx_dpmaif_dl_bat_init_done(dpmaif_ctrl, queue, true);
+	if (ret)
+		return ret;
+
+	t7xx_dpmaif_dl_set_bat_base_addr(hw_info, queue, dl_que->bat_base);
+	t7xx_dpmaif_dl_set_bat_size(hw_info, queue, dl_que->bat_size_cnt);
+	t7xx_dpmaif_dl_bat_en(hw_info, queue, false);
+
+	ret = t7xx_dpmaif_dl_bat_init_done(dpmaif_ctrl, queue, false);
+	if (ret)
+		return ret;
+
+	/* Init PIT (two PIT tables) */
+	t7xx_dpmaif_config_all_dlq_hw(dpmaif_ctrl);
+	t7xx_dpmaif_dl_all_q_en(dpmaif_ctrl, true);
+	t7xx_dpmaif_dl_set_pkt_checksum(hw_info);
+	return 0;
+}
+
+static void t7xx_dpmaif_ul_update_drb_size(struct dpmaif_hw_info *hw_info,
+					   unsigned char q_num, unsigned int size)
+{
+	unsigned int value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_UL_DRBSIZE_ADDRH_n(q_num));
+	value &= ~DPMAIF_DRB_SIZE_MSK;
+	value |= size & DPMAIF_DRB_SIZE_MSK;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_UL_DRBSIZE_ADDRH_n(q_num));
+}
+
+static void t7xx_dpmaif_ul_update_drb_base_addr(struct dpmaif_hw_info *hw_info,
+						unsigned char q_num, dma_addr_t addr)
+{
+	iowrite32(lower_32_bits(addr), hw_info->pcie_base + DPMAIF_ULQSAR_n(q_num));
+	iowrite32(upper_32_bits(addr), hw_info->pcie_base + DPMAIF_UL_DRB_ADDRH_n(q_num));
+}
+
+static void t7xx_dpmaif_ul_rdy_en(struct dpmaif_hw_info *hw_info,
+				  unsigned char q_num, bool ready)
+{
+	u32 value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+
+	if (ready)
+		value |= BIT(q_num);
+	else
+		value &= ~BIT(q_num);
+
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+}
+
+static void t7xx_dpmaif_ul_arb_en(struct dpmaif_hw_info *hw_info,
+				  unsigned char q_num, bool enable)
+{
+	u32 value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+
+	if (enable)
+		value |= BIT(q_num + 8);
+	else
+		value &= ~BIT(q_num + 8);
+
+	iowrite32(value, hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+}
+
+static void t7xx_dpmaif_config_ulq_hw(struct dpmaif_hw_info *hw_info)
+{
+	struct dpmaif_ul *ul_que;
+	int i;
+
+	for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+		ul_que = &hw_info->ul_que[i];
+		if (ul_que->que_started) {
+			t7xx_dpmaif_ul_update_drb_size(hw_info, i, ul_que->drb_size_cnt *
+						       DPMAIF_UL_DRB_ENTRY_WORD);
+			t7xx_dpmaif_ul_update_drb_base_addr(hw_info, i, ul_que->drb_base);
+			t7xx_dpmaif_ul_rdy_en(hw_info, i, true);
+			t7xx_dpmaif_ul_arb_en(hw_info, i, true);
+		} else {
+			t7xx_dpmaif_ul_arb_en(hw_info, i, false);
+		}
+	}
+}
+
+static int t7xx_dpmaif_hw_init_done(struct dpmaif_hw_info *hw_info)
+{
+	u32 ap_cfg;
+	int ret;
+
+	ap_cfg = ioread32(hw_info->pcie_base + DPMAIF_AP_OVERWRITE_CFG);
+	ap_cfg |= DPMAIF_SRAM_SYNC;
+	iowrite32(ap_cfg, hw_info->pcie_base + DPMAIF_AP_OVERWRITE_CFG);
+
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_AP_OVERWRITE_CFG,
+					   ap_cfg, !(ap_cfg & DPMAIF_SRAM_SYNC), 0,
+					   DPMAIF_CHECK_TIMEOUT_US);
+	if (ret)
+		return ret;
+
+	iowrite32(DPMAIF_UL_INIT_DONE, hw_info->pcie_base + DPMAIF_AO_UL_INIT_SET);
+	iowrite32(DPMAIF_DL_INIT_DONE, hw_info->pcie_base + DPMAIF_AO_DL_INIT_SET);
+	return 0;
+}
+
+static bool t7xx_dpmaif_dl_idle_check(struct dpmaif_hw_info *hw_info)
+{
+	u32 dpmaif_dl_is_busy = ioread32(hw_info->pcie_base + DPMAIF_DL_CHK_BUSY);
+
+	return !(dpmaif_dl_is_busy & DPMAIF_DL_IDLE_STS);
+}
+
+static void t7xx_dpmaif_ul_all_q_en(struct dpmaif_hw_info *hw_info, bool enable)
+{
+	u32 ul_arb_en = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+
+	if (enable)
+		ul_arb_en |= DPMAIF_UL_ALL_QUE_ARB_EN;
+	else
+		ul_arb_en &= ~DPMAIF_UL_ALL_QUE_ARB_EN;
+
+	iowrite32(ul_arb_en, hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
+}
+
+static bool t7xx_dpmaif_ul_idle_check(struct dpmaif_hw_info *hw_info)
+{
+	u32 dpmaif_ul_is_busy = ioread32(hw_info->pcie_base + DPMAIF_UL_CHK_BUSY);
+
+	return !(dpmaif_ul_is_busy & DPMAIF_UL_IDLE_STS);
+}
+
+ /* DPMAIF UL Part HW setting */
+
+int t7xx_dpmaif_ul_update_hw_drb_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+				     unsigned int drb_entry_cnt)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	u32 ul_update, value;
+	int ret;
+
+	ul_update = drb_entry_cnt & DPMAIF_UL_ADD_COUNT_MASK;
+	ul_update |= DPMAIF_UL_ADD_UPDATE;
+
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_ULQ_ADD_DESC_CH_n(q_num),
+					   value, !(value & DPMAIF_UL_ADD_NOT_READY), 0,
+					   DPMAIF_CHECK_TIMEOUT_US);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "UL add is not ready\n");
+		return ret;
+	}
+
+	iowrite32(ul_update, hw_info->pcie_base + DPMAIF_ULQ_ADD_DESC_CH_n(q_num));
+
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_ULQ_ADD_DESC_CH_n(q_num),
+					   value, !(value & DPMAIF_UL_ADD_NOT_READY), 0,
+					   DPMAIF_CHECK_TIMEOUT_US);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "Timeout updating UL add\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+unsigned int t7xx_dpmaif_ul_get_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+	unsigned int value = ioread32(hw_info->pcie_base + DPMAIF_ULQ_STA0_n(q_num));
+
+	return value >> DPMAIF_UL_DRB_RIDX_OFFSET;
+}
+
+int t7xx_dpmaif_dlq_add_pit_remain_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int dlq_pit_idx,
+				       unsigned int pit_remain_cnt)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	u32 dl_update, value;
+	int ret;
+
+	dl_update = pit_remain_cnt & DPMAIF_PIT_REM_CNT_MSK;
+	dl_update |= DPMAIF_DL_ADD_UPDATE | (dlq_pit_idx << DPMAIF_ADD_DLQ_PIT_CHAN_OFS);
+
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_DLQPIT_ADD,
+					   value, !(value & DPMAIF_DL_ADD_NOT_READY), 0,
+					   DPMAIF_CHECK_TIMEOUT_US);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "Data plane modem is not ready to add dlq\n");
+		return ret;
+	}
+
+	iowrite32(dl_update, hw_info->pcie_base + DPMAIF_DL_DLQPIT_ADD);
+
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_DLQPIT_ADD,
+					   value, !(value & DPMAIF_DL_ADD_NOT_READY), 0,
+					   DPMAIF_CHECK_TIMEOUT_US);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "Data plane modem add dlq failed\n");
+		return ret;
+	}
+
+	return 0;
+}
+
+unsigned int t7xx_dpmaif_dl_dlq_pit_get_wr_idx(struct dpmaif_hw_info *hw_info,
+					       unsigned int dlq_pit_idx)
+{
+	u32 value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_DLQ_WRIDX +
+			 dlq_pit_idx * DLQ_PIT_IDX_SIZE);
+	return value & DPMAIF_DL_PIT_WRIDX_MSK;
+}
+
+static bool t7xx_dl_add_timedout(struct dpmaif_hw_info *hw_info)
+{
+	u32 value;
+	int ret;
+
+	ret = ioread32_poll_timeout_atomic(hw_info->pcie_base + DPMAIF_DL_BAT_ADD,
+					   value, !(value & DPMAIF_DL_ADD_NOT_READY), 0,
+					   DPMAIF_CHECK_TIMEOUT_US);
+	return !!ret;
+}
+
+int t7xx_dpmaif_dl_snd_hw_bat_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int bat_entry_cnt)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	unsigned int value;
+
+	if (t7xx_dl_add_timedout(hw_info)) {
+		dev_err(dpmaif_ctrl->dev, "DL add BAT not ready\n");
+		return -EBUSY;
+	}
+
+	value = bat_entry_cnt & DPMAIF_DL_ADD_COUNT_MASK;
+	value |= DPMAIF_DL_ADD_UPDATE;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_ADD);
+
+	if (t7xx_dl_add_timedout(hw_info)) {
+		dev_err(dpmaif_ctrl->dev, "DL add BAT timeout\n");
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+unsigned int t7xx_dpmaif_dl_get_bat_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+	u32 value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_BAT_RIDX);
+	return value & DPMAIF_DL_BAT_WRIDX_MSK;
+}
+
+unsigned int t7xx_dpmaif_dl_get_bat_wr_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+	u32 value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_BAT_WRIDX);
+	return value & DPMAIF_DL_BAT_WRIDX_MSK;
+}
+
+int t7xx_dpmaif_dl_snd_hw_frg_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int frg_entry_cnt)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	unsigned int value;
+
+	if (t7xx_dl_add_timedout(hw_info)) {
+		dev_err(dpmaif_ctrl->dev, "Data plane modem is not ready to add frag DLQ\n");
+		return -EBUSY;
+	}
+
+	value = frg_entry_cnt & DPMAIF_DL_ADD_COUNT_MASK;
+	value |= DPMAIF_DL_FRG_ADD_UPDATE | DPMAIF_DL_ADD_UPDATE;
+	iowrite32(value, hw_info->pcie_base + DPMAIF_DL_BAT_ADD);
+
+	if (t7xx_dl_add_timedout(hw_info)) {
+		dev_err(dpmaif_ctrl->dev, "Data plane modem add frag DLQ failed\n");
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+unsigned int t7xx_dpmaif_dl_get_frg_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
+{
+	u32 value;
+
+	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_FRGBAT_WRIDX);
+	return value & DPMAIF_DL_FRG_WRIDX_MSK;
+}
+
+static void t7xx_dpmaif_set_queue_property(struct dpmaif_hw_info *hw_info,
+					   struct dpmaif_hw_params *init_para)
+{
+	struct dpmaif_dl_hwq *dl_hwq;
+	struct dpmaif_dl *dl_que;
+	struct dpmaif_ul *ul_que;
+	int i;
+
+	for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+		dl_hwq = &hw_info->dl_que_hw[i];
+		dl_hwq->bat_remain_size = DPMAIF_HW_BAT_REMAIN;
+		dl_hwq->bat_pkt_bufsz = DPMAIF_HW_BAT_PKTBUF;
+		dl_hwq->frg_pkt_bufsz = DPMAIF_HW_FRG_PKTBUF;
+		dl_hwq->bat_rsv_length = DPMAIF_HW_BAT_RSVLEN;
+		dl_hwq->pkt_bid_max_cnt = DPMAIF_HW_PKT_BIDCNT;
+		dl_hwq->pkt_alignment = DPMAIF_HW_PKT_ALIGN;
+		dl_hwq->mtu_size = DPMAIF_HW_MTU_SIZE;
+		dl_hwq->chk_bat_num = DPMAIF_HW_CHK_BAT_NUM;
+		dl_hwq->chk_frg_num = DPMAIF_HW_CHK_FRG_NUM;
+		dl_hwq->chk_pit_num = DPMAIF_HW_CHK_PIT_NUM;
+
+		dl_que = &hw_info->dl_que[i];
+		dl_que->bat_base = init_para->pkt_bat_base_addr[i];
+		dl_que->bat_size_cnt = init_para->pkt_bat_size_cnt[i];
+		dl_que->pit_base = init_para->pit_base_addr[i];
+		dl_que->pit_size_cnt = init_para->pit_size_cnt[i];
+		dl_que->frg_base = init_para->frg_bat_base_addr[i];
+		dl_que->frg_size_cnt = init_para->frg_bat_size_cnt[i];
+		dl_que->que_started = true;
+	}
+
+	for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+		ul_que = &hw_info->ul_que[i];
+		ul_que->drb_base = init_para->drb_base_addr[i];
+		ul_que->drb_size_cnt = init_para->drb_size_cnt[i];
+		ul_que->que_started = true;
+	}
+}
+
+/**
+ * t7xx_dpmaif_hw_stop_all_txq() - Stop all TX queues.
+ * @dpmaif_ctrl: Pointer to struct dpmaif_ctrl.
+ *
+ * Disable the HW UL queues and poll the busy UL queues until they go
+ * idle, with an attempt limit of 1000000.
+ *
+ * Return:
+ * * 0			- Success
+ * * -ETIMEDOUT		- Timed out checking busy queues
+ */
+int t7xx_dpmaif_hw_stop_all_txq(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	int count = 0;
+
+	t7xx_dpmaif_ul_all_q_en(hw_info, false);
+	while (t7xx_dpmaif_ul_idle_check(hw_info)) {
+		if (++count >= DPMAIF_MAX_CHECK_COUNT) {
+			dev_err(dpmaif_ctrl->dev, "Failed to stop TX, status: 0x%x\n",
+				ioread32(hw_info->pcie_base + DPMAIF_UL_CHK_BUSY));
+			return -ETIMEDOUT;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * t7xx_dpmaif_hw_stop_all_rxq() - Stop all RX queues.
+ * @dpmaif_ctrl: Pointer to struct dpmaif_ctrl.
+ *
+ * Disable the HW DL queues and poll the busy DL queues until they go
+ * idle, with an attempt limit of 1000000.
+ * Then poll until the HW PIT write index equals the read index, with
+ * the same attempt limit.
+ *
+ * Return:
+ * * 0			- Success.
+ * * -ETIMEDOUT		- Timed out checking busy queues.
+ */
+int t7xx_dpmaif_hw_stop_all_rxq(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	unsigned int wr_idx, rd_idx;
+	int count = 0;
+
+	t7xx_dpmaif_dl_all_q_en(dpmaif_ctrl, false);
+	while (t7xx_dpmaif_dl_idle_check(hw_info)) {
+		if (++count >= DPMAIF_MAX_CHECK_COUNT) {
+			dev_err(dpmaif_ctrl->dev, "Failed to stop RX, status: 0x%x\n",
+				ioread32(hw_info->pcie_base + DPMAIF_DL_CHK_BUSY));
+			return -ETIMEDOUT;
+		}
+	}
+
+	/* Check middle PIT sync done */
+	count = 0;
+	do {
+		wr_idx = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PIT_WRIDX);
+		wr_idx &= DPMAIF_DL_PIT_WRIDX_MSK;
+		rd_idx = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_PIT_RIDX);
+		rd_idx &= DPMAIF_DL_PIT_WRIDX_MSK;
+
+		if (wr_idx == rd_idx)
+			return 0;
+	} while (++count < DPMAIF_MAX_CHECK_COUNT);
+
+	dev_err(dpmaif_ctrl->dev, "Check middle PIT sync fail\n");
+	return -ETIMEDOUT;
+}
+
+void t7xx_dpmaif_start_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	t7xx_dpmaif_ul_all_q_en(&dpmaif_ctrl->hif_hw_info, true);
+	t7xx_dpmaif_dl_all_q_en(dpmaif_ctrl, true);
+}
+
+/**
+ * t7xx_dpmaif_hw_init() - Initialize HW data path API.
+ * @dpmaif_ctrl: Pointer to struct dpmaif_ctrl.
+ * @init_param: Pointer to struct dpmaif_hw_params.
+ *
+ * Configures the port mode and clock, initializes the HW interrupts, and sets up the HW queues.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ERROR	- Error code from a failed sub-initialization.
+ */
+int t7xx_dpmaif_hw_init(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_hw_params *init_param)
+{
+	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
+	int ret;
+
+	ret = t7xx_dpmaif_hw_config(hw_info);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "DPMAIF HW config failed\n");
+		return ret;
+	}
+
+	ret = t7xx_dpmaif_init_intr(hw_info);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "DPMAIF HW interrupts init failed\n");
+		return ret;
+	}
+
+	t7xx_dpmaif_set_queue_property(hw_info, init_param);
+	t7xx_dpmaif_pcie_dpmaif_sign(hw_info);
+	t7xx_dpmaif_dl_performance(hw_info);
+
+	ret = t7xx_dpmaif_config_dlq_hw(dpmaif_ctrl);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "DPMAIF HW dlq config failed\n");
+		return ret;
+	}
+
+	t7xx_dpmaif_config_ulq_hw(hw_info);
+
+	ret = t7xx_dpmaif_hw_init_done(hw_info);
+	if (ret)
+		dev_err(dpmaif_ctrl->dev, "DPMAIF HW queue init failed\n");
+
+	return ret;
+}
+
+bool t7xx_dpmaif_ul_clr_done(struct dpmaif_hw_info *hw_info, unsigned char qno)
+{
+	u32 intr_status;
+
+	intr_status = ioread32(hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+	intr_status &= BIT(DP_UL_INT_DONE_OFFSET + qno);
+	if (intr_status) {
+		iowrite32(intr_status, hw_info->pcie_base + DPMAIF_AP_L2TISAR0);
+		return true;
+	}
+
+	return false;
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_dpmaif.h b/drivers/net/wwan/t7xx/t7xx_dpmaif.h
new file mode 100644
index 000000000000..da67fcbab387
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_dpmaif.h
@@ -0,0 +1,146 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#ifndef __T7XX_DPMAIF_H__
+#define __T7XX_DPMAIF_H__
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#include "t7xx_hif_dpmaif.h"
+
+#define DPMAIF_DL_PIT_SEQ_VALUE		251
+#define DPMAIF_UL_DRB_BYTE_SIZE		16
+#define DPMAIF_UL_DRB_ENTRY_WORD	(DPMAIF_UL_DRB_BYTE_SIZE >> 2)
+
+#define DPMAIF_MAX_CHECK_COUNT		1000000
+#define DPMAIF_CHECK_TIMEOUT_US		10000
+#define DPMAIF_CHECK_INIT_TIMEOUT_US	100000
+#define DPMAIF_CHECK_DELAY_US		10
+
+/* DPMAIF HW Initialization parameter structure */
+struct dpmaif_hw_params {
+	/* UL part */
+	dma_addr_t			drb_base_addr[DPMAIF_TXQ_NUM];
+	unsigned int			drb_size_cnt[DPMAIF_TXQ_NUM];
+	/* DL part */
+	dma_addr_t			pkt_bat_base_addr[DPMAIF_RXQ_NUM];
+	unsigned int			pkt_bat_size_cnt[DPMAIF_RXQ_NUM];
+	dma_addr_t			frg_bat_base_addr[DPMAIF_RXQ_NUM];
+	unsigned int			frg_bat_size_cnt[DPMAIF_RXQ_NUM];
+	dma_addr_t			pit_base_addr[DPMAIF_RXQ_NUM];
+	unsigned int			pit_size_cnt[DPMAIF_RXQ_NUM];
+};
+
+enum dpmaif_hw_intr_type {
+	DPF_INTR_INVALID_MIN,
+	DPF_INTR_UL_DONE,
+	DPF_INTR_UL_DRB_EMPTY,
+	DPF_INTR_UL_MD_NOTREADY,
+	DPF_INTR_UL_MD_PWR_NOTREADY,
+	DPF_INTR_UL_LEN_ERR,
+	DPF_INTR_DL_DONE,
+	DPF_INTR_DL_SKB_LEN_ERR,
+	DPF_INTR_DL_BATCNT_LEN_ERR,
+	DPF_INTR_DL_PITCNT_LEN_ERR,
+	DPF_INTR_DL_PKT_EMPTY_SET,
+	DPF_INTR_DL_FRG_EMPTY_SET,
+	DPF_INTR_DL_MTU_ERR,
+	DPF_INTR_DL_FRGCNT_LEN_ERR,
+	DPF_INTR_DL_Q0_PITCNT_LEN_ERR,
+	DPF_INTR_DL_Q1_PITCNT_LEN_ERR,
+	DPF_INTR_DL_HPC_ENT_TYPE_ERR,
+	DPF_INTR_DL_Q0_DONE,
+	DPF_INTR_DL_Q1_DONE,
+	DPF_INTR_INVALID_MAX
+};
+
+#define DPF_RX_QNO0			0
+#define DPF_RX_QNO1			1
+#define DPF_RX_QNO_DFT			DPF_RX_QNO0
+
+struct dpmaif_hw_intr_st_para {
+	unsigned int intr_cnt;
+	enum dpmaif_hw_intr_type intr_types[DPF_INTR_INVALID_MAX - 1];
+	unsigned int intr_queues[DPF_INTR_INVALID_MAX - 1];
+};
+
+#define DPMAIF_HW_BAT_REMAIN		64
+#define DPMAIF_HW_BAT_PKTBUF		(128 * 28)
+#define DPMAIF_HW_FRG_PKTBUF		128
+#define DPMAIF_HW_BAT_RSVLEN		64
+#define DPMAIF_HW_PKT_BIDCNT		1
+#define DPMAIF_HW_PKT_ALIGN		64
+#define DPMAIF_HW_MTU_SIZE		(3 * 1024 + 8)
+#define DPMAIF_HW_CHK_BAT_NUM		62
+#define DPMAIF_HW_CHK_FRG_NUM		3
+#define DPMAIF_HW_CHK_PIT_NUM		(2 * DPMAIF_HW_CHK_BAT_NUM)
+
+#define DP_UL_INT_DONE_OFFSET		0
+#define DP_UL_INT_QDONE_MSK		GENMASK(4, 0)
+#define DP_UL_INT_EMPTY_MSK		GENMASK(9, 5)
+#define DP_UL_INT_MD_NOTREADY_MSK	GENMASK(14, 10)
+#define DP_UL_INT_MD_PWR_NOTREADY_MSK	GENMASK(19, 15)
+#define DP_UL_INT_ERR_MSK		GENMASK(24, 20)
+
+#define DP_DL_INT_QDONE_MSK		BIT(0)
+#define DP_DL_INT_SKB_LEN_ERR		BIT(1)
+#define DP_DL_INT_BATCNT_LEN_ERR	BIT(2)
+#define DP_DL_INT_PITCNT_LEN_ERR	BIT(3)
+#define DP_DL_INT_PKT_EMPTY_MSK		BIT(4)
+#define DP_DL_INT_FRG_EMPTY_MSK		BIT(5)
+#define DP_DL_INT_MTU_ERR_MSK		BIT(6)
+#define DP_DL_INT_FRG_LENERR_MSK	BIT(7)
+#define DP_DL_INT_Q0_PITCNT_LEN_ERR	BIT(8)
+#define DP_DL_INT_Q1_PITCNT_LEN_ERR	BIT(9)
+#define DP_DL_INT_HPC_ENT_TYPE_ERR	BIT(10)
+#define DP_DL_INT_Q0_DONE		BIT(13)
+#define DP_DL_INT_Q1_DONE		BIT(14)
+
+#define DP_DL_Q0_STATUS_MASK		(DP_DL_INT_Q0_PITCNT_LEN_ERR | DP_DL_INT_Q0_DONE)
+#define DP_DL_Q1_STATUS_MASK		(DP_DL_INT_Q1_PITCNT_LEN_ERR | DP_DL_INT_Q1_DONE)
+
+int t7xx_dpmaif_hw_init(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_hw_params *init_param);
+int t7xx_dpmaif_hw_stop_all_txq(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_hw_stop_all_rxq(struct dpmaif_ctrl *dpmaif_ctrl);
+void t7xx_dpmaif_start_hw(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_hw_get_intr_cnt(struct dpmaif_ctrl *dpmaif_ctrl,
+				struct dpmaif_hw_intr_st_para *para, int qno);
+void t7xx_dpmaif_unmask_ulq_intr(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int q_num);
+int t7xx_dpmaif_ul_update_hw_drb_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+				     unsigned int drb_entry_cnt);
+int t7xx_dpmaif_dl_snd_hw_bat_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int bat_entry_cnt);
+int t7xx_dpmaif_dl_snd_hw_frg_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int frg_entry_cnt);
+int t7xx_dpmaif_dlq_add_pit_remain_cnt(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int dlq_pit_idx,
+				       unsigned int pit_remain_cnt);
+void t7xx_dpmaif_dlq_unmask_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info,
+						unsigned char qno);
+void t7xx_dpmaif_dlq_unmask_rx_done(struct dpmaif_hw_info *hw_info, unsigned char qno);
+bool t7xx_dpmaif_ul_clr_done(struct dpmaif_hw_info *hw_info, unsigned char qno);
+unsigned int t7xx_dpmaif_ul_get_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+void t7xx_dpmaif_ul_clr_all_intr(struct dpmaif_hw_info *hw_info);
+void t7xx_dpmaif_dl_clr_all_intr(struct dpmaif_hw_info *hw_info);
+void t7xx_dpmaif_clr_ip_busy_sts(struct dpmaif_hw_info *hw_info);
+void t7xx_dpmaif_dl_unmask_batcnt_len_err_intr(struct dpmaif_hw_info *hw_info);
+void t7xx_dpmaif_dl_unmask_pitcnt_len_err_intr(struct dpmaif_hw_info *hw_info);
+unsigned int t7xx_dpmaif_dl_get_bat_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+unsigned int t7xx_dpmaif_dl_get_bat_wr_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+unsigned int t7xx_dpmaif_dl_get_frg_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num);
+unsigned int t7xx_dpmaif_dl_dlq_pit_get_wr_idx(struct dpmaif_hw_info *hw_info,
+					       unsigned int dlq_pit_idx);
+
+#endif /* __T7XX_DPMAIF_H__ */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
                   ` (6 preceding siblings ...)
  2022-01-14  1:06 ` [PATCH net-next v4 07/13] net: wwan: t7xx: Data path HW layer Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-02-03 14:23   ` Ilpo Järvinen
  2022-02-08  8:19   ` Ilpo Järvinen
  2022-01-14  1:06 ` [PATCH net-next v4 09/13] net: wwan: t7xx: Add WWAN network interface Ricardo Martinez
                   ` (5 subsequent siblings)
  13 siblings, 2 replies; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

From: Haijun Liu <haijun.liu@mediatek.com>

The Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
for initialization, ISR, control, and event handling of the TX/RX flows.

DPMAIF TX
Exposes the `dpmaif_tx_send_skb` function, which the network device
can use to transmit packets.
The uplink data management uses a Descriptor Ring Buffer (DRB).
The first DRB entry is a message-type entry followed by one or more
normal DRB entries. The message-type DRB holds the skb information,
and each normal DRB entry holds a pointer to the skb payload.
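
A rough sketch of that layout (the struct and field names below are
illustrative only, not the driver's actual DRB definitions): a single
skb occupies one message DRB plus one payload DRB per data fragment.

	/* Illustrative only: not the driver's actual DRB layout. */
	struct example_drb_msg {	/* first entry for each skb */
		u32 header;		/* entry type and channel id */
		u32 packet_len;		/* total skb length */
	};

	struct example_drb_payload {	/* one entry per data fragment */
		u32 header;		/* entry type, "more entries" flag */
		u32 data_len;		/* fragment length */
		u64 data_addr;		/* DMA address of the fragment */
	};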

DPMAIF RX
The downlink buffer management uses Buffer Address Table (BAT) and
Packet Information Table (PIT) rings.
The BAT ring holds the addresses of the skb data buffers for the HW to
use, while the PIT contains metadata about a whole network packet,
including a reference to the BAT entry holding the data buffer address.
The driver reads the PIT and BAT entries written by the modem; when a
threshold is reached, the driver reloads the PIT and BAT rings.
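
As a minimal sketch of how the two rings relate (illustrative names
only, not the driver's actual structures), each PIT entry refers back
to the BAT entry whose buffer holds the packet data:

	/* Illustrative only: not the driver's actual BAT/PIT layout. */
	struct example_bat_entry {
		u64 dma_addr;		/* data buffer address given to the HW */
	};

	struct example_pit_entry {
		u32 buffer_id;		/* index of the BAT entry that was used */
		u32 data_len;		/* bytes the HW wrote for this packet */
		u32 flags;		/* e.g. last-fragment / checksum status */
	};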

Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 drivers/net/wwan/t7xx/Makefile             |    4 +
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c    |  487 ++++++++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h    |  251 ++++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 1227 ++++++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h |  115 ++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c |  724 ++++++++++++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h |   89 ++
 7 files changed, 2897 insertions(+)
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index 9eec2e2472fb..04a9ba50dc14 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -13,3 +13,7 @@ mtk_t7xx-y:=	t7xx_pci.o \
 		t7xx_port_proxy.o  \
 		t7xx_port_ctrl_msg.o \
 		t7xx_port_wwan.o \
+		t7xx_hif_dpmaif.o  \
+		t7xx_hif_dpmaif_tx.o \
+		t7xx_hif_dpmaif_rx.o  \
+		t7xx_dpmaif.o \
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
new file mode 100644
index 000000000000..b731d0be83ee
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
@@ -0,0 +1,487 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/irqreturn.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/string.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_common.h"
+#include "t7xx_dpmaif.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_hif_dpmaif_rx.h"
+#include "t7xx_hif_dpmaif_tx.h"
+#include "t7xx_pci.h"
+#include "t7xx_pcie_mac.h"
+
+unsigned int t7xx_ring_buf_get_next_wr_idx(unsigned int buf_len, unsigned int buf_idx)
+{
+	buf_idx++;
+
+	return buf_idx < buf_len ? buf_idx : 0;
+}
+
+unsigned int t7xx_ring_buf_rd_wr_count(unsigned int total_cnt, unsigned int rd_idx,
+				       unsigned int wr_idx, enum dpmaif_rdwr rd_wr)
+{
+	int pkt_cnt;
+
+	if (rd_wr == DPMAIF_READ)
+		pkt_cnt = wr_idx - rd_idx;
+	else
+		pkt_cnt = rd_idx - wr_idx - 1;
+
+	if (pkt_cnt < 0)
+		pkt_cnt += total_cnt;
+
+	return (unsigned int)pkt_cnt;
+}
+
+static void t7xx_dpmaif_enable_irq(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct dpmaif_isr_para *isr_para;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(dpmaif_ctrl->isr_para); i++) {
+		isr_para = &dpmaif_ctrl->isr_para[i];
+		t7xx_pcie_mac_set_int(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int);
+	}
+}
+
+static void t7xx_dpmaif_disable_irq(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct dpmaif_isr_para *isr_para;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(dpmaif_ctrl->isr_para); i++) {
+		isr_para = &dpmaif_ctrl->isr_para[i];
+		t7xx_pcie_mac_clear_int(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int);
+	}
+}
+
+static void t7xx_dpmaif_irq_cb(struct dpmaif_isr_para *isr_para)
+{
+	struct dpmaif_ctrl *dpmaif_ctrl = isr_para->dpmaif_ctrl;
+	struct dpmaif_hw_intr_st_para intr_status;
+	struct device *dev = dpmaif_ctrl->dev;
+	int i;
+
+	memset(&intr_status, 0, sizeof(intr_status));
+
+	if (t7xx_dpmaif_hw_get_intr_cnt(dpmaif_ctrl, &intr_status, isr_para->dlq_id) < 0) {
+		dev_err(dev, "Failed to get HW interrupt count\n");
+		return;
+	}
+
+	t7xx_pcie_mac_clear_int_status(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int);
+
+	for (i = 0; i < intr_status.intr_cnt; i++) {
+		switch (intr_status.intr_types[i]) {
+		case DPF_INTR_UL_DONE:
+			t7xx_dpmaif_irq_tx_done(dpmaif_ctrl, intr_status.intr_queues[i]);
+			break;
+
+		case DPF_INTR_UL_DRB_EMPTY:
+		case DPF_INTR_UL_MD_NOTREADY:
+		case DPF_INTR_UL_MD_PWR_NOTREADY:
+			/* No need to log an error for these */
+			break;
+
+		case DPF_INTR_DL_BATCNT_LEN_ERR:
+			dev_err_ratelimited(dev, "DL interrupt: packet BAT count length error\n");
+			t7xx_dpmaif_dl_unmask_batcnt_len_err_intr(&dpmaif_ctrl->hif_hw_info);
+			break;
+
+		case DPF_INTR_DL_PITCNT_LEN_ERR:
+			dev_err_ratelimited(dev, "DL interrupt: PIT count length error\n");
+			t7xx_dpmaif_dl_unmask_pitcnt_len_err_intr(&dpmaif_ctrl->hif_hw_info);
+			break;
+
+		case DPF_INTR_DL_Q0_PITCNT_LEN_ERR:
+			dev_err_ratelimited(dev, "DL interrupt: DLQ0 PIT count length error\n");
+			t7xx_dpmaif_dlq_unmask_pitcnt_len_err_intr(&dpmaif_ctrl->hif_hw_info,
+								   DPF_RX_QNO_DFT);
+			break;
+
+		case DPF_INTR_DL_Q1_PITCNT_LEN_ERR:
+			dev_err_ratelimited(dev, "DL interrupt: DLQ1 PIT count length error\n");
+			t7xx_dpmaif_dlq_unmask_pitcnt_len_err_intr(&dpmaif_ctrl->hif_hw_info,
+								   DPF_RX_QNO1);
+			break;
+
+		case DPF_INTR_DL_DONE:
+		case DPF_INTR_DL_Q0_DONE:
+		case DPF_INTR_DL_Q1_DONE:
+			t7xx_dpmaif_irq_rx_done(dpmaif_ctrl, intr_status.intr_queues[i]);
+			break;
+
+		default:
+			dev_err_ratelimited(dev, "DL interrupt error: unknown type : %d\n",
+					    intr_status.intr_types[i]);
+		}
+	}
+}
+
+static irqreturn_t t7xx_dpmaif_isr_handler(int irq, void *data)
+{
+	struct dpmaif_isr_para *isr_para = data;
+	struct dpmaif_ctrl *dpmaif_ctrl;
+
+	dpmaif_ctrl = isr_para->dpmaif_ctrl;
+	if (dpmaif_ctrl->state != DPMAIF_STATE_PWRON) {
+		dev_err(dpmaif_ctrl->dev, "Interrupt received before initializing DPMAIF\n");
+		return IRQ_HANDLED;
+	}
+
+	t7xx_pcie_mac_clear_int(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int);
+	t7xx_dpmaif_irq_cb(isr_para);
+	t7xx_pcie_mac_set_int(dpmaif_ctrl->t7xx_dev, isr_para->pcie_int);
+	return IRQ_HANDLED;
+}
+
+static void t7xx_dpmaif_isr_parameter_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct dpmaif_isr_para *isr_para;
+	unsigned char i;
+
+	dpmaif_ctrl->rxq_int_mapping[DPF_RX_QNO0] = DPMAIF_INT;
+	dpmaif_ctrl->rxq_int_mapping[DPF_RX_QNO1] = DPMAIF2_INT;
+
+	for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+		isr_para = &dpmaif_ctrl->isr_para[i];
+		isr_para->dpmaif_ctrl = dpmaif_ctrl;
+		isr_para->dlq_id = i;
+		isr_para->pcie_int = dpmaif_ctrl->rxq_int_mapping[i];
+	}
+}
+
+static void t7xx_dpmaif_register_pcie_irq(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct t7xx_pci_dev *t7xx_dev = dpmaif_ctrl->t7xx_dev;
+	struct dpmaif_isr_para *isr_para;
+	enum pcie_int int_type;
+	int i;
+
+	t7xx_dpmaif_isr_parameter_init(dpmaif_ctrl);
+
+	for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+		isr_para = &dpmaif_ctrl->isr_para[i];
+		int_type = isr_para->pcie_int;
+		t7xx_pcie_mac_clear_int(t7xx_dev, int_type);
+
+		t7xx_dev->intr_handler[int_type] = t7xx_dpmaif_isr_handler;
+		t7xx_dev->intr_thread[int_type] = NULL;
+		t7xx_dev->callback_param[int_type] = isr_para;
+
+		t7xx_pcie_mac_clear_int_status(t7xx_dev, int_type);
+		t7xx_pcie_mac_set_int(t7xx_dev, int_type);
+	}
+}
+
+static int t7xx_dpmaif_rxtx_sw_allocs(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct dpmaif_rx_queue *rx_q;
+	struct dpmaif_tx_queue *tx_q;
+	int ret, rx_idx, tx_idx, i;
+
+	ret = t7xx_dpmaif_bat_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_req, BAT_TYPE_NORMAL);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "Failed to allocate normal BAT table: %d\n", ret);
+		return ret;
+	}
+
+	ret = t7xx_dpmaif_bat_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_frag, BAT_TYPE_FRAG);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "Failed to allocate frag BAT table: %d\n", ret);
+		goto err_free_normal_bat;
+	}
+
+	for (rx_idx = 0; rx_idx < DPMAIF_RXQ_NUM; rx_idx++) {
+		rx_q = &dpmaif_ctrl->rxq[rx_idx];
+		rx_q->index = rx_idx;
+		rx_q->dpmaif_ctrl = dpmaif_ctrl;
+		ret = t7xx_dpmaif_rxq_init(rx_q);
+		if (ret)
+			goto err_free_rxq;
+	}
+
+	for (tx_idx = 0; tx_idx < DPMAIF_TXQ_NUM; tx_idx++) {
+		tx_q = &dpmaif_ctrl->txq[tx_idx];
+		tx_q->index = tx_idx;
+		tx_q->dpmaif_ctrl = dpmaif_ctrl;
+		ret = t7xx_dpmaif_txq_init(tx_q);
+		if (ret)
+			goto err_free_txq;
+	}
+
+	ret = t7xx_dpmaif_tx_thread_init(dpmaif_ctrl);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "Failed to start TX thread\n");
+		goto err_free_txq;
+	}
+
+	ret = t7xx_dpmaif_bat_rel_wq_alloc(dpmaif_ctrl);
+	if (ret)
+		goto err_thread_rel;
+
+	return 0;
+
+err_thread_rel:
+	t7xx_dpmaif_tx_thread_rel(dpmaif_ctrl);
+
+err_free_txq:
+	for (i = 0; i < tx_idx; i++) {
+		tx_q = &dpmaif_ctrl->txq[i];
+		t7xx_dpmaif_txq_free(tx_q);
+	}
+
+err_free_rxq:
+	for (i = 0; i < rx_idx; i++) {
+		rx_q = &dpmaif_ctrl->rxq[i];
+		t7xx_dpmaif_rxq_free(rx_q);
+	}
+
+	t7xx_dpmaif_bat_free(dpmaif_ctrl, &dpmaif_ctrl->bat_frag);
+
+err_free_normal_bat:
+	t7xx_dpmaif_bat_free(dpmaif_ctrl, &dpmaif_ctrl->bat_req);
+
+	return ret;
+}
+
+static void t7xx_dpmaif_sw_release(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct dpmaif_rx_queue *rx_q;
+	struct dpmaif_tx_queue *tx_q;
+	int i;
+
+	t7xx_dpmaif_tx_thread_rel(dpmaif_ctrl);
+	t7xx_dpmaif_bat_wq_rel(dpmaif_ctrl);
+
+	for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+		tx_q = &dpmaif_ctrl->txq[i];
+		t7xx_dpmaif_txq_free(tx_q);
+	}
+
+	for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+		rx_q = &dpmaif_ctrl->rxq[i];
+		t7xx_dpmaif_rxq_free(rx_q);
+	}
+}
+
+static int t7xx_dpmaif_start(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct dpmaif_hw_params hw_init_para;
+	struct dpmaif_rx_queue *rxq;
+	struct dpmaif_tx_queue *txq;
+	unsigned int buf_cnt;
+	int i, ret = 0;
+
+	if (dpmaif_ctrl->state == DPMAIF_STATE_PWRON)
+		return -EFAULT;
+
+	memset(&hw_init_para, 0, sizeof(hw_init_para));
+
+	for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+		rxq = &dpmaif_ctrl->rxq[i];
+		rxq->que_started = true;
+		rxq->index = i;
+		rxq->budget = rxq->bat_req->bat_size_cnt - 1;
+
+		hw_init_para.pkt_bat_base_addr[i] = rxq->bat_req->bat_bus_addr;
+		hw_init_para.pkt_bat_size_cnt[i] = rxq->bat_req->bat_size_cnt;
+		hw_init_para.pit_base_addr[i] = rxq->pit_bus_addr;
+		hw_init_para.pit_size_cnt[i] = rxq->pit_size_cnt;
+		hw_init_para.frg_bat_base_addr[i] = rxq->bat_frag->bat_bus_addr;
+		hw_init_para.frg_bat_size_cnt[i] = rxq->bat_frag->bat_size_cnt;
+	}
+
+	memset(dpmaif_ctrl->bat_req.bat_mask, 0,
+	       dpmaif_ctrl->bat_req.bat_size_cnt * sizeof(unsigned char));
+
+	buf_cnt = dpmaif_ctrl->bat_req.bat_size_cnt - 1;
+	ret = t7xx_dpmaif_rx_buf_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_req, 0, buf_cnt, true);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "Failed to allocate RX buffer: %d\n",
+			ret);
+		return ret;
+	}
+
+	buf_cnt = dpmaif_ctrl->bat_frag.bat_size_cnt - 1;
+	ret = t7xx_dpmaif_rx_frag_alloc(dpmaif_ctrl, &dpmaif_ctrl->bat_frag, buf_cnt, true);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "Failed to allocate frag RX buffer: %d\n",
+			ret);
+		goto err_free_normal_bat;
+	}
+
+	for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+		txq = &dpmaif_ctrl->txq[i];
+		txq->que_started = true;
+
+		hw_init_para.drb_base_addr[i] = txq->drb_bus_addr;
+		hw_init_para.drb_size_cnt[i] = txq->drb_size_cnt;
+	}
+
+	ret = t7xx_dpmaif_hw_init(dpmaif_ctrl, &hw_init_para);
+	if (ret) {
+		dev_err(dpmaif_ctrl->dev, "Failed to initialize DPMAIF HW: %d\n", ret);
+		goto err_free_frag_bat;
+	}
+
+	ret = t7xx_dpmaif_dl_snd_hw_bat_cnt(dpmaif_ctrl, rxq->bat_req->bat_size_cnt - 1);
+	if (ret)
+		goto err_free_frag_bat;
+
+	ret = t7xx_dpmaif_dl_snd_hw_frg_cnt(dpmaif_ctrl, rxq->bat_frag->bat_size_cnt - 1);
+	if (ret)
+		goto err_free_frag_bat;
+
+	t7xx_dpmaif_ul_clr_all_intr(&dpmaif_ctrl->hif_hw_info);
+	t7xx_dpmaif_dl_clr_all_intr(&dpmaif_ctrl->hif_hw_info);
+	dpmaif_ctrl->state = DPMAIF_STATE_PWRON;
+	t7xx_dpmaif_enable_irq(dpmaif_ctrl);
+	wake_up(&dpmaif_ctrl->tx_wq);
+	return 0;
+
+err_free_frag_bat:
+	t7xx_dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_frag);
+
+err_free_normal_bat:
+	t7xx_dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_req);
+
+	return ret;
+}
+
+static void t7xx_dpmaif_stop_sw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	t7xx_dpmaif_tx_stop(dpmaif_ctrl);
+	t7xx_dpmaif_rx_stop(dpmaif_ctrl);
+}
+
+static void t7xx_dpmaif_stop_hw(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	t7xx_dpmaif_hw_stop_all_txq(dpmaif_ctrl);
+	t7xx_dpmaif_hw_stop_all_rxq(dpmaif_ctrl);
+}
+
+static int t7xx_dpmaif_stop(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	if (!dpmaif_ctrl->dpmaif_sw_init_done) {
+		dev_err(dpmaif_ctrl->dev, "DPMAIF SW init not done\n");
+		return -EFAULT;
+	}
+
+	if (dpmaif_ctrl->state == DPMAIF_STATE_PWROFF)
+		return -EFAULT;
+
+	t7xx_dpmaif_disable_irq(dpmaif_ctrl);
+	dpmaif_ctrl->state = DPMAIF_STATE_PWROFF;
+	t7xx_dpmaif_stop_sw(dpmaif_ctrl);
+	t7xx_dpmaif_tx_clear(dpmaif_ctrl);
+	t7xx_dpmaif_rx_clear(dpmaif_ctrl);
+	return 0;
+}
+
+int t7xx_dpmaif_md_state_callback(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char state)
+{
+	int ret = 0;
+
+	switch (state) {
+	case MD_STATE_WAITING_FOR_HS1:
+		ret = t7xx_dpmaif_start(dpmaif_ctrl);
+		break;
+
+	case MD_STATE_EXCEPTION:
+		ret = t7xx_dpmaif_stop(dpmaif_ctrl);
+		break;
+
+	case MD_STATE_STOPPED:
+		ret = t7xx_dpmaif_stop(dpmaif_ctrl);
+		break;
+
+	case MD_STATE_WAITING_TO_STOP:
+		t7xx_dpmaif_stop_hw(dpmaif_ctrl);
+		break;
+
+	default:
+		break;
+	}
+
+	return ret;
+}
+
+/**
+ * t7xx_dpmaif_hif_init() - Initialize data path.
+ * @t7xx_dev: MTK context structure.
+ * @callbacks: Callbacks implemented by the network layer to handle RX skb and
+ *	       event notifications.
+ *
+ * Allocate and initialize datapath control block.
+ * Register datapath ISR, TX and RX resources.
+ *
+ * Return:
+ * * dpmaif_ctrl pointer - Pointer to DPMAIF context structure.
+ * * NULL		 - In case of error.
+ */
+struct dpmaif_ctrl *t7xx_dpmaif_hif_init(struct t7xx_pci_dev *t7xx_dev,
+					 struct dpmaif_callbacks *callbacks)
+{
+	struct device *dev = &t7xx_dev->pdev->dev;
+	struct dpmaif_ctrl *dpmaif_ctrl;
+	int ret;
+
+	if (!callbacks)
+		return NULL;
+
+	dpmaif_ctrl = devm_kzalloc(dev, sizeof(*dpmaif_ctrl), GFP_KERNEL);
+	if (!dpmaif_ctrl)
+		return NULL;
+
+	dpmaif_ctrl->t7xx_dev = t7xx_dev;
+	dpmaif_ctrl->callbacks = callbacks;
+	dpmaif_ctrl->dev = dev;
+	dpmaif_ctrl->dpmaif_sw_init_done = false;
+	dpmaif_ctrl->hif_hw_info.pcie_base = t7xx_dev->base_addr.pcie_ext_reg_base -
+					     t7xx_dev->base_addr.pcie_dev_reg_trsl_addr;
+
+	t7xx_dpmaif_register_pcie_irq(dpmaif_ctrl);
+	t7xx_dpmaif_disable_irq(dpmaif_ctrl);
+
+	ret = t7xx_dpmaif_rxtx_sw_allocs(dpmaif_ctrl);
+	if (ret) {
+		dev_err(dev, "Failed to allocate RX/TX SW resources: %d\n", ret);
+		return NULL;
+	}
+
+	dpmaif_ctrl->dpmaif_sw_init_done = true;
+	return dpmaif_ctrl;
+}
+
+void t7xx_dpmaif_hif_exit(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	if (dpmaif_ctrl->dpmaif_sw_init_done) {
+		t7xx_dpmaif_stop(dpmaif_ctrl);
+		t7xx_dpmaif_sw_release(dpmaif_ctrl);
+		dpmaif_ctrl->dpmaif_sw_init_done = false;
+	}
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
new file mode 100644
index 000000000000..3404e2a75566
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
@@ -0,0 +1,251 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#ifndef __T7XX_DPMA_TX_H__
+#define __T7XX_DPMA_TX_H__
+
+#include <linux/mm_types.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+#include <linux/wait.h>
+
+#include "t7xx_common.h"
+#include "t7xx_pci.h"
+
+#define DPMAIF_RXQ_NUM		2
+#define DPMAIF_TXQ_NUM		5
+
+enum dpmaif_rdwr {
+	DPMAIF_READ,
+	DPMAIF_WRITE,
+};
+
+struct dpmaif_isr_en_mask {
+	unsigned int		ap_ul_l2intr_en_msk;
+	unsigned int		ap_dl_l2intr_en_msk;
+	unsigned int		ap_udl_ip_busy_en_msk;
+	unsigned int		ap_dl_l2intr_err_en_msk;
+};
+
+struct dpmaif_ul {
+	bool			que_started;
+	unsigned char		reserve[3];
+	dma_addr_t		drb_base;
+	unsigned int		drb_size_cnt;
+};
+
+struct dpmaif_dl {
+	bool			que_started;
+	unsigned char		reserve[3];
+	dma_addr_t		pit_base;
+	unsigned int		pit_size_cnt;
+	dma_addr_t		bat_base;
+	unsigned int		bat_size_cnt;
+	dma_addr_t		frg_base;
+	unsigned int		frg_size_cnt;
+	unsigned int		pit_seq;
+};
+
+struct dpmaif_dl_hwq {
+	unsigned int		bat_remain_size;
+	unsigned int		bat_pkt_bufsz;
+	unsigned int		frg_pkt_bufsz;
+	unsigned int		bat_rsv_length;
+	unsigned int		pkt_bid_max_cnt;
+	unsigned int		pkt_alignment;
+	unsigned int		mtu_size;
+	unsigned int		chk_pit_num;
+	unsigned int		chk_bat_num;
+	unsigned int		chk_frg_num;
+};
+
+/* Info about the RX skb currently being assembled */
+struct dpmaif_cur_rx_skb_info {
+	bool			msg_pit_received;
+	struct sk_buff		*cur_skb;
+	unsigned int		cur_chn_idx;
+	unsigned int		check_sum;
+	unsigned int		pit_dp;
+	unsigned int		pkt_type;
+	int			err_payload;
+};
+
+/* Structure of a DL BAT entry */
+struct dpmaif_bat {
+	unsigned int		p_buffer_addr;
+	unsigned int		buffer_addr_ext;
+};
+
+struct dpmaif_bat_skb {
+	struct sk_buff		*skb;
+	dma_addr_t		data_bus_addr;
+	unsigned int		data_len;
+};
+
+struct dpmaif_bat_page {
+	struct page		*page;
+	dma_addr_t		data_bus_addr;
+	unsigned int		offset;
+	unsigned int		data_len;
+};
+
+enum bat_type {
+	BAT_TYPE_NORMAL = 0,
+	BAT_TYPE_FRAG = 1,
+};
+
+struct dpmaif_bat_request {
+	void			*bat_base;
+	dma_addr_t		bat_bus_addr;
+	unsigned int		bat_size_cnt;
+	unsigned short		bat_wr_idx;
+	unsigned short		bat_release_rd_idx;
+	void			*bat_skb;
+	unsigned int		skb_pkt_cnt;
+	unsigned int		pkt_buf_sz;
+	unsigned char		*bat_mask;
+	atomic_t		refcnt;
+	spinlock_t		mask_lock; /* Protects BAT mask */
+	enum bat_type		type;
+};
+
+struct dpmaif_rx_queue {
+	unsigned char		index;
+	bool			que_started;
+	unsigned short		budget;
+
+	void			*pit_base;
+	dma_addr_t		pit_bus_addr;
+	unsigned int		pit_size_cnt;
+
+	unsigned short		pit_rd_idx;
+	unsigned short		pit_wr_idx;
+	unsigned short		pit_release_rd_idx;
+
+	struct dpmaif_bat_request *bat_req;
+	struct dpmaif_bat_request *bat_frag;
+
+	wait_queue_head_t	rx_wq;
+	struct task_struct	*rx_thread;
+	struct sk_buff_head	skb_list;
+	unsigned int		skb_list_max_len;
+
+	struct workqueue_struct	*worker;
+	struct work_struct	dpmaif_rxq_work;
+
+	atomic_t		rx_processing;
+
+	struct dpmaif_ctrl	*dpmaif_ctrl;
+	unsigned int		expect_pit_seq;
+	unsigned int		pit_remain_release_cnt;
+	struct dpmaif_cur_rx_skb_info rx_data_info;
+};
+
+struct dpmaif_tx_queue {
+	unsigned char		index;
+	bool			que_started;
+	atomic_t		tx_budget;
+	void			*drb_base;
+	dma_addr_t		drb_bus_addr;
+	unsigned int		drb_size_cnt;
+	unsigned short		drb_wr_idx;
+	unsigned short		drb_rd_idx;
+	unsigned short		drb_release_rd_idx;
+	unsigned short		last_ch_id;
+	void			*drb_skb_base;
+	wait_queue_head_t	req_wq;
+	struct workqueue_struct	*worker;
+	struct work_struct	dpmaif_tx_work;
+	spinlock_t		tx_lock; /* Protects txq DRB */
+	atomic_t		tx_processing;
+
+	struct dpmaif_ctrl	*dpmaif_ctrl;
+	spinlock_t		tx_skb_lock; /* Protects TX thread skb list */
+	struct list_head	tx_skb_queue;
+	unsigned int		tx_submit_skb_cnt;
+	unsigned int		tx_list_max_len;
+	unsigned int		tx_skb_stat;
+	bool			drb_lack;
+};
+
+struct dpmaif_isr_para {
+	struct dpmaif_ctrl	*dpmaif_ctrl;
+	unsigned char		pcie_int;
+	unsigned char		dlq_id;
+};
+
+enum dpmaif_state {
+	DPMAIF_STATE_MIN,
+	DPMAIF_STATE_PWROFF,
+	DPMAIF_STATE_PWRON,
+	DPMAIF_STATE_EXCEPTION,
+	DPMAIF_STATE_MAX
+};
+
+struct dpmaif_hw_info {
+	void __iomem			*pcie_base;
+	struct dpmaif_dl		dl_que[DPMAIF_RXQ_NUM];
+	struct dpmaif_ul		ul_que[DPMAIF_TXQ_NUM];
+	struct dpmaif_dl_hwq		dl_que_hw[DPMAIF_RXQ_NUM];
+	struct dpmaif_isr_en_mask	isr_en_mask;
+};
+
+enum dpmaif_txq_state {
+	DMPAIF_TXQ_STATE_IRQ,
+	DMPAIF_TXQ_STATE_FULL,
+};
+
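+/* Callbacks implemented by the network layer (see t7xx_dpmaif_hif_init()):
+ * @state_notify: reports a TX queue state change (IRQ budget available or
+ *                queue full) for the given queue type.
+ * @recv_skb: hands a received skb up to the network layer.
+ */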
+struct dpmaif_callbacks {
+	void (*state_notify)(struct t7xx_pci_dev *t7xx_dev,
+			     enum dpmaif_txq_state state, int txqt);
+	void (*recv_skb)(struct t7xx_pci_dev *t7xx_dev, struct sk_buff *skb);
+};
+
+struct dpmaif_ctrl {
+	struct device			*dev;
+	struct t7xx_pci_dev		*t7xx_dev;
+	enum dpmaif_state		state;
+	bool				dpmaif_sw_init_done;
+	struct dpmaif_hw_info		hif_hw_info;
+	struct dpmaif_tx_queue		txq[DPMAIF_TXQ_NUM];
+	struct dpmaif_rx_queue		rxq[DPMAIF_RXQ_NUM];
+
+	unsigned char			rxq_int_mapping[DPMAIF_RXQ_NUM];
+	struct dpmaif_isr_para		isr_para[DPMAIF_RXQ_NUM];
+
+	struct dpmaif_bat_request	bat_req;
+	struct dpmaif_bat_request	bat_frag;
+	struct workqueue_struct		*bat_release_wq;
+	struct work_struct		bat_release_work;
+
+	wait_queue_head_t		tx_wq;
+	struct task_struct		*tx_thread;
+
+	struct dpmaif_callbacks		*callbacks;
+};
+
+struct dpmaif_ctrl *t7xx_dpmaif_hif_init(struct t7xx_pci_dev *t7xx_dev,
+					 struct dpmaif_callbacks *callbacks);
+void t7xx_dpmaif_hif_exit(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_md_state_callback(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char state);
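+/* Ring buffer index helpers shared by the RX and TX paths. As used by this
+ * driver, DPMAIF_READ returns the number of entries pending between rd_idx
+ * and wr_idx, while DPMAIF_WRITE returns the free space left in the ring.
+ */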
+unsigned int t7xx_ring_buf_get_next_wr_idx(unsigned int buf_len, unsigned int buf_idx);
+unsigned int t7xx_ring_buf_rd_wr_count(unsigned int total_cnt, unsigned int rd_idx,
+				       unsigned int wr_idx, enum dpmaif_rdwr);
+
+#endif /* __T7XX_DPMA_TX_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
new file mode 100644
index 000000000000..7df7ffea8b14
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
@@ -0,0 +1,1227 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#include <linux/atomic.h>
+#include <linux/bitfield.h>
+#include <linux/bitops.h>
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/gfp.h>
+#include <linux/err.h>
+#include <linux/iopoll.h>
+#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/mm.h>
+#include <linux/netdevice.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_dpmaif.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_hif_dpmaif_rx.h"
+#include "t7xx_pci.h"
+
+#define DPMAIF_BAT_COUNT		8192
+#define DPMAIF_FRG_COUNT		4814
+#define DPMAIF_PIT_COUNT		(DPMAIF_BAT_COUNT * 2)
+
+#define DPMAIF_BAT_CNT_THRESHOLD	30
+#define DPMAIF_PIT_CNT_THRESHOLD	60
+#define DPMAIF_RX_PUSH_THRESHOLD_MASK	GENMASK(2, 0)
+#define DPMAIF_NOTIFY_RELEASE_COUNT	128
+#define DPMAIF_POLL_PIT_TIME_US		20
+#define DPMAIF_POLL_PIT_MAX_TIME_US	2000
+#define DPMAIF_WQ_TIME_LIMIT_MS		2
+#define DPMAIF_CS_RESULT_PASS		0
+
+/* Packet type */
+#define DES_PT_PD			0
+#define DES_PT_MSG			1
+/* Buffer type */
+#define PKT_BUF_FRAG			1
+
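+/* The buffer ID of a normal PIT is split across two fields: the footer holds
+ * the high bits (NORMAL_PIT_H_BID) and the header holds the low 13 bits
+ * (NORMAL_PIT_BUFFER_ID).
+ */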
+static unsigned int t7xx_normal_pit_bid(const struct dpmaif_normal_pit *pit_info)
+{
+	u32 value;
+
+	value = FIELD_GET(NORMAL_PIT_H_BID, le32_to_cpu(pit_info->pit_footer));
+	value <<= 13;
+	value += FIELD_GET(NORMAL_PIT_BUFFER_ID, le32_to_cpu(pit_info->pit_header));
+	return value;
+}
+
+static int t7xx_dpmaif_net_rx_push_thread(void *arg)
+{
+	struct dpmaif_rx_queue *q = arg;
+	struct dpmaif_ctrl *hif_ctrl;
+	struct dpmaif_callbacks *cb;
+
+	hif_ctrl = q->dpmaif_ctrl;
+	cb = hif_ctrl->callbacks;
+
+	while (!kthread_should_stop()) {
+		struct sk_buff *skb;
+		unsigned long flags;
+
+		if (skb_queue_empty(&q->skb_list)) {
+			if (wait_event_interruptible(q->rx_wq,
+						     !skb_queue_empty(&q->skb_list) ||
+						     kthread_should_stop()))
+				continue;
+
+			if (kthread_should_stop())
+				break;
+		}
+
+		spin_lock_irqsave(&q->skb_list.lock, flags);
+		skb = __skb_dequeue(&q->skb_list);
+		spin_unlock_irqrestore(&q->skb_list.lock, flags);
+		if (!skb)
+			continue;
+
+		cb->recv_skb(hif_ctrl->t7xx_dev, skb);
+		cond_resched();
+	}
+
+	return 0;
+}
+
+static int t7xx_dpmaif_update_bat_wr_idx(struct dpmaif_ctrl *dpmaif_ctrl,
+					 const unsigned char q_num, const unsigned int bat_cnt)
+{
+	struct dpmaif_rx_queue *rxq = &dpmaif_ctrl->rxq[q_num];
+	unsigned short old_rl_idx, new_wr_idx, old_wr_idx;
+	struct dpmaif_bat_request *bat_req = rxq->bat_req;
+
+	if (!rxq->que_started) {
+		dev_err(dpmaif_ctrl->dev, "RX queue %d has not been started\n", rxq->index);
+		return -EINVAL;
+	}
+
+	old_rl_idx = bat_req->bat_release_rd_idx;
+	old_wr_idx = bat_req->bat_wr_idx;
+	new_wr_idx = old_wr_idx + bat_cnt;
+
+	if (old_rl_idx > old_wr_idx && new_wr_idx >= old_rl_idx) {
+		dev_err(dpmaif_ctrl->dev, "RX BAT flow check fail\n");
+		return -EINVAL;
+	}
+
+	if (new_wr_idx >= bat_req->bat_size_cnt) {
+		new_wr_idx -= bat_req->bat_size_cnt;
+		if (new_wr_idx >= old_rl_idx) {
+			dev_err(dpmaif_ctrl->dev, "RX BAT flow check fail\n");
+			return -EINVAL;
+		}
+	}
+
+	bat_req->bat_wr_idx = new_wr_idx;
+	return 0;
+}
+
+static bool t7xx_alloc_and_map_skb_info(const struct dpmaif_ctrl *dpmaif_ctrl,
+					const unsigned int size, struct dpmaif_bat_skb *cur_skb)
+{
+	dma_addr_t data_bus_addr;
+	struct sk_buff *skb;
+	size_t data_len;
+
+	skb = __dev_alloc_skb(size, GFP_KERNEL);
+	if (!skb)
+		return false;
+
+	data_len = t7xx_skb_data_area_size(skb);
+
+	data_bus_addr = dma_map_single(dpmaif_ctrl->dev, skb->data, data_len, DMA_FROM_DEVICE);
+	if (dma_mapping_error(dpmaif_ctrl->dev, data_bus_addr)) {
+		dev_err_ratelimited(dpmaif_ctrl->dev, "DMA mapping error\n");
+		dev_kfree_skb_any(skb);
+		return false;
+	}
+
+	cur_skb->skb = skb;
+	cur_skb->data_bus_addr = data_bus_addr;
+	cur_skb->data_len = data_len;
+
+	return true;
+}
+
+static void t7xx_unmap_bat_skb(struct device *dev, struct dpmaif_bat_skb *bat_skb_base,
+			       unsigned int index)
+{
+	struct dpmaif_bat_skb *bat_skb = bat_skb_base + index;
+
+	if (bat_skb->skb) {
+		dma_unmap_single(dev, bat_skb->data_bus_addr, bat_skb->data_len, DMA_FROM_DEVICE);
+		kfree_skb(bat_skb->skb);
+		bat_skb->skb = NULL;
+	}
+}
+
+/**
+ * t7xx_dpmaif_rx_buf_alloc() - Allocate buffers for the BAT ring.
+ * @dpmaif_ctrl: Pointer to DPMAIF context structure.
+ * @bat_req: Pointer to BAT request structure.
+ * @q_num: Queue number.
+ * @buf_cnt: Number of buffers to allocate.
+ * @initial: Indicates if the ring is being populated for the first time.
+ *
+ * Allocate skb and store the start address of the data buffer into the BAT ring.
+ * If this is not the initial call, notify the HW about the new entries.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ERROR	- Error code from a failed sub-initialization.
+ */
+int t7xx_dpmaif_rx_buf_alloc(struct dpmaif_ctrl *dpmaif_ctrl,
+			     const struct dpmaif_bat_request *bat_req,
+			     const unsigned char q_num, const unsigned int buf_cnt,
+			     const bool initial)
+{
+	unsigned int i, bat_cnt, bat_max_cnt, hw_wr_idx, alloc_cnt = buf_cnt;
+	unsigned short bat_start_idx;
+	int ret;
+
+	if (!alloc_cnt || alloc_cnt > bat_req->bat_size_cnt)
+		return -EINVAL;
+
+	/* Check BAT buffer space */
+	bat_max_cnt = bat_req->bat_size_cnt;
+
+	bat_cnt = t7xx_ring_buf_rd_wr_count(bat_max_cnt, bat_req->bat_release_rd_idx,
+					    bat_req->bat_wr_idx, DPMAIF_WRITE);
+	if (alloc_cnt > bat_cnt)
+		return -ENOMEM;
+
+	bat_start_idx = bat_req->bat_wr_idx;
+
+	for (i = 0; i < alloc_cnt; i++) {
+		unsigned short cur_bat_idx = bat_start_idx + i;
+		struct dpmaif_bat_skb *cur_skb;
+		struct dpmaif_bat *cur_bat;
+
+		if (cur_bat_idx >= bat_max_cnt)
+			cur_bat_idx -= bat_max_cnt;
+
+		cur_skb = (struct dpmaif_bat_skb *)bat_req->bat_skb + cur_bat_idx;
+		if (!cur_skb->skb &&
+		    !t7xx_alloc_and_map_skb_info(dpmaif_ctrl, bat_req->pkt_buf_sz, cur_skb))
+			break;
+
+		cur_bat = (struct dpmaif_bat *)bat_req->bat_base + cur_bat_idx;
+		cur_bat->buffer_addr_ext = upper_32_bits(cur_skb->data_bus_addr);
+		cur_bat->p_buffer_addr = lower_32_bits(cur_skb->data_bus_addr);
+	}
+
+	if (!i)
+		return -ENOMEM;
+
+	ret = t7xx_dpmaif_update_bat_wr_idx(dpmaif_ctrl, q_num, i);
+	if (ret)
+		goto err_unmap_skbs;
+
+	if (!initial) {
+		ret = t7xx_dpmaif_dl_snd_hw_bat_cnt(dpmaif_ctrl, i);
+		if (ret)
+			goto err_unmap_skbs;
+
+		hw_wr_idx = t7xx_dpmaif_dl_get_bat_wr_idx(&dpmaif_ctrl->hif_hw_info,
+							  DPF_RX_QNO_DFT);
+		if (hw_wr_idx != bat_req->bat_wr_idx) {
+			ret = -EFAULT;
+			dev_err(dpmaif_ctrl->dev, "Write index mismatch in RX ring\n");
+			goto err_unmap_skbs;
+		}
+	}
+
+	return 0;
+
+err_unmap_skbs:
+	while (i--)
+		t7xx_unmap_bat_skb(dpmaif_ctrl->dev, bat_req->bat_skb, i);
+
+	return ret;
+}
+
+static int t7xx_dpmaifq_release_pit_entry(struct dpmaif_rx_queue *rxq,
+					  const unsigned short rel_entry_num)
+{
+	unsigned short old_sw_rel_idx, new_sw_rel_idx, old_hw_wr_idx;
+	int ret;
+
+	if (!rxq->que_started)
+		return 0;
+
+	if (rel_entry_num >= rxq->pit_size_cnt) {
+		dev_err(rxq->dpmaif_ctrl->dev, "Invalid PIT release index\n");
+		return -EINVAL;
+	}
+
+	old_sw_rel_idx = rxq->pit_release_rd_idx;
+	new_sw_rel_idx = old_sw_rel_idx + rel_entry_num;
+	old_hw_wr_idx = rxq->pit_wr_idx;
+	if (old_hw_wr_idx < old_sw_rel_idx && new_sw_rel_idx >= rxq->pit_size_cnt)
+		new_sw_rel_idx -= rxq->pit_size_cnt;
+
+	ret = t7xx_dpmaif_dlq_add_pit_remain_cnt(rxq->dpmaif_ctrl, rxq->index, rel_entry_num);
+	if (ret) {
+		dev_err(rxq->dpmaif_ctrl->dev, "PIT release failure: %d\n", ret);
+		return ret;
+	}
+
+	rxq->pit_release_rd_idx = new_sw_rel_idx;
+	return 0;
+}
+
+static void t7xx_dpmaif_set_bat_mask(struct device *dev, struct dpmaif_bat_request *bat_req,
+				     unsigned int idx)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&bat_req->mask_lock, flags);
+	bat_req->bat_mask[idx] = 1;
+	spin_unlock_irqrestore(&bat_req->mask_lock, flags);
+}
+
+static int t7xx_frag_bat_cur_bid_check(struct dpmaif_rx_queue *rxq,
+				       const unsigned int cur_bid)
+{
+	struct dpmaif_bat_request *bat_frag = rxq->bat_frag;
+	struct dpmaif_bat_page *bat_page;
+
+	if (cur_bid >= DPMAIF_FRG_COUNT)
+		return -EINVAL;
+
+	bat_page = bat_frag->bat_skb + cur_bid;
+	if (!bat_page->page)
+		return -EINVAL;
+
+	return 0;
+}
+
+static void t7xx_unmap_bat_page(struct device *dev, struct dpmaif_bat_page *bat_page_base,
+				unsigned int index)
+{
+	struct dpmaif_bat_page *bat_page = bat_page_base + index;
+
+	if (bat_page->page) {
+		dma_unmap_page(dev, bat_page->data_bus_addr, bat_page->data_len, DMA_FROM_DEVICE);
+		put_page(bat_page->page);
+		bat_page->page = NULL;
+	}
+}
+
+/**
+ * t7xx_dpmaif_rx_frag_alloc() - Allocate buffers for the Fragment BAT ring.
+ * @dpmaif_ctrl: Pointer to DPMAIF context structure.
+ * @bat_req: Pointer to BAT request structure.
+ * @buf_cnt: Number of buffers to allocate.
+ * @initial: Indicates if the ring is being populated for the first time.
+ *
+ * Fragment BAT is used when the received packet does not fit in a normal BAT entry.
+ * This function allocates a page fragment and stores the start address of the page
+ * into the Fragment BAT ring.
+ * If this is not the initial call, notify the HW about the new entries.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ERROR	- Error code from a failed sub-initialization.
+ */
+int t7xx_dpmaif_rx_frag_alloc(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
+			      const unsigned int buf_cnt, const bool initial)
+{
+	struct dpmaif_bat_page *bat_skb = bat_req->bat_skb;
+	unsigned short cur_bat_idx = bat_req->bat_wr_idx;
+	unsigned int buf_space;
+	int ret, i;
+
+	if (!buf_cnt || buf_cnt > bat_req->bat_size_cnt)
+		return -EINVAL;
+
+	buf_space = t7xx_ring_buf_rd_wr_count(bat_req->bat_size_cnt,
+					      bat_req->bat_release_rd_idx, bat_req->bat_wr_idx,
+					      DPMAIF_WRITE);
+	if (buf_cnt > buf_space) {
+		dev_err(dpmaif_ctrl->dev,
+			"Requested more buffers than the space available in RX frag ring\n");
+		return -EINVAL;
+	}
+
+	for (i = 0; i < buf_cnt; i++) {
+		struct dpmaif_bat_page *cur_page = bat_skb + cur_bat_idx;
+		struct dpmaif_bat *cur_bat;
+		dma_addr_t data_base_addr;
+
+		if (!cur_page->page) {
+			unsigned long offset;
+			struct page *page;
+			void *data;
+
+			data = netdev_alloc_frag(bat_req->pkt_buf_sz);
+			if (!data)
+				break;
+
+			page = virt_to_head_page(data);
+			offset = data - page_address(page);
+
+			data_base_addr = dma_map_page(dpmaif_ctrl->dev, page, offset,
+						      bat_req->pkt_buf_sz, DMA_FROM_DEVICE);
+			if (dma_mapping_error(dpmaif_ctrl->dev, data_base_addr)) {
+				dev_err(dpmaif_ctrl->dev, "DMA mapping fail\n");
+				put_page(virt_to_head_page(data));
+				break;
+			}
+
+			cur_page->page = page;
+			cur_page->data_bus_addr = data_base_addr;
+			cur_page->offset = offset;
+			cur_page->data_len = bat_req->pkt_buf_sz;
+		}
+
+		data_base_addr = cur_page->data_bus_addr;
+		cur_bat = (struct dpmaif_bat *)bat_req->bat_base + cur_bat_idx;
+		cur_bat->buffer_addr_ext = upper_32_bits(data_base_addr);
+		cur_bat->p_buffer_addr = lower_32_bits(data_base_addr);
+		cur_bat_idx = t7xx_ring_buf_get_next_wr_idx(bat_req->bat_size_cnt, cur_bat_idx);
+	}
+
+	bat_req->bat_wr_idx = cur_bat_idx;
+
+	if (!initial)
+		t7xx_dpmaif_dl_snd_hw_frg_cnt(dpmaif_ctrl, i);
+
+	ret = i < buf_cnt ? -ENOMEM : 0;
+	if (ret && initial) {
+		while (i--)
+			t7xx_unmap_bat_page(dpmaif_ctrl->dev, bat_req->bat_skb, i);
+	}
+
+	return ret;
+}
+
+static int t7xx_dpmaif_set_frag_to_skb(const struct dpmaif_rx_queue *rxq,
+				       const struct dpmaif_normal_pit *pkt_info,
+				       struct sk_buff *skb)
+{
+	unsigned long long data_bus_addr, data_base_addr;
+	struct device *dev = rxq->dpmaif_ctrl->dev;
+	struct dpmaif_bat_page *page_info;
+	unsigned int data_len;
+	int data_offset;
+
+	page_info = rxq->bat_frag->bat_skb;
+	page_info += t7xx_normal_pit_bid(pkt_info);
+	if (!page_info->page)
+		return -EINVAL;
+
+	dma_unmap_page(dev, page_info->data_bus_addr, page_info->data_len, DMA_FROM_DEVICE);
+
+	data_bus_addr = le32_to_cpu(pkt_info->data_addr_ext);
+	data_bus_addr = (data_bus_addr << 32) + le32_to_cpu(pkt_info->p_data_addr);
+	data_base_addr = page_info->data_bus_addr;
+	data_offset = data_bus_addr - data_base_addr;
+	data_offset += page_info->offset;
+	data_len = FIELD_GET(NORMAL_PIT_DATA_LEN, le32_to_cpu(pkt_info->pit_header));
+	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page_info->page,
+			data_offset, data_len, page_info->data_len);
+
+	page_info->page = NULL;
+	page_info->offset = 0;
+	page_info->data_len = 0;
+	return 0;
+}
+
+static int t7xx_dpmaif_get_frag(struct dpmaif_rx_queue *rxq,
+				const struct dpmaif_normal_pit *pkt_info,
+				const struct dpmaif_cur_rx_skb_info *skb_info)
+{
+	unsigned int cur_bid = t7xx_normal_pit_bid(pkt_info);
+	int ret;
+
+	ret = t7xx_frag_bat_cur_bid_check(rxq, cur_bid);
+	if (ret < 0)
+		return ret;
+
+	ret = t7xx_dpmaif_set_frag_to_skb(rxq, pkt_info, skb_info->cur_skb);
+	if (ret < 0) {
+		dev_err(rxq->dpmaif_ctrl->dev, "Failed to set frag data to skb: %d\n", ret);
+		return ret;
+	}
+
+	t7xx_dpmaif_set_bat_mask(rxq->dpmaif_ctrl->dev, rxq->bat_frag, cur_bid);
+	return 0;
+}
+
+static int t7xx_bat_cur_bid_check(struct dpmaif_rx_queue *rxq, const unsigned int cur_bid)
+{
+	struct dpmaif_bat_skb *bat_skb = rxq->bat_req->bat_skb;
+
+	bat_skb += cur_bid;
+	if (cur_bid >= DPMAIF_BAT_COUNT || !bat_skb->skb)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int t7xx_dpmaif_read_pit_seq(const struct dpmaif_normal_pit *pit)
+{
+	return FIELD_GET(NORMAL_PIT_PIT_SEQ, le32_to_cpu(pit->pit_footer));
+}
+
+static int t7xx_dpmaif_check_pit_seq(struct dpmaif_rx_queue *rxq,
+				     const struct dpmaif_normal_pit *pit)
+{
+	unsigned int cur_pit_seq, expect_pit_seq = rxq->expect_pit_seq;
+
+	if (read_poll_timeout_atomic(t7xx_dpmaif_read_pit_seq, cur_pit_seq,
+				     cur_pit_seq == expect_pit_seq, DPMAIF_POLL_PIT_TIME_US,
+				     DPMAIF_POLL_PIT_MAX_TIME_US, false, pit))
+		return -EFAULT;
+
+	rxq->expect_pit_seq++;
+	if (rxq->expect_pit_seq >= DPMAIF_DL_PIT_SEQ_VALUE)
+		rxq->expect_pit_seq = 0;
+
+	return 0;
+}
+
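+/* Count the contiguous run of consumed (masked) BAT entries starting at the
+ * release read index; only such a run can be released back to the HW and
+ * refilled in order.
+ */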
+static unsigned int t7xx_dpmaif_avail_pkt_bat_cnt(struct dpmaif_bat_request *bat_req)
+{
+	unsigned long flags;
+	unsigned int i;
+
+	spin_lock_irqsave(&bat_req->mask_lock, flags);
+	for (i = 0; i < bat_req->bat_size_cnt; i++) {
+		unsigned int index = bat_req->bat_release_rd_idx + i;
+
+		if (index >= bat_req->bat_size_cnt)
+			index -= bat_req->bat_size_cnt;
+
+		if (!bat_req->bat_mask[index])
+			break;
+	}
+
+	spin_unlock_irqrestore(&bat_req->mask_lock, flags);
+	return i;
+}
+
+static int t7xx_dpmaif_release_bat_entry(const struct dpmaif_rx_queue *rxq,
+					 const unsigned int rel_entry_num,
+					 const enum bat_type buf_type)
+{
+	unsigned short old_sw_rel_idx, new_sw_rel_idx, hw_rd_idx;
+	struct dpmaif_ctrl *dpmaif_ctrl = rxq->dpmaif_ctrl;
+	struct dpmaif_bat_request *bat;
+	unsigned long flags;
+	unsigned int i;
+
+	if (!rxq->que_started || !rel_entry_num)
+		return -EINVAL;
+
+	if (buf_type == BAT_TYPE_FRAG) {
+		bat = rxq->bat_frag;
+		hw_rd_idx = t7xx_dpmaif_dl_get_frg_rd_idx(&dpmaif_ctrl->hif_hw_info, rxq->index);
+	} else {
+		bat = rxq->bat_req;
+		hw_rd_idx = t7xx_dpmaif_dl_get_bat_rd_idx(&dpmaif_ctrl->hif_hw_info, rxq->index);
+	}
+
+	if (rel_entry_num >= bat->bat_size_cnt)
+		return -EINVAL;
+
+	old_sw_rel_idx = bat->bat_release_rd_idx;
+	new_sw_rel_idx = old_sw_rel_idx + rel_entry_num;
+
+	/* No need to release anything if the queue is empty */
+	if (bat->bat_wr_idx == old_sw_rel_idx)
+		return 0;
+
+	if (hw_rd_idx >= old_sw_rel_idx) {
+		if (new_sw_rel_idx > hw_rd_idx)
+			return -EINVAL;
+	}
+
+	if (new_sw_rel_idx >= bat->bat_size_cnt) {
+		new_sw_rel_idx -= bat->bat_size_cnt;
+		if (new_sw_rel_idx > hw_rd_idx)
+			return -EINVAL;
+	}
+
+	spin_lock_irqsave(&bat->mask_lock, flags);
+	for (i = 0; i < rel_entry_num; i++) {
+		unsigned int index = bat->bat_release_rd_idx + i;
+
+		if (index >= bat->bat_size_cnt)
+			index -= bat->bat_size_cnt;
+
+		bat->bat_mask[index] = 0;
+	}
+
+	spin_unlock_irqrestore(&bat->mask_lock, flags);
+	bat->bat_release_rd_idx = new_sw_rel_idx;
+	return rel_entry_num;
+}
+
+static int t7xx_dpmaif_pit_release_and_add(struct dpmaif_rx_queue *rxq)
+{
+	int ret;
+
+	if (rxq->pit_remain_release_cnt < DPMAIF_PIT_CNT_THRESHOLD)
+		return 0;
+
+	ret = t7xx_dpmaifq_release_pit_entry(rxq, rxq->pit_remain_release_cnt);
+	if (ret)
+		return ret;
+
+	rxq->pit_remain_release_cnt = 0;
+	return 0;
+}
+
+static int t7xx_dpmaif_bat_release_and_add(const struct dpmaif_rx_queue *rxq)
+{
+	unsigned int bid_cnt;
+	int ret;
+
+	bid_cnt = t7xx_dpmaif_avail_pkt_bat_cnt(rxq->bat_req);
+	if (bid_cnt < DPMAIF_BAT_CNT_THRESHOLD)
+		return 0;
+
+	ret = t7xx_dpmaif_release_bat_entry(rxq, bid_cnt, BAT_TYPE_NORMAL);
+	if (ret <= 0) {
+		dev_err(rxq->dpmaif_ctrl->dev, "Release PKT BAT failed: %d\n", ret);
+		return ret;
+	}
+
+	ret = t7xx_dpmaif_rx_buf_alloc(rxq->dpmaif_ctrl, rxq->bat_req, rxq->index, bid_cnt, false);
+	if (ret < 0)
+		dev_err(rxq->dpmaif_ctrl->dev, "Allocate new RX buffer failed: %d\n", ret);
+
+	return ret;
+}
+
+static int t7xx_dpmaif_frag_bat_release_and_add(const struct dpmaif_rx_queue *rxq)
+{
+	unsigned int bid_cnt;
+	int ret;
+
+	bid_cnt = t7xx_dpmaif_avail_pkt_bat_cnt(rxq->bat_frag);
+	if (bid_cnt < DPMAIF_BAT_CNT_THRESHOLD)
+		return 0;
+
+	ret = t7xx_dpmaif_release_bat_entry(rxq, bid_cnt, BAT_TYPE_FRAG);
+	if (ret <= 0) {
+		dev_err(rxq->dpmaif_ctrl->dev, "Release BAT entry failed: %d\n", ret);
+		return ret;
+	}
+
+	return t7xx_dpmaif_rx_frag_alloc(rxq->dpmaif_ctrl, rxq->bat_frag, bid_cnt, false);
+}
+
+static void t7xx_dpmaif_parse_msg_pit(const struct dpmaif_rx_queue *rxq,
+				      const struct dpmaif_msg_pit *msg_pit,
+				      struct dpmaif_cur_rx_skb_info *skb_info)
+{
+	skb_info->cur_chn_idx = FIELD_GET(MSG_PIT_CHANNEL_ID, le32_to_cpu(msg_pit->dword1));
+	skb_info->check_sum = FIELD_GET(MSG_PIT_CHECKSUM, le32_to_cpu(msg_pit->dword1));
+	skb_info->pit_dp = FIELD_GET(MSG_PIT_DP, le32_to_cpu(msg_pit->dword1));
+	skb_info->pkt_type = FIELD_GET(MSG_PIT_IP, le32_to_cpu(msg_pit->dword4));
+}
+
+static int t7xx_dpmaif_set_data_to_skb(const struct dpmaif_rx_queue *rxq,
+				       const struct dpmaif_normal_pit *pkt_info,
+				       struct dpmaif_cur_rx_skb_info *skb_info)
+{
+	unsigned long long data_bus_addr, data_base_addr;
+	struct device *dev = rxq->dpmaif_ctrl->dev;
+	struct dpmaif_bat_skb *bat_skb;
+	unsigned int data_len;
+	struct sk_buff *skb;
+	int data_offset;
+
+	bat_skb = rxq->bat_req->bat_skb;
+	bat_skb += t7xx_normal_pit_bid(pkt_info);
+	dma_unmap_single(dev, bat_skb->data_bus_addr, bat_skb->data_len, DMA_FROM_DEVICE);
+
+	data_bus_addr = le32_to_cpu(pkt_info->data_addr_ext);
+	data_bus_addr = (data_bus_addr << 32) + le32_to_cpu(pkt_info->p_data_addr);
+	data_base_addr = bat_skb->data_bus_addr;
+	data_offset = data_bus_addr - data_base_addr;
+	data_len = FIELD_GET(NORMAL_PIT_DATA_LEN, le32_to_cpu(pkt_info->pit_header));
+	skb = bat_skb->skb;
+	skb->len = 0;
+	skb_reset_tail_pointer(skb);
+	skb_reserve(skb, data_offset);
+
+	if (skb->tail + data_len > skb->end) {
+		dev_err(dev, "No buffer space available\n");
+		return -ENOBUFS;
+	}
+
+	skb_put(skb, data_len);
+	skb_info->cur_skb = skb;
+	bat_skb->skb = NULL;
+	return 0;
+}
+
+static int t7xx_dpmaif_get_rx_pkt(struct dpmaif_rx_queue *rxq,
+				  const struct dpmaif_normal_pit *pkt_info,
+				  struct dpmaif_cur_rx_skb_info *skb_info)
+{
+	unsigned int cur_bid = t7xx_normal_pit_bid(pkt_info);
+	int ret;
+
+	ret = t7xx_bat_cur_bid_check(rxq, cur_bid);
+	if (ret < 0)
+		return ret;
+
+	ret = t7xx_dpmaif_set_data_to_skb(rxq, pkt_info, skb_info);
+	if (ret < 0) {
+		dev_err(rxq->dpmaif_ctrl->dev, "RX set data to skb failed: %d\n", ret);
+		return ret;
+	}
+
+	t7xx_dpmaif_set_bat_mask(rxq->dpmaif_ctrl->dev, rxq->bat_req, cur_bid);
+	return 0;
+}
+
+static int t7xx_dpmaifq_rx_notify_hw(struct dpmaif_rx_queue *rxq)
+{
+	struct dpmaif_ctrl *dpmaif_ctrl = rxq->dpmaif_ctrl;
+	int ret;
+
+	queue_work(dpmaif_ctrl->bat_release_wq, &dpmaif_ctrl->bat_release_work);
+
+	ret = t7xx_dpmaif_pit_release_and_add(rxq);
+	if (ret < 0)
+		dev_err(dpmaif_ctrl->dev, "RXQ%u update PIT failed: %d\n", rxq->index, ret);
+
+	return ret;
+}
+
+static void t7xx_dpmaif_rx_skb_enqueue(struct dpmaif_rx_queue *rxq, struct sk_buff *skb)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&rxq->skb_list.lock, flags);
+	if (rxq->skb_list.qlen < rxq->skb_list_max_len)
+		__skb_queue_tail(&rxq->skb_list, skb);
+	else
+		dev_kfree_skb_any(skb);
+
+	spin_unlock_irqrestore(&rxq->skb_list.lock, flags);
+}
+
+static void t7xx_dpmaif_rx_skb(struct dpmaif_rx_queue *rxq,
+			       struct dpmaif_cur_rx_skb_info *skb_info)
+{
+	struct sk_buff *skb = skb_info->cur_skb;
+	u8 netif_id;
+
+	skb_info->cur_skb = NULL;
+
+	if (skb_info->pit_dp) {
+		dev_kfree_skb_any(skb);
+		return;
+	}
+
+	skb->ip_summed = skb_info->check_sum == DPMAIF_CS_RESULT_PASS ? CHECKSUM_UNNECESSARY :
+									CHECKSUM_NONE;
+	netif_id = FIELD_GET(NETIF_MASK, skb_info->cur_chn_idx);
+	skb->cb[RX_CB_NETIF_IDX] = netif_id;
+	skb->cb[RX_CB_PKT_TYPE] = skb_info->pkt_type;
+	t7xx_dpmaif_rx_skb_enqueue(rxq, skb);
+}
+
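+/* Walk the PIT ring starting at pit_rd_idx: message PITs carry per-packet
+ * metadata, payload PITs reference normal or fragment BAT buffers. Completed
+ * skbs are queued to the RX skb list and the push thread is woken every few
+ * packets (DPMAIF_RX_PUSH_THRESHOLD_MASK) and once more at the end.
+ */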
+static int t7xx_dpmaif_rx_start(struct dpmaif_rx_queue *rxq, const unsigned short pit_cnt,
+				const unsigned long timeout)
+{
+	struct device *dev = rxq->dpmaif_ctrl->dev;
+	struct dpmaif_cur_rx_skb_info *skb_info;
+	unsigned short rx_cnt, recv_skb_cnt = 0;
+	unsigned int cur_pit, pit_len;
+	int ret = 0;
+
+	pit_len = rxq->pit_size_cnt;
+	skb_info = &rxq->rx_data_info;
+	cur_pit = rxq->pit_rd_idx;
+
+	for (rx_cnt = 0; rx_cnt < pit_cnt; rx_cnt++) {
+		struct dpmaif_normal_pit *pkt_info;
+		u32 val;
+
+		if (!skb_info->msg_pit_received && time_after_eq(jiffies, timeout))
+			break;
+
+		pkt_info = (struct dpmaif_normal_pit *)rxq->pit_base + cur_pit;
+		if (t7xx_dpmaif_check_pit_seq(rxq, pkt_info)) {
+			dev_err_ratelimited(dev, "RXQ%u PIT sequence check failed\n", rxq->index);
+			return -EAGAIN;
+		}
+
+		val = FIELD_GET(NORMAL_PIT_PACKET_TYPE, le32_to_cpu(pkt_info->pit_header));
+		if (val == DES_PT_MSG) {
+			if (skb_info->msg_pit_received)
+				dev_err(dev, "RXQ%u received repeated PIT\n", rxq->index);
+
+			skb_info->msg_pit_received = true;
+			t7xx_dpmaif_parse_msg_pit(rxq, (struct dpmaif_msg_pit *)pkt_info,
+						  skb_info);
+		} else { /* DES_PT_PD */
+			val = FIELD_GET(NORMAL_PIT_BUFFER_TYPE, le32_to_cpu(pkt_info->pit_header));
+			if (val != PKT_BUF_FRAG)
+				ret = t7xx_dpmaif_get_rx_pkt(rxq, pkt_info, skb_info);
+			else if (!skb_info->cur_skb)
+				ret = -EINVAL;
+			else
+				ret = t7xx_dpmaif_get_frag(rxq, pkt_info, skb_info);
+
+			if (ret < 0) {
+				skb_info->err_payload = 1;
+				dev_err_ratelimited(dev, "RXQ%u error payload\n", rxq->index);
+			}
+
+			val = FIELD_GET(NORMAL_PIT_CONT, le32_to_cpu(pkt_info->pit_header));
+			if (!val) {
+				if (!skb_info->err_payload) {
+					t7xx_dpmaif_rx_skb(rxq, skb_info);
+				} else if (skb_info->cur_skb) {
+					dev_kfree_skb_any(skb_info->cur_skb);
+					skb_info->cur_skb = NULL;
+				}
+
+				memset(skb_info, 0, sizeof(*skb_info));
+
+				recv_skb_cnt++;
+				if (!(recv_skb_cnt & DPMAIF_RX_PUSH_THRESHOLD_MASK)) {
+					wake_up_all(&rxq->rx_wq);
+					recv_skb_cnt = 0;
+				}
+			}
+		}
+
+		cur_pit = t7xx_ring_buf_get_next_wr_idx(pit_len, cur_pit);
+		rxq->pit_rd_idx = cur_pit;
+		rxq->pit_remain_release_cnt++;
+
+		if (rx_cnt > 0 && !(rx_cnt % DPMAIF_NOTIFY_RELEASE_COUNT)) {
+			ret = t7xx_dpmaifq_rx_notify_hw(rxq);
+			if (ret < 0)
+				break;
+		}
+	}
+
+	if (recv_skb_cnt)
+		wake_up_all(&rxq->rx_wq);
+
+	if (!ret)
+		ret = t7xx_dpmaifq_rx_notify_hw(rxq);
+
+	if (ret)
+		return ret;
+
+	return rx_cnt;
+}
+
+static unsigned int t7xx_dpmaifq_poll_pit(struct dpmaif_rx_queue *rxq)
+{
+	unsigned short hw_wr_idx;
+	unsigned int pit_cnt;
+
+	if (!rxq->que_started)
+		return 0;
+
+	hw_wr_idx = t7xx_dpmaif_dl_dlq_pit_get_wr_idx(&rxq->dpmaif_ctrl->hif_hw_info, rxq->index);
+	pit_cnt = t7xx_ring_buf_rd_wr_count(rxq->pit_size_cnt, rxq->pit_rd_idx, hw_wr_idx,
+					    DPMAIF_READ);
+	rxq->pit_wr_idx = hw_wr_idx;
+	return pit_cnt;
+}
+
+static int t7xx_dpmaif_rx_data_collect(struct dpmaif_ctrl *dpmaif_ctrl,
+				       const unsigned char q_num, const int budget)
+{
+	struct dpmaif_rx_queue *rxq = &dpmaif_ctrl->rxq[q_num];
+	unsigned long time_limit;
+	unsigned int cnt;
+
+	time_limit = jiffies + msecs_to_jiffies(DPMAIF_WQ_TIME_LIMIT_MS);
+
+	do {
+		unsigned int rd_cnt;
+		int real_cnt;
+
+		cnt = t7xx_dpmaifq_poll_pit(rxq);
+		if (!cnt)
+			break;
+
+		if (!rxq->pit_base)
+			return -EAGAIN;
+
+		rd_cnt = cnt > budget ? budget : cnt;
+
+		real_cnt = t7xx_dpmaif_rx_start(rxq, rd_cnt, time_limit);
+		if (real_cnt < 0)
+			return real_cnt;
+
+		if (real_cnt < cnt)
+			return -EAGAIN;
+
+	} while (cnt);
+
+	return 0;
+}
+
+static void t7xx_dpmaif_do_rx(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_rx_queue *rxq)
+{
+	int ret;
+
+	ret = t7xx_dpmaif_rx_data_collect(dpmaif_ctrl, rxq->index, rxq->budget);
+	if (ret < 0) {
+		/* Try one more time */
+		queue_work(rxq->worker, &rxq->dpmaif_rxq_work);
+		t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+	} else {
+		t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+		t7xx_dpmaif_dlq_unmask_rx_done(&dpmaif_ctrl->hif_hw_info, rxq->index);
+	}
+}
+
+static void t7xx_dpmaif_rxq_work(struct work_struct *work)
+{
+	struct dpmaif_rx_queue *rxq = container_of(work, struct dpmaif_rx_queue, dpmaif_rxq_work);
+	struct dpmaif_ctrl *dpmaif_ctrl = rxq->dpmaif_ctrl;
+
+	atomic_set(&rxq->rx_processing, 1);
+	/* Ensure rx_processing is set to 1 before the RX flow actually begins */
+	smp_mb();
+
+	if (!rxq->que_started) {
+		atomic_set(&rxq->rx_processing, 0);
+		dev_err(dpmaif_ctrl->dev, "Work RXQ: %d has not been started\n", rxq->index);
+		return;
+	}
+
+	t7xx_dpmaif_do_rx(dpmaif_ctrl, rxq);
+
+	atomic_set(&rxq->rx_processing, 0);
+}
+
+void t7xx_dpmaif_irq_rx_done(struct dpmaif_ctrl *dpmaif_ctrl, const unsigned int que_mask)
+{
+	struct dpmaif_rx_queue *rxq;
+	int qno;
+
+	qno = ffs(que_mask) - 1;
+	if (qno < 0 || qno > DPMAIF_RXQ_NUM - 1) {
+		dev_err(dpmaif_ctrl->dev, "Invalid RXQ number: %d\n", qno);
+		return;
+	}
+
+	rxq = &dpmaif_ctrl->rxq[qno];
+	queue_work(rxq->worker, &rxq->dpmaif_rxq_work);
+}
+
+static void t7xx_dpmaif_base_free(const struct dpmaif_ctrl *dpmaif_ctrl,
+				  const struct dpmaif_bat_request *bat_req)
+{
+	if (bat_req->bat_base)
+		dma_free_coherent(dpmaif_ctrl->dev,
+				  bat_req->bat_size_cnt * sizeof(struct dpmaif_bat),
+				  bat_req->bat_base, bat_req->bat_bus_addr);
+}
+
+/**
+ * t7xx_dpmaif_bat_alloc() - Allocate the BAT ring buffer.
+ * @dpmaif_ctrl: Pointer to DPMAIF context structure.
+ * @bat_req: Pointer to BAT request structure.
+ * @buf_type: BAT ring type.
+ *
+ * This function allocates the BAT ring buffer shared with the HW device. It also allocates
+ * a buffer used to store information about the BAT skbs for later release.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ERROR	- Error code from a failed sub-initialization.
+ */
+int t7xx_dpmaif_bat_alloc(const struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
+			  const enum bat_type buf_type)
+{
+	int sw_buf_size;
+
+	if (buf_type == BAT_TYPE_FRAG) {
+		sw_buf_size = sizeof(struct dpmaif_bat_page);
+		bat_req->bat_size_cnt = DPMAIF_FRG_COUNT;
+		bat_req->pkt_buf_sz = DPMAIF_HW_FRG_PKTBUF;
+	} else {
+		sw_buf_size = sizeof(struct dpmaif_bat_skb);
+		bat_req->bat_size_cnt = DPMAIF_BAT_COUNT;
+		bat_req->pkt_buf_sz = NET_RX_BUF;
+	}
+
+	bat_req->skb_pkt_cnt = bat_req->bat_size_cnt;
+	bat_req->type = buf_type;
+	bat_req->bat_wr_idx = 0;
+	bat_req->bat_release_rd_idx = 0;
+
+	bat_req->bat_base = dma_alloc_coherent(dpmaif_ctrl->dev,
+					       bat_req->bat_size_cnt * sizeof(struct dpmaif_bat),
+					       &bat_req->bat_bus_addr, GFP_KERNEL | __GFP_ZERO);
+	if (!bat_req->bat_base)
+		return -ENOMEM;
+
+	/* For AP SW to record skb information */
+	bat_req->bat_skb = devm_kzalloc(dpmaif_ctrl->dev, bat_req->skb_pkt_cnt * sw_buf_size,
+					GFP_KERNEL);
+	if (!bat_req->bat_skb)
+		goto err_free_dma_mem;
+
+	bat_req->bat_mask = kcalloc(bat_req->bat_size_cnt, sizeof(unsigned char), GFP_KERNEL);
+	if (!bat_req->bat_mask)
+		goto err_free_dma_mem;
+
+	spin_lock_init(&bat_req->mask_lock);
+	atomic_set(&bat_req->refcnt, 0);
+	return 0;
+
+err_free_dma_mem:
+	t7xx_dpmaif_base_free(dpmaif_ctrl, bat_req);
+
+	return -ENOMEM;
+}
+
+void t7xx_dpmaif_bat_free(const struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req)
+{
+	if (!bat_req || !atomic_dec_and_test(&bat_req->refcnt))
+		return;
+
+	kfree(bat_req->bat_mask);
+	bat_req->bat_mask = NULL;
+
+	if (bat_req->bat_skb) {
+		unsigned int i;
+
+		for (i = 0; i < bat_req->bat_size_cnt; i++) {
+			if (bat_req->type == BAT_TYPE_FRAG)
+				t7xx_unmap_bat_page(dpmaif_ctrl->dev, bat_req->bat_skb, i);
+			else
+				t7xx_unmap_bat_skb(dpmaif_ctrl->dev, bat_req->bat_skb, i);
+		}
+	}
+
+	t7xx_dpmaif_base_free(dpmaif_ctrl, bat_req);
+}
+
+static int t7xx_dpmaif_rx_alloc(struct dpmaif_rx_queue *rxq)
+{
+	rxq->pit_size_cnt = DPMAIF_PIT_COUNT;
+	rxq->pit_rd_idx = 0;
+	rxq->pit_wr_idx = 0;
+	rxq->pit_release_rd_idx = 0;
+	rxq->expect_pit_seq = 0;
+	rxq->pit_remain_release_cnt = 0;
+	memset(&rxq->rx_data_info, 0, sizeof(rxq->rx_data_info));
+
+	rxq->pit_base = dma_alloc_coherent(rxq->dpmaif_ctrl->dev,
+					   rxq->pit_size_cnt * sizeof(struct dpmaif_normal_pit),
+					   &rxq->pit_bus_addr, GFP_KERNEL | __GFP_ZERO);
+	if (!rxq->pit_base)
+		return -ENOMEM;
+
+	rxq->bat_req = &rxq->dpmaif_ctrl->bat_req;
+	atomic_inc(&rxq->bat_req->refcnt);
+
+	rxq->bat_frag = &rxq->dpmaif_ctrl->bat_frag;
+	atomic_inc(&rxq->bat_frag->refcnt);
+	return 0;
+}
+
+static void t7xx_dpmaif_rx_buf_free(const struct dpmaif_rx_queue *rxq)
+{
+	if (!rxq->dpmaif_ctrl)
+		return;
+
+	t7xx_dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_req);
+	t7xx_dpmaif_bat_free(rxq->dpmaif_ctrl, rxq->bat_frag);
+
+	if (rxq->pit_base)
+		dma_free_coherent(rxq->dpmaif_ctrl->dev,
+				  rxq->pit_size_cnt * sizeof(struct dpmaif_normal_pit),
+				  rxq->pit_base, rxq->pit_bus_addr);
+}
+
+int t7xx_dpmaif_rxq_init(struct dpmaif_rx_queue *queue)
+{
+	int ret;
+
+	ret = t7xx_dpmaif_rx_alloc(queue);
+	if (ret < 0) {
+		dev_err(queue->dpmaif_ctrl->dev, "Failed to allocate RX buffers: %d\n", ret);
+		return ret;
+	}
+
+	INIT_WORK(&queue->dpmaif_rxq_work, t7xx_dpmaif_rxq_work);
+
+	queue->worker = alloc_workqueue("dpmaif_rx%d_worker",
+					WQ_UNBOUND | WQ_MEM_RECLAIM | WQ_HIGHPRI, 1, queue->index);
+	if (!queue->worker) {
+		ret = -ENOMEM;
+		goto err_free_rx_buffer;
+	}
+
+	init_waitqueue_head(&queue->rx_wq);
+	skb_queue_head_init(&queue->skb_list);
+	queue->skb_list_max_len = queue->bat_req->pkt_buf_sz;
+	queue->rx_thread = kthread_run(t7xx_dpmaif_net_rx_push_thread,
+				       queue, "dpmaif_rx%d_push", queue->index);
+
+	ret = PTR_ERR_OR_ZERO(queue->rx_thread);
+	if (ret)
+		goto err_free_workqueue;
+
+	return 0;
+
+err_free_workqueue:
+	destroy_workqueue(queue->worker);
+
+err_free_rx_buffer:
+	t7xx_dpmaif_rx_buf_free(queue);
+
+	return ret;
+}
+
+void t7xx_dpmaif_rxq_free(struct dpmaif_rx_queue *queue)
+{
+	if (queue->worker)
+		destroy_workqueue(queue->worker);
+
+	if (queue->rx_thread)
+		kthread_stop(queue->rx_thread);
+
+	skb_queue_purge(&queue->skb_list);
+	t7xx_dpmaif_rx_buf_free(queue);
+}
+
+static void t7xx_dpmaif_bat_release_work(struct work_struct *work)
+{
+	struct dpmaif_ctrl *dpmaif_ctrl = container_of(work, struct dpmaif_ctrl, bat_release_work);
+	struct dpmaif_rx_queue *rxq;
+
+	/* All RX queues share one BAT table, so use DPF_RX_QNO_DFT */
+	rxq = &dpmaif_ctrl->rxq[DPF_RX_QNO_DFT];
+	t7xx_dpmaif_bat_release_and_add(rxq);
+	t7xx_dpmaif_frag_bat_release_and_add(rxq);
+}
+
+int t7xx_dpmaif_bat_rel_wq_alloc(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	dpmaif_ctrl->bat_release_wq = alloc_workqueue("dpmaif_bat_release_work_queue",
+						      WQ_MEM_RECLAIM, 1);
+	if (!dpmaif_ctrl->bat_release_wq)
+		return -ENOMEM;
+
+	INIT_WORK(&dpmaif_ctrl->bat_release_work, t7xx_dpmaif_bat_release_work);
+	return 0;
+}
+
+void t7xx_dpmaif_bat_wq_rel(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	flush_work(&dpmaif_ctrl->bat_release_work);
+
+	if (dpmaif_ctrl->bat_release_wq) {
+		destroy_workqueue(dpmaif_ctrl->bat_release_wq);
+		dpmaif_ctrl->bat_release_wq = NULL;
+	}
+}
+
+/**
+ * t7xx_dpmaif_rx_stop() - Suspend RX flow.
+ * @dpmaif_ctrl: Pointer to data path control struct dpmaif_ctrl.
+ *
+ * Wait for all the RX work to finish executing and mark each RX queue as stopped.
+ */
+void t7xx_dpmaif_rx_stop(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	unsigned int i;
+
+	for (i = 0; i < DPMAIF_RXQ_NUM; i++) {
+		struct dpmaif_rx_queue *rxq = &dpmaif_ctrl->rxq[i];
+		int timeout, value;
+
+		flush_work(&rxq->dpmaif_rxq_work);
+
+		timeout = readx_poll_timeout_atomic(atomic_read, &rxq->rx_processing, value,
+						    !value, 0, DPMAIF_CHECK_INIT_TIMEOUT_US);
+		if (timeout)
+			dev_err(dpmaif_ctrl->dev, "Stop RX SW failed\n");
+
+		/* Ensure RX processing has stopped before we set rxq->que_started to false */
+		smp_mb();
+		rxq->que_started = false;
+	}
+}
+
+static void t7xx_dpmaif_stop_rxq(struct dpmaif_rx_queue *rxq)
+{
+	int cnt, j = 0;
+
+	flush_work(&rxq->dpmaif_rxq_work);
+	rxq->que_started = false;
+
+	do {
+		cnt = t7xx_ring_buf_rd_wr_count(rxq->pit_size_cnt, rxq->pit_rd_idx,
+						rxq->pit_wr_idx, DPMAIF_READ);
+
+		if (++j >= DPMAIF_MAX_CHECK_COUNT) {
+			dev_err(rxq->dpmaif_ctrl->dev, "Stop RX SW failed, %d\n", cnt);
+			break;
+		}
+	} while (cnt);
+
+	memset(rxq->pit_base, 0, rxq->pit_size_cnt * sizeof(struct dpmaif_normal_pit));
+	memset(rxq->bat_req->bat_base, 0, rxq->bat_req->bat_size_cnt * sizeof(struct dpmaif_bat));
+	memset(rxq->bat_req->bat_mask, 0, rxq->bat_req->bat_size_cnt * sizeof(unsigned char));
+	memset(&rxq->rx_data_info, 0, sizeof(rxq->rx_data_info));
+
+	rxq->pit_rd_idx = 0;
+	rxq->pit_wr_idx = 0;
+	rxq->pit_release_rd_idx = 0;
+	rxq->expect_pit_seq = 0;
+	rxq->pit_remain_release_cnt = 0;
+	rxq->bat_req->bat_release_rd_idx = 0;
+	rxq->bat_req->bat_wr_idx = 0;
+	rxq->bat_frag->bat_release_rd_idx = 0;
+	rxq->bat_frag->bat_wr_idx = 0;
+}
+
+void t7xx_dpmaif_rx_clear(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	int i;
+
+	for (i = 0; i < DPMAIF_RXQ_NUM; i++)
+		t7xx_dpmaif_stop_rxq(&dpmaif_ctrl->rxq[i]);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
new file mode 100644
index 000000000000..16df19b20866
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.h
@@ -0,0 +1,115 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#ifndef __T7XX_HIF_DPMA_RX_H__
+#define __T7XX_HIF_DPMA_RX_H__
+
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#include "t7xx_hif_dpmaif.h"
+
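+/* skb->cb[] layout used to pass per-packet metadata from the DPMAIF RX path
+ * to the network layer: the netif index (masked with NETIF_MASK) and the
+ * packet type (IPv4 or IPv6).
+ */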
+#define NETIF_MASK		GENMASK(4, 0)
+#define RX_CB_NETIF_IDX		0
+#define RX_CB_PKT_TYPE		1
+
+#define PKT_TYPE_IP4		0
+#define PKT_TYPE_IP6		1
+
+/* Structure of DL PIT */
+struct dpmaif_normal_pit {
+	__le32			pit_header;
+	__le32			p_data_addr;
+	__le32			data_addr_ext;
+	__le32			pit_footer;
+};
+
+/* PIT header fields */
+#define NORMAL_PIT_DATA_LEN	GENMASK(31, 16)
+#define NORMAL_PIT_BUFFER_ID	GENMASK(15, 3)
+#define NORMAL_PIT_BUFFER_TYPE	BIT(2)
+#define NORMAL_PIT_CONT		BIT(1)
+#define NORMAL_PIT_PACKET_TYPE	BIT(0)
+/* PIT footer fields */
+#define NORMAL_PIT_DLQ_DONE	GENMASK(31, 30)
+#define NORMAL_PIT_ULQ_DONE	GENMASK(29, 24)
+#define NORMAL_PIT_HEADER_OFFSET GENMASK(23, 19)
+#define NORMAL_PIT_BI_F		GENMASK(18, 17)
+#define NORMAL_PIT_IG		BIT(16)
+#define NORMAL_PIT_RES		GENMASK(15, 11)
+#define NORMAL_PIT_H_BID	GENMASK(10, 8)
+#define NORMAL_PIT_PIT_SEQ	GENMASK(7, 0)
+
+struct dpmaif_msg_pit {
+	__le32			dword1;
+	__le32			dword2;
+	__le32			dword3;
+	__le32			dword4;
+};
+
+#define MSG_PIT_DP		BIT(31)
+#define MSG_PIT_RES		GENMASK(30, 27)
+#define MSG_PIT_NETWORK_TYPE	GENMASK(26, 24)
+#define MSG_PIT_CHANNEL_ID	GENMASK(23, 16)
+#define MSG_PIT_RES2		GENMASK(15, 12)
+#define MSG_PIT_HPC_IDX		GENMASK(11, 8)
+#define MSG_PIT_SRC_QID		GENMASK(7, 5)
+#define MSG_PIT_ERROR_BIT	BIT(4)
+#define MSG_PIT_CHECKSUM	GENMASK(3, 2)
+#define MSG_PIT_CONT		BIT(1)
+#define MSG_PIT_PACKET_TYPE	BIT(0)
+
+#define MSG_PIT_HP_IDX		GENMASK(31, 27)
+#define MSG_PIT_CMD		GENMASK(26, 24)
+#define MSG_PIT_RES3		GENMASK(23, 21)
+#define MSG_PIT_FLOW		GENMASK(20, 16)
+#define MSG_PIT_COUNT		GENMASK(15, 0)
+
+#define MSG_PIT_HASH		GENMASK(31, 24)
+#define MSG_PIT_RES4		GENMASK(23, 18)
+#define MSG_PIT_PRO		GENMASK(17, 16)
+#define MSG_PIT_VBID		GENMASK(15, 3)
+#define MSG_PIT_RES5		GENMASK(2, 0)
+
+#define MSG_PIT_DLQ_DONE	GENMASK(31, 30)
+#define MSG_PIT_ULQ_DONE	GENMASK(29, 24)
+#define MSG_PIT_IP		BIT(23)
+#define MSG_PIT_RES6		BIT(22)
+#define MSG_PIT_MR		GENMASK(21, 20)
+#define MSG_PIT_RES7		GENMASK(19, 17)
+#define MSG_PIT_IG		BIT(16)
+#define MSG_PIT_RES8		GENMASK(15, 11)
+#define MSG_PIT_H_BID		GENMASK(10, 8)
+#define MSG_PIT_PIT_SEQ		GENMASK(7, 0)
+
+int t7xx_dpmaif_rxq_init(struct dpmaif_rx_queue *queue);
+void t7xx_dpmaif_rx_clear(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_bat_rel_wq_alloc(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_rx_buf_alloc(struct dpmaif_ctrl *dpmaif_ctrl,
+			     const struct dpmaif_bat_request *bat_req, const unsigned char q_num,
+			     const unsigned int buf_cnt, const bool first_time);
+int t7xx_dpmaif_rx_frag_alloc(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
+			      const unsigned int buf_cnt, const bool first_time);
+void t7xx_dpmaif_rx_stop(struct dpmaif_ctrl *dpmaif_ctrl);
+void t7xx_dpmaif_irq_rx_done(struct dpmaif_ctrl *dpmaif_ctrl, const unsigned int que_mask);
+void t7xx_dpmaif_rxq_free(struct dpmaif_rx_queue *queue);
+void t7xx_dpmaif_bat_wq_rel(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_bat_alloc(const struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
+			  const enum bat_type buf_type);
+void t7xx_dpmaif_bat_free(const struct dpmaif_ctrl *dpmaif_ctrl,
+			  struct dpmaif_bat_request *bat_req);
+
+#endif /* __T7XX_HIF_DPMA_RX_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
new file mode 100644
index 000000000000..3c601492aa16
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
@@ -0,0 +1,724 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#include <linux/atomic.h>
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/err.h>
+#include <linux/gfp.h>
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/minmax.h>
+#include <linux/netdevice.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/skbuff.h>
+#include <linux/types.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include "t7xx_common.h"
+#include "t7xx_dpmaif.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_hif_dpmaif_tx.h"
+#include "t7xx_pci.h"
+
+#define DPMAIF_SKB_TX_BURST_CNT	5
+#define DPMAIF_DRB_ENTRY_SIZE	6144
+
+/* DRB dtype */
+#define DES_DTYP_PD		0
+#define DES_DTYP_MSG		1
+
+static unsigned int t7xx_dpmaif_update_drb_rd_idx(struct dpmaif_ctrl *dpmaif_ctrl,
+						  unsigned char q_num)
+{
+	struct dpmaif_tx_queue *txq = &dpmaif_ctrl->txq[q_num];
+	unsigned short old_sw_rd_idx, new_hw_rd_idx;
+	unsigned int hw_read_idx;
+	unsigned int drb_cnt;
+	unsigned long flags;
+
+	if (!txq->que_started)
+		return 0;
+
+	old_sw_rd_idx = txq->drb_rd_idx;
+	hw_read_idx = t7xx_dpmaif_ul_get_rd_idx(&dpmaif_ctrl->hif_hw_info, q_num);
+
+	new_hw_rd_idx = hw_read_idx / DPMAIF_UL_DRB_ENTRY_WORD;
+	if (new_hw_rd_idx >= DPMAIF_DRB_ENTRY_SIZE) {
+		dev_err(dpmaif_ctrl->dev, "Out of range read index: %u\n", new_hw_rd_idx);
+		return 0;
+	}
+
+	if (old_sw_rd_idx <= new_hw_rd_idx)
+		drb_cnt = new_hw_rd_idx - old_sw_rd_idx;
+	else
+		drb_cnt = txq->drb_size_cnt - old_sw_rd_idx + new_hw_rd_idx;
+
+	spin_lock_irqsave(&txq->tx_lock, flags);
+	txq->drb_rd_idx = new_hw_rd_idx;
+	spin_unlock_irqrestore(&txq->tx_lock, flags);
+	return drb_cnt;
+}
+
+static unsigned short t7xx_dpmaif_release_tx_buffer(struct dpmaif_ctrl *dpmaif_ctrl,
+						    unsigned char q_num, unsigned int release_cnt)
+{
+	struct dpmaif_tx_queue *txq = &dpmaif_ctrl->txq[q_num];
+	struct dpmaif_callbacks *cb = dpmaif_ctrl->callbacks;
+	struct dpmaif_drb_skb *cur_drb_skb, *drb_skb_base;
+	struct dpmaif_drb_pd *cur_drb, *drb_base;
+	unsigned int drb_cnt, i;
+	unsigned short cur_idx;
+	unsigned long flags;
+
+	drb_skb_base = txq->drb_skb_base;
+	drb_base = txq->drb_base;
+
+	spin_lock_irqsave(&txq->tx_lock, flags);
+	drb_cnt = txq->drb_size_cnt;
+	cur_idx = txq->drb_release_rd_idx;
+	spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+	for (i = 0; i < release_cnt; i++) {
+		cur_drb = drb_base + cur_idx;
+		if (FIELD_GET(DRB_PD_DTYP, le32_to_cpu(cur_drb->header)) == DES_DTYP_PD) {
+			cur_drb_skb = drb_skb_base + cur_idx;
+			if (!FIELD_GET(DRB_SKB_IS_MSG, cur_drb_skb->config))
+				dma_unmap_single(dpmaif_ctrl->dev, cur_drb_skb->bus_addr,
+						 cur_drb_skb->data_len, DMA_TO_DEVICE);
+
+			if (!FIELD_GET(DRB_PD_CONT, le32_to_cpu(cur_drb->header))) {
+				if (!cur_drb_skb->skb) {
+					dev_err(dpmaif_ctrl->dev,
+						"txq%u: DRB check fail, invalid skb\n", q_num);
+					continue;
+				}
+
+				dev_kfree_skb_any(cur_drb_skb->skb);
+			}
+
+			cur_drb_skb->skb = NULL;
+		} else {
+			struct dpmaif_drb_msg *drb_msg = (struct dpmaif_drb_msg *)cur_drb;
+
+			txq->last_ch_id = FIELD_GET(DRB_MSG_CHANNEL_ID,
+						    le32_to_cpu(drb_msg->header_dw2));
+		}
+
+		spin_lock_irqsave(&txq->tx_lock, flags);
+		cur_idx = t7xx_ring_buf_get_next_wr_idx(drb_cnt, cur_idx);
+		txq->drb_release_rd_idx = cur_idx;
+		spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+		if (atomic_inc_return(&txq->tx_budget) > txq->drb_size_cnt / 8)
+			cb->state_notify(dpmaif_ctrl->t7xx_dev, DMPAIF_TXQ_STATE_IRQ, txq->index);
+	}
+
+	if (FIELD_GET(DRB_PD_CONT, le32_to_cpu(cur_drb->header)))
+		dev_err(dpmaif_ctrl->dev, "txq%u: DRB not marked as the last one\n", q_num);
+
+	return i;
+}
+
+static int t7xx_dpmaif_tx_release(struct dpmaif_ctrl *dpmaif_ctrl,
+				  unsigned char q_num, unsigned int budget)
+{
+	struct dpmaif_tx_queue *txq = &dpmaif_ctrl->txq[q_num];
+	unsigned int rel_cnt, real_rel_cnt;
+
+	/* Update read index from HW */
+	t7xx_dpmaif_update_drb_rd_idx(dpmaif_ctrl, q_num);
+
+	rel_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
+					    txq->drb_rd_idx, DPMAIF_READ);
+
+	real_rel_cnt = min_not_zero(budget, rel_cnt);
+	if (real_rel_cnt)
+		real_rel_cnt = t7xx_dpmaif_release_tx_buffer(dpmaif_ctrl, q_num, real_rel_cnt);
+
+	return real_rel_cnt < rel_cnt ? -EAGAIN : 0;
+}
+
+static bool t7xx_dpmaif_drb_ring_not_empty(struct dpmaif_tx_queue *txq)
+{
+	return !!t7xx_dpmaif_update_drb_rd_idx(txq->dpmaif_ctrl, txq->index);
+}
+
+static void t7xx_dpmaif_tx_done(struct work_struct *work)
+{
+	struct dpmaif_tx_queue *txq = container_of(work, struct dpmaif_tx_queue, dpmaif_tx_work);
+	struct dpmaif_ctrl *dpmaif_ctrl = txq->dpmaif_ctrl;
+	int ret;
+
+	ret = t7xx_dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt);
+	if (ret == -EAGAIN ||
+	    (t7xx_dpmaif_ul_clr_done(&dpmaif_ctrl->hif_hw_info, txq->index) &&
+	     t7xx_dpmaif_drb_ring_not_empty(txq))) {
+		queue_work(dpmaif_ctrl->txq[txq->index].worker,
+			   &dpmaif_ctrl->txq[txq->index].dpmaif_tx_work);
+		/* Give the device time to enter the low power state */
+		t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+	} else {
+		t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+		t7xx_dpmaif_unmask_ulq_intr(dpmaif_ctrl, txq->index);
+	}
+}
+
+static void t7xx_setup_msg_drb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+			       unsigned short cur_idx, unsigned int pkt_len, unsigned short count_l,
+			       unsigned char channel_id)
+{
+	struct dpmaif_drb_msg *drb_base = dpmaif_ctrl->txq[q_num].drb_base;
+	struct dpmaif_drb_msg *drb = drb_base + cur_idx;
+
+	drb->header_dw1 = cpu_to_le32(FIELD_PREP(DRB_MSG_DTYP, DES_DTYP_MSG));
+	drb->header_dw1 |= cpu_to_le32(FIELD_PREP(DRB_MSG_CONT, 1));
+	drb->header_dw1 |= cpu_to_le32(FIELD_PREP(DRB_MSG_PACKET_LEN, pkt_len));
+
+	drb->header_dw2 = cpu_to_le32(FIELD_PREP(DRB_MSG_COUNT_L, count_l));
+	drb->header_dw2 |= cpu_to_le32(FIELD_PREP(DRB_MSG_CHANNEL_ID, channel_id));
+	drb->header_dw2 |= cpu_to_le32(FIELD_PREP(DRB_MSG_L4_CHK, 1));
+}
+
+static void t7xx_setup_payload_drb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+				   unsigned short cur_idx, dma_addr_t data_addr,
+				   unsigned int pkt_size, char last_one)
+{
+	struct dpmaif_drb_pd *drb_base = dpmaif_ctrl->txq[q_num].drb_base;
+	struct dpmaif_drb_pd *drb = drb_base + cur_idx;
+
+	drb->header &= cpu_to_le32(~DRB_PD_DTYP);
+	drb->header |= cpu_to_le32(FIELD_PREP(DRB_PD_DTYP, DES_DTYP_PD));
+	drb->header &= cpu_to_le32(~DRB_PD_CONT);
+
+	if (!last_one)
+		drb->header |= cpu_to_le32(FIELD_PREP(DRB_PD_CONT, 1));
+
+	drb->header &= cpu_to_le32(~DRB_PD_DATA_LEN);
+	drb->header |= cpu_to_le32(FIELD_PREP(DRB_PD_DATA_LEN, pkt_size));
+	drb->p_data_addr = cpu_to_le32(lower_32_bits(data_addr));
+	drb->data_addr_ext = cpu_to_le32(upper_32_bits(data_addr));
+}
+
+static void t7xx_record_drb_skb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
+				unsigned short cur_idx, struct sk_buff *skb, unsigned short is_msg,
+				bool is_frag, bool is_last_one, dma_addr_t bus_addr,
+				unsigned int data_len)
+{
+	struct dpmaif_drb_skb *drb_skb_base = dpmaif_ctrl->txq[q_num].drb_skb_base;
+	struct dpmaif_drb_skb *drb_skb = drb_skb_base + cur_idx;
+
+	drb_skb->skb = skb;
+	drb_skb->bus_addr = bus_addr;
+	drb_skb->data_len = data_len;
+	drb_skb->config = FIELD_PREP(DRB_SKB_DRB_IDX, cur_idx);
+	drb_skb->config |= FIELD_PREP(DRB_SKB_IS_MSG, is_msg);
+	drb_skb->config |= FIELD_PREP(DRB_SKB_IS_FRAG, is_frag);
+	drb_skb->config |= FIELD_PREP(DRB_SKB_IS_LAST, is_last_one);
+}
+
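+/* Convert one skb into DRB entries: a message DRB describing the packet is
+ * written first, followed by one payload DRB for the linear data and one per
+ * page fragment. On DMA mapping failure the DRB write index is rolled back.
+ */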
+static int t7xx_dpmaif_add_skb_to_ring(struct dpmaif_ctrl *dpmaif_ctrl, struct sk_buff *skb)
+{
+	unsigned int wr_cnt, send_cnt, payload_cnt;
+	unsigned short cur_idx, drb_wr_idx_backup;
+	bool is_frag, is_last_one = false;
+	int qtype = skb->cb[TX_CB_QTYPE];
+	struct skb_shared_info *info;
+	struct dpmaif_tx_queue *txq;
+	unsigned int data_len;
+	dma_addr_t bus_addr;
+	unsigned long flags;
+	void *data_addr;
+
+	txq = &dpmaif_ctrl->txq[qtype];
+	if (!txq->que_started || dpmaif_ctrl->state != DPMAIF_STATE_PWRON)
+		return -ENODEV;
+
+	atomic_set(&txq->tx_processing, 1);
+	/* Ensure tx_processing is set to 1 before the TX flow actually begins */
+	smp_mb();
+
+	info = skb_shinfo(skb);
+	if (info->frag_list)
+		dev_warn_ratelimited(dpmaif_ctrl->dev, "frag_list not supported\n");
+
+	payload_cnt = info->nr_frags + 1;
+	/* One payload DRB per fragment, one for skb->data, plus one message DRB */
+	send_cnt = payload_cnt + 1;
+
+	spin_lock_irqsave(&txq->tx_lock, flags);
+	cur_idx = txq->drb_wr_idx;
+	drb_wr_idx_backup = cur_idx;
+
+	txq->drb_wr_idx += send_cnt;
+	if (txq->drb_wr_idx >= txq->drb_size_cnt)
+		txq->drb_wr_idx -= txq->drb_size_cnt;
+
+	t7xx_setup_msg_drb(dpmaif_ctrl, txq->index, cur_idx, skb->len, 0, skb->cb[TX_CB_NETIF_IDX]);
+	t7xx_record_drb_skb(dpmaif_ctrl, txq->index, cur_idx, skb, 1, 0, 0, 0, 0);
+	spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+	cur_idx = t7xx_ring_buf_get_next_wr_idx(txq->drb_size_cnt, cur_idx);
+
+	for (wr_cnt = 0; wr_cnt < payload_cnt; wr_cnt++) {
+		if (!wr_cnt) {
+			data_len = skb_headlen(skb);
+			data_addr = skb->data;
+			is_frag = false;
+		} else {
+			skb_frag_t *frag = info->frags + wr_cnt - 1;
+
+			data_len = skb_frag_size(frag);
+			data_addr = skb_frag_address(frag);
+			is_frag = true;
+		}
+
+		if (wr_cnt == payload_cnt - 1)
+			is_last_one = true;
+
+		/* TX mapping */
+		bus_addr = dma_map_single(dpmaif_ctrl->dev, data_addr, data_len, DMA_TO_DEVICE);
+		if (dma_mapping_error(dpmaif_ctrl->dev, bus_addr)) {
+			dev_err(dpmaif_ctrl->dev, "DMA mapping fail\n");
+			atomic_set(&txq->tx_processing, 0);
+
+			spin_lock_irqsave(&txq->tx_lock, flags);
+			txq->drb_wr_idx = drb_wr_idx_backup;
+			spin_unlock_irqrestore(&txq->tx_lock, flags);
+			return -ENOMEM;
+		}
+
+		spin_lock_irqsave(&txq->tx_lock, flags);
+		t7xx_setup_payload_drb(dpmaif_ctrl, txq->index, cur_idx, bus_addr, data_len,
+				       is_last_one);
+		t7xx_record_drb_skb(dpmaif_ctrl, txq->index, cur_idx, skb, 0, is_frag,
+				    is_last_one, bus_addr, data_len);
+		spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+		cur_idx = t7xx_ring_buf_get_next_wr_idx(txq->drb_size_cnt, cur_idx);
+	}
+
+	atomic_sub(send_cnt, &txq->tx_budget);
+	atomic_set(&txq->tx_processing, 0);
+
+	return 0;
+}
+
+static bool t7xx_tx_lists_are_all_empty(const struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	int i;
+
+	for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+		if (!list_empty(&dpmaif_ctrl->txq[i].tx_skb_queue))
+			return false;
+	}
+
+	return true;
+}
+
+/* Currently, only the default TX queue is used */
+static int t7xx_select_tx_queue(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	return TXQ_TYPE_DEFAULT;
+}
+
+static int t7xx_txq_burst_send_skb(struct dpmaif_tx_queue *txq)
+{
+	int drb_remain_cnt, i;
+	unsigned long flags;
+	int drb_cnt = 0;
+	int ret = 0;
+
+	spin_lock_irqsave(&txq->tx_lock, flags);
+	drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
+						   txq->drb_wr_idx, DPMAIF_WRITE);
+	spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+	for (i = 0; i < DPMAIF_SKB_TX_BURST_CNT; i++) {
+		struct sk_buff *skb;
+
+		spin_lock_irqsave(&txq->tx_skb_lock, flags);
+		skb = list_first_entry_or_null(&txq->tx_skb_queue, struct sk_buff, list);
+		spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
+
+		if (!skb)
+			break;
+
+		if (drb_remain_cnt < skb->cb[TX_CB_DRB_CNT]) {
+			spin_lock_irqsave(&txq->tx_lock, flags);
+			drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt,
+								   txq->drb_release_rd_idx,
+								   txq->drb_wr_idx, DPMAIF_WRITE);
+			spin_unlock_irqrestore(&txq->tx_lock, flags);
+			continue;
+		}
+
+		drb_remain_cnt -= skb->cb[TX_CB_DRB_CNT];
+
+		ret = t7xx_dpmaif_add_skb_to_ring(txq->dpmaif_ctrl, skb);
+		if (ret < 0) {
+			dev_err(txq->dpmaif_ctrl->dev,
+				"Failed to add skb to device's ring: %d\n", ret);
+			break;
+		}
+
+		drb_cnt += skb->cb[TX_CB_DRB_CNT];
+		spin_lock_irqsave(&txq->tx_skb_lock, flags);
+		list_del(&skb->list);
+		txq->tx_submit_skb_cnt--;
+		spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
+	}
+
+	if (drb_cnt > 0) {
+		txq->drb_lack = false;
+		ret = drb_cnt;
+	} else if (ret == -ENOMEM) {
+		txq->drb_lack = true;
+	}
+
+	return ret;
+}
+
+static bool t7xx_check_all_txq_drb_lack(const struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	unsigned char i;
+
+	for (i = 0; i < DPMAIF_TXQ_NUM; i++)
+		if (!list_empty(&dpmaif_ctrl->txq[i].tx_skb_queue) &&
+		    !dpmaif_ctrl->txq[i].drb_lack)
+			return false;
+
+	return true;
+}
+
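+/* Drain the per-queue skb lists: write DRBs for each skb, then tell the HW
+ * how many new DRB entries were added.
+ */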
+static void t7xx_do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	do {
+		int txq_id;
+
+		txq_id = t7xx_select_tx_queue(dpmaif_ctrl);
+		if (txq_id >= 0) {
+			struct dpmaif_tx_queue *txq;
+			int ret;
+
+			txq = &dpmaif_ctrl->txq[txq_id];
+
+			ret = t7xx_txq_burst_send_skb(txq);
+			if (ret > 0) {
+				int drb_send_cnt = ret;
+
+				ret = t7xx_dpmaif_ul_update_hw_drb_cnt(dpmaif_ctrl,
+								       (unsigned char)txq_id,
+								       drb_send_cnt *
+								       DPMAIF_UL_DRB_ENTRY_WORD);
+				if (ret < 0)
+					dev_err(dpmaif_ctrl->dev,
+						"txq%d: Failed to update DRB count in HW\n",
+						txq_id);
+			} else if (t7xx_check_all_txq_drb_lack(dpmaif_ctrl)) {
+				usleep_range(10, 20);
+			}
+		}
+
+		cond_resched();
+	} while (!t7xx_tx_lists_are_all_empty(dpmaif_ctrl) && !kthread_should_stop() &&
+		 (dpmaif_ctrl->state == DPMAIF_STATE_PWRON));
+}
+
+static int t7xx_dpmaif_tx_hw_push_thread(void *arg)
+{
+	struct dpmaif_ctrl *dpmaif_ctrl = arg;
+
+	while (!kthread_should_stop()) {
+		if (t7xx_tx_lists_are_all_empty(dpmaif_ctrl) ||
+		    dpmaif_ctrl->state != DPMAIF_STATE_PWRON) {
+			if (wait_event_interruptible(dpmaif_ctrl->tx_wq,
+						     (!t7xx_tx_lists_are_all_empty(dpmaif_ctrl) &&
+						     dpmaif_ctrl->state == DPMAIF_STATE_PWRON) ||
+						     kthread_should_stop()))
+				continue;
+
+			if (kthread_should_stop())
+				break;
+		}
+
+		t7xx_do_tx_hw_push(dpmaif_ctrl);
+	}
+
+	return 0;
+}
+
+int t7xx_dpmaif_tx_thread_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	init_waitqueue_head(&dpmaif_ctrl->tx_wq);
+	dpmaif_ctrl->tx_thread = kthread_run(t7xx_dpmaif_tx_hw_push_thread,
+					     dpmaif_ctrl, "dpmaif_tx_hw_push");
+	return PTR_ERR_OR_ZERO(dpmaif_ctrl->tx_thread);
+}
+
+void t7xx_dpmaif_tx_thread_rel(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	if (dpmaif_ctrl->tx_thread)
+		kthread_stop(dpmaif_ctrl->tx_thread);
+}
+
+static unsigned char t7xx_get_drb_cnt_per_skb(struct sk_buff *skb)
+{
+	/* Normal DRB (frags data + skb linear data) + msg DRB */
+	return skb_shinfo(skb)->nr_frags + 2;
+}
+
+static bool t7xx_check_tx_queue_drb_available(struct dpmaif_tx_queue *txq,
+					      unsigned int send_drb_cnt)
+{
+	unsigned int drb_remain_cnt;
+	unsigned long flags;
+
+	spin_lock_irqsave(&txq->tx_lock, flags);
+	drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
+						   txq->drb_wr_idx, DPMAIF_WRITE);
+	spin_unlock_irqrestore(&txq->tx_lock, flags);
+
+	return drb_remain_cnt >= send_drb_cnt;
+}
+
+/**
+ * t7xx_dpmaif_tx_send_skb() - Add skb to the transmit queue.
+ * @dpmaif_ctrl: Pointer to struct dpmaif_ctrl.
+ * @txqt: Queue type to xmit on (normal or fast).
+ * @skb: Pointer to the skb to transmit.
+ *
+ * Add the skb to the queue of skbs to be transmitted.
+ * Wake up the thread that pushes the skbs from the queue to the HW.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ERROR	- Error code on failure.
+ */
+int t7xx_dpmaif_tx_send_skb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int txqt, struct sk_buff *skb)
+{
+	bool tx_drb_available = true;
+	struct dpmaif_tx_queue *txq;
+	struct dpmaif_callbacks *cb;
+	unsigned int send_drb_cnt;
+	unsigned long flags;
+
+	send_drb_cnt = t7xx_get_drb_cnt_per_skb(skb);
+
+	txq = &dpmaif_ctrl->txq[txqt];
+	if (!(txq->tx_skb_stat++ % DPMAIF_SKB_TX_BURST_CNT))
+		tx_drb_available = t7xx_check_tx_queue_drb_available(txq, send_drb_cnt);
+
+	if (!tx_drb_available || txq->tx_submit_skb_cnt >= txq->tx_list_max_len) {
+		cb = dpmaif_ctrl->callbacks;
+		cb->state_notify(dpmaif_ctrl->t7xx_dev, DMPAIF_TXQ_STATE_FULL, txqt);
+		return -EBUSY;
+	}
+
+	skb->cb[TX_CB_QTYPE] = txqt;
+	skb->cb[TX_CB_DRB_CNT] = send_drb_cnt;
+
+	spin_lock_irqsave(&txq->tx_skb_lock, flags);
+	list_add_tail(&skb->list, &txq->tx_skb_queue);
+	txq->tx_submit_skb_cnt++;
+	spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
+	wake_up(&dpmaif_ctrl->tx_wq);
+
+	return 0;
+}
+
+void t7xx_dpmaif_irq_tx_done(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int que_mask)
+{
+	int i;
+
+	for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+		if (que_mask & BIT(i))
+			queue_work(dpmaif_ctrl->txq[i].worker, &dpmaif_ctrl->txq[i].dpmaif_tx_work);
+	}
+}
+
+static int t7xx_dpmaif_tx_drb_buf_init(struct dpmaif_tx_queue *txq)
+{
+	size_t brb_skb_size, brb_pd_size;
+
+	brb_pd_size = DPMAIF_DRB_ENTRY_SIZE * sizeof(struct dpmaif_drb_pd);
+	brb_skb_size = DPMAIF_DRB_ENTRY_SIZE * sizeof(struct dpmaif_drb_skb);
+
+	txq->drb_size_cnt = DPMAIF_DRB_ENTRY_SIZE;
+
+	/* For HW && AP SW */
+	txq->drb_base = dma_alloc_coherent(txq->dpmaif_ctrl->dev, brb_pd_size,
+					   &txq->drb_bus_addr, GFP_KERNEL | __GFP_ZERO);
+	if (!txq->drb_base)
+		return -ENOMEM;
+
+	/* For AP SW to record the skb information */
+	txq->drb_skb_base = devm_kzalloc(txq->dpmaif_ctrl->dev, brb_skb_size, GFP_KERNEL);
+	if (!txq->drb_skb_base) {
+		dma_free_coherent(txq->dpmaif_ctrl->dev, brb_pd_size,
+				  txq->drb_base, txq->drb_bus_addr);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void t7xx_dpmaif_tx_free_drb_skb(struct dpmaif_tx_queue *txq)
+{
+	struct dpmaif_drb_skb *drb_skb, *drb_skb_base;
+	unsigned int i;
+
+	drb_skb_base = txq->drb_skb_base;
+	if (!drb_skb_base)
+		return;
+
+	for (i = 0; i < txq->drb_size_cnt; i++) {
+		drb_skb = drb_skb_base + i;
+		if (!drb_skb->skb)
+			continue;
+
+		if (!FIELD_GET(DRB_SKB_IS_MSG, drb_skb->config))
+			dma_unmap_single(txq->dpmaif_ctrl->dev, drb_skb->bus_addr,
+					 drb_skb->data_len, DMA_TO_DEVICE);
+
+		if (FIELD_GET(DRB_SKB_IS_LAST, drb_skb->config)) {
+			kfree_skb(drb_skb->skb);
+			drb_skb->skb = NULL;
+		}
+	}
+}
+
+static void t7xx_dpmaif_tx_drb_buf_rel(struct dpmaif_tx_queue *txq)
+{
+	if (txq->drb_base)
+		dma_free_coherent(txq->dpmaif_ctrl->dev,
+				  txq->drb_size_cnt * sizeof(struct dpmaif_drb_pd),
+				  txq->drb_base, txq->drb_bus_addr);
+
+	t7xx_dpmaif_tx_free_drb_skb(txq);
+}
+
+/**
+ * t7xx_dpmaif_txq_init() - Initialize TX queue.
+ * @txq: Pointer to struct dpmaif_tx_queue.
+ *
+ * Initialize the TX queue data structure and allocate memory for it to use.
+ *
+ * Return:
+ * * 0		- Success.
+ * * -ERROR	- Error code from failure sub-initializations.
+ */
+int t7xx_dpmaif_txq_init(struct dpmaif_tx_queue *txq)
+{
+	int ret;
+
+	spin_lock_init(&txq->tx_skb_lock);
+	INIT_LIST_HEAD(&txq->tx_skb_queue);
+	txq->tx_submit_skb_cnt = 0;
+	txq->tx_skb_stat = 0;
+	txq->tx_list_max_len = DPMAIF_DRB_ENTRY_SIZE / 2;
+	txq->drb_lack = false;
+
+	init_waitqueue_head(&txq->req_wq);
+	atomic_set(&txq->tx_budget, DPMAIF_DRB_ENTRY_SIZE);
+
+	ret = t7xx_dpmaif_tx_drb_buf_init(txq);
+	if (ret) {
+		dev_err(txq->dpmaif_ctrl->dev, "Failed to initialize DRB buffers: %d\n", ret);
+		return ret;
+	}
+
+	txq->worker = alloc_workqueue("md_dpmaif_tx%d_worker", WQ_UNBOUND | WQ_MEM_RECLAIM |
+				      (txq->index ? 0 : WQ_HIGHPRI), 1, txq->index);
+	if (!txq->worker)
+		return -ENOMEM;
+
+	INIT_WORK(&txq->dpmaif_tx_work, t7xx_dpmaif_tx_done);
+	spin_lock_init(&txq->tx_lock);
+	return 0;
+}
+
+void t7xx_dpmaif_txq_free(struct dpmaif_tx_queue *txq)
+{
+	struct sk_buff *skb, *skb_next;
+	unsigned long flags;
+
+	if (txq->worker)
+		destroy_workqueue(txq->worker);
+
+	spin_lock_irqsave(&txq->tx_skb_lock, flags);
+	list_for_each_entry_safe(skb, skb_next, &txq->tx_skb_queue, list) {
+		list_del(&skb->list);
+		dev_kfree_skb_any(skb);
+	}
+
+	spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
+	t7xx_dpmaif_tx_drb_buf_rel(txq);
+}
+
+void t7xx_dpmaif_tx_stop(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	int i;
+
+	for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
+		struct dpmaif_tx_queue *txq;
+		int count;
+
+		txq = &dpmaif_ctrl->txq[i];
+		txq->que_started = false;
+		/* Ensure que_started is cleared before polling tx_processing */
+		smp_mb();
+
+		/* Confirm that SW will not transmit */
+		count = 0;
+
+		while (atomic_read(&txq->tx_processing)) {
+			if (++count >= DPMAIF_MAX_CHECK_COUNT) {
+				dev_err(dpmaif_ctrl->dev, "TX queue stop failed\n");
+				break;
+			}
+		}
+	}
+}
+
+static void t7xx_dpmaif_txq_flush_rel(struct dpmaif_tx_queue *txq)
+{
+	txq->que_started = false;
+
+	cancel_work_sync(&txq->dpmaif_tx_work);
+	flush_work(&txq->dpmaif_tx_work);
+	t7xx_dpmaif_tx_free_drb_skb(txq);
+
+	txq->drb_rd_idx = 0;
+	txq->drb_wr_idx = 0;
+	txq->drb_release_rd_idx = 0;
+}
+
+void t7xx_dpmaif_tx_clear(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	int i;
+
+	for (i = 0; i < DPMAIF_TXQ_NUM; i++)
+		t7xx_dpmaif_txq_flush_rel(&dpmaif_ctrl->txq[i]);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h
new file mode 100644
index 000000000000..3e8b1f93d8b9
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.h
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#ifndef __T7XX_HIF_DPMA_TX_H__
+#define __T7XX_HIF_DPMA_TX_H__
+
+#include <linux/bits.h>
+#include <linux/skbuff.h>
+#include <linux/types.h>
+
+#include "t7xx_common.h"
+#include "t7xx_hif_dpmaif.h"
+
+/* SKB control buffer indexed values */
+#define TX_CB_NETIF_IDX		0
+#define TX_CB_QTYPE		1
+#define TX_CB_DRB_CNT		2
+
+/* UL DRB */
+struct dpmaif_drb_pd {
+	__le32			header;
+	__le32			p_data_addr;
+	__le32			data_addr_ext;
+	__le32			reserved2;
+};
+
+/* Header fields */
+#define DRB_PD_DATA_LEN		((u32)GENMASK(31, 16))
+#define DRB_PD_RES		GENMASK(15, 3)
+#define DRB_PD_CONT		BIT(2)
+#define DRB_PD_DTYP		GENMASK(1, 0)
+
+struct dpmaif_drb_msg {
+	__le32	header_dw1;
+	__le32	header_dw2;
+	__le32	reserved4;
+	__le32	reserved5;
+};
+
+#define DRB_MSG_PACKET_LEN	GENMASK(31, 16)
+#define DRB_MSG_DW1_RES		GENMASK(15, 3)
+#define DRB_MSG_CONT		BIT(2)
+#define DRB_MSG_DTYP		GENMASK(1, 0)
+
+#define DRB_MSG_DW2_RES		GENMASK(31, 30)
+#define DRB_MSG_L4_CHK		BIT(29)
+#define DRB_MSG_IP_CHK		BIT(28)
+#define DRB_MSG_RES2		BIT(27)
+#define DRB_MSG_NETWORK_TYPE	GENMASK(26, 24)
+#define DRB_MSG_CHANNEL_ID	GENMASK(23, 16)
+#define DRB_MSG_COUNT_L		GENMASK(15, 0)
+
+struct dpmaif_drb_skb {
+	struct sk_buff		*skb;
+	dma_addr_t		bus_addr;
+	unsigned short		data_len;
+	u16			config;
+};
+
+#define DRB_SKB_IS_LAST		BIT(15)
+#define DRB_SKB_IS_FRAG		BIT(14)
+#define DRB_SKB_IS_MSG		BIT(13)
+#define DRB_SKB_DRB_IDX		GENMASK(12, 0)
+
+int t7xx_dpmaif_tx_send_skb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int txqt,
+			    struct sk_buff *skb);
+void t7xx_dpmaif_tx_thread_rel(struct dpmaif_ctrl *dpmaif_ctrl);
+int t7xx_dpmaif_tx_thread_init(struct dpmaif_ctrl *dpmaif_ctrl);
+void t7xx_dpmaif_txq_free(struct dpmaif_tx_queue *txq);
+void t7xx_dpmaif_irq_tx_done(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int que_mask);
+int t7xx_dpmaif_txq_init(struct dpmaif_tx_queue *txq);
+void t7xx_dpmaif_tx_stop(struct dpmaif_ctrl *dpmaif_ctrl);
+void t7xx_dpmaif_tx_clear(struct dpmaif_ctrl *dpmaif_ctrl);
+
+#endif /* __T7XX_HIF_DPMA_TX_H__ */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next v4 09/13] net: wwan: t7xx: Add WWAN network interface
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
                   ` (7 preceding siblings ...)
  2022-01-14  1:06 ` [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-02-10 10:45   ` Ilpo Järvinen
  2022-01-14  1:06 ` [PATCH net-next v4 10/13] net: wwan: t7xx: Introduce power management support Ricardo Martinez
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

From: Haijun Liu <haijun.liu@mediatek.com>

Creates the Cross Core Modem Network Interface (CCMNI) which implements
the wwan_ops for registration with the WWAN framework. CCMNI also
implements the net_device_ops functions used by the network device.
Network device operations include open, close, start transmission, TX
timeout, change MTU, and select queue.
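
For reference, a simplified sketch of the wiring described above (the names
match this patch; error handling and most of the setup fields and callbacks
are omitted, see the full code in the diff below):

	static const struct net_device_ops ccmni_netdev_ops = {
		.ndo_open       = t7xx_ccmni_open,
		.ndo_stop       = t7xx_ccmni_close,
		.ndo_start_xmit = t7xx_ccmni_start_xmit,
	};

	/* wwan_ops.setup installs the net_device_ops on each new link */
	static void t7xx_ccmni_wwan_setup(struct net_device *dev)
	{
		dev->netdev_ops = &ccmni_netdev_ops;
	}

	/* The WWAN core creates the netdev for the default IP MUX session */
	wwan_register_ops(&t7xx_dev->pdev->dev, &ccmni_wwan_ops, ctlb,
			  IP_MUX_SESSION_DEFAULT);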

Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
Co-developed-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 drivers/net/wwan/t7xx/Makefile         |   1 +
 drivers/net/wwan/t7xx/t7xx_modem_ops.c |  11 +-
 drivers/net/wwan/t7xx/t7xx_netdev.c    | 433 +++++++++++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_netdev.h    |  61 ++++
 4 files changed, 505 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.c
 create mode 100644 drivers/net/wwan/t7xx/t7xx_netdev.h

diff --git a/drivers/net/wwan/t7xx/Makefile b/drivers/net/wwan/t7xx/Makefile
index 04a9ba50dc14..dc6a7d682c15 100644
--- a/drivers/net/wwan/t7xx/Makefile
+++ b/drivers/net/wwan/t7xx/Makefile
@@ -17,3 +17,4 @@ mtk_t7xx-y:=	t7xx_pci.o \
 		t7xx_hif_dpmaif_tx.o \
 		t7xx_hif_dpmaif_rx.o  \
 		t7xx_dpmaif.o \
+		t7xx_netdev.o
diff --git a/drivers/net/wwan/t7xx/t7xx_modem_ops.c b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
index ec55845e7ac0..3680b0b8205a 100644
--- a/drivers/net/wwan/t7xx/t7xx_modem_ops.c
+++ b/drivers/net/wwan/t7xx/t7xx_modem_ops.c
@@ -35,6 +35,7 @@
 #include "t7xx_hif_cldma.h"
 #include "t7xx_mhccif.h"
 #include "t7xx_modem_ops.h"
+#include "t7xx_netdev.h"
 #include "t7xx_pci.h"
 #include "t7xx_pcie_mac.h"
 #include "t7xx_port.h"
@@ -652,10 +653,14 @@ int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
 	if (ret)
 		goto err_destroy_hswq;
 
-	ret = t7xx_cldma_init(md, md->md_ctrl[ID_CLDMA1]);
+	ret = t7xx_ccmni_init(t7xx_dev);
 	if (ret)
 		goto err_uninit_fsm;
 
+	ret = t7xx_cldma_init(md, md->md_ctrl[ID_CLDMA1]);
+	if (ret)
+		goto err_uninit_ccmni;
+
 	ret = t7xx_port_proxy_init(md);
 	if (ret)
 		goto err_uninit_cldma;
@@ -668,6 +673,9 @@ int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
 err_uninit_cldma:
 	t7xx_cldma_exit(md->md_ctrl[ID_CLDMA1]);
 
+err_uninit_ccmni:
+	t7xx_ccmni_exit(t7xx_dev);
+
 err_uninit_fsm:
 	t7xx_fsm_uninit(md);
 
@@ -689,6 +697,7 @@ void t7xx_md_exit(struct t7xx_pci_dev *t7xx_dev)
 	t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_PRE_STOP, FSM_CMD_FLAG_WAIT_FOR_COMPLETION);
 	t7xx_port_proxy_uninit(md->port_prox);
 	t7xx_cldma_exit(md->md_ctrl[ID_CLDMA1]);
+	t7xx_ccmni_exit(t7xx_dev);
 	t7xx_fsm_uninit(md);
 	destroy_workqueue(md->handshake_wq);
 }
diff --git a/drivers/net/wwan/t7xx/t7xx_netdev.c b/drivers/net/wwan/t7xx/t7xx_netdev.c
new file mode 100644
index 000000000000..b4a2ef2c7b17
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_netdev.c
@@ -0,0 +1,433 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Eliot Lee <eliot.lee@intel.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *  Sreehari Kancharla <sreehari.kancharla@intel.com>
+ */
+
+#include <linux/atomic.h>
+#include <linux/dev_printk.h>
+#include <linux/device.h>
+#include <linux/gfp.h>
+#include <linux/if_arp.h>
+#include <linux/if_ether.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/netdev_features.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/types.h>
+#include <linux/wwan.h>
+#include <net/pkt_sched.h>
+
+#include "t7xx_common.h"
+#include "t7xx_hif_dpmaif_rx.h"
+#include "t7xx_hif_dpmaif_tx.h"
+#include "t7xx_netdev.h"
+#include "t7xx_pci.h"
+#include "t7xx_state_monitor.h"
+
+#define IP_MUX_SESSION_DEFAULT	0
+
+static u16 t7xx_ccmni_select_queue(struct net_device *dev, struct sk_buff *skb,
+				   struct net_device *sb_dev)
+{
+	return TXQ_TYPE_DEFAULT;
+}
+
+static int t7xx_ccmni_open(struct net_device *dev)
+{
+	struct t7xx_ccmni *ccmni = wwan_netdev_drvpriv(dev);
+
+	netif_carrier_on(dev);
+	netif_tx_start_all_queues(dev);
+	atomic_inc(&ccmni->usage);
+	return 0;
+}
+
+static int t7xx_ccmni_close(struct net_device *dev)
+{
+	struct t7xx_ccmni *ccmni = wwan_netdev_drvpriv(dev);
+
+	if (atomic_dec_return(&ccmni->usage) < 0)
+		return -EINVAL;
+
+	netif_carrier_off(dev);
+	netif_tx_disable(dev);
+	return 0;
+}
+
+static int t7xx_ccmni_send_packet(struct t7xx_ccmni *ccmni, struct sk_buff *skb,
+				  unsigned int txqt)
+{
+	struct t7xx_ccmni_ctrl *ctlb = ccmni->ctlb;
+
+	skb->cb[TX_CB_NETIF_IDX] = ccmni->index;
+
+	if (t7xx_dpmaif_tx_send_skb(ctlb->hif_ctrl, txqt, skb))
+		return NETDEV_TX_BUSY;
+
+	return 0;
+}
+
+static int t7xx_ccmni_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+	struct t7xx_ccmni *ccmni = wwan_netdev_drvpriv(dev);
+	int skb_len = skb->len;
+
+	/* Drop the packet if it exceeds the MTU or lacks headroom for the CCCI header */
+	if (skb->len > dev->mtu || skb_headroom(skb) < sizeof(struct ccci_header)) {
+		dev_kfree_skb(skb);
+		dev->stats.tx_dropped++;
+		return NETDEV_TX_OK;
+	}
+
+	if (t7xx_ccmni_send_packet(ccmni, skb, TXQ_TYPE_DEFAULT))
+		return NETDEV_TX_BUSY;
+
+	dev->stats.tx_packets++;
+	dev->stats.tx_bytes += skb_len;
+
+	return NETDEV_TX_OK;
+}
+
+static void t7xx_ccmni_tx_timeout(struct net_device *dev, unsigned int __always_unused txqueue)
+{
+	struct t7xx_ccmni *ccmni = wwan_netdev_drvpriv(dev);
+
+	dev->stats.tx_errors++;
+
+	if (atomic_read(&ccmni->usage) > 0)
+		netif_tx_wake_all_queues(dev);
+}
+
+static const struct net_device_ops ccmni_netdev_ops = {
+	.ndo_open	  = t7xx_ccmni_open,
+	.ndo_stop	  = t7xx_ccmni_close,
+	.ndo_start_xmit   = t7xx_ccmni_start_xmit,
+	.ndo_tx_timeout   = t7xx_ccmni_tx_timeout,
+	.ndo_select_queue = t7xx_ccmni_select_queue,
+};
+
+static void t7xx_ccmni_start(struct t7xx_ccmni_ctrl *ctlb)
+{
+	struct t7xx_ccmni *ccmni;
+	int i;
+
+	for (i = 0; i < ctlb->nic_dev_num; i++) {
+		ccmni = ctlb->ccmni_inst[i];
+		if (!ccmni)
+			continue;
+
+		if (atomic_read(&ccmni->usage) > 0) {
+			netif_tx_start_all_queues(ccmni->dev);
+			netif_carrier_on(ccmni->dev);
+		}
+	}
+}
+
+static void t7xx_ccmni_pre_stop(struct t7xx_ccmni_ctrl *ctlb)
+{
+	struct t7xx_ccmni *ccmni;
+	int i;
+
+	for (i = 0; i < ctlb->nic_dev_num; i++) {
+		ccmni = ctlb->ccmni_inst[i];
+		if (!ccmni)
+			continue;
+
+		if (atomic_read(&ccmni->usage) > 0)
+			netif_tx_disable(ccmni->dev);
+	}
+}
+
+static void t7xx_ccmni_post_stop(struct t7xx_ccmni_ctrl *ctlb)
+{
+	struct t7xx_ccmni *ccmni;
+	int i;
+
+	for (i = 0; i < ctlb->nic_dev_num; i++) {
+		ccmni = ctlb->ccmni_inst[i];
+		if (!ccmni)
+			continue;
+
+		if (atomic_read(&ccmni->usage) > 0)
+			netif_carrier_off(ccmni->dev);
+	}
+}
+
+static void t7xx_ccmni_wwan_setup(struct net_device *dev)
+{
+	dev->header_ops = NULL;
+	dev->hard_header_len += sizeof(struct ccci_header);
+
+	dev->mtu = ETH_DATA_LEN;
+	dev->max_mtu = CCMNI_MTU_MAX;
+	dev->tx_queue_len = DEFAULT_TX_QUEUE_LEN;
+	dev->watchdog_timeo = CCMNI_NETDEV_WDT_TO;
+	/* CCMNI is a pure IP device */
+	dev->flags = IFF_POINTOPOINT | IFF_NOARP;
+
+	/* Not supporting VLAN */
+	dev->features = NETIF_F_VLAN_CHALLENGED;
+
+	dev->features |= NETIF_F_SG;
+	dev->hw_features |= NETIF_F_SG;
+
+	/* Uplink checksum offload */
+	dev->features |= NETIF_F_HW_CSUM;
+	dev->hw_features |= NETIF_F_HW_CSUM;
+
+	/* Downlink checksum offload */
+	dev->features |= NETIF_F_RXCSUM;
+	dev->hw_features |= NETIF_F_RXCSUM;
+
+	/* Use kernel default free_netdev() function */
+	dev->needs_free_netdev = true;
+
+	/* No need to free again because of free_netdev() */
+	dev->priv_destructor = NULL;
+	dev->type = ARPHRD_NONE;
+
+	dev->netdev_ops = &ccmni_netdev_ops;
+}
+
+static int t7xx_ccmni_wwan_newlink(void *ctxt, struct net_device *dev, u32 if_id,
+				   struct netlink_ext_ack *extack)
+{
+	struct t7xx_ccmni_ctrl *ctlb = ctxt;
+	struct t7xx_ccmni *ccmni;
+	int ret;
+
+	if (if_id >= ARRAY_SIZE(ctlb->ccmni_inst))
+		return -EINVAL;
+
+	ccmni = wwan_netdev_drvpriv(dev);
+	ccmni->index = if_id;
+	ccmni->ctlb = ctlb;
+	ccmni->dev = dev;
+	atomic_set(&ccmni->usage, 0);
+	ctlb->ccmni_inst[if_id] = ccmni;
+
+	ret = register_netdevice(dev);
+	if (ret)
+		return ret;
+
+	netif_device_attach(dev);
+	return 0;
+}
+
+static void t7xx_ccmni_wwan_dellink(void *ctxt, struct net_device *dev, struct list_head *head)
+{
+	struct t7xx_ccmni *ccmni = wwan_netdev_drvpriv(dev);
+	struct t7xx_ccmni_ctrl *ctlb = ctxt;
+	u8 if_id = ccmni->index;
+
+	if (if_id >= ARRAY_SIZE(ctlb->ccmni_inst))
+		return;
+
+	if (WARN_ON(ctlb->ccmni_inst[if_id] != ccmni))
+		return;
+
+	unregister_netdevice(dev);
+}
+
+static const struct wwan_ops ccmni_wwan_ops = {
+	.priv_size = sizeof(struct t7xx_ccmni),
+	.setup     = t7xx_ccmni_wwan_setup,
+	.newlink   = t7xx_ccmni_wwan_newlink,
+	.dellink   = t7xx_ccmni_wwan_dellink,
+};
+
+static int t7xx_ccmni_md_state_callback(enum md_state state, void *para)
+{
+	struct t7xx_ccmni_ctrl *ctlb = para;
+	int ret = 0;
+
+	ctlb->md_sta = state;
+
+	switch (state) {
+	case MD_STATE_READY:
+		t7xx_ccmni_start(ctlb);
+		break;
+
+	case MD_STATE_EXCEPTION:
+	case MD_STATE_STOPPED:
+		t7xx_ccmni_pre_stop(ctlb);
+
+		ret = t7xx_dpmaif_md_state_callback(ctlb->hif_ctrl, state);
+		if (ret < 0)
+			dev_err(ctlb->hif_ctrl->dev,
+				"dpmaif md state callback err, md_sta=%d\n", state);
+
+		t7xx_ccmni_post_stop(ctlb);
+		break;
+
+	case MD_STATE_WAITING_FOR_HS1:
+	case MD_STATE_WAITING_TO_STOP:
+		ret = t7xx_dpmaif_md_state_callback(ctlb->hif_ctrl, state);
+		if (ret < 0)
+			dev_err(ctlb->hif_ctrl->dev,
+				"dpmaif md state callback err, md_sta=%d\n", state);
+
+		break;
+
+	default:
+		break;
+	}
+
+	return ret;
+}
+
+static void init_md_status_notifier(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct t7xx_ccmni_ctrl	*ctlb = t7xx_dev->ccmni_ctlb;
+	struct t7xx_fsm_notifier *md_status_notifier;
+
+	md_status_notifier = &ctlb->md_status_notify;
+	INIT_LIST_HEAD(&md_status_notifier->entry);
+	md_status_notifier->notifier_fn = t7xx_ccmni_md_state_callback;
+	md_status_notifier->data = ctlb;
+
+	t7xx_fsm_notifier_register(t7xx_dev->md, md_status_notifier);
+}
+
+static void t7xx_ccmni_recv_skb(struct t7xx_pci_dev *t7xx_dev, struct sk_buff *skb)
+{
+	struct t7xx_ccmni *ccmni;
+	struct net_device *net_dev;
+	int pkt_type, skb_len;
+	u8 netif_id;
+
+	netif_id = skb->cb[RX_CB_NETIF_IDX];
+	ccmni = t7xx_dev->ccmni_ctlb->ccmni_inst[netif_id];
+	if (!ccmni) {
+		dev_kfree_skb(skb);
+		return;
+	}
+
+	net_dev = ccmni->dev;
+	skb->dev = net_dev;
+
+	pkt_type = skb->cb[RX_CB_PKT_TYPE];
+	if (pkt_type == PKT_TYPE_IP6)
+		skb->protocol = htons(ETH_P_IPV6);
+	else
+		skb->protocol = htons(ETH_P_IP);
+
+	skb_len = skb->len;
+	netif_rx_any_context(skb);
+	net_dev->stats.rx_packets++;
+	net_dev->stats.rx_bytes += skb_len;
+}
+
+static void t7xx_ccmni_queue_tx_irq_notify(struct t7xx_ccmni_ctrl *ctlb, int qno)
+{
+	struct t7xx_ccmni *ccmni = ctlb->ccmni_inst[0];
+	struct netdev_queue *net_queue;
+
+	if (netif_running(ccmni->dev) && atomic_read(&ccmni->usage) > 0) {
+		if (ctlb->capability & NIC_CAP_CCMNI_MQ) {
+			net_queue = netdev_get_tx_queue(ccmni->dev, qno);
+			if (netif_tx_queue_stopped(net_queue))
+				netif_tx_wake_queue(net_queue);
+		} else if (netif_queue_stopped(ccmni->dev)) {
+			netif_wake_queue(ccmni->dev);
+		}
+	}
+}
+
+static void t7xx_ccmni_queue_tx_full_notify(struct t7xx_ccmni_ctrl *ctlb, int qno)
+{
+	struct t7xx_ccmni *ccmni = ctlb->ccmni_inst[0];
+	struct netdev_queue *net_queue;
+
+	if (atomic_read(&ccmni->usage) > 0) {
+		netdev_err(ccmni->dev, "TX queue %d is full\n", qno);
+
+		if (ctlb->capability & NIC_CAP_CCMNI_MQ) {
+			net_queue = netdev_get_tx_queue(ccmni->dev, qno);
+			netif_tx_stop_queue(net_queue);
+		} else {
+			netif_stop_queue(ccmni->dev);
+		}
+	}
+}
+
+static void t7xx_ccmni_queue_state_notify(struct t7xx_pci_dev *t7xx_dev,
+					  enum dpmaif_txq_state state, int qno)
+{
+	struct t7xx_ccmni_ctrl *ctlb = t7xx_dev->ccmni_ctlb;
+
+	if (!(ctlb->capability & NIC_CAP_TXBUSY_STOP) ||
+	    ctlb->md_sta != MD_STATE_READY)
+		return;
+
+	if (!ctlb->ccmni_inst[0]) {
+		dev_warn(&t7xx_dev->pdev->dev, "No netdev registered yet\n");
+		return;
+	}
+
+	if (state == DMPAIF_TXQ_STATE_IRQ)
+		t7xx_ccmni_queue_tx_irq_notify(ctlb, qno);
+	else if (state == DMPAIF_TXQ_STATE_FULL)
+		t7xx_ccmni_queue_tx_full_notify(ctlb, qno);
+}
+
+int t7xx_ccmni_init(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct device *dev = &t7xx_dev->pdev->dev;
+	struct t7xx_ccmni_ctrl *ctlb;
+	int ret;
+
+	ctlb = devm_kzalloc(dev, sizeof(*ctlb), GFP_KERNEL);
+	if (!ctlb)
+		return -ENOMEM;
+
+	t7xx_dev->ccmni_ctlb = ctlb;
+	ctlb->t7xx_dev = t7xx_dev;
+	ctlb->callbacks.state_notify = t7xx_ccmni_queue_state_notify;
+	ctlb->callbacks.recv_skb = t7xx_ccmni_recv_skb;
+	ctlb->nic_dev_num = NIC_DEV_DEFAULT;
+	ctlb->capability = NIC_CAP_TXBUSY_STOP | NIC_CAP_SGIO |
+			   NIC_CAP_DATA_ACK_DVD | NIC_CAP_CCMNI_MQ;
+
+	ctlb->hif_ctrl = t7xx_dpmaif_hif_init(t7xx_dev, &ctlb->callbacks);
+	if (!ctlb->hif_ctrl)
+		return -ENOMEM;
+
+	/* WWAN core will create a netdev for the default IP MUX channel */
+	ret = wwan_register_ops(dev, &ccmni_wwan_ops, ctlb, IP_MUX_SESSION_DEFAULT);
+	if (ret)
+		goto err_hif_exit;
+
+	init_md_status_notifier(t7xx_dev);
+
+	return 0;
+
+err_hif_exit:
+	t7xx_dpmaif_hif_exit(ctlb->hif_ctrl);
+
+	return ret;
+}
+
+void t7xx_ccmni_exit(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct t7xx_ccmni_ctrl *ctlb = t7xx_dev->ccmni_ctlb;
+
+	t7xx_fsm_notifier_unregister(t7xx_dev->md, &ctlb->md_status_notify);
+	wwan_unregister_ops(&t7xx_dev->pdev->dev);
+	t7xx_dpmaif_hif_exit(ctlb->hif_ctrl);
+}
diff --git a/drivers/net/wwan/t7xx/t7xx_netdev.h b/drivers/net/wwan/t7xx/t7xx_netdev.h
new file mode 100644
index 000000000000..2553c050fca4
--- /dev/null
+++ b/drivers/net/wwan/t7xx/t7xx_netdev.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (c) 2021, MediaTek Inc.
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Authors:
+ *  Haijun Liu <haijun.liu@mediatek.com>
+ *  Moises Veleta <moises.veleta@intel.com>
+ *
+ * Contributors:
+ *  Amir Hanania <amir.hanania@intel.com>
+ *  Chiranjeevi Rapolu <chiranjeevi.rapolu@intel.com>
+ *  Ricardo Martinez <ricardo.martinez@linux.intel.com>
+ */
+
+#ifndef __T7XX_NETDEV_H__
+#define __T7XX_NETDEV_H__
+
+#include <linux/bits.h>
+#include <linux/netdevice.h>
+#include <linux/types.h>
+
+#include "t7xx_common.h"
+#include "t7xx_hif_dpmaif.h"
+#include "t7xx_pci.h"
+#include "t7xx_state_monitor.h"
+
+#define RXQ_NUM				DPMAIF_RXQ_NUM
+#define NIC_DEV_MAX			21
+#define NIC_DEV_DEFAULT			2
+#define NIC_CAP_TXBUSY_STOP		BIT(0)
+#define NIC_CAP_SGIO			BIT(1)
+#define NIC_CAP_DATA_ACK_DVD		BIT(2)
+#define NIC_CAP_CCMNI_MQ		BIT(3)
+
+/* Must be less than DPMAIF_HW_MTU_SIZE (3*1024 + 8) */
+#define CCMNI_MTU_MAX			3000
+#define CCMNI_NETDEV_WDT_TO		(1 * HZ)
+
+struct t7xx_ccmni {
+	u8				index;
+	atomic_t			usage;
+	struct net_device		*dev;
+	struct t7xx_ccmni_ctrl		*ctlb;
+};
+
+struct t7xx_ccmni_ctrl {
+	struct t7xx_pci_dev		*t7xx_dev;
+	struct dpmaif_ctrl		*hif_ctrl;
+	struct t7xx_ccmni		*ccmni_inst[NIC_DEV_MAX];
+	struct dpmaif_callbacks		callbacks;
+	unsigned int			nic_dev_num;
+	unsigned int			md_sta;
+	unsigned int			capability;
+	struct t7xx_fsm_notifier	md_status_notify;
+};
+
+int t7xx_ccmni_init(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_ccmni_exit(struct t7xx_pci_dev *t7xx_dev);
+
+#endif /* __T7XX_NETDEV_H__ */
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next v4 10/13] net: wwan: t7xx: Introduce power management support
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
                   ` (8 preceding siblings ...)
  2022-01-14  1:06 ` [PATCH net-next v4 09/13] net: wwan: t7xx: Add WWAN network interface Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-02-10 10:58   ` Ilpo Järvinen
  2022-01-14  1:06 ` [PATCH net-next v4 11/13] net: wwan: t7xx: Runtime PM Ricardo Martinez
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

From: Haijun Liu <haijun.liu@mediatek.com>

Implements the suspend, resume, freeze, thaw, poweroff, and restore
`dev_pm_ops` callbacks.

From the host point of view, the t7xx driver is one entity, but the
device has several modules that need to be addressed in different ways
during power management (PM) flows.
The driver uses the term 'PM entities' to refer to the DPMAIF and the
two CLDMA HW blocks that need to be managed during PM flows.
When a dev_pm_ops function is called, the PM entities list is iterated
and the matching function is called for each entry in the list.
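
As a simplified sketch of that iteration (locking, the suspend ACK handshake
with the device, and error handling are omitted; the complete flow is in
__t7xx_pci_pm_suspend() below):

	struct md_pm_entity *entity;

	list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
		if (entity->suspend)
			ret = entity->suspend(t7xx_dev, entity->entity_param);
	}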

Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 drivers/net/wwan/t7xx/t7xx_hif_cldma.c     | 120 +++++-
 drivers/net/wwan/t7xx/t7xx_hif_cldma.h     |   1 +
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c    |  90 +++++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h    |   1 +
 drivers/net/wwan/t7xx/t7xx_mhccif.c        |  17 +
 drivers/net/wwan/t7xx/t7xx_pci.c           | 426 +++++++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_pci.h           |  46 +++
 drivers/net/wwan/t7xx/t7xx_state_monitor.c |   2 +
 8 files changed, 702 insertions(+), 1 deletion(-)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index 3b49f7b81b01..31e32c10dabb 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -1184,6 +1184,117 @@ void t7xx_cldma_exception(struct cldma_ctrl *md_ctrl, enum hif_ex_stage stage)
 	}
 }
 
+static void t7xx_cldma_resume_early(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
+{
+	struct cldma_ctrl *md_ctrl = entity_param;
+	struct t7xx_cldma_hw *hw_info;
+	unsigned long flags;
+	int qno_t;
+
+	hw_info = &md_ctrl->hw_info;
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	t7xx_cldma_hw_restore(hw_info);
+	for (qno_t = 0; qno_t < CLDMA_TXQ_NUM; qno_t++) {
+		t7xx_cldma_hw_set_start_addr(hw_info, qno_t, md_ctrl->txq[qno_t].tx_xmit->gpd_addr,
+					     MTK_TX);
+		t7xx_cldma_hw_set_start_addr(hw_info, qno_t, md_ctrl->rxq[qno_t].tr_done->gpd_addr,
+					     MTK_RX);
+	}
+
+	t7xx_cldma_enable_irq(md_ctrl);
+	t7xx_cldma_hw_start_queue(hw_info, CLDMA_ALL_Q, MTK_RX);
+	md_ctrl->rxq_active |= TXRX_STATUS_BITMASK;
+	t7xx_cldma_hw_irq_en_eq(hw_info, CLDMA_ALL_Q, MTK_RX);
+	t7xx_cldma_hw_irq_en_txrx(hw_info, CLDMA_ALL_Q, MTK_RX);
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static int t7xx_cldma_resume(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
+{
+	struct cldma_ctrl *md_ctrl = entity_param;
+	unsigned long flags;
+
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	md_ctrl->txq_active |= TXRX_STATUS_BITMASK;
+	t7xx_cldma_hw_irq_en_txrx(&md_ctrl->hw_info, CLDMA_ALL_Q, MTK_TX);
+	t7xx_cldma_hw_irq_en_eq(&md_ctrl->hw_info, CLDMA_ALL_Q, MTK_TX);
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+
+	if (md_ctrl->hif_id == ID_CLDMA1)
+		t7xx_mhccif_mask_clr(t7xx_dev, D2H_SW_INT_MASK);
+
+	return 0;
+}
+
+static void t7xx_cldma_suspend_late(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
+{
+	struct cldma_ctrl *md_ctrl = entity_param;
+	struct t7xx_cldma_hw *hw_info;
+	unsigned long flags;
+
+	hw_info = &md_ctrl->hw_info;
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	t7xx_cldma_hw_irq_dis_eq(hw_info, CLDMA_ALL_Q, MTK_RX);
+	t7xx_cldma_hw_irq_dis_txrx(hw_info, CLDMA_ALL_Q, MTK_RX);
+	md_ctrl->rxq_active &= ~TXRX_STATUS_BITMASK;
+	t7xx_cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, MTK_RX);
+	t7xx_cldma_clear_ip_busy(hw_info);
+	t7xx_cldma_disable_irq(md_ctrl);
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+}
+
+static int t7xx_cldma_suspend(struct t7xx_pci_dev *t7xx_dev, void *entity_param)
+{
+	struct cldma_ctrl *md_ctrl = entity_param;
+	struct t7xx_cldma_hw *hw_info;
+	unsigned long flags;
+
+	if (md_ctrl->hif_id == ID_CLDMA1)
+		t7xx_mhccif_mask_set(t7xx_dev, D2H_SW_INT_MASK);
+
+	hw_info = &md_ctrl->hw_info;
+	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
+	t7xx_cldma_hw_irq_dis_eq(hw_info, CLDMA_ALL_Q, MTK_TX);
+	t7xx_cldma_hw_irq_dis_txrx(hw_info, CLDMA_ALL_Q, MTK_TX);
+	md_ctrl->txq_active &= ~TXRX_STATUS_BITMASK;
+	t7xx_cldma_hw_stop_queue(hw_info, CLDMA_ALL_Q, MTK_TX);
+	md_ctrl->txq_started = 0;
+	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+	return 0;
+}
+
+static int t7xx_cldma_pm_init(struct cldma_ctrl *md_ctrl)
+{
+	md_ctrl->pm_entity = kzalloc(sizeof(*md_ctrl->pm_entity), GFP_KERNEL);
+	if (!md_ctrl->pm_entity)
+		return -ENOMEM;
+
+	md_ctrl->pm_entity->entity_param = md_ctrl;
+
+	if (md_ctrl->hif_id == ID_CLDMA1)
+		md_ctrl->pm_entity->id = PM_ENTITY_ID_CTRL1;
+	else
+		md_ctrl->pm_entity->id = PM_ENTITY_ID_CTRL2;
+
+	md_ctrl->pm_entity->suspend = t7xx_cldma_suspend;
+	md_ctrl->pm_entity->suspend_late = t7xx_cldma_suspend_late;
+	md_ctrl->pm_entity->resume = t7xx_cldma_resume;
+	md_ctrl->pm_entity->resume_early = t7xx_cldma_resume_early;
+
+	return t7xx_pci_pm_entity_register(md_ctrl->t7xx_dev, md_ctrl->pm_entity);
+}
+
+static int t7xx_cldma_pm_uninit(struct cldma_ctrl *md_ctrl)
+{
+	if (!md_ctrl->pm_entity)
+		return -EINVAL;
+
+	t7xx_pci_pm_entity_unregister(md_ctrl->t7xx_dev, md_ctrl->pm_entity);
+	kfree_sensitive(md_ctrl->pm_entity);
+	md_ctrl->pm_entity = NULL;
+	return 0;
+}
+
 void t7xx_cldma_hif_hw_init(struct cldma_ctrl *md_ctrl)
 {
 	struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
@@ -1216,6 +1327,7 @@ static irqreturn_t t7xx_cldma_isr_handler(int irq, void *data)
  * @md: Modem context structure.
  * @md_ctrl: CLDMA context structure.
  *
+ * Allocate and initialize device power management entity.
  * Initialize HIF TX/RX queue structure.
  * Register CLDMA callback ISR with PCIe driver.
  *
@@ -1226,12 +1338,16 @@ static irqreturn_t t7xx_cldma_isr_handler(int irq, void *data)
 int t7xx_cldma_init(struct t7xx_modem *md, struct cldma_ctrl *md_ctrl)
 {
 	struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
-	int i;
+	int ret, i;
 
 	md_ctrl->txq_active = 0;
 	md_ctrl->rxq_active = 0;
 	md_ctrl->is_late_init = false;
 
+	ret = t7xx_cldma_pm_init(md_ctrl);
+	if (ret)
+		return ret;
+
 	spin_lock_init(&md_ctrl->cldma_lock);
 	for (i = 0; i < CLDMA_TXQ_NUM; i++) {
 		md_cd_queue_struct_init(&md_ctrl->txq[i], md_ctrl, MTK_TX, i);
@@ -1293,4 +1409,6 @@ void t7xx_cldma_exit(struct cldma_ctrl *md_ctrl)
 			md_ctrl->rxq[i].worker = NULL;
 		}
 	}
+
+	t7xx_cldma_pm_uninit(md_ctrl);
 }
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.h b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
index 5f8100c2b9bd..029272afd1d6 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.h
@@ -85,6 +85,7 @@ struct cldma_ctrl {
 	struct dma_pool *gpd_dmapool;
 	struct cldma_ring tx_ring[CLDMA_TXQ_NUM];
 	struct cldma_ring rx_ring[CLDMA_RXQ_NUM];
+	struct md_pm_entity *pm_entity;
 	struct t7xx_cldma_hw hw_info;
 	bool is_late_init;
 	int (*recv_skb)(struct cldma_queue *queue, struct sk_buff *skb);
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
index b731d0be83ee..18e04b713b91 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.c
@@ -402,6 +402,90 @@ static int t7xx_dpmaif_stop(struct dpmaif_ctrl *dpmaif_ctrl)
 	return 0;
 }
 
+static int t7xx_dpmaif_suspend(struct t7xx_pci_dev *t7xx_dev, void *param)
+{
+	struct dpmaif_ctrl *dpmaif_ctrl = param;
+
+	t7xx_dpmaif_tx_stop(dpmaif_ctrl);
+	t7xx_dpmaif_hw_stop_all_txq(dpmaif_ctrl);
+	t7xx_dpmaif_hw_stop_all_rxq(dpmaif_ctrl);
+	t7xx_dpmaif_disable_irq(dpmaif_ctrl);
+	t7xx_dpmaif_rx_stop(dpmaif_ctrl);
+	return 0;
+}
+
+static void t7xx_dpmaif_unmask_dlq_intr(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	int qno;
+
+	for (qno = 0; qno < DPMAIF_RXQ_NUM; qno++)
+		t7xx_dpmaif_dlq_unmask_rx_done(&dpmaif_ctrl->hif_hw_info, qno);
+}
+
+static void t7xx_dpmaif_start_txrx_qs(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct dpmaif_rx_queue *rxq;
+	struct dpmaif_tx_queue *txq;
+	unsigned int que_cnt;
+
+	for (que_cnt = 0; que_cnt < DPMAIF_TXQ_NUM; que_cnt++) {
+		txq = &dpmaif_ctrl->txq[que_cnt];
+		txq->que_started = true;
+	}
+
+	for (que_cnt = 0; que_cnt < DPMAIF_RXQ_NUM; que_cnt++) {
+		rxq = &dpmaif_ctrl->rxq[que_cnt];
+		rxq->que_started = true;
+	}
+}
+
+static int t7xx_dpmaif_resume(struct t7xx_pci_dev *t7xx_dev, void *param)
+{
+	struct dpmaif_ctrl *dpmaif_ctrl = param;
+
+	if (!dpmaif_ctrl)
+		return 0;
+
+	t7xx_dpmaif_start_txrx_qs(dpmaif_ctrl);
+	t7xx_dpmaif_enable_irq(dpmaif_ctrl);
+	t7xx_dpmaif_unmask_dlq_intr(dpmaif_ctrl);
+	t7xx_dpmaif_start_hw(dpmaif_ctrl);
+	wake_up(&dpmaif_ctrl->tx_wq);
+	return 0;
+}
+
+static int t7xx_dpmaif_pm_entity_init(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct md_pm_entity *dpmaif_pm_entity = &dpmaif_ctrl->dpmaif_pm_entity;
+	int ret;
+
+	INIT_LIST_HEAD(&dpmaif_pm_entity->entity);
+	dpmaif_pm_entity->suspend = &t7xx_dpmaif_suspend;
+	dpmaif_pm_entity->suspend_late = NULL;
+	dpmaif_pm_entity->resume_early = NULL;
+	dpmaif_pm_entity->resume = &t7xx_dpmaif_resume;
+	dpmaif_pm_entity->id = PM_ENTITY_ID_DATA;
+	dpmaif_pm_entity->entity_param = dpmaif_ctrl;
+
+	ret = t7xx_pci_pm_entity_register(dpmaif_ctrl->t7xx_dev, dpmaif_pm_entity);
+	if (ret)
+		dev_err(dpmaif_ctrl->dev, "Failed to register PM entity\n");
+
+	return ret;
+}
+
+static int t7xx_dpmaif_pm_entity_release(struct dpmaif_ctrl *dpmaif_ctrl)
+{
+	struct md_pm_entity *dpmaif_pm_entity = &dpmaif_ctrl->dpmaif_pm_entity;
+	int ret;
+
+	ret = t7xx_pci_pm_entity_unregister(dpmaif_ctrl->t7xx_dev, dpmaif_pm_entity);
+	if (ret < 0)
+		dev_err(dpmaif_ctrl->dev, "Failed to unregister PM entity\n");
+
+	return ret;
+}
+
 int t7xx_dpmaif_md_state_callback(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char state)
 {
 	int ret = 0;
@@ -464,12 +548,17 @@ struct dpmaif_ctrl *t7xx_dpmaif_hif_init(struct t7xx_pci_dev *t7xx_dev,
 	dpmaif_ctrl->hif_hw_info.pcie_base = t7xx_dev->base_addr.pcie_ext_reg_base -
 					     t7xx_dev->base_addr.pcie_dev_reg_trsl_addr;
 
+	ret = t7xx_dpmaif_pm_entity_init(dpmaif_ctrl);
+	if (ret)
+		return NULL;
+
 	t7xx_dpmaif_register_pcie_irq(dpmaif_ctrl);
 	t7xx_dpmaif_disable_irq(dpmaif_ctrl);
 
 	ret = t7xx_dpmaif_rxtx_sw_allocs(dpmaif_ctrl);
 	if (ret) {
 		dev_err(dev, "Failed to allocate RX/TX SW resources: %d\n", ret);
+		t7xx_dpmaif_pm_entity_release(dpmaif_ctrl);
 		return NULL;
 	}
 
@@ -481,6 +570,7 @@ void t7xx_dpmaif_hif_exit(struct dpmaif_ctrl *dpmaif_ctrl)
 {
 	if (dpmaif_ctrl->dpmaif_sw_init_done) {
 		t7xx_dpmaif_stop(dpmaif_ctrl);
+		t7xx_dpmaif_pm_entity_release(dpmaif_ctrl);
 		t7xx_dpmaif_sw_release(dpmaif_ctrl);
 		dpmaif_ctrl->dpmaif_sw_init_done = false;
 	}
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
index 3404e2a75566..88b18619949d 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif.h
@@ -220,6 +220,7 @@ struct dpmaif_callbacks {
 struct dpmaif_ctrl {
 	struct device			*dev;
 	struct t7xx_pci_dev		*t7xx_dev;
+	struct md_pm_entity		dpmaif_pm_entity;
 	enum dpmaif_state		state;
 	bool				dpmaif_sw_init_done;
 	struct dpmaif_hw_info		hif_hw_info;
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.c b/drivers/net/wwan/t7xx/t7xx_mhccif.c
index 20aae457629c..74c79d520d88 100644
--- a/drivers/net/wwan/t7xx/t7xx_mhccif.c
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.c
@@ -24,6 +24,11 @@
 #include "t7xx_pcie_mac.h"
 #include "t7xx_reg.h"
 
+#define D2H_INT_SR_ACK		(D2H_INT_SUSPEND_ACK |		\
+				 D2H_INT_RESUME_ACK |		\
+				 D2H_INT_SUSPEND_ACK_AP |	\
+				 D2H_INT_RESUME_ACK_AP)
+
 static void t7xx_mhccif_clear_interrupts(struct t7xx_pci_dev *t7xx_dev, u32 mask)
 {
 	void __iomem *mhccif_pbase = t7xx_dev->base_addr.mhccif_rc_base;
@@ -49,6 +54,18 @@ static irqreturn_t t7xx_mhccif_isr_thread(int irq, void *data)
 		t7xx_pci_mhccif_isr(t7xx_dev);
 
 	t7xx_mhccif_clear_interrupts(t7xx_dev, int_sts);
+
+	if (int_sts & D2H_INT_SR_ACK)
+		complete(&t7xx_dev->pm_sr_ack);
+
+	iowrite32(L1_DISABLE_BIT(1), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+
+	int_sts = t7xx_mhccif_read_sw_int_sts(t7xx_dev);
+	if (!int_sts) {
+		val = L1_1_DISABLE_BIT(1) | L1_2_DISABLE_BIT(1);
+		iowrite32(val, IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+	}
+
 	t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
 	return IRQ_HANDLED;
 }
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
index 6dd8897dfcbb..7d30f597c7e9 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.c
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -18,24 +18,444 @@
 
 #include <linux/atomic.h>
 #include <linux/bits.h>
+#include <linux/completion.h>
 #include <linux/dev_printk.h>
 #include <linux/device.h>
 #include <linux/dma-mapping.h>
 #include <linux/gfp.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/jiffies.h>
+#include <linux/list.h>
 #include <linux/module.h>
+#include <linux/mutex.h>
 #include <linux/pci.h>
+#include <linux/pm.h>
+#include <linux/pm_wakeup.h>
 
 #include "t7xx_mhccif.h"
 #include "t7xx_modem_ops.h"
 #include "t7xx_pci.h"
 #include "t7xx_pcie_mac.h"
 #include "t7xx_reg.h"
+#include "t7xx_state_monitor.h"
 
 #define PCI_IREG_BASE			0
 #define PCI_EREG_BASE			2
 
+#define PM_ACK_TIMEOUT_MS		1500
+#define PM_RESOURCE_POLL_TIMEOUT_US	10000
+#define PM_RESOURCE_POLL_STEP_US	100
+
+enum t7xx_pm_state {
+	MTK_PM_EXCEPTION,
+	MTK_PM_INIT,		/* Device initialized, but handshake not completed */
+	MTK_PM_SUSPENDED,
+	MTK_PM_RESUMED,
+};
+
+static int t7xx_wait_pm_config(struct t7xx_pci_dev *t7xx_dev)
+{
+	int ret, val;
+
+	ret = read_poll_timeout(ioread32, val,
+				(val & PCIE_RESOURCE_STATUS_MSK) == PCIE_RESOURCE_STATUS_MSK,
+				PM_RESOURCE_POLL_STEP_US, PM_RESOURCE_POLL_TIMEOUT_US, true,
+				IREG_BASE(t7xx_dev) + PCIE_RESOURCE_STATUS);
+	if (ret == -ETIMEDOUT)
+		dev_err(&t7xx_dev->pdev->dev, "PM configuration timed out\n");
+
+	return ret;
+}
+
+static int t7xx_pci_pm_init(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct pci_dev *pdev = t7xx_dev->pdev;
+
+	INIT_LIST_HEAD(&t7xx_dev->md_pm_entities);
+
+	mutex_init(&t7xx_dev->md_pm_entity_mtx);
+
+	init_completion(&t7xx_dev->pm_sr_ack);
+
+	device_init_wakeup(&pdev->dev, true);
+
+	dev_pm_set_driver_flags(&pdev->dev, pdev->dev.power.driver_flags |
+				DPM_FLAG_NO_DIRECT_COMPLETE);
+
+	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_INIT);
+
+	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+
+	return t7xx_wait_pm_config(t7xx_dev);
+}
+
+void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev)
+{
+	/* Enable the PCIe resource lock only after MD deep sleep is done */
+	t7xx_mhccif_mask_clr(t7xx_dev,
+			     D2H_INT_SUSPEND_ACK |
+			     D2H_INT_RESUME_ACK |
+			     D2H_INT_SUSPEND_ACK_AP |
+			     D2H_INT_RESUME_ACK_AP);
+	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
+}
+
+static int t7xx_pci_pm_reinit(struct t7xx_pci_dev *t7xx_dev)
+{
+	/* The device is kept in FSM re-init flow
+	 * so just roll back PM setting to the init setting.
+	 */
+	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_INIT);
+
+	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+	return t7xx_wait_pm_config(t7xx_dev);
+}
+
+void t7xx_pci_pm_exp_detected(struct t7xx_pci_dev *t7xx_dev)
+{
+	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+	t7xx_wait_pm_config(t7xx_dev);
+	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_EXCEPTION);
+}
+
+int t7xx_pci_pm_entity_register(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity)
+{
+	struct md_pm_entity *entity;
+
+	mutex_lock(&t7xx_dev->md_pm_entity_mtx);
+	list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+		if (entity->id == pm_entity->id) {
+			mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+			return -EEXIST;
+		}
+	}
+
+	list_add_tail(&pm_entity->entity, &t7xx_dev->md_pm_entities);
+	mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+	return 0;
+}
+
+int t7xx_pci_pm_entity_unregister(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity)
+{
+	struct md_pm_entity *entity, *tmp_entity;
+
+	mutex_lock(&t7xx_dev->md_pm_entity_mtx);
+	list_for_each_entry_safe(entity, tmp_entity, &t7xx_dev->md_pm_entities, entity) {
+		if (entity->id == pm_entity->id) {
+			list_del(&pm_entity->entity);
+			mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+			return 0;
+		}
+	}
+
+	mutex_unlock(&t7xx_dev->md_pm_entity_mtx);
+
+	return -ENXIO;
+}
+
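+/* Suspend flow: disable ASPM low-power states, run each entity's ->suspend(),
+ * request suspend from the MD and AP partitions over MHCCIF and wait for the
+ * ACKs, then run each entity's ->suspend_late().
+ */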
+static int __t7xx_pci_pm_suspend(struct pci_dev *pdev)
+{
+	struct t7xx_pci_dev *t7xx_dev;
+	struct md_pm_entity *entity;
+	unsigned long wait_ret;
+	enum t7xx_pm_id id;
+	int ret = 0;
+
+	t7xx_dev = pci_get_drvdata(pdev);
+	if (atomic_read(&t7xx_dev->md_pm_state) <= MTK_PM_INIT) {
+		dev_err(&pdev->dev,
+			"[PM] Exiting suspend because of a handshake failure or an exception\n");
+		return -EFAULT;
+	}
+
+	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+
+	ret = t7xx_wait_pm_config(t7xx_dev);
+	if (ret)
+		return ret;
+
+	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_SUSPENDED);
+	t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+	t7xx_dev->rgu_pci_irq_en = false;
+
+	list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+		if (entity->suspend) {
+			ret = entity->suspend(t7xx_dev, entity->entity_param);
+			if (ret) {
+				id = entity->id;
+				break;
+			}
+		}
+	}
+
+	if (ret) {
+		dev_err(&pdev->dev, "[PM] Suspend error: %d, id: %d\n", ret, id);
+
+		list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+			if (id == entity->id)
+				break;
+
+			if (entity->resume)
+				entity->resume(t7xx_dev, entity->entity_param);
+		}
+
+		goto suspend_complete;
+	}
+
+	reinit_completion(&t7xx_dev->pm_sr_ack);
+	t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_SUSPEND_REQ);
+	wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
+					       msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+	if (!wait_ret)
+		dev_err(&pdev->dev, "[PM] Timed out waiting for device MD suspend ACK\n");
+
+	reinit_completion(&t7xx_dev->pm_sr_ack);
+	t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_SUSPEND_REQ_AP);
+	wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
+					       msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+	if (!wait_ret)
+		dev_err(&pdev->dev, "[PM] Timed out waiting for device SAP suspend ACK\n");
+
+	list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+		if (entity->suspend_late)
+			entity->suspend_late(t7xx_dev, entity->entity_param);
+	}
+
+suspend_complete:
+	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+
+	if (ret) {
+		atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
+		t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+	}
+
+	return ret;
+}
+
+static void t7xx_pcie_interrupt_reinit(struct t7xx_pci_dev *t7xx_dev)
+{
+	t7xx_pcie_set_mac_msix_cfg(t7xx_dev, EXT_INT_NUM);
+
+	/* Disable interrupt first and let the IPs enable them */
+	iowrite32(MSIX_MSK_SET_ALL, IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_CLR_GRP0_0);
+
+	/* The device disables PCIe interrupts during resume;
+	 * the following call re-enables them.
+	 */
+	t7xx_pcie_mac_interrupts_en(t7xx_dev);
+	t7xx_pcie_mac_set_int(t7xx_dev, MHCCIF_INT);
+}
+
+static int t7xx_pcie_reinit(struct t7xx_pci_dev *t7xx_dev, bool is_d3)
+{
+	int ret;
+
+	ret = pcim_enable_device(t7xx_dev->pdev);
+	if (ret)
+		return ret;
+
+	t7xx_pcie_mac_atr_init(t7xx_dev);
+	t7xx_pcie_interrupt_reinit(t7xx_dev);
+
+	if (is_d3) {
+		t7xx_mhccif_init(t7xx_dev);
+		return t7xx_pci_pm_reinit(t7xx_dev);
+	}
+
+	return 0;
+}
+
+static int t7xx_send_fsm_command(struct t7xx_pci_dev *t7xx_dev, u32 event)
+{
+	struct t7xx_fsm_ctl *fsm_ctl = t7xx_dev->md->fsm_ctl;
+	struct device *dev = &t7xx_dev->pdev->dev;
+	int ret = -EINVAL;
+
+	switch (event) {
+	case FSM_CMD_STOP:
+		ret = t7xx_fsm_append_cmd(fsm_ctl, FSM_CMD_STOP, FSM_CMD_FLAG_WAIT_FOR_COMPLETION);
+		break;
+
+	case FSM_CMD_START:
+		t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
+		t7xx_pcie_mac_clear_int_status(t7xx_dev, SAP_RGU_INT);
+		t7xx_dev->rgu_pci_irq_en = true;
+		t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+		ret = t7xx_fsm_append_cmd(fsm_ctl, FSM_CMD_START, 0);
+		break;
+
+	default:
+		break;
+	}
+
+	if (ret)
+		dev_err(dev, "Failure handling FSM command %u, %d\n", event, ret);
+
+	return ret;
+}
+
+static int __t7xx_pci_pm_resume(struct pci_dev *pdev, bool state_check)
+{
+	struct t7xx_pci_dev *t7xx_dev;
+	struct md_pm_entity *entity;
+	unsigned long wait_ret;
+	u32 prev_state;
+	int ret = 0;
+
+	t7xx_dev = pci_get_drvdata(pdev);
+	if (atomic_read(&t7xx_dev->md_pm_state) <= MTK_PM_INIT) {
+		iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+		return 0;
+	}
+
+	t7xx_pcie_mac_interrupts_en(t7xx_dev);
+	prev_state = ioread32(IREG_BASE(t7xx_dev) + PCIE_PM_RESUME_STATE);
+
+	if (state_check) {
+		/* For D3/L3 resume, the device could boot so quickly that the
+		 * initial value of the dummy register might be overwritten.
+		 * Identify new boots if the ATR source address register is not initialized.
+		 */
+		u32 atr_reg_val = ioread32(IREG_BASE(t7xx_dev) +
+					   ATR_PCIE_WIN0_T0_ATR_PARAM_SRC_ADDR);
+		if (prev_state == PM_RESUME_REG_STATE_L3 ||
+		    (prev_state == PM_RESUME_REG_STATE_INIT &&
+		     atr_reg_val == ATR_SRC_ADDR_INVALID)) {
+			ret = t7xx_send_fsm_command(t7xx_dev, FSM_CMD_STOP);
+			if (ret)
+				return ret;
+
+			ret = t7xx_pcie_reinit(t7xx_dev, true);
+			if (ret)
+				return ret;
+
+			t7xx_clear_rgu_irq(t7xx_dev);
+			return t7xx_send_fsm_command(t7xx_dev, FSM_CMD_START);
+		}
+
+		if (prev_state == PM_RESUME_REG_STATE_EXP ||
+		    prev_state == PM_RESUME_REG_STATE_L2_EXP) {
+			if (prev_state == PM_RESUME_REG_STATE_L2_EXP) {
+				ret = t7xx_pcie_reinit(t7xx_dev, false);
+				if (ret)
+					return ret;
+			}
+
+			atomic_set(&t7xx_dev->md_pm_state, MTK_PM_SUSPENDED);
+			t7xx_dev->rgu_pci_irq_en = true;
+			t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+
+			t7xx_mhccif_mask_clr(t7xx_dev,
+					     D2H_INT_EXCEPTION_INIT |
+					     D2H_INT_EXCEPTION_INIT_DONE |
+					     D2H_INT_EXCEPTION_CLEARQ_DONE |
+					     D2H_INT_EXCEPTION_ALLQ_RESET |
+					     D2H_INT_PORT_ENUM);
+
+			return ret;
+		}
+
+		if (prev_state == PM_RESUME_REG_STATE_L2) {
+			ret = t7xx_pcie_reinit(t7xx_dev, false);
+			if (ret)
+				return ret;
+
+		} else if (prev_state != PM_RESUME_REG_STATE_L1 &&
+			   prev_state != PM_RESUME_REG_STATE_INIT) {
+			ret = t7xx_send_fsm_command(t7xx_dev, FSM_CMD_STOP);
+			if (ret)
+				return ret;
+
+			t7xx_clear_rgu_irq(t7xx_dev);
+			atomic_set(&t7xx_dev->md_pm_state, MTK_PM_SUSPENDED);
+			return 0;
+		}
+	}
+
+	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+	t7xx_wait_pm_config(t7xx_dev);
+
+	list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+		if (entity->resume_early)
+			entity->resume_early(t7xx_dev, entity->entity_param);
+	}
+
+	reinit_completion(&t7xx_dev->pm_sr_ack);
+	t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_RESUME_REQ);
+	wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
+					       msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+	if (!wait_ret)
+		dev_err(&pdev->dev, "[PM] Timed out waiting for device MD resume ACK\n");
+
+	reinit_completion(&t7xx_dev->pm_sr_ack);
+	t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_RESUME_REQ_AP);
+	wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
+					       msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
+	if (!wait_ret)
+		dev_err(&pdev->dev, "[PM] Timed out waiting for device SAP resume ACK\n");
+
+	list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
+		if (entity->resume) {
+			ret = entity->resume(t7xx_dev, entity->entity_param);
+			if (ret)
+				dev_err(&pdev->dev, "[PM] Resume entry ID: %d err: %d\n",
+					entity->id, ret);
+		}
+	}
+
+	t7xx_dev->rgu_pci_irq_en = true;
+	t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
+	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
+
+	return ret;
+}
+
+static int t7xx_pci_pm_resume_noirq(struct device *dev)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct t7xx_pci_dev *t7xx_dev;
+
+	t7xx_dev = pci_get_drvdata(pdev);
+	t7xx_pcie_mac_interrupts_dis(t7xx_dev);
+
+	return 0;
+}
+
+static void t7xx_pci_shutdown(struct pci_dev *pdev)
+{
+	__t7xx_pci_pm_suspend(pdev);
+}
+
+static int t7xx_pci_pm_suspend(struct device *dev)
+{
+	return __t7xx_pci_pm_suspend(to_pci_dev(dev));
+}
+
+static int t7xx_pci_pm_resume(struct device *dev)
+{
+	return __t7xx_pci_pm_resume(to_pci_dev(dev), true);
+}
+
+static int t7xx_pci_pm_thaw(struct device *dev)
+{
+	return __t7xx_pci_pm_resume(to_pci_dev(dev), false);
+}
+
+static const struct dev_pm_ops t7xx_pci_pm_ops = {
+	.suspend = t7xx_pci_pm_suspend,
+	.resume = t7xx_pci_pm_resume,
+	.resume_noirq = t7xx_pci_pm_resume_noirq,
+	.freeze = t7xx_pci_pm_suspend,
+	.thaw = t7xx_pci_pm_thaw,
+	.poweroff = t7xx_pci_pm_suspend,
+	.restore = t7xx_pci_pm_resume,
+	.restore_noirq = t7xx_pci_pm_resume_noirq,
+};
+
 static int t7xx_request_irq(struct pci_dev *pdev)
 {
 	struct t7xx_pci_dev *t7xx_dev;
@@ -165,6 +585,10 @@ static int t7xx_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	IREG_BASE(t7xx_dev) = pcim_iomap_table(pdev)[PCI_IREG_BASE];
 	t7xx_dev->base_addr.pcie_ext_reg_base = pcim_iomap_table(pdev)[PCI_EREG_BASE];
 
+	ret = t7xx_pci_pm_init(t7xx_dev);
+	if (ret)
+		return ret;
+
 	t7xx_pcie_mac_atr_init(t7xx_dev);
 	t7xx_pci_infracfg_ao_calc(t7xx_dev);
 	t7xx_mhccif_init(t7xx_dev);
@@ -214,6 +638,8 @@ static struct pci_driver t7xx_pci_driver = {
 	.id_table = t7xx_pci_table,
 	.probe = t7xx_pci_probe,
 	.remove = t7xx_pci_remove,
+	.driver.pm = &t7xx_pci_pm_ops,
+	.shutdown = t7xx_pci_shutdown,
 };
 
 module_pci_driver(t7xx_pci_driver);
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h
index b52aaa182a10..6310f31540ca 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.h
+++ b/drivers/net/wwan/t7xx/t7xx_pci.h
@@ -17,7 +17,9 @@
 #ifndef __T7XX_PCI_H__
 #define __T7XX_PCI_H__
 
+#include <linux/completion.h>
 #include <linux/irqreturn.h>
+#include <linux/mutex.h>
 #include <linux/pci.h>
 #include <linux/types.h>
 
@@ -48,6 +50,10 @@ typedef irqreturn_t (*t7xx_intr_callback)(int irq, void *param);
  * @pdev: PCI device
  * @base_addr: memory base addresses of HW components
  * @md: modem interface
+ * @md_pm_entities: list of pm entities
+ * @md_pm_entity_mtx: protects md_pm_entities list
+ * @pm_sr_ack: ack from the device when went to sleep or woke up
+ * @md_pm_state: state for resume/suspend
  * @ccmni_ctlb: context structure used to control the network data path
  * @rgu_pci_irq_en: RGU callback isr registered and active
  * @pools: pre allocated skb pools
@@ -60,8 +66,48 @@ struct t7xx_pci_dev {
 	struct pci_dev		*pdev;
 	struct t7xx_addr_base	base_addr;
 	struct t7xx_modem	*md;
+
+	/* Low Power Items */
+	struct list_head	md_pm_entities;
+	struct mutex		md_pm_entity_mtx;	/* Protects MD PM entities list */
+	struct completion	pm_sr_ack;
+	atomic_t		md_pm_state;
+
 	struct t7xx_ccmni_ctrl	*ccmni_ctlb;
 	bool			rgu_pci_irq_en;
 };
 
+enum t7xx_pm_id {
+	PM_ENTITY_ID_CTRL1,
+	PM_ENTITY_ID_CTRL2,
+	PM_ENTITY_ID_DATA,
+};
+
+/* struct md_pm_entity - device power management entity
+ * @entity: list of PM Entities
+ * @suspend: callback invoked before sending D3 request to device
+ * @suspend_late: callback invoked after getting D3 ACK from device
+ * @resume_early: callback invoked before sending the resume request to device
+ * @resume: callback invoked after getting resume ACK from device
+ * @id: unique PM entity identifier
+ * @entity_param: parameter passed to the registered callbacks
+ *
+ *  This structure is used to indicate PM operations required by internal
+ *  HW modules such as CLDMA and DPMA.
+ */
+struct md_pm_entity {
+	struct list_head	entity;
+	int (*suspend)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+	void (*suspend_late)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+	void (*resume_early)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+	int (*resume)(struct t7xx_pci_dev *t7xx_dev, void *entity_param);
+	enum t7xx_pm_id		id;
+	void			*entity_param;
+};
+
+int t7xx_pci_pm_entity_register(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity);
+int t7xx_pci_pm_entity_unregister(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity);
+void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_pci_pm_exp_detected(struct t7xx_pci_dev *t7xx_dev);
+
 #endif /* __T7XX_PCI_H__ */
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
index 73fab28848c6..c37b23087c8c 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -199,6 +199,7 @@ static void fsm_routine_exception(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_comm
 
 	case EXCEPTION_EVENT:
 		t7xx_fsm_broadcast_state(ctl, MD_STATE_EXCEPTION);
+		t7xx_pci_pm_exp_detected(ctl->md->t7xx_dev);
 		t7xx_md_exception_handshake(ctl->md);
 
 		fsm_wait_for_event(ctl, FSM_EVENT_MD_EX_REC_OK, FSM_EVENT_MD_EX,
@@ -299,6 +300,7 @@ static void fsm_routine_starting(struct t7xx_fsm_ctl *ctl)
 
 		fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);
 	} else {
+		t7xx_pci_pm_init_late(md->t7xx_dev);
 		fsm_routine_ready(ctl);
 	}
 }
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next v4 11/13] net: wwan: t7xx: Runtime PM
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
                   ` (9 preceding siblings ...)
  2022-01-14  1:06 ` [PATCH net-next v4 10/13] net: wwan: t7xx: Introduce power management support Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-01-14  1:06 ` [PATCH net-next v4 12/13] net: wwan: t7xx: Device deep sleep lock/unlock Ricardo Martinez
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

From: Haijun Liu <haijun.liu@mediatek.com>

Enable the runtime power management callbacks, including runtime_suspend
and runtime_resume. Autosuspend is used to avoid the overhead caused by
frequent wake-ups.
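
The call sites added below all follow the same pattern around hardware
access; roughly (a sketch, not the exact code of every hunk, where 'dev'
stands for the CLDMA/DPMAIF struct device):

	ret = pm_runtime_resume_and_get(dev);
	if (ret < 0 && ret != -EACCES)
		return;		/* or propagate ret */

	/* ... access the device HW ... */

	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);

The -EACCES case is not treated as an error, presumably because it only
means runtime PM is disabled for the device.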

Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Co-developed-by: Eliot Lee <eliot.lee@intel.com>
Signed-off-by: Eliot Lee <eliot.lee@intel.com>
Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 drivers/net/wwan/t7xx/t7xx_hif_cldma.c     | 13 +++++++++++++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 16 ++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 15 +++++++++++++++
 drivers/net/wwan/t7xx/t7xx_pci.c           | 21 +++++++++++++++++++++
 4 files changed, 65 insertions(+)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index 31e32c10dabb..1678caef679c 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -34,6 +34,7 @@
 #include <linux/list.h>
 #include <linux/netdevice.h>
 #include <linux/pci.h>
+#include <linux/pm_runtime.h>
 #include <linux/sched.h>
 #include <linux/skbuff.h>
 #include <linux/slab.h>
@@ -257,6 +258,8 @@ static void t7xx_cldma_rx_done(struct work_struct *work)
 	t7xx_cldma_clear_ip_busy(&md_ctrl->hw_info);
 	t7xx_cldma_hw_irq_en_txrx(&md_ctrl->hw_info, queue->index, MTK_RX);
 	t7xx_cldma_hw_irq_en_eq(&md_ctrl->hw_info, queue->index, MTK_RX);
+	pm_runtime_mark_last_busy(md_ctrl->dev);
+	pm_runtime_put_autosuspend(md_ctrl->dev);
 }
 
 static int t7xx_cldma_gpd_tx_collect(struct cldma_queue *queue)
@@ -370,6 +373,8 @@ static void t7xx_cldma_tx_done(struct work_struct *work)
 	}
 
 	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
+	pm_runtime_mark_last_busy(md_ctrl->dev);
+	pm_runtime_put_autosuspend(md_ctrl->dev);
 }
 
 static void t7xx_cldma_ring_free(struct cldma_ctrl *md_ctrl,
@@ -579,6 +584,7 @@ static void t7xx_cldma_irq_work_cb(struct cldma_ctrl *md_ctrl)
 		if (l2_tx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) {
 			for_each_set_bit(i, (unsigned long *)&l2_tx_int, L2_INT_BIT_COUNT) {
 				if (i < CLDMA_TXQ_NUM) {
+					pm_runtime_get(md_ctrl->dev);
 					t7xx_cldma_hw_irq_dis_eq(hw_info, i, MTK_TX);
 					t7xx_cldma_hw_irq_dis_txrx(hw_info, i, MTK_TX);
 					queue_work(md_ctrl->txq[i].worker,
@@ -603,6 +609,7 @@ static void t7xx_cldma_irq_work_cb(struct cldma_ctrl *md_ctrl)
 		if (l2_rx_int & (TXRX_STATUS_BITMASK | EMPTY_STATUS_BITMASK)) {
 			l2_rx_int |= l2_rx_int >> CLDMA_RXQ_NUM;
 			for_each_set_bit(i, (unsigned long *)&l2_rx_int, CLDMA_RXQ_NUM) {
+				pm_runtime_get(md_ctrl->dev);
 				t7xx_cldma_hw_irq_dis_eq(hw_info, i, MTK_RX);
 				t7xx_cldma_hw_irq_dis_txrx(hw_info, i, MTK_RX);
 				queue_work(md_ctrl->rxq[i].worker, &md_ctrl->rxq[i].cldma_work);
@@ -952,6 +959,10 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb
 	if (qno >= CLDMA_TXQ_NUM)
 		return -EINVAL;
 
+	ret = pm_runtime_resume_and_get(md_ctrl->dev);
+	if (ret < 0 && ret != -EACCES)
+		return ret;
+
 	queue = &md_ctrl->txq[qno];
 
 	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
@@ -1001,6 +1012,8 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb
 	} while (!ret);
 
 allow_sleep:
+	pm_runtime_mark_last_busy(md_ctrl->dev);
+	pm_runtime_put_autosuspend(md_ctrl->dev);
 	return ret;
 }
 
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
index 7df7ffea8b14..279a7e72f203 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
@@ -32,6 +32,7 @@
 #include <linux/list.h>
 #include <linux/mm.h>
 #include <linux/netdevice.h>
+#include <linux/pm_runtime.h>
 #include <linux/sched.h>
 #include <linux/skbuff.h>
 #include <linux/slab.h>
@@ -915,6 +916,7 @@ static void t7xx_dpmaif_rxq_work(struct work_struct *work)
 {
 	struct dpmaif_rx_queue *rxq = container_of(work, struct dpmaif_rx_queue, dpmaif_rxq_work);
 	struct dpmaif_ctrl *dpmaif_ctrl = rxq->dpmaif_ctrl;
+	int ret;
 
 	atomic_set(&rxq->rx_processing, 1);
 	/* Ensure rx_processing is changed to 1 before actually begin RX flow */
@@ -926,8 +928,14 @@ static void t7xx_dpmaif_rxq_work(struct work_struct *work)
 		return;
 	}
 
+	ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+	if (ret < 0 && ret != -EACCES)
+		return;
+
 	t7xx_dpmaif_do_rx(dpmaif_ctrl, rxq);
 
+	pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+	pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
 	atomic_set(&rxq->rx_processing, 0);
 }
 
@@ -1130,11 +1138,19 @@ static void t7xx_dpmaif_bat_release_work(struct work_struct *work)
 {
 	struct dpmaif_ctrl *dpmaif_ctrl = container_of(work, struct dpmaif_ctrl, bat_release_work);
 	struct dpmaif_rx_queue *rxq;
+	int ret;
+
+	ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+	if (ret < 0 && ret != -EACCES)
+		return;
 
 	/* ALL RXQ use one BAT table, so choose DPF_RX_QNO_DFT */
 	rxq = &dpmaif_ctrl->rxq[DPF_RX_QNO_DFT];
 	t7xx_dpmaif_bat_release_and_add(rxq);
 	t7xx_dpmaif_frag_bat_release_and_add(rxq);
+
+	pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+	pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
 }
 
 int t7xx_dpmaif_bat_rel_wq_alloc(struct dpmaif_ctrl *dpmaif_ctrl)
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
index 3c601492aa16..12362892a334 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
@@ -29,6 +29,7 @@
 #include <linux/list.h>
 #include <linux/minmax.h>
 #include <linux/netdevice.h>
+#include <linux/pm_runtime.h>
 #include <linux/sched.h>
 #include <linux/spinlock.h>
 #include <linux/skbuff.h>
@@ -171,6 +172,10 @@ static void t7xx_dpmaif_tx_done(struct work_struct *work)
 	struct dpmaif_ctrl *dpmaif_ctrl = txq->dpmaif_ctrl;
 	int ret;
 
+	ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+	if (ret < 0 && ret != -EACCES)
+		return;
+
 	ret = t7xx_dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt);
 	if (ret == -EAGAIN ||
 	    (t7xx_dpmaif_ul_clr_done(&dpmaif_ctrl->hif_hw_info, txq->index) &&
@@ -183,6 +188,9 @@ static void t7xx_dpmaif_tx_done(struct work_struct *work)
 		t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
 		t7xx_dpmaif_unmask_ulq_intr(dpmaif_ctrl, txq->index);
 	}
+
+	pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+	pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
 }
 
 static void t7xx_setup_msg_drb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
@@ -449,6 +457,7 @@ static void t7xx_do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
 static int t7xx_dpmaif_tx_hw_push_thread(void *arg)
 {
 	struct dpmaif_ctrl *dpmaif_ctrl = arg;
+	int ret;
 
 	while (!kthread_should_stop()) {
 		if (t7xx_tx_lists_are_all_empty(dpmaif_ctrl) ||
@@ -463,7 +472,13 @@ static int t7xx_dpmaif_tx_hw_push_thread(void *arg)
 				break;
 		}
 
+		ret = pm_runtime_resume_and_get(dpmaif_ctrl->dev);
+		if (ret < 0 && ret != -EACCES)
+			return ret;
+
 		t7xx_do_tx_hw_push(dpmaif_ctrl);
+		pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
+		pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
 	}
 
 	return 0;
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
index 7d30f597c7e9..03ed951ddfbe 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.c
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -32,6 +32,7 @@
 #include <linux/mutex.h>
 #include <linux/pci.h>
 #include <linux/pm.h>
+#include <linux/pm_runtime.h>
 #include <linux/pm_wakeup.h>
 
 #include "t7xx_mhccif.h"
@@ -45,6 +46,7 @@
 #define PCI_EREG_BASE			2
 
 #define PM_ACK_TIMEOUT_MS		1500
+#define PM_AUTOSUSPEND_MS		20000
 #define PM_RESOURCE_POLL_TIMEOUT_US	10000
 #define PM_RESOURCE_POLL_STEP_US	100
 
@@ -87,6 +89,8 @@ static int t7xx_pci_pm_init(struct t7xx_pci_dev *t7xx_dev)
 	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_INIT);
 
 	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
+	pm_runtime_set_autosuspend_delay(&pdev->dev, PM_AUTOSUSPEND_MS);
+	pm_runtime_use_autosuspend(&pdev->dev);
 
 	return t7xx_wait_pm_config(t7xx_dev);
 }
@@ -101,6 +105,8 @@ void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev)
 			     D2H_INT_RESUME_ACK_AP);
 	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
 	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
+
+	pm_runtime_put_noidle(&t7xx_dev->pdev->dev);
 }
 
 static int t7xx_pci_pm_reinit(struct t7xx_pci_dev *t7xx_dev)
@@ -110,6 +116,8 @@ static int t7xx_pci_pm_reinit(struct t7xx_pci_dev *t7xx_dev)
 	 */
 	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_INIT);
 
+	pm_runtime_get_noresume(&t7xx_dev->pdev->dev);
+
 	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
 	return t7xx_wait_pm_config(t7xx_dev);
 }
@@ -409,6 +417,7 @@ static int __t7xx_pci_pm_resume(struct pci_dev *pdev, bool state_check)
 	t7xx_dev->rgu_pci_irq_en = true;
 	t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
 	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
+	pm_runtime_mark_last_busy(&pdev->dev);
 	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
 
 	return ret;
@@ -445,6 +454,16 @@ static int t7xx_pci_pm_thaw(struct device *dev)
 	return __t7xx_pci_pm_resume(to_pci_dev(dev), false);
 }
 
+static int t7xx_pci_pm_runtime_suspend(struct device *dev)
+{
+	return __t7xx_pci_pm_suspend(to_pci_dev(dev));
+}
+
+static int t7xx_pci_pm_runtime_resume(struct device *dev)
+{
+	return __t7xx_pci_pm_resume(to_pci_dev(dev), true);
+}
+
 static const struct dev_pm_ops t7xx_pci_pm_ops = {
 	.suspend = t7xx_pci_pm_suspend,
 	.resume = t7xx_pci_pm_resume,
@@ -454,6 +473,8 @@ static const struct dev_pm_ops t7xx_pci_pm_ops = {
 	.poweroff = t7xx_pci_pm_suspend,
 	.restore = t7xx_pci_pm_resume,
 	.restore_noirq = t7xx_pci_pm_resume_noirq,
+	.runtime_suspend = t7xx_pci_pm_runtime_suspend,
+	.runtime_resume = t7xx_pci_pm_runtime_resume
 };
 
 static int t7xx_request_irq(struct pci_dev *pdev)
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next v4 12/13] net: wwan: t7xx: Device deep sleep lock/unlock
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
                   ` (10 preceding siblings ...)
  2022-01-14  1:06 ` [PATCH net-next v4 11/13] net: wwan: t7xx: Runtime PM Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-01-14  1:06 ` [PATCH net-next v4 13/13] net: wwan: t7xx: Add maintainers and documentation Ricardo Martinez
  2022-01-15 14:53 ` [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Loic Poulain
  13 siblings, 0 replies; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

From: Haijun Liu <haijun.liu@mediatek.com>

Introduce the mechanism to lock/unlock the device's 'deep sleep' mode.
When the PCIe link state is L1.2 or L2, the device is still in D0 state
from the host's point of view. At the same time, if the device's 'deep
sleep' mode is unlocked, the device may enter 'deep sleep' while it is
still in D0 state on the host side.
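
The call sites added below share one pattern (a sketch, with 't7xx_dev'
being the driver context, not the exact code of every hunk):

	t7xx_pci_disable_sleep(t7xx_dev);

	if (t7xx_pci_sleep_disable_complete(t7xx_dev)) {
		/* Sleep is locked, safe to access the HW */
		...
	}

	t7xx_pci_enable_sleep(t7xx_dev);

t7xx_pci_disable_sleep() only requests the lock; the caller then waits
for the device to acknowledge it via t7xx_pci_sleep_disable_complete()
(up to 10 ms) before touching the hardware.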

Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 drivers/net/wwan/t7xx/t7xx_hif_cldma.c     | 12 +++
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c | 14 +++-
 drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c | 37 ++++++---
 drivers/net/wwan/t7xx/t7xx_mhccif.c        |  3 +
 drivers/net/wwan/t7xx/t7xx_pci.c           | 97 ++++++++++++++++++++++
 drivers/net/wwan/t7xx/t7xx_pci.h           | 10 +++
 6 files changed, 159 insertions(+), 14 deletions(-)

diff --git a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
index 1678caef679c..211e799b754d 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_cldma.c
@@ -963,6 +963,7 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb
 	if (ret < 0 && ret != -EACCES)
 		return ret;
 
+	t7xx_pci_disable_sleep(md_ctrl->t7xx_dev);
 	queue = &md_ctrl->txq[qno];
 
 	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
@@ -985,6 +986,11 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb
 			queue->tx_xmit = list_next_entry_circular(tx_req, gpd_ring, entry);
 			spin_unlock_irqrestore(&queue->ring_lock, flags);
 
+			if (!t7xx_pci_sleep_disable_complete(md_ctrl->t7xx_dev)) {
+				ret = -EBUSY;
+				break;
+			}
+
 			/* Protect the access to the modem for queues operations (resume/start)
 			 * which access shared locations by all the queues.
 			 * cldma_lock is independent of ring_lock which is per queue.
@@ -997,6 +1003,11 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb
 
 		spin_unlock_irqrestore(&queue->ring_lock, flags);
 
+		if (!t7xx_pci_sleep_disable_complete(md_ctrl->t7xx_dev)) {
+			ret = -EBUSY;
+			break;
+		}
+
 		if (!t7xx_cldma_hw_queue_status(&md_ctrl->hw_info, qno, MTK_TX)) {
 			spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
 			t7xx_cldma_hw_resume_queue(&md_ctrl->hw_info, qno, MTK_TX);
@@ -1012,6 +1023,7 @@ int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb
 	} while (!ret);
 
 allow_sleep:
+	t7xx_pci_enable_sleep(md_ctrl->t7xx_dev);
 	pm_runtime_mark_last_busy(md_ctrl->dev);
 	pm_runtime_put_autosuspend(md_ctrl->dev);
 	return ret;
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
index 279a7e72f203..e5c540ed07ed 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
@@ -932,8 +932,11 @@ static void t7xx_dpmaif_rxq_work(struct work_struct *work)
 	if (ret < 0 && ret != -EACCES)
 		return;
 
-	t7xx_dpmaif_do_rx(dpmaif_ctrl, rxq);
+	t7xx_pci_disable_sleep(dpmaif_ctrl->t7xx_dev);
+	if (t7xx_pci_sleep_disable_complete(dpmaif_ctrl->t7xx_dev))
+		t7xx_dpmaif_do_rx(dpmaif_ctrl, rxq);
 
+	t7xx_pci_enable_sleep(dpmaif_ctrl->t7xx_dev);
 	pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
 	pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
 	atomic_set(&rxq->rx_processing, 0);
@@ -1144,11 +1147,16 @@ static void t7xx_dpmaif_bat_release_work(struct work_struct *work)
 	if (ret < 0 && ret != -EACCES)
 		return;
 
+	t7xx_pci_disable_sleep(dpmaif_ctrl->t7xx_dev);
+
 	/* ALL RXQ use one BAT table, so choose DPF_RX_QNO_DFT */
 	rxq = &dpmaif_ctrl->rxq[DPF_RX_QNO_DFT];
-	t7xx_dpmaif_bat_release_and_add(rxq);
-	t7xx_dpmaif_frag_bat_release_and_add(rxq);
+	if (t7xx_pci_sleep_disable_complete(dpmaif_ctrl->t7xx_dev)) {
+		t7xx_dpmaif_bat_release_and_add(rxq);
+		t7xx_dpmaif_frag_bat_release_and_add(rxq);
+	}
 
+	t7xx_pci_enable_sleep(dpmaif_ctrl->t7xx_dev);
 	pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
 	pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
 }
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
index 12362892a334..9b9c852ec489 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_tx.c
@@ -176,19 +176,24 @@ static void t7xx_dpmaif_tx_done(struct work_struct *work)
 	if (ret < 0 && ret != -EACCES)
 		return;
 
-	ret = t7xx_dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt);
-	if (ret == -EAGAIN ||
-	    (t7xx_dpmaif_ul_clr_done(&dpmaif_ctrl->hif_hw_info, txq->index) &&
-	     t7xx_dpmaif_drb_ring_not_empty(txq))) {
-		queue_work(dpmaif_ctrl->txq[txq->index].worker,
-			   &dpmaif_ctrl->txq[txq->index].dpmaif_tx_work);
-		/* Give the device time to enter the low power state */
-		t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
-	} else {
-		t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
-		t7xx_dpmaif_unmask_ulq_intr(dpmaif_ctrl, txq->index);
+	/* The device may be in low power state. Disable sleep if needed */
+	t7xx_pci_disable_sleep(dpmaif_ctrl->t7xx_dev);
+	if (t7xx_pci_sleep_disable_complete(dpmaif_ctrl->t7xx_dev)) {
+		ret = t7xx_dpmaif_tx_release(dpmaif_ctrl, txq->index, txq->drb_size_cnt);
+		if (ret == -EAGAIN ||
+		    (t7xx_dpmaif_ul_clr_done(&dpmaif_ctrl->hif_hw_info, txq->index) &&
+		     t7xx_dpmaif_drb_ring_not_empty(txq))) {
+			queue_work(dpmaif_ctrl->txq[txq->index].worker,
+				   &dpmaif_ctrl->txq[txq->index].dpmaif_tx_work);
+			/* Give the device time to enter the low power state */
+			t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+		} else {
+			t7xx_dpmaif_clr_ip_busy_sts(&dpmaif_ctrl->hif_hw_info);
+			t7xx_dpmaif_unmask_ulq_intr(dpmaif_ctrl, txq->index);
+		}
 	}
 
+	t7xx_pci_enable_sleep(dpmaif_ctrl->t7xx_dev);
 	pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
 	pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
 }
@@ -422,6 +427,8 @@ static bool t7xx_check_all_txq_drb_lack(const struct dpmaif_ctrl *dpmaif_ctrl)
 
 static void t7xx_do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
 {
+	bool first_time = true;
+
 	do {
 		int txq_id;
 
@@ -436,6 +443,11 @@ static void t7xx_do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
 			if (ret > 0) {
 				int drb_send_cnt = ret;
 
+				/* Wait for the PCIe resource to unlock */
+				if (first_time &&
+				    !t7xx_pci_sleep_disable_complete(dpmaif_ctrl->t7xx_dev))
+					return;
+
 				ret = t7xx_dpmaif_ul_update_hw_drb_cnt(dpmaif_ctrl,
 								       (unsigned char)txq_id,
 								       drb_send_cnt *
@@ -449,6 +461,7 @@ static void t7xx_do_tx_hw_push(struct dpmaif_ctrl *dpmaif_ctrl)
 			}
 		}
 
+		first_time = false;
 		cond_resched();
 	} while (!t7xx_tx_lists_are_all_empty(dpmaif_ctrl) && !kthread_should_stop() &&
 		 (dpmaif_ctrl->state == DPMAIF_STATE_PWRON));
@@ -476,7 +489,9 @@ static int t7xx_dpmaif_tx_hw_push_thread(void *arg)
 		if (ret < 0 && ret != -EACCES)
 			return ret;
 
+		t7xx_pci_disable_sleep(dpmaif_ctrl->t7xx_dev);
 		t7xx_do_tx_hw_push(dpmaif_ctrl);
+		t7xx_pci_enable_sleep(dpmaif_ctrl->t7xx_dev);
 		pm_runtime_mark_last_busy(dpmaif_ctrl->dev);
 		pm_runtime_put_autosuspend(dpmaif_ctrl->dev);
 	}
diff --git a/drivers/net/wwan/t7xx/t7xx_mhccif.c b/drivers/net/wwan/t7xx/t7xx_mhccif.c
index 74c79d520d88..da076416da10 100644
--- a/drivers/net/wwan/t7xx/t7xx_mhccif.c
+++ b/drivers/net/wwan/t7xx/t7xx_mhccif.c
@@ -55,6 +55,9 @@ static irqreturn_t t7xx_mhccif_isr_thread(int irq, void *data)
 
 	t7xx_mhccif_clear_interrupts(t7xx_dev, int_sts);
 
+	if (int_sts & D2H_INT_DS_LOCK_ACK)
+		complete_all(&t7xx_dev->sleep_lock_acquire);
+
 	if (int_sts & D2H_INT_SR_ACK)
 		complete(&t7xx_dev->pm_sr_ack);
 
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.c b/drivers/net/wwan/t7xx/t7xx_pci.c
index 03ed951ddfbe..5aef25ed4e1d 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.c
+++ b/drivers/net/wwan/t7xx/t7xx_pci.c
@@ -34,6 +34,7 @@
 #include <linux/pm.h>
 #include <linux/pm_runtime.h>
 #include <linux/pm_wakeup.h>
+#include <linux/spinlock.h>
 
 #include "t7xx_mhccif.h"
 #include "t7xx_modem_ops.h"
@@ -45,6 +46,7 @@
 #define PCI_IREG_BASE			0
 #define PCI_EREG_BASE			2
 
+#define MTK_WAIT_TIMEOUT_MS		10
 #define PM_ACK_TIMEOUT_MS		1500
 #define PM_AUTOSUSPEND_MS		20000
 #define PM_RESOURCE_POLL_TIMEOUT_US	10000
@@ -57,6 +59,21 @@ enum t7xx_pm_state {
 	MTK_PM_RESUMED,
 };
 
+static void t7xx_dev_set_sleep_capability(struct t7xx_pci_dev *t7xx_dev, bool enable)
+{
+	void __iomem *ctrl_reg = IREG_BASE(t7xx_dev) + PCIE_MISC_CTRL;
+	u32 value;
+
+	value = ioread32(ctrl_reg);
+
+	if (enable)
+		value &= ~PCIE_MISC_MAC_SLEEP_DIS;
+	else
+		value |= PCIE_MISC_MAC_SLEEP_DIS;
+
+	iowrite32(value, ctrl_reg);
+}
+
 static int t7xx_wait_pm_config(struct t7xx_pci_dev *t7xx_dev)
 {
 	int ret, val;
@@ -77,10 +94,14 @@ static int t7xx_pci_pm_init(struct t7xx_pci_dev *t7xx_dev)
 
 	INIT_LIST_HEAD(&t7xx_dev->md_pm_entities);
 
+	spin_lock_init(&t7xx_dev->md_pm_lock);
+
 	mutex_init(&t7xx_dev->md_pm_entity_mtx);
 
+	init_completion(&t7xx_dev->sleep_lock_acquire);
 	init_completion(&t7xx_dev->pm_sr_ack);
 
+	atomic_set(&t7xx_dev->sleep_disable_count, 0);
 	device_init_wakeup(&pdev->dev, true);
 
 	dev_pm_set_driver_flags(&pdev->dev, pdev->dev.power.driver_flags |
@@ -99,6 +120,7 @@ void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev)
 {
 	/* Enable the PCIe resource lock only after MD deep sleep is done */
 	t7xx_mhccif_mask_clr(t7xx_dev,
+			     D2H_INT_DS_LOCK_ACK |
 			     D2H_INT_SUSPEND_ACK |
 			     D2H_INT_RESUME_ACK |
 			     D2H_INT_SUSPEND_ACK_AP |
@@ -164,6 +186,81 @@ int t7xx_pci_pm_entity_unregister(struct t7xx_pci_dev *t7xx_dev, struct md_pm_en
 	return -ENXIO;
 }
 
+int t7xx_pci_sleep_disable_complete(struct t7xx_pci_dev *t7xx_dev)
+{
+	struct device *dev = &t7xx_dev->pdev->dev;
+	int ret;
+
+	ret = wait_for_completion_timeout(&t7xx_dev->sleep_lock_acquire,
+					  msecs_to_jiffies(MTK_WAIT_TIMEOUT_MS));
+	if (!ret)
+		dev_err_ratelimited(dev, "Resource wait complete timed out\n");
+
+	return ret;
+}
+
+/**
+ * t7xx_pci_disable_sleep() - Disable deep sleep capability.
+ * @t7xx_dev: MTK device.
+ *
+ * Lock the deep sleep capability, note that the device can still go into deep sleep
+ * state while device is in D0 state, from the host's point-of-view.
+ *
+ * If device is in deep sleep state, wake up the device and disable deep sleep capability.
+ */
+void t7xx_pci_disable_sleep(struct t7xx_pci_dev *t7xx_dev)
+{
+	unsigned long flags;
+
+	if (atomic_read(&t7xx_dev->md_pm_state) < MTK_PM_RESUMED) {
+		atomic_inc(&t7xx_dev->sleep_disable_count);
+		complete_all(&t7xx_dev->sleep_lock_acquire);
+		return;
+	}
+
+	spin_lock_irqsave(&t7xx_dev->md_pm_lock, flags);
+	if (atomic_inc_return(&t7xx_dev->sleep_disable_count) == 1) {
+		u32 deep_sleep_enabled;
+
+		reinit_completion(&t7xx_dev->sleep_lock_acquire);
+		t7xx_dev_set_sleep_capability(t7xx_dev, false);
+
+		deep_sleep_enabled = ioread32(IREG_BASE(t7xx_dev) + PCIE_RESOURCE_STATUS);
+		deep_sleep_enabled &= PCIE_RESOURCE_STATUS_MSK;
+		if (deep_sleep_enabled) {
+			spin_unlock_irqrestore(&t7xx_dev->md_pm_lock, flags);
+			complete_all(&t7xx_dev->sleep_lock_acquire);
+			return;
+		}
+
+		t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_DS_LOCK);
+	}
+
+	spin_unlock_irqrestore(&t7xx_dev->md_pm_lock, flags);
+}
+
+/**
+ * t7xx_pci_enable_sleep() - Enable deep sleep capability.
+ * @t7xx_dev: MTK device.
+ *
+ * After enabling deep sleep, device can enter into deep sleep state.
+ */
+void t7xx_pci_enable_sleep(struct t7xx_pci_dev *t7xx_dev)
+{
+	unsigned long flags;
+
+	if (atomic_read(&t7xx_dev->md_pm_state) < MTK_PM_RESUMED) {
+		atomic_dec(&t7xx_dev->sleep_disable_count);
+		return;
+	}
+
+	if (atomic_dec_and_test(&t7xx_dev->sleep_disable_count)) {
+		spin_lock_irqsave(&t7xx_dev->md_pm_lock, flags);
+		t7xx_dev_set_sleep_capability(t7xx_dev, true);
+		spin_unlock_irqrestore(&t7xx_dev->md_pm_lock, flags);
+	}
+}
+
 static int __t7xx_pci_pm_suspend(struct pci_dev *pdev)
 {
 	struct t7xx_pci_dev *t7xx_dev;
diff --git a/drivers/net/wwan/t7xx/t7xx_pci.h b/drivers/net/wwan/t7xx/t7xx_pci.h
index 6310f31540ca..90d358bff54c 100644
--- a/drivers/net/wwan/t7xx/t7xx_pci.h
+++ b/drivers/net/wwan/t7xx/t7xx_pci.h
@@ -21,6 +21,7 @@
 #include <linux/irqreturn.h>
 #include <linux/mutex.h>
 #include <linux/pci.h>
+#include <linux/spinlock.h>
 #include <linux/types.h>
 
 #include "t7xx_reg.h"
@@ -51,7 +52,10 @@ typedef irqreturn_t (*t7xx_intr_callback)(int irq, void *param);
  * @base_addr: memory base addresses of HW components
  * @md: modem interface
  * @md_pm_entities: list of pm entities
+ * @md_pm_lock: protects PCIe sleep lock
  * @md_pm_entity_mtx: protects md_pm_entities list
+ * @sleep_disable_count: PCIe L1.2 lock counter
+ * @sleep_lock_acquire: indicates that sleep has been disabled
  * @pm_sr_ack: ack from the device when went to sleep or woke up
  * @md_pm_state: state for resume/suspend
  * @ccmni_ctlb: context structure used to control the network data path
@@ -69,7 +73,10 @@ struct t7xx_pci_dev {
 
 	/* Low Power Items */
 	struct list_head	md_pm_entities;
+	spinlock_t		md_pm_lock;		/* Protects PCI resource lock */
 	struct mutex		md_pm_entity_mtx;	/* Protects MD PM entities list */
+	atomic_t		sleep_disable_count;
+	struct completion	sleep_lock_acquire;
 	struct completion	pm_sr_ack;
 	atomic_t		md_pm_state;
 
@@ -105,6 +112,9 @@ struct md_pm_entity {
 	void			*entity_param;
 };
 
+void t7xx_pci_disable_sleep(struct t7xx_pci_dev *t7xx_dev);
+void t7xx_pci_enable_sleep(struct t7xx_pci_dev *t7xx_dev);
+int t7xx_pci_sleep_disable_complete(struct t7xx_pci_dev *t7xx_dev);
 int t7xx_pci_pm_entity_register(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity);
 int t7xx_pci_pm_entity_unregister(struct t7xx_pci_dev *t7xx_dev, struct md_pm_entity *pm_entity);
 void t7xx_pci_pm_init_late(struct t7xx_pci_dev *t7xx_dev);
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH net-next v4 13/13] net: wwan: t7xx: Add maintainers and documentation
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
                   ` (11 preceding siblings ...)
  2022-01-14  1:06 ` [PATCH net-next v4 12/13] net: wwan: t7xx: Device deep sleep lock/unlock Ricardo Martinez
@ 2022-01-14  1:06 ` Ricardo Martinez
  2022-01-15 14:53 ` [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Loic Poulain
  13 siblings, 0 replies; 44+ messages in thread
From: Ricardo Martinez @ 2022-01-14  1:06 UTC (permalink / raw)
  To: netdev, linux-wireless
  Cc: kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla, Ricardo Martinez

Add maintainers and documentation for the MediaTek t7xx 5G WWAN modem
device driver.

Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
---
 .../networking/device_drivers/wwan/index.rst  |   1 +
 .../networking/device_drivers/wwan/t7xx.rst   | 120 ++++++++++++++++++
 MAINTAINERS                                   |  11 ++
 3 files changed, 132 insertions(+)
 create mode 100644 Documentation/networking/device_drivers/wwan/t7xx.rst

diff --git a/Documentation/networking/device_drivers/wwan/index.rst b/Documentation/networking/device_drivers/wwan/index.rst
index 1cb8c7371401..370d8264d5dc 100644
--- a/Documentation/networking/device_drivers/wwan/index.rst
+++ b/Documentation/networking/device_drivers/wwan/index.rst
@@ -9,6 +9,7 @@ Contents:
    :maxdepth: 2
 
    iosm
+   t7xx
 
 .. only::  subproject and html
 
diff --git a/Documentation/networking/device_drivers/wwan/t7xx.rst b/Documentation/networking/device_drivers/wwan/t7xx.rst
new file mode 100644
index 000000000000..dd5b731957ca
--- /dev/null
+++ b/Documentation/networking/device_drivers/wwan/t7xx.rst
@@ -0,0 +1,120 @@
+.. SPDX-License-Identifier: GPL-2.0-only
+
+.. Copyright (C) 2020-21 Intel Corporation
+
+.. _t7xx_driver_doc:
+
+============================================
+t7xx driver for MTK PCIe based T700 5G modem
+============================================
+The t7xx driver is a WWAN PCIe host driver developed for Linux or Chrome OS platforms
+for data exchange over the PCIe interface between the host platform and MediaTek's
+T700 5G modem. The driver exposes an interface conforming to the MBIM protocol [1].
+Any front-end application (e.g. Modem Manager) can manage the MBIM interface to
+enable data communication towards WWAN. The driver also provides an interface to
+interact with MediaTek's modem via AT commands.
+
+Basic usage
+===========
+MBIM & AT functions are inactive when unmanaged. The t7xx driver provides
+WWAN port userspace interfaces representing MBIM & AT control channels and does
+not play any role in managing their functionality. It is the job of a userspace
+application to detect port enumeration and enable MBIM & AT functionalities.
+
+Examples of such userspace applications are:
+
+- mbimcli (included with the libmbim [2] library), and
+- Modem Manager [3]
+
+Management applications must carry out the following actions to establish an
+MBIM IP session (see the example after the list):
+
+- open the MBIM control channel
+- configure network connection settings
+- connect to network
+- configure IP network interface
+
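+For example, with the mbimcli tool the control channel can be opened and a
+data session established (the APN value here is only a placeholder, use the
+one provided by the operator; exact options depend on the libmbim version):
+
+  mbimcli -d /dev/wwan0mbim0 -p --connect=apn='internet'
+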
+Management applications must carry out the following action to send an AT
+command and receive its response:
+
+- open the AT control channel using a UART tool or a special user tool
+
+Management application development
+==================================
+The driver and userspace interfaces are described below. The MBIM protocol is
+described in [1] Mobile Broadband Interface Model v1.0 Errata-1.
+
+MBIM control channel userspace ABI
+----------------------------------
+
+/dev/wwan0mbim0 character device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The driver exposes an MBIM interface to the MBIM function by implementing
+MBIM WWAN Port. The userspace end of the control channel pipe is a
+/dev/wwan0mbim0 character device. Application shall use this interface for
+MBIM protocol communication.
+
+Fragmentation
+~~~~~~~~~~~~~
+The userspace application is responsible for all control message fragmentation
+and defragmentation as per MBIM specification.
+
+/dev/wwan0mbim0 write()
+~~~~~~~~~~~~~~~~~~~~~~~
+The MBIM control messages from the management application must not exceed the
+negotiated control message size.
+
+/dev/wwan0mbim0 read()
+~~~~~~~~~~~~~~~~~~~~~~
+The management application must accept control messages of up to the negotiated
+control message size.
+
+MBIM data channel userspace ABI
+-------------------------------
+
+wwan0-X network device
+~~~~~~~~~~~~~~~~~~~~~~
+The t7xx driver exposes an IP link interface "wwan0-X" of type "wwan" for IP
+traffic. The iproute network utility is used to create the "wwan0-X" network
+interface and to associate it with an MBIM IP session.
+
+The userspace management application is responsible for creating a new IP link
+prior to establishing an MBIM IP session where the SessionId is greater than 0.
+
+For example, to create a new IP link for an MBIM IP session with SessionId 1:
+
+  ip link add dev wwan0-1 parentdev wwan0 type wwan linkid 1
+
+The driver will automatically map the "wwan0-1" network device to MBIM IP
+session 1.
+
+AT port userspace ABI
+----------------------------------
+
+/dev/wwan0at0 character device
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The driver exposes an AT port by implementing AT WWAN Port.
+The userspace end of the control port is a /dev/wwan0at0 character
+device. Application shall use this interface to issue AT commands.
+
+MediaTek's T700 modem supports the 3GPP TS 27.007 [4] specification.
+
+References
+==========
+[1] *MBIM (Mobile Broadband Interface Model) Errata-1*
+
+- https://www.usb.org/document-library/
+
+[2] *libmbim "a glib-based library for talking to WWAN modems and devices which
+speak the Mobile Interface Broadband Model (MBIM) protocol"*
+
+- http://www.freedesktop.org/wiki/Software/libmbim/
+
+[3] *Modem Manager "a DBus-activated daemon which controls mobile broadband
+(2G/3G/4G/5G) devices and connections"*
+
+- http://www.freedesktop.org/wiki/Software/ModemManager/
+
+[4] *Specification # 27.007 - 3GPP*
+
+- https://www.3gpp.org/DynaReport/27007.htm
diff --git a/MAINTAINERS b/MAINTAINERS
index 53c98d5d204c..a5fd392331ea 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -12208,6 +12208,17 @@ S:	Maintained
 F:	drivers/net/dsa/mt7530.*
 F:	net/dsa/tag_mtk.c
 
+MEDIATEK T7XX 5G WWAN MODEM DRIVER
+M:	Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
+M:	Intel Corporation <linuxwwan@intel.com>
+R:	Chiranjeevi Rapolu <chiranjeevi.rapolu@linux.intel.com>
+R:	Liu Haijun <haijun.liu@mediatek.com>
+R:	M Chetan Kumar <m.chetan.kumar@linux.intel.com>
+R:	Ricardo Martinez <ricardo.martinez@linux.intel.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	drivers/net/wwan/t7xx/
+
 MEDIATEK USB3 DRD IP DRIVER
 M:	Chunfeng Yun <chunfeng.yun@mediatek.com>
 L:	linux-usb@vger.kernel.org
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 01/13] list: Add list_next_entry_circular() and list_prev_entry_circular()
  2022-01-14  1:06 ` [PATCH net-next v4 01/13] list: Add list_next_entry_circular() and list_prev_entry_circular() Ricardo Martinez
@ 2022-01-14 13:42   ` Andy Shevchenko
  0 siblings, 0 replies; 44+ messages in thread
From: Andy Shevchenko @ 2022-01-14 13:42 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, dinesh.sharma,
	eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla

On Thu, Jan 13, 2022 at 06:06:15PM -0700, Ricardo Martinez wrote:
> Add macros to get the next or previous entries and wraparound if
> needed. For example, calling list_next_entry_circular() on the last
> element should return the first element in the list.

FWIW,
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>

> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---
>  include/linux/list.h | 26 ++++++++++++++++++++++++++
>  1 file changed, 26 insertions(+)
> 
> diff --git a/include/linux/list.h b/include/linux/list.h
> index dd6c2041d09c..c147eeb2d39d 100644
> --- a/include/linux/list.h
> +++ b/include/linux/list.h
> @@ -563,6 +563,19 @@ static inline void list_splice_tail_init(struct list_head *list,
>  #define list_next_entry(pos, member) \
>  	list_entry((pos)->member.next, typeof(*(pos)), member)
>  
> +/**
> + * list_next_entry_circular - get the next element in list
> + * @pos:	the type * to cursor.
> + * @head:	the list head to take the element from.
> + * @member:	the name of the list_head within the struct.
> + *
> + * Wraparound if pos is the last element (return the first element).
> + * Note, that list is expected to be not empty.
> + */
> +#define list_next_entry_circular(pos, head, member) \
> +	(list_is_last(&(pos)->member, head) ? \
> +	list_first_entry(head, typeof(*(pos)), member) : list_next_entry(pos, member))
> +
>  /**
>   * list_prev_entry - get the prev element in list
>   * @pos:	the type * to cursor
> @@ -571,6 +584,19 @@ static inline void list_splice_tail_init(struct list_head *list,
>  #define list_prev_entry(pos, member) \
>  	list_entry((pos)->member.prev, typeof(*(pos)), member)
>  
> +/**
> + * list_prev_entry_circular - get the prev element in list
> + * @pos:	the type * to cursor.
> + * @head:	the list head to take the element from.
> + * @member:	the name of the list_head within the struct.
> + *
> + * Wraparound if pos is the first element (return the last element).
> + * Note, that list is expected to be not empty.
> + */
> +#define list_prev_entry_circular(pos, head, member) \
> +	(list_is_first(&(pos)->member, head) ? \
> +	list_last_entry(head, typeof(*(pos)), member) : list_prev_entry(pos, member))
> +
>  /**
>   * list_for_each	-	iterate over a list
>   * @pos:	the &struct list_head to use as a loop cursor.
> -- 
> 2.17.1
> 

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface
  2022-01-14  1:06 ` [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface Ricardo Martinez
@ 2022-01-14 14:13   ` Andy Shevchenko
  2022-01-18 14:13   ` Ilpo Järvinen
  2022-02-10 13:50   ` Ilpo Järvinen
  2 siblings, 0 replies; 44+ messages in thread
From: Andy Shevchenko @ 2022-01-14 14:13 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, dinesh.sharma,
	eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla

On Thu, Jan 13, 2022 at 06:06:16PM -0700, Ricardo Martinez wrote:
> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Cross Layer DMA (CLDMA) Hardware interface (HIF) enables the control
> path of Host-Modem data transfers. CLDMA HIF layer provides a common
> interface to the Port Layer.
> 
> CLDMA manages 8 independent RX/TX physical channels with data flow
> control in HW queues. CLDMA uses ring buffers of General Packet
> Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
> data buffers (DB).
> 
> CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
> interrupts, and initializes CLDMA HW registers.
> 
> CLDMA TX flow:
> 1. Port Layer write
> 2. Get DB address
> 3. Configure GPD
> 4. Triggering processing via HW register write
> 
> CLDMA RX flow:
> 1. CLDMA HW sends a RX "done" to host
> 2. Driver starts thread to safely read GPD
> 3. DB is sent to Port layer
> 4. Create a new buffer for GPD ring

...

> + * Copyright (c) 2021, Intel Corporation.

2021-2022 (in all files)?

...

> +#include <linux/dev_printk.h>
> +#include <linux/device.h>

I believe the former is guaranteed to be included in the latter.
The rest of the headers in this file looks good to me.

...

> +	if (dma_mapping_error(md_ctrl->dev, req->mapped_buff)) {

> +		dev_err(md_ctrl->dev, "DMA mapping failed\n");

Can we first free resources and then print messages?

Printing messages is a slow path and freeing resources first is a good
(micro-)optimization.

> +		dev_kfree_skb_any(req->skb);
> +		req->skb = NULL;
> +		req->mapped_buff = 0;
> +		return -ENOMEM;
> +	}
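
IOW, something like this (untested, just to show the ordering):

	if (dma_mapping_error(md_ctrl->dev, req->mapped_buff)) {
		dev_kfree_skb_any(req->skb);
		req->skb = NULL;
		req->mapped_buff = 0;
		dev_err(md_ctrl->dev, "DMA mapping failed\n");
		return -ENOMEM;
	}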

...

> +static int t7xx_cldma_rx_ring_init(struct cldma_ctrl *md_ctrl, struct cldma_ring *ring)
> +{
> +	struct cldma_request *req, *first_req = NULL;
> +	struct cldma_rgpd *prev_gpd, *gpd = NULL;
> +	int i;
> +
> +	for (i = 0; i < ring->length; i++) {
> +		req = t7xx_alloc_rx_request(md_ctrl, ring->pkt_size);
> +		if (!req) {
> +			t7xx_cldma_ring_free(md_ctrl, ring, DMA_FROM_DEVICE);
> +			return -ENOMEM;
> +		}
> +
> +		gpd = req->gpd;
> +		t7xx_cldma_rgpd_set_data_ptr(gpd, req->mapped_buff);
> +		gpd->data_allow_len = cpu_to_le16(ring->pkt_size);
> +		gpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
> +
> +		if (i)
> +			t7xx_cldma_rgpd_set_next_ptr(prev_gpd, req->gpd_addr);
> +		else
> +			first_req = req;
> +
> +		INIT_LIST_HEAD(&req->entry);
> +		list_add_tail(&req->entry, &ring->gpd_ring);
> +		prev_gpd = gpd;
> +	}

> +	if (first_req)

At which circumstances it is not defined? Only when ring->length == 0, right?

> +		t7xx_cldma_rgpd_set_next_ptr(gpd, first_req->gpd_addr);

Looking into this, perhaps it makes sense to refactor this way:

1. Define list head pointer on the stack
2. Iterate over the ring->length and add elements to that list
3. Iterate over the list and set the DMA links
   (t7xx_cldma_rgpd_set_next_ptr() calls)
4. Add this list to the main one (roughly as in the sketch below).
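
Something along these lines (untested sketch, allocation error handling
elided):

	struct cldma_request *req, *first = NULL, *prev = NULL;
	LIST_HEAD(tmp_ring);
	int i;

	for (i = 0; i < ring->length; i++) {
		req = t7xx_alloc_rx_request(md_ctrl, ring->pkt_size);
		if (!req)
			return -ENOMEM;	/* free tmp_ring first in the real code */

		req->gpd->data_allow_len = cpu_to_le16(ring->pkt_size);
		req->gpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
		t7xx_cldma_rgpd_set_data_ptr(req->gpd, req->mapped_buff);
		list_add_tail(&req->entry, &tmp_ring);
	}

	list_for_each_entry(req, &tmp_ring, entry) {
		if (prev)
			t7xx_cldma_rgpd_set_next_ptr(prev->gpd, req->gpd_addr);
		else
			first = req;
		prev = req;
	}

	if (first)
		t7xx_cldma_rgpd_set_next_ptr(prev->gpd, first->gpd_addr);

	list_splice_tail(&tmp_ring, &ring->gpd_ring);
	return 0;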

> +	return 0;
> +}

...

> +static int t7xx_cldma_tx_ring_init(struct cldma_ctrl *md_ctrl, struct cldma_ring *ring)
> +{
> +	struct cldma_request *req, *first_req = NULL;
> +	struct cldma_tgpd *tgpd, *prev_gpd;
> +	int i;
> +
> +	for (i = 0; i < ring->length; i++) {
> +		req = t7xx_alloc_tx_request(md_ctrl);
> +		if (!req) {
> +			t7xx_cldma_ring_free(md_ctrl, ring, DMA_TO_DEVICE);
> +			return -ENOMEM;
> +		}
> +
> +		tgpd = req->gpd;
> +		tgpd->gpd_flags = GPD_FLAGS_IOC;
> +
> +		if (!first_req)
> +			first_req = req;
> +		else
> +			t7xx_cldma_tgpd_set_next_ptr(prev_gpd, req->gpd_addr);
> +
> +		INIT_LIST_HEAD(&req->entry);
> +		list_add_tail(&req->entry, &ring->gpd_ring);
> +		prev_gpd = tgpd;
> +	}
> +
> +	if (first_req)
> +		t7xx_cldma_tgpd_set_next_ptr(tgpd, first_req->gpd_addr);
> +
> +	return 0;

Ditto.

> +}


-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem
  2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
                   ` (12 preceding siblings ...)
  2022-01-14  1:06 ` [PATCH net-next v4 13/13] net: wwan: t7xx: Add maintainers and documentation Ricardo Martinez
@ 2022-01-15 14:53 ` Loic Poulain
  13 siblings, 0 replies; 44+ messages in thread
From: Loic Poulain @ 2022-01-15 14:53 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, andriy.shevchenko,
	dinesh.sharma, eliot.lee, ilpo.johannes.jarvinen, moises.veleta,
	pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla

On Fri, 14 Jan 2022 at 02:06, Ricardo Martinez
<ricardo.martinez@linux.intel.com> wrote:
>
> t7xx is the PCIe host device driver for Intel 5G 5000 M.2 solution which
> is based on MediaTek's T700 modem to provide WWAN connectivity.
> The driver uses the WWAN framework infrastructure to create the following
> control ports and network interfaces:
> * /dev/wwan0mbim0 - Interface conforming to the MBIM protocol.
>   Applications like libmbim [1] or Modem Manager [2] from v1.16 onwards
>   with [3][4] can use it to enable data communication towards WWAN.
> * /dev/wwan0at0 - Interface that supports AT commands.
> * wwan0 - Primary network interface for IP traffic.
>
> The main blocks in t7xx driver are:
> * PCIe layer - Implements probe, removal, and power management callbacks.
> * Port-proxy - Provides a common interface to interact with different types
>   of ports such as WWAN ports.
> * Modem control & status monitor - Implements the entry point for modem
>   initialization, reset and exit, as well as exception handling.
> * CLDMA (Control Layer DMA) - Manages the HW used by the port layer to send
>   control messages to the modem using MediaTek's CCCI (Cross-Core
>   Communication Interface) protocol.
> * DPMAIF (Data Plane Modem AP Interface) - Controls the HW that provides
>   uplink and downlink queues for the data path. The data exchange takes
>   place using circular buffers to share data buffer addresses and metadata
>   to describe the packets.
> * MHCCIF (Modem Host Cross-Core Interface) - Provides interrupt channels
>   for bidirectional event notification such as handshake, exception, PM and
>   port enumeration.
>
> The compilation of the t7xx driver is enabled by the CONFIG_MTK_T7XX config
> option which depends on CONFIG_WWAN.
> This driver was originally developed by MediaTek. Intel adapted t7xx to
> the WWAN framework, optimized and refactored the driver source in close
> collaboration with MediaTek. This will enable getting the t7xx driver on
> Approved Vendor List for interested OEM's and ODM's productization plans
> with Intel 5G 5000 M.2 solution.

From a WWAN framework perspective:

Reviewed-by: Loic Poulain <loic.poulain@linaro.org>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 03/13] net: wwan: t7xx: Add core components
  2022-01-14  1:06 ` [PATCH net-next v4 03/13] net: wwan: t7xx: Add core components Ricardo Martinez
@ 2022-01-16 15:37   ` kernel test robot
  2022-01-24 14:51   ` Ilpo Järvinen
  1 sibling, 0 replies; 44+ messages in thread
From: kernel test robot @ 2022-01-16 15:37 UTC (permalink / raw)
  To: Ricardo Martinez, netdev, linux-wireless
  Cc: kbuild-all, kuba, davem, johannes, ryazanov.s.a, loic.poulain,
	m.chetan.kumar, chandrashekar.devegowda, linuxwwan

Hi Ricardo,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on net-next/master]

url:    https://github.com/0day-ci/linux/commits/Ricardo-Martinez/net-wwan-t7xx-PCIe-driver-for-MediaTek-M-2-modem/20220114-090852
base:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 8aaaf2f3af2ae212428f4db1af34214225f5cec3
config: ia64-randconfig-m031-20220116 (https://download.01.org/0day-ci/archive/20220116/202201162348.afiiNcX1-lkp@intel.com/config)
compiler: ia64-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/e62b3a8ab00c63ee38d0d31cd402a1a40e8b2769
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Ricardo-Martinez/net-wwan-t7xx-PCIe-driver-for-MediaTek-M-2-modem/20220114-090852
        git checkout e62b3a8ab00c63ee38d0d31cd402a1a40e8b2769
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=ia64 SHELL=/bin/bash drivers/net/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from arch/ia64/include/asm/pgtable.h:153,
                    from include/linux/pgtable.h:6,
                    from include/linux/mm.h:33,
                    from include/linux/scatterlist.h:8,
                    from include/linux/dma-mapping.h:10,
                    from drivers/net/wwan/t7xx/t7xx_pci.c:23:
   arch/ia64/include/asm/mmu_context.h: In function 'reload_context':
>> arch/ia64/include/asm/mmu_context.h:127:48: error: variable 'old_rr4' set but not used [-Werror=unused-but-set-variable]
     127 |         unsigned long rr0, rr1, rr2, rr3, rr4, old_rr4;
         |                                                ^~~~~~~
   cc1: all warnings being treated as errors


vim +/old_rr4 +127 arch/ia64/include/asm/mmu_context.h

^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  121  
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  122  static inline void
badea125d7cbd9 include/asm-ia64/mmu_context.h David Mosberger-Tang 2005-07-25  123  reload_context (nv_mm_context_t context)
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  124  {
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  125  	unsigned long rid;
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  126  	unsigned long rid_incr = 0;
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16 @127  	unsigned long rr0, rr1, rr2, rr3, rr4, old_rr4;
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  128  
0a41e250116058 include/asm-ia64/mmu_context.h Peter Chubb          2005-08-16  129  	old_rr4 = ia64_get_rr(RGN_BASE(RGN_HPAGE));
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  130  	rid = context << 3;	/* make space for encoding the region number */
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  131  	rid_incr = 1 << 8;
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  132  
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  133  	/* encode the region id, preferred page size, and VHPT enable bit: */
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  134  	rr0 = (rid << 8) | (PAGE_SHIFT << 2) | 1;
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  135  	rr1 = rr0 + 1*rid_incr;
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  136  	rr2 = rr0 + 2*rid_incr;
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  137  	rr3 = rr0 + 3*rid_incr;
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  138  	rr4 = rr0 + 4*rid_incr;
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  139  #ifdef  CONFIG_HUGETLB_PAGE
^1da177e4c3f41 include/asm-ia64/mmu_context.h Linus Torvalds       2005-04-16  140  	rr4 = (rr4 & (~(0xfcUL))) | (old_rr4 & 0xfc);
0a41e250116058 include/asm-ia64/mmu_context.h Peter Chubb          2005-08-16  141  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface
  2022-01-14  1:06 ` [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface Ricardo Martinez
  2022-01-14 14:13   ` Andy Shevchenko
@ 2022-01-18 14:13   ` Ilpo Järvinen
  2022-01-18 22:22     ` Martinez, Ricardo
  2022-02-10 13:50   ` Ilpo Järvinen
  2 siblings, 1 reply; 44+ messages in thread
From: Ilpo Järvinen @ 2022-01-18 14:13 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Cross Layer DMA (CLDMA) Hardware interface (HIF) enables the control
> path of Host-Modem data transfers. CLDMA HIF layer provides a common
> interface to the Port Layer.
> 
> CLDMA manages 8 independent RX/TX physical channels with data flow
> control in HW queues. CLDMA uses ring buffers of General Packet
> Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
> data buffers (DB).
> 
> CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
> interrupts, and initializes CLDMA HW registers.
> 
> CLDMA TX flow:
> 1. Port Layer write
> 2. Get DB address
> 3. Configure GPD
> 4. Triggering processing via HW register write
> 
> CLDMA RX flow:
> 1. CLDMA HW sends a RX "done" to host
> 2. Driver starts thread to safely read GPD
> 3. DB is sent to Port layer
> 4. Create a new buffer for GPD ring
> 
> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---

In general, I felt it to be quite clean and understandable.


> +#define CLDMA_NUM 2

I tried to understand its purpose but it seems that only one of the 
indexes is used in the arrays where this define gives the size? Related to 
this, ID_CLDMA0 is not used anywhere?

> +static int t7xx_cldma_gpd_rx_collect(struct cldma_queue *queue, int budget)
> +{
> +	struct cldma_ctrl *md_ctrl = queue->md_ctrl;
> +	bool rx_not_done, over_budget = false;
> +	struct t7xx_cldma_hw *hw_info;
> +	unsigned int l2_rx_int;
> +	unsigned long flags;
> +	int ret;
> +
> +	hw_info = &md_ctrl->hw_info;
> +
> +	do {
> +		rx_not_done = false;
> +
> +		ret = t7xx_cldma_gpd_rx_from_q(queue, budget, &over_budget);
> +		if (ret == -ENODATA)
> +			return 0;
> +
> +		if (ret)
> +			return ret;
> +
> +		spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
> +		if (md_ctrl->rxq_active & BIT(queue->index)) {
> +			if (!t7xx_cldma_hw_queue_status(hw_info, queue->index, MTK_RX))
> +				t7xx_cldma_hw_resume_queue(hw_info, queue->index, MTK_RX);
> +
> +			l2_rx_int = t7xx_cldma_hw_int_status(hw_info, BIT(queue->index), MTK_RX);
> +			if (l2_rx_int) {
> +				t7xx_cldma_hw_rx_done(hw_info, l2_rx_int);
> +
> +				if (over_budget) {
> +					spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
> +					return -EAGAIN;
> +				}
> +
> +				rx_not_done = true;
> +			}
> +		}
> +
> +		spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
> +	} while (rx_not_done);
> +
> +	return 0;
> +}

Minor naming nit: rx_not_done doesn't really tell whether RX was done or
not, but rather whether the loop should run another iteration?

> +static void t7xx_cldma_ring_free(struct cldma_ctrl *md_ctrl,
> +				 struct cldma_ring *ring, enum dma_data_direction tx_rx)
> +{
> +	struct cldma_request *req_cur, *req_next;
> +
> +	list_for_each_entry_safe(req_cur, req_next, &ring->gpd_ring, entry) {
> +		if (req_cur->mapped_buff && req_cur->skb) {
> +			dma_unmap_single(md_ctrl->dev, req_cur->mapped_buff,
> +					 t7xx_skb_data_area_size(req_cur->skb), tx_rx);
> +			req_cur->mapped_buff = 0;
> +		}
> +
> +		dev_kfree_skb_any(req_cur->skb);
> +
> +		if (req_cur->gpd)
> +			dma_pool_free(md_ctrl->gpd_dmapool, req_cur->gpd, req_cur->gpd_addr);
> +
> +		list_del(&req_cur->entry);
> +		kfree_sensitive(req_cur);

Why _sensitive for a bunch of addresses? There's another one in 10/13 
which also looks bogus.

> +static void t7xx_cldma_enable_irq(struct cldma_ctrl *md_ctrl)
> +{
> +	t7xx_pcie_mac_set_int(md_ctrl->t7xx_dev, md_ctrl->hw_info.phy_interrupt_id);
> +}
> +
> +static void t7xx_cldma_disable_irq(struct cldma_ctrl *md_ctrl)
> +{
> +	t7xx_pcie_mac_clear_int(md_ctrl->t7xx_dev, md_ctrl->hw_info.phy_interrupt_id);
> +}

t7xx_pcie_mac_set_int and t7xx_pcie_mac_clear_int are only defined
by a later patch.

> +static bool t7xx_cldma_qs_are_active(struct t7xx_cldma_hw *hw_info)
> +{
> +	unsigned int tx_active;
> +	unsigned int rx_active;
> +
> +	tx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_TX);
> +	rx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_RX);
> +	if (tx_active == CLDMA_INVALID_STATUS || rx_active == CLDMA_INVALID_STATUS)

These cannot ever be true because of mask in t7xx_cldma_hw_queue_status().

> +static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
> +{
> +	struct cldma_queue *rxq = &md_ctrl->rxq[qnum];
> +	struct cldma_request *req;
> +	struct cldma_rgpd *rgpd;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&rxq->ring_lock, flags);
> +	t7xx_cldma_q_reset(rxq);
> +	list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
> +		rgpd = req->gpd;
> +		rgpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
> +		rgpd->data_buff_len = 0;
> +
> +		if (req->skb) {
> +			req->skb->len = 0;
> +			skb_reset_tail_pointer(req->skb);
> +		}
> +	}
> +
> +	spin_unlock_irqrestore(&rxq->ring_lock, flags);
> +	list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
> +		int ret;

I find this kind of newline+unlock+more code a bit odd groupingwise.
IMO, the newline should be after the unlock rather than just before it to 
better indicate the critical sections visually.

> +static void t7xx_cldma_stop_q(struct cldma_ctrl *md_ctrl, unsigned char qno, enum mtk_txrx tx_rx)
> +{
> +	struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
> +	if (tx_rx == MTK_RX) {
> +		t7xx_cldma_hw_irq_dis_eq(hw_info, qno, MTK_RX);
> +		t7xx_cldma_hw_irq_dis_txrx(hw_info, qno, MTK_RX);
> +
> +		if (qno == CLDMA_ALL_Q)
> +			md_ctrl->rxq_active &= ~TXRX_STATUS_BITMASK;
> +		else
> +			md_ctrl->rxq_active &= ~(TXRX_STATUS_BITMASK & BIT(qno));
> +
> +		t7xx_cldma_hw_stop_queue(hw_info, qno, MTK_RX);
> +	} else if (tx_rx == MTK_TX) {
> +		t7xx_cldma_hw_irq_dis_eq(hw_info, qno, MTK_TX);
> +		t7xx_cldma_hw_irq_dis_txrx(hw_info, qno, MTK_TX);
> +
> +		if (qno == CLDMA_ALL_Q)
> +			md_ctrl->txq_active &= ~TXRX_STATUS_BITMASK;
> +		else
> +			md_ctrl->txq_active &= ~(TXRX_STATUS_BITMASK & BIT(qno));
> +
> +		t7xx_cldma_hw_stop_queue(hw_info, qno, MTK_TX);
> +	}
> +
> +	spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
> +}

This is always called with CLDMA_ALL_Q, never with a single queue.

> +/**
> + * t7xx_cldma_send_skb() - Send control data to modem.
> + * @md_ctrl: CLDMA context structure.
> + * @qno: Queue number.
> + * @skb: Socket buffer.
> + * @blocking: True for blocking operation.
> + *
> + * Send control packet to modem using a ring buffer.
> + * If blocking is set, it will wait for completion.
> + *
> + * Return:
> + * * 0		- Success.
> + * * -ENOMEM	- Allocation failure.
> + * * -EINVAL	- Invalid queue request.
> + * * -EBUSY	- Resource lock failure.
> + */
> +int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb, bool blocking)
> +{
> +	struct cldma_request *tx_req;
> +	struct cldma_queue *queue;
> +	unsigned long flags;
> +	int ret;
> +
> +	if (qno >= CLDMA_TXQ_NUM)
> +		return -EINVAL;
> +
> +	queue = &md_ctrl->txq[qno];
> +
> +	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
> +	if (!(md_ctrl->txq_active & BIT(qno))) {
> +		ret = -EBUSY;
> +		spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
> +		goto allow_sleep;
> +	}
...
> +		if (!blocking) {
> +			ret = -EBUSY;
> +			break;
> +		}
> +
> +		ret = wait_event_interruptible_exclusive(queue->req_wq, queue->budget > 0);
> +	} while (!ret);
> +
> +allow_sleep:
> +	return ret;
> +}

First of all, if I interpreted the call chains correctly, this function is 
always called with blocking=true.

Second, the first codepath returning -EBUSY when not txq_active seems
twisted/reversed logic to me (not active => busy ?!?).


> +int t7xx_cldma_init(struct t7xx_modem *md, struct cldma_ctrl *md_ctrl)
> +{
> +	struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
> +	int i;
> +
> +	md_ctrl->txq_active = 0;
> +	md_ctrl->rxq_active = 0;
> +	md_ctrl->is_late_init = false;
> +
> +	spin_lock_init(&md_ctrl->cldma_lock);
> +	for (i = 0; i < CLDMA_TXQ_NUM; i++) {
> +		md_cd_queue_struct_init(&md_ctrl->txq[i], md_ctrl, MTK_TX, i);
> +		md_ctrl->txq[i].md = md;
> +
> +		md_ctrl->txq[i].worker =
> +			alloc_workqueue("md_hif%d_tx%d_worker",
> +					WQ_UNBOUND | WQ_MEM_RECLAIM | (i ? 0 : WQ_HIGHPRI),
> +					1, md_ctrl->hif_id, i);
> +		if (!md_ctrl->txq[i].worker)
> +			return -ENOMEM;

Leaks?

> +		INIT_WORK(&md_ctrl->txq[i].cldma_work, t7xx_cldma_tx_done);
> +	}
> +
> +	for (i = 0; i < CLDMA_RXQ_NUM; i++) {
> +		md_cd_queue_struct_init(&md_ctrl->rxq[i], md_ctrl, MTK_RX, i);
> +		md_ctrl->rxq[i].md = md;
> +		INIT_WORK(&md_ctrl->rxq[i].cldma_work, t7xx_cldma_rx_done);
> +
> +		md_ctrl->rxq[i].worker = alloc_workqueue("md_hif%d_rx%d_worker",
> +							 WQ_UNBOUND | WQ_MEM_RECLAIM,
> +							 1, md_ctrl->hif_id, i);
> +		if (!md_ctrl->rxq[i].worker)
> +			return -ENOMEM;

Ditto.
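
One way to plug the leaks (a sketch only, rearranging the quoted code; the
error labels are made up here, and a real fix might instead reuse a common
cleanup helper):

	for (i = 0; i < CLDMA_TXQ_NUM; i++) {
		md_cd_queue_struct_init(&md_ctrl->txq[i], md_ctrl, MTK_TX, i);
		md_ctrl->txq[i].md = md;

		md_ctrl->txq[i].worker =
			alloc_workqueue("md_hif%d_tx%d_worker",
					WQ_UNBOUND | WQ_MEM_RECLAIM | (i ? 0 : WQ_HIGHPRI),
					1, md_ctrl->hif_id, i);
		if (!md_ctrl->txq[i].worker)
			goto err_destroy_tx_wq;

		INIT_WORK(&md_ctrl->txq[i].cldma_work, t7xx_cldma_tx_done);
	}

	for (i = 0; i < CLDMA_RXQ_NUM; i++) {
		md_cd_queue_struct_init(&md_ctrl->rxq[i], md_ctrl, MTK_RX, i);
		md_ctrl->rxq[i].md = md;
		INIT_WORK(&md_ctrl->rxq[i].cldma_work, t7xx_cldma_rx_done);

		md_ctrl->rxq[i].worker = alloc_workqueue("md_hif%d_rx%d_worker",
							 WQ_UNBOUND | WQ_MEM_RECLAIM,
							 1, md_ctrl->hif_id, i);
		if (!md_ctrl->rxq[i].worker)
			goto err_destroy_rx_wq;
	}

	return 0;

err_destroy_rx_wq:
	while (i--)
		destroy_workqueue(md_ctrl->rxq[i].worker);
	i = CLDMA_TXQ_NUM;
err_destroy_tx_wq:
	while (i--)
		destroy_workqueue(md_ctrl->txq[i].worker);
	return -ENOMEM;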

> +enum cldma_id {
> +	ID_CLDMA0,

As mentioned above, this is not used anywhere.

> +	ID_CLDMA1,
> +};
> +
> +struct cldma_request {
> +	void *gpd;		/* Virtual address for CPU */
> +	dma_addr_t gpd_addr;	/* Physical address for DMA */
> +	struct sk_buff *skb;
> +	dma_addr_t mapped_buff;
> +	struct list_head entry;
> +};
> +
> +struct cldma_queue;

Unnecessary fwd decl.

> +struct cldma_ctrl;
> +
> +struct cldma_ring {
> +	struct list_head gpd_ring;	/* Ring of struct cldma_request */
> +	int length;			/* Number of struct cldma_request */
> +	int pkt_size;
> +};
> +
> +struct cldma_queue {
> +	struct t7xx_modem *md;
> +	struct cldma_ctrl *md_ctrl;
> +	enum mtk_txrx dir;
> +	unsigned char index;
> +	struct cldma_ring *tr_ring;
> +	struct cldma_request *tr_done;
> +	struct cldma_request *rx_refill;
> +	struct cldma_request *tx_xmit;

Based on the name, I'd have expected this to point to something that is 
currently under transmission but studying how it is used, it seems to 
point to next available req. Maybe rename to tx_next_avail or something 
along those lines? (Although when the ring is full, it isn't available 
if I understood the code correctly).

> +#define GPD_FLAGS_BDP		BIT(1)
> +#define GPD_FLAGS_BPS		BIT(2)

Unused.


-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface
  2022-01-18 14:13   ` Ilpo Järvinen
@ 2022-01-18 22:22     ` Martinez, Ricardo
  2022-01-19  9:52       ` Ilpo Järvinen
  2022-02-11  0:25       ` Sergey Ryazanov
  0 siblings, 2 replies; 44+ messages in thread
From: Martinez, Ricardo @ 2022-01-18 22:22 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla


On 1/18/2022 6:13 AM, Ilpo Järvinen wrote:
> On Thu, 13 Jan 2022, Ricardo Martinez wrote:
...
>> +#define CLDMA_NUM 2
> I tried to understand its purpose but it seems that only one of the
> indexes is used in the arrays where this define gives the size? Related to
> this, ID_CLDMA0 is not used anywhere?

The modem HW has 2 CLDMAs, idx 0 for the app processor (SAP) and idx 1
for the modem (MD).

CLDMA_NUM is defined as 2 to reflect the HW capabilities, but mainly to
have cleaner upcoming patches, which will use ID_CLDMA0.

If having arrays of size 1 is not a problem then we can define CLDMA_NUM
as 1 and play with the CLDMA indexes.
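
For illustration only (not from the patch, where CLDMA_NUM stays a plain
#define), the relationship described above could also be encoded by deriving
the array size from the enum, so ID_CLDMA0 is visibly accounted for:

	enum cldma_id {
		ID_CLDMA0,	/* app processor (SAP), used by upcoming patches */
		ID_CLDMA1,	/* modem (MD), the only index used in this series */
		CLDMA_NUM	/* number of CLDMA instances the HW provides */
	};

	static struct cldma_ctrl *cldma_md_ctrl[CLDMA_NUM];	/* hypothetical array */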

...


>> +static void t7xx_cldma_enable_irq(struct cldma_ctrl *md_ctrl)
>> +{
>> +	t7xx_pcie_mac_set_int(md_ctrl->t7xx_dev, md_ctrl->hw_info.phy_interrupt_id);
>> +}
>> +
>> +static void t7xx_cldma_disable_irq(struct cldma_ctrl *md_ctrl)
>> +{
>> +	t7xx_pcie_mac_clear_int(md_ctrl->t7xx_dev, md_ctrl->hw_info.phy_interrupt_id);
>> +}
> t7xx_pcie_mac_set_int and t7xx_pcie_mac_clear_int are only defined
> by a later patch.
>
>> +static bool t7xx_cldma_qs_are_active(struct t7xx_cldma_hw *hw_info)
>> +{
>> +	unsigned int tx_active;
>> +	unsigned int rx_active;
>> +
>> +	tx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_TX);
>> +	rx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_RX);
>> +	if (tx_active == CLDMA_INVALID_STATUS || rx_active == CLDMA_INVALID_STATUS)
> These cannot ever be true because of mask in t7xx_cldma_hw_queue_status().

t7xx_cldma_hw_queue_status() shouldn't apply the mask for CLDMA_ALL_Q.

>> +static int t7xx_cldma_clear_rxq(struct cldma_ctrl *md_ctrl, int qnum)
>> +{
>> +	struct cldma_queue *rxq = &md_ctrl->rxq[qnum];
>> +	struct cldma_request *req;
>> +	struct cldma_rgpd *rgpd;
>> +	unsigned long flags;
>> +
>> +	spin_lock_irqsave(&rxq->ring_lock, flags);
>> +	t7xx_cldma_q_reset(rxq);
>> +	list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
>> +		rgpd = req->gpd;
>> +		rgpd->gpd_flags = GPD_FLAGS_IOC | GPD_FLAGS_HWO;
>> +		rgpd->data_buff_len = 0;
>> +
>> +		if (req->skb) {
>> +			req->skb->len = 0;
>> +			skb_reset_tail_pointer(req->skb);
>> +		}
>> +	}
>> +
>> +	spin_unlock_irqrestore(&rxq->ring_lock, flags);
>> +	list_for_each_entry(req, &rxq->tr_ring->gpd_ring, entry) {
>> +		int ret;
> I find this kind of newline+unlock+more code a bit odd groupingwise.
> IMO, the newline should be after the unlock rather than just before it to
> better indicate the critical sections visually.

Agree. In general, the driver uses a newline after '}'; unlock operations
should be an exception, since it looks better to keep the critical section
blocks together.

...

>> +/**
>> + * t7xx_cldma_send_skb() - Send control data to modem.
>> + * @md_ctrl: CLDMA context structure.
>> + * @qno: Queue number.
>> + * @skb: Socket buffer.
>> + * @blocking: True for blocking operation.
>> + *
>> + * Send control packet to modem using a ring buffer.
>> + * If blocking is set, it will wait for completion.
>> + *
>> + * Return:
>> + * * 0		- Success.
>> + * * -ENOMEM	- Allocation failure.
>> + * * -EINVAL	- Invalid queue request.
>> + * * -EBUSY	- Resource lock failure.
>> + */
>> +int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct sk_buff *skb, bool blocking)
>> +{
>> +	struct cldma_request *tx_req;
>> +	struct cldma_queue *queue;
>> +	unsigned long flags;
>> +	int ret;
>> +
>> +	if (qno >= CLDMA_TXQ_NUM)
>> +		return -EINVAL;
>> +
>> +	queue = &md_ctrl->txq[qno];
>> +
>> +	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
>> +	if (!(md_ctrl->txq_active & BIT(qno))) {
>> +		ret = -EBUSY;
>> +		spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
>> +		goto allow_sleep;
>> +	}
> ...
>> +		if (!blocking) {
>> +			ret = -EBUSY;
>> +			break;
>> +		}
>> +
>> +		ret = wait_event_interruptible_exclusive(queue->req_wq, queue->budget > 0);
>> +	} while (!ret);
>> +
>> +allow_sleep:
>> +	return ret;
>> +}
> First of all, if I interpreted the call chains correctly, this function is
> always called with blocking=true.
>
> Second, the first codepath returning -EBUSY when not txq_active seems
> twisted/reversed logic to me (not active => busy ?!?).

What about -EINVAL?

Other codes considered: -EPERM, -ENETDOWN.

>
...


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface
  2022-01-18 22:22     ` Martinez, Ricardo
@ 2022-01-19  9:52       ` Ilpo Järvinen
  2022-01-19 19:04         ` Martinez, Ricardo
  2022-02-11  0:25       ` Sergey Ryazanov
  1 sibling, 1 reply; 44+ messages in thread
From: Ilpo Järvinen @ 2022-01-19  9:52 UTC (permalink / raw)
  To: Martinez, Ricardo
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Tue, 18 Jan 2022, Martinez, Ricardo wrote:

> 
> On 1/18/2022 6:13 AM, Ilpo Järvinen wrote:
> > On Thu, 13 Jan 2022, Ricardo Martinez wrote:
> ...
> > > +#define CLDMA_NUM 2
> > I tried to understand its purpose but it seems that only one of the
> > indexes is used in the arrays where this define gives the size? Related to
> > this, ID_CLDMA0 is not used anywhere?
> 
> The modem HW has 2 CLDMAs, idx 0 for the app processor (SAP) and idx 1 for the
> modem (MD).
> 
> CLDMA_NUM is defined as 2 to reflect the HW capabilities but mainly to have
> cleaner upcoming patches, which will use ID_CLDMA0.

Please note this in your commit message then and I think it should be 
fine to leave it as is (or use 1 sized array, if you prefer to).

> If having arrays of size 1 is not a problem then we can define CLDMA_NUM as 1
> and play with the CLDMA indexes.
> 
> ...
>
> > > +static bool t7xx_cldma_qs_are_active(struct t7xx_cldma_hw *hw_info)
> > > +{
> > > +	unsigned int tx_active;
> > > +	unsigned int rx_active;
> > > +
> > > +	tx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_TX);
> > > +	rx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_RX);
> > > +	if (tx_active == CLDMA_INVALID_STATUS || rx_active ==
> > > CLDMA_INVALID_STATUS)
> > These cannot ever be true because of mask in t7xx_cldma_hw_queue_status().
> 
> t7xx_cldma_hw_queue_status() shouldn't apply the mask for CLDMA_ALL_Q.

I guess it shouldn't, but it currently does apply 0xff (CLDMA_ALL_Q) as the
mask in that case. However, this raises another question: if 0xffffffff
(CLDMA_INVALID_STATUS) means the status is invalid, shouldn't that value be
returned to/checked/handled by all callers, both single Q and CLDMA_ALL_Q?

Why would CLDMA_ALL_Q be special in this respect that the INVALID_STATUS 
means invalid only with it?

> > > +/**
> > > + * t7xx_cldma_send_skb() - Send control data to modem.
> > > + * @md_ctrl: CLDMA context structure.
> > > + * @qno: Queue number.
> > > + * @skb: Socket buffer.
> > > + * @blocking: True for blocking operation.
> > > + *
> > > + * Send control packet to modem using a ring buffer.
> > > + * If blocking is set, it will wait for completion.
> > > + *
> > > + * Return:
> > > + * * 0		- Success.
> > > + * * -ENOMEM	- Allocation failure.
> > > + * * -EINVAL	- Invalid queue request.
> > > + * * -EBUSY	- Resource lock failure.
> > > + */
> > > +int t7xx_cldma_send_skb(struct cldma_ctrl *md_ctrl, int qno, struct
> > > sk_buff *skb, bool blocking)
> > > +{
> > > +	struct cldma_request *tx_req;
> > > +	struct cldma_queue *queue;
> > > +	unsigned long flags;
> > > +	int ret;
> > > +
> > > +	if (qno >= CLDMA_TXQ_NUM)
> > > +		return -EINVAL;
> > > +
> > > +	queue = &md_ctrl->txq[qno];
> > > +
> > > +	spin_lock_irqsave(&md_ctrl->cldma_lock, flags);
> > > +	if (!(md_ctrl->txq_active & BIT(qno))) {
> > > +		ret = -EBUSY;
> > > +		spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
> > > +		goto allow_sleep;
> > > +	}
> > ...
> > > +		if (!blocking) {
> > > +			ret = -EBUSY;
> > > +			break;
> > > +		}
> > > +
> > > +		ret = wait_event_interruptible_exclusive(queue->req_wq,
> > > queue->budget > 0);
> > > +	} while (!ret);
> > > +
> > > +allow_sleep:
> > > +	return ret;
> > > +}
> > First of all, if I interpreted the call chains correctly, this function is
> > always called with blocking=true.
> > 
> > Second, the first codepath returning -EBUSY when not txq_active seems
> > twisted/reversed logic to me (not active => busy ?!?).
> 
> What about -EINVAL?
> 
> Other codes considered: -EPERM, -ENETDOWN.

How about -EIO.
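
I.e. (a sketch of the suggested change to the quoted code):

	if (!(md_ctrl->txq_active & BIT(qno))) {
		ret = -EIO;
		spin_unlock_irqrestore(&md_ctrl->cldma_lock, flags);
		goto allow_sleep;
	}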

-- 
 i.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface
  2022-01-19  9:52       ` Ilpo Järvinen
@ 2022-01-19 19:04         ` Martinez, Ricardo
  0 siblings, 0 replies; 44+ messages in thread
From: Martinez, Ricardo @ 2022-01-19 19:04 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla


On 1/19/2022 1:52 AM, Ilpo Järvinen wrote:
> On Tue, 18 Jan 2022, Martinez, Ricardo wrote:
>
>> On 1/18/2022 6:13 AM, Ilpo Järvinen wrote:
>>> On Thu, 13 Jan 2022, Ricardo Martinez wrote:
...
>>> +static bool t7xx_cldma_qs_are_active(struct t7xx_cldma_hw *hw_info)
>>> +{
>>> +	unsigned int tx_active;
>>> +	unsigned int rx_active;
>>> +
>>> +	tx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_TX);
>>> +	rx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_RX);
>>> +	if (tx_active == CLDMA_INVALID_STATUS || rx_active ==
>>> CLDMA_INVALID_STATUS)
>>> These cannot ever be true because of mask in t7xx_cldma_hw_queue_status().
>> t7xx_cldma_hw_queue_status() shouldn't apply the mask for CLDMA_ALL_Q.
> I guess it shouldn't but it currently does apply 0xff (CLDMA_ALL_Q) as
> mask in that case. However, this now raises another question, if
> 0xffffffff (CLDMA_INVALID_STATUS) means status is invalid, should all
> callers both single Q and CLDMA_ALL_Q be returned/check/handle that value?
>
> Why would CLDMA_ALL_Q be special in this respect that the INVALID_STATUS
> means invalid only with it?

Reading 0xffffffff is used to detect if the PCI link was disconnected;
it is relevant in t7xx_cldma_qs_are_active() because it is a helper function
polled by t7xx_cldma_stop() to wait until the queues are not active anymore.

I think a cleaner implementation would be to use pci_device_is_present()
instead of the CLDMA_INVALID_STATUS check inside t7xx_cldma_qs_are_active()
and keep t7xx_cldma_hw_queue_status() free of that logic.
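
Roughly (a sketch only; it assumes the helper is changed to take md_ctrl so
it can reach the struct pci_dev, which is not how the quoted code is
structured):

	static bool t7xx_cldma_qs_are_active(struct cldma_ctrl *md_ctrl)
	{
		struct t7xx_cldma_hw *hw_info = &md_ctrl->hw_info;
		unsigned int tx_active, rx_active;

		if (!pci_device_is_present(md_ctrl->t7xx_dev->pdev))
			return false;

		tx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_TX);
		rx_active = t7xx_cldma_hw_queue_status(hw_info, CLDMA_ALL_Q, MTK_RX);

		return tx_active || rx_active;
	}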

...


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 03/13] net: wwan: t7xx: Add core components
  2022-01-14  1:06 ` [PATCH net-next v4 03/13] net: wwan: t7xx: Add core components Ricardo Martinez
  2022-01-16 15:37   ` kernel test robot
@ 2022-01-24 14:51   ` Ilpo Järvinen
  2022-01-25 19:13     ` Martinez, Ricardo
  1 sibling, 1 reply; 44+ messages in thread
From: Ilpo Järvinen @ 2022-01-24 14:51 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Registers the t7xx device driver with the kernel. Setup all the core
> components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
> modem control operations, modem state machine, and build
> infrastructure.
> 
> * PCIe layer code implements driver probe and removal.
> * MHCCIF provides interrupt channels to communicate events
>   such as handshake, PM and port enumeration.
> * Modem control implements the entry point for modem init,
>   reset and exit.
> * The modem status monitor is a state machine used by modem control
>   to complete initialization and stop. It is used also to propagate
>   exception events reported by other components.
> 
> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---

Some states in t7xx_common.h (MD_STATE_...) would logically belong to this
patch instead of 02/. ...I think they were initially here but got moved
with t7xx_skb_data_area_size(). And there were also things clearly related
to 05/ in t7xx_common.h (at least CTL_ID_*).

> +static irqreturn_t t7xx_mhccif_isr_thread(int irq, void *data)
> +{
> +	struct t7xx_pci_dev *t7xx_dev = data;
> +	u32 int_sts, val;
> +
> +	val = L1_1_DISABLE_BIT(1) | L1_2_DISABLE_BIT(1);
> +	iowrite32(val, IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
> +
> +	int_sts = t7xx_mhccif_read_sw_int_sts(t7xx_dev);
> +	if (int_sts & t7xx_dev->mhccif_bitmask)

mhccif_bitmask is set to a constant value and used only in this one place.
I'd also spell out sts as status.

> +static int t7xx_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> +{
> +	struct t7xx_pci_dev *t7xx_dev;
> +	int ret;
> +
> +	t7xx_dev = devm_kzalloc(&pdev->dev, sizeof(*t7xx_dev), GFP_KERNEL);
> +	if (!t7xx_dev)
> +		return -ENOMEM;
> +
> +	pci_set_drvdata(pdev, t7xx_dev);
> +	t7xx_dev->pdev = pdev;
> +
> +	ret = pcim_enable_device(pdev);
> +	if (ret)
> +		return ret;
> +
> +	pci_set_master(pdev);
> +
> +	ret = pcim_iomap_regions(pdev, BIT(PCI_IREG_BASE) | BIT(PCI_EREG_BASE), pci_name(pdev));
> +	if (ret) {
> +		dev_err(&pdev->dev, "Could not request BARs: %d\n", ret);
> +		return -ENOMEM;
> +	}
> +
> +	ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(64));
> +	if (ret) {
> +		dev_err(&pdev->dev, "Could not set PCI DMA mask: %d\n", ret);
> +		return ret;
> +	}
> +
> +	ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64));
> +	if (ret) {
> +		dev_err(&pdev->dev, "Could not set consistent PCI DMA mask: %d\n", ret);
> +		return ret;
> +	}
> +
> +	IREG_BASE(t7xx_dev) = pcim_iomap_table(pdev)[PCI_IREG_BASE];
> +	t7xx_dev->base_addr.pcie_ext_reg_base = pcim_iomap_table(pdev)[PCI_EREG_BASE];
> +
> +	t7xx_pcie_mac_atr_init(t7xx_dev);
> +	t7xx_pci_infracfg_ao_calc(t7xx_dev);
> +	t7xx_mhccif_init(t7xx_dev);
> +
> +	ret = t7xx_md_init(t7xx_dev);
> +	if (ret)
> +		return ret;
> +
> +	t7xx_pcie_mac_interrupts_dis(t7xx_dev);
> +
> +	ret = t7xx_interrupt_init(t7xx_dev);
> +	if (ret)
> +		return ret;

Some leaks?

> +/**
> + * t7xx_pcie_mac_clear_set_int() - Clear/set interrupt by type.
> + * @t7xx_dev: MTK device.
> + * @int_type: Interrupt type.
> + * @clear: Clear/set.
> + *
> + * Clear or set device interrupt by type.
> + */
> +static void t7xx_pcie_mac_clear_set_int(struct t7xx_pci_dev *t7xx_dev,
> +					enum pcie_int int_type, bool clear)
> +{
> +	void __iomem *reg;
> +	u32 val;
> +
> +	if (t7xx_dev->pdev->msix_enabled) {
> +		if (clear)
> +			reg = IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_CLR_GRP0_0;
> +		else
> +			reg = IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_SET_GRP0_0;
> +	} else {
> +		if (clear)
> +			reg = IREG_BASE(t7xx_dev) + INT_EN_HST_CLR;
> +		else
> +			reg = IREG_BASE(t7xx_dev) + INT_EN_HST_SET;
> +	}
> +
> +	val = BIT(EXT_INT_START + int_type);
> +	iowrite32(val, reg);
> +}
> +
> +void t7xx_pcie_mac_clear_int(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type)
> +{
> +	t7xx_pcie_mac_clear_set_int(t7xx_dev, int_type, true);
> +}
> +
> +void t7xx_pcie_mac_set_int(struct t7xx_pci_dev *t7xx_dev, enum pcie_int int_type)
> +{
> +	t7xx_pcie_mac_clear_set_int(t7xx_dev, int_type, false);
> +}

...

> +#define PCIE_MAC_MSIX_MSK_SET(t7xx_dev, ext_id)	\
> +	iowrite32(BIT(ext_id), IREG_BASE(t7xx_dev) + IMASK_HOST_MSIX_SET_GRP0_0)

A near duplicate of t7xx_pcie_mac_clear_set_int()/t7xx_pcie_mac_set_int()?

> +enum pcie_int {
> +	DPMAIF_INT = 0,
> +	CLDMA0_INT,
> +	CLDMA1_INT,
> +	CLDMA2_INT,
> +	MHCCIF_INT,
> +	DPMAIF2_INT,
> +	SAP_RGU_INT,
> +	CLDMA3_INT,
> +};

A bit too generic name for a driver specific enum?
There were also some PCIE_ starting defines you might want to take a look 
at.

> +static void fsm_wait_for_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id,
> +			       enum t7xx_fsm_event_state event_ignore, int timeout)
> +{
> +	struct t7xx_fsm_event *event;
> +	unsigned long flags;
> +	bool ackd = false;
> +	int cnt = 0;

int retries = timeout / FSM_EVENT_POLL_INTERVAL_MS;
(Or move that divide into caller which then gets optimized away by the 
compiler).

> +
> +	while (cnt++ < timeout / FSM_EVENT_POLL_INTERVAL_MS) {
> +		if (kthread_should_stop())
> +			return;
> +
> +		spin_lock_irqsave(&ctl->event_lock, flags);
> +		event = list_first_entry_or_null(&ctl->event_queue,
> +						 struct t7xx_fsm_event, entry);
> +		if (event) {
> +			if (event->event_id == event_ignore) {
> +				fsm_del_kf_event(event);
> +			} else if (event->event_id == event_id) {
> +				ackd = true;
> +				fsm_del_kf_event(event);
> +			}
> +		}
> +
> +		spin_unlock_irqrestore(&ctl->event_lock, flags);
> +		if (ackd)
> +			break;
> +
> +		msleep(FSM_EVENT_POLL_INTERVAL_MS);

I wonder if an event gets ignored, is msleep() useful also in that case?

> +	}
> +}
> +
> +static void fsm_routine_exception(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd,
> +				  enum t7xx_ex_reason reason)
> +{
> +	struct device *dev = &ctl->md->t7xx_dev->pdev->dev;
> +
> +	dev_err(dev, "Exception %d, from %ps\n", reason, __builtin_return_address(0));

Is that address useful?

> +	if (ctl->curr_state != FSM_STATE_READY && ctl->curr_state != FSM_STATE_STARTING) {
> +		if (cmd)
> +			fsm_finish_command(ctl, cmd, -EINVAL);
> +
> +		return;
> +	}
> +
> +	ctl->curr_state = FSM_STATE_EXCEPTION;
> +
> +	switch (reason) {
> +	case EXCEPTION_HS_TIMEOUT:
> +		dev_err(dev, "BOOT_HS_FAIL\n");
> +		break;

...

> +	if (!md->core_md.ready) {
> +		dev_err(dev, "MD handshake timeout\n");
> +		fsm_routine_exception(ctl, NULL, EXCEPTION_HS_TIMEOUT);

Perhaps one dev_err() would suffice for this case :-). ...The other
one is inside fsm_routine_exception() (shown in the fragment above),
although there's some non-trivial state-based logic in between which you
want to check before removing either of them.


> +int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id, unsigned int flag)

No callsite in this patch seems to care about the error code, is it ok?
E.g.:
> +int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
> +{
...
> +       t7xx_fsm_append_cmd(md->fsm_ctl, FSM_CMD_START, 0);
If this returns an error, does it mean init/probe stalls? Or is there
some backup to restart?

> +int t7xx_fsm_append_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state event_id,
> +			  unsigned char *data, unsigned int length)
Again, none of the callsites care?


-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 04/13] net: wwan: t7xx: Add port proxy infrastructure
  2022-01-14  1:06 ` [PATCH net-next v4 04/13] net: wwan: t7xx: Add port proxy infrastructure Ricardo Martinez
@ 2022-01-25 13:38   ` Ilpo Järvinen
  2022-02-10 13:34   ` Ilpo Järvinen
  1 sibling, 0 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-01-25 13:38 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Port-proxy provides a common interface to interact with different types
> of ports. Ports export their configuration via `struct t7xx_port` and
> operate as defined by `struct port_ops`.
> 
> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> Co-developed-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---

> +struct t7xx_port {
> +	/* Members not initialized in definition */
> +	struct t7xx_port_static *port_static;
> +	struct wwan_port	*wwan_port;
> +	struct t7xx_pci_dev	*t7xx_dev;
> +	struct device		*dev;
> +	short			seq_nums[2];

Make this u16 to avoid implicit sign extensions. If port->seq_nums[MTK_TX]
weren't a non-constant expression in
+       ccci_h->status |= cpu_to_le32(FIELD_PREP(HDR_FLD_SEQ, port->seq_nums[MTK_TX]));
it would have triggered internal consistency checks due to implicit sign
extension (in include/linux/bitfield.h):
#define __BF_FIELD_CHECK(_mask, _reg, _val, _pfx)                       \
                ...
                BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ?           \
                                 ~((_mask) >> __bf_shf(_mask)) & (_val) : 0, \
                                 _pfx "value too large for the field"); \
t7xx_port_check_rx_seq_num() is already using u16 for the seq number.

> +#define CHECK_RX_SEQ_MASK		GENMASK(14, 0)

This seems the same as FIELD_MAX(HDR_FLD_SEQ). Maybe drop the define and
use it directly in t7xx_port_check_rx_seq_num() as it is already using 
FIELD_GET(HDR_FLD_SEQ).
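
I.e. (sketch):

	if (assert_bit && port->seq_nums[MTK_RX] &&
	    ((seq_num - port->seq_nums[MTK_RX]) & FIELD_MAX(HDR_FLD_SEQ)) != 1) {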

> +static u16 t7xx_port_check_rx_seq_num(struct t7xx_port *port, struct ccci_header *ccci_h)
> +{
> +	u16 seq_num, assert_bit;
> +
> +	seq_num = FIELD_GET(HDR_FLD_SEQ, le32_to_cpu(ccci_h->status));
> +	assert_bit = FIELD_GET(HDR_FLD_AST, le32_to_cpu(ccci_h->status));
> +	if (assert_bit && port->seq_nums[MTK_RX] &&
> +	    ((seq_num - port->seq_nums[MTK_RX]) & CHECK_RX_SEQ_MASK) != 1) {

Why the non-zero port->seq_nums[MTK_RX] check? It cannot be the initial
value check (-1 is used as the initial value). As is, it could miss
out-of-order packets when seq_nums[MTK_RX] has overflowed back to zero.

> +int t7xx_port_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
> +{
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&port->rx_wq.lock, flags);
> +	if (port->rx_skb_list.qlen < port->rx_length_th) {
> +		struct ccci_header *ccci_h = (struct ccci_header *)skb->data;
> +		u32 channel;
> +
> +		port->flags &= ~PORT_F_RX_FULLED;
> +		if (port->flags & PORT_F_RX_ADJUST_HEADER)
> +			t7xx_port_adjust_skb(port, skb);
> +
> +		channel = FIELD_GET(HDR_FLD_CHN, le32_to_cpu(ccci_h->status));
> +		if (channel == PORT_CH_STATUS_RX) {
> +			port->skb_handler(port, skb);
> +		} else {
> +			if (port->wwan_port)
> +				wwan_port_rx(port->wwan_port, skb);
> +			else
> +				__skb_queue_tail(&port->rx_skb_list, skb);
> +		}
> +
> +		spin_unlock_irqrestore(&port->rx_wq.lock, flags);
> +		wake_up_all(&port->rx_wq);
> +		return 0;
> +	}
> +
> +	port->flags |= PORT_F_RX_FULLED;
> +	spin_unlock_irqrestore(&port->rx_wq.lock, flags);
> +	return -ENOBUFS;
> +}

More typical construct would reverse the if condition, use goto 
queue_full; and deindent the normal path.
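
One possible shape (illustration only, rearranging the quoted code):

	int t7xx_port_recv_skb(struct t7xx_port *port, struct sk_buff *skb)
	{
		struct ccci_header *ccci_h = (struct ccci_header *)skb->data;
		unsigned long flags;
		u32 channel;

		spin_lock_irqsave(&port->rx_wq.lock, flags);
		if (port->rx_skb_list.qlen >= port->rx_length_th)
			goto queue_full;

		port->flags &= ~PORT_F_RX_FULLED;
		if (port->flags & PORT_F_RX_ADJUST_HEADER)
			t7xx_port_adjust_skb(port, skb);

		channel = FIELD_GET(HDR_FLD_CHN, le32_to_cpu(ccci_h->status));
		if (channel == PORT_CH_STATUS_RX)
			port->skb_handler(port, skb);
		else if (port->wwan_port)
			wwan_port_rx(port->wwan_port, skb);
		else
			__skb_queue_tail(&port->rx_skb_list, skb);

		spin_unlock_irqrestore(&port->rx_wq.lock, flags);
		wake_up_all(&port->rx_wq);
		return 0;

	queue_full:
		port->flags |= PORT_F_RX_FULLED;
		spin_unlock_irqrestore(&port->rx_wq.lock, flags);
		return -ENOBUFS;
	}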

> +int t7xx_port_proxy_send_skb(struct t7xx_port *port, struct sk_buff *skb)
> +{
> +	struct ccci_header *ccci_h = (struct ccci_header *)(skb->data);
> +	struct cldma_ctrl *md_ctrl;
> +	unsigned char tx_qno;
> +	int ret;
> +
> +	tx_qno = t7xx_port_get_queue_no(port);
> +	t7xx_port_proxy_set_seq_num(port, ccci_h);
> +
> +	md_ctrl = get_md_ctrl(port);
> +	ret = t7xx_cldma_send_skb(md_ctrl, tx_qno, skb, true);
> +	if (ret) {
> +		dev_err(port->dev, "Failed to send skb: %d\n", ret);
> +		return ret;
> +	}
> +
> +	/* Record the port seq_num after the data is sent to HIF.
> +	 * Only bits 0-14 are used, thus negating overflow.
> +	 */
> +	port->seq_nums[MTK_TX]++;

I think the comment is not particularly useful (and I'd argue it's not
factually correct either as overflow is still occurring).

> +static int t7xx_port_proxy_dispatch_recv_skb(struct cldma_queue *queue, struct sk_buff *skb,
> +					     bool *drop_skb_on_err)
> +{
> +	struct ccci_header *ccci_h = (struct ccci_header *)skb->data;
> +	struct port_proxy *port_prox = queue->md->port_prox;
> +	struct t7xx_fsm_ctl *ctl = queue->md->fsm_ctl;
> +	struct list_head *port_list;
> +	struct t7xx_port *port;
> +	u16 seq_num, channel;
> +	int ret = 0;
> +	u8 ch_id;
> +
> +	channel = FIELD_GET(HDR_FLD_CHN, le32_to_cpu(ccci_h->status));
> +	ch_id = FIELD_GET(PORT_CH_ID_MASK, channel);
> +
> +	if (t7xx_fsm_get_md_state(ctl) == MD_STATE_INVALID) {
> +		*drop_skb_on_err = true;
> +		return -EINVAL;
> +	}
> +
> +	port_list = &port_prox->rx_ch_ports[ch_id];
> +	list_for_each_entry(port, port_list, entry) {
> +		struct t7xx_port_static *port_static = port->port_static;
> +
> +		if (queue->md_ctrl->hif_id != port_static->path_id || channel !=
> +		    port_static->rx_ch)
> +			continue;
> +
> +		/* Multi-cast is not supported, because one port may be freed and can modify
> +		 * this request before another port can process it.
> +		 * However we still can use req->state to do some kind of multi-cast if needed.
> +		 */
> +		if (port_static->ops->recv_skb) {
> +			seq_num = t7xx_port_check_rx_seq_num(port, ccci_h);
> +			ret = port_static->ops->recv_skb(port, skb);
> +			/* If the packet is stored to RX buffer successfully or dropped,
> +			 * the sequence number will be updated.
> +			 */
> +			if (ret == -ENETDOWN || (ret < 0 && port->flags & PORT_F_RX_ALLOW_DROP)) {
> +				*drop_skb_on_err = true;
> +				dev_err_ratelimited(port->dev,
> +						    "port %s RX full, drop packet\n",
> +						    port_static->name);
> +			}

One -ENETDOWN path (the only one returned by the driver itself) already
printed an error, and the information is not consistent with what is being
printed here; perhaps these error branches would need to be differentiated?


> +static void t7xx_proxy_init_all_ports(struct t7xx_modem *md)
> +{
> +	struct port_proxy *port_prox = md->port_prox;
> +	struct t7xx_port *port;
> +	int i;
> +
> +	for_each_proxy_port(i, port, port_prox) {
> +		struct t7xx_port_static *port_static = port->port_static;
> +
> +		t7xx_port_struct_init(port);
> +
> +		port->t7xx_dev = md->t7xx_dev;
> +		port->dev = &md->t7xx_dev->pdev->dev;
> +		spin_lock_init(&port->port_update_lock);
> +		spin_lock(&port->port_update_lock);
> +		mutex_init(&port->tx_mutex_lock);
> +
> +		if (port->flags & PORT_F_CHAR_NODE_SHOW)
> +			port->chan_enable = true;
> +		else
> +			port->chan_enable = false;
> +
> +		port->chn_crt_stat = false;
> +		spin_unlock(&port->port_update_lock);

Odd looking spin_lock sequence.



> +int t7xx_port_proxy_node_control(struct t7xx_modem *md, struct port_msg *port_msg)
> +{
> +	u32 *port_info_base = (void *)port_msg + sizeof(*port_msg);
...
> +		u32 *port_info = port_info_base + i;
...
> +		ch_id = FIELD_GET(PORT_INFO_CH_ID, *port_info);

Byte-order.


> +#define PORT_INFO_ENFLG		GENMASK(15, 15)

BIT(15) would seem more sensical for a flag bit.


-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 03/13] net: wwan: t7xx: Add core components
  2022-01-24 14:51   ` Ilpo Järvinen
@ 2022-01-25 19:13     ` Martinez, Ricardo
  2022-01-26 10:45       ` Ilpo Järvinen
  2022-01-27 17:36       ` Ilpo Järvinen
  0 siblings, 2 replies; 44+ messages in thread
From: Martinez, Ricardo @ 2022-01-25 19:13 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla


On 1/24/2022 6:51 AM, Ilpo Järvinen wrote:
> On Thu, 13 Jan 2022, Ricardo Martinez wrote:
>
>> From: Haijun Liu <haijun.liu@mediatek.com>
>>
>> Registers the t7xx device driver with the kernel. Setup all the core
>> components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
>> modem control operations, modem state machine, and build
>> infrastructure.
>>
>> * PCIe layer code implements driver probe and removal.
>> * MHCCIF provides interrupt channels to communicate events
>>    such as handshake, PM and port enumeration.
>> * Modem control implements the entry point for modem init,
>>    reset and exit.
>> * The modem status monitor is a state machine used by modem control
>>    to complete initialization and stop. It is used also to propagate
>>    exception events reported by other components.
>>
>> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
>> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
>> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
>> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
>> ---
> Some states in t7xx_common.h (MD_STATE_...) would logically belong to this
> patch instead of 02/. ...I think they were initally here but got moved
> with t7xx_skb_data_area_size(). And there was also things clearly related
> to 05/ in t7xx_common.h (at least CTL_ID_*).

Originally, 02 and 03 were going to be part of the same "Core
functionality" patch; the only reason for splitting it was to make that
core patch smaller. The result is that 02 uses code defined at 03; note
that compilation is enabled at 03.

Will merge 02 and 03 in the next version, and also clean unused
definitions out of t7xx_common.h.

...
>> +int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id, unsigned int flag)
> No callsite in this patch seems to care about the error code, is it ok?

Even though there's no recovery path (like retry) for
t7xx_fsm_append_cmd() failures, it makes sense to propagate the error
instead of ignoring it; will add that in the next version.

> E.g.:
>> +int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
>> +{
>> ...
> If this returns an error, does it mean init/probe stalls? Or is there
> some backup to restart?
An error here will cause probe to fail; there's no recovery path for this.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 03/13] net: wwan: t7xx: Add core components
  2022-01-25 19:13     ` Martinez, Ricardo
@ 2022-01-26 10:45       ` Ilpo Järvinen
  2022-01-27 17:36       ` Ilpo Järvinen
  1 sibling, 0 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-01-26 10:45 UTC (permalink / raw)
  To: Martinez, Ricardo
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Tue, 25 Jan 2022, Martinez, Ricardo wrote:

> 
> On 1/24/2022 6:51 AM, Ilpo Järvinen wrote:
> > On Thu, 13 Jan 2022, Ricardo Martinez wrote:
> > 
> > > From: Haijun Liu <haijun.liu@mediatek.com>
> > > 
> > > Registers the t7xx device driver with the kernel. Setup all the core
> > > components: PCIe layer, Modem Host Cross Core Interface (MHCCIF),
> > > modem control operations, modem state machine, and build
> > > infrastructure.
> > > 
> > > * PCIe layer code implements driver probe and removal.
> > > * MHCCIF provides interrupt channels to communicate events
> > >    such as handshake, PM and port enumeration.
> > > * Modem control implements the entry point for modem init,
> > >    reset and exit.
> > > * The modem status monitor is a state machine used by modem control
> > >    to complete initialization and stop. It is used also to propagate
> > >    exception events reported by other components.
> > > 
> > > Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> > > Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> > > Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> > > Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> > > ---
> > Some states in t7xx_common.h (MD_STATE_...) would logically belong to this
> > patch instead of 02/. ...I think they were initally here but got moved
> > with t7xx_skb_data_area_size(). And there was also things clearly related
> > to 05/ in t7xx_common.h (at least CTL_ID_*).
> 
> Originally, 02 and 03 were going to be part of the same "Core functionality"
> patch, the only reason for splitting it was to make that core patch smaller.
> The result is that 02 uses code defined at 03, note that compilation is
> enabled at 03.
> 
> Will merge 02 and 03 in the next version, also clean t7xx_common.h from
> definitions not used.

I didn't mind the core split itself, but some things just logically seemed
to clearly belong to the other side of the split (or to a later patch
altogether). As the split was made mostly based on files rather than on a
logical level, it is no surprise some defs end up on the wrong side of it.
That being said, there's IMHO no need to go entirely overboard with
this and fine-comb every single line in the header files.

And yes, I know the compile cannot fail until 03. However, the other
aspect is that when somebody, a few years from now, has to look at
these changes in the git history, having most of the elements in
logical places will help.


-- 
 i.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 05/13] net: wwan: t7xx: Add control port
  2022-01-14  1:06 ` [PATCH net-next v4 05/13] net: wwan: t7xx: Add control port Ricardo Martinez
@ 2022-01-27 10:40   ` Ilpo Järvinen
  2022-01-27 14:53     ` Andy Shevchenko
  0 siblings, 1 reply; 44+ messages in thread
From: Ilpo Järvinen @ 2022-01-27 10:40 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Control Port implements driver control messages such as modem-host
> handshaking, controls port enumeration, and handles exception messages.
> 
> The handshaking process between the driver and the modem happens during
> the init sequence. The process involves the exchange of a list of
> supported runtime features to make sure that modem and host are ready
> to provide proper feature lists including port enumeration. Further
> features can be enabled and controlled in this handshaking process.
> 
> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---

> +	/* Fill runtime feature */
> +	for (i = 0; i < FEATURE_COUNT; i++) {
> +		u8 md_feature_mask = FIELD_GET(FEATURE_MSK, md_feature->feature_set[i]);
> +
> +		memset(&rt_feature, 0, sizeof(rt_feature));
> +		rt_feature.feature_id = i;
> +
> +		switch (md_feature_mask) {
> +		case MTK_FEATURE_DOES_NOT_EXIST:
> +		case MTK_FEATURE_MUST_BE_SUPPORTED:
> +			rt_feature.support_info = md_feature->feature_set[i];
> +			break;
> +
> +		default:
> +			break;

Please remove empty default blocks from all patches.


> +		}
> +
> +		if (FIELD_GET(FEATURE_MSK, rt_feature.support_info) !=
> +		    MTK_FEATURE_MUST_BE_SUPPORTED) {
> +			memcpy(rt_data, &rt_feature, sizeof(rt_feature));
> +			rt_data += sizeof(rt_feature);
> +		}
> +
> +		packet_size += sizeof(struct mtk_runtime_feature);
> +	}

Is it intentional these two additions (rt_data and packet_size) are on
different sides of the if block?


> +static int port_ctl_init(struct t7xx_port *port)
> +{
> +	struct t7xx_port_static *port_static = port->port_static;
> +
> +	port->skb_handler = &control_msg_handler;
> +	port->thread = kthread_run(port_ctl_rx_thread, port, "%s", port_static->name);
> +	if (IS_ERR(port->thread)) {
> +		dev_err(port->dev, "Failed to start port control thread\n");
> +		return PTR_ERR(port->thread);
> +	}
> +
> +	port->rx_length_th = CTRL_QUEUE_MAXLEN;
> +	return 0;
> +}
> +
> +static void port_ctl_uninit(struct t7xx_port *port)
> +{
> +	unsigned long flags;
> +	struct sk_buff *skb;
> +
> +	if (port->thread)
> +		kthread_stop(port->thread);
> +
> +	spin_lock_irqsave(&port->rx_wq.lock, flags);
> +	while ((skb = __skb_dequeue(&port->rx_skb_list)) != NULL)
> +		dev_kfree_skb_any(skb);
> +
> +	spin_unlock_irqrestore(&port->rx_wq.lock, flags);
> +}

I wonder if the uninit should set rx_length_th to 0 to prevent
further accumulation of skbs?
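
For instance (sketch only, under the same lock the drain already takes):

	spin_lock_irqsave(&port->rx_wq.lock, flags);
	port->rx_length_th = 0;	/* stop t7xx_port_recv_skb() from queueing more */
	while ((skb = __skb_dequeue(&port->rx_skb_list)) != NULL)
		dev_kfree_skb_any(skb);
	spin_unlock_irqrestore(&port->rx_wq.lock, flags);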

> +	FSM_EVENT_AP_HS2_EXIT,

Never used anywhere.


-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports
  2022-01-14  1:06 ` [PATCH net-next v4 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports Ricardo Martinez
@ 2022-01-27 11:56   ` Ilpo Järvinen
  0 siblings, 0 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-01-27 11:56 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> 
> Adds AT and MBIM ports to the port proxy infrastructure.
> The initialization method is responsible for creating the corresponding
> ports using the WWAN framework infrastructure. The implemented WWAN port
> operations are start, stop, and TX.
> 
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---


> +		memcpy(skb_put(skb_ccci, actual_len), skb->data + i * (txq_mtu - CCCI_H_ELEN),
> +		       actual_len);

Use skb_put_data().
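
For example (same arguments as the quoted memcpy(); sketch only):

	skb_put_data(skb_ccci, skb->data + i * (txq_mtu - CCCI_H_ELEN), actual_len);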

> +		t7xx_port_proxy_set_seq_num(port_private, ccci_h);
> +
> +		ret = t7xx_port_send_skb_to_md(port_private, skb_ccci, true);
> +		if (ret) {
> +			dev_err(port_private->dev, "Write error on %s port, %d\n",
> +				port_static->name, ret);
> +			dev_kfree_skb_any(skb_ccci);

Free first.

> +static void t7xx_port_wwan_uninit(struct t7xx_port *port)
> +{
> +	if (port->wwan_port) {
> +		if (port->chn_crt_stat) {
...
> +     if (port->chn_crt_stat != port->chan_enable)
...
> +     if (port->chn_crt_stat != port->chan_enable)

I don't see anything that would ever make chn_crt_stat true.

> +struct port_ops wwan_sub_port_ops = {
> +	.init = &t7xx_port_wwan_init,
> +	.recv_skb = &t7xx_port_wwan_recv_skb,
> +	.uninit = &t7xx_port_wwan_uninit,
> +	.enable_chl = &t7xx_port_wwan_enable_chl,
> +	.disable_chl = &t7xx_port_wwan_disable_chl,
> +	.md_state_notify = &t7xx_port_wwan_md_state_notify,

Drop & from these (in all patches).


-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 05/13] net: wwan: t7xx: Add control port
  2022-01-27 10:40   ` Ilpo Järvinen
@ 2022-01-27 14:53     ` Andy Shevchenko
  0 siblings, 0 replies; 44+ messages in thread
From: Andy Shevchenko @ 2022-01-27 14:53 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: Ricardo Martinez, Netdev, linux-wireless, kuba, davem, johannes,
	ryazanov.s.a, loic.poulain, m.chetan.kumar,
	chandrashekar.devegowda, linuxwwan, chiranjeevi.rapolu,
	haijun.liu, amir.hanania, dinesh.sharma, eliot.lee,
	moises.veleta, pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla

On Thu, Jan 27, 2022 at 12:40:42PM +0200, Ilpo Järvinen wrote:
> On Thu, 13 Jan 2022, Ricardo Martinez wrote:

...

> > +		default:
> > +			break;
> 
> Please remove empty default blocks from all patches.

Some (presumably old or with some warnings enabled, consider `make W=1`
or `make W=2`) compilers would not be happy with such a decision.

-- 
With Best Regards,
Andy Shevchenko



^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 03/13] net: wwan: t7xx: Add core components
  2022-01-25 19:13     ` Martinez, Ricardo
  2022-01-26 10:45       ` Ilpo Järvinen
@ 2022-01-27 17:36       ` Ilpo Järvinen
  1 sibling, 0 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-01-27 17:36 UTC (permalink / raw)
  To: Martinez, Ricardo
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Tue, 25 Jan 2022, Martinez, Ricardo wrote:

> On 1/24/2022 6:51 AM, Ilpo Järvinen wrote:
> > On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> > > +int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state
> > > cmd_id, unsigned int flag)
> > No callsite in this patch seems to care about the error code, is it ok?
> 
> Even though there's no recovery path (like retry) for t7xx_fsm_append_cmd()
> failures, it makes sense to
> 
> propagate the error instead of ignoring it, will add that in the next version.
> 
> > E.g.:
> > > +int t7xx_md_init(struct t7xx_pci_dev *t7xx_dev)
> > > +{
> > > ...
> > If this returns an error, does it mean init/probe stalls? Or is there
> > some backup to restart?
> An error here will cause probe to fail, there's no recovery path for this.

Just to clarify, I think you misunderstood what I meant, as you cut the
critical line out in the reply. ...I meant here that if
t7xx_fsm_append_cmd returns an error, it will not make the probe fail
but lead to probe stalling (which propagating the error, as you intend
to do, will nicely address).


-- 
 i.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 07/13] net: wwan: t7xx: Data path HW layer
  2022-01-14  1:06 ` [PATCH net-next v4 07/13] net: wwan: t7xx: Data path HW layer Ricardo Martinez
@ 2022-02-01  9:08   ` Ilpo Järvinen
  2022-02-01 10:13     ` Ilpo Järvinen
  2022-02-03  2:30     ` Martinez, Ricardo
  0 siblings, 2 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-02-01  9:08 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Data Path Modem AP Interface (DPMAIF) HW layer provides HW abstraction
> for the upper layer (DPMAIF HIF). It implements functions to do the HW
> configuration, TX/RX control and interrupt handling.
> 
> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---


> +static int t7xx_dpmaif_init_intr(struct dpmaif_hw_info *hw_info)
> +{
> +	struct dpmaif_isr_en_mask *isr_en_msk = &hw_info->isr_en_mask;

dpmaif_isr_en_mask is not defined until patch 08.


> +/* The para->intr_cnt counter is set to zero before this function is called.
> + * It does not check for overflow as there is no risk of overflowing intr_types or intr_queues.
> + */
> +static void t7xx_dpmaif_hw_check_rx_intr(struct dpmaif_ctrl *dpmaif_ctrl,
> +					 unsigned int *pl2_rxisar0,
> +					 struct dpmaif_hw_intr_st_para *para, int qno)
> +{
> +	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
> +	unsigned int l2_rxisar0 = *pl2_rxisar0;
> +	unsigned int value;
> +
> +	if (qno == DPF_RX_QNO_DFT) {
> +		value = l2_rxisar0 & DP_DL_INT_SKB_LEN_ERR;
> +		if (value)

In this function, the value variable doesn't seem that useful compared with
checking the condition directly on the if line. And the value is never used
after that.
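
I.e. (sketch):

	if (l2_rxisar0 & DP_DL_INT_SKB_LEN_ERR)
		t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_SKB_LEN_ERR, DPF_RX_QNO_DFT);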

> +			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_SKB_LEN_ERR, DPF_RX_QNO_DFT);
> +
> +		value = l2_rxisar0 & DP_DL_INT_BATCNT_LEN_ERR;
> +		if (value) {
> +			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_BATCNT_LEN_ERR, DPF_RX_QNO_DFT);
> +			hw_info->isr_en_mask.ap_dl_l2intr_en_msk &= ~DP_DL_INT_BATCNT_LEN_ERR;
> +			iowrite32(DP_DL_INT_BATCNT_LEN_ERR,
> +				  hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
> +		}
> +
> +		value = l2_rxisar0 & DP_DL_INT_PITCNT_LEN_ERR;
> +		if (value) {
> +			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_PITCNT_LEN_ERR, DPF_RX_QNO_DFT);
> +			hw_info->isr_en_mask.ap_dl_l2intr_en_msk &= ~DP_DL_INT_PITCNT_LEN_ERR;
> +			iowrite32(DP_DL_INT_PITCNT_LEN_ERR,
> +				  hw_info->pcie_base + DPMAIF_AO_UL_APDL_L2TIMSR0);
> +		}
> +
> +		value = l2_rxisar0 & DP_DL_INT_PKT_EMPTY_MSK;
> +		if (value)
> +			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_PKT_EMPTY_SET, DPF_RX_QNO_DFT);
> +
> +		value = l2_rxisar0 & DP_DL_INT_FRG_EMPTY_MSK;
> +		if (value)
> +			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_FRG_EMPTY_SET, DPF_RX_QNO_DFT);
> +
> +		value = l2_rxisar0 & DP_DL_INT_MTU_ERR_MSK;
> +		if (value)
> +			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_MTU_ERR, DPF_RX_QNO_DFT);
> +
> +		value = l2_rxisar0 & DP_DL_INT_FRG_LENERR_MSK;
> +		if (value)
> +			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_FRGCNT_LEN_ERR, DPF_RX_QNO_DFT);
> +
> +		value = l2_rxisar0 & DP_DL_INT_Q0_PITCNT_LEN_ERR;
> +		if (value) {
> +			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q0_PITCNT_LEN_ERR, BIT(qno));
> +			t7xx_dpmaif_dlq_mask_rx_pitcnt_len_err_intr(hw_info, qno);
> +		}
> +
> +		value = l2_rxisar0 & DP_DL_INT_HPC_ENT_TYPE_ERR;
> +		if (value)
> +			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_HPC_ENT_TYPE_ERR,
> +						  DPF_RX_QNO_DFT);
> +
> +		value = l2_rxisar0 & DP_DL_INT_Q0_DONE;
> +		if (value) {
> +			/* Mask RX done interrupt immediately after it occurs */
> +			if (!t7xx_mask_dlq_intr(dpmaif_ctrl, qno)) {
> +				t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q0_DONE, BIT(qno));
> +			} else {
> +				/* Unable to clear the interrupt, try again on the next one
> +				 * device entered low power mode or suffer exception
> +				 */

It's not obvious what "on the next one" means. I assume you're
also missing a period at the end of the first line.

> +				*pl2_rxisar0 = l2_rxisar0 & ~DP_DL_INT_Q0_DONE;
> +			}
> +		}
> +	} else {
> +		value = l2_rxisar0 & DP_DL_INT_Q1_PITCNT_LEN_ERR;
> +		if (value) {
> +			t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q1_PITCNT_LEN_ERR, BIT(qno));
> +			t7xx_dpmaif_dlq_mask_rx_pitcnt_len_err_intr(hw_info, qno);
> +		}
> +
> +		value = l2_rxisar0 & DP_DL_INT_Q1_DONE;
> +		if (value) {
> +			if (!t7xx_mask_dlq_intr(dpmaif_ctrl, qno))
> +				t7xx_dpmaif_set_intr_para(para, DPF_INTR_DL_Q1_DONE, BIT(qno));
> +			else
> +				*pl2_rxisar0 = l2_rxisar0 & ~DP_DL_INT_Q1_DONE;
> +		}
> +	}
> +}


> +static void t7xx_dpmaif_dl_dlq_pit_en(struct dpmaif_hw_info *hw_info, unsigned char q_num,
> +				      bool enable)
> +{
> +	unsigned int value;
> +
> +	value = ioread32(hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON3);
> +
> +	if (enable)
> +		value |= DPMAIF_DLQPIT_EN_MSK;
> +	else
> +		value &= ~DPMAIF_DLQPIT_EN_MSK;
> +
> +	iowrite32(value, hw_info->pcie_base + DPMAIF_DL_DLQPIT_INIT_CON3);
> +}

Only called with enable = true.


> +static int t7xx_dpmaif_config_dlq_hw(struct dpmaif_ctrl *dpmaif_ctrl)
> +{
> +	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
> +	struct dpmaif_dl_hwq *dl_hw;

Only defined in patch 08. I might not have noticed all the missing defs,
so please compile test yourself to find the rest, if any.

In general, it would be useful to use, e.g., a shell for loop to compile
test every change incrementally in the patchset before sending them out.

Another thing is that the values inside struct dpmaif_dl_hwq are
just set from constants and never changed anywhere. Why not use 
the constants directly?


> +static void t7xx_dpmaif_ul_all_q_en(struct dpmaif_hw_info *hw_info, bool enable)
> +{
> +	u32 ul_arb_en = ioread32(hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
> +
> +	if (enable)
> +		ul_arb_en |= DPMAIF_UL_ALL_QUE_ARB_EN;
> +	else
> +		ul_arb_en &= ~DPMAIF_UL_ALL_QUE_ARB_EN;
> +
> +	iowrite32(ul_arb_en, hw_info->pcie_base + DPMAIF_AO_UL_CHNL_ARB0);
> +}
> +
> +static bool t7xx_dpmaif_ul_idle_check(struct dpmaif_hw_info *hw_info)
> +{
> +	u32 dpmaif_ul_is_busy = ioread32(hw_info->pcie_base + DPMAIF_UL_CHK_BUSY);
> +
> +	return !(dpmaif_ul_is_busy & DPMAIF_UL_IDLE_STS);
> +}
> +
> + /* DPMAIF UL Part HW setting */

While not extremely useful to begin with, isn't this comment placed too
late, as it comes after two UL-related functions?

> +unsigned int t7xx_dpmaif_dl_get_frg_rd_idx(struct dpmaif_hw_info *hw_info, unsigned char q_num)
> +{
> +	u32 value;
> +
> +	value = ioread32(hw_info->pcie_base + DPMAIF_AO_DL_FRGBAT_WRIDX);
> +	return value & DPMAIF_DL_FRG_WRIDX_MSK;
> +}

The function name has rd_idx but the defines used are WRIDX. Is that intentional?


> +enum dpmaif_hw_intr_type {
> +	DPF_INTR_INVALID_MIN,
> +	DPF_INTR_UL_DONE,
> +	DPF_INTR_UL_DRB_EMPTY,
> +	DPF_INTR_UL_MD_NOTREADY,
> +	DPF_INTR_UL_MD_PWR_NOTREADY,
> +	DPF_INTR_UL_LEN_ERR,
> +	DPF_INTR_DL_DONE,
> +	DPF_INTR_DL_SKB_LEN_ERR,
> +	DPF_INTR_DL_BATCNT_LEN_ERR,
> +	DPF_INTR_DL_PITCNT_LEN_ERR,
> +	DPF_INTR_DL_PKT_EMPTY_SET,
> +	DPF_INTR_DL_FRG_EMPTY_SET,
> +	DPF_INTR_DL_MTU_ERR,
> +	DPF_INTR_DL_FRGCNT_LEN_ERR,
> +	DPF_INTR_DL_Q0_PITCNT_LEN_ERR,
> +	DPF_INTR_DL_Q1_PITCNT_LEN_ERR,
> +	DPF_INTR_DL_HPC_ENT_TYPE_ERR,
> +	DPF_INTR_DL_Q0_DONE,
> +	DPF_INTR_DL_Q1_DONE,
> +	DPF_INTR_INVALID_MAX
> +};
> +
> +#define DPF_RX_QNO0			0
> +#define DPF_RX_QNO1			1
> +#define DPF_RX_QNO_DFT			DPF_RX_QNO0
> +
> +struct dpmaif_hw_intr_st_para {
> +	unsigned int intr_cnt;
> +	enum dpmaif_hw_intr_type intr_types[DPF_INTR_INVALID_MAX - 1];
> +	unsigned int intr_queues[DPF_INTR_INVALID_MAX - 1];

Off-by-one errors?

In addition, I think there's some other problem related to these as
there are 20 values in enum (of which two are named "INVALID") but
t7xx_dpmaif_set_intr_para seems to be called only with 17 of them
(DPF_INTR_DL_DONE not among the calls). This implies intr_cnt will
likely be too small to cover the last entry when it is being used
in 08/13 for a for loop termination condition.


-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 07/13] net: wwan: t7xx: Data path HW layer
  2022-02-01  9:08   ` Ilpo Järvinen
@ 2022-02-01 10:13     ` Ilpo Järvinen
  2022-02-03  2:30     ` Martinez, Ricardo
  1 sibling, 0 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-02-01 10:13 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Tue, 1 Feb 2022, Ilpo Järvinen wrote:

> On Thu, 13 Jan 2022, Ricardo Martinez wrote:
> 
> > From: Haijun Liu <haijun.liu@mediatek.com>
> > 
> > Data Path Modem AP Interface (DPMAIF) HW layer provides HW abstraction
> > for the upper layer (DPMAIF HIF). It implements functions to do the HW
> > configuration, TX/RX control and interrupt handling.
> > 
> > Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> > Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> > Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> > Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> > ---

> > +enum dpmaif_hw_intr_type {
> > +	DPF_INTR_INVALID_MIN,
> > +	DPF_INTR_UL_DONE,
> > +	DPF_INTR_UL_DRB_EMPTY,
> > +	DPF_INTR_UL_MD_NOTREADY,
> > +	DPF_INTR_UL_MD_PWR_NOTREADY,
> > +	DPF_INTR_UL_LEN_ERR,
> > +	DPF_INTR_DL_DONE,
> > +	DPF_INTR_DL_SKB_LEN_ERR,
> > +	DPF_INTR_DL_BATCNT_LEN_ERR,
> > +	DPF_INTR_DL_PITCNT_LEN_ERR,
> > +	DPF_INTR_DL_PKT_EMPTY_SET,
> > +	DPF_INTR_DL_FRG_EMPTY_SET,
> > +	DPF_INTR_DL_MTU_ERR,
> > +	DPF_INTR_DL_FRGCNT_LEN_ERR,
> > +	DPF_INTR_DL_Q0_PITCNT_LEN_ERR,
> > +	DPF_INTR_DL_Q1_PITCNT_LEN_ERR,
> > +	DPF_INTR_DL_HPC_ENT_TYPE_ERR,
> > +	DPF_INTR_DL_Q0_DONE,
> > +	DPF_INTR_DL_Q1_DONE,
> > +	DPF_INTR_INVALID_MAX
> > +};
> > +
> > +#define DPF_RX_QNO0			0
> > +#define DPF_RX_QNO1			1
> > +#define DPF_RX_QNO_DFT			DPF_RX_QNO0
> > +
> > +struct dpmaif_hw_intr_st_para {
> > +	unsigned int intr_cnt;
> > +	enum dpmaif_hw_intr_type intr_types[DPF_INTR_INVALID_MAX - 1];
> > +	unsigned int intr_queues[DPF_INTR_INVALID_MAX - 1];
> 
> Off-by-one errors?
> 
> In addition, I think there's some other problem related to these as
> there are 20 values in enum (of which two are named "INVALID") but
> t7xx_dpmaif_set_intr_para seems to be called only with 17 of them
> (DPF_INTR_DL_DONE not among the calls). This implies intr_cnt will
> likely be too small to cover the last entry when it is being used
> in 08/13 for a for loop termination condition.

Never mind this one. I misread the code (I somehow got to thinking that
the type would be used for indexing, which isn't true).

-- 
 i.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 07/13] net: wwan: t7xx: Data path HW layer
  2022-02-01  9:08   ` Ilpo Järvinen
  2022-02-01 10:13     ` Ilpo Järvinen
@ 2022-02-03  2:30     ` Martinez, Ricardo
  1 sibling, 0 replies; 44+ messages in thread
From: Martinez, Ricardo @ 2022-02-03  2:30 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla


On 2/1/2022 1:08 AM, Ilpo Järvinen wrote:
> On Thu, 13 Jan 2022, Ricardo Martinez wrote:
>
...
>> +static int t7xx_dpmaif_config_dlq_hw(struct dpmaif_ctrl *dpmaif_ctrl)
>> +{
>> +	struct dpmaif_hw_info *hw_info = &dpmaif_ctrl->hif_hw_info;
>> +	struct dpmaif_dl_hwq *dl_hw;
> Only defined in 08. I might have not noticed all missing defs
> so please compile test yourself to find the rest if any.
>
> In general, it would be useful to use, e.g., a shell for loop to compile
> test every change incrementally in the patchset before sending them out.

Compilation is tested in every incremental patch.

This file provides lower level functions used only by code in 08, hence
it is added to the Makefile at 08.

For the next iteration, I'll decouple 07 and 08, but I think it makes
sense to keep the Makefile changes at 08 when the functionality is
actually added to the driver.

> Another thing is that the values inside struct dpmaif_dl_hwq are
> just set from constants and never changed anywhere. Why not use
> the constants directly?
>
Agree. Using the constants directly will also help to decouple 07 and 08.

...

>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface
  2022-01-14  1:06 ` [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface Ricardo Martinez
@ 2022-02-03 14:23   ` Ilpo Järvinen
  2022-02-08  8:19   ` Ilpo Järvinen
  1 sibling, 0 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-02-03 14:23 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
> for initialization, ISR, control and event handling of TX/RX flows.
> 
> DPMAIF TX
> Exposes the `dmpaif_tx_send_skb` function which can be used by the
> network device to transmit packets.
> The uplink data management uses a Descriptor Ring Buffer (DRB).
> First DRB entry is a message type that will be followed by 1 or more
> normal DRB entries. Message type DRB will hold the skb information
> and each normal DRB entry holds a pointer to the skb payload.
> 
> DPMAIF RX
> The downlink buffer management uses Buffer Address Table (BAT) and
> Packet Information Table (PIT) rings.
> The BAT ring holds the address of skb data buffer for the HW to use,
> while the PIT contains metadata about a whole network packet including
> a reference to the BAT entry holding the data buffer address.
> The driver reads the PIT and BAT entries written by the modem, when
> reaching a threshold, the driver will reload the PIT and BAT rings.
> 
> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---

> +	unsigned short		last_ch_id;
Value is never used.

> +	if (old_rl_idx > old_wr_idx && new_wr_idx >= old_rl_idx) {
> +		dev_err(dpmaif_ctrl->dev, "RX BAT flow check fail\n");
> +		return -EINVAL;
> +	}
> +
> +	if (new_wr_idx >= bat_req->bat_size_cnt) {
> +		new_wr_idx -= bat_req->bat_size_cnt;
> +		if (new_wr_idx >= old_rl_idx) {
> +			dev_err(dpmaif_ctrl->dev, "RX BAT flow check fail\n");
> +			return -EINVAL;
> +		}

Make a label for the identical block and goto there.
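
I.e., roughly (untested, label name made up):

	if (old_rl_idx > old_wr_idx && new_wr_idx >= old_rl_idx)
		goto err_flow_check;

	if (new_wr_idx >= bat_req->bat_size_cnt) {
		new_wr_idx -= bat_req->bat_size_cnt;
		if (new_wr_idx >= old_rl_idx)
			goto err_flow_check;
	}
	...

err_flow_check:
	dev_err(dpmaif_ctrl->dev, "RX BAT flow check fail\n");
	return -EINVAL;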

> +static void t7xx_unmap_bat_skb(struct device *dev, struct dpmaif_bat_skb *bat_skb_base,
> +			       unsigned int index)
> +{
> +	struct dpmaif_bat_skb *bat_skb = bat_skb_base + index;
> +
> +	if (bat_skb->skb) {
> +		dma_unmap_single(dev, bat_skb->data_bus_addr, bat_skb->data_len, DMA_FROM_DEVICE);
> +		kfree_skb(bat_skb->skb);

For consistency, dev_kfree_skb?

> + * @initial: Indicates if the ring is being populated for the first time.
> + *
> + * Allocate skb and store the start address of the data buffer into the BAT ring.
> + * If this is not the initial call, notify the HW about the new entries.
> + *
> + * Return:
> + * * 0		- Success.
> + * * -ERROR	- Error code from failure sub-initializations.
> + */
> +int t7xx_dpmaif_rx_buf_alloc(struct dpmaif_ctrl *dpmaif_ctrl,
> +			     const struct dpmaif_bat_request *bat_req,
> +			     const unsigned char q_num, const unsigned int buf_cnt,
> +			     const bool initial)

vs its prototype:

+int t7xx_dpmaif_rx_buf_alloc(struct dpmaif_ctrl *dpmaif_ctrl,
+                            const struct dpmaif_bat_request *bat_req, const unsigned char q_num,
+                            const unsigned int buf_cnt, const bool first_time);

> +int t7xx_dpmaif_rx_frag_alloc(struct dpmaif_ctrl *dpmaif_ctrl, struct dpmaif_bat_request *bat_req,
> +			      const unsigned int buf_cnt, const bool initial)
> +{
> +	struct dpmaif_bat_page *bat_skb = bat_req->bat_skb;
> +	unsigned short cur_bat_idx = bat_req->bat_wr_idx;
> +	unsigned int buf_space;
> +	int ret, i;
...
> +	ret = i < buf_cnt ? -ENOMEM : 0;
> +	if (ret && initial) {

int ret = 0, i;
...
if (i < buf_cnt) {
	ret = -ENOMEM;
	if (initial) {
		...
	}
}

> +	if (!tx_drb_available || txq->tx_submit_skb_cnt >= txq->tx_list_max_len) {
> +		cb = dpmaif_ctrl->callbacks;
> +		cb->state_notify(dpmaif_ctrl->t7xx_dev, DMPAIF_TXQ_STATE_FULL, txqt);
> +		return -EBUSY;
> +	}
> +
> +	skb->cb[TX_CB_QTYPE] = txqt;
> +	skb->cb[TX_CB_DRB_CNT] = send_drb_cnt;
> +
> +	spin_lock_irqsave(&txq->tx_skb_lock, flags);
> +	list_add_tail(&skb->list, &txq->tx_skb_queue);
> +	txq->tx_submit_skb_cnt++;
> +	spin_unlock_irqrestore(&txq->tx_skb_lock, flags);

Perhaps the critical section needs to start earlier to enforce that 
tx_list_max_len check?
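
I.e., something like this (untested; assumes nothing else needs to run
under tx_skb_lock before the list_add_tail):

	spin_lock_irqsave(&txq->tx_skb_lock, flags);
	if (!tx_drb_available || txq->tx_submit_skb_cnt >= txq->tx_list_max_len) {
		spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
		cb = dpmaif_ctrl->callbacks;
		cb->state_notify(dpmaif_ctrl->t7xx_dev, DMPAIF_TXQ_STATE_FULL, txqt);
		return -EBUSY;
	}

	skb->cb[TX_CB_QTYPE] = txqt;
	skb->cb[TX_CB_DRB_CNT] = send_drb_cnt;
	list_add_tail(&skb->list, &txq->tx_skb_queue);
	txq->tx_submit_skb_cnt++;
	spin_unlock_irqrestore(&txq->tx_skb_lock, flags);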


(I'm yet to read half of this patch...)

-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface
  2022-01-14  1:06 ` [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface Ricardo Martinez
  2022-02-03 14:23   ` Ilpo Järvinen
@ 2022-02-08  8:19   ` Ilpo Järvinen
  2022-02-16  2:17     ` Martinez, Ricardo
  2022-02-22 18:40     ` Martinez, Ricardo
  1 sibling, 2 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-02-08  8:19 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
> for initialization, ISR, control and event handling of TX/RX flows.
> 
> DPMAIF TX
> Exposes the `dmpaif_tx_send_skb` function which can be used by the
> network device to transmit packets.
> The uplink data management uses a Descriptor Ring Buffer (DRB).
> First DRB entry is a message type that will be followed by 1 or more
> normal DRB entries. Message type DRB will hold the skb information
> and each normal DRB entry holds a pointer to the skb payload.
> 
> DPMAIF RX
> The downlink buffer management uses Buffer Address Table (BAT) and
> Packet Information Table (PIT) rings.
> The BAT ring holds the address of skb data buffer for the HW to use,
> while the PIT contains metadata about a whole network packet including
> a reference to the BAT entry holding the data buffer address.
> The driver reads the PIT and BAT entries written by the modem, when
> reaching a threshold, the driver will reload the PIT and BAT rings.
> 
> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---

> +	bat_req->bat_mask[idx] = 1;
...
> +		if (!bat_req->bat_mask[index])
...
> +		bat->bat_mask[index] = 0;

This seems to be open-coding linux/bitmap.h.
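
Roughly (untested; bat_bitmap here stands for a DECLARE_BITMAP() field
replacing the current byte array):

	set_bit(idx, bat_req->bat_bitmap);
...
	if (!test_bit(index, bat_req->bat_bitmap))
...
	clear_bit(index, bat->bat_bitmap);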

I wonder, though, if the loop in t7xx_dpmaif_avail_pkt_bat_cnt() could
be replaced with an arithmetic calculation based on bat_release_rd_idx
and some other idx? That would make the bitmap unnecessary.

> +static int t7xx_dpmaif_rx_start(struct dpmaif_rx_queue *rxq, const unsigned short pit_cnt,
> +				const unsigned long timeout)
> +{
> +	struct device *dev = rxq->dpmaif_ctrl->dev;
> +	struct dpmaif_cur_rx_skb_info *skb_info;
> +	unsigned short rx_cnt, recv_skb_cnt = 0;

unsigned int

I'd also use unsigned int for all those local variables dealing
with the indexes instead of unsigned short.

> +static int t7xx_dpmaif_rx_data_collect(struct dpmaif_ctrl *dpmaif_ctrl,
> +				       const unsigned char q_num, const int budget)
> +{
> +	struct dpmaif_rx_queue *rxq = &dpmaif_ctrl->rxq[q_num];
> +	unsigned long time_limit;
> +	unsigned int cnt;
> +
> +	time_limit = jiffies + msecs_to_jiffies(DPMAIF_WQ_TIME_LIMIT_MS);
> +
> +	do {
> +		unsigned int rd_cnt;
> +		int real_cnt;
> +
> +		cnt = t7xx_dpmaifq_poll_pit(rxq);
> +		if (!cnt)
> +			break;
> +
> +		if (!rxq->pit_base)
> +			return -EAGAIN;
> +
> +		rd_cnt = cnt > budget ? budget : cnt;

min_t or min (after making budget const unsigned int).

> +		real_cnt = t7xx_dpmaif_rx_start(rxq, rd_cnt, time_limit);
> +		if (real_cnt < 0)
> +			return real_cnt;
> +
> +		if (real_cnt < cnt)
> +			return -EAGAIN;
> +
> +	} while (cnt);

With the break already inside the loop for the same condition,
this check is dead code.

> +	hw_read_idx = t7xx_dpmaif_ul_get_rd_idx(&dpmaif_ctrl->hif_hw_info, q_num);
> +
> +	new_hw_rd_idx = hw_read_idx / DPMAIF_UL_DRB_ENTRY_WORD;

Is DPMAIF_UL_DRB_ENTRY_WORD the size of an entry? In that case it would
probably make sense to put it inside t7xx_dpmaif_ul_get_rd_idx?

> +	if (new_hw_rd_idx >= DPMAIF_DRB_ENTRY_SIZE) {

Is DPMAIF_DRB_ENTRY_SIZE actually the number of entries rather than an
"ENTRY_SIZE"? I think both of these constants could likely be named
better.

> +	drb->header_dw1 = cpu_to_le32(FIELD_PREP(DRB_MSG_DTYP, DES_DTYP_MSG));
> +	drb->header_dw1 |= cpu_to_le32(FIELD_PREP(DRB_MSG_CONT, 1));
> +	drb->header_dw1 |= cpu_to_le32(FIELD_PREP(DRB_MSG_PACKET_LEN, pkt_len));
> +
> +	drb->header_dw2 = cpu_to_le32(FIELD_PREP(DRB_MSG_COUNT_L, count_l));
> +	drb->header_dw2 |= cpu_to_le32(FIELD_PREP(DRB_MSG_CHANNEL_ID, channel_id));
> +	drb->header_dw2 |= cpu_to_le32(FIELD_PREP(DRB_MSG_L4_CHK, 1));

I'd do:
drb->header_dw1 = cpu_to_le32(FIELD_PREP(DRB_MSG_DTYP, DES_DTYP_MSG) |
                              FIELD_PREP(DRB_MSG_CONT, 1) |
                              FIELD_PREP(DRB_MSG_PACKET_LEN, pkt_len));


> +static void t7xx_setup_payload_drb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned char q_num,
> +				   unsigned short cur_idx, dma_addr_t data_addr,
> +				   unsigned int pkt_size, char last_one)

bool last_one

> +	struct skb_shared_info *info;

This variable is usually called shinfo.

> +	spin_lock_irqsave(&txq->tx_lock, flags);
> +	cur_idx = txq->drb_wr_idx;
> +	drb_wr_idx_backup = cur_idx;
> +
> +	txq->drb_wr_idx += send_cnt;
> +	if (txq->drb_wr_idx >= txq->drb_size_cnt)
> +		txq->drb_wr_idx -= txq->drb_size_cnt;
> +
> +	t7xx_setup_msg_drb(dpmaif_ctrl, txq->index, cur_idx, skb->len, 0, skb->cb[TX_CB_NETIF_IDX]);
> +	t7xx_record_drb_skb(dpmaif_ctrl, txq->index, cur_idx, skb, 1, 0, 0, 0, 0);
> +	spin_unlock_irqrestore(&txq->tx_lock, flags);
> +
> +	cur_idx = t7xx_ring_buf_get_next_wr_idx(txq->drb_size_cnt, cur_idx);
> +
> +	for (wr_cnt = 0; wr_cnt < payload_cnt; wr_cnt++) {
> +		if (!wr_cnt) {
> +			data_len = skb_headlen(skb);
> +			data_addr = skb->data;
> +			is_frag = false;
> +		} else {
> +			skb_frag_t *frag = info->frags + wr_cnt - 1;
> +
> +			data_len = skb_frag_size(frag);
> +			data_addr = skb_frag_address(frag);
> +			is_frag = true;
> +		}
> +
> +		if (wr_cnt == payload_cnt - 1)
> +			is_last_one = true;
> +
> +		/* TX mapping */
> +		bus_addr = dma_map_single(dpmaif_ctrl->dev, data_addr, data_len, DMA_TO_DEVICE);
> +		if (dma_mapping_error(dpmaif_ctrl->dev, bus_addr)) {
> +			dev_err(dpmaif_ctrl->dev, "DMA mapping fail\n");
> +			atomic_set(&txq->tx_processing, 0);
> +
> +			spin_lock_irqsave(&txq->tx_lock, flags);
> +			txq->drb_wr_idx = drb_wr_idx_backup;
> +			spin_unlock_irqrestore(&txq->tx_lock, flags);

Hmm, can txq's drb_wr_idx get updated (or cleared) by something else
in between these critical sections?

That "TX mapping" comment seems to just state the obvious.

> +static int t7xx_txq_burst_send_skb(struct dpmaif_tx_queue *txq)
> +{
> +	int drb_remain_cnt, i;
> +	unsigned long flags;
> +	int drb_cnt = 0;
> +	int ret = 0;
> +
> +	spin_lock_irqsave(&txq->tx_lock, flags);
> +	drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
> +						   txq->drb_wr_idx, DPMAIF_WRITE);
> +	spin_unlock_irqrestore(&txq->tx_lock, flags);
> +
> +	for (i = 0; i < DPMAIF_SKB_TX_BURST_CNT; i++) {
> +		struct sk_buff *skb;
> +
> +		spin_lock_irqsave(&txq->tx_skb_lock, flags);
> +		skb = list_first_entry_or_null(&txq->tx_skb_queue, struct sk_buff, list);
> +		spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
> +
> +		if (!skb)
> +			break;
> +
> +		if (drb_remain_cnt < skb->cb[TX_CB_DRB_CNT]) {
> +			spin_lock_irqsave(&txq->tx_lock, flags);
> +			drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt,
> +								   txq->drb_release_rd_idx,
> +								   txq->drb_wr_idx, DPMAIF_WRITE);
> +			spin_unlock_irqrestore(&txq->tx_lock, flags);
> +			continue;
> +		}
...
> +	if (drb_cnt > 0) {
> +		txq->drb_lack = false;
> +		ret = drb_cnt;
> +	} else if (ret == -ENOMEM) {
> +		txq->drb_lack = true;

Based on the variable name, I'd expect drb_lack to be set true when
drb_remain_cnt < skb->cb[TX_CB_DRB_CNT] occurs, but that doesn't happen.
Maybe that if branch within the loop should set ret = -ENOMEM; before
the continue?

It would be nice if the drb check here and in
t7xx_check_tx_queue_drb_available could be consolidated into a single
place. That requires a small refactoring (adding a __ variant of that
function which does just the check).
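
Roughly (untested, name made up):

static unsigned int __t7xx_txq_drbs_available(struct dpmaif_tx_queue *txq)
{
	/* Caller holds tx_lock */
	return t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
					 txq->drb_wr_idx, DPMAIF_WRITE);
}

and then both this function and t7xx_check_tx_queue_drb_available would
call it.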

Please also check the other comments on skb->cb below.

> +		txq_id = t7xx_select_tx_queue(dpmaif_ctrl);
> +		if (txq_id >= 0) {

t7xx_select_tx_queue used to do a que_started check (in v2) but it
doesn't anymore, so this if is always true these days. I'm left to
wonder, though, whether it was OK to drop that que_started check.

> +static unsigned char t7xx_get_drb_cnt_per_skb(struct sk_buff *skb)
> +{
> +	/* Normal DRB (frags data + skb linear data) + msg DRB */
> +	return skb_shinfo(skb)->nr_frags + 2;
> +}

I'd rename this to t7xx_skb_drb_cnt().

> +int t7xx_dpmaif_tx_send_skb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int txqt, struct sk_buff *skb)
> +{
> +	bool tx_drb_available = true;
...
> +	send_drb_cnt = t7xx_get_drb_cnt_per_skb(skb);
> +
> +	txq = &dpmaif_ctrl->txq[txqt];
> +	if (!(txq->tx_skb_stat++ % DPMAIF_SKB_TX_BURST_CNT))
> +		tx_drb_available = t7xx_check_tx_queue_drb_available(txq, send_drb_cnt);
> +
> +	if (!tx_drb_available || txq->tx_submit_skb_cnt >= txq->tx_list_max_len) {

Because of the modulo if, DRBs might not be available even though the
variable claims they are. Is that intentional?

> +		if (FIELD_GET(DRB_SKB_IS_LAST, drb_skb->config)) {
> +			kfree_skb(drb_skb->skb);

dev_kfree_...?


> +void t7xx_dpmaif_tx_stop(struct dpmaif_ctrl *dpmaif_ctrl)
> +{
> +	int i;
> +
> +	for (i = 0; i < DPMAIF_TXQ_NUM; i++) {
> +		struct dpmaif_tx_queue *txq;
> +		int count;
> +
> +		txq = &dpmaif_ctrl->txq[i];
> +		txq->que_started = false;
> +		/* Ensure tx_processing is changed to 1 before actually begin TX flow */
> +		smp_mb();
> +
> +		/* Confirm that SW will not transmit */
> +		count = 0;
> +
> +		while (atomic_read(&txq->tx_processing)) {

That "Ensure ..." comment should be reworded as it makes little
sense as is for 2 reasons:
- We're in _stop, not begin tx func
- tx_processing isn't changed to 1 here

> +/* SKB control buffer indexed values */
> +#define TX_CB_NETIF_IDX		0
> +#define TX_CB_QTYPE		1
> +#define TX_CB_DRB_CNT		2

The normal way of storing a struct to skb->cb area is:

struct t7xx_skb_cb {
	u8	netif_idx;
	u8	qtype;
	u8	drb_cnt;
};

#define T7XX_SKB_CB(__skb)	((struct t7xx_skb_cb *)&((__skb)->cb[0]))

However, there's only a single txqt/qtype (TXQ_TYPE_DEFAULT) in the 
patchset? And it seems to me that drb_cnt is a value that could be always
derived using t7xx_get_drb_cnt_per_skb() from the skb rather than
stored?

> +#define DRB_PD_DATA_LEN		((u32)GENMASK(31, 16))
Drop the cast?

> +struct dpmaif_drb_skb {
...
> +	u16			config;
> +};
> +
> +#define DRB_SKB_IS_LAST		BIT(15)
> +#define DRB_SKB_IS_FRAG		BIT(14)
> +#define DRB_SKB_IS_MSG		BIT(13)
> +#define DRB_SKB_DRB_IDX		GENMASK(12, 0)

These are not HW related (they don't care about endianness)? I guess a
C bitfield could be used for them.
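
E.g. something like (untested, field names derived from the defines):

struct dpmaif_drb_skb {
...
	u16			drb_idx:13;
	u16			is_msg:1;
	u16			is_frag:1;
	u16			is_last:1;
};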


-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 09/13] net: wwan: t7xx: Add WWAN network interface
  2022-01-14  1:06 ` [PATCH net-next v4 09/13] net: wwan: t7xx: Add WWAN network interface Ricardo Martinez
@ 2022-02-10 10:45   ` Ilpo Järvinen
  0 siblings, 0 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-02-10 10:45 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Creates the Cross Core Modem Network Interface (CCMNI) which implements
> the wwan_ops for registration with the WWAN framework, CCMNI also
> implements the net_device_ops functions used by the network device.
> Network device operations include open, close, start transmission, TX
> timeout, change MTU, and select queue.
> 
> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> Co-developed-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---

> +static int t7xx_ccmni_open(struct net_device *dev)
> +{
> +	struct t7xx_ccmni *ccmni = wwan_netdev_drvpriv(dev);
> +
> +	netif_carrier_on(dev);
> +	netif_tx_start_all_queues(dev);
> +	atomic_inc(&ccmni->usage);
> +	return 0;
> +}
> +
> +static int t7xx_ccmni_close(struct net_device *dev)
> +{
> +	struct t7xx_ccmni *ccmni = wwan_netdev_drvpriv(dev);
> +
> +	if (atomic_dec_return(&ccmni->usage) < 0)
> +		return -EINVAL;

I'm certainly way out of my expertise here in knowing how/when these
open and close can be called. With that in mind, I wonder if there's a
need to roll back the atomic decrement.

> +static void t7xx_ccmni_wwan_setup(struct net_device *dev)
> +{
> +	dev->header_ops = NULL;
> +	dev->hard_header_len += sizeof(struct ccci_header);
> +
> +	dev->mtu = ETH_DATA_LEN;
> +	dev->max_mtu = CCMNI_MTU_MAX;
> +	dev->tx_queue_len = DEFAULT_TX_QUEUE_LEN;
> +	dev->watchdog_timeo = CCMNI_NETDEV_WDT_TO;
> +	/* CCMNI is a pure IP device */
> +	dev->flags = IFF_POINTOPOINT | IFF_NOARP;
> +
> +	/* Not supporting VLAN */
> +	dev->features = NETIF_F_VLAN_CHALLENGED;
> +
> +	dev->features |= NETIF_F_SG;
> +	dev->hw_features |= NETIF_F_SG;
> +
> +	/* Uplink checksum offload */
> +	dev->features |= NETIF_F_HW_CSUM;
> +	dev->hw_features |= NETIF_F_HW_CSUM;
> +
> +	/* Downlink checksum offload */
> +	dev->features |= NETIF_F_RXCSUM;
> +	dev->hw_features |= NETIF_F_RXCSUM;
> +
> +	/* Use kernel default free_netdev() function */
> +	dev->needs_free_netdev = true;
> +
> +	/* No need to free again because of free_netdev() */
> +	dev->priv_destructor = NULL;

Isn't the struct zeroed for you?

Maybe some of those comments are not that useful?


> +	ctlb->capability = NIC_CAP_TXBUSY_STOP | NIC_CAP_SGIO |
> +			   NIC_CAP_DATA_ACK_DVD | NIC_CAP_CCMNI_MQ;

Is capability going to remain constant? And some of these are
not used at all.

Related to this, e.g., the NETIF_F_SG setting above doesn't seem to
care about what is in capability (assuming SGIO means what I think it
does); should it?

> +	/* WWAN core will create a netdev for the default IP MUX channel */
> +	ret = wwan_register_ops(dev, &ccmni_wwan_ops, ctlb, IP_MUX_SESSION_DEFAULT);
> +	if (ret)
> +		goto err_unregister_ops;
> +
> +	init_md_status_notifier(t7xx_dev);
> +
> +	return 0;
> +
> +err_unregister_ops:
> +	wwan_unregister_ops(dev);

If wwan_register_ops fails, why is wwan_unregister_ops needed?

> +/* Must be less than DPMAIF_HW_MTU_SIZE (3*1024 + 8) */

This could be enforced with BUILD_BUG_ON if you want.
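
E.g. (untested; assumes the comment annotates CCMNI_MTU_MAX and that the
check lives in some init function):

	BUILD_BUG_ON(CCMNI_MTU_MAX >= DPMAIF_HW_MTU_SIZE);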


-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 10/13] net: wwan: t7xx: Introduce power management support
  2022-01-14  1:06 ` [PATCH net-next v4 10/13] net: wwan: t7xx: Introduce power management support Ricardo Martinez
@ 2022-02-10 10:58   ` Ilpo Järvinen
  0 siblings, 0 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-02-10 10:58 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Implements suspend, resumes, freeze, thaw, poweroff, and restore
> `dev_pm_ops` callbacks.
> 
> From the host point of view, the t7xx driver is one entity. But, the
> device has several modules that need to be addressed in different ways
> during power management (PM) flows.
> The driver uses the term 'PM entities' to refer to the 2 DPMA and
> 2 CLDMA HW blocks that need to be managed during PM flows.
> When a dev_pm_ops function is called, the PM entities list is iterated
> and the matching function is called for each entry in the list.
> 
> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---


>  	if (ret) {
>  		dev_err(dev, "Failed to allocate RX/TX SW resources: %d\n", ret);
> +		t7xx_dpmaif_pm_entity_release(dpmaif_ctrl);
>  		return NULL;

Print after release.
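
I.e. (untested):

	if (ret) {
		t7xx_dpmaif_pm_entity_release(dpmaif_ctrl);
		dev_err(dev, "Failed to allocate RX/TX SW resources: %d\n", ret);
		return NULL;
	}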


> +static int __t7xx_pci_pm_suspend(struct pci_dev *pdev)
> +{
> +	struct t7xx_pci_dev *t7xx_dev;
> +	struct md_pm_entity *entity;
> +	unsigned long wait_ret;
> +	enum t7xx_pm_id id;
> +	int ret = 0;
> +
> +	t7xx_dev = pci_get_drvdata(pdev);
> +	if (atomic_read(&t7xx_dev->md_pm_state) <= MTK_PM_INIT) {
> +		dev_err(&pdev->dev,
> +			"[PM] Exiting suspend, because handshake failure or in an exception\n");
> +		return -EFAULT;
> +	}
> +
> +	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_SET_0);
> +
> +	ret = t7xx_wait_pm_config(t7xx_dev);
> +	if (ret)
> +		return ret;

Do you need to roll back the iowrite?
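
I.e. perhaps (untested; guessing DIS_ASPM_LOWPWR_CLR_0 is the matching
clear register, as used later in this function):

	ret = t7xx_wait_pm_config(t7xx_dev);
	if (ret) {
		iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
		return ret;
	}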

> +	atomic_set(&t7xx_dev->md_pm_state, MTK_PM_SUSPENDED);
> +	t7xx_pcie_mac_clear_int(t7xx_dev, SAP_RGU_INT);
> +	t7xx_dev->rgu_pci_irq_en = false;
> +
> +	list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
> +		if (entity->suspend) {
> +			ret = entity->suspend(t7xx_dev, entity->entity_param);
> +			if (ret) {
> +				id = entity->id;
> +				break;
> +			}
> +		}
> +	}
> +
> +	if (ret) {
> +		dev_err(&pdev->dev, "[PM] Suspend error: %d, id: %d\n", ret, id);
> +
> +		list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
> +			if (id == entity->id)
> +				break;
> +
> +			if (entity->resume)
> +				entity->resume(t7xx_dev, entity->entity_param);
> +		}
> +
> +		goto suspend_complete;

So the suspend failure path(?) ends up at a label named "suspend_complete"?

> +	}
> +
> +	reinit_completion(&t7xx_dev->pm_sr_ack);
> +	t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_SUSPEND_REQ);
> +	wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
> +					       msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
> +	if (!wait_ret)
> +		dev_err(&pdev->dev, "[PM] Wait for device suspend ACK timeout-MD\n");
> +
> +	reinit_completion(&t7xx_dev->pm_sr_ack);
> +	t7xx_mhccif_h2d_swint_trigger(t7xx_dev, H2D_CH_SUSPEND_REQ_AP);
> +	wait_ret = wait_for_completion_timeout(&t7xx_dev->pm_sr_ack,
> +					       msecs_to_jiffies(PM_ACK_TIMEOUT_MS));
> +	if (!wait_ret)
> +		dev_err(&pdev->dev, "[PM] Wait for device suspend ACK timeout-SAP\n");
> +
> +	list_for_each_entry(entity, &t7xx_dev->md_pm_entities, entity) {
> +		if (entity->suspend_late)
> +			entity->suspend_late(t7xx_dev, entity->entity_param);
> +	}
> +
> +suspend_complete:
> +	iowrite32(L1_DISABLE_BIT(0), IREG_BASE(t7xx_dev) + DIS_ASPM_LOWPWR_CLR_0);
> +
> +	if (ret) {
> +		atomic_set(&t7xx_dev->md_pm_state, MTK_PM_RESUMED);
> +		t7xx_pcie_mac_set_int(t7xx_dev, SAP_RGU_INT);
> +	}
> +
> +	return ret;
> +}

Please check all paths in this function. I found enough oddities that I
couldn't convince myself I understood it all or found all the problems.
For example, it looks as if an OK-path return is missing above the
misnamed suspend_complete label (but then there's the if (ret) below it,
which is kind of a counterargument to my reasoning).

I've no comments on patches 11-13.

-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 04/13] net: wwan: t7xx: Add port proxy infrastructure
  2022-01-14  1:06 ` [PATCH net-next v4 04/13] net: wwan: t7xx: Add port proxy infrastructure Ricardo Martinez
  2022-01-25 13:38   ` Ilpo Järvinen
@ 2022-02-10 13:34   ` Ilpo Järvinen
  1 sibling, 0 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-02-10 13:34 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Port-proxy provides a common interface to interact with different types
> of ports. Ports export their configuration via `struct t7xx_port` and
> operate as defined by `struct port_ops`.
> 
> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> Co-developed-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---

> +	struct mutex		tx_mutex_lock; /* Protects the seq number operation */

This is unused.


-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface
  2022-01-14  1:06 ` [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface Ricardo Martinez
  2022-01-14 14:13   ` Andy Shevchenko
  2022-01-18 14:13   ` Ilpo Järvinen
@ 2022-02-10 13:50   ` Ilpo Järvinen
  2 siblings, 0 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-02-10 13:50 UTC (permalink / raw)
  To: Ricardo Martinez
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Thu, 13 Jan 2022, Ricardo Martinez wrote:

> From: Haijun Liu <haijun.liu@mediatek.com>
> 
> Cross Layer DMA (CLDMA) Hardware interface (HIF) enables the control
> path of Host-Modem data transfers. CLDMA HIF layer provides a common
> interface to the Port Layer.
> 
> CLDMA manages 8 independent RX/TX physical channels with data flow
> control in HW queues. CLDMA uses ring buffers of General Packet
> Descriptors (GPD) for TX/RX. GPDs can represent multiple or single
> data buffers (DB).
> 
> CLDMA HIF initializes GPD rings, registers ISR handlers for CLDMA
> interrupts, and initializes CLDMA HW registers.
> 
> CLDMA TX flow:
> 1. Port Layer write
> 2. Get DB address
> 3. Configure GPD
> 4. Triggering processing via HW register write
> 
> CLDMA RX flow:
> 1. CLDMA HW sends a RX "done" to host
> 2. Driver starts thread to safely read GPD
> 3. DB is sent to Port layer
> 4. Create a new buffer for GPD ring
> 
> Signed-off-by: Haijun Liu <haijun.liu@mediatek.com>
> Signed-off-by: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
> Co-developed-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> Signed-off-by: Ricardo Martinez <ricardo.martinez@linux.intel.com>
> ---

> +	struct cldma_ring *tr_ring;
> +	struct cldma_request *tr_done;
> +	struct cldma_request *rx_refill;
> +	struct cldma_request *tx_xmit;
> +	int budget;			/* Same as ring buffer size by default */
> +	spinlock_t ring_lock;

I couldn't figure out what ring_lock is supposed to protect exactly.
Since there were tr_ring operations done without ring_lock (in
t7xx_cldma_gpd_rx_from_q), I was left to wonder whether that is a
locking bug or just me not understanding what ring_lock is supposed to
protect.


-- 
 i.


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface
  2022-01-18 22:22     ` Martinez, Ricardo
  2022-01-19  9:52       ` Ilpo Järvinen
@ 2022-02-11  0:25       ` Sergey Ryazanov
  2022-02-16  2:24         ` Martinez, Ricardo
  1 sibling, 1 reply; 44+ messages in thread
From: Sergey Ryazanov @ 2022-02-11  0:25 UTC (permalink / raw)
  To: Martinez, Ricardo
  Cc: Ilpo Järvinen, Netdev, linux-wireless, Jakub Kicinski,
	David Miller, Johannes Berg, Loic Poulain, M Chetan Kumar,
	chandrashekar.devegowda, Intel Corporation, chiranjeevi.rapolu,
	Haijun Liu (刘海军),
	amir.hanania, Andy Shevchenko, dinesh.sharma, eliot.lee,
	moises.veleta, pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla

Hello Ricardo,

On Wed, Jan 19, 2022 at 1:22 AM Martinez, Ricardo
<ricardo.martinez@linux.intel.com> wrote:
> On 1/18/2022 6:13 AM, Ilpo Järvinen wrote:
>> On Thu, 13 Jan 2022, Ricardo Martinez wrote:
> ...
>>> +#define CLDMA_NUM 2
>> I tried to understand its purpose but it seems that only one of the
>> indexes is used in the arrays where this define gives the size? Related to
>> this, ID_CLDMA0 is not used anywhere?
>
> The modem HW has 2 CLDMAs, idx 0 for the app processor (SAP) and idx 1
> for the modem (MD).
>
> CLDMA_NUM is defined as 2 to reflect the HW capabilities but mainly to
> have cleaner upcoming patches, which will use ID_CLDMA0.
>
> If having arrays of size 1 is not a problem, then we can define
> CLDMA_NUM as 1 and play with the CLDMA indexes.

Please keep CLDMA_NUM defined as 2, especially if you have a plan for
further driver development. Saving a few bytes in the structure for the
short term is not worth the juggling with indexes, possible errors, and
further rework. Just document them as suggested by Ilpo and mark idx 0
as unused for the moment.

BTW, did you consider to define the cldma_id enum something like this:

/**
 * ...
 * @CLDMA_ID_AP: ... (unused ATM)
 * @CLDMA_ID_MD: ...
 */
enum cldma_id {
    CLDMA_ID_AP = 0,
    CLDMA_ID_MD = 1,

    CLDMA_NUM
};

This way the elements will be self-descriptive and the CLDMA_NUM value
will be less puzzling.

-- 
Sergey

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface
  2022-02-08  8:19   ` Ilpo Järvinen
@ 2022-02-16  2:17     ` Martinez, Ricardo
  2022-02-16 14:36       ` Ilpo Järvinen
  2022-02-22 18:40     ` Martinez, Ricardo
  1 sibling, 1 reply; 44+ messages in thread
From: Martinez, Ricardo @ 2022-02-16  2:17 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla


On 2/8/2022 12:19 AM, Ilpo Järvinen wrote:
> On Thu, 13 Jan 2022, Ricardo Martinez wrote:
>
>> From: Haijun Liu <haijun.liu@mediatek.com>
>>
>> Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
>> for initialization, ISR, control and event handling of TX/RX flows.
...
>
>> +	spin_lock_irqsave(&txq->tx_lock, flags);
>> +	cur_idx = txq->drb_wr_idx;
>> +	drb_wr_idx_backup = cur_idx;
>> +
>> +	txq->drb_wr_idx += send_cnt;
>> +	if (txq->drb_wr_idx >= txq->drb_size_cnt)
>> +		txq->drb_wr_idx -= txq->drb_size_cnt;
>> +
>> +	t7xx_setup_msg_drb(dpmaif_ctrl, txq->index, cur_idx, skb->len, 0, skb->cb[TX_CB_NETIF_IDX]);
>> +	t7xx_record_drb_skb(dpmaif_ctrl, txq->index, cur_idx, skb, 1, 0, 0, 0, 0);
>> +	spin_unlock_irqrestore(&txq->tx_lock, flags);
>> +
>> +	cur_idx = t7xx_ring_buf_get_next_wr_idx(txq->drb_size_cnt, cur_idx);
>> +
>> +	for (wr_cnt = 0; wr_cnt < payload_cnt; wr_cnt++) {
>> +		if (!wr_cnt) {
>> +			data_len = skb_headlen(skb);
>> +			data_addr = skb->data;
>> +			is_frag = false;
>> +		} else {
>> +			skb_frag_t *frag = info->frags + wr_cnt - 1;
>> +
>> +			data_len = skb_frag_size(frag);
>> +			data_addr = skb_frag_address(frag);
>> +			is_frag = true;
>> +		}
>> +
>> +		if (wr_cnt == payload_cnt - 1)
>> +			is_last_one = true;
>> +
>> +		/* TX mapping */
>> +		bus_addr = dma_map_single(dpmaif_ctrl->dev, data_addr, data_len, DMA_TO_DEVICE);
>> +		if (dma_mapping_error(dpmaif_ctrl->dev, bus_addr)) {
>> +			dev_err(dpmaif_ctrl->dev, "DMA mapping fail\n");
>> +			atomic_set(&txq->tx_processing, 0);
>> +
>> +			spin_lock_irqsave(&txq->tx_lock, flags);
>> +			txq->drb_wr_idx = drb_wr_idx_backup;
>> +			spin_unlock_irqrestore(&txq->tx_lock, flags);
> Hmm, can txq's drb_wr_idx get updated (or cleared) by something else
> in between these critical sections?
drb_wr_idx cannot be modified in between, but it can be used to
calculate the number of DRBs available, which shouldn't be a problem.
The function reserves the DRBs at the beginning; in the rare case of an
error it will release them.
...
> +		txq_id = t7xx_select_tx_queue(dpmaif_ctrl);
> +		if (txq_id >= 0) {
> t7xx_select_tx_queue used to do que_started check (in v2) but it
> doesn't anymore so this if is always true these days. I'm left to
> wonder though if it was ok to drop that que_started check?

The que_started check wasn't supposed to be dropped; I'll add it back.

...

>> +/* SKB control buffer indexed values */
>> +#define TX_CB_NETIF_IDX		0
>> +#define TX_CB_QTYPE		1
>> +#define TX_CB_DRB_CNT		2
> The normal way of storing a struct to skb->cb area is:
>
> struct t7xx_skb_cb {
> 	u8	netif_idx;
> 	u8	qtype;
> 	u8	drb_cnt;
> };
>
> #define T7XX_SKB_CB(__skb)	((struct t7xx_skb_cb *)&((__skb)->cb[0]))
>
> However, there's only a single txqt/qtype (TXQ_TYPE_DEFAULT) in the
> patchset? And it seems to me that drb_cnt is a value that could be always
> derived using t7xx_get_drb_cnt_per_skb() from the skb rather than
> stored?

The next iteration will contain t7xx_tx_skb_cb and t7xx_rx_skb_cb 
structures.

Also, q_number is going to be used instead of qtype.

Only one queue is used, but I think we can keep this code generic as
it is straightforward (unlike the drb_lack case). Any thoughts?

>> +#define DRB_PD_DATA_LEN		((u32)GENMASK(31, 16))
> Drop the cast?

The cast was added to avoid a compiler warning about truncated bits.

I'll move it to the place where it is required:

drb->header &= cpu_to_le32(~(u32)DRB_PD_DATA_LEN);

...


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface
  2022-02-11  0:25       ` Sergey Ryazanov
@ 2022-02-16  2:24         ` Martinez, Ricardo
  0 siblings, 0 replies; 44+ messages in thread
From: Martinez, Ricardo @ 2022-02-16  2:24 UTC (permalink / raw)
  To: Sergey Ryazanov
  Cc: Ilpo Järvinen, Netdev, linux-wireless, Jakub Kicinski,
	David Miller, Johannes Berg, Loic Poulain, M Chetan Kumar,
	chandrashekar.devegowda, Intel Corporation, chiranjeevi.rapolu,
	Haijun Liu (刘海军),
	amir.hanania, Andy Shevchenko, dinesh.sharma, eliot.lee,
	moises.veleta, pierre-louis.bossart, muralidharan.sethuraman,
	Soumya.Prakash.Mishra, sreehari.kancharla


On 2/10/2022 4:25 PM, Sergey Ryazanov wrote:
> Hello Ricardo,
>
> On Wed, Jan 19, 2022 at 1:22 AM Martinez, Ricardo
> <ricardo.martinez@linux.intel.com> wrote:
>> On 1/18/2022 6:13 AM, Ilpo Järvinen wrote:
>>> On Thu, 13 Jan 2022, Ricardo Martinez wrote:
>> ...
>>>> +#define CLDMA_NUM 2
>>> I tried to understand its purpose but it seems that only one of the
>>> indexes is used in the arrays where this define gives the size? Related to
>>> this, ID_CLDMA0 is not used anywhere?
>> The modem HW has 2 CLDMAs, idx 0 for the app processor (SAP) and idx 1
>> for the modem (MD).
>>
>> CLDMA_NUM is defined as 2 to reflect the HW capabilities but mainly to
>> have a cleaner upcoming patches, which will use ID_CLDMA0.
>>
>> If having array's of size 1 is not a problem then we can define
>> CLDMA_NUM as 1 and play with the CLDMA indexes.
> Please keep CLDMA_NUM defined as 2. Especially if you have a plan for
> further driver development. Saving a few bytes in the structure for a
> short term is not worth the jungling with indexes, possible errors and
> further rework. Just document them as suggested by Ilpo and mark idx 0
> as unused at the moment.
>
> BTW, did you consider to define the cldma_id enum something like this:
>
> /**
>   * ...
>   * @CLDMA_ID_AP: ... (unused ATM)
>   * @CLDMA_ID_MD: ...
>   */
> enum cldma_id {
>      CLDMA_ID_AP = 0,
>      CLDMA_ID_MD = 1,
>
>      CLDMA_NUM
> };
>
> This way elements will be self descriptive as well as CLDMA_NUM value
> will be less puzzled.

I agree.

Actually, we already did the enum name changes; we'll incorporate the
rest of the feedback.

>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface
  2022-02-16  2:17     ` Martinez, Ricardo
@ 2022-02-16 14:36       ` Ilpo Järvinen
  0 siblings, 0 replies; 44+ messages in thread
From: Ilpo Järvinen @ 2022-02-16 14:36 UTC (permalink / raw)
  To: Martinez, Ricardo
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla

On Tue, 15 Feb 2022, Martinez, Ricardo wrote:
> On 2/8/2022 12:19 AM, Ilpo Järvinen wrote:
> > On Thu, 13 Jan 2022, Ricardo Martinez wrote:
> > 
> > > +/* SKB control buffer indexed values */
> > > +#define TX_CB_NETIF_IDX		0
> > > +#define TX_CB_QTYPE		1
> > > +#define TX_CB_DRB_CNT		2
> > The normal way of storing a struct to skb->cb area is:
> > 
> > struct t7xx_skb_cb {
> > 	u8	netif_idx;
> > 	u8	qtype;
> > 	u8	drb_cnt;
> > };
> > 
> > #define T7XX_SKB_CB(__skb)	((struct t7xx_skb_cb *)&((__skb)->cb[0]))
> > 
> > However, there's only a single txqt/qtype (TXQ_TYPE_DEFAULT) in the
> > patchset? And it seems to me that drb_cnt is a value that could be always
> > derived using t7xx_get_drb_cnt_per_skb() from the skb rather than
> > stored?
> 
> The next iteration will contain t7xx_tx_skb_cb and t7xx_rx_skb_cb structures.

Ah, I didn't even notice the other one. Why differentiate them? There's
enough space in the cb area and netif_idx is in both anyway. (A union
inside the struct could be used if short on space and tx/rx differ, but
it is not needed here now.)
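
If tx/rx ever did need different fields, something like this would still
fit in cb (untested, the rx fields are made up just to illustrate):

struct t7xx_skb_cb {
	u8	netif_idx;
	union {
		struct {
			u8	qtype;
			u8	drb_cnt;
		} tx;
		struct {
			u8	pkt_type;
		} rx;
	};
};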

> Also, q_number is going to be used instead of qtype.
> 
> Only one queue is used but I think we can keep this code generic as it is
> straight forward (not like the drb_lack case), any thoughts?

I don't mind if you find it useful.


-- 
 i.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface
  2022-02-08  8:19   ` Ilpo Järvinen
  2022-02-16  2:17     ` Martinez, Ricardo
@ 2022-02-22 18:40     ` Martinez, Ricardo
  1 sibling, 0 replies; 44+ messages in thread
From: Martinez, Ricardo @ 2022-02-22 18:40 UTC (permalink / raw)
  To: Ilpo Järvinen
  Cc: Netdev, linux-wireless, kuba, davem, johannes, ryazanov.s.a,
	loic.poulain, m.chetan.kumar, chandrashekar.devegowda, linuxwwan,
	chiranjeevi.rapolu, haijun.liu, amir.hanania, Andy Shevchenko,
	dinesh.sharma, eliot.lee, moises.veleta, pierre-louis.bossart,
	muralidharan.sethuraman, Soumya.Prakash.Mishra,
	sreehari.kancharla


On 2/8/2022 12:19 AM, Ilpo Järvinen wrote:
> On Thu, 13 Jan 2022, Ricardo Martinez wrote:
>
>> From: Haijun Liu <haijun.liu@mediatek.com>
>>
>> Data Path Modem AP Interface (DPMAIF) HIF layer provides methods
>> for initialization, ISR, control and event handling of TX/RX flows.
...
>> +	bat_req->bat_mask[idx] = 1;
> ...
>> +		if (!bat_req->bat_mask[index])
> ...
>> +		bat->bat_mask[index] = 0;
> Seem to be linux/bitmap.h
>
> I wonder though if the loop in t7xx_dpmaif_avail_pkt_bat_cnt()
> could be replaced with arithmetic calculation based on
> bat_release_rd_idx and some other idx? It would make the bitmap
> unnecessary.

A bitmap is needed since entries could be returned out of order.

...

>> +	hw_read_idx = t7xx_dpmaif_ul_get_rd_idx(&dpmaif_ctrl->hif_hw_info, q_num);
>> +
>> +	new_hw_rd_idx = hw_read_idx / DPMAIF_UL_DRB_ENTRY_WORD;
> Is DPMAIF_UL_DRB_ENTRY_WORD size of an entry? In that case it
> would probably make sense put it inside t7xx_dpmaif_ul_get_rd_idx?
Yes, I'll move this into t7xx_dpmaif_ul_get_rd_idx().
>> +	if (new_hw_rd_idx >= DPMAIF_DRB_ENTRY_SIZE) {
> Is DPMAIF_DRB_ENTRY_SIZE telling the number of entries rather than
> an "ENTRY_SIZE"? I think these both constant could likely be named
> better.
...
>> +static int t7xx_txq_burst_send_skb(struct dpmaif_tx_queue *txq)
>> +{
>> +	int drb_remain_cnt, i;
>> +	unsigned long flags;
>> +	int drb_cnt = 0;
>> +	int ret = 0;
>> +
>> +	spin_lock_irqsave(&txq->tx_lock, flags);
>> +	drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt, txq->drb_release_rd_idx,
>> +						   txq->drb_wr_idx, DPMAIF_WRITE);
>> +	spin_unlock_irqrestore(&txq->tx_lock, flags);
>> +
>> +	for (i = 0; i < DPMAIF_SKB_TX_BURST_CNT; i++) {
>> +		struct sk_buff *skb;
>> +
>> +		spin_lock_irqsave(&txq->tx_skb_lock, flags);
>> +		skb = list_first_entry_or_null(&txq->tx_skb_queue, struct sk_buff, list);
>> +		spin_unlock_irqrestore(&txq->tx_skb_lock, flags);
>> +
>> +		if (!skb)
>> +			break;
>> +
>> +		if (drb_remain_cnt < skb->cb[TX_CB_DRB_CNT]) {
>> +			spin_lock_irqsave(&txq->tx_lock, flags);
>> +			drb_remain_cnt = t7xx_ring_buf_rd_wr_count(txq->drb_size_cnt,
>> +								   txq->drb_release_rd_idx,
>> +								   txq->drb_wr_idx, DPMAIF_WRITE);
>> +			spin_unlock_irqrestore(&txq->tx_lock, flags);
>> +			continue;
>> +		}
> ...
>> +	if (drb_cnt > 0) {
>> +		txq->drb_lack = false;
>> +		ret = drb_cnt;
>> +	} else if (ret == -ENOMEM) {
>> +		txq->drb_lack = true;
> Based on the variable name, I'd expect drb_lack be set true when
> drb_remain_cnt < skb->cb[TX_CB_DRB_CNT] occurred but that doesn't
> happen. Maybe that if branch within loop should set ret = -ENOMEM;
> before continue?

This drb_lack logic is going to be dropped since it was intended for
multiple Tx queues but currently only one is used.

> It would be nice if the drb check here and in
> t7xx_check_tx_queue_drb_available could be consolidated into
> a single place. That requires small refactoring (adding __
> variant of that function which does just the check).
>
> Please also check the other comments on skb->cb below.
...
>
>> +int t7xx_dpmaif_tx_send_skb(struct dpmaif_ctrl *dpmaif_ctrl, unsigned int txqt, struct sk_buff *skb)
>> +{
>> +	bool tx_drb_available = true;
> ...
>> +	send_drb_cnt = t7xx_get_drb_cnt_per_skb(skb);
>> +
>> +	txq = &dpmaif_ctrl->txq[txqt];
>> +	if (!(txq->tx_skb_stat++ % DPMAIF_SKB_TX_BURST_CNT))
>> +		tx_drb_available = t7xx_check_tx_queue_drb_available(txq, send_drb_cnt);
>> +
>> +	if (!tx_drb_available || txq->tx_submit_skb_cnt >= txq->tx_list_max_len) {
> Because of the modulo if, drbs might not be available despite
> variable claims them to be. Is it intentional?

It is intentional. I'll refactor this to do the DRB and tx_list_max_len
checks independently for clarity.

...



^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread, other threads:[~2022-02-22 18:40 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-01-14  1:06 [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Ricardo Martinez
2022-01-14  1:06 ` [PATCH net-next v4 01/13] list: Add list_next_entry_circular() and list_prev_entry_circular() Ricardo Martinez
2022-01-14 13:42   ` Andy Shevchenko
2022-01-14  1:06 ` [PATCH net-next v4 02/13] net: wwan: t7xx: Add control DMA interface Ricardo Martinez
2022-01-14 14:13   ` Andy Shevchenko
2022-01-18 14:13   ` Ilpo Järvinen
2022-01-18 22:22     ` Martinez, Ricardo
2022-01-19  9:52       ` Ilpo Järvinen
2022-01-19 19:04         ` Martinez, Ricardo
2022-02-11  0:25       ` Sergey Ryazanov
2022-02-16  2:24         ` Martinez, Ricardo
2022-02-10 13:50   ` Ilpo Järvinen
2022-01-14  1:06 ` [PATCH net-next v4 03/13] net: wwan: t7xx: Add core components Ricardo Martinez
2022-01-16 15:37   ` kernel test robot
2022-01-24 14:51   ` Ilpo Järvinen
2022-01-25 19:13     ` Martinez, Ricardo
2022-01-26 10:45       ` Ilpo Järvinen
2022-01-27 17:36       ` Ilpo Järvinen
2022-01-14  1:06 ` [PATCH net-next v4 04/13] net: wwan: t7xx: Add port proxy infrastructure Ricardo Martinez
2022-01-25 13:38   ` Ilpo Järvinen
2022-02-10 13:34   ` Ilpo Järvinen
2022-01-14  1:06 ` [PATCH net-next v4 05/13] net: wwan: t7xx: Add control port Ricardo Martinez
2022-01-27 10:40   ` Ilpo Järvinen
2022-01-27 14:53     ` Andy Shevchenko
2022-01-14  1:06 ` [PATCH net-next v4 06/13] net: wwan: t7xx: Add AT and MBIM WWAN ports Ricardo Martinez
2022-01-27 11:56   ` Ilpo Järvinen
2022-01-14  1:06 ` [PATCH net-next v4 07/13] net: wwan: t7xx: Data path HW layer Ricardo Martinez
2022-02-01  9:08   ` Ilpo Järvinen
2022-02-01 10:13     ` Ilpo Järvinen
2022-02-03  2:30     ` Martinez, Ricardo
2022-01-14  1:06 ` [PATCH net-next v4 08/13] net: wwan: t7xx: Add data path interface Ricardo Martinez
2022-02-03 14:23   ` Ilpo Järvinen
2022-02-08  8:19   ` Ilpo Järvinen
2022-02-16  2:17     ` Martinez, Ricardo
2022-02-16 14:36       ` Ilpo Järvinen
2022-02-22 18:40     ` Martinez, Ricardo
2022-01-14  1:06 ` [PATCH net-next v4 09/13] net: wwan: t7xx: Add WWAN network interface Ricardo Martinez
2022-02-10 10:45   ` Ilpo Järvinen
2022-01-14  1:06 ` [PATCH net-next v4 10/13] net: wwan: t7xx: Introduce power management support Ricardo Martinez
2022-02-10 10:58   ` Ilpo Järvinen
2022-01-14  1:06 ` [PATCH net-next v4 11/13] net: wwan: t7xx: Runtime PM Ricardo Martinez
2022-01-14  1:06 ` [PATCH net-next v4 12/13] net: wwan: t7xx: Device deep sleep lock/unlock Ricardo Martinez
2022-01-14  1:06 ` [PATCH net-next v4 13/13] net: wwan: t7xx: Add maintainers and documentation Ricardo Martinez
2022-01-15 14:53 ` [PATCH net-next v4 00/13] net: wwan: t7xx: PCIe driver for MediaTek M.2 modem Loic Poulain
