netdev.vger.kernel.org archive mirror
* [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem
@ 2020-11-23 13:51 M Chetan Kumar
  2020-11-23 13:51 ` [RFC 01/18] net: iosm: entry point M Chetan Kumar
                   ` (17 more replies)
  0 siblings, 18 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

The IOSM (IPC over Shared Memory) driver is a PCIe host driver implemented
for Linux and Chrome platforms for data exchange over the PCIe interface
between the host platform and an Intel M.2 modem. The driver exposes an
interface conforming to the MBIM protocol [1]. Any front-end application
(e.g. ModemManager) can manage the MBIM interface to enable data
communication towards the WWAN.

We are still working on this driver; the known items that remain to be
addressed are:
1. Usage of completion() inside deinit()
2. Clean-up wrappers around hr_timer
3. Usage of net stats inside driver struct

We kindly request your review and suggestions.

Below are the technical details:
The Intel M.2 modem uses two BAR regions. The first region is dedicated to
the doorbell registers for IRQs, and the second region is used as a
scratchpad area for bookkeeping of the modem execution stage details along
with the host system shared memory region context details. The upper edge of
the driver exposes the control and data channels for user space application
interaction. At the lower edge these data and control channels are
associated with pipes. The pipes are the lowest-level interfaces used over
PCIe as logical channels for message exchange. A single channel maps to one
UL and one DL pipe, which are initialized on device open.

On the UL path, the driver copies application-sent data into SKBs,
associates them with transfer descriptors and puts them onto the ring buffer
for DMA transfer. Once the information has been updated in the shared memory
region, the host rings a doorbell to the modem to perform the DMA, and the
modem uses an MSI to signal completion back to the host. For receiving data
on the DL path, SKBs are pre-allocated during pipe open and the transfer
descriptors are given to the modem for DMA transfer.
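
As a rough illustration of that UL flow (a simplified sketch based on the
helpers introduced later in this series and the definitions from
iosm_ipc_imem.h, not verbatim driver code), the per-channel uplink
processing looks roughly like this:

	/* Sketch only: queue the pending UL SKBs of one channel as transfer
	 * descriptors and ring the doorbell so the modem starts the DMA.
	 * The modem signals completion back to the host via MSI.
	 */
	static void ul_flow_sketch(struct iosm_imem *ipc_imem,
				   struct ipc_mem_channel *channel)
	{
		bool hpda = ipc_protocol_ul_td_send(ipc_imem->ipc_protocol,
						    &channel->ul_pipe,
						    &channel->ul_list);

		if (hpda)
			ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol,
						      IPC_HP_UL_WRITE_TD);
	}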

The driver exposes two types of ports: "wwanctrl", a char device node used
for MBIM control operations, and "INMx" (x = 0,1,2..7), network interfaces
for IP data communication.
1) MBIM Control Interface:
This node exposes an interface between the modem and the application, using
the char device exposed by the "IOSM" driver, to establish and manage MBIM
data communication with PCIe based Intel M.2 modems.

Apart from the read and write methods, it also supports an IOCTL command.
The IOCTL command "IOCTL_WDM_MAX_COMMAND" can be used by applications to
fetch the maximum command buffer length supported by the driver, which is
restricted to 4096 bytes.
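
For illustration, a user space application could query this limit roughly as
follows. This is only a sketch: the "/dev/wwanctrl" path is an assumption,
and the snippet assumes IOCTL_WDM_MAX_COMMAND matches the existing cdc-wdm
definition in <linux/usb/cdc-wdm.h>:

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/types.h>
	#include <linux/usb/cdc-wdm.h>	/* IOCTL_WDM_MAX_COMMAND */

	int main(void)
	{
		__u16 max_len = 0;
		int fd = open("/dev/wwanctrl", O_RDWR); /* assumed node name */

		if (fd < 0)
			return 1;

		/* Ask the driver for the maximum MBIM command buffer length
		 * (4096 bytes for this driver) before sending MBIM messages.
		 */
		if (ioctl(fd, IOCTL_WDM_MAX_COMMAND, &max_len) == 0)
			printf("max MBIM command size: %u\n", max_len);

		close(fd);
		return 0;
	}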

2) MBIM Data Interface:
The IOSM driver represents the MBIM data channel as a single root network
device of the "wwan0" type, which is mapped to the default IP session 0.
Several IP sessions (INMx) can be multiplexed over this single data channel
using sub-devices of the master wwanY device. The driver models such IP
sessions as 802.1q VLAN devices, each mapped to a unique VLAN ID.
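
Purely as an illustration of that model (not part of the driver; the master
interface name and the VLAN ID below are assumptions), an additional IP
session could be brought up from user space by creating an 802.1q sub-device
on top of the wwan0 master, e.g. via the standard VLAN ioctl:

	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <unistd.h>
	#include <linux/if_vlan.h>
	#include <linux/sockios.h>

	/* Sketch: add a VLAN sub-device on the wwan0 master so that the
	 * driver maps it to an additional IP session. VLAN ID 257 is only
	 * an example value.
	 */
	static int add_ip_session(void)
	{
		struct vlan_ioctl_args req = { 0 };
		int sk = socket(AF_INET, SOCK_DGRAM, 0);

		if (sk < 0)
			return -1;

		req.cmd = ADD_VLAN_CMD;
		strncpy(req.device1, "wwan0", sizeof(req.device1) - 1);
		req.u.VID = 257;

		if (ioctl(sk, SIOCSIFVLAN, &req) < 0) {
			close(sk);
			return -1;
		}

		close(sk);
		return 0;
	}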

M Chetan Kumar (18):
  net: iosm: entry point
  net: iosm: irq handling
  net: iosm: mmio scratchpad
  net: iosm: shared memory IPC interface
  net: iosm: shared memory I/O operations
  net: iosm: channel configuration
  net: iosm: char device for FW flash & coredump
  net: iosm: MBIM control device
  net: iosm: bottom half
  net: iosm: multiplex IP sessions
  net: iosm: encode or decode datagram
  net: iosm: power management
  net: iosm: shared memory protocol
  net: iosm: protocol operations
  net: iosm: uevent support
  net: iosm: net driver
  net: iosm: readme file
  net: iosm: infrastructure

 MAINTAINERS                                   |    7 +
 drivers/net/Kconfig                           |    1 +
 drivers/net/Makefile                          |    1 +
 drivers/net/wwan/Kconfig                      |   13 +
 drivers/net/wwan/Makefile                     |    5 +
 drivers/net/wwan/iosm/Kconfig                 |   10 +
 drivers/net/wwan/iosm/Makefile                |   27 +
 drivers/net/wwan/iosm/README                  |  126 +++
 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c     |   87 ++
 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h     |   57 +
 drivers/net/wwan/iosm/iosm_ipc_imem.c         | 1466 +++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_imem.h         |  606 ++++++++++
 drivers/net/wwan/iosm/iosm_ipc_imem_ops.c     |  779 +++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_imem_ops.h     |  102 ++
 drivers/net/wwan/iosm/iosm_ipc_irq.c          |   95 ++
 drivers/net/wwan/iosm/iosm_ipc_irq.h          |   35 +
 drivers/net/wwan/iosm/iosm_ipc_mbim.c         |  205 ++++
 drivers/net/wwan/iosm/iosm_ipc_mbim.h         |   24 +
 drivers/net/wwan/iosm/iosm_ipc_mmio.c         |  222 ++++
 drivers/net/wwan/iosm/iosm_ipc_mmio.h         |  192 ++++
 drivers/net/wwan/iosm/iosm_ipc_mux.c          |  455 ++++++++
 drivers/net/wwan/iosm/iosm_ipc_mux.h          |  344 ++++++
 drivers/net/wwan/iosm/iosm_ipc_mux_codec.c    |  902 +++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_mux_codec.h    |  194 ++++
 drivers/net/wwan/iosm/iosm_ipc_pcie.c         |  494 +++++++++
 drivers/net/wwan/iosm/iosm_ipc_pcie.h         |  205 ++++
 drivers/net/wwan/iosm/iosm_ipc_pm.c           |  334 ++++++
 drivers/net/wwan/iosm/iosm_ipc_pm.h           |  216 ++++
 drivers/net/wwan/iosm/iosm_ipc_protocol.c     |  287 +++++
 drivers/net/wwan/iosm/iosm_ipc_protocol.h     |  219 ++++
 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c |  563 ++++++++++
 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h |  358 ++++++
 drivers/net/wwan/iosm/iosm_ipc_sio.c          |  188 ++++
 drivers/net/wwan/iosm/iosm_ipc_sio.h          |   72 ++
 drivers/net/wwan/iosm/iosm_ipc_task_queue.c   |  258 +++++
 drivers/net/wwan/iosm/iosm_ipc_task_queue.h   |   46 +
 drivers/net/wwan/iosm/iosm_ipc_uevent.c       |   47 +
 drivers/net/wwan/iosm/iosm_ipc_uevent.h       |   41 +
 drivers/net/wwan/iosm/iosm_ipc_wwan.c         |  674 ++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_wwan.h         |   72 ++
 40 files changed, 10029 insertions(+)
 create mode 100644 drivers/net/wwan/Kconfig
 create mode 100644 drivers/net/wwan/Makefile
 create mode 100644 drivers/net/wwan/iosm/Kconfig
 create mode 100644 drivers/net/wwan/iosm/Makefile
 create mode 100644 drivers/net/wwan/iosm/README
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mbim.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mbim.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mmio.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mmio.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pcie.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pcie.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_sio.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_sio.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_task_queue.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_task_queue.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.h
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.h

-- 
2.12.3



* [RFC 01/18] net: iosm: entry point
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 02/18] net: iosm: irq handling M Chetan Kumar
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

1) Registers the IOSM driver with the kernel to manage the Intel WWAN PCIe
   device (PCI_VENDOR_ID_INTEL, INTEL_CP_DEVICE_7560_ID).
2) Exposes the EP PCIe device capability to the host PCIe core.
3) Initializes the PCIe EP configuration and defines the PCIe driver probe,
   remove and power management OPS.
4) Allocates and DMA-maps skb memory for data communication from device to
   kernel and vice versa.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_pcie.c | 494 ++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_pcie.h | 205 ++++++++++++++
 2 files changed, 699 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pcie.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pcie.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_pcie.c b/drivers/net/wwan/iosm/iosm_ipc_pcie.c
new file mode 100644
index 000000000000..9457e889695a
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_pcie.c
@@ -0,0 +1,494 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/bitfield.h>
+#include <linux/module.h>
+
+#include "iosm_ipc_imem.h"
+#include "iosm_ipc_pcie.h"
+
+#define DRV_AUTHOR "Intel Corporation <linuxwwan@intel.com>"
+
+MODULE_AUTHOR(DRV_AUTHOR);
+MODULE_DESCRIPTION("IOSM Driver");
+MODULE_LICENSE("GPL v2");
+
+static void ipc_pcie_resources_release(struct iosm_pcie *ipc_pcie)
+{
+	/* Free the MSI resources. */
+	ipc_release_irq(ipc_pcie);
+
+	/* Free mapped doorbell scratchpad bus memory into CPU space. */
+	iounmap(ipc_pcie->scratchpad);
+	ipc_pcie->scratchpad = NULL;
+
+	/* Free mapped IPC_REGS bus memory into CPU space. */
+	iounmap(ipc_pcie->ipc_regs);
+	ipc_pcie->ipc_regs = NULL;
+
+	/* Releases all PCI I/O and memory resources previously reserved by a
+	 * successful call to pci_request_regions.  Call this function only
+	 * after all use of the PCI regions has ceased.
+	 */
+	pci_release_regions(ipc_pcie->pci);
+}
+
+static void ipc_cleanup(struct iosm_pcie *ipc_pcie)
+{
+	struct pci_dev *pci;
+
+	pci = ipc_pcie->pci;
+
+	/* Free the shared memory resources. */
+	ipc_imem_cleanup(ipc_pcie->imem);
+
+	ipc_pcie_resources_release(ipc_pcie);
+
+	/* Signal to the system that the PCI device is not in use. */
+	if (ipc_pcie->pci)
+		pci_disable_device(pci);
+
+	/* dbg cleanup */
+	ipc_pcie->dev = NULL;
+}
+
+static void ipc_pcie_deinit(struct iosm_pcie *ipc_pcie)
+{
+	if (ipc_pcie) {
+		kfree(ipc_pcie->imem);
+		kfree(ipc_pcie);
+	}
+}
+
+static void iosm_ipc_remove(struct pci_dev *pci)
+{
+	struct iosm_pcie *ipc_pcie = pci_get_drvdata(pci);
+
+	ipc_cleanup(ipc_pcie);
+
+	ipc_pcie_deinit(ipc_pcie);
+}
+
+static int ipc_pcie_resources_request(struct iosm_pcie *ipc_pcie)
+{
+	struct pci_dev *pci = ipc_pcie->pci;
+	u32 cap;
+	int ret;
+
+	/* Reserve PCI I/O and memory resources.
+	 * Mark all PCI regions associated with PCI device pci as
+	 * being reserved by owner IOSM_IPC.
+	 */
+	ret = pci_request_regions(pci, "IOSM_IPC");
+	if (ret) {
+		dev_err(ipc_pcie->dev, "failed pci request regions");
+		goto pci_request_region_fail;
+	}
+
+	/* Reserve the doorbell IPC REGS memory resources.
+	 * Remap the memory into CPU space. Arrange for the physical address
+	 * (BAR) to be visible from this driver.
+	 * pci_ioremap_bar() ensures that the memory is marked uncachable.
+	 */
+	ipc_pcie->ipc_regs = pci_ioremap_bar(pci, ipc_pcie->ipc_regs_bar_nr);
+
+	if (!ipc_pcie->ipc_regs) {
+		dev_err(ipc_pcie->dev, "IPC REGS ioremap error");
+		ret = -EBUSY;
+		goto ipc_regs_remap_fail;
+	}
+
+	/* Reserve the MMIO scratchpad memory resources.
+	 * Remap the memory into CPU space. Arrange for the physical address
+	 * (BAR) to be visible from this driver.
+	 * pci_ioremap_bar() ensures that the memory is marked uncachable.
+	 */
+	ipc_pcie->scratchpad =
+		pci_ioremap_bar(pci, ipc_pcie->scratchpad_bar_nr);
+
+	if (!ipc_pcie->scratchpad) {
+		dev_err(ipc_pcie->dev, "doorbell scratchpad ioremap error");
+		ret = -EBUSY;
+		goto scratch_remap_fail;
+	}
+
+	/* Install the irq handler triggered by CP. */
+	ret = ipc_acquire_irq(ipc_pcie);
+	if (ret) {
+		dev_err(ipc_pcie->dev, "acquiring MSI irq failed!");
+		goto irq_acquire_fail;
+	}
+
+	/* Enable bus-mastering for the IOSM IPC device. */
+	pci_set_master(pci);
+
+	/* Enable LTR if possible
+	 * This is needed for L1.2!
+	 */
+	pcie_capability_read_dword(ipc_pcie->pci, PCI_EXP_DEVCAP2, &cap);
+	if (cap & PCI_EXP_DEVCAP2_LTR)
+		pcie_capability_set_word(ipc_pcie->pci, PCI_EXP_DEVCTL2,
+					 PCI_EXP_DEVCTL2_LTR_EN);
+
+	dev_dbg(ipc_pcie->dev, "link between AP and CP is fully on");
+
+	return ret;
+
+irq_acquire_fail:
+	iounmap(ipc_pcie->scratchpad);
+	ipc_pcie->scratchpad = NULL;
+scratch_remap_fail:
+	iounmap(ipc_pcie->ipc_regs);
+	ipc_pcie->ipc_regs = NULL;
+ipc_regs_remap_fail:
+	pci_release_regions(pci);
+pci_request_region_fail:
+	return ret;
+}
+
+bool ipc_pcie_check_aspm_enabled(struct iosm_pcie *ipc_pcie,
+				 struct pci_dev *pdev)
+{
+	u32 enabled = 0;
+	u16 value = 0;
+
+	if (!ipc_pcie || !pdev)
+		return false;
+
+	pcie_capability_read_word(pdev, PCI_EXP_LNKCTL, &value);
+	enabled = value & PCI_EXP_LNKCTL_ASPMC;
+	dev_dbg(ipc_pcie->dev, "ASPM L1: 0x%04X 0x%03X", pdev->device, value);
+
+	return (enabled == PCI_EXP_LNKCTL_ASPM_L1 ||
+		enabled == PCI_EXP_LNKCTL_ASPMC);
+}
+
+bool ipc_pcie_check_data_link_active(struct iosm_pcie *ipc_pcie)
+{
+	struct pci_dev *parent;
+	u16 link_status = 0;
+
+	if (!ipc_pcie || !ipc_pcie->pci) {
+		dev_dbg(ipc_pcie->dev, "device not found");
+		return false;
+	}
+
+	if (!ipc_pcie->pci->bus || !ipc_pcie->pci->bus->self) {
+		dev_err(ipc_pcie->dev, "root port not found");
+		return false;
+	}
+
+	parent = ipc_pcie->pci->bus->self;
+
+	pcie_capability_read_word(parent, PCI_EXP_LNKSTA, &link_status);
+	dev_dbg(ipc_pcie->dev, "Link status: 0x%04X", link_status);
+
+	return link_status & PCI_EXP_LNKSTA_DLLLA;
+}
+
+static bool ipc_pcie_check_aspm_supported(struct iosm_pcie *ipc_pcie,
+					  struct pci_dev *pdev)
+{
+	u32 support = 0;
+	u32 cap = 0;
+
+	pcie_capability_read_dword(pdev, PCI_EXP_LNKCAP, &cap);
+	support = u32_get_bits(cap, PCI_EXP_LNKCAP_ASPMS);
+	if (support < PCI_EXP_LNKCTL_ASPM_L1) {
+		dev_dbg(ipc_pcie->dev, "ASPM L1 not supported: 0x%04X",
+			pdev->device);
+		return false;
+	}
+	return true;
+}
+
+void ipc_pcie_config_aspm(struct iosm_pcie *ipc_pcie, struct pci_dev *pdev)
+{
+	bool parent_aspm_enabled, dev_aspm_enabled;
+	struct pci_dev *parent;
+
+	if (!pci_is_pcie(pdev)) {
+		dev_err(ipc_pcie->dev, "not a PCIe device");
+		return;
+	}
+
+	parent = pdev->bus->self;
+
+	/* check if both root port and child supports ASPM L1 */
+	if (!ipc_pcie_check_aspm_supported(ipc_pcie, parent) ||
+	    !ipc_pcie_check_aspm_supported(ipc_pcie, pdev))
+		return;
+
+	parent_aspm_enabled = ipc_pcie_check_aspm_enabled(ipc_pcie, parent);
+	dev_aspm_enabled = ipc_pcie_check_aspm_enabled(ipc_pcie, pdev);
+
+	dev_dbg(ipc_pcie->dev, "ASPM parent: %s device: %s",
+		parent_aspm_enabled ? "Enabled" : "Disabled",
+		dev_aspm_enabled ? "Enabled" : "Disabled");
+}
+
+/* Function initializes PCIe endpoint configuration */
+static void ipc_pcie_config_init(struct iosm_pcie *ipc_pcie)
+{
+	/* BAR0 is used for doorbell */
+	ipc_pcie->ipc_regs_bar_nr = IPC_DOORBELL_BAR0;
+
+	/* update HW configuration */
+	ipc_pcie->scratchpad_bar_nr = IPC_SCRATCHPAD_BAR2;
+	ipc_pcie->doorbell_reg_offset = IPC_DOORBELL_CH_OFFSET;
+	ipc_pcie->doorbell_write = IPC_WRITE_PTR_REG_0;
+	ipc_pcie->doorbell_capture = IPC_CAPTURE_PTR_REG_0;
+}
+
+/* The PCI bus has recognized the IOSM IPC device and invokes
+ * iosm_ipc_probe with the assigned resources. pci_id contains
+ * the identification, which need not be verified.
+ */
+static int iosm_ipc_probe(struct pci_dev *pci,
+			  const struct pci_device_id *pci_id)
+{
+	struct iosm_pcie *ipc_pcie;
+
+	pr_debug("Probing device 0x%X from the vendor 0x%X", pci_id->device,
+		 pci_id->vendor);
+
+	ipc_pcie = kzalloc(sizeof(*ipc_pcie), GFP_KERNEL);
+	if (!ipc_pcie)
+		goto ret_fail;
+
+	/* Initialize ipc dbg component for the PCIe device */
+	ipc_pcie->dev = &pci->dev;
+
+	/* Set the driver specific data. */
+	pci_set_drvdata(pci, ipc_pcie);
+
+	/* Save the address of the PCI device configuration. */
+	ipc_pcie->pci = pci;
+
+	/* Update platform configuration */
+	ipc_pcie_config_init(ipc_pcie);
+
+	/* Initialize the device before it is used. Ask low-level code
+	 * to enable I/O and memory. Wake up the device if it was suspended.
+	 */
+	if (pci_enable_device(pci)) {
+		dev_err(ipc_pcie->dev, "failed to enable the AP PCIe device");
+		/* If enabling the PCIe device fails, calling ipc_cleanup()
+		 * would panic the system. Moreover, ipc_cleanup() is required
+		 * to be called after ipc_imem_mount().
+		 */
+		goto pci_enable_fail;
+	}
+
+	ipc_pcie_config_aspm(ipc_pcie, pci);
+	dev_dbg(ipc_pcie->dev, "PCIe device enabled.");
+
+	if (ipc_pcie_resources_request(ipc_pcie))
+		goto resources_req_fail;
+
+	/* Establish the link to the imem layer. */
+	ipc_pcie->imem = ipc_imem_init(ipc_pcie, pci->device,
+				       ipc_pcie->scratchpad, ipc_pcie->dev);
+	if (!ipc_pcie->imem) {
+		dev_err(ipc_pcie->dev, "failed to init imem");
+		goto imem_init_fail;
+	}
+
+	clear_bit(IPC_SUSPENDED, &ipc_pcie->pm_flags);
+
+	return 0;
+
+imem_init_fail:
+	ipc_pcie_resources_release(ipc_pcie);
+resources_req_fail:
+	pci_disable_device(pci);
+pci_enable_fail:
+	ipc_pcie_deinit(ipc_pcie);
+ret_fail:
+	return -EIO;
+}
+
+static const struct pci_device_id iosm_ipc_ids[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, INTEL_CP_DEVICE_7560_ID) },
+	{}
+};
+
+int __maybe_unused iosm_ipc_suspend(struct device *dev)
+{
+	struct iosm_pcie *ipc_pcie;
+	struct pci_dev *pdev;
+	int ret;
+
+	pdev = to_pci_dev(dev);
+
+	ipc_pcie = pci_get_drvdata(pdev);
+
+	/* Execute D3 one time. */
+	if (pdev->current_state != PCI_D0) {
+		dev_dbg(ipc_pcie->dev, "done for PM=%d", pdev->current_state);
+		return 0;
+	}
+
+	/* The HAL shall ask the shared memory layer whether D3 is allowed. */
+	ipc_imem_pm_suspend(ipc_pcie->imem);
+
+	/* Save the PCI configuration space of a device before suspending. */
+	ret = pci_save_state(pdev);
+
+	if (ret) {
+		dev_err(ipc_pcie->dev, "pci_save_state error=%d", ret);
+		return ret;
+	}
+
+	/* Set the power state of a PCI device.
+	 * Transition a device to a new power state, using the device's PCI PM
+	 * registers.
+	 */
+	ret = pci_set_power_state(pdev, PCI_D3cold);
+
+	if (ret) {
+		dev_err(ipc_pcie->dev, "pci_set_power_state error=%d", ret);
+		return ret;
+	}
+
+	set_bit(IPC_SUSPENDED, &ipc_pcie->pm_flags);
+
+	dev_dbg(ipc_pcie->dev, "SUSPEND done");
+	return 0;
+}
+
+int __maybe_unused iosm_ipc_resume(struct device *dev)
+{
+	struct iosm_pcie *ipc_pcie;
+	struct pci_dev *pdev;
+	int ret;
+
+	pdev = to_pci_dev(dev);
+
+	ipc_pcie = pci_get_drvdata(pdev);
+
+	/* Set the power state of a PCI device.
+	 * Transition a device to a new power state, using the device's PCI PM
+	 * registers.
+	 */
+	ret = pci_set_power_state(pdev, PCI_D0);
+
+	if (ret) {
+		dev_err(ipc_pcie->dev, "pci_set_power_state error=%d", ret);
+		return ret;
+	}
+
+	pci_restore_state(pdev);
+
+	clear_bit(IPC_SUSPENDED, &ipc_pcie->pm_flags);
+
+	/* The HAL shall inform the shared memory layer that the device is
+	 * active.
+	 */
+	ipc_imem_pm_resume(ipc_pcie->imem);
+
+	dev_dbg(ipc_pcie->dev, "RESUME done");
+	return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(iosm_ipc_pm, iosm_ipc_suspend, iosm_ipc_resume);
+
+static struct pci_driver iosm_ipc_driver = {
+	.name = KBUILD_MODNAME,
+	.probe = iosm_ipc_probe,
+	.remove = iosm_ipc_remove,
+	.driver = {
+		.pm = &iosm_ipc_pm,
+	},
+	.id_table = iosm_ipc_ids,
+};
+
+module_pci_driver(iosm_ipc_driver);
+
+int ipc_pcie_addr_map(struct iosm_pcie *ipc_pcie, void *mem, size_t size,
+		      dma_addr_t *mapping, int direction)
+{
+	if (!ipc_pcie || !mapping)
+		return -EINVAL;
+
+	if (ipc_pcie->pci) {
+		*mapping = dma_map_single(&ipc_pcie->pci->dev, mem, size,
+					  direction);
+		if (dma_mapping_error(&ipc_pcie->pci->dev, *mapping)) {
+			dev_err(ipc_pcie->dev, "dma mapping failed");
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+
+void ipc_pcie_addr_unmap(struct iosm_pcie *ipc_pcie, size_t size,
+			 dma_addr_t mapping, int direction)
+{
+	if (!ipc_pcie || !mapping)
+		return;
+
+	if (ipc_pcie->pci)
+		dma_unmap_single(&ipc_pcie->pci->dev, mapping, size, direction);
+}
+
+struct sk_buff *ipc_pcie_alloc_local_skb(struct iosm_pcie *ipc_pcie,
+					 gfp_t flags, size_t size)
+{
+	struct sk_buff *skb;
+
+	if (!ipc_pcie)
+		return NULL;
+
+	if (!size) {
+		dev_err(ipc_pcie->dev, "invalid size");
+		return NULL;
+	}
+
+	skb = __netdev_alloc_skb(NULL, size, flags);
+	if (!skb)
+		return NULL;
+
+	IPC_CB(skb)->op_type = (u8)UL_DEFAULT;
+	IPC_CB(skb)->mapping = 0;
+
+	return skb;
+}
+
+struct sk_buff *ipc_pcie_alloc_skb(struct iosm_pcie *ipc_pcie, size_t size,
+				   gfp_t flags, dma_addr_t *mapping,
+				   int direction, size_t headroom)
+{
+	struct sk_buff *skb =
+		ipc_pcie_alloc_local_skb(ipc_pcie, flags, size + headroom);
+
+	if (!skb)
+		return NULL;
+
+	if (headroom)
+		skb_reserve(skb, headroom);
+
+	if (ipc_pcie_addr_map(ipc_pcie, skb->data, size, mapping, direction)) {
+		dev_kfree_skb(skb);
+		return NULL;
+	}
+
+	BUILD_BUG_ON(sizeof(*IPC_CB(skb)) > sizeof(skb->cb));
+
+	/* Store the mapping address in skb scratch pad for later usage */
+	IPC_CB(skb)->mapping = *mapping;
+	IPC_CB(skb)->direction = direction;
+	IPC_CB(skb)->len = size;
+
+	return skb;
+}
+
+void ipc_pcie_kfree_skb(struct iosm_pcie *ipc_pcie, struct sk_buff *skb)
+{
+	if (!skb)
+		return;
+
+	ipc_pcie_addr_unmap(ipc_pcie, IPC_CB(skb)->len, IPC_CB(skb)->mapping,
+			    IPC_CB(skb)->direction);
+	IPC_CB(skb)->mapping = 0;
+	dev_kfree_skb(skb);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_pcie.h b/drivers/net/wwan/iosm/iosm_ipc_pcie.h
new file mode 100644
index 000000000000..916c049291cc
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_pcie.h
@@ -0,0 +1,205 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_PCIE_H
+#define IOSM_IPC_PCIE_H
+
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/skbuff.h>
+
+#include "iosm_ipc_irq.h"
+
+/* Device ID */
+#define INTEL_CP_DEVICE_7560_ID 0x7560
+
+/* Define for BAR area usage */
+#define IPC_DOORBELL_BAR0 0
+#define IPC_SCRATCHPAD_BAR2 2
+
+/* Defines for DOORBELL registers information */
+#define IPC_DOORBELL_CH_OFFSET BIT(5)
+#define IPC_WRITE_PTR_REG_0 BIT(4)
+#define IPC_CAPTURE_PTR_REG_0 BIT(3)
+#define IPC_SUSPENDED BIT(0)
+
+/* Number of MSI used for IPC */
+#define IPC_MSI_VECTORS 1
+
+/* Total number of Maximum IPC IRQ vectors used for IPC */
+#define IPC_IRQ_VECTORS IPC_MSI_VECTORS
+
+/**
+ * struct iosm_pcie - IPC_PCIE struct.
+ * @pci:			Address of the device description
+ * @dev:			Pointer to generic device structure
+ * @ipc_regs:			Remapped CP doorbell address of the irq register
+ *				set, to fire the doorbell irq.
+ * @scratchpad:			Remapped CP scratchpad address, to send the
+ *				configuration tuple and the IPC descriptors
+ *				to CP in the ROM phase. The config tuple
+ *				information is saved on the MSI scratchpad.
+ * @imem:			Pointer to imem data struct
+ * @ipc_regs_bar_nr:		BAR number to be used for IPC doorbell
+ * @scratchpad_bar_nr:		BAR number to be used for Scratchpad
+ * @nvec:			number of requested irq vectors
+ * @doorbell_reg_offset:	doorbell register offset
+ * @doorbell_write:		doorbell write register
+ * @doorbell_capture:		doorbell capture register
+ * @pm_flags:			flags for the Power Management
+ */
+struct iosm_pcie {
+	struct pci_dev *pci;
+	struct device *dev;
+	void __iomem *ipc_regs;
+	void __iomem *scratchpad;
+	struct iosm_imem *imem;
+	int ipc_regs_bar_nr;
+	int scratchpad_bar_nr;
+	int nvec;
+	u32 doorbell_reg_offset;
+	u32 doorbell_write;
+	u32 doorbell_capture;
+	unsigned long pm_flags;
+};
+
+/**
+ * struct ipc_skb_cb - State definition of the socket buffer which is mapped to
+ *		       the cb field of skb
+ * @mapping:	Physical or IOVA mapped address of the skb data
+ * @direction:	DMA direction
+ * @len:	Length of the DMA mapped region
+ * @op_type:    Expected values are defined in enum ipc_ul_usr_op.
+ */
+struct ipc_skb_cb {
+	dma_addr_t mapping;
+	int direction;
+	int len;
+	u8 op_type;
+};
+
+/**
+ * enum ipc_ul_usr_op - Control information to execute the right operation on
+ *			the user interface.
+ * @UL_USR_OP_BLOCKED:	The uplink app was blocked until CP confirms that the
+ *			uplink buffer was consumed, as signalled by the IRQ.
+ * @UL_MUX_OP_ADB:	In MUX mode the UL ADB shall be added to the free list.
+ * @UL_DEFAULT:		SKB in non muxing mode
+ */
+enum ipc_ul_usr_op {
+	UL_USR_OP_BLOCKED,
+	UL_MUX_OP_ADB,
+	UL_DEFAULT,
+};
+
+/**
+ * ipc_pcie_addr_map - Maps the kernel's virtual address to either IOVA or
+ *		       physical address space and returns the mapping via
+ *		       @mapping.
+ * @ipc_pcie:	Pointer to struct iosm_pcie
+ * @mem:	Skb mem containing data
+ * @size:	Data size
+ * @mapping:	Dma mapping address
+ * @direction:	Data direction
+ *
+ * Returns: 0 on success else error code
+ */
+int ipc_pcie_addr_map(struct iosm_pcie *ipc_pcie, void *mem, size_t size,
+		      dma_addr_t *mapping, int direction);
+
+/**
+ * ipc_pcie_addr_unmap - Unmaps the skb memory region from IOVA address space
+ * @ipc_pcie:	Pointer to struct iosm_pcie
+ * @size:	Data size
+ * @mapping:	Dma mapping address
+ * @direction:	Data direction
+ */
+void ipc_pcie_addr_unmap(struct iosm_pcie *ipc_pcie, size_t size,
+			 dma_addr_t mapping, int direction);
+
+/**
+ * ipc_pcie_alloc_skb - Allocate an uplink SKB for the given size.
+ *			This also re-calculates the Start and End addresses
+ *			if PCIe Address Range Check (PARC) is supported.
+ * @ipc_pcie:	Pointer to struct iosm_pcie
+ * @size:	Size of the SKB required.
+ * @flags:	Allocation flags
+ * @mapping:	Returns the mapped IOVA or converted physical address
+ * @direction:	DMA data direction
+ * @headroom:	Header data offset
+ * Returns: Pointer to ipc_skb on Success, NULL on failure.
+ */
+struct sk_buff *ipc_pcie_alloc_skb(struct iosm_pcie *ipc_pcie, size_t size,
+				   gfp_t flags, dma_addr_t *mapping,
+				   int direction, size_t headroom);
+
+/**
+ * ipc_pcie_alloc_local_skb - Allocate a local SKB for the given size.
+ * @ipc_pcie:	Pointer to struct iosm_pcie
+ * @flags:	Allocation flags
+ * @size:	Size of the SKB required.
+ *
+ * Returns: Pointer to ipc_skb on Success, NULL on failure.
+ */
+struct sk_buff *ipc_pcie_alloc_local_skb(struct iosm_pcie *ipc_pcie,
+					 gfp_t flags, size_t size);
+
+/**
+ * ipc_pcie_kfree_skb - Free skb allocated by ipc_pcie_alloc_*_skb().
+ *			Calling it on an skb that was not allocated by
+ *			ipc_pcie_alloc_*_skb() may cause a kernel panic.
+ * @ipc_pcie:	Pointer to struct iosm_pcie
+ * @skb:	Pointer to the skb
+ */
+void ipc_pcie_kfree_skb(struct iosm_pcie *ipc_pcie, struct sk_buff *skb);
+
+/**
+ * ipc_pcie_check_data_link_active - Check Data Link Layer Active
+ * @ipc_pcie:	Pointer to struct iosm_pcie
+ *
+ * Returns: true if active, otherwise false
+ */
+bool ipc_pcie_check_data_link_active(struct iosm_pcie *ipc_pcie);
+
+/**
+ * iosm_ipc_suspend - PM suspend callback. It informs the shared memory
+ *		      layer, saves the PCI state and transitions the device
+ *		      to a low-power state.
+ * @dev:	Pointer to struct device
+ *
+ * Returns: 0 on success else error code
+ */
+int iosm_ipc_suspend(struct device *dev);
+
+/**
+ * iosm_ipc_resume - PM resume callback. It brings the device back to D0,
+ *		     restores the PCI state and informs the shared memory
+ *		     layer that the device is active.
+ * @dev:	Pointer to struct device
+ *
+ * Returns: 0 on success else error code
+ */
+int iosm_ipc_resume(struct device *dev);
+
+/**
+ * ipc_pcie_check_aspm_enabled - Check if ASPM L1 is already enabled
+ * @ipc_pcie:			 Pointer to struct iosm_pcie
+ * @pdev:	Pointer to struct pci_dev
+ *
+ * Returns: true if ASPM is already enabled else false
+ */
+bool ipc_pcie_check_aspm_enabled(struct iosm_pcie *ipc_pcie,
+				 struct pci_dev *pdev);
+/**
+ * ipc_pcie_config_aspm - Configure ASPM L1
+ * @ipc_pcie:	Pointer to struct iosm_pcie
+ * @pdev:	Pointer to struct pci_dev
+ */
+void ipc_pcie_config_aspm(struct iosm_pcie *ipc_pcie, struct pci_dev *pdev);
+
+#endif
-- 
2.12.3



* [RFC 02/18] net: iosm: irq handling
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
  2020-11-23 13:51 ` [RFC 01/18] net: iosm: entry point M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 03/18] net: iosm: mmio scratchpad M Chetan Kumar
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

1) Requests interrupt vectors and frees allocated resources.
2) Registers the IRQ handler.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_irq.c | 95 ++++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_irq.h | 35 +++++++++++++
 2 files changed, 130 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_irq.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_irq.c b/drivers/net/wwan/iosm/iosm_ipc_irq.c
new file mode 100644
index 000000000000..b9e1bc7959db
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_irq.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include "iosm_ipc_pcie.h"
+#include "iosm_ipc_protocol.h"
+
+/* Write to the specified register offset for doorbell interrupt */
+static inline void write_dbell_reg(struct iosm_pcie *ipc_pcie, int irq_n,
+				   u32 data)
+{
+	void __iomem *write_reg;
+
+	/* Select the first doorbell register, which is only currently needed
+	 * by CP.
+	 */
+	write_reg = (void __iomem *)((u8 __iomem *)ipc_pcie->ipc_regs +
+				     ipc_pcie->doorbell_write +
+				     (irq_n * ipc_pcie->doorbell_reg_offset));
+
+	/* Fire the doorbell irq by writing data on the doorbell write pointer
+	 * register.
+	 */
+	iowrite32(data, write_reg);
+}
+
+void ipc_doorbell_fire(struct iosm_pcie *ipc_pcie, int irq_n, u32 data)
+{
+	if (!ipc_pcie || !ipc_pcie->ipc_regs)
+		return;
+
+	write_dbell_reg(ipc_pcie, irq_n, data);
+}
+
+/* Threaded Interrupt handler for MSI interrupts */
+static irqreturn_t ipc_msi_interrupt(int irq, void *dev_id)
+{
+	struct iosm_pcie *ipc_pcie = dev_id;
+	int instance = irq - ipc_pcie->pci->irq;
+
+	/* Shift the MSI irq actions to the IPC tasklet. IRQ_NONE means the
+	 * irq was not from the IPC device or could not be served.
+	 */
+	if (instance >= ipc_pcie->nvec)
+		return IRQ_NONE;
+
+	ipc_imem_irq_process(ipc_pcie->imem, instance);
+
+	return IRQ_HANDLED;
+}
+
+void ipc_release_irq(struct iosm_pcie *ipc_pcie)
+{
+	struct pci_dev *pdev = ipc_pcie->pci;
+
+	if (pdev->msi_enabled) {
+		while (--ipc_pcie->nvec >= 0)
+			free_irq(pdev->irq + ipc_pcie->nvec, ipc_pcie);
+	}
+	pci_free_irq_vectors(pdev);
+}
+
+int ipc_acquire_irq(struct iosm_pcie *ipc_pcie)
+{
+	struct pci_dev *pdev = ipc_pcie->pci;
+	int i, rc = 0;
+
+	ipc_pcie->nvec = pci_alloc_irq_vectors(pdev, IPC_MSI_VECTORS,
+					       IPC_MSI_VECTORS, PCI_IRQ_MSI);
+
+	if (ipc_pcie->nvec < 0)
+		return ipc_pcie->nvec;
+
+	if (!pdev->msi_enabled) {
+		rc = -EINVAL;
+		goto error;
+	}
+
+	for (i = 0; i < ipc_pcie->nvec; ++i) {
+		rc = request_threaded_irq(pdev->irq + i, NULL,
+					  ipc_msi_interrupt, 0, KBUILD_MODNAME,
+					  ipc_pcie);
+		if (rc) {
+			dev_err(ipc_pcie->dev, "unable to grab IRQ %d, rc=%d",
+				pdev->irq, rc);
+			ipc_pcie->nvec = i;
+			ipc_release_irq(ipc_pcie);
+			goto error;
+		}
+	}
+
+error:
+	return rc;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_irq.h b/drivers/net/wwan/iosm/iosm_ipc_irq.h
new file mode 100644
index 000000000000..db207cb95a8a
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_irq.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_IRQ_H
+#define IOSM_IPC_IRQ_H
+
+#include "iosm_ipc_pcie.h"
+
+struct iosm_pcie;
+
+/**
+ * ipc_doorbell_fire - fire doorbell to CP
+ * @ipc_pcie:	Pointer to iosm_pcie
+ * @irq_n:	Doorbell type
+ * @data:	ipc state
+ */
+void ipc_doorbell_fire(struct iosm_pcie *ipc_pcie, int irq_n, u32 data);
+
+/**
+ * ipc_release_irq - Remove the IRQ handler.
+ * @ipc_pcie:	Pointer to iosm_pcie struct
+ */
+void ipc_release_irq(struct iosm_pcie *ipc_pcie);
+
+/**
+ * ipc_acquire_irq - Install the IPC IRQ handler.
+ * @ipc_pcie:	Pointer to iosm_pcie struct
+ *
+ * Return: 0 on success, a negative errno otherwise
+ */
+int ipc_acquire_irq(struct iosm_pcie *ipc_pcie);
+
+#endif
-- 
2.12.3



* [RFC 03/18] net: iosm: mmio scratchpad
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
  2020-11-23 13:51 ` [RFC 01/18] net: iosm: entry point M Chetan Kumar
  2020-11-23 13:51 ` [RFC 02/18] net: iosm: irq handling M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 04/18] net: iosm: shared memory IPC interface M Chetan Kumar
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

1) Initializes the Scratchpad region for Host-Device communication.
2) Exposes device capabilities like chip info and device execution
   stages.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_mmio.c | 222 ++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_mmio.h | 192 +++++++++++++++++++++++++++++
 2 files changed, 414 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mmio.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mmio.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_mmio.c b/drivers/net/wwan/iosm/iosm_ipc_mmio.c
new file mode 100644
index 000000000000..eb685f6c720d
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mmio.c
@@ -0,0 +1,222 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/delay.h>
+#include <linux/io.h>
+#include <linux/slab.h>
+
+#include "iosm_ipc_mmio.h"
+#include "iosm_ipc_sio.h"
+
+/* Definition of MMIO offsets
+ * note that MMIO_CI offsets are relative to end of chip info structure
+ */
+
+/* MMIO chip info size in bytes */
+#define MMIO_CHIP_INFO_SIZE 60
+
+/* CP execution stage */
+#define MMIO_OFFSET_EXECUTION_STAGE 0x00
+
+/* Boot ROM Chip Info struct */
+#define MMIO_OFFSET_CHIP_INFO 0x04
+
+#define MMIO_OFFSET_ROM_EXIT_CODE 0x40
+
+#define MMIO_OFFSET_PSI_ADDRESS 0x54
+
+#define MMIO_OFFSET_PSI_SIZE 0x5C
+
+#define MMIO_OFFSET_IPC_STATUS 0x60
+
+#define MMIO_OFFSET_CONTEXT_INFO 0x64
+
+#define MMIO_OFFSET_BASE_ADDR 0x6C
+
+#define MMIO_OFFSET_END_ADDR 0x74
+
+#define MMIO_OFFSET_CP_VERSION 0xF0
+
+#define MMIO_OFFSET_CP_CAPABILITIES 0xF4
+
+/* Number of 20 msec retries to wait for the modem boot code to write a
+ * valid execution stage into the mmio area
+ */
+#define IPC_MMIO_EXEC_STAGE_TIMEOUT 50
+
+/* check if exec stage has one of the valid values */
+static bool ipc_mmio_is_valid_exec_stage(enum ipc_mem_exec_stage stage)
+{
+	switch (stage) {
+	case IPC_MEM_EXEC_STAGE_BOOT:
+	case IPC_MEM_EXEC_STAGE_PSI:
+	case IPC_MEM_EXEC_STAGE_EBL:
+	case IPC_MEM_EXEC_STAGE_RUN:
+	case IPC_MEM_EXEC_STAGE_CRASH:
+	case IPC_MEM_EXEC_STAGE_CD_READY:
+		return true;
+	default:
+		return false;
+	}
+}
+
+void ipc_mmio_update_cp_capability(struct iosm_mmio *ipc_mmio)
+{
+	u32 cp_cap;
+	unsigned int ver;
+
+	ver = ipc_mmio_get_cp_version(ipc_mmio);
+	cp_cap = readl(ipc_mmio->base + ipc_mmio->offset.cp_capability);
+
+	ipc_mmio->has_mux_lite = (ver >= IOSM_CP_VERSION) &&
+				 !(cp_cap & DL_AGGR) && !(cp_cap & UL_AGGR);
+
+	ipc_mmio->has_ul_flow_credit =
+		(ver >= IOSM_CP_VERSION) && (cp_cap & UL_FLOW_CREDIT);
+}
+
+struct iosm_mmio *ipc_mmio_init(void __iomem *mmio, struct device *dev)
+{
+	struct iosm_mmio *ipc_mmio = kzalloc(sizeof(*ipc_mmio), GFP_KERNEL);
+	int retries = IPC_MMIO_EXEC_STAGE_TIMEOUT;
+	enum ipc_mem_exec_stage stage;
+
+	if (!ipc_mmio)
+		return NULL;
+
+	ipc_mmio->dev = dev;
+
+	ipc_mmio->base = mmio;
+
+	ipc_mmio->offset.exec_stage = MMIO_OFFSET_EXECUTION_STAGE;
+
+	/* Check for a valid execution stage to make sure that the boot code
+	 * has correctly initialized the MMIO area.
+	 */
+	do {
+		stage = ipc_mmio_get_exec_stage(ipc_mmio);
+		if (ipc_mmio_is_valid_exec_stage(stage))
+			break;
+
+		msleep(20);
+	} while (retries-- > 0);
+
+	if (retries < 0) {
+		dev_err(ipc_mmio->dev, "invalid exec stage %X", stage);
+		goto init_fail;
+	}
+
+	ipc_mmio->offset.chip_info = MMIO_OFFSET_CHIP_INFO;
+
+	/* read chip info size and version from chip info structure */
+	ipc_mmio->chip_info_version =
+		ioread8(ipc_mmio->base + ipc_mmio->offset.chip_info);
+
+	/* Increment of 2 is needed as the size value in the chip info
+	 * excludes the version and size field, which are always present
+	 */
+	ipc_mmio->chip_info_size =
+		ioread8(ipc_mmio->base + ipc_mmio->offset.chip_info + 1) + 2;
+
+	if (ipc_mmio->chip_info_size != MMIO_CHIP_INFO_SIZE) {
+		dev_err(ipc_mmio->dev, "Unexpected Chip Info");
+		goto init_fail;
+	}
+
+	ipc_mmio->offset.rom_exit_code = MMIO_OFFSET_ROM_EXIT_CODE;
+
+	ipc_mmio->offset.psi_address = MMIO_OFFSET_PSI_ADDRESS;
+	ipc_mmio->offset.psi_size = MMIO_OFFSET_PSI_SIZE;
+	ipc_mmio->offset.ipc_status = MMIO_OFFSET_IPC_STATUS;
+	ipc_mmio->offset.context_info = MMIO_OFFSET_CONTEXT_INFO;
+	ipc_mmio->offset.ap_win_base = MMIO_OFFSET_BASE_ADDR;
+	ipc_mmio->offset.ap_win_end = MMIO_OFFSET_END_ADDR;
+
+	ipc_mmio->offset.cp_version = MMIO_OFFSET_CP_VERSION;
+	ipc_mmio->offset.cp_capability = MMIO_OFFSET_CP_CAPABILITIES;
+
+	return ipc_mmio;
+
+init_fail:
+	kfree(ipc_mmio);
+	return NULL;
+}
+
+enum ipc_mem_exec_stage ipc_mmio_get_exec_stage(struct iosm_mmio *ipc_mmio)
+{
+	if (!ipc_mmio)
+		return IPC_MEM_EXEC_STAGE_INVALID;
+
+	return (enum ipc_mem_exec_stage)readl(ipc_mmio->base +
+					      ipc_mmio->offset.exec_stage);
+}
+
+void ipc_mmio_copy_chip_info(struct iosm_mmio *ipc_mmio, void *dest,
+			     size_t size)
+{
+	if (ipc_mmio && dest)
+		memcpy_fromio(dest, ipc_mmio->base + ipc_mmio->offset.chip_info,
+			      size);
+}
+
+enum ipc_mem_device_ipc_state ipc_mmio_get_ipc_state(struct iosm_mmio *ipc_mmio)
+{
+	if (!ipc_mmio)
+		return IPC_MEM_DEVICE_IPC_INVALID;
+
+	return (enum ipc_mem_device_ipc_state)
+		readl(ipc_mmio->base + ipc_mmio->offset.ipc_status);
+}
+
+enum rom_exit_code ipc_mmio_get_rom_exit_code(struct iosm_mmio *ipc_mmio)
+{
+	if (!ipc_mmio)
+		return IMEM_ROM_EXIT_FAIL;
+
+	return (enum rom_exit_code)readl(ipc_mmio->base +
+					 ipc_mmio->offset.rom_exit_code);
+}
+
+void ipc_mmio_config(struct iosm_mmio *ipc_mmio)
+{
+	if (!ipc_mmio)
+		return;
+
+	/* AP memory window: setting base and end to 0 opens the full window,
+	 * i.e. the modem performs no AP address range check.
+	 */
+	iowrite64_lo_hi(0, ipc_mmio->base + ipc_mmio->offset.ap_win_base);
+	iowrite64_lo_hi(0, ipc_mmio->base + ipc_mmio->offset.ap_win_end);
+
+	iowrite64_lo_hi(ipc_mmio->context_info_addr,
+			ipc_mmio->base + ipc_mmio->offset.context_info);
+}
+
+void ipc_mmio_set_psi_addr_and_size(struct iosm_mmio *ipc_mmio, dma_addr_t addr,
+				    u32 size)
+{
+	if (!ipc_mmio)
+		return;
+
+	iowrite64_lo_hi(addr, ipc_mmio->base + ipc_mmio->offset.psi_address);
+	writel(size, ipc_mmio->base + ipc_mmio->offset.psi_size);
+}
+
+void ipc_mmio_set_contex_info_addr(struct iosm_mmio *ipc_mmio, phys_addr_t addr)
+{
+	if (!ipc_mmio)
+		return;
+
+	/* store context_info address. This will be stored in the mmio area
+	 * during IPC_MEM_DEVICE_IPC_INIT state via ipc_mmio_config()
+	 */
+	ipc_mmio->context_info_addr = addr;
+}
+
+int ipc_mmio_get_cp_version(struct iosm_mmio *ipc_mmio)
+{
+	return ipc_mmio ? readl(ipc_mmio->base + ipc_mmio->offset.cp_version) :
+			  -1;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_mmio.h b/drivers/net/wwan/iosm/iosm_ipc_mmio.h
new file mode 100644
index 000000000000..bbd6bd8383b5
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mmio.h
@@ -0,0 +1,192 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_MMIO_H
+#define IOSM_IPC_MMIO_H
+
+/* Minimal IOSM CP VERSION which has valid CP_CAPABILITIES field */
+#define IOSM_CP_VERSION 0x0100UL
+
+/* DL dir Aggregation support mask */
+#define DL_AGGR BIT(23)
+
+/* UL dir Aggregation support mask */
+#define UL_AGGR BIT(22)
+
+/* UL flow credit support mask */
+#define UL_FLOW_CREDIT BIT(21)
+
+/* Possible states of the IPC finite state machine. */
+enum ipc_mem_device_ipc_state {
+	IPC_MEM_DEVICE_IPC_UNINIT,
+	IPC_MEM_DEVICE_IPC_INIT,
+	IPC_MEM_DEVICE_IPC_RUNNING,
+	IPC_MEM_DEVICE_IPC_RECOVERY,
+	IPC_MEM_DEVICE_IPC_ERROR,
+	IPC_MEM_DEVICE_IPC_DONT_CARE,
+	IPC_MEM_DEVICE_IPC_INVALID = -1
+};
+
+/* Boot ROM exit status. */
+enum rom_exit_code {
+	IMEM_ROM_EXIT_OPEN_EXT = 0x01,
+	IMEM_ROM_EXIT_OPEN_MEM = 0x02,
+	IMEM_ROM_EXIT_CERT_EXT = 0x10,
+	IMEM_ROM_EXIT_CERT_MEM = 0x20,
+	IMEM_ROM_EXIT_FAIL = 0xFF
+};
+
+/* Boot stages */
+enum ipc_mem_exec_stage {
+	IPC_MEM_EXEC_STAGE_RUN = 0x600DF00D,
+	IPC_MEM_EXEC_STAGE_CRASH = 0x8BADF00D,
+	IPC_MEM_EXEC_STAGE_CD_READY = 0xBADC0DED,
+	IPC_MEM_EXEC_STAGE_BOOT = 0xFEEDB007,
+	IPC_MEM_EXEC_STAGE_PSI = 0xFEEDBEEF,
+	IPC_MEM_EXEC_STAGE_EBL = 0xFEEDCAFE,
+	IPC_MEM_EXEC_STAGE_INVALID = 0xFFFFFFFF,
+};
+
+struct mmio_offset {
+	int exec_stage;
+	int chip_info;
+	int rom_exit_code;
+	int psi_address;
+	int psi_size;
+	int ipc_status;
+	int context_info;
+	int ap_win_base;
+	int ap_win_end;
+	int cp_version;
+	int cp_capability;
+};
+
+/**
+ * struct iosm_mmio - MMIO region mapped to the doorbell scratchpad.
+ * @base:		Base address of MMIO region
+ * @dev:		Pointer to device structure
+ * @offset:		Start offset
+ * @context_info_addr:	Physical base address of context info structure
+ * @chip_info_version:	Version of chip info structure
+ * @chip_info_size:	Size of chip info structure
+ * @has_mux_lite:	Modem supports MUX Lite (no DL/UL aggregation)
+ * @has_ul_flow_credit:	Ul flow credit support
+ * @has_slp_no_prot:	Device sleep no protocol support
+ * @has_mcr_support:	Usage of mcr support
+ */
+struct iosm_mmio {
+	unsigned char __iomem *base;
+	struct device *dev;
+	struct mmio_offset offset;
+	phys_addr_t context_info_addr;
+	unsigned int chip_info_version;
+	unsigned int chip_info_size;
+	u8 has_mux_lite : 1;
+	u8 has_ul_flow_credit : 1;
+	u8 has_slp_no_prot : 1;
+	u8 has_mcr_support : 1;
+};
+
+/**
+ * ipc_mmio_init - Allocate mmio instance data
+ * @mmio_addr:	Mapped AP base address of the MMIO area.
+ * @dev:	Pointer to device structure
+ *
+ * Returns: address of mmio instance data
+ */
+struct iosm_mmio *ipc_mmio_init(void __iomem *mmio_addr, struct device *dev);
+
+/**
+ * ipc_mmio_set_psi_addr_and_size - Set start address and size of the
+ *				    primary system image (PSI) for the
+ *				    boot ROM download app.
+ * @ipc_mmio:	Pointer to mmio instance
+ * @addr:	PSI address
+ * @size:	PSI image size
+ */
+void ipc_mmio_set_psi_addr_and_size(struct iosm_mmio *ipc_mmio, dma_addr_t addr,
+				    u32 size);
+
+/**
+ * ipc_mmio_set_contex_info_addr - Stores the Context Info Address in
+ *				   MMIO instance to share it with CP via
+ *				   ipc_mmio_config().
+ * @ipc_mmio:	Pointer to mmio instance
+ * @addr:	64-bit address of AP context information.
+ */
+void ipc_mmio_set_contex_info_addr(struct iosm_mmio *ipc_mmio,
+				   phys_addr_t addr);
+
+/**
+ * ipc_mmio_get_cp_version - Get the CP IPC version
+ * @ipc_mmio:	Pointer to mmio instance
+ *
+ * Returns: version number on success and -1 on failure.
+ */
+int ipc_mmio_get_cp_version(struct iosm_mmio *ipc_mmio);
+
+/**
+ * ipc_mmio_get_rom_exit_code - Get exit code from CP boot rom download app
+ * @ipc_mmio:	Pointer to mmio instance
+ *
+ * Returns: exit code from CP boot rom download APP
+ */
+enum rom_exit_code ipc_mmio_get_rom_exit_code(struct iosm_mmio *ipc_mmio);
+
+/**
+ * ipc_mmio_get_exec_stage - Query CP execution stage
+ * @ipc_mmio:	Pointer to mmio instance
+ *
+ * Returns: CP execution stage
+ */
+enum ipc_mem_exec_stage ipc_mmio_get_exec_stage(struct iosm_mmio *ipc_mmio);
+
+/**
+ * ipc_mmio_get_ipc_state - Query CP IPC state
+ * @ipc_mmio:	Pointer to mmio instance
+ *
+ * Returns: CP IPC state
+ */
+enum ipc_mem_device_ipc_state
+ipc_mmio_get_ipc_state(struct iosm_mmio *ipc_mmio);
+
+/**
+ * ipc_mmio_copy_chip_info - Copy size bytes of CP chip info structure
+ *			     into caller provided buffer
+ * @ipc_mmio:	Pointer to mmio instance
+ * @dest:	Pointer to caller provided buff
+ * @size:	Number of bytes to copy
+ */
+void ipc_mmio_copy_chip_info(struct iosm_mmio *ipc_mmio, void *dest,
+			     size_t size);
+
+/**
+ * ipc_mmio_config - Write context info and AP memory range addresses.
+ *		     This needs to be called when CP is in
+ *		     IPC_MEM_DEVICE_IPC_INIT state
+ *
+ * @ipc_mmio:	Pointer to mmio instance
+ */
+void ipc_mmio_config(struct iosm_mmio *ipc_mmio);
+
+/**
+ * ipc_mmio_update_cp_capability - Read and update modem capability from the
+ *				   mmio capability offset
+ *
+ * @ipc_mmio:	Pointer to mmio instance
+ */
+void ipc_mmio_update_cp_capability(struct iosm_mmio *ipc_mmio);
+
+#endif
-- 
2.12.3



* [RFC 04/18] net: iosm: shared memory IPC interface
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (2 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 03/18] net: iosm: mmio scratchpad M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 05/18] net: iosm: shared memory I/O operations M Chetan Kumar
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

1) Initializes shared memory for host-device communication.
2) Allocates resources required for control & data operations.
3) Transfers the device IRQ to the IPC execution thread.
4) Defines the timer callbacks for async events.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_imem.c | 1466 +++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_imem.h |  606 ++++++++++++++
 2 files changed, 2072 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem.c b/drivers/net/wwan/iosm/iosm_ipc_imem.c
new file mode 100644
index 000000000000..7c26e2fdf77b
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_imem.c
@@ -0,0 +1,1466 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/if_vlan.h>
+
+#include "iosm_ipc_chnl_cfg.h"
+#include "iosm_ipc_imem.h"
+#include "iosm_ipc_mbim.h"
+#include "iosm_ipc_sio.h"
+#include "iosm_ipc_task_queue.h"
+
+/* Check the wwan ips if it is valid with Channel as input. */
+static inline int ipc_imem_check_wwan_ips(struct ipc_mem_channel *chnl)
+{
+	if (chnl)
+		return chnl->ctype == IPC_CTYPE_WWAN &&
+		       chnl->vlan_id == IPC_MEM_MUX_IP_CH_VLAN_ID;
+	return false;
+}
+
+static int imem_msg_send_device_sleep(struct iosm_imem *ipc_imem, u32 state)
+{
+	union ipc_msg_prep_args prep_args = {
+		.sleep.target = 1,
+		.sleep.state = state,
+	};
+
+	ipc_imem->device_sleep = state;
+
+	return ipc_protocol_tq_msg_send(ipc_imem->ipc_protocol,
+					IPC_MSG_PREP_SLEEP, &prep_args, NULL);
+}
+
+static bool imem_dl_skb_alloc(struct iosm_imem *ipc_imem, struct ipc_pipe *pipe)
+{
+	/* limit max. nr of entries */
+	if (pipe->nr_of_queued_entries >= pipe->max_nr_of_queued_entries)
+		return false;
+
+	return ipc_protocol_dl_td_prepare(ipc_imem->ipc_protocol, pipe);
+}
+
+/* This timer handler will retry DL buff allocation if a pipe has no free buf */
+static int imem_tq_td_alloc_timer(void *instance, int arg, void *msg,
+				  size_t size)
+{
+	struct iosm_imem *ipc_imem = instance;
+	bool new_buffers_available = false;
+	bool retry_allocation = false;
+	int i;
+
+	for (i = 0; i < IPC_MEM_MAX_CHANNELS; i++) {
+		struct ipc_pipe *pipe = &ipc_imem->channels[i].dl_pipe;
+
+		if (!pipe->is_open || pipe->nr_of_queued_entries > 0)
+			continue;
+
+		while (imem_dl_skb_alloc(ipc_imem, pipe))
+			new_buffers_available = true;
+
+		if (pipe->nr_of_queued_entries == 0)
+			retry_allocation = true;
+	}
+
+	if (new_buffers_available)
+		ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol,
+					      IPC_HP_DL_PROCESS);
+
+	if (retry_allocation)
+		imem_hrtimer_start(ipc_imem, &ipc_imem->td_alloc_timer,
+				   IPC_TD_ALLOC_TIMER_PERIOD_MS * 1000);
+	return 0;
+}
+
+static enum hrtimer_restart imem_td_alloc_timer_cb(struct hrtimer *hr_timer)
+{
+	struct iosm_imem *ipc_imem =
+		container_of(hr_timer, struct iosm_imem, td_alloc_timer);
+	/* Post an async tasklet event to trigger HP update Doorbell */
+	ipc_task_queue_send_task(ipc_imem, imem_tq_td_alloc_timer, 0, NULL, 0,
+				 false);
+	return HRTIMER_NORESTART;
+}
+
+/* Fast update timer tasklet handler to trigger HP update */
+static int imem_tq_fast_update_timer_cb(void *instance, int arg, void *msg,
+					size_t size)
+{
+	struct iosm_imem *ipc_imem = instance;
+
+	ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol,
+				      IPC_HP_FAST_TD_UPD_TMR);
+
+	return 0;
+}
+
+static enum hrtimer_restart imem_fast_update_timer_cb(struct hrtimer *hr_timer)
+{
+	struct iosm_imem *ipc_imem =
+		container_of(hr_timer, struct iosm_imem, fast_update_timer);
+	/* Post an async tasklet event to trigger HP update Doorbell */
+	ipc_task_queue_send_task(ipc_imem, imem_tq_fast_update_timer_cb, 0,
+				 NULL, 0, false);
+	return HRTIMER_NORESTART;
+}
+
+static void
+imem_hrtimer_init(struct hrtimer *hr_timer,
+		  enum hrtimer_restart (*callback)(struct hrtimer *hr_timer))
+{
+	hrtimer_init(hr_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	hr_timer->function = callback;
+}
+
+static int imem_setup_cp_mux_cap_init(struct iosm_imem *ipc_imem,
+				      struct ipc_mux_config *cfg)
+{
+	ipc_mmio_update_cp_capability(ipc_imem->mmio);
+
+	if (!ipc_imem->mmio->has_mux_lite) {
+		dev_err(ipc_imem->dev, "Failed to get Mux capability.");
+		return -1;
+	}
+
+	cfg->protocol = MUX_LITE;
+
+	cfg->ul_flow = (ipc_imem->mmio->has_ul_flow_credit == 1) ?
+			       MUX_UL_ON_CREDITS :
+			       MUX_UL;
+
+	/* The instance ID is the same as the channel ID because it is reused
+	 * by the channel allocation function.
+	 */
+	cfg->instance_id = IPC_MEM_MUX_IP_CH_VLAN_ID;
+	cfg->nr_sessions = IPC_MEM_MUX_IP_SESSION_ENTRIES;
+
+	return 0;
+}
+
+void imem_msg_send_feature_set(struct iosm_imem *ipc_imem,
+			       unsigned int reset_enable, bool atomic_ctx)
+{
+	union ipc_msg_prep_args prep_args = { .feature_set.reset_enable =
+						      reset_enable };
+
+	if (atomic_ctx)
+		ipc_protocol_tq_msg_send(ipc_imem->ipc_protocol,
+					 IPC_MSG_PREP_FEATURE_SET, &prep_args,
+					 NULL);
+	else
+		ipc_protocol_msg_send(ipc_imem->ipc_protocol,
+				      IPC_MSG_PREP_FEATURE_SET, &prep_args);
+}
+
+void imem_hrtimer_start(struct iosm_imem *ipc_imem, struct hrtimer *hr_timer,
+			unsigned long period)
+{
+	ipc_imem->hrtimer_period = ktime_set(0, period * 1000ULL);
+	if (!hrtimer_active(hr_timer) && period != 0)
+		hrtimer_start(hr_timer, ipc_imem->hrtimer_period,
+			      HRTIMER_MODE_REL);
+}
+
+void imem_td_update_timer_start(struct iosm_imem *ipc_imem)
+{
+	/* Use the UL timer only in the runtime phase and
+	 * trigger the doorbell irq on CP directly.
+	 */
+	if (!ipc_imem->enter_runtime || ipc_imem->td_update_timer_suspended) {
+		ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol,
+					      IPC_HP_TD_UPD_TMR_START);
+		return;
+	}
+
+	if (!hrtimer_active(&ipc_imem->tdupdate_timer))
+		imem_hrtimer_start(ipc_imem, &ipc_imem->tdupdate_timer,
+				   TD_UPDATE_DEFAULT_TIMEOUT_USEC);
+}
+
+void imem_hrtimer_stop(struct hrtimer *hr_timer)
+{
+	if (hrtimer_active(hr_timer))
+		hrtimer_cancel(hr_timer);
+}
+
+bool imem_ul_write_td(struct iosm_imem *ipc_imem)
+{
+	struct ipc_mem_channel *channel;
+	struct sk_buff_head *ul_list;
+	bool hpda_pending = false;
+	bool forced_hpdu = false;
+	struct ipc_pipe *pipe;
+	int i;
+
+	/* Analyze the uplink pipe of all active channels. */
+	for (i = 0; i < ipc_imem->nr_of_channels; i++) {
+		channel = &ipc_imem->channels[i];
+
+		if (channel->state != IMEM_CHANNEL_ACTIVE)
+			continue;
+
+		pipe = &channel->ul_pipe;
+
+		/* Get the reference to the skbuf accumulator list. */
+		ul_list = &channel->ul_list;
+
+		/* Fill the transfer descriptor with the uplink buffer info. */
+		hpda_pending |= ipc_protocol_ul_td_send(ipc_imem->ipc_protocol,
+							pipe, ul_list);
+
+		/* forced HP update needed for non data channels */
+		if (hpda_pending && !ipc_imem_check_wwan_ips(channel))
+			forced_hpdu = true;
+	}
+
+	if (forced_hpdu) {
+		hpda_pending = false;
+		ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol,
+					      IPC_HP_UL_WRITE_TD);
+	}
+
+	return hpda_pending;
+}
+
+void imem_ipc_init_check(struct iosm_imem *ipc_imem)
+{
+	int timeout = IPC_MODEM_BOOT_TIMEOUT;
+
+	/* Trigger the CP interrupt to enter the init state. */
+	ipc_imem->ipc_requested_state = IPC_MEM_DEVICE_IPC_INIT;
+
+	ipc_doorbell_fire(ipc_imem->pcie, IPC_DOORBELL_IRQ_IPC,
+			  IPC_MEM_DEVICE_IPC_INIT);
+	/* Wait for the CP update. */
+	do {
+		if (ipc_mmio_get_ipc_state(ipc_imem->mmio) ==
+		    ipc_imem->ipc_requested_state) {
+			/* Prepare the MMIO space */
+			ipc_mmio_config(ipc_imem->mmio);
+
+			/* Trigger the CP irq to enter the running state. */
+			ipc_imem->ipc_requested_state =
+				IPC_MEM_DEVICE_IPC_RUNNING;
+			ipc_doorbell_fire(ipc_imem->pcie, IPC_DOORBELL_IRQ_IPC,
+					  IPC_MEM_DEVICE_IPC_RUNNING);
+
+			return;
+		}
+		msleep(20);
+	} while (--timeout);
+
+	/* timeout */
+	dev_err(ipc_imem->dev, "%s: ipc_status(%d) ne. IPC_MEM_DEVICE_IPC_INIT",
+		ipc_ap_phase_get_string(ipc_imem->phase),
+		ipc_mmio_get_ipc_state(ipc_imem->mmio));
+
+	ipc_uevent_send(ipc_imem->dev, UEVENT_MDM_TIMEOUT);
+}
+
+/* Analyze the packet type and distribute it. */
+static void imem_dl_skb_process(struct iosm_imem *ipc_imem,
+				struct ipc_pipe *pipe, struct sk_buff *skb)
+{
+	if (!skb)
+		return;
+
+	/* An AT/control or IP packet is expected. */
+	switch (pipe->channel->ctype) {
+	case IPC_CTYPE_FLASH:
+		/* Pass the packet to the char layer. */
+		if (imem_sys_sio_receive(ipc_imem->sio, skb))
+			goto rcv_err;
+		break;
+
+	case IPC_CTYPE_MBIM:
+		/* Pass the packet to the char layer. */
+		if (imem_sys_sio_receive(ipc_imem->mbim, skb))
+			goto rcv_err;
+		break;
+
+	case IPC_CTYPE_WWAN:
+		/* drop the packet if vlan id = 0 */
+		if (pipe->channel->vlan_id == 0)
+			goto rcv_err;
+
+		if (pipe->channel->vlan_id > 256 &&
+		    pipe->channel->vlan_id < 512) {
+			if (pipe->channel->state != IMEM_CHANNEL_ACTIVE)
+				goto rcv_err;
+
+			skb_push(skb, ETH_HLEN);
+			/* map session to vlan */
+			__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
+					       pipe->channel->vlan_id);
+
+			/* unmap skb from address mapping */
+			ipc_pcie_addr_unmap(ipc_imem->pcie, IPC_CB(skb)->len,
+					    IPC_CB(skb)->mapping,
+					    IPC_CB(skb)->direction);
+			IPC_CB(skb)->mapping = 0;
+
+			if (ipc_wwan_receive(ipc_imem->wwan, skb, true))
+				pipe->channel->net_err_count++;
+			/* DL packet through IP MUX layer */
+		} else if (pipe->channel->vlan_id ==
+			   IPC_MEM_MUX_IP_CH_VLAN_ID) {
+			ipc_mux_dl_decode(ipc_imem->mux, skb);
+		}
+		break;
+	default:
+		dev_err(ipc_imem->dev, "Invalid channel type");
+		break;
+	}
+	return;
+
+rcv_err:
+	ipc_pcie_kfree_skb(ipc_imem->pcie, skb);
+}
+
+/* Process the downlink data and pass them to the char or net layer. */
+static void imem_dl_pipe_process(struct iosm_imem *ipc_imem,
+				 struct ipc_pipe *pipe)
+{
+	s32 cnt = 0, processed_td_cnt = 0;
+	struct ipc_mem_channel *channel;
+	u32 head = 0, tail = 0;
+	bool processed = false;
+	struct sk_buff *skb;
+
+	channel = pipe->channel;
+
+	ipc_protocol_get_head_tail_index(ipc_imem->ipc_protocol, pipe, &head,
+					 &tail);
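+	/* Number of TDs processed by CP since the last call, taking
+	 * ring buffer wrap-around into account.
+	 */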
+	if (pipe->old_tail != tail) {
+		if (pipe->old_tail < tail)
+			cnt = tail - pipe->old_tail;
+		else
+			cnt = pipe->nr_of_entries - pipe->old_tail + tail;
+	}
+
+	processed_td_cnt = cnt;
+
+	/* Process the TDs completed by CP on this DL pipe. */
+	while (cnt--) {
+		skb = ipc_protocol_dl_td_process(ipc_imem->ipc_protocol, pipe);
+
+		/* Analyze the packet type and distribute it. */
+		imem_dl_skb_process(ipc_imem, pipe, skb);
+	}
+
+	/* Try to allocate new empty DL SKBs from head..tail - 1 */
+	while (imem_dl_skb_alloc(ipc_imem, pipe))
+		processed = true;
+
+	/* flush net interfaces if needed */
+	if (processed && !ipc_imem_check_wwan_ips(channel)) {
+		/* Force HP update for non IP channels */
+		ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol,
+					      IPC_HP_DL_PROCESS);
+		processed = false;
+
+		/* If Fast Update timer is already running then stop */
+		imem_hrtimer_stop(&ipc_imem->fast_update_timer);
+	}
+
+	/* Any control channel process will get immediate HP update.
+	 * Start Fast update timer only for IP channel if all the TDs were
+	 * used in last process.
+	 */
+	if (processed && (processed_td_cnt == pipe->nr_of_entries - 1))
+		imem_hrtimer_start(ipc_imem, &ipc_imem->fast_update_timer,
+				   FORCE_UPDATE_DEFAULT_TIMEOUT_USEC);
+
+	if (ipc_imem->app_notify_dl_pend)
+		complete(&ipc_imem->dl_pend_sem);
+}
+
+/* Free the uplink buffer. */
+static void imem_ul_pipe_process(struct iosm_imem *ipc_imem,
+				 struct ipc_pipe *pipe)
+{
+	struct ipc_mem_channel *channel;
+	u32 tail = 0, head = 0;
+	struct sk_buff *skb;
+	s32 cnt = 0;
+
+	channel = pipe->channel;
+
+	/* Get the current head and tail index of the UL pipe. */
+	ipc_protocol_get_head_tail_index(ipc_imem->ipc_protocol, pipe, &head,
+					 &tail);
+
+	if (pipe->old_tail != tail) {
+		if (pipe->old_tail < tail)
+			cnt = tail - pipe->old_tail;
+		else
+			cnt = pipe->nr_of_entries - pipe->old_tail + tail;
+	}
+
+	/* Free UL buffers. */
+	while (cnt--) {
+		skb = ipc_protocol_ul_td_process(ipc_imem->ipc_protocol, pipe);
+
+		if (!skb)
+			continue;
+
+		/* If the user app was suspended in uplink direction - blocking
+		 * write, resume it.
+		 */
+		if (IPC_CB(skb)->op_type == UL_USR_OP_BLOCKED)
+			complete(&channel->ul_sem);
+
+		/* Free the skbuf element. */
+		if (IPC_CB(skb)->op_type == UL_MUX_OP_ADB) {
+			if (channel->vlan_id == IPC_MEM_MUX_IP_CH_VLAN_ID)
+				ipc_mux_ul_encoded_process(ipc_imem->mux, skb);
+			else
+				dev_err(ipc_imem->dev,
+					"Channel OP Type is UL_MUX but vlan_id %d is unknown",
+					channel->vlan_id);
+		} else {
+			ipc_pcie_kfree_skb(ipc_imem->pcie, skb);
+		}
+	}
+
+	/* For the IP UL pipe, restart MUX transmission if required. */
+	if (ipc_imem_check_wwan_ips(pipe->channel)) {
+		if (channel->vlan_id == IPC_MEM_MUX_IP_CH_VLAN_ID)
+			ipc_mux_check_n_restart_tx(ipc_imem->mux);
+	}
+
+	if (ipc_imem->app_notify_ul_pend)
+		complete(&ipc_imem->ul_pend_sem);
+}
+
+/* Read the CP ROM exit code and wake up the waiting flash app. */
+static void imem_rom_irq_exec(struct iosm_imem *ipc_imem)
+{
+	struct ipc_mem_channel *channel;
+
+	if (ipc_imem->flash_channel_id < 0) {
+		ipc_imem->rom_exit_code = IMEM_ROM_EXIT_FAIL;
+		dev_err(ipc_imem->dev, "Missing flash app:%d",
+			ipc_imem->flash_channel_id);
+		return;
+	}
+
+	ipc_imem->rom_exit_code = ipc_mmio_get_rom_exit_code(ipc_imem->mmio);
+
+	/* Wake up the flash app to continue or to terminate depending
+	 * on the CP ROM exit code.
+	 */
+	channel = &ipc_imem->channels[ipc_imem->flash_channel_id];
+	complete(&channel->ul_sem);
+}
+
+/* Execute the UL bundle timer actions. */
+static int imem_tq_td_update_timer_cb(void *instance, int arg, void *msg,
+				      size_t size)
+{
+	struct iosm_imem *ipc_imem = instance;
+
+	ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol,
+				      IPC_HP_TD_UPD_TMR);
+	return 0;
+}
+
+/* Consider link power management in the runtime phase. */
+static void imem_slp_control_exec(struct iosm_imem *ipc_imem)
+{
+	if (ipc_protocol_pm_dev_sleep_handle(ipc_imem->ipc_protocol) &&
+	    /* link will go down, test pending UL packets. */
+	    hrtimer_active(&ipc_imem->tdupdate_timer)) {
+		/* Generate the doorbell irq. */
+		imem_tq_td_update_timer_cb(ipc_imem, 0, NULL, 0);
+		/* Deactivate the TD update timer. */
+		imem_hrtimer_stop(&ipc_imem->tdupdate_timer);
+		/* Deactivate the force update timer. */
+		imem_hrtimer_stop(&ipc_imem->fast_update_timer);
+	}
+}
+
+/* Execute startup timer and wait for delayed start (e.g. NAND) */
+static int imem_tq_startup_timer_cb(void *instance, int arg, void *msg,
+				    size_t size)
+{
+	struct iosm_imem *ipc_imem = instance;
+
+	/* Update & check the current operation phase. */
+	if (imem_ap_phase_update(ipc_imem) != IPC_P_RUN)
+		return -1;
+
+	if (ipc_mmio_get_ipc_state(ipc_imem->mmio) ==
+	    IPC_MEM_DEVICE_IPC_UNINIT) {
+		ipc_imem->ipc_requested_state = IPC_MEM_DEVICE_IPC_INIT;
+
+		ipc_doorbell_fire(ipc_imem->pcie, IPC_DOORBELL_IRQ_IPC,
+				  IPC_MEM_DEVICE_IPC_INIT);
+
+		/* reduce period to 100 ms to check for mmio init state */
+		imem_hrtimer_start(ipc_imem, &ipc_imem->startup_timer,
+				   100 * 1000UL);
+	} else if (ipc_mmio_get_ipc_state(ipc_imem->mmio) ==
+		   IPC_MEM_DEVICE_IPC_INIT) {
+		/* Startup complete - disable timer */
+		imem_hrtimer_stop(&ipc_imem->startup_timer);
+
+		/* Prepare the MMIO space */
+		ipc_mmio_config(ipc_imem->mmio);
+		ipc_imem->ipc_requested_state = IPC_MEM_DEVICE_IPC_RUNNING;
+		ipc_doorbell_fire(ipc_imem->pcie, IPC_DOORBELL_IRQ_IPC,
+				  IPC_MEM_DEVICE_IPC_RUNNING);
+	}
+
+	return 0;
+}
+
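+/* Startup timer callback: re-arm the timer while a period is configured and
+ * defer the actual polling work to the tasklet context.
+ */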
+static enum hrtimer_restart imem_startup_timer_cb(struct hrtimer *hr_timer)
+{
+	enum hrtimer_restart result = HRTIMER_NORESTART;
+	struct iosm_imem *ipc_imem =
+		container_of(hr_timer, struct iosm_imem, startup_timer);
+
+	if (ktime_to_ns(ipc_imem->hrtimer_period) != 0) {
+		hrtimer_forward(&ipc_imem->startup_timer, ktime_get(),
+				ipc_imem->hrtimer_period);
+		result = HRTIMER_RESTART;
+	}
+
+	ipc_task_queue_send_task(ipc_imem, imem_tq_startup_timer_cb, 0, NULL, 0,
+				 false);
+	return result;
+}
+
+/* Get the CP execution stage */
+static enum ipc_mem_exec_stage
+ipc_imem_get_exec_stage_buffered(struct iosm_imem *ipc_imem)
+{
+	return (ipc_imem->phase == IPC_P_RUN &&
+		ipc_imem->ipc_status == IPC_MEM_DEVICE_IPC_RUNNING) ?
+		       ipc_protocol_get_ap_exec_stage(ipc_imem->ipc_protocol) :
+		       ipc_mmio_get_exec_stage(ipc_imem->mmio);
+}
+
+/* Callback to send the modem ready uevent */
+static int imem_send_mdm_rdy_cb(void *instance, int arg, void *msg, size_t size)
+{
+	struct iosm_imem *ipc_imem = instance;
+	enum ipc_mem_exec_stage exec_stage =
+		ipc_imem_get_exec_stage_buffered(ipc_imem);
+
+	if (exec_stage == IPC_MEM_EXEC_STAGE_RUN)
+		ipc_uevent_send(ipc_imem->dev, UEVENT_MDM_READY);
+
+	return 0;
+}
+
+/* Steps to be executed when modem reaches RUN state.
+ * This function is executed in a task context via an ipc_worker object,
+ * as the creation or removal of the device can't be done from a tasklet.
+ */
+static void ipc_imem_run_state_worker(struct work_struct *instance)
+{
+	struct ipc_mux_config mux_cfg;
+	struct iosm_imem *ipc_imem;
+	int total_sessions = 0;
+
+	ipc_imem = container_of(instance, struct iosm_imem, run_state_worker);
+
+	if (ipc_imem->phase != IPC_P_RUN) {
+		dev_err(ipc_imem->dev,
+			"Modem link down. Exit run state worker.");
+		return;
+	}
+
+	if (!imem_setup_cp_mux_cap_init(ipc_imem, &mux_cfg)) {
+		ipc_imem->mux = mux_init(&mux_cfg, ipc_imem);
+		if (ipc_imem->mux)
+			total_sessions += mux_cfg.nr_sessions;
+	}
+
+	wwan_channel_init(ipc_imem, total_sessions, mux_cfg.protocol);
+	if (ipc_imem->mux)
+		ipc_imem->mux->wwan = ipc_imem->wwan;
+
+	/* Remove boot sio device */
+	ipc_sio_deinit(ipc_imem->sio);
+
+	ipc_imem->sio = NULL;
+
+	ipc_task_queue_send_task(ipc_imem, imem_send_mdm_rdy_cb, 0, NULL, 0,
+				 false);
+}
+
+static void imem_handle_irq(struct iosm_imem *ipc_imem, int irq)
+{
+	enum ipc_mem_device_ipc_state curr_ipc_status;
+	enum ipc_phase old_phase, phase;
+	bool retry_allocation = false;
+	bool ul_pending = false;
+	int ch_id, i;
+
+	if (irq != IMEM_IRQ_DONT_CARE)
+		ipc_imem->ev_irq_pending[irq] = false;
+
+	/* Get the internal phase. */
+	old_phase = ipc_imem->phase;
+
+	if (old_phase == IPC_P_OFF_REQ) {
+		dev_dbg(ipc_imem->dev,
+			"[%s]: Ignoring MSI. Deinit sequence in progress!",
+			ipc_ap_phase_get_string(old_phase));
+		return;
+	}
+
+	/* Update the phase controlled by CP. */
+	phase = imem_ap_phase_update(ipc_imem);
+
+	switch (phase) {
+	case IPC_P_RUN:
+		if (!ipc_imem->enter_runtime) {
+			/* Execute the transition from flash/boot to runtime. */
+			ipc_imem->enter_runtime = 1;
+
+			/* allow device to sleep, default value is
+			 * IPC_HOST_SLEEP_ENTER_SLEEP
+			 */
+			imem_msg_send_device_sleep(ipc_imem,
+						   ipc_imem->device_sleep);
+
+			imem_msg_send_feature_set(ipc_imem,
+						  IPC_MEM_INBAND_CRASH_SIG,
+						  true);
+		}
+
+		curr_ipc_status =
+			ipc_protocol_get_ipc_status(ipc_imem->ipc_protocol);
+
+		/* check ipc_status change */
+		if (ipc_imem->ipc_status != curr_ipc_status) {
+			ipc_imem->ipc_status = curr_ipc_status;
+
+			if (ipc_imem->ipc_status ==
+			    IPC_MEM_DEVICE_IPC_RUNNING) {
+				schedule_work(&ipc_imem->run_state_worker);
+			}
+		}
+
+		/* Consider power management in the runtime phase. */
+		imem_slp_control_exec(ipc_imem);
+		break; /* Continue with skbuf processing. */
+
+		/* Unexpected phases. */
+	case IPC_P_OFF:
+	case IPC_P_OFF_REQ:
+		dev_err(ipc_imem->dev, "confused phase %s",
+			ipc_ap_phase_get_string(phase));
+		return;
+
+	case IPC_P_PSI:
+		if (old_phase != IPC_P_ROM)
+			break;
+
+		fallthrough;
+		/* On CP the PSI phase is already active. */
+
+	case IPC_P_ROM:
+		/* Before CP ROM driver starts the PSI image, it sets
+		 * the exit_code field on the doorbell scratchpad and
+		 * triggers the irq.
+		 */
+		imem_rom_irq_exec(ipc_imem);
+		return;
+
+	default:
+		break;
+	}
+
+	/* process message ring */
+	ipc_protocol_msg_process(ipc_imem->ipc_protocol, irq);
+
+	/* process all open pipes */
+	for (i = 0; i < IPC_MEM_MAX_CHANNELS; i++) {
+		struct ipc_pipe *ul_pipe = &ipc_imem->channels[i].ul_pipe;
+		struct ipc_pipe *dl_pipe = &ipc_imem->channels[i].dl_pipe;
+
+		if (dl_pipe->is_open &&
+		    (irq == IMEM_IRQ_DONT_CARE || irq == dl_pipe->irq)) {
+			imem_dl_pipe_process(ipc_imem, dl_pipe);
+
+			if (dl_pipe->nr_of_queued_entries == 0)
+				retry_allocation = true;
+		}
+
+		if (ul_pipe->is_open)
+			imem_ul_pipe_process(ipc_imem, ul_pipe);
+	}
+
+	/* Try to generate new ADB or ADGH. */
+	if (ipc_mux_ul_data_encode(ipc_imem->mux))
+		/* Do not restart the timer if already running */
+		imem_td_update_timer_start(ipc_imem);
+
+	/* Continue the send procedure with accumulated SIO or NETIF packets.
+	 * Reset the debounce flags.
+	 */
+	ul_pending |= imem_ul_write_td(ipc_imem);
+
+	/* if UL data is processed restart TD update timer */
+	if (ul_pending)
+		imem_hrtimer_start(ipc_imem, &ipc_imem->tdupdate_timer,
+				   TD_UPDATE_DEFAULT_TIMEOUT_USEC);
+
+	/* If CP has executed the transition
+	 * from IPC_INIT to IPC_RUNNING in the PSI
+	 * phase, wake up the flash app to open the pipes.
+	 */
+	if ((phase == IPC_P_PSI || phase == IPC_P_EBL) &&
+	    ipc_imem->ipc_requested_state == IPC_MEM_DEVICE_IPC_RUNNING &&
+	    ipc_mmio_get_ipc_state(ipc_imem->mmio) ==
+		    IPC_MEM_DEVICE_IPC_RUNNING &&
+	    ipc_imem->flash_channel_id >= 0) {
+		/* Wake up the flash app to open the pipes. */
+		ch_id = ipc_imem->flash_channel_id;
+		complete(&ipc_imem->channels[ch_id].ul_sem);
+	}
+
+	/* Reset the expected CP state. */
+	ipc_imem->ipc_requested_state = IPC_MEM_DEVICE_IPC_DONT_CARE;
+
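+	/* A DL pipe ran out of queued TDs; retry the buffer allocation after
+	 * a short delay.
+	 */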
+	if (retry_allocation)
+		imem_hrtimer_start(ipc_imem, &ipc_imem->td_alloc_timer,
+				   IPC_TD_ALLOC_TIMER_PERIOD_MS * 1000);
+}
+
+/* Tasklet callback for interrupt handler.*/
+static int imem_tq_irq_cb(void *instance, int arg, void *msg, size_t size)
+{
+	struct iosm_imem *ipc_imem = instance;
+
+	imem_handle_irq(ipc_imem, arg);
+
+	return 0;
+}
+
+/* Verify the CP execution stage, copy the chip info, change the execution phase
+ * to ROM and resume the flash app.
+ */
+static int imem_tq_trigger_chip_info_cb(void *instance, int arg, void *msg,
+					size_t msgsize)
+{
+	struct iosm_imem *ipc_imem = instance;
+	enum ipc_mem_exec_stage stage;
+	struct sk_buff *skb;
+	size_t size;
+	int rc = -1;
+
+	/* Test the CP execution state. */
+	stage = ipc_mmio_get_exec_stage(ipc_imem->mmio);
+	if (stage != IPC_MEM_EXEC_STAGE_BOOT) {
+		dev_err(ipc_imem->dev,
+			"execution_stage: expected BOOT,received=%X", stage);
+		return rc;
+	}
+
+	/* Allocate a new sk buf for the chip info. */
+	size = ipc_imem->mmio->chip_info_size;
+	skb = ipc_pcie_alloc_local_skb(ipc_imem->pcie, GFP_ATOMIC, size);
+	if (!skb) {
+		dev_err(ipc_imem->dev, "exhausted skbuf kernel DL memory");
+		return rc;
+	}
+
+	/* Copy the chip info characters into the ipc_skb. */
+	ipc_mmio_copy_chip_info(ipc_imem->mmio, skb_put(skb, size), size);
+
+	/* First change to the ROM boot phase. */
+	dev_dbg(ipc_imem->dev, "execution_stage[%X] eq. BOOT", stage);
+	ipc_imem->phase = IPC_P_ROM;
+
+	/* Inform the flash app that the chip info is present. */
+	rc = imem_sys_sio_receive(ipc_imem->sio, skb);
+	if (rc) {
+		dev_err(ipc_imem->dev, "rejected downlink data");
+		ipc_pcie_kfree_skb(ipc_imem->pcie, skb);
+	}
+
+	return rc;
+}
+
+void imem_ul_send(struct iosm_imem *ipc_imem)
+{
+	/* start doorbell irq delay timer if UL is pending */
+	if (imem_ul_write_td(ipc_imem))
+		imem_td_update_timer_start(ipc_imem);
+}
+
+/* Check the execution stage and update the AP phase */
+static enum ipc_phase imem_ap_phase_update_check(struct iosm_imem *ipc_imem,
+						 enum ipc_mem_exec_stage stage)
+{
+	switch (stage) {
+	case IPC_MEM_EXEC_STAGE_BOOT:
+		if (ipc_imem->phase != IPC_P_ROM) {
+			/* Send this event only once */
+			ipc_uevent_send(ipc_imem->dev, UEVENT_ROM_READY);
+		}
+
+		return ipc_imem->phase = IPC_P_ROM;
+
+	case IPC_MEM_EXEC_STAGE_PSI:
+		return ipc_imem->phase = IPC_P_PSI;
+
+	case IPC_MEM_EXEC_STAGE_EBL:
+		return ipc_imem->phase = IPC_P_EBL;
+
+	case IPC_MEM_EXEC_STAGE_RUN:
+		if (ipc_imem->phase != IPC_P_RUN &&
+		    ipc_imem->ipc_status == IPC_MEM_DEVICE_IPC_RUNNING) {
+			ipc_uevent_send(ipc_imem->dev, UEVENT_MDM_READY);
+		}
+		return ipc_imem->phase = IPC_P_RUN;
+
+	case IPC_MEM_EXEC_STAGE_CRASH:
+		if (ipc_imem->phase != IPC_P_CRASH)
+			ipc_uevent_send(ipc_imem->dev, UEVENT_CRASH);
+
+		return ipc_imem->phase = IPC_P_CRASH;
+
+	case IPC_MEM_EXEC_STAGE_CD_READY:
+		if (ipc_imem->phase != IPC_P_CD_READY)
+			ipc_uevent_send(ipc_imem->dev, UEVENT_CD_READY);
+		return ipc_imem->phase = IPC_P_CD_READY;
+
+	default:
+		/* unknown exec stage:
+		 * assume that link is down and send info to listeners
+		 */
+		ipc_uevent_send(ipc_imem->dev, UEVENT_CD_READY_LINK_DOWN);
+		break;
+	}
+
+	return ipc_imem->phase;
+}
+
+/* Send msg to device to open pipe */
+static bool imem_pipe_open(struct iosm_imem *ipc_imem, struct ipc_pipe *pipe)
+{
+	union ipc_msg_prep_args prep_args = {
+		.pipe_open.pipe = pipe,
+	};
+
+	if (ipc_protocol_msg_send(ipc_imem->ipc_protocol,
+				  IPC_MSG_PREP_PIPE_OPEN, &prep_args) == 0)
+		pipe->is_open = true;
+
+	return pipe->is_open;
+}
+
+/* Allocates the TDs for the given pipe along with firing HP update DB. */
+static int imem_tq_pipe_td_alloc(void *instance, int arg, void *msg,
+				 size_t size)
+{
+	struct iosm_imem *ipc_imem = instance;
+	struct ipc_pipe *dl_pipe = msg;
+	bool processed = false;
+	int i;
+
+	for (i = 0; i < dl_pipe->nr_of_entries - 1; i++)
+		processed |= imem_dl_skb_alloc(ipc_imem, dl_pipe);
+
+	/* Trigger the doorbell irq to inform CP that new downlink buffers are
+	 * available.
+	 */
+	if (processed)
+		ipc_protocol_doorbell_trigger(ipc_imem->ipc_protocol, arg);
+
+	return 0;
+}
+
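+/* TD update timer callback: defer the doorbell trigger to the tasklet. */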
+static enum hrtimer_restart imem_td_update_timer_cb(struct hrtimer *hr_timer)
+{
+	struct iosm_imem *ipc_imem =
+		container_of(hr_timer, struct iosm_imem, tdupdate_timer);
+
+	ipc_task_queue_send_task(ipc_imem, imem_tq_td_update_timer_cb, 0, NULL,
+				 0, false);
+	return HRTIMER_NORESTART;
+}
+
+/* Get the CP execution state and map it to the AP phase. */
+enum ipc_phase imem_ap_phase_update(struct iosm_imem *ipc_imem)
+{
+	enum ipc_mem_exec_stage exec_stage =
+				ipc_imem_get_exec_stage_buffered(ipc_imem);
+	/* During cleanup (OFF_REQ) keep the internal precalculated phase. */
+	return ipc_imem->phase == IPC_P_OFF_REQ ?
+		       ipc_imem->phase :
+		       imem_ap_phase_update_check(ipc_imem, exec_stage);
+}
+
+const char *ipc_ap_phase_get_string(enum ipc_phase phase)
+{
+	switch (phase) {
+	case IPC_P_RUN:
+		return "A-RUN";
+
+	case IPC_P_OFF:
+		return "A-OFF";
+
+	case IPC_P_ROM:
+		return "A-ROM";
+
+	case IPC_P_PSI:
+		return "A-PSI";
+
+	case IPC_P_EBL:
+		return "A-EBL";
+
+	case IPC_P_CRASH:
+		return "A-CRASH";
+
+	case IPC_P_CD_READY:
+		return "A-CD_READY";
+
+	case IPC_P_OFF_REQ:
+		return "A-OFF_REQ";
+
+	default:
+		return "A-???";
+	}
+}
+
+void imem_pipe_close(struct iosm_imem *ipc_imem, struct ipc_pipe *pipe)
+{
+	union ipc_msg_prep_args prep_args = { .pipe_close.pipe = pipe };
+
+	pipe->is_open = false;
+	ipc_protocol_msg_send(ipc_imem->ipc_protocol, IPC_MSG_PREP_PIPE_CLOSE,
+			      &prep_args);
+
+	imem_pipe_cleanup(ipc_imem, pipe);
+}
+
+void imem_channel_close(struct iosm_imem *ipc_imem, int channel_id)
+{
+	struct ipc_mem_channel *channel;
+
+	if (channel_id < 0 || channel_id >= ipc_imem->nr_of_channels) {
+		dev_err(ipc_imem->dev, "invalid channel id %d", channel_id);
+		return;
+	}
+
+	channel = &ipc_imem->channels[channel_id];
+
+	if (channel->state == IMEM_CHANNEL_FREE) {
+		dev_err(ipc_imem->dev, "ch[%d]: invalid channel state %d",
+			channel_id, channel->state);
+		return;
+	}
+
+	/* Free only the channel id in the CP power off mode. */
+	if (channel->state == IMEM_CHANNEL_RESERVED)
+		/* Release only the channel id. */
+		goto channel_free;
+
+	if (ipc_imem->phase == IPC_P_RUN) {
+		imem_pipe_close(ipc_imem, &channel->ul_pipe);
+		imem_pipe_close(ipc_imem, &channel->dl_pipe);
+	}
+
+	imem_pipe_cleanup(ipc_imem, &channel->ul_pipe);
+	imem_pipe_cleanup(ipc_imem, &channel->dl_pipe);
+
+channel_free:
+	imem_channel_free(channel);
+}
+
+struct ipc_mem_channel *imem_channel_open(struct iosm_imem *ipc_imem,
+					  int channel_id, u32 db_id)
+{
+	struct ipc_mem_channel *channel;
+
+	if (channel_id < 0 || channel_id >= IPC_MEM_MAX_CHANNELS) {
+		dev_err(ipc_imem->dev, "invalid channel ID: %d", channel_id);
+		return NULL;
+	}
+
+	channel = &ipc_imem->channels[channel_id];
+
+	channel->state = IMEM_CHANNEL_ACTIVE;
+
+	if (!imem_pipe_open(ipc_imem, &channel->ul_pipe))
+		goto ul_pipe_err;
+
+	if (!imem_pipe_open(ipc_imem, &channel->dl_pipe))
+		goto dl_pipe_err;
+
+	/* Allocate the downlink buffers and inform CP in tasklet context. */
+	if (ipc_task_queue_send_task(ipc_imem, imem_tq_pipe_td_alloc, db_id,
+				     &channel->dl_pipe, 0, false)) {
+		dev_err(ipc_imem->dev, "td allocation failed : %d", channel_id);
+		goto task_failed;
+	}
+
+	/* Active channel. */
+	return channel;
+task_failed:
+	imem_pipe_close(ipc_imem, &channel->dl_pipe);
+dl_pipe_err:
+	imem_pipe_close(ipc_imem, &channel->ul_pipe);
+ul_pipe_err:
+	imem_channel_free(channel);
+	return NULL;
+}
+
+int ipc_imem_pm_suspend(struct iosm_imem *ipc_imem)
+{
+	return ipc_protocol_suspend(ipc_imem->ipc_protocol) ? 0 : -1;
+}
+
+void ipc_imem_pm_resume(struct iosm_imem *ipc_imem)
+{
+	enum ipc_mem_exec_stage stage;
+
+	if (ipc_protocol_resume(ipc_imem->ipc_protocol)) {
+		stage = ipc_mmio_get_exec_stage(ipc_imem->mmio);
+		imem_ap_phase_update_check(ipc_imem, stage);
+	}
+}
+
+int imem_trigger_chip_info(struct iosm_imem *ipc_imem)
+{
+	return ipc_task_queue_send_task(ipc_imem, imem_tq_trigger_chip_info_cb,
+					0, NULL, 0, true);
+}
+
+void imem_channel_free(struct ipc_mem_channel *channel)
+{
+	/* Reset dynamic channel elements. */
+	channel->sio_id = -1;
+	channel->state = IMEM_CHANNEL_FREE;
+}
+
+int imem_channel_alloc(struct iosm_imem *ipc_imem, int index,
+		       enum ipc_ctype ctype)
+{
+	struct ipc_mem_channel *channel;
+	int i;
+
+	/* Find channel of given type/index */
+	for (i = 0; i < ipc_imem->nr_of_channels; i++) {
+		channel = &ipc_imem->channels[i];
+		if (channel->ctype == ctype && channel->index == index)
+			break;
+	}
+
+	if (i >= ipc_imem->nr_of_channels) {
+		dev_dbg(ipc_imem->dev,
+			"no channel definition for index=%d ctype=%d", index,
+			ctype);
+		return -1;
+	}
+
+	if (ipc_imem->channels[i].state != IMEM_CHANNEL_FREE) {
+		dev_dbg(ipc_imem->dev, "channel is in use");
+		return -1;
+	}
+
+	/* Initialize the reserved channel element. */
+	channel->sio_id = index;
+	/* set vlan id for dss channels and the IP MUX channel */
+	if (channel->ctype == IPC_CTYPE_WWAN &&
+	    ((index > 256 && index < 512) ||
+	     index == IPC_MEM_MUX_IP_CH_VLAN_ID))
+		channel->vlan_id = index;
+
+	channel->state = IMEM_CHANNEL_RESERVED;
+
+	return i;
+}
+
+void imem_channel_init(struct iosm_imem *ipc_imem, enum ipc_ctype ctype,
+		       struct ipc_chnl_cfg chnl_cfg, u32 irq_moderation)
+{
+	struct ipc_mem_channel *channel;
+
+	if (chnl_cfg.ul_pipe >= IPC_MEM_MAX_PIPES ||
+	    chnl_cfg.dl_pipe >= IPC_MEM_MAX_PIPES) {
+		dev_err(ipc_imem->dev, "invalid pipe: ul_pipe=%d, dl_pipe=%d",
+			chnl_cfg.ul_pipe, chnl_cfg.dl_pipe);
+		return;
+	}
+
+	if (ipc_imem->nr_of_channels >= IPC_MEM_MAX_CHANNELS) {
+		dev_err(ipc_imem->dev, "too many channels");
+		return;
+	}
+
+	channel = &ipc_imem->channels[ipc_imem->nr_of_channels];
+	channel->channel_id = ipc_imem->nr_of_channels;
+	channel->ctype = ctype;
+	channel->index = chnl_cfg.id;
+	channel->sio_id = -1;
+	channel->net_err_count = 0;
+	channel->state = IMEM_CHANNEL_FREE;
+	ipc_imem->nr_of_channels++;
+
+	ipc_imem_channel_update(ipc_imem, channel->channel_id, chnl_cfg,
+				irq_moderation);
+
+	skb_queue_head_init(&channel->ul_list);
+
+	init_completion(&channel->ul_sem);
+}
+
+void ipc_imem_channel_update(struct iosm_imem *ipc_imem, int id,
+			     struct ipc_chnl_cfg chnl_cfg, u32 irq_moderation)
+{
+	struct ipc_mem_channel *channel;
+
+	if (id < 0 || id >= ipc_imem->nr_of_channels) {
+		dev_err(ipc_imem->dev, "invalid channel id %d", id);
+		return;
+	}
+
+	channel = &ipc_imem->channels[id];
+
+	if (channel->state != IMEM_CHANNEL_FREE &&
+	    channel->state != IMEM_CHANNEL_RESERVED) {
+		dev_err(ipc_imem->dev, "invalid channel state %d",
+			channel->state);
+		return;
+	}
+
+	channel->ul_pipe.nr_of_entries = chnl_cfg.ul_nr_of_entries;
+	channel->ul_pipe.pipe_nr = chnl_cfg.ul_pipe;
+	channel->ul_pipe.is_open = false;
+	channel->ul_pipe.irq = IPC_UL_PIPE_IRQ_VECTOR;
+	channel->ul_pipe.channel = channel;
+	channel->ul_pipe.dir = IPC_MEM_DIR_UL;
+	channel->ul_pipe.accumulation_backoff = chnl_cfg.accumulation_backoff;
+	channel->ul_pipe.irq_moderation = irq_moderation;
+	channel->ul_pipe.buf_size = 0;
+
+	channel->dl_pipe.nr_of_entries = chnl_cfg.dl_nr_of_entries;
+	channel->dl_pipe.pipe_nr = chnl_cfg.dl_pipe;
+	channel->dl_pipe.is_open = false;
+	channel->dl_pipe.irq = IPC_DL_PIPE_IRQ_VECTOR;
+	channel->dl_pipe.channel = channel;
+	channel->dl_pipe.dir = IPC_MEM_DIR_DL;
+	channel->dl_pipe.accumulation_backoff = chnl_cfg.accumulation_backoff;
+	channel->dl_pipe.irq_moderation = irq_moderation;
+	channel->dl_pipe.buf_size = chnl_cfg.dl_buf_size;
+}
+
+/* reset volatile pipe content for all channels */
+static void imem_channel_reset(struct iosm_imem *ipc_imem)
+{
+	int i;
+
+	for (i = 0; i < ipc_imem->nr_of_channels; i++) {
+		struct ipc_mem_channel *channel;
+
+		channel = &ipc_imem->channels[i];
+
+		imem_pipe_cleanup(ipc_imem, &channel->dl_pipe);
+		imem_pipe_cleanup(ipc_imem, &channel->ul_pipe);
+
+		imem_channel_free(channel);
+	}
+}
+
+void imem_pipe_cleanup(struct iosm_imem *ipc_imem, struct ipc_pipe *pipe)
+{
+	struct sk_buff *skb;
+
+	/* Force pipe to closed state also when not explicitly closed through
+	 * imem_pipe_close()
+	 */
+	pipe->is_open = false;
+
+	/* Empty the uplink skb accumulator. */
+	while ((skb = skb_dequeue(&pipe->channel->ul_list)))
+		ipc_pcie_kfree_skb(ipc_imem->pcie, skb);
+
+	ipc_protocol_pipe_cleanup(ipc_imem->ipc_protocol, pipe);
+}
+
+/* Send IPC protocol uninit to the modem when Link is active. */
+static void ipc_imem_device_ipc_uninit(struct iosm_imem *ipc_imem)
+{
+	int timeout = IPC_MODEM_UNINIT_TIMEOUT_MS;
+	enum ipc_mem_device_ipc_state ipc_state;
+
+	/* If the PCIe link is up, set the modem to IPC_UNINIT;
+	 * otherwise skip it because the PCIe link is down.
+	 */
+	if (ipc_pcie_check_data_link_active(ipc_imem->pcie)) {
+		/* set modem to UNINIT
+		 * (in case we want to reload the AP driver without resetting
+		 * the modem)
+		 */
+		ipc_doorbell_fire(ipc_imem->pcie, IPC_DOORBELL_IRQ_IPC,
+				  IPC_MEM_DEVICE_IPC_UNINIT);
+		ipc_state = ipc_mmio_get_ipc_state(ipc_imem->mmio);
+
+		/* Wait for maximum 30ms to allow the Modem to uninitialize the
+		 * protocol.
+		 */
+		while ((ipc_state <= IPC_MEM_DEVICE_IPC_DONT_CARE) &&
+		       (ipc_state != IPC_MEM_DEVICE_IPC_UNINIT) &&
+		       (timeout > 0)) {
+			usleep_range(1000, 1250);
+			timeout--;
+			ipc_state = ipc_mmio_get_ipc_state(ipc_imem->mmio);
+		}
+	}
+}
+
+void ipc_imem_cleanup(struct iosm_imem *ipc_imem)
+{
+	ipc_imem->phase = IPC_P_OFF_REQ;
+
+	/* forward MDM_NOT_READY to listeners */
+	ipc_uevent_send(ipc_imem->dev, UEVENT_MDM_NOT_READY);
+
+	ipc_imem_device_ipc_uninit(ipc_imem);
+
+	hrtimer_cancel(&ipc_imem->td_alloc_timer);
+
+	hrtimer_cancel(&ipc_imem->tdupdate_timer);
+
+	hrtimer_cancel(&ipc_imem->fast_update_timer);
+
+	hrtimer_cancel(&ipc_imem->startup_timer);
+
+	/* cancel the pending run_state_worker work */
+	cancel_work_sync(&ipc_imem->run_state_worker);
+
+	ipc_mux_deinit(ipc_imem->mux);
+
+	ipc_wwan_deinit(ipc_imem->wwan);
+
+	imem_channel_reset(ipc_imem);
+
+	ipc_mbim_deinit(ipc_imem->mbim);
+
+	ipc_protocol_deinit(ipc_imem->ipc_protocol);
+
+	tasklet_kill(ipc_imem->ipc_tasklet);
+	kfree(ipc_imem->ipc_tasklet);
+	ipc_imem->ipc_tasklet = NULL;
+
+	ipc_task_queue_deinit(ipc_imem->ipc_task);
+
+	kfree(ipc_imem->mmio);
+
+	ipc_imem->phase = IPC_P_OFF;
+
+	ipc_imem->pcie = NULL;
+	ipc_imem->dev = NULL;
+}
+
+/* After CP has unblocked the PCIe link, save the start address of the doorbell
+ * scratchpad and prepare the shared memory region. If the flashing to RAM
+ * procedure shall be executed, copy the chip information from the doorbell
+ * scratchpad to the application buffer and wake up the flash app.
+ */
+static int ipc_imem_config(struct iosm_imem *ipc_imem)
+{
+	enum ipc_phase phase;
+
+	/* Initialize the semaphore for the blocking read UL/DL transfer. */
+	init_completion(&ipc_imem->ul_pend_sem);
+
+	init_completion(&ipc_imem->dl_pend_sem);
+
+	/* clear internal flags */
+	ipc_imem->ipc_status = IPC_MEM_DEVICE_IPC_UNINIT;
+	ipc_imem->enter_runtime = 0;
+
+	phase = imem_ap_phase_update(ipc_imem);
+
+	/* Either CP shall be in the power off or power on phase. */
+	switch (phase) {
+	case IPC_P_ROM:
+		/* poll execution stage (for delayed start, e.g. NAND) */
+		imem_hrtimer_start(ipc_imem, &ipc_imem->startup_timer,
+				   1000 * 1000);
+		return 0;
+
+	case IPC_P_PSI:
+	case IPC_P_EBL:
+	case IPC_P_RUN:
+		/* The initial IPC state is IPC_MEM_DEVICE_IPC_UNINIT. */
+		ipc_imem->ipc_requested_state = IPC_MEM_DEVICE_IPC_UNINIT;
+
+		/* Verify the expected initial state. */
+		if (ipc_imem->ipc_requested_state ==
+		    ipc_mmio_get_ipc_state(ipc_imem->mmio)) {
+			imem_ipc_init_check(ipc_imem);
+
+			return 0;
+		}
+		dev_err(ipc_imem->dev,
+			"ipc_status(%d) != IPC_MEM_DEVICE_IPC_UNINIT",
+			ipc_mmio_get_ipc_state(ipc_imem->mmio));
+		break;
+	case IPC_P_CRASH:
+	case IPC_P_CD_READY:
+		dev_dbg(ipc_imem->dev,
+			"Modem is in phase %d, reset Modem to collect CD",
+			phase);
+		return 0;
+	default:
+		dev_err(ipc_imem->dev, "unexpected operation phase %d", phase);
+		break;
+	}
+
+	complete(&ipc_imem->dl_pend_sem);
+	complete(&ipc_imem->ul_pend_sem);
+	ipc_imem->phase = IPC_P_OFF;
+	return -1;
+}
+
+/* Pass the dev ptr to the shared memory driver and request the entry points */
+struct iosm_imem *ipc_imem_init(struct iosm_pcie *pcie, unsigned int device_id,
+				void __iomem *mmio, struct device *dev)
+{
+	struct iosm_imem *ipc_imem = kzalloc(sizeof(*pcie->imem), GFP_KERNEL);
+
+	struct ipc_chnl_cfg chnl_cfg_flash = { 0 };
+	struct ipc_chnl_cfg chnl_cfg_mbim = { 0 };
+
+	char name_flash[32] = { 0 }; /* Holds Flash device name */
+	char name_mbim[32] = { 0 }; /* Holds mbim device name */
+
+	if (!ipc_imem)
+		return NULL;
+
+	/* Save the device address. */
+	ipc_imem->pcie = pcie;
+	ipc_imem->dev = dev;
+
+	ipc_imem->pci_device_id = device_id;
+
+	ipc_imem->ev_sio_write_pending = false;
+	ipc_imem->cp_version = 0;
+	ipc_imem->device_sleep = IPC_HOST_SLEEP_ENTER_SLEEP;
+
+	/* Reset the flash channel id. */
+	ipc_imem->flash_channel_id = -1;
+
+	/* Reset the max number of configured channels */
+	ipc_imem->nr_of_channels = 0;
+
+	/* allocate IPC MMIO */
+	ipc_imem->mmio = ipc_mmio_init(mmio, ipc_imem->dev);
+	if (!ipc_imem->mmio) {
+		dev_err(ipc_imem->dev, "failed to initialize mmio region");
+		goto mmio_init_fail;
+	}
+
+	ipc_imem->ipc_tasklet =
+		kzalloc(sizeof(*ipc_imem->ipc_tasklet), GFP_KERNEL);
+	if (!ipc_imem->ipc_tasklet)
+		goto ipc_task_init_fail;
+
+	/* Create tasklet for event handling */
+	ipc_imem->ipc_task =
+		ipc_task_queue_init(ipc_imem->ipc_tasklet, ipc_imem->dev);
+
+	if (!ipc_imem->ipc_task)
+		goto ipc_task_init_fail;
+
+	INIT_WORK(&ipc_imem->run_state_worker, ipc_imem_run_state_worker);
+
+	ipc_imem->ipc_protocol = ipc_protocol_init(ipc_imem);
+
+	if (!ipc_imem->ipc_protocol)
+		goto protocol_init_fail;
+
+	/* The phase is set to power off. */
+	ipc_imem->phase = IPC_P_OFF;
+
+	/* Initialize flash channel.
+	 * The actual pipe configuration will be set once PSI has executed
+	 */
+	imem_channel_init(ipc_imem, IPC_CTYPE_FLASH, chnl_cfg_flash, 0);
+
+	snprintf(name_flash, sizeof(name_flash) - 1, "iat");
+
+	ipc_imem->sio = ipc_sio_init(ipc_imem, name_flash);
+
+	if (!ipc_imem->sio)
+		goto sio_init_fail;
+
+	if (!ipc_chnl_cfg_get(&chnl_cfg_mbim, IPC_MEM_MBIM_CTRL_CH_ID,
+			      MUX_UNKNOWN)) {
+		imem_channel_init(ipc_imem, IPC_CTYPE_MBIM, chnl_cfg_mbim,
+				  IRQ_MOD_OFF);
+	}
+
+	snprintf(name_mbim, sizeof(name_mbim) - 1, "wwanctrl");
+
+	ipc_imem->mbim = ipc_mbim_init(ipc_imem, name_mbim);
+
+	if (!ipc_imem->mbim) {
+		imem_channel_reset(ipc_imem);
+		goto mbim_init_fail;
+	}
+
+	imem_hrtimer_init(&ipc_imem->startup_timer, imem_startup_timer_cb);
+
+	imem_hrtimer_init(&ipc_imem->tdupdate_timer, imem_td_update_timer_cb);
+
+	imem_hrtimer_init(&ipc_imem->fast_update_timer,
+			  imem_fast_update_timer_cb);
+
+	imem_hrtimer_init(&ipc_imem->td_alloc_timer, imem_td_alloc_timer_cb);
+
+	if (ipc_imem_config(ipc_imem)) {
+		dev_err(ipc_imem->dev, "failed to initialize the imem");
+		goto imem_config_fail;
+	}
+
+	return ipc_imem;
+
+imem_config_fail:
+	hrtimer_cancel(&ipc_imem->td_alloc_timer);
+	hrtimer_cancel(&ipc_imem->fast_update_timer);
+	hrtimer_cancel(&ipc_imem->tdupdate_timer);
+	hrtimer_cancel(&ipc_imem->startup_timer);
+	ipc_mbim_deinit(ipc_imem->mbim);
+mbim_init_fail:
+	ipc_sio_deinit(ipc_imem->sio);
+sio_init_fail:
+	imem_channel_reset(ipc_imem);
+	ipc_protocol_deinit(ipc_imem->ipc_protocol);
+protocol_init_fail:
+	cancel_work_sync(&ipc_imem->run_state_worker);
+	ipc_task_queue_deinit(ipc_imem->ipc_task);
+ipc_task_init_fail:
+	kfree(ipc_imem->ipc_tasklet);
+	ipc_imem->ipc_tasklet = NULL;
+	kfree(ipc_imem->mmio);
+mmio_init_fail:
+	kfree(ipc_imem);
+	return NULL;
+}
+
+void ipc_imem_irq_process(struct iosm_imem *ipc_imem, int irq)
+{
+	/* Debounce IPC_EV_IRQ. */
+	if (ipc_imem && ipc_imem->ipc_task && !ipc_imem->ev_irq_pending[irq]) {
+		ipc_imem->ev_irq_pending[irq] = true;
+		ipc_task_queue_send_task(ipc_imem, imem_tq_irq_cb, irq, NULL, 0,
+					 false);
+	}
+}
+
+void imem_td_update_timer_suspend(struct iosm_imem *ipc_imem, bool suspend)
+{
+	ipc_imem->td_update_timer_suspended = suspend;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem.h b/drivers/net/wwan/iosm/iosm_ipc_imem.h
new file mode 100644
index 000000000000..bd516e968247
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_imem.h
@@ -0,0 +1,606 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_IMEM_H
+#define IOSM_IPC_IMEM_H
+
+#include <linux/skbuff.h>
+#include <linux/types.h>
+
+#include "iosm_ipc_mmio.h"
+#include "iosm_ipc_pcie.h"
+#include "iosm_ipc_uevent.h"
+#include "iosm_ipc_wwan.h"
+
+struct ipc_chnl_cfg;
+
+/* IRQ moderation in usec */
+#define IRQ_MOD_OFF 0
+#define IRQ_MOD_NET 1000
+#define IRQ_MOD_TRC 4000
+
+/* Either the PSI image is accepted by CP or the suspended flash tool is woken,
+ * informed that the CP ROM driver is not ready to process the PSI image.
+ * unit : milliseconds
+ */
+#define IPC_PSI_TRANSFER_TIMEOUT 3000
+
+/* Number of 20 ms polling iterations to wait for the modem to boot up to the
+ * IPC_MEM_DEVICE_IPC_INIT state.
+ * unit : polling loops of 20 ms each (500 * msleep(20) = 10 s total)
+ */
+#define IPC_MODEM_BOOT_TIMEOUT 500
+
+/* Wait timeout for ipc status reflects IPC_MEM_DEVICE_IPC_UNINIT
+ * unit : milliseconds
+ */
+#define IPC_MODEM_UNINIT_TIMEOUT_MS 30
+
+/* Pending time for processing data.
+ * unit : milliseconds
+ */
+#define IPC_PEND_DATA_TIMEOUT 500
+
+/* The timeout in milliseconds for application to wait for remote time. */
+#define IPC_REMOTE_TS_TIMEOUT_MS 10
+
+/* Timeout for TD allocation retry.
+ * unit : milliseconds
+ */
+#define IPC_TD_ALLOC_TIMER_PERIOD_MS 100
+
+/* Channel Index for SW download */
+#define IPC_MEM_FLASH_CH_ID 0
+
+/* Control Channel for MBIM */
+#define IPC_MEM_MBIM_CTRL_CH_ID 1
+
+/* Host sleep target & state. */
+
+/* Host sleep target is host */
+#define IPC_HOST_SLEEP_HOST 0
+
+/* Host sleep target is device */
+#define IPC_HOST_SLEEP_DEVICE 1
+
+/* Sleep message, target host: AP enters sleep / target device: CP is
+ * allowed to enter sleep and shall use the device sleep protocol
+ */
+#define IPC_HOST_SLEEP_ENTER_SLEEP 0
+
+/* Sleep_message, target host: AP exits sleep / target device: CP is
+ * NOT allowed to enter sleep
+ */
+#define IPC_HOST_SLEEP_EXIT_SLEEP 1
+
+#define IMEM_IRQ_DONT_CARE (-1)
+
+#define IPC_MEM_MAX_CHANNELS 8
+
+#define IPC_MEM_MUX_IP_SESSION_ENTRIES 8
+
+#define IPC_MEM_MUX_IP_CH_VLAN_ID (-1)
+
+#define TD_UPDATE_DEFAULT_TIMEOUT_USEC 1900
+
+#define FORCE_UPDATE_DEFAULT_TIMEOUT_USEC 500
+
+/* Sleep_message, target host: not applicable  / target device: CP is
+ * allowed to enter sleep and shall NOT use the device sleep protocol
+ */
+#define IPC_HOST_SLEEP_ENTER_SLEEP_NO_PROTOCOL 2
+
+/* in_band_crash_signal IPC_MEM_INBAND_CRASH_SIG
+ * Modem crash notification configuration. If this value is non-zero then
+ * FEATURE_SET message will be sent to the Modem; as a result the Modem will
+ * signal a crash via the Execution Stage register. If this value is zero, the
+ * Modem will use the out-of-band method to notify about its crash.
+ */
+#define IPC_MEM_INBAND_CRASH_SIG 1
+
+/* Extra headroom to be allocated for DL SKBs to allow addition of Ethernet
+ * header
+ */
+#define IPC_MEM_DL_ETH_OFFSET 16
+#define WAIT_FOR_TIMEOUT(sem, timeout)                                         \
+	wait_for_completion_interruptible_timeout((sem),                       \
+						  msecs_to_jiffies(timeout))
+
+#define IPC_CB(skb) ((struct ipc_skb_cb *)((skb)->cb))
+
+/* List of the supported UL/DL pipes. */
+enum ipc_mem_pipes {
+	IPC_MEM_PIPE_0 = 0,
+	IPC_MEM_PIPE_1,
+	IPC_MEM_PIPE_2,
+	IPC_MEM_PIPE_3,
+	IPC_MEM_PIPE_4,
+	IPC_MEM_PIPE_5,
+	IPC_MEM_PIPE_6,
+	IPC_MEM_PIPE_7,
+	IPC_MEM_PIPE_8,
+	IPC_MEM_PIPE_9,
+	IPC_MEM_PIPE_10,
+	IPC_MEM_PIPE_11,
+	IPC_MEM_PIPE_12,
+	IPC_MEM_PIPE_13,
+	IPC_MEM_PIPE_14,
+	IPC_MEM_PIPE_15,
+	IPC_MEM_PIPE_16,
+	IPC_MEM_PIPE_17,
+	IPC_MEM_PIPE_18,
+	IPC_MEM_PIPE_19,
+	IPC_MEM_PIPE_20,
+	IPC_MEM_PIPE_21,
+	IPC_MEM_PIPE_22,
+	IPC_MEM_PIPE_23,
+	IPC_MEM_MAX_PIPES
+};
+
+/* Enum defining channel states. */
+enum ipc_channel_state {
+	IMEM_CHANNEL_FREE,
+	IMEM_CHANNEL_RESERVED,
+	IMEM_CHANNEL_ACTIVE,
+	IMEM_CHANNEL_CLOSING,
+};
+
+/* Time Unit */
+enum ipc_time_unit {
+	IPC_SEC = 0,
+	IPC_MILLI_SEC = 1,
+	IPC_MICRO_SEC = 2,
+	IPC_NANO_SEC = 3,
+	IPC_PICO_SEC = 4,
+	IPC_FEMTO_SEC = 5,
+	IPC_ATTO_SEC = 6,
+};
+
+/**
+ * enum ipc_ctype - Enum defining supported channel type needed to control the
+ *		    cp or to transfer IP packets.
+ * @IPC_CTYPE_FLASH:		Used for flashing to RAM
+ * @IPC_CTYPE_WWAN:		Used for Control and IP data
+ * @IPC_CTYPE_MBIM:		Used for MBIM Control
+ */
+enum ipc_ctype {
+	IPC_CTYPE_FLASH,
+	IPC_CTYPE_WWAN,
+	IPC_CTYPE_MBIM,
+};
+
+/* Pipe direction. */
+enum ipc_mem_pipe_dir {
+	IPC_MEM_DIR_UL,
+	IPC_MEM_DIR_DL,
+};
+
+/* HP update identifier. To be used as data for ipc_cp_irq_hpda_update() */
+enum ipc_hp_identifier {
+	IPC_HP_MR = 0,
+	IPC_HP_PM_TRIGGER,
+	IPC_HP_WAKEUP_SPEC_TMR,
+	IPC_HP_TD_UPD_TMR_START,
+	IPC_HP_TD_UPD_TMR,
+	IPC_HP_FAST_TD_UPD_TMR,
+	IPC_HP_UL_WRITE_TD,
+	IPC_HP_DL_PROCESS,
+	IPC_HP_NET_CHANNEL_INIT,
+	IPC_HP_SIO_OPEN,
+};
+
+/**
+ * struct ipc_pipe - Structure for Pipe.
+ * @tdr_start:			Ipc private protocol Transfer Descriptor Ring
+ * @channel:			Id of the sio device, set by imem_sio_open,
+ *				needed to pass DL char to the user terminal
+ * @skbr_start:			Circular buffer for skbuf and the buffer
+ *				reference in a tdr_start entry.
+ * @phy_tdr_start:		Transfer descriptor start address
+ * @old_head:			last head pointer reported to CP.
+ * @old_tail:			AP read position before CP moves the read
+ *				position to write/head. If CP has consumed the
+ *				buffers, AP has to free the skbuf starting at
+ *				tdr_start[old_tail].
+ * @nr_of_entries:		Number of elements of skb_start and tdr_start.
+ * @max_nr_of_queued_entries:	Maximum number of queued entries in TDR
+ * @accumulation_backoff:	Accumulation in usec for accumulation
+ *				backoff (0 = no acc backoff)
+ * @irq_moderation:		timer in usec for irq_moderation
+ *				(0=no irq moderation)
+ * @pipe_nr:			Pipe identification number
+ * @irq:			Interrupt vector
+ * @dir:			Direction of data stream in pipe
+ * @td_tag:			Unique tag of the buffer queued
+ * @buf_size:			Buffer size (in bytes) for preallocated
+ *				buffers (for DL pipes)
+ * @nr_of_queued_entries:	Queued number of entries
+ * @is_open:			Check for open pipe status
+ */
+struct ipc_pipe {
+	struct ipc_protocol_td *tdr_start;
+	struct ipc_mem_channel *channel;
+	struct sk_buff **skbr_start;
+	dma_addr_t phy_tdr_start;
+	u32 old_head;
+	u32 old_tail;
+	u32 nr_of_entries;
+	u32 max_nr_of_queued_entries;
+	u32 accumulation_backoff;
+	u32 irq_moderation;
+	u32 pipe_nr;
+	u32 irq;
+	enum ipc_mem_pipe_dir dir;
+	u32 td_tag;
+	u32 buf_size;
+	u16 nr_of_queued_entries;
+	u8 is_open : 1;
+};
+
+/**
+ * struct ipc_mem_channel - Structure for Channel.
+ * @channel_id:		Instance of the channel list and is returned to the user
+ *			at the end of the open operation.
+ * @ctype:		Control or netif channel.
+ * @index:		unique index per ctype
+ * @ul_pipe:		Uplink pipe object
+ * @dl_pipe:		Downlink pipe object
+ * @sio_id:		Id of the sio device, set by imem_sio_open, needed to
+ *			pass downlink characters to user terminal.
+ * @vlan_id:		VLAN ID
+ * @net_err_count:	Number of downlink errors returned by ipc_wwan_receive
+ *			interface at the entry point of the IP stack.
+ * @state:		Free, reserved or busy (in use).
+ * @ul_sem:		Needed for the blocking write or uplink transfer.
+ * @ul_list:		Uplink accumulator which is filled by the uplink
+ *			char app or IP stack. The socket buffer pointers are
+ *			added to the descriptor list in the kthread context.
+ */
+struct ipc_mem_channel {
+	int channel_id;
+	enum ipc_ctype ctype;
+	int index;
+	struct ipc_pipe ul_pipe;
+	struct ipc_pipe dl_pipe;
+	int sio_id;
+	int vlan_id;
+	u32 net_err_count;
+	enum ipc_channel_state state;
+	struct completion ul_sem;
+	struct sk_buff_head ul_list;
+};
+
+/**
+ * enum ipc_phase - Different AP and CP phases.
+ *		    The enums defined after "IPC_P_ROM" and before
+ *		    "IPC_P_RUN" indicate the operating states in which CP can
+ *		    respond to any requests. This shall be taken into
+ *		    consideration when introducing a new phase.
+ * @IPC_P_OFF:		On host PC, the PCIe device link settings are known
+ *			about the combined power on. PC is running, the driver
+ *			is loaded and CP is in power off mode. The PCIe bus
+ *			driver sets the device power mode to D3hot. In this
+ *			phase the driver polls the device until the device is
+ *			in the power-on state and signals the power mode D0.
+ * @IPC_P_OFF_REQ:	The intermediate phase while the cleanup activity is
+ *			in progress.
+ * @IPC_P_CRASH:	The phase indicating CP crash
+ * @IPC_P_CD_READY:	The phase indicating CP core dump is ready
+ * @IPC_P_ROM:		After power on, CP starts in ROM mode and the IPC ROM
+ *			driver is waiting 150 ms for the AP active notification
+ *			saved in the PCI link status register.
+ * @IPC_P_PSI:		Primary signed image download phase
+ * @IPC_P_EBL:		Extended bootloader phase
+ * @IPC_P_RUN:		The phase after flashing to RAM is the RUNTIME phase.
+ */
+enum ipc_phase {
+	IPC_P_OFF,
+	IPC_P_OFF_REQ,
+	IPC_P_CRASH,
+	IPC_P_CD_READY,
+	IPC_P_ROM,
+	IPC_P_PSI,
+	IPC_P_EBL,
+	IPC_P_RUN,
+};
+
+/**
+ * struct iosm_imem - Current state of the IPC shared memory.
+ * @mmio:			mmio instance to access CP MMIO area /
+ *				doorbell scratchpad.
+ * @ipc_protocol:		IPC Protocol instance
+ * @ipc_tasklet:		Tasklet for serialized work offload
+ *				from interrupts and OS callbacks
+ * @ipc_task:			Task for entry into ipc task queue
+ * @wwan:			WWAN device pointer
+ * @mux:			IP Data multiplexing state.
+ * @sio:			IPC SIO data structure pointer
+ * @mbim:			IPC MBIM data structure pointer
+ * @pcie:			IPC PCIe
+ * @dev:			Pointer to device structure
+ * @flash_channel_id:		Reserved channel id for flashing to RAM.
+ * @ipc_requested_state:	Expected IPC state on CP.
+ * @channels:			Channel list with UL/DL pipe pairs.
+ * @ipc_status:			local ipc_status
+ * @nr_of_channels:		number of configured channels
+ * @startup_timer:		startup timer for NAND support.
+ * @hrtimer_period:		Hr timer period
+ * @tdupdate_timer:		Delay the TD update doorbell.
+ * @fast_update_timer:		forced head pointer update delay timer.
+ * @td_alloc_timer:		Timer for DL pipe TD allocation retry
+ * @rom_exit_code:		Mapped boot rom exit code.
+ * @enter_runtime:		1 means the transition to runtime phase was
+ *				executed.
+ * @ul_pend_sem:		Semaphore to wait/complete of UL TDs
+ *				before closing pipe.
+ * @app_notify_ul_pend:		Signal app if UL TD is pending
+ * @dl_pend_sem:		Semaphore to wait/complete of DL TDs
+ *				before closing pipe.
+ * @app_notify_dl_pend:		Signal app if DL TD is pending
+ * @phase:			Operating phase like runtime.
+ * @pci_device_id:		Device ID
+ * @cp_version:			CP version
+ * @device_sleep:		Device sleep state
+ * @run_state_worker:		Pointer to worker component for device
+ *				setup operations to be called when modem
+ *				reaches RUN state
+ * @ev_irq_pending:		0 means inform the IPC tasklet to
+ *				process the irq actions.
+ * @td_update_timer_suspended:	If true, the TD update timer is suspended
+ * @ev_sio_write_pending:	0 means inform the IPC tasklet to pass
+ *				the accumulated uplink buffers to CP.
+ * @ev_mux_net_transmit_pending:0 means inform the IPC tasklet to pass
+ *				the accumulated uplink network packets to CP.
+ * @reset_det_n:		Reset detect flag
+ * @pcie_wake_n:		Pcie wake flag
+ */
+struct iosm_imem {
+	struct iosm_mmio *mmio;
+	struct iosm_protocol *ipc_protocol;
+	struct tasklet_struct *ipc_tasklet;
+	struct ipc_task_queue *ipc_task;
+	struct iosm_wwan *wwan;
+	struct iosm_mux *mux;
+	struct iosm_sio *sio;
+	struct iosm_sio *mbim;
+	struct iosm_pcie *pcie;
+	struct device *dev;
+	int flash_channel_id;
+	enum ipc_mem_device_ipc_state ipc_requested_state;
+	struct ipc_mem_channel channels[IPC_MEM_MAX_CHANNELS];
+	u32 ipc_status;
+	u32 nr_of_channels;
+	struct hrtimer startup_timer;
+	ktime_t hrtimer_period;
+	struct hrtimer tdupdate_timer;
+	struct hrtimer fast_update_timer;
+	struct hrtimer td_alloc_timer;
+	enum rom_exit_code rom_exit_code;
+	u32 enter_runtime;
+	struct completion ul_pend_sem;
+	u32 app_notify_ul_pend;
+	struct completion dl_pend_sem;
+	u32 app_notify_dl_pend;
+	enum ipc_phase phase;
+	u16 pci_device_id;
+	int cp_version;
+	int device_sleep;
+	struct work_struct run_state_worker;
+	u8 ev_irq_pending[IPC_IRQ_VECTORS];
+	u8 td_update_timer_suspended : 1;
+	u8 ev_sio_write_pending : 1;
+	u8 ev_mux_net_transmit_pending : 1;
+	u8 reset_det_n : 1;
+	u8 pcie_wake_n : 1;
+};
+
+/**
+ * ipc_imem_init - Install the shared memory system
+ * @pcie:	Pointer to core driver data-struct
+ * @device_id:	PCI device ID
+ * @mmio:	Pointer to the mmio area
+ * @dev:	Pointer to device structure
+ *
+ * Returns:  Initialized imem pointer on success else NULL
+ */
+struct iosm_imem *ipc_imem_init(struct iosm_pcie *pcie, unsigned int device_id,
+				void __iomem *mmio, struct device *dev);
+
+/**
+ * ipc_imem_pm_suspend - The HAL shall ask the shared memory layer
+ *			 whether D3 is allowed.
+ * @ipc_imem:	Pointer to imem data-struct
+ *
+ * Returns: 0 on success, else a negative value
+ */
+int ipc_imem_pm_suspend(struct iosm_imem *ipc_imem);
+
+/**
+ * ipc_imem_pm_resume - The HAL shall inform the shared memory layer
+ *			that the device is active.
+ * @ipc_imem:	Pointer to imem data-struct
+ */
+void ipc_imem_pm_resume(struct iosm_imem *ipc_imem);
+
+/**
+ * ipc_imem_cleanup -	Inform CP and free the shared memory resources.
+ * @ipc_imem:	Pointer to imem data-struct
+ */
+void ipc_imem_cleanup(struct iosm_imem *ipc_imem);
+
+/**
+ * ipc_imem_irq_process - Shift the IRQ actions to the IPC thread.
+ * @ipc_imem:	Pointer to imem data-struct
+ * @irq:	Irq number
+ */
+void ipc_imem_irq_process(struct iosm_imem *ipc_imem, int irq);
+
+/**
+ * imem_get_device_sleep_state - Get the device sleep state value.
+ * @ipc_imem:	Pointer to imem instance
+ *
+ * Returns: device sleep state
+ */
+int imem_get_device_sleep_state(struct iosm_imem *ipc_imem);
+
+/**
+ * imem_td_update_timer_suspend - Updates the TD Update Timer suspend flag.
+ * @ipc_imem:	Pointer to imem data-struct
+ * @suspend:	Flag to update. If TRUE then HP update doorbell is triggered to
+ *		device without any wait. If FALSE then HP update doorbell is
+ *		delayed until timeout.
+ */
+void imem_td_update_timer_suspend(struct iosm_imem *ipc_imem, bool suspend);
+
+/**
+ * imem_channel_close - Release the channel resources.
+ * @ipc_imem:		Pointer to imem data-struct
+ * @channel_id:		Channel ID to be cleaned up.
+ */
+void imem_channel_close(struct iosm_imem *ipc_imem, int channel_id);
+
+/**
+ * imem_channel_alloc - Reserves a channel
+ * @ipc_imem:	Pointer to imem data-struct
+ * @index:	ID to lookup from the preallocated list.
+ * @ctype:	Channel type.
+ *
+ * Returns: Index on success and -1 on failure.
+ */
+int imem_channel_alloc(struct iosm_imem *ipc_imem, int index,
+		       enum ipc_ctype ctype);
+
+/**
+ * imem_channel_open - Establish the pipes.
+ * @ipc_imem:		Pointer to imem data-struct
+ * @channel_id:		Channel ID returned during alloc.
+ * @db_id:		Doorbell ID for trigger identifier.
+ *
+ * Returns: Pointer of ipc_mem_channel on success and NULL on failure.
+ */
+struct ipc_mem_channel *imem_channel_open(struct iosm_imem *ipc_imem,
+					  int channel_id, u32 db_id);
+
+/**
+ * imem_td_update_timer_start - Starts the TD Update Timer if not running.
+ * @ipc_imem:	Pointer to imem data-struct
+ */
+void imem_td_update_timer_start(struct iosm_imem *ipc_imem);
+
+/**
+ * imem_hrtimer_start - Starts the hr Timer if not running.
+ * @ipc_imem:	Pointer to imem data-struct
+ * @hr_timer:	Pointer to hrtimer instance
+ * @period:	Timer value
+ */
+void imem_hrtimer_start(struct iosm_imem *ipc_imem, struct hrtimer *hr_timer,
+			unsigned long period);
+
+/**
+ * imem_ul_write_td - Pass the channel UL list to protocol layer for TD
+ *		      preparation and sending them to the device.
+ * @ipc_imem:	Pointer to imem data-struct
+ *
+ * Returns: TRUE if HP Doorbell trigger is pending, FALSE otherwise.
+ */
+bool imem_ul_write_td(struct iosm_imem *ipc_imem);
+
+/**
+ * imem_ul_send - Dequeue SKB from channel list and start with
+ *		  the uplink transfer. If HP Doorbell is pending to be
+ *		  triggered then starts the TD Update Timer.
+ * @ipc_imem:	Pointer to imem data-struct
+ */
+void imem_ul_send(struct iosm_imem *ipc_imem);
+
+/**
+ * ipc_imem_channel_update - Set or modify pipe config of an existing channel
+ * @ipc_imem:		Pointer to imem data-struct
+ * @id:			Channel config index
+ * @chnl_cfg:		Channel config struct
+ * @irq_moderation:	Timer in usec for irq_moderation
+ */
+void ipc_imem_channel_update(struct iosm_imem *ipc_imem, int id,
+			     struct ipc_chnl_cfg chnl_cfg, u32 irq_moderation);
+
+/**
+ * imem_trigger_chip_info - Inform the char layer that the chip information is
+ *			    available if the flashing to RAM interworking shall
+ *			    be executed.
+ * @ipc_imem:	Pointer to imem data-struct
+ *
+ * Returns: 0 on success, -1 on failure
+ */
+int imem_trigger_chip_info(struct iosm_imem *ipc_imem);
+
+/**
+ * imem_channel_free - Free an IPC channel.
+ * @channel:	Channel to be freed
+ */
+void imem_channel_free(struct ipc_mem_channel *channel);
+
+/**
+ * imem_hrtimer_stop - Stop the hrtimer
+ * @hr_timer:	Pointer to hrtimer instance
+ */
+void imem_hrtimer_stop(struct hrtimer *hr_timer);
+
+/**
+ * imem_pipe_cleanup - Reset volatile content of the given pipe
+ * @ipc_imem:	Pointer to imem data-struct
+ * @pipe:	Pipe to be cleaned up
+ */
+void imem_pipe_cleanup(struct iosm_imem *ipc_imem, struct ipc_pipe *pipe);
+
+/**
+ * imem_pipe_close - Send msg to device to close pipe
+ * @ipc_imem:	Pointer to imem data-struct
+ * @pipe:	Pipe to be closed
+ */
+void imem_pipe_close(struct iosm_imem *ipc_imem, struct ipc_pipe *pipe);
+
+/**
+ * imem_ap_phase_update - Get the CP execution state
+ *			  and map it to the AP phase.
+ * @ipc_imem:	Pointer to imem data-struct
+ *
+ * Returns: Current ap updated phase
+ */
+enum ipc_phase imem_ap_phase_update(struct iosm_imem *ipc_imem);
+
+/**
+ * ipc_ap_phase_get_string - Return the current operation
+ *			     phase as string.
+ * @phase:	AP phase
+ *
+ * Returns: AP phase string
+ */
+const char *ipc_ap_phase_get_string(enum ipc_phase phase);
+
+/**
+ * imem_msg_send_feature_set - Send feature set message to modem
+ * @ipc_imem:		Pointer to imem data-struct
+ * @reset_enable:	0 = out-of-band, 1 = in-band-crash notification
+ * @atomic_ctx:		if disabled call in tasklet context
+ *
+ */
+void imem_msg_send_feature_set(struct iosm_imem *ipc_imem,
+			       unsigned int reset_enable, bool atomic_ctx);
+
+/**
+ * imem_ipc_init_check - Send the init event to CP, wait a certain time and set
+ *			 CP to runtime with the context information
+ * @ipc_imem:	Pointer to imem data-struct
+ */
+void imem_ipc_init_check(struct iosm_imem *ipc_imem);
+
+/**
+ * imem_channel_init -	Initialize the channel list with UL/DL pipe pairs.
+ * @ipc_imem:		Pointer to imem data-struct
+ * @ctype:		Channel type
+ * @chnl_cfg:		Channel configuration struct
+ * @irq_moderation:	Timer in usec for irq_moderation
+ */
+void imem_channel_init(struct iosm_imem *ipc_imem, enum ipc_ctype ctype,
+		       struct ipc_chnl_cfg chnl_cfg, u32 irq_moderation);
+#endif
-- 
2.12.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 05/18] net: iosm: shared memory I/O operations
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (3 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 04/18] net: iosm: shared memory IPC interface M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 06/18] net: iosm: channel configuration M Chetan Kumar
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

1) Binds a logical channel between host and device for communication.
2) Implements device-specific (char/net) I/O operations.
3) Injects the primary bootloader FW image into the modem.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_imem_ops.c | 779 ++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_imem_ops.h | 102 ++++
 2 files changed, 881 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_imem_ops.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
new file mode 100644
index 000000000000..2e2f3f43e21c
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.c
@@ -0,0 +1,779 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/delay.h>
+
+#include "iosm_ipc_chnl_cfg.h"
+#include "iosm_ipc_imem.h"
+#include "iosm_ipc_imem_ops.h"
+#include "iosm_ipc_sio.h"
+#include "iosm_ipc_task_queue.h"
+
+/* Open a packet data online channel between the network layer and CP. */
+int imem_sys_wwan_open(void *instance, int vlan_id)
+{
+	struct iosm_imem *ipc_imem = instance;
+
+	dev_dbg(ipc_imem->dev, "%s[vlan id:%d]",
+		ipc_ap_phase_get_string(ipc_imem->phase), vlan_id);
+
+	/* The network interface is only supported in the runtime phase. */
+	if (imem_ap_phase_update(ipc_imem) != IPC_P_RUN) {
+		dev_err(ipc_imem->dev, "[net:%d]: refused phase %s", vlan_id,
+			ipc_ap_phase_get_string(ipc_imem->phase));
+		return -1;
+	}
+
+	/* Check the vlan tag:
+	 * tags 1 to 8 create IP MUX channel sessions,
+	 * tags 257 to 511 create dss channels.
+	 * MUX sessions start from 0 while vlan tags start from 1,
+	 * so map the tag to if_id = vlan_id - 1.
+	 */
+	if (vlan_id > 0 && vlan_id <= ipc_mux_get_max_sessions(ipc_imem->mux)) {
+		return ipc_mux_open_session(ipc_imem->mux, vlan_id - 1);
+	} else if (vlan_id > 256 && vlan_id < 512) {
+		int ch_id =
+			imem_channel_alloc(ipc_imem, vlan_id, IPC_CTYPE_WWAN);
+
+		if (imem_channel_open(ipc_imem, ch_id, IPC_HP_NET_CHANNEL_INIT))
+			return ch_id;
+	}
+
+	return -1;
+}
+
+/* Release a net link to CP. */
+void imem_sys_wwan_close(void *instance, int vlan_id, int channel_id)
+{
+	struct iosm_imem *ipc_imem = instance;
+
+	if (ipc_imem->mux && vlan_id > 0 &&
+	    vlan_id <= ipc_mux_get_max_sessions(ipc_imem->mux))
+		ipc_mux_close_session(ipc_imem->mux, vlan_id - 1);
+
+	else if ((vlan_id > 256 && vlan_id < 512))
+		imem_channel_close(ipc_imem, channel_id);
+}
+
+/* Tasklet call to do uplink transfer. */
+static int imem_tq_sio_write(void *instance, int arg, void *msg, size_t size)
+{
+	struct iosm_imem *ipc_imem = instance;
+
+	ipc_imem->ev_sio_write_pending = false;
+	imem_ul_send(ipc_imem);
+
+	return 0;
+}
+
+/* Schedule the sio write via the tasklet. */
+static bool imem_call_sio_write(struct iosm_imem *ipc_imem)
+{
+	if (ipc_imem->ev_sio_write_pending)
+		return false;
+
+	ipc_imem->ev_sio_write_pending = true;
+
+	return (!ipc_task_queue_send_task(ipc_imem, imem_tq_sio_write, 0, NULL,
+					  0, false));
+}
+
+/* Add the skb to the channel UL list */
+static int imem_wwan_transmit(struct iosm_imem *ipc_imem, int vlan_id,
+			      int channel_id, struct sk_buff *skb)
+{
+	struct ipc_mem_channel *channel;
+
+	channel = &ipc_imem->channels[channel_id];
+
+	if (channel->state != IMEM_CHANNEL_ACTIVE) {
+		dev_err(ipc_imem->dev, "invalid state of channel %d",
+			channel_id);
+		return -1;
+	}
+
+	if (ipc_pcie_addr_map(ipc_imem->pcie, skb->data, skb->len,
+			      &IPC_CB(skb)->mapping, DMA_TO_DEVICE)) {
+		dev_err(ipc_imem->dev, "failed to map skb");
+		return -1;
+	}
+
+	/* Set up the skb control block for the UL transfer. */
+	IPC_CB(skb)->direction = DMA_TO_DEVICE;
+	IPC_CB(skb)->len = skb->len;
+	IPC_CB(skb)->op_type = UL_DEFAULT;
+
+	/* Add skb to the uplink skbuf accumulator */
+	skb_queue_tail(&channel->ul_list, skb);
+	imem_call_sio_write(ipc_imem);
+
+	return 0;
+}
+
+/* Function to transfer UL data
+ * WWAN layer must free the packet if imem fails to transmit.
+ * In case of success, the imem layer will free it.
+ */
+int imem_sys_wwan_transmit(void *instance, int vlan_id, int channel_id,
+			   struct sk_buff *skb)
+{
+	struct iosm_imem *ipc_imem = instance;
+	int ret = -1;
+
+	if (!ipc_imem || channel_id < 0)
+		return -EINVAL;
+
+	/* Is CP Running? */
+	if (ipc_imem->phase != IPC_P_RUN) {
+		dev_dbg(ipc_imem->dev, "%s[transmit, vlanid:%d]",
+			ipc_ap_phase_get_string(ipc_imem->phase), vlan_id);
+		return -EBUSY;
+	}
+
+	if (ipc_imem->channels[channel_id].ctype == IPC_CTYPE_WWAN) {
+		if (vlan_id > 0 &&
+		    vlan_id <= ipc_mux_get_max_sessions(ipc_imem->mux))
+			/* Route the UL packet through IP MUX Layer */
+			ret = ipc_mux_ul_trigger_encode(ipc_imem->mux,
+							vlan_id - 1, skb);
+		/* Control channels and low latency data channel for VoLTE */
+		else if (vlan_id > 256 && vlan_id < 512)
+			ret = imem_wwan_transmit(ipc_imem, vlan_id, channel_id,
+						 skb);
+	} else {
+		dev_err(ipc_imem->dev,
+			"invalid channel type on channel %d: ctype: %d",
+			channel_id, ipc_imem->channels[channel_id].ctype);
+	}
+
+	return ret;
+}
+
+void wwan_channel_init(struct iosm_imem *ipc_imem, int total_sessions,
+		       enum ipc_mux_protocol mux_type)
+{
+	struct ipc_chnl_cfg chnl_cfg = { 0 };
+
+	ipc_imem->cp_version = ipc_mmio_get_cp_version(ipc_imem->mmio);
+
+	/* If modem version is invalid (0xffffffff), do not initialize WWAN. */
+	if (ipc_imem->cp_version == -1) {
+		dev_err(ipc_imem->dev, "invalid CP version");
+		return;
+	}
+
+	while (ipc_imem->nr_of_channels < IPC_MEM_MAX_CHANNELS &&
+	       !ipc_chnl_cfg_get(&chnl_cfg, ipc_imem->nr_of_channels,
+				 mux_type)) {
+		dev_dbg(ipc_imem->dev,
+			"initializing entry :%d id:%d ul_pipe:%d dl_pipe:%d",
+			ipc_imem->nr_of_channels, chnl_cfg.id, chnl_cfg.ul_pipe,
+			chnl_cfg.dl_pipe);
+
+		imem_channel_init(ipc_imem, IPC_CTYPE_WWAN, chnl_cfg,
+				  IRQ_MOD_OFF);
+	}
+	/* WWAN registration. */
+	ipc_imem->wwan = ipc_wwan_init(ipc_imem, ipc_imem->dev, total_sessions);
+	if (!ipc_imem->wwan)
+		dev_err(ipc_imem->dev,
+			"failed to register the ipc_wwan interfaces");
+}
+
+/* Copies the data from user space */
+static struct sk_buff *
+imem_sio_copy_from_user_to_skb(struct iosm_imem *ipc_imem, int channel_id,
+			       const unsigned char __user *buf, int size,
+			       int is_blocking)
+{
+	struct sk_buff *skb;
+	dma_addr_t mapping;
+
+	/* Allocate skb memory for the uplink buffer. */
+	skb = ipc_pcie_alloc_skb(ipc_imem->pcie, size, GFP_KERNEL, &mapping,
+				 DMA_TO_DEVICE, 0);
+	if (!skb)
+		return skb;
+
+	if (copy_from_user(skb_put(skb, size), buf, size) != 0) {
+		dev_err(ipc_imem->dev, "ch[%d]: copy from user failed",
+			channel_id);
+		ipc_pcie_kfree_skb(ipc_imem->pcie, skb);
+		return NULL;
+	}
+
+	IPC_CB(skb)->op_type =
+		(u8)(is_blocking ? UL_USR_OP_BLOCKED : UL_DEFAULT);
+
+	return skb;
+}
+
+/* Save the complete PSI image in a specific imem region, prepare the doorbell
+ * scratchpad and inform the ROM driver. The flash app is suspended until the
+ * CP has processed the information. After the PSI image has started, CP shall
+ * set the execution stage to PSI and generate the irq; the flash app is then
+ * resumed, or a timeout occurs.
+ */
+static int imem_psi_transfer(struct iosm_imem *ipc_imem,
+			     struct ipc_mem_channel *channel,
+			     const unsigned char __user *buf, int count)
+{
+	enum ipc_mem_exec_stage exec_stage = IPC_MEM_EXEC_STAGE_INVALID;
+	int psi_start_timeout = PSI_START_DEFAULT_TIMEOUT;
+	dma_addr_t mapping = 0;
+	int status, result;
+	void *dest_buf;
+
+	imem_hrtimer_stop(&ipc_imem->startup_timer);
+
+	/* Allocate the buffer for the PSI image. */
+	dest_buf = pci_alloc_consistent(ipc_imem->pcie->pci, count, &mapping);
+	if (!dest_buf) {
+		dev_err(ipc_imem->dev, "ch[%d] cannot allocate %d bytes",
+			channel->channel_id, count);
+		return -1;
+	}
+
+	/* Copy the PSI image from user to kernel space. */
+	if (copy_from_user(dest_buf, buf, count) != 0) {
+		dev_err(ipc_imem->dev, "ch[%d] copy from user failed",
+			channel->channel_id);
+		goto error;
+	}
+
+	/* Save the PSI information for the CP ROM driver on the doorbell
+	 * scratchpad.
+	 */
+	ipc_mmio_set_psi_addr_and_size(ipc_imem->mmio, mapping, count);
+
+	/* Trigger the CP interrupt to process the PSI information. */
+	ipc_doorbell_fire(ipc_imem->pcie, 0, IPC_MEM_EXEC_STAGE_BOOT);
+	/* Suspend the flash app and wait for irq. */
+	status = WAIT_FOR_TIMEOUT(&channel->ul_sem, IPC_PSI_TRANSFER_TIMEOUT);
+
+	if (status <= 0) {
+		dev_err(ipc_imem->dev,
+			"ch[%d] timeout, failed PSI transfer to CP",
+			channel->channel_id);
+		ipc_uevent_send(ipc_imem->dev, UEVENT_MDM_TIMEOUT);
+		goto error;
+	}
+
+	/* CP should have copied the PSI image. */
+	pci_free_consistent(ipc_imem->pcie->pci, count, dest_buf, mapping);
+
+	/* If the PSI download fails, return the CP boot ROM exit code,
+	 * received via the doorbell scratchpad, to the flash app.
+	 */
+	if (ipc_imem->rom_exit_code != IMEM_ROM_EXIT_OPEN_EXT &&
+	    ipc_imem->rom_exit_code != IMEM_ROM_EXIT_CERT_EXT)
+		return (-1) * ((int)ipc_imem->rom_exit_code);
+
+	dev_dbg(ipc_imem->dev, "PSI image successfully downloaded");
+
+	/* Wait psi_start_timeout milliseconds until the CP PSI image is
+	 * running and updates the execution_stage field with
+	 * IPC_MEM_EXEC_STAGE_PSI. Verify the execution stage.
+	 */
+	while (psi_start_timeout > 0) {
+		exec_stage = ipc_mmio_get_exec_stage(ipc_imem->mmio);
+
+		if (exec_stage == IPC_MEM_EXEC_STAGE_PSI)
+			break;
+
+		msleep(20);
+		psi_start_timeout -= 20;
+	}
+
+	if (exec_stage != IPC_MEM_EXEC_STAGE_PSI)
+		return -1; /* Unknown status of the CP PSI process. */
+
+	/* Enter the PSI phase. */
+	dev_dbg(ipc_imem->dev, "execution_stage[%X] eq. PSI", exec_stage);
+
+	ipc_imem->phase = IPC_P_PSI;
+
+	/* Request the RUNNING state from CP and wait until it is reached
+	 * or a timeout occurs.
+	 */
+	imem_ipc_init_check(ipc_imem);
+
+	/* Suspend the flash app, wait for irq and evaluate the CP IPC state. */
+	status = WAIT_FOR_TIMEOUT(&channel->ul_sem, IPC_PSI_TRANSFER_TIMEOUT);
+	if (status <= 0) {
+		dev_err(ipc_imem->dev,
+			"ch[%d] timeout, failed PSI RUNNING state on CP",
+			channel->channel_id);
+		ipc_uevent_send(ipc_imem->dev, UEVENT_MDM_TIMEOUT);
+		return -1;
+	}
+
+	if (ipc_mmio_get_ipc_state(ipc_imem->mmio) !=
+	    IPC_MEM_DEVICE_IPC_RUNNING) {
+		dev_err(ipc_imem->dev,
+			"ch[%d] %s: unexpected CP IPC state %d, not RUNNING",
+			channel->channel_id,
+			ipc_ap_phase_get_string(ipc_imem->phase),
+			ipc_mmio_get_ipc_state(ipc_imem->mmio));
+
+		return -1;
+	}
+
+	/* Create the flash channel for the transfer of the images. */
+	result = imem_sys_sio_open(ipc_imem);
+	if (result < 0) {
+		dev_err(ipc_imem->dev, "can't open flash_channel");
+		return result;
+	}
+
+	/* Inform the flash app that the PSI was sent and start on CP.
+	 * The flash app shall wait for the CP status in blocking read
+	 * entry point.
+	 */
+	return count;
+error:
+	pci_free_consistent(ipc_imem->pcie->pci, count, dest_buf, mapping);
+
+	return -1;
+}
+
+/* Get the write active channel */
+static struct ipc_mem_channel *
+imem_sio_write_channel(struct iosm_imem *ipc_imem, int ch,
+		       const unsigned char __user *buf, int size)
+{
+	struct ipc_mem_channel *channel;
+	enum ipc_phase phase;
+
+	if (ch < 0 || ch >= ipc_imem->nr_of_channels || size <= 0) {
+		dev_err(ipc_imem->dev, "invalid channel No. or buff size");
+		return NULL;
+	}
+
+	channel = &ipc_imem->channels[ch];
+	/* Update the current operation phase. */
+	phase = ipc_imem->phase;
+
+	/* Select the operation depending on the execution stage. */
+	switch (phase) {
+	case IPC_P_RUN:
+	case IPC_P_PSI:
+	case IPC_P_EBL:
+		break;
+
+	case IPC_P_ROM:
+		/* Prepare the PSI image for the CP ROM driver and
+		 * suspend the flash app.
+		 */
+		if (channel->state != IMEM_CHANNEL_RESERVED) {
+			dev_err(ipc_imem->dev,
+				"ch[%d]:invalid channel state %d,expected %d",
+				ch, channel->state, IMEM_CHANNEL_RESERVED);
+			return NULL;
+		}
+		return channel;
+
+	default:
+		/* Ignore uplink actions in all other phases. */
+		dev_err(ipc_imem->dev, "ch[%d]: confused phase %d", ch, phase);
+		return NULL;
+	}
+
+	/* Check the full availability of the channel. */
+	if (channel->state != IMEM_CHANNEL_ACTIVE) {
+		dev_err(ipc_imem->dev, "ch[%d]: confused channel state %d", ch,
+			channel->state);
+		return NULL;
+	}
+
+	return channel;
+}
+
+/* Release a sio link to CP. */
+void imem_sys_sio_close(struct iosm_sio *ipc_sio)
+{
+	struct iosm_imem *ipc_imem = ipc_sio->imem_instance;
+	int channel_id = ipc_sio->channel_id;
+	struct ipc_mem_channel *channel;
+	enum ipc_phase curr_phase;
+	int boot_check_timeout = 0;
+	int status = 0;
+	u32 tail = 0;
+
+	if (channel_id < 0 || channel_id >= ipc_imem->nr_of_channels) {
+		dev_err(ipc_imem->dev, "invalid channel id %d", channel_id);
+		return;
+	}
+	if (channel_id != IPC_MEM_MBIM_CTRL_CH_ID)
+		boot_check_timeout = BOOT_CHECK_DEFAULT_TIMEOUT;
+
+	channel = &ipc_imem->channels[channel_id];
+
+	curr_phase = ipc_imem->phase;
+
+	/* If the current phase is IPC_P_OFF or the SIO ID is negative then
+	 * the channel is already freed. Nothing to do.
+	 */
+	if (curr_phase == IPC_P_OFF || channel->sio_id < 0) {
+		dev_err(ipc_imem->dev,
+			"nothing to do. Current Phase: %s SIO ID: %d",
+			ipc_ap_phase_get_string(curr_phase), channel->sio_id);
+		return;
+	}
+
+	if (channel->state == IMEM_CHANNEL_FREE) {
+		dev_err(ipc_imem->dev, "ch[%d]: invalid channel state %d",
+			channel_id, channel->state);
+		return;
+	}
+	/* Free only the channel id in the CP power off mode. */
+	if (channel->state == IMEM_CHANNEL_RESERVED) {
+		imem_channel_free(channel);
+		return;
+	}
+
+	if (channel_id != IPC_MEM_MBIM_CTRL_CH_ID &&
+	    ipc_imem->flash_channel_id >= 0) {
+		int i;
+		enum ipc_mem_exec_stage exec_stage;
+
+		/* Increase the total wait time to boot_check_timeout */
+		for (i = 0; i < boot_check_timeout; i++) {
+			/* User space can terminate either when the modem has
+			 * finished downloading or when it has finished
+			 * transferring the coredump.
+			 */
+			exec_stage = ipc_mmio_get_exec_stage(ipc_imem->mmio);
+			if (exec_stage == IPC_MEM_EXEC_STAGE_RUN ||
+			    exec_stage == IPC_MEM_EXEC_STAGE_PSI)
+				break;
+
+			msleep(20);
+		}
+
+		msleep(100);
+	}
+	/* If there are any pending UL TDs then wait for timeout/completion
+	 * before closing the pipe.
+	 */
+	if (channel->ul_pipe.old_tail != channel->ul_pipe.old_head) {
+		ipc_imem->app_notify_ul_pend = 1;
+
+		/* Suspend the user app and wait a certain time for processing
+		 * UL Data.
+		 */
+		status = WAIT_FOR_TIMEOUT(&ipc_imem->ul_pend_sem,
+					  IPC_PEND_DATA_TIMEOUT);
+
+		if (status == 0) {
+			dev_dbg(ipc_imem->dev,
+				"Pending data Timeout on UL-Pipe:%d Head:%d Tail:%d",
+				channel->ul_pipe.pipe_nr,
+				channel->ul_pipe.old_head,
+				channel->ul_pipe.old_tail);
+		}
+
+		ipc_imem->app_notify_ul_pend = 0;
+	}
+
+	/* If there are any pending DL TDs then wait for timeout/completion
+	 * before closing the pipe.
+	 */
+	ipc_protocol_get_head_tail_index(ipc_imem->ipc_protocol,
+					 &channel->dl_pipe, NULL, &tail);
+
+	if (tail != channel->dl_pipe.old_tail) {
+		ipc_imem->app_notify_dl_pend = 1;
+
+		/* Suspend the user app and wait a certain time for processing
+		 * DL Data.
+		 */
+		status = WAIT_FOR_TIMEOUT(&ipc_imem->dl_pend_sem,
+					  IPC_PEND_DATA_TIMEOUT);
+
+		if (status == 0) {
+			dev_dbg(ipc_imem->dev,
+				"Pending data Timeout on DL-Pipe:%d Head:%d Tail:%d",
+				channel->dl_pipe.pipe_nr,
+				channel->dl_pipe.old_head,
+				channel->dl_pipe.old_tail);
+		}
+
+		ipc_imem->app_notify_dl_pend = 0;
+	}
+
+	/* Due to the wait for completion in messages, there is a small window
+	 * between closing the pipe and marking the channel as closed. In this
+	 * window there could be an HP update from the host driver. Hence set
+	 * the channel state to CLOSING to avoid an unnecessary interrupt
+	 * towards CP.
+	 */
+	channel->state = IMEM_CHANNEL_CLOSING;
+
+	/* Release the pipe resources */
+	if (channel_id != IPC_MEM_MBIM_CTRL_CH_ID &&
+	    ipc_imem->flash_channel_id != -1) {
+		/* don't send close for software download pipes, as
+		 * the device is already rebooting
+		 */
+		imem_pipe_cleanup(ipc_imem, &channel->ul_pipe);
+		imem_pipe_cleanup(ipc_imem, &channel->dl_pipe);
+	} else {
+		imem_pipe_close(ipc_imem, &channel->ul_pipe);
+		imem_pipe_close(ipc_imem, &channel->dl_pipe);
+	}
+
+	imem_channel_free(channel);
+
+	if (channel_id != IPC_MEM_MBIM_CTRL_CH_ID)
+		/* Reset the global flash channel id. */
+		ipc_imem->flash_channel_id = -1;
+}
+
+/* Open a MBIM link to CP and return the channel id. */
+int imem_sys_mbim_open(void *instance)
+{
+	struct iosm_imem *ipc_imem = instance;
+	int ch_id;
+
+	/* The MBIM interface is only supported in the runtime phase. */
+	if (imem_ap_phase_update(ipc_imem) != IPC_P_RUN) {
+		dev_err(ipc_imem->dev, "MBIM open refused, phase %s",
+			ipc_ap_phase_get_string(ipc_imem->phase));
+		return -1;
+	}
+
+	ch_id = imem_channel_alloc(ipc_imem, IPC_MEM_MBIM_CTRL_CH_ID,
+				   IPC_CTYPE_MBIM);
+
+	if (ch_id < 0) {
+		dev_err(ipc_imem->dev, "reservation of an MBIM chnl id failed");
+		return ch_id;
+	}
+
+	if (!imem_channel_open(ipc_imem, ch_id, IPC_HP_SIO_OPEN)) {
+		dev_err(ipc_imem->dev, "MBIM channel id open failed");
+		return -1;
+	}
+
+	return ch_id;
+}
+
+/* Open a SIO link to CP and return the channel id. */
+int imem_sys_sio_open(void *instance)
+{
+	struct iosm_imem *ipc_imem = instance;
+	struct ipc_chnl_cfg chnl_cfg = { 0 };
+	enum ipc_phase phase;
+	int channel_id;
+
+	phase = imem_ap_phase_update(ipc_imem);
+
+	/* The control link to CP is only supported in the power off, psi or
+	 * run phase.
+	 */
+	switch (phase) {
+	case IPC_P_OFF:
+	case IPC_P_ROM:
+		/* Get a channel id as flash id and reserve it. */
+		channel_id = imem_channel_alloc(ipc_imem, IPC_MEM_FLASH_CH_ID,
+						IPC_CTYPE_FLASH);
+		if (channel_id < 0) {
+			dev_err(ipc_imem->dev,
+				"reservation of a flash channel id failed");
+			return channel_id;
+		}
+
+		/* Enqueue chip info data to be read */
+		if (imem_trigger_chip_info(ipc_imem)) {
+			imem_channel_close(ipc_imem, channel_id);
+			return -1;
+		}
+
+		/* Save the flash channel id to execute the ROM interworking. */
+		ipc_imem->flash_channel_id = channel_id;
+
+		return channel_id;
+
+	case IPC_P_PSI:
+	case IPC_P_EBL:
+		/* The channel id used as flash id shall be already
+		 * present as reserved.
+		 */
+		if (ipc_imem->flash_channel_id < 0) {
+			dev_err(ipc_imem->dev,
+				"missing a valid flash channel id");
+			return -1;
+		}
+		channel_id = ipc_imem->flash_channel_id;
+
+		ipc_imem->cp_version = ipc_mmio_get_cp_version(ipc_imem->mmio);
+		if (ipc_imem->cp_version == -1) {
+			dev_err(ipc_imem->dev, "invalid CP version");
+			return -1;
+		}
+
+		/* PSI may have changed the CP version field, which may
+		 * result in a different channel configuration.
+		 * Fetch and update the flash channel config
+		 */
+		if (ipc_chnl_cfg_get(&chnl_cfg, ipc_imem->flash_channel_id,
+				     MUX_UNKNOWN)) {
+			dev_err(ipc_imem->dev,
+				"failed to get flash pipe configuration");
+			return -1;
+		}
+
+		ipc_imem_channel_update(ipc_imem, channel_id, chnl_cfg,
+					IRQ_MOD_OFF);
+
+		if (!imem_channel_open(ipc_imem, channel_id, IPC_HP_SIO_OPEN))
+			return -1;
+
+		return channel_id;
+
+	default:
+		/* CP is in the wrong state (e.g. CRASH or CD_READY) */
+		dev_err(ipc_imem->dev, "SIO open refused, phase %d", phase);
+		return -1;
+	}
+}
+
+ssize_t imem_sys_sio_read(struct iosm_sio *ipc_sio, unsigned char __user *buf,
+			  size_t size, struct sk_buff *skb)
+{
+	unsigned char __user *dest_buf, *dest_end;
+	size_t dest_len, src_len, copied_b = 0;
+	unsigned char *src_buf;
+
+	/* Prepare the destination space. */
+	dest_buf = buf;
+	dest_end = dest_buf + size;
+
+	/* Copy the accumulated rx packets. */
+	while (skb) {
+		/* Prepare the source elements. */
+		src_buf = skb->data;
+		src_len = skb->len;
+
+		/* Calculate the current size of the destination buffer. */
+		dest_len = dest_end - dest_buf;
+
+		/* Compute the number of bytes to copy. */
+		copied_b = (dest_len < src_len) ? dest_len : src_len;
+
+		/* Copy the chars into the user space buffer. */
+		if (copy_to_user((void __user *)dest_buf, src_buf, copied_b) !=
+		    0) {
+			dev_err(ipc_sio->dev,
+				"chid[%d] userspace copy failed n=%zu\n",
+				ipc_sio->channel_id, copied_b);
+			ipc_pcie_kfree_skb(ipc_sio->pcie, skb);
+			return -EFAULT;
+		}
+
+		/* Update the source elements. */
+		skb->data = src_buf + copied_b;
+		skb->len = skb->len - copied_b;
+
+		/* Update the destination pointer. */
+		dest_buf += copied_b;
+
+		/* Test the fill level of the user buffer. */
+		if (dest_buf >= dest_end) {
+			/* Free the consumed skbuf or save the pending skbuf
+			 * to consume it in the read call.
+			 */
+			if (skb->len == 0)
+				ipc_pcie_kfree_skb(ipc_sio->pcie, skb);
+			else
+				ipc_sio->rx_pending_buf = skb;
+
+			/* Return the number of saved chars. */
+			break;
+		}
+
+		/* Free the consumed skbuf. */
+		ipc_pcie_kfree_skb(ipc_sio->pcie, skb);
+
+		/* Get the next skbuf element. */
+		skb = skb_dequeue(&ipc_sio->rx_list);
+	}
+
+	/* Return the number of saved chars. */
+	copied_b = dest_buf - buf;
+	return copied_b;
+}
+
+int imem_sys_sio_write(struct iosm_sio *ipc_sio,
+		       const unsigned char __user *buf, int count,
+		       bool blocking_write)
+{
+	struct iosm_imem *ipc_imem = ipc_sio->imem_instance;
+	int channel_id = ipc_sio->channel_id;
+	struct ipc_mem_channel *channel;
+	struct sk_buff *skb;
+	int ret = -1;
+
+	channel = imem_sio_write_channel(ipc_imem, channel_id, buf, count);
+
+	if (!channel || ipc_imem->phase == IPC_P_OFF_REQ)
+		return ret;
+
+	/* In the ROM phase the PSI image is passed to CP directly via a
+	 * specific shared memory area and the doorbell scratchpad.
+	 */
+	if (ipc_imem->phase == IPC_P_ROM) {
+		ret = imem_psi_transfer(ipc_imem, channel, buf, count);
+
+		/* If the PSI transfer is successful then send Feature
+		 * Set message.
+		 */
+		if (ret > 0)
+			imem_msg_send_feature_set(ipc_imem,
+						  IPC_MEM_INBAND_CRASH_SIG,
+						  false);
+		return ret;
+	}
+
+	/* Allocate skb memory for the uplink buffer.*/
+	skb = imem_sio_copy_from_user_to_skb(ipc_imem, channel_id, buf, count,
+					     blocking_write);
+	if (!skb)
+		return ret;
+
+	/* Add skb to the uplink skbuf accumulator. */
+	skb_queue_tail(&channel->ul_list, skb);
+
+	/* Inform the IPC tasklet to pass uplink IP packets to CP.
+	 * Blocking write waits for UL completion notification,
+	 * non-blocking write simply returns the count.
+	 */
+	if (imem_call_sio_write(ipc_imem) && blocking_write) {
+		/* Suspend the app and wait for UL data completion. */
+		int status =
+			wait_for_completion_interruptible(&channel->ul_sem);
+
+		if (status < 0) {
+			dev_err(ipc_imem->dev,
+				"ch[%d] no CP confirmation, status=%d",
+				channel->channel_id, status);
+			return status;
+		}
+	}
+
+	return count;
+}
+
+int imem_sys_sio_receive(struct iosm_sio *ipc_sio, struct sk_buff *skb)
+{
+	dev_dbg(ipc_sio->dev, "sio receive[c-id:%d]: %d", ipc_sio->channel_id,
+		skb->len);
+
+	skb_queue_tail((&ipc_sio->rx_list), skb);
+
+	complete(&ipc_sio->read_sem);
+	wake_up_interruptible(&ipc_sio->poll_inq);
+
+	return 0;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
new file mode 100644
index 000000000000..c60295056499
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_IMEM_OPS_H
+#define IOSM_IPC_IMEM_OPS_H
+
+#include "iosm_ipc_mux_codec.h"
+
+/* Maximum length of the SIO device names */
+#define IPC_SIO_DEVNAME_LEN 32
+#define IPC_READ_TIMEOUT 500
+
+/* The delay in ms for deferring the unregister */
+#define SIO_UNREGISTER_DEFER_DELAY_MS 1
+
+/* Default delay until the CP PSI image is running and the modem updates
+ * the execution stage.
+ * unit : milliseconds
+ */
+#define PSI_START_DEFAULT_TIMEOUT 3000
+
+/* Default timeout when closing SIO, until the modem is in the
+ * running state.
+ * unit : milliseconds
+ */
+#define BOOT_CHECK_DEFAULT_TIMEOUT 400
+
+/**
+ * imem_sys_sio_open - Open a sio link to CP.
+ * @instance:	Imem instance.
+ *
+ * Return: chnl id on success, -EINVAL or -1 for failure
+ */
+int imem_sys_sio_open(void *instance);
+
+/**
+ * imem_sys_mbim_open - Open a mbim link to CP.
+ * @instance:	Imem instance.
+ *
+ * Return: chnl id on success, -EINVAL or -1 for failure
+ */
+int imem_sys_mbim_open(void *instance);
+
+/**
+ * imem_sys_sio_close - Release a sio link to CP.
+ * @ipc_sio:		iosm sio instance.
+ */
+void imem_sys_sio_close(struct iosm_sio *ipc_sio);
+
+/**
+ * imem_sys_sio_read - Copy the rx data to the user space buffer and free the
+ *		       skbuf.
+ * @ipc_sio:	Pointer to iosm_sio struct.
+ * @buf:	Pointer to destination buffer.
+ * @size:	Size of destination buffer.
+ * @skb:	Pointer to source buffer.
+ *
+ * Return: Number of bytes read, -EFAULT and -EINVAL for failure
+ */
+ssize_t imem_sys_sio_read(struct iosm_sio *ipc_sio, unsigned char __user *buf,
+			  size_t size, struct sk_buff *skb);
+
+/**
+ * imem_sys_sio_write - Route the uplink buffer to CP.
+ * @ipc_sio:		iosm_sio instance.
+ * @buf:		Pointer to source buffer.
+ * @count:		Number of data bytes to write.
+ * @blocking_write:	if true wait for UL data completion.
+ *
+ * Return: Number of bytes written, -EINVAL and -1 for failure
+ */
+int imem_sys_sio_write(struct iosm_sio *ipc_sio,
+		       const unsigned char __user *buf, int count,
+		       bool blocking_write);
+
+/**
+ * imem_sys_sio_receive - Receive downlink characters from CP; the downlink
+ *		skbuf is added at the end of the downlink (rx) list.
+ * @ipc_sio:    Pointer to ipc char data-struct
+ * @skb:        Pointer to sk buffer
+ *
+ * Returns: 0 on success, -EINVAL on failure
+ */
+int imem_sys_sio_receive(struct iosm_sio *ipc_sio, struct sk_buff *skb);
+
+int imem_sys_wwan_open(void *instance, int vlan_id);
+
+void imem_sys_wwan_close(void *instance, int vlan_id, int channel_id);
+
+int imem_sys_wwan_transmit(void *instance, int vlan_id, int channel_id,
+			   struct sk_buff *skb);
+/**
+ * wwan_channel_init - Initializes WWAN channels and the channel for MUX.
+ * @ipc_imem:		Pointer to iosm_imem struct.
+ * @total_sessions:	Total sessions.
+ * @mux_type:		Type of mux protocol.
+ */
+void wwan_channel_init(struct iosm_imem *ipc_imem, int total_sessions,
+		       enum ipc_mux_protocol mux_type);
+#endif
-- 
2.12.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 06/18] net: iosm: channel configuration
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (4 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 05/18] net: iosm: shared memory I/O operations M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 07/18] net: iosm: char device for FW flash & coredump M Chetan Kumar
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

Defines pipe and channel configurations such as channel type,
pipe mappings, number of transfer descriptors and transfer
buffer size.
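
As an illustration (not part of the patch), a consumer can walk the
static channel table via ipc_chnl_cfg_get(), roughly the way
wwan_channel_init() does; MUX_UNKNOWN stands in for a not yet
negotiated MUX protocol:

	struct ipc_chnl_cfg cfg = { 0 };
	int i = 0;

	/* ipc_chnl_cfg_get() returns 0 as long as the index is valid. */
	while (!ipc_chnl_cfg_get(&cfg, i, MUX_UNKNOWN)) {
		pr_info("chnl %d: id %d ul_pipe %u dl_pipe %u dl_buf %u\n",
			i, cfg.id, cfg.ul_pipe, cfg.dl_pipe, cfg.dl_buf_size);
		i++;
	}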

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c | 87 +++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h | 57 ++++++++++++++++++++
 2 files changed, 144 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c b/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c
new file mode 100644
index 000000000000..d1d239218494
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include "iosm_ipc_chnl_cfg.h"
+
+/* Max. sizes of downlink buffers */
+#define IPC_MEM_MAX_DL_FLASH_BUF_SIZE (16 * 1024)
+#define IPC_MEM_MAX_DL_LOOPBACK_SIZE (1 * 1024 * 1024)
+#define IPC_MEM_MAX_DL_AT_BUF_SIZE 2048
+#define IPC_MEM_MAX_DL_RPC_BUF_SIZE (32 * 1024)
+#define IPC_MEM_MAX_DL_MBIM_BUF_SIZE IPC_MEM_MAX_DL_RPC_BUF_SIZE
+
+/* Max. transfer descriptors for a pipe. */
+#define IPC_MEM_MAX_TDS_FLASH_DL 3
+#define IPC_MEM_MAX_TDS_FLASH_UL 6
+#define IPC_MEM_MAX_TDS_AT 4
+#define IPC_MEM_MAX_TDS_RPC 4
+#define IPC_MEM_MAX_TDS_MBIM IPC_MEM_MAX_TDS_RPC
+#define IPC_MEM_MAX_TDS_LOOPBACK 11
+
+/* Accumulation backoff usec */
+#define IRQ_ACC_BACKOFF_OFF 0
+
+/* MUX acc backoff 1ms */
+#define IRQ_ACC_BACKOFF_MUX 1000
+
+/* Modem channel configuration table
+ * Always reserve element zero for flash channel.
+ */
+static struct ipc_chnl_cfg modem_cfg[] = {
+	/* FLASH Channel */
+	{ IPC_MEM_FLASH_CH_ID, IPC_MEM_PIPE_0, IPC_MEM_PIPE_1,
+	  IPC_MEM_MAX_TDS_FLASH_UL, IPC_MEM_MAX_TDS_FLASH_DL,
+	  IPC_MEM_MAX_DL_FLASH_BUF_SIZE },
+	/* MBIM Channel */
+	{ IPC_MEM_MBIM_CTRL_CH_ID, IPC_MEM_PIPE_12, IPC_MEM_PIPE_13,
+	  IPC_MEM_MAX_TDS_MBIM, IPC_MEM_MAX_TDS_MBIM,
+	  IPC_MEM_MAX_DL_MBIM_BUF_SIZE },
+	/* RPC - 0 */
+	{ IPC_WWAN_DSS_ID_0, IPC_MEM_PIPE_2, IPC_MEM_PIPE_3,
+	  IPC_MEM_MAX_TDS_RPC, IPC_MEM_MAX_TDS_RPC,
+	  IPC_MEM_MAX_DL_RPC_BUF_SIZE },
+	/* IAT0 */
+	{ IPC_WWAN_DSS_ID_1, IPC_MEM_PIPE_4, IPC_MEM_PIPE_5, IPC_MEM_MAX_TDS_AT,
+	  IPC_MEM_MAX_TDS_AT, IPC_MEM_MAX_DL_AT_BUF_SIZE },
+	/* IAT1 */
+	{ IPC_WWAN_DSS_ID_2, IPC_MEM_PIPE_8, IPC_MEM_PIPE_9, IPC_MEM_MAX_TDS_AT,
+	  IPC_MEM_MAX_TDS_AT, IPC_MEM_MAX_DL_AT_BUF_SIZE },
+	/* Loopback */
+	{ IPC_WWAN_DSS_ID_3, IPC_MEM_PIPE_10, IPC_MEM_PIPE_11,
+	  IPC_MEM_MAX_TDS_LOOPBACK, IPC_MEM_MAX_TDS_LOOPBACK,
+	  IPC_MEM_MAX_DL_LOOPBACK_SIZE },
+	/* Trace */
+	{ IPC_WWAN_DSS_ID_4, IPC_MEM_PIPE_6, IPC_MEM_PIPE_7, IPC_MEM_TDS_TRC,
+	  IPC_MEM_TDS_TRC, IPC_MEM_MAX_DL_TRC_BUF_SIZE },
+	/* IP Mux */
+	{ IPC_MEM_MUX_IP_CH_VLAN_ID, IPC_MEM_PIPE_0, IPC_MEM_PIPE_1,
+	  IPC_MEM_MAX_TDS_MUX_LITE_UL, IPC_MEM_MAX_TDS_MUX_LITE_DL,
+	  IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE },
+};
+
+int ipc_chnl_cfg_get(struct ipc_chnl_cfg *chnl_cfg, int index,
+		     enum ipc_mux_protocol mux_protocol)
+{
+	int array_size = ARRAY_SIZE(modem_cfg);
+
+	if (index >= array_size) {
+		pr_err("index: %d and array_size %d", index, array_size);
+		return -1;
+	}
+
+	if (index == IPC_MEM_MUX_IP_CH_VLAN_ID)
+		chnl_cfg->accumulation_backoff = IRQ_ACC_BACKOFF_MUX;
+	else
+		chnl_cfg->accumulation_backoff = IRQ_ACC_BACKOFF_OFF;
+
+	chnl_cfg->ul_nr_of_entries = modem_cfg[index].ul_nr_of_entries;
+	chnl_cfg->dl_nr_of_entries = modem_cfg[index].dl_nr_of_entries;
+	chnl_cfg->dl_buf_size = modem_cfg[index].dl_buf_size;
+	chnl_cfg->id = modem_cfg[index].id;
+	chnl_cfg->ul_pipe = modem_cfg[index].ul_pipe;
+	chnl_cfg->dl_pipe = modem_cfg[index].dl_pipe;
+
+	return 0;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h b/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h
new file mode 100644
index 000000000000..42ba4e4849bb
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation
+ */
+
+#ifndef IOSM_IPC_CHNL_CFG_H
+#define IOSM_IPC_CHNL_CFG_H
+
+#include "iosm_ipc_mux.h"
+
+/* Number of TDs on the trace channel */
+#define IPC_MEM_TDS_TRC 32
+
+/* Trace channel TD buffer size. */
+#define IPC_MEM_MAX_DL_TRC_BUF_SIZE 8192
+
+/* Type of the WWAN ID */
+enum ipc_wwan_id {
+	IPC_WWAN_DSS_ID_0 = 257,
+	IPC_WWAN_DSS_ID_1,
+	IPC_WWAN_DSS_ID_2,
+	IPC_WWAN_DSS_ID_3,
+	IPC_WWAN_DSS_ID_4,
+};
+
+/**
+ * struct ipc_chnl_cfg - IPC channel configuration structure
+ * @id:				VLAN ID
+ * @ul_pipe:			Uplink datastream
+ * @dl_pipe:			Downlink datastream
+ * @ul_nr_of_entries:		Number of Transfer descriptor uplink pipe
+ * @dl_nr_of_entries:		Number of Transfer descriptor downlink pipe
+ * @dl_buf_size:		Downlink buffer size
+ * @accumulation_backoff:	Time in usec for data accumulation
+ */
+struct ipc_chnl_cfg {
+	int id;
+	u32 ul_pipe;
+	u32 dl_pipe;
+	u32 ul_nr_of_entries;
+	u32 dl_nr_of_entries;
+	u32 dl_buf_size;
+	u32 accumulation_backoff;
+};
+
+/**
+ * ipc_chnl_cfg_get - Get pipe configuration.
+ * @chnl_cfg:		Array of ipc_chnl_cfg struct
+ * @index:		Channel index (up to MAX_CHANNELS)
+ * @mux_protocol:	Active mux protocol
+ *
+ * Return: 0 on success and -1 on failure
+ */
+int ipc_chnl_cfg_get(struct ipc_chnl_cfg *chnl_cfg, int index,
+		     enum ipc_mux_protocol mux_protocol);
+
+#endif
-- 
2.12.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 07/18] net: iosm: char device for FW flash & coredump
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (5 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 06/18] net: iosm: channel configuration M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 08/18] net: iosm: MBIM control device M Chetan Kumar
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

Implements a char device for flashing the modem FW image while the
device is in the boot ROM phase and for collecting traces on modem crash.
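
A rough user-space sketch of the intended flashing flow; the device
node name and the psi_image/psi_size buffer are assumptions of this
example, not defined by the patch:

	int fd = open("/dev/iosm_flash0", O_RDWR);	/* assumed node name */
	char status[64];

	/* In the ROM phase the first write() carries the PSI image; the
	 * driver routes it through imem_psi_transfer() and blocks until
	 * CP confirms or a timeout occurs.
	 */
	write(fd, psi_image, psi_size);

	/* A blocking read() then returns the CP status / chip info data
	 * before the remaining FW images are written the same way.
	 */
	read(fd, status, sizeof(status));
	close(fd);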

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_sio.c | 188 +++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_sio.h |  72 ++++++++++++++
 2 files changed, 260 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_sio.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_sio.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_sio.c b/drivers/net/wwan/iosm/iosm_ipc_sio.c
new file mode 100644
index 000000000000..c35e7c6face1
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_sio.c
@@ -0,0 +1,188 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/poll.h>
+#include <asm/ioctls.h>
+
+#include "iosm_ipc_sio.h"
+
+/* Open a shared memory device and initialize the head of the rx skbuf list. */
+static int ipc_sio_fop_open(struct inode *inode, struct file *filp)
+{
+	struct iosm_sio *ipc_sio =
+		container_of(filp->private_data, struct iosm_sio, misc);
+
+	if (test_and_set_bit(0, &ipc_sio->sio_is_open))
+		return -EBUSY;
+
+	ipc_sio->channel_id = imem_sys_sio_open(ipc_sio->imem_instance);
+
+	if (ipc_sio->channel_id < 0)
+		return -EIO;
+
+	return 0;
+}
+
+static int ipc_sio_fop_release(struct inode *inode, struct file *filp)
+{
+	struct iosm_sio *ipc_sio =
+		container_of(filp->private_data, struct iosm_sio, misc);
+
+	if (ipc_sio->channel_id < 0)
+		return -EINVAL;
+
+	imem_sys_sio_close(ipc_sio);
+
+	clear_bit(0, &ipc_sio->sio_is_open);
+
+	return 0;
+}
+
+/* Copy the data from skbuff to the user buffer */
+static ssize_t ipc_sio_fop_read(struct file *filp, char __user *buf,
+				size_t size, loff_t *l)
+{
+	struct sk_buff *skb = NULL;
+	struct iosm_sio *ipc_sio;
+	bool is_blocking;
+
+	if (!buf)
+		return -EINVAL;
+
+	ipc_sio = container_of(filp->private_data, struct iosm_sio, misc);
+
+	is_blocking = !(filp->f_flags & O_NONBLOCK);
+
+	/* only log in blocking mode to reduce flooding the log */
+	if (is_blocking)
+		dev_dbg(ipc_sio->dev, "sio read chid[%d] size=%zu",
+			ipc_sio->channel_id, size);
+
+	/* First provide the pending skbuf to the user. */
+	if (ipc_sio->rx_pending_buf) {
+		skb = ipc_sio->rx_pending_buf;
+		ipc_sio->rx_pending_buf = NULL;
+	}
+
+	/* Check rx queue until skb is available */
+	while (!skb) {
+		skb = skb_dequeue(&ipc_sio->rx_list);
+		if (skb)
+			break;
+
+		if (!is_blocking)
+			return -EAGAIN;
+		/* Suspend the user app and wait a certain time for data
+		 * from CP.
+		 */
+		if (WAIT_FOR_TIMEOUT(&ipc_sio->read_sem, IPC_READ_TIMEOUT) <
+		    0) {
+			return -ETIMEDOUT;
+		}
+	}
+
+	return imem_sys_sio_read(ipc_sio, buf, size, skb);
+}
+
+/* Route the user data to the shared memory layer. */
+static ssize_t ipc_sio_fop_write(struct file *filp, const char __user *buf,
+				 size_t size, loff_t *l)
+{
+	struct iosm_sio *ipc_sio;
+	bool is_blocking;
+
+	if (!buf)
+		return -EINVAL;
+
+	ipc_sio = container_of(filp->private_data, struct iosm_sio, misc);
+
+	is_blocking = !(filp->f_flags & O_NONBLOCK);
+
+	if (ipc_sio->channel_id < 0)
+		return -EPERM;
+
+	return imem_sys_sio_write(ipc_sio, buf, size, is_blocking);
+}
+
+/* poll for applications using nonblocking I/O */
+static __poll_t ipc_sio_fop_poll(struct file *filp, poll_table *wait)
+{
+	struct iosm_sio *ipc_sio =
+		container_of(filp->private_data, struct iosm_sio, misc);
+	__poll_t mask = EPOLLOUT | EPOLLWRNORM; /* writable */
+
+	/* Just registers wait_queue hook. This doesn't really wait. */
+	poll_wait(filp, &ipc_sio->poll_inq, wait);
+
+	/* Test the fill level of the skbuf rx queue. */
+	if (!skb_queue_empty(&ipc_sio->rx_list) || ipc_sio->rx_pending_buf)
+		mask |= EPOLLIN | EPOLLRDNORM; /* readable */
+
+	return mask;
+}
+
+struct iosm_sio *ipc_sio_init(struct iosm_imem *ipc_imem, const char *name)
+{
+	static const struct file_operations fops = {
+		.owner = THIS_MODULE,
+		.open = ipc_sio_fop_open,
+		.release = ipc_sio_fop_release,
+		.read = ipc_sio_fop_read,
+		.write = ipc_sio_fop_write,
+		.poll = ipc_sio_fop_poll,
+	};
+
+	struct iosm_sio *ipc_sio = kzalloc(sizeof(*ipc_sio), GFP_KERNEL);
+
+	if (!ipc_sio)
+		return NULL;
+
+	ipc_sio->dev = ipc_imem->dev;
+	ipc_sio->pcie = ipc_imem->pcie;
+	ipc_sio->imem_instance = ipc_imem;
+
+	ipc_sio->channel_id = -1;
+	ipc_sio->sio_is_open = 0;
+	atomic_set(&ipc_sio->dreg_called, 0);
+
+	init_completion(&ipc_sio->read_sem);
+
+	skb_queue_head_init(&ipc_sio->rx_list);
+	init_waitqueue_head(&ipc_sio->poll_inq);
+	init_waitqueue_head(&ipc_sio->poll_outq);
+
+	strncpy(ipc_sio->devname, name, sizeof(ipc_sio->devname) - 1);
+	ipc_sio->devname[IPC_SIO_DEVNAME_LEN - 1] = '\0';
+
+	ipc_sio->misc.minor = MISC_DYNAMIC_MINOR;
+	ipc_sio->misc.name = ipc_sio->devname;
+	ipc_sio->misc.fops = &fops;
+	ipc_sio->misc.mode = IPC_CHAR_DEVICE_DEFAULT_MODE;
+
+	if (misc_register(&ipc_sio->misc) != 0) {
+		kfree(ipc_sio);
+		return NULL;
+	}
+
+	return ipc_sio;
+}
+
+void ipc_sio_deinit(struct iosm_sio *ipc_sio)
+{
+	if (atomic_cmpxchg(&ipc_sio->dreg_called, 0, 1) != 0)
+		return;
+
+	misc_deregister(&ipc_sio->misc);
+
+	/* Wakeup the user app. */
+	complete(&ipc_sio->read_sem);
+
+	ipc_pcie_kfree_skb(ipc_sio->pcie, ipc_sio->rx_pending_buf);
+	ipc_sio->rx_pending_buf = NULL;
+
+	skb_queue_purge(&ipc_sio->rx_list);
+
+	kfree(ipc_sio);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_sio.h b/drivers/net/wwan/iosm/iosm_ipc_sio.h
new file mode 100644
index 000000000000..d2a8e91ea117
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_sio.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_SIO_H
+#define IOSM_IPC_SIO_H
+
+#include <linux/miscdevice.h>
+#include <linux/skbuff.h>
+
+#include "iosm_ipc_imem_ops.h"
+
+/* IPC char. device default mode. Only privileged users can access it. */
+#define IPC_CHAR_DEVICE_DEFAULT_MODE 0600
+
+/**
+ * struct iosm_sio - State of the char driver layer.
+ * @misc:		OS misc device component
+ * @imem_instance:	imem instance
+ * @dev:		Pointer to device struct
+ * @pcie:		PCIe component
+ * @rx_pending_buf:	Storage for skb when its data has not been fully read
+ * @devname:		Device name
+ * @channel_id:		Channel ID as received from ipc_sio_ops.open
+ * @rx_list:		Downlink skbuf list received from CP.
+ * @read_sem:		Needed for the blocking read or downlink transfer
+ * @poll_inq:		Read queue to support the poll system call
+ * @poll_outq:		Write queue to support the poll system call
+ * @sio_is_open:	SIO open flag that restricts the number of concurrent
+ *			open operations to one
+ * @mbim_is_open:	MBIM open flag that restricts the number of concurrent
+ *			open operations to one
+ * @dreg_called:	Indicates that deregister has been called. This makes
+ *			sure deregister is only executed once.
+ * @wmaxcommand:	Max buffer size
+ */
+struct iosm_sio {
+	struct miscdevice misc;
+	void *imem_instance;
+	struct device *dev;
+	struct iosm_pcie *pcie;
+	struct sk_buff *rx_pending_buf;
+	char devname[IPC_SIO_DEVNAME_LEN];
+	int channel_id;
+	struct sk_buff_head rx_list;
+	struct completion read_sem;
+	wait_queue_head_t poll_inq;
+	wait_queue_head_t poll_outq;
+	unsigned long sio_is_open;
+	unsigned long mbim_is_open;
+	atomic_t dreg_called;
+	u16 wmaxcommand;
+};
+
+/**
+ * ipc_sio_init - Allocate and create a character device
+ * @ipc_imem:	Pointer to iosm_imem structure
+ * @name:	Pointer to character device name
+ *
+ * Returns: Pointer to sio instance on success and NULL on failure
+ */
+struct iosm_sio *ipc_sio_init(struct iosm_imem *ipc_imem, const char *name);
+
+/**
+ * ipc_sio_deinit - Free all the memory allocated for the ipc sio structure.
+ * @ipc_sio:	Pointer to the ipc sio data-struct
+ */
+void ipc_sio_deinit(struct iosm_sio *ipc_sio);
+
+#endif
-- 
2.12.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 08/18] net: iosm: MBIM control device
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (6 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 07/18] net: iosm: char device for FW flash & coredump M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 09/18] net: iosm: bottom half M Chetan Kumar
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

Implements a char device for MBIM protocol communication and
provides a simple IOCTL for max transfer buffer size
configuration.
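
A rough user-space sketch of the IOCTL usage; the device node name and
the message buffers are assumptions of this example:

	int fd = open("/dev/wwanctrl", O_RDWR);
	__u16 max_len = 0;

	/* Fetch the maximum command buffer length (4096 bytes). */
	if (fd >= 0 && !ioctl(fd, IOCTL_WDM_MAX_COMMAND, &max_len)) {
		/* MBIM control messages built by the application (e.g. an
		 * MBIM_OPEN_MSG) are exchanged with plain write()/read()
		 * calls, each bounded by max_len.
		 */
		write(fd, mbim_msg, mbim_msg_len);
		read(fd, resp_buf, max_len);
	}
	close(fd);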

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_mbim.c | 205 ++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_mbim.h |  24 ++++
 2 files changed, 229 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mbim.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mbim.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_mbim.c b/drivers/net/wwan/iosm/iosm_ipc_mbim.c
new file mode 100644
index 000000000000..b263c77d6eb2
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mbim.c
@@ -0,0 +1,205 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/poll.h>
+#include <linux/uaccess.h>
+
+#include "iosm_ipc_imem_ops.h"
+#include "iosm_ipc_mbim.h"
+#include "iosm_ipc_sio.h"
+
+#define IOCTL_WDM_MAX_COMMAND _IOR('H', 0xA0, __u16)
+#define WDM_MAX_SIZE 4096
+#define IPC_READ_TIMEOUT 500
+
+/* MBIM IOCTL for max buffer size. */
+static long ipc_mbim_fop_unlocked_ioctl(struct file *filp, unsigned int cmd,
+					unsigned long arg)
+{
+	struct iosm_sio *ipc_mbim =
+		container_of(filp->private_data, struct iosm_sio, misc);
+
+	if (cmd != IOCTL_WDM_MAX_COMMAND ||
+	    !access_ok((void __user *)arg, sizeof(ipc_mbim->wmaxcommand)))
+		return -EINVAL;
+
+	if (copy_to_user((void __user *)arg, &ipc_mbim->wmaxcommand,
+			 sizeof(ipc_mbim->wmaxcommand)))
+		return -EFAULT;
+
+	return 0;
+}
+
+/* Open a shared memory device and initialize the head of the rx skbuf list. */
+static int ipc_mbim_fop_open(struct inode *inode, struct file *filp)
+{
+	struct iosm_sio *ipc_mbim =
+		container_of(filp->private_data, struct iosm_sio, misc);
+
+	if (test_and_set_bit(0, &ipc_mbim->mbim_is_open))
+		return -EBUSY;
+
+	ipc_mbim->channel_id = imem_sys_mbim_open(ipc_mbim->imem_instance);
+
+	if (ipc_mbim->channel_id < 0)
+		return -EIO;
+
+	return 0;
+}
+
+/* Close a shared memory control device and free the rx skbuf list. */
+static int ipc_mbim_fop_release(struct inode *inode, struct file *filp)
+{
+	struct iosm_sio *ipc_mbim =
+		container_of(filp->private_data, struct iosm_sio, misc);
+
+	if (ipc_mbim->channel_id < 0)
+		return -EINVAL;
+
+	imem_sys_sio_close(ipc_mbim);
+
+	clear_bit(0, &ipc_mbim->mbim_is_open);
+	return 0;
+}
+
+/* Copy the data from skbuff to the user buffer */
+static ssize_t ipc_mbim_fop_read(struct file *filp, char __user *buf,
+				 size_t size, loff_t *l)
+{
+	struct sk_buff *skb = NULL;
+	struct iosm_sio *ipc_mbim;
+	bool is_blocking;
+
+	if (!access_ok(buf, size))
+		return -EINVAL;
+
+	ipc_mbim = container_of(filp->private_data, struct iosm_sio, misc);
+
+	is_blocking = !(filp->f_flags & O_NONBLOCK);
+
+	/* First provide the pending skbuf to the user. */
+	if (ipc_mbim->rx_pending_buf) {
+		skb = ipc_mbim->rx_pending_buf;
+		ipc_mbim->rx_pending_buf = NULL;
+	}
+
+	/* Check rx queue until skb is available */
+	while (!skb) {
+		skb = skb_dequeue(&ipc_mbim->rx_list);
+		if (skb)
+			break;
+
+		if (!is_blocking)
+			return -EAGAIN;
+
+		/* Suspend the user app and wait a certain time for data
+		 * from CP.
+		 */
+		if (WAIT_FOR_TIMEOUT(&ipc_mbim->read_sem, IPC_READ_TIMEOUT) < 0)
+			return -ETIMEDOUT;
+	}
+
+	return imem_sys_sio_read(ipc_mbim, buf, size, skb);
+}
+
+/* Route the user data to the shared memory layer. */
+static ssize_t ipc_mbim_fop_write(struct file *filp, const char __user *buf,
+				  size_t size, loff_t *l)
+{
+	struct iosm_sio *ipc_mbim;
+	bool is_blocking;
+
+	if (!access_ok(buf, size))
+		return -EINVAL;
+
+	ipc_mbim = container_of(filp->private_data, struct iosm_sio, misc);
+
+	is_blocking = !(filp->f_flags & O_NONBLOCK);
+
+	if (ipc_mbim->channel_id < 0)
+		return -EPERM;
+
+	return imem_sys_sio_write(ipc_mbim, buf, size, is_blocking);
+}
+
+/* Poll mechanism for applications that use nonblocking IO */
+static __poll_t ipc_mbim_fop_poll(struct file *filp, poll_table *wait)
+{
+	struct iosm_sio *ipc_mbim =
+		container_of(filp->private_data, struct iosm_sio, misc);
+	__poll_t mask = EPOLLOUT | EPOLLWRNORM; /* writable */
+
+	/* Just registers wait_queue hook. This doesn't really wait. */
+	poll_wait(filp, &ipc_mbim->poll_inq, wait);
+
+	/* Test the fill level of the skbuf rx queue. */
+	if (!skb_queue_empty(&ipc_mbim->rx_list) || ipc_mbim->rx_pending_buf)
+		mask |= EPOLLIN | EPOLLRDNORM; /* readable */
+
+	return mask;
+}
+
+struct iosm_sio *ipc_mbim_init(struct iosm_imem *ipc_imem, const char *name)
+{
+	struct iosm_sio *ipc_mbim = kzalloc(sizeof(*ipc_mbim), GFP_KERNEL);
+
+	static const struct file_operations fops = {
+		.owner = THIS_MODULE,
+		.open = ipc_mbim_fop_open,
+		.release = ipc_mbim_fop_release,
+		.read = ipc_mbim_fop_read,
+		.write = ipc_mbim_fop_write,
+		.poll = ipc_mbim_fop_poll,
+		.unlocked_ioctl = ipc_mbim_fop_unlocked_ioctl,
+	};
+
+	if (!ipc_mbim)
+		return NULL;
+
+	ipc_mbim->dev = ipc_imem->dev;
+	ipc_mbim->pcie = ipc_imem->pcie;
+	ipc_mbim->imem_instance = ipc_imem;
+
+	ipc_mbim->wmaxcommand = WDM_MAX_SIZE;
+	ipc_mbim->channel_id = -1;
+	ipc_mbim->mbim_is_open = 0;
+
+	init_completion(&ipc_mbim->read_sem);
+
+	skb_queue_head_init(&ipc_mbim->rx_list);
+	init_waitqueue_head(&ipc_mbim->poll_inq);
+	init_waitqueue_head(&ipc_mbim->poll_outq);
+
+	strncpy(ipc_mbim->devname, name, sizeof(ipc_mbim->devname) - 1);
+	ipc_mbim->devname[IPC_SIO_DEVNAME_LEN - 1] = '\0';
+
+	ipc_mbim->misc.minor = MISC_DYNAMIC_MINOR;
+	ipc_mbim->misc.name = ipc_mbim->devname;
+	ipc_mbim->misc.fops = &fops;
+	ipc_mbim->misc.mode = IPC_CHAR_DEVICE_DEFAULT_MODE;
+
+	if (misc_register(&ipc_mbim->misc)) {
+		kfree(ipc_mbim);
+		return NULL;
+	}
+
+	dev_set_drvdata(ipc_mbim->misc.this_device, ipc_mbim);
+
+	return ipc_mbim;
+}
+
+void ipc_mbim_deinit(struct iosm_sio *ipc_mbim)
+{
+	misc_deregister(&ipc_mbim->misc);
+
+	/* Wakeup the user app. */
+	complete(&ipc_mbim->read_sem);
+
+	ipc_pcie_kfree_skb(ipc_mbim->pcie, ipc_mbim->rx_pending_buf);
+	ipc_mbim->rx_pending_buf = NULL;
+
+	skb_queue_purge(&ipc_mbim->rx_list);
+	kfree(ipc_mbim);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_mbim.h b/drivers/net/wwan/iosm/iosm_ipc_mbim.h
new file mode 100644
index 000000000000..4d87c52903ed
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mbim.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_MBIM_H
+#define IOSM_IPC_MBIM_H
+
+/**
+ * ipc_mbim_init - Initialize and create a character device
+ * @ipc_imem:	Pointer to iosm_imem structure
+ * @name:	Pointer to character device name
+ *
+ * Returns: Pointer to iosm_sio instance on success and NULL on failure
+ */
+struct iosm_sio *ipc_mbim_init(struct iosm_imem *ipc_imem, const char *name);
+
+/**
+ * ipc_mbim_deinit - Frees all the memory allocated for the ipc mbim structure.
+ * @ipc_mbim:	Pointer to the ipc mbim data-struct
+ */
+void ipc_mbim_deinit(struct iosm_sio *ipc_mbim);
+
+#endif
-- 
2.12.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 09/18] net: iosm: bottom half
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (7 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 08/18] net: iosm: MBIM control device M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 10/18] net: iosm: multiplex IP sessions M Chetan Kumar
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

1) Bottom half (tasklet) for IRQ and task processing.
2) Tasks are processed asynchronously or synchronously.
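
As a rough sketch (handler name made up for the example), a caller
inside the driver schedules work in tasklet context like this:

	static int example_handler(void *instance, int arg, void *msg,
				   size_t size)
	{
		/* Runs in tasklet (softirq) context. */
		return 0;
	}

	/* Asynchronous: returns as soon as the task is queued. */
	ipc_task_queue_send_task(ipc_imem, example_handler, 0, NULL, 0, false);

	/* Synchronous: blocks until example_handler has run and returns
	 * its result.
	 */
	ret = ipc_task_queue_send_task(ipc_imem, example_handler, 0, NULL, 0,
				       true);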

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_task_queue.c | 258 ++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_task_queue.h |  46 +++++
 2 files changed, 304 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_task_queue.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_task_queue.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_task_queue.c b/drivers/net/wwan/iosm/iosm_ipc_task_queue.c
new file mode 100644
index 000000000000..34f6783f7533
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_task_queue.c
@@ -0,0 +1,258 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/slab.h>
+
+#include "iosm_ipc_task_queue.h"
+
+/* Number of available elements for the input message queue of the
+ * ipc_task.
+ */
+#define IPC_THREAD_QUEUE_SIZE 256
+
+/**
+ * struct ipc_task_queue_args - Struct for Task queue arguments
+ * @instance:	Instance pointer for function to be called in tasklet context
+ * @msg:	Message argument for tasklet function. (optional, can be NULL)
+ * @completion:	OS object used to wait for the tasklet function to finish for
+ *		synchronous calls
+ * @func:	Function to be called in tasklet (tl) context
+ * @arg:	Generic integer argument for tasklet function (optional)
+ * @size:	Message size argument for tasklet function (optional)
+ * @response:	Return code of tasklet function for synchronous calls
+ * @is_copy:	Is true if msg contains a pointer to a copy of the original msg
+ *		for async. calls that needs to be freed once the tasklet returns
+ */
+struct ipc_task_queue_args {
+	void *instance;
+	void *msg;
+	struct completion *completion;
+	int (*func)(void *instance, int arg, void *msg, size_t size);
+	int arg;
+	size_t size;
+	int response;
+	u8 is_copy : 1;
+};
+
+/**
+ * struct ipc_task_queue - Struct for Task queue
+ * @dev:	Pointer to device structure
+ * @q_lock:	Protects the message queue of the ipc_task
+ * @args:	Message queue of the ipc_task
+ * @q_rpos:	First queue element to process.
+ * @q_wpos:	First free element of the input queue.
+ */
+struct ipc_task_queue {
+	struct device *dev;
+	spinlock_t q_lock; /* for atomic operation on queue */
+	struct ipc_task_queue_args args[IPC_THREAD_QUEUE_SIZE];
+	unsigned int q_rpos;
+	unsigned int q_wpos;
+};
+
+/* Actual tasklet function, will be called whenever tasklet is scheduled.
+ * Calls event handler callback for each element in the message queue
+ */
+static void ipc_task_queue_handler(unsigned long data)
+{
+	struct ipc_task_queue *ipc_task = (struct ipc_task_queue *)data;
+	unsigned int q_rpos = ipc_task->q_rpos;
+
+	/* Loop over the input queue contents. */
+	while (q_rpos != ipc_task->q_wpos) {
+		/* Get the current first queue element. */
+		struct ipc_task_queue_args *args = &ipc_task->args[q_rpos];
+
+		/* Process the input message. */
+		if (args->func)
+			args->response = args->func(args->instance, args->arg,
+						    args->msg, args->size);
+
+		/* Signal completion for synchronous calls */
+		if (args->completion)
+			complete(args->completion);
+
+		/* Free message if copy was allocated. */
+		if (args->is_copy)
+			kfree(args->msg);
+
+		/* Invalidate the queue element. Technically
+		 * spin_lock_irqsave is not required here as
+		 * the array element has already been processed,
+		 * so we can assume that immediately after processing
+		 * this element the queue will not wrap around to the
+		 * same element within such a short time.
+		 */
+		args->completion = NULL;
+		args->func = NULL;
+		args->msg = NULL;
+		args->size = 0;
+		args->is_copy = false;
+
+		/* calculate the new read ptr and update the volatile read
+		 * ptr
+		 */
+		q_rpos = (q_rpos + 1) % IPC_THREAD_QUEUE_SIZE;
+		ipc_task->q_rpos = q_rpos;
+	}
+}
+
+/* Free allocations and trigger completions left in the queue during dealloc */
+static void ipc_task_queue_cleanup(struct ipc_task_queue *ipc_task)
+{
+	unsigned int q_rpos = ipc_task->q_rpos;
+
+	while (q_rpos != ipc_task->q_wpos) {
+		struct ipc_task_queue_args *args = &ipc_task->args[q_rpos];
+
+		if (args->completion) {
+			complete(args->completion);
+			args->completion = NULL;
+		}
+
+		if (args->is_copy) {
+			kfree(args->msg);
+			args->is_copy = false;
+			args->msg = NULL;
+		}
+
+		q_rpos = (q_rpos + 1) % IPC_THREAD_QUEUE_SIZE;
+		ipc_task->q_rpos = q_rpos;
+	}
+}
+
+/* Add a message to the queue and trigger the ipc_task. */
+static int
+ipc_task_queue_add_task(struct tasklet_struct *ipc_tasklet,
+			struct ipc_task_queue *ipc_task,
+			int arg, void *argmnt,
+			int (*func)(void *instance, int arg, void *msg,
+				    size_t size),
+			void *instance, size_t size, bool is_copy, bool wait)
+{
+	struct completion completion;
+	unsigned int pos, nextpos;
+	unsigned long flags;
+	int result = -1;
+
+	init_completion(&completion);
+
+	/* tasklet send may be called from both interrupt or thread
+	 * context, therefore protect queue operation by spinlock
+	 */
+	spin_lock_irqsave(&ipc_task->q_lock, flags);
+
+	pos = ipc_task->q_wpos;
+	nextpos = (pos + 1) % IPC_THREAD_QUEUE_SIZE;
+
+	/* Get next queue position. */
+	if (nextpos != ipc_task->q_rpos) {
+		/* Get the reference to the queue element and save the passed
+		 * values.
+		 */
+		ipc_task->args[pos].arg = arg;
+		ipc_task->args[pos].msg = argmnt;
+		ipc_task->args[pos].func = func;
+		ipc_task->args[pos].instance = instance;
+		ipc_task->args[pos].size = size;
+		ipc_task->args[pos].is_copy = is_copy;
+		ipc_task->args[pos].completion = wait ? &completion : NULL;
+		ipc_task->args[pos].response = -1;
+
+		/* apply write barrier so that ipc_task->q_rpos elements
+		 * are updated before ipc_task->q_wpos is being updated.
+		 */
+		smp_wmb();
+
+		/* Update the status of the free queue space. */
+		ipc_task->q_wpos = nextpos;
+		result = 0;
+	}
+
+	spin_unlock_irqrestore(&ipc_task->q_lock, flags);
+
+	if (result == 0) {
+		tasklet_schedule(ipc_tasklet);
+
+		if (wait) {
+			wait_for_completion(&completion);
+			result = ipc_task->args[pos].response;
+		}
+	} else {
+		dev_err(ipc_task->dev, "queue is full");
+	}
+
+	return result;
+}
+
+int ipc_task_queue_send_task(struct iosm_imem *imem,
+			     int (*func)(void *instance, int arg, void *msg,
+					 size_t size),
+			     int arg, void *msg, size_t size, bool wait)
+{
+	struct tasklet_struct *ipc_tasklet = imem->ipc_tasklet;
+	struct ipc_task_queue *ipc_task = imem->ipc_task;
+	bool is_copy = false;
+	void *copy = msg;
+
+	if (!ipc_task || !func)
+		return -EINVAL;
+
+	if (size > 0) {
+		copy = kmemdup(msg, size, GFP_ATOMIC);
+		if (!copy)
+			return -ENOMEM;
+
+		is_copy = true;
+	}
+
+	if (ipc_task_queue_add_task(ipc_tasklet, ipc_task, arg, copy, func,
+				    imem, size, is_copy, wait) < 0) {
+		dev_err(ipc_task->dev,
+			"add task failed for %ps(%d, %p, %zu, %d)", func, arg,
+			copy, size, is_copy);
+		if (is_copy)
+			kfree(copy);
+		return -1;
+	}
+
+	return 0;
+}
+
+struct ipc_task_queue *ipc_task_queue_init(struct tasklet_struct *ipc_tasklet,
+					   struct device *dev)
+{
+	struct ipc_task_queue *ipc_task =
+		kzalloc(sizeof(*ipc_task), GFP_KERNEL);
+	if (!ipc_task)
+		return NULL;
+
+	ipc_task->dev = dev;
+
+	/* Initialize the spinlock needed to protect the message queue of the
+	 * ipc_task
+	 */
+	spin_lock_init(&ipc_task->q_lock);
+
+	tasklet_init(ipc_tasklet, ipc_task_queue_handler,
+		     (unsigned long)ipc_task);
+
+	return ipc_task;
+}
+
+void ipc_task_queue_deinit(struct ipc_task_queue *ipc_task)
+{
+	/* Handle NULL ptr gracefully similar to free() */
+	if (!ipc_task)
+		return;
+
+	/* This will free/complete any outstanding messages,
+	 * without calling the actual handler
+	 */
+	ipc_task_queue_cleanup(ipc_task);
+
+	kfree(ipc_task);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_task_queue.h b/drivers/net/wwan/iosm/iosm_ipc_task_queue.h
new file mode 100644
index 000000000000..e25dc7d9f985
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_task_queue.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_TASK_QUEUE_H
+#define IOSM_IPC_TASK_QUEUE_H
+
+#include <linux/interrupt.h>
+
+#include "iosm_ipc_imem.h"
+
+/**
+ * ipc_task_queue_init - Allocate the task queue and initialize the tasklet
+ * @ipc_tasklet:	Pointer to tasklet_struct
+ * @dev:		Pointer to device structure
+ *
+ * Returns: Pointer to allocated ipc_task data-struct or NULL on failure.
+ */
+struct ipc_task_queue *ipc_task_queue_init(struct tasklet_struct *ipc_tasklet,
+					   struct device *dev);
+
+/**
+ * ipc_task_queue_deinit - Free the task queue, invalidating its pointer.
+ * @ipc_task:	Pointer to ipc_task instance
+ */
+void ipc_task_queue_deinit(struct ipc_task_queue *ipc_task);
+
+/**
+ * ipc_task_queue_send_task - Synchronously/Asynchronously call a function in
+ *			      tasklet context.
+ * @imem:		Pointer to iosm_imem struct
+ * @func:		Function to be called in tasklet context
+ * @arg:		Integer argument for func
+ * @msg:		Message pointer argument for func
+ * @size:		Size argument for func
+ * @wait:		if true wait for result
+ *
+ * Returns: Result value returned by func or -1 if func could not be called.
+ */
+int ipc_task_queue_send_task(struct iosm_imem *imem,
+			     int (*func)(void *instance, int arg, void *msg,
+					 size_t size),
+			     int arg, void *msg, size_t size, bool wait);
+
+#endif
-- 
2.12.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 10/18] net: iosm: multiplex IP sessions
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (8 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 09/18] net: iosm: bottom half M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 11/18] net: iosm: encode or decode datagram M Chetan Kumar
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

Establishes IP sessions between host and device and handles session management.
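
For reference, a short sketch of how the net layer maps a VLAN tag to a
MUX session (mirroring imem_sys_wwan_open() in iosm_ipc_imem_ops.c);
session indices are zero based while VLAN tags start at 1:

	if (vlan_id > 0 && vlan_id <= ipc_mux_get_max_sessions(ipc_imem->mux))
		ret = ipc_mux_open_session(ipc_imem->mux, vlan_id - 1);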

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_mux.c | 455 +++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_mux.h | 344 ++++++++++++++++++++++++++
 2 files changed, 799 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux.c b/drivers/net/wwan/iosm/iosm_ipc_mux.c
new file mode 100644
index 000000000000..3b46ef98460d
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mux.c
@@ -0,0 +1,455 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include "iosm_ipc_mux_codec.h"
+
+/* At the beginning of the runtime phase the IP MUX channel shall be created. */
+static int mux_channel_create(struct iosm_mux *ipc_mux)
+{
+	int channel_id;
+
+	channel_id = imem_channel_alloc(ipc_mux->imem, ipc_mux->instance_id,
+					IPC_CTYPE_WWAN);
+
+	if (channel_id < 0) {
+		dev_err(ipc_mux->dev,
+			"allocation of the MUX channel id failed");
+		ipc_mux->state = MUX_S_ERROR;
+		ipc_mux->event = MUX_E_NOT_APPLICABLE;
+		return channel_id; /* MUX channel is not available. */
+	}
+
+	/* Establish the MUX channel in blocking mode. */
+	ipc_mux->channel = imem_channel_open(ipc_mux->imem, channel_id,
+					     IPC_HP_NET_CHANNEL_INIT);
+
+	if (!ipc_mux->channel) {
+		dev_err(ipc_mux->dev, "imem_channel_open failed");
+		ipc_mux->state = MUX_S_ERROR;
+		ipc_mux->event = MUX_E_NOT_APPLICABLE;
+		return -1; /* MUX channel is not available. */
+	}
+
+	/* Define the MUX active state properties. */
+	ipc_mux->state = MUX_S_ACTIVE;
+	ipc_mux->event = MUX_E_NO_ORDERS;
+	return channel_id;
+}
+
+/* Reset the session/if id state. */
+static void mux_session_free(struct iosm_mux *ipc_mux, int if_id)
+{
+	struct mux_session *if_entry;
+
+	if_entry = &ipc_mux->session[if_id];
+	/* Reset the session state. */
+	if_entry->wwan = NULL;
+}
+
+/* Create and send the session open command. */
+static struct mux_cmd_open_session_resp *
+mux_session_open_send(struct iosm_mux *ipc_mux, int if_id)
+{
+	struct mux_cmd_open_session_resp *open_session_resp;
+	struct mux_acb *acb = &ipc_mux->acb;
+	union mux_cmd_param param;
+
+	/* Add the open_session command to one ACB and start transmission. */
+	param.open_session.flow_ctrl = 0;
+	param.open_session.reserved = 0;
+	param.open_session.ipv4v6_hints = 0;
+	param.open_session.reserved2 = 0;
+	param.open_session.dl_head_pad_len = IPC_MEM_DL_ETH_OFFSET;
+
+	/* Finish and transfer ACB. The user thread is suspended.
+	 * This is a blocking call until CP responds or a timeout occurs.
+	 */
+	acb->wanted_response = MUX_CMD_OPEN_SESSION_RESP;
+	if (mux_dl_acb_send_cmds(ipc_mux, MUX_CMD_OPEN_SESSION, if_id, 0,
+				 &param, sizeof(param.open_session), true,
+				 false) ||
+	    acb->got_response != MUX_CMD_OPEN_SESSION_RESP) {
+		dev_err(ipc_mux->dev, "if_id %d: OPEN_SESSION send failed",
+			if_id);
+		return NULL;
+	}
+
+	open_session_resp = &ipc_mux->acb.got_param.open_session_resp;
+	if (open_session_resp->response != MUX_CMD_RESP_SUCCESS) {
+		dev_err(ipc_mux->dev,
+			"if_id %d,session open failed,response=%d", if_id,
+			(int)open_session_resp->response);
+		return NULL;
+	}
+
+	return open_session_resp;
+}
+
+/* Open the first IP session. */
+static bool mux_session_open(struct iosm_mux *ipc_mux,
+			     struct mux_session_open *session_open)
+{
+	struct mux_cmd_open_session_resp *open_session_resp;
+	int if_id;
+
+	/* Search for a free session interface id. */
+	if_id = session_open->if_id;
+	if (if_id < 0 || if_id >= ipc_mux->nr_sessions) {
+		dev_err(ipc_mux->dev, "invalid interface id=%d", if_id);
+		return false;
+	}
+
+	/* Create and send the session open command.
+	 * This is a blocking call until CP responds or a timeout occurs.
+	 */
+	open_session_resp = mux_session_open_send(ipc_mux, if_id);
+	if (!open_session_resp) {
+		mux_session_free(ipc_mux, if_id);
+		session_open->if_id = -1;
+		return false;
+	}
+
+	/* Initialize the uplink skb accumulator. */
+	skb_queue_head_init(&ipc_mux->session[if_id].ul_list);
+
+	ipc_mux->session[if_id].dl_head_pad_len = IPC_MEM_DL_ETH_OFFSET;
+	ipc_mux->session[if_id].ul_head_pad_len =
+		open_session_resp->ul_head_pad_len;
+	ipc_mux->session[if_id].wwan = ipc_mux->wwan;
+
+	/* Reset the flow ctrl stats of the session */
+	ipc_mux->session[if_id].flow_ctl_en_cnt = 0;
+	ipc_mux->session[if_id].flow_ctl_dis_cnt = 0;
+	ipc_mux->session[if_id].ul_flow_credits = 0;
+	ipc_mux->session[if_id].net_tx_stop = false;
+	ipc_mux->session[if_id].flow_ctl_mask = 0;
+
+	/* Save and return the assigned if id. */
+	session_open->if_id = if_id;
+
+	return true;
+}
+
+/* Free pending session UL packet. */
+static void mux_session_reset(struct iosm_mux *ipc_mux, int if_id)
+{
+	/* Reset the session/if id state. */
+	mux_session_free(ipc_mux, if_id);
+
+	/* Empty the uplink skb accumulator. */
+	skb_queue_purge(&ipc_mux->session[if_id].ul_list);
+}
+
+static void mux_session_close(struct iosm_mux *ipc_mux,
+			      struct mux_session_close *msg)
+{
+	int if_id;
+
+	/* Copy the session interface id. */
+	if_id = msg->if_id;
+
+	if (if_id < 0 || if_id >= ipc_mux->nr_sessions) {
+		dev_err(ipc_mux->dev, "invalid session id %d", if_id);
+		return;
+	}
+
+	/* Create and send the session close command.
+	 * This is a blocking call until CP responds or a timeout occurs.
+	 */
+	if (mux_dl_acb_send_cmds(ipc_mux, MUX_CMD_CLOSE_SESSION, if_id, 0, NULL,
+				 0, true, false))
+		dev_err(ipc_mux->dev, "if_id %d: CLOSE_SESSION send failed",
+			if_id);
+
+	/* Reset the flow ctrl stats of the session */
+	ipc_mux->session[if_id].flow_ctl_en_cnt = 0;
+	ipc_mux->session[if_id].flow_ctl_dis_cnt = 0;
+	ipc_mux->session[if_id].flow_ctl_mask = 0;
+
+	mux_session_reset(ipc_mux, if_id);
+}
+
+static void mux_channel_close(struct iosm_mux *ipc_mux,
+			      struct mux_channel_close *channel_close_p)
+{
+	int i;
+
+	/* Free pending session UL packet. */
+	for (i = 0; i < ipc_mux->nr_sessions; i++)
+		if (ipc_mux->session[i].wwan)
+			mux_session_reset(ipc_mux, i);
+
+	imem_channel_close(ipc_mux->imem, ipc_mux->channel_id);
+
+	/* Reset the MUX object. */
+	ipc_mux->state = MUX_S_INACTIVE;
+	ipc_mux->event = MUX_E_INACTIVE;
+}
+
+/* CP has interrupted AP. If AP is in IP MUX mode, execute the pending ops. */
+static int mux_schedule(struct iosm_mux *ipc_mux, union mux_msg *msg)
+{
+	enum mux_event order;
+	bool success;
+
+	if (!ipc_mux->initialized)
+		return -1; /* Shall be used as normal IP channel. */
+
+	order = msg->common.event;
+
+	switch (ipc_mux->state) {
+	case MUX_S_INACTIVE:
+		if (order != MUX_E_MUX_SESSION_OPEN)
+			/* Wait for the request to open a session */
+			return -1;
+
+		if (ipc_mux->event == MUX_E_INACTIVE)
+			/* Establish the MUX channel and the new state. */
+			ipc_mux->channel_id = mux_channel_create(ipc_mux);
+
+		if (ipc_mux->state != MUX_S_ACTIVE)
+			/* Missing the MUX channel. */
+			return -1;
+
+		/* Disable the TD update timer and open the first IP session. */
+		imem_td_update_timer_suspend(ipc_mux->imem, true);
+		ipc_mux->event = MUX_E_MUX_SESSION_OPEN;
+		success = mux_session_open(ipc_mux, &msg->session_open);
+
+		imem_td_update_timer_suspend(ipc_mux->imem, false);
+		return success ? ipc_mux->channel_id : -1;
+
+	case MUX_S_ACTIVE:
+		switch (order) {
+		case MUX_E_MUX_SESSION_OPEN:
+			/* Disable the TD update timer and open a session */
+			imem_td_update_timer_suspend(ipc_mux->imem, true);
+			ipc_mux->event = MUX_E_MUX_SESSION_OPEN;
+			success = mux_session_open(ipc_mux, &msg->session_open);
+			imem_td_update_timer_suspend(ipc_mux->imem, false);
+			return success ? ipc_mux->channel_id : -1;
+
+		case MUX_E_MUX_SESSION_CLOSE:
+			/* Release an IP session. */
+			ipc_mux->event = MUX_E_MUX_SESSION_CLOSE;
+			mux_session_close(ipc_mux, &msg->session_close);
+			return ipc_mux->channel_id;
+
+		case MUX_E_MUX_CHANNEL_CLOSE:
+			/* Close the MUX channel pipes. */
+			ipc_mux->event = MUX_E_MUX_CHANNEL_CLOSE;
+			mux_channel_close(ipc_mux, &msg->channel_close);
+			return ipc_mux->channel_id;
+
+		default:
+			/* Invalid order. */
+			return -1;
+		}
+
+	default:
+		dev_err(ipc_mux->dev,
+			"unexpected MUX transition: state=%d, event=%d",
+			ipc_mux->state, ipc_mux->event);
+		return -1;
+	}
+}
+
+struct iosm_mux *mux_init(struct ipc_mux_config *mux_cfg,
+			  struct iosm_imem *imem)
+{
+	struct iosm_mux *ipc_mux = kzalloc(sizeof(*ipc_mux), GFP_KERNEL);
+	int i, ul_tds, ul_td_size;
+	struct mux_session *session;
+	struct sk_buff_head *free_list;
+	struct sk_buff *skb;
+
+	if (!ipc_mux)
+		return NULL;
+
+	ipc_mux->protocol = mux_cfg->protocol;
+	ipc_mux->ul_flow = mux_cfg->ul_flow;
+	ipc_mux->nr_sessions = mux_cfg->nr_sessions;
+	ipc_mux->instance_id = mux_cfg->instance_id;
+	ipc_mux->wwan_q_offset = 0;
+
+	ipc_mux->pcie = imem->pcie;
+	ipc_mux->imem = imem;
+	ipc_mux->ipc_protocol = imem->ipc_protocol;
+	ipc_mux->dev = imem->dev;
+	ipc_mux->wwan = imem->wwan;
+
+	ipc_mux->session =
+		kcalloc(ipc_mux->nr_sessions, sizeof(*session), GFP_KERNEL);
+
+	/* Get the reference to the id list. */
+	session = ipc_mux->session;
+
+	/* Get the reference to the UL ADB list. */
+	free_list = &ipc_mux->ul_adb.free_list;
+
+	/* Initialize the list with free ADB. */
+	skb_queue_head_init(free_list);
+
+	ul_td_size = IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE;
+
+	ul_tds = IPC_MEM_MAX_TDS_MUX_LITE_UL;
+
+	ipc_mux->ul_adb.dest_skb = NULL;
+
+	ipc_mux->initialized = true;
+	ipc_mux->adb_prep_ongoing = false;
+	ipc_mux->size_needed = 0;
+	ipc_mux->ul_data_pend_bytes = 0;
+	ipc_mux->state = MUX_S_INACTIVE;
+	ipc_mux->ev_mux_net_transmit_pending = false;
+	ipc_mux->tx_transaction_id = 0;
+	ipc_mux->rr_next_session = 0;
+	ipc_mux->event = MUX_E_INACTIVE;
+	ipc_mux->channel_id = -1;
+	ipc_mux->channel = NULL;
+
+	/* Allocate the list of UL ADB. */
+	for (i = 0; i < ul_tds; i++) {
+		dma_addr_t mapping;
+
+		skb = ipc_pcie_alloc_skb(ipc_mux->pcie, ul_td_size, GFP_ATOMIC,
+					 &mapping, DMA_TO_DEVICE, 0);
+		if (!skb) {
+			ipc_mux_deinit(ipc_mux);
+			return NULL;
+		}
+		/* Extend the UL ADB list. */
+		skb_queue_tail(free_list, skb);
+	}
+
+	return ipc_mux;
+}
+
+/* Informs the network stack to restart transmission for all opened sessions if
+ * flow control is not ON for that session.
+ */
+static void mux_restart_tx_for_all_sessions(struct iosm_mux *ipc_mux)
+{
+	struct mux_session *session;
+	int idx;
+
+	for (idx = 0; idx < ipc_mux->nr_sessions; idx++) {
+		session = &ipc_mux->session[idx];
+
+		if (!session->wwan)
+			continue;
+
+		/* If flow control for the session is OFF and TX was stopped,
+		 * then restart: inform the network interface to resume
+		 * sending data.
+		 */
+		if (session->flow_ctl_mask == 0) {
+			session->net_tx_stop = false;
+			mux_netif_tx_flowctrl(session, idx, false);
+		}
+	}
+}
+
+/* Informs the network stack to stop sending further packets for all opened
+ * sessions.
+ */
+static void mux_stop_netif_for_all_sessions(struct iosm_mux *ipc_mux)
+{
+	struct mux_session *session;
+	int idx;
+
+	for (idx = 0; idx < ipc_mux->nr_sessions; idx++) {
+		session = &ipc_mux->session[idx];
+
+		if (!session->wwan)
+			continue;
+
+		mux_netif_tx_flowctrl(session, session->if_id, true);
+	}
+}
+
+void ipc_mux_check_n_restart_tx(struct iosm_mux *ipc_mux)
+{
+	if (ipc_mux->ul_flow == MUX_UL) {
+		int low_thresh = IPC_MEM_MUX_UL_FLOWCTRL_LOW_B;
+
+		if (ipc_mux->ul_data_pend_bytes < low_thresh)
+			mux_restart_tx_for_all_sessions(ipc_mux);
+	}
+}
+
+int ipc_mux_get_max_sessions(struct iosm_mux *ipc_mux)
+{
+	return ipc_mux ? ipc_mux->nr_sessions : -1;
+}
+
+enum ipc_mux_protocol ipc_mux_get_active_protocol(struct iosm_mux *ipc_mux)
+{
+	return ipc_mux ? ipc_mux->protocol : MUX_UNKNOWN;
+}
+
+int ipc_mux_open_session(struct iosm_mux *ipc_mux, int session_nr)
+{
+	struct mux_session_open *session_open;
+	union mux_msg mux_msg;
+
+	session_open = &mux_msg.session_open;
+	session_open->event = MUX_E_MUX_SESSION_OPEN;
+
+	session_open->if_id = session_nr;
+	ipc_mux->session[session_nr].flags |= IPC_MEM_WWAN_MUX;
+	return mux_schedule(ipc_mux, &mux_msg);
+}
+
+int ipc_mux_close_session(struct iosm_mux *ipc_mux, int session_nr)
+{
+	struct mux_session_close *session_close;
+	union mux_msg mux_msg;
+	int ret_val;
+
+	session_close = &mux_msg.session_close;
+	session_close->event = MUX_E_MUX_SESSION_CLOSE;
+
+	session_close->if_id = session_nr;
+	ret_val = mux_schedule(ipc_mux, &mux_msg);
+	ipc_mux->session[session_nr].flags &= ~IPC_MEM_WWAN_MUX;
+
+	return ret_val;
+}
+
+void ipc_mux_deinit(struct iosm_mux *ipc_mux)
+{
+	struct mux_channel_close *channel_close;
+	struct sk_buff_head *free_list;
+	union mux_msg mux_msg;
+	struct sk_buff *skb;
+
+	if (!ipc_mux)
+		return;
+
+	if (!ipc_mux->initialized)
+		return;
+
+	mux_stop_netif_for_all_sessions(ipc_mux);
+
+	channel_close = &mux_msg.channel_close;
+	channel_close->event = MUX_E_MUX_CHANNEL_CLOSE;
+	mux_schedule(ipc_mux, &mux_msg);
+
+	/* Empty the ADB free list. */
+	free_list = &ipc_mux->ul_adb.free_list;
+
+	/* Remove from the head of the downlink queue. */
+	while ((skb = skb_dequeue(free_list)))
+		ipc_pcie_kfree_skb(ipc_mux->pcie, skb);
+
+	if (ipc_mux->channel) {
+		ipc_mux->channel->ul_pipe.is_open = false;
+		ipc_mux->channel->dl_pipe.is_open = false;
+	}
+
+	kfree(ipc_mux->session);
+	kfree(ipc_mux);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux.h b/drivers/net/wwan/iosm/iosm_ipc_mux.h
new file mode 100644
index 000000000000..4df5e1a6f7ce
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mux.h
@@ -0,0 +1,344 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_MUX_H
+#define IOSM_IPC_MUX_H
+
+#include "iosm_ipc_protocol.h"
+
+/* Size of the buffer for the IP MUX data buffer. */
+#define IPC_MEM_MAX_DL_MUX_BUF_SIZE (16 * 1024)
+#define IPC_MEM_MAX_UL_ADB_BUF_SIZE IPC_MEM_MAX_DL_MUX_BUF_SIZE
+
+/* Size of the buffer for the IP MUX Lite data buffer. */
+#define IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE (2 * 1024)
+
+/* TD counts for IP MUX Lite */
+#define IPC_MEM_MAX_TDS_MUX_LITE_UL 800
+#define IPC_MEM_MAX_TDS_MUX_LITE_DL 1200
+
+/* open session request (AP->CP) */
+#define MUX_CMD_OPEN_SESSION 1
+
+/* response to open session request (CP->AP) */
+#define MUX_CMD_OPEN_SESSION_RESP 2
+
+/* close session request (AP->CP) */
+#define MUX_CMD_CLOSE_SESSION 3
+
+/* response to close session request (CP->AP) */
+#define MUX_CMD_CLOSE_SESSION_RESP 4
+
+/* Flow control command with mask of the flow per queue/flow. */
+#define MUX_LITE_CMD_FLOW_CTL 5
+
+/* ACK the flow control command. Shall have the same Transaction ID as the
+ * matching FLOW_CTL command.
+ */
+#define MUX_LITE_CMD_FLOW_CTL_ACK 6
+
+/* Command for report packet indicating link quality metrics. */
+#define MUX_LITE_CMD_LINK_STATUS_REPORT 7
+
+/* Response to a report packet */
+#define MUX_LITE_CMD_LINK_STATUS_REPORT_RESP 8
+
+/* Used to reset a command/response state. */
+#define MUX_CMD_INVALID 255
+
+/* command response : command processed successfully */
+#define MUX_CMD_RESP_SUCCESS 0
+
+/* MUX for vlan devices */
+#define IPC_MEM_WWAN_MUX BIT(0)
+
+/* Initiated actions to change the state of the MUX object. */
+enum mux_event {
+	MUX_E_INACTIVE, /* No initiated actions. */
+	MUX_E_MUX_SESSION_OPEN, /* Create the MUX channel and a session. */
+	MUX_E_MUX_SESSION_CLOSE, /* Release a session. */
+	MUX_E_MUX_CHANNEL_CLOSE, /* Release the MUX channel. */
+	MUX_E_NO_ORDERS, /* No MUX order. */
+	MUX_E_NOT_APPLICABLE, /* Defective IP MUX. */
+};
+
+struct mux_session_open {
+	enum mux_event event;
+	int if_id;
+};
+
+/* MUX session close command. */
+struct mux_session_close {
+	enum mux_event event;
+	int if_id;
+};
+
+/* MUX channel close command. */
+struct mux_channel_close {
+	enum mux_event event;
+};
+
+/* Default message type to find out the right message type. */
+struct mux_common {
+	enum mux_event event;
+};
+
+/* List of the MUX orders. */
+union mux_msg {
+	struct mux_session_open session_open;
+	struct mux_session_close session_close;
+	struct mux_channel_close channel_close;
+	struct mux_common common;
+};
+
+/* Parameter definition of the open session command. */
+struct mux_cmd_open_session {
+	u32 flow_ctrl : 1; /* 0: Flow control disabled (flow allowed). */
+	/* 1: Flow control enabled (flow not allowed)*/
+	u32 reserved : 7; /* Reserved. Set to zero. */
+	u32 ipv4v6_hints : 1; /* 0: IPv4/IPv6 hints not supported.*/
+	/* 1: IPv4/IPv6 hints supported*/
+	u32 reserved2 : 23; /* Reserved. Set to zero. */
+	u32 dl_head_pad_len; /* Maximum length supported */
+	/* for DL head padding on a datagram. */
+};
+
+/* Parameter definition of the open session response. */
+struct mux_cmd_open_session_resp {
+	u32 response; /* Response code */
+	u32 flow_ctrl : 1; /* 0: Flow control disabled (flow allowed). */
+	/* 1: Flow control enabled (flow not allowed) */
+	u32 reserved : 7; /* Reserved. Set to zero. */
+	u32 ipv4v6_hints : 1; /* 0: IPv4/IPv6 hints not supported */
+	/* 1: IPv4/IPv6 hints supported */
+	u32 reserved2 : 23; /* Reserved. Set to zero. */
+	u32 ul_head_pad_len; /* Actual length supported for */
+	/* UL head padding on a datagram. */
+};
+
+/* Parameter definition of the close session response code */
+struct mux_cmd_close_session_resp {
+	u32 response;
+};
+
+/* Parameter definition of the flow control command. */
+struct mux_cmd_flow_ctl {
+	u32 mask; /* indicating the desired flow control */
+	/* state for various flows/queues */
+};
+
+/* Parameter definition of the link status report code*/
+struct mux_cmd_link_status_report {
+	u8 payload[1];
+};
+
+/* Parameter definition of the link status report response code. */
+struct mux_cmd_link_status_report_resp {
+	u32 response;
+};
+
+/**
+ * union mux_cmd_param - Union-definition of the command parameters.
+ * @open_session:	Inband command for open session
+ * @open_session_resp:	Inband command for open session response
+ * @close_session_resp:	Inband command for close session response
+ * @flow_ctl:		In-band flow control on the opened interfaces
+ * @link_status:	In-band Link Status Report
+ * @link_status_resp:	In-band command for link status report response
+ */
+union mux_cmd_param {
+	struct mux_cmd_open_session open_session;
+	struct mux_cmd_open_session_resp open_session_resp;
+	struct mux_cmd_close_session_resp close_session_resp;
+	struct mux_cmd_flow_ctl flow_ctl;
+	struct mux_cmd_link_status_report link_status;
+	struct mux_cmd_link_status_report_resp link_status_resp;
+};
+
+/* States of the MUX object. */
+enum mux_state {
+	MUX_S_INACTIVE, /* IP MUX is unused. */
+	MUX_S_ACTIVE, /* IP MUX channel is available. */
+	MUX_S_ERROR, /* Defective IP MUX. */
+};
+
+/* Supported MUX protocols. */
+enum ipc_mux_protocol {
+	MUX_UNKNOWN,
+	MUX_LITE,
+};
+
+/* Supported UL data transfer methods. */
+enum ipc_mux_ul_flow {
+	MUX_UL_UNKNOWN,
+	MUX_UL, /* Normal UL data transfer */
+	MUX_UL_ON_CREDITS, /* UL data transfer will be based on credits */
+};
+
+/* List of the MUX session. */
+struct mux_session {
+	struct iosm_wwan *wwan; /* Network interface used for communication */
+	int if_id; /* Interface id for the session open message. */
+	u32 flags;
+	u32 ul_head_pad_len; /* Nr of bytes for UL head padding. */
+	u32 dl_head_pad_len; /* Nr of bytes for DL head padding. */
+	struct sk_buff_head ul_list; /* skb entries for an ADT. */
+	u32 flow_ctl_mask; /* UL flow control */
+	u32 flow_ctl_en_cnt; /* Flow control Enable cmd count */
+	u32 flow_ctl_dis_cnt; /* Flow Control Disable cmd count */
+	int ul_flow_credits; /* UL flow credits */
+	u8 net_tx_stop : 1;
+	u8 flush : 1; /* flush net interface ? */
+};
+
+/* State of a single UL data block. */
+struct mux_adb {
+	struct sk_buff *dest_skb; /* Current UL skb for the data block. */
+	u8 *buf; /* ADB memory. */
+	struct mux_adgh *adgh; /* ADGH pointer */
+	struct sk_buff *qlth_skb; /* QLTH pointer */
+	u32 *next_table_index; /* Pointer to next table index. */
+	struct sk_buff_head free_list; /* List of alloc. ADB for the UL sess.*/
+	int size; /* Size of the ADB memory. */
+	u32 if_cnt; /* Statistic counter */
+	u32 dg_cnt_total;
+	u32 payload_size;
+};
+
+/* Temporary ACB state. */
+struct mux_acb {
+	struct sk_buff *skb; /* Used UL skb. */
+	int if_id; /* Session id. */
+	u32 wanted_response;
+	u32 got_response;
+	u32 cmd;
+	union mux_cmd_param got_param; /* Received command/response parameter */
+};
+
+/**
+ * struct iosm_mux - State of the data multiplexing over an IP channel.
+ * @dev:		pointer to device structure
+ * @session:		List of the MUX sessions.
+ * @channel:		Reference to the IP MUX channel
+ * @pcie:		Pointer to iosm_pcie struct
+ * @imem:		Pointer to iosm_imem
+ * @wwan:		Pointer to iosm_wwan
+ * @ipc_protocol:	Pointer to iosm_protocol
+ * @channel_id:		Channel ID for MUX
+ * @protocol:		Type of the MUX protocol
+ * @ul_flow:		UL Flow type
+ * @nr_sessions:	Number of sessions
+ * @instance_id:	Instance ID
+ * @state:		States of the MUX object
+ * @event:		Initiated actions to change the state of the MUX object
+ * @tx_transaction_id:	Transaction id for the ACB command.
+ * @rr_next_session:	Next session number for round robin.
+ * @ul_adb:		State of the UL ADB/ADGH.
+ * @size_needed:	Variable to store the size needed during ADB preparation
+ * @ul_data_pend_bytes:	Pending UL data to be processed in bytes
+ * @acb:		Temporary ACB state
+ * @wwan_q_offset:	This will hold the offset of the given instance
+ *			Useful while passing or receiving packets from
+ *			wwan/imem layer.
+ * @initialized:	MUX object is initialized
+ * @ev_mux_net_transmit_pending:
+ *			0 means inform the IPC tasklet to pass the
+ *			accumulated uplink ADB to CP.
+ * @adb_prep_ongoing:	Flag for ADB preparation status
+ */
+struct iosm_mux {
+	struct device *dev;
+	struct mux_session *session;
+	struct ipc_mem_channel *channel;
+	struct iosm_pcie *pcie;
+	struct iosm_imem *imem;
+	struct iosm_wwan *wwan;
+	struct iosm_protocol *ipc_protocol;
+	int channel_id;
+	enum ipc_mux_protocol protocol;
+	enum ipc_mux_ul_flow ul_flow;
+	int nr_sessions;
+	int instance_id;
+	enum mux_state state;
+	enum mux_event event;
+	u32 tx_transaction_id;
+	int rr_next_session;
+	struct mux_adb ul_adb;
+	int size_needed;
+	long long ul_data_pend_bytes;
+	struct mux_acb acb;
+	int wwan_q_offset;
+	u8 initialized : 1;
+	u8 ev_mux_net_transmit_pending : 1;
+	u8 adb_prep_ongoing : 1;
+};
+
+/* MUX configuration structure */
+struct ipc_mux_config {
+	enum ipc_mux_protocol protocol;
+	enum ipc_mux_ul_flow ul_flow;
+	int nr_sessions;
+	int instance_id;
+};
+
+/**
+ * mux_init - Allocates and initializes MUX instance data
+ * @mux_cfg:	Pointer to MUX configuration structure
+ * @ipc_imem:	Pointer to imem data-struct
+ *
+ * Returns: Initialized mux pointer on success else NULL
+ */
+struct iosm_mux *mux_init(struct ipc_mux_config *mux_cfg,
+			  struct iosm_imem *ipc_imem);
+
+/**
+ * ipc_mux_deinit - Deallocates MUX instance data
+ * @ipc_mux:	Pointer to the MUX instance data.
+ */
+void ipc_mux_deinit(struct iosm_mux *ipc_mux);
+
+/**
+ * ipc_mux_check_n_restart_tx - Restart the net interface TX queue if the UL
+ *				flow type is legacy for the given instance and
+ *				the device has switched flow control off.
+ * @ipc_mux:	Pointer to MUX data-struct
+ */
+void ipc_mux_check_n_restart_tx(struct iosm_mux *ipc_mux);
+
+/**
+ * ipc_mux_get_active_protocol - Returns the active MUX protocol type.
+ * @ipc_mux:	Pointer to MUX data-struct
+ *
+ * Returns: enum of type ipc_mux_protocol
+ */
+enum ipc_mux_protocol ipc_mux_get_active_protocol(struct iosm_mux *ipc_mux);
+
+/**
+ * ipc_mux_open_session - Opens a MUX session.
+ * @ipc_mux:	Pointer to MUX data-struct
+ * @session_nr:	Interface ID or session number
+ *
+ * Returns: channel id on success, -1 on failure
+ */
+int ipc_mux_open_session(struct iosm_mux *ipc_mux, int session_nr);
+
+/**
+ * ipc_mux_close_session - Closes a MUX session.
+ * @ipc_mux:	Pointer to MUX data-struct
+ * @session_nr:	Interface ID or session number
+ *
+ * Returns: channel id on success, -1 on failure
+ */
+int ipc_mux_close_session(struct iosm_mux *ipc_mux, int session_nr);
+
+/**
+ * ipc_mux_get_max_sessions - Returns the maximum sessions supported on the
+ *			      provided MUX instance.
+ * @ipc_mux:	Pointer to MUX data-struct
+ *
+ * Returns: Number of sessions supported on success and -1 on failure
+ */
+int ipc_mux_get_max_sessions(struct iosm_mux *ipc_mux);
+#endif
-- 
2.12.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 11/18] net: iosm: encode or decode datagram
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (9 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 10/18] net: iosm: multiplex IP sessions M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 12/18] net: iosm: power management M Chetan Kumar
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

1) Encode UL packets into datagrams.
2) Decode DL datagrams and route them to the network layer.
3) Support credit-based flow control.
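
For orientation, a sketch of how a net device transmit handler might hand
an skb to this codec, following the return convention documented in
iosm_ipc_mux_codec.h (illustrative only; the net_device to session
mapping is assumed to exist in the wwan layer):

  /* Hypothetical example: feed one UL skb into the MUX codec. */
  static netdev_tx_t example_wwan_xmit(struct iosm_mux *ipc_mux, int if_id,
  				       struct sk_buff *skb)
  {
  	int ret = ipc_mux_ul_trigger_encode(ipc_mux, if_id, skb);

  	/* -2: session list is full; the codec already stopped the netif
  	 * queue, so ask the stack to retry the packet later.
  	 */
  	if (ret == -2)
  		return NETDEV_TX_BUSY;

  	/* -1: channel or session not usable, drop the packet. */
  	if (ret < 0) {
  		dev_kfree_skb_any(skb);
  		return NETDEV_TX_OK;
  	}

  	/* 0: queued; the IPC tasklet will encode and send it to CP. */
  	return NETDEV_TX_OK;
  }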

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_mux_codec.c | 902 +++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_mux_codec.h | 194 +++++++
 2 files changed, 1096 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_mux_codec.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux_codec.c b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.c
new file mode 100644
index 000000000000..54437651704e
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.c
@@ -0,0 +1,902 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/if_vlan.h>
+
+#include "iosm_ipc_imem_ops.h"
+#include "iosm_ipc_mux_codec.h"
+#include "iosm_ipc_task_queue.h"
+
+/* Test the link power state and send a MUX command in blocking mode. */
+static int mux_tq_cmd_send(void *instance, int arg, void *msg, size_t size)
+{
+	struct iosm_mux *ipc_mux = ((struct iosm_imem *)instance)->mux;
+	const struct mux_acb *acb = msg;
+
+	skb_queue_tail(&ipc_mux->channel->ul_list, acb->skb);
+	imem_ul_send(ipc_mux->imem);
+
+	return 0;
+}
+
+static int mux_acb_send(struct iosm_mux *ipc_mux, bool blocking)
+{
+	struct completion *completion = &ipc_mux->channel->ul_sem;
+
+	if (ipc_task_queue_send_task(ipc_mux->imem, mux_tq_cmd_send, 0,
+				     &ipc_mux->acb, sizeof(ipc_mux->acb),
+				     false)) {
+		dev_err(ipc_mux->dev, "unable to send mux command");
+		return -1;
+	}
+
+	/* If blocking, suspend the app and wait for the irq in the flash or
+	 * crash phase. Return -ETIMEDOUT on timeout to indicate failure.
+	 */
+	if (blocking) {
+		u32 wait_time_milliseconds = IPC_MUX_CMD_RUN_DEFAULT_TIMEOUT;
+
+		reinit_completion(completion);
+
+		if (WAIT_FOR_TIMEOUT(completion, wait_time_milliseconds) == 0) {
+			dev_err(ipc_mux->dev, "ch[%d] timeout",
+				ipc_mux->channel_id);
+			ipc_uevent_send(ipc_mux->imem->dev, UEVENT_MDM_TIMEOUT);
+			return -ETIMEDOUT;
+		}
+	}
+
+	return 0;
+}
+
+/* Prepare mux Command */
+static struct mux_lite_cmdh *mux_lite_add_cmd(struct iosm_mux *ipc_mux, u32 cmd,
+					      struct mux_acb *acb, void *param,
+					      u32 param_size)
+{
+	struct mux_lite_cmdh *cmdh = (struct mux_lite_cmdh *)acb->skb->data;
+
+	cmdh->signature = MUX_SIG_CMDH;
+	cmdh->command_type = cmd;
+	cmdh->if_id = acb->if_id;
+
+	acb->cmd = cmd;
+
+	cmdh->cmd_len = offsetof(struct mux_lite_cmdh, param) + param_size;
+	cmdh->transaction_id = ipc_mux->tx_transaction_id++;
+
+	if (param)
+		memcpy(&cmdh->param, param, param_size);
+
+	skb_put(acb->skb, cmdh->cmd_len);
+
+	return cmdh;
+}
+
+static int mux_acb_alloc(struct iosm_mux *ipc_mux)
+{
+	struct mux_acb *acb = &ipc_mux->acb;
+	struct sk_buff *skb;
+	dma_addr_t mapping;
+
+	/* Allocate skb memory for the uplink buffer. */
+	skb = ipc_pcie_alloc_skb(ipc_mux->pcie, MUX_MAX_UL_ACB_BUF_SIZE,
+				 GFP_ATOMIC, &mapping, DMA_TO_DEVICE, 0);
+	if (!skb)
+		return -ENOMEM;
+
+	/* Save the skb address. */
+	acb->skb = skb;
+
+	memset(skb->data, 0, MUX_MAX_UL_ACB_BUF_SIZE);
+
+	return 0;
+}
+
+int mux_dl_acb_send_cmds(struct iosm_mux *ipc_mux, u32 cmd_type, u8 if_id,
+			 u32 transaction_id, union mux_cmd_param *param,
+			 size_t res_size, bool blocking, bool respond)
+{
+	struct mux_acb *acb = &ipc_mux->acb;
+	struct mux_lite_cmdh *ack_lite;
+	int ret = 0;
+
+	acb->if_id = if_id;
+	ret = mux_acb_alloc(ipc_mux);
+	if (ret)
+		return ret;
+
+	ack_lite = mux_lite_add_cmd(ipc_mux, cmd_type, acb, param, res_size);
+	if (respond)
+		ack_lite->transaction_id = (u32)transaction_id;
+
+	ret = mux_acb_send(ipc_mux, blocking);
+
+	return ret;
+}
+
+void mux_netif_tx_flowctrl(struct mux_session *session, int idx, bool on)
+{
+	/* Inform the network interface to start/stop flow ctrl */
+	if (ipc_wwan_is_tx_stopped(session->wwan, idx) != on)
+		ipc_wwan_tx_flowctrl(session->wwan, idx, on);
+}
+
+static int mux_dl_cmdresps_decode_process(struct iosm_mux *ipc_mux,
+					  struct mux_lite_cmdh *cmdh)
+{
+	struct mux_acb *acb = &ipc_mux->acb;
+
+	switch (cmdh->command_type) {
+	case MUX_CMD_OPEN_SESSION_RESP:
+	case MUX_CMD_CLOSE_SESSION_RESP:
+		/* Resume the control application. */
+		acb->got_param = cmdh->param;
+		break;
+
+	case MUX_LITE_CMD_FLOW_CTL_ACK:
+		/* This command type is not expected as response for
+		 * Aggregation version of the protocol. So return non-zero.
+		 */
+		if (ipc_mux->protocol != MUX_LITE)
+			return -EINVAL;
+
+		dev_dbg(ipc_mux->dev, "if[%u] FLOW_CTL_ACK(%u) received",
+			cmdh->if_id, cmdh->transaction_id);
+		break;
+
+	default:
+		return -EINVAL;
+	}
+
+	acb->wanted_response = MUX_CMD_INVALID;
+	acb->got_response = cmdh->command_type;
+	complete(&ipc_mux->channel->ul_sem);
+
+	return 0;
+}
+
+static int mux_dl_dlcmds_decode_process(struct iosm_mux *ipc_mux,
+					struct mux_lite_cmdh *cmdh)
+{
+	union mux_cmd_param *param = &cmdh->param;
+	struct mux_session *session;
+	int new_size;
+
+	dev_dbg(ipc_mux->dev, "if_id[%d]: dlcmds decode process %d",
+		cmdh->if_id, cmdh->command_type);
+
+	switch (cmdh->command_type) {
+	case MUX_LITE_CMD_FLOW_CTL:
+
+		if (cmdh->if_id >= ipc_mux->nr_sessions) {
+			dev_err(ipc_mux->dev, "if_id [%d] not valid",
+				cmdh->if_id);
+			return -EINVAL; /* No session interface id. */
+		}
+
+		session = &ipc_mux->session[cmdh->if_id];
+
+		new_size = offsetof(struct mux_lite_cmdh, param) +
+			   sizeof(param->flow_ctl);
+		if (param->flow_ctl.mask == 0xFFFFFFFF) {
+			/* Backward Compatibility */
+			if (cmdh->cmd_len == new_size)
+				session->flow_ctl_mask = param->flow_ctl.mask;
+			else
+				session->flow_ctl_mask = ~0;
+			/* if CP asks for FLOW CTRL Enable
+			 * then set our internal flow control Tx flag
+			 * to limit uplink session queueing
+			 */
+			session->net_tx_stop = true;
+			/* Update the stats */
+			session->flow_ctl_en_cnt++;
+		} else if (param->flow_ctl.mask == 0) {
+			/* Just reset the Flow control mask and let
+			 * mux_flow_ctrl_low_thre_b take control on
+			 * our internal Tx flag and enabling kernel
+			 * flow control
+			 */
+			/* Backward Compatibility */
+			if (cmdh->cmd_len == new_size)
+				session->flow_ctl_mask = param->flow_ctl.mask;
+			else
+				session->flow_ctl_mask = 0;
+			/* Update the stats */
+			session->flow_ctl_dis_cnt++;
+		} else {
+			break;
+		}
+
+		dev_dbg(ipc_mux->dev, "if[%u] FLOW CTRL 0x%08X", cmdh->if_id,
+			param->flow_ctl.mask);
+		break;
+
+	case MUX_LITE_CMD_LINK_STATUS_REPORT:
+		break;
+
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
+/* Decode and Send appropriate response to a command block. */
+static void mux_dl_cmd_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb)
+{
+	struct mux_lite_cmdh *cmdh = (struct mux_lite_cmdh *)skb->data;
+
+	if (mux_dl_cmdresps_decode_process(ipc_mux, cmdh)) {
+		/* Failure to decode the command response indicates the cmd_type
+		 * may be a command instead of a response. So try to decode it.
+		 */
+		if (!mux_dl_dlcmds_decode_process(ipc_mux, cmdh)) {
+			/* Decoded command may need a response. Give the
+			 * response according to the command type.
+			 */
+			union mux_cmd_param *mux_cmd = NULL;
+			size_t size = 0;
+			u32 cmd = MUX_LITE_CMD_LINK_STATUS_REPORT_RESP;
+
+			if (cmdh->command_type ==
+			    MUX_LITE_CMD_LINK_STATUS_REPORT) {
+				mux_cmd = &cmdh->param;
+				mux_cmd->link_status_resp.response =
+					MUX_CMD_RESP_SUCCESS;
+				/* response field is u32 */
+				size = sizeof(u32);
+			} else if (cmdh->command_type ==
+				   MUX_LITE_CMD_FLOW_CTL) {
+				cmd = MUX_LITE_CMD_FLOW_CTL_ACK;
+			} else {
+				return;
+			}
+
+			if (mux_dl_acb_send_cmds(ipc_mux, cmd, cmdh->if_id,
+						 cmdh->transaction_id, mux_cmd,
+						 size, false, true))
+				dev_err(ipc_mux->dev,
+					"if_id %d: cmd send failed",
+					cmdh->if_id);
+		}
+	}
+}
+
+/* Pass the DL packet to the netif layer. */
+static int mux_net_receive(struct iosm_mux *ipc_mux, int if_id,
+			   struct iosm_wwan *wwan, u32 offset, u8 service_class,
+			   struct sk_buff *skb)
+{
+	/* for "zero copy" use clone */
+	struct sk_buff *dest_skb = skb_clone(skb, GFP_ATOMIC);
+
+	if (!dest_skb)
+		return -1;
+
+	skb_pull(dest_skb, offset);
+
+	skb_set_tail_pointer(dest_skb, dest_skb->len);
+
+	/* Go to the start of the Ethernet header. */
+	skb_push(dest_skb, ETH_HLEN);
+
+	/* map session to vlan */
+	__vlan_hwaccel_put_tag(dest_skb, htons(ETH_P_8021Q), if_id + 1);
+
+	/* Pass the packet to the netif layer. */
+	dest_skb->priority = service_class;
+
+	return ipc_wwan_receive(wwan, dest_skb, false);
+}
+
+/* Decode Flow Credit Table in the block */
+static void mux_dl_fcth_decode(struct iosm_mux *ipc_mux, void *block)
+{
+	struct ipc_mem_lite_gen_tbl *fct = (struct ipc_mem_lite_gen_tbl *)block;
+	struct iosm_wwan *wwan;
+	int ul_credits = 0;
+	int if_id = 0;
+
+	if (fct->vfl_length != sizeof(fct->vfl[0].nr_of_bytes)) {
+		dev_err(ipc_mux->dev, "unexpected FCT length: %d",
+			fct->vfl_length);
+		return;
+	}
+
+	if_id = fct->if_id;
+	if (if_id >= ipc_mux->nr_sessions) {
+		dev_err(ipc_mux->dev, "not supported if_id: %d", if_id);
+		return;
+	}
+
+	/* Is the session active ? */
+	wwan = ipc_mux->session[if_id].wwan;
+	if (!wwan) {
+		dev_err(ipc_mux->dev, "session Net ID is NULL");
+		return;
+	}
+
+	ul_credits = fct->vfl[0].nr_of_bytes;
+
+	dev_dbg(ipc_mux->dev, "Flow_Credit:: if_id[%d] Old: %d Grants: %d",
+		if_id, ipc_mux->session[if_id].ul_flow_credits, ul_credits);
+
+	/* Update the Flow Credit information from ADB */
+	ipc_mux->session[if_id].ul_flow_credits += ul_credits;
+
+	/* Check whether the TX can be started */
+	if (ipc_mux->session[if_id].ul_flow_credits > 0) {
+		ipc_mux->session[if_id].net_tx_stop = false;
+		mux_netif_tx_flowctrl(&ipc_mux->session[if_id],
+				      ipc_mux->session[if_id].if_id, false);
+	}
+}
+
+/* Decode non-aggregated datagram */
+static void mux_dl_adgh_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb)
+{
+	u32 pad_len, packet_offset;
+	struct iosm_wwan *wwan;
+	struct mux_adgh *adgh;
+	u8 *block = skb->data;
+	int rc = 0;
+	u8 if_id;
+
+	adgh = (struct mux_adgh *)block;
+
+	if (adgh->signature != MUX_SIG_ADGH) {
+		dev_err(ipc_mux->dev, "invalid ADGH signature received");
+		return;
+	}
+
+	if_id = adgh->if_id;
+	if (if_id >= ipc_mux->nr_sessions) {
+		dev_err(ipc_mux->dev, "invalid if_id while decoding %d", if_id);
+		return;
+	}
+
+	/* Is the session active ? */
+	wwan = ipc_mux->session[if_id].wwan;
+	if (!wwan) {
+		dev_err(ipc_mux->dev, "session Net ID is NULL");
+		return;
+	}
+
+	/* Store the pad len for the corresponding session.
+	 * Pad bytes are as negotiated in the open session, less the header
+	 * size (see the session management chapter for details).
+	 * If the resulting padding is zero or less, the additional head
+	 * padding is omitted. For example, if HEAD_PAD_LEN is 16 or less,
+	 * this field is omitted; if HEAD_PAD_LEN is 20, this field will have
+	 * 4 bytes set to zero.
+	 */
+	pad_len =
+		ipc_mux->session[if_id].dl_head_pad_len - IPC_MEM_DL_ETH_OFFSET;
+	packet_offset = sizeof(*adgh) + pad_len;
+
+	if_id += ipc_mux->wwan_q_offset;
+
+	/* Pass the packet to the netif layer */
+	rc = mux_net_receive(ipc_mux, if_id, wwan, packet_offset,
+			     adgh->service_class, skb);
+	if (rc) {
+		dev_err(ipc_mux->dev, "mux adgh decoding error");
+		return;
+	}
+	ipc_mux->session[if_id].flush = 1;
+}
+
+void ipc_mux_dl_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb)
+{
+	u32 signature;
+
+	if (!skb->data || !ipc_mux)
+		return;
+
+	/* Decode the MUX header type. */
+	signature = le32_to_cpup((__le32 *)skb->data);
+
+	switch (signature) {
+	case MUX_SIG_ADGH:
+		mux_dl_adgh_decode(ipc_mux, skb);
+		break;
+
+	case MUX_SIG_FCTH:
+		mux_dl_fcth_decode(ipc_mux, skb->data);
+		break;
+
+	case MUX_SIG_CMDH:
+		mux_dl_cmd_decode(ipc_mux, skb);
+		break;
+
+	default:
+		dev_err(ipc_mux->dev, "invalid ABH signature");
+	}
+
+	ipc_pcie_kfree_skb(ipc_mux->pcie, skb);
+}
+
+static int mux_ul_skb_alloc(struct iosm_mux *ipc_mux, struct mux_adb *ul_adb,
+			    u32 type)
+{
+	/* Take the first element of the free list. */
+	struct sk_buff *skb = skb_dequeue(&ul_adb->free_list);
+	int qlt_size;
+
+	if (!skb)
+		return -1; /* Wait for a free ADB skb. */
+
+	/* Mark it as UL ADB to select the right free operation. */
+	IPC_CB(skb)->op_type = (u8)UL_MUX_OP_ADB;
+
+	switch (type) {
+	case MUX_SIG_ADGH:
+		/* Save the ADB memory settings. */
+		ul_adb->dest_skb = skb;
+		ul_adb->buf = skb->data;
+		ul_adb->size = IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE;
+		/* reset statistic counter */
+		ul_adb->if_cnt = 0;
+		ul_adb->payload_size = 0;
+		ul_adb->dg_cnt_total = 0;
+
+		ul_adb->adgh = (struct mux_adgh *)skb->data;
+		memset(ul_adb->adgh, 0, sizeof(struct mux_adgh));
+		break;
+
+	case MUX_SIG_QLTH:
+		qlt_size = offsetof(struct ipc_mem_lite_gen_tbl, vfl) +
+			   (MUX_QUEUE_LEVEL * sizeof(struct mux_lite_vfl));
+
+		if (qlt_size > IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE) {
+			dev_err(ipc_mux->dev,
+				"can't support. QLT size:%d SKB size: %d",
+				qlt_size, IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE);
+			return -1;
+		}
+
+		ul_adb->qlth_skb = skb;
+		memset((ul_adb->qlth_skb)->data, 0, qlt_size);
+		skb_put(skb, qlt_size);
+		break;
+	}
+
+	return 0;
+}
+
+static void mux_ul_adgh_finish(struct iosm_mux *ipc_mux)
+{
+	struct mux_adb *ul_adb = &ipc_mux->ul_adb;
+	long long bytes;
+	char *str;
+
+	if (!ul_adb || !ul_adb->dest_skb) {
+		dev_err(ipc_mux->dev, "no dest skb");
+		return;
+	}
+	skb_put(ul_adb->dest_skb, ul_adb->adgh->length);
+	skb_queue_tail(&ipc_mux->channel->ul_list, ul_adb->dest_skb);
+	ul_adb->dest_skb = NULL;
+
+	if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS) {
+		struct mux_session *session;
+
+		session = &ipc_mux->session[ul_adb->adgh->if_id];
+		str = "available_credits";
+		bytes = (long long)session->ul_flow_credits;
+
+	} else {
+		str = "pend_bytes";
+		bytes = ipc_mux->ul_data_pend_bytes;
+		ipc_mux->ul_data_pend_bytes += ul_adb->adgh->length;
+	}
+
+	dev_dbg(ipc_mux->dev, "UL ADGH: size=%d, if_id=%d, payload=%d, %s=%lld",
+		ul_adb->adgh->length, ul_adb->adgh->if_id, ul_adb->payload_size,
+		str, bytes);
+}
+
+/* Allocates an ADB from the free list and initializes it with ADBH  */
+static bool mux_ul_adb_allocate(struct iosm_mux *ipc_mux, struct mux_adb *adb,
+				int *size_needed, u32 type)
+{
+	bool ret_val = false;
+	int status;
+
+	if (!adb->dest_skb) {
+		/* Allocate memory for the ADB including of the
+		 * datagram table header.
+		 */
+		status = mux_ul_skb_alloc(ipc_mux, adb, type);
+		if (status != 0)
+			/* Is a pending ADB available ? */
+			ret_val = true; /* None. */
+
+		/* Update size need to zero only for new ADB memory */
+		*size_needed = 0;
+	}
+
+	return ret_val;
+}
+
+/* Informs the network stack to stop sending further packets for all opened
+ * sessions
+ */
+static void mux_stop_tx_for_all_sessions(struct iosm_mux *ipc_mux)
+{
+	struct mux_session *session;
+	int idx;
+
+	for (idx = 0; idx < ipc_mux->nr_sessions; idx++) {
+		session = &ipc_mux->session[idx];
+
+		if (!session->wwan)
+			continue;
+
+		session->net_tx_stop = true;
+	}
+}
+
+/* Sends Queue Level Table of all opened sessions */
+static bool mux_lite_send_qlt(struct iosm_mux *ipc_mux)
+{
+	struct ipc_mem_lite_gen_tbl *qlt;
+	struct mux_session *session;
+	bool qlt_updated = false;
+	int i, ql_idx;
+	int qlt_size;
+
+	if (!ipc_mux->initialized || ipc_mux->state != MUX_S_ACTIVE)
+		return qlt_updated;
+
+	qlt_size = offsetof(struct ipc_mem_lite_gen_tbl, vfl) +
+		   MUX_QUEUE_LEVEL * sizeof(struct mux_lite_vfl);
+
+	for (i = 0; i < ipc_mux->nr_sessions; i++) {
+		session = &ipc_mux->session[i];
+
+		if (!session->wwan || session->flow_ctl_mask != 0)
+			continue;
+
+		if (mux_ul_skb_alloc(ipc_mux, &ipc_mux->ul_adb, MUX_SIG_QLTH)) {
+			dev_err(ipc_mux->dev,
+				"no reserved mem to send QLT of if_id: %d", i);
+			break;
+		}
+
+		/* Prepare QLT */
+		qlt = (struct ipc_mem_lite_gen_tbl *)(ipc_mux->ul_adb.qlth_skb)
+			      ->data;
+		qlt->signature = MUX_SIG_QLTH;
+		qlt->length = qlt_size;
+		qlt->if_id = i;
+		qlt->vfl_length = MUX_QUEUE_LEVEL * sizeof(struct mux_lite_vfl);
+		qlt->reserved[0] = 0;
+		qlt->reserved[1] = 0;
+
+		for (ql_idx = 0; ql_idx < MUX_QUEUE_LEVEL; ql_idx++)
+			qlt->vfl[ql_idx].nr_of_bytes = session->ul_list.qlen;
+
+		/* Add QLT to the transfer list. */
+		skb_queue_tail(&ipc_mux->channel->ul_list,
+			       ipc_mux->ul_adb.qlth_skb);
+
+		qlt_updated = true;
+		ipc_mux->ul_adb.qlth_skb = NULL;
+	}
+
+	if (qlt_updated)
+		/* Updates the TDs with ul_list */
+		(void)imem_ul_write_td(ipc_mux->imem);
+
+	return qlt_updated;
+}
+
+/* Checks the available credits for the specified session and returns
+ * number of packets for which credits are available.
+ */
+static int mux_ul_bytes_credits_check(struct iosm_mux *ipc_mux,
+				      struct mux_session *session,
+				      struct sk_buff_head *ul_list,
+				      int max_nr_of_pkts)
+{
+	int pkts_to_send = 0;
+	struct sk_buff *skb;
+	int credits = 0;
+
+	if (!ipc_mux || !session || !ul_list)
+		return 0;
+
+	if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS) {
+		credits = session->ul_flow_credits;
+		if (credits <= 0) {
+			dev_dbg(ipc_mux->dev,
+				"FC::if_id[%d] Insuff.Credits/Qlen:%d/%u",
+				session->if_id, session->ul_flow_credits,
+				session->ul_list.qlen); /* nr_of_bytes */
+			return 0;
+		}
+	} else {
+		credits = IPC_MEM_MUX_UL_FLOWCTRL_HIGH_B -
+			  ipc_mux->ul_data_pend_bytes;
+		if (credits <= 0) {
+			mux_stop_tx_for_all_sessions(ipc_mux);
+
+			dev_dbg(ipc_mux->dev,
+				"if_id[%d] Stopped encoding.PendBytes: %llu, high_thresh: %d",
+				session->if_id, ipc_mux->ul_data_pend_bytes,
+				IPC_MEM_MUX_UL_FLOWCTRL_HIGH_B);
+			return 0;
+		}
+	}
+
+	/* Check if there are enough credits/bytes available to send the
+	 * requested max_nr_of_pkts. Otherwise restrict the nr_of_pkts
+	 * depending on available credits.
+	 */
+	skb_queue_walk(ul_list, skb)
+	{
+		if (!(credits >= skb->len && pkts_to_send < max_nr_of_pkts))
+			break;
+		credits -= skb->len;
+		pkts_to_send++;
+	}
+
+	return pkts_to_send;
+}
+
+/* Encode the UL IP packet according to Lite spec. */
+static int mux_ul_adgh_encode(struct iosm_mux *ipc_mux, int session_id,
+			      struct mux_session *session,
+			      struct sk_buff_head *ul_list, struct mux_adb *adb,
+			      int nr_of_pkts)
+{
+	int offset = sizeof(struct mux_adgh);
+	int adb_updated = -EINVAL;
+	struct sk_buff *src_skb;
+	int aligned_size = 0;
+	int nr_of_skb = 0;
+	u32 pad_len = 0;
+	int vlan_id;
+
+	/* Re-calculate the number of packets depending on number of bytes to be
+	 * processed/available credits.
+	 */
+	nr_of_pkts = mux_ul_bytes_credits_check(ipc_mux, session, ul_list,
+						nr_of_pkts);
+
+	/* If calculated nr_of_pkts from available credits is <= 0
+	 * then nothing to do.
+	 */
+	if (nr_of_pkts <= 0)
+		return 0;
+
+	/* Read configured UL head_pad_length for session.*/
+	if (session->ul_head_pad_len > IPC_MEM_DL_ETH_OFFSET)
+		pad_len = session->ul_head_pad_len - IPC_MEM_DL_ETH_OFFSET;
+
+	/* Process all pending UL packets for this session
+	 * depending on the allocated datagram table size.
+	 */
+	while (nr_of_pkts > 0) {
+		/* get destination skb allocated */
+		if (mux_ul_adb_allocate(ipc_mux, adb, &ipc_mux->size_needed,
+					MUX_SIG_ADGH)) {
+			dev_err(ipc_mux->dev, "no reserved memory for ADGH");
+			return -ENOMEM;
+		}
+
+		/* Peek at the head of the list. */
+		src_skb = skb_peek(ul_list);
+		if (!src_skb) {
+			dev_err(ipc_mux->dev,
+				"skb peek return NULL with count : %d",
+				nr_of_pkts);
+			break;
+		}
+
+		/* Calculate the memory value. */
+		aligned_size = ALIGN((pad_len + src_skb->len), 4);
+
+		ipc_mux->size_needed = sizeof(struct mux_adgh) + aligned_size;
+
+		if (ipc_mux->size_needed > adb->size) {
+			dev_dbg(ipc_mux->dev, "size needed %d, adgh size %d",
+				ipc_mux->size_needed, adb->size);
+			/* Return 1 if any IP packet is added to the transfer
+			 * list.
+			 */
+			return nr_of_skb ? 1 : 0;
+		}
+
+		vlan_id = session_id + ipc_mux->wwan_q_offset;
+		ipc_wwan_update_stats(session->wwan, vlan_id, src_skb->len,
+				      true);
+
+		/* Add the buffer (without head padding) to the next pending transfer */
+		memcpy(adb->buf + offset + pad_len, src_skb->data,
+		       src_skb->len);
+
+		adb->adgh->signature = MUX_SIG_ADGH;
+		adb->adgh->if_id = session_id;
+		adb->adgh->length =
+			sizeof(struct mux_adgh) + pad_len + src_skb->len;
+		adb->adgh->service_class = src_skb->priority;
+		adb->adgh->next_count = --nr_of_pkts;
+		adb->dg_cnt_total++;
+		adb->payload_size += src_skb->len;
+
+		if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS)
+			/* Decrement the credit value as we are processing the
+			 * datagram from the UL list.
+			 */
+			session->ul_flow_credits -= src_skb->len;
+
+		/* Remove the processed elements and free it. */
+		src_skb = skb_dequeue(ul_list);
+		dev_kfree_skb(src_skb);
+		nr_of_skb++;
+
+		mux_ul_adgh_finish(ipc_mux);
+	}
+
+	if (nr_of_skb) {
+		/* Send QLT info to modem if pending bytes > high watermark
+		 * in case of mux lite
+		 */
+		if (ipc_mux->ul_flow == MUX_UL_ON_CREDITS ||
+		    ipc_mux->ul_data_pend_bytes >=
+			    IPC_MEM_MUX_UL_FLOWCTRL_LOW_B)
+			adb_updated = mux_lite_send_qlt(ipc_mux);
+		else
+			adb_updated = 1;
+
+		/* Updates the TDs with ul_list */
+		(void)imem_ul_write_td(ipc_mux->imem);
+	}
+
+	return adb_updated;
+}
+
+bool ipc_mux_ul_data_encode(struct iosm_mux *ipc_mux)
+{
+	struct sk_buff_head *ul_list;
+	struct mux_session *session;
+	int updated = 0;
+	int session_id;
+	int dg_n;
+	int i;
+
+	if (!ipc_mux || ipc_mux->state != MUX_S_ACTIVE ||
+	    ipc_mux->adb_prep_ongoing)
+		return false;
+
+	ipc_mux->adb_prep_ongoing = true;
+
+	for (i = 0; i < ipc_mux->nr_sessions; i++) {
+		session_id = ipc_mux->rr_next_session;
+		session = &ipc_mux->session[session_id];
+
+		/* Go to the next session; handle rr_next_session overflow */
+		ipc_mux->rr_next_session++;
+		if (ipc_mux->rr_next_session >= ipc_mux->nr_sessions)
+			ipc_mux->rr_next_session = 0;
+
+		if (!session->wwan || session->flow_ctl_mask ||
+		    session->net_tx_stop)
+			continue;
+
+		ul_list = &session->ul_list;
+
+		/* Is something pending in UL and flow ctrl off */
+		dg_n = skb_queue_len(ul_list);
+		if (dg_n > MUX_MAX_UL_DG_ENTRIES)
+			dg_n = MUX_MAX_UL_DG_ENTRIES;
+
+		if (dg_n == 0)
+			/* Nothing to do for ipc_mux session
+			 * -> try next session id.
+			 */
+			continue;
+
+		updated = mux_ul_adgh_encode(ipc_mux, session_id, session,
+					     ul_list, &ipc_mux->ul_adb, dg_n);
+	}
+
+	ipc_mux->adb_prep_ongoing = false;
+	return updated == 1;
+}
+
+void ipc_mux_ul_encoded_process(struct iosm_mux *ipc_mux, struct sk_buff *skb)
+{
+	struct mux_adgh *adgh;
+
+	if (!ipc_mux || !skb || !skb->data)
+		return;
+
+	adgh = (struct mux_adgh *)skb->data;
+
+	if (adgh->signature == MUX_SIG_ADGH && ipc_mux->ul_flow == MUX_UL)
+		ipc_mux->ul_data_pend_bytes -= adgh->length;
+
+	if (ipc_mux->ul_flow == MUX_UL)
+		dev_dbg(ipc_mux->dev, "ul_data_pend_bytes: %lld",
+			ipc_mux->ul_data_pend_bytes);
+
+	/* Reset the skb settings. */
+	skb->tail = 0;
+	skb->len = 0;
+
+	/* Add the consumed ADB to the free list. */
+	skb_queue_tail((&ipc_mux->ul_adb.free_list), skb);
+}
+
+/* Start the NETIF uplink send transfer in MUX mode. */
+static int mux_tq_ul_trigger_encode(void *instance, int arg, void *msg,
+				    size_t size)
+{
+	struct iosm_mux *ipc_mux = ((struct iosm_imem *)instance)->mux;
+	bool ul_data_pend = false;
+
+	/* Add session UL data to a ADB and ADGH */
+	ul_data_pend = ipc_mux_ul_data_encode(ipc_mux);
+	if (ul_data_pend)
+		/* Delay the doorbell irq */
+		imem_td_update_timer_start(ipc_mux->imem);
+
+	/* reset the debounce flag */
+	ipc_mux->ev_mux_net_transmit_pending = false;
+
+	return 0;
+}
+
+int ipc_mux_ul_trigger_encode(struct iosm_mux *ipc_mux, int if_id,
+			      struct sk_buff *skb)
+{
+	struct mux_session *session = &ipc_mux->session[if_id];
+
+	if (ipc_mux->channel &&
+	    ipc_mux->channel->state != IMEM_CHANNEL_ACTIVE) {
+		dev_err(ipc_mux->dev,
+			"channel state is not IMEM_CHANNEL_ACTIVE");
+		return -1;
+	}
+
+	if (!session->wwan) {
+		dev_err(ipc_mux->dev, "session net ID is NULL");
+		return -1;
+	}
+
+	/* Session flow control:
+	 * check if the packet can be queued in the session list; if not,
+	 * suspend net tx.
+	 */
+	if (skb_queue_len(&session->ul_list) >=
+	    (session->net_tx_stop ?
+		     IPC_MEM_MUX_UL_SESS_FCON_THRESHOLD :
+		     (IPC_MEM_MUX_UL_SESS_FCON_THRESHOLD *
+		      IPC_MEM_MUX_UL_SESS_FCOFF_THRESHOLD_FACTOR))) {
+		mux_netif_tx_flowctrl(session, session->if_id, true);
+		return -2;
+	}
+
+	/* Add skb to the uplink skb accumulator. */
+	skb_queue_tail(&session->ul_list, skb);
+
+	/* Inform the IPC kthread to pass uplink IP packets to CP. */
+	if (!ipc_mux->ev_mux_net_transmit_pending) {
+		ipc_mux->ev_mux_net_transmit_pending = true;
+		if (ipc_task_queue_send_task(ipc_mux->imem,
+					     mux_tq_ul_trigger_encode, 0, NULL,
+					     0, false))
+			return -1;
+	}
+	dev_dbg(ipc_mux->dev, "mux ul if[%d] qlen=%d/%u, len=%d/%d, prio=%d",
+		if_id, skb_queue_len(&session->ul_list), session->ul_list.qlen,
+		skb->len, skb->truesize, skb->priority);
+
+	return 0;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux_codec.h b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.h
new file mode 100644
index 000000000000..796790113ad5
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_mux_codec.h
@@ -0,0 +1,194 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_MUX_CODEC_H
+#define IOSM_IPC_MUX_CODEC_H
+
+#include "iosm_ipc_mux.h"
+
+/* Queue level size and reporting
+ * >1 is enable, 0 is disable
+ */
+#define MUX_QUEUE_LEVEL 1
+
+/* Size of the buffer for the IP MUX commands. */
+#define MUX_MAX_UL_ACB_BUF_SIZE 256
+
+/* Maximum number of packets in a go per session */
+#define MUX_MAX_UL_DG_ENTRIES 100
+
+/* ADGH: Signature of the Datagram Header. */
+#define MUX_SIG_ADGH 0x48474441
+
+/* CMDH: Signature of the Command Header. */
+#define MUX_SIG_CMDH 0x48444D43
+
+/* QLTH: Signature of the Queue Level Table */
+#define MUX_SIG_QLTH 0x48544C51
+
+/* FCTH: Signature of the Flow Credit Table */
+#define MUX_SIG_FCTH 0x48544346
+
+/* MUX UL session threshold factor */
+#define IPC_MEM_MUX_UL_SESS_FCOFF_THRESHOLD_FACTOR (4)
+
+/* Size of the buffer for the IP MUX Lite data buffer. */
+#define IPC_MEM_MAX_DL_MUX_LITE_BUF_SIZE (2 * 1024)
+
+/* MUX UL session threshold in number of packets */
+#define IPC_MEM_MUX_UL_SESS_FCON_THRESHOLD (64)
+
+/* Default timeout for sending IPC session commands like
+ * open session, close session etc.
+ * Unit: milliseconds
+ */
+#define IPC_MUX_CMD_RUN_DEFAULT_TIMEOUT 1000 /* 1 second */
+
+/* MUX UL flow control lower threshold in bytes */
+#define IPC_MEM_MUX_UL_FLOWCTRL_LOW_B 10240 /* 10KB */
+
+/* MUX UL flow control higher threshold in bytes (5ms worth of data)*/
+#define IPC_MEM_MUX_UL_FLOWCTRL_HIGH_B (110 * 1024)
+
+/**
+ * struct mux_adgh - Aggregated Datagram Header.
+ * @signature:		Signature of the Aggregated Datagram Header(0x48474441)
+ * @length:		Length (in bytes) of the datagram header. This length
+ *			shall include the header size. Min value: 0x10
+ * @if_id:		ID of the interface the datagrams belong to
+ * @opt_ipv4v6:		Indicates IPv4 (=0) or IPv6 (=1). It is optional; if
+ *			not used, set it to zero.
+ * @reserved:		Reserved bits. Set to zero.
+ * @service_class:	Service class identifier for the datagram.
+ * @next_count:		Count of the datagrams that shall follow this
+ *			datagram for this interface. A count of zero means
+ *			the next datagram may not belong to this interface.
+ * @reserved1:		Reserved bytes, Set to zero
+ */
+struct mux_adgh {
+	u32 signature;
+	u16 length;
+	u8 if_id;
+	u8 opt_ipv4v6 : 1;
+	u8 reserved : 7;
+	u8 service_class;
+	u8 next_count;
+	u8 reserved1[6];
+};
+
+/**
+ * struct mux_lite_cmdh - MUX Lite Command Header
+ * @signature:		Signature of the Command Header(0x48444D43)
+ * @cmd_len:		Length (in bytes) of the command. This length shall
+ *			include the header size. Minimum value: 0x10
+ * @if_id:		ID of the interface the commands in the table belong to.
+ * @reserved:		Reserved Set to zero.
+ * @command_type:	Command Enum.
+ * @transaction_id:	4 byte value shall be generated and sent along with a
+ *			command. Responses and ACKs shall have the same
+ *			Transaction ID as their commands. It shall be unique to
+ *			the command transaction on the given interface.
+ * @param:		Optional parameters used with the command.
+ */
+struct mux_lite_cmdh {
+	u32 signature;
+	u16 cmd_len;
+	u8 if_id;
+	u8 reserved;
+	u32 command_type;
+	u32 transaction_id;
+	union mux_cmd_param param;
+};
+
+/**
+ * struct mux_lite_vfl - value field in generic table
+ * @nr_of_bytes:	Number of bytes available to transmit in the queue.
+ */
+struct mux_lite_vfl {
+	u32 nr_of_bytes;
+};
+
+/**
+ * struct ipc_mem_lite_gen_tbl - Generic table format for Queue Level
+ *				 and Flow Credit
+ * @signature:	Signature of the table
+ * @length:	Length of the table
+ * @if_id:	ID of the interface the table belongs to
+ * @vfl_length:	Value field length
+ * @reserved:	Reserved
+ * @vfl:	Value field of variable length
+ */
+struct ipc_mem_lite_gen_tbl {
+	u32 signature;
+	u16 length;
+	u8 if_id;
+	u8 vfl_length;
+	u32 reserved[2];
+	struct mux_lite_vfl vfl[1];
+};
+
+/**
+ * ipc_mux_dl_decode - Route the DL packet through the IP MUX layer
+ *		      depending on the header.
+ * @ipc_mux:	Pointer to MUX data-struct
+ * @skb:	Pointer to ipc_skb.
+ */
+void ipc_mux_dl_decode(struct iosm_mux *ipc_mux, struct sk_buff *skb);
+
+/**
+ * mux_dl_acb_send_cmds - Respond to the Command blocks.
+ * @ipc_mux:		Pointer to MUX data-struct
+ * @cmd_type:		Command
+ * @if_id:		Session interface id.
+ * @transaction_id:	Command transaction id.
+ * @param:		Pointer to command params.
+ * @res_size:		Response size
+ * @blocking:		True for blocking send
+ * @respond:		If true return transaction ID
+ *
+ * Returns: 0 in success and -ve for failure
+ */
+int mux_dl_acb_send_cmds(struct iosm_mux *ipc_mux, u32 cmd_type, u8 if_id,
+			 u32 transaction_id, union mux_cmd_param *param,
+			 size_t res_size, bool blocking, bool respond);
+
+/**
+ * mux_netif_tx_flowctrl - Enable/Disable TX flow control on MUX sessions.
+ * @session:	Pointer to mux_session struct
+ * @idx:	Session ID
+ * @on:		true to enable and false to disable flow control
+ */
+void mux_netif_tx_flowctrl(struct mux_session *session, int idx, bool on);
+
+/**
+ * ipc_mux_ul_trigger_encode - Route the UL packet through the IP MUX layer
+ *			       for encoding.
+ * @ipc_mux:	Pointer to MUX data-struct
+ * @if_id:	Session ID.
+ * @skb:	Pointer to ipc_skb.
+ *
+ * Returns: 0 if successfully encoded
+ *	    -1 on failure
+ *	    -2 if packet has to be retransmitted.
+ */
+int ipc_mux_ul_trigger_encode(struct iosm_mux *ipc_mux, int if_id,
+			      struct sk_buff *skb);
+/**
+ * ipc_mux_ul_data_encode - UL encode function for calling from Tasklet context.
+ * @ipc_mux:	Pointer to MUX data-struct
+ *
+ * Returns: TRUE if any packet of any session is encoded, FALSE otherwise.
+ */
+bool ipc_mux_ul_data_encode(struct iosm_mux *ipc_mux);
+
+/**
+ * ipc_mux_ul_encoded_process - Handles the Modem processed UL data by adding
+ *				the SKB to the UL free list.
+ * @ipc_mux:	Pointer to MUX data-struct
+ * @skb:	Pointer to ipc_skb.
+ */
+void ipc_mux_ul_encoded_process(struct iosm_mux *ipc_mux, struct sk_buff *skb);
+
+#endif
-- 
2.12.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 12/18] net: iosm: power management
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (10 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 11/18] net: iosm: encode or decode datagram M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 13/18] net: iosm: shared memory protocol M Chetan Kumar
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

Implement a state machine to handle host and device sleep.
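
For orientation, a sketch of how a transfer path might use the sleep
helpers added in this patch (illustrative only; the call ordering and the
doorbell identifier are assumptions made for the example, not taken from
the driver):

  /* Hypothetical example: make sure the device is awake, then ring the
   * HPDA doorbell so CP picks up the new head pointer.
   */
  static int example_sync_with_device(struct iosm_pm *ipc_pm)
  {
  	/* Arm the "device reached ACTIVE" completion before waiting. */
  	ipc_pm_host_slp_reinit_dev_active_completion(ipc_pm);

  	/* Blocks up to IPC_PM_ACTIVE_TIMEOUT_MS for the ACTIVE state. */
  	if (!ipc_pm_wait_for_device_active(ipc_pm))
  		return -ETIMEDOUT;

  	/* Ring the doorbell; deferred internally if the host is asleep. */
  	ipc_pm_signal_hpda_doorbell(ipc_pm, 0, true);

  	return 0;
  }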

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_pm.c | 334 ++++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_pm.h | 216 +++++++++++++++++++++++
 2 files changed, 550 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_pm.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_pm.c b/drivers/net/wwan/iosm/iosm_ipc_pm.c
new file mode 100644
index 000000000000..662f8f309ec0
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_pm.c
@@ -0,0 +1,334 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include "iosm_ipc_protocol.h"
+#include "iosm_ipc_task_queue.h"
+
+/* Timeout value in MS for the PM to wait for device to reach active state */
+#define IPC_PM_ACTIVE_TIMEOUT_MS (500)
+
+/* Value definitions for union ipc_pm_cond members.
+ *
+ * Note that here "active" has the value 1, as compared to the enums
+ * ipc_mem_host_pm_state or ipc_mem_dev_pm_state, where "active" is 0
+ */
+#define IPC_PM_SLEEP (0)
+#define IPC_PM_ACTIVE (1)
+
+/* Trigger the doorbell interrupt on cp to change the PM sleep/active status */
+#define ipc_cp_irq_sleep_control(ipc_pcie, data)                               \
+	ipc_doorbell_fire(ipc_pcie, IPC_DOORBELL_IRQ_SLEEP, data)
+
+/* Trigger the doorbell interrupt on CP to do hpda update */
+#define ipc_cp_irq_hpda_update(ipc_pcie, data)                                 \
+	ipc_doorbell_fire(ipc_pcie, IPC_DOORBELL_IRQ_HPDA, 0xFF & (data))
+
+void ipc_pm_signal_hpda_doorbell(struct iosm_pm *ipc_pm, u32 identifier,
+				 bool host_slp_check)
+{
+	if (host_slp_check && ipc_pm->host_pm_state != IPC_MEM_HOST_PM_ACTIVE &&
+	    ipc_pm->host_pm_state != IPC_MEM_HOST_PM_ACTIVE_WAIT) {
+		ipc_pm->pending_hpda_update = true;
+		dev_dbg(ipc_pm->dev,
+			"Pending HPDA update set. Host PM_State: %d identifier:%d",
+			ipc_pm->host_pm_state, identifier);
+		return;
+	}
+
+	if (!ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_IRQ, true)) {
+		ipc_pm->pending_hpda_update = true;
+		dev_dbg(ipc_pm->dev, "Pending HPDA update set. identifier:%d",
+			identifier);
+		return;
+	}
+	ipc_pm->pending_hpda_update = false;
+
+	/* Trigger the irq towards CP */
+	ipc_cp_irq_hpda_update(ipc_pm->pcie, identifier);
+
+	ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_IRQ, false);
+}
+
+/* Wake up the device if it is in low power mode. */
+static bool ipc_pm_link_activate(struct iosm_pm *ipc_pm)
+{
+	if (ipc_pm->cp_state == IPC_MEM_DEV_PM_ACTIVE)
+		return true;
+
+	if (ipc_pm->cp_state == IPC_MEM_DEV_PM_SLEEP) {
+		if (ipc_pm->ap_state == IPC_MEM_DEV_PM_SLEEP) {
+			/* Wake up the device. */
+			ipc_cp_irq_sleep_control(ipc_pm->pcie,
+						 IPC_MEM_DEV_PM_WAKEUP);
+			ipc_pm->ap_state = IPC_MEM_DEV_PM_ACTIVE_WAIT;
+
+			return false;
+		}
+
+		if (ipc_pm->ap_state == IPC_MEM_DEV_PM_ACTIVE_WAIT)
+			return false;
+
+		return true;
+	}
+
+	/* link is not ready */
+	return false;
+}
+
+void ipc_pm_host_slp_reinit_dev_active_completion(struct iosm_pm *ipc_pm)
+{
+	if (!ipc_pm)
+		return;
+
+	atomic_set(&ipc_pm->host_sleep_pend, 1);
+
+	reinit_completion(&ipc_pm->host_sleep_complete);
+}
+
+bool ipc_pm_wait_for_device_active(struct iosm_pm *ipc_pm)
+{
+	bool ret_val = false;
+
+	if (ipc_pm->ap_state != IPC_MEM_DEV_PM_ACTIVE)
+
+		/* Wait for IPC_PM_ACTIVE_TIMEOUT_MS for Device sleep state
+		 * machine to enter ACTIVE state.
+		 */
+		if (!WAIT_FOR_TIMEOUT(&ipc_pm->host_sleep_complete,
+				      IPC_PM_ACTIVE_TIMEOUT_MS)) {
+			dev_err(ipc_pm->dev,
+				"PM timeout. Expected State:%d. Actual: %d",
+				IPC_MEM_DEV_PM_ACTIVE, ipc_pm->ap_state);
+			goto  active_timeout;
+		}
+
+	ret_val = true;
+active_timeout:
+	/* Reset the atomic variable in any case as device sleep
+	 * state machine change is no longer of interest.
+	 */
+	atomic_set(&ipc_pm->host_sleep_pend, 0);
+
+	return ret_val;
+}
+
+static void ipc_pm_on_link_sleep(struct iosm_pm *ipc_pm)
+{
+	/* pending sleep ack and all conditions are cleared
+	 * -> signal SLEEP__ACK to CP
+	 */
+	ipc_pm->cp_state = IPC_MEM_DEV_PM_SLEEP;
+	ipc_pm->ap_state = IPC_MEM_DEV_PM_SLEEP;
+
+	ipc_cp_irq_sleep_control(ipc_pm->pcie, IPC_MEM_DEV_PM_SLEEP);
+}
+
+static void ipc_pm_on_link_wake(struct iosm_pm *ipc_pm, bool ack)
+{
+	ipc_pm->ap_state = IPC_MEM_DEV_PM_ACTIVE;
+
+	if (ack) {
+		ipc_pm->cp_state = IPC_MEM_DEV_PM_ACTIVE;
+
+		ipc_cp_irq_sleep_control(ipc_pm->pcie, IPC_MEM_DEV_PM_ACTIVE);
+
+		/* check the consume state !!! */
+		if (atomic_cmpxchg(&ipc_pm->host_sleep_pend, 1, 0))
+			complete(&ipc_pm->host_sleep_complete);
+	}
+
+	/* Check for pending HPDA update.
+	 * A pending HP update can occur because a message send was put on
+	 * hold due to the Device Sleep state, or because of a TD update
+	 * that was deferred due to the Device Sleep or Host Sleep states.
+	 */
+	if (ipc_pm->pending_hpda_update &&
+	    ipc_pm->host_pm_state == IPC_MEM_HOST_PM_ACTIVE)
+		ipc_pm_signal_hpda_doorbell(ipc_pm, IPC_HP_PM_TRIGGER, true);
+}
+
+bool ipc_pm_trigger(struct iosm_pm *ipc_pm, enum ipc_pm_unit unit, bool active)
+{
+	union ipc_pm_cond old_cond;
+	union ipc_pm_cond new_cond;
+	bool link_active;
+
+	/* Save the current D3 state. */
+	new_cond = ipc_pm->pm_cond;
+	old_cond = ipc_pm->pm_cond;
+
+	/* Calculate the power state only in the runtime phase. */
+	switch (unit) {
+	case IPC_PM_UNIT_IRQ: /* CP irq */
+		new_cond.irq = active;
+		break;
+
+	case IPC_PM_UNIT_LINK: /* Device link state. */
+		new_cond.link = active;
+		break;
+
+	case IPC_PM_UNIT_HS: /* Host sleep trigger requires Link. */
+		new_cond.hs = active;
+		break;
+
+	default:
+		break;
+	}
+
+	/* Something changed ? */
+	if (old_cond.raw == new_cond.raw) {
+		/* Stay in the current PM state. */
+		link_active = old_cond.link == IPC_PM_ACTIVE;
+		goto ret;
+	}
+
+	ipc_pm->pm_cond = new_cond;
+
+	if (new_cond.link)
+		ipc_pm_on_link_wake(ipc_pm, unit == IPC_PM_UNIT_LINK);
+	else if (unit == IPC_PM_UNIT_LINK)
+		ipc_pm_on_link_sleep(ipc_pm);
+
+	if (old_cond.link == IPC_PM_SLEEP && new_cond.raw != 0) {
+		link_active = ipc_pm_link_activate(ipc_pm);
+		goto ret;
+	}
+
+	link_active = old_cond.link == IPC_PM_ACTIVE;
+
+ret:
+	return link_active;
+}
+
+bool ipc_pm_prepare_host_sleep(struct iosm_pm *ipc_pm)
+{
+	if (!ipc_pm)
+		return false;
+
+	/* suspend not allowed if host_pm_state is not IPC_MEM_HOST_PM_ACTIVE */
+	if (ipc_pm->host_pm_state != IPC_MEM_HOST_PM_ACTIVE) {
+		dev_err(ipc_pm->dev, "host_pm_state=%d\tExpected to be: %d",
+			ipc_pm->host_pm_state, IPC_MEM_HOST_PM_ACTIVE);
+		return false;
+	}
+
+	ipc_pm->host_pm_state = IPC_MEM_HOST_PM_SLEEP_WAIT_D3;
+
+	return true;
+}
+
+bool ipc_pm_prepare_host_active(struct iosm_pm *ipc_pm)
+{
+	if (!ipc_pm)
+		return false;
+
+	if (ipc_pm->host_pm_state != IPC_MEM_HOST_PM_SLEEP) {
+		dev_err(ipc_pm->dev, "host_pm_state=%d\tExpected to be: %d",
+			ipc_pm->host_pm_state, IPC_MEM_HOST_PM_SLEEP);
+		return false;
+	}
+
+	/* Sending Sleep Exit message to CP. Update the state */
+	ipc_pm->host_pm_state = IPC_MEM_HOST_PM_ACTIVE_WAIT;
+
+	return true;
+}
+
+bool ipc_pm_dev_slp_notification(struct iosm_pm *ipc_pm, u32 cp_pm_req)
+{
+	if (!ipc_pm)
+		return false;
+
+	if (cp_pm_req == ipc_pm->device_sleep_notification)
+		return false;
+
+	ipc_pm->device_sleep_notification = cp_pm_req;
+
+	/* Evaluate the PM request. */
+	switch (ipc_pm->cp_state) {
+	case IPC_MEM_DEV_PM_ACTIVE:
+		switch (cp_pm_req) {
+		case IPC_MEM_DEV_PM_ACTIVE:
+			break;
+
+		case IPC_MEM_DEV_PM_SLEEP:
+
+			/* Inform the PM that the device link can go down. */
+			ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_LINK, false);
+
+			return true;
+
+		default:
+			dev_err(ipc_pm->dev,
+				"loc-pm=(%d=active): confused req-pm=%d",
+				ipc_pm->cp_state, cp_pm_req);
+			break;
+		}
+		break;
+
+	case IPC_MEM_DEV_PM_SLEEP:
+		switch (cp_pm_req) {
+		case IPC_MEM_DEV_PM_ACTIVE:
+			/* Inform the PM that the device link is active. */
+			ipc_pm_trigger(ipc_pm, IPC_PM_UNIT_LINK, true);
+			break;
+
+		case IPC_MEM_DEV_PM_SLEEP:
+			break;
+
+		default:
+			dev_err(ipc_pm->dev,
+				"loc-pm=(%d=sleep): confused req-pm=%d",
+				ipc_pm->cp_state, cp_pm_req);
+			break;
+		}
+		break;
+
+	default:
+		dev_err(ipc_pm->dev, "confused loc-pm=%d, req-pm=%d",
+			ipc_pm->cp_state, cp_pm_req);
+		break;
+	}
+
+	return false;
+}
+
+struct iosm_pm *ipc_pm_init(struct iosm_imem *ipc_imem)
+{
+	struct iosm_pm *ipc_pm = kzalloc(sizeof(*ipc_pm), GFP_KERNEL);
+
+	if (!ipc_pm)
+		return NULL;
+
+	ipc_pm->pcie = ipc_imem->pcie;
+	ipc_pm->dev = ipc_imem->dev;
+
+	ipc_pm->pm_cond.irq = IPC_PM_SLEEP;
+	ipc_pm->pm_cond.hs = IPC_PM_SLEEP;
+	ipc_pm->pm_cond.link = IPC_PM_ACTIVE;
+
+	ipc_pm->cp_state = IPC_MEM_DEV_PM_ACTIVE;
+	ipc_pm->ap_state = IPC_MEM_DEV_PM_ACTIVE;
+	ipc_pm->host_pm_state = IPC_MEM_HOST_PM_ACTIVE;
+
+	ipc_pm->ipc_tasklet = ipc_imem->ipc_tasklet;
+	ipc_pm->ipc_task = ipc_imem->ipc_task;
+
+	/* Create generic wait-for-completion handler for Host Sleep
+	 * and device sleep coordination.
+	 */
+	init_completion(&ipc_pm->host_sleep_complete);
+
+	atomic_set(&ipc_pm->host_sleep_pend, 0);
+
+	return ipc_pm;
+}
+
+void ipc_pm_deinit(struct iosm_pm *ipc_pm)
+{
+	complete(&ipc_pm->host_sleep_complete);
+	kfree(ipc_pm);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_pm.h b/drivers/net/wwan/iosm/iosm_ipc_pm.h
new file mode 100644
index 000000000000..f09a90fe43df
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_pm.h
@@ -0,0 +1,216 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_PM_H
+#define IOSM_IPC_PM_H
+
+#include <linux/interrupt.h>
+
+/**
+ * union ipc_pm_cond - Conditions for D3 and the sleep message to CP.
+ * @raw:	raw/combined value for faster check
+ * @irq:	IRQ towards CP
+ * @hs:		Host Sleep
+ * @link:	Device link state.
+ */
+union ipc_pm_cond {
+	unsigned int raw;
+
+	struct {
+		unsigned int irq : 1;
+		unsigned int hs : 1;
+		unsigned int link : 1;
+	};
+};
+
+/**
+ * enum ipc_mem_host_pm_state - Possible states of the SLEEP finite state
+ *				machine.
+ * @IPC_MEM_HOST_PM_ACTIVE:		   Host is active
+ * @IPC_MEM_HOST_PM_ACTIVE_WAIT:	   Intermediate state before going to
+ *					   active
+ * @IPC_MEM_HOST_PM_SLEEP_WAIT_IDLE:	   Intermediate state to wait for idle
+ *					   before going into sleep
+ * @IPC_MEM_HOST_PM_SLEEP_WAIT_D3:	   Intermediate state to wait for D3
+ *					   before going to sleep
+ * @IPC_MEM_HOST_PM_SLEEP:		   After this state the interface is not
+ *					   accessible; host is in suspend to RAM
+ * @IPC_MEM_HOST_PM_SLEEP_WAIT_EXIT_SLEEP: Intermediate state before exiting
+ *					   sleep
+ */
+enum ipc_mem_host_pm_state {
+	IPC_MEM_HOST_PM_ACTIVE,
+	IPC_MEM_HOST_PM_ACTIVE_WAIT,
+	IPC_MEM_HOST_PM_SLEEP_WAIT_IDLE,
+	IPC_MEM_HOST_PM_SLEEP_WAIT_D3,
+	IPC_MEM_HOST_PM_SLEEP,
+	IPC_MEM_HOST_PM_SLEEP_WAIT_EXIT_SLEEP,
+};
+
+/**
+ * enum ipc_mem_dev_pm_state - Possible states of the SLEEP finite state
+ *			       machine.
+ * @IPC_MEM_DEV_PM_ACTIVE:	IPC_MEM_DEV_PM_ACTIVE is the initial power
+ *				management state. The state is exchanged via
+ *				IRQ(struct
+ *				ipc_mem_device_info.device_sleep_notification)
+ *				and DOORBELL-IRQ-HPDA(data) values.
+ * @IPC_MEM_DEV_PM_SLEEP:	IPC_MEM_DEV_PM_SLEEP is PM state for sleep.
+ * @IPC_MEM_DEV_PM_WAKEUP:	DOORBELL-IRQ-DEVICE_WAKE(data).
+ * @IPC_MEM_DEV_PM_HOST_SLEEP:	DOORBELL-IRQ-HOST_SLEEP(data).
+ * @IPC_MEM_DEV_PM_ACTIVE_WAIT:	Local intermediate states.
+ *				Before AP triggers DOORBELL-IRQ-SLEEP(data)
+ *				either the intermediate device link state is
+ *				SYNC_ACTIVE_WAIT i.e. the user is blocked until
+ *				the link interworking was finished about IRQ and
+ *				DOORBELL-IRQ-HPDA or the intermediate device
+ *				link state is ACTIVE_WAIT i.e. the data transfer
+ *				starts after the DOORBELL-IRQ-HPDA
+ *				(IPC_MEM_DEV_PM_ACTIVE).
+ */
+enum ipc_mem_dev_pm_state {
+	IPC_MEM_DEV_PM_ACTIVE,
+	IPC_MEM_DEV_PM_SLEEP,
+	IPC_MEM_DEV_PM_WAKEUP,
+	IPC_MEM_DEV_PM_HOST_SLEEP,
+	IPC_MEM_DEV_PM_ACTIVE_WAIT,
+};
+
+/**
+ * struct iosm_pm - Power management instance data
+ * @pcie:			Pointer to iosm_pcie structure
+ * @dev:			Pointer to device structure
+ * @ipc_tasklet:		Tasklet instance
+ * @ipc_task:			Tasklet for scheduling a wakeup in task context
+ * @host_pm_state:		PM states for host
+ * @host_sleep_pend:		Variable to indicate Host Sleep Pending
+ * @host_sleep_complete:	Generic wait-for-completion used in
+ *				case of Host Sleep
+ * @pm_cond:			Conditions for power management
+ * @ap_state:			Current power management state, the
+ *				initial state is IPC_MEM_DEV_PM_ACTIVE eq. 0.
+ * @cp_state:			PM State of CP
+ * @device_sleep_notification:	last handled device_sleep_notification
+ * @pending_hpda_update:	is a HPDA update pending?
+ */
+struct iosm_pm {
+	struct iosm_pcie *pcie;
+	struct device *dev;
+	struct tasklet_struct *ipc_tasklet;
+	struct ipc_task_queue *ipc_task;
+	enum ipc_mem_host_pm_state host_pm_state;
+	atomic_t host_sleep_pend;
+	struct completion host_sleep_complete;
+	union ipc_pm_cond pm_cond;
+	enum ipc_mem_dev_pm_state ap_state;
+	enum ipc_mem_dev_pm_state cp_state;
+	u32 device_sleep_notification;
+	u8 pending_hpda_update : 1;
+};
+
+/**
+ * enum ipc_pm_unit - Power management units.
+ * @IPC_PM_UNIT_IRQ:	IRQ towards CP
+ * @IPC_PM_UNIT_HS:	Host Sleep for converged protocol
+ * @IPC_PM_UNIT_LINK:	Link state controlled by CP.
+ */
+enum ipc_pm_unit {
+	IPC_PM_UNIT_IRQ, /* IRQ towards CP */
+	IPC_PM_UNIT_HS, /* Host Sleep for converged protocol */
+	IPC_PM_UNIT_LINK, /* Link state controlled by CP. */
+};
+
+/**
+ * ipc_pm_init - Allocate power management component
+ * @ipc_imem:	Pointer to iosm_imem structure
+ *
+ * Returns: pointer to allocated PM component or NULL on failure.
+ */
+struct iosm_pm *ipc_pm_init(struct iosm_imem *ipc_imem);
+
+/**
+ * ipc_pm_deinit - Free power management component, invalidating its pointer.
+ * @ipc_pm:	Pointer to pm component.
+ */
+void ipc_pm_deinit(struct iosm_pm *ipc_pm);
+
+/**
+ * ipc_pm_dev_slp_notification - Handle a sleep notification message from the
+ *				 device. This can be called from interrupt state.
+ *				 This function also handles Host Sleep requests
+ *				 if the Host Sleep protocol is register based.
+ * @ipc_pm:			Pointer to power management component
+ * @sleep_notification:		Actual notification from device
+ *
+ * Returns: true if dev sleep state has to be checked, false otherwise.
+ */
+bool ipc_pm_dev_slp_notification(struct iosm_pm *ipc_pm,
+				 u32 sleep_notification);
+
+/**
+ * ipc_pm_prepare_host_sleep - Prepare the PM for sleep by entering
+ *			       IPC_MEM_HOST_PM_SLEEP_WAIT_D3 state.
+ * @ipc_pm:	Pointer to power management component
+ *
+ * Returns: true on success, false if the host was not active.
+ */
+bool ipc_pm_prepare_host_sleep(struct iosm_pm *ipc_pm);
+
+/**
+ * ipc_pm_prepare_host_active - Prepare the PM for wakeup by entering
+ *				IPC_MEM_HOST_PM_ACTIVE_WAIT state.
+ * @ipc_pm:	Pointer to power management component
+ *
+ * Returns: true on success, false if the host was not sleeping.
+ */
+bool ipc_pm_prepare_host_active(struct iosm_pm *ipc_pm);
+
+/**
+ * ipc_pm_wait_for_device_active - Wait for up to IPC_PM_ACTIVE_TIMEOUT_MS ms
+ *				   for the device to reach active state
+ * @ipc_pm:	Pointer to power management component
+ *
+ * Returns: true if device is active
+ */
+bool ipc_pm_wait_for_device_active(struct iosm_pm *ipc_pm);
+
+/**
+ * ipc_pm_signal_hpda_doorbell - Wake up the device if it is in low power mode
+ *				 and trigger a head pointer update interrupt.
+ * @ipc_pm:		Pointer to power management component
+ * @identifier:		specifies what component triggered hpda update irq
+ * @host_slp_check:	If true, the Host Sleep state machine is checked first:
+ *			the doorbell is triggered only when the state machine
+ *			allows an HP update, otherwise the pending flag is set.
+ *			If false, the Host Sleep check is skipped, which is
+ *			useful for Host Sleep negotiation through the message
+ *			ring.
+ */
+void ipc_pm_signal_hpda_doorbell(struct iosm_pm *ipc_pm, u32 identifier,
+				 bool host_slp_check);
+/**
+ * ipc_pm_host_slp_reinit_dev_active_completion - Reinitialize the atomic
+ *						  variable and completion object
+ *						  used to get notified when the
+ *						  Device Sleep state machine
+ *						  reaches the ACTIVE state, so
+ *						  that Sleep negotiation can
+ *						  start.
+ * @ipc_pm:	Pointer to power management component
+ */
+void ipc_pm_host_slp_reinit_dev_active_completion(struct iosm_pm *ipc_pm);
+
+/**
+ * ipc_pm_trigger - Update power manager and wake up the link if needed
+ * @ipc_pm:	Pointer to power management component
+ * @unit:	Power management units
+ * @active:	Device link state
+ *
+ * Returns: true if link is active.
+ */
+bool ipc_pm_trigger(struct iosm_pm *ipc_pm, enum ipc_pm_unit unit, bool active);
+
+#endif
-- 
2.12.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 13/18] net: iosm: shared memory protocol
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (11 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 12/18] net: iosm: power management M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 14/18] net: iosm: protocol operations M Chetan Kumar
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

1) Defines the messaging protocol for handling Transfer Descriptors
   in both UL/DL directions.
2) Ring buffer management.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_protocol.c | 287 ++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_protocol.h | 219 +++++++++++++++++++++++
 2 files changed, 506 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol.h
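
To make the context-info setup in ipc_protocol_init() below easier to follow,
here is a minimal stand-alone sketch (illustration only, not part of this
patch): each field of the AP shared-memory block is reported to the device as
the DMA base address of the block plus offsetof() of that field. The reduced
struct and the base address used here are hypothetical stand-ins.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Reduced, hypothetical stand-in for struct ipc_protocol_ap_shm. */
struct ap_shm {
	uint32_t msg_head;
	uint32_t head_array[4];
	uint32_t msg_tail;
	uint32_t tail_array[4];
};

int main(void)
{
	/* Hypothetical DMA address of the allocated shared-memory block. */
	uint64_t base = 0x100000;

	/* The device learns each field address as base + offsetof(), just as
	 * ipc_protocol_init() fills struct ipc_protocol_context_info.
	 */
	printf("msg_head @ 0x%llx\n", (unsigned long long)
	       (base + offsetof(struct ap_shm, msg_head)));
	printf("msg_tail @ 0x%llx\n", (unsigned long long)
	       (base + offsetof(struct ap_shm, msg_tail)));
	return 0;
}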

diff --git a/drivers/net/wwan/iosm/iosm_ipc_protocol.c b/drivers/net/wwan/iosm/iosm_ipc_protocol.c
new file mode 100644
index 000000000000..82d75d3d191c
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_protocol.c
@@ -0,0 +1,287 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include "iosm_ipc_protocol.h"
+#include "iosm_ipc_task_queue.h"
+
+int ipc_protocol_tq_msg_send(struct iosm_protocol *ipc_protocol,
+			     enum ipc_msg_prep_type msg_type,
+			     union ipc_msg_prep_args *prep_args,
+			     struct ipc_rsp *response)
+{
+	int index = ipc_protocol_msg_prep(ipc_protocol, msg_type, prep_args);
+
+	/* Store reference towards caller specified response in response ring
+	 * and signal CP
+	 */
+	if (index >= 0 && index < IPC_MEM_MSG_ENTRIES) {
+		ipc_protocol->rsp_ring[index] = response;
+		ipc_protocol_msg_hp_update(ipc_protocol);
+	}
+
+	return index;
+}
+
+/* Tasklet message send call back function */
+static int ipc_protocol_tq_msg_send_cb(void *instance, int arg, void *msg,
+				       size_t size)
+{
+	struct ipc_call_msg_send_args *send_args = msg;
+	struct iosm_protocol *ipc_protocol =
+		((struct iosm_imem *)instance)->ipc_protocol;
+
+	return ipc_protocol_tq_msg_send(ipc_protocol, send_args->msg_type,
+					send_args->prep_args,
+					send_args->response);
+}
+
+/* Remove reference to a response. This is typically used when a requestor timed
+ * out and is no longer interested in the response.
+ */
+static int ipc_protocol_tq_msg_remove(void *instance, int arg, void *msg,
+				      size_t size)
+{
+	struct iosm_protocol *ipc_protocol =
+		((struct iosm_imem *)instance)->ipc_protocol;
+
+	ipc_protocol->rsp_ring[arg] = NULL;
+	return 0;
+}
+
+int ipc_protocol_msg_send(struct iosm_protocol *ipc_protocol,
+			  enum ipc_msg_prep_type prep,
+			  union ipc_msg_prep_args *prep_args)
+{
+	struct ipc_call_msg_send_args send_args;
+	unsigned int exec_timeout;
+	struct ipc_rsp response;
+	int result = -1;
+	int index;
+
+	exec_timeout = (ipc_protocol_get_ap_exec_stage(ipc_protocol) ==
+					IPC_MEM_EXEC_STAGE_RUN ?
+				IPC_MSG_COMPLETE_RUN_DEFAULT_TIMEOUT :
+				IPC_MSG_COMPLETE_BOOT_DEFAULT_TIMEOUT);
+
+	/* Trap if called from non-preemptible context */
+	might_sleep();
+
+	response.status = IPC_MEM_MSG_CS_INVALID;
+	init_completion(&response.completion);
+
+	send_args.msg_type = prep;
+	send_args.prep_args = prep_args;
+	send_args.response = &response;
+
+	/* Allocate and prepare message to be sent in tasklet context.
+	 * A positive index returned from tasklet_call references the message
+	 * in case it needs to be cancelled when there is a timeout.
+	 */
+	index = ipc_task_queue_send_task(ipc_protocol->imem,
+					 ipc_protocol_tq_msg_send_cb, 0,
+					 &send_args, 0, true);
+
+	if (index < 0) {
+		dev_err(ipc_protocol->dev, "msg %d failed", prep);
+		return index;
+	}
+
+	/* Wait for the device to respond to the message */
+	switch (wait_for_completion_timeout(&response.completion,
+					    msecs_to_jiffies(exec_timeout))) {
+	case 0:
+		/* Timeout, there was no response from the device.
+		 * Remove the reference to the local response completion
+		 * object as we are no longer interested in the response.
+		 */
+		ipc_task_queue_send_task(ipc_protocol->imem,
+					 ipc_protocol_tq_msg_remove, index,
+					 NULL, 0, true);
+		dev_err(ipc_protocol->dev, "msg timeout");
+		ipc_uevent_send(ipc_protocol->pcie->dev, UEVENT_MDM_TIMEOUT);
+		break;
+	default:
+		/* We got a response in time; check completion status: */
+		if (response.status == IPC_MEM_MSG_CS_SUCCESS)
+			result = 0;
+		else
+			dev_err(ipc_protocol->dev,
+				"msg completion status error %d",
+				response.status);
+		break;
+	}
+
+	return result;
+}
+
+static int ipc_protocol_msg_send_host_sleep(struct iosm_protocol *ipc_protocol,
+					    u32 state)
+{
+	union ipc_msg_prep_args prep_args = {
+		.sleep.target = 0,
+		.sleep.state = state,
+	};
+
+	return ipc_protocol_msg_send(ipc_protocol, IPC_MSG_PREP_SLEEP,
+				     &prep_args);
+}
+
+void ipc_protocol_doorbell_trigger(struct iosm_protocol *ipc_protocol,
+				   u32 identifier)
+{
+	ipc_pm_signal_hpda_doorbell(ipc_protocol->pm, identifier, true);
+}
+
+bool ipc_protocol_pm_dev_sleep_handle(struct iosm_protocol *ipc_protocol)
+{
+	u32 ipc_status = ipc_protocol_get_ipc_status(ipc_protocol);
+	u32 requested;
+
+	if (ipc_status != IPC_MEM_DEVICE_IPC_RUNNING) {
+		dev_err(ipc_protocol->dev,
+			"irq ignored, CP IPC state is %d, should be RUNNING",
+			ipc_status);
+
+		/* Stop further processing. */
+		return false;
+	}
+
+	/* Get a copy of the PM state requested by the device and of the
+	 * local device PM state.
+	 */
+	requested = ipc_protocol_pm_dev_get_sleep_notification(ipc_protocol);
+
+	return ipc_pm_dev_slp_notification(ipc_protocol->pm, requested);
+}
+
+static int ipc_protocol_tq_wakeup_dev_slp(void *instance, int arg, void *msg,
+					  size_t size)
+{
+	struct iosm_protocol *ipc_protocol =
+		((struct iosm_imem *)instance)->ipc_protocol;
+
+	/* Wakeup from device sleep if it is not ACTIVE */
+	if (!ipc_pm_trigger(ipc_protocol->pm, IPC_PM_UNIT_HS, true))
+		/* Link was not active. Prepare for notification and waiting */
+		ipc_pm_host_slp_reinit_dev_active_completion(ipc_protocol->pm);
+
+	ipc_pm_trigger(ipc_protocol->pm, IPC_PM_UNIT_HS, false);
+
+	return 0;
+}
+
+bool ipc_protocol_suspend(struct iosm_protocol *ipc_protocol)
+{
+	if (!ipc_pm_prepare_host_sleep(ipc_protocol->pm))
+		return false;
+
+	ipc_task_queue_send_task(ipc_protocol->imem,
+				 ipc_protocol_tq_wakeup_dev_slp, 0, NULL, 0,
+				 true);
+
+	if (!ipc_pm_wait_for_device_active(ipc_protocol->pm)) {
+		ipc_uevent_send(ipc_protocol->pcie->dev, UEVENT_MDM_TIMEOUT);
+		return false;
+	}
+
+	/* Send the sleep message for sync sys calls. */
+	dev_dbg(ipc_protocol->dev, "send (TARGET_HOST, ENTER_SLEEP)");
+	if (ipc_protocol_msg_send_host_sleep(ipc_protocol,
+					     IPC_HOST_SLEEP_ENTER_SLEEP)) {
+		/* Sending ENTER_SLEEP message failed, we are still active */
+		ipc_protocol->pm->host_pm_state = IPC_MEM_HOST_PM_ACTIVE;
+		return false;
+	}
+
+	ipc_protocol->pm->host_pm_state = IPC_MEM_HOST_PM_SLEEP;
+
+	return true;
+}
+
+bool ipc_protocol_resume(struct iosm_protocol *ipc_protocol)
+{
+	if (!ipc_pm_prepare_host_active(ipc_protocol->pm))
+		return false;
+
+	dev_dbg(ipc_protocol->dev, "send (TARGET_HOST, EXIT_SLEEP)");
+	if (ipc_protocol_msg_send_host_sleep(ipc_protocol,
+					     IPC_HOST_SLEEP_EXIT_SLEEP)) {
+		ipc_protocol->pm->host_pm_state = IPC_MEM_HOST_PM_SLEEP;
+		return false;
+	}
+
+	ipc_protocol->pm->host_pm_state = IPC_MEM_HOST_PM_ACTIVE;
+
+	return true;
+}
+
+struct iosm_protocol *ipc_protocol_init(struct iosm_imem *ipc_imem)
+{
+	struct iosm_protocol *ipc_protocol =
+		kzalloc(sizeof(*ipc_protocol), GFP_KERNEL);
+	struct ipc_protocol_context_info *p_ci;
+	u64 addr;
+
+	if (!ipc_protocol)
+		return NULL;
+
+	ipc_protocol->dev = ipc_imem->dev;
+	ipc_protocol->pcie = ipc_imem->pcie;
+	ipc_protocol->imem = ipc_imem;
+	ipc_protocol->p_ap_shm = NULL;
+	ipc_protocol->phy_ap_shm = 0;
+
+	ipc_protocol->old_msg_tail = 0;
+
+	ipc_protocol->p_ap_shm =
+		pci_alloc_consistent(ipc_protocol->pcie->pci,
+				     sizeof(*ipc_protocol->p_ap_shm),
+				     &ipc_protocol->phy_ap_shm);
+
+	if (!ipc_protocol->p_ap_shm) {
+		dev_err(ipc_protocol->dev, "pci shm alloc error");
+		kfree(ipc_protocol);
+		return NULL;
+	}
+
+	/* Prepare the context info for CP. */
+	addr = ipc_protocol->phy_ap_shm;
+	p_ci = &ipc_protocol->p_ap_shm->ci;
+	p_ci->device_info_addr =
+		addr + offsetof(struct ipc_protocol_ap_shm, device_info);
+	p_ci->head_array =
+		addr + offsetof(struct ipc_protocol_ap_shm, head_array);
+	p_ci->tail_array =
+		addr + offsetof(struct ipc_protocol_ap_shm, tail_array);
+	p_ci->msg_head = addr + offsetof(struct ipc_protocol_ap_shm, msg_head);
+	p_ci->msg_tail = addr + offsetof(struct ipc_protocol_ap_shm, msg_tail);
+	p_ci->msg_ring_addr =
+		addr + offsetof(struct ipc_protocol_ap_shm, msg_ring);
+	p_ci->msg_ring_entries = IPC_MEM_MSG_ENTRIES;
+	p_ci->msg_irq_vector = IPC_MSG_IRQ_VECTOR;
+	p_ci->device_info_irq_vector = IPC_DEVICE_IRQ_VECTOR;
+
+	ipc_mmio_set_contex_info_addr(ipc_imem->mmio, addr);
+
+	ipc_protocol->pm = ipc_pm_init(ipc_imem);
+
+	if (!ipc_protocol->pm) {
+		ipc_protocol_deinit(ipc_protocol);
+		return NULL;
+	}
+
+	return ipc_protocol;
+}
+
+void ipc_protocol_deinit(struct iosm_protocol *proto)
+{
+	pci_free_consistent(proto->pcie->pci, sizeof(*proto->p_ap_shm),
+			    proto->p_ap_shm, proto->phy_ap_shm);
+
+	proto->p_ap_shm = NULL;
+	/* Free PM component. Must be freed before pcie, stats, params */
+	ipc_pm_deinit(proto->pm);
+	kfree(proto);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_protocol.h b/drivers/net/wwan/iosm/iosm_ipc_protocol.h
new file mode 100644
index 000000000000..e963c1901d23
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_protocol.h
@@ -0,0 +1,219 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_PROTOCOL_H
+#define IOSM_IPC_PROTOCOL_H
+
+#include "iosm_ipc_imem.h"
+#include "iosm_ipc_pm.h"
+#include "iosm_ipc_protocol_ops.h"
+
+/* Trigger the doorbell interrupt on CP. */
+#define IPC_DOORBELL_IRQ_HPDA 0
+#define IPC_DOORBELL_IRQ_IPC 1
+#define IPC_DOORBELL_IRQ_SLEEP 2
+
+/* IRQ vector number. */
+#define IPC_DEVICE_IRQ_VECTOR 0
+#define IPC_MSG_IRQ_VECTOR 0
+#define IPC_UL_PIPE_IRQ_VECTOR 0
+#define IPC_DL_PIPE_IRQ_VECTOR 0
+
+#define IPC_MEM_MSG_ENTRIES 128
+
+/* Default time out for sending IPC messages like open pipe, close pipe etc.
+ * during run mode.
+ *
+ * If the message interface lock to CP times out, the link to CP is broken.
+ * mode : run mode (IPC_MEM_EXEC_STAGE_RUN)
+ * unit : milliseconds
+ */
+#define IPC_MSG_COMPLETE_RUN_DEFAULT_TIMEOUT 500 /* 0.5 seconds */
+
+/* Default time out for sending IPC messages like open pipe, close pipe etc.
+ * during boot mode.
+ *
+ * If the message interface lock to CP times out, the link to CP is broken.
+ * mode : boot mode
+ * (IPC_MEM_EXEC_STAGE_BOOT | IPC_MEM_EXEC_STAGE_PSI | IPC_MEM_EXEC_STAGE_EBL)
+ * unit : milliseconds
+ */
+#define IPC_MSG_COMPLETE_BOOT_DEFAULT_TIMEOUT 500 /* 0.5 seconds */
+
+/**
+ * struct ipc_protocol_context_info - Structure of the context info
+ * @device_info_addr:		64 bit address to device info
+ * @head_array:			64 bit address to head pointer arr for the pipes
+ * @tail_array:			64 bit address to tail pointer arr for the pipes
+ * @msg_head:			64 bit address to message head pointer
+ * @msg_tail:			64 bit address to message tail pointer
+ * @msg_ring_addr:		64 bit pointer to the message ring buffer
+ * @msg_ring_entries:		This field provides the number of entries which
+ *				the MR can hold
+ * @msg_irq_vector:		This field provides the IRQ which shall be
+ *				generated by the EP device when generating
+ *				completion for Messages.
+ * @device_info_irq_vector:	This field provides the IRQ which shall be
+ *				generated by the EP dev after updating Dev. Info
+ * @reserved:			reserved
+ */
+struct ipc_protocol_context_info {
+	phys_addr_t device_info_addr;
+	phys_addr_t head_array;
+	phys_addr_t tail_array;
+	phys_addr_t msg_head;
+	phys_addr_t msg_tail;
+	phys_addr_t msg_ring_addr;
+	u32 msg_ring_entries : 16;
+	u32 msg_irq_vector : 5;
+	u32 device_info_irq_vector : 5;
+	u32 reserved : 6;
+};
+
+/* Structure for the device information. */
+struct ipc_protocol_device_info {
+	u32 execution_stage;
+	u32 ipc_status;
+	u32 device_sleep_notification;
+};
+
+/* Protocol Shared Memory Structure */
+struct ipc_protocol_ap_shm {
+	struct ipc_protocol_context_info ci;
+	struct ipc_protocol_device_info device_info;
+
+	u32 msg_head;
+	u32 head_array[IPC_MEM_MAX_PIPES];
+	u32 msg_tail;
+	u32 tail_array[IPC_MEM_MAX_PIPES];
+
+	/* Circular buffers for the read/tail and write/head indices. */
+	union ipc_mem_msg_entry msg_ring[IPC_MEM_MSG_ENTRIES];
+};
+
+/**
+ * struct iosm_protocol - Structure for IPC protocol.
+ * @p_ap_shm:		Pointer to Protocol Shared Memory Structure
+ * @pm:			Pointer to struct iosm_pm
+ * @pcie:		Pointer to struct iosm_pcie
+ * @imem:		Pointer to struct iosm_imem
+ * @rsp_ring:		Array of OS completion objects to be triggered once CP
+ *			acknowledges a request in the message ring
+ * @dev:		Pointer to device structure
+ * @phy_ap_shm:		Physical/Mapped representation of the shared memory info
+ * @old_msg_tail:	Old msg tail ptr, until AP has handled ACK's from CP
+ */
+struct iosm_protocol {
+	struct ipc_protocol_ap_shm *p_ap_shm;
+	struct iosm_pm *pm;
+	struct iosm_pcie *pcie;
+	struct iosm_imem *imem;
+	struct ipc_rsp *rsp_ring[IPC_MEM_MSG_ENTRIES];
+	struct device *dev;
+	phys_addr_t phy_ap_shm;
+	u32 old_msg_tail;
+};
+
+/**
+ * struct ipc_call_msg_send_args - Structure for message argument for
+ *				   tasklet function.
+ * @prep_args:		Arguments for message preparation function
+ * @response:		Can be NULL if result can be ignored
+ * @msg_type:		Message Type
+ */
+struct ipc_call_msg_send_args {
+	union ipc_msg_prep_args *prep_args;
+	struct ipc_rsp *response;
+	enum ipc_msg_prep_type msg_type;
+};
+
+/**
+ * ipc_protocol_tq_msg_send - Prepare the message and send it to CP
+ * @ipc_protocol:	Pointer to ipc_protocol instance
+ * @msg_type:		Message type
+ * @prep_args:		Message arguments
+ * @response:		Pointer to a response object which has a
+ *			completion object and return code.
+ *
+ * Returns: message ring index (>= 0) on success, negative value on failure
+ */
+int ipc_protocol_tq_msg_send(struct iosm_protocol *ipc_protocol,
+			     enum ipc_msg_prep_type msg_type,
+			     union ipc_msg_prep_args *prep_args,
+			     struct ipc_rsp *response);
+
+/**
+ * ipc_protocol_msg_send - Send a message to CP and wait for response
+ * @ipc_protocol:	Pointer to ipc_protocol instance
+ * @prep:		Message type
+ * @prep_args:		Message arguments
+ *
+ * Returns: 0 on success, -1 on failure
+ */
+int ipc_protocol_msg_send(struct iosm_protocol *ipc_protocol,
+			  enum ipc_msg_prep_type prep,
+			  union ipc_msg_prep_args *prep_args);
+
+/**
+ * ipc_protocol_suspend - Signal to CP that host wants to go to sleep (suspend).
+ * @ipc_protocol:	Pointer to ipc_protocol instance
+ *
+ * Returns: true if host can suspend, false if suspend must be aborted.
+ */
+bool ipc_protocol_suspend(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_protocol_resume - Signal to CP that host wants to resume operation.
+ * @ipc_protocol:	Pointer to ipc_protocol instance
+ *
+ * Returns: true if host can resume, false if there is a problem.
+ */
+bool ipc_protocol_resume(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_protocol_pm_dev_sleep_handle - Handles the Device Sleep state change
+ *				      notification.
+ * @ipc_protocol:	Pointer to ipc_protocol instance.
+ *
+ * Returns: True if sleep notification handled, False otherwise.
+ */
+bool ipc_protocol_pm_dev_sleep_handle(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_protocol_doorbell_trigger - Wrapper for PM function which wake up the
+ *				   device if it is in low power mode
+ *				   and trigger a head pointer update interrupt.
+ * @ipc_protocol:	Pointer to ipc_protocol instance.
+ * @identifier:		Specifies what component triggered hpda
+ *			update irq
+ */
+void ipc_protocol_doorbell_trigger(struct iosm_protocol *ipc_protocol,
+				   u32 identifier);
+
+/**
+ * ipc_protocol_sleep_notification_string - Returns last Sleep Notification as
+ *					    string.
+ * @ipc_protocol:	Instance pointer of Protocol module.
+ *
+ * Returns: Pointer to string.
+ */
+const char *
+ipc_protocol_sleep_notification_string(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_protocol_init - Allocates IPC protocol instance data
+ * @ipc_imem:		Pointer to iosm_imem structure
+ *
+ * Returns: Address of ipc protocol instance data
+ */
+struct iosm_protocol *ipc_protocol_init(struct iosm_imem *ipc_imem);
+
+/**
+ * ipc_protocol_deinit - Deallocates IPC protocol instance data
+ * @ipc_protocol:	pointer to the IPC protocol instance data
+ */
+void ipc_protocol_deinit(struct iosm_protocol *ipc_protocol);
+
+#endif
-- 
2.12.3


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 14/18] net: iosm: protocol operations
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (12 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 13/18] net: iosm: shared memory protocol M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 15/18] net: iosm: uevent support M Chetan Kumar
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

1) Update UL/DL transfer descriptors in message ring.
2) Define message set for pipe/sleep protocol.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c | 563 ++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h | 358 ++++++++++++++++
 2 files changed, 921 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h
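
The UL transfer-descriptor ring filled by ipc_protocol_ul_td_send() below
always keeps one slot unused so that head == tail means "empty" rather than
"full". A minimal stand-alone sketch (illustration only, not part of this
patch) of that free-element calculation; the helper name is hypothetical:

#include <stdio.h>

/* Mirrors the free-element calculation in ipc_protocol_ul_td_send(): one
 * slot stays reserved so that head == tail always means "empty".
 */
static int td_ring_free(unsigned int head, unsigned int tail,
			unsigned int entries)
{
	if (head < tail)
		return tail - head - 1;
	return entries - head + (int)tail - 1;
}

int main(void)
{
	printf("free=%d\n", td_ring_free(5, 3, 8));	/* 2 in flight -> 5 free */
	printf("free=%d\n", td_ring_free(2, 3, 8));	/* ring full -> 0 free */
	return 0;
}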

diff --git a/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c b/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c
new file mode 100644
index 000000000000..beca5e06203a
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.c
@@ -0,0 +1,563 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include "iosm_ipc_protocol.h"
+#include "iosm_ipc_protocol_ops.h"
+
+/* Get the next free message element. */
+static union ipc_mem_msg_entry *
+ipc_protocol_free_msg_get(struct iosm_protocol *ipc_protocol, int *index)
+{
+	u32 head = ipc_protocol->p_ap_shm->msg_head;
+	u32 new_head = (head + 1) % IPC_MEM_MSG_ENTRIES;
+	union ipc_mem_msg_entry *msg;
+
+	if (new_head == ipc_protocol->p_ap_shm->msg_tail) {
+		dev_err(ipc_protocol->dev, "message ring is full");
+		return NULL;
+	}
+
+	/* Get the pointer to the next free message element,
+	 * reset the fields and mark it as invalid.
+	 */
+	msg = &ipc_protocol->p_ap_shm->msg_ring[head];
+	memset(msg, 0, sizeof(*msg));
+
+	/* return index in message ring */
+	*index = head;
+
+	return msg;
+}
+
+/* Updates the message ring Head pointer */
+void ipc_protocol_msg_hp_update(void *instance)
+{
+	struct iosm_protocol *ipc_protocol = instance;
+	u32 head = ipc_protocol->p_ap_shm->msg_head;
+	u32 new_head = (head + 1) % IPC_MEM_MSG_ENTRIES;
+
+	/* Update head pointer and fire doorbell. */
+	ipc_protocol->p_ap_shm->msg_head = new_head;
+	ipc_protocol->old_msg_tail = ipc_protocol->p_ap_shm->msg_tail;
+
+	/* Host Sleep negotiation happens through the Message Ring, so the
+	 * Host Sleep check is skipped by passing false as the last argument.
+	 */
+	ipc_pm_signal_hpda_doorbell(ipc_protocol->pm, IPC_HP_MR, false);
+}
+
+/* Allocate and prepare an OPEN_PIPE message.
+ * This also allocates the memory for the new TDR structure and
+ * updates the pipe structure referenced in the preparation arguments.
+ */
+static int ipc_protocol_msg_prepipe_open(struct iosm_protocol *ipc_protocol,
+					 union ipc_msg_prep_args *args)
+{
+	int index = -1;
+	union ipc_mem_msg_entry *msg =
+		ipc_protocol_free_msg_get(ipc_protocol, &index);
+	struct ipc_pipe *pipe = args->pipe_open.pipe;
+	struct ipc_protocol_td *tdr;
+	struct sk_buff **skbr;
+
+	if (!msg) {
+		dev_err(ipc_protocol->dev, "failed to get free message");
+		return -1;
+	}
+
+	/* Allocate the skbuf elements for the skbufs which are on the way.
+	 * The SKB ring is an internal memory allocation of the driver; no need
+	 * to re-calculate the start and end addresses.
+	 */
+	skbr = kcalloc(pipe->nr_of_entries, sizeof(*skbr), GFP_ATOMIC);
+	if (!skbr)
+		return -ENOMEM;
+
+	/* Allocate the transfer descriptors for the pipe. */
+	tdr = pci_alloc_consistent(ipc_protocol->pcie->pci,
+				   pipe->nr_of_entries * sizeof(*tdr),
+				   &pipe->phy_tdr_start);
+	if (!tdr) {
+		kfree(skbr);
+		dev_err(ipc_protocol->dev, "tdr alloc error");
+		return -ENOMEM;
+	}
+
+	pipe->max_nr_of_queued_entries = pipe->nr_of_entries - 1;
+	pipe->nr_of_queued_entries = 0;
+	pipe->tdr_start = tdr;
+	pipe->skbr_start = skbr;
+	pipe->old_tail = 0;
+
+	ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr] = 0;
+
+	msg->open_pipe.type_of_message = IPC_MEM_MSG_OPEN_PIPE;
+	msg->open_pipe.pipe_nr = pipe->pipe_nr;
+	msg->open_pipe.tdr_addr = pipe->phy_tdr_start;
+	msg->open_pipe.tdr_entries = pipe->nr_of_entries;
+	msg->open_pipe.interrupt_moderation = pipe->irq_moderation;
+	msg->open_pipe.accumulation_backoff = pipe->accumulation_backoff;
+	msg->open_pipe.reliable = true;
+	msg->open_pipe.optimized_completion = true;
+	msg->open_pipe.irq_vector = pipe->irq;
+
+	return index;
+}
+
+static int ipc_protocol_msg_prepipe_close(struct iosm_protocol *ipc_protocol,
+					  union ipc_msg_prep_args *args)
+{
+	int index = -1;
+	union ipc_mem_msg_entry *msg =
+		ipc_protocol_free_msg_get(ipc_protocol, &index);
+	struct ipc_pipe *pipe = args->pipe_close.pipe;
+
+	if (!msg)
+		return -1;
+
+	msg->close_pipe.type_of_message = IPC_MEM_MSG_CLOSE_PIPE;
+	msg->close_pipe.pipe_nr = pipe->pipe_nr;
+
+	dev_dbg(ipc_protocol->dev, "IPC_MEM_MSG_CLOSE_PIPE(pipe_nr=%d)",
+		msg->close_pipe.pipe_nr);
+
+	return index;
+}
+
+static int ipc_protocol_msg_prep_sleep(struct iosm_protocol *ipc_protocol,
+				       union ipc_msg_prep_args *args)
+{
+	int index = -1;
+	union ipc_mem_msg_entry *msg =
+		ipc_protocol_free_msg_get(ipc_protocol, &index);
+
+	if (!msg) {
+		dev_err(ipc_protocol->dev, "failed to get free message");
+		return -1;
+	}
+
+	/* Prepare and send the host sleep message to CP to enter or exit D3. */
+	msg->host_sleep.type_of_message = IPC_MEM_MSG_SLEEP;
+	msg->host_sleep.target = args->sleep.target; /* 0=host, 1=device */
+
+	/* state; 0=enter, 1=exit 2=enter w/o protocol */
+	msg->host_sleep.state = args->sleep.state;
+
+	dev_dbg(ipc_protocol->dev, "IPC_MEM_MSG_SLEEP(target=%d; state=%d)",
+		msg->host_sleep.target, msg->host_sleep.state);
+
+	return index;
+}
+
+static int ipc_protocol_msg_prep_feature_set(struct iosm_protocol *ipc_protocol,
+					     union ipc_msg_prep_args *args)
+{
+	int index = -1;
+	union ipc_mem_msg_entry *msg =
+		ipc_protocol_free_msg_get(ipc_protocol, &index);
+
+	if (!msg) {
+		dev_err(ipc_protocol->dev, "failed to get free message");
+		return -1;
+	}
+
+	msg->feature_set.type_of_message = IPC_MEM_MSG_FEATURE_SET;
+	msg->feature_set.reset_enable = args->feature_set.reset_enable;
+
+	dev_dbg(ipc_protocol->dev, "IPC_MEM_MSG_FEATURE_SET(reset_enable=%d)",
+		msg->feature_set.reset_enable);
+
+	return index;
+}
+
+/* Processes the message consumed by CP. */
+bool ipc_protocol_msg_process(void *instance, int irq)
+{
+	struct iosm_protocol *ipc_protocol = instance;
+	struct ipc_rsp **rsp_ring = ipc_protocol->rsp_ring;
+	bool msg_processed = false;
+	int i;
+
+	if (ipc_protocol->p_ap_shm->msg_tail >= IPC_MEM_MSG_ENTRIES) {
+		dev_err(ipc_protocol->dev, "msg_tail out of range: %d",
+			ipc_protocol->p_ap_shm->msg_tail);
+		return msg_processed;
+	}
+
+	if (irq != IMEM_IRQ_DONT_CARE &&
+	    irq != ipc_protocol->p_ap_shm->ci.msg_irq_vector)
+		return msg_processed;
+
+	for (i = ipc_protocol->old_msg_tail;
+	     i != ipc_protocol->p_ap_shm->msg_tail;
+	     i = (i + 1) % IPC_MEM_MSG_ENTRIES) {
+		union ipc_mem_msg_entry *msg =
+			&ipc_protocol->p_ap_shm->msg_ring[i];
+
+		dev_dbg(ipc_protocol->dev, "msg[%d]: type=%u status=%d", i,
+			msg->common.type_of_message,
+			msg->common.completion_status);
+
+		/* Update response with status and wake up waiting requestor */
+		if (rsp_ring[i]) {
+			rsp_ring[i]->status =
+				(enum ipc_mem_msg_cs)
+					msg->common.completion_status;
+			complete(&rsp_ring[i]->completion);
+			rsp_ring[i] = NULL;
+		}
+		msg_processed = true;
+	}
+
+	ipc_protocol->old_msg_tail = i;
+	return msg_processed;
+}
+
+/* Sends data from UL list to CP for the provided pipe by updating the Head
+ * pointer of given pipe.
+ */
+bool ipc_protocol_ul_td_send(void *protocol_inst, struct ipc_pipe *pipe,
+			     struct sk_buff_head *p_ul_list)
+{
+	struct iosm_protocol *ipc_protocol = protocol_inst;
+	struct ipc_protocol_td *td;
+	bool hpda_pending = false;
+	s32 free_elements = 0;
+	struct sk_buff *skb;
+	u32 head;
+	u32 tail;
+
+	if (!ipc_protocol->p_ap_shm) {
+		dev_err(ipc_protocol->dev, "driver is not initialized");
+		return false;
+	}
+
+	/* Get head and tail of the td list and calculate
+	 * the number of free elements.
+	 */
+	head = ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr];
+	tail = pipe->old_tail;
+
+	while (!skb_queue_empty(p_ul_list)) {
+		if (head < tail)
+			free_elements = tail - head - 1;
+		else
+			free_elements =
+				pipe->nr_of_entries - head + ((s32)tail - 1);
+
+		if (free_elements <= 0) {
+			dev_dbg(ipc_protocol->dev,
+				"no free td elements for UL pipe %d",
+				pipe->pipe_nr);
+			break;
+		}
+
+		/* Get the td address. */
+		td = &pipe->tdr_start[head];
+
+		/* Take the first element of the uplink list and add it
+		 * to the td list.
+		 */
+		skb = skb_dequeue(p_ul_list);
+		if (WARN_ON(!skb))
+			break;
+
+		/* Save the reference to the uplink skbuf. */
+		pipe->skbr_start[head] = skb;
+
+		td->buffer.address = IPC_CB(skb)->mapping;
+		td->scs.size = skb->len;
+		td->scs.completion_status = 0;
+		td->next = 0;
+		td->reserved1 = 0;
+
+		pipe->nr_of_queued_entries++;
+
+		/* Calculate the new head and save it. */
+		head++;
+		if (head >= pipe->nr_of_entries)
+			head = 0;
+
+		ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr] = head;
+	}
+
+	if (pipe->old_head != head) {
+		dev_dbg(ipc_protocol->dev, "New UL TDs Pipe:%d", pipe->pipe_nr);
+
+		pipe->old_head = head;
+		/* Trigger doorbell because of pending UL packets. */
+		hpda_pending = true;
+	}
+
+	return hpda_pending;
+}
+
+/* Checks for Tail pointer update from CP and returns the data as SKB. */
+struct sk_buff *ipc_protocol_ul_td_process(void *protocol_inst,
+					   struct ipc_pipe *pipe)
+{
+	struct iosm_protocol *ipc_protocol = protocol_inst;
+	struct ipc_protocol_td *p_td = &pipe->tdr_start[pipe->old_tail];
+	struct sk_buff *skb = pipe->skbr_start[pipe->old_tail];
+
+	pipe->nr_of_queued_entries--;
+	pipe->old_tail++;
+	if (pipe->old_tail >= pipe->nr_of_entries)
+		pipe->old_tail = 0;
+
+	if (!p_td->buffer.address) {
+		dev_err(ipc_protocol->dev, "Td buffer address is NULL");
+		return NULL;
+	}
+
+	if (p_td->buffer.address != IPC_CB(skb)->mapping) {
+		dev_err(ipc_protocol->dev,
+			"pipe(%d): invalid buf_addr=%p or skb->data=%llx",
+			pipe->pipe_nr, (void *)p_td->buffer.address,
+			skb ? IPC_CB(skb)->mapping : 0);
+		return NULL;
+	}
+
+	return skb;
+}
+
+/* Allocates an SKB for CP to send data and updates the Head Pointer
+ * of the given Pipe#.
+ */
+bool ipc_protocol_dl_td_prepare(void *protocol_inst, struct ipc_pipe *pipe)
+{
+	struct iosm_protocol *ipc_protocol = protocol_inst;
+	u32 head, new_head;
+	struct ipc_protocol_td *td;
+	dma_addr_t mapping = 0;
+	struct sk_buff *skb;
+	u32 tail;
+
+	/* Get head and tail of the td list and calculate
+	 * the number of free elements.
+	 */
+	head = ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr];
+	tail = ipc_protocol->p_ap_shm->tail_array[pipe->pipe_nr];
+
+	new_head = head + 1;
+	if (new_head >= pipe->nr_of_entries)
+		new_head = 0;
+
+	if (new_head == tail)
+		return false;
+
+	/* Get the td address. */
+	td = &pipe->tdr_start[head];
+
+	/* Allocate the skbuf for the descriptor. */
+	skb = ipc_pcie_alloc_skb(ipc_protocol->pcie, pipe->buf_size, GFP_ATOMIC,
+				 &mapping, DMA_FROM_DEVICE,
+				 IPC_MEM_DL_ETH_OFFSET);
+	if (!skb)
+		return false;
+
+	td->buffer.address = mapping;
+	td->scs.size = pipe->buf_size;
+	td->scs.completion_status = 0;
+	td->next = 0;
+	td->reserved1 = 0;
+
+	/* store the new head value. */
+	ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr] = new_head;
+
+	/* Save the reference to the skbuf. */
+	pipe->skbr_start[head] = skb;
+
+	pipe->nr_of_queued_entries++;
+
+	return true;
+}
+
+/* Processes a TD completed by CP by checking the Tail Pointer of the given
+ * pipe.
+ */
+struct sk_buff *ipc_protocol_dl_td_process(void *protocol_inst,
+					   struct ipc_pipe *pipe)
+{
+	struct iosm_protocol *ipc_protocol = protocol_inst;
+	u32 tail = ipc_protocol->p_ap_shm->tail_array[pipe->pipe_nr];
+	struct ipc_protocol_td *p_td;
+	struct sk_buff *skb;
+
+	if (!pipe->tdr_start)
+		return NULL;
+
+	/* Copy the reference to the downlink buffer. */
+	p_td = &pipe->tdr_start[pipe->old_tail];
+	skb = pipe->skbr_start[pipe->old_tail];
+
+	/* Reset the ring elements. */
+	pipe->skbr_start[pipe->old_tail] = NULL;
+
+	pipe->nr_of_queued_entries--;
+
+	pipe->old_tail++;
+	if (pipe->old_tail >= pipe->nr_of_entries)
+		pipe->old_tail = 0;
+
+	if (!skb->data) {
+		dev_err(ipc_protocol->dev, "skb is null");
+		goto ret;
+	} else if (!p_td->buffer.address) {
+		dev_err(ipc_protocol->dev, "td/buffer address is null");
+		ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+		skb = NULL;
+		goto ret;
+	}
+
+	if (!IPC_CB(skb)) {
+		dev_err(ipc_protocol->dev, "pipe# %d, tail: %d skb_cb is NULL",
+			pipe->pipe_nr, tail);
+		ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+		skb = NULL;
+		goto ret;
+	}
+
+	if (p_td->buffer.address != IPC_CB(skb)->mapping) {
+		dev_err(ipc_protocol->dev, "invalid buf=%p or skb=%p",
+			(void *)p_td->buffer.address, skb->data);
+		ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+		skb = NULL;
+		goto ret;
+	} else if (p_td->scs.size > pipe->buf_size) {
+		dev_err(ipc_protocol->dev, "invalid buffer size %d > %d",
+			p_td->scs.size, pipe->buf_size);
+		ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+		skb = NULL;
+		goto ret;
+	} else if (p_td->scs.completion_status == IPC_MEM_TD_CS_ABORT) {
+		/* Discard aborted buffers. */
+		dev_dbg(ipc_protocol->dev, "discard 'aborted' buffers");
+		ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+		skb = NULL;
+		goto ret;
+	}
+
+	/* Set the length field in skbuf. */
+	skb_put(skb, p_td->scs.size);
+
+ret:
+	return skb;
+}
+
+void ipc_protocol_get_head_tail_index(void *protocol_inst,
+				      struct ipc_pipe *pipe, u32 *head,
+				      u32 *tail)
+{
+	struct iosm_protocol *ipc_protocol = protocol_inst;
+
+	if (head)
+		*head = ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr];
+
+	if (tail)
+		*tail = ipc_protocol->p_ap_shm->tail_array[pipe->pipe_nr];
+}
+
+/* Frees the TDs given to CP.  */
+void ipc_protocol_pipe_cleanup(void *protocol_inst, struct ipc_pipe *pipe)
+{
+	struct iosm_protocol *ipc_protocol = protocol_inst;
+	struct sk_buff *skb;
+	u32 head;
+	u32 tail;
+
+	if (!ipc_protocol->p_ap_shm) {
+		dev_err(ipc_protocol->dev, "p_ap_shm is NULL");
+		return;
+	}
+
+	/* Get the start and the end of the buffer list. */
+	head = ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr];
+	tail = pipe->old_tail;
+
+	/* Reset tail and head to 0. */
+	ipc_protocol->p_ap_shm->tail_array[pipe->pipe_nr] = 0;
+	ipc_protocol->p_ap_shm->head_array[pipe->pipe_nr] = 0;
+
+	/* Free pending uplink and downlink buffers. */
+	if (pipe->skbr_start) {
+		while (head != tail) {
+			/* Get the reference to the skbuf,
+			 * which is on the way and free it.
+			 */
+			skb = pipe->skbr_start[tail];
+			if (skb)
+				ipc_pcie_kfree_skb(ipc_protocol->pcie, skb);
+
+			tail++;
+			if (tail >= pipe->nr_of_entries)
+				tail = 0;
+		}
+
+		kfree(pipe->skbr_start);
+		pipe->skbr_start = NULL;
+	}
+
+	pipe->old_tail = 0;
+
+	/* Free and reset the td and skbuf circular buffers. kfree is safe! */
+	if (pipe->tdr_start) {
+		pci_free_consistent(ipc_protocol->pcie->pci,
+				    sizeof(*pipe->tdr_start) *
+					    pipe->nr_of_entries,
+				    pipe->tdr_start, pipe->phy_tdr_start);
+
+		pipe->tdr_start = NULL;
+	}
+}
+
+enum ipc_mem_device_ipc_state ipc_protocol_get_ipc_status(void *protocol_inst)
+{
+	struct iosm_protocol *ipc_protocol = protocol_inst;
+
+	return (enum ipc_mem_device_ipc_state)
+		ipc_protocol->p_ap_shm->device_info.ipc_status;
+}
+
+enum ipc_mem_exec_stage
+ipc_protocol_get_ap_exec_stage(struct iosm_protocol *ipc_protocol)
+{
+	return ipc_protocol->p_ap_shm->device_info.execution_stage;
+}
+
+int ipc_protocol_msg_prep(void *instance, enum ipc_msg_prep_type msg_type,
+			  union ipc_msg_prep_args *args)
+{
+	struct iosm_protocol *ipc_protocol = instance;
+
+	switch (msg_type) {
+	case IPC_MSG_PREP_SLEEP:
+		return ipc_protocol_msg_prep_sleep(ipc_protocol, args);
+
+	case IPC_MSG_PREP_PIPE_OPEN:
+		return ipc_protocol_msg_prepipe_open(ipc_protocol, args);
+
+	case IPC_MSG_PREP_PIPE_CLOSE:
+		return ipc_protocol_msg_prepipe_close(ipc_protocol, args);
+
+	case IPC_MSG_PREP_FEATURE_SET:
+		return ipc_protocol_msg_prep_feature_set(ipc_protocol, args);
+
+		/* Unsupported messages in protocol */
+	case IPC_MSG_PREP_MAP:
+	case IPC_MSG_PREP_UNMAP:
+	default:
+		dev_err(ipc_protocol->dev,
+			"unsupported message type: %d in protocol", msg_type);
+		return -1;
+	}
+}
+
+u32 ipc_protocol_pm_dev_get_sleep_notification(void *protocol_inst)
+{
+	struct iosm_protocol *ipc_protocol = protocol_inst;
+
+	return ipc_protocol->p_ap_shm->device_info.device_sleep_notification;
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h b/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h
new file mode 100644
index 000000000000..d59324faff2b
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_protocol_ops.h
@@ -0,0 +1,358 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_PROTOCOL_OPS_H
+#define IOSM_IPC_PROTOCOL_OPS_H
+
+#include "iosm_ipc_protocol.h"
+
+/**
+ * enum ipc_mem_td_cs - Completion status of a TD
+ * @IPC_MEM_TD_CS_INVALID:	      Initial status - td not yet used.
+ * @IPC_MEM_TD_CS_PARTIAL_TRANSFER:   More data pending -> next TD used for this
+ * @IPC_MEM_TD_CS_END_TRANSFER:	      IO transfer is complete.
+ * @IPC_MEM_TD_CS_OVERFLOW:	      Buffer too small for the IO transfer
+ * @IPC_MEM_TD_CS_ABORT:	      TD marked as abort and shall be discarded
+ *				      by AP.
+ * @IPC_MEM_TD_CS_ERROR:	      General error.
+ */
+enum ipc_mem_td_cs {
+	IPC_MEM_TD_CS_INVALID,
+	IPC_MEM_TD_CS_PARTIAL_TRANSFER,
+	IPC_MEM_TD_CS_END_TRANSFER,
+	IPC_MEM_TD_CS_OVERFLOW,
+	IPC_MEM_TD_CS_ABORT,
+	IPC_MEM_TD_CS_ERROR,
+};
+
+/* Completion status of IPC Message */
+enum ipc_mem_msg_cs {
+	IPC_MEM_MSG_CS_INVALID,
+	IPC_MEM_MSG_CS_SUCCESS,
+	IPC_MEM_MSG_CS_ERROR,
+};
+
+/**
+ * struct ipc_msg_prep_args_pipe - Structures for argument passing towards
+ *				   the actual message preparation
+ * @pipe:	Pipe to open/close
+ */
+struct ipc_msg_prep_args_pipe {
+	struct ipc_pipe *pipe; /* pipe to open/close */
+};
+
+struct ipc_msg_prep_args_sleep {
+	unsigned int target; /* 0=host, 1=device */
+	unsigned int state; /* 0=enter sleep, 1=exit sleep */
+};
+
+struct ipc_msg_prep_feature_set {
+	/* 0 = out-of-band, 1 = in-band-crash notification */
+	unsigned int reset_enable;
+};
+
+struct ipc_msg_prep_map {
+	unsigned int region_id;
+	unsigned long addr;
+	size_t size;
+};
+
+struct ipc_msg_prep_unmap {
+	unsigned int region_id;
+};
+
+/* Union for message to handle the message to CP in the tasklet context. */
+union ipc_msg_prep_args {
+	struct ipc_msg_prep_args_pipe pipe_open;
+	struct ipc_msg_prep_args_pipe pipe_close;
+	struct ipc_msg_prep_args_sleep sleep;
+	struct ipc_msg_prep_feature_set feature_set;
+	struct ipc_msg_prep_map map;
+	struct ipc_msg_prep_unmap unmap;
+};
+
+/**
+ * enum ipc_msg_prep_type - Enum for message prepare actions
+ * @IPC_MSG_PREP_SLEEP:		prepare a sleep message
+ * @IPC_MSG_PREP_PIPE_OPEN:	prepare a pipe open message
+ * @IPC_MSG_PREP_PIPE_CLOSE:	prepare a pipe close message
+ * @IPC_MSG_PREP_FEATURE_SET:	prepare a feature set message
+ * @IPC_MSG_PREP_MAP:		prepare a memory map message
+ * @IPC_MSG_PREP_UNMAP:		prepare a memory unmap message
+ */
+enum ipc_msg_prep_type {
+	IPC_MSG_PREP_SLEEP,
+	IPC_MSG_PREP_PIPE_OPEN,
+	IPC_MSG_PREP_PIPE_CLOSE,
+	IPC_MSG_PREP_FEATURE_SET,
+	IPC_MSG_PREP_MAP,
+	IPC_MSG_PREP_UNMAP,
+};
+
+/**
+ * struct ipc_rsp - Response for message to CP
+ * @completion:	For waking up requestor
+ * @status:	Completion status
+ */
+struct ipc_rsp {
+	struct completion completion;
+	enum ipc_mem_msg_cs status;
+};
+
+/**
+ * enum ipc_mem_msg - Type-definition of the messages.
+ * @IPC_MEM_MSG_OPEN_PIPE:	AP ->CP: Open a pipe
+ * @IPC_MEM_MSG_CLOSE_PIPE:	AP ->CP: Close a pipe
+ * @IPC_MEM_MSG_ABORT_PIPE:	AP ->CP: wait for completion of the
+ *				running transfer and abort all pending
+ *				IO-transfers for the pipe
+ * @IPC_MEM_MSG_SLEEP:		AP ->CP: host enter or exit sleep
+ * @IPC_MEM_MSG_FEATURE_SET:	AP ->CP: Intel feature configuration
+ */
+enum ipc_mem_msg {
+	IPC_MEM_MSG_OPEN_PIPE = 0x01,
+	IPC_MEM_MSG_CLOSE_PIPE = 0x02,
+	IPC_MEM_MSG_ABORT_PIPE = 0x03,
+	IPC_MEM_MSG_SLEEP = 0x04,
+	IPC_MEM_MSG_FEATURE_SET = 0xF0,
+};
+
+struct ipc_mem_msg_open_pipe {
+	u64 tdr_addr;
+	u32 tdr_entries : 16;
+	u32 pipe_nr : 8;
+	u32 type_of_message : 8;
+	u32 irq_vector : 5;
+	u32 optimized_completion : 1;
+	u32 reliable : 1;
+	u32 reserved1 : 1;
+	u32 interrupt_moderation : 24;
+	u32 accumulation_backoff : 24;
+	u32 reserved2 : 8;
+	u32 completion_status;
+};
+
+/* Message structure for close pipe. */
+struct ipc_mem_msg_close_pipe {
+	u32 reserved1[2];
+	u32 reserved2 : 16;
+	u32 pipe_nr : 8;
+	u32 type_of_message : 8;
+	u32 reserved3;
+	u32 reserved4;
+	u32 completion_status;
+};
+
+/* Message structure for abort pipe. */
+struct ipc_mem_msg_abort_pipe {
+	u32 reserved1[2];
+	u32 reserved2 : 16;
+	u32 pipe_nr : 8;
+	u32 type_of_message : 8;
+	u32 reserved3;
+	u32 reserved4;
+	u32 completion_status;
+};
+
+/**
+ * struct ipc_mem_msg_host_sleep - Message structure for sleep message.
+ * @reserved1:		Reserved
+ * @target:		0=host, 1=device; host or EP device
+ *			is the message target
+ * @state:		0=enter sleep, 1=exit sleep,
+ *			2=enter sleep no protocol
+ * @reserved2:		Reserved
+ * @type_of_message:	Message type
+ * @reserved3:		Reserved
+ * @reserved4:		Reserved
+ * @completion_status:	Message Completion Status
+ */
+struct ipc_mem_msg_host_sleep {
+	u32 reserved1[2];
+	u32 target : 8;
+	u32 state : 8;
+	u32 reserved2 : 8;
+	u32 type_of_message : 8;
+	u32 reserved3;
+	u32 reserved4;
+	u32 completion_status;
+};
+
+/* Message structure for feature_set message */
+struct ipc_mem_msg_feature_set {
+	u32 reserved1[2];
+	u32 reserved2 : 23;
+	u32 reset_enable : 1;
+	u32 type_of_message : 8;
+	u32 reserved3;
+	u32 reserved4;
+	u32 completion_status;
+};
+
+/* Message structure for completion status update. */
+struct ipc_mem_msg_common {
+	u32 reserved1[2];
+	u32 reserved2 : 24;
+	u32 type_of_message : 8;
+	u32 reserved3;
+	u32 reserved4;
+	u32 completion_status;
+};
+
+/* Union with all possible messages. */
+union ipc_mem_msg_entry {
+	struct ipc_mem_msg_open_pipe open_pipe;
+	struct ipc_mem_msg_close_pipe close_pipe;
+	struct ipc_mem_msg_abort_pipe abort_pipe;
+	struct ipc_mem_msg_host_sleep host_sleep;
+	struct ipc_mem_msg_feature_set feature_set;
+	/* Used to access msg_type and to set the completion status. */
+	struct ipc_mem_msg_common common;
+};
+
+/* Transfer descriptor definition. */
+struct ipc_protocol_td {
+	union {
+		/*   0 :  63 - 64-bit address of a buffer in host memory. */
+		dma_addr_t address;
+		struct {
+			/*   0 :  31 - 32 bit address */
+			__le32 address;
+			/*  32 :  63 - corresponding descriptor */
+			__le32 desc;
+		} __attribute__ ((__packed__)) shm;
+	} buffer;
+
+	struct {
+	/*	64 :  87 - Size of the buffer.
+	 *	The host provides the size of the buffer queued.
+	 *	The EP device reads this value and shall update
+	 *	it for downlink transfers to indicate the
+	 *	amount of data written in buffer.
+	 */
+		u32 size : 24;
+	/*	88 :  95 - This field provides the completion status
+	 *	of the TD. When queuing the TD, the host sets
+	 *	the status to 0. The EP device updates this
+	 *	field when completing the TD.
+	 */
+		u32 completion_status : 8;
+	} __attribute__ ((__packed__)) scs;
+
+	/*  96 : 103 - nr of following descriptors */
+	u32 next : 8;
+	/* 104 : 127 - reserved */
+	u32 reserved1 : 24;
+} __attribute__ ((__packed__));
+
+/**
+ * ipc_protocol_msg_prep - Prepare message based upon message type
+ * @ptr:	iosm_protocol instance
+ * @msg_type:	message prepare type
+ * @args:	message arguments
+ *
+ * Return: 0 on success, -1 in case of failure
+ */
+int ipc_protocol_msg_prep(void *ptr, enum ipc_msg_prep_type msg_type,
+			  union ipc_msg_prep_args *args);
+
+/**
+ * ipc_protocol_msg_hp_update - Function for head pointer update
+ *				of message ring
+ * @ptr:	iosm_protocol instance
+ */
+void ipc_protocol_msg_hp_update(void *ptr);
+
+/**
+ * ipc_protocol_msg_process - Function for processing responses
+ *			      to IPC messages
+ * @ptr:	iosm_protocol instance
+ * @irq:	IRQ vector
+ *
+ * Return:	True on success; false if error
+ */
+bool ipc_protocol_msg_process(void *ptr, int irq);
+
+/**
+ * ipc_protocol_ul_td_send - Function for sending the data to CP
+ * @ptr: iosm_protocol instance
+ * @pipe: Pipe instance
+ * @p_ul_list: uplink sk_buff list
+ *
+ * Return: true on success; false in case of error
+ */
+bool ipc_protocol_ul_td_send(void *ptr, struct ipc_pipe *pipe,
+			     struct sk_buff_head *p_ul_list);
+
+/**
+ * ipc_protocol_ul_td_process - Function for processing the sent data
+ * @ptr: iosm_protocol instance
+ * @pipe: Pipe instance
+ *
+ * Return: sk_buff instance
+ */
+struct sk_buff *ipc_protocol_ul_td_process(void *ptr, struct ipc_pipe *pipe);
+
+/**
+ * ipc_protocol_dl_td_prepare - Function for providing DL TDs to CP
+ * @ptr: iosm_protocol instance
+ * @pipe: Pipe instance
+ *
+ * Return: true on success; false in case of error
+ */
+bool ipc_protocol_dl_td_prepare(void *ptr, struct ipc_pipe *pipe);
+
+/**
+ * ipc_protocol_dl_td_process - Function for processing the DL data
+ * @ptr: iosm_protocol instance
+ * @pipe: Pipe instance
+ *
+ * Return: sk_buff instance
+ */
+struct sk_buff *ipc_protocol_dl_td_process(void *ptr, struct ipc_pipe *pipe);
+
+/**
+ * ipc_protocol_get_head_tail_index - Function for getting Head and Tail
+ *				      pointer index of given pipe
+ * @ptr: iosm_protocol instance
+ * @pipe: Pipe Instance
+ * @head: head pointer index of the given pipe
+ * @tail: tail pointer index of the given pipe
+ */
+void ipc_protocol_get_head_tail_index(void *ptr, struct ipc_pipe *pipe,
+				      u32 *head, u32 *tail);
+/**
+ * ipc_protocol_get_ipc_status - Function for getting the IPC Status
+ * @ptr: iosm_protocol instance
+ *
+ * Return: Returns IPC State
+ */
+enum ipc_mem_device_ipc_state ipc_protocol_get_ipc_status(void *ptr);
+
+/**
+ * ipc_protocol_pipe_cleanup - Function to cleanup pipe resources
+ * @ptr: iosm_protocol instance
+ * @pipe: Pipe instance
+ */
+void ipc_protocol_pipe_cleanup(void *ptr, struct ipc_pipe *pipe);
+
+/**
+ * ipc_protocol_get_ap_exec_stage - Function for getting AP Exec Stage
+ * @ipc_protocol: pointer to struct iosm protocol
+ *
+ * Return: returns BOOT Stages
+ */
+enum ipc_mem_exec_stage
+ipc_protocol_get_ap_exec_stage(struct iosm_protocol *ipc_protocol);
+
+/**
+ * ipc_protocol_pm_dev_get_sleep_notification - Function for getting Dev Sleep
+ *						notification
+ * @ptr: iosm_protocol instance
+ *
+ * Return: Returns dev PM State
+ */
+u32 ipc_protocol_pm_dev_get_sleep_notification(void *ptr);
+#endif
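
For illustration, the intended call flow behind these prototypes can be
sketched as below. The helper is hypothetical and not part of this series:
it assumes the response slot (struct ipc_rsp) has already been registered
with the protocol layer so that ipc_protocol_msg_process() can complete it
from the MSI path, and the header name is assumed from the object list in
the iosm Makefile.

	#include <linux/completion.h>

	#include "iosm_ipc_protocol_ops.h"	/* header name assumed */

	/* Illustrative sketch: queue one OPEN_PIPE message, signal the head
	 * pointer update so CP fetches it, then block until the response is
	 * processed.  Returns the enum ipc_mem_msg_cs status written by CP.
	 */
	static int example_pipe_open_blocking(void *ipc_protocol,
					      union ipc_msg_prep_args *args,
					      struct ipc_rsp *rsp)
	{
		init_completion(&rsp->completion);

		if (ipc_protocol_msg_prep(ipc_protocol, IPC_MSG_PREP_PIPE_OPEN,
					  args))
			return -1;

		/* Tell CP that a new message descriptor is queued. */
		ipc_protocol_msg_hp_update(ipc_protocol);

		/* Completed by ipc_protocol_msg_process() in IRQ context. */
		wait_for_completion(&rsp->completion);

		return rsp->status;
	}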
-- 
2.12.3


* [RFC 15/18] net: iosm: uevent support
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (13 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 14/18] net: iosm: protocol operations M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 16/18] net: iosm: net driver M Chetan Kumar
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

Report modem status via uevent.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_uevent.c | 47 +++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_uevent.h | 41 ++++++++++++++++++++++++++++
 2 files changed, 88 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_uevent.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_uevent.c b/drivers/net/wwan/iosm/iosm_ipc_uevent.c
new file mode 100644
index 000000000000..27542ca27613
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_uevent.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/slab.h>
+
+#include "iosm_ipc_sio.h"
+#include "iosm_ipc_uevent.h"
+
+/* Update the uevent in work queue context */
+static void ipc_uevent_work(struct work_struct *data)
+{
+	struct ipc_uevent_info *info;
+	char *envp[2] = { NULL, NULL };
+
+	info = container_of(data, struct ipc_uevent_info, work);
+
+	envp[0] = info->uevent;
+
+	if (kobject_uevent_env(&info->dev->kobj, KOBJ_CHANGE, envp))
+		pr_err("uevent %s failed to send", info->uevent);
+
+	kfree(info);
+}
+
+void ipc_uevent_send(struct device *dev, char *uevent)
+{
+	struct ipc_uevent_info *info;
+
+	if (!uevent || !dev)
+		return;
+
+	info = kzalloc(sizeof(*info), GFP_ATOMIC);
+	if (!info)
+		return;
+
+	/* Initialize the kernel work queue */
+	INIT_WORK(&info->work, ipc_uevent_work);
+
+	/* Store the device and event information */
+	info->dev = dev;
+	snprintf(info->uevent, MAX_UEVENT_LEN, "%s: %s", dev_name(dev), uevent);
+
+	/* Schedule uevent in process context using work queue */
+	schedule_work(&info->work);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_uevent.h b/drivers/net/wwan/iosm/iosm_ipc_uevent.h
new file mode 100644
index 000000000000..422f64411c6e
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_uevent.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_UEVENT_H
+#define IOSM_IPC_UEVENT_H
+
+/* Baseband event strings */
+#define UEVENT_MDM_NOT_READY "MDM_NOT_READY"
+#define UEVENT_ROM_READY "ROM_READY"
+#define UEVENT_MDM_READY "MDM_READY"
+#define UEVENT_CRASH "CRASH"
+#define UEVENT_CD_READY "CD_READY"
+#define UEVENT_CD_READY_LINK_DOWN "CD_READY_LINK_DOWN"
+#define UEVENT_MDM_TIMEOUT "MDM_TIMEOUT"
+
+/* Maximum length of user events */
+#define MAX_UEVENT_LEN 64
+
+/**
+ * struct ipc_uevent_info - Uevent information structure.
+ * @dev:	Pointer to device structure
+ * @uevent:	Uevent information
+ * @work:	Uevent work struct
+ */
+struct ipc_uevent_info {
+	struct device *dev;
+	char uevent[MAX_UEVENT_LEN];
+	struct work_struct work;
+};
+
+/**
+ * ipc_uevent_send - Send modem event to user space.
+ * @dev:	Generic device pointer
+ * @uevent:	Uevent information
+ *
+ */
+void ipc_uevent_send(struct device *dev, char *uevent);
+
+#endif
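
As a usage illustration, a state transition elsewhere in the driver could
report the modem reaching the run stage as sketched below. The caller is
hypothetical; only ipc_uevent_send() and the event string come from this
patch.

	#include <linux/device.h>

	#include "iosm_ipc_uevent.h"

	/* Illustrative sketch: notify user space that the modem is up.
	 * The event string is prefixed with dev_name() and delivered from
	 * the system workqueue; the GFP_ATOMIC allocation in
	 * ipc_uevent_send() keeps this callable from atomic context.
	 */
	static void example_report_modem_ready(struct device *dev)
	{
		ipc_uevent_send(dev, UEVENT_MDM_READY);
	}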
-- 
2.12.3


* [RFC 16/18] net: iosm: net driver
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (14 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 15/18] net: iosm: uevent support M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 17/18] net: iosm: readme file M Chetan Kumar
  2020-11-23 13:51 ` [RFC 18/18] net: iosm: infrastructure M Chetan Kumar
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

1) Create net device for data/IP communication.
2) Bind VLAN ID to mux IP session.
3) Implement net device operations.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/iosm_ipc_wwan.c | 674 ++++++++++++++++++++++++++++++++++
 drivers/net/wwan/iosm/iosm_ipc_wwan.h |  72 ++++
 2 files changed, 746 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.c
 create mode 100644 drivers/net/wwan/iosm/iosm_ipc_wwan.h

diff --git a/drivers/net/wwan/iosm/iosm_ipc_wwan.c b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
new file mode 100644
index 000000000000..f14a971455bb
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_wwan.c
@@ -0,0 +1,674 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#include <linux/if_vlan.h>
+
+#include "iosm_ipc_chnl_cfg.h"
+#include "iosm_ipc_imem_ops.h"
+
+/* Minimum number of transmit queues per WWAN root device */
+#define WWAN_MIN_TXQ (1)
+/* Maximum number of receive queues per WWAN root device */
+#define WWAN_MAX_RXQ (1)
+/* Default transmit queue for WWAN root device */
+#define WWAN_DEFAULT_TXQ (0)
+/* VLAN tag for WWAN root device */
+#define WWAN_ROOT_VLAN_TAG (0)
+
+#define IPC_MEM_MIN_MTU_SIZE (68)
+#define IPC_MEM_MAX_MTU_SIZE (1024 * 1024)
+
+#define IPC_MEM_VLAN_TO_SESSION (1)
+
+/* Required alignment for TX in bytes (32 bit/4 bytes) */
+#define IPC_WWAN_ALIGN (4)
+
+/**
+ * struct ipc_vlan_info - This structure includes information about VLAN device.
+ * @vlan_id:	VLAN tag of the VLAN device.
+ * @ch_id:	IPC channel number for which VLAN device is created.
+ * @stats:	Contains statistics of VLAN devices.
+ */
+struct ipc_vlan_info {
+	int vlan_id;
+	int ch_id;
+	struct net_device_stats stats;
+};
+
+/**
+ * struct iosm_wwan - This structure contains information about WWAN root device
+ *		     and interface to the IPC layer.
+ * @vlan_devs:		Contains information about VLAN devices created under
+ *			WWAN root device.
+ * @netdev:		Pointer to network interface device structure.
+ * @ops_instance:	Instance pointer for Callbacks
+ * @dev:		Pointer to device structure
+ * @lock:		Spinlock to be used for atomic operations of the
+ *			root device.
+ * @stats:		Contains statistics of WWAN root device
+ * @vlan_devs_nr:	Number of VLAN devices.
+ * @if_mutex:		Mutex used for add and remove vlan-id
+ * @max_devs:		Maximum supported VLAN devs
+ * @max_ip_devs:	Maximum supported IP VLAN devs
+ * @is_registered:	Registration status with netdev
+ */
+struct iosm_wwan {
+	struct ipc_vlan_info *vlan_devs;
+	struct net_device *netdev;
+	void *ops_instance;
+	struct device *dev;
+	spinlock_t lock; /* Used for atomic operations on root device */
+	struct net_device_stats stats;
+	int vlan_devs_nr;
+	struct mutex if_mutex; /* Mutex used for add and remove vlan-id */
+	int max_devs;
+	int max_ip_devs;
+	u8 is_registered : 1;
+};
+
+/* Get the array index of requested tag. */
+static int ipc_wwan_get_vlan_devs_nr(struct iosm_wwan *ipc_wwan, u16 tag)
+{
+	int i = 0;
+
+	if (!ipc_wwan->vlan_devs)
+		return -EINVAL;
+
+	for (i = 0; i < ipc_wwan->vlan_devs_nr; i++)
+		if (ipc_wwan->vlan_devs[i].vlan_id == tag)
+			return i;
+
+	return -EINVAL;
+}
+
+static int ipc_wwan_add_vlan(struct iosm_wwan *ipc_wwan, u16 vid)
+{
+	if (vid >= 512 || !ipc_wwan->vlan_devs)
+		return -EINVAL;
+
+	if (vid == WWAN_ROOT_VLAN_TAG)
+		return 0;
+
+	mutex_lock(&ipc_wwan->if_mutex);
+
+	/* get channel id */
+	ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].ch_id =
+		imem_sys_wwan_open(ipc_wwan->ops_instance, vid);
+
+	if (ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].ch_id < 0) {
+		dev_err(ipc_wwan->dev,
+			"cannot connect wwan0 & id %d to the IPC mem layer",
+			vid);
+		mutex_unlock(&ipc_wwan->if_mutex);
+		return -ENODEV;
+	}
+
+	/* save vlan id */
+	ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].vlan_id = vid;
+
+	dev_dbg(ipc_wwan->dev, "Channel id %d allocated to vlan id %d",
+		ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].ch_id,
+		ipc_wwan->vlan_devs[ipc_wwan->vlan_devs_nr].vlan_id);
+
+	ipc_wwan->vlan_devs_nr++;
+
+	mutex_unlock(&ipc_wwan->if_mutex);
+
+	return 0;
+}
+
+static int ipc_wwan_remove_vlan(struct iosm_wwan *ipc_wwan, u16 vid)
+{
+	int ch_nr = ipc_wwan_get_vlan_devs_nr(ipc_wwan, vid);
+	int i = 0;
+
+	if (ch_nr < 0) {
+		dev_err(ipc_wwan->dev, "vlan dev not found for vid = %d", vid);
+		return ch_nr;
+	}
+
+	if (ipc_wwan->vlan_devs[ch_nr].ch_id < 0) {
+		dev_err(ipc_wwan->dev, "invalid ch nr %d to kill", ch_nr);
+		return -EINVAL;
+	}
+
+	mutex_lock(&ipc_wwan->if_mutex);
+
+	imem_sys_wwan_close(ipc_wwan->ops_instance, vid,
+			    ipc_wwan->vlan_devs[ch_nr].ch_id);
+
+	ipc_wwan->vlan_devs[ch_nr].ch_id = -1;
+
+	/* re-align the vlan information as we removed one tag */
+	for (i = ch_nr; i < ipc_wwan->vlan_devs_nr; i++)
+		memcpy(&ipc_wwan->vlan_devs[i], &ipc_wwan->vlan_devs[i + 1],
+		       sizeof(struct ipc_vlan_info));
+
+	ipc_wwan->vlan_devs_nr--;
+
+	mutex_unlock(&ipc_wwan->if_mutex);
+
+	return 0;
+}
+
+/* Checks the protocol and discards the Ethernet header or VLAN header
+ * accordingly.
+ */
+static int ipc_wwan_pull_header(struct sk_buff *skb, bool *is_ip)
+{
+	unsigned int header_size;
+	__be16 proto;
+
+	if (skb->protocol == htons(ETH_P_8021Q)) {
+		proto = vlan_eth_hdr(skb)->h_vlan_encapsulated_proto;
+
+		if (skb->len < VLAN_ETH_HLEN)
+			header_size = 0;
+		else
+			header_size = VLAN_ETH_HLEN;
+	} else {
+		proto = eth_hdr(skb)->h_proto;
+
+		if (skb->len < ETH_HLEN)
+			header_size = 0;
+		else
+			header_size = ETH_HLEN;
+	}
+
+	/* If a valid pointer */
+	if (header_size > 0 && is_ip) {
+		*is_ip = (proto == htons(ETH_P_IP)) ||
+			 (proto == htons(ETH_P_IPV6));
+
+		/* Discard the vlan/ethernet header. */
+		if (unlikely(!skb_pull(skb, header_size)))
+			header_size = 0;
+	}
+
+	return header_size;
+}
+
+/* Get VLAN tag from IPC SESSION ID */
+static inline u16 ipc_wwan_mux_session_to_vlan_tag(int id)
+{
+	return (u16)(id + IPC_MEM_VLAN_TO_SESSION);
+}
+
+/* Get IPC SESSION ID from VLAN tag */
+static inline int ipc_wwan_vlan_to_mux_session_id(u16 tag)
+{
+	return tag - IPC_MEM_VLAN_TO_SESSION;
+}
+
+/* Add new vlan device and open a channel */
+static int ipc_wwan_vlan_rx_add_vid(struct net_device *netdev, __be16 proto,
+				    u16 vid)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(netdev);
+
+	if (vid != IPC_WWAN_DSS_ID_4)
+		return ipc_wwan_add_vlan(ipc_wwan, vid);
+
+	return 0;
+}
+
+/* Remove vlan device and de-allocate channel */
+static int ipc_wwan_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto,
+				     u16 vid)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(netdev);
+
+	if (vid == WWAN_ROOT_VLAN_TAG)
+		return 0;
+
+	return ipc_wwan_remove_vlan(ipc_wwan, vid);
+}
+
+static int ipc_wwan_open(struct net_device *netdev)
+{
+	/* Octets in one ethernet addr */
+	if (netdev->addr_len < ETH_ALEN) {
+		pr_err("cannot build the Ethernet address for \"%s\"",
+		       netdev->name);
+		return -ENODEV;
+	}
+
+	/* enable tx path, DL data may follow */
+	netif_tx_start_all_queues(netdev);
+
+	return 0;
+}
+
+static int ipc_wwan_stop(struct net_device *netdev)
+{
+	pr_debug("Stop all TX Queues");
+
+	netif_tx_stop_all_queues(netdev);
+	return 0;
+}
+
+int ipc_wwan_receive(struct iosm_wwan *ipc_wwan, struct sk_buff *skb_arg,
+		     bool dss)
+{
+	struct sk_buff *skb;
+	struct ethhdr *eth;
+	u16 tag = 0;
+
+	if (unlikely(!ipc_wwan)) {
+		if (skb_arg)
+			dev_kfree_skb(skb_arg);
+		return -EINVAL;
+	}
+
+	skb = skb_arg;
+
+	eth = (struct ethhdr *)skb->data;
+	if (unlikely(!eth)) {
+		dev_err(ipc_wwan->dev, "ethernet header info error");
+		dev_kfree_skb(skb);
+		return -1;
+	}
+
+	/* Build the Ethernet header; the address helpers used here
+	 * require kernel 3.14.0 or later.
+	 */
+	ether_addr_copy(eth->h_dest, ipc_wwan->netdev->dev_addr);
+	ether_addr_copy(eth->h_source, ipc_wwan->netdev->dev_addr);
+	eth->h_source[ETH_ALEN - 1] ^= 0x01; /* src is us xor 1 */
+	/* set the ethernet payload type: ipv4 or ipv6 or Dummy type
+	 * for 802.3 frames
+	 */
+	eth->h_proto = htons(ETH_P_802_3);
+	if (!dss) {
+		if ((skb->data[ETH_HLEN] & 0xF0) == 0x40)
+			eth->h_proto = htons(ETH_P_IP);
+		else if ((skb->data[ETH_HLEN] & 0xF0) == 0x60)
+			eth->h_proto = htons(ETH_P_IPV6);
+	}
+
+	skb->dev = ipc_wwan->netdev;
+	skb->protocol = eth_type_trans(skb, ipc_wwan->netdev);
+	skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+	vlan_get_tag(skb, &tag);
+	/* The byte count passed to the stats update does not include
+	 * ETH_HLEN: eth_type_trans() has already pulled the Ethernet
+	 * header, so skb->len no longer covers it.
+	 */
+	ipc_wwan_update_stats(ipc_wwan, ipc_wwan_vlan_to_mux_session_id(tag),
+			      skb->len, false);
+
+	/* The netif_rx_ni() return value is not used. */
+	netif_rx_ni(skb);
+	return 0;
+}
+
+/* Align SKB to 32bit, if not already aligned */
+static struct sk_buff *ipc_wwan_skb_align(struct iosm_wwan *ipc_wwan,
+					  struct sk_buff *skb)
+{
+	unsigned int offset = (uintptr_t)skb->data & (IPC_WWAN_ALIGN - 1);
+	struct sk_buff *new_skb;
+
+	if (offset == 0)
+		return skb;
+
+	/* Allocate new skb to copy into */
+	new_skb = dev_alloc_skb(skb->len + (IPC_WWAN_ALIGN - 1));
+	if (unlikely(!new_skb)) {
+		dev_err(ipc_wwan->dev, "failed to reallocate skb");
+		goto out;
+	}
+
+	/* Make sure newly allocated skb is aligned */
+	offset = (uintptr_t)new_skb->data & (IPC_WWAN_ALIGN - 1);
+	if (unlikely(offset != 0))
+		skb_reserve(new_skb, IPC_WWAN_ALIGN - offset);
+
+	/* Copy payload */
+	memcpy(new_skb->data, skb->data, skb->len);
+
+	skb_put(new_skb, skb->len);
+out:
+	dev_kfree_skb(skb);
+	return new_skb;
+}
+
+/* Transmit a packet (called by the kernel) */
+static int ipc_wwan_transmit(struct sk_buff *skb, struct net_device *netdev)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(netdev);
+	bool is_ip = false;
+	int ret = -EINVAL;
+	int header_size;
+	int idx = 0;
+	u16 tag = 0;
+
+	vlan_get_tag(skb, &tag);
+
+	/* If the SKB is of WWAN root device then don't send it to device.
+	 * Free the SKB and then return.
+	 */
+	if (unlikely(tag == WWAN_ROOT_VLAN_TAG))
+		goto exit;
+
+	/* Discard the Ethernet header or VLAN Ethernet header depending
+	 * on the protocol.
+	 */
+	header_size = ipc_wwan_pull_header(skb, &is_ip);
+	if (!header_size)
+		goto exit;
+
+	/* Get the channel number corresponding to VLAN ID */
+	idx = ipc_wwan_get_vlan_devs_nr(ipc_wwan, tag);
+	if (unlikely(idx < 0 || idx >= ipc_wwan->max_devs ||
+		     ipc_wwan->vlan_devs[idx].ch_id < 0))
+		goto exit;
+
+	/* VLAN IDs from 1 to 255 are for IP data
+	 * 257 to 512 are for non-IP data
+	 */
+	if (tag > 0 && tag < 256) {
+		if (unlikely(!is_ip)) {
+			ret = -EXDEV;
+			goto exit;
+		}
+	} else if (tag > 256 && tag < 512) {
+		if (unlikely(is_ip)) {
+			ret = -EXDEV;
+			goto exit;
+		}
+
+		/* Align the SKB only for control packets if not aligned. */
+		skb = ipc_wwan_skb_align(ipc_wwan, skb);
+		if (!skb)
+			goto exit;
+	} else {
+		/* Unknown VLAN IDs */
+		ret = -EXDEV;
+		goto exit;
+	}
+
+	/* Send the SKB to device for transmission */
+	ret = imem_sys_wwan_transmit(ipc_wwan->ops_instance, tag,
+				     ipc_wwan->vlan_devs[idx].ch_id, skb);
+
+	/* Return code of zero is success */
+	if (ret == 0) {
+		ret = NETDEV_TX_OK;
+	} else if (ret == -2) {
+		/* Return code -2 is to enable re-enqueue of the skb.
+		 * Re-push the stripped header before returning busy.
+		 */
+		if (unlikely(!skb_push(skb, header_size))) {
+			dev_err(ipc_wwan->dev, "unable to push eth hdr");
+			ret = -EIO;
+			goto exit;
+		}
+
+		ret = NETDEV_TX_BUSY;
+	} else {
+		ret = -EIO;
+		goto exit;
+	}
+
+	return ret;
+
+exit:
+	/* Log any skb drop except for WWAN Root device */
+	if (tag != 0)
+		dev_dbg(ipc_wwan->dev, "skb dropped.VLAN ID: %d, ret: %d", tag,
+			ret);
+
+	dev_kfree_skb_any(skb);
+	return ret;
+}
+
+static int ipc_wwan_change_mtu(struct net_device *dev, int new_mtu)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(dev);
+	unsigned long flags = 0;
+
+	if (unlikely(new_mtu < IPC_MEM_MIN_MTU_SIZE ||
+		     new_mtu > IPC_MEM_MAX_MTU_SIZE)) {
+		dev_err(ipc_wwan->dev, "mtu %d out of range %d..%d", new_mtu,
+			IPC_MEM_MIN_MTU_SIZE, IPC_MEM_MAX_MTU_SIZE);
+		return -EINVAL;
+	}
+
+	spin_lock_irqsave(&ipc_wwan->lock, flags);
+	dev->mtu = new_mtu;
+	spin_unlock_irqrestore(&ipc_wwan->lock, flags);
+	return 0;
+}
+
+static int ipc_wwan_change_mac_addr(struct net_device *dev, void *sock_addr)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(dev);
+	struct sockaddr *addr = sock_addr;
+	unsigned long flags = 0;
+	int result = 0;
+	u8 *sock_data;
+
+	sock_data = (u8 *)addr->sa_data;
+
+	spin_lock_irqsave(&ipc_wwan->lock, flags);
+
+	if (is_zero_ether_addr(sock_data)) {
+		dev->addr_len = 1;
+		memset(dev->dev_addr, 0, ETH_ALEN);
+		dev_dbg(ipc_wwan->dev, "mac addr set to zero");
+		goto exit;
+	}
+
+	result = eth_mac_addr(dev, sock_addr);
+exit:
+	spin_unlock_irqrestore(&ipc_wwan->lock, flags);
+	return result;
+}
+
+static int ipc_wwan_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+	if (cmd != SIOCSIFHWADDR ||
+	    !access_ok((void __user *)ifr, sizeof(struct ifreq)) ||
+	    dev->addr_len > sizeof(struct sockaddr))
+		return -EINVAL;
+
+	return ipc_wwan_change_mac_addr(dev, &ifr->ifr_hwaddr);
+}
+
+static struct net_device_stats *ipc_wwan_get_stats(struct net_device *dev)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(dev);
+
+	return &ipc_wwan->stats;
+}
+
+/* validate mac address for wwan devices */
+static int ipc_wwan_eth_validate_addr(struct net_device *netdev)
+{
+	return eth_validate_addr(netdev);
+}
+
+/* Return a valid TX queue for the mapped VLAN device.  This
+ * ndo_select_queue() prototype applies to kernel 4.19.0 and later.
+ */
+static u16 ipc_wwan_select_queue(struct net_device *netdev, struct sk_buff *skb,
+				 struct net_device *sb_dev)
+{
+	struct iosm_wwan *ipc_wwan = netdev_priv(netdev);
+	u16 txqn = 0xFFFF;
+	u16 tag = 0;
+
+	/* get VLAN tag for the current skb
+	 * if the packet is untagged, return the default queue.
+	 */
+	if (vlan_get_tag(skb, &tag) < 0)
+		return WWAN_DEFAULT_TXQ;
+
+	/* TX queues are allocated as follows:
+	 *
+	 * VLAN ID 0 is the WWAN root device (wwan0) and uses the
+	 * default TX queue 0.
+	 *
+	 * VLAN IDs from IMEM_WWAN_CTRL_VLAN_ID_START to
+	 * IMEM_WWAN_CTRL_VLAN_ID_END also use the default TX queue 0.
+	 *
+	 * VLAN IDs from IMEM_WWAN_DATA_VLAN_ID_START up to the maximum
+	 * number of IP devices each get a dedicated TX queue.
+	 *
+	 * Any other VLAN ID maps to an invalid TX queue.
+	 */
+	if (tag >= IMEM_WWAN_DATA_VLAN_ID_START && tag <= ipc_wwan->max_ip_devs)
+		txqn = tag;
+	else if ((tag >= IMEM_WWAN_CTRL_VLAN_ID_START &&
+		  tag <= IMEM_WWAN_CTRL_VLAN_ID_END) ||
+		 tag == WWAN_ROOT_VLAN_TAG)
+		txqn = WWAN_DEFAULT_TXQ;
+
+	dev_dbg(ipc_wwan->dev, "VLAN tag = %u, TX Queue selected %u", tag,
+		txqn);
+	return txqn;
+}
+
+static const struct net_device_ops ipc_wwandev_ops = {
+	.ndo_open = ipc_wwan_open,
+	.ndo_stop = ipc_wwan_stop,
+	.ndo_start_xmit = ipc_wwan_transmit,
+	.ndo_change_mtu = ipc_wwan_change_mtu,
+	.ndo_validate_addr = ipc_wwan_eth_validate_addr,
+	.ndo_do_ioctl = ipc_wwan_ioctl,
+	.ndo_get_stats = ipc_wwan_get_stats,
+	.ndo_vlan_rx_add_vid = ipc_wwan_vlan_rx_add_vid,
+	.ndo_vlan_rx_kill_vid = ipc_wwan_vlan_rx_kill_vid,
+	.ndo_set_mac_address = ipc_wwan_change_mac_addr,
+	.ndo_select_queue = ipc_wwan_select_queue,
+};
+
+void ipc_wwan_update_stats(struct iosm_wwan *ipc_wwan, int id, size_t len,
+			   bool tx)
+{
+	int idx =
+		ipc_wwan_get_vlan_devs_nr(ipc_wwan,
+					  ipc_wwan_mux_session_to_vlan_tag(id));
+
+	if (unlikely(idx < 0 || idx >= ipc_wwan->max_devs)) {
+		dev_err(ipc_wwan->dev, "invalid VLAN device");
+		return;
+	}
+
+	if (tx) {
+		/* Update vlan device tx statistics */
+		ipc_wwan->vlan_devs[idx].stats.tx_packets++;
+		ipc_wwan->vlan_devs[idx].stats.tx_bytes += len;
+		/* Update root device tx statistics */
+		ipc_wwan->stats.tx_packets++;
+		ipc_wwan->stats.tx_bytes += len;
+	} else {
+		/* Update vlan device rx statistics */
+		ipc_wwan->vlan_devs[idx].stats.rx_packets++;
+		ipc_wwan->vlan_devs[idx].stats.rx_bytes += len;
+		/* Update root device rx statistics */
+		ipc_wwan->stats.rx_packets++;
+		ipc_wwan->stats.rx_bytes += len;
+	}
+}
+
+void ipc_wwan_tx_flowctrl(struct iosm_wwan *ipc_wwan, int id, bool on)
+{
+	u16 vid = ipc_wwan_mux_session_to_vlan_tag(id);
+
+	dev_dbg(ipc_wwan->dev, "MUX session id[%d]: %s", id,
+		on ? "Enable" : "Disable");
+	if (on)
+		netif_stop_subqueue(ipc_wwan->netdev, vid);
+	else
+		netif_wake_subqueue(ipc_wwan->netdev, vid);
+}
+
+static struct device_type wwan_type = { .name = "wwan" };
+
+struct iosm_wwan *ipc_wwan_init(void *ops_instance, struct device *dev,
+				int max_sessions)
+{
+	int max_tx_q = WWAN_MIN_TXQ + max_sessions;
+	struct iosm_wwan *ipc_wwan;
+	struct net_device *netdev;
+
+	if (unlikely(!ops_instance))
+		return NULL;
+
+	/* allocate ethernet device */
+	netdev = alloc_etherdev_mqs(sizeof(*ipc_wwan), max_tx_q, WWAN_MAX_RXQ);
+	if (unlikely(!netdev))
+		return NULL;
+
+	ipc_wwan = netdev_priv(netdev);
+
+	ipc_wwan->dev = dev;
+	ipc_wwan->netdev = netdev;
+	ipc_wwan->is_registered = false;
+
+	ipc_wwan->vlan_devs_nr = 0;
+	ipc_wwan->ops_instance = ops_instance;
+
+	ipc_wwan->max_devs = max_sessions + IPC_MEM_MAX_CHANNELS;
+	ipc_wwan->max_ip_devs = max_sessions;
+
+	ipc_wwan->vlan_devs = kcalloc(ipc_wwan->max_devs,
+				      sizeof(ipc_wwan->vlan_devs[0]),
+				      GFP_KERNEL);
+
+	spin_lock_init(&ipc_wwan->lock);
+	mutex_init(&ipc_wwan->if_mutex);
+
+	/* allocate random ethernet address */
+	eth_random_addr(netdev->dev_addr);
+	netdev->addr_assign_type = NET_ADDR_RANDOM;
+
+	snprintf(netdev->name, IFNAMSIZ, "%s", "wwan0");
+	netdev->netdev_ops = &ipc_wwandev_ops;
+	netdev->flags |= IFF_NOARP;
+	netdev->features |=
+		NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_FILTER;
+	SET_NETDEV_DEVTYPE(netdev, &wwan_type);
+
+	if (register_netdev(netdev)) {
+		dev_err(ipc_wwan->dev, "register_netdev failed");
+		ipc_wwan_deinit(ipc_wwan);
+		return NULL;
+	}
+
+	ipc_wwan->is_registered = true;
+
+	netif_device_attach(netdev);
+
+	/* Set the max MTU; the max_mtu field is available on kernel
+	 * 4.10.0 and later.
+	 */
+	netdev->max_mtu = IPC_MEM_MAX_MTU_SIZE;
+
+	return ipc_wwan;
+}
+
+void ipc_wwan_deinit(struct iosm_wwan *ipc_wwan)
+{
+	if (ipc_wwan->is_registered)
+		unregister_netdev(ipc_wwan->netdev);
+	kfree(ipc_wwan->vlan_devs);
+	ipc_wwan->vlan_devs = NULL;
+	free_netdev(ipc_wwan->netdev);
+}
+
+bool ipc_wwan_is_tx_stopped(struct iosm_wwan *ipc_wwan, int id)
+{
+	u16 vid = ipc_wwan_mux_session_to_vlan_tag(id);
+
+	return __netif_subqueue_stopped(ipc_wwan->netdev, vid);
+}
diff --git a/drivers/net/wwan/iosm/iosm_ipc_wwan.h b/drivers/net/wwan/iosm/iosm_ipc_wwan.h
new file mode 100644
index 000000000000..3c3b1fb31ae1
--- /dev/null
+++ b/drivers/net/wwan/iosm/iosm_ipc_wwan.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ *
+ * Copyright (C) 2020 Intel Corporation.
+ */
+
+#ifndef IOSM_IPC_WWAN_H
+#define IOSM_IPC_WWAN_H
+
+#define IMEM_WWAN_DATA_VLAN_ID_START 1
+#define IMEM_WWAN_CTRL_VLAN_ID_START 257
+#define IMEM_WWAN_CTRL_VLAN_ID_END 512
+
+/**
+ * ipc_wwan_init - Allocate, Init and register WWAN device
+ * @ops_instance:	Instance pointer for callback
+ * @dev:		Pointer to device structure
+ * @max_sessions:	Maximum number of sessions
+ *
+ * Return: Pointer to instance on success else NULL
+ */
+struct iosm_wwan *ipc_wwan_init(void *ops_instance, struct device *dev,
+				int max_sessions);
+
+/**
+ * ipc_wwan_deinit - Unregister and free WWAN device, clear pointer
+ * @ipc_wwan:	Pointer to wwan instance data
+ */
+void ipc_wwan_deinit(struct iosm_wwan *ipc_wwan);
+
+/**
+ * ipc_wwan_receive - Receive a downlink packet from CP.
+ * @ipc_wwan:	Pointer to wwan instance
+ * @skb_arg:	Pointer to struct sk_buff
+ * @dss:	Set to true if vlan id is greater than
+ *		IMEM_WWAN_CTRL_VLAN_ID_START else false
+ *
+ * Return: 0 on success else error code
+ */
+int ipc_wwan_receive(struct iosm_wwan *ipc_wwan, struct sk_buff *skb_arg,
+		     bool dss);
+
+/**
+ * ipc_wwan_update_stats - Update device statistics
+ * @ipc_wwan:	Pointer to wwan instance
+ * @id:		Ipc mux channel session id
+ * @len:	Number of bytes to update
+ * @tx:		True to update transmit statistics,
+ *		false to update receive statistics
+ *
+ */
+void ipc_wwan_update_stats(struct iosm_wwan *ipc_wwan, int id, size_t len,
+			   bool tx);
+
+/**
+ * ipc_wwan_tx_flowctrl - Enable/Disable TX flow control
+ * @ipc_wwan:	Pointer to wwan instance
+ * @id:		Ipc mux channel session id
+ * @on:		If true, TX flow control is enabled; otherwise it is disabled
+ *
+ */
+void ipc_wwan_tx_flowctrl(struct iosm_wwan *ipc_wwan, int id, bool on);
+
+/**
+ * ipc_wwan_is_tx_stopped - Checks if Tx stopped for a VLAN id.
+ * @ipc_wwan:	Pointer to wwan instance
+ * @id:		Ipc mux channel session id
+ *
+ * Return: true if stopped, false otherwise
+ */
+bool ipc_wwan_is_tx_stopped(struct iosm_wwan *ipc_wwan, int id);
+
+#endif
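
To make the downlink hand-off concrete, a hypothetical sketch of how an
upper layer (for example the IP mux decoder) might feed a decoded datagram
into this interface is shown below. Only ipc_wwan_receive() and the
session-to-VLAN mapping (session N carried on VLAN tag N + 1) are taken
from this patch; the helper name and the headroom handling are assumptions.

	#include <linux/if_vlan.h>
	#include <linux/skbuff.h>

	#include "iosm_ipc_wwan.h"

	/* Illustrative sketch: deliver one DL IP datagram belonging to MBIM
	 * session 'session_id'.  ipc_wwan_receive() rewrites an Ethernet
	 * header in front of the payload, so ETH_HLEN bytes of headroom are
	 * claimed here; a real caller would reallocate instead of failing.
	 */
	static int example_deliver_dl_datagram(struct iosm_wwan *ipc_wwan,
					       struct sk_buff *skb,
					       int session_id)
	{
		u16 tag = session_id + 1;	/* IPC_MEM_VLAN_TO_SESSION */

		if (skb_headroom(skb) < ETH_HLEN)
			return -ENOMEM;
		skb_push(skb, ETH_HLEN);

		/* Stamp the VLAN tag so vlan_get_tag() resolves the session
		 * and the per-session statistics get updated.
		 */
		__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), tag);

		return ipc_wwan_receive(ipc_wwan, skb, false);
	}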
-- 
2.12.3


* [RFC 17/18] net: iosm: readme file
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (15 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 16/18] net: iosm: net driver M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  2020-11-23 13:51 ` [RFC 18/18] net: iosm: infrastructure M Chetan Kumar
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

Documents IOSM Driver interface usage.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 drivers/net/wwan/iosm/README | 126 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 126 insertions(+)
 create mode 100644 drivers/net/wwan/iosm/README

diff --git a/drivers/net/wwan/iosm/README b/drivers/net/wwan/iosm/README
new file mode 100644
index 000000000000..4a489177ad96
--- /dev/null
+++ b/drivers/net/wwan/iosm/README
@@ -0,0 +1,126 @@
+IOSM Driver for PCIe based Intel M.2 Modems
+================================================
+The IOSM (IPC over Shared Memory) driver is a PCIe host driver implemented
+for linux or chrome platform for data exchange over PCIe interface between
+Host platform & Intel M.2 Modem. The driver exposes interface conforming to the
+MBIM protocol [1]. Any front end application ( eg: Modem Manager) could easily
+manage the MBIM interface to enable data communication towards WWAN.
+
+Basic usage
+===========
+MBIM functions are inactive when unmanaged. The IOSM driver only
+provides a userspace interface of a character device representing
+MBIM control channel and does not play any role in managing the
+functionality. It is the job of a userspace application to enumerate
+the port appropriately and enable MBIM functionality.
+
+Examples of such userspace applications are:
+ - mbimcli (included with the libmbim [2] library), and
+ - ModemManager [3]
+
+For establishing an MBIM IP session at least these actions are required by the
+management application:
+ - open the control channel
+ - configure network connection settings
+ - connect to network
+ - configure IP interface
+
+Management application development
+----------------------------------
+The driver and userspace interfaces are described below. The MBIM
+control channel protocol is described in [1].
+
+MBIM control channel userspace ABI
+==================================
+
+/dev/wwanctrl character device
+------------------------------
+The driver exposes the MBIM function control channel through a character
+device subdriver. The userspace end of the control channel pipe is the
+/dev/wwanctrl character device.
+
+The /dev/wwanctrl device is created as a subordinate character device under
+IOSM driver. The character device associated with a specific MBIM function
+can be looked up in sysfs by matching the device name above.
+
+Control channel configuration
+-----------------------------
+The wMaxControlMessage field of the MBIM functional descriptor
+limits the maximum control message size. The management application needs to
+negotiate the control message size as per the requirements.
+See also the ioctl documentation below.
+
+Fragmentation
+-------------
+The userspace application is responsible for all control message
+fragmentation and defragmentation as per MBIM.
+
+/dev/wwanctrl write()
+---------------------
+The MBIM control messages from the management application must not
+exceed the negotiated control message size.
+
+/dev/wwanctrl read()
+--------------------
+The management application must accept control messages of up to the
+negotiated control message size.
+
+/dev/wwanctrl ioctl()
+---------------------
+IOCTL_WDM_MAX_COMMAND: Get Maximum Command Size
+Applications can use this IOCTL to fetch the maximum command buffer length
+supported by the driver, which is restricted to 4096 bytes.
+
+	#include <stdio.h>
+	#include <fcntl.h>
+	#include <unistd.h>
+	#include <sys/ioctl.h>
+	#include <linux/types.h>
+	#include <linux/usb/cdc-wdm.h>
+	int main(void)
+	{
+		__u16 max;
+		int fd = open("/dev/wwanctrl", O_RDWR);
+
+		if (fd < 0)
+			return 1;
+		if (!ioctl(fd, IOCTL_WDM_MAX_COMMAND, &max))
+			printf("wMaxControlMessage is %d\n", max);
+		close(fd);
+		return 0;
+	}
+
+MBIM data channel userspace ABI
+===============================
+
+wwanY network device
+--------------------
+The IOSM driver represents the MBIM data channel as a single
+network device of the "wwan0" type. This network device is initially
+mapped to MBIM IP session 0.
+
+Multiplexed IP sessions (IPS)
+-----------------------------
+IOSM driver allows multiplexing of several IP sessions over the single network
+device of type wwan0. IOSM driver models such IP sessions as 802.1q VLAN
+subdevices of the master wwanY device, mapping MBIM IP session M to VLAN ID M
+for all values of M greater than 0.
+
+The userspace management application is responsible for adding new VLAN links
+prior to establishing MBIM IP sessions where the SessionId is greater than 0.
+These links can be added by using the normal VLAN kernel interfaces.
+
+For example, adding a link for an MBIM IP session with SessionId 5:
+
+  ip link add link wwan0 name wwan0.<name> type vlan id 5
+
+The driver will automatically map the "wwan0.<name>" network device to MBIM
+IP session 5.
+
+References
+==========
+
+[1] "MBIM (Mobile Broadband Interface Model) Registry"
+       - http://compliance.usb.org/mbim/
+
+[2] libmbim - "a glib-based library for talking to WWAN modems and
+      devices which speak the Mobile Interface Broadband Model (MBIM)
+      protocol"
+      - http://www.freedesktop.org/wiki/Software/libmbim/
+
+[3] ModemManager - "a DBus-activated daemon which controls mobile
+      broadband (2G/3G/4G) devices and connections"
+      - http://www.freedesktop.org/wiki/Software/ModemManager/
\ No newline at end of file
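
The write()/read() interface described in the README above amounts to very
little user-space code. A minimal sketch of one control-channel transaction
is shown below; it assumes the request buffer already holds a complete MBIM
message built by the management application and that the negotiated message
size is respected.

	#include <fcntl.h>
	#include <unistd.h>

	/* Illustrative sketch: send one MBIM request and read the response. */
	static int wwanctrl_transact(const void *req, size_t req_len,
				     void *rsp, size_t rsp_max)
	{
		ssize_t n;
		int fd = open("/dev/wwanctrl", O_RDWR);

		if (fd < 0)
			return -1;
		if (write(fd, req, req_len) != (ssize_t)req_len) {
			close(fd);
			return -1;
		}
		n = read(fd, rsp, rsp_max);
		close(fd);
		return n < 0 ? -1 : (int)n;
	}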
-- 
2.12.3


* [RFC 18/18] net: iosm: infrastructure
  2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
                   ` (16 preceding siblings ...)
  2020-11-23 13:51 ` [RFC 17/18] net: iosm: readme file M Chetan Kumar
@ 2020-11-23 13:51 ` M Chetan Kumar
  17 siblings, 0 replies; 19+ messages in thread
From: M Chetan Kumar @ 2020-11-23 13:51 UTC (permalink / raw)
  To: netdev, linux-wireless; +Cc: johannes, krishna.c.sudi, m.chetan.kumar

1) Kconfig & Makefile changes for IOSM Driver compilation.
2) Modified driver/net Kconfig & Makefile for driver inclusion.
3) Modified MAINTAINER file for IOSM Driver addition.

Signed-off-by: M Chetan Kumar <m.chetan.kumar@intel.com>
---
 MAINTAINERS                    |  7 +++++++
 drivers/net/Kconfig            |  1 +
 drivers/net/Makefile           |  1 +
 drivers/net/wwan/Kconfig       | 13 +++++++++++++
 drivers/net/wwan/Makefile      |  5 +++++
 drivers/net/wwan/iosm/Kconfig  | 10 ++++++++++
 drivers/net/wwan/iosm/Makefile | 27 +++++++++++++++++++++++++++
 7 files changed, 64 insertions(+)
 create mode 100644 drivers/net/wwan/Kconfig
 create mode 100644 drivers/net/wwan/Makefile
 create mode 100644 drivers/net/wwan/iosm/Kconfig
 create mode 100644 drivers/net/wwan/iosm/Makefile

diff --git a/MAINTAINERS b/MAINTAINERS
index a008b70f3c16..cb1fc8fabffd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9099,6 +9099,13 @@ M:	Mario Limonciello <mario.limonciello@dell.com>
 S:	Maintained
 F:	drivers/platform/x86/intel-wmi-thunderbolt.c
 
+INTEL WWAN IOSM DRIVER
+M:	M Chetan Kumar <m.chetan.kumar@intel.com>
+M:	Intel Corporation <linuxwwan@intel.com>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	drivers/net/wwan/iosm/
+
 INTEL(R) TRACE HUB
 M:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
 S:	Supported
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index c3dbe64e628e..e0f869a2c52f 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -593,4 +593,5 @@ config NET_FAILOVER
 	  a VM with direct attached VF by failing over to the paravirtual
 	  datapath when the VF is unplugged.
 
+source "drivers/net/wwan/Kconfig"
 endif # NETDEVICES
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 72e18d505d1a..025fb399d2af 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -84,3 +84,4 @@ thunderbolt-net-y += thunderbolt.o
 obj-$(CONFIG_USB4_NET) += thunderbolt-net.o
 obj-$(CONFIG_NETDEVSIM) += netdevsim/
 obj-$(CONFIG_NET_FAILOVER) += net_failover.o
+obj-$(CONFIG_WWAN) += wwan/
diff --git a/drivers/net/wwan/Kconfig b/drivers/net/wwan/Kconfig
new file mode 100644
index 000000000000..715dfd0598f9
--- /dev/null
+++ b/drivers/net/wwan/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Wireless WAN device configuration
+#
+
+menuconfig WWAN
+	bool "Wireless WAN"
+	help
+	  This section contains all Wireless WAN driver configurations.
+
+if WWAN
+source "drivers/net/wwan/iosm/Kconfig"
+endif # WWAN
diff --git a/drivers/net/wwan/Makefile b/drivers/net/wwan/Makefile
new file mode 100644
index 000000000000..a81ff28e6cd9
--- /dev/null
+++ b/drivers/net/wwan/Makefile
@@ -0,0 +1,5 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Makefile for the Linux WWAN Device Drivers.
+#
+obj-$(CONFIG_IOSM) += iosm/
diff --git a/drivers/net/wwan/iosm/Kconfig b/drivers/net/wwan/iosm/Kconfig
new file mode 100644
index 000000000000..fed382fc9cd7
--- /dev/null
+++ b/drivers/net/wwan/iosm/Kconfig
@@ -0,0 +1,10 @@
+# SPDX-License-Identifier: (GPL-2.0-only)
+#
+# IOSM Driver configuration
+#
+
+config IOSM
+	tristate "IOSM Driver"
+	depends on INTEL_IOMMU
+	help
+	  This driver enables Intel M.2 WWAN Device communication.
diff --git a/drivers/net/wwan/iosm/Makefile b/drivers/net/wwan/iosm/Makefile
new file mode 100644
index 000000000000..153ae0360244
--- /dev/null
+++ b/drivers/net/wwan/iosm/Makefile
@@ -0,0 +1,27 @@
+# SPDX-License-Identifier: (GPL-2.0-only)
+#
+# Copyright (C) 2020 Intel Corporation.
+#
+
+iosm-y = \
+	iosm_ipc_task_queue.o	\
+	iosm_ipc_imem.o			\
+	iosm_ipc_imem_ops.o		\
+	iosm_ipc_mmio.o			\
+	iosm_ipc_sio.o			\
+	iosm_ipc_mbim.o			\
+	iosm_ipc_wwan.o			\
+	iosm_ipc_uevent.o		\
+	iosm_ipc_pm.o			\
+	iosm_ipc_pcie.o			\
+	iosm_ipc_irq.o			\
+	iosm_ipc_chnl_cfg.o		\
+	iosm_ipc_protocol.o		\
+	iosm_ipc_protocol_ops.o	\
+	iosm_ipc_mux.o			\
+	iosm_ipc_mux_codec.o
+
+obj-$(CONFIG_IOSM) := iosm.o
+
+# compilation flags
+#ccflags-y += -DDEBUG
-- 
2.12.3


end of thread, other threads:[~2020-11-23 13:53 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-23 13:51 [RFC 00/18] net: iosm: PCIe Driver for Intel M.2 Modem M Chetan Kumar
2020-11-23 13:51 ` [RFC 01/18] net: iosm: entry point M Chetan Kumar
2020-11-23 13:51 ` [RFC 02/18] net: iosm: irq handling M Chetan Kumar
2020-11-23 13:51 ` [RFC 03/18] net: iosm: mmio scratchpad M Chetan Kumar
2020-11-23 13:51 ` [RFC 04/18] net: iosm: shared memory IPC interface M Chetan Kumar
2020-11-23 13:51 ` [RFC 05/18] net: iosm: shared memory I/O operations M Chetan Kumar
2020-11-23 13:51 ` [RFC 06/18] net: iosm: channel configuration M Chetan Kumar
2020-11-23 13:51 ` [RFC 07/18] net: iosm: char device for FW flash & coredump M Chetan Kumar
2020-11-23 13:51 ` [RFC 08/18] net: iosm: MBIM control device M Chetan Kumar
2020-11-23 13:51 ` [RFC 09/18] net: iosm: bottom half M Chetan Kumar
2020-11-23 13:51 ` [RFC 10/18] net: iosm: multiplex IP sessions M Chetan Kumar
2020-11-23 13:51 ` [RFC 11/18] net: iosm: encode or decode datagram M Chetan Kumar
2020-11-23 13:51 ` [RFC 12/18] net: iosm: power management M Chetan Kumar
2020-11-23 13:51 ` [RFC 13/18] net: iosm: shared memory protocol M Chetan Kumar
2020-11-23 13:51 ` [RFC 14/18] net: iosm: protocol operations M Chetan Kumar
2020-11-23 13:51 ` [RFC 15/18] net: iosm: uevent support M Chetan Kumar
2020-11-23 13:51 ` [RFC 16/18] net: iosm: net driver M Chetan Kumar
2020-11-23 13:51 ` [RFC 17/18] net: iosm: readme file M Chetan Kumar
2020-11-23 13:51 ` [RFC 18/18] net: iosm: infrastructure M Chetan Kumar
