* [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED)
@ 2020-03-06  4:28 Alex Elder
  2020-03-06  4:28 ` [PATCH v2 01/17] remoteproc: add IPA notification to q6v5 driver Alex Elder
                   ` (19 more replies)
  0 siblings, 20 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

This series presents the driver for the Qualcomm IP Accelerator (IPA).

This is version 2 of this updated series.  It includes the following
small changes since the previous version:
  - Now based on net-next instead of v5.6-rc
  - Config option now named CONFIG_QCOM_IPA
  - Some minor cleanup in the GSI code
  - Small change to replenish logic
  - No longer depends on remoteproc bug fixes
What follows is basically the same explanation that was posted previously.

					-Alex

I have posted earlier versions of this code previously, but it has
undergone quite a bit of development since the last time, so rather
than calling it "version 3" I'm just treating it as a new series
(indicating it's been updated in this message).  The fast/data path
is the same as before.  But the driver now (nearly) supports a
second platform, its transaction handling has been generalized
and improved, and modem activities are now handled in a more
unified way.

This series is available, based on net-next, in branch "ipa_updated-v2"
in this git repository:
  https://git.linaro.org/people/alex.elder/linux.git

The branch depends on one other small patch that I sent out
for review earlier:
  https://lore.kernel.org/lkml/20200306042302.17602-1-elder@linaro.org/


I want to address some of the discussion that arose last time.

First, there was the WWAN discussion.  Here's the history:
  - This was last posted nine months ago.
  - Reviewers at that time favored developing a new WWAN subsystem that
    would be used for managing devices like this.  And the suggestion
    was to not accept this driver until that could be developed.
  - Along the way, Apple acquired much of Intel's modem business.
    And as a result, the generic framework became less pressing.
  - I did participate in the WWAN subsystem design however, and
    although it went dormant for a while it's been resurrected:
      https://lore.kernel.org/netdev/20200225100053.16385-1-johannes@sipsolutions.net/
  - Unfortunately the proposed WWAN design was not an easy fit
    with Qualcomm's integrated modem interfaces.  Given that
    rmnet is a supported link type in the upstream "iproute2"
    package (more on this below), I have opted not to integrate
    with any WWAN subsystem.

So in summary, this driver does not integrate with a generic WWAN
framework.  And I'd like it to be accepted upstream despite that.


Next, Arnd Bergmann had some concerns about flow control.  (Note:
some of my discussions with Arnd about this were offline.) The
overall architecture here also involves the "rmnet" driver:
  drivers/net/ethernet/qualcomm/rmnet

The rmnet driver presents a network device for use.  It connects
with another network device presented by the IPA driver.  The
rmnet driver wraps (and unwraps) packets transferred to (and from)
the IPA driver with QMAP headers.

   ---------------
   | rmnet_data0 |    <-- "real" netdev
   ---------------
          ||       }- QMAP spoken here
   --------------
   | rmnet_ipa0 |     <-- also netdev, transporting QMAP packets
   --------------
          ||
   --------------
  ( IPA hardware )
   --------------
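
Each packet carried on rmnet_ipa0 is prefixed with a small QMAP
header that identifies the logical channel (mux ID) and gives the
payload length.  As a rough illustration only (this is not the
declaration the rmnet driver uses, and the field layout follows the
general QMAP convention rather than anything in this series), the
header can be pictured like this:

    struct qmap_hdr {                /* illustrative sketch only */
            u8     pad_cd;   /* command/data flag plus trailing pad length */
            u8     mux_id;   /* logical channel (multiplexer) ID */
            __be16 pkt_len;  /* payload length, including any padding */
    };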

Arnd's concern was that the rmnet_data0 network device does not
have the benefit of information about the state of the underlying
IPA hardware in order to be effective in controlling TX flow.
The feared result is over-buffering of TX packets (bufferbloat).
I began working on some simple experiments to see whether (or how
much) his concern was warranted.  But it turned out that completing
these experiments was much more work than had been hoped.

The rmnet driver is present in the upstream kernel.  There is also
support for the rmnet link type in the upstream "ip" user space
command in the "iproute2" package.  Changing the layering of rmnet
over IPA likely involves deprecating the rmnet driver and its
support in "iproute2".  I would really rather not go down that
path.

There is precedent for this sort of layering of network devices
(L2TP, VLAN).  And any architecture like this would suffer the
issues Arnd mentioned; the problem is not limited to rmnet and IPA.
I do think this is a problem worth solving, but the prudent thing
to do might be to try to solve it more generally.

So to summarize on this issue, this driver does not attempt to
change the way the rmnet and IPA drivers work together.  And even
though I think Arnd's concerns warrant more investigation, I'd like
this driver to be accepted upstream without any change to this
architecture.


Finally, a more technical description for the series, and some
acknowledgements to some people who contributed to it.

The IPA is a component present in some Qualcomm SoCs that allows
network functions such as aggregation, filtering, routing, and NAT
to be performed without active involvement of the main application
processor (AP).

In this initial patch series these advanced features are not
implemented.  The IPA driver simply provides a network interface
that makes the modem's LTE network available in Linux.  This initial
series supports only the Qualcomm SDM845 SoC.  The Qualcomm SC7180
SoC is partially supported, and support for other platforms will
follow.

This code is derived from a driver developed by Qualcomm.  A version
of the original source can be seen here:
  https://source.codeaurora.org/quic/la/kernel/msm-4.9/tree
in the "drivers/platform/msm/ipa" directory.  Many were involved in
developing this, but the following individuals deserve explicit
acknowledgement for their substantial contributions:

    Abhishek Choubey
    Ady Abraham
    Chaitanya Pratapa
    David Arinzon
    Ghanim Fodi
    Gidon Studinski
    Ravi Gummadidala
    Shihuan Liu
    Skylar Chang

					-Alex

Alex Elder (17):
  remoteproc: add IPA notification to q6v5 driver
  dt-bindings: soc: qcom: add IPA bindings
  soc: qcom: ipa: main code
  soc: qcom: ipa: configuration data
  soc: qcom: ipa: clocking, interrupts, and memory
  soc: qcom: ipa: GSI headers
  soc: qcom: ipa: the generic software interface
  soc: qcom: ipa: IPA interface to GSI
  soc: qcom: ipa: GSI transactions
  soc: qcom: ipa: IPA endpoints
  soc: qcom: ipa: filter and routing tables
  soc: qcom: ipa: immediate commands
  soc: qcom: ipa: modem and microcontroller
  soc: qcom: ipa: AP/modem communications
  soc: qcom: ipa: support build of IPA code
  MAINTAINERS: add entry for the Qualcomm IPA driver
  arm64: dts: sdm845: add IPA information

 .../devicetree/bindings/net/qcom,ipa.yaml     |  192 ++
 MAINTAINERS                                   |    6 +
 arch/arm64/boot/dts/qcom/sdm845.dtsi          |   51 +
 drivers/net/Kconfig                           |    2 +
 drivers/net/Makefile                          |    1 +
 drivers/net/ipa/Kconfig                       |   19 +
 drivers/net/ipa/Makefile                      |   12 +
 drivers/net/ipa/gsi.c                         | 2055 +++++++++++++++++
 drivers/net/ipa/gsi.h                         |  257 +++
 drivers/net/ipa/gsi_private.h                 |  118 +
 drivers/net/ipa/gsi_reg.h                     |  417 ++++
 drivers/net/ipa/gsi_trans.c                   |  786 +++++++
 drivers/net/ipa/gsi_trans.h                   |  226 ++
 drivers/net/ipa/ipa.h                         |  148 ++
 drivers/net/ipa/ipa_clock.c                   |  313 +++
 drivers/net/ipa/ipa_clock.h                   |   53 +
 drivers/net/ipa/ipa_cmd.c                     |  680 ++++++
 drivers/net/ipa/ipa_cmd.h                     |  195 ++
 drivers/net/ipa/ipa_data-sc7180.c             |  307 +++
 drivers/net/ipa/ipa_data-sdm845.c             |  329 +++
 drivers/net/ipa/ipa_data.h                    |  280 +++
 drivers/net/ipa/ipa_endpoint.c                | 1707 ++++++++++++++
 drivers/net/ipa/ipa_endpoint.h                |  110 +
 drivers/net/ipa/ipa_gsi.c                     |   54 +
 drivers/net/ipa/ipa_gsi.h                     |   60 +
 drivers/net/ipa/ipa_interrupt.c               |  253 ++
 drivers/net/ipa/ipa_interrupt.h               |  117 +
 drivers/net/ipa/ipa_main.c                    |  954 ++++++++
 drivers/net/ipa/ipa_mem.c                     |  314 +++
 drivers/net/ipa/ipa_mem.h                     |   90 +
 drivers/net/ipa/ipa_modem.c                   |  383 +++
 drivers/net/ipa/ipa_modem.h                   |   31 +
 drivers/net/ipa/ipa_qmi.c                     |  538 +++++
 drivers/net/ipa/ipa_qmi.h                     |   41 +
 drivers/net/ipa/ipa_qmi_msg.c                 |  663 ++++++
 drivers/net/ipa/ipa_qmi_msg.h                 |  252 ++
 drivers/net/ipa/ipa_reg.c                     |   38 +
 drivers/net/ipa/ipa_reg.h                     |  476 ++++
 drivers/net/ipa/ipa_smp2p.c                   |  335 +++
 drivers/net/ipa/ipa_smp2p.h                   |   48 +
 drivers/net/ipa/ipa_table.c                   |  700 ++++++
 drivers/net/ipa/ipa_table.h                   |  103 +
 drivers/net/ipa/ipa_uc.c                      |  211 ++
 drivers/net/ipa/ipa_uc.h                      |   32 +
 drivers/net/ipa/ipa_version.h                 |   23 +
 drivers/remoteproc/Kconfig                    |    6 +
 drivers/remoteproc/Makefile                   |    1 +
 drivers/remoteproc/qcom_q6v5_ipa_notify.c     |   85 +
 drivers/remoteproc/qcom_q6v5_mss.c            |   38 +
 .../linux/remoteproc/qcom_q6v5_ipa_notify.h   |   82 +
 50 files changed, 14192 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/qcom,ipa.yaml
 create mode 100644 drivers/net/ipa/Kconfig
 create mode 100644 drivers/net/ipa/Makefile
 create mode 100644 drivers/net/ipa/gsi.c
 create mode 100644 drivers/net/ipa/gsi.h
 create mode 100644 drivers/net/ipa/gsi_private.h
 create mode 100644 drivers/net/ipa/gsi_reg.h
 create mode 100644 drivers/net/ipa/gsi_trans.c
 create mode 100644 drivers/net/ipa/gsi_trans.h
 create mode 100644 drivers/net/ipa/ipa.h
 create mode 100644 drivers/net/ipa/ipa_clock.c
 create mode 100644 drivers/net/ipa/ipa_clock.h
 create mode 100644 drivers/net/ipa/ipa_cmd.c
 create mode 100644 drivers/net/ipa/ipa_cmd.h
 create mode 100644 drivers/net/ipa/ipa_data-sc7180.c
 create mode 100644 drivers/net/ipa/ipa_data-sdm845.c
 create mode 100644 drivers/net/ipa/ipa_data.h
 create mode 100644 drivers/net/ipa/ipa_endpoint.c
 create mode 100644 drivers/net/ipa/ipa_endpoint.h
 create mode 100644 drivers/net/ipa/ipa_gsi.c
 create mode 100644 drivers/net/ipa/ipa_gsi.h
 create mode 100644 drivers/net/ipa/ipa_interrupt.c
 create mode 100644 drivers/net/ipa/ipa_interrupt.h
 create mode 100644 drivers/net/ipa/ipa_main.c
 create mode 100644 drivers/net/ipa/ipa_mem.c
 create mode 100644 drivers/net/ipa/ipa_mem.h
 create mode 100644 drivers/net/ipa/ipa_modem.c
 create mode 100644 drivers/net/ipa/ipa_modem.h
 create mode 100644 drivers/net/ipa/ipa_qmi.c
 create mode 100644 drivers/net/ipa/ipa_qmi.h
 create mode 100644 drivers/net/ipa/ipa_qmi_msg.c
 create mode 100644 drivers/net/ipa/ipa_qmi_msg.h
 create mode 100644 drivers/net/ipa/ipa_reg.c
 create mode 100644 drivers/net/ipa/ipa_reg.h
 create mode 100644 drivers/net/ipa/ipa_smp2p.c
 create mode 100644 drivers/net/ipa/ipa_smp2p.h
 create mode 100644 drivers/net/ipa/ipa_table.c
 create mode 100644 drivers/net/ipa/ipa_table.h
 create mode 100644 drivers/net/ipa/ipa_uc.c
 create mode 100644 drivers/net/ipa/ipa_uc.h
 create mode 100644 drivers/net/ipa/ipa_version.h
 create mode 100644 drivers/remoteproc/qcom_q6v5_ipa_notify.c
 create mode 100644 include/linux/remoteproc/qcom_q6v5_ipa_notify.h

-- 
2.20.1


* [PATCH v2 01/17] remoteproc: add IPA notification to q6v5 driver
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06 11:49   ` Leon Romanovsky
  2020-03-06  4:28 ` [PATCH v2 02/17] dt-bindings: soc: qcom: add IPA bindings Alex Elder
                   ` (18 subsequent siblings)
  19 siblings, 1 reply; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: Bjorn Andersson, Ohad Ben-Cohen, David Miller, Arnd Bergmann
  Cc: Andy Gross, Johannes Berg, Dan Williams, Evan Green, Eric Caruso,
	Susheel Yadav Yadagiri, Chaitanya Pratapa,
	Subash Abhinov Kasiviswanathan, Rob Herring, Mark Rutland,
	Siddharth Gupta, netdev, devicetree, linux-arm-kernel,
	linux-arm-msm, linux-soc, linux-kernel

Set up a subdev in the q6v5 modem remoteproc driver that generates
event notifications for the IPA driver to use for initialization and
recovery following a modem shutdown or crash.

A pair of new functions provides a way for the IPA driver to register
and deregister a notification callback function that will be called
whenever modem events (about to boot, running, about to shut down,
etc.) occur.  A void pointer value (provided by the IPA driver at
registration time) and an event type are supplied to the callback
function.

One event, MODEM_REMOVING, is signaled whenever the q6v5 driver is
about to remove the notification subdevice.  It requires that the IPA
driver deregister its callback.
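
To illustrate, here is a minimal sketch (not part of this patch) of
how a client such as the IPA driver might use this interface.  Only
the qcom_*_ipa_notify() calls, the qcom_ipa_notify_t type, and the
event values below come from this patch; the surrounding names are
hypothetical:

    #include <linux/remoteproc/qcom_q6v5_ipa_notify.h>

    /* Hypothetical client context holding the modem rproc handle */
    struct ipa_client {
            struct rproc *modem_rproc;
    };

    static void ipa_client_notify(void *data, enum qcom_rproc_event event)
    {
            struct ipa_client *client = data;  /* supplied at registration */

            switch (event) {
            case MODEM_RUNNING:
                    /* Modem is up; safe to complete setup */
                    break;
            case MODEM_STOPPING:
            case MODEM_CRASHED:
                    /* Quiesce activity and prepare for recovery */
                    break;
            case MODEM_REMOVING:
                    /* Subdevice is going away; drop our registration */
                    qcom_deregister_ipa_notify(client->modem_rproc);
                    break;
            default:
                    break;
            }
    }

    static int ipa_client_register(struct ipa_client *client)
    {
            return qcom_register_ipa_notify(client->modem_rproc,
                                            ipa_client_notify, client);
    }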

This sub-device is only used by the modem subsystem (MSS) driver,
so the code that adds the new subdev and allows registration and
deregistration of the notifier is found in "qcom_q6v5_mss.c".

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/remoteproc/Kconfig                    |  6 ++
 drivers/remoteproc/Makefile                   |  1 +
 drivers/remoteproc/qcom_q6v5_ipa_notify.c     | 85 +++++++++++++++++++
 drivers/remoteproc/qcom_q6v5_mss.c            | 38 +++++++++
 .../linux/remoteproc/qcom_q6v5_ipa_notify.h   | 82 ++++++++++++++++++
 5 files changed, 212 insertions(+)
 create mode 100644 drivers/remoteproc/qcom_q6v5_ipa_notify.c
 create mode 100644 include/linux/remoteproc/qcom_q6v5_ipa_notify.h

diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
index de3862c15fcc..56084635dd63 100644
--- a/drivers/remoteproc/Kconfig
+++ b/drivers/remoteproc/Kconfig
@@ -167,6 +167,12 @@ config QCOM_Q6V5_WCSS
 	  Say y here to support the Qualcomm Peripheral Image Loader for the
 	  Hexagon V5 based WCSS remote processors.
 
+config QCOM_Q6V5_IPA_NOTIFY
+	tristate
+	depends on QCOM_IPA
+	depends on QCOM_Q6V5_MSS
+	default QCOM_IPA
+
 config QCOM_SYSMON
 	tristate "Qualcomm sysmon driver"
 	depends on RPMSG
diff --git a/drivers/remoteproc/Makefile b/drivers/remoteproc/Makefile
index e30a1b15fbac..0effd3825035 100644
--- a/drivers/remoteproc/Makefile
+++ b/drivers/remoteproc/Makefile
@@ -21,6 +21,7 @@ obj-$(CONFIG_QCOM_Q6V5_ADSP)		+= qcom_q6v5_adsp.o
 obj-$(CONFIG_QCOM_Q6V5_MSS)		+= qcom_q6v5_mss.o
 obj-$(CONFIG_QCOM_Q6V5_PAS)		+= qcom_q6v5_pas.o
 obj-$(CONFIG_QCOM_Q6V5_WCSS)		+= qcom_q6v5_wcss.o
+obj-$(CONFIG_QCOM_Q6V5_IPA_NOTIFY)	+= qcom_q6v5_ipa_notify.o
 obj-$(CONFIG_QCOM_SYSMON)		+= qcom_sysmon.o
 obj-$(CONFIG_QCOM_WCNSS_PIL)		+= qcom_wcnss_pil.o
 qcom_wcnss_pil-y			+= qcom_wcnss.o
diff --git a/drivers/remoteproc/qcom_q6v5_ipa_notify.c b/drivers/remoteproc/qcom_q6v5_ipa_notify.c
new file mode 100644
index 000000000000..e1c10a128bfd
--- /dev/null
+++ b/drivers/remoteproc/qcom_q6v5_ipa_notify.c
@@ -0,0 +1,85 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Qualcomm IPA notification subdev support
+ *
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/remoteproc.h>
+#include <linux/remoteproc/qcom_q6v5_ipa_notify.h>
+
+static void
+ipa_notify_common(struct rproc_subdev *subdev, enum qcom_rproc_event event)
+{
+	struct qcom_rproc_ipa_notify *ipa_notify;
+	qcom_ipa_notify_t notify;
+
+	ipa_notify = container_of(subdev, struct qcom_rproc_ipa_notify, subdev);
+	notify = ipa_notify->notify;
+	if (notify)
+		notify(ipa_notify->data, event);
+}
+
+static int ipa_notify_prepare(struct rproc_subdev *subdev)
+{
+	ipa_notify_common(subdev, MODEM_STARTING);
+
+	return 0;
+}
+
+static int ipa_notify_start(struct rproc_subdev *subdev)
+{
+	ipa_notify_common(subdev, MODEM_RUNNING);
+
+	return 0;
+}
+
+static void ipa_notify_stop(struct rproc_subdev *subdev, bool crashed)
+
+{
+	ipa_notify_common(subdev, crashed ? MODEM_CRASHED : MODEM_STOPPING);
+}
+
+static void ipa_notify_unprepare(struct rproc_subdev *subdev)
+{
+	ipa_notify_common(subdev, MODEM_OFFLINE);
+}
+
+static void ipa_notify_removing(struct rproc_subdev *subdev)
+{
+	ipa_notify_common(subdev, MODEM_REMOVING);
+}
+
+/* Register the IPA notification subdevice with the Q6V5 MSS remoteproc */
+void qcom_add_ipa_notify_subdev(struct rproc *rproc,
+		struct qcom_rproc_ipa_notify *ipa_notify)
+{
+	ipa_notify->notify = NULL;
+	ipa_notify->data = NULL;
+	ipa_notify->subdev.prepare = ipa_notify_prepare;
+	ipa_notify->subdev.start = ipa_notify_start;
+	ipa_notify->subdev.stop = ipa_notify_stop;
+	ipa_notify->subdev.unprepare = ipa_notify_unprepare;
+
+	rproc_add_subdev(rproc, &ipa_notify->subdev);
+}
+EXPORT_SYMBOL_GPL(qcom_add_ipa_notify_subdev);
+
+/* Remove the IPA notification subdevice */
+void qcom_remove_ipa_notify_subdev(struct rproc *rproc,
+		struct qcom_rproc_ipa_notify *ipa_notify)
+{
+	struct rproc_subdev *subdev = &ipa_notify->subdev;
+
+	ipa_notify_removing(subdev);
+
+	rproc_remove_subdev(rproc, subdev);
+	ipa_notify->notify = NULL;	/* Make it obvious */
+}
+EXPORT_SYMBOL_GPL(qcom_remove_ipa_notify_subdev);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Qualcomm IPA notification remoteproc subdev");
diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
index a1cc9cbe038f..f9ccce76e44b 100644
--- a/drivers/remoteproc/qcom_q6v5_mss.c
+++ b/drivers/remoteproc/qcom_q6v5_mss.c
@@ -22,6 +22,7 @@
 #include <linux/regmap.h>
 #include <linux/regulator/consumer.h>
 #include <linux/remoteproc.h>
+#include "linux/remoteproc/qcom_q6v5_ipa_notify.h"
 #include <linux/reset.h>
 #include <linux/soc/qcom/mdt_loader.h>
 #include <linux/iopoll.h>
@@ -201,6 +202,7 @@ struct q6v5 {
 	struct qcom_rproc_glink glink_subdev;
 	struct qcom_rproc_subdev smd_subdev;
 	struct qcom_rproc_ssr ssr_subdev;
+	struct qcom_rproc_ipa_notify ipa_notify_subdev;
 	struct qcom_sysmon *sysmon;
 	bool need_mem_protection;
 	bool has_alt_reset;
@@ -1540,6 +1542,39 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
 	return 0;
 }
 
+#if IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY)
+
+/* Register IPA notification function */
+int qcom_register_ipa_notify(struct rproc *rproc, qcom_ipa_notify_t notify,
+			     void *data)
+{
+	struct qcom_rproc_ipa_notify *ipa_notify;
+	struct q6v5 *qproc = rproc->priv;
+
+	if (!notify)
+		return -EINVAL;
+
+	ipa_notify = &qproc->ipa_notify_subdev;
+	if (ipa_notify->notify)
+		return -EBUSY;
+
+	ipa_notify->notify = notify;
+	ipa_notify->data = data;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(qcom_register_ipa_notify);
+
+/* Deregister IPA notification function */
+void qcom_deregister_ipa_notify(struct rproc *rproc)
+{
+	struct q6v5 *qproc = rproc->priv;
+
+	qproc->ipa_notify_subdev.notify = NULL;
+}
+EXPORT_SYMBOL_GPL(qcom_deregister_ipa_notify);
+#endif /* !IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) */
+
 static int q6v5_probe(struct platform_device *pdev)
 {
 	const struct rproc_hexagon_res *desc;
@@ -1664,6 +1699,7 @@ static int q6v5_probe(struct platform_device *pdev)
 	qcom_add_glink_subdev(rproc, &qproc->glink_subdev);
 	qcom_add_smd_subdev(rproc, &qproc->smd_subdev);
 	qcom_add_ssr_subdev(rproc, &qproc->ssr_subdev, "mpss");
+	qcom_add_ipa_notify_subdev(rproc, &qproc->ipa_notify_subdev);
 	qproc->sysmon = qcom_add_sysmon_subdev(rproc, "modem", 0x12);
 	if (IS_ERR(qproc->sysmon)) {
 		ret = PTR_ERR(qproc->sysmon);
@@ -1677,6 +1713,7 @@ static int q6v5_probe(struct platform_device *pdev)
 	return 0;
 
 detach_proxy_pds:
+	qcom_remove_ipa_notify_subdev(qproc->rproc, &qproc->ipa_notify_subdev);
 	q6v5_pds_detach(qproc, qproc->proxy_pds, qproc->proxy_pd_count);
 detach_active_pds:
 	q6v5_pds_detach(qproc, qproc->active_pds, qproc->active_pd_count);
@@ -1693,6 +1730,7 @@ static int q6v5_remove(struct platform_device *pdev)
 	rproc_del(qproc->rproc);
 
 	qcom_remove_sysmon_subdev(qproc->sysmon);
+	qcom_remove_ipa_notify_subdev(qproc->rproc, &qproc->ipa_notify_subdev);
 	qcom_remove_glink_subdev(qproc->rproc, &qproc->glink_subdev);
 	qcom_remove_smd_subdev(qproc->rproc, &qproc->smd_subdev);
 	qcom_remove_ssr_subdev(qproc->rproc, &qproc->ssr_subdev);
diff --git a/include/linux/remoteproc/qcom_q6v5_ipa_notify.h b/include/linux/remoteproc/qcom_q6v5_ipa_notify.h
new file mode 100644
index 000000000000..0820edc0ab7d
--- /dev/null
+++ b/include/linux/remoteproc/qcom_q6v5_ipa_notify.h
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (C) 2019 Linaro Ltd. */
+
+#ifndef __QCOM_Q6V5_IPA_NOTIFY_H__
+#define __QCOM_Q6V5_IPA_NOTIFY_H__
+
+#if IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY)
+
+#include <linux/remoteproc.h>
+
+enum qcom_rproc_event {
+	MODEM_STARTING	= 0,	/* Modem is about to be started */
+	MODEM_RUNNING	= 1,	/* Startup complete; modem is operational */
+	MODEM_STOPPING	= 2,	/* Modem is about to shut down */
+	MODEM_CRASHED	= 3,	/* Modem has crashed (implies stopping) */
+	MODEM_OFFLINE	= 4,	/* Modem is now offline */
+	MODEM_REMOVING	= 5,	/* Modem is about to be removed */
+};
+
+typedef void (*qcom_ipa_notify_t)(void *data, enum qcom_rproc_event event);
+
+struct qcom_rproc_ipa_notify {
+	struct rproc_subdev subdev;
+
+	qcom_ipa_notify_t notify;
+	void *data;
+};
+
+/**
+ * qcom_add_ipa_notify_subdev() - Register IPA notification subdevice
+ * @rproc:	rproc handle
+ * @ipa_notify:	IPA notification subdevice handle
+ *
+ * Register the @ipa_notify subdevice with the @rproc so modem events
+ * can be sent to IPA when they occur.
+ *
+ * This is defined in "qcom_q6v5_ipa_notify.c".
+ */
+void qcom_add_ipa_notify_subdev(struct rproc *rproc,
+		struct qcom_rproc_ipa_notify *ipa_notify);
+
+/**
+ * qcom_remove_ipa_notify_subdev() - Remove IPA notification subdevice
+ * @rproc:	rproc handle
+ * @ipa_notify:	IPA notification subdevice handle
+ *
+ * This is defined in "qcom_q6v5_ipa_notify.c".
+ */
+void qcom_remove_ipa_notify_subdev(struct rproc *rproc,
+		struct qcom_rproc_ipa_notify *ipa_notify);
+
+/**
+ * qcom_register_ipa_notify() - Register IPA notification function
+ * @rproc:	Remote processor handle
+ * @notify:	Non-null IPA notification callback function pointer
+ * @data:	Data supplied to IPA notification callback function
+ *
+ * @Return: 0 if successful, or a negative error code otherwise
+ *
+ * This is defined in "qcom_q6v5_mss.c".
+ */
+int qcom_register_ipa_notify(struct rproc *rproc, qcom_ipa_notify_t notify,
+			     void *data);
+/**
+ * qcom_deregister_ipa_notify() - Deregister IPA notification function
+ * @rproc:	Remote processor handle
+ *
+ * This is defined in "qcom_q6v5_mss.c".
+ */
+void qcom_deregister_ipa_notify(struct rproc *rproc);
+
+#else /* !IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) */
+
+struct qcom_rproc_ipa_notify { /* empty */ };
+
+#define qcom_add_ipa_notify_subdev(rproc, ipa_notify)		/* no-op */
+#define qcom_remove_ipa_notify_subdev(rproc, ipa_notify)	/* no-op */
+
+#endif /* !IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) */
+
+#endif /* !__QCOM_Q6V5_IPA_NOTIFY_H__ */
-- 
2.20.1



* [PATCH v2 02/17] dt-bindings: soc: qcom: add IPA bindings
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
  2020-03-06  4:28 ` [PATCH v2 01/17] remoteproc: add IPA notification to q6v5 driver Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 03/17] soc: qcom: ipa: main code Alex Elder
                   ` (17 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: Rob Herring, Mark Rutland, David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan,
	Ohad Ben-Cohen, Siddharth Gupta, netdev, devicetree,
	linux-arm-kernel, linux-arm-msm, linux-soc, linux-kernel,
	Rob Herring

Add the binding definitions for the "qcom,ipa" device tree node.

Signed-off-by: Alex Elder <elder@linaro.org>
Reviewed-by: Rob Herring <robh@kernel.org>
---
 .../devicetree/bindings/net/qcom,ipa.yaml     | 192 ++++++++++++++++++
 1 file changed, 192 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/net/qcom,ipa.yaml

diff --git a/Documentation/devicetree/bindings/net/qcom,ipa.yaml b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
new file mode 100644
index 000000000000..91d08f2c7791
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
@@ -0,0 +1,192 @@
+# SPDX-License-Identifier: GPL-2.0 OR BSD-2-Clause
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/qcom,ipa.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm IP Accelerator (IPA)
+
+maintainers:
+  - Alex Elder <elder@kernel.org>
+
+description:
+  This binding describes the Qualcomm IPA.  The IPA is capable of offloading
+  certain network processing tasks (e.g. filtering, routing, and NAT) from
+  the main processor.
+
+  The IPA sits between multiple independent "execution environments,"
+  including the Application Processor (AP) and the modem.  The IPA presents
+  a Generic Software Interface (GSI) to each execution environment.
+  The GSI is an integral part of the IPA, but it is logically isolated
+  and has a distinct interrupt and a separately-defined address space.
+
+  See also soc/qcom/qcom,smp2p.txt and interconnect/interconnect.txt.
+
+  - |
+    --------             ---------
+    |      |             |       |
+    |  AP  +<---.   .----+ Modem |
+    |      +--. |   | .->+       |
+    |      |  | |   | |  |       |
+    --------  | |   | |  ---------
+              v |   v |
+            --+-+---+-+--
+            |    GSI    |
+            |-----------|
+            |           |
+            |    IPA    |
+            |           |
+            -------------
+
+properties:
+  compatible:
+      const: "qcom,sdm845-ipa"
+
+  reg:
+    items:
+      - description: IPA registers
+      - description: IPA shared memory
+      - description: GSI registers
+
+  reg-names:
+    items:
+      - const: ipa-reg
+      - const: ipa-shared
+      - const: gsi
+
+  clocks:
+    maxItems: 1
+
+  clock-names:
+      const: core
+
+  interrupts:
+    items:
+      - description: IPA interrupt (hardware IRQ)
+      - description: GSI interrupt (hardware IRQ)
+      - description: Modem clock query interrupt (smp2p interrupt)
+      - description: Modem setup ready interrupt (smp2p interrupt)
+
+  interrupt-names:
+    items:
+      - const: ipa
+      - const: gsi
+      - const: ipa-clock-query
+      - const: ipa-setup-ready
+
+  interconnects:
+    items:
+      - description: Interconnect path between IPA and main memory
+      - description: Interconnect path between IPA and internal memory
+      - description: Interconnect path between IPA and the AP subsystem
+
+  interconnect-names:
+    items:
+      - const: memory
+      - const: imem
+      - const: config
+
+  qcom,smem-states:
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    description: State bits used by the AP to signal the modem.
+    items:
+    - description: Whether the "ipa-clock-enabled" state bit is valid
+    - description: Whether the IPA clock is enabled (if valid)
+
+  qcom,smem-state-names:
+    $ref: /schemas/types.yaml#/definitions/string-array
+    description: The names of the state bits used for SMP2P output
+    items:
+      - const: ipa-clock-enabled-valid
+      - const: ipa-clock-enabled
+
+  modem-init:
+    type: boolean
+    description:
+      If present, it indicates that the modem is responsible for
+      performing early IPA initialization, including loading and
+      validating firmware used by the GSI.
+
+  modem-remoteproc:
+    $ref: /schemas/types.yaml#definitions/phandle
+    description:
+      This defines the phandle to the remoteproc node representing
+      the modem subsystem.  This is required so the IPA driver can
+      receive and act on notifications of modem up/down events.
+
+  memory-region:
+    $ref: /schemas/types.yaml#/definitions/phandle-array
+    maxItems: 1
+    description:
+      If present, a phandle for a reserved memory area that holds
+      the firmware passed to Trust Zone for authentication.  Required
+      when Trust Zone (not the modem) performs early initialization.
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - interrupts
+  - interconnects
+  - qcom,smem-states
+  - modem-remoteproc
+
+oneOf:
+  - required:
+    - modem-init
+  - required:
+    - memory-region
+
+examples:
+  - |
+        smp2p-mpss {
+                compatible = "qcom,smp2p";
+                ipa_smp2p_out: ipa-ap-to-modem {
+                        qcom,entry-name = "ipa";
+                        #qcom,smem-state-cells = <1>;
+                };
+
+                ipa_smp2p_in: ipa-modem-to-ap {
+                        qcom,entry-name = "ipa";
+                        interrupt-controller;
+                        #interrupt-cells = <2>;
+                };
+        };
+        ipa@1e40000 {
+                compatible = "qcom,sdm845-ipa";
+
+                modem-init;
+                modem-remoteproc = <&mss_pil>;
+
+                reg = <0 0x1e40000 0 0x7000>,
+                        <0 0x1e47000 0 0x2000>,
+                        <0 0x1e04000 0 0x2c000>;
+                reg-names = "ipa-reg",
+                                "ipa-shared",
+                                "gsi";
+
+                interrupts-extended = <&intc 0 311 IRQ_TYPE_EDGE_RISING>,
+                                        <&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
+                                        <&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+                                        <&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
+                interrupt-names = "ipa",
+                                        "gsi",
+                                        "ipa-clock-query",
+                                        "ipa-setup-ready";
+
+                clocks = <&rpmhcc RPMH_IPA_CLK>;
+                clock-names = "core";
+
+                interconnects =
+                        <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
+                        <&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
+                        <&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
+                interconnect-names = "memory",
+                                        "imem",
+                                        "config";
+
+                qcom,smem-states = <&ipa_smp2p_out 0>,
+                                        <&ipa_smp2p_out 1>;
+                qcom,smem-state-names = "ipa-clock-enabled-valid",
+                                        "ipa-clock-enabled";
+        };
-- 
2.20.1



* [PATCH v2 03/17] soc: qcom: ipa: main code
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
  2020-03-06  4:28 ` [PATCH v2 01/17] remoteproc: add IPA notification to q6v5 driver Alex Elder
  2020-03-06  4:28 ` [PATCH v2 02/17] dt-bindings: soc: qcom: add IPA bindings Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 04/17] soc: qcom: ipa: configuration data Alex Elder
                   ` (16 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

This patch includes source files that represent some basic "main
program" code for the IPA driver.  They are:
  - "ipa.h" defines the top-level IPA structure which represents an IPA
    device throughout the code.
  - "ipa_main.c" contains the platform driver probe function, along with
    some general code used during initialization.
  - "ipa_reg.h" defines the offsets of the 32-bit registers used for the
    IPA device, along with masks that define the position and width of
    fields within these registers.
  - "ipa_version.h" defines some symbolic IPA version numbers.

Each file includes some documentation that provides a little more
overview of how the code is organized and used.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa.h         | 148 ++++++
 drivers/net/ipa/ipa_main.c    | 954 ++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_reg.c     |  38 ++
 drivers/net/ipa/ipa_reg.h     | 476 +++++++++++++++++
 drivers/net/ipa/ipa_version.h |  23 +
 5 files changed, 1639 insertions(+)
 create mode 100644 drivers/net/ipa/ipa.h
 create mode 100644 drivers/net/ipa/ipa_main.c
 create mode 100644 drivers/net/ipa/ipa_reg.c
 create mode 100644 drivers/net/ipa/ipa_reg.h
 create mode 100644 drivers/net/ipa/ipa_version.h

diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
new file mode 100644
index 000000000000..23fb29889e5a
--- /dev/null
+++ b/drivers/net/ipa/ipa.h
@@ -0,0 +1,148 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _IPA_H_
+#define _IPA_H_
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/notifier.h>
+#include <linux/pm_wakeup.h>
+
+#include "ipa_version.h"
+#include "gsi.h"
+#include "ipa_mem.h"
+#include "ipa_qmi.h"
+#include "ipa_endpoint.h"
+#include "ipa_interrupt.h"
+
+struct clk;
+struct icc_path;
+struct net_device;
+struct platform_device;
+
+struct ipa_clock;
+struct ipa_smp2p;
+struct ipa_interrupt;
+
+/**
+ * struct ipa - IPA information
+ * @gsi:		Embedded GSI structure
+ * @version:		IPA hardware version
+ * @pdev:		Platform device
+ * @modem_rproc:	Remoteproc handle for modem subsystem
+ * @smp2p:		SMP2P information
+ * @clock:		IPA clocking information
+ * @suspend_ref:	Whether clock reference preventing suspend taken
+ * @table_addr:		DMA address of filter/route table content
+ * @table_virt:		Virtual address of filter/route table content
+ * @interrupt:		IPA Interrupt information
+ * @uc_loaded:		true after microcontroller has reported it's ready
+ * @reg_addr:		DMA address used for IPA register access
+ * @reg_virt:		Virtual address used for IPA register access
+ * @mem_addr:		DMA address of IPA-local memory space
+ * @mem_virt:		Virtual address of IPA-local memory space
+ * @mem_offset:		Offset from @mem_virt used for access to IPA memory
+ * @mem_size:		Total size (bytes) of memory at @mem_virt
+ * @mem:		Array of IPA-local memory region descriptors
+ * @zero_addr:		DMA address of preallocated zero-filled memory
+ * @zero_virt:		Virtual address of preallocated zero-filled memory
+ * @zero_size:		Size (bytes) of preallocated zero-filled memory
+ * @wakeup_source:	Wakeup source information
+ * @available:		Bit mask indicating endpoints hardware supports
+ * @filter_map:		Bit mask indicating endpoints that support filtering
+ * @initialized:	Bit mask indicating endpoints initialized
+ * @set_up:		Bit mask indicating endpoints set up
+ * @enabled:		Bit mask indicating endpoints enabled
+ * @endpoint:		Array of endpoint information
+ * @channel_map:	Mapping of GSI channel to IPA endpoint
+ * @name_map:		Mapping of IPA endpoint name to IPA endpoint
+ * @setup_complete:	Flag indicating whether setup stage has completed
+ * @modem_state:	State of modem (stopped, running)
+ * @modem_netdev:	Network device structure used for modem
+ * @qmi:		QMI information
+ */
+struct ipa {
+	struct gsi gsi;
+	enum ipa_version version;
+	struct platform_device *pdev;
+	struct rproc *modem_rproc;
+	struct ipa_smp2p *smp2p;
+	struct ipa_clock *clock;
+	atomic_t suspend_ref;
+
+	dma_addr_t table_addr;
+	__le64 *table_virt;
+
+	struct ipa_interrupt *interrupt;
+	bool uc_loaded;
+
+	dma_addr_t reg_addr;
+	void __iomem *reg_virt;
+
+	dma_addr_t mem_addr;
+	void *mem_virt;
+	u32 mem_offset;
+	u32 mem_size;
+	const struct ipa_mem *mem;
+
+	dma_addr_t zero_addr;
+	void *zero_virt;
+	size_t zero_size;
+
+	struct wakeup_source *wakeup_source;
+
+	/* Bit masks indicating endpoint state */
+	u32 available;		/* supported by hardware */
+	u32 filter_map;
+	u32 initialized;
+	u32 set_up;
+	u32 enabled;
+
+	struct ipa_endpoint endpoint[IPA_ENDPOINT_MAX];
+	struct ipa_endpoint *channel_map[GSI_CHANNEL_COUNT_MAX];
+	struct ipa_endpoint *name_map[IPA_ENDPOINT_COUNT];
+
+	bool setup_complete;
+
+	atomic_t modem_state;		/* enum ipa_modem_state */
+	struct net_device *modem_netdev;
+	struct ipa_qmi qmi;
+};
+
+/**
+ * ipa_setup() - Perform IPA setup
+ * @ipa:		IPA pointer
+ *
+ * IPA initialization is broken into stages:  init; config; and setup.
+ * (These have inverses exit, deconfig, and teardown.)
+ *
+ * Activities performed at the init stage can be done without requiring
+ * any access to IPA hardware.  Activities performed at the config stage
+ * require the IPA clock to be running, because they involve access
+ * to IPA registers.  The setup stage is performed only after the GSI
+ * hardware is ready (more on this below).  The setup stage allows
+ * the AP to perform more complex initialization by issuing "immediate
+ * commands" using a special interface to the IPA.
+ *
+ * This function, @ipa_setup(), starts the setup stage.
+ *
+ * In order for the GSI hardware to be functional it needs firmware to be
+ * loaded (in addition to some other low-level initialization).  This early
+ * GSI initialization can be done either by Trust Zone on the AP or by the
+ * modem.
+ *
+ * If it's done by Trust Zone, the AP loads the GSI firmware and supplies
+ * it to Trust Zone to verify and install.  When this completes, if
+ * verification was successful, the GSI layer is ready and ipa_setup()
+ * implements the setup phase of initialization.
+ *
+ * If the modem performs early GSI initialization, the AP needs to know
+ * when this has occurred.  An SMP2P interrupt is used for this purpose,
+ * and receipt of that interrupt triggers the call to ipa_setup().
+ */
+int ipa_setup(struct ipa *ipa);
+
+#endif /* _IPA_H_ */
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
new file mode 100644
index 000000000000..d6e7f257e99d
--- /dev/null
+++ b/drivers/net/ipa/ipa_main.c
@@ -0,0 +1,954 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/atomic.h>
+#include <linux/bitfield.h>
+#include <linux/device.h>
+#include <linux/bug.h>
+#include <linux/io.h>
+#include <linux/firmware.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/remoteproc.h>
+#include <linux/qcom_scm.h>
+#include <linux/soc/qcom/mdt_loader.h>
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+#include "ipa_cmd.h"
+#include "ipa_reg.h"
+#include "ipa_mem.h"
+#include "ipa_table.h"
+#include "ipa_modem.h"
+#include "ipa_uc.h"
+#include "ipa_interrupt.h"
+#include "gsi_trans.h"
+
+/**
+ * DOC: The IP Accelerator
+ *
+ * This driver supports the Qualcomm IP Accelerator (IPA), which is a
+ * networking component found in many Qualcomm SoCs.  The IPA is connected
+ * to the application processor (AP), but is also connected (and partially
+ * controlled by) other "execution environments" (EEs), such as a modem.
+ *
+ * The IPA is the conduit between the AP and the modem that carries network
+ * traffic.  This driver presents a network interface representing the
+ * connection of the modem to external (e.g. LTE) networks.
+ *
+ * The IPA provides protocol checksum calculation, offloading this work
+ * from the AP.  The IPA offers additional functionality, including routing,
+ * filtering, and NAT support, but that more advanced functionality is not
+ * currently supported.  Despite that, some resources--including routing
+ * tables and filter tables--are defined in this driver because they must
+ * be initialized even when the advanced hardware features are not used.
+ *
+ * There are two distinct layers that implement the IPA hardware, and this
+ * is reflected in the organization of the driver.  The generic software
+ * interface (GSI) is an integral component of the IPA, providing a
+ * well-defined communication layer between the AP subsystem and the IPA
+ * core.  The GSI implements a set of "channels" used for communication
+ * between the AP and the IPA.
+ *
+ * The IPA layer uses GSI channels to implement its "endpoints".  And while
+ * a GSI channel carries data between the AP and the IPA, a pair of IPA
+ * endpoints is used to carry traffic between two EEs.  Specifically, the main
+ * modem network interface is implemented by two pairs of endpoints:  a TX
+ * endpoint on the AP coupled with an RX endpoint on the modem; and another
+ * RX endpoint on the AP receiving data from a TX endpoint on the modem.
+ */
+
+/* The name of the GSI firmware file relative to /lib/firmware */
+#define IPA_FWS_PATH		"ipa_fws.mdt"
+#define IPA_PAS_ID		15
+
+/**
+ * ipa_suspend_handler() - Handle the suspend IPA interrupt
+ * @ipa:	IPA pointer
+ * @irq_id:	IPA interrupt type (unused)
+ *
+ * When in suspended state, the IPA can trigger a resume by sending a SUSPEND
+ * IPA interrupt.
+ */
+static void ipa_suspend_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
+{
+	/* Take a single clock reference to prevent suspend.  All
+	 * endpoints will be resumed as a result.  This reference will
+	 * be dropped when we get a power management suspend request.
+	 */
+	if (!atomic_xchg(&ipa->suspend_ref, 1))
+		ipa_clock_get(ipa);
+
+	/* Acknowledge/clear the suspend interrupt on all endpoints */
+	ipa_interrupt_suspend_clear_all(ipa->interrupt);
+}
+
+/**
+ * ipa_setup() - Set up IPA hardware
+ * @ipa:	IPA pointer
+ *
+ * Perform initialization that requires issuing immediate commands on
+ * the command TX endpoint.  If the modem is doing GSI firmware load
+ * and initialization, this function will be called when an SMP2P
+ * interrupt has been signaled by the modem.  Otherwise it will be
+ * called from ipa_probe() after GSI firmware has been successfully
+ * loaded, authenticated, and started by Trust Zone.
+ */
+int ipa_setup(struct ipa *ipa)
+{
+	struct ipa_endpoint *exception_endpoint;
+	struct ipa_endpoint *command_endpoint;
+	int ret;
+
+	/* IPA v4.0 and above don't use the doorbell engine. */
+	ret = gsi_setup(&ipa->gsi, ipa->version == IPA_VERSION_3_5_1);
+	if (ret)
+		return ret;
+
+	ipa->interrupt = ipa_interrupt_setup(ipa);
+	if (IS_ERR(ipa->interrupt)) {
+		ret = PTR_ERR(ipa->interrupt);
+		goto err_gsi_teardown;
+	}
+	ipa_interrupt_add(ipa->interrupt, IPA_IRQ_TX_SUSPEND,
+			  ipa_suspend_handler);
+
+	ipa_uc_setup(ipa);
+
+	ipa_endpoint_setup(ipa);
+
+	/* We need to use the AP command TX endpoint to perform other
+	 * initialization, so we enable it first.
+	 */
+	command_endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
+	ret = ipa_endpoint_enable_one(command_endpoint);
+	if (ret)
+		goto err_endpoint_teardown;
+
+	ret = ipa_mem_setup(ipa);
+	if (ret)
+		goto err_command_disable;
+
+	ret = ipa_table_setup(ipa);
+	if (ret)
+		goto err_mem_teardown;
+
+	/* Enable the exception handling endpoint, and tell the hardware
+	 * to use it by default.
+	 */
+	exception_endpoint = ipa->name_map[IPA_ENDPOINT_AP_LAN_RX];
+	ret = ipa_endpoint_enable_one(exception_endpoint);
+	if (ret)
+		goto err_table_teardown;
+
+	ipa_endpoint_default_route_set(ipa, exception_endpoint->endpoint_id);
+
+	/* We're all set.  Now prepare for communication with the modem */
+	ret = ipa_modem_setup(ipa);
+	if (ret)
+		goto err_default_route_clear;
+
+	ipa->setup_complete = true;
+
+	dev_info(&ipa->pdev->dev, "IPA driver setup completed successfully\n");
+
+	return 0;
+
+err_default_route_clear:
+	ipa_endpoint_default_route_clear(ipa);
+	ipa_endpoint_disable_one(exception_endpoint);
+err_table_teardown:
+	ipa_table_teardown(ipa);
+err_mem_teardown:
+	ipa_mem_teardown(ipa);
+err_command_disable:
+	ipa_endpoint_disable_one(command_endpoint);
+err_endpoint_teardown:
+	ipa_endpoint_teardown(ipa);
+	ipa_uc_teardown(ipa);
+	ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_TX_SUSPEND);
+	ipa_interrupt_teardown(ipa->interrupt);
+err_gsi_teardown:
+	gsi_teardown(&ipa->gsi);
+
+	return ret;
+}
+
+/**
+ * ipa_teardown() - Inverse of ipa_setup()
+ * @ipa:	IPA pointer
+ */
+static void ipa_teardown(struct ipa *ipa)
+{
+	struct ipa_endpoint *exception_endpoint;
+	struct ipa_endpoint *command_endpoint;
+
+	ipa_modem_teardown(ipa);
+	ipa_endpoint_default_route_clear(ipa);
+	exception_endpoint = ipa->name_map[IPA_ENDPOINT_AP_LAN_RX];
+	ipa_endpoint_disable_one(exception_endpoint);
+	ipa_table_teardown(ipa);
+	ipa_mem_teardown(ipa);
+	command_endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
+	ipa_endpoint_disable_one(command_endpoint);
+	ipa_endpoint_teardown(ipa);
+	ipa_uc_teardown(ipa);
+	ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_TX_SUSPEND);
+	ipa_interrupt_teardown(ipa->interrupt);
+	gsi_teardown(&ipa->gsi);
+}
+
+/* Configure QMB Core Master Port selection */
+static void ipa_hardware_config_comp(struct ipa *ipa)
+{
+	u32 val;
+
+	/* Nothing to configure for IPA v3.5.1 */
+	if (ipa->version == IPA_VERSION_3_5_1)
+		return;
+
+	val = ioread32(ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
+
+	if (ipa->version == IPA_VERSION_4_0) {
+		val &= ~IPA_QMB_SELECT_CONS_EN_FMASK;
+		val &= ~IPA_QMB_SELECT_PROD_EN_FMASK;
+		val &= ~IPA_QMB_SELECT_GLOBAL_EN_FMASK;
+	} else  {
+		val |= GSI_MULTI_AXI_MASTERS_DIS_FMASK;
+	}
+
+	val |= GSI_MULTI_INORDER_RD_DIS_FMASK;
+	val |= GSI_MULTI_INORDER_WR_DIS_FMASK;
+
+	iowrite32(val, ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
+}
+
+/* Configure DDR and PCIe max read/write QSB values */
+static void ipa_hardware_config_qsb(struct ipa *ipa)
+{
+	u32 val;
+
+	/* QMB_0 represents DDR; QMB_1 represents PCIe (not present in 4.2) */
+	val = u32_encode_bits(8, GEN_QMB_0_MAX_WRITES_FMASK);
+	if (ipa->version == IPA_VERSION_4_2)
+		val |= u32_encode_bits(0, GEN_QMB_1_MAX_WRITES_FMASK);
+	else
+		val |= u32_encode_bits(4, GEN_QMB_1_MAX_WRITES_FMASK);
+	iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_WRITES_OFFSET);
+
+	if (ipa->version == IPA_VERSION_3_5_1) {
+		val = u32_encode_bits(8, GEN_QMB_0_MAX_READS_FMASK);
+		val |= u32_encode_bits(12, GEN_QMB_1_MAX_READS_FMASK);
+	} else {
+		val = u32_encode_bits(12, GEN_QMB_0_MAX_READS_FMASK);
+		if (ipa->version == IPA_VERSION_4_2)
+			val |= u32_encode_bits(0, GEN_QMB_1_MAX_READS_FMASK);
+		else
+			val |= u32_encode_bits(12, GEN_QMB_1_MAX_READS_FMASK);
+		/* GEN_QMB_0_MAX_READS_BEATS is 0 */
+		/* GEN_QMB_1_MAX_READS_BEATS is 0 */
+	}
+	iowrite32(val, ipa->reg_virt + IPA_REG_QSB_MAX_READS_OFFSET);
+}
+
+static void ipa_idle_indication_cfg(struct ipa *ipa,
+				    u32 enter_idle_debounce_thresh,
+				    bool const_non_idle_enable)
+{
+	u32 offset;
+	u32 val;
+
+	val = u32_encode_bits(enter_idle_debounce_thresh,
+			      ENTER_IDLE_DEBOUNCE_THRESH_FMASK);
+	if (const_non_idle_enable)
+		val |= CONST_NON_IDLE_ENABLE_FMASK;
+
+	offset = ipa_reg_idle_indication_cfg_offset(ipa->version);
+	iowrite32(val, ipa->reg_virt + offset);
+}
+
+/**
+ * ipa_hardware_dcd_config() - Enable dynamic clock division on IPA
+ *
+ * Configures when the IPA signals it is idle to the global clock
+ * controller, which can respond by scaling down the clock to
+ * save power.
+ */
+static void ipa_hardware_dcd_config(struct ipa *ipa)
+{
+	/* Recommended values for IPA 3.5 according to IPA HPG */
+	ipa_idle_indication_cfg(ipa, 256, false);
+}
+
+static void ipa_hardware_dcd_deconfig(struct ipa *ipa)
+{
+	/* Power-on reset values */
+	ipa_idle_indication_cfg(ipa, 0, true);
+}
+
+/**
+ * ipa_hardware_config() - Primitive hardware initialization
+ * @ipa:	IPA pointer
+ */
+static void ipa_hardware_config(struct ipa *ipa)
+{
+	u32 granularity;
+	u32 val;
+
+	/* Fill in backward-compatibility register, based on version */
+	val = ipa_reg_bcr_val(ipa->version);
+	iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET);
+
+	if (ipa->version != IPA_VERSION_3_5_1) {
+		/* Enable open global clocks (hardware workaround) */
+		val = GLOBAL_FMASK;
+		val |= GLOBAL_2X_CLK_FMASK;
+		iowrite32(val, ipa->reg_virt + IPA_REG_CLKON_CFG_OFFSET);
+
+		/* Disable PA mask to allow HOLB drop (hardware workaround) */
+		val = ioread32(ipa->reg_virt + IPA_REG_TX_CFG_OFFSET);
+		val &= ~PA_MASK_EN;
+		iowrite32(val, ipa->reg_virt + IPA_REG_TX_CFG_OFFSET);
+	}
+
+	ipa_hardware_config_comp(ipa);
+
+	/* Configure system bus limits */
+	ipa_hardware_config_qsb(ipa);
+
+	/* Configure aggregation granularity */
+	val = ioread32(ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET);
+	granularity = ipa_aggr_granularity_val(IPA_AGGR_GRANULARITY);
+	val = u32_encode_bits(granularity, AGGR_GRANULARITY);
+	iowrite32(val, ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET);
+
+	/* Disable hashed IPv4 and IPv6 routing and filtering for IPA v4.2 */
+	if (ipa->version == IPA_VERSION_4_2)
+		iowrite32(0, ipa->reg_virt + IPA_REG_FILT_ROUT_HASH_EN_OFFSET);
+
+	/* Enable dynamic clock division */
+	ipa_hardware_dcd_config(ipa);
+}
+
+/**
+ * ipa_hardware_deconfig() - Inverse of ipa_hardware_config()
+ * @ipa:	IPA pointer
+ *
+ * This restores the power-on reset values (even if they aren't different)
+ */
+static void ipa_hardware_deconfig(struct ipa *ipa)
+{
+	/* Mostly we just leave things as we set them. */
+	ipa_hardware_dcd_deconfig(ipa);
+}
+
+#ifdef IPA_VALIDATION
+
+/* # IPA resources used based on version (see IPA_RESOURCE_GROUP_COUNT) */
+static int ipa_resource_group_count(struct ipa *ipa)
+{
+	switch (ipa->version) {
+	case IPA_VERSION_3_5_1:
+		return 3;
+
+	case IPA_VERSION_4_0:
+	case IPA_VERSION_4_1:
+		return 4;
+
+	case IPA_VERSION_4_2:
+		return 1;
+
+	default:
+		return 0;
+	}
+}
+
+static bool ipa_resource_limits_valid(struct ipa *ipa,
+				      const struct ipa_resource_data *data)
+{
+	u32 group_count = ipa_resource_group_count(ipa);
+	u32 i;
+	u32 j;
+
+	if (!group_count)
+		return false;
+
+	/* Return an error if a non-zero resource group limit is specified
+	 * for a resource not supported by hardware.
+	 */
+	for (i = 0; i < data->resource_src_count; i++) {
+		const struct ipa_resource_src *resource;
+
+		resource = &data->resource_src[i];
+		for (j = group_count; j < IPA_RESOURCE_GROUP_COUNT; j++)
+			if (resource->limits[j].min || resource->limits[j].max)
+				return false;
+	}
+
+	for (i = 0; i < data->resource_dst_count; i++) {
+		const struct ipa_resource_dst *resource;
+
+		resource = &data->resource_dst[i];
+		for (j = group_count; j < IPA_RESOURCE_GROUP_COUNT; j++)
+			if (resource->limits[j].min || resource->limits[j].max)
+				return false;
+	}
+
+	return true;
+}
+
+#else /* !IPA_VALIDATION */
+
+static bool ipa_resource_limits_valid(struct ipa *ipa,
+				      const struct ipa_resource_data *data)
+{
+	return true;
+}
+
+#endif /* !IPA_VALIDATION */
+
+static void
+ipa_resource_config_common(struct ipa *ipa, u32 offset,
+			   const struct ipa_resource_limits *xlimits,
+			   const struct ipa_resource_limits *ylimits)
+{
+	u32 val;
+
+	val = u32_encode_bits(xlimits->min, X_MIN_LIM_FMASK);
+	val |= u32_encode_bits(xlimits->max, X_MAX_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->min, Y_MIN_LIM_FMASK);
+	val |= u32_encode_bits(ylimits->max, Y_MAX_LIM_FMASK);
+
+	iowrite32(val, ipa->reg_virt + offset);
+}
+
+static void ipa_resource_config_src_01(struct ipa *ipa,
+				       const struct ipa_resource_src *resource)
+{
+	u32 offset = IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(resource->type);
+
+	ipa_resource_config_common(ipa, offset,
+				   &resource->limits[0], &resource->limits[1]);
+}
+
+static void ipa_resource_config_src_23(struct ipa *ipa,
+				       const struct ipa_resource_src *resource)
+{
+	u32 offset = IPA_REG_SRC_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(resource->type);
+
+	ipa_resource_config_common(ipa, offset,
+				   &resource->limits[2], &resource->limits[3]);
+}
+
+static void ipa_resource_config_dst_01(struct ipa *ipa,
+				       const struct ipa_resource_dst *resource)
+{
+	u32 offset = IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(resource->type);
+
+	ipa_resource_config_common(ipa, offset,
+				   &resource->limits[0], &resource->limits[1]);
+}
+
+static void ipa_resource_config_dst_23(struct ipa *ipa,
+				       const struct ipa_resource_dst *resource)
+{
+	u32 offset = IPA_REG_DST_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(resource->type);
+
+	ipa_resource_config_common(ipa, offset,
+				   &resource->limits[2], &resource->limits[3]);
+}
+
+static int
+ipa_resource_config(struct ipa *ipa, const struct ipa_resource_data *data)
+{
+	u32 i;
+
+	if (!ipa_resource_limits_valid(ipa, data))
+		return -EINVAL;
+
+	for (i = 0; i < data->resource_src_count; i++) {
+		ipa_resource_config_src_01(ipa, &data->resource_src[i]);
+		ipa_resource_config_src_23(ipa, &data->resource_src[i]);
+	}
+
+	for (i = 0; i < data->resource_dst_count; i++) {
+		ipa_resource_config_dst_01(ipa, &data->resource_dst[i]);
+		ipa_resource_config_dst_23(ipa, &data->resource_dst[i]);
+	}
+
+	return 0;
+}
+
+static void ipa_resource_deconfig(struct ipa *ipa)
+{
+	/* Nothing to do */
+}
+
+/**
+ * ipa_config() - Configure IPA hardware
+ * @ipa:	IPA pointer
+ *
+ * Perform initialization requiring IPA clock to be enabled.
+ */
+static int ipa_config(struct ipa *ipa, const struct ipa_data *data)
+{
+	int ret;
+
+	/* Get a clock reference to allow initialization.  This reference
+	 * is held after initialization completes, and won't get dropped
+	 * unless/until a system suspend request arrives.
+	 */
+	atomic_set(&ipa->suspend_ref, 1);
+	ipa_clock_get(ipa);
+
+	ipa_hardware_config(ipa);
+
+	ret = ipa_endpoint_config(ipa);
+	if (ret)
+		goto err_hardware_deconfig;
+
+	ret = ipa_mem_config(ipa);
+	if (ret)
+		goto err_endpoint_deconfig;
+
+	ipa_table_config(ipa);
+
+	/* Assign resource limitation to each group */
+	ret = ipa_resource_config(ipa, data->resource_data);
+	if (ret)
+		goto err_table_deconfig;
+
+	ret = ipa_modem_config(ipa);
+	if (ret)
+		goto err_resource_deconfig;
+
+	return 0;
+
+err_resource_deconfig:
+	ipa_resource_deconfig(ipa);
+err_table_deconfig:
+	ipa_table_deconfig(ipa);
+	ipa_mem_deconfig(ipa);
+err_endpoint_deconfig:
+	ipa_endpoint_deconfig(ipa);
+err_hardware_deconfig:
+	ipa_hardware_deconfig(ipa);
+	ipa_clock_put(ipa);
+	atomic_set(&ipa->suspend_ref, 0);
+
+	return ret;
+}
+
+/**
+ * ipa_deconfig() - Inverse of ipa_config()
+ * @ipa:	IPA pointer
+ */
+static void ipa_deconfig(struct ipa *ipa)
+{
+	ipa_modem_deconfig(ipa);
+	ipa_resource_deconfig(ipa);
+	ipa_table_deconfig(ipa);
+	ipa_mem_deconfig(ipa);
+	ipa_endpoint_deconfig(ipa);
+	ipa_hardware_deconfig(ipa);
+	ipa_clock_put(ipa);
+	atomic_set(&ipa->suspend_ref, 0);
+}
+
+static int ipa_firmware_load(struct device *dev)
+{
+	const struct firmware *fw;
+	struct device_node *node;
+	struct resource res;
+	phys_addr_t phys;
+	ssize_t size;
+	void *virt;
+	int ret;
+
+	node = of_parse_phandle(dev->of_node, "memory-region", 0);
+	if (!node) {
+		dev_err(dev, "DT error getting \"memory-region\" property\n");
+		return -EINVAL;
+	}
+
+	ret = of_address_to_resource(node, 0, &res);
+	if (ret) {
+		dev_err(dev, "error %d getting \"memory-region\" resource\n",
+			ret);
+		return ret;
+	}
+
+	ret = request_firmware(&fw, IPA_FWS_PATH, dev);
+	if (ret) {
+		dev_err(dev, "error %d requesting \"%s\"\n", ret, IPA_FWS_PATH);
+		return ret;
+	}
+
+	phys = res.start;
+	size = (size_t)resource_size(&res);
+	virt = memremap(phys, size, MEMREMAP_WC);
+	if (!virt) {
+		dev_err(dev, "unable to remap firmware memory\n");
+		ret = -ENOMEM;
+		goto out_release_firmware;
+	}
+
+	ret = qcom_mdt_load(dev, fw, IPA_FWS_PATH, IPA_PAS_ID,
+			    virt, phys, size, NULL);
+	if (ret)
+		dev_err(dev, "error %d loading \"%s\"\n", ret, IPA_FWS_PATH);
+	else if ((ret = qcom_scm_pas_auth_and_reset(IPA_PAS_ID)))
+		dev_err(dev, "error %d authenticating \"%s\"\n", ret,
+			IPA_FWS_PATH);
+
+	memunmap(virt);
+out_release_firmware:
+	release_firmware(fw);
+
+	return ret;
+}
+
+static const struct of_device_id ipa_match[] = {
+	{
+		.compatible	= "qcom,sdm845-ipa",
+		.data		= &ipa_data_sdm845,
+	},
+	{
+		.compatible	= "qcom,sc7180-ipa",
+		.data		= &ipa_data_sc7180,
+	},
+	{ },
+};
+MODULE_DEVICE_TABLE(of, ipa_match);
+
+static phandle of_property_read_phandle(const struct device_node *np,
+					const char *name)
+{
+	struct property *prop;
+	int len = 0;
+
+	prop = of_find_property(np, name, &len);
+	if (!prop || len != sizeof(__be32))
+		return 0;
+
+	return be32_to_cpup(prop->value);
+}
+
+/* Check things that can be validated at build time.  This just
+ * groups these things so the BUILD_BUG_ON() calls don't clutter
+ * the rest of the code.
+ */
+static void ipa_validate_build(void)
+{
+#ifdef IPA_VALIDATE
+	/* We assume we're working on 64-bit hardware */
+	BUILD_BUG_ON(!IS_ENABLED(CONFIG_64BIT));
+
+	/* Code assumes the EE ID for the AP is 0 (zeroed structure field) */
+	BUILD_BUG_ON(GSI_EE_AP != 0);
+
+	/* There's no point if we have no channels or event rings */
+	BUILD_BUG_ON(!GSI_CHANNEL_COUNT_MAX);
+	BUILD_BUG_ON(!GSI_EVT_RING_COUNT_MAX);
+
+	/* GSI hardware design limits */
+	BUILD_BUG_ON(GSI_CHANNEL_COUNT_MAX > 32);
+	BUILD_BUG_ON(GSI_EVT_RING_COUNT_MAX > 31);
+
+	/* The number of TREs in a transaction is limited by the channel's
+	 * TLV FIFO size.  A transaction structure uses 8-bit fields
+	 * to represent the number of TREs it has allocated and used.
+	 */
+	BUILD_BUG_ON(GSI_TLV_MAX > U8_MAX);
+
+	/* Exceeding 128 bytes makes the transaction pool *much* larger */
+	BUILD_BUG_ON(sizeof(struct gsi_trans) > 128);
+
+	/* This is used as a divisor */
+	BUILD_BUG_ON(!IPA_AGGR_GRANULARITY);
+#endif /* IPA_VALIDATE */
+}
+
+/**
+ * ipa_probe() - IPA platform driver probe function
+ * @pdev:	Platform device pointer
+ *
+ * Return:	0 if successful, or a negative error code (possibly
+ *		EPROBE_DEFER)
+ *
+ * This is the main entry point for the IPA driver.  Initialization proceeds
+ * in several stages:
+ *   - The "init" stage involves activities that can be initialized without
+ *     access to the IPA hardware.
+ *   - The "config" stage requires the IPA clock to be active so IPA registers
+ *     can be accessed, but does not require the use of IPA immediate commands.
+ *   - The "setup" stage uses IPA immediate commands, and so requires the GSI
+ *     layer to be initialized.
+ *
+ * A Boolean Device Tree "modem-init" property determines whether GSI
+ * initialization will be performed by the AP (Trust Zone) or the modem.
+ * If the AP does GSI initialization, the setup phase is entered after
+ * this has completed successfully.  Otherwise the modem initializes
+ * the GSI layer and signals it has finished by sending an SMP2P interrupt
+ * to the AP; this triggers the start of IPA setup.
+ */
+static int ipa_probe(struct platform_device *pdev)
+{
+	struct wakeup_source *wakeup_source;
+	struct device *dev = &pdev->dev;
+	const struct ipa_data *data;
+	struct ipa_clock *clock;
+	struct rproc *rproc;
+	bool modem_alloc;
+	bool modem_init;
+	struct ipa *ipa;
+	phandle phandle;
+	bool prefetch;
+	int ret;
+
+	ipa_validate_build();
+
+	/* If we need Trust Zone, make sure it's available */
+	modem_init = of_property_read_bool(dev->of_node, "modem-init");
+	if (!modem_init)
+		if (!qcom_scm_is_available())
+			return -EPROBE_DEFER;
+
+	/* We rely on remoteproc to tell us about modem state changes */
+	phandle = of_property_read_phandle(dev->of_node, "modem-remoteproc");
+	if (!phandle) {
+		dev_err(dev, "DT missing \"modem-remoteproc\" property\n");
+		return -EINVAL;
+	}
+
+	rproc = rproc_get_by_phandle(phandle);
+	if (!rproc)
+		return -EPROBE_DEFER;
+
+	/* The clock and interconnects might not be ready when we're
+	 * probed, so this call might return -EPROBE_DEFER.
+	 */
+	clock = ipa_clock_init(dev);
+	if (IS_ERR(clock)) {
+		ret = PTR_ERR(clock);
+		goto err_rproc_put;
+	}
+
+	/* No more EPROBE_DEFER.  Get our configuration data */
+	data = of_device_get_match_data(dev);
+	if (!data) {
+		/* This is really IPA_VALIDATE (should never happen) */
+		dev_err(dev, "matched hardware not supported\n");
+		ret = -ENOTSUPP;
+		goto err_clock_exit;
+	}
+
+	/* Create a wakeup source. */
+	wakeup_source = wakeup_source_register(dev, "ipa");
+	if (!wakeup_source) {
+		/* The most likely reason for failure is memory exhaustion */
+		ret = -ENOMEM;
+		goto err_clock_exit;
+	}
+
+	/* Allocate and initialize the IPA structure */
+	ipa = kzalloc(sizeof(*ipa), GFP_KERNEL);
+	if (!ipa) {
+		ret = -ENOMEM;
+		goto err_wakeup_source_unregister;
+	}
+
+	ipa->pdev = pdev;
+	dev_set_drvdata(dev, ipa);
+	ipa->modem_rproc = rproc;
+	ipa->clock = clock;
+	atomic_set(&ipa->suspend_ref, 0);
+	ipa->wakeup_source = wakeup_source;
+	ipa->version = data->version;
+
+	ret = ipa_reg_init(ipa);
+	if (ret)
+		goto err_kfree_ipa;
+
+	ret = ipa_mem_init(ipa, data->mem_count, data->mem_data);
+	if (ret)
+		goto err_reg_exit;
+
+	/* GSI v2.0+ (IPA v4.0+) uses prefetch for the command channel */
+	prefetch = ipa->version != IPA_VERSION_3_5_1;
+	/* IPA v4.2 requires the AP to allocate channels for the modem */
+	modem_alloc = ipa->version == IPA_VERSION_4_2;
+
+	ret = gsi_init(&ipa->gsi, pdev, prefetch, data->endpoint_count,
+		       data->endpoint_data, modem_alloc);
+	if (ret)
+		goto err_mem_exit;
+
+	/* Result is a non-zero mask of endpoints that support filtering */
+	ipa->filter_map = ipa_endpoint_init(ipa, data->endpoint_count,
+					    data->endpoint_data);
+	if (!ipa->filter_map) {
+		ret = -EINVAL;
+		goto err_gsi_exit;
+	}
+
+	ret = ipa_table_init(ipa);
+	if (ret)
+		goto err_endpoint_exit;
+
+	ret = ipa_modem_init(ipa, modem_init);
+	if (ret)
+		goto err_table_exit;
+
+	ret = ipa_config(ipa, data);
+	if (ret)
+		goto err_modem_exit;
+
+	dev_info(dev, "IPA driver initialized");
+
+	/* If the modem is doing early initialization, it will trigger a
+	 * call to ipa_setup() when it has finished.  In that case
+	 * we're done here.
+	 */
+	if (modem_init)
+		return 0;
+
+	/* Otherwise we need to load the firmware and have Trust Zone validate
+	 * and install it.  If that succeeds we can proceed with setup.
+	 */
+	ret = ipa_firmware_load(dev);
+	if (ret)
+		goto err_deconfig;
+
+	ret = ipa_setup(ipa);
+	if (ret)
+		goto err_deconfig;
+
+	return 0;
+
+err_deconfig:
+	ipa_deconfig(ipa);
+err_modem_exit:
+	ipa_modem_exit(ipa);
+err_table_exit:
+	ipa_table_exit(ipa);
+err_endpoint_exit:
+	ipa_endpoint_exit(ipa);
+err_gsi_exit:
+	gsi_exit(&ipa->gsi);
+err_mem_exit:
+	ipa_mem_exit(ipa);
+err_reg_exit:
+	ipa_reg_exit(ipa);
+err_kfree_ipa:
+	kfree(ipa);
+err_wakeup_source_unregister:
+	wakeup_source_unregister(wakeup_source);
+err_clock_exit:
+	ipa_clock_exit(clock);
+err_rproc_put:
+	rproc_put(rproc);
+
+	return ret;
+}
+
+static int ipa_remove(struct platform_device *pdev)
+{
+	struct ipa *ipa = dev_get_drvdata(&pdev->dev);
+	struct rproc *rproc = ipa->modem_rproc;
+	struct ipa_clock *clock = ipa->clock;
+	struct wakeup_source *wakeup_source;
+	int ret;
+
+	wakeup_source = ipa->wakeup_source;
+
+	if (ipa->setup_complete) {
+		ret = ipa_modem_stop(ipa);
+		if (ret)
+			return ret;
+
+		ipa_teardown(ipa);
+	}
+
+	ipa_deconfig(ipa);
+	ipa_modem_exit(ipa);
+	ipa_table_exit(ipa);
+	ipa_endpoint_exit(ipa);
+	gsi_exit(&ipa->gsi);
+	ipa_mem_exit(ipa);
+	ipa_reg_exit(ipa);
+	kfree(ipa);
+	wakeup_source_unregister(wakeup_source);
+	ipa_clock_exit(clock);
+	rproc_put(rproc);
+
+	return 0;
+}
+
+/**
+ * ipa_suspend() - Power management system suspend callback
+ * @dev:	IPA device structure
+ *
+ * Return:	Zero
+ *
+ * Called by the PM framework when a system suspend operation is invoked.
+ */
+static int ipa_suspend(struct device *dev)
+{
+	struct ipa *ipa = dev_get_drvdata(dev);
+
+	ipa_clock_put(ipa);
+	atomic_set(&ipa->suspend_ref, 0);
+
+	return 0;
+}
+
+/**
+ * ipa_resume() - Power management system resume callback
+ * @dev:	IPA device structure
+ *
+ * Return:	Always returns 0
+ *
+ * Called by the PM framework when a system resume operation is invoked.
+ */
+static int ipa_resume(struct device *dev)
+{
+	struct ipa *ipa = dev_get_drvdata(dev);
+
+	/* This clock reference will keep the IPA out of suspend
+	 * until we get a power management suspend request.
+	 */
+	atomic_set(&ipa->suspend_ref, 1);
+	ipa_clock_get(ipa);
+
+	return 0;
+}
+
+static const struct dev_pm_ops ipa_pm_ops = {
+	.suspend_noirq	= ipa_suspend,
+	.resume_noirq	= ipa_resume,
+};
+
+static struct platform_driver ipa_driver = {
+	.probe	= ipa_probe,
+	.remove	= ipa_remove,
+	.driver	= {
+		.name		= "ipa",
+		.owner		= THIS_MODULE,
+		.pm		= &ipa_pm_ops,
+		.of_match_table	= ipa_match,
+	},
+};
+
+module_platform_driver(ipa_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("Qualcomm IP Accelerator device driver");
diff --git a/drivers/net/ipa/ipa_reg.c b/drivers/net/ipa/ipa_reg.c
new file mode 100644
index 000000000000..e6147a1cd787
--- /dev/null
+++ b/drivers/net/ipa/ipa_reg.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/io.h>
+
+#include "ipa.h"
+#include "ipa_reg.h"
+
+int ipa_reg_init(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	struct resource *res;
+
+	/* Set up IPA register memory */
+	res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM,
+					   "ipa-reg");
+	if (!res) {
+		dev_err(dev, "DT error getting \"ipa-reg\" memory property\n");
+		return -ENODEV;
+	}
+
+	ipa->reg_virt = ioremap(res->start, resource_size(res));
+	if (!ipa->reg_virt) {
+		dev_err(dev, "unable to remap \"ipa-reg\" memory\n");
+		return -ENOMEM;
+	}
+	ipa->reg_addr = res->start;
+
+	return 0;
+}
+
+void ipa_reg_exit(struct ipa *ipa)
+{
+	iounmap(ipa->reg_virt);
+}
diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
new file mode 100644
index 000000000000..3b8106aa277a
--- /dev/null
+++ b/drivers/net/ipa/ipa_reg.h
@@ -0,0 +1,476 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _IPA_REG_H_
+#define _IPA_REG_H_
+
+#include <linux/bitfield.h>
+
+#include "ipa_version.h"
+
+struct ipa;
+
+/**
+ * DOC: IPA Registers
+ *
+ * IPA registers are located within the "ipa-reg" address space defined by
+ * Device Tree.  The offset of each register within that space is specified
+ * by symbols defined below.  The address space is mapped to virtual memory
+ * space in ipa_reg_init().  All IPA registers are 32 bits wide.
+ *
+ * Certain register types are duplicated for a number of instances of
+ * something.  For example, each IPA endpoint has a set of registers
+ * defining its configuration.  The offset to an endpoint's set of registers
+ * is computed from a "base" offset, plus the endpoint's ID multiplied
+ * by a "stride" value for the register.  For such registers, the offset is
+ * computed by a function-like macro that takes a parameter used in the
+ * computation.
+ *
+ * Some register offsets depend on execution environment.  For these an "ee"
+ * parameter is supplied to the offset macro.  The "ee" value is a member of
+ * the gsi_ee enumerated type.
+ *
+ * The offset of a register dependent on endpoint id is computed by a macro
+ * that is supplied a parameter "ep".  The "ep" value is assumed to be less
+ * than the maximum endpoint value for the current hardware, and that will
+ * not exceed IPA_ENDPOINT_MAX.
+ *
+ * The offset of registers related to filter and route tables is computed
+ * by a macro that is supplied a parameter "er".  The "er" represents an
+ * endpoint ID for filters, or a route ID for routes.  For filters, the
+ * endpoint ID must be less than IPA_ENDPOINT_MAX, but is further restricted
+ * because not all endpoints support filtering.  For routes, the route ID
+ * must be less than IPA_ROUTE_MAX.
+ *
+ * The offset of registers related to resource types is computed by a macro
+ * that is supplied a parameter "rt".  The "rt" represents a resource type,
+ * which is a member of the ipa_resource_type_src enumerated type for
+ * source endpoint resources or the ipa_resource_type_dst enumerated type
+ * for destination endpoint resources.
+ *
+ * Some registers encode multiple fields within them.  For these, each field
+ * has a symbol below defining a field mask that encodes both the position
+ * and width of the field within its register.
+ *
+ * In some cases, different versions of IPA hardware use different offset or
+ * field mask values.  In such cases an inline function (taking the IPA
+ * version as an argument) is used rather than a macro to define the offset
+ * or field mask to use.
+ *
+ * Finally, some registers hold bitmasks representing endpoints.  In such
+ * cases the @available field in the @ipa structure defines the "full" set
+ * of valid bits for the register.
+ */
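As a rough, hand-written sketch of the conventions above (not code taken from
this patch), a write to one of the parameterized endpoint registers combines
an offset macro with a field mask, using u32_encode_bits() from
<linux/bitfield.h>; the endpoint number and field value are made up for
illustration:

	/* Illustrative only: set the resource group field for endpoint 3 */
	u32 offset = IPA_REG_ENDP_INIT_RSRC_GRP_N_OFFSET(3);
	u32 val = u32_encode_bits(1, RSRC_GRP_FMASK);

	iowrite32(val, ipa->reg_virt + offset);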
+
+#define IPA_REG_ENABLED_PIPES_OFFSET			0x00000038
+
+#define IPA_REG_COMP_CFG_OFFSET				0x0000003c
+#define ENABLE_FMASK				GENMASK(0, 0)
+#define GSI_SNOC_BYPASS_DIS_FMASK		GENMASK(1, 1)
+#define GEN_QMB_0_SNOC_BYPASS_DIS_FMASK		GENMASK(2, 2)
+#define GEN_QMB_1_SNOC_BYPASS_DIS_FMASK		GENMASK(3, 3)
+#define IPA_DCMP_FAST_CLK_EN_FMASK		GENMASK(4, 4)
+#define IPA_QMB_SELECT_CONS_EN_FMASK		GENMASK(5, 5)
+#define IPA_QMB_SELECT_PROD_EN_FMASK		GENMASK(6, 6)
+#define GSI_MULTI_INORDER_RD_DIS_FMASK		GENMASK(7, 7)
+#define GSI_MULTI_INORDER_WR_DIS_FMASK		GENMASK(8, 8)
+#define GEN_QMB_0_MULTI_INORDER_RD_DIS_FMASK	GENMASK(9, 9)
+#define GEN_QMB_1_MULTI_INORDER_RD_DIS_FMASK	GENMASK(10, 10)
+#define GEN_QMB_0_MULTI_INORDER_WR_DIS_FMASK	GENMASK(11, 11)
+#define GEN_QMB_1_MULTI_INORDER_WR_DIS_FMASK	GENMASK(12, 12)
+#define GEN_QMB_0_SNOC_CNOC_LOOP_PROT_DIS_FMASK	GENMASK(13, 13)
+#define GSI_SNOC_CNOC_LOOP_PROT_DISABLE_FMASK	GENMASK(14, 14)
+#define GSI_MULTI_AXI_MASTERS_DIS_FMASK		GENMASK(15, 15)
+#define IPA_QMB_SELECT_GLOBAL_EN_FMASK		GENMASK(16, 16)
+#define IPA_ATOMIC_FETCHER_ARB_LOCK_DIS_FMASK	GENMASK(20, 17)
+
+#define IPA_REG_CLKON_CFG_OFFSET			0x00000044
+#define RX_FMASK				GENMASK(0, 0)
+#define PROC_FMASK				GENMASK(1, 1)
+#define TX_WRAPPER_FMASK			GENMASK(2, 2)
+#define MISC_FMASK				GENMASK(3, 3)
+#define RAM_ARB_FMASK				GENMASK(4, 4)
+#define FTCH_HPS_FMASK				GENMASK(5, 5)
+#define FTCH_DPS_FMASK				GENMASK(6, 6)
+#define HPS_FMASK				GENMASK(7, 7)
+#define DPS_FMASK				GENMASK(8, 8)
+#define RX_HPS_CMDQS_FMASK			GENMASK(9, 9)
+#define HPS_DPS_CMDQS_FMASK			GENMASK(10, 10)
+#define DPS_TX_CMDQS_FMASK			GENMASK(11, 11)
+#define RSRC_MNGR_FMASK				GENMASK(12, 12)
+#define CTX_HANDLER_FMASK			GENMASK(13, 13)
+#define ACK_MNGR_FMASK				GENMASK(14, 14)
+#define D_DCPH_FMASK				GENMASK(15, 15)
+#define H_DCPH_FMASK				GENMASK(16, 16)
+#define DCMP_FMASK				GENMASK(17, 17)
+#define NTF_TX_CMDQS_FMASK			GENMASK(18, 18)
+#define TX_0_FMASK				GENMASK(19, 19)
+#define TX_1_FMASK				GENMASK(20, 20)
+#define FNR_FMASK				GENMASK(21, 21)
+#define QSB2AXI_CMDQ_L_FMASK			GENMASK(22, 22)
+#define AGGR_WRAPPER_FMASK			GENMASK(23, 23)
+#define RAM_SLAVEWAY_FMASK			GENMASK(24, 24)
+#define QMB_FMASK				GENMASK(25, 25)
+#define WEIGHT_ARB_FMASK			GENMASK(26, 26)
+#define GSI_IF_FMASK				GENMASK(27, 27)
+#define GLOBAL_FMASK				GENMASK(28, 28)
+#define GLOBAL_2X_CLK_FMASK			GENMASK(29, 29)
+
+#define IPA_REG_ROUTE_OFFSET				0x00000048
+#define ROUTE_DIS_FMASK				GENMASK(0, 0)
+#define ROUTE_DEF_PIPE_FMASK			GENMASK(5, 1)
+#define ROUTE_DEF_HDR_TABLE_FMASK		GENMASK(6, 6)
+#define ROUTE_DEF_HDR_OFST_FMASK		GENMASK(16, 7)
+#define ROUTE_FRAG_DEF_PIPE_FMASK		GENMASK(21, 17)
+#define ROUTE_DEF_RETAIN_HDR_FMASK		GENMASK(24, 24)
+
+#define IPA_REG_SHARED_MEM_SIZE_OFFSET			0x00000054
+#define SHARED_MEM_SIZE_FMASK			GENMASK(15, 0)
+#define SHARED_MEM_BADDR_FMASK			GENMASK(31, 16)
+
+#define IPA_REG_QSB_MAX_WRITES_OFFSET			0x00000074
+#define GEN_QMB_0_MAX_WRITES_FMASK		GENMASK(3, 0)
+#define GEN_QMB_1_MAX_WRITES_FMASK		GENMASK(7, 4)
+
+#define IPA_REG_QSB_MAX_READS_OFFSET			0x00000078
+#define GEN_QMB_0_MAX_READS_FMASK		GENMASK(3, 0)
+#define GEN_QMB_1_MAX_READS_FMASK		GENMASK(7, 4)
+/* The next two fields are present for IPA v4.0 and above */
+#define GEN_QMB_0_MAX_READS_BEATS_FMASK		GENMASK(23, 16)
+#define GEN_QMB_1_MAX_READS_BEATS_FMASK		GENMASK(31, 24)
+
+static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
+{
+	if (version == IPA_VERSION_3_5_1)
+		return 0x0000010c;
+
+	return 0x000000b4;
+}
+/* ipa->available defines the valid bits in the STATE_AGGR_ACTIVE register */
+
+/* The next register is present for IPA v4.2 and above */
+#define IPA_REG_FILT_ROUT_HASH_EN_OFFSET		0x00000148
+#define IPV6_ROUTER_HASH_EN			GENMASK(0, 0)
+#define IPV6_FILTER_HASH_EN			GENMASK(4, 4)
+#define IPV4_ROUTER_HASH_EN			GENMASK(8, 8)
+#define IPV4_FILTER_HASH_EN			GENMASK(12, 12)
+
+static inline u32 ipa_reg_filt_rout_hash_flush_offset(enum ipa_version version)
+{
+	if (version == IPA_VERSION_3_5_1)
+		return 0x0000090;
+
+	return 0x000014c;
+}
+
+#define IPV6_ROUTER_HASH_FLUSH			GENMASK(0, 0)
+#define IPV6_FILTER_HASH_FLUSH			GENMASK(4, 4)
+#define IPV4_ROUTER_HASH_FLUSH			GENMASK(8, 8)
+#define IPV4_FILTER_HASH_FLUSH			GENMASK(12, 12)
+
+#define IPA_REG_BCR_OFFSET				0x000001d0
+#define BCR_CMDQ_L_LACK_ONE_ENTRY		BIT(0)
+#define BCR_TX_NOT_USING_BRESP			BIT(1)
+#define BCR_SUSPEND_L2_IRQ			BIT(3)
+#define BCR_HOLB_DROP_L2_IRQ			BIT(4)
+#define BCR_DUAL_TX				BIT(5)
+
+/* Backward compatibility register value to use for each version */
+static inline u32 ipa_reg_bcr_val(enum ipa_version version)
+{
+	if (version == IPA_VERSION_3_5_1)
+		return BCR_CMDQ_L_LACK_ONE_ENTRY | BCR_TX_NOT_USING_BRESP |
+		       BCR_SUSPEND_L2_IRQ | BCR_HOLB_DROP_L2_IRQ | BCR_DUAL_TX;
+
+	if (version == IPA_VERSION_4_0 || version == IPA_VERSION_4_1)
+		return BCR_CMDQ_L_LACK_ONE_ENTRY | BCR_SUSPEND_L2_IRQ |
+		       BCR_HOLB_DROP_L2_IRQ | BCR_DUAL_TX;
+
+	return 0x00000000;
+}
+
+#define IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET	0x000001e8
+
+#define IPA_REG_AGGR_FORCE_CLOSE_OFFSET			0x000001ec
+/* ipa->available defines the valid bits in the AGGR_FORCE_CLOSE register */
+
+#define IPA_REG_COUNTER_CFG_OFFSET			0x000001f0
+#define AGGR_GRANULARITY			GENMASK(8, 4)
+/* Compute the value to use in the AGGR_GRANULARITY field representing
+ * the given number of microseconds (up to 1 millisecond).
+ *	x = (32 * usec) / 1000 - 1
+ */
+static inline u32 ipa_aggr_granularity_val(u32 microseconds)
+{
+	/* assert(microseconds >= 16); (?) */
+	/* assert(microseconds <= 1015); */
+
+	return DIV_ROUND_CLOSEST(32 * microseconds, 1000) - 1;
+}
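For example, a granularity of 500 microseconds (an arbitrary value used only
for illustration) encodes as (32 * 500) / 1000 - 1 = 15:

	u32 val = ipa_aggr_granularity_val(500);	/* yields 15 */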
+
+#define IPA_REG_TX_CFG_OFFSET				0x000001fc
+/* The first three fields are present for IPA v3.5.1 only */
+#define TX0_PREFETCH_DISABLE			GENMASK(0, 0)
+#define TX1_PREFETCH_DISABLE			GENMASK(1, 1)
+#define PREFETCH_ALMOST_EMPTY_SIZE		GENMASK(4, 2)
+/* The next fields are present for IPA v4.0 and above */
+#define PREFETCH_ALMOST_EMPTY_SIZE_TX0		GENMASK(5, 2)
+#define DMAW_SCND_OUTSD_PRED_THRESHOLD		GENMASK(9, 6)
+#define DMAW_SCND_OUTSD_PRED_EN			GENMASK(10, 10)
+#define DMAW_MAX_BEATS_256_DIS			GENMASK(11, 11)
+#define PA_MASK_EN				GENMASK(12, 12)
+#define PREFETCH_ALMOST_EMPTY_SIZE_TX1		GENMASK(16, 13)
+/* The last two fields are present for IPA v4.2 and above */
+#define SSPND_PA_NO_START_STATE			GENMASK(18, 18)
+#define SSPND_PA_NO_BQ_STATE			GENMASK(19, 19)
+
+#define IPA_REG_FLAVOR_0_OFFSET				0x00000210
+#define BAM_MAX_PIPES_FMASK			GENMASK(4, 0)
+#define BAM_MAX_CONS_PIPES_FMASK		GENMASK(12, 8)
+#define BAM_MAX_PROD_PIPES_FMASK		GENMASK(20, 16)
+#define BAM_PROD_LOWEST_FMASK			GENMASK(27, 24)
+
+static inline u32 ipa_reg_idle_indication_cfg_offset(enum ipa_version version)
+{
+	if (version == IPA_VERSION_4_2)
+		return 0x00000240;
+
+	return 0x00000220;
+}
+
+#define ENTER_IDLE_DEBOUNCE_THRESH_FMASK	GENMASK(15, 0)
+#define CONST_NON_IDLE_ENABLE_FMASK		GENMASK(16, 16)
+
+#define IPA_REG_SRC_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(rt) \
+					(0x00000400 + 0x0020 * (rt))
+#define IPA_REG_SRC_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(rt) \
+					(0x00000404 + 0x0020 * (rt))
+#define IPA_REG_SRC_RSRC_GRP_45_RSRC_TYPE_N_OFFSET(rt) \
+					(0x00000408 + 0x0020 * (rt))
+#define IPA_REG_DST_RSRC_GRP_01_RSRC_TYPE_N_OFFSET(rt) \
+					(0x00000500 + 0x0020 * (rt))
+#define IPA_REG_DST_RSRC_GRP_23_RSRC_TYPE_N_OFFSET(rt) \
+					(0x00000504 + 0x0020 * (rt))
+#define IPA_REG_DST_RSRC_GRP_45_RSRC_TYPE_N_OFFSET(rt) \
+					(0x00000508 + 0x0020 * (rt))
+#define X_MIN_LIM_FMASK				GENMASK(5, 0)
+#define X_MAX_LIM_FMASK				GENMASK(13, 8)
+#define Y_MIN_LIM_FMASK				GENMASK(21, 16)
+#define Y_MAX_LIM_FMASK				GENMASK(29, 24)
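A sketch of how a pair of group limits might be packed into one of these
registers, in the spirit of the ipa_resource_config_*() helpers earlier in
this patch (the xlimits, ylimits, and offset variables are assumed here, not
taken from the series):

	/* Hypothetical: encode limits for two resource groups into one register */
	u32 val;

	val = u32_encode_bits(xlimits->min, X_MIN_LIM_FMASK);
	val |= u32_encode_bits(xlimits->max, X_MAX_LIM_FMASK);
	val |= u32_encode_bits(ylimits->min, Y_MIN_LIM_FMASK);
	val |= u32_encode_bits(ylimits->max, Y_MAX_LIM_FMASK);

	iowrite32(val, ipa->reg_virt + offset);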
+
+#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \
+					(0x00000800 + 0x0070 * (ep))
+#define ENDP_SUSPEND_FMASK			GENMASK(0, 0)
+#define ENDP_DELAY_FMASK			GENMASK(1, 1)
+
+#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \
+					(0x00000808 + 0x0070 * (ep))
+#define FRAG_OFFLOAD_EN_FMASK			GENMASK(0, 0)
+#define CS_OFFLOAD_EN_FMASK			GENMASK(2, 1)
+#define CS_METADATA_HDR_OFFSET_FMASK		GENMASK(6, 3)
+#define CS_GEN_QMB_MASTER_SEL_FMASK		GENMASK(8, 8)
+
+#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \
+					(0x00000810 + 0x0070 * (ep))
+#define HDR_LEN_FMASK				GENMASK(5, 0)
+#define HDR_OFST_METADATA_VALID_FMASK		GENMASK(6, 6)
+#define HDR_OFST_METADATA_FMASK			GENMASK(12, 7)
+#define HDR_ADDITIONAL_CONST_LEN_FMASK		GENMASK(18, 13)
+#define HDR_OFST_PKT_SIZE_VALID_FMASK		GENMASK(19, 19)
+#define HDR_OFST_PKT_SIZE_FMASK			GENMASK(25, 20)
+#define HDR_A5_MUX_FMASK			GENMASK(26, 26)
+#define HDR_LEN_INC_DEAGG_HDR_FMASK		GENMASK(27, 27)
+#define HDR_METADATA_REG_VALID_FMASK		GENMASK(28, 28)
+
+#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \
+					(0x00000814 + 0x0070 * (ep))
+#define HDR_ENDIANNESS_FMASK			GENMASK(0, 0)
+#define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK	GENMASK(1, 1)
+#define HDR_TOTAL_LEN_OR_PAD_FMASK		GENMASK(2, 2)
+#define HDR_PAYLOAD_LEN_INC_PADDING_FMASK	GENMASK(3, 3)
+#define HDR_TOTAL_LEN_OR_PAD_OFFSET_FMASK	GENMASK(9, 4)
+#define HDR_PAD_TO_ALIGNMENT_FMASK		GENMASK(13, 10)
+
+#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(ep) \
+					(0x00000818 + 0x0070 * (ep))
+
+#define IPA_REG_ENDP_INIT_MODE_N_OFFSET(ep) \
+					(0x00000820 + 0x0070 * (ep))
+#define MODE_FMASK				GENMASK(2, 0)
+#define DEST_PIPE_INDEX_FMASK			GENMASK(8, 4)
+#define BYTE_THRESHOLD_FMASK			GENMASK(27, 12)
+#define PIPE_REPLICATION_EN_FMASK		GENMASK(28, 28)
+#define PAD_EN_FMASK				GENMASK(29, 29)
+#define HDR_FTCH_DISABLE_FMASK			GENMASK(30, 30)
+
+#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \
+					(0x00000824 +  0x0070 * (ep))
+#define AGGR_EN_FMASK				GENMASK(1, 0)
+#define AGGR_TYPE_FMASK				GENMASK(4, 2)
+#define AGGR_BYTE_LIMIT_FMASK			GENMASK(9, 5)
+#define AGGR_TIME_LIMIT_FMASK			GENMASK(14, 10)
+#define AGGR_PKT_LIMIT_FMASK			GENMASK(20, 15)
+#define AGGR_SW_EOF_ACTIVE_FMASK		GENMASK(21, 21)
+#define AGGR_FORCE_CLOSE_FMASK			GENMASK(22, 22)
+#define AGGR_HARD_BYTE_LIMIT_ENABLE_FMASK	GENMASK(24, 24)
+
+#define IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(ep) \
+					(0x0000082c +  0x0070 * (ep))
+#define HOL_BLOCK_EN_FMASK			GENMASK(0, 0)
+
+/* The next register is valid only for RX (IPA producer) endpoints */
+#define IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(ep) \
+					(0x00000830 +  0x0070 * (ep))
+/* The next fields are present for IPA v4.2 only */
+#define BASE_VALUE_FMASK			GENMASK(4, 0)
+#define SCALE_FMASK				GENMASK(12, 8)
+
+#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(ep) \
+					(0x00000834 + 0x0070 * (ep))
+#define DEAGGR_HDR_LEN_FMASK			GENMASK(5, 0)
+#define PACKET_OFFSET_VALID_FMASK		GENMASK(7, 7)
+#define PACKET_OFFSET_LOCATION_FMASK		GENMASK(13, 8)
+#define MAX_PACKET_LEN_FMASK			GENMASK(31, 16)
+
+#define IPA_REG_ENDP_INIT_RSRC_GRP_N_OFFSET(ep) \
+					(0x00000838 + 0x0070 * (ep))
+#define RSRC_GRP_FMASK				GENMASK(1, 0)
+
+#define IPA_REG_ENDP_INIT_SEQ_N_OFFSET(ep) \
+					(0x0000083c + 0x0070 * (ep))
+#define HPS_SEQ_TYPE_FMASK			GENMASK(3, 0)
+#define DPS_SEQ_TYPE_FMASK			GENMASK(7, 4)
+#define HPS_REP_SEQ_TYPE_FMASK			GENMASK(11, 8)
+#define DPS_REP_SEQ_TYPE_FMASK			GENMASK(15, 12)
+
+#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \
+					(0x00000840 + 0x0070 * (ep))
+#define STATUS_EN_FMASK				GENMASK(0, 0)
+#define STATUS_ENDP_FMASK			GENMASK(5, 1)
+#define STATUS_LOCATION_FMASK			GENMASK(8, 8)
+/* The next field is present for IPA v4.0 and above */
+#define STATUS_PKT_SUPPRESS_FMASK		GENMASK(9, 9)
+
+/* "er" is either an endpoint id (for filters) or a route id (for routes) */
+#define IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(er) \
+					(0x0000085c + 0x0070 * (er))
+#define FILTER_HASH_MSK_SRC_ID_FMASK		GENMASK(0, 0)
+#define FILTER_HASH_MSK_SRC_IP_FMASK		GENMASK(1, 1)
+#define FILTER_HASH_MSK_DST_IP_FMASK		GENMASK(2, 2)
+#define FILTER_HASH_MSK_SRC_PORT_FMASK		GENMASK(3, 3)
+#define FILTER_HASH_MSK_DST_PORT_FMASK		GENMASK(4, 4)
+#define FILTER_HASH_MSK_PROTOCOL_FMASK		GENMASK(5, 5)
+#define FILTER_HASH_MSK_METADATA_FMASK		GENMASK(6, 6)
+#define IPA_REG_ENDP_FILTER_HASH_MSK_ALL	GENMASK(6, 0)
+
+#define ROUTER_HASH_MSK_SRC_ID_FMASK		GENMASK(16, 16)
+#define ROUTER_HASH_MSK_SRC_IP_FMASK		GENMASK(17, 17)
+#define ROUTER_HASH_MSK_DST_IP_FMASK		GENMASK(18, 18)
+#define ROUTER_HASH_MSK_SRC_PORT_FMASK		GENMASK(19, 19)
+#define ROUTER_HASH_MSK_DST_PORT_FMASK		GENMASK(20, 20)
+#define ROUTER_HASH_MSK_PROTOCOL_FMASK		GENMASK(21, 21)
+#define ROUTER_HASH_MSK_METADATA_FMASK		GENMASK(22, 22)
+#define IPA_REG_ENDP_ROUTER_HASH_MSK_ALL	GENMASK(22, 16)
+
+#define IPA_REG_IRQ_STTS_OFFSET	\
+				IPA_REG_IRQ_STTS_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_STTS_EE_N_OFFSET(ee) \
+					(0x00003008 + 0x1000 * (ee))
+
+#define IPA_REG_IRQ_EN_OFFSET \
+				IPA_REG_IRQ_EN_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_EN_EE_N_OFFSET(ee) \
+					(0x0000300c + 0x1000 * (ee))
+
+#define IPA_REG_IRQ_CLR_OFFSET \
+				IPA_REG_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_CLR_EE_N_OFFSET(ee) \
+					(0x00003010 + 0x1000 * (ee))
+
+#define IPA_REG_IRQ_UC_OFFSET \
+				IPA_REG_IRQ_UC_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_UC_EE_N_OFFSET(ee) \
+					(0x0000301c + 0x1000 * (ee))
+
+#define IPA_REG_IRQ_SUSPEND_INFO_OFFSET \
+				IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_IRQ_SUSPEND_INFO_EE_N_OFFSET(ee) \
+					(0x00003030 + 0x1000 * (ee))
+/* ipa->available defines the valid bits in the SUSPEND_INFO register */
+
+#define IPA_REG_SUSPEND_IRQ_EN_OFFSET \
+				IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_SUSPEND_IRQ_EN_EE_N_OFFSET(ee) \
+					(0x00003034 + 0x1000 * (ee))
+/* ipa->available defines the valid bits in the SUSPEND_IRQ_EN register */
+
+#define IPA_REG_SUSPEND_IRQ_CLR_OFFSET \
+				IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(GSI_EE_AP)
+#define IPA_REG_SUSPEND_IRQ_CLR_EE_N_OFFSET(ee) \
+					(0x00003038 + 0x1000 * (ee))
+/* ipa->available defines the valid bits in the SUSPEND_IRQ_CLR register */
+
+/** enum ipa_cs_offload_en - checksum offload field in ENDP_INIT_CFG_N */
+enum ipa_cs_offload_en {
+	IPA_CS_OFFLOAD_NONE	= 0,
+	IPA_CS_OFFLOAD_UL	= 1,
+	IPA_CS_OFFLOAD_DL	= 2,
+	IPA_CS_RSVD
+};
+
+/** enum ipa_aggr_en - aggregation type field in ENDP_INIT_AGGR_N */
+enum ipa_aggr_en {
+	IPA_BYPASS_AGGR		= 0,
+	IPA_ENABLE_AGGR		= 1,
+	IPA_ENABLE_DEAGGR	= 2,
+};
+
+/** enum ipa_aggr_type - aggregation type field in ENDP_INIT_AGGR_N */
+enum ipa_aggr_type {
+	IPA_MBIM_16	= 0,
+	IPA_HDLC	= 1,
+	IPA_TLP		= 2,
+	IPA_RNDIS	= 3,
+	IPA_GENERIC	= 4,
+	IPA_COALESCE	= 5,
+	IPA_QCMAP	= 6,
+};
+
+/** enum ipa_mode - mode field in ENDP_INIT_MODE_N */
+enum ipa_mode {
+	IPA_BASIC			= 0,
+	IPA_ENABLE_FRAMING_HDLC		= 1,
+	IPA_ENABLE_DEFRAMING_HDLC	= 2,
+	IPA_DMA				= 3,
+};
+
+/**
+ * enum ipa_seq_type - HPS and DPS sequencer type fields in ENDP_INIT_SEQ_N
+ * @IPA_SEQ_DMA_ONLY:		only DMA is performed
+ * @IPA_SEQ_PKT_PROCESS_NO_DEC_UCP:
+ *	packet processing + no decipher + microcontroller (Ethernet Bridging)
+ * @IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP:
+ *	second packet processing pass + no decipher + microcontroller
+ * @IPA_SEQ_DMA_DEC:		DMA + cipher/decipher
+ * @IPA_SEQ_DMA_COMP_DECOMP:	DMA + compression/decompression
+ * @IPA_SEQ_INVALID:		invalid sequencer type
+ *
+ * The values defined here are broken into 4-bit nibbles that are written
+ * into fields of the INIT_SEQ_N endpoint registers.
+ */
+enum ipa_seq_type {
+	IPA_SEQ_DMA_ONLY			= 0x0000,
+	IPA_SEQ_PKT_PROCESS_NO_DEC_UCP		= 0x0002,
+	IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP	= 0x0004,
+	IPA_SEQ_DMA_DEC				= 0x0011,
+	IPA_SEQ_DMA_COMP_DECOMP			= 0x0020,
+	IPA_SEQ_PKT_PROCESS_NO_DEC_NO_UCP_DMAP	= 0x0806,
+	IPA_SEQ_INVALID				= 0xffff,
+};
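The nibble layout described above suggests an encoding along these lines;
this is only an illustration of the comment, not necessarily the exact code
used elsewhere in the series:

	/* Sketch: low nibble -> HPS field, next nibble -> DPS field */
	u32 val;

	val = u32_encode_bits(seq_type & 0xf, HPS_SEQ_TYPE_FMASK);
	val |= u32_encode_bits((seq_type >> 4) & 0xf, DPS_SEQ_TYPE_FMASK);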
+
+int ipa_reg_init(struct ipa *ipa);
+void ipa_reg_exit(struct ipa *ipa);
+
+#endif /* _IPA_REG_H_ */
diff --git a/drivers/net/ipa/ipa_version.h b/drivers/net/ipa/ipa_version.h
new file mode 100644
index 000000000000..85449df0f512
--- /dev/null
+++ b/drivers/net/ipa/ipa_version.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_VERSION_H_
+#define _IPA_VERSION_H_
+
+/**
+ * enum ipa_version
+ *
+ * Defines the version of IPA (and GSI) hardware present on the platform.
+ * It seems this might be better defined elsewhere, but having it here gets
+ * it where it's needed.
+ */
+enum ipa_version {
+	IPA_VERSION_3_5_1,	/* GSI version 1.3.0 */
+	IPA_VERSION_4_0,	/* GSI version 2.0 */
+	IPA_VERSION_4_1,	/* GSI version 2.1 */
+	IPA_VERSION_4_2,	/* GSI version 2.2 */
+};
+
+#endif /* _IPA_VERSION_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 04/17] soc: qcom: ipa: configuration data
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (2 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 03/17] soc: qcom: ipa: main code Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 05/17] soc: qcom: ipa: clocking, interrupts, and memory Alex Elder
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

This patch defines configuration data that is used to specify some
of the details of IPA hardware supported by the driver.  It is built
as Device Tree match data, discovered at boot time.  The driver
supports the Qualcomm SDM845 SoC.  Data for the Qualcomm SC7180 is
also defined here, but it is not yet completely supported.
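
For context, the per-SoC data defined below is hooked up through the
driver's Device Tree match table (shown here as a sketch; the real table
lives in the main code patch):

	static const struct of_device_id ipa_match[] = {
		{ .compatible = "qcom,sdm845-ipa", .data = &ipa_data_sdm845 },
		{ .compatible = "qcom,sc7180-ipa", .data = &ipa_data_sc7180 },
		{ },
	};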

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_data-sc7180.c | 307 ++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_data-sdm845.c | 329 ++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_data.h        | 280 +++++++++++++++++++++++++
 3 files changed, 916 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_data-sc7180.c
 create mode 100644 drivers/net/ipa/ipa_data-sdm845.c
 create mode 100644 drivers/net/ipa/ipa_data.h

diff --git a/drivers/net/ipa/ipa_data-sc7180.c b/drivers/net/ipa/ipa_data-sc7180.c
new file mode 100644
index 000000000000..042b5fc3c135
--- /dev/null
+++ b/drivers/net/ipa/ipa_data-sc7180.c
@@ -0,0 +1,307 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (C) 2019-2020 Linaro Ltd. */
+
+#include <linux/log2.h>
+
+#include "gsi.h"
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+#include "ipa_mem.h"
+
+/* Endpoint configuration for the SC7180 SoC. */
+static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
+	[IPA_ENDPOINT_AP_COMMAND_TX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 1,
+		.endpoint_id	= 6,
+		.toward_ipa	= true,
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 20,
+		},
+		.endpoint = {
+			.seq_type	= IPA_SEQ_DMA_ONLY,
+			.config = {
+				.dma_mode	= true,
+				.dma_endpoint	= IPA_ENDPOINT_AP_LAN_RX,
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_LAN_RX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 2,
+		.endpoint_id	= 8,
+		.toward_ipa	= false,
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 6,
+		},
+		.endpoint = {
+			.seq_type	= IPA_SEQ_INVALID,
+			.config = {
+				.aggregation	= true,
+				.status_enable	= true,
+				.rx = {
+					.pad_align	= ilog2(sizeof(u32)),
+				},
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_MODEM_TX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 0,
+		.endpoint_id	= 1,
+		.toward_ipa	= true,
+		.channel = {
+			.tre_count	= 512,
+			.event_count	= 512,
+			.tlv_count	= 8,
+		},
+		.endpoint = {
+			.filter_support	= true,
+			.seq_type	=
+				IPA_SEQ_PKT_PROCESS_NO_DEC_NO_UCP_DMAP,
+			.config = {
+				.checksum	= true,
+				.qmap		= true,
+				.status_enable	= true,
+				.tx = {
+					.status_endpoint =
+						IPA_ENDPOINT_MODEM_AP_RX,
+				},
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_MODEM_RX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 3,
+		.endpoint_id	= 9,
+		.toward_ipa	= false,
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 6,
+		},
+		.endpoint = {
+			.seq_type	= IPA_SEQ_INVALID,
+			.config = {
+				.checksum	= true,
+				.qmap		= true,
+				.aggregation	= true,
+				.rx = {
+					.aggr_close_eof	= true,
+				},
+			},
+		},
+	},
+	[IPA_ENDPOINT_MODEM_COMMAND_TX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 1,
+		.endpoint_id	= 5,
+		.toward_ipa	= true,
+	},
+	[IPA_ENDPOINT_MODEM_LAN_RX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 3,
+		.endpoint_id	= 13,
+		.toward_ipa	= false,
+	},
+	[IPA_ENDPOINT_MODEM_AP_TX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 0,
+		.endpoint_id	= 4,
+		.toward_ipa	= true,
+		.endpoint = {
+			.filter_support	= true,
+		},
+	},
+	[IPA_ENDPOINT_MODEM_AP_RX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 2,
+		.endpoint_id	= 10,
+		.toward_ipa	= false,
+	},
+};
+
+/* For the SC7180, resource groups are allocated this way:
+ *   group 0:	UL_DL
+ */
+static const struct ipa_resource_src ipa_resource_src[] = {
+	{
+		.type = IPA_RESOURCE_TYPE_SRC_PKT_CONTEXTS,
+		.limits[0] = {
+			.min = 3,
+			.max = 63,
+		},
+	},
+	{
+		.type = IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_LISTS,
+		.limits[0] = {
+			.min = 3,
+			.max = 3,
+		},
+	},
+	{
+		.type = IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_BUFF,
+		.limits[0] = {
+			.min = 10,
+			.max = 10,
+		},
+	},
+	{
+		.type = IPA_RESOURCE_TYPE_SRC_HPS_DMARS,
+		.limits[0] = {
+			.min = 1,
+			.max = 1,
+		},
+	},
+	{
+		.type = IPA_RESOURCE_TYPE_SRC_ACK_ENTRIES,
+		.limits[0] = {
+			.min = 5,
+			.max = 5,
+		},
+	},
+};
+
+static const struct ipa_resource_dst ipa_resource_dst[] = {
+	{
+		.type = IPA_RESOURCE_TYPE_DST_DATA_SECTORS,
+		.limits[0] = {
+			.min = 3,
+			.max = 3,
+		},
+	},
+	{
+		.type = IPA_RESOURCE_TYPE_DST_DPS_DMARS,
+		.limits[0] = {
+			.min = 1,
+			.max = 63,
+		},
+	},
+};
+
+/* Resource configuration for the SC7180 SoC. */
+static const struct ipa_resource_data ipa_resource_data = {
+	.resource_src_count	= ARRAY_SIZE(ipa_resource_src),
+	.resource_src		= ipa_resource_src,
+	.resource_dst_count	= ARRAY_SIZE(ipa_resource_dst),
+	.resource_dst		= ipa_resource_dst,
+};
+
+/* IPA-resident memory region configuration for the SC7180 SoC. */
+static const struct ipa_mem ipa_mem_data[] = {
+	[IPA_MEM_UC_SHARED] = {
+		.offset		= 0x0000,
+		.size		= 0x0080,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_UC_INFO] = {
+		.offset		= 0x0080,
+		.size		= 0x0200,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V4_FILTER_HASHED] = {
+		.offset		= 0x0288,
+		.size		= 0,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V4_FILTER] = {
+		.offset		= 0x0290,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_FILTER_HASHED] = {
+		.offset		= 0x0310,
+		.size		= 0,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_FILTER] = {
+		.offset		= 0x0318,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V4_ROUTE_HASHED] = {
+		.offset		= 0x0398,
+		.size		= 0,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V4_ROUTE] = {
+		.offset		= 0x03a0,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_ROUTE_HASHED] = {
+		.offset		= 0x0420,
+		.size		= 0,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_ROUTE] = {
+		.offset		= 0x0428,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_MODEM_HEADER] = {
+		.offset		= 0x04a8,
+		.size		= 0x0140,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_AP_HEADER] = {
+		.offset		= 0x05e8,
+		.size		= 0x0000,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_MODEM_PROC_CTX] = {
+		.offset		= 0x05f0,
+		.size		= 0x0200,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_AP_PROC_CTX] = {
+		.offset		= 0x07f0,
+		.size		= 0x0200,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_PDN_CONFIG] = {
+		.offset		= 0x09f8,
+		.size		= 0x0050,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_STATS_QUOTA] = {
+		.offset		= 0x0a50,
+		.size		= 0x0060,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_STATS_TETHERING] = {
+		.offset		= 0x0ab0,
+		.size		= 0x0140,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_STATS_DROP] = {
+		.offset		= 0x0bf0,
+		.size		= 0,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_MODEM] = {
+		.offset		= 0x0bf0,
+		.size		= 0x140c,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_UC_EVENT_RING] = {
+		.offset		= 0x2000,
+		.size		= 0,
+		.canary_count	= 1,
+	},
+};
+
+/* Configuration data for the SC7180 SoC. */
+const struct ipa_data ipa_data_sc7180 = {
+	.version	= IPA_VERSION_4_2,
+	.endpoint_count	= ARRAY_SIZE(ipa_gsi_endpoint_data),
+	.endpoint_data	= ipa_gsi_endpoint_data,
+	.resource_data	= &ipa_resource_data,
+	.mem_count	= ARRAY_SIZE(ipa_mem_data),
+	.mem_data	= ipa_mem_data,
+};
diff --git a/drivers/net/ipa/ipa_data-sdm845.c b/drivers/net/ipa/ipa_data-sdm845.c
new file mode 100644
index 000000000000..0d9c36e1e806
--- /dev/null
+++ b/drivers/net/ipa/ipa_data-sdm845.c
@@ -0,0 +1,329 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/log2.h>
+
+#include "gsi.h"
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+#include "ipa_mem.h"
+
+/* Endpoint configuration for the SDM845 SoC. */
+static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
+	[IPA_ENDPOINT_AP_COMMAND_TX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 4,
+		.endpoint_id	= 5,
+		.toward_ipa	= true,
+		.channel = {
+			.tre_count	= 512,
+			.event_count	= 256,
+			.tlv_count	= 20,
+		},
+		.endpoint = {
+			.seq_type	= IPA_SEQ_DMA_ONLY,
+			.config = {
+				.dma_mode	= true,
+				.dma_endpoint	= IPA_ENDPOINT_AP_LAN_RX,
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_LAN_RX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 5,
+		.endpoint_id	= 9,
+		.toward_ipa	= false,
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 8,
+		},
+		.endpoint = {
+			.seq_type	= IPA_SEQ_INVALID,
+			.config = {
+				.checksum	= true,
+				.aggregation	= true,
+				.status_enable	= true,
+				.rx = {
+					.pad_align	= ilog2(sizeof(u32)),
+				},
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_MODEM_TX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 3,
+		.endpoint_id	= 2,
+		.toward_ipa	= true,
+		.channel = {
+			.tre_count	= 512,
+			.event_count	= 512,
+			.tlv_count	= 16,
+		},
+		.endpoint = {
+			.filter_support	= true,
+			.seq_type	=
+				IPA_SEQ_2ND_PKT_PROCESS_PASS_NO_DEC_UCP,
+			.config = {
+				.checksum	= true,
+				.qmap		= true,
+				.status_enable	= true,
+				.tx = {
+					.status_endpoint =
+						IPA_ENDPOINT_MODEM_AP_RX,
+					.delay	= true,
+				},
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_MODEM_RX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 6,
+		.endpoint_id	= 10,
+		.toward_ipa	= false,
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 8,
+		},
+		.endpoint = {
+			.seq_type	= IPA_SEQ_INVALID,
+			.config = {
+				.checksum	= true,
+				.qmap		= true,
+				.aggregation	= true,
+				.rx = {
+					.aggr_close_eof	= true,
+				},
+			},
+		},
+	},
+	[IPA_ENDPOINT_MODEM_COMMAND_TX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 1,
+		.endpoint_id	= 4,
+		.toward_ipa	= true,
+	},
+	[IPA_ENDPOINT_MODEM_LAN_TX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 0,
+		.endpoint_id	= 3,
+		.toward_ipa	= true,
+		.endpoint = {
+			.filter_support	= true,
+		},
+	},
+	[IPA_ENDPOINT_MODEM_LAN_RX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 3,
+		.endpoint_id	= 13,
+		.toward_ipa	= false,
+	},
+	[IPA_ENDPOINT_MODEM_AP_TX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 4,
+		.endpoint_id	= 6,
+		.toward_ipa	= true,
+		.endpoint = {
+			.filter_support	= true,
+		},
+	},
+	[IPA_ENDPOINT_MODEM_AP_RX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 2,
+		.endpoint_id	= 12,
+		.toward_ipa	= false,
+	},
+};
+
+/* For the SDM845, resource groups are allocated this way:
+ *   group 0:	LWA_DL
+ *   group 1:	UL_DL
+ */
+static const struct ipa_resource_src ipa_resource_src[] = {
+	{
+		.type = IPA_RESOURCE_TYPE_SRC_PKT_CONTEXTS,
+		.limits[0] = {
+			.min = 1,
+			.max = 63,
+		},
+		.limits[1] = {
+			.min = 1,
+			.max = 63,
+		},
+	},
+	{
+		.type = IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_LISTS,
+		.limits[0] = {
+			.min = 10,
+			.max = 10,
+		},
+		.limits[1] = {
+			.min = 10,
+			.max = 10,
+		},
+	},
+	{
+		.type = IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_BUFF,
+		.limits[0] = {
+			.min = 12,
+			.max = 12,
+		},
+		.limits[1] = {
+			.min = 14,
+			.max = 14,
+		},
+	},
+	{
+		.type = IPA_RESOURCE_TYPE_SRC_HPS_DMARS,
+		.limits[0] = {
+			.min = 0,
+			.max = 63,
+		},
+		.limits[1] = {
+			.min = 0,
+			.max = 63,
+		},
+	},
+	{
+		.type = IPA_RESOURCE_TYPE_SRC_ACK_ENTRIES,
+		.limits[0] = {
+			.min = 14,
+			.max = 14,
+		},
+		.limits[1] = {
+			.min = 20,
+			.max = 20,
+		},
+	},
+};
+
+static const struct ipa_resource_dst ipa_resource_dst[] = {
+	{
+		.type = IPA_RESOURCE_TYPE_DST_DATA_SECTORS,
+		.limits[0] = {
+			.min = 4,
+			.max = 4,
+		},
+		.limits[1] = {
+			.min = 4,
+			.max = 4,
+		},
+	},
+	{
+		.type = IPA_RESOURCE_TYPE_DST_DPS_DMARS,
+		.limits[0] = {
+			.min = 2,
+			.max = 63,
+		},
+		.limits[1] = {
+			.min = 1,
+			.max = 63,
+		},
+	},
+};
+
+/* Resource configuration for the SDM845 SoC. */
+static const struct ipa_resource_data ipa_resource_data = {
+	.resource_src_count	= ARRAY_SIZE(ipa_resource_src),
+	.resource_src		= ipa_resource_src,
+	.resource_dst_count	= ARRAY_SIZE(ipa_resource_dst),
+	.resource_dst		= ipa_resource_dst,
+};
+
+/* IPA-resident memory region configuration for the SDM845 SoC. */
+static const struct ipa_mem ipa_mem_data[] = {
+	[IPA_MEM_UC_SHARED] = {
+		.offset		= 0x0000,
+		.size		= 0x0080,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_UC_INFO] = {
+		.offset		= 0x0080,
+		.size		= 0x0200,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_V4_FILTER_HASHED] = {
+		.offset		= 0x0288,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V4_FILTER] = {
+		.offset		= 0x0308,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_FILTER_HASHED] = {
+		.offset		= 0x0388,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_FILTER] = {
+		.offset		= 0x0408,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V4_ROUTE_HASHED] = {
+		.offset		= 0x0488,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V4_ROUTE] = {
+		.offset		= 0x0508,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_ROUTE_HASHED] = {
+		.offset		= 0x0588,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_ROUTE] = {
+		.offset		= 0x0608,
+		.size		= 0x0078,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_MODEM_HEADER] = {
+		.offset		= 0x0688,
+		.size		= 0x0140,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_AP_HEADER] = {
+		.offset		= 0x07c8,
+		.size		= 0x0000,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_MODEM_PROC_CTX] = {
+		.offset		= 0x07d0,
+		.size		= 0x0200,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_AP_PROC_CTX] = {
+		.offset		= 0x09d0,
+		.size		= 0x0200,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_MODEM] = {
+		.offset		= 0x0bd8,
+		.size		= 0x1024,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_UC_EVENT_RING] = {
+		.offset		= 0x1c00,
+		.size		= 0x0400,
+		.canary_count	= 1,
+	},
+};
+
+/* Configuration data for the SDM845 SoC. */
+const struct ipa_data ipa_data_sdm845 = {
+	.version	= IPA_VERSION_3_5_1,
+	.endpoint_count	= ARRAY_SIZE(ipa_gsi_endpoint_data),
+	.endpoint_data	= ipa_gsi_endpoint_data,
+	.resource_data	= &ipa_resource_data,
+	.mem_count	= ARRAY_SIZE(ipa_mem_data),
+	.mem_data	= ipa_mem_data,
+};
diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
new file mode 100644
index 000000000000..7110de2de817
--- /dev/null
+++ b/drivers/net/ipa/ipa_data.h
@@ -0,0 +1,280 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_DATA_H_
+#define _IPA_DATA_H_
+
+#include <linux/types.h>
+
+#include "ipa_version.h"
+#include "ipa_endpoint.h"
+#include "ipa_mem.h"
+
+/**
+ * DOC: IPA/GSI Configuration Data
+ *
+ * Boot-time configuration data is used to define the configuration of the
+ * IPA and GSI resources to use for a given platform.  This data is supplied
+ * via the Device Tree match table, associated with a particular compatible
+ * string.  The data defines information about resources, endpoints, and
+ * channels.
+ *
+ * Resources are data structures used internally by the IPA hardware.  The
+ * configuration data defines the number (or limits of the number) of various
+ * types of these resources.
+ *
+ * Endpoint configuration data defines properties of both IPA endpoints and
+ * GSI channels.  A channel is a GSI construct, and represents a single
+ * communication path between the IPA and a particular execution environment
+ * (EE), such as the AP or Modem.  Each EE has a set of channels associated
+ * with it, and each channel has an ID unique for that EE.  For the most part
+ * the only GSI channels of concern to this driver belong to the AP.
+ *
+ * An endpoint is an IPA construct representing a single channel anywhere
+ * in the system.  An IPA endpoint ID maps directly to an (EE, channel_id)
+ * pair.  Generally, this driver is concerned with only endpoints associated
+ * with the AP, however this will change when support for routing (etc.) is
+ * added.  IPA endpoint and GSI channel configuration data are defined
+ * together, establishing the endpoint_id->(EE, channel_id) mapping.
+ *
+ * Endpoint configuration data consists of three parts:  properties that
+ * are common to IPA and GSI (EE ID, channel ID, endpoint ID, and direction);
+ * properties associated with the GSI channel; and properties associated with
+ * the IPA endpoint.
+ */
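As a sketch of how this configuration data is consumed (using only names
defined in this series; the loop itself is illustrative), the driver receives
the per-SoC data as Device Tree match data and walks the endpoint array:

	const struct ipa_data *data = of_device_get_match_data(dev);
	u32 i;

	for (i = 0; i < data->endpoint_count; i++) {
		const struct ipa_gsi_endpoint_data *ep = &data->endpoint_data[i];

		/* ep->ee_id and ep->channel_id name the GSI channel;
		 * ep->endpoint_id is the IPA endpoint it maps onto.
		 */
	}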
+
+/* The maximum value returned by ipa_resource_group_count() */
+#define IPA_RESOURCE_GROUP_COUNT	4
+
+/**
+ * struct gsi_channel_data - GSI channel configuration data
+ * @tre_count:		number of TREs in the channel ring
+ * @event_count:	number of slots in the associated event ring
+ * @tlv_count:		number of entries in channel's TLV FIFO
+ *
+ * A GSI channel is a unidirectional means of transferring data to or
+ * from (and through) the IPA.  A GSI channel has a ring buffer made
+ * up of "transfer elements" (TREs) that specify individual data transfers
+ * or IPA immediate commands.  TREs are filled by the AP, and control
+ * is passed to IPA hardware by writing the last written element
+ * into a doorbell register.
+ *
+ * When data transfer commands have completed the GSI generates an
+ * event (a structure of data) and optionally signals the AP with
+ * an interrupt.  Event structures are implemented by another ring
+ * buffer, directed toward the AP from the IPA.
+ *
+ * The input to a GSI channel is a FIFO of type/length/value (TLV)
+ * elements, and the size of this FIFO limits the number of TREs
+ * that can be included in a single transaction.
+ */
+struct gsi_channel_data {
+	u16 tre_count;
+	u16 event_count;
+	u8 tlv_count;
+};
+
+/**
+ * struct ipa_endpoint_tx_data - configuration data for TX endpoints
+ * @status_endpoint:	endpoint to which status elements are sent
+ * @delay:		whether endpoint starts in delay mode
+ *
+ * Delay mode prevents a TX endpoint from transmitting anything, even if
+ * commands have been presented to the hardware.  Once the endpoint exits
+ * delay mode, queued transfer commands are sent.
+ *
+ * The @status_endpoint is only valid if the endpoint's @status_enable
+ * flag is set.
+ */
+struct ipa_endpoint_tx_data {
+	enum ipa_endpoint_name status_endpoint;
+	bool delay;
+};
+
+/**
+ * struct ipa_endpoint_rx_data - configuration data for RX endpoints
+ * @pad_align:	power-of-2 boundary to which packet payload is aligned
+ * @aggr_close_eof: whether aggregation closes on end-of-frame
+ *
+ * With each packet it transfers, the IPA hardware can perform certain
+ * transformations of its packet data.  One of these is adding pad bytes
+ * to the end of the packet data so the result ends on a power-of-2 boundary.
+ *
+ * It is also able to aggregate multiple packets into a single receive buffer.
+ * Aggregation is "open" while a buffer is being filled, and "closes" when
+ * certain criteria are met.  One of those criteria is the sender indicating
+ * a "frame" consisting of several transfers has ended.
+ */
+struct ipa_endpoint_rx_data {
+	u32 pad_align;
+	bool aggr_close_eof;
+};
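As a concrete instance (taken from the per-SoC data earlier in this patch),
the AP LAN RX endpoints pad received payloads to a 4-byte boundary:

	.rx = {
		.pad_align	= ilog2(sizeof(u32)),	/* align to 4 bytes */
	},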
+
+/**
+ * struct ipa_endpoint_config_data - IPA endpoint hardware configuration
+ * @checksum:		whether checksum offload is enabled
+ * @qmap:		whether endpoint uses QMAP protocol
+ * @aggregation:	whether endpoint supports aggregation
+ * @status_enable:	whether endpoint uses status elements
+ * @dma_mode:		whether endpoint operates in DMA mode
+ * @dma_endpoint:	peer endpoint, if operating in DMA mode
+ * @tx:			TX-specific endpoint information (see above)
+ * @rx:			RX-specific endpoint information (see above)
+ */
+struct ipa_endpoint_config_data {
+	bool checksum;
+	bool qmap;
+	bool aggregation;
+	bool status_enable;
+	bool dma_mode;
+	enum ipa_endpoint_name dma_endpoint;
+	union {
+		struct ipa_endpoint_tx_data tx;
+		struct ipa_endpoint_rx_data rx;
+	};
+};
+
+/**
+ * struct ipa_endpoint_data - IPA endpoint configuration data
+ * @filter_support:	whether endpoint supports filtering
+ * @seq_type:		hardware sequencer type used for endpoint
+ * @config:		hardware configuration (see above)
+ *
+ * Not all endpoints support the IPA filtering capability.  A filter table
+ * defines the filters to apply for those endpoints that support it.  The
+ * AP is responsible for initializing this table, and it must include entries
+ * for non-AP endpoints.  For this reason we define *all* endpoints used
+ * in the system, and indicate whether they support filtering.
+ *
+ * The remaining endpoint configuration data applies only to AP endpoints.
+ * The IPA hardware is implemented by sequencers, and the AP must program
+ * the type(s) of these sequencers at initialization time.  The remaining
+ * endpoint configuration data is defined above.
+ */
+struct ipa_endpoint_data {
+	bool filter_support;
+	/* The next two are specified only for AP endpoints */
+	enum ipa_seq_type seq_type;
+	struct ipa_endpoint_config_data config;
+};
+
+/**
+ * struct ipa_gsi_endpoint_data - GSI channel/IPA endpoint data
+ * @ee_id:	GSI execution environment ID
+ * @channel_id:	GSI channel ID
+ * @endpoint_id: IPA endpoint ID
+ * @toward_ipa:	direction of data transfer
+ * @channel:	GSI channel configuration data (see above)
+ * @endpoint:	IPA endpoint configuration data (see above)
+ */
+struct ipa_gsi_endpoint_data {
+	u8 ee_id;		/* enum gsi_ee_id */
+	u8 channel_id;
+	u8 endpoint_id;
+	bool toward_ipa;
+
+	struct gsi_channel_data channel;
+	struct ipa_endpoint_data endpoint;
+};
+
+/** enum ipa_resource_type_src - source resource types */
+enum ipa_resource_type_src {
+	IPA_RESOURCE_TYPE_SRC_PKT_CONTEXTS,
+	IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_LISTS,
+	IPA_RESOURCE_TYPE_SRC_DESCRIPTOR_BUFF,
+	IPA_RESOURCE_TYPE_SRC_HPS_DMARS,
+	IPA_RESOURCE_TYPE_SRC_ACK_ENTRIES,
+};
+
+/** enum ipa_resource_type_dst - destination resource types */
+enum ipa_resource_type_dst {
+	IPA_RESOURCE_TYPE_DST_DATA_SECTORS,
+	IPA_RESOURCE_TYPE_DST_DPS_DMARS,
+};
+
+/**
+ * struct ipa_resource_limits - minimum and maximum resource counts
+ * @min:	minimum number of resources of a given type
+ * @max:	maximum number of resources of a given type
+ */
+struct ipa_resource_limits {
+	u32 min;
+	u32 max;
+};
+
+/**
+ * struct ipa_resource_src - source endpoint group resource usage
+ * @type:	source group resource type
+ * @limits:	array of limits to use for each resource group
+ */
+struct ipa_resource_src {
+	enum ipa_resource_type_src type;
+	struct ipa_resource_limits limits[IPA_RESOURCE_GROUP_COUNT];
+};
+
+/**
+ * struct ipa_resource_dst - destination endpoint group resource usage
+ * @type:	destination group resource type
+ * @limits:	array of limits to use for each resource group
+ */
+struct ipa_resource_dst {
+	enum ipa_resource_type_dst type;
+	struct ipa_resource_limits limits[IPA_RESOURCE_GROUP_COUNT];
+};
+
+/**
+ * struct ipa_resource_data - IPA resource configuration data
+ * @resource_src_count:	number of entries in the resource_src array
+ * @resource_src:	source endpoint group resources
+ * @resource_dst_count:	number of entries in the resource_dst array
+ * @resource_dst:	destination endpoint group resources
+ *
+ * In order to manage quality of service between endpoints, certain resources
+ * required for operation are allocated to groups of endpoints.  Generally
+ * this information is invisible to the AP, but the AP is responsible for
+ * programming it at initialization time, so we specify it here.
+ */
+struct ipa_resource_data {
+	u32 resource_src_count;
+	const struct ipa_resource_src *resource_src;
+	u32 resource_dst_count;
+	const struct ipa_resource_dst *resource_dst;
+};
+
+/**
+ * struct ipa_mem_data - IPA-local memory region description
+ * @offset:		offset in IPA memory space to base of the region
+ * @size:		size in bytes of the region
+ * @canary_count:	number of 32-bit "canary" values that precede region
+ */
+struct ipa_mem_data {
+	u32 offset;
+	u16 size;
+	u16 canary_count;
+};
+
+/**
+ * struct ipa_data - combined IPA/GSI configuration data
+ * @version:		IPA hardware version
+ * @endpoint_count:	number of entries in endpoint_data array
+ * @endpoint_data:	IPA endpoint/GSI channel data
+ * @resource_data:	IPA resource configuration data
+ * @mem_count:		number of entries in mem_data array
+ * @mem_data:		IPA-local shared memory region data
+ */
+struct ipa_data {
+	enum ipa_version version;
+	u32 endpoint_count;	/* # entries in endpoint_data[] */
+	const struct ipa_gsi_endpoint_data *endpoint_data;
+	const struct ipa_resource_data *resource_data;
+	u32 mem_count;		/* # entries in mem_data[] */
+	const struct ipa_mem *mem_data;
+};
+
+extern const struct ipa_data ipa_data_sdm845;
+extern const struct ipa_data ipa_data_sc7180;
+
+#endif /* _IPA_DATA_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 05/17] soc: qcom: ipa: clocking, interrupts, and memory
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (3 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 04/17] soc: qcom: ipa: configuration data Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 06/17] soc: qcom: ipa: GSI headers Alex Elder
                   ` (14 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

This patch incorporates three source files (and their headers).  They're
grouped into one patch mainly for the purpose of making the number and
size of patches in this series somewhat reasonable.

  - "ipa_clock.c" and "ipa_clock.h" implement clocking for the IPA device.
    The IPA has a single core clock managed by the common clock framework.
    In addition, the IPA has three buses whose bandwidth is managed by the
    Linux interconnect framework.  At this time the core clock and all
    three buses are either on or off; we don't yet do any more fine-grained
    management than that.  The core clock and interconnects are enabled
    and disabled as a unit, using a unified clock-like abstraction,
    ipa_clock_get()/ipa_clock_put().  A brief usage sketch follows this
    list.

  - "ipa_interrupt.c" and "ipa_interrupt.h" implement IPA interrupts.
    There are two hardware IRQs used by the IPA driver (the other is
    the GSI interrupt, described in a separate patch).  Several types
    of interrupt are handled by the IPA IRQ handler; these are not part
    of data/fast path.

  - The IPA has a region of local memory that is accessible by the AP
    (and modem).  Within that region are areas with certain defined
    purposes.  "ipa_mem.c" and "ipa_mem.h" define those regions, and
    implement their initialization.
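
Usage sketch (the wrapper function below is hypothetical, not part of this
patch): any code that touches IPA hardware is expected to hold a clock
reference across the access, roughly like this:

	static void ipa_example_reg_read(struct ipa *ipa)
	{
		u32 val;

		ipa_clock_get(ipa);	/* reference held for register access */
		val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
		/* ... use val ... */
		ipa_clock_put(ipa);
	}

The register offset and structure fields are those used elsewhere in this
series; only the example function itself is made up for illustration.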

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_clock.c     | 313 +++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_clock.h     |  53 ++++++
 drivers/net/ipa/ipa_interrupt.c | 253 +++++++++++++++++++++++++
 drivers/net/ipa/ipa_interrupt.h | 117 ++++++++++++
 drivers/net/ipa/ipa_mem.c       | 314 ++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_mem.h       |  90 +++++++++
 6 files changed, 1140 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_clock.c
 create mode 100644 drivers/net/ipa/ipa_clock.h
 create mode 100644 drivers/net/ipa/ipa_interrupt.c
 create mode 100644 drivers/net/ipa/ipa_interrupt.h
 create mode 100644 drivers/net/ipa/ipa_mem.c
 create mode 100644 drivers/net/ipa/ipa_mem.h

diff --git a/drivers/net/ipa/ipa_clock.c b/drivers/net/ipa/ipa_clock.c
new file mode 100644
index 000000000000..374491ea11cf
--- /dev/null
+++ b/drivers/net/ipa/ipa_clock.c
@@ -0,0 +1,313 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+#include <linux/atomic.h>
+#include <linux/mutex.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/interconnect.h>
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_modem.h"
+
+/**
+ * DOC: IPA Clocking
+ *
+ * The "IPA Clock" manages both the IPA core clock and the interconnects
+ * (buses) the IPA depends on as a single logical entity.  A reference count
+ * is incremented by "get" operations and decremented by "put" operations.
+ * Transitions of that count from 0 to 1 result in the clock and interconnects
+ * being enabled, and transitions of the count from 1 to 0 cause them to be
+ * disabled.  We currently operate the core clock at a fixed clock rate, and
+ * all buses at a fixed average and peak bandwidth.  As more advanced IPA
+ * features are enabled, we can make better use of clock and bus scaling.
+ *
+ * An IPA clock reference must be held for any access to IPA hardware.
+ */
+
+#define	IPA_CORE_CLOCK_RATE		(75UL * 1000 * 1000)	/* Hz */
+
+/* Interconnect path bandwidths (each times 1000 bytes per second) */
+#define IPA_MEMORY_AVG			(80 * 1000)	/* 80 MBps */
+#define IPA_MEMORY_PEAK			(600 * 1000)
+
+#define IPA_IMEM_AVG			(80 * 1000)
+#define IPA_IMEM_PEAK			(350 * 1000)
+
+#define IPA_CONFIG_AVG			(40 * 1000)
+#define IPA_CONFIG_PEAK			(40 * 1000)
+
+/**
+ * struct ipa_clock - IPA clocking information
+ * @count:		Clocking reference count
+ * @mutex:		Protects clock enable/disable
+ * @core:		IPA core clock
+ * @memory_path:	Memory interconnect
+ * @imem_path:		Internal memory interconnect
+ * @config_path:	Configuration space interconnect
+ */
+struct ipa_clock {
+	atomic_t count;
+	struct mutex mutex; /* protects clock enable/disable */
+	struct clk *core;
+	struct icc_path *memory_path;
+	struct icc_path *imem_path;
+	struct icc_path *config_path;
+};
+
+static struct icc_path *
+ipa_interconnect_init_one(struct device *dev, const char *name)
+{
+	struct icc_path *path;
+
+	path = of_icc_get(dev, name);
+	if (IS_ERR(path))
+		dev_err(dev, "error %ld getting %s interconnect\n",
+			PTR_ERR(path), name);
+
+	return path;
+}
+
+/* Initialize interconnects required for IPA operation */
+static int ipa_interconnect_init(struct ipa_clock *clock, struct device *dev)
+{
+	struct icc_path *path;
+
+	path = ipa_interconnect_init_one(dev, "memory");
+	if (IS_ERR(path))
+		goto err_return;
+	clock->memory_path = path;
+
+	path = ipa_interconnect_init_one(dev, "imem");
+	if (IS_ERR(path))
+		goto err_memory_path_put;
+	clock->imem_path = path;
+
+	path = ipa_interconnect_init_one(dev, "config");
+	if (IS_ERR(path))
+		goto err_imem_path_put;
+	clock->config_path = path;
+
+	return 0;
+
+err_imem_path_put:
+	icc_put(clock->imem_path);
+err_memory_path_put:
+	icc_put(clock->memory_path);
+err_return:
+	return PTR_ERR(path);
+}
+
+/* Inverse of ipa_interconnect_init() */
+static void ipa_interconnect_exit(struct ipa_clock *clock)
+{
+	icc_put(clock->config_path);
+	icc_put(clock->imem_path);
+	icc_put(clock->memory_path);
+}
+
+/* Currently we only use one bandwidth level, so just "enable" interconnects */
+static int ipa_interconnect_enable(struct ipa *ipa)
+{
+	struct ipa_clock *clock = ipa->clock;
+	int ret;
+
+	ret = icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK);
+	if (ret)
+		return ret;
+
+	ret = icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK);
+	if (ret)
+		goto err_memory_path_disable;
+
+	ret = icc_set_bw(clock->config_path, IPA_CONFIG_AVG, IPA_CONFIG_PEAK);
+	if (ret)
+		goto err_imem_path_disable;
+
+	return 0;
+
+err_imem_path_disable:
+	(void)icc_set_bw(clock->imem_path, 0, 0);
+err_memory_path_disable:
+	(void)icc_set_bw(clock->memory_path, 0, 0);
+
+	return ret;
+}
+
+/* To disable an interconnect, we just set its bandwidth to 0 */
+static int ipa_interconnect_disable(struct ipa *ipa)
+{
+	struct ipa_clock *clock = ipa->clock;
+	int ret;
+
+	ret = icc_set_bw(clock->memory_path, 0, 0);
+	if (ret)
+		return ret;
+
+	ret = icc_set_bw(clock->imem_path, 0, 0);
+	if (ret)
+		goto err_memory_path_reenable;
+
+	ret = icc_set_bw(clock->config_path, 0, 0);
+	if (ret)
+		goto err_imem_path_reenable;
+
+	return 0;
+
+err_imem_path_reenable:
+	(void)icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK);
+err_memory_path_reenable:
+	(void)icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK);
+
+	return ret;
+}
+
+/* Turn on IPA clocks, including interconnects */
+static int ipa_clock_enable(struct ipa *ipa)
+{
+	int ret;
+
+	ret = ipa_interconnect_enable(ipa);
+	if (ret)
+		return ret;
+
+	ret = clk_prepare_enable(ipa->clock->core);
+	if (ret)
+		ipa_interconnect_disable(ipa);
+
+	return ret;
+}
+
+/* Inverse of ipa_clock_enable() */
+static void ipa_clock_disable(struct ipa *ipa)
+{
+	clk_disable_unprepare(ipa->clock->core);
+	(void)ipa_interconnect_disable(ipa);
+}
+
+/* Get an IPA clock reference, but only if the reference count is
+ * already non-zero.  Returns true if the additional reference was
+ * added successfully, or false otherwise.
+ */
+bool ipa_clock_get_additional(struct ipa *ipa)
+{
+	return !!atomic_inc_not_zero(&ipa->clock->count);
+}
+
+/* Get an IPA clock reference.  If the reference count is non-zero, it is
+ * incremented and return is immediate.  Otherwise it is checked again
+ * under protection of the mutex, and if appropriate the clock (and
+ * interconnects) are enabled and suspended endpoints (if any) are
+ * resumed before returning.
+ *
+ * Incrementing the reference count is intentionally deferred until
+ * after the clock is running and endpoints are resumed.
+ */
+void ipa_clock_get(struct ipa *ipa)
+{
+	struct ipa_clock *clock = ipa->clock;
+	int ret;
+
+	/* If the clock is running, just bump the reference count */
+	if (ipa_clock_get_additional(ipa))
+		return;
+
+	/* Otherwise get the mutex and check again */
+	mutex_lock(&clock->mutex);
+
+	/* A reference might have been added before we got the mutex. */
+	if (ipa_clock_get_additional(ipa))
+		goto out_mutex_unlock;
+
+	ret = ipa_clock_enable(ipa);
+	if (ret) {
+		dev_err(&ipa->pdev->dev, "error %d enabling IPA clock\n", ret);
+		goto out_mutex_unlock;
+	}
+
+	ipa_endpoint_resume(ipa);
+
+	atomic_inc(&clock->count);
+
+out_mutex_unlock:
+	mutex_unlock(&clock->mutex);
+}
+
+/* Attempt to remove an IPA clock reference.  If this represents the last
+ * reference, suspend endpoints and disable the clock (and interconnects)
+ * under protection of a mutex.
+ */
+void ipa_clock_put(struct ipa *ipa)
+{
+	struct ipa_clock *clock = ipa->clock;
+
+	/* If this is not the last reference there's nothing more to do */
+	if (!atomic_dec_and_mutex_lock(&clock->count, &clock->mutex))
+		return;
+
+	ipa_endpoint_suspend(ipa);
+
+	ipa_clock_disable(ipa);
+
+	mutex_unlock(&clock->mutex);
+}
+
+/* Initialize IPA clocking */
+struct ipa_clock *ipa_clock_init(struct device *dev)
+{
+	struct ipa_clock *clock;
+	struct clk *clk;
+	int ret;
+
+	clk = clk_get(dev, "core");
+	if (IS_ERR(clk)) {
+		dev_err(dev, "error %ld getting core clock\n", PTR_ERR(clk));
+		return ERR_CAST(clk);
+	}
+
+	ret = clk_set_rate(clk, IPA_CORE_CLOCK_RATE);
+	if (ret) {
+		dev_err(dev, "error %d setting core clock rate to %lu\n",
+			ret, IPA_CORE_CLOCK_RATE);
+		goto err_clk_put;
+	}
+
+	clock = kzalloc(sizeof(*clock), GFP_KERNEL);
+	if (!clock) {
+		ret = -ENOMEM;
+		goto err_clk_put;
+	}
+	clock->core = clk;
+
+	ret = ipa_interconnect_init(clock, dev);
+	if (ret)
+		goto err_kfree;
+
+	mutex_init(&clock->mutex);
+	atomic_set(&clock->count, 0);
+
+	return clock;
+
+err_kfree:
+	kfree(clock);
+err_clk_put:
+	clk_put(clk);
+
+	return ERR_PTR(ret);
+}
+
+/* Inverse of ipa_clock_init() */
+void ipa_clock_exit(struct ipa_clock *clock)
+{
+	struct clk *clk = clock->core;
+
+	WARN_ON(atomic_read(&clock->count) != 0);
+	mutex_destroy(&clock->mutex);
+	ipa_interconnect_exit(clock);
+	kfree(clock);
+	clk_put(clk);
+}
diff --git a/drivers/net/ipa/ipa_clock.h b/drivers/net/ipa/ipa_clock.h
new file mode 100644
index 000000000000..bc52b35e6bb2
--- /dev/null
+++ b/drivers/net/ipa/ipa_clock.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _IPA_CLOCK_H_
+#define _IPA_CLOCK_H_
+
+struct device;
+
+struct ipa;
+
+/**
+ * ipa_clock_init() - Initialize IPA clocking
+ * @dev:	IPA device
+ *
+ * @Return:	A pointer to an ipa_clock structure, or a pointer-coded error
+ */
+struct ipa_clock *ipa_clock_init(struct device *dev);
+
+/**
+ * ipa_clock_exit() - Inverse of ipa_clock_init()
+ * @clock:	IPA clock pointer
+ */
+void ipa_clock_exit(struct ipa_clock *clock);
+
+/**
+ * ipa_clock_get() - Get an IPA clock reference
+ * @ipa:	IPA pointer
+ *
+ * This call blocks if this is the first reference.
+ */
+void ipa_clock_get(struct ipa *ipa);
+
+/**
+ * ipa_clock_get_additional() - Get an IPA clock reference if not first
+ * @ipa:	IPA pointer
+ *
+ * This returns immediately, and only takes a reference if it is not the first.
+ */
+bool ipa_clock_get_additional(struct ipa *ipa);
+
+/**
+ * ipa_clock_put() - Drop an IPA clock reference
+ * @ipa:	IPA pointer
+ *
+ * This drops a clock reference.  If the last reference is being dropped,
+ * the clock is stopped and RX endpoints are suspended.  This call will
+ * not block unless the last reference is dropped.
+ */
+void ipa_clock_put(struct ipa *ipa);
+
+#endif /* _IPA_CLOCK_H_ */
diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
new file mode 100644
index 000000000000..90353987c45f
--- /dev/null
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -0,0 +1,253 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+/* DOC: IPA Interrupts
+ *
+ * The IPA has an interrupt line distinct from the interrupt used by the GSI
+ * code.  Whereas GSI interrupts are generally related to channel events (like
+ * transfer completions), IPA interrupts are related to other events related
+ * to the IPA.  Some of the IPA interrupts come from a microcontroller
+ * embedded in the IPA.  Each IPA interrupt type can be both masked and
+ * acknowledged independent of the others.
+ *
+ * Two of the IPA interrupts are initiated by the microcontroller.  A third
+ * can be generated to signal the need for a wakeup/resume when an IPA
+ * endpoint has been suspended.  There are other IPA events, but at this
+ * time only these three are supported.
+ */
+
+#include <linux/types.h>
+#include <linux/interrupt.h>
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_reg.h"
+#include "ipa_endpoint.h"
+#include "ipa_interrupt.h"
+
+/**
+ * struct ipa_interrupt - IPA interrupt information
+ * @ipa:		IPA pointer
+ * @irq:		Linux IRQ number used for IPA interrupts
+ * @enabled:		Mask indicating which interrupts are enabled
+ * @handler:		Array of handlers indexed by IPA interrupt ID
+ */
+struct ipa_interrupt {
+	struct ipa *ipa;
+	u32 irq;
+	u32 enabled;
+	ipa_irq_handler_t handler[IPA_IRQ_COUNT];
+};
+
+/* Returns true if the interrupt type is associated with the microcontroller */
+static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 irq_id)
+{
+	return irq_id == IPA_IRQ_UC_0 || irq_id == IPA_IRQ_UC_1;
+}
+
+/* Process a particular interrupt type that has been received */
+static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 irq_id)
+{
+	bool uc_irq = ipa_interrupt_uc(interrupt, irq_id);
+	struct ipa *ipa = interrupt->ipa;
+	u32 mask = BIT(irq_id);
+
+	/* For microcontroller interrupts, clear the interrupt right away,
+	 * "to avoid clearing unhandled interrupts."
+	 */
+	if (uc_irq)
+		iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+
+	if (irq_id < IPA_IRQ_COUNT && interrupt->handler[irq_id])
+		interrupt->handler[irq_id](interrupt->ipa, irq_id);
+
+	/* Clearing the SUSPEND_TX interrupt also clears the register
+	 * that tells us which suspended endpoint(s) caused the interrupt,
+	 * so defer clearing until after the handler has been called.
+	 */
+	if (!uc_irq)
+		iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+}
+
+/* Process all IPA interrupt types that have been signaled */
+static void ipa_interrupt_process_all(struct ipa_interrupt *interrupt)
+{
+	struct ipa *ipa = interrupt->ipa;
+	u32 enabled = interrupt->enabled;
+	u32 mask;
+
+	/* The status register indicates which conditions are present,
+	 * including conditions whose interrupt is not enabled.  Handle
+	 * only the enabled ones.
+	 */
+	mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	while ((mask &= enabled)) {
+		do {
+			u32 irq_id = __ffs(mask);
+
+			mask ^= BIT(irq_id);
+
+			ipa_interrupt_process(interrupt, irq_id);
+		} while (mask);
+		mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	}
+}
+
+/* Threaded part of the IPA IRQ handler */
+static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
+{
+	struct ipa_interrupt *interrupt = dev_id;
+
+	ipa_clock_get(interrupt->ipa);
+
+	ipa_interrupt_process_all(interrupt);
+
+	ipa_clock_put(interrupt->ipa);
+
+	return IRQ_HANDLED;
+}
+
+/* Hard part (i.e., "real" IRQ handler) of the IRQ handler */
+static irqreturn_t ipa_isr(int irq, void *dev_id)
+{
+	struct ipa_interrupt *interrupt = dev_id;
+	struct ipa *ipa = interrupt->ipa;
+	u32 mask;
+
+	mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	if (mask & interrupt->enabled)
+		return IRQ_WAKE_THREAD;
+
+	/* Nothing in the mask was supposed to cause an interrupt */
+	iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+
+	dev_err(&ipa->pdev->dev, "%s: unexpected interrupt, mask 0x%08x\n",
+		__func__, mask);
+
+	return IRQ_HANDLED;
+}
+
+/* Common function used to enable/disable TX_SUSPEND for an endpoint */
+static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
+					  u32 endpoint_id, bool enable)
+{
+	struct ipa *ipa = interrupt->ipa;
+	u32 mask = BIT(endpoint_id);
+	u32 val;
+
+	/* assert(mask & ipa->available); */
+	val = ioread32(ipa->reg_virt + IPA_REG_SUSPEND_IRQ_EN_OFFSET);
+	if (enable)
+		val |= mask;
+	else
+		val &= ~mask;
+	iowrite32(val, ipa->reg_virt + IPA_REG_SUSPEND_IRQ_EN_OFFSET);
+}
+
+/* Enable TX_SUSPEND for an endpoint */
+void
+ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt, u32 endpoint_id)
+{
+	ipa_interrupt_suspend_control(interrupt, endpoint_id, true);
+}
+
+/* Disable TX_SUSPEND for an endpoint */
+void
+ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt, u32 endpoint_id)
+{
+	ipa_interrupt_suspend_control(interrupt, endpoint_id, false);
+}
+
+/* Clear the suspend interrupt for all endpoints that signaled it */
+void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt)
+{
+	struct ipa *ipa = interrupt->ipa;
+	u32 val;
+
+	val = ioread32(ipa->reg_virt + IPA_REG_IRQ_SUSPEND_INFO_OFFSET);
+	iowrite32(val, ipa->reg_virt + IPA_REG_SUSPEND_IRQ_CLR_OFFSET);
+}
+
+/* Simulate arrival of an IPA TX_SUSPEND interrupt */
+void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt)
+{
+	ipa_interrupt_process(interrupt, IPA_IRQ_TX_SUSPEND);
+}
+
+/* Add a handler for an IPA interrupt */
+void ipa_interrupt_add(struct ipa_interrupt *interrupt,
+		       enum ipa_irq_id ipa_irq, ipa_irq_handler_t handler)
+{
+	struct ipa *ipa = interrupt->ipa;
+
+	/* assert(ipa_irq < IPA_IRQ_COUNT); */
+	interrupt->handler[ipa_irq] = handler;
+
+	/* Update the IPA interrupt mask to enable it */
+	interrupt->enabled |= BIT(ipa_irq);
+	iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+}
+
+/* Remove the handler for an IPA interrupt type */
+void
+ipa_interrupt_remove(struct ipa_interrupt *interrupt, enum ipa_irq_id ipa_irq)
+{
+	struct ipa *ipa = interrupt->ipa;
+
+	/* assert(ipa_irq < IPA_IRQ_COUNT); */
+	/* Update the IPA interrupt mask to disable it */
+	interrupt->enabled &= ~BIT(ipa_irq);
+	iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+
+	interrupt->handler[ipa_irq] = NULL;
+}
+
+/* Set up the IPA interrupt framework */
+struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	struct ipa_interrupt *interrupt;
+	unsigned int irq;
+	int ret;
+
+	ret = platform_get_irq_byname(ipa->pdev, "ipa");
+	if (ret <= 0) {
+		dev_err(dev, "DT error %d getting \"ipa\" IRQ property\n",
+			ret);
+		return ERR_PTR(ret ? : -EINVAL);
+	}
+	irq = ret;
+
+	interrupt = kzalloc(sizeof(*interrupt), GFP_KERNEL);
+	if (!interrupt)
+		return ERR_PTR(-ENOMEM);
+	interrupt->ipa = ipa;
+	interrupt->irq = irq;
+
+	/* Start with all IPA interrupts disabled */
+	iowrite32(0, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+
+	ret = request_threaded_irq(irq, ipa_isr, ipa_isr_thread, IRQF_ONESHOT,
+				   "ipa", interrupt);
+	if (ret) {
+		dev_err(dev, "error %d requesting \"ipa\" IRQ\n", ret);
+		goto err_kfree;
+	}
+
+	return interrupt;
+
+err_kfree:
+	kfree(interrupt);
+
+	return ERR_PTR(ret);
+}
+
+/* Tear down the IPA interrupt framework */
+void ipa_interrupt_teardown(struct ipa_interrupt *interrupt)
+{
+	free_irq(interrupt->irq, interrupt);
+	kfree(interrupt);
+}
diff --git a/drivers/net/ipa/ipa_interrupt.h b/drivers/net/ipa/ipa_interrupt.h
new file mode 100644
index 000000000000..d4f4c1c9f0b1
--- /dev/null
+++ b/drivers/net/ipa/ipa_interrupt.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _IPA_INTERRUPT_H_
+#define _IPA_INTERRUPT_H_
+
+#include <linux/types.h>
+#include <linux/bits.h>
+
+struct ipa;
+struct ipa_interrupt;
+
+/**
+ * enum ipa_irq_id - IPA interrupt type
+ * @IPA_IRQ_UC_0:	Microcontroller event interrupt
+ * @IPA_IRQ_UC_1:	Microcontroller response interrupt
+ * @IPA_IRQ_TX_SUSPEND:	Data ready interrupt
+ *
+ * The data ready interrupt is signaled if data has arrived that is destined
+ * for an AP RX endpoint whose underlying GSI channel is suspended/stopped.
+ */
+enum ipa_irq_id {
+	IPA_IRQ_UC_0		= 2,
+	IPA_IRQ_UC_1		= 3,
+	IPA_IRQ_TX_SUSPEND	= 14,
+	IPA_IRQ_COUNT,		/* Number of interrupt types (not an index) */
+};
+
+/**
+ * typedef ipa_irq_handler_t - IPA interrupt handler function type
+ * @ipa:	IPA pointer
+ * @irq_id:	interrupt type
+ *
+ * Callback function registered by ipa_interrupt_add() to handle a specific
+ * IPA interrupt type
+ */
+typedef void (*ipa_irq_handler_t)(struct ipa *ipa, enum ipa_irq_id irq_id);
+
+/**
+ * ipa_interrupt_add() - Register a handler for an IPA interrupt type
+ * @interrupt:	IPA interrupt structure
+ * @irq_id:	IPA interrupt type
+ * @handler:	Handler function for the interrupt
+ *
+ * Add a handler for an IPA interrupt and enable it.  IPA interrupt
+ * handlers are run in threaded interrupt context, so are allowed to
+ * block.
+ */
+void ipa_interrupt_add(struct ipa_interrupt *interrupt, enum ipa_irq_id irq_id,
+		       ipa_irq_handler_t handler);
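+
+/* Purely illustrative sketch (the handler and its registration below are
+ * hypothetical, not part of this patch):
+ *
+ *	static void example_uc_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
+ *	{
+ *		... handle the microcontroller event interrupt ...
+ *	}
+ *
+ *	ipa_interrupt_add(interrupt, IPA_IRQ_UC_0, example_uc_handler);
+ *	...
+ *	ipa_interrupt_remove(interrupt, IPA_IRQ_UC_0);
+ *
+ * Here "interrupt" is the structure returned by ipa_interrupt_setup(),
+ * and the handler runs in threaded interrupt context.
+ */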
+
+/**
+ * ipa_interrupt_remove() - Remove the handler for an IPA interrupt type
+ * @interrupt:	IPA interrupt structure
+ * @irq_id:	IPA interrupt type
+ *
+ * Remove an IPA interrupt handler and disable it.
+ */
+void ipa_interrupt_remove(struct ipa_interrupt *interrupt,
+			  enum ipa_irq_id irq_id);
+
+/**
+ * ipa_interrupt_suspend_enable - Enable TX_SUSPEND for an endpoint
+ * @interrupt:		IPA interrupt structure
+ * @endpoint_id:	Endpoint whose interrupt should be enabled
+ *
+ * Note:  The "TX" in the name is from the perspective of the IPA hardware.
+ * A TX_SUSPEND interrupt arrives on an AP RX endpoint when packet data can't
+ * be delivered to the endpoint because it is suspended (or its underlying
+ * channel is stopped).
+ */
+void ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt,
+				  u32 endpoint_id);
+
+/**
+ * ipa_interrupt_suspend_disable - Disable TX_SUSPEND for an endpoint
+ * @interrupt:		IPA interrupt structure
+ * @endpoint_id:	Endpoint whose interrupt should be disabled
+ */
+void ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt,
+				   u32 endpoint_id);
+
+/**
+ * ipa_interrupt_suspend_clear_all - clear all suspend interrupts
+ * @interrupt:	IPA interrupt structure
+ *
+ * Clear the TX_SUSPEND interrupt for all endpoints that signaled it.
+ */
+void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt);
+
+/**
+ * ipa_interrupt_simulate_suspend() - Simulate TX_SUSPEND IPA interrupt
+ * @interrupt:	IPA interrupt structure
+ *
+ * This calls the TX_SUSPEND interrupt handler, as if such an interrupt
+ * had been signaled.  This is needed to work around a hardware quirk
+ * that occurs if aggregation is active on an endpoint when its underlying
+ * channel is suspended.
+ */
+void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt);
+
+/**
+ * ipa_interrupt_setup() - Set up the IPA interrupt framework
+ * @ipa:	IPA pointer
+ *
+ * @Return:	Pointer to the IPA interrupt structure, or a pointer-coded error
+ */
+struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa);
+
+/**
+ * ipa_interrupt_teardown() - Tear down the IPA interrupt framework
+ * @interrupt:	IPA interrupt structure
+ */
+void ipa_interrupt_teardown(struct ipa_interrupt *interrupt);
+
+#endif /* _IPA_INTERRUPT_H_ */
diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
new file mode 100644
index 000000000000..42d2c29d9f0c
--- /dev/null
+++ b/drivers/net/ipa/ipa_mem.c
@@ -0,0 +1,314 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/bitfield.h>
+#include <linux/bug.h>
+#include <linux/dma-mapping.h>
+#include <linux/io.h>
+
+#include "ipa.h"
+#include "ipa_reg.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+#include "ipa_data.h"
+#include "ipa_table.h"
+#include "gsi_trans.h"
+
+/* "Canary" value placed between memory regions to detect overflow */
+#define IPA_MEM_CANARY_VAL		cpu_to_le32(0xdeadbeef)
+
+/* Add an immediate command to a transaction that zeroes a memory region */
+static void
+ipa_mem_zero_region_add(struct gsi_trans *trans, const struct ipa_mem *mem)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	dma_addr_t addr = ipa->zero_addr;
+
+	if (!mem->size)
+		return;
+
+	ipa_cmd_dma_shared_mem_add(trans, mem->offset, mem->size, addr, true);
+}
+
+/**
+ * ipa_mem_setup() - Set up IPA AP and modem shared memory areas
+ *
+ * Set up the shared memory regions in IPA local memory.  This involves
+ * zero-filling memory regions, and in the case of header memory, telling
+ * the IPA where it's located.
+ *
+ * This function performs the initial setup of this memory.  If the modem
+ * crashes, its regions are re-zeroed in ipa_mem_zero_modem().
+ *
+ * The AP informs the modem where its portions of memory are located
+ * in a QMI exchange that occurs at modem startup.
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int ipa_mem_setup(struct ipa *ipa)
+{
+	dma_addr_t addr = ipa->zero_addr;
+	struct gsi_trans *trans;
+	u32 offset;
+	u16 size;
+
+	/* Get a transaction to define the header memory region and to zero
+	 * the processing context and modem memory regions.
+	 */
+	trans = ipa_cmd_trans_alloc(ipa, 4);
+	if (!trans) {
+		dev_err(&ipa->pdev->dev, "no transaction for memory setup\n");
+		return -EBUSY;
+	}
+
+	/* Initialize IPA-local header memory.  The modem and AP header
+	 * regions are contiguous, and initialized together.
+	 */
+	offset = ipa->mem[IPA_MEM_MODEM_HEADER].offset;
+	size = ipa->mem[IPA_MEM_MODEM_HEADER].size;
+	size += ipa->mem[IPA_MEM_AP_HEADER].size;
+
+	ipa_cmd_hdr_init_local_add(trans, offset, size, addr);
+
+	ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM_PROC_CTX]);
+
+	ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_AP_PROC_CTX]);
+
+	ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM]);
+
+	gsi_trans_commit_wait(trans);
+
+	/* Tell the hardware where the processing context area is located */
+	iowrite32(ipa->mem_offset + offset,
+		  ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET);
+
+	return 0;
+}
+
+void ipa_mem_teardown(struct ipa *ipa)
+{
+	/* Nothing to do */
+}
+
+#ifdef IPA_VALIDATE
+
+static bool ipa_mem_valid(struct ipa *ipa, enum ipa_mem_id mem_id)
+{
+	const struct ipa_mem *mem = &ipa->mem[mem_id];
+	struct device *dev = &ipa->pdev->dev;
+	u16 size_multiple;
+
+	/* Other than modem memory, sizes must be a multiple of 8 */
+	size_multiple = mem_id == IPA_MEM_MODEM ? 4 : 8;
+	if (mem->size % size_multiple)
+		dev_err(dev, "region %u size not a multiple of %u bytes\n",
+			mem_id, size_multiple);
+	else if (mem->offset % 8)
+		dev_err(dev, "region %u offset not 8-byte aligned\n", mem_id);
+	else if (mem->offset < mem->canary_count * sizeof(__le32))
+		dev_err(dev, "region %u offset too small for %hu canaries\n",
+			mem_id, mem->canary_count);
+	else if (mem->offset + mem->size > ipa->mem_size)
+		dev_err(dev, "region %u ends beyond memory limit (0x%08x)\n",
+			mem_id, ipa->mem_size);
+	else
+		return true;
+
+	return false;
+}
+
+#else /* !IPA_VALIDATE */
+
+static bool ipa_mem_valid(struct ipa *ipa, enum ipa_mem_id mem_id)
+{
+	return true;
+}
+
+#endif /* !IPA_VALIDATE */
+
+/**
+ * ipa_mem_config() - Configure IPA shared memory
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int ipa_mem_config(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	enum ipa_mem_id mem_id;
+	dma_addr_t addr;
+	u32 mem_size;
+	void *virt;
+	u32 val;
+
+	/* Check the advertised location and size of the shared memory area */
+	val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
+
+	/* The fields in the register are in 8 byte units */
+	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
+	/* Make sure the end is within the region's mapped space */
+	mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
+
+	/* If the sizes don't match, issue a warning */
+	if (ipa->mem_offset + mem_size > ipa->mem_size) {
+		dev_warn(dev, "ignoring larger reported memory size: 0x%08x\n",
+			mem_size);
+	} else if (ipa->mem_offset + mem_size < ipa->mem_size) {
+		dev_warn(dev, "limiting IPA memory size to 0x%08x\n",
+			 mem_size);
+		ipa->mem_size = mem_size;
+	}
+
+	/* Prealloc DMA memory for zeroing regions */
+	virt = dma_alloc_coherent(dev, IPA_MEM_MAX, &addr, GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+	ipa->zero_addr = addr;
+	ipa->zero_virt = virt;
+	ipa->zero_size = IPA_MEM_MAX;
+
+	/* Verify each defined memory region is valid, and if indicated
+	 * for the region, write "canary" values in the space prior to
+	 * the region's base address.
+	 */
+	for (mem_id = 0; mem_id < IPA_MEM_COUNT; mem_id++) {
+		const struct ipa_mem *mem = &ipa->mem[mem_id];
+		u16 canary_count;
+		__le32 *canary;
+
+		/* Validate all regions (even undefined ones) */
+		if (!ipa_mem_valid(ipa, mem_id))
+			goto err_dma_free;
+
+		/* Skip over undefined regions */
+		if (!mem->offset && !mem->size)
+			continue;
+
+		canary_count = mem->canary_count;
+		if (!canary_count)
+			continue;
+
+		/* Write canary values in the space before the region */
+		canary = ipa->mem_virt + ipa->mem_offset + mem->offset;
+		do
+			*--canary = IPA_MEM_CANARY_VAL;
+		while (--canary_count);
+	}
+
+	/* Make sure filter and route table memory regions are valid */
+	if (!ipa_table_valid(ipa))
+		goto err_dma_free;
+
+	/* Validate memory-related properties relevant to immediate commands */
+	if (!ipa_cmd_data_valid(ipa))
+		goto err_dma_free;
+
+	/* Verify the microcontroller ring alignment (0 is OK too) */
+	if (ipa->mem[IPA_MEM_UC_EVENT_RING].offset % 1024) {
+		dev_err(dev, "microcontroller ring not 1024-byte aligned\n");
+		goto err_dma_free;
+	}
+
+	return 0;
+
+err_dma_free:
+	dma_free_coherent(dev, IPA_MEM_MAX, ipa->zero_virt, ipa->zero_addr);
+
+	return -EINVAL;
+}
+
+/* Inverse of ipa_mem_config() */
+void ipa_mem_deconfig(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+
+	dma_free_coherent(dev, ipa->zero_size, ipa->zero_virt, ipa->zero_addr);
+	ipa->zero_size = 0;
+	ipa->zero_virt = NULL;
+	ipa->zero_addr = 0;
+}
+
+/**
+ * ipa_mem_zero_modem() - Zero IPA-local memory regions owned by the modem
+ *
+ * Zero regions of IPA-local memory used by the modem.  These are configured
+ * (and initially zeroed) by ipa_mem_setup(), but if the modem crashes and
+ * restarts via SSR we need to re-initialize them.  A QMI message tells the
+ * modem where to find regions of IPA local memory it needs to know about
+ * (these included).
+ */
+int ipa_mem_zero_modem(struct ipa *ipa)
+{
+	struct gsi_trans *trans;
+
+	/* Get a transaction to zero the modem memory, modem header,
+	 * and modem processing context regions.
+	 */
+	trans = ipa_cmd_trans_alloc(ipa, 3);
+	if (!trans) {
+		dev_err(&ipa->pdev->dev,
+			"no transaction to zero modem memory\n");
+		return -EBUSY;
+	}
+
+	ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM_HEADER]);
+
+	ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM_PROC_CTX]);
+
+	ipa_mem_zero_region_add(trans, &ipa->mem[IPA_MEM_MODEM]);
+
+	gsi_trans_commit_wait(trans);
+
+	return 0;
+}
+
+/* Perform memory region-related initialization */
+int ipa_mem_init(struct ipa *ipa, u32 count, const struct ipa_mem *mem)
+{
+	struct device *dev = &ipa->pdev->dev;
+	struct resource *res;
+	int ret;
+
+	if (count > IPA_MEM_COUNT) {
+		dev_err(dev, "to many memory regions (%u > %u)\n",
+			count, IPA_MEM_COUNT);
+		return -EINVAL;
+	}
+
+	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64));
+	if (ret) {
+		dev_err(dev, "error %d setting DMA mask\n", ret);
+		return ret;
+	}
+
+	res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM,
+					   "ipa-shared");
+	if (!res) {
+		dev_err(dev,
+			"DT error getting \"ipa-shared\" memory property\n");
+		return -ENODEV;
+	}
+
+	ipa->mem_virt = memremap(res->start, resource_size(res), MEMREMAP_WC);
+	if (!ipa->mem_virt) {
+		dev_err(dev, "unable to remap \"ipa-shared\" memory\n");
+		return -ENOMEM;
+	}
+
+	ipa->mem_addr = res->start;
+	ipa->mem_size = resource_size(res);
+
+	/* The ipa->mem[] array is indexed by enum ipa_mem_id values */
+	ipa->mem = mem;
+
+	return 0;
+}
+
+/* Inverse of ipa_mem_init() */
+void ipa_mem_exit(struct ipa *ipa)
+{
+	memunmap(ipa->mem_virt);
+}
diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
new file mode 100644
index 000000000000..065cb499ebe5
--- /dev/null
+++ b/drivers/net/ipa/ipa_mem.h
@@ -0,0 +1,90 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_MEM_H_
+#define _IPA_MEM_H_
+
+struct ipa;
+
+/**
+ * DOC: IPA Local Memory
+ *
+ * The IPA has a block of shared memory, divided into regions used for
+ * specific purposes.
+ *
+ * The regions within the shared block are bounded by an offset (relative to
+ * the "ipa-shared" memory range) and size found in the IPA_SHARED_MEM_SIZE
+ * register.
+ *
+ * Each region is optionally preceded by one or more 32-bit "canary" values.
+ * These are meant to detect out-of-range writes (if they become corrupted).
+ * A given region (such as a filter or routing table) has the same number
+ * of canaries for all IPA hardware versions.  Still, the number used is
+ * defined in the config data, allowing for generic handling of regions.
+ *
+ * The set of memory regions is defined in configuration data.  They are
+ * subject to these constraints:
+ * - a zero offset and zero size represents an undefined region
+ * - a region's offset is defined to be *past* all "canary" values
+ * - offset must be large enough to account for all canaries
+ * - a region's size may be zero, but may still have canaries
+ * - all offsets must be 8-byte aligned
+ * - most sizes must be a multiple of 8
+ * - modem memory size must be a multiple of 4
+ * - the microcontroller ring offset must be a multiple of 1024
+ */
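+
+/* Purely illustrative (the numbers below are made up, not actual platform
+ * data): a configuration entry obeying the constraints above might look
+ * like
+ *
+ *	[IPA_MEM_V4_FILTER] = {
+ *		.offset		= 0x0290,	(8-byte aligned, room for 2 canaries)
+ *		.size		= 0x0078,	(a multiple of 8)
+ *		.canary_count	= 2,
+ *	},
+ */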
+
+/* The maximum allowed size for any memory region */
+#define IPA_MEM_MAX	(2 * PAGE_SIZE)
+
+/* IPA-resident memory region ids */
+enum ipa_mem_id {
+	IPA_MEM_UC_SHARED,		/* 0 canaries */
+	IPA_MEM_UC_INFO,		/* 0 canaries */
+	IPA_MEM_V4_FILTER_HASHED,	/* 2 canaries */
+	IPA_MEM_V4_FILTER,		/* 2 canaries */
+	IPA_MEM_V6_FILTER_HASHED,	/* 2 canaries */
+	IPA_MEM_V6_FILTER,		/* 2 canaries */
+	IPA_MEM_V4_ROUTE_HASHED,	/* 2 canaries */
+	IPA_MEM_V4_ROUTE,		/* 2 canaries */
+	IPA_MEM_V6_ROUTE_HASHED,	/* 2 canaries */
+	IPA_MEM_V6_ROUTE,		/* 2 canaries */
+	IPA_MEM_MODEM_HEADER,		/* 2 canaries */
+	IPA_MEM_AP_HEADER,		/* 0 canaries */
+	IPA_MEM_MODEM_PROC_CTX,		/* 2 canaries */
+	IPA_MEM_AP_PROC_CTX,		/* 0 canaries */
+	IPA_MEM_PDN_CONFIG,		/* 2 canaries (IPA v4.0 and above) */
+	IPA_MEM_STATS_QUOTA,		/* 2 canaries (IPA v4.0 and above) */
+	IPA_MEM_STATS_TETHERING,	/* 0 canaries (IPA v4.0 and above) */
+	IPA_MEM_STATS_DROP,		/* 0 canaries (IPA v4.0 and above) */
+	IPA_MEM_MODEM,			/* 0 canaries */
+	IPA_MEM_UC_EVENT_RING,		/* 1 canary */
+	IPA_MEM_COUNT,			/* Number of regions (not an index) */
+};
+
+/**
+ * struct ipa_mem - IPA local memory region description
+ * @offset:		offset in IPA memory space to base of the region
+ * @size:		size in bytes of the region
+ * @canary_count:	number of 32-bit "canary" values that precede region
+ */
+struct ipa_mem {
+	u32 offset;
+	u16 size;
+	u16 canary_count;
+};
+
+int ipa_mem_config(struct ipa *ipa);
+void ipa_mem_deconfig(struct ipa *ipa);
+
+int ipa_mem_setup(struct ipa *ipa);
+void ipa_mem_teardown(struct ipa *ipa);
+
+int ipa_mem_zero_modem(struct ipa *ipa);
+
+int ipa_mem_init(struct ipa *ipa, u32 count, const struct ipa_mem *mem);
+void ipa_mem_exit(struct ipa *ipa);
+
+#endif /* _IPA_MEM_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 06/17] soc: qcom: ipa: GSI headers
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (4 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 05/17] soc: qcom: ipa: clocking, interrupts, and memory Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 07/17] soc: qcom: ipa: the generic software interface Alex Elder
                   ` (13 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

The Generic Software Interface is a layer of the IPA driver that
abstracts the underlying hardware.  The next patch includes the
main code for GSI (including some additional documentation).  This
patch just includes three GSI header files.

  - "gsi.h" is the top-level GSI header file.  This structure is
    is embedded within the IPA structure.  The main abstraction
    implemented by the GSI code is the channel, and this header
    exposes several operations that can be performed on a GSI channel.

  - "gsi_private.h" exposes some definitions that are intended to be
    private, used only by the main GSI code and the GSI transaction
    code (defined in an upcoming patch).

  - Like "ipa_reg.h", "gsi_reg.h" defines the offsets of the 32-bit
    registers used by the GSI layer, along with masks that define the
    position and width of fields less than 32 bits located within
    these registers.
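
As a worked example using the definitions found in "gsi_reg.h" below, the
CNTXT_0 register offset for a channel is a base offset plus a per-EE
stride plus a per-channel stride:

	#define GSI_EE_N_CH_C_CNTXT_0_OFFSET(ch, ee) \
			(0x0001c000 + 0x4000 * (ee) + 0x80 * (ch))

So for channel 3 in the AP execution environment (GSI_EE_AP is 0), the
offset works out to 0x0001c000 + 0x4000 * 0 + 0x80 * 3 = 0x0001c180.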

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/gsi.h         | 257 +++++++++++++++++++++
 drivers/net/ipa/gsi_private.h | 118 ++++++++++
 drivers/net/ipa/gsi_reg.h     | 417 ++++++++++++++++++++++++++++++++++
 3 files changed, 792 insertions(+)
 create mode 100644 drivers/net/ipa/gsi.h
 create mode 100644 drivers/net/ipa/gsi_private.h
 create mode 100644 drivers/net/ipa/gsi_reg.h

diff --git a/drivers/net/ipa/gsi.h b/drivers/net/ipa/gsi.h
new file mode 100644
index 000000000000..0698ff1ae7a6
--- /dev/null
+++ b/drivers/net/ipa/gsi.h
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _GSI_H_
+#define _GSI_H_
+
+#include <linux/types.h>
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+#include <linux/completion.h>
+#include <linux/platform_device.h>
+#include <linux/netdevice.h>
+
+/* Maximum number of channels and event rings supported by the driver */
+#define GSI_CHANNEL_COUNT_MAX	17
+#define GSI_EVT_RING_COUNT_MAX	13
+
+/* Maximum TLV FIFO size for a channel; 64 here is arbitrary (and high) */
+#define GSI_TLV_MAX		64
+
+struct device;
+struct scatterlist;
+struct platform_device;
+
+struct gsi;
+struct gsi_trans;
+struct gsi_channel_data;
+struct ipa_gsi_endpoint_data;
+
+/* Execution environment IDs */
+enum gsi_ee_id {
+	GSI_EE_AP	= 0,
+	GSI_EE_MODEM	= 1,
+	GSI_EE_UC	= 2,
+	GSI_EE_TZ	= 3,
+};
+
+struct gsi_ring {
+	void *virt;			/* ring array base address */
+	dma_addr_t addr;		/* primarily low 32 bits used */
+	u32 count;			/* number of elements in ring */
+
+	/* The ring index value indicates the next "open" entry in the ring.
+	 *
+	 * A channel ring consists of TRE entries filled by the AP and passed
+	 * to the hardware for processing.  For a channel ring, the ring index
+	 * identifies the next unused entry to be filled by the AP.
+	 *
+	 * An event ring consists of event structures filled by the hardware
+	 * and passed to the AP.  For event rings, the ring index identifies
+	 * the next ring entry that is not known to have been filled by the
+	 * hardware.
+	 */
+	u32 index;
+};
+
+/* Transactions use several resources that can be allocated dynamically
+ * but taken from a fixed-size pool.  The number of elements required for
+ * the pool is limited by the total number of TREs that can be outstanding.
+ *
+ * If sufficient TREs are available to reserve for a transaction,
+ * allocation from these pools is guaranteed to succeed.  Furthermore,
+ * these resources are implicitly freed whenever the TREs in the
+ * transaction they're associated with are released.
+ *
+ * The result of a pool allocation of multiple elements is always
+ * contiguous.
+ */
+struct gsi_trans_pool {
+	void *base;			/* base address of element pool */
+	u32 count;			/* # elements in the pool */
+	u32 free;			/* next free element in pool (modulo) */
+	u32 size;			/* size (bytes) of an element */
+	u32 max_alloc;			/* max allocation request */
+	dma_addr_t addr;		/* DMA address if DMA pool (or 0) */
+};
+
+struct gsi_trans_info {
+	atomic_t tre_avail;		/* TREs available for allocation */
+	struct gsi_trans_pool pool;	/* transaction pool */
+	struct gsi_trans_pool sg_pool;	/* scatterlist pool */
+	struct gsi_trans_pool cmd_pool;	/* command payload DMA pool */
+	struct gsi_trans_pool info_pool;/* command information pool */
+	struct gsi_trans **map;		/* TRE -> transaction map */
+
+	spinlock_t spinlock;		/* protects updates to the lists */
+	struct list_head alloc;		/* allocated, not committed */
+	struct list_head pending;	/* committed, awaiting completion */
+	struct list_head complete;	/* completed, awaiting poll */
+	struct list_head polled;	/* returned by gsi_channel_poll_one() */
+};
+
+/* Hardware values signifying the state of a channel */
+enum gsi_channel_state {
+	GSI_CHANNEL_STATE_NOT_ALLOCATED	= 0x0,
+	GSI_CHANNEL_STATE_ALLOCATED	= 0x1,
+	GSI_CHANNEL_STATE_STARTED	= 0x2,
+	GSI_CHANNEL_STATE_STOPPED	= 0x3,
+	GSI_CHANNEL_STATE_STOP_IN_PROC	= 0x4,
+	GSI_CHANNEL_STATE_ERROR		= 0xf,
+};
+
+/* We only care about channels between IPA and AP */
+struct gsi_channel {
+	struct gsi *gsi;
+	bool toward_ipa;
+	bool command;			/* AP command TX channel or not */
+	bool use_prefetch;		/* use prefetch (else escape buf) */
+
+	u8 tlv_count;			/* # entries in TLV FIFO */
+	u16 tre_count;
+	u16 event_count;
+
+	struct completion completion;	/* signals channel state changes */
+	enum gsi_channel_state state;
+
+	struct gsi_ring tre_ring;
+	u32 evt_ring_id;
+
+	u64 byte_count;			/* total # bytes transferred */
+	u64 trans_count;		/* total # transactions */
+	/* The following counts are used only for TX endpoints */
+	u64 queued_byte_count;		/* last reported queued byte count */
+	u64 queued_trans_count;		/* ...and queued trans count */
+	u64 compl_byte_count;		/* last reported completed byte count */
+	u64 compl_trans_count;		/* ...and completed trans count */
+
+	struct gsi_trans_info trans_info;
+
+	struct napi_struct napi;
+};
+
+/* Hardware values signifying the state of an event ring */
+enum gsi_evt_ring_state {
+	GSI_EVT_RING_STATE_NOT_ALLOCATED	= 0x0,
+	GSI_EVT_RING_STATE_ALLOCATED		= 0x1,
+	GSI_EVT_RING_STATE_ERROR		= 0xf,
+};
+
+struct gsi_evt_ring {
+	struct gsi_channel *channel;
+	struct completion completion;	/* signals event ring state changes */
+	enum gsi_evt_ring_state state;
+	struct gsi_ring ring;
+};
+
+struct gsi {
+	struct device *dev;		/* Same as IPA device */
+	struct net_device dummy_dev;	/* needed for NAPI */
+	void __iomem *virt;
+	u32 irq;
+	bool irq_wake_enabled;
+	u32 channel_count;
+	u32 evt_ring_count;
+	struct gsi_channel channel[GSI_CHANNEL_COUNT_MAX];
+	struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX];
+	u32 event_bitmap;
+	u32 event_enable_bitmap;
+	u32 modem_channel_bitmap;
+	struct completion completion;	/* for global EE commands */
+	struct mutex mutex;		/* protects commands, programming */
+};
+
+/**
+ * gsi_setup() - Set up the GSI subsystem
+ * @gsi:	Address of GSI structure embedded in an IPA structure
+ * @db_enable:	Whether to use the GSI doorbell engine
+ *
+ * @Return:	0 if successful, or a negative error code
+ *
+ * Performs initialization that must wait until the GSI hardware is
+ * ready (including firmware loaded).
+ */
+int gsi_setup(struct gsi *gsi, bool db_enable);
+
+/**
+ * gsi_teardown() - Tear down GSI subsystem
+ * @gsi:	GSI address previously passed to a successful gsi_setup() call
+ */
+void gsi_teardown(struct gsi *gsi);
+
+/**
+ * gsi_channel_tre_max() - Channel maximum number of in-flight TREs
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel whose limit is to be returned
+ *
+ * @Return:	 The maximum number of TREs outstanding on the channel
+ */
+u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id);
+
+/**
+ * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel whose limit is to be returned
+ *
+ * @Return:	 The maximum TRE count per transaction on the channel
+ */
+u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id);
+
+/**
+ * gsi_channel_start() - Start an allocated GSI channel
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel to start
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int gsi_channel_start(struct gsi *gsi, u32 channel_id);
+
+/**
+ * gsi_channel_stop() - Stop a started GSI channel
+ * @gsi:	GSI pointer returned by gsi_setup()
+ * @channel_id:	Channel to stop
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int gsi_channel_stop(struct gsi *gsi, u32 channel_id);
+
+/**
+ * gsi_channel_reset() - Reset an allocated GSI channel
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel to be reset
+ * @db_enable:	Whether doorbell engine should be enabled
+ *
+ * Reset a channel and reconfigure it.  The @db_enable flag indicates
+ * whether the doorbell engine will be enabled following reconfiguration.
+ *
+ * GSI hardware relinquishes ownership of all pending receive buffer
+ * transactions and they will complete with their cancelled flag set.
+ */
+void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool db_enable);
+
+int gsi_channel_suspend(struct gsi *gsi, u32 channel_id, bool stop);
+int gsi_channel_resume(struct gsi *gsi, u32 channel_id, bool start);
+
+/**
+ * gsi_init() - Initialize the GSI subsystem
+ * @gsi:	Address of GSI structure embedded in an IPA structure
+ * @pdev:	IPA platform device
+ *
+ * @Return:	0 if successful, or a negative error code
+ *
+ * Early stage initialization of the GSI subsystem, performing tasks
+ * that can be done before the GSI hardware is ready to use.
+ */
+int gsi_init(struct gsi *gsi, struct platform_device *pdev, bool prefetch,
+	     u32 count, const struct ipa_gsi_endpoint_data *data,
+	     bool modem_alloc);
+
+/**
+ * gsi_exit() - Exit the GSI subsystem
+ * @gsi:	GSI address previously passed to a successful gsi_init() call
+ */
+void gsi_exit(struct gsi *gsi);
+
+#endif /* _GSI_H_ */
diff --git a/drivers/net/ipa/gsi_private.h b/drivers/net/ipa/gsi_private.h
new file mode 100644
index 000000000000..b57d0198ebc1
--- /dev/null
+++ b/drivers/net/ipa/gsi_private.h
@@ -0,0 +1,118 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _GSI_PRIVATE_H_
+#define _GSI_PRIVATE_H_
+
+/* === Only "gsi.c" and "gsi_trans.c" should include this file === */
+
+#include <linux/types.h>
+
+struct gsi_trans;
+struct gsi_ring;
+struct gsi_channel;
+
+#define GSI_RING_ELEMENT_SIZE	16	/* bytes */
+
+/* Return the entry that follows one provided in a transaction pool */
+void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element);
+
+/**
+ * gsi_trans_move_complete() - Mark a GSI transaction completed
+ * @trans:	Transaction to commit
+ */
+void gsi_trans_move_complete(struct gsi_trans *trans);
+
+/**
+ * gsi_trans_move_polled() - Mark a transaction polled
+ * @trans:	Transaction to update
+ */
+void gsi_trans_move_polled(struct gsi_trans *trans);
+
+/**
+ * gsi_trans_complete() - Complete a GSI transaction
+ * @trans:	Transaction to complete
+ *
+ * Marks a transaction complete (including freeing it).
+ */
+void gsi_trans_complete(struct gsi_trans *trans);
+
+/**
+ * gsi_channel_trans_mapped() - Return a transaction mapped to a TRE index
+ * @channel:	Channel associated with the transaction
+ * @index:	Index of the TRE having a transaction
+ *
+ * @Return:	The GSI transaction pointer associated with the TRE index
+ */
+struct gsi_trans *gsi_channel_trans_mapped(struct gsi_channel *channel,
+					   u32 index);
+
+/**
+ * gsi_channel_trans_complete() - Return a channel's next completed transaction
+ * @channel:	Channel whose next transaction is to be returned
+ *
+ * @Return:	The next completed transaction, or NULL if nothing new
+ */
+struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel);
+
+/**
+ * gsi_channel_trans_cancel_pending() - Cancel pending transactions
+ * @channel:	Channel whose pending transactions should be cancelled
+ *
+ * Cancel all pending transactions on a channel.  These are transactions
+ * that have been committed but not yet completed.  This is required when
+ * the channel gets reset.  At that time all pending transactions will be
+ * marked as cancelled.
+ *
+ * NOTE:  Transactions already complete at the time of this call are
+ *	  unaffected.
+ */
+void gsi_channel_trans_cancel_pending(struct gsi_channel *channel);
+
+/**
+ * gsi_channel_trans_init() - Initialize a channel's GSI transaction info
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel number
+ *
+ * @Return:	0 if successful, or -ENOMEM on allocation failure
+ *
+ * Creates and sets up information for managing transactions on a channel
+ */
+int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id);
+
+/**
+ * gsi_channel_trans_exit() - Inverse of gsi_channel_trans_init()
+ * @channel:	Channel whose transaction information is to be cleaned up
+ */
+void gsi_channel_trans_exit(struct gsi_channel *channel);
+
+/**
+ * gsi_channel_doorbell() - Ring a channel's doorbell
+ * @channel:	Channel whose doorbell should be rung
+ *
+ * Rings a channel's doorbell to inform the GSI hardware that new
+ * transactions (TREs, really) are available for it to process.
+ */
+void gsi_channel_doorbell(struct gsi_channel *channel);
+
+/**
+ * gsi_ring_virt() - Return virtual address for a ring entry
+ * @ring:	Ring whose address is to be translated
+ * @index:	Index (slot number) of entry
+ */
+void *gsi_ring_virt(struct gsi_ring *ring, u32 index);
+
+/**
+ * gsi_channel_tx_queued() - Report the number of bytes queued to hardware
+ * @channel:	Channel whose bytes have been queued
+ *
+ * This arranges for the number of transactions and bytes for
+ * transfer that have been queued to hardware to be reported.  It
+ * passes this information up the network stack so it can be used to
+ * throttle transmissions.
+ */
+void gsi_channel_tx_queued(struct gsi_channel *channel);
+
+#endif /* _GSI_PRIVATE_H_ */
diff --git a/drivers/net/ipa/gsi_reg.h b/drivers/net/ipa/gsi_reg.h
new file mode 100644
index 000000000000..7613b9cc7cf6
--- /dev/null
+++ b/drivers/net/ipa/gsi_reg.h
@@ -0,0 +1,417 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _GSI_REG_H_
+#define _GSI_REG_H_
+
+/* === Only "gsi.c" should include this file === */
+
+#include <linux/bits.h>
+
+/**
+ * DOC: GSI Registers
+ *
+ * GSI registers are located within the "gsi" address space defined by Device
+ * Tree.  The offset of each register within that space is specified by
+ * symbols defined below.  The GSI address space is mapped to virtual memory
+ * space in gsi_init().  All GSI registers are 32 bits wide.
+ *
+ * Each register type is duplicated for a number of instances of something.
+ * For example, each GSI channel has its own set of registers defining its
+ * configuration.  The offset to a channel's set of registers is computed
+ * based on a "base" offset plus an additional "stride" amount computed
+ * from the channel's ID.  For such registers, the offset is computed by a
+ * function-like macro that takes a parameter used in the computation.
+ *
+ * The offset of a register dependent on execution environment is computed
+ * by a macro that is supplied a parameter "ee".  The "ee" value is a member
+ * of the gsi_ee_id enumerated type.
+ *
+ * The offset of a channel register is computed by a macro that is supplied a
+ * parameter "ch".  The "ch" value is a channel id whose maximum value is 30
+ * (though the actual limit is hardware-dependent).
+ *
+ * The offset of an event register is computed by a macro that is supplied a
+ * parameter "ev".  The "ev" value is an event id whose maximum value is 15
+ * (though the actual limit is hardware-dependent).
+ */
+
+#define GSI_INTER_EE_SRC_CH_IRQ_OFFSET \
+			GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(GSI_EE_AP)
+#define GSI_INTER_EE_N_SRC_CH_IRQ_OFFSET(ee) \
+			(0x0000c018 + 0x1000 * (ee))
+
+#define GSI_INTER_EE_SRC_EV_CH_IRQ_OFFSET \
+			GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP)
+#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_OFFSET(ee) \
+			(0x0000c01c + 0x1000 * (ee))
+
+#define GSI_INTER_EE_SRC_CH_IRQ_CLR_OFFSET \
+			GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP)
+#define GSI_INTER_EE_N_SRC_CH_IRQ_CLR_OFFSET(ee) \
+			(0x0000c028 + 0x1000 * (ee))
+
+#define GSI_INTER_EE_SRC_EV_CH_IRQ_CLR_OFFSET \
+			GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP)
+#define GSI_INTER_EE_N_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \
+			(0x0000c02c + 0x1000 * (ee))
+
+#define GSI_CH_C_CNTXT_0_OFFSET(ch) \
+		GSI_EE_N_CH_C_CNTXT_0_OFFSET((ch), GSI_EE_AP)
+#define GSI_EE_N_CH_C_CNTXT_0_OFFSET(ch, ee) \
+		(0x0001c000 + 0x4000 * (ee) + 0x80 * (ch))
+#define CHTYPE_PROTOCOL_FMASK		GENMASK(2, 0)
+#define CHTYPE_DIR_FMASK		GENMASK(3, 3)
+#define EE_FMASK			GENMASK(7, 4)
+#define CHID_FMASK			GENMASK(12, 8)
+/* The next field is present for GSI v2.0 and above */
+#define CHTYPE_PROTOCOL_MSB_FMASK	GENMASK(13, 13)
+#define ERINDEX_FMASK			GENMASK(18, 14)
+#define CHSTATE_FMASK			GENMASK(23, 20)
+#define ELEMENT_SIZE_FMASK		GENMASK(31, 24)
+
+#define GSI_CH_C_CNTXT_1_OFFSET(ch) \
+		GSI_EE_N_CH_C_CNTXT_1_OFFSET((ch), GSI_EE_AP)
+#define GSI_EE_N_CH_C_CNTXT_1_OFFSET(ch, ee) \
+		(0x0001c004 + 0x4000 * (ee) + 0x80 * (ch))
+#define R_LENGTH_FMASK			GENMASK(15, 0)
+
+#define GSI_CH_C_CNTXT_2_OFFSET(ch) \
+		GSI_EE_N_CH_C_CNTXT_2_OFFSET((ch), GSI_EE_AP)
+#define GSI_EE_N_CH_C_CNTXT_2_OFFSET(ch, ee) \
+		(0x0001c008 + 0x4000 * (ee) + 0x80 * (ch))
+
+#define GSI_CH_C_CNTXT_3_OFFSET(ch) \
+		GSI_EE_N_CH_C_CNTXT_3_OFFSET((ch), GSI_EE_AP)
+#define GSI_EE_N_CH_C_CNTXT_3_OFFSET(ch, ee) \
+		(0x0001c00c + 0x4000 * (ee) + 0x80 * (ch))
+
+#define GSI_CH_C_QOS_OFFSET(ch) \
+		GSI_EE_N_CH_C_QOS_OFFSET((ch), GSI_EE_AP)
+#define GSI_EE_N_CH_C_QOS_OFFSET(ch, ee) \
+		(0x0001c05c + 0x4000 * (ee) + 0x80 * (ch))
+#define WRR_WEIGHT_FMASK		GENMASK(3, 0)
+#define MAX_PREFETCH_FMASK		GENMASK(8, 8)
+#define USE_DB_ENG_FMASK		GENMASK(9, 9)
+/* The next field is present for GSI v2.0 and above */
+#define USE_ESCAPE_BUF_ONLY_FMASK	GENMASK(10, 10)
+
+#define GSI_CH_C_SCRATCH_0_OFFSET(ch) \
+		GSI_EE_N_CH_C_SCRATCH_0_OFFSET((ch), GSI_EE_AP)
+#define GSI_EE_N_CH_C_SCRATCH_0_OFFSET(ch, ee) \
+		(0x0001c060 + 0x4000 * (ee) + 0x80 * (ch))
+
+#define GSI_CH_C_SCRATCH_1_OFFSET(ch) \
+		GSI_EE_N_CH_C_SCRATCH_1_OFFSET((ch), GSI_EE_AP)
+#define GSI_EE_N_CH_C_SCRATCH_1_OFFSET(ch, ee) \
+		(0x0001c064 + 0x4000 * (ee) + 0x80 * (ch))
+
+#define GSI_CH_C_SCRATCH_2_OFFSET(ch) \
+		GSI_EE_N_CH_C_SCRATCH_2_OFFSET((ch), GSI_EE_AP)
+#define GSI_EE_N_CH_C_SCRATCH_2_OFFSET(ch, ee) \
+		(0x0001c068 + 0x4000 * (ee) + 0x80 * (ch))
+
+#define GSI_CH_C_SCRATCH_3_OFFSET(ch) \
+		GSI_EE_N_CH_C_SCRATCH_3_OFFSET((ch), GSI_EE_AP)
+#define GSI_EE_N_CH_C_SCRATCH_3_OFFSET(ch, ee) \
+		(0x0001c06c + 0x4000 * (ee) + 0x80 * (ch))
+
+#define GSI_EV_CH_E_CNTXT_0_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_CNTXT_0_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_CNTXT_0_OFFSET(ev, ee) \
+		(0x0001d000 + 0x4000 * (ee) + 0x80 * (ev))
+#define EV_CHTYPE_FMASK			GENMASK(3, 0)
+#define EV_EE_FMASK			GENMASK(7, 4)
+#define EV_EVCHID_FMASK			GENMASK(15, 8)
+#define EV_INTYPE_FMASK			GENMASK(16, 16)
+#define EV_CHSTATE_FMASK		GENMASK(23, 20)
+#define EV_ELEMENT_SIZE_FMASK		GENMASK(31, 24)
+
+#define GSI_EV_CH_E_CNTXT_1_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_CNTXT_1_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_CNTXT_1_OFFSET(ev, ee) \
+		(0x0001d004 + 0x4000 * (ee) + 0x80 * (ev))
+#define EV_R_LENGTH_FMASK		GENMASK(15, 0)
+
+#define GSI_EV_CH_E_CNTXT_2_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_CNTXT_2_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_CNTXT_2_OFFSET(ev, ee) \
+		(0x0001d008 + 0x4000 * (ee) + 0x80 * (ev))
+
+#define GSI_EV_CH_E_CNTXT_3_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_CNTXT_3_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_CNTXT_3_OFFSET(ev, ee) \
+		(0x0001d00c + 0x4000 * (ee) + 0x80 * (ev))
+
+#define GSI_EV_CH_E_CNTXT_4_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_CNTXT_4_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_CNTXT_4_OFFSET(ev, ee) \
+		(0x0001d010 + 0x4000 * (ee) + 0x80 * (ev))
+
+#define GSI_EV_CH_E_CNTXT_8_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_CNTXT_8_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_CNTXT_8_OFFSET(ev, ee) \
+		(0x0001d020 + 0x4000 * (ee) + 0x80 * (ev))
+#define MODT_FMASK			GENMASK(15, 0)
+#define MODC_FMASK			GENMASK(23, 16)
+#define MOD_CNT_FMASK			GENMASK(31, 24)
+
+#define GSI_EV_CH_E_CNTXT_9_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_CNTXT_9_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_CNTXT_9_OFFSET(ev, ee) \
+		(0x0001d024 + 0x4000 * (ee) + 0x80 * (ev))
+
+#define GSI_EV_CH_E_CNTXT_10_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_CNTXT_10_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_CNTXT_10_OFFSET(ev, ee) \
+		(0x0001d028 + 0x4000 * (ee) + 0x80 * (ev))
+
+#define GSI_EV_CH_E_CNTXT_11_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_CNTXT_11_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_CNTXT_11_OFFSET(ev, ee) \
+		(0x0001d02c + 0x4000 * (ee) + 0x80 * (ev))
+
+#define GSI_EV_CH_E_CNTXT_12_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_CNTXT_12_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_CNTXT_12_OFFSET(ev, ee) \
+		(0x0001d030 + 0x4000 * (ee) + 0x80 * (ev))
+
+#define GSI_EV_CH_E_CNTXT_13_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_CNTXT_13_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_CNTXT_13_OFFSET(ev, ee) \
+		(0x0001d034 + 0x4000 * (ee) + 0x80 * (ev))
+
+#define GSI_EV_CH_E_SCRATCH_0_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_SCRATCH_0_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_SCRATCH_0_OFFSET(ev, ee) \
+		(0x0001d048 + 0x4000 * (ee) + 0x80 * (ev))
+
+#define GSI_EV_CH_E_SCRATCH_1_OFFSET(ev) \
+		GSI_EE_N_EV_CH_E_SCRATCH_1_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_SCRATCH_1_OFFSET(ev, ee) \
+		(0x0001d04c + 0x4000 * (ee) + 0x80 * (ev))
+
+#define GSI_CH_C_DOORBELL_0_OFFSET(ch) \
+		GSI_EE_N_CH_C_DOORBELL_0_OFFSET((ch), GSI_EE_AP)
+#define GSI_EE_N_CH_C_DOORBELL_0_OFFSET(ch, ee) \
+			(0x0001e000 + 0x4000 * (ee) + 0x08 * (ch))
+
+#define GSI_EV_CH_E_DOORBELL_0_OFFSET(ev) \
+			GSI_EE_N_EV_CH_E_DOORBELL_0_OFFSET((ev), GSI_EE_AP)
+#define GSI_EE_N_EV_CH_E_DOORBELL_0_OFFSET(ev, ee) \
+			(0x0001e100 + 0x4000 * (ee) + 0x08 * (ev))
+
+#define GSI_GSI_STATUS_OFFSET \
+			GSI_EE_N_GSI_STATUS_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_GSI_STATUS_OFFSET(ee) \
+			(0x0001f000 + 0x4000 * (ee))
+#define ENABLED_FMASK			GENMASK(0, 0)
+
+#define GSI_CH_CMD_OFFSET \
+			GSI_EE_N_CH_CMD_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CH_CMD_OFFSET(ee) \
+			(0x0001f008 + 0x4000 * (ee))
+#define CH_CHID_FMASK			GENMASK(7, 0)
+#define CH_OPCODE_FMASK			GENMASK(31, 24)
+
+#define GSI_EV_CH_CMD_OFFSET \
+			GSI_EE_N_EV_CH_CMD_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_EV_CH_CMD_OFFSET(ee) \
+			(0x0001f010 + 0x4000 * (ee))
+#define EV_CHID_FMASK			GENMASK(7, 0)
+#define EV_OPCODE_FMASK			GENMASK(31, 24)
+
+#define GSI_GENERIC_CMD_OFFSET \
+			GSI_EE_N_GENERIC_CMD_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_GENERIC_CMD_OFFSET(ee) \
+			(0x0001f018 + 0x4000 * (ee))
+#define GENERIC_OPCODE_FMASK		GENMASK(4, 0)
+#define GENERIC_CHID_FMASK		GENMASK(9, 5)
+#define GENERIC_EE_FMASK		GENMASK(13, 10)
+
+#define GSI_GSI_HW_PARAM_2_OFFSET \
+			GSI_EE_N_GSI_HW_PARAM_2_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_GSI_HW_PARAM_2_OFFSET(ee) \
+			(0x0001f040 + 0x4000 * (ee))
+#define IRAM_SIZE_FMASK			GENMASK(2, 0)
+#define IRAM_SIZE_ONE_KB_FVAL			0
+#define IRAM_SIZE_TWO_KB_FVAL			1
+/* The next two values are available for GSI v2.0 and above */
+#define IRAM_SIZE_TWO_N_HALF_KB_FVAL		2
+#define IRAM_SIZE_THREE_KB_FVAL			3
+#define NUM_CH_PER_EE_FMASK		GENMASK(7, 3)
+#define NUM_EV_PER_EE_FMASK		GENMASK(12, 8)
+#define GSI_CH_PEND_TRANSLATE_FMASK	GENMASK(13, 13)
+#define GSI_CH_FULL_LOGIC_FMASK		GENMASK(14, 14)
+/* Fields below are present for GSI v2.0 and above */
+#define GSI_USE_SDMA_FMASK		GENMASK(15, 15)
+#define GSI_SDMA_N_INT_FMASK		GENMASK(18, 16)
+#define GSI_SDMA_MAX_BURST_FMASK	GENMASK(26, 19)
+#define GSI_SDMA_N_IOVEC_FMASK		GENMASK(29, 27)
+/* Fields below are present for GSI v2.2 and above */
+#define GSI_USE_RD_WR_ENG_FMASK		GENMASK(30, 30)
+#define GSI_USE_INTER_EE_FMASK		GENMASK(31, 31)
+
+#define GSI_CNTXT_TYPE_IRQ_OFFSET \
+			GSI_EE_N_CNTXT_TYPE_IRQ_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_TYPE_IRQ_OFFSET(ee) \
+			(0x0001f080 + 0x4000 * (ee))
+#define CH_CTRL_FMASK			GENMASK(0, 0)
+#define EV_CTRL_FMASK			GENMASK(1, 1)
+#define GLOB_EE_FMASK			GENMASK(2, 2)
+#define IEOB_FMASK			GENMASK(3, 3)
+#define INTER_EE_CH_CTRL_FMASK		GENMASK(4, 4)
+#define INTER_EE_EV_CTRL_FMASK		GENMASK(5, 5)
+#define GENERAL_FMASK			GENMASK(6, 6)
+
+#define GSI_CNTXT_TYPE_IRQ_MSK_OFFSET \
+			GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_TYPE_IRQ_MSK_OFFSET(ee) \
+			(0x0001f088 + 0x4000 * (ee))
+#define MSK_CH_CTRL_FMASK		GENMASK(0, 0)
+#define MSK_EV_CTRL_FMASK		GENMASK(1, 1)
+#define MSK_GLOB_EE_FMASK		GENMASK(2, 2)
+#define MSK_IEOB_FMASK			GENMASK(3, 3)
+#define MSK_INTER_EE_CH_CTRL_FMASK	GENMASK(4, 4)
+#define MSK_INTER_EE_EV_CTRL_FMASK	GENMASK(5, 5)
+#define MSK_GENERAL_FMASK		GENMASK(6, 6)
+#define GSI_CNTXT_TYPE_IRQ_MSK_ALL	GENMASK(6, 0)
+
+#define GSI_CNTXT_SRC_CH_IRQ_OFFSET \
+			GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_SRC_CH_IRQ_OFFSET(ee) \
+			(0x0001f090 + 0x4000 * (ee))
+
+#define GSI_CNTXT_SRC_EV_CH_IRQ_OFFSET \
+			GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_OFFSET(ee) \
+			(0x0001f094 + 0x4000 * (ee))
+
+#define GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET \
+			GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_SRC_CH_IRQ_MSK_OFFSET(ee) \
+			(0x0001f098 + 0x4000 * (ee))
+
+#define GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET \
+			GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET(ee) \
+			(0x0001f09c + 0x4000 * (ee))
+
+#define GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET \
+			GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_SRC_CH_IRQ_CLR_OFFSET(ee) \
+			(0x0001f0a0 + 0x4000 * (ee))
+
+#define GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET \
+			GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET(ee) \
+			(0x0001f0a4 + 0x4000 * (ee))
+
+#define GSI_CNTXT_SRC_IEOB_IRQ_OFFSET \
+			GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_OFFSET(ee) \
+			(0x0001f0b0 + 0x4000 * (ee))
+
+#define GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET \
+			GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET(ee) \
+			(0x0001f0b8 + 0x4000 * (ee))
+
+#define GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET \
+			GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET(ee) \
+			(0x0001f0c0 + 0x4000 * (ee))
+
+#define GSI_CNTXT_GLOB_IRQ_STTS_OFFSET \
+			GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GLOB_IRQ_STTS_OFFSET(ee) \
+			(0x0001f100 + 0x4000 * (ee))
+#define ERROR_INT_FMASK			GENMASK(0, 0)
+#define GP_INT1_FMASK			GENMASK(1, 1)
+#define GP_INT2_FMASK			GENMASK(2, 2)
+#define GP_INT3_FMASK			GENMASK(3, 3)
+
+#define GSI_CNTXT_GLOB_IRQ_EN_OFFSET \
+			GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GLOB_IRQ_EN_OFFSET(ee) \
+			(0x0001f108 + 0x4000 * (ee))
+#define EN_ERROR_INT_FMASK		GENMASK(0, 0)
+#define EN_GP_INT1_FMASK		GENMASK(1, 1)
+#define EN_GP_INT2_FMASK		GENMASK(2, 2)
+#define EN_GP_INT3_FMASK		GENMASK(3, 3)
+#define GSI_CNTXT_GLOB_IRQ_ALL		GENMASK(3, 0)
+
+#define GSI_CNTXT_GLOB_IRQ_CLR_OFFSET \
+			GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GLOB_IRQ_CLR_OFFSET(ee) \
+			(0x0001f110 + 0x4000 * (ee))
+#define CLR_ERROR_INT_FMASK		GENMASK(0, 0)
+#define CLR_GP_INT1_FMASK		GENMASK(1, 1)
+#define CLR_GP_INT2_FMASK		GENMASK(2, 2)
+#define CLR_GP_INT3_FMASK		GENMASK(3, 3)
+
+#define GSI_CNTXT_GSI_IRQ_STTS_OFFSET \
+			GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GSI_IRQ_STTS_OFFSET(ee) \
+			(0x0001f118 + 0x4000 * (ee))
+#define BREAK_POINT_FMASK		GENMASK(0, 0)
+#define BUS_ERROR_FMASK			GENMASK(1, 1)
+#define CMD_FIFO_OVRFLOW_FMASK		GENMASK(2, 2)
+#define MCS_STACK_OVRFLOW_FMASK		GENMASK(3, 3)
+
+#define GSI_CNTXT_GSI_IRQ_EN_OFFSET \
+			GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GSI_IRQ_EN_OFFSET(ee) \
+			(0x0001f120 + 0x4000 * (ee))
+#define EN_BREAK_POINT_FMASK		GENMASK(0, 0)
+#define EN_BUS_ERROR_FMASK		GENMASK(1, 1)
+#define EN_CMD_FIFO_OVRFLOW_FMASK	GENMASK(2, 2)
+#define EN_MCS_STACK_OVRFLOW_FMASK	GENMASK(3, 3)
+#define GSI_CNTXT_GSI_IRQ_ALL		GENMASK(3, 0)
+
+#define GSI_CNTXT_GSI_IRQ_CLR_OFFSET \
+			GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_GSI_IRQ_CLR_OFFSET(ee) \
+			(0x0001f128 + 0x4000 * (ee))
+#define CLR_BREAK_POINT_FMASK		GENMASK(0, 0)
+#define CLR_BUS_ERROR_FMASK		GENMASK(1, 1)
+#define CLR_CMD_FIFO_OVRFLOW_FMASK	GENMASK(2, 2)
+#define CLR_MCS_STACK_OVRFLOW_FMASK	GENMASK(3, 3)
+
+#define GSI_CNTXT_INTSET_OFFSET \
+			GSI_EE_N_CNTXT_INTSET_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_INTSET_OFFSET(ee) \
+			(0x0001f180 + 0x4000 * (ee))
+#define INTYPE_FMASK			GENMASK(0, 0)
+
+#define GSI_ERROR_LOG_OFFSET \
+			GSI_EE_N_ERROR_LOG_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_ERROR_LOG_OFFSET(ee) \
+			(0x0001f200 + 0x4000 * (ee))
+#define ERR_ARG3_FMASK			GENMASK(3, 0)
+#define ERR_ARG2_FMASK			GENMASK(7, 4)
+#define ERR_ARG1_FMASK			GENMASK(11, 8)
+#define ERR_CODE_FMASK			GENMASK(15, 12)
+#define ERR_VIRT_IDX_FMASK		GENMASK(23, 19)
+#define ERR_TYPE_FMASK			GENMASK(27, 24)
+#define ERR_EE_FMASK			GENMASK(31, 28)
+
+#define GSI_ERROR_LOG_CLR_OFFSET \
+			GSI_EE_N_ERROR_LOG_CLR_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_ERROR_LOG_CLR_OFFSET(ee) \
+			(0x0001f210 + 0x4000 * (ee))
+
+#define GSI_CNTXT_SCRATCH_0_OFFSET \
+			GSI_EE_N_CNTXT_SCRATCH_0_OFFSET(GSI_EE_AP)
+#define GSI_EE_N_CNTXT_SCRATCH_0_OFFSET(ee) \
+			(0x0001f400 + 0x4000 * (ee))
+#define INTER_EE_RESULT_FMASK		GENMASK(2, 0)
+#define GENERIC_EE_RESULT_FMASK		GENMASK(7, 5)
+#define GENERIC_EE_SUCCESS_FVAL			1
+#define GENERIC_EE_NO_RESOURCES_FVAL		7
+#define USB_MAX_PACKET_FMASK		GENMASK(15, 15)	/* 0: HS; 1: SS */
+#define MHI_BASE_CHANNEL_FMASK		GENMASK(31, 24)
+
+#endif	/* _GSI_REG_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 07/17] soc: qcom: ipa: the generic software interface
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (5 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 06/17] soc: qcom: ipa: GSI headers Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 08/17] soc: qcom: ipa: IPA interface to GSI Alex Elder
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

This patch includes "gsi.c", which implements the generic software
interface (GSI) for IPA.  The generic software interface abstracts
channels, which provide a means of transferring data either from the
AP to the IPA, or from the IPA to the AP.  A ring buffer of "transfer
elements" (TREs) is used to describe data transfers to perform.  The
AP writes a doorbell register associated with a channel to let the IPA know
it has added new entries (for an AP->IPA channel) or has finished
processing entries (for an IPA->AP channel).

Each channel also has an event ring buffer, used by the IPA to
communicate information about events related to a channel (for
example, the completion of TREs).  The IPA writes its own doorbell
register, which triggers an interrupt on the AP, to signal that
new event information has arrived.
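
To make the ring and doorbell arithmetic above concrete, here is a small
standalone userspace sketch (not part of this patch; the names, sizes, and
addresses are made up) showing how an element address is derived from a
ring index, and how a completion event's address maps back to an index:

  #include <stdint.h>
  #include <stdio.h>

  #define RING_ELEMENT_SIZE	16	/* bytes per element (example value) */
  #define RING_COUNT		8	/* power-of-2 number of elements */

  /* 32-bit "DMA address" of the ring element at the given slot */
  static uint32_t ring_addr(uint32_t base, uint32_t slot)
  {
  	return base + slot * RING_ELEMENT_SIZE;
  }

  /* Recover the slot number from an element address found in an event */
  static uint32_t ring_index(uint32_t base, uint32_t addr)
  {
  	return (addr - base) / RING_ELEMENT_SIZE;
  }

  int main(void)
  {
  	uint32_t base = 0x10000;	/* arbitrary size-aligned ring base */
  	uint32_t index = 9;		/* first unfilled entry (wrapped once) */
  	uint32_t doorbell;

  	/* The event ring doorbell reports the last entry processed, which
  	 * is one less than the first unfilled entry, modulo the ring size.
  	 */
  	doorbell = ring_addr(base, (index - 1) % RING_COUNT);
  	printf("doorbell value 0x%x maps back to slot %u\n",
  	       doorbell, ring_index(base, doorbell));

  	return 0;
  }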

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/gsi.c | 2055 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 2055 insertions(+)
 create mode 100644 drivers/net/ipa/gsi.c

diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
new file mode 100644
index 000000000000..845478a19a4f
--- /dev/null
+++ b/drivers/net/ipa/gsi.c
@@ -0,0 +1,2055 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/bits.h>
+#include <linux/bitfield.h>
+#include <linux/mutex.h>
+#include <linux/completion.h>
+#include <linux/io.h>
+#include <linux/bug.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/netdevice.h>
+
+#include "gsi.h"
+#include "gsi_reg.h"
+#include "gsi_private.h"
+#include "gsi_trans.h"
+#include "ipa_gsi.h"
+#include "ipa_data.h"
+
+/**
+ * DOC: The IPA Generic Software Interface
+ *
+ * The generic software interface (GSI) is an integral component of the IPA,
+ * providing a well-defined communication layer between the AP subsystem
+ * and the IPA core.  The modem uses the GSI layer as well.
+ *
+ *	--------	     ---------
+ *	|      |	     |	     |
+ *	|  AP  +<---.	.----+ Modem |
+ *	|      +--. |	| .->+	     |
+ *	|      |  | |	| |  |	     |
+ *	--------  | |	| |  ---------
+ *		  v |	v |
+ *		--+-+---+-+--
+ *		|    GSI    |
+ *		|-----------|
+ *		|	    |
+ *		|    IPA    |
+ *		|	    |
+ *		-------------
+ *
+ * In the above diagram, the AP and Modem represent "execution environments"
+ * (EEs), which are independent operating environments that use the IPA for
+ * data transfer.
+ *
+ * Each EE uses a set of unidirectional GSI "channels," which allow transfer
+ * of data to or from the IPA.  A channel is implemented as a ring buffer,
+ * with a DRAM-resident array of "transfer elements" (TREs) available to
+ * describe transfers to or from other EEs through the IPA.  A transfer
+ * element can also contain an immediate command, requesting the IPA perform
+ * actions other than data transfer.
+ *
+ * Each TRE refers to a block of data--also located in DRAM.  After writing
+ * one or more TREs to a channel, the writer (either the IPA or an EE) writes
+ * a doorbell register to inform the receiving side how many elements have
+ * been written.
+ *
+ * Each channel has a GSI "event ring" associated with it.  An event ring
+ * is implemented very much like a channel ring, but is always directed from
+ * the IPA to an EE.  The IPA notifies an EE (such as the AP) about channel
+ * events by adding an entry to the event ring associated with the channel.
+ * The GSI then writes its doorbell for the event ring, causing the target
+ * EE to be interrupted.  Each entry in an event ring contains a pointer
+ * to the channel TRE whose completion the event represents.
+ *
+ * Each TRE in a channel ring has a set of flags.  One flag indicates whether
+ * the completion of the transfer operation generates an entry (and possibly
+ * an interrupt) in the channel's event ring.  Other flags allow transfer
+ * elements to be chained together, forming a single logical transaction.
+ * TRE flags are used to control whether and when interrupts are generated
+ * to signal completion of channel transfers.
+ *
+ * Elements in channel and event rings are completed (or consumed) strictly
+ * in order.  Completion of one entry implies the completion of all preceding
+ * entries.  A single completion interrupt can therefore communicate the
+ * completion of many transfers.
+ *
+ * Note that all GSI registers are little-endian, which is the assumed
+ * endianness of I/O space accesses.  The accessor functions perform byte
+ * swapping if needed (i.e., for a big endian CPU).
+ */
+
+/* Delay period for interrupt moderation (in 32KHz IPA internal timer ticks) */
+#define GSI_EVT_RING_INT_MODT		(32 * 1) /* 1ms under 32KHz clock */
+
+#define GSI_CMD_TIMEOUT			5	/* seconds */
+
+#define GSI_CHANNEL_STOP_RX_RETRIES	10
+
+#define GSI_MHI_EVENT_ID_START		10	/* 1st reserved event id */
+#define GSI_MHI_EVENT_ID_END		16	/* Last reserved event id */
+
+#define GSI_ISR_MAX_ITER		50	/* Detect interrupt storms */
+
+/* An entry in an event ring */
+struct gsi_event {
+	__le64 xfer_ptr;
+	__le16 len;
+	u8 reserved1;
+	u8 code;
+	__le16 reserved2;
+	u8 type;
+	u8 chid;
+};
+
+/* Hardware values from the error log register error code field */
+enum gsi_err_code {
+	GSI_INVALID_TRE_ERR			= 0x1,
+	GSI_OUT_OF_BUFFERS_ERR			= 0x2,
+	GSI_OUT_OF_RESOURCES_ERR		= 0x3,
+	GSI_UNSUPPORTED_INTER_EE_OP_ERR		= 0x4,
+	GSI_EVT_RING_EMPTY_ERR			= 0x5,
+	GSI_NON_ALLOCATED_EVT_ACCESS_ERR	= 0x6,
+	GSI_HWO_1_ERR				= 0x8,
+};
+
+/* Hardware values from the error log register error type field */
+enum gsi_err_type {
+	GSI_ERR_TYPE_GLOB	= 0x1,
+	GSI_ERR_TYPE_CHAN	= 0x2,
+	GSI_ERR_TYPE_EVT	= 0x3,
+};
+
+/* Hardware values used when programming an event ring */
+enum gsi_evt_chtype {
+	GSI_EVT_CHTYPE_MHI_EV	= 0x0,
+	GSI_EVT_CHTYPE_XHCI_EV	= 0x1,
+	GSI_EVT_CHTYPE_GPI_EV	= 0x2,
+	GSI_EVT_CHTYPE_XDCI_EV	= 0x3,
+};
+
+/* Hardware values used when programming a channel */
+enum gsi_channel_protocol {
+	GSI_CHANNEL_PROTOCOL_MHI	= 0x0,
+	GSI_CHANNEL_PROTOCOL_XHCI	= 0x1,
+	GSI_CHANNEL_PROTOCOL_GPI	= 0x2,
+	GSI_CHANNEL_PROTOCOL_XDCI	= 0x3,
+};
+
+/* Hardware values representing an event ring immediate command opcode */
+enum gsi_evt_cmd_opcode {
+	GSI_EVT_ALLOCATE	= 0x0,
+	GSI_EVT_RESET		= 0x9,
+	GSI_EVT_DE_ALLOC	= 0xa,
+};
+
+/* Hardware values representing a generic immediate command opcode */
+enum gsi_generic_cmd_opcode {
+	GSI_GENERIC_HALT_CHANNEL	= 0x1,
+	GSI_GENERIC_ALLOCATE_CHANNEL	= 0x2,
+};
+
+/* Hardware values representing a channel immediate command opcode */
+enum gsi_ch_cmd_opcode {
+	GSI_CH_ALLOCATE	= 0x0,
+	GSI_CH_START	= 0x1,
+	GSI_CH_STOP	= 0x2,
+	GSI_CH_RESET	= 0x9,
+	GSI_CH_DE_ALLOC	= 0xa,
+};
+
+/** gsi_channel_scratch_gpi - GPI protocol scratch register
+ * @max_outstanding_tre:
+ *	Defines the maximum number of TREs allowed in a single transaction
+ *	on a channel (in bytes).  This determines the amount of prefetch
+ *	performed by the hardware.  We configure this to equal the size of
+ *	the TLV FIFO for the channel.
+ * @outstanding_threshold:
+ *	Defines the threshold (in bytes) determining when the sequencer
+ *	should update the channel doorbell.  We configure this to equal
+ *	the size of two TREs.
+ */
+struct gsi_channel_scratch_gpi {
+	u64 reserved1;
+	u16 reserved2;
+	u16 max_outstanding_tre;
+	u16 reserved3;
+	u16 outstanding_threshold;
+};
+
+/** gsi_channel_scratch - channel scratch configuration area
+ *
+ * The exact interpretation of this register is protocol-specific.
+ * We only use GPI channels; see struct gsi_channel_scratch_gpi, above.
+ */
+union gsi_channel_scratch {
+	struct gsi_channel_scratch_gpi gpi;
+	struct {
+		u32 word1;
+		u32 word2;
+		u32 word3;
+		u32 word4;
+	} data;
+};
+
+/* Check things that can be validated at build time. */
+static void gsi_validate_build(void)
+{
+	/* This is used as a divisor */
+	BUILD_BUG_ON(!GSI_RING_ELEMENT_SIZE);
+
+	/* Code assumes the size of channel and event ring element are
+	 * the same (and fixed).  Make sure the size of an event ring
+	 * element is what's expected.
+	 */
+	BUILD_BUG_ON(sizeof(struct gsi_event) != GSI_RING_ELEMENT_SIZE);
+
+	/* Hardware requires a 2^n ring size.  We ensure the number of
+	 * elements in an event ring is a power of 2 elsewhere; this
+	 * ensures the elements themselves meet the requirement.
+	 */
+	BUILD_BUG_ON(!is_power_of_2(GSI_RING_ELEMENT_SIZE));
+
+	/* The channel element size must fit in this field */
+	BUILD_BUG_ON(GSI_RING_ELEMENT_SIZE > field_max(ELEMENT_SIZE_FMASK));
+
+	/* The event ring element size must fit in this field */
+	BUILD_BUG_ON(GSI_RING_ELEMENT_SIZE > field_max(EV_ELEMENT_SIZE_FMASK));
+}
+
+/* Return the channel id associated with a given channel */
+static u32 gsi_channel_id(struct gsi_channel *channel)
+{
+	return channel - &channel->gsi->channel[0];
+}
+
+static void gsi_irq_ieob_enable(struct gsi *gsi, u32 evt_ring_id)
+{
+	u32 val;
+
+	gsi->event_enable_bitmap |= BIT(evt_ring_id);
+	val = gsi->event_enable_bitmap;
+	iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
+}
+
+static void gsi_isr_ieob_clear(struct gsi *gsi, u32 mask)
+{
+	iowrite32(mask, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_CLR_OFFSET);
+}
+
+static void gsi_irq_ieob_disable(struct gsi *gsi, u32 evt_ring_id)
+{
+	u32 val;
+
+	gsi->event_enable_bitmap &= ~BIT(evt_ring_id);
+	val = gsi->event_enable_bitmap;
+	iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
+}
+
+/* Enable all GSI interrupt types */
+static void gsi_irq_enable(struct gsi *gsi)
+{
+	u32 val;
+
+	/* We don't use inter-EE channel or event interrupts */
+	val = GSI_CNTXT_TYPE_IRQ_MSK_ALL;
+	val &= ~MSK_INTER_EE_CH_CTRL_FMASK;
+	val &= ~MSK_INTER_EE_EV_CTRL_FMASK;
+	iowrite32(val, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET);
+
+	val = GENMASK(gsi->channel_count - 1, 0);
+	iowrite32(val, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET);
+
+	val = GENMASK(gsi->evt_ring_count - 1, 0);
+	iowrite32(val, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET);
+
+	/* Each IEOB interrupt is enabled (later) as needed by channels */
+	iowrite32(0, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
+
+	val = GSI_CNTXT_GLOB_IRQ_ALL;
+	iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET);
+
+	/* Never enable GSI_BREAK_POINT */
+	val = GSI_CNTXT_GSI_IRQ_ALL & ~EN_BREAK_POINT_FMASK;
+	iowrite32(val, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET);
+}
+
+/* Disable all GSI interrupt types */
+static void gsi_irq_disable(struct gsi *gsi)
+{
+	iowrite32(0, gsi->virt + GSI_CNTXT_GSI_IRQ_EN_OFFSET);
+	iowrite32(0, gsi->virt + GSI_CNTXT_GLOB_IRQ_EN_OFFSET);
+	iowrite32(0, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
+	iowrite32(0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET);
+	iowrite32(0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET);
+	iowrite32(0, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET);
+}
+
+/* Return the virtual address associated with a ring index */
+void *gsi_ring_virt(struct gsi_ring *ring, u32 index)
+{
+	/* Note: index *must* be used modulo the ring count here */
+	return ring->virt + (index % ring->count) * GSI_RING_ELEMENT_SIZE;
+}
+
+/* Return the 32-bit DMA address associated with a ring index */
+static u32 gsi_ring_addr(struct gsi_ring *ring, u32 index)
+{
+	return (ring->addr & GENMASK(31, 0)) + index * GSI_RING_ELEMENT_SIZE;
+}
+
+/* Return the ring index of a 32-bit ring offset */
+static u32 gsi_ring_index(struct gsi_ring *ring, u32 offset)
+{
+	return (offset - gsi_ring_addr(ring, 0)) / GSI_RING_ELEMENT_SIZE;
+}
+
+/* Issue a GSI command by writing a value to a register, then wait for
+ * completion to be signaled.  Returns true if the command completes
+ * or false if it times out.
+ */
+static bool
+gsi_command(struct gsi *gsi, u32 reg, u32 val, struct completion *completion)
+{
+	reinit_completion(completion);
+
+	iowrite32(val, gsi->virt + reg);
+
+	return !!wait_for_completion_timeout(completion, GSI_CMD_TIMEOUT * HZ);
+}
+
+/* Return the hardware's notion of the current state of an event ring */
+static enum gsi_evt_ring_state
+gsi_evt_ring_state(struct gsi *gsi, u32 evt_ring_id)
+{
+	u32 val;
+
+	val = ioread32(gsi->virt + GSI_EV_CH_E_CNTXT_0_OFFSET(evt_ring_id));
+
+	return u32_get_bits(val, EV_CHSTATE_FMASK);
+}
+
+/* Issue an event ring command and wait for it to complete */
+static int evt_ring_command(struct gsi *gsi, u32 evt_ring_id,
+			    enum gsi_evt_cmd_opcode opcode)
+{
+	struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+	struct completion *completion = &evt_ring->completion;
+	u32 val;
+
+	val = u32_encode_bits(evt_ring_id, EV_CHID_FMASK);
+	val |= u32_encode_bits(opcode, EV_OPCODE_FMASK);
+
+	if (gsi_command(gsi, GSI_EV_CH_CMD_OFFSET, val, completion))
+		return 0;	/* Success! */
+
+	dev_err(gsi->dev, "GSI command %u to event ring %u timed out "
+		"(state is %u)\n", opcode, evt_ring_id, evt_ring->state);
+
+	return -ETIMEDOUT;
+}
+
+/* Allocate an event ring in NOT_ALLOCATED state */
+static int gsi_evt_ring_alloc_command(struct gsi *gsi, u32 evt_ring_id)
+{
+	struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+	int ret;
+
+	/* Get initial event ring state */
+	evt_ring->state = gsi_evt_ring_state(gsi, evt_ring_id);
+
+	if (evt_ring->state != GSI_EVT_RING_STATE_NOT_ALLOCATED)
+		return -EINVAL;
+
+	ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_ALLOCATE);
+	if (!ret && evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) {
+		dev_err(gsi->dev, "bad event ring state (%u) after alloc\n",
+			evt_ring->state);
+		ret = -EIO;
+	}
+
+	return ret;
+}
+
+/* Reset a GSI event ring in ALLOCATED or ERROR state. */
+static void gsi_evt_ring_reset_command(struct gsi *gsi, u32 evt_ring_id)
+{
+	struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+	enum gsi_evt_ring_state state = evt_ring->state;
+	int ret;
+
+	if (state != GSI_EVT_RING_STATE_ALLOCATED &&
+	    state != GSI_EVT_RING_STATE_ERROR) {
+		dev_err(gsi->dev, "bad event ring state (%u) before reset\n",
+			evt_ring->state);
+		return;
+	}
+
+	ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_RESET);
+	if (!ret && evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED)
+		dev_err(gsi->dev, "bad event ring state (%u) after reset\n",
+			evt_ring->state);
+}
+
+/* Issue a hardware de-allocation request for an allocated event ring */
+static void gsi_evt_ring_de_alloc_command(struct gsi *gsi, u32 evt_ring_id)
+{
+	struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+	int ret;
+
+	if (evt_ring->state != GSI_EVT_RING_STATE_ALLOCATED) {
+		dev_err(gsi->dev, "bad event ring state (%u) before dealloc\n",
+			evt_ring->state);
+		return;
+	}
+
+	ret = evt_ring_command(gsi, evt_ring_id, GSI_EVT_DE_ALLOC);
+	if (!ret && evt_ring->state != GSI_EVT_RING_STATE_NOT_ALLOCATED)
+		dev_err(gsi->dev, "bad event ring state (%u) after dealloc\n",
+			evt_ring->state);
+}
+
+/* Return the hardware's notion of the current state of a channel */
+static enum gsi_channel_state
+gsi_channel_state(struct gsi *gsi, u32 channel_id)
+{
+	u32 val;
+
+	val = ioread32(gsi->virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id));
+
+	return u32_get_bits(val, CHSTATE_FMASK);
+}
+
+/* Issue a channel command and wait for it to complete */
+static int
+gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode opcode)
+{
+	struct completion *completion = &channel->completion;
+	u32 channel_id = gsi_channel_id(channel);
+	u32 val;
+
+	val = u32_encode_bits(channel_id, CH_CHID_FMASK);
+	val |= u32_encode_bits(opcode, CH_OPCODE_FMASK);
+
+	if (gsi_command(channel->gsi, GSI_CH_CMD_OFFSET, val, completion))
+		return 0;	/* Success! */
+
+	dev_err(channel->gsi->dev, "GSI command %u to channel %u timed out "
+		"(state is %u)\n", opcode, channel_id, channel->state);
+
+	return -ETIMEDOUT;
+}
+
+/* Allocate GSI channel in NOT_ALLOCATED state */
+static int gsi_channel_alloc_command(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	int ret;
+
+	/* Get initial channel state */
+	channel->state = gsi_channel_state(gsi, channel_id);
+
+	if (channel->state != GSI_CHANNEL_STATE_NOT_ALLOCATED)
+		return -EINVAL;
+
+	ret = gsi_channel_command(channel, GSI_CH_ALLOCATE);
+	if (!ret && channel->state != GSI_CHANNEL_STATE_ALLOCATED) {
+		dev_err(gsi->dev, "bad channel state (%u) after alloc\n",
+			channel->state);
+		ret = -EIO;
+	}
+
+	return ret;
+}
+
+/* Start an ALLOCATED channel */
+static int gsi_channel_start_command(struct gsi_channel *channel)
+{
+	enum gsi_channel_state state = channel->state;
+	int ret;
+
+	if (state != GSI_CHANNEL_STATE_ALLOCATED &&
+	    state != GSI_CHANNEL_STATE_STOPPED)
+		return -EINVAL;
+
+	ret = gsi_channel_command(channel, GSI_CH_START);
+	if (!ret && channel->state != GSI_CHANNEL_STATE_STARTED) {
+		dev_err(channel->gsi->dev,
+			"bad channel state (%u) after start\n",
+			channel->state);
+		ret = -EIO;
+	}
+
+	return ret;
+}
+
+/* Stop a GSI channel in STARTED state */
+static int gsi_channel_stop_command(struct gsi_channel *channel)
+{
+	enum gsi_channel_state state = channel->state;
+	int ret;
+
+	if (state != GSI_CHANNEL_STATE_STARTED &&
+	    state != GSI_CHANNEL_STATE_STOP_IN_PROC)
+		return -EINVAL;
+
+	ret = gsi_channel_command(channel, GSI_CH_STOP);
+	if (ret || channel->state == GSI_CHANNEL_STATE_STOPPED)
+		return ret;
+
+	/* We may have to try again if stop is in progress */
+	if (channel->state == GSI_CHANNEL_STATE_STOP_IN_PROC)
+		return -EAGAIN;
+
+	dev_err(channel->gsi->dev, "bad channel state (%u) after stop\n",
+		channel->state);
+
+	return -EIO;
+}
+
+/* Reset a GSI channel in ALLOCATED or ERROR state. */
+static void gsi_channel_reset_command(struct gsi_channel *channel)
+{
+	int ret;
+
+	msleep(1);	/* A short delay is required before a RESET command */
+
+	if (channel->state != GSI_CHANNEL_STATE_STOPPED &&
+	    channel->state != GSI_CHANNEL_STATE_ERROR) {
+		dev_err(channel->gsi->dev,
+			"bad channel state (%u) before reset\n",
+			channel->state);
+		return;
+	}
+
+	ret = gsi_channel_command(channel, GSI_CH_RESET);
+	if (!ret && channel->state != GSI_CHANNEL_STATE_ALLOCATED)
+		dev_err(channel->gsi->dev,
+			"bad channel state (%u) after reset\n",
+			channel->state);
+}
+
+/* Deallocate an ALLOCATED GSI channel */
+static void gsi_channel_de_alloc_command(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	int ret;
+
+	if (channel->state != GSI_CHANNEL_STATE_ALLOCATED) {
+		dev_err(gsi->dev, "bad channel state (%u) before dealloc\n",
+			channel->state);
+		return;
+	}
+
+	ret = gsi_channel_command(channel, GSI_CH_DE_ALLOC);
+	if (!ret && channel->state != GSI_CHANNEL_STATE_NOT_ALLOCATED)
+		dev_err(gsi->dev, "bad channel state (%u) after dealloc\n",
+			channel->state);
+}
+
+/* Ring an event ring doorbell, reporting the last entry processed by the AP.
+ * The index argument (modulo the ring count) is the first unfilled entry, so
+ * we supply one less than that with the doorbell.  Update the event ring
+ * index field with the value provided.
+ */
+static void gsi_evt_ring_doorbell(struct gsi *gsi, u32 evt_ring_id, u32 index)
+{
+	struct gsi_ring *ring = &gsi->evt_ring[evt_ring_id].ring;
+	u32 val;
+
+	ring->index = index;	/* Next unused entry */
+
+	/* Note: index *must* be used modulo the ring count here */
+	val = gsi_ring_addr(ring, (index - 1) % ring->count);
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_DOORBELL_0_OFFSET(evt_ring_id));
+}
+
+/* Program an event ring for use */
+static void gsi_evt_ring_program(struct gsi *gsi, u32 evt_ring_id)
+{
+	struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+	size_t size = evt_ring->ring.count * GSI_RING_ELEMENT_SIZE;
+	u32 val;
+
+	val = u32_encode_bits(GSI_EVT_CHTYPE_GPI_EV, EV_CHTYPE_FMASK);
+	val |= EV_INTYPE_FMASK;
+	val |= u32_encode_bits(GSI_RING_ELEMENT_SIZE, EV_ELEMENT_SIZE_FMASK);
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_0_OFFSET(evt_ring_id));
+
+	val = u32_encode_bits(size, EV_R_LENGTH_FMASK);
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_1_OFFSET(evt_ring_id));
+
+	/* The context 2 and 3 registers store the low-order and
+	 * high-order 32 bits of the address of the event ring,
+	 * respectively.
+	 */
+	val = evt_ring->ring.addr & GENMASK(31, 0);
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_2_OFFSET(evt_ring_id));
+
+	val = evt_ring->ring.addr >> 32;
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_3_OFFSET(evt_ring_id));
+
+	/* Enable interrupt moderation by setting the moderation delay */
+	val = u32_encode_bits(GSI_EVT_RING_INT_MODT, MODT_FMASK);
+	val |= u32_encode_bits(1, MODC_FMASK);	/* comes from channel */
+	iowrite32(val, gsi->virt + GSI_EV_CH_E_CNTXT_8_OFFSET(evt_ring_id));
+
+	/* No MSI write data, and MSI address high and low address is 0 */
+	iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_9_OFFSET(evt_ring_id));
+	iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_10_OFFSET(evt_ring_id));
+	iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_11_OFFSET(evt_ring_id));
+
+	/* We don't need to get event read pointer updates */
+	iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_12_OFFSET(evt_ring_id));
+	iowrite32(0, gsi->virt + GSI_EV_CH_E_CNTXT_13_OFFSET(evt_ring_id));
+
+	/* Finally, tell the hardware we've completed event 0 (arbitrary) */
+	gsi_evt_ring_doorbell(gsi, evt_ring_id, 0);
+}
+
+/* Return the last (most recent) transaction completed on a channel. */
+static struct gsi_trans *gsi_channel_trans_last(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct gsi_trans *trans;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	if (!list_empty(&trans_info->complete))
+		trans = list_last_entry(&trans_info->complete,
+					struct gsi_trans, links);
+	else if (!list_empty(&trans_info->polled))
+		trans = list_last_entry(&trans_info->polled,
+					struct gsi_trans, links);
+	else
+		trans = NULL;
+
+	/* Caller will wait for this, so take a reference */
+	if (trans)
+		refcount_inc(&trans->refcount);
+
+	spin_unlock_bh(&trans_info->spinlock);
+
+	return trans;
+}
+
+/* Wait for transaction activity on a channel to complete */
+static void gsi_channel_trans_quiesce(struct gsi_channel *channel)
+{
+	struct gsi_trans *trans;
+
+	/* Get the last transaction, and wait for it to complete */
+	trans = gsi_channel_trans_last(channel);
+	if (trans) {
+		wait_for_completion(&trans->completion);
+		gsi_trans_free(trans);
+	}
+}
+
+/* Stop channel activity.  Transactions may not be allocated until thawed. */
+static void gsi_channel_freeze(struct gsi_channel *channel)
+{
+	gsi_channel_trans_quiesce(channel);
+
+	napi_disable(&channel->napi);
+
+	gsi_irq_ieob_disable(channel->gsi, channel->evt_ring_id);
+}
+
+/* Allow transactions to be used on the channel again. */
+static void gsi_channel_thaw(struct gsi_channel *channel)
+{
+	gsi_irq_ieob_enable(channel->gsi, channel->evt_ring_id);
+
+	napi_enable(&channel->napi);
+}
+
+/* Program a channel for use */
+static void gsi_channel_program(struct gsi_channel *channel, bool doorbell)
+{
+	size_t size = channel->tre_ring.count * GSI_RING_ELEMENT_SIZE;
+	u32 channel_id = gsi_channel_id(channel);
+	union gsi_channel_scratch scr = { };
+	struct gsi_channel_scratch_gpi *gpi;
+	struct gsi *gsi = channel->gsi;
+	u32 wrr_weight = 0;
+	u32 val;
+
+	/* Arbitrarily pick TRE 0 as the first channel element to use */
+	channel->tre_ring.index = 0;
+
+	/* We program all channels to use GPI protocol */
+	val = u32_encode_bits(GSI_CHANNEL_PROTOCOL_GPI, CHTYPE_PROTOCOL_FMASK);
+	if (channel->toward_ipa)
+		val |= CHTYPE_DIR_FMASK;
+	val |= u32_encode_bits(channel->evt_ring_id, ERINDEX_FMASK);
+	val |= u32_encode_bits(GSI_RING_ELEMENT_SIZE, ELEMENT_SIZE_FMASK);
+	iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id));
+
+	val = u32_encode_bits(size, R_LENGTH_FMASK);
+	iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_1_OFFSET(channel_id));
+
+	/* The context 2 and 3 registers store the low-order and
+	 * high-order 32 bits of the address of the channel ring,
+	 * respectively.
+	 */
+	val = channel->tre_ring.addr & GENMASK(31, 0);
+	iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_2_OFFSET(channel_id));
+
+	val = channel->tre_ring.addr >> 32;
+	iowrite32(val, gsi->virt + GSI_CH_C_CNTXT_3_OFFSET(channel_id));
+
+	/* Command channel gets low weighted round-robin priority */
+	if (channel->command)
+		wrr_weight = field_max(WRR_WEIGHT_FMASK);
+	val = u32_encode_bits(wrr_weight, WRR_WEIGHT_FMASK);
+
+	/* Max prefetch is 1 segment (do not set MAX_PREFETCH_FMASK) */
+
+	/* Enable the doorbell engine if requested */
+	if (doorbell)
+		val |= USE_DB_ENG_FMASK;
+
+	if (!channel->use_prefetch)
+		val |= USE_ESCAPE_BUF_ONLY_FMASK;
+
+	iowrite32(val, gsi->virt + GSI_CH_C_QOS_OFFSET(channel_id));
+
+	/* Now update the scratch registers for GPI protocol */
+	gpi = &scr.gpi;
+	gpi->max_outstanding_tre = gsi_channel_trans_tre_max(gsi, channel_id) *
+					GSI_RING_ELEMENT_SIZE;
+	gpi->outstanding_threshold = 2 * GSI_RING_ELEMENT_SIZE;
+
+	val = scr.data.word1;
+	iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_0_OFFSET(channel_id));
+
+	val = scr.data.word2;
+	iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_1_OFFSET(channel_id));
+
+	val = scr.data.word3;
+	iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_2_OFFSET(channel_id));
+
+	/* We must preserve the upper 16 bits of the last scratch register.
+	 * The next sequence assumes those bits remain unchanged between the
+	 * read and the write.
+	 */
+	val = ioread32(gsi->virt + GSI_CH_C_SCRATCH_3_OFFSET(channel_id));
+	val = (scr.data.word4 & GENMASK(31, 16)) | (val & GENMASK(15, 0));
+	iowrite32(val, gsi->virt + GSI_CH_C_SCRATCH_3_OFFSET(channel_id));
+
+	/* All done! */
+}
+
+static void gsi_channel_deprogram(struct gsi_channel *channel)
+{
+	/* Nothing to do */
+}
+
+/* Start an allocated GSI channel */
+int gsi_channel_start(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	u32 evt_ring_id = channel->evt_ring_id;
+	int ret;
+
+	mutex_lock(&gsi->mutex);
+
+	ret = gsi_channel_start_command(channel);
+
+	mutex_unlock(&gsi->mutex);
+
+	/* Clear the channel's event ring interrupt in case it's pending */
+	gsi_isr_ieob_clear(gsi, BIT(evt_ring_id));
+
+	gsi_channel_thaw(channel);
+
+	return ret;
+}
+
+/* Stop a started channel */
+int gsi_channel_stop(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	u32 retries;
+	int ret;
+
+	gsi_channel_freeze(channel);
+
+	/* Channel could have entered STOPPED state since last call if the
+	 * STOP command timed out.  We won't stop a channel if stopping it
+	 * was successful previously (so we still want the freeze above).
+	 */
+	if (channel->state == GSI_CHANNEL_STATE_STOPPED)
+		return 0;
+
+	/* RX channels might require a little time to enter STOPPED state */
+	retries = channel->toward_ipa ? 0 : GSI_CHANNEL_STOP_RX_RETRIES;
+
+	mutex_lock(&gsi->mutex);
+
+	do {
+		ret = gsi_channel_stop_command(channel);
+		if (ret != -EAGAIN)
+			break;
+		msleep(1);
+	} while (retries--);
+
+	mutex_unlock(&gsi->mutex);
+
+	/* Thaw the channel if we need to retry (or on error) */
+	if (ret)
+		gsi_channel_thaw(channel);
+
+	return ret;
+}
+
+/* Reset and reconfigure a channel (possibly leaving doorbell disabled) */
+void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool db_enable)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	mutex_lock(&gsi->mutex);
+
+	/* Due to a hardware quirk we need to reset RX channels twice. */
+	gsi_channel_reset_command(channel);
+	if (!channel->toward_ipa)
+		gsi_channel_reset_command(channel);
+
+	gsi_channel_program(channel, db_enable);
+	gsi_channel_trans_cancel_pending(channel);
+
+	mutex_unlock(&gsi->mutex);
+}
+
+/* Stop a STARTED channel for suspend (using stop if requested) */
+int gsi_channel_suspend(struct gsi *gsi, u32 channel_id, bool stop)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	if (stop)
+		return gsi_channel_stop(gsi, channel_id);
+
+	gsi_channel_freeze(channel);
+
+	return 0;
+}
+
+/* Resume a suspended channel (starting will be requested if STOPPED) */
+int gsi_channel_resume(struct gsi *gsi, u32 channel_id, bool start)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	if (start)
+		return gsi_channel_start(gsi, channel_id);
+
+	gsi_channel_thaw(channel);
+
+	return 0;
+}
+
+/**
+ * gsi_channel_tx_queued() - Report queued TX transfers for a channel
+ * @channel:	Channel for which to report
+ *
+ * Report to the network stack the number of bytes and transactions that
+ * have been queued to hardware since last call.  This and the next function
+ * supply information used by the network stack for throttling.
+ *
+ * For each channel we track the number of transactions used and bytes of
+ * data those transactions represent.  We also track what those values are
+ * each time this function is called.  Subtracting the two tells us
+ * the number of bytes and transactions that have been added between
+ * successive calls.
+ *
+ * Calling this each time we ring the channel doorbell allows us to
+ * provide accurate information to the network stack about how much
+ * work we've given the hardware at any point in time.
+ */
+void gsi_channel_tx_queued(struct gsi_channel *channel)
+{
+	u32 trans_count;
+	u32 byte_count;
+
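+	/* Hypothetical numbers: if byte_count is currently 6000 and
+	 * queued_byte_count was 4000 at the previous doorbell, the
+	 * subtraction below reports 2000 newly queued bytes, and 6000
+	 * becomes the new baseline.
+	 */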
+	byte_count = channel->byte_count - channel->queued_byte_count;
+	trans_count = channel->trans_count - channel->queued_trans_count;
+	channel->queued_byte_count = channel->byte_count;
+	channel->queued_trans_count = channel->trans_count;
+
+	ipa_gsi_channel_tx_queued(channel->gsi, gsi_channel_id(channel),
+				  trans_count, byte_count);
+}
+
+/**
+ * gsi_channel_tx_update() - Report completed TX transfers
+ * @channel:	Channel that has completed transmitting packets
+ * @trans:	Last transaction known to be complete
+ *
+ * Compute the number of transactions and bytes that have been transferred
+ * over a TX channel since the given transaction was committed.  Report this
+ * information to the network stack.
+ *
+ * At the time a transaction is committed, we record its channel's
+ * committed transaction and byte counts *in the transaction*.
+ * Completions are signaled by the hardware with an interrupt, and
+ * we can determine the latest completed transaction at that time.
+ *
+ * The difference between the byte/transaction count recorded in
+ * the transaction and the count last time we recorded a completion
+ * tells us exactly how much data has been transferred between
+ * completions.
+ *
+ * Calling this each time we learn of a newly-completed transaction
+ * allows us to provide accurate information to the network stack
+ * about how much work has been completed by the hardware at a given
+ * point in time.
+ */
+static void
+gsi_channel_tx_update(struct gsi_channel *channel, struct gsi_trans *trans)
+{
+	u64 byte_count = trans->byte_count + trans->len;
+	u64 trans_count = trans->trans_count + 1;
+
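+	/* Hypothetical numbers: if the channel had queued 10000 bytes when
+	 * this transaction was committed and trans->len is 500, byte_count
+	 * starts at 10500; subtracting compl_byte_count (say 9000, recorded
+	 * at the previous completion) reports 1500 newly completed bytes.
+	 */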
+	byte_count -= channel->compl_byte_count;
+	channel->compl_byte_count += byte_count;
+	trans_count -= channel->compl_trans_count;
+	channel->compl_trans_count += trans_count;
+
+	ipa_gsi_channel_tx_completed(channel->gsi, gsi_channel_id(channel),
+				     trans_count, byte_count);
+}
+
+/* Channel control interrupt handler */
+static void gsi_isr_chan_ctrl(struct gsi *gsi)
+{
+	u32 channel_mask;
+
+	channel_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_CH_IRQ_OFFSET);
+	iowrite32(channel_mask, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_CLR_OFFSET);
+
+	while (channel_mask) {
+		u32 channel_id = __ffs(channel_mask);
+		struct gsi_channel *channel;
+
+		channel_mask ^= BIT(channel_id);
+
+		channel = &gsi->channel[channel_id];
+		channel->state = gsi_channel_state(gsi, channel_id);
+
+		complete(&channel->completion);
+	}
+}
+
+/* Event ring control interrupt handler */
+static void gsi_isr_evt_ctrl(struct gsi *gsi)
+{
+	u32 event_mask;
+
+	event_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_OFFSET);
+	iowrite32(event_mask, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_CLR_OFFSET);
+
+	while (event_mask) {
+		u32 evt_ring_id = __ffs(event_mask);
+		struct gsi_evt_ring *evt_ring;
+
+		event_mask ^= BIT(evt_ring_id);
+
+		evt_ring = &gsi->evt_ring[evt_ring_id];
+		evt_ring->state = gsi_evt_ring_state(gsi, evt_ring_id);
+
+		complete(&evt_ring->completion);
+	}
+}
+
+/* Global channel error interrupt handler */
+static void
+gsi_isr_glob_chan_err(struct gsi *gsi, u32 err_ee, u32 channel_id, u32 code)
+{
+	if (code == GSI_OUT_OF_RESOURCES_ERR) {
+		dev_err(gsi->dev, "channel %u out of resources\n", channel_id);
+		complete(&gsi->channel[channel_id].completion);
+		return;
+	}
+
+	/* Report, but otherwise ignore all other error codes */
+	dev_err(gsi->dev, "channel %u global error ee 0x%08x code 0x%08x\n",
+		channel_id, err_ee, code);
+}
+
+/* Global event error interrupt handler */
+static void
+gsi_isr_glob_evt_err(struct gsi *gsi, u32 err_ee, u32 evt_ring_id, u32 code)
+{
+	if (code == GSI_OUT_OF_RESOURCES_ERR) {
+		struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
+		u32 channel_id = gsi_channel_id(evt_ring->channel);
+
+		complete(&evt_ring->completion);
+		dev_err(gsi->dev, "evt_ring for channel %u out of resources\n",
+			channel_id);
+		return;
+	}
+
+	/* Report, but otherwise ignore all other error codes */
+	dev_err(gsi->dev, "event ring %u global error ee %u code 0x%08x\n",
+		evt_ring_id, err_ee, code);
+}
+
+/* Global error interrupt handler */
+static void gsi_isr_glob_err(struct gsi *gsi)
+{
+	enum gsi_err_type type;
+	enum gsi_err_code code;
+	u32 which;
+	u32 val;
+	u32 ee;
+
+	/* Get the logged error, then reinitialize the log */
+	val = ioread32(gsi->virt + GSI_ERROR_LOG_OFFSET);
+	iowrite32(0, gsi->virt + GSI_ERROR_LOG_OFFSET);
+	iowrite32(~0, gsi->virt + GSI_ERROR_LOG_CLR_OFFSET);
+
+	ee = u32_get_bits(val, ERR_EE_FMASK);
+	which = u32_get_bits(val, ERR_VIRT_IDX_FMASK);
+	type = u32_get_bits(val, ERR_TYPE_FMASK);
+	code = u32_get_bits(val, ERR_CODE_FMASK);
+
+	if (type == GSI_ERR_TYPE_CHAN)
+		gsi_isr_glob_chan_err(gsi, ee, which, code);
+	else if (type == GSI_ERR_TYPE_EVT)
+		gsi_isr_glob_evt_err(gsi, ee, which, code);
+	else	/* type GSI_ERR_TYPE_GLOB should be fatal */
+		dev_err(gsi->dev, "unexpected global error 0x%08x\n", type);
+}
+
+/* Generic EE interrupt handler */
+static void gsi_isr_gp_int1(struct gsi *gsi)
+{
+	u32 result;
+	u32 val;
+
+	val = ioread32(gsi->virt + GSI_CNTXT_SCRATCH_0_OFFSET);
+	result = u32_get_bits(val, GENERIC_EE_RESULT_FMASK);
+	if (result != GENERIC_EE_SUCCESS_FVAL)
+		dev_err(gsi->dev, "global INT1 generic result %u\n", result);
+
+	complete(&gsi->completion);
+}
+
+/* Global interrupt handler */
+static void gsi_isr_glob_ee(struct gsi *gsi)
+{
+	u32 val;
+
+	val = ioread32(gsi->virt + GSI_CNTXT_GLOB_IRQ_STTS_OFFSET);
+
+	if (val & ERROR_INT_FMASK)
+		gsi_isr_glob_err(gsi);
+
+	iowrite32(val, gsi->virt + GSI_CNTXT_GLOB_IRQ_CLR_OFFSET);
+
+	val &= ~ERROR_INT_FMASK;
+
+	if (val & EN_GP_INT1_FMASK) {
+		val ^= EN_GP_INT1_FMASK;
+		gsi_isr_gp_int1(gsi);
+	}
+
+	if (val)
+		dev_err(gsi->dev, "unexpected global interrupt 0x%08x\n", val);
+}
+
+/* I/O completion interrupt event */
+static void gsi_isr_ieob(struct gsi *gsi)
+{
+	u32 event_mask;
+
+	event_mask = ioread32(gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_OFFSET);
+	gsi_isr_ieob_clear(gsi, event_mask);
+
+	while (event_mask) {
+		u32 evt_ring_id = __ffs(event_mask);
+
+		event_mask ^= BIT(evt_ring_id);
+
+		gsi_irq_ieob_disable(gsi, evt_ring_id);
+		napi_schedule(&gsi->evt_ring[evt_ring_id].channel->napi);
+	}
+}
+
+/* General event interrupts represent serious problems, so report them */
+static void gsi_isr_general(struct gsi *gsi)
+{
+	struct device *dev = gsi->dev;
+	u32 val;
+
+	val = ioread32(gsi->virt + GSI_CNTXT_GSI_IRQ_STTS_OFFSET);
+	iowrite32(val, gsi->virt + GSI_CNTXT_GSI_IRQ_CLR_OFFSET);
+
+	if (val)
+		dev_err(dev, "unexpected general interrupt 0x%08x\n", val);
+}
+
+/**
+ * gsi_isr() - Top level GSI interrupt service routine
+ * @irq:	Interrupt number (ignored)
+ * @dev_id:	GSI pointer supplied to request_irq()
+ *
+ * This is the main handler function registered for the GSI IRQ. Each type
+ * of interrupt has a separate handler function that is called from here.
+ */
+static irqreturn_t gsi_isr(int irq, void *dev_id)
+{
+	struct gsi *gsi = dev_id;
+	u32 intr_mask;
+	u32 cnt = 0;
+
+	while ((intr_mask = ioread32(gsi->virt + GSI_CNTXT_TYPE_IRQ_OFFSET))) {
+		/* intr_mask contains bitmask of pending GSI interrupts */
+		do {
+			u32 gsi_intr = BIT(__ffs(intr_mask));
+
+			intr_mask ^= gsi_intr;
+
+			switch (gsi_intr) {
+			case CH_CTRL_FMASK:
+				gsi_isr_chan_ctrl(gsi);
+				break;
+			case EV_CTRL_FMASK:
+				gsi_isr_evt_ctrl(gsi);
+				break;
+			case GLOB_EE_FMASK:
+				gsi_isr_glob_ee(gsi);
+				break;
+			case IEOB_FMASK:
+				gsi_isr_ieob(gsi);
+				break;
+			case GENERAL_FMASK:
+				gsi_isr_general(gsi);
+				break;
+			default:
+				dev_err(gsi->dev,
+					"%s: unrecognized type 0x%08x\n",
+					__func__, gsi_intr);
+				break;
+			}
+		} while (intr_mask);
+
+		if (++cnt > GSI_ISR_MAX_ITER) {
+			dev_err(gsi->dev, "interrupt flood\n");
+			break;
+		}
+	}
+
+	return IRQ_HANDLED;
+}
+
+/* Return the transaction associated with a transfer completion event */
+static struct gsi_trans *gsi_event_trans(struct gsi_channel *channel,
+					 struct gsi_event *event)
+{
+	u32 tre_offset;
+	u32 tre_index;
+
+	/* Event xfer_ptr records the TRE it's associated with */
+	tre_offset = le64_to_cpu(event->xfer_ptr) & GENMASK(31, 0);
+	tre_index = gsi_ring_index(&channel->tre_ring, tre_offset);
+
+	return gsi_channel_trans_mapped(channel, tre_index);
+}
+
+/**
+ * gsi_evt_ring_rx_update() - Record lengths of received data
+ * @evt_ring:	Event ring associated with channel that received packets
+ * @index:	Event index in ring reported by hardware
+ *
+ * Events for RX channels contain the actual number of bytes received into
+ * the buffer.  Every event has a transaction associated with it, and here
+ * we update transactions to record their actual received lengths.
+ *
+ * This function is called whenever we learn that the GSI hardware has filled
+ * new events since the last time we checked.  The ring's index field tells
+ * the first entry in need of processing.  The index provided is the
+ * first *unfilled* event in the ring (following the last filled one).
+ *
+ * Events are sequential within the event ring, and transactions are
+ * sequential within the transaction pool.
+ *
+ * Note that @index always refers to an element *within* the event ring.
+ */
+static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
+{
+	struct gsi_channel *channel = evt_ring->channel;
+	struct gsi_ring *ring = &evt_ring->ring;
+	struct gsi_trans_info *trans_info;
+	struct gsi_event *event_done;
+	struct gsi_event *event;
+	struct gsi_trans *trans;
+	u32 byte_count = 0;
+	u32 old_index;
+	u32 event_avail;
+
+	trans_info = &channel->trans_info;
+
+	/* We'll start with the oldest un-processed event.  RX channels
+	 * replenish receive buffers in single-TRE transactions, so we
+	 * can just map that event to its transaction.  Transactions
+	 * associated with completion events are consecutive.
+	 */
+	old_index = ring->index;
+	event = gsi_ring_virt(ring, old_index);
+	trans = gsi_event_trans(channel, event);
+
+	/* Compute the number of events to process before we wrap,
+	 * and determine when we'll be done processing events.
+	 */
+	event_avail = ring->count - old_index % ring->count;
+	event_done = gsi_ring_virt(ring, index);
+	do {
+		trans->len = __le16_to_cpu(event->len);
+		byte_count += trans->len;
+
+		/* Move on to the next event and transaction */
+		if (--event_avail)
+			event++;
+		else
+			event = gsi_ring_virt(ring, 0);
+		trans = gsi_trans_pool_next(&trans_info->pool, trans);
+	} while (event != event_done);
+
+	/* We record RX bytes when they are received */
+	channel->byte_count += byte_count;
+	channel->trans_count++;
+}
+
+/* Initialize a ring, including allocating DMA memory for its entries */
+static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count)
+{
+	size_t size = count * GSI_RING_ELEMENT_SIZE;
+	struct device *dev = gsi->dev;
+	dma_addr_t addr;
+
+	/* Hardware requires a 2^n ring size, with alignment equal to size */
+	ring->virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
+	if (ring->virt && addr % size) {
+		dma_free_coherent(dev, size, ring->virt, addr);
+		dev_err(dev, "unable to alloc 0x%zx-aligned ring buffer\n",
+				size);
+		return -EINVAL;	/* Not a good error value, but distinct */
+	} else if (!ring->virt) {
+		return -ENOMEM;
+	}
+	ring->addr = addr;
+	ring->count = count;
+
+	return 0;
+}
+
+/* Free a previously-allocated ring */
+static void gsi_ring_free(struct gsi *gsi, struct gsi_ring *ring)
+{
+	size_t size = ring->count * GSI_RING_ELEMENT_SIZE;
+
+	dma_free_coherent(gsi->dev, size, ring->virt, ring->addr);
+}
+
+/* Allocate an available event ring id */
+static int gsi_evt_ring_id_alloc(struct gsi *gsi)
+{
+	u32 evt_ring_id;
+
+	if (gsi->event_bitmap == ~0U) {
+		dev_err(gsi->dev, "event rings exhausted\n");
+		return -ENOSPC;
+	}
+
+	evt_ring_id = ffz(gsi->event_bitmap);
+	gsi->event_bitmap |= BIT(evt_ring_id);
+
+	return (int)evt_ring_id;
+}
+
+/* Free a previously-allocated event ring id */
+static void gsi_evt_ring_id_free(struct gsi *gsi, u32 evt_ring_id)
+{
+	gsi->event_bitmap &= ~BIT(evt_ring_id);
+}
+
+/* Ring a channel doorbell, reporting the first un-filled entry */
+void gsi_channel_doorbell(struct gsi_channel *channel)
+{
+	struct gsi_ring *tre_ring = &channel->tre_ring;
+	u32 channel_id = gsi_channel_id(channel);
+	struct gsi *gsi = channel->gsi;
+	u32 val;
+
+	/* Note: index *must* be used modulo the ring count here */
+	val = gsi_ring_addr(tre_ring, tre_ring->index % tre_ring->count);
+	iowrite32(val, gsi->virt + GSI_CH_C_DOORBELL_0_OFFSET(channel_id));
+}
+
+/* Consult hardware, move any newly completed transactions to completed list */
+static void gsi_channel_update(struct gsi_channel *channel)
+{
+	u32 evt_ring_id = channel->evt_ring_id;
+	struct gsi *gsi = channel->gsi;
+	struct gsi_evt_ring *evt_ring;
+	struct gsi_trans *trans;
+	struct gsi_ring *ring;
+	u32 offset;
+	u32 index;
+
+	evt_ring = &gsi->evt_ring[evt_ring_id];
+	ring = &evt_ring->ring;
+
+	/* See if there's anything new to process; if not, we're done.  Note
+	 * that index always refers to an entry *within* the event ring.
+	 */
+	offset = GSI_EV_CH_E_CNTXT_4_OFFSET(evt_ring_id);
+	index = gsi_ring_index(ring, ioread32(gsi->virt + offset));
+	if (index == ring->index % ring->count)
+		return;
+
+	/* Get the transaction for the latest completed event.  Take a
+	 * reference to keep it from completing before we give the events
+	 * for this and previous transactions back to the hardware.
+	 */
+	trans = gsi_event_trans(channel, gsi_ring_virt(ring, index - 1));
+	refcount_inc(&trans->refcount);
+
+	/* For RX channels, update each completed transaction with the number
+	 * of bytes that were actually received.  For TX channels, report
+	 * the number of transactions and bytes this completion represents
+	 * up the network stack.
+	 */
+	if (channel->toward_ipa)
+		gsi_channel_tx_update(channel, trans);
+	else
+		gsi_evt_ring_rx_update(evt_ring, index);
+
+	gsi_trans_move_complete(trans);
+
+	/* Tell the hardware we've handled these events */
+	gsi_evt_ring_doorbell(channel->gsi, channel->evt_ring_id, index);
+
+	gsi_trans_free(trans);
+}
+
+/**
+ * gsi_channel_poll_one() - Return a single completed transaction on a channel
+ * @channel:	Channel to be polled
+ *
+ * Return:	Transaction pointer, or null if none are available
+ *
+ * This function returns the first entry on a channel's completed transaction
+ * list.  If that list is empty, the hardware is consulted to determine
+ * whether any new transactions have completed.  If so, they're moved to the
+ * completed list and the new first entry is returned.  If there are no more
+ * completed transactions, a null pointer is returned.
+ */
+static struct gsi_trans *gsi_channel_poll_one(struct gsi_channel *channel)
+{
+	struct gsi_trans *trans;
+
+	/* Get the first transaction from the completed list */
+	trans = gsi_channel_trans_complete(channel);
+	if (!trans) {
+		/* List is empty; see if there's more to do */
+		gsi_channel_update(channel);
+		trans = gsi_channel_trans_complete(channel);
+	}
+
+	if (trans)
+		gsi_trans_move_polled(trans);
+
+	return trans;
+}
+
+/**
+ * gsi_channel_poll() - NAPI poll function for a channel
+ * @napi:	NAPI structure for the channel
+ * @budget:	Budget supplied by NAPI core
+ *
+ * Return:	Number of items polled (<= budget)
+ *
+ * Single transactions completed by hardware are polled until either
+ * the budget is exhausted, or there are no more.  Each transaction
+ * polled is passed to gsi_trans_complete(), to perform remaining
+ * completion processing and retire/free the transaction.
+ */
+static int gsi_channel_poll(struct napi_struct *napi, int budget)
+{
+	struct gsi_channel *channel;
+	int count = 0;
+
+	channel = container_of(napi, struct gsi_channel, napi);
+	while (count < budget) {
+		struct gsi_trans *trans;
+
+		trans = gsi_channel_poll_one(channel);
+		if (!trans)
+			break;
+		count++;
+		gsi_trans_complete(trans);
+	}
+
+	if (count < budget) {
+		napi_complete(&channel->napi);
+		gsi_irq_ieob_enable(channel->gsi, channel->evt_ring_id);
+	}
+
+	return count;
+}
+
+/* The event bitmap represents which event ids are available for allocation.
+ * Set bits are not available, clear bits can be used.  This function
+ * initializes the map so all events supported by the hardware are available,
+ * then precludes any reserved events from being allocated.
+ */
+static u32 gsi_event_bitmap_init(u32 evt_ring_max)
+{
+	u32 event_bitmap = GENMASK(BITS_PER_LONG - 1, evt_ring_max);
+
+	event_bitmap |= GENMASK(GSI_MHI_EVENT_ID_END, GSI_MHI_EVENT_ID_START);
+
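+	/* Hypothetical example: with evt_ring_max 20, bits 20-31 start out
+	 * set (beyond what the hardware supports), bits 10-16 are reserved
+	 * for MHI, and ids 0-9 and 17-19 remain available for allocation.
+	 */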
+	return event_bitmap;
+}
+
+/* Setup function for event rings */
+static void gsi_evt_ring_setup(struct gsi *gsi)
+{
+	/* Nothing to do */
+}
+
+/* Inverse of gsi_evt_ring_setup() */
+static void gsi_evt_ring_teardown(struct gsi *gsi)
+{
+	/* Nothing to do */
+}
+
+/* Setup function for a single channel */
+static int gsi_channel_setup_one(struct gsi *gsi, u32 channel_id,
+				 bool db_enable)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	u32 evt_ring_id = channel->evt_ring_id;
+	int ret;
+
+	if (!channel->gsi)
+		return 0;	/* Ignore uninitialized channels */
+
+	ret = gsi_evt_ring_alloc_command(gsi, evt_ring_id);
+	if (ret)
+		return ret;
+
+	gsi_evt_ring_program(gsi, evt_ring_id);
+
+	ret = gsi_channel_alloc_command(gsi, channel_id);
+	if (ret)
+		goto err_evt_ring_de_alloc;
+
+	gsi_channel_program(channel, db_enable);
+
+	if (channel->toward_ipa)
+		netif_tx_napi_add(&gsi->dummy_dev, &channel->napi,
+				  gsi_channel_poll, NAPI_POLL_WEIGHT);
+	else
+		netif_napi_add(&gsi->dummy_dev, &channel->napi,
+			       gsi_channel_poll, NAPI_POLL_WEIGHT);
+
+	return 0;
+
+err_evt_ring_de_alloc:
+	/* We've done nothing with the event ring yet so don't reset */
+	gsi_evt_ring_de_alloc_command(gsi, evt_ring_id);
+
+	return ret;
+}
+
+/* Inverse of gsi_channel_setup_one() */
+static void gsi_channel_teardown_one(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	u32 evt_ring_id = channel->evt_ring_id;
+
+	if (!channel->gsi)
+		return;		/* Ignore uninitialized channels */
+
+	netif_napi_del(&channel->napi);
+
+	gsi_channel_deprogram(channel);
+	gsi_channel_de_alloc_command(gsi, channel_id);
+	gsi_evt_ring_reset_command(gsi, evt_ring_id);
+	gsi_evt_ring_de_alloc_command(gsi, evt_ring_id);
+}
+
+static int gsi_generic_command(struct gsi *gsi, u32 channel_id,
+			       enum gsi_generic_cmd_opcode opcode)
+{
+	struct completion *completion = &gsi->completion;
+	u32 val;
+
+	val = u32_encode_bits(opcode, GENERIC_OPCODE_FMASK);
+	val |= u32_encode_bits(channel_id, GENERIC_CHID_FMASK);
+	val |= u32_encode_bits(GSI_EE_MODEM, GENERIC_EE_FMASK);
+
+	if (gsi_command(gsi, GSI_GENERIC_CMD_OFFSET, val, completion))
+		return 0;	/* Success! */
+
+	dev_err(gsi->dev, "GSI generic command %u to channel %u timed out\n",
+		opcode, channel_id);
+
+	return -ETIMEDOUT;
+}
+
+static int gsi_modem_channel_alloc(struct gsi *gsi, u32 channel_id)
+{
+	return gsi_generic_command(gsi, channel_id,
+				   GSI_GENERIC_ALLOCATE_CHANNEL);
+}
+
+static void gsi_modem_channel_halt(struct gsi *gsi, u32 channel_id)
+{
+	int ret;
+
+	ret = gsi_generic_command(gsi, channel_id, GSI_GENERIC_HALT_CHANNEL);
+	if (ret)
+		dev_err(gsi->dev, "error %d halting modem channel %u\n",
+			ret, channel_id);
+}
+
+/* Setup function for channels */
+static int gsi_channel_setup(struct gsi *gsi, bool db_enable)
+{
+	u32 channel_id = 0;
+	u32 mask;
+	int ret;
+
+	gsi_evt_ring_setup(gsi);
+	gsi_irq_enable(gsi);
+
+	mutex_lock(&gsi->mutex);
+
+	do {
+		ret = gsi_channel_setup_one(gsi, channel_id, db_enable);
+		if (ret)
+			goto err_unwind;
+	} while (++channel_id < gsi->channel_count);
+
+	/* Make sure no channels were defined that hardware does not support */
+	while (channel_id < GSI_CHANNEL_COUNT_MAX) {
+		struct gsi_channel *channel = &gsi->channel[channel_id++];
+
+		if (!channel->gsi)
+			continue;	/* Ignore uninitialized channels */
+
+		dev_err(gsi->dev, "channel %u not supported by hardware\n",
+			channel_id - 1);
+		channel_id = gsi->channel_count;
+		goto err_unwind;
+	}
+
+	/* Allocate modem channels if necessary */
+	mask = gsi->modem_channel_bitmap;
+	while (mask) {
+		u32 modem_channel_id = __ffs(mask);
+
+		ret = gsi_modem_channel_alloc(gsi, modem_channel_id);
+		if (ret)
+			goto err_unwind_modem;
+
+		/* Clear bit from mask only after success (for unwind) */
+		mask ^= BIT(modem_channel_id);
+	}
+
+	mutex_unlock(&gsi->mutex);
+
+	return 0;
+
+err_unwind_modem:
+	/* Compute which modem channels need to be deallocated */
+	mask ^= gsi->modem_channel_bitmap;
+	while (mask) {
+		u32 channel_id = __fls(mask);
+
+		mask ^= BIT(channel_id);
+
+		gsi_modem_channel_halt(gsi, channel_id);
+	}
+
+err_unwind:
+	while (channel_id--)
+		gsi_channel_teardown_one(gsi, channel_id);
+
+	mutex_unlock(&gsi->mutex);
+
+	gsi_irq_disable(gsi);
+	gsi_evt_ring_teardown(gsi);
+
+	return ret;
+}
+
+/* Inverse of gsi_channel_setup() */
+static void gsi_channel_teardown(struct gsi *gsi)
+{
+	u32 mask = gsi->modem_channel_bitmap;
+	u32 channel_id;
+
+	mutex_lock(&gsi->mutex);
+
+	while (mask) {
+		u32 channel_id = __fls(mask);
+
+		mask ^= BIT(channel_id);
+
+		gsi_modem_channel_halt(gsi, channel_id);
+	}
+
+	channel_id = gsi->channel_count - 1;
+	do
+		gsi_channel_teardown_one(gsi, channel_id);
+	while (channel_id--);
+
+	mutex_unlock(&gsi->mutex);
+
+	gsi_irq_disable(gsi);
+	gsi_evt_ring_teardown(gsi);
+}
+
+/* Setup function for GSI.  GSI firmware must be loaded and initialized */
+int gsi_setup(struct gsi *gsi, bool db_enable)
+{
+	u32 val;
+
+	/* Here is where we first touch the GSI hardware */
+	val = ioread32(gsi->virt + GSI_GSI_STATUS_OFFSET);
+	if (!(val & ENABLED_FMASK)) {
+		dev_err(gsi->dev, "GSI has not been enabled\n");
+		return -EIO;
+	}
+
+	val = ioread32(gsi->virt + GSI_GSI_HW_PARAM_2_OFFSET);
+
+	gsi->channel_count = u32_get_bits(val, NUM_CH_PER_EE_FMASK);
+	if (!gsi->channel_count) {
+		dev_err(gsi->dev, "GSI reports zero channels supported\n");
+		return -EINVAL;
+	}
+	if (gsi->channel_count > GSI_CHANNEL_COUNT_MAX) {
+		dev_warn(gsi->dev,
+			"limiting to %u channels (hardware supports %u)\n",
+			 GSI_CHANNEL_COUNT_MAX, gsi->channel_count);
+		gsi->channel_count = GSI_CHANNEL_COUNT_MAX;
+	}
+
+	gsi->evt_ring_count = u32_get_bits(val, NUM_EV_PER_EE_FMASK);
+	if (!gsi->evt_ring_count) {
+		dev_err(gsi->dev, "GSI reports zero event rings supported\n");
+		return -EINVAL;
+	}
+	if (gsi->evt_ring_count > GSI_EVT_RING_COUNT_MAX) {
+		dev_warn(gsi->dev,
+			"limiting to %u event rings (hardware supports %u)\n",
+			 GSI_EVT_RING_COUNT_MAX, gsi->evt_ring_count);
+		gsi->evt_ring_count = GSI_EVT_RING_COUNT_MAX;
+	}
+
+	/* Initialize the error log */
+	iowrite32(0, gsi->virt + GSI_ERROR_LOG_OFFSET);
+
+	/* Writing 1 indicates IRQ interrupts; 0 would be MSI */
+	iowrite32(1, gsi->virt + GSI_CNTXT_INTSET_OFFSET);
+
+	return gsi_channel_setup(gsi, db_enable);
+}
+
+/* Inverse of gsi_setup() */
+void gsi_teardown(struct gsi *gsi)
+{
+	gsi_channel_teardown(gsi);
+}
+
+/* Initialize a channel's event ring */
+static int gsi_channel_evt_ring_init(struct gsi_channel *channel)
+{
+	struct gsi *gsi = channel->gsi;
+	struct gsi_evt_ring *evt_ring;
+	int ret;
+
+	ret = gsi_evt_ring_id_alloc(gsi);
+	if (ret < 0)
+		return ret;
+	channel->evt_ring_id = ret;
+
+	evt_ring = &gsi->evt_ring[channel->evt_ring_id];
+	evt_ring->channel = channel;
+
+	ret = gsi_ring_alloc(gsi, &evt_ring->ring, channel->event_count);
+	if (!ret)
+		return 0;	/* Success! */
+
+	dev_err(gsi->dev, "error %d allocating channel %u event ring\n",
+		ret, gsi_channel_id(channel));
+
+	gsi_evt_ring_id_free(gsi, channel->evt_ring_id);
+
+	return ret;
+}
+
+/* Inverse of gsi_channel_evt_ring_init() */
+static void gsi_channel_evt_ring_exit(struct gsi_channel *channel)
+{
+	u32 evt_ring_id = channel->evt_ring_id;
+	struct gsi *gsi = channel->gsi;
+	struct gsi_evt_ring *evt_ring;
+
+	evt_ring = &gsi->evt_ring[evt_ring_id];
+	gsi_ring_free(gsi, &evt_ring->ring);
+	gsi_evt_ring_id_free(gsi, evt_ring_id);
+}
+
+/* Init function for event rings */
+static void gsi_evt_ring_init(struct gsi *gsi)
+{
+	u32 evt_ring_id = 0;
+
+	gsi->event_bitmap = gsi_event_bitmap_init(GSI_EVT_RING_COUNT_MAX);
+	gsi->event_enable_bitmap = 0;
+	do
+		init_completion(&gsi->evt_ring[evt_ring_id].completion);
+	while (++evt_ring_id < GSI_EVT_RING_COUNT_MAX);
+}
+
+/* Inverse of gsi_evt_ring_init() */
+static void gsi_evt_ring_exit(struct gsi *gsi)
+{
+	/* Nothing to do */
+}
+
+static bool gsi_channel_data_valid(struct gsi *gsi,
+				   const struct ipa_gsi_endpoint_data *data)
+{
+#ifdef IPA_VALIDATION
+	u32 channel_id = data->channel_id;
+	struct device *dev = gsi->dev;
+
+	/* Make sure channel ids are in the range driver supports */
+	if (channel_id >= GSI_CHANNEL_COUNT_MAX) {
+		dev_err(dev, "bad channel id %u (must be less than %u)\n",
+			channel_id, GSI_CHANNEL_COUNT_MAX);
+		return false;
+	}
+
+	if (data->ee_id != GSI_EE_AP && data->ee_id != GSI_EE_MODEM) {
+		dev_err(dev, "bad EE id %u (AP or modem)\n", data->ee_id);
+		return false;
+	}
+
+	if (!data->channel.tlv_count ||
+	    data->channel.tlv_count > GSI_TLV_MAX) {
+		dev_err(dev, "channel %u bad tlv_count %u (must be 1..%u)\n",
+			channel_id, data->channel.tlv_count, GSI_TLV_MAX);
+		return false;
+	}
+
+	/* We have to allow at least one maximally-sized transaction to
+	 * be outstanding (which would use tlv_count TREs).  Given how
+	 * gsi_channel_tre_max() is computed, tre_count has to be almost
+	 * twice the TLV FIFO size to satisfy this requirement.
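+	 *
+	 * For example (illustrative values only): tlv_count = 8 requires
+	 * tre_count to be at least 2 * 8 - 1 = 15, and since tre_count
+	 * must also be a power of 2, at least 16 in practice.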
+	 */
+	if (data->channel.tre_count < 2 * data->channel.tlv_count - 1) {
+		dev_err(dev, "channel %u TLV count %u exceeds TRE count %u\n",
+			channel_id, data->channel.tlv_count,
+			data->channel.tre_count);
+		return false;
+	}
+
+	if (!is_power_of_2(data->channel.tre_count)) {
+		dev_err(dev, "channel %u bad tre_count %u (not power of 2)\n",
+			channel_id, data->channel.tre_count);
+		return false;
+	}
+
+	if (!is_power_of_2(data->channel.event_count)) {
+		dev_err(dev, "channel %u bad event_count %u (not power of 2)\n",
+			channel_id, data->channel.event_count);
+		return false;
+	}
+#endif /* IPA_VALIDATION */
+
+	return true;
+}
+
+/* Init function for a single channel */
+static int gsi_channel_init_one(struct gsi *gsi,
+				const struct ipa_gsi_endpoint_data *data,
+				bool command, bool prefetch)
+{
+	struct gsi_channel *channel;
+	u32 tre_count;
+	int ret;
+
+	if (!gsi_channel_data_valid(gsi, data))
+		return -EINVAL;
+
+	/* Worst case we need an event for every outstanding TRE */
+	if (data->channel.tre_count > data->channel.event_count) {
+		dev_warn(gsi->dev, "channel %u limited to %u TREs\n",
+			data->channel_id, data->channel.tre_count);
+		tre_count = data->channel.event_count;
+	} else {
+		tre_count = data->channel.tre_count;
+	}
+
+	channel = &gsi->channel[data->channel_id];
+	memset(channel, 0, sizeof(*channel));
+
+	channel->gsi = gsi;
+	channel->toward_ipa = data->toward_ipa;
+	channel->command = command;
+	channel->use_prefetch = command && prefetch;
+	channel->tlv_count = data->channel.tlv_count;
+	channel->tre_count = tre_count;
+	channel->event_count = data->channel.event_count;
+	init_completion(&channel->completion);
+
+	ret = gsi_channel_evt_ring_init(channel);
+	if (ret)
+		goto err_clear_gsi;
+
+	ret = gsi_ring_alloc(gsi, &channel->tre_ring, data->channel.tre_count);
+	if (ret) {
+		dev_err(gsi->dev, "error %d allocating channel %u ring\n",
+			ret, data->channel_id);
+		goto err_channel_evt_ring_exit;
+	}
+
+	ret = gsi_channel_trans_init(gsi, data->channel_id);
+	if (ret)
+		goto err_ring_free;
+
+	if (command) {
+		u32 tre_max = gsi_channel_tre_max(gsi, data->channel_id);
+
+		ret = ipa_cmd_pool_init(channel, tre_max);
+	}
+	if (!ret)
+		return 0;	/* Success! */
+
+	gsi_channel_trans_exit(channel);
+err_ring_free:
+	gsi_ring_free(gsi, &channel->tre_ring);
+err_channel_evt_ring_exit:
+	gsi_channel_evt_ring_exit(channel);
+err_clear_gsi:
+	channel->gsi = NULL;	/* Mark it not (fully) initialized */
+
+	return ret;
+}
+
+/* Inverse of gsi_channel_init_one() */
+static void gsi_channel_exit_one(struct gsi_channel *channel)
+{
+	if (!channel->gsi)
+		return;		/* Ignore uninitialized channels */
+
+	if (channel->command)
+		ipa_cmd_pool_exit(channel);
+	gsi_channel_trans_exit(channel);
+	gsi_ring_free(channel->gsi, &channel->tre_ring);
+	gsi_channel_evt_ring_exit(channel);
+}
+
+/* Init function for channels */
+static int gsi_channel_init(struct gsi *gsi, bool prefetch, u32 count,
+			    const struct ipa_gsi_endpoint_data *data,
+			    bool modem_alloc)
+{
+	int ret = 0;
+	u32 i;
+
+	gsi_evt_ring_init(gsi);
+
+	/* The endpoint data array is indexed by endpoint name */
+	for (i = 0; i < count; i++) {
+		bool command = i == IPA_ENDPOINT_AP_COMMAND_TX;
+
+		if (ipa_gsi_endpoint_data_empty(&data[i]))
+			continue;	/* Skip over empty slots */
+
+		/* Mark modem channels to be allocated (hardware workaround) */
+		if (data[i].ee_id == GSI_EE_MODEM) {
+			if (modem_alloc)
+				gsi->modem_channel_bitmap |=
+						BIT(data[i].channel_id);
+			continue;
+		}
+
+		ret = gsi_channel_init_one(gsi, &data[i], command, prefetch);
+		if (ret)
+			goto err_unwind;
+	}
+
+	return ret;
+
+err_unwind:
+	while (i--) {
+		if (ipa_gsi_endpoint_data_empty(&data[i]))
+			continue;
+		if (modem_alloc && data[i].ee_id == GSI_EE_MODEM) {
+			gsi->modem_channel_bitmap &= ~BIT(data[i].channel_id);
+			continue;
+		}
+		gsi_channel_exit_one(&gsi->channel[data[i].channel_id]);
+	}
+	gsi_evt_ring_exit(gsi);
+
+	return ret;
+}
+
+/* Inverse of gsi_channel_init() */
+static void gsi_channel_exit(struct gsi *gsi)
+{
+	u32 channel_id = GSI_CHANNEL_COUNT_MAX - 1;
+
+	do
+		gsi_channel_exit_one(&gsi->channel[channel_id]);
+	while (channel_id--);
+	gsi->modem_channel_bitmap = 0;
+
+	gsi_evt_ring_exit(gsi);
+}
+
+/* Init function for GSI.  GSI hardware does not need to be "ready" */
+int gsi_init(struct gsi *gsi, struct platform_device *pdev, bool prefetch,
+	     u32 count, const struct ipa_gsi_endpoint_data *data,
+	     bool modem_alloc)
+{
+	struct resource *res;
+	resource_size_t size;
+	unsigned int irq;
+	int ret;
+
+	gsi_validate_build();
+
+	gsi->dev = &pdev->dev;
+
+	/* The GSI layer performs NAPI on all endpoints.  NAPI requires a
+	 * network device structure, but the GSI layer does not have one,
+	 * so we must create a dummy network device for this purpose.
+	 */
+	init_dummy_netdev(&gsi->dummy_dev);
+
+	/* Get the GSI IRQ and request for it to wake the system */
+	ret = platform_get_irq_byname(pdev, "gsi");
+	if (ret <= 0) {
+		dev_err(gsi->dev,
+			"DT error %d getting \"gsi\" IRQ property\n", ret);
+		return ret ? : -EINVAL;
+	}
+	irq = ret;
+
+	ret = request_irq(irq, gsi_isr, 0, "gsi", gsi);
+	if (ret) {
+		dev_err(gsi->dev, "error %d requesting \"gsi\" IRQ\n", ret);
+		return ret;
+	}
+	gsi->irq = irq;
+
+	ret = enable_irq_wake(gsi->irq);
+	if (ret)
+		dev_warn(gsi->dev, "error %d enabling gsi wake irq\n", ret);
+	gsi->irq_wake_enabled = !ret;
+
+	/* Get GSI memory range and map it */
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "gsi");
+	if (!res) {
+		dev_err(gsi->dev,
+			"DT error getting \"gsi\" memory property\n");
+		ret = -ENODEV;
+		goto err_disable_irq_wake;
+	}
+
+	size = resource_size(res);
+	if (res->start > U32_MAX || size > U32_MAX - res->start) {
+		dev_err(gsi->dev, "DT memory resource \"gsi\" out of range\n");
+		ret = -EINVAL;
+		goto err_disable_irq_wake;
+	}
+
+	gsi->virt = ioremap(res->start, size);
+	if (!gsi->virt) {
+		dev_err(gsi->dev, "unable to remap \"gsi\" memory\n");
+		ret = -ENOMEM;
+		goto err_disable_irq_wake;
+	}
+
+	ret = gsi_channel_init(gsi, prefetch, count, data, modem_alloc);
+	if (ret)
+		goto err_iounmap;
+
+	mutex_init(&gsi->mutex);
+	init_completion(&gsi->completion);
+
+	return 0;
+
+err_iounmap:
+	iounmap(gsi->virt);
+err_disable_irq_wake:
+	if (gsi->irq_wake_enabled)
+		(void)disable_irq_wake(gsi->irq);
+	free_irq(gsi->irq, gsi);
+
+	return ret;
+}
+
+/* Inverse of gsi_init() */
+void gsi_exit(struct gsi *gsi)
+{
+	mutex_destroy(&gsi->mutex);
+	gsi_channel_exit(gsi);
+	if (gsi->irq_wake_enabled)
+		(void)disable_irq_wake(gsi->irq);
+	free_irq(gsi->irq, gsi);
+	iounmap(gsi->virt);
+}
+
+/* The maximum number of outstanding TREs on a channel.  This limits
+ * a channel's maximum number of transactions outstanding (worst case
+ * is one TRE per transaction).
+ *
+ * The absolute limit is the number of TREs in the channel's TRE ring,
+ * and in theory we should be able to use all of them.  But in practice,
+ * doing that led to the hardware reporting exhaustion of event ring
+ * slots for writing completion information.  So the hardware limit
+ * would be (tre_count - 1).
+ *
+ * We reduce it a bit further though.  Transaction resource pools are
+ * sized to be a little larger than this maximum, to allow resource
+ * allocations to always be contiguous.  The number of entries in a
+ * TRE ring buffer is a power of 2, and the extra resources in a pool
+ * tends to nearly double the memory allocated for it.  Reducing the
+ * maximum number of outstanding TREs allows the number of entries in
+ * a pool to avoid crossing that power-of-2 boundary, and this can
+ * substantially reduce pool memory requirements.  The number we
+ * reduce it by matches the number added in gsi_trans_pool_init().
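+ *
+ * For example (illustrative values only): with tre_count = 256 and
+ * tlv_count = 8 this function returns 256 - (8 - 1) = 249.  A pool
+ * sized for that holds 249 + 8 - 1 = 256 entries, staying within the
+ * 256-entry power-of-2 boundary that using the full 255 outstanding
+ * TREs (255 + 8 - 1 = 262 entries) would cross.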
+ */
+u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	/* Hardware limit is channel->tre_count - 1 */
+	return channel->tre_count - (channel->tlv_count - 1);
+}
+
+/* Returns the maximum number of TREs in a single transaction for a channel */
+u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	return channel->tlv_count;
+}
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 08/17] soc: qcom: ipa: IPA interface to GSI
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (6 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 07/17] soc: qcom: ipa: the generic software interface Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 09/17] soc: qcom: ipa: GSI transactions Alex Elder
                   ` (11 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

This patch provides interface functions supplied by the IPA layer
that are called from the GSI layer.  One function is called when a
GSI transaction has completed.  The others allow the GSI layer to
inform the IPA layer when the hardware has been told it has new TREs
to execute, and when the hardware has indicated transactions have
completed.
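
A minimal sketch (for illustration, not part of the driver) of how the
two TX hooks are intended to be used in matched pairs, where trans_count
and byte_count stand for the number of transactions and bytes being
reported at that point:

	/* When TREs are handed to hardware (the doorbell is rung) ... */
	ipa_gsi_channel_tx_queued(gsi, channel_id, trans_count, byte_count);

	/* ... and when hardware later reports those transfers completed */
	ipa_gsi_channel_tx_completed(gsi, channel_id, trans_count, byte_count);

The first feeds netdev_sent_queue() and the second netdev_completed_queue(),
so byte queue limit accounting sees every queued byte exactly once.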

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_gsi.c | 54 +++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_gsi.h | 60 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 114 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_gsi.c
 create mode 100644 drivers/net/ipa/ipa_gsi.h

diff --git a/drivers/net/ipa/ipa_gsi.c b/drivers/net/ipa/ipa_gsi.c
new file mode 100644
index 000000000000..dc4a5c2196ae
--- /dev/null
+++ b/drivers/net/ipa/ipa_gsi.c
@@ -0,0 +1,54 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+
+#include "gsi_trans.h"
+#include "ipa.h"
+#include "ipa_endpoint.h"
+#include "ipa_data.h"
+
+void ipa_gsi_trans_complete(struct gsi_trans *trans)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+
+	ipa_endpoint_trans_complete(ipa->channel_map[trans->channel_id], trans);
+}
+
+void ipa_gsi_trans_release(struct gsi_trans *trans)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+
+	ipa_endpoint_trans_release(ipa->channel_map[trans->channel_id], trans);
+}
+
+void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
+			       u32 byte_count)
+{
+	struct ipa *ipa = container_of(gsi, struct ipa, gsi);
+	struct ipa_endpoint *endpoint;
+
+	endpoint = ipa->channel_map[channel_id];
+	if (endpoint->netdev)
+		netdev_sent_queue(endpoint->netdev, byte_count);
+}
+
+void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count,
+				  u32 byte_count)
+{
+	struct ipa *ipa = container_of(gsi, struct ipa, gsi);
+	struct ipa_endpoint *endpoint;
+
+	endpoint = ipa->channel_map[channel_id];
+	if (endpoint->netdev)
+		netdev_completed_queue(endpoint->netdev, count, byte_count);
+}
+
+/* Indicate whether an endpoint config data entry is "empty" */
+bool ipa_gsi_endpoint_data_empty(const struct ipa_gsi_endpoint_data *data)
+{
+	return data->ee_id == GSI_EE_AP && !data->channel.tlv_count;
+}
diff --git a/drivers/net/ipa/ipa_gsi.h b/drivers/net/ipa/ipa_gsi.h
new file mode 100644
index 000000000000..3cf18600c68e
--- /dev/null
+++ b/drivers/net/ipa/ipa_gsi.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_GSI_TRANS_H_
+#define _IPA_GSI_TRANS_H_
+
+#include <linux/types.h>
+
+struct gsi_trans;
+
+/**
+ * ipa_gsi_trans_complete() - GSI transaction completion callback
+ * @trans:	Transaction that has completed
+ *
+ * This is called from the GSI layer to notify the IPA layer that a
+ * transaction has completed.
+ */
+void ipa_gsi_trans_complete(struct gsi_trans *trans);
+
+/**
+ * ipa_gsi_trans_release() - GSI transaction release callback
+ * @trans:	Transaction whose resources should be freed
+ *
+ * This is called from the GSI layer to notify the IPA layer that a
+ * transaction is about to be freed, so any resources associated
+ * with it should be released.
+ */
+void ipa_gsi_trans_release(struct gsi_trans *trans);
+
+/**
+ * ipa_gsi_channel_tx_queued() - GSI queued to hardware notification
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel number
+ * @count:	Number of transactions queued
+ * @byte_count:	Number of bytes to transfer represented by transactions
+ *
+ * This is called from the GSI layer to notify the IPA layer that some
+ * number of transactions have been queued to hardware for execution.
+ */
+void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
+			       u32 byte_count);
+/**
+ * ipa_gsi_channel_tx_completed() - GSI transaction completion notification
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel number
+ * @count:	Number of transactions completed since last report
+ * @byte_count:	Number of bytes transferred represented by transactions
+ *
+ * This is called from the GSI layer to notify the IPA layer that the hardware
+ * has reported the completion of some number of transactions.
+ */
+void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count,
+				  u32 byte_count);
+
+bool ipa_gsi_endpoint_data_empty(const struct ipa_gsi_endpoint_data *data);
+
+#endif /* _IPA_GSI_TRANS_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 09/17] soc: qcom: ipa: GSI transactions
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (7 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 08/17] soc: qcom: ipa: IPA interface to GSI Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 10/17] soc: qcom: ipa: IPA endpoints Alex Elder
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

This patch implements GSI transactions.  A GSI transaction is a
structure that represents a single request (consisting of one or
more TREs) sent to the GSI hardware.  The last TRE in a transaction
includes a flag requesting that the GSI interrupt the AP to notify
that it has completed.

TREs are executed and completed strictly in order.  For this reason,
the completion of a single TRE implies that all previous TREs (in
particular all of those "earlier" in a transaction) have completed.

Whenever there is a need to send a request (a set of TREs) to the
IPA, a GSI transaction is allocated, specifying the number of TREs
that will be required.  Details of the request (e.g. transfer offsets
and lengths) are represented in a Linux scatterlist array that is
incorporated in the transaction structure.

Once all commands (TREs) are added to a transaction it is committed.
When the hardware signals that the request has completed, a callback
function allows for cleanup or followup activity to be performed
before the transaction is freed.
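
For illustration, here is a minimal sketch (not taken from the driver;
gsi, channel_id and skb stand for whatever the caller has in hand) of
how one socket buffer might be sent on a TX channel using this
interface:

	struct gsi_trans *trans;
	int ret;

	/* Reserve one TRE; fails if the channel ring is exhausted */
	trans = gsi_channel_trans_alloc(gsi, channel_id, 1, DMA_TO_DEVICE);
	if (!trans)
		return -EBUSY;

	/* Describe (and DMA-map) the skb in the transaction's scatterlist */
	ret = gsi_trans_skb_add(trans, skb);
	if (ret) {
		gsi_trans_free(trans);
		return ret;
	}

	/* Hand it to the GSI core; "true" rings the channel doorbell */
	gsi_trans_commit(trans, true);

	/* ipa_gsi_trans_complete() is called later, when hardware is done */

The endpoint code added later in this series is the real user of this
pattern.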

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/gsi_trans.c | 786 ++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/gsi_trans.h | 226 +++++++++++
 2 files changed, 1012 insertions(+)
 create mode 100644 drivers/net/ipa/gsi_trans.c
 create mode 100644 drivers/net/ipa/gsi_trans.h

diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/gsi_trans.c
new file mode 100644
index 000000000000..2fd21d75367d
--- /dev/null
+++ b/drivers/net/ipa/gsi_trans.c
@@ -0,0 +1,786 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/bits.h>
+#include <linux/bitfield.h>
+#include <linux/refcount.h>
+#include <linux/scatterlist.h>
+#include <linux/dma-direction.h>
+
+#include "gsi.h"
+#include "gsi_private.h"
+#include "gsi_trans.h"
+#include "ipa_gsi.h"
+#include "ipa_data.h"
+#include "ipa_cmd.h"
+
+/**
+ * DOC: GSI Transactions
+ *
+ * A GSI transaction abstracts the behavior of a GSI channel by representing
+ * everything about a related group of IPA commands in a single structure.
+ * (A "command" in this sense is either a data transfer or an IPA immediate
+ * command.)  Most details of interaction with the GSI hardware are managed
+ * by the GSI transaction core, allowing users to simply describe commands
+ * to be performed.  When a transaction has completed, a callback function
+ * (dependent on the type of endpoint associated with the channel) allows
+ * cleanup of resources associated with the transaction.
+ *
+ * To perform a command (or set of them), a user of the GSI transaction
+ * interface allocates a transaction, indicating the number of TREs required
+ * (one per command).  If sufficient TREs are available, they are reserved
+ * for use in the transaction and the allocation succeeds.  This way
+ * exhaustion of the available TREs in a channel ring is detected
+ * as early as possible.  All resources required to complete a transaction
+ * are allocated at transaction allocation time.
+ *
+ * Commands performed as part of a transaction are represented in an array
+ * of Linux scatterlist structures.  This array is allocated with the
+ * transaction, and its entries are initialized using standard scatterlist
+ * functions (such as sg_set_buf() or skb_to_sgvec()).
+ *
+ * Once a transaction's scatterlist structures have been initialized, the
+ * transaction is committed.  The caller is responsible for mapping buffers
+ * for DMA if necessary, and this should be done *before* allocating
+ * the transaction.  Between a successful allocation and commit of a
+ * transaction no errors should occur.
+ *
+ * Committing transfers ownership of the entire transaction to the GSI
+ * transaction core.  The GSI transaction code formats the content of
+ * the scatterlist array into the channel ring buffer and informs the
+ * hardware that new TREs are available to process.
+ *
+ * The last TRE in each transaction is marked to interrupt the AP when the
+ * GSI hardware has completed it.  Because transfers described by TREs are
+ * performed strictly in order, signaling the completion of just the last
+ * TRE in the transaction is sufficient to indicate the full transaction
+ * is complete.
+ *
+ * When a transaction is complete, ipa_gsi_trans_complete() is called by the
+ * GSI code into the IPA layer, allowing it to perform any final cleanup
+ * required before the transaction is freed.
+ */
+
+/* Hardware values representing a transfer element type */
+enum gsi_tre_type {
+	GSI_RE_XFER	= 0x2,
+	GSI_RE_IMMD_CMD	= 0x3,
+};
+
+/* An entry in a channel ring */
+struct gsi_tre {
+	__le64 addr;		/* DMA address */
+	__le16 len_opcode;	/* length in bytes or enum IPA_CMD_* */
+	__le16 reserved;
+	__le32 flags;		/* TRE_FLAGS_* */
+};
+
+/* gsi_tre->flags mask values (in CPU byte order) */
+#define TRE_FLAGS_CHAIN_FMASK	GENMASK(0, 0)
+#define TRE_FLAGS_IEOB_FMASK	GENMASK(8, 8)
+#define TRE_FLAGS_IEOT_FMASK	GENMASK(9, 9)
+#define TRE_FLAGS_BEI_FMASK	GENMASK(10, 10)
+#define TRE_FLAGS_TYPE_FMASK	GENMASK(23, 16)
+
+int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
+			u32 max_alloc)
+{
+	void *virt;
+
+#ifdef IPA_VALIDATE
+	if (!size || size % 8)
+		return -EINVAL;
+	if (count < max_alloc)
+		return -EINVAL;
+	if (!max_alloc)
+		return -EINVAL;
+#endif /* IPA_VALIDATE */
+
+	/* By allocating a few extra entries in our pool (one less
+	 * than the maximum number that will be requested in a
+	 * single allocation), we can always satisfy requests without
+	 * ever worrying about straddling the end of the pool array.
+	 * If there aren't enough entries starting at the free index,
+	 * we just allocate free entries from the beginning of the pool.
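+	 *
+	 * For example (illustrative values only): count = 8 with
+	 * max_alloc = 3 yields an array of (at least) 10 entries.  A
+	 * request for 3 entries arriving when the free index is 8 (only
+	 * 2 entries remain before the end) simply restarts the
+	 * allocation at index 0.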
+	 */
+	virt = kcalloc(count + max_alloc - 1, size, GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+
+	pool->base = virt;
+	/* If the allocator gave us any extra memory, use it */
+	pool->count = ksize(pool->base) / size;
+	pool->free = 0;
+	pool->max_alloc = max_alloc;
+	pool->size = size;
+	pool->addr = 0;		/* Only used for DMA pools */
+
+	return 0;
+}
+
+void gsi_trans_pool_exit(struct gsi_trans_pool *pool)
+{
+	kfree(pool->base);
+	memset(pool, 0, sizeof(*pool));
+}
+
+/* Home-grown DMA pool.  This way we can preallocate and use the tre_count
+ * to guarantee allocations will succeed.  Even though we specify max_alloc
+ * (and it can be more than one), we only allow allocation of a single
+ * element from a DMA pool.
+ */
+int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
+			    size_t size, u32 count, u32 max_alloc)
+{
+	size_t total_size;
+	dma_addr_t addr;
+	void *virt;
+
+#ifdef IPA_VALIDATE
+	if (!size || size % 8)
+		return -EINVAL;
+	if (count < max_alloc)
+		return -EINVAL;
+	if (!max_alloc)
+		return -EINVAL;
+#endif /* IPA_VALIDATE */
+
+	/* Don't let allocations cross a power-of-two boundary */
+	size = __roundup_pow_of_two(size);
+	total_size = (count + max_alloc - 1) * size;
+
+	/* The allocator will give us a power-of-2 number of pages, so
+	 * round our request up to that amount.  That way we won't waste
+	 * any memory that would be available beyond the required space.
+	 */
+	total_size = PAGE_SIZE << get_order(total_size);
+
+	virt = dma_alloc_coherent(dev, total_size, &addr, GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+
+	pool->base = virt;
+	pool->count = total_size / size;
+	pool->free = 0;
+	pool->size = size;
+	pool->max_alloc = max_alloc;
+	pool->addr = addr;
+
+	return 0;
+}
+
+void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool)
+{
+	size_t total_size = pool->count * pool->size;
+
+	/* Free the entire allocation, not just a single element's worth */
+	dma_free_coherent(dev, total_size, pool->base, pool->addr);
+	memset(pool, 0, sizeof(*pool));
+}
+
+/* Return the byte offset of the next free entry in the pool */
+static u32 gsi_trans_pool_alloc_common(struct gsi_trans_pool *pool, u32 count)
+{
+	u32 offset;
+
+	/* assert(count > 0); */
+	/* assert(count <= pool->max_alloc); */
+
+	/* Allocate from beginning if wrap would occur */
+	if (count > pool->count - pool->free)
+		pool->free = 0;
+
+	offset = pool->free * pool->size;
+	pool->free += count;
+	memset(pool->base + offset, 0, count * pool->size);
+
+	return offset;
+}
+
+/* Allocate a contiguous block of zeroed entries from a pool */
+void *gsi_trans_pool_alloc(struct gsi_trans_pool *pool, u32 count)
+{
+	return pool->base + gsi_trans_pool_alloc_common(pool, count);
+}
+
+/* Allocate a single zeroed entry from a DMA pool */
+void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr)
+{
+	u32 offset = gsi_trans_pool_alloc_common(pool, 1);
+
+	*addr = pool->addr + offset;
+
+	return pool->base + offset;
+}
+
+/* Return the pool element that immediately follows the one given.
+ * This only works if elements are allocated one at a time.
+ */
+void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element)
+{
+	void *end = pool->base + pool->count * pool->size;
+
+	/* assert(element >= pool->base); */
+	/* assert(element < end); */
+	/* assert(pool->max_alloc == 1); */
+	element += pool->size;
+
+	return element < end ? element : pool->base;
+}
+
+/* Map a given ring entry index to the transaction associated with it */
+static void gsi_channel_trans_map(struct gsi_channel *channel, u32 index,
+				  struct gsi_trans *trans)
+{
+	/* Note: index *must* be used modulo the ring count here */
+	channel->trans_info.map[index % channel->tre_ring.count] = trans;
+}
+
+/* Return the transaction mapped to a given ring entry */
+struct gsi_trans *
+gsi_channel_trans_mapped(struct gsi_channel *channel, u32 index)
+{
+	/* Note: index *must* be used modulo the ring count here */
+	return channel->trans_info.map[index % channel->tre_ring.count];
+}
+
+/* Return the oldest completed transaction for a channel (or null) */
+struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel)
+{
+	return list_first_entry_or_null(&channel->trans_info.complete,
+					struct gsi_trans, links);
+}
+
+/* Move a transaction from the allocated list to the pending list */
+static void gsi_trans_move_pending(struct gsi_trans *trans)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	list_move_tail(&trans->links, &trans_info->pending);
+
+	spin_unlock_bh(&trans_info->spinlock);
+}
+
+/* Move a transaction and all of its predecessors from the pending list
+ * to the completed list.
+ */
+void gsi_trans_move_complete(struct gsi_trans *trans)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct list_head list;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	/* Move this transaction and all predecessors to completed list */
+	list_cut_position(&list, &trans_info->pending, &trans->links);
+	list_splice_tail(&list, &trans_info->complete);
+
+	spin_unlock_bh(&trans_info->spinlock);
+}
+
+/* Move a transaction from the completed list to the polled list */
+void gsi_trans_move_polled(struct gsi_trans *trans)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	list_move_tail(&trans->links, &trans_info->polled);
+
+	spin_unlock_bh(&trans_info->spinlock);
+}
+
+/* Reserve some number of TREs on a channel.  Returns true if successful */
+static bool
+gsi_trans_tre_reserve(struct gsi_trans_info *trans_info, u32 tre_count)
+{
+	int avail = atomic_read(&trans_info->tre_avail);
+	int new;
+
+	do {
+		new = avail - (int)tre_count;
+		if (unlikely(new < 0))
+			return false;
+	} while (!atomic_try_cmpxchg(&trans_info->tre_avail, &avail, new));
+
+	return true;
+}
+
+/* Release previously-reserved TRE entries to a channel */
+static void
+gsi_trans_tre_release(struct gsi_trans_info *trans_info, u32 tre_count)
+{
+	atomic_add(tre_count, &trans_info->tre_avail);
+}
+
+/* Allocate a GSI transaction on a channel */
+struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
+					  u32 tre_count,
+					  enum dma_data_direction direction)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct gsi_trans_info *trans_info;
+	struct gsi_trans *trans;
+
+	/* assert(tre_count <= gsi_channel_trans_tre_max(gsi, channel_id)); */
+
+	trans_info = &channel->trans_info;
+
+	/* We reserve the TREs now, but consume them at commit time.
+	 * If there aren't enough available, we're done.
+	 */
+	if (!gsi_trans_tre_reserve(trans_info, tre_count))
+		return NULL;
+
+	/* Allocate and initialize non-zero fields in the transaction */
+	trans = gsi_trans_pool_alloc(&trans_info->pool, 1);
+	trans->gsi = gsi;
+	trans->channel_id = channel_id;
+	trans->tre_count = tre_count;
+	init_completion(&trans->completion);
+
+	/* Allocate the scatterlist and (if requested) info entries. */
+	trans->sgl = gsi_trans_pool_alloc(&trans_info->sg_pool, tre_count);
+	sg_init_marker(trans->sgl, tre_count);
+
+	trans->direction = direction;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	list_add_tail(&trans->links, &trans_info->alloc);
+
+	spin_unlock_bh(&trans_info->spinlock);
+
+	refcount_set(&trans->refcount, 1);
+
+	return trans;
+}
+
+/* Free a previously-allocated transaction (used only in case of error) */
+void gsi_trans_free(struct gsi_trans *trans)
+{
+	struct gsi_trans_info *trans_info;
+
+	if (!refcount_dec_and_test(&trans->refcount))
+		return;
+
+	trans_info = &trans->gsi->channel[trans->channel_id].trans_info;
+
+	spin_lock_bh(&trans_info->spinlock);
+
+	list_del(&trans->links);
+
+	spin_unlock_bh(&trans_info->spinlock);
+
+	ipa_gsi_trans_release(trans);
+
+	/* Releasing the reserved TREs implicitly frees the sgl[] and
+	 * (if present) info[] arrays, plus the transaction itself.
+	 */
+	gsi_trans_tre_release(trans_info, trans->tre_count);
+}
+
+/* Add an immediate command to a transaction */
+void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
+		       dma_addr_t addr, enum dma_data_direction direction,
+		       enum ipa_cmd_opcode opcode)
+{
+	struct ipa_cmd_info *info;
+	u32 which = trans->used++;
+	struct scatterlist *sg;
+
+	/* assert(which < trans->tre_count); */
+
+	/* Set the page information for the buffer.  We also need to fill in
+	 * the DMA address for the buffer (something dma_map_sg() normally
+	 * does).
+	 */
+	sg = &trans->sgl[which];
+
+	sg_set_buf(sg, buf, size);
+	sg_dma_address(sg) = addr;
+
+	info = &trans->info[which];
+	info->opcode = opcode;
+	info->direction = direction;
+}
+
+/* Add a page transfer to a transaction.  It will fill the only TRE. */
+int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
+		       u32 offset)
+{
+	struct scatterlist *sg = &trans->sgl[0];
+	int ret;
+
+	/* assert(trans->tre_count == 1); */
+	/* assert(!trans->used); */
+
+	sg_set_page(sg, page, size, offset);
+	ret = dma_map_sg(trans->gsi->dev, sg, 1, trans->direction);
+	if (!ret)
+		return -ENOMEM;
+
+	trans->used++;	/* Transaction now owns the (DMA mapped) page */
+
+	return 0;
+}
+
+/* Add an SKB transfer to a transaction.  No other TREs will be used. */
+int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb)
+{
+	struct scatterlist *sg = &trans->sgl[0];
+	u32 used;
+	int ret;
+
+	/* assert(trans->tre_count == 1); */
+	/* assert(!trans->used); */
+
+	/* skb->len will not be 0 (checked early) */
+	ret = skb_to_sgvec(skb, sg, 0, skb->len);
+	if (ret < 0)
+		return ret;
+	used = ret;
+
+	ret = dma_map_sg(trans->gsi->dev, sg, used, trans->direction);
+	if (!ret)
+		return -ENOMEM;
+
+	trans->used += used;	/* Transaction now owns the (DMA mapped) skb */
+
+	return 0;
+}
+
+/* Compute the length/opcode value to use for a TRE */
+static __le16 gsi_tre_len_opcode(enum ipa_cmd_opcode opcode, u32 len)
+{
+	return opcode == IPA_CMD_NONE ? cpu_to_le16((u16)len)
+				      : cpu_to_le16((u16)opcode);
+}
+
+/* Compute the flags value to use for a given TRE */
+static __le32 gsi_tre_flags(bool last_tre, bool bei, enum ipa_cmd_opcode opcode)
+{
+	enum gsi_tre_type tre_type;
+	u32 tre_flags;
+
+	tre_type = opcode == IPA_CMD_NONE ? GSI_RE_XFER : GSI_RE_IMMD_CMD;
+	tre_flags = u32_encode_bits(tre_type, TRE_FLAGS_TYPE_FMASK);
+
+	/* Last TRE contains interrupt flags */
+	if (last_tre) {
+		/* All transactions end in a transfer completion interrupt */
+		tre_flags |= TRE_FLAGS_IEOT_FMASK;
+		/* Don't interrupt when outbound commands are acknowledged */
+		if (bei)
+			tre_flags |= TRE_FLAGS_BEI_FMASK;
+	} else {	/* All others indicate there's more to come */
+		tre_flags |= TRE_FLAGS_CHAIN_FMASK;
+	}
+
+	return cpu_to_le32(tre_flags);
+}
+
+static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
+			       u32 len, bool last_tre, bool bei,
+			       enum ipa_cmd_opcode opcode)
+{
+	struct gsi_tre tre;
+
+	tre.addr = cpu_to_le64(addr);
+	tre.len_opcode = gsi_tre_len_opcode(opcode, len);
+	tre.reserved = 0;
+	tre.flags = gsi_tre_flags(last_tre, bei, opcode);
+
+	/* ARM64 can write 16 bytes as a unit with a single instruction.
+	 * Doing the assignment this way is an attempt to make that happen.
+	 */
+	*dest_tre = tre;
+}
+
+/**
+ * __gsi_trans_commit() - Common GSI transaction commit code
+ * @trans:	Transaction to commit
+ * @ring_db:	Whether to tell the hardware about these queued transfers
+ *
+ * Formats channel ring TRE entries based on the content of the scatterlist.
+ * Maps a transaction pointer to the last ring entry used for the transaction,
+ * so it can be recovered when it completes.  Moves the transaction to the
+ * pending list.  Finally, updates the channel ring pointer and optionally
+ * rings the doorbell.
+ */
+static void __gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
+{
+	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	struct gsi_ring *ring = &channel->tre_ring;
+	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
+	bool bei = channel->toward_ipa;
+	struct ipa_cmd_info *info;
+	struct gsi_tre *dest_tre;
+	struct scatterlist *sg;
+	u32 byte_count = 0;
+	u32 avail;
+	u32 i;
+
+	/* assert(trans->used > 0); */
+
+	/* Consume the entries.  If we cross the end of the ring while
+	 * filling them we'll switch to the beginning to finish.
+	 * If there is no info array we're doing a simple data
+	 * transfer request, whose opcode is IPA_CMD_NONE.
+	 */
+	info = trans->info ? &trans->info[0] : NULL;
+	avail = ring->count - ring->index % ring->count;
+	dest_tre = gsi_ring_virt(ring, ring->index);
+	for_each_sg(trans->sgl, sg, trans->used, i) {
+		bool last_tre = i == trans->used - 1;
+		dma_addr_t addr = sg_dma_address(sg);
+		u32 len = sg_dma_len(sg);
+
+		byte_count += len;
+		if (!avail--)
+			dest_tre = gsi_ring_virt(ring, 0);
+		if (info)
+			opcode = info++->opcode;
+
+		gsi_trans_tre_fill(dest_tre, addr, len, last_tre, bei, opcode);
+		dest_tre++;
+	}
+	ring->index += trans->used;
+
+	if (channel->toward_ipa) {
+		/* We record TX bytes when they are sent */
+		trans->len = byte_count;
+		trans->trans_count = channel->trans_count;
+		trans->byte_count = channel->byte_count;
+		channel->trans_count++;
+		channel->byte_count += byte_count;
+	}
+
+	/* Associate the last TRE with the transaction */
+	gsi_channel_trans_map(channel, ring->index - 1, trans);
+
+	gsi_trans_move_pending(trans);
+
+	/* Ring doorbell if requested, or if all TREs are allocated */
+	if (ring_db || !atomic_read(&channel->trans_info.tre_avail)) {
+		/* Report what we're handing off to hardware for TX channels */
+		if (channel->toward_ipa)
+			gsi_channel_tx_queued(channel);
+		gsi_channel_doorbell(channel);
+	}
+}
+
+/* Commit a GSI transaction */
+void gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
+{
+	if (trans->used)
+		__gsi_trans_commit(trans, ring_db);
+	else
+		gsi_trans_free(trans);
+}
+
+/* Commit a GSI transaction and wait for it to complete */
+void gsi_trans_commit_wait(struct gsi_trans *trans)
+{
+	if (!trans->used)
+		goto out_trans_free;
+
+	refcount_inc(&trans->refcount);
+
+	__gsi_trans_commit(trans, true);
+
+	wait_for_completion(&trans->completion);
+
+out_trans_free:
+	gsi_trans_free(trans);
+}
+
+/* Commit a GSI transaction and wait for it to complete, with timeout */
+int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
+				  unsigned long timeout)
+{
+	unsigned long timeout_jiffies = msecs_to_jiffies(timeout);
+	unsigned long remaining = 1;	/* In case of empty transaction */
+
+	if (!trans->used)
+		goto out_trans_free;
+
+	refcount_inc(&trans->refcount);
+
+	__gsi_trans_commit(trans, true);
+
+	remaining = wait_for_completion_timeout(&trans->completion,
+						timeout_jiffies);
+out_trans_free:
+	gsi_trans_free(trans);
+
+	return remaining ? 0 : -ETIMEDOUT;
+}
+
+/* Process the completion of a transaction; called while polling */
+void gsi_trans_complete(struct gsi_trans *trans)
+{
+	/* If the entire SGL was mapped when added, unmap it now */
+	if (trans->direction != DMA_NONE)
+		dma_unmap_sg(trans->gsi->dev, trans->sgl, trans->used,
+			     trans->direction);
+
+	ipa_gsi_trans_complete(trans);
+
+	complete(&trans->completion);
+
+	gsi_trans_free(trans);
+}
+
+/* Cancel a channel's pending transactions */
+void gsi_channel_trans_cancel_pending(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct gsi_trans *trans;
+	bool cancelled;
+
+	/* channel->gsi->mutex is held by caller */
+	spin_lock_bh(&trans_info->spinlock);
+
+	cancelled = !list_empty(&trans_info->pending);
+	list_for_each_entry(trans, &trans_info->pending, links)
+		trans->cancelled = true;
+
+	list_splice_tail_init(&trans_info->pending, &trans_info->complete);
+
+	spin_unlock_bh(&trans_info->spinlock);
+
+	/* Schedule NAPI polling to complete the cancelled transactions */
+	if (cancelled)
+		napi_schedule(&channel->napi);
+}
+
+/* Issue a command to read a single byte from a channel */
+int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct gsi_ring *ring = &channel->tre_ring;
+	struct gsi_trans_info *trans_info;
+	struct gsi_tre *dest_tre;
+
+	trans_info = &channel->trans_info;
+
+	/* First reserve the TRE, if possible */
+	if (!gsi_trans_tre_reserve(trans_info, 1))
+		return -EBUSY;
+
+	/* Now fill the reserved TRE and tell the hardware */
+
+	dest_tre = gsi_ring_virt(ring, ring->index);
+	gsi_trans_tre_fill(dest_tre, addr, 1, true, false, IPA_CMD_NONE);
+
+	ring->index++;
+	gsi_channel_doorbell(channel);
+
+	return 0;
+}
+
+/* Mark a gsi_trans_read_byte() request done */
+void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+
+	gsi_trans_tre_release(&channel->trans_info, 1);
+}
+
+/* Initialize a channel's GSI transaction info */
+int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
+{
+	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct gsi_trans_info *trans_info;
+	u32 tre_max;
+	int ret;
+
+	/* Ensure the size of a channel element is what's expected */
+	BUILD_BUG_ON(sizeof(struct gsi_tre) != GSI_RING_ELEMENT_SIZE);
+
+	/* The map array is used to determine what transaction is associated
+	 * with a TRE that the hardware reports has completed.  We need one
+	 * map entry per TRE.
+	 */
+	trans_info = &channel->trans_info;
+	trans_info->map = kcalloc(channel->tre_count, sizeof(*trans_info->map),
+				  GFP_KERNEL);
+	if (!trans_info->map)
+		return -ENOMEM;
+
+	/* We can't use more TREs than there are available in the ring.
+	 * This limits the number of transactions that can be outstanding.
+	 * Worst case is one TRE per transaction (but we actually limit
+	 * it to something a little less than that).  We allocate resources
+	 * for transactions (including transaction structures) based on
+	 * this maximum number.
+	 */
+	tre_max = gsi_channel_tre_max(channel->gsi, channel_id);
+
+	/* Transactions are allocated one at a time. */
+	ret = gsi_trans_pool_init(&trans_info->pool, sizeof(struct gsi_trans),
+				  tre_max, 1);
+	if (ret)
+		goto err_kfree;
+
+	/* A transaction uses a scatterlist array to represent the data
+	 * transfers implemented by the transaction.  Each scatterlist
+	 * element is used to fill a single TRE when the transaction is
+	 * committed.  So we need as many scatterlist elements as the
+	 * maximum number of TREs that can be outstanding.
+	 *
+	 * All TREs in a transaction must fit within the channel's TLV FIFO,
+	 * so a single transaction can allocate at most that many TREs.
+	 */
+	ret = gsi_trans_pool_init(&trans_info->sg_pool,
+				  sizeof(struct scatterlist),
+				  tre_max, channel->tlv_count);
+	if (ret)
+		goto err_trans_pool_exit;
+
+	/* Finally, the tre_avail field is what ultimately limits the number
+	 * of outstanding transactions and their resources.  A transaction
+	 * allocation succeeds only if the TREs available are sufficient for
+	 * what the transaction might need.  Transaction resource pools are
+	 * sized based on the maximum number of outstanding TREs, so there
+	 * will always be resources available if there are TREs available.
+	 */
+	atomic_set(&trans_info->tre_avail, tre_max);
+
+	spin_lock_init(&trans_info->spinlock);
+	INIT_LIST_HEAD(&trans_info->alloc);
+	INIT_LIST_HEAD(&trans_info->pending);
+	INIT_LIST_HEAD(&trans_info->complete);
+	INIT_LIST_HEAD(&trans_info->polled);
+
+	return 0;
+
+err_trans_pool_exit:
+	gsi_trans_pool_exit(&trans_info->pool);
+err_kfree:
+	kfree(trans_info->map);
+
+	dev_err(gsi->dev, "error %d initializing channel %u transactions\n",
+		ret, channel_id);
+
+	return ret;
+}
+
+/* Inverse of gsi_channel_trans_init() */
+void gsi_channel_trans_exit(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+
+	gsi_trans_pool_exit(&trans_info->sg_pool);
+	gsi_trans_pool_exit(&trans_info->pool);
+	kfree(trans_info->map);
+}
diff --git a/drivers/net/ipa/gsi_trans.h b/drivers/net/ipa/gsi_trans.h
new file mode 100644
index 000000000000..1477fc15b30a
--- /dev/null
+++ b/drivers/net/ipa/gsi_trans.h
@@ -0,0 +1,226 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _GSI_TRANS_H_
+#define _GSI_TRANS_H_
+
+#include <linux/types.h>
+#include <linux/refcount.h>
+#include <linux/completion.h>
+#include <linux/dma-direction.h>
+
+#include "ipa_cmd.h"
+
+struct scatterlist;
+struct device;
+struct sk_buff;
+
+struct gsi;
+struct gsi_trans;
+struct gsi_trans_pool;
+
+/**
+ * struct gsi_trans - a GSI transaction
+ *
+ * Most fields in this structure are for internal use by the transaction core code:
+ * @links:	Links for channel transaction lists by state
+ * @gsi:	GSI pointer
+ * @channel_id: Channel number transaction is associated with
+ * @cancelled:	If set by the core code, transaction was cancelled
+ * @tre_count:	Number of TREs reserved for this transaction
+ * @used:	Number of TREs *used* (could be less than tre_count)
+ * @len:	Total # of transfer bytes represented in sgl[] (set by core)
+ * @data:	Preserved but not touched by the core transaction code
+ * @sgl:	An array of scatter/gather entries managed by core code
+ * @info:	Array of command information structures (command channel)
+ * @direction:	DMA transfer direction (DMA_NONE for commands)
+ * @refcount:	Reference count used for destruction
+ * @completion:	Completed when the transaction completes
+ * @byte_count:	TX channel byte count recorded when transaction committed
+ * @trans_count: Channel transaction count when committed (for BQL accounting)
+ *
+ * The sizes used for some fields in this structure were chosen to ensure
+ * the full structure size is no larger than 128 bytes.
+ */
+struct gsi_trans {
+	struct list_head links;		/* gsi_channel lists */
+
+	struct gsi *gsi;
+	u8 channel_id;
+
+	bool cancelled;			/* true if transaction was cancelled */
+
+	u8 tre_count;			/* # TREs requested */
+	u8 used;			/* # entries used in sgl[] */
+	u32 len;			/* total # bytes across sgl[] */
+
+	void *data;
+	struct scatterlist *sgl;
+	struct ipa_cmd_info *info;	/* array of entries, or null */
+	enum dma_data_direction direction;
+
+	refcount_t refcount;
+	struct completion completion;
+
+	u64 byte_count;			/* channel byte_count when committed */
+	u64 trans_count;		/* channel trans_count when committed */
+};
+
+/**
+ * gsi_trans_pool_init() - Initialize a pool of structures for transactions
+ * @pool:	Pool pointer
+ * @size:	Size of elements in the pool
+ * @count:	Minimum number of elements in the pool
+ * @max_alloc:	Maximum number of elements allocated at a time from pool
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
+			u32 max_alloc);
+
+/**
+ * gsi_trans_pool_alloc() - Allocate one or more elements from a pool
+ * @pool:	Pool pointer
+ * @count:	Number of elements to allocate from the pool
+ *
+ * @Return:	Virtual address of element(s) allocated from the pool
+ */
+void *gsi_trans_pool_alloc(struct gsi_trans_pool *pool, u32 count);
+
+/**
+ * gsi_trans_pool_exit() - Inverse of gsi_trans_pool_init()
+ * @pool:	Pool pointer
+ */
+void gsi_trans_pool_exit(struct gsi_trans_pool *pool);
+
+/**
+ * gsi_trans_pool_init_dma() - Initialize a pool of DMA-able structures
+ * @dev:	Device used for DMA
+ * @pool:	Pool pointer
+ * @size:	Size of elements in the pool
+ * @count:	Minimum number of elements in the pool
+ * @max_alloc:	Maximum number of elements allocated at a time from pool
+ *
+ * @Return:	0 if successful, or a negative error code
+ *
+ * Structures in this pool reside in DMA-coherent memory.
+ */
+int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
+			    size_t size, u32 count, u32 max_alloc);
+
+/**
+ * gsi_trans_pool_alloc_dma() - Allocate an element from a DMA pool
+ * @pool:	DMA pool pointer
+ * @addr:	DMA address "handle" associated with the allocation
+ *
+ * @Return:	Virtual address of element allocated from the pool
+ *
+ * Only one element at a time may be allocated from a DMA pool.
+ */
+void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr);
+
+/**
+ * gsi_trans_pool_exit_dma() - Inverse of gsi_trans_pool_init_dma()
+ * @dev:	Device used for DMA
+ * @pool:	Pool pointer
+ */
+void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool);
+
+/**
+ * gsi_channel_trans_alloc() - Allocate a GSI transaction on a channel
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel the transaction is associated with
+ * @tre_count:	Number of elements in the transaction
+ * @direction:	DMA direction for entire SGL (or DMA_NONE)
+ *
+ * @Return:	A GSI transaction structure, or a null pointer if all
+ *		available transactions are in use
+ */
+struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
+					  u32 tre_count,
+					  enum dma_data_direction direction);
+
+/**
+ * gsi_trans_free() - Free a previously-allocated GSI transaction
+ * @trans:	Transaction to be freed
+ */
+void gsi_trans_free(struct gsi_trans *trans);
+
+/**
+ * gsi_trans_cmd_add() - Add an immediate command to a transaction
+ * @trans:	Transaction
+ * @buf:	Buffer pointer for command payload
+ * @size:	Number of bytes in buffer
+ * @addr:	DMA address for payload
+ * @direction:	Direction of DMA transfer (or DMA_NONE if none required)
+ * @opcode:	IPA immediate command opcode
+ */
+void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
+		       dma_addr_t addr, enum dma_data_direction direction,
+		       enum ipa_cmd_opcode opcode);
+
+/**
+ * gsi_trans_page_add() - Add a page transfer to a transaction
+ * @trans:	Transaction
+ * @page:	Page pointer
+ * @size:	Number of bytes (starting at offset) to transfer
+ * @offset:	Offset within page for start of transfer
+ */
+int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
+		       u32 offset);
+
+/**
+ * gsi_trans_skb_add() - Add a socket transfer to a transaction
+ * @trans:	Transaction
+ * @skb:	Socket buffer for transfer (outbound)
+ *
+ * @Return:	0, or -EMSGSIZE if socket data won't fit in transaction.
+ */
+int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb);
+
+/**
+ * gsi_trans_commit() - Commit a GSI transaction
+ * @trans:	Transaction to commit
+ * @ring_db:	Whether to tell the hardware about these queued transfers
+ */
+void gsi_trans_commit(struct gsi_trans *trans, bool ring_db);
+
+/**
+ * gsi_trans_commit_wait() - Commit a GSI transaction and wait for it
+ *			     to complete
+ * @trans:	Transaction to commit
+ */
+void gsi_trans_commit_wait(struct gsi_trans *trans);
+
+/**
+ * gsi_trans_commit_wait_timeout() - Commit a GSI transaction and wait for
+ *				     it to complete, with timeout
+ * @trans:	Transaction to commit
+ * @timeout:	Timeout period (in milliseconds)
+ */
+int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
+				  unsigned long timeout);
+
+/**
+ * gsi_trans_read_byte() - Issue a single byte read TRE on a channel
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel on which to read a byte
+ * @addr:	DMA address into which to transfer the one byte
+ *
+ * This is not a transaction operation at all.  It's defined here because
+ * it needs to be done in coordination with other transaction activity.
+ */
+int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr);
+
+/**
+ * gsi_trans_read_byte_done() - Clean up after a single byte read TRE
+ * @gsi:	GSI pointer
+ * @channel_id:	Channel on which byte was read
+ *
+ * This function needs to be called to signal that the work related
+ * to reading a byte initiated by gsi_trans_read_byte() is complete.
+ */
+void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id);
+
+#endif /* _GSI_TRANS_H_ */
-- 
2.20.1
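To show how the transaction interfaces declared in gsi_trans.h above fit
together, here is a minimal sketch (not code from this patch) of the
allocate/add/commit life cycle for a single receive buffer.  The function
name and the order-0 page size are assumptions for illustration; the real
usage appears in the endpoint code later in this series.

static int example_queue_rx_buffer(struct gsi *gsi, u32 channel_id)
{
	struct gsi_trans *trans;
	struct page *page;
	int ret;

	/* A single page transfer consumes one TRE */
	trans = gsi_channel_trans_alloc(gsi, channel_id, 1, DMA_FROM_DEVICE);
	if (!trans)
		return -EBUSY;		/* all transactions in use */

	page = dev_alloc_pages(0);	/* order-0 buffer; sketch only */
	if (!page) {
		gsi_trans_free(trans);
		return -ENOMEM;
	}

	ret = gsi_trans_page_add(trans, page, PAGE_SIZE, 0);
	if (ret) {
		__free_pages(page, 0);
		gsi_trans_free(trans);
		return ret;
	}
	trans->data = page;		/* transaction owns the page now */

	gsi_trans_commit(trans, true);	/* ring the channel doorbell */

	return 0;
}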


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 10/17] soc: qcom: ipa: IPA endpoints
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (8 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 09/17] soc: qcom: ipa: GSI transactions Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 11/17] soc: qcom: ipa: filter and routing tables Alex Elder
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

This patch includes the code implementing an IPA endpoint.  This is
the primary abstraction implemented by the IPA.  An endpoint is one
end of a network connection between two entities physically
connected to the IPA.  Specifically, the AP and the modem implement
endpoints, and an (AP endpoint, modem endpoint) pair implements the
transfer of network data in one direction between the AP and modem.

Endpoints are built on top of GSI channels, but IPA endpoints
represent the higher-level functionality that the IPA provides.
Data can be sent through a GSI channel, but it is the IPA endpoint
that represents what is on the "other end" to receive that data.
Other functionality, including aggregation, checksum offload, and
(at some future date) IP routing and filtering, is associated with
the IPA endpoint.
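
To make the endpoint's role concrete, here is a minimal sketch (not code
from this patch) of how a transmit path might hand an outbound socket
buffer to the AP->modem TX endpoint using the interfaces added here.  The
private netdev structure and function name are assumptions for
illustration only; the driver's actual modem netdev code lives elsewhere
in this series.

struct example_modem_priv {
	struct ipa *ipa;	/* assumed private netdev data; sketch only */
};

static netdev_tx_t example_start_xmit(struct sk_buff *skb,
				      struct net_device *netdev)
{
	struct example_modem_priv *priv = netdev_priv(netdev);
	struct ipa_endpoint *endpoint;
	int ret;

	endpoint = priv->ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX];

	ret = ipa_endpoint_skb_tx(endpoint, skb);
	if (ret == -EBUSY)
		return NETDEV_TX_BUSY;	/* no free transaction; stack retries */
	if (ret)
		dev_kfree_skb_any(skb);	/* couldn't queue it; drop the packet */

	return NETDEV_TX_OK;
}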

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_endpoint.c | 1707 ++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_endpoint.h |  110 ++
 2 files changed, 1817 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_endpoint.c
 create mode 100644 drivers/net/ipa/ipa_endpoint.h

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
new file mode 100644
index 000000000000..915b4cd05dd2
--- /dev/null
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -0,0 +1,1707 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/bitfield.h>
+#include <linux/if_rmnet.h>
+#include <linux/version.h>
+#include <linux/dma-direction.h>
+
+#include "gsi.h"
+#include "gsi_trans.h"
+#include "ipa.h"
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+#include "ipa_modem.h"
+#include "ipa_table.h"
+#include "ipa_gsi.h"
+
+#define atomic_dec_not_zero(v)	atomic_add_unless((v), -1, 0)
+
+#define IPA_REPLENISH_BATCH	16
+
+#define IPA_RX_BUFFER_SIZE	(PAGE_SIZE << IPA_RX_BUFFER_ORDER)
+#define IPA_RX_BUFFER_ORDER	1	/* 8KB endpoint RX buffers (2 pages) */
+
+/* The amount of RX buffer space consumed by standard skb overhead */
+#define IPA_RX_BUFFER_OVERHEAD	(PAGE_SIZE - SKB_MAX_ORDER(NET_SKB_PAD, 0))
+
+#define IPA_ENDPOINT_STOP_RX_RETRIES		10
+#define IPA_ENDPOINT_STOP_RX_SIZE		1	/* bytes */
+
+#define IPA_ENDPOINT_RESET_AGGR_RETRY_MAX	3
+#define IPA_AGGR_TIME_LIMIT_DEFAULT		1000	/* microseconds */
+
+#define ENDPOINT_STOP_DMA_TIMEOUT		15	/* milliseconds */
+
+/** enum ipa_status_opcode - status element opcode hardware values */
+enum ipa_status_opcode {
+	IPA_STATUS_OPCODE_PACKET		= 0x01,
+	IPA_STATUS_OPCODE_NEW_FRAG_RULE		= 0x02,
+	IPA_STATUS_OPCODE_DROPPED_PACKET	= 0x04,
+	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 0x08,
+	IPA_STATUS_OPCODE_LOG			= 0x10,
+	IPA_STATUS_OPCODE_DCMP			= 0x20,
+	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 0x40,
+};
+
+/** enum ipa_status_exception - status element exception type */
+enum ipa_status_exception {
+	/* 0 means no exception */
+	IPA_STATUS_EXCEPTION_DEAGGR		= 0x01,
+	IPA_STATUS_EXCEPTION_IPTYPE		= 0x04,
+	IPA_STATUS_EXCEPTION_PACKET_LENGTH	= 0x08,
+	IPA_STATUS_EXCEPTION_FRAG_RULE_MISS	= 0x10,
+	IPA_STATUS_EXCEPTION_SW_FILT		= 0x20,
+	/* The meaning of the next value depends on the IP version */
+	IPA_STATUS_EXCEPTION_NAT		= 0x40,		/* IPv4 */
+	IPA_STATUS_EXCEPTION_IPV6CT		= IPA_STATUS_EXCEPTION_NAT,
+};
+
+/* Status element provided by hardware */
+struct ipa_status {
+	u8 opcode;		/* enum ipa_status_opcode */
+	u8 exception;		/* enum ipa_status_exception */
+	__le16 mask;
+	__le16 pkt_len;
+	u8 endp_src_idx;
+	u8 endp_dst_idx;
+	__le32 metadata;
+	__le32 flags1;
+	__le64 flags2;
+	__le32 flags3;
+	__le32 flags4;
+};
+
+/* Field masks for struct ipa_status structure fields */
+
+#define IPA_STATUS_SRC_IDX_FMASK		GENMASK(4, 0)
+
+#define IPA_STATUS_DST_IDX_FMASK		GENMASK(4, 0)
+
+#define IPA_STATUS_FLAGS1_FLT_LOCAL_FMASK	GENMASK(0, 0)
+#define IPA_STATUS_FLAGS1_FLT_HASH_FMASK	GENMASK(1, 1)
+#define IPA_STATUS_FLAGS1_FLT_GLOBAL_FMASK	GENMASK(2, 2)
+#define IPA_STATUS_FLAGS1_FLT_RET_HDR_FMASK	GENMASK(3, 3)
+#define IPA_STATUS_FLAGS1_FLT_RULE_ID_FMASK	GENMASK(13, 4)
+#define IPA_STATUS_FLAGS1_RT_LOCAL_FMASK	GENMASK(14, 14)
+#define IPA_STATUS_FLAGS1_RT_HASH_FMASK		GENMASK(15, 15)
+#define IPA_STATUS_FLAGS1_UCP_FMASK		GENMASK(16, 16)
+#define IPA_STATUS_FLAGS1_RT_TBL_IDX_FMASK	GENMASK(21, 17)
+#define IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK	GENMASK(31, 22)
+
+#define IPA_STATUS_FLAGS2_NAT_HIT_FMASK		GENMASK_ULL(0, 0)
+#define IPA_STATUS_FLAGS2_NAT_ENTRY_IDX_FMASK	GENMASK_ULL(13, 1)
+#define IPA_STATUS_FLAGS2_NAT_TYPE_FMASK	GENMASK_ULL(15, 14)
+#define IPA_STATUS_FLAGS2_TAG_INFO_FMASK	GENMASK_ULL(63, 16)
+
+#define IPA_STATUS_FLAGS3_SEQ_NUM_FMASK		GENMASK(7, 0)
+#define IPA_STATUS_FLAGS3_TOD_CTR_FMASK		GENMASK(31, 8)
+
+#define IPA_STATUS_FLAGS4_HDR_LOCAL_FMASK	GENMASK(0, 0)
+#define IPA_STATUS_FLAGS4_HDR_OFFSET_FMASK	GENMASK(10, 1)
+#define IPA_STATUS_FLAGS4_FRAG_HIT_FMASK	GENMASK(11, 11)
+#define IPA_STATUS_FLAGS4_FRAG_RULE_FMASK	GENMASK(15, 12)
+#define IPA_STATUS_FLAGS4_HW_SPECIFIC_FMASK	GENMASK(31, 16)
+
+#ifdef IPA_VALIDATE
+
+static void ipa_endpoint_validate_build(void)
+{
+	/* The aggregation byte limit defines the point at which an
+	 * aggregation window will close.  It is programmed into the
+	 * IPA hardware as a number of KB.  We don't use "hard byte
+	 * limit" aggregation, which means that we need to supply
+	 * enough space in a receive buffer to hold a complete MTU
+	 * plus normal skb overhead *after* that aggregation byte
+	 * limit has been crossed.
+	 *
+	 * This check just ensures we don't define a receive buffer
+	 * size that would exceed what we can represent in the field
+	 * that is used to program its size.
+	 */
+	BUILD_BUG_ON(IPA_RX_BUFFER_SIZE >
+		     field_max(AGGR_BYTE_LIMIT_FMASK) * SZ_1K +
+		     IPA_MTU + IPA_RX_BUFFER_OVERHEAD);
+
+	/* I honestly don't know where this requirement comes from.  But
+	 * it holds, and if we someday need to loosen the constraint we
+	 * can try to track it down.
+	 */
+	BUILD_BUG_ON(sizeof(struct ipa_status) % 4);
+}
+
+static bool ipa_endpoint_data_valid_one(struct ipa *ipa, u32 count,
+			    const struct ipa_gsi_endpoint_data *all_data,
+			    const struct ipa_gsi_endpoint_data *data)
+{
+	const struct ipa_gsi_endpoint_data *other_data;
+	struct device *dev = &ipa->pdev->dev;
+	enum ipa_endpoint_name other_name;
+
+	if (ipa_gsi_endpoint_data_empty(data))
+		return true;
+
+	if (!data->toward_ipa) {
+		if (data->endpoint.filter_support) {
+			dev_err(dev, "filtering not supported for "
+					"RX endpoint %u\n",
+				data->endpoint_id);
+			return false;
+		}
+
+		return true;	/* Nothing more to check for RX */
+	}
+
+	if (data->endpoint.config.status_enable) {
+		other_name = data->endpoint.config.tx.status_endpoint;
+		if (other_name >= count) {
+			dev_err(dev, "status endpoint name %u out of range "
+					"for endpoint %u\n",
+				other_name, data->endpoint_id);
+			return false;
+		}
+
+		/* Status endpoint must be defined... */
+		other_data = &all_data[other_name];
+		if (ipa_gsi_endpoint_data_empty(other_data)) {
+			dev_err(dev, "status endpoint name %u undefined "
+					"for endpoint %u\n",
+				other_name, data->endpoint_id);
+			return false;
+		}
+
+		/* ...and has to be an RX endpoint... */
+		if (other_data->toward_ipa) {
+			dev_err(dev,
+				"status endpoint for endpoint %u not RX\n",
+				data->endpoint_id);
+			return false;
+		}
+
+		/* ...and if it's to be an AP endpoint... */
+		if (other_data->ee_id == GSI_EE_AP) {
+			/* ...make sure it has status enabled. */
+			if (!other_data->endpoint.config.status_enable) {
+				dev_err(dev,
+					"status not enabled for endpoint %u\n",
+					other_data->endpoint_id);
+				return false;
+			}
+		}
+	}
+
+	if (data->endpoint.config.dma_mode) {
+		other_name = data->endpoint.config.dma_endpoint;
+		if (other_name >= count) {
+			dev_err(dev, "DMA endpoint name %u out of range "
+					"for endpoint %u\n",
+				other_name, data->endpoint_id);
+			return false;
+		}
+
+		other_data = &all_data[other_name];
+		if (ipa_gsi_endpoint_data_empty(other_data)) {
+			dev_err(dev, "DMA endpoint name %u undefined "
+					"for endpoint %u\n",
+				other_name, data->endpoint_id);
+			return false;
+		}
+	}
+
+	return true;
+}
+
+static bool ipa_endpoint_data_valid(struct ipa *ipa, u32 count,
+				    const struct ipa_gsi_endpoint_data *data)
+{
+	const struct ipa_gsi_endpoint_data *dp = data;
+	struct device *dev = &ipa->pdev->dev;
+	enum ipa_endpoint_name name;
+
+	ipa_endpoint_validate_build();
+
+	if (count > IPA_ENDPOINT_COUNT) {
+		dev_err(dev, "too many endpoints specified (%u > %u)\n",
+			count, IPA_ENDPOINT_COUNT);
+		return false;
+	}
+
+	/* Make sure needed endpoints have defined data */
+	if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_COMMAND_TX])) {
+		dev_err(dev, "command TX endpoint not defined\n");
+		return false;
+	}
+	if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_LAN_RX])) {
+		dev_err(dev, "LAN RX endpoint not defined\n");
+		return false;
+	}
+	if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_MODEM_TX])) {
+		dev_err(dev, "AP->modem TX endpoint not defined\n");
+		return false;
+	}
+	if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_MODEM_RX])) {
+		dev_err(dev, "AP<-modem RX endpoint not defined\n");
+		return false;
+	}
+
+	for (name = 0; name < count; name++, dp++)
+		if (!ipa_endpoint_data_valid_one(ipa, count, data, dp))
+			return false;
+
+	return true;
+}
+
+#else /* !IPA_VALIDATE */
+
+static bool ipa_endpoint_data_valid(struct ipa *ipa, u32 count,
+				    const struct ipa_gsi_endpoint_data *data)
+{
+	return true;
+}
+
+#endif /* !IPA_VALIDATE */
+
+/* Allocate a transaction to use on a non-command endpoint */
+static struct gsi_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint,
+						  u32 tre_count)
+{
+	struct gsi *gsi = &endpoint->ipa->gsi;
+	u32 channel_id = endpoint->channel_id;
+	enum dma_data_direction direction;
+
+	direction = endpoint->toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+
+	return gsi_channel_trans_alloc(gsi, channel_id, tre_count, direction);
+}
+
+/* suspend_delay represents suspend for RX, delay for TX endpoints.
+ * Note that suspend is not supported starting with IPA v4.0.
+ */
+static int
+ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay)
+{
+	u32 offset = IPA_REG_ENDP_INIT_CTRL_N_OFFSET(endpoint->endpoint_id);
+	struct ipa *ipa = endpoint->ipa;
+	u32 mask;
+	u32 val;
+
+	/* assert(ipa->version == IPA_VERSION_3_5_1); */
+	mask = endpoint->toward_ipa ? ENDP_DELAY_FMASK : ENDP_SUSPEND_FMASK;
+
+	val = ioread32(ipa->reg_virt + offset);
+	if (suspend_delay == !!(val & mask))
+		return -EALREADY;	/* Already set to desired state */
+
+	val ^= mask;
+	iowrite32(val, ipa->reg_virt + offset);
+
+	return 0;
+}
+
+/* Enable or disable delay or suspend mode on all modem endpoints */
+void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable)
+{
+	bool support_suspend;
+	u32 endpoint_id;
+
+	/* DELAY mode doesn't work right on IPA v4.2 */
+	if (ipa->version == IPA_VERSION_4_2)
+		return;
+
+	/* Only IPA v3.5.1 supports SUSPEND mode on RX endpoints */
+	support_suspend = ipa->version == IPA_VERSION_3_5_1;
+
+	for (endpoint_id = 0; endpoint_id < IPA_ENDPOINT_MAX; endpoint_id++) {
+		struct ipa_endpoint *endpoint = &ipa->endpoint[endpoint_id];
+
+		if (endpoint->ee_id != GSI_EE_MODEM)
+			continue;
+
+		/* Set TX delay mode, or for IPA v3.5.1 RX suspend mode */
+		if (endpoint->toward_ipa || support_suspend)
+			(void)ipa_endpoint_init_ctrl(endpoint, enable);
+	}
+}
+
+/* Reset all modem endpoints to use the default exception endpoint */
+int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
+{
+	u32 initialized = ipa->initialized;
+	struct gsi_trans *trans;
+	u32 count;
+
+	/* We need one command per modem TX endpoint.  We can get an upper
+	 * bound on that by assuming all initialized endpoints are modem->IPA.
+	 * That won't happen, and we could be more precise, but this is fine
+	 * for now.  We need to end the transaction with a "tag process."
+	 */
+	count = hweight32(initialized) + ipa_cmd_tag_process_count();
+	trans = ipa_cmd_trans_alloc(ipa, count);
+	if (!trans) {
+		dev_err(&ipa->pdev->dev,
+			"no transaction to reset modem exception endpoints\n");
+		return -EBUSY;
+	}
+
+	while (initialized) {
+		u32 endpoint_id = __ffs(initialized);
+		struct ipa_endpoint *endpoint;
+		u32 offset;
+
+		initialized ^= BIT(endpoint_id);
+
+		/* We only reset modem TX endpoints */
+		endpoint = &ipa->endpoint[endpoint_id];
+		if (!(endpoint->ee_id == GSI_EE_MODEM && endpoint->toward_ipa))
+			continue;
+
+		offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
+
+		/* Value written is 0, and all bits are updated.  That
+		 * means status is disabled on the endpoint, and as a
+		 * result all other fields in the register are ignored.
+		 */
+		ipa_cmd_register_write_add(trans, offset, 0, ~0, false);
+	}
+
+	ipa_cmd_tag_process_add(trans);
+
+	/* XXX This should have a 1 second timeout */
+	gsi_trans_commit_wait(trans);
+
+	return 0;
+}
+
+static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint)
+{
+	u32 offset = IPA_REG_ENDP_INIT_CFG_N_OFFSET(endpoint->endpoint_id);
+	u32 val = 0;
+
+	/* FRAG_OFFLOAD_EN is 0 */
+	if (endpoint->data->checksum) {
+		if (endpoint->toward_ipa) {
+			u32 checksum_offset;
+
+			val |= u32_encode_bits(IPA_CS_OFFLOAD_UL,
+					       CS_OFFLOAD_EN_FMASK);
+			/* Checksum header offset is in 4-byte units */
+			checksum_offset = sizeof(struct rmnet_map_header);
+			checksum_offset /= sizeof(u32);
+			val |= u32_encode_bits(checksum_offset,
+					       CS_METADATA_HDR_OFFSET_FMASK);
+		} else {
+			val |= u32_encode_bits(IPA_CS_OFFLOAD_DL,
+					       CS_OFFLOAD_EN_FMASK);
+		}
+	} else {
+		val |= u32_encode_bits(IPA_CS_OFFLOAD_NONE,
+				       CS_OFFLOAD_EN_FMASK);
+	}
+	/* CS_GEN_QMB_MASTER_SEL is 0 */
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
+{
+	u32 offset = IPA_REG_ENDP_INIT_HDR_N_OFFSET(endpoint->endpoint_id);
+	u32 val = 0;
+
+	if (endpoint->data->qmap) {
+		size_t header_size = sizeof(struct rmnet_map_header);
+
+		if (endpoint->toward_ipa && endpoint->data->checksum)
+			header_size += sizeof(struct rmnet_map_ul_csum_header);
+
+		val |= u32_encode_bits(header_size, HDR_LEN_FMASK);
+		/* metadata is the 4 byte rmnet_map header itself */
+		val |= HDR_OFST_METADATA_VALID_FMASK;
+		val |= u32_encode_bits(0, HDR_OFST_METADATA_FMASK);
+		/* HDR_ADDITIONAL_CONST_LEN is 0; (IPA->AP only) */
+		if (!endpoint->toward_ipa) {
+			u32 size_offset = offsetof(struct rmnet_map_header,
+						   pkt_len);
+
+			val |= HDR_OFST_PKT_SIZE_VALID_FMASK;
+			val |= u32_encode_bits(size_offset,
+					       HDR_OFST_PKT_SIZE_FMASK);
+		}
+		/* HDR_A5_MUX is 0 */
+		/* HDR_LEN_INC_DEAGG_HDR is 0 */
+		/* HDR_METADATA_REG_VALID is 0; (AP->IPA only) */
+	}
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
+{
+	u32 offset = IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(endpoint->endpoint_id);
+	u32 pad_align = endpoint->data->rx.pad_align;
+	u32 val = 0;
+
+	val |= HDR_ENDIANNESS_FMASK;		/* big endian */
+	val |= HDR_TOTAL_LEN_OR_PAD_VALID_FMASK;
+	/* HDR_TOTAL_LEN_OR_PAD is 0 (pad, not total_len) */
+	/* HDR_PAYLOAD_LEN_INC_PADDING is 0 */
+	/* HDR_TOTAL_LEN_OR_PAD_OFFSET is 0 */
+	if (!endpoint->toward_ipa)
+		val |= u32_encode_bits(pad_align, HDR_PAD_TO_ALIGNMENT_FMASK);
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+/**
+ * ipa_rmnet_mux_id_metadata_mask() - Generate the QMAP mux_id metadata mask
+ *
+ * Generate a metadata mask value that will select only the mux_id
+ * field in an rmnet_map header structure.  The mux_id is at offset
+ * 1 byte from the beginning of the structure, but the metadata
+ * value is treated as a 4-byte unit.  So this mask must be computed
+ * with endianness in mind.  Note that ipa_endpoint_init_hdr_metadata_mask()
+ * will convert this value to the proper byte order.
+ *
+ * Marked __always_inline because this is really computing a
+ * constant value.
+ */
+static __always_inline __be32 ipa_rmnet_mux_id_metadata_mask(void)
+{
+	size_t mux_id_offset = offsetof(struct rmnet_map_header, mux_id);
+	u32 mux_id_mask = 0;
+	u8 *bytes;
+
+	bytes = (u8 *)&mux_id_mask;
+	bytes[mux_id_offset] = 0xff;	/* mux_id is 1 byte */
+
+	return cpu_to_be32(mux_id_mask);
+}
+
+static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
+{
+	u32 endpoint_id = endpoint->endpoint_id;
+	u32 val = 0;
+	u32 offset;
+
+	offset = IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(endpoint_id);
+
+	if (!endpoint->toward_ipa && endpoint->data->qmap)
+		val = ipa_rmnet_mux_id_metadata_mask();
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint)
+{
+	u32 offset = IPA_REG_ENDP_INIT_MODE_N_OFFSET(endpoint->endpoint_id);
+	u32 val;
+
+	if (endpoint->toward_ipa && endpoint->data->dma_mode) {
+		enum ipa_endpoint_name name = endpoint->data->dma_endpoint;
+		u32 dma_endpoint_id;
+
+		dma_endpoint_id = endpoint->ipa->name_map[name]->endpoint_id;
+
+		val = u32_encode_bits(IPA_DMA, MODE_FMASK);
+		val |= u32_encode_bits(dma_endpoint_id, DEST_PIPE_INDEX_FMASK);
+	} else {
+		val = u32_encode_bits(IPA_BASIC, MODE_FMASK);
+	}
+	/* Other bitfields unspecified (and 0) */
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+/* Compute the aggregation size value to use for a given buffer size */
+static u32 ipa_aggr_size_kb(u32 rx_buffer_size)
+{
+	/* We don't use "hard byte limit" aggregation, so we define the
+	 * aggregation limit such that our buffer has enough space *after*
+	 * that limit to receive a full MTU of data, plus overhead.
+	 */
+	rx_buffer_size -= IPA_MTU + IPA_RX_BUFFER_OVERHEAD;
+
+	return rx_buffer_size / SZ_1K;
+}
+
+static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
+{
+	u32 offset = IPA_REG_ENDP_INIT_AGGR_N_OFFSET(endpoint->endpoint_id);
+	u32 val = 0;
+
+	if (endpoint->data->aggregation) {
+		if (!endpoint->toward_ipa) {
+			u32 aggr_size = ipa_aggr_size_kb(IPA_RX_BUFFER_SIZE);
+			u32 limit;
+
+			val |= u32_encode_bits(IPA_ENABLE_AGGR, AGGR_EN_FMASK);
+			val |= u32_encode_bits(IPA_GENERIC, AGGR_TYPE_FMASK);
+			val |= u32_encode_bits(aggr_size,
+					       AGGR_BYTE_LIMIT_FMASK);
+			limit = IPA_AGGR_TIME_LIMIT_DEFAULT;
+			val |= u32_encode_bits(limit / IPA_AGGR_GRANULARITY,
+					       AGGR_TIME_LIMIT_FMASK);
+			val |= u32_encode_bits(0, AGGR_PKT_LIMIT_FMASK);
+			if (endpoint->data->rx.aggr_close_eof)
+				val |= AGGR_SW_EOF_ACTIVE_FMASK;
+			/* AGGR_HARD_BYTE_LIMIT_ENABLE is 0 */
+		} else {
+			val |= u32_encode_bits(IPA_ENABLE_DEAGGR,
+					       AGGR_EN_FMASK);
+			val |= u32_encode_bits(IPA_QCMAP, AGGR_TYPE_FMASK);
+			/* other fields ignored */
+		}
+		/* AGGR_FORCE_CLOSE is 0 */
+	} else {
+		val |= u32_encode_bits(IPA_BYPASS_AGGR, AGGR_EN_FMASK);
+		/* other fields ignored */
+	}
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+/* A return value of 0 indicates an error */
+static u32 ipa_reg_init_hol_block_timer_val(struct ipa *ipa, u32 microseconds)
+{
+	u32 scale;
+	u32 base;
+	u32 val;
+
+	if (!microseconds)
+		return 0;	/* invalid delay */
+
+	/* Timer is represented in units of clock ticks. */
+	if (ipa->version < IPA_VERSION_4_2)
+		return microseconds;	/* XXX Needs to be computed */
+
+	/* IPA v4.2 represents the tick count as base * scale */
+	scale = 1;			/* XXX Needs to be computed */
+	if (scale > field_max(SCALE_FMASK))
+		return 0;		/* scale too big */
+
+	base = DIV_ROUND_CLOSEST(microseconds, scale);
+	if (base > field_max(BASE_VALUE_FMASK))
+		return 0;		/* microseconds too big */
+
+	val = u32_encode_bits(scale, SCALE_FMASK);
+	val |= u32_encode_bits(base, BASE_VALUE_FMASK);
+
+	return val;
+}
+
+static int ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
+					     u32 microseconds)
+{
+	u32 endpoint_id = endpoint->endpoint_id;
+	struct ipa *ipa = endpoint->ipa;
+	u32 offset;
+	u32 val;
+
+	/* XXX We'll fix this when the register definition is clear */
+	if (microseconds) {
+		struct device *dev = &ipa->pdev->dev;
+
+		dev_err(dev, "endpoint %u non-zero HOLB period (ignoring)\n",
+			endpoint_id);
+		microseconds = 0;
+	}
+
+	if (microseconds) {
+		val = ipa_reg_init_hol_block_timer_val(ipa, microseconds);
+		if (!val)
+			return -EINVAL;
+	} else {
+		val = 0;	/* timeout is immediate */
+	}
+	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(endpoint_id);
+	iowrite32(val, ipa->reg_virt + offset);
+
+	return 0;
+}
+
+static void
+ipa_endpoint_init_hol_block_enable(struct ipa_endpoint *endpoint, bool enable)
+{
+	u32 endpoint_id = endpoint->endpoint_id;
+	u32 offset;
+	u32 val;
+
+	val = u32_encode_bits(enable ? 1 : 0, HOL_BLOCK_EN_FMASK);
+	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(endpoint_id);
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa)
+{
+	u32 i;
+
+	for (i = 0; i < IPA_ENDPOINT_MAX; i++) {
+		struct ipa_endpoint *endpoint = &ipa->endpoint[i];
+
+		if (endpoint->ee_id != GSI_EE_MODEM)
+			continue;
+
+		(void)ipa_endpoint_init_hol_block_timer(endpoint, 0);
+		ipa_endpoint_init_hol_block_enable(endpoint, true);
+	}
+}
+
+static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint)
+{
+	u32 offset = IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(endpoint->endpoint_id);
+	u32 val = 0;
+
+	/* DEAGGR_HDR_LEN is 0 */
+	/* PACKET_OFFSET_VALID is 0 */
+	/* PACKET_OFFSET_LOCATION is ignored (not valid) */
+	/* MAX_PACKET_LEN is 0 (not enforced) */
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint)
+{
+	u32 offset = IPA_REG_ENDP_INIT_SEQ_N_OFFSET(endpoint->endpoint_id);
+	u32 seq_type = endpoint->seq_type;
+	u32 val = 0;
+
+	val |= u32_encode_bits(seq_type & 0xf, HPS_SEQ_TYPE_FMASK);
+	val |= u32_encode_bits((seq_type >> 4) & 0xf, DPS_SEQ_TYPE_FMASK);
+	/* HPS_REP_SEQ_TYPE is 0 */
+	/* DPS_REP_SEQ_TYPE is 0 */
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+/**
+ * ipa_endpoint_skb_tx() - Transmit a socket buffer
+ * @endpoint:	Endpoint pointer
+ * @skb:	Socket buffer to send
+ *
+ * Returns:	0 if successful, or a negative error code
+ */
+int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb)
+{
+	struct gsi_trans *trans;
+	u32 nr_frags;
+	int ret;
+
+	/* Make sure source endpoint's TLV FIFO has enough entries to
+	 * hold the linear portion of the skb and all its fragments.
+	 * If not, see if we can linearize it before giving up.
+	 */
+	nr_frags = skb_shinfo(skb)->nr_frags;
+	if (1 + nr_frags > endpoint->trans_tre_max) {
+		if (skb_linearize(skb))
+			return -E2BIG;
+		nr_frags = 0;
+	}
+
+	trans = ipa_endpoint_trans_alloc(endpoint, 1 + nr_frags);
+	if (!trans)
+		return -EBUSY;
+
+	ret = gsi_trans_skb_add(trans, skb);
+	if (ret)
+		goto err_trans_free;
+	trans->data = skb;	/* transaction owns skb now */
+
+	gsi_trans_commit(trans, !netdev_xmit_more());
+
+	return 0;
+
+err_trans_free:
+	gsi_trans_free(trans);
+
+	return -ENOMEM;
+}
+
+static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
+{
+	u32 endpoint_id = endpoint->endpoint_id;
+	struct ipa *ipa = endpoint->ipa;
+	u32 val = 0;
+	u32 offset;
+
+	offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
+
+	if (endpoint->data->status_enable) {
+		val |= STATUS_EN_FMASK;
+		if (endpoint->toward_ipa) {
+			enum ipa_endpoint_name name;
+			u32 status_endpoint_id;
+
+			name = endpoint->data->tx.status_endpoint;
+			status_endpoint_id = ipa->name_map[name]->endpoint_id;
+
+			val |= u32_encode_bits(status_endpoint_id,
+					       STATUS_ENDP_FMASK);
+		}
+		/* STATUS_LOCATION is 0 (status element precedes packet) */
+		/* The next field is present for IPA v4.0 and above */
+		/* STATUS_PKT_SUPPRESS_FMASK is 0 */
+	}
+
+	iowrite32(val, ipa->reg_virt + offset);
+}
+
+static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
+{
+	struct gsi_trans *trans;
+	bool doorbell = false;
+	struct page *page;
+	u32 offset;
+	u32 len;
+	int ret;
+
+	page = dev_alloc_pages(IPA_RX_BUFFER_ORDER);
+	if (!page)
+		return -ENOMEM;
+
+	trans = ipa_endpoint_trans_alloc(endpoint, 1);
+	if (!trans)
+		goto err_free_pages;
+
+	/* Offset the buffer to make space for skb headroom */
+	offset = NET_SKB_PAD;
+	len = IPA_RX_BUFFER_SIZE - offset;
+
+	ret = gsi_trans_page_add(trans, page, len, offset);
+	if (ret)
+		goto err_trans_free;
+	trans->data = page;	/* transaction owns page now */
+
+	if (++endpoint->replenish_ready == IPA_REPLENISH_BATCH) {
+		doorbell = true;
+		endpoint->replenish_ready = 0;
+	}
+
+	gsi_trans_commit(trans, doorbell);
+
+	return 0;
+
+err_trans_free:
+	gsi_trans_free(trans);
+err_free_pages:
+	__free_pages(page, IPA_RX_BUFFER_ORDER);
+
+	return -ENOMEM;
+}
+
+/**
+ * ipa_endpoint_replenish() - Replenish the Rx buffer cache
+ * @endpoint:	Endpoint to be replenished
+ * @count:	Number of buffers to add to the replenish backlog (may be 0)
+ *
+ * Allocate RX buffer pages for an endpoint and supply them to the
+ * hardware, which fills them with incoming data.
+ */
+static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, u32 count)
+{
+	struct gsi *gsi;
+	u32 backlog;
+
+	if (!endpoint->replenish_enabled) {
+		if (count)
+			atomic_add(count, &endpoint->replenish_saved);
+		return;
+	}
+
+	while (atomic_dec_not_zero(&endpoint->replenish_backlog))
+		if (ipa_endpoint_replenish_one(endpoint))
+			goto try_again_later;
+	if (count)
+		atomic_add(count, &endpoint->replenish_backlog);
+
+	return;
+
+try_again_later:
+	/* The last one didn't succeed, so fix the backlog */
+	backlog = atomic_inc_return(&endpoint->replenish_backlog);
+
+	if (count)
+		atomic_add(count, &endpoint->replenish_backlog);
+
+	/* Whenever a receive buffer transaction completes we'll try to
+	 * replenish again.  It's unlikely, but if we fail to supply even
+	 * one buffer, nothing will trigger another replenish attempt.
+	 * Receive buffer transactions use one TRE, so schedule work to
+	 * try replenishing again if our backlog is *all* available TREs.
+	 */
+	gsi = &endpoint->ipa->gsi;
+	if (backlog == gsi_channel_tre_max(gsi, endpoint->channel_id))
+		schedule_delayed_work(&endpoint->replenish_work,
+				      msecs_to_jiffies(1));
+}
+
+static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
+{
+	struct gsi *gsi = &endpoint->ipa->gsi;
+	u32 max_backlog;
+	u32 saved;
+
+	endpoint->replenish_enabled = true;
+	while ((saved = atomic_xchg(&endpoint->replenish_saved, 0)))
+		atomic_add(saved, &endpoint->replenish_backlog);
+
+	/* Start replenishing if hardware currently has no buffers */
+	max_backlog = gsi_channel_tre_max(gsi, endpoint->channel_id);
+	if (atomic_read(&endpoint->replenish_backlog) == max_backlog)
+		ipa_endpoint_replenish(endpoint, 0);
+}
+
+static void ipa_endpoint_replenish_disable(struct ipa_endpoint *endpoint)
+{
+	u32 backlog;
+
+	endpoint->replenish_enabled = false;
+	while ((backlog = atomic_xchg(&endpoint->replenish_backlog, 0)))
+		atomic_add(backlog, &endpoint->replenish_saved);
+}
+
+static void ipa_endpoint_replenish_work(struct work_struct *work)
+{
+	struct delayed_work *dwork = to_delayed_work(work);
+	struct ipa_endpoint *endpoint;
+
+	endpoint = container_of(dwork, struct ipa_endpoint, replenish_work);
+
+	ipa_endpoint_replenish(endpoint, 0);
+}
+
+static void ipa_endpoint_skb_copy(struct ipa_endpoint *endpoint,
+				  void *data, u32 len, u32 extra)
+{
+	struct sk_buff *skb;
+
+	skb = __dev_alloc_skb(len, GFP_ATOMIC);
+	if (skb) {
+		skb_put(skb, len);
+		memcpy(skb->data, data, len);
+		skb->truesize += extra;
+	}
+
+	/* Now receive it, or drop it if there's no netdev */
+	if (endpoint->netdev)
+		ipa_modem_skb_rx(endpoint->netdev, skb);
+	else if (skb)
+		dev_kfree_skb_any(skb);
+}
+
+static bool ipa_endpoint_skb_build(struct ipa_endpoint *endpoint,
+				   struct page *page, u32 len)
+{
+	struct sk_buff *skb;
+
+	/* Nothing to do if there's no netdev */
+	if (!endpoint->netdev)
+		return false;
+
+	/* assert(len <= SKB_WITH_OVERHEAD(IPA_RX_BUFFER_SIZE-NET_SKB_PAD)); */
+	skb = build_skb(page_address(page), IPA_RX_BUFFER_SIZE);
+	if (skb) {
+		/* Reserve the headroom and account for the data */
+		skb_reserve(skb, NET_SKB_PAD);
+		skb_put(skb, len);
+	}
+
+	/* Receive the buffer (or record drop if unable to build it) */
+	ipa_modem_skb_rx(endpoint->netdev, skb);
+
+	return skb != NULL;
+}
+
+/* The format of a packet status element is the same for several status
+ * types (opcodes).  The NEW_FRAG_RULE, LOG, DCMP (decompression) types
+ * aren't currently supported.
+ */
+static bool ipa_status_format_packet(enum ipa_status_opcode opcode)
+{
+	switch (opcode) {
+	case IPA_STATUS_OPCODE_PACKET:
+	case IPA_STATUS_OPCODE_DROPPED_PACKET:
+	case IPA_STATUS_OPCODE_SUSPENDED_PACKET:
+	case IPA_STATUS_OPCODE_PACKET_2ND_PASS:
+		return true;
+	default:
+		return false;
+	}
+}
+
+static bool ipa_endpoint_status_skip(struct ipa_endpoint *endpoint,
+				     const struct ipa_status *status)
+{
+	u32 endpoint_id;
+
+	if (!ipa_status_format_packet(status->opcode))
+		return true;
+	if (!status->pkt_len)
+		return true;
+	endpoint_id = u32_get_bits(status->endp_dst_idx,
+				   IPA_STATUS_DST_IDX_FMASK);
+	if (endpoint_id != endpoint->endpoint_id)
+		return true;
+
+	return false;	/* Don't skip this packet, process it */
+}
+
+/* Return whether the status indicates the packet should be dropped */
+static bool ipa_status_drop_packet(const struct ipa_status *status)
+{
+	u32 val;
+
+	/* Deaggregation exceptions we drop; others we consume */
+	if (status->exception)
+		return status->exception == IPA_STATUS_EXCEPTION_DEAGGR;
+
+	/* Drop the packet if it fails to match a routing rule; otherwise no */
+	val = le32_get_bits(status->flags1, IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK);
+
+	return val == field_max(IPA_STATUS_FLAGS1_RT_RULE_ID_FMASK);
+}
+
+static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint,
+				      struct page *page, u32 total_len)
+{
+	void *data = page_address(page) + NET_SKB_PAD;
+	u32 unused = IPA_RX_BUFFER_SIZE - total_len;
+	u32 resid = total_len;
+
+	while (resid) {
+		const struct ipa_status *status = data;
+		u32 align;
+		u32 len;
+
+		if (resid < sizeof(*status)) {
+			dev_err(&endpoint->ipa->pdev->dev,
+				"short message (%u bytes < %zu byte status)\n",
+				resid, sizeof(*status));
+			break;
+		}
+
+		/* Skip over status packets that lack packet data */
+		if (ipa_endpoint_status_skip(endpoint, status)) {
+			data += sizeof(*status);
+			resid -= sizeof(*status);
+			continue;
+		}
+
+		/* Compute the amount of buffer space consumed by the
+		 * packet, including the status element.  If the hardware
+		 * is configured to pad packet data to an aligned boundary,
+		 * account for that.  And if checksum offload is enabled,
+		 * a trailer containing computed checksum information will
+		 * be appended.
+		 */
+		align = endpoint->data->rx.pad_align ? : 1;
+		len = le16_to_cpu(status->pkt_len);
+		len = sizeof(*status) + ALIGN(len, align);
+		if (endpoint->data->checksum)
+			len += sizeof(struct rmnet_map_dl_csum_trailer);
+
+		/* Charge the new packet with a proportional fraction of
+		 * the unused space in the original receive buffer.
+		 * XXX Charge a proportion of the *whole* receive buffer?
+		 */
+		if (!ipa_status_drop_packet(status)) {
+			u32 extra = unused * len / total_len;
+			void *data2 = data + sizeof(*status);
+			u32 len2 = le16_to_cpu(status->pkt_len);
+
+			/* Client receives only packet data (no status) */
+			ipa_endpoint_skb_copy(endpoint, data2, len2, extra);
+		}
+
+		/* Consume status and the full packet it describes */
+		data += len;
+		resid -= len;
+	}
+}
+
+/* Complete a TX transaction (command, or one from ipa_endpoint_skb_tx()) */
+static void ipa_endpoint_tx_complete(struct ipa_endpoint *endpoint,
+				     struct gsi_trans *trans)
+{
+}
+
+/* Complete transaction initiated in ipa_endpoint_replenish_one() */
+static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint,
+				     struct gsi_trans *trans)
+{
+	struct page *page;
+
+	ipa_endpoint_replenish(endpoint, 1);
+
+	if (trans->cancelled)
+		return;
+
+	/* Parse or build a socket buffer using the actual received length */
+	page = trans->data;
+	if (endpoint->data->status_enable)
+		ipa_endpoint_status_parse(endpoint, page, trans->len);
+	else if (ipa_endpoint_skb_build(endpoint, page, trans->len))
+		trans->data = NULL;	/* Pages have been consumed */
+}
+
+void ipa_endpoint_trans_complete(struct ipa_endpoint *endpoint,
+				 struct gsi_trans *trans)
+{
+	if (endpoint->toward_ipa)
+		ipa_endpoint_tx_complete(endpoint, trans);
+	else
+		ipa_endpoint_rx_complete(endpoint, trans);
+}
+
+void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
+				struct gsi_trans *trans)
+{
+	if (endpoint->toward_ipa) {
+		struct ipa *ipa = endpoint->ipa;
+
+		/* Nothing to do for command transactions */
+		if (endpoint != ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]) {
+			struct sk_buff *skb = trans->data;
+
+			if (skb)
+				dev_kfree_skb_any(skb);
+		}
+	} else {
+		struct page *page = trans->data;
+
+		if (page)
+			__free_pages(page, IPA_RX_BUFFER_ORDER);
+	}
+}
+
+void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id)
+{
+	u32 val;
+
+	/* ROUTE_DIS is 0 */
+	val = u32_encode_bits(endpoint_id, ROUTE_DEF_PIPE_FMASK);
+	val |= ROUTE_DEF_HDR_TABLE_FMASK;
+	val |= u32_encode_bits(0, ROUTE_DEF_HDR_OFST_FMASK);
+	val |= u32_encode_bits(endpoint_id, ROUTE_FRAG_DEF_PIPE_FMASK);
+	val |= ROUTE_DEF_RETAIN_HDR_FMASK;
+
+	iowrite32(val, ipa->reg_virt + IPA_REG_ROUTE_OFFSET);
+}
+
+void ipa_endpoint_default_route_clear(struct ipa *ipa)
+{
+	ipa_endpoint_default_route_set(ipa, 0);
+}
+
+static bool ipa_endpoint_aggr_active(struct ipa_endpoint *endpoint)
+{
+	u32 mask = BIT(endpoint->endpoint_id);
+	struct ipa *ipa = endpoint->ipa;
+	u32 offset;
+	u32 val;
+
+	/* assert(mask & ipa->available); */
+	offset = ipa_reg_state_aggr_active_offset(ipa->version);
+	val = ioread32(ipa->reg_virt + offset);
+
+	return !!(val & mask);
+}
+
+static void ipa_endpoint_force_close(struct ipa_endpoint *endpoint)
+{
+	u32 mask = BIT(endpoint->endpoint_id);
+	struct ipa *ipa = endpoint->ipa;
+
+	/* assert(mask & ipa->available); */
+	iowrite32(mask, ipa->reg_virt + IPA_REG_AGGR_FORCE_CLOSE_OFFSET);
+}
+
+/**
+ * ipa_endpoint_reset_rx_aggr() - Reset RX endpoint with aggregation active
+ * @endpoint:	Endpoint to be reset
+ *
+ * If aggregation is active on an RX endpoint when a reset is performed
+ * on its underlying GSI channel, a special sequence of actions must be
+ * taken to ensure the IPA pipeline is properly cleared.
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
+{
+	struct device *dev = &endpoint->ipa->pdev->dev;
+	struct ipa *ipa = endpoint->ipa;
+	bool endpoint_suspended = false;
+	struct gsi *gsi = &ipa->gsi;
+	dma_addr_t addr;
+	bool db_enable;
+	u32 retries;
+	u32 len = 1;
+	void *virt;
+	int ret;
+
+	virt = kzalloc(len, GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+
+	addr = dma_map_single(dev, virt, len, DMA_FROM_DEVICE);
+	if (dma_mapping_error(dev, addr)) {
+		ret = -ENOMEM;
+		goto out_kfree;
+	}
+
+	/* Force close aggregation before issuing the reset */
+	ipa_endpoint_force_close(endpoint);
+
+	/* Reset and reconfigure the channel with the doorbell engine
+	 * disabled.  Then poll until we know aggregation is no longer
+	 * active.  We'll re-enable the doorbell (if appropriate) when
+	 * we reset again below.
+	 */
+	gsi_channel_reset(gsi, endpoint->channel_id, false);
+
+	/* Make sure the channel isn't suspended */
+	if (endpoint->ipa->version == IPA_VERSION_3_5_1)
+		if (!ipa_endpoint_init_ctrl(endpoint, false))
+			endpoint_suspended = true;
+
+	/* Start channel and do a 1 byte read */
+	ret = gsi_channel_start(gsi, endpoint->channel_id);
+	if (ret)
+		goto out_suspend_again;
+
+	ret = gsi_trans_read_byte(gsi, endpoint->channel_id, addr);
+	if (ret)
+		goto err_endpoint_stop;
+
+	/* Wait for aggregation to be closed on the channel */
+	retries = IPA_ENDPOINT_RESET_AGGR_RETRY_MAX;
+	do {
+		if (!ipa_endpoint_aggr_active(endpoint))
+			break;
+		msleep(1);
+	} while (retries--);
+
+	/* Check one last time */
+	if (ipa_endpoint_aggr_active(endpoint))
+		dev_err(dev, "endpoint %u still active during reset\n",
+			endpoint->endpoint_id);
+
+	gsi_trans_read_byte_done(gsi, endpoint->channel_id);
+
+	ret = ipa_endpoint_stop(endpoint);
+	if (ret)
+		goto out_suspend_again;
+
+	/* Finally, reset and reconfigure the channel again (re-enabling
+	 * the doorbell engine if appropriate).  Sleep for 1 millisecond to
+	 * complete the channel reset sequence.  Finish by suspending the
+	 * channel again (if necessary).
+	 */
+	db_enable = ipa->version == IPA_VERSION_3_5_1;
+	gsi_channel_reset(gsi, endpoint->channel_id, db_enable);
+
+	msleep(1);
+
+	goto out_suspend_again;
+
+err_endpoint_stop:
+	ipa_endpoint_stop(endpoint);
+out_suspend_again:
+	if (endpoint_suspended)
+		(void)ipa_endpoint_init_ctrl(endpoint, true);
+	dma_unmap_single(dev, addr, len, DMA_FROM_DEVICE);
+out_kfree:
+	kfree(virt);
+
+	return ret;
+}
+
+static void ipa_endpoint_reset(struct ipa_endpoint *endpoint)
+{
+	u32 channel_id = endpoint->channel_id;
+	struct ipa *ipa = endpoint->ipa;
+	bool db_enable;
+	bool special;
+	int ret = 0;
+
+	/* On IPA v3.5.1, if an RX endpoint is reset while aggregation
+	 * is active, we need to handle things specially to recover.
+	 * All other cases just need to reset the underlying GSI channel.
+	 *
+	 * IPA v3.5.1 enables the doorbell engine.  Newer versions do not.
+	 */
+	db_enable = ipa->version == IPA_VERSION_3_5_1;
+	special = !endpoint->toward_ipa && endpoint->data->aggregation;
+	if (special && ipa_endpoint_aggr_active(endpoint))
+		ret = ipa_endpoint_reset_rx_aggr(endpoint);
+	else
+		gsi_channel_reset(&ipa->gsi, channel_id, db_enable);
+
+	if (ret)
+		dev_err(&ipa->pdev->dev,
+			"error %d resetting channel %u for endpoint %u\n",
+			ret, endpoint->channel_id, endpoint->endpoint_id);
+}
+
+static int ipa_endpoint_stop_rx_dma(struct ipa *ipa)
+{
+	u16 size = IPA_ENDPOINT_STOP_RX_SIZE;
+	struct gsi_trans *trans;
+	dma_addr_t addr;
+	int ret;
+
+	trans = ipa_cmd_trans_alloc(ipa, 1);
+	if (!trans) {
+		dev_err(&ipa->pdev->dev,
+			"no transaction for RX endpoint STOP workaround\n");
+		return -EBUSY;
+	}
+
+	/* Read into the highest part of the zero memory area */
+	addr = ipa->zero_addr + ipa->zero_size - size;
+
+	ipa_cmd_dma_task_32b_addr_add(trans, size, addr, false);
+
+	ret = gsi_trans_commit_wait_timeout(trans, ENDPOINT_STOP_DMA_TIMEOUT);
+	if (ret)
+		gsi_trans_free(trans);
+
+	return ret;
+}
+
+/**
+ * ipa_endpoint_stop() - Stop the GSI channel underlying an endpoint
+ * @endpoint:	Endpoint whose channel should be stopped
+ *
+ * This function implements the sequence to stop a GSI channel
+ * in IPA.  It returns when the channel is in the STOP state.
+ *
+ * Return value: 0 on success, negative otherwise
+ */
+int ipa_endpoint_stop(struct ipa_endpoint *endpoint)
+{
+	u32 retries = endpoint->toward_ipa ? 0 : IPA_ENDPOINT_STOP_RX_RETRIES;
+	int ret;
+
+	do {
+		struct ipa *ipa = endpoint->ipa;
+		struct gsi *gsi = &ipa->gsi;
+
+		ret = gsi_channel_stop(gsi, endpoint->channel_id);
+		if (ret != -EAGAIN)
+			break;
+
+		if (endpoint->toward_ipa)
+			continue;
+
+		/* For IPA v3.5.1, send a DMA read task and check again */
+		if (ipa->version == IPA_VERSION_3_5_1) {
+			ret = ipa_endpoint_stop_rx_dma(ipa);
+			if (ret)
+				break;
+		}
+
+		msleep(1);
+	} while (retries--);
+
+	/* Report -EIO if retries were exhausted without the channel stopping */
+	return ret == -EAGAIN ? -EIO : ret;
+}
+
+static void ipa_endpoint_program(struct ipa_endpoint *endpoint)
+{
+	struct device *dev = &endpoint->ipa->pdev->dev;
+	int ret;
+
+	if (endpoint->toward_ipa) {
+		bool delay_mode = endpoint->data->tx.delay;
+
+		ret = ipa_endpoint_init_ctrl(endpoint, delay_mode);
+		/* Endpoint is expected to not be in delay mode */
+		if (!ret != delay_mode) {
+			dev_warn(dev,
+				"TX endpoint %u was %sin delay mode\n",
+				endpoint->endpoint_id,
+				delay_mode ? "already " : "");
+		}
+		ipa_endpoint_init_hdr_ext(endpoint);
+		ipa_endpoint_init_aggr(endpoint);
+		ipa_endpoint_init_deaggr(endpoint);
+		ipa_endpoint_init_seq(endpoint);
+	} else {
+		if (endpoint->ipa->version == IPA_VERSION_3_5_1) {
+			if (!ipa_endpoint_init_ctrl(endpoint, false))
+				dev_warn(dev,
+					"RX endpoint %u was suspended\n",
+					endpoint->endpoint_id);
+		}
+		ipa_endpoint_init_hdr_ext(endpoint);
+		ipa_endpoint_init_aggr(endpoint);
+	}
+	ipa_endpoint_init_cfg(endpoint);
+	ipa_endpoint_init_hdr(endpoint);
+	ipa_endpoint_init_hdr_metadata_mask(endpoint);
+	ipa_endpoint_init_mode(endpoint);
+	ipa_endpoint_status(endpoint);
+}
+
+int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
+{
+	struct ipa *ipa = endpoint->ipa;
+	struct gsi *gsi = &ipa->gsi;
+	int ret;
+
+	ret = gsi_channel_start(gsi, endpoint->channel_id);
+	if (ret) {
+		dev_err(&ipa->pdev->dev,
+			"error %d starting %cX channel %u for endpoint %u\n",
+			ret, endpoint->toward_ipa ? 'T' : 'R',
+			endpoint->channel_id, endpoint->endpoint_id);
+		return ret;
+	}
+
+	if (!endpoint->toward_ipa) {
+		ipa_interrupt_suspend_enable(ipa->interrupt,
+					     endpoint->endpoint_id);
+		ipa_endpoint_replenish_enable(endpoint);
+	}
+
+	ipa->enabled |= BIT(endpoint->endpoint_id);
+
+	return 0;
+}
+
+void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
+{
+	u32 mask = BIT(endpoint->endpoint_id);
+	struct ipa *ipa = endpoint->ipa;
+	int ret;
+
+	if (!(endpoint->ipa->enabled & mask))
+		return;
+
+	endpoint->ipa->enabled ^= mask;
+
+	if (!endpoint->toward_ipa) {
+		ipa_endpoint_replenish_disable(endpoint);
+		ipa_interrupt_suspend_disable(ipa->interrupt,
+					      endpoint->endpoint_id);
+	}
+
+	/* Note that if stop fails, the channel's state is not well-defined */
+	ret = ipa_endpoint_stop(endpoint);
+	if (ret)
+		dev_err(&ipa->pdev->dev,
+			"error %d attempting to stop endpoint %u\n", ret,
+			endpoint->endpoint_id);
+}
+
+/**
+ * ipa_endpoint_suspend_aggr() - Emulate suspend interrupt
+ * @endpoint:	Endpoint on which to emulate a suspend interrupt
+ *
+ *  Emulate suspend IPA interrupt to unsuspend an endpoint suspended
+ *  with an open aggregation frame.  This is to work around a hardware
+ *  issue in IPA version 3.5.1 where the suspend interrupt will not be
+ *  generated when it should be.
+ */
+static void ipa_endpoint_suspend_aggr(struct ipa_endpoint *endpoint)
+{
+	struct ipa *ipa = endpoint->ipa;
+
+	/* assert(ipa->version == IPA_VERSION_3_5_1); */
+
+	if (!endpoint->data->aggregation)
+		return;
+
+	/* Nothing to do if the endpoint doesn't have aggregation open */
+	if (!ipa_endpoint_aggr_active(endpoint))
+		return;
+
+	/* Force close aggregation */
+	ipa_endpoint_force_close(endpoint);
+
+	ipa_interrupt_simulate_suspend(ipa->interrupt);
+}
+
+void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
+{
+	struct device *dev = &endpoint->ipa->pdev->dev;
+	struct gsi *gsi = &endpoint->ipa->gsi;
+	bool stop_channel;
+	int ret;
+
+	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
+		return;
+
+	if (!endpoint->toward_ipa)
+		ipa_endpoint_replenish_disable(endpoint);
+
+	/* IPA v3.5.1 doesn't use channel stop for suspend */
+	stop_channel = endpoint->ipa->version != IPA_VERSION_3_5_1;
+	if (!endpoint->toward_ipa && !stop_channel) {
+		/* Due to a hardware bug, a client suspended with an open
+		 * aggregation frame will not generate a SUSPEND IPA
+		 * interrupt.  We work around this by force-closing the
+		 * aggregation frame, then simulating the arrival of such
+		 * an interrupt.
+		 */
+		WARN_ON(ipa_endpoint_init_ctrl(endpoint, true));
+		ipa_endpoint_suspend_aggr(endpoint);
+	}
+
+	ret = gsi_channel_suspend(gsi, endpoint->channel_id, stop_channel);
+	if (ret)
+		dev_err(dev, "error %d suspending channel %u\n", ret,
+			endpoint->channel_id);
+}
+
+void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
+{
+	struct device *dev = &endpoint->ipa->pdev->dev;
+	struct gsi *gsi = &endpoint->ipa->gsi;
+	bool start_channel;
+	int ret;
+
+	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
+		return;
+
+	/* IPA v3.5.1 doesn't use channel start for resume */
+	start_channel = endpoint->ipa->version != IPA_VERSION_3_5_1;
+	if (!endpoint->toward_ipa && !start_channel)
+		WARN_ON(ipa_endpoint_init_ctrl(endpoint, false));
+
+	ret = gsi_channel_resume(gsi, endpoint->channel_id, start_channel);
+	if (ret)
+		dev_err(dev, "error %d resuming channel %u\n", ret,
+			endpoint->channel_id);
+	else if (!endpoint->toward_ipa)
+		ipa_endpoint_replenish_enable(endpoint);
+}
+
+void ipa_endpoint_suspend(struct ipa *ipa)
+{
+	if (ipa->modem_netdev)
+		ipa_modem_suspend(ipa->modem_netdev);
+
+	ipa_endpoint_suspend_one(ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]);
+	ipa_endpoint_suspend_one(ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]);
+}
+
+void ipa_endpoint_resume(struct ipa *ipa)
+{
+	ipa_endpoint_resume_one(ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX]);
+	ipa_endpoint_resume_one(ipa->name_map[IPA_ENDPOINT_AP_LAN_RX]);
+
+	if (ipa->modem_netdev)
+		ipa_modem_resume(ipa->modem_netdev);
+}
+
+static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
+{
+	struct gsi *gsi = &endpoint->ipa->gsi;
+	u32 channel_id = endpoint->channel_id;
+
+	/* Only AP endpoints get set up */
+	if (endpoint->ee_id != GSI_EE_AP)
+		return;
+
+	endpoint->trans_tre_max = gsi_channel_trans_tre_max(gsi, channel_id);
+	if (!endpoint->toward_ipa) {
+		/* RX transactions require a single TRE, so the maximum
+		 * backlog is the same as the maximum outstanding TREs.
+		 */
+		endpoint->replenish_enabled = false;
+		atomic_set(&endpoint->replenish_saved,
+			   gsi_channel_tre_max(gsi, endpoint->channel_id));
+		atomic_set(&endpoint->replenish_backlog, 0);
+		INIT_DELAYED_WORK(&endpoint->replenish_work,
+				  ipa_endpoint_replenish_work);
+	}
+
+	ipa_endpoint_program(endpoint);
+
+	endpoint->ipa->set_up |= BIT(endpoint->endpoint_id);
+}
+
+static void ipa_endpoint_teardown_one(struct ipa_endpoint *endpoint)
+{
+	endpoint->ipa->set_up &= ~BIT(endpoint->endpoint_id);
+
+	if (!endpoint->toward_ipa)
+		cancel_delayed_work_sync(&endpoint->replenish_work);
+
+	ipa_endpoint_reset(endpoint);
+}
+
+void ipa_endpoint_setup(struct ipa *ipa)
+{
+	u32 initialized = ipa->initialized;
+
+	ipa->set_up = 0;
+	while (initialized) {
+		u32 endpoint_id = __ffs(initialized);
+
+		initialized ^= BIT(endpoint_id);
+
+		ipa_endpoint_setup_one(&ipa->endpoint[endpoint_id]);
+	}
+}
+
+void ipa_endpoint_teardown(struct ipa *ipa)
+{
+	u32 set_up = ipa->set_up;
+
+	while (set_up) {
+		u32 endpoint_id = __fls(set_up);
+
+		set_up ^= BIT(endpoint_id);
+
+		ipa_endpoint_teardown_one(&ipa->endpoint[endpoint_id]);
+	}
+	ipa->set_up = 0;
+}
+
+int ipa_endpoint_config(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	u32 initialized;
+	u32 rx_base;
+	u32 rx_mask;
+	u32 tx_mask;
+	int ret = 0;
+	u32 max;
+	u32 val;
+
+	/* Find out about the endpoints supplied by the hardware, and ensure
+	 * the highest one doesn't exceed the number we support.
+	 */
+	val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
+
+	/* Our RX is an IPA producer */
+	rx_base = u32_get_bits(val, BAM_PROD_LOWEST_FMASK);
+	max = rx_base + u32_get_bits(val, BAM_MAX_PROD_PIPES_FMASK);
+	if (max > IPA_ENDPOINT_MAX) {
+		dev_err(dev, "too many endpoints (%u > %u)\n",
+			max, IPA_ENDPOINT_MAX);
+		return -EINVAL;
+	}
+	rx_mask = GENMASK(max - 1, rx_base);
+
+	/* Our TX is an IPA consumer */
+	max = u32_get_bits(val, BAM_MAX_CONS_PIPES_FMASK);
+	tx_mask = GENMASK(max - 1, 0);
+
+	ipa->available = rx_mask | tx_mask;
+
+	/* Check for initialized endpoints not supported by the hardware */
+	if (ipa->initialized & ~ipa->available) {
+		dev_err(dev, "unavailable endpoint id(s) 0x%08x\n",
+			ipa->initialized & ~ipa->available);
+		ret = -EINVAL;		/* Report other errors too */
+	}
+
+	initialized = ipa->initialized;
+	while (initialized) {
+		u32 endpoint_id = __ffs(initialized);
+		struct ipa_endpoint *endpoint;
+
+		initialized ^= BIT(endpoint_id);
+
+		/* Make sure it's pointing in the right direction */
+		endpoint = &ipa->endpoint[endpoint_id];
+		if ((endpoint_id < rx_base) != !!endpoint->toward_ipa) {
+			dev_err(dev, "endpoint id %u wrong direction\n",
+				endpoint_id);
+			ret = -EINVAL;
+		}
+	}
+
+	return ret;
+}
+
+void ipa_endpoint_deconfig(struct ipa *ipa)
+{
+	ipa->available = 0;	/* Nothing more to do */
+}
+
+static void ipa_endpoint_init_one(struct ipa *ipa, enum ipa_endpoint_name name,
+				  const struct ipa_gsi_endpoint_data *data)
+{
+	struct ipa_endpoint *endpoint;
+
+	endpoint = &ipa->endpoint[data->endpoint_id];
+
+	if (data->ee_id == GSI_EE_AP)
+		ipa->channel_map[data->channel_id] = endpoint;
+	ipa->name_map[name] = endpoint;
+
+	endpoint->ipa = ipa;
+	endpoint->ee_id = data->ee_id;
+	endpoint->seq_type = data->endpoint.seq_type;
+	endpoint->channel_id = data->channel_id;
+	endpoint->endpoint_id = data->endpoint_id;
+	endpoint->toward_ipa = data->toward_ipa;
+	endpoint->data = &data->endpoint.config;
+
+	ipa->initialized |= BIT(endpoint->endpoint_id);
+}
+
+void ipa_endpoint_exit_one(struct ipa_endpoint *endpoint)
+{
+	endpoint->ipa->initialized &= ~BIT(endpoint->endpoint_id);
+
+	memset(endpoint, 0, sizeof(*endpoint));
+}
+
+void ipa_endpoint_exit(struct ipa *ipa)
+{
+	u32 initialized = ipa->initialized;
+
+	while (initialized) {
+		u32 endpoint_id = __fls(initialized);
+
+		initialized ^= BIT(endpoint_id);
+
+		ipa_endpoint_exit_one(&ipa->endpoint[endpoint_id]);
+	}
+	memset(ipa->name_map, 0, sizeof(ipa->name_map));
+	memset(ipa->channel_map, 0, sizeof(ipa->channel_map));
+}
+
+/* Returns a bitmask of endpoints that support filtering, or 0 on error */
+u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
+		      const struct ipa_gsi_endpoint_data *data)
+{
+	enum ipa_endpoint_name name;
+	u32 filter_map;
+
+	if (!ipa_endpoint_data_valid(ipa, count, data))
+		return 0;	/* Error */
+
+	ipa->initialized = 0;
+
+	filter_map = 0;
+	for (name = 0; name < count; name++, data++) {
+		if (ipa_gsi_endpoint_data_empty(data))
+			continue;	/* Skip over empty slots */
+
+		ipa_endpoint_init_one(ipa, name, data);
+
+		if (data->endpoint.filter_support)
+			filter_map |= BIT(data->endpoint_id);
+	}
+
+	if (!ipa_filter_map_valid(ipa, filter_map))
+		goto err_endpoint_exit;
+
+	return filter_map;	/* Non-zero bitmask */
+
+err_endpoint_exit:
+	ipa_endpoint_exit(ipa);
+
+	return 0;	/* Error */
+}
diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
new file mode 100644
index 000000000000..4b336a1f759d
--- /dev/null
+++ b/drivers/net/ipa/ipa_endpoint.h
@@ -0,0 +1,110 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_ENDPOINT_H_
+#define _IPA_ENDPOINT_H_
+
+#include <linux/types.h>
+#include <linux/workqueue.h>
+#include <linux/if_ether.h>
+
+#include "gsi.h"
+#include "ipa_reg.h"
+
+struct net_device;
+struct sk_buff;
+
+struct ipa;
+struct ipa_gsi_endpoint_data;
+
+/* Non-zero granularity of counter used to implement aggregation timeout */
+#define IPA_AGGR_GRANULARITY		500	/* microseconds */
+
+#define IPA_MTU			ETH_DATA_LEN
+
+enum ipa_endpoint_name {
+	IPA_ENDPOINT_AP_MODEM_TX	= 0,
+	IPA_ENDPOINT_MODEM_LAN_TX,
+	IPA_ENDPOINT_MODEM_COMMAND_TX,
+	IPA_ENDPOINT_AP_COMMAND_TX,
+	IPA_ENDPOINT_MODEM_AP_TX,
+	IPA_ENDPOINT_AP_LAN_RX,
+	IPA_ENDPOINT_AP_MODEM_RX,
+	IPA_ENDPOINT_MODEM_AP_RX,
+	IPA_ENDPOINT_MODEM_LAN_RX,
+	IPA_ENDPOINT_COUNT,	/* Number of names (not an index) */
+};
+
+#define IPA_ENDPOINT_MAX		32	/* Max supported by driver */
+
+/**
+ * struct ipa_endpoint - IPA endpoint information
+ * @client:	Client associated with the endpoint
+ * @channel_id:	EP's GSI channel
+ * @evt_ring_id: EP's GSI channel event ring
+ */
+struct ipa_endpoint {
+	struct ipa *ipa;
+	enum ipa_seq_type seq_type;
+	enum gsi_ee_id ee_id;
+	u32 channel_id;
+	u32 endpoint_id;
+	bool toward_ipa;
+	const struct ipa_endpoint_config_data *data;
+
+	u32 trans_tre_max;	/* maximum descriptors per transaction */
+	u32 evt_ring_id;
+
+	/* Net device this endpoint is associated with, if any */
+	struct net_device *netdev;
+
+	/* Receive buffer replenishing for RX endpoints */
+	bool replenish_enabled;
+	u32 replenish_ready;
+	atomic_t replenish_saved;
+	atomic_t replenish_backlog;
+	struct delayed_work replenish_work;		/* global wq */
+};
+
+void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa);
+
+void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable);
+
+int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa);
+
+int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb);
+
+int ipa_endpoint_stop(struct ipa_endpoint *endpoint);
+
+void ipa_endpoint_exit_one(struct ipa_endpoint *endpoint);
+
+int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint);
+void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint);
+
+void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint);
+void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint);
+
+void ipa_endpoint_suspend(struct ipa *ipa);
+void ipa_endpoint_resume(struct ipa *ipa);
+
+void ipa_endpoint_setup(struct ipa *ipa);
+void ipa_endpoint_teardown(struct ipa *ipa);
+
+int ipa_endpoint_config(struct ipa *ipa);
+void ipa_endpoint_deconfig(struct ipa *ipa);
+
+void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id);
+void ipa_endpoint_default_route_clear(struct ipa *ipa);
+
+u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
+		      const struct ipa_gsi_endpoint_data *data);
+void ipa_endpoint_exit(struct ipa *ipa);
+
+void ipa_endpoint_trans_complete(struct ipa_endpoint *endpoint,
+				 struct gsi_trans *trans);
+void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
+				struct gsi_trans *trans);
+
+#endif /* _IPA_ENDPOINT_H_ */
-- 
2.20.1
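For orientation, a minimal sketch (not code from this patch) of the order
in which the endpoint interfaces above appear intended to be called at
driver bring-up: init from configuration data, config against the
hardware, then setup.  The exact sequence lives elsewhere in this series,
so the ordering and the function name here are assumptions.

static int example_endpoint_bringup(struct ipa *ipa, u32 count,
				    const struct ipa_gsi_endpoint_data *data)
{
	u32 filter_map;
	int ret;

	/* Record endpoint configuration data; 0 means the data was invalid */
	filter_map = ipa_endpoint_init(ipa, count, data);
	if (!filter_map)
		return -EINVAL;

	/* Verify the configured endpoints against what the hardware reports */
	ret = ipa_endpoint_config(ipa);
	if (ret)
		goto err_exit;

	/* Program the AP-owned endpoints */
	ipa_endpoint_setup(ipa);

	return 0;

err_exit:
	ipa_endpoint_exit(ipa);

	return ret;
}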


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 11/17] soc: qcom: ipa: filter and routing tables
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (9 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 10/17] soc: qcom: ipa: IPA endpoints Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 12/17] soc: qcom: ipa: immediate commands Alex Elder
                   ` (8 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

This patch contains code implementing filter and routing tables for
the IPA.  A filter table allows rules to be used for filtering
packets that depart the AP at an endpoint.  A filter table contains
an entry for each endpoint that supports filtering; each entry holds
the address of the set of rules to apply to that endpoint's packets.

A routing table allows packets to be routed to an endpoint based
on packet metadata.  It is also a table whose entries each contain
the address of a set of routing rules to apply.

Neither filtering nor routing is supported by the current driver.
All table entries refer to rules that mean "no filtering" and "no
routing."

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_table.c | 700 ++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_table.h | 103 ++++++
 2 files changed, 803 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_table.c
 create mode 100644 drivers/net/ipa/ipa_table.h

diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
new file mode 100644
index 000000000000..9df2a3e78c98
--- /dev/null
+++ b/drivers/net/ipa/ipa_table.c
@@ -0,0 +1,700 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/bits.h>
+#include <linux/bitops.h>
+#include <linux/bitfield.h>
+#include <linux/io.h>
+#include <linux/build_bug.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+
+#include "ipa.h"
+#include "ipa_version.h"
+#include "ipa_endpoint.h"
+#include "ipa_table.h"
+#include "ipa_reg.h"
+#include "ipa_mem.h"
+#include "ipa_cmd.h"
+#include "gsi.h"
+#include "gsi_trans.h"
+
+/**
+ * DOC: IPA Filter and Route Tables
+ *
+ * The IPA has tables defined in its local shared memory that define filter
+ * and routing rules.  Each entry in these tables contains a 64-bit DMA
+ * address that refers to DRAM (system memory) containing a rule definition.
+ * A rule consists of a contiguous block of 32-bit values terminated with
+ * 32 zero bits.  A special "zero entry" rule consisting of 64 zero bits
+ * represents "no filtering" or "no routing," and is the reset value for
+ * filter or route table rules.  Separate tables (both filter and route)
+ * are used for IPv4 and IPv6.  Additionally, there can be hashed filter or
+ * route tables, which are used when a hash of message metadata matches.
+ * Hashed operation is not supported by all IPA hardware.
+ *
+ * Each filter rule is associated with an AP or modem TX endpoint, though
+ * not all TX endpoints support filtering.  The first 64-bit entry in a
+ * filter table is a bitmap indicating which endpoints have entries in
+ * the table.  The low-order bit (bit 0) in this bitmap represents a
+ * special global filter, which applies to all traffic.  This is not
+ * used in the current code.  Bit 1, if set, indicates that there is an
+ * entry (i.e. a DMA address referring to a rule) for endpoint 0 in the
+ * table.  Bit 2, if set, indicates there is an entry for endpoint 1,
+ * and so on.  Space is set aside in IPA local memory to hold as many
+ * filter table entries as might be required, but typically they are not
+ * all used.
+ *
+ * The AP initializes all entries in a filter table to refer to a "zero"
+ * entry.  Once initialized the modem and AP update the entries for
+ * endpoints they "own" directly.  Currently the AP does not use the
+ * IPA filtering functionality.
+ *
+ *                    IPA Filter Table
+ *                 ----------------------
+ * endpoint bitmap | 0x0000000000000048 | Bits 3 and 6 set (endpoints 2 and 5)
+ *                 |--------------------|
+ * 1st endpoint    | 0x000123456789abc0 | DMA address for modem endpoint 2 rule
+ *                 |--------------------|
+ * 2nd endpoint    | 0x000123456789abf0 | DMA address for AP endpoint 5 rule
+ *                 |--------------------|
+ * (unused)        |                    | (Unused space in filter table)
+ *                 |--------------------|
+ *                          . . .
+ *                 |--------------------|
+ * (unused)        |                    | (Unused space in filter table)
+ *                 ----------------------
+ *
+ * The set of available route rules is divided about equally between the AP
+ * and modem.  The AP initializes all entries in a route table to refer to
+ * a "zero entry".  Once initialized, the modem and AP are responsible for
+ * updating their own entries.  All entries in a route table are usable,
+ * though the AP currently does not use the IPA routing functionality.
+ *
+ *                    IPA Route Table
+ *                 ----------------------
+ * 1st modem route | 0x0001234500001100 | DMA address for first route rule
+ *                 |--------------------|
+ * 2nd modem route | 0x0001234500001140 | DMA address for second route rule
+ *                 |--------------------|
+ *                          . . .
+ *                 |--------------------|
+ * Last modem route| 0x0001234500002280 | DMA address for Nth route rule
+ *                 |--------------------|
+ * 1st AP route    | 0x0001234500001100 | DMA address for route rule (N+1)
+ *                 |--------------------|
+ * 2nd AP route    | 0x0001234500001140 | DMA address for next route rule
+ *                 |--------------------|
+ *                          . . .
+ *                 |--------------------|
+ * Last AP route   | 0x0001234500002280 | DMA address for last route rule
+ *                 ----------------------
+ */
+
+/* IPA hardware constrains filter and route table alignment */
+#define IPA_TABLE_ALIGN			128	/* Minimum table alignment */
+
+/* Assignment of route table entries to the modem and AP */
+#define IPA_ROUTE_MODEM_MIN		0
+#define IPA_ROUTE_MODEM_COUNT		8
+
+#define IPA_ROUTE_AP_MIN		IPA_ROUTE_MODEM_COUNT
+#define IPA_ROUTE_AP_COUNT \
+		(IPA_ROUTE_COUNT_MAX - IPA_ROUTE_MODEM_COUNT)
+
+/* Filter or route rules consist of a set of 32-bit values followed by a
+ * 32-bit all-zero rule list terminator.  The "zero rule" is simply an
+ * all-zero rule followed by the list terminator.
+ */
+#define IPA_ZERO_RULE_SIZE		(2 * sizeof(__le32))
+
+#ifdef IPA_VALIDATE
+
+/* Check things that can be validated at build time. */
+static void ipa_table_validate_build(void)
+{
+	/* IPA hardware accesses memory 128 bytes at a time.  Addresses
+	 * referred to by entries in filter and route tables must be
+	 * aligned on 128-byte boundaries.  The only rule address
+	 * ever used is the "zero rule", and it's aligned at the base
+	 * of a coherent DMA allocation.
+	 */
+	BUILD_BUG_ON(ARCH_DMA_MINALIGN % IPA_TABLE_ALIGN);
+
+	/* Filter and route tables contain DMA addresses that refer to
+	 * filter or route rules.  We use a fixed constant to represent
+	 * the size of either type of table entry.  Code in ipa_table_init()
+	 * uses a pointer to __le64 to initialize table entries.
+	 */
+	BUILD_BUG_ON(IPA_TABLE_ENTRY_SIZE != sizeof(dma_addr_t));
+	BUILD_BUG_ON(sizeof(dma_addr_t) != sizeof(__le64));
+
+	/* A "zero rule" is used to represent no filtering or no routing.
+	 * It is a 64-bit block of zeroed memory.  Code in ipa_table_init()
+	 * assumes that it can be written using a pointer to __le64.
+	 */
+	BUILD_BUG_ON(IPA_ZERO_RULE_SIZE != sizeof(__le64));
+
+	/* Impose a practical limit on the number of routes */
+	BUILD_BUG_ON(IPA_ROUTE_COUNT_MAX > 32);
+	/* The modem must be allotted at least one route table entry */
+	BUILD_BUG_ON(!IPA_ROUTE_MODEM_COUNT);
+	/* But it can't have more than what is available */
+	BUILD_BUG_ON(IPA_ROUTE_MODEM_COUNT > IPA_ROUTE_COUNT_MAX);
+}
+
+static bool
+ipa_table_valid_one(struct ipa *ipa, bool route, bool ipv6, bool hashed)
+{
+	struct device *dev = &ipa->pdev->dev;
+	const struct ipa_mem *mem;
+	u32 size;
+
+	if (route) {
+		if (ipv6)
+			mem = hashed ? &ipa->mem[IPA_MEM_V6_ROUTE_HASHED]
+				     : &ipa->mem[IPA_MEM_V6_ROUTE];
+		else
+			mem = hashed ? &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]
+				     : &ipa->mem[IPA_MEM_V4_ROUTE];
+		size = IPA_ROUTE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE;
+	} else {
+		if (ipv6)
+			mem = hashed ? &ipa->mem[IPA_MEM_V6_FILTER_HASHED]
+				     : &ipa->mem[IPA_MEM_V6_FILTER];
+		else
+			mem = hashed ? &ipa->mem[IPA_MEM_V4_FILTER_HASHED]
+				     : &ipa->mem[IPA_MEM_V4_FILTER];
+		size = (1 + IPA_FILTER_COUNT_MAX) * IPA_TABLE_ENTRY_SIZE;
+	}
+
+	if (!ipa_cmd_table_valid(ipa, mem, route, ipv6, hashed))
+		return false;
+
+	/* mem->size >= size is sufficient, but we'll demand more */
+	if (mem->size == size)
+		return true;
+
+	/* Hashed table regions can be zero size if hashing is not supported */
+	if (hashed && !mem->size)
+		return true;
+
+	dev_err(dev, "IPv%c %s%s table region size 0x%02x, expected 0x%02x\n",
+		ipv6 ? '6' : '4', hashed ? "hashed " : "",
+		route ? "route" : "filter", mem->size, size);
+
+	return false;
+}
+
+/* Verify the filter and route table memory regions are the expected size */
+bool ipa_table_valid(struct ipa *ipa)
+{
+	bool valid = true;
+
+	valid = valid && ipa_table_valid_one(ipa, false, false, false);
+	valid = valid && ipa_table_valid_one(ipa, false, false, true);
+	valid = valid && ipa_table_valid_one(ipa, false, true, false);
+	valid = valid && ipa_table_valid_one(ipa, false, true, true);
+	valid = valid && ipa_table_valid_one(ipa, true, false, false);
+	valid = valid && ipa_table_valid_one(ipa, true, false, true);
+	valid = valid && ipa_table_valid_one(ipa, true, true, false);
+	valid = valid && ipa_table_valid_one(ipa, true, true, true);
+
+	return valid;
+}
+
+bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_map)
+{
+	struct device *dev = &ipa->pdev->dev;
+	u32 count;
+
+	if (!filter_map) {
+		dev_err(dev, "at least one filtering endpoint is required\n");
+
+		return false;
+	}
+
+	count = hweight32(filter_map);
+	if (count > IPA_FILTER_COUNT_MAX) {
+		dev_err(dev, "too many filtering endpoints (%u, max %u)\n",
+			count, IPA_FILTER_COUNT_MAX);
+
+		return false;
+	}
+
+	return true;
+}
+
+#else /* !IPA_VALIDATE */
+static void ipa_table_validate_build(void)
+
+{
+}
+
+#endif /* !IPA_VALIDATE */
+
+/* Zero entry count means no table, so just return a 0 address */
+static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
+{
+	u32 skip;
+
+	if (!count)
+		return 0;
+
+/* assert(count <= max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX)); */
+
+	/* Skip over the zero rule and possibly the filter mask */
+	skip = filter_mask ? 1 : 2;
+
+	return ipa->table_addr + skip * sizeof(*ipa->table_virt);
+}
+
+static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
+				u16 first, u16 count, const struct ipa_mem *mem)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	dma_addr_t addr;
+	u32 offset;
+	u16 size;
+
+	/* Nothing to do if the table memory region is empty */
+	if (!mem->size)
+		return;
+
+	if (filter)
+		first++;	/* skip over bitmap */
+
+	offset = mem->offset + first * IPA_TABLE_ENTRY_SIZE;
+	size = count * IPA_TABLE_ENTRY_SIZE;
+	addr = ipa_table_addr(ipa, false, count);
+
+	ipa_cmd_dma_shared_mem_add(trans, offset, size, addr, true);
+}
+
+/* Reset entries in a single filter table belonging to either the AP or
+ * modem to refer to the zero entry.  The memory region supplied will be
+ * one of the IPv4 or IPv6, hashed or non-hashed filter tables.
+ */
+static int
+ipa_filter_reset_table(struct ipa *ipa, const struct ipa_mem *mem, bool modem)
+{
+	u32 ep_mask = ipa->filter_map;
+	u32 count = hweight32(ep_mask);
+	struct gsi_trans *trans;
+	enum gsi_ee_id ee_id;
+
+	if (!mem->size)
+		return 0;
+
+	trans = ipa_cmd_trans_alloc(ipa, count);
+	if (!trans) {
+		dev_err(&ipa->pdev->dev,
+			"no transaction for %s filter reset\n",
+			modem ? "modem" : "AP");
+		return -EBUSY;
+	}
+
+	ee_id = modem ? GSI_EE_MODEM : GSI_EE_AP;
+	while (ep_mask) {
+		u32 endpoint_id = __ffs(ep_mask);
+		struct ipa_endpoint *endpoint;
+
+		ep_mask ^= BIT(endpoint_id);
+
+		endpoint = &ipa->endpoint[endpoint_id];
+		if (endpoint->ee_id != ee_id)
+			continue;
+
+		ipa_table_reset_add(trans, true, endpoint_id, 1, mem);
+	}
+
+	gsi_trans_commit_wait(trans);
+
+	return 0;
+}
+
+/* Theoretically, each filter table could have more filter slots to
+ * update than the maximum number of commands in a transaction.  So
+ * we do each table separately.
+ */
+static int ipa_filter_reset(struct ipa *ipa, bool modem)
+{
+	int ret;
+
+	ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V4_FILTER], modem);
+	if (ret)
+		return ret;
+
+	ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V4_FILTER_HASHED],
+				     modem);
+	if (ret)
+		return ret;
+
+	ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V6_FILTER], modem);
+	if (ret)
+		return ret;
+	ret = ipa_filter_reset_table(ipa, &ipa->mem[IPA_MEM_V6_FILTER_HASHED],
+				     modem);
+
+	return ret;
+}
+
+/* The AP routes and modem routes are each contiguous within the
+ * table.  We can update each table with a single command, and we
+ * won't exceed the per-transaction command limit.
+ */
+static int ipa_route_reset(struct ipa *ipa, bool modem)
+{
+	struct gsi_trans *trans;
+	u16 first;
+	u16 count;
+
+	trans = ipa_cmd_trans_alloc(ipa, 4);
+	if (!trans) {
+		dev_err(&ipa->pdev->dev,
+			"no transaction for %s route reset\n",
+			modem ? "modem" : "AP");
+		return -EBUSY;
+	}
+
+	if (modem) {
+		first = IPA_ROUTE_MODEM_MIN;
+		count = IPA_ROUTE_MODEM_COUNT;
+	} else {
+		first = IPA_ROUTE_AP_MIN;
+		count = IPA_ROUTE_AP_COUNT;
+	}
+
+	ipa_table_reset_add(trans, false, first, count,
+			    &ipa->mem[IPA_MEM_V4_ROUTE]);
+	ipa_table_reset_add(trans, false, first, count,
+			    &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]);
+
+	ipa_table_reset_add(trans, false, first, count,
+			    &ipa->mem[IPA_MEM_V6_ROUTE]);
+	ipa_table_reset_add(trans, false, first, count,
+			    &ipa->mem[IPA_MEM_V6_ROUTE_HASHED]);
+
+	gsi_trans_commit_wait(trans);
+
+	return 0;
+}
+
+void ipa_table_reset(struct ipa *ipa, bool modem)
+{
+	struct device *dev = &ipa->pdev->dev;
+	const char *ee_name;
+	int ret;
+
+	ee_name = modem ? "modem" : "AP";
+
+	/* Report errors, but reset filter and route tables */
+	ret = ipa_filter_reset(ipa, modem);
+	if (ret)
+		dev_err(dev, "error %d resetting filter table for %s\n",
+				ret, ee_name);
+
+	ret = ipa_route_reset(ipa, modem);
+	if (ret)
+		dev_err(dev, "error %d resetting route table for %s\n",
+				ret, ee_name);
+}
+
+int ipa_table_hash_flush(struct ipa *ipa)
+{
+	u32 offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
+	struct gsi_trans *trans;
+	u32 val;
+
+	/* IPA version 4.2 does not support hashed tables */
+	if (ipa->version == IPA_VERSION_4_2)
+		return 0;
+
+	trans = ipa_cmd_trans_alloc(ipa, 1);
+	if (!trans) {
+		dev_err(&ipa->pdev->dev, "no transaction for hash flush\n");
+		return -EBUSY;
+	}
+
+	val = IPV4_FILTER_HASH_FLUSH | IPV6_FILTER_HASH_FLUSH;
+	val |= IPV6_ROUTER_HASH_FLUSH | IPV4_ROUTER_HASH_FLUSH;
+
+	ipa_cmd_register_write_add(trans, offset, val, val, false);
+
+	gsi_trans_commit_wait(trans);
+
+	return 0;
+}
+
+static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
+			       enum ipa_cmd_opcode opcode,
+			       const struct ipa_mem *mem,
+			       const struct ipa_mem *hash_mem)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	dma_addr_t hash_addr;
+	dma_addr_t addr;
+	u16 hash_count;
+	u16 hash_size;
+	u16 count;
+	u16 size;
+
+	/* The number of filtering endpoints determines number of entries
+	 * in the filter table.  The hashed and non-hashed filter table
+	 * will have the same number of entries.  The size of the route
+	 * table region determines the number of entries it has.
+	 */
+	if (filter) {
+		count = hweight32(ipa->filter_map);
+		hash_count = hash_mem->size ? count : 0;
+	} else {
+		count = mem->size / IPA_TABLE_ENTRY_SIZE;
+		hash_count = hash_mem->size / IPA_TABLE_ENTRY_SIZE;
+	}
+	size = count * IPA_TABLE_ENTRY_SIZE;
+	hash_size = hash_count * IPA_TABLE_ENTRY_SIZE;
+
+	addr = ipa_table_addr(ipa, filter, count);
+	hash_addr = ipa_table_addr(ipa, filter, hash_count);
+
+	ipa_cmd_table_init_add(trans, opcode, size, mem->offset, addr,
+			       hash_size, hash_mem->offset, hash_addr);
+}
+
+int ipa_table_setup(struct ipa *ipa)
+{
+	struct gsi_trans *trans;
+
+	trans = ipa_cmd_trans_alloc(ipa, 4);
+	if (!trans) {
+		dev_err(&ipa->pdev->dev, "no transaction for table setup\n");
+		return -EBUSY;
+	}
+
+	ipa_table_init_add(trans, false, IPA_CMD_IP_V4_ROUTING_INIT,
+			   &ipa->mem[IPA_MEM_V4_ROUTE],
+			   &ipa->mem[IPA_MEM_V4_ROUTE_HASHED]);
+
+	ipa_table_init_add(trans, false, IPA_CMD_IP_V6_ROUTING_INIT,
+			   &ipa->mem[IPA_MEM_V6_ROUTE],
+			   &ipa->mem[IPA_MEM_V6_ROUTE_HASHED]);
+
+	ipa_table_init_add(trans, true, IPA_CMD_IP_V4_FILTER_INIT,
+			   &ipa->mem[IPA_MEM_V4_FILTER],
+			   &ipa->mem[IPA_MEM_V4_FILTER_HASHED]);
+
+	ipa_table_init_add(trans, true, IPA_CMD_IP_V6_FILTER_INIT,
+			   &ipa->mem[IPA_MEM_V6_FILTER],
+			   &ipa->mem[IPA_MEM_V6_FILTER_HASHED]);
+
+	gsi_trans_commit_wait(trans);
+
+	return 0;
+}
+
+void ipa_table_teardown(struct ipa *ipa)
+{
+	/* Nothing to do */	/* XXX Maybe reset the tables? */
+}
+
+/**
+ * ipa_filter_tuple_zero() - Zero an endpoint's hashed filter tuple
+ * @endpoint:	Endpoint whose filter hash tuple should be zeroed
+ *
+ * The endpoint must support filtering.  This updates the endpoint's
+ * filter hash values without changing the route ones.
+ */
+static void ipa_filter_tuple_zero(struct ipa_endpoint *endpoint)
+{
+	u32 endpoint_id = endpoint->endpoint_id;
+	u32 offset;
+	u32 val;
+
+	offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(endpoint_id);
+
+	val = ioread32(endpoint->ipa->reg_virt + offset);
+
+	/* Zero all filter-related fields, preserving the rest */
+	val = u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
+
+	iowrite32(val, endpoint->ipa->reg_virt + offset);
+}
+
+static void ipa_filter_config(struct ipa *ipa, bool modem)
+{
+	enum gsi_ee_id ee_id = modem ? GSI_EE_MODEM : GSI_EE_AP;
+	u32 ep_mask = ipa->filter_map;
+
+	/* IPA version 4.2 has no hashed filter tables */
+	if (ipa->version == IPA_VERSION_4_2)
+		return;
+
+	while (ep_mask) {
+		u32 endpoint_id = __ffs(ep_mask);
+		struct ipa_endpoint *endpoint;
+
+		ep_mask ^= BIT(endpoint_id);
+
+		endpoint = &ipa->endpoint[endpoint_id];
+		if (endpoint->ee_id == ee_id)
+			ipa_filter_tuple_zero(endpoint);
+	}
+}
+
+static void ipa_filter_deconfig(struct ipa *ipa, bool modem)
+{
+	/* Nothing to do */
+}
+
+static bool ipa_route_id_modem(u32 route_id)
+{
+	return route_id >= IPA_ROUTE_MODEM_MIN &&
+		route_id <= IPA_ROUTE_MODEM_MIN + IPA_ROUTE_MODEM_COUNT - 1;
+}
+
+/**
+ * ipa_route_tuple_zero() - Zero a hashed route table entry tuple
+ * @route_id:	Route table entry whose hash tuple should be zeroed
+ *
+ * Updates the route hash values without changing filter ones.
+ */
+static void ipa_route_tuple_zero(struct ipa *ipa, u32 route_id)
+{
+	u32 offset = IPA_REG_ENDP_FILTER_ROUTER_HSH_CFG_N_OFFSET(route_id);
+	u32 val;
+
+	val = ioread32(ipa->reg_virt + offset);
+
+	/* Zero all route-related fields, preserving the rest */
+	val = u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
+
+	iowrite32(val, ipa->reg_virt + offset);
+}
+
+static void ipa_route_config(struct ipa *ipa, bool modem)
+{
+	u32 route_id;
+
+	/* IPA version 4.2 has no hashed route tables */
+	if (ipa->version == IPA_VERSION_4_2)
+		return;
+
+	for (route_id = 0; route_id < IPA_ROUTE_COUNT_MAX; route_id++)
+		if (ipa_route_id_modem(route_id) == modem)
+			ipa_route_tuple_zero(ipa, route_id);
+}
+
+static void ipa_route_deconfig(struct ipa *ipa, bool modem)
+{
+	/* Nothing to do */
+}
+
+void ipa_table_config(struct ipa *ipa)
+{
+	ipa_filter_config(ipa, false);
+	ipa_filter_config(ipa, true);
+	ipa_route_config(ipa, false);
+	ipa_route_config(ipa, true);
+}
+
+void ipa_table_deconfig(struct ipa *ipa)
+{
+	ipa_route_deconfig(ipa, true);
+	ipa_route_deconfig(ipa, false);
+	ipa_filter_deconfig(ipa, true);
+	ipa_filter_deconfig(ipa, false);
+}
+
+/*
+ * Initialize a coherent DMA allocation containing initialized filter and
+ * route table data.  This is used when initializing or resetting the IPA
+ * filter or route table.
+ *
+ * The first entry in a filter table contains a bitmap indicating which
+ * endpoints contain entries in the table.  In addition to that first entry,
+ * there are at most IPA_FILTER_COUNT_MAX entries that follow.  Filter table
+ * entries are 64 bits wide, and (other than the bitmap) contain the DMA
+ * address of a filter rule.  A "zero rule" indicates no filtering, and
+ * consists of 64 bits of zeroes.  When a filter table is initialized (or
+ * reset) its entries are made to refer to the zero rule.
+ *
+ * Each entry in a route table is the DMA address of a routing rule.  For
+ * routing there is also a 64-bit "zero rule" that means no routing, and
+ * when a route table is initialized or reset, its entries are made to refer
+ * to the zero rule.  The zero rule is shared for route and filter tables.
+ *
+ * Note that the IPA hardware requires a filter or route rule address to be
+ * aligned on a 128 byte boundary.  The coherent DMA buffer we allocate here
+ * has a minimum alignment, and we place the zero rule at the base of that
+ * allocated space.  In ipa_table_init() we verify the minimum DMA allocation
+ * meets our requirement.
+ *
+ *	     +-------------------+
+ *	 --> |     zero rule     |
+ *	/    |-------------------|
+ *	|    |     filter mask   |
+ *	|\   |-------------------|
+ *	| ---- zero rule address | \
+ *	|\   |-------------------|  |
+ *	| ---- zero rule address |  |	IPA_FILTER_COUNT_MAX
+ *	|    |-------------------|   >	or IPA_ROUTE_COUNT_MAX,
+ *	|	      ...	    |	whichever is greater
+ *	 \   |-------------------|  |
+ *	  ---- zero rule address | /
+ *	     +-------------------+
+ */
+int ipa_table_init(struct ipa *ipa)
+{
+	u32 count = max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX);
+	struct device *dev = &ipa->pdev->dev;
+	dma_addr_t addr;
+	__le64 le_addr;
+	__le64 *virt;
+	size_t size;
+
+	ipa_table_validate_build();
+
+	size = IPA_ZERO_RULE_SIZE + (1 + count) * IPA_TABLE_ENTRY_SIZE;
+	virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
+	if (!virt)
+		return -ENOMEM;
+
+	ipa->table_virt = virt;
+	ipa->table_addr = addr;
+
+	/* First slot is the zero rule */
+	*virt++ = 0;
+
+	/* Next is the filter table bitmap.  The "soft" bitmap value
+	 * must be converted to the hardware representation by shifting
+	 * it left one position.  (Bit 0 represents global filtering,
+	 * which is possible but not used.)
+	 */
+	*virt++ = cpu_to_le64((u64)ipa->filter_map << 1);
+
+	/* All the rest contain the DMA address of the zero rule */
+	le_addr = cpu_to_le64(addr);
+	while (count--)
+		*virt++ = le_addr;
+
+	return 0;
+}
+
+void ipa_table_exit(struct ipa *ipa)
+{
+	u32 count = max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX);
+	struct device *dev = &ipa->pdev->dev;
+	size_t size;
+
+	size = IPA_ZERO_RULE_SIZE + (1 + count) * IPA_TABLE_ENTRY_SIZE;
+
+	dma_free_coherent(dev, size, ipa->table_virt, ipa->table_addr);
+	ipa->table_addr = 0;
+	ipa->table_virt = NULL;
+}
diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
new file mode 100644
index 000000000000..64ea0221441a
--- /dev/null
+++ b/drivers/net/ipa/ipa_table.h
@@ -0,0 +1,103 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_TABLE_H_
+#define _IPA_TABLE_H_
+
+#include <linux/types.h>
+
+struct ipa;
+
+/* The size of a filter or route table entry */
+#define IPA_TABLE_ENTRY_SIZE	sizeof(__le64)	/* Holds a physical address */
+
+/* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
+#define IPA_FILTER_COUNT_MAX	14
+
+/* The maximum number of route table entries (IPv4, IPv6; hashed or not) */
+#define IPA_ROUTE_COUNT_MAX	15
+
+#ifdef IPA_VALIDATE
+
+/**
+ * ipa_table_valid() - Validate route and filter table memory regions
+ * @ipa:	IPA pointer
+ *
+ * @Return:	true if all regions are valid, false otherwise
+ */
+bool ipa_table_valid(struct ipa *ipa);
+
+/**
+ * ipa_filter_map_valid() - Validate a filter table endpoint bitmap
+ * @ipa:	IPA pointer
+ * @filter_map:	Filter table endpoint bitmap to check
+ *
+ * @Return:	true if the bitmap is valid, false otherwise
+ */
+bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_map);
+
+#else /* !IPA_VALIDATE */
+
+static inline bool ipa_table_valid(struct ipa *ipa)
+{
+	return true;
+}
+
+static inline bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_map)
+{
+	return true;
+}
+
+#endif /* !IPA_VALIDATE */
+
+/**
+ * ipa_table_reset() - Reset filter and route tables entries to "none"
+ * @ipa:	IPA pointer
+ * @modem:	Whether to reset modem or AP entries
+ */
+void ipa_table_reset(struct ipa *ipa, bool modem);
+
+/**
+ * ipa_table_hash_flush() - Synchronize hashed filter and route updates
+ * @ipa:	IPA pointer
+ */
+int ipa_table_hash_flush(struct ipa *ipa);
+
+/**
+ * ipa_table_setup() - Set up filter and route tables
+ * @ipa:	IPA pointer
+ */
+int ipa_table_setup(struct ipa *ipa);
+
+/**
+ * ipa_table_teardown() - Inverse of ipa_table_setup()
+ * @ipa:	IPA pointer
+ */
+void ipa_table_teardown(struct ipa *ipa);
+
+/**
+ * ipa_table_config() - Configure filter and route tables
+ * @ipa:	IPA pointer
+ */
+void ipa_table_config(struct ipa *ipa);
+
+/**
+ * ipa_table_deconfig() - Inverse of ipa_table_config()
+ * @ipa:	IPA pointer
+ */
+void ipa_table_deconfig(struct ipa *ipa);
+
+/**
+ * ipa_table_init() - Do early initialization of filter and route tables
+ * @ipa:	IPA pointer
+ */
+int ipa_table_init(struct ipa *ipa);
+
+/**
+ * ipa_table_exit() - Inverse of ipa_table_init()
+ * @ipa:	IPA pointer
+ */
+void ipa_table_exit(struct ipa *ipa);
+
+#endif /* _IPA_TABLE_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 12/17] soc: qcom: ipa: immediate commands
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (10 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 11/17] soc: qcom: ipa: filter and routing tables Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 13/17] soc: qcom: ipa: modem and microcontroller Alex Elder
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

One TX endpoint (per EE) is used for issuing immediate commands to
the IPA.  These commands request activities beyond simple data
transfers to be done by the IPA hardware.  For example, the IPA is
able to manage routing packets among endpoints, and immediate commands
are used to configure tables used for that routing.

Immediate commands are built on top of GSI transactions.  They are
different from normal transfers (in that they use a special endpoint,
and their "payload" is interpreted differently), so separate functions
are used to issue immediate command transactions.
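
To give a sense of how these are used, here is a sketch (illustrative
only, not part of the patch; it mirrors ipa_table_hash_flush() from
the previous patch): a caller allocates a command transaction, adds
one or more commands to it, then commits it and waits for completion.

/* Illustrative sketch only -- not part of this patch */
static int example_register_write(struct ipa *ipa, u32 offset, u32 val)
{
	struct gsi_trans *trans;

	/* Allocate a command transaction with room for one command */
	trans = ipa_cmd_trans_alloc(ipa, 1);
	if (!trans)
		return -EBUSY;

	/* Add a register write command, then commit and wait */
	ipa_cmd_register_write_add(trans, offset, val, val, false);
	gsi_trans_commit_wait(trans);

	return 0;
}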

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_cmd.c | 680 ++++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_cmd.h | 195 +++++++++++
 2 files changed, 875 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_cmd.c
 create mode 100644 drivers/net/ipa/ipa_cmd.h

diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
new file mode 100644
index 000000000000..d226b858742d
--- /dev/null
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -0,0 +1,680 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/bitfield.h>
+#include <linux/dma-direction.h>
+
+#include "gsi.h"
+#include "gsi_trans.h"
+#include "ipa.h"
+#include "ipa_endpoint.h"
+#include "ipa_table.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+
+/**
+ * DOC:  IPA Immediate Commands
+ *
+ * The AP command TX endpoint is used to issue immediate commands to the IPA.
+ * An immediate command is generally used to request the IPA do something
+ * other than data transfer to another endpoint.
+ *
+ * Immediate commands are represented by GSI transactions just like other
+ * transfer requests, represented by a single GSI TRE.  Each immediate
+ * command has a well-defined format, having a payload of a known length.
+ * This allows the transfer element's length field to be used to hold an
+ * immediate command's opcode.  The payload for a command resides in DRAM
+ * and is described by a single scatterlist entry in its transaction.
+ * Commands do not require a transaction completion callback.  To commit
+ * an immediate command transaction, either gsi_trans_commit_wait() or
+ * gsi_trans_commit_wait_timeout() is used.
+ */
+
+/* Some commands can wait until indicated pipeline stages are clear */
+enum pipeline_clear_options {
+	pipeline_clear_hps	= 0,
+	pipeline_clear_src_grp	= 1,
+	pipeline_clear_full	= 2,
+};
+
+/* IPA_CMD_IP_V{4,6}_{FILTER,ROUTING}_INIT */
+
+struct ipa_cmd_hw_ip_fltrt_init {
+	__le64 hash_rules_addr;
+	__le64 flags;
+	__le64 nhash_rules_addr;
+};
+
+/* Field masks for ipa_cmd_hw_ip_fltrt_init structure fields */
+#define IP_FLTRT_FLAGS_HASH_SIZE_FMASK			GENMASK_ULL(11, 0)
+#define IP_FLTRT_FLAGS_HASH_ADDR_FMASK			GENMASK_ULL(27, 12)
+#define IP_FLTRT_FLAGS_NHASH_SIZE_FMASK			GENMASK_ULL(39, 28)
+#define IP_FLTRT_FLAGS_NHASH_ADDR_FMASK			GENMASK_ULL(55, 40)
+
+/* IPA_CMD_HDR_INIT_LOCAL */
+
+struct ipa_cmd_hw_hdr_init_local {
+	__le64 hdr_table_addr;
+	__le32 flags;
+	__le32 reserved;
+};
+
+/* Field masks for ipa_cmd_hw_hdr_init_local structure fields */
+#define HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK		GENMASK(11, 0)
+#define HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK		GENMASK(27, 12)
+
+/* IPA_CMD_REGISTER_WRITE */
+
+/* For IPA v4.0+, this opcode gets modified with pipeline clear options */
+
+#define REGISTER_WRITE_OPCODE_SKIP_CLEAR_FMASK		GENMASK(8, 8)
+#define REGISTER_WRITE_OPCODE_CLEAR_OPTION_FMASK	GENMASK(10, 9)
+
+struct ipa_cmd_register_write {
+	__le16 flags;		/* Unused/reserved for IPA v3.5.1 */
+	__le16 offset;
+	__le32 value;
+	__le32 value_mask;
+	__le32 clear_options;	/* Unused/reserved for IPA v4.0+ */
+};
+
+/* Field masks for ipa_cmd_register_write structure fields */
+/* The next field is present for IPA v4.0 and above */
+#define REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK		GENMASK(14, 11)
+/* The next field is present for IPA v3.5.1 only */
+#define REGISTER_WRITE_FLAGS_SKIP_CLEAR_FMASK		GENMASK(15, 15)
+
+/* The next field and its values are present for IPA v3.5.1 only */
+#define REGISTER_WRITE_CLEAR_OPTIONS_FMASK		GENMASK(1, 0)
+
+/* IPA_CMD_IP_PACKET_INIT */
+
+struct ipa_cmd_ip_packet_init {
+	u8 dest_endpoint;
+	u8 reserved[7];
+};
+
+/* Field masks for ipa_cmd_ip_packet_init dest_endpoint field */
+#define IPA_PACKET_INIT_DEST_ENDPOINT_FMASK		GENMASK(4, 0)
+
+/* IPA_CMD_DMA_TASK_32B_ADDR */
+
+/* This opcode gets modified with a DMA operation count */
+
+#define DMA_TASK_32B_ADDR_OPCODE_COUNT_FMASK		GENMASK(15, 8)
+
+struct ipa_cmd_hw_dma_task_32b_addr {
+	__le16 flags;
+	__le16 size;
+	__le32 addr;
+	__le16 packet_size;
+	u8 reserved[6];
+};
+
+/* Field masks for ipa_cmd_hw_dma_task_32b_addr flags field */
+#define DMA_TASK_32B_ADDR_FLAGS_SW_RSVD_FMASK		GENMASK(10, 0)
+#define DMA_TASK_32B_ADDR_FLAGS_CMPLT_FMASK		GENMASK(11, 11)
+#define DMA_TASK_32B_ADDR_FLAGS_EOF_FMASK		GENMASK(12, 12)
+#define DMA_TASK_32B_ADDR_FLAGS_FLSH_FMASK		GENMASK(13, 13)
+#define DMA_TASK_32B_ADDR_FLAGS_LOCK_FMASK		GENMASK(14, 14)
+#define DMA_TASK_32B_ADDR_FLAGS_UNLOCK_FMASK		GENMASK(15, 15)
+
+/* IPA_CMD_DMA_SHARED_MEM */
+
+/* For IPA v4.0+, this opcode gets modified with pipeline clear options */
+
+#define DMA_SHARED_MEM_OPCODE_SKIP_CLEAR_FMASK		GENMASK(8, 8)
+#define DMA_SHARED_MEM_OPCODE_CLEAR_OPTION_FMASK	GENMASK(10, 9)
+
+struct ipa_cmd_hw_dma_mem_mem {
+	__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
+	__le16 size;
+	__le16 local_addr;
+	__le16 flags;
+	__le64 system_addr;
+};
+
+/* Flag allowing atomic clear of target region after reading data (v4.0+)*/
+#define DMA_SHARED_MEM_CLEAR_AFTER_READ			GENMASK(15, 15)
+
+/* Field masks for ipa_cmd_hw_dma_mem_mem structure fields */
+#define DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK		GENMASK(0, 0)
+/* The next two fields are present for IPA v3.5.1 only. */
+#define DMA_SHARED_MEM_FLAGS_SKIP_CLEAR_FMASK		GENMASK(1, 1)
+#define DMA_SHARED_MEM_FLAGS_CLEAR_OPTIONS_FMASK	GENMASK(3, 2)
+
+/* IPA_CMD_IP_PACKET_TAG_STATUS */
+
+struct ipa_cmd_ip_packet_tag_status {
+	__le64 tag;
+};
+
+#define IP_PACKET_TAG_STATUS_TAG_FMASK			GENMASK_ULL(63, 16)
+
+/* Immediate command payload */
+union ipa_cmd_payload {
+	struct ipa_cmd_hw_ip_fltrt_init table_init;
+	struct ipa_cmd_hw_hdr_init_local hdr_init_local;
+	struct ipa_cmd_register_write register_write;
+	struct ipa_cmd_ip_packet_init ip_packet_init;
+	struct ipa_cmd_hw_dma_task_32b_addr dma_task_32b_addr;
+	struct ipa_cmd_hw_dma_mem_mem dma_shared_mem;
+	struct ipa_cmd_ip_packet_tag_status ip_packet_tag_status;
+};
+
+static void ipa_cmd_validate_build(void)
+{
+	/* The sizes of the filter and route tables need to fit into fields
+	 * in the ipa_cmd_hw_ip_fltrt_init structure.  Although hashed tables
+	 * might not be used, non-hashed and hashed tables have the same
+	 * maximum size.  IPv4 and IPv6 filter tables have the same number
+	 * of entries, as do IPv4 and IPv6 route tables.
+	 */
+#define TABLE_SIZE	(TABLE_COUNT_MAX * IPA_TABLE_ENTRY_SIZE)
+#define TABLE_COUNT_MAX	max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX)
+	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK));
+	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
+#undef TABLE_COUNT_MAX
+#undef TABLE_SIZE
+}
+
+#ifdef IPA_VALIDATE
+
+/* Validate a memory region holding a table */
+bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+			 bool route, bool ipv6, bool hashed)
+{
+	struct device *dev = &ipa->pdev->dev;
+	u32 offset_max;
+
+	offset_max = hashed ? field_max(IP_FLTRT_FLAGS_HASH_ADDR_FMASK)
+			    : field_max(IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
+	if (mem->offset > offset_max ||
+	    ipa->mem_offset > offset_max - mem->offset) {
+		dev_err(dev, "IPv%c %s%s table region offset too large "
+			      "(0x%04x + 0x%04x > 0x%04x)\n",
+			      ipv6 ? '6' : '4', hashed ? "hashed " : "",
+			      route ? "route" : "filter",
+			      ipa->mem_offset, mem->offset, offset_max);
+		return false;
+	}
+
+	if (mem->offset > ipa->mem_size ||
+	    mem->size > ipa->mem_size - mem->offset) {
+		dev_err(dev, "IPv%c %s%s table region out of range "
+			      "(0x%04x + 0x%04x > 0x%04x)\n",
+			      ipv6 ? '6' : '4', hashed ? "hashed " : "",
+			      route ? "route" : "filter",
+			      mem->offset, mem->size, ipa->mem_size);
+		return false;
+	}
+
+	return true;
+}
+
+/* Validate the memory region that holds headers */
+static bool ipa_cmd_header_valid(struct ipa *ipa)
+{
+	const struct ipa_mem *mem = &ipa->mem[IPA_MEM_MODEM_HEADER];
+	struct device *dev = &ipa->pdev->dev;
+	u32 offset_max;
+	u32 size_max;
+	u32 size;
+
+	offset_max = field_max(HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
+	if (mem->offset > offset_max ||
+	    ipa->mem_offset > offset_max - mem->offset) {
+		dev_err(dev, "header table region offset too large "
+			      "(0x%04x + 0x%04x > 0x%04x)\n",
+			      ipa->mem_offset, mem->offset, offset_max);
+		return false;
+	}
+
+	size_max = field_max(HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
+	size = ipa->mem[IPA_MEM_MODEM_HEADER].size;
+	size += ipa->mem[IPA_MEM_AP_HEADER].size;
+	if (mem->offset > ipa->mem_size || size > ipa->mem_size - mem->offset) {
+		dev_err(dev, "header table region out of range "
+			      "(0x%04x + 0x%04x > 0x%04x)\n",
+			      mem->offset, size, ipa->mem_size);
+		return false;
+	}
+
+	return true;
+}
+
+/* Indicate whether an offset can be used with a register_write command */
+static bool ipa_cmd_register_write_offset_valid(struct ipa *ipa,
+						const char *name, u32 offset)
+{
+	struct ipa_cmd_register_write *payload;
+	struct device *dev = &ipa->pdev->dev;
+	u32 offset_max;
+	u32 bit_count;
+
+	/* The maximum offset in a register_write immediate command depends
+	 * on the version of IPA.  IPA v3.5.1 supports a 16 bit offset, but
+	 * newer versions allow some additional high-order bits.
+	 */
+	bit_count = BITS_PER_BYTE * sizeof(payload->offset);
+	if (ipa->version != IPA_VERSION_3_5_1)
+		bit_count += hweight32(REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK);
+	BUILD_BUG_ON(bit_count > 32);
+	offset_max = ~0U >> (32 - bit_count);
+
+	if (offset > offset_max || ipa->mem_offset > offset_max - offset) {
+		dev_err(dev, "%s offset too large (0x%04x + 0x%04x > 0x%04x)\n",
+				name, ipa->mem_offset, offset, offset_max);
+		return false;
+	}
+
+	return true;
+}
+
+/* Check whether offsets passed to register_write are valid */
+static bool ipa_cmd_register_write_valid(struct ipa *ipa)
+{
+	const char *name;
+	u32 offset;
+
+	offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
+	name = "filter/route hash flush";
+	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
+		return false;
+
+	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT);
+	name = "maximal endpoint status";
+	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
+		return false;
+
+	return true;
+}
+
+bool ipa_cmd_data_valid(struct ipa *ipa)
+{
+	if (!ipa_cmd_header_valid(ipa))
+		return false;
+
+	if (!ipa_cmd_register_write_valid(ipa))
+		return false;
+
+	return true;
+}
+
+#endif /* IPA_VALIDATE */
+
+int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct device *dev = channel->gsi->dev;
+	int ret;
+
+	/* This is as good a place as any to validate build constants */
+	ipa_cmd_validate_build();
+
+	/* Even though command payloads are allocated one at a time,
+	 * a single transaction can require up to tlv_count of them,
+	 * so we treat them as if that many can be allocated at once.
+	 */
+	ret = gsi_trans_pool_init_dma(dev, &trans_info->cmd_pool,
+				      sizeof(union ipa_cmd_payload),
+				      tre_max, channel->tlv_count);
+	if (ret)
+		return ret;
+
+	/* Each TRE needs a command info structure */
+	ret = gsi_trans_pool_init(&trans_info->info_pool,
+				   sizeof(struct ipa_cmd_info),
+				   tre_max, channel->tlv_count);
+	if (ret)
+		gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
+
+	return ret;
+}
+
+void ipa_cmd_pool_exit(struct gsi_channel *channel)
+{
+	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct device *dev = channel->gsi->dev;
+
+	gsi_trans_pool_exit(&trans_info->info_pool);
+	gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
+}
+
+static union ipa_cmd_payload *
+ipa_cmd_payload_alloc(struct ipa *ipa, dma_addr_t *addr)
+{
+	struct gsi_trans_info *trans_info;
+	struct ipa_endpoint *endpoint;
+
+	endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
+	trans_info = &ipa->gsi.channel[endpoint->channel_id].trans_info;
+
+	return gsi_trans_pool_alloc_dma(&trans_info->cmd_pool, addr);
+}
+
+/* If hash_size is 0, hash_offset and hash_addr are ignored. */
+void ipa_cmd_table_init_add(struct gsi_trans *trans,
+			    enum ipa_cmd_opcode opcode, u16 size, u32 offset,
+			    dma_addr_t addr, u16 hash_size, u32 hash_offset,
+			    dma_addr_t hash_addr)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum dma_data_direction direction = DMA_TO_DEVICE;
+	struct ipa_cmd_hw_ip_fltrt_init *payload;
+	union ipa_cmd_payload *cmd_payload;
+	dma_addr_t payload_addr;
+	u64 val;
+
+	/* Record the non-hash table offset and size */
+	offset += ipa->mem_offset;
+	val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
+	val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
+
+	/* The hash table offset and address are zero if its size is 0 */
+	if (hash_size) {
+		/* Record the hash table offset and size */
+		hash_offset += ipa->mem_offset;
+		val |= u64_encode_bits(hash_offset,
+				       IP_FLTRT_FLAGS_HASH_ADDR_FMASK);
+		val |= u64_encode_bits(hash_size,
+				       IP_FLTRT_FLAGS_HASH_SIZE_FMASK);
+	}
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->table_init;
+
+	/* Fill in all offsets and sizes and the non-hash table address */
+	if (hash_size)
+		payload->hash_rules_addr = cpu_to_le64(hash_addr);
+	payload->flags = cpu_to_le64(val);
+	payload->nhash_rules_addr = cpu_to_le64(addr);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
+
+/* Initialize header space in IPA-local memory */
+void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
+				dma_addr_t addr)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL;
+	enum dma_data_direction direction = DMA_TO_DEVICE;
+	struct ipa_cmd_hw_hdr_init_local *payload;
+	union ipa_cmd_payload *cmd_payload;
+	dma_addr_t payload_addr;
+	u32 flags;
+
+	offset += ipa->mem_offset;
+
+	/* With this command we tell the IPA where in its local memory the
+	 * header tables reside.  The content of the buffer provided is
+	 * also written via DMA into that space.  The IPA hardware owns
+	 * the table, but the AP must initialize it.
+	 */
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->hdr_init_local;
+
+	payload->hdr_table_addr = cpu_to_le64(addr);
+	flags = u32_encode_bits(size, HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
+	flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
+	payload->flags = cpu_to_le32(flags);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
+
+void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
+				u32 mask, bool clear_full)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa_cmd_register_write *payload;
+	union ipa_cmd_payload *cmd_payload;
+	u32 opcode = IPA_CMD_REGISTER_WRITE;
+	dma_addr_t payload_addr;
+	u32 clear_option;
+	u32 options;
+	u16 flags;
+
+	/* pipeline_clear_src_grp is not used */
+	clear_option = clear_full ? pipeline_clear_full : pipeline_clear_hps;
+
+	if (ipa->version != IPA_VERSION_3_5_1) {
+		u16 offset_high;
+		u32 val;
+
+		/* Opcode encodes pipeline clear options */
+		/* SKIP_CLEAR is always 0 (don't skip pipeline clear) */
+		val = u16_encode_bits(clear_option,
+				      REGISTER_WRITE_OPCODE_CLEAR_OPTION_FMASK);
+		opcode |= val;
+
+		/* Extract the high 4 bits from the offset */
+		offset_high = (u16)u32_get_bits(offset, GENMASK(19, 16));
+		offset &= (1 << 16) - 1;
+
+		/* Encode the high bits into the flags field */
+		flags = u16_encode_bits(offset_high,
+				REGISTER_WRITE_FLAGS_OFFSET_HIGH_FMASK);
+		options = 0;	/* reserved */
+
+	} else {
+		flags = 0;	/* SKIP_CLEAR flag is always 0 */
+		options = u16_encode_bits(clear_option,
+					  REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
+	}
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->register_write;
+
+	payload->flags = cpu_to_le16(flags);
+	payload->offset = cpu_to_le16((u16)offset);
+	payload->value = cpu_to_le32(value);
+	payload->value_mask = cpu_to_le32(mask);
+	payload->clear_options = cpu_to_le32(options);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  DMA_NONE, opcode);
+}
+
+/* Skip IP packet processing on the next data transfer on a TX channel */
+static void ipa_cmd_ip_packet_init_add(struct gsi_trans *trans, u8 endpoint_id)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_INIT;
+	enum dma_data_direction direction = DMA_TO_DEVICE;
+	struct ipa_cmd_ip_packet_init *payload;
+	union ipa_cmd_payload *cmd_payload;
+	dma_addr_t payload_addr;
+
+	/* assert(endpoint_id <
+		  field_max(IPA_PACKET_INIT_DEST_ENDPOINT_FMASK)); */
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->ip_packet_init;
+
+	payload->dest_endpoint = u8_encode_bits(endpoint_id,
+					IPA_PACKET_INIT_DEST_ENDPOINT_FMASK);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
+
+/* Use a 32-bit DMA command to zero a block of memory */
+void ipa_cmd_dma_task_32b_addr_add(struct gsi_trans *trans, u16 size,
+				   dma_addr_t addr, bool toward_ipa)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_TASK_32B_ADDR;
+	struct ipa_cmd_hw_dma_task_32b_addr *payload;
+	union ipa_cmd_payload *cmd_payload;
+	enum dma_data_direction direction;
+	dma_addr_t payload_addr;
+	u16 flags;
+
+	/* assert(addr <= U32_MAX); */
+	addr &= GENMASK_ULL(31, 0);
+
+	/* The opcode encodes the number of DMA operations in the high byte */
+	opcode |= u16_encode_bits(1, DMA_TASK_32B_ADDR_OPCODE_COUNT_FMASK);
+
+	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+
+	/* complete: 0 = don't interrupt; eof: 0 = don't assert eot */
+	flags = DMA_TASK_32B_ADDR_FLAGS_FLSH_FMASK;
+	/* lock: 0 = don't lock endpoint; unlock: 0 = don't unlock */
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->dma_task_32b_addr;
+
+	payload->flags = cpu_to_le16(flags);
+	payload->size = cpu_to_le16(size);
+	payload->addr = cpu_to_le32((u32)addr);
+	payload->packet_size = cpu_to_le16(size);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
+
+/* Use a DMA command to read or write a block of IPA-resident memory */
+void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset, u16 size,
+				dma_addr_t addr, bool toward_ipa)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM;
+	struct ipa_cmd_hw_dma_mem_mem *payload;
+	union ipa_cmd_payload *cmd_payload;
+	enum dma_data_direction direction;
+	dma_addr_t payload_addr;
+	u16 flags;
+
+	/* size and offset must fit in 16 bit fields */
+	/* assert(size > 0 && size <= U16_MAX); */
+	/* assert(offset <= U16_MAX && ipa->mem_offset <= U16_MAX - offset); */
+
+	offset += ipa->mem_offset;
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->dma_shared_mem;
+
+	/* payload->clear_after_read was reserved prior to IPA v4.0.  It's
+	 * never needed for current code, so it's 0 regardless of version.
+	 */
+	payload->size = cpu_to_le16(size);
+	payload->local_addr = cpu_to_le16(offset);
+	/* payload->flags:
+	 *   direction:		0 = write to IPA, 1 = read from IPA
+	 * Starting at v4.0 these are reserved; either way, all zero:
+	 *   pipeline clear:	0 = wait for pipeline clear (don't skip)
+	 *   clear_options:	0 = pipeline_clear_hps
+	 * Instead, for v4.0+ these are encoded in the opcode.  But again
+	 * since both values are 0 we won't bother OR'ing them in.
+	 */
+	flags = toward_ipa ? 0 : DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK;
+	payload->flags = cpu_to_le16(flags);
+	payload->system_addr = cpu_to_le64(addr);
+
+	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
+
+static void ipa_cmd_ip_tag_status_add(struct gsi_trans *trans, u64 tag)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_TAG_STATUS;
+	enum dma_data_direction direction = DMA_TO_DEVICE;
+	struct ipa_cmd_ip_packet_tag_status *payload;
+	union ipa_cmd_payload *cmd_payload;
+	dma_addr_t payload_addr;
+
+	/* assert(tag <= field_max(IP_PACKET_TAG_STATUS_TAG_FMASK)); */
+
+	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+	payload = &cmd_payload->ip_packet_tag_status;
+
+	payload->tag = le64_encode_bits(tag, IP_PACKET_TAG_STATUS_TAG_FMASK);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
+
+/* Issue a small command TX data transfer */
+static void ipa_cmd_transfer_add(struct gsi_trans *trans, u16 size)
+{
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	enum dma_data_direction direction = DMA_TO_DEVICE;
+	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
+	union ipa_cmd_payload *payload;
+	dma_addr_t payload_addr;
+
+	/* assert(size <= sizeof(*payload)); */
+
+	/* Just transfer a zero-filled payload structure */
+	payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
+
+	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+			  direction, opcode);
+}
+
+void ipa_cmd_tag_process_add(struct gsi_trans *trans)
+{
+	ipa_cmd_register_write_add(trans, 0, 0, 0, true);
+#if 1
+	/* Reference these functions to avoid a compile error */
+	(void)ipa_cmd_ip_packet_init_add;
+	(void)ipa_cmd_ip_tag_status_add;
+	(void)ipa_cmd_transfer_add;
+#else
+	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct gsi_endpoint *endpoint;
+
+	endpoint = ipa->name_map[IPA_ENDPOINT_AP_LAN_RX];
+	ipa_cmd_ip_packet_init_add(trans, endpoint->endpoint_id);
+
+	ipa_cmd_ip_tag_status_add(trans, 0xcba987654321);
+
+	ipa_cmd_transfer_add(trans, 4);
+#endif
+}
+
+/* Returns the number of commands required for the tag process */
+u32 ipa_cmd_tag_process_count(void)
+{
+	return 4;
+}
+
+static struct ipa_cmd_info *
+ipa_cmd_info_alloc(struct ipa_endpoint *endpoint, u32 tre_count)
+{
+	struct gsi_channel *channel;
+
+	channel = &endpoint->ipa->gsi.channel[endpoint->channel_id];
+
+	return gsi_trans_pool_alloc(&channel->trans_info.info_pool, tre_count);
+}
+
+/* Allocate a transaction for the command TX endpoint */
+struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count)
+{
+	struct ipa_endpoint *endpoint;
+	struct gsi_trans *trans;
+
+	endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
+
+	trans = gsi_channel_trans_alloc(&ipa->gsi, endpoint->channel_id,
+					tre_count, DMA_NONE);
+	if (trans)
+		trans->info = ipa_cmd_info_alloc(endpoint, tre_count);
+
+	return trans;
+}
diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h
new file mode 100644
index 000000000000..4917525b3a47
--- /dev/null
+++ b/drivers/net/ipa/ipa_cmd.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_CMD_H_
+#define _IPA_CMD_H_
+
+#include <linux/types.h>
+#include <linux/dma-direction.h>
+
+struct sk_buff;
+struct scatterlist;
+
+struct ipa;
+struct ipa_mem;
+struct gsi_trans;
+struct gsi_channel;
+
+/**
+ * enum ipa_cmd_opcode:	IPA immediate commands
+ *
+ * All immediate commands are issued using the AP command TX endpoint.
+ * The numeric values here are the opcodes for IPA v3.5.1 hardware.
+ *
+ * IPA_CMD_NONE is a special (invalid) value that's used to indicate
+ * a request is *not* an immediate command.
+ */
+enum ipa_cmd_opcode {
+	IPA_CMD_NONE			= 0,
+	IPA_CMD_IP_V4_FILTER_INIT	= 3,
+	IPA_CMD_IP_V6_FILTER_INIT	= 4,
+	IPA_CMD_IP_V4_ROUTING_INIT	= 7,
+	IPA_CMD_IP_V6_ROUTING_INIT	= 8,
+	IPA_CMD_HDR_INIT_LOCAL		= 9,
+	IPA_CMD_REGISTER_WRITE		= 12,
+	IPA_CMD_IP_PACKET_INIT		= 16,
+	IPA_CMD_DMA_TASK_32B_ADDR	= 17,
+	IPA_CMD_DMA_SHARED_MEM		= 19,
+	IPA_CMD_IP_PACKET_TAG_STATUS	= 20,
+};
+
+/**
+ * struct ipa_cmd_info - information needed for an IPA immediate command
+ *
+ * @opcode:	The command opcode.
+ * @direction:	Direction of data transfer for DMA commands
+ */
+struct ipa_cmd_info {
+	enum ipa_cmd_opcode opcode;
+	enum dma_data_direction direction;
+};
+
+#ifdef IPA_VALIDATE
+
+/**
+ * ipa_cmd_table_valid() - Validate a memory region holding a table
+ * @ipa:	- IPA pointer
+ * @mem:	- IPA memory region descriptor
+ * @route:	- Whether the region holds a route or filter table
+ * @ipv6:	- Whether the table is for IPv6 or IPv4
+ * @hashed:	- Whether the table is hashed or non-hashed
+ *
+ * @Return:	true if region is valid, false otherwise
+ */
+bool ipa_cmd_table_valid(struct ipa *ipa, const struct ipa_mem *mem,
+			    bool route, bool ipv6, bool hashed);
+
+/**
+ * ipa_cmd_data_valid() - Validate command-related configuration
+ * @ipa:	- IPA pointer
+ *
+ * @Return:	true if assumptions required for command are valid
+ */
+bool ipa_cmd_data_valid(struct ipa *ipa);
+
+#else /* !IPA_VALIDATE */
+
+static inline bool ipa_cmd_table_valid(struct ipa *ipa,
+				       const struct ipa_mem *mem, bool route,
+				       bool ipv6, bool hashed)
+{
+	return true;
+}
+
+static inline bool ipa_cmd_data_valid(struct ipa *ipa)
+{
+	return true;
+}
+
+#endif /* !IPA_VALIDATE */
+
+/**
+ * ipa_cmd_pool_init() - initialize command channel pools
+ * @channel:	AP->IPA command TX GSI channel pointer
+ * @tre_max:	Number of pool elements to allocate
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max);
+
+/**
+ * ipa_cmd_pool_exit() - Inverse of ipa_cmd_pool_init()
+ * @channel:	AP->IPA command TX GSI channel pointer
+ */
+void ipa_cmd_pool_exit(struct gsi_channel *channel);
+
+/**
+ * ipa_cmd_table_init_add() - Add table init command to a transaction
+ * @trans:	GSI transaction
+ * @opcode:	IPA immediate command opcode
+ * @size:	Size of non-hashed routing table memory
+ * @offset:	Offset in IPA shared memory of non-hashed routing table memory
+ * @addr:	DMA address of non-hashed table data to write
+ * @hash_size:	Size of hashed routing table memory
+ * @hash_offset: Offset in IPA shared memory of hashed routing table memory
+ * @hash_addr:	DMA address of hashed table data to write
+ *
+ * If hash_size is 0, hash_offset and hash_addr are ignored.
+ */
+void ipa_cmd_table_init_add(struct gsi_trans *trans, enum ipa_cmd_opcode opcode,
+			    u16 size, u32 offset, dma_addr_t addr,
+			    u16 hash_size, u32 hash_offset,
+			    dma_addr_t hash_addr);
+
+/**
+ * ipa_cmd_hdr_init_local_add() - Add a header init command to a transaction
+ * @trans:	GSI transaction
+ * @offset:	Offset of header memory in IPA local space
+ * @size:	Size of header memory
+ * @addr:	DMA address of buffer to be written from
+ *
+ * Defines and fills the location in IPA memory to use for headers.
+ */
+void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
+				dma_addr_t addr);
+
+/**
+ * ipa_cmd_register_write_add() - Add a register write command to a transaction
+ * @trans:	GSI transaction
+ * @offset:	Offset of register to be written
+ * @value:	Value to be written
+ * @mask:	Mask of bits in register to update with bits from value
+ * @clear_full: Pipeline clear option; true means full pipeline clear
+ */
+void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
+				u32 mask, bool clear_full);
+
+/**
+ * ipa_cmd_dma_task_32b_addr_add() - Add a 32-bit DMA command to a transaction
+ * @trans:	GSI transaction
+ * @size:	Number of bytes of memory to be transferred
+ * @addr:	DMA address of buffer to be read into or written from
+ * @toward_ipa:	true means write to IPA memory; false means read
+ */
+void ipa_cmd_dma_task_32b_addr_add(struct gsi_trans *trans, u16 size,
+				   dma_addr_t addr, bool toward_ipa);
+
+/**
+ * ipa_cmd_dma_shared_mem_add() - Add a DMA memory command to a transaction
+ * @trans:	GSI transaction
+ * @offset:	Offset of IPA memory to be read or written
+ * @size:	Number of bytes of memory to be transferred
+ * @addr:	DMA address of buffer to be read into or written from
+ * @toward_ipa:	true means write to IPA memory; false means read
+ */
+void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset,
+				u16 size, dma_addr_t addr, bool toward_ipa);
+
+/**
+ * ipa_cmd_tag_process_add() - Add IPA tag process commands to a transaction
+ * @trans:	GSI transaction
+ */
+void ipa_cmd_tag_process_add(struct gsi_trans *trans);
+
+/**
+ * ipa_cmd_tag_process_count() - Number of commands in a tag process
+ *
+ * @Return:	The number of elements to allocate in a transaction
+ *		to hold tag process commands
+ */
+u32 ipa_cmd_tag_process_count(void);
+
+/**
+ * ipa_cmd_trans_alloc() - Allocate a transaction for the command TX endpoint
+ * @ipa:	IPA pointer
+ * @tre_count:	Number of elements in the transaction
+ *
+ * @Return:	A GSI transaction structure, or a null pointer if all
+ *		available transactions are in use
+ */
+struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count);
+
+#endif /* _IPA_CMD_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 13/17] soc: qcom: ipa: modem and microcontroller
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (11 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 12/17] soc: qcom: ipa: immediate commands Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 14/17] soc: qcom: ipa: AP/modem communications Alex Elder
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

This patch includes code implementing the modem functionality.
There are several communication paths between the AP and modem,
separate from the main data path provided by IPA.  SMP2P provides
primitive messaging and interrupt capability, and QMI allows more
complex out-of-band messaging to occur between entities on the AP
and modem.  (SMP2P and QMI support are added by the next patch.)
Management of these (plus the network device implementing the data
path) is done by code within "ipa_modem.c".

Sort of unrelated, this patch also includes the code supporting the
microcontroller CPU present on the IPA.  The microcontroller can be
used to implement special handling of packets, but at this time we
don't support that.  Still, it is a component that needs to be
initialized, and in the event of a crash we need to do some
synchronization between the AP and the microcontroller.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_modem.c | 383 ++++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_modem.h |  31 +++
 drivers/net/ipa/ipa_uc.c    | 211 ++++++++++++++++++++
 drivers/net/ipa/ipa_uc.h    |  32 +++
 4 files changed, 657 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_modem.c
 create mode 100644 drivers/net/ipa/ipa_modem.h
 create mode 100644 drivers/net/ipa/ipa_uc.c
 create mode 100644 drivers/net/ipa/ipa_uc.h

diff --git a/drivers/net/ipa/ipa_modem.c b/drivers/net/ipa/ipa_modem.c
new file mode 100644
index 000000000000..039afc8c608e
--- /dev/null
+++ b/drivers/net/ipa/ipa_modem.c
@@ -0,0 +1,383 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+#include <linux/errno.h>
+#include <linux/if_arp.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/if_rmnet.h>
+#include <linux/remoteproc/qcom_q6v5_ipa_notify.h>
+
+#include "ipa.h"
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+#include "ipa_table.h"
+#include "ipa_mem.h"
+#include "ipa_modem.h"
+#include "ipa_smp2p.h"
+#include "ipa_qmi.h"
+
+#define IPA_NETDEV_NAME		"rmnet_ipa%d"
+#define IPA_NETDEV_TAILROOM	0	/* for padding by mux layer */
+#define IPA_NETDEV_TIMEOUT	10	/* seconds */
+
+enum ipa_modem_state {
+	IPA_MODEM_STATE_STOPPED	= 0,
+	IPA_MODEM_STATE_STARTING,
+	IPA_MODEM_STATE_RUNNING,
+	IPA_MODEM_STATE_STOPPING,
+};
+
+/** struct ipa_priv - IPA network device private data */
+struct ipa_priv {
+	struct ipa *ipa;
+};
+
+/** ipa_open() - Opens the modem network interface */
+static int ipa_open(struct net_device *netdev)
+{
+	struct ipa_priv *priv = netdev_priv(netdev);
+	struct ipa *ipa = priv->ipa;
+	int ret;
+
+	ret = ipa_endpoint_enable_one(ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]);
+	if (ret)
+		return ret;
+	ret = ipa_endpoint_enable_one(ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]);
+	if (ret)
+		goto err_disable_tx;
+
+	netif_start_queue(netdev);
+
+	return 0;
+
+err_disable_tx:
+	ipa_endpoint_disable_one(ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]);
+
+	return ret;
+}
+
+/** ipa_stop() - Stops the modem network interface. */
+static int ipa_stop(struct net_device *netdev)
+{
+	struct ipa_priv *priv = netdev_priv(netdev);
+	struct ipa *ipa = priv->ipa;
+
+	netif_stop_queue(netdev);
+
+	ipa_endpoint_disable_one(ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]);
+	ipa_endpoint_disable_one(ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]);
+
+	return 0;
+}
+
+/** ipa_start_xmit() - Transmits an skb.
+ * @skb: skb to be transmitted
+ * @netdev: network device
+ *
+ * Return codes:
+ * NETDEV_TX_OK: Success
+ * NETDEV_TX_BUSY: Error while transmitting the skb. Try again later
+ */
+static int ipa_start_xmit(struct sk_buff *skb, struct net_device *netdev)
+{
+	struct net_device_stats *stats = &netdev->stats;
+	struct ipa_priv *priv = netdev_priv(netdev);
+	struct ipa_endpoint *endpoint;
+	struct ipa *ipa = priv->ipa;
+	u32 skb_len = skb->len;
+	int ret;
+
+	if (!skb_len)
+		goto err_drop_skb;
+
+	endpoint = ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX];
+	if (endpoint->data->qmap && skb->protocol != htons(ETH_P_MAP))
+		goto err_drop_skb;
+
+	ret = ipa_endpoint_skb_tx(endpoint, skb);
+	if (ret) {
+		if (ret != -E2BIG)
+			return NETDEV_TX_BUSY;
+		goto err_drop_skb;
+	}
+
+	stats->tx_packets++;
+	stats->tx_bytes += skb_len;
+
+	return NETDEV_TX_OK;
+
+err_drop_skb:
+	dev_kfree_skb_any(skb);
+	stats->tx_dropped++;
+
+	return NETDEV_TX_OK;
+}
+
+void ipa_modem_skb_rx(struct net_device *netdev, struct sk_buff *skb)
+{
+	struct net_device_stats *stats = &netdev->stats;
+
+	if (skb) {
+		skb->dev = netdev;
+		skb->protocol = htons(ETH_P_MAP);
+		stats->rx_packets++;
+		stats->rx_bytes += skb->len;
+
+		(void)netif_receive_skb(skb);
+	} else {
+		stats->rx_dropped++;
+	}
+}
+
+static const struct net_device_ops ipa_modem_ops = {
+	.ndo_open	= ipa_open,
+	.ndo_stop	= ipa_stop,
+	.ndo_start_xmit	= ipa_start_xmit,
+};
+
+/** ipa_modem_netdev_setup() - netdev setup function for the modem */
+static void ipa_modem_netdev_setup(struct net_device *netdev)
+{
+	netdev->netdev_ops = &ipa_modem_ops;
+	ether_setup(netdev);
+	/* No header ops (override value set by ether_setup()) */
+	netdev->header_ops = NULL;
+	netdev->type = ARPHRD_RAWIP;
+	netdev->hard_header_len = 0;
+	netdev->max_mtu = IPA_MTU;
+	netdev->mtu = netdev->max_mtu;
+	netdev->addr_len = 0;
+	netdev->flags &= ~(IFF_BROADCAST | IFF_MULTICAST);
+	/* The endpoint is configured for QMAP */
+	netdev->needed_headroom = sizeof(struct rmnet_map_header);
+	netdev->needed_tailroom = IPA_NETDEV_TAILROOM;
+	netdev->watchdog_timeo = IPA_NETDEV_TIMEOUT * HZ;
+	netdev->hw_features = NETIF_F_SG;
+}
+
+/** ipa_modem_suspend() - suspend callback
+ * @netdev:	Network device
+ *
+ * Suspend the modem's endpoints.
+ */
+void ipa_modem_suspend(struct net_device *netdev)
+{
+	struct ipa_priv *priv = netdev_priv(netdev);
+	struct ipa *ipa = priv->ipa;
+
+	netif_stop_queue(netdev);
+
+	ipa_endpoint_suspend_one(ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]);
+	ipa_endpoint_suspend_one(ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]);
+}
+
+/** ipa_modem_resume() - resume callback
+ * @netdev:	Network device
+ *
+ * Resume the modem's endpoints.
+ */
+void ipa_modem_resume(struct net_device *netdev)
+{
+	struct ipa_priv *priv = netdev_priv(netdev);
+	struct ipa *ipa = priv->ipa;
+
+	ipa_endpoint_resume_one(ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]);
+	ipa_endpoint_resume_one(ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]);
+
+	netif_wake_queue(netdev);
+}
+
+int ipa_modem_start(struct ipa *ipa)
+{
+	enum ipa_modem_state state;
+	struct net_device *netdev;
+	struct ipa_priv *priv;
+	int ret;
+
+	/* Only attempt to start the modem if it's stopped */
+	state = atomic_cmpxchg(&ipa->modem_state, IPA_MODEM_STATE_STOPPED,
+			       IPA_MODEM_STATE_STARTING);
+
+	/* Silently ignore attempts when running, or when changing state */
+	if (state != IPA_MODEM_STATE_STOPPED)
+		return 0;
+
+	netdev = alloc_netdev(sizeof(struct ipa_priv), IPA_NETDEV_NAME,
+			      NET_NAME_UNKNOWN, ipa_modem_netdev_setup);
+	if (!netdev) {
+		ret = -ENOMEM;
+		goto out_set_state;
+	}
+
+	ipa->name_map[IPA_ENDPOINT_AP_MODEM_TX]->netdev = netdev;
+	ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]->netdev = netdev;
+
+	priv = netdev_priv(netdev);
+	priv->ipa = ipa;
+
+	ret = register_netdev(netdev);
+	if (ret)
+		free_netdev(netdev);
+	else
+		ipa->modem_netdev = netdev;
+
+out_set_state:
+	if (ret)
+		atomic_set(&ipa->modem_state, IPA_MODEM_STATE_STOPPED);
+	else
+		atomic_set(&ipa->modem_state, IPA_MODEM_STATE_RUNNING);
+	smp_mb__after_atomic();
+
+	return ret;
+}
+
+int ipa_modem_stop(struct ipa *ipa)
+{
+	struct net_device *netdev = ipa->modem_netdev;
+	enum ipa_modem_state state;
+	int ret;
+
+	/* Only attempt to stop the modem if it's running */
+	state = atomic_cmpxchg(&ipa->modem_state, IPA_MODEM_STATE_RUNNING,
+			       IPA_MODEM_STATE_STOPPING);
+
+	/* Silently ignore attempts when already stopped */
+	if (state == IPA_MODEM_STATE_STOPPED)
+		return 0;
+
+	/* If we're somewhere between stopped and running, we're busy */
+	if (state != IPA_MODEM_STATE_RUNNING)
+		return -EBUSY;
+
+	/* Prevent the modem from triggering a call to ipa_setup() */
+	ipa_smp2p_disable(ipa);
+
+	if (netdev) {
+		/* Stop the queue and disable the endpoints if it's open */
+		ret = ipa_stop(netdev);
+		if (ret)
+			goto out_set_state;
+
+		ipa->modem_netdev = NULL;
+		unregister_netdev(netdev);
+		free_netdev(netdev);
+	} else {
+		ret = 0;
+	}
+
+out_set_state:
+	if (ret)
+		atomic_set(&ipa->modem_state, IPA_MODEM_STATE_RUNNING);
+	else
+		atomic_set(&ipa->modem_state, IPA_MODEM_STATE_STOPPED);
+	smp_mb__after_atomic();
+
+	return ret;
+}
+
+/* Treat a "clean" modem stop the same as a crash */
+static void ipa_modem_crashed(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	int ret;
+
+	ipa_endpoint_modem_pause_all(ipa, true);
+
+	ipa_endpoint_modem_hol_block_clear_all(ipa);
+
+	ipa_table_reset(ipa, true);
+
+	ret = ipa_table_hash_flush(ipa);
+	if (ret)
+		dev_err(dev, "error %d flushing hash caches\n", ret);
+
+	ret = ipa_endpoint_modem_exception_reset_all(ipa);
+	if (ret)
+		dev_err(dev, "error %d resetting exception endpoint\n",
+			ret);
+
+	ipa_endpoint_modem_pause_all(ipa, false);
+
+	ret = ipa_modem_stop(ipa);
+	if (ret)
+		dev_err(dev, "error %d stopping modem\n", ret);
+
+	/* Now prepare for the next modem boot */
+	ret = ipa_mem_zero_modem(ipa);
+	if (ret)
+		dev_err(dev, "error %d zeroing modem memory regions\n", ret);
+}
+
+static void ipa_modem_notify(void *data, enum qcom_rproc_event event)
+{
+	struct ipa *ipa = data;
+	struct device *dev;
+
+	dev = &ipa->pdev->dev;
+	switch (event) {
+	case MODEM_STARTING:
+		dev_info(dev, "received modem starting event\n");
+		ipa_smp2p_notify_reset(ipa);
+		break;
+
+	case MODEM_RUNNING:
+		dev_info(dev, "received modem running event\n");
+		break;
+
+	case MODEM_STOPPING:
+	case MODEM_CRASHED:
+		dev_info(dev, "received modem %s event\n",
+			 event == MODEM_STOPPING ? "stopping"
+						 : "crashed");
+		if (ipa->setup_complete)
+			ipa_modem_crashed(ipa);
+		break;
+
+	case MODEM_OFFLINE:
+		dev_info(dev, "received modem offline event\n");
+		break;
+
+	case MODEM_REMOVING:
+		dev_info(dev, "received modem removing event\n");
+		break;
+
+	default:
+		dev_err(&ipa->pdev->dev, "unrecognized event %u\n", event);
+		break;
+	}
+}
+
+int ipa_modem_init(struct ipa *ipa, bool modem_init)
+{
+	return ipa_smp2p_init(ipa, modem_init);
+}
+
+void ipa_modem_exit(struct ipa *ipa)
+{
+	ipa_smp2p_exit(ipa);
+}
+
+int ipa_modem_config(struct ipa *ipa)
+{
+	return qcom_register_ipa_notify(ipa->modem_rproc, ipa_modem_notify,
+					ipa);
+}
+
+void ipa_modem_deconfig(struct ipa *ipa)
+{
+	qcom_deregister_ipa_notify(ipa->modem_rproc);
+}
+
+int ipa_modem_setup(struct ipa *ipa)
+{
+	return ipa_qmi_setup(ipa);
+}
+
+void ipa_modem_teardown(struct ipa *ipa)
+{
+	ipa_qmi_teardown(ipa);
+}
diff --git a/drivers/net/ipa/ipa_modem.h b/drivers/net/ipa/ipa_modem.h
new file mode 100644
index 000000000000..2de3e216d1d4
--- /dev/null
+++ b/drivers/net/ipa/ipa_modem.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _IPA_MODEM_H_
+#define _IPA_MODEM_H_
+
+struct ipa;
+struct ipa_endpoint;
+struct net_device;
+struct sk_buff;
+
+int ipa_modem_start(struct ipa *ipa);
+int ipa_modem_stop(struct ipa *ipa);
+
+void ipa_modem_skb_rx(struct net_device *netdev, struct sk_buff *skb);
+
+void ipa_modem_suspend(struct net_device *netdev);
+void ipa_modem_resume(struct net_device *netdev);
+
+int ipa_modem_init(struct ipa *ipa, bool modem_init);
+void ipa_modem_exit(struct ipa *ipa);
+
+int ipa_modem_config(struct ipa *ipa);
+void ipa_modem_deconfig(struct ipa *ipa);
+
+int ipa_modem_setup(struct ipa *ipa);
+void ipa_modem_teardown(struct ipa *ipa);
+
+#endif /* _IPA_MODEM_H_ */
diff --git a/drivers/net/ipa/ipa_uc.c b/drivers/net/ipa/ipa_uc.c
new file mode 100644
index 000000000000..a1f8db00d55a
--- /dev/null
+++ b/drivers/net/ipa/ipa_uc.c
@@ -0,0 +1,211 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/io.h>
+#include <linux/delay.h>
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_uc.h"
+
+/**
+ * DOC:  The IPA embedded microcontroller
+ *
+ * The IPA incorporates a microcontroller that is able to do some additional
+ * handling/offloading of network activity.  The current code makes
+ * essentially no use of the microcontroller, but it still requires some
+ * initialization.  It needs to be notified in the event the AP crashes.
+ *
+ * The microcontroller can generate two interrupts to the AP.  One interrupt
+ * is used to indicate that a response to a request from the AP is available.
+ * The other is used to notify the AP of the occurrence of an event.  In
+ * addition, the AP can interrupt the microcontroller by writing a register.
+ *
+ * A 128 byte block of structured memory within the IPA SRAM is used together
+ * with these interrupts to implement the communication interface between the
+ * AP and the IPA microcontroller.  Each side writes data to the shared area
+ * before interrupting its peer, which will read the written data in response
+ * to the interrupt.  Some information found in the shared area is currently
+ * unused.  All remaining space in the shared area is reserved, and must not
+ * be read or written by the AP.
+ */
+/* Supports hardware interface version 0x2000 */
+
+/* Offset relative to the base of the IPA shared address space of the
+ * shared region used for communication with the microcontroller.  The
+ * region is 128 bytes in size, but only the first 40 bytes are used.
+ */
+#define IPA_MEM_UC_OFFSET	0x0000
+
+/* Delay to allow the microcontroller to save state when crashing */
+#define IPA_SEND_DELAY		100	/* microseconds */
+
+/**
+ * struct ipa_uc_mem_area - AP/microcontroller shared memory area
+ * @command:		command code (AP->microcontroller)
+ * @command_param:	low 32 bits of command parameter (AP->microcontroller)
+ * @command_param_hi:	high 32 bits of command parameter (AP->microcontroller)
+ *
+ * @response:		response code (microcontroller->AP)
+ * @response_param:	response parameter (microcontroller->AP)
+ *
+ * @event:		event code (microcontroller->AP)
+ * @event_param:	event parameter (microcontroller->AP)
+ *
+ * @first_error_address: address of first error-source on SNOC
+ * @hw_state:		state of hardware (including error type information)
+ * @warning_counter:	counter of non-fatal hardware errors
+ * @interface_version:	hardware-reported interface version
+ */
+struct ipa_uc_mem_area {
+	u8 command;		/* enum ipa_uc_command */
+	u8 reserved0[3];
+	__le32 command_param;
+	__le32 command_param_hi;
+	u8 response;		/* enum ipa_uc_response */
+	u8 reserved1[3];
+	__le32 response_param;
+	u8 event;		/* enum ipa_uc_event */
+	u8 reserved2[3];
+
+	__le32 event_param;
+	__le32 first_error_address;
+	u8 hw_state;
+	u8 warning_counter;
+	__le16 reserved3;
+	__le16 interface_version;
+	__le16 reserved4;
+};
+
+/** enum ipa_uc_command - commands from the AP to the microcontroller */
+enum ipa_uc_command {
+	IPA_UC_COMMAND_NO_OP		= 0,
+	IPA_UC_COMMAND_UPDATE_FLAGS	= 1,
+	IPA_UC_COMMAND_DEBUG_RUN_TEST	= 2,
+	IPA_UC_COMMAND_DEBUG_GET_INFO	= 3,
+	IPA_UC_COMMAND_ERR_FATAL	= 4,
+	IPA_UC_COMMAND_CLK_GATE		= 5,
+	IPA_UC_COMMAND_CLK_UNGATE	= 6,
+	IPA_UC_COMMAND_MEMCPY		= 7,
+	IPA_UC_COMMAND_RESET_PIPE	= 8,
+	IPA_UC_COMMAND_REG_WRITE	= 9,
+	IPA_UC_COMMAND_GSI_CH_EMPTY	= 10,
+};
+
+/** enum ipa_uc_response - microcontroller response codes */
+enum ipa_uc_response {
+	IPA_UC_RESPONSE_NO_OP		= 0,
+	IPA_UC_RESPONSE_INIT_COMPLETED	= 1,
+	IPA_UC_RESPONSE_CMD_COMPLETED	= 2,
+	IPA_UC_RESPONSE_DEBUG_GET_INFO	= 3,
+};
+
+/** enum ipa_uc_event - common cpu events reported by the microcontroller */
+enum ipa_uc_event {
+	IPA_UC_EVENT_NO_OP     = 0,
+	IPA_UC_EVENT_ERROR     = 1,
+	IPA_UC_EVENT_LOG_INFO  = 2,
+};
+
+static struct ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
+{
+	u32 offset = ipa->mem_offset + ipa->mem[IPA_MEM_UC_SHARED].offset;
+
+	return ipa->mem_virt + offset;
+}
+
+/* Microcontroller event IPA interrupt handler */
+static void ipa_uc_event_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
+{
+	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
+	struct device *dev = &ipa->pdev->dev;
+
+	if (shared->event == IPA_UC_EVENT_ERROR)
+		dev_err(dev, "microcontroller error event\n");
+	else
+		dev_err(dev, "unsupported microcontroller event %hhu\n",
+			shared->event);
+}
+
+/* Microcontroller response IPA interrupt handler */
+static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
+{
+	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
+
+	/* An INIT_COMPLETED response message is sent to the AP by the
+	 * microcontroller when it is operational.  Other than this, the AP
+	 * should only receive responses from the microcontroller when it has
+	 * sent it a request message.
+	 *
+	 * We can drop the clock reference taken in ipa_uc_init() once we
+	 * know the microcontroller has finished its initialization.
+	 */
+	switch (shared->response) {
+	case IPA_UC_RESPONSE_INIT_COMPLETED:
+		ipa->uc_loaded = true;
+		ipa_clock_put(ipa);
+		break;
+	default:
+		dev_warn(&ipa->pdev->dev,
+			 "unsupported microcontroller response %hhu\n",
+			 shared->response);
+		break;
+	}
+}
+
+/* ipa_uc_setup() - Set up the microcontroller */
+void ipa_uc_setup(struct ipa *ipa)
+{
+	/* The microcontroller needs the IPA clock running until it has
+	 * completed its initialization.  It signals this by sending an
+	 * INIT_COMPLETED response message to the AP.  This could occur after
+	 * we have finished doing the rest of the IPA initialization, so we
+	 * need to take an extra "proxy" reference, and hold it until we've
+	 * received that signal.  (This reference is dropped in
+	 * ipa_uc_response_hdlr(), above.)
+	 */
+	ipa_clock_get(ipa);
+
+	ipa->uc_loaded = false;
+	ipa_interrupt_add(ipa->interrupt, IPA_IRQ_UC_0, ipa_uc_event_handler);
+	ipa_interrupt_add(ipa->interrupt, IPA_IRQ_UC_1, ipa_uc_response_hdlr);
+}
+
+/* Inverse of ipa_uc_setup() */
+void ipa_uc_teardown(struct ipa *ipa)
+{
+	ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_UC_1);
+	ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_UC_0);
+	if (!ipa->uc_loaded)
+		ipa_clock_put(ipa);
+}
+
+/* Send a command to the microcontroller */
+static void send_uc_command(struct ipa *ipa, u32 command, u32 command_param)
+{
+	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
+
+	shared->command = command;
+	shared->command_param = cpu_to_le32(command_param);
+	shared->command_param_hi = 0;
+	shared->response = 0;
+	shared->response_param = 0;
+
+	iowrite32(1, ipa->reg_virt + IPA_REG_IRQ_UC_OFFSET);
+}
+
+/* Tell the microcontroller the AP is shutting down */
+void ipa_uc_panic_notifier(struct ipa *ipa)
+{
+	if (!ipa->uc_loaded)
+		return;
+
+	send_uc_command(ipa, IPA_UC_COMMAND_ERR_FATAL, 0);
+
+	/* give uc enough time to save state */
+	udelay(IPA_SEND_DELAY);
+}
diff --git a/drivers/net/ipa/ipa_uc.h b/drivers/net/ipa/ipa_uc.h
new file mode 100644
index 000000000000..e8510899a3f0
--- /dev/null
+++ b/drivers/net/ipa/ipa_uc.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_UC_H_
+#define _IPA_UC_H_
+
+struct ipa;
+
+/**
+ * ipa_uc_setup() - set up the IPA microcontroller subsystem
+ * @ipa:	IPA pointer
+ */
+void ipa_uc_setup(struct ipa *ipa);
+
+/**
+ * ipa_uc_teardown() - inverse of ipa_uc_setup()
+ * @ipa:	IPA pointer
+ */
+void ipa_uc_teardown(struct ipa *ipa);
+
+/**
+ * ipa_uc_panic_notifier()
+ * @ipa:	IPA pointer
+ *
+ * Notifier function called when the system crashes, to inform the
+ * microcontroller of the event.
+ */
+void ipa_uc_panic_notifier(struct ipa *ipa);
+
+#endif /* _IPA_UC_H_ */
-- 
2.20.1
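
The ipa_uc_panic_notifier() declaration above implies the driver hooks into the kernel's panic notification path so the microcontroller can save its state before the AP goes down.  That wiring is not shown in this patch; the following is only a sketch of the usual pattern, and the global back-pointer and function names are placeholders.

#include <linux/kernel.h>
#include <linux/notifier.h>

#include "ipa.h"
#include "ipa_uc.h"

static struct ipa *example_ipa;		/* hypothetical back-pointer */

static int example_panic_notify(struct notifier_block *nb,
				unsigned long action, void *data)
{
	/* Give the microcontroller a chance to save its state */
	ipa_uc_panic_notifier(example_ipa);

	return NOTIFY_DONE;
}

static struct notifier_block example_panic_nb = {
	.notifier_call = example_panic_notify,
};

/* Register the notifier on the kernel's panic notifier chain */
static void example_register_panic_handler(struct ipa *ipa)
{
	example_ipa = ipa;
	atomic_notifier_chain_register(&panic_notifier_list,
				       &example_panic_nb);
}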


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 14/17] soc: qcom: ipa: AP/modem communications
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (12 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 13/17] soc: qcom: ipa: modem and microcontroller Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 15/17] soc: qcom: ipa: support build of IPA code Alex Elder
                   ` (5 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

This patch implements two forms of out-of-band communication between
the AP and modem.

  - QMI is a mechanism that allows clients running on the AP to
    interact with services running on the modem (and vice versa).
    The AP IPA driver uses QMI to communicate with the corresponding
    IPA driver resident on the modem, to agree on parameters used
    with the IPA hardware and to ensure both sides are ready before
    entering operational mode.

  - SMP2P is a more primitive mechanism available for the modem and
    AP to communicate with each other.  It provides a means for either
    the AP or modem to interrupt the other, and furthermore, to provide
    32 bits worth of information.  The IPA driver uses SMP2P to tell
    the modem what the state of the IPA clock was in the event of a
    crash.  This allows the modem to safely access (or avoid accessing)
    the IPA hardware in that situation, for example to retrieve
    information held within it.  (A generic sketch of this style of
    SMP2P signalling follows the file list below.)

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_qmi.c     | 538 +++++++++++++++++++++++++++
 drivers/net/ipa/ipa_qmi.h     |  41 +++
 drivers/net/ipa/ipa_qmi_msg.c | 663 ++++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_qmi_msg.h | 252 +++++++++++++
 drivers/net/ipa/ipa_smp2p.c   | 335 +++++++++++++++++
 drivers/net/ipa/ipa_smp2p.h   |  48 +++
 6 files changed, 1877 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_qmi.c
 create mode 100644 drivers/net/ipa/ipa_qmi.h
 create mode 100644 drivers/net/ipa/ipa_qmi_msg.c
 create mode 100644 drivers/net/ipa/ipa_qmi_msg.h
 create mode 100644 drivers/net/ipa/ipa_smp2p.c
 create mode 100644 drivers/net/ipa/ipa_smp2p.h
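
The SMP2P signalling described above (ipa_smp2p.c appears later in this patch) rides on the Qualcomm SMEM state interface: the driver obtains an outbound state handle and toggles one of the 32 available bits to advertise, for example, whether the IPA clock is enabled.  The sketch below shows that general pattern only; the connection name, bit usage and function names are placeholders, not the actual ipa_smp2p.c implementation.

#include <linux/types.h>
#include <linux/bits.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/soc/qcom/smem_state.h>

static struct qcom_smem_state *example_state;	/* outbound SMP2P state */
static unsigned int example_bit;		/* bit assigned via DT */

static int example_smp2p_init(struct device *dev)
{
	/* "ipa-clock-enabled" is an illustrative connection name */
	example_state = qcom_smem_state_get(dev, "ipa-clock-enabled",
					    &example_bit);

	return PTR_ERR_OR_ZERO(example_state);
}

/* Advertise the IPA clock state to the modem over SMP2P */
static void example_smp2p_clock_update(bool enabled)
{
	u32 mask = BIT(example_bit);

	qcom_smem_state_update_bits(example_state, mask, enabled ? mask : 0);
}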

diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
new file mode 100644
index 000000000000..5090f0f923ad
--- /dev/null
+++ b/drivers/net/ipa/ipa_qmi.c
@@ -0,0 +1,538 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2013-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/string.h>
+#include <linux/slab.h>
+#include <linux/qrtr.h>
+#include <linux/soc/qcom/qmi.h>
+
+#include "ipa.h"
+#include "ipa_endpoint.h"
+#include "ipa_mem.h"
+#include "ipa_table.h"
+#include "ipa_modem.h"
+#include "ipa_qmi_msg.h"
+
+/**
+ * DOC: AP/Modem QMI Handshake
+ *
+ * The AP and modem perform a "handshake" at initialization time to ensure
+ * both sides know when everything is ready to begin operating.  The AP
+ * driver (this code) uses two QMI handles (endpoints) for this: a client
+ * using a service on the modem, and a server to service modem requests (and
+ * to supply an indication message from the AP).  Once the handshake is
+ * complete, the AP and modem may begin IPA operation.  This occurs
+ * only when the AP IPA driver, modem IPA driver, and IPA microcontroller
+ * are ready.
+ *
+ * The QMI service on the modem expects to receive an INIT_DRIVER request from
+ * the AP, which contains parameters used by the modem during initialization.
+ * The AP sends this request as soon as it knows the modem-side service
+ * is available.  The modem responds to this request, and if this response
+ * contains a success result, the AP knows the modem IPA driver is ready.
+ *
+ * The modem is responsible for loading firmware on the IPA microcontroller.
+ * This occurs only during the initial modem boot.  The modem sends a
+ * separate DRIVER_INIT_COMPLETE request to the AP to report that the
+ * microcontroller is ready.  The AP may assume the microcontroller is
+ * ready and remain so (even if the modem reboots) once it has received
+ * and responded to this request.
+ *
+ * There is one final exchange involved in the handshake.  It is required
+ * on the initial modem boot, and optional (though in practice it does occur) on
+ * subsequent boots.  The modem expects to receive a final INIT_COMPLETE
+ * indication message from the AP when it is about to begin its normal
+ * operation.  The AP will only send this message after it has received
+ * and responded to an INDICATION_REGISTER request from the modem.
+ *
+ * So in summary:
+ * - Whenever the AP learns the modem has booted and its IPA QMI service
+ *   is available, it sends an INIT_DRIVER request to the modem.  The
+ *   modem supplies a success response when it is ready to operate.
+ * - On the initial boot, the modem sets up the IPA microcontroller, and
+ *   sends a DRIVER_INIT_COMPLETE request to the AP when this is done.
+ * - When the modem is ready to receive an INIT_COMPLETE indication from
+ *   the AP, it sends an INDICATION_REGISTER request to the AP.
+ * - On the initial modem boot, everything is ready when:
+ *	- AP has received a success response from its INIT_DRIVER request
+ *	- AP has responded to a DRIVER_INIT_COMPLETE request
+ *	- AP has responded to an INDICATION_REGISTER request from the modem
+ *	- AP has sent an INIT_COMPLETE indication to the modem
+ * - On subsequent modem boots, everything is ready when:
+ *	- AP has received a success response from its INIT_DRIVER request
+ *	- AP has responded to a DRIVER_INIT_COMPLETE request
+ * - The INDICATION_REGISTER request and INIT_COMPLETE indication are
+ *   optional for non-initial modem boots, and have no bearing on the
+ *   determination of when things are "ready"
+ */
+
+#define IPA_HOST_SERVICE_SVC_ID		0x31
+#define IPA_HOST_SVC_VERS		1
+#define IPA_HOST_SERVICE_INS_ID		1
+
+#define IPA_MODEM_SERVICE_SVC_ID	0x31
+#define IPA_MODEM_SERVICE_INS_ID	2
+#define IPA_MODEM_SVC_VERS		1
+
+#define QMI_INIT_DRIVER_TIMEOUT		60000	/* A minute in milliseconds */
+
+/* Send an INIT_COMPLETE indication message to the modem */
+static void ipa_server_init_complete(struct ipa_qmi *ipa_qmi)
+{
+	struct ipa *ipa = container_of(ipa_qmi, struct ipa, qmi);
+	struct qmi_handle *qmi = &ipa_qmi->server_handle;
+	struct sockaddr_qrtr *sq = &ipa_qmi->modem_sq;
+	struct ipa_init_complete_ind ind = { };
+	int ret;
+
+	ind.status.result = QMI_RESULT_SUCCESS_V01;
+	ind.status.error = QMI_ERR_NONE_V01;
+
+	ret = qmi_send_indication(qmi, sq, IPA_QMI_INIT_COMPLETE,
+				   IPA_QMI_INIT_COMPLETE_IND_SZ,
+				   ipa_init_complete_ind_ei, &ind);
+	if (ret)
+		dev_err(&ipa->pdev->dev,
+			"error %d sending init complete indication\n", ret);
+	else
+		ipa_qmi->indication_sent = true;
+}
+
+/* If requested (and not already sent) send the INIT_COMPLETE indication */
+static void ipa_qmi_indication(struct ipa_qmi *ipa_qmi)
+{
+	if (!ipa_qmi->indication_requested)
+		return;
+
+	if (ipa_qmi->indication_sent)
+		return;
+
+	ipa_server_init_complete(ipa_qmi);
+}
+
+/* Determine whether everything is ready to start normal operation.
+ * We know everything (else) is ready when we know the IPA driver on
+ * the modem is ready, and the microcontroller is ready.
+ *
+ * When the modem boots (or reboots), the handshake sequence starts
+ * with the AP sending the modem an INIT_DRIVER request.  Within
+ * that request, the uc_loaded flag will be zero (false) for an
+ * initial boot, non-zero (true) for a subsequent (SSR) boot.
+ */
+static void ipa_qmi_ready(struct ipa_qmi *ipa_qmi)
+{
+	struct ipa *ipa = container_of(ipa_qmi, struct ipa, qmi);
+	int ret;
+
+	/* We aren't ready until the modem and microcontroller are */
+	if (!ipa_qmi->modem_ready || !ipa_qmi->uc_ready)
+		return;
+
+	/* Send the indication message if it was requested */
+	ipa_qmi_indication(ipa_qmi);
+
+	/* The initial boot requires us to send the indication. */
+	if (ipa_qmi->initial_boot) {
+		if (!ipa_qmi->indication_sent)
+			return;
+
+		/* The initial modem boot completed successfully */
+		ipa_qmi->initial_boot = false;
+	}
+
+	/* We're ready.  Start up normal operation */
+	ipa = container_of(ipa_qmi, struct ipa, qmi);
+	ret = ipa_modem_start(ipa);
+	if (ret)
+		dev_err(&ipa->pdev->dev, "error %d starting modem\n", ret);
+}
+
+/* All QMI clients from the modem node are gone (modem shut down or crashed). */
+static void ipa_server_bye(struct qmi_handle *qmi, unsigned int node)
+{
+	struct ipa_qmi *ipa_qmi;
+
+	ipa_qmi = container_of(qmi, struct ipa_qmi, server_handle);
+
+	/* The modem client and server go away at the same time */
+	memset(&ipa_qmi->modem_sq, 0, sizeof(ipa_qmi->modem_sq));
+
+	/* initial_boot doesn't change when modem reboots */
+	/* uc_ready doesn't change when modem reboots */
+	ipa_qmi->modem_ready = false;
+	ipa_qmi->indication_requested = false;
+	ipa_qmi->indication_sent = false;
+}
+
+static struct qmi_ops ipa_server_ops = {
+	.bye		= ipa_server_bye,
+};
+
+/* Callback function to handle an INDICATION_REGISTER request message from the
+ * modem.  This informs the AP that the modem is now ready to receive the
+ * INIT_COMPLETE indication message.
+ */
+static void ipa_server_indication_register(struct qmi_handle *qmi,
+					   struct sockaddr_qrtr *sq,
+					   struct qmi_txn *txn,
+					   const void *decoded)
+{
+	struct ipa_indication_register_rsp rsp = { };
+	struct ipa_qmi *ipa_qmi;
+	struct ipa *ipa;
+	int ret;
+
+	ipa_qmi = container_of(qmi, struct ipa_qmi, server_handle);
+	ipa = container_of(ipa_qmi, struct ipa, qmi);
+
+	rsp.rsp.result = QMI_RESULT_SUCCESS_V01;
+	rsp.rsp.error = QMI_ERR_NONE_V01;
+
+	ret = qmi_send_response(qmi, sq, txn, IPA_QMI_INDICATION_REGISTER,
+				IPA_QMI_INDICATION_REGISTER_RSP_SZ,
+				ipa_indication_register_rsp_ei, &rsp);
+	if (!ret) {
+		ipa_qmi->indication_requested = true;
+		ipa_qmi_ready(ipa_qmi);		/* We might be ready now */
+	} else {
+		dev_err(&ipa->pdev->dev,
+			"error %d sending register indication response\n", ret);
+	}
+}
+
+/* Respond to a DRIVER_INIT_COMPLETE request message from the modem. */
+static void ipa_server_driver_init_complete(struct qmi_handle *qmi,
+					    struct sockaddr_qrtr *sq,
+					    struct qmi_txn *txn,
+					    const void *decoded)
+{
+	struct ipa_driver_init_complete_rsp rsp = { };
+	struct ipa_qmi *ipa_qmi;
+	struct ipa *ipa;
+	int ret;
+
+	ipa_qmi = container_of(qmi, struct ipa_qmi, server_handle);
+	ipa = container_of(ipa_qmi, struct ipa, qmi);
+
+	rsp.rsp.result = QMI_RESULT_SUCCESS_V01;
+	rsp.rsp.error = QMI_ERR_NONE_V01;
+
+	ret = qmi_send_response(qmi, sq, txn, IPA_QMI_DRIVER_INIT_COMPLETE,
+				IPA_QMI_DRIVER_INIT_COMPLETE_RSP_SZ,
+				ipa_driver_init_complete_rsp_ei, &rsp);
+	if (!ret) {
+		ipa_qmi->uc_ready = true;
+		ipa_qmi_ready(ipa_qmi);		/* We might be ready now */
+	} else {
+		dev_err(&ipa->pdev->dev,
+			"error %d sending init complete response\n", ret);
+	}
+}
+
+/* The server handles two request message types sent by the modem. */
+static struct qmi_msg_handler ipa_server_msg_handlers[] = {
+	{
+		.type		= QMI_REQUEST,
+		.msg_id		= IPA_QMI_INDICATION_REGISTER,
+		.ei		= ipa_indication_register_req_ei,
+		.decoded_size	= IPA_QMI_INDICATION_REGISTER_REQ_SZ,
+		.fn		= ipa_server_indication_register,
+	},
+	{
+		.type		= QMI_REQUEST,
+		.msg_id		= IPA_QMI_DRIVER_INIT_COMPLETE,
+		.ei		= ipa_driver_init_complete_req_ei,
+		.decoded_size	= IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ,
+		.fn		= ipa_server_driver_init_complete,
+	},
+};
+
+/* Handle an INIT_DRIVER response message from the modem. */
+static void ipa_client_init_driver(struct qmi_handle *qmi,
+				   struct sockaddr_qrtr *sq,
+				   struct qmi_txn *txn, const void *decoded)
+{
+	txn->result = 0;	/* IPA_QMI_INIT_DRIVER request was successful */
+	complete(&txn->completion);
+}
+
+/* The client handles one response message type sent by the modem. */
+static struct qmi_msg_handler ipa_client_msg_handlers[] = {
+	{
+		.type		= QMI_RESPONSE,
+		.msg_id		= IPA_QMI_INIT_DRIVER,
+		.ei		= ipa_init_modem_driver_rsp_ei,
+		.decoded_size	= IPA_QMI_INIT_DRIVER_RSP_SZ,
+		.fn		= ipa_client_init_driver,
+	},
+};
+
+/* Return a pointer to an init modem driver request structure, which contains
+ * configuration parameters for the modem.  The modem may be started multiple
+ * times, but generally these parameters don't change so we can reuse the
+ * request structure once it's initialized.  The only exception is the
+ * skip_uc_load field, which will be set only after the microcontroller has
+ * reported it has completed its initialization.
+ */
+static const struct ipa_init_modem_driver_req *
+init_modem_driver_req(struct ipa_qmi *ipa_qmi)
+{
+	struct ipa *ipa = container_of(ipa_qmi, struct ipa, qmi);
+	static struct ipa_init_modem_driver_req req;
+	const struct ipa_mem *mem;
+
+	/* The microcontroller is initialized on the first boot */
+	req.skip_uc_load_valid = 1;
+	req.skip_uc_load = ipa->uc_loaded ? 1 : 0;
+
+	/* We only have to initialize most of it once */
+	if (req.platform_type_valid)
+		return &req;
+
+	req.platform_type_valid = 1;
+	req.platform_type = IPA_QMI_PLATFORM_TYPE_MSM_ANDROID;
+
+	mem = &ipa->mem[IPA_MEM_MODEM_HEADER];
+	if (mem->size) {
+		req.hdr_tbl_info_valid = 1;
+		req.hdr_tbl_info.start = ipa->mem_offset + mem->offset;
+		req.hdr_tbl_info.end = req.hdr_tbl_info.start + mem->size - 1;
+	}
+
+	mem = &ipa->mem[IPA_MEM_V4_ROUTE];
+	req.v4_route_tbl_info_valid = 1;
+	req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset;
+	req.v4_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE;
+
+	mem = &ipa->mem[IPA_MEM_V6_ROUTE];
+	req.v6_route_tbl_info_valid = 1;
+	req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset;
+	req.v6_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE;
+
+	mem = &ipa->mem[IPA_MEM_V4_FILTER];
+	req.v4_filter_tbl_start_valid = 1;
+	req.v4_filter_tbl_start = ipa->mem_offset + mem->offset;
+
+	mem = &ipa->mem[IPA_MEM_V6_FILTER];
+	req.v6_filter_tbl_start_valid = 1;
+	req.v6_filter_tbl_start = ipa->mem_offset + mem->offset;
+
+	mem = &ipa->mem[IPA_MEM_MODEM];
+	if (mem->size) {
+		req.modem_mem_info_valid = 1;
+		req.modem_mem_info.start = ipa->mem_offset + mem->offset;
+		req.modem_mem_info.size = mem->size;
+	}
+
+	req.ctrl_comm_dest_end_pt_valid = 1;
+	req.ctrl_comm_dest_end_pt =
+		ipa->name_map[IPA_ENDPOINT_AP_MODEM_RX]->endpoint_id;
+
+	/* skip_uc_load_valid and skip_uc_load are set above */
+
+	mem = &ipa->mem[IPA_MEM_MODEM_PROC_CTX];
+	if (mem->size) {
+		req.hdr_proc_ctx_tbl_info_valid = 1;
+		req.hdr_proc_ctx_tbl_info.start =
+			ipa->mem_offset + mem->offset;
+		req.hdr_proc_ctx_tbl_info.end =
+			req.hdr_proc_ctx_tbl_info.start + mem->size - 1;
+	}
+
+	/* Nothing to report for the compression table (zip_tbl_info) */
+
+	mem = &ipa->mem[IPA_MEM_V4_ROUTE_HASHED];
+	if (mem->size) {
+		req.v4_hash_route_tbl_info_valid = 1;
+		req.v4_hash_route_tbl_info.start =
+				ipa->mem_offset + mem->offset;
+		req.v4_hash_route_tbl_info.count =
+				mem->size / IPA_TABLE_ENTRY_SIZE;
+	}
+
+	mem = &ipa->mem[IPA_MEM_V6_ROUTE_HASHED];
+	if (mem->size) {
+		req.v6_hash_route_tbl_info_valid = 1;
+		req.v6_hash_route_tbl_info.start =
+			ipa->mem_offset + mem->offset;
+		req.v6_hash_route_tbl_info.count =
+			mem->size / IPA_TABLE_ENTRY_SIZE;
+	}
+
+	mem = &ipa->mem[IPA_MEM_V4_FILTER_HASHED];
+	if (mem->size) {
+		req.v4_hash_filter_tbl_start_valid = 1;
+		req.v4_hash_filter_tbl_start = ipa->mem_offset + mem->offset;
+	}
+
+	mem = &ipa->mem[IPA_MEM_V6_FILTER_HASHED];
+	if (mem->size) {
+		req.v6_hash_filter_tbl_start_valid = 1;
+		req.v6_hash_filter_tbl_start = ipa->mem_offset + mem->offset;
+	}
+
+	/* None of the stats fields are valid (IPA v4.0 and above) */
+
+	if (ipa->version != IPA_VERSION_3_5_1) {
+		mem = &ipa->mem[IPA_MEM_STATS_QUOTA];
+		if (mem->size) {
+			req.hw_stats_quota_base_addr_valid = 1;
+			req.hw_stats_quota_base_addr =
+				ipa->mem_offset + mem->offset;
+			req.hw_stats_quota_size_valid = 1;
+			req.hw_stats_quota_size = ipa->mem_offset + mem->size;
+		}
+
+		mem = &ipa->mem[IPA_MEM_STATS_DROP];
+		if (mem->size) {
+			req.hw_stats_drop_base_addr_valid = 1;
+			req.hw_stats_drop_base_addr =
+				ipa->mem_offset + mem->offset;
+			req.hw_stats_drop_size_valid = 1;
+			req.hw_stats_drop_size = ipa->mem_offset + mem->size;
+		}
+	}
+
+	return &req;
+}
+
+/* Send an INIT_DRIVER request to the modem, and wait for it to complete. */
+static void ipa_client_init_driver_work(struct work_struct *work)
+{
+	unsigned long timeout = msecs_to_jiffies(QMI_INIT_DRIVER_TIMEOUT);
+	const struct ipa_init_modem_driver_req *req;
+	struct ipa_qmi *ipa_qmi;
+	struct qmi_handle *qmi;
+	struct qmi_txn txn;
+	struct device *dev;
+	struct ipa *ipa;
+	int ret;
+
+	ipa_qmi = container_of(work, struct ipa_qmi, init_driver_work);
+	qmi = &ipa_qmi->client_handle;
+
+	ipa = container_of(ipa_qmi, struct ipa, qmi);
+	dev = &ipa->pdev->dev;
+
+	ret = qmi_txn_init(qmi, &txn, NULL, NULL);
+	if (ret < 0) {
+		dev_err(dev, "error %d preparing init driver request\n", ret);
+		return;
+	}
+
+	/* Send the request, and if successful wait for its response */
+	req = init_modem_driver_req(ipa_qmi);
+	ret = qmi_send_request(qmi, &ipa_qmi->modem_sq, &txn,
+			       IPA_QMI_INIT_DRIVER, IPA_QMI_INIT_DRIVER_REQ_SZ,
+			       ipa_init_modem_driver_req_ei, req);
+	if (ret)
+		dev_err(dev, "error %d sending init driver request\n", ret);
+	else if ((ret = qmi_txn_wait(&txn, timeout)))
+		dev_err(dev, "error %d awaiting init driver response\n", ret);
+
+	if (!ret) {
+		ipa_qmi->modem_ready = true;
+		ipa_qmi_ready(ipa_qmi);		/* We might be ready now */
+	} else {
+		/* If any error occurs we need to cancel the transaction */
+		qmi_txn_cancel(&txn);
+	}
+}
+
+/* The modem server is now available.  We will send an INIT_DRIVER request
+ * to the modem, but can't wait for it to complete in this callback thread.
+ * Schedule a worker on the global workqueue to do that for us.
+ */
+static int
+ipa_client_new_server(struct qmi_handle *qmi, struct qmi_service *svc)
+{
+	struct ipa_qmi *ipa_qmi;
+
+	ipa_qmi = container_of(qmi, struct ipa_qmi, client_handle);
+
+	ipa_qmi->modem_sq.sq_family = AF_QIPCRTR;
+	ipa_qmi->modem_sq.sq_node = svc->node;
+	ipa_qmi->modem_sq.sq_port = svc->port;
+
+	schedule_work(&ipa_qmi->init_driver_work);
+
+	return 0;
+}
+
+static struct qmi_ops ipa_client_ops = {
+	.new_server	= ipa_client_new_server,
+};
+
+/* This is called by ipa_setup().  We can be informed via remoteproc that
+ * the modem has shut down, in which case this function will be called
+ * again to prepare for the modem coming back up.
+ */
+int ipa_qmi_setup(struct ipa *ipa)
+{
+	struct ipa_qmi *ipa_qmi = &ipa->qmi;
+	int ret;
+
+	ipa_qmi->initial_boot = true;
+
+	/* The server handle is used to handle the DRIVER_INIT_COMPLETE
+	 * request on the first modem boot.  It also receives the
+	 * INDICATION_REGISTER request on the first boot and (optionally)
+	 * subsequent boots.  The INIT_COMPLETE indication message is
+	 * sent over the server handle if requested.
+	 */
+	ret = qmi_handle_init(&ipa_qmi->server_handle,
+			      IPA_QMI_SERVER_MAX_RCV_SZ, &ipa_server_ops,
+			      ipa_server_msg_handlers);
+	if (ret)
+		return ret;
+
+	ret = qmi_add_server(&ipa_qmi->server_handle, IPA_HOST_SERVICE_SVC_ID,
+			     IPA_HOST_SVC_VERS, IPA_HOST_SERVICE_INS_ID);
+	if (ret)
+		goto err_server_handle_release;
+
+	/* The client handle is only used for sending an INIT_DRIVER request
+	 * to the modem, and receiving its response message.
+	 */
+	ret = qmi_handle_init(&ipa_qmi->client_handle,
+			      IPA_QMI_CLIENT_MAX_RCV_SZ, &ipa_client_ops,
+			      ipa_client_msg_handlers);
+	if (ret)
+		goto err_server_handle_release;
+
+	/* We need this ready before the service lookup is added */
+	INIT_WORK(&ipa_qmi->init_driver_work, ipa_client_init_driver_work);
+
+	ret = qmi_add_lookup(&ipa_qmi->client_handle, IPA_MODEM_SERVICE_SVC_ID,
+			     IPA_MODEM_SVC_VERS, IPA_MODEM_SERVICE_INS_ID);
+	if (ret)
+		goto err_client_handle_release;
+
+	return 0;
+
+err_client_handle_release:
+	/* Releasing the handle also removes registered lookups */
+	qmi_handle_release(&ipa_qmi->client_handle);
+	memset(&ipa_qmi->client_handle, 0, sizeof(ipa_qmi->client_handle));
+err_server_handle_release:
+	/* Releasing the handle also removes registered services */
+	qmi_handle_release(&ipa_qmi->server_handle);
+	memset(&ipa_qmi->server_handle, 0, sizeof(ipa_qmi->server_handle));
+
+	return ret;
+}
+
+void ipa_qmi_teardown(struct ipa *ipa)
+{
+	cancel_work_sync(&ipa->qmi.init_driver_work);
+
+	qmi_handle_release(&ipa->qmi.client_handle);
+	memset(&ipa->qmi.client_handle, 0, sizeof(ipa->qmi.client_handle));
+
+	qmi_handle_release(&ipa->qmi.server_handle);
+	memset(&ipa->qmi.server_handle, 0, sizeof(ipa->qmi.server_handle));
+}
diff --git a/drivers/net/ipa/ipa_qmi.h b/drivers/net/ipa/ipa_qmi.h
new file mode 100644
index 000000000000..3993687593d0
--- /dev/null
+++ b/drivers/net/ipa/ipa_qmi.h
@@ -0,0 +1,41 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _IPA_QMI_H_
+#define _IPA_QMI_H_
+
+#include <linux/types.h>
+#include <linux/soc/qcom/qmi.h>
+
+struct ipa;
+
+/**
+ * struct ipa_qmi - QMI state associated with an IPA
+ * @client_handle:	Used to send QMI requests to the modem
+ * @server_handle:	Used to handle QMI requests from the modem
+ * @modem_sq:		QRTR socket address of the modem QMI server
+ * @init_driver_work:	Work structure used to send the INIT_DRIVER request
+ * @initial_boot:	True until the initial modem boot has completed
+ * @uc_ready:		True once a DRIVER_INIT_COMPLETE request is received
+ * @modem_ready:	True once the INIT_DRIVER response is received
+ * @indication_requested: True when an INDICATION_REGISTER request is received
+ * @indication_sent:	True once the INIT_COMPLETE indication has been sent
+ */
+struct ipa_qmi {
+	struct qmi_handle client_handle;
+	struct qmi_handle server_handle;
+
+	/* Information used for the client handle */
+	struct sockaddr_qrtr modem_sq;
+	struct work_struct init_driver_work;
+
+	/* Flags used in negotiating readiness */
+	bool initial_boot;
+	bool uc_ready;
+	bool modem_ready;
+	bool indication_requested;
+	bool indication_sent;
+};
+
+int ipa_qmi_setup(struct ipa *ipa);
+void ipa_qmi_teardown(struct ipa *ipa);
+
+#endif /* !_IPA_QMI_H_ */
diff --git a/drivers/net/ipa/ipa_qmi_msg.c b/drivers/net/ipa/ipa_qmi_msg.c
new file mode 100644
index 000000000000..03a1d0e55964
--- /dev/null
+++ b/drivers/net/ipa/ipa_qmi_msg.c
@@ -0,0 +1,663 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#include <linux/stddef.h>
+#include <linux/soc/qcom/qmi.h>
+
+#include "ipa_qmi_msg.h"
+
+/* QMI message structure definition for struct ipa_indication_register_req */
+struct qmi_elem_info ipa_indication_register_req_ei[] = {
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     master_driver_init_complete_valid),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   master_driver_init_complete_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     master_driver_init_complete),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   master_driver_init_complete),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     data_usage_quota_reached_valid),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   data_usage_quota_reached_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     data_usage_quota_reached),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   data_usage_quota_reached),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     ipa_mhi_ready_ind_valid),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   ipa_mhi_ready_ind_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_req,
+				     ipa_mhi_ready_ind),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_indication_register_req,
+					   ipa_mhi_ready_ind),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_indication_register_rsp */
+struct qmi_elem_info ipa_indication_register_rsp_ei[] = {
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_indication_register_rsp,
+				     rsp),
+		.tlv_type	= 0x02,
+		.offset		= offsetof(struct ipa_indication_register_rsp,
+					   rsp),
+		.ei_array	= qmi_response_type_v01_ei,
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_driver_init_complete_req */
+struct qmi_elem_info ipa_driver_init_complete_req_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_driver_init_complete_req,
+				     status),
+		.tlv_type	= 0x01,
+		.offset		= offsetof(struct ipa_driver_init_complete_req,
+					   status),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_driver_init_complete_rsp */
+struct qmi_elem_info ipa_driver_init_complete_rsp_ei[] = {
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_driver_init_complete_rsp,
+				     rsp),
+		.tlv_type	= 0x02,
+		.offset		= offsetof(struct ipa_driver_init_complete_rsp,
+					   rsp),
+		.ei_array	= qmi_response_type_v01_ei,
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_init_complete_ind */
+struct qmi_elem_info ipa_init_complete_ind_ei[] = {
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_complete_ind,
+				     status),
+		.tlv_type	= 0x02,
+		.offset		= offsetof(struct ipa_init_complete_ind,
+					   status),
+		.ei_array	= qmi_response_type_v01_ei,
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_mem_bounds */
+struct qmi_elem_info ipa_mem_bounds_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_bounds, start),
+		.offset		= offsetof(struct ipa_mem_bounds, start),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_bounds, end),
+		.offset		= offsetof(struct ipa_mem_bounds, end),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_mem_array */
+struct qmi_elem_info ipa_mem_array_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_array, start),
+		.offset		= offsetof(struct ipa_mem_array, start),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_array, count),
+		.offset		= offsetof(struct ipa_mem_array, count),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_mem_range */
+struct qmi_elem_info ipa_mem_range_ei[] = {
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_range, start),
+		.offset		= offsetof(struct ipa_mem_range, start),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_mem_range, size),
+		.offset		= offsetof(struct ipa_mem_range, size),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_init_modem_driver_req */
+struct qmi_elem_info ipa_init_modem_driver_req_ei[] = {
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     platform_type_valid),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   platform_type_valid),
+	},
+	{
+		.data_type	= QMI_SIGNED_4_BYTE_ENUM,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     platform_type),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   platform_type),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hdr_tbl_info_valid),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hdr_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hdr_tbl_info),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hdr_tbl_info),
+		.ei_array	= ipa_mem_bounds_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_route_tbl_info_valid),
+		.tlv_type	= 0x12,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_route_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_route_tbl_info),
+		.tlv_type	= 0x12,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_route_tbl_info),
+		.ei_array	= ipa_mem_array_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_route_tbl_info_valid),
+		.tlv_type	= 0x13,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v6_route_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_route_tbl_info),
+		.tlv_type	= 0x13,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v6_route_tbl_info),
+		.ei_array	= ipa_mem_array_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_filter_tbl_start_valid),
+		.tlv_type	= 0x14,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_filter_tbl_start_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_filter_tbl_start),
+		.tlv_type	= 0x14,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_filter_tbl_start),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_filter_tbl_start_valid),
+		.tlv_type	= 0x15,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v6_filter_tbl_start_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_filter_tbl_start),
+		.tlv_type	= 0x15,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v6_filter_tbl_start),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     modem_mem_info_valid),
+		.tlv_type	= 0x16,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   modem_mem_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     modem_mem_info),
+		.tlv_type	= 0x16,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   modem_mem_info),
+		.ei_array	= ipa_mem_range_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     ctrl_comm_dest_end_pt_valid),
+		.tlv_type	= 0x17,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   ctrl_comm_dest_end_pt_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     ctrl_comm_dest_end_pt),
+		.tlv_type	= 0x17,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   ctrl_comm_dest_end_pt),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     skip_uc_load_valid),
+		.tlv_type	= 0x18,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   skip_uc_load_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     skip_uc_load),
+		.tlv_type	= 0x18,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   skip_uc_load),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hdr_proc_ctx_tbl_info_valid),
+		.tlv_type	= 0x19,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hdr_proc_ctx_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hdr_proc_ctx_tbl_info),
+		.tlv_type	= 0x19,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hdr_proc_ctx_tbl_info),
+		.ei_array	= ipa_mem_bounds_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     zip_tbl_info_valid),
+		.tlv_type	= 0x1a,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   zip_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     zip_tbl_info),
+		.tlv_type	= 0x1a,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   zip_tbl_info),
+		.ei_array	= ipa_mem_bounds_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_hash_route_tbl_info_valid),
+		.tlv_type	= 0x1b,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_hash_route_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_hash_route_tbl_info),
+		.tlv_type	= 0x1b,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_hash_route_tbl_info),
+		.ei_array	= ipa_mem_array_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_hash_route_tbl_info_valid),
+		.tlv_type	= 0x1c,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v6_hash_route_tbl_info_valid),
+	},
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_hash_route_tbl_info),
+		.tlv_type	= 0x1c,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v6_hash_route_tbl_info),
+		.ei_array	= ipa_mem_array_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_hash_filter_tbl_start_valid),
+		.tlv_type	= 0x1d,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_hash_filter_tbl_start_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v4_hash_filter_tbl_start),
+		.tlv_type	= 0x1d,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v4_hash_filter_tbl_start),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_hash_filter_tbl_start_valid),
+		.tlv_type	= 0x1e,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v6_hash_filter_tbl_start_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     v6_hash_filter_tbl_start),
+		.tlv_type	= 0x1e,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   v6_hash_filter_tbl_start),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hw_stats_quota_base_addr_valid),
+		.tlv_type	= 0x1f,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hw_stats_quota_base_addr_valid),
+	},
+	{
+		.data_type	= QMI_SIGNED_4_BYTE_ENUM,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hw_stats_quota_base_addr),
+		.tlv_type	= 0x1f,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hw_stats_quota_base_addr),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hw_stats_quota_size_valid),
+		.tlv_type	= 0x1f,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hw_stats_quota_size_valid),
+	},
+	{
+		.data_type	= QMI_SIGNED_4_BYTE_ENUM,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hw_stats_quota_size),
+		.tlv_type	= 0x1f,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hw_stats_quota_size),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hw_stats_drop_size_valid),
+		.tlv_type	= 0x1f,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hw_stats_drop_size_valid),
+	},
+	{
+		.data_type	= QMI_SIGNED_4_BYTE_ENUM,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_req,
+				     hw_stats_drop_size),
+		.tlv_type	= 0x1f,
+		.offset		= offsetof(struct ipa_init_modem_driver_req,
+					   hw_stats_drop_size),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
+
+/* QMI message structure definition for struct ipa_init_modem_driver_rsp */
+struct qmi_elem_info ipa_init_modem_driver_rsp_ei[] = {
+	{
+		.data_type	= QMI_STRUCT,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_rsp,
+				     rsp),
+		.tlv_type	= 0x02,
+		.offset		= offsetof(struct ipa_init_modem_driver_rsp,
+					   rsp),
+		.ei_array	= qmi_response_type_v01_ei,
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_rsp,
+				     ctrl_comm_dest_end_pt_valid),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_init_modem_driver_rsp,
+					   ctrl_comm_dest_end_pt_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_rsp,
+				     ctrl_comm_dest_end_pt),
+		.tlv_type	= 0x10,
+		.offset		= offsetof(struct ipa_init_modem_driver_rsp,
+					   ctrl_comm_dest_end_pt),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_rsp,
+				     default_end_pt_valid),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_init_modem_driver_rsp,
+					   default_end_pt_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_4_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_rsp,
+				     default_end_pt),
+		.tlv_type	= 0x11,
+		.offset		= offsetof(struct ipa_init_modem_driver_rsp,
+					   default_end_pt),
+	},
+	{
+		.data_type	= QMI_OPT_FLAG,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_rsp,
+				     modem_driver_init_pending_valid),
+		.tlv_type	= 0x12,
+		.offset		= offsetof(struct ipa_init_modem_driver_rsp,
+					   modem_driver_init_pending_valid),
+	},
+	{
+		.data_type	= QMI_UNSIGNED_1_BYTE,
+		.elem_len	= 1,
+		.elem_size	=
+			sizeof_field(struct ipa_init_modem_driver_rsp,
+				     modem_driver_init_pending),
+		.tlv_type	= 0x12,
+		.offset		= offsetof(struct ipa_init_modem_driver_rsp,
+					   modem_driver_init_pending),
+	},
+	{
+		.data_type	= QMI_EOTI,
+	},
+};
diff --git a/drivers/net/ipa/ipa_qmi_msg.h b/drivers/net/ipa/ipa_qmi_msg.h
new file mode 100644
index 000000000000..cfac456cea0c
--- /dev/null
+++ b/drivers/net/ipa/ipa_qmi_msg.h
@@ -0,0 +1,252 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2020 Linaro Ltd.
+ */
+#ifndef _IPA_QMI_MSG_H_
+#define _IPA_QMI_MSG_H_
+
+/* === Only "ipa_qmi.c" and "ipa_qmi_msg.c" should include this file === */
+
+#include <linux/types.h>
+#include <linux/soc/qcom/qmi.h>
+
+/* Request/response/indication QMI message ids used for IPA.  Receiving
+ * end issues a response for requests; indications require no response.
+ */
+#define IPA_QMI_INDICATION_REGISTER	0x20	/* modem -> AP request */
+#define IPA_QMI_INIT_DRIVER		0x21	/* AP -> modem request */
+#define IPA_QMI_INIT_COMPLETE		0x22	/* AP -> modem indication */
+#define IPA_QMI_DRIVER_INIT_COMPLETE	0x35	/* modem -> AP request */
+
+/* The maximum size required for message types.  These sizes include
+ * the message data, along with type (1 byte) and length (2 byte)
+ * information for each field.  The qmi_send_*() interfaces require
+ * the message size to be provided.
+ */
+#define IPA_QMI_INDICATION_REGISTER_REQ_SZ	12	/* -> server handle */
+#define IPA_QMI_INDICATION_REGISTER_RSP_SZ	7	/* <- server handle */
+#define IPA_QMI_INIT_DRIVER_REQ_SZ		162	/* client handle -> */
+#define IPA_QMI_INIT_DRIVER_RSP_SZ		25	/* client handle <- */
+#define IPA_QMI_INIT_COMPLETE_IND_SZ		7	/* <- server handle */
+#define IPA_QMI_DRIVER_INIT_COMPLETE_REQ_SZ	4	/* -> server handle */
+#define IPA_QMI_DRIVER_INIT_COMPLETE_RSP_SZ	7	/* <- server handle */
+
+/* Maximum size of messages we expect the AP to receive (max of above) */
+#define IPA_QMI_SERVER_MAX_RCV_SZ		8
+#define IPA_QMI_CLIENT_MAX_RCV_SZ		25
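+
+/* For example, the IPA_QMI_DRIVER_INIT_COMPLETE request carries a
+ * single one-byte "status" field, so its maximum size works out to
+ * 1 (type) + 2 (length) + 1 (data) = 4 bytes, as defined above.
+ */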
+
+/* Request message for the IPA_QMI_INDICATION_REGISTER request */
+struct ipa_indication_register_req {
+	u8 master_driver_init_complete_valid;
+	u8 master_driver_init_complete;
+	u8 data_usage_quota_reached_valid;
+	u8 data_usage_quota_reached;
+	u8 ipa_mhi_ready_ind_valid;
+	u8 ipa_mhi_ready_ind;
+};
+
+/* The response to an IPA_QMI_INDICATION_REGISTER request consists only of
+ * a standard QMI response.
+ */
+struct ipa_indication_register_rsp {
+	struct qmi_response_type_v01 rsp;
+};
+
+/* Request message for the IPA_QMI_DRIVER_INIT_COMPLETE request */
+struct ipa_driver_init_complete_req {
+	u8 status;
+};
+
+/* The response to an IPA_QMI_DRIVER_INIT_COMPLETE request consists only
+ * of a standard QMI response.
+ */
+struct ipa_driver_init_complete_rsp {
+	struct qmi_response_type_v01 rsp;
+};
+
+/* The message for the IPA_QMI_INIT_COMPLETE_IND indication consists
+ * only of a standard QMI response.
+ */
+struct ipa_init_complete_ind {
+	struct qmi_response_type_v01 status;
+};
+
+/* The AP tells the modem its platform type.  We assume Android. */
+enum ipa_platform_type {
+	IPA_QMI_PLATFORM_TYPE_INVALID		= 0,	/* Invalid */
+	IPA_QMI_PLATFORM_TYPE_TN		= 1,	/* Data card */
+	IPA_QMI_PLATFORM_TYPE_LE		= 2,	/* Data router */
+	IPA_QMI_PLATFORM_TYPE_MSM_ANDROID	= 3,	/* Android MSM */
+	IPA_QMI_PLATFORM_TYPE_MSM_WINDOWS	= 4,	/* Windows MSM */
+	IPA_QMI_PLATFORM_TYPE_MSM_QNX_V01	= 5,	/* QNX MSM */
+};
+
+/* This defines the start and end offset of a range of memory.  Both
+ * fields are offsets relative to the start of IPA shared memory.
+ * The end value is the last addressable byte *within* the range.
+ */
+struct ipa_mem_bounds {
+	u32 start;
+	u32 end;
+};
+
+/* This defines the location and size of an array.  The start value
+ * is an offset relative to the start of IPA shared memory.  The
+ * size of the array is implied by the number of entries (the entry
+ * size is assumed to be known).
+ */
+struct ipa_mem_array {
+	u32 start;
+	u32 count;
+};
+
+/* This defines the location and size of a range of memory.  The
+ * start is an offset relative to the start of IPA shared memory.
+ * This differs from the ipa_mem_bounds structure in that the size
+ * (in bytes) of the memory region is specified rather than the
+ * offset of its last byte.
+ */
+struct ipa_mem_range {
+	u32 start;
+	u32 size;
+};
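+
+/* Example (illustrative only): an ipa_mem_bounds of { .start = 0x100,
+ * .end = 0x1ff } describes the same memory as an ipa_mem_range of
+ * { .start = 0x100, .size = 0x100 }; that is, size == end - start + 1.
+ */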
+
+/* The message for the IPA_QMI_INIT_DRIVER request contains information
+ * from the AP that affects modem initialization.
+ */
+struct ipa_init_modem_driver_req {
+	u8			platform_type_valid;
+	u32			platform_type;	/* enum ipa_platform_type */
+
+	/* Modem header table information.  This defines the IPA shared
+	 * memory in which the modem may insert header table entries.
+	 */
+	u8			hdr_tbl_info_valid;
+	struct ipa_mem_bounds	hdr_tbl_info;
+
+	/* Routing table information.  These define the location and size of
+	 * non-hashable IPv4 and IPv6 routing tables.  The start values are
+	 * offsets relative to the start of IPA shared memory.
+	 */
+	u8			v4_route_tbl_info_valid;
+	struct ipa_mem_array	v4_route_tbl_info;
+	u8			v6_route_tbl_info_valid;
+	struct ipa_mem_array	v6_route_tbl_info;
+
+	/* Filter table information.  These define the location of the
+	 * non-hashable IPv4 and IPv6 filter tables.  The start values are
+	 * offsets relative to the start of IPA shared memory.
+	 */
+	u8			v4_filter_tbl_start_valid;
+	u32			v4_filter_tbl_start;
+	u8			v6_filter_tbl_start_valid;
+	u32			v6_filter_tbl_start;
+
+	/* Modem memory information.  This defines the location and
+	 * size of memory available for the modem to use.
+	 */
+	u8			modem_mem_info_valid;
+	struct ipa_mem_range	modem_mem_info;
+
+	/* This defines the destination endpoint on the AP to which
+	 * the modem driver can send control commands.  Must be less
+	 * than ipa_endpoint_max().
+	 */
+	u8			ctrl_comm_dest_end_pt_valid;
+	u32			ctrl_comm_dest_end_pt;
+
+	/* This defines whether the modem should load the microcontroller
+	 * or not.  It is unnecessary to reload it if the modem is being
+	 * restarted.
+	 *
+	 * NOTE: this field is named "is_ssr_bootup" elsewhere.
+	 */
+	u8			skip_uc_load_valid;
+	u8			skip_uc_load;
+
+	/* Processing context memory information.  This defines the memory in
+	 * which the modem may insert header processing context table entries.
+	 */
+	u8			hdr_proc_ctx_tbl_info_valid;
+	struct ipa_mem_bounds	hdr_proc_ctx_tbl_info;
+
+	/* Compression command memory information.  This defines the memory
+	 * in which the modem may insert compression/decompression commands.
+	 */
+	u8			zip_tbl_info_valid;
+	struct ipa_mem_bounds	zip_tbl_info;
+
+	/* Routing table information.  These define the location and size
+	 * of hashable IPv4 and IPv6 routing tables.  The start values are
+	 * offsets relative to the start of IPA shared memory.
+	 */
+	u8			v4_hash_route_tbl_info_valid;
+	struct ipa_mem_array	v4_hash_route_tbl_info;
+	u8			v6_hash_route_tbl_info_valid;
+	struct ipa_mem_array	v6_hash_route_tbl_info;
+
+	/* Filter table information.  These define the location of the
+	 * hashable IPv4 and IPv6 filter tables.  The start values are
+	 * offsets relative to the start of IPA shared memory.
+	 */
+	u8			v4_hash_filter_tbl_start_valid;
+	u32			v4_hash_filter_tbl_start;
+	u8			v6_hash_filter_tbl_start_valid;
+	u32			v6_hash_filter_tbl_start;
+
+	/* Statistics information.  These define the locations of the
+	 * first and last statistics sub-regions.  (IPA v4.0 and above)
+	 */
+	u8			hw_stats_quota_base_addr_valid;
+	u32			hw_stats_quota_base_addr;
+	u8			hw_stats_quota_size_valid;
+	u32			hw_stats_quota_size;
+	u8			hw_stats_drop_base_addr_valid;
+	u32			hw_stats_drop_base_addr;
+	u8			hw_stats_drop_size_valid;
+	u32			hw_stats_drop_size;
+};
+
+/* The response to an IPA_QMI_INIT_DRIVER request begins with a standard
+ * QMI response, but contains other information as well.  Currently we
+ * simply wait for the INIT_DRIVER transaction to complete and ignore
+ * any other data that might be returned.
+ */
+struct ipa_init_modem_driver_rsp {
+	struct qmi_response_type_v01	rsp;
+
+	/* This defines the destination endpoint on the modem to which
+	 * the AP driver can send control commands.  Must be less than
+	 * ipa_endpoint_max().
+	 */
+	u8				ctrl_comm_dest_end_pt_valid;
+	u32				ctrl_comm_dest_end_pt;
+
+	/* This defines the default endpoint.  The AP driver is not
+	 * required to configure the hardware with this value.  Must
+	 * be less than ipa_endpoint_max().
+	 */
+	u8				default_end_pt_valid;
+	u32				default_end_pt;
+
+	/* This defines whether a second handshake is required to complete
+	 * initialization.
+	 */
+	u8				modem_driver_init_pending_valid;
+	u8				modem_driver_init_pending;
+};
+
+/* Message structure definitions (defined in "ipa_qmi_msg.c") */
+extern struct qmi_elem_info ipa_indication_register_req_ei[];
+extern struct qmi_elem_info ipa_indication_register_rsp_ei[];
+extern struct qmi_elem_info ipa_driver_init_complete_req_ei[];
+extern struct qmi_elem_info ipa_driver_init_complete_rsp_ei[];
+extern struct qmi_elem_info ipa_init_complete_ind_ei[];
+extern struct qmi_elem_info ipa_mem_bounds_ei[];
+extern struct qmi_elem_info ipa_mem_array_ei[];
+extern struct qmi_elem_info ipa_mem_range_ei[];
+extern struct qmi_elem_info ipa_init_modem_driver_req_ei[];
+extern struct qmi_elem_info ipa_init_modem_driver_rsp_ei[];
+
+#endif /* !_IPA_QMI_MSG_H_ */
diff --git a/drivers/net/ipa/ipa_smp2p.c b/drivers/net/ipa/ipa_smp2p.c
new file mode 100644
index 000000000000..4d33aa7ebfbb
--- /dev/null
+++ b/drivers/net/ipa/ipa_smp2p.c
@@ -0,0 +1,335 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/interrupt.h>
+#include <linux/notifier.h>
+#include <linux/soc/qcom/smem.h>
+#include <linux/soc/qcom/smem_state.h>
+
+#include "ipa_smp2p.h"
+#include "ipa.h"
+#include "ipa_uc.h"
+#include "ipa_clock.h"
+
+/**
+ * DOC: IPA SMP2P communication with the modem
+ *
+ * SMP2P is a primitive communication mechanism available between the AP and
+ * the modem.  The IPA driver uses this for two purposes:  to enable the modem
+ * to state that the GSI hardware is ready to use; and to communicate the
+ * state of the IPA clock in the event of a crash.
+ *
+ * GSI needs to have early initialization completed before it can be used.
+ * This initialization is done either by Trust Zone or by the modem.  In the
+ * latter case, the modem uses an SMP2P interrupt to tell the AP IPA driver
+ * when the GSI is ready to use.
+ *
+ * The modem is also able to inquire about the current state of the IPA
+ * clock by triggering another SMP2P interrupt to the AP.  We communicate
+ * whether the clock is enabled using two SMP2P state bits--one to
+ * indicate the clock state (on or off), and a second to indicate the
+ * clock state bit is valid.  The modem will poll the valid bit until it
+ * is set, and at that time records whether the AP has the IPA clock enabled.
+ *
+ * Finally, if the AP kernel panics, we update the SMP2P state bits even if
+ * we never receive an interrupt from the modem requesting this.
+ */
+
+/**
+ * struct ipa_smp2p - IPA SMP2P information
+ * @ipa:		IPA pointer
+ * @valid_state:	SMEM state indicating enabled state is valid
+ * @enabled_state:	SMEM state to indicate clock is enabled
+ * @valid_bit:		Valid bit in 32-bit SMEM state mask
+ * @enabled_bit:	Enabled bit in 32-bit SMEM state mask
+ * @clock_query_irq:	IPA interrupt triggered by modem for clock query
+ * @setup_ready_irq:	IPA interrupt triggered by modem to signal GSI ready
+ * @clock_on:		Whether IPA clock is on
+ * @notified:		Whether modem has been notified of clock state
+ * @disabled:		Whether setup ready interrupt handling is disabled
+ * @mutex:		Mutex protecting the ready-interrupt/shutdown interlock
+ * @panic_notifier:	Panic notifier structure
+ */
+struct ipa_smp2p {
+	struct ipa *ipa;
+	struct qcom_smem_state *valid_state;
+	struct qcom_smem_state *enabled_state;
+	u32 valid_bit;
+	u32 enabled_bit;
+	u32 clock_query_irq;
+	u32 setup_ready_irq;
+	bool clock_on;
+	bool notified;
+	bool disabled;
+	struct mutex mutex;
+	struct notifier_block panic_notifier;
+};
+
+/**
+ * ipa_smp2p_notify() - use SMP2P to tell modem about IPA clock state
+ * @smp2p:	SMP2P information
+ *
+ * This is called either when the modem has requested it (by triggering
+ * the modem clock query IPA interrupt) or whenever the AP is shutting down
+ * (via a panic notifier).  It sets the two SMP2P state bits--one saying
+ * whether the IPA clock is running, and the other indicating the first bit
+ * is valid.
+ */
+static void ipa_smp2p_notify(struct ipa_smp2p *smp2p)
+{
+	u32 value;
+	u32 mask;
+
+	if (smp2p->notified)
+		return;
+
+	smp2p->clock_on = ipa_clock_get_additional(smp2p->ipa);
+
+	/* Signal whether the clock is enabled */
+	mask = BIT(smp2p->enabled_bit);
+	value = smp2p->clock_on ? mask : 0;
+	qcom_smem_state_update_bits(smp2p->enabled_state, mask, value);
+
+	/* Now indicate that the enabled flag is valid */
+	mask = BIT(smp2p->valid_bit);
+	value = mask;
+	qcom_smem_state_update_bits(smp2p->valid_state, mask, value);
+
+	smp2p->notified = true;
+}
+
+/* Threaded IRQ handler for modem "ipa-clock-query" SMP2P interrupt */
+static irqreturn_t ipa_smp2p_modem_clk_query_isr(int irq, void *dev_id)
+{
+	struct ipa_smp2p *smp2p = dev_id;
+
+	ipa_smp2p_notify(smp2p);
+
+	return IRQ_HANDLED;
+}
+
+static int ipa_smp2p_panic_notifier(struct notifier_block *nb,
+				    unsigned long action, void *data)
+{
+	struct ipa_smp2p *smp2p;
+
+	smp2p = container_of(nb, struct ipa_smp2p, panic_notifier);
+
+	ipa_smp2p_notify(smp2p);
+
+	if (smp2p->clock_on)
+		ipa_uc_panic_notifier(smp2p->ipa);
+
+	return NOTIFY_DONE;
+}
+
+static int ipa_smp2p_panic_notifier_register(struct ipa_smp2p *smp2p)
+{
+	/* IPA panic handler needs to run before modem shuts down */
+	smp2p->panic_notifier.notifier_call = ipa_smp2p_panic_notifier;
+	smp2p->panic_notifier.priority = INT_MAX;	/* Do it early */
+
+	return atomic_notifier_chain_register(&panic_notifier_list,
+					      &smp2p->panic_notifier);
+}
+
+static void ipa_smp2p_panic_notifier_unregister(struct ipa_smp2p *smp2p)
+{
+	atomic_notifier_chain_unregister(&panic_notifier_list,
+					 &smp2p->panic_notifier);
+}
+
+/* Threaded IRQ handler for modem "ipa-setup-ready" SMP2P interrupt */
+static irqreturn_t ipa_smp2p_modem_setup_ready_isr(int irq, void *dev_id)
+{
+	struct ipa_smp2p *smp2p = dev_id;
+
+	mutex_lock(&smp2p->mutex);
+
+	if (!smp2p->disabled) {
+		int ret;
+
+		ret = ipa_setup(smp2p->ipa);
+		if (ret)
+			dev_err(&smp2p->ipa->pdev->dev,
+				"error %d from ipa_setup()\n", ret);
+		smp2p->disabled = true;
+	}
+
+	mutex_unlock(&smp2p->mutex);
+
+	return IRQ_HANDLED;
+}
+
+/* Initialize SMP2P interrupts */
+static int ipa_smp2p_irq_init(struct ipa_smp2p *smp2p, const char *name,
+			      irq_handler_t handler)
+{
+	struct device *dev = &smp2p->ipa->pdev->dev;
+	unsigned int irq;
+	int ret;
+
+	ret = platform_get_irq_byname(smp2p->ipa->pdev, name);
+	if (ret <= 0) {
+		dev_err(dev, "DT error %d getting \"%s\" IRQ property\n",
+			ret, name);
+		return ret ? : -EINVAL;
+	}
+	irq = ret;
+
+	ret = request_threaded_irq(irq, NULL, handler, 0, name, smp2p);
+	if (ret) {
+		dev_err(dev, "error %d requesting \"%s\" IRQ\n", ret, name);
+		return ret;
+	}
+
+	return irq;
+}
+
+static void ipa_smp2p_irq_exit(struct ipa_smp2p *smp2p, u32 irq)
+{
+	free_irq(irq, smp2p);
+}
+
+/* Drop the clock reference if it was taken in ipa_smp2p_notify() */
+static void ipa_smp2p_clock_release(struct ipa *ipa)
+{
+	if (!ipa->smp2p->clock_on)
+		return;
+
+	ipa_clock_put(ipa);
+	ipa->smp2p->clock_on = false;
+}
+
+/* Initialize the IPA SMP2P subsystem */
+int ipa_smp2p_init(struct ipa *ipa, bool modem_init)
+{
+	struct qcom_smem_state *enabled_state;
+	struct device *dev = &ipa->pdev->dev;
+	struct qcom_smem_state *valid_state;
+	struct ipa_smp2p *smp2p;
+	u32 enabled_bit;
+	u32 valid_bit;
+	int ret;
+
+	valid_state = qcom_smem_state_get(dev, "ipa-clock-enabled-valid",
+					  &valid_bit);
+	if (IS_ERR(valid_state))
+		return PTR_ERR(valid_state);
+	if (valid_bit >= 32)		/* BITS_PER_U32 */
+		return -EINVAL;
+
+	enabled_state = qcom_smem_state_get(dev, "ipa-clock-enabled",
+					    &enabled_bit);
+	if (IS_ERR(enabled_state))
+		return PTR_ERR(enabled_state);
+	if (enabled_bit >= 32)		/* BITS_PER_U32 */
+		return -EINVAL;
+
+	smp2p = kzalloc(sizeof(*smp2p), GFP_KERNEL);
+	if (!smp2p)
+		return -ENOMEM;
+
+	smp2p->ipa = ipa;
+
+	/* These fields are needed by the clock query interrupt
+	 * handler, so initialize them now.
+	 */
+	mutex_init(&smp2p->mutex);
+	smp2p->valid_state = valid_state;
+	smp2p->valid_bit = valid_bit;
+	smp2p->enabled_state = enabled_state;
+	smp2p->enabled_bit = enabled_bit;
+
+	/* We have enough information saved to handle notifications */
+	ipa->smp2p = smp2p;
+
+	ret = ipa_smp2p_irq_init(smp2p, "ipa-clock-query",
+				 ipa_smp2p_modem_clk_query_isr);
+	if (ret < 0)
+		goto err_null_smp2p;
+	smp2p->clock_query_irq = ret;
+
+	ret = ipa_smp2p_panic_notifier_register(smp2p);
+	if (ret)
+		goto err_irq_exit;
+
+	if (modem_init) {
+		/* Result will be non-zero (negative for error) */
+		ret = ipa_smp2p_irq_init(smp2p, "ipa-setup-ready",
+					 ipa_smp2p_modem_setup_ready_isr);
+		if (ret < 0)
+			goto err_notifier_unregister;
+		smp2p->setup_ready_irq = ret;
+	}
+
+	return 0;
+
+err_notifier_unregister:
+	ipa_smp2p_panic_notifier_unregister(smp2p);
+err_irq_exit:
+	ipa_smp2p_irq_exit(smp2p, smp2p->clock_query_irq);
+err_null_smp2p:
+	ipa->smp2p = NULL;
+	mutex_destroy(&smp2p->mutex);
+	kfree(smp2p);
+
+	return ret;
+}
+
+void ipa_smp2p_exit(struct ipa *ipa)
+{
+	struct ipa_smp2p *smp2p = ipa->smp2p;
+
+	if (smp2p->setup_ready_irq)
+		ipa_smp2p_irq_exit(smp2p, smp2p->setup_ready_irq);
+	ipa_smp2p_panic_notifier_unregister(smp2p);
+	ipa_smp2p_irq_exit(smp2p, smp2p->clock_query_irq);
+	/* We won't get notified any more; drop clock reference (if any) */
+	ipa_smp2p_clock_release(ipa);
+	ipa->smp2p = NULL;
+	mutex_destroy(&smp2p->mutex);
+	kfree(smp2p);
+}
+
+void ipa_smp2p_disable(struct ipa *ipa)
+{
+	struct ipa_smp2p *smp2p = ipa->smp2p;
+
+	if (!smp2p->setup_ready_irq)
+		return;
+
+	mutex_lock(&smp2p->mutex);
+
+	smp2p->disabled = true;
+
+	mutex_unlock(&smp2p->mutex);
+}
+
+/* Reset state tracking whether we have notified the modem */
+void ipa_smp2p_notify_reset(struct ipa *ipa)
+{
+	struct ipa_smp2p *smp2p = ipa->smp2p;
+	u32 mask;
+
+	if (!smp2p->notified)
+		return;
+
+	ipa_smp2p_clock_release(ipa);
+
+	/* Reset the clock enabled valid flag */
+	mask = BIT(smp2p->valid_bit);
+	qcom_smem_state_update_bits(smp2p->valid_state, mask, 0);
+
+	/* Mark the clock disabled for good measure... */
+	mask = BIT(smp2p->enabled_bit);
+	qcom_smem_state_update_bits(smp2p->enabled_state, mask, 0);
+
+	smp2p->notified = false;
+}
diff --git a/drivers/net/ipa/ipa_smp2p.h b/drivers/net/ipa/ipa_smp2p.h
new file mode 100644
index 000000000000..1f65cdc9d406
--- /dev/null
+++ b/drivers/net/ipa/ipa_smp2p.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+#ifndef _IPA_SMP2P_H_
+#define _IPA_SMP2P_H_
+
+#include <linux/types.h>
+
+struct ipa;
+
+/**
+ * ipa_smp2p_init() - Initialize the IPA SMP2P subsystem
+ * @ipa:	IPA pointer
+ * @modem_init:	Whether the modem is responsible for GSI initialization
+ *
+ * Return:	0 if successful, or a negative error code
+ */
+int ipa_smp2p_init(struct ipa *ipa, bool modem_init);
+
+/**
+ * ipa_smp2p_exit() - Inverse of ipa_smp2p_init()
+ * @ipa:	IPA pointer
+ */
+void ipa_smp2p_exit(struct ipa *ipa);
+
+/**
+ * ipa_smp2p_disable() - Prevent "ipa-setup-ready" interrupt handling
+ * @ipa:	IPA pointer
+ *
+ * Prevent handling of the "setup ready" interrupt from the modem.
+ * This is used before initiating shutdown of the driver.
+ */
+void ipa_smp2p_disable(struct ipa *ipa);
+
+/**
+ * ipa_smp2p_notify_reset() - Reset modem notification state
+ * @ipa:	IPA pointer
+ *
+ * If the modem crashes it queries the IPA clock state.  In cleaning
+ * up after such a crash this is used to reset some state maintained
+ * for managing this notification.
+ */
+void ipa_smp2p_notify_reset(struct ipa *ipa);
+
+#endif /* _IPA_SMP2P_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 15/17] soc: qcom: ipa: support build of IPA code
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (13 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 14/17] soc: qcom: ipa: AP/modem communications Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-11 10:54   ` Jon Hunter
  2020-03-06  4:28 ` [PATCH v2 16/17] MAINTAINERS: add entry for the Qualcomm IPA driver Alex Elder
                   ` (4 subsequent siblings)
  19 siblings, 1 reply; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

Add build and Kconfig support for the Qualcomm IPA driver.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/Kconfig      |  2 ++
 drivers/net/Makefile     |  1 +
 drivers/net/ipa/Kconfig  | 19 +++++++++++++++++++
 drivers/net/ipa/Makefile | 12 ++++++++++++
 4 files changed, 34 insertions(+)
 create mode 100644 drivers/net/ipa/Kconfig
 create mode 100644 drivers/net/ipa/Makefile

diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 66e410e58c8e..02565bc2be8a 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -444,6 +444,8 @@ source "drivers/net/fddi/Kconfig"
 
 source "drivers/net/hippi/Kconfig"
 
+source "drivers/net/ipa/Kconfig"
+
 config NET_SB1000
 	tristate "General Instruments Surfboard 1000"
 	depends on PNP
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 65967246f240..94b60800887a 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -47,6 +47,7 @@ obj-$(CONFIG_ETHERNET) += ethernet/
 obj-$(CONFIG_FDDI) += fddi/
 obj-$(CONFIG_HIPPI) += hippi/
 obj-$(CONFIG_HAMRADIO) += hamradio/
+obj-$(CONFIG_QCOM_IPA) += ipa/
 obj-$(CONFIG_PLIP) += plip/
 obj-$(CONFIG_PPP) += ppp/
 obj-$(CONFIG_PPP_ASYNC) += ppp/
diff --git a/drivers/net/ipa/Kconfig b/drivers/net/ipa/Kconfig
new file mode 100644
index 000000000000..b8cb7cadbf75
--- /dev/null
+++ b/drivers/net/ipa/Kconfig
@@ -0,0 +1,19 @@
+config QCOM_IPA
+	tristate "Qualcomm IPA support"
+	depends on ARCH_QCOM && 64BIT && NET
+	select QCOM_QMI_HELPERS
+	select QCOM_MDT_LOADER
+	default QCOM_Q6V5_COMMON
+	help
+	  Choose Y or M here to include support for the Qualcomm
+	  IP Accelerator (IPA), a hardware block present in some
+	  Qualcomm SoCs.  The IPA is a programmable protocol processor
+	  that is capable of generic hardware handling of IP packets,
+	  including routing, filtering, and NAT.  Currently the IPA
+	  driver supports only basic transport of network traffic
+	  between the AP and modem, on the Qualcomm SDM845 SoC.
+
+	  Note that if selected, the selection type must match that
+	  of QCOM_Q6V5_COMMON (Y or M).
+
+	  If unsure, say N.
diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
new file mode 100644
index 000000000000..afe5df1e6eee
--- /dev/null
+++ b/drivers/net/ipa/Makefile
@@ -0,0 +1,12 @@
+# Un-comment the next line if you want to validate configuration data
+#ccflags-y		+=	-DIPA_VALIDATE
+
+obj-$(CONFIG_QCOM_IPA)	+=	ipa.o
+
+ipa-y			:=	ipa_main.o ipa_clock.o ipa_reg.o ipa_mem.o \
+				ipa_table.o ipa_interrupt.o gsi.o gsi_trans.o \
+				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
+				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
+				ipa_qmi.o ipa_qmi_msg.o
+
+ipa-y			+=	ipa_data-sdm845.o ipa_data-sc7180.o
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 16/17] MAINTAINERS: add entry for the Qualcomm IPA driver
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (14 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 15/17] soc: qcom: ipa: support build of IPA code Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-06  4:28 ` [PATCH v2 17/17] arm64: dts: sdm845: add IPA information Alex Elder
                   ` (3 subsequent siblings)
  19 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

Add an entry in the MAINTAINERS file for the Qualcomm IPA driver

Signed-off-by: Alex Elder <elder@linaro.org>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 2ec6a539fa42..e8666f980a21 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -13662,6 +13662,12 @@ L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
 F:	sound/soc/qcom/
 
+QCOM IPA DRIVER
+M:	Alex Elder <elder@kernel.org>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	drivers/net/ipa/
+
 QEMU MACHINE EMULATOR AND VIRTUALIZER SUPPORT
 M:	Gabriel Somlo <somlo@cmu.edu>
 M:	"Michael S. Tsirkin" <mst@redhat.com>
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH v2 17/17] arm64: dts: sdm845: add IPA information
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (15 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 16/17] MAINTAINERS: add entry for the Qualcomm IPA driver Alex Elder
@ 2020-03-06  4:28 ` Alex Elder
  2020-03-11 10:49   ` Jon Hunter
  2020-03-09  5:09 ` [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) David Miller
                   ` (2 subsequent siblings)
  19 siblings, 1 reply; 30+ messages in thread
From: Alex Elder @ 2020-03-06  4:28 UTC (permalink / raw)
  To: Bjorn Andersson, Andy Gross
  Cc: David Miller, Arnd Bergmann, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

Add IPA-related nodes and definitions to "sdm845.dtsi".

Signed-off-by: Alex Elder <elder@linaro.org>
---
 arch/arm64/boot/dts/qcom/sdm845.dtsi | 51 ++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
index d42302b8889b..58fd1c611849 100644
--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
@@ -675,6 +675,17 @@
 			interrupt-controller;
 			#interrupt-cells = <2>;
 		};
+
+		ipa_smp2p_out: ipa-ap-to-modem {
+			qcom,entry-name = "ipa";
+			#qcom,smem-state-cells = <1>;
+		};
+
+		ipa_smp2p_in: ipa-modem-to-ap {
+			qcom,entry-name = "ipa";
+			interrupt-controller;
+			#interrupt-cells = <2>;
+		};
 	};
 
 	smp2p-slpi {
@@ -1435,6 +1446,46 @@
 			};
 		};
 
+		ipa@1e40000 {
+			compatible = "qcom,sdm845-ipa";
+
+			modem-init;
+			modem-remoteproc = <&mss_pil>;
+
+			reg = <0 0x1e40000 0 0x7000>,
+			      <0 0x1e47000 0 0x2000>,
+			      <0 0x1e04000 0 0x2c000>;
+			reg-names = "ipa-reg",
+				    "ipa-shared",
+				    "gsi";
+
+			interrupts-extended =
+					<&intc 0 311 IRQ_TYPE_EDGE_RISING>,
+					<&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
+					<&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+					<&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
+			interrupt-names = "ipa",
+					  "gsi",
+					  "ipa-clock-query",
+					  "ipa-setup-ready";
+
+			clocks = <&rpmhcc RPMH_IPA_CLK>;
+			clock-names = "core";
+
+			interconnects =
+				<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
+				<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
+				<&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
+			interconnect-names = "memory",
+					     "imem",
+					     "config";
+
+			qcom,smem-states = <&ipa_smp2p_out 0>,
+					   <&ipa_smp2p_out 1>;
+			qcom,smem-state-names = "ipa-clock-enabled-valid",
+						"ipa-clock-enabled";
+		};
+
 		tcsr_mutex_regs: syscon@1f40000 {
 			compatible = "syscon";
 			reg = <0 0x01f40000 0 0x40000>;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 01/17] remoteproc: add IPA notification to q6v5 driver
  2020-03-06  4:28 ` [PATCH v2 01/17] remoteproc: add IPA notification to q6v5 driver Alex Elder
@ 2020-03-06 11:49   ` Leon Romanovsky
  2020-03-06 13:29     ` Alex Elder
  0 siblings, 1 reply; 30+ messages in thread
From: Leon Romanovsky @ 2020-03-06 11:49 UTC (permalink / raw)
  To: Alex Elder
  Cc: Bjorn Andersson, Ohad Ben-Cohen, David Miller, Arnd Bergmann,
	Andy Gross, Johannes Berg, Dan Williams, Evan Green, Eric Caruso,
	Susheel Yadav Yadagiri, Chaitanya Pratapa,
	Subash Abhinov Kasiviswanathan, Rob Herring, Mark Rutland,
	Siddharth Gupta, netdev, devicetree, linux-arm-kernel,
	linux-arm-msm, linux-soc, linux-kernel

On Thu, Mar 05, 2020 at 10:28:15PM -0600, Alex Elder wrote:
> Set up a subdev in the q6v5 modem remoteproc driver that generates
> event notifications for the IPA driver to use for initialization and
> recovery following a modem shutdown or crash.
>
> A pair of new functions provides a way for the IPA driver to register
> and deregister a notification callback function that will be called
> whenever modem events (about to boot, running, about to shut down,
> etc.) occur.  A void pointer value (provided by the IPA driver at
> registration time) and an event type are supplied to the callback
> function.
>
> One event, MODEM_REMOVING, is signaled whenever the q6v5 driver is
> about to remove the notification subdevice.  It requires the IPA
> driver de-register its callback.
>
> This sub-device is only used by the modem subsystem (MSS) driver,
> so the code that adds the new subdev and allows registration and
> deregistration of the notifier is found in "qcom_q6v5_mss.c".
>
> Signed-off-by: Alex Elder <elder@linaro.org>
> ---
>  drivers/remoteproc/Kconfig                    |  6 ++
>  drivers/remoteproc/Makefile                   |  1 +
>  drivers/remoteproc/qcom_q6v5_ipa_notify.c     | 85 +++++++++++++++++++
>  drivers/remoteproc/qcom_q6v5_mss.c            | 38 +++++++++
>  .../linux/remoteproc/qcom_q6v5_ipa_notify.h   | 82 ++++++++++++++++++
>  5 files changed, 212 insertions(+)
>  create mode 100644 drivers/remoteproc/qcom_q6v5_ipa_notify.c
>  create mode 100644 include/linux/remoteproc/qcom_q6v5_ipa_notify.h
>
> diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
> index de3862c15fcc..56084635dd63 100644
> --- a/drivers/remoteproc/Kconfig
> +++ b/drivers/remoteproc/Kconfig
> @@ -167,6 +167,12 @@ config QCOM_Q6V5_WCSS
>  	  Say y here to support the Qualcomm Peripheral Image Loader for the
>  	  Hexagon V5 based WCSS remote processors.
>
> +config QCOM_Q6V5_IPA_NOTIFY
> +	tristate
> +	depends on QCOM_IPA
> +	depends on QCOM_Q6V5_MSS
> +	default QCOM_IPA
> +
>  config QCOM_SYSMON
>  	tristate "Qualcomm sysmon driver"
>  	depends on RPMSG
> diff --git a/drivers/remoteproc/Makefile b/drivers/remoteproc/Makefile
> index e30a1b15fbac..0effd3825035 100644
> --- a/drivers/remoteproc/Makefile
> +++ b/drivers/remoteproc/Makefile
> @@ -21,6 +21,7 @@ obj-$(CONFIG_QCOM_Q6V5_ADSP)		+= qcom_q6v5_adsp.o
>  obj-$(CONFIG_QCOM_Q6V5_MSS)		+= qcom_q6v5_mss.o
>  obj-$(CONFIG_QCOM_Q6V5_PAS)		+= qcom_q6v5_pas.o
>  obj-$(CONFIG_QCOM_Q6V5_WCSS)		+= qcom_q6v5_wcss.o
> +obj-$(CONFIG_QCOM_Q6V5_IPA_NOTIFY)	+= qcom_q6v5_ipa_notify.o
>  obj-$(CONFIG_QCOM_SYSMON)		+= qcom_sysmon.o
>  obj-$(CONFIG_QCOM_WCNSS_PIL)		+= qcom_wcnss_pil.o
>  qcom_wcnss_pil-y			+= qcom_wcnss.o
> diff --git a/drivers/remoteproc/qcom_q6v5_ipa_notify.c b/drivers/remoteproc/qcom_q6v5_ipa_notify.c
> new file mode 100644
> index 000000000000..e1c10a128bfd
> --- /dev/null
> +++ b/drivers/remoteproc/qcom_q6v5_ipa_notify.c
> @@ -0,0 +1,85 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/*
> + * Qualcomm IPA notification subdev support
> + *
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#include <linux/kernel.h>
> +#include <linux/module.h>
> +#include <linux/remoteproc.h>
> +#include <linux/remoteproc/qcom_q6v5_ipa_notify.h>
> +
> +static void
> +ipa_notify_common(struct rproc_subdev *subdev, enum qcom_rproc_event event)
> +{
> +	struct qcom_rproc_ipa_notify *ipa_notify;
> +	qcom_ipa_notify_t notify;
> +
> +	ipa_notify = container_of(subdev, struct qcom_rproc_ipa_notify, subdev);
> +	notify = ipa_notify->notify;
> +	if (notify)
> +		notify(ipa_notify->data, event);
> +}
> +
> +static int ipa_notify_prepare(struct rproc_subdev *subdev)
> +{
> +	ipa_notify_common(subdev, MODEM_STARTING);
> +
> +	return 0;
> +}
> +
> +static int ipa_notify_start(struct rproc_subdev *subdev)
> +{
> +	ipa_notify_common(subdev, MODEM_RUNNING);
> +
> +	return 0;
> +}
> +
> +static void ipa_notify_stop(struct rproc_subdev *subdev, bool crashed)
> +
> +{
> +	ipa_notify_common(subdev, crashed ? MODEM_CRASHED : MODEM_STOPPING);
> +}
> +
> +static void ipa_notify_unprepare(struct rproc_subdev *subdev)
> +{
> +	ipa_notify_common(subdev, MODEM_OFFLINE);
> +}
> +
> +static void ipa_notify_removing(struct rproc_subdev *subdev)
> +{
> +	ipa_notify_common(subdev, MODEM_REMOVING);
> +}
> +
> +/* Register the IPA notification subdevice with the Q6V5 MSS remoteproc */
> +void qcom_add_ipa_notify_subdev(struct rproc *rproc,
> +		struct qcom_rproc_ipa_notify *ipa_notify)
> +{
> +	ipa_notify->notify = NULL;
> +	ipa_notify->data = NULL;
> +	ipa_notify->subdev.prepare = ipa_notify_prepare;
> +	ipa_notify->subdev.start = ipa_notify_start;
> +	ipa_notify->subdev.stop = ipa_notify_stop;
> +	ipa_notify->subdev.unprepare = ipa_notify_unprepare;
> +
> +	rproc_add_subdev(rproc, &ipa_notify->subdev);
> +}
> +EXPORT_SYMBOL_GPL(qcom_add_ipa_notify_subdev);
> +
> +/* Remove the IPA notification subdevice */
> +void qcom_remove_ipa_notify_subdev(struct rproc *rproc,
> +		struct qcom_rproc_ipa_notify *ipa_notify)
> +{
> +	struct rproc_subdev *subdev = &ipa_notify->subdev;
> +
> +	ipa_notify_removing(subdev);
> +
> +	rproc_remove_subdev(rproc, subdev);
> +	ipa_notify->notify = NULL;	/* Make it obvious */
> +}
> +EXPORT_SYMBOL_GPL(qcom_remove_ipa_notify_subdev);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("Qualcomm IPA notification remoteproc subdev");
> diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
> index a1cc9cbe038f..f9ccce76e44b 100644
> --- a/drivers/remoteproc/qcom_q6v5_mss.c
> +++ b/drivers/remoteproc/qcom_q6v5_mss.c
> @@ -22,6 +22,7 @@
>  #include <linux/regmap.h>
>  #include <linux/regulator/consumer.h>
>  #include <linux/remoteproc.h>
> +#include "linux/remoteproc/qcom_q6v5_ipa_notify.h"
>  #include <linux/reset.h>
>  #include <linux/soc/qcom/mdt_loader.h>
>  #include <linux/iopoll.h>
> @@ -201,6 +202,7 @@ struct q6v5 {
>  	struct qcom_rproc_glink glink_subdev;
>  	struct qcom_rproc_subdev smd_subdev;
>  	struct qcom_rproc_ssr ssr_subdev;
> +	struct qcom_rproc_ipa_notify ipa_notify_subdev;
>  	struct qcom_sysmon *sysmon;
>  	bool need_mem_protection;
>  	bool has_alt_reset;
> @@ -1540,6 +1542,39 @@ static int q6v5_alloc_memory_region(struct q6v5 *qproc)
>  	return 0;
>  }
>
> +#if IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY)
> +
> +/* Register IPA notification function */
> +int qcom_register_ipa_notify(struct rproc *rproc, qcom_ipa_notify_t notify,
> +			     void *data)
> +{
> +	struct qcom_rproc_ipa_notify *ipa_notify;
> +	struct q6v5 *qproc = rproc->priv;
> +
> +	if (!notify)
> +		return -EINVAL;
> +
> +	ipa_notify = &qproc->ipa_notify_subdev;
> +	if (ipa_notify->notify)
> +		return -EBUSY;
> +
> +	ipa_notify->notify = notify;
> +	ipa_notify->data = data;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(qcom_register_ipa_notify);
> +
> +/* Deregister IPA notification function */
> +void qcom_deregister_ipa_notify(struct rproc *rproc)
> +{
> +	struct q6v5 *qproc = rproc->priv;
> +
> +	qproc->ipa_notify_subdev.notify = NULL;
> +}
> +EXPORT_SYMBOL_GPL(qcom_deregister_ipa_notify);
> +#endif /* !IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) */
> +
>  static int q6v5_probe(struct platform_device *pdev)
>  {
>  	const struct rproc_hexagon_res *desc;
> @@ -1664,6 +1699,7 @@ static int q6v5_probe(struct platform_device *pdev)
>  	qcom_add_glink_subdev(rproc, &qproc->glink_subdev);
>  	qcom_add_smd_subdev(rproc, &qproc->smd_subdev);
>  	qcom_add_ssr_subdev(rproc, &qproc->ssr_subdev, "mpss");
> +	qcom_add_ipa_notify_subdev(rproc, &qproc->ipa_notify_subdev);
>  	qproc->sysmon = qcom_add_sysmon_subdev(rproc, "modem", 0x12);
>  	if (IS_ERR(qproc->sysmon)) {
>  		ret = PTR_ERR(qproc->sysmon);
> @@ -1677,6 +1713,7 @@ static int q6v5_probe(struct platform_device *pdev)
>  	return 0;
>
>  detach_proxy_pds:
> +	qcom_remove_ipa_notify_subdev(qproc->rproc, &qproc->ipa_notify_subdev);
>  	q6v5_pds_detach(qproc, qproc->proxy_pds, qproc->proxy_pd_count);
>  detach_active_pds:
>  	q6v5_pds_detach(qproc, qproc->active_pds, qproc->active_pd_count);
> @@ -1693,6 +1730,7 @@ static int q6v5_remove(struct platform_device *pdev)
>  	rproc_del(qproc->rproc);
>
>  	qcom_remove_sysmon_subdev(qproc->sysmon);
> +	qcom_remove_ipa_notify_subdev(qproc->rproc, &qproc->ipa_notify_subdev);
>  	qcom_remove_glink_subdev(qproc->rproc, &qproc->glink_subdev);
>  	qcom_remove_smd_subdev(qproc->rproc, &qproc->smd_subdev);
>  	qcom_remove_ssr_subdev(qproc->rproc, &qproc->ssr_subdev);
> diff --git a/include/linux/remoteproc/qcom_q6v5_ipa_notify.h b/include/linux/remoteproc/qcom_q6v5_ipa_notify.h
> new file mode 100644
> index 000000000000..0820edc0ab7d
> --- /dev/null
> +++ b/include/linux/remoteproc/qcom_q6v5_ipa_notify.h
> @@ -0,0 +1,82 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +/* Copyright (C) 2019 Linaro Ltd. */
> +
> +#ifndef __QCOM_Q6V5_IPA_NOTIFY_H__
> +#define __QCOM_Q6V5_IPA_NOTIFY_H__
> +
> +#if IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY)

Why don't you put this guard at the places where the header is included?
Or, better yet, ensure that this include is only compiled in when
CONFIG_QCOM_Q6V5_IPA_NOTIFY is enabled.

That is a more common way to guard internal header files.
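
For example (untested, just to illustrate the first option):

	#if IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY)
	#include <linux/remoteproc/qcom_q6v5_ipa_notify.h>
	#endif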

Thanks

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 01/17] remoteproc: add IPA notification to q6v5 driver
  2020-03-06 11:49   ` Leon Romanovsky
@ 2020-03-06 13:29     ` Alex Elder
  0 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-06 13:29 UTC (permalink / raw)
  To: Leon Romanovsky
  Cc: Bjorn Andersson, Ohad Ben-Cohen, David Miller, Arnd Bergmann,
	Andy Gross, Johannes Berg, Dan Williams, Evan Green, Eric Caruso,
	Susheel Yadav Yadagiri, Chaitanya Pratapa,
	Subash Abhinov Kasiviswanathan, Rob Herring, Mark Rutland,
	Siddharth Gupta, netdev, devicetree, linux-arm-kernel,
	linux-arm-msm, linux-soc, linux-kernel

On 3/6/20 5:49 AM, Leon Romanovsky wrote:
> On Thu, Mar 05, 2020 at 10:28:15PM -0600, Alex Elder wrote:
>> Set up a subdev in the q6v5 modem remoteproc driver that generates
>> event notifications for the IPA driver to use for initialization and
>> recovery following a modem shutdown or crash.

. . .

>> diff --git a/include/linux/remoteproc/qcom_q6v5_ipa_notify.h b/include/linux/remoteproc/qcom_q6v5_ipa_notify.h
>> new file mode 100644
>> index 000000000000..0820edc0ab7d
>> --- /dev/null
>> +++ b/include/linux/remoteproc/qcom_q6v5_ipa_notify.h
>> @@ -0,0 +1,82 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +
>> +/* Copyright (C) 2019 Linaro Ltd. */
>> +
>> +#ifndef __QCOM_Q6V5_IPA_NOTIFY_H__
>> +#define __QCOM_Q6V5_IPA_NOTIFY_H__
>> +
>> +#if IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY)
> 
> Why don't you put this guard at the places where the header is included?
> Or, better yet, ensure that this include is only compiled in when
> CONFIG_QCOM_Q6V5_IPA_NOTIFY is enabled.

I did it this way so the no-op definitions resided in the same header
file if the config option is not enabled.  And the no-ops were there
so the calling code didn't have to use #ifdef.
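
Roughly this pattern (abbreviated sketch, not the exact header text):

	#if IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY)
	void qcom_add_ipa_notify_subdev(struct rproc *rproc,
			struct qcom_rproc_ipa_notify *ipa_notify);
	/* ... */
	#else /* !IS_ENABLED(CONFIG_QCOM_Q6V5_IPA_NOTIFY) */
	static inline void qcom_add_ipa_notify_subdev(struct rproc *rproc,
			struct qcom_rproc_ipa_notify *ipa_notify) { }
	/* ... */
	#endif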

I have no objection to what you suggest.  I did a quick scan for other
examples like this for guidance and found lots of examples of doing it
the way I did.

So I'm happy to change it, but would like an additional request to do
so before I do that work.

Thanks.

					-Alex

> That is a more common way to guard internal header files.
> 
> Thanks
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED)
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (16 preceding siblings ...)
  2020-03-06  4:28 ` [PATCH v2 17/17] arm64: dts: sdm845: add IPA information Alex Elder
@ 2020-03-09  5:09 ` David Miller
  2020-03-09 16:54 ` Dave Taht
  2020-04-29 23:17 ` Evan Green
  19 siblings, 0 replies; 30+ messages in thread
From: David Miller @ 2020-03-09  5:09 UTC (permalink / raw)
  To: elder
  Cc: arnd, bjorn.andersson, agross, johannes, dcbw, evgreen, ejcaruso,
	syadagir, cpratapa, subashab, robh+dt, mark.rutland, ohad,
	sidgup, netdev, devicetree, linux-arm-kernel, linux-arm-msm,
	linux-soc, linux-kernel

From: Alex Elder <elder@linaro.org>
Date: Thu,  5 Mar 2020 22:28:14 -0600

> This series presents the driver for the Qualcomm IP Accelerator (IPA).

Series applied, thank you.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED)
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (17 preceding siblings ...)
  2020-03-09  5:09 ` [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) David Miller
@ 2020-03-09 16:54 ` Dave Taht
  2020-03-12  3:09   ` Alex Elder
  2020-04-29 23:17 ` Evan Green
  19 siblings, 1 reply; 30+ messages in thread
From: Dave Taht @ 2020-03-09 16:54 UTC (permalink / raw)
  To: Alex Elder
  Cc: David Miller, Arnd Bergmann, Bjorn Andersson, Andy Gross,
	Johannes Berg, Dan Williams, Evan Green, Eric Caruso,
	Susheel Yadav Yadagiri, Chaitanya Pratapa,
	Subash Abhinov Kasiviswanathan, Rob Herring, Mark Rutland,
	Ohad Ben-Cohen, Siddharth Gupta, Linux Kernel Network Developers,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

I am happy to see this driver upstream.

>Arnd's concern was that the rmnet_data0 network device does not
>have the benefit of information about the state of the underlying
>IPA hardware in order to be effective in controlling TX flow.
>The feared result is over-buffering of TX packets (bufferbloat).
>I began working on some simple experiments to see whether (or how
>much) his concern was warranted.  But it turned out that completing
>these experiments was much more work than had been hoped.

Members of the bufferbloat project *care*, and have tools and testbeds for
exploring these issues. It would be good to establish a relationship with
the vendor, obtain hardware, and other (technical and financial) support, if
possible.

Is there any specific hardware now available (generally or in beta) that
can be obtained by us to take a harder look? A contact at linaro or QCA
willing to discuss options?

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 17/17] arm64: dts: sdm845: add IPA information
  2020-03-06  4:28 ` [PATCH v2 17/17] arm64: dts: sdm845: add IPA information Alex Elder
@ 2020-03-11 10:49   ` Jon Hunter
  2020-03-11 14:39     ` Alex Elder
  0 siblings, 1 reply; 30+ messages in thread
From: Jon Hunter @ 2020-03-11 10:49 UTC (permalink / raw)
  To: Alex Elder, Bjorn Andersson, Andy Gross
  Cc: David Miller, Arnd Bergmann, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel


On 06/03/2020 04:28, Alex Elder wrote:
> Add IPA-related nodes and definitions to "sdm845.dtsi".
> 
> Signed-off-by: Alex Elder <elder@linaro.org>
> ---
>  arch/arm64/boot/dts/qcom/sdm845.dtsi | 51 ++++++++++++++++++++++++++++
>  1 file changed, 51 insertions(+)
> 
> diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> index d42302b8889b..58fd1c611849 100644
> --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
> +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> @@ -675,6 +675,17 @@
>  			interrupt-controller;
>  			#interrupt-cells = <2>;
>  		};
> +
> +		ipa_smp2p_out: ipa-ap-to-modem {
> +			qcom,entry-name = "ipa";
> +			#qcom,smem-state-cells = <1>;
> +		};
> +
> +		ipa_smp2p_in: ipa-modem-to-ap {
> +			qcom,entry-name = "ipa";
> +			interrupt-controller;
> +			#interrupt-cells = <2>;
> +		};
>  	};
>  
>  	smp2p-slpi {
> @@ -1435,6 +1446,46 @@
>  			};
>  		};
>  
> +		ipa@1e40000 {
> +			compatible = "qcom,sdm845-ipa";
> +
> +			modem-init;
> +			modem-remoteproc = <&mss_pil>;
> +
> +			reg = <0 0x1e40000 0 0x7000>,
> +			      <0 0x1e47000 0 0x2000>,
> +			      <0 0x1e04000 0 0x2c000>;
> +			reg-names = "ipa-reg",
> +				    "ipa-shared",
> +				    "gsi";
> +
> +			interrupts-extended =
> +					<&intc 0 311 IRQ_TYPE_EDGE_RISING>,
> +					<&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
> +					<&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
> +					<&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
> +			interrupt-names = "ipa",
> +					  "gsi",
> +					  "ipa-clock-query",
> +					  "ipa-setup-ready";
> +
> +			clocks = <&rpmhcc RPMH_IPA_CLK>;
> +			clock-names = "core";
> +
> +			interconnects =
> +				<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
> +				<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
> +				<&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
> +			interconnect-names = "memory",
> +					     "imem",
> +					     "config";
> +
> +			qcom,smem-states = <&ipa_smp2p_out 0>,
> +					   <&ipa_smp2p_out 1>;
> +			qcom,smem-state-names = "ipa-clock-enabled-valid",
> +						"ipa-clock-enabled";
> +		};
> +
>  		tcsr_mutex_regs: syscon@1f40000 {
>  			compatible = "syscon";
>  			reg = <0 0x01f40000 0 0x40000>;
> 


This change is causing the following build error on today's -next ...

 DTC     arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dtb
 arch/arm64/boot/dts/qcom/sdm845.dtsi:1710.15-1748.5: ERROR (phandle_references): /soc@0/ipa@1e40000: Reference to non-existent node or label "rsc_hlos"

Cheers
Jon

-- 
nvpublic

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 15/17] soc: qcom: ipa: support build of IPA code
  2020-03-06  4:28 ` [PATCH v2 15/17] soc: qcom: ipa: support build of IPA code Alex Elder
@ 2020-03-11 10:54   ` Jon Hunter
  2020-03-11 12:33     ` Alex Elder
  0 siblings, 1 reply; 30+ messages in thread
From: Jon Hunter @ 2020-03-11 10:54 UTC (permalink / raw)
  To: Alex Elder, David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel


On 06/03/2020 04:28, Alex Elder wrote:
> Add build and Kconfig support for the Qualcomm IPA driver.
> 
> Signed-off-by: Alex Elder <elder@linaro.org>
> ---
>  drivers/net/Kconfig      |  2 ++
>  drivers/net/Makefile     |  1 +
>  drivers/net/ipa/Kconfig  | 19 +++++++++++++++++++
>  drivers/net/ipa/Makefile | 12 ++++++++++++
>  4 files changed, 34 insertions(+)
>  create mode 100644 drivers/net/ipa/Kconfig
>  create mode 100644 drivers/net/ipa/Makefile
> 
> diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
> index 66e410e58c8e..02565bc2be8a 100644
> --- a/drivers/net/Kconfig
> +++ b/drivers/net/Kconfig
> @@ -444,6 +444,8 @@ source "drivers/net/fddi/Kconfig"
>  
>  source "drivers/net/hippi/Kconfig"
>  
> +source "drivers/net/ipa/Kconfig"
> +
>  config NET_SB1000
>  	tristate "General Instruments Surfboard 1000"
>  	depends on PNP
> diff --git a/drivers/net/Makefile b/drivers/net/Makefile
> index 65967246f240..94b60800887a 100644
> --- a/drivers/net/Makefile
> +++ b/drivers/net/Makefile
> @@ -47,6 +47,7 @@ obj-$(CONFIG_ETHERNET) += ethernet/
>  obj-$(CONFIG_FDDI) += fddi/
>  obj-$(CONFIG_HIPPI) += hippi/
>  obj-$(CONFIG_HAMRADIO) += hamradio/
> +obj-$(CONFIG_QCOM_IPA) += ipa/
>  obj-$(CONFIG_PLIP) += plip/
>  obj-$(CONFIG_PPP) += ppp/
>  obj-$(CONFIG_PPP_ASYNC) += ppp/
> diff --git a/drivers/net/ipa/Kconfig b/drivers/net/ipa/Kconfig
> new file mode 100644
> index 000000000000..b8cb7cadbf75
> --- /dev/null
> +++ b/drivers/net/ipa/Kconfig
> @@ -0,0 +1,19 @@
> +config QCOM_IPA
> +	tristate "Qualcomm IPA support"
> +	depends on ARCH_QCOM && 64BIT && NET
> +	select QCOM_QMI_HELPERS
> +	select QCOM_MDT_LOADER
> +	default QCOM_Q6V5_COMMON
> +	help
> +	  Choose Y or M here to include support for the Qualcomm
> +	  IP Accelerator (IPA), a hardware block present in some
> +	  Qualcomm SoCs.  The IPA is a programmable protocol processor
> +	  that is capable of generic hardware handling of IP packets,
> +	  including routing, filtering, and NAT.  Currently the IPA
> +	  driver supports only basic transport of network traffic
> +	  between the AP and modem, on the Qualcomm SDM845 SoC.
> +
> +	  Note that if selected, the selection type must match that
> +	  of QCOM_Q6V5_COMMON (Y or M).
> +
> +	  If unsure, say N.
> diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
> new file mode 100644
> index 000000000000..afe5df1e6eee
> --- /dev/null
> +++ b/drivers/net/ipa/Makefile
> @@ -0,0 +1,12 @@
> +# Un-comment the next line if you want to validate configuration data
> +#ccflags-y		+=	-DIPA_VALIDATE
> +
> +obj-$(CONFIG_QCOM_IPA)	+=	ipa.o
> +
> +ipa-y			:=	ipa_main.o ipa_clock.o ipa_reg.o ipa_mem.o \
> +				ipa_table.o ipa_interrupt.o gsi.o gsi_trans.o \
> +				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
> +				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
> +				ipa_qmi.o ipa_qmi_msg.o
> +
> +ipa-y			+=	ipa_data-sdm845.o ipa_data-sc7180.o


This patch is also causing build issues on the current -next ...

  CC [M]  drivers/net/ipa/gsi.o
  In file included from include/linux/build_bug.h:5:0,
                   from include/linux/bitfield.h:10,
                   from drivers/net/ipa/gsi.c:9:
  drivers/net/ipa/gsi.c: In function ‘gsi_validate_build’:
  drivers/net/ipa/gsi.c:220:39: error: implicit declaration of function ‘field_max’ [-Werror=implicit-function-declaration]
    BUILD_BUG_ON(GSI_RING_ELEMENT_SIZE > field_max(ELEMENT_SIZE_FMASK));
                                         ^
  include/linux/compiler.h:374:9: note: in definition of macro ‘__compiletime_assert’
     if (!(condition))     \
           ^~~~~~~~~
  include/linux/compiler.h:394:2: note: in expansion of macro ‘_compiletime_assert’
    _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
    ^~~~~~~~~~~~~~~~~~~
  include/linux/build_bug.h:39:37: note: in expansion of macro ‘compiletime_assert’
   #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
                                       ^~~~~~~~~~~~~~~~~~
  include/linux/build_bug.h:50:2: note: in expansion of macro ‘BUILD_BUG_ON_MSG’
    BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
    ^~~~~~~~~~~~~~~~
  drivers/net/ipa/gsi.c:220:2: note: in expansion of macro ‘BUILD_BUG_ON’
    BUILD_BUG_ON(GSI_RING_ELEMENT_SIZE > field_max(ELEMENT_SIZE_FMASK));
    ^~~~~~~~~~~~

Jon 

-- 
nvpublic

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 15/17] soc: qcom: ipa: support build of IPA code
  2020-03-11 10:54   ` Jon Hunter
@ 2020-03-11 12:33     ` Alex Elder
  0 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-11 12:33 UTC (permalink / raw)
  To: Jon Hunter, David Miller, Arnd Bergmann
  Cc: Bjorn Andersson, Andy Gross, Johannes Berg, Dan Williams,
	Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

On 3/11/20 5:54 AM, Jon Hunter wrote:
> 
> On 06/03/2020 04:28, Alex Elder wrote:
>> Add build and Kconfig support for the Qualcomm IPA driver.
>>
>> Signed-off-by: Alex Elder <elder@linaro.org>
>> ---
>>  drivers/net/Kconfig      |  2 ++
>>  drivers/net/Makefile     |  1 +
>>  drivers/net/ipa/Kconfig  | 19 +++++++++++++++++++
>>  drivers/net/ipa/Makefile | 12 ++++++++++++
>>  4 files changed, 34 insertions(+)
>>  create mode 100644 drivers/net/ipa/Kconfig
>>  create mode 100644 drivers/net/ipa/Makefile
>>
>> diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
>> index 66e410e58c8e..02565bc2be8a 100644
>> --- a/drivers/net/Kconfig
>> +++ b/drivers/net/Kconfig
>> @@ -444,6 +444,8 @@ source "drivers/net/fddi/Kconfig"
>>  
>>  source "drivers/net/hippi/Kconfig"
>>  
>> +source "drivers/net/ipa/Kconfig"
>> +
>>  config NET_SB1000
>>  	tristate "General Instruments Surfboard 1000"
>>  	depends on PNP
>> diff --git a/drivers/net/Makefile b/drivers/net/Makefile
>> index 65967246f240..94b60800887a 100644
>> --- a/drivers/net/Makefile
>> +++ b/drivers/net/Makefile
>> @@ -47,6 +47,7 @@ obj-$(CONFIG_ETHERNET) += ethernet/
>>  obj-$(CONFIG_FDDI) += fddi/
>>  obj-$(CONFIG_HIPPI) += hippi/
>>  obj-$(CONFIG_HAMRADIO) += hamradio/
>> +obj-$(CONFIG_QCOM_IPA) += ipa/
>>  obj-$(CONFIG_PLIP) += plip/
>>  obj-$(CONFIG_PPP) += ppp/
>>  obj-$(CONFIG_PPP_ASYNC) += ppp/
>> diff --git a/drivers/net/ipa/Kconfig b/drivers/net/ipa/Kconfig
>> new file mode 100644
>> index 000000000000..b8cb7cadbf75
>> --- /dev/null
>> +++ b/drivers/net/ipa/Kconfig
>> @@ -0,0 +1,19 @@
>> +config QCOM_IPA
>> +	tristate "Qualcomm IPA support"
>> +	depends on ARCH_QCOM && 64BIT && NET
>> +	select QCOM_QMI_HELPERS
>> +	select QCOM_MDT_LOADER
>> +	default QCOM_Q6V5_COMMON
>> +	help
>> +	  Choose Y or M here to include support for the Qualcomm
>> +	  IP Accelerator (IPA), a hardware block present in some
>> +	  Qualcomm SoCs.  The IPA is a programmable protocol processor
>> +	  that is capable of generic hardware handling of IP packets,
>> +	  including routing, filtering, and NAT.  Currently the IPA
>> +	  driver supports only basic transport of network traffic
>> +	  between the AP and modem, on the Qualcomm SDM845 SoC.
>> +
>> +	  Note that if selected, the selection type must match that
>> +	  of QCOM_Q6V5_COMMON (Y or M).
>> +
>> +	  If unsure, say N.
>> diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
>> new file mode 100644
>> index 000000000000..afe5df1e6eee
>> --- /dev/null
>> +++ b/drivers/net/ipa/Makefile
>> @@ -0,0 +1,12 @@
>> +# Un-comment the next line if you want to validate configuration data
>> +#ccflags-y		+=	-DIPA_VALIDATE
>> +
>> +obj-$(CONFIG_QCOM_IPA)	+=	ipa.o
>> +
>> +ipa-y			:=	ipa_main.o ipa_clock.o ipa_reg.o ipa_mem.o \
>> +				ipa_table.o ipa_interrupt.o gsi.o gsi_trans.o \
>> +				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
>> +				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
>> +				ipa_qmi.o ipa_qmi_msg.o
>> +
>> +ipa-y			+=	ipa_data-sdm845.o ipa_data-sc7180.o

Yes, a needed patch defining field_max() is missing.  I sent
an updated request to include it in net-next to resolve this
issue.

  https://lore.kernel.org/netdev/20200311024240.26834-1-elder@linaro.org/
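
(For context, the helper that patch adds computes the largest value a
bit-field mask can hold.  A rough sketch of the idea -- not the exact
definition going into <linux/bitfield.h>:

	/* shift the mask down to bit 0 to get the field's maximum value */
	#define example_field_max(mask)	((mask) >> (ffs(mask) - 1))

For a hypothetical GENMASK(23, 16) mask that yields 0xff, so the failing
check just verifies that GSI_RING_ELEMENT_SIZE fits in the hardware
register field.  Without the patch above, net-next has no field_max()
at all, hence the implicit-declaration error.)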

Thank you for pointing it out.

					-Alex

> This patch is also causing build issues on the current -next ...
> 
>   CC [M]  drivers/net/ipa/gsi.o
>   In file included from include/linux/build_bug.h:5:0,
>                    from include/linux/bitfield.h:10,
>                    from drivers/net/ipa/gsi.c:9:
>   drivers/net/ipa/gsi.c: In function ‘gsi_validate_build’:
>   drivers/net/ipa/gsi.c:220:39: error: implicit declaration of function ‘field_max’ [-Werror=implicit-function-declaration]
>     BUILD_BUG_ON(GSI_RING_ELEMENT_SIZE > field_max(ELEMENT_SIZE_FMASK));
>                                          ^
>   include/linux/compiler.h:374:9: note: in definition of macro ‘__compiletime_assert’
>      if (!(condition))     \
>            ^~~~~~~~~
>   include/linux/compiler.h:394:2: note: in expansion of macro ‘_compiletime_assert’
>     _compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
>     ^~~~~~~~~~~~~~~~~~~
>   include/linux/build_bug.h:39:37: note: in expansion of macro ‘compiletime_assert’
>    #define BUILD_BUG_ON_MSG(cond, msg) compiletime_assert(!(cond), msg)
>                                        ^~~~~~~~~~~~~~~~~~
>   include/linux/build_bug.h:50:2: note: in expansion of macro ‘BUILD_BUG_ON_MSG’
>     BUILD_BUG_ON_MSG(condition, "BUILD_BUG_ON failed: " #condition)
>     ^~~~~~~~~~~~~~~~
>   drivers/net/ipa/gsi.c:220:2: note: in expansion of macro ‘BUILD_BUG_ON’
>     BUILD_BUG_ON(GSI_RING_ELEMENT_SIZE > field_max(ELEMENT_SIZE_FMASK));
>     ^~~~~~~~~~~~
> 
> Jon 
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 17/17] arm64: dts: sdm845: add IPA information
  2020-03-11 10:49   ` Jon Hunter
@ 2020-03-11 14:39     ` Alex Elder
  2020-03-11 19:02       ` Bjorn Andersson
  0 siblings, 1 reply; 30+ messages in thread
From: Alex Elder @ 2020-03-11 14:39 UTC (permalink / raw)
  To: Jon Hunter, Bjorn Andersson, Andy Gross, David Miller
  Cc: Arnd Bergmann, Johannes Berg, Dan Williams, Evan Green,
	Eric Caruso, Susheel Yadav Yadagiri, Chaitanya Pratapa,
	Subash Abhinov Kasiviswanathan, Rob Herring, Mark Rutland,
	Ohad Ben-Cohen, Siddharth Gupta, netdev, devicetree,
	linux-arm-kernel, linux-arm-msm, linux-soc, linux-kernel

On 3/11/20 5:49 AM, Jon Hunter wrote:
> 
> On 06/03/2020 04:28, Alex Elder wrote:
>> Add IPA-related nodes and definitions to "sdm845.dtsi".
>>
>> Signed-off-by: Alex Elder <elder@linaro.org>
>> ---
>>  arch/arm64/boot/dts/qcom/sdm845.dtsi | 51 ++++++++++++++++++++++++++++
>>  1 file changed, 51 insertions(+)
>>
>> diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>> index d42302b8889b..58fd1c611849 100644
>> --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
>> +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
>> @@ -675,6 +675,17 @@
>>  			interrupt-controller;
>>  			#interrupt-cells = <2>;
>>  		};
>> +
>> +		ipa_smp2p_out: ipa-ap-to-modem {
>> +			qcom,entry-name = "ipa";
>> +			#qcom,smem-state-cells = <1>;
>> +		};
>> +
>> +		ipa_smp2p_in: ipa-modem-to-ap {
>> +			qcom,entry-name = "ipa";
>> +			interrupt-controller;
>> +			#interrupt-cells = <2>;
>> +		};
>>  	};
>>  
>>  	smp2p-slpi {
>> @@ -1435,6 +1446,46 @@
>>  			};
>>  		};
>>  
>> +		ipa@1e40000 {
>> +			compatible = "qcom,sdm845-ipa";
>> +
>> +			modem-init;
>> +			modem-remoteproc = <&mss_pil>;
>> +
>> +			reg = <0 0x1e40000 0 0x7000>,
>> +			      <0 0x1e47000 0 0x2000>,
>> +			      <0 0x1e04000 0 0x2c000>;
>> +			reg-names = "ipa-reg",
>> +				    "ipa-shared",
>> +				    "gsi";
>> +
>> +			interrupts-extended =
>> +					<&intc 0 311 IRQ_TYPE_EDGE_RISING>,
>> +					<&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
>> +					<&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
>> +					<&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
>> +			interrupt-names = "ipa",
>> +					  "gsi",
>> +					  "ipa-clock-query",
>> +					  "ipa-setup-ready";
>> +
>> +			clocks = <&rpmhcc RPMH_IPA_CLK>;
>> +			clock-names = "core";
>> +
>> +			interconnects =
>> +				<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
>> +				<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
>> +				<&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
>> +			interconnect-names = "memory",
>> +					     "imem",
>> +					     "config";
>> +
>> +			qcom,smem-states = <&ipa_smp2p_out 0>,
>> +					   <&ipa_smp2p_out 1>;
>> +			qcom,smem-state-names = "ipa-clock-enabled-valid",
>> +						"ipa-clock-enabled";
>> +		};
>> +
>>  		tcsr_mutex_regs: syscon@1f40000 {
>>  			compatible = "syscon";
>>  			reg = <0 0x01f40000 0 0x40000>;
>>
> 
> 
> This change is causing the following build error on today's -next ...
> 
>  DTC     arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dtb
>  arch/arm64/boot/dts/qcom/sdm845.dtsi:1710.15-1748.5: ERROR (phandle_references): /soc@0/ipa@1e40000: Reference to non-existent node or label "rsc_hlos"

This problem arises because a commit in the Qualcomm SoC tree affects
"arch/arm64/boot/dts/qcom/sdm845.dtsi", changing the interconnect provider
node(s) used by IPA:
  b303f9f0050b arm64: dts: sdm845: Redefine interconnect provider DT nodes

I will send out a patch today that updates the IPA node in "sdm845.dtsi"
to correct that.

In the mean time, David, perhaps you should revert this change in net-next:
  9cc5ae125f0e arm64: dts: sdm845: add IPA information
and let me work out fixing "sdm845.dtsi" with Andy and Bjorn in the
Qualcomm tree.

Thanks.

				-Alex

> Cheers
> Jon
> 


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 17/17] arm64: dts: sdm845: add IPA information
  2020-03-11 14:39     ` Alex Elder
@ 2020-03-11 19:02       ` Bjorn Andersson
  0 siblings, 0 replies; 30+ messages in thread
From: Bjorn Andersson @ 2020-03-11 19:02 UTC (permalink / raw)
  To: David Miller, Alex Elder
  Cc: Jon Hunter, Andy Gross, Arnd Bergmann, Johannes Berg,
	Dan Williams, Evan Green, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

On Wed 11 Mar 07:39 PDT 2020, Alex Elder wrote:

> On 3/11/20 5:49 AM, Jon Hunter wrote:
> > 
> > On 06/03/2020 04:28, Alex Elder wrote:
> >> Add IPA-related nodes and definitions to "sdm845.dtsi".
> >>
> >> Signed-off-by: Alex Elder <elder@linaro.org>
> >> ---
> >>  arch/arm64/boot/dts/qcom/sdm845.dtsi | 51 ++++++++++++++++++++++++++++
> >>  1 file changed, 51 insertions(+)
> >>
> >> diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> >> index d42302b8889b..58fd1c611849 100644
> >> --- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
> >> +++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
> >> @@ -675,6 +675,17 @@
> >>  			interrupt-controller;
> >>  			#interrupt-cells = <2>;
> >>  		};
> >> +
> >> +		ipa_smp2p_out: ipa-ap-to-modem {
> >> +			qcom,entry-name = "ipa";
> >> +			#qcom,smem-state-cells = <1>;
> >> +		};
> >> +
> >> +		ipa_smp2p_in: ipa-modem-to-ap {
> >> +			qcom,entry-name = "ipa";
> >> +			interrupt-controller;
> >> +			#interrupt-cells = <2>;
> >> +		};
> >>  	};
> >>  
> >>  	smp2p-slpi {
> >> @@ -1435,6 +1446,46 @@
> >>  			};
> >>  		};
> >>  
> >> +		ipa@1e40000 {
> >> +			compatible = "qcom,sdm845-ipa";
> >> +
> >> +			modem-init;
> >> +			modem-remoteproc = <&mss_pil>;
> >> +
> >> +			reg = <0 0x1e40000 0 0x7000>,
> >> +			      <0 0x1e47000 0 0x2000>,
> >> +			      <0 0x1e04000 0 0x2c000>;
> >> +			reg-names = "ipa-reg",
> >> +				    "ipa-shared",
> >> +				    "gsi";
> >> +
> >> +			interrupts-extended =
> >> +					<&intc 0 311 IRQ_TYPE_EDGE_RISING>,
> >> +					<&intc 0 432 IRQ_TYPE_LEVEL_HIGH>,
> >> +					<&ipa_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
> >> +					<&ipa_smp2p_in 1 IRQ_TYPE_EDGE_RISING>;
> >> +			interrupt-names = "ipa",
> >> +					  "gsi",
> >> +					  "ipa-clock-query",
> >> +					  "ipa-setup-ready";
> >> +
> >> +			clocks = <&rpmhcc RPMH_IPA_CLK>;
> >> +			clock-names = "core";
> >> +
> >> +			interconnects =
> >> +				<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_EBI1>,
> >> +				<&rsc_hlos MASTER_IPA &rsc_hlos SLAVE_IMEM>,
> >> +				<&rsc_hlos MASTER_APPSS_PROC &rsc_hlos SLAVE_IPA_CFG>;
> >> +			interconnect-names = "memory",
> >> +					     "imem",
> >> +					     "config";
> >> +
> >> +			qcom,smem-states = <&ipa_smp2p_out 0>,
> >> +					   <&ipa_smp2p_out 1>;
> >> +			qcom,smem-state-names = "ipa-clock-enabled-valid",
> >> +						"ipa-clock-enabled";
> >> +		};
> >> +
> >>  		tcsr_mutex_regs: syscon@1f40000 {
> >>  			compatible = "syscon";
> >>  			reg = <0 0x01f40000 0 0x40000>;
> >>
> > 
> > 
> > This change is causing the following build error on today's -next ...
> > 
> >  DTC     arch/arm64/boot/dts/qcom/sdm850-lenovo-yoga-c630.dtb
> >  arch/arm64/boot/dts/qcom/sdm845.dtsi:1710.15-1748.5: ERROR (phandle_references): /soc@0/ipa@1e40000: Reference to non-existent node or label "rsc_hlos"
> 
> This problem arises because a commit in the Qualcomm SoC tree affects
> "arch/arm64/boot/dts/qcom/sdm845.dtsi", changing the interconnect provider
> node(s) used by IPA:
>   b303f9f0050b arm64: dts: sdm845: Redefine interconnect provider DT nodes
> 
> I will send out a patch today that updates the IPA node in "sdm845.dtsi"
> to correct that.
> 
> In the mean time, David, perhaps you should revert this change in net-next:
>   9cc5ae125f0e arm64: dts: sdm845: add IPA information
> and let me work out fixing "sdm845.dtsi" with Andy and Bjorn in the
> Qualcomm tree.
> 

Reverting this in net-next and applying it in our tree sounds like the
easiest path forward, and avoids further conflicts down the road.

David, are you onboard with this?

Regards,
Bjorn

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED)
  2020-03-09 16:54 ` Dave Taht
@ 2020-03-12  3:09   ` Alex Elder
  0 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2020-03-12  3:09 UTC (permalink / raw)
  To: Dave Taht
  Cc: David Miller, Arnd Bergmann, Bjorn Andersson, Andy Gross,
	Johannes Berg, Dan Williams, Evan Green, Eric Caruso,
	Susheel Yadav Yadagiri, Chaitanya Pratapa,
	Subash Abhinov Kasiviswanathan, Rob Herring, Mark Rutland,
	Ohad Ben-Cohen, Siddharth Gupta, Linux Kernel Network Developers,
	devicetree, linux-arm-kernel, linux-arm-msm, linux-soc,
	linux-kernel

On 3/9/20 11:54 AM, Dave Taht wrote:
> I am happy to see this driver upstream.
> 
>> Arnd's concern was that the rmnet_data0 network device does not
>> have the benefit of information about the state of the underlying
>> IPA hardware in order to be effective in controlling TX flow.
>> The feared result is over-buffering of TX packets (bufferbloat).
>> I began working on some simple experiments to see whether (or how
>> much) his concern was warranted.  But it turned out that completing
>> these experiments was much more work than had been hoped.
> 
> Members of the bufferbloat project *care*, and have tools and testbeds for
> exploring these issues. It would be good to establish a relationship with
> the vendor, obtain hardware, and other (technical and financial) support, if
> possible.
> 
> Is there any specific hardware now available (generally or in beta) that
> can be obtained by us to take a harder look? A contact at linaro or QCA
> willing discuss options?

There exists some hardware that could be used, but at the moment I have
not ported this code to operate on it.  It is a current effort however,
and I will be glad to keep you in the loop on progress.  There are a
couple of target environments we'd like to support but until last week
the primary goal was inclusion in the upstream tree.

I will follow up with you after the dust settles a little bit with
this patch series, maybe in a week or so.  In the mean time I'll
also find out whether there are any other resources (people and/or
hardware) available.

					-Alex

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED)
  2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
                   ` (18 preceding siblings ...)
  2020-03-09 16:54 ` Dave Taht
@ 2020-04-29 23:17 ` Evan Green
  19 siblings, 0 replies; 30+ messages in thread
From: Evan Green @ 2020-04-29 23:17 UTC (permalink / raw)
  To: Alex Elder
  Cc: David Miller, Arnd Bergmann, Bjorn Andersson, Andy Gross,
	Johannes Berg, Dan Williams, Eric Caruso, Susheel Yadav Yadagiri,
	Chaitanya Pratapa, Subash Abhinov Kasiviswanathan, Rob Herring,
	Mark Rutland, Ohad Ben-Cohen, Siddharth Gupta, netdev,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS,
	linux-arm Mailing List, linux-arm-msm, linux-soc, LKML

On Thu, Mar 5, 2020 at 8:28 PM Alex Elder <elder@linaro.org> wrote:
>
> This series presents the driver for the Qualcomm IP Accelerator (IPA).
>
> This is version 2 of this updated series.  It includes the following
> small changes since the previous version:
>   - Now based on net-next instead of v5.6-rc
>   - Config option now named CONFIG_QCOM_IPA
>   - Some minor cleanup in the GSI code
>   - Small change to replenish logic
>   - No longer depends on remoteproc bug fixes
> What follows is the basically same explanation as was posted previously.
>
>                                         -Alex
>
> I have posted earlier versions of this code previously, but it has
> undergone quite a bit of development since the last time, so rather
> than calling it "version 3" I'm just treating it as a new series
> (indicating it's been updated in this message).  The fast/data path
> is the same as before.  But the driver now (nearly) supports a
> second platform, its transaction handling has been generalized
> and improved, and modem activities are now handled in a more
> unified way.
>
> This series is available (based on net-next in branch "ipa_updated-v2"
> in this git repository:
>   https://git.linaro.org/people/alex.elder/linux.git
>
> The branch depends on other one other small patch that I sent out
> for review earlier.
>   https://lore.kernel.org/lkml/20200306042302.17602-1-elder@linaro.org/
>

I realize this is all already in (yay!), but it took me a long time to
get around to fully reading this driver. I'll paste my notes here for
posterity or possible future patches. Overall the driver seemed well
documented and thoughtfully written. As someone who has seen the old
downstream IPA driver (though I didn't look long as my brain started
hurting), I greatly appreciate the work required by Alex to polish
this all up. So firstly, thanks Alex!

Onto the notes. There are a couple themes I noticed. The driver seems
occasionally to be unnecessarily layer-caked. I noticed "could be
inlined" as a common refrain in my feedback. There are also a couple
places with hand-rolled refcounting, atomic exchanges, and odd
mutexes. I haven't fully digested those to be able to know how to get
rid of them, but I'll point them out as something that "doesn't smell
quite right".

Acronyms (for my own benefit):
ee - execution environment
ep - endpoint
er - endpoint or route ID
rt - resource type
dcd - Dynamic clock division (request to GCC to turn you off)
bcr - Backwards compatibility register
comp - Core master port
holb - ???

ipa_main.c:
What is IPA_VALIDATION? Can this just be on always or removed?
otherwise it will likely bit rot.
I'd like to see this suspend_ref go away.
ipa_reg.c can be inlined
ipa_mem_init can be inlined.


IPA_NOTIFY:
Shouldn't CONFIG_IPA depend on IPA_NOTIFY?


ipa_data.h
Why are ipa_resource_src and ipa_resource_dst separate structures?
maybe the extern globals at the bottom should just be moved into ipa_main.c


ipa_endpoint.h
Add a note for enum ipa_endpoint_name indicating who is TXing and RXing


ipa_data-sc7180.c
Where is IPA_ENDPOINT_MODEM_LAN_TX definition?


ipa_clock.c
IPA_CORE_CLOCK_RATE - Should probably be specified in DT as a fixed
frequency rather than here in code.
Interconnect bandwidths - Are these a function of the core clock rate?
This may be fine for the initial version, but is there any way to
derive the bandwidth requirement?
ipa_interconnect_init_one - Probably best to just inline this
ipa_clock_get_additional - Seems sketchy, would like to remove this
Overall don't like the homebrew reference counting here. Would runtime
PM help you do this?
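
For illustration, the runtime-PM shape I have in mind -- just a sketch,
assuming ipa_clock_enable()/ipa_clock_disable() were exposed for this:

	static int ipa_runtime_suspend(struct device *dev)
	{
		struct ipa *ipa = dev_get_drvdata(dev);

		ipa_clock_disable(ipa->clock);	/* core clock + interconnects off */

		return 0;
	}

	static int ipa_runtime_resume(struct device *dev)
	{
		struct ipa *ipa = dev_get_drvdata(dev);

		return ipa_clock_enable(ipa->clock);	/* and back on */
	}

These would be wired up with SET_RUNTIME_PM_OPS() in a dev_pm_ops table,
and callers would use pm_runtime_get_sync()/pm_runtime_put() in place of
the hand-rolled ipa_clock_get()/ipa_clock_put() counting.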


ipa_interrupt.h
I'd like to get rid of ipa_interrupt_add and ipa_interrupt_remove.
Seems like there's no need for these to be dynamically added, it's all
one driver.


ipa_interrupt.c
Why does ipa_interrupt_setup() need to dynamically allocate the
structure, can't we just embed it in struct ipa?
Without the kzalloc, ipa_interrupt_setup() and
ipa_interrupt_teardown() are simple enough they can probably be
inlined (at least teardown for sure).
Interrupt processing seems a little odd. What I would have expected is:
Hard ISR reads pending bits, and immediately writes all pending bits
to quiesce them. Save bitmask of pending bits, and send to the
threaded handler. Threaded handler then reads and clears pending bits
out, and acts on any.
Fixes interrupt storm in ipa_isr() if an unexpected interrupt comes in
but an expected interrupt is also pending.
Avoids multiple register writes (one for each bit) in ipa_interrupt_process()
Saves all the register reads in ipa_interrupt_process_all(). That
additional read in the loop seems like it shouldn't be there either
way.
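
Roughly what I mean, as an untested sketch (it assumes a new atomic_t
"pending" field in struct ipa_interrupt; clock get/put elided, and it
glosses over the TX_SUSPEND source register the current code reads
before clearing):

	static irqreturn_t ipa_isr(int irq, void *dev_id)
	{
		struct ipa_interrupt *interrupt = dev_id;
		struct ipa *ipa = interrupt->ipa;
		u32 mask;

		/* Read, and immediately quiesce, everything that's pending */
		mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
		iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);

		/* Hand only the enabled conditions to the thread */
		mask &= interrupt->enabled;
		if (!mask)
			return IRQ_HANDLED;
		atomic_or(mask, &interrupt->pending);

		return IRQ_WAKE_THREAD;
	}

	static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
	{
		struct ipa_interrupt *interrupt = dev_id;
		u32 pending = atomic_xchg(&interrupt->pending, 0);

		while (pending) {
			u32 ipa_irq = __ffs(pending);

			pending ^= BIT(ipa_irq);
			ipa_interrupt_process(interrupt, ipa_irq);
		}

		return IRQ_HANDLED;
	}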


ipa_mem.h
Is IPA_SHARED_MEM_SIZE supposed to be defined? It's mentioned in the comment.
Comment says the number of canaries is the same for all IPA versions,
but ipa_data-sdm845.c and ipa_data-sc7180.c seem to have different
canary counts for IPA_MEM_UC_INFO?
Should the number of canaries really be part of the chipset-specific
config info if it's never going to change?
Do the canary values eat into the previous region? Can we add a
warning to ensure we don't write canary values off the beginning of
the memory region?


ipa_mem.c
Maybe remove ipa_mem_teardown() if we're not planning to add anything
to it soon, or inline it in the header for now.
Does ipa_mem_zero_modem() erase canary values previously set up?


gsi.h
Why make gsi_evt_ring_state 0xf? Remove assignments and let enum do its thing.
enum gsi_ee_id - Probably worth commenting that this defines the
layout of the per-EE register regions, so rearranging this would
horribly break our access to hardware.


gsi_reg.h
What is gsi v2.0? Is that the same as IPA 4.0?
Why do the channel macros have things like CH_C and EE_N in them? Why
not just CH and EE? Oh, I also see CH_E, what's that?


gsi.c:
enum gsi_err_code: Where's 0x7?
gsi_channel_deprogram(): delete
gsi_channel_update(): I'm worried about this refcount thing, how does it work?
gsi_event_bitmap_init() can be inlined
gsi_evt_ring_setup() and gsi_evt_ring_teardown() can be removed
gsi_teardown(): inline
gsi_evt_ring_exit(): remove


ipa_gsi.h:
Comment for ipa_gsi_channel_tx_completed has wrong function name copypasta.


ipa_gsi.c:
This is an interesting mezzanine interface, it looks like it was
designed to keep GSI code from calling IPA code directly. Why is that?
Could these at least be inlined into the ipa_gsi.h?


gsi_trans.h:
Why is it important that struct gsi_trans be < 128 bytes?


gsi_trans.c:
gsi_tre_type - Should this be in a header?
TRE_FLAGS_ - Should these be in a header? Also, replace GENMASK(x,x)
with BIT(x). TRE_FLAGS_IEOB_FMASK is never used (which is fine, but
should it be?)
gsi_trans_tre_reserve() - Why atomic_try_cmpxchg? What's the
difference between that and atomic_cmpxchg?
gsi_tre_len_opcode() - If len is truncated to 16 bits, why is u32
passed in? Is len sometimes used as 32 bits?
gsi_trans_tre_fill() - If it doesn't do a 16-byte atomic write, is
this a problem? Could the controller see a half-baked TRE?
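
(Answering my own atomic_try_cmpxchg() question, as I understand it --
a generic illustration, not the driver's actual reserve loop; "v" is an
atomic_t and "old"/"cur" are ints:

	/* atomic_cmpxchg() returns the value it observed; the caller
	 * compares it against the expected value to see whether the
	 * swap took effect, and retries with the observed value if not.
	 */
	old = atomic_read(&v);
	for (;;) {
		cur = atomic_cmpxchg(&v, old, old - 1);
		if (cur == old)
			break;
		old = cur;
	}

	/* atomic_try_cmpxchg() returns a bool and writes the observed
	 * value back through "old", so the retry loop is tighter and
	 * usually compiles a bit better.
	 */
	old = atomic_read(&v);
	do {
	} while (!atomic_try_cmpxchg(&v, &old, old - 1));

so functionally they are equivalent; try_cmpxchg is just the more
compact idiom for retry loops.)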


ipa_endpoint.c:
What is HOLB timer?


ipa_table.c:
ipa_table_valid() - This just runs all 3-bit possibilities. Could use
flags and a loop instead.
ipa_table_teardown() - Remove?


ipa_cmd.c:
ipa_cmd_tag_process_add() - What happened here? Is this just
functionality we're not using right now?


ipa_modem.c
ipa_start_xmit() - Could returning BUSY result in an infinite loop if
something goes wrong in the lower layers?
ipa_modem_start() - Shouldn't we print some errors if the state
variable has an unexpected value (ie not RUNNING)? In those cases we
are likely not in a good place.


ipa_qmi.c:
ipa_qmi_indication() could be inlined
init_modem_driver_req() use of static means this can never run
concurrently with itself, right? Also if the request gets stuck in
qmi_txn_wait() you're hosed.


ipa_qmi_msg.c
You could macro-ize the initialization of these elements, which would
make things way shorter, and probably easier to read. I'm imagining
for instance the first element in the file could be reduced to
IPA_QMI_ELEM(QMI_OPT_FLAG, 1, struct ipa_indication_register_req,
master_driver_init_complete_valid, 0x10)
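
Something like this is what I'm picturing -- illustrative only, with
field names following struct qmi_elem_info in <linux/soc/qcom/qmi.h>;
array-valued elements would need a second variant:

	#define IPA_QMI_ELEM(type, len, container, field, tlv)			\
		{								\
			.data_type	= (type),				\
			.elem_len	= (len),				\
			.elem_size	= sizeof_field(container, field),	\
			.array_type	= NO_ARRAY,				\
			.tlv_type	= (tlv),				\
			.offset		= offsetof(container, field),		\
		}

so that first element in the file collapses to a single entry:

	IPA_QMI_ELEM(QMI_OPT_FLAG, 1, struct ipa_indication_register_req,
		     master_driver_init_complete_valid, 0x10),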


ipa_smp2p.c:
s/Motex/Mutex/
Actually I don't get why the mutex is needed at all. It's certainly
not needed in ipa_smp2p_disable() (stores are already atomic), and
threaded irqs already have mutual exclusion. Or are you trying to make
sure ipa_smp2p_disable() doesn't return until
ipa_smp2p_modem_setup_ready_isr() has fully completed? If that's
really why, you should explain that's what it's doing and why it's
necessary.
Thinking more about it, why can't you just actually disable the irq?
That calls synchronize_irq, which will flush out any instances of the
irq running. Then no mutex necessary!
ipa_smp2p_irq_init(), and _exit() can be inlined.
I'd love to see clock_on and the weird reference counting go away. Is
that really necessary?
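
To sketch the disable_irq() idea above (field names here are guesses,
not the actual struct ipa_smp2p layout):

	static void ipa_smp2p_disable(struct ipa_smp2p *smp2p)
	{
		if (smp2p->disabled)
			return;

		/* disable_irq() waits, via synchronize_irq(), for any
		 * running handler to finish, so once it returns the
		 * setup-ready ISR can no longer be executing -- no mutex
		 * needed to exclude it.
		 */
		disable_irq(smp2p->setup_ready_irq);
		smp2p->disabled = true;
	}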

^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 05/17] soc: qcom: ipa: clocking, interrupts, and memory
  2019-05-31  3:53 [PATCH v2 00/17] net: introduce Qualcomm IPA driver Alex Elder
@ 2019-05-31  3:53 ` Alex Elder
  0 siblings, 0 replies; 30+ messages in thread
From: Alex Elder @ 2019-05-31  3:53 UTC (permalink / raw)
  To: davem, arnd, bjorn.andersson, ilias.apalodimas
  Cc: evgreen, benchan, ejcaruso, cpratapa, syadagir, subashab,
	abhishek.esse, netdev, devicetree, linux-kernel, linux-soc,
	linux-arm-kernel, linux-arm-msm

This patch incorporates three source files (and their headers).  They're
grouped into one patch mainly for the purpose of making the number and
size of patches in this series somewhat reasonable.

  - "ipa_clock.c" and "ipa_clock.h" implement clocking for the IPA device.
    The IPA has a single core clock managed by the common clock framework.
    In addition, the IPA has three buses whose bandwidth is managed by the
    Linux interconnect framework.  At this time the core clock and all
    three buses are either on or off; we don't yet do any more fine-grained
    management than that.  The core clock and interconnects are enabled
    and disabled as a unit, using a unified clock-like abstraction,
    ipa_clock_get()/ipa_clock_put().

  - "ipa_interrupt.c" and "ipa_interrupt.h" implement IPA interrupts.
    There are two hardware IRQs used by the IPA driver (the other is
    the GSI interrupt, described in a separate patch).  Several types
    of interrupt are handled by the IPA IRQ handler; these are not part
    of data/fast path.

  - The IPA has a region of local memory that is accessible by the AP
    (and modem).  Within that region are areas with certain defined
    purposes.  "ipa_mem.c" and "ipa_mem.h" define those regions, and
    implement their initialization.

Signed-off-by: Alex Elder <elder@linaro.org>
---
 drivers/net/ipa/ipa_clock.c     | 297 ++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_clock.h     |  52 ++++++
 drivers/net/ipa/ipa_interrupt.c | 279 ++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_interrupt.h |  53 ++++++
 drivers/net/ipa/ipa_mem.c       | 234 +++++++++++++++++++++++++
 drivers/net/ipa/ipa_mem.h       |  83 +++++++++
 6 files changed, 998 insertions(+)
 create mode 100644 drivers/net/ipa/ipa_clock.c
 create mode 100644 drivers/net/ipa/ipa_clock.h
 create mode 100644 drivers/net/ipa/ipa_interrupt.c
 create mode 100644 drivers/net/ipa/ipa_interrupt.h
 create mode 100644 drivers/net/ipa/ipa_mem.c
 create mode 100644 drivers/net/ipa/ipa_mem.h

diff --git a/drivers/net/ipa/ipa_clock.c b/drivers/net/ipa/ipa_clock.c
new file mode 100644
index 000000000000..9ed12e8183ad
--- /dev/null
+++ b/drivers/net/ipa/ipa_clock.c
@@ -0,0 +1,297 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+
+#include <linux/atomic.h>
+#include <linux/mutex.h>
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/interconnect.h>
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_netdev.h"
+
+/**
+ * DOC: IPA Clocking
+ *
+ * The "IPA Clock" manages both the IPA core clock and the interconnects
+ * (buses) the IPA depends on as a single logical entity.  A reference count
+ * is incremented by "get" operations and decremented by "put" operations.
+ * Transitions of that count from 0 to 1 result in the clock and interconnects
+ * being enabled, and transitions of the count from 1 to 0 cause them to be
+ * disabled.  We currently operate the core clock at a fixed clock rate, and
+ * all buses at a fixed average and peak bandwidth.  As more advanced IPA
+ * features are enabled, we can will better use of clock and bus scaling.
+ *
+ * An IPA clock reference must be held for any access to IPA hardware.
+ */
+
+#define	IPA_CORE_CLOCK_RATE		(75UL * 1000 * 1000)	/* Hz */
+
+/* Interconnect path bandwidths (each times 1000 bytes per second) */
+#define IPA_MEMORY_AVG			(80 * 1000)	/* 80 MBps */
+#define IPA_MEMORY_PEAK			(600 * 1000)
+
+#define IPA_IMEM_AVG			(80 * 1000)
+#define IPA_IMEM_PEAK			(350 * 1000)
+
+#define IPA_CONFIG_AVG			(40 * 1000)
+#define IPA_CONFIG_PEAK			(40 * 1000)
+
+/**
+ * struct ipa_clock - IPA clocking information
+ * @core:		IPA core clock
+ * @memory_path:	Memory interconnect
+ * @imem_path:		Internal memory interconnect
+ * @config_path:	Configuration space interconnect
+ * @mutex:		Protects clock enable/disable
+ * @count:		Clocking reference count
+ */
+struct ipa_clock {
+	struct ipa *ipa;
+	atomic_t count;
+	struct mutex mutex; /* protects clock enable/disable */
+	struct clk *core;
+	struct icc_path *memory_path;
+	struct icc_path *imem_path;
+	struct icc_path *config_path;
+};
+
+/* Initialize interconnects required for IPA operation */
+static int ipa_interconnect_init(struct ipa_clock *clock, struct device *dev)
+{
+	struct icc_path *path;
+
+	path = of_icc_get(dev, "memory");
+	if (IS_ERR(path))
+		goto err_return;
+	clock->memory_path = path;
+
+	path = of_icc_get(dev, "imem");
+	if (IS_ERR(path))
+		goto err_memory_path_put;
+	clock->imem_path = path;
+
+	path = of_icc_get(dev, "config");
+	if (IS_ERR(path))
+		goto err_imem_path_put;
+	clock->config_path = path;
+
+	return 0;
+
+err_imem_path_put:
+	icc_put(clock->imem_path);
+err_memory_path_put:
+	icc_put(clock->memory_path);
+err_return:
+
+	return PTR_ERR(path);
+}
+
+/* Inverse of ipa_interconnect_init() */
+static void ipa_interconnect_exit(struct ipa_clock *clock)
+{
+	icc_put(clock->config_path);
+	icc_put(clock->imem_path);
+	icc_put(clock->memory_path);
+}
+
+/* Currently we only use one bandwidth level, so just "enable" interconnects */
+static int ipa_interconnect_enable(struct ipa_clock *clock)
+{
+	int ret;
+
+	ret = icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK);
+	if (ret)
+		return ret;
+
+	ret = icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK);
+	if (ret)
+		goto err_disable_memory_path;
+
+	ret = icc_set_bw(clock->config_path, IPA_CONFIG_AVG, IPA_CONFIG_PEAK);
+	if (ret)
+		goto err_disable_imem_path;
+
+	return 0;
+
+err_disable_imem_path:
+	(void)icc_set_bw(clock->imem_path, 0, 0);
+err_disable_memory_path:
+	(void)icc_set_bw(clock->memory_path, 0, 0);
+
+	return ret;
+}
+
+/* To disable an interconnect, we just set its bandwidth to 0 */
+static int ipa_interconnect_disable(struct ipa_clock *clock)
+{
+	int ret;
+
+	ret = icc_set_bw(clock->memory_path, 0, 0);
+	if (ret)
+		return ret;
+
+	ret = icc_set_bw(clock->imem_path, 0, 0);
+	if (ret)
+		goto err_reenable_memory_path;
+
+	ret = icc_set_bw(clock->config_path, 0, 0);
+	if (ret)
+		goto err_reenable_imem_path;
+
+	return 0;
+
+err_reenable_imem_path:
+	(void)icc_set_bw(clock->imem_path, IPA_IMEM_AVG, IPA_IMEM_PEAK);
+err_reenable_memory_path:
+	(void)icc_set_bw(clock->memory_path, IPA_MEMORY_AVG, IPA_MEMORY_PEAK);
+
+	return ret;
+}
+
+/* Turn on IPA clocks, including interconnects */
+static int ipa_clock_enable(struct ipa_clock *clock)
+{
+	int ret;
+
+	ret = ipa_interconnect_enable(clock);
+	if (ret)
+		return ret;
+
+	ret = clk_prepare_enable(clock->core);
+	if (ret)
+		ipa_interconnect_disable(clock);
+
+	return ret;
+}
+
+/* Inverse of ipa_clock_enable() */
+static void ipa_clock_disable(struct ipa_clock *clock)
+{
+	clk_disable_unprepare(clock->core);
+	(void)ipa_interconnect_disable(clock);
+}
+
+/* Get an IPA clock reference, but only if the reference count is
+ * already non-zero.  Returns true if the additional reference was
+ * added successfully, or false otherwise.
+ */
+bool ipa_clock_get_additional(struct ipa_clock *clock)
+{
+	return !!atomic_inc_not_zero(&clock->count);
+}
+
+/* Get an IPA clock reference.  If the reference count is non-zero, it is
+ * incremented and return is immediate.  Otherwise it is checked again
+ * under protection of the mutex, and enable clocks and resume RX endpoints
+ * before returning.  For the first reference, the count is intentionally
+ * not incremented until after these activities are complete.
+ */
+void ipa_clock_get(struct ipa_clock *clock)
+{
+	/* If the clock is running, just bump the reference count */
+	if (ipa_clock_get_additional(clock))
+		return;
+
+	/* Otherwise get the mutex and check again */
+	mutex_lock(&clock->mutex);
+
+	/* A reference might have been added before we got the mutex. */
+	if (!ipa_clock_get_additional(clock)) {
+		int ret;
+
+		ret = ipa_clock_enable(clock);
+		if (!WARN(ret, "error %d enabling IPA clock\n", ret)) {
+			struct ipa *ipa = clock->ipa;
+
+			if (ipa->command_endpoint)
+				ipa_endpoint_resume(ipa->command_endpoint);
+
+			if (ipa->default_endpoint)
+				ipa_endpoint_resume(ipa->default_endpoint);
+
+			if (ipa->modem_netdev)
+				ipa_netdev_resume(ipa->modem_netdev);
+
+			atomic_inc(&clock->count);
+		}
+	}
+
+	mutex_unlock(&clock->mutex);
+}
+
+/* Attempt to remove an IPA clock reference.  If this represents
+ * the last reference, suspend endpoints and disable the clock
+ * (and interconnects) under protection of a mutex.
+ */
+void ipa_clock_put(struct ipa_clock *clock)
+{
+	/* If this is not the last reference there's nothing more to do */
+	if (!atomic_dec_and_mutex_lock(&clock->count, &clock->mutex))
+		return;
+
+	if (clock->ipa->modem_netdev)
+		ipa_netdev_suspend(clock->ipa->modem_netdev);
+
+	if (clock->ipa->default_endpoint)
+		ipa_endpoint_suspend(clock->ipa->default_endpoint);
+
+	if (clock->ipa->command_endpoint)
+		ipa_endpoint_suspend(clock->ipa->command_endpoint);
+
+	ipa_clock_disable(clock);
+
+	mutex_unlock(&clock->mutex);
+}
+
+/* Initialize IPA clocking */
+struct ipa_clock *ipa_clock_init(struct ipa *ipa)
+{
+	struct device *dev = &ipa->pdev->dev;
+	struct ipa_clock *clock;
+	int ret;
+
+	clock = kzalloc(sizeof(*clock), GFP_KERNEL);
+	if (!clock)
+		return ERR_PTR(-ENOMEM);
+
+	clock->ipa = ipa;
+	clock->core = clk_get(dev, "core");
+	if (IS_ERR(clock->core)) {
+		ret = PTR_ERR(clock->core);
+		goto err_free_clock;
+	}
+
+	ret = clk_set_rate(clock->core, IPA_CORE_CLOCK_RATE);
+	if (ret)
+		goto err_clk_put;
+
+	ret = ipa_interconnect_init(clock, dev);
+	if (ret)
+		goto err_clk_put;
+
+	mutex_init(&clock->mutex);
+	atomic_set(&clock->count, 0);
+
+	return clock;
+
+err_clk_put:
+	clk_put(clock->core);
+err_free_clock:
+	kfree(clock);
+
+	return ERR_PTR(ret);
+}
+
+/* Inverse of ipa_clock_init() */
+void ipa_clock_exit(struct ipa_clock *clock)
+{
+	mutex_destroy(&clock->mutex);
+	ipa_interconnect_exit(clock);
+	clk_put(clock->core);
+	kfree(clock);
+}
diff --git a/drivers/net/ipa/ipa_clock.h b/drivers/net/ipa/ipa_clock.h
new file mode 100644
index 000000000000..f38c3face29a
--- /dev/null
+++ b/drivers/net/ipa/ipa_clock.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_CLOCK_H_
+#define _IPA_CLOCK_H_
+
+struct ipa;
+struct ipa_clock;
+
+/**
+ * ipa_clock_init() - Initialize IPA clocking
+ * @ipa:	IPA pointer
+ *
+ * @Return:	A pointer to an ipa_clock structure, or a pointer-coded error
+ */
+struct ipa_clock *ipa_clock_init(struct ipa *ipa);
+
+/**
+ * ipa_clock_exit() - Inverse of ipa_clock_init()
+ * @clock:	IPA clock pointer
+ */
+void ipa_clock_exit(struct ipa_clock *clock);
+
+/**
+ * ipa_clock_get() - Get an IPA clock reference
+ * @clock:	IPA clock pointer
+ *
+ * This call blocks if this is the first reference.
+ */
+void ipa_clock_get(struct ipa_clock *clock);
+
+/**
+ * ipa_clock_get_additional() - Get an IPA clock reference if not first
+ * @clock:	IPA clock pointer
+ *
+ * This returns immediately, and only takes a reference if not the first
+ */
+bool ipa_clock_get_additional(struct ipa_clock *clock);
+
+/**
+ * ipa_clock_put() - Drop an IPA clock reference
+ * @clock:	IPA clock pointer
+ *
+ * This drops a clock reference.  If the last reference is being dropped,
+ * the clock is stopped and RX endpoints are suspended.  This call will
+ * not block unless the last reference is dropped.
+ */
+void ipa_clock_put(struct ipa_clock *clock);
+
+#endif /* _IPA_CLOCK_H_ */
diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
new file mode 100644
index 000000000000..5be6b3c762ed
--- /dev/null
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -0,0 +1,279 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2014-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+
+/* DOC: IPA Interrupts
+ *
+ * The IPA has an interrupt line distinct from the interrupt used by the GSI
+ * code.  Whereas GSI interrupts are generally related to channel events (like
+ * transfer completions), IPA interrupts signal other events internal to
+ * the IPA.  Some of the IPA interrupts come from a microcontroller
+ * embedded in the IPA.  Each IPA interrupt type can be both masked and
+ * acknowledged independent of the others.
+ *
+ * Two of the IPA interrupts are initiated by the microcontroller.  A third
+ * can be generated to signal the need for a wakeup/resume when an IPA
+ * endpoint has been suspended.  There are other IPA events defined, but at
+ * this time only these three are supported.
+ */
+
+#include <linux/types.h>
+#include <linux/interrupt.h>
+
+#include "ipa.h"
+#include "ipa_clock.h"
+#include "ipa_reg.h"
+#include "ipa_endpoint.h"
+#include "ipa_interrupt.h"
+
+/* Maximum number of bits in an IPA interrupt mask */
+#define IPA_INTERRUPT_MAX	(sizeof(u32) * BITS_PER_BYTE)
+
+struct ipa_interrupt_info {
+	ipa_irq_handler_t handler;
+	enum ipa_interrupt_id interrupt_id;
+};
+
+/**
+ * struct ipa_interrupt - IPA interrupt information
+ * @ipa:		IPA pointer
+ * @irq:		Linux IRQ number used for IPA interrupts
+ * @interrupt_info:	Information for each IPA interrupt type
+ */
+struct ipa_interrupt {
+	struct ipa *ipa;
+	u32 irq;
+	u32 enabled;
+	struct ipa_interrupt_info info[IPA_INTERRUPT_MAX];
+};
+
+/* Map a logical interrupt number to a hardware IPA IRQ number */
+static const u32 ipa_interrupt_mapping[] = {
+	[IPA_INTERRUPT_UC_0]		= 2,
+	[IPA_INTERRUPT_UC_1]		= 3,
+	[IPA_INTERRUPT_TX_SUSPEND]	= 14,
+};
+
+static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 ipa_irq)
+{
+	return ipa_irq == ipa_interrupt_mapping[IPA_INTERRUPT_UC_0] ||
+		ipa_irq == ipa_interrupt_mapping[IPA_INTERRUPT_UC_1];
+}
+
+static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 ipa_irq)
+{
+	struct ipa_interrupt_info *info = &interrupt->info[ipa_irq];
+	bool uc_irq = ipa_interrupt_uc(interrupt, ipa_irq);
+	struct ipa *ipa = interrupt->ipa;
+	u32 mask = BIT(ipa_irq);
+
+	/* For microcontroller interrupts, clear the interrupt right away,
+	 * "to avoid clearing unhandled interrupts."
+	 */
+	if (uc_irq)
+		iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+
+	if (info->handler)
+		info->handler(interrupt->ipa, info->interrupt_id);
+
+	/* Clearing the SUSPEND_TX interrupt also clears the register
+	 * that tells us which suspended endpoint(s) caused the interrupt,
+	 * so defer clearing until after the handler's been called.
+	 */
+	if (!uc_irq)
+		iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+}
+
+static void ipa_interrupt_process_all(struct ipa_interrupt *interrupt)
+{
+	struct ipa *ipa = interrupt->ipa;
+	u32 enabled = interrupt->enabled;
+	u32 mask;
+
+	/* The status register indicates which conditions are present,
+	 * including conditions whose interrupt is not enabled.  Handle
+	 * only the enabled ones.
+	 */
+	mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	while ((mask &= enabled)) {
+		do {
+			u32 ipa_irq = __ffs(mask);
+
+			mask ^= BIT(ipa_irq);
+
+			ipa_interrupt_process(interrupt, ipa_irq);
+		} while (mask);
+		mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	}
+}
+
+/* Threaded part of the IRQ handler */
+static irqreturn_t ipa_isr_thread(int irq, void *dev_id)
+{
+	struct ipa_interrupt *interrupt = dev_id;
+
+	ipa_clock_get(interrupt->ipa->clock);
+
+	ipa_interrupt_process_all(interrupt);
+
+	ipa_clock_put(interrupt->ipa->clock);
+
+	return IRQ_HANDLED;
+}
+
+/* Hard part of the IRQ handler */
+static irqreturn_t ipa_isr(int irq, void *dev_id)
+{
+	struct ipa_interrupt *interrupt = dev_id;
+	struct ipa *ipa = interrupt->ipa;
+	u32 mask;
+
+	mask = ioread32(ipa->reg_virt + IPA_REG_IRQ_STTS_OFFSET);
+	if (mask & interrupt->enabled)
+		return IRQ_WAKE_THREAD;
+
+	/* Nothing in the mask was supposed to cause an interrupt */
+	iowrite32(mask, ipa->reg_virt + IPA_REG_IRQ_CLR_OFFSET);
+
+	dev_err(&ipa->pdev->dev, "%s: unexpected interrupt, mask 0x%08x\n",
+		__func__, mask);
+
+	return IRQ_HANDLED;
+}
+
+static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
+					  enum ipa_endpoint_id endpoint_id,
+					  bool enable)
+{
+	u32 offset = IPA_REG_SUSPEND_IRQ_EN_OFFSET;
+	u32 mask = BIT(endpoint_id);
+	u32 val;
+
+	val = ioread32(interrupt->ipa->reg_virt + offset);
+	if (enable)
+		val |= mask;
+	else
+		val &= ~mask;
+	iowrite32(val, interrupt->ipa->reg_virt + offset);
+}
+
+void ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt,
+				  enum ipa_endpoint_id endpoint_id)
+{
+	ipa_interrupt_suspend_control(interrupt, endpoint_id, true);
+}
+
+void ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt,
+				   enum ipa_endpoint_id endpoint_id)
+{
+	ipa_interrupt_suspend_control(interrupt, endpoint_id, false);
+}
+
+/* Clear the suspend interrupt for all endpoints that signaled it */
+void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt)
+{
+	struct ipa *ipa = interrupt->ipa;
+	u32 val;
+
+	val = ioread32(ipa->reg_virt + IPA_REG_IRQ_SUSPEND_INFO_OFFSET);
+	iowrite32(val, ipa->reg_virt + IPA_REG_SUSPEND_IRQ_CLR_OFFSET);
+}
+
+/**
+ * ipa_interrupt_simulate_suspend() - Simulate arrival of an IPA TX_SUSPEND interrupt
+ *
+ * This is needed to work around a problem that occurs if aggregation
+ * is active on an endpoint when its underlying channel is suspended.
+ */
+void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[IPA_INTERRUPT_TX_SUSPEND];
+
+	ipa_interrupt_process(interrupt, ipa_irq);
+}
+
+/**
+ * ipa_interrupt_add() - Adds handler for an IPA interrupt
+ * @interrupt_id:	IPA interrupt type
+ * @handler:		The handler for that interrupt
+ *
+ * Adds a handler for an IPA interrupt and enables it.  IPA interrupt
+ * handlers are run in threaded interrupt context, so are allowed to
+ * block.
+ */
+void ipa_interrupt_add(struct ipa_interrupt *interrupt,
+		       enum ipa_interrupt_id interrupt_id,
+		       ipa_irq_handler_t handler)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[interrupt_id];
+	struct ipa *ipa = interrupt->ipa;
+
+	interrupt->info[ipa_irq].handler = handler;
+	interrupt->info[ipa_irq].interrupt_id = interrupt_id;
+
+	/* Update the IPA interrupt mask to enable it */
+	interrupt->enabled |= BIT(ipa_irq);
+	iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+}
+
+/**
+ * ipa_interrupt_remove() - Removes handler for an IPA interrupt type
+ * @interrupt:		IPA interrupt type
+ *
+ * Remove an IPA interrupt handler and disable it.
+ */
+void ipa_interrupt_remove(struct ipa_interrupt *interrupt,
+			  enum ipa_interrupt_id interrupt_id)
+{
+	u32 ipa_irq = ipa_interrupt_mapping[interrupt_id];
+	struct ipa *ipa = interrupt->ipa;
+
+	/* Update the IPA interrupt mask to disable it */
+	interrupt->enabled &= ~BIT(ipa_irq);
+	iowrite32(interrupt->enabled, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+
+	interrupt->info[ipa_irq].handler = NULL;
+}
+
+/**
+ * ipa_interrupt_setup() - Initialize the IPA interrupts framework
+ */
+struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa)
+{
+	struct ipa_interrupt *interrupt;
+	unsigned int irq;
+	int ret;
+
+	ret = platform_get_irq_byname(ipa->pdev, "ipa");
+	if (ret < 0)
+		return ERR_PTR(ret);
+	irq = ret;
+
+	interrupt = kzalloc(sizeof(*interrupt), GFP_KERNEL);
+	if (!interrupt)
+		return ERR_PTR(-ENOMEM);
+	interrupt->ipa = ipa;
+	interrupt->irq = irq;
+
+	/* Start with all IPA interrupts disabled */
+	iowrite32(0, ipa->reg_virt + IPA_REG_IRQ_EN_OFFSET);
+
+	ret = request_threaded_irq(irq, ipa_isr, ipa_isr_thread, IRQF_ONESHOT,
+				   "ipa", interrupt);
+	if (ret)
+		goto err_free_interrupt;
+
+	return interrupt;
+
+err_free_interrupt:
+	kfree(interrupt);
+
+	return ERR_PTR(ret);
+}
+
+void ipa_interrupt_teardown(struct ipa_interrupt *interrupt)
+{
+	free_irq(interrupt->irq, interrupt);
+}
diff --git a/drivers/net/ipa/ipa_interrupt.h b/drivers/net/ipa/ipa_interrupt.h
new file mode 100644
index 000000000000..6e452430c156
--- /dev/null
+++ b/drivers/net/ipa/ipa_interrupt.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2018-2019 Linaro Ltd.
+ */
+#ifndef _IPA_INTERRUPT_H_
+#define _IPA_INTERRUPT_H_
+
+#include <linux/types.h>
+#include <linux/bits.h>
+
+struct ipa;
+struct ipa_interrupt;
+
+/**
+ * enum ipa_interrupt_id - IPA Interrupt Type
+ *
+ * Used to register handlers for IPA interrupts.
+ */
+enum ipa_interrupt_id {
+	IPA_INTERRUPT_UC_0,
+	IPA_INTERRUPT_UC_1,
+	IPA_INTERRUPT_TX_SUSPEND,
+};
+
+/**
+ * typedef ipa_irq_handler_t - irq handler/callback type
+ * @ipa:		IPA pointer
+ * @interrupt_id:	interrupt type
+ *
+ * Callback function registered by ipa_interrupt_add() to handle a specific
+ * interrupt type
+ */
+typedef void (*ipa_irq_handler_t)(struct ipa *ipa,
+				  enum ipa_interrupt_id interrupt_id);
+
+struct ipa_interrupt *ipa_interrupt_setup(struct ipa *ipa);
+void ipa_interrupt_teardown(struct ipa_interrupt *interrupt);
+
+void ipa_interrupt_add(struct ipa_interrupt *interrupt,
+		       enum ipa_interrupt_id interrupt_id,
+		       ipa_irq_handler_t handler);
+void ipa_interrupt_remove(struct ipa_interrupt *interrupt,
+			  enum ipa_interrupt_id interrupt_id);
+
+void ipa_interrupt_suspend_enable(struct ipa_interrupt *interrupt,
+				  enum ipa_endpoint_id endpoint_id);
+void ipa_interrupt_suspend_disable(struct ipa_interrupt *interrupt,
+				   enum ipa_endpoint_id endpoint_id);
+void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt);
+void ipa_interrupt_simulate_suspend(struct ipa_interrupt *interrupt);
+
+#endif /* _IPA_INTERRUPT_H_ */
diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
new file mode 100644
index 000000000000..ad7e55aec31f
--- /dev/null
+++ b/drivers/net/ipa/ipa_mem.c
@@ -0,0 +1,234 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <linux/types.h>
+#include <linux/bitfield.h>
+#include <linux/bug.h>
+#include <linux/dma-mapping.h>
+#include <linux/io.h>
+
+#include "ipa.h"
+#include "ipa_reg.h"
+#include "ipa_cmd.h"
+#include "ipa_mem.h"
+
+/* "Canary" value placed between memory regions to detect overflow */
+#define IPA_SMEM_CANARY_VAL		cpu_to_le32(0xdeadbeef)
+
+/* Only used for IPA_SMEM_UC_EVENT_RING */
+static __always_inline void smem_set_canary(struct ipa *ipa, u32 offset)
+{
+	__le32 *cp = ipa->shared_virt + offset;
+
+	BUILD_BUG_ON(offset < sizeof(*cp));
+
+	*--cp = IPA_SMEM_CANARY_VAL;
+}
+
+static __always_inline void smem_set_canaries(struct ipa *ipa, u32 offset)
+{
+	__le32 *cp = ipa->shared_virt + offset;
+
+	/* IPA accesses memory at 8-byte aligned offsets, 8 bytes at a time */
+	BUILD_BUG_ON(offset % 8);
+	BUILD_BUG_ON(offset < 2 * sizeof(*cp));
+
+	*--cp = IPA_SMEM_CANARY_VAL;
+	*--cp = IPA_SMEM_CANARY_VAL;
+}
+
+/**
+ * ipa_smem_setup() - Set up IPA AP and modem shared memory areas
+ *
+ * Set up the AP and modem shared memory areas located in IPA-local
+ * shared memory.  This involves zero-filling each area (using
+ * DMA) and then telling the IPA where it's located.  We set up the
+ * regions for the header and processing context structures used by
+ * both the modem and the AP.
+ *
+ * The modem and AP header areas are contiguous, with the modem area
+ * located at the lower address.  The processing context memory areas
+ * for the modem and AP are also contiguous, with the modem at the base
+ * of the combined space.
+ *
+ * The modem portions are also zeroed in ipa_smem_zero_modem(); if the
+ * modem crashes and restarts via SSR these areas need to be re-initialized.
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int ipa_smem_setup(struct ipa *ipa)
+{
+	u32 offset;
+	u32 size;
+	int ret;
+
+	/* Alignments of some offsets are verified in smem_set_canaries() */
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_OFFSET % 8);
+	BUILD_BUG_ON(IPA_SMEM_MODEM_HDR_SIZE % 8);
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_SIZE % 8);
+
+	/* Initialize IPA-local header memory */
+	offset = IPA_SMEM_MODEM_HDR_OFFSET;
+	size = IPA_SMEM_MODEM_HDR_SIZE + IPA_SMEM_AP_HDR_SIZE;
+	ret = ipa_cmd_hdr_init_local(ipa, offset, size);
+	if (ret)
+		return ret;
+
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_PROC_CTX_OFFSET % 8);
+	BUILD_BUG_ON(IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE % 8);
+	BUILD_BUG_ON(IPA_SMEM_AP_HDR_PROC_CTX_SIZE % 8);
+
+	/* Zero the processing context IPA-local memory for the modem and AP */
+	offset = IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET;
+	size = IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE + IPA_SMEM_AP_HDR_PROC_CTX_SIZE;
+	ret = ipa_cmd_smem_dma_zero(ipa, offset, size);
+	if (ret)
+		return ret;
+
+	/* Tell the hardware where the processing context area is located */
+	iowrite32(ipa->shared_offset + offset,
+		  ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_BASE_OFFSET);
+
+	return ret;
+}
+
+void ipa_smem_teardown(struct ipa *ipa)
+{
+	/* Nothing to do */
+}
+
+/**
+ * ipa_smem_config() - Configure IPA shared memory
+ *
+ * @Return:	0 if successful, or a negative error code
+ */
+int ipa_smem_config(struct ipa *ipa)
+{
+	u32 size;
+	u32 val;
+
+	/* Check the advertised location and size of the shared memory area */
+	val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
+
+	/* The fields in the register are in 8 byte units */
+	ipa->shared_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
+	dev_dbg(&ipa->pdev->dev, "shared memory offset 0x%x bytes\n",
+		ipa->shared_offset);
+	if (WARN_ON(ipa->shared_offset))
+		return -EINVAL;
+
+	/* The code assumes a certain minimum shared memory area size */
+	size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
+	dev_dbg(&ipa->pdev->dev, "shared memory size 0x%x bytes\n", size);
+	if (WARN_ON(size < IPA_SMEM_SIZE))
+		return -EINVAL;
+
+	/* Now write "canary" values before each sub-section. */
+	smem_set_canaries(ipa, IPA_SMEM_V4_FLT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V4_FLT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_FLT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_FLT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V4_RT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V4_RT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_RT_HASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_V6_RT_NHASH_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_MODEM_HDR_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET);
+	smem_set_canaries(ipa, IPA_SMEM_MODEM_OFFSET);
+
+	/* Only one canary precedes the microcontroller ring */
+	BUILD_BUG_ON(IPA_SMEM_UC_EVENT_RING_OFFSET % 1024);
+	smem_set_canary(ipa, IPA_SMEM_UC_EVENT_RING_OFFSET);
+
+	return 0;
+}
+
+void ipa_smem_deconfig(struct ipa *ipa)
+{
+	/* Don't bother zeroing any of the shared memory on exit */
+}
+
+/**
+ * ipa_smem_zero_modem() - Zero modem IPA-local memory regions
+ *
+ * Zero regions of IPA-local memory used by the modem.  These are
+ * configured (and initially zeroed) by ipa_smem_setup(), but if
+ * the modem crashes and restarts via SSR we need to re-initialize
+ * them.
+ */
+int ipa_smem_zero_modem(struct ipa *ipa)
+{
+	int ret;
+
+	ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_MODEM_OFFSET,
+				    IPA_SMEM_MODEM_SIZE);
+	if (ret)
+		return ret;
+
+	ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_MODEM_HDR_OFFSET,
+				    IPA_SMEM_MODEM_HDR_SIZE);
+	if (ret)
+		return ret;
+
+	ret = ipa_cmd_smem_dma_zero(ipa, IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET,
+				    IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE);
+
+	return ret;
+}
+
+int ipa_mem_init(struct ipa *ipa)
+{
+	struct resource *res;
+	int ret;
+
+	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64));
+	if (ret)
+		return ret;
+
+	/* Set up IPA shared memory */
+	res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM,
+					   "ipa-shared");
+	if (!res)
+		return -ENODEV;
+
+	/* The code assumes a certain minimum shared memory area size */
+	if (WARN_ON(resource_size(res) < IPA_SMEM_SIZE))
+		return -EINVAL;
+
+	ipa->shared_virt = memremap(res->start, resource_size(res),
+				    MEMREMAP_WC);
+	if (!ipa->shared_virt)
+		return -ENOMEM;
+	ipa->shared_phys = res->start;
+
+	/* Setup IPA register memory  */
+	res = platform_get_resource_byname(ipa->pdev, IORESOURCE_MEM,
+					   "ipa-reg");
+	if (!res) {
+		ret = -ENODEV;
+		goto err_unmap_shared;
+	}
+
+	ipa->reg_virt = ioremap(res->start, resource_size(res));
+	if (!ipa->reg_virt) {
+		ret = -ENOMEM;
+		goto err_unmap_shared;
+	}
+	ipa->reg_phys = res->start;
+
+	return 0;
+
+err_unmap_shared:
+	memunmap(ipa->shared_virt);
+
+	return ret;
+}
+
+void ipa_mem_exit(struct ipa *ipa)
+{
+	iounmap(ipa->reg_virt);
+	memunmap(ipa->shared_virt);
+}
diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
new file mode 100644
index 000000000000..179b62c958ed
--- /dev/null
+++ b/drivers/net/ipa/ipa_mem.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _IPA_MEM_H_
+#define _IPA_MEM_H_
+
+struct ipa;
+
+/**
+ * DOC: IPA Local Memory
+ *
+ * The IPA has a block of shared memory, divided into regions used for
+ * specific purposes.  The offset within the IPA address space of this shared
+ * memory block is defined by the IPA_SMEM_DIRECT_ACCESS_OFFSET register.
+ *
+ * The regions within the shared block are bounded by an offset and size found
+ * in the IPA_SHARED_MEM_SIZE register.  The first 128 bytes of the shared
+ * memory block are shared with the microcontroller, and the first 40 bytes of
+ * that contain a structure used to communicate between the microcontroller
+ * and the AP.
+ *
+ * There is a set of filter and routing tables, and each is given a 128 byte
+ * region in shared memory.  Each entry in a filter or route table is
+ * IPA_TABLE_ENTRY_SIZE, or 8 bytes.  The first "slot" of every table is
+ * filled with a "canary" value, and the table offsets defined below represent
+ * the location of the first real entry in each table after this.
+ *
+ * The number of filter table entries depends on the number of endpoints that
+ * support filtering.  The first non-canary slot of a filter table contains a
+ * bitmap, with each set bit indicating an endpoint containing an entry in the
+ * table.  Bit 0 is used to represent a global filter.
+ *
+ * About half of the routing table entries are reserved for modem use.
+ */
+
+/* The maximum number of filter table entries (IPv4, IPv6; hashed and not) */
+#define IPA_SMEM_FLT_COUNT			14
+
+/* The number of routing table entries (IPv4, IPv6; hashed and not) */
+#define IPA_SMEM_RT_COUNT			15
+
+ /* Which routing table entries are for the modem */
+#define IPA_SMEM_MODEM_RT_COUNT			8
+#define IPA_SMEM_MODEM_RT_INDEX_MIN		0
+#define IPA_SMEM_MODEM_RT_INDEX_MAX \
+		(IPA_SMEM_MODEM_RT_INDEX_MIN + IPA_SMEM_MODEM_RT_COUNT - 1)
+
+/* Regions within the shared memory block.  Table sizes are 0x80 bytes. */
+#define IPA_SMEM_V4_FLT_HASH_OFFSET		0x0288
+#define IPA_SMEM_V4_FLT_NHASH_OFFSET		0x0308
+#define IPA_SMEM_V6_FLT_HASH_OFFSET		0x0388
+#define IPA_SMEM_V6_FLT_NHASH_OFFSET		0x0408
+#define IPA_SMEM_V4_RT_HASH_OFFSET		0x0488
+#define IPA_SMEM_V4_RT_NHASH_OFFSET		0x0508
+#define IPA_SMEM_V6_RT_HASH_OFFSET		0x0588
+#define IPA_SMEM_V6_RT_NHASH_OFFSET		0x0608
+#define IPA_SMEM_MODEM_HDR_OFFSET		0x0688
+#define IPA_SMEM_MODEM_HDR_SIZE			0x0140
+#define IPA_SMEM_AP_HDR_OFFSET			0x07c8
+#define IPA_SMEM_AP_HDR_SIZE			0x0000
+#define IPA_SMEM_MODEM_HDR_PROC_CTX_OFFSET	0x07d0
+#define IPA_SMEM_MODEM_HDR_PROC_CTX_SIZE	0x0200
+#define IPA_SMEM_AP_HDR_PROC_CTX_OFFSET		0x09d0
+#define IPA_SMEM_AP_HDR_PROC_CTX_SIZE		0x0200
+#define IPA_SMEM_MODEM_OFFSET			0x0bd8
+#define IPA_SMEM_MODEM_SIZE			0x1024
+#define IPA_SMEM_UC_EVENT_RING_OFFSET		0x1c00	/* v3.5 and later */
+#define IPA_SMEM_SIZE				0x2000
+
+int ipa_smem_config(struct ipa *ipa);
+void ipa_smem_deconfig(struct ipa *ipa);
+
+int ipa_smem_setup(struct ipa *ipa);
+void ipa_smem_teardown(struct ipa *ipa);
+
+int ipa_smem_zero_modem(struct ipa *ipa);
+
+int ipa_mem_init(struct ipa *ipa);
+void ipa_mem_exit(struct ipa *ipa);
+
+#endif /* _IPA_MEM_H_ */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2020-04-29 23:18 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-06  4:28 [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) Alex Elder
2020-03-06  4:28 ` [PATCH v2 01/17] remoteproc: add IPA notification to q6v5 driver Alex Elder
2020-03-06 11:49   ` Leon Romanovsky
2020-03-06 13:29     ` Alex Elder
2020-03-06  4:28 ` [PATCH v2 02/17] dt-bindings: soc: qcom: add IPA bindings Alex Elder
2020-03-06  4:28 ` [PATCH v2 03/17] soc: qcom: ipa: main code Alex Elder
2020-03-06  4:28 ` [PATCH v2 04/17] soc: qcom: ipa: configuration data Alex Elder
2020-03-06  4:28 ` [PATCH v2 05/17] soc: qcom: ipa: clocking, interrupts, and memory Alex Elder
2020-03-06  4:28 ` [PATCH v2 06/17] soc: qcom: ipa: GSI headers Alex Elder
2020-03-06  4:28 ` [PATCH v2 07/17] soc: qcom: ipa: the generic software interface Alex Elder
2020-03-06  4:28 ` [PATCH v2 08/17] soc: qcom: ipa: IPA interface to GSI Alex Elder
2020-03-06  4:28 ` [PATCH v2 09/17] soc: qcom: ipa: GSI transactions Alex Elder
2020-03-06  4:28 ` [PATCH v2 10/17] soc: qcom: ipa: IPA endpoints Alex Elder
2020-03-06  4:28 ` [PATCH v2 11/17] soc: qcom: ipa: filter and routing tables Alex Elder
2020-03-06  4:28 ` [PATCH v2 12/17] soc: qcom: ipa: immediate commands Alex Elder
2020-03-06  4:28 ` [PATCH v2 13/17] soc: qcom: ipa: modem and microcontroller Alex Elder
2020-03-06  4:28 ` [PATCH v2 14/17] soc: qcom: ipa: AP/modem communications Alex Elder
2020-03-06  4:28 ` [PATCH v2 15/17] soc: qcom: ipa: support build of IPA code Alex Elder
2020-03-11 10:54   ` Jon Hunter
2020-03-11 12:33     ` Alex Elder
2020-03-06  4:28 ` [PATCH v2 16/17] MAINTAINERS: add entry for the Qualcomm IPA driver Alex Elder
2020-03-06  4:28 ` [PATCH v2 17/17] arm64: dts: sdm845: add IPA information Alex Elder
2020-03-11 10:49   ` Jon Hunter
2020-03-11 14:39     ` Alex Elder
2020-03-11 19:02       ` Bjorn Andersson
2020-03-09  5:09 ` [PATCH v2 00/17] net: introduce Qualcomm IPA driver (UPDATED) David Miller
2020-03-09 16:54 ` Dave Taht
2020-03-12  3:09   ` Alex Elder
2020-04-29 23:17 ` Evan Green
  -- strict thread matches above, loose matches on Subject: below --
2019-05-31  3:53 [PATCH v2 00/17] net: introduce Qualcomm IPA driver Alex Elder
2019-05-31  3:53 ` [PATCH v2 05/17] soc: qcom: ipa: clocking, interrupts, and memory Alex Elder

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).