* [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x
@ 2021-09-20  3:07 Sireesh Kodali
  2021-09-20  3:07 ` [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration Sireesh Kodali
                   ` (17 more replies)
  0 siblings, 18 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:07 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali

Hi,

This RFC patch series adds support for IPA v2, v2.5 and v2.6L
(collectively referred to as IPA v2.x).

Basic description:
IPA v2.x is the older version of the IPA hardware found on Qualcomm
SoCs. The biggest differences between v2.x and later versions are:
- 32-bit hardware (the IPA microcontroller is 32-bit)
- BAM (as opposed to GSI) as the DMA transport
- Changes to the QMI init sequence (described in the commit message)

The fact that IPA v2.x is 32-bit directly affects us only in the table
init code. However, its impact is felt in other parts of the code, as it
changes the size of fields in various structs (e.g. in the commands that
can be sent).

BAM support is already present in the mainline kernel; however, it lacks
two things:
- Support for DMA metadata, to pass the size of a transaction from the
  hardware to the DMA client
- Support for immediate commands, which are needed to pass commands from
  the driver to the microcontroller

Separate patch series have been created to deal with these (linked at
the end).

This patch series adds support for BAM as a transport by refactoring the
current GSI code to create an abstract uniform API on top. This API
allows the rest of the driver to handle DMA without worrying about the
IPA version.

The final thing that hasn't been touched by this patch series is the IPA
resource manager. On the downstream CAF kernel, the driver seems to
share the resource code between IPA v2.x and IPA v3.x, which should mean
that all it would take to support resources on IPA v2.x is adding the
definitions to ipa_data.

Testing:
This patch series was tested on kernel version 5.13 on a phone with
SDM625 (IPA v2.6L), and a phone with MSM8996 (IPA v2.5). The phone with
IPA v2.5 was able to get an IP address using modem-manager, although
sending/receiving packets was not tested. The phone with IPA v2.6L was
able to get an IP address, but was unable to send/receive packets. Its
modem also relies on IPA v2.6L's compression/decompression support, and
without this patch series, the modem simply crashes and restarts,
waiting for the IPA block to come up.

This patch series is based on code from the downstream CAF kernel v4.9.

There are some things in this patch series that would obviously not get
accepted in their current form:
- All IPA 2.x data is in a single file
- Some stray printks might still be around
- Some values have been hardcoded (e.g. the filter_map)
Please excuse these.

Lastly, this patch series depends upon the following patches for BAM:
[0]: https://lkml.org/lkml/2021/9/19/126
[1]: https://lkml.org/lkml/2021/9/19/135

Regards,
Sireesh Kodali

Sireesh Kodali (10):
  net: ipa: Add IPA v2.x register definitions
  net: ipa: Add support for using BAM as a DMA transport
  net: ipa: Add support for IPA v2.x commands and table init
  net: ipa: Add support for IPA v2.x endpoints
  net: ipa: Add support for IPA v2.x memory map
  net: ipa: Add support for IPA v2.x in the driver's QMI interface
  net: ipa: Add support for IPA v2 microcontroller
  net: ipa: Add IPA v2.6L initialization sequence support
  net: ipa: Add hw config describing IPA v2.x hardware
  dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA

Vladimir Lypak (7):
  net: ipa: Correct ipa_status_opcode enumeration
  net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support
  net: ipa: Refactor GSI code
  net: ipa: Establish ipa_dma interface
  net: ipa: Check interrupts for availability
  net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait
  net: ipa: Add support for IPA v2.x interrupts

 .../devicetree/bindings/net/qcom,ipa.yaml     |   2 +
 drivers/net/ipa/Makefile                      |  11 +-
 drivers/net/ipa/bam.c                         | 525 ++++++++++++++++++
 drivers/net/ipa/gsi.c                         | 322 ++++++-----
 drivers/net/ipa/ipa.h                         |   8 +-
 drivers/net/ipa/ipa_cmd.c                     | 244 +++++---
 drivers/net/ipa/ipa_cmd.h                     |  20 +-
 drivers/net/ipa/ipa_data-v2.c                 | 369 ++++++++++++
 drivers/net/ipa/ipa_data-v3.1.c               |   2 +-
 drivers/net/ipa/ipa_data-v3.5.1.c             |   2 +-
 drivers/net/ipa/ipa_data-v4.11.c              |   2 +-
 drivers/net/ipa/ipa_data-v4.2.c               |   2 +-
 drivers/net/ipa/ipa_data-v4.5.c               |   2 +-
 drivers/net/ipa/ipa_data-v4.9.c               |   2 +-
 drivers/net/ipa/ipa_data.h                    |   4 +
 drivers/net/ipa/{gsi.h => ipa_dma.h}          | 179 +++---
 .../ipa/{gsi_private.h => ipa_dma_private.h}  |  46 +-
 drivers/net/ipa/ipa_endpoint.c                | 188 ++++---
 drivers/net/ipa/ipa_endpoint.h                |   6 +-
 drivers/net/ipa/ipa_gsi.c                     |  18 +-
 drivers/net/ipa/ipa_gsi.h                     |  12 +-
 drivers/net/ipa/ipa_interrupt.c               |  36 +-
 drivers/net/ipa/ipa_main.c                    |  82 ++-
 drivers/net/ipa/ipa_mem.c                     |  55 +-
 drivers/net/ipa/ipa_mem.h                     |   5 +-
 drivers/net/ipa/ipa_power.c                   |   4 +-
 drivers/net/ipa/ipa_qmi.c                     |  37 +-
 drivers/net/ipa/ipa_qmi.h                     |  10 +
 drivers/net/ipa/ipa_reg.h                     | 184 +++++-
 drivers/net/ipa/ipa_resource.c                |   3 +
 drivers/net/ipa/ipa_smp2p.c                   |  11 +-
 drivers/net/ipa/ipa_sysfs.c                   |   6 +
 drivers/net/ipa/ipa_table.c                   |  86 +--
 drivers/net/ipa/ipa_table.h                   |   6 +-
 drivers/net/ipa/{gsi_trans.c => ipa_trans.c}  | 182 +++---
 drivers/net/ipa/{gsi_trans.h => ipa_trans.h}  |  78 +--
 drivers/net/ipa/ipa_uc.c                      |  96 ++--
 drivers/net/ipa/ipa_version.h                 |  12 +
 38 files changed, 2133 insertions(+), 726 deletions(-)
 create mode 100644 drivers/net/ipa/bam.c
 create mode 100644 drivers/net/ipa/ipa_data-v2.c
 rename drivers/net/ipa/{gsi.h => ipa_dma.h} (57%)
 rename drivers/net/ipa/{gsi_private.h => ipa_dma_private.h} (66%)
 rename drivers/net/ipa/{gsi_trans.c => ipa_trans.c} (80%)
 rename drivers/net/ipa/{gsi_trans.h => ipa_trans.h} (71%)

-- 
2.33.0



* [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
@ 2021-09-20  3:07 ` Sireesh Kodali
  2021-10-13 22:28   ` Alex Elder
  2021-09-20  3:07 ` [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support Sireesh Kodali
                   ` (16 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:07 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

The values in the enumeration were defined as bitmasks (1 << opcode
rather than the actual opcode). However, the field is not used as a
bitmask: the ipa_endpoint_status_skip and ipa_status_format_packet
functions compare it directly with the opcode from the status packet.
This commit converts these values to the actual hardware constants.

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_endpoint.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 5528d97110d5..29227de6661f 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -41,10 +41,10 @@
 
 /** enum ipa_status_opcode - status element opcode hardware values */
 enum ipa_status_opcode {
-	IPA_STATUS_OPCODE_PACKET		= 0x01,
-	IPA_STATUS_OPCODE_DROPPED_PACKET	= 0x04,
-	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 0x08,
-	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 0x40,
+	IPA_STATUS_OPCODE_PACKET		= 0,
+	IPA_STATUS_OPCODE_DROPPED_PACKET	= 2,
+	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 3,
+	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 6,
 };
 
 /** enum ipa_status_exception - status element exception type */
-- 
2.33.0



* [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
  2021-09-20  3:07 ` [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration Sireesh Kodali
@ 2021-09-20  3:07 ` Sireesh Kodali
  2021-10-13 22:28   ` Alex Elder
  2021-09-20  3:07 ` [RFC PATCH 03/17] net: ipa: Refactor GSI code Sireesh Kodali
                   ` (15 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:07 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

IPA v2.x is 32-bit. Having an IPA_TABLE_ENTRY_SIZE macro makes it
easier to support both 32-bit and 64-bit IPA versions.

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_qmi.c   | 10 ++++++----
 drivers/net/ipa/ipa_table.c | 29 +++++++++++++----------------
 drivers/net/ipa/ipa_table.h |  4 ++++
 3 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
index 90f3aec55b36..7e2fe701cc4d 100644
--- a/drivers/net/ipa/ipa_qmi.c
+++ b/drivers/net/ipa/ipa_qmi.c
@@ -308,12 +308,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
 	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE);
 	req.v4_route_tbl_info_valid = 1;
 	req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset;
-	req.v4_route_tbl_info.count = mem->size / sizeof(__le64);
+	req.v4_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
 
 	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE);
 	req.v6_route_tbl_info_valid = 1;
 	req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset;
-	req.v6_route_tbl_info.count = mem->size / sizeof(__le64);
+	req.v6_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
 
 	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER);
 	req.v4_filter_tbl_start_valid = 1;
@@ -352,7 +352,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
 		req.v4_hash_route_tbl_info_valid = 1;
 		req.v4_hash_route_tbl_info.start =
 				ipa->mem_offset + mem->offset;
-		req.v4_hash_route_tbl_info.count = mem->size / sizeof(__le64);
+		req.v4_hash_route_tbl_info.count =
+				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
 	}
 
 	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE_HASHED);
@@ -360,7 +361,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
 		req.v6_hash_route_tbl_info_valid = 1;
 		req.v6_hash_route_tbl_info.start =
 			ipa->mem_offset + mem->offset;
-		req.v6_hash_route_tbl_info.count = mem->size / sizeof(__le64);
+		req.v6_hash_route_tbl_info.count =
+				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
 	}
 
 	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER_HASHED);
diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
index 1da334f54944..96c467c80a2e 100644
--- a/drivers/net/ipa/ipa_table.c
+++ b/drivers/net/ipa/ipa_table.c
@@ -118,7 +118,8 @@
  * 32-bit all-zero rule list terminator.  The "zero rule" is simply an
  * all-zero rule followed by the list terminator.
  */
-#define IPA_ZERO_RULE_SIZE		(2 * sizeof(__le32))
+#define IPA_ZERO_RULE_SIZE(version) \
+	 (IPA_IS_64BIT(version) ? 2 * sizeof(__le32) : sizeof(__le32))
 
 /* Check things that can be validated at build time. */
 static void ipa_table_validate_build(void)
@@ -132,12 +133,6 @@ static void ipa_table_validate_build(void)
 	 */
 	BUILD_BUG_ON(sizeof(dma_addr_t) > sizeof(__le64));
 
-	/* A "zero rule" is used to represent no filtering or no routing.
-	 * It is a 64-bit block of zeroed memory.  Code in ipa_table_init()
-	 * assumes that it can be written using a pointer to __le64.
-	 */
-	BUILD_BUG_ON(IPA_ZERO_RULE_SIZE != sizeof(__le64));
-
 	/* Impose a practical limit on the number of routes */
 	BUILD_BUG_ON(IPA_ROUTE_COUNT_MAX > 32);
 	/* The modem must be allotted at least one route table entry */
@@ -236,7 +231,7 @@ static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
 	/* Skip over the zero rule and possibly the filter mask */
 	skip = filter_mask ? 1 : 2;
 
-	return ipa->table_addr + skip * sizeof(*ipa->table_virt);
+	return ipa->table_addr + skip * IPA_TABLE_ENTRY_SIZE(ipa->version);
 }
 
 static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
@@ -255,8 +250,8 @@ static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
 	if (filter)
 		first++;	/* skip over bitmap */
 
-	offset = mem->offset + first * sizeof(__le64);
-	size = count * sizeof(__le64);
+	offset = mem->offset + first * IPA_TABLE_ENTRY_SIZE(ipa->version);
+	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
 	addr = ipa_table_addr(ipa, false, count);
 
 	ipa_cmd_dma_shared_mem_add(trans, offset, size, addr, true);
@@ -434,11 +429,11 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
 		count = 1 + hweight32(ipa->filter_map);
 		hash_count = hash_mem->size ? count : 0;
 	} else {
-		count = mem->size / sizeof(__le64);
-		hash_count = hash_mem->size / sizeof(__le64);
+		count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
+		hash_count = hash_mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
 	}
-	size = count * sizeof(__le64);
-	hash_size = hash_count * sizeof(__le64);
+	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
+	hash_size = hash_count * IPA_TABLE_ENTRY_SIZE(ipa->version);
 
 	addr = ipa_table_addr(ipa, filter, count);
 	hash_addr = ipa_table_addr(ipa, filter, hash_count);
@@ -621,7 +616,8 @@ int ipa_table_init(struct ipa *ipa)
 	 * by dma_alloc_coherent() is guaranteed to be a power-of-2 number
 	 * of pages, which satisfies the rule alignment requirement.
 	 */
-	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
+	size = IPA_ZERO_RULE_SIZE(ipa->version) +
+	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
 	virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
 	if (!virt)
 		return -ENOMEM;
@@ -653,7 +649,8 @@ void ipa_table_exit(struct ipa *ipa)
 	struct device *dev = &ipa->pdev->dev;
 	size_t size;
 
-	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
+	size = IPA_ZERO_RULE_SIZE(ipa->version) +
+	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
 
 	dma_free_coherent(dev, size, ipa->table_virt, ipa->table_addr);
 	ipa->table_addr = 0;
diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
index b6a9a0d79d68..78a168ce6558 100644
--- a/drivers/net/ipa/ipa_table.h
+++ b/drivers/net/ipa/ipa_table.h
@@ -10,6 +10,10 @@
 
 struct ipa;
 
+/* The size of a filter or route table entry */
+#define IPA_TABLE_ENTRY_SIZE(version)	\
+	(IPA_IS_64BIT(version) ? sizeof(__le64) : sizeof(__le32))
+
 /* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
 #define IPA_FILTER_COUNT_MAX	14
 
-- 
2.33.0



* [RFC PATCH 03/17] net: ipa: Refactor GSI code
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
  2021-09-20  3:07 ` [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration Sireesh Kodali
  2021-09-20  3:07 ` [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support Sireesh Kodali
@ 2021-09-20  3:07 ` Sireesh Kodali
  2021-10-13 22:29   ` Alex Elder
  2021-09-20  3:07 ` [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface Sireesh Kodali
                   ` (14 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:07 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

Perform a mechanical refactor, replacing the "gsi_" prefix with "ipa_"
for functions which aren't actually GSI-specific and are going to be
reused for IPA v2 with the BAM DMA interface.

Also rename gsi_trans.* to ipa_trans.*, gsi.h to ipa_dma.h, and
gsi_private.h to ipa_dma_private.h.

All the changes in this commit were made with this script:

symbols="gsi_trans gsi_trans_pool gsi_trans_info gsi_channel gsi_trans_pool_init
gsi_trans_pool_exit gsi_trans_pool_init_dma gsi_trans_pool_exit_dma
gsi_trans_pool_alloc_common gsi_trans_pool_alloc gsi_trans_pool_alloc_dma
gsi_trans_pool_next gsi_channel_trans_complete gsi_trans_move_pending
gsi_trans_move_complete gsi_trans_move_polled gsi_trans_tre_reserve
gsi_trans_tre_release gsi_channel_trans_alloc gsi_trans_free
gsi_trans_cmd_add gsi_trans_page_add gsi_trans_skb_add gsi_trans_commit
gsi_trans_commit_wait gsi_trans_commit_wait_timeout gsi_trans_complete
gsi_channel_trans_cancel_pending gsi_channel_trans_init gsi_channel_trans_exit
gsi_channel_tx_queued"

git mv gsi.h ipa_dma.h
git mv gsi_private.h ipa_dma_private.h
git mv gsi_trans.c ipa_trans.c
git mv gsi_trans.h ipa_trans.h

sed -i "s/\<gsi\.h\>/ipa_dma.h/g" *
sed -i "s/\<gsi_private\.h\>/ipa_dma_private.h/g" *
sed -i "s/\<gsi_trans\.o\>/ipa_trans.o/g" Makefile
sed -i "s/\<gsi_trans\.h\>/ipa_trans.h/g" *

for i in $symbols; do
    sed -i "s/\<${i}\>/ipa_${i##gsi_}/g" *
done

sed -i "s/\<struct gsi\>/struct ipa_dma/g" *

sed -i "s/\<struct ipa_dma\> \*gsi/struct ipa_dma *dma_subsys/g" ipa_trans.h ipa_dma.h
sed -i "s/\<channel->gsi\>/channel->dma_subsys/g" *
sed -i "s/\<trans->gsi\>/trans->dma_subsys/g" *

sed -i "s/\<struct ipa_dma\> gsi/struct ipa_dma dma_subsys/g" ipa.h ipa_dma.h
sed -i "s/struct ipa, gsi/struct ipa, dma_subsys/g" *
sed -i "s/\<ipa->gsi\>/ipa->dma_subsys/g" *

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/Makefile                      |   2 +-
 drivers/net/ipa/gsi.c                         | 305 +++++++++---------
 drivers/net/ipa/ipa.h                         |   6 +-
 drivers/net/ipa/ipa_cmd.c                     |  98 +++---
 drivers/net/ipa/ipa_cmd.h                     |  20 +-
 drivers/net/ipa/ipa_data-v3.5.1.c             |   2 +-
 drivers/net/ipa/ipa_data-v4.11.c              |   2 +-
 drivers/net/ipa/ipa_data-v4.2.c               |   2 +-
 drivers/net/ipa/ipa_data-v4.5.c               |   2 +-
 drivers/net/ipa/ipa_data-v4.9.c               |   2 +-
 drivers/net/ipa/{gsi.h => ipa_dma.h}          |  56 ++--
 .../ipa/{gsi_private.h => ipa_dma_private.h}  |  44 +--
 drivers/net/ipa/ipa_endpoint.c                |  60 ++--
 drivers/net/ipa/ipa_endpoint.h                |   6 +-
 drivers/net/ipa/ipa_gsi.c                     |  18 +-
 drivers/net/ipa/ipa_gsi.h                     |  12 +-
 drivers/net/ipa/ipa_main.c                    |  14 +-
 drivers/net/ipa/ipa_mem.c                     |  14 +-
 drivers/net/ipa/ipa_table.c                   |  28 +-
 drivers/net/ipa/{gsi_trans.c => ipa_trans.c}  | 172 +++++-----
 drivers/net/ipa/{gsi_trans.h => ipa_trans.h}  |  74 ++---
 21 files changed, 471 insertions(+), 468 deletions(-)
 rename drivers/net/ipa/{gsi.h => ipa_dma.h} (85%)
 rename drivers/net/ipa/{gsi_private.h => ipa_dma_private.h} (67%)
 rename drivers/net/ipa/{gsi_trans.c => ipa_trans.c} (81%)
 rename drivers/net/ipa/{gsi_trans.h => ipa_trans.h} (72%)

diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
index bdfb2430ab2c..3cd021fb992e 100644
--- a/drivers/net/ipa/Makefile
+++ b/drivers/net/ipa/Makefile
@@ -1,7 +1,7 @@
 obj-$(CONFIG_QCOM_IPA)	+=	ipa.o
 
 ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
-				ipa_table.o ipa_interrupt.o gsi.o gsi_trans.o \
+				ipa_table.o ipa_interrupt.o gsi.o ipa_trans.o \
 				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
 				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
 				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
index a2fcdb1abdb9..74ae0d07f859 100644
--- a/drivers/net/ipa/gsi.c
+++ b/drivers/net/ipa/gsi.c
@@ -15,10 +15,10 @@
 #include <linux/platform_device.h>
 #include <linux/netdevice.h>
 
-#include "gsi.h"
+#include "ipa_dma.h"
 #include "gsi_reg.h"
-#include "gsi_private.h"
-#include "gsi_trans.h"
+#include "ipa_dma_private.h"
+#include "ipa_trans.h"
 #include "ipa_gsi.h"
 #include "ipa_data.h"
 #include "ipa_version.h"
@@ -170,40 +170,41 @@ static void gsi_validate_build(void)
 }
 
 /* Return the channel id associated with a given channel */
-static u32 gsi_channel_id(struct gsi_channel *channel)
+static u32 gsi_channel_id(struct ipa_channel *channel)
 {
-	return channel - &channel->gsi->channel[0];
+	return channel - &channel->dma_subsys->channel[0];
 }
 
 /* An initialized channel has a non-null GSI pointer */
-static bool gsi_channel_initialized(struct gsi_channel *channel)
+static bool gsi_channel_initialized(struct ipa_channel *channel)
 {
-	return !!channel->gsi;
+	return !!channel->dma_subsys;
 }
 
 /* Update the GSI IRQ type register with the cached value */
-static void gsi_irq_type_update(struct gsi *gsi, u32 val)
+static void gsi_irq_type_update(struct ipa_dma *gsi, u32 val)
 {
 	gsi->type_enabled_bitmap = val;
 	iowrite32(val, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET);
 }
 
-static void gsi_irq_type_enable(struct gsi *gsi, enum gsi_irq_type_id type_id)
+static void gsi_irq_type_enable(struct ipa_dma *gsi, enum gsi_irq_type_id type_id)
 {
 	gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(type_id));
 }
 
-static void gsi_irq_type_disable(struct gsi *gsi, enum gsi_irq_type_id type_id)
+static void gsi_irq_type_disable(struct ipa_dma *gsi, enum gsi_irq_type_id type_id)
 {
 	gsi_irq_type_update(gsi, gsi->type_enabled_bitmap & ~BIT(type_id));
 }
 
+/* Turn off all GSI interrupts initially; there is no gsi_irq_teardown() */
 /* Event ring commands are performed one at a time.  Their completion
  * is signaled by the event ring control GSI interrupt type, which is
  * only enabled when we issue an event ring command.  Only the event
  * ring being operated on has this interrupt enabled.
  */
-static void gsi_irq_ev_ctrl_enable(struct gsi *gsi, u32 evt_ring_id)
+static void gsi_irq_ev_ctrl_enable(struct ipa_dma *gsi, u32 evt_ring_id)
 {
 	u32 val = BIT(evt_ring_id);
 
@@ -218,7 +219,7 @@ static void gsi_irq_ev_ctrl_enable(struct gsi *gsi, u32 evt_ring_id)
 }
 
 /* Disable event ring control interrupts */
-static void gsi_irq_ev_ctrl_disable(struct gsi *gsi)
+static void gsi_irq_ev_ctrl_disable(struct ipa_dma *gsi)
 {
 	gsi_irq_type_disable(gsi, GSI_EV_CTRL);
 	iowrite32(0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET);
@@ -229,7 +230,7 @@ static void gsi_irq_ev_ctrl_disable(struct gsi *gsi)
  * enabled when we issue a channel command.  Only the channel being
  * operated on has this interrupt enabled.
  */
-static void gsi_irq_ch_ctrl_enable(struct gsi *gsi, u32 channel_id)
+static void gsi_irq_ch_ctrl_enable(struct ipa_dma *gsi, u32 channel_id)
 {
 	u32 val = BIT(channel_id);
 
@@ -244,13 +245,13 @@ static void gsi_irq_ch_ctrl_enable(struct gsi *gsi, u32 channel_id)
 }
 
 /* Disable channel control interrupts */
-static void gsi_irq_ch_ctrl_disable(struct gsi *gsi)
+static void gsi_irq_ch_ctrl_disable(struct ipa_dma *gsi)
 {
 	gsi_irq_type_disable(gsi, GSI_CH_CTRL);
 	iowrite32(0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET);
 }
 
-static void gsi_irq_ieob_enable_one(struct gsi *gsi, u32 evt_ring_id)
+static void gsi_irq_ieob_enable_one(struct ipa_dma *gsi, u32 evt_ring_id)
 {
 	bool enable_ieob = !gsi->ieob_enabled_bitmap;
 	u32 val;
@@ -264,7 +265,7 @@ static void gsi_irq_ieob_enable_one(struct gsi *gsi, u32 evt_ring_id)
 		gsi_irq_type_enable(gsi, GSI_IEOB);
 }
 
-static void gsi_irq_ieob_disable(struct gsi *gsi, u32 event_mask)
+static void gsi_irq_ieob_disable(struct ipa_dma *gsi, u32 event_mask)
 {
 	u32 val;
 
@@ -278,13 +279,13 @@ static void gsi_irq_ieob_disable(struct gsi *gsi, u32 event_mask)
 	iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
 }
 
-static void gsi_irq_ieob_disable_one(struct gsi *gsi, u32 evt_ring_id)
+static void gsi_irq_ieob_disable_one(struct ipa_dma *gsi, u32 evt_ring_id)
 {
 	gsi_irq_ieob_disable(gsi, BIT(evt_ring_id));
 }
 
 /* Enable all GSI_interrupt types */
-static void gsi_irq_enable(struct gsi *gsi)
+static void gsi_irq_enable(struct ipa_dma *gsi)
 {
 	u32 val;
 
@@ -307,7 +308,7 @@ static void gsi_irq_enable(struct gsi *gsi)
 }
 
 /* Disable all GSI interrupt types */
-static void gsi_irq_disable(struct gsi *gsi)
+static void gsi_irq_disable(struct ipa_dma *gsi)
 {
 	gsi_irq_type_update(gsi, 0);
 
@@ -340,7 +341,7 @@ static u32 gsi_ring_index(struct gsi_ring *ring, u32 offset)
  * or false if it times out.
  */
 static bool
-gsi_command(struct gsi *gsi, u32 reg, u32 val, struct completion *completion)
+gsi_command(struct ipa_dma *gsi, u32 reg, u32 val, struct completion *completion)
 {
 	unsigned long timeout = msecs_to_jiffies(GSI_CMD_TIMEOUT);
 
@@ -353,7 +354,7 @@ gsi_command(struct gsi *gsi, u32 reg, u32 val, struct completion *completion)
 
 /* Return the hardware's notion of the current state of an event ring */
 static enum gsi_evt_ring_state
-gsi_evt_ring_state(struct gsi *gsi, u32 evt_ring_id)
+gsi_evt_ring_state(struct ipa_dma *gsi, u32 evt_ring_id)
 {
 	u32 val;
 
@@ -363,7 +364,7 @@ gsi_evt_ring_state(struct gsi *gsi, u32 evt_ring_id)
 }
 
 /* Issue an event ring command and wait for it to complete */
-static void gsi_evt_ring_command(struct gsi *gsi, u32 evt_ring_id,
+static void gsi_evt_ring_command(struct ipa_dma *gsi, u32 evt_ring_id,
 				 enum gsi_evt_cmd_opcode opcode)
 {
 	struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
@@ -390,7 +391,7 @@ static void gsi_evt_ring_command(struct gsi *gsi, u32 evt_ring_id,
 }
 
 /* Allocate an event ring in NOT_ALLOCATED state */
-static int gsi_evt_ring_alloc_command(struct gsi *gsi, u32 evt_ring_id)
+static int gsi_evt_ring_alloc_command(struct ipa_dma *gsi, u32 evt_ring_id)
 {
 	enum gsi_evt_ring_state state;
 
@@ -416,7 +417,7 @@ static int gsi_evt_ring_alloc_command(struct gsi *gsi, u32 evt_ring_id)
 }
 
 /* Reset a GSI event ring in ALLOCATED or ERROR state. */
-static void gsi_evt_ring_reset_command(struct gsi *gsi, u32 evt_ring_id)
+static void gsi_evt_ring_reset_command(struct ipa_dma *gsi, u32 evt_ring_id)
 {
 	enum gsi_evt_ring_state state;
 
@@ -440,7 +441,7 @@ static void gsi_evt_ring_reset_command(struct gsi *gsi, u32 evt_ring_id)
 }
 
 /* Issue a hardware de-allocation request for an allocated event ring */
-static void gsi_evt_ring_de_alloc_command(struct gsi *gsi, u32 evt_ring_id)
+static void gsi_evt_ring_de_alloc_command(struct ipa_dma *gsi, u32 evt_ring_id)
 {
 	enum gsi_evt_ring_state state;
 
@@ -463,10 +464,10 @@ static void gsi_evt_ring_de_alloc_command(struct gsi *gsi, u32 evt_ring_id)
 }
 
 /* Fetch the current state of a channel from hardware */
-static enum gsi_channel_state gsi_channel_state(struct gsi_channel *channel)
+static enum gsi_channel_state gsi_channel_state(struct ipa_channel *channel)
 {
 	u32 channel_id = gsi_channel_id(channel);
-	void __iomem *virt = channel->gsi->virt;
+	void __iomem *virt = channel->dma_subsys->virt;
 	u32 val;
 
 	val = ioread32(virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id));
@@ -476,11 +477,11 @@ static enum gsi_channel_state gsi_channel_state(struct gsi_channel *channel)
 
 /* Issue a channel command and wait for it to complete */
 static void
-gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode opcode)
+gsi_channel_command(struct ipa_channel *channel, enum gsi_ch_cmd_opcode opcode)
 {
 	struct completion *completion = &channel->completion;
 	u32 channel_id = gsi_channel_id(channel);
-	struct gsi *gsi = channel->gsi;
+	struct ipa_dma *gsi = channel->dma_subsys;
 	struct device *dev = gsi->dev;
 	bool timeout;
 	u32 val;
@@ -502,9 +503,9 @@ gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode opcode)
 }
 
 /* Allocate GSI channel in NOT_ALLOCATED state */
-static int gsi_channel_alloc_command(struct gsi *gsi, u32 channel_id)
+static int gsi_channel_alloc_command(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 	struct device *dev = gsi->dev;
 	enum gsi_channel_state state;
 
@@ -530,9 +531,9 @@ static int gsi_channel_alloc_command(struct gsi *gsi, u32 channel_id)
 }
 
 /* Start an ALLOCATED channel */
-static int gsi_channel_start_command(struct gsi_channel *channel)
+static int gsi_channel_start_command(struct ipa_channel *channel)
 {
-	struct device *dev = channel->gsi->dev;
+	struct device *dev = channel->dma_subsys->dev;
 	enum gsi_channel_state state;
 
 	state = gsi_channel_state(channel);
@@ -557,9 +558,9 @@ static int gsi_channel_start_command(struct gsi_channel *channel)
 }
 
 /* Stop a GSI channel in STARTED state */
-static int gsi_channel_stop_command(struct gsi_channel *channel)
+static int gsi_channel_stop_command(struct ipa_channel *channel)
 {
-	struct device *dev = channel->gsi->dev;
+	struct device *dev = channel->dma_subsys->dev;
 	enum gsi_channel_state state;
 
 	state = gsi_channel_state(channel);
@@ -595,9 +596,9 @@ static int gsi_channel_stop_command(struct gsi_channel *channel)
 }
 
 /* Reset a GSI channel in ALLOCATED or ERROR state. */
-static void gsi_channel_reset_command(struct gsi_channel *channel)
+static void gsi_channel_reset_command(struct ipa_channel *channel)
 {
-	struct device *dev = channel->gsi->dev;
+	struct device *dev = channel->dma_subsys->dev;
 	enum gsi_channel_state state;
 
 	/* A short delay is required before a RESET command */
@@ -623,9 +624,9 @@ static void gsi_channel_reset_command(struct gsi_channel *channel)
 }
 
 /* Deallocate an ALLOCATED GSI channel */
-static void gsi_channel_de_alloc_command(struct gsi *gsi, u32 channel_id)
+static void gsi_channel_de_alloc_command(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 	struct device *dev = gsi->dev;
 	enum gsi_channel_state state;
 
@@ -651,7 +652,7 @@ static void gsi_channel_de_alloc_command(struct gsi *gsi, u32 channel_id)
  * we supply one less than that with the doorbell.  Update the event ring
  * index field with the value provided.
  */
-static void gsi_evt_ring_doorbell(struct gsi *gsi, u32 evt_ring_id, u32 index)
+static void gsi_evt_ring_doorbell(struct ipa_dma *gsi, u32 evt_ring_id, u32 index)
 {
 	struct gsi_ring *ring = &gsi->evt_ring[evt_ring_id].ring;
 	u32 val;
@@ -664,7 +665,7 @@ static void gsi_evt_ring_doorbell(struct gsi *gsi, u32 evt_ring_id, u32 index)
 }
 
 /* Program an event ring for use */
-static void gsi_evt_ring_program(struct gsi *gsi, u32 evt_ring_id)
+static void gsi_evt_ring_program(struct ipa_dma *gsi, u32 evt_ring_id)
 {
 	struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
 	size_t size = evt_ring->ring.count * GSI_RING_ELEMENT_SIZE;
@@ -707,11 +708,11 @@ static void gsi_evt_ring_program(struct gsi *gsi, u32 evt_ring_id)
 }
 
 /* Find the transaction whose completion indicates a channel is quiesced */
-static struct gsi_trans *gsi_channel_trans_last(struct gsi_channel *channel)
+static struct ipa_trans *gsi_channel_trans_last(struct ipa_channel *channel)
 {
-	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct ipa_trans_info *trans_info = &channel->trans_info;
 	const struct list_head *list;
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 
 	spin_lock_bh(&trans_info->spinlock);
 
@@ -737,7 +738,7 @@ static struct gsi_trans *gsi_channel_trans_last(struct gsi_channel *channel)
 	if (list_empty(list))
 		list = NULL;
 done:
-	trans = list ? list_last_entry(list, struct gsi_trans, links) : NULL;
+	trans = list ? list_last_entry(list, struct ipa_trans, links) : NULL;
 
 	/* Caller will wait for this, so take a reference */
 	if (trans)
@@ -749,26 +750,26 @@ static struct gsi_trans *gsi_channel_trans_last(struct gsi_channel *channel)
 }
 
 /* Wait for transaction activity on a channel to complete */
-static void gsi_channel_trans_quiesce(struct gsi_channel *channel)
+static void gsi_channel_trans_quiesce(struct ipa_channel *channel)
 {
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 
 	/* Get the last transaction, and wait for it to complete */
 	trans = gsi_channel_trans_last(channel);
 	if (trans) {
 		wait_for_completion(&trans->completion);
-		gsi_trans_free(trans);
+		ipa_trans_free(trans);
 	}
 }
 
 /* Program a channel for use; there is no gsi_channel_deprogram() */
-static void gsi_channel_program(struct gsi_channel *channel, bool doorbell)
+static void gsi_channel_program(struct ipa_channel *channel, bool doorbell)
 {
 	size_t size = channel->tre_ring.count * GSI_RING_ELEMENT_SIZE;
 	u32 channel_id = gsi_channel_id(channel);
 	union gsi_channel_scratch scr = { };
 	struct gsi_channel_scratch_gpi *gpi;
-	struct gsi *gsi = channel->gsi;
+	struct ipa_dma *gsi = channel->dma_subsys;
 	u32 wrr_weight = 0;
 	u32 val;
 
@@ -849,9 +850,9 @@ static void gsi_channel_program(struct gsi_channel *channel, bool doorbell)
 	/* All done! */
 }
 
-static int __gsi_channel_start(struct gsi_channel *channel, bool resume)
+static int __gsi_channel_start(struct ipa_channel *channel, bool resume)
 {
-	struct gsi *gsi = channel->gsi;
+	struct ipa_dma *gsi = channel->dma_subsys;
 	int ret;
 
 	/* Prior to IPA v4.0 suspend/resume is not implemented by GSI */
@@ -868,9 +869,9 @@ static int __gsi_channel_start(struct gsi_channel *channel, bool resume)
 }
 
 /* Start an allocated GSI channel */
-int gsi_channel_start(struct gsi *gsi, u32 channel_id)
+int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 	int ret;
 
 	/* Enable NAPI and the completion interrupt */
@@ -886,7 +887,7 @@ int gsi_channel_start(struct gsi *gsi, u32 channel_id)
 	return ret;
 }
 
-static int gsi_channel_stop_retry(struct gsi_channel *channel)
+static int gsi_channel_stop_retry(struct ipa_channel *channel)
 {
 	u32 retries = GSI_CHANNEL_STOP_RETRIES;
 	int ret;
@@ -901,9 +902,9 @@ static int gsi_channel_stop_retry(struct gsi_channel *channel)
 	return ret;
 }
 
-static int __gsi_channel_stop(struct gsi_channel *channel, bool suspend)
+static int __gsi_channel_stop(struct ipa_channel *channel, bool suspend)
 {
-	struct gsi *gsi = channel->gsi;
+	struct ipa_dma *gsi = channel->dma_subsys;
 	int ret;
 
 	/* Wait for any underway transactions to complete before stopping. */
@@ -923,9 +924,9 @@ static int __gsi_channel_stop(struct gsi_channel *channel, bool suspend)
 }
 
 /* Stop a started channel */
-int gsi_channel_stop(struct gsi *gsi, u32 channel_id)
+int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 	int ret;
 
 	ret = __gsi_channel_stop(channel, false);
@@ -940,9 +941,9 @@ int gsi_channel_stop(struct gsi *gsi, u32 channel_id)
 }
 
 /* Reset and reconfigure a channel, (possibly) enabling the doorbell engine */
-void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool doorbell)
+void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 
 	mutex_lock(&gsi->mutex);
 
@@ -952,15 +953,15 @@ void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool doorbell)
 		gsi_channel_reset_command(channel);
 
 	gsi_channel_program(channel, doorbell);
-	gsi_channel_trans_cancel_pending(channel);
+	ipa_channel_trans_cancel_pending(channel);
 
 	mutex_unlock(&gsi->mutex);
 }
 
 /* Stop a started channel for suspend */
-int gsi_channel_suspend(struct gsi *gsi, u32 channel_id)
+int gsi_channel_suspend(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 	int ret;
 
 	ret = __gsi_channel_stop(channel, true);
@@ -974,27 +975,27 @@ int gsi_channel_suspend(struct gsi *gsi, u32 channel_id)
 }
 
 /* Resume a suspended channel (starting if stopped) */
-int gsi_channel_resume(struct gsi *gsi, u32 channel_id)
+int gsi_channel_resume(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 
 	return __gsi_channel_start(channel, true);
 }
 
 /* Prevent all GSI interrupts while suspended */
-void gsi_suspend(struct gsi *gsi)
+void gsi_suspend(struct ipa_dma *gsi)
 {
 	disable_irq(gsi->irq);
 }
 
 /* Allow all GSI interrupts again when resuming */
-void gsi_resume(struct gsi *gsi)
+void gsi_resume(struct ipa_dma *gsi)
 {
 	enable_irq(gsi->irq);
 }
 
 /**
- * gsi_channel_tx_queued() - Report queued TX transfers for a channel
+ * ipa_channel_tx_queued() - Report queued TX transfers for a channel
  * @channel:	Channel for which to report
  *
  * Report to the network stack the number of bytes and transactions that
@@ -1011,7 +1012,7 @@ void gsi_resume(struct gsi *gsi)
  * provide accurate information to the network stack about how much
  * work we've given the hardware at any point in time.
  */
-void gsi_channel_tx_queued(struct gsi_channel *channel)
+void ipa_channel_tx_queued(struct ipa_channel *channel)
 {
 	u32 trans_count;
 	u32 byte_count;
@@ -1021,7 +1022,7 @@ void gsi_channel_tx_queued(struct gsi_channel *channel)
 	channel->queued_byte_count = channel->byte_count;
 	channel->queued_trans_count = channel->trans_count;
 
-	ipa_gsi_channel_tx_queued(channel->gsi, gsi_channel_id(channel),
+	ipa_gsi_channel_tx_queued(channel->dma_subsys, gsi_channel_id(channel),
 				  trans_count, byte_count);
 }
 
@@ -1050,7 +1051,7 @@ void gsi_channel_tx_queued(struct gsi_channel *channel)
  * point in time.
  */
 static void
-gsi_channel_tx_update(struct gsi_channel *channel, struct gsi_trans *trans)
+gsi_channel_tx_update(struct ipa_channel *channel, struct ipa_trans *trans)
 {
 	u64 byte_count = trans->byte_count + trans->len;
 	u64 trans_count = trans->trans_count + 1;
@@ -1060,12 +1061,12 @@ gsi_channel_tx_update(struct gsi_channel *channel, struct gsi_trans *trans)
 	trans_count -= channel->compl_trans_count;
 	channel->compl_trans_count += trans_count;
 
-	ipa_gsi_channel_tx_completed(channel->gsi, gsi_channel_id(channel),
+	ipa_gsi_channel_tx_completed(channel->dma_subsys, gsi_channel_id(channel),
 				     trans_count, byte_count);
 }
 
 /* Channel control interrupt handler */
-static void gsi_isr_chan_ctrl(struct gsi *gsi)
+static void gsi_isr_chan_ctrl(struct ipa_dma *gsi)
 {
 	u32 channel_mask;
 
@@ -1074,7 +1075,7 @@ static void gsi_isr_chan_ctrl(struct gsi *gsi)
 
 	while (channel_mask) {
 		u32 channel_id = __ffs(channel_mask);
-		struct gsi_channel *channel;
+		struct ipa_channel *channel;
 
 		channel_mask ^= BIT(channel_id);
 
@@ -1085,7 +1086,7 @@ static void gsi_isr_chan_ctrl(struct gsi *gsi)
 }
 
 /* Event ring control interrupt handler */
-static void gsi_isr_evt_ctrl(struct gsi *gsi)
+static void gsi_isr_evt_ctrl(struct ipa_dma *gsi)
 {
 	u32 event_mask;
 
@@ -1106,7 +1107,7 @@ static void gsi_isr_evt_ctrl(struct gsi *gsi)
 
 /* Global channel error interrupt handler */
 static void
-gsi_isr_glob_chan_err(struct gsi *gsi, u32 err_ee, u32 channel_id, u32 code)
+gsi_isr_glob_chan_err(struct ipa_dma *gsi, u32 err_ee, u32 channel_id, u32 code)
 {
 	if (code == GSI_OUT_OF_RESOURCES) {
 		dev_err(gsi->dev, "channel %u out of resources\n", channel_id);
@@ -1121,7 +1122,7 @@ gsi_isr_glob_chan_err(struct gsi *gsi, u32 err_ee, u32 channel_id, u32 code)
 
 /* Global event error interrupt handler */
 static void
-gsi_isr_glob_evt_err(struct gsi *gsi, u32 err_ee, u32 evt_ring_id, u32 code)
+gsi_isr_glob_evt_err(struct ipa_dma *gsi, u32 err_ee, u32 evt_ring_id, u32 code)
 {
 	if (code == GSI_OUT_OF_RESOURCES) {
 		struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
@@ -1139,7 +1140,7 @@ gsi_isr_glob_evt_err(struct gsi *gsi, u32 err_ee, u32 evt_ring_id, u32 code)
 }
 
 /* Global error interrupt handler */
-static void gsi_isr_glob_err(struct gsi *gsi)
+static void gsi_isr_glob_err(struct ipa_dma *gsi)
 {
 	enum gsi_err_type type;
 	enum gsi_err_code code;
@@ -1166,7 +1167,7 @@ static void gsi_isr_glob_err(struct gsi *gsi)
 }
 
 /* Generic EE interrupt handler */
-static void gsi_isr_gp_int1(struct gsi *gsi)
+static void gsi_isr_gp_int1(struct ipa_dma *gsi)
 {
 	u32 result;
 	u32 val;
@@ -1208,7 +1209,7 @@ static void gsi_isr_gp_int1(struct gsi *gsi)
 }
 
 /* Inter-EE interrupt handler */
-static void gsi_isr_glob_ee(struct gsi *gsi)
+static void gsi_isr_glob_ee(struct ipa_dma *gsi)
 {
 	u32 val;
 
@@ -1231,7 +1232,7 @@ static void gsi_isr_glob_ee(struct gsi *gsi)
 }
 
 /* I/O completion interrupt event */
-static void gsi_isr_ieob(struct gsi *gsi)
+static void gsi_isr_ieob(struct ipa_dma *gsi)
 {
 	u32 event_mask;
 
@@ -1249,7 +1250,7 @@ static void gsi_isr_ieob(struct gsi *gsi)
 }
 
 /* General event interrupts represent serious problems, so report them */
-static void gsi_isr_general(struct gsi *gsi)
+static void gsi_isr_general(struct ipa_dma *gsi)
 {
 	struct device *dev = gsi->dev;
 	u32 val;
@@ -1270,7 +1271,7 @@ static void gsi_isr_general(struct gsi *gsi)
  */
 static irqreturn_t gsi_isr(int irq, void *dev_id)
 {
-	struct gsi *gsi = dev_id;
+	struct ipa_dma *gsi = dev_id;
 	u32 intr_mask;
 	u32 cnt = 0;
 
@@ -1316,7 +1317,7 @@ static irqreturn_t gsi_isr(int irq, void *dev_id)
 }
 
 /* Init function for GSI IRQ lookup; there is no gsi_irq_exit() */
-static int gsi_irq_init(struct gsi *gsi, struct platform_device *pdev)
+static int gsi_irq_init(struct ipa_dma *gsi, struct platform_device *pdev)
 {
 	int ret;
 
@@ -1330,7 +1331,7 @@ static int gsi_irq_init(struct gsi *gsi, struct platform_device *pdev)
 }
 
 /* Return the transaction associated with a transfer completion event */
-static struct gsi_trans *gsi_event_trans(struct gsi_channel *channel,
+static struct ipa_trans *gsi_event_trans(struct ipa_channel *channel,
 					 struct gsi_event *event)
 {
 	u32 tre_offset;
@@ -1364,12 +1365,12 @@ static struct gsi_trans *gsi_event_trans(struct gsi_channel *channel,
  */
 static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
 {
-	struct gsi_channel *channel = evt_ring->channel;
+	struct ipa_channel *channel = evt_ring->channel;
 	struct gsi_ring *ring = &evt_ring->ring;
-	struct gsi_trans_info *trans_info;
+	struct ipa_trans_info *trans_info;
 	struct gsi_event *event_done;
 	struct gsi_event *event;
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 	u32 byte_count = 0;
 	u32 old_index;
 	u32 event_avail;
@@ -1399,7 +1400,7 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
 			event++;
 		else
 			event = gsi_ring_virt(ring, 0);
-		trans = gsi_trans_pool_next(&trans_info->pool, trans);
+		trans = ipa_trans_pool_next(&trans_info->pool, trans);
 	} while (event != event_done);
 
 	/* We record RX bytes when they are received */
@@ -1408,7 +1409,7 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
 }
 
 /* Initialize a ring, including allocating DMA memory for its entries */
-static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count)
+static int gsi_ring_alloc(struct ipa_dma *gsi, struct gsi_ring *ring, u32 count)
 {
 	u32 size = count * GSI_RING_ELEMENT_SIZE;
 	struct device *dev = gsi->dev;
@@ -1429,7 +1430,7 @@ static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count)
 }
 
 /* Free a previously-allocated ring */
-static void gsi_ring_free(struct gsi *gsi, struct gsi_ring *ring)
+static void gsi_ring_free(struct ipa_dma *gsi, struct gsi_ring *ring)
 {
 	size_t size = ring->count * GSI_RING_ELEMENT_SIZE;
 
@@ -1437,7 +1438,7 @@ static void gsi_ring_free(struct gsi *gsi, struct gsi_ring *ring)
 }
 
 /* Allocate an available event ring id */
-static int gsi_evt_ring_id_alloc(struct gsi *gsi)
+static int gsi_evt_ring_id_alloc(struct ipa_dma *gsi)
 {
 	u32 evt_ring_id;
 
@@ -1453,17 +1454,17 @@ static int gsi_evt_ring_id_alloc(struct gsi *gsi)
 }
 
 /* Free a previously-allocated event ring id */
-static void gsi_evt_ring_id_free(struct gsi *gsi, u32 evt_ring_id)
+static void gsi_evt_ring_id_free(struct ipa_dma *gsi, u32 evt_ring_id)
 {
 	gsi->event_bitmap &= ~BIT(evt_ring_id);
 }
 
 /* Ring a channel doorbell, reporting the first un-filled entry */
-void gsi_channel_doorbell(struct gsi_channel *channel)
+void gsi_channel_doorbell(struct ipa_channel *channel)
 {
 	struct gsi_ring *tre_ring = &channel->tre_ring;
 	u32 channel_id = gsi_channel_id(channel);
-	struct gsi *gsi = channel->gsi;
+	struct ipa_dma *gsi = channel->dma_subsys;
 	u32 val;
 
 	/* Note: index *must* be used modulo the ring count here */
@@ -1472,12 +1473,12 @@ void gsi_channel_doorbell(struct gsi_channel *channel)
 }
 
 /* Consult hardware, move any newly completed transactions to completed list */
-static struct gsi_trans *gsi_channel_update(struct gsi_channel *channel)
+static struct ipa_trans *gsi_channel_update(struct ipa_channel *channel)
 {
 	u32 evt_ring_id = channel->evt_ring_id;
-	struct gsi *gsi = channel->gsi;
+	struct ipa_dma *gsi = channel->dma_subsys;
 	struct gsi_evt_ring *evt_ring;
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 	struct gsi_ring *ring;
 	u32 offset;
 	u32 index;
@@ -1510,14 +1511,14 @@ static struct gsi_trans *gsi_channel_update(struct gsi_channel *channel)
 	else
 		gsi_evt_ring_rx_update(evt_ring, index);
 
-	gsi_trans_move_complete(trans);
+	ipa_trans_move_complete(trans);
 
 	/* Tell the hardware we've handled these events */
-	gsi_evt_ring_doorbell(channel->gsi, channel->evt_ring_id, index);
+	gsi_evt_ring_doorbell(channel->dma_subsys, channel->evt_ring_id, index);
 
-	gsi_trans_free(trans);
+	ipa_trans_free(trans);
 
-	return gsi_channel_trans_complete(channel);
+	return ipa_channel_trans_complete(channel);
 }
 
 /**
@@ -1532,17 +1533,17 @@ static struct gsi_trans *gsi_channel_update(struct gsi_channel *channel)
  * completed list and the new first entry is returned.  If there are no more
  * completed transactions, a null pointer is returned.
  */
-static struct gsi_trans *gsi_channel_poll_one(struct gsi_channel *channel)
+static struct ipa_trans *gsi_channel_poll_one(struct ipa_channel *channel)
 {
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 
 	/* Get the first transaction from the completed list */
-	trans = gsi_channel_trans_complete(channel);
+	trans = ipa_channel_trans_complete(channel);
 	if (!trans)	/* List is empty; see if there's more to do */
 		trans = gsi_channel_update(channel);
 
 	if (trans)
-		gsi_trans_move_polled(trans);
+		ipa_trans_move_polled(trans);
 
 	return trans;
 }
@@ -1556,26 +1557,26 @@ static struct gsi_trans *gsi_channel_poll_one(struct gsi_channel *channel)
  *
  * Single transactions completed by hardware are polled until either
  * the budget is exhausted, or there are no more.  Each transaction
- * polled is passed to gsi_trans_complete(), to perform remaining
+ * polled is passed to ipa_trans_complete(), to perform remaining
  * completion processing and retire/free the transaction.
  */
 static int gsi_channel_poll(struct napi_struct *napi, int budget)
 {
-	struct gsi_channel *channel;
+	struct ipa_channel *channel;
 	int count;
 
-	channel = container_of(napi, struct gsi_channel, napi);
+	channel = container_of(napi, struct ipa_channel, napi);
 	for (count = 0; count < budget; count++) {
-		struct gsi_trans *trans;
+		struct ipa_trans *trans;
 
 		trans = gsi_channel_poll_one(channel);
 		if (!trans)
 			break;
-		gsi_trans_complete(trans);
+		ipa_trans_complete(trans);
 	}
 
 	if (count < budget && napi_complete(napi))
-		gsi_irq_ieob_enable_one(channel->gsi, channel->evt_ring_id);
+		gsi_irq_ieob_enable_one(channel->dma_subsys, channel->evt_ring_id);
 
 	return count;
 }
@@ -1595,9 +1596,9 @@ static u32 gsi_event_bitmap_init(u32 evt_ring_max)
 }
 
 /* Setup function for a single channel */
-static int gsi_channel_setup_one(struct gsi *gsi, u32 channel_id)
+static int gsi_channel_setup_one(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 	u32 evt_ring_id = channel->evt_ring_id;
 	int ret;
 
@@ -1633,9 +1634,9 @@ static int gsi_channel_setup_one(struct gsi *gsi, u32 channel_id)
 }
 
 /* Inverse of gsi_channel_setup_one() */
-static void gsi_channel_teardown_one(struct gsi *gsi, u32 channel_id)
+static void gsi_channel_teardown_one(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 	u32 evt_ring_id = channel->evt_ring_id;
 
 	if (!gsi_channel_initialized(channel))
@@ -1648,7 +1649,7 @@ static void gsi_channel_teardown_one(struct gsi *gsi, u32 channel_id)
 	gsi_evt_ring_de_alloc_command(gsi, evt_ring_id);
 }
 
-static int gsi_generic_command(struct gsi *gsi, u32 channel_id,
+static int gsi_generic_command(struct ipa_dma *gsi, u32 channel_id,
 			       enum gsi_generic_cmd_opcode opcode)
 {
 	struct completion *completion = &gsi->completion;
@@ -1689,13 +1690,13 @@ static int gsi_generic_command(struct gsi *gsi, u32 channel_id,
 	return -ETIMEDOUT;
 }
 
-static int gsi_modem_channel_alloc(struct gsi *gsi, u32 channel_id)
+static int gsi_modem_channel_alloc(struct ipa_dma *gsi, u32 channel_id)
 {
 	return gsi_generic_command(gsi, channel_id,
 				   GSI_GENERIC_ALLOCATE_CHANNEL);
 }
 
-static void gsi_modem_channel_halt(struct gsi *gsi, u32 channel_id)
+static void gsi_modem_channel_halt(struct ipa_dma *gsi, u32 channel_id)
 {
 	u32 retries = GSI_CHANNEL_MODEM_HALT_RETRIES;
 	int ret;
@@ -1711,7 +1712,7 @@ static void gsi_modem_channel_halt(struct gsi *gsi, u32 channel_id)
 }
 
 /* Setup function for channels */
-static int gsi_channel_setup(struct gsi *gsi)
+static int gsi_channel_setup(struct ipa_dma *gsi)
 {
 	u32 channel_id = 0;
 	u32 mask;
@@ -1729,7 +1730,7 @@ static int gsi_channel_setup(struct gsi *gsi)
 
 	/* Make sure no channels were defined that hardware does not support */
 	while (channel_id < GSI_CHANNEL_COUNT_MAX) {
-		struct gsi_channel *channel = &gsi->channel[channel_id++];
+		struct ipa_channel *channel = &gsi->channel[channel_id++];
 
 		if (!gsi_channel_initialized(channel))
 			continue;
@@ -1781,7 +1782,7 @@ static int gsi_channel_setup(struct gsi *gsi)
 }
 
 /* Inverse of gsi_channel_setup() */
-static void gsi_channel_teardown(struct gsi *gsi)
+static void gsi_channel_teardown(struct ipa_dma *gsi)
 {
 	u32 mask = gsi->modem_channel_bitmap;
 	u32 channel_id;
@@ -1807,7 +1808,7 @@ static void gsi_channel_teardown(struct gsi *gsi)
 }
 
 /* Turn off all GSI interrupts initially */
-static int gsi_irq_setup(struct gsi *gsi)
+static int gsi_irq_setup(struct ipa_dma *gsi)
 {
 	int ret;
 
@@ -1843,13 +1844,13 @@ static int gsi_irq_setup(struct gsi *gsi)
 	return ret;
 }
 
-static void gsi_irq_teardown(struct gsi *gsi)
+static void gsi_irq_teardown(struct ipa_dma *gsi)
 {
 	free_irq(gsi->irq, gsi);
 }
 
 /* Get # supported channel and event rings; there is no gsi_ring_teardown() */
-static int gsi_ring_setup(struct gsi *gsi)
+static int gsi_ring_setup(struct ipa_dma *gsi)
 {
 	struct device *dev = gsi->dev;
 	u32 count;
@@ -1894,7 +1895,7 @@ static int gsi_ring_setup(struct gsi *gsi)
 }
 
 /* Setup function for GSI.  GSI firmware must be loaded and initialized */
-int gsi_setup(struct gsi *gsi)
+int gsi_setup(struct ipa_dma *gsi)
 {
 	u32 val;
 	int ret;
@@ -1930,16 +1931,16 @@ int gsi_setup(struct gsi *gsi)
 }
 
 /* Inverse of gsi_setup() */
-void gsi_teardown(struct gsi *gsi)
+void gsi_teardown(struct ipa_dma *gsi)
 {
 	gsi_channel_teardown(gsi);
 	gsi_irq_teardown(gsi);
 }
 
 /* Initialize a channel's event ring */
-static int gsi_channel_evt_ring_init(struct gsi_channel *channel)
+static int gsi_channel_evt_ring_init(struct ipa_channel *channel)
 {
-	struct gsi *gsi = channel->gsi;
+	struct ipa_dma *gsi = channel->dma_subsys;
 	struct gsi_evt_ring *evt_ring;
 	int ret;
 
@@ -1964,10 +1965,10 @@ static int gsi_channel_evt_ring_init(struct gsi_channel *channel)
 }
 
 /* Inverse of gsi_channel_evt_ring_init() */
-static void gsi_channel_evt_ring_exit(struct gsi_channel *channel)
+static void gsi_channel_evt_ring_exit(struct ipa_channel *channel)
 {
 	u32 evt_ring_id = channel->evt_ring_id;
-	struct gsi *gsi = channel->gsi;
+	struct ipa_dma *gsi = channel->dma_subsys;
 	struct gsi_evt_ring *evt_ring;
 
 	evt_ring = &gsi->evt_ring[evt_ring_id];
@@ -1976,7 +1977,7 @@ static void gsi_channel_evt_ring_exit(struct gsi_channel *channel)
 }
 
 /* Init function for event rings; there is no gsi_evt_ring_exit() */
-static void gsi_evt_ring_init(struct gsi *gsi)
+static void gsi_evt_ring_init(struct ipa_dma *gsi)
 {
 	u32 evt_ring_id = 0;
 
@@ -1987,7 +1988,7 @@ static void gsi_evt_ring_init(struct gsi *gsi)
 	while (++evt_ring_id < GSI_EVT_RING_COUNT_MAX);
 }
 
-static bool gsi_channel_data_valid(struct gsi *gsi,
+static bool gsi_channel_data_valid(struct ipa_dma *gsi,
 				   const struct ipa_gsi_endpoint_data *data)
 {
 	u32 channel_id = data->channel_id;
@@ -2040,11 +2041,11 @@ static bool gsi_channel_data_valid(struct gsi *gsi,
 }
 
 /* Init function for a single channel */
-static int gsi_channel_init_one(struct gsi *gsi,
+static int gsi_channel_init_one(struct ipa_dma *gsi,
 				const struct ipa_gsi_endpoint_data *data,
 				bool command)
 {
-	struct gsi_channel *channel;
+	struct ipa_channel *channel;
 	u32 tre_count;
 	int ret;
 
@@ -2063,7 +2064,7 @@ static int gsi_channel_init_one(struct gsi *gsi,
 	channel = &gsi->channel[data->channel_id];
 	memset(channel, 0, sizeof(*channel));
 
-	channel->gsi = gsi;
+	channel->dma_subsys = gsi;
 	channel->toward_ipa = data->toward_ipa;
 	channel->command = command;
 	channel->tlv_count = data->channel.tlv_count;
@@ -2082,7 +2083,7 @@ static int gsi_channel_init_one(struct gsi *gsi,
 		goto err_channel_evt_ring_exit;
 	}
 
-	ret = gsi_channel_trans_init(gsi, data->channel_id);
+	ret = ipa_channel_trans_init(gsi, data->channel_id);
 	if (ret)
 		goto err_ring_free;
 
@@ -2094,32 +2095,32 @@ static int gsi_channel_init_one(struct gsi *gsi,
 	if (!ret)
 		return 0;	/* Success! */
 
-	gsi_channel_trans_exit(channel);
+	ipa_channel_trans_exit(channel);
 err_ring_free:
 	gsi_ring_free(gsi, &channel->tre_ring);
 err_channel_evt_ring_exit:
 	gsi_channel_evt_ring_exit(channel);
 err_clear_gsi:
-	channel->gsi = NULL;	/* Mark it not (fully) initialized */
+	channel->dma_subsys = NULL;	/* Mark it not (fully) initialized */
 
 	return ret;
 }
 
 /* Inverse of gsi_channel_init_one() */
-static void gsi_channel_exit_one(struct gsi_channel *channel)
+static void gsi_channel_exit_one(struct ipa_channel *channel)
 {
 	if (!gsi_channel_initialized(channel))
 		return;
 
 	if (channel->command)
 		ipa_cmd_pool_exit(channel);
-	gsi_channel_trans_exit(channel);
-	gsi_ring_free(channel->gsi, &channel->tre_ring);
+	ipa_channel_trans_exit(channel);
+	gsi_ring_free(channel->dma_subsys, &channel->tre_ring);
 	gsi_channel_evt_ring_exit(channel);
 }
 
 /* Init function for channels */
-static int gsi_channel_init(struct gsi *gsi, u32 count,
+static int gsi_channel_init(struct ipa_dma *gsi, u32 count,
 			    const struct ipa_gsi_endpoint_data *data)
 {
 	bool modem_alloc;
@@ -2168,7 +2169,7 @@ static int gsi_channel_init(struct gsi *gsi, u32 count,
 }
 
 /* Inverse of gsi_channel_init() */
-static void gsi_channel_exit(struct gsi *gsi)
+static void gsi_channel_exit(struct ipa_dma *gsi)
 {
 	u32 channel_id = GSI_CHANNEL_COUNT_MAX - 1;
 
@@ -2179,7 +2180,7 @@ static void gsi_channel_exit(struct gsi *gsi)
 }
 
 /* Init function for GSI.  GSI hardware does not need to be "ready" */
-int gsi_init(struct gsi *gsi, struct platform_device *pdev,
+int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
 	     enum ipa_version version, u32 count,
 	     const struct ipa_gsi_endpoint_data *data)
 {
@@ -2249,7 +2250,7 @@ int gsi_init(struct gsi *gsi, struct platform_device *pdev,
 }
 
 /* Inverse of gsi_init() */
-void gsi_exit(struct gsi *gsi)
+void gsi_exit(struct ipa_dma *gsi)
 {
 	mutex_destroy(&gsi->mutex);
 	gsi_channel_exit(gsi);
@@ -2274,20 +2275,20 @@ void gsi_exit(struct gsi *gsi)
  * maximum number of outstanding TREs allows the number of entries in
  * a pool to avoid crossing that power-of-2 boundary, and this can
  * substantially reduce pool memory requirements.  The number we
- * reduce it by matches the number added in gsi_trans_pool_init().
+ * reduce it by matches the number added in ipa_trans_pool_init().
  */
-u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id)
+u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 
 	/* Hardware limit is channel->tre_count - 1 */
 	return channel->tre_count - (channel->tlv_count - 1);
 }
 
 /* Returns the maximum number of TREs in a single transaction for a channel */
-u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id)
+u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 
 	return channel->tlv_count;
 }
diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
index 9fc880eb7e3a..80a83ac45729 100644
--- a/drivers/net/ipa/ipa.h
+++ b/drivers/net/ipa/ipa.h
@@ -12,7 +12,7 @@
 #include <linux/pm_wakeup.h>
 
 #include "ipa_version.h"
-#include "gsi.h"
+#include "ipa_dma.h"
 #include "ipa_mem.h"
 #include "ipa_qmi.h"
 #include "ipa_endpoint.h"
@@ -29,7 +29,7 @@ struct ipa_interrupt;
 
 /**
  * struct ipa - IPA information
- * @gsi:		Embedded GSI structure
+ * @dma_subsys:	Embedded IPA DMA structure
  * @version:		IPA hardware version
  * @pdev:		Platform device
  * @completion:		Used to signal pipeline clear transfer complete
@@ -71,7 +71,7 @@ struct ipa_interrupt;
  * @qmi:		QMI information
  */
 struct ipa {
-	struct gsi gsi;
+	struct ipa_dma dma_subsys;
 	enum ipa_version version;
 	struct platform_device *pdev;
 	struct completion completion;
diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
index cff51731195a..3db9e94e484f 100644
--- a/drivers/net/ipa/ipa_cmd.c
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -10,8 +10,8 @@
 #include <linux/bitfield.h>
 #include <linux/dma-direction.h>
 
-#include "gsi.h"
-#include "gsi_trans.h"
+#include "ipa_dma.h"
+#include "ipa_trans.h"
 #include "ipa.h"
 #include "ipa_endpoint.h"
 #include "ipa_table.h"
@@ -32,8 +32,8 @@
  * immediate command's opcode.  The payload for a command resides in DRAM
  * and is described by a single scatterlist entry in its transaction.
  * Commands do not require a transaction completion callback.  To commit
- * an immediate command transaction, either gsi_trans_commit_wait() or
- * gsi_trans_commit_wait_timeout() is used.
+ * an immediate command transaction, either ipa_trans_commit_wait() or
+ * ipa_trans_commit_wait_timeout() is used.
  */
 
 /* Some commands can wait until indicated pipeline stages are clear */
@@ -346,10 +346,10 @@ bool ipa_cmd_data_valid(struct ipa *ipa)
 }
 
 
-int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max)
+int ipa_cmd_pool_init(struct ipa_channel *channel, u32 tre_max)
 {
-	struct gsi_trans_info *trans_info = &channel->trans_info;
-	struct device *dev = channel->gsi->dev;
+	struct ipa_trans_info *trans_info = &channel->trans_info;
+	struct device *dev = channel->dma_subsys->dev;
 	int ret;
 
 	/* This is as good a place as any to validate build constants */
@@ -359,50 +359,50 @@ int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max)
 	 * a single transaction can require up to tlv_count of them,
 	 * so we treat them as if that many can be allocated at once.
 	 */
-	ret = gsi_trans_pool_init_dma(dev, &trans_info->cmd_pool,
+	ret = ipa_trans_pool_init_dma(dev, &trans_info->cmd_pool,
 				      sizeof(union ipa_cmd_payload),
 				      tre_max, channel->tlv_count);
 	if (ret)
 		return ret;
 
 	/* Each TRE needs a command info structure */
-	ret = gsi_trans_pool_init(&trans_info->info_pool,
+	ret = ipa_trans_pool_init(&trans_info->info_pool,
 				   sizeof(struct ipa_cmd_info),
 				   tre_max, channel->tlv_count);
 	if (ret)
-		gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
+		ipa_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
 
 	return ret;
 }
 
-void ipa_cmd_pool_exit(struct gsi_channel *channel)
+void ipa_cmd_pool_exit(struct ipa_channel *channel)
 {
-	struct gsi_trans_info *trans_info = &channel->trans_info;
-	struct device *dev = channel->gsi->dev;
+	struct ipa_trans_info *trans_info = &channel->trans_info;
+	struct device *dev = channel->dma_subsys->dev;
 
-	gsi_trans_pool_exit(&trans_info->info_pool);
-	gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
+	ipa_trans_pool_exit(&trans_info->info_pool);
+	ipa_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
 }
 
 static union ipa_cmd_payload *
 ipa_cmd_payload_alloc(struct ipa *ipa, dma_addr_t *addr)
 {
-	struct gsi_trans_info *trans_info;
+	struct ipa_trans_info *trans_info;
 	struct ipa_endpoint *endpoint;
 
 	endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
-	trans_info = &ipa->gsi.channel[endpoint->channel_id].trans_info;
+	trans_info = &ipa->dma_subsys.channel[endpoint->channel_id].trans_info;
 
-	return gsi_trans_pool_alloc_dma(&trans_info->cmd_pool, addr);
+	return ipa_trans_pool_alloc_dma(&trans_info->cmd_pool, addr);
 }
 
 /* If hash_size is 0, hash_offset and hash_addr ignored. */
-void ipa_cmd_table_init_add(struct gsi_trans *trans,
+void ipa_cmd_table_init_add(struct ipa_trans *trans,
 			    enum ipa_cmd_opcode opcode, u16 size, u32 offset,
 			    dma_addr_t addr, u16 hash_size, u32 hash_offset,
 			    dma_addr_t hash_addr)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum dma_data_direction direction = DMA_TO_DEVICE;
 	struct ipa_cmd_hw_ip_fltrt_init *payload;
 	union ipa_cmd_payload *cmd_payload;
@@ -433,15 +433,15 @@ void ipa_cmd_table_init_add(struct gsi_trans *trans,
 	payload->flags = cpu_to_le64(val);
 	payload->nhash_rules_addr = cpu_to_le64(addr);
 
-	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
 }
 
 /* Initialize header space in IPA-local memory */
-void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
+void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
 				dma_addr_t addr)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL;
 	enum dma_data_direction direction = DMA_TO_DEVICE;
 	struct ipa_cmd_hw_hdr_init_local *payload;
@@ -464,14 +464,14 @@ void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
 	flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
 	payload->flags = cpu_to_le32(flags);
 
-	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
 }
 
-void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
+void ipa_cmd_register_write_add(struct ipa_trans *trans, u32 offset, u32 value,
 				u32 mask, bool clear_full)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	struct ipa_cmd_register_write *payload;
 	union ipa_cmd_payload *cmd_payload;
 	u32 opcode = IPA_CMD_REGISTER_WRITE;
@@ -521,14 +521,14 @@ void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
 	payload->value_mask = cpu_to_le32(mask);
 	payload->clear_options = cpu_to_le32(options);
 
-	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  DMA_NONE, opcode);
 }
 
 /* Skip IP packet processing on the next data transfer on a TX channel */
-static void ipa_cmd_ip_packet_init_add(struct gsi_trans *trans, u8 endpoint_id)
+static void ipa_cmd_ip_packet_init_add(struct ipa_trans *trans, u8 endpoint_id)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_INIT;
 	enum dma_data_direction direction = DMA_TO_DEVICE;
 	struct ipa_cmd_ip_packet_init *payload;
@@ -541,15 +541,15 @@ static void ipa_cmd_ip_packet_init_add(struct gsi_trans *trans, u8 endpoint_id)
 	payload->dest_endpoint = u8_encode_bits(endpoint_id,
 					IPA_PACKET_INIT_DEST_ENDPOINT_FMASK);
 
-	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
 }
 
 /* Use a DMA command to read or write a block of IPA-resident memory */
-void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset, u16 size,
+void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
 				dma_addr_t addr, bool toward_ipa)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM;
 	struct ipa_cmd_hw_dma_mem_mem *payload;
 	union ipa_cmd_payload *cmd_payload;
@@ -586,13 +586,13 @@ void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset, u16 size,
 
 	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
 
-	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
 }
 
-static void ipa_cmd_ip_tag_status_add(struct gsi_trans *trans)
+static void ipa_cmd_ip_tag_status_add(struct ipa_trans *trans)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_TAG_STATUS;
 	enum dma_data_direction direction = DMA_TO_DEVICE;
 	struct ipa_cmd_ip_packet_tag_status *payload;
@@ -604,14 +604,14 @@ static void ipa_cmd_ip_tag_status_add(struct gsi_trans *trans)
 
 	payload->tag = le64_encode_bits(0, IP_PACKET_TAG_STATUS_TAG_FMASK);
 
-	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
 }
 
 /* Issue a small command TX data transfer */
-static void ipa_cmd_transfer_add(struct gsi_trans *trans)
+static void ipa_cmd_transfer_add(struct ipa_trans *trans)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum dma_data_direction direction = DMA_TO_DEVICE;
 	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
 	union ipa_cmd_payload *payload;
@@ -620,14 +620,14 @@ static void ipa_cmd_transfer_add(struct gsi_trans *trans)
 	/* Just transfer a zero-filled payload structure */
 	payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
 
-	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
+	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
 }
 
 /* Add immediate commands to a transaction to clear the hardware pipeline */
-void ipa_cmd_pipeline_clear_add(struct gsi_trans *trans)
+void ipa_cmd_pipeline_clear_add(struct ipa_trans *trans)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	struct ipa_endpoint *endpoint;
 
 	/* This will complete when the transfer is received */
@@ -664,12 +664,12 @@ void ipa_cmd_pipeline_clear_wait(struct ipa *ipa)
 void ipa_cmd_pipeline_clear(struct ipa *ipa)
 {
 	u32 count = ipa_cmd_pipeline_clear_count();
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 
 	trans = ipa_cmd_trans_alloc(ipa, count);
 	if (trans) {
 		ipa_cmd_pipeline_clear_add(trans);
-		gsi_trans_commit_wait(trans);
+		ipa_trans_commit_wait(trans);
 		ipa_cmd_pipeline_clear_wait(ipa);
 	} else {
 		dev_err(&ipa->pdev->dev,
@@ -680,22 +680,22 @@ void ipa_cmd_pipeline_clear(struct ipa *ipa)
 static struct ipa_cmd_info *
 ipa_cmd_info_alloc(struct ipa_endpoint *endpoint, u32 tre_count)
 {
-	struct gsi_channel *channel;
+	struct ipa_channel *channel;
 
-	channel = &endpoint->ipa->gsi.channel[endpoint->channel_id];
+	channel = &endpoint->ipa->dma_subsys.channel[endpoint->channel_id];
 
-	return gsi_trans_pool_alloc(&channel->trans_info.info_pool, tre_count);
+	return ipa_trans_pool_alloc(&channel->trans_info.info_pool, tre_count);
 }
 
 /* Allocate a transaction for the command TX endpoint */
-struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count)
+struct ipa_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count)
 {
 	struct ipa_endpoint *endpoint;
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 
 	endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
 
-	trans = gsi_channel_trans_alloc(&ipa->gsi, endpoint->channel_id,
+	trans = ipa_channel_trans_alloc(&ipa->dma_subsys, endpoint->channel_id,
 					tre_count, DMA_NONE);
 	if (trans)
 		trans->info = ipa_cmd_info_alloc(endpoint, tre_count);
diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h
index 69cd085d427d..bf3b72d11e9d 100644
--- a/drivers/net/ipa/ipa_cmd.h
+++ b/drivers/net/ipa/ipa_cmd.h
@@ -14,8 +14,8 @@ struct scatterlist;
 
 struct ipa;
 struct ipa_mem;
-struct gsi_trans;
-struct gsi_channel;
+struct ipa_trans;
+struct ipa_channel;
 
 /**
  * enum ipa_cmd_opcode:	IPA immediate commands
@@ -83,13 +83,13 @@ bool ipa_cmd_data_valid(struct ipa *ipa);
  *
  * Return:	0 if successful, or a negative error code
  */
-int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_count);
+int ipa_cmd_pool_init(struct ipa_channel *channel, u32 tre_count);
 
 /**
  * ipa_cmd_pool_exit() - Inverse of ipa_cmd_pool_init()
  * @channel:	AP->IPA command TX GSI channel pointer
  */
-void ipa_cmd_pool_exit(struct gsi_channel *channel);
+void ipa_cmd_pool_exit(struct ipa_channel *channel);
 
 /**
  * ipa_cmd_table_init_add() - Add table init command to a transaction
@@ -104,7 +104,7 @@ void ipa_cmd_pool_exit(struct gsi_channel *channel);
  *
  * If hash_size is 0, hash_offset and hash_addr are ignored.
  */
-void ipa_cmd_table_init_add(struct gsi_trans *trans, enum ipa_cmd_opcode opcode,
+void ipa_cmd_table_init_add(struct ipa_trans *trans, enum ipa_cmd_opcode opcode,
 			    u16 size, u32 offset, dma_addr_t addr,
 			    u16 hash_size, u32 hash_offset,
 			    dma_addr_t hash_addr);
@@ -118,7 +118,7 @@ void ipa_cmd_table_init_add(struct gsi_trans *trans, enum ipa_cmd_opcode opcode,
  *
  * Defines and fills the location in IPA memory to use for headers.
  */
-void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
+void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
 				dma_addr_t addr);
 
 /**
@@ -129,7 +129,7 @@ void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
  * @mask:	Mask of bits in register to update with bits from value
  * @clear_full: Pipeline clear option; true means full pipeline clear
  */
-void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
+void ipa_cmd_register_write_add(struct ipa_trans *trans, u32 offset, u32 value,
 				u32 mask, bool clear_full);
 
 /**
@@ -140,14 +140,14 @@ void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
  * @addr:	DMA address of buffer to be read into or written from
  * @toward_ipa:	true means write to IPA memory; false means read
  */
-void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset,
+void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset,
 				u16 size, dma_addr_t addr, bool toward_ipa);
 
 /**
  * ipa_cmd_pipeline_clear_add() - Add pipeline clear commands to a transaction
  * @trans:	GSI transaction
  */
-void ipa_cmd_pipeline_clear_add(struct gsi_trans *trans);
+void ipa_cmd_pipeline_clear_add(struct ipa_trans *trans);
 
 /**
  * ipa_cmd_pipeline_clear_count() - # commands required to clear pipeline
@@ -177,6 +177,6 @@ void ipa_cmd_pipeline_clear(struct ipa *ipa);
  * Return:	A GSI transaction structure, or a null pointer if all
  *		available transactions are in use
  */
-struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count);
+struct ipa_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count);
 
 #endif /* _IPA_CMD_H_ */
diff --git a/drivers/net/ipa/ipa_data-v3.5.1.c b/drivers/net/ipa/ipa_data-v3.5.1.c
index 760c22bbdf70..80ec55ef5ecc 100644
--- a/drivers/net/ipa/ipa_data-v3.5.1.c
+++ b/drivers/net/ipa/ipa_data-v3.5.1.c
@@ -6,7 +6,7 @@
 
 #include <linux/log2.h>
 
-#include "gsi.h"
+#include "ipa_dma.h"
 #include "ipa_data.h"
 #include "ipa_endpoint.h"
 #include "ipa_mem.h"
diff --git a/drivers/net/ipa/ipa_data-v4.11.c b/drivers/net/ipa/ipa_data-v4.11.c
index fea91451a0c3..9db4c82213e4 100644
--- a/drivers/net/ipa/ipa_data-v4.11.c
+++ b/drivers/net/ipa/ipa_data-v4.11.c
@@ -4,7 +4,7 @@
 
 #include <linux/log2.h>
 
-#include "gsi.h"
+#include "ipa_dma.h"
 #include "ipa_data.h"
 #include "ipa_endpoint.h"
 #include "ipa_mem.h"
diff --git a/drivers/net/ipa/ipa_data-v4.2.c b/drivers/net/ipa/ipa_data-v4.2.c
index 2a231e79d5e1..afae3fdbf6d7 100644
--- a/drivers/net/ipa/ipa_data-v4.2.c
+++ b/drivers/net/ipa/ipa_data-v4.2.c
@@ -4,7 +4,7 @@
 
 #include <linux/log2.h>
 
-#include "gsi.h"
+#include "ipa_dma.h"
 #include "ipa_data.h"
 #include "ipa_endpoint.h"
 #include "ipa_mem.h"
diff --git a/drivers/net/ipa/ipa_data-v4.5.c b/drivers/net/ipa/ipa_data-v4.5.c
index e62ab9c3ac67..415167658962 100644
--- a/drivers/net/ipa/ipa_data-v4.5.c
+++ b/drivers/net/ipa/ipa_data-v4.5.c
@@ -4,7 +4,7 @@
 
 #include <linux/log2.h>
 
-#include "gsi.h"
+#include "ipa_dma.h"
 #include "ipa_data.h"
 #include "ipa_endpoint.h"
 #include "ipa_mem.h"
diff --git a/drivers/net/ipa/ipa_data-v4.9.c b/drivers/net/ipa/ipa_data-v4.9.c
index 2421b5abb5d4..e5c20fc080c3 100644
--- a/drivers/net/ipa/ipa_data-v4.9.c
+++ b/drivers/net/ipa/ipa_data-v4.9.c
@@ -4,7 +4,7 @@
 
 #include <linux/log2.h>
 
-#include "gsi.h"
+#include "ipa_dma.h"
 #include "ipa_data.h"
 #include "ipa_endpoint.h"
 #include "ipa_mem.h"
diff --git a/drivers/net/ipa/gsi.h b/drivers/net/ipa/ipa_dma.h
similarity index 85%
rename from drivers/net/ipa/gsi.h
rename to drivers/net/ipa/ipa_dma.h
index 88b80dc3db79..d053929ca3e3 100644
--- a/drivers/net/ipa/gsi.h
+++ b/drivers/net/ipa/ipa_dma.h
@@ -26,8 +26,8 @@ struct device;
 struct scatterlist;
 struct platform_device;
 
-struct gsi;
-struct gsi_trans;
+struct ipa_dma;
+struct ipa_trans;
 struct gsi_channel_data;
 struct ipa_gsi_endpoint_data;
 
@@ -70,7 +70,7 @@ struct gsi_ring {
  * The result of a pool allocation of multiple elements is always
  * contiguous.
  */
-struct gsi_trans_pool {
+struct ipa_trans_pool {
 	void *base;			/* base address of element pool */
 	u32 count;			/* # elements in the pool */
 	u32 free;			/* next free element in pool (modulo) */
@@ -79,13 +79,13 @@ struct gsi_trans_pool {
 	dma_addr_t addr;		/* DMA address if DMA pool (or 0) */
 };
 
-struct gsi_trans_info {
+struct ipa_trans_info {
 	atomic_t tre_avail;		/* TREs available for allocation */
-	struct gsi_trans_pool pool;	/* transaction pool */
-	struct gsi_trans_pool sg_pool;	/* scatterlist pool */
-	struct gsi_trans_pool cmd_pool;	/* command payload DMA pool */
-	struct gsi_trans_pool info_pool;/* command information pool */
-	struct gsi_trans **map;		/* TRE -> transaction map */
+	struct ipa_trans_pool pool;	/* transaction pool */
+	struct ipa_trans_pool sg_pool;	/* scatterlist pool */
+	struct ipa_trans_pool cmd_pool;	/* command payload DMA pool */
+	struct ipa_trans_pool info_pool;/* command information pool */
+	struct ipa_trans **map;		/* TRE -> transaction map */
 
 	spinlock_t spinlock;		/* protects updates to the lists */
 	struct list_head alloc;		/* allocated, not committed */
@@ -105,8 +105,8 @@ enum gsi_channel_state {
 };
 
 /* We only care about channels between IPA and AP */
-struct gsi_channel {
-	struct gsi *gsi;
+struct ipa_channel {
+	struct ipa_dma *dma_subsys;
 	bool toward_ipa;
 	bool command;			/* AP command TX channel or not */
 
@@ -127,7 +127,7 @@ struct gsi_channel {
 	u64 compl_byte_count;		/* last reported completed byte count */
 	u64 compl_trans_count;		/* ...and completed trans count */
 
-	struct gsi_trans_info trans_info;
+	struct ipa_trans_info trans_info;
 
 	struct napi_struct napi;
 };
@@ -140,12 +140,12 @@ enum gsi_evt_ring_state {
 };
 
 struct gsi_evt_ring {
-	struct gsi_channel *channel;
+	struct ipa_channel *channel;
 	struct completion completion;	/* signals event ring state changes */
 	struct gsi_ring ring;
 };
 
-struct gsi {
+struct ipa_dma {
 	struct device *dev;		/* Same as IPA device */
 	enum ipa_version version;
 	struct net_device dummy_dev;	/* needed for NAPI */
@@ -154,7 +154,7 @@ struct gsi {
 	u32 irq;
 	u32 channel_count;
 	u32 evt_ring_count;
-	struct gsi_channel channel[GSI_CHANNEL_COUNT_MAX];
+	struct ipa_channel channel[GSI_CHANNEL_COUNT_MAX];
 	struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX];
 	u32 event_bitmap;		/* allocated event rings */
 	u32 modem_channel_bitmap;	/* modem channels to allocate */
@@ -174,13 +174,13 @@ struct gsi {
  * Performs initialization that must wait until the GSI hardware is
  * ready (including firmware loaded).
  */
-int gsi_setup(struct gsi *gsi);
+int gsi_setup(struct ipa_dma *dma_subsys);
 
 /**
  * gsi_teardown() - Tear down GSI subsystem
  * @gsi:	GSI address previously passed to a successful gsi_setup() call
  */
-void gsi_teardown(struct gsi *gsi);
+void gsi_teardown(struct ipa_dma *dma_subsys);
 
 /**
  * gsi_channel_tre_max() - Channel maximum number of in-flight TREs
@@ -189,7 +189,7 @@ void gsi_teardown(struct gsi *gsi);
  *
  * Return:	 The maximum number of TREs oustanding on the channel
  */
-u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id);
+u32 gsi_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
 
 /**
  * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction
@@ -198,7 +198,7 @@ u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id);
  *
  * Return:	 The maximum TRE count per transaction on the channel
  */
-u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id);
+u32 gsi_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
 
 /**
  * gsi_channel_start() - Start an allocated GSI channel
@@ -207,7 +207,7 @@ u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id);
  *
  * Return:	0 if successful, or a negative error code
  */
-int gsi_channel_start(struct gsi *gsi, u32 channel_id);
+int gsi_channel_start(struct ipa_dma *dma_subsys, u32 channel_id);
 
 /**
  * gsi_channel_stop() - Stop a started GSI channel
@@ -216,7 +216,7 @@ int gsi_channel_start(struct gsi *gsi, u32 channel_id);
  *
  * Return:	0 if successful, or a negative error code
  */
-int gsi_channel_stop(struct gsi *gsi, u32 channel_id);
+int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
 
 /**
  * gsi_channel_reset() - Reset an allocated GSI channel
@@ -230,19 +230,19 @@ int gsi_channel_stop(struct gsi *gsi, u32 channel_id);
  * GSI hardware relinquishes ownership of all pending receive buffer
  * transactions and they will complete with their cancelled flag set.
  */
-void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool doorbell);
+void gsi_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
 
 /**
  * gsi_suspend() - Prepare the GSI subsystem for suspend
  * @gsi:	GSI pointer
  */
-void gsi_suspend(struct gsi *gsi);
+void gsi_suspend(struct ipa_dma *dma_subsys);
 
 /**
  * gsi_resume() - Resume the GSI subsystem following suspend
  * @gsi:	GSI pointer
  */
-void gsi_resume(struct gsi *gsi);
+void gsi_resume(struct ipa_dma *dma_subsys);
 
 /**
  * gsi_channel_suspend() - Suspend a GSI channel
@@ -251,7 +251,7 @@ void gsi_resume(struct gsi *gsi);
  *
  * For IPA v4.0+, suspend is implemented by stopping the channel.
  */
-int gsi_channel_suspend(struct gsi *gsi, u32 channel_id);
+int gsi_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id);
 
 /**
  * gsi_channel_resume() - Resume a suspended GSI channel
@@ -260,7 +260,7 @@ int gsi_channel_suspend(struct gsi *gsi, u32 channel_id);
  *
  * For IPA v4.0+, the stopped channel is started again.
  */
-int gsi_channel_resume(struct gsi *gsi, u32 channel_id);
+int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
 
 /**
  * gsi_init() - Initialize the GSI subsystem
@@ -275,7 +275,7 @@ int gsi_channel_resume(struct gsi *gsi, u32 channel_id);
  * Early stage initialization of the GSI subsystem, performing tasks
  * that can be done before the GSI hardware is ready to use.
  */
-int gsi_init(struct gsi *gsi, struct platform_device *pdev,
+int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
 	     enum ipa_version version, u32 count,
 	     const struct ipa_gsi_endpoint_data *data);
 
@@ -283,6 +283,6 @@ int gsi_init(struct gsi *gsi, struct platform_device *pdev,
  * gsi_exit() - Exit the GSI subsystem
  * @gsi:	GSI address previously passed to a successful gsi_init() call
  */
-void gsi_exit(struct gsi *gsi);
+void gsi_exit(struct ipa_dma *dma_subsys);
 
 #endif /* _GSI_H_ */
diff --git a/drivers/net/ipa/gsi_private.h b/drivers/net/ipa/ipa_dma_private.h
similarity index 67%
rename from drivers/net/ipa/gsi_private.h
rename to drivers/net/ipa/ipa_dma_private.h
index ea333a244cf5..40148a551b47 100644
--- a/drivers/net/ipa/gsi_private.h
+++ b/drivers/net/ipa/ipa_dma_private.h
@@ -6,38 +6,38 @@
 #ifndef _GSI_PRIVATE_H_
 #define _GSI_PRIVATE_H_
 
-/* === Only "gsi.c" and "gsi_trans.c" should include this file === */
+/* === Only "gsi.c" and "ipa_trans.c" should include this file === */
 
 #include <linux/types.h>
 
-struct gsi_trans;
+struct ipa_trans;
 struct gsi_ring;
-struct gsi_channel;
+struct ipa_channel;
 
 #define GSI_RING_ELEMENT_SIZE	16	/* bytes; must be a power of 2 */
 
 /* Return the entry that follows one provided in a transaction pool */
-void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element);
+void *ipa_trans_pool_next(struct ipa_trans_pool *pool, void *element);
 
 /**
- * gsi_trans_move_complete() - Mark a GSI transaction completed
+ * ipa_trans_move_complete() - Mark a GSI transaction completed
  * @trans:	Transaction to commit
  */
-void gsi_trans_move_complete(struct gsi_trans *trans);
+void ipa_trans_move_complete(struct ipa_trans *trans);
 
 /**
- * gsi_trans_move_polled() - Mark a transaction polled
+ * ipa_trans_move_polled() - Mark a transaction polled
  * @trans:	Transaction to update
  */
-void gsi_trans_move_polled(struct gsi_trans *trans);
+void ipa_trans_move_polled(struct ipa_trans *trans);
 
 /**
- * gsi_trans_complete() - Complete a GSI transaction
+ * ipa_trans_complete() - Complete a GSI transaction
  * @trans:	Transaction to complete
  *
  * Marks a transaction complete (including freeing it).
  */
-void gsi_trans_complete(struct gsi_trans *trans);
+void ipa_trans_complete(struct ipa_trans *trans);
 
 /**
  * gsi_channel_trans_mapped() - Return a transaction mapped to a TRE index
@@ -46,19 +46,19 @@ void gsi_trans_complete(struct gsi_trans *trans);
  *
  * Return:	The GSI transaction pointer associated with the TRE index
  */
-struct gsi_trans *gsi_channel_trans_mapped(struct gsi_channel *channel,
+struct ipa_trans *gsi_channel_trans_mapped(struct ipa_channel *channel,
 					   u32 index);
 
 /**
- * gsi_channel_trans_complete() - Return a channel's next completed transaction
+ * ipa_channel_trans_complete() - Return a channel's next completed transaction
  * @channel:	Channel whose next transaction is to be returned
  *
  * Return:	The next completed transaction, or NULL if nothing new
  */
-struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel);
+struct ipa_trans *ipa_channel_trans_complete(struct ipa_channel *channel);
 
 /**
- * gsi_channel_trans_cancel_pending() - Cancel pending transactions
+ * ipa_channel_trans_cancel_pending() - Cancel pending transactions
  * @channel:	Channel whose pending transactions should be cancelled
  *
  * Cancel all pending transactions on a channel.  These are transactions
@@ -69,10 +69,10 @@ struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel);
  * NOTE:  Transactions already complete at the time of this call are
  *	  unaffected.
  */
-void gsi_channel_trans_cancel_pending(struct gsi_channel *channel);
+void ipa_channel_trans_cancel_pending(struct ipa_channel *channel);
 
 /**
- * gsi_channel_trans_init() - Initialize a channel's GSI transaction info
+ * ipa_channel_trans_init() - Initialize a channel's GSI transaction info
  * @gsi:	GSI pointer
  * @channel_id:	Channel number
  *
@@ -80,13 +80,13 @@ void gsi_channel_trans_cancel_pending(struct gsi_channel *channel);
  *
  * Creates and sets up information for managing transactions on a channel
  */
-int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id);
+int ipa_channel_trans_init(struct ipa_dma *gsi, u32 channel_id);
 
 /**
- * gsi_channel_trans_exit() - Inverse of gsi_channel_trans_init()
+ * ipa_channel_trans_exit() - Inverse of ipa_channel_trans_init()
  * @channel:	Channel whose transaction information is to be cleaned up
  */
-void gsi_channel_trans_exit(struct gsi_channel *channel);
+void ipa_channel_trans_exit(struct ipa_channel *channel);
 
 /**
  * gsi_channel_doorbell() - Ring a channel's doorbell
@@ -95,7 +95,7 @@ void gsi_channel_trans_exit(struct gsi_channel *channel);
  * Rings a channel's doorbell to inform the GSI hardware that new
  * transactions (TREs, really) are available for it to process.
  */
-void gsi_channel_doorbell(struct gsi_channel *channel);
+void gsi_channel_doorbell(struct ipa_channel *channel);
 
 /**
  * gsi_ring_virt() - Return virtual address for a ring entry
@@ -105,7 +105,7 @@ void gsi_channel_doorbell(struct gsi_channel *channel);
 void *gsi_ring_virt(struct gsi_ring *ring, u32 index);
 
 /**
- * gsi_channel_tx_queued() - Report the number of bytes queued to hardware
+ * ipa_channel_tx_queued() - Report the number of bytes queued to hardware
  * @channel:	Channel whose bytes have been queued
  *
 * This arranges for the number of transactions and bytes for
@@ -113,6 +113,6 @@ void *gsi_ring_virt(struct gsi_ring *ring, u32 index);
  * passes this information up the network stack so it can be used to
  * throttle transmissions.
  */
-void gsi_channel_tx_queued(struct gsi_channel *channel);
+void ipa_channel_tx_queued(struct ipa_channel *channel);
 
 #endif /* _GSI_PRIVATE_H_ */
diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 29227de6661f..90d6880e8a25 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -11,8 +11,8 @@
 #include <linux/if_rmnet.h>
 #include <linux/dma-direction.h>
 
-#include "gsi.h"
-#include "gsi_trans.h"
+#include "ipa_dma.h"
+#include "ipa_trans.h"
 #include "ipa.h"
 #include "ipa_data.h"
 #include "ipa_endpoint.h"
@@ -224,16 +224,16 @@ static bool ipa_endpoint_data_valid(struct ipa *ipa, u32 count,
 }
 
 /* Allocate a transaction to use on a non-command endpoint */
-static struct gsi_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint,
+static struct ipa_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint,
 						  u32 tre_count)
 {
-	struct gsi *gsi = &endpoint->ipa->gsi;
+	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
 	u32 channel_id = endpoint->channel_id;
 	enum dma_data_direction direction;
 
 	direction = endpoint->toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
 
-	return gsi_channel_trans_alloc(gsi, channel_id, tre_count, direction);
+	return ipa_channel_trans_alloc(gsi, channel_id, tre_count, direction);
 }
 
 /* suspend_delay represents suspend for RX, delay for TX endpoints.
@@ -382,7 +382,7 @@ void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable)
 int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 {
 	u32 initialized = ipa->initialized;
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 	u32 count;
 
 	/* We need one command per modem TX endpoint.  We can get an upper
@@ -422,7 +422,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 	ipa_cmd_pipeline_clear_add(trans);
 
 	/* XXX This should have a 1 second timeout */
-	gsi_trans_commit_wait(trans);
+	ipa_trans_commit_wait(trans);
 
 	ipa_cmd_pipeline_clear_wait(ipa);
 
@@ -938,7 +938,7 @@ static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint)
  */
 int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb)
 {
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 	u32 nr_frags;
 	int ret;
 
@@ -957,17 +957,17 @@ int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb)
 	if (!trans)
 		return -EBUSY;
 
-	ret = gsi_trans_skb_add(trans, skb);
+	ret = ipa_trans_skb_add(trans, skb);
 	if (ret)
 		goto err_trans_free;
 	trans->data = skb;	/* transaction owns skb now */
 
-	gsi_trans_commit(trans, !netdev_xmit_more());
+	ipa_trans_commit(trans, !netdev_xmit_more());
 
 	return 0;
 
 err_trans_free:
-	gsi_trans_free(trans);
+	ipa_trans_free(trans);
 
 	return -ENOMEM;
 }
@@ -1004,7 +1004,7 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
 
 static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
 {
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 	bool doorbell = false;
 	struct page *page;
 	u32 offset;
@@ -1023,7 +1023,7 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
 	offset = NET_SKB_PAD;
 	len = IPA_RX_BUFFER_SIZE - offset;
 
-	ret = gsi_trans_page_add(trans, page, len, offset);
+	ret = ipa_trans_page_add(trans, page, len, offset);
 	if (ret)
 		goto err_trans_free;
 	trans->data = page;	/* transaction owns page now */
@@ -1033,12 +1033,12 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
 		endpoint->replenish_ready = 0;
 	}
 
-	gsi_trans_commit(trans, doorbell);
+	ipa_trans_commit(trans, doorbell);
 
 	return 0;
 
 err_trans_free:
-	gsi_trans_free(trans);
+	ipa_trans_free(trans);
 err_free_pages:
 	__free_pages(page, get_order(IPA_RX_BUFFER_SIZE));
 
@@ -1060,7 +1060,7 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
  */
 static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
 {
-	struct gsi *gsi;
+	struct ipa_dma *gsi;
 	u32 backlog;
 
 	if (!endpoint->replenish_enabled) {
@@ -1090,7 +1090,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
 	 * Receive buffer transactions use one TRE, so schedule work to
 	 * try replenishing again if our backlog is *all* available TREs.
 	 */
-	gsi = &endpoint->ipa->gsi;
+	gsi = &endpoint->ipa->dma_subsys;
 	if (backlog == gsi_channel_tre_max(gsi, endpoint->channel_id))
 		schedule_delayed_work(&endpoint->replenish_work,
 				      msecs_to_jiffies(1));
@@ -1098,7 +1098,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
 
 static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
 {
-	struct gsi *gsi = &endpoint->ipa->gsi;
+	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
 	u32 max_backlog;
 	u32 saved;
 
@@ -1320,13 +1320,13 @@ static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint,
 
 /* Complete a TX transaction, command or from ipa_endpoint_skb_tx() */
 static void ipa_endpoint_tx_complete(struct ipa_endpoint *endpoint,
-				     struct gsi_trans *trans)
+				     struct ipa_trans *trans)
 {
 }
 
 /* Complete transaction initiated in ipa_endpoint_replenish_one() */
 static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint,
-				     struct gsi_trans *trans)
+				     struct ipa_trans *trans)
 {
 	struct page *page;
 
@@ -1344,7 +1344,7 @@ static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint,
 }
 
 void ipa_endpoint_trans_complete(struct ipa_endpoint *endpoint,
-				 struct gsi_trans *trans)
+				 struct ipa_trans *trans)
 {
 	if (endpoint->toward_ipa)
 		ipa_endpoint_tx_complete(endpoint, trans);
@@ -1353,7 +1353,7 @@ void ipa_endpoint_trans_complete(struct ipa_endpoint *endpoint,
 }
 
 void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
-				struct gsi_trans *trans)
+				struct ipa_trans *trans)
 {
 	if (endpoint->toward_ipa) {
 		struct ipa *ipa = endpoint->ipa;
@@ -1406,7 +1406,7 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
 {
 	struct device *dev = &endpoint->ipa->pdev->dev;
 	struct ipa *ipa = endpoint->ipa;
-	struct gsi *gsi = &ipa->gsi;
+	struct ipa_dma *gsi = &ipa->dma_subsys;
 	bool suspended = false;
 	dma_addr_t addr;
 	u32 retries;
@@ -1504,7 +1504,7 @@ static void ipa_endpoint_reset(struct ipa_endpoint *endpoint)
 	if (special && ipa_endpoint_aggr_active(endpoint))
 		ret = ipa_endpoint_reset_rx_aggr(endpoint);
 	else
-		gsi_channel_reset(&ipa->gsi, channel_id, true);
+		gsi_channel_reset(&ipa->dma_subsys, channel_id, true);
 
 	if (ret)
 		dev_err(&ipa->pdev->dev,
@@ -1534,7 +1534,7 @@ static void ipa_endpoint_program(struct ipa_endpoint *endpoint)
 int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
 {
 	struct ipa *ipa = endpoint->ipa;
-	struct gsi *gsi = &ipa->gsi;
+	struct ipa_dma *gsi = &ipa->dma_subsys;
 	int ret;
 
 	ret = gsi_channel_start(gsi, endpoint->channel_id);
@@ -1561,7 +1561,7 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
 {
 	u32 mask = BIT(endpoint->endpoint_id);
 	struct ipa *ipa = endpoint->ipa;
-	struct gsi *gsi = &ipa->gsi;
+	struct ipa_dma *gsi = &ipa->dma_subsys;
 	int ret;
 
 	if (!(ipa->enabled & mask))
@@ -1586,7 +1586,8 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
 void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
 {
 	struct device *dev = &endpoint->ipa->pdev->dev;
-	struct gsi *gsi = &endpoint->ipa->gsi;
+	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
+	bool stop_channel;
 	int ret;
 
 	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
@@ -1606,7 +1607,8 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
 void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
 {
 	struct device *dev = &endpoint->ipa->pdev->dev;
-	struct gsi *gsi = &endpoint->ipa->gsi;
+	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
+	bool start_channel;
 	int ret;
 
 	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
@@ -1651,7 +1653,7 @@ void ipa_endpoint_resume(struct ipa *ipa)
 
 static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
 {
-	struct gsi *gsi = &endpoint->ipa->gsi;
+	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
 	u32 channel_id = endpoint->channel_id;
 
 	/* Only AP endpoints get set up */
diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
index 0a859d10312d..7ba06abc1968 100644
--- a/drivers/net/ipa/ipa_endpoint.h
+++ b/drivers/net/ipa/ipa_endpoint.h
@@ -10,7 +10,7 @@
 #include <linux/workqueue.h>
 #include <linux/if_ether.h>
 
-#include "gsi.h"
+#include "ipa_dma.h"
 #include "ipa_reg.h"
 
 struct net_device;
@@ -110,8 +110,8 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
 void ipa_endpoint_exit(struct ipa *ipa);
 
 void ipa_endpoint_trans_complete(struct ipa_endpoint *ipa,
-				 struct gsi_trans *trans);
+				 struct ipa_trans *trans);
 void ipa_endpoint_trans_release(struct ipa_endpoint *ipa,
-				struct gsi_trans *trans);
+				struct ipa_trans *trans);
 
 #endif /* _IPA_ENDPOINT_H_ */
diff --git a/drivers/net/ipa/ipa_gsi.c b/drivers/net/ipa/ipa_gsi.c
index d323adb03383..d212ca01894d 100644
--- a/drivers/net/ipa/ipa_gsi.c
+++ b/drivers/net/ipa/ipa_gsi.c
@@ -7,29 +7,29 @@
 #include <linux/types.h>
 
 #include "ipa_gsi.h"
-#include "gsi_trans.h"
+#include "ipa_trans.h"
 #include "ipa.h"
 #include "ipa_endpoint.h"
 #include "ipa_data.h"
 
-void ipa_gsi_trans_complete(struct gsi_trans *trans)
+void ipa_gsi_trans_complete(struct ipa_trans *trans)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 
 	ipa_endpoint_trans_complete(ipa->channel_map[trans->channel_id], trans);
 }
 
-void ipa_gsi_trans_release(struct gsi_trans *trans)
+void ipa_gsi_trans_release(struct ipa_trans *trans)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 
 	ipa_endpoint_trans_release(ipa->channel_map[trans->channel_id], trans);
 }
 
-void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
+void ipa_gsi_channel_tx_queued(struct ipa_dma *gsi, u32 channel_id, u32 count,
 			       u32 byte_count)
 {
-	struct ipa *ipa = container_of(gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(gsi, struct ipa, dma_subsys);
 	struct ipa_endpoint *endpoint;
 
 	endpoint = ipa->channel_map[channel_id];
@@ -37,10 +37,10 @@ void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
 		netdev_sent_queue(endpoint->netdev, byte_count);
 }
 
-void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count,
+void ipa_gsi_channel_tx_completed(struct ipa_dma *gsi, u32 channel_id, u32 count,
 				  u32 byte_count)
 {
-	struct ipa *ipa = container_of(gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(gsi, struct ipa, dma_subsys);
 	struct ipa_endpoint *endpoint;
 
 	endpoint = ipa->channel_map[channel_id];
diff --git a/drivers/net/ipa/ipa_gsi.h b/drivers/net/ipa/ipa_gsi.h
index c02cb6f3a2e1..85df59177c34 100644
--- a/drivers/net/ipa/ipa_gsi.h
+++ b/drivers/net/ipa/ipa_gsi.h
@@ -8,8 +8,8 @@
 
 #include <linux/types.h>
 
-struct gsi;
-struct gsi_trans;
+struct ipa_dma;
+struct ipa_trans;
 struct ipa_gsi_endpoint_data;
 
 /**
@@ -19,7 +19,7 @@ struct ipa_gsi_endpoint_data;
  * This called from the GSI layer to notify the IPA layer that a
  * transaction has completed.
  */
-void ipa_gsi_trans_complete(struct gsi_trans *trans);
+void ipa_gsi_trans_complete(struct ipa_trans *trans);
 
 /**
  * ipa_gsi_trans_release() - GSI transaction release callback
@@ -29,7 +29,7 @@ void ipa_gsi_trans_complete(struct gsi_trans *trans);
  * transaction is about to be freed, so any resources associated
  * with it should be released.
  */
-void ipa_gsi_trans_release(struct gsi_trans *trans);
+void ipa_gsi_trans_release(struct ipa_trans *trans);
 
 /**
  * ipa_gsi_channel_tx_queued() - GSI queued to hardware notification
@@ -41,7 +41,7 @@ void ipa_gsi_trans_release(struct gsi_trans *trans);
  * This called from the GSI layer to notify the IPA layer that some
  * number of transactions have been queued to hardware for execution.
  */
-void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
+void ipa_gsi_channel_tx_queued(struct ipa_dma *gsi, u32 channel_id, u32 count,
 			       u32 byte_count);
 
 /**
@@ -54,7 +54,7 @@ void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
  * This called from the GSI layer to notify the IPA layer that the hardware
  * has reported the completion of some number of transactions.
  */
-void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count,
+void ipa_gsi_channel_tx_completed(struct ipa_dma *gsi, u32 channel_id, u32 count,
 				  u32 byte_count);
 
 /* ipa_gsi_endpoint_data_empty() - Empty endpoint config data test
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index cdfa98a76e1f..026f5555fa7d 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -31,7 +31,7 @@
 #include "ipa_modem.h"
 #include "ipa_uc.h"
 #include "ipa_interrupt.h"
-#include "gsi_trans.h"
+#include "ipa_trans.h"
 #include "ipa_sysfs.h"
 
 /**
@@ -98,7 +98,7 @@ int ipa_setup(struct ipa *ipa)
 	struct device *dev = &ipa->pdev->dev;
 	int ret;
 
-	ret = gsi_setup(&ipa->gsi);
+	ret = gsi_setup(&ipa->dma_subsys);
 	if (ret)
 		return ret;
 
@@ -154,7 +154,7 @@ int ipa_setup(struct ipa *ipa)
 	ipa_endpoint_teardown(ipa);
 	ipa_power_teardown(ipa);
 err_gsi_teardown:
-	gsi_teardown(&ipa->gsi);
+	gsi_teardown(&ipa->dma_subsys);
 
 	return ret;
 }
@@ -179,7 +179,7 @@ static void ipa_teardown(struct ipa *ipa)
 	ipa_endpoint_disable_one(command_endpoint);
 	ipa_endpoint_teardown(ipa);
 	ipa_power_teardown(ipa);
-	gsi_teardown(&ipa->gsi);
+	gsi_teardown(&ipa->dma_subsys);
 }
 
 /* Configure bus access behavior for IPA components */
@@ -716,7 +716,7 @@ static int ipa_probe(struct platform_device *pdev)
 	if (ret)
 		goto err_reg_exit;
 
-	ret = gsi_init(&ipa->gsi, pdev, ipa->version, data->endpoint_count,
+	ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
 		       data->endpoint_data);
 	if (ret)
 		goto err_mem_exit;
@@ -781,7 +781,7 @@ static int ipa_probe(struct platform_device *pdev)
 err_endpoint_exit:
 	ipa_endpoint_exit(ipa);
 err_gsi_exit:
-	gsi_exit(&ipa->gsi);
+	gsi_exit(&ipa->dma_subsys);
 err_mem_exit:
 	ipa_mem_exit(ipa);
 err_reg_exit:
@@ -824,7 +824,7 @@ static int ipa_remove(struct platform_device *pdev)
 	ipa_modem_exit(ipa);
 	ipa_table_exit(ipa);
 	ipa_endpoint_exit(ipa);
-	gsi_exit(&ipa->gsi);
+	gsi_exit(&ipa->dma_subsys);
 	ipa_mem_exit(ipa);
 	ipa_reg_exit(ipa);
 	kfree(ipa);
diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
index 4337b0920d3d..16e5fdd5bd73 100644
--- a/drivers/net/ipa/ipa_mem.c
+++ b/drivers/net/ipa/ipa_mem.c
@@ -18,7 +18,7 @@
 #include "ipa_cmd.h"
 #include "ipa_mem.h"
 #include "ipa_table.h"
-#include "gsi_trans.h"
+#include "ipa_trans.h"
 
 /* "Canary" value placed between memory regions to detect overflow */
 #define IPA_MEM_CANARY_VAL		cpu_to_le32(0xdeadbeef)
@@ -42,9 +42,9 @@ const struct ipa_mem *ipa_mem_find(struct ipa *ipa, enum ipa_mem_id mem_id)
 
 /* Add an immediate command to a transaction that zeroes a memory region */
 static void
-ipa_mem_zero_region_add(struct gsi_trans *trans, enum ipa_mem_id mem_id)
+ipa_mem_zero_region_add(struct ipa_trans *trans, enum ipa_mem_id mem_id)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	const struct ipa_mem *mem = ipa_mem_find(ipa, mem_id);
 	dma_addr_t addr = ipa->zero_addr;
 
@@ -76,7 +76,7 @@ int ipa_mem_setup(struct ipa *ipa)
 {
 	dma_addr_t addr = ipa->zero_addr;
 	const struct ipa_mem *mem;
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 	u32 offset;
 	u16 size;
 	u32 val;
@@ -107,7 +107,7 @@ int ipa_mem_setup(struct ipa *ipa)
 	ipa_mem_zero_region_add(trans, IPA_MEM_AP_PROC_CTX);
 	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM);
 
-	gsi_trans_commit_wait(trans);
+	ipa_trans_commit_wait(trans);
 
 	/* Tell the hardware where the processing context area is located */
 	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
@@ -408,7 +408,7 @@ void ipa_mem_deconfig(struct ipa *ipa)
  */
 int ipa_mem_zero_modem(struct ipa *ipa)
 {
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 
 	/* Get a transaction to zero the modem memory, modem header,
 	 * and modem processing context regions.
@@ -424,7 +424,7 @@ int ipa_mem_zero_modem(struct ipa *ipa)
 	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM_PROC_CTX);
 	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM);
 
-	gsi_trans_commit_wait(trans);
+	ipa_trans_commit_wait(trans);
 
 	return 0;
 }
diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
index 96c467c80a2e..d197959cc032 100644
--- a/drivers/net/ipa/ipa_table.c
+++ b/drivers/net/ipa/ipa_table.c
@@ -21,8 +21,8 @@
 #include "ipa_reg.h"
 #include "ipa_mem.h"
 #include "ipa_cmd.h"
-#include "gsi.h"
-#include "gsi_trans.h"
+#include "ipa_dma.h"
+#include "ipa_trans.h"
 
 /**
  * DOC: IPA Filter and Route Tables
@@ -234,10 +234,10 @@ static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
 	return ipa->table_addr + skip * IPA_TABLE_ENTRY_SIZE(ipa->version);
 }
 
-static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
+static void ipa_table_reset_add(struct ipa_trans *trans, bool filter,
 				u16 first, u16 count, enum ipa_mem_id mem_id)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	const struct ipa_mem *mem = ipa_mem_find(ipa, mem_id);
 	dma_addr_t addr;
 	u32 offset;
@@ -266,7 +266,7 @@ ipa_filter_reset_table(struct ipa *ipa, enum ipa_mem_id mem_id, bool modem)
 {
 	u32 ep_mask = ipa->filter_map;
 	u32 count = hweight32(ep_mask);
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 	enum gsi_ee_id ee_id;
 
 	trans = ipa_cmd_trans_alloc(ipa, count);
@@ -291,7 +291,7 @@ ipa_filter_reset_table(struct ipa *ipa, enum ipa_mem_id mem_id, bool modem)
 		ipa_table_reset_add(trans, true, endpoint_id, 1, mem_id);
 	}
 
-	gsi_trans_commit_wait(trans);
+	ipa_trans_commit_wait(trans);
 
 	return 0;
 }
@@ -326,7 +326,7 @@ static int ipa_filter_reset(struct ipa *ipa, bool modem)
  * */
 static int ipa_route_reset(struct ipa *ipa, bool modem)
 {
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 	u16 first;
 	u16 count;
 
@@ -354,7 +354,7 @@ static int ipa_route_reset(struct ipa *ipa, bool modem)
 	ipa_table_reset_add(trans, false, first, count,
 			    IPA_MEM_V6_ROUTE_HASHED);
 
-	gsi_trans_commit_wait(trans);
+	ipa_trans_commit_wait(trans);
 
 	return 0;
 }
@@ -382,7 +382,7 @@ void ipa_table_reset(struct ipa *ipa, bool modem)
 int ipa_table_hash_flush(struct ipa *ipa)
 {
 	u32 offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 	u32 val;
 
 	if (!ipa_table_hash_support(ipa))
@@ -399,17 +399,17 @@ int ipa_table_hash_flush(struct ipa *ipa)
 
 	ipa_cmd_register_write_add(trans, offset, val, val, false);
 
-	gsi_trans_commit_wait(trans);
+	ipa_trans_commit_wait(trans);
 
 	return 0;
 }
 
-static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
+static void ipa_table_init_add(struct ipa_trans *trans, bool filter,
 			       enum ipa_cmd_opcode opcode,
 			       enum ipa_mem_id mem_id,
 			       enum ipa_mem_id hash_mem_id)
 {
-	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
+	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	const struct ipa_mem *hash_mem = ipa_mem_find(ipa, hash_mem_id);
 	const struct ipa_mem *mem = ipa_mem_find(ipa, mem_id);
 	dma_addr_t hash_addr;
@@ -444,7 +444,7 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
 
 int ipa_table_setup(struct ipa *ipa)
 {
-	struct gsi_trans *trans;
+	struct ipa_trans *trans;
 
 	trans = ipa_cmd_trans_alloc(ipa, 4);
 	if (!trans) {
@@ -464,7 +464,7 @@ int ipa_table_setup(struct ipa *ipa)
 	ipa_table_init_add(trans, true, IPA_CMD_IP_V6_FILTER_INIT,
 			   IPA_MEM_V6_FILTER, IPA_MEM_V6_FILTER_HASHED);
 
-	gsi_trans_commit_wait(trans);
+	ipa_trans_commit_wait(trans);
 
 	return 0;
 }
diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/ipa_trans.c
similarity index 81%
rename from drivers/net/ipa/gsi_trans.c
rename to drivers/net/ipa/ipa_trans.c
index 1544564bc283..b87936b18770 100644
--- a/drivers/net/ipa/gsi_trans.c
+++ b/drivers/net/ipa/ipa_trans.c
@@ -11,9 +11,9 @@
 #include <linux/scatterlist.h>
 #include <linux/dma-direction.h>
 
-#include "gsi.h"
-#include "gsi_private.h"
-#include "gsi_trans.h"
+#include "ipa_dma.h"
+#include "ipa_dma_private.h"
+#include "ipa_trans.h"
 #include "ipa_gsi.h"
 #include "ipa_data.h"
 #include "ipa_cmd.h"
@@ -85,7 +85,7 @@ struct gsi_tre {
 #define TRE_FLAGS_BEI_FMASK	GENMASK(10, 10)
 #define TRE_FLAGS_TYPE_FMASK	GENMASK(23, 16)
 
-int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
+int ipa_trans_pool_init(struct ipa_trans_pool *pool, size_t size, u32 count,
 			u32 max_alloc)
 {
 	void *virt;
@@ -119,7 +119,7 @@ int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
 	return 0;
 }
 
-void gsi_trans_pool_exit(struct gsi_trans_pool *pool)
+void ipa_trans_pool_exit(struct ipa_trans_pool *pool)
 {
 	kfree(pool->base);
 	memset(pool, 0, sizeof(*pool));
@@ -131,7 +131,7 @@ void gsi_trans_pool_exit(struct gsi_trans_pool *pool)
  * (and it can be more than one), we only allow allocation of a single
  * element from a DMA pool.
  */
-int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
+int ipa_trans_pool_init_dma(struct device *dev, struct ipa_trans_pool *pool,
 			    size_t size, u32 count, u32 max_alloc)
 {
 	size_t total_size;
@@ -152,7 +152,7 @@ int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
 	/* The allocator will give us a power-of-2 number of pages
 	 * sufficient to satisfy our request.  Round up our requested
 	 * size to avoid any unused space in the allocation.  This way
-	 * gsi_trans_pool_exit_dma() can assume the total allocated
+	 * ipa_trans_pool_exit_dma() can assume the total allocated
 	 * size is exactly (count * size).
 	 */
 	total_size = get_order(total_size) << PAGE_SHIFT;
@@ -171,7 +171,7 @@ int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
 	return 0;
 }
 
-void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool)
+void ipa_trans_pool_exit_dma(struct device *dev, struct ipa_trans_pool *pool)
 {
 	size_t total_size = pool->count * pool->size;
 
@@ -180,7 +180,7 @@ void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool)
 }
 
 /* Return the byte offset of the next free entry in the pool */
-static u32 gsi_trans_pool_alloc_common(struct gsi_trans_pool *pool, u32 count)
+static u32 ipa_trans_pool_alloc_common(struct ipa_trans_pool *pool, u32 count)
 {
 	u32 offset;
 
@@ -199,15 +199,15 @@ static u32 gsi_trans_pool_alloc_common(struct gsi_trans_pool *pool, u32 count)
 }
 
 /* Allocate a contiguous block of zeroed entries from a pool */
-void *gsi_trans_pool_alloc(struct gsi_trans_pool *pool, u32 count)
+void *ipa_trans_pool_alloc(struct ipa_trans_pool *pool, u32 count)
 {
-	return pool->base + gsi_trans_pool_alloc_common(pool, count);
+	return pool->base + ipa_trans_pool_alloc_common(pool, count);
 }
 
 /* Allocate a single zeroed entry from a DMA pool */
-void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr)
+void *ipa_trans_pool_alloc_dma(struct ipa_trans_pool *pool, dma_addr_t *addr)
 {
-	u32 offset = gsi_trans_pool_alloc_common(pool, 1);
+	u32 offset = ipa_trans_pool_alloc_common(pool, 1);
 
 	*addr = pool->addr + offset;
 
@@ -217,7 +217,7 @@ void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr)
 /* Return the pool element that immediately follows the one given.
  * This only works if elements are allocated one at a time.
  */
-void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element)
+void *ipa_trans_pool_next(struct ipa_trans_pool *pool, void *element)
 {
 	void *end = pool->base + pool->count * pool->size;
 
@@ -231,33 +231,33 @@ void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element)
 }
 
 /* Map a given ring entry index to the transaction associated with it */
-static void gsi_channel_trans_map(struct gsi_channel *channel, u32 index,
-				  struct gsi_trans *trans)
+static void gsi_channel_trans_map(struct ipa_channel *channel, u32 index,
+				  struct ipa_trans *trans)
 {
 	/* Note: index *must* be used modulo the ring count here */
 	channel->trans_info.map[index % channel->tre_ring.count] = trans;
 }
 
 /* Return the transaction mapped to a given ring entry */
-struct gsi_trans *
-gsi_channel_trans_mapped(struct gsi_channel *channel, u32 index)
+struct ipa_trans *
+gsi_channel_trans_mapped(struct ipa_channel *channel, u32 index)
 {
 	/* Note: index *must* be used modulo the ring count here */
 	return channel->trans_info.map[index % channel->tre_ring.count];
 }
 
 /* Return the oldest completed transaction for a channel (or null) */
-struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel)
+struct ipa_trans *ipa_channel_trans_complete(struct ipa_channel *channel)
 {
 	return list_first_entry_or_null(&channel->trans_info.complete,
-					struct gsi_trans, links);
+					struct ipa_trans, links);
 }
 
 /* Move a transaction from the allocated list to the pending list */
-static void gsi_trans_move_pending(struct gsi_trans *trans)
+static void ipa_trans_move_pending(struct ipa_trans *trans)
 {
-	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
-	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
+	struct ipa_trans_info *trans_info = &channel->trans_info;
 
 	spin_lock_bh(&trans_info->spinlock);
 
@@ -269,10 +269,10 @@ static void gsi_trans_move_pending(struct gsi_trans *trans)
 /* Move a transaction and all of its predecessors from the pending list
  * to the completed list.
  */
-void gsi_trans_move_complete(struct gsi_trans *trans)
+void ipa_trans_move_complete(struct ipa_trans *trans)
 {
-	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
-	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
+	struct ipa_trans_info *trans_info = &channel->trans_info;
 	struct list_head list;
 
 	spin_lock_bh(&trans_info->spinlock);
@@ -285,10 +285,10 @@ void gsi_trans_move_complete(struct gsi_trans *trans)
 }
 
 /* Move a transaction from the completed list to the polled list */
-void gsi_trans_move_polled(struct gsi_trans *trans)
+void ipa_trans_move_polled(struct ipa_trans *trans)
 {
-	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
-	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
+	struct ipa_trans_info *trans_info = &channel->trans_info;
 
 	spin_lock_bh(&trans_info->spinlock);
 
@@ -299,7 +299,7 @@ void gsi_trans_move_polled(struct gsi_trans *trans)
 
 /* Reserve some number of TREs on a channel.  Returns true if successful */
 static bool
-gsi_trans_tre_reserve(struct gsi_trans_info *trans_info, u32 tre_count)
+ipa_trans_tre_reserve(struct ipa_trans_info *trans_info, u32 tre_count)
 {
 	int avail = atomic_read(&trans_info->tre_avail);
 	int new;
@@ -315,21 +315,21 @@ gsi_trans_tre_reserve(struct gsi_trans_info *trans_info, u32 tre_count)
 
 /* Release previously-reserved TRE entries to a channel */
 static void
-gsi_trans_tre_release(struct gsi_trans_info *trans_info, u32 tre_count)
+ipa_trans_tre_release(struct ipa_trans_info *trans_info, u32 tre_count)
 {
 	atomic_add(tre_count, &trans_info->tre_avail);
 }
 
 /* Allocate a GSI transaction on a channel */
-struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
+struct ipa_trans *ipa_channel_trans_alloc(struct ipa_dma *gsi, u32 channel_id,
 					  u32 tre_count,
 					  enum dma_data_direction direction)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
-	struct gsi_trans_info *trans_info;
-	struct gsi_trans *trans;
+	struct ipa_channel *channel = &gsi->channel[channel_id];
+	struct ipa_trans_info *trans_info;
+	struct ipa_trans *trans;
 
-	if (WARN_ON(tre_count > gsi_channel_trans_tre_max(gsi, channel_id)))
+	if (WARN_ON(tre_count > ipa_channel_trans_tre_max(gsi, channel_id)))
 		return NULL;
 
 	trans_info = &channel->trans_info;
@@ -337,18 +337,18 @@ struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
 	/* We reserve the TREs now, but consume them at commit time.
 	 * If there aren't enough available, we're done.
 	 */
-	if (!gsi_trans_tre_reserve(trans_info, tre_count))
+	if (!ipa_trans_tre_reserve(trans_info, tre_count))
 		return NULL;
 
 	/* Allocate and initialize non-zero fields in the transaction */
-	trans = gsi_trans_pool_alloc(&trans_info->pool, 1);
-	trans->gsi = gsi;
+	trans = ipa_trans_pool_alloc(&trans_info->pool, 1);
+	trans->dma_subsys = gsi;
 	trans->channel_id = channel_id;
 	trans->tre_count = tre_count;
 	init_completion(&trans->completion);
 
 	/* Allocate the scatterlist and (if requested) info entries. */
-	trans->sgl = gsi_trans_pool_alloc(&trans_info->sg_pool, tre_count);
+	trans->sgl = ipa_trans_pool_alloc(&trans_info->sg_pool, tre_count);
 	sg_init_marker(trans->sgl, tre_count);
 
 	trans->direction = direction;
@@ -365,17 +365,17 @@ struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
 }
 
 /* Free a previously-allocated transaction */
-void gsi_trans_free(struct gsi_trans *trans)
+void ipa_trans_free(struct ipa_trans *trans)
 {
 	refcount_t *refcount = &trans->refcount;
-	struct gsi_trans_info *trans_info;
+	struct ipa_trans_info *trans_info;
 	bool last;
 
 	/* We must hold the lock to release the last reference */
 	if (refcount_dec_not_one(refcount))
 		return;
 
-	trans_info = &trans->gsi->channel[trans->channel_id].trans_info;
+	trans_info = &trans->dma_subsys->channel[trans->channel_id].trans_info;
 
 	spin_lock_bh(&trans_info->spinlock);
 
@@ -394,11 +394,11 @@ void gsi_trans_free(struct gsi_trans *trans)
 	/* Releasing the reserved TREs implicitly frees the sgl[] and
 	 * (if present) info[] arrays, plus the transaction itself.
 	 */
-	gsi_trans_tre_release(trans_info, trans->tre_count);
+	ipa_trans_tre_release(trans_info, trans->tre_count);
 }
 
 /* Add an immediate command to a transaction */
-void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
+void ipa_trans_cmd_add(struct ipa_trans *trans, void *buf, u32 size,
 		       dma_addr_t addr, enum dma_data_direction direction,
 		       enum ipa_cmd_opcode opcode)
 {
@@ -415,7 +415,7 @@ void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
 	 *
 	 * When a transaction completes, the SGL is normally unmapped.
 	 * A command transaction has direction DMA_NONE, which tells
-	 * gsi_trans_complete() to skip the unmapping step.
+	 * ipa_trans_complete() to skip the unmapping step.
 	 *
 	 * The only things we use directly in a command scatter/gather
 	 * entry are the DMA address and length.  We still need the SG
@@ -433,7 +433,7 @@ void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
 }
 
 /* Add a page transfer to a transaction.  It will fill the only TRE. */
-int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
+int ipa_trans_page_add(struct ipa_trans *trans, struct page *page, u32 size,
 		       u32 offset)
 {
 	struct scatterlist *sg = &trans->sgl[0];
@@ -445,7 +445,7 @@ int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
 		return -EINVAL;
 
 	sg_set_page(sg, page, size, offset);
-	ret = dma_map_sg(trans->gsi->dev, sg, 1, trans->direction);
+	ret = dma_map_sg(trans->dma_subsys->dev, sg, 1, trans->direction);
 	if (!ret)
 		return -ENOMEM;
 
@@ -455,7 +455,7 @@ int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
 }
 
 /* Add an SKB transfer to a transaction.  No other TREs will be used. */
-int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb)
+int ipa_trans_skb_add(struct ipa_trans *trans, struct sk_buff *skb)
 {
 	struct scatterlist *sg = &trans->sgl[0];
 	u32 used;
@@ -472,7 +472,7 @@ int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb)
 		return ret;
 	used = ret;
 
-	ret = dma_map_sg(trans->gsi->dev, sg, used, trans->direction);
+	ret = dma_map_sg(trans->dma_subsys->dev, sg, used, trans->direction);
 	if (!ret)
 		return -ENOMEM;
 
@@ -539,9 +539,9 @@ static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
  * pending list.  Finally, updates the channel ring pointer and optionally
  * rings the doorbell.
  */
-static void __gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
+static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
 {
-	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
+	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
 	struct gsi_ring *ring = &channel->tre_ring;
 	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
 	bool bei = channel->toward_ipa;
@@ -590,28 +590,28 @@ static void __gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
 	/* Associate the last TRE with the transaction */
 	gsi_channel_trans_map(channel, ring->index - 1, trans);
 
-	gsi_trans_move_pending(trans);
+	ipa_trans_move_pending(trans);
 
 	/* Ring doorbell if requested, or if all TREs are allocated */
 	if (ring_db || !atomic_read(&channel->trans_info.tre_avail)) {
 		/* Report what we're handing off to hardware for TX channels */
 		if (channel->toward_ipa)
-			gsi_channel_tx_queued(channel);
+			ipa_channel_tx_queued(channel);
 		gsi_channel_doorbell(channel);
 	}
 }
 
 /* Commit a GSI transaction */
-void gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
+void ipa_trans_commit(struct ipa_trans *trans, bool ring_db)
 {
 	if (trans->used)
 		__gsi_trans_commit(trans, ring_db);
 	else
-		gsi_trans_free(trans);
+		ipa_trans_free(trans);
 }
 
 /* Commit a GSI transaction and wait for it to complete */
-void gsi_trans_commit_wait(struct gsi_trans *trans)
+void ipa_trans_commit_wait(struct ipa_trans *trans)
 {
 	if (!trans->used)
 		goto out_trans_free;
@@ -623,11 +623,11 @@ void gsi_trans_commit_wait(struct gsi_trans *trans)
 	wait_for_completion(&trans->completion);
 
 out_trans_free:
-	gsi_trans_free(trans);
+	ipa_trans_free(trans);
 }
 
 /* Commit a GSI transaction and wait for it to complete, with timeout */
-int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
+int ipa_trans_commit_wait_timeout(struct ipa_trans *trans,
 				  unsigned long timeout)
 {
 	unsigned long timeout_jiffies = msecs_to_jiffies(timeout);
@@ -643,34 +643,34 @@ int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
 	remaining = wait_for_completion_timeout(&trans->completion,
 						timeout_jiffies);
 out_trans_free:
-	gsi_trans_free(trans);
+	ipa_trans_free(trans);
 
 	return remaining ? 0 : -ETIMEDOUT;
 }
 
 /* Process the completion of a transaction; called while polling */
-void gsi_trans_complete(struct gsi_trans *trans)
+void ipa_trans_complete(struct ipa_trans *trans)
 {
 	/* If the entire SGL was mapped when added, unmap it now */
 	if (trans->direction != DMA_NONE)
-		dma_unmap_sg(trans->gsi->dev, trans->sgl, trans->used,
+		dma_unmap_sg(trans->dma_subsys->dev, trans->sgl, trans->used,
 			     trans->direction);
 
 	ipa_gsi_trans_complete(trans);
 
 	complete(&trans->completion);
 
-	gsi_trans_free(trans);
+	ipa_trans_free(trans);
 }
 
 /* Cancel a channel's pending transactions */
-void gsi_channel_trans_cancel_pending(struct gsi_channel *channel)
+void ipa_channel_trans_cancel_pending(struct ipa_channel *channel)
 {
-	struct gsi_trans_info *trans_info = &channel->trans_info;
-	struct gsi_trans *trans;
+	struct ipa_trans_info *trans_info = &channel->trans_info;
+	struct ipa_trans *trans;
 	bool cancelled;
 
-	/* channel->gsi->mutex is held by caller */
+	/* channel->dma_subsys->mutex is held by caller */
 	spin_lock_bh(&trans_info->spinlock);
 
 	cancelled = !list_empty(&trans_info->pending);
@@ -687,17 +687,17 @@ void gsi_channel_trans_cancel_pending(struct gsi_channel *channel)
 }
 
 /* Issue a command to read a single byte from a channel */
-int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr)
+int gsi_trans_read_byte(struct ipa_dma *gsi, u32 channel_id, dma_addr_t addr)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 	struct gsi_ring *ring = &channel->tre_ring;
-	struct gsi_trans_info *trans_info;
+	struct ipa_trans_info *trans_info;
 	struct gsi_tre *dest_tre;
 
 	trans_info = &channel->trans_info;
 
 	/* First reserve the TRE, if possible */
-	if (!gsi_trans_tre_reserve(trans_info, 1))
+	if (!ipa_trans_tre_reserve(trans_info, 1))
 		return -EBUSY;
 
 	/* Now fill the reserved TRE and tell the hardware */
@@ -712,18 +712,18 @@ int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr)
 }
 
 /* Mark a gsi_trans_read_byte() request done */
-void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id)
+void gsi_trans_read_byte_done(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
+	struct ipa_channel *channel = &gsi->channel[channel_id];
 
-	gsi_trans_tre_release(&channel->trans_info, 1);
+	ipa_trans_tre_release(&channel->trans_info, 1);
 }
 
 /* Initialize a channel's GSI transaction info */
-int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
+int ipa_channel_trans_init(struct ipa_dma *gsi, u32 channel_id)
 {
-	struct gsi_channel *channel = &gsi->channel[channel_id];
-	struct gsi_trans_info *trans_info;
+	struct ipa_channel *channel = &gsi->channel[channel_id];
+	struct ipa_trans_info *trans_info;
 	u32 tre_max;
 	int ret;
 
@@ -747,10 +747,10 @@ int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
 	 * for transactions (including transaction structures) based on
 	 * this maximum number.
 	 */
-	tre_max = gsi_channel_tre_max(channel->gsi, channel_id);
+	tre_max = gsi_channel_tre_max(channel->dma_subsys, channel_id);
 
 	/* Transactions are allocated one at a time. */
-	ret = gsi_trans_pool_init(&trans_info->pool, sizeof(struct gsi_trans),
+	ret = ipa_trans_pool_init(&trans_info->pool, sizeof(struct ipa_trans),
 				  tre_max, 1);
 	if (ret)
 		goto err_kfree;
@@ -765,7 +765,7 @@ int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
 	 * A transaction on a channel can allocate as many TREs as that but
 	 * no more.
 	 */
-	ret = gsi_trans_pool_init(&trans_info->sg_pool,
+	ret = ipa_trans_pool_init(&trans_info->sg_pool,
 				  sizeof(struct scatterlist),
 				  tre_max, channel->tlv_count);
 	if (ret)
@@ -789,7 +789,7 @@ int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
 	return 0;
 
 err_trans_pool_exit:
-	gsi_trans_pool_exit(&trans_info->pool);
+	ipa_trans_pool_exit(&trans_info->pool);
 err_kfree:
 	kfree(trans_info->map);
 
@@ -799,12 +799,12 @@ int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
 	return ret;
 }
 
-/* Inverse of gsi_channel_trans_init() */
-void gsi_channel_trans_exit(struct gsi_channel *channel)
+/* Inverse of ipa_channel_trans_init() */
+void ipa_channel_trans_exit(struct ipa_channel *channel)
 {
-	struct gsi_trans_info *trans_info = &channel->trans_info;
+	struct ipa_trans_info *trans_info = &channel->trans_info;
 
-	gsi_trans_pool_exit(&trans_info->sg_pool);
-	gsi_trans_pool_exit(&trans_info->pool);
+	ipa_trans_pool_exit(&trans_info->sg_pool);
+	ipa_trans_pool_exit(&trans_info->pool);
 	kfree(trans_info->map);
 }
diff --git a/drivers/net/ipa/gsi_trans.h b/drivers/net/ipa/ipa_trans.h
similarity index 72%
rename from drivers/net/ipa/gsi_trans.h
rename to drivers/net/ipa/ipa_trans.h
index 17fd1822d8a9..b93342414360 100644
--- a/drivers/net/ipa/gsi_trans.h
+++ b/drivers/net/ipa/ipa_trans.h
@@ -18,12 +18,12 @@ struct scatterlist;
 struct device;
 struct sk_buff;
 
-struct gsi;
-struct gsi_trans;
-struct gsi_trans_pool;
+struct ipa_dma;
+struct ipa_trans;
+struct ipa_trans_pool;
 
 /**
- * struct gsi_trans - a GSI transaction
+ * struct ipa_trans - a GSI transaction
  *
 * Most fields in this structure are for internal use by the transaction core code:
  * @links:	Links for channel transaction lists by state
@@ -45,10 +45,10 @@ struct gsi_trans_pool;
 * The sizes used for some fields in this structure were chosen to ensure
  * the full structure size is no larger than 128 bytes.
  */
-struct gsi_trans {
-	struct list_head links;		/* gsi_channel lists */
+struct ipa_trans {
+	struct list_head links;		/* ipa_channel lists */
 
-	struct gsi *gsi;
+	struct ipa_dma *dma_subsys;
 	u8 channel_id;
 
 	bool cancelled;			/* true if transaction was cancelled */
@@ -70,7 +70,7 @@ struct gsi_trans {
 };
 
 /**
- * gsi_trans_pool_init() - Initialize a pool of structures for transactions
+ * ipa_trans_pool_init() - Initialize a pool of structures for transactions
 * @pool:	GSI transaction pool pointer
  * @size:	Size of elements in the pool
  * @count:	Minimum number of elements in the pool
@@ -78,26 +78,26 @@ struct gsi_trans {
  *
  * Return:	0 if successful, or a negative error code
  */
-int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
+int ipa_trans_pool_init(struct ipa_trans_pool *pool, size_t size, u32 count,
 			u32 max_alloc);
 
 /**
- * gsi_trans_pool_alloc() - Allocate one or more elements from a pool
+ * ipa_trans_pool_alloc() - Allocate one or more elements from a pool
  * @pool:	Pool pointer
  * @count:	Number of elements to allocate from the pool
  *
  * Return:	Virtual address of element(s) allocated from the pool
  */
-void *gsi_trans_pool_alloc(struct gsi_trans_pool *pool, u32 count);
+void *ipa_trans_pool_alloc(struct ipa_trans_pool *pool, u32 count);
 
 /**
- * gsi_trans_pool_exit() - Inverse of gsi_trans_pool_init()
+ * ipa_trans_pool_exit() - Inverse of ipa_trans_pool_init()
  * @pool:	Pool pointer
  */
-void gsi_trans_pool_exit(struct gsi_trans_pool *pool);
+void ipa_trans_pool_exit(struct ipa_trans_pool *pool);
 
 /**
- * gsi_trans_pool_init_dma() - Initialize a pool of DMA-able structures
+ * ipa_trans_pool_init_dma() - Initialize a pool of DMA-able structures
  * @dev:	Device used for DMA
  * @pool:	Pool pointer
  * @size:	Size of elements in the pool
@@ -108,11 +108,11 @@ void gsi_trans_pool_exit(struct gsi_trans_pool *pool);
  *
  * Structures in this pool reside in DMA-coherent memory.
  */
-int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
+int ipa_trans_pool_init_dma(struct device *dev, struct ipa_trans_pool *pool,
 			    size_t size, u32 count, u32 max_alloc);
 
 /**
- * gsi_trans_pool_alloc_dma() - Allocate an element from a DMA pool
+ * ipa_trans_pool_alloc_dma() - Allocate an element from a DMA pool
  * @pool:	DMA pool pointer
  * @addr:	DMA address "handle" associated with the allocation
  *
@@ -120,17 +120,17 @@ int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
  *
  * Only one element at a time may be allocated from a DMA pool.
  */
-void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr);
+void *ipa_trans_pool_alloc_dma(struct ipa_trans_pool *pool, dma_addr_t *addr);
 
 /**
- * gsi_trans_pool_exit_dma() - Inverse of gsi_trans_pool_init_dma()
+ * ipa_trans_pool_exit_dma() - Inverse of ipa_trans_pool_init_dma()
  * @dev:	Device used for DMA
  * @pool:	Pool pointer
  */
-void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool);
+void ipa_trans_pool_exit_dma(struct device *dev, struct ipa_trans_pool *pool);
 
 /**
- * gsi_channel_trans_alloc() - Allocate a GSI transaction on a channel
+ * ipa_channel_trans_alloc() - Allocate a GSI transaction on a channel
  * @gsi:	GSI pointer
  * @channel_id:	Channel the transaction is associated with
  * @tre_count:	Number of elements in the transaction
@@ -139,18 +139,18 @@ void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool);
  * Return:	A GSI transaction structure, or a null pointer if all
  *		available transactions are in use
  */
-struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
+struct ipa_trans *ipa_channel_trans_alloc(struct ipa_dma *dma_subsys, u32 channel_id,
 					  u32 tre_count,
 					  enum dma_data_direction direction);
 
 /**
- * gsi_trans_free() - Free a previously-allocated GSI transaction
+ * ipa_trans_free() - Free a previously-allocated GSI transaction
  * @trans:	Transaction to be freed
  */
-void gsi_trans_free(struct gsi_trans *trans);
+void ipa_trans_free(struct ipa_trans *trans);
 
 /**
- * gsi_trans_cmd_add() - Add an immediate command to a transaction
+ * ipa_trans_cmd_add() - Add an immediate command to a transaction
  * @trans:	Transaction
  * @buf:	Buffer pointer for command payload
  * @size:	Number of bytes in buffer
@@ -158,50 +158,50 @@ void gsi_trans_free(struct gsi_trans *trans);
  * @direction:	Direction of DMA transfer (or DMA_NONE if none required)
  * @opcode:	IPA immediate command opcode
  */
-void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
+void ipa_trans_cmd_add(struct ipa_trans *trans, void *buf, u32 size,
 		       dma_addr_t addr, enum dma_data_direction direction,
 		       enum ipa_cmd_opcode opcode);
 
 /**
- * gsi_trans_page_add() - Add a page transfer to a transaction
+ * ipa_trans_page_add() - Add a page transfer to a transaction
  * @trans:	Transaction
  * @page:	Page pointer
  * @size:	Number of bytes (starting at offset) to transfer
  * @offset:	Offset within page for start of transfer
  */
-int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
+int ipa_trans_page_add(struct ipa_trans *trans, struct page *page, u32 size,
 		       u32 offset);
 
 /**
- * gsi_trans_skb_add() - Add a socket transfer to a transaction
+ * ipa_trans_skb_add() - Add a socket transfer to a transaction
  * @trans:	Transaction
  * @skb:	Socket buffer for transfer (outbound)
  *
  * Return:	0, or -EMSGSIZE if socket data won't fit in transaction.
  */
-int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb);
+int ipa_trans_skb_add(struct ipa_trans *trans, struct sk_buff *skb);
 
 /**
- * gsi_trans_commit() - Commit a GSI transaction
+ * ipa_trans_commit() - Commit a GSI transaction
  * @trans:	Transaction to commit
  * @ring_db:	Whether to tell the hardware about these queued transfers
  */
-void gsi_trans_commit(struct gsi_trans *trans, bool ring_db);
+void ipa_trans_commit(struct ipa_trans *trans, bool ring_db);
 
 /**
- * gsi_trans_commit_wait() - Commit a GSI transaction and wait for it
+ * ipa_trans_commit_wait() - Commit a GSI transaction and wait for it
  *			     to complete
  * @trans:	Transaction to commit
  */
-void gsi_trans_commit_wait(struct gsi_trans *trans);
+void ipa_trans_commit_wait(struct ipa_trans *trans);
 
 /**
- * gsi_trans_commit_wait_timeout() - Commit a GSI transaction and wait for
+ * ipa_trans_commit_wait_timeout() - Commit a GSI transaction and wait for
  *				     it to complete, with timeout
  * @trans:	Transaction to commit
  * @timeout:	Timeout period (in milliseconds)
  */
-int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
+int ipa_trans_commit_wait_timeout(struct ipa_trans *trans,
 				  unsigned long timeout);
 
 /**
@@ -213,7 +213,7 @@ int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
  * This is not a transaction operation at all.  It's defined here because
  * it needs to be done in coordination with other transaction activity.
  */
-int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr);
+int gsi_trans_read_byte(struct ipa_dma *dma_subsys, u32 channel_id, dma_addr_t addr);
 
 /**
  * gsi_trans_read_byte_done() - Clean up after a single byte read TRE
@@ -223,6 +223,6 @@ int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr);
  * This function needs to be called to signal that the work related
  * to reading a byte initiated by gsi_trans_read_byte() is complete.
  */
-void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id);
+void gsi_trans_read_byte_done(struct ipa_dma *dma_subsys, u32 channel_id);
 
 #endif /* _GSI_TRANS_H_ */
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (2 preceding siblings ...)
  2021-09-20  3:07 ` [RFC PATCH 03/17] net: ipa: Refactor GSI code Sireesh Kodali
@ 2021-09-20  3:07 ` Sireesh Kodali
  2021-10-13 22:29   ` Alex Elder
  2021-09-20  3:07 ` [RFC PATCH 05/17] net: ipa: Check interrupts for availability Sireesh Kodali
                   ` (13 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:07 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

Establish a callback-based interface to abstract the differences between
GSI and BAM DMA. The interface is based on the prototypes in ipa_dma.h
(formerly gsi.h). Callbacks are stored in struct ipa_dma (formerly
struct gsi) and assigned in gsi_init().

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/gsi.c          |  30 ++++++--
 drivers/net/ipa/ipa_dma.h      | 133 ++++++++++++++++++++++-----------
 drivers/net/ipa/ipa_endpoint.c |  28 +++----
 drivers/net/ipa/ipa_main.c     |  18 ++---
 drivers/net/ipa/ipa_power.c    |   4 +-
 drivers/net/ipa/ipa_trans.c    |   2 +-
 6 files changed, 138 insertions(+), 77 deletions(-)

diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
index 74ae0d07f859..39d9ca620a9f 100644
--- a/drivers/net/ipa/gsi.c
+++ b/drivers/net/ipa/gsi.c
@@ -99,6 +99,10 @@
 
 #define GSI_ISR_MAX_ITER		50	/* Detect interrupt storms */
 
+static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id);
+static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id);
+static void gsi_exit(struct ipa_dma *gsi);
+
 /* An entry in an event ring */
 struct gsi_event {
 	__le64 xfer_ptr;
@@ -869,7 +873,7 @@ static int __gsi_channel_start(struct ipa_channel *channel, bool resume)
 }
 
 /* Start an allocated GSI channel */
-int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
+static int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
 {
 	struct ipa_channel *channel = &gsi->channel[channel_id];
 	int ret;
@@ -924,7 +928,7 @@ static int __gsi_channel_stop(struct ipa_channel *channel, bool suspend)
 }
 
 /* Stop a started channel */
-int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
+static int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
 {
 	struct ipa_channel *channel = &gsi->channel[channel_id];
 	int ret;
@@ -941,7 +945,7 @@ int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
 }
 
 /* Reset and reconfigure a channel, (possibly) enabling the doorbell engine */
-void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
+static void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
 {
 	struct ipa_channel *channel = &gsi->channel[channel_id];
 
@@ -1931,7 +1935,7 @@ int gsi_setup(struct ipa_dma *gsi)
 }
 
 /* Inverse of gsi_setup() */
-void gsi_teardown(struct ipa_dma *gsi)
+static void gsi_teardown(struct ipa_dma *gsi)
 {
 	gsi_channel_teardown(gsi);
 	gsi_irq_teardown(gsi);
@@ -2194,6 +2198,18 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
 
 	gsi->dev = dev;
 	gsi->version = version;
+	gsi->setup = gsi_setup;
+	gsi->teardown = gsi_teardown;
+	gsi->exit = gsi_exit;
+	gsi->suspend = gsi_suspend;
+	gsi->resume = gsi_resume;
+	gsi->channel_tre_max = gsi_channel_tre_max;
+	gsi->channel_trans_tre_max = gsi_channel_trans_tre_max;
+	gsi->channel_start = gsi_channel_start;
+	gsi->channel_stop = gsi_channel_stop;
+	gsi->channel_reset = gsi_channel_reset;
+	gsi->channel_suspend = gsi_channel_suspend;
+	gsi->channel_resume = gsi_channel_resume;
 
 	/* GSI uses NAPI on all channels.  Create a dummy network device
 	 * for the channel NAPI contexts to be associated with.
@@ -2250,7 +2266,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
 }
 
 /* Inverse of gsi_init() */
-void gsi_exit(struct ipa_dma *gsi)
+static void gsi_exit(struct ipa_dma *gsi)
 {
 	mutex_destroy(&gsi->mutex);
 	gsi_channel_exit(gsi);
@@ -2277,7 +2293,7 @@ void gsi_exit(struct ipa_dma *gsi)
  * substantially reduce pool memory requirements.  The number we
  * reduce it by matches the number added in ipa_trans_pool_init().
  */
-u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
+static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
 {
 	struct ipa_channel *channel = &gsi->channel[channel_id];
 
@@ -2286,7 +2302,7 @@ u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
 }
 
 /* Returns the maximum number of TREs in a single transaction for a channel */
-u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
+static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
 {
 	struct ipa_channel *channel = &gsi->channel[channel_id];
 
diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
index d053929ca3e3..1a23e6ac5785 100644
--- a/drivers/net/ipa/ipa_dma.h
+++ b/drivers/net/ipa/ipa_dma.h
@@ -163,64 +163,96 @@ struct ipa_dma {
 	struct completion completion;	/* for global EE commands */
 	int result;			/* Negative errno (generic commands) */
 	struct mutex mutex;		/* protects commands, programming */
+
+	int (*setup)(struct ipa_dma *dma_subsys);
+	void (*teardown)(struct ipa_dma *dma_subsys);
+	void (*exit)(struct ipa_dma *dma_subsys);
+	void (*suspend)(struct ipa_dma *dma_subsys);
+	void (*resume)(struct ipa_dma *dma_subsys);
+	u32 (*channel_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
+	u32 (*channel_trans_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
+	int (*channel_start)(struct ipa_dma *dma_subsys, u32 channel_id);
+	int (*channel_stop)(struct ipa_dma *dma_subsys, u32 channel_id);
+	void (*channel_reset)(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
+	int (*channel_suspend)(struct ipa_dma *dma_subsys, u32 channel_id);
+	int (*channel_resume)(struct ipa_dma *dma_subsys, u32 channel_id);
+	void (*trans_commit)(struct ipa_trans *trans, bool ring_db);
 };
 
 /**
- * gsi_setup() - Set up the GSI subsystem
- * @gsi:	Address of GSI structure embedded in an IPA structure
+ * ipa_dma_setup() - Set up the DMA subsystem
+ * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
  *
  * Return:	0 if successful, or a negative error code
  *
- * Performs initialization that must wait until the GSI hardware is
+ * Performs initialization that must wait until the GSI/BAM hardware is
  * ready (including firmware loaded).
  */
-int gsi_setup(struct ipa_dma *dma_subsys);
+static inline int ipa_dma_setup(struct ipa_dma *dma_subsys)
+{
+	return dma_subsys->setup(dma_subsys);
+}
 
 /**
- * gsi_teardown() - Tear down GSI subsystem
- * @gsi:	GSI address previously passed to a successful gsi_setup() call
+ * ipa_dma_teardown() - Tear down DMA subsystem
+ * @dma_subsys:	ipa_dma address previously passed to a successful ipa_dma_setup() call
  */
-void gsi_teardown(struct ipa_dma *dma_subsys);
+static inline void ipa_dma_teardown(struct ipa_dma *dma_subsys)
+{
+	dma_subsys->teardown(dma_subsys);
+}
 
 /**
- * gsi_channel_tre_max() - Channel maximum number of in-flight TREs
- * @gsi:	GSI pointer
+ * ipa_channel_tre_max() - Channel maximum number of in-flight TREs
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel whose limit is to be returned
  *
 * Return:	 The maximum number of TREs outstanding on the channel
  */
-u32 gsi_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline u32 ipa_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_tre_max(dma_subsys, channel_id);
+}
 
 /**
- * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction
- * @gsi:	GSI pointer
+ * ipa_channel_trans_tre_max() - Maximum TREs in a single transaction
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel whose limit is to be returned
  *
  * Return:	 The maximum TRE count per transaction on the channel
  */
-u32 gsi_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline u32 ipa_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_trans_tre_max(dma_subsys, channel_id);
+}
 
 /**
- * gsi_channel_start() - Start an allocated GSI channel
- * @gsi:	GSI pointer
+ * ipa_channel_start() - Start an allocated DMA channel
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel to start
  *
  * Return:	0 if successful, or a negative error code
  */
-int gsi_channel_start(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline int ipa_channel_start(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_start(dma_subsys, channel_id);
+}
 
 /**
- * gsi_channel_stop() - Stop a started GSI channel
- * @gsi:	GSI pointer returned by gsi_setup()
+ * ipa_channel_stop() - Stop a started DMA channel
+ * @dma_subsys:	pointer to ipa_dma structure returned by ipa_dma_setup()
  * @channel_id:	Channel to stop
  *
  * Return:	0 if successful, or a negative error code
  */
-int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline int ipa_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_stop(dma_subsys, channel_id);
+}
 
 /**
- * gsi_channel_reset() - Reset an allocated GSI channel
- * @gsi:	GSI pointer
+ * ipa_channel_reset() - Reset an allocated DMA channel
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel to be reset
  * @doorbell:	Whether to (possibly) enable the doorbell engine
  *
@@ -230,41 +262,49 @@ int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
  * GSI hardware relinquishes ownership of all pending receive buffer
  * transactions and they will complete with their cancelled flag set.
  */
-void gsi_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
+static inline void ipa_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell)
+{
+	dma_subsys->channel_reset(dma_subsys, channel_id, doorbell);
+}
 
-/**
- * gsi_suspend() - Prepare the GSI subsystem for suspend
- * @gsi:	GSI pointer
- */
-void gsi_suspend(struct ipa_dma *dma_subsys);
 
 /**
- * gsi_resume() - Resume the GSI subsystem following suspend
- * @gsi:	GSI pointer
- */
-void gsi_resume(struct ipa_dma *dma_subsys);
-
-/**
- * gsi_channel_suspend() - Suspend a GSI channel
- * @gsi:	GSI pointer
+ * ipa_channel_suspend() - Suspend a DMA channel
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel to suspend
  *
  * For IPA v4.0+, suspend is implemented by stopping the channel.
  */
-int gsi_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline int ipa_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_suspend(dma_subsys, channel_id);
+}
 
 /**
- * gsi_channel_resume() - Resume a suspended GSI channel
- * @gsi:	GSI pointer
+ * ipa_channel_resume() - Resume a suspended DMA channel
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel to resume
  *
  * For IPA v4.0+, the stopped channel is started again.
  */
-int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline int ipa_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_resume(dma_subsys, channel_id);
+}
+
+static inline void ipa_dma_suspend(struct ipa_dma *dma_subsys)
+{
+	return dma_subsys->suspend(dma_subsys);
+}
+
+static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
+{
+	return dma_subsys->resume(dma_subsys);
+}
 
 /**
- * gsi_init() - Initialize the GSI subsystem
- * @gsi:	Address of GSI structure embedded in an IPA structure
+ * ipa_dma_init() - Initialize the GSI subsystem
+ * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
  * @pdev:	IPA platform device
  * @version:	IPA hardware version (implies GSI version)
  * @count:	Number of entries in the configuration data array
@@ -275,14 +315,19 @@ int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
  * Early stage initialization of the GSI subsystem, performing tasks
  * that can be done before the GSI hardware is ready to use.
  */
+
 int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
 	     enum ipa_version version, u32 count,
 	     const struct ipa_gsi_endpoint_data *data);
 
 /**
- * gsi_exit() - Exit the GSI subsystem
- * @gsi:	GSI address previously passed to a successful gsi_init() call
+ * ipa_dma_exit() - Exit the DMA subsystem
+ * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
  */
-void gsi_exit(struct ipa_dma *dma_subsys);
+static inline void ipa_dma_exit(struct ipa_dma *dma_subsys)
+{
+	if (dma_subsys)
+		dma_subsys->exit(dma_subsys);
+}
 
 #endif /* _GSI_H_ */
diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 90d6880e8a25..dbef549c4537 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1091,7 +1091,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
 	 * try replenishing again if our backlog is *all* available TREs.
 	 */
 	gsi = &endpoint->ipa->dma_subsys;
-	if (backlog == gsi_channel_tre_max(gsi, endpoint->channel_id))
+	if (backlog == ipa_channel_tre_max(gsi, endpoint->channel_id))
 		schedule_delayed_work(&endpoint->replenish_work,
 				      msecs_to_jiffies(1));
 }
@@ -1107,7 +1107,7 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
 		atomic_add(saved, &endpoint->replenish_backlog);
 
 	/* Start replenishing if hardware currently has no buffers */
-	max_backlog = gsi_channel_tre_max(gsi, endpoint->channel_id);
+	max_backlog = ipa_channel_tre_max(gsi, endpoint->channel_id);
 	if (atomic_read(&endpoint->replenish_backlog) == max_backlog)
 		ipa_endpoint_replenish(endpoint, false);
 }
@@ -1432,13 +1432,13 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
 	 * active.  We'll re-enable the doorbell (if appropriate) when
 	 * we reset again below.
 	 */
-	gsi_channel_reset(gsi, endpoint->channel_id, false);
+	ipa_channel_reset(gsi, endpoint->channel_id, false);
 
 	/* Make sure the channel isn't suspended */
 	suspended = ipa_endpoint_program_suspend(endpoint, false);
 
 	/* Start channel and do a 1 byte read */
-	ret = gsi_channel_start(gsi, endpoint->channel_id);
+	ret = ipa_channel_start(gsi, endpoint->channel_id);
 	if (ret)
 		goto out_suspend_again;
 
@@ -1461,7 +1461,7 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
 
 	gsi_trans_read_byte_done(gsi, endpoint->channel_id);
 
-	ret = gsi_channel_stop(gsi, endpoint->channel_id);
+	ret = ipa_channel_stop(gsi, endpoint->channel_id);
 	if (ret)
 		goto out_suspend_again;
 
@@ -1470,14 +1470,14 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
 	 * complete the channel reset sequence.  Finish by suspending the
 	 * channel again (if necessary).
 	 */
-	gsi_channel_reset(gsi, endpoint->channel_id, true);
+	ipa_channel_reset(gsi, endpoint->channel_id, true);
 
 	usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
 
 	goto out_suspend_again;
 
 err_endpoint_stop:
-	(void)gsi_channel_stop(gsi, endpoint->channel_id);
+	(void)ipa_channel_stop(gsi, endpoint->channel_id);
 out_suspend_again:
 	if (suspended)
 		(void)ipa_endpoint_program_suspend(endpoint, true);
@@ -1504,7 +1504,7 @@ static void ipa_endpoint_reset(struct ipa_endpoint *endpoint)
 	if (special && ipa_endpoint_aggr_active(endpoint))
 		ret = ipa_endpoint_reset_rx_aggr(endpoint);
 	else
-		gsi_channel_reset(&ipa->dma_subsys, channel_id, true);
+		ipa_channel_reset(&ipa->dma_subsys, channel_id, true);
 
 	if (ret)
 		dev_err(&ipa->pdev->dev,
@@ -1537,7 +1537,7 @@ int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
 	struct ipa_dma *gsi = &ipa->dma_subsys;
 	int ret;
 
-	ret = gsi_channel_start(gsi, endpoint->channel_id);
+	ret = ipa_channel_start(gsi, endpoint->channel_id);
 	if (ret) {
 		dev_err(&ipa->pdev->dev,
 			"error %d starting %cX channel %u for endpoint %u\n",
@@ -1576,7 +1576,7 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
 	}
 
 	/* Note that if stop fails, the channel's state is not well-defined */
-	ret = gsi_channel_stop(gsi, endpoint->channel_id);
+	ret = ipa_channel_stop(gsi, endpoint->channel_id);
 	if (ret)
 		dev_err(&ipa->pdev->dev,
 			"error %d attempting to stop endpoint %u\n", ret,
@@ -1598,7 +1598,7 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
 		(void)ipa_endpoint_program_suspend(endpoint, true);
 	}
 
-	ret = gsi_channel_suspend(gsi, endpoint->channel_id);
+	ret = ipa_channel_suspend(gsi, endpoint->channel_id);
 	if (ret)
 		dev_err(dev, "error %d suspending channel %u\n", ret,
 			endpoint->channel_id);
@@ -1617,7 +1617,7 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
 	if (!endpoint->toward_ipa)
 		(void)ipa_endpoint_program_suspend(endpoint, false);
 
-	ret = gsi_channel_resume(gsi, endpoint->channel_id);
+	ret = ipa_channel_resume(gsi, endpoint->channel_id);
 	if (ret)
 		dev_err(dev, "error %d resuming channel %u\n", ret,
 			endpoint->channel_id);
@@ -1660,14 +1660,14 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
 	if (endpoint->ee_id != GSI_EE_AP)
 		return;
 
-	endpoint->trans_tre_max = gsi_channel_trans_tre_max(gsi, channel_id);
+	endpoint->trans_tre_max = ipa_channel_trans_tre_max(gsi, channel_id);
 	if (!endpoint->toward_ipa) {
 		/* RX transactions require a single TRE, so the maximum
 		 * backlog is the same as the maximum outstanding TREs.
 		 */
 		endpoint->replenish_enabled = false;
 		atomic_set(&endpoint->replenish_saved,
-			   gsi_channel_tre_max(gsi, endpoint->channel_id));
+			   ipa_channel_tre_max(gsi, endpoint->channel_id));
 		atomic_set(&endpoint->replenish_backlog, 0);
 		INIT_DELAYED_WORK(&endpoint->replenish_work,
 				  ipa_endpoint_replenish_work);
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index 026f5555fa7d..6ab691ff1faf 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -98,13 +98,13 @@ int ipa_setup(struct ipa *ipa)
 	struct device *dev = &ipa->pdev->dev;
 	int ret;
 
-	ret = gsi_setup(&ipa->dma_subsys);
+	ret = ipa_dma_setup(&ipa->dma_subsys);
 	if (ret)
 		return ret;
 
 	ret = ipa_power_setup(ipa);
 	if (ret)
-		goto err_gsi_teardown;
+		goto err_dma_teardown;
 
 	ipa_endpoint_setup(ipa);
 
@@ -153,8 +153,8 @@ int ipa_setup(struct ipa *ipa)
 err_endpoint_teardown:
 	ipa_endpoint_teardown(ipa);
 	ipa_power_teardown(ipa);
-err_gsi_teardown:
-	gsi_teardown(&ipa->dma_subsys);
+err_dma_teardown:
+	ipa_dma_teardown(&ipa->dma_subsys);
 
 	return ret;
 }
@@ -179,7 +179,7 @@ static void ipa_teardown(struct ipa *ipa)
 	ipa_endpoint_disable_one(command_endpoint);
 	ipa_endpoint_teardown(ipa);
 	ipa_power_teardown(ipa);
-	gsi_teardown(&ipa->dma_subsys);
+	ipa_dma_teardown(&ipa->dma_subsys);
 }
 
 /* Configure bus access behavior for IPA components */
@@ -726,7 +726,7 @@ static int ipa_probe(struct platform_device *pdev)
 					    data->endpoint_data);
 	if (!ipa->filter_map) {
 		ret = -EINVAL;
-		goto err_gsi_exit;
+		goto err_dma_exit;
 	}
 
 	ret = ipa_table_init(ipa);
@@ -780,8 +780,8 @@ static int ipa_probe(struct platform_device *pdev)
 	ipa_table_exit(ipa);
 err_endpoint_exit:
 	ipa_endpoint_exit(ipa);
-err_gsi_exit:
-	gsi_exit(&ipa->dma_subsys);
+err_dma_exit:
+	ipa_dma_exit(&ipa->dma_subsys);
 err_mem_exit:
 	ipa_mem_exit(ipa);
 err_reg_exit:
@@ -824,7 +824,7 @@ static int ipa_remove(struct platform_device *pdev)
 	ipa_modem_exit(ipa);
 	ipa_table_exit(ipa);
 	ipa_endpoint_exit(ipa);
-	gsi_exit(&ipa->dma_subsys);
+	ipa_dma_exit(&ipa->dma_subsys);
 	ipa_mem_exit(ipa);
 	ipa_reg_exit(ipa);
 	kfree(ipa);
diff --git a/drivers/net/ipa/ipa_power.c b/drivers/net/ipa/ipa_power.c
index b1c6c0fcb654..096cfb8ae9a5 100644
--- a/drivers/net/ipa/ipa_power.c
+++ b/drivers/net/ipa/ipa_power.c
@@ -243,7 +243,7 @@ static int ipa_runtime_suspend(struct device *dev)
 	if (ipa->setup_complete) {
 		__clear_bit(IPA_POWER_FLAG_RESUMED, ipa->power->flags);
 		ipa_endpoint_suspend(ipa);
-		gsi_suspend(&ipa->gsi);
+		ipa_dma_suspend(&ipa->dma_subsys);
 	}
 
 	return ipa_power_disable(ipa);
@@ -260,7 +260,7 @@ static int ipa_runtime_resume(struct device *dev)
 
 	/* Endpoints aren't usable until setup is complete */
 	if (ipa->setup_complete) {
-		gsi_resume(&ipa->gsi);
+		ipa_dma_resume(&ipa->dma_subsys);
 		ipa_endpoint_resume(ipa);
 	}
 
diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
index b87936b18770..22755f3ce3da 100644
--- a/drivers/net/ipa/ipa_trans.c
+++ b/drivers/net/ipa/ipa_trans.c
@@ -747,7 +747,7 @@ int ipa_channel_trans_init(struct ipa_dma *gsi, u32 channel_id)
 	 * for transactions (including transaction structures) based on
 	 * this maximum number.
 	 */
-	tre_max = gsi_channel_tre_max(channel->dma_subsys, channel_id);
+	tre_max = ipa_channel_tre_max(channel->dma_subsys, channel_id);
 
 	/* Transactions are allocated one at a time. */
 	ret = ipa_trans_pool_init(&trans_info->pool, sizeof(struct ipa_trans),
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [RFC PATCH 05/17] net: ipa: Check interrupts for availability
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (3 preceding siblings ...)
  2021-09-20  3:07 ` [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface Sireesh Kodali
@ 2021-09-20  3:07 ` Sireesh Kodali
  2021-10-13 22:29   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait Sireesh Kodali
                   ` (12 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:07 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

Make ipa_interrupt_add() and ipa_interrupt_remove() no-ops if the
requested interrupt is not supported by the IPA hardware.

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_interrupt.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
index b35170a93b0f..94708a23a597 100644
--- a/drivers/net/ipa/ipa_interrupt.c
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -48,6 +48,25 @@ static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 irq_id)
 	return irq_id == IPA_IRQ_UC_0 || irq_id == IPA_IRQ_UC_1;
 }
 
+static bool ipa_interrupt_check_fixup(enum ipa_irq_id *irq_id, enum ipa_version version)
+{
+	switch (*irq_id) {
+	case IPA_IRQ_EOT_COAL:
+		return version < IPA_VERSION_3_5;
+	case IPA_IRQ_DCMP:
+		return version < IPA_VERSION_4_5;
+	case IPA_IRQ_TLV_LEN_MIN_DSM:
+		return version >= IPA_VERSION_4_5;
+	default:
+		break;
+	}
+
+	if (*irq_id >= IPA_IRQ_DRBIP_PKT_EXCEED_MAX_SIZE_EN)
+		return version >= IPA_VERSION_4_9;
+
+	return true;
+}
+
 /* Process a particular interrupt type that has been received */
 static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 irq_id)
 {
@@ -191,6 +210,9 @@ void ipa_interrupt_add(struct ipa_interrupt *interrupt,
 	struct ipa *ipa = interrupt->ipa;
 	u32 offset;
 
+	if (!ipa_interrupt_check_fixup(&ipa_irq, ipa->version))
+		return;
+
 	WARN_ON(ipa_irq >= IPA_IRQ_COUNT);
 
 	interrupt->handler[ipa_irq] = handler;
@@ -208,6 +230,9 @@ ipa_interrupt_remove(struct ipa_interrupt *interrupt, enum ipa_irq_id ipa_irq)
 	struct ipa *ipa = interrupt->ipa;
 	u32 offset;
 
+	if (!ipa_interrupt_check_fixup(&ipa_irq, ipa->version))
+		return;
+
 	WARN_ON(ipa_irq >= IPA_IRQ_COUNT);
 
 	/* Update the IPA interrupt mask to disable it */
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (4 preceding siblings ...)
  2021-09-20  3:07 ` [RFC PATCH 05/17] net: ipa: Check interrupts for availability Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:29   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions Sireesh Kodali
                   ` (11 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

Sometimes the pipeline clear fails, and when it does, hanging in the
kernel is ugly; the timeout turns the hang into a clear error message.
Note that this should never actually time out: it only does so if there
is a mistake in the configuration, so the timeout is mainly useful when
debugging.

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
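
The change can be illustrated with a userspace stand-in (the real code uses
the kernel's wait_for_completion_timeout(), which returns the remaining
jiffies, or 0 on timeout; fake_wait_for_completion_timeout() below is a
hypothetical substitute for that behaviour):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for wait_for_completion_timeout(): returns a
 * nonzero remaining-time value if the completion was signalled, 0 on
 * timeout. */
static unsigned long fake_wait_for_completion_timeout(bool done,
						      unsigned long timeout)
{
	return done ? timeout : 0;
}

/* Mirrors the patched ipa_cmd_pipeline_clear_wait(): instead of blocking
 * forever, report an error after the timeout expires. */
static int pipeline_clear_wait(bool done)
{
	unsigned long timeout = 1000;	/* stands in for msecs_to_jiffies(1000) */

	if (!fake_wait_for_completion_timeout(done, timeout)) {
		fprintf(stderr, "ipa_cmd_pipeline_clear_wait time out\n");
		return -1;
	}
	return 0;
}
```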
 drivers/net/ipa/ipa_cmd.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
index 3db9e94e484f..0bdbc331fa78 100644
--- a/drivers/net/ipa/ipa_cmd.c
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -658,7 +658,10 @@ u32 ipa_cmd_pipeline_clear_count(void)
 
 void ipa_cmd_pipeline_clear_wait(struct ipa *ipa)
 {
-	wait_for_completion(&ipa->completion);
+	unsigned long timeout_jiffies = msecs_to_jiffies(1000);
+
+	if (!wait_for_completion_timeout(&ipa->completion, timeout_jiffies))
+		dev_err(&ipa->pdev->dev, "%s time out\n", __func__);
 }
 
 void ipa_cmd_pipeline_clear(struct ipa *ipa)
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (5 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:29   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts Sireesh Kodali
                   ` (10 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

IPA v2.x is an older version of the IPA hardware, and is 32 bit.

Most of the registers were simply moved in newer IPA versions, while the
register fields have remained the same across versions. This means that
only the register addresses needed to be added to the driver.

To handle the different IPA register addresses, static inline functions
have been defined that return the correct register address.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
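
The helper pattern can be sketched in isolation. The sketch below copies one
of the helpers from the patch (the endpoint status register) into a
self-contained userspace file; the reduced enum stands in for the driver's
ipa_version and only its ordering matters:

```c
#include <assert.h>
#include <stdint.h>

/* Reduced version enum; only the ordering matters for the comparisons. */
enum ipa_version {
	IPA_VERSION_2_0,
	IPA_VERSION_2_5,
	IPA_VERSION_2_6L,
	IPA_VERSION_3_0,
};

/* Mirrors ipa_reg_endp_status_n_offset() from the patch: IPA v2.x packs
 * the per-endpoint registers at stride 0x4 from a different base, while
 * later versions use stride 0x70. */
static uint32_t endp_status_n_offset(enum ipa_version version, uint16_t ep)
{
	if (version <= IPA_VERSION_2_6L)
		return 0x4c0 + 0x4 * ep;

	return 0x840 + 0x70 * ep;
}
```

Callers pass ipa->version at each use site, which is why the macros were
replaced by functions: the offset can no longer be computed at compile time.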
 drivers/net/ipa/ipa_cmd.c      |   3 +-
 drivers/net/ipa/ipa_endpoint.c |  33 +++---
 drivers/net/ipa/ipa_main.c     |   8 +-
 drivers/net/ipa/ipa_mem.c      |   5 +-
 drivers/net/ipa/ipa_reg.h      | 184 +++++++++++++++++++++++++++------
 drivers/net/ipa/ipa_version.h  |  12 +++
 6 files changed, 195 insertions(+), 50 deletions(-)

diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
index 0bdbc331fa78..7a104540dc26 100644
--- a/drivers/net/ipa/ipa_cmd.c
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -326,7 +326,8 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa)
 	 * worst case (highest endpoint number) offset of that endpoint
 	 * fits in the register write command field(s) that must hold it.
 	 */
-	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT - 1);
+	offset = ipa_reg_endp_status_n_offset(ipa->version,
+			IPA_ENDPOINT_COUNT - 1);
 	name = "maximal endpoint status";
 	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
 		return false;
diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index dbef549c4537..7d3ab61cd890 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -242,8 +242,8 @@ static struct ipa_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint,
 static bool
 ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay)
 {
-	u32 offset = IPA_REG_ENDP_INIT_CTRL_N_OFFSET(endpoint->endpoint_id);
 	struct ipa *ipa = endpoint->ipa;
+	u32 offset = ipa_reg_endp_init_ctrl_n_offset(ipa->version, endpoint->endpoint_id);
 	bool state;
 	u32 mask;
 	u32 val;
@@ -410,7 +410,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 		if (!(endpoint->ee_id == GSI_EE_MODEM && endpoint->toward_ipa))
 			continue;
 
-		offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
+		offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
 
 		/* Value written is 0, and all bits are updated.  That
 		 * means status is disabled on the endpoint, and as a
@@ -431,7 +431,8 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 
 static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_CFG_N_OFFSET(endpoint->endpoint_id);
+	struct ipa *ipa = endpoint->ipa;
+	u32 offset = ipa_reg_endp_init_cfg_n_offset(ipa->version, endpoint->endpoint_id);
 	enum ipa_cs_offload_en enabled;
 	u32 val = 0;
 
@@ -523,8 +524,8 @@ ipa_qmap_header_size(enum ipa_version version, struct ipa_endpoint *endpoint)
  */
 static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_HDR_N_OFFSET(endpoint->endpoint_id);
 	struct ipa *ipa = endpoint->ipa;
+	u32 offset = ipa_reg_endp_init_hdr_n_offset(ipa->version, endpoint->endpoint_id);
 	u32 val = 0;
 
 	if (endpoint->data->qmap) {
@@ -565,9 +566,9 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
 
 static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(endpoint->endpoint_id);
-	u32 pad_align = endpoint->data->rx.pad_align;
 	struct ipa *ipa = endpoint->ipa;
+	u32 offset = ipa_reg_endp_init_hdr_ext_n_offset(ipa->version, endpoint->endpoint_id);
+	u32 pad_align = endpoint->data->rx.pad_align;
 	u32 val = 0;
 
 	val |= HDR_ENDIANNESS_FMASK;		/* big endian */
@@ -609,6 +610,7 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
 
 static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
 {
+	enum ipa_version version = endpoint->ipa->version;
 	u32 endpoint_id = endpoint->endpoint_id;
 	u32 val = 0;
 	u32 offset;
@@ -616,7 +618,7 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
 	if (endpoint->toward_ipa)
 		return;		/* Register not valid for TX endpoints */
 
-	offset = IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(endpoint_id);
+	offset = ipa_reg_endp_init_hdr_metadata_mask_n_offset(version, endpoint_id);
 
 	/* Note that HDR_ENDIANNESS indicates big endian header fields */
 	if (endpoint->data->qmap)
@@ -627,7 +629,8 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
 
 static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_MODE_N_OFFSET(endpoint->endpoint_id);
+	enum ipa_version version = endpoint->ipa->version;
+	u32 offset = ipa_reg_endp_init_mode_n_offset(version, endpoint->endpoint_id);
 	u32 val;
 
 	if (!endpoint->toward_ipa)
@@ -716,8 +719,8 @@ static u32 aggr_sw_eof_active_encoded(enum ipa_version version, bool enabled)
 
 static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_AGGR_N_OFFSET(endpoint->endpoint_id);
 	enum ipa_version version = endpoint->ipa->version;
+	u32 offset = ipa_reg_endp_init_aggr_n_offset(version, endpoint->endpoint_id);
 	u32 val = 0;
 
 	if (endpoint->data->aggregation) {
@@ -853,7 +856,7 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
 	u32 offset;
 	u32 val;
 
-	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(endpoint_id);
+	offset = ipa_reg_endp_init_hol_block_timer_n_offset(ipa->version, endpoint_id);
 	val = hol_block_timer_val(ipa, microseconds);
 	iowrite32(val, ipa->reg_virt + offset);
 }
@@ -861,12 +864,13 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
 static void
 ipa_endpoint_init_hol_block_enable(struct ipa_endpoint *endpoint, bool enable)
 {
+	enum ipa_version version = endpoint->ipa->version;
 	u32 endpoint_id = endpoint->endpoint_id;
 	u32 offset;
 	u32 val;
 
 	val = enable ? HOL_BLOCK_EN_FMASK : 0;
-	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(endpoint_id);
+	offset = ipa_reg_endp_init_hol_block_en_n_offset(version, endpoint_id);
 	iowrite32(val, endpoint->ipa->reg_virt + offset);
 }
 
@@ -887,7 +891,8 @@ void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa)
 
 static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(endpoint->endpoint_id);
+	enum ipa_version version = endpoint->ipa->version;
+	u32 offset = ipa_reg_endp_init_deaggr_n_offset(version, endpoint->endpoint_id);
 	u32 val = 0;
 
 	if (!endpoint->toward_ipa)
@@ -979,7 +984,7 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
 	u32 val = 0;
 	u32 offset;
 
-	offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
+	offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
 
 	if (endpoint->data->status_enable) {
 		val |= STATUS_EN_FMASK;
@@ -1384,7 +1389,7 @@ void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id)
 	val |= u32_encode_bits(endpoint_id, ROUTE_FRAG_DEF_PIPE_FMASK);
 	val |= ROUTE_DEF_RETAIN_HDR_FMASK;
 
-	iowrite32(val, ipa->reg_virt + IPA_REG_ROUTE_OFFSET);
+	iowrite32(val, ipa->reg_virt + ipa_reg_route_offset(ipa->version));
 }
 
 void ipa_endpoint_default_route_clear(struct ipa *ipa)
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index 6ab691ff1faf..ba06e3ad554c 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -191,7 +191,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
 	if (ipa->version < IPA_VERSION_4_0)
 		return;
 
-	val = ioread32(ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
+	val = ioread32(ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
 
 	if (ipa->version == IPA_VERSION_4_0) {
 		val &= ~IPA_QMB_SELECT_CONS_EN_FMASK;
@@ -206,7 +206,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
 	val |= GSI_MULTI_INORDER_RD_DIS_FMASK;
 	val |= GSI_MULTI_INORDER_WR_DIS_FMASK;
 
-	iowrite32(val, ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
+	iowrite32(val, ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
 }
 
 /* Configure DDR and (possibly) PCIe max read/write QSB values */
@@ -355,7 +355,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
 	/* IPA v4.5+ has no backward compatibility register */
 	if (version < IPA_VERSION_4_5) {
 		val = data->backward_compat;
-		iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET);
+		iowrite32(val, ipa->reg_virt + ipa_reg_bcr_offset(ipa->version));
 	}
 
 	/* Implement some hardware workarounds */
@@ -384,7 +384,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
 		/* Configure aggregation timer granularity */
 		granularity = ipa_aggr_granularity_val(IPA_AGGR_GRANULARITY);
 		val = u32_encode_bits(granularity, AGGR_GRANULARITY_FMASK);
-		iowrite32(val, ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET);
+		iowrite32(val, ipa->reg_virt + ipa_reg_counter_cfg_offset(ipa->version));
 	} else {
 		ipa_qtime_config(ipa);
 	}
diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
index 16e5fdd5bd73..8acc88070a6f 100644
--- a/drivers/net/ipa/ipa_mem.c
+++ b/drivers/net/ipa/ipa_mem.c
@@ -113,7 +113,8 @@ int ipa_mem_setup(struct ipa *ipa)
 	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
 	offset = ipa->mem_offset + mem->offset;
 	val = proc_cntxt_base_addr_encoded(ipa->version, offset);
-	iowrite32(val, ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET);
+	iowrite32(val, ipa->reg_virt +
+		  ipa_reg_local_pkt_proc_cntxt_base_offset(ipa->version));
 
 	return 0;
 }
@@ -316,7 +317,7 @@ int ipa_mem_config(struct ipa *ipa)
 	u32 i;
 
 	/* Check the advertised location and size of the shared memory area */
-	val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
+	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
 
 	/* The fields in the register are in 8 byte units */
 	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
index a5b355384d4a..fcae0296cfa4 100644
--- a/drivers/net/ipa/ipa_reg.h
+++ b/drivers/net/ipa/ipa_reg.h
@@ -65,7 +65,17 @@ struct ipa;
  * of valid bits for the register.
  */
 
-#define IPA_REG_COMP_CFG_OFFSET				0x0000003c
+#define IPA_REG_COMP_SW_RESET_OFFSET		0x0000003c
+
+#define IPA_REG_V2_ENABLED_PIPES_OFFSET		0x000005dc
+
+static inline u32 ipa_reg_comp_cfg_offset(enum ipa_version version)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x38;
+
+	return 0x3c;
+}
 /* The next field is not supported for IPA v4.0+, not present for IPA v4.5+ */
 #define ENABLE_FMASK				GENMASK(0, 0)
 /* The next field is present for IPA v4.7+ */
@@ -124,6 +134,7 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
 	return u32_encode_bits(val, GENMASK(17, 17));
 }
 
+/* This register is only present on IPA v3.0 and above */
 #define IPA_REG_CLKON_CFG_OFFSET			0x00000044
 #define RX_FMASK				GENMASK(0, 0)
 #define PROC_FMASK				GENMASK(1, 1)
@@ -164,7 +175,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
 /* The next field is present for IPA v4.7+ */
 #define DRBIP_FMASK				GENMASK(31, 31)
 
-#define IPA_REG_ROUTE_OFFSET				0x00000048
+static inline u32 ipa_reg_route_offset(enum ipa_version version)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x44;
+
+	return 0x48;
+}
+
 #define ROUTE_DIS_FMASK				GENMASK(0, 0)
 #define ROUTE_DEF_PIPE_FMASK			GENMASK(5, 1)
 #define ROUTE_DEF_HDR_TABLE_FMASK		GENMASK(6, 6)
@@ -172,7 +190,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
 #define ROUTE_FRAG_DEF_PIPE_FMASK		GENMASK(21, 17)
 #define ROUTE_DEF_RETAIN_HDR_FMASK		GENMASK(24, 24)
 
-#define IPA_REG_SHARED_MEM_SIZE_OFFSET			0x00000054
+static inline u32 ipa_reg_shared_mem_size_offset(enum ipa_version version)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x50;
+
+	return 0x54;
+}
+
 #define SHARED_MEM_SIZE_FMASK			GENMASK(15, 0)
 #define SHARED_MEM_BADDR_FMASK			GENMASK(31, 16)
 
@@ -219,7 +244,13 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
 }
 
 /* The next register is not present for IPA v4.5+ */
-#define IPA_REG_BCR_OFFSET				0x000001d0
+static inline u32 ipa_reg_bcr_offset(enum ipa_version version)
+{
+	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
+		return 0x5b0;
+
+	return 0x1d0;
+}
 /* The next two fields are not present for IPA v4.2+ */
 #define BCR_CMDQ_L_LACK_ONE_ENTRY_FMASK		GENMASK(0, 0)
 #define BCR_TX_NOT_USING_BRESP_FMASK		GENMASK(1, 1)
@@ -236,7 +267,14 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
 #define BCR_ROUTER_PREFETCH_EN_FMASK		GENMASK(9, 9)
 
 /* The value of the next register must be a multiple of 8 (bottom 3 bits 0) */
-#define IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET		0x000001e8
+static inline u32 ipa_reg_local_pkt_proc_cntxt_base_offset(enum ipa_version version)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x5e0;
+
+	return 0x1e8;
+}
+
 
 /* Encoded value for LOCAL_PKT_PROC_CNTXT register BASE_ADDR field */
 static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
@@ -252,7 +290,14 @@ static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
 #define IPA_REG_AGGR_FORCE_CLOSE_OFFSET			0x000001ec
 
 /* The next register is not present for IPA v4.5+ */
-#define IPA_REG_COUNTER_CFG_OFFSET			0x000001f0
+static inline u32 ipa_reg_counter_cfg_offset(enum ipa_version version)
+{
+	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
+		return 0x5e8;
+
+	return 0x1f0;
+}
+
 /* The next field is not present for IPA v3.5+ */
 #define EOT_COAL_GRANULARITY			GENMASK(3, 0)
 #define AGGR_GRANULARITY_FMASK			GENMASK(8, 4)
@@ -349,15 +394,27 @@ enum ipa_pulse_gran {
 #define Y_MIN_LIM_FMASK				GENMASK(21, 16)
 #define Y_MAX_LIM_FMASK				GENMASK(29, 24)
 
-#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \
-					(0x00000800 + 0x0070 * (ep))
+static inline u32 ipa_reg_endp_init_ctrl_n_offset(enum ipa_version version, u16 ep)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x70 + 0x4 * ep;
+
+	return 0x800 + 0x70 * ep;
+}
+
 /* Valid only for RX (IPA producer) endpoints (do not use for IPA v4.0+) */
 #define ENDP_SUSPEND_FMASK			GENMASK(0, 0)
 /* Valid only for TX (IPA consumer) endpoints */
 #define ENDP_DELAY_FMASK			GENMASK(1, 1)
 
-#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \
-					(0x00000808 + 0x0070 * (ep))
+static inline u32 ipa_reg_endp_init_cfg_n_offset(enum ipa_version version, u16 ep)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0xc0 + 0x4 * ep;
+
+	return 0x808 + 0x70 * ep;
+}
+
 #define FRAG_OFFLOAD_EN_FMASK			GENMASK(0, 0)
 #define CS_OFFLOAD_EN_FMASK			GENMASK(2, 1)
 #define CS_METADATA_HDR_OFFSET_FMASK		GENMASK(6, 3)
@@ -383,8 +440,14 @@ enum ipa_nat_en {
 	IPA_NAT_DST			= 0x2,
 };
 
-#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \
-					(0x00000810 + 0x0070 * (ep))
+static inline u32 ipa_reg_endp_init_hdr_n_offset(enum ipa_version version, u16 ep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x170 + 0x4 * ep;
+
+	return 0x810 + 0x70 * ep;
+}
+
 #define HDR_LEN_FMASK				GENMASK(5, 0)
 #define HDR_OFST_METADATA_VALID_FMASK		GENMASK(6, 6)
 #define HDR_OFST_METADATA_FMASK			GENMASK(12, 7)
@@ -440,8 +503,14 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
 	return val;
 }
 
-#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \
-					(0x00000814 + 0x0070 * (ep))
+static inline u32 ipa_reg_endp_init_hdr_ext_n_offset(enum ipa_version version, u16 ep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x1c0 + 0x4 * ep;
+
+	return 0x814 + 0x70 * ep;
+}
+
 #define HDR_ENDIANNESS_FMASK			GENMASK(0, 0)
 #define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK	GENMASK(1, 1)
 #define HDR_TOTAL_LEN_OR_PAD_FMASK		GENMASK(2, 2)
@@ -454,12 +523,23 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
 #define HDR_ADDITIONAL_CONST_LEN_MSB_FMASK	GENMASK(21, 20)
 
 /* Valid only for RX (IPA producer) endpoints */
-#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(rxep) \
-					(0x00000818 + 0x0070 * (rxep))
+static inline u32 ipa_reg_endp_init_hdr_metadata_mask_n_offset(enum ipa_version version, u16 rxep)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x220 + 0x4 * rxep;
+
+	return 0x818 + 0x70 * rxep;
+}
 
 /* Valid only for TX (IPA consumer) endpoints */
-#define IPA_REG_ENDP_INIT_MODE_N_OFFSET(txep) \
-					(0x00000820 + 0x0070 * (txep))
+static inline u32 ipa_reg_endp_init_mode_n_offset(enum ipa_version version, u16 txep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x2c0 + 0x4 * txep;
+
+	return 0x820 + 0x70 * txep;
+}
+
 #define MODE_FMASK				GENMASK(2, 0)
 /* The next field is present for IPA v4.5+ */
 #define DCPH_ENABLE_FMASK			GENMASK(3, 3)
@@ -480,8 +560,14 @@ enum ipa_mode {
 	IPA_DMA				= 0x3,
 };
 
-#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \
-					(0x00000824 +  0x0070 * (ep))
+static inline u32 ipa_reg_endp_init_aggr_n_offset(enum ipa_version version,
+						  u16 ep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x320 + 0x4 * ep;
+	return 0x824 + 0x70 * ep;
+}
+
 #define AGGR_EN_FMASK				GENMASK(1, 0)
 #define AGGR_TYPE_FMASK				GENMASK(4, 2)
 
@@ -543,14 +629,27 @@ enum ipa_aggr_type {
 };
 
 /* Valid only for RX (IPA producer) endpoints */
-#define IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(rxep) \
-					(0x0000082c +  0x0070 * (rxep))
+static inline u32 ipa_reg_endp_init_hol_block_en_n_offset(enum ipa_version version,
+							  u16 rxep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x3c0 + 0x4 * rxep;
+
+	return 0x82c + 0x70 * rxep;
+}
+
 #define HOL_BLOCK_EN_FMASK			GENMASK(0, 0)
 
 /* Valid only for RX (IPA producer) endpoints */
-#define IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(rxep) \
-					(0x00000830 +  0x0070 * (rxep))
-/* The next two fields are present for IPA v4.2 only */
+static inline u32 ipa_reg_endp_init_hol_block_timer_n_offset(enum ipa_version version, u16 rxep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x420 + 0x4 * rxep;
+
+	return 0x830 + 0x70 * rxep;
+}
+
+/* The next fields are present for IPA v4.2 only */
 #define BASE_VALUE_FMASK			GENMASK(4, 0)
 #define SCALE_FMASK				GENMASK(12, 8)
 /* The next two fields are present for IPA v4.5 */
@@ -558,8 +657,14 @@ enum ipa_aggr_type {
 #define GRAN_SEL_FMASK				GENMASK(8, 8)
 
 /* Valid only for TX (IPA consumer) endpoints */
-#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(txep) \
-					(0x00000834 + 0x0070 * (txep))
+static inline u32 ipa_reg_endp_init_deaggr_n_offset(enum ipa_version version, u16 txep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x470 + 0x4 * txep;
+
+	return 0x834 + 0x70 * txep;
+}
+
 #define DEAGGR_HDR_LEN_FMASK			GENMASK(5, 0)
 #define SYSPIPE_ERR_DETECTION_FMASK		GENMASK(6, 6)
 #define PACKET_OFFSET_VALID_FMASK		GENMASK(7, 7)
@@ -629,8 +734,14 @@ enum ipa_seq_rep_type {
 	IPA_SEQ_REP_DMA_PARSER			= 0x08,
 };
 
-#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \
-					(0x00000840 + 0x0070 * (ep))
+static inline u32 ipa_reg_endp_status_n_offset(enum ipa_version version, u16 ep)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x4c0 + 0x4 * ep;
+
+	return 0x840 + 0x70 * ep;
+}
+
 #define STATUS_EN_FMASK				GENMASK(0, 0)
 #define STATUS_ENDP_FMASK			GENMASK(5, 1)
 /* The next field is not present for IPA v4.5+ */
@@ -662,6 +773,9 @@ enum ipa_seq_rep_type {
 static inline u32 ipa_reg_irq_stts_ee_n_offset(enum ipa_version version,
 					       u32 ee)
 {
+	if (version <= IPA_VERSION_2_6L)
+		return 0x00001008 + 0x1000 * ee;
+
 	if (version < IPA_VERSION_4_9)
 		return 0x00003008 + 0x1000 * ee;
 
@@ -675,6 +789,9 @@ static inline u32 ipa_reg_irq_stts_offset(enum ipa_version version)
 
 static inline u32 ipa_reg_irq_en_ee_n_offset(enum ipa_version version, u32 ee)
 {
+	if (version <= IPA_VERSION_2_6L)
+		return 0x0000100c + 0x1000 * ee;
+
 	if (version < IPA_VERSION_4_9)
 		return 0x0000300c + 0x1000 * ee;
 
@@ -688,6 +805,9 @@ static inline u32 ipa_reg_irq_en_offset(enum ipa_version version)
 
 static inline u32 ipa_reg_irq_clr_ee_n_offset(enum ipa_version version, u32 ee)
 {
+	if (version <= IPA_VERSION_2_6L)
+		return 0x00001010 + 0x1000 * ee;
+
 	if (version < IPA_VERSION_4_9)
 		return 0x00003010 + 0x1000 * ee;
 
@@ -776,6 +896,9 @@ enum ipa_irq_id {
 
 static inline u32 ipa_reg_irq_uc_ee_n_offset(enum ipa_version version, u32 ee)
 {
+	if (version <= IPA_VERSION_2_6L)
+	return 0x0000101c + 0x1000 * ee;
+
 	if (version < IPA_VERSION_4_9)
 		return 0x0000301c + 0x1000 * ee;
 
@@ -793,6 +916,9 @@ static inline u32 ipa_reg_irq_uc_offset(enum ipa_version version)
 static inline u32
 ipa_reg_irq_suspend_info_ee_n_offset(enum ipa_version version, u32 ee)
 {
+	if (version <= IPA_VERSION_2_6L)
+		return 0x00001098 + 0x1000 * ee;
+
 	if (version == IPA_VERSION_3_0)
 		return 0x00003098 + 0x1000 * ee;
 
diff --git a/drivers/net/ipa/ipa_version.h b/drivers/net/ipa/ipa_version.h
index 6c16c895d842..0d816de586ba 100644
--- a/drivers/net/ipa/ipa_version.h
+++ b/drivers/net/ipa/ipa_version.h
@@ -8,6 +8,9 @@
 
 /**
  * enum ipa_version
+ * @IPA_VERSION_2_0:	IPA version 2.0
+ * @IPA_VERSION_2_5:	IPA version 2.5/2.6
+ * @IPA_VERSION_2_6L:	IPA version 2.6L
  * @IPA_VERSION_3_0:	IPA version 3.0/GSI version 1.0
  * @IPA_VERSION_3_1:	IPA version 3.1/GSI version 1.1
  * @IPA_VERSION_3_5:	IPA version 3.5/GSI version 1.2
@@ -25,6 +28,9 @@
  * new version is added.
  */
 enum ipa_version {
+	IPA_VERSION_2_0,
+	IPA_VERSION_2_5,
+	IPA_VERSION_2_6L,
 	IPA_VERSION_3_0,
 	IPA_VERSION_3_1,
 	IPA_VERSION_3_5,
@@ -38,4 +44,10 @@ enum ipa_version {
 	IPA_VERSION_4_11,
 };
 
+#define IPA_HAS_GSI(version) ((version) > IPA_VERSION_2_6L)
+#define IPA_IS_64BIT(version) ((version) > IPA_VERSION_2_6L)
+#define IPA_VERSION_RANGE(_version, _from, _to) \
+	((_version) >= (IPA_VERSION_##_from) &&  \
+	 (_version) <= (IPA_VERSION_##_to))
+
 #endif /* _IPA_VERSION_H_ */
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (6 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:29   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport Sireesh Kodali
                   ` (9 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

Interrupts on IPA v2.x use different numbers from the interrupts on IPA
v3.x and above. Like IPA v3.0, IPA v2.x also doesn't support the
TX_SUSPEND irq.

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
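
The renumbering can be sketched with hypothetical irq values (the real ids
live in the driver's ipa_reg.h; the numbers below are made up for
illustration). On IPA v2.x two interrupt ids are absent, so every common
enum value at or above the gap is shifted by two to reach the v2.x hardware
numbering:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical numbering for illustration only. */
enum ipa_version { IPA_VERSION_2_6L, IPA_VERSION_3_0 };
enum ipa_irq_id {
	IPA_IRQ_BAD_SNOC_ACCESS		= 0,
	IPA_IRQ_PROC_UC_ACK_Q_NOT_EMPTY	= 8,
	IPA_IRQ_UC_0			= 10,
};

/* Remap a common irq id to the v2.x hardware numbering; returns false
 * if the irq is unsupported (elided here for brevity). */
static bool irq_check_fixup(enum ipa_irq_id *id, enum ipa_version version)
{
	if (version <= IPA_VERSION_2_6L &&
	    *id >= IPA_IRQ_PROC_UC_ACK_Q_NOT_EMPTY)
		*id += 2;

	return true;
}
```

This is why ipa_interrupt_check_fixup() takes the irq id by pointer: it can
both veto an unsupported interrupt and rewrite the id in place.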
 drivers/net/ipa/ipa_interrupt.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
index 94708a23a597..37b5932253aa 100644
--- a/drivers/net/ipa/ipa_interrupt.c
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -63,6 +63,11 @@ static bool ipa_interrupt_check_fixup(enum ipa_irq_id *irq_id, enum ipa_version
 
 	if (*irq_id >= IPA_IRQ_DRBIP_PKT_EXCEED_MAX_SIZE_EN)
 		return version >= IPA_VERSION_4_9;
+	else if (*irq_id > IPA_IRQ_BAM_GSI_IDLE)
+		return version >= IPA_VERSION_3_0;
+	else if (version <= IPA_VERSION_2_6L &&
+			*irq_id >= IPA_IRQ_PROC_UC_ACK_Q_NOT_EMPTY)
+		*irq_id += 2;
 
 	return true;
 }
@@ -152,8 +157,8 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
 
 	WARN_ON(!(mask & ipa->available));
 
-	/* IPA version 3.0 does not support TX_SUSPEND interrupt control */
-	if (ipa->version == IPA_VERSION_3_0)
+	/* IPA version <=3.0 does not support TX_SUSPEND interrupt control */
+	if (ipa->version <= IPA_VERSION_3_0)
 		return;
 
 	offset = ipa_reg_irq_suspend_en_offset(ipa->version);
@@ -190,7 +195,7 @@ void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt)
 	val = ioread32(ipa->reg_virt + offset);
 
 	/* SUSPEND interrupt status isn't cleared on IPA version 3.0 */
-	if (ipa->version == IPA_VERSION_3_0)
+	if (ipa->version <= IPA_VERSION_3_0)
 		return;
 
 	offset = ipa_reg_irq_suspend_clr_offset(ipa->version);
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (7 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-09-20 14:31   ` kernel test robot
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init Sireesh Kodali
                   ` (8 subsequent siblings)
  17 siblings, 2 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

BAM is used on IPA v2.x. Since BAM already has a nice dmaengine driver,
the IPA driver only makes calls to the dmaengine API.
Also add BAM transaction support to IPA's transaction abstraction layer.

BAM transactions should use NAPI just like GSI transactions do, but for
now they use per-transaction callbacks.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
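
One small idiom from the new bam.c is worth calling out: bam_channel_id()
recovers a channel's index purely by pointer subtraction against the
containing array. A minimal self-contained stand-in (simplified struct
layout, not the driver's real one):

```c
#include <assert.h>
#include <stdint.h>

#define BAM_CHANNEL_COUNT_MAX 8

struct ipa_dma;

struct ipa_channel {
	struct ipa_dma *dma_subsys;	/* back-pointer to the owner */
};

struct ipa_dma {
	struct ipa_channel channel[BAM_CHANNEL_COUNT_MAX];
};

/* Mirrors bam_channel_id(): pointer subtraction between an array element
 * and the array base yields the element's index. */
static uint32_t bam_channel_id(struct ipa_channel *channel)
{
	return channel - &channel->dma_subsys->channel[0];
}
```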
 drivers/net/ipa/Makefile          |   2 +-
 drivers/net/ipa/bam.c             | 525 ++++++++++++++++++++++++++++++
 drivers/net/ipa/gsi.c             |   1 +
 drivers/net/ipa/ipa_data.h        |   1 +
 drivers/net/ipa/ipa_dma.h         |  18 +-
 drivers/net/ipa/ipa_dma_private.h |   2 +
 drivers/net/ipa/ipa_main.c        |  20 +-
 drivers/net/ipa/ipa_trans.c       |  14 +-
 drivers/net/ipa/ipa_trans.h       |   4 +
 9 files changed, 569 insertions(+), 18 deletions(-)
 create mode 100644 drivers/net/ipa/bam.c

diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
index 3cd021fb992e..4abebc667f77 100644
--- a/drivers/net/ipa/Makefile
+++ b/drivers/net/ipa/Makefile
@@ -2,7 +2,7 @@ obj-$(CONFIG_QCOM_IPA)	+=	ipa.o
 
 ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
 				ipa_table.o ipa_interrupt.o gsi.o ipa_trans.o \
-				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
+				ipa_gsi.o ipa_smp2p.o ipa_uc.o bam.o \
 				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
 				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
 				ipa_sysfs.o
diff --git a/drivers/net/ipa/bam.c b/drivers/net/ipa/bam.c
new file mode 100644
index 000000000000..0726e385fee5
--- /dev/null
+++ b/drivers/net/ipa/bam.c
@@ -0,0 +1,525 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/completion.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/netdevice.h>
+#include <linux/platform_device.h>
+
+#include "ipa_gsi.h"
+#include "ipa.h"
+#include "ipa_dma.h"
+#include "ipa_dma_private.h"
+#include "ipa_gsi.h"
+#include "ipa_trans.h"
+#include "ipa_data.h"
+
+/**
+ * DOC: The IPA Smart Peripheral System Interface
+ *
+ * The Smart Peripheral System is a means to communicate over BAM pipes to
+ * the IPA block. The Modem also uses BAM pipes to communicate with the IPA
+ * core.
+ *
+ * Refer to the GSI documentation; BAM is a precursor to GSI and is more
+ * or less the same, conceptually.
+ *
+ * Each channel here corresponds to 1 BAM pipe configured in BAM2BAM mode
+ *
+ * IPA cmds are transferred one at a time, each in one BAM transfer.
+ */
+
+/* Get and configure the BAM DMA channel */
+int bam_channel_init_one(struct ipa_dma *bam,
+			 const struct ipa_gsi_endpoint_data *data, bool command)
+{
+	struct dma_slave_config bam_config;
+	u32 channel_id = data->channel_id;
+	struct ipa_channel *channel = &bam->channel[channel_id];
+	int ret;
+
+	/*TODO: if (!bam_channel_data_valid(bam, data))
+		return -EINVAL;*/
+
+	channel->dma_subsys = bam;
+	channel->dma_chan = dma_request_chan(bam->dev, data->channel_name);
+	if (IS_ERR(channel->dma_chan)) {
+		dev_err(bam->dev, "failed to request BAM channel %s: %d\n",
+				data->channel_name,
+				(int)PTR_ERR(channel->dma_chan));
+		return PTR_ERR(channel->dma_chan);
+	}
+	channel->toward_ipa = data->toward_ipa;
+	channel->tlv_count = data->channel.tlv_count;
+	channel->tre_count = data->channel.tre_count;
+
+	ret = ipa_channel_trans_init(bam, data->channel_id);
+	if (ret)
+		goto err_dma_chan_free;
+
+	if (data->toward_ipa) {
+		bam_config.direction = DMA_MEM_TO_DEV;
+		bam_config.dst_maxburst = channel->tlv_count;
+	} else {
+		bam_config.direction = DMA_DEV_TO_MEM;
+		bam_config.src_maxburst = channel->tlv_count;
+	}
+
+	dmaengine_slave_config(channel->dma_chan, &bam_config);
+
+	if (command) {
+		ret = ipa_cmd_pool_init(channel, 256);
+		if (ret)
+			goto err_dma_chan_free;
+	}
+
+	return 0;
+
+err_dma_chan_free:
+	dma_release_channel(channel->dma_chan);
+	return ret;
+}
+
+static void bam_channel_exit_one(struct ipa_channel *channel)
+{
+	if (channel->dma_chan) {
+		dmaengine_terminate_sync(channel->dma_chan);
+		dma_release_channel(channel->dma_chan);
+	}
+}
+
+/* Get channels from BAM_DMA */
+int bam_channel_init(struct ipa_dma *bam, u32 count,
+		const struct ipa_gsi_endpoint_data *data)
+{
+	int ret = 0;
+	u32 i;
+
+	for (i = 0; i < count; ++i) {
+		bool command = i == IPA_ENDPOINT_AP_COMMAND_TX;
+
+		if (!data[i].channel_name || data[i].ee_id == GSI_EE_MODEM)
+			continue;
+
+		ret = bam_channel_init_one(bam, &data[i], command);
+		if (ret)
+			goto err_unwind;
+	}
+
+	return ret;
+
+err_unwind:
+	while (i--) {
+		if (!data[i].channel_name || data[i].ee_id == GSI_EE_MODEM)
+			continue;
+
+		bam_channel_exit_one(&bam->channel[i]);
+	}
+	return ret;
+}
+
+/* Inverse of bam_channel_init() */
+void bam_channel_exit(struct ipa_dma *bam)
+{
+	u32 channel_id = BAM_CHANNEL_COUNT_MAX - 1;
+
+	do
+		bam_channel_exit_one(&bam->channel[channel_id]);
+	while (channel_id--);
+}
+
+/* Inverse of bam_init() */
+static void bam_exit(struct ipa_dma *bam)
+{
+	mutex_destroy(&bam->mutex);
+	bam_channel_exit(bam);
+}
+
+/* Return the channel id associated with a given channel */
+static u32 bam_channel_id(struct ipa_channel *channel)
+{
+	return channel - &channel->dma_subsys->channel[0];
+}
+
+static void
+bam_channel_tx_update(struct ipa_channel *channel, struct ipa_trans *trans)
+{
+	u64 byte_count = trans->byte_count + trans->len;
+	u64 trans_count = trans->trans_count + 1;
+
+	byte_count -= channel->compl_byte_count;
+	channel->compl_byte_count += byte_count;
+	trans_count -= channel->compl_trans_count;
+	channel->compl_trans_count += trans_count;
+
+	ipa_gsi_channel_tx_completed(channel->dma_subsys, bam_channel_id(channel),
+					   trans_count, byte_count);
+}
+
+static void
+bam_channel_rx_update(struct ipa_channel *channel, struct ipa_trans *trans)
+{
+	/* FIXME */
+	u64 byte_count = trans->byte_count + trans->len;
+
+	channel->byte_count += byte_count;
+	channel->trans_count++;
+}
+
+/* Consult hardware, move any newly completed transactions to completed list */
+static void bam_channel_update(struct ipa_channel *channel)
+{
+	struct ipa_trans *trans = NULL;
+	struct ipa_trans *iter;
+
+	/* Find the oldest pending transaction the hardware has completed */
+	list_for_each_entry(iter, &channel->trans_info.pending, links) {
+		enum dma_status trans_status =
+				dma_async_is_tx_complete(channel->dma_chan,
+					iter->cookie, NULL, NULL);
+		if (trans_status == DMA_COMPLETE) {
+			trans = iter;
+			break;
+		}
+	}
+	if (!trans)
+		return;		/* Nothing has newly completed */
+
+	/* Take a reference to keep the transaction from completing before
+	 * we report its completion and move it to the completed list.
+	 */
+	refcount_inc(&trans->refcount);
+
+	/* For RX channels, update each completed transaction with the number
+	 * of bytes that were actually received.  For TX channels, report
+	 * the number of transactions and bytes this completion represents
+	 * up the network stack.
+	 */
+	if (channel->toward_ipa)
+		bam_channel_tx_update(channel, trans);
+	else
+		bam_channel_rx_update(channel, trans);
+
+	ipa_trans_move_complete(trans);
+
+	ipa_trans_free(trans);
+}
+
+/**
+ * bam_channel_poll_one() - Return a single completed transaction on a channel
+ * @channel:	Channel to be polled
+ *
+ * Return:	Transaction pointer, or null if none are available
+ *
+ * This function returns the first entry on a channel's completed transaction
+ * list.  If that list is empty, the hardware is consulted to determine
+ * whether any new transactions have completed.  If so, they're moved to the
+ * completed list and the new first entry is returned.  If there are no more
+ * completed transactions, a null pointer is returned.
+ */
+static struct ipa_trans *bam_channel_poll_one(struct ipa_channel *channel)
+{
+	struct ipa_trans *trans;
+
+	/* Get the first transaction from the completed list */
+	trans = ipa_channel_trans_complete(channel);
+	if (!trans) {
+		bam_channel_update(channel);
+		trans = ipa_channel_trans_complete(channel);
+	}
+
+	if (trans)
+		ipa_trans_move_polled(trans);
+
+	return trans;
+}
+
+/**
+ * bam_channel_poll() - NAPI poll function for a channel
+ * @napi:	NAPI structure for the channel
+ * @budget:	Budget supplied by NAPI core
+ *
+ * Return:	Number of items polled (<= budget)
+ *
+ * Single transactions completed by hardware are polled until either
+ * the budget is exhausted, or there are no more.  Each transaction
+ * polled is passed to ipa_trans_complete(), to perform remaining
+ * completion processing and retire/free the transaction.
+ */
+static int bam_channel_poll(struct napi_struct *napi, int budget)
+{
+	struct ipa_channel *channel;
+	int count = 0;
+
+	channel = container_of(napi, struct ipa_channel, napi);
+	while (count < budget) {
+		struct ipa_trans *trans;
+
+		count++;
+		trans = bam_channel_poll_one(channel);
+		if (!trans)
+			break;
+		ipa_trans_complete(trans);
+	}
+
+	if (count < budget)
+		napi_complete(&channel->napi);
+
+	return count;
+}
+
+/* Setup function for a single channel */
+static void bam_channel_setup_one(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	if (!channel->dma_subsys)
+		return;	/* Ignore uninitialized channels */
+
+	if (channel->toward_ipa) {
+		netif_tx_napi_add(&bam->dummy_dev, &channel->napi,
+				  bam_channel_poll, NAPI_POLL_WEIGHT);
+	} else {
+		netif_napi_add(&bam->dummy_dev, &channel->napi,
+			       bam_channel_poll, NAPI_POLL_WEIGHT);
+	}
+	napi_enable(&channel->napi);
+}
+
+static void bam_channel_teardown_one(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	if (!channel->dma_subsys)
+		return;		/* Ignore uninitialized channels */
+
+	netif_napi_del(&channel->napi);
+}
+
+/* Setup function for channels */
+static int bam_channel_setup(struct ipa_dma *bam)
+{
+	u32 channel_id = 0;
+	int ret;
+
+	mutex_lock(&bam->mutex);
+
+	do
+		bam_channel_setup_one(bam, channel_id);
+	while (++channel_id < BAM_CHANNEL_COUNT_MAX);
+
+	/* Make sure no channels were defined that hardware does not support */
+	while (channel_id < BAM_CHANNEL_COUNT_MAX) {
+		struct ipa_channel *channel = &bam->channel[channel_id++];
+
+		if (!channel->dma_subsys)
+			continue;	/* Ignore uninitialized channels */
+
+		dev_err(bam->dev, "channel %u not supported by hardware\n",
+			channel_id - 1);
+		ret = -EINVAL;
+		channel_id = BAM_CHANNEL_COUNT_MAX;
+		goto err_unwind;
+	}
+
+	mutex_unlock(&bam->mutex);
+
+	return 0;
+
+err_unwind:
+	while (channel_id--)
+		bam_channel_teardown_one(bam, channel_id);
+
+	mutex_unlock(&bam->mutex);
+
+	return ret;
+}
+
+/* Inverse of bam_channel_setup() */
+static void bam_channel_teardown(struct ipa_dma *bam)
+{
+	u32 channel_id;
+
+	mutex_lock(&bam->mutex);
+
+	channel_id = BAM_CHANNEL_COUNT_MAX - 1;
+	do
+		bam_channel_teardown_one(bam, channel_id);
+	while (channel_id--);
+
+	mutex_unlock(&bam->mutex);
+}
+
+static int bam_setup(struct ipa_dma *bam)
+{
+	return bam_channel_setup(bam);
+}
+
+static void bam_teardown(struct ipa_dma *bam)
+{
+	bam_channel_teardown(bam);
+}
+
+static u32 bam_channel_tre_max(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	/* Hardware limit is channel->tre_count - 1 */
+	return channel->tre_count - (channel->tlv_count - 1);
+}
+
+static u32 bam_channel_trans_tre_max(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	return channel->tlv_count;
+}
+
+static int bam_channel_start(struct ipa_dma *bam, u32 channel_id)
+{
+	return 0;
+}
+
+static int bam_channel_stop(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	return dmaengine_terminate_sync(channel->dma_chan);
+}
+
+static void bam_channel_reset(struct ipa_dma *bam, u32 channel_id, bool doorbell)
+{
+	bam_channel_stop(bam, channel_id);
+}
+
+static int bam_channel_suspend(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	return dmaengine_pause(channel->dma_chan);
+}
+
+static int bam_channel_resume(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	return dmaengine_resume(channel->dma_chan);
+}
+
+static void bam_suspend(struct ipa_dma *bam)
+{
+	/* No-op for now */
+}
+
+static void bam_resume(struct ipa_dma *bam)
+{
+	/* No-op for now */
+}
+
+static void bam_trans_callback(void *arg)
+{
+	ipa_trans_complete(arg);
+}
+
+static void bam_trans_commit(struct ipa_trans *trans, bool unused)
+{
+	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
+	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
+	struct ipa_cmd_info *info;
+	struct scatterlist *sg;
+	u32 byte_count = 0;
+	u32 i;
+	enum dma_transfer_direction direction;
+
+	if (channel->toward_ipa)
+		direction = DMA_MEM_TO_DEV;
+	else
+		direction = DMA_DEV_TO_MEM;
+
+	/* assert(trans->used > 0); */
+
+	info = trans->info ? &trans->info[0] : NULL;
+	for_each_sg(trans->sgl, sg, trans->used, i) {
+		bool last_tre = i == trans->used - 1;
+		dma_addr_t addr = sg_dma_address(sg);
+		u32 len = sg_dma_len(sg);
+		u32 dma_flags = 0;
+		struct dma_async_tx_descriptor *desc;
+
+		byte_count += len;
+		if (info)
+			opcode = info++->opcode;
+
+		if (opcode != IPA_CMD_NONE) {
+			len = opcode;
+			dma_flags |= DMA_PREP_IMM_CMD;
+		}
+
+		if (last_tre)
+			dma_flags |= DMA_PREP_INTERRUPT;
+
+		desc = dmaengine_prep_slave_single(channel->dma_chan, addr, len,
+				direction, dma_flags);
+		if (!desc) {
+			dev_err(channel->dma_subsys->dev,
+				"failed to prepare BAM DMA descriptor\n");
+			return;
+		}
+
+		if (last_tre) {
+			desc->callback = bam_trans_callback;
+			desc->callback_param = trans;
+		}
+
+		desc->cookie = dmaengine_submit(desc);
+
+		if (last_tre)
+			trans->cookie = desc->cookie;
+
+		if (direction == DMA_DEV_TO_MEM)
+			dmaengine_desc_attach_metadata(desc, &trans->len, sizeof(trans->len));
+	}
+
+	if (channel->toward_ipa) {
+		/* We record TX bytes when they are sent */
+		trans->len = byte_count;
+		trans->trans_count = channel->trans_count;
+		trans->byte_count = channel->byte_count;
+		channel->trans_count++;
+		channel->byte_count += byte_count;
+	}
+
+	ipa_trans_move_pending(trans);
+
+	dma_async_issue_pending(channel->dma_chan);
+}
+
+/* Initialize the BAM DMA channels
+ * Actual hw init is handled by the BAM_DMA driver
+ */
+int bam_init(struct ipa_dma *bam, struct platform_device *pdev,
+		enum ipa_version version, u32 count,
+		const struct ipa_gsi_endpoint_data *data)
+{
+	struct device *dev = &pdev->dev;
+	int ret;
+
+	bam->dev = dev;
+	bam->version = version;
+	bam->setup = bam_setup;
+	bam->teardown = bam_teardown;
+	bam->exit = bam_exit;
+	bam->suspend = bam_suspend;
+	bam->resume = bam_resume;
+	bam->channel_tre_max = bam_channel_tre_max;
+	bam->channel_trans_tre_max = bam_channel_trans_tre_max;
+	bam->channel_start = bam_channel_start;
+	bam->channel_stop = bam_channel_stop;
+	bam->channel_reset = bam_channel_reset;
+	bam->channel_suspend = bam_channel_suspend;
+	bam->channel_resume = bam_channel_resume;
+	bam->trans_commit = bam_trans_commit;
+
+	init_dummy_netdev(&bam->dummy_dev);
+
+	ret = bam_channel_init(bam, count, data);
+	if (ret)
+		return ret;
+
+	mutex_init(&bam->mutex);
+
+	return 0;
+}
diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
index 39d9ca620a9f..ac0b9e748fa1 100644
--- a/drivers/net/ipa/gsi.c
+++ b/drivers/net/ipa/gsi.c
@@ -2210,6 +2210,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
 	gsi->channel_reset = gsi_channel_reset;
 	gsi->channel_suspend = gsi_channel_suspend;
 	gsi->channel_resume = gsi_channel_resume;
+	gsi->trans_commit = gsi_trans_commit;
 
 	/* GSI uses NAPI on all channels.  Create a dummy network device
 	 * for the channel NAPI contexts to be associated with.
diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
index 6d329e9ce5d2..7d62d49f414f 100644
--- a/drivers/net/ipa/ipa_data.h
+++ b/drivers/net/ipa/ipa_data.h
@@ -188,6 +188,7 @@ struct ipa_gsi_endpoint_data {
 	u8 channel_id;
 	u8 endpoint_id;
 	bool toward_ipa;
+	const char *channel_name;	/* used only for BAM DMA channels */
 
 	struct gsi_channel_data channel;
 	struct ipa_endpoint_data endpoint;
diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
index 1a23e6ac5785..3000182ae689 100644
--- a/drivers/net/ipa/ipa_dma.h
+++ b/drivers/net/ipa/ipa_dma.h
@@ -17,7 +17,11 @@
 
 /* Maximum number of channels and event rings supported by the driver */
 #define GSI_CHANNEL_COUNT_MAX	23
+#define BAM_CHANNEL_COUNT_MAX	20
 #define GSI_EVT_RING_COUNT_MAX	24
+#define MAX(a, b)		(((a) > (b)) ? (a) : (b))
+#define IPA_CHANNEL_COUNT_MAX	MAX(GSI_CHANNEL_COUNT_MAX, \
+				    BAM_CHANNEL_COUNT_MAX)
 
 /* Maximum TLV FIFO size for a channel; 64 here is arbitrary (and high) */
 #define GSI_TLV_MAX		64
@@ -119,6 +123,8 @@ struct ipa_channel {
 	struct gsi_ring tre_ring;
 	u32 evt_ring_id;
 
+	struct dma_chan *dma_chan;
+
 	u64 byte_count;			/* total # bytes transferred */
 	u64 trans_count;		/* total # transactions */
 	/* The following counts are used only for TX endpoints */
@@ -154,7 +160,7 @@ struct ipa_dma {
 	u32 irq;
 	u32 channel_count;
 	u32 evt_ring_count;
-	struct ipa_channel channel[GSI_CHANNEL_COUNT_MAX];
+	struct ipa_channel channel[IPA_CHANNEL_COUNT_MAX];
 	struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX];
 	u32 event_bitmap;		/* allocated event rings */
 	u32 modem_channel_bitmap;	/* modem channels to allocate */
@@ -303,7 +309,7 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
 }
 
 /**
- * ipa_dma_init() - Initialize the GSI subsystem
+ * gsi_init()/bam_init() - Initialize the GSI/BAM subsystem
  * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
  * @pdev:	IPA platform device
  * @version:	IPA hardware version (implies GSI version)
@@ -312,14 +318,18 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
  *
  * Return:	0 if successful, or a negative error code
  *
- * Early stage initialization of the GSI subsystem, performing tasks
- * that can be done before the GSI hardware is ready to use.
+ * Early stage initialization of the GSI/BAM subsystem, performing tasks
+ * that can be done before the GSI/BAM hardware is ready to use.
  */
 
 int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
 	     enum ipa_version version, u32 count,
 	     const struct ipa_gsi_endpoint_data *data);
 
+int bam_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
+	     enum ipa_version version, u32 count,
+	     const struct ipa_gsi_endpoint_data *data);
+
 /**
  * ipa_dma_exit() - Exit the DMA subsystem
  * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
diff --git a/drivers/net/ipa/ipa_dma_private.h b/drivers/net/ipa/ipa_dma_private.h
index 40148a551b47..1db53e597a61 100644
--- a/drivers/net/ipa/ipa_dma_private.h
+++ b/drivers/net/ipa/ipa_dma_private.h
@@ -16,6 +16,8 @@ struct ipa_channel;
 
 #define GSI_RING_ELEMENT_SIZE	16	/* bytes; must be a power of 2 */
 
+void gsi_trans_commit(struct ipa_trans *trans, bool ring_db);
+
 /* Return the entry that follows one provided in a transaction pool */
 void *ipa_trans_pool_next(struct ipa_trans_pool *pool, void *element);
 
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index ba06e3ad554c..ea6c4347f2c6 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -60,12 +60,15 @@
  * core.  The GSI implements a set of "channels" used for communication
  * between the AP and the IPA.
  *
- * The IPA layer uses GSI channels to implement its "endpoints".  And while
- * a GSI channel carries data between the AP and the IPA, a pair of IPA
- * endpoints is used to carry traffic between two EEs.  Specifically, the main
- * modem network interface is implemented by two pairs of endpoints:  a TX
+ * The IPA layer uses GSI channels or BAM pipes to implement its "endpoints".
+ * And while a GSI channel carries data between the AP and the IPA, a pair of
+ * IPA endpoints is used to carry traffic between two EEs.  Specifically, the
+ * main modem network interface is implemented by two pairs of endpoints:  a TX
  * endpoint on the AP coupled with an RX endpoint on the modem; and another
  * RX endpoint on the AP receiving data from a TX endpoint on the modem.
+ *
+ * For BAM based transport, a pair of BAM pipes are used for TX and RX between
+ * the AP and IPA, and between IPA and other EEs.
  */
 
 /* The name of the GSI firmware file relative to /lib/firmware */
@@ -716,8 +719,13 @@ static int ipa_probe(struct platform_device *pdev)
 	if (ret)
 		goto err_reg_exit;
 
-	ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
-		       data->endpoint_data);
+	if (IPA_HAS_GSI(ipa->version))
+		ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
+			       data->endpoint_data);
+	else
+		ret = bam_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
+			       data->endpoint_data);
+
 	if (ret)
 		goto err_mem_exit;
 
diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
index 22755f3ce3da..444f44846da8 100644
--- a/drivers/net/ipa/ipa_trans.c
+++ b/drivers/net/ipa/ipa_trans.c
@@ -254,7 +254,7 @@ struct ipa_trans *ipa_channel_trans_complete(struct ipa_channel *channel)
 }
 
 /* Move a transaction from the allocated list to the pending list */
-static void ipa_trans_move_pending(struct ipa_trans *trans)
+void ipa_trans_move_pending(struct ipa_trans *trans)
 {
 	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
 	struct ipa_trans_info *trans_info = &channel->trans_info;
@@ -539,7 +539,7 @@ static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
  * pending list.  Finally, updates the channel ring pointer and optionally
  * rings the doorbell.
  */
-static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
+void gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
 {
 	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
 	struct gsi_ring *ring = &channel->tre_ring;
@@ -604,9 +604,9 @@ static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
 /* Commit a GSI transaction */
 void ipa_trans_commit(struct ipa_trans *trans, bool ring_db)
 {
-	if (trans->used)
-		__gsi_trans_commit(trans, ring_db);
-	else
+	if (trans->used)
+		trans->dma_subsys->trans_commit(trans, ring_db);
+	else
 		ipa_trans_free(trans);
 }
 
@@ -618,7 +618,7 @@ void ipa_trans_commit_wait(struct ipa_trans *trans)
 
 	refcount_inc(&trans->refcount);
 
-	__gsi_trans_commit(trans, true);
+	trans->dma_subsys->trans_commit(trans, true);
 
 	wait_for_completion(&trans->completion);
 
@@ -638,7 +638,7 @@ int ipa_trans_commit_wait_timeout(struct ipa_trans *trans,
 
 	refcount_inc(&trans->refcount);
 
-	__gsi_trans_commit(trans, true);
+	trans->dma_subsys->trans_commit(trans, true);
 
 	remaining = wait_for_completion_timeout(&trans->completion,
 						timeout_jiffies);
diff --git a/drivers/net/ipa/ipa_trans.h b/drivers/net/ipa/ipa_trans.h
index b93342414360..5f41e3e6f92a 100644
--- a/drivers/net/ipa/ipa_trans.h
+++ b/drivers/net/ipa/ipa_trans.h
@@ -10,6 +10,7 @@
 #include <linux/refcount.h>
 #include <linux/completion.h>
 #include <linux/dma-direction.h>
+#include <linux/dmaengine.h>
 
 #include "ipa_cmd.h"
 
@@ -61,6 +62,7 @@ struct ipa_trans {
 	struct scatterlist *sgl;
 	struct ipa_cmd_info *info;	/* array of entries, or null */
 	enum dma_data_direction direction;
+	dma_cookie_t cookie;
 
 	refcount_t refcount;
 	struct completion completion;
@@ -149,6 +151,8 @@ struct ipa_trans *ipa_channel_trans_alloc(struct ipa_dma *dma_subsys, u32 channe
  */
 void ipa_trans_free(struct ipa_trans *trans);
 
+void ipa_trans_move_pending(struct ipa_trans *trans);
+
 /**
  * ipa_trans_cmd_add() - Add an immediate command to a transaction
  * @trans:	Transaction
-- 
2.33.0



* [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (8 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints Sireesh Kodali
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, Vladimir Lypak, David S. Miller, Jakub Kicinski

IPA v2.x commands differ from those of later IPA revisions mostly
because IPA v2.x is 32 bit. There are also other minor differences in
some of the command structs.

The tables again differ only because IPA v2.x is 32 bit.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
---
 drivers/net/ipa/ipa.h       |   2 +-
 drivers/net/ipa/ipa_cmd.c   | 138 ++++++++++++++++++++++++++----------
 drivers/net/ipa/ipa_table.c |  29 ++++++--
 drivers/net/ipa/ipa_table.h |   2 +-
 4 files changed, 125 insertions(+), 46 deletions(-)

diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
index 80a83ac45729..63b2b368b588 100644
--- a/drivers/net/ipa/ipa.h
+++ b/drivers/net/ipa/ipa.h
@@ -81,7 +81,7 @@ struct ipa {
 	struct ipa_power *power;
 
 	dma_addr_t table_addr;
-	__le64 *table_virt;
+	void *table_virt;
 
 	struct ipa_interrupt *interrupt;
 	bool uc_powered;
diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
index 7a104540dc26..58dae4b3bf87 100644
--- a/drivers/net/ipa/ipa_cmd.c
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -25,8 +25,8 @@
  * An immediate command is generally used to request the IPA do something
  * other than data transfer to another endpoint.
  *
- * Immediate commands are represented by GSI transactions just like other
- * transfer requests, represented by a single GSI TRE.  Each immediate
+ * Immediate commands on IPA v3 are represented by GSI transactions just like
+ * other transfer requests, represented by a single GSI TRE.  Each immediate
  * command has a well-defined format, having a payload of a known length.
  * This allows the transfer element's length field to be used to hold an
  * immediate command's opcode.  The payload for a command resides in DRAM
@@ -45,10 +45,16 @@ enum pipeline_clear_options {
 
 /* IPA_CMD_IP_V{4,6}_{FILTER,ROUTING}_INIT */
 
-struct ipa_cmd_hw_ip_fltrt_init {
-	__le64 hash_rules_addr;
-	__le64 flags;
-	__le64 nhash_rules_addr;
+union ipa_cmd_hw_ip_fltrt_init {
+	struct {
+		__le32 nhash_rules_addr;
+		__le32 flags;
+	} v2;
+	struct {
+		__le64 hash_rules_addr;
+		__le64 flags;
+		__le64 nhash_rules_addr;
+	} v3;
 };
 
 /* Field masks for ipa_cmd_hw_ip_fltrt_init structure fields */
@@ -56,13 +62,23 @@ struct ipa_cmd_hw_ip_fltrt_init {
 #define IP_FLTRT_FLAGS_HASH_ADDR_FMASK			GENMASK_ULL(27, 12)
 #define IP_FLTRT_FLAGS_NHASH_SIZE_FMASK			GENMASK_ULL(39, 28)
 #define IP_FLTRT_FLAGS_NHASH_ADDR_FMASK			GENMASK_ULL(55, 40)
+#define IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(11, 0)
+#define IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(27, 12)
+#define IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(15, 0)
+#define IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(31, 16)
 
 /* IPA_CMD_HDR_INIT_LOCAL */
 
-struct ipa_cmd_hw_hdr_init_local {
-	__le64 hdr_table_addr;
-	__le32 flags;
-	__le32 reserved;
+union ipa_cmd_hw_hdr_init_local {
+	struct {
+		__le32 hdr_table_addr;
+		__le32 flags;
+	} v2;
+	struct {
+		__le64 hdr_table_addr;
+		__le32 flags;
+		__le32 reserved;
+	} v3;
 };
 
 /* Field masks for ipa_cmd_hw_hdr_init_local structure fields */
@@ -109,14 +125,37 @@ struct ipa_cmd_ip_packet_init {
 #define DMA_SHARED_MEM_OPCODE_SKIP_CLEAR_FMASK		GENMASK(8, 8)
 #define DMA_SHARED_MEM_OPCODE_CLEAR_OPTION_FMASK	GENMASK(10, 9)
 
-struct ipa_cmd_hw_dma_mem_mem {
-	__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
-	__le16 size;
-	__le16 local_addr;
-	__le16 flags;
-	__le64 system_addr;
+union ipa_cmd_hw_dma_mem_mem {
+	struct {
+		__le16 reserved;
+		__le16 size;
+		__le32 system_addr;
+		__le16 local_addr;
+		__le16 flags; /* the least significant 14 bits are reserved */
+		__le32 padding;
+	} v2;
+	struct {
+		__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
+		__le16 size;
+		__le16 local_addr;
+		__le16 flags;
+		__le64 system_addr;
+	} v3;
 };
 
+#define CMD_FIELD(_version, _payload, _field)				\
+	*(((_version) > IPA_VERSION_2_6L) ?		    		\
+	  &((_payload)->v3._field) :			    		\
+	  &((_payload)->v2._field))
+
+#define SET_DMA_FIELD(_ver, _payload, _field, _value)			\
+	do {								\
+		if ((_ver) >= IPA_VERSION_3_0)				\
+			(_payload)->v3._field = cpu_to_le64(_value);	\
+		else							\
+			(_payload)->v2._field = cpu_to_le32(_value);	\
+	} while (0)
+
 /* Flag allowing atomic clear of target region after reading data (v4.0+)*/
 #define DMA_SHARED_MEM_CLEAR_AFTER_READ			GENMASK(15, 15)
 
@@ -132,15 +171,16 @@ struct ipa_cmd_ip_packet_tag_status {
 	__le64 tag;
 };
 
-#define IP_PACKET_TAG_STATUS_TAG_FMASK			GENMASK_ULL(63, 16)
+#define IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 32)
+#define IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 16)
 
 /* Immediate command payload */
 union ipa_cmd_payload {
-	struct ipa_cmd_hw_ip_fltrt_init table_init;
-	struct ipa_cmd_hw_hdr_init_local hdr_init_local;
+	union ipa_cmd_hw_ip_fltrt_init table_init;
+	union ipa_cmd_hw_hdr_init_local hdr_init_local;
 	struct ipa_cmd_register_write register_write;
 	struct ipa_cmd_ip_packet_init ip_packet_init;
-	struct ipa_cmd_hw_dma_mem_mem dma_shared_mem;
+	union ipa_cmd_hw_dma_mem_mem dma_shared_mem;
 	struct ipa_cmd_ip_packet_tag_status ip_packet_tag_status;
 };
 
@@ -154,6 +194,7 @@ static void ipa_cmd_validate_build(void)
 	 * of entries.
 	 */
 #define TABLE_SIZE	(TABLE_COUNT_MAX * sizeof(__le64))
+	/* TODO: adjust for IPA v2.x, where table entries are 32 bits wide */
 #define TABLE_COUNT_MAX	max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX)
 	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK));
 	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
@@ -405,15 +446,26 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
 {
 	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum dma_data_direction direction = DMA_TO_DEVICE;
-	struct ipa_cmd_hw_ip_fltrt_init *payload;
+	union ipa_cmd_hw_ip_fltrt_init *payload;
+	enum ipa_version version = ipa->version;
 	union ipa_cmd_payload *cmd_payload;
 	dma_addr_t payload_addr;
 	u64 val;
 
 	/* Record the non-hash table offset and size */
 	offset += ipa->mem_offset;
-	val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
-	val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
+
+	if (version >= IPA_VERSION_3_0) {
+		val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
+		val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
+	} else if (opcode == IPA_CMD_IP_V4_FILTER_INIT ||
+		   opcode == IPA_CMD_IP_V4_ROUTING_INIT) {
+		val = u64_encode_bits(offset, IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK);
+		val |= u64_encode_bits(size, IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK);
+	} else { /* IPA <= v2.6L IPv6 */
+		val = u64_encode_bits(offset, IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK);
+		val |= u64_encode_bits(size, IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK);
+	}
 
 	/* The hash table offset and address are zero if its size is 0 */
 	if (hash_size) {
@@ -429,10 +481,10 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
 	payload = &cmd_payload->table_init;
 
 	/* Fill in all offsets and sizes and the non-hash table address */
-	if (hash_size)
-		payload->hash_rules_addr = cpu_to_le64(hash_addr);
-	payload->flags = cpu_to_le64(val);
-	payload->nhash_rules_addr = cpu_to_le64(addr);
+	if (hash_size && version >= IPA_VERSION_3_0)
+		payload->v3.hash_rules_addr = cpu_to_le64(hash_addr);
+	SET_DMA_FIELD(version, payload, flags, val);
+	SET_DMA_FIELD(version, payload, nhash_rules_addr, addr);
 
 	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
@@ -445,7 +497,7 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
 	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL;
 	enum dma_data_direction direction = DMA_TO_DEVICE;
-	struct ipa_cmd_hw_hdr_init_local *payload;
+	union ipa_cmd_hw_hdr_init_local *payload;
 	union ipa_cmd_payload *cmd_payload;
 	dma_addr_t payload_addr;
 	u32 flags;
@@ -460,10 +512,10 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
 	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
 	payload = &cmd_payload->hdr_init_local;
 
-	payload->hdr_table_addr = cpu_to_le64(addr);
+	SET_DMA_FIELD(ipa->version, payload, hdr_table_addr, addr);
 	flags = u32_encode_bits(size, HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
 	flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
-	payload->flags = cpu_to_le32(flags);
+	CMD_FIELD(ipa->version, payload, flags) = cpu_to_le32(flags);
 
 	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
@@ -509,8 +561,11 @@ void ipa_cmd_register_write_add(struct ipa_trans *trans, u32 offset, u32 value,
 
 	} else {
 		flags = 0;	/* SKIP_CLEAR flag is always 0 */
-		options = u16_encode_bits(clear_option,
-					  REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
+		if (ipa->version > IPA_VERSION_2_6L)
+			options = u16_encode_bits(clear_option,
+					REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
+		else
+			options = 0;
 	}
 
 	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
@@ -552,7 +607,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
 {
 	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM;
-	struct ipa_cmd_hw_dma_mem_mem *payload;
+	enum ipa_version version = ipa->version;
+	union ipa_cmd_hw_dma_mem_mem *payload;
 	union ipa_cmd_payload *cmd_payload;
 	enum dma_data_direction direction;
 	dma_addr_t payload_addr;
@@ -571,8 +627,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
 	/* payload->clear_after_read was reserved prior to IPA v4.0.  It's
 	 * never needed for current code, so it's 0 regardless of version.
 	 */
-	payload->size = cpu_to_le16(size);
-	payload->local_addr = cpu_to_le16(offset);
+	CMD_FIELD(version, payload, size) = cpu_to_le16(size);
+	CMD_FIELD(version, payload, local_addr) = cpu_to_le16(offset);
 	/* payload->flags:
 	 *   direction:		0 = write to IPA, 1 read from IPA
 	 * Starting at v4.0 these are reserved; either way, all zero:
@@ -582,8 +638,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
 	 * since both values are 0 we won't bother OR'ing them in.
 	 */
 	flags = toward_ipa ? 0 : DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK;
-	payload->flags = cpu_to_le16(flags);
-	payload->system_addr = cpu_to_le64(addr);
+	CMD_FIELD(version, payload, flags) = cpu_to_le16(flags);
+	SET_DMA_FIELD(version, payload, system_addr, addr);
 
 	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
 
@@ -599,11 +655,17 @@ static void ipa_cmd_ip_tag_status_add(struct ipa_trans *trans)
 	struct ipa_cmd_ip_packet_tag_status *payload;
 	union ipa_cmd_payload *cmd_payload;
 	dma_addr_t payload_addr;
+	u64 tag_mask;
+
+	if (trans->dma_subsys->version <= IPA_VERSION_2_6L)
+		tag_mask = IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK;
+	else
+		tag_mask = IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK;
 
 	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
 	payload = &cmd_payload->ip_packet_tag_status;
 
-	payload->tag = le64_encode_bits(0, IP_PACKET_TAG_STATUS_TAG_FMASK);
+	payload->tag = le64_encode_bits(0, tag_mask);
 
 	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
index d197959cc032..459fb4830244 100644
--- a/drivers/net/ipa/ipa_table.c
+++ b/drivers/net/ipa/ipa_table.c
@@ -8,6 +8,7 @@
 #include <linux/kernel.h>
 #include <linux/bits.h>
 #include <linux/bitops.h>
+#include <linux/module.h>
 #include <linux/bitfield.h>
 #include <linux/io.h>
 #include <linux/build_bug.h>
@@ -561,6 +562,19 @@ void ipa_table_config(struct ipa *ipa)
 	ipa_route_config(ipa, true);
 }
 
+static inline void *ipa_table_write(enum ipa_version version,
+				   void *virt, u64 value)
+{
+	if (IPA_IS_64BIT(version)) {
+		__le64 *ptr = virt;
+		*ptr = cpu_to_le64(value);
+	} else {
+		__le32 *ptr = virt;
+		*ptr = cpu_to_le32(value);
+	}
+	return virt + IPA_TABLE_ENTRY_SIZE(version);
+}
+
 /*
  * Initialize a coherent DMA allocation containing initialized filter and
  * route table data.  This is used when initializing or resetting the IPA
@@ -602,10 +616,11 @@ void ipa_table_config(struct ipa *ipa)
 int ipa_table_init(struct ipa *ipa)
 {
 	u32 count = max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX);
+	enum ipa_version version = ipa->version;
 	struct device *dev = &ipa->pdev->dev;
+	u64 filter_map = ipa->filter_map << 1;
 	dma_addr_t addr;
-	__le64 le_addr;
-	__le64 *virt;
+	void *virt;
 	size_t size;
 
 	ipa_table_validate_build();
@@ -626,19 +641,21 @@ int ipa_table_init(struct ipa *ipa)
 	ipa->table_addr = addr;
 
 	/* First slot is the zero rule */
-	*virt++ = 0;
+	virt = ipa_table_write(version, virt, 0);
 
 	/* Next is the filter table bitmap.  The "soft" bitmap value
 	 * must be converted to the hardware representation by shifting
 	 * it left one position.  (Bit 0 repesents global filtering,
 	 * which is possible but not used.)
 	 */
-	*virt++ = cpu_to_le64((u64)ipa->filter_map << 1);
+	if (version <= IPA_VERSION_2_6L)
+		filter_map |= 1;
+
+	virt = ipa_table_write(version, virt, filter_map);
 
 	/* All the rest contain the DMA address of the zero rule */
-	le_addr = cpu_to_le64(addr);
 	while (count--)
-		*virt++ = le_addr;
+		virt = ipa_table_write(version, virt, addr);
 
 	return 0;
 }
diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
index 78a168ce6558..6e12fc49e45b 100644
--- a/drivers/net/ipa/ipa_table.h
+++ b/drivers/net/ipa/ipa_table.h
@@ -43,7 +43,7 @@ bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask);
  */
 static inline bool ipa_table_hash_support(struct ipa *ipa)
 {
-	return ipa->version != IPA_VERSION_4_2;
+	return ipa->version != IPA_VERSION_4_2 && ipa->version > IPA_VERSION_2_6L;
 }
 
 /**
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (9 preceding siblings ...)
  2021-09-20  3:08 ` [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map Sireesh Kodali
                   ` (6 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

IPA v2.x endpoints are largely the same as the endpoints on later
versions. The biggest change is the addition of the "skip_config" flag.
The only other difference is the backlog limit, which is a fixed number
on IPA v2.6L.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_endpoint.c | 65 ++++++++++++++++++++++------------
 1 file changed, 43 insertions(+), 22 deletions(-)
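[Editor's note: the IPA v3+ branch kept by this patch (read FLAVOR_0, derive
rx/tx endpoint masks) can be sketched as self-contained C. The field
positions below are placeholders invented for the example, not the driver's
real IPA_PROD_LOWEST_FMASK and friends; on IPA v2.x the computation is
skipped entirely and ipa->available is taken from ipa->filter_map.]

```c
/* Standalone sketch of the endpoint-mask derivation for IPA v3+.
 * NOTE: the register field layout below is a placeholder for
 * illustration; the real driver uses IPA_PROD_LOWEST_FMASK etc.
 */
#include <assert.h>
#include <stdint.h>

#define GENMASK(h, l)	((~0u << (l)) & (~0u >> (31 - (h))))
/* extract a field by dividing by the mask's lowest set bit */
#define FIELD_GET(mask, val)	(((val) & (mask)) / ((mask) & -(mask)))

#define MAX_CONS_PIPES	GENMASK(7, 0)	/* placeholder position */
#define MAX_PROD_PIPES	GENMASK(15, 8)	/* placeholder position */
#define PROD_LOWEST	GENMASK(23, 16)	/* placeholder position */

static uint32_t ipa_available_mask(uint32_t flavor)
{
	uint32_t rx_base = FIELD_GET(PROD_LOWEST, flavor);
	uint32_t rx_max  = rx_base + FIELD_GET(MAX_PROD_PIPES, flavor);
	uint32_t tx_max  = FIELD_GET(MAX_CONS_PIPES, flavor);

	/* RX endpoints are IPA producers, TX endpoints IPA consumers */
	return GENMASK(rx_max - 1, rx_base) | GENMASK(tx_max - 1, 0);
}
```

With, say, 10 consumer pipes and producer pipes 10..15 this yields a
contiguous 16-bit available mask, matching the shape of the hunk above.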

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 7d3ab61cd890..024cf3a0ded0 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -360,8 +360,10 @@ void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable)
 {
 	u32 endpoint_id;
 
-	/* DELAY mode doesn't work correctly on IPA v4.2 */
-	if (ipa->version == IPA_VERSION_4_2)
+	/* DELAY mode doesn't work correctly on IPA v4.2
+	 * Pausing is not supported on IPA v2.6L
+	 */
+	if (ipa->version == IPA_VERSION_4_2 || ipa->version <= IPA_VERSION_2_6L)
 		return;
 
 	for (endpoint_id = 0; endpoint_id < IPA_ENDPOINT_MAX; endpoint_id++) {
@@ -383,6 +385,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 {
 	u32 initialized = ipa->initialized;
 	struct ipa_trans *trans;
+	u32 value = 0, value_mask = ~0;
 	u32 count;
 
 	/* We need one command per modem TX endpoint.  We can get an upper
@@ -398,6 +401,11 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 		return -EBUSY;
 	}
 
+	if (ipa->version <= IPA_VERSION_2_6L) {
+		value = aggr_force_close_fmask(true);
+		value_mask = aggr_force_close_fmask(true);
+	}
+
 	while (initialized) {
 		u32 endpoint_id = __ffs(initialized);
 		struct ipa_endpoint *endpoint;
@@ -416,7 +424,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 		 * means status is disabled on the endpoint, and as a
 		 * result all other fields in the register are ignored.
 		 */
-		ipa_cmd_register_write_add(trans, offset, 0, ~0, false);
+		ipa_cmd_register_write_add(trans, offset, value, value_mask, false);
 	}
 
 	ipa_cmd_pipeline_clear_add(trans);
@@ -1531,8 +1539,10 @@ static void ipa_endpoint_program(struct ipa_endpoint *endpoint)
 	ipa_endpoint_init_mode(endpoint);
 	ipa_endpoint_init_aggr(endpoint);
 	ipa_endpoint_init_deaggr(endpoint);
-	ipa_endpoint_init_rsrc_grp(endpoint);
-	ipa_endpoint_init_seq(endpoint);
+	if (endpoint->ipa->version > IPA_VERSION_2_6L) {
+		ipa_endpoint_init_rsrc_grp(endpoint);
+		ipa_endpoint_init_seq(endpoint);
+	}
 	ipa_endpoint_status(endpoint);
 }
 
@@ -1592,7 +1602,6 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
 {
 	struct device *dev = &endpoint->ipa->pdev->dev;
 	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
-	bool stop_channel;
 	int ret;
 
 	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
@@ -1613,7 +1622,6 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
 {
 	struct device *dev = &endpoint->ipa->pdev->dev;
 	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
-	bool start_channel;
 	int ret;
 
 	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
@@ -1750,23 +1758,33 @@ int ipa_endpoint_config(struct ipa *ipa)
 	/* Find out about the endpoints supplied by the hardware, and ensure
 	 * the highest one doesn't exceed the number we support.
 	 */
-	val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
-
-	/* Our RX is an IPA producer */
-	rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
-	max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
-	if (max > IPA_ENDPOINT_MAX) {
-		dev_err(dev, "too many endpoints (%u > %u)\n",
-			max, IPA_ENDPOINT_MAX);
-		return -EINVAL;
-	}
-	rx_mask = GENMASK(max - 1, rx_base);
+	if (ipa->version <= IPA_VERSION_2_6L) {
+		/* FIXME: this read result is not used anywhere */
+		if (ipa->version == IPA_VERSION_2_6L)
+			val = ioread32(ipa->reg_virt +
+					IPA_REG_V2_ENABLED_PIPES_OFFSET);
+		/* IPA v2.6L supports 20 pipes */
+		ipa->available = ipa->filter_map;
+		return 0;
+	} else {
+		val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
+
+		/* Our RX is an IPA producer */
+		rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
+		max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
+		if (max > IPA_ENDPOINT_MAX) {
+			dev_err(dev, "too many endpoints (%u > %u)\n",
+					max, IPA_ENDPOINT_MAX);
+			return -EINVAL;
+		}
+		rx_mask = GENMASK(max - 1, rx_base);
 
-	/* Our TX is an IPA consumer */
-	max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
-	tx_mask = GENMASK(max - 1, 0);
+		/* Our TX is an IPA consumer */
+		max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
+		tx_mask = GENMASK(max - 1, 0);
 
-	ipa->available = rx_mask | tx_mask;
+		ipa->available = rx_mask | tx_mask;
+	}
 
 	/* Check for initialized endpoints not supported by the hardware */
 	if (ipa->initialized & ~ipa->available) {
@@ -1865,6 +1883,9 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
 			filter_map |= BIT(data->endpoint_id);
 	}
 
+	if (ipa->version <= IPA_VERSION_2_6L)
+		filter_map = 0x1fffff;
+
 	if (!ipa_filter_map_valid(ipa, filter_map))
 		goto err_endpoint_exit;
 
-- 
2.33.0



* [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (10 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface Sireesh Kodali
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

IPA v2.6L has an extra region to handle compression/decompression
acceleration. This region is used by some modems during modem init.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_mem.c | 36 ++++++++++++++++++++++++++++++------
 drivers/net/ipa/ipa_mem.h |  5 ++++-
 2 files changed, 34 insertions(+), 7 deletions(-)
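[Editor's note: a minimal sketch of the shared-memory size decoding changed
above. On IPA v2.0 through v2.6L the BADDR/SIZE register fields are in
8-byte units; later versions report bytes directly. The field layout
assumed here (size in the low half-word, base address in the high
half-word) is for illustration only, not necessarily the hardware's.]

```c
/* Sketch: decoding a SHARED_MEM_SIZE-style register value.
 * Field positions are assumptions made for this example.
 */
#include <assert.h>
#include <stdint.h>

struct shared_mem { uint32_t offset; uint32_t size; };

static struct shared_mem decode_shared_mem(uint32_t val, int is_v2)
{
	struct shared_mem m;
	uint32_t baddr = val >> 16;	/* assumed field position */
	uint32_t size  = val & 0xFFFF;	/* assumed field position */

	if (is_v2) {		/* v2.0..v2.6L: fields in 8-byte units */
		m.offset = 8 * baddr;
		m.size   = 8 * size;
	} else {		/* v3.0+: fields already in bytes */
		m.offset = baddr;
		m.size   = size;
	}
	return m;
}
```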

diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
index 8acc88070a6f..bfcdc7e08de2 100644
--- a/drivers/net/ipa/ipa_mem.c
+++ b/drivers/net/ipa/ipa_mem.c
@@ -84,7 +84,7 @@ int ipa_mem_setup(struct ipa *ipa)
 	/* Get a transaction to define the header memory region and to zero
 	 * the processing context and modem memory regions.
 	 */
-	trans = ipa_cmd_trans_alloc(ipa, 4);
+	trans = ipa_cmd_trans_alloc(ipa, 5);
 	if (!trans) {
 		dev_err(&ipa->pdev->dev, "no transaction for memory setup\n");
 		return -EBUSY;
@@ -107,8 +107,14 @@ int ipa_mem_setup(struct ipa *ipa)
 	ipa_mem_zero_region_add(trans, IPA_MEM_AP_PROC_CTX);
 	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM);
 
+	ipa_mem_zero_region_add(trans, IPA_MEM_ZIP);
+
 	ipa_trans_commit_wait(trans);
 
+	/* On IPA version <=2.6L (except 2.5) there is no PROC_CTX.  */
+	if (ipa->version != IPA_VERSION_2_5 && ipa->version <= IPA_VERSION_2_6L)
+		return 0;
+
 	/* Tell the hardware where the processing context area is located */
 	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
 	offset = ipa->mem_offset + mem->offset;
@@ -147,6 +153,11 @@ static bool ipa_mem_id_valid(struct ipa *ipa, enum ipa_mem_id mem_id)
 	case IPA_MEM_END_MARKER:	/* pseudo region */
 		break;
 
+	case IPA_MEM_ZIP:
+		if (version == IPA_VERSION_2_6L)
+			return true;
+		break;
+
 	case IPA_MEM_STATS_TETHERING:
 	case IPA_MEM_STATS_DROP:
 		if (version < IPA_VERSION_4_0)
@@ -319,10 +330,15 @@ int ipa_mem_config(struct ipa *ipa)
 	/* Check the advertised location and size of the shared memory area */
 	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
 
-	/* The fields in the register are in 8 byte units */
-	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
-	/* Make sure the end is within the region's mapped space */
-	mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
+	if (IPA_VERSION_RANGE(ipa->version, 2_0, 2_6L)) {
+		/* The fields in the register are in 8 byte units */
+		ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
+		/* Make sure the end is within the region's mapped space */
+		mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
+	} else {
+		ipa->mem_offset = u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
+		mem_size = u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
+	}
 
 	/* If the sizes don't match, issue a warning */
 	if (ipa->mem_offset + mem_size < ipa->mem_size) {
@@ -564,6 +580,10 @@ static int ipa_smem_init(struct ipa *ipa, u32 item, size_t size)
 		return -EINVAL;
 	}
 
+	/* IPA v2.6L does not use IOMMU */
+	if (ipa->version <= IPA_VERSION_2_6L)
+		return 0;
+
 	domain = iommu_get_domain_for_dev(dev);
 	if (!domain) {
 		dev_err(dev, "no IOMMU domain found for SMEM\n");
@@ -591,6 +611,9 @@ static void ipa_smem_exit(struct ipa *ipa)
 	struct device *dev = &ipa->pdev->dev;
 	struct iommu_domain *domain;
 
+	if (ipa->version <= IPA_VERSION_2_6L)
+		return;
+
 	domain = iommu_get_domain_for_dev(dev);
 	if (domain) {
 		size_t size;
@@ -622,7 +645,8 @@ int ipa_mem_init(struct ipa *ipa, const struct ipa_mem_data *mem_data)
 	ipa->mem_count = mem_data->local_count;
 	ipa->mem = mem_data->local;
 
-	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64));
+	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, IPA_IS_64BIT(ipa->version) ?
+					DMA_BIT_MASK(64) : DMA_BIT_MASK(32));
 	if (ret) {
 		dev_err(dev, "error %d setting DMA mask\n", ret);
 		return ret;
diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
index 570bfdd99bff..be91cb38b6a8 100644
--- a/drivers/net/ipa/ipa_mem.h
+++ b/drivers/net/ipa/ipa_mem.h
@@ -47,8 +47,10 @@ enum ipa_mem_id {
 	IPA_MEM_UC_INFO,		/* 0 canaries */
 	IPA_MEM_V4_FILTER_HASHED,	/* 2 canaries */
 	IPA_MEM_V4_FILTER,		/* 2 canaries */
+	IPA_MEM_V4_FILTER_AP,		/* 2 canaries (IPA v2.0) */
 	IPA_MEM_V6_FILTER_HASHED,	/* 2 canaries */
 	IPA_MEM_V6_FILTER,		/* 2 canaries */
+	IPA_MEM_V6_FILTER_AP,		/* 0 canaries (IPA v2.0) */
 	IPA_MEM_V4_ROUTE_HASHED,	/* 2 canaries */
 	IPA_MEM_V4_ROUTE,		/* 2 canaries */
 	IPA_MEM_V6_ROUTE_HASHED,	/* 2 canaries */
@@ -57,7 +59,8 @@ enum ipa_mem_id {
 	IPA_MEM_AP_HEADER,		/* 0 canaries, optional */
 	IPA_MEM_MODEM_PROC_CTX,		/* 2 canaries */
 	IPA_MEM_AP_PROC_CTX,		/* 0 canaries */
-	IPA_MEM_MODEM,			/* 0/2 canaries */
+	IPA_MEM_ZIP,			/* 1 canary (IPA v2.6L) */
+	IPA_MEM_MODEM,			/* 0-2 canaries */
 	IPA_MEM_UC_EVENT_RING,		/* 1 canary, optional */
 	IPA_MEM_PDN_CONFIG,		/* 0/2 canaries (IPA v4.0+) */
 	IPA_MEM_STATS_QUOTA_MODEM,	/* 2/4 canaries (IPA v4.0+) */
-- 
2.33.0



* [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (11 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller Sireesh Kodali
                   ` (4 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, Vladimir Lypak, David S. Miller, Jakub Kicinski

On IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETED, so we have
to rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know when it is
ready. Add a function that marks uc_ready = true; it is called by
ipa_uc.c when IPA_UC_RESPONSE_INIT_COMPLETED is handled.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
---
 drivers/net/ipa/ipa_qmi.c | 27 ++++++++++++++++++++++++++-
 drivers/net/ipa/ipa_qmi.h | 10 ++++++++++
 2 files changed, 36 insertions(+), 1 deletion(-)
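[Editor's note: the readiness condition this patch introduces can be
summarized as a sketch. This is a deliberate simplification of the driver's
state tracking, not the actual ipa_qmi_ready() logic.]

```c
/* Simplified sketch of the "ready" decision: on IPA v2.x the uc's
 * INIT_COMPLETED response stands in for the modem's
 * DRIVER_INIT_COMPLETE request, which never arrives.
 */
#include <assert.h>
#include <stdbool.h>

struct qmi_state {
	bool modem_ready;	/* DRIVER_INIT_COMPLETE received (IPA v3+) */
	bool uc_ready;		/* IPA_UC_RESPONSE_INIT_COMPLETED seen (v2.x) */
	bool is_v2;
};

static bool ipa_qmi_ready_sketch(const struct qmi_state *s)
{
	return s->is_v2 ? s->uc_ready : s->modem_ready;
}
```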

diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
index 7e2fe701cc4d..876e2a004f70 100644
--- a/drivers/net/ipa/ipa_qmi.c
+++ b/drivers/net/ipa/ipa_qmi.c
@@ -68,6 +68,11 @@
  * - The INDICATION_REGISTER request and INIT_COMPLETE indication are
  *   optional for non-initial modem boots, and have no bearing on the
  *   determination of when things are "ready"
+ *
+ * Note that on IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETE
+ * request. Thus, we rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know
+ * when the uc is ready. The rest of the process is the same on IPA v2.x and
+ * later IPA versions
  */
 
 #define IPA_HOST_SERVICE_SVC_ID		0x31
@@ -345,7 +350,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
 			req.hdr_proc_ctx_tbl_info.start + mem->size - 1;
 	}
 
-	/* Nothing to report for the compression table (zip_tbl_info) */
+	mem = &ipa->mem[IPA_MEM_ZIP];
+	if (mem->size) {
+		req.zip_tbl_info_valid = 1;
+		req.zip_tbl_info.start = ipa->mem_offset + mem->offset;
+		req.zip_tbl_info.end = ipa->mem_offset + mem->size - 1;
+	}
 
 	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE_HASHED);
 	if (mem->size) {
@@ -525,6 +535,21 @@ int ipa_qmi_setup(struct ipa *ipa)
 	return ret;
 }
 
+/* With IPA v2 the modem is not required to send a DRIVER_INIT_COMPLETE request.
+ * We start operation as soon as IPA_UC_RESPONSE_INIT_COMPLETED irq is triggered.
+ */
+void ipa_qmi_signal_uc_loaded(struct ipa *ipa)
+{
+	struct ipa_qmi *ipa_qmi = &ipa->qmi;
+
+	/* This is needed only on IPA 2.x */
+	if (ipa->version > IPA_VERSION_2_6L)
+		return;
+
+	ipa_qmi->uc_ready = true;
+	ipa_qmi_ready(ipa_qmi);
+}
+
 /* Tear down IPA QMI handles */
 void ipa_qmi_teardown(struct ipa *ipa)
 {
diff --git a/drivers/net/ipa/ipa_qmi.h b/drivers/net/ipa/ipa_qmi.h
index 856ef629ccc8..4962d88b0d22 100644
--- a/drivers/net/ipa/ipa_qmi.h
+++ b/drivers/net/ipa/ipa_qmi.h
@@ -55,6 +55,16 @@ struct ipa_qmi {
  */
 int ipa_qmi_setup(struct ipa *ipa);
 
+/**
+ * ipa_qmi_signal_uc_loaded() - Signal that the UC has been loaded
+ * @ipa:		IPA pointer
+ *
+ * This is called when the uc indicates that it is ready. It exists because
+ * on IPA v2.x the modem does not send a DRIVER_INIT_COMPLETED. Thus we rely
+ * on the uc's INIT_COMPLETED response to know that it was initialized.
+ */
+void ipa_qmi_signal_uc_loaded(struct ipa *ipa);
+
 /**
  * ipa_qmi_teardown() - Tear down IPA QMI handles
  * @ipa:		IPA pointer
-- 
2.33.0



* [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (12 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support Sireesh Kodali
                   ` (3 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

There are some minor differences between IPA v2.x and later revisions
with regard to the uc. The biggest difference is the shared memory's
layout. There are also some changes to the command numbers, but these
are not too important, since the mainline driver doesn't use them.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_uc.c | 96 ++++++++++++++++++++++++++--------------
 1 file changed, 63 insertions(+), 33 deletions(-)
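[Editor's note: to make the layout difference concrete, here is a standalone
mock of the two variants from the union this patch introduces. Dropping
command_param_hi shifts every later field up by 4 bytes; plain integer
types stand in for the kernel's u8/__le32.]

```c
/* Mock of the v2 vs v3 uc shared-memory layouts, enough to show the
 * 4-byte shift caused by command_param_hi being absent on v2.
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct uc_mem_v2 {
	uint8_t  command;
	uint8_t  reserved0[3];
	uint32_t command_param;
	uint8_t  response;
	uint8_t  reserved1[3];
	uint32_t response_param;
};

struct uc_mem_v3 {
	uint8_t  command;
	uint8_t  reserved0[3];
	uint32_t command_param;
	uint32_t command_param_hi;	/* IPA v3.0+ only */
	uint8_t  response;
	uint8_t  reserved1[3];
	uint32_t response_param;
};
```

This is exactly why the UC_FIELD() accessor in the patch must pick the
right union member by version before touching any field past
command_param.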

diff --git a/drivers/net/ipa/ipa_uc.c b/drivers/net/ipa/ipa_uc.c
index 856e55a080a7..bf6b25098301 100644
--- a/drivers/net/ipa/ipa_uc.c
+++ b/drivers/net/ipa/ipa_uc.c
@@ -39,11 +39,12 @@
 #define IPA_SEND_DELAY		100	/* microseconds */
 
 /**
- * struct ipa_uc_mem_area - AP/microcontroller shared memory area
+ * union ipa_uc_mem_area - AP/microcontroller shared memory area
  * @command:		command code (AP->microcontroller)
  * @reserved0:		reserved bytes; avoid reading or writing
  * @command_param:	low 32 bits of command parameter (AP->microcontroller)
  * @command_param_hi:	high 32 bits of command parameter (AP->microcontroller)
+ *			Available since IPA v3.0
  *
  * @response:		response code (microcontroller->AP)
  * @reserved1:		reserved bytes; avoid reading or writing
@@ -59,31 +60,58 @@
  * @reserved3:		reserved bytes; avoid reading or writing
  * @interface_version:	hardware-reported interface version
  * @reserved4:		reserved bytes; avoid reading or writing
+ * @reserved5:		reserved bytes; avoid reading or writing
  *
  * A shared memory area at the base of IPA resident memory is used for
  * communication with the microcontroller.  The region is 128 bytes in
  * size, but only the first 40 bytes (structured this way) are used.
  */
-struct ipa_uc_mem_area {
-	u8 command;		/* enum ipa_uc_command */
-	u8 reserved0[3];
-	__le32 command_param;
-	__le32 command_param_hi;
-	u8 response;		/* enum ipa_uc_response */
-	u8 reserved1[3];
-	__le32 response_param;
-	u8 event;		/* enum ipa_uc_event */
-	u8 reserved2[3];
-
-	__le32 event_param;
-	__le32 first_error_address;
-	u8 hw_state;
-	u8 warning_counter;
-	__le16 reserved3;
-	__le16 interface_version;
-	__le16 reserved4;
+union ipa_uc_mem_area {
+	struct {
+		u8 command;		/* enum ipa_uc_command */
+		u8 reserved0[3];
+		__le32 command_param;
+		u8 response;		/* enum ipa_uc_response */
+		u8 reserved1[3];
+		__le32 response_param;
+		u8 event;		/* enum ipa_uc_event */
+		u8 reserved2[3];
+
+		__le32 event_param;
+		__le32 reserved3;
+		__le32 first_error_address;
+		u8 hw_state;
+		u8 warning_counter;
+		__le16 reserved4;
+		__le16 interface_version;
+		__le16 reserved5;
+	} v2;
+	struct {
+		u8 command;		/* enum ipa_uc_command */
+		u8 reserved0[3];
+		__le32 command_param;
+		__le32 command_param_hi;
+		u8 response;		/* enum ipa_uc_response */
+		u8 reserved1[3];
+		__le32 response_param;
+		u8 event;		/* enum ipa_uc_event */
+		u8 reserved2[3];
+
+		__le32 event_param;
+		__le32 first_error_address;
+		u8 hw_state;
+		u8 warning_counter;
+		__le16 reserved3;
+		__le16 interface_version;
+		__le16 reserved4;
+	} v3;
 };
 
+#define UC_FIELD(_ipa, _field)			\
+	*((_ipa->version >= IPA_VERSION_3_0) ?	\
+	  &(ipa_uc_shared(_ipa)->v3._field) :	\
+	  &(ipa_uc_shared(_ipa)->v2._field))
+
 /** enum ipa_uc_command - commands from the AP to the microcontroller */
 enum ipa_uc_command {
 	IPA_UC_COMMAND_NO_OP		= 0x0,
@@ -95,6 +123,7 @@ enum ipa_uc_command {
 	IPA_UC_COMMAND_CLK_UNGATE	= 0x6,
 	IPA_UC_COMMAND_MEMCPY		= 0x7,
 	IPA_UC_COMMAND_RESET_PIPE	= 0x8,
+	/* Next two commands are present for IPA v3.0+ */
 	IPA_UC_COMMAND_REG_WRITE	= 0x9,
 	IPA_UC_COMMAND_GSI_CH_EMPTY	= 0xa,
 };
@@ -114,7 +143,7 @@ enum ipa_uc_event {
 	IPA_UC_EVENT_LOG_INFO		= 0x2,
 };
 
-static struct ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
+static union ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
 {
 	const struct ipa_mem *mem = ipa_mem_find(ipa, IPA_MEM_UC_SHARED);
 	u32 offset = ipa->mem_offset + mem->offset;
@@ -125,22 +154,22 @@ static struct ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
 /* Microcontroller event IPA interrupt handler */
 static void ipa_uc_event_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
 {
-	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
 	struct device *dev = &ipa->pdev->dev;
+	u32 event = UC_FIELD(ipa, event);
 
-	if (shared->event == IPA_UC_EVENT_ERROR)
+	if (event == IPA_UC_EVENT_ERROR)
 		dev_err(dev, "microcontroller error event\n");
-	else if (shared->event != IPA_UC_EVENT_LOG_INFO)
+	else if (event != IPA_UC_EVENT_LOG_INFO)
 		dev_err(dev, "unsupported microcontroller event %u\n",
-			shared->event);
+			event);
 	/* The LOG_INFO event can be safely ignored */
 }
 
 /* Microcontroller response IPA interrupt handler */
 static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
 {
-	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
 	struct device *dev = &ipa->pdev->dev;
+	u32 response = UC_FIELD(ipa, response);
 
 	/* An INIT_COMPLETED response message is sent to the AP by the
 	 * microcontroller when it is operational.  Other than this, the AP
@@ -150,20 +179,21 @@ static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
 	 * We can drop the power reference taken in ipa_uc_power() once we
 	 * know the microcontroller has finished its initialization.
 	 */
-	switch (shared->response) {
+	switch (response) {
 	case IPA_UC_RESPONSE_INIT_COMPLETED:
 		if (ipa->uc_powered) {
 			ipa->uc_loaded = true;
 			pm_runtime_mark_last_busy(dev);
 			(void)pm_runtime_put_autosuspend(dev);
 			ipa->uc_powered = false;
+			ipa_qmi_signal_uc_loaded(ipa);
 		} else {
 			dev_warn(dev, "unexpected init_completed response\n");
 		}
 		break;
 	default:
 		dev_warn(dev, "unsupported microcontroller response %u\n",
-			 shared->response);
+			 response);
 		break;
 	}
 }
@@ -216,16 +246,16 @@ void ipa_uc_power(struct ipa *ipa)
 /* Send a command to the microcontroller */
 static void send_uc_command(struct ipa *ipa, u32 command, u32 command_param)
 {
-	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
 	u32 offset;
 	u32 val;
 
 	/* Fill in the command data */
-	shared->command = command;
-	shared->command_param = cpu_to_le32(command_param);
-	shared->command_param_hi = 0;
-	shared->response = 0;
-	shared->response_param = 0;
+	UC_FIELD(ipa, command) = command;
+	UC_FIELD(ipa, command_param) = cpu_to_le32(command_param);
+	if (ipa->version >= IPA_VERSION_3_0)
+		ipa_uc_shared(ipa)->v3.command_param_hi = 0;
+	UC_FIELD(ipa, response) = 0;
+	UC_FIELD(ipa, response_param) = 0;
 
 	/* Use an interrupt to tell the microcontroller the command is ready */
 	val = u32_encode_bits(1, UC_INTR_FMASK);
-- 
2.33.0



* [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (13 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware Sireesh Kodali
                   ` (2 subsequent siblings)
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

The biggest changes are:

- Make SMP2P functions no-operation
- Make resource init no-operation
- Skip firmware loading
- Add reset sequence

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_main.c     | 19 ++++++++++++++++---
 drivers/net/ipa/ipa_resource.c |  3 +++
 drivers/net/ipa/ipa_smp2p.c    | 11 +++++++++--
 3 files changed, 28 insertions(+), 5 deletions(-)
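[Editor's note: the v2.x reset sequence added to ipa_hardware_config() can
be sketched with logged register writes instead of real MMIO. The offsets
below are placeholders, not the actual IPA_REG_COMP_SW_RESET_OFFSET value.]

```c
/* Sketch of the IPA v2.x reset: pulse COMP_SW_RESET (write 1 then 0),
 * then enable the block via the comp-cfg register.  Writes are logged
 * rather than issued; offsets are placeholders.
 */
#include <assert.h>
#include <stdint.h>

#define COMP_SW_RESET_OFFSET	0x3c	/* placeholder offset */
#define COMP_CFG_OFFSET		0x38	/* placeholder offset */

struct write_log { uint32_t offset[8]; uint32_t value[8]; int n; };

static void iowrite32_log(struct write_log *log, uint32_t val, uint32_t off)
{
	log->offset[log->n] = off;
	log->value[log->n] = val;
	log->n++;
}

static void ipa_v2_reset(struct write_log *log)
{
	iowrite32_log(log, 1, COMP_SW_RESET_OFFSET);	/* assert reset */
	iowrite32_log(log, 0, COMP_SW_RESET_OFFSET);	/* release reset */
	iowrite32_log(log, 1, COMP_CFG_OFFSET);		/* enable block */
}
```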

diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index ea6c4347f2c6..b437fbf95edf 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -355,12 +355,22 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
 	u32 granularity;
 	u32 val;
 
+	if (ipa->version <= IPA_VERSION_2_6L) {
+		iowrite32(1, ipa->reg_virt + IPA_REG_COMP_SW_RESET_OFFSET);
+		iowrite32(0, ipa->reg_virt + IPA_REG_COMP_SW_RESET_OFFSET);
+
+		iowrite32(1, ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
+	}
+
 	/* IPA v4.5+ has no backward compatibility register */
-	if (version < IPA_VERSION_4_5) {
+	if (version >= IPA_VERSION_2_5 && version < IPA_VERSION_4_5) {
 		val = data->backward_compat;
 		iowrite32(val, ipa->reg_virt + ipa_reg_bcr_offset(ipa->version));
 	}
 
+	if (ipa->version <= IPA_VERSION_2_6L)
+		return;
+
 	/* Implement some hardware workarounds */
 	if (version >= IPA_VERSION_4_0 && version < IPA_VERSION_4_5) {
 		/* Disable PA mask to allow HOLB drop */
@@ -412,7 +422,8 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
 static void ipa_hardware_deconfig(struct ipa *ipa)
 {
 	/* Mostly we just leave things as we set them. */
-	ipa_hardware_dcd_deconfig(ipa);
+	if (ipa->version > IPA_VERSION_2_6L)
+		ipa_hardware_dcd_deconfig(ipa);
 }
 
 /**
@@ -765,8 +776,10 @@ static int ipa_probe(struct platform_device *pdev)
 
 	/* Otherwise we need to load the firmware and have Trust Zone validate
 	 * and install it.  If that succeeds we can proceed with setup.
+	 * On IPA v2.6L, however, firmware loading is not needed.
 	 */
-	ret = ipa_firmware_load(dev);
+	if (ipa->version > IPA_VERSION_2_6L)
+		ret = ipa_firmware_load(dev);
 	if (ret)
 		goto err_deconfig;
 
diff --git a/drivers/net/ipa/ipa_resource.c b/drivers/net/ipa/ipa_resource.c
index e3da95d69409..36a72324d828 100644
--- a/drivers/net/ipa/ipa_resource.c
+++ b/drivers/net/ipa/ipa_resource.c
@@ -162,6 +162,9 @@ int ipa_resource_config(struct ipa *ipa, const struct ipa_resource_data *data)
 {
 	u32 i;
 
+	if (ipa->version <= IPA_VERSION_2_6L)
+		return 0;
+
 	if (!ipa_resource_limits_valid(ipa, data))
 		return -EINVAL;
 
diff --git a/drivers/net/ipa/ipa_smp2p.c b/drivers/net/ipa/ipa_smp2p.c
index df7639c39d71..fa4a9f1c196a 100644
--- a/drivers/net/ipa/ipa_smp2p.c
+++ b/drivers/net/ipa/ipa_smp2p.c
@@ -233,6 +233,10 @@ int ipa_smp2p_init(struct ipa *ipa, bool modem_init)
 	u32 valid_bit;
 	int ret;
 
+	/* SMP2P is not used on IPA v2.6L and earlier */
+	if (ipa->version <= IPA_VERSION_2_6L)
+		return 0;
+
 	valid_state = qcom_smem_state_get(dev, "ipa-clock-enabled-valid",
 					  &valid_bit);
 	if (IS_ERR(valid_state))
@@ -302,6 +306,9 @@ void ipa_smp2p_exit(struct ipa *ipa)
 {
 	struct ipa_smp2p *smp2p = ipa->smp2p;
 
+	if (!smp2p)
+		return;
+
 	if (smp2p->setup_ready_irq)
 		ipa_smp2p_irq_exit(smp2p, smp2p->setup_ready_irq);
 	ipa_smp2p_panic_notifier_unregister(smp2p);
@@ -317,7 +324,7 @@ void ipa_smp2p_disable(struct ipa *ipa)
 {
 	struct ipa_smp2p *smp2p = ipa->smp2p;
 
-	if (!smp2p->setup_ready_irq)
+	if (!smp2p || !smp2p->setup_ready_irq)
 		return;
 
 	mutex_lock(&smp2p->mutex);
@@ -333,7 +340,7 @@ void ipa_smp2p_notify_reset(struct ipa *ipa)
 	struct ipa_smp2p *smp2p = ipa->smp2p;
 	u32 mask;
 
-	if (!smp2p->notified)
+	if (!smp2p || !smp2p->notified)
 		return;
 
 	ipa_smp2p_power_release(ipa);
-- 
2.33.0



* [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (14 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA Sireesh Kodali
  2021-10-13 22:27 ` [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Alex Elder
  17 siblings, 1 reply; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

Add the configuration data for IPA v2.0, v2.5 and v2.6L. IPA v2.5 is
found on msm8996. IPA v2.6L hardware is found on the following SoCs:
msm8920, msm8940, msm8952, msm8953, msm8956, msm8976, sdm630 and sdm660.
No SoC-specific configuration is required in the IPA driver.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/Makefile        |   7 +-
 drivers/net/ipa/ipa_data-v2.c   | 369 ++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_data-v3.1.c |   2 +-
 drivers/net/ipa/ipa_data.h      |   3 +
 drivers/net/ipa/ipa_main.c      |  15 ++
 drivers/net/ipa/ipa_sysfs.c     |   6 +
 6 files changed, 398 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/ipa/ipa_data-v2.c

diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
index 4abebc667f77..858fbf76cff3 100644
--- a/drivers/net/ipa/Makefile
+++ b/drivers/net/ipa/Makefile
@@ -7,6 +7,7 @@ ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
 				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
 				ipa_sysfs.o
 
-ipa-y			+=	ipa_data-v3.1.o ipa_data-v3.5.1.o \
-				ipa_data-v4.2.o ipa_data-v4.5.o \
-				ipa_data-v4.9.o ipa_data-v4.11.o
+ipa-y			+=	ipa_data-v2.o ipa_data-v3.1.o \
+				ipa_data-v3.5.1.o ipa_data-v4.2.o \
+				ipa_data-v4.5.o ipa_data-v4.9.o \
+				ipa_data-v4.11.o
diff --git a/drivers/net/ipa/ipa_data-v2.c b/drivers/net/ipa/ipa_data-v2.c
new file mode 100644
index 000000000000..869b8a1a45d6
--- /dev/null
+++ b/drivers/net/ipa/ipa_data-v2.c
@@ -0,0 +1,369 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/log2.h>
+
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+#include "ipa_mem.h"
+
+/* Endpoint configuration for the IPA v2 hardware. */
+static const struct ipa_gsi_endpoint_data ipa_endpoint_data[] = {
+	[IPA_ENDPOINT_AP_COMMAND_TX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 3,
+		.endpoint_id	= 3,
+		.channel_name	= "cmd_tx",
+		.toward_ipa	= true,
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 20,
+		},
+		.endpoint = {
+			.config	= {
+				.dma_mode	= true,
+				.dma_endpoint	= IPA_ENDPOINT_AP_LAN_RX,
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_LAN_RX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 2,
+		.endpoint_id	= 2,
+		.channel_name	= "ap_lan_rx",
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 8,
+		},
+		.endpoint	= {
+			.config	= {
+				.aggregation	= true,
+				.status_enable	= true,
+				.rx = {
+					.pad_align	= ilog2(sizeof(u32)),
+				},
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_MODEM_TX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 4,
+		.endpoint_id	= 4,
+		.channel_name	= "ap_modem_tx",
+		.toward_ipa	= true,
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 8,
+		},
+		.endpoint	= {
+			.config	= {
+				.qmap		= true,
+				.status_enable	= true,
+				.tx = {
+					.status_endpoint =
+						IPA_ENDPOINT_AP_LAN_RX,
+				},
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_MODEM_RX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 5,
+		.endpoint_id	= 5,
+		.channel_name	= "ap_modem_rx",
+		.toward_ipa	= false,
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 8,
+		},
+		.endpoint	= {
+			.config = {
+				.aggregation	= true,
+				.qmap		= true,
+			},
+		},
+	},
+	[IPA_ENDPOINT_MODEM_LAN_TX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 6,
+		.endpoint_id	= 6,
+		.channel_name	= "modem_lan_tx",
+		.toward_ipa	= true,
+	},
+	[IPA_ENDPOINT_MODEM_COMMAND_TX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 7,
+		.endpoint_id	= 7,
+		.channel_name	= "modem_cmd_tx",
+		.toward_ipa	= true,
+	},
+	[IPA_ENDPOINT_MODEM_LAN_RX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 8,
+		.endpoint_id	= 8,
+		.channel_name	= "modem_lan_rx",
+		.toward_ipa	= false,
+	},
+	[IPA_ENDPOINT_MODEM_AP_RX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 9,
+		.endpoint_id	= 9,
+		.channel_name	= "modem_ap_rx",
+		.toward_ipa	= false,
+	},
+};
+
+static struct ipa_interconnect_data ipa_interconnect_data[] = {
+	{
+		.name = "memory",
+		.peak_bandwidth	= 1200000,	/* 1200 MBps */
+		.average_bandwidth = 100000,	/* 100 MBps */
+	},
+	{
+		.name = "imem",
+		.peak_bandwidth	= 350000,	/* 350 MBps */
+		.average_bandwidth  = 0,	/* unused */
+	},
+	{
+		.name = "config",
+		.peak_bandwidth	= 40000,	/* 40 MBps */
+		.average_bandwidth = 0,		/* unused */
+	},
+};
+
+static struct ipa_power_data ipa_power_data = {
+	.core_clock_rate	= 200 * 1000 * 1000,	/* Hz */
+	.interconnect_count	= ARRAY_SIZE(ipa_interconnect_data),
+	.interconnect_data	= ipa_interconnect_data,
+};
+
+/* IPA-resident memory region configuration for v2.0 */
+static const struct ipa_mem ipa_mem_local_data_v2_0[IPA_MEM_COUNT] = {
+	[IPA_MEM_UC_SHARED] = {
+		.offset         = 0,
+		.size           = 0x80,
+		.canary_count   = 0,
+	},
+	[IPA_MEM_V4_FILTER] = {
+		.offset		= 0x0080,
+		.size		= 0x0058,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_V6_FILTER] = {
+		.offset		= 0x00e0,
+		.size		= 0x0058,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V4_ROUTE] = {
+		.offset		= 0x0140,
+		.size		= 0x002c,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_ROUTE] = {
+		.offset		= 0x0170,
+		.size		= 0x002c,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_MODEM_HEADER] = {
+		.offset		= 0x01a0,
+		.size		= 0x0140,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_AP_HEADER] = {
+		.offset		= 0x02e0,
+		.size		= 0x0048,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_MODEM] = {
+		.offset		= 0x032c,
+		.size		= 0x0dcc,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_V4_FILTER_AP] = {
+		.offset		= 0x10fc,
+		.size		= 0x0780,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_V6_FILTER_AP] = {
+		.offset		= 0x187c,
+		.size		= 0x055c,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_UC_INFO] = {
+		.offset		= 0x1ddc,
+		.size		= 0x0124,
+		.canary_count	= 1,
+	},
+};
+
+static struct ipa_mem_data ipa_mem_data_v2_0 = {
+	.local		= ipa_mem_local_data_v2_0,
+	.smem_id	= 497,
+	.smem_size	= 0x00001f00,
+};
+
+/* Configuration data for IPA v2.0 */
+const struct ipa_data ipa_data_v2_0 = {
+	.version	= IPA_VERSION_2_0,
+	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
+	.endpoint_data	= ipa_endpoint_data,
+	.mem_data	= &ipa_mem_data_v2_0,
+	.power_data	= &ipa_power_data,
+};
+
+/* IPA-resident memory region configuration for v2.5 */
+static const struct ipa_mem ipa_mem_local_data_v2_5[IPA_MEM_COUNT] = {
+	[IPA_MEM_UC_SHARED] = {
+		.offset         = 0,
+		.size           = 0x80,
+		.canary_count   = 0,
+	},
+	[IPA_MEM_UC_INFO] = {
+		.offset		= 0x0080,
+		.size		= 0x0200,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_V4_FILTER] = {
+		.offset		= 0x0288,
+		.size		= 0x0058,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_FILTER] = {
+		.offset		= 0x02e8,
+		.size		= 0x0058,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V4_ROUTE] = {
+		.offset		= 0x0348,
+		.size		= 0x003c,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_ROUTE] = {
+		.offset		= 0x0388,
+		.size		= 0x003c,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_MODEM_HEADER] = {
+		.offset		= 0x03c8,
+		.size		= 0x0140,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_MODEM_PROC_CTX] = {
+		.offset		= 0x0510,
+		.size		= 0x0200,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_AP_PROC_CTX] = {
+		.offset		= 0x0710,
+		.size		= 0x0200,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_MODEM] = {
+		.offset		= 0x0914,
+		.size		= 0x16a8,
+		.canary_count	= 1,
+	},
+};
+
+static struct ipa_mem_data ipa_mem_data_v2_5 = {
+	.local		= ipa_mem_local_data_v2_5,
+	.smem_id	= 497,
+	.smem_size	= 0x00002000,
+};
+
+/* Configuration data for IPA v2.5 */
+const struct ipa_data ipa_data_v2_5 = {
+	.version	= IPA_VERSION_2_5,
+	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
+	.endpoint_data	= ipa_endpoint_data,
+	.mem_data	= &ipa_mem_data_v2_5,
+	.power_data	= &ipa_power_data,
+};
+
+/* IPA-resident memory region configuration for v2.6L */
+static const struct ipa_mem ipa_mem_local_data_v2_6L[IPA_MEM_COUNT] = {
+	{
+		.id		= IPA_MEM_UC_SHARED,
+		.offset         = 0,
+		.size           = 0x80,
+		.canary_count   = 0,
+	},
+	{
+		.id		= IPA_MEM_UC_INFO,
+		.offset		= 0x0080,
+		.size		= 0x0200,
+		.canary_count	= 0,
+	},
+	{
+		.id		= IPA_MEM_V4_FILTER,
+		.offset		= 0x0288,
+		.size		= 0x0058,
+		.canary_count	= 2,
+	},
+	{
+		.id		= IPA_MEM_V6_FILTER,
+		.offset		= 0x02e8,
+		.size		= 0x0058,
+		.canary_count	= 2,
+	},
+	{
+		.id		= IPA_MEM_V4_ROUTE,
+		.offset		= 0x0348,
+		.size		= 0x003c,
+		.canary_count	= 2,
+	},
+	{
+		.id		= IPA_MEM_V6_ROUTE,
+		.offset		= 0x0388,
+		.size		= 0x003c,
+		.canary_count	= 1,
+	},
+	{
+		.id		= IPA_MEM_MODEM_HEADER,
+		.offset		= 0x03c8,
+		.size		= 0x0140,
+		.canary_count	= 1,
+	},
+	{
+		.id		= IPA_MEM_ZIP,
+		.offset		= 0x0510,
+		.size		= 0x0200,
+		.canary_count	= 2,
+	},
+	{
+		.id		= IPA_MEM_MODEM,
+		.offset		= 0x0714,
+		.size		= 0x18e8,
+		.canary_count	= 1,
+	},
+	{
+		.id		= IPA_MEM_END_MARKER,
+		.offset		= 0x2000,
+		.size		= 0,
+		.canary_count	= 1,
+	},
+};
+
+static struct ipa_mem_data ipa_mem_data_v2_6L = {
+	.local		= ipa_mem_local_data_v2_6L,
+	.smem_id	= 497,
+	.smem_size	= 0x00002000,
+};
+
+/* Configuration data for IPA v2.6L */
+const struct ipa_data ipa_data_v2_6L = {
+	.version	= IPA_VERSION_2_6L,
+	/* Unfortunately we don't know what this BCR value corresponds to */
+	.backward_compat = 0x1fff7f,
+	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
+	.endpoint_data	= ipa_endpoint_data,
+	.mem_data	= &ipa_mem_data_v2_6L,
+	.power_data	= &ipa_power_data,
+};
diff --git a/drivers/net/ipa/ipa_data-v3.1.c b/drivers/net/ipa/ipa_data-v3.1.c
index 06ddb85f39b2..12d231232756 100644
--- a/drivers/net/ipa/ipa_data-v3.1.c
+++ b/drivers/net/ipa/ipa_data-v3.1.c
@@ -6,7 +6,7 @@
 
 #include <linux/log2.h>
 
-#include "gsi.h"
+#include "ipa_dma.h"
 #include "ipa_data.h"
 #include "ipa_endpoint.h"
 #include "ipa_mem.h"
diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
index 7d62d49f414f..e7ce2e9388b6 100644
--- a/drivers/net/ipa/ipa_data.h
+++ b/drivers/net/ipa/ipa_data.h
@@ -301,6 +301,9 @@ struct ipa_data {
 	const struct ipa_power_data *power_data;
 };
 
+extern const struct ipa_data ipa_data_v2_0;
+extern const struct ipa_data ipa_data_v2_5;
+extern const struct ipa_data ipa_data_v2_6L;
 extern const struct ipa_data ipa_data_v3_1;
 extern const struct ipa_data ipa_data_v3_5_1;
 extern const struct ipa_data ipa_data_v4_2;
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index b437fbf95edf..3ae5c5c6734b 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -560,6 +560,18 @@ static int ipa_firmware_load(struct device *dev)
 }
 
 static const struct of_device_id ipa_match[] = {
+	{
+		.compatible	= "qcom,ipa-v2.0",
+		.data		= &ipa_data_v2_0,
+	},
+	{
+		.compatible	= "qcom,msm8996-ipa",
+		.data		= &ipa_data_v2_5,
+	},
+	{
+		.compatible	= "qcom,msm8953-ipa",
+		.data		= &ipa_data_v2_6L,
+	},
 	{
 		.compatible	= "qcom,msm8998-ipa",
 		.data		= &ipa_data_v3_1,
@@ -632,6 +644,9 @@ static void ipa_validate_build(void)
 static bool ipa_version_valid(enum ipa_version version)
 {
 	switch (version) {
+	case IPA_VERSION_2_0:
+	case IPA_VERSION_2_5:
+	case IPA_VERSION_2_6L:
 	case IPA_VERSION_3_0:
 	case IPA_VERSION_3_1:
 	case IPA_VERSION_3_5:
diff --git a/drivers/net/ipa/ipa_sysfs.c b/drivers/net/ipa/ipa_sysfs.c
index ff61dbdd70d8..f5d159f6bc06 100644
--- a/drivers/net/ipa/ipa_sysfs.c
+++ b/drivers/net/ipa/ipa_sysfs.c
@@ -14,6 +14,12 @@
 static const char *ipa_version_string(struct ipa *ipa)
 {
 	switch (ipa->version) {
+	case IPA_VERSION_2_0:
+		return "2.0";
+	case IPA_VERSION_2_5:
+		return "2.5";
+	case IPA_VERSION_2_6L:
+		return "2.6L";
 	case IPA_VERSION_3_0:
 		return "3.0";
 	case IPA_VERSION_3_1:
-- 
2.33.0
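The memory maps in this patch are easy to get wrong by hand: each region's offset must lie at or beyond the end of the previous region, and the last region must fit inside smem_size. A hedged sketch of a sanity check (the helper is illustrative, not part of the driver; the table below copies the v2.0 offsets and sizes from this patch, canary counts omitted):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct mem_region {
	unsigned int offset;
	unsigned int size;
};

/* Regions must be ordered, non-overlapping, and fit in shared memory */
bool mem_map_valid(const struct mem_region *mem, size_t count,
		   unsigned int smem_size)
{
	unsigned int end = 0;

	for (size_t i = 0; i < count; i++) {
		if (mem[i].offset < end)
			return false;	/* overlaps the previous region */
		end = mem[i].offset + mem[i].size;
	}
	return end <= smem_size;
}

/* IPA v2.0 local memory map from this patch */
const struct mem_region ipa_v2_0_map[] = {
	{ 0x0000, 0x80 },  { 0x0080, 0x58 },  { 0x00e0, 0x58 },
	{ 0x0140, 0x2c },  { 0x0170, 0x2c },  { 0x01a0, 0x140 },
	{ 0x02e0, 0x48 },  { 0x032c, 0xdcc }, { 0x10fc, 0x780 },
	{ 0x187c, 0x55c }, { 0x1ddc, 0x124 },
};
```

Note that the small gaps between regions are intentional: the canary words declared by canary_count sit immediately before a region's offset, so for example the 8-byte gap between 0x00d8 and 0x00e0 holds the two canaries of IPA_MEM_V6_FILTER.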


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (15 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-09-23 12:42   ` Rob Herring
  2021-10-13 22:31   ` Alex Elder
  2021-10-13 22:27 ` [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Alex Elder
  17 siblings, 2 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, Andy Gross, Bjorn Andersson, David S. Miller,
	Jakub Kicinski, Rob Herring,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS

MSM8996 uses IPA v2.5 and MSM8953 uses IPA v2.6L.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 Documentation/devicetree/bindings/net/qcom,ipa.yaml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Documentation/devicetree/bindings/net/qcom,ipa.yaml b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
index b8a0b392b24e..e857827bfa54 100644
--- a/Documentation/devicetree/bindings/net/qcom,ipa.yaml
+++ b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
@@ -44,6 +44,8 @@ description:
 properties:
   compatible:
     enum:
+      - qcom,msm8953-ipa
+      - qcom,msm8996-ipa
       - qcom,msm8998-ipa
       - qcom,sc7180-ipa
       - qcom,sc7280-ipa
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport
  2021-09-20  3:08 ` [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport Sireesh Kodali
@ 2021-09-20 14:31   ` kernel test robot
  2021-10-13 22:30   ` Alex Elder
  1 sibling, 0 replies; 49+ messages in thread
From: kernel test robot @ 2021-09-20 14:31 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 9926 bytes --]

Hi Sireesh,

[FYI, it's a private test report for your RFC patch.]
[auto build test WARNING on net/master]
[also build test WARNING on horms-ipvs/master net-next/master linus/master v5.15-rc2 next-20210920]
[cannot apply to robh/for-next]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Sireesh-Kodali/net-ipa-Add-support-for-IPA-v2-x/20210920-111317
base:   https://git.kernel.org/pub/scm/linux/kernel/git/davem/net.git e30cd812dffadc58241ae378e48728e6a161becd
config: parisc-allyesconfig (attached as .config)
compiler: hppa-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/b1b5bc3b7f526068559fc747d55c245971371803
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Sireesh-Kodali/net-ipa-Add-support-for-IPA-v2-x/20210920-111317
        git checkout b1b5bc3b7f526068559fc747d55c245971371803
        # save the attached .config to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross O=build_dir ARCH=parisc SHELL=/bin/bash drivers/net/ipa/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> drivers/net/ipa/ipa_trans.c:543: warning: expecting prototype for __gsi_trans_commit(). Prototype was for gsi_trans_commit() instead


vim +543 drivers/net/ipa/ipa_trans.c

9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  530  
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  531  /**
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  532   * __gsi_trans_commit() - Common GSI transaction commit code
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  533   * @trans:	Transaction to commit
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  534   * @ring_db:	Whether to tell the hardware about these queued transfers
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  535   *
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  536   * Formats channel ring TRE entries based on the content of the scatterlist.
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  537   * Maps a transaction pointer to the last ring entry used for the transaction,
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  538   * so it can be recovered when it completes.  Moves the transaction to the
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  539   * pending list.  Finally, updates the channel ring pointer and optionally
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  540   * rings the doorbell.
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  541   */
b1b5bc3b7f5260 drivers/net/ipa/ipa_trans.c Sireesh Kodali 2021-09-20  542  void gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05 @543  {
715855d209e083 drivers/net/ipa/ipa_trans.c Vladimir Lypak 2021-09-20  544  	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  545  	struct gsi_ring *ring = &channel->tre_ring;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  546  	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  547  	bool bei = channel->toward_ipa;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  548  	struct ipa_cmd_info *info;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  549  	struct gsi_tre *dest_tre;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  550  	struct scatterlist *sg;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  551  	u32 byte_count = 0;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  552  	u32 avail;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  553  	u32 i;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  554  
5bc5588466a1f8 drivers/net/ipa/gsi_trans.c Alex Elder     2021-07-26  555  	WARN_ON(!trans->used);
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  556  
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  557  	/* Consume the entries.  If we cross the end of the ring while
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  558  	 * filling them we'll switch to the beginning to finish.
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  559  	 * If there is no info array we're doing a simple data
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  560  	 * transfer request, whose opcode is IPA_CMD_NONE.
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  561  	 */
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  562  	info = trans->info ? &trans->info[0] : NULL;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  563  	avail = ring->count - ring->index % ring->count;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  564  	dest_tre = gsi_ring_virt(ring, ring->index);
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  565  	for_each_sg(trans->sgl, sg, trans->used, i) {
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  566  		bool last_tre = i == trans->used - 1;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  567  		dma_addr_t addr = sg_dma_address(sg);
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  568  		u32 len = sg_dma_len(sg);
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  569  
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  570  		byte_count += len;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  571  		if (!avail--)
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  572  			dest_tre = gsi_ring_virt(ring, 0);
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  573  		if (info)
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  574  			opcode = info++->opcode;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  575  
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  576  		gsi_trans_tre_fill(dest_tre, addr, len, last_tre, bei, opcode);
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  577  		dest_tre++;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  578  	}
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  579  	ring->index += trans->used;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  580  
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  581  	if (channel->toward_ipa) {
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  582  		/* We record TX bytes when they are sent */
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  583  		trans->len = byte_count;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  584  		trans->trans_count = channel->trans_count;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  585  		trans->byte_count = channel->byte_count;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  586  		channel->trans_count++;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  587  		channel->byte_count += byte_count;
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  588  	}
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  589  
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  590  	/* Associate the last TRE with the transaction */
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  591  	gsi_channel_trans_map(channel, ring->index - 1, trans);
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  592  
715855d209e083 drivers/net/ipa/ipa_trans.c Vladimir Lypak 2021-09-20  593  	ipa_trans_move_pending(trans);
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  594  
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  595  	/* Ring doorbell if requested, or if all TREs are allocated */
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  596  	if (ring_db || !atomic_read(&channel->trans_info.tre_avail)) {
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  597  		/* Report what we're handing off to hardware for TX channels */
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  598  		if (channel->toward_ipa)
715855d209e083 drivers/net/ipa/ipa_trans.c Vladimir Lypak 2021-09-20  599  			ipa_channel_tx_queued(channel);
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  600  		gsi_channel_doorbell(channel);
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  601  	}
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  602  }
9dd441e4ed5755 drivers/net/ipa/gsi_trans.c Alex Elder     2020-03-05  603  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 69360 bytes --]
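The warning stems from the rename: the kernel-doc comment still names __gsi_trans_commit() while the function became gsi_trans_commit(). A minimal sketch of the fix, with a stub type and body so it stands alone (the driver's real implementation is unchanged):

```c
#include <assert.h>
#include <stdbool.h>

struct ipa_trans {
	int used;
};

/**
 * gsi_trans_commit() - Common GSI transaction commit code
 * @trans:	Transaction to commit
 * @ring_db:	Whether to tell the hardware about these queued transfers
 *
 * Matching the kernel-doc name to the renamed function is what silences
 * the "expecting prototype" warning from the W=1 build.
 */
void gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
{
	/* stub body for illustration only */
	(void)trans;
	(void)ring_db;
}
```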

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA
  2021-09-20  3:08 ` [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA Sireesh Kodali
@ 2021-09-23 12:42   ` Rob Herring
  2021-10-13 22:31   ` Alex Elder
  1 sibling, 0 replies; 49+ messages in thread
From: Rob Herring @ 2021-09-23 12:42 UTC (permalink / raw)
  To: Sireesh Kodali
  Cc: netdev, Jakub Kicinski, Bjorn Andersson,
	~postmarketos/upstreaming, elder, linux-arm-msm, linux-kernel,
	devicetree, Andy Gross, Rob Herring, phone-devel,
	David S. Miller

On Mon, 20 Sep 2021 08:38:11 +0530, Sireesh Kodali wrote:
> MSM8996 uses IPA v2.5 and MSM8953 uses IPA v2.6l
> 
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>  Documentation/devicetree/bindings/net/qcom,ipa.yaml | 2 ++
>  1 file changed, 2 insertions(+)
> 

Acked-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (16 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA Sireesh Kodali
@ 2021-10-13 22:27 ` Alex Elder
  17 siblings, 0 replies; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:27 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder

On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> Hi,
> 
> This RFC patch series adds support for IPA v2, v2.5 and v2.6L
> (collectively referred to as IPA v2.x).

I'm sorry for the delay on this.  I want to give this a
reasonable review, but it's been hard to prioritize doing
so.  So for now I aim to give you some "easy" feedback,
knowing that this doesn't cover all issues.  This is an
RFC, after all...

So this isn't a "real review" but I'll try to be helpful.

Overall, I appreciate how well you adhered to the patterns and
conventions used elsewhere in the driver.  There are many levels
to that, but I think consistency is a huge factor in keeping
code maintainable.  I didn't see all that many places where
I felt like whining about naming you used, or oddities in
indentation, and so on.

Abstracting the GSI layer seemed to be done more easily than
I expected.  I didn't dive deep into the BAM code, and would
want to pay much closer attention to that in the future.

The BAM/GSI difference is the biggest one dividing IPA v3.0+
from its predecessors.  But as you see, the 32- versus 64-bit
address and field size differences lead to some ugliness
that's hard to avoid.

Anyway, nice work; I hope my feedback is helpful.

					-Alex

> Basic description:
> IPA v2.x is the older version of the IPA hardware found on Qualcomm
> SoCs. The biggest differences between v2.x and later versions are:
> - 32 bit hardware (the IPA microcontroller is 32 bit)
> - BAM (as opposed to GSI as a DMA transport)
> - Changes to the QMI init sequence (described in the commit message)
> 
> The fact that IPA v2.x are 32 bit only affects us directly in the table
> init code. However, its impact is felt in other parts of the code, as it
> changes the size of fields of various structs (e.g. in the commands that
> can be sent).
> 
> BAM support is already present in the mainline kernel, however it lacks
> two things:
> - Support for DMA metadata, to pass the size of the transaction from the
>    hardware to the dma client
> - Support for immediate commands, which are needed to pass commands from
>    the driver to the microcontroller
> 
> Separate patch series have been created to deal with these (linked in
> the end)
> 
> This patch series adds support for BAM as a transport by refactoring the
> current GSI code to create an abstract uniform API on top. This API
> allows the rest of the driver to handle DMA without worrying about the
> IPA version.
> 
> The final thing that hasn't been touched by this patch series is the IPA
> resource manager. On the downstream CAF kernel, the driver seems to
> share the resource code between IPA v2.x and IPA v3.x, which should mean
> all it would take to add support for resources on IPA v2.x would be to
> add the definitions in the ipa_data.
> 
> Testing:
> This patch series was tested on kernel version 5.13 on a phone with
> SDM625 (IPA v2.6L), and a phone with MSM8996 (IPA v2.5). The phone with
> IPA v2.5 was able to get an IP address using modem-manager, although
> sending/receiving packets was not tested. The phone with IPA v2.6L was
> able to get an IP, but was unable to send/receive packets. Its modem
> also relies on IPA v2.6l's compression/decompression support, and
> without this patch series, the modem simply crashes and restarts,
> waiting for the IPA block to come up.
> 
> This patch series is based on code from the downstream CAF kernel v4.9
> 
> There are some things in this patch series that would obviously not get
> accepted in their current form:
> - All IPA 2.x data is in a single file
> - Some stray printks might still be around
> - Some values have been hardcoded (e.g. the filter_map)
> Please excuse these
> 
> Lastly, this patch series depends upon the following patches for BAM:
> [0]: https://lkml.org/lkml/2021/9/19/126
> [1]: https://lkml.org/lkml/2021/9/19/135
> 
> Regards,
> Sireesh Kodali
> 
> Sireesh Kodali (10):
>    net: ipa: Add IPA v2.x register definitions
>    net: ipa: Add support for using BAM as a DMA transport
>    net: ipa: Add support for IPA v2.x commands and table init
>    net: ipa: Add support for IPA v2.x endpoints
>    net: ipa: Add support for IPA v2.x memory map
>    net: ipa: Add support for IPA v2.x in the driver's QMI interface
>    net: ipa: Add support for IPA v2 microcontroller
>    net: ipa: Add IPA v2.6L initialization sequence support
>    net: ipa: Add hw config describing IPA v2.x hardware
>    dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA
> 
> Vladimir Lypak (7):
>    net: ipa: Correct ipa_status_opcode enumeration
>    net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support
>    net: ipa: Refactor GSI code
>    net: ipa: Establish ipa_dma interface
>    net: ipa: Check interrupts for availability
>    net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait
>    net: ipa: Add support for IPA v2.x interrupts
> 
>   .../devicetree/bindings/net/qcom,ipa.yaml     |   2 +
>   drivers/net/ipa/Makefile                      |  11 +-
>   drivers/net/ipa/bam.c                         | 525 ++++++++++++++++++
>   drivers/net/ipa/gsi.c                         | 322 ++++++-----
>   drivers/net/ipa/ipa.h                         |   8 +-
>   drivers/net/ipa/ipa_cmd.c                     | 244 +++++---
>   drivers/net/ipa/ipa_cmd.h                     |  20 +-
>   drivers/net/ipa/ipa_data-v2.c                 | 369 ++++++++++++
>   drivers/net/ipa/ipa_data-v3.1.c               |   2 +-
>   drivers/net/ipa/ipa_data-v3.5.1.c             |   2 +-
>   drivers/net/ipa/ipa_data-v4.11.c              |   2 +-
>   drivers/net/ipa/ipa_data-v4.2.c               |   2 +-
>   drivers/net/ipa/ipa_data-v4.5.c               |   2 +-
>   drivers/net/ipa/ipa_data-v4.9.c               |   2 +-
>   drivers/net/ipa/ipa_data.h                    |   4 +
>   drivers/net/ipa/{gsi.h => ipa_dma.h}          | 179 +++---
>   .../ipa/{gsi_private.h => ipa_dma_private.h}  |  46 +-
>   drivers/net/ipa/ipa_endpoint.c                | 188 ++++---
>   drivers/net/ipa/ipa_endpoint.h                |   6 +-
>   drivers/net/ipa/ipa_gsi.c                     |  18 +-
>   drivers/net/ipa/ipa_gsi.h                     |  12 +-
>   drivers/net/ipa/ipa_interrupt.c               |  36 +-
>   drivers/net/ipa/ipa_main.c                    |  82 ++-
>   drivers/net/ipa/ipa_mem.c                     |  55 +-
>   drivers/net/ipa/ipa_mem.h                     |   5 +-
>   drivers/net/ipa/ipa_power.c                   |   4 +-
>   drivers/net/ipa/ipa_qmi.c                     |  37 +-
>   drivers/net/ipa/ipa_qmi.h                     |  10 +
>   drivers/net/ipa/ipa_reg.h                     | 184 +++++-
>   drivers/net/ipa/ipa_resource.c                |   3 +
>   drivers/net/ipa/ipa_smp2p.c                   |  11 +-
>   drivers/net/ipa/ipa_sysfs.c                   |   6 +
>   drivers/net/ipa/ipa_table.c                   |  86 +--
>   drivers/net/ipa/ipa_table.h                   |   6 +-
>   drivers/net/ipa/{gsi_trans.c => ipa_trans.c}  | 182 +++---
>   drivers/net/ipa/{gsi_trans.h => ipa_trans.h}  |  78 +--
>   drivers/net/ipa/ipa_uc.c                      |  96 ++--
>   drivers/net/ipa/ipa_version.h                 |  12 +
>   38 files changed, 2133 insertions(+), 726 deletions(-)
>   create mode 100644 drivers/net/ipa/bam.c
>   create mode 100644 drivers/net/ipa/ipa_data-v2.c
>   rename drivers/net/ipa/{gsi.h => ipa_dma.h} (57%)
>   rename drivers/net/ipa/{gsi_private.h => ipa_dma_private.h} (66%)
>   rename drivers/net/ipa/{gsi_trans.c => ipa_trans.c} (80%)
>   rename drivers/net/ipa/{gsi_trans.h => ipa_trans.h} (71%)
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration
  2021-09-20  3:07 ` [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration Sireesh Kodali
@ 2021-10-13 22:28   ` Alex Elder
  2021-10-18 16:12     ` Sireesh Kodali
  0 siblings, 1 reply; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:28 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> The values in the enumeration were defined as bitmasks (base-2 exponents
> of the actual opcodes). However, they are not used as bitmasks: the
> ipa_endpoint_status_skip and ipa_status_format_packet functions compare
> them directly against the opcode from the status packet. This commit
> converts these values to the actual hardware constants.
> 
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_endpoint.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> index 5528d97110d5..29227de6661f 100644
> --- a/drivers/net/ipa/ipa_endpoint.c
> +++ b/drivers/net/ipa/ipa_endpoint.c
> @@ -41,10 +41,10 @@
>   
>   /** enum ipa_status_opcode - status element opcode hardware values */
>   enum ipa_status_opcode {
> -	IPA_STATUS_OPCODE_PACKET		= 0x01,
> -	IPA_STATUS_OPCODE_DROPPED_PACKET	= 0x04,
> -	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 0x08,
> -	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 0x40,
> +	IPA_STATUS_OPCODE_PACKET		= 0,
> +	IPA_STATUS_OPCODE_DROPPED_PACKET	= 2,
> +	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 3,
> +	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 6,

I haven't looked at how these symbols are used (whether you
changed it at all), but I'm pretty sure this is wrong.

The downstream code tends to define "soft" symbols that must
be mapped to their hardware equivalent values.  So for
example you might find a function ipa_pkt_status_parse()
that translates between the hardware status structure
and the abstracted "soft" status structure.  In that
function you see, for example, that hardware status
opcode 0x1 is translated to IPAHAL_PKT_STATUS_OPCODE_PACKET,
which downstream is defined to have value 0.

In many places the upstream code eliminates that layer
of indirection where possible.  So enumerated constants
are assigned specific values that match what the hardware
uses.

					-Alex

>   };
>   
>   /** enum ipa_status_exception - status element exception type */
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support
  2021-09-20  3:07 ` [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support Sireesh Kodali
@ 2021-10-13 22:28   ` Alex Elder
  2021-10-18 16:16     ` Sireesh Kodali
  0 siblings, 1 reply; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:28 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> IPA v2.x is 32-bit. Having an IPA_TABLE_ENTRY_SIZE macro makes it
> easier to support both 32-bit and 64-bit IPA versions.

This looks reasonable.  At this point filter/route tables aren't
really used, so this is a simple fix.  You use IPA_IS_64BIT()
here, but it isn't defined until patch 7, which I expect is a
build problem.

					-Alex

> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_qmi.c   | 10 ++++++----
>   drivers/net/ipa/ipa_table.c | 29 +++++++++++++----------------
>   drivers/net/ipa/ipa_table.h |  4 ++++
>   3 files changed, 23 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
> index 90f3aec55b36..7e2fe701cc4d 100644
> --- a/drivers/net/ipa/ipa_qmi.c
> +++ b/drivers/net/ipa/ipa_qmi.c
> @@ -308,12 +308,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
>   	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE);
>   	req.v4_route_tbl_info_valid = 1;
>   	req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset;
> -	req.v4_route_tbl_info.count = mem->size / sizeof(__le64);
> +	req.v4_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
>   
>   	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE);
>   	req.v6_route_tbl_info_valid = 1;
>   	req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset;
> -	req.v6_route_tbl_info.count = mem->size / sizeof(__le64);
> +	req.v6_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
>   
>   	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER);
>   	req.v4_filter_tbl_start_valid = 1;
> @@ -352,7 +352,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
>   		req.v4_hash_route_tbl_info_valid = 1;
>   		req.v4_hash_route_tbl_info.start =
>   				ipa->mem_offset + mem->offset;
> -		req.v4_hash_route_tbl_info.count = mem->size / sizeof(__le64);
> +		req.v4_hash_route_tbl_info.count =
> +				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
>   	}
>   
>   	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE_HASHED);
> @@ -360,7 +361,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
>   		req.v6_hash_route_tbl_info_valid = 1;
>   		req.v6_hash_route_tbl_info.start =
>   			ipa->mem_offset + mem->offset;
> -		req.v6_hash_route_tbl_info.count = mem->size / sizeof(__le64);
> +		req.v6_hash_route_tbl_info.count =
> +				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
>   	}
>   
>   	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER_HASHED);
> diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
> index 1da334f54944..96c467c80a2e 100644
> --- a/drivers/net/ipa/ipa_table.c
> +++ b/drivers/net/ipa/ipa_table.c
> @@ -118,7 +118,8 @@
>    * 32-bit all-zero rule list terminator.  The "zero rule" is simply an
>    * all-zero rule followed by the list terminator.
>    */
> -#define IPA_ZERO_RULE_SIZE		(2 * sizeof(__le32))
> +#define IPA_ZERO_RULE_SIZE(version) \
> +	 (IPA_IS_64BIT(version) ? 2 * sizeof(__le32) : sizeof(__le32))
>   
>   /* Check things that can be validated at build time. */
>   static void ipa_table_validate_build(void)
> @@ -132,12 +133,6 @@ static void ipa_table_validate_build(void)
>   	 */
>   	BUILD_BUG_ON(sizeof(dma_addr_t) > sizeof(__le64));
>   
> -	/* A "zero rule" is used to represent no filtering or no routing.
> -	 * It is a 64-bit block of zeroed memory.  Code in ipa_table_init()
> -	 * assumes that it can be written using a pointer to __le64.
> -	 */
> -	BUILD_BUG_ON(IPA_ZERO_RULE_SIZE != sizeof(__le64));
> -
>   	/* Impose a practical limit on the number of routes */
>   	BUILD_BUG_ON(IPA_ROUTE_COUNT_MAX > 32);
>   	/* The modem must be allotted at least one route table entry */
> @@ -236,7 +231,7 @@ static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
>   	/* Skip over the zero rule and possibly the filter mask */
>   	skip = filter_mask ? 1 : 2;
>   
> -	return ipa->table_addr + skip * sizeof(*ipa->table_virt);
> +	return ipa->table_addr + skip * IPA_TABLE_ENTRY_SIZE(ipa->version);
>   }
>   
>   static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
> @@ -255,8 +250,8 @@ static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
>   	if (filter)
>   		first++;	/* skip over bitmap */
>   
> -	offset = mem->offset + first * sizeof(__le64);
> -	size = count * sizeof(__le64);
> +	offset = mem->offset + first * IPA_TABLE_ENTRY_SIZE(ipa->version);
> +	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
>   	addr = ipa_table_addr(ipa, false, count);
>   
>   	ipa_cmd_dma_shared_mem_add(trans, offset, size, addr, true);
> @@ -434,11 +429,11 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
>   		count = 1 + hweight32(ipa->filter_map);
>   		hash_count = hash_mem->size ? count : 0;
>   	} else {
> -		count = mem->size / sizeof(__le64);
> -		hash_count = hash_mem->size / sizeof(__le64);
> +		count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> +		hash_count = hash_mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
>   	}
> -	size = count * sizeof(__le64);
> -	hash_size = hash_count * sizeof(__le64);
> +	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
> +	hash_size = hash_count * IPA_TABLE_ENTRY_SIZE(ipa->version);
>   
>   	addr = ipa_table_addr(ipa, filter, count);
>   	hash_addr = ipa_table_addr(ipa, filter, hash_count);
> @@ -621,7 +616,8 @@ int ipa_table_init(struct ipa *ipa)
>   	 * by dma_alloc_coherent() is guaranteed to be a power-of-2 number
>   	 * of pages, which satisfies the rule alignment requirement.
>   	 */
> -	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
> +	size = IPA_ZERO_RULE_SIZE(ipa->version) +
> +	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
>   	virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
>   	if (!virt)
>   		return -ENOMEM;
> @@ -653,7 +649,8 @@ void ipa_table_exit(struct ipa *ipa)
>   	struct device *dev = &ipa->pdev->dev;
>   	size_t size;
>   
> -	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
> +	size = IPA_ZERO_RULE_SIZE(ipa->version) +
> +	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
>   
>   	dma_free_coherent(dev, size, ipa->table_virt, ipa->table_addr);
>   	ipa->table_addr = 0;
> diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
> index b6a9a0d79d68..78a168ce6558 100644
> --- a/drivers/net/ipa/ipa_table.h
> +++ b/drivers/net/ipa/ipa_table.h
> @@ -10,6 +10,10 @@
>   
>   struct ipa;
>   
> +/* The size of a filter or route table entry */
> +#define IPA_TABLE_ENTRY_SIZE(version)	\
> +	(IPA_IS_64BIT(version) ? sizeof(__le64) : sizeof(__le32))
> +
>   /* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
>   #define IPA_FILTER_COUNT_MAX	14
>   
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 03/17] net: ipa: Refactor GSI code
  2021-09-20  3:07 ` [RFC PATCH 03/17] net: ipa: Refactor GSI code Sireesh Kodali
@ 2021-10-13 22:29   ` Alex Elder
  0 siblings, 0 replies; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:29 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> Perform a mechanical refactor to replace "gsi_" with "ipa_" for
> functions which aren't actually GSI-specific and are going to be reused
> for IPA v2 with the BAM DMA interface.
> 
> Also rename "gsi_trans.*" to "ipa_trans.*", gsi.h to ipa_dma.h, gsi_private.h
> to "ipa_dma_private.h".

OK so this is simply a mechanical change so I'm not going to review it.
I understand your purpose in doing this, and I appreciate you doing it
as a separate step.

					-Alex

> All the changes in this commit are made with this script:
> 
> symbols="gsi_trans gsi_trans_pool gsi_trans_info gsi_channel gsi_trans_pool_init
> gsi_trans_pool_exit gsi_trans_pool_init_dma gsi_trans_pool_exit_dma
> gsi_trans_pool_alloc_common gsi_trans_pool_alloc gsi_trans_pool_alloc_dma
> gsi_trans_pool_next gsi_channel_trans_complete gsi_trans_move_pending
> gsi_trans_move_complete gsi_trans_move_polled gsi_trans_tre_reserve
> gsi_trans_tre_release gsi_channel_trans_alloc gsi_trans_free
> gsi_trans_cmd_add gsi_trans_page_add gsi_trans_skb_add gsi_trans_commit
> gsi_trans_commit_wait gsi_trans_commit_wait_timeout gsi_trans_complete
> gsi_channel_trans_cancel_pending gsi_channel_trans_init gsi_channel_trans_exit
> gsi_channel_tx_queued"
> 
> git mv gsi.h ipa_dma.h
> git mv gsi_private.h ipa_dma_private.h
> git mv gsi_trans.c ipa_trans.c
> git mv gsi_trans.h ipa_trans.h
> 
> sed -i "s/\<gsi\.h\>/ipa_dma.h/g" *
> sed -i "s/\<gsi_private\.h\>/ipa_dma_private.h/g" *
> sed -i "s/\<gsi_trans\.o\>/ipa_trans.o/g" Makefile
> sed -i "s/\<gsi_trans\.h\>/ipa_trans.h/g" *
> 
> for i in $symbols; do
>      sed -i "s/\<${i}\>/ipa_${i##gsi_}/g" *
> done
> 
> sed -i "s/\<struct gsi\>/struct ipa_dma/g" *
> 
> sed -i "s/\<struct ipa_dma\> \*gsi/struct ipa_dma *dma_subsys/g" ipa_trans.h ipa_dma.h
> sed -i "s/\<channel->gsi\>/channel->dma_subsys/g" *
> sed -i "s/\<trans->gsi\>/trans->dma_subsys/g" *
> 
> sed -i "s/\<struct ipa_dma\> gsi/struct ipa_dma dma_subsys/g" ipa.h ipa_dma.h
> sed -i "s/struct ipa, gsi/struct ipa, dma_subsys/g" *
> sed -i "s/\<ipa->gsi\>/ipa->dma_subsys/g" *
> 
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/Makefile                      |   2 +-
>   drivers/net/ipa/gsi.c                         | 305 +++++++++---------
>   drivers/net/ipa/ipa.h                         |   6 +-
>   drivers/net/ipa/ipa_cmd.c                     |  98 +++---
>   drivers/net/ipa/ipa_cmd.h                     |  20 +-
>   drivers/net/ipa/ipa_data-v3.5.1.c             |   2 +-
>   drivers/net/ipa/ipa_data-v4.11.c              |   2 +-
>   drivers/net/ipa/ipa_data-v4.2.c               |   2 +-
>   drivers/net/ipa/ipa_data-v4.5.c               |   2 +-
>   drivers/net/ipa/ipa_data-v4.9.c               |   2 +-
>   drivers/net/ipa/{gsi.h => ipa_dma.h}          |  56 ++--
>   .../ipa/{gsi_private.h => ipa_dma_private.h}  |  44 +--
>   drivers/net/ipa/ipa_endpoint.c                |  60 ++--
>   drivers/net/ipa/ipa_endpoint.h                |   6 +-
>   drivers/net/ipa/ipa_gsi.c                     |  18 +-
>   drivers/net/ipa/ipa_gsi.h                     |  12 +-
>   drivers/net/ipa/ipa_main.c                    |  14 +-
>   drivers/net/ipa/ipa_mem.c                     |  14 +-
>   drivers/net/ipa/ipa_table.c                   |  28 +-
>   drivers/net/ipa/{gsi_trans.c => ipa_trans.c}  | 172 +++++-----
>   drivers/net/ipa/{gsi_trans.h => ipa_trans.h}  |  74 ++---
>   21 files changed, 471 insertions(+), 468 deletions(-)
>   rename drivers/net/ipa/{gsi.h => ipa_dma.h} (85%)
>   rename drivers/net/ipa/{gsi_private.h => ipa_dma_private.h} (67%)
>   rename drivers/net/ipa/{gsi_trans.c => ipa_trans.c} (81%)
>   rename drivers/net/ipa/{gsi_trans.h => ipa_trans.h} (72%)
> 
> diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
> index bdfb2430ab2c..3cd021fb992e 100644
> --- a/drivers/net/ipa/Makefile
> +++ b/drivers/net/ipa/Makefile
> @@ -1,7 +1,7 @@
>   obj-$(CONFIG_QCOM_IPA)	+=	ipa.o
>   
>   ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
> -				ipa_table.o ipa_interrupt.o gsi.o gsi_trans.o \
> +				ipa_table.o ipa_interrupt.o gsi.o ipa_trans.o \
>   				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
>   				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
>   				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
> diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
> index a2fcdb1abdb9..74ae0d07f859 100644
> --- a/drivers/net/ipa/gsi.c
> +++ b/drivers/net/ipa/gsi.c
> @@ -15,10 +15,10 @@
>   #include <linux/platform_device.h>
>   #include <linux/netdevice.h>
>   
> -#include "gsi.h"
> +#include "ipa_dma.h"
>   #include "gsi_reg.h"
> -#include "gsi_private.h"
> -#include "gsi_trans.h"
> +#include "ipa_dma_private.h"
> +#include "ipa_trans.h"
>   #include "ipa_gsi.h"
>   #include "ipa_data.h"
>   #include "ipa_version.h"
> @@ -170,40 +170,41 @@ static void gsi_validate_build(void)
>   }
>   
>   /* Return the channel id associated with a given channel */
> -static u32 gsi_channel_id(struct gsi_channel *channel)
> +static u32 gsi_channel_id(struct ipa_channel *channel)
>   {
> -	return channel - &channel->gsi->channel[0];
> +	return channel - &channel->dma_subsys->channel[0];
>   }
>   
>   /* An initialized channel has a non-null GSI pointer */
> -static bool gsi_channel_initialized(struct gsi_channel *channel)
> +static bool gsi_channel_initialized(struct ipa_channel *channel)
>   {
> -	return !!channel->gsi;
> +	return !!channel->dma_subsys;
>   }
>   
>   /* Update the GSI IRQ type register with the cached value */
> -static void gsi_irq_type_update(struct gsi *gsi, u32 val)
> +static void gsi_irq_type_update(struct ipa_dma *gsi, u32 val)
>   {
>   	gsi->type_enabled_bitmap = val;
>   	iowrite32(val, gsi->virt + GSI_CNTXT_TYPE_IRQ_MSK_OFFSET);
>   }
>   
> -static void gsi_irq_type_enable(struct gsi *gsi, enum gsi_irq_type_id type_id)
> +static void gsi_irq_type_enable(struct ipa_dma *gsi, enum gsi_irq_type_id type_id)
>   {
>   	gsi_irq_type_update(gsi, gsi->type_enabled_bitmap | BIT(type_id));
>   }
>   
> -static void gsi_irq_type_disable(struct gsi *gsi, enum gsi_irq_type_id type_id)
> +static void gsi_irq_type_disable(struct ipa_dma *gsi, enum gsi_irq_type_id type_id)
>   {
>   	gsi_irq_type_update(gsi, gsi->type_enabled_bitmap & ~BIT(type_id));
>   }
>   
> +/* Turn off all GSI interrupts initially; there is no gsi_irq_teardown() */
>   /* Event ring commands are performed one at a time.  Their completion
>    * is signaled by the event ring control GSI interrupt type, which is
>    * only enabled when we issue an event ring command.  Only the event
>    * ring being operated on has this interrupt enabled.
>    */
> -static void gsi_irq_ev_ctrl_enable(struct gsi *gsi, u32 evt_ring_id)
> +static void gsi_irq_ev_ctrl_enable(struct ipa_dma *gsi, u32 evt_ring_id)
>   {
>   	u32 val = BIT(evt_ring_id);
>   
> @@ -218,7 +219,7 @@ static void gsi_irq_ev_ctrl_enable(struct gsi *gsi, u32 evt_ring_id)
>   }
>   
>   /* Disable event ring control interrupts */
> -static void gsi_irq_ev_ctrl_disable(struct gsi *gsi)
> +static void gsi_irq_ev_ctrl_disable(struct ipa_dma *gsi)
>   {
>   	gsi_irq_type_disable(gsi, GSI_EV_CTRL);
>   	iowrite32(0, gsi->virt + GSI_CNTXT_SRC_EV_CH_IRQ_MSK_OFFSET);
> @@ -229,7 +230,7 @@ static void gsi_irq_ev_ctrl_disable(struct gsi *gsi)
>    * enabled when we issue a channel command.  Only the channel being
>    * operated on has this interrupt enabled.
>    */
> -static void gsi_irq_ch_ctrl_enable(struct gsi *gsi, u32 channel_id)
> +static void gsi_irq_ch_ctrl_enable(struct ipa_dma *gsi, u32 channel_id)
>   {
>   	u32 val = BIT(channel_id);
>   
> @@ -244,13 +245,13 @@ static void gsi_irq_ch_ctrl_enable(struct gsi *gsi, u32 channel_id)
>   }
>   
>   /* Disable channel control interrupts */
> -static void gsi_irq_ch_ctrl_disable(struct gsi *gsi)
> +static void gsi_irq_ch_ctrl_disable(struct ipa_dma *gsi)
>   {
>   	gsi_irq_type_disable(gsi, GSI_CH_CTRL);
>   	iowrite32(0, gsi->virt + GSI_CNTXT_SRC_CH_IRQ_MSK_OFFSET);
>   }
>   
> -static void gsi_irq_ieob_enable_one(struct gsi *gsi, u32 evt_ring_id)
> +static void gsi_irq_ieob_enable_one(struct ipa_dma *gsi, u32 evt_ring_id)
>   {
>   	bool enable_ieob = !gsi->ieob_enabled_bitmap;
>   	u32 val;
> @@ -264,7 +265,7 @@ static void gsi_irq_ieob_enable_one(struct gsi *gsi, u32 evt_ring_id)
>   		gsi_irq_type_enable(gsi, GSI_IEOB);
>   }
>   
> -static void gsi_irq_ieob_disable(struct gsi *gsi, u32 event_mask)
> +static void gsi_irq_ieob_disable(struct ipa_dma *gsi, u32 event_mask)
>   {
>   	u32 val;
>   
> @@ -278,13 +279,13 @@ static void gsi_irq_ieob_disable(struct gsi *gsi, u32 event_mask)
>   	iowrite32(val, gsi->virt + GSI_CNTXT_SRC_IEOB_IRQ_MSK_OFFSET);
>   }
>   
> -static void gsi_irq_ieob_disable_one(struct gsi *gsi, u32 evt_ring_id)
> +static void gsi_irq_ieob_disable_one(struct ipa_dma *gsi, u32 evt_ring_id)
>   {
>   	gsi_irq_ieob_disable(gsi, BIT(evt_ring_id));
>   }
>   
>   /* Enable all GSI_interrupt types */
> -static void gsi_irq_enable(struct gsi *gsi)
> +static void gsi_irq_enable(struct ipa_dma *gsi)
>   {
>   	u32 val;
>   
> @@ -307,7 +308,7 @@ static void gsi_irq_enable(struct gsi *gsi)
>   }
>   
>   /* Disable all GSI interrupt types */
> -static void gsi_irq_disable(struct gsi *gsi)
> +static void gsi_irq_disable(struct ipa_dma *gsi)
>   {
>   	gsi_irq_type_update(gsi, 0);
>   
> @@ -340,7 +341,7 @@ static u32 gsi_ring_index(struct gsi_ring *ring, u32 offset)
>    * or false if it times out.
>    */
>   static bool
> -gsi_command(struct gsi *gsi, u32 reg, u32 val, struct completion *completion)
> +gsi_command(struct ipa_dma *gsi, u32 reg, u32 val, struct completion *completion)
>   {
>   	unsigned long timeout = msecs_to_jiffies(GSI_CMD_TIMEOUT);
>   
> @@ -353,7 +354,7 @@ gsi_command(struct gsi *gsi, u32 reg, u32 val, struct completion *completion)
>   
>   /* Return the hardware's notion of the current state of an event ring */
>   static enum gsi_evt_ring_state
> -gsi_evt_ring_state(struct gsi *gsi, u32 evt_ring_id)
> +gsi_evt_ring_state(struct ipa_dma *gsi, u32 evt_ring_id)
>   {
>   	u32 val;
>   
> @@ -363,7 +364,7 @@ gsi_evt_ring_state(struct gsi *gsi, u32 evt_ring_id)
>   }
>   
>   /* Issue an event ring command and wait for it to complete */
> -static void gsi_evt_ring_command(struct gsi *gsi, u32 evt_ring_id,
> +static void gsi_evt_ring_command(struct ipa_dma *gsi, u32 evt_ring_id,
>   				 enum gsi_evt_cmd_opcode opcode)
>   {
>   	struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
> @@ -390,7 +391,7 @@ static void gsi_evt_ring_command(struct gsi *gsi, u32 evt_ring_id,
>   }
>   
>   /* Allocate an event ring in NOT_ALLOCATED state */
> -static int gsi_evt_ring_alloc_command(struct gsi *gsi, u32 evt_ring_id)
> +static int gsi_evt_ring_alloc_command(struct ipa_dma *gsi, u32 evt_ring_id)
>   {
>   	enum gsi_evt_ring_state state;
>   
> @@ -416,7 +417,7 @@ static int gsi_evt_ring_alloc_command(struct gsi *gsi, u32 evt_ring_id)
>   }
>   
>   /* Reset a GSI event ring in ALLOCATED or ERROR state. */
> -static void gsi_evt_ring_reset_command(struct gsi *gsi, u32 evt_ring_id)
> +static void gsi_evt_ring_reset_command(struct ipa_dma *gsi, u32 evt_ring_id)
>   {
>   	enum gsi_evt_ring_state state;
>   
> @@ -440,7 +441,7 @@ static void gsi_evt_ring_reset_command(struct gsi *gsi, u32 evt_ring_id)
>   }
>   
>   /* Issue a hardware de-allocation request for an allocated event ring */
> -static void gsi_evt_ring_de_alloc_command(struct gsi *gsi, u32 evt_ring_id)
> +static void gsi_evt_ring_de_alloc_command(struct ipa_dma *gsi, u32 evt_ring_id)
>   {
>   	enum gsi_evt_ring_state state;
>   
> @@ -463,10 +464,10 @@ static void gsi_evt_ring_de_alloc_command(struct gsi *gsi, u32 evt_ring_id)
>   }
>   
>   /* Fetch the current state of a channel from hardware */
> -static enum gsi_channel_state gsi_channel_state(struct gsi_channel *channel)
> +static enum gsi_channel_state gsi_channel_state(struct ipa_channel *channel)
>   {
>   	u32 channel_id = gsi_channel_id(channel);
> -	void __iomem *virt = channel->gsi->virt;
> +	void __iomem *virt = channel->dma_subsys->virt;
>   	u32 val;
>   
>   	val = ioread32(virt + GSI_CH_C_CNTXT_0_OFFSET(channel_id));
> @@ -476,11 +477,11 @@ static enum gsi_channel_state gsi_channel_state(struct gsi_channel *channel)
>   
>   /* Issue a channel command and wait for it to complete */
>   static void
> -gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode opcode)
> +gsi_channel_command(struct ipa_channel *channel, enum gsi_ch_cmd_opcode opcode)
>   {
>   	struct completion *completion = &channel->completion;
>   	u32 channel_id = gsi_channel_id(channel);
> -	struct gsi *gsi = channel->gsi;
> +	struct ipa_dma *gsi = channel->dma_subsys;
>   	struct device *dev = gsi->dev;
>   	bool timeout;
>   	u32 val;
> @@ -502,9 +503,9 @@ gsi_channel_command(struct gsi_channel *channel, enum gsi_ch_cmd_opcode opcode)
>   }
>   
>   /* Allocate GSI channel in NOT_ALLOCATED state */
> -static int gsi_channel_alloc_command(struct gsi *gsi, u32 channel_id)
> +static int gsi_channel_alloc_command(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	struct device *dev = gsi->dev;
>   	enum gsi_channel_state state;
>   
> @@ -530,9 +531,9 @@ static int gsi_channel_alloc_command(struct gsi *gsi, u32 channel_id)
>   }
>   
>   /* Start an ALLOCATED channel */
> -static int gsi_channel_start_command(struct gsi_channel *channel)
> +static int gsi_channel_start_command(struct ipa_channel *channel)
>   {
> -	struct device *dev = channel->gsi->dev;
> +	struct device *dev = channel->dma_subsys->dev;
>   	enum gsi_channel_state state;
>   
>   	state = gsi_channel_state(channel);
> @@ -557,9 +558,9 @@ static int gsi_channel_start_command(struct gsi_channel *channel)
>   }
>   
>   /* Stop a GSI channel in STARTED state */
> -static int gsi_channel_stop_command(struct gsi_channel *channel)
> +static int gsi_channel_stop_command(struct ipa_channel *channel)
>   {
> -	struct device *dev = channel->gsi->dev;
> +	struct device *dev = channel->dma_subsys->dev;
>   	enum gsi_channel_state state;
>   
>   	state = gsi_channel_state(channel);
> @@ -595,9 +596,9 @@ static int gsi_channel_stop_command(struct gsi_channel *channel)
>   }
>   
>   /* Reset a GSI channel in ALLOCATED or ERROR state. */
> -static void gsi_channel_reset_command(struct gsi_channel *channel)
> +static void gsi_channel_reset_command(struct ipa_channel *channel)
>   {
> -	struct device *dev = channel->gsi->dev;
> +	struct device *dev = channel->dma_subsys->dev;
>   	enum gsi_channel_state state;
>   
>   	/* A short delay is required before a RESET command */
> @@ -623,9 +624,9 @@ static void gsi_channel_reset_command(struct gsi_channel *channel)
>   }
>   
>   /* Deallocate an ALLOCATED GSI channel */
> -static void gsi_channel_de_alloc_command(struct gsi *gsi, u32 channel_id)
> +static void gsi_channel_de_alloc_command(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	struct device *dev = gsi->dev;
>   	enum gsi_channel_state state;
>   
> @@ -651,7 +652,7 @@ static void gsi_channel_de_alloc_command(struct gsi *gsi, u32 channel_id)
>    * we supply one less than that with the doorbell.  Update the event ring
>    * index field with the value provided.
>    */
> -static void gsi_evt_ring_doorbell(struct gsi *gsi, u32 evt_ring_id, u32 index)
> +static void gsi_evt_ring_doorbell(struct ipa_dma *gsi, u32 evt_ring_id, u32 index)
>   {
>   	struct gsi_ring *ring = &gsi->evt_ring[evt_ring_id].ring;
>   	u32 val;
> @@ -664,7 +665,7 @@ static void gsi_evt_ring_doorbell(struct gsi *gsi, u32 evt_ring_id, u32 index)
>   }
>   
>   /* Program an event ring for use */
> -static void gsi_evt_ring_program(struct gsi *gsi, u32 evt_ring_id)
> +static void gsi_evt_ring_program(struct ipa_dma *gsi, u32 evt_ring_id)
>   {
>   	struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
>   	size_t size = evt_ring->ring.count * GSI_RING_ELEMENT_SIZE;
> @@ -707,11 +708,11 @@ static void gsi_evt_ring_program(struct gsi *gsi, u32 evt_ring_id)
>   }
>   
>   /* Find the transaction whose completion indicates a channel is quiesced */
> -static struct gsi_trans *gsi_channel_trans_last(struct gsi_channel *channel)
> +static struct ipa_trans *gsi_channel_trans_last(struct ipa_channel *channel)
>   {
> -	struct gsi_trans_info *trans_info = &channel->trans_info;
> +	struct ipa_trans_info *trans_info = &channel->trans_info;
>   	const struct list_head *list;
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   
>   	spin_lock_bh(&trans_info->spinlock);
>   
> @@ -737,7 +738,7 @@ static struct gsi_trans *gsi_channel_trans_last(struct gsi_channel *channel)
>   	if (list_empty(list))
>   		list = NULL;
>   done:
> -	trans = list ? list_last_entry(list, struct gsi_trans, links) : NULL;
> +	trans = list ? list_last_entry(list, struct ipa_trans, links) : NULL;
>   
>   	/* Caller will wait for this, so take a reference */
>   	if (trans)
> @@ -749,26 +750,26 @@ static struct gsi_trans *gsi_channel_trans_last(struct gsi_channel *channel)
>   }
>   
>   /* Wait for transaction activity on a channel to complete */
> -static void gsi_channel_trans_quiesce(struct gsi_channel *channel)
> +static void gsi_channel_trans_quiesce(struct ipa_channel *channel)
>   {
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   
>   	/* Get the last transaction, and wait for it to complete */
>   	trans = gsi_channel_trans_last(channel);
>   	if (trans) {
>   		wait_for_completion(&trans->completion);
> -		gsi_trans_free(trans);
> +		ipa_trans_free(trans);
>   	}
>   }
>   
>   /* Program a channel for use; there is no gsi_channel_deprogram() */
> -static void gsi_channel_program(struct gsi_channel *channel, bool doorbell)
> +static void gsi_channel_program(struct ipa_channel *channel, bool doorbell)
>   {
>   	size_t size = channel->tre_ring.count * GSI_RING_ELEMENT_SIZE;
>   	u32 channel_id = gsi_channel_id(channel);
>   	union gsi_channel_scratch scr = { };
>   	struct gsi_channel_scratch_gpi *gpi;
> -	struct gsi *gsi = channel->gsi;
> +	struct ipa_dma *gsi = channel->dma_subsys;
>   	u32 wrr_weight = 0;
>   	u32 val;
>   
> @@ -849,9 +850,9 @@ static void gsi_channel_program(struct gsi_channel *channel, bool doorbell)
>   	/* All done! */
>   }
>   
> -static int __gsi_channel_start(struct gsi_channel *channel, bool resume)
> +static int __gsi_channel_start(struct ipa_channel *channel, bool resume)
>   {
> -	struct gsi *gsi = channel->gsi;
> +	struct ipa_dma *gsi = channel->dma_subsys;
>   	int ret;
>   
>   	/* Prior to IPA v4.0 suspend/resume is not implemented by GSI */
> @@ -868,9 +869,9 @@ static int __gsi_channel_start(struct gsi_channel *channel, bool resume)
>   }
>   
>   /* Start an allocated GSI channel */
> -int gsi_channel_start(struct gsi *gsi, u32 channel_id)
> +int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	int ret;
>   
>   	/* Enable NAPI and the completion interrupt */
> @@ -886,7 +887,7 @@ int gsi_channel_start(struct gsi *gsi, u32 channel_id)
>   	return ret;
>   }
>   
> -static int gsi_channel_stop_retry(struct gsi_channel *channel)
> +static int gsi_channel_stop_retry(struct ipa_channel *channel)
>   {
>   	u32 retries = GSI_CHANNEL_STOP_RETRIES;
>   	int ret;
> @@ -901,9 +902,9 @@ static int gsi_channel_stop_retry(struct gsi_channel *channel)
>   	return ret;
>   }
>   
> -static int __gsi_channel_stop(struct gsi_channel *channel, bool suspend)
> +static int __gsi_channel_stop(struct ipa_channel *channel, bool suspend)
>   {
> -	struct gsi *gsi = channel->gsi;
> +	struct ipa_dma *gsi = channel->dma_subsys;
>   	int ret;
>   
>   	/* Wait for any underway transactions to complete before stopping. */
> @@ -923,9 +924,9 @@ static int __gsi_channel_stop(struct gsi_channel *channel, bool suspend)
>   }
>   
>   /* Stop a started channel */
> -int gsi_channel_stop(struct gsi *gsi, u32 channel_id)
> +int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	int ret;
>   
>   	ret = __gsi_channel_stop(channel, false);
> @@ -940,9 +941,9 @@ int gsi_channel_stop(struct gsi *gsi, u32 channel_id)
>   }
>   
>   /* Reset and reconfigure a channel, (possibly) enabling the doorbell engine */
> -void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool doorbell)
> +void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   
>   	mutex_lock(&gsi->mutex);
>   
> @@ -952,15 +953,15 @@ void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool doorbell)
>   		gsi_channel_reset_command(channel);
>   
>   	gsi_channel_program(channel, doorbell);
> -	gsi_channel_trans_cancel_pending(channel);
> +	ipa_channel_trans_cancel_pending(channel);
>   
>   	mutex_unlock(&gsi->mutex);
>   }
>   
>   /* Stop a started channel for suspend */
> -int gsi_channel_suspend(struct gsi *gsi, u32 channel_id)
> +int gsi_channel_suspend(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	int ret;
>   
>   	ret = __gsi_channel_stop(channel, true);
> @@ -974,27 +975,27 @@ int gsi_channel_suspend(struct gsi *gsi, u32 channel_id)
>   }
>   
>   /* Resume a suspended channel (starting if stopped) */
> -int gsi_channel_resume(struct gsi *gsi, u32 channel_id)
> +int gsi_channel_resume(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   
>   	return __gsi_channel_start(channel, true);
>   }
>   
>   /* Prevent all GSI interrupts while suspended */
> -void gsi_suspend(struct gsi *gsi)
> +void gsi_suspend(struct ipa_dma *gsi)
>   {
>   	disable_irq(gsi->irq);
>   }
>   
>   /* Allow all GSI interrupts again when resuming */
> -void gsi_resume(struct gsi *gsi)
> +void gsi_resume(struct ipa_dma *gsi)
>   {
>   	enable_irq(gsi->irq);
>   }
>   
>   /**
> - * gsi_channel_tx_queued() - Report queued TX transfers for a channel
> + * ipa_channel_tx_queued() - Report queued TX transfers for a channel
>    * @channel:	Channel for which to report
>    *
>    * Report to the network stack the number of bytes and transactions that
> @@ -1011,7 +1012,7 @@ void gsi_resume(struct gsi *gsi)
>    * provide accurate information to the network stack about how much
>    * work we've given the hardware at any point in time.
>    */
> -void gsi_channel_tx_queued(struct gsi_channel *channel)
> +void ipa_channel_tx_queued(struct ipa_channel *channel)
>   {
>   	u32 trans_count;
>   	u32 byte_count;
> @@ -1021,7 +1022,7 @@ void gsi_channel_tx_queued(struct gsi_channel *channel)
>   	channel->queued_byte_count = channel->byte_count;
>   	channel->queued_trans_count = channel->trans_count;
>   
> -	ipa_gsi_channel_tx_queued(channel->gsi, gsi_channel_id(channel),
> +	ipa_gsi_channel_tx_queued(channel->dma_subsys, gsi_channel_id(channel),
>   				  trans_count, byte_count);
>   }
>   
> @@ -1050,7 +1051,7 @@ void gsi_channel_tx_queued(struct gsi_channel *channel)
>    * point in time.
>    */
>   static void
> -gsi_channel_tx_update(struct gsi_channel *channel, struct gsi_trans *trans)
> +gsi_channel_tx_update(struct ipa_channel *channel, struct ipa_trans *trans)
>   {
>   	u64 byte_count = trans->byte_count + trans->len;
>   	u64 trans_count = trans->trans_count + 1;
> @@ -1060,12 +1061,12 @@ gsi_channel_tx_update(struct gsi_channel *channel, struct gsi_trans *trans)
>   	trans_count -= channel->compl_trans_count;
>   	channel->compl_trans_count += trans_count;
>   
> -	ipa_gsi_channel_tx_completed(channel->gsi, gsi_channel_id(channel),
> +	ipa_gsi_channel_tx_completed(channel->dma_subsys, gsi_channel_id(channel),
>   				     trans_count, byte_count);
>   }
>   
>   /* Channel control interrupt handler */
> -static void gsi_isr_chan_ctrl(struct gsi *gsi)
> +static void gsi_isr_chan_ctrl(struct ipa_dma *gsi)
>   {
>   	u32 channel_mask;
>   
> @@ -1074,7 +1075,7 @@ static void gsi_isr_chan_ctrl(struct gsi *gsi)
>   
>   	while (channel_mask) {
>   		u32 channel_id = __ffs(channel_mask);
> -		struct gsi_channel *channel;
> +		struct ipa_channel *channel;
>   
>   		channel_mask ^= BIT(channel_id);
>   
> @@ -1085,7 +1086,7 @@ static void gsi_isr_chan_ctrl(struct gsi *gsi)
>   }
>   
>   /* Event ring control interrupt handler */
> -static void gsi_isr_evt_ctrl(struct gsi *gsi)
> +static void gsi_isr_evt_ctrl(struct ipa_dma *gsi)
>   {
>   	u32 event_mask;
>   
> @@ -1106,7 +1107,7 @@ static void gsi_isr_evt_ctrl(struct gsi *gsi)
>   
>   /* Global channel error interrupt handler */
>   static void
> -gsi_isr_glob_chan_err(struct gsi *gsi, u32 err_ee, u32 channel_id, u32 code)
> +gsi_isr_glob_chan_err(struct ipa_dma *gsi, u32 err_ee, u32 channel_id, u32 code)
>   {
>   	if (code == GSI_OUT_OF_RESOURCES) {
>   		dev_err(gsi->dev, "channel %u out of resources\n", channel_id);
> @@ -1121,7 +1122,7 @@ gsi_isr_glob_chan_err(struct gsi *gsi, u32 err_ee, u32 channel_id, u32 code)
>   
>   /* Global event error interrupt handler */
>   static void
> -gsi_isr_glob_evt_err(struct gsi *gsi, u32 err_ee, u32 evt_ring_id, u32 code)
> +gsi_isr_glob_evt_err(struct ipa_dma *gsi, u32 err_ee, u32 evt_ring_id, u32 code)
>   {
>   	if (code == GSI_OUT_OF_RESOURCES) {
>   		struct gsi_evt_ring *evt_ring = &gsi->evt_ring[evt_ring_id];
> @@ -1139,7 +1140,7 @@ gsi_isr_glob_evt_err(struct gsi *gsi, u32 err_ee, u32 evt_ring_id, u32 code)
>   }
>   
>   /* Global error interrupt handler */
> -static void gsi_isr_glob_err(struct gsi *gsi)
> +static void gsi_isr_glob_err(struct ipa_dma *gsi)
>   {
>   	enum gsi_err_type type;
>   	enum gsi_err_code code;
> @@ -1166,7 +1167,7 @@ static void gsi_isr_glob_err(struct gsi *gsi)
>   }
>   
>   /* Generic EE interrupt handler */
> -static void gsi_isr_gp_int1(struct gsi *gsi)
> +static void gsi_isr_gp_int1(struct ipa_dma *gsi)
>   {
>   	u32 result;
>   	u32 val;
> @@ -1208,7 +1209,7 @@ static void gsi_isr_gp_int1(struct gsi *gsi)
>   }
>   
>   /* Inter-EE interrupt handler */
> -static void gsi_isr_glob_ee(struct gsi *gsi)
> +static void gsi_isr_glob_ee(struct ipa_dma *gsi)
>   {
>   	u32 val;
>   
> @@ -1231,7 +1232,7 @@ static void gsi_isr_glob_ee(struct gsi *gsi)
>   }
>   
>   /* I/O completion interrupt event */
> -static void gsi_isr_ieob(struct gsi *gsi)
> +static void gsi_isr_ieob(struct ipa_dma *gsi)
>   {
>   	u32 event_mask;
>   
> @@ -1249,7 +1250,7 @@ static void gsi_isr_ieob(struct gsi *gsi)
>   }
>   
>   /* General event interrupts represent serious problems, so report them */
> -static void gsi_isr_general(struct gsi *gsi)
> +static void gsi_isr_general(struct ipa_dma *gsi)
>   {
>   	struct device *dev = gsi->dev;
>   	u32 val;
> @@ -1270,7 +1271,7 @@ static void gsi_isr_general(struct gsi *gsi)
>    */
>   static irqreturn_t gsi_isr(int irq, void *dev_id)
>   {
> -	struct gsi *gsi = dev_id;
> +	struct ipa_dma *gsi = dev_id;
>   	u32 intr_mask;
>   	u32 cnt = 0;
>   
> @@ -1316,7 +1317,7 @@ static irqreturn_t gsi_isr(int irq, void *dev_id)
>   }
>   
>   /* Init function for GSI IRQ lookup; there is no gsi_irq_exit() */
> -static int gsi_irq_init(struct gsi *gsi, struct platform_device *pdev)
> +static int gsi_irq_init(struct ipa_dma *gsi, struct platform_device *pdev)
>   {
>   	int ret;
>   
> @@ -1330,7 +1331,7 @@ static int gsi_irq_init(struct gsi *gsi, struct platform_device *pdev)
>   }
>   
>   /* Return the transaction associated with a transfer completion event */
> -static struct gsi_trans *gsi_event_trans(struct gsi_channel *channel,
> +static struct ipa_trans *gsi_event_trans(struct ipa_channel *channel,
>   					 struct gsi_event *event)
>   {
>   	u32 tre_offset;
> @@ -1364,12 +1365,12 @@ static struct gsi_trans *gsi_event_trans(struct gsi_channel *channel,
>    */
>   static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
>   {
> -	struct gsi_channel *channel = evt_ring->channel;
> +	struct ipa_channel *channel = evt_ring->channel;
>   	struct gsi_ring *ring = &evt_ring->ring;
> -	struct gsi_trans_info *trans_info;
> +	struct ipa_trans_info *trans_info;
>   	struct gsi_event *event_done;
>   	struct gsi_event *event;
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   	u32 byte_count = 0;
>   	u32 old_index;
>   	u32 event_avail;
> @@ -1399,7 +1400,7 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
>   			event++;
>   		else
>   			event = gsi_ring_virt(ring, 0);
> -		trans = gsi_trans_pool_next(&trans_info->pool, trans);
> +		trans = ipa_trans_pool_next(&trans_info->pool, trans);
>   	} while (event != event_done);
>   
>   	/* We record RX bytes when they are received */
> @@ -1408,7 +1409,7 @@ static void gsi_evt_ring_rx_update(struct gsi_evt_ring *evt_ring, u32 index)
>   }
>   
>   /* Initialize a ring, including allocating DMA memory for its entries */
> -static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count)
> +static int gsi_ring_alloc(struct ipa_dma *gsi, struct gsi_ring *ring, u32 count)
>   {
>   	u32 size = count * GSI_RING_ELEMENT_SIZE;
>   	struct device *dev = gsi->dev;
> @@ -1429,7 +1430,7 @@ static int gsi_ring_alloc(struct gsi *gsi, struct gsi_ring *ring, u32 count)
>   }
>   
>   /* Free a previously-allocated ring */
> -static void gsi_ring_free(struct gsi *gsi, struct gsi_ring *ring)
> +static void gsi_ring_free(struct ipa_dma *gsi, struct gsi_ring *ring)
>   {
>   	size_t size = ring->count * GSI_RING_ELEMENT_SIZE;
>   
> @@ -1437,7 +1438,7 @@ static void gsi_ring_free(struct gsi *gsi, struct gsi_ring *ring)
>   }
>   
>   /* Allocate an available event ring id */
> -static int gsi_evt_ring_id_alloc(struct gsi *gsi)
> +static int gsi_evt_ring_id_alloc(struct ipa_dma *gsi)
>   {
>   	u32 evt_ring_id;
>   
> @@ -1453,17 +1454,17 @@ static int gsi_evt_ring_id_alloc(struct gsi *gsi)
>   }
>   
>   /* Free a previously-allocated event ring id */
> -static void gsi_evt_ring_id_free(struct gsi *gsi, u32 evt_ring_id)
> +static void gsi_evt_ring_id_free(struct ipa_dma *gsi, u32 evt_ring_id)
>   {
>   	gsi->event_bitmap &= ~BIT(evt_ring_id);
>   }
>   
>   /* Ring a channel doorbell, reporting the first un-filled entry */
> -void gsi_channel_doorbell(struct gsi_channel *channel)
> +void gsi_channel_doorbell(struct ipa_channel *channel)
>   {
>   	struct gsi_ring *tre_ring = &channel->tre_ring;
>   	u32 channel_id = gsi_channel_id(channel);
> -	struct gsi *gsi = channel->gsi;
> +	struct ipa_dma *gsi = channel->dma_subsys;
>   	u32 val;
>   
>   	/* Note: index *must* be used modulo the ring count here */
> @@ -1472,12 +1473,12 @@ void gsi_channel_doorbell(struct gsi_channel *channel)
>   }
>   
>   /* Consult hardware, move any newly completed transactions to completed list */
> -static struct gsi_trans *gsi_channel_update(struct gsi_channel *channel)
> +static struct ipa_trans *gsi_channel_update(struct ipa_channel *channel)
>   {
>   	u32 evt_ring_id = channel->evt_ring_id;
> -	struct gsi *gsi = channel->gsi;
> +	struct ipa_dma *gsi = channel->dma_subsys;
>   	struct gsi_evt_ring *evt_ring;
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   	struct gsi_ring *ring;
>   	u32 offset;
>   	u32 index;
> @@ -1510,14 +1511,14 @@ static struct gsi_trans *gsi_channel_update(struct gsi_channel *channel)
>   	else
>   		gsi_evt_ring_rx_update(evt_ring, index);
>   
> -	gsi_trans_move_complete(trans);
> +	ipa_trans_move_complete(trans);
>   
>   	/* Tell the hardware we've handled these events */
> -	gsi_evt_ring_doorbell(channel->gsi, channel->evt_ring_id, index);
> +	gsi_evt_ring_doorbell(channel->dma_subsys, channel->evt_ring_id, index);
>   
> -	gsi_trans_free(trans);
> +	ipa_trans_free(trans);
>   
> -	return gsi_channel_trans_complete(channel);
> +	return ipa_channel_trans_complete(channel);
>   }
>   
>   /**
> @@ -1532,17 +1533,17 @@ static struct gsi_trans *gsi_channel_update(struct gsi_channel *channel)
>    * completed list and the new first entry is returned.  If there are no more
>    * completed transactions, a null pointer is returned.
>    */
> -static struct gsi_trans *gsi_channel_poll_one(struct gsi_channel *channel)
> +static struct ipa_trans *gsi_channel_poll_one(struct ipa_channel *channel)
>   {
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   
>   	/* Get the first transaction from the completed list */
> -	trans = gsi_channel_trans_complete(channel);
> +	trans = ipa_channel_trans_complete(channel);
>   	if (!trans)	/* List is empty; see if there's more to do */
>   		trans = gsi_channel_update(channel);
>   
>   	if (trans)
> -		gsi_trans_move_polled(trans);
> +		ipa_trans_move_polled(trans);
>   
>   	return trans;
>   }
> @@ -1556,26 +1557,26 @@ static struct gsi_trans *gsi_channel_poll_one(struct gsi_channel *channel)
>    *
>    * Single transactions completed by hardware are polled until either
>    * the budget is exhausted, or there are no more.  Each transaction
> - * polled is passed to gsi_trans_complete(), to perform remaining
> + * polled is passed to ipa_trans_complete(), to perform remaining
>    * completion processing and retire/free the transaction.
>    */
>   static int gsi_channel_poll(struct napi_struct *napi, int budget)
>   {
> -	struct gsi_channel *channel;
> +	struct ipa_channel *channel;
>   	int count;
>   
> -	channel = container_of(napi, struct gsi_channel, napi);
> +	channel = container_of(napi, struct ipa_channel, napi);
>   	for (count = 0; count < budget; count++) {
> -		struct gsi_trans *trans;
> +		struct ipa_trans *trans;
>   
>   		trans = gsi_channel_poll_one(channel);
>   		if (!trans)
>   			break;
> -		gsi_trans_complete(trans);
> +		ipa_trans_complete(trans);
>   	}
>   
>   	if (count < budget && napi_complete(napi))
> -		gsi_irq_ieob_enable_one(channel->gsi, channel->evt_ring_id);
> +		gsi_irq_ieob_enable_one(channel->dma_subsys, channel->evt_ring_id);
>   
>   	return count;
>   }
> @@ -1595,9 +1596,9 @@ static u32 gsi_event_bitmap_init(u32 evt_ring_max)
>   }
>   
>   /* Setup function for a single channel */
> -static int gsi_channel_setup_one(struct gsi *gsi, u32 channel_id)
> +static int gsi_channel_setup_one(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	u32 evt_ring_id = channel->evt_ring_id;
>   	int ret;
>   
> @@ -1633,9 +1634,9 @@ static int gsi_channel_setup_one(struct gsi *gsi, u32 channel_id)
>   }
>   
>   /* Inverse of gsi_channel_setup_one() */
> -static void gsi_channel_teardown_one(struct gsi *gsi, u32 channel_id)
> +static void gsi_channel_teardown_one(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	u32 evt_ring_id = channel->evt_ring_id;
>   
>   	if (!gsi_channel_initialized(channel))
> @@ -1648,7 +1649,7 @@ static void gsi_channel_teardown_one(struct gsi *gsi, u32 channel_id)
>   	gsi_evt_ring_de_alloc_command(gsi, evt_ring_id);
>   }
>   
> -static int gsi_generic_command(struct gsi *gsi, u32 channel_id,
> +static int gsi_generic_command(struct ipa_dma *gsi, u32 channel_id,
>   			       enum gsi_generic_cmd_opcode opcode)
>   {
>   	struct completion *completion = &gsi->completion;
> @@ -1689,13 +1690,13 @@ static int gsi_generic_command(struct gsi *gsi, u32 channel_id,
>   	return -ETIMEDOUT;
>   }
>   
> -static int gsi_modem_channel_alloc(struct gsi *gsi, u32 channel_id)
> +static int gsi_modem_channel_alloc(struct ipa_dma *gsi, u32 channel_id)
>   {
>   	return gsi_generic_command(gsi, channel_id,
>   				   GSI_GENERIC_ALLOCATE_CHANNEL);
>   }
>   
> -static void gsi_modem_channel_halt(struct gsi *gsi, u32 channel_id)
> +static void gsi_modem_channel_halt(struct ipa_dma *gsi, u32 channel_id)
>   {
>   	u32 retries = GSI_CHANNEL_MODEM_HALT_RETRIES;
>   	int ret;
> @@ -1711,7 +1712,7 @@ static void gsi_modem_channel_halt(struct gsi *gsi, u32 channel_id)
>   }
>   
>   /* Setup function for channels */
> -static int gsi_channel_setup(struct gsi *gsi)
> +static int gsi_channel_setup(struct ipa_dma *gsi)
>   {
>   	u32 channel_id = 0;
>   	u32 mask;
> @@ -1729,7 +1730,7 @@ static int gsi_channel_setup(struct gsi *gsi)
>   
>   	/* Make sure no channels were defined that hardware does not support */
>   	while (channel_id < GSI_CHANNEL_COUNT_MAX) {
> -		struct gsi_channel *channel = &gsi->channel[channel_id++];
> +		struct ipa_channel *channel = &gsi->channel[channel_id++];
>   
>   		if (!gsi_channel_initialized(channel))
>   			continue;
> @@ -1781,7 +1782,7 @@ static int gsi_channel_setup(struct gsi *gsi)
>   }
>   
>   /* Inverse of gsi_channel_setup() */
> -static void gsi_channel_teardown(struct gsi *gsi)
> +static void gsi_channel_teardown(struct ipa_dma *gsi)
>   {
>   	u32 mask = gsi->modem_channel_bitmap;
>   	u32 channel_id;
> @@ -1807,7 +1808,7 @@ static void gsi_channel_teardown(struct gsi *gsi)
>   }
>   
>   /* Turn off all GSI interrupts initially */
> -static int gsi_irq_setup(struct gsi *gsi)
> +static int gsi_irq_setup(struct ipa_dma *gsi)
>   {
>   	int ret;
>   
> @@ -1843,13 +1844,13 @@ static int gsi_irq_setup(struct gsi *gsi)
>   	return ret;
>   }
>   
> -static void gsi_irq_teardown(struct gsi *gsi)
> +static void gsi_irq_teardown(struct ipa_dma *gsi)
>   {
>   	free_irq(gsi->irq, gsi);
>   }
>   
>   /* Get # supported channel and event rings; there is no gsi_ring_teardown() */
> -static int gsi_ring_setup(struct gsi *gsi)
> +static int gsi_ring_setup(struct ipa_dma *gsi)
>   {
>   	struct device *dev = gsi->dev;
>   	u32 count;
> @@ -1894,7 +1895,7 @@ static int gsi_ring_setup(struct gsi *gsi)
>   }
>   
>   /* Setup function for GSI.  GSI firmware must be loaded and initialized */
> -int gsi_setup(struct gsi *gsi)
> +int gsi_setup(struct ipa_dma *gsi)
>   {
>   	u32 val;
>   	int ret;
> @@ -1930,16 +1931,16 @@ int gsi_setup(struct gsi *gsi)
>   }
>   
>   /* Inverse of gsi_setup() */
> -void gsi_teardown(struct gsi *gsi)
> +void gsi_teardown(struct ipa_dma *gsi)
>   {
>   	gsi_channel_teardown(gsi);
>   	gsi_irq_teardown(gsi);
>   }
>   
>   /* Initialize a channel's event ring */
> -static int gsi_channel_evt_ring_init(struct gsi_channel *channel)
> +static int gsi_channel_evt_ring_init(struct ipa_channel *channel)
>   {
> -	struct gsi *gsi = channel->gsi;
> +	struct ipa_dma *gsi = channel->dma_subsys;
>   	struct gsi_evt_ring *evt_ring;
>   	int ret;
>   
> @@ -1964,10 +1965,10 @@ static int gsi_channel_evt_ring_init(struct gsi_channel *channel)
>   }
>   
>   /* Inverse of gsi_channel_evt_ring_init() */
> -static void gsi_channel_evt_ring_exit(struct gsi_channel *channel)
> +static void gsi_channel_evt_ring_exit(struct ipa_channel *channel)
>   {
>   	u32 evt_ring_id = channel->evt_ring_id;
> -	struct gsi *gsi = channel->gsi;
> +	struct ipa_dma *gsi = channel->dma_subsys;
>   	struct gsi_evt_ring *evt_ring;
>   
>   	evt_ring = &gsi->evt_ring[evt_ring_id];
> @@ -1976,7 +1977,7 @@ static void gsi_channel_evt_ring_exit(struct gsi_channel *channel)
>   }
>   
>   /* Init function for event rings; there is no gsi_evt_ring_exit() */
> -static void gsi_evt_ring_init(struct gsi *gsi)
> +static void gsi_evt_ring_init(struct ipa_dma *gsi)
>   {
>   	u32 evt_ring_id = 0;
>   
> @@ -1987,7 +1988,7 @@ static void gsi_evt_ring_init(struct gsi *gsi)
>   	while (++evt_ring_id < GSI_EVT_RING_COUNT_MAX);
>   }
>   
> -static bool gsi_channel_data_valid(struct gsi *gsi,
> +static bool gsi_channel_data_valid(struct ipa_dma *gsi,
>   				   const struct ipa_gsi_endpoint_data *data)
>   {
>   	u32 channel_id = data->channel_id;
> @@ -2040,11 +2041,11 @@ static bool gsi_channel_data_valid(struct gsi *gsi,
>   }
>   
>   /* Init function for a single channel */
> -static int gsi_channel_init_one(struct gsi *gsi,
> +static int gsi_channel_init_one(struct ipa_dma *gsi,
>   				const struct ipa_gsi_endpoint_data *data,
>   				bool command)
>   {
> -	struct gsi_channel *channel;
> +	struct ipa_channel *channel;
>   	u32 tre_count;
>   	int ret;
>   
> @@ -2063,7 +2064,7 @@ static int gsi_channel_init_one(struct gsi *gsi,
>   	channel = &gsi->channel[data->channel_id];
>   	memset(channel, 0, sizeof(*channel));
>   
> -	channel->gsi = gsi;
> +	channel->dma_subsys = gsi;
>   	channel->toward_ipa = data->toward_ipa;
>   	channel->command = command;
>   	channel->tlv_count = data->channel.tlv_count;
> @@ -2082,7 +2083,7 @@ static int gsi_channel_init_one(struct gsi *gsi,
>   		goto err_channel_evt_ring_exit;
>   	}
>   
> -	ret = gsi_channel_trans_init(gsi, data->channel_id);
> +	ret = ipa_channel_trans_init(gsi, data->channel_id);
>   	if (ret)
>   		goto err_ring_free;
>   
> @@ -2094,32 +2095,32 @@ static int gsi_channel_init_one(struct gsi *gsi,
>   	if (!ret)
>   		return 0;	/* Success! */
>   
> -	gsi_channel_trans_exit(channel);
> +	ipa_channel_trans_exit(channel);
>   err_ring_free:
>   	gsi_ring_free(gsi, &channel->tre_ring);
>   err_channel_evt_ring_exit:
>   	gsi_channel_evt_ring_exit(channel);
>   err_clear_gsi:
> -	channel->gsi = NULL;	/* Mark it not (fully) initialized */
> +	channel->dma_subsys = NULL;	/* Mark it not (fully) initialized */
>   
>   	return ret;
>   }
>   
>   /* Inverse of gsi_channel_init_one() */
> -static void gsi_channel_exit_one(struct gsi_channel *channel)
> +static void gsi_channel_exit_one(struct ipa_channel *channel)
>   {
>   	if (!gsi_channel_initialized(channel))
>   		return;
>   
>   	if (channel->command)
>   		ipa_cmd_pool_exit(channel);
> -	gsi_channel_trans_exit(channel);
> -	gsi_ring_free(channel->gsi, &channel->tre_ring);
> +	ipa_channel_trans_exit(channel);
> +	gsi_ring_free(channel->dma_subsys, &channel->tre_ring);
>   	gsi_channel_evt_ring_exit(channel);
>   }
>   
>   /* Init function for channels */
> -static int gsi_channel_init(struct gsi *gsi, u32 count,
> +static int gsi_channel_init(struct ipa_dma *gsi, u32 count,
>   			    const struct ipa_gsi_endpoint_data *data)
>   {
>   	bool modem_alloc;
> @@ -2168,7 +2169,7 @@ static int gsi_channel_init(struct gsi *gsi, u32 count,
>   }
>   
>   /* Inverse of gsi_channel_init() */
> -static void gsi_channel_exit(struct gsi *gsi)
> +static void gsi_channel_exit(struct ipa_dma *gsi)
>   {
>   	u32 channel_id = GSI_CHANNEL_COUNT_MAX - 1;
>   
> @@ -2179,7 +2180,7 @@ static void gsi_channel_exit(struct gsi *gsi)
>   }
>   
>   /* Init function for GSI.  GSI hardware does not need to be "ready" */
> -int gsi_init(struct gsi *gsi, struct platform_device *pdev,
> +int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
>   	     enum ipa_version version, u32 count,
>   	     const struct ipa_gsi_endpoint_data *data)
>   {
> @@ -2249,7 +2250,7 @@ int gsi_init(struct gsi *gsi, struct platform_device *pdev,
>   }
>   
>   /* Inverse of gsi_init() */
> -void gsi_exit(struct gsi *gsi)
> +void gsi_exit(struct ipa_dma *gsi)
>   {
>   	mutex_destroy(&gsi->mutex);
>   	gsi_channel_exit(gsi);
> @@ -2274,20 +2275,20 @@ void gsi_exit(struct gsi *gsi)
>    * maximum number of outstanding TREs allows the number of entries in
>    * a pool to avoid crossing that power-of-2 boundary, and this can
>    * substantially reduce pool memory requirements.  The number we
> - * reduce it by matches the number added in gsi_trans_pool_init().
> + * reduce it by matches the number added in ipa_trans_pool_init().
>    */
> -u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id)
> +u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   
>   	/* Hardware limit is channel->tre_count - 1 */
>   	return channel->tre_count - (channel->tlv_count - 1);
>   }
>   
>   /* Returns the maximum number of TREs in a single transaction for a channel */
> -u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id)
> +u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   
>   	return channel->tlv_count;
>   }
> diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
> index 9fc880eb7e3a..80a83ac45729 100644
> --- a/drivers/net/ipa/ipa.h
> +++ b/drivers/net/ipa/ipa.h
> @@ -12,7 +12,7 @@
>   #include <linux/pm_wakeup.h>
>   
>   #include "ipa_version.h"
> -#include "gsi.h"
> +#include "ipa_dma.h"
>   #include "ipa_mem.h"
>   #include "ipa_qmi.h"
>   #include "ipa_endpoint.h"
> @@ -29,7 +29,7 @@ struct ipa_interrupt;
>   
>   /**
>    * struct ipa - IPA information
> - * @gsi:		Embedded GSI structure
> + * @ipa_dma:		Embedded IPA DMA structure
>    * @version:		IPA hardware version
>    * @pdev:		Platform device
>    * @completion:		Used to signal pipeline clear transfer complete
> @@ -71,7 +71,7 @@ struct ipa_interrupt;
>    * @qmi:		QMI information
>    */
>   struct ipa {
> -	struct gsi gsi;
> +	struct ipa_dma dma_subsys;
>   	enum ipa_version version;
>   	struct platform_device *pdev;
>   	struct completion completion;
> diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> index cff51731195a..3db9e94e484f 100644
> --- a/drivers/net/ipa/ipa_cmd.c
> +++ b/drivers/net/ipa/ipa_cmd.c
> @@ -10,8 +10,8 @@
>   #include <linux/bitfield.h>
>   #include <linux/dma-direction.h>
>   
> -#include "gsi.h"
> -#include "gsi_trans.h"
> +#include "ipa_dma.h"
> +#include "ipa_trans.h"
>   #include "ipa.h"
>   #include "ipa_endpoint.h"
>   #include "ipa_table.h"
> @@ -32,8 +32,8 @@
>    * immediate command's opcode.  The payload for a command resides in DRAM
>    * and is described by a single scatterlist entry in its transaction.
>    * Commands do not require a transaction completion callback.  To commit
> - * an immediate command transaction, either gsi_trans_commit_wait() or
> - * gsi_trans_commit_wait_timeout() is used.
> + * an immediate command transaction, either ipa_trans_commit_wait() or
> + * ipa_trans_commit_wait_timeout() is used.
>    */
>   
>   /* Some commands can wait until indicated pipeline stages are clear */
> @@ -346,10 +346,10 @@ bool ipa_cmd_data_valid(struct ipa *ipa)
>   }
>   
>   
> -int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max)
> +int ipa_cmd_pool_init(struct ipa_channel *channel, u32 tre_max)
>   {
> -	struct gsi_trans_info *trans_info = &channel->trans_info;
> -	struct device *dev = channel->gsi->dev;
> +	struct ipa_trans_info *trans_info = &channel->trans_info;
> +	struct device *dev = channel->dma_subsys->dev;
>   	int ret;
>   
>   	/* This is as good a place as any to validate build constants */
> @@ -359,50 +359,50 @@ int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_max)
>   	 * a single transaction can require up to tlv_count of them,
>   	 * so we treat them as if that many can be allocated at once.
>   	 */
> -	ret = gsi_trans_pool_init_dma(dev, &trans_info->cmd_pool,
> +	ret = ipa_trans_pool_init_dma(dev, &trans_info->cmd_pool,
>   				      sizeof(union ipa_cmd_payload),
>   				      tre_max, channel->tlv_count);
>   	if (ret)
>   		return ret;
>   
>   	/* Each TRE needs a command info structure */
> -	ret = gsi_trans_pool_init(&trans_info->info_pool,
> +	ret = ipa_trans_pool_init(&trans_info->info_pool,
>   				   sizeof(struct ipa_cmd_info),
>   				   tre_max, channel->tlv_count);
>   	if (ret)
> -		gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
> +		ipa_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
>   
>   	return ret;
>   }
>   
> -void ipa_cmd_pool_exit(struct gsi_channel *channel)
> +void ipa_cmd_pool_exit(struct ipa_channel *channel)
>   {
> -	struct gsi_trans_info *trans_info = &channel->trans_info;
> -	struct device *dev = channel->gsi->dev;
> +	struct ipa_trans_info *trans_info = &channel->trans_info;
> +	struct device *dev = channel->dma_subsys->dev;
>   
> -	gsi_trans_pool_exit(&trans_info->info_pool);
> -	gsi_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
> +	ipa_trans_pool_exit(&trans_info->info_pool);
> +	ipa_trans_pool_exit_dma(dev, &trans_info->cmd_pool);
>   }
>   
>   static union ipa_cmd_payload *
>   ipa_cmd_payload_alloc(struct ipa *ipa, dma_addr_t *addr)
>   {
> -	struct gsi_trans_info *trans_info;
> +	struct ipa_trans_info *trans_info;
>   	struct ipa_endpoint *endpoint;
>   
>   	endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
> -	trans_info = &ipa->gsi.channel[endpoint->channel_id].trans_info;
> +	trans_info = &ipa->dma_subsys.channel[endpoint->channel_id].trans_info;
>   
> -	return gsi_trans_pool_alloc_dma(&trans_info->cmd_pool, addr);
> +	return ipa_trans_pool_alloc_dma(&trans_info->cmd_pool, addr);
>   }
>   
>   /* If hash_size is 0, hash_offset and hash_addr ignored. */
> -void ipa_cmd_table_init_add(struct gsi_trans *trans,
> +void ipa_cmd_table_init_add(struct ipa_trans *trans,
>   			    enum ipa_cmd_opcode opcode, u16 size, u32 offset,
>   			    dma_addr_t addr, u16 hash_size, u32 hash_offset,
>   			    dma_addr_t hash_addr)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum dma_data_direction direction = DMA_TO_DEVICE;
>   	struct ipa_cmd_hw_ip_fltrt_init *payload;
>   	union ipa_cmd_payload *cmd_payload;
> @@ -433,15 +433,15 @@ void ipa_cmd_table_init_add(struct gsi_trans *trans,
>   	payload->flags = cpu_to_le64(val);
>   	payload->nhash_rules_addr = cpu_to_le64(addr);
>   
> -	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> +	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
>   }
>   
>   /* Initialize header space in IPA-local memory */
> -void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
> +void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
>   				dma_addr_t addr)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL;
>   	enum dma_data_direction direction = DMA_TO_DEVICE;
>   	struct ipa_cmd_hw_hdr_init_local *payload;
> @@ -464,14 +464,14 @@ void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
>   	flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
>   	payload->flags = cpu_to_le32(flags);
>   
> -	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> +	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
>   }
>   
> -void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
> +void ipa_cmd_register_write_add(struct ipa_trans *trans, u32 offset, u32 value,
>   				u32 mask, bool clear_full)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	struct ipa_cmd_register_write *payload;
>   	union ipa_cmd_payload *cmd_payload;
>   	u32 opcode = IPA_CMD_REGISTER_WRITE;
> @@ -521,14 +521,14 @@ void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
>   	payload->value_mask = cpu_to_le32(mask);
>   	payload->clear_options = cpu_to_le32(options);
>   
> -	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> +	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  DMA_NONE, opcode);
>   }
>   
>   /* Skip IP packet processing on the next data transfer on a TX channel */
> -static void ipa_cmd_ip_packet_init_add(struct gsi_trans *trans, u8 endpoint_id)
> +static void ipa_cmd_ip_packet_init_add(struct ipa_trans *trans, u8 endpoint_id)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_INIT;
>   	enum dma_data_direction direction = DMA_TO_DEVICE;
>   	struct ipa_cmd_ip_packet_init *payload;
> @@ -541,15 +541,15 @@ static void ipa_cmd_ip_packet_init_add(struct gsi_trans *trans, u8 endpoint_id)
>   	payload->dest_endpoint = u8_encode_bits(endpoint_id,
>   					IPA_PACKET_INIT_DEST_ENDPOINT_FMASK);
>   
> -	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> +	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
>   }
>   
>   /* Use a DMA command to read or write a block of IPA-resident memory */
> -void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset, u16 size,
> +void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
>   				dma_addr_t addr, bool toward_ipa)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM;
>   	struct ipa_cmd_hw_dma_mem_mem *payload;
>   	union ipa_cmd_payload *cmd_payload;
> @@ -586,13 +586,13 @@ void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset, u16 size,
>   
>   	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
>   
> -	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> +	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
>   }
>   
> -static void ipa_cmd_ip_tag_status_add(struct gsi_trans *trans)
> +static void ipa_cmd_ip_tag_status_add(struct ipa_trans *trans)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum ipa_cmd_opcode opcode = IPA_CMD_IP_PACKET_TAG_STATUS;
>   	enum dma_data_direction direction = DMA_TO_DEVICE;
>   	struct ipa_cmd_ip_packet_tag_status *payload;
> @@ -604,14 +604,14 @@ static void ipa_cmd_ip_tag_status_add(struct gsi_trans *trans)
>   
>   	payload->tag = le64_encode_bits(0, IP_PACKET_TAG_STATUS_TAG_FMASK);
>   
> -	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> +	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
>   }
>   
>   /* Issue a small command TX data transfer */
> -static void ipa_cmd_transfer_add(struct gsi_trans *trans)
> +static void ipa_cmd_transfer_add(struct ipa_trans *trans)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum dma_data_direction direction = DMA_TO_DEVICE;
>   	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
>   	union ipa_cmd_payload *payload;
> @@ -620,14 +620,14 @@ static void ipa_cmd_transfer_add(struct gsi_trans *trans)
>   	/* Just transfer a zero-filled payload structure */
>   	payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
>   
> -	gsi_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> +	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
>   }
>   
>   /* Add immediate commands to a transaction to clear the hardware pipeline */
> -void ipa_cmd_pipeline_clear_add(struct gsi_trans *trans)
> +void ipa_cmd_pipeline_clear_add(struct ipa_trans *trans)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	struct ipa_endpoint *endpoint;
>   
>   	/* This will complete when the transfer is received */
> @@ -664,12 +664,12 @@ void ipa_cmd_pipeline_clear_wait(struct ipa *ipa)
>   void ipa_cmd_pipeline_clear(struct ipa *ipa)
>   {
>   	u32 count = ipa_cmd_pipeline_clear_count();
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   
>   	trans = ipa_cmd_trans_alloc(ipa, count);
>   	if (trans) {
>   		ipa_cmd_pipeline_clear_add(trans);
> -		gsi_trans_commit_wait(trans);
> +		ipa_trans_commit_wait(trans);
>   		ipa_cmd_pipeline_clear_wait(ipa);
>   	} else {
>   		dev_err(&ipa->pdev->dev,
> @@ -680,22 +680,22 @@ void ipa_cmd_pipeline_clear(struct ipa *ipa)
>   static struct ipa_cmd_info *
>   ipa_cmd_info_alloc(struct ipa_endpoint *endpoint, u32 tre_count)
>   {
> -	struct gsi_channel *channel;
> +	struct ipa_channel *channel;
>   
> -	channel = &endpoint->ipa->gsi.channel[endpoint->channel_id];
> +	channel = &endpoint->ipa->dma_subsys.channel[endpoint->channel_id];
>   
> -	return gsi_trans_pool_alloc(&channel->trans_info.info_pool, tre_count);
> +	return ipa_trans_pool_alloc(&channel->trans_info.info_pool, tre_count);
>   }
>   
>   /* Allocate a transaction for the command TX endpoint */
> -struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count)
> +struct ipa_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count)
>   {
>   	struct ipa_endpoint *endpoint;
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   
>   	endpoint = ipa->name_map[IPA_ENDPOINT_AP_COMMAND_TX];
>   
> -	trans = gsi_channel_trans_alloc(&ipa->gsi, endpoint->channel_id,
> +	trans = ipa_channel_trans_alloc(&ipa->dma_subsys, endpoint->channel_id,
>   					tre_count, DMA_NONE);
>   	if (trans)
>   		trans->info = ipa_cmd_info_alloc(endpoint, tre_count);
> diff --git a/drivers/net/ipa/ipa_cmd.h b/drivers/net/ipa/ipa_cmd.h
> index 69cd085d427d..bf3b72d11e9d 100644
> --- a/drivers/net/ipa/ipa_cmd.h
> +++ b/drivers/net/ipa/ipa_cmd.h
> @@ -14,8 +14,8 @@ struct scatterlist;
>   
>   struct ipa;
>   struct ipa_mem;
> -struct gsi_trans;
> -struct gsi_channel;
> +struct ipa_trans;
> +struct ipa_channel;
>   
>   /**
>    * enum ipa_cmd_opcode:	IPA immediate commands
> @@ -83,13 +83,13 @@ bool ipa_cmd_data_valid(struct ipa *ipa);
>    *
>    * Return:	0 if successful, or a negative error code
>    */
> -int ipa_cmd_pool_init(struct gsi_channel *channel, u32 tre_count);
> +int ipa_cmd_pool_init(struct ipa_channel *channel, u32 tre_count);
>   
>   /**
>    * ipa_cmd_pool_exit() - Inverse of ipa_cmd_pool_init()
>    * @channel:	AP->IPA command TX GSI channel pointer
>    */
> -void ipa_cmd_pool_exit(struct gsi_channel *channel);
> +void ipa_cmd_pool_exit(struct ipa_channel *channel);
>   
>   /**
>    * ipa_cmd_table_init_add() - Add table init command to a transaction
> @@ -104,7 +104,7 @@ void ipa_cmd_pool_exit(struct gsi_channel *channel);
>    *
>    * If hash_size is 0, hash_offset and hash_addr are ignored.
>    */
> -void ipa_cmd_table_init_add(struct gsi_trans *trans, enum ipa_cmd_opcode opcode,
> +void ipa_cmd_table_init_add(struct ipa_trans *trans, enum ipa_cmd_opcode opcode,
>   			    u16 size, u32 offset, dma_addr_t addr,
>   			    u16 hash_size, u32 hash_offset,
>   			    dma_addr_t hash_addr);
> @@ -118,7 +118,7 @@ void ipa_cmd_table_init_add(struct gsi_trans *trans, enum ipa_cmd_opcode opcode,
>    *
>    * Defines and fills the location in IPA memory to use for headers.
>    */
> -void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
> +void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
>   				dma_addr_t addr);
>   
>   /**
> @@ -129,7 +129,7 @@ void ipa_cmd_hdr_init_local_add(struct gsi_trans *trans, u32 offset, u16 size,
>    * @mask:	Mask of bits in register to update with bits from value
>    * @clear_full: Pipeline clear option; true means full pipeline clear
>    */
> -void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
> +void ipa_cmd_register_write_add(struct ipa_trans *trans, u32 offset, u32 value,
>   				u32 mask, bool clear_full);
>   
>   /**
> @@ -140,14 +140,14 @@ void ipa_cmd_register_write_add(struct gsi_trans *trans, u32 offset, u32 value,
>    * @addr:	DMA address of buffer to be read into or written from
>    * @toward_ipa:	true means write to IPA memory; false means read
>    */
> -void ipa_cmd_dma_shared_mem_add(struct gsi_trans *trans, u32 offset,
> +void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset,
>   				u16 size, dma_addr_t addr, bool toward_ipa);
>   
>   /**
>    * ipa_cmd_pipeline_clear_add() - Add pipeline clear commands to a transaction
>    * @trans:	GSI transaction
>    */
> -void ipa_cmd_pipeline_clear_add(struct gsi_trans *trans);
> +void ipa_cmd_pipeline_clear_add(struct ipa_trans *trans);
>   
>   /**
>    * ipa_cmd_pipeline_clear_count() - # commands required to clear pipeline
> @@ -177,6 +177,6 @@ void ipa_cmd_pipeline_clear(struct ipa *ipa);
>    * Return:	A GSI transaction structure, or a null pointer if all
>    *		available transactions are in use
>    */
> -struct gsi_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count);
> +struct ipa_trans *ipa_cmd_trans_alloc(struct ipa *ipa, u32 tre_count);
>   
>   #endif /* _IPA_CMD_H_ */
> diff --git a/drivers/net/ipa/ipa_data-v3.5.1.c b/drivers/net/ipa/ipa_data-v3.5.1.c
> index 760c22bbdf70..80ec55ef5ecc 100644
> --- a/drivers/net/ipa/ipa_data-v3.5.1.c
> +++ b/drivers/net/ipa/ipa_data-v3.5.1.c
> @@ -6,7 +6,7 @@
>   
>   #include <linux/log2.h>
>   
> -#include "gsi.h"
> +#include "ipa_dma.h"
>   #include "ipa_data.h"
>   #include "ipa_endpoint.h"
>   #include "ipa_mem.h"
> diff --git a/drivers/net/ipa/ipa_data-v4.11.c b/drivers/net/ipa/ipa_data-v4.11.c
> index fea91451a0c3..9db4c82213e4 100644
> --- a/drivers/net/ipa/ipa_data-v4.11.c
> +++ b/drivers/net/ipa/ipa_data-v4.11.c
> @@ -4,7 +4,7 @@
>   
>   #include <linux/log2.h>
>   
> -#include "gsi.h"
> +#include "ipa_dma.h"
>   #include "ipa_data.h"
>   #include "ipa_endpoint.h"
>   #include "ipa_mem.h"
> diff --git a/drivers/net/ipa/ipa_data-v4.2.c b/drivers/net/ipa/ipa_data-v4.2.c
> index 2a231e79d5e1..afae3fdbf6d7 100644
> --- a/drivers/net/ipa/ipa_data-v4.2.c
> +++ b/drivers/net/ipa/ipa_data-v4.2.c
> @@ -4,7 +4,7 @@
>   
>   #include <linux/log2.h>
>   
> -#include "gsi.h"
> +#include "ipa_dma.h"
>   #include "ipa_data.h"
>   #include "ipa_endpoint.h"
>   #include "ipa_mem.h"
> diff --git a/drivers/net/ipa/ipa_data-v4.5.c b/drivers/net/ipa/ipa_data-v4.5.c
> index e62ab9c3ac67..415167658962 100644
> --- a/drivers/net/ipa/ipa_data-v4.5.c
> +++ b/drivers/net/ipa/ipa_data-v4.5.c
> @@ -4,7 +4,7 @@
>   
>   #include <linux/log2.h>
>   
> -#include "gsi.h"
> +#include "ipa_dma.h"
>   #include "ipa_data.h"
>   #include "ipa_endpoint.h"
>   #include "ipa_mem.h"
> diff --git a/drivers/net/ipa/ipa_data-v4.9.c b/drivers/net/ipa/ipa_data-v4.9.c
> index 2421b5abb5d4..e5c20fc080c3 100644
> --- a/drivers/net/ipa/ipa_data-v4.9.c
> +++ b/drivers/net/ipa/ipa_data-v4.9.c
> @@ -4,7 +4,7 @@
>   
>   #include <linux/log2.h>
>   
> -#include "gsi.h"
> +#include "ipa_dma.h"
>   #include "ipa_data.h"
>   #include "ipa_endpoint.h"
>   #include "ipa_mem.h"
> diff --git a/drivers/net/ipa/gsi.h b/drivers/net/ipa/ipa_dma.h
> similarity index 85%
> rename from drivers/net/ipa/gsi.h
> rename to drivers/net/ipa/ipa_dma.h
> index 88b80dc3db79..d053929ca3e3 100644
> --- a/drivers/net/ipa/gsi.h
> +++ b/drivers/net/ipa/ipa_dma.h
> @@ -26,8 +26,8 @@ struct device;
>   struct scatterlist;
>   struct platform_device;
>   
> -struct gsi;
> -struct gsi_trans;
> +struct ipa_dma;
> +struct ipa_trans;
>   struct gsi_channel_data;
>   struct ipa_gsi_endpoint_data;
>   
> @@ -70,7 +70,7 @@ struct gsi_ring {
>    * The result of a pool allocation of multiple elements is always
>    * contiguous.
>    */
> -struct gsi_trans_pool {
> +struct ipa_trans_pool {
>   	void *base;			/* base address of element pool */
>   	u32 count;			/* # elements in the pool */
>   	u32 free;			/* next free element in pool (modulo) */
> @@ -79,13 +79,13 @@ struct gsi_trans_pool {
>   	dma_addr_t addr;		/* DMA address if DMA pool (or 0) */
>   };
>   
> -struct gsi_trans_info {
> +struct ipa_trans_info {
>   	atomic_t tre_avail;		/* TREs available for allocation */
> -	struct gsi_trans_pool pool;	/* transaction pool */
> -	struct gsi_trans_pool sg_pool;	/* scatterlist pool */
> -	struct gsi_trans_pool cmd_pool;	/* command payload DMA pool */
> -	struct gsi_trans_pool info_pool;/* command information pool */
> -	struct gsi_trans **map;		/* TRE -> transaction map */
> +	struct ipa_trans_pool pool;	/* transaction pool */
> +	struct ipa_trans_pool sg_pool;	/* scatterlist pool */
> +	struct ipa_trans_pool cmd_pool;	/* command payload DMA pool */
> +	struct ipa_trans_pool info_pool;/* command information pool */
> +	struct ipa_trans **map;		/* TRE -> transaction map */
>   
>   	spinlock_t spinlock;		/* protects updates to the lists */
>   	struct list_head alloc;		/* allocated, not committed */
> @@ -105,8 +105,8 @@ enum gsi_channel_state {
>   };
>   
>   /* We only care about channels between IPA and AP */
> -struct gsi_channel {
> -	struct gsi *gsi;
> +struct ipa_channel {
> +	struct ipa_dma *dma_subsys;
>   	bool toward_ipa;
>   	bool command;			/* AP command TX channel or not */
>   
> @@ -127,7 +127,7 @@ struct gsi_channel {
>   	u64 compl_byte_count;		/* last reported completed byte count */
>   	u64 compl_trans_count;		/* ...and completed trans count */
>   
> -	struct gsi_trans_info trans_info;
> +	struct ipa_trans_info trans_info;
>   
>   	struct napi_struct napi;
>   };
> @@ -140,12 +140,12 @@ enum gsi_evt_ring_state {
>   };
>   
>   struct gsi_evt_ring {
> -	struct gsi_channel *channel;
> +	struct ipa_channel *channel;
>   	struct completion completion;	/* signals event ring state changes */
>   	struct gsi_ring ring;
>   };
>   
> -struct gsi {
> +struct ipa_dma {
>   	struct device *dev;		/* Same as IPA device */
>   	enum ipa_version version;
>   	struct net_device dummy_dev;	/* needed for NAPI */
> @@ -154,7 +154,7 @@ struct gsi {
>   	u32 irq;
>   	u32 channel_count;
>   	u32 evt_ring_count;
> -	struct gsi_channel channel[GSI_CHANNEL_COUNT_MAX];
> +	struct ipa_channel channel[GSI_CHANNEL_COUNT_MAX];
>   	struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX];
>   	u32 event_bitmap;		/* allocated event rings */
>   	u32 modem_channel_bitmap;	/* modem channels to allocate */
> @@ -174,13 +174,13 @@ struct gsi {
>    * Performs initialization that must wait until the GSI hardware is
>    * ready (including firmware loaded).
>    */
> -int gsi_setup(struct gsi *gsi);
> +int gsi_setup(struct ipa_dma *dma_subsys);
>   
>   /**
>    * gsi_teardown() - Tear down GSI subsystem
>    * @gsi:	GSI address previously passed to a successful gsi_setup() call
>    */
> -void gsi_teardown(struct gsi *gsi);
> +void gsi_teardown(struct ipa_dma *dma_subsys);
>   
>   /**
>    * gsi_channel_tre_max() - Channel maximum number of in-flight TREs
> @@ -189,7 +189,7 @@ void gsi_teardown(struct gsi *gsi);
>    *
>    * Return:	 The maximum number of TREs outstanding on the channel
>    */
> -u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id);
> +u32 gsi_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
>   
>   /**
>    * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction
> @@ -198,7 +198,7 @@ u32 gsi_channel_tre_max(struct gsi *gsi, u32 channel_id);
>    *
>    * Return:	 The maximum TRE count per transaction on the channel
>    */
> -u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id);
> +u32 gsi_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
>   
>   /**
>    * gsi_channel_start() - Start an allocated GSI channel
> @@ -207,7 +207,7 @@ u32 gsi_channel_trans_tre_max(struct gsi *gsi, u32 channel_id);
>    *
>    * Return:	0 if successful, or a negative error code
>    */
> -int gsi_channel_start(struct gsi *gsi, u32 channel_id);
> +int gsi_channel_start(struct ipa_dma *dma_subsys, u32 channel_id);
>   
>   /**
>    * gsi_channel_stop() - Stop a started GSI channel
> @@ -216,7 +216,7 @@ int gsi_channel_start(struct gsi *gsi, u32 channel_id);
>    *
>    * Return:	0 if successful, or a negative error code
>    */
> -int gsi_channel_stop(struct gsi *gsi, u32 channel_id);
> +int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
>   
>   /**
>    * gsi_channel_reset() - Reset an allocated GSI channel
> @@ -230,19 +230,19 @@ int gsi_channel_stop(struct gsi *gsi, u32 channel_id);
>    * GSI hardware relinquishes ownership of all pending receive buffer
>    * transactions and they will complete with their cancelled flag set.
>    */
> -void gsi_channel_reset(struct gsi *gsi, u32 channel_id, bool doorbell);
> +void gsi_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
>   
>   /**
>    * gsi_suspend() - Prepare the GSI subsystem for suspend
>    * @gsi:	GSI pointer
>    */
> -void gsi_suspend(struct gsi *gsi);
> +void gsi_suspend(struct ipa_dma *dma_subsys);
>   
>   /**
>    * gsi_resume() - Resume the GSI subsystem following suspend
>    * @gsi:	GSI pointer
>    */
> -void gsi_resume(struct gsi *gsi);
> +void gsi_resume(struct ipa_dma *dma_subsys);
>   
>   /**
>    * gsi_channel_suspend() - Suspend a GSI channel
> @@ -251,7 +251,7 @@ void gsi_resume(struct gsi *gsi);
>    *
>    * For IPA v4.0+, suspend is implemented by stopping the channel.
>    */
> -int gsi_channel_suspend(struct gsi *gsi, u32 channel_id);
> +int gsi_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id);
>   
>   /**
>    * gsi_channel_resume() - Resume a suspended GSI channel
> @@ -260,7 +260,7 @@ int gsi_channel_suspend(struct gsi *gsi, u32 channel_id);
>    *
>    * For IPA v4.0+, the stopped channel is started again.
>    */
> -int gsi_channel_resume(struct gsi *gsi, u32 channel_id);
> +int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
>   
>   /**
>    * gsi_init() - Initialize the GSI subsystem
> @@ -275,7 +275,7 @@ int gsi_channel_resume(struct gsi *gsi, u32 channel_id);
>    * Early stage initialization of the GSI subsystem, performing tasks
>    * that can be done before the GSI hardware is ready to use.
>    */
> -int gsi_init(struct gsi *gsi, struct platform_device *pdev,
> +int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
>   	     enum ipa_version version, u32 count,
>   	     const struct ipa_gsi_endpoint_data *data);
>   
> @@ -283,6 +283,6 @@ int gsi_init(struct gsi *gsi, struct platform_device *pdev,
>    * gsi_exit() - Exit the GSI subsystem
>    * @gsi:	GSI address previously passed to a successful gsi_init() call
>    */
> -void gsi_exit(struct gsi *gsi);
> +void gsi_exit(struct ipa_dma *dma_subsys);
>   
>   #endif /* _GSI_H_ */
> diff --git a/drivers/net/ipa/gsi_private.h b/drivers/net/ipa/ipa_dma_private.h
> similarity index 67%
> rename from drivers/net/ipa/gsi_private.h
> rename to drivers/net/ipa/ipa_dma_private.h
> index ea333a244cf5..40148a551b47 100644
> --- a/drivers/net/ipa/gsi_private.h
> +++ b/drivers/net/ipa/ipa_dma_private.h
> @@ -6,38 +6,38 @@
>   #ifndef _GSI_PRIVATE_H_
>   #define _GSI_PRIVATE_H_
>   
> -/* === Only "gsi.c" and "gsi_trans.c" should include this file === */
> +/* === Only "gsi.c" and "ipa_trans.c" should include this file === */
>   
>   #include <linux/types.h>
>   
> -struct gsi_trans;
> +struct ipa_trans;
>   struct gsi_ring;
> -struct gsi_channel;
> +struct ipa_channel;
>   
>   #define GSI_RING_ELEMENT_SIZE	16	/* bytes; must be a power of 2 */
>   
>   /* Return the entry that follows one provided in a transaction pool */
> -void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element);
> +void *ipa_trans_pool_next(struct ipa_trans_pool *pool, void *element);
>   
>   /**
> - * gsi_trans_move_complete() - Mark a GSI transaction completed
> + * ipa_trans_move_complete() - Mark a GSI transaction completed
>    * @trans:	Transaction to commit
>    */
> -void gsi_trans_move_complete(struct gsi_trans *trans);
> +void ipa_trans_move_complete(struct ipa_trans *trans);
>   
>   /**
> - * gsi_trans_move_polled() - Mark a transaction polled
> + * ipa_trans_move_polled() - Mark a transaction polled
>    * @trans:	Transaction to update
>    */
> -void gsi_trans_move_polled(struct gsi_trans *trans);
> +void ipa_trans_move_polled(struct ipa_trans *trans);
>   
>   /**
> - * gsi_trans_complete() - Complete a GSI transaction
> + * ipa_trans_complete() - Complete a GSI transaction
>    * @trans:	Transaction to complete
>    *
>    * Marks a transaction complete (including freeing it).
>    */
> -void gsi_trans_complete(struct gsi_trans *trans);
> +void ipa_trans_complete(struct ipa_trans *trans);
>   
>   /**
>    * gsi_channel_trans_mapped() - Return a transaction mapped to a TRE index
> @@ -46,19 +46,19 @@ void gsi_trans_complete(struct gsi_trans *trans);
>    *
>    * Return:	The GSI transaction pointer associated with the TRE index
>    */
> -struct gsi_trans *gsi_channel_trans_mapped(struct gsi_channel *channel,
> +struct ipa_trans *gsi_channel_trans_mapped(struct ipa_channel *channel,
>   					   u32 index);
>   
>   /**
> - * gsi_channel_trans_complete() - Return a channel's next completed transaction
> + * ipa_channel_trans_complete() - Return a channel's next completed transaction
>    * @channel:	Channel whose next transaction is to be returned
>    *
>    * Return:	The next completed transaction, or NULL if nothing new
>    */
> -struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel);
> +struct ipa_trans *ipa_channel_trans_complete(struct ipa_channel *channel);
>   
>   /**
> - * gsi_channel_trans_cancel_pending() - Cancel pending transactions
> + * ipa_channel_trans_cancel_pending() - Cancel pending transactions
>    * @channel:	Channel whose pending transactions should be cancelled
>    *
>    * Cancel all pending transactions on a channel.  These are transactions
> @@ -69,10 +69,10 @@ struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel);
>    * NOTE:  Transactions already complete at the time of this call are
>    *	  unaffected.
>    */
> -void gsi_channel_trans_cancel_pending(struct gsi_channel *channel);
> +void ipa_channel_trans_cancel_pending(struct ipa_channel *channel);
>   
>   /**
> - * gsi_channel_trans_init() - Initialize a channel's GSI transaction info
> + * ipa_channel_trans_init() - Initialize a channel's GSI transaction info
>    * @gsi:	GSI pointer
>    * @channel_id:	Channel number
>    *
> @@ -80,13 +80,13 @@ void gsi_channel_trans_cancel_pending(struct gsi_channel *channel);
>    *
>    * Creates and sets up information for managing transactions on a channel
>    */
> -int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id);
> +int ipa_channel_trans_init(struct ipa_dma *gsi, u32 channel_id);
>   
>   /**
> - * gsi_channel_trans_exit() - Inverse of gsi_channel_trans_init()
> + * ipa_channel_trans_exit() - Inverse of ipa_channel_trans_init()
>    * @channel:	Channel whose transaction information is to be cleaned up
>    */
> -void gsi_channel_trans_exit(struct gsi_channel *channel);
> +void ipa_channel_trans_exit(struct ipa_channel *channel);
>   
>   /**
>    * gsi_channel_doorbell() - Ring a channel's doorbell
> @@ -95,7 +95,7 @@ void gsi_channel_trans_exit(struct gsi_channel *channel);
>    * Rings a channel's doorbell to inform the GSI hardware that new
>    * transactions (TREs, really) are available for it to process.
>    */
> -void gsi_channel_doorbell(struct gsi_channel *channel);
> +void gsi_channel_doorbell(struct ipa_channel *channel);
>   
>   /**
>    * gsi_ring_virt() - Return virtual address for a ring entry
> @@ -105,7 +105,7 @@ void gsi_channel_doorbell(struct gsi_channel *channel);
>   void *gsi_ring_virt(struct gsi_ring *ring, u32 index);
>   
>   /**
> - * gsi_channel_tx_queued() - Report the number of bytes queued to hardware
> + * ipa_channel_tx_queued() - Report the number of bytes queued to hardware
>    * @channel:	Channel whose bytes have been queued
>    *
>    * This arranges for the number of transactions and bytes for
> @@ -113,6 +113,6 @@ void *gsi_ring_virt(struct gsi_ring *ring, u32 index);
>    * passes this information up the network stack so it can be used to
>    * throttle transmissions.
>    */
> -void gsi_channel_tx_queued(struct gsi_channel *channel);
> +void ipa_channel_tx_queued(struct ipa_channel *channel);
>   
>   #endif /* _GSI_PRIVATE_H_ */
> diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> index 29227de6661f..90d6880e8a25 100644
> --- a/drivers/net/ipa/ipa_endpoint.c
> +++ b/drivers/net/ipa/ipa_endpoint.c
> @@ -11,8 +11,8 @@
>   #include <linux/if_rmnet.h>
>   #include <linux/dma-direction.h>
>   
> -#include "gsi.h"
> -#include "gsi_trans.h"
> +#include "ipa_dma.h"
> +#include "ipa_trans.h"
>   #include "ipa.h"
>   #include "ipa_data.h"
>   #include "ipa_endpoint.h"
> @@ -224,16 +224,16 @@ static bool ipa_endpoint_data_valid(struct ipa *ipa, u32 count,
>   }
>   
>   /* Allocate a transaction to use on a non-command endpoint */
> -static struct gsi_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint,
> +static struct ipa_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint,
>   						  u32 tre_count)
>   {
> -	struct gsi *gsi = &endpoint->ipa->gsi;
> +	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
>   	u32 channel_id = endpoint->channel_id;
>   	enum dma_data_direction direction;
>   
>   	direction = endpoint->toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
>   
> -	return gsi_channel_trans_alloc(gsi, channel_id, tre_count, direction);
> +	return ipa_channel_trans_alloc(gsi, channel_id, tre_count, direction);
>   }
>   
>   /* suspend_delay represents suspend for RX, delay for TX endpoints.
> @@ -382,7 +382,7 @@ void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable)
>   int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   {
>   	u32 initialized = ipa->initialized;
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   	u32 count;
>   
>   	/* We need one command per modem TX endpoint.  We can get an upper
> @@ -422,7 +422,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   	ipa_cmd_pipeline_clear_add(trans);
>   
>   	/* XXX This should have a 1 second timeout */
> -	gsi_trans_commit_wait(trans);
> +	ipa_trans_commit_wait(trans);
>   
>   	ipa_cmd_pipeline_clear_wait(ipa);
>   
> @@ -938,7 +938,7 @@ static void ipa_endpoint_init_seq(struct ipa_endpoint *endpoint)
>    */
>   int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb)
>   {
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   	u32 nr_frags;
>   	int ret;
>   
> @@ -957,17 +957,17 @@ int ipa_endpoint_skb_tx(struct ipa_endpoint *endpoint, struct sk_buff *skb)
>   	if (!trans)
>   		return -EBUSY;
>   
> -	ret = gsi_trans_skb_add(trans, skb);
> +	ret = ipa_trans_skb_add(trans, skb);
>   	if (ret)
>   		goto err_trans_free;
>   	trans->data = skb;	/* transaction owns skb now */
>   
> -	gsi_trans_commit(trans, !netdev_xmit_more());
> +	ipa_trans_commit(trans, !netdev_xmit_more());
>   
>   	return 0;
>   
>   err_trans_free:
> -	gsi_trans_free(trans);
> +	ipa_trans_free(trans);
>   
>   	return -ENOMEM;
>   }
> @@ -1004,7 +1004,7 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
>   
>   static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
>   {
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   	bool doorbell = false;
>   	struct page *page;
>   	u32 offset;
> @@ -1023,7 +1023,7 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
>   	offset = NET_SKB_PAD;
>   	len = IPA_RX_BUFFER_SIZE - offset;
>   
> -	ret = gsi_trans_page_add(trans, page, len, offset);
> +	ret = ipa_trans_page_add(trans, page, len, offset);
>   	if (ret)
>   		goto err_trans_free;
>   	trans->data = page;	/* transaction owns page now */
> @@ -1033,12 +1033,12 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
>   		endpoint->replenish_ready = 0;
>   	}
>   
> -	gsi_trans_commit(trans, doorbell);
> +	ipa_trans_commit(trans, doorbell);
>   
>   	return 0;
>   
>   err_trans_free:
> -	gsi_trans_free(trans);
> +	ipa_trans_free(trans);
>   err_free_pages:
>   	__free_pages(page, get_order(IPA_RX_BUFFER_SIZE));
>   
> @@ -1060,7 +1060,7 @@ static int ipa_endpoint_replenish_one(struct ipa_endpoint *endpoint)
>    */
>   static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
>   {
> -	struct gsi *gsi;
> +	struct ipa_dma *gsi;
>   	u32 backlog;
>   
>   	if (!endpoint->replenish_enabled) {
> @@ -1090,7 +1090,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
>   	 * Receive buffer transactions use one TRE, so schedule work to
>   	 * try replenishing again if our backlog is *all* available TREs.
>   	 */
> -	gsi = &endpoint->ipa->gsi;
> +	gsi = &endpoint->ipa->dma_subsys;
>   	if (backlog == gsi_channel_tre_max(gsi, endpoint->channel_id))
>   		schedule_delayed_work(&endpoint->replenish_work,
>   				      msecs_to_jiffies(1));
> @@ -1098,7 +1098,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
>   
>   static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
>   {
> -	struct gsi *gsi = &endpoint->ipa->gsi;
> +	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
>   	u32 max_backlog;
>   	u32 saved;
>   
> @@ -1320,13 +1320,13 @@ static void ipa_endpoint_status_parse(struct ipa_endpoint *endpoint,
>   
>   /* Complete a TX transaction, command or from ipa_endpoint_skb_tx() */
>   static void ipa_endpoint_tx_complete(struct ipa_endpoint *endpoint,
> -				     struct gsi_trans *trans)
> +				     struct ipa_trans *trans)
>   {
>   }
>   
>   /* Complete transaction initiated in ipa_endpoint_replenish_one() */
>   static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint,
> -				     struct gsi_trans *trans)
> +				     struct ipa_trans *trans)
>   {
>   	struct page *page;
>   
> @@ -1344,7 +1344,7 @@ static void ipa_endpoint_rx_complete(struct ipa_endpoint *endpoint,
>   }
>   
>   void ipa_endpoint_trans_complete(struct ipa_endpoint *endpoint,
> -				 struct gsi_trans *trans)
> +				 struct ipa_trans *trans)
>   {
>   	if (endpoint->toward_ipa)
>   		ipa_endpoint_tx_complete(endpoint, trans);
> @@ -1353,7 +1353,7 @@ void ipa_endpoint_trans_complete(struct ipa_endpoint *endpoint,
>   }
>   
>   void ipa_endpoint_trans_release(struct ipa_endpoint *endpoint,
> -				struct gsi_trans *trans)
> +				struct ipa_trans *trans)
>   {
>   	if (endpoint->toward_ipa) {
>   		struct ipa *ipa = endpoint->ipa;
> @@ -1406,7 +1406,7 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
>   {
>   	struct device *dev = &endpoint->ipa->pdev->dev;
>   	struct ipa *ipa = endpoint->ipa;
> -	struct gsi *gsi = &ipa->gsi;
> +	struct ipa_dma *gsi = &ipa->dma_subsys;
>   	bool suspended = false;
>   	dma_addr_t addr;
>   	u32 retries;
> @@ -1504,7 +1504,7 @@ static void ipa_endpoint_reset(struct ipa_endpoint *endpoint)
>   	if (special && ipa_endpoint_aggr_active(endpoint))
>   		ret = ipa_endpoint_reset_rx_aggr(endpoint);
>   	else
> -		gsi_channel_reset(&ipa->gsi, channel_id, true);
> +		gsi_channel_reset(&ipa->dma_subsys, channel_id, true);
>   
>   	if (ret)
>   		dev_err(&ipa->pdev->dev,
> @@ -1534,7 +1534,7 @@ static void ipa_endpoint_program(struct ipa_endpoint *endpoint)
>   int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
>   {
>   	struct ipa *ipa = endpoint->ipa;
> -	struct gsi *gsi = &ipa->gsi;
> +	struct ipa_dma *gsi = &ipa->dma_subsys;
>   	int ret;
>   
>   	ret = gsi_channel_start(gsi, endpoint->channel_id);
> @@ -1561,7 +1561,7 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
>   {
>   	u32 mask = BIT(endpoint->endpoint_id);
>   	struct ipa *ipa = endpoint->ipa;
> -	struct gsi *gsi = &ipa->gsi;
> +	struct ipa_dma *gsi = &ipa->dma_subsys;
>   	int ret;
>   
>   	if (!(ipa->enabled & mask))
> @@ -1586,7 +1586,8 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
>   void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
>   {
>   	struct device *dev = &endpoint->ipa->pdev->dev;
> -	struct gsi *gsi = &endpoint->ipa->gsi;
> +	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
> +	bool stop_channel;
>   	int ret;
>   
>   	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
> @@ -1606,7 +1607,8 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
>   void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
>   {
>   	struct device *dev = &endpoint->ipa->pdev->dev;
> -	struct gsi *gsi = &endpoint->ipa->gsi;
> +	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
> +	bool start_channel;
>   	int ret;
>   
>   	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
> @@ -1651,7 +1653,7 @@ void ipa_endpoint_resume(struct ipa *ipa)
>   
>   static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
>   {
> -	struct gsi *gsi = &endpoint->ipa->gsi;
> +	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
>   	u32 channel_id = endpoint->channel_id;
>   
>   	/* Only AP endpoints get set up */
> diff --git a/drivers/net/ipa/ipa_endpoint.h b/drivers/net/ipa/ipa_endpoint.h
> index 0a859d10312d..7ba06abc1968 100644
> --- a/drivers/net/ipa/ipa_endpoint.h
> +++ b/drivers/net/ipa/ipa_endpoint.h
> @@ -10,7 +10,7 @@
>   #include <linux/workqueue.h>
>   #include <linux/if_ether.h>
>   
> -#include "gsi.h"
> +#include "ipa_dma.h"
>   #include "ipa_reg.h"
>   
>   struct net_device;
> @@ -110,8 +110,8 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
>   void ipa_endpoint_exit(struct ipa *ipa);
>   
>   void ipa_endpoint_trans_complete(struct ipa_endpoint *ipa,
> -				 struct gsi_trans *trans);
> +				 struct ipa_trans *trans);
>   void ipa_endpoint_trans_release(struct ipa_endpoint *ipa,
> -				struct gsi_trans *trans);
> +				struct ipa_trans *trans);
>   
>   #endif /* _IPA_ENDPOINT_H_ */
> diff --git a/drivers/net/ipa/ipa_gsi.c b/drivers/net/ipa/ipa_gsi.c
> index d323adb03383..d212ca01894d 100644
> --- a/drivers/net/ipa/ipa_gsi.c
> +++ b/drivers/net/ipa/ipa_gsi.c
> @@ -7,29 +7,29 @@
>   #include <linux/types.h>
>   
>   #include "ipa_gsi.h"
> -#include "gsi_trans.h"
> +#include "ipa_trans.h"
>   #include "ipa.h"
>   #include "ipa_endpoint.h"
>   #include "ipa_data.h"
>   
> -void ipa_gsi_trans_complete(struct gsi_trans *trans)
> +void ipa_gsi_trans_complete(struct ipa_trans *trans)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   
>   	ipa_endpoint_trans_complete(ipa->channel_map[trans->channel_id], trans);
>   }
>   
> -void ipa_gsi_trans_release(struct gsi_trans *trans)
> +void ipa_gsi_trans_release(struct ipa_trans *trans)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   
>   	ipa_endpoint_trans_release(ipa->channel_map[trans->channel_id], trans);
>   }
>   
> -void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
> +void ipa_gsi_channel_tx_queued(struct ipa_dma *gsi, u32 channel_id, u32 count,
>   			       u32 byte_count)
>   {
> -	struct ipa *ipa = container_of(gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(gsi, struct ipa, dma_subsys);
>   	struct ipa_endpoint *endpoint;
>   
>   	endpoint = ipa->channel_map[channel_id];
> @@ -37,10 +37,10 @@ void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
>   		netdev_sent_queue(endpoint->netdev, byte_count);
>   }
>   
> -void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count,
> +void ipa_gsi_channel_tx_completed(struct ipa_dma *gsi, u32 channel_id, u32 count,
>   				  u32 byte_count)
>   {
> -	struct ipa *ipa = container_of(gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(gsi, struct ipa, dma_subsys);
>   	struct ipa_endpoint *endpoint;
>   
>   	endpoint = ipa->channel_map[channel_id];
> diff --git a/drivers/net/ipa/ipa_gsi.h b/drivers/net/ipa/ipa_gsi.h
> index c02cb6f3a2e1..85df59177c34 100644
> --- a/drivers/net/ipa/ipa_gsi.h
> +++ b/drivers/net/ipa/ipa_gsi.h
> @@ -8,8 +8,8 @@
>   
>   #include <linux/types.h>
>   
> -struct gsi;
> -struct gsi_trans;
> +struct ipa_dma;
> +struct ipa_trans;
>   struct ipa_gsi_endpoint_data;
>   
>   /**
> @@ -19,7 +19,7 @@ struct ipa_gsi_endpoint_data;
>    * This is called from the GSI layer to notify the IPA layer that a
>    * transaction has completed.
>    */
> -void ipa_gsi_trans_complete(struct gsi_trans *trans);
> +void ipa_gsi_trans_complete(struct ipa_trans *trans);
>   
>   /**
>    * ipa_gsi_trans_release() - GSI transaction release callback
> @@ -29,7 +29,7 @@ void ipa_gsi_trans_complete(struct gsi_trans *trans);
>    * transaction is about to be freed, so any resources associated
>    * with it should be released.
>    */
> -void ipa_gsi_trans_release(struct gsi_trans *trans);
> +void ipa_gsi_trans_release(struct ipa_trans *trans);
>   
>   /**
>    * ipa_gsi_channel_tx_queued() - GSI queued to hardware notification
> @@ -41,7 +41,7 @@ void ipa_gsi_trans_release(struct gsi_trans *trans);
>    * This is called from the GSI layer to notify the IPA layer that some
>    * number of transactions have been queued to hardware for execution.
>    */
> -void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
> +void ipa_gsi_channel_tx_queued(struct ipa_dma *gsi, u32 channel_id, u32 count,
>   			       u32 byte_count);
>   
>   /**
> @@ -54,7 +54,7 @@ void ipa_gsi_channel_tx_queued(struct gsi *gsi, u32 channel_id, u32 count,
>    * This is called from the GSI layer to notify the IPA layer that the hardware
>    * has reported the completion of some number of transactions.
>    */
> -void ipa_gsi_channel_tx_completed(struct gsi *gsi, u32 channel_id, u32 count,
> +void ipa_gsi_channel_tx_completed(struct ipa_dma *gsi, u32 channel_id, u32 count,
>   				  u32 byte_count);
>   
>   /* ipa_gsi_endpoint_data_empty() - Empty endpoint config data test
> diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> index cdfa98a76e1f..026f5555fa7d 100644
> --- a/drivers/net/ipa/ipa_main.c
> +++ b/drivers/net/ipa/ipa_main.c
> @@ -31,7 +31,7 @@
>   #include "ipa_modem.h"
>   #include "ipa_uc.h"
>   #include "ipa_interrupt.h"
> -#include "gsi_trans.h"
> +#include "ipa_trans.h"
>   #include "ipa_sysfs.h"
>   
>   /**
> @@ -98,7 +98,7 @@ int ipa_setup(struct ipa *ipa)
>   	struct device *dev = &ipa->pdev->dev;
>   	int ret;
>   
> -	ret = gsi_setup(&ipa->gsi);
> +	ret = gsi_setup(&ipa->dma_subsys);
>   	if (ret)
>   		return ret;
>   
> @@ -154,7 +154,7 @@ int ipa_setup(struct ipa *ipa)
>   	ipa_endpoint_teardown(ipa);
>   	ipa_power_teardown(ipa);
>   err_gsi_teardown:
> -	gsi_teardown(&ipa->gsi);
> +	gsi_teardown(&ipa->dma_subsys);
>   
>   	return ret;
>   }
> @@ -179,7 +179,7 @@ static void ipa_teardown(struct ipa *ipa)
>   	ipa_endpoint_disable_one(command_endpoint);
>   	ipa_endpoint_teardown(ipa);
>   	ipa_power_teardown(ipa);
> -	gsi_teardown(&ipa->gsi);
> +	gsi_teardown(&ipa->dma_subsys);
>   }
>   
>   /* Configure bus access behavior for IPA components */
> @@ -716,7 +716,7 @@ static int ipa_probe(struct platform_device *pdev)
>   	if (ret)
>   		goto err_reg_exit;
>   
> -	ret = gsi_init(&ipa->gsi, pdev, ipa->version, data->endpoint_count,
> +	ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
>   		       data->endpoint_data);
>   	if (ret)
>   		goto err_mem_exit;
> @@ -781,7 +781,7 @@ static int ipa_probe(struct platform_device *pdev)
>   err_endpoint_exit:
>   	ipa_endpoint_exit(ipa);
>   err_gsi_exit:
> -	gsi_exit(&ipa->gsi);
> +	gsi_exit(&ipa->dma_subsys);
>   err_mem_exit:
>   	ipa_mem_exit(ipa);
>   err_reg_exit:
> @@ -824,7 +824,7 @@ static int ipa_remove(struct platform_device *pdev)
>   	ipa_modem_exit(ipa);
>   	ipa_table_exit(ipa);
>   	ipa_endpoint_exit(ipa);
> -	gsi_exit(&ipa->gsi);
> +	gsi_exit(&ipa->dma_subsys);
>   	ipa_mem_exit(ipa);
>   	ipa_reg_exit(ipa);
>   	kfree(ipa);
> diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
> index 4337b0920d3d..16e5fdd5bd73 100644
> --- a/drivers/net/ipa/ipa_mem.c
> +++ b/drivers/net/ipa/ipa_mem.c
> @@ -18,7 +18,7 @@
>   #include "ipa_cmd.h"
>   #include "ipa_mem.h"
>   #include "ipa_table.h"
> -#include "gsi_trans.h"
> +#include "ipa_trans.h"
>   
>   /* "Canary" value placed between memory regions to detect overflow */
>   #define IPA_MEM_CANARY_VAL		cpu_to_le32(0xdeadbeef)
> @@ -42,9 +42,9 @@ const struct ipa_mem *ipa_mem_find(struct ipa *ipa, enum ipa_mem_id mem_id)
>   
>   /* Add an immediate command to a transaction that zeroes a memory region */
>   static void
> -ipa_mem_zero_region_add(struct gsi_trans *trans, enum ipa_mem_id mem_id)
> +ipa_mem_zero_region_add(struct ipa_trans *trans, enum ipa_mem_id mem_id)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	const struct ipa_mem *mem = ipa_mem_find(ipa, mem_id);
>   	dma_addr_t addr = ipa->zero_addr;
>   
> @@ -76,7 +76,7 @@ int ipa_mem_setup(struct ipa *ipa)
>   {
>   	dma_addr_t addr = ipa->zero_addr;
>   	const struct ipa_mem *mem;
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   	u32 offset;
>   	u16 size;
>   	u32 val;
> @@ -107,7 +107,7 @@ int ipa_mem_setup(struct ipa *ipa)
>   	ipa_mem_zero_region_add(trans, IPA_MEM_AP_PROC_CTX);
>   	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM);
>   
> -	gsi_trans_commit_wait(trans);
> +	ipa_trans_commit_wait(trans);
>   
>   	/* Tell the hardware where the processing context area is located */
>   	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
> @@ -408,7 +408,7 @@ void ipa_mem_deconfig(struct ipa *ipa)
>    */
>   int ipa_mem_zero_modem(struct ipa *ipa)
>   {
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   
>   	/* Get a transaction to zero the modem memory, modem header,
>   	 * and modem processing context regions.
> @@ -424,7 +424,7 @@ int ipa_mem_zero_modem(struct ipa *ipa)
>   	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM_PROC_CTX);
>   	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM);
>   
> -	gsi_trans_commit_wait(trans);
> +	ipa_trans_commit_wait(trans);
>   
>   	return 0;
>   }
> diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
> index 96c467c80a2e..d197959cc032 100644
> --- a/drivers/net/ipa/ipa_table.c
> +++ b/drivers/net/ipa/ipa_table.c
> @@ -21,8 +21,8 @@
>   #include "ipa_reg.h"
>   #include "ipa_mem.h"
>   #include "ipa_cmd.h"
> -#include "gsi.h"
> -#include "gsi_trans.h"
> +#include "ipa_dma.h"
> +#include "ipa_trans.h"
>   
>   /**
>    * DOC: IPA Filter and Route Tables
> @@ -234,10 +234,10 @@ static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
>   	return ipa->table_addr + skip * IPA_TABLE_ENTRY_SIZE(ipa->version);
>   }
>   
> -static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
> +static void ipa_table_reset_add(struct ipa_trans *trans, bool filter,
>   				u16 first, u16 count, enum ipa_mem_id mem_id)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	const struct ipa_mem *mem = ipa_mem_find(ipa, mem_id);
>   	dma_addr_t addr;
>   	u32 offset;
> @@ -266,7 +266,7 @@ ipa_filter_reset_table(struct ipa *ipa, enum ipa_mem_id mem_id, bool modem)
>   {
>   	u32 ep_mask = ipa->filter_map;
>   	u32 count = hweight32(ep_mask);
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   	enum gsi_ee_id ee_id;
>   
>   	trans = ipa_cmd_trans_alloc(ipa, count);
> @@ -291,7 +291,7 @@ ipa_filter_reset_table(struct ipa *ipa, enum ipa_mem_id mem_id, bool modem)
>   		ipa_table_reset_add(trans, true, endpoint_id, 1, mem_id);
>   	}
>   
> -	gsi_trans_commit_wait(trans);
> +	ipa_trans_commit_wait(trans);
>   
>   	return 0;
>   }
> @@ -326,7 +326,7 @@ static int ipa_filter_reset(struct ipa *ipa, bool modem)
>    */
>   static int ipa_route_reset(struct ipa *ipa, bool modem)
>   {
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   	u16 first;
>   	u16 count;
>   
> @@ -354,7 +354,7 @@ static int ipa_route_reset(struct ipa *ipa, bool modem)
>   	ipa_table_reset_add(trans, false, first, count,
>   			    IPA_MEM_V6_ROUTE_HASHED);
>   
> -	gsi_trans_commit_wait(trans);
> +	ipa_trans_commit_wait(trans);
>   
>   	return 0;
>   }
> @@ -382,7 +382,7 @@ void ipa_table_reset(struct ipa *ipa, bool modem)
>   int ipa_table_hash_flush(struct ipa *ipa)
>   {
>   	u32 offset = ipa_reg_filt_rout_hash_flush_offset(ipa->version);
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   	u32 val;
>   
>   	if (!ipa_table_hash_support(ipa))
> @@ -399,17 +399,17 @@ int ipa_table_hash_flush(struct ipa *ipa)
>   
>   	ipa_cmd_register_write_add(trans, offset, val, val, false);
>   
> -	gsi_trans_commit_wait(trans);
> +	ipa_trans_commit_wait(trans);
>   
>   	return 0;
>   }
>   
> -static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
> +static void ipa_table_init_add(struct ipa_trans *trans, bool filter,
>   			       enum ipa_cmd_opcode opcode,
>   			       enum ipa_mem_id mem_id,
>   			       enum ipa_mem_id hash_mem_id)
>   {
> -	struct ipa *ipa = container_of(trans->gsi, struct ipa, gsi);
> +	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	const struct ipa_mem *hash_mem = ipa_mem_find(ipa, hash_mem_id);
>   	const struct ipa_mem *mem = ipa_mem_find(ipa, mem_id);
>   	dma_addr_t hash_addr;
> @@ -444,7 +444,7 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
>   
>   int ipa_table_setup(struct ipa *ipa)
>   {
> -	struct gsi_trans *trans;
> +	struct ipa_trans *trans;
>   
>   	trans = ipa_cmd_trans_alloc(ipa, 4);
>   	if (!trans) {
> @@ -464,7 +464,7 @@ int ipa_table_setup(struct ipa *ipa)
>   	ipa_table_init_add(trans, true, IPA_CMD_IP_V6_FILTER_INIT,
>   			   IPA_MEM_V6_FILTER, IPA_MEM_V6_FILTER_HASHED);
>   
> -	gsi_trans_commit_wait(trans);
> +	ipa_trans_commit_wait(trans);
>   
>   	return 0;
>   }
> diff --git a/drivers/net/ipa/gsi_trans.c b/drivers/net/ipa/ipa_trans.c
> similarity index 81%
> rename from drivers/net/ipa/gsi_trans.c
> rename to drivers/net/ipa/ipa_trans.c
> index 1544564bc283..b87936b18770 100644
> --- a/drivers/net/ipa/gsi_trans.c
> +++ b/drivers/net/ipa/ipa_trans.c
> @@ -11,9 +11,9 @@
>   #include <linux/scatterlist.h>
>   #include <linux/dma-direction.h>
>   
> -#include "gsi.h"
> -#include "gsi_private.h"
> -#include "gsi_trans.h"
> +#include "ipa_dma.h"
> +#include "ipa_dma_private.h"
> +#include "ipa_trans.h"
>   #include "ipa_gsi.h"
>   #include "ipa_data.h"
>   #include "ipa_cmd.h"
> @@ -85,7 +85,7 @@ struct gsi_tre {
>   #define TRE_FLAGS_BEI_FMASK	GENMASK(10, 10)
>   #define TRE_FLAGS_TYPE_FMASK	GENMASK(23, 16)
>   
> -int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
> +int ipa_trans_pool_init(struct ipa_trans_pool *pool, size_t size, u32 count,
>   			u32 max_alloc)
>   {
>   	void *virt;
> @@ -119,7 +119,7 @@ int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
>   	return 0;
>   }
>   
> -void gsi_trans_pool_exit(struct gsi_trans_pool *pool)
> +void ipa_trans_pool_exit(struct ipa_trans_pool *pool)
>   {
>   	kfree(pool->base);
>   	memset(pool, 0, sizeof(*pool));
> @@ -131,7 +131,7 @@ void gsi_trans_pool_exit(struct gsi_trans_pool *pool)
>    * (and it can be more than one), we only allow allocation of a single
>    * element from a DMA pool.
>    */
> -int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
> +int ipa_trans_pool_init_dma(struct device *dev, struct ipa_trans_pool *pool,
>   			    size_t size, u32 count, u32 max_alloc)
>   {
>   	size_t total_size;
> @@ -152,7 +152,7 @@ int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
>   	/* The allocator will give us a power-of-2 number of pages
>   	 * sufficient to satisfy our request.  Round up our requested
>   	 * size to avoid any unused space in the allocation.  This way
> -	 * gsi_trans_pool_exit_dma() can assume the total allocated
> +	 * ipa_trans_pool_exit_dma() can assume the total allocated
>   	 * size is exactly (count * size).
>   	 */
>   	total_size = get_order(total_size) << PAGE_SHIFT;
> @@ -171,7 +171,7 @@ int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
>   	return 0;
>   }
>   
> -void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool)
> +void ipa_trans_pool_exit_dma(struct device *dev, struct ipa_trans_pool *pool)
>   {
>   	size_t total_size = pool->count * pool->size;
>   
> @@ -180,7 +180,7 @@ void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool)
>   }
>   
>   /* Return the byte offset of the next free entry in the pool */
> -static u32 gsi_trans_pool_alloc_common(struct gsi_trans_pool *pool, u32 count)
> +static u32 ipa_trans_pool_alloc_common(struct ipa_trans_pool *pool, u32 count)
>   {
>   	u32 offset;
>   
> @@ -199,15 +199,15 @@ static u32 gsi_trans_pool_alloc_common(struct gsi_trans_pool *pool, u32 count)
>   }
>   
>   /* Allocate a contiguous block of zeroed entries from a pool */
> -void *gsi_trans_pool_alloc(struct gsi_trans_pool *pool, u32 count)
> +void *ipa_trans_pool_alloc(struct ipa_trans_pool *pool, u32 count)
>   {
> -	return pool->base + gsi_trans_pool_alloc_common(pool, count);
> +	return pool->base + ipa_trans_pool_alloc_common(pool, count);
>   }
>   
>   /* Allocate a single zeroed entry from a DMA pool */
> -void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr)
> +void *ipa_trans_pool_alloc_dma(struct ipa_trans_pool *pool, dma_addr_t *addr)
>   {
> -	u32 offset = gsi_trans_pool_alloc_common(pool, 1);
> +	u32 offset = ipa_trans_pool_alloc_common(pool, 1);
>   
>   	*addr = pool->addr + offset;
>   
> @@ -217,7 +217,7 @@ void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr)
>   /* Return the pool element that immediately follows the one given.
>    * This only works if elements are allocated one at a time.
>    */
> -void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element)
> +void *ipa_trans_pool_next(struct ipa_trans_pool *pool, void *element)
>   {
>   	void *end = pool->base + pool->count * pool->size;
>   
> @@ -231,33 +231,33 @@ void *gsi_trans_pool_next(struct gsi_trans_pool *pool, void *element)
>   }
>   
>   /* Map a given ring entry index to the transaction associated with it */
> -static void gsi_channel_trans_map(struct gsi_channel *channel, u32 index,
> -				  struct gsi_trans *trans)
> +static void gsi_channel_trans_map(struct ipa_channel *channel, u32 index,
> +				  struct ipa_trans *trans)
>   {
>   	/* Note: index *must* be used modulo the ring count here */
>   	channel->trans_info.map[index % channel->tre_ring.count] = trans;
>   }
>   
>   /* Return the transaction mapped to a given ring entry */
> -struct gsi_trans *
> -gsi_channel_trans_mapped(struct gsi_channel *channel, u32 index)
> +struct ipa_trans *
> +gsi_channel_trans_mapped(struct ipa_channel *channel, u32 index)
>   {
>   	/* Note: index *must* be used modulo the ring count here */
>   	return channel->trans_info.map[index % channel->tre_ring.count];
>   }
>   
>   /* Return the oldest completed transaction for a channel (or null) */
> -struct gsi_trans *gsi_channel_trans_complete(struct gsi_channel *channel)
> +struct ipa_trans *ipa_channel_trans_complete(struct ipa_channel *channel)
>   {
>   	return list_first_entry_or_null(&channel->trans_info.complete,
> -					struct gsi_trans, links);
> +					struct ipa_trans, links);
>   }
>   
>   /* Move a transaction from the allocated list to the pending list */
> -static void gsi_trans_move_pending(struct gsi_trans *trans)
> +static void ipa_trans_move_pending(struct ipa_trans *trans)
>   {
> -	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
> -	struct gsi_trans_info *trans_info = &channel->trans_info;
> +	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
> +	struct ipa_trans_info *trans_info = &channel->trans_info;
>   
>   	spin_lock_bh(&trans_info->spinlock);
>   
> @@ -269,10 +269,10 @@ static void gsi_trans_move_pending(struct gsi_trans *trans)
>   /* Move a transaction and all of its predecessors from the pending list
>    * to the completed list.
>    */
> -void gsi_trans_move_complete(struct gsi_trans *trans)
> +void ipa_trans_move_complete(struct ipa_trans *trans)
>   {
> -	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
> -	struct gsi_trans_info *trans_info = &channel->trans_info;
> +	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
> +	struct ipa_trans_info *trans_info = &channel->trans_info;
>   	struct list_head list;
>   
>   	spin_lock_bh(&trans_info->spinlock);
> @@ -285,10 +285,10 @@ void gsi_trans_move_complete(struct gsi_trans *trans)
>   }
>   
>   /* Move a transaction from the completed list to the polled list */
> -void gsi_trans_move_polled(struct gsi_trans *trans)
> +void ipa_trans_move_polled(struct ipa_trans *trans)
>   {
> -	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
> -	struct gsi_trans_info *trans_info = &channel->trans_info;
> +	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
> +	struct ipa_trans_info *trans_info = &channel->trans_info;
>   
>   	spin_lock_bh(&trans_info->spinlock);
>   
> @@ -299,7 +299,7 @@ void gsi_trans_move_polled(struct gsi_trans *trans)
>   
>   /* Reserve some number of TREs on a channel.  Returns true if successful */
>   static bool
> -gsi_trans_tre_reserve(struct gsi_trans_info *trans_info, u32 tre_count)
> +ipa_trans_tre_reserve(struct ipa_trans_info *trans_info, u32 tre_count)
>   {
>   	int avail = atomic_read(&trans_info->tre_avail);
>   	int new;
> @@ -315,21 +315,21 @@ gsi_trans_tre_reserve(struct gsi_trans_info *trans_info, u32 tre_count)
>   
>   /* Release previously-reserved TRE entries to a channel */
>   static void
> -gsi_trans_tre_release(struct gsi_trans_info *trans_info, u32 tre_count)
> +ipa_trans_tre_release(struct ipa_trans_info *trans_info, u32 tre_count)
>   {
>   	atomic_add(tre_count, &trans_info->tre_avail);
>   }
>   
>   /* Allocate a GSI transaction on a channel */
> -struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
> +struct ipa_trans *ipa_channel_trans_alloc(struct ipa_dma *gsi, u32 channel_id,
>   					  u32 tre_count,
>   					  enum dma_data_direction direction)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> -	struct gsi_trans_info *trans_info;
> -	struct gsi_trans *trans;
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_trans_info *trans_info;
> +	struct ipa_trans *trans;
>   
> -	if (WARN_ON(tre_count > gsi_channel_trans_tre_max(gsi, channel_id)))
> +	if (WARN_ON(tre_count > ipa_channel_trans_tre_max(gsi, channel_id)))
>   		return NULL;
>   
>   	trans_info = &channel->trans_info;
> @@ -337,18 +337,18 @@ struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
>   	/* We reserve the TREs now, but consume them at commit time.
>   	 * If there aren't enough available, we're done.
>   	 */
> -	if (!gsi_trans_tre_reserve(trans_info, tre_count))
> +	if (!ipa_trans_tre_reserve(trans_info, tre_count))
>   		return NULL;
>   
>   	/* Allocate and initialize non-zero fields in the transaction */
> -	trans = gsi_trans_pool_alloc(&trans_info->pool, 1);
> -	trans->gsi = gsi;
> +	trans = ipa_trans_pool_alloc(&trans_info->pool, 1);
> +	trans->dma_subsys = gsi;
>   	trans->channel_id = channel_id;
>   	trans->tre_count = tre_count;
>   	init_completion(&trans->completion);
>   
>   	/* Allocate the scatterlist and (if requested) info entries. */
> -	trans->sgl = gsi_trans_pool_alloc(&trans_info->sg_pool, tre_count);
> +	trans->sgl = ipa_trans_pool_alloc(&trans_info->sg_pool, tre_count);
>   	sg_init_marker(trans->sgl, tre_count);
>   
>   	trans->direction = direction;
> @@ -365,17 +365,17 @@ struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
>   }
>   
>   /* Free a previously-allocated transaction */
> -void gsi_trans_free(struct gsi_trans *trans)
> +void ipa_trans_free(struct ipa_trans *trans)
>   {
>   	refcount_t *refcount = &trans->refcount;
> -	struct gsi_trans_info *trans_info;
> +	struct ipa_trans_info *trans_info;
>   	bool last;
>   
>   	/* We must hold the lock to release the last reference */
>   	if (refcount_dec_not_one(refcount))
>   		return;
>   
> -	trans_info = &trans->gsi->channel[trans->channel_id].trans_info;
> +	trans_info = &trans->dma_subsys->channel[trans->channel_id].trans_info;
>   
>   	spin_lock_bh(&trans_info->spinlock);
>   
> @@ -394,11 +394,11 @@ void gsi_trans_free(struct gsi_trans *trans)
>   	/* Releasing the reserved TREs implicitly frees the sgl[] and
>   	 * (if present) info[] arrays, plus the transaction itself.
>   	 */
> -	gsi_trans_tre_release(trans_info, trans->tre_count);
> +	ipa_trans_tre_release(trans_info, trans->tre_count);
>   }
>   
>   /* Add an immediate command to a transaction */
> -void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
> +void ipa_trans_cmd_add(struct ipa_trans *trans, void *buf, u32 size,
>   		       dma_addr_t addr, enum dma_data_direction direction,
>   		       enum ipa_cmd_opcode opcode)
>   {
> @@ -415,7 +415,7 @@ void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
>   	 *
>   	 * When a transaction completes, the SGL is normally unmapped.
>   	 * A command transaction has direction DMA_NONE, which tells
> -	 * gsi_trans_complete() to skip the unmapping step.
> +	 * ipa_trans_complete() to skip the unmapping step.
>   	 *
>   	 * The only things we use directly in a command scatter/gather
>   	 * entry are the DMA address and length.  We still need the SG
> @@ -433,7 +433,7 @@ void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
>   }
>   
>   /* Add a page transfer to a transaction.  It will fill the only TRE. */
> -int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
> +int ipa_trans_page_add(struct ipa_trans *trans, struct page *page, u32 size,
>   		       u32 offset)
>   {
>   	struct scatterlist *sg = &trans->sgl[0];
> @@ -445,7 +445,7 @@ int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
>   		return -EINVAL;
>   
>   	sg_set_page(sg, page, size, offset);
> -	ret = dma_map_sg(trans->gsi->dev, sg, 1, trans->direction);
> +	ret = dma_map_sg(trans->dma_subsys->dev, sg, 1, trans->direction);
>   	if (!ret)
>   		return -ENOMEM;
>   
> @@ -455,7 +455,7 @@ int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
>   }
>   
>   /* Add an SKB transfer to a transaction.  No other TREs will be used. */
> -int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb)
> +int ipa_trans_skb_add(struct ipa_trans *trans, struct sk_buff *skb)
>   {
>   	struct scatterlist *sg = &trans->sgl[0];
>   	u32 used;
> @@ -472,7 +472,7 @@ int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb)
>   		return ret;
>   	used = ret;
>   
> -	ret = dma_map_sg(trans->gsi->dev, sg, used, trans->direction);
> +	ret = dma_map_sg(trans->dma_subsys->dev, sg, used, trans->direction);
>   	if (!ret)
>   		return -ENOMEM;
>   
> @@ -539,9 +539,9 @@ static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
>    * pending list.  Finally, updates the channel ring pointer and optionally
>    * rings the doorbell.
>    */
> -static void __gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
> +static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
>   {
> -	struct gsi_channel *channel = &trans->gsi->channel[trans->channel_id];
> +	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
>   	struct gsi_ring *ring = &channel->tre_ring;
>   	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
>   	bool bei = channel->toward_ipa;
> @@ -590,28 +590,28 @@ static void __gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
>   	/* Associate the last TRE with the transaction */
>   	gsi_channel_trans_map(channel, ring->index - 1, trans);
>   
> -	gsi_trans_move_pending(trans);
> +	ipa_trans_move_pending(trans);
>   
>   	/* Ring doorbell if requested, or if all TREs are allocated */
>   	if (ring_db || !atomic_read(&channel->trans_info.tre_avail)) {
>   		/* Report what we're handing off to hardware for TX channels */
>   		if (channel->toward_ipa)
> -			gsi_channel_tx_queued(channel);
> +			ipa_channel_tx_queued(channel);
>   		gsi_channel_doorbell(channel);
>   	}
>   }
>   
>   /* Commit a GSI transaction */
> -void gsi_trans_commit(struct gsi_trans *trans, bool ring_db)
> +void ipa_trans_commit(struct ipa_trans *trans, bool ring_db)
>   {
>   	if (trans->used)
>   		__gsi_trans_commit(trans, ring_db);
>   	else
> -		gsi_trans_free(trans);
> +		ipa_trans_free(trans);
>   }
>   
>   /* Commit a GSI transaction and wait for it to complete */
> -void gsi_trans_commit_wait(struct gsi_trans *trans)
> +void ipa_trans_commit_wait(struct ipa_trans *trans)
>   {
>   	if (!trans->used)
>   		goto out_trans_free;
> @@ -623,11 +623,11 @@ void gsi_trans_commit_wait(struct gsi_trans *trans)
>   	wait_for_completion(&trans->completion);
>   
>   out_trans_free:
> -	gsi_trans_free(trans);
> +	ipa_trans_free(trans);
>   }
>   
>   /* Commit a GSI transaction and wait for it to complete, with timeout */
> -int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
> +int ipa_trans_commit_wait_timeout(struct ipa_trans *trans,
>   				  unsigned long timeout)
>   {
>   	unsigned long timeout_jiffies = msecs_to_jiffies(timeout);
> @@ -643,34 +643,34 @@ int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
>   	remaining = wait_for_completion_timeout(&trans->completion,
>   						timeout_jiffies);
>   out_trans_free:
> -	gsi_trans_free(trans);
> +	ipa_trans_free(trans);
>   
>   	return remaining ? 0 : -ETIMEDOUT;
>   }
>   
>   /* Process the completion of a transaction; called while polling */
> -void gsi_trans_complete(struct gsi_trans *trans)
> +void ipa_trans_complete(struct ipa_trans *trans)
>   {
>   	/* If the entire SGL was mapped when added, unmap it now */
>   	if (trans->direction != DMA_NONE)
> -		dma_unmap_sg(trans->gsi->dev, trans->sgl, trans->used,
> +		dma_unmap_sg(trans->dma_subsys->dev, trans->sgl, trans->used,
>   			     trans->direction);
>   
>   	ipa_gsi_trans_complete(trans);
>   
>   	complete(&trans->completion);
>   
> -	gsi_trans_free(trans);
> +	ipa_trans_free(trans);
>   }
>   
>   /* Cancel a channel's pending transactions */
> -void gsi_channel_trans_cancel_pending(struct gsi_channel *channel)
> +void ipa_channel_trans_cancel_pending(struct ipa_channel *channel)
>   {
> -	struct gsi_trans_info *trans_info = &channel->trans_info;
> -	struct gsi_trans *trans;
> +	struct ipa_trans_info *trans_info = &channel->trans_info;
> +	struct ipa_trans *trans;
>   	bool cancelled;
>   
> -	/* channel->gsi->mutex is held by caller */
> +	/* channel->dma_subsys->mutex is held by caller */
>   	spin_lock_bh(&trans_info->spinlock);
>   
>   	cancelled = !list_empty(&trans_info->pending);
> @@ -687,17 +687,17 @@ void gsi_channel_trans_cancel_pending(struct gsi_channel *channel)
>   }
>   
>   /* Issue a command to read a single byte from a channel */
> -int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr)
> +int gsi_trans_read_byte(struct ipa_dma *gsi, u32 channel_id, dma_addr_t addr)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	struct gsi_ring *ring = &channel->tre_ring;
> -	struct gsi_trans_info *trans_info;
> +	struct ipa_trans_info *trans_info;
>   	struct gsi_tre *dest_tre;
>   
>   	trans_info = &channel->trans_info;
>   
>   	/* First reserve the TRE, if possible */
> -	if (!gsi_trans_tre_reserve(trans_info, 1))
> +	if (!ipa_trans_tre_reserve(trans_info, 1))
>   		return -EBUSY;
>   
>   	/* Now fill the reserved TRE and tell the hardware */
> @@ -712,18 +712,18 @@ int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr)
>   }
>   
>   /* Mark a gsi_trans_read_byte() request done */
> -void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id)
> +void gsi_trans_read_byte_done(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
>   
> -	gsi_trans_tre_release(&channel->trans_info, 1);
> +	ipa_trans_tre_release(&channel->trans_info, 1);
>   }
>   
>   /* Initialize a channel's GSI transaction info */
> -int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
> +int ipa_channel_trans_init(struct ipa_dma *gsi, u32 channel_id)
>   {
> -	struct gsi_channel *channel = &gsi->channel[channel_id];
> -	struct gsi_trans_info *trans_info;
> +	struct ipa_channel *channel = &gsi->channel[channel_id];
> +	struct ipa_trans_info *trans_info;
>   	u32 tre_max;
>   	int ret;
>   
> @@ -747,10 +747,10 @@ int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
>   	 * for transactions (including transaction structures) based on
>   	 * this maximum number.
>   	 */
> -	tre_max = gsi_channel_tre_max(channel->gsi, channel_id);
> +	tre_max = gsi_channel_tre_max(channel->dma_subsys, channel_id);
>   
>   	/* Transactions are allocated one at a time. */
> -	ret = gsi_trans_pool_init(&trans_info->pool, sizeof(struct gsi_trans),
> +	ret = ipa_trans_pool_init(&trans_info->pool, sizeof(struct ipa_trans),
>   				  tre_max, 1);
>   	if (ret)
>   		goto err_kfree;
> @@ -765,7 +765,7 @@ int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
>   	 * A transaction on a channel can allocate as many TREs as that but
>   	 * no more.
>   	 */
> -	ret = gsi_trans_pool_init(&trans_info->sg_pool,
> +	ret = ipa_trans_pool_init(&trans_info->sg_pool,
>   				  sizeof(struct scatterlist),
>   				  tre_max, channel->tlv_count);
>   	if (ret)
> @@ -789,7 +789,7 @@ int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
>   	return 0;
>   
>   err_trans_pool_exit:
> -	gsi_trans_pool_exit(&trans_info->pool);
> +	ipa_trans_pool_exit(&trans_info->pool);
>   err_kfree:
>   	kfree(trans_info->map);
>   
> @@ -799,12 +799,12 @@ int gsi_channel_trans_init(struct gsi *gsi, u32 channel_id)
>   	return ret;
>   }
>   
> -/* Inverse of gsi_channel_trans_init() */
> -void gsi_channel_trans_exit(struct gsi_channel *channel)
> +/* Inverse of ipa_channel_trans_init() */
> +void ipa_channel_trans_exit(struct ipa_channel *channel)
>   {
> -	struct gsi_trans_info *trans_info = &channel->trans_info;
> +	struct ipa_trans_info *trans_info = &channel->trans_info;
>   
> -	gsi_trans_pool_exit(&trans_info->sg_pool);
> -	gsi_trans_pool_exit(&trans_info->pool);
> +	ipa_trans_pool_exit(&trans_info->sg_pool);
> +	ipa_trans_pool_exit(&trans_info->pool);
>   	kfree(trans_info->map);
>   }
> diff --git a/drivers/net/ipa/gsi_trans.h b/drivers/net/ipa/ipa_trans.h
> similarity index 72%
> rename from drivers/net/ipa/gsi_trans.h
> rename to drivers/net/ipa/ipa_trans.h
> index 17fd1822d8a9..b93342414360 100644
> --- a/drivers/net/ipa/gsi_trans.h
> +++ b/drivers/net/ipa/ipa_trans.h
> @@ -18,12 +18,12 @@ struct scatterlist;
>   struct device;
>   struct sk_buff;
>   
> -struct gsi;
> -struct gsi_trans;
> -struct gsi_trans_pool;
> +struct ipa_dma;
> +struct ipa_trans;
> +struct ipa_trans_pool;
>   
>   /**
> - * struct gsi_trans - a GSI transaction
> + * struct ipa_trans - a GSI transaction
>    *
>    * Most fields in this structure are for internal use by the transaction core code:
>    * @links:	Links for channel transaction lists by state
> @@ -45,10 +45,10 @@ struct gsi_trans_pool;
>    * The sizes used for some fields in this structure were chosen to ensure
>    * the full structure size is no larger than 128 bytes.
>    */
> -struct gsi_trans {
> -	struct list_head links;		/* gsi_channel lists */
> +struct ipa_trans {
> +	struct list_head links;		/* ipa_channel lists */
>   
> -	struct gsi *gsi;
> +	struct ipa_dma *dma_subsys;
>   	u8 channel_id;
>   
>   	bool cancelled;			/* true if transaction was cancelled */
> @@ -70,7 +70,7 @@ struct gsi_trans {
>   };
>   
>   /**
> - * gsi_trans_pool_init() - Initialize a pool of structures for transactions
> + * ipa_trans_pool_init() - Initialize a pool of structures for transactions
>    * @pool:	GSI transaction pool pointer
>    * @size:	Size of elements in the pool
>    * @count:	Minimum number of elements in the pool
> @@ -78,26 +78,26 @@ struct gsi_trans {
>    *
>    * Return:	0 if successful, or a negative error code
>    */
> -int gsi_trans_pool_init(struct gsi_trans_pool *pool, size_t size, u32 count,
> +int ipa_trans_pool_init(struct ipa_trans_pool *pool, size_t size, u32 count,
>   			u32 max_alloc);
>   
>   /**
> - * gsi_trans_pool_alloc() - Allocate one or more elements from a pool
> + * ipa_trans_pool_alloc() - Allocate one or more elements from a pool
>    * @pool:	Pool pointer
>    * @count:	Number of elements to allocate from the pool
>    *
>    * Return:	Virtual address of element(s) allocated from the pool
>    */
> -void *gsi_trans_pool_alloc(struct gsi_trans_pool *pool, u32 count);
> +void *ipa_trans_pool_alloc(struct ipa_trans_pool *pool, u32 count);
>   
>   /**
> - * gsi_trans_pool_exit() - Inverse of gsi_trans_pool_init()
> + * ipa_trans_pool_exit() - Inverse of ipa_trans_pool_init()
>    * @pool:	Pool pointer
>    */
> -void gsi_trans_pool_exit(struct gsi_trans_pool *pool);
> +void ipa_trans_pool_exit(struct ipa_trans_pool *pool);
>   
>   /**
> - * gsi_trans_pool_init_dma() - Initialize a pool of DMA-able structures
> + * ipa_trans_pool_init_dma() - Initialize a pool of DMA-able structures
>    * @dev:	Device used for DMA
>    * @pool:	Pool pointer
>    * @size:	Size of elements in the pool
> @@ -108,11 +108,11 @@ void gsi_trans_pool_exit(struct gsi_trans_pool *pool);
>    *
>    * Structures in this pool reside in DMA-coherent memory.
>    */
> -int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
> +int ipa_trans_pool_init_dma(struct device *dev, struct ipa_trans_pool *pool,
>   			    size_t size, u32 count, u32 max_alloc);
>   
>   /**
> - * gsi_trans_pool_alloc_dma() - Allocate an element from a DMA pool
> + * ipa_trans_pool_alloc_dma() - Allocate an element from a DMA pool
>    * @pool:	DMA pool pointer
>    * @addr:	DMA address "handle" associated with the allocation
>    *
> @@ -120,17 +120,17 @@ int gsi_trans_pool_init_dma(struct device *dev, struct gsi_trans_pool *pool,
>    *
>    * Only one element at a time may be allocated from a DMA pool.
>    */
> -void *gsi_trans_pool_alloc_dma(struct gsi_trans_pool *pool, dma_addr_t *addr);
> +void *ipa_trans_pool_alloc_dma(struct ipa_trans_pool *pool, dma_addr_t *addr);
>   
>   /**
> - * gsi_trans_pool_exit_dma() - Inverse of gsi_trans_pool_init_dma()
> + * ipa_trans_pool_exit_dma() - Inverse of ipa_trans_pool_init_dma()
>    * @dev:	Device used for DMA
>    * @pool:	Pool pointer
>    */
> -void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool);
> +void ipa_trans_pool_exit_dma(struct device *dev, struct ipa_trans_pool *pool);
>   
>   /**
> - * gsi_channel_trans_alloc() - Allocate a GSI transaction on a channel
> + * ipa_channel_trans_alloc() - Allocate a GSI transaction on a channel
>    * @gsi:	GSI pointer
>    * @channel_id:	Channel the transaction is associated with
>    * @tre_count:	Number of elements in the transaction
> @@ -139,18 +139,18 @@ void gsi_trans_pool_exit_dma(struct device *dev, struct gsi_trans_pool *pool);
>    * Return:	A GSI transaction structure, or a null pointer if all
>    *		available transactions are in use
>    */
> -struct gsi_trans *gsi_channel_trans_alloc(struct gsi *gsi, u32 channel_id,
> +struct ipa_trans *ipa_channel_trans_alloc(struct ipa_dma *dma_subsys, u32 channel_id,
>   					  u32 tre_count,
>   					  enum dma_data_direction direction);
>   
>   /**
> - * gsi_trans_free() - Free a previously-allocated GSI transaction
> + * ipa_trans_free() - Free a previously-allocated GSI transaction
>    * @trans:	Transaction to be freed
>    */
> -void gsi_trans_free(struct gsi_trans *trans);
> +void ipa_trans_free(struct ipa_trans *trans);
>   
>   /**
> - * gsi_trans_cmd_add() - Add an immediate command to a transaction
> + * ipa_trans_cmd_add() - Add an immediate command to a transaction
>    * @trans:	Transaction
>    * @buf:	Buffer pointer for command payload
>    * @size:	Number of bytes in buffer
> @@ -158,50 +158,50 @@ void gsi_trans_free(struct gsi_trans *trans);
>    * @direction:	Direction of DMA transfer (or DMA_NONE if none required)
>    * @opcode:	IPA immediate command opcode
>    */
> -void gsi_trans_cmd_add(struct gsi_trans *trans, void *buf, u32 size,
> +void ipa_trans_cmd_add(struct ipa_trans *trans, void *buf, u32 size,
>   		       dma_addr_t addr, enum dma_data_direction direction,
>   		       enum ipa_cmd_opcode opcode);
>   
>   /**
> - * gsi_trans_page_add() - Add a page transfer to a transaction
> + * ipa_trans_page_add() - Add a page transfer to a transaction
>    * @trans:	Transaction
>    * @page:	Page pointer
>    * @size:	Number of bytes (starting at offset) to transfer
>    * @offset:	Offset within page for start of transfer
>    */
> -int gsi_trans_page_add(struct gsi_trans *trans, struct page *page, u32 size,
> +int ipa_trans_page_add(struct ipa_trans *trans, struct page *page, u32 size,
>   		       u32 offset);
>   
>   /**
> - * gsi_trans_skb_add() - Add a socket transfer to a transaction
> + * ipa_trans_skb_add() - Add a socket transfer to a transaction
>    * @trans:	Transaction
>    * @skb:	Socket buffer for transfer (outbound)
>    *
>    * Return:	0, or -EMSGSIZE if socket data won't fit in transaction.
>    */
> -int gsi_trans_skb_add(struct gsi_trans *trans, struct sk_buff *skb);
> +int ipa_trans_skb_add(struct ipa_trans *trans, struct sk_buff *skb);
>   
>   /**
> - * gsi_trans_commit() - Commit a GSI transaction
> + * ipa_trans_commit() - Commit a GSI transaction
>    * @trans:	Transaction to commit
>    * @ring_db:	Whether to tell the hardware about these queued transfers
>    */
> -void gsi_trans_commit(struct gsi_trans *trans, bool ring_db);
> +void ipa_trans_commit(struct ipa_trans *trans, bool ring_db);
>   
>   /**
> - * gsi_trans_commit_wait() - Commit a GSI transaction and wait for it
> + * ipa_trans_commit_wait() - Commit a GSI transaction and wait for it
>    *			     to complete
>    * @trans:	Transaction to commit
>    */
> -void gsi_trans_commit_wait(struct gsi_trans *trans);
> +void ipa_trans_commit_wait(struct ipa_trans *trans);
>   
>   /**
> - * gsi_trans_commit_wait_timeout() - Commit a GSI transaction and wait for
> + * ipa_trans_commit_wait_timeout() - Commit a GSI transaction and wait for
>    *				     it to complete, with timeout
>    * @trans:	Transaction to commit
>    * @timeout:	Timeout period (in milliseconds)
>    */
> -int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
> +int ipa_trans_commit_wait_timeout(struct ipa_trans *trans,
>   				  unsigned long timeout);
>   
>   /**
> @@ -213,7 +213,7 @@ int gsi_trans_commit_wait_timeout(struct gsi_trans *trans,
>    * This is not a transaction operation at all.  It's defined here because
>    * it needs to be done in coordination with other transaction activity.
>    */
> -int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr);
> +int gsi_trans_read_byte(struct ipa_dma *dma_subsys, u32 channel_id, dma_addr_t addr);
>   
>   /**
>    * gsi_trans_read_byte_done() - Clean up after a single byte read TRE
> @@ -223,6 +223,6 @@ int gsi_trans_read_byte(struct gsi *gsi, u32 channel_id, dma_addr_t addr);
>    * This function needs to be called to signal that the work related
>    * to reading a byte initiated by gsi_trans_read_byte() is complete.
>    */
> -void gsi_trans_read_byte_done(struct gsi *gsi, u32 channel_id);
> +void gsi_trans_read_byte_done(struct ipa_dma *dma_subsys, u32 channel_id);
>   
>   #endif /* _GSI_TRANS_H_ */
> 


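The pool API documented in the header above hands out fixed-size elements, up to `max_alloc` contiguous elements per call. The toy allocator below is a sketch of that contract only, not the driver's actual implementation (the real pool sizes itself from TRE counts and guarantees contiguity differently); all names here are illustrative.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy pool: holds `count` fixed-size elements, allocates up to
 * `max_alloc` contiguous elements per call.  Illustrative only. */
struct toy_pool {
	void *base;
	size_t size;		/* size of one element */
	unsigned int count;	/* total elements in the pool */
	unsigned int max_alloc;	/* per-call allocation limit */
	unsigned int free;	/* index of the next free element */
};

static int toy_pool_init(struct toy_pool *pool, size_t size,
			 unsigned int count, unsigned int max_alloc)
{
	pool->base = calloc(count, size);
	if (!pool->base)
		return -1;
	pool->size = size;
	pool->count = count;
	pool->max_alloc = max_alloc;
	pool->free = 0;
	return 0;
}

/* Wrap to the start when fewer than `n` contiguous slots remain,
 * so a multi-element allocation is always contiguous. */
static void *toy_pool_alloc(struct toy_pool *pool, unsigned int n)
{
	if (n > pool->max_alloc)
		return NULL;
	if (pool->free + n > pool->count)
		pool->free = 0;
	void *elem = (char *)pool->base + pool->free * pool->size;
	pool->free += n;
	return elem;
}

static void toy_pool_exit(struct toy_pool *pool)
{
	free(pool->base);
}
```

The `max_alloc` limit mirrors the real pool's `tlv_count`-style bound: a caller may never request more elements in one go than the channel can place in a single transaction.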

* Re: [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface
  2021-09-20  3:07 ` [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface Sireesh Kodali
@ 2021-10-13 22:29   ` Alex Elder
  2021-10-18 16:45     ` Sireesh Kodali
  0 siblings, 1 reply; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:29 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> Establish callback-based interface to abstract GSI and BAM DMA differences.
> Interface is based on prototypes from ipa_dma.h (old gsi.h). Callbacks
> are stored in struct ipa_dma (old struct gsi) and assigned in gsi_init.

This is interesting and seems to have been fairly easy to abstract
this way.  The patch is actually pretty straightforward, much more
so than I would have expected.  I think I'll have more to say about
how to separate GSI from BAM in the future, but not today.

					-Alex

> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/gsi.c          |  30 ++++++--
>   drivers/net/ipa/ipa_dma.h      | 133 ++++++++++++++++++++++-----------
>   drivers/net/ipa/ipa_endpoint.c |  28 +++----
>   drivers/net/ipa/ipa_main.c     |  18 ++---
>   drivers/net/ipa/ipa_power.c    |   4 +-
>   drivers/net/ipa/ipa_trans.c    |   2 +-
>   6 files changed, 138 insertions(+), 77 deletions(-)
> 
> diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
> index 74ae0d07f859..39d9ca620a9f 100644
> --- a/drivers/net/ipa/gsi.c
> +++ b/drivers/net/ipa/gsi.c
> @@ -99,6 +99,10 @@
>   
>   #define GSI_ISR_MAX_ITER		50	/* Detect interrupt storms */
>   
> +static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id);
> +static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id);
> +static void gsi_exit(struct ipa_dma *gsi);
> +
>   /* An entry in an event ring */
>   struct gsi_event {
>   	__le64 xfer_ptr;
> @@ -869,7 +873,7 @@ static int __gsi_channel_start(struct ipa_channel *channel, bool resume)
>   }
>   
>   /* Start an allocated GSI channel */
> -int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
> +static int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
>   {
>   	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	int ret;
> @@ -924,7 +928,7 @@ static int __gsi_channel_stop(struct ipa_channel *channel, bool suspend)
>   }
>   
>   /* Stop a started channel */
> -int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
> +static int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
>   {
>   	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	int ret;
> @@ -941,7 +945,7 @@ int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
>   }
>   
>   /* Reset and reconfigure a channel, (possibly) enabling the doorbell engine */
> -void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
> +static void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
>   {
>   	struct ipa_channel *channel = &gsi->channel[channel_id];
>   
> @@ -1931,7 +1935,7 @@ int gsi_setup(struct ipa_dma *gsi)
>   }
>   
>   /* Inverse of gsi_setup() */
> -void gsi_teardown(struct ipa_dma *gsi)
> +static void gsi_teardown(struct ipa_dma *gsi)
>   {
>   	gsi_channel_teardown(gsi);
>   	gsi_irq_teardown(gsi);
> @@ -2194,6 +2198,18 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
>   
>   	gsi->dev = dev;
>   	gsi->version = version;
> +	gsi->setup = gsi_setup;
> +	gsi->teardown = gsi_teardown;
> +	gsi->exit = gsi_exit;
> +	gsi->suspend = gsi_suspend;
> +	gsi->resume = gsi_resume;
> +	gsi->channel_tre_max = gsi_channel_tre_max;
> +	gsi->channel_trans_tre_max = gsi_channel_trans_tre_max;
> +	gsi->channel_start = gsi_channel_start;
> +	gsi->channel_stop = gsi_channel_stop;
> +	gsi->channel_reset = gsi_channel_reset;
> +	gsi->channel_suspend = gsi_channel_suspend;
> +	gsi->channel_resume = gsi_channel_resume;
>   
>   	/* GSI uses NAPI on all channels.  Create a dummy network device
>   	 * for the channel NAPI contexts to be associated with.
> @@ -2250,7 +2266,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
>   }
>   
>   /* Inverse of gsi_init() */
> -void gsi_exit(struct ipa_dma *gsi)
> +static void gsi_exit(struct ipa_dma *gsi)
>   {
>   	mutex_destroy(&gsi->mutex);
>   	gsi_channel_exit(gsi);
> @@ -2277,7 +2293,7 @@ void gsi_exit(struct ipa_dma *gsi)
>    * substantially reduce pool memory requirements.  The number we
>    * reduce it by matches the number added in ipa_trans_pool_init().
>    */
> -u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
> +static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
>   {
>   	struct ipa_channel *channel = &gsi->channel[channel_id];
>   
> @@ -2286,7 +2302,7 @@ u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
>   }
>   
>   /* Returns the maximum number of TREs in a single transaction for a channel */
> -u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
> +static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
>   {
>   	struct ipa_channel *channel = &gsi->channel[channel_id];
>   
> diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
> index d053929ca3e3..1a23e6ac5785 100644
> --- a/drivers/net/ipa/ipa_dma.h
> +++ b/drivers/net/ipa/ipa_dma.h
> @@ -163,64 +163,96 @@ struct ipa_dma {
>   	struct completion completion;	/* for global EE commands */
>   	int result;			/* Negative errno (generic commands) */
>   	struct mutex mutex;		/* protects commands, programming */
> +
> +	int (*setup)(struct ipa_dma *dma_subsys);
> +	void (*teardown)(struct ipa_dma *dma_subsys);
> +	void (*exit)(struct ipa_dma *dma_subsys);
> +	void (*suspend)(struct ipa_dma *dma_subsys);
> +	void (*resume)(struct ipa_dma *dma_subsys);
> +	u32 (*channel_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	u32 (*channel_trans_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	int (*channel_start)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	int (*channel_stop)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	void (*channel_reset)(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
> +	int (*channel_suspend)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	int (*channel_resume)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	void (*trans_commit)(struct ipa_trans *trans, bool ring_db);
>   };
>   
>   /**
> - * gsi_setup() - Set up the GSI subsystem
> - * @gsi:	Address of GSI structure embedded in an IPA structure
> + * ipa_dma_setup() - Set up the DMA subsystem
> + * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
>    *
>    * Return:	0 if successful, or a negative error code
>    *
> - * Performs initialization that must wait until the GSI hardware is
> + * Performs initialization that must wait until the GSI/BAM hardware is
>    * ready (including firmware loaded).
>    */
> -int gsi_setup(struct ipa_dma *dma_subsys);
> +static inline int ipa_dma_setup(struct ipa_dma *dma_subsys)
> +{
> +	return dma_subsys->setup(dma_subsys);
> +}
>   
>   /**
> - * gsi_teardown() - Tear down GSI subsystem
> - * @gsi:	GSI address previously passed to a successful gsi_setup() call
> + * ipa_dma_teardown() - Tear down DMA subsystem
> + * @dma_subsys:	ipa_dma address previously passed to a successful ipa_dma_setup() call
>    */
> -void gsi_teardown(struct ipa_dma *dma_subsys);
> +static inline void ipa_dma_teardown(struct ipa_dma *dma_subsys)
> +{
> +	dma_subsys->teardown(dma_subsys);
> +}
>   
>   /**
> - * gsi_channel_tre_max() - Channel maximum number of in-flight TREs
> - * @gsi:	GSI pointer
> + * ipa_channel_tre_max() - Channel maximum number of in-flight TREs
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel whose limit is to be returned
>    *
>    * Return:	 The maximum number of TREs outstanding on the channel
>    */
> -u32 gsi_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline u32 ipa_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_tre_max(dma_subsys, channel_id);
> +}
>   
>   /**
> - * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction
> - * @gsi:	GSI pointer
> + * ipa_channel_trans_tre_max() - Maximum TREs in a single transaction
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel whose limit is to be returned
>    *
>    * Return:	 The maximum TRE count per transaction on the channel
>    */
> -u32 gsi_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline u32 ipa_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_trans_tre_max(dma_subsys, channel_id);
> +}
>   
>   /**
> - * gsi_channel_start() - Start an allocated GSI channel
> - * @gsi:	GSI pointer
> + * ipa_channel_start() - Start an allocated DMA channel
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel to start
>    *
>    * Return:	0 if successful, or a negative error code
>    */
> -int gsi_channel_start(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline int ipa_channel_start(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_start(dma_subsys, channel_id);
> +}
>   
>   /**
> - * gsi_channel_stop() - Stop a started GSI channel
> - * @gsi:	GSI pointer returned by gsi_setup()
> + * ipa_channel_stop() - Stop a started DMA channel
> + * @dma_subsys:	pointer to ipa_dma structure returned by ipa_dma_setup()
>    * @channel_id:	Channel to stop
>    *
>    * Return:	0 if successful, or a negative error code
>    */
> -int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline int ipa_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_stop(dma_subsys, channel_id);
> +}
>   
>   /**
> - * gsi_channel_reset() - Reset an allocated GSI channel
> - * @gsi:	GSI pointer
> + * ipa_channel_reset() - Reset an allocated DMA channel
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel to be reset
>    * @doorbell:	Whether to (possibly) enable the doorbell engine
>    *
> @@ -230,41 +262,49 @@ int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
>    * GSI hardware relinquishes ownership of all pending receive buffer
>    * transactions and they will complete with their cancelled flag set.
>    */
> -void gsi_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
> +static inline void ipa_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell)
> +{
> +	 dma_subsys->channel_reset(dma_subsys, channel_id, doorbell);
> +}
>   
> -/**
> - * gsi_suspend() - Prepare the GSI subsystem for suspend
> - * @gsi:	GSI pointer
> - */
> -void gsi_suspend(struct ipa_dma *dma_subsys);
>   
>   /**
> - * gsi_resume() - Resume the GSI subsystem following suspend
> - * @gsi:	GSI pointer
> - */
> -void gsi_resume(struct ipa_dma *dma_subsys);
> -
> -/**
> - * gsi_channel_suspend() - Suspend a GSI channel
> - * @gsi:	GSI pointer
> + * ipa_channel_suspend() - Suspend a DMA channel
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel to suspend
>    *
>    * For IPA v4.0+, suspend is implemented by stopping the channel.
>    */
> -int gsi_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline int ipa_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_suspend(dma_subsys, channel_id);
> +}
>   
>   /**
> - * gsi_channel_resume() - Resume a suspended GSI channel
> - * @gsi:	GSI pointer
> + * ipa_channel_resume() - Resume a suspended DMA channel
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel to resume
>    *
>    * For IPA v4.0+, the stopped channel is started again.
>    */
> -int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline int ipa_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_resume(dma_subsys, channel_id);
> +}
> +
> +static inline void ipa_dma_suspend(struct ipa_dma *dma_subsys)
> +{
> +	return dma_subsys->suspend(dma_subsys);
> +}
> +
> +static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
> +{
> +	return dma_subsys->resume(dma_subsys);
> +}
>   
>   /**
> - * gsi_init() - Initialize the GSI subsystem
> - * @gsi:	Address of GSI structure embedded in an IPA structure
> + * ipa_dma_init() - Initialize the GSI subsystem
> + * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
>    * @pdev:	IPA platform device
>    * @version:	IPA hardware version (implies GSI version)
>    * @count:	Number of entries in the configuration data array
> @@ -275,14 +315,19 @@ int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
>    * Early stage initialization of the GSI subsystem, performing tasks
>    * that can be done before the GSI hardware is ready to use.
>    */
> +
>   int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
>   	     enum ipa_version version, u32 count,
>   	     const struct ipa_gsi_endpoint_data *data);
>   
>   /**
> - * gsi_exit() - Exit the GSI subsystem
> - * @gsi:	GSI address previously passed to a successful gsi_init() call
> + * ipa_dma_exit() - Exit the DMA subsystem
> + * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
>    */
> -void gsi_exit(struct ipa_dma *dma_subsys);
> +static inline void ipa_dma_exit(struct ipa_dma *dma_subsys)
> +{
> +	if (dma_subsys)
> +		dma_subsys->exit(dma_subsys);
> +}
>   
>   #endif /* _GSI_H_ */
> diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> index 90d6880e8a25..dbef549c4537 100644
> --- a/drivers/net/ipa/ipa_endpoint.c
> +++ b/drivers/net/ipa/ipa_endpoint.c
> @@ -1091,7 +1091,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
>   	 * try replenishing again if our backlog is *all* available TREs.
>   	 */
>   	gsi = &endpoint->ipa->dma_subsys;
> -	if (backlog == gsi_channel_tre_max(gsi, endpoint->channel_id))
> +	if (backlog == ipa_channel_tre_max(gsi, endpoint->channel_id))
>   		schedule_delayed_work(&endpoint->replenish_work,
>   				      msecs_to_jiffies(1));
>   }
> @@ -1107,7 +1107,7 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
>   		atomic_add(saved, &endpoint->replenish_backlog);
>   
>   	/* Start replenishing if hardware currently has no buffers */
> -	max_backlog = gsi_channel_tre_max(gsi, endpoint->channel_id);
> +	max_backlog = ipa_channel_tre_max(gsi, endpoint->channel_id);
>   	if (atomic_read(&endpoint->replenish_backlog) == max_backlog)
>   		ipa_endpoint_replenish(endpoint, false);
>   }
> @@ -1432,13 +1432,13 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
>   	 * active.  We'll re-enable the doorbell (if appropriate) when
>   	 * we reset again below.
>   	 */
> -	gsi_channel_reset(gsi, endpoint->channel_id, false);
> +	ipa_channel_reset(gsi, endpoint->channel_id, false);
>   
>   	/* Make sure the channel isn't suspended */
>   	suspended = ipa_endpoint_program_suspend(endpoint, false);
>   
>   	/* Start channel and do a 1 byte read */
> -	ret = gsi_channel_start(gsi, endpoint->channel_id);
> +	ret = ipa_channel_start(gsi, endpoint->channel_id);
>   	if (ret)
>   		goto out_suspend_again;
>   
> @@ -1461,7 +1461,7 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
>   
>   	gsi_trans_read_byte_done(gsi, endpoint->channel_id);
>   
> -	ret = gsi_channel_stop(gsi, endpoint->channel_id);
> +	ret = ipa_channel_stop(gsi, endpoint->channel_id);
>   	if (ret)
>   		goto out_suspend_again;
>   
> @@ -1470,14 +1470,14 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
>   	 * complete the channel reset sequence.  Finish by suspending the
>   	 * channel again (if necessary).
>   	 */
> -	gsi_channel_reset(gsi, endpoint->channel_id, true);
> +	ipa_channel_reset(gsi, endpoint->channel_id, true);
>   
>   	usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
>   
>   	goto out_suspend_again;
>   
>   err_endpoint_stop:
> -	(void)gsi_channel_stop(gsi, endpoint->channel_id);
> +	(void)ipa_channel_stop(gsi, endpoint->channel_id);
>   out_suspend_again:
>   	if (suspended)
>   		(void)ipa_endpoint_program_suspend(endpoint, true);
> @@ -1504,7 +1504,7 @@ static void ipa_endpoint_reset(struct ipa_endpoint *endpoint)
>   	if (special && ipa_endpoint_aggr_active(endpoint))
>   		ret = ipa_endpoint_reset_rx_aggr(endpoint);
>   	else
> -		gsi_channel_reset(&ipa->dma_subsys, channel_id, true);
> +		ipa_channel_reset(&ipa->dma_subsys, channel_id, true);
>   
>   	if (ret)
>   		dev_err(&ipa->pdev->dev,
> @@ -1537,7 +1537,7 @@ int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
>   	struct ipa_dma *gsi = &ipa->dma_subsys;
>   	int ret;
>   
> -	ret = gsi_channel_start(gsi, endpoint->channel_id);
> +	ret = ipa_channel_start(gsi, endpoint->channel_id);
>   	if (ret) {
>   		dev_err(&ipa->pdev->dev,
>   			"error %d starting %cX channel %u for endpoint %u\n",
> @@ -1576,7 +1576,7 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
>   	}
>   
>   	/* Note that if stop fails, the channel's state is not well-defined */
> -	ret = gsi_channel_stop(gsi, endpoint->channel_id);
> +	ret = ipa_channel_stop(gsi, endpoint->channel_id);
>   	if (ret)
>   		dev_err(&ipa->pdev->dev,
>   			"error %d attempting to stop endpoint %u\n", ret,
> @@ -1598,7 +1598,7 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
>   		(void)ipa_endpoint_program_suspend(endpoint, true);
>   	}
>   
> -	ret = gsi_channel_suspend(gsi, endpoint->channel_id);
> +	ret = ipa_channel_suspend(gsi, endpoint->channel_id);
>   	if (ret)
>   		dev_err(dev, "error %d suspending channel %u\n", ret,
>   			endpoint->channel_id);
> @@ -1617,7 +1617,7 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
>   	if (!endpoint->toward_ipa)
>   		(void)ipa_endpoint_program_suspend(endpoint, false);
>   
> -	ret = gsi_channel_resume(gsi, endpoint->channel_id);
> +	ret = ipa_channel_resume(gsi, endpoint->channel_id);
>   	if (ret)
>   		dev_err(dev, "error %d resuming channel %u\n", ret,
>   			endpoint->channel_id);
> @@ -1660,14 +1660,14 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
>   	if (endpoint->ee_id != GSI_EE_AP)
>   		return;
>   
> -	endpoint->trans_tre_max = gsi_channel_trans_tre_max(gsi, channel_id);
> +	endpoint->trans_tre_max = ipa_channel_trans_tre_max(gsi, channel_id);
>   	if (!endpoint->toward_ipa) {
>   		/* RX transactions require a single TRE, so the maximum
>   		 * backlog is the same as the maximum outstanding TREs.
>   		 */
>   		endpoint->replenish_enabled = false;
>   		atomic_set(&endpoint->replenish_saved,
> -			   gsi_channel_tre_max(gsi, endpoint->channel_id));
> +			   ipa_channel_tre_max(gsi, endpoint->channel_id));
>   		atomic_set(&endpoint->replenish_backlog, 0);
>   		INIT_DELAYED_WORK(&endpoint->replenish_work,
>   				  ipa_endpoint_replenish_work);
> diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> index 026f5555fa7d..6ab691ff1faf 100644
> --- a/drivers/net/ipa/ipa_main.c
> +++ b/drivers/net/ipa/ipa_main.c
> @@ -98,13 +98,13 @@ int ipa_setup(struct ipa *ipa)
>   	struct device *dev = &ipa->pdev->dev;
>   	int ret;
>   
> -	ret = gsi_setup(&ipa->dma_subsys);
> +	ret = ipa_dma_setup(&ipa->dma_subsys);
>   	if (ret)
>   		return ret;
>   
>   	ret = ipa_power_setup(ipa);
>   	if (ret)
> -		goto err_gsi_teardown;
> +		goto err_dma_teardown;
>   
>   	ipa_endpoint_setup(ipa);
>   
> @@ -153,8 +153,8 @@ int ipa_setup(struct ipa *ipa)
>   err_endpoint_teardown:
>   	ipa_endpoint_teardown(ipa);
>   	ipa_power_teardown(ipa);
> -err_gsi_teardown:
> -	gsi_teardown(&ipa->dma_subsys);
> +err_dma_teardown:
> +	ipa_dma_teardown(&ipa->dma_subsys);
>   
>   	return ret;
>   }
> @@ -179,7 +179,7 @@ static void ipa_teardown(struct ipa *ipa)
>   	ipa_endpoint_disable_one(command_endpoint);
>   	ipa_endpoint_teardown(ipa);
>   	ipa_power_teardown(ipa);
> -	gsi_teardown(&ipa->dma_subsys);
> +	ipa_dma_teardown(&ipa->dma_subsys);
>   }
>   
>   /* Configure bus access behavior for IPA components */
> @@ -726,7 +726,7 @@ static int ipa_probe(struct platform_device *pdev)
>   					    data->endpoint_data);
>   	if (!ipa->filter_map) {
>   		ret = -EINVAL;
> -		goto err_gsi_exit;
> +		goto err_dma_exit;
>   	}
>   
>   	ret = ipa_table_init(ipa);
> @@ -780,8 +780,8 @@ static int ipa_probe(struct platform_device *pdev)
>   	ipa_table_exit(ipa);
>   err_endpoint_exit:
>   	ipa_endpoint_exit(ipa);
> -err_gsi_exit:
> -	gsi_exit(&ipa->dma_subsys);
> +err_dma_exit:
> +	ipa_dma_exit(&ipa->dma_subsys);
>   err_mem_exit:
>   	ipa_mem_exit(ipa);
>   err_reg_exit:
> @@ -824,7 +824,7 @@ static int ipa_remove(struct platform_device *pdev)
>   	ipa_modem_exit(ipa);
>   	ipa_table_exit(ipa);
>   	ipa_endpoint_exit(ipa);
> -	gsi_exit(&ipa->dma_subsys);
> +	ipa_dma_exit(&ipa->dma_subsys);
>   	ipa_mem_exit(ipa);
>   	ipa_reg_exit(ipa);
>   	kfree(ipa);
> diff --git a/drivers/net/ipa/ipa_power.c b/drivers/net/ipa/ipa_power.c
> index b1c6c0fcb654..096cfb8ae9a5 100644
> --- a/drivers/net/ipa/ipa_power.c
> +++ b/drivers/net/ipa/ipa_power.c
> @@ -243,7 +243,7 @@ static int ipa_runtime_suspend(struct device *dev)
>   	if (ipa->setup_complete) {
>   		__clear_bit(IPA_POWER_FLAG_RESUMED, ipa->power->flags);
>   		ipa_endpoint_suspend(ipa);
> -		gsi_suspend(&ipa->gsi);
> +		ipa_dma_suspend(&ipa->dma_subsys);
>   	}
>   
>   	return ipa_power_disable(ipa);
> @@ -260,7 +260,7 @@ static int ipa_runtime_resume(struct device *dev)
>   
>   	/* Endpoints aren't usable until setup is complete */
>   	if (ipa->setup_complete) {
> -		gsi_resume(&ipa->gsi);
> +		ipa_dma_resume(&ipa->dma_subsys);
>   		ipa_endpoint_resume(ipa);
>   	}
>   
> diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
> index b87936b18770..22755f3ce3da 100644
> --- a/drivers/net/ipa/ipa_trans.c
> +++ b/drivers/net/ipa/ipa_trans.c
> @@ -747,7 +747,7 @@ int ipa_channel_trans_init(struct ipa_dma *gsi, u32 channel_id)
>   	 * for transactions (including transaction structures) based on
>   	 * this maximum number.
>   	 */
> -	tre_max = gsi_channel_tre_max(channel->dma_subsys, channel_id);
> +	tre_max = ipa_channel_tre_max(channel->dma_subsys, channel_id);
>   
>   	/* Transactions are allocated one at a time. */
>   	ret = ipa_trans_pool_init(&trans_info->pool, sizeof(struct ipa_trans),
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 05/17] net: ipa: Check interrupts for availability
  2021-09-20  3:07 ` [RFC PATCH 05/17] net: ipa: Check interrupts for availability Sireesh Kodali
@ 2021-10-13 22:29   ` Alex Elder
  0 siblings, 0 replies; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:29 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> Make ipa_interrupt_add/ipa_interrupt_remove a no-op if the requested
> interrupt is not supported by the IPA hardware.
> 
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>

I'm not sure why this is important.  Callers shouldn't add an
interrupt type that isn't supported by the hardware.  The check
here would be for sanity.

And there's no point in checking in the interrupt remove
function, the only interrupts removed will have already
been added.

Anyway, maybe I'll see you're adding support for these
IPA interrupt types later on?

					-Alex

> ---
>   drivers/net/ipa/ipa_interrupt.c | 25 +++++++++++++++++++++++++
>   1 file changed, 25 insertions(+)
> 
> diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
> index b35170a93b0f..94708a23a597 100644
> --- a/drivers/net/ipa/ipa_interrupt.c
> +++ b/drivers/net/ipa/ipa_interrupt.c
> @@ -48,6 +48,25 @@ static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 irq_id)
>   	return irq_id == IPA_IRQ_UC_0 || irq_id == IPA_IRQ_UC_1;
>   }
>   
> +static bool ipa_interrupt_check_fixup(enum ipa_irq_id *irq_id, enum ipa_version version)
> +{
> +	switch (*irq_id) {
> +	case IPA_IRQ_EOT_COAL:
> +		return version < IPA_VERSION_3_5;
> +	case IPA_IRQ_DCMP:
> +		return version < IPA_VERSION_4_5;
> +	case IPA_IRQ_TLV_LEN_MIN_DSM:
> +		return version >= IPA_VERSION_4_5;
> +	default:
> +		break;
> +	}
> +
> +	if (*irq_id >= IPA_IRQ_DRBIP_PKT_EXCEED_MAX_SIZE_EN)
> +		return version >= IPA_VERSION_4_9;
> +
> +	return true;
> +}
> +
>   /* Process a particular interrupt type that has been received */
>   static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 irq_id)
>   {
> @@ -191,6 +210,9 @@ void ipa_interrupt_add(struct ipa_interrupt *interrupt,
>   	struct ipa *ipa = interrupt->ipa;
>   	u32 offset;
>   
> +	if (!ipa_interrupt_check_fixup(&ipa_irq, ipa->version))
> +		return;
> +
>   	WARN_ON(ipa_irq >= IPA_IRQ_COUNT);
>   
>   	interrupt->handler[ipa_irq] = handler;
> @@ -208,6 +230,9 @@ ipa_interrupt_remove(struct ipa_interrupt *interrupt, enum ipa_irq_id ipa_irq)
>   	struct ipa *ipa = interrupt->ipa;
>   	u32 offset;
>   
> +	if (!ipa_interrupt_check_fixup(&ipa_irq, ipa->version))
> +		return;
> +
>   	WARN_ON(ipa_irq >= IPA_IRQ_COUNT);
>   
>   	/* Update the IPA interrupt mask to disable it */
> 



* Re: [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait
  2021-09-20  3:08 ` [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait Sireesh Kodali
@ 2021-10-13 22:29   ` Alex Elder
  2021-10-18 17:02     ` Sireesh Kodali
  0 siblings, 1 reply; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:29 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> Sometimes the pipeline clear fails, and when it does, having a hang in
> the kernel is ugly. The timeout gives us a nice error message. Note that
> this shouldn't actually hang, ever. It only hangs if there is a mistake
> in the config, and the timeout is only useful when debugging.
> 
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>

This is actually an item on my to-do list.  All of the waits
for GSI completions should have timeouts.  The only reason it
hasn't been implemented already is that I would like to be sure
all paths that could have a timeout actually have a reasonable
recovery.

I'd say an error message after a timeout is better than a hung
task panic, but if this does time out, I'm not sure the state
of the hardware is well-defined.

					-Alex

> ---
>   drivers/net/ipa/ipa_cmd.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> index 3db9e94e484f..0bdbc331fa78 100644
> --- a/drivers/net/ipa/ipa_cmd.c
> +++ b/drivers/net/ipa/ipa_cmd.c
> @@ -658,7 +658,10 @@ u32 ipa_cmd_pipeline_clear_count(void)
>   
>   void ipa_cmd_pipeline_clear_wait(struct ipa *ipa)
>   {
> -	wait_for_completion(&ipa->completion);
> +	unsigned long timeout_jiffies = msecs_to_jiffies(1000);
> +
> +	if (!wait_for_completion_timeout(&ipa->completion, timeout_jiffies))
> +		dev_err(&ipa->pdev->dev, "%s time out\n", __func__);
>   }
>   
>   void ipa_cmd_pipeline_clear(struct ipa *ipa)
> 



* Re: [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions
  2021-09-20  3:08 ` [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions Sireesh Kodali
@ 2021-10-13 22:29   ` Alex Elder
  2021-10-18 17:25     ` Sireesh Kodali
  0 siblings, 1 reply; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:29 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> IPA v2.x is an older version of the IPA hardware, and is 32 bit.
> 
> Most of the registers were just shifted in newer IPA versions, but
> the register fields have remained the same across IPA versions. This
> means that only the register addresses needed to be added to the driver.
> 
> To handle the different IPA register addresses, static inline functions
> have been defined that return the correct register address.

Thank you for following the existing convention in implementing these.
Even if it isn't perfect, it's good to remain consistent.

You use:
	if (version <= IPA_VERSION_2_6L)
but then also define and use
	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
And the only new IPA versions are 2_0, 2_5, and 2_6L.

I would stick with the former and don't define IPA_VERSION_RANGE().
Nothing less than IPA v2.0 (or 3.0 currently) is supported, so
"there is no version less than that."

Oh, and I noticed some local variables defined without the
"reverse Christmas tree order" which, like it or not, is the
convention used consistently throughout this driver.

I might quibble with a few other minor things in these definitions
but overall this looks fine.

					-Alex

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_cmd.c      |   3 +-
>   drivers/net/ipa/ipa_endpoint.c |  33 +++---
>   drivers/net/ipa/ipa_main.c     |   8 +-
>   drivers/net/ipa/ipa_mem.c      |   5 +-
>   drivers/net/ipa/ipa_reg.h      | 184 +++++++++++++++++++++++++++------
>   drivers/net/ipa/ipa_version.h  |  12 +++
>   6 files changed, 195 insertions(+), 50 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> index 0bdbc331fa78..7a104540dc26 100644
> --- a/drivers/net/ipa/ipa_cmd.c
> +++ b/drivers/net/ipa/ipa_cmd.c
> @@ -326,7 +326,8 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa)
>   	 * worst case (highest endpoint number) offset of that endpoint
>   	 * fits in the register write command field(s) that must hold it.
>   	 */
> -	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT - 1);
> +	offset = ipa_reg_endp_status_n_offset(ipa->version,
> +			IPA_ENDPOINT_COUNT - 1);
>   	name = "maximal endpoint status";
>   	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
>   		return false;
> diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> index dbef549c4537..7d3ab61cd890 100644
> --- a/drivers/net/ipa/ipa_endpoint.c
> +++ b/drivers/net/ipa/ipa_endpoint.c
> @@ -242,8 +242,8 @@ static struct ipa_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint,
>   static bool
>   ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_CTRL_N_OFFSET(endpoint->endpoint_id);
>   	struct ipa *ipa = endpoint->ipa;
> +	u32 offset = ipa_reg_endp_init_ctrl_n_offset(ipa->version, endpoint->endpoint_id);
>   	bool state;
>   	u32 mask;
>   	u32 val;
> @@ -410,7 +410,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   		if (!(endpoint->ee_id == GSI_EE_MODEM && endpoint->toward_ipa))
>   			continue;
>   
> -		offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
> +		offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
>   
>   		/* Value written is 0, and all bits are updated.  That
>   		 * means status is disabled on the endpoint, and as a
> @@ -431,7 +431,8 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   
>   static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_CFG_N_OFFSET(endpoint->endpoint_id);
> +	struct ipa *ipa = endpoint->ipa;
> +	u32 offset = ipa_reg_endp_init_cfg_n_offset(ipa->version, endpoint->endpoint_id);
>   	enum ipa_cs_offload_en enabled;
>   	u32 val = 0;
>   
> @@ -523,8 +524,8 @@ ipa_qmap_header_size(enum ipa_version version, struct ipa_endpoint *endpoint)
>    */
>   static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_HDR_N_OFFSET(endpoint->endpoint_id);
>   	struct ipa *ipa = endpoint->ipa;
> +	u32 offset = ipa_reg_endp_init_hdr_n_offset(ipa->version, endpoint->endpoint_id);
>   	u32 val = 0;
>   
>   	if (endpoint->data->qmap) {
> @@ -565,9 +566,9 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
>   
>   static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(endpoint->endpoint_id);
> -	u32 pad_align = endpoint->data->rx.pad_align;
>   	struct ipa *ipa = endpoint->ipa;
> +	u32 offset = ipa_reg_endp_init_hdr_ext_n_offset(ipa->version, endpoint->endpoint_id);
> +	u32 pad_align = endpoint->data->rx.pad_align;
>   	u32 val = 0;
>   
>   	val |= HDR_ENDIANNESS_FMASK;		/* big endian */
> @@ -609,6 +610,7 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
>   
>   static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
>   {
> +	enum ipa_version version = endpoint->ipa->version;
>   	u32 endpoint_id = endpoint->endpoint_id;
>   	u32 val = 0;
>   	u32 offset;
> @@ -616,7 +618,7 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
>   	if (endpoint->toward_ipa)
>   		return;		/* Register not valid for TX endpoints */
>   
> -	offset = IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(endpoint_id);
> +	offset = ipa_reg_endp_init_hdr_metadata_mask_n_offset(version, endpoint_id);
>   
>   	/* Note that HDR_ENDIANNESS indicates big endian header fields */
>   	if (endpoint->data->qmap)
> @@ -627,7 +629,8 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
>   
>   static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_MODE_N_OFFSET(endpoint->endpoint_id);
> +	enum ipa_version version = endpoint->ipa->version;
> +	u32 offset = ipa_reg_endp_init_mode_n_offset(version, endpoint->endpoint_id);
>   	u32 val;
>   
>   	if (!endpoint->toward_ipa)
> @@ -716,8 +719,8 @@ static u32 aggr_sw_eof_active_encoded(enum ipa_version version, bool enabled)
>   
>   static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_AGGR_N_OFFSET(endpoint->endpoint_id);
>   	enum ipa_version version = endpoint->ipa->version;
> +	u32 offset = ipa_reg_endp_init_aggr_n_offset(version, endpoint->endpoint_id);
>   	u32 val = 0;
>   
>   	if (endpoint->data->aggregation) {
> @@ -853,7 +856,7 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
>   	u32 offset;
>   	u32 val;
>   
> -	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(endpoint_id);
> +	offset = ipa_reg_endp_init_hol_block_timer_n_offset(ipa->version, endpoint_id);
>   	val = hol_block_timer_val(ipa, microseconds);
>   	iowrite32(val, ipa->reg_virt + offset);
>   }
> @@ -861,12 +864,13 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
>   static void
>   ipa_endpoint_init_hol_block_enable(struct ipa_endpoint *endpoint, bool enable)
>   {
> +	enum ipa_version version = endpoint->ipa->version;
>   	u32 endpoint_id = endpoint->endpoint_id;
>   	u32 offset;
>   	u32 val;
>   
>   	val = enable ? HOL_BLOCK_EN_FMASK : 0;
> -	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(endpoint_id);
> +	offset = ipa_reg_endp_init_hol_block_en_n_offset(version, endpoint_id);
>   	iowrite32(val, endpoint->ipa->reg_virt + offset);
>   }
>   
> @@ -887,7 +891,8 @@ void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa)
>   
>   static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(endpoint->endpoint_id);
> +	enum ipa_version version = endpoint->ipa->version;
> +	u32 offset = ipa_reg_endp_init_deaggr_n_offset(version, endpoint->endpoint_id);
>   	u32 val = 0;
>   
>   	if (!endpoint->toward_ipa)
> @@ -979,7 +984,7 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
>   	u32 val = 0;
>   	u32 offset;
>   
> -	offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
> +	offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
>   
>   	if (endpoint->data->status_enable) {
>   		val |= STATUS_EN_FMASK;
> @@ -1384,7 +1389,7 @@ void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id)
>   	val |= u32_encode_bits(endpoint_id, ROUTE_FRAG_DEF_PIPE_FMASK);
>   	val |= ROUTE_DEF_RETAIN_HDR_FMASK;
>   
> -	iowrite32(val, ipa->reg_virt + IPA_REG_ROUTE_OFFSET);
> +	iowrite32(val, ipa->reg_virt + ipa_reg_route_offset(ipa->version));
>   }
>   
>   void ipa_endpoint_default_route_clear(struct ipa *ipa)
> diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> index 6ab691ff1faf..ba06e3ad554c 100644
> --- a/drivers/net/ipa/ipa_main.c
> +++ b/drivers/net/ipa/ipa_main.c
> @@ -191,7 +191,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
>   	if (ipa->version < IPA_VERSION_4_0)
>   		return;
>   
> -	val = ioread32(ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
> +	val = ioread32(ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
>   
>   	if (ipa->version == IPA_VERSION_4_0) {
>   		val &= ~IPA_QMB_SELECT_CONS_EN_FMASK;
> @@ -206,7 +206,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
>   	val |= GSI_MULTI_INORDER_RD_DIS_FMASK;
>   	val |= GSI_MULTI_INORDER_WR_DIS_FMASK;
>   
> -	iowrite32(val, ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
> +	iowrite32(val, ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
>   }
>   
>   /* Configure DDR and (possibly) PCIe max read/write QSB values */
> @@ -355,7 +355,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
>   	/* IPA v4.5+ has no backward compatibility register */
>   	if (version < IPA_VERSION_4_5) {
>   		val = data->backward_compat;
> -		iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET);
> +		iowrite32(val, ipa->reg_virt + ipa_reg_bcr_offset(ipa->version));
>   	}
>   
>   	/* Implement some hardware workarounds */
> @@ -384,7 +384,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
>   		/* Configure aggregation timer granularity */
>   		granularity = ipa_aggr_granularity_val(IPA_AGGR_GRANULARITY);
>   		val = u32_encode_bits(granularity, AGGR_GRANULARITY_FMASK);
> -		iowrite32(val, ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET);
> +		iowrite32(val, ipa->reg_virt + ipa_reg_counter_cfg_offset(ipa->version));
>   	} else {
>   		ipa_qtime_config(ipa);
>   	}
> diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
> index 16e5fdd5bd73..8acc88070a6f 100644
> --- a/drivers/net/ipa/ipa_mem.c
> +++ b/drivers/net/ipa/ipa_mem.c
> @@ -113,7 +113,8 @@ int ipa_mem_setup(struct ipa *ipa)
>   	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
>   	offset = ipa->mem_offset + mem->offset;
>   	val = proc_cntxt_base_addr_encoded(ipa->version, offset);
> -	iowrite32(val, ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET);
> +	iowrite32(val, ipa->reg_virt +
> +		  ipa_reg_local_pkt_proc_cntxt_base_offset(ipa->version));
>   
>   	return 0;
>   }
> @@ -316,7 +317,7 @@ int ipa_mem_config(struct ipa *ipa)
>   	u32 i;
>   
>   	/* Check the advertised location and size of the shared memory area */
> -	val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
> +	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
>   
>   	/* The fields in the register are in 8 byte units */
>   	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
> index a5b355384d4a..fcae0296cfa4 100644
> --- a/drivers/net/ipa/ipa_reg.h
> +++ b/drivers/net/ipa/ipa_reg.h
> @@ -65,7 +65,17 @@ struct ipa;
>    * of valid bits for the register.
>    */
>   
> -#define IPA_REG_COMP_CFG_OFFSET				0x0000003c
> +#define IPA_REG_COMP_SW_RESET_OFFSET		0x0000003c
> +
> +#define IPA_REG_V2_ENABLED_PIPES_OFFSET		0x000005dc
> +
> +static inline u32 ipa_reg_comp_cfg_offset(enum ipa_version version)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x38;
> +
> +	return 0x3c;
> +}
>   /* The next field is not supported for IPA v4.0+, not present for IPA v4.5+ */
>   #define ENABLE_FMASK				GENMASK(0, 0)
>   /* The next field is present for IPA v4.7+ */
> @@ -124,6 +134,7 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
>   	return u32_encode_bits(val, GENMASK(17, 17));
>   }
>   
> +/* This register is only present on IPA v3.0 and above */
>   #define IPA_REG_CLKON_CFG_OFFSET			0x00000044
>   #define RX_FMASK				GENMASK(0, 0)
>   #define PROC_FMASK				GENMASK(1, 1)
> @@ -164,7 +175,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
>   /* The next field is present for IPA v4.7+ */
>   #define DRBIP_FMASK				GENMASK(31, 31)
>   
> -#define IPA_REG_ROUTE_OFFSET				0x00000048
> +static inline u32 ipa_reg_route_offset(enum ipa_version version)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x44;
> +
> +	return 0x48;
> +}
> +
>   #define ROUTE_DIS_FMASK				GENMASK(0, 0)
>   #define ROUTE_DEF_PIPE_FMASK			GENMASK(5, 1)
>   #define ROUTE_DEF_HDR_TABLE_FMASK		GENMASK(6, 6)
> @@ -172,7 +190,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
>   #define ROUTE_FRAG_DEF_PIPE_FMASK		GENMASK(21, 17)
>   #define ROUTE_DEF_RETAIN_HDR_FMASK		GENMASK(24, 24)
>   
> -#define IPA_REG_SHARED_MEM_SIZE_OFFSET			0x00000054
> +static inline u32 ipa_reg_shared_mem_size_offset(enum ipa_version version)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x50;
> +
> +	return 0x54;
> +}
> +
>   #define SHARED_MEM_SIZE_FMASK			GENMASK(15, 0)
>   #define SHARED_MEM_BADDR_FMASK			GENMASK(31, 16)
>   
> @@ -219,7 +244,13 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
>   }
>   
>   /* The next register is not present for IPA v4.5+ */
> -#define IPA_REG_BCR_OFFSET				0x000001d0
> +static inline u32 ipa_reg_bcr_offset(enum ipa_version version)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
> +		return 0x5b0;
> +
> +	return 0x1d0;
> +}
>   /* The next two fields are not present for IPA v4.2+ */
>   #define BCR_CMDQ_L_LACK_ONE_ENTRY_FMASK		GENMASK(0, 0)
>   #define BCR_TX_NOT_USING_BRESP_FMASK		GENMASK(1, 1)
> @@ -236,7 +267,14 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
>   #define BCR_ROUTER_PREFETCH_EN_FMASK		GENMASK(9, 9)
>   
>   /* The value of the next register must be a multiple of 8 (bottom 3 bits 0) */
> -#define IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET		0x000001e8
> +static inline u32 ipa_reg_local_pkt_proc_cntxt_base_offset(enum ipa_version version)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x5e0;
> +
> +	return 0x1e8;
> +}
> +
>   
>   /* Encoded value for LOCAL_PKT_PROC_CNTXT register BASE_ADDR field */
>   static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
> @@ -252,7 +290,14 @@ static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
>   #define IPA_REG_AGGR_FORCE_CLOSE_OFFSET			0x000001ec
>   
>   /* The next register is not present for IPA v4.5+ */
> -#define IPA_REG_COUNTER_CFG_OFFSET			0x000001f0
> +static inline u32 ipa_reg_counter_cfg_offset(enum ipa_version version)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
> +		return 0x5e8;
> +
> +	return 0x1f0;
> +}
> +
>   /* The next field is not present for IPA v3.5+ */
>   #define EOT_COAL_GRANULARITY			GENMASK(3, 0)
>   #define AGGR_GRANULARITY_FMASK			GENMASK(8, 4)
> @@ -349,15 +394,27 @@ enum ipa_pulse_gran {
>   #define Y_MIN_LIM_FMASK				GENMASK(21, 16)
>   #define Y_MAX_LIM_FMASK				GENMASK(29, 24)
>   
> -#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \
> -					(0x00000800 + 0x0070 * (ep))
> +static inline u32 ipa_reg_endp_init_ctrl_n_offset(enum ipa_version version, u16 ep)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x70 + 0x4 * ep;
> +
> +	return 0x800 + 0x70 * ep;
> +}
> +
>   /* Valid only for RX (IPA producer) endpoints (do not use for IPA v4.0+) */
>   #define ENDP_SUSPEND_FMASK			GENMASK(0, 0)
>   /* Valid only for TX (IPA consumer) endpoints */
>   #define ENDP_DELAY_FMASK			GENMASK(1, 1)
>   
> -#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \
> -					(0x00000808 + 0x0070 * (ep))
> +static inline u32 ipa_reg_endp_init_cfg_n_offset(enum ipa_version version, u16 ep)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0xc0 + 0x4 * ep;
> +
> +	return 0x808 + 0x70 * ep;
> +}
> +
>   #define FRAG_OFFLOAD_EN_FMASK			GENMASK(0, 0)
>   #define CS_OFFLOAD_EN_FMASK			GENMASK(2, 1)
>   #define CS_METADATA_HDR_OFFSET_FMASK		GENMASK(6, 3)
> @@ -383,8 +440,14 @@ enum ipa_nat_en {
>   	IPA_NAT_DST			= 0x2,
>   };
>   
> -#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \
> -					(0x00000810 + 0x0070 * (ep))
> +static inline u32 ipa_reg_endp_init_hdr_n_offset(enum ipa_version version, u16 ep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x170 + 0x4 * ep;
> +
> +	return 0x810 + 0x70 * ep;
> +}
> +
>   #define HDR_LEN_FMASK				GENMASK(5, 0)
>   #define HDR_OFST_METADATA_VALID_FMASK		GENMASK(6, 6)
>   #define HDR_OFST_METADATA_FMASK			GENMASK(12, 7)
> @@ -440,8 +503,14 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
>   	return val;
>   }
>   
> -#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \
> -					(0x00000814 + 0x0070 * (ep))
> +static inline u32 ipa_reg_endp_init_hdr_ext_n_offset(enum ipa_version version, u16 ep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x1c0 + 0x4 * ep;
> +
> +	return 0x814 + 0x70 * ep;
> +}
> +
>   #define HDR_ENDIANNESS_FMASK			GENMASK(0, 0)
>   #define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK	GENMASK(1, 1)
>   #define HDR_TOTAL_LEN_OR_PAD_FMASK		GENMASK(2, 2)
> @@ -454,12 +523,23 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
>   #define HDR_ADDITIONAL_CONST_LEN_MSB_FMASK	GENMASK(21, 20)
>   
>   /* Valid only for RX (IPA producer) endpoints */
> -#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(rxep) \
> -					(0x00000818 + 0x0070 * (rxep))
> +static inline u32 ipa_reg_endp_init_hdr_metadata_mask_n_offset(enum ipa_version version, u16 rxep)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x220 + 0x4 * rxep;
> +
> +	return 0x818 + 0x70 * rxep;
> +}
>   
>   /* Valid only for TX (IPA consumer) endpoints */
> -#define IPA_REG_ENDP_INIT_MODE_N_OFFSET(txep) \
> -					(0x00000820 + 0x0070 * (txep))
> +static inline u32 ipa_reg_endp_init_mode_n_offset(enum ipa_version version, u16 txep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x2c0 + 0x4 * txep;
> +
> +	return 0x820 + 0x70 * txep;
> +}
> +
>   #define MODE_FMASK				GENMASK(2, 0)
>   /* The next field is present for IPA v4.5+ */
>   #define DCPH_ENABLE_FMASK			GENMASK(3, 3)
> @@ -480,8 +560,14 @@ enum ipa_mode {
>   	IPA_DMA				= 0x3,
>   };
>   
> -#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \
> -					(0x00000824 +  0x0070 * (ep))
> +static inline u32 ipa_reg_endp_init_aggr_n_offset(enum ipa_version version,
> +						  u16 ep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x320 + 0x4 * ep;
> +	return 0x824 + 0x70 * ep;
> +}
> +
>   #define AGGR_EN_FMASK				GENMASK(1, 0)
>   #define AGGR_TYPE_FMASK				GENMASK(4, 2)
>   
> @@ -543,14 +629,27 @@ enum ipa_aggr_type {
>   };
>   
>   /* Valid only for RX (IPA producer) endpoints */
> -#define IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(rxep) \
> -					(0x0000082c +  0x0070 * (rxep))
> +static inline u32 ipa_reg_endp_init_hol_block_en_n_offset(enum ipa_version version,
> +							  u16 rxep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x3c0 + 0x4 * rxep;
> +
> +	return 0x82c + 0x70 * rxep;
> +}
> +
>   #define HOL_BLOCK_EN_FMASK			GENMASK(0, 0)
>   
>   /* Valid only for RX (IPA producer) endpoints */
> -#define IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(rxep) \
> -					(0x00000830 +  0x0070 * (rxep))
> -/* The next two fields are present for IPA v4.2 only */
> +static inline u32 ipa_reg_endp_init_hol_block_timer_n_offset(enum ipa_version version, u16 rxep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x420 + 0x4 * rxep;
> +
> +	return 0x830 + 0x70 * rxep;
> +}
> +
> +/* The next fields are present for IPA v4.2 only */
>   #define BASE_VALUE_FMASK			GENMASK(4, 0)
>   #define SCALE_FMASK				GENMASK(12, 8)
>   /* The next two fields are present for IPA v4.5 */
> @@ -558,8 +657,14 @@ enum ipa_aggr_type {
>   #define GRAN_SEL_FMASK				GENMASK(8, 8)
>   
>   /* Valid only for TX (IPA consumer) endpoints */
> -#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(txep) \
> -					(0x00000834 + 0x0070 * (txep))
> +static inline u32 ipa_reg_endp_init_deaggr_n_offset(enum ipa_version version, u16 txep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x470 + 0x4 * txep;
> +
> +	return 0x834 + 0x70 * txep;
> +}
> +
>   #define DEAGGR_HDR_LEN_FMASK			GENMASK(5, 0)
>   #define SYSPIPE_ERR_DETECTION_FMASK		GENMASK(6, 6)
>   #define PACKET_OFFSET_VALID_FMASK		GENMASK(7, 7)
> @@ -629,8 +734,14 @@ enum ipa_seq_rep_type {
>   	IPA_SEQ_REP_DMA_PARSER			= 0x08,
>   };
>   
> -#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \
> -					(0x00000840 + 0x0070 * (ep))
> +static inline u32 ipa_reg_endp_status_n_offset(enum ipa_version version, u16 ep)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x4c0 + 0x4 * ep;
> +
> +	return 0x840 + 0x70 * ep;
> +}
> +
>   #define STATUS_EN_FMASK				GENMASK(0, 0)
>   #define STATUS_ENDP_FMASK			GENMASK(5, 1)
>   /* The next field is not present for IPA v4.5+ */
> @@ -662,6 +773,9 @@ enum ipa_seq_rep_type {
>   static inline u32 ipa_reg_irq_stts_ee_n_offset(enum ipa_version version,
>   					       u32 ee)
>   {
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x00001008 + 0x1000 * ee;
> +
>   	if (version < IPA_VERSION_4_9)
>   		return 0x00003008 + 0x1000 * ee;
>   
> @@ -675,6 +789,9 @@ static inline u32 ipa_reg_irq_stts_offset(enum ipa_version version)
>   
>   static inline u32 ipa_reg_irq_en_ee_n_offset(enum ipa_version version, u32 ee)
>   {
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x0000100c + 0x1000 * ee;
> +
>   	if (version < IPA_VERSION_4_9)
>   		return 0x0000300c + 0x1000 * ee;
>   
> @@ -688,6 +805,9 @@ static inline u32 ipa_reg_irq_en_offset(enum ipa_version version)
>   
>   static inline u32 ipa_reg_irq_clr_ee_n_offset(enum ipa_version version, u32 ee)
>   {
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x00001010 + 0x1000 * ee;
> +
>   	if (version < IPA_VERSION_4_9)
>   		return 0x00003010 + 0x1000 * ee;
>   
> @@ -776,6 +896,9 @@ enum ipa_irq_id {
>   
>   static inline u32 ipa_reg_irq_uc_ee_n_offset(enum ipa_version version, u32 ee)
>   {
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x0000101c + 0x1000 * ee;
> +
>   	if (version < IPA_VERSION_4_9)
>   		return 0x0000301c + 0x1000 * ee;
>   
> @@ -793,6 +916,9 @@ static inline u32 ipa_reg_irq_uc_offset(enum ipa_version version)
>   static inline u32
>   ipa_reg_irq_suspend_info_ee_n_offset(enum ipa_version version, u32 ee)
>   {
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x00001098 + 0x1000 * ee;
> +
>   	if (version == IPA_VERSION_3_0)
>   		return 0x00003098 + 0x1000 * ee;
>   
> diff --git a/drivers/net/ipa/ipa_version.h b/drivers/net/ipa/ipa_version.h
> index 6c16c895d842..0d816de586ba 100644
> --- a/drivers/net/ipa/ipa_version.h
> +++ b/drivers/net/ipa/ipa_version.h
> @@ -8,6 +8,9 @@
>   
>   /**
>    * enum ipa_version
> + * @IPA_VERSION_2_0:	IPA version 2.0
> + * @IPA_VERSION_2_5:	IPA version 2.5/2.6
> + * @IPA_VERSION_2_6L:	IPA version 2.6L
>    * @IPA_VERSION_3_0:	IPA version 3.0/GSI version 1.0
>    * @IPA_VERSION_3_1:	IPA version 3.1/GSI version 1.1
>    * @IPA_VERSION_3_5:	IPA version 3.5/GSI version 1.2
> @@ -25,6 +28,9 @@
>    * new version is added.
>    */
>   enum ipa_version {
> +	IPA_VERSION_2_0,
> +	IPA_VERSION_2_5,
> +	IPA_VERSION_2_6L,
>   	IPA_VERSION_3_0,
>   	IPA_VERSION_3_1,
>   	IPA_VERSION_3_5,
> @@ -38,4 +44,10 @@ enum ipa_version {
>   	IPA_VERSION_4_11,
>   };
>   
> +#define IPA_HAS_GSI(version) ((version) > IPA_VERSION_2_6L)
> +#define IPA_IS_64BIT(version) ((version) > IPA_VERSION_2_6L)
> +#define IPA_VERSION_RANGE(_version, _from, _to) \
> +	((_version) >= (IPA_VERSION_##_from) &&  \
> +	 (_version) <= (IPA_VERSION_##_to))
> +
>   #endif /* _IPA_VERSION_H_ */
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts
  2021-09-20  3:08 ` [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts Sireesh Kodali
@ 2021-10-13 22:29   ` Alex Elder
  0 siblings, 0 replies; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:29 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> Interrupts on IPA v2.x have different numbers from the v3.x and above
> interrupts. IPA v2.x also doesn't support the TX_SUSPEND irq, just like
> IPA v3.0.

I'm not sure I like this way of fixing the interrupt ids (by
adding an offset), but it's a simple change.  (And now I have
a better understanding of why the "fixup" function exists.)

					-Alex

> 
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_interrupt.c | 11 ++++++++---
>   1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
> index 94708a23a597..37b5932253aa 100644
> --- a/drivers/net/ipa/ipa_interrupt.c
> +++ b/drivers/net/ipa/ipa_interrupt.c
> @@ -63,6 +63,11 @@ static bool ipa_interrupt_check_fixup(enum ipa_irq_id *irq_id, enum ipa_version
>   
>   	if (*irq_id >= IPA_IRQ_DRBIP_PKT_EXCEED_MAX_SIZE_EN)
>   		return version >= IPA_VERSION_4_9;
> +	else if (*irq_id > IPA_IRQ_BAM_GSI_IDLE)
> +		return version >= IPA_VERSION_3_0;
> +	else if (version <= IPA_VERSION_2_6L &&
> +			*irq_id >= IPA_IRQ_PROC_UC_ACK_Q_NOT_EMPTY)
> +		*irq_id += 2;
>   
>   	return true;
>   }
> @@ -152,8 +157,8 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
>   
>   	WARN_ON(!(mask & ipa->available));
>   
> -	/* IPA version 3.0 does not support TX_SUSPEND interrupt control */
> -	if (ipa->version == IPA_VERSION_3_0)
> +	/* IPA version <=3.0 does not support TX_SUSPEND interrupt control */
> +	if (ipa->version <= IPA_VERSION_3_0)
>   		return;
>   
>   	offset = ipa_reg_irq_suspend_en_offset(ipa->version);
> @@ -190,7 +195,7 @@ void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt)
>   	val = ioread32(ipa->reg_virt + offset);
>   
>   	/* SUSPEND interrupt status isn't cleared on IPA version 3.0 */
> -	if (ipa->version == IPA_VERSION_3_0)
> +	if (ipa->version <= IPA_VERSION_3_0)
>   		return;
>   
>   	offset = ipa_reg_irq_suspend_clr_offset(ipa->version);
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport
  2021-09-20  3:08 ` [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport Sireesh Kodali
  2021-09-20 14:31   ` kernel test robot
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 17:30     ` Sireesh Kodali
  1 sibling, 1 reply; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> BAM is used on IPA v2.x. Since BAM already has a nice dmaengine driver,
> the IPA driver only makes calls to the dmaengine API.
> Also add BAM transaction support to IPA's transaction abstraction layer.
> 
> BAM transactions should use NAPI just like GSI transactions, but just
> use callbacks on each transaction for now.

This is where things get a little more complicated.  I'm not really
familiar with the BAM interface and would really like to give this
a much deeper review, but I won't be doing that now.

At first glance, it looks reasonably clean to me, and it surprises
me a little that this different system can be used with a relatively
small amount of change.  Much looks duplicated, so it could be that
a little more work abstracting might avoid that (but I haven't looked
that closely).

					-Alex

> 
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/Makefile          |   2 +-
>   drivers/net/ipa/bam.c             | 525 ++++++++++++++++++++++++++++++
>   drivers/net/ipa/gsi.c             |   1 +
>   drivers/net/ipa/ipa_data.h        |   1 +
>   drivers/net/ipa/ipa_dma.h         |  18 +-
>   drivers/net/ipa/ipa_dma_private.h |   2 +
>   drivers/net/ipa/ipa_main.c        |  20 +-
>   drivers/net/ipa/ipa_trans.c       |  14 +-
>   drivers/net/ipa/ipa_trans.h       |   4 +
>   9 files changed, 569 insertions(+), 18 deletions(-)
>   create mode 100644 drivers/net/ipa/bam.c
> 
> diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
> index 3cd021fb992e..4abebc667f77 100644
> --- a/drivers/net/ipa/Makefile
> +++ b/drivers/net/ipa/Makefile
> @@ -2,7 +2,7 @@ obj-$(CONFIG_QCOM_IPA)	+=	ipa.o
>   
>   ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
>   				ipa_table.o ipa_interrupt.o gsi.o ipa_trans.o \
> -				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
> +				ipa_gsi.o ipa_smp2p.o ipa_uc.o bam.o \
>   				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
>   				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
>   				ipa_sysfs.o
> diff --git a/drivers/net/ipa/bam.c b/drivers/net/ipa/bam.c
> new file mode 100644
> index 000000000000..0726e385fee5
> --- /dev/null
> +++ b/drivers/net/ipa/bam.c
> @@ -0,0 +1,525 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/* Copyright (c) 2020, The Linux Foundation. All rights reserved.
> + */
> +
> +#include <linux/completion.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/dmaengine.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/kernel.h>
> +#include <linux/mutex.h>
> +#include <linux/netdevice.h>
> +#include <linux/platform_device.h>
> +
> +#include "ipa_gsi.h"
> +#include "ipa.h"
> +#include "ipa_dma.h"
> +#include "ipa_dma_private.h"
> +#include "ipa_gsi.h"
> +#include "ipa_trans.h"
> +#include "ipa_data.h"
> +
> +/**
> + * DOC: The IPA Smart Peripheral System Interface
> + *
> + * The Smart Peripheral System is a means to communicate over BAM pipes to
> + * the IPA block. The Modem also uses BAM pipes to communicate with the IPA
> + * core.
> + *
> + * Refer to the GSI documentation: BAM is a precursor to GSI and the two
> + * are conceptually similar (no public BAM documentation is available).
> + *
> + * Each channel here corresponds to 1 BAM pipe configured in BAM2BAM mode
> + *
> + * IPA cmds are transferred one at a time, each in one BAM transfer.
> + */
> +
> +/* Get and configure the BAM DMA channel */
> +int bam_channel_init_one(struct ipa_dma *bam,
> +			 const struct ipa_gsi_endpoint_data *data, bool command)
> +{
> +	struct dma_slave_config bam_config;
> +	u32 channel_id = data->channel_id;
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +	int ret;
> +
> +	/*TODO: if (!bam_channel_data_valid(bam, data))
> +		return -EINVAL;*/
> +
> +	channel->dma_subsys = bam;
> +	channel->dma_chan = dma_request_chan(bam->dev, data->channel_name);
> +	channel->toward_ipa = data->toward_ipa;
> +	channel->tlv_count = data->channel.tlv_count;
> +	channel->tre_count = data->channel.tre_count;
> +	if (IS_ERR(channel->dma_chan)) {
> +		dev_err(bam->dev, "failed to request BAM channel %s: %d\n",
> +				data->channel_name,
> +				(int) PTR_ERR(channel->dma_chan));
> +		return PTR_ERR(channel->dma_chan);
> +	}
> +
> +	ret = ipa_channel_trans_init(bam, data->channel_id);
> +	if (ret)
> +		goto err_dma_chan_free;
> +
> +	if (data->toward_ipa) {
> +		bam_config.direction = DMA_MEM_TO_DEV;
> +		bam_config.dst_maxburst = channel->tlv_count;
> +	} else {
> +		bam_config.direction = DMA_DEV_TO_MEM;
> +		bam_config.src_maxburst = channel->tlv_count;
> +	}
> +
> +	dmaengine_slave_config(channel->dma_chan, &bam_config);
> +
> +	if (command)
> +		ret = ipa_cmd_pool_init(channel, 256);
> +
> +	if (!ret)
> +		return 0;
> +
> +err_dma_chan_free:
> +	dma_release_channel(channel->dma_chan);
> +	return ret;
> +}
> +
> +static void bam_channel_exit_one(struct ipa_channel *channel)
> +{
> +	if (channel->dma_chan) {
> +		dmaengine_terminate_sync(channel->dma_chan);
> +		dma_release_channel(channel->dma_chan);
> +	}
> +}
> +
> +/* Get channels from BAM_DMA */
> +int bam_channel_init(struct ipa_dma *bam, u32 count,
> +		const struct ipa_gsi_endpoint_data *data)
> +{
> +	int ret = 0;
> +	u32 i;
> +
> +	for (i = 0; i < count; ++i) {
> +		bool command = i == IPA_ENDPOINT_AP_COMMAND_TX;
> +
> +		if (!data[i].channel_name || data[i].ee_id == GSI_EE_MODEM)
> +			continue;
> +
> +		ret = bam_channel_init_one(bam, &data[i], command);
> +		if (ret)
> +			goto err_unwind;
> +	}
> +
> +	return ret;
> +
> +err_unwind:
> +	while (i--) {
> +		if (ipa_gsi_endpoint_data_empty(&data[i]))
> +			continue;
> +
> +		bam_channel_exit_one(&bam->channel[i]);
> +	}
> +	return ret;
> +}
> +
> +/* Inverse of bam_channel_init() */
> +void bam_channel_exit(struct ipa_dma *bam)
> +{
> +	u32 channel_id = BAM_CHANNEL_COUNT_MAX - 1;
> +
> +	do
> +		bam_channel_exit_one(&bam->channel[channel_id]);
> +	while (channel_id--);
> +}
> +
> +/* Inverse of bam_init() */
> +static void bam_exit(struct ipa_dma *bam)
> +{
> +	mutex_destroy(&bam->mutex);
> +	bam_channel_exit(bam);
> +}
> +
> +/* Return the channel id associated with a given channel */
> +static u32 bam_channel_id(struct ipa_channel *channel)
> +{
> +	return channel - &channel->dma_subsys->channel[0];
> +}
> +
> +static void
> +bam_channel_tx_update(struct ipa_channel *channel, struct ipa_trans *trans)
> +{
> +	u64 byte_count = trans->byte_count + trans->len;
> +	u64 trans_count = trans->trans_count + 1;
> +
> +	byte_count -= channel->compl_byte_count;
> +	channel->compl_byte_count += byte_count;
> +	trans_count -= channel->compl_trans_count;
> +	channel->compl_trans_count += trans_count;
> +
> +	ipa_gsi_channel_tx_completed(channel->dma_subsys, bam_channel_id(channel),
> +					   trans_count, byte_count);
> +}
> +
> +static void
> +bam_channel_rx_update(struct ipa_channel *channel, struct ipa_trans *trans)
> +{
> +	/* FIXME */
> +	u64 byte_count = trans->byte_count + trans->len;
> +
> +	channel->byte_count += byte_count;
> +	channel->trans_count++;
> +}
> +
> +/* Consult hardware, move any newly completed transactions to completed list */
> +static void bam_channel_update(struct ipa_channel *channel)
> +{
> +	struct ipa_trans *trans, *complete = NULL;
> +
> +	list_for_each_entry(trans, &channel->trans_info.pending, links) {
> +		if (dma_async_is_tx_complete(channel->dma_chan, trans->cookie,
> +					     NULL, NULL) == DMA_COMPLETE) {
> +			complete = trans;
> +			break;
> +		}
> +	}
> +	if (!complete)
> +		return;	/* No newly completed transactions */
> +	trans = complete;
> +	/* Keep a reference until the events are given back to hardware */
> +	refcount_inc(&trans->refcount);
> +
> +	/* For RX channels, update each completed transaction with the number
> +	 * of bytes that were actually received.  For TX channels, report
> +	 * the number of transactions and bytes this completion represents
> +	 * up the network stack.
> +	 */
> +	if (channel->toward_ipa)
> +		bam_channel_tx_update(channel, trans);
> +	else
> +		bam_channel_rx_update(channel, trans);
> +
> +	ipa_trans_move_complete(trans);
> +
> +	ipa_trans_free(trans);
> +}
> +
> +/**
> + * bam_channel_poll_one() - Return a single completed transaction on a channel
> + * @channel:	Channel to be polled
> + *
> + * Return:	Transaction pointer, or null if none are available
> + *
> + * This function returns the first entry on a channel's completed transaction
> + * list.  If that list is empty, the hardware is consulted to determine
> + * whether any new transactions have completed.  If so, they're moved to the
> + * completed list and the new first entry is returned.  If there are no more
> + * completed transactions, a null pointer is returned.
> + */
> +static struct ipa_trans *bam_channel_poll_one(struct ipa_channel *channel)
> +{
> +	struct ipa_trans *trans;
> +
> +	/* Get the first transaction from the completed list */
> +	trans = ipa_channel_trans_complete(channel);
> +	if (!trans) {
> +		bam_channel_update(channel);
> +		trans = ipa_channel_trans_complete(channel);
> +	}
> +
> +	if (trans)
> +		ipa_trans_move_polled(trans);
> +
> +	return trans;
> +}
> +
> +/**
> + * bam_channel_poll() - NAPI poll function for a channel
> + * @napi:	NAPI structure for the channel
> + * @budget:	Budget supplied by NAPI core
> + *
> + * Return:	Number of items polled (<= budget)
> + *
> + * Single transactions completed by hardware are polled until either
> + * the budget is exhausted, or there are no more.  Each transaction
> + * polled is passed to ipa_trans_complete(), to perform remaining
> + * completion processing and retire/free the transaction.
> + */
> +static int bam_channel_poll(struct napi_struct *napi, int budget)
> +{
> +	struct ipa_channel *channel;
> +	int count = 0;
> +
> +	channel = container_of(napi, struct ipa_channel, napi);
> +	while (count < budget) {
> +		struct ipa_trans *trans;
> +
> +		count++;
> +		trans = bam_channel_poll_one(channel);
> +		if (!trans)
> +			break;
> +		ipa_trans_complete(trans);
> +	}
> +
> +	if (count < budget)
> +		napi_complete(&channel->napi);
> +
> +	return count;
> +}
> +
> +/* Setup function for a single channel */
> +static void bam_channel_setup_one(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	if (!channel->dma_subsys)
> +		return;	/* Ignore uninitialized channels */
> +
> +	if (channel->toward_ipa) {
> +		netif_tx_napi_add(&bam->dummy_dev, &channel->napi,
> +				  bam_channel_poll, NAPI_POLL_WEIGHT);
> +	} else {
> +		netif_napi_add(&bam->dummy_dev, &channel->napi,
> +			       bam_channel_poll, NAPI_POLL_WEIGHT);
> +	}
> +	napi_enable(&channel->napi);
> +}
> +
> +static void bam_channel_teardown_one(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	if (!channel->dma_subsys)
> +		return;		/* Ignore uninitialized channels */
> +
> +	netif_napi_del(&channel->napi);
> +}
> +
> +/* Setup function for channels */
> +static int bam_channel_setup(struct ipa_dma *bam)
> +{
> +	u32 channel_id = 0;
> +	int ret;
> +
> +	mutex_lock(&bam->mutex);
> +
> +	do
> +		bam_channel_setup_one(bam, channel_id);
> +	while (++channel_id < BAM_CHANNEL_COUNT_MAX);
> +
> +	/* Make sure no channels were defined that hardware does not support */
> +	while (channel_id < BAM_CHANNEL_COUNT_MAX) {
> +		struct ipa_channel *channel = &bam->channel[channel_id++];
> +
> +		if (!channel->dma_subsys)
> +			continue;	/* Ignore uninitialized channels */
> +
> +		dev_err(bam->dev, "channel %u not supported by hardware\n",
> +			channel_id - 1);
> +		channel_id = BAM_CHANNEL_COUNT_MAX;
> +		goto err_unwind;
> +	}
> +
> +	mutex_unlock(&bam->mutex);
> +
> +	return 0;
> +
> +err_unwind:
> +	while (channel_id--)
> +		bam_channel_teardown_one(bam, channel_id);
> +
> +	mutex_unlock(&bam->mutex);
> +
> +	return ret;
> +}
> +
> +/* Inverse of bam_channel_setup() */
> +static void bam_channel_teardown(struct ipa_dma *bam)
> +{
> +	u32 channel_id;
> +
> +	mutex_lock(&bam->mutex);
> +
> +	channel_id = BAM_CHANNEL_COUNT_MAX - 1;
> +	do
> +		bam_channel_teardown_one(bam, channel_id);
> +	while (channel_id--);
> +
> +	mutex_unlock(&bam->mutex);
> +}
> +
> +static int bam_setup(struct ipa_dma *bam)
> +{
> +	return bam_channel_setup(bam);
> +}
> +
> +static void bam_teardown(struct ipa_dma *bam)
> +{
> +	bam_channel_teardown(bam);
> +}
> +
> +static u32 bam_channel_tre_max(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	/* Hardware limit is channel->tre_count - 1 */
> +	return channel->tre_count - (channel->tlv_count - 1);
> +}
> +
> +static u32 bam_channel_trans_tre_max(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	return channel->tlv_count;
> +}
> +
> +static int bam_channel_start(struct ipa_dma *bam, u32 channel_id)
> +{
> +	return 0;
> +}
> +
> +static int bam_channel_stop(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	return dmaengine_terminate_sync(channel->dma_chan);
> +}
> +
> +static void bam_channel_reset(struct ipa_dma *bam, u32 channel_id, bool doorbell)
> +{
> +	bam_channel_stop(bam, channel_id);
> +}
> +
> +static int bam_channel_suspend(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	return dmaengine_pause(channel->dma_chan);
> +}
> +
> +static int bam_channel_resume(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	return dmaengine_resume(channel->dma_chan);
> +}
> +
> +static void bam_suspend(struct ipa_dma *bam)
> +{
> +	/* No-op for now */
> +}
> +
> +static void bam_resume(struct ipa_dma *bam)
> +{
> +	/* No-op for now */
> +}
> +
> +static void bam_trans_callback(void *arg)
> +{
> +	ipa_trans_complete(arg);
> +}
> +
> +static void bam_trans_commit(struct ipa_trans *trans, bool unused)
> +{
> +	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
> +	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
> +	struct ipa_cmd_info *info;
> +	struct scatterlist *sg;
> +	u32 byte_count = 0;
> +	u32 i;
> +	enum dma_transfer_direction direction;
> +
> +	if (channel->toward_ipa)
> +		direction = DMA_MEM_TO_DEV;
> +	else
> +		direction = DMA_DEV_TO_MEM;
> +
> +	/* assert(trans->used > 0); */
> +
> +	info = trans->info ? &trans->info[0] : NULL;
> +	for_each_sg(trans->sgl, sg, trans->used, i) {
> +		bool last_tre = i == trans->used - 1;
> +		dma_addr_t addr = sg_dma_address(sg);
> +		u32 len = sg_dma_len(sg);
> +		u32 dma_flags = 0;
> +		struct dma_async_tx_descriptor *desc;
> +
> +		byte_count += len;
> +		if (info)
> +			opcode = info++->opcode;
> +
> +		if (opcode != IPA_CMD_NONE) {
> +			len = opcode;
> +			dma_flags |= DMA_PREP_IMM_CMD;
> +		}
> +
> +		if (last_tre)
> +			dma_flags |= DMA_PREP_INTERRUPT;
> +
> +		desc = dmaengine_prep_slave_single(channel->dma_chan, addr, len,
> +				direction, dma_flags);
> +
> +		if (last_tre) {
> +			desc->callback = bam_trans_callback;
> +			desc->callback_param = trans;
> +		}
> +
> +		desc->cookie = dmaengine_submit(desc);
> +
> +		if (last_tre)
> +			trans->cookie = desc->cookie;
> +
> +		if (direction == DMA_DEV_TO_MEM)
> +			dmaengine_desc_attach_metadata(desc, &trans->len, sizeof(trans->len));
> +	}
> +
> +	if (channel->toward_ipa) {
> +		/* We record TX bytes when they are sent */
> +		trans->len = byte_count;
> +		trans->trans_count = channel->trans_count;
> +		trans->byte_count = channel->byte_count;
> +		channel->trans_count++;
> +		channel->byte_count += byte_count;
> +	}
> +
> +	ipa_trans_move_pending(trans);
> +
> +	dma_async_issue_pending(channel->dma_chan);
> +}
> +
> +/* Initialize the BAM DMA channels
> + * Actual hw init is handled by the BAM_DMA driver
> + */
> +int bam_init(struct ipa_dma *bam, struct platform_device *pdev,
> +		enum ipa_version version, u32 count,
> +		const struct ipa_gsi_endpoint_data *data)
> +{
> +	struct device *dev = &pdev->dev;
> +	int ret;
> +
> +	bam->dev = dev;
> +	bam->version = version;
> +	bam->setup = bam_setup;
> +	bam->teardown = bam_teardown;
> +	bam->exit = bam_exit;
> +	bam->suspend = bam_suspend;
> +	bam->resume = bam_resume;
> +	bam->channel_tre_max = bam_channel_tre_max;
> +	bam->channel_trans_tre_max = bam_channel_trans_tre_max;
> +	bam->channel_start = bam_channel_start;
> +	bam->channel_stop = bam_channel_stop;
> +	bam->channel_reset = bam_channel_reset;
> +	bam->channel_suspend = bam_channel_suspend;
> +	bam->channel_resume = bam_channel_resume;
> +	bam->trans_commit = bam_trans_commit;
> +
> +	init_dummy_netdev(&bam->dummy_dev);
> +
> +	ret = bam_channel_init(bam, count, data);
> +	if (ret)
> +		return ret;
> +
> +	mutex_init(&bam->mutex);
> +
> +	return 0;
> +}
> diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
> index 39d9ca620a9f..ac0b9e748fa1 100644
> --- a/drivers/net/ipa/gsi.c
> +++ b/drivers/net/ipa/gsi.c
> @@ -2210,6 +2210,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
>   	gsi->channel_reset = gsi_channel_reset;
>   	gsi->channel_suspend = gsi_channel_suspend;
>   	gsi->channel_resume = gsi_channel_resume;
> +	gsi->trans_commit = gsi_trans_commit;
>   
>   	/* GSI uses NAPI on all channels.  Create a dummy network device
>   	 * for the channel NAPI contexts to be associated with.
> diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
> index 6d329e9ce5d2..7d62d49f414f 100644
> --- a/drivers/net/ipa/ipa_data.h
> +++ b/drivers/net/ipa/ipa_data.h
> @@ -188,6 +188,7 @@ struct ipa_gsi_endpoint_data {
>   	u8 channel_id;
>   	u8 endpoint_id;
>   	bool toward_ipa;
> +	const char *channel_name;	/* used only for BAM DMA channels */
>   
>   	struct gsi_channel_data channel;
>   	struct ipa_endpoint_data endpoint;
> diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
> index 1a23e6ac5785..3000182ae689 100644
> --- a/drivers/net/ipa/ipa_dma.h
> +++ b/drivers/net/ipa/ipa_dma.h
> @@ -17,7 +17,11 @@
>   
>   /* Maximum number of channels and event rings supported by the driver */
>   #define GSI_CHANNEL_COUNT_MAX	23
> +#define BAM_CHANNEL_COUNT_MAX	20
>   #define GSI_EVT_RING_COUNT_MAX	24
> +#define MAX(a, b)		(((a) > (b)) ? (a) : (b))
> +#define IPA_CHANNEL_COUNT_MAX	MAX(GSI_CHANNEL_COUNT_MAX, \
> +				    BAM_CHANNEL_COUNT_MAX)
>   
>   /* Maximum TLV FIFO size for a channel; 64 here is arbitrary (and high) */
>   #define GSI_TLV_MAX		64
> @@ -119,6 +123,8 @@ struct ipa_channel {
>   	struct gsi_ring tre_ring;
>   	u32 evt_ring_id;
>   
> +	struct dma_chan *dma_chan;
> +
>   	u64 byte_count;			/* total # bytes transferred */
>   	u64 trans_count;		/* total # transactions */
>   	/* The following counts are used only for TX endpoints */
> @@ -154,7 +160,7 @@ struct ipa_dma {
>   	u32 irq;
>   	u32 channel_count;
>   	u32 evt_ring_count;
> -	struct ipa_channel channel[GSI_CHANNEL_COUNT_MAX];
> +	struct ipa_channel channel[IPA_CHANNEL_COUNT_MAX];
>   	struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX];
>   	u32 event_bitmap;		/* allocated event rings */
>   	u32 modem_channel_bitmap;	/* modem channels to allocate */
> @@ -303,7 +309,7 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
>   }
>   
>   /**
> - * ipa_dma_init() - Initialize the GSI subsystem
> + * gsi_init/bam_init() - Initialize the GSI/BAM subsystem
>    * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
>    * @pdev:	IPA platform device
>    * @version:	IPA hardware version (implies GSI version)
> @@ -312,14 +318,18 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
>    *
>    * Return:	0 if successful, or a negative error code
>    *
> - * Early stage initialization of the GSI subsystem, performing tasks
> - * that can be done before the GSI hardware is ready to use.
> + * Early stage initialization of the GSI/BAM subsystem, performing tasks
> + * that can be done before the GSI/BAM hardware is ready to use.
>    */
>   
>   int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
>   	     enum ipa_version version, u32 count,
>   	     const struct ipa_gsi_endpoint_data *data);
>   
> +int bam_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
> +	     enum ipa_version version, u32 count,
> +	     const struct ipa_gsi_endpoint_data *data);
> +
>   /**
>    * ipa_dma_exit() - Exit the DMA subsystem
>    * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
> diff --git a/drivers/net/ipa/ipa_dma_private.h b/drivers/net/ipa/ipa_dma_private.h
> index 40148a551b47..1db53e597a61 100644
> --- a/drivers/net/ipa/ipa_dma_private.h
> +++ b/drivers/net/ipa/ipa_dma_private.h
> @@ -16,6 +16,8 @@ struct ipa_channel;
>   
>   #define GSI_RING_ELEMENT_SIZE	16	/* bytes; must be a power of 2 */
>   
> +void gsi_trans_commit(struct ipa_trans *trans, bool ring_db);
> +
>   /* Return the entry that follows one provided in a transaction pool */
>   void *ipa_trans_pool_next(struct ipa_trans_pool *pool, void *element);
>   
> diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> index ba06e3ad554c..ea6c4347f2c6 100644
> --- a/drivers/net/ipa/ipa_main.c
> +++ b/drivers/net/ipa/ipa_main.c
> @@ -60,12 +60,15 @@
>    * core.  The GSI implements a set of "channels" used for communication
>    * between the AP and the IPA.
>    *
> - * The IPA layer uses GSI channels to implement its "endpoints".  And while
> - * a GSI channel carries data between the AP and the IPA, a pair of IPA
> - * endpoints is used to carry traffic between two EEs.  Specifically, the main
> - * modem network interface is implemented by two pairs of endpoints:  a TX
> + * The IPA layer uses GSI channels or BAM pipes to implement its "endpoints".
> + * And while a GSI channel carries data between the AP and the IPA, a pair of
> + * IPA endpoints is used to carry traffic between two EEs.  Specifically, the
> + * main modem network interface is implemented by two pairs of endpoints:  a TX
>    * endpoint on the AP coupled with an RX endpoint on the modem; and another
>    * RX endpoint on the AP receiving data from a TX endpoint on the modem.
> + *
> + * For BAM-based transport, a pair of BAM pipes is used for TX and RX between
> + * the AP and IPA, and between IPA and other EEs.
>    */
>   
>   /* The name of the GSI firmware file relative to /lib/firmware */
> @@ -716,8 +719,13 @@ static int ipa_probe(struct platform_device *pdev)
>   	if (ret)
>   		goto err_reg_exit;
>   
> -	ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> -		       data->endpoint_data);
> +	if (IPA_HAS_GSI(ipa->version))
> +		ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> +			       data->endpoint_data);
> +	else
> +		ret = bam_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> +			       data->endpoint_data);
> +
>   	if (ret)
>   		goto err_mem_exit;
>   
> diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
> index 22755f3ce3da..444f44846da8 100644
> --- a/drivers/net/ipa/ipa_trans.c
> +++ b/drivers/net/ipa/ipa_trans.c
> @@ -254,7 +254,7 @@ struct ipa_trans *ipa_channel_trans_complete(struct ipa_channel *channel)
>   }
>   
>   /* Move a transaction from the allocated list to the pending list */
> -static void ipa_trans_move_pending(struct ipa_trans *trans)
> +void ipa_trans_move_pending(struct ipa_trans *trans)
>   {
>   	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
>   	struct ipa_trans_info *trans_info = &channel->trans_info;
> @@ -539,7 +539,7 @@ static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
>    * pending list.  Finally, updates the channel ring pointer and optionally
>    * rings the doorbell.
>    */
> -static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
> +void gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
>   {
>   	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
>   	struct gsi_ring *ring = &channel->tre_ring;
> @@ -604,9 +604,9 @@ static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
>   /* Commit a GSI transaction */
>   void ipa_trans_commit(struct ipa_trans *trans, bool ring_db)
>   {
> -	if (trans->used)
> -		__gsi_trans_commit(trans, ring_db);
> -	else
> +	if (trans->used)
> +		trans->dma_subsys->trans_commit(trans, ring_db);
> +	else
>   		ipa_trans_free(trans);
>   }
>   
> @@ -618,7 +618,7 @@ void ipa_trans_commit_wait(struct ipa_trans *trans)
>   
>   	refcount_inc(&trans->refcount);
>   
> -	__gsi_trans_commit(trans, true);
> +	trans->dma_subsys->trans_commit(trans, true);
>   
>   	wait_for_completion(&trans->completion);
>   
> @@ -638,7 +638,7 @@ int ipa_trans_commit_wait_timeout(struct ipa_trans *trans,
>   
>   	refcount_inc(&trans->refcount);
>   
> -	__gsi_trans_commit(trans, true);
> +	trans->dma_subsys->trans_commit(trans, true);
>   
>   	remaining = wait_for_completion_timeout(&trans->completion,
>   						timeout_jiffies);
> diff --git a/drivers/net/ipa/ipa_trans.h b/drivers/net/ipa/ipa_trans.h
> index b93342414360..5f41e3e6f92a 100644
> --- a/drivers/net/ipa/ipa_trans.h
> +++ b/drivers/net/ipa/ipa_trans.h
> @@ -10,6 +10,7 @@
>   #include <linux/refcount.h>
>   #include <linux/completion.h>
>   #include <linux/dma-direction.h>
> +#include <linux/dmaengine.h>
>   
>   #include "ipa_cmd.h"
>   
> @@ -61,6 +62,7 @@ struct ipa_trans {
>   	struct scatterlist *sgl;
>   	struct ipa_cmd_info *info;	/* array of entries, or null */
>   	enum dma_data_direction direction;
> +	dma_cookie_t cookie;
>   
>   	refcount_t refcount;
>   	struct completion completion;
> @@ -149,6 +151,8 @@ struct ipa_trans *ipa_channel_trans_alloc(struct ipa_dma *dma_subsys, u32 channe
>    */
>   void ipa_trans_free(struct ipa_trans *trans);
>   
> +void ipa_trans_move_pending(struct ipa_trans *trans);
> +
>   /**
>    * ipa_trans_cmd_add() - Add an immediate command to a transaction
>    * @trans:	Transaction
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init
  2021-09-20  3:08 ` [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 18:13     ` Sireesh Kodali
  0 siblings, 1 reply; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> IPA v2.x commands are different from later IPA revisions mostly because
> of the fact that IPA v2.x is 32 bit. There are also other minor
> differences in some of the command structs.
> 
> The tables again are only different because of the fact that IPA v2.x is
> 32 bit.

There's no "RFC" on this patch, but I assume it's just invisible.

There are some things in here where some conventions used elsewhere
in the driver aren't as well followed.  One example is the use of
symbol names with IPA version encoded in them; such cases usually
have a macro that takes a version as argument.

And I don't especially like using a macro on the left hand side
of an assignment expression.

I'm skimming now, but overall this looks OK.

					-Alex

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> ---
>   drivers/net/ipa/ipa.h       |   2 +-
>   drivers/net/ipa/ipa_cmd.c   | 138 ++++++++++++++++++++++++++----------
>   drivers/net/ipa/ipa_table.c |  29 ++++++--
>   drivers/net/ipa/ipa_table.h |   2 +-
>   4 files changed, 125 insertions(+), 46 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
> index 80a83ac45729..63b2b368b588 100644
> --- a/drivers/net/ipa/ipa.h
> +++ b/drivers/net/ipa/ipa.h
> @@ -81,7 +81,7 @@ struct ipa {
>   	struct ipa_power *power;
>   
>   	dma_addr_t table_addr;
> -	__le64 *table_virt;
> +	void *table_virt;
>   
>   	struct ipa_interrupt *interrupt;
>   	bool uc_powered;
> diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> index 7a104540dc26..58dae4b3bf87 100644
> --- a/drivers/net/ipa/ipa_cmd.c
> +++ b/drivers/net/ipa/ipa_cmd.c
> @@ -25,8 +25,8 @@
>    * An immediate command is generally used to request the IPA do something
>    * other than data transfer to another endpoint.
>    *
> - * Immediate commands are represented by GSI transactions just like other
> - * transfer requests, represented by a single GSI TRE.  Each immediate
> + * Immediate commands on IPA v3 are represented by GSI transactions just like
> + * other transfer requests, represented by a single GSI TRE.  Each immediate
>    * command has a well-defined format, having a payload of a known length.
>    * This allows the transfer element's length field to be used to hold an
>    * immediate command's opcode.  The payload for a command resides in DRAM
> @@ -45,10 +45,16 @@ enum pipeline_clear_options {
>   
>   /* IPA_CMD_IP_V{4,6}_{FILTER,ROUTING}_INIT */
>   
> -struct ipa_cmd_hw_ip_fltrt_init {
> -	__le64 hash_rules_addr;
> -	__le64 flags;
> -	__le64 nhash_rules_addr;
> +union ipa_cmd_hw_ip_fltrt_init {
> +	struct {
> +		__le32 nhash_rules_addr;
> +		__le32 flags;
> +	} v2;
> +	struct {
> +		__le64 hash_rules_addr;
> +		__le64 flags;
> +		__le64 nhash_rules_addr;
> +	} v3;
>   };
>   
>   /* Field masks for ipa_cmd_hw_ip_fltrt_init structure fields */
> @@ -56,13 +62,23 @@ struct ipa_cmd_hw_ip_fltrt_init {
>   #define IP_FLTRT_FLAGS_HASH_ADDR_FMASK			GENMASK_ULL(27, 12)
>   #define IP_FLTRT_FLAGS_NHASH_SIZE_FMASK			GENMASK_ULL(39, 28)
>   #define IP_FLTRT_FLAGS_NHASH_ADDR_FMASK			GENMASK_ULL(55, 40)
> +#define IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(11, 0)
> +#define IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(27, 12)
> +#define IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(15, 0)
> +#define IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(31, 16)
>   
>   /* IPA_CMD_HDR_INIT_LOCAL */
>   
> -struct ipa_cmd_hw_hdr_init_local {
> -	__le64 hdr_table_addr;
> -	__le32 flags;
> -	__le32 reserved;
> +union ipa_cmd_hw_hdr_init_local {
> +	struct {
> +		__le32 hdr_table_addr;
> +		__le32 flags;
> +	} v2;
> +	struct {
> +		__le64 hdr_table_addr;
> +		__le32 flags;
> +		__le32 reserved;
> +	} v3;
>   };
>   
>   /* Field masks for ipa_cmd_hw_hdr_init_local structure fields */
> @@ -109,14 +125,37 @@ struct ipa_cmd_ip_packet_init {
>   #define DMA_SHARED_MEM_OPCODE_SKIP_CLEAR_FMASK		GENMASK(8, 8)
>   #define DMA_SHARED_MEM_OPCODE_CLEAR_OPTION_FMASK	GENMASK(10, 9)
>   
> -struct ipa_cmd_hw_dma_mem_mem {
> -	__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
> -	__le16 size;
> -	__le16 local_addr;
> -	__le16 flags;
> -	__le64 system_addr;
> +union ipa_cmd_hw_dma_mem_mem {
> +	struct {
> +		__le16 reserved;
> +		__le16 size;
> +		__le32 system_addr;
> +		__le16 local_addr;
> +		__le16 flags; /* the least significant 14 bits are reserved */
> +		__le32 padding;
> +	} v2;
> +	struct {
> +		__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
> +		__le16 size;
> +		__le16 local_addr;
> +		__le16 flags;
> +		__le64 system_addr;
> +	} v3;
>   };
>   
> +#define CMD_FIELD(_version, _payload, _field)				\
> +	*(((_version) > IPA_VERSION_2_6L) ?		    		\
> +	  &(_payload->v3._field) :			    		\
> +	  &(_payload->v2._field))
> +
> +#define SET_DMA_FIELD(_ver, _payload, _field, _value)			\
> +	do {								\
> +		if ((_ver) >= IPA_VERSION_3_0)				\
> +			(_payload)->v3._field = cpu_to_le64(_value);	\
> +		else							\
> +			(_payload)->v2._field = cpu_to_le32(_value);	\
> +	} while (0)
> +
>   /* Flag allowing atomic clear of target region after reading data (v4.0+)*/
>   #define DMA_SHARED_MEM_CLEAR_AFTER_READ			GENMASK(15, 15)
>   
> @@ -132,15 +171,16 @@ struct ipa_cmd_ip_packet_tag_status {
>   	__le64 tag;
>   };
>   
> -#define IP_PACKET_TAG_STATUS_TAG_FMASK			GENMASK_ULL(63, 16)
> +#define IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 32)
> +#define IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 16)
>   
>   /* Immediate command payload */
>   union ipa_cmd_payload {
> -	struct ipa_cmd_hw_ip_fltrt_init table_init;
> -	struct ipa_cmd_hw_hdr_init_local hdr_init_local;
> +	union ipa_cmd_hw_ip_fltrt_init table_init;
> +	union ipa_cmd_hw_hdr_init_local hdr_init_local;
>   	struct ipa_cmd_register_write register_write;
>   	struct ipa_cmd_ip_packet_init ip_packet_init;
> -	struct ipa_cmd_hw_dma_mem_mem dma_shared_mem;
> +	union ipa_cmd_hw_dma_mem_mem dma_shared_mem;
>   	struct ipa_cmd_ip_packet_tag_status ip_packet_tag_status;
>   };
>   
> @@ -154,6 +194,7 @@ static void ipa_cmd_validate_build(void)
>   	 * of entries.
>   	 */
>   #define TABLE_SIZE	(TABLE_COUNT_MAX * sizeof(__le64))
> +// TODO
>   #define TABLE_COUNT_MAX	max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX)
>   	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK));
>   	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
> @@ -405,15 +446,26 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
>   {
>   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum dma_data_direction direction = DMA_TO_DEVICE;
> -	struct ipa_cmd_hw_ip_fltrt_init *payload;
> +	union ipa_cmd_hw_ip_fltrt_init *payload;
> +	enum ipa_version version = ipa->version;
>   	union ipa_cmd_payload *cmd_payload;
>   	dma_addr_t payload_addr;
>   	u64 val;
>   
>   	/* Record the non-hash table offset and size */
>   	offset += ipa->mem_offset;
> -	val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
> -	val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
> +
> +	if (version >= IPA_VERSION_3_0) {
> +		val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
> +		val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
> +	} else if (opcode == IPA_CMD_IP_V4_FILTER_INIT ||
> +		   opcode == IPA_CMD_IP_V4_ROUTING_INIT) {
> +		val = u64_encode_bits(offset, IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK);
> +		val |= u64_encode_bits(size, IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK);
> +	} else { /* IPA <= v2.6L IPv6 */
> +		val = u64_encode_bits(offset, IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK);
> +		val |= u64_encode_bits(size, IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK);
> +	}
>   
>   	/* The hash table offset and address are zero if its size is 0 */
>   	if (hash_size) {
> @@ -429,10 +481,10 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
>   	payload = &cmd_payload->table_init;
>   
>   	/* Fill in all offsets and sizes and the non-hash table address */
> -	if (hash_size)
> -		payload->hash_rules_addr = cpu_to_le64(hash_addr);
> -	payload->flags = cpu_to_le64(val);
> -	payload->nhash_rules_addr = cpu_to_le64(addr);
> +	if (hash_size && version >= IPA_VERSION_3_0)
> +		payload->v3.hash_rules_addr = cpu_to_le64(hash_addr);
> +	SET_DMA_FIELD(version, payload, flags, val);
> +	SET_DMA_FIELD(version, payload, nhash_rules_addr, addr);
>   
>   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
> @@ -445,7 +497,7 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
>   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL;
>   	enum dma_data_direction direction = DMA_TO_DEVICE;
> -	struct ipa_cmd_hw_hdr_init_local *payload;
> +	union ipa_cmd_hw_hdr_init_local *payload;
>   	union ipa_cmd_payload *cmd_payload;
>   	dma_addr_t payload_addr;
>   	u32 flags;
> @@ -460,10 +512,10 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
>   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
>   	payload = &cmd_payload->hdr_init_local;
>   
> -	payload->hdr_table_addr = cpu_to_le64(addr);
> +	SET_DMA_FIELD(ipa->version, payload, hdr_table_addr, addr);
>   	flags = u32_encode_bits(size, HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
>   	flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
> -	payload->flags = cpu_to_le32(flags);
> +	CMD_FIELD(ipa->version, payload, flags) = cpu_to_le32(flags);
>   
>   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
> @@ -509,8 +561,11 @@ void ipa_cmd_register_write_add(struct ipa_trans *trans, u32 offset, u32 value,
>   
>   	} else {
>   		flags = 0;	/* SKIP_CLEAR flag is always 0 */
> -		options = u16_encode_bits(clear_option,
> -					  REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
> +		if (ipa->version > IPA_VERSION_2_6L)
> +			options = u16_encode_bits(clear_option,
> +					REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
> +		else
> +			options = 0;
>   	}
>   
>   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
> @@ -552,7 +607,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
>   {
>   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM;
> -	struct ipa_cmd_hw_dma_mem_mem *payload;
> +	enum ipa_version version = ipa->version;
> +	union ipa_cmd_hw_dma_mem_mem *payload;
>   	union ipa_cmd_payload *cmd_payload;
>   	enum dma_data_direction direction;
>   	dma_addr_t payload_addr;
> @@ -571,8 +627,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
>   	/* payload->clear_after_read was reserved prior to IPA v4.0.  It's
>   	 * never needed for current code, so it's 0 regardless of version.
>   	 */
> -	payload->size = cpu_to_le16(size);
> -	payload->local_addr = cpu_to_le16(offset);
> +	CMD_FIELD(version, payload, size) = cpu_to_le16(size);
> +	CMD_FIELD(version, payload, local_addr) = cpu_to_le16(offset);
>   	/* payload->flags:
>   	 *   direction:		0 = write to IPA, 1 read from IPA
>   	 * Starting at v4.0 these are reserved; either way, all zero:
> @@ -582,8 +638,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
>   	 * since both values are 0 we won't bother OR'ing them in.
>   	 */
>   	flags = toward_ipa ? 0 : DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK;
> -	payload->flags = cpu_to_le16(flags);
> -	payload->system_addr = cpu_to_le64(addr);
> +	CMD_FIELD(version, payload, flags) = cpu_to_le16(flags);
> +	SET_DMA_FIELD(version, payload, system_addr, addr);
>   
>   	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
>   
> @@ -599,11 +655,17 @@ static void ipa_cmd_ip_tag_status_add(struct ipa_trans *trans)
>   	struct ipa_cmd_ip_packet_tag_status *payload;
>   	union ipa_cmd_payload *cmd_payload;
>   	dma_addr_t payload_addr;
> +	u64 tag_mask;
> +
> +	if (trans->dma_subsys->version <= IPA_VERSION_2_6L)
> +		tag_mask = IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK;
> +	else
> +		tag_mask = IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK;
>   
>   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
>   	payload = &cmd_payload->ip_packet_tag_status;
>   
> -	payload->tag = le64_encode_bits(0, IP_PACKET_TAG_STATUS_TAG_FMASK);
> +	payload->tag = le64_encode_bits(0, tag_mask);
>   
>   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
> diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
> index d197959cc032..459fb4830244 100644
> --- a/drivers/net/ipa/ipa_table.c
> +++ b/drivers/net/ipa/ipa_table.c
> @@ -8,6 +8,7 @@
>   #include <linux/kernel.h>
>   #include <linux/bits.h>
>   #include <linux/bitops.h>
> +#include <linux/module.h>
>   #include <linux/bitfield.h>
>   #include <linux/io.h>
>   #include <linux/build_bug.h>
> @@ -561,6 +562,19 @@ void ipa_table_config(struct ipa *ipa)
>   	ipa_route_config(ipa, true);
>   }
>   
> +static inline void *ipa_table_write(enum ipa_version version,
> +				   void *virt, u64 value)
> +{
> +	if (IPA_IS_64BIT(version)) {
> +		__le64 *ptr = virt;
> +		*ptr = cpu_to_le64(value);
> +	} else {
> +		__le32 *ptr = virt;
> +		*ptr = cpu_to_le32(value);
> +	}
> +	return virt + IPA_TABLE_ENTRY_SIZE(version);
> +}
> +
>   /*
>    * Initialize a coherent DMA allocation containing initialized filter and
>    * route table data.  This is used when initializing or resetting the IPA
> @@ -602,10 +616,11 @@ void ipa_table_config(struct ipa *ipa)
>   int ipa_table_init(struct ipa *ipa)
>   {
>   	u32 count = max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX);
> +	enum ipa_version version = ipa->version;
>   	struct device *dev = &ipa->pdev->dev;
> +	u64 filter_map = ipa->filter_map << 1;
>   	dma_addr_t addr;
> -	__le64 le_addr;
> -	__le64 *virt;
> +	void *virt;
>   	size_t size;
>   
>   	ipa_table_validate_build();
> @@ -626,19 +641,21 @@ int ipa_table_init(struct ipa *ipa)
>   	ipa->table_addr = addr;
>   
>   	/* First slot is the zero rule */
> -	*virt++ = 0;
> +	virt = ipa_table_write(version, virt, 0);
>   
>   	/* Next is the filter table bitmap.  The "soft" bitmap value
>   	 * must be converted to the hardware representation by shifting
>   	 * it left one position.  (Bit 0 repesents global filtering,
>   	 * which is possible but not used.)
>   	 */
> -	*virt++ = cpu_to_le64((u64)ipa->filter_map << 1);
> +	if (version <= IPA_VERSION_2_6L)
> +		filter_map |= 1;
> +
> +	virt = ipa_table_write(version, virt, filter_map);
>   
>   	/* All the rest contain the DMA address of the zero rule */
> -	le_addr = cpu_to_le64(addr);
>   	while (count--)
> -		*virt++ = le_addr;
> +		virt = ipa_table_write(version, virt, addr);
>   
>   	return 0;
>   }
> diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
> index 78a168ce6558..6e12fc49e45b 100644
> --- a/drivers/net/ipa/ipa_table.h
> +++ b/drivers/net/ipa/ipa_table.h
> @@ -43,7 +43,7 @@ bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask);
>    */
>   static inline bool ipa_table_hash_support(struct ipa *ipa)
>   {
> -	return ipa->version != IPA_VERSION_4_2;
> +	return ipa->version != IPA_VERSION_4_2 && ipa->version > IPA_VERSION_2_6L;
>   }
>   
>   /**
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints
  2021-09-20  3:08 ` [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 18:17     ` Sireesh Kodali
  0 siblings, 1 reply; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> IPA v2.x endpoints are largely the same as the endpoints on later
> versions. The biggest change is the addition of the "skip_config" flag;
> the only other change is the backlog limit, which is a fixed number on
> IPA v2.6L.

Not much to say here.  Your patches are reasonably small, which
makes them easier to review (thank you).

					-Alex

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_endpoint.c | 65 ++++++++++++++++++++++------------
>   1 file changed, 43 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> index 7d3ab61cd890..024cf3a0ded0 100644
> --- a/drivers/net/ipa/ipa_endpoint.c
> +++ b/drivers/net/ipa/ipa_endpoint.c
> @@ -360,8 +360,10 @@ void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable)
>   {
>   	u32 endpoint_id;
>   
> -	/* DELAY mode doesn't work correctly on IPA v4.2 */
> -	if (ipa->version == IPA_VERSION_4_2)
> +	/* DELAY mode doesn't work correctly on IPA v4.2
> +	 * Pausing is not supported on IPA v2.6L
> +	 */
> +	if (ipa->version == IPA_VERSION_4_2 || ipa->version <= IPA_VERSION_2_6L)
>   		return;
>   
>   	for (endpoint_id = 0; endpoint_id < IPA_ENDPOINT_MAX; endpoint_id++) {
> @@ -383,6 +385,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   {
>   	u32 initialized = ipa->initialized;
>   	struct ipa_trans *trans;
> +	u32 value = 0, value_mask = ~0;
>   	u32 count;
>   
>   	/* We need one command per modem TX endpoint.  We can get an upper
> @@ -398,6 +401,11 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   		return -EBUSY;
>   	}
>   
> +	if (ipa->version <= IPA_VERSION_2_6L) {
> +		value = aggr_force_close_fmask(true);
> +		value_mask = aggr_force_close_fmask(true);
> +	}
> +
>   	while (initialized) {
>   		u32 endpoint_id = __ffs(initialized);
>   		struct ipa_endpoint *endpoint;
> @@ -416,7 +424,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   		 * means status is disabled on the endpoint, and as a
>   		 * result all other fields in the register are ignored.
>   		 */
> -		ipa_cmd_register_write_add(trans, offset, 0, ~0, false);
> +		ipa_cmd_register_write_add(trans, offset, value, value_mask, false);
>   	}
>   
>   	ipa_cmd_pipeline_clear_add(trans);
> @@ -1531,8 +1539,10 @@ static void ipa_endpoint_program(struct ipa_endpoint *endpoint)
>   	ipa_endpoint_init_mode(endpoint);
>   	ipa_endpoint_init_aggr(endpoint);
>   	ipa_endpoint_init_deaggr(endpoint);
> -	ipa_endpoint_init_rsrc_grp(endpoint);
> -	ipa_endpoint_init_seq(endpoint);
> +	if (endpoint->ipa->version > IPA_VERSION_2_6L) {
> +		ipa_endpoint_init_rsrc_grp(endpoint);
> +		ipa_endpoint_init_seq(endpoint);
> +	}
>   	ipa_endpoint_status(endpoint);
>   }
>   
> @@ -1592,7 +1602,6 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
>   {
>   	struct device *dev = &endpoint->ipa->pdev->dev;
>   	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
> -	bool stop_channel;
>   	int ret;
>   
>   	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
> @@ -1613,7 +1622,6 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
>   {
>   	struct device *dev = &endpoint->ipa->pdev->dev;
>   	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
> -	bool start_channel;
>   	int ret;
>   
>   	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
> @@ -1750,23 +1758,33 @@ int ipa_endpoint_config(struct ipa *ipa)
>   	/* Find out about the endpoints supplied by the hardware, and ensure
>   	 * the highest one doesn't exceed the number we support.
>   	 */
> -	val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
> -
> -	/* Our RX is an IPA producer */
> -	rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
> -	max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
> -	if (max > IPA_ENDPOINT_MAX) {
> -		dev_err(dev, "too many endpoints (%u > %u)\n",
> -			max, IPA_ENDPOINT_MAX);
> -		return -EINVAL;
> -	}
> -	rx_mask = GENMASK(max - 1, rx_base);
> +	if (ipa->version <= IPA_VERSION_2_6L) {
> +		// FIXME Not used anywhere?
> +		if (ipa->version == IPA_VERSION_2_6L)
> +			val = ioread32(ipa->reg_virt +
> +					IPA_REG_V2_ENABLED_PIPES_OFFSET);
> +		/* IPA v2.6L supports 20 pipes */
> +		ipa->available = ipa->filter_map;
> +		return 0;
> +	} else {
> +		val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
> +
> +		/* Our RX is an IPA producer */
> +		rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
> +		max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
> +		if (max > IPA_ENDPOINT_MAX) {
> +			dev_err(dev, "too many endpoints (%u > %u)\n",
> +					max, IPA_ENDPOINT_MAX);
> +			return -EINVAL;
> +		}
> +		rx_mask = GENMASK(max - 1, rx_base);
>   
> -	/* Our TX is an IPA consumer */
> -	max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
> -	tx_mask = GENMASK(max - 1, 0);
> +		/* Our TX is an IPA consumer */
> +		max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
> +		tx_mask = GENMASK(max - 1, 0);
>   
> -	ipa->available = rx_mask | tx_mask;
> +		ipa->available = rx_mask | tx_mask;
> +	}
>   
>   	/* Check for initialized endpoints not supported by the hardware */
>   	if (ipa->initialized & ~ipa->available) {
> @@ -1865,6 +1883,9 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
>   			filter_map |= BIT(data->endpoint_id);
>   	}
>   
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		filter_map = 0x1fffff;
> +
>   	if (!ipa_filter_map_valid(ipa, filter_map))
>   		goto err_endpoint_exit;
>   
> 



* Re: [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map
  2021-09-20  3:08 ` [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 18:19     ` Sireesh Kodali
  0 siblings, 1 reply; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> IPA v2.6L has an extra region to handle compression/decompression
> acceleration. This region is used by some modems during modem init.

So it has to be initialized?  (I guess so.)

The memory size register apparently doesn't express things in
units of 8 bytes either.

					-Alex

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_mem.c | 36 ++++++++++++++++++++++++++++++------
>   drivers/net/ipa/ipa_mem.h |  5 ++++-
>   2 files changed, 34 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
> index 8acc88070a6f..bfcdc7e08de2 100644
> --- a/drivers/net/ipa/ipa_mem.c
> +++ b/drivers/net/ipa/ipa_mem.c
> @@ -84,7 +84,7 @@ int ipa_mem_setup(struct ipa *ipa)
>   	/* Get a transaction to define the header memory region and to zero
>   	 * the processing context and modem memory regions.
>   	 */
> -	trans = ipa_cmd_trans_alloc(ipa, 4);
> +	trans = ipa_cmd_trans_alloc(ipa, 5);
>   	if (!trans) {
>   		dev_err(&ipa->pdev->dev, "no transaction for memory setup\n");
>   		return -EBUSY;
> @@ -107,8 +107,14 @@ int ipa_mem_setup(struct ipa *ipa)
>   	ipa_mem_zero_region_add(trans, IPA_MEM_AP_PROC_CTX);
>   	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM);
>   
> +	ipa_mem_zero_region_add(trans, IPA_MEM_ZIP);
> +
>   	ipa_trans_commit_wait(trans);
>   
> +	/* On IPA version <=2.6L (except 2.5) there is no PROC_CTX.  */
> +	if (ipa->version != IPA_VERSION_2_5 && ipa->version <= IPA_VERSION_2_6L)
> +		return 0;
> +
>   	/* Tell the hardware where the processing context area is located */
>   	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
>   	offset = ipa->mem_offset + mem->offset;
> @@ -147,6 +153,11 @@ static bool ipa_mem_id_valid(struct ipa *ipa, enum ipa_mem_id mem_id)
>   	case IPA_MEM_END_MARKER:	/* pseudo region */
>   		break;
>   
> +	case IPA_MEM_ZIP:
> +		if (version == IPA_VERSION_2_6L)
> +			return true;
> +		break;
> +
>   	case IPA_MEM_STATS_TETHERING:
>   	case IPA_MEM_STATS_DROP:
>   		if (version < IPA_VERSION_4_0)
> @@ -319,10 +330,15 @@ int ipa_mem_config(struct ipa *ipa)
>   	/* Check the advertised location and size of the shared memory area */
>   	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
>   
> -	/* The fields in the register are in 8 byte units */
> -	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> -	/* Make sure the end is within the region's mapped space */
> -	mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> +	if (IPA_VERSION_RANGE(ipa->version, 2_0, 2_6L)) {
> +		/* The fields in the register are in 8 byte units */
> +		ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> +		/* Make sure the end is within the region's mapped space */
> +		mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> +	} else {
> +		ipa->mem_offset = u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> +		mem_size = u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> +	}
>   
>   	/* If the sizes don't match, issue a warning */
>   	if (ipa->mem_offset + mem_size < ipa->mem_size) {
> @@ -564,6 +580,10 @@ static int ipa_smem_init(struct ipa *ipa, u32 item, size_t size)
>   		return -EINVAL;
>   	}
>   
> +	/* IPA v2.6L does not use IOMMU */
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		return 0;
> +
>   	domain = iommu_get_domain_for_dev(dev);
>   	if (!domain) {
>   		dev_err(dev, "no IOMMU domain found for SMEM\n");
> @@ -591,6 +611,9 @@ static void ipa_smem_exit(struct ipa *ipa)
>   	struct device *dev = &ipa->pdev->dev;
>   	struct iommu_domain *domain;
>   
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		return;
> +
>   	domain = iommu_get_domain_for_dev(dev);
>   	if (domain) {
>   		size_t size;
> @@ -622,7 +645,8 @@ int ipa_mem_init(struct ipa *ipa, const struct ipa_mem_data *mem_data)
>   	ipa->mem_count = mem_data->local_count;
>   	ipa->mem = mem_data->local;
>   
> -	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64));
> +	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, IPA_IS_64BIT(ipa->version) ?
> +					DMA_BIT_MASK(64) : DMA_BIT_MASK(32));
>   	if (ret) {
>   		dev_err(dev, "error %d setting DMA mask\n", ret);
>   		return ret;
> diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
> index 570bfdd99bff..be91cb38b6a8 100644
> --- a/drivers/net/ipa/ipa_mem.h
> +++ b/drivers/net/ipa/ipa_mem.h
> @@ -47,8 +47,10 @@ enum ipa_mem_id {
>   	IPA_MEM_UC_INFO,		/* 0 canaries */
>   	IPA_MEM_V4_FILTER_HASHED,	/* 2 canaries */
>   	IPA_MEM_V4_FILTER,		/* 2 canaries */
> +	IPA_MEM_V4_FILTER_AP,		/* 2 canaries (IPA v2.0) */
>   	IPA_MEM_V6_FILTER_HASHED,	/* 2 canaries */
>   	IPA_MEM_V6_FILTER,		/* 2 canaries */
> +	IPA_MEM_V6_FILTER_AP,		/* 0 canaries (IPA v2.0) */
>   	IPA_MEM_V4_ROUTE_HASHED,	/* 2 canaries */
>   	IPA_MEM_V4_ROUTE,		/* 2 canaries */
>   	IPA_MEM_V6_ROUTE_HASHED,	/* 2 canaries */
> @@ -57,7 +59,8 @@ enum ipa_mem_id {
>   	IPA_MEM_AP_HEADER,		/* 0 canaries, optional */
>   	IPA_MEM_MODEM_PROC_CTX,		/* 2 canaries */
>   	IPA_MEM_AP_PROC_CTX,		/* 0 canaries */
> -	IPA_MEM_MODEM,			/* 0/2 canaries */
> +	IPA_MEM_ZIP,			/* 1 canary (IPA v2.6L) */
> +	IPA_MEM_MODEM,			/* 0-2 canaries */
>   	IPA_MEM_UC_EVENT_RING,		/* 1 canary, optional */
>   	IPA_MEM_PDN_CONFIG,		/* 0/2 canaries (IPA v4.0+) */
>   	IPA_MEM_STATS_QUOTA_MODEM,	/* 2/4 canaries (IPA v4.0+) */
> 



* Re: [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface
  2021-09-20  3:08 ` [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 18:22     ` Sireesh Kodali
  0 siblings, 1 reply; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> On IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETED, so we have
> to rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know when it's
> ready. We add a function here that marks uc_ready = true. This function
> is called by ipa_uc.c when IPA_UC_RESPONSE_INIT_COMPLETED is handled.

This should use the new ipa_mem_find() interface for getting the
memory information for the ZIP region.

I don't know where the IPA_UC_RESPONSE_INIT_COMPLETED gets sent
but I presume it ends up calling ipa_qmi_signal_uc_loaded().

I think actually the DRIVER_INIT_COMPLETE message from the modem
is saying "I finished initializing the microcontroller."  And
I've wondered why there is a duplicate mechanism.  Maybe there
was a race or something.

					-Alex

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> ---
>   drivers/net/ipa/ipa_qmi.c | 27 ++++++++++++++++++++++++++-
>   drivers/net/ipa/ipa_qmi.h | 10 ++++++++++
>   2 files changed, 36 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
> index 7e2fe701cc4d..876e2a004f70 100644
> --- a/drivers/net/ipa/ipa_qmi.c
> +++ b/drivers/net/ipa/ipa_qmi.c
> @@ -68,6 +68,11 @@
>    * - The INDICATION_REGISTER request and INIT_COMPLETE indication are
>    *   optional for non-initial modem boots, and have no bearing on the
>    *   determination of when things are "ready"
> + *
> + * Note that on IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETE
> + * request. Thus, we rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know
> + * when the uc is ready. The rest of the process is the same on IPA v2.x and
> + * later IPA versions
>    */
>   
>   #define IPA_HOST_SERVICE_SVC_ID		0x31
> @@ -345,7 +350,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
>   			req.hdr_proc_ctx_tbl_info.start + mem->size - 1;
>   	}
>   
> -	/* Nothing to report for the compression table (zip_tbl_info) */
> +	mem = &ipa->mem[IPA_MEM_ZIP];
> +	if (mem->size) {
> +		req.zip_tbl_info_valid = 1;
> +		req.zip_tbl_info.start = ipa->mem_offset + mem->offset;
> +		req.zip_tbl_info.end = ipa->mem_offset + mem->size - 1;
> +	}
>   
>   	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE_HASHED);
>   	if (mem->size) {
> @@ -525,6 +535,21 @@ int ipa_qmi_setup(struct ipa *ipa)
>   	return ret;
>   }
>   
> +/* With IPA v2 modem is not required to send DRIVER_INIT_COMPLETE request to AP.
> + * We start operation as soon as IPA_UC_RESPONSE_INIT_COMPLETED irq is triggered.
> + */
> +void ipa_qmi_signal_uc_loaded(struct ipa *ipa)
> +{
> +	struct ipa_qmi *ipa_qmi = &ipa->qmi;
> +
> +	/* This is needed only on IPA 2.x */
> +	if (ipa->version > IPA_VERSION_2_6L)
> +		return;
> +
> +	ipa_qmi->uc_ready = true;
> +	ipa_qmi_ready(ipa_qmi);
> +}
> +
>   /* Tear down IPA QMI handles */
>   void ipa_qmi_teardown(struct ipa *ipa)
>   {
> diff --git a/drivers/net/ipa/ipa_qmi.h b/drivers/net/ipa/ipa_qmi.h
> index 856ef629ccc8..4962d88b0d22 100644
> --- a/drivers/net/ipa/ipa_qmi.h
> +++ b/drivers/net/ipa/ipa_qmi.h
> @@ -55,6 +55,16 @@ struct ipa_qmi {
>    */
>   int ipa_qmi_setup(struct ipa *ipa);
>   
> +/**
> + * ipa_qmi_signal_uc_loaded() - Signal that the UC has been loaded
> + * @ipa:		IPA pointer
> + *
> + * This is called when the uc indicates that it is ready. This exists, because
> + * on IPA v2.x, the modem does not send a DRIVER_INIT_COMPLETED. Thus we have
> + * to rely on the uc's INIT_COMPLETED response to know if it was initialized
> + */
> +void ipa_qmi_signal_uc_loaded(struct ipa *ipa);
> +
>   /**
>    * ipa_qmi_teardown() - Tear down IPA QMI handles
>    * @ipa:		IPA pointer
> 



* Re: [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller
  2021-09-20  3:08 ` [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  0 siblings, 0 replies; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> There are some minor differences between IPA v2.x and later revisions
> with regard to the uc. The biggest difference is the shared memory's
> layout. There are also some changes to the command numbers, but these
> are not too important, since the mainline driver doesn't use them.

It's a shame that so much has to be rearranged when the
structure definitions are changed.  If I spent more time
thinking about this I might suggest a different way of
abstracting the two, but for now this looks fine.

					-Alex


> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_uc.c | 96 ++++++++++++++++++++++++++--------------
>   1 file changed, 63 insertions(+), 33 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_uc.c b/drivers/net/ipa/ipa_uc.c
> index 856e55a080a7..bf6b25098301 100644
> --- a/drivers/net/ipa/ipa_uc.c
> +++ b/drivers/net/ipa/ipa_uc.c
> @@ -39,11 +39,12 @@
>   #define IPA_SEND_DELAY		100	/* microseconds */
>   
>   /**
> - * struct ipa_uc_mem_area - AP/microcontroller shared memory area
> + * union ipa_uc_mem_area - AP/microcontroller shared memory area
>    * @command:		command code (AP->microcontroller)
>    * @reserved0:		reserved bytes; avoid reading or writing
>    * @command_param:	low 32 bits of command parameter (AP->microcontroller)
>    * @command_param_hi:	high 32 bits of command parameter (AP->microcontroller)
> + *			Available since IPA v3.0
>    *
>    * @response:		response code (microcontroller->AP)
>    * @reserved1:		reserved bytes; avoid reading or writing
> @@ -59,31 +60,58 @@
>    * @reserved3:		reserved bytes; avoid reading or writing
>    * @interface_version:	hardware-reported interface version
>    * @reserved4:		reserved bytes; avoid reading or writing
> + * @reserved5:		reserved bytes; avoid reading or writing
>    *
>    * A shared memory area at the base of IPA resident memory is used for
>    * communication with the microcontroller.  The region is 128 bytes in
>    * size, but only the first 40 bytes (structured this way) are used.
>    */
> -struct ipa_uc_mem_area {
> -	u8 command;		/* enum ipa_uc_command */
> -	u8 reserved0[3];
> -	__le32 command_param;
> -	__le32 command_param_hi;
> -	u8 response;		/* enum ipa_uc_response */
> -	u8 reserved1[3];
> -	__le32 response_param;
> -	u8 event;		/* enum ipa_uc_event */
> -	u8 reserved2[3];
> -
> -	__le32 event_param;
> -	__le32 first_error_address;
> -	u8 hw_state;
> -	u8 warning_counter;
> -	__le16 reserved3;
> -	__le16 interface_version;
> -	__le16 reserved4;
> +union ipa_uc_mem_area {
> +	struct {
> +		u8 command;		/* enum ipa_uc_command */
> +		u8 reserved0[3];
> +		__le32 command_param;
> +		u8 response;		/* enum ipa_uc_response */
> +		u8 reserved1[3];
> +		__le32 response_param;
> +		u8 event;		/* enum ipa_uc_event */
> +		u8 reserved2[3];
> +
> +		__le32 event_param;
> +		__le32 reserved3;
> +		__le32 first_error_address;
> +		u8 hw_state;
> +		u8 warning_counter;
> +		__le16 reserved4;
> +		__le16 interface_version;
> +		__le16 reserved5;
> +	} v2;
> +	struct {
> +		u8 command;		/* enum ipa_uc_command */
> +		u8 reserved0[3];
> +		__le32 command_param;
> +		__le32 command_param_hi;
> +		u8 response;		/* enum ipa_uc_response */
> +		u8 reserved1[3];
> +		__le32 response_param;
> +		u8 event;		/* enum ipa_uc_event */
> +		u8 reserved2[3];
> +
> +		__le32 event_param;
> +		__le32 first_error_address;
> +		u8 hw_state;
> +		u8 warning_counter;
> +		__le16 reserved3;
> +		__le16 interface_version;
> +		__le16 reserved4;
> +	} v3;
>   };
>   
> +#define UC_FIELD(_ipa, _field)			\
> +	*((_ipa->version >= IPA_VERSION_3_0) ?	\
> +	  &(ipa_uc_shared(_ipa)->v3._field) :	\
> +	  &(ipa_uc_shared(_ipa)->v2._field))
> +
>   /** enum ipa_uc_command - commands from the AP to the microcontroller */
>   enum ipa_uc_command {
>   	IPA_UC_COMMAND_NO_OP		= 0x0,
> @@ -95,6 +123,7 @@ enum ipa_uc_command {
>   	IPA_UC_COMMAND_CLK_UNGATE	= 0x6,
>   	IPA_UC_COMMAND_MEMCPY		= 0x7,
>   	IPA_UC_COMMAND_RESET_PIPE	= 0x8,
> +	/* Next two commands are present for IPA v3.0+ */
>   	IPA_UC_COMMAND_REG_WRITE	= 0x9,
>   	IPA_UC_COMMAND_GSI_CH_EMPTY	= 0xa,
>   };
> @@ -114,7 +143,7 @@ enum ipa_uc_event {
>   	IPA_UC_EVENT_LOG_INFO		= 0x2,
>   };
>   
> -static struct ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
> +static union ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
>   {
>   	const struct ipa_mem *mem = ipa_mem_find(ipa, IPA_MEM_UC_SHARED);
>   	u32 offset = ipa->mem_offset + mem->offset;
> @@ -125,22 +154,22 @@ static struct ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
>   /* Microcontroller event IPA interrupt handler */
>   static void ipa_uc_event_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
>   {
> -	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
>   	struct device *dev = &ipa->pdev->dev;
> +	u32 event = UC_FIELD(ipa, event);
>   
> -	if (shared->event == IPA_UC_EVENT_ERROR)
> +	if (event == IPA_UC_EVENT_ERROR)
>   		dev_err(dev, "microcontroller error event\n");
> -	else if (shared->event != IPA_UC_EVENT_LOG_INFO)
> +	else if (event != IPA_UC_EVENT_LOG_INFO)
>   		dev_err(dev, "unsupported microcontroller event %u\n",
> -			shared->event);
> +			event);
>   	/* The LOG_INFO event can be safely ignored */
>   }
>   
>   /* Microcontroller response IPA interrupt handler */
>   static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
>   {
> -	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
>   	struct device *dev = &ipa->pdev->dev;
> +	u32 response = UC_FIELD(ipa, response);
>   
>   	/* An INIT_COMPLETED response message is sent to the AP by the
>   	 * microcontroller when it is operational.  Other than this, the AP
> @@ -150,20 +179,21 @@ static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
>   	 * We can drop the power reference taken in ipa_uc_power() once we
>   	 * know the microcontroller has finished its initialization.
>   	 */
> -	switch (shared->response) {
> +	switch (response) {
>   	case IPA_UC_RESPONSE_INIT_COMPLETED:
>   		if (ipa->uc_powered) {
>   			ipa->uc_loaded = true;
>   			pm_runtime_mark_last_busy(dev);
>   			(void)pm_runtime_put_autosuspend(dev);
>   			ipa->uc_powered = false;
> +			ipa_qmi_signal_uc_loaded(ipa);
>   		} else {
>   			dev_warn(dev, "unexpected init_completed response\n");
>   		}
>   		break;
>   	default:
>   		dev_warn(dev, "unsupported microcontroller response %u\n",
> -			 shared->response);
> +			 response);
>   		break;
>   	}
>   }
> @@ -216,16 +246,16 @@ void ipa_uc_power(struct ipa *ipa)
>   /* Send a command to the microcontroller */
>   static void send_uc_command(struct ipa *ipa, u32 command, u32 command_param)
>   {
> -	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
>   	u32 offset;
>   	u32 val;
>   
>   	/* Fill in the command data */
> -	shared->command = command;
> -	shared->command_param = cpu_to_le32(command_param);
> -	shared->command_param_hi = 0;
> -	shared->response = 0;
> -	shared->response_param = 0;
> +	UC_FIELD(ipa, command) = command;
> +	UC_FIELD(ipa, command_param) = cpu_to_le32(command_param);
> +	if (ipa->version >= IPA_VERSION_3_0)
> +		ipa_uc_shared(ipa)->v3.command_param_hi = 0;
> +	UC_FIELD(ipa, response) = 0;
> +	UC_FIELD(ipa, response_param) = 0;
>   
>   	/* Use an interrupt to tell the microcontroller the command is ready */
>   	val = u32_encode_bits(1, UC_INTR_FMASK);
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support
  2021-09-20  3:08 ` [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  0 siblings, 0 replies; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> The biggest changes are:
> 
> - Make SMP2P functions no-operation
> - Make resource init no-operation
> - Skip firmware loading
> - Add reset sequence

The only comments I have are not very major, so I'll wait
for a later review to suggest that sort of fine tuning.

					-Alex

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_main.c     | 19 ++++++++++++++++---
>   drivers/net/ipa/ipa_resource.c |  3 +++
>   drivers/net/ipa/ipa_smp2p.c    | 11 +++++++++--
>   3 files changed, 28 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> index ea6c4347f2c6..b437fbf95edf 100644
> --- a/drivers/net/ipa/ipa_main.c
> +++ b/drivers/net/ipa/ipa_main.c
> @@ -355,12 +355,22 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
>   	u32 granularity;
>   	u32 val;
>   
> +	if (ipa->version <= IPA_VERSION_2_6L) {
> +		iowrite32(1, ipa->reg_virt + IPA_REG_COMP_SW_RESET_OFFSET);
> +		iowrite32(0, ipa->reg_virt + IPA_REG_COMP_SW_RESET_OFFSET);
> +
> +		iowrite32(1, ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
> +	}
> +
>   	/* IPA v4.5+ has no backward compatibility register */
> -	if (version < IPA_VERSION_4_5) {
> +	if (version >= IPA_VERSION_2_5 && version < IPA_VERSION_4_5) {
>   		val = data->backward_compat;
>   		iowrite32(val, ipa->reg_virt + ipa_reg_bcr_offset(ipa->version));
>   	}
>   
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		return;
> +
>   	/* Implement some hardware workarounds */
>   	if (version >= IPA_VERSION_4_0 && version < IPA_VERSION_4_5) {
>   		/* Disable PA mask to allow HOLB drop */
> @@ -412,7 +422,8 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
>   static void ipa_hardware_deconfig(struct ipa *ipa)
>   {
>   	/* Mostly we just leave things as we set them. */
> -	ipa_hardware_dcd_deconfig(ipa);
> +	if (ipa->version > IPA_VERSION_2_6L)
> +		ipa_hardware_dcd_deconfig(ipa);
>   }
>   
>   /**
> @@ -765,8 +776,10 @@ static int ipa_probe(struct platform_device *pdev)
>   
>   	/* Otherwise we need to load the firmware and have Trust Zone validate
>   	 * and install it.  If that succeeds we can proceed with setup.
> +	 * But on IPA v2.6L we don't need to do firmware loading :D
>   	 */
> -	ret = ipa_firmware_load(dev);
> +	if (ipa->version > IPA_VERSION_2_6L)
> +		ret = ipa_firmware_load(dev);
>   	if (ret)
>   		goto err_deconfig;
>   
> diff --git a/drivers/net/ipa/ipa_resource.c b/drivers/net/ipa/ipa_resource.c
> index e3da95d69409..36a72324d828 100644
> --- a/drivers/net/ipa/ipa_resource.c
> +++ b/drivers/net/ipa/ipa_resource.c
> @@ -162,6 +162,9 @@ int ipa_resource_config(struct ipa *ipa, const struct ipa_resource_data *data)
>   {
>   	u32 i;
>   
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		return 0;
> +
>   	if (!ipa_resource_limits_valid(ipa, data))
>   		return -EINVAL;
>   
> diff --git a/drivers/net/ipa/ipa_smp2p.c b/drivers/net/ipa/ipa_smp2p.c
> index df7639c39d71..fa4a9f1c196a 100644
> --- a/drivers/net/ipa/ipa_smp2p.c
> +++ b/drivers/net/ipa/ipa_smp2p.c
> @@ -233,6 +233,10 @@ int ipa_smp2p_init(struct ipa *ipa, bool modem_init)
>   	u32 valid_bit;
>   	int ret;
>   
> +	/* With IPA v2.6L and earlier SMP2P interrupts are used */
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		return 0;
> +
>   	valid_state = qcom_smem_state_get(dev, "ipa-clock-enabled-valid",
>   					  &valid_bit);
>   	if (IS_ERR(valid_state))
> @@ -302,6 +306,9 @@ void ipa_smp2p_exit(struct ipa *ipa)
>   {
>   	struct ipa_smp2p *smp2p = ipa->smp2p;
>   
> +	if (!smp2p)
> +		return;
> +
>   	if (smp2p->setup_ready_irq)
>   		ipa_smp2p_irq_exit(smp2p, smp2p->setup_ready_irq);
>   	ipa_smp2p_panic_notifier_unregister(smp2p);
> @@ -317,7 +324,7 @@ void ipa_smp2p_disable(struct ipa *ipa)
>   {
>   	struct ipa_smp2p *smp2p = ipa->smp2p;
>   
> -	if (!smp2p->setup_ready_irq)
> +	if (!smp2p || !smp2p->setup_ready_irq)
>   		return;
>   
>   	mutex_lock(&smp2p->mutex);
> @@ -333,7 +340,7 @@ void ipa_smp2p_notify_reset(struct ipa *ipa)
>   	struct ipa_smp2p *smp2p = ipa->smp2p;
>   	u32 mask;
>   
> -	if (!smp2p->notified)
> +	if (!smp2p || !smp2p->notified)
>   		return;
>   
>   	ipa_smp2p_power_release(ipa);
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware
  2021-09-20  3:08 ` [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 18:35     ` Sireesh Kodali
  0 siblings, 1 reply; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> This commit adds the config for IPA v2.0, v2.5, v2.6L. IPA v2.5 is found
> on msm8996. IPA v2.6L hardware is found on following SoCs: msm8920,
> msm8940, msm8952, msm8953, msm8956, msm8976, sdm630, sdm660. No
> SoC-specific configuration in ipa driver is required.
> 
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>

I will not look at this in great detail right now.  It looks
good to me, but I didn't notice where "channel_name" got
defined.  I'm not sure what the BCR value represents either.

					-Alex

> ---
>   drivers/net/ipa/Makefile        |   7 +-
>   drivers/net/ipa/ipa_data-v2.c   | 369 ++++++++++++++++++++++++++++++++
>   drivers/net/ipa/ipa_data-v3.1.c |   2 +-
>   drivers/net/ipa/ipa_data.h      |   3 +
>   drivers/net/ipa/ipa_main.c      |  15 ++
>   drivers/net/ipa/ipa_sysfs.c     |   6 +
>   6 files changed, 398 insertions(+), 4 deletions(-)
>   create mode 100644 drivers/net/ipa/ipa_data-v2.c
> 
> diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
> index 4abebc667f77..858fbf76cff3 100644
> --- a/drivers/net/ipa/Makefile
> +++ b/drivers/net/ipa/Makefile
> @@ -7,6 +7,7 @@ ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
>   				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
>   				ipa_sysfs.o
>   
> -ipa-y			+=	ipa_data-v3.1.o ipa_data-v3.5.1.o \
> -				ipa_data-v4.2.o ipa_data-v4.5.o \
> -				ipa_data-v4.9.o ipa_data-v4.11.o
> +ipa-y			+=	ipa_data-v2.o ipa_data-v3.1.o \
> +				ipa_data-v3.5.1.o ipa_data-v4.2.o \
> +				ipa_data-v4.5.o ipa_data-v4.9.o \
> +				ipa_data-v4.11.o
> diff --git a/drivers/net/ipa/ipa_data-v2.c b/drivers/net/ipa/ipa_data-v2.c
> new file mode 100644
> index 000000000000..869b8a1a45d6
> --- /dev/null
> +++ b/drivers/net/ipa/ipa_data-v2.c
> @@ -0,0 +1,369 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
> + * Copyright (C) 2019-2020 Linaro Ltd.
> + */
> +
> +#include <linux/log2.h>
> +
> +#include "ipa_data.h"
> +#include "ipa_endpoint.h"
> +#include "ipa_mem.h"
> +
> +/* Endpoint configuration for the IPA v2 hardware. */
> +static const struct ipa_gsi_endpoint_data ipa_endpoint_data[] = {
> +	[IPA_ENDPOINT_AP_COMMAND_TX] = {
> +		.ee_id		= GSI_EE_AP,
> +		.channel_id	= 3,
> +		.endpoint_id	= 3,
> +		.channel_name	= "cmd_tx",
> +		.toward_ipa	= true,
> +		.channel = {
> +			.tre_count	= 256,
> +			.event_count	= 256,
> +			.tlv_count	= 20,
> +		},
> +		.endpoint = {
> +			.config	= {
> +				.dma_mode	= true,
> +				.dma_endpoint	= IPA_ENDPOINT_AP_LAN_RX,
> +			},
> +		},
> +	},
> +	[IPA_ENDPOINT_AP_LAN_RX] = {
> +		.ee_id		= GSI_EE_AP,
> +		.channel_id	= 2,
> +		.endpoint_id	= 2,
> +		.channel_name	= "ap_lan_rx",
> +		.channel = {
> +			.tre_count	= 256,
> +			.event_count	= 256,
> +			.tlv_count	= 8,
> +		},
> +		.endpoint	= {
> +			.config	= {
> +				.aggregation	= true,
> +				.status_enable	= true,
> +				.rx = {
> +					.pad_align	= ilog2(sizeof(u32)),
> +				},
> +			},
> +		},
> +	},
> +	[IPA_ENDPOINT_AP_MODEM_TX] = {
> +		.ee_id		= GSI_EE_AP,
> +		.channel_id	= 4,
> +		.endpoint_id	= 4,
> +		.channel_name	= "ap_modem_tx",
> +		.toward_ipa	= true,
> +		.channel = {
> +			.tre_count	= 256,
> +			.event_count	= 256,
> +			.tlv_count	= 8,
> +		},
> +		.endpoint	= {
> +			.config	= {
> +				.qmap		= true,
> +				.status_enable	= true,
> +				.tx = {
> +					.status_endpoint =
> +						IPA_ENDPOINT_AP_LAN_RX,
> +				},
> +			},
> +		},
> +	},
> +	[IPA_ENDPOINT_AP_MODEM_RX] = {
> +		.ee_id		= GSI_EE_AP,
> +		.channel_id	= 5,
> +		.endpoint_id	= 5,
> +		.channel_name	= "ap_modem_rx",
> +		.toward_ipa	= false,
> +		.channel = {
> +			.tre_count	= 256,
> +			.event_count	= 256,
> +			.tlv_count	= 8,
> +		},
> +		.endpoint	= {
> +			.config = {
> +				.aggregation	= true,
> +				.qmap		= true,
> +			},
> +		},
> +	},
> +	[IPA_ENDPOINT_MODEM_LAN_TX] = {
> +		.ee_id		= GSI_EE_MODEM,
> +		.channel_id	= 6,
> +		.endpoint_id	= 6,
> +		.channel_name	= "modem_lan_tx",
> +		.toward_ipa	= true,
> +	},
> +	[IPA_ENDPOINT_MODEM_COMMAND_TX] = {
> +		.ee_id		= GSI_EE_MODEM,
> +		.channel_id	= 7,
> +		.endpoint_id	= 7,
> +		.channel_name	= "modem_cmd_tx",
> +		.toward_ipa	= true,
> +	},
> +	[IPA_ENDPOINT_MODEM_LAN_RX] = {
> +		.ee_id		= GSI_EE_MODEM,
> +		.channel_id	= 8,
> +		.endpoint_id	= 8,
> +		.channel_name	= "modem_lan_rx",
> +		.toward_ipa	= false,
> +	},
> +	[IPA_ENDPOINT_MODEM_AP_RX] = {
> +		.ee_id		= GSI_EE_MODEM,
> +		.channel_id	= 9,
> +		.endpoint_id	= 9,
> +		.channel_name	= "modem_ap_rx",
> +		.toward_ipa	= false,
> +	},
> +};
> +
> +static struct ipa_interconnect_data ipa_interconnect_data[] = {
> +	{
> +		.name = "memory",
> +		.peak_bandwidth	= 1200000,	/* 1200 MBps */
> +		.average_bandwidth = 100000,	/* 100 MBps */
> +	},
> +	{
> +		.name = "imem",
> +		.peak_bandwidth	= 350000,	/* 350 MBps */
> +		.average_bandwidth  = 0,	/* unused */
> +	},
> +	{
> +		.name = "config",
> +		.peak_bandwidth	= 40000,	/* 40 MBps */
> +		.average_bandwidth = 0,		/* unused */
> +	},
> +};
> +
> +static struct ipa_power_data ipa_power_data = {
> +	.core_clock_rate	= 200 * 1000 * 1000,	/* Hz */
> +	.interconnect_count	= ARRAY_SIZE(ipa_interconnect_data),
> +	.interconnect_data	= ipa_interconnect_data,
> +};
> +
> +/* IPA-resident memory region configuration for v2.0 */
> +static const struct ipa_mem ipa_mem_local_data_v2_0[IPA_MEM_COUNT] = {
> +	[IPA_MEM_UC_SHARED] = {
> +		.offset         = 0,
> +		.size           = 0x80,
> +		.canary_count   = 0,
> +	},
> +	[IPA_MEM_V4_FILTER] = {
> +		.offset		= 0x0080,
> +		.size		= 0x0058,
> +		.canary_count	= 0,
> +	},
> +	[IPA_MEM_V6_FILTER] = {
> +		.offset		= 0x00e0,
> +		.size		= 0x0058,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_V4_ROUTE] = {
> +		.offset		= 0x0140,
> +		.size		= 0x002c,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_V6_ROUTE] = {
> +		.offset		= 0x0170,
> +		.size		= 0x002c,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_MODEM_HEADER] = {
> +		.offset		= 0x01a0,
> +		.size		= 0x0140,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_AP_HEADER] = {
> +		.offset		= 0x02e0,
> +		.size		= 0x0048,
> +		.canary_count	= 0,
> +	},
> +	[IPA_MEM_MODEM] = {
> +		.offset		= 0x032c,
> +		.size		= 0x0dcc,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_V4_FILTER_AP] = {
> +		.offset		= 0x10fc,
> +		.size		= 0x0780,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_V6_FILTER_AP] = {
> +		.offset		= 0x187c,
> +		.size		= 0x055c,
> +		.canary_count	= 0,
> +	},
> +	[IPA_MEM_UC_INFO] = {
> +		.offset		= 0x1ddc,
> +		.size		= 0x0124,
> +		.canary_count	= 1,
> +	},
> +};
> +
> +static struct ipa_mem_data ipa_mem_data_v2_0 = {
> +	.local		= ipa_mem_local_data_v2_0,
> +	.smem_id	= 497,
> +	.smem_size	= 0x00001f00,
> +};
> +
> +/* Configuration data for IPAv2.0 */
> +const struct ipa_data ipa_data_v2_0  = {
> +	.version	= IPA_VERSION_2_0,
> +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> +	.endpoint_data	= ipa_endpoint_data,
> +	.mem_data	= &ipa_mem_data_v2_0,
> +	.power_data	= &ipa_power_data,
> +};
> +
> +/* IPA-resident memory region configuration for v2.5 */
> +static const struct ipa_mem ipa_mem_local_data_v2_5[IPA_MEM_COUNT] = {
> +	[IPA_MEM_UC_SHARED] = {
> +		.offset         = 0,
> +		.size           = 0x80,
> +		.canary_count   = 0,
> +	},
> +	[IPA_MEM_UC_INFO] = {
> +		.offset		= 0x0080,
> +		.size		= 0x0200,
> +		.canary_count	= 0,
> +	},
> +	[IPA_MEM_V4_FILTER] = {
> +		.offset		= 0x0288,
> +		.size		= 0x0058,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_V6_FILTER] = {
> +		.offset		= 0x02e8,
> +		.size		= 0x0058,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_V4_ROUTE] = {
> +		.offset		= 0x0348,
> +		.size		= 0x003c,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_V6_ROUTE] = {
> +		.offset		= 0x0388,
> +		.size		= 0x003c,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_MODEM_HEADER] = {
> +		.offset		= 0x03c8,
> +		.size		= 0x0140,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_MODEM_PROC_CTX] = {
> +		.offset		= 0x0510,
> +		.size		= 0x0200,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_AP_PROC_CTX] = {
> +		.offset		= 0x0710,
> +		.size		= 0x0200,
> +		.canary_count	= 0,
> +	},
> +	[IPA_MEM_MODEM] = {
> +		.offset		= 0x0914,
> +		.size		= 0x16a8,
> +		.canary_count	= 1,
> +	},
> +};
> +
> +static struct ipa_mem_data ipa_mem_data_v2_5 = {
> +	.local		= ipa_mem_local_data_v2_5,
> +	.smem_id	= 497,
> +	.smem_size	= 0x00002000,
> +};
> +
> +/* Configuration data for IPAv2.5 */
> +const struct ipa_data ipa_data_v2_5  = {
> +	.version	= IPA_VERSION_2_5,
> +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> +	.endpoint_data	= ipa_endpoint_data,
> +	.mem_data	= &ipa_mem_data_v2_5,
> +	.power_data	= &ipa_power_data,
> +};
> +
> +/* IPA-resident memory region configuration for v2.6L */
> +static const struct ipa_mem ipa_mem_local_data_v2_6L[IPA_MEM_COUNT] = {
> +	{
> +		.id		= IPA_MEM_UC_SHARED,
> +		.offset         = 0,
> +		.size           = 0x80,
> +		.canary_count   = 0,
> +	},
> +	{
> +		.id 		= IPA_MEM_UC_INFO,
> +		.offset		= 0x0080,
> +		.size		= 0x0200,
> +		.canary_count	= 0,
> +	},
> +	{
> +		.id		= IPA_MEM_V4_FILTER,
> +		.offset		= 0x0288,
> +		.size		= 0x0058,
> +		.canary_count	= 2,
> +	},
> +	{
> +		.id		= IPA_MEM_V6_FILTER,
> +		.offset		= 0x02e8,
> +		.size		= 0x0058,
> +		.canary_count	= 2,
> +	},
> +	{
> +		.id		= IPA_MEM_V4_ROUTE,
> +		.offset		= 0x0348,
> +		.size		= 0x003c,
> +		.canary_count	= 2,
> +	},
> +	{
> +		.id		= IPA_MEM_V6_ROUTE,
> +		.offset		= 0x0388,
> +		.size		= 0x003c,
> +		.canary_count	= 1,
> +	},
> +	{
> +		.id		= IPA_MEM_MODEM_HEADER,
> +		.offset		= 0x03c8,
> +		.size		= 0x0140,
> +		.canary_count	= 1,
> +	},
> +	{
> +		.id		= IPA_MEM_ZIP,
> +		.offset		= 0x0510,
> +		.size		= 0x0200,
> +		.canary_count	= 2,
> +	},
> +	{
> +		.id		= IPA_MEM_MODEM,
> +		.offset		= 0x0714,
> +		.size		= 0x18e8,
> +		.canary_count	= 1,
> +	},
> +	{
> +		.id		= IPA_MEM_END_MARKER,
> +		.offset		= 0x2000,
> +		.size		= 0,
> +		.canary_count	= 1,
> +	},
> +};
> +
> +static struct ipa_mem_data ipa_mem_data_v2_6L = {
> +	.local		= ipa_mem_local_data_v2_6L,
> +	.smem_id	= 497,
> +	.smem_size	= 0x00002000,
> +};
> +
> +/* Configuration data for IPAv2.6L */
> +const struct ipa_data ipa_data_v2_6L  = {
> +	.version	= IPA_VERSION_2_6L,
> +	/* Unfortunately we don't know what this BCR value corresponds to */
> +	.backward_compat = 0x1fff7f,
> +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> +	.endpoint_data	= ipa_endpoint_data,
> +	.mem_data	= &ipa_mem_data_v2_6L,
> +	.power_data	= &ipa_power_data,
> +};
> diff --git a/drivers/net/ipa/ipa_data-v3.1.c b/drivers/net/ipa/ipa_data-v3.1.c
> index 06ddb85f39b2..12d231232756 100644
> --- a/drivers/net/ipa/ipa_data-v3.1.c
> +++ b/drivers/net/ipa/ipa_data-v3.1.c
> @@ -6,7 +6,7 @@
>   
>   #include <linux/log2.h>
>   
> -#include "gsi.h"
> +#include "ipa_dma.h"
>   #include "ipa_data.h"
>   #include "ipa_endpoint.h"
>   #include "ipa_mem.h"
> diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
> index 7d62d49f414f..e7ce2e9388b6 100644
> --- a/drivers/net/ipa/ipa_data.h
> +++ b/drivers/net/ipa/ipa_data.h
> @@ -301,6 +301,9 @@ struct ipa_data {
>   	const struct ipa_power_data *power_data;
>   };
>   
> +extern const struct ipa_data ipa_data_v2_0;
> +extern const struct ipa_data ipa_data_v2_5;
> +extern const struct ipa_data ipa_data_v2_6L;
>   extern const struct ipa_data ipa_data_v3_1;
>   extern const struct ipa_data ipa_data_v3_5_1;
>   extern const struct ipa_data ipa_data_v4_2;
> diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> index b437fbf95edf..3ae5c5c6734b 100644
> --- a/drivers/net/ipa/ipa_main.c
> +++ b/drivers/net/ipa/ipa_main.c
> @@ -560,6 +560,18 @@ static int ipa_firmware_load(struct device *dev)
>   }
>   
>   static const struct of_device_id ipa_match[] = {
> +	{
> +		.compatible	= "qcom,ipa-v2.0",
> +		.data		= &ipa_data_v2_0,
> +	},
> +	{
> +		.compatible	= "qcom,msm8996-ipa",
> +		.data		= &ipa_data_v2_5,
> +	},
> +	{
> +		.compatible	= "qcom,msm8953-ipa",
> +		.data		= &ipa_data_v2_6L,
> +	},
>   	{
>   		.compatible	= "qcom,msm8998-ipa",
>   		.data		= &ipa_data_v3_1,
> @@ -632,6 +644,9 @@ static void ipa_validate_build(void)
>   static bool ipa_version_valid(enum ipa_version version)
>   {
>   	switch (version) {
> +	case IPA_VERSION_2_0:
> +	case IPA_VERSION_2_5:
> +	case IPA_VERSION_2_6L:
>   	case IPA_VERSION_3_0:
>   	case IPA_VERSION_3_1:
>   	case IPA_VERSION_3_5:
> diff --git a/drivers/net/ipa/ipa_sysfs.c b/drivers/net/ipa/ipa_sysfs.c
> index ff61dbdd70d8..f5d159f6bc06 100644
> --- a/drivers/net/ipa/ipa_sysfs.c
> +++ b/drivers/net/ipa/ipa_sysfs.c
> @@ -14,6 +14,12 @@
>   static const char *ipa_version_string(struct ipa *ipa)
>   {
>   	switch (ipa->version) {
> +	case IPA_VERSION_2_0:
> +		return "2.0";
> +	case IPA_VERSION_2_5:
> +		return "2.5";
> +	case IPA_VERSION_2_6L:
> +		return "2.6L";
>   	case IPA_VERSION_3_0:
>   		return "3.0";
>   	case IPA_VERSION_3_1:
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA
  2021-09-20  3:08 ` [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA Sireesh Kodali
  2021-09-23 12:42   ` Rob Herring
@ 2021-10-13 22:31   ` Alex Elder
  1 sibling, 0 replies; 49+ messages in thread
From: Alex Elder @ 2021-10-13 22:31 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Andy Gross, Bjorn Andersson, David S. Miller, Jakub Kicinski,
	Rob Herring,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> MSM8996 uses IPA v2.5 and MSM8953 uses IPA v2.6L
> 
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>

This looks good.  And if it's good enough for Rob, it
*must* be good.

					-Alex

> ---
>   Documentation/devicetree/bindings/net/qcom,ipa.yaml | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/net/qcom,ipa.yaml b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
> index b8a0b392b24e..e857827bfa54 100644
> --- a/Documentation/devicetree/bindings/net/qcom,ipa.yaml
> +++ b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
> @@ -44,6 +44,8 @@ description:
>   properties:
>     compatible:
>       enum:
> +      - qcom,msm8953-ipa
> +      - qcom,msm8996-ipa
>         - qcom,msm8998-ipa
>         - qcom,sc7180-ipa
>         - qcom,sc7280-ipa
> 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration
  2021-10-13 22:28   ` Alex Elder
@ 2021-10-18 16:12     ` Sireesh Kodali
  0 siblings, 0 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-10-18 16:12 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 3:58 AM IST, Alex Elder wrote:
> On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> > From: Vladimir Lypak <vladimir.lypak@gmail.com>
> > 
> > The values in the enumeration were defined as bitmasks (base-2
> > exponents of the actual opcodes). Meanwhile, the opcode is not used as
> > a bitmask in the ipa_endpoint_status_skip() and
> > ipa_status_format_packet() functions (it is compared directly with the
> > opcode from the status packet). This commit converts these values to
> > the actual hardware constants.
> > 
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_endpoint.c | 8 ++++----
> >   1 file changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> > index 5528d97110d5..29227de6661f 100644
> > --- a/drivers/net/ipa/ipa_endpoint.c
> > +++ b/drivers/net/ipa/ipa_endpoint.c
> > @@ -41,10 +41,10 @@
> >   
> >   /** enum ipa_status_opcode - status element opcode hardware values */
> >   enum ipa_status_opcode {
> > -	IPA_STATUS_OPCODE_PACKET		= 0x01,
> > -	IPA_STATUS_OPCODE_DROPPED_PACKET	= 0x04,
> > -	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 0x08,
> > -	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 0x40,
> > +	IPA_STATUS_OPCODE_PACKET		= 0,
> > +	IPA_STATUS_OPCODE_DROPPED_PACKET	= 2,
> > +	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 3,
> > +	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 6,
>
> I haven't looked at how these symbols are used (whether you
> changed it at all), but I'm pretty sure this is wrong.
>
> The downstream tends to define "soft" symbols that must
> be mapped to their hardware equivalent values. So for
> example you might find a function ipa_pkt_status_parse()
> that translates between the hardware status structure
> and the abstracted "soft" status structure. In that
> function you see, for example, that hardware status
> opcode 0x1 is translated to IPAHAL_PKT_STATUS_OPCODE_PACKET,
> which downstream is defined to have value 0.
>
> In many places the upstream code eliminates that layer
> of indirection where possible. So enumerated constants
> are assigned specific values that match what the hardware
> uses.
>

Looking at these again, I realised this patch is indeed wrong...
The status values are different on v2 and v3+. I guess the correct
approach here would be to use an inline function and pick the correct
status opcode, like how it's handled for register definitions.

Regards,
Sireesh

> -Alex
>
> >   };
> >   
> >   /** enum ipa_status_exception - status element exception type */
> > 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support
  2021-10-13 22:28   ` Alex Elder
@ 2021-10-18 16:16     ` Sireesh Kodali
  0 siblings, 0 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-10-18 16:16 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 3:58 AM IST, Alex Elder wrote:
> On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> > From: Vladimir Lypak <vladimir.lypak@gmail.com>
> > 
> > IPA v2.x is 32 bit. Having an IPA_TABLE_ENTRY_SIZE macro makes it
> > easier to support both 32-bit and 64-bit IPA versions.
>
> This looks reasonable. At this point filter/route tables aren't
> really used, so this is a simple fix. You use IPA_IS_64BIT()
> here, but it isn't defined until patch 7, which I expect is a
> build problem.

Oof, I probably messed this up while re-ordering the commits... will fix

Regards,
Sireesh
>
> -Alex
>
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_qmi.c   | 10 ++++++----
> >   drivers/net/ipa/ipa_table.c | 29 +++++++++++++----------------
> >   drivers/net/ipa/ipa_table.h |  4 ++++
> >   3 files changed, 23 insertions(+), 20 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
> > index 90f3aec55b36..7e2fe701cc4d 100644
> > --- a/drivers/net/ipa/ipa_qmi.c
> > +++ b/drivers/net/ipa/ipa_qmi.c
> > @@ -308,12 +308,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE);
> >   	req.v4_route_tbl_info_valid = 1;
> >   	req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset;
> > -	req.v4_route_tbl_info.count = mem->size / sizeof(__le64);
> > +	req.v4_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE);
> >   	req.v6_route_tbl_info_valid = 1;
> >   	req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset;
> > -	req.v6_route_tbl_info.count = mem->size / sizeof(__le64);
> > +	req.v6_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER);
> >   	req.v4_filter_tbl_start_valid = 1;
> > @@ -352,7 +352,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
> >   		req.v4_hash_route_tbl_info_valid = 1;
> >   		req.v4_hash_route_tbl_info.start =
> >   				ipa->mem_offset + mem->offset;
> > -		req.v4_hash_route_tbl_info.count = mem->size / sizeof(__le64);
> > +		req.v4_hash_route_tbl_info.count =
> > +				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   	}
> >   
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE_HASHED);
> > @@ -360,7 +361,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
> >   		req.v6_hash_route_tbl_info_valid = 1;
> >   		req.v6_hash_route_tbl_info.start =
> >   			ipa->mem_offset + mem->offset;
> > -		req.v6_hash_route_tbl_info.count = mem->size / sizeof(__le64);
> > +		req.v6_hash_route_tbl_info.count =
> > +				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   	}
> >   
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER_HASHED);
> > diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
> > index 1da334f54944..96c467c80a2e 100644
> > --- a/drivers/net/ipa/ipa_table.c
> > +++ b/drivers/net/ipa/ipa_table.c
> > @@ -118,7 +118,8 @@
> >    * 32-bit all-zero rule list terminator.  The "zero rule" is simply an
> >    * all-zero rule followed by the list terminator.
> >    */
> > -#define IPA_ZERO_RULE_SIZE		(2 * sizeof(__le32))
> > +#define IPA_ZERO_RULE_SIZE(version) \
> > +	 (IPA_IS_64BIT(version) ? 2 * sizeof(__le32) : sizeof(__le32))
> >   
> >   /* Check things that can be validated at build time. */
> >   static void ipa_table_validate_build(void)
> > @@ -132,12 +133,6 @@ static void ipa_table_validate_build(void)
> >   	 */
> >   	BUILD_BUG_ON(sizeof(dma_addr_t) > sizeof(__le64));
> >   
> > -	/* A "zero rule" is used to represent no filtering or no routing.
> > -	 * It is a 64-bit block of zeroed memory.  Code in ipa_table_init()
> > -	 * assumes that it can be written using a pointer to __le64.
> > -	 */
> > -	BUILD_BUG_ON(IPA_ZERO_RULE_SIZE != sizeof(__le64));
> > -
> >   	/* Impose a practical limit on the number of routes */
> >   	BUILD_BUG_ON(IPA_ROUTE_COUNT_MAX > 32);
> >   	/* The modem must be allotted at least one route table entry */
> > @@ -236,7 +231,7 @@ static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
> >   	/* Skip over the zero rule and possibly the filter mask */
> >   	skip = filter_mask ? 1 : 2;
> >   
> > -	return ipa->table_addr + skip * sizeof(*ipa->table_virt);
> > +	return ipa->table_addr + skip * IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   }
> >   
> >   static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
> > @@ -255,8 +250,8 @@ static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
> >   	if (filter)
> >   		first++;	/* skip over bitmap */
> >   
> > -	offset = mem->offset + first * sizeof(__le64);
> > -	size = count * sizeof(__le64);
> > +	offset = mem->offset + first * IPA_TABLE_ENTRY_SIZE(ipa->version);
> > +	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   	addr = ipa_table_addr(ipa, false, count);
> >   
> >   	ipa_cmd_dma_shared_mem_add(trans, offset, size, addr, true);
> > @@ -434,11 +429,11 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
> >   		count = 1 + hweight32(ipa->filter_map);
> >   		hash_count = hash_mem->size ? count : 0;
> >   	} else {
> > -		count = mem->size / sizeof(__le64);
> > -		hash_count = hash_mem->size / sizeof(__le64);
> > +		count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> > +		hash_count = hash_mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   	}
> > -	size = count * sizeof(__le64);
> > -	hash_size = hash_count * sizeof(__le64);
> > +	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
> > +	hash_size = hash_count * IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   
> >   	addr = ipa_table_addr(ipa, filter, count);
> >   	hash_addr = ipa_table_addr(ipa, filter, hash_count);
> > @@ -621,7 +616,8 @@ int ipa_table_init(struct ipa *ipa)
> >   	 * by dma_alloc_coherent() is guaranteed to be a power-of-2 number
> >   	 * of pages, which satisfies the rule alignment requirement.
> >   	 */
> > -	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
> > +	size = IPA_ZERO_RULE_SIZE(ipa->version) +
> > +	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   	virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
> >   	if (!virt)
> >   		return -ENOMEM;
> > @@ -653,7 +649,8 @@ void ipa_table_exit(struct ipa *ipa)
> >   	struct device *dev = &ipa->pdev->dev;
> >   	size_t size;
> >   
> > -	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
> > +	size = IPA_ZERO_RULE_SIZE(ipa->version) +
> > +	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   
> >   	dma_free_coherent(dev, size, ipa->table_virt, ipa->table_addr);
> >   	ipa->table_addr = 0;
> > diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
> > index b6a9a0d79d68..78a168ce6558 100644
> > --- a/drivers/net/ipa/ipa_table.h
> > +++ b/drivers/net/ipa/ipa_table.h
> > @@ -10,6 +10,10 @@
> >   
> >   struct ipa;
> >   
> > +/* The size of a filter or route table entry */
> > +#define IPA_TABLE_ENTRY_SIZE(version)	\
> > +	(IPA_IS_64BIT(version) ? sizeof(__le64) : sizeof(__le32))
> > +
> >   /* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
> >   #define IPA_FILTER_COUNT_MAX	14
> >   
> > 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface
  2021-10-13 22:29   ` Alex Elder
@ 2021-10-18 16:45     ` Sireesh Kodali
  0 siblings, 0 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-10-18 16:45 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 3:59 AM IST, Alex Elder wrote:
> On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> > From: Vladimir Lypak <vladimir.lypak@gmail.com>
> > 
> > Establish callback-based interface to abstract GSI and BAM DMA differences.
> > Interface is based on prototypes from ipa_dma.h (old gsi.h). Callbacks
> > are stored in struct ipa_dma (old struct gsi) and assigned in gsi_init.
>
> This is interesting and seems to have been fairly easy to abstract
> this way. The patch is actually pretty straightforward, much more
> so than I would have expected. I think I'll have more to say about
> how to separate GSI from BAM in the future, but not today.
>
> -Alex

Yes, the GSI code was fairly easy to abstract. Thankfully, the dmaengine API
maps very nicely onto the existing GSI API. I'm not sure if this was
intentional or accidental, but it's nice either way.

Perhaps in the future it might make sense to move the GSI code into a separate
dmaengine driver as well? In practice, that would mean the IPA driver simply
calls into the dmaengine API with no knowledge of the underlying transport,
removing the need for the BAM/GSI abstraction layer, since the abstraction
would be handled by dmaengine. I'm not sure how easy that would be, though.

Regards,
Sireesh
>
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/gsi.c          |  30 ++++++--
> >   drivers/net/ipa/ipa_dma.h      | 133 ++++++++++++++++++++++-----------
> >   drivers/net/ipa/ipa_endpoint.c |  28 +++----
> >   drivers/net/ipa/ipa_main.c     |  18 ++---
> >   drivers/net/ipa/ipa_power.c    |   4 +-
> >   drivers/net/ipa/ipa_trans.c    |   2 +-
> >   6 files changed, 138 insertions(+), 77 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
> > index 74ae0d07f859..39d9ca620a9f 100644
> > --- a/drivers/net/ipa/gsi.c
> > +++ b/drivers/net/ipa/gsi.c
> > @@ -99,6 +99,10 @@
> >   
> >   #define GSI_ISR_MAX_ITER		50	/* Detect interrupt storms */
> >   
> > +static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id);
> > +static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id);
> > +static void gsi_exit(struct ipa_dma *gsi);
> > +
> >   /* An entry in an event ring */
> >   struct gsi_event {
> >   	__le64 xfer_ptr;
> > @@ -869,7 +873,7 @@ static int __gsi_channel_start(struct ipa_channel *channel, bool resume)
> >   }
> >   
> >   /* Start an allocated GSI channel */
> > -int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
> > +static int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
> >   {
> >   	struct ipa_channel *channel = &gsi->channel[channel_id];
> >   	int ret;
> > @@ -924,7 +928,7 @@ static int __gsi_channel_stop(struct ipa_channel *channel, bool suspend)
> >   }
> >   
> >   /* Stop a started channel */
> > -int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
> > +static int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
> >   {
> >   	struct ipa_channel *channel = &gsi->channel[channel_id];
> >   	int ret;
> > @@ -941,7 +945,7 @@ int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
> >   }
> >   
> >   /* Reset and reconfigure a channel, (possibly) enabling the doorbell engine */
> > -void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
> > +static void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
> >   {
> >   	struct ipa_channel *channel = &gsi->channel[channel_id];
> >   
> > @@ -1931,7 +1935,7 @@ int gsi_setup(struct ipa_dma *gsi)
> >   }
> >   
> >   /* Inverse of gsi_setup() */
> > -void gsi_teardown(struct ipa_dma *gsi)
> > +static void gsi_teardown(struct ipa_dma *gsi)
> >   {
> >   	gsi_channel_teardown(gsi);
> >   	gsi_irq_teardown(gsi);
> > @@ -2194,6 +2198,18 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
> >   
> >   	gsi->dev = dev;
> >   	gsi->version = version;
> > +	gsi->setup = gsi_setup;
> > +	gsi->teardown = gsi_teardown;
> > +	gsi->exit = gsi_exit;
> > +	gsi->suspend = gsi_suspend;
> > +	gsi->resume = gsi_resume;
> > +	gsi->channel_tre_max = gsi_channel_tre_max;
> > +	gsi->channel_trans_tre_max = gsi_channel_trans_tre_max;
> > +	gsi->channel_start = gsi_channel_start;
> > +	gsi->channel_stop = gsi_channel_stop;
> > +	gsi->channel_reset = gsi_channel_reset;
> > +	gsi->channel_suspend = gsi_channel_suspend;
> > +	gsi->channel_resume = gsi_channel_resume;
> >   
> >   	/* GSI uses NAPI on all channels.  Create a dummy network device
> >   	 * for the channel NAPI contexts to be associated with.
> > @@ -2250,7 +2266,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
> >   }
> >   
> >   /* Inverse of gsi_init() */
> > -void gsi_exit(struct ipa_dma *gsi)
> > +static void gsi_exit(struct ipa_dma *gsi)
> >   {
> >   	mutex_destroy(&gsi->mutex);
> >   	gsi_channel_exit(gsi);
> > @@ -2277,7 +2293,7 @@ void gsi_exit(struct ipa_dma *gsi)
> >    * substantially reduce pool memory requirements.  The number we
> >    * reduce it by matches the number added in ipa_trans_pool_init().
> >    */
> > -u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
> > +static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
> >   {
> >   	struct ipa_channel *channel = &gsi->channel[channel_id];
> >   
> > @@ -2286,7 +2302,7 @@ u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
> >   }
> >   
> >   /* Returns the maximum number of TREs in a single transaction for a channel */
> > -u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
> > +static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
> >   {
> >   	struct ipa_channel *channel = &gsi->channel[channel_id];
> >   
> > diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
> > index d053929ca3e3..1a23e6ac5785 100644
> > --- a/drivers/net/ipa/ipa_dma.h
> > +++ b/drivers/net/ipa/ipa_dma.h
> > @@ -163,64 +163,96 @@ struct ipa_dma {
> >   	struct completion completion;	/* for global EE commands */
> >   	int result;			/* Negative errno (generic commands) */
> >   	struct mutex mutex;		/* protects commands, programming */
> > +
> > +	int (*setup)(struct ipa_dma *dma_subsys);
> > +	void (*teardown)(struct ipa_dma *dma_subsys);
> > +	void (*exit)(struct ipa_dma *dma_subsys);
> > +	void (*suspend)(struct ipa_dma *dma_subsys);
> > +	void (*resume)(struct ipa_dma *dma_subsys);
> > +	u32 (*channel_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	u32 (*channel_trans_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	int (*channel_start)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	int (*channel_stop)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	void (*channel_reset)(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
> > +	int (*channel_suspend)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	int (*channel_resume)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	void (*trans_commit)(struct ipa_trans *trans, bool ring_db);
> >   };
> >   
> >   /**
> > - * gsi_setup() - Set up the GSI subsystem
> > - * @gsi:	Address of GSI structure embedded in an IPA structure
> > + * ipa_dma_setup() - Set up the DMA subsystem
> > + * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
> >    *
> >    * Return:	0 if successful, or a negative error code
> >    *
> > - * Performs initialization that must wait until the GSI hardware is
> > + * Performs initialization that must wait until the GSI/BAM hardware is
> >    * ready (including firmware loaded).
> >    */
> > -int gsi_setup(struct ipa_dma *dma_subsys);
> > +static inline int ipa_dma_setup(struct ipa_dma *dma_subsys)
> > +{
> > +	return dma_subsys->setup(dma_subsys);
> > +}
> >   
> >   /**
> > - * gsi_teardown() - Tear down GSI subsystem
> > - * @gsi:	GSI address previously passed to a successful gsi_setup() call
> > + * ipa_dma_teardown() - Tear down DMA subsystem
> > + * @dma_subsys:	ipa_dma address previously passed to a successful ipa_dma_setup() call
> >    */
> > -void gsi_teardown(struct ipa_dma *dma_subsys);
> > +static inline void ipa_dma_teardown(struct ipa_dma *dma_subsys)
> > +{
> > +	dma_subsys->teardown(dma_subsys);
> > +}
> >   
> >   /**
> > - * gsi_channel_tre_max() - Channel maximum number of in-flight TREs
> > - * @gsi:	GSI pointer
> > + * ipa_channel_tre_max() - Channel maximum number of in-flight TREs
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel whose limit is to be returned
> >    *
> >    * Return:	 The maximum number of TREs oustanding on the channel
> >    */
> > -u32 gsi_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline u32 ipa_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_tre_max(dma_subsys, channel_id);
> > +}
> >   
> >   /**
> > - * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction
> > - * @gsi:	GSI pointer
> > + * ipa_channel_trans_tre_max() - Maximum TREs in a single transaction
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel whose limit is to be returned
> >    *
> >    * Return:	 The maximum TRE count per transaction on the channel
> >    */
> > -u32 gsi_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline u32 ipa_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_trans_tre_max(dma_subsys, channel_id);
> > +}
> >   
> >   /**
> > - * gsi_channel_start() - Start an allocated GSI channel
> > - * @gsi:	GSI pointer
> > + * ipa_channel_start() - Start an allocated DMA channel
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel to start
> >    *
> >    * Return:	0 if successful, or a negative error code
> >    */
> > -int gsi_channel_start(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline int ipa_channel_start(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_start(dma_subsys, channel_id);
> > +}
> >   
> >   /**
> > - * gsi_channel_stop() - Stop a started GSI channel
> > - * @gsi:	GSI pointer returned by gsi_setup()
> > + * ipa_channel_stop() - Stop a started DMA channel
> > + * @dma_subsys:	pointer to ipa_dma structure returned by ipa_dma_setup()
> >    * @channel_id:	Channel to stop
> >    *
> >    * Return:	0 if successful, or a negative error code
> >    */
> > -int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline int ipa_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_stop(dma_subsys, channel_id);
> > +}
> >   
> >   /**
> > - * gsi_channel_reset() - Reset an allocated GSI channel
> > - * @gsi:	GSI pointer
> > + * ipa_channel_reset() - Reset an allocated DMA channel
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel to be reset
> >    * @doorbell:	Whether to (possibly) enable the doorbell engine
> >    *
> > @@ -230,41 +262,49 @@ int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
> >    * GSI hardware relinquishes ownership of all pending receive buffer
> >    * transactions and they will complete with their cancelled flag set.
> >    */
> > -void gsi_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
> > +static inline void ipa_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell)
> > +{
> > +	 dma_subsys->channel_reset(dma_subsys, channel_id, doorbell);
> > +}
> >   
> > -/**
> > - * gsi_suspend() - Prepare the GSI subsystem for suspend
> > - * @gsi:	GSI pointer
> > - */
> > -void gsi_suspend(struct ipa_dma *dma_subsys);
> >   
> >   /**
> > - * gsi_resume() - Resume the GSI subsystem following suspend
> > - * @gsi:	GSI pointer
> > - */
> > -void gsi_resume(struct ipa_dma *dma_subsys);
> > -
> > -/**
> > - * gsi_channel_suspend() - Suspend a GSI channel
> > - * @gsi:	GSI pointer
> > + * ipa_channel_suspend() - Suspend a DMA channel
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel to suspend
> >    *
> >    * For IPA v4.0+, suspend is implemented by stopping the channel.
> >    */
> > -int gsi_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline int ipa_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_suspend(dma_subsys, channel_id);
> > +}
> >   
> >   /**
> > - * gsi_channel_resume() - Resume a suspended GSI channel
> > - * @gsi:	GSI pointer
> > + * ipa_channel_resume() - Resume a suspended DMA channel
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel to resume
> >    *
> >    * For IPA v4.0+, the stopped channel is started again.
> >    */
> > -int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline int ipa_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_resume(dma_subsys, channel_id);
> > +}
> > +
> > +static inline void ipa_dma_suspend(struct ipa_dma *dma_subsys)
> > +{
> > +	return dma_subsys->suspend(dma_subsys);
> > +}
> > +
> > +static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
> > +{
> > +	return dma_subsys->resume(dma_subsys);
> > +}
> >   
> >   /**
> > - * gsi_init() - Initialize the GSI subsystem
> > - * @gsi:	Address of GSI structure embedded in an IPA structure
> > + * ipa_dma_init() - Initialize the GSI subsystem
> > + * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
> >    * @pdev:	IPA platform device
> >    * @version:	IPA hardware version (implies GSI version)
> >    * @count:	Number of entries in the configuration data array
> > @@ -275,14 +315,19 @@ int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
> >    * Early stage initialization of the GSI subsystem, performing tasks
> >    * that can be done before the GSI hardware is ready to use.
> >    */
> > +
> >   int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
> >   	     enum ipa_version version, u32 count,
> >   	     const struct ipa_gsi_endpoint_data *data);
> >   
> >   /**
> > - * gsi_exit() - Exit the GSI subsystem
> > - * @gsi:	GSI address previously passed to a successful gsi_init() call
> > + * ipa_dma_exit() - Exit the DMA subsystem
> > + * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
> >    */
> > -void gsi_exit(struct ipa_dma *dma_subsys);
> > +static inline void ipa_dma_exit(struct ipa_dma *dma_subsys)
> > +{
> > +	if (dma_subsys)
> > +		dma_subsys->exit(dma_subsys);
> > +}
> >   
> >   #endif /* _GSI_H_ */
> > diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> > index 90d6880e8a25..dbef549c4537 100644
> > --- a/drivers/net/ipa/ipa_endpoint.c
> > +++ b/drivers/net/ipa/ipa_endpoint.c
> > @@ -1091,7 +1091,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
> >   	 * try replenishing again if our backlog is *all* available TREs.
> >   	 */
> >   	gsi = &endpoint->ipa->dma_subsys;
> > -	if (backlog == gsi_channel_tre_max(gsi, endpoint->channel_id))
> > +	if (backlog == ipa_channel_tre_max(gsi, endpoint->channel_id))
> >   		schedule_delayed_work(&endpoint->replenish_work,
> >   				      msecs_to_jiffies(1));
> >   }
> > @@ -1107,7 +1107,7 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
> >   		atomic_add(saved, &endpoint->replenish_backlog);
> >   
> >   	/* Start replenishing if hardware currently has no buffers */
> > -	max_backlog = gsi_channel_tre_max(gsi, endpoint->channel_id);
> > +	max_backlog = ipa_channel_tre_max(gsi, endpoint->channel_id);
> >   	if (atomic_read(&endpoint->replenish_backlog) == max_backlog)
> >   		ipa_endpoint_replenish(endpoint, false);
> >   }
> > @@ -1432,13 +1432,13 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
> >   	 * active.  We'll re-enable the doorbell (if appropriate) when
> >   	 * we reset again below.
> >   	 */
> > -	gsi_channel_reset(gsi, endpoint->channel_id, false);
> > +	ipa_channel_reset(gsi, endpoint->channel_id, false);
> >   
> >   	/* Make sure the channel isn't suspended */
> >   	suspended = ipa_endpoint_program_suspend(endpoint, false);
> >   
> >   	/* Start channel and do a 1 byte read */
> > -	ret = gsi_channel_start(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_start(gsi, endpoint->channel_id);
> >   	if (ret)
> >   		goto out_suspend_again;
> >   
> > @@ -1461,7 +1461,7 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
> >   
> >   	gsi_trans_read_byte_done(gsi, endpoint->channel_id);
> >   
> > -	ret = gsi_channel_stop(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_stop(gsi, endpoint->channel_id);
> >   	if (ret)
> >   		goto out_suspend_again;
> >   
> > @@ -1470,14 +1470,14 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
> >   	 * complete the channel reset sequence.  Finish by suspending the
> >   	 * channel again (if necessary).
> >   	 */
> > -	gsi_channel_reset(gsi, endpoint->channel_id, true);
> > +	ipa_channel_reset(gsi, endpoint->channel_id, true);
> >   
> >   	usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
> >   
> >   	goto out_suspend_again;
> >   
> >   err_endpoint_stop:
> > -	(void)gsi_channel_stop(gsi, endpoint->channel_id);
> > +	(void)ipa_channel_stop(gsi, endpoint->channel_id);
> >   out_suspend_again:
> >   	if (suspended)
> >   		(void)ipa_endpoint_program_suspend(endpoint, true);
> > @@ -1504,7 +1504,7 @@ static void ipa_endpoint_reset(struct ipa_endpoint *endpoint)
> >   	if (special && ipa_endpoint_aggr_active(endpoint))
> >   		ret = ipa_endpoint_reset_rx_aggr(endpoint);
> >   	else
> > -		gsi_channel_reset(&ipa->dma_subsys, channel_id, true);
> > +		ipa_channel_reset(&ipa->dma_subsys, channel_id, true);
> >   
> >   	if (ret)
> >   		dev_err(&ipa->pdev->dev,
> > @@ -1537,7 +1537,7 @@ int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
> >   	struct ipa_dma *gsi = &ipa->dma_subsys;
> >   	int ret;
> >   
> > -	ret = gsi_channel_start(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_start(gsi, endpoint->channel_id);
> >   	if (ret) {
> >   		dev_err(&ipa->pdev->dev,
> >   			"error %d starting %cX channel %u for endpoint %u\n",
> > @@ -1576,7 +1576,7 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
> >   	}
> >   
> >   	/* Note that if stop fails, the channel's state is not well-defined */
> > -	ret = gsi_channel_stop(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_stop(gsi, endpoint->channel_id);
> >   	if (ret)
> >   		dev_err(&ipa->pdev->dev,
> >   			"error %d attempting to stop endpoint %u\n", ret,
> > @@ -1598,7 +1598,7 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
> >   		(void)ipa_endpoint_program_suspend(endpoint, true);
> >   	}
> >   
> > -	ret = gsi_channel_suspend(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_suspend(gsi, endpoint->channel_id);
> >   	if (ret)
> >   		dev_err(dev, "error %d suspending channel %u\n", ret,
> >   			endpoint->channel_id);
> > @@ -1617,7 +1617,7 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
> >   	if (!endpoint->toward_ipa)
> >   		(void)ipa_endpoint_program_suspend(endpoint, false);
> >   
> > -	ret = gsi_channel_resume(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_resume(gsi, endpoint->channel_id);
> >   	if (ret)
> >   		dev_err(dev, "error %d resuming channel %u\n", ret,
> >   			endpoint->channel_id);
> > @@ -1660,14 +1660,14 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
> >   	if (endpoint->ee_id != GSI_EE_AP)
> >   		return;
> >   
> > -	endpoint->trans_tre_max = gsi_channel_trans_tre_max(gsi, channel_id);
> > +	endpoint->trans_tre_max = ipa_channel_trans_tre_max(gsi, channel_id);
> >   	if (!endpoint->toward_ipa) {
> >   		/* RX transactions require a single TRE, so the maximum
> >   		 * backlog is the same as the maximum outstanding TREs.
> >   		 */
> >   		endpoint->replenish_enabled = false;
> >   		atomic_set(&endpoint->replenish_saved,
> > -			   gsi_channel_tre_max(gsi, endpoint->channel_id));
> > +			   ipa_channel_tre_max(gsi, endpoint->channel_id));
> >   		atomic_set(&endpoint->replenish_backlog, 0);
> >   		INIT_DELAYED_WORK(&endpoint->replenish_work,
> >   				  ipa_endpoint_replenish_work);
> > diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> > index 026f5555fa7d..6ab691ff1faf 100644
> > --- a/drivers/net/ipa/ipa_main.c
> > +++ b/drivers/net/ipa/ipa_main.c
> > @@ -98,13 +98,13 @@ int ipa_setup(struct ipa *ipa)
> >   	struct device *dev = &ipa->pdev->dev;
> >   	int ret;
> >   
> > -	ret = gsi_setup(&ipa->dma_subsys);
> > +	ret = ipa_dma_setup(&ipa->dma_subsys);
> >   	if (ret)
> >   		return ret;
> >   
> >   	ret = ipa_power_setup(ipa);
> >   	if (ret)
> > -		goto err_gsi_teardown;
> > +		goto err_dma_teardown;
> >   
> >   	ipa_endpoint_setup(ipa);
> >   
> > @@ -153,8 +153,8 @@ int ipa_setup(struct ipa *ipa)
> >   err_endpoint_teardown:
> >   	ipa_endpoint_teardown(ipa);
> >   	ipa_power_teardown(ipa);
> > -err_gsi_teardown:
> > -	gsi_teardown(&ipa->dma_subsys);
> > +err_dma_teardown:
> > +	ipa_dma_teardown(&ipa->dma_subsys);
> >   
> >   	return ret;
> >   }
> > @@ -179,7 +179,7 @@ static void ipa_teardown(struct ipa *ipa)
> >   	ipa_endpoint_disable_one(command_endpoint);
> >   	ipa_endpoint_teardown(ipa);
> >   	ipa_power_teardown(ipa);
> > -	gsi_teardown(&ipa->dma_subsys);
> > +	ipa_dma_teardown(&ipa->dma_subsys);
> >   }
> >   
> >   /* Configure bus access behavior for IPA components */
> > @@ -726,7 +726,7 @@ static int ipa_probe(struct platform_device *pdev)
> >   					    data->endpoint_data);
> >   	if (!ipa->filter_map) {
> >   		ret = -EINVAL;
> > -		goto err_gsi_exit;
> > +		goto err_dma_exit;
> >   	}
> >   
> >   	ret = ipa_table_init(ipa);
> > @@ -780,8 +780,8 @@ static int ipa_probe(struct platform_device *pdev)
> >   	ipa_table_exit(ipa);
> >   err_endpoint_exit:
> >   	ipa_endpoint_exit(ipa);
> > -err_gsi_exit:
> > -	gsi_exit(&ipa->dma_subsys);
> > +err_dma_exit:
> > +	ipa_dma_exit(&ipa->dma_subsys);
> >   err_mem_exit:
> >   	ipa_mem_exit(ipa);
> >   err_reg_exit:
> > @@ -824,7 +824,7 @@ static int ipa_remove(struct platform_device *pdev)
> >   	ipa_modem_exit(ipa);
> >   	ipa_table_exit(ipa);
> >   	ipa_endpoint_exit(ipa);
> > -	gsi_exit(&ipa->dma_subsys);
> > +	ipa_dma_exit(&ipa->dma_subsys);
> >   	ipa_mem_exit(ipa);
> >   	ipa_reg_exit(ipa);
> >   	kfree(ipa);
> > diff --git a/drivers/net/ipa/ipa_power.c b/drivers/net/ipa/ipa_power.c
> > index b1c6c0fcb654..096cfb8ae9a5 100644
> > --- a/drivers/net/ipa/ipa_power.c
> > +++ b/drivers/net/ipa/ipa_power.c
> > @@ -243,7 +243,7 @@ static int ipa_runtime_suspend(struct device *dev)
> >   	if (ipa->setup_complete) {
> >   		__clear_bit(IPA_POWER_FLAG_RESUMED, ipa->power->flags);
> >   		ipa_endpoint_suspend(ipa);
> > -		gsi_suspend(&ipa->gsi);
> > +		ipa_dma_suspend(&ipa->dma_subsys);
> >   	}
> >   
> >   	return ipa_power_disable(ipa);
> > @@ -260,7 +260,7 @@ static int ipa_runtime_resume(struct device *dev)
> >   
> >   	/* Endpoints aren't usable until setup is complete */
> >   	if (ipa->setup_complete) {
> > -		gsi_resume(&ipa->gsi);
> > +		ipa_dma_resume(&ipa->dma_subsys);
> >   		ipa_endpoint_resume(ipa);
> >   	}
> >   
> > diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
> > index b87936b18770..22755f3ce3da 100644
> > --- a/drivers/net/ipa/ipa_trans.c
> > +++ b/drivers/net/ipa/ipa_trans.c
> > @@ -747,7 +747,7 @@ int ipa_channel_trans_init(struct ipa_dma *gsi, u32 channel_id)
> >   	 * for transactions (including transaction structures) based on
> >   	 * this maximum number.
> >   	 */
> > -	tre_max = gsi_channel_tre_max(channel->dma_subsys, channel_id);
> > +	tre_max = ipa_channel_tre_max(channel->dma_subsys, channel_id);
> >   
> >   	/* Transactions are allocated one at a time. */
> >   	ret = ipa_trans_pool_init(&trans_info->pool, sizeof(struct ipa_trans),
> > 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait
  2021-10-13 22:29   ` Alex Elder
@ 2021-10-18 17:02     ` Sireesh Kodali
  0 siblings, 0 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-10-18 17:02 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 3:59 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > From: Vladimir Lypak <vladimir.lypak@gmail.com>
> > 
> > Sometimes the pipeline clear fails, and when it does, having a hang in
> > kernel is ugly. The timeout gives us a nice error message. Note that
> > this shouldn't actually hang, ever. It only hangs if there is a mistake
> > in the config, and the timeout is only useful when debugging.
> > 
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
>
> This is actually an item on my to-do list. All of the waits
> for GSI completions should have timeouts. The only reason it
> hasn't been implemented already is that I would like to be sure
> all paths that could have a timeout actually have a reasonable
> recovery.
>
> I'd say an error message after a timeout is better than a hung
> task panic, but if this does time out, I'm not sure the state
> of the hardware is well-defined.

Early on while wiring up BAM support, I hadn't quite figured out the
IPA init sequence and some of the BAM opcode handling. This caused the
driver to hang when it reached the completion. Since this particular
completion was waited on just before the probe function returned, it
hung the kernel thread and prevented the module from being
`modprobe -r`ed.

Since then, I've properly fixed the BAM code and the completion always
returns, making the patch fairly useless for now. Since it's only for
debugging, I'll just drop this patch. I think the only error handling we
can do at this stage is to return -EIO and have the caller handle
de-initing everything.

Regards,
Sireesh

>
> -Alex
>
> > ---
> >   drivers/net/ipa/ipa_cmd.c | 5 ++++-
> >   1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> > index 3db9e94e484f..0bdbc331fa78 100644
> > --- a/drivers/net/ipa/ipa_cmd.c
> > +++ b/drivers/net/ipa/ipa_cmd.c
> > @@ -658,7 +658,10 @@ u32 ipa_cmd_pipeline_clear_count(void)
> >   
> >   void ipa_cmd_pipeline_clear_wait(struct ipa *ipa)
> >   {
> > -	wait_for_completion(&ipa->completion);
> > +	unsigned long timeout_jiffies = msecs_to_jiffies(1000);
> > +
> > +	if (!wait_for_completion_timeout(&ipa->completion, timeout_jiffies))
> > +		dev_err(&ipa->pdev->dev, "%s time out\n", __func__);
> >   }
> >   
> >   void ipa_cmd_pipeline_clear(struct ipa *ipa)
> > 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions
  2021-10-13 22:29   ` Alex Elder
@ 2021-10-18 17:25     ` Sireesh Kodali
  0 siblings, 0 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-10-18 17:25 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 3:59 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > IPA v2.x is an older version of the IPA hardware, and is 32 bit.
> > 
> > Most of the registers were just shifted in newer IPA versions, but
> > the register fields have remained the same across IPA versions. This
> > means that only the register addresses needed to be added to the driver.
> > 
> > To handle the different IPA register addresses, static inline functions
> > have been defined that return the correct register address.
>
> Thank you for following the existing convention in implementing these.
> Even if it isn't perfect, it's good to remain consistent.
>
> You use:
> if (version <= IPA_VERSION_2_6L)
> but then also define and use
> if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> And the only new IPA versions are 2_0, 2_5, and 2_6L.
>
> I would stick with the former and don't define IPA_VERSION_RANGE().
> Nothing less than IPA v2.0 (or 3.0 currently) is supported, so
> "there is no version less than that."

Makes sense, thanks!
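So, as a sketch of what the simplified helpers would look like (enum
ordering mirrors the patch, with the v2.x values added before v3.0):

```c
#include <assert.h>

/* Mirrors the patch's enum ordering: new v2.x entries come first. */
enum ipa_version {
	IPA_VERSION_2_0,
	IPA_VERSION_2_5,
	IPA_VERSION_2_6L,
	IPA_VERSION_3_0,
};

/* Since nothing below v2.0 is supported, a plain upper-bound check
 * replaces IPA_VERSION_RANGE(version, 2_0, 2_6L).
 */
static inline unsigned int ipa_reg_route_offset(enum ipa_version version)
{
	if (version <= IPA_VERSION_2_6L)
		return 0x44;

	return 0x48;
}
```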
>
> Oh, and I noticed some local variables defined without the
> "reverse Christmas tree order" which, like it or not, is the
> convention used consistently throughout this driver.
>

I wasn't aware of this, it should be easy enough to fix.
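For anyone else following along: "reverse Christmas tree" means local
declarations ordered longest line first. A trivial illustration (the
names here are made up, not from the driver):

```c
#include <assert.h>

/* Locals declared longest-first, per the driver's
 * "reverse Christmas tree" convention.
 */
static unsigned int endpoint_status_offset(unsigned int endpoint_id)
{
	unsigned int per_endpoint_stride = 0x70;
	unsigned int base_offset = 0x840;

	return base_offset + per_endpoint_stride * endpoint_id;
}
```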

> I might quibble with a few other minor things in these definitions
> but overall this looks fine.
>

Thanks,
Sireesh
> -Alex
>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_cmd.c      |   3 +-
> >   drivers/net/ipa/ipa_endpoint.c |  33 +++---
> >   drivers/net/ipa/ipa_main.c     |   8 +-
> >   drivers/net/ipa/ipa_mem.c      |   5 +-
> >   drivers/net/ipa/ipa_reg.h      | 184 +++++++++++++++++++++++++++------
> >   drivers/net/ipa/ipa_version.h  |  12 +++
> >   6 files changed, 195 insertions(+), 50 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> > index 0bdbc331fa78..7a104540dc26 100644
> > --- a/drivers/net/ipa/ipa_cmd.c
> > +++ b/drivers/net/ipa/ipa_cmd.c
> > @@ -326,7 +326,8 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa)
> >   	 * worst case (highest endpoint number) offset of that endpoint
> >   	 * fits in the register write command field(s) that must hold it.
> >   	 */
> > -	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT - 1);
> > +	offset = ipa_reg_endp_status_n_offset(ipa->version,
> > +			IPA_ENDPOINT_COUNT - 1);
> >   	name = "maximal endpoint status";
> >   	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
> >   		return false;
> > diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> > index dbef549c4537..7d3ab61cd890 100644
> > --- a/drivers/net/ipa/ipa_endpoint.c
> > +++ b/drivers/net/ipa/ipa_endpoint.c
> > @@ -242,8 +242,8 @@ static struct ipa_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint,
> >   static bool
> >   ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_CTRL_N_OFFSET(endpoint->endpoint_id);
> >   	struct ipa *ipa = endpoint->ipa;
> > +	u32 offset = ipa_reg_endp_init_ctrl_n_offset(ipa->version, endpoint->endpoint_id);
> >   	bool state;
> >   	u32 mask;
> >   	u32 val;
> > @@ -410,7 +410,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
> >   		if (!(endpoint->ee_id == GSI_EE_MODEM && endpoint->toward_ipa))
> >   			continue;
> >   
> > -		offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
> > +		offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
> >   
> >   		/* Value written is 0, and all bits are updated.  That
> >   		 * means status is disabled on the endpoint, and as a
> > @@ -431,7 +431,8 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
> >   
> >   static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_CFG_N_OFFSET(endpoint->endpoint_id);
> > +	struct ipa *ipa = endpoint->ipa;
> > +	u32 offset = ipa_reg_endp_init_cfg_n_offset(ipa->version, endpoint->endpoint_id);
> >   	enum ipa_cs_offload_en enabled;
> >   	u32 val = 0;
> >   
> > @@ -523,8 +524,8 @@ ipa_qmap_header_size(enum ipa_version version, struct ipa_endpoint *endpoint)
> >    */
> >   static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_HDR_N_OFFSET(endpoint->endpoint_id);
> >   	struct ipa *ipa = endpoint->ipa;
> > +	u32 offset = ipa_reg_endp_init_hdr_n_offset(ipa->version, endpoint->endpoint_id);
> >   	u32 val = 0;
> >   
> >   	if (endpoint->data->qmap) {
> > @@ -565,9 +566,9 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
> >   
> >   static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(endpoint->endpoint_id);
> > -	u32 pad_align = endpoint->data->rx.pad_align;
> >   	struct ipa *ipa = endpoint->ipa;
> > +	u32 offset = ipa_reg_endp_init_hdr_ext_n_offset(ipa->version, endpoint->endpoint_id);
> > +	u32 pad_align = endpoint->data->rx.pad_align;
> >   	u32 val = 0;
> >   
> >   	val |= HDR_ENDIANNESS_FMASK;		/* big endian */
> > @@ -609,6 +610,7 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
> >   
> >   static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
> >   {
> > +	enum ipa_version version = endpoint->ipa->version;
> >   	u32 endpoint_id = endpoint->endpoint_id;
> >   	u32 val = 0;
> >   	u32 offset;
> > @@ -616,7 +618,7 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
> >   	if (endpoint->toward_ipa)
> >   		return;		/* Register not valid for TX endpoints */
> >   
> > -	offset = IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(endpoint_id);
> > +	offset = ipa_reg_endp_init_hdr_metadata_mask_n_offset(version, endpoint_id);
> >   
> >   	/* Note that HDR_ENDIANNESS indicates big endian header fields */
> >   	if (endpoint->data->qmap)
> > @@ -627,7 +629,8 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
> >   
> >   static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_MODE_N_OFFSET(endpoint->endpoint_id);
> > +	enum ipa_version version = endpoint->ipa->version;
> > +	u32 offset = ipa_reg_endp_init_mode_n_offset(version, endpoint->endpoint_id);
> >   	u32 val;
> >   
> >   	if (!endpoint->toward_ipa)
> > @@ -716,8 +719,8 @@ static u32 aggr_sw_eof_active_encoded(enum ipa_version version, bool enabled)
> >   
> >   static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_AGGR_N_OFFSET(endpoint->endpoint_id);
> >   	enum ipa_version version = endpoint->ipa->version;
> > +	u32 offset = ipa_reg_endp_init_aggr_n_offset(version, endpoint->endpoint_id);
> >   	u32 val = 0;
> >   
> >   	if (endpoint->data->aggregation) {
> > @@ -853,7 +856,7 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
> >   	u32 offset;
> >   	u32 val;
> >   
> > -	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(endpoint_id);
> > +	offset = ipa_reg_endp_init_hol_block_timer_n_offset(ipa->version, endpoint_id);
> >   	val = hol_block_timer_val(ipa, microseconds);
> >   	iowrite32(val, ipa->reg_virt + offset);
> >   }
> > @@ -861,12 +864,13 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
> >   static void
> >   ipa_endpoint_init_hol_block_enable(struct ipa_endpoint *endpoint, bool enable)
> >   {
> > +	enum ipa_version version = endpoint->ipa->version;
> >   	u32 endpoint_id = endpoint->endpoint_id;
> >   	u32 offset;
> >   	u32 val;
> >   
> >   	val = enable ? HOL_BLOCK_EN_FMASK : 0;
> > -	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(endpoint_id);
> > +	offset = ipa_reg_endp_init_hol_block_en_n_offset(version, endpoint_id);
> >   	iowrite32(val, endpoint->ipa->reg_virt + offset);
> >   }
> >   
> > @@ -887,7 +891,8 @@ void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa)
> >   
> >   static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(endpoint->endpoint_id);
> > +	enum ipa_version version = endpoint->ipa->version;
> > +	u32 offset = ipa_reg_endp_init_deaggr_n_offset(version, endpoint->endpoint_id);
> >   	u32 val = 0;
> >   
> >   	if (!endpoint->toward_ipa)
> > @@ -979,7 +984,7 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
> >   	u32 val = 0;
> >   	u32 offset;
> >   
> > -	offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
> > +	offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
> >   
> >   	if (endpoint->data->status_enable) {
> >   		val |= STATUS_EN_FMASK;
> > @@ -1384,7 +1389,7 @@ void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id)
> >   	val |= u32_encode_bits(endpoint_id, ROUTE_FRAG_DEF_PIPE_FMASK);
> >   	val |= ROUTE_DEF_RETAIN_HDR_FMASK;
> >   
> > -	iowrite32(val, ipa->reg_virt + IPA_REG_ROUTE_OFFSET);
> > +	iowrite32(val, ipa->reg_virt + ipa_reg_route_offset(ipa->version));
> >   }
> >   
> >   void ipa_endpoint_default_route_clear(struct ipa *ipa)
> > diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> > index 6ab691ff1faf..ba06e3ad554c 100644
> > --- a/drivers/net/ipa/ipa_main.c
> > +++ b/drivers/net/ipa/ipa_main.c
> > @@ -191,7 +191,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
> >   	if (ipa->version < IPA_VERSION_4_0)
> >   		return;
> >   
> > -	val = ioread32(ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
> > +	val = ioread32(ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
> >   
> >   	if (ipa->version == IPA_VERSION_4_0) {
> >   		val &= ~IPA_QMB_SELECT_CONS_EN_FMASK;
> > @@ -206,7 +206,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
> >   	val |= GSI_MULTI_INORDER_RD_DIS_FMASK;
> >   	val |= GSI_MULTI_INORDER_WR_DIS_FMASK;
> >   
> > -	iowrite32(val, ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
> > +	iowrite32(val, ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
> >   }
> >   
> >   /* Configure DDR and (possibly) PCIe max read/write QSB values */
> > @@ -355,7 +355,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
> >   	/* IPA v4.5+ has no backward compatibility register */
> >   	if (version < IPA_VERSION_4_5) {
> >   		val = data->backward_compat;
> > -		iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET);
> > +		iowrite32(val, ipa->reg_virt + ipa_reg_bcr_offset(ipa->version));
> >   	}
> >   
> >   	/* Implement some hardware workarounds */
> > @@ -384,7 +384,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
> >   		/* Configure aggregation timer granularity */
> >   		granularity = ipa_aggr_granularity_val(IPA_AGGR_GRANULARITY);
> >   		val = u32_encode_bits(granularity, AGGR_GRANULARITY_FMASK);
> > -		iowrite32(val, ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET);
> > +		iowrite32(val, ipa->reg_virt + ipa_reg_counter_cfg_offset(ipa->version));
> >   	} else {
> >   		ipa_qtime_config(ipa);
> >   	}
> > diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
> > index 16e5fdd5bd73..8acc88070a6f 100644
> > --- a/drivers/net/ipa/ipa_mem.c
> > +++ b/drivers/net/ipa/ipa_mem.c
> > @@ -113,7 +113,8 @@ int ipa_mem_setup(struct ipa *ipa)
> >   	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
> >   	offset = ipa->mem_offset + mem->offset;
> >   	val = proc_cntxt_base_addr_encoded(ipa->version, offset);
> > -	iowrite32(val, ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET);
> > +	iowrite32(val, ipa->reg_virt +
> > +		  ipa_reg_local_pkt_proc_cntxt_base_offset(ipa->version));
> >   
> >   	return 0;
> >   }
> > @@ -316,7 +317,7 @@ int ipa_mem_config(struct ipa *ipa)
> >   	u32 i;
> >   
> >   	/* Check the advertised location and size of the shared memory area */
> > -	val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
> > +	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
> >   
> >   	/* The fields in the register are in 8 byte units */
> >   	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> > diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
> > index a5b355384d4a..fcae0296cfa4 100644
> > --- a/drivers/net/ipa/ipa_reg.h
> > +++ b/drivers/net/ipa/ipa_reg.h
> > @@ -65,7 +65,17 @@ struct ipa;
> >    * of valid bits for the register.
> >    */
> >   
> > -#define IPA_REG_COMP_CFG_OFFSET				0x0000003c
> > +#define IPA_REG_COMP_SW_RESET_OFFSET		0x0000003c
> > +
> > +#define IPA_REG_V2_ENABLED_PIPES_OFFSET		0x000005dc
> > +
> > +static inline u32 ipa_reg_comp_cfg_offset(enum ipa_version version)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x38;
> > +
> > +	return 0x3c;
> > +}
> >   /* The next field is not supported for IPA v4.0+, not present for IPA v4.5+ */
> >   #define ENABLE_FMASK				GENMASK(0, 0)
> >   /* The next field is present for IPA v4.7+ */
> > @@ -124,6 +134,7 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
> >   	return u32_encode_bits(val, GENMASK(17, 17));
> >   }
> >   
> > +/* This register is only present on IPA v3.0 and above */
> >   #define IPA_REG_CLKON_CFG_OFFSET			0x00000044
> >   #define RX_FMASK				GENMASK(0, 0)
> >   #define PROC_FMASK				GENMASK(1, 1)
> > @@ -164,7 +175,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
> >   /* The next field is present for IPA v4.7+ */
> >   #define DRBIP_FMASK				GENMASK(31, 31)
> >   
> > -#define IPA_REG_ROUTE_OFFSET				0x00000048
> > +static inline u32 ipa_reg_route_offset(enum ipa_version version)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x44;
> > +
> > +	return 0x48;
> > +}
> > +
> >   #define ROUTE_DIS_FMASK				GENMASK(0, 0)
> >   #define ROUTE_DEF_PIPE_FMASK			GENMASK(5, 1)
> >   #define ROUTE_DEF_HDR_TABLE_FMASK		GENMASK(6, 6)
> > @@ -172,7 +190,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
> >   #define ROUTE_FRAG_DEF_PIPE_FMASK		GENMASK(21, 17)
> >   #define ROUTE_DEF_RETAIN_HDR_FMASK		GENMASK(24, 24)
> >   
> > -#define IPA_REG_SHARED_MEM_SIZE_OFFSET			0x00000054
> > +static inline u32 ipa_reg_shared_mem_size_offset(enum ipa_version version)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x50;
> > +
> > +	return 0x54;
> > +}
> > +
> >   #define SHARED_MEM_SIZE_FMASK			GENMASK(15, 0)
> >   #define SHARED_MEM_BADDR_FMASK			GENMASK(31, 16)
> >   
> > @@ -219,7 +244,13 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
> >   }
> >   
> >   /* The next register is not present for IPA v4.5+ */
> > -#define IPA_REG_BCR_OFFSET				0x000001d0
> > +static inline u32 ipa_reg_bcr_offset(enum ipa_version version)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
> > +		return 0x5b0;
> > +
> > +	return 0x1d0;
> > +}
> >   /* The next two fields are not present for IPA v4.2+ */
> >   #define BCR_CMDQ_L_LACK_ONE_ENTRY_FMASK		GENMASK(0, 0)
> >   #define BCR_TX_NOT_USING_BRESP_FMASK		GENMASK(1, 1)
> > @@ -236,7 +267,14 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
> >   #define BCR_ROUTER_PREFETCH_EN_FMASK		GENMASK(9, 9)
> >   
> >   /* The value of the next register must be a multiple of 8 (bottom 3 bits 0) */
> > -#define IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET		0x000001e8
> > +static inline u32 ipa_reg_local_pkt_proc_cntxt_base_offset(enum ipa_version version)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x5e0;
> > +
> > +	return 0x1e8;
> > +}
> > +
> >   
> >   /* Encoded value for LOCAL_PKT_PROC_CNTXT register BASE_ADDR field */
> >   static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
> > @@ -252,7 +290,14 @@ static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
> >   #define IPA_REG_AGGR_FORCE_CLOSE_OFFSET			0x000001ec
> >   
> >   /* The next register is not present for IPA v4.5+ */
> > -#define IPA_REG_COUNTER_CFG_OFFSET			0x000001f0
> > +static inline u32 ipa_reg_counter_cfg_offset(enum ipa_version version)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
> > +		return 0x5e8;
> > +
> > +	return 0x1f0;
> > +}
> > +
> >   /* The next field is not present for IPA v3.5+ */
> >   #define EOT_COAL_GRANULARITY			GENMASK(3, 0)
> >   #define AGGR_GRANULARITY_FMASK			GENMASK(8, 4)
> > @@ -349,15 +394,27 @@ enum ipa_pulse_gran {
> >   #define Y_MIN_LIM_FMASK				GENMASK(21, 16)
> >   #define Y_MAX_LIM_FMASK				GENMASK(29, 24)
> >   
> > -#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \
> > -					(0x00000800 + 0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_init_ctrl_n_offset(enum ipa_version version, u16 ep)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x70 + 0x4 * ep;
> > +
> > +	return 0x800 + 0x70 * ep;
> > +}
> > +
> >   /* Valid only for RX (IPA producer) endpoints (do not use for IPA v4.0+) */
> >   #define ENDP_SUSPEND_FMASK			GENMASK(0, 0)
> >   /* Valid only for TX (IPA consumer) endpoints */
> >   #define ENDP_DELAY_FMASK			GENMASK(1, 1)
> >   
> > -#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \
> > -					(0x00000808 + 0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_init_cfg_n_offset(enum ipa_version version, u16 ep)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0xc0 + 0x4 * ep;
> > +
> > +	return 0x808 + 0x70 * ep;
> > +}
> > +
> >   #define FRAG_OFFLOAD_EN_FMASK			GENMASK(0, 0)
> >   #define CS_OFFLOAD_EN_FMASK			GENMASK(2, 1)
> >   #define CS_METADATA_HDR_OFFSET_FMASK		GENMASK(6, 3)
> > @@ -383,8 +440,14 @@ enum ipa_nat_en {
> >   	IPA_NAT_DST			= 0x2,
> >   };
> >   
> > -#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \
> > -					(0x00000810 + 0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_init_hdr_n_offset(enum ipa_version version, u16 ep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x170 + 0x4 * ep;
> > +
> > +	return 0x810 + 0x70 * ep;
> > +}
> > +
> >   #define HDR_LEN_FMASK				GENMASK(5, 0)
> >   #define HDR_OFST_METADATA_VALID_FMASK		GENMASK(6, 6)
> >   #define HDR_OFST_METADATA_FMASK			GENMASK(12, 7)
> > @@ -440,8 +503,14 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
> >   	return val;
> >   }
> >   
> > -#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \
> > -					(0x00000814 + 0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_init_hdr_ext_n_offset(enum ipa_version version, u16 ep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x1c0 + 0x4 * ep;
> > +
> > +	return 0x814 + 0x70 * ep;
> > +}
> > +
> >   #define HDR_ENDIANNESS_FMASK			GENMASK(0, 0)
> >   #define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK	GENMASK(1, 1)
> >   #define HDR_TOTAL_LEN_OR_PAD_FMASK		GENMASK(2, 2)
> > @@ -454,12 +523,23 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
> >   #define HDR_ADDITIONAL_CONST_LEN_MSB_FMASK	GENMASK(21, 20)
> >   
> >   /* Valid only for RX (IPA producer) endpoints */
> > -#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(rxep) \
> > -					(0x00000818 + 0x0070 * (rxep))
> > +static inline u32 ipa_reg_endp_init_hdr_metadata_mask_n_offset(enum ipa_version version, u16 rxep)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x220 + 0x4 * rxep;
> > +
> > +	return 0x818 + 0x70 * rxep;
> > +}
> >   
> >   /* Valid only for TX (IPA consumer) endpoints */
> > -#define IPA_REG_ENDP_INIT_MODE_N_OFFSET(txep) \
> > -					(0x00000820 + 0x0070 * (txep))
> > +static inline u32 ipa_reg_endp_init_mode_n_offset(enum ipa_version version, u16 txep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x2c0 + 0x4 * txep;
> > +
> > +	return 0x820 + 0x70 * txep;
> > +}
> > +
> >   #define MODE_FMASK				GENMASK(2, 0)
> >   /* The next field is present for IPA v4.5+ */
> >   #define DCPH_ENABLE_FMASK			GENMASK(3, 3)
> > @@ -480,8 +560,14 @@ enum ipa_mode {
> >   	IPA_DMA				= 0x3,
> >   };
> >   
> > -#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \
> > -					(0x00000824 +  0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_init_aggr_n_offset(enum ipa_version version,
> > +						  u16 ep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x320 + 0x4 * ep;
> > +	return 0x824 + 0x70 * ep;
> > +}
> > +
> >   #define AGGR_EN_FMASK				GENMASK(1, 0)
> >   #define AGGR_TYPE_FMASK				GENMASK(4, 2)
> >   
> > @@ -543,14 +629,27 @@ enum ipa_aggr_type {
> >   };
> >   
> >   /* Valid only for RX (IPA producer) endpoints */
> > -#define IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(rxep) \
> > -					(0x0000082c +  0x0070 * (rxep))
> > +static inline u32 ipa_reg_endp_init_hol_block_en_n_offset(enum ipa_version version,
> > +							  u16 rxep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x3c0 + 0x4 * rxep;
> > +
> > +	return 0x82c + 0x70 * rxep;
> > +}
> > +
> >   #define HOL_BLOCK_EN_FMASK			GENMASK(0, 0)
> >   
> >   /* Valid only for RX (IPA producer) endpoints */
> > -#define IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(rxep) \
> > -					(0x00000830 +  0x0070 * (rxep))
> > -/* The next two fields are present for IPA v4.2 only */
> > +static inline u32 ipa_reg_endp_init_hol_block_timer_n_offset(enum ipa_version version, u16 rxep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x420 + 0x4 * rxep;
> > +
> > +	return 0x830 + 0x70 * rxep;
> > +}
> > +
> > +/* The next fields are present for IPA v4.2 only */
> >   #define BASE_VALUE_FMASK			GENMASK(4, 0)
> >   #define SCALE_FMASK				GENMASK(12, 8)
> >   /* The next two fields are present for IPA v4.5 */
> > @@ -558,8 +657,14 @@ enum ipa_aggr_type {
> >   #define GRAN_SEL_FMASK				GENMASK(8, 8)
> >   
> >   /* Valid only for TX (IPA consumer) endpoints */
> > -#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(txep) \
> > -					(0x00000834 + 0x0070 * (txep))
> > +static inline u32 ipa_reg_endp_init_deaggr_n_offset(enum ipa_version version, u16 txep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x470 + 0x4 * txep;
> > +
> > +	return 0x834 + 0x70 * txep;
> > +}
> > +
> >   #define DEAGGR_HDR_LEN_FMASK			GENMASK(5, 0)
> >   #define SYSPIPE_ERR_DETECTION_FMASK		GENMASK(6, 6)
> >   #define PACKET_OFFSET_VALID_FMASK		GENMASK(7, 7)
> > @@ -629,8 +734,14 @@ enum ipa_seq_rep_type {
> >   	IPA_SEQ_REP_DMA_PARSER			= 0x08,
> >   };
> >   
> > -#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \
> > -					(0x00000840 + 0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_status_n_offset(enum ipa_version version, u16 ep)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x4c0 + 0x4 * ep;
> > +
> > +	return 0x840 + 0x70 * ep;
> > +}
> > +
> >   #define STATUS_EN_FMASK				GENMASK(0, 0)
> >   #define STATUS_ENDP_FMASK			GENMASK(5, 1)
> >   /* The next field is not present for IPA v4.5+ */
> > @@ -662,6 +773,9 @@ enum ipa_seq_rep_type {
> >   static inline u32 ipa_reg_irq_stts_ee_n_offset(enum ipa_version version,
> >   					       u32 ee)
> >   {
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x00001008 + 0x1000 * ee;
> > +
> >   	if (version < IPA_VERSION_4_9)
> >   		return 0x00003008 + 0x1000 * ee;
> >   
> > @@ -675,6 +789,9 @@ static inline u32 ipa_reg_irq_stts_offset(enum ipa_version version)
> >   
> >   static inline u32 ipa_reg_irq_en_ee_n_offset(enum ipa_version version, u32 ee)
> >   {
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x0000100c + 0x1000 * ee;
> > +
> >   	if (version < IPA_VERSION_4_9)
> >   		return 0x0000300c + 0x1000 * ee;
> >   
> > @@ -688,6 +805,9 @@ static inline u32 ipa_reg_irq_en_offset(enum ipa_version version)
> >   
> >   static inline u32 ipa_reg_irq_clr_ee_n_offset(enum ipa_version version, u32 ee)
> >   {
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x00001010 + 0x1000 * ee;
> > +
> >   	if (version < IPA_VERSION_4_9)
> >   		return 0x00003010 + 0x1000 * ee;
> >   
> > @@ -776,6 +896,9 @@ enum ipa_irq_id {
> >   
> >   static inline u32 ipa_reg_irq_uc_ee_n_offset(enum ipa_version version, u32 ee)
> >   {
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x0000101c + 0x1000 * ee;
> > +
> >   	if (version < IPA_VERSION_4_9)
> >   		return 0x0000301c + 0x1000 * ee;
> >   
> > @@ -793,6 +916,9 @@ static inline u32 ipa_reg_irq_uc_offset(enum ipa_version version)
> >   static inline u32
> >   ipa_reg_irq_suspend_info_ee_n_offset(enum ipa_version version, u32 ee)
> >   {
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x00001098 + 0x1000 * ee;
> > +
> >   	if (version == IPA_VERSION_3_0)
> >   		return 0x00003098 + 0x1000 * ee;
> >   
> > diff --git a/drivers/net/ipa/ipa_version.h b/drivers/net/ipa/ipa_version.h
> > index 6c16c895d842..0d816de586ba 100644
> > --- a/drivers/net/ipa/ipa_version.h
> > +++ b/drivers/net/ipa/ipa_version.h
> > @@ -8,6 +8,9 @@
> >   
> >   /**
> >    * enum ipa_version
> > + * @IPA_VERSION_2_0:	IPA version 2.0
> > + * @IPA_VERSION_2_5:	IPA version 2.5/2.6
> > + * @IPA_VERSION_2_6:	IPA version 2.6L
> >    * @IPA_VERSION_3_0:	IPA version 3.0/GSI version 1.0
> >    * @IPA_VERSION_3_1:	IPA version 3.1/GSI version 1.1
> >    * @IPA_VERSION_3_5:	IPA version 3.5/GSI version 1.2
> > @@ -25,6 +28,9 @@
> >    * new version is added.
> >    */
> >   enum ipa_version {
> > +	IPA_VERSION_2_0,
> > +	IPA_VERSION_2_5,
> > +	IPA_VERSION_2_6L,
> >   	IPA_VERSION_3_0,
> >   	IPA_VERSION_3_1,
> >   	IPA_VERSION_3_5,
> > @@ -38,4 +44,10 @@ enum ipa_version {
> >   	IPA_VERSION_4_11,
> >   };
> >   
> > +#define IPA_HAS_GSI(version) ((version) > IPA_VERSION_2_6L)
> > +#define IPA_IS_64BIT(version) ((version) > IPA_VERSION_2_6L)
> > +#define IPA_VERSION_RANGE(_version, _from, _to) \
> > +	((_version) >= (IPA_VERSION_##_from) &&  \
> > +	 (_version) <= (IPA_VERSION_##_to))
> > +
> >   #endif /* _IPA_VERSION_H_ */
> > 



* Re: [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 17:30     ` Sireesh Kodali
  0 siblings, 0 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-10-18 17:30 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > BAM is used on IPA v2.x. Since BAM already has a nice dmaengine driver,
> > the IPA driver only makes calls to the dmaengine API.
> > Also add BAM transaction support to IPA's transaction abstraction layer.
> > 
> > BAM transactions should use NAPI just like GSI transactions, but just
> > use callbacks on each transaction for now.
>
> This is where things get a little more complicated. I'm not really
> familiar with the BAM interface and would really like to give this
> a much deeper review, and I won't be doing that now.
>
> At first glance, it looks reasonably clean to me, and it surprises
> me a little that this different system can be used with a relatively
> small amount of change. Much looks duplicated, so it could be that
> a little more work abstracting might avoid that (but I haven't looked
> that closely).
>

BAM is handled by the bam_dma driver, which supports the dmaengine API, so
all the functions are like so:

bam_function()
{
	bookkeeping();
	dmaengine_api_call();
	bookkeeping();
}

gsi_function()
{
	bookkeeping();
	gsi_register_rws();
	gsi_misc_ops();
	bookkeeping();
}

Some of the bookkeeping code is common between BAM and GSI, but the
current abstraction doesn't allow sharing that code. As I stated
previously, we might be able to share more code (or possibly all code)
between BAM and GSI if GSI was implemented as a dmaengine driver. But
reimplementing GSI like this might be rather time consuming, and there
might be easier solutions to improve code sharing between BAM and GSI.
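One such easier option (purely a sketch, all names made up) would be an
ops table, so the shared bookkeeping is written once and only the
transport-specific call differs:

```c
#include <assert.h>

/* Hypothetical transport vtable: common code calls through it, so
 * the bookkeeping around the transport call lives in one place.
 */
struct ipa_dma_ops {
	int (*channel_start)(void *channel);
};

static int bam_channel_start(void *channel)
{
	(void)channel;
	/* would call dmaengine APIs here; returns 1 so the
	 * dispatch is visible in this sketch
	 */
	return 1;
}

static int gsi_channel_start(void *channel)
{
	(void)channel;
	/* would program GSI registers here; returns 2 likewise */
	return 2;
}

static int ipa_channel_start(const struct ipa_dma_ops *ops, void *channel)
{
	/* shared bookkeeping before... */
	int ret = ops->channel_start(channel);
	/* ...and after the transport-specific call */
	return ret;
}
```

This is roughly the shape the current gsi/bam split already hints at,
just pushed down one level so the common code stops being duplicated.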

Regards,
Sireesh

> -Alex
>
> > 
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/Makefile          |   2 +-
> >   drivers/net/ipa/bam.c             | 525 ++++++++++++++++++++++++++++++
> >   drivers/net/ipa/gsi.c             |   1 +
> >   drivers/net/ipa/ipa_data.h        |   1 +
> >   drivers/net/ipa/ipa_dma.h         |  18 +-
> >   drivers/net/ipa/ipa_dma_private.h |   2 +
> >   drivers/net/ipa/ipa_main.c        |  20 +-
> >   drivers/net/ipa/ipa_trans.c       |  14 +-
> >   drivers/net/ipa/ipa_trans.h       |   4 +
> >   9 files changed, 569 insertions(+), 18 deletions(-)
> >   create mode 100644 drivers/net/ipa/bam.c
> > 
> > diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
> > index 3cd021fb992e..4abebc667f77 100644
> > --- a/drivers/net/ipa/Makefile
> > +++ b/drivers/net/ipa/Makefile
> > @@ -2,7 +2,7 @@ obj-$(CONFIG_QCOM_IPA)	+=	ipa.o
> >   
> >   ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
> >   				ipa_table.o ipa_interrupt.o gsi.o ipa_trans.o \
> > -				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
> > +				ipa_gsi.o ipa_smp2p.o ipa_uc.o bam.o \
> >   				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
> >   				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
> >   				ipa_sysfs.o
> > diff --git a/drivers/net/ipa/bam.c b/drivers/net/ipa/bam.c
> > new file mode 100644
> > index 000000000000..0726e385fee5
> > --- /dev/null
> > +++ b/drivers/net/ipa/bam.c
> > @@ -0,0 +1,525 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/* Copyright (c) 2020, The Linux Foundation. All rights reserved.
> > + */
> > +
> > +#include <linux/completion.h>
> > +#include <linux/dma-mapping.h>
> > +#include <linux/dmaengine.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/io.h>
> > +#include <linux/kernel.h>
> > +#include <linux/mutex.h>
> > +#include <linux/netdevice.h>
> > +#include <linux/platform_device.h>
> > +
> > +#include "ipa_gsi.h"
> > +#include "ipa.h"
> > +#include "ipa_dma.h"
> > +#include "ipa_dma_private.h"
> > +#include "ipa_gsi.h"
> > +#include "ipa_trans.h"
> > +#include "ipa_data.h"
> > +
> > +/**
> > + * DOC: The IPA Smart Peripheral System Interface
> > + *
> > + * The Smart Peripheral System is a means to communicate over BAM pipes to
> > + * the IPA block. The Modem also uses BAM pipes to communicate with the IPA
> > + * core.
> > + *
> > + * Refer the GSI documentation, because BAM is a precursor to GSI and more or less
> > + * the same, conceptually (maybe, IDK, I have no docs to go through).
> > + *
> > + * Each channel here corresponds to 1 BAM pipe configured in BAM2BAM mode
> > + *
> > + * IPA cmds are transferred one at a time, each in one BAM transfer.
> > + */
> > +
> > +/* Get and configure the BAM DMA channel */
> > +int bam_channel_init_one(struct ipa_dma *bam,
> > +			 const struct ipa_gsi_endpoint_data *data, bool command)
> > +{
> > +	struct dma_slave_config bam_config;
> > +	u32 channel_id = data->channel_id;
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +	int ret;
> > +
> > +	/*TODO: if (!bam_channel_data_valid(bam, data))
> > +		return -EINVAL;*/
> > +
> > +	channel->dma_subsys = bam;
> > +	channel->dma_chan = dma_request_chan(bam->dev, data->channel_name);
> > +	channel->toward_ipa = data->toward_ipa;
> > +	channel->tlv_count = data->channel.tlv_count;
> > +	channel->tre_count = data->channel.tre_count;
> > +	if (IS_ERR(channel->dma_chan)) {
> > +		dev_err(bam->dev, "failed to request BAM channel %s: %d\n",
> > +				data->channel_name,
> > +				(int) PTR_ERR(channel->dma_chan));
> > +		return PTR_ERR(channel->dma_chan);
> > +	}
> > +
> > +	ret = ipa_channel_trans_init(bam, data->channel_id);
> > +	if (ret)
> > +		goto err_dma_chan_free;
> > +
> > +	if (data->toward_ipa) {
> > +		bam_config.direction = DMA_MEM_TO_DEV;
> > +		bam_config.dst_maxburst = channel->tlv_count;
> > +	} else {
> > +		bam_config.direction = DMA_DEV_TO_MEM;
> > +		bam_config.src_maxburst = channel->tlv_count;
> > +	}
> > +
> > +	dmaengine_slave_config(channel->dma_chan, &bam_config);
> > +
> > +	if (command)
> > +		ret = ipa_cmd_pool_init(channel, 256);
> > +
> > +	if (!ret)
> > +		return 0;
> > +
> > +err_dma_chan_free:
> > +	dma_release_channel(channel->dma_chan);
> > +	return ret;
> > +}
> > +
> > +static void bam_channel_exit_one(struct ipa_channel *channel)
> > +{
> > +	if (channel->dma_chan) {
> > +		dmaengine_terminate_sync(channel->dma_chan);
> > +		dma_release_channel(channel->dma_chan);
> > +	}
> > +}
> > +
> > +/* Get channels from BAM_DMA */
> > +int bam_channel_init(struct ipa_dma *bam, u32 count,
> > +		const struct ipa_gsi_endpoint_data *data)
> > +{
> > +	int ret = 0;
> > +	u32 i;
> > +
> > +	for (i = 0; i < count; ++i) {
> > +		bool command = i == IPA_ENDPOINT_AP_COMMAND_TX;
> > +
> > +		if (!data[i].channel_name || data[i].ee_id == GSI_EE_MODEM)
> > +			continue;
> > +
> > +		ret = bam_channel_init_one(bam, &data[i], command);
> > +		if (ret)
> > +			goto err_unwind;
> > +	}
> > +
> > +	return ret;
> > +
> > +err_unwind:
> > +	while (i--) {
> > +		if (!data[i].channel_name || data[i].ee_id == GSI_EE_MODEM)
> > +			continue;
> > +
> > +		bam_channel_exit_one(&bam->channel[data[i].channel_id]);
> > +	}
> > +	return ret;
> > +}
> > +
> > +/* Inverse of bam_channel_init() */
> > +void bam_channel_exit(struct ipa_dma *bam)
> > +{
> > +	u32 channel_id = BAM_CHANNEL_COUNT_MAX - 1;
> > +
> > +	do
> > +		bam_channel_exit_one(&bam->channel[channel_id]);
> > +	while (channel_id--);
> > +}
> > +
> > +/* Inverse of bam_init() */
> > +static void bam_exit(struct ipa_dma *bam)
> > +{
> > +	mutex_destroy(&bam->mutex);
> > +	bam_channel_exit(bam);
> > +}
> > +
> > +/* Return the channel id associated with a given channel */
> > +static u32 bam_channel_id(struct ipa_channel *channel)
> > +{
> > +	return channel - &channel->dma_subsys->channel[0];
> > +}
> > +
> > +static void
> > +bam_channel_tx_update(struct ipa_channel *channel, struct ipa_trans *trans)
> > +{
> > +	u64 byte_count = trans->byte_count + trans->len;
> > +	u64 trans_count = trans->trans_count + 1;
> > +
> > +	byte_count -= channel->compl_byte_count;
> > +	channel->compl_byte_count += byte_count;
> > +	trans_count -= channel->compl_trans_count;
> > +	channel->compl_trans_count += trans_count;
> > +
> > +	ipa_gsi_channel_tx_completed(channel->dma_subsys, bam_channel_id(channel),
> > +					   trans_count, byte_count);
> > +}
> > +
> > +static void
> > +bam_channel_rx_update(struct ipa_channel *channel, struct ipa_trans *trans)
> > +{
> > +	/* FIXME */
> > +	u64 byte_count = trans->byte_count + trans->len;
> > +
> > +	channel->byte_count += byte_count;
> > +	channel->trans_count++;
> > +}
> > +
> > +/* Consult hardware, move any newly completed transactions to completed list */
> > +static void bam_channel_update(struct ipa_channel *channel)
> > +{
> > +	struct ipa_trans *trans = NULL;
> > +	struct ipa_trans *pending;
> > +
> > +	list_for_each_entry(pending, &channel->trans_info.pending, links) {
> > +		enum dma_status trans_status =
> > +				dma_async_is_tx_complete(channel->dma_chan,
> > +					pending->cookie, NULL, NULL);
> > +		if (trans_status == DMA_COMPLETE) {
> > +			trans = pending;
> > +			break;
> > +		}
> > +	}
> > +
> > +	/* Nothing to update if no pending transaction has completed */
> > +	if (!trans)
> > +		return;
> > +
> > +	/* Take a reference to keep the transaction from completing before
> > +	 * this and previous completions are reported up the stack.
> > +	 */
> > +	refcount_inc(&trans->refcount);
> > +
> > +	/* For RX channels, update each completed transaction with the number
> > +	 * of bytes that were actually received.  For TX channels, report
> > +	 * the number of transactions and bytes this completion represents
> > +	 * up the network stack.
> > +	 */
> > +	if (channel->toward_ipa)
> > +		bam_channel_tx_update(channel, trans);
> > +	else
> > +		bam_channel_rx_update(channel, trans);
> > +
> > +	ipa_trans_move_complete(trans);
> > +
> > +	ipa_trans_free(trans);
> > +}
> > +
> > +/**
> > + * bam_channel_poll_one() - Return a single completed transaction on a channel
> > + * @channel:	Channel to be polled
> > + *
> > + * Return:	Transaction pointer, or null if none are available
> > + *
> > + * This function returns the first entry on a channel's completed transaction
> > + * list.  If that list is empty, the hardware is consulted to determine
> > + * whether any new transactions have completed.  If so, they're moved to the
> > + * completed list and the new first entry is returned.  If there are no more
> > + * completed transactions, a null pointer is returned.
> > + */
> > +static struct ipa_trans *bam_channel_poll_one(struct ipa_channel *channel)
> > +{
> > +	struct ipa_trans *trans;
> > +
> > +	/* Get the first transaction from the completed list */
> > +	trans = ipa_channel_trans_complete(channel);
> > +	if (!trans) {
> > +		bam_channel_update(channel);
> > +		trans = ipa_channel_trans_complete(channel);
> > +	}
> > +
> > +	if (trans)
> > +		ipa_trans_move_polled(trans);
> > +
> > +	return trans;
> > +}
> > +
> > +/**
> > + * bam_channel_poll() - NAPI poll function for a channel
> > + * @napi:	NAPI structure for the channel
> > + * @budget:	Budget supplied by NAPI core
> > + *
> > + * Return:	Number of items polled (<= budget)
> > + *
> > + * Single transactions completed by hardware are polled until either
> > + * the budget is exhausted, or there are no more.  Each transaction
> > + * polled is passed to ipa_trans_complete(), to perform remaining
> > + * completion processing and retire/free the transaction.
> > + */
> > +static int bam_channel_poll(struct napi_struct *napi, int budget)
> > +{
> > +	struct ipa_channel *channel;
> > +	int count = 0;
> > +
> > +	channel = container_of(napi, struct ipa_channel, napi);
> > +	while (count < budget) {
> > +		struct ipa_trans *trans;
> > +
> > +		count++;
> > +		trans = bam_channel_poll_one(channel);
> > +		if (!trans)
> > +			break;
> > +		ipa_trans_complete(trans);
> > +	}
> > +
> > +	if (count < budget)
> > +		napi_complete(&channel->napi);
> > +
> > +	return count;
> > +}
> > +
> > +/* Setup function for a single channel */
> > +static void bam_channel_setup_one(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	if (!channel->dma_subsys)
> > +		return;	/* Ignore uninitialized channels */
> > +
> > +	if (channel->toward_ipa) {
> > +		netif_tx_napi_add(&bam->dummy_dev, &channel->napi,
> > +				  bam_channel_poll, NAPI_POLL_WEIGHT);
> > +	} else {
> > +		netif_napi_add(&bam->dummy_dev, &channel->napi,
> > +			       bam_channel_poll, NAPI_POLL_WEIGHT);
> > +	}
> > +	napi_enable(&channel->napi);
> > +}
> > +
> > +static void bam_channel_teardown_one(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	if (!channel->dma_subsys)
> > +		return;		/* Ignore uninitialized channels */
> > +
> > +	netif_napi_del(&channel->napi);
> > +}
> > +
> > +/* Setup function for channels */
> > +static int bam_channel_setup(struct ipa_dma *bam)
> > +{
> > +	u32 channel_id = 0;
> > +
> > +	mutex_lock(&bam->mutex);
> > +
> > +	/* bam_channel_setup_one() ignores uninitialized channels, and the
> > +	 * loop below covers every array slot, so no separate check for
> > +	 * unsupported channels is needed here.
> > +	 */
> > +	do
> > +		bam_channel_setup_one(bam, channel_id);
> > +	while (++channel_id < BAM_CHANNEL_COUNT_MAX);
> > +
> > +	mutex_unlock(&bam->mutex);
> > +
> > +	return 0;
> > +}
> > +
> > +/* Inverse of bam_channel_setup() */
> > +static void bam_channel_teardown(struct ipa_dma *bam)
> > +{
> > +	u32 channel_id;
> > +
> > +	mutex_lock(&bam->mutex);
> > +
> > +	channel_id = BAM_CHANNEL_COUNT_MAX - 1;
> > +	do
> > +		bam_channel_teardown_one(bam, channel_id);
> > +	while (channel_id--);
> > +
> > +	mutex_unlock(&bam->mutex);
> > +}
> > +
> > +static int bam_setup(struct ipa_dma *bam)
> > +{
> > +	return bam_channel_setup(bam);
> > +}
> > +
> > +static void bam_teardown(struct ipa_dma *bam)
> > +{
> > +	bam_channel_teardown(bam);
> > +}
> > +
> > +static u32 bam_channel_tre_max(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	/* Hardware limit is channel->tre_count - 1 */
> > +	return channel->tre_count - (channel->tlv_count - 1);
> > +}
> > +
> > +static u32 bam_channel_trans_tre_max(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	return channel->tlv_count;
> > +}
> > +
> > +static int bam_channel_start(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int bam_channel_stop(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	return dmaengine_terminate_sync(channel->dma_chan);
> > +}
> > +
> > +static void bam_channel_reset(struct ipa_dma *bam, u32 channel_id, bool doorbell)
> > +{
> > +	bam_channel_stop(bam, channel_id);
> > +}
> > +
> > +static int bam_channel_suspend(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	return dmaengine_pause(channel->dma_chan);
> > +}
> > +
> > +static int bam_channel_resume(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	return dmaengine_resume(channel->dma_chan);
> > +}
> > +
> > +static void bam_suspend(struct ipa_dma *bam)
> > +{
> > +	/* No-op for now */
> > +}
> > +
> > +static void bam_resume(struct ipa_dma *bam)
> > +{
> > +	/* No-op for now */
> > +}
> > +
> > +static void bam_trans_callback(void *arg)
> > +{
> > +	ipa_trans_complete(arg);
> > +}
> > +
> > +static void bam_trans_commit(struct ipa_trans *trans, bool unused)
> > +{
> > +	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
> > +	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
> > +	struct ipa_cmd_info *info;
> > +	struct scatterlist *sg;
> > +	u32 byte_count = 0;
> > +	u32 i;
> > +	enum dma_transfer_direction direction;
> > +
> > +	if (channel->toward_ipa)
> > +		direction = DMA_MEM_TO_DEV;
> > +	else
> > +		direction = DMA_DEV_TO_MEM;
> > +
> > +	/* assert(trans->used > 0); */
> > +
> > +	info = trans->info ? &trans->info[0] : NULL;
> > +	for_each_sg(trans->sgl, sg, trans->used, i) {
> > +		bool last_tre = i == trans->used - 1;
> > +		dma_addr_t addr = sg_dma_address(sg);
> > +		u32 len = sg_dma_len(sg);
> > +		u32 dma_flags = 0;
> > +		struct dma_async_tx_descriptor *desc;
> > +
> > +		byte_count += len;
> > +		if (info)
> > +			opcode = info++->opcode;
> > +
> > +		if (opcode != IPA_CMD_NONE) {
> > +			len = opcode;
> > +			dma_flags |= DMA_PREP_IMM_CMD;
> > +		}
> > +
> > +		if (last_tre)
> > +			dma_flags |= DMA_PREP_INTERRUPT;
> > +
> > +		desc = dmaengine_prep_slave_single(channel->dma_chan, addr, len,
> > +				direction, dma_flags);
> > +
> > +		if (last_tre) {
> > +			desc->callback = bam_trans_callback;
> > +			desc->callback_param = trans;
> > +		}
> > +
> > +		desc->cookie = dmaengine_submit(desc);
> > +
> > +		if (last_tre)
> > +			trans->cookie = desc->cookie;
> > +
> > +		if (direction == DMA_DEV_TO_MEM)
> > +			dmaengine_desc_attach_metadata(desc, &trans->len, sizeof(trans->len));
> > +	}
> > +
> > +	if (channel->toward_ipa) {
> > +		/* We record TX bytes when they are sent */
> > +		trans->len = byte_count;
> > +		trans->trans_count = channel->trans_count;
> > +		trans->byte_count = channel->byte_count;
> > +		channel->trans_count++;
> > +		channel->byte_count += byte_count;
> > +	}
> > +
> > +	ipa_trans_move_pending(trans);
> > +
> > +	dma_async_issue_pending(channel->dma_chan);
> > +}
> > +
> > +/* Initialize the BAM DMA channels
> > + * Actual hw init is handled by the BAM_DMA driver
> > + */
> > +int bam_init(struct ipa_dma *bam, struct platform_device *pdev,
> > +		enum ipa_version version, u32 count,
> > +		const struct ipa_gsi_endpoint_data *data)
> > +{
> > +	struct device *dev = &pdev->dev;
> > +	int ret;
> > +
> > +	bam->dev = dev;
> > +	bam->version = version;
> > +	bam->setup = bam_setup;
> > +	bam->teardown = bam_teardown;
> > +	bam->exit = bam_exit;
> > +	bam->suspend = bam_suspend;
> > +	bam->resume = bam_resume;
> > +	bam->channel_tre_max = bam_channel_tre_max;
> > +	bam->channel_trans_tre_max = bam_channel_trans_tre_max;
> > +	bam->channel_start = bam_channel_start;
> > +	bam->channel_stop = bam_channel_stop;
> > +	bam->channel_reset = bam_channel_reset;
> > +	bam->channel_suspend = bam_channel_suspend;
> > +	bam->channel_resume = bam_channel_resume;
> > +	bam->trans_commit = bam_trans_commit;
> > +
> > +	init_dummy_netdev(&bam->dummy_dev);
> > +
> > +	ret = bam_channel_init(bam, count, data);
> > +	if (ret)
> > +		return ret;
> > +
> > +	mutex_init(&bam->mutex);
> > +
> > +	return 0;
> > +}
> > diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
> > index 39d9ca620a9f..ac0b9e748fa1 100644
> > --- a/drivers/net/ipa/gsi.c
> > +++ b/drivers/net/ipa/gsi.c
> > @@ -2210,6 +2210,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
> >   	gsi->channel_reset = gsi_channel_reset;
> >   	gsi->channel_suspend = gsi_channel_suspend;
> >   	gsi->channel_resume = gsi_channel_resume;
> > +	gsi->trans_commit = gsi_trans_commit;
> >   
> >   	/* GSI uses NAPI on all channels.  Create a dummy network device
> >   	 * for the channel NAPI contexts to be associated with.
> > diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
> > index 6d329e9ce5d2..7d62d49f414f 100644
> > --- a/drivers/net/ipa/ipa_data.h
> > +++ b/drivers/net/ipa/ipa_data.h
> > @@ -188,6 +188,7 @@ struct ipa_gsi_endpoint_data {
> >   	u8 channel_id;
> >   	u8 endpoint_id;
> >   	bool toward_ipa;
> > +	const char *channel_name;	/* used only for BAM DMA channels */
> >   
> >   	struct gsi_channel_data channel;
> >   	struct ipa_endpoint_data endpoint;
> > diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
> > index 1a23e6ac5785..3000182ae689 100644
> > --- a/drivers/net/ipa/ipa_dma.h
> > +++ b/drivers/net/ipa/ipa_dma.h
> > @@ -17,7 +17,11 @@
> >   
> >   /* Maximum number of channels and event rings supported by the driver */
> >   #define GSI_CHANNEL_COUNT_MAX	23
> > +#define BAM_CHANNEL_COUNT_MAX	20
> >   #define GSI_EVT_RING_COUNT_MAX	24
> > +#define MAX(a, b)		(((a) > (b)) ? (a) : (b))
> > +#define IPA_CHANNEL_COUNT_MAX	MAX(GSI_CHANNEL_COUNT_MAX, \
> > +				    BAM_CHANNEL_COUNT_MAX)
> >   
> >   /* Maximum TLV FIFO size for a channel; 64 here is arbitrary (and high) */
> >   #define GSI_TLV_MAX		64
> > @@ -119,6 +123,8 @@ struct ipa_channel {
> >   	struct gsi_ring tre_ring;
> >   	u32 evt_ring_id;
> >   
> > +	struct dma_chan *dma_chan;
> > +
> >   	u64 byte_count;			/* total # bytes transferred */
> >   	u64 trans_count;		/* total # transactions */
> >   	/* The following counts are used only for TX endpoints */
> > @@ -154,7 +160,7 @@ struct ipa_dma {
> >   	u32 irq;
> >   	u32 channel_count;
> >   	u32 evt_ring_count;
> > -	struct ipa_channel channel[GSI_CHANNEL_COUNT_MAX];
> > +	struct ipa_channel channel[IPA_CHANNEL_COUNT_MAX];
> >   	struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX];
> >   	u32 event_bitmap;		/* allocated event rings */
> >   	u32 modem_channel_bitmap;	/* modem channels to allocate */
> > @@ -303,7 +309,7 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
> >   }
> >   
> >   /**
> > - * ipa_dma_init() - Initialize the GSI subsystem
> > + * gsi_init()/bam_init() - Initialize the GSI/BAM subsystem
> >    * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
> >    * @pdev:	IPA platform device
> >    * @version:	IPA hardware version (implies GSI version)
> > @@ -312,14 +318,18 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
> >    *
> >    * Return:	0 if successful, or a negative error code
> >    *
> > - * Early stage initialization of the GSI subsystem, performing tasks
> > - * that can be done before the GSI hardware is ready to use.
> > + * Early stage initialization of the GSI/BAM subsystem, performing tasks
> > + * that can be done before the GSI/BAM hardware is ready to use.
> >    */
> >   
> >   int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
> >   	     enum ipa_version version, u32 count,
> >   	     const struct ipa_gsi_endpoint_data *data);
> >   
> > +int bam_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
> > +	     enum ipa_version version, u32 count,
> > +	     const struct ipa_gsi_endpoint_data *data);
> > +
> >   /**
> >    * ipa_dma_exit() - Exit the DMA subsystem
> >    * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
> > diff --git a/drivers/net/ipa/ipa_dma_private.h b/drivers/net/ipa/ipa_dma_private.h
> > index 40148a551b47..1db53e597a61 100644
> > --- a/drivers/net/ipa/ipa_dma_private.h
> > +++ b/drivers/net/ipa/ipa_dma_private.h
> > @@ -16,6 +16,8 @@ struct ipa_channel;
> >   
> >   #define GSI_RING_ELEMENT_SIZE	16	/* bytes; must be a power of 2 */
> >   
> > +void gsi_trans_commit(struct ipa_trans *trans, bool ring_db);
> > +
> >   /* Return the entry that follows one provided in a transaction pool */
> >   void *ipa_trans_pool_next(struct ipa_trans_pool *pool, void *element);
> >   
> > diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> > index ba06e3ad554c..ea6c4347f2c6 100644
> > --- a/drivers/net/ipa/ipa_main.c
> > +++ b/drivers/net/ipa/ipa_main.c
> > @@ -60,12 +60,15 @@
> >    * core.  The GSI implements a set of "channels" used for communication
> >    * between the AP and the IPA.
> >    *
> > - * The IPA layer uses GSI channels to implement its "endpoints".  And while
> > - * a GSI channel carries data between the AP and the IPA, a pair of IPA
> > - * endpoints is used to carry traffic between two EEs.  Specifically, the main
> > - * modem network interface is implemented by two pairs of endpoints:  a TX
> > + * The IPA layer uses GSI channels or BAM pipes to implement its "endpoints".
> > + * And while a GSI channel carries data between the AP and the IPA, a pair of
> > + * IPA endpoints is used to carry traffic between two EEs.  Specifically, the
> > + * main modem network interface is implemented by two pairs of endpoints:  a TX
> >    * endpoint on the AP coupled with an RX endpoint on the modem; and another
> >    * RX endpoint on the AP receiving data from a TX endpoint on the modem.
> > + *
> > + * For BAM-based transport, a pair of BAM pipes is used for TX and RX between
> > + * the AP and IPA, and between IPA and other EEs.
> >    */
> >   
> >   /* The name of the GSI firmware file relative to /lib/firmware */
> > @@ -716,8 +719,13 @@ static int ipa_probe(struct platform_device *pdev)
> >   	if (ret)
> >   		goto err_reg_exit;
> >   
> > -	ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> > -		       data->endpoint_data);
> > +	if (IPA_HAS_GSI(ipa->version))
> > +		ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> > +			       data->endpoint_data);
> > +	else
> > +		ret = bam_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> > +			       data->endpoint_data);
> > +
> >   	if (ret)
> >   		goto err_mem_exit;
> >   
> > diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
> > index 22755f3ce3da..444f44846da8 100644
> > --- a/drivers/net/ipa/ipa_trans.c
> > +++ b/drivers/net/ipa/ipa_trans.c
> > @@ -254,7 +254,7 @@ struct ipa_trans *ipa_channel_trans_complete(struct ipa_channel *channel)
> >   }
> >   
> >   /* Move a transaction from the allocated list to the pending list */
> > -static void ipa_trans_move_pending(struct ipa_trans *trans)
> > +void ipa_trans_move_pending(struct ipa_trans *trans)
> >   {
> >   	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
> >   	struct ipa_trans_info *trans_info = &channel->trans_info;
> > @@ -539,7 +539,7 @@ static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
> >    * pending list.  Finally, updates the channel ring pointer and optionally
> >    * rings the doorbell.
> >    */
> > -static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
> > +void gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
> >   {
> >   	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
> >   	struct gsi_ring *ring = &channel->tre_ring;
> > @@ -604,9 +604,9 @@ static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
> >   /* Commit a GSI transaction */
> >   void ipa_trans_commit(struct ipa_trans *trans, bool ring_db)
> >   {
> > -	if (trans->used)
> > -		__gsi_trans_commit(trans, ring_db);
> > -	else
> > +	if (trans->used)
> > +		trans->dma_subsys->trans_commit(trans, ring_db);
> > +	else
> >   		ipa_trans_free(trans);
> >   }
> >   
> > @@ -618,7 +618,7 @@ void ipa_trans_commit_wait(struct ipa_trans *trans)
> >   
> >   	refcount_inc(&trans->refcount);
> >   
> > -	__gsi_trans_commit(trans, true);
> > +	trans->dma_subsys->trans_commit(trans, true);
> >   
> >   	wait_for_completion(&trans->completion);
> >   
> > @@ -638,7 +638,7 @@ int ipa_trans_commit_wait_timeout(struct ipa_trans *trans,
> >   
> >   	refcount_inc(&trans->refcount);
> >   
> > -	__gsi_trans_commit(trans, true);
> > +	trans->dma_subsys->trans_commit(trans, true);
> >   
> >   	remaining = wait_for_completion_timeout(&trans->completion,
> >   						timeout_jiffies);
> > diff --git a/drivers/net/ipa/ipa_trans.h b/drivers/net/ipa/ipa_trans.h
> > index b93342414360..5f41e3e6f92a 100644
> > --- a/drivers/net/ipa/ipa_trans.h
> > +++ b/drivers/net/ipa/ipa_trans.h
> > @@ -10,6 +10,7 @@
> >   #include <linux/refcount.h>
> >   #include <linux/completion.h>
> >   #include <linux/dma-direction.h>
> > +#include <linux/dmaengine.h>
> >   
> >   #include "ipa_cmd.h"
> >   
> > @@ -61,6 +62,7 @@ struct ipa_trans {
> >   	struct scatterlist *sgl;
> >   	struct ipa_cmd_info *info;	/* array of entries, or null */
> >   	enum dma_data_direction direction;
> > +	dma_cookie_t cookie;
> >   
> >   	refcount_t refcount;
> >   	struct completion completion;
> > @@ -149,6 +151,8 @@ struct ipa_trans *ipa_channel_trans_alloc(struct ipa_dma *dma_subsys, u32 channe
> >    */
> >   void ipa_trans_free(struct ipa_trans *trans);
> >   
> > +void ipa_trans_move_pending(struct ipa_trans *trans);
> > +
> >   /**
> >    * ipa_trans_cmd_add() - Add an immediate command to a transaction
> >    * @trans:	Transaction
> > 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 18:13     ` Sireesh Kodali
  0 siblings, 0 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-10-18 18:13 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > IPA v2.x commands are different from later IPA revisions mostly because
> > of the fact that IPA v2.x is 32 bit. There are also other minor
> > differences in some of the command structs.
> > 
> > The tables again are only different because of the fact that IPA v2.x is
> > 32 bit.
>
> There's no "RFC" on this patch, but I assume it's just invisible.

Eep, I forgot to add the tag to this patch

>
> There are some things in here where some conventions used elsewhere
> in the driver aren't as well followed. One example is the use of
> symbol names with IPA version encoded in them; such cases usually
> have a macro that takes a version as argument.

Got it, I'll fix that

>
> And I don't especially like using a macro on the left hand side
> of an assignment expression.
>

That's fair, I'll try coming up with a cleaner solution here

Regards,
Sireesh
> I'm skimming now, but overall this looks OK.
>
> -Alex
>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > ---
> >   drivers/net/ipa/ipa.h       |   2 +-
> >   drivers/net/ipa/ipa_cmd.c   | 138 ++++++++++++++++++++++++++----------
> >   drivers/net/ipa/ipa_table.c |  29 ++++++--
> >   drivers/net/ipa/ipa_table.h |   2 +-
> >   4 files changed, 125 insertions(+), 46 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
> > index 80a83ac45729..63b2b368b588 100644
> > --- a/drivers/net/ipa/ipa.h
> > +++ b/drivers/net/ipa/ipa.h
> > @@ -81,7 +81,7 @@ struct ipa {
> >   	struct ipa_power *power;
> >   
> >   	dma_addr_t table_addr;
> > -	__le64 *table_virt;
> > +	void *table_virt;
> >   
> >   	struct ipa_interrupt *interrupt;
> >   	bool uc_powered;
> > diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> > index 7a104540dc26..58dae4b3bf87 100644
> > --- a/drivers/net/ipa/ipa_cmd.c
> > +++ b/drivers/net/ipa/ipa_cmd.c
> > @@ -25,8 +25,8 @@
> >    * An immediate command is generally used to request the IPA do something
> >    * other than data transfer to another endpoint.
> >    *
> > - * Immediate commands are represented by GSI transactions just like other
> > - * transfer requests, represented by a single GSI TRE.  Each immediate
> > + * Immediate commands on IPA v3 are represented by GSI transactions just like
> > + * other transfer requests, represented by a single GSI TRE.  Each immediate
> >    * command has a well-defined format, having a payload of a known length.
> >    * This allows the transfer element's length field to be used to hold an
> >    * immediate command's opcode.  The payload for a command resides in DRAM
> > @@ -45,10 +45,16 @@ enum pipeline_clear_options {
> >   
> >   /* IPA_CMD_IP_V{4,6}_{FILTER,ROUTING}_INIT */
> >   
> > -struct ipa_cmd_hw_ip_fltrt_init {
> > -	__le64 hash_rules_addr;
> > -	__le64 flags;
> > -	__le64 nhash_rules_addr;
> > +union ipa_cmd_hw_ip_fltrt_init {
> > +	struct {
> > +		__le32 nhash_rules_addr;
> > +		__le32 flags;
> > +	} v2;
> > +	struct {
> > +		__le64 hash_rules_addr;
> > +		__le64 flags;
> > +		__le64 nhash_rules_addr;
> > +	} v3;
> >   };
> >   
> >   /* Field masks for ipa_cmd_hw_ip_fltrt_init structure fields */
> > @@ -56,13 +62,23 @@ struct ipa_cmd_hw_ip_fltrt_init {
> >   #define IP_FLTRT_FLAGS_HASH_ADDR_FMASK			GENMASK_ULL(27, 12)
> >   #define IP_FLTRT_FLAGS_NHASH_SIZE_FMASK			GENMASK_ULL(39, 28)
> >   #define IP_FLTRT_FLAGS_NHASH_ADDR_FMASK			GENMASK_ULL(55, 40)
> > +#define IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(11, 0)
> > +#define IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(27, 12)
> > +#define IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(15, 0)
> > +#define IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(31, 16)
> >   
> >   /* IPA_CMD_HDR_INIT_LOCAL */
> >   
> > -struct ipa_cmd_hw_hdr_init_local {
> > -	__le64 hdr_table_addr;
> > -	__le32 flags;
> > -	__le32 reserved;
> > +union ipa_cmd_hw_hdr_init_local {
> > +	struct {
> > +		__le32 hdr_table_addr;
> > +		__le32 flags;
> > +	} v2;
> > +	struct {
> > +		__le64 hdr_table_addr;
> > +		__le32 flags;
> > +		__le32 reserved;
> > +	} v3;
> >   };
> >   
> >   /* Field masks for ipa_cmd_hw_hdr_init_local structure fields */
> > @@ -109,14 +125,37 @@ struct ipa_cmd_ip_packet_init {
> >   #define DMA_SHARED_MEM_OPCODE_SKIP_CLEAR_FMASK		GENMASK(8, 8)
> >   #define DMA_SHARED_MEM_OPCODE_CLEAR_OPTION_FMASK	GENMASK(10, 9)
> >   
> > -struct ipa_cmd_hw_dma_mem_mem {
> > -	__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
> > -	__le16 size;
> > -	__le16 local_addr;
> > -	__le16 flags;
> > -	__le64 system_addr;
> > +union ipa_cmd_hw_dma_mem_mem {
> > +	struct {
> > +		__le16 reserved;
> > +		__le16 size;
> > +		__le32 system_addr;
> > +		__le16 local_addr;
> > +		__le16 flags; /* the least significant 14 bits are reserved */
> > +		__le32 padding;
> > +	} v2;
> > +	struct {
> > +		__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
> > +		__le16 size;
> > +		__le16 local_addr;
> > +		__le16 flags;
> > +		__le64 system_addr;
> > +	} v3;
> >   };
> >   
> > +#define CMD_FIELD(_version, _payload, _field)				\
> > +	*(((_version) > IPA_VERSION_2_6L) ?		    		\
> > +	  &(_payload->v3._field) :			    		\
> > +	  &(_payload->v2._field))
> > +
> > +#define SET_DMA_FIELD(_ver, _payload, _field, _value)			\
> > +	do {								\
> > +		if ((_ver) >= IPA_VERSION_3_0)				\
> > +			(_payload)->v3._field = cpu_to_le64(_value);	\
> > +		else							\
> > +			(_payload)->v2._field = cpu_to_le32(_value);	\
> > +	} while (0)
> > +
> >   /* Flag allowing atomic clear of target region after reading data (v4.0+)*/
> >   #define DMA_SHARED_MEM_CLEAR_AFTER_READ			GENMASK(15, 15)
> >   
> > @@ -132,15 +171,16 @@ struct ipa_cmd_ip_packet_tag_status {
> >   	__le64 tag;
> >   };
> >   
> > -#define IP_PACKET_TAG_STATUS_TAG_FMASK			GENMASK_ULL(63, 16)
> > +#define IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 32)
> > +#define IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 16)
> >   
> >   /* Immediate command payload */
> >   union ipa_cmd_payload {
> > -	struct ipa_cmd_hw_ip_fltrt_init table_init;
> > -	struct ipa_cmd_hw_hdr_init_local hdr_init_local;
> > +	union ipa_cmd_hw_ip_fltrt_init table_init;
> > +	union ipa_cmd_hw_hdr_init_local hdr_init_local;
> >   	struct ipa_cmd_register_write register_write;
> >   	struct ipa_cmd_ip_packet_init ip_packet_init;
> > -	struct ipa_cmd_hw_dma_mem_mem dma_shared_mem;
> > +	union ipa_cmd_hw_dma_mem_mem dma_shared_mem;
> >   	struct ipa_cmd_ip_packet_tag_status ip_packet_tag_status;
> >   };
> >   
> > @@ -154,6 +194,7 @@ static void ipa_cmd_validate_build(void)
> >   	 * of entries.
> >   	 */
> >   #define TABLE_SIZE	(TABLE_COUNT_MAX * sizeof(__le64))
> > +// TODO
> >   #define TABLE_COUNT_MAX	max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX)
> >   	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK));
> >   	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
> > @@ -405,15 +446,26 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
> >   {
> >   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
> >   	enum dma_data_direction direction = DMA_TO_DEVICE;
> > -	struct ipa_cmd_hw_ip_fltrt_init *payload;
> > +	union ipa_cmd_hw_ip_fltrt_init *payload;
> > +	enum ipa_version version = ipa->version;
> >   	union ipa_cmd_payload *cmd_payload;
> >   	dma_addr_t payload_addr;
> >   	u64 val;
> >   
> >   	/* Record the non-hash table offset and size */
> >   	offset += ipa->mem_offset;
> > -	val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
> > -	val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
> > +
> > +	if (version >= IPA_VERSION_3_0) {
> > +		val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
> > +		val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
> > +	} else if (opcode == IPA_CMD_IP_V4_FILTER_INIT ||
> > +		   opcode == IPA_CMD_IP_V4_ROUTING_INIT) {
> > +		val = u64_encode_bits(offset, IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK);
> > +		val |= u64_encode_bits(size, IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK);
> > +	} else { /* IPA <= v2.6L IPv6 */
> > +		val = u64_encode_bits(offset, IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK);
> > +		val |= u64_encode_bits(size, IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK);
> > +	}
> >   
> >   	/* The hash table offset and address are zero if its size is 0 */
> >   	if (hash_size) {
> > @@ -429,10 +481,10 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
> >   	payload = &cmd_payload->table_init;
> >   
> >   	/* Fill in all offsets and sizes and the non-hash table address */
> > -	if (hash_size)
> > -		payload->hash_rules_addr = cpu_to_le64(hash_addr);
> > -	payload->flags = cpu_to_le64(val);
> > -	payload->nhash_rules_addr = cpu_to_le64(addr);
> > +	if (hash_size && version >= IPA_VERSION_3_0)
> > +		payload->v3.hash_rules_addr = cpu_to_le64(hash_addr);
> > +	SET_DMA_FIELD(version, payload, flags, val);
> > +	SET_DMA_FIELD(version, payload, nhash_rules_addr, addr);
> >   
> >   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> >   			  direction, opcode);
> > @@ -445,7 +497,7 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
> >   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
> >   	enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL;
> >   	enum dma_data_direction direction = DMA_TO_DEVICE;
> > -	struct ipa_cmd_hw_hdr_init_local *payload;
> > +	union ipa_cmd_hw_hdr_init_local *payload;
> >   	union ipa_cmd_payload *cmd_payload;
> >   	dma_addr_t payload_addr;
> >   	u32 flags;
> > @@ -460,10 +512,10 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
> >   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
> >   	payload = &cmd_payload->hdr_init_local;
> >   
> > -	payload->hdr_table_addr = cpu_to_le64(addr);
> > +	SET_DMA_FIELD(ipa->version, payload, hdr_table_addr, addr);
> >   	flags = u32_encode_bits(size, HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
> >   	flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
> > -	payload->flags = cpu_to_le32(flags);
> > +	CMD_FIELD(ipa->version, payload, flags) = cpu_to_le32(flags);
> >   
> >   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> >   			  direction, opcode);
> > @@ -509,8 +561,11 @@ void ipa_cmd_register_write_add(struct ipa_trans *trans, u32 offset, u32 value,
> >   
> >   	} else {
> >   		flags = 0;	/* SKIP_CLEAR flag is always 0 */
> > -		options = u16_encode_bits(clear_option,
> > -					  REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
> > +		if (ipa->version > IPA_VERSION_2_6L)
> > +			options = u16_encode_bits(clear_option,
> > +					REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
> > +		else
> > +			options = 0;
> >   	}
> >   
> >   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
> > @@ -552,7 +607,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
> >   {
> >   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
> >   	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM;
> > -	struct ipa_cmd_hw_dma_mem_mem *payload;
> > +	enum ipa_version version = ipa->version;
> > +	union ipa_cmd_hw_dma_mem_mem *payload;
> >   	union ipa_cmd_payload *cmd_payload;
> >   	enum dma_data_direction direction;
> >   	dma_addr_t payload_addr;
> > @@ -571,8 +627,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
> >   	/* payload->clear_after_read was reserved prior to IPA v4.0.  It's
> >   	 * never needed for current code, so it's 0 regardless of version.
> >   	 */
> > -	payload->size = cpu_to_le16(size);
> > -	payload->local_addr = cpu_to_le16(offset);
> > +	CMD_FIELD(version, payload, size) = cpu_to_le16(size);
> > +	CMD_FIELD(version, payload, local_addr) = cpu_to_le16(offset);
> >   	/* payload->flags:
> >   	 *   direction:		0 = write to IPA, 1 read from IPA
> >   	 * Starting at v4.0 these are reserved; either way, all zero:
> > @@ -582,8 +638,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
> >   	 * since both values are 0 we won't bother OR'ing them in.
> >   	 */
> >   	flags = toward_ipa ? 0 : DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK;
> > -	payload->flags = cpu_to_le16(flags);
> > -	payload->system_addr = cpu_to_le64(addr);
> > +	CMD_FIELD(version, payload, flags) = cpu_to_le16(flags);
> > +	SET_DMA_FIELD(version, payload, system_addr, addr);
> >   
> >   	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
> >   
> > @@ -599,11 +655,17 @@ static void ipa_cmd_ip_tag_status_add(struct ipa_trans *trans)
> >   	struct ipa_cmd_ip_packet_tag_status *payload;
> >   	union ipa_cmd_payload *cmd_payload;
> >   	dma_addr_t payload_addr;
> > +	u64 tag_mask;
> > +
> > +	if (trans->dma_subsys->version <= IPA_VERSION_2_6L)
> > +		tag_mask = IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK;
> > +	else
> > +		tag_mask = IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK;
> >   
> >   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
> >   	payload = &cmd_payload->ip_packet_tag_status;
> >   
> > -	payload->tag = le64_encode_bits(0, IP_PACKET_TAG_STATUS_TAG_FMASK);
> > +	payload->tag = le64_encode_bits(0, tag_mask);
> >   
> >   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> >   			  direction, opcode);
> > diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
> > index d197959cc032..459fb4830244 100644
> > --- a/drivers/net/ipa/ipa_table.c
> > +++ b/drivers/net/ipa/ipa_table.c
> > @@ -8,6 +8,7 @@
> >   #include <linux/kernel.h>
> >   #include <linux/bits.h>
> >   #include <linux/bitops.h>
> > +#include <linux/module.h>
> >   #include <linux/bitfield.h>
> >   #include <linux/io.h>
> >   #include <linux/build_bug.h>
> > @@ -561,6 +562,19 @@ void ipa_table_config(struct ipa *ipa)
> >   	ipa_route_config(ipa, true);
> >   }
> >   
> > +static inline void *ipa_table_write(enum ipa_version version,
> > +				   void *virt, u64 value)
> > +{
> > +	if (IPA_IS_64BIT(version)) {
> > +		__le64 *ptr = virt;
> > +		*ptr = cpu_to_le64(value);
> > +	} else {
> > +		__le32 *ptr = virt;
> > +		*ptr = cpu_to_le32(value);
> > +	}
> > +	return virt + IPA_TABLE_ENTRY_SIZE(version);
> > +}
> > +
> >   /*
> >    * Initialize a coherent DMA allocation containing initialized filter and
> >    * route table data.  This is used when initializing or resetting the IPA
> > @@ -602,10 +616,11 @@ void ipa_table_config(struct ipa *ipa)
> >   int ipa_table_init(struct ipa *ipa)
> >   {
> >   	u32 count = max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX);
> > +	enum ipa_version version = ipa->version;
> >   	struct device *dev = &ipa->pdev->dev;
> > +	u64 filter_map = ipa->filter_map << 1;
> >   	dma_addr_t addr;
> > -	__le64 le_addr;
> > -	__le64 *virt;
> > +	void *virt;
> >   	size_t size;
> >   
> >   	ipa_table_validate_build();
> > @@ -626,19 +641,21 @@ int ipa_table_init(struct ipa *ipa)
> >   	ipa->table_addr = addr;
> >   
> >   	/* First slot is the zero rule */
> > -	*virt++ = 0;
> > +	virt = ipa_table_write(version, virt, 0);
> >   
> >   	/* Next is the filter table bitmap.  The "soft" bitmap value
> >   	 * must be converted to the hardware representation by shifting
> >   	 * it left one position.  (Bit 0 represents global filtering,
> >   	 * which is possible but not used.)
> >   	 */
> > -	*virt++ = cpu_to_le64((u64)ipa->filter_map << 1);
> > +	if (version <= IPA_VERSION_2_6L)
> > +		filter_map |= 1;
> > +
> > +	virt = ipa_table_write(version, virt, filter_map);
> >   
> >   	/* All the rest contain the DMA address of the zero rule */
> > -	le_addr = cpu_to_le64(addr);
> >   	while (count--)
> > -		*virt++ = le_addr;
> > +		virt = ipa_table_write(version, virt, addr);
> >   
> >   	return 0;
> >   }
> > diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
> > index 78a168ce6558..6e12fc49e45b 100644
> > --- a/drivers/net/ipa/ipa_table.h
> > +++ b/drivers/net/ipa/ipa_table.h
> > @@ -43,7 +43,7 @@ bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask);
> >    */
> >   static inline bool ipa_table_hash_support(struct ipa *ipa)
> >   {
> > -	return ipa->version != IPA_VERSION_4_2;
> > +	return ipa->version != IPA_VERSION_4_2 && ipa->version > IPA_VERSION_2_6L;
> >   }
> >   
> >   /**
> > 


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 18:17     ` Sireesh Kodali
  0 siblings, 0 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-10-18 18:17 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > IPA v2.x endpoints are the same as the endpoints on later versions. The
> > only big change was the addition of the "skip_config" flag. The only
> > other change is the backlog limit, which is a fixed number for IPA v2.6L
>
> Not much to say here. Your patches are reasonably small, which
> makes them easier to review (thank you).
>
> -Alex

I'm glad splitting them up paid off!

Regards,
Sireesh
>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_endpoint.c | 65 ++++++++++++++++++++++------------
> >   1 file changed, 43 insertions(+), 22 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> > index 7d3ab61cd890..024cf3a0ded0 100644
> > --- a/drivers/net/ipa/ipa_endpoint.c
> > +++ b/drivers/net/ipa/ipa_endpoint.c
> > @@ -360,8 +360,10 @@ void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable)
> >   {
> >   	u32 endpoint_id;
> >   
> > -	/* DELAY mode doesn't work correctly on IPA v4.2 */
> > -	if (ipa->version == IPA_VERSION_4_2)
> > +	/* DELAY mode doesn't work correctly on IPA v4.2
> > +	 * Pausing is not supported on IPA v2.6L
> > +	 */
> > +	if (ipa->version == IPA_VERSION_4_2 || ipa->version <= IPA_VERSION_2_6L)
> >   		return;
> >   
> >   	for (endpoint_id = 0; endpoint_id < IPA_ENDPOINT_MAX; endpoint_id++) {
> > @@ -383,6 +385,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
> >   {
> >   	u32 initialized = ipa->initialized;
> >   	struct ipa_trans *trans;
> > +	u32 value = 0, value_mask = ~0;
> >   	u32 count;
> >   
> >   	/* We need one command per modem TX endpoint.  We can get an upper
> > @@ -398,6 +401,11 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
> >   		return -EBUSY;
> >   	}
> >   
> > +	if (ipa->version <= IPA_VERSION_2_6L) {
> > +		value = aggr_force_close_fmask(true);
> > +		value_mask = aggr_force_close_fmask(true);
> > +	}
> > +
> >   	while (initialized) {
> >   		u32 endpoint_id = __ffs(initialized);
> >   		struct ipa_endpoint *endpoint;
> > @@ -416,7 +424,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
> >   		 * means status is disabled on the endpoint, and as a
> >   		 * result all other fields in the register are ignored.
> >   		 */
> > -		ipa_cmd_register_write_add(trans, offset, 0, ~0, false);
> > +		ipa_cmd_register_write_add(trans, offset, value, value_mask, false);
> >   	}
> >   
> >   	ipa_cmd_pipeline_clear_add(trans);
> > @@ -1531,8 +1539,10 @@ static void ipa_endpoint_program(struct ipa_endpoint *endpoint)
> >   	ipa_endpoint_init_mode(endpoint);
> >   	ipa_endpoint_init_aggr(endpoint);
> >   	ipa_endpoint_init_deaggr(endpoint);
> > -	ipa_endpoint_init_rsrc_grp(endpoint);
> > -	ipa_endpoint_init_seq(endpoint);
> > +	if (endpoint->ipa->version > IPA_VERSION_2_6L) {
> > +		ipa_endpoint_init_rsrc_grp(endpoint);
> > +		ipa_endpoint_init_seq(endpoint);
> > +	}
> >   	ipa_endpoint_status(endpoint);
> >   }
> >   
> > @@ -1592,7 +1602,6 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
> >   {
> >   	struct device *dev = &endpoint->ipa->pdev->dev;
> >   	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
> > -	bool stop_channel;
> >   	int ret;
> >   
> >   	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
> > @@ -1613,7 +1622,6 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
> >   {
> >   	struct device *dev = &endpoint->ipa->pdev->dev;
> >   	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
> > -	bool start_channel;
> >   	int ret;
> >   
> >   	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
> > @@ -1750,23 +1758,33 @@ int ipa_endpoint_config(struct ipa *ipa)
> >   	/* Find out about the endpoints supplied by the hardware, and ensure
> >   	 * the highest one doesn't exceed the number we support.
> >   	 */
> > -	val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
> > -
> > -	/* Our RX is an IPA producer */
> > -	rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
> > -	max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
> > -	if (max > IPA_ENDPOINT_MAX) {
> > -		dev_err(dev, "too many endpoints (%u > %u)\n",
> > -			max, IPA_ENDPOINT_MAX);
> > -		return -EINVAL;
> > -	}
> > -	rx_mask = GENMASK(max - 1, rx_base);
> > +	if (ipa->version <= IPA_VERSION_2_6L) {
> > +		// FIXME Not used anywhere?
> > +		if (ipa->version == IPA_VERSION_2_6L)
> > +			val = ioread32(ipa->reg_virt +
> > +					IPA_REG_V2_ENABLED_PIPES_OFFSET);
> > +		/* IPA v2.6L supports 20 pipes */
> > +		ipa->available = ipa->filter_map;
> > +		return 0;
> > +	} else {
> > +		val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
> > +
> > +		/* Our RX is an IPA producer */
> > +		rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
> > +		max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
> > +		if (max > IPA_ENDPOINT_MAX) {
> > +			dev_err(dev, "too many endpoints (%u > %u)\n",
> > +					max, IPA_ENDPOINT_MAX);
> > +			return -EINVAL;
> > +		}
> > +		rx_mask = GENMASK(max - 1, rx_base);
> >   
> > -	/* Our TX is an IPA consumer */
> > -	max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
> > -	tx_mask = GENMASK(max - 1, 0);
> > +		/* Our TX is an IPA consumer */
> > +		max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
> > +		tx_mask = GENMASK(max - 1, 0);
> >   
> > -	ipa->available = rx_mask | tx_mask;
> > +		ipa->available = rx_mask | tx_mask;
> > +	}
> >   
> >   	/* Check for initialized endpoints not supported by the hardware */
> >   	if (ipa->initialized & ~ipa->available) {
> > @@ -1865,6 +1883,9 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
> >   			filter_map |= BIT(data->endpoint_id);
> >   	}
> >   
> > +	if (ipa->version <= IPA_VERSION_2_6L)
> > +		filter_map = 0x1fffff;
> > +
> >   	if (!ipa_filter_map_valid(ipa, filter_map))
> >   		goto err_endpoint_exit;
> >   
> > 


* Re: [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 18:19     ` Sireesh Kodali
  0 siblings, 0 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-10-18 18:19 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > IPA v2.6L has an extra region to handle compression/decompression
> > acceleration. This region is used by some modems during modem init.
>
> So it has to be initialized? (I guess so.)

This is how downstream handles it, I haven't tested not initializing it.

>
> The memory size register apparently doesn't express things in
> units of 8 bytes either.
>

Indeed, with the hardware being 32-bit, it expresses things in units
of 4 bytes instead.

Regards,
Sireesh
> -Alex
>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_mem.c | 36 ++++++++++++++++++++++++++++++------
> >   drivers/net/ipa/ipa_mem.h |  5 ++++-
> >   2 files changed, 34 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
> > index 8acc88070a6f..bfcdc7e08de2 100644
> > --- a/drivers/net/ipa/ipa_mem.c
> > +++ b/drivers/net/ipa/ipa_mem.c
> > @@ -84,7 +84,7 @@ int ipa_mem_setup(struct ipa *ipa)
> >   	/* Get a transaction to define the header memory region and to zero
> >   	 * the processing context and modem memory regions.
> >   	 */
> > -	trans = ipa_cmd_trans_alloc(ipa, 4);
> > +	trans = ipa_cmd_trans_alloc(ipa, 5);
> >   	if (!trans) {
> >   		dev_err(&ipa->pdev->dev, "no transaction for memory setup\n");
> >   		return -EBUSY;
> > @@ -107,8 +107,14 @@ int ipa_mem_setup(struct ipa *ipa)
> >   	ipa_mem_zero_region_add(trans, IPA_MEM_AP_PROC_CTX);
> >   	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM);
> >   
> > +	ipa_mem_zero_region_add(trans, IPA_MEM_ZIP);
> > +
> >   	ipa_trans_commit_wait(trans);
> >   
> > +	/* On IPA version <=2.6L (except 2.5) there is no PROC_CTX.  */
> > +	if (ipa->version != IPA_VERSION_2_5 && ipa->version <= IPA_VERSION_2_6L)
> > +		return 0;
> > +
> >   	/* Tell the hardware where the processing context area is located */
> >   	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
> >   	offset = ipa->mem_offset + mem->offset;
> > @@ -147,6 +153,11 @@ static bool ipa_mem_id_valid(struct ipa *ipa, enum ipa_mem_id mem_id)
> >   	case IPA_MEM_END_MARKER:	/* pseudo region */
> >   		break;
> >   
> > +	case IPA_MEM_ZIP:
> > +		if (version == IPA_VERSION_2_6L)
> > +			return true;
> > +		break;
> > +
> >   	case IPA_MEM_STATS_TETHERING:
> >   	case IPA_MEM_STATS_DROP:
> >   		if (version < IPA_VERSION_4_0)
> > @@ -319,10 +330,15 @@ int ipa_mem_config(struct ipa *ipa)
> >   	/* Check the advertised location and size of the shared memory area */
> >   	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
> >   
> > -	/* The fields in the register are in 8 byte units */
> > -	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> > -	/* Make sure the end is within the region's mapped space */
> > -	mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> > +	if (IPA_VERSION_RANGE(ipa->version, 2_0, 2_6L)) {
> > +		/* The fields in the register are in 8 byte units */
> > +		ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> > +		/* Make sure the end is within the region's mapped space */
> > +		mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> > +	} else {
> > +		ipa->mem_offset = u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> > +		mem_size = u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> > +	}
> >   
> >   	/* If the sizes don't match, issue a warning */
> >   	if (ipa->mem_offset + mem_size < ipa->mem_size) {
> > @@ -564,6 +580,10 @@ static int ipa_smem_init(struct ipa *ipa, u32 item, size_t size)
> >   		return -EINVAL;
> >   	}
> >   
> > +	/* IPA v2.6L does not use IOMMU */
> > +	if (ipa->version <= IPA_VERSION_2_6L)
> > +		return 0;
> > +
> >   	domain = iommu_get_domain_for_dev(dev);
> >   	if (!domain) {
> >   		dev_err(dev, "no IOMMU domain found for SMEM\n");
> > @@ -591,6 +611,9 @@ static void ipa_smem_exit(struct ipa *ipa)
> >   	struct device *dev = &ipa->pdev->dev;
> >   	struct iommu_domain *domain;
> >   
> > +	if (ipa->version <= IPA_VERSION_2_6L)
> > +		return;
> > +
> >   	domain = iommu_get_domain_for_dev(dev);
> >   	if (domain) {
> >   		size_t size;
> > @@ -622,7 +645,8 @@ int ipa_mem_init(struct ipa *ipa, const struct ipa_mem_data *mem_data)
> >   	ipa->mem_count = mem_data->local_count;
> >   	ipa->mem = mem_data->local;
> >   
> > -	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64));
> > +	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, IPA_IS_64BIT(ipa->version) ?
> > +					DMA_BIT_MASK(64) : DMA_BIT_MASK(32));
> >   	if (ret) {
> >   		dev_err(dev, "error %d setting DMA mask\n", ret);
> >   		return ret;
> > diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
> > index 570bfdd99bff..be91cb38b6a8 100644
> > --- a/drivers/net/ipa/ipa_mem.h
> > +++ b/drivers/net/ipa/ipa_mem.h
> > @@ -47,8 +47,10 @@ enum ipa_mem_id {
> >   	IPA_MEM_UC_INFO,		/* 0 canaries */
> >   	IPA_MEM_V4_FILTER_HASHED,	/* 2 canaries */
> >   	IPA_MEM_V4_FILTER,		/* 2 canaries */
> > +	IPA_MEM_V4_FILTER_AP,		/* 2 canaries (IPA v2.0) */
> >   	IPA_MEM_V6_FILTER_HASHED,	/* 2 canaries */
> >   	IPA_MEM_V6_FILTER,		/* 2 canaries */
> > +	IPA_MEM_V6_FILTER_AP,		/* 0 canaries (IPA v2.0) */
> >   	IPA_MEM_V4_ROUTE_HASHED,	/* 2 canaries */
> >   	IPA_MEM_V4_ROUTE,		/* 2 canaries */
> >   	IPA_MEM_V6_ROUTE_HASHED,	/* 2 canaries */
> > @@ -57,7 +59,8 @@ enum ipa_mem_id {
> >   	IPA_MEM_AP_HEADER,		/* 0 canaries, optional */
> >   	IPA_MEM_MODEM_PROC_CTX,		/* 2 canaries */
> >   	IPA_MEM_AP_PROC_CTX,		/* 0 canaries */
> > -	IPA_MEM_MODEM,			/* 0/2 canaries */
> > +	IPA_MEM_ZIP,			/* 1 canary (IPA v2.6L) */
> > +	IPA_MEM_MODEM,			/* 0-2 canaries */
> >   	IPA_MEM_UC_EVENT_RING,		/* 1 canary, optional */
> >   	IPA_MEM_PDN_CONFIG,		/* 0/2 canaries (IPA v4.0+) */
> >   	IPA_MEM_STATS_QUOTA_MODEM,	/* 2/4 canaries (IPA v4.0+) */
> > 


* Re: [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 18:22     ` Sireesh Kodali
  0 siblings, 0 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-10-18 18:22 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > On IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETED, so we have
> > to rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know when it's
> > ready. We add a function here that marks uc_ready = true. This function
> > is called by ipa_uc.c when IPA_UC_RESPONSE_INIT_COMPLETED is handled.
>
> This should use the new ipa_mem_find() interface for getting the
> memory information for the ZIP region.
>

Got it, thanks

> I don't know where the IPA_UC_RESPONSE_INIT_COMPLETED gets sent
> but I presume it ends up calling ipa_qmi_signal_uc_loaded().
>

IPA_UC_RESPONSE_INIT_COMPLETED is handled by the ipa_uc sub-driver. The
handler calls ipa_qmi_signal_uc_loaded() once the response is received,
at which point we know the uc has been initialized.

> I think actually the DRIVER_INIT_COMPLETE message from the modem
> is saying "I finished initializing the microcontroller." And
> I've wondered why there is a duplicate mechanism. Maybe there
> was a race or something.
>

This makes sense. Given that some modems rely on the IPA block for
initialization, I wonder if Qualcomm decided it would be easier to allow
the modem to complete the uc initialization and send the signal instead.

Regards,
Sireesh
> -Alex
>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_qmi.c | 27 ++++++++++++++++++++++++++-
> >   drivers/net/ipa/ipa_qmi.h | 10 ++++++++++
> >   2 files changed, 36 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
> > index 7e2fe701cc4d..876e2a004f70 100644
> > --- a/drivers/net/ipa/ipa_qmi.c
> > +++ b/drivers/net/ipa/ipa_qmi.c
> > @@ -68,6 +68,11 @@
> >    * - The INDICATION_REGISTER request and INIT_COMPLETE indication are
> >    *   optional for non-initial modem boots, and have no bearing on the
> >    *   determination of when things are "ready"
> > + *
> > + * Note that on IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETE
> > + * request. Thus, we rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know
> > + * when the uc is ready. The rest of the process is the same on IPA v2.x and
> > + * later IPA versions
> >    */
> >   
> >   #define IPA_HOST_SERVICE_SVC_ID		0x31
> > @@ -345,7 +350,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
> >   			req.hdr_proc_ctx_tbl_info.start + mem->size - 1;
> >   	}
> >   
> > -	/* Nothing to report for the compression table (zip_tbl_info) */
> > +	mem = &ipa->mem[IPA_MEM_ZIP];
> > +	if (mem->size) {
> > +		req.zip_tbl_info_valid = 1;
> > +		req.zip_tbl_info.start = ipa->mem_offset + mem->offset;
> > +		req.zip_tbl_info.end = ipa->mem_offset + mem->size - 1;
> > +	}
> >   
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE_HASHED);
> >   	if (mem->size) {
> > @@ -525,6 +535,21 @@ int ipa_qmi_setup(struct ipa *ipa)
> >   	return ret;
> >   }
> >   
> > +/* With IPA v2 modem is not required to send DRIVER_INIT_COMPLETE request to AP.
> > + * We start operation as soon as IPA_UC_RESPONSE_INIT_COMPLETED irq is triggered.
> > + */
> > +void ipa_qmi_signal_uc_loaded(struct ipa *ipa)
> > +{
> > +	struct ipa_qmi *ipa_qmi = &ipa->qmi;
> > +
> > +	/* This is needed only on IPA 2.x */
> > +	if (ipa->version > IPA_VERSION_2_6L)
> > +		return;
> > +
> > +	ipa_qmi->uc_ready = true;
> > +	ipa_qmi_ready(ipa_qmi);
> > +}
> > +
> >   /* Tear down IPA QMI handles */
> >   void ipa_qmi_teardown(struct ipa *ipa)
> >   {
> > diff --git a/drivers/net/ipa/ipa_qmi.h b/drivers/net/ipa/ipa_qmi.h
> > index 856ef629ccc8..4962d88b0d22 100644
> > --- a/drivers/net/ipa/ipa_qmi.h
> > +++ b/drivers/net/ipa/ipa_qmi.h
> > @@ -55,6 +55,16 @@ struct ipa_qmi {
> >    */
> >   int ipa_qmi_setup(struct ipa *ipa);
> >   
> > +/**
> > + * ipa_qmi_signal_uc_loaded() - Signal that the UC has been loaded
> > + * @ipa:		IPA pointer
> > + *
> > + * This is called when the uc indicates that it is ready. This exists, because
> > + * on IPA v2.x, the modem does not send a DRIVER_INIT_COMPLETED. Thus we have
> > + * to rely on the uc's INIT_COMPLETED response to know if it was initialized
> > + */
> > +void ipa_qmi_signal_uc_loaded(struct ipa *ipa);
> > +
> >   /**
> >    * ipa_qmi_teardown() - Tear down IPA QMI handles
> >    * @ipa:		IPA pointer
> > 


* Re: [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 18:35     ` Sireesh Kodali
  0 siblings, 0 replies; 49+ messages in thread
From: Sireesh Kodali @ 2021-10-18 18:35 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > This commit adds the config for IPA v2.0, v2.5, v2.6L. IPA v2.5 is found
> > on msm8996. IPA v2.6L hardware is found on following SoCs: msm8920,
> > msm8940, msm8952, msm8953, msm8956, msm8976, sdm630, sdm660. No
> > SoC-specific configuration in ipa driver is required.
> > 
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
>
> I will not look at this in great detail right now. It looks
> good to me, but I didn't notice where "channel_name" got
> defined. I'm not sure what the BCR value represents either.
>

I probably messed up while splitting the commits; it should be easy
enough to fix. As for the BCR, it was simply `#define`d in
downstream, with no comments, leaving us clueless as to what the magic
number means :(

Regards,
Sireesh
> -Alex
>
> > ---
> >   drivers/net/ipa/Makefile        |   7 +-
> >   drivers/net/ipa/ipa_data-v2.c   | 369 ++++++++++++++++++++++++++++++++
> >   drivers/net/ipa/ipa_data-v3.1.c |   2 +-
> >   drivers/net/ipa/ipa_data.h      |   3 +
> >   drivers/net/ipa/ipa_main.c      |  15 ++
> >   drivers/net/ipa/ipa_sysfs.c     |   6 +
> >   6 files changed, 398 insertions(+), 4 deletions(-)
> >   create mode 100644 drivers/net/ipa/ipa_data-v2.c
> > 
> > diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
> > index 4abebc667f77..858fbf76cff3 100644
> > --- a/drivers/net/ipa/Makefile
> > +++ b/drivers/net/ipa/Makefile
> > @@ -7,6 +7,7 @@ ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
> >   				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
> >   				ipa_sysfs.o
> >   
> > -ipa-y			+=	ipa_data-v3.1.o ipa_data-v3.5.1.o \
> > -				ipa_data-v4.2.o ipa_data-v4.5.o \
> > -				ipa_data-v4.9.o ipa_data-v4.11.o
> > +ipa-y			+=	ipa_data-v2.o ipa_data-v3.1.o \
> > +				ipa_data-v3.5.1.o ipa_data-v4.2.o \
> > +				ipa_data-v4.5.o ipa_data-v4.9.o \
> > +				ipa_data-v4.11.o
> > diff --git a/drivers/net/ipa/ipa_data-v2.c b/drivers/net/ipa/ipa_data-v2.c
> > new file mode 100644
> > index 000000000000..869b8a1a45d6
> > --- /dev/null
> > +++ b/drivers/net/ipa/ipa_data-v2.c
> > @@ -0,0 +1,369 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
> > + * Copyright (C) 2019-2020 Linaro Ltd.
> > + */
> > +
> > +#include <linux/log2.h>
> > +
> > +#include "ipa_data.h"
> > +#include "ipa_endpoint.h"
> > +#include "ipa_mem.h"
> > +
> > +/* Endpoint configuration for the IPA v2 hardware. */
> > +static const struct ipa_gsi_endpoint_data ipa_endpoint_data[] = {
> > +	[IPA_ENDPOINT_AP_COMMAND_TX] = {
> > +		.ee_id		= GSI_EE_AP,
> > +		.channel_id	= 3,
> > +		.endpoint_id	= 3,
> > +		.channel_name	= "cmd_tx",
> > +		.toward_ipa	= true,
> > +		.channel = {
> > +			.tre_count	= 256,
> > +			.event_count	= 256,
> > +			.tlv_count	= 20,
> > +		},
> > +		.endpoint = {
> > +			.config	= {
> > +				.dma_mode	= true,
> > +				.dma_endpoint	= IPA_ENDPOINT_AP_LAN_RX,
> > +			},
> > +		},
> > +	},
> > +	[IPA_ENDPOINT_AP_LAN_RX] = {
> > +		.ee_id		= GSI_EE_AP,
> > +		.channel_id	= 2,
> > +		.endpoint_id	= 2,
> > +		.channel_name	= "ap_lan_rx",
> > +		.channel = {
> > +			.tre_count	= 256,
> > +			.event_count	= 256,
> > +			.tlv_count	= 8,
> > +		},
> > +		.endpoint	= {
> > +			.config	= {
> > +				.aggregation	= true,
> > +				.status_enable	= true,
> > +				.rx = {
> > +					.pad_align	= ilog2(sizeof(u32)),
> > +				},
> > +			},
> > +		},
> > +	},
> > +	[IPA_ENDPOINT_AP_MODEM_TX] = {
> > +		.ee_id		= GSI_EE_AP,
> > +		.channel_id	= 4,
> > +		.endpoint_id	= 4,
> > +		.channel_name	= "ap_modem_tx",
> > +		.toward_ipa	= true,
> > +		.channel = {
> > +			.tre_count	= 256,
> > +			.event_count	= 256,
> > +			.tlv_count	= 8,
> > +		},
> > +		.endpoint	= {
> > +			.config	= {
> > +				.qmap		= true,
> > +				.status_enable	= true,
> > +				.tx = {
> > +					.status_endpoint =
> > +						IPA_ENDPOINT_AP_LAN_RX,
> > +				},
> > +			},
> > +		},
> > +	},
> > +	[IPA_ENDPOINT_AP_MODEM_RX] = {
> > +		.ee_id		= GSI_EE_AP,
> > +		.channel_id	= 5,
> > +		.endpoint_id	= 5,
> > +		.channel_name	= "ap_modem_rx",
> > +		.toward_ipa	= false,
> > +		.channel = {
> > +			.tre_count	= 256,
> > +			.event_count	= 256,
> > +			.tlv_count	= 8,
> > +		},
> > +		.endpoint	= {
> > +			.config = {
> > +				.aggregation	= true,
> > +				.qmap		= true,
> > +			},
> > +		},
> > +	},
> > +	[IPA_ENDPOINT_MODEM_LAN_TX] = {
> > +		.ee_id		= GSI_EE_MODEM,
> > +		.channel_id	= 6,
> > +		.endpoint_id	= 6,
> > +		.channel_name	= "modem_lan_tx",
> > +		.toward_ipa	= true,
> > +	},
> > +	[IPA_ENDPOINT_MODEM_COMMAND_TX] = {
> > +		.ee_id		= GSI_EE_MODEM,
> > +		.channel_id	= 7,
> > +		.endpoint_id	= 7,
> > +		.channel_name	= "modem_cmd_tx",
> > +		.toward_ipa	= true,
> > +	},
> > +	[IPA_ENDPOINT_MODEM_LAN_RX] = {
> > +		.ee_id		= GSI_EE_MODEM,
> > +		.channel_id	= 8,
> > +		.endpoint_id	= 8,
> > +		.channel_name	= "modem_lan_rx",
> > +		.toward_ipa	= false,
> > +	},
> > +	[IPA_ENDPOINT_MODEM_AP_RX] = {
> > +		.ee_id		= GSI_EE_MODEM,
> > +		.channel_id	= 9,
> > +		.endpoint_id	= 9,
> > +		.channel_name	= "modem_ap_rx",
> > +		.toward_ipa	= false,
> > +	},
> > +};
> > +
> > +static struct ipa_interconnect_data ipa_interconnect_data[] = {
> > +	{
> > +		.name = "memory",
> > +		.peak_bandwidth	= 1200000,	/* 1200 MBps */
> > +		.average_bandwidth = 100000,	/* 100 MBps */
> > +	},
> > +	{
> > +		.name = "imem",
> > +		.peak_bandwidth	= 350000,	/* 350 MBps */
> > +		.average_bandwidth  = 0,	/* unused */
> > +	},
> > +	{
> > +		.name = "config",
> > +		.peak_bandwidth	= 40000,	/* 40 MBps */
> > +		.average_bandwidth = 0,		/* unused */
> > +	},
> > +};
> > +
> > +static struct ipa_power_data ipa_power_data = {
> > +	.core_clock_rate	= 200 * 1000 * 1000,	/* Hz */
> > +	.interconnect_count	= ARRAY_SIZE(ipa_interconnect_data),
> > +	.interconnect_data	= ipa_interconnect_data,
> > +};
> > +
> > +/* IPA-resident memory region configuration for v2.0 */
> > +static const struct ipa_mem ipa_mem_local_data_v2_0[IPA_MEM_COUNT] = {
> > +	[IPA_MEM_UC_SHARED] = {
> > +		.offset         = 0,
> > +		.size           = 0x80,
> > +		.canary_count   = 0,
> > +	},
> > +	[IPA_MEM_V4_FILTER] = {
> > +		.offset		= 0x0080,
> > +		.size		= 0x0058,
> > +		.canary_count	= 0,
> > +	},
> > +	[IPA_MEM_V6_FILTER] = {
> > +		.offset		= 0x00e0,
> > +		.size		= 0x0058,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_V4_ROUTE] = {
> > +		.offset		= 0x0140,
> > +		.size		= 0x002c,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_V6_ROUTE] = {
> > +		.offset		= 0x0170,
> > +		.size		= 0x002c,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_MODEM_HEADER] = {
> > +		.offset		= 0x01a0,
> > +		.size		= 0x0140,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_AP_HEADER] = {
> > +		.offset		= 0x02e0,
> > +		.size		= 0x0048,
> > +		.canary_count	= 0,
> > +	},
> > +	[IPA_MEM_MODEM] = {
> > +		.offset		= 0x032c,
> > +		.size		= 0x0dcc,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_V4_FILTER_AP] = {
> > +		.offset		= 0x10fc,
> > +		.size		= 0x0780,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_V6_FILTER_AP] = {
> > +		.offset		= 0x187c,
> > +		.size		= 0x055c,
> > +		.canary_count	= 0,
> > +	},
> > +	[IPA_MEM_UC_INFO] = {
> > +		.offset		= 0x1ddc,
> > +		.size		= 0x0124,
> > +		.canary_count	= 1,
> > +	},
> > +};
> > +
> > +static struct ipa_mem_data ipa_mem_data_v2_0 = {
> > +	.local		= ipa_mem_local_data_v2_0,
> > +	.smem_id	= 497,
> > +	.smem_size	= 0x00001f00,
> > +};
> > +
> > +/* Configuration data for IPAv2.0 */
> > +const struct ipa_data ipa_data_v2_0 = {
> > +	.version	= IPA_VERSION_2_0,
> > +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> > +	.endpoint_data	= ipa_endpoint_data,
> > +	.mem_data	= &ipa_mem_data_v2_0,
> > +	.power_data	= &ipa_power_data,
> > +};
> > +
> > +/* IPA-resident memory region configuration for v2.5 */
> > +static const struct ipa_mem ipa_mem_local_data_v2_5[IPA_MEM_COUNT] = {
> > +	[IPA_MEM_UC_SHARED] = {
> > +		.offset         = 0,
> > +		.size           = 0x80,
> > +		.canary_count   = 0,
> > +	},
> > +	[IPA_MEM_UC_INFO] = {
> > +		.offset		= 0x0080,
> > +		.size		= 0x0200,
> > +		.canary_count	= 0,
> > +	},
> > +	[IPA_MEM_V4_FILTER] = {
> > +		.offset		= 0x0288,
> > +		.size		= 0x0058,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_V6_FILTER] = {
> > +		.offset		= 0x02e8,
> > +		.size		= 0x0058,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_V4_ROUTE] = {
> > +		.offset		= 0x0348,
> > +		.size		= 0x003c,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_V6_ROUTE] = {
> > +		.offset		= 0x0388,
> > +		.size		= 0x003c,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_MODEM_HEADER] = {
> > +		.offset		= 0x03c8,
> > +		.size		= 0x0140,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_MODEM_PROC_CTX] = {
> > +		.offset		= 0x0510,
> > +		.size		= 0x0200,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_AP_PROC_CTX] = {
> > +		.offset		= 0x0710,
> > +		.size		= 0x0200,
> > +		.canary_count	= 0,
> > +	},
> > +	[IPA_MEM_MODEM] = {
> > +		.offset		= 0x0914,
> > +		.size		= 0x16a8,
> > +		.canary_count	= 1,
> > +	},
> > +};
> > +
> > +static struct ipa_mem_data ipa_mem_data_v2_5 = {
> > +	.local		= ipa_mem_local_data_v2_5,
> > +	.smem_id	= 497,
> > +	.smem_size	= 0x00002000,
> > +};
> > +
> > +/* Configuration data for IPAv2.5 */
> > +const struct ipa_data ipa_data_v2_5 = {
> > +	.version	= IPA_VERSION_2_5,
> > +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> > +	.endpoint_data	= ipa_endpoint_data,
> > +	.mem_data	= &ipa_mem_data_v2_5,
> > +	.power_data	= &ipa_power_data,
> > +};
> > +
> > +/* IPA-resident memory region configuration for v2.6L */
> > +static const struct ipa_mem ipa_mem_local_data_v2_6L[IPA_MEM_COUNT] = {
> > +	{
> > +		.id		= IPA_MEM_UC_SHARED,
> > +		.offset         = 0,
> > +		.size           = 0x80,
> > +		.canary_count   = 0,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_UC_INFO,
> > +		.offset		= 0x0080,
> > +		.size		= 0x0200,
> > +		.canary_count	= 0,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_V4_FILTER,
> > +		.offset		= 0x0288,
> > +		.size		= 0x0058,
> > +		.canary_count	= 2,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_V6_FILTER,
> > +		.offset		= 0x02e8,
> > +		.size		= 0x0058,
> > +		.canary_count	= 2,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_V4_ROUTE,
> > +		.offset		= 0x0348,
> > +		.size		= 0x003c,
> > +		.canary_count	= 2,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_V6_ROUTE,
> > +		.offset		= 0x0388,
> > +		.size		= 0x003c,
> > +		.canary_count	= 1,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_MODEM_HEADER,
> > +		.offset		= 0x03c8,
> > +		.size		= 0x0140,
> > +		.canary_count	= 1,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_ZIP,
> > +		.offset		= 0x0510,
> > +		.size		= 0x0200,
> > +		.canary_count	= 2,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_MODEM,
> > +		.offset		= 0x0714,
> > +		.size		= 0x18e8,
> > +		.canary_count	= 1,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_END_MARKER,
> > +		.offset		= 0x2000,
> > +		.size		= 0,
> > +		.canary_count	= 1,
> > +	},
> > +};
> > +
> > +static struct ipa_mem_data ipa_mem_data_v2_6L = {
> > +	.local		= ipa_mem_local_data_v2_6L,
> > +	.smem_id	= 497,
> > +	.smem_size	= 0x00002000,
> > +};
> > +
> > +/* Configuration data for IPAv2.6L */
> > +const struct ipa_data ipa_data_v2_6L = {
> > +	.version	= IPA_VERSION_2_6L,
> > +	/* Unfortunately we don't know what this BCR value corresponds to */
> > +	.backward_compat = 0x1fff7f,
> > +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> > +	.endpoint_data	= ipa_endpoint_data,
> > +	.mem_data	= &ipa_mem_data_v2_6L,
> > +	.power_data	= &ipa_power_data,
> > +};
> > diff --git a/drivers/net/ipa/ipa_data-v3.1.c b/drivers/net/ipa/ipa_data-v3.1.c
> > index 06ddb85f39b2..12d231232756 100644
> > --- a/drivers/net/ipa/ipa_data-v3.1.c
> > +++ b/drivers/net/ipa/ipa_data-v3.1.c
> > @@ -6,7 +6,7 @@
> >   
> >   #include <linux/log2.h>
> >   
> > -#include "gsi.h"
> > +#include "ipa_dma.h"
> >   #include "ipa_data.h"
> >   #include "ipa_endpoint.h"
> >   #include "ipa_mem.h"
> > diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
> > index 7d62d49f414f..e7ce2e9388b6 100644
> > --- a/drivers/net/ipa/ipa_data.h
> > +++ b/drivers/net/ipa/ipa_data.h
> > @@ -301,6 +301,9 @@ struct ipa_data {
> >   	const struct ipa_power_data *power_data;
> >   };
> >   
> > +extern const struct ipa_data ipa_data_v2_0;
> > +extern const struct ipa_data ipa_data_v2_5;
> > +extern const struct ipa_data ipa_data_v2_6L;
> >   extern const struct ipa_data ipa_data_v3_1;
> >   extern const struct ipa_data ipa_data_v3_5_1;
> >   extern const struct ipa_data ipa_data_v4_2;
> > diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> > index b437fbf95edf..3ae5c5c6734b 100644
> > --- a/drivers/net/ipa/ipa_main.c
> > +++ b/drivers/net/ipa/ipa_main.c
> > @@ -560,6 +560,18 @@ static int ipa_firmware_load(struct device *dev)
> >   }
> >   
> >   static const struct of_device_id ipa_match[] = {
> > +	{
> > +		.compatible	= "qcom,ipa-v2.0",
> > +		.data		= &ipa_data_v2_0,
> > +	},
> > +	{
> > +		.compatible	= "qcom,msm8996-ipa",
> > +		.data		= &ipa_data_v2_5,
> > +	},
> > +	{
> > +		.compatible	= "qcom,msm8953-ipa",
> > +		.data		= &ipa_data_v2_6L,
> > +	},
> >   	{
> >   		.compatible	= "qcom,msm8998-ipa",
> >   		.data		= &ipa_data_v3_1,
> > @@ -632,6 +644,9 @@ static void ipa_validate_build(void)
> >   static bool ipa_version_valid(enum ipa_version version)
> >   {
> >   	switch (version) {
> > +	case IPA_VERSION_2_0:
> > +	case IPA_VERSION_2_5:
> > +	case IPA_VERSION_2_6L:
> >   	case IPA_VERSION_3_0:
> >   	case IPA_VERSION_3_1:
> >   	case IPA_VERSION_3_5:
> > diff --git a/drivers/net/ipa/ipa_sysfs.c b/drivers/net/ipa/ipa_sysfs.c
> > index ff61dbdd70d8..f5d159f6bc06 100644
> > --- a/drivers/net/ipa/ipa_sysfs.c
> > +++ b/drivers/net/ipa/ipa_sysfs.c
> > @@ -14,6 +14,12 @@
> >   static const char *ipa_version_string(struct ipa *ipa)
> >   {
> >   	switch (ipa->version) {
> > +	case IPA_VERSION_2_0:
> > +		return "2.0";
> > +	case IPA_VERSION_2_5:
> > +		return "2.5";
> > +	case IPA_VERSION_2_6L:
> > +		return "2.6L";
> >   	case IPA_VERSION_3_0:
> >   		return "3.0";
> >   	case IPA_VERSION_3_1:
> > 


Thread overview: 49+ messages
2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
2021-09-20  3:07 ` [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration Sireesh Kodali
2021-10-13 22:28   ` Alex Elder
2021-10-18 16:12     ` Sireesh Kodali
2021-09-20  3:07 ` [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support Sireesh Kodali
2021-10-13 22:28   ` Alex Elder
2021-10-18 16:16     ` Sireesh Kodali
2021-09-20  3:07 ` [RFC PATCH 03/17] net: ipa: Refactor GSI code Sireesh Kodali
2021-10-13 22:29   ` Alex Elder
2021-09-20  3:07 ` [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface Sireesh Kodali
2021-10-13 22:29   ` Alex Elder
2021-10-18 16:45     ` Sireesh Kodali
2021-09-20  3:07 ` [RFC PATCH 05/17] net: ipa: Check interrupts for availability Sireesh Kodali
2021-10-13 22:29   ` Alex Elder
2021-09-20  3:08 ` [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait Sireesh Kodali
2021-10-13 22:29   ` Alex Elder
2021-10-18 17:02     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions Sireesh Kodali
2021-10-13 22:29   ` Alex Elder
2021-10-18 17:25     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts Sireesh Kodali
2021-10-13 22:29   ` Alex Elder
2021-09-20  3:08 ` [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport Sireesh Kodali
2021-09-20 14:31   ` kernel test robot
2021-10-13 22:30   ` Alex Elder
2021-10-18 17:30     ` Sireesh Kodali
2021-09-20  3:08 ` [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-10-18 18:13     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-10-18 18:17     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-10-18 18:19     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-10-18 18:22     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-09-20  3:08 ` [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-09-20  3:08 ` [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-10-18 18:35     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA Sireesh Kodali
2021-09-23 12:42   ` Rob Herring
2021-10-13 22:31   ` Alex Elder
2021-10-13 22:27 ` [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Alex Elder
