phone-devel.vger.kernel.org archive mirror
* [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x
@ 2021-09-20  3:07 Sireesh Kodali
  2021-09-20  3:07 ` [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration Sireesh Kodali
                   ` (16 more replies)
  0 siblings, 17 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:07 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali

Hi,

This RFC patch series adds support for IPA v2, v2.5 and v2.6L
(collectively referred to as IPA v2.x).

Basic description:
IPA v2.x is the older version of the IPA hardware found on Qualcomm
SoCs. The biggest differences between v2.x and later versions are:
- 32 bit hardware (the IPA microcontroller is 32 bit)
- BAM (as opposed to GSI) as the DMA transport
- Changes to the QMI init sequence (described in the commit message)

The fact that IPA v2.x is 32 bit only affects us directly in the table
init code. However, its impact is felt in other parts of the code, as it
changes the size of fields in various structs (e.g. in the commands that
can be sent).
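
To illustrate the kind of difference this causes, here is a small,
standalone sketch (the struct names and layouts are invented for the
example, not taken from the driver): the same logical command carries a
32-bit DMA address on IPA v2.x hardware but a 64-bit one on later
versions, so the struct layout and sizeof differ by generation.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Hypothetical command layouts, for illustration only: the 32-bit
 * hardware generation uses a 32-bit DMA address field, the later
 * generation a 64-bit one, changing the overall struct size. */
struct example_cmd_v2 {
	uint16_t flags;
	uint16_t size;
	uint32_t addr;		/* 32-bit hardware: 32-bit DMA address */
};

struct example_cmd_v3 {
	uint16_t flags;
	uint16_t size;
	uint32_t reserved;
	uint64_t addr;		/* 64-bit DMA address */
};

/* Pick the command size for the running hardware generation */
static size_t example_cmd_size(int is_64bit)
{
	return is_64bit ? sizeof(struct example_cmd_v3)
			: sizeof(struct example_cmd_v2);
}
```

This is why the version check cannot stay confined to the table code:
any path that builds such commands has to know the field widths.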

BAM support is already present in the mainline kernel; however, it lacks
two things:
- Support for DMA metadata, to pass the size of a transaction from the
  hardware to the DMA client
- Support for immediate commands, which are needed to pass commands from
  the driver to the microcontroller

Separate patch series have been created to deal with these (linked at
the end).

This patch series adds support for BAM as a transport by refactoring the
current GSI code to create an abstract uniform API on top. This API
allows the rest of the driver to handle DMA without worrying about the
IPA version.
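
The dispatch pattern behind that abstract API looks roughly like the
following standalone sketch (heavily simplified; the struct and function
names here are invented for illustration, and the real interface in this
series carries many more operations): each transport fills in function
pointers at init time, and callers go through thin wrappers without
knowing whether GSI or BAM is underneath.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the real callback table: one operation only. */
struct ipa_dma_sketch {
	int (*channel_start)(struct ipa_dma_sketch *dma, uint32_t channel_id);
	int last_started;	/* example state, not in the real driver */
};

/* Version-agnostic wrapper the rest of the driver would call */
static inline int ipa_channel_start_sketch(struct ipa_dma_sketch *dma,
					   uint32_t channel_id)
{
	return dma->channel_start(dma, channel_id);
}

/* A fake "BAM" backend for demonstration */
static int bam_channel_start(struct ipa_dma_sketch *dma, uint32_t channel_id)
{
	dma->last_started = (int)channel_id;
	return 0;
}

/* Backend init assigns its callbacks, as gsi_init() does in the series */
static void bam_init(struct ipa_dma_sketch *dma)
{
	dma->channel_start = bam_channel_start;
	dma->last_started = -1;
}
```

Swapping transports then only means calling a different init function;
no caller of the wrapper changes.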

The final thing that hasn't been touched by this patch series is the IPA
resource manager. The downstream CAF kernel appears to share the
resource code between IPA v2.x and IPA v3.x, which should mean all it
would take to add resource support on IPA v2.x is to add the
definitions in the ipa_data.

Testing:
This patch series was tested on kernel version 5.13 on a phone with
SDM625 (IPA v2.6L), and a phone with MSM8996 (IPA v2.5). The phone with
IPA v2.5 was able to get an IP address using modem-manager, although
sending/receiving packets was not tested. The phone with IPA v2.6L was
able to get an IP, but was unable to send/receive packets. Its modem
also relies on IPA v2.6L's compression/decompression support, and
without this patch series, the modem simply crashes and restarts,
waiting for the IPA block to come up.

This patch series is based on code from the downstream CAF kernel v4.9.

There are some things in this patch series that would obviously not get
accepted in their current form:
- All IPA 2.x data is in a single file
- Some stray printks might still be around
- Some values have been hardcoded (e.g. the filter_map)
Please excuse these.

Lastly, this patch series depends upon the following patches for BAM:
[0]: https://lkml.org/lkml/2021/9/19/126
[1]: https://lkml.org/lkml/2021/9/19/135

Regards,
Sireesh Kodali

Sireesh Kodali (10):
  net: ipa: Add IPA v2.x register definitions
  net: ipa: Add support for using BAM as a DMA transport
  net: ipa: Add support for IPA v2.x commands and table init
  net: ipa: Add support for IPA v2.x endpoints
  net: ipa: Add support for IPA v2.x memory map
  net: ipa: Add support for IPA v2.x in the driver's QMI interface
  net: ipa: Add support for IPA v2 microcontroller
  net: ipa: Add IPA v2.6L initialization sequence support
  net: ipa: Add hw config describing IPA v2.x hardware
  dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA

Vladimir Lypak (7):
  net: ipa: Correct ipa_status_opcode enumeration
  net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support
  net: ipa: Refactor GSI code
  net: ipa: Establish ipa_dma interface
  net: ipa: Check interrupts for availability
  net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait
  net: ipa: Add support for IPA v2.x interrupts

 .../devicetree/bindings/net/qcom,ipa.yaml     |   2 +
 drivers/net/ipa/Makefile                      |  11 +-
 drivers/net/ipa/bam.c                         | 525 ++++++++++++++++++
 drivers/net/ipa/gsi.c                         | 322 ++++++-----
 drivers/net/ipa/ipa.h                         |   8 +-
 drivers/net/ipa/ipa_cmd.c                     | 244 +++++---
 drivers/net/ipa/ipa_cmd.h                     |  20 +-
 drivers/net/ipa/ipa_data-v2.c                 | 369 ++++++++++++
 drivers/net/ipa/ipa_data-v3.1.c               |   2 +-
 drivers/net/ipa/ipa_data-v3.5.1.c             |   2 +-
 drivers/net/ipa/ipa_data-v4.11.c              |   2 +-
 drivers/net/ipa/ipa_data-v4.2.c               |   2 +-
 drivers/net/ipa/ipa_data-v4.5.c               |   2 +-
 drivers/net/ipa/ipa_data-v4.9.c               |   2 +-
 drivers/net/ipa/ipa_data.h                    |   4 +
 drivers/net/ipa/{gsi.h => ipa_dma.h}          | 179 +++---
 .../ipa/{gsi_private.h => ipa_dma_private.h}  |  46 +-
 drivers/net/ipa/ipa_endpoint.c                | 188 ++++---
 drivers/net/ipa/ipa_endpoint.h                |   6 +-
 drivers/net/ipa/ipa_gsi.c                     |  18 +-
 drivers/net/ipa/ipa_gsi.h                     |  12 +-
 drivers/net/ipa/ipa_interrupt.c               |  36 +-
 drivers/net/ipa/ipa_main.c                    |  82 ++-
 drivers/net/ipa/ipa_mem.c                     |  55 +-
 drivers/net/ipa/ipa_mem.h                     |   5 +-
 drivers/net/ipa/ipa_power.c                   |   4 +-
 drivers/net/ipa/ipa_qmi.c                     |  37 +-
 drivers/net/ipa/ipa_qmi.h                     |  10 +
 drivers/net/ipa/ipa_reg.h                     | 184 +++++-
 drivers/net/ipa/ipa_resource.c                |   3 +
 drivers/net/ipa/ipa_smp2p.c                   |  11 +-
 drivers/net/ipa/ipa_sysfs.c                   |   6 +
 drivers/net/ipa/ipa_table.c                   |  86 +--
 drivers/net/ipa/ipa_table.h                   |   6 +-
 drivers/net/ipa/{gsi_trans.c => ipa_trans.c}  | 182 +++---
 drivers/net/ipa/{gsi_trans.h => ipa_trans.h}  |  78 +--
 drivers/net/ipa/ipa_uc.c                      |  96 ++--
 drivers/net/ipa/ipa_version.h                 |  12 +
 38 files changed, 2133 insertions(+), 726 deletions(-)
 create mode 100644 drivers/net/ipa/bam.c
 create mode 100644 drivers/net/ipa/ipa_data-v2.c
 rename drivers/net/ipa/{gsi.h => ipa_dma.h} (57%)
 rename drivers/net/ipa/{gsi_private.h => ipa_dma_private.h} (66%)
 rename drivers/net/ipa/{gsi_trans.c => ipa_trans.c} (80%)
 rename drivers/net/ipa/{gsi_trans.h => ipa_trans.h} (71%)

-- 
2.33.0


^ permalink raw reply	[flat|nested] 46+ messages in thread

* [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
@ 2021-09-20  3:07 ` Sireesh Kodali
  2021-10-13 22:28   ` Alex Elder
  2021-09-20  3:07 ` [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support Sireesh Kodali
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:07 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

The values in the enumeration were defined as bitmasks (powers of two
derived from the actual opcodes). However, they are not used as
bitmasks: the ipa_endpoint_status_skip() and ipa_status_format_packet()
functions compare them directly with the opcode from the status packet.
This commit converts these values to the actual hardware constants.
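
To make the relationship concrete: each old enum value was the
corresponding bit, i.e. old == 1 << opcode. A small standalone check
(plain C, table and function names invented for the example) over the
before/after pairs from this patch:

```c
#include <assert.h>
#include <stdint.h>

/* Before/after pairs from this patch: the old enum values were powers
 * of two derived from the real hardware opcodes. */
static const struct {
	uint32_t old_bitmask;	/* value before this patch */
	uint32_t opcode;	/* actual hardware opcode after */
} opcode_map[] = {
	{ 0x01, 0 },	/* IPA_STATUS_OPCODE_PACKET */
	{ 0x04, 2 },	/* IPA_STATUS_OPCODE_DROPPED_PACKET */
	{ 0x08, 3 },	/* IPA_STATUS_OPCODE_SUSPENDED_PACKET */
	{ 0x40, 6 },	/* IPA_STATUS_OPCODE_PACKET_2ND_PASS */
};

/* Verify old_value == 1 << opcode for every entry */
static int opcode_map_consistent(void)
{
	for (unsigned int i = 0; i < 4; i++)
		if (opcode_map[i].old_bitmask != (1u << opcode_map[i].opcode))
			return 0;
	return 1;
}
```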

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_endpoint.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 5528d97110d5..29227de6661f 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -41,10 +41,10 @@
 
 /** enum ipa_status_opcode - status element opcode hardware values */
 enum ipa_status_opcode {
-	IPA_STATUS_OPCODE_PACKET		= 0x01,
-	IPA_STATUS_OPCODE_DROPPED_PACKET	= 0x04,
-	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 0x08,
-	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 0x40,
+	IPA_STATUS_OPCODE_PACKET		= 0,
+	IPA_STATUS_OPCODE_DROPPED_PACKET	= 2,
+	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 3,
+	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 6,
 };
 
 /** enum ipa_status_exception - status element exception type */
-- 
2.33.0



* [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
  2021-09-20  3:07 ` [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration Sireesh Kodali
@ 2021-09-20  3:07 ` Sireesh Kodali
  2021-10-13 22:28   ` Alex Elder
  2021-09-20  3:07 ` [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface Sireesh Kodali
                   ` (14 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:07 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

IPA v2.x is 32 bit. Having an IPA_TABLE_ENTRY_SIZE macro makes it
easier to support both 32-bit and 64-bit IPA versions.
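
The idea can be sketched in isolation as follows (the kernel patch uses
sizeof(__le64)/sizeof(__le32) and an IPA_IS_64BIT() version check; the
EXAMPLE_* names and the integer version encoding here are stand-ins for
illustration): a filter/route table entry is one little-endian word
whose width follows the hardware.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Stand-in for the driver's IPA_IS_64BIT(): IPA v3.0+ is 64 bit. */
#define EXAMPLE_IS_64BIT(version)	((version) >= 3)

/* Stand-in for IPA_TABLE_ENTRY_SIZE(): one word per table entry,
 * 64 bit on newer hardware, 32 bit on IPA v2.x. */
#define EXAMPLE_TABLE_ENTRY_SIZE(version) \
	(EXAMPLE_IS_64BIT(version) ? sizeof(uint64_t) : sizeof(uint32_t))

/* Entry count of a table region, in the style of the sites this patch
 * converts (e.g. init_modem_driver_req() dividing mem->size). */
static size_t example_table_count(size_t mem_size, int version)
{
	return mem_size / EXAMPLE_TABLE_ENTRY_SIZE(version);
}
```

The same memory region thus holds twice as many entries on 32-bit
hardware, which is why every sizeof(__le64) site had to be converted.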

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_qmi.c   | 10 ++++++----
 drivers/net/ipa/ipa_table.c | 29 +++++++++++++----------------
 drivers/net/ipa/ipa_table.h |  4 ++++
 3 files changed, 23 insertions(+), 20 deletions(-)

diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
index 90f3aec55b36..7e2fe701cc4d 100644
--- a/drivers/net/ipa/ipa_qmi.c
+++ b/drivers/net/ipa/ipa_qmi.c
@@ -308,12 +308,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
 	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE);
 	req.v4_route_tbl_info_valid = 1;
 	req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset;
-	req.v4_route_tbl_info.count = mem->size / sizeof(__le64);
+	req.v4_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
 
 	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE);
 	req.v6_route_tbl_info_valid = 1;
 	req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset;
-	req.v6_route_tbl_info.count = mem->size / sizeof(__le64);
+	req.v6_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
 
 	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER);
 	req.v4_filter_tbl_start_valid = 1;
@@ -352,7 +352,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
 		req.v4_hash_route_tbl_info_valid = 1;
 		req.v4_hash_route_tbl_info.start =
 				ipa->mem_offset + mem->offset;
-		req.v4_hash_route_tbl_info.count = mem->size / sizeof(__le64);
+		req.v4_hash_route_tbl_info.count =
+				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
 	}
 
 	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE_HASHED);
@@ -360,7 +361,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
 		req.v6_hash_route_tbl_info_valid = 1;
 		req.v6_hash_route_tbl_info.start =
 			ipa->mem_offset + mem->offset;
-		req.v6_hash_route_tbl_info.count = mem->size / sizeof(__le64);
+		req.v6_hash_route_tbl_info.count =
+				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
 	}
 
 	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER_HASHED);
diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
index 1da334f54944..96c467c80a2e 100644
--- a/drivers/net/ipa/ipa_table.c
+++ b/drivers/net/ipa/ipa_table.c
@@ -118,7 +118,8 @@
  * 32-bit all-zero rule list terminator.  The "zero rule" is simply an
  * all-zero rule followed by the list terminator.
  */
-#define IPA_ZERO_RULE_SIZE		(2 * sizeof(__le32))
+#define IPA_ZERO_RULE_SIZE(version) \
+	 (IPA_IS_64BIT(version) ? 2 * sizeof(__le32) : sizeof(__le32))
 
 /* Check things that can be validated at build time. */
 static void ipa_table_validate_build(void)
@@ -132,12 +133,6 @@ static void ipa_table_validate_build(void)
 	 */
 	BUILD_BUG_ON(sizeof(dma_addr_t) > sizeof(__le64));
 
-	/* A "zero rule" is used to represent no filtering or no routing.
-	 * It is a 64-bit block of zeroed memory.  Code in ipa_table_init()
-	 * assumes that it can be written using a pointer to __le64.
-	 */
-	BUILD_BUG_ON(IPA_ZERO_RULE_SIZE != sizeof(__le64));
-
 	/* Impose a practical limit on the number of routes */
 	BUILD_BUG_ON(IPA_ROUTE_COUNT_MAX > 32);
 	/* The modem must be allotted at least one route table entry */
@@ -236,7 +231,7 @@ static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
 	/* Skip over the zero rule and possibly the filter mask */
 	skip = filter_mask ? 1 : 2;
 
-	return ipa->table_addr + skip * sizeof(*ipa->table_virt);
+	return ipa->table_addr + skip * IPA_TABLE_ENTRY_SIZE(ipa->version);
 }
 
 static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
@@ -255,8 +250,8 @@ static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
 	if (filter)
 		first++;	/* skip over bitmap */
 
-	offset = mem->offset + first * sizeof(__le64);
-	size = count * sizeof(__le64);
+	offset = mem->offset + first * IPA_TABLE_ENTRY_SIZE(ipa->version);
+	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
 	addr = ipa_table_addr(ipa, false, count);
 
 	ipa_cmd_dma_shared_mem_add(trans, offset, size, addr, true);
@@ -434,11 +429,11 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
 		count = 1 + hweight32(ipa->filter_map);
 		hash_count = hash_mem->size ? count : 0;
 	} else {
-		count = mem->size / sizeof(__le64);
-		hash_count = hash_mem->size / sizeof(__le64);
+		count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
+		hash_count = hash_mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
 	}
-	size = count * sizeof(__le64);
-	hash_size = hash_count * sizeof(__le64);
+	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
+	hash_size = hash_count * IPA_TABLE_ENTRY_SIZE(ipa->version);
 
 	addr = ipa_table_addr(ipa, filter, count);
 	hash_addr = ipa_table_addr(ipa, filter, hash_count);
@@ -621,7 +616,8 @@ int ipa_table_init(struct ipa *ipa)
 	 * by dma_alloc_coherent() is guaranteed to be a power-of-2 number
 	 * of pages, which satisfies the rule alignment requirement.
 	 */
-	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
+	size = IPA_ZERO_RULE_SIZE(ipa->version) +
+	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
 	virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
 	if (!virt)
 		return -ENOMEM;
@@ -653,7 +649,8 @@ void ipa_table_exit(struct ipa *ipa)
 	struct device *dev = &ipa->pdev->dev;
 	size_t size;
 
-	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
+	size = IPA_ZERO_RULE_SIZE(ipa->version) +
+	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
 
 	dma_free_coherent(dev, size, ipa->table_virt, ipa->table_addr);
 	ipa->table_addr = 0;
diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
index b6a9a0d79d68..78a168ce6558 100644
--- a/drivers/net/ipa/ipa_table.h
+++ b/drivers/net/ipa/ipa_table.h
@@ -10,6 +10,10 @@
 
 struct ipa;
 
+/* The size of a filter or route table entry */
+#define IPA_TABLE_ENTRY_SIZE(version)	\
+	(IPA_IS_64BIT(version) ? sizeof(__le64) : sizeof(__le32))
+
 /* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
 #define IPA_FILTER_COUNT_MAX	14
 
-- 
2.33.0



* [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
  2021-09-20  3:07 ` [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration Sireesh Kodali
  2021-09-20  3:07 ` [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support Sireesh Kodali
@ 2021-09-20  3:07 ` Sireesh Kodali
  2021-10-13 22:29   ` Alex Elder
  2021-09-20  3:07 ` [RFC PATCH 05/17] net: ipa: Check interrupts for availability Sireesh Kodali
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:07 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

Establish a callback-based interface to abstract the differences
between GSI and BAM DMA. The interface is based on the prototypes from
ipa_dma.h (formerly gsi.h). Callbacks are stored in struct ipa_dma
(formerly struct gsi) and assigned in gsi_init().

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/gsi.c          |  30 ++++++--
 drivers/net/ipa/ipa_dma.h      | 133 ++++++++++++++++++++++-----------
 drivers/net/ipa/ipa_endpoint.c |  28 +++----
 drivers/net/ipa/ipa_main.c     |  18 ++---
 drivers/net/ipa/ipa_power.c    |   4 +-
 drivers/net/ipa/ipa_trans.c    |   2 +-
 6 files changed, 138 insertions(+), 77 deletions(-)

diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
index 74ae0d07f859..39d9ca620a9f 100644
--- a/drivers/net/ipa/gsi.c
+++ b/drivers/net/ipa/gsi.c
@@ -99,6 +99,10 @@
 
 #define GSI_ISR_MAX_ITER		50	/* Detect interrupt storms */
 
+static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id);
+static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id);
+static void gsi_exit(struct ipa_dma *gsi);
+
 /* An entry in an event ring */
 struct gsi_event {
 	__le64 xfer_ptr;
@@ -869,7 +873,7 @@ static int __gsi_channel_start(struct ipa_channel *channel, bool resume)
 }
 
 /* Start an allocated GSI channel */
-int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
+static int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
 {
 	struct ipa_channel *channel = &gsi->channel[channel_id];
 	int ret;
@@ -924,7 +928,7 @@ static int __gsi_channel_stop(struct ipa_channel *channel, bool suspend)
 }
 
 /* Stop a started channel */
-int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
+static int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
 {
 	struct ipa_channel *channel = &gsi->channel[channel_id];
 	int ret;
@@ -941,7 +945,7 @@ int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
 }
 
 /* Reset and reconfigure a channel, (possibly) enabling the doorbell engine */
-void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
+static void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
 {
 	struct ipa_channel *channel = &gsi->channel[channel_id];
 
@@ -1931,7 +1935,7 @@ int gsi_setup(struct ipa_dma *gsi)
 }
 
 /* Inverse of gsi_setup() */
-void gsi_teardown(struct ipa_dma *gsi)
+static void gsi_teardown(struct ipa_dma *gsi)
 {
 	gsi_channel_teardown(gsi);
 	gsi_irq_teardown(gsi);
@@ -2194,6 +2198,18 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
 
 	gsi->dev = dev;
 	gsi->version = version;
+	gsi->setup = gsi_setup;
+	gsi->teardown = gsi_teardown;
+	gsi->exit = gsi_exit;
+	gsi->suspend = gsi_suspend;
+	gsi->resume = gsi_resume;
+	gsi->channel_tre_max = gsi_channel_tre_max;
+	gsi->channel_trans_tre_max = gsi_channel_trans_tre_max;
+	gsi->channel_start = gsi_channel_start;
+	gsi->channel_stop = gsi_channel_stop;
+	gsi->channel_reset = gsi_channel_reset;
+	gsi->channel_suspend = gsi_channel_suspend;
+	gsi->channel_resume = gsi_channel_resume;
 
 	/* GSI uses NAPI on all channels.  Create a dummy network device
 	 * for the channel NAPI contexts to be associated with.
@@ -2250,7 +2266,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
 }
 
 /* Inverse of gsi_init() */
-void gsi_exit(struct ipa_dma *gsi)
+static void gsi_exit(struct ipa_dma *gsi)
 {
 	mutex_destroy(&gsi->mutex);
 	gsi_channel_exit(gsi);
@@ -2277,7 +2293,7 @@ void gsi_exit(struct ipa_dma *gsi)
  * substantially reduce pool memory requirements.  The number we
  * reduce it by matches the number added in ipa_trans_pool_init().
  */
-u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
+static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
 {
 	struct ipa_channel *channel = &gsi->channel[channel_id];
 
@@ -2286,7 +2302,7 @@ u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
 }
 
 /* Returns the maximum number of TREs in a single transaction for a channel */
-u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
+static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
 {
 	struct ipa_channel *channel = &gsi->channel[channel_id];
 
diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
index d053929ca3e3..1a23e6ac5785 100644
--- a/drivers/net/ipa/ipa_dma.h
+++ b/drivers/net/ipa/ipa_dma.h
@@ -163,64 +163,96 @@ struct ipa_dma {
 	struct completion completion;	/* for global EE commands */
 	int result;			/* Negative errno (generic commands) */
 	struct mutex mutex;		/* protects commands, programming */
+
+	int (*setup)(struct ipa_dma *dma_subsys);
+	void (*teardown)(struct ipa_dma *dma_subsys);
+	void (*exit)(struct ipa_dma *dma_subsys);
+	void (*suspend)(struct ipa_dma *dma_subsys);
+	void (*resume)(struct ipa_dma *dma_subsys);
+	u32 (*channel_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
+	u32 (*channel_trans_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
+	int (*channel_start)(struct ipa_dma *dma_subsys, u32 channel_id);
+	int (*channel_stop)(struct ipa_dma *dma_subsys, u32 channel_id);
+	void (*channel_reset)(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
+	int (*channel_suspend)(struct ipa_dma *dma_subsys, u32 channel_id);
+	int (*channel_resume)(struct ipa_dma *dma_subsys, u32 channel_id);
+	void (*trans_commit)(struct ipa_trans *trans, bool ring_db);
 };
 
 /**
- * gsi_setup() - Set up the GSI subsystem
- * @gsi:	Address of GSI structure embedded in an IPA structure
+ * ipa_dma_setup() - Set up the DMA subsystem
+ * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
  *
  * Return:	0 if successful, or a negative error code
  *
- * Performs initialization that must wait until the GSI hardware is
+ * Performs initialization that must wait until the GSI/BAM hardware is
  * ready (including firmware loaded).
  */
-int gsi_setup(struct ipa_dma *dma_subsys);
+static inline int ipa_dma_setup(struct ipa_dma *dma_subsys)
+{
+	return dma_subsys->setup(dma_subsys);
+}
 
 /**
- * gsi_teardown() - Tear down GSI subsystem
- * @gsi:	GSI address previously passed to a successful gsi_setup() call
+ * ipa_dma_teardown() - Tear down DMA subsystem
+ * @dma_subsys:	ipa_dma address previously passed to a successful ipa_dma_setup() call
  */
-void gsi_teardown(struct ipa_dma *dma_subsys);
+static inline void ipa_dma_teardown(struct ipa_dma *dma_subsys)
+{
+	dma_subsys->teardown(dma_subsys);
+}
 
 /**
- * gsi_channel_tre_max() - Channel maximum number of in-flight TREs
- * @gsi:	GSI pointer
+ * ipa_channel_tre_max() - Channel maximum number of in-flight TREs
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel whose limit is to be returned
  *
  * Return:	 The maximum number of TREs oustanding on the channel
  */
-u32 gsi_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline u32 ipa_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_tre_max(dma_subsys, channel_id);
+}
 
 /**
- * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction
- * @gsi:	GSI pointer
+ * ipa_channel_trans_tre_max() - Maximum TREs in a single transaction
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel whose limit is to be returned
  *
  * Return:	 The maximum TRE count per transaction on the channel
  */
-u32 gsi_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline u32 ipa_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_trans_tre_max(dma_subsys, channel_id);
+}
 
 /**
- * gsi_channel_start() - Start an allocated GSI channel
- * @gsi:	GSI pointer
+ * ipa_channel_start() - Start an allocated DMA channel
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel to start
  *
  * Return:	0 if successful, or a negative error code
  */
-int gsi_channel_start(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline int ipa_channel_start(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_start(dma_subsys, channel_id);
+}
 
 /**
- * gsi_channel_stop() - Stop a started GSI channel
- * @gsi:	GSI pointer returned by gsi_setup()
+ * ipa_channel_stop() - Stop a started DMA channel
+ * @dma_subsys:	pointer to ipa_dma structure returned by ipa_dma_setup()
  * @channel_id:	Channel to stop
  *
  * Return:	0 if successful, or a negative error code
  */
-int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline int ipa_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_stop(dma_subsys, channel_id);
+}
 
 /**
- * gsi_channel_reset() - Reset an allocated GSI channel
- * @gsi:	GSI pointer
+ * ipa_channel_reset() - Reset an allocated DMA channel
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel to be reset
  * @doorbell:	Whether to (possibly) enable the doorbell engine
  *
@@ -230,41 +262,49 @@ int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
  * GSI hardware relinquishes ownership of all pending receive buffer
  * transactions and they will complete with their cancelled flag set.
  */
-void gsi_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
+static inline void ipa_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell)
+{
+	 dma_subsys->channel_reset(dma_subsys, channel_id, doorbell);
+}
 
-/**
- * gsi_suspend() - Prepare the GSI subsystem for suspend
- * @gsi:	GSI pointer
- */
-void gsi_suspend(struct ipa_dma *dma_subsys);
 
 /**
- * gsi_resume() - Resume the GSI subsystem following suspend
- * @gsi:	GSI pointer
- */
-void gsi_resume(struct ipa_dma *dma_subsys);
-
-/**
- * gsi_channel_suspend() - Suspend a GSI channel
- * @gsi:	GSI pointer
+ * ipa_channel_suspend() - Suspend a DMA channel
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel to suspend
  *
  * For IPA v4.0+, suspend is implemented by stopping the channel.
  */
-int gsi_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline int ipa_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_suspend(dma_subsys, channel_id);
+}
 
 /**
- * gsi_channel_resume() - Resume a suspended GSI channel
- * @gsi:	GSI pointer
+ * ipa_channel_resume() - Resume a suspended DMA channel
+ * @dma_subsys:	pointer to ipa_dma structure
  * @channel_id:	Channel to resume
  *
  * For IPA v4.0+, the stopped channel is started again.
  */
-int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
+static inline int ipa_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id)
+{
+	return dma_subsys->channel_resume(dma_subsys, channel_id);
+}
+
+static inline void ipa_dma_suspend(struct ipa_dma *dma_subsys)
+{
+	return dma_subsys->suspend(dma_subsys);
+}
+
+static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
+{
+	return dma_subsys->resume(dma_subsys);
+}
 
 /**
- * gsi_init() - Initialize the GSI subsystem
- * @gsi:	Address of GSI structure embedded in an IPA structure
+ * ipa_dma_init() - Initialize the DMA subsystem
+ * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
  * @pdev:	IPA platform device
  * @version:	IPA hardware version (implies GSI version)
  * @count:	Number of entries in the configuration data array
@@ -275,14 +315,19 @@ int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
  * Early stage initialization of the GSI subsystem, performing tasks
  * that can be done before the GSI hardware is ready to use.
  */
+
 int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
 	     enum ipa_version version, u32 count,
 	     const struct ipa_gsi_endpoint_data *data);
 
 /**
- * gsi_exit() - Exit the GSI subsystem
- * @gsi:	GSI address previously passed to a successful gsi_init() call
+ * ipa_dma_exit() - Exit the DMA subsystem
+ * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
  */
-void gsi_exit(struct ipa_dma *dma_subsys);
+static inline void ipa_dma_exit(struct ipa_dma *dma_subsys)
+{
+	if (dma_subsys)
+		dma_subsys->exit(dma_subsys);
+}
 
 #endif /* _GSI_H_ */
diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 90d6880e8a25..dbef549c4537 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -1091,7 +1091,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
 	 * try replenishing again if our backlog is *all* available TREs.
 	 */
 	gsi = &endpoint->ipa->dma_subsys;
-	if (backlog == gsi_channel_tre_max(gsi, endpoint->channel_id))
+	if (backlog == ipa_channel_tre_max(gsi, endpoint->channel_id))
 		schedule_delayed_work(&endpoint->replenish_work,
 				      msecs_to_jiffies(1));
 }
@@ -1107,7 +1107,7 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
 		atomic_add(saved, &endpoint->replenish_backlog);
 
 	/* Start replenishing if hardware currently has no buffers */
-	max_backlog = gsi_channel_tre_max(gsi, endpoint->channel_id);
+	max_backlog = ipa_channel_tre_max(gsi, endpoint->channel_id);
 	if (atomic_read(&endpoint->replenish_backlog) == max_backlog)
 		ipa_endpoint_replenish(endpoint, false);
 }
@@ -1432,13 +1432,13 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
 	 * active.  We'll re-enable the doorbell (if appropriate) when
 	 * we reset again below.
 	 */
-	gsi_channel_reset(gsi, endpoint->channel_id, false);
+	ipa_channel_reset(gsi, endpoint->channel_id, false);
 
 	/* Make sure the channel isn't suspended */
 	suspended = ipa_endpoint_program_suspend(endpoint, false);
 
 	/* Start channel and do a 1 byte read */
-	ret = gsi_channel_start(gsi, endpoint->channel_id);
+	ret = ipa_channel_start(gsi, endpoint->channel_id);
 	if (ret)
 		goto out_suspend_again;
 
@@ -1461,7 +1461,7 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
 
 	gsi_trans_read_byte_done(gsi, endpoint->channel_id);
 
-	ret = gsi_channel_stop(gsi, endpoint->channel_id);
+	ret = ipa_channel_stop(gsi, endpoint->channel_id);
 	if (ret)
 		goto out_suspend_again;
 
@@ -1470,14 +1470,14 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
 	 * complete the channel reset sequence.  Finish by suspending the
 	 * channel again (if necessary).
 	 */
-	gsi_channel_reset(gsi, endpoint->channel_id, true);
+	ipa_channel_reset(gsi, endpoint->channel_id, true);
 
 	usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
 
 	goto out_suspend_again;
 
 err_endpoint_stop:
-	(void)gsi_channel_stop(gsi, endpoint->channel_id);
+	(void)ipa_channel_stop(gsi, endpoint->channel_id);
 out_suspend_again:
 	if (suspended)
 		(void)ipa_endpoint_program_suspend(endpoint, true);
@@ -1504,7 +1504,7 @@ static void ipa_endpoint_reset(struct ipa_endpoint *endpoint)
 	if (special && ipa_endpoint_aggr_active(endpoint))
 		ret = ipa_endpoint_reset_rx_aggr(endpoint);
 	else
-		gsi_channel_reset(&ipa->dma_subsys, channel_id, true);
+		ipa_channel_reset(&ipa->dma_subsys, channel_id, true);
 
 	if (ret)
 		dev_err(&ipa->pdev->dev,
@@ -1537,7 +1537,7 @@ int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
 	struct ipa_dma *gsi = &ipa->dma_subsys;
 	int ret;
 
-	ret = gsi_channel_start(gsi, endpoint->channel_id);
+	ret = ipa_channel_start(gsi, endpoint->channel_id);
 	if (ret) {
 		dev_err(&ipa->pdev->dev,
 			"error %d starting %cX channel %u for endpoint %u\n",
@@ -1576,7 +1576,7 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
 	}
 
 	/* Note that if stop fails, the channel's state is not well-defined */
-	ret = gsi_channel_stop(gsi, endpoint->channel_id);
+	ret = ipa_channel_stop(gsi, endpoint->channel_id);
 	if (ret)
 		dev_err(&ipa->pdev->dev,
 			"error %d attempting to stop endpoint %u\n", ret,
@@ -1598,7 +1598,7 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
 		(void)ipa_endpoint_program_suspend(endpoint, true);
 	}
 
-	ret = gsi_channel_suspend(gsi, endpoint->channel_id);
+	ret = ipa_channel_suspend(gsi, endpoint->channel_id);
 	if (ret)
 		dev_err(dev, "error %d suspending channel %u\n", ret,
 			endpoint->channel_id);
@@ -1617,7 +1617,7 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
 	if (!endpoint->toward_ipa)
 		(void)ipa_endpoint_program_suspend(endpoint, false);
 
-	ret = gsi_channel_resume(gsi, endpoint->channel_id);
+	ret = ipa_channel_resume(gsi, endpoint->channel_id);
 	if (ret)
 		dev_err(dev, "error %d resuming channel %u\n", ret,
 			endpoint->channel_id);
@@ -1660,14 +1660,14 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
 	if (endpoint->ee_id != GSI_EE_AP)
 		return;
 
-	endpoint->trans_tre_max = gsi_channel_trans_tre_max(gsi, channel_id);
+	endpoint->trans_tre_max = ipa_channel_trans_tre_max(gsi, channel_id);
 	if (!endpoint->toward_ipa) {
 		/* RX transactions require a single TRE, so the maximum
 		 * backlog is the same as the maximum outstanding TREs.
 		 */
 		endpoint->replenish_enabled = false;
 		atomic_set(&endpoint->replenish_saved,
-			   gsi_channel_tre_max(gsi, endpoint->channel_id));
+			   ipa_channel_tre_max(gsi, endpoint->channel_id));
 		atomic_set(&endpoint->replenish_backlog, 0);
 		INIT_DELAYED_WORK(&endpoint->replenish_work,
 				  ipa_endpoint_replenish_work);
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index 026f5555fa7d..6ab691ff1faf 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -98,13 +98,13 @@ int ipa_setup(struct ipa *ipa)
 	struct device *dev = &ipa->pdev->dev;
 	int ret;
 
-	ret = gsi_setup(&ipa->dma_subsys);
+	ret = ipa_dma_setup(&ipa->dma_subsys);
 	if (ret)
 		return ret;
 
 	ret = ipa_power_setup(ipa);
 	if (ret)
-		goto err_gsi_teardown;
+		goto err_dma_teardown;
 
 	ipa_endpoint_setup(ipa);
 
@@ -153,8 +153,8 @@ int ipa_setup(struct ipa *ipa)
 err_endpoint_teardown:
 	ipa_endpoint_teardown(ipa);
 	ipa_power_teardown(ipa);
-err_gsi_teardown:
-	gsi_teardown(&ipa->dma_subsys);
+err_dma_teardown:
+	ipa_dma_teardown(&ipa->dma_subsys);
 
 	return ret;
 }
@@ -179,7 +179,7 @@ static void ipa_teardown(struct ipa *ipa)
 	ipa_endpoint_disable_one(command_endpoint);
 	ipa_endpoint_teardown(ipa);
 	ipa_power_teardown(ipa);
-	gsi_teardown(&ipa->dma_subsys);
+	ipa_dma_teardown(&ipa->dma_subsys);
 }
 
 /* Configure bus access behavior for IPA components */
@@ -726,7 +726,7 @@ static int ipa_probe(struct platform_device *pdev)
 					    data->endpoint_data);
 	if (!ipa->filter_map) {
 		ret = -EINVAL;
-		goto err_gsi_exit;
+		goto err_dma_exit;
 	}
 
 	ret = ipa_table_init(ipa);
@@ -780,8 +780,8 @@ static int ipa_probe(struct platform_device *pdev)
 	ipa_table_exit(ipa);
 err_endpoint_exit:
 	ipa_endpoint_exit(ipa);
-err_gsi_exit:
-	gsi_exit(&ipa->dma_subsys);
+err_dma_exit:
+	ipa_dma_exit(&ipa->dma_subsys);
 err_mem_exit:
 	ipa_mem_exit(ipa);
 err_reg_exit:
@@ -824,7 +824,7 @@ static int ipa_remove(struct platform_device *pdev)
 	ipa_modem_exit(ipa);
 	ipa_table_exit(ipa);
 	ipa_endpoint_exit(ipa);
-	gsi_exit(&ipa->dma_subsys);
+	ipa_dma_exit(&ipa->dma_subsys);
 	ipa_mem_exit(ipa);
 	ipa_reg_exit(ipa);
 	kfree(ipa);
diff --git a/drivers/net/ipa/ipa_power.c b/drivers/net/ipa/ipa_power.c
index b1c6c0fcb654..096cfb8ae9a5 100644
--- a/drivers/net/ipa/ipa_power.c
+++ b/drivers/net/ipa/ipa_power.c
@@ -243,7 +243,7 @@ static int ipa_runtime_suspend(struct device *dev)
 	if (ipa->setup_complete) {
 		__clear_bit(IPA_POWER_FLAG_RESUMED, ipa->power->flags);
 		ipa_endpoint_suspend(ipa);
-		gsi_suspend(&ipa->gsi);
+		ipa_dma_suspend(&ipa->dma_subsys);
 	}
 
 	return ipa_power_disable(ipa);
@@ -260,7 +260,7 @@ static int ipa_runtime_resume(struct device *dev)
 
 	/* Endpoints aren't usable until setup is complete */
 	if (ipa->setup_complete) {
-		gsi_resume(&ipa->gsi);
+		ipa_dma_resume(&ipa->dma_subsys);
 		ipa_endpoint_resume(ipa);
 	}
 
diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
index b87936b18770..22755f3ce3da 100644
--- a/drivers/net/ipa/ipa_trans.c
+++ b/drivers/net/ipa/ipa_trans.c
@@ -747,7 +747,7 @@ int ipa_channel_trans_init(struct ipa_dma *gsi, u32 channel_id)
 	 * for transactions (including transaction structures) based on
 	 * this maximum number.
 	 */
-	tre_max = gsi_channel_tre_max(channel->dma_subsys, channel_id);
+	tre_max = ipa_channel_tre_max(channel->dma_subsys, channel_id);
 
 	/* Transactions are allocated one at a time. */
 	ret = ipa_trans_pool_init(&trans_info->pool, sizeof(struct ipa_trans),
-- 
2.33.0



* [RFC PATCH 05/17] net: ipa: Check interrupts for availability
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (2 preceding siblings ...)
  2021-09-20  3:07 ` [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface Sireesh Kodali
@ 2021-09-20  3:07 ` Sireesh Kodali
  2021-10-13 22:29   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait Sireesh Kodali
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:07 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

Make ipa_interrupt_add()/ipa_interrupt_remove() no-ops if the requested
interrupt is not supported by the IPA hardware.

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_interrupt.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
index b35170a93b0f..94708a23a597 100644
--- a/drivers/net/ipa/ipa_interrupt.c
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -48,6 +48,25 @@ static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 irq_id)
 	return irq_id == IPA_IRQ_UC_0 || irq_id == IPA_IRQ_UC_1;
 }
 
+static bool ipa_interrupt_check_fixup(enum ipa_irq_id *irq_id, enum ipa_version version)
+{
+	switch (*irq_id) {
+	case IPA_IRQ_EOT_COAL:
+		return version < IPA_VERSION_3_5;
+	case IPA_IRQ_DCMP:
+		return version < IPA_VERSION_4_5;
+	case IPA_IRQ_TLV_LEN_MIN_DSM:
+		return version >= IPA_VERSION_4_5;
+	default:
+		break;
+	}
+
+	if (*irq_id >= IPA_IRQ_DRBIP_PKT_EXCEED_MAX_SIZE_EN)
+		return version >= IPA_VERSION_4_9;
+
+	return true;
+}
+
 /* Process a particular interrupt type that has been received */
 static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 irq_id)
 {
@@ -191,6 +210,9 @@ void ipa_interrupt_add(struct ipa_interrupt *interrupt,
 	struct ipa *ipa = interrupt->ipa;
 	u32 offset;
 
+	if (!ipa_interrupt_check_fixup(&ipa_irq, ipa->version))
+		return;
+
 	WARN_ON(ipa_irq >= IPA_IRQ_COUNT);
 
 	interrupt->handler[ipa_irq] = handler;
@@ -208,6 +230,9 @@ ipa_interrupt_remove(struct ipa_interrupt *interrupt, enum ipa_irq_id ipa_irq)
 	struct ipa *ipa = interrupt->ipa;
 	u32 offset;
 
+	if (!ipa_interrupt_check_fixup(&ipa_irq, ipa->version))
+		return;
+
 	WARN_ON(ipa_irq >= IPA_IRQ_COUNT);
 
 	/* Update the IPA interrupt mask to disable it */
-- 
2.33.0



* [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (3 preceding siblings ...)
  2021-09-20  3:07 ` [RFC PATCH 05/17] net: ipa: Check interrupts for availability Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:29   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions Sireesh Kodali
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

Sometimes the pipeline clear fails, and when it does, hanging in the
kernel is ugly. The timeout gives us a clear error message instead.
Note that this should never actually time out: it can only hang if
there is a mistake in the configuration, so the timeout is only useful
when debugging.

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_cmd.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
index 3db9e94e484f..0bdbc331fa78 100644
--- a/drivers/net/ipa/ipa_cmd.c
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -658,7 +658,10 @@ u32 ipa_cmd_pipeline_clear_count(void)
 
 void ipa_cmd_pipeline_clear_wait(struct ipa *ipa)
 {
-	wait_for_completion(&ipa->completion);
+	unsigned long timeout_jiffies = msecs_to_jiffies(1000);
+
+	if (!wait_for_completion_timeout(&ipa->completion, timeout_jiffies))
+		dev_err(&ipa->pdev->dev, "%s timed out\n", __func__);
 }
 
 void ipa_cmd_pipeline_clear(struct ipa *ipa)
-- 
2.33.0



* [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (4 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:29   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts Sireesh Kodali
                   ` (10 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

IPA v2.x is an older version of the IPA hardware and is 32 bit.

Most of the registers were simply moved to new addresses in later IPA
versions, while the register fields have remained the same. This means
that only the register addresses needed to be added to the driver.

To handle the different IPA register addresses, static inline functions
have been defined that return the correct register address.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_cmd.c      |   3 +-
 drivers/net/ipa/ipa_endpoint.c |  33 +++---
 drivers/net/ipa/ipa_main.c     |   8 +-
 drivers/net/ipa/ipa_mem.c      |   5 +-
 drivers/net/ipa/ipa_reg.h      | 184 +++++++++++++++++++++++++++------
 drivers/net/ipa/ipa_version.h  |  12 +++
 6 files changed, 195 insertions(+), 50 deletions(-)

diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
index 0bdbc331fa78..7a104540dc26 100644
--- a/drivers/net/ipa/ipa_cmd.c
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -326,7 +326,8 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa)
 	 * worst case (highest endpoint number) offset of that endpoint
 	 * fits in the register write command field(s) that must hold it.
 	 */
-	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT - 1);
+	offset = ipa_reg_endp_status_n_offset(ipa->version,
+			IPA_ENDPOINT_COUNT - 1);
 	name = "maximal endpoint status";
 	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
 		return false;
diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index dbef549c4537..7d3ab61cd890 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -242,8 +242,8 @@ static struct ipa_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint,
 static bool
 ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay)
 {
-	u32 offset = IPA_REG_ENDP_INIT_CTRL_N_OFFSET(endpoint->endpoint_id);
 	struct ipa *ipa = endpoint->ipa;
+	u32 offset = ipa_reg_endp_init_ctrl_n_offset(ipa->version, endpoint->endpoint_id);
 	bool state;
 	u32 mask;
 	u32 val;
@@ -410,7 +410,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 		if (!(endpoint->ee_id == GSI_EE_MODEM && endpoint->toward_ipa))
 			continue;
 
-		offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
+		offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
 
 		/* Value written is 0, and all bits are updated.  That
 		 * means status is disabled on the endpoint, and as a
@@ -431,7 +431,8 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 
 static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_CFG_N_OFFSET(endpoint->endpoint_id);
+	struct ipa *ipa = endpoint->ipa;
+	u32 offset = ipa_reg_endp_init_cfg_n_offset(ipa->version, endpoint->endpoint_id);
 	enum ipa_cs_offload_en enabled;
 	u32 val = 0;
 
@@ -523,8 +524,8 @@ ipa_qmap_header_size(enum ipa_version version, struct ipa_endpoint *endpoint)
  */
 static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_HDR_N_OFFSET(endpoint->endpoint_id);
 	struct ipa *ipa = endpoint->ipa;
+	u32 offset = ipa_reg_endp_init_hdr_n_offset(ipa->version, endpoint->endpoint_id);
 	u32 val = 0;
 
 	if (endpoint->data->qmap) {
@@ -565,9 +566,9 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
 
 static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(endpoint->endpoint_id);
-	u32 pad_align = endpoint->data->rx.pad_align;
 	struct ipa *ipa = endpoint->ipa;
+	u32 offset = ipa_reg_endp_init_hdr_ext_n_offset(ipa->version, endpoint->endpoint_id);
+	u32 pad_align = endpoint->data->rx.pad_align;
 	u32 val = 0;
 
 	val |= HDR_ENDIANNESS_FMASK;		/* big endian */
@@ -609,6 +610,7 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
 
 static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
 {
+	enum ipa_version version = endpoint->ipa->version;
 	u32 endpoint_id = endpoint->endpoint_id;
 	u32 val = 0;
 	u32 offset;
@@ -616,7 +618,7 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
 	if (endpoint->toward_ipa)
 		return;		/* Register not valid for TX endpoints */
 
-	offset = IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(endpoint_id);
+	offset = ipa_reg_endp_init_hdr_metadata_mask_n_offset(version, endpoint_id);
 
 	/* Note that HDR_ENDIANNESS indicates big endian header fields */
 	if (endpoint->data->qmap)
@@ -627,7 +629,8 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
 
 static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_MODE_N_OFFSET(endpoint->endpoint_id);
+	enum ipa_version version = endpoint->ipa->version;
+	u32 offset = ipa_reg_endp_init_mode_n_offset(version, endpoint->endpoint_id);
 	u32 val;
 
 	if (!endpoint->toward_ipa)
@@ -716,8 +719,8 @@ static u32 aggr_sw_eof_active_encoded(enum ipa_version version, bool enabled)
 
 static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_AGGR_N_OFFSET(endpoint->endpoint_id);
 	enum ipa_version version = endpoint->ipa->version;
+	u32 offset = ipa_reg_endp_init_aggr_n_offset(version, endpoint->endpoint_id);
 	u32 val = 0;
 
 	if (endpoint->data->aggregation) {
@@ -853,7 +856,7 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
 	u32 offset;
 	u32 val;
 
-	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(endpoint_id);
+	offset = ipa_reg_endp_init_hol_block_timer_n_offset(ipa->version, endpoint_id);
 	val = hol_block_timer_val(ipa, microseconds);
 	iowrite32(val, ipa->reg_virt + offset);
 }
@@ -861,12 +864,13 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
 static void
 ipa_endpoint_init_hol_block_enable(struct ipa_endpoint *endpoint, bool enable)
 {
+	enum ipa_version version = endpoint->ipa->version;
 	u32 endpoint_id = endpoint->endpoint_id;
 	u32 offset;
 	u32 val;
 
 	val = enable ? HOL_BLOCK_EN_FMASK : 0;
-	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(endpoint_id);
+	offset = ipa_reg_endp_init_hol_block_en_n_offset(version, endpoint_id);
 	iowrite32(val, endpoint->ipa->reg_virt + offset);
 }
 
@@ -887,7 +891,8 @@ void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa)
 
 static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint)
 {
-	u32 offset = IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(endpoint->endpoint_id);
+	enum ipa_version version = endpoint->ipa->version;
+	u32 offset = ipa_reg_endp_init_deaggr_n_offset(version, endpoint->endpoint_id);
 	u32 val = 0;
 
 	if (!endpoint->toward_ipa)
@@ -979,7 +984,7 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
 	u32 val = 0;
 	u32 offset;
 
-	offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
+	offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
 
 	if (endpoint->data->status_enable) {
 		val |= STATUS_EN_FMASK;
@@ -1384,7 +1389,7 @@ void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id)
 	val |= u32_encode_bits(endpoint_id, ROUTE_FRAG_DEF_PIPE_FMASK);
 	val |= ROUTE_DEF_RETAIN_HDR_FMASK;
 
-	iowrite32(val, ipa->reg_virt + IPA_REG_ROUTE_OFFSET);
+	iowrite32(val, ipa->reg_virt + ipa_reg_route_offset(ipa->version));
 }
 
 void ipa_endpoint_default_route_clear(struct ipa *ipa)
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index 6ab691ff1faf..ba06e3ad554c 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -191,7 +191,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
 	if (ipa->version < IPA_VERSION_4_0)
 		return;
 
-	val = ioread32(ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
+	val = ioread32(ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
 
 	if (ipa->version == IPA_VERSION_4_0) {
 		val &= ~IPA_QMB_SELECT_CONS_EN_FMASK;
@@ -206,7 +206,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
 	val |= GSI_MULTI_INORDER_RD_DIS_FMASK;
 	val |= GSI_MULTI_INORDER_WR_DIS_FMASK;
 
-	iowrite32(val, ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
+	iowrite32(val, ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
 }
 
 /* Configure DDR and (possibly) PCIe max read/write QSB values */
@@ -355,7 +355,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
 	/* IPA v4.5+ has no backward compatibility register */
 	if (version < IPA_VERSION_4_5) {
 		val = data->backward_compat;
-		iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET);
+		iowrite32(val, ipa->reg_virt + ipa_reg_bcr_offset(ipa->version));
 	}
 
 	/* Implement some hardware workarounds */
@@ -384,7 +384,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
 		/* Configure aggregation timer granularity */
 		granularity = ipa_aggr_granularity_val(IPA_AGGR_GRANULARITY);
 		val = u32_encode_bits(granularity, AGGR_GRANULARITY_FMASK);
-		iowrite32(val, ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET);
+		iowrite32(val, ipa->reg_virt + ipa_reg_counter_cfg_offset(ipa->version));
 	} else {
 		ipa_qtime_config(ipa);
 	}
diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
index 16e5fdd5bd73..8acc88070a6f 100644
--- a/drivers/net/ipa/ipa_mem.c
+++ b/drivers/net/ipa/ipa_mem.c
@@ -113,7 +113,8 @@ int ipa_mem_setup(struct ipa *ipa)
 	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
 	offset = ipa->mem_offset + mem->offset;
 	val = proc_cntxt_base_addr_encoded(ipa->version, offset);
-	iowrite32(val, ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET);
+	iowrite32(val, ipa->reg_virt +
+		  ipa_reg_local_pkt_proc_cntxt_base_offset(ipa->version));
 
 	return 0;
 }
@@ -316,7 +317,7 @@ int ipa_mem_config(struct ipa *ipa)
 	u32 i;
 
 	/* Check the advertised location and size of the shared memory area */
-	val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
+	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
 
 	/* The fields in the register are in 8 byte units */
 	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
index a5b355384d4a..fcae0296cfa4 100644
--- a/drivers/net/ipa/ipa_reg.h
+++ b/drivers/net/ipa/ipa_reg.h
@@ -65,7 +65,17 @@ struct ipa;
  * of valid bits for the register.
  */
 
-#define IPA_REG_COMP_CFG_OFFSET				0x0000003c
+#define IPA_REG_COMP_SW_RESET_OFFSET		0x0000003c
+
+#define IPA_REG_V2_ENABLED_PIPES_OFFSET		0x000005dc
+
+static inline u32 ipa_reg_comp_cfg_offset(enum ipa_version version)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x38;
+
+	return 0x3c;
+}
 /* The next field is not supported for IPA v4.0+, not present for IPA v4.5+ */
 #define ENABLE_FMASK				GENMASK(0, 0)
 /* The next field is present for IPA v4.7+ */
@@ -124,6 +134,7 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
 	return u32_encode_bits(val, GENMASK(17, 17));
 }
 
+/* This register is only present on IPA v3.0 and above */
 #define IPA_REG_CLKON_CFG_OFFSET			0x00000044
 #define RX_FMASK				GENMASK(0, 0)
 #define PROC_FMASK				GENMASK(1, 1)
@@ -164,7 +175,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
 /* The next field is present for IPA v4.7+ */
 #define DRBIP_FMASK				GENMASK(31, 31)
 
-#define IPA_REG_ROUTE_OFFSET				0x00000048
+static inline u32 ipa_reg_route_offset(enum ipa_version version)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x44;
+
+	return 0x48;
+}
+
 #define ROUTE_DIS_FMASK				GENMASK(0, 0)
 #define ROUTE_DEF_PIPE_FMASK			GENMASK(5, 1)
 #define ROUTE_DEF_HDR_TABLE_FMASK		GENMASK(6, 6)
@@ -172,7 +190,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
 #define ROUTE_FRAG_DEF_PIPE_FMASK		GENMASK(21, 17)
 #define ROUTE_DEF_RETAIN_HDR_FMASK		GENMASK(24, 24)
 
-#define IPA_REG_SHARED_MEM_SIZE_OFFSET			0x00000054
+static inline u32 ipa_reg_shared_mem_size_offset(enum ipa_version version)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x50;
+
+	return 0x54;
+}
+
 #define SHARED_MEM_SIZE_FMASK			GENMASK(15, 0)
 #define SHARED_MEM_BADDR_FMASK			GENMASK(31, 16)
 
@@ -219,7 +244,13 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
 }
 
 /* The next register is not present for IPA v4.5+ */
-#define IPA_REG_BCR_OFFSET				0x000001d0
+static inline u32 ipa_reg_bcr_offset(enum ipa_version version)
+{
+	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
+		return 0x5b0;
+
+	return 0x1d0;
+}
 /* The next two fields are not present for IPA v4.2+ */
 #define BCR_CMDQ_L_LACK_ONE_ENTRY_FMASK		GENMASK(0, 0)
 #define BCR_TX_NOT_USING_BRESP_FMASK		GENMASK(1, 1)
@@ -236,7 +267,14 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
 #define BCR_ROUTER_PREFETCH_EN_FMASK		GENMASK(9, 9)
 
 /* The value of the next register must be a multiple of 8 (bottom 3 bits 0) */
-#define IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET		0x000001e8
+static inline u32 ipa_reg_local_pkt_proc_cntxt_base_offset(enum ipa_version version)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x5e0;
+
+	return 0x1e8;
+}
+
 
 /* Encoded value for LOCAL_PKT_PROC_CNTXT register BASE_ADDR field */
 static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
@@ -252,7 +290,14 @@ static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
 #define IPA_REG_AGGR_FORCE_CLOSE_OFFSET			0x000001ec
 
 /* The next register is not present for IPA v4.5+ */
-#define IPA_REG_COUNTER_CFG_OFFSET			0x000001f0
+static inline u32 ipa_reg_counter_cfg_offset(enum ipa_version version)
+{
+	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
+		return 0x5e8;
+
+	return 0x1f0;
+}
+
 /* The next field is not present for IPA v3.5+ */
 #define EOT_COAL_GRANULARITY			GENMASK(3, 0)
 #define AGGR_GRANULARITY_FMASK			GENMASK(8, 4)
@@ -349,15 +394,27 @@ enum ipa_pulse_gran {
 #define Y_MIN_LIM_FMASK				GENMASK(21, 16)
 #define Y_MAX_LIM_FMASK				GENMASK(29, 24)
 
-#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \
-					(0x00000800 + 0x0070 * (ep))
+static inline u32 ipa_reg_endp_init_ctrl_n_offset(enum ipa_version version, u16 ep)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x70 + 0x4 * ep;
+
+	return 0x800 + 0x70 * ep;
+}
+
 /* Valid only for RX (IPA producer) endpoints (do not use for IPA v4.0+) */
 #define ENDP_SUSPEND_FMASK			GENMASK(0, 0)
 /* Valid only for TX (IPA consumer) endpoints */
 #define ENDP_DELAY_FMASK			GENMASK(1, 1)
 
-#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \
-					(0x00000808 + 0x0070 * (ep))
+static inline u32 ipa_reg_endp_init_cfg_n_offset(enum ipa_version version, u16 ep)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0xc0 + 0x4 * ep;
+
+	return 0x808 + 0x70 * ep;
+}
+
 #define FRAG_OFFLOAD_EN_FMASK			GENMASK(0, 0)
 #define CS_OFFLOAD_EN_FMASK			GENMASK(2, 1)
 #define CS_METADATA_HDR_OFFSET_FMASK		GENMASK(6, 3)
@@ -383,8 +440,14 @@ enum ipa_nat_en {
 	IPA_NAT_DST			= 0x2,
 };
 
-#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \
-					(0x00000810 + 0x0070 * (ep))
+static inline u32 ipa_reg_endp_init_hdr_n_offset(enum ipa_version version, u16 ep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x170 + 0x4 * ep;
+
+	return 0x810 + 0x70 * ep;
+}
+
 #define HDR_LEN_FMASK				GENMASK(5, 0)
 #define HDR_OFST_METADATA_VALID_FMASK		GENMASK(6, 6)
 #define HDR_OFST_METADATA_FMASK			GENMASK(12, 7)
@@ -440,8 +503,14 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
 	return val;
 }
 
-#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \
-					(0x00000814 + 0x0070 * (ep))
+static inline u32 ipa_reg_endp_init_hdr_ext_n_offset(enum ipa_version version, u16 ep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x1c0 + 0x4 * ep;
+
+	return 0x814 + 0x70 * ep;
+}
+
 #define HDR_ENDIANNESS_FMASK			GENMASK(0, 0)
 #define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK	GENMASK(1, 1)
 #define HDR_TOTAL_LEN_OR_PAD_FMASK		GENMASK(2, 2)
@@ -454,12 +523,23 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
 #define HDR_ADDITIONAL_CONST_LEN_MSB_FMASK	GENMASK(21, 20)
 
 /* Valid only for RX (IPA producer) endpoints */
-#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(rxep) \
-					(0x00000818 + 0x0070 * (rxep))
+static inline u32 ipa_reg_endp_init_hdr_metadata_mask_n_offset(enum ipa_version version, u16 rxep)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x220 + 0x4 * rxep;
+
+	return 0x818 + 0x70 * rxep;
+}
 
 /* Valid only for TX (IPA consumer) endpoints */
-#define IPA_REG_ENDP_INIT_MODE_N_OFFSET(txep) \
-					(0x00000820 + 0x0070 * (txep))
+static inline u32 ipa_reg_endp_init_mode_n_offset(enum ipa_version version, u16 txep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x2c0 + 0x4 * txep;
+
+	return 0x820 + 0x70 * txep;
+}
+
 #define MODE_FMASK				GENMASK(2, 0)
 /* The next field is present for IPA v4.5+ */
 #define DCPH_ENABLE_FMASK			GENMASK(3, 3)
@@ -480,8 +560,14 @@ enum ipa_mode {
 	IPA_DMA				= 0x3,
 };
 
-#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \
-					(0x00000824 +  0x0070 * (ep))
+static inline u32 ipa_reg_endp_init_aggr_n_offset(enum ipa_version version,
+						  u16 ep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x320 + 0x4 * ep;
+	return 0x824 + 0x70 * ep;
+}
+
 #define AGGR_EN_FMASK				GENMASK(1, 0)
 #define AGGR_TYPE_FMASK				GENMASK(4, 2)
 
@@ -543,14 +629,27 @@ enum ipa_aggr_type {
 };
 
 /* Valid only for RX (IPA producer) endpoints */
-#define IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(rxep) \
-					(0x0000082c +  0x0070 * (rxep))
+static inline u32 ipa_reg_endp_init_hol_block_en_n_offset(enum ipa_version version,
+							  u16 rxep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x3c0 + 0x4 * rxep;
+
+	return 0x82c + 0x70 * rxep;
+}
+
 #define HOL_BLOCK_EN_FMASK			GENMASK(0, 0)
 
 /* Valid only for RX (IPA producer) endpoints */
-#define IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(rxep) \
-					(0x00000830 +  0x0070 * (rxep))
-/* The next two fields are present for IPA v4.2 only */
+static inline u32 ipa_reg_endp_init_hol_block_timer_n_offset(enum ipa_version version, u16 rxep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x420 + 0x4 * rxep;
+
+	return 0x830 + 0x70 * rxep;
+}
+
+/* The next fields are present for IPA v4.2 only */
 #define BASE_VALUE_FMASK			GENMASK(4, 0)
 #define SCALE_FMASK				GENMASK(12, 8)
 /* The next two fields are present for IPA v4.5 */
@@ -558,8 +657,14 @@ enum ipa_aggr_type {
 #define GRAN_SEL_FMASK				GENMASK(8, 8)
 
 /* Valid only for TX (IPA consumer) endpoints */
-#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(txep) \
-					(0x00000834 + 0x0070 * (txep))
+static inline u32 ipa_reg_endp_init_deaggr_n_offset(enum ipa_version version, u16 txep)
+{
+	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
+		return 0x470 + 0x4 * txep;
+
+	return 0x834 + 0x70 * txep;
+}
+
 #define DEAGGR_HDR_LEN_FMASK			GENMASK(5, 0)
 #define SYSPIPE_ERR_DETECTION_FMASK		GENMASK(6, 6)
 #define PACKET_OFFSET_VALID_FMASK		GENMASK(7, 7)
@@ -629,8 +734,14 @@ enum ipa_seq_rep_type {
 	IPA_SEQ_REP_DMA_PARSER			= 0x08,
 };
 
-#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \
-					(0x00000840 + 0x0070 * (ep))
+static inline u32 ipa_reg_endp_status_n_offset(enum ipa_version version, u16 ep)
+{
+	if (version <= IPA_VERSION_2_6L)
+		return 0x4c0 + 0x4 * ep;
+
+	return 0x840 + 0x70 * ep;
+}
+
 #define STATUS_EN_FMASK				GENMASK(0, 0)
 #define STATUS_ENDP_FMASK			GENMASK(5, 1)
 /* The next field is not present for IPA v4.5+ */
@@ -662,6 +773,9 @@ enum ipa_seq_rep_type {
 static inline u32 ipa_reg_irq_stts_ee_n_offset(enum ipa_version version,
 					       u32 ee)
 {
+	if (version <= IPA_VERSION_2_6L)
+		return 0x00001008 + 0x1000 * ee;
+
 	if (version < IPA_VERSION_4_9)
 		return 0x00003008 + 0x1000 * ee;
 
@@ -675,6 +789,9 @@ static inline u32 ipa_reg_irq_stts_offset(enum ipa_version version)
 
 static inline u32 ipa_reg_irq_en_ee_n_offset(enum ipa_version version, u32 ee)
 {
+	if (version <= IPA_VERSION_2_6L)
+		return 0x0000100c + 0x1000 * ee;
+
 	if (version < IPA_VERSION_4_9)
 		return 0x0000300c + 0x1000 * ee;
 
@@ -688,6 +805,9 @@ static inline u32 ipa_reg_irq_en_offset(enum ipa_version version)
 
 static inline u32 ipa_reg_irq_clr_ee_n_offset(enum ipa_version version, u32 ee)
 {
+	if (version <= IPA_VERSION_2_6L)
+		return 0x00001010 + 0x1000 * ee;
+
 	if (version < IPA_VERSION_4_9)
 		return 0x00003010 + 0x1000 * ee;
 
@@ -776,6 +896,9 @@ enum ipa_irq_id {
 
 static inline u32 ipa_reg_irq_uc_ee_n_offset(enum ipa_version version, u32 ee)
 {
+	if (version <= IPA_VERSION_2_6L)
+		return 0x0000101c + 0x1000 * ee;
+
 	if (version < IPA_VERSION_4_9)
 		return 0x0000301c + 0x1000 * ee;
 
@@ -793,6 +916,9 @@ static inline u32 ipa_reg_irq_uc_offset(enum ipa_version version)
 static inline u32
 ipa_reg_irq_suspend_info_ee_n_offset(enum ipa_version version, u32 ee)
 {
+	if (version <= IPA_VERSION_2_6L)
+		return 0x00001098 + 0x1000 * ee;
+
 	if (version == IPA_VERSION_3_0)
 		return 0x00003098 + 0x1000 * ee;
 
diff --git a/drivers/net/ipa/ipa_version.h b/drivers/net/ipa/ipa_version.h
index 6c16c895d842..0d816de586ba 100644
--- a/drivers/net/ipa/ipa_version.h
+++ b/drivers/net/ipa/ipa_version.h
@@ -8,6 +8,9 @@
 
 /**
  * enum ipa_version
+ * @IPA_VERSION_2_0:	IPA version 2.0
+ * @IPA_VERSION_2_5:	IPA version 2.5/2.6
+ * @IPA_VERSION_2_6L:	IPA version 2.6L
  * @IPA_VERSION_3_0:	IPA version 3.0/GSI version 1.0
  * @IPA_VERSION_3_1:	IPA version 3.1/GSI version 1.1
  * @IPA_VERSION_3_5:	IPA version 3.5/GSI version 1.2
@@ -25,6 +28,9 @@
  * new version is added.
  */
 enum ipa_version {
+	IPA_VERSION_2_0,
+	IPA_VERSION_2_5,
+	IPA_VERSION_2_6L,
 	IPA_VERSION_3_0,
 	IPA_VERSION_3_1,
 	IPA_VERSION_3_5,
@@ -38,4 +44,10 @@ enum ipa_version {
 	IPA_VERSION_4_11,
 };
 
+#define IPA_HAS_GSI(version) ((version) > IPA_VERSION_2_6L)
+#define IPA_IS_64BIT(version) ((version) > IPA_VERSION_2_6L)
+#define IPA_VERSION_RANGE(_version, _from, _to) \
+	((_version) >= (IPA_VERSION_##_from) &&  \
+	 (_version) <= (IPA_VERSION_##_to))
+
 #endif /* _IPA_VERSION_H_ */
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (5 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:29   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport Sireesh Kodali
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Vladimir Lypak, Sireesh Kodali, David S. Miller, Jakub Kicinski

From: Vladimir Lypak <vladimir.lypak@gmail.com>

Interrupts on IPA v2.x are numbered differently from those on IPA v3.x
and above. Like IPA v3.0, IPA v2.x does not support the TX_SUSPEND irq.

Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_interrupt.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
index 94708a23a597..37b5932253aa 100644
--- a/drivers/net/ipa/ipa_interrupt.c
+++ b/drivers/net/ipa/ipa_interrupt.c
@@ -63,6 +63,11 @@ static bool ipa_interrupt_check_fixup(enum ipa_irq_id *irq_id, enum ipa_version
 
 	if (*irq_id >= IPA_IRQ_DRBIP_PKT_EXCEED_MAX_SIZE_EN)
 		return version >= IPA_VERSION_4_9;
+	else if (*irq_id > IPA_IRQ_BAM_GSI_IDLE)
+		return version >= IPA_VERSION_3_0;
+	else if (version <= IPA_VERSION_2_6L &&
+			*irq_id >= IPA_IRQ_PROC_UC_ACK_Q_NOT_EMPTY)
+		*irq_id += 2;
 
 	return true;
 }
@@ -152,8 +157,8 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
 
 	WARN_ON(!(mask & ipa->available));
 
-	/* IPA version 3.0 does not support TX_SUSPEND interrupt control */
-	if (ipa->version == IPA_VERSION_3_0)
+	/* IPA version <=3.0 does not support TX_SUSPEND interrupt control */
+	if (ipa->version <= IPA_VERSION_3_0)
 		return;
 
 	offset = ipa_reg_irq_suspend_en_offset(ipa->version);
@@ -190,7 +195,7 @@ void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt)
 	val = ioread32(ipa->reg_virt + offset);
 
 	/* SUSPEND interrupt status isn't cleared on IPA version 3.0 */
-	if (ipa->version == IPA_VERSION_3_0)
+	if (ipa->version <= IPA_VERSION_3_0)
 		return;
 
 	offset = ipa_reg_irq_suspend_clr_offset(ipa->version);
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (6 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init Sireesh Kodali
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

BAM is used on IPA v2.x. Since BAM already has a nice dmaengine driver,
the IPA driver only needs to call the dmaengine API.
Also add BAM transaction support to IPA's transaction abstraction layer.

BAM transactions should use NAPI just like GSI transactions, but for now
each transaction simply uses a completion callback.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/Makefile          |   2 +-
 drivers/net/ipa/bam.c             | 525 ++++++++++++++++++++++++++++++
 drivers/net/ipa/gsi.c             |   1 +
 drivers/net/ipa/ipa_data.h        |   1 +
 drivers/net/ipa/ipa_dma.h         |  18 +-
 drivers/net/ipa/ipa_dma_private.h |   2 +
 drivers/net/ipa/ipa_main.c        |  20 +-
 drivers/net/ipa/ipa_trans.c       |  14 +-
 drivers/net/ipa/ipa_trans.h       |   4 +
 9 files changed, 569 insertions(+), 18 deletions(-)
 create mode 100644 drivers/net/ipa/bam.c

diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
index 3cd021fb992e..4abebc667f77 100644
--- a/drivers/net/ipa/Makefile
+++ b/drivers/net/ipa/Makefile
@@ -2,7 +2,7 @@ obj-$(CONFIG_QCOM_IPA)	+=	ipa.o
 
 ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
 				ipa_table.o ipa_interrupt.o gsi.o ipa_trans.o \
-				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
+				ipa_gsi.o ipa_smp2p.o ipa_uc.o bam.o \
 				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
 				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
 				ipa_sysfs.o
diff --git a/drivers/net/ipa/bam.c b/drivers/net/ipa/bam.c
new file mode 100644
index 000000000000..0726e385fee5
--- /dev/null
+++ b/drivers/net/ipa/bam.c
@@ -0,0 +1,525 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2020, The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/completion.h>
+#include <linux/dma-mapping.h>
+#include <linux/dmaengine.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/mutex.h>
+#include <linux/netdevice.h>
+#include <linux/platform_device.h>
+
+#include "ipa_gsi.h"
+#include "ipa.h"
+#include "ipa_dma.h"
+#include "ipa_dma_private.h"
+#include "ipa_gsi.h"
+#include "ipa_trans.h"
+#include "ipa_data.h"
+
+/**
+ * DOC: The IPA Smart Peripheral System Interface
+ *
+ * The Smart Peripheral System is a means to communicate over BAM pipes to
+ * the IPA block. The Modem also uses BAM pipes to communicate with the IPA
+ * core.
+ *
+ * Refer to the GSI documentation: BAM is a precursor to GSI and is
+ * conceptually more or less the same, although no public BAM documentation
+ * is available to verify this.
+ *
+ * Each channel here corresponds to 1 BAM pipe configured in BAM2BAM mode
+ *
+ * IPA cmds are transferred one at a time, each in one BAM transfer.
+ */
+
+/* Get and configure the BAM DMA channel */
+int bam_channel_init_one(struct ipa_dma *bam,
+			 const struct ipa_gsi_endpoint_data *data, bool command)
+{
+	struct dma_slave_config bam_config;
+	u32 channel_id = data->channel_id;
+	struct ipa_channel *channel = &bam->channel[channel_id];
+	int ret;
+
+	/*TODO: if (!bam_channel_data_valid(bam, data))
+		return -EINVAL;*/
+
+	channel->dma_subsys = bam;
+	channel->dma_chan = dma_request_chan(bam->dev, data->channel_name);
+	channel->toward_ipa = data->toward_ipa;
+	channel->tlv_count = data->channel.tlv_count;
+	channel->tre_count = data->channel.tre_count;
+	if (IS_ERR(channel->dma_chan)) {
+		dev_err(bam->dev, "failed to request BAM channel %s: %d\n",
+				data->channel_name,
+				(int) PTR_ERR(channel->dma_chan));
+		return PTR_ERR(channel->dma_chan);
+	}
+
+	ret = ipa_channel_trans_init(bam, data->channel_id);
+	if (ret)
+		goto err_dma_chan_free;
+
+	if (data->toward_ipa) {
+		bam_config.direction = DMA_MEM_TO_DEV;
+		bam_config.dst_maxburst = channel->tlv_count;
+	} else {
+		bam_config.direction = DMA_DEV_TO_MEM;
+		bam_config.src_maxburst = channel->tlv_count;
+	}
+
+	dmaengine_slave_config(channel->dma_chan, &bam_config);
+
+	if (command)
+		ret = ipa_cmd_pool_init(channel, 256);
+
+	if (!ret)
+		return 0;
+
+err_dma_chan_free:
+	dma_release_channel(channel->dma_chan);
+	return ret;
+}
+
+static void bam_channel_exit_one(struct ipa_channel *channel)
+{
+	if (channel->dma_chan) {
+		dmaengine_terminate_sync(channel->dma_chan);
+		dma_release_channel(channel->dma_chan);
+	}
+}
+
+/* Get channels from BAM_DMA */
+int bam_channel_init(struct ipa_dma *bam, u32 count,
+		const struct ipa_gsi_endpoint_data *data)
+{
+	int ret = 0;
+	u32 i;
+
+	for (i = 0; i < count; ++i) {
+		bool command = i == IPA_ENDPOINT_AP_COMMAND_TX;
+
+		if (!data[i].channel_name || data[i].ee_id == GSI_EE_MODEM)
+			continue;
+
+		ret = bam_channel_init_one(bam, &data[i], command);
+		if (ret)
+			goto err_unwind;
+	}
+
+	return ret;
+
+err_unwind:
+	while (i--) {
+		if (ipa_gsi_endpoint_data_empty(&data[i]))
+			continue;
+
+		bam_channel_exit_one(&bam->channel[i]);
+	}
+	return ret;
+}
+
+/* Inverse of bam_channel_init() */
+void bam_channel_exit(struct ipa_dma *bam)
+{
+	u32 channel_id = BAM_CHANNEL_COUNT_MAX - 1;
+
+	do
+		bam_channel_exit_one(&bam->channel[channel_id]);
+	while (channel_id--);
+}
+
+/* Inverse of bam_init() */
+static void bam_exit(struct ipa_dma *bam)
+{
+	mutex_destroy(&bam->mutex);
+	bam_channel_exit(bam);
+}
+
+/* Return the channel id associated with a given channel */
+static u32 bam_channel_id(struct ipa_channel *channel)
+{
+	return channel - &channel->dma_subsys->channel[0];
+}
+
+static void
+bam_channel_tx_update(struct ipa_channel *channel, struct ipa_trans *trans)
+{
+	u64 byte_count = trans->byte_count + trans->len;
+	u64 trans_count = trans->trans_count + 1;
+
+	byte_count -= channel->compl_byte_count;
+	channel->compl_byte_count += byte_count;
+	trans_count -= channel->compl_trans_count;
+	channel->compl_trans_count += trans_count;
+
+	ipa_gsi_channel_tx_completed(channel->dma_subsys, bam_channel_id(channel),
+					   trans_count, byte_count);
+}
+
+static void
+bam_channel_rx_update(struct ipa_channel *channel, struct ipa_trans *trans)
+{
+	/* FIXME */
+	u64 byte_count = trans->byte_count + trans->len;
+
+	channel->byte_count += byte_count;
+	channel->trans_count++;
+}
+
+/* Consult hardware, move any newly completed transactions to completed list */
+static void bam_channel_update(struct ipa_channel *channel)
+{
+	struct ipa_trans *trans;
+	bool found = false;
+
+	list_for_each_entry(trans, &channel->trans_info.pending, links) {
+		enum dma_status trans_status =
+				dma_async_is_tx_complete(channel->dma_chan,
+					trans->cookie, NULL, NULL);
+		if (trans_status == DMA_COMPLETE) {
+			found = true;
+			break;
+		}
+	}
+
+	if (!found)
+		return;	/* No newly completed transactions */
+	/* Get the transaction for the latest completed event.  Take a
+	 * reference to keep it from completing before we give the events
+	 * for this and previous transactions back to the hardware.
+	 */
+	refcount_inc(&trans->refcount);
+
+	/* For RX channels, update each completed transaction with the number
+	 * of bytes that were actually received.  For TX channels, report
+	 * the number of transactions and bytes this completion represents
+	 * up the network stack.
+	 */
+	if (channel->toward_ipa)
+		bam_channel_tx_update(channel, trans);
+	else
+		bam_channel_rx_update(channel, trans);
+
+	ipa_trans_move_complete(trans);
+
+	ipa_trans_free(trans);
+}
+
+/**
+ * bam_channel_poll_one() - Return a single completed transaction on a channel
+ * @channel:	Channel to be polled
+ *
+ * Return:	Transaction pointer, or null if none are available
+ *
+ * This function returns the first entry on a channel's completed transaction
+ * list.  If that list is empty, the hardware is consulted to determine
+ * whether any new transactions have completed.  If so, they're moved to the
+ * completed list and the new first entry is returned.  If there are no more
+ * completed transactions, a null pointer is returned.
+ */
+static struct ipa_trans *bam_channel_poll_one(struct ipa_channel *channel)
+{
+	struct ipa_trans *trans;
+
+	/* Get the first transaction from the completed list */
+	trans = ipa_channel_trans_complete(channel);
+	if (!trans) {
+		bam_channel_update(channel);
+		trans = ipa_channel_trans_complete(channel);
+	}
+
+	if (trans)
+		ipa_trans_move_polled(trans);
+
+	return trans;
+}
+
+/**
+ * bam_channel_poll() - NAPI poll function for a channel
+ * @napi:	NAPI structure for the channel
+ * @budget:	Budget supplied by NAPI core
+ *
+ * Return:	Number of items polled (<= budget)
+ *
+ * Single transactions completed by hardware are polled until either
+ * the budget is exhausted, or there are no more.  Each transaction
+ * polled is passed to ipa_trans_complete(), to perform remaining
+ * completion processing and retire/free the transaction.
+ */
+static int bam_channel_poll(struct napi_struct *napi, int budget)
+{
+	struct ipa_channel *channel;
+	int count = 0;
+
+	channel = container_of(napi, struct ipa_channel, napi);
+	while (count < budget) {
+		struct ipa_trans *trans;
+
+		count++;
+		trans = bam_channel_poll_one(channel);
+		if (!trans)
+			break;
+		ipa_trans_complete(trans);
+	}
+
+	if (count < budget)
+		napi_complete(&channel->napi);
+
+	return count;
+}
+
+/* Setup function for a single channel */
+static void bam_channel_setup_one(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	if (!channel->dma_subsys)
+		return;	/* Ignore uninitialized channels */
+
+	if (channel->toward_ipa) {
+		netif_tx_napi_add(&bam->dummy_dev, &channel->napi,
+				  bam_channel_poll, NAPI_POLL_WEIGHT);
+	} else {
+		netif_napi_add(&bam->dummy_dev, &channel->napi,
+			       bam_channel_poll, NAPI_POLL_WEIGHT);
+	}
+	napi_enable(&channel->napi);
+}
+
+static void bam_channel_teardown_one(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	if (!channel->dma_subsys)
+		return;		/* Ignore uninitialized channels */
+
+	netif_napi_del(&channel->napi);
+}
+
+/* Setup function for channels */
+static int bam_channel_setup(struct ipa_dma *bam)
+{
+	u32 channel_id = 0;
+
+	mutex_lock(&bam->mutex);
+
+	do
+		bam_channel_setup_one(bam, channel_id);
+	while (++channel_id < BAM_CHANNEL_COUNT_MAX);
+
+	/* Unlike GSI, BAM has no hardware-reported channel count to
+	 * validate against, so there is nothing further to check here.
+	 */
+	mutex_unlock(&bam->mutex);
+
+	return 0;
+}
+
+/* Inverse of bam_channel_setup() */
+static void bam_channel_teardown(struct ipa_dma *bam)
+{
+	u32 channel_id;
+
+	mutex_lock(&bam->mutex);
+
+	channel_id = BAM_CHANNEL_COUNT_MAX - 1;
+	do
+		bam_channel_teardown_one(bam, channel_id);
+	while (channel_id--);
+
+	mutex_unlock(&bam->mutex);
+}
+
+static int bam_setup(struct ipa_dma *bam)
+{
+	return bam_channel_setup(bam);
+}
+
+static void bam_teardown(struct ipa_dma *bam)
+{
+	bam_channel_teardown(bam);
+}
+
+static u32 bam_channel_tre_max(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	/* Hardware limit is channel->tre_count - 1 */
+	return channel->tre_count - (channel->tlv_count - 1);
+}
+
+static u32 bam_channel_trans_tre_max(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	return channel->tlv_count;
+}
+
+static int bam_channel_start(struct ipa_dma *bam, u32 channel_id)
+{
+	return 0;
+}
+
+static int bam_channel_stop(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	return dmaengine_terminate_sync(channel->dma_chan);
+}
+
+static void bam_channel_reset(struct ipa_dma *bam, u32 channel_id, bool doorbell)
+{
+	bam_channel_stop(bam, channel_id);
+}
+
+static int bam_channel_suspend(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	return dmaengine_pause(channel->dma_chan);
+}
+
+static int bam_channel_resume(struct ipa_dma *bam, u32 channel_id)
+{
+	struct ipa_channel *channel = &bam->channel[channel_id];
+
+	return dmaengine_resume(channel->dma_chan);
+}
+
+static void bam_suspend(struct ipa_dma *bam)
+{
+	/* No-op for now */
+}
+
+static void bam_resume(struct ipa_dma *bam)
+{
+	/* No-op for now */
+}
+
+static void bam_trans_callback(void *arg)
+{
+	ipa_trans_complete(arg);
+}
+
+static void bam_trans_commit(struct ipa_trans *trans, bool unused)
+{
+	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
+	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
+	struct ipa_cmd_info *info;
+	struct scatterlist *sg;
+	u32 byte_count = 0;
+	u32 i;
+	enum dma_transfer_direction direction;
+
+	if (channel->toward_ipa)
+		direction = DMA_MEM_TO_DEV;
+	else
+		direction = DMA_DEV_TO_MEM;
+
+	/* assert(trans->used > 0); */
+
+	info = trans->info ? &trans->info[0] : NULL;
+	for_each_sg(trans->sgl, sg, trans->used, i) {
+		bool last_tre = i == trans->used - 1;
+		dma_addr_t addr = sg_dma_address(sg);
+		u32 len = sg_dma_len(sg);
+		u32 dma_flags = 0;
+		struct dma_async_tx_descriptor *desc;
+
+		byte_count += len;
+		if (info)
+			opcode = info++->opcode;
+
+		if (opcode != IPA_CMD_NONE) {
+			len = opcode;
+			dma_flags |= DMA_PREP_IMM_CMD;
+		}
+
+		if (last_tre)
+			dma_flags |= DMA_PREP_INTERRUPT;
+
+		desc = dmaengine_prep_slave_single(channel->dma_chan, addr, len,
+				direction, dma_flags);
+		if (!desc) {
+			dev_err(channel->dma_subsys->dev,
+				"failed to prep BAM transfer\n");
+			return;
+		}
+
+		if (last_tre) {
+			desc->callback = bam_trans_callback;
+			desc->callback_param = trans;
+		}
+
+		desc->cookie = dmaengine_submit(desc);
+
+		if (last_tre)
+			trans->cookie = desc->cookie;
+
+		if (direction == DMA_DEV_TO_MEM)
+			dmaengine_desc_attach_metadata(desc, &trans->len, sizeof(trans->len));
+	}
+
+	if (channel->toward_ipa) {
+		/* We record TX bytes when they are sent */
+		trans->len = byte_count;
+		trans->trans_count = channel->trans_count;
+		trans->byte_count = channel->byte_count;
+		channel->trans_count++;
+		channel->byte_count += byte_count;
+	}
+
+	ipa_trans_move_pending(trans);
+
+	dma_async_issue_pending(channel->dma_chan);
+}
+
+/* Initialize the BAM DMA channels
+ * Actual hw init is handled by the BAM_DMA driver
+ */
+int bam_init(struct ipa_dma *bam, struct platform_device *pdev,
+		enum ipa_version version, u32 count,
+		const struct ipa_gsi_endpoint_data *data)
+{
+	struct device *dev = &pdev->dev;
+	int ret;
+
+	bam->dev = dev;
+	bam->version = version;
+	bam->setup = bam_setup;
+	bam->teardown = bam_teardown;
+	bam->exit = bam_exit;
+	bam->suspend = bam_suspend;
+	bam->resume = bam_resume;
+	bam->channel_tre_max = bam_channel_tre_max;
+	bam->channel_trans_tre_max = bam_channel_trans_tre_max;
+	bam->channel_start = bam_channel_start;
+	bam->channel_stop = bam_channel_stop;
+	bam->channel_reset = bam_channel_reset;
+	bam->channel_suspend = bam_channel_suspend;
+	bam->channel_resume = bam_channel_resume;
+	bam->trans_commit = bam_trans_commit;
+
+	init_dummy_netdev(&bam->dummy_dev);
+
+	ret = bam_channel_init(bam, count, data);
+	if (ret)
+		return ret;
+
+	mutex_init(&bam->mutex);
+
+	return 0;
+}
diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
index 39d9ca620a9f..ac0b9e748fa1 100644
--- a/drivers/net/ipa/gsi.c
+++ b/drivers/net/ipa/gsi.c
@@ -2210,6 +2210,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
 	gsi->channel_reset = gsi_channel_reset;
 	gsi->channel_suspend = gsi_channel_suspend;
 	gsi->channel_resume = gsi_channel_resume;
+	gsi->trans_commit = gsi_trans_commit;
 
 	/* GSI uses NAPI on all channels.  Create a dummy network device
 	 * for the channel NAPI contexts to be associated with.
diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
index 6d329e9ce5d2..7d62d49f414f 100644
--- a/drivers/net/ipa/ipa_data.h
+++ b/drivers/net/ipa/ipa_data.h
@@ -188,6 +188,7 @@ struct ipa_gsi_endpoint_data {
 	u8 channel_id;
 	u8 endpoint_id;
 	bool toward_ipa;
+	const char *channel_name;	/* used only for BAM DMA channels */
 
 	struct gsi_channel_data channel;
 	struct ipa_endpoint_data endpoint;
diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
index 1a23e6ac5785..3000182ae689 100644
--- a/drivers/net/ipa/ipa_dma.h
+++ b/drivers/net/ipa/ipa_dma.h
@@ -17,7 +17,11 @@
 
 /* Maximum number of channels and event rings supported by the driver */
 #define GSI_CHANNEL_COUNT_MAX	23
+#define BAM_CHANNEL_COUNT_MAX	20
 #define GSI_EVT_RING_COUNT_MAX	24
+#define MAX(a, b)		(((a) > (b)) ? (a) : (b))
+#define IPA_CHANNEL_COUNT_MAX	MAX(GSI_CHANNEL_COUNT_MAX, \
+				    BAM_CHANNEL_COUNT_MAX)
 
 /* Maximum TLV FIFO size for a channel; 64 here is arbitrary (and high) */
 #define GSI_TLV_MAX		64
@@ -119,6 +123,8 @@ struct ipa_channel {
 	struct gsi_ring tre_ring;
 	u32 evt_ring_id;
 
+	struct dma_chan *dma_chan;
+
 	u64 byte_count;			/* total # bytes transferred */
 	u64 trans_count;		/* total # transactions */
 	/* The following counts are used only for TX endpoints */
@@ -154,7 +160,7 @@ struct ipa_dma {
 	u32 irq;
 	u32 channel_count;
 	u32 evt_ring_count;
-	struct ipa_channel channel[GSI_CHANNEL_COUNT_MAX];
+	struct ipa_channel channel[IPA_CHANNEL_COUNT_MAX];
 	struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX];
 	u32 event_bitmap;		/* allocated event rings */
 	u32 modem_channel_bitmap;	/* modem channels to allocate */
@@ -303,7 +309,7 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
 }
 
 /**
- * ipa_dma_init() - Initialize the GSI subsystem
+ * gsi_init()/bam_init() - Initialize the GSI/BAM subsystem
  * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
  * @pdev:	IPA platform device
  * @version:	IPA hardware version (implies GSI version)
@@ -312,14 +318,18 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
  *
  * Return:	0 if successful, or a negative error code
  *
- * Early stage initialization of the GSI subsystem, performing tasks
- * that can be done before the GSI hardware is ready to use.
+ * Early stage initialization of the GSI/BAM subsystem, performing tasks
+ * that can be done before the GSI/BAM hardware is ready to use.
  */
 
 int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
 	     enum ipa_version version, u32 count,
 	     const struct ipa_gsi_endpoint_data *data);
 
+int bam_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
+	     enum ipa_version version, u32 count,
+	     const struct ipa_gsi_endpoint_data *data);
+
 /**
  * ipa_dma_exit() - Exit the DMA subsystem
  * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
diff --git a/drivers/net/ipa/ipa_dma_private.h b/drivers/net/ipa/ipa_dma_private.h
index 40148a551b47..1db53e597a61 100644
--- a/drivers/net/ipa/ipa_dma_private.h
+++ b/drivers/net/ipa/ipa_dma_private.h
@@ -16,6 +16,8 @@ struct ipa_channel;
 
 #define GSI_RING_ELEMENT_SIZE	16	/* bytes; must be a power of 2 */
 
+void gsi_trans_commit(struct ipa_trans *trans, bool ring_db);
+
 /* Return the entry that follows one provided in a transaction pool */
 void *ipa_trans_pool_next(struct ipa_trans_pool *pool, void *element);
 
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index ba06e3ad554c..ea6c4347f2c6 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -60,12 +60,15 @@
  * core.  The GSI implements a set of "channels" used for communication
  * between the AP and the IPA.
  *
- * The IPA layer uses GSI channels to implement its "endpoints".  And while
- * a GSI channel carries data between the AP and the IPA, a pair of IPA
- * endpoints is used to carry traffic between two EEs.  Specifically, the main
- * modem network interface is implemented by two pairs of endpoints:  a TX
+ * The IPA layer uses GSI channels or BAM pipes to implement its "endpoints".
+ * And while a GSI channel carries data between the AP and the IPA, a pair of
+ * IPA endpoints is used to carry traffic between two EEs.  Specifically, the
+ * main modem network interface is implemented by two pairs of endpoints:  a TX
  * endpoint on the AP coupled with an RX endpoint on the modem; and another
  * RX endpoint on the AP receiving data from a TX endpoint on the modem.
+ *
+ * For BAM based transport, a pair of BAM pipes are used for TX and RX between
+ * the AP and IPA, and between IPA and other EEs.
  */
 
 /* The name of the GSI firmware file relative to /lib/firmware */
@@ -716,8 +719,13 @@ static int ipa_probe(struct platform_device *pdev)
 	if (ret)
 		goto err_reg_exit;
 
-	ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
-		       data->endpoint_data);
+	if (IPA_HAS_GSI(ipa->version))
+		ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
+			       data->endpoint_data);
+	else
+		ret = bam_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
+			       data->endpoint_data);
+
 	if (ret)
 		goto err_mem_exit;
 
diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
index 22755f3ce3da..444f44846da8 100644
--- a/drivers/net/ipa/ipa_trans.c
+++ b/drivers/net/ipa/ipa_trans.c
@@ -254,7 +254,7 @@ struct ipa_trans *ipa_channel_trans_complete(struct ipa_channel *channel)
 }
 
 /* Move a transaction from the allocated list to the pending list */
-static void ipa_trans_move_pending(struct ipa_trans *trans)
+void ipa_trans_move_pending(struct ipa_trans *trans)
 {
 	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
 	struct ipa_trans_info *trans_info = &channel->trans_info;
@@ -539,7 +539,7 @@ static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
  * pending list.  Finally, updates the channel ring pointer and optionally
  * rings the doorbell.
  */
-static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
+void gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
 {
 	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
 	struct gsi_ring *ring = &channel->tre_ring;
@@ -604,9 +604,9 @@ static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
 /* Commit a GSI transaction */
 void ipa_trans_commit(struct ipa_trans *trans, bool ring_db)
 {
-	if (trans->used)
-		__gsi_trans_commit(trans, ring_db);
-	else
+	if (trans->used)
+		trans->dma_subsys->trans_commit(trans, ring_db);
+	else
 		ipa_trans_free(trans);
 }
 
@@ -618,7 +618,7 @@ void ipa_trans_commit_wait(struct ipa_trans *trans)
 
 	refcount_inc(&trans->refcount);
 
-	__gsi_trans_commit(trans, true);
+	trans->dma_subsys->trans_commit(trans, true);
 
 	wait_for_completion(&trans->completion);
 
@@ -638,7 +638,7 @@ int ipa_trans_commit_wait_timeout(struct ipa_trans *trans,
 
 	refcount_inc(&trans->refcount);
 
-	__gsi_trans_commit(trans, true);
+	trans->dma_subsys->trans_commit(trans, true);
 
 	remaining = wait_for_completion_timeout(&trans->completion,
 						timeout_jiffies);
diff --git a/drivers/net/ipa/ipa_trans.h b/drivers/net/ipa/ipa_trans.h
index b93342414360..5f41e3e6f92a 100644
--- a/drivers/net/ipa/ipa_trans.h
+++ b/drivers/net/ipa/ipa_trans.h
@@ -10,6 +10,7 @@
 #include <linux/refcount.h>
 #include <linux/completion.h>
 #include <linux/dma-direction.h>
+#include <linux/dmaengine.h>
 
 #include "ipa_cmd.h"
 
@@ -61,6 +62,7 @@ struct ipa_trans {
 	struct scatterlist *sgl;
 	struct ipa_cmd_info *info;	/* array of entries, or null */
 	enum dma_data_direction direction;
+	dma_cookie_t cookie;
 
 	refcount_t refcount;
 	struct completion completion;
@@ -149,6 +151,8 @@ struct ipa_trans *ipa_channel_trans_alloc(struct ipa_dma *dma_subsys, u32 channe
  */
 void ipa_trans_free(struct ipa_trans *trans);
 
+void ipa_trans_move_pending(struct ipa_trans *trans);
+
 /**
  * ipa_trans_cmd_add() - Add an immediate command to a transaction
  * @trans:	Transaction
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (7 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints Sireesh Kodali
                   ` (7 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, Vladimir Lypak, David S. Miller, Jakub Kicinski

IPA v2.x commands are different from those of later IPA revisions mostly
because IPA v2.x is 32 bit. There are also other minor differences in
some of the command structs.

The tables, again, differ only because IPA v2.x is 32 bit.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
---
 drivers/net/ipa/ipa.h       |   2 +-
 drivers/net/ipa/ipa_cmd.c   | 138 ++++++++++++++++++++++++++----------
 drivers/net/ipa/ipa_table.c |  29 ++++++--
 drivers/net/ipa/ipa_table.h |   2 +-
 4 files changed, 125 insertions(+), 46 deletions(-)

diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
index 80a83ac45729..63b2b368b588 100644
--- a/drivers/net/ipa/ipa.h
+++ b/drivers/net/ipa/ipa.h
@@ -81,7 +81,7 @@ struct ipa {
 	struct ipa_power *power;
 
 	dma_addr_t table_addr;
-	__le64 *table_virt;
+	void *table_virt;
 
 	struct ipa_interrupt *interrupt;
 	bool uc_powered;
diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
index 7a104540dc26..58dae4b3bf87 100644
--- a/drivers/net/ipa/ipa_cmd.c
+++ b/drivers/net/ipa/ipa_cmd.c
@@ -25,8 +25,8 @@
  * An immediate command is generally used to request the IPA do something
  * other than data transfer to another endpoint.
  *
- * Immediate commands are represented by GSI transactions just like other
- * transfer requests, represented by a single GSI TRE.  Each immediate
+ * Immediate commands on IPA v3 are represented by GSI transactions just like
+ * other transfer requests, represented by a single GSI TRE.  Each immediate
  * command has a well-defined format, having a payload of a known length.
  * This allows the transfer element's length field to be used to hold an
  * immediate command's opcode.  The payload for a command resides in DRAM
@@ -45,10 +45,16 @@ enum pipeline_clear_options {
 
 /* IPA_CMD_IP_V{4,6}_{FILTER,ROUTING}_INIT */
 
-struct ipa_cmd_hw_ip_fltrt_init {
-	__le64 hash_rules_addr;
-	__le64 flags;
-	__le64 nhash_rules_addr;
+union ipa_cmd_hw_ip_fltrt_init {
+	struct {
+		__le32 nhash_rules_addr;
+		__le32 flags;
+	} v2;
+	struct {
+		__le64 hash_rules_addr;
+		__le64 flags;
+		__le64 nhash_rules_addr;
+	} v3;
 };
 
 /* Field masks for ipa_cmd_hw_ip_fltrt_init structure fields */
@@ -56,13 +62,23 @@ struct ipa_cmd_hw_ip_fltrt_init {
 #define IP_FLTRT_FLAGS_HASH_ADDR_FMASK			GENMASK_ULL(27, 12)
 #define IP_FLTRT_FLAGS_NHASH_SIZE_FMASK			GENMASK_ULL(39, 28)
 #define IP_FLTRT_FLAGS_NHASH_ADDR_FMASK			GENMASK_ULL(55, 40)
+#define IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(11, 0)
+#define IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(27, 12)
+#define IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(15, 0)
+#define IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(31, 16)
 
 /* IPA_CMD_HDR_INIT_LOCAL */
 
-struct ipa_cmd_hw_hdr_init_local {
-	__le64 hdr_table_addr;
-	__le32 flags;
-	__le32 reserved;
+union ipa_cmd_hw_hdr_init_local {
+	struct {
+		__le32 hdr_table_addr;
+		__le32 flags;
+	} v2;
+	struct {
+		__le64 hdr_table_addr;
+		__le32 flags;
+		__le32 reserved;
+	} v3;
 };
 
 /* Field masks for ipa_cmd_hw_hdr_init_local structure fields */
@@ -109,14 +125,37 @@ struct ipa_cmd_ip_packet_init {
 #define DMA_SHARED_MEM_OPCODE_SKIP_CLEAR_FMASK		GENMASK(8, 8)
 #define DMA_SHARED_MEM_OPCODE_CLEAR_OPTION_FMASK	GENMASK(10, 9)
 
-struct ipa_cmd_hw_dma_mem_mem {
-	__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
-	__le16 size;
-	__le16 local_addr;
-	__le16 flags;
-	__le64 system_addr;
+union ipa_cmd_hw_dma_mem_mem {
+	struct {
+		__le16 reserved;
+		__le16 size;
+		__le32 system_addr;
+		__le16 local_addr;
+		__le16 flags; /* the least significant 14 bits are reserved */
+		__le32 padding;
+	} v2;
+	struct {
+		__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
+		__le16 size;
+		__le16 local_addr;
+		__le16 flags;
+		__le64 system_addr;
+	} v3;
 };
 
+#define CMD_FIELD(_version, _payload, _field)				\
+	*(((_version) > IPA_VERSION_2_6L) ?		    		\
+	  &(_payload->v3._field) :			    		\
+	  &(_payload->v2._field))
+
+#define SET_DMA_FIELD(_ver, _payload, _field, _value)			\
+	do {								\
+		if ((_ver) >= IPA_VERSION_3_0)				\
+			(_payload)->v3._field = cpu_to_le64(_value);	\
+		else							\
+			(_payload)->v2._field = cpu_to_le32(_value);	\
+	} while (0)
+
 /* Flag allowing atomic clear of target region after reading data (v4.0+)*/
 #define DMA_SHARED_MEM_CLEAR_AFTER_READ			GENMASK(15, 15)
 
@@ -132,15 +171,16 @@ struct ipa_cmd_ip_packet_tag_status {
 	__le64 tag;
 };
 
-#define IP_PACKET_TAG_STATUS_TAG_FMASK			GENMASK_ULL(63, 16)
+#define IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 32)
+#define IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 16)
 
 /* Immediate command payload */
 union ipa_cmd_payload {
-	struct ipa_cmd_hw_ip_fltrt_init table_init;
-	struct ipa_cmd_hw_hdr_init_local hdr_init_local;
+	union ipa_cmd_hw_ip_fltrt_init table_init;
+	union ipa_cmd_hw_hdr_init_local hdr_init_local;
 	struct ipa_cmd_register_write register_write;
 	struct ipa_cmd_ip_packet_init ip_packet_init;
-	struct ipa_cmd_hw_dma_mem_mem dma_shared_mem;
+	union ipa_cmd_hw_dma_mem_mem dma_shared_mem;
 	struct ipa_cmd_ip_packet_tag_status ip_packet_tag_status;
 };
 
@@ -154,6 +194,7 @@ static void ipa_cmd_validate_build(void)
 	 * of entries.
 	 */
 #define TABLE_SIZE	(TABLE_COUNT_MAX * sizeof(__le64))
+// TODO
 #define TABLE_COUNT_MAX	max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX)
 	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK));
 	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
@@ -405,15 +446,26 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
 {
 	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum dma_data_direction direction = DMA_TO_DEVICE;
-	struct ipa_cmd_hw_ip_fltrt_init *payload;
+	union ipa_cmd_hw_ip_fltrt_init *payload;
+	enum ipa_version version = ipa->version;
 	union ipa_cmd_payload *cmd_payload;
 	dma_addr_t payload_addr;
 	u64 val;
 
 	/* Record the non-hash table offset and size */
 	offset += ipa->mem_offset;
-	val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
-	val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
+
+	if (version >= IPA_VERSION_3_0) {
+		val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
+		val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
+	} else if (opcode == IPA_CMD_IP_V4_FILTER_INIT ||
+		   opcode == IPA_CMD_IP_V4_ROUTING_INIT) {
+		val = u64_encode_bits(offset, IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK);
+		val |= u64_encode_bits(size, IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK);
+	} else { /* IPA <= v2.6L IPv6 */
+		val = u64_encode_bits(offset, IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK);
+		val |= u64_encode_bits(size, IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK);
+	}
 
 	/* The hash table offset and address are zero if its size is 0 */
 	if (hash_size) {
@@ -429,10 +481,10 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
 	payload = &cmd_payload->table_init;
 
 	/* Fill in all offsets and sizes and the non-hash table address */
-	if (hash_size)
-		payload->hash_rules_addr = cpu_to_le64(hash_addr);
-	payload->flags = cpu_to_le64(val);
-	payload->nhash_rules_addr = cpu_to_le64(addr);
+	if (hash_size && version >= IPA_VERSION_3_0)
+		payload->v3.hash_rules_addr = cpu_to_le64(hash_addr);
+	SET_DMA_FIELD(version, payload, flags, val);
+	SET_DMA_FIELD(version, payload, nhash_rules_addr, addr);
 
 	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
@@ -445,7 +497,7 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
 	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL;
 	enum dma_data_direction direction = DMA_TO_DEVICE;
-	struct ipa_cmd_hw_hdr_init_local *payload;
+	union ipa_cmd_hw_hdr_init_local *payload;
 	union ipa_cmd_payload *cmd_payload;
 	dma_addr_t payload_addr;
 	u32 flags;
@@ -460,10 +512,10 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
 	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
 	payload = &cmd_payload->hdr_init_local;
 
-	payload->hdr_table_addr = cpu_to_le64(addr);
+	SET_DMA_FIELD(ipa->version, payload, hdr_table_addr, addr);
 	flags = u32_encode_bits(size, HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
 	flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
-	payload->flags = cpu_to_le32(flags);
+	CMD_FIELD(ipa->version, payload, flags) = cpu_to_le32(flags);
 
 	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
@@ -509,8 +561,11 @@ void ipa_cmd_register_write_add(struct ipa_trans *trans, u32 offset, u32 value,
 
 	} else {
 		flags = 0;	/* SKIP_CLEAR flag is always 0 */
-		options = u16_encode_bits(clear_option,
-					  REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
+		if (ipa->version > IPA_VERSION_2_6L)
+			options = u16_encode_bits(clear_option,
+					REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
+		else
+			options = 0;
 	}
 
 	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
@@ -552,7 +607,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
 {
 	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
 	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM;
-	struct ipa_cmd_hw_dma_mem_mem *payload;
+	enum ipa_version version = ipa->version;
+	union ipa_cmd_hw_dma_mem_mem *payload;
 	union ipa_cmd_payload *cmd_payload;
 	enum dma_data_direction direction;
 	dma_addr_t payload_addr;
@@ -571,8 +627,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
 	/* payload->clear_after_read was reserved prior to IPA v4.0.  It's
 	 * never needed for current code, so it's 0 regardless of version.
 	 */
-	payload->size = cpu_to_le16(size);
-	payload->local_addr = cpu_to_le16(offset);
+	CMD_FIELD(version, payload, size) = cpu_to_le16(size);
+	CMD_FIELD(version, payload, local_addr) = cpu_to_le16(offset);
 	/* payload->flags:
 	 *   direction:		0 = write to IPA, 1 read from IPA
 	 * Starting at v4.0 these are reserved; either way, all zero:
@@ -582,8 +638,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
 	 * since both values are 0 we won't bother OR'ing them in.
 	 */
 	flags = toward_ipa ? 0 : DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK;
-	payload->flags = cpu_to_le16(flags);
-	payload->system_addr = cpu_to_le64(addr);
+	CMD_FIELD(version, payload, flags) = cpu_to_le16(flags);
+	SET_DMA_FIELD(version, payload, system_addr, addr);
 
 	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
 
@@ -599,11 +655,17 @@ static void ipa_cmd_ip_tag_status_add(struct ipa_trans *trans)
 	struct ipa_cmd_ip_packet_tag_status *payload;
 	union ipa_cmd_payload *cmd_payload;
 	dma_addr_t payload_addr;
+	u64 tag_mask;
+
+	if (trans->dma_subsys->version <= IPA_VERSION_2_6L)
+		tag_mask = IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK;
+	else
+		tag_mask = IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK;
 
 	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
 	payload = &cmd_payload->ip_packet_tag_status;
 
-	payload->tag = le64_encode_bits(0, IP_PACKET_TAG_STATUS_TAG_FMASK);
+	payload->tag = le64_encode_bits(0, tag_mask);
 
 	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
 			  direction, opcode);
diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
index d197959cc032..459fb4830244 100644
--- a/drivers/net/ipa/ipa_table.c
+++ b/drivers/net/ipa/ipa_table.c
@@ -8,6 +8,7 @@
 #include <linux/kernel.h>
 #include <linux/bits.h>
 #include <linux/bitops.h>
+#include <linux/module.h>
 #include <linux/bitfield.h>
 #include <linux/io.h>
 #include <linux/build_bug.h>
@@ -561,6 +562,19 @@ void ipa_table_config(struct ipa *ipa)
 	ipa_route_config(ipa, true);
 }
 
+static inline void *ipa_table_write(enum ipa_version version,
+				   void *virt, u64 value)
+{
+	if (IPA_IS_64BIT(version)) {
+		__le64 *ptr = virt;
+		*ptr = cpu_to_le64(value);
+	} else {
+		__le32 *ptr = virt;
+		*ptr = cpu_to_le32(value);
+	}
+	return virt + IPA_TABLE_ENTRY_SIZE(version);
+}
+
 /*
  * Initialize a coherent DMA allocation containing initialized filter and
  * route table data.  This is used when initializing or resetting the IPA
@@ -602,10 +616,11 @@ void ipa_table_config(struct ipa *ipa)
 int ipa_table_init(struct ipa *ipa)
 {
 	u32 count = max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX);
+	enum ipa_version version = ipa->version;
 	struct device *dev = &ipa->pdev->dev;
+	u64 filter_map = ipa->filter_map << 1;
 	dma_addr_t addr;
-	__le64 le_addr;
-	__le64 *virt;
+	void *virt;
 	size_t size;
 
 	ipa_table_validate_build();
@@ -626,19 +641,21 @@ int ipa_table_init(struct ipa *ipa)
 	ipa->table_addr = addr;
 
 	/* First slot is the zero rule */
-	*virt++ = 0;
+	virt = ipa_table_write(version, virt, 0);
 
 	/* Next is the filter table bitmap.  The "soft" bitmap value
 	 * must be converted to the hardware representation by shifting
 	 * it left one position.  (Bit 0 repesents global filtering,
 	 * which is possible but not used.)
 	 */
-	*virt++ = cpu_to_le64((u64)ipa->filter_map << 1);
+	if (version <= IPA_VERSION_2_6L)
+		filter_map |= 1;
+
+	virt = ipa_table_write(version, virt, filter_map);
 
 	/* All the rest contain the DMA address of the zero rule */
-	le_addr = cpu_to_le64(addr);
 	while (count--)
-		*virt++ = le_addr;
+		virt = ipa_table_write(version, virt, addr);
 
 	return 0;
 }
diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
index 78a168ce6558..6e12fc49e45b 100644
--- a/drivers/net/ipa/ipa_table.h
+++ b/drivers/net/ipa/ipa_table.h
@@ -43,7 +43,7 @@ bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask);
  */
 static inline bool ipa_table_hash_support(struct ipa *ipa)
 {
-	return ipa->version != IPA_VERSION_4_2;
+	return ipa->version != IPA_VERSION_4_2 && ipa->version > IPA_VERSION_2_6L;
 }
 
 /**
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (8 preceding siblings ...)
  2021-09-20  3:08 ` [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map Sireesh Kodali
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

IPA v2.x endpoints are largely the same as the endpoints on later
versions. The biggest change is the addition of the "skip_config" flag;
the only other change is the backlog limit, which is a fixed number on
IPA v2.6L.
Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_endpoint.c | 65 ++++++++++++++++++++++------------
 1 file changed, 43 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 7d3ab61cd890..024cf3a0ded0 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -360,8 +360,10 @@ void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable)
 {
 	u32 endpoint_id;
 
-	/* DELAY mode doesn't work correctly on IPA v4.2 */
-	if (ipa->version == IPA_VERSION_4_2)
+	/* DELAY mode doesn't work correctly on IPA v4.2
+	 * Pausing is not supported on IPA v2.6L
+	 */
+	if (ipa->version == IPA_VERSION_4_2 || ipa->version <= IPA_VERSION_2_6L)
 		return;
 
 	for (endpoint_id = 0; endpoint_id < IPA_ENDPOINT_MAX; endpoint_id++) {
@@ -383,6 +385,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 {
 	u32 initialized = ipa->initialized;
 	struct ipa_trans *trans;
+	u32 value = 0, value_mask = ~0;
 	u32 count;
 
 	/* We need one command per modem TX endpoint.  We can get an upper
@@ -398,6 +401,11 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 		return -EBUSY;
 	}
 
+	if (ipa->version <= IPA_VERSION_2_6L) {
+		value = aggr_force_close_fmask(true);
+		value_mask = aggr_force_close_fmask(true);
+	}
+
 	while (initialized) {
 		u32 endpoint_id = __ffs(initialized);
 		struct ipa_endpoint *endpoint;
@@ -416,7 +424,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 		 * means status is disabled on the endpoint, and as a
 		 * result all other fields in the register are ignored.
 		 */
-		ipa_cmd_register_write_add(trans, offset, 0, ~0, false);
+		ipa_cmd_register_write_add(trans, offset, value, value_mask, false);
 	}
 
 	ipa_cmd_pipeline_clear_add(trans);
@@ -1531,8 +1539,10 @@ static void ipa_endpoint_program(struct ipa_endpoint *endpoint)
 	ipa_endpoint_init_mode(endpoint);
 	ipa_endpoint_init_aggr(endpoint);
 	ipa_endpoint_init_deaggr(endpoint);
-	ipa_endpoint_init_rsrc_grp(endpoint);
-	ipa_endpoint_init_seq(endpoint);
+	if (endpoint->ipa->version > IPA_VERSION_2_6L) {
+		ipa_endpoint_init_rsrc_grp(endpoint);
+		ipa_endpoint_init_seq(endpoint);
+	}
 	ipa_endpoint_status(endpoint);
 }
 
@@ -1592,7 +1602,6 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
 {
 	struct device *dev = &endpoint->ipa->pdev->dev;
 	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
-	bool stop_channel;
 	int ret;
 
 	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
@@ -1613,7 +1622,6 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
 {
 	struct device *dev = &endpoint->ipa->pdev->dev;
 	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
-	bool start_channel;
 	int ret;
 
 	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
@@ -1750,23 +1758,33 @@ int ipa_endpoint_config(struct ipa *ipa)
 	/* Find out about the endpoints supplied by the hardware, and ensure
 	 * the highest one doesn't exceed the number we support.
 	 */
-	val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
-
-	/* Our RX is an IPA producer */
-	rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
-	max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
-	if (max > IPA_ENDPOINT_MAX) {
-		dev_err(dev, "too many endpoints (%u > %u)\n",
-			max, IPA_ENDPOINT_MAX);
-		return -EINVAL;
-	}
-	rx_mask = GENMASK(max - 1, rx_base);
+	if (ipa->version <= IPA_VERSION_2_6L) {
+		// FIXME Not used anywhere?
+		if (ipa->version == IPA_VERSION_2_6L)
+			val = ioread32(ipa->reg_virt +
+					IPA_REG_V2_ENABLED_PIPES_OFFSET);
+		/* IPA v2.6L supports 20 pipes */
+		ipa->available = ipa->filter_map;
+		return 0;
+	} else {
+		val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
+
+		/* Our RX is an IPA producer */
+		rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
+		max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
+		if (max > IPA_ENDPOINT_MAX) {
+			dev_err(dev, "too many endpoints (%u > %u)\n",
+					max, IPA_ENDPOINT_MAX);
+			return -EINVAL;
+		}
+		rx_mask = GENMASK(max - 1, rx_base);
 
-	/* Our TX is an IPA consumer */
-	max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
-	tx_mask = GENMASK(max - 1, 0);
+		/* Our TX is an IPA consumer */
+		max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
+		tx_mask = GENMASK(max - 1, 0);
 
-	ipa->available = rx_mask | tx_mask;
+		ipa->available = rx_mask | tx_mask;
+	}
 
 	/* Check for initialized endpoints not supported by the hardware */
 	if (ipa->initialized & ~ipa->available) {
@@ -1865,6 +1883,9 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
 			filter_map |= BIT(data->endpoint_id);
 	}
 
+	if (ipa->version <= IPA_VERSION_2_6L)
+		filter_map = 0x1fffff;
+
 	if (!ipa_filter_map_valid(ipa, filter_map))
 		goto err_endpoint_exit;
 
-- 
2.33.0



* [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (9 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface Sireesh Kodali
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

IPA v2.6L has an extra region to handle compression/decompression
acceleration. This region is used by some modems during modem init.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_mem.c | 36 ++++++++++++++++++++++++++++++------
 drivers/net/ipa/ipa_mem.h |  5 ++++-
 2 files changed, 34 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
index 8acc88070a6f..bfcdc7e08de2 100644
--- a/drivers/net/ipa/ipa_mem.c
+++ b/drivers/net/ipa/ipa_mem.c
@@ -84,7 +84,7 @@ int ipa_mem_setup(struct ipa *ipa)
 	/* Get a transaction to define the header memory region and to zero
 	 * the processing context and modem memory regions.
 	 */
-	trans = ipa_cmd_trans_alloc(ipa, 4);
+	trans = ipa_cmd_trans_alloc(ipa, 5);
 	if (!trans) {
 		dev_err(&ipa->pdev->dev, "no transaction for memory setup\n");
 		return -EBUSY;
@@ -107,8 +107,14 @@ int ipa_mem_setup(struct ipa *ipa)
 	ipa_mem_zero_region_add(trans, IPA_MEM_AP_PROC_CTX);
 	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM);
 
+	ipa_mem_zero_region_add(trans, IPA_MEM_ZIP);
+
 	ipa_trans_commit_wait(trans);
 
+	/* On IPA version <=2.6L (except 2.5) there is no PROC_CTX.  */
+	if (ipa->version != IPA_VERSION_2_5 && ipa->version <= IPA_VERSION_2_6L)
+		return 0;
+
 	/* Tell the hardware where the processing context area is located */
 	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
 	offset = ipa->mem_offset + mem->offset;
@@ -147,6 +153,11 @@ static bool ipa_mem_id_valid(struct ipa *ipa, enum ipa_mem_id mem_id)
 	case IPA_MEM_END_MARKER:	/* pseudo region */
 		break;
 
+	case IPA_MEM_ZIP:
+		if (version == IPA_VERSION_2_6L)
+			return true;
+		break;
+
 	case IPA_MEM_STATS_TETHERING:
 	case IPA_MEM_STATS_DROP:
 		if (version < IPA_VERSION_4_0)
@@ -319,10 +330,15 @@ int ipa_mem_config(struct ipa *ipa)
 	/* Check the advertised location and size of the shared memory area */
 	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
 
-	/* The fields in the register are in 8 byte units */
-	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
-	/* Make sure the end is within the region's mapped space */
-	mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
+	if (IPA_VERSION_RANGE(ipa->version, 2_0, 2_6L)) {
+		/* The fields in the register are in 8 byte units */
+		ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
+		/* Make sure the end is within the region's mapped space */
+		mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
+	} else {
+		ipa->mem_offset = u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
+		mem_size = u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
+	}
 
 	/* If the sizes don't match, issue a warning */
 	if (ipa->mem_offset + mem_size < ipa->mem_size) {
@@ -564,6 +580,10 @@ static int ipa_smem_init(struct ipa *ipa, u32 item, size_t size)
 		return -EINVAL;
 	}
 
+	/* IPA v2.6L does not use IOMMU */
+	if (ipa->version <= IPA_VERSION_2_6L)
+		return 0;
+
 	domain = iommu_get_domain_for_dev(dev);
 	if (!domain) {
 		dev_err(dev, "no IOMMU domain found for SMEM\n");
@@ -591,6 +611,9 @@ static void ipa_smem_exit(struct ipa *ipa)
 	struct device *dev = &ipa->pdev->dev;
 	struct iommu_domain *domain;
 
+	if (ipa->version <= IPA_VERSION_2_6L)
+		return;
+
 	domain = iommu_get_domain_for_dev(dev);
 	if (domain) {
 		size_t size;
@@ -622,7 +645,8 @@ int ipa_mem_init(struct ipa *ipa, const struct ipa_mem_data *mem_data)
 	ipa->mem_count = mem_data->local_count;
 	ipa->mem = mem_data->local;
 
-	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64));
+	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, IPA_IS_64BIT(ipa->version) ?
+					DMA_BIT_MASK(64) : DMA_BIT_MASK(32));
 	if (ret) {
 		dev_err(dev, "error %d setting DMA mask\n", ret);
 		return ret;
diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
index 570bfdd99bff..be91cb38b6a8 100644
--- a/drivers/net/ipa/ipa_mem.h
+++ b/drivers/net/ipa/ipa_mem.h
@@ -47,8 +47,10 @@ enum ipa_mem_id {
 	IPA_MEM_UC_INFO,		/* 0 canaries */
 	IPA_MEM_V4_FILTER_HASHED,	/* 2 canaries */
 	IPA_MEM_V4_FILTER,		/* 2 canaries */
+	IPA_MEM_V4_FILTER_AP,		/* 2 canaries (IPA v2.0) */
 	IPA_MEM_V6_FILTER_HASHED,	/* 2 canaries */
 	IPA_MEM_V6_FILTER,		/* 2 canaries */
+	IPA_MEM_V6_FILTER_AP,		/* 0 canaries (IPA v2.0) */
 	IPA_MEM_V4_ROUTE_HASHED,	/* 2 canaries */
 	IPA_MEM_V4_ROUTE,		/* 2 canaries */
 	IPA_MEM_V6_ROUTE_HASHED,	/* 2 canaries */
@@ -57,7 +59,8 @@ enum ipa_mem_id {
 	IPA_MEM_AP_HEADER,		/* 0 canaries, optional */
 	IPA_MEM_MODEM_PROC_CTX,		/* 2 canaries */
 	IPA_MEM_AP_PROC_CTX,		/* 0 canaries */
-	IPA_MEM_MODEM,			/* 0/2 canaries */
+	IPA_MEM_ZIP,			/* 1 canary (IPA v2.6L) */
+	IPA_MEM_MODEM,			/* 0-2 canaries */
 	IPA_MEM_UC_EVENT_RING,		/* 1 canary, optional */
 	IPA_MEM_PDN_CONFIG,		/* 0/2 canaries (IPA v4.0+) */
 	IPA_MEM_STATS_QUOTA_MODEM,	/* 2/4 canaries (IPA v4.0+) */
-- 
2.33.0



* [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (10 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller Sireesh Kodali
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, Vladimir Lypak, David S. Miller, Jakub Kicinski

On IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETED, so we have
to rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know when it is
ready. We add a function here that marks uc_ready = true; this function
is called by ipa_uc.c when IPA_UC_RESPONSE_INIT_COMPLETED is handled.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
---
 drivers/net/ipa/ipa_qmi.c | 27 ++++++++++++++++++++++++++-
 drivers/net/ipa/ipa_qmi.h | 10 ++++++++++
 2 files changed, 36 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
index 7e2fe701cc4d..876e2a004f70 100644
--- a/drivers/net/ipa/ipa_qmi.c
+++ b/drivers/net/ipa/ipa_qmi.c
@@ -68,6 +68,11 @@
  * - The INDICATION_REGISTER request and INIT_COMPLETE indication are
  *   optional for non-initial modem boots, and have no bearing on the
  *   determination of when things are "ready"
+ *
+ * Note that on IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETE
+ * request. Thus, we rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know
+ * when the uc is ready. The rest of the process is the same on IPA v2.x and
+ * later IPA versions
  */
 
 #define IPA_HOST_SERVICE_SVC_ID		0x31
@@ -345,7 +350,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
 			req.hdr_proc_ctx_tbl_info.start + mem->size - 1;
 	}
 
-	/* Nothing to report for the compression table (zip_tbl_info) */
+	mem = &ipa->mem[IPA_MEM_ZIP];
+	if (mem->size) {
+		req.zip_tbl_info_valid = 1;
+		req.zip_tbl_info.start = ipa->mem_offset + mem->offset;
+		req.zip_tbl_info.end = req.zip_tbl_info.start + mem->size - 1;
+	}
 
 	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE_HASHED);
 	if (mem->size) {
@@ -525,6 +535,21 @@ int ipa_qmi_setup(struct ipa *ipa)
 	return ret;
 }
 
+/* With IPA v2 modem is not required to send DRIVER_INIT_COMPLETE request to AP.
+ * We start operation as soon as IPA_UC_RESPONSE_INIT_COMPLETED irq is triggered.
+ */
+void ipa_qmi_signal_uc_loaded(struct ipa *ipa)
+{
+	struct ipa_qmi *ipa_qmi = &ipa->qmi;
+
+	/* This is needed only on IPA 2.x */
+	if (ipa->version > IPA_VERSION_2_6L)
+		return;
+
+	ipa_qmi->uc_ready = true;
+	ipa_qmi_ready(ipa_qmi);
+}
+
 /* Tear down IPA QMI handles */
 void ipa_qmi_teardown(struct ipa *ipa)
 {
diff --git a/drivers/net/ipa/ipa_qmi.h b/drivers/net/ipa/ipa_qmi.h
index 856ef629ccc8..4962d88b0d22 100644
--- a/drivers/net/ipa/ipa_qmi.h
+++ b/drivers/net/ipa/ipa_qmi.h
@@ -55,6 +55,16 @@ struct ipa_qmi {
  */
 int ipa_qmi_setup(struct ipa *ipa);
 
+/**
+ * ipa_qmi_signal_uc_loaded() - Signal that the UC has been loaded
+ * @ipa:		IPA pointer
+ *
+ * This is called when the uc indicates that it is ready. This exists, because
+ * on IPA v2.x, the modem does not send a DRIVER_INIT_COMPLETED. Thus we have
+ * to rely on the uc's INIT_COMPLETED response to know if it was initialized
+ */
+void ipa_qmi_signal_uc_loaded(struct ipa *ipa);
+
 /**
  * ipa_qmi_teardown() - Tear down IPA QMI handles
  * @ipa:		IPA pointer
-- 
2.33.0



* [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (11 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support Sireesh Kodali
                   ` (3 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

There are some minor differences between IPA v2.x and later revisions
with regard to the uc. The biggest difference is the layout of the
shared memory. There are also some changes to the command numbers, but
these are not too important, since the mainline driver doesn't use them.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_uc.c | 96 ++++++++++++++++++++++++++--------------
 1 file changed, 63 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ipa/ipa_uc.c b/drivers/net/ipa/ipa_uc.c
index 856e55a080a7..bf6b25098301 100644
--- a/drivers/net/ipa/ipa_uc.c
+++ b/drivers/net/ipa/ipa_uc.c
@@ -39,11 +39,12 @@
 #define IPA_SEND_DELAY		100	/* microseconds */
 
 /**
- * struct ipa_uc_mem_area - AP/microcontroller shared memory area
+ * union ipa_uc_mem_area - AP/microcontroller shared memory area
  * @command:		command code (AP->microcontroller)
  * @reserved0:		reserved bytes; avoid reading or writing
  * @command_param:	low 32 bits of command parameter (AP->microcontroller)
  * @command_param_hi:	high 32 bits of command parameter (AP->microcontroller)
+ *			Available since IPA v3.0
  *
  * @response:		response code (microcontroller->AP)
  * @reserved1:		reserved bytes; avoid reading or writing
@@ -59,31 +60,58 @@
  * @reserved3:		reserved bytes; avoid reading or writing
  * @interface_version:	hardware-reported interface version
  * @reserved4:		reserved bytes; avoid reading or writing
+ * @reserved5:		reserved bytes; avoid reading or writing
  *
  * A shared memory area at the base of IPA resident memory is used for
  * communication with the microcontroller.  The region is 128 bytes in
  * size, but only the first 40 bytes (structured this way) are used.
  */
-struct ipa_uc_mem_area {
-	u8 command;		/* enum ipa_uc_command */
-	u8 reserved0[3];
-	__le32 command_param;
-	__le32 command_param_hi;
-	u8 response;		/* enum ipa_uc_response */
-	u8 reserved1[3];
-	__le32 response_param;
-	u8 event;		/* enum ipa_uc_event */
-	u8 reserved2[3];
-
-	__le32 event_param;
-	__le32 first_error_address;
-	u8 hw_state;
-	u8 warning_counter;
-	__le16 reserved3;
-	__le16 interface_version;
-	__le16 reserved4;
+union ipa_uc_mem_area {
+	struct {
+		u8 command;		/* enum ipa_uc_command */
+		u8 reserved0[3];
+		__le32 command_param;
+		u8 response;		/* enum ipa_uc_response */
+		u8 reserved1[3];
+		__le32 response_param;
+		u8 event;		/* enum ipa_uc_event */
+		u8 reserved2[3];
+
+		__le32 event_param;
+		__le32 reserved3;
+		__le32 first_error_address;
+		u8 hw_state;
+		u8 warning_counter;
+		__le16 reserved4;
+		__le16 interface_version;
+		__le16 reserved5;
+	} v2;
+	struct {
+		u8 command;		/* enum ipa_uc_command */
+		u8 reserved0[3];
+		__le32 command_param;
+		__le32 command_param_hi;
+		u8 response;		/* enum ipa_uc_response */
+		u8 reserved1[3];
+		__le32 response_param;
+		u8 event;		/* enum ipa_uc_event */
+		u8 reserved2[3];
+
+		__le32 event_param;
+		__le32 first_error_address;
+		u8 hw_state;
+		u8 warning_counter;
+		__le16 reserved3;
+		__le16 interface_version;
+		__le16 reserved4;
+	} v3;
 };
 
+#define UC_FIELD(_ipa, _field)			\
+	*((_ipa->version >= IPA_VERSION_3_0) ?	\
+	  &(ipa_uc_shared(_ipa)->v3._field) :	\
+	  &(ipa_uc_shared(_ipa)->v2._field))
+
 /** enum ipa_uc_command - commands from the AP to the microcontroller */
 enum ipa_uc_command {
 	IPA_UC_COMMAND_NO_OP		= 0x0,
@@ -95,6 +123,7 @@ enum ipa_uc_command {
 	IPA_UC_COMMAND_CLK_UNGATE	= 0x6,
 	IPA_UC_COMMAND_MEMCPY		= 0x7,
 	IPA_UC_COMMAND_RESET_PIPE	= 0x8,
+	/* Next two commands are present for IPA v3.0+ */
 	IPA_UC_COMMAND_REG_WRITE	= 0x9,
 	IPA_UC_COMMAND_GSI_CH_EMPTY	= 0xa,
 };
@@ -114,7 +143,7 @@ enum ipa_uc_event {
 	IPA_UC_EVENT_LOG_INFO		= 0x2,
 };
 
-static struct ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
+static union ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
 {
 	const struct ipa_mem *mem = ipa_mem_find(ipa, IPA_MEM_UC_SHARED);
 	u32 offset = ipa->mem_offset + mem->offset;
@@ -125,22 +154,22 @@ static struct ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
 /* Microcontroller event IPA interrupt handler */
 static void ipa_uc_event_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
 {
-	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
 	struct device *dev = &ipa->pdev->dev;
+	u32 event = UC_FIELD(ipa, event);
 
-	if (shared->event == IPA_UC_EVENT_ERROR)
+	if (event == IPA_UC_EVENT_ERROR)
 		dev_err(dev, "microcontroller error event\n");
-	else if (shared->event != IPA_UC_EVENT_LOG_INFO)
+	else if (event != IPA_UC_EVENT_LOG_INFO)
 		dev_err(dev, "unsupported microcontroller event %u\n",
-			shared->event);
+			event);
 	/* The LOG_INFO event can be safely ignored */
 }
 
 /* Microcontroller response IPA interrupt handler */
 static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
 {
-	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
 	struct device *dev = &ipa->pdev->dev;
+	u32 response = UC_FIELD(ipa, response);
 
 	/* An INIT_COMPLETED response message is sent to the AP by the
 	 * microcontroller when it is operational.  Other than this, the AP
@@ -150,20 +179,21 @@ static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
 	 * We can drop the power reference taken in ipa_uc_power() once we
 	 * know the microcontroller has finished its initialization.
 	 */
-	switch (shared->response) {
+	switch (response) {
 	case IPA_UC_RESPONSE_INIT_COMPLETED:
 		if (ipa->uc_powered) {
 			ipa->uc_loaded = true;
 			pm_runtime_mark_last_busy(dev);
 			(void)pm_runtime_put_autosuspend(dev);
 			ipa->uc_powered = false;
+			ipa_qmi_signal_uc_loaded(ipa);
 		} else {
 			dev_warn(dev, "unexpected init_completed response\n");
 		}
 		break;
 	default:
 		dev_warn(dev, "unsupported microcontroller response %u\n",
-			 shared->response);
+			 response);
 		break;
 	}
 }
@@ -216,16 +246,16 @@ void ipa_uc_power(struct ipa *ipa)
 /* Send a command to the microcontroller */
 static void send_uc_command(struct ipa *ipa, u32 command, u32 command_param)
 {
-	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
 	u32 offset;
 	u32 val;
 
 	/* Fill in the command data */
-	shared->command = command;
-	shared->command_param = cpu_to_le32(command_param);
-	shared->command_param_hi = 0;
-	shared->response = 0;
-	shared->response_param = 0;
+	UC_FIELD(ipa, command) = command;
+	UC_FIELD(ipa, command_param) = cpu_to_le32(command_param);
+	if (ipa->version >= IPA_VERSION_3_0)
+		ipa_uc_shared(ipa)->v3.command_param_hi = 0;
+	UC_FIELD(ipa, response) = 0;
+	UC_FIELD(ipa, response_param) = 0;
 
 	/* Use an interrupt to tell the microcontroller the command is ready */
 	val = u32_encode_bits(1, UC_INTR_FMASK);
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (12 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware Sireesh Kodali
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

The biggest changes are:

- Make the SMP2P functions no-ops
- Make resource init a no-op
- Skip firmware loading
- Add a reset sequence

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/ipa_main.c     | 19 ++++++++++++++++---
 drivers/net/ipa/ipa_resource.c |  3 +++
 drivers/net/ipa/ipa_smp2p.c    | 11 +++++++++--
 3 files changed, 28 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index ea6c4347f2c6..b437fbf95edf 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -355,12 +355,22 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
 	u32 granularity;
 	u32 val;
 
+	if (ipa->version <= IPA_VERSION_2_6L) {
+		iowrite32(1, ipa->reg_virt + IPA_REG_COMP_SW_RESET_OFFSET);
+		iowrite32(0, ipa->reg_virt + IPA_REG_COMP_SW_RESET_OFFSET);
+
+		iowrite32(1, ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
+	}
+
 	/* IPA v4.5+ has no backward compatibility register */
-	if (version < IPA_VERSION_4_5) {
+	if (version >= IPA_VERSION_2_5 && version < IPA_VERSION_4_5) {
 		val = data->backward_compat;
 		iowrite32(val, ipa->reg_virt + ipa_reg_bcr_offset(ipa->version));
 	}
 
+	if (ipa->version <= IPA_VERSION_2_6L)
+		return;
+
 	/* Implement some hardware workarounds */
 	if (version >= IPA_VERSION_4_0 && version < IPA_VERSION_4_5) {
 		/* Disable PA mask to allow HOLB drop */
@@ -412,7 +422,8 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
 static void ipa_hardware_deconfig(struct ipa *ipa)
 {
 	/* Mostly we just leave things as we set them. */
-	ipa_hardware_dcd_deconfig(ipa);
+	if (ipa->version > IPA_VERSION_2_6L)
+		ipa_hardware_dcd_deconfig(ipa);
 }
 
 /**
@@ -765,8 +776,10 @@ static int ipa_probe(struct platform_device *pdev)
 
 	/* Otherwise we need to load the firmware and have Trust Zone validate
 	 * and install it.  If that succeeds we can proceed with setup.
+	 * But on IPA v2.6L there is no firmware to load, so skip this step.
 	 */
-	ret = ipa_firmware_load(dev);
+	if (ipa->version > IPA_VERSION_2_6L)
+		ret = ipa_firmware_load(dev);
 	if (ret)
 		goto err_deconfig;
 
diff --git a/drivers/net/ipa/ipa_resource.c b/drivers/net/ipa/ipa_resource.c
index e3da95d69409..36a72324d828 100644
--- a/drivers/net/ipa/ipa_resource.c
+++ b/drivers/net/ipa/ipa_resource.c
@@ -162,6 +162,9 @@ int ipa_resource_config(struct ipa *ipa, const struct ipa_resource_data *data)
 {
 	u32 i;
 
+	if (ipa->version <= IPA_VERSION_2_6L)
+		return 0;
+
 	if (!ipa_resource_limits_valid(ipa, data))
 		return -EINVAL;
 
diff --git a/drivers/net/ipa/ipa_smp2p.c b/drivers/net/ipa/ipa_smp2p.c
index df7639c39d71..fa4a9f1c196a 100644
--- a/drivers/net/ipa/ipa_smp2p.c
+++ b/drivers/net/ipa/ipa_smp2p.c
@@ -233,6 +233,10 @@ int ipa_smp2p_init(struct ipa *ipa, bool modem_init)
 	u32 valid_bit;
 	int ret;
 
+	/* IPA v2.6L and earlier do not use SMP2P interrupts */
+	if (ipa->version <= IPA_VERSION_2_6L)
+		return 0;
+
 	valid_state = qcom_smem_state_get(dev, "ipa-clock-enabled-valid",
 					  &valid_bit);
 	if (IS_ERR(valid_state))
@@ -302,6 +306,9 @@ void ipa_smp2p_exit(struct ipa *ipa)
 {
 	struct ipa_smp2p *smp2p = ipa->smp2p;
 
+	if (!smp2p)
+		return;
+
 	if (smp2p->setup_ready_irq)
 		ipa_smp2p_irq_exit(smp2p, smp2p->setup_ready_irq);
 	ipa_smp2p_panic_notifier_unregister(smp2p);
@@ -317,7 +324,7 @@ void ipa_smp2p_disable(struct ipa *ipa)
 {
 	struct ipa_smp2p *smp2p = ipa->smp2p;
 
-	if (!smp2p->setup_ready_irq)
+	if (!smp2p || !smp2p->setup_ready_irq)
 		return;
 
 	mutex_lock(&smp2p->mutex);
@@ -333,7 +340,7 @@ void ipa_smp2p_notify_reset(struct ipa *ipa)
 	struct ipa_smp2p *smp2p = ipa->smp2p;
 	u32 mask;
 
-	if (!smp2p->notified)
+	if (!smp2p || !smp2p->notified)
 		return;
 
 	ipa_smp2p_power_release(ipa);
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (13 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-10-13 22:30   ` Alex Elder
  2021-09-20  3:08 ` [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA Sireesh Kodali
  2021-10-13 22:27 ` [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Alex Elder
  16 siblings, 1 reply; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, David S. Miller, Jakub Kicinski

This commit adds the configuration data for IPA v2.0, v2.5 and v2.6L.
IPA v2.5 is found on msm8996. IPA v2.6L hardware is found on the
following SoCs: msm8920, msm8940, msm8952, msm8953, msm8956, msm8976,
sdm630 and sdm660. No SoC-specific configuration is required in the IPA
driver.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 drivers/net/ipa/Makefile        |   7 +-
 drivers/net/ipa/ipa_data-v2.c   | 369 ++++++++++++++++++++++++++++++++
 drivers/net/ipa/ipa_data-v3.1.c |   2 +-
 drivers/net/ipa/ipa_data.h      |   3 +
 drivers/net/ipa/ipa_main.c      |  15 ++
 drivers/net/ipa/ipa_sysfs.c     |   6 +
 6 files changed, 398 insertions(+), 4 deletions(-)
 create mode 100644 drivers/net/ipa/ipa_data-v2.c

diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
index 4abebc667f77..858fbf76cff3 100644
--- a/drivers/net/ipa/Makefile
+++ b/drivers/net/ipa/Makefile
@@ -7,6 +7,7 @@ ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
 				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
 				ipa_sysfs.o
 
-ipa-y			+=	ipa_data-v3.1.o ipa_data-v3.5.1.o \
-				ipa_data-v4.2.o ipa_data-v4.5.o \
-				ipa_data-v4.9.o ipa_data-v4.11.o
+ipa-y			+=	ipa_data-v2.o ipa_data-v3.1.o \
+				ipa_data-v3.5.1.o ipa_data-v4.2.o \
+				ipa_data-v4.5.o ipa_data-v4.9.o \
+				ipa_data-v4.11.o
diff --git a/drivers/net/ipa/ipa_data-v2.c b/drivers/net/ipa/ipa_data-v2.c
new file mode 100644
index 000000000000..869b8a1a45d6
--- /dev/null
+++ b/drivers/net/ipa/ipa_data-v2.c
@@ -0,0 +1,369 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
+ * Copyright (C) 2019-2020 Linaro Ltd.
+ */
+
+#include <linux/log2.h>
+
+#include "ipa_data.h"
+#include "ipa_endpoint.h"
+#include "ipa_mem.h"
+
+/* Endpoint configuration for the IPA v2 hardware. */
+static const struct ipa_gsi_endpoint_data ipa_endpoint_data[] = {
+	[IPA_ENDPOINT_AP_COMMAND_TX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 3,
+		.endpoint_id	= 3,
+		.channel_name	= "cmd_tx",
+		.toward_ipa	= true,
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 20,
+		},
+		.endpoint = {
+			.config	= {
+				.dma_mode	= true,
+				.dma_endpoint	= IPA_ENDPOINT_AP_LAN_RX,
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_LAN_RX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 2,
+		.endpoint_id	= 2,
+		.channel_name	= "ap_lan_rx",
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 8,
+		},
+		.endpoint	= {
+			.config	= {
+				.aggregation	= true,
+				.status_enable	= true,
+				.rx = {
+					.pad_align	= ilog2(sizeof(u32)),
+				},
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_MODEM_TX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 4,
+		.endpoint_id	= 4,
+		.channel_name	= "ap_modem_tx",
+		.toward_ipa	= true,
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 8,
+		},
+		.endpoint	= {
+			.config	= {
+				.qmap		= true,
+				.status_enable	= true,
+				.tx = {
+					.status_endpoint =
+						IPA_ENDPOINT_AP_LAN_RX,
+				},
+			},
+		},
+	},
+	[IPA_ENDPOINT_AP_MODEM_RX] = {
+		.ee_id		= GSI_EE_AP,
+		.channel_id	= 5,
+		.endpoint_id	= 5,
+		.channel_name	= "ap_modem_rx",
+		.toward_ipa	= false,
+		.channel = {
+			.tre_count	= 256,
+			.event_count	= 256,
+			.tlv_count	= 8,
+		},
+		.endpoint	= {
+			.config = {
+				.aggregation	= true,
+				.qmap		= true,
+			},
+		},
+	},
+	[IPA_ENDPOINT_MODEM_LAN_TX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 6,
+		.endpoint_id	= 6,
+		.channel_name	= "modem_lan_tx",
+		.toward_ipa	= true,
+	},
+	[IPA_ENDPOINT_MODEM_COMMAND_TX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 7,
+		.endpoint_id	= 7,
+		.channel_name	= "modem_cmd_tx",
+		.toward_ipa	= true,
+	},
+	[IPA_ENDPOINT_MODEM_LAN_RX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 8,
+		.endpoint_id	= 8,
+		.channel_name	= "modem_lan_rx",
+		.toward_ipa	= false,
+	},
+	[IPA_ENDPOINT_MODEM_AP_RX] = {
+		.ee_id		= GSI_EE_MODEM,
+		.channel_id	= 9,
+		.endpoint_id	= 9,
+		.channel_name	= "modem_ap_rx",
+		.toward_ipa	= false,
+	},
+};
+
+static struct ipa_interconnect_data ipa_interconnect_data[] = {
+	{
+		.name = "memory",
+		.peak_bandwidth	= 1200000,	/* 1200 MBps */
+		.average_bandwidth = 100000,	/* 100 MBps */
+	},
+	{
+		.name = "imem",
+		.peak_bandwidth	= 350000,	/* 350 MBps */
+		.average_bandwidth  = 0,	/* unused */
+	},
+	{
+		.name = "config",
+		.peak_bandwidth	= 40000,	/* 40 MBps */
+		.average_bandwidth = 0,		/* unused */
+	},
+};
+
+static struct ipa_power_data ipa_power_data = {
+	.core_clock_rate	= 200 * 1000 * 1000,	/* Hz */
+	.interconnect_count	= ARRAY_SIZE(ipa_interconnect_data),
+	.interconnect_data	= ipa_interconnect_data,
+};
+
+/* IPA-resident memory region configuration for v2.0 */
+static const struct ipa_mem ipa_mem_local_data_v2_0[IPA_MEM_COUNT] = {
+	[IPA_MEM_UC_SHARED] = {
+		.offset         = 0,
+		.size           = 0x80,
+		.canary_count   = 0,
+	},
+	[IPA_MEM_V4_FILTER] = {
+		.offset		= 0x0080,
+		.size		= 0x0058,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_V6_FILTER] = {
+		.offset		= 0x00e0,
+		.size		= 0x0058,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V4_ROUTE] = {
+		.offset		= 0x0140,
+		.size		= 0x002c,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_ROUTE] = {
+		.offset		= 0x0170,
+		.size		= 0x002c,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_MODEM_HEADER] = {
+		.offset		= 0x01a0,
+		.size		= 0x0140,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_AP_HEADER] = {
+		.offset		= 0x02e0,
+		.size		= 0x0048,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_MODEM] = {
+		.offset		= 0x032c,
+		.size		= 0x0dcc,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_V4_FILTER_AP] = {
+		.offset		= 0x10fc,
+		.size		= 0x0780,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_V6_FILTER_AP] = {
+		.offset		= 0x187c,
+		.size		= 0x055c,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_UC_INFO] = {
+		.offset		= 0x1ddc,
+		.size		= 0x0124,
+		.canary_count	= 1,
+	},
+};
+
+static struct ipa_mem_data ipa_mem_data_v2_0 = {
+	.local		= ipa_mem_local_data_v2_0,
+	.smem_id	= 497,
+	.smem_size	= 0x00001f00,
+};
+
+/* Configuration data for IPAv2.0 */
+const struct ipa_data ipa_data_v2_0  = {
+	.version	= IPA_VERSION_2_0,
+	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
+	.endpoint_data	= ipa_endpoint_data,
+	.mem_data	= &ipa_mem_data_v2_0,
+	.power_data	= &ipa_power_data,
+};
+
+/* IPA-resident memory region configuration for v2.5 */
+static const struct ipa_mem ipa_mem_local_data_v2_5[IPA_MEM_COUNT] = {
+	[IPA_MEM_UC_SHARED] = {
+		.offset         = 0,
+		.size           = 0x80,
+		.canary_count   = 0,
+	},
+	[IPA_MEM_UC_INFO] = {
+		.offset		= 0x0080,
+		.size		= 0x0200,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_V4_FILTER] = {
+		.offset		= 0x0288,
+		.size		= 0x0058,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_FILTER] = {
+		.offset		= 0x02e8,
+		.size		= 0x0058,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V4_ROUTE] = {
+		.offset		= 0x0348,
+		.size		= 0x003c,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_V6_ROUTE] = {
+		.offset		= 0x0388,
+		.size		= 0x003c,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_MODEM_HEADER] = {
+		.offset		= 0x03c8,
+		.size		= 0x0140,
+		.canary_count	= 1,
+	},
+	[IPA_MEM_MODEM_PROC_CTX] = {
+		.offset		= 0x0510,
+		.size		= 0x0200,
+		.canary_count	= 2,
+	},
+	[IPA_MEM_AP_PROC_CTX] = {
+		.offset		= 0x0710,
+		.size		= 0x0200,
+		.canary_count	= 0,
+	},
+	[IPA_MEM_MODEM] = {
+		.offset		= 0x0914,
+		.size		= 0x16a8,
+		.canary_count	= 1,
+	},
+};
+
+static struct ipa_mem_data ipa_mem_data_v2_5 = {
+	.local		= ipa_mem_local_data_v2_5,
+	.smem_id	= 497,
+	.smem_size	= 0x00002000,
+};
+
+/* Configuration data for IPAv2.5 */
+const struct ipa_data ipa_data_v2_5  = {
+	.version	= IPA_VERSION_2_5,
+	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
+	.endpoint_data	= ipa_endpoint_data,
+	.mem_data	= &ipa_mem_data_v2_5,
+	.power_data	= &ipa_power_data,
+};
+
+/* IPA-resident memory region configuration for v2.6L */
+static const struct ipa_mem ipa_mem_local_data_v2_6L[IPA_MEM_COUNT] = {
+	{
+		.id		= IPA_MEM_UC_SHARED,
+		.offset         = 0,
+		.size           = 0x80,
+		.canary_count   = 0,
+	},
+	{
+		.id		= IPA_MEM_UC_INFO,
+		.offset		= 0x0080,
+		.size		= 0x0200,
+		.canary_count	= 0,
+	},
+	{
+		.id		= IPA_MEM_V4_FILTER,
+		.offset		= 0x0288,
+		.size		= 0x0058,
+		.canary_count	= 2,
+	},
+	{
+		.id		= IPA_MEM_V6_FILTER,
+		.offset		= 0x02e8,
+		.size		= 0x0058,
+		.canary_count	= 2,
+	},
+	{
+		.id		= IPA_MEM_V4_ROUTE,
+		.offset		= 0x0348,
+		.size		= 0x003c,
+		.canary_count	= 2,
+	},
+	{
+		.id		= IPA_MEM_V6_ROUTE,
+		.offset		= 0x0388,
+		.size		= 0x003c,
+		.canary_count	= 1,
+	},
+	{
+		.id		= IPA_MEM_MODEM_HEADER,
+		.offset		= 0x03c8,
+		.size		= 0x0140,
+		.canary_count	= 1,
+	},
+	{
+		.id		= IPA_MEM_ZIP,
+		.offset		= 0x0510,
+		.size		= 0x0200,
+		.canary_count	= 2,
+	},
+	{
+		.id		= IPA_MEM_MODEM,
+		.offset		= 0x0714,
+		.size		= 0x18e8,
+		.canary_count	= 1,
+	},
+	{
+		.id		= IPA_MEM_END_MARKER,
+		.offset		= 0x2000,
+		.size		= 0,
+		.canary_count	= 1,
+	},
+};
+
+static struct ipa_mem_data ipa_mem_data_v2_6L = {
+	.local		= ipa_mem_local_data_v2_6L,
+	.smem_id	= 497,
+	.smem_size	= 0x00002000,
+};
+
+/* Configuration data for IPAv2.6L */
+const struct ipa_data ipa_data_v2_6L  = {
+	.version	= IPA_VERSION_2_6L,
+	/* Unfortunately we don't know what this BCR value corresponds to */
+	.backward_compat = 0x1fff7f,
+	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
+	.endpoint_data	= ipa_endpoint_data,
+	.mem_data	= &ipa_mem_data_v2_6L,
+	.power_data	= &ipa_power_data,
+};
diff --git a/drivers/net/ipa/ipa_data-v3.1.c b/drivers/net/ipa/ipa_data-v3.1.c
index 06ddb85f39b2..12d231232756 100644
--- a/drivers/net/ipa/ipa_data-v3.1.c
+++ b/drivers/net/ipa/ipa_data-v3.1.c
@@ -6,7 +6,7 @@
 
 #include <linux/log2.h>
 
-#include "gsi.h"
+#include "ipa_dma.h"
 #include "ipa_data.h"
 #include "ipa_endpoint.h"
 #include "ipa_mem.h"
diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
index 7d62d49f414f..e7ce2e9388b6 100644
--- a/drivers/net/ipa/ipa_data.h
+++ b/drivers/net/ipa/ipa_data.h
@@ -301,6 +301,9 @@ struct ipa_data {
 	const struct ipa_power_data *power_data;
 };
 
+extern const struct ipa_data ipa_data_v2_0;
+extern const struct ipa_data ipa_data_v2_5;
+extern const struct ipa_data ipa_data_v2_6L;
 extern const struct ipa_data ipa_data_v3_1;
 extern const struct ipa_data ipa_data_v3_5_1;
 extern const struct ipa_data ipa_data_v4_2;
diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
index b437fbf95edf..3ae5c5c6734b 100644
--- a/drivers/net/ipa/ipa_main.c
+++ b/drivers/net/ipa/ipa_main.c
@@ -560,6 +560,18 @@ static int ipa_firmware_load(struct device *dev)
 }
 
 static const struct of_device_id ipa_match[] = {
+	{
+		.compatible	= "qcom,ipa-v2.0",
+		.data		= &ipa_data_v2_0,
+	},
+	{
+		.compatible	= "qcom,msm8996-ipa",
+		.data		= &ipa_data_v2_5,
+	},
+	{
+		.compatible	= "qcom,msm8953-ipa",
+		.data		= &ipa_data_v2_6L,
+	},
 	{
 		.compatible	= "qcom,msm8998-ipa",
 		.data		= &ipa_data_v3_1,
@@ -632,6 +644,9 @@ static void ipa_validate_build(void)
 static bool ipa_version_valid(enum ipa_version version)
 {
 	switch (version) {
+	case IPA_VERSION_2_0:
+	case IPA_VERSION_2_5:
+	case IPA_VERSION_2_6L:
 	case IPA_VERSION_3_0:
 	case IPA_VERSION_3_1:
 	case IPA_VERSION_3_5:
diff --git a/drivers/net/ipa/ipa_sysfs.c b/drivers/net/ipa/ipa_sysfs.c
index ff61dbdd70d8..f5d159f6bc06 100644
--- a/drivers/net/ipa/ipa_sysfs.c
+++ b/drivers/net/ipa/ipa_sysfs.c
@@ -14,6 +14,12 @@
 static const char *ipa_version_string(struct ipa *ipa)
 {
 	switch (ipa->version) {
+	case IPA_VERSION_2_0:
+		return "2.0";
+	case IPA_VERSION_2_5:
+		return "2.5";
+	case IPA_VERSION_2_6L:
+		return "2.6L";
 	case IPA_VERSION_3_0:
 		return "3.0";
 	case IPA_VERSION_3_1:
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (14 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware Sireesh Kodali
@ 2021-09-20  3:08 ` Sireesh Kodali
  2021-09-23 12:42   ` Rob Herring
  2021-10-13 22:31   ` Alex Elder
  2021-10-13 22:27 ` [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Alex Elder
  16 siblings, 2 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-09-20  3:08 UTC (permalink / raw)
  To: phone-devel, ~postmarketos/upstreaming, netdev, linux-kernel,
	linux-arm-msm, elder
  Cc: Sireesh Kodali, Andy Gross, Bjorn Andersson, David S. Miller,
	Jakub Kicinski, Rob Herring,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS

MSM8996 uses IPA v2.5 and MSM8953 uses IPA v2.6L.

Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
---
 Documentation/devicetree/bindings/net/qcom,ipa.yaml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Documentation/devicetree/bindings/net/qcom,ipa.yaml b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
index b8a0b392b24e..e857827bfa54 100644
--- a/Documentation/devicetree/bindings/net/qcom,ipa.yaml
+++ b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
@@ -44,6 +44,8 @@ description:
 properties:
   compatible:
     enum:
+      - qcom,msm8953-ipa
+      - qcom,msm8996-ipa
       - qcom,msm8998-ipa
       - qcom,sc7180-ipa
       - qcom,sc7280-ipa
-- 
2.33.0


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA
  2021-09-20  3:08 ` [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA Sireesh Kodali
@ 2021-09-23 12:42   ` Rob Herring
  2021-10-13 22:31   ` Alex Elder
  1 sibling, 0 replies; 46+ messages in thread
From: Rob Herring @ 2021-09-23 12:42 UTC (permalink / raw)
  To: Sireesh Kodali
  Cc: netdev, Jakub Kicinski, Bjorn Andersson,
	~postmarketos/upstreaming, elder, linux-arm-msm, linux-kernel,
	devicetree, Andy Gross, Rob Herring, phone-devel,
	David S. Miller

On Mon, 20 Sep 2021 08:38:11 +0530, Sireesh Kodali wrote:
> MSM8996 uses IPA v2.5 and MSM8953 uses IPA v2.6l
> 
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>  Documentation/devicetree/bindings/net/qcom,ipa.yaml | 2 ++
>  1 file changed, 2 insertions(+)
> 

Acked-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x
  2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
                   ` (15 preceding siblings ...)
  2021-09-20  3:08 ` [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA Sireesh Kodali
@ 2021-10-13 22:27 ` Alex Elder
  16 siblings, 0 replies; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:27 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder

On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> Hi,
> 
> This RFC patch series adds support for IPA v2, v2.5 and v2.6L
> (collectively referred to as IPA v2.x).

I'm sorry for the delay on this.  I want to give this a
reasonable review, but it's been hard to prioritize doing
so.  So for now I aim to give you some "easy" feedback,
knowing that this doesn't cover all issues.  This is an
RFC, after all...

So this isn't a "real review" but I'll try to be helpful.

Overall, I appreciate how well you adhered to the patterns and
conventions used elsewhere in the driver.  There are many levels
to that, but I think consistency is a huge factor in keeping
code maintainable.  I didn't see all that many places where
I felt like whining about naming you used, or oddities in
indentation, and so on.

Abstracting the GSI layer seemed to be done more easily than
I expected.  I didn't dive deep into the BAM code, and would
want to pay much closer attention to that in the future.

The BAM/GSI difference is the biggest one dividing IPA v3.0+
from its predecessors.  But as you see, the 32- versus 64-bit
address and field size differences lead to some ugliness
that's hard to avoid.

Anyway, nice work; I hope my feedback is helpful.

					-Alex

> Basic description:
> IPA v2.x is the older version of the IPA hardware found on Qualcomm
> SoCs. The biggest differences between v2.x and later versions are:
> - 32 bit hardware (the IPA microcontroler is 32 bit)
> - BAM (as opposed to GSI as a DMA transport)
> - Changes to the QMI init sequence (described in the commit message)
> 
> The fact that IPA v2.x are 32 bit only affects us directly in the table
> init code. However, its impact is felt in other parts of the code, as it
> changes the size of fields of various structs (e.g. in the commands that
> can be sent).
> 
> BAM support is already present in the mainline kernel, however it lacks
> two things:
> - Support for DMA metadata, to pass the size of the transaction from the
>    hardware to the dma client
> - Support for immediate commands, which are needed to pass commands from
>    the driver to the microcontroller
> 
> Separate patch series have been created to deal with these (linked in
> the end)
> 
> This patch series adds support for BAM as a transport by refactoring the
> current GSI code to create an abstract uniform API on top. This API
> allows the rest of the driver to handle DMA without worrying about the
> IPA version.
> 
> The final thing that hasn't been touched by this patch series is the IPA
> resource manager. On the downstream CAF kernel, the driver seems to
> share the resource code between IPA v2.x and IPA v3.x, which should mean
> all it would take to add support for resources on IPA v2.x would be to
> add the definitions in the ipa_data.
> 
> Testing:
> This patch series was tested on kernel version 5.13 on a phone with
> SDM625 (IPA v2.6L), and a phone with MSM8996 (IPA v2.5). The phone with
> IPA v2.5 was able to get an IP address using modem-manager, although
> sending/receiving packets was not tested. The phone with IPA v2.6L was
> able to get an IP, but was unable to send/receive packets. Its modem
> also relies on IPA v2.6l's compression/decompression support, and
> without this patch series, the modem simply crashes and restarts,
> waiting for the IPA block to come up.
> 
> This patch series is based on code from the downstream CAF kernel v4.9
> 
> There are some things in this patch series that would obviously not get
> accepted in their current form:
> - All IPA 2.x data is in a single file
> - Some stray printks might still be around
> - Some values have been hardcoded (e.g. the filter_map)
> Please excuse these
> 
> Lastly, this patch series depends upon the following patches for BAM:
> [0]: https://lkml.org/lkml/2021/9/19/126
> [1]: https://lkml.org/lkml/2021/9/19/135
> 
> Regards,
> Sireesh Kodali
> 
> Sireesh Kodali (10):
>    net: ipa: Add IPA v2.x register definitions
>    net: ipa: Add support for using BAM as a DMA transport
>    net: ipa: Add support for IPA v2.x commands and table init
>    net: ipa: Add support for IPA v2.x endpoints
>    net: ipa: Add support for IPA v2.x memory map
>    net: ipa: Add support for IPA v2.x in the driver's QMI interface
>    net: ipa: Add support for IPA v2 microcontroller
>    net: ipa: Add IPA v2.6L initialization sequence support
>    net: ipa: Add hw config describing IPA v2.x hardware
>    dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA
> 
> Vladimir Lypak (7):
>    net: ipa: Correct ipa_status_opcode enumeration
>    net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support
>    net: ipa: Refactor GSI code
>    net: ipa: Establish ipa_dma interface
>    net: ipa: Check interrupts for availability
>    net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait
>    net: ipa: Add support for IPA v2.x interrupts
> 
>   .../devicetree/bindings/net/qcom,ipa.yaml     |   2 +
>   drivers/net/ipa/Makefile                      |  11 +-
>   drivers/net/ipa/bam.c                         | 525 ++++++++++++++++++
>   drivers/net/ipa/gsi.c                         | 322 ++++++-----
>   drivers/net/ipa/ipa.h                         |   8 +-
>   drivers/net/ipa/ipa_cmd.c                     | 244 +++++---
>   drivers/net/ipa/ipa_cmd.h                     |  20 +-
>   drivers/net/ipa/ipa_data-v2.c                 | 369 ++++++++++++
>   drivers/net/ipa/ipa_data-v3.1.c               |   2 +-
>   drivers/net/ipa/ipa_data-v3.5.1.c             |   2 +-
>   drivers/net/ipa/ipa_data-v4.11.c              |   2 +-
>   drivers/net/ipa/ipa_data-v4.2.c               |   2 +-
>   drivers/net/ipa/ipa_data-v4.5.c               |   2 +-
>   drivers/net/ipa/ipa_data-v4.9.c               |   2 +-
>   drivers/net/ipa/ipa_data.h                    |   4 +
>   drivers/net/ipa/{gsi.h => ipa_dma.h}          | 179 +++---
>   .../ipa/{gsi_private.h => ipa_dma_private.h}  |  46 +-
>   drivers/net/ipa/ipa_endpoint.c                | 188 ++++---
>   drivers/net/ipa/ipa_endpoint.h                |   6 +-
>   drivers/net/ipa/ipa_gsi.c                     |  18 +-
>   drivers/net/ipa/ipa_gsi.h                     |  12 +-
>   drivers/net/ipa/ipa_interrupt.c               |  36 +-
>   drivers/net/ipa/ipa_main.c                    |  82 ++-
>   drivers/net/ipa/ipa_mem.c                     |  55 +-
>   drivers/net/ipa/ipa_mem.h                     |   5 +-
>   drivers/net/ipa/ipa_power.c                   |   4 +-
>   drivers/net/ipa/ipa_qmi.c                     |  37 +-
>   drivers/net/ipa/ipa_qmi.h                     |  10 +
>   drivers/net/ipa/ipa_reg.h                     | 184 +++++-
>   drivers/net/ipa/ipa_resource.c                |   3 +
>   drivers/net/ipa/ipa_smp2p.c                   |  11 +-
>   drivers/net/ipa/ipa_sysfs.c                   |   6 +
>   drivers/net/ipa/ipa_table.c                   |  86 +--
>   drivers/net/ipa/ipa_table.h                   |   6 +-
>   drivers/net/ipa/{gsi_trans.c => ipa_trans.c}  | 182 +++---
>   drivers/net/ipa/{gsi_trans.h => ipa_trans.h}  |  78 +--
>   drivers/net/ipa/ipa_uc.c                      |  96 ++--
>   drivers/net/ipa/ipa_version.h                 |  12 +
>   38 files changed, 2133 insertions(+), 726 deletions(-)
>   create mode 100644 drivers/net/ipa/bam.c
>   create mode 100644 drivers/net/ipa/ipa_data-v2.c
>   rename drivers/net/ipa/{gsi.h => ipa_dma.h} (57%)
>   rename drivers/net/ipa/{gsi_private.h => ipa_dma_private.h} (66%)
>   rename drivers/net/ipa/{gsi_trans.c => ipa_trans.c} (80%)
>   rename drivers/net/ipa/{gsi_trans.h => ipa_trans.h} (71%)
> 


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration
  2021-09-20  3:07 ` [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration Sireesh Kodali
@ 2021-10-13 22:28   ` Alex Elder
  2021-10-18 16:12     ` Sireesh Kodali
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:28 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> The values in the enumaration were defined as bitmasks (base 2 exponents of
> actual opcodes). Meanwhile, it's used not as bitmask
> ipa_endpoint_status_skip and ipa_status_formet_packet functions (compared
> directly with opcode from status packet). This commit converts these values
> to actual hardware constansts.
> 
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_endpoint.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> index 5528d97110d5..29227de6661f 100644
> --- a/drivers/net/ipa/ipa_endpoint.c
> +++ b/drivers/net/ipa/ipa_endpoint.c
> @@ -41,10 +41,10 @@
>   
>   /** enum ipa_status_opcode - status element opcode hardware values */
>   enum ipa_status_opcode {
> -	IPA_STATUS_OPCODE_PACKET		= 0x01,
> -	IPA_STATUS_OPCODE_DROPPED_PACKET	= 0x04,
> -	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 0x08,
> -	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 0x40,
> +	IPA_STATUS_OPCODE_PACKET		= 0,
> +	IPA_STATUS_OPCODE_DROPPED_PACKET	= 2,
> +	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 3,
> +	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 6,

I haven't looked at how these symbols are used (whether you
changed it at all), but I'm pretty sure this is wrong.

The downstream driver tends to define "soft" symbols that must
be mapped to their hardware equivalent values.  So for
example you might find a function ipa_pkt_status_parse()
that translates between the hardware status structure
and the abstracted "soft" status structure.  In that
function you see, for example, that hardware status
opcode 0x1 is translated to IPAHAL_PKT_STATUS_OPCODE_PACKET,
which downstream is defined to have value 0.

In many places the upstream code eliminates that layer
of indirection where possible.  So enumerated constants
are assigned specific values that match what the hardware
uses.

					-Alex

>   };
>   
>   /** enum ipa_status_exception - status element exception type */
> 


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support
  2021-09-20  3:07 ` [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support Sireesh Kodali
@ 2021-10-13 22:28   ` Alex Elder
  2021-10-18 16:16     ` Sireesh Kodali
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:28 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> IPA v2.x is 32 bit. Having an IPA_TABLE_ENTRY_SIZE macro makes it easier
> to support both 32-bit and 64-bit IPA versions.

This looks reasonable.  At this point filter/route tables aren't
really used, so this is a simple fix.  You use IPA_IS_64BIT()
here, but it isn't defined until patch 7, which I expect is a
build problem.

					-Alex

> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_qmi.c   | 10 ++++++----
>   drivers/net/ipa/ipa_table.c | 29 +++++++++++++----------------
>   drivers/net/ipa/ipa_table.h |  4 ++++
>   3 files changed, 23 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
> index 90f3aec55b36..7e2fe701cc4d 100644
> --- a/drivers/net/ipa/ipa_qmi.c
> +++ b/drivers/net/ipa/ipa_qmi.c
> @@ -308,12 +308,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
>   	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE);
>   	req.v4_route_tbl_info_valid = 1;
>   	req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset;
> -	req.v4_route_tbl_info.count = mem->size / sizeof(__le64);
> +	req.v4_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
>   
>   	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE);
>   	req.v6_route_tbl_info_valid = 1;
>   	req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset;
> -	req.v6_route_tbl_info.count = mem->size / sizeof(__le64);
> +	req.v6_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
>   
>   	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER);
>   	req.v4_filter_tbl_start_valid = 1;
> @@ -352,7 +352,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
>   		req.v4_hash_route_tbl_info_valid = 1;
>   		req.v4_hash_route_tbl_info.start =
>   				ipa->mem_offset + mem->offset;
> -		req.v4_hash_route_tbl_info.count = mem->size / sizeof(__le64);
> +		req.v4_hash_route_tbl_info.count =
> +				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
>   	}
>   
>   	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE_HASHED);
> @@ -360,7 +361,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
>   		req.v6_hash_route_tbl_info_valid = 1;
>   		req.v6_hash_route_tbl_info.start =
>   			ipa->mem_offset + mem->offset;
> -		req.v6_hash_route_tbl_info.count = mem->size / sizeof(__le64);
> +		req.v6_hash_route_tbl_info.count =
> +				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
>   	}
>   
>   	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER_HASHED);
> diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
> index 1da334f54944..96c467c80a2e 100644
> --- a/drivers/net/ipa/ipa_table.c
> +++ b/drivers/net/ipa/ipa_table.c
> @@ -118,7 +118,8 @@
>    * 32-bit all-zero rule list terminator.  The "zero rule" is simply an
>    * all-zero rule followed by the list terminator.
>    */
> -#define IPA_ZERO_RULE_SIZE		(2 * sizeof(__le32))
> +#define IPA_ZERO_RULE_SIZE(version) \
> +	 (IPA_IS_64BIT(version) ? 2 * sizeof(__le32) : sizeof(__le32))
>   
>   /* Check things that can be validated at build time. */
>   static void ipa_table_validate_build(void)
> @@ -132,12 +133,6 @@ static void ipa_table_validate_build(void)
>   	 */
>   	BUILD_BUG_ON(sizeof(dma_addr_t) > sizeof(__le64));
>   
> -	/* A "zero rule" is used to represent no filtering or no routing.
> -	 * It is a 64-bit block of zeroed memory.  Code in ipa_table_init()
> -	 * assumes that it can be written using a pointer to __le64.
> -	 */
> -	BUILD_BUG_ON(IPA_ZERO_RULE_SIZE != sizeof(__le64));
> -
>   	/* Impose a practical limit on the number of routes */
>   	BUILD_BUG_ON(IPA_ROUTE_COUNT_MAX > 32);
>   	/* The modem must be allotted at least one route table entry */
> @@ -236,7 +231,7 @@ static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
>   	/* Skip over the zero rule and possibly the filter mask */
>   	skip = filter_mask ? 1 : 2;
>   
> -	return ipa->table_addr + skip * sizeof(*ipa->table_virt);
> +	return ipa->table_addr + skip * IPA_TABLE_ENTRY_SIZE(ipa->version);
>   }
>   
>   static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
> @@ -255,8 +250,8 @@ static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
>   	if (filter)
>   		first++;	/* skip over bitmap */
>   
> -	offset = mem->offset + first * sizeof(__le64);
> -	size = count * sizeof(__le64);
> +	offset = mem->offset + first * IPA_TABLE_ENTRY_SIZE(ipa->version);
> +	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
>   	addr = ipa_table_addr(ipa, false, count);
>   
>   	ipa_cmd_dma_shared_mem_add(trans, offset, size, addr, true);
> @@ -434,11 +429,11 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
>   		count = 1 + hweight32(ipa->filter_map);
>   		hash_count = hash_mem->size ? count : 0;
>   	} else {
> -		count = mem->size / sizeof(__le64);
> -		hash_count = hash_mem->size / sizeof(__le64);
> +		count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> +		hash_count = hash_mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
>   	}
> -	size = count * sizeof(__le64);
> -	hash_size = hash_count * sizeof(__le64);
> +	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
> +	hash_size = hash_count * IPA_TABLE_ENTRY_SIZE(ipa->version);
>   
>   	addr = ipa_table_addr(ipa, filter, count);
>   	hash_addr = ipa_table_addr(ipa, filter, hash_count);
> @@ -621,7 +616,8 @@ int ipa_table_init(struct ipa *ipa)
>   	 * by dma_alloc_coherent() is guaranteed to be a power-of-2 number
>   	 * of pages, which satisfies the rule alignment requirement.
>   	 */
> -	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
> +	size = IPA_ZERO_RULE_SIZE(ipa->version) +
> +	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
>   	virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
>   	if (!virt)
>   		return -ENOMEM;
> @@ -653,7 +649,8 @@ void ipa_table_exit(struct ipa *ipa)
>   	struct device *dev = &ipa->pdev->dev;
>   	size_t size;
>   
> -	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
> +	size = IPA_ZERO_RULE_SIZE(ipa->version) +
> +	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
>   
>   	dma_free_coherent(dev, size, ipa->table_virt, ipa->table_addr);
>   	ipa->table_addr = 0;
> diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
> index b6a9a0d79d68..78a168ce6558 100644
> --- a/drivers/net/ipa/ipa_table.h
> +++ b/drivers/net/ipa/ipa_table.h
> @@ -10,6 +10,10 @@
>   
>   struct ipa;
>   
> +/* The size of a filter or route table entry */
> +#define IPA_TABLE_ENTRY_SIZE(version)	\
> +	(IPA_IS_64BIT(version) ? sizeof(__le64) : sizeof(__le32))
> +
>   /* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
>   #define IPA_FILTER_COUNT_MAX	14
>   
> 



* Re: [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface
  2021-09-20  3:07 ` [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface Sireesh Kodali
@ 2021-10-13 22:29   ` Alex Elder
  2021-10-18 16:45     ` Sireesh Kodali
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:29 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> Establish a callback-based interface to abstract the GSI and BAM DMA
> differences. The interface is based on the prototypes from ipa_dma.h
> (formerly gsi.h). Callbacks are stored in struct ipa_dma (formerly
> struct gsi) and assigned in gsi_init().

This is interesting and seems to have been fairly easy to abstract
this way.  The patch is actually pretty straightforward, much more
so than I would have expected.  I think I'll have more to say about
how to separate GSI from BAM in the future, but not today.

					-Alex

> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/gsi.c          |  30 ++++++--
>   drivers/net/ipa/ipa_dma.h      | 133 ++++++++++++++++++++++-----------
>   drivers/net/ipa/ipa_endpoint.c |  28 +++----
>   drivers/net/ipa/ipa_main.c     |  18 ++---
>   drivers/net/ipa/ipa_power.c    |   4 +-
>   drivers/net/ipa/ipa_trans.c    |   2 +-
>   6 files changed, 138 insertions(+), 77 deletions(-)
> 
> diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
> index 74ae0d07f859..39d9ca620a9f 100644
> --- a/drivers/net/ipa/gsi.c
> +++ b/drivers/net/ipa/gsi.c
> @@ -99,6 +99,10 @@
>   
>   #define GSI_ISR_MAX_ITER		50	/* Detect interrupt storms */
>   
> +static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id);
> +static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id);
> +static void gsi_exit(struct ipa_dma *gsi);
> +
>   /* An entry in an event ring */
>   struct gsi_event {
>   	__le64 xfer_ptr;
> @@ -869,7 +873,7 @@ static int __gsi_channel_start(struct ipa_channel *channel, bool resume)
>   }
>   
>   /* Start an allocated GSI channel */
> -int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
> +static int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
>   {
>   	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	int ret;
> @@ -924,7 +928,7 @@ static int __gsi_channel_stop(struct ipa_channel *channel, bool suspend)
>   }
>   
>   /* Stop a started channel */
> -int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
> +static int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
>   {
>   	struct ipa_channel *channel = &gsi->channel[channel_id];
>   	int ret;
> @@ -941,7 +945,7 @@ int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
>   }
>   
>   /* Reset and reconfigure a channel, (possibly) enabling the doorbell engine */
> -void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
> +static void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
>   {
>   	struct ipa_channel *channel = &gsi->channel[channel_id];
>   
> @@ -1931,7 +1935,7 @@ int gsi_setup(struct ipa_dma *gsi)
>   }
>   
>   /* Inverse of gsi_setup() */
> -void gsi_teardown(struct ipa_dma *gsi)
> +static void gsi_teardown(struct ipa_dma *gsi)
>   {
>   	gsi_channel_teardown(gsi);
>   	gsi_irq_teardown(gsi);
> @@ -2194,6 +2198,18 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
>   
>   	gsi->dev = dev;
>   	gsi->version = version;
> +	gsi->setup = gsi_setup;
> +	gsi->teardown = gsi_teardown;
> +	gsi->exit = gsi_exit;
> +	gsi->suspend = gsi_suspend;
> +	gsi->resume = gsi_resume;
> +	gsi->channel_tre_max = gsi_channel_tre_max;
> +	gsi->channel_trans_tre_max = gsi_channel_trans_tre_max;
> +	gsi->channel_start = gsi_channel_start;
> +	gsi->channel_stop = gsi_channel_stop;
> +	gsi->channel_reset = gsi_channel_reset;
> +	gsi->channel_suspend = gsi_channel_suspend;
> +	gsi->channel_resume = gsi_channel_resume;
>   
>   	/* GSI uses NAPI on all channels.  Create a dummy network device
>   	 * for the channel NAPI contexts to be associated with.
> @@ -2250,7 +2266,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
>   }
>   
>   /* Inverse of gsi_init() */
> -void gsi_exit(struct ipa_dma *gsi)
> +static void gsi_exit(struct ipa_dma *gsi)
>   {
>   	mutex_destroy(&gsi->mutex);
>   	gsi_channel_exit(gsi);
> @@ -2277,7 +2293,7 @@ void gsi_exit(struct ipa_dma *gsi)
>    * substantially reduce pool memory requirements.  The number we
>    * reduce it by matches the number added in ipa_trans_pool_init().
>    */
> -u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
> +static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
>   {
>   	struct ipa_channel *channel = &gsi->channel[channel_id];
>   
> @@ -2286,7 +2302,7 @@ u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
>   }
>   
>   /* Returns the maximum number of TREs in a single transaction for a channel */
> -u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
> +static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
>   {
>   	struct ipa_channel *channel = &gsi->channel[channel_id];
>   
> diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
> index d053929ca3e3..1a23e6ac5785 100644
> --- a/drivers/net/ipa/ipa_dma.h
> +++ b/drivers/net/ipa/ipa_dma.h
> @@ -163,64 +163,96 @@ struct ipa_dma {
>   	struct completion completion;	/* for global EE commands */
>   	int result;			/* Negative errno (generic commands) */
>   	struct mutex mutex;		/* protects commands, programming */
> +
> +	int (*setup)(struct ipa_dma *dma_subsys);
> +	void (*teardown)(struct ipa_dma *dma_subsys);
> +	void (*exit)(struct ipa_dma *dma_subsys);
> +	void (*suspend)(struct ipa_dma *dma_subsys);
> +	void (*resume)(struct ipa_dma *dma_subsys);
> +	u32 (*channel_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	u32 (*channel_trans_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	int (*channel_start)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	int (*channel_stop)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	void (*channel_reset)(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
> +	int (*channel_suspend)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	int (*channel_resume)(struct ipa_dma *dma_subsys, u32 channel_id);
> +	void (*trans_commit)(struct ipa_trans *trans, bool ring_db);
>   };
>   
>   /**
> - * gsi_setup() - Set up the GSI subsystem
> - * @gsi:	Address of GSI structure embedded in an IPA structure
> + * ipa_dma_setup() - Set up the DMA subsystem
> + * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
>    *
>    * Return:	0 if successful, or a negative error code
>    *
> - * Performs initialization that must wait until the GSI hardware is
> + * Performs initialization that must wait until the GSI/BAM hardware is
>    * ready (including firmware loaded).
>    */
> -int gsi_setup(struct ipa_dma *dma_subsys);
> +static inline int ipa_dma_setup(struct ipa_dma *dma_subsys)
> +{
> +	return dma_subsys->setup(dma_subsys);
> +}
>   
>   /**
> - * gsi_teardown() - Tear down GSI subsystem
> - * @gsi:	GSI address previously passed to a successful gsi_setup() call
> + * ipa_dma_teardown() - Tear down DMA subsystem
> + * @dma_subsys:	ipa_dma address previously passed to a successful ipa_dma_setup() call
>    */
> -void gsi_teardown(struct ipa_dma *dma_subsys);
> +static inline void ipa_dma_teardown(struct ipa_dma *dma_subsys)
> +{
> +	dma_subsys->teardown(dma_subsys);
> +}
>   
>   /**
> - * gsi_channel_tre_max() - Channel maximum number of in-flight TREs
> - * @gsi:	GSI pointer
> + * ipa_channel_tre_max() - Channel maximum number of in-flight TREs
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel whose limit is to be returned
>    *
>    * Return:	 The maximum number of TREs outstanding on the channel
>    */
> -u32 gsi_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline u32 ipa_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_tre_max(dma_subsys, channel_id);
> +}
>   
>   /**
> - * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction
> - * @gsi:	GSI pointer
> + * ipa_channel_trans_tre_max() - Maximum TREs in a single transaction
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel whose limit is to be returned
>    *
>    * Return:	 The maximum TRE count per transaction on the channel
>    */
> -u32 gsi_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline u32 ipa_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_trans_tre_max(dma_subsys, channel_id);
> +}
>   
>   /**
> - * gsi_channel_start() - Start an allocated GSI channel
> - * @gsi:	GSI pointer
> + * ipa_channel_start() - Start an allocated DMA channel
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel to start
>    *
>    * Return:	0 if successful, or a negative error code
>    */
> -int gsi_channel_start(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline int ipa_channel_start(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_start(dma_subsys, channel_id);
> +}
>   
>   /**
> - * gsi_channel_stop() - Stop a started GSI channel
> - * @gsi:	GSI pointer returned by gsi_setup()
> + * ipa_channel_stop() - Stop a started DMA channel
> + * @dma_subsys:	pointer to ipa_dma structure returned by ipa_dma_setup()
>    * @channel_id:	Channel to stop
>    *
>    * Return:	0 if successful, or a negative error code
>    */
> -int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline int ipa_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_stop(dma_subsys, channel_id);
> +}
>   
>   /**
> - * gsi_channel_reset() - Reset an allocated GSI channel
> - * @gsi:	GSI pointer
> + * ipa_channel_reset() - Reset an allocated DMA channel
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel to be reset
>    * @doorbell:	Whether to (possibly) enable the doorbell engine
>    *
> @@ -230,41 +262,49 @@ int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
>    * GSI hardware relinquishes ownership of all pending receive buffer
>    * transactions and they will complete with their cancelled flag set.
>    */
> -void gsi_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
> +static inline void ipa_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell)
> +{
> +	 dma_subsys->channel_reset(dma_subsys, channel_id, doorbell);
> +}
>   
> -/**
> - * gsi_suspend() - Prepare the GSI subsystem for suspend
> - * @gsi:	GSI pointer
> - */
> -void gsi_suspend(struct ipa_dma *dma_subsys);
>   
>   /**
> - * gsi_resume() - Resume the GSI subsystem following suspend
> - * @gsi:	GSI pointer
> - */
> -void gsi_resume(struct ipa_dma *dma_subsys);
> -
> -/**
> - * gsi_channel_suspend() - Suspend a GSI channel
> - * @gsi:	GSI pointer
> + * ipa_channel_suspend() - Suspend a DMA channel
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel to suspend
>    *
>    * For IPA v4.0+, suspend is implemented by stopping the channel.
>    */
> -int gsi_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline int ipa_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_suspend(dma_subsys, channel_id);
> +}
>   
>   /**
> - * gsi_channel_resume() - Resume a suspended GSI channel
> - * @gsi:	GSI pointer
> + * ipa_channel_resume() - Resume a suspended DMA channel
> + * @dma_subsys:	pointer to ipa_dma structure
>    * @channel_id:	Channel to resume
>    *
>    * For IPA v4.0+, the stopped channel is started again.
>    */
> -int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
> +static inline int ipa_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id)
> +{
> +	return dma_subsys->channel_resume(dma_subsys, channel_id);
> +}
> +
> +static inline void ipa_dma_suspend(struct ipa_dma *dma_subsys)
> +{
> +	return dma_subsys->suspend(dma_subsys);
> +}
> +
> +static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
> +{
> +	return dma_subsys->resume(dma_subsys);
> +}
>   
>   /**
> - * gsi_init() - Initialize the GSI subsystem
> - * @gsi:	Address of GSI structure embedded in an IPA structure
> + * ipa_dma_init() - Initialize the GSI subsystem
> + * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
>    * @pdev:	IPA platform device
>    * @version:	IPA hardware version (implies GSI version)
>    * @count:	Number of entries in the configuration data array
> @@ -275,14 +315,19 @@ int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
>    * Early stage initialization of the GSI subsystem, performing tasks
>    * that can be done before the GSI hardware is ready to use.
>    */
> +
>   int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
>   	     enum ipa_version version, u32 count,
>   	     const struct ipa_gsi_endpoint_data *data);
>   
>   /**
> - * gsi_exit() - Exit the GSI subsystem
> - * @gsi:	GSI address previously passed to a successful gsi_init() call
> + * ipa_dma_exit() - Exit the DMA subsystem
> + * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
>    */
> -void gsi_exit(struct ipa_dma *dma_subsys);
> +static inline void ipa_dma_exit(struct ipa_dma *dma_subsys)
> +{
> +	if (dma_subsys)
> +		dma_subsys->exit(dma_subsys);
> +}
>   
>   #endif /* _GSI_H_ */
> diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> index 90d6880e8a25..dbef549c4537 100644
> --- a/drivers/net/ipa/ipa_endpoint.c
> +++ b/drivers/net/ipa/ipa_endpoint.c
> @@ -1091,7 +1091,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
>   	 * try replenishing again if our backlog is *all* available TREs.
>   	 */
>   	gsi = &endpoint->ipa->dma_subsys;
> -	if (backlog == gsi_channel_tre_max(gsi, endpoint->channel_id))
> +	if (backlog == ipa_channel_tre_max(gsi, endpoint->channel_id))
>   		schedule_delayed_work(&endpoint->replenish_work,
>   				      msecs_to_jiffies(1));
>   }
> @@ -1107,7 +1107,7 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
>   		atomic_add(saved, &endpoint->replenish_backlog);
>   
>   	/* Start replenishing if hardware currently has no buffers */
> -	max_backlog = gsi_channel_tre_max(gsi, endpoint->channel_id);
> +	max_backlog = ipa_channel_tre_max(gsi, endpoint->channel_id);
>   	if (atomic_read(&endpoint->replenish_backlog) == max_backlog)
>   		ipa_endpoint_replenish(endpoint, false);
>   }
> @@ -1432,13 +1432,13 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
>   	 * active.  We'll re-enable the doorbell (if appropriate) when
>   	 * we reset again below.
>   	 */
> -	gsi_channel_reset(gsi, endpoint->channel_id, false);
> +	ipa_channel_reset(gsi, endpoint->channel_id, false);
>   
>   	/* Make sure the channel isn't suspended */
>   	suspended = ipa_endpoint_program_suspend(endpoint, false);
>   
>   	/* Start channel and do a 1 byte read */
> -	ret = gsi_channel_start(gsi, endpoint->channel_id);
> +	ret = ipa_channel_start(gsi, endpoint->channel_id);
>   	if (ret)
>   		goto out_suspend_again;
>   
> @@ -1461,7 +1461,7 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
>   
>   	gsi_trans_read_byte_done(gsi, endpoint->channel_id);
>   
> -	ret = gsi_channel_stop(gsi, endpoint->channel_id);
> +	ret = ipa_channel_stop(gsi, endpoint->channel_id);
>   	if (ret)
>   		goto out_suspend_again;
>   
> @@ -1470,14 +1470,14 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
>   	 * complete the channel reset sequence.  Finish by suspending the
>   	 * channel again (if necessary).
>   	 */
> -	gsi_channel_reset(gsi, endpoint->channel_id, true);
> +	ipa_channel_reset(gsi, endpoint->channel_id, true);
>   
>   	usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
>   
>   	goto out_suspend_again;
>   
>   err_endpoint_stop:
> -	(void)gsi_channel_stop(gsi, endpoint->channel_id);
> +	(void)ipa_channel_stop(gsi, endpoint->channel_id);
>   out_suspend_again:
>   	if (suspended)
>   		(void)ipa_endpoint_program_suspend(endpoint, true);
> @@ -1504,7 +1504,7 @@ static void ipa_endpoint_reset(struct ipa_endpoint *endpoint)
>   	if (special && ipa_endpoint_aggr_active(endpoint))
>   		ret = ipa_endpoint_reset_rx_aggr(endpoint);
>   	else
> -		gsi_channel_reset(&ipa->dma_subsys, channel_id, true);
> +		ipa_channel_reset(&ipa->dma_subsys, channel_id, true);
>   
>   	if (ret)
>   		dev_err(&ipa->pdev->dev,
> @@ -1537,7 +1537,7 @@ int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
>   	struct ipa_dma *gsi = &ipa->dma_subsys;
>   	int ret;
>   
> -	ret = gsi_channel_start(gsi, endpoint->channel_id);
> +	ret = ipa_channel_start(gsi, endpoint->channel_id);
>   	if (ret) {
>   		dev_err(&ipa->pdev->dev,
>   			"error %d starting %cX channel %u for endpoint %u\n",
> @@ -1576,7 +1576,7 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
>   	}
>   
>   	/* Note that if stop fails, the channel's state is not well-defined */
> -	ret = gsi_channel_stop(gsi, endpoint->channel_id);
> +	ret = ipa_channel_stop(gsi, endpoint->channel_id);
>   	if (ret)
>   		dev_err(&ipa->pdev->dev,
>   			"error %d attempting to stop endpoint %u\n", ret,
> @@ -1598,7 +1598,7 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
>   		(void)ipa_endpoint_program_suspend(endpoint, true);
>   	}
>   
> -	ret = gsi_channel_suspend(gsi, endpoint->channel_id);
> +	ret = ipa_channel_suspend(gsi, endpoint->channel_id);
>   	if (ret)
>   		dev_err(dev, "error %d suspending channel %u\n", ret,
>   			endpoint->channel_id);
> @@ -1617,7 +1617,7 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
>   	if (!endpoint->toward_ipa)
>   		(void)ipa_endpoint_program_suspend(endpoint, false);
>   
> -	ret = gsi_channel_resume(gsi, endpoint->channel_id);
> +	ret = ipa_channel_resume(gsi, endpoint->channel_id);
>   	if (ret)
>   		dev_err(dev, "error %d resuming channel %u\n", ret,
>   			endpoint->channel_id);
> @@ -1660,14 +1660,14 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
>   	if (endpoint->ee_id != GSI_EE_AP)
>   		return;
>   
> -	endpoint->trans_tre_max = gsi_channel_trans_tre_max(gsi, channel_id);
> +	endpoint->trans_tre_max = ipa_channel_trans_tre_max(gsi, channel_id);
>   	if (!endpoint->toward_ipa) {
>   		/* RX transactions require a single TRE, so the maximum
>   		 * backlog is the same as the maximum outstanding TREs.
>   		 */
>   		endpoint->replenish_enabled = false;
>   		atomic_set(&endpoint->replenish_saved,
> -			   gsi_channel_tre_max(gsi, endpoint->channel_id));
> +			   ipa_channel_tre_max(gsi, endpoint->channel_id));
>   		atomic_set(&endpoint->replenish_backlog, 0);
>   		INIT_DELAYED_WORK(&endpoint->replenish_work,
>   				  ipa_endpoint_replenish_work);
> diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> index 026f5555fa7d..6ab691ff1faf 100644
> --- a/drivers/net/ipa/ipa_main.c
> +++ b/drivers/net/ipa/ipa_main.c
> @@ -98,13 +98,13 @@ int ipa_setup(struct ipa *ipa)
>   	struct device *dev = &ipa->pdev->dev;
>   	int ret;
>   
> -	ret = gsi_setup(&ipa->dma_subsys);
> +	ret = ipa_dma_setup(&ipa->dma_subsys);
>   	if (ret)
>   		return ret;
>   
>   	ret = ipa_power_setup(ipa);
>   	if (ret)
> -		goto err_gsi_teardown;
> +		goto err_dma_teardown;
>   
>   	ipa_endpoint_setup(ipa);
>   
> @@ -153,8 +153,8 @@ int ipa_setup(struct ipa *ipa)
>   err_endpoint_teardown:
>   	ipa_endpoint_teardown(ipa);
>   	ipa_power_teardown(ipa);
> -err_gsi_teardown:
> -	gsi_teardown(&ipa->dma_subsys);
> +err_dma_teardown:
> +	ipa_dma_teardown(&ipa->dma_subsys);
>   
>   	return ret;
>   }
> @@ -179,7 +179,7 @@ static void ipa_teardown(struct ipa *ipa)
>   	ipa_endpoint_disable_one(command_endpoint);
>   	ipa_endpoint_teardown(ipa);
>   	ipa_power_teardown(ipa);
> -	gsi_teardown(&ipa->dma_subsys);
> +	ipa_dma_teardown(&ipa->dma_subsys);
>   }
>   
>   /* Configure bus access behavior for IPA components */
> @@ -726,7 +726,7 @@ static int ipa_probe(struct platform_device *pdev)
>   					    data->endpoint_data);
>   	if (!ipa->filter_map) {
>   		ret = -EINVAL;
> -		goto err_gsi_exit;
> +		goto err_dma_exit;
>   	}
>   
>   	ret = ipa_table_init(ipa);
> @@ -780,8 +780,8 @@ static int ipa_probe(struct platform_device *pdev)
>   	ipa_table_exit(ipa);
>   err_endpoint_exit:
>   	ipa_endpoint_exit(ipa);
> -err_gsi_exit:
> -	gsi_exit(&ipa->dma_subsys);
> +err_dma_exit:
> +	ipa_dma_exit(&ipa->dma_subsys);
>   err_mem_exit:
>   	ipa_mem_exit(ipa);
>   err_reg_exit:
> @@ -824,7 +824,7 @@ static int ipa_remove(struct platform_device *pdev)
>   	ipa_modem_exit(ipa);
>   	ipa_table_exit(ipa);
>   	ipa_endpoint_exit(ipa);
> -	gsi_exit(&ipa->dma_subsys);
> +	ipa_dma_exit(&ipa->dma_subsys);
>   	ipa_mem_exit(ipa);
>   	ipa_reg_exit(ipa);
>   	kfree(ipa);
> diff --git a/drivers/net/ipa/ipa_power.c b/drivers/net/ipa/ipa_power.c
> index b1c6c0fcb654..096cfb8ae9a5 100644
> --- a/drivers/net/ipa/ipa_power.c
> +++ b/drivers/net/ipa/ipa_power.c
> @@ -243,7 +243,7 @@ static int ipa_runtime_suspend(struct device *dev)
>   	if (ipa->setup_complete) {
>   		__clear_bit(IPA_POWER_FLAG_RESUMED, ipa->power->flags);
>   		ipa_endpoint_suspend(ipa);
> -		gsi_suspend(&ipa->gsi);
> +		ipa_dma_suspend(&ipa->dma_subsys);
>   	}
>   
>   	return ipa_power_disable(ipa);
> @@ -260,7 +260,7 @@ static int ipa_runtime_resume(struct device *dev)
>   
>   	/* Endpoints aren't usable until setup is complete */
>   	if (ipa->setup_complete) {
> -		gsi_resume(&ipa->gsi);
> +		ipa_dma_resume(&ipa->dma_subsys);
>   		ipa_endpoint_resume(ipa);
>   	}
>   
> diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
> index b87936b18770..22755f3ce3da 100644
> --- a/drivers/net/ipa/ipa_trans.c
> +++ b/drivers/net/ipa/ipa_trans.c
> @@ -747,7 +747,7 @@ int ipa_channel_trans_init(struct ipa_dma *gsi, u32 channel_id)
>   	 * for transactions (including transaction structures) based on
>   	 * this maximum number.
>   	 */
> -	tre_max = gsi_channel_tre_max(channel->dma_subsys, channel_id);
> +	tre_max = ipa_channel_tre_max(channel->dma_subsys, channel_id);
>   
>   	/* Transactions are allocated one at a time. */
>   	ret = ipa_trans_pool_init(&trans_info->pool, sizeof(struct ipa_trans),
> 



* Re: [RFC PATCH 05/17] net: ipa: Check interrupts for availability
  2021-09-20  3:07 ` [RFC PATCH 05/17] net: ipa: Check interrupts for availability Sireesh Kodali
@ 2021-10-13 22:29   ` Alex Elder
  0 siblings, 0 replies; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:29 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> Make ipa_interrupt_add()/ipa_interrupt_remove() no-ops if the requested
> interrupt is not supported by the IPA hardware.
> 
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>

I'm not sure why this is important.  Callers shouldn't add an
interrupt type that isn't supported by the hardware.  The check
here would be for sanity.

And there's no point in checking in the interrupt remove
function, the only interrupts removed will have already
been added.

Anyway, maybe I'll see you're adding support for these
IPA interrupt types later on?

					-Alex

> ---
>   drivers/net/ipa/ipa_interrupt.c | 25 +++++++++++++++++++++++++
>   1 file changed, 25 insertions(+)
> 
> diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
> index b35170a93b0f..94708a23a597 100644
> --- a/drivers/net/ipa/ipa_interrupt.c
> +++ b/drivers/net/ipa/ipa_interrupt.c
> @@ -48,6 +48,25 @@ static bool ipa_interrupt_uc(struct ipa_interrupt *interrupt, u32 irq_id)
>   	return irq_id == IPA_IRQ_UC_0 || irq_id == IPA_IRQ_UC_1;
>   }
>   
> +static bool ipa_interrupt_check_fixup(enum ipa_irq_id *irq_id, enum ipa_version version)
> +{
> +	switch (*irq_id) {
> +	case IPA_IRQ_EOT_COAL:
> +		return version < IPA_VERSION_3_5;
> +	case IPA_IRQ_DCMP:
> +		return version < IPA_VERSION_4_5;
> +	case IPA_IRQ_TLV_LEN_MIN_DSM:
> +		return version >= IPA_VERSION_4_5;
> +	default:
> +		break;
> +	}
> +
> +	if (*irq_id >= IPA_IRQ_DRBIP_PKT_EXCEED_MAX_SIZE_EN)
> +		return version >= IPA_VERSION_4_9;
> +
> +	return true;
> +}
> +
>   /* Process a particular interrupt type that has been received */
>   static void ipa_interrupt_process(struct ipa_interrupt *interrupt, u32 irq_id)
>   {
> @@ -191,6 +210,9 @@ void ipa_interrupt_add(struct ipa_interrupt *interrupt,
>   	struct ipa *ipa = interrupt->ipa;
>   	u32 offset;
>   
> +	if (!ipa_interrupt_check_fixup(&ipa_irq, ipa->version))
> +		return;
> +
>   	WARN_ON(ipa_irq >= IPA_IRQ_COUNT);
>   
>   	interrupt->handler[ipa_irq] = handler;
> @@ -208,6 +230,9 @@ ipa_interrupt_remove(struct ipa_interrupt *interrupt, enum ipa_irq_id ipa_irq)
>   	struct ipa *ipa = interrupt->ipa;
>   	u32 offset;
>   
> +	if (!ipa_interrupt_check_fixup(&ipa_irq, ipa->version))
> +		return;
> +
>   	WARN_ON(ipa_irq >= IPA_IRQ_COUNT);
>   
>   	/* Update the IPA interrupt mask to disable it */
> 


^ permalink raw reply	[flat|nested] 46+ messages in thread
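The version gating that ipa_interrupt_check_fixup() performs in the quoted patch can be illustrated with a small user-space sketch. This is hedged: the enum values below are simplified stand-ins, not the real ipa_version/ipa_irq_id definitions from the driver.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-ins for the kernel enums (illustration only);
 * declaration order gives the numeric ordering the comparisons use. */
enum ipa_version { IPA_V3_0, IPA_V3_5, IPA_V4_5, IPA_V4_9 };
enum ipa_irq_id { IRQ_EOT_COAL, IRQ_DCMP, IRQ_TLV_LEN_MIN_DSM, IRQ_OTHER };

/* Mirrors the shape of ipa_interrupt_check_fixup(): returns true when
 * the interrupt exists on the given hardware version. */
static bool irq_supported(enum ipa_irq_id id, enum ipa_version ver)
{
	switch (id) {
	case IRQ_EOT_COAL:
		return ver < IPA_V3_5;		/* dropped in v3.5 */
	case IRQ_DCMP:
		return ver < IPA_V4_5;		/* dropped in v4.5 */
	case IRQ_TLV_LEN_MIN_DSM:
		return ver >= IPA_V4_5;		/* introduced in v4.5 */
	default:
		return true;			/* present everywhere */
	}
}
```

Making add/remove bail out when this returns false is what turns them into no-ops for unsupported interrupt types.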

* Re: [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait
  2021-09-20  3:08 ` [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait Sireesh Kodali
@ 2021-10-13 22:29   ` Alex Elder
  2021-10-18 17:02     ` Sireesh Kodali
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:29 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> Sometimes the pipeline clear fails, and when it does, having a hang in
> the kernel is ugly. The timeout gives us a nice error message. Note that
> this shouldn't actually hang, ever. It only hangs if there is a mistake
> in the config, and the timeout is only useful when debugging.
> 
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>

This is actually an item on my to-do list.  All of the waits
for GSI completions should have timeouts.  The only reason it
hasn't been implemented already is that I would like to be sure
all paths that could have a timeout actually have a reasonable
recovery.

I'd say an error message after a timeout is better than a hung
task panic, but if this does time out, I'm not sure the state
of the hardware is well-defined.

					-Alex

> ---
>   drivers/net/ipa/ipa_cmd.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> index 3db9e94e484f..0bdbc331fa78 100644
> --- a/drivers/net/ipa/ipa_cmd.c
> +++ b/drivers/net/ipa/ipa_cmd.c
> @@ -658,7 +658,10 @@ u32 ipa_cmd_pipeline_clear_count(void)
>   
>   void ipa_cmd_pipeline_clear_wait(struct ipa *ipa)
>   {
> -	wait_for_completion(&ipa->completion);
> +	unsigned long timeout_jiffies = msecs_to_jiffies(1000);
> +
> +	if (!wait_for_completion_timeout(&ipa->completion, timeout_jiffies))
> +		dev_err(&ipa->pdev->dev, "%s time out\n", __func__);
>   }
>   
>   void ipa_cmd_pipeline_clear(struct ipa *ipa)
> 


^ permalink raw reply	[flat|nested] 46+ messages in thread
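The patch above converts a one-second timeout into jiffies before passing it to wait_for_completion_timeout() (which returns 0 on timeout, hence the dev_err() branch). A hedged user-space sketch of the millisecond-to-jiffies conversion — HZ is fixed at 250 here purely for illustration; in the kernel it is a build-time configuration value:

```c
#include <assert.h>

#define HZ 250	/* assumed tick rate for this sketch only */

/* Rough analogue of msecs_to_jiffies(): rounds up so that a
 * nonzero millisecond timeout never truncates to zero jiffies. */
static unsigned long msecs_to_jiffies_sketch(unsigned int ms)
{
	return ((unsigned long)ms * HZ + 999) / 1000;
}
```

So msecs_to_jiffies(1000) at HZ=250 yields 250 ticks, and even a 1 ms request still waits at least one tick.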

* Re: [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions
  2021-09-20  3:08 ` [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions Sireesh Kodali
@ 2021-10-13 22:29   ` Alex Elder
  2021-10-18 17:25     ` Sireesh Kodali
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:29 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> IPA v2.x is an older version of the IPA hardware, and is 32 bit.
> 
> Most of the registers were just shifted in newer IPA versions, but
> the register fields have remained the same across IPA versions. This
> means that only the register addresses needed to be added to the driver.
> 
> To handle the different IPA register addresses, static inline functions
> have been defined that return the correct register address.

Thank you for following the existing convention in implementing these.
Even if it isn't perfect, it's good to remain consistent.

You use:
	if (version <= IPA_VERSION_2_6L)
but then also define and use
	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
And the only new IPA versions are 2_0, 2_5, and 2_6L.

I would stick with the former and not define IPA_VERSION_RANGE().
Nothing less than IPA v2.0 (or 3.0 currently) is supported, so
"there is no version less than that."

Oh, and I noticed some local variables defined without the
"reverse Christmas tree order" which, like it or not, is the
convention used consistently throughout this driver.

I might quibble with a few other minor things in these definitions
but overall this looks fine.

					-Alex

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_cmd.c      |   3 +-
>   drivers/net/ipa/ipa_endpoint.c |  33 +++---
>   drivers/net/ipa/ipa_main.c     |   8 +-
>   drivers/net/ipa/ipa_mem.c      |   5 +-
>   drivers/net/ipa/ipa_reg.h      | 184 +++++++++++++++++++++++++++------
>   drivers/net/ipa/ipa_version.h  |  12 +++
>   6 files changed, 195 insertions(+), 50 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> index 0bdbc331fa78..7a104540dc26 100644
> --- a/drivers/net/ipa/ipa_cmd.c
> +++ b/drivers/net/ipa/ipa_cmd.c
> @@ -326,7 +326,8 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa)
>   	 * worst case (highest endpoint number) offset of that endpoint
>   	 * fits in the register write command field(s) that must hold it.
>   	 */
> -	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT - 1);
> +	offset = ipa_reg_endp_status_n_offset(ipa->version,
> +			IPA_ENDPOINT_COUNT - 1);
>   	name = "maximal endpoint status";
>   	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
>   		return false;
> diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> index dbef549c4537..7d3ab61cd890 100644
> --- a/drivers/net/ipa/ipa_endpoint.c
> +++ b/drivers/net/ipa/ipa_endpoint.c
> @@ -242,8 +242,8 @@ static struct ipa_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint,
>   static bool
>   ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_CTRL_N_OFFSET(endpoint->endpoint_id);
>   	struct ipa *ipa = endpoint->ipa;
> +	u32 offset = ipa_reg_endp_init_ctrl_n_offset(ipa->version, endpoint->endpoint_id);
>   	bool state;
>   	u32 mask;
>   	u32 val;
> @@ -410,7 +410,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   		if (!(endpoint->ee_id == GSI_EE_MODEM && endpoint->toward_ipa))
>   			continue;
>   
> -		offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
> +		offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
>   
>   		/* Value written is 0, and all bits are updated.  That
>   		 * means status is disabled on the endpoint, and as a
> @@ -431,7 +431,8 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   
>   static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_CFG_N_OFFSET(endpoint->endpoint_id);
> +	struct ipa *ipa = endpoint->ipa;
> +	u32 offset = ipa_reg_endp_init_cfg_n_offset(ipa->version, endpoint->endpoint_id);
>   	enum ipa_cs_offload_en enabled;
>   	u32 val = 0;
>   
> @@ -523,8 +524,8 @@ ipa_qmap_header_size(enum ipa_version version, struct ipa_endpoint *endpoint)
>    */
>   static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_HDR_N_OFFSET(endpoint->endpoint_id);
>   	struct ipa *ipa = endpoint->ipa;
> +	u32 offset = ipa_reg_endp_init_hdr_n_offset(ipa->version, endpoint->endpoint_id);
>   	u32 val = 0;
>   
>   	if (endpoint->data->qmap) {
> @@ -565,9 +566,9 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
>   
>   static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(endpoint->endpoint_id);
> -	u32 pad_align = endpoint->data->rx.pad_align;
>   	struct ipa *ipa = endpoint->ipa;
> +	u32 offset = ipa_reg_endp_init_hdr_ext_n_offset(ipa->version, endpoint->endpoint_id);
> +	u32 pad_align = endpoint->data->rx.pad_align;
>   	u32 val = 0;
>   
>   	val |= HDR_ENDIANNESS_FMASK;		/* big endian */
> @@ -609,6 +610,7 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
>   
>   static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
>   {
> +	enum ipa_version version = endpoint->ipa->version;
>   	u32 endpoint_id = endpoint->endpoint_id;
>   	u32 val = 0;
>   	u32 offset;
> @@ -616,7 +618,7 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
>   	if (endpoint->toward_ipa)
>   		return;		/* Register not valid for TX endpoints */
>   
> -	offset = IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(endpoint_id);
> +	offset = ipa_reg_endp_init_hdr_metadata_mask_n_offset(version, endpoint_id);
>   
>   	/* Note that HDR_ENDIANNESS indicates big endian header fields */
>   	if (endpoint->data->qmap)
> @@ -627,7 +629,8 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
>   
>   static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_MODE_N_OFFSET(endpoint->endpoint_id);
> +	enum ipa_version version = endpoint->ipa->version;
> +	u32 offset = ipa_reg_endp_init_mode_n_offset(version, endpoint->endpoint_id);
>   	u32 val;
>   
>   	if (!endpoint->toward_ipa)
> @@ -716,8 +719,8 @@ static u32 aggr_sw_eof_active_encoded(enum ipa_version version, bool enabled)
>   
>   static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_AGGR_N_OFFSET(endpoint->endpoint_id);
>   	enum ipa_version version = endpoint->ipa->version;
> +	u32 offset = ipa_reg_endp_init_aggr_n_offset(version, endpoint->endpoint_id);
>   	u32 val = 0;
>   
>   	if (endpoint->data->aggregation) {
> @@ -853,7 +856,7 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
>   	u32 offset;
>   	u32 val;
>   
> -	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(endpoint_id);
> +	offset = ipa_reg_endp_init_hol_block_timer_n_offset(ipa->version, endpoint_id);
>   	val = hol_block_timer_val(ipa, microseconds);
>   	iowrite32(val, ipa->reg_virt + offset);
>   }
> @@ -861,12 +864,13 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
>   static void
>   ipa_endpoint_init_hol_block_enable(struct ipa_endpoint *endpoint, bool enable)
>   {
> +	enum ipa_version version = endpoint->ipa->version;
>   	u32 endpoint_id = endpoint->endpoint_id;
>   	u32 offset;
>   	u32 val;
>   
>   	val = enable ? HOL_BLOCK_EN_FMASK : 0;
> -	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(endpoint_id);
> +	offset = ipa_reg_endp_init_hol_block_en_n_offset(version, endpoint_id);
>   	iowrite32(val, endpoint->ipa->reg_virt + offset);
>   }
>   
> @@ -887,7 +891,8 @@ void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa)
>   
>   static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint)
>   {
> -	u32 offset = IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(endpoint->endpoint_id);
> +	enum ipa_version version = endpoint->ipa->version;
> +	u32 offset = ipa_reg_endp_init_deaggr_n_offset(version, endpoint->endpoint_id);
>   	u32 val = 0;
>   
>   	if (!endpoint->toward_ipa)
> @@ -979,7 +984,7 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
>   	u32 val = 0;
>   	u32 offset;
>   
> -	offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
> +	offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
>   
>   	if (endpoint->data->status_enable) {
>   		val |= STATUS_EN_FMASK;
> @@ -1384,7 +1389,7 @@ void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id)
>   	val |= u32_encode_bits(endpoint_id, ROUTE_FRAG_DEF_PIPE_FMASK);
>   	val |= ROUTE_DEF_RETAIN_HDR_FMASK;
>   
> -	iowrite32(val, ipa->reg_virt + IPA_REG_ROUTE_OFFSET);
> +	iowrite32(val, ipa->reg_virt + ipa_reg_route_offset(ipa->version));
>   }
>   
>   void ipa_endpoint_default_route_clear(struct ipa *ipa)
> diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> index 6ab691ff1faf..ba06e3ad554c 100644
> --- a/drivers/net/ipa/ipa_main.c
> +++ b/drivers/net/ipa/ipa_main.c
> @@ -191,7 +191,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
>   	if (ipa->version < IPA_VERSION_4_0)
>   		return;
>   
> -	val = ioread32(ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
> +	val = ioread32(ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
>   
>   	if (ipa->version == IPA_VERSION_4_0) {
>   		val &= ~IPA_QMB_SELECT_CONS_EN_FMASK;
> @@ -206,7 +206,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
>   	val |= GSI_MULTI_INORDER_RD_DIS_FMASK;
>   	val |= GSI_MULTI_INORDER_WR_DIS_FMASK;
>   
> -	iowrite32(val, ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
> +	iowrite32(val, ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
>   }
>   
>   /* Configure DDR and (possibly) PCIe max read/write QSB values */
> @@ -355,7 +355,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
>   	/* IPA v4.5+ has no backward compatibility register */
>   	if (version < IPA_VERSION_4_5) {
>   		val = data->backward_compat;
> -		iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET);
> +		iowrite32(val, ipa->reg_virt + ipa_reg_bcr_offset(ipa->version));
>   	}
>   
>   	/* Implement some hardware workarounds */
> @@ -384,7 +384,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
>   		/* Configure aggregation timer granularity */
>   		granularity = ipa_aggr_granularity_val(IPA_AGGR_GRANULARITY);
>   		val = u32_encode_bits(granularity, AGGR_GRANULARITY_FMASK);
> -		iowrite32(val, ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET);
> +		iowrite32(val, ipa->reg_virt + ipa_reg_counter_cfg_offset(ipa->version));
>   	} else {
>   		ipa_qtime_config(ipa);
>   	}
> diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
> index 16e5fdd5bd73..8acc88070a6f 100644
> --- a/drivers/net/ipa/ipa_mem.c
> +++ b/drivers/net/ipa/ipa_mem.c
> @@ -113,7 +113,8 @@ int ipa_mem_setup(struct ipa *ipa)
>   	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
>   	offset = ipa->mem_offset + mem->offset;
>   	val = proc_cntxt_base_addr_encoded(ipa->version, offset);
> -	iowrite32(val, ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET);
> +	iowrite32(val, ipa->reg_virt +
> +		  ipa_reg_local_pkt_proc_cntxt_base_offset(ipa->version));
>   
>   	return 0;
>   }
> @@ -316,7 +317,7 @@ int ipa_mem_config(struct ipa *ipa)
>   	u32 i;
>   
>   	/* Check the advertised location and size of the shared memory area */
> -	val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
> +	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
>   
>   	/* The fields in the register are in 8 byte units */
>   	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
> index a5b355384d4a..fcae0296cfa4 100644
> --- a/drivers/net/ipa/ipa_reg.h
> +++ b/drivers/net/ipa/ipa_reg.h
> @@ -65,7 +65,17 @@ struct ipa;
>    * of valid bits for the register.
>    */
>   
> -#define IPA_REG_COMP_CFG_OFFSET				0x0000003c
> +#define IPA_REG_COMP_SW_RESET_OFFSET		0x0000003c
> +
> +#define IPA_REG_V2_ENABLED_PIPES_OFFSET		0x000005dc
> +
> +static inline u32 ipa_reg_comp_cfg_offset(enum ipa_version version)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x38;
> +
> +	return 0x3c;
> +}
>   /* The next field is not supported for IPA v4.0+, not present for IPA v4.5+ */
>   #define ENABLE_FMASK				GENMASK(0, 0)
>   /* The next field is present for IPA v4.7+ */
> @@ -124,6 +134,7 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
>   	return u32_encode_bits(val, GENMASK(17, 17));
>   }
>   
> +/* This register is only present on IPA v3.0 and above */
>   #define IPA_REG_CLKON_CFG_OFFSET			0x00000044
>   #define RX_FMASK				GENMASK(0, 0)
>   #define PROC_FMASK				GENMASK(1, 1)
> @@ -164,7 +175,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
>   /* The next field is present for IPA v4.7+ */
>   #define DRBIP_FMASK				GENMASK(31, 31)
>   
> -#define IPA_REG_ROUTE_OFFSET				0x00000048
> +static inline u32 ipa_reg_route_offset(enum ipa_version version)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x44;
> +
> +	return 0x48;
> +}
> +
>   #define ROUTE_DIS_FMASK				GENMASK(0, 0)
>   #define ROUTE_DEF_PIPE_FMASK			GENMASK(5, 1)
>   #define ROUTE_DEF_HDR_TABLE_FMASK		GENMASK(6, 6)
> @@ -172,7 +190,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
>   #define ROUTE_FRAG_DEF_PIPE_FMASK		GENMASK(21, 17)
>   #define ROUTE_DEF_RETAIN_HDR_FMASK		GENMASK(24, 24)
>   
> -#define IPA_REG_SHARED_MEM_SIZE_OFFSET			0x00000054
> +static inline u32 ipa_reg_shared_mem_size_offset(enum ipa_version version)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x50;
> +
> +	return 0x54;
> +}
> +
>   #define SHARED_MEM_SIZE_FMASK			GENMASK(15, 0)
>   #define SHARED_MEM_BADDR_FMASK			GENMASK(31, 16)
>   
> @@ -219,7 +244,13 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
>   }
>   
>   /* The next register is not present for IPA v4.5+ */
> -#define IPA_REG_BCR_OFFSET				0x000001d0
> +static inline u32 ipa_reg_bcr_offset(enum ipa_version version)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
> +		return 0x5b0;
> +
> +	return 0x1d0;
> +}
>   /* The next two fields are not present for IPA v4.2+ */
>   #define BCR_CMDQ_L_LACK_ONE_ENTRY_FMASK		GENMASK(0, 0)
>   #define BCR_TX_NOT_USING_BRESP_FMASK		GENMASK(1, 1)
> @@ -236,7 +267,14 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
>   #define BCR_ROUTER_PREFETCH_EN_FMASK		GENMASK(9, 9)
>   
>   /* The value of the next register must be a multiple of 8 (bottom 3 bits 0) */
> -#define IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET		0x000001e8
> +static inline u32 ipa_reg_local_pkt_proc_cntxt_base_offset(enum ipa_version version)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x5e0;
> +
> +	return 0x1e8;
> +}
> +
>   
>   /* Encoded value for LOCAL_PKT_PROC_CNTXT register BASE_ADDR field */
>   static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
> @@ -252,7 +290,14 @@ static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
>   #define IPA_REG_AGGR_FORCE_CLOSE_OFFSET			0x000001ec
>   
>   /* The next register is not present for IPA v4.5+ */
> -#define IPA_REG_COUNTER_CFG_OFFSET			0x000001f0
> +static inline u32 ipa_reg_counter_cfg_offset(enum ipa_version version)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
> +		return 0x5e8;
> +
> +	return 0x1f0;
> +}
> +
>   /* The next field is not present for IPA v3.5+ */
>   #define EOT_COAL_GRANULARITY			GENMASK(3, 0)
>   #define AGGR_GRANULARITY_FMASK			GENMASK(8, 4)
> @@ -349,15 +394,27 @@ enum ipa_pulse_gran {
>   #define Y_MIN_LIM_FMASK				GENMASK(21, 16)
>   #define Y_MAX_LIM_FMASK				GENMASK(29, 24)
>   
> -#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \
> -					(0x00000800 + 0x0070 * (ep))
> +static inline u32 ipa_reg_endp_init_ctrl_n_offset(enum ipa_version version, u16 ep)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x70 + 0x4 * ep;
> +
> +	return 0x800 + 0x70 * ep;
> +}
> +
>   /* Valid only for RX (IPA producer) endpoints (do not use for IPA v4.0+) */
>   #define ENDP_SUSPEND_FMASK			GENMASK(0, 0)
>   /* Valid only for TX (IPA consumer) endpoints */
>   #define ENDP_DELAY_FMASK			GENMASK(1, 1)
>   
> -#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \
> -					(0x00000808 + 0x0070 * (ep))
> +static inline u32 ipa_reg_endp_init_cfg_n_offset(enum ipa_version version, u16 ep)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0xc0 + 0x4 * ep;
> +
> +	return 0x808 + 0x70 * ep;
> +}
> +
>   #define FRAG_OFFLOAD_EN_FMASK			GENMASK(0, 0)
>   #define CS_OFFLOAD_EN_FMASK			GENMASK(2, 1)
>   #define CS_METADATA_HDR_OFFSET_FMASK		GENMASK(6, 3)
> @@ -383,8 +440,14 @@ enum ipa_nat_en {
>   	IPA_NAT_DST			= 0x2,
>   };
>   
> -#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \
> -					(0x00000810 + 0x0070 * (ep))
> +static inline u32 ipa_reg_endp_init_hdr_n_offset(enum ipa_version version, u16 ep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x170 + 0x4 * ep;
> +
> +	return 0x810 + 0x70 * ep;
> +}
> +
>   #define HDR_LEN_FMASK				GENMASK(5, 0)
>   #define HDR_OFST_METADATA_VALID_FMASK		GENMASK(6, 6)
>   #define HDR_OFST_METADATA_FMASK			GENMASK(12, 7)
> @@ -440,8 +503,14 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
>   	return val;
>   }
>   
> -#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \
> -					(0x00000814 + 0x0070 * (ep))
> +static inline u32 ipa_reg_endp_init_hdr_ext_n_offset(enum ipa_version version, u16 ep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x1c0 + 0x4 * ep;
> +
> +	return 0x814 + 0x70 * ep;
> +}
> +
>   #define HDR_ENDIANNESS_FMASK			GENMASK(0, 0)
>   #define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK	GENMASK(1, 1)
>   #define HDR_TOTAL_LEN_OR_PAD_FMASK		GENMASK(2, 2)
> @@ -454,12 +523,23 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
>   #define HDR_ADDITIONAL_CONST_LEN_MSB_FMASK	GENMASK(21, 20)
>   
>   /* Valid only for RX (IPA producer) endpoints */
> -#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(rxep) \
> -					(0x00000818 + 0x0070 * (rxep))
> +static inline u32 ipa_reg_endp_init_hdr_metadata_mask_n_offset(enum ipa_version version, u16 rxep)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x220 + 0x4 * rxep;
> +
> +	return 0x818 + 0x70 * rxep;
> +}
>   
>   /* Valid only for TX (IPA consumer) endpoints */
> -#define IPA_REG_ENDP_INIT_MODE_N_OFFSET(txep) \
> -					(0x00000820 + 0x0070 * (txep))
> +static inline u32 ipa_reg_endp_init_mode_n_offset(enum ipa_version version, u16 txep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x2c0 + 0x4 * txep;
> +
> +	return 0x820 + 0x70 * txep;
> +}
> +
>   #define MODE_FMASK				GENMASK(2, 0)
>   /* The next field is present for IPA v4.5+ */
>   #define DCPH_ENABLE_FMASK			GENMASK(3, 3)
> @@ -480,8 +560,14 @@ enum ipa_mode {
>   	IPA_DMA				= 0x3,
>   };
>   
> -#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \
> -					(0x00000824 +  0x0070 * (ep))
> +static inline u32 ipa_reg_endp_init_aggr_n_offset(enum ipa_version version,
> +						  u16 ep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x320 + 0x4 * ep;
> +	return 0x824 + 0x70 * ep;
> +}
> +
>   #define AGGR_EN_FMASK				GENMASK(1, 0)
>   #define AGGR_TYPE_FMASK				GENMASK(4, 2)
>   
> @@ -543,14 +629,27 @@ enum ipa_aggr_type {
>   };
>   
>   /* Valid only for RX (IPA producer) endpoints */
> -#define IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(rxep) \
> -					(0x0000082c +  0x0070 * (rxep))
> +static inline u32 ipa_reg_endp_init_hol_block_en_n_offset(enum ipa_version version,
> +							  u16 rxep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x3c0 + 0x4 * rxep;
> +
> +	return 0x82c + 0x70 * rxep;
> +}
> +
>   #define HOL_BLOCK_EN_FMASK			GENMASK(0, 0)
>   
>   /* Valid only for RX (IPA producer) endpoints */
> -#define IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(rxep) \
> -					(0x00000830 +  0x0070 * (rxep))
> -/* The next two fields are present for IPA v4.2 only */
> +static inline u32 ipa_reg_endp_init_hol_block_timer_n_offset(enum ipa_version version, u16 rxep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x420 + 0x4 * rxep;
> +
> +	return 0x830 + 0x70 * rxep;
> +}
> +
> +/* The next fields are present for IPA v4.2 only */
>   #define BASE_VALUE_FMASK			GENMASK(4, 0)
>   #define SCALE_FMASK				GENMASK(12, 8)
>   /* The next two fields are present for IPA v4.5 */
> @@ -558,8 +657,14 @@ enum ipa_aggr_type {
>   #define GRAN_SEL_FMASK				GENMASK(8, 8)
>   
>   /* Valid only for TX (IPA consumer) endpoints */
> -#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(txep) \
> -					(0x00000834 + 0x0070 * (txep))
> +static inline u32 ipa_reg_endp_init_deaggr_n_offset(enum ipa_version version, u16 txep)
> +{
> +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> +		return 0x470 + 0x4 * txep;
> +
> +	return 0x834 + 0x70 * txep;
> +}
> +
>   #define DEAGGR_HDR_LEN_FMASK			GENMASK(5, 0)
>   #define SYSPIPE_ERR_DETECTION_FMASK		GENMASK(6, 6)
>   #define PACKET_OFFSET_VALID_FMASK		GENMASK(7, 7)
> @@ -629,8 +734,14 @@ enum ipa_seq_rep_type {
>   	IPA_SEQ_REP_DMA_PARSER			= 0x08,
>   };
>   
> -#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \
> -					(0x00000840 + 0x0070 * (ep))
> +static inline u32 ipa_reg_endp_status_n_offset(enum ipa_version version, u16 ep)
> +{
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x4c0 + 0x4 * ep;
> +
> +	return 0x840 + 0x70 * ep;
> +}
> +
>   #define STATUS_EN_FMASK				GENMASK(0, 0)
>   #define STATUS_ENDP_FMASK			GENMASK(5, 1)
>   /* The next field is not present for IPA v4.5+ */
> @@ -662,6 +773,9 @@ enum ipa_seq_rep_type {
>   static inline u32 ipa_reg_irq_stts_ee_n_offset(enum ipa_version version,
>   					       u32 ee)
>   {
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x00001008 + 0x1000 * ee;
> +
>   	if (version < IPA_VERSION_4_9)
>   		return 0x00003008 + 0x1000 * ee;
>   
> @@ -675,6 +789,9 @@ static inline u32 ipa_reg_irq_stts_offset(enum ipa_version version)
>   
>   static inline u32 ipa_reg_irq_en_ee_n_offset(enum ipa_version version, u32 ee)
>   {
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x0000100c + 0x1000 * ee;
> +
>   	if (version < IPA_VERSION_4_9)
>   		return 0x0000300c + 0x1000 * ee;
>   
> @@ -688,6 +805,9 @@ static inline u32 ipa_reg_irq_en_offset(enum ipa_version version)
>   
>   static inline u32 ipa_reg_irq_clr_ee_n_offset(enum ipa_version version, u32 ee)
>   {
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x00001010 + 0x1000 * ee;
> +
>   	if (version < IPA_VERSION_4_9)
>   		return 0x00003010 + 0x1000 * ee;
>   
> @@ -776,6 +896,9 @@ enum ipa_irq_id {
>   
>   static inline u32 ipa_reg_irq_uc_ee_n_offset(enum ipa_version version, u32 ee)
>   {
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x0000101c + 0x1000 * ee;
> +
>   	if (version < IPA_VERSION_4_9)
>   		return 0x0000301c + 0x1000 * ee;
>   
> @@ -793,6 +916,9 @@ static inline u32 ipa_reg_irq_uc_offset(enum ipa_version version)
>   static inline u32
>   ipa_reg_irq_suspend_info_ee_n_offset(enum ipa_version version, u32 ee)
>   {
> +	if (version <= IPA_VERSION_2_6L)
> +		return 0x00001098 + 0x1000 * ee;
> +
>   	if (version == IPA_VERSION_3_0)
>   		return 0x00003098 + 0x1000 * ee;
>   
> diff --git a/drivers/net/ipa/ipa_version.h b/drivers/net/ipa/ipa_version.h
> index 6c16c895d842..0d816de586ba 100644
> --- a/drivers/net/ipa/ipa_version.h
> +++ b/drivers/net/ipa/ipa_version.h
> @@ -8,6 +8,9 @@
>   
>   /**
>    * enum ipa_version
> + * @IPA_VERSION_2_0:	IPA version 2.0
> + * @IPA_VERSION_2_5:	IPA version 2.5/2.6
> + * @IPA_VERSION_2_6L:	IPA version 2.6L
>    * @IPA_VERSION_3_0:	IPA version 3.0/GSI version 1.0
>    * @IPA_VERSION_3_1:	IPA version 3.1/GSI version 1.1
>    * @IPA_VERSION_3_5:	IPA version 3.5/GSI version 1.2
> @@ -25,6 +28,9 @@
>    * new version is added.
>    */
>   enum ipa_version {
> +	IPA_VERSION_2_0,
> +	IPA_VERSION_2_5,
> +	IPA_VERSION_2_6L,
>   	IPA_VERSION_3_0,
>   	IPA_VERSION_3_1,
>   	IPA_VERSION_3_5,
> @@ -38,4 +44,10 @@ enum ipa_version {
>   	IPA_VERSION_4_11,
>   };
>   
> +#define IPA_HAS_GSI(version) ((version) > IPA_VERSION_2_6L)
> +#define IPA_IS_64BIT(version) ((version) > IPA_VERSION_2_6L)
> +#define IPA_VERSION_RANGE(_version, _from, _to) \
> +	((_version) >= (IPA_VERSION_##_from) &&  \
> +	 (_version) <= (IPA_VERSION_##_to))
> +
>   #endif /* _IPA_VERSION_H_ */
> 


^ permalink raw reply	[flat|nested] 46+ messages in thread
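The versioned-offset helpers discussed above follow one pattern: IPA v2.x packs per-endpoint registers at a 0x4 stride from a low base, while v3.0+ gives each endpoint a 0x70-byte block. A hedged, compilable sketch of two of the quoted helpers, with a cut-down version enum (only the offsets shown in the patch are real; the enum itself is simplified):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified version enum; declaration order gives
 * 2.0 < 2.5 < 2.6L < 3.0, which the <=/range comparisons rely on. */
enum ipa_version {
	IPA_VERSION_2_0,
	IPA_VERSION_2_5,
	IPA_VERSION_2_6L,
	IPA_VERSION_3_0,
};

#define IPA_VERSION_RANGE(v, from, to) \
	((v) >= (IPA_VERSION_##from) && (v) <= (IPA_VERSION_##to))

/* Mirrors ipa_reg_endp_status_n_offset() from the quoted patch:
 * 0x4 stride on v2.x, 0x70-byte block per endpoint on v3.0+ */
static uint32_t endp_status_n_offset(enum ipa_version version, uint16_t ep)
{
	if (version <= IPA_VERSION_2_6L)
		return 0x4c0 + 0x4 * ep;

	return 0x840 + 0x70 * ep;
}

/* Mirrors ipa_reg_bcr_offset(): only v2.5/v2.6L relocate this register */
static uint32_t bcr_offset(enum ipa_version version)
{
	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
		return 0x5b0;

	return 0x1d0;
}
```

This also shows why IPA_VERSION_RANGE() is debatable, as Alex notes: with no version below 2.0 in the enum, `version <= IPA_VERSION_2_6L` expresses the same thing.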

* Re: [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts
  2021-09-20  3:08 ` [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts Sireesh Kodali
@ 2021-10-13 22:29   ` Alex Elder
  0 siblings, 0 replies; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:29 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> From: Vladimir Lypak <vladimir.lypak@gmail.com>
> 
> Interrupts on IPA v2.x are numbered differently from those on v3.x and
> above. IPA v2.x also doesn't support the TX_SUSPEND irq, just like v3.0.

I'm not sure I like this way of fixing the interrupt ids (by
adding an offset), but it's a simple change.  (And now I have
a better understanding for why the "fixup" function exists).

					-Alex

> 
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_interrupt.c | 11 ++++++++---
>   1 file changed, 8 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_interrupt.c b/drivers/net/ipa/ipa_interrupt.c
> index 94708a23a597..37b5932253aa 100644
> --- a/drivers/net/ipa/ipa_interrupt.c
> +++ b/drivers/net/ipa/ipa_interrupt.c
> @@ -63,6 +63,11 @@ static bool ipa_interrupt_check_fixup(enum ipa_irq_id *irq_id, enum ipa_version
>   
>   	if (*irq_id >= IPA_IRQ_DRBIP_PKT_EXCEED_MAX_SIZE_EN)
>   		return version >= IPA_VERSION_4_9;
> +	else if (*irq_id > IPA_IRQ_BAM_GSI_IDLE)
> +		return version >= IPA_VERSION_3_0;
> +	else if (version <= IPA_VERSION_2_6L &&
> +			*irq_id >= IPA_IRQ_PROC_UC_ACK_Q_NOT_EMPTY)
> +		*irq_id += 2;
>   
>   	return true;
>   }
> @@ -152,8 +157,8 @@ static void ipa_interrupt_suspend_control(struct ipa_interrupt *interrupt,
>   
>   	WARN_ON(!(mask & ipa->available));
>   
> -	/* IPA version 3.0 does not support TX_SUSPEND interrupt control */
> -	if (ipa->version == IPA_VERSION_3_0)
> +	/* IPA version <=3.0 does not support TX_SUSPEND interrupt control */
> +	if (ipa->version <= IPA_VERSION_3_0)
>   		return;
>   
>   	offset = ipa_reg_irq_suspend_en_offset(ipa->version);
> @@ -190,7 +195,7 @@ void ipa_interrupt_suspend_clear_all(struct ipa_interrupt *interrupt)
>   	val = ioread32(ipa->reg_virt + offset);
>   
>   	/* SUSPEND interrupt status isn't cleared on IPA version 3.0 */
> -	if (ipa->version == IPA_VERSION_3_0)
> +	if (ipa->version <= IPA_VERSION_3_0)
>   		return;
>   
>   	offset = ipa_reg_irq_suspend_clr_offset(ipa->version);
> 


^ permalink raw reply	[flat|nested] 46+ messages in thread
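The "+= 2" remap that Alex comments on can be shown with a small sketch: on v2.x hardware, interrupt ids at or after a certain point sit two bit positions higher than the logical ids the driver uses. The enum values below are hypothetical placeholders, not the real ipa_irq_id numbering.

```c
#include <assert.h>

enum version { V2_0, V2_6L, V3_0 };

/* Hypothetical subset of IRQ ids for this sketch; on v2.x hardware
 * the register bit positions for ids at/after UC_ACK shift by two. */
enum irq_id { IRQ_UC_0 = 2, IRQ_UC_ACK = 5, IRQ_UC_TIMER = 6 };

/* Mirrors the offset fixup in the quoted patch: translate a logical
 * id into the bit number the v2.x hardware actually uses. */
static int fixup_irq(enum irq_id id, enum version ver)
{
	if (ver <= V2_6L && id >= IRQ_UC_ACK)
		return id + 2;
	return id;
}
```

Keeping the remap in one fixup function means the rest of the driver can keep using the v3.x-style ids unchanged.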

* Re: [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport
  2021-09-20  3:08 ` [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 17:30     ` Sireesh Kodali
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> BAM is used on IPA v2.x. Since BAM already has a nice dmaengine driver,
> the IPA driver only makes calls to the dmaengine API.
> Also add BAM transaction support to IPA's transaction abstraction layer.
> 
> BAM transactions should use NAPI just like GSI transactions, but for
> now they use a completion callback on each transaction.

This is where things get a little more complicated.  I'm not really
familiar with the BAM interface and would really like to give this
a much deeper review, but I won't be doing that now.

At first glance, it looks reasonably clean to me, and it surprises
me a little that this different system can be supported with a
relatively small amount of change.  Much of the code looks duplicated,
though, so a little more abstraction work might avoid that (but I
haven't looked that closely).
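For reference, the dispatch the series introduces boils down to an ops table that gsi_init() or bam_init() fills in, with the rest of the driver calling through the pointers without knowing which transport is underneath. A minimal userspace model of that pattern (names and return values are illustrative, not the driver's) looks like:

```c
#include <assert.h>
#include <string.h>

/* Minimal model of the ipa_dma abstraction: each transport's init
 * function installs its own channel operations, and callers go
 * through the function pointers transparently. */
struct ipa_dma {
	const char *name;
	int (*channel_start)(struct ipa_dma *dma, unsigned int channel_id);
	int (*channel_stop)(struct ipa_dma *dma, unsigned int channel_id);
};

/* Dummy backends; return values are just markers for the test. */
static int gsi_channel_start(struct ipa_dma *dma, unsigned int id) { return 100 + id; }
static int gsi_channel_stop(struct ipa_dma *dma, unsigned int id)  { return 0; }
static int bam_channel_start(struct ipa_dma *dma, unsigned int id) { return 200 + id; }
static int bam_channel_stop(struct ipa_dma *dma, unsigned int id)  { return 0; }

/* Counterparts of gsi_init()/bam_init(): only the init function
 * knows the transport. */
static void gsi_init(struct ipa_dma *dma)
{
	dma->name = "gsi";
	dma->channel_start = gsi_channel_start;
	dma->channel_stop = gsi_channel_stop;
}

static void bam_init(struct ipa_dma *dma)
{
	dma->name = "bam";
	dma->channel_start = bam_channel_start;
	dma->channel_stop = bam_channel_stop;
}
```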

					-Alex

> 
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/Makefile          |   2 +-
>   drivers/net/ipa/bam.c             | 525 ++++++++++++++++++++++++++++++
>   drivers/net/ipa/gsi.c             |   1 +
>   drivers/net/ipa/ipa_data.h        |   1 +
>   drivers/net/ipa/ipa_dma.h         |  18 +-
>   drivers/net/ipa/ipa_dma_private.h |   2 +
>   drivers/net/ipa/ipa_main.c        |  20 +-
>   drivers/net/ipa/ipa_trans.c       |  14 +-
>   drivers/net/ipa/ipa_trans.h       |   4 +
>   9 files changed, 569 insertions(+), 18 deletions(-)
>   create mode 100644 drivers/net/ipa/bam.c
> 
> diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
> index 3cd021fb992e..4abebc667f77 100644
> --- a/drivers/net/ipa/Makefile
> +++ b/drivers/net/ipa/Makefile
> @@ -2,7 +2,7 @@ obj-$(CONFIG_QCOM_IPA)	+=	ipa.o
>   
>   ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
>   				ipa_table.o ipa_interrupt.o gsi.o ipa_trans.o \
> -				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
> +				ipa_gsi.o ipa_smp2p.o ipa_uc.o bam.o \
>   				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
>   				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
>   				ipa_sysfs.o
> diff --git a/drivers/net/ipa/bam.c b/drivers/net/ipa/bam.c
> new file mode 100644
> index 000000000000..0726e385fee5
> --- /dev/null
> +++ b/drivers/net/ipa/bam.c
> @@ -0,0 +1,525 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/* Copyright (c) 2020, The Linux Foundation. All rights reserved.
> + */
> +
> +#include <linux/completion.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/dmaengine.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/kernel.h>
> +#include <linux/mutex.h>
> +#include <linux/netdevice.h>
> +#include <linux/platform_device.h>
> +
> +#include "ipa_gsi.h"
> +#include "ipa.h"
> +#include "ipa_dma.h"
> +#include "ipa_dma_private.h"
> +#include "ipa_gsi.h"
> +#include "ipa_trans.h"
> +#include "ipa_data.h"
> +
> +/**
> + * DOC: The IPA Smart Peripheral System Interface
> + *
> + * The Smart Peripheral System is a means to communicate over BAM pipes to
> + * the IPA block. The Modem also uses BAM pipes to communicate with the IPA
> + * core.
> + *
> + * Refer to the GSI documentation; BAM is a precursor to GSI and conceptually
> + * similar (inferred from behaviour -- no public BAM documentation is available).
> + *
> + * Each channel here corresponds to 1 BAM pipe configured in BAM2BAM mode
> + *
> + * IPA cmds are transferred one at a time, each in one BAM transfer.
> + */
> +
> +/* Get and configure the BAM DMA channel */
> +int bam_channel_init_one(struct ipa_dma *bam,
> +			 const struct ipa_gsi_endpoint_data *data, bool command)
> +{
> +	struct dma_slave_config bam_config;
> +	u32 channel_id = data->channel_id;
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +	int ret;
> +
> +	/*TODO: if (!bam_channel_data_valid(bam, data))
> +		return -EINVAL;*/
> +
> +	channel->dma_subsys = bam;
> +	channel->dma_chan = dma_request_chan(bam->dev, data->channel_name);
> +	channel->toward_ipa = data->toward_ipa;
> +	channel->tlv_count = data->channel.tlv_count;
> +	channel->tre_count = data->channel.tre_count;
> +	if (IS_ERR(channel->dma_chan)) {
> +		dev_err(bam->dev, "failed to request BAM channel %s: %d\n",
> +				data->channel_name,
> +				(int) PTR_ERR(channel->dma_chan));
> +		return PTR_ERR(channel->dma_chan);
> +	}
> +
> +	ret = ipa_channel_trans_init(bam, data->channel_id);
> +	if (ret)
> +		goto err_dma_chan_free;
> +
> +	if (data->toward_ipa) {
> +		bam_config.direction = DMA_MEM_TO_DEV;
> +		bam_config.dst_maxburst = channel->tlv_count;
> +	} else {
> +		bam_config.direction = DMA_DEV_TO_MEM;
> +		bam_config.src_maxburst = channel->tlv_count;
> +	}
> +
> +	dmaengine_slave_config(channel->dma_chan, &bam_config);
> +
> +	if (command)
> +		ret = ipa_cmd_pool_init(channel, 256);
> +
> +	if (!ret)
> +		return 0;
> +
> +err_dma_chan_free:
> +	dma_release_channel(channel->dma_chan);
> +	return ret;
> +}
> +
> +static void bam_channel_exit_one(struct ipa_channel *channel)
> +{
> +	if (channel->dma_chan) {
> +		dmaengine_terminate_sync(channel->dma_chan);
> +		dma_release_channel(channel->dma_chan);
> +	}
> +}
> +
> +/* Get channels from BAM_DMA */
> +int bam_channel_init(struct ipa_dma *bam, u32 count,
> +		const struct ipa_gsi_endpoint_data *data)
> +{
> +	int ret = 0;
> +	u32 i;
> +
> +	for (i = 0; i < count; ++i) {
> +		bool command = i == IPA_ENDPOINT_AP_COMMAND_TX;
> +
> +		if (!data[i].channel_name || data[i].ee_id == GSI_EE_MODEM)
> +			continue;
> +
> +		ret = bam_channel_init_one(bam, &data[i], command);
> +		if (ret)
> +			goto err_unwind;
> +	}
> +
> +	return ret;
> +
> +err_unwind:
> +	while (i--) {
> +		if (ipa_gsi_endpoint_data_empty(&data[i]))
> +			continue;
> +
> +		bam_channel_exit_one(&bam->channel[i]);
> +	}
> +	return ret;
> +}
> +
> +/* Inverse of bam_channel_init() */
> +void bam_channel_exit(struct ipa_dma *bam)
> +{
> +	u32 channel_id = BAM_CHANNEL_COUNT_MAX - 1;
> +
> +	do
> +		bam_channel_exit_one(&bam->channel[channel_id]);
> +	while (channel_id--);
> +}
> +
> +/* Inverse of bam_init() */
> +static void bam_exit(struct ipa_dma *bam)
> +{
> +	mutex_destroy(&bam->mutex);
> +	bam_channel_exit(bam);
> +}
> +
> +/* Return the channel id associated with a given channel */
> +static u32 bam_channel_id(struct ipa_channel *channel)
> +{
> +	return channel - &channel->dma_subsys->channel[0];
> +}
> +
> +static void
> +bam_channel_tx_update(struct ipa_channel *channel, struct ipa_trans *trans)
> +{
> +	u64 byte_count = trans->byte_count + trans->len;
> +	u64 trans_count = trans->trans_count + 1;
> +
> +	byte_count -= channel->compl_byte_count;
> +	channel->compl_byte_count += byte_count;
> +	trans_count -= channel->compl_trans_count;
> +	channel->compl_trans_count += trans_count;
> +
> +	ipa_gsi_channel_tx_completed(channel->dma_subsys, bam_channel_id(channel),
> +					   trans_count, byte_count);
> +}
> +
> +static void
> +bam_channel_rx_update(struct ipa_channel *channel, struct ipa_trans *trans)
> +{
> +	/* FIXME */
> +	u64 byte_count = trans->byte_count + trans->len;
> +
> +	channel->byte_count += byte_count;
> +	channel->trans_count++;
> +}
> +
> +/* Consult hardware, move any newly completed transactions to completed list */
> +static void bam_channel_update(struct ipa_channel *channel)
> +{
> +	struct ipa_trans *trans;
> +
> +	list_for_each_entry(trans, &channel->trans_info.pending, links) {
> +		enum dma_status trans_status =
> +				dma_async_is_tx_complete(channel->dma_chan,
> +					trans->cookie, NULL, NULL);
> +		if (trans_status == DMA_COMPLETE)
> +			break;
> +	}
> +	/* Get the transaction for the latest completed event.  Take a
> +	 * reference to keep it from completing before we give the events
> +	 * for this and previous transactions back to the hardware.
> +	 */
> +	refcount_inc(&trans->refcount);
> +
> +	/* For RX channels, update each completed transaction with the number
> +	 * of bytes that were actually received.  For TX channels, report
> +	 * the number of transactions and bytes this completion represents
> +	 * up the network stack.
> +	 */
> +	if (channel->toward_ipa)
> +		bam_channel_tx_update(channel, trans);
> +	else
> +		bam_channel_rx_update(channel, trans);
> +
> +	ipa_trans_move_complete(trans);
> +
> +	ipa_trans_free(trans);
> +}
> +
> +/**
> + * bam_channel_poll_one() - Return a single completed transaction on a channel
> + * @channel:	Channel to be polled
> + *
> + * Return:	Transaction pointer, or null if none are available
> + *
> + * This function returns the first entry on a channel's completed transaction
> + * list.  If that list is empty, the hardware is consulted to determine
> + * whether any new transactions have completed.  If so, they're moved to the
> + * completed list and the new first entry is returned.  If there are no more
> + * completed transactions, a null pointer is returned.
> + */
> +static struct ipa_trans *bam_channel_poll_one(struct ipa_channel *channel)
> +{
> +	struct ipa_trans *trans;
> +
> +	/* Get the first transaction from the completed list */
> +	trans = ipa_channel_trans_complete(channel);
> +	if (!trans) {
> +		bam_channel_update(channel);
> +		trans = ipa_channel_trans_complete(channel);
> +	}
> +
> +	if (trans)
> +		ipa_trans_move_polled(trans);
> +
> +	return trans;
> +}
> +
> +/**
> + * bam_channel_poll() - NAPI poll function for a channel
> + * @napi:	NAPI structure for the channel
> + * @budget:	Budget supplied by NAPI core
> + *
> + * Return:	Number of items polled (<= budget)
> + *
> + * Single transactions completed by hardware are polled until either
> + * the budget is exhausted, or there are no more.  Each transaction
> + * polled is passed to ipa_trans_complete(), to perform remaining
> + * completion processing and retire/free the transaction.
> + */
> +static int bam_channel_poll(struct napi_struct *napi, int budget)
> +{
> +	struct ipa_channel *channel;
> +	int count = 0;
> +
> +	channel = container_of(napi, struct ipa_channel, napi);
> +	while (count < budget) {
> +		struct ipa_trans *trans;
> +
> +		count++;
> +		trans = bam_channel_poll_one(channel);
> +		if (!trans)
> +			break;
> +		ipa_trans_complete(trans);
> +	}
> +
> +	if (count < budget)
> +		napi_complete(&channel->napi);
> +
> +	return count;
> +}
> +
> +/* Setup function for a single channel */
> +static void bam_channel_setup_one(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	if (!channel->dma_subsys)
> +		return;	/* Ignore uninitialized channels */
> +
> +	if (channel->toward_ipa) {
> +		netif_tx_napi_add(&bam->dummy_dev, &channel->napi,
> +				  bam_channel_poll, NAPI_POLL_WEIGHT);
> +	} else {
> +		netif_napi_add(&bam->dummy_dev, &channel->napi,
> +			       bam_channel_poll, NAPI_POLL_WEIGHT);
> +	}
> +	napi_enable(&channel->napi);
> +}
> +
> +static void bam_channel_teardown_one(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	if (!channel->dma_subsys)
> +		return;		/* Ignore uninitialized channels */
> +
> +	netif_napi_del(&channel->napi);
> +}
> +
> +/* Setup function for channels */
> +static int bam_channel_setup(struct ipa_dma *bam)
> +{
> +	u32 channel_id = 0;
> +	int ret;
> +
> +	mutex_lock(&bam->mutex);
> +
> +	do
> +		bam_channel_setup_one(bam, channel_id);
> +	while (++channel_id < BAM_CHANNEL_COUNT_MAX);
> +
> +	/* Make sure no channels were defined that hardware does not support */
> +	while (channel_id < BAM_CHANNEL_COUNT_MAX) {
> +		struct ipa_channel *channel = &bam->channel[channel_id++];
> +
> +		if (!channel->dma_subsys)
> +			continue;	/* Ignore uninitialized channels */
> +
> +		dev_err(bam->dev, "channel %u not supported by hardware\n",
> +			channel_id - 1);
> +		channel_id = BAM_CHANNEL_COUNT_MAX;
> +		goto err_unwind;
> +	}
> +
> +	mutex_unlock(&bam->mutex);
> +
> +	return 0;
> +
> +err_unwind:
> +	while (channel_id--)
> +		bam_channel_teardown_one(bam, channel_id);
> +
> +	mutex_unlock(&bam->mutex);
> +
> +	return ret;
> +}
> +
> +/* Inverse of bam_channel_setup() */
> +static void bam_channel_teardown(struct ipa_dma *bam)
> +{
> +	u32 channel_id;
> +
> +	mutex_lock(&bam->mutex);
> +
> +	channel_id = BAM_CHANNEL_COUNT_MAX - 1;
> +	do
> +		bam_channel_teardown_one(bam, channel_id);
> +	while (channel_id--);
> +
> +	mutex_unlock(&bam->mutex);
> +}
> +
> +static int bam_setup(struct ipa_dma *bam)
> +{
> +	return bam_channel_setup(bam);
> +}
> +
> +static void bam_teardown(struct ipa_dma *bam)
> +{
> +	bam_channel_teardown(bam);
> +}
> +
> +static u32 bam_channel_tre_max(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	/* Hardware limit is channel->tre_count - 1 */
> +	return channel->tre_count - (channel->tlv_count - 1);
> +}
> +
> +static u32 bam_channel_trans_tre_max(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	return channel->tlv_count;
> +}
> +
> +static int bam_channel_start(struct ipa_dma *bam, u32 channel_id)
> +{
> +	return 0;
> +}
> +
> +static int bam_channel_stop(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	return dmaengine_terminate_sync(channel->dma_chan);
> +}
> +
> +static void bam_channel_reset(struct ipa_dma *bam, u32 channel_id, bool doorbell)
> +{
> +	bam_channel_stop(bam, channel_id);
> +}
> +
> +static int bam_channel_suspend(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	return dmaengine_pause(channel->dma_chan);
> +}
> +
> +static int bam_channel_resume(struct ipa_dma *bam, u32 channel_id)
> +{
> +	struct ipa_channel *channel = &bam->channel[channel_id];
> +
> +	return dmaengine_resume(channel->dma_chan);
> +}
> +
> +static void bam_suspend(struct ipa_dma *bam)
> +{
> +	/* No-op for now */
> +}
> +
> +static void bam_resume(struct ipa_dma *bam)
> +{
> +	/* No-op for now */
> +}
> +
> +static void bam_trans_callback(void *arg)
> +{
> +	ipa_trans_complete(arg);
> +}
> +
> +static void bam_trans_commit(struct ipa_trans *trans, bool unused)
> +{
> +	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
> +	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
> +	struct ipa_cmd_info *info;
> +	struct scatterlist *sg;
> +	u32 byte_count = 0;
> +	u32 i;
> +	enum dma_transfer_direction direction;
> +
> +	if (channel->toward_ipa)
> +		direction = DMA_MEM_TO_DEV;
> +	else
> +		direction = DMA_DEV_TO_MEM;
> +
> +	/* assert(trans->used > 0); */
> +
> +	info = trans->info ? &trans->info[0] : NULL;
> +	for_each_sg(trans->sgl, sg, trans->used, i) {
> +		bool last_tre = i == trans->used - 1;
> +		dma_addr_t addr = sg_dma_address(sg);
> +		u32 len = sg_dma_len(sg);
> +		u32 dma_flags = 0;
> +		struct dma_async_tx_descriptor *desc;
> +
> +		byte_count += len;
> +		if (info)
> +			opcode = info++->opcode;
> +
> +		if (opcode != IPA_CMD_NONE) {
> +			len = opcode;
> +			dma_flags |= DMA_PREP_IMM_CMD;
> +		}
> +
> +		if (last_tre)
> +			dma_flags |= DMA_PREP_INTERRUPT;
> +
> +		desc = dmaengine_prep_slave_single(channel->dma_chan, addr, len,
> +				direction, dma_flags);
> +
> +		if (last_tre) {
> +			desc->callback = bam_trans_callback;
> +			desc->callback_param = trans;
> +		}
> +
> +		desc->cookie = dmaengine_submit(desc);
> +
> +		if (last_tre)
> +			trans->cookie = desc->cookie;
> +
> +		if (direction == DMA_DEV_TO_MEM)
> +			dmaengine_desc_attach_metadata(desc, &trans->len, sizeof(trans->len));
> +	}
> +
> +	if (channel->toward_ipa) {
> +		/* We record TX bytes when they are sent */
> +		trans->len = byte_count;
> +		trans->trans_count = channel->trans_count;
> +		trans->byte_count = channel->byte_count;
> +		channel->trans_count++;
> +		channel->byte_count += byte_count;
> +	}
> +
> +	ipa_trans_move_pending(trans);
> +
> +	dma_async_issue_pending(channel->dma_chan);
> +}
> +
> +/* Initialize the BAM DMA channels
> + * Actual hw init is handled by the BAM_DMA driver
> + */
> +int bam_init(struct ipa_dma *bam, struct platform_device *pdev,
> +		enum ipa_version version, u32 count,
> +		const struct ipa_gsi_endpoint_data *data)
> +{
> +	struct device *dev = &pdev->dev;
> +	int ret;
> +
> +	bam->dev = dev;
> +	bam->version = version;
> +	bam->setup = bam_setup;
> +	bam->teardown = bam_teardown;
> +	bam->exit = bam_exit;
> +	bam->suspend = bam_suspend;
> +	bam->resume = bam_resume;
> +	bam->channel_tre_max = bam_channel_tre_max;
> +	bam->channel_trans_tre_max = bam_channel_trans_tre_max;
> +	bam->channel_start = bam_channel_start;
> +	bam->channel_stop = bam_channel_stop;
> +	bam->channel_reset = bam_channel_reset;
> +	bam->channel_suspend = bam_channel_suspend;
> +	bam->channel_resume = bam_channel_resume;
> +	bam->trans_commit = bam_trans_commit;
> +
> +	init_dummy_netdev(&bam->dummy_dev);
> +
> +	ret = bam_channel_init(bam, count, data);
> +	if (ret)
> +		return ret;
> +
> +	mutex_init(&bam->mutex);
> +
> +	return 0;
> +}
> diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
> index 39d9ca620a9f..ac0b9e748fa1 100644
> --- a/drivers/net/ipa/gsi.c
> +++ b/drivers/net/ipa/gsi.c
> @@ -2210,6 +2210,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
>   	gsi->channel_reset = gsi_channel_reset;
>   	gsi->channel_suspend = gsi_channel_suspend;
>   	gsi->channel_resume = gsi_channel_resume;
> +	gsi->trans_commit = gsi_trans_commit;
>   
>   	/* GSI uses NAPI on all channels.  Create a dummy network device
>   	 * for the channel NAPI contexts to be associated with.
> diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
> index 6d329e9ce5d2..7d62d49f414f 100644
> --- a/drivers/net/ipa/ipa_data.h
> +++ b/drivers/net/ipa/ipa_data.h
> @@ -188,6 +188,7 @@ struct ipa_gsi_endpoint_data {
>   	u8 channel_id;
>   	u8 endpoint_id;
>   	bool toward_ipa;
> +	const char *channel_name;	/* used only for BAM DMA channels */
>   
>   	struct gsi_channel_data channel;
>   	struct ipa_endpoint_data endpoint;
> diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
> index 1a23e6ac5785..3000182ae689 100644
> --- a/drivers/net/ipa/ipa_dma.h
> +++ b/drivers/net/ipa/ipa_dma.h
> @@ -17,7 +17,11 @@
>   
>   /* Maximum number of channels and event rings supported by the driver */
>   #define GSI_CHANNEL_COUNT_MAX	23
> +#define BAM_CHANNEL_COUNT_MAX	20
>   #define GSI_EVT_RING_COUNT_MAX	24
> +#define MAX(a, b)		(((a) > (b)) ? (a) : (b))
> +#define IPA_CHANNEL_COUNT_MAX	MAX(GSI_CHANNEL_COUNT_MAX, \
> +				    BAM_CHANNEL_COUNT_MAX)
>   
>   /* Maximum TLV FIFO size for a channel; 64 here is arbitrary (and high) */
>   #define GSI_TLV_MAX		64
> @@ -119,6 +123,8 @@ struct ipa_channel {
>   	struct gsi_ring tre_ring;
>   	u32 evt_ring_id;
>   
> +	struct dma_chan *dma_chan;
> +
>   	u64 byte_count;			/* total # bytes transferred */
>   	u64 trans_count;		/* total # transactions */
>   	/* The following counts are used only for TX endpoints */
> @@ -154,7 +160,7 @@ struct ipa_dma {
>   	u32 irq;
>   	u32 channel_count;
>   	u32 evt_ring_count;
> -	struct ipa_channel channel[GSI_CHANNEL_COUNT_MAX];
> +	struct ipa_channel channel[IPA_CHANNEL_COUNT_MAX];
>   	struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX];
>   	u32 event_bitmap;		/* allocated event rings */
>   	u32 modem_channel_bitmap;	/* modem channels to allocate */
> @@ -303,7 +309,7 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
>   }
>   
>   /**
> - * ipa_dma_init() - Initialize the GSI subsystem
> + * gsi_init()/bam_init() - Initialize the GSI/BAM subsystem
>    * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
>    * @pdev:	IPA platform device
>    * @version:	IPA hardware version (implies GSI version)
> @@ -312,14 +318,18 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
>    *
>    * Return:	0 if successful, or a negative error code
>    *
> - * Early stage initialization of the GSI subsystem, performing tasks
> - * that can be done before the GSI hardware is ready to use.
> + * Early stage initialization of the GSI/BAM subsystem, performing tasks
> + * that can be done before the GSI/BAM hardware is ready to use.
>    */
>   
>   int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
>   	     enum ipa_version version, u32 count,
>   	     const struct ipa_gsi_endpoint_data *data);
>   
> +int bam_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
> +	     enum ipa_version version, u32 count,
> +	     const struct ipa_gsi_endpoint_data *data);
> +
>   /**
>    * ipa_dma_exit() - Exit the DMA subsystem
>    * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
> diff --git a/drivers/net/ipa/ipa_dma_private.h b/drivers/net/ipa/ipa_dma_private.h
> index 40148a551b47..1db53e597a61 100644
> --- a/drivers/net/ipa/ipa_dma_private.h
> +++ b/drivers/net/ipa/ipa_dma_private.h
> @@ -16,6 +16,8 @@ struct ipa_channel;
>   
>   #define GSI_RING_ELEMENT_SIZE	16	/* bytes; must be a power of 2 */
>   
> +void gsi_trans_commit(struct ipa_trans *trans, bool ring_db);
> +
>   /* Return the entry that follows one provided in a transaction pool */
>   void *ipa_trans_pool_next(struct ipa_trans_pool *pool, void *element);
>   
> diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> index ba06e3ad554c..ea6c4347f2c6 100644
> --- a/drivers/net/ipa/ipa_main.c
> +++ b/drivers/net/ipa/ipa_main.c
> @@ -60,12 +60,15 @@
>    * core.  The GSI implements a set of "channels" used for communication
>    * between the AP and the IPA.
>    *
> - * The IPA layer uses GSI channels to implement its "endpoints".  And while
> - * a GSI channel carries data between the AP and the IPA, a pair of IPA
> - * endpoints is used to carry traffic between two EEs.  Specifically, the main
> - * modem network interface is implemented by two pairs of endpoints:  a TX
> + * The IPA layer uses GSI channels or BAM pipes to implement its "endpoints".
> + * And while a GSI channel carries data between the AP and the IPA, a pair of
> + * IPA endpoints is used to carry traffic between two EEs.  Specifically, the
> + * main modem network interface is implemented by two pairs of endpoints:  a TX
>    * endpoint on the AP coupled with an RX endpoint on the modem; and another
>    * RX endpoint on the AP receiving data from a TX endpoint on the modem.
> + *
> + * For BAM based transport, a pair of BAM pipes are used for TX and RX between
> + * the AP and IPA, and between IPA and other EEs.
>    */
>   
>   /* The name of the GSI firmware file relative to /lib/firmware */
> @@ -716,8 +719,13 @@ static int ipa_probe(struct platform_device *pdev)
>   	if (ret)
>   		goto err_reg_exit;
>   
> -	ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> -		       data->endpoint_data);
> +	if (IPA_HAS_GSI(ipa->version))
> +		ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> +			       data->endpoint_data);
> +	else
> +		ret = bam_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> +			       data->endpoint_data);
> +
>   	if (ret)
>   		goto err_mem_exit;
>   
> diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
> index 22755f3ce3da..444f44846da8 100644
> --- a/drivers/net/ipa/ipa_trans.c
> +++ b/drivers/net/ipa/ipa_trans.c
> @@ -254,7 +254,7 @@ struct ipa_trans *ipa_channel_trans_complete(struct ipa_channel *channel)
>   }
>   
>   /* Move a transaction from the allocated list to the pending list */
> -static void ipa_trans_move_pending(struct ipa_trans *trans)
> +void ipa_trans_move_pending(struct ipa_trans *trans)
>   {
>   	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
>   	struct ipa_trans_info *trans_info = &channel->trans_info;
> @@ -539,7 +539,7 @@ static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
>    * pending list.  Finally, updates the channel ring pointer and optionally
>    * rings the doorbell.
>    */
> -static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
> +void gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
>   {
>   	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
>   	struct gsi_ring *ring = &channel->tre_ring;
> @@ -604,9 +604,9 @@ static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
>   /* Commit a GSI transaction */
>   void ipa_trans_commit(struct ipa_trans *trans, bool ring_db)
>   {
> -	if (trans->used)
> -		__gsi_trans_commit(trans, ring_db);
> -	else
> +	if (trans->used) {
> +		trans->dma_subsys->trans_commit(trans, ring_db);
> +	} else
>   		ipa_trans_free(trans);
>   }
>   
> @@ -618,7 +618,7 @@ void ipa_trans_commit_wait(struct ipa_trans *trans)
>   
>   	refcount_inc(&trans->refcount);
>   
> -	__gsi_trans_commit(trans, true);
> +	trans->dma_subsys->trans_commit(trans, true);
>   
>   	wait_for_completion(&trans->completion);
>   
> @@ -638,7 +638,7 @@ int ipa_trans_commit_wait_timeout(struct ipa_trans *trans,
>   
>   	refcount_inc(&trans->refcount);
>   
> -	__gsi_trans_commit(trans, true);
> +	trans->dma_subsys->trans_commit(trans, true);
>   
>   	remaining = wait_for_completion_timeout(&trans->completion,
>   						timeout_jiffies);
> diff --git a/drivers/net/ipa/ipa_trans.h b/drivers/net/ipa/ipa_trans.h
> index b93342414360..5f41e3e6f92a 100644
> --- a/drivers/net/ipa/ipa_trans.h
> +++ b/drivers/net/ipa/ipa_trans.h
> @@ -10,6 +10,7 @@
>   #include <linux/refcount.h>
>   #include <linux/completion.h>
>   #include <linux/dma-direction.h>
> +#include <linux/dmaengine.h>
>   
>   #include "ipa_cmd.h"
>   
> @@ -61,6 +62,7 @@ struct ipa_trans {
>   	struct scatterlist *sgl;
>   	struct ipa_cmd_info *info;	/* array of entries, or null */
>   	enum dma_data_direction direction;
> +	dma_cookie_t cookie;
>   
>   	refcount_t refcount;
>   	struct completion completion;
> @@ -149,6 +151,8 @@ struct ipa_trans *ipa_channel_trans_alloc(struct ipa_dma *dma_subsys, u32 channe
>    */
>   void ipa_trans_free(struct ipa_trans *trans);
>   
> +void ipa_trans_move_pending(struct ipa_trans *trans);
> +
>   /**
>    * ipa_trans_cmd_add() - Add an immediate command to a transaction
>    * @trans:	Transaction
> 

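One subtlety in the quoted bam_channel_tx_update() is the delta accounting: each transaction snapshots the channel's cumulative counters at commit time, and on completion only the difference since the last report is passed up the stack, so a single completion can cover several transactions. A compact userspace model of that bookkeeping (field names mirror the patch, layout simplified) is:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified mirror of the patch's per-channel/per-transaction
 * counters. */
struct trans { uint64_t byte_count, trans_count, len; };
struct chan  {
	uint64_t byte_count, trans_count;		/* committed so far */
	uint64_t compl_byte_count, compl_trans_count;	/* reported so far */
};

/* Counterpart of the commit path in bam_trans_commit(): record the
 * snapshot, then advance the channel's cumulative counters. */
static void tx_commit(struct chan *ch, struct trans *t, uint64_t len)
{
	t->len = len;
	t->trans_count = ch->trans_count++;
	t->byte_count = ch->byte_count;
	ch->byte_count += len;
}

/* Model of bam_channel_tx_update(): report the delta between this
 * transaction's completion point and what was already reported.
 * Returns the newly reported byte count. */
static uint64_t tx_update(struct chan *ch, const struct trans *t)
{
	uint64_t bytes = t->byte_count + t->len - ch->compl_byte_count;
	uint64_t count = t->trans_count + 1 - ch->compl_trans_count;

	ch->compl_byte_count += bytes;
	ch->compl_trans_count += count;
	return bytes;
}
```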


* Re: [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init
  2021-09-20  3:08 ` [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 18:13     ` Sireesh Kodali
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> IPA v2.x commands are different from later IPA revisions mostly because
> IPA v2.x is 32 bit. There are also other minor differences in some of
> the command structs.
> 
> The tables are likewise different only because IPA v2.x is 32 bit.

There's no "RFC" on this patch, but I assume it's just invisible.

There are some places here where conventions used elsewhere in the
driver aren't followed.  One example is the use of symbol names with
the IPA version encoded in them; elsewhere such cases usually have
a macro that takes a version as argument.

And I don't especially like using a macro on the left hand side
of an assignment expression.

I'm skimming now, but overall this looks OK.
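For concreteness, the construct in question can be reduced to a small standalone example: the macro yields an lvalue selected by IPA version at runtime (types simplified here; only fields with matching types in both union members, such as flags, can go through it, since the ternary needs matching pointer types). A plain setter function, shown alongside, would be the conventional alternative:

```c
#include <assert.h>
#include <stdint.h>

enum ipa_version { IPA_VERSION_2_6L, IPA_VERSION_3_0 };

/* Simplified v2/v3 command payload union, modeled on the patch. */
union cmd_hdr_init_local {
	struct { uint32_t hdr_table_addr; uint32_t flags; } v2;
	struct { uint64_t hdr_table_addr; uint32_t flags;
		 uint32_t reserved; } v3;
};

/* The reviewed pattern: expands to an lvalue in whichever union
 * member matches the IPA version, usable as an assignment target. */
#define CMD_FIELD(_version, _payload, _field)		\
	*(((_version) > IPA_VERSION_2_6L) ?		\
	  &((_payload)->v3._field) :			\
	  &((_payload)->v2._field))

/* Conventional alternative: a setter that branches internally. */
static void set_flags(enum ipa_version v, union cmd_hdr_init_local *p,
		      uint32_t flags)
{
	if (v > IPA_VERSION_2_6L)
		p->v3.flags = flags;
	else
		p->v2.flags = flags;
}
```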

					-Alex

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> ---
>   drivers/net/ipa/ipa.h       |   2 +-
>   drivers/net/ipa/ipa_cmd.c   | 138 ++++++++++++++++++++++++++----------
>   drivers/net/ipa/ipa_table.c |  29 ++++++--
>   drivers/net/ipa/ipa_table.h |   2 +-
>   4 files changed, 125 insertions(+), 46 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
> index 80a83ac45729..63b2b368b588 100644
> --- a/drivers/net/ipa/ipa.h
> +++ b/drivers/net/ipa/ipa.h
> @@ -81,7 +81,7 @@ struct ipa {
>   	struct ipa_power *power;
>   
>   	dma_addr_t table_addr;
> -	__le64 *table_virt;
> +	void *table_virt;
>   
>   	struct ipa_interrupt *interrupt;
>   	bool uc_powered;
> diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> index 7a104540dc26..58dae4b3bf87 100644
> --- a/drivers/net/ipa/ipa_cmd.c
> +++ b/drivers/net/ipa/ipa_cmd.c
> @@ -25,8 +25,8 @@
>    * An immediate command is generally used to request the IPA do something
>    * other than data transfer to another endpoint.
>    *
> - * Immediate commands are represented by GSI transactions just like other
> - * transfer requests, represented by a single GSI TRE.  Each immediate
> + * Immediate commands on IPA v3 are represented by GSI transactions just like
> + * other transfer requests, represented by a single GSI TRE.  Each immediate
>    * command has a well-defined format, having a payload of a known length.
>    * This allows the transfer element's length field to be used to hold an
>    * immediate command's opcode.  The payload for a command resides in DRAM
> @@ -45,10 +45,16 @@ enum pipeline_clear_options {
>   
>   /* IPA_CMD_IP_V{4,6}_{FILTER,ROUTING}_INIT */
>   
> -struct ipa_cmd_hw_ip_fltrt_init {
> -	__le64 hash_rules_addr;
> -	__le64 flags;
> -	__le64 nhash_rules_addr;
> +union ipa_cmd_hw_ip_fltrt_init {
> +	struct {
> +		__le32 nhash_rules_addr;
> +		__le32 flags;
> +	} v2;
> +	struct {
> +		__le64 hash_rules_addr;
> +		__le64 flags;
> +		__le64 nhash_rules_addr;
> +	} v3;
>   };
>   
>   /* Field masks for ipa_cmd_hw_ip_fltrt_init structure fields */
> @@ -56,13 +62,23 @@ struct ipa_cmd_hw_ip_fltrt_init {
>   #define IP_FLTRT_FLAGS_HASH_ADDR_FMASK			GENMASK_ULL(27, 12)
>   #define IP_FLTRT_FLAGS_NHASH_SIZE_FMASK			GENMASK_ULL(39, 28)
>   #define IP_FLTRT_FLAGS_NHASH_ADDR_FMASK			GENMASK_ULL(55, 40)
> +#define IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(11, 0)
> +#define IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(27, 12)
> +#define IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(15, 0)
> +#define IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(31, 16)
>   
>   /* IPA_CMD_HDR_INIT_LOCAL */
>   
> -struct ipa_cmd_hw_hdr_init_local {
> -	__le64 hdr_table_addr;
> -	__le32 flags;
> -	__le32 reserved;
> +union ipa_cmd_hw_hdr_init_local {
> +	struct {
> +		__le32 hdr_table_addr;
> +		__le32 flags;
> +	} v2;
> +	struct {
> +		__le64 hdr_table_addr;
> +		__le32 flags;
> +		__le32 reserved;
> +	} v3;
>   };
>   
>   /* Field masks for ipa_cmd_hw_hdr_init_local structure fields */
> @@ -109,14 +125,37 @@ struct ipa_cmd_ip_packet_init {
>   #define DMA_SHARED_MEM_OPCODE_SKIP_CLEAR_FMASK		GENMASK(8, 8)
>   #define DMA_SHARED_MEM_OPCODE_CLEAR_OPTION_FMASK	GENMASK(10, 9)
>   
> -struct ipa_cmd_hw_dma_mem_mem {
> -	__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
> -	__le16 size;
> -	__le16 local_addr;
> -	__le16 flags;
> -	__le64 system_addr;
> +union ipa_cmd_hw_dma_mem_mem {
> +	struct {
> +		__le16 reserved;
> +		__le16 size;
> +		__le32 system_addr;
> +		__le16 local_addr;
> +		__le16 flags; /* the least significant 14 bits are reserved */
> +		__le32 padding;
> +	} v2;
> +	struct {
> +		__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
> +		__le16 size;
> +		__le16 local_addr;
> +		__le16 flags;
> +		__le64 system_addr;
> +	} v3;
>   };
>   
> +#define CMD_FIELD(_version, _payload, _field)				\
> +	*(((_version) > IPA_VERSION_2_6L) ?				\
> +	  &((_payload)->v3._field) :					\
> +	  &((_payload)->v2._field))
> +
> +#define SET_DMA_FIELD(_ver, _payload, _field, _value)			\
> +	do {								\
> +		if ((_ver) >= IPA_VERSION_3_0)				\
> +			(_payload)->v3._field = cpu_to_le64(_value);	\
> +		else							\
> +			(_payload)->v2._field = cpu_to_le32(_value);	\
> +	} while (0)
> +
>   /* Flag allowing atomic clear of target region after reading data (v4.0+)*/
>   #define DMA_SHARED_MEM_CLEAR_AFTER_READ			GENMASK(15, 15)
>   
> @@ -132,15 +171,16 @@ struct ipa_cmd_ip_packet_tag_status {
>   	__le64 tag;
>   };
>   
> -#define IP_PACKET_TAG_STATUS_TAG_FMASK			GENMASK_ULL(63, 16)
> +#define IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 32)
> +#define IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 16)
>   
>   /* Immediate command payload */
>   union ipa_cmd_payload {
> -	struct ipa_cmd_hw_ip_fltrt_init table_init;
> -	struct ipa_cmd_hw_hdr_init_local hdr_init_local;
> +	union ipa_cmd_hw_ip_fltrt_init table_init;
> +	union ipa_cmd_hw_hdr_init_local hdr_init_local;
>   	struct ipa_cmd_register_write register_write;
>   	struct ipa_cmd_ip_packet_init ip_packet_init;
> -	struct ipa_cmd_hw_dma_mem_mem dma_shared_mem;
> +	union ipa_cmd_hw_dma_mem_mem dma_shared_mem;
>   	struct ipa_cmd_ip_packet_tag_status ip_packet_tag_status;
>   };
>   
> @@ -154,6 +194,7 @@ static void ipa_cmd_validate_build(void)
>   	 * of entries.
>   	 */
>   #define TABLE_SIZE	(TABLE_COUNT_MAX * sizeof(__le64))
> +// TODO
>   #define TABLE_COUNT_MAX	max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX)
>   	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK));
>   	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
> @@ -405,15 +446,26 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
>   {
>   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum dma_data_direction direction = DMA_TO_DEVICE;
> -	struct ipa_cmd_hw_ip_fltrt_init *payload;
> +	union ipa_cmd_hw_ip_fltrt_init *payload;
> +	enum ipa_version version = ipa->version;
>   	union ipa_cmd_payload *cmd_payload;
>   	dma_addr_t payload_addr;
>   	u64 val;
>   
>   	/* Record the non-hash table offset and size */
>   	offset += ipa->mem_offset;
> -	val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
> -	val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
> +
> +	if (version >= IPA_VERSION_3_0) {
> +		val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
> +		val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
> +	} else if (opcode == IPA_CMD_IP_V4_FILTER_INIT ||
> +		   opcode == IPA_CMD_IP_V4_ROUTING_INIT) {
> +		val = u64_encode_bits(offset, IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK);
> +		val |= u64_encode_bits(size, IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK);
> +	} else { /* IPA <= v2.6L IPv6 */
> +		val = u64_encode_bits(offset, IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK);
> +		val |= u64_encode_bits(size, IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK);
> +	}
>   
>   	/* The hash table offset and address are zero if its size is 0 */
>   	if (hash_size) {
> @@ -429,10 +481,10 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
>   	payload = &cmd_payload->table_init;
>   
>   	/* Fill in all offsets and sizes and the non-hash table address */
> -	if (hash_size)
> -		payload->hash_rules_addr = cpu_to_le64(hash_addr);
> -	payload->flags = cpu_to_le64(val);
> -	payload->nhash_rules_addr = cpu_to_le64(addr);
> +	if (hash_size && version >= IPA_VERSION_3_0)
> +		payload->v3.hash_rules_addr = cpu_to_le64(hash_addr);
> +	SET_DMA_FIELD(version, payload, flags, val);
> +	SET_DMA_FIELD(version, payload, nhash_rules_addr, addr);
>   
>   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
> @@ -445,7 +497,7 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
>   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL;
>   	enum dma_data_direction direction = DMA_TO_DEVICE;
> -	struct ipa_cmd_hw_hdr_init_local *payload;
> +	union ipa_cmd_hw_hdr_init_local *payload;
>   	union ipa_cmd_payload *cmd_payload;
>   	dma_addr_t payload_addr;
>   	u32 flags;
> @@ -460,10 +512,10 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
>   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
>   	payload = &cmd_payload->hdr_init_local;
>   
> -	payload->hdr_table_addr = cpu_to_le64(addr);
> +	SET_DMA_FIELD(ipa->version, payload, hdr_table_addr, addr);
>   	flags = u32_encode_bits(size, HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
>   	flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
> -	payload->flags = cpu_to_le32(flags);
> +	CMD_FIELD(ipa->version, payload, flags) = cpu_to_le32(flags);
>   
>   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
> @@ -509,8 +561,11 @@ void ipa_cmd_register_write_add(struct ipa_trans *trans, u32 offset, u32 value,
>   
>   	} else {
>   		flags = 0;	/* SKIP_CLEAR flag is always 0 */
> -		options = u16_encode_bits(clear_option,
> -					  REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
> +		if (ipa->version > IPA_VERSION_2_6L)
> +			options = u16_encode_bits(clear_option,
> +					REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
> +		else
> +			options = 0;
>   	}
>   
>   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
> @@ -552,7 +607,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
>   {
>   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
>   	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM;
> -	struct ipa_cmd_hw_dma_mem_mem *payload;
> +	enum ipa_version version = ipa->version;
> +	union ipa_cmd_hw_dma_mem_mem *payload;
>   	union ipa_cmd_payload *cmd_payload;
>   	enum dma_data_direction direction;
>   	dma_addr_t payload_addr;
> @@ -571,8 +627,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
>   	/* payload->clear_after_read was reserved prior to IPA v4.0.  It's
>   	 * never needed for current code, so it's 0 regardless of version.
>   	 */
> -	payload->size = cpu_to_le16(size);
> -	payload->local_addr = cpu_to_le16(offset);
> +	CMD_FIELD(version, payload, size) = cpu_to_le16(size);
> +	CMD_FIELD(version, payload, local_addr) = cpu_to_le16(offset);
>   	/* payload->flags:
>   	 *   direction:		0 = write to IPA, 1 read from IPA
>   	 * Starting at v4.0 these are reserved; either way, all zero:
> @@ -582,8 +638,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
>   	 * since both values are 0 we won't bother OR'ing them in.
>   	 */
>   	flags = toward_ipa ? 0 : DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK;
> -	payload->flags = cpu_to_le16(flags);
> -	payload->system_addr = cpu_to_le64(addr);
> +	CMD_FIELD(version, payload, flags) = cpu_to_le16(flags);
> +	SET_DMA_FIELD(version, payload, system_addr, addr);
>   
>   	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
>   
> @@ -599,11 +655,17 @@ static void ipa_cmd_ip_tag_status_add(struct ipa_trans *trans)
>   	struct ipa_cmd_ip_packet_tag_status *payload;
>   	union ipa_cmd_payload *cmd_payload;
>   	dma_addr_t payload_addr;
> +	u64 tag_mask;
> +
> +	if (trans->dma_subsys->version <= IPA_VERSION_2_6L)
> +		tag_mask = IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK;
> +	else
> +		tag_mask = IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK;
>   
>   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
>   	payload = &cmd_payload->ip_packet_tag_status;
>   
> -	payload->tag = le64_encode_bits(0, IP_PACKET_TAG_STATUS_TAG_FMASK);
> +	payload->tag = le64_encode_bits(0, tag_mask);
>   
>   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
>   			  direction, opcode);
> diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
> index d197959cc032..459fb4830244 100644
> --- a/drivers/net/ipa/ipa_table.c
> +++ b/drivers/net/ipa/ipa_table.c
> @@ -8,6 +8,7 @@
>   #include <linux/kernel.h>
>   #include <linux/bits.h>
>   #include <linux/bitops.h>
> +#include <linux/module.h>
>   #include <linux/bitfield.h>
>   #include <linux/io.h>
>   #include <linux/build_bug.h>
> @@ -561,6 +562,19 @@ void ipa_table_config(struct ipa *ipa)
>   	ipa_route_config(ipa, true);
>   }
>   
> +static inline void *ipa_table_write(enum ipa_version version,
> +				   void *virt, u64 value)
> +{
> +	if (IPA_IS_64BIT(version)) {
> +		__le64 *ptr = virt;
> +		*ptr = cpu_to_le64(value);
> +	} else {
> +		__le32 *ptr = virt;
> +		*ptr = cpu_to_le32(value);
> +	}
> +	return virt + IPA_TABLE_ENTRY_SIZE(version);
> +}
> +
>   /*
>    * Initialize a coherent DMA allocation containing initialized filter and
>    * route table data.  This is used when initializing or resetting the IPA
> @@ -602,10 +616,11 @@ void ipa_table_config(struct ipa *ipa)
>   int ipa_table_init(struct ipa *ipa)
>   {
>   	u32 count = max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX);
> +	enum ipa_version version = ipa->version;
>   	struct device *dev = &ipa->pdev->dev;
> +	u64 filter_map = ipa->filter_map << 1;
>   	dma_addr_t addr;
> -	__le64 le_addr;
> -	__le64 *virt;
> +	void *virt;
>   	size_t size;
>   
>   	ipa_table_validate_build();
> @@ -626,19 +641,21 @@ int ipa_table_init(struct ipa *ipa)
>   	ipa->table_addr = addr;
>   
>   	/* First slot is the zero rule */
> -	*virt++ = 0;
> +	virt = ipa_table_write(version, virt, 0);
>   
>   	/* Next is the filter table bitmap.  The "soft" bitmap value
>   	 * must be converted to the hardware representation by shifting
>   	 * it left one position.  (Bit 0 represents global filtering,
>   	 * which is possible but not used.)
>   	 */
> -	*virt++ = cpu_to_le64((u64)ipa->filter_map << 1);
> +	if (version <= IPA_VERSION_2_6L)
> +		filter_map |= 1;
> +
> +	virt = ipa_table_write(version, virt, filter_map);
>   
>   	/* All the rest contain the DMA address of the zero rule */
> -	le_addr = cpu_to_le64(addr);
>   	while (count--)
> -		*virt++ = le_addr;
> +		virt = ipa_table_write(version, virt, addr);
>   
>   	return 0;
>   }
> diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
> index 78a168ce6558..6e12fc49e45b 100644
> --- a/drivers/net/ipa/ipa_table.h
> +++ b/drivers/net/ipa/ipa_table.h
> @@ -43,7 +43,7 @@ bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask);
>    */
>   static inline bool ipa_table_hash_support(struct ipa *ipa)
>   {
> -	return ipa->version != IPA_VERSION_4_2;
> +	return ipa->version != IPA_VERSION_4_2 && ipa->version > IPA_VERSION_2_6L;
>   }
>   
>   /**
> 
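
The 32/64-bit table-entry helper in the quoted `ipa_table.c` hunk above can be modeled standalone. This is a simplified, hypothetical sketch (the name `table_write`, the `is_64bit` flag, and the omission of explicit endianness conversion are illustrative, not the driver's actual API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified model of the ipa_table_write() pattern from the patch:
 * 32-bit IPA versions (v2.x) store 4-byte table entries, 64-bit
 * versions store 8-byte entries, and the helper returns a pointer to
 * the next slot.  cpu_to_le*() conversion is omitted; a little-endian
 * host is assumed for this sketch. */
static void *table_write(int is_64bit, void *virt, uint64_t value)
{
	if (is_64bit) {
		memcpy(virt, &value, 8);	/* one 8-byte slot */
		return (uint8_t *)virt + 8;
	}
	uint32_t v32 = (uint32_t)value;		/* truncate to 32 bits */
	memcpy(virt, &v32, 4);			/* one 4-byte slot */
	return (uint8_t *)virt + 4;
}
```

Returning the advanced pointer lets the caller chain writes (zero rule, filter bitmap, then the zero-rule address repeated), which is exactly the loop shape in the quoted `ipa_table_init()`.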


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints
  2021-09-20  3:08 ` [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 18:17     ` Sireesh Kodali
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> IPA v2.x endpoints are the same as the endpoints on later versions. The
> only big change was the addition of the "skip_config" flag. The other
> change is the backlog limit, which is a fixed number on IPA v2.6L.

Not much to say here.  Your patches are reasonably small, which
makes them easier to review (thank you).

					-Alex

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_endpoint.c | 65 ++++++++++++++++++++++------------
>   1 file changed, 43 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> index 7d3ab61cd890..024cf3a0ded0 100644
> --- a/drivers/net/ipa/ipa_endpoint.c
> +++ b/drivers/net/ipa/ipa_endpoint.c
> @@ -360,8 +360,10 @@ void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable)
>   {
>   	u32 endpoint_id;
>   
> -	/* DELAY mode doesn't work correctly on IPA v4.2 */
> -	if (ipa->version == IPA_VERSION_4_2)
> +	/* DELAY mode doesn't work correctly on IPA v4.2
> +	 * Pausing is not supported on IPA v2.6L
> +	 */
> +	if (ipa->version == IPA_VERSION_4_2 || ipa->version <= IPA_VERSION_2_6L)
>   		return;
>   
>   	for (endpoint_id = 0; endpoint_id < IPA_ENDPOINT_MAX; endpoint_id++) {
> @@ -383,6 +385,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   {
>   	u32 initialized = ipa->initialized;
>   	struct ipa_trans *trans;
> +	u32 value = 0, value_mask = ~0;
>   	u32 count;
>   
>   	/* We need one command per modem TX endpoint.  We can get an upper
> @@ -398,6 +401,11 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   		return -EBUSY;
>   	}
>   
> +	if (ipa->version <= IPA_VERSION_2_6L) {
> +		value = aggr_force_close_fmask(true);
> +		value_mask = aggr_force_close_fmask(true);
> +	}
> +
>   	while (initialized) {
>   		u32 endpoint_id = __ffs(initialized);
>   		struct ipa_endpoint *endpoint;
> @@ -416,7 +424,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
>   		 * means status is disabled on the endpoint, and as a
>   		 * result all other fields in the register are ignored.
>   		 */
> -		ipa_cmd_register_write_add(trans, offset, 0, ~0, false);
> +		ipa_cmd_register_write_add(trans, offset, value, value_mask, false);
>   	}
>   
>   	ipa_cmd_pipeline_clear_add(trans);
> @@ -1531,8 +1539,10 @@ static void ipa_endpoint_program(struct ipa_endpoint *endpoint)
>   	ipa_endpoint_init_mode(endpoint);
>   	ipa_endpoint_init_aggr(endpoint);
>   	ipa_endpoint_init_deaggr(endpoint);
> -	ipa_endpoint_init_rsrc_grp(endpoint);
> -	ipa_endpoint_init_seq(endpoint);
> +	if (endpoint->ipa->version > IPA_VERSION_2_6L) {
> +		ipa_endpoint_init_rsrc_grp(endpoint);
> +		ipa_endpoint_init_seq(endpoint);
> +	}
>   	ipa_endpoint_status(endpoint);
>   }
>   
> @@ -1592,7 +1602,6 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
>   {
>   	struct device *dev = &endpoint->ipa->pdev->dev;
>   	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
> -	bool stop_channel;
>   	int ret;
>   
>   	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
> @@ -1613,7 +1622,6 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
>   {
>   	struct device *dev = &endpoint->ipa->pdev->dev;
>   	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
> -	bool start_channel;
>   	int ret;
>   
>   	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
> @@ -1750,23 +1758,33 @@ int ipa_endpoint_config(struct ipa *ipa)
>   	/* Find out about the endpoints supplied by the hardware, and ensure
>   	 * the highest one doesn't exceed the number we support.
>   	 */
> -	val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
> -
> -	/* Our RX is an IPA producer */
> -	rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
> -	max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
> -	if (max > IPA_ENDPOINT_MAX) {
> -		dev_err(dev, "too many endpoints (%u > %u)\n",
> -			max, IPA_ENDPOINT_MAX);
> -		return -EINVAL;
> -	}
> -	rx_mask = GENMASK(max - 1, rx_base);
> +	if (ipa->version <= IPA_VERSION_2_6L) {
> +		// FIXME Not used anywhere?
> +		if (ipa->version == IPA_VERSION_2_6L)
> +			val = ioread32(ipa->reg_virt +
> +					IPA_REG_V2_ENABLED_PIPES_OFFSET);
> +		/* IPA v2.6L supports 20 pipes */
> +		ipa->available = ipa->filter_map;
> +		return 0;
> +	} else {
> +		val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
> +
> +		/* Our RX is an IPA producer */
> +		rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
> +		max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
> +		if (max > IPA_ENDPOINT_MAX) {
> +			dev_err(dev, "too many endpoints (%u > %u)\n",
> +					max, IPA_ENDPOINT_MAX);
> +			return -EINVAL;
> +		}
> +		rx_mask = GENMASK(max - 1, rx_base);
>   
> -	/* Our TX is an IPA consumer */
> -	max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
> -	tx_mask = GENMASK(max - 1, 0);
> +		/* Our TX is an IPA consumer */
> +		max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
> +		tx_mask = GENMASK(max - 1, 0);
>   
> -	ipa->available = rx_mask | tx_mask;
> +		ipa->available = rx_mask | tx_mask;
> +	}
>   
>   	/* Check for initialized endpoints not supported by the hardware */
>   	if (ipa->initialized & ~ipa->available) {
> @@ -1865,6 +1883,9 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
>   			filter_map |= BIT(data->endpoint_id);
>   	}
>   
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		filter_map = 0x1fffff;
> +
>   	if (!ipa_filter_map_valid(ipa, filter_map))
>   		goto err_endpoint_exit;
>   
> 



* Re: [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map
  2021-09-20  3:08 ` [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 18:19     ` Sireesh Kodali
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> IPA v2.6L has an extra region to handle compression/decompression
> acceleration. This region is used by some modems during modem init.

So it has to be initialized?  (I guess so.)

The memory size register apparently doesn't express things in
units of 8 bytes either.

					-Alex
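
The units question above corresponds to the version split in the quoted `ipa_mem_config()` hunk. A minimal sketch of that scaling, mirroring the patch's logic (the helper name and boolean flag are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the patch's version-dependent SHARED_MEM decode: IPA v2.0
 * through v2.6L report the shared-memory base/size register fields in
 * 8-byte units, while the patch's else branch treats the fields as raw
 * bytes.  Which versions use which units is the point under review. */
static uint32_t shared_mem_to_bytes(int units_of_8, uint32_t reg_field)
{
	return units_of_8 ? 8u * reg_field : reg_field;
}
```

Keeping the scaling in one helper would also make the v2-vs-later difference auditable in a single place instead of an inline `if`/`else` pair.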

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_mem.c | 36 ++++++++++++++++++++++++++++++------
>   drivers/net/ipa/ipa_mem.h |  5 ++++-
>   2 files changed, 34 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
> index 8acc88070a6f..bfcdc7e08de2 100644
> --- a/drivers/net/ipa/ipa_mem.c
> +++ b/drivers/net/ipa/ipa_mem.c
> @@ -84,7 +84,7 @@ int ipa_mem_setup(struct ipa *ipa)
>   	/* Get a transaction to define the header memory region and to zero
>   	 * the processing context and modem memory regions.
>   	 */
> -	trans = ipa_cmd_trans_alloc(ipa, 4);
> +	trans = ipa_cmd_trans_alloc(ipa, 5);
>   	if (!trans) {
>   		dev_err(&ipa->pdev->dev, "no transaction for memory setup\n");
>   		return -EBUSY;
> @@ -107,8 +107,14 @@ int ipa_mem_setup(struct ipa *ipa)
>   	ipa_mem_zero_region_add(trans, IPA_MEM_AP_PROC_CTX);
>   	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM);
>   
> +	ipa_mem_zero_region_add(trans, IPA_MEM_ZIP);
> +
>   	ipa_trans_commit_wait(trans);
>   
> +	/* On IPA version <=2.6L (except 2.5) there is no PROC_CTX.  */
> +	if (ipa->version != IPA_VERSION_2_5 && ipa->version <= IPA_VERSION_2_6L)
> +		return 0;
> +
>   	/* Tell the hardware where the processing context area is located */
>   	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
>   	offset = ipa->mem_offset + mem->offset;
> @@ -147,6 +153,11 @@ static bool ipa_mem_id_valid(struct ipa *ipa, enum ipa_mem_id mem_id)
>   	case IPA_MEM_END_MARKER:	/* pseudo region */
>   		break;
>   
> +	case IPA_MEM_ZIP:
> +		if (version == IPA_VERSION_2_6L)
> +			return true;
> +		break;
> +
>   	case IPA_MEM_STATS_TETHERING:
>   	case IPA_MEM_STATS_DROP:
>   		if (version < IPA_VERSION_4_0)
> @@ -319,10 +330,15 @@ int ipa_mem_config(struct ipa *ipa)
>   	/* Check the advertised location and size of the shared memory area */
>   	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
>   
> -	/* The fields in the register are in 8 byte units */
> -	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> -	/* Make sure the end is within the region's mapped space */
> -	mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> +	if (IPA_VERSION_RANGE(ipa->version, 2_0, 2_6L)) {
> +		/* The fields in the register are in 8 byte units */
> +		ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> +		/* Make sure the end is within the region's mapped space */
> +		mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> +	} else {
> +		ipa->mem_offset = u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> +		mem_size = u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> +	}
>   
>   	/* If the sizes don't match, issue a warning */
>   	if (ipa->mem_offset + mem_size < ipa->mem_size) {
> @@ -564,6 +580,10 @@ static int ipa_smem_init(struct ipa *ipa, u32 item, size_t size)
>   		return -EINVAL;
>   	}
>   
> +	/* IPA v2.6L does not use IOMMU */
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		return 0;
> +
>   	domain = iommu_get_domain_for_dev(dev);
>   	if (!domain) {
>   		dev_err(dev, "no IOMMU domain found for SMEM\n");
> @@ -591,6 +611,9 @@ static void ipa_smem_exit(struct ipa *ipa)
>   	struct device *dev = &ipa->pdev->dev;
>   	struct iommu_domain *domain;
>   
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		return;
> +
>   	domain = iommu_get_domain_for_dev(dev);
>   	if (domain) {
>   		size_t size;
> @@ -622,7 +645,8 @@ int ipa_mem_init(struct ipa *ipa, const struct ipa_mem_data *mem_data)
>   	ipa->mem_count = mem_data->local_count;
>   	ipa->mem = mem_data->local;
>   
> -	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64));
> +	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, IPA_IS_64BIT(ipa->version) ?
> +					DMA_BIT_MASK(64) : DMA_BIT_MASK(32));
>   	if (ret) {
>   		dev_err(dev, "error %d setting DMA mask\n", ret);
>   		return ret;
> diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
> index 570bfdd99bff..be91cb38b6a8 100644
> --- a/drivers/net/ipa/ipa_mem.h
> +++ b/drivers/net/ipa/ipa_mem.h
> @@ -47,8 +47,10 @@ enum ipa_mem_id {
>   	IPA_MEM_UC_INFO,		/* 0 canaries */
>   	IPA_MEM_V4_FILTER_HASHED,	/* 2 canaries */
>   	IPA_MEM_V4_FILTER,		/* 2 canaries */
> +	IPA_MEM_V4_FILTER_AP,		/* 2 canaries (IPA v2.0) */
>   	IPA_MEM_V6_FILTER_HASHED,	/* 2 canaries */
>   	IPA_MEM_V6_FILTER,		/* 2 canaries */
> +	IPA_MEM_V6_FILTER_AP,		/* 0 canaries (IPA v2.0) */
>   	IPA_MEM_V4_ROUTE_HASHED,	/* 2 canaries */
>   	IPA_MEM_V4_ROUTE,		/* 2 canaries */
>   	IPA_MEM_V6_ROUTE_HASHED,	/* 2 canaries */
> @@ -57,7 +59,8 @@ enum ipa_mem_id {
>   	IPA_MEM_AP_HEADER,		/* 0 canaries, optional */
>   	IPA_MEM_MODEM_PROC_CTX,		/* 2 canaries */
>   	IPA_MEM_AP_PROC_CTX,		/* 0 canaries */
> -	IPA_MEM_MODEM,			/* 0/2 canaries */
> +	IPA_MEM_ZIP,			/* 1 canary (IPA v2.6L) */
> +	IPA_MEM_MODEM,			/* 0-2 canaries */
>   	IPA_MEM_UC_EVENT_RING,		/* 1 canary, optional */
>   	IPA_MEM_PDN_CONFIG,		/* 0/2 canaries (IPA v4.0+) */
>   	IPA_MEM_STATS_QUOTA_MODEM,	/* 2/4 canaries (IPA v4.0+) */
> 



* Re: [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface
  2021-09-20  3:08 ` [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 18:22     ` Sireesh Kodali
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> On IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETED, so we have
> to rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know when it's
> ready. We add a function here that marks uc_ready = true. This function
> is called by ipa_uc.c when IPA_UC_RESPONSE_INIT_COMPLETED is handled.

This should use the new ipa_mem_find() interface for getting the
memory information for the ZIP region.

I don't know where the IPA_UC_RESPONSE_INIT_COMPLETED gets sent
but I presume it ends up calling ipa_qmi_signal_uc_loaded().

I think actually the DRIVER_INIT_COMPLETE message from the modem
is saying "I finished initializing the microcontroller."  And
I've wondered why there is a duplicate mechanism.  Maybe there
was a race or something.

					-Alex
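
The two readiness paths being discussed can be compressed into a toy model (all names and the version enum here are hypothetical; the real driver tracks more QMI state than this):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the dual readiness mechanism: on IPA v2.x only the
 * microcontroller's INIT_COMPLETED response marks the interface ready,
 * while on v3+ the modem's DRIVER_INIT_COMPLETE request does. */
enum ver { V2_6L = 0, V3_0 = 1 };

struct qmi_state {
	enum ver version;
	bool modem_init_complete;	/* DRIVER_INIT_COMPLETE seen (v3+) */
	bool uc_init_complete;		/* IPA_UC_RESPONSE_INIT_COMPLETED seen */
};

static bool qmi_ready(const struct qmi_state *s)
{
	if (s->version <= V2_6L)
		return s->uc_init_complete;
	return s->modem_init_complete;
}
```

Under this model, `ipa_qmi_signal_uc_loaded()` is simply the v2.x setter for `uc_init_complete`, which matches the patch's description of it being called from the uc interrupt handler.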

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> ---
>   drivers/net/ipa/ipa_qmi.c | 27 ++++++++++++++++++++++++++-
>   drivers/net/ipa/ipa_qmi.h | 10 ++++++++++
>   2 files changed, 36 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
> index 7e2fe701cc4d..876e2a004f70 100644
> --- a/drivers/net/ipa/ipa_qmi.c
> +++ b/drivers/net/ipa/ipa_qmi.c
> @@ -68,6 +68,11 @@
>    * - The INDICATION_REGISTER request and INIT_COMPLETE indication are
>    *   optional for non-initial modem boots, and have no bearing on the
>    *   determination of when things are "ready"
> + *
> + * Note that on IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETE
> + * request. Thus, we rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know
> + * when the uc is ready. The rest of the process is the same on IPA v2.x and
> + * later IPA versions
>    */
>   
>   #define IPA_HOST_SERVICE_SVC_ID		0x31
> @@ -345,7 +350,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
>   			req.hdr_proc_ctx_tbl_info.start + mem->size - 1;
>   	}
>   
> -	/* Nothing to report for the compression table (zip_tbl_info) */
> +	mem = &ipa->mem[IPA_MEM_ZIP];
> +	if (mem->size) {
> +		req.zip_tbl_info_valid = 1;
> +		req.zip_tbl_info.start = ipa->mem_offset + mem->offset;
> +		req.zip_tbl_info.end = ipa->mem_offset + mem->size - 1;
> +	}
>   
>   	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE_HASHED);
>   	if (mem->size) {
> @@ -525,6 +535,21 @@ int ipa_qmi_setup(struct ipa *ipa)
>   	return ret;
>   }
>   
> +/* With IPA v2 modem is not required to send DRIVER_INIT_COMPLETE request to AP.
> + * We start operation as soon as IPA_UC_RESPONSE_INIT_COMPLETED irq is triggered.
> + */
> +void ipa_qmi_signal_uc_loaded(struct ipa *ipa)
> +{
> +	struct ipa_qmi *ipa_qmi = &ipa->qmi;
> +
> +	/* This is needed only on IPA 2.x */
> +	if (ipa->version > IPA_VERSION_2_6L)
> +		return;
> +
> +	ipa_qmi->uc_ready = true;
> +	ipa_qmi_ready(ipa_qmi);
> +}
> +
>   /* Tear down IPA QMI handles */
>   void ipa_qmi_teardown(struct ipa *ipa)
>   {
> diff --git a/drivers/net/ipa/ipa_qmi.h b/drivers/net/ipa/ipa_qmi.h
> index 856ef629ccc8..4962d88b0d22 100644
> --- a/drivers/net/ipa/ipa_qmi.h
> +++ b/drivers/net/ipa/ipa_qmi.h
> @@ -55,6 +55,16 @@ struct ipa_qmi {
>    */
>   int ipa_qmi_setup(struct ipa *ipa);
>   
> +/**
> + * ipa_qmi_signal_uc_loaded() - Signal that the UC has been loaded
> + * @ipa:		IPA pointer
> + *
> + * This is called when the uc indicates that it is ready. This exists, because
> + * on IPA v2.x, the modem does not send a DRIVER_INIT_COMPLETED. Thus we have
> + * to rely on the uc's INIT_COMPLETED response to know if it was initialized
> + */
> +void ipa_qmi_signal_uc_loaded(struct ipa *ipa);
> +
>   /**
>    * ipa_qmi_teardown() - Tear down IPA QMI handles
>    * @ipa:		IPA pointer
> 



* Re: [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller
  2021-09-20  3:08 ` [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  0 siblings, 0 replies; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> There are some minor differences between IPA v2.x and later revisions
> with regard to the uc. The biggest difference is the shared memory's
> layout. There are also some changes to the command numbers, but these
> are not too important, since the mainline driver doesn't use them.

It's a shame that so much has to be rearranged when the
structure definitions are changed.  If I spent more time
thinking about this I might suggest a different way of
abstracting the two, but for now this looks fine.

					-Alex
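
The union-plus-accessor-macro pattern the patch uses can be demonstrated in isolation. A toy version (the layouts below are invented for illustration and are not the real shared-memory map; note both branches of the ternary must yield the same pointer type, which holds whenever the field has the same type in both layout variants):

```c
#include <assert.h>
#include <stdint.h>

/* Toy version of the patch's UC_FIELD pattern: two layout variants in
 * a union, with a macro selecting the right member by version.  The v3
 * layout inserts an extra field, shifting everything after it. */
union uc_area {
	struct {
		uint8_t command;
		uint8_t pad[3];
		uint32_t response_param;
	} v2;
	struct {
		uint8_t command;
		uint8_t pad[3];
		uint32_t command_param_hi;	/* extra field on v3 */
		uint32_t response_param;
	} v3;
};

/* Evaluates to an lvalue of the field in the version-appropriate layout */
#define UC_FIELD(ver, area, field) \
	(*((ver) >= 3 ? &(area)->v3.field : &(area)->v2.field))
```

The cost of this approach, as noted above, is that every access site must go through the macro; an alternative would be normalizing into a version-independent host structure once, at read time.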


> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_uc.c | 96 ++++++++++++++++++++++++++--------------
>   1 file changed, 63 insertions(+), 33 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_uc.c b/drivers/net/ipa/ipa_uc.c
> index 856e55a080a7..bf6b25098301 100644
> --- a/drivers/net/ipa/ipa_uc.c
> +++ b/drivers/net/ipa/ipa_uc.c
> @@ -39,11 +39,12 @@
>   #define IPA_SEND_DELAY		100	/* microseconds */
>   
>   /**
> - * struct ipa_uc_mem_area - AP/microcontroller shared memory area
> + * union ipa_uc_mem_area - AP/microcontroller shared memory area
>    * @command:		command code (AP->microcontroller)
>    * @reserved0:		reserved bytes; avoid reading or writing
>    * @command_param:	low 32 bits of command parameter (AP->microcontroller)
>    * @command_param_hi:	high 32 bits of command parameter (AP->microcontroller)
> + *			Available since IPA v3.0
>    *
>    * @response:		response code (microcontroller->AP)
>    * @reserved1:		reserved bytes; avoid reading or writing
> @@ -59,31 +60,58 @@
>    * @reserved3:		reserved bytes; avoid reading or writing
>    * @interface_version:	hardware-reported interface version
>    * @reserved4:		reserved bytes; avoid reading or writing
> + * @reserved5:		reserved bytes; avoid reading or writing
>    *
>    * A shared memory area at the base of IPA resident memory is used for
>    * communication with the microcontroller.  The region is 128 bytes in
>    * size, but only the first 40 bytes (structured this way) are used.
>    */
> -struct ipa_uc_mem_area {
> -	u8 command;		/* enum ipa_uc_command */
> -	u8 reserved0[3];
> -	__le32 command_param;
> -	__le32 command_param_hi;
> -	u8 response;		/* enum ipa_uc_response */
> -	u8 reserved1[3];
> -	__le32 response_param;
> -	u8 event;		/* enum ipa_uc_event */
> -	u8 reserved2[3];
> -
> -	__le32 event_param;
> -	__le32 first_error_address;
> -	u8 hw_state;
> -	u8 warning_counter;
> -	__le16 reserved3;
> -	__le16 interface_version;
> -	__le16 reserved4;
> +union ipa_uc_mem_area {
> +	struct {
> +		u8 command;		/* enum ipa_uc_command */
> +		u8 reserved0[3];
> +		__le32 command_param;
> +		u8 response;		/* enum ipa_uc_response */
> +		u8 reserved1[3];
> +		__le32 response_param;
> +		u8 event;		/* enum ipa_uc_event */
> +		u8 reserved2[3];
> +
> +		__le32 event_param;
> +		__le32 reserved3;
> +		__le32 first_error_address;
> +		u8 hw_state;
> +		u8 warning_counter;
> +		__le16 reserved4;
> +		__le16 interface_version;
> +		__le16 reserved5;
> +	} v2;
> +	struct {
> +		u8 command;		/* enum ipa_uc_command */
> +		u8 reserved0[3];
> +		__le32 command_param;
> +		__le32 command_param_hi;
> +		u8 response;		/* enum ipa_uc_response */
> +		u8 reserved1[3];
> +		__le32 response_param;
> +		u8 event;		/* enum ipa_uc_event */
> +		u8 reserved2[3];
> +
> +		__le32 event_param;
> +		__le32 first_error_address;
> +		u8 hw_state;
> +		u8 warning_counter;
> +		__le16 reserved3;
> +		__le16 interface_version;
> +		__le16 reserved4;
> +	} v3;
>   };
>   
> +#define UC_FIELD(_ipa, _field)			\
> +	*((_ipa->version >= IPA_VERSION_3_0) ?	\
> +	  &(ipa_uc_shared(_ipa)->v3._field) :	\
> +	  &(ipa_uc_shared(_ipa)->v2._field))
> +
>   /** enum ipa_uc_command - commands from the AP to the microcontroller */
>   enum ipa_uc_command {
>   	IPA_UC_COMMAND_NO_OP		= 0x0,
> @@ -95,6 +123,7 @@ enum ipa_uc_command {
>   	IPA_UC_COMMAND_CLK_UNGATE	= 0x6,
>   	IPA_UC_COMMAND_MEMCPY		= 0x7,
>   	IPA_UC_COMMAND_RESET_PIPE	= 0x8,
> +	/* Next two commands are present for IPA v3.0+ */
>   	IPA_UC_COMMAND_REG_WRITE	= 0x9,
>   	IPA_UC_COMMAND_GSI_CH_EMPTY	= 0xa,
>   };
> @@ -114,7 +143,7 @@ enum ipa_uc_event {
>   	IPA_UC_EVENT_LOG_INFO		= 0x2,
>   };
>   
> -static struct ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
> +static union ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
>   {
>   	const struct ipa_mem *mem = ipa_mem_find(ipa, IPA_MEM_UC_SHARED);
>   	u32 offset = ipa->mem_offset + mem->offset;
> @@ -125,22 +154,22 @@ static struct ipa_uc_mem_area *ipa_uc_shared(struct ipa *ipa)
>   /* Microcontroller event IPA interrupt handler */
>   static void ipa_uc_event_handler(struct ipa *ipa, enum ipa_irq_id irq_id)
>   {
> -	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
>   	struct device *dev = &ipa->pdev->dev;
> +	u32 event = UC_FIELD(ipa, event);
>   
> -	if (shared->event == IPA_UC_EVENT_ERROR)
> +	if (event == IPA_UC_EVENT_ERROR)
>   		dev_err(dev, "microcontroller error event\n");
> -	else if (shared->event != IPA_UC_EVENT_LOG_INFO)
> +	else if (event != IPA_UC_EVENT_LOG_INFO)
>   		dev_err(dev, "unsupported microcontroller event %u\n",
> -			shared->event);
> +			event);
>   	/* The LOG_INFO event can be safely ignored */
>   }
>   
>   /* Microcontroller response IPA interrupt handler */
>   static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
>   {
> -	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
>   	struct device *dev = &ipa->pdev->dev;
> +	u32 response = UC_FIELD(ipa, response);
>   
>   	/* An INIT_COMPLETED response message is sent to the AP by the
>   	 * microcontroller when it is operational.  Other than this, the AP
> @@ -150,20 +179,21 @@ static void ipa_uc_response_hdlr(struct ipa *ipa, enum ipa_irq_id irq_id)
>   	 * We can drop the power reference taken in ipa_uc_power() once we
>   	 * know the microcontroller has finished its initialization.
>   	 */
> -	switch (shared->response) {
> +	switch (response) {
>   	case IPA_UC_RESPONSE_INIT_COMPLETED:
>   		if (ipa->uc_powered) {
>   			ipa->uc_loaded = true;
>   			pm_runtime_mark_last_busy(dev);
>   			(void)pm_runtime_put_autosuspend(dev);
>   			ipa->uc_powered = false;
> +			ipa_qmi_signal_uc_loaded(ipa);
>   		} else {
>   			dev_warn(dev, "unexpected init_completed response\n");
>   		}
>   		break;
>   	default:
>   		dev_warn(dev, "unsupported microcontroller response %u\n",
> -			 shared->response);
> +			 response);
>   		break;
>   	}
>   }
> @@ -216,16 +246,16 @@ void ipa_uc_power(struct ipa *ipa)
>   /* Send a command to the microcontroller */
>   static void send_uc_command(struct ipa *ipa, u32 command, u32 command_param)
>   {
> -	struct ipa_uc_mem_area *shared = ipa_uc_shared(ipa);
>   	u32 offset;
>   	u32 val;
>   
>   	/* Fill in the command data */
> -	shared->command = command;
> -	shared->command_param = cpu_to_le32(command_param);
> -	shared->command_param_hi = 0;
> -	shared->response = 0;
> -	shared->response_param = 0;
> +	UC_FIELD(ipa, command) = command;
> +	UC_FIELD(ipa, command_param) = cpu_to_le32(command_param);
> +	if (ipa->version >= IPA_VERSION_3_0)
> +		ipa_uc_shared(ipa)->v3.command_param_hi = 0;
> +	UC_FIELD(ipa, response) = 0;
> +	UC_FIELD(ipa, response_param) = 0;
>   
>   	/* Use an interrupt to tell the microcontroller the command is ready */
>   	val = u32_encode_bits(1, UC_INTR_FMASK);
> 


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support
  2021-09-20  3:08 ` [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  0 siblings, 0 replies; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> The biggest changes are:
> 
> - Make SMP2P functions no-operation
> - Make resource init no-operation
> - Skip firmware loading
> - Add reset sequence

The only comments I have are not very major, so I'll wait
for a later review to suggest that sort of fine tuning.

					-Alex

> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> ---
>   drivers/net/ipa/ipa_main.c     | 19 ++++++++++++++++---
>   drivers/net/ipa/ipa_resource.c |  3 +++
>   drivers/net/ipa/ipa_smp2p.c    | 11 +++++++++--
>   3 files changed, 28 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> index ea6c4347f2c6..b437fbf95edf 100644
> --- a/drivers/net/ipa/ipa_main.c
> +++ b/drivers/net/ipa/ipa_main.c
> @@ -355,12 +355,22 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
>   	u32 granularity;
>   	u32 val;
>   
> +	if (ipa->version <= IPA_VERSION_2_6L) {
> +		iowrite32(1, ipa->reg_virt + IPA_REG_COMP_SW_RESET_OFFSET);
> +		iowrite32(0, ipa->reg_virt + IPA_REG_COMP_SW_RESET_OFFSET);
> +
> +		iowrite32(1, ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
> +	}
> +
>   	/* IPA v4.5+ has no backward compatibility register */
> -	if (version < IPA_VERSION_4_5) {
> +	if (version >= IPA_VERSION_2_5 && version < IPA_VERSION_4_5) {
>   		val = data->backward_compat;
>   		iowrite32(val, ipa->reg_virt + ipa_reg_bcr_offset(ipa->version));
>   	}
>   
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		return;
> +
>   	/* Implement some hardware workarounds */
>   	if (version >= IPA_VERSION_4_0 && version < IPA_VERSION_4_5) {
>   		/* Disable PA mask to allow HOLB drop */
> @@ -412,7 +422,8 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
>   static void ipa_hardware_deconfig(struct ipa *ipa)
>   {
>   	/* Mostly we just leave things as we set them. */
> -	ipa_hardware_dcd_deconfig(ipa);
> +	if (ipa->version > IPA_VERSION_2_6L)
> +		ipa_hardware_dcd_deconfig(ipa);
>   }
>   
>   /**
> @@ -765,8 +776,10 @@ static int ipa_probe(struct platform_device *pdev)
>   
>   	/* Otherwise we need to load the firmware and have Trust Zone validate
>   	 * and install it.  If that succeeds we can proceed with setup.
> +	 * But on IPA v2.6L we don't need to do firmware loading :D
>   	 */
> -	ret = ipa_firmware_load(dev);
> +	if (ipa->version > IPA_VERSION_2_6L)
> +		ret = ipa_firmware_load(dev);
>   	if (ret)
>   		goto err_deconfig;
>   
> diff --git a/drivers/net/ipa/ipa_resource.c b/drivers/net/ipa/ipa_resource.c
> index e3da95d69409..36a72324d828 100644
> --- a/drivers/net/ipa/ipa_resource.c
> +++ b/drivers/net/ipa/ipa_resource.c
> @@ -162,6 +162,9 @@ int ipa_resource_config(struct ipa *ipa, const struct ipa_resource_data *data)
>   {
>   	u32 i;
>   
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		return 0;
> +
>   	if (!ipa_resource_limits_valid(ipa, data))
>   		return -EINVAL;
>   
> diff --git a/drivers/net/ipa/ipa_smp2p.c b/drivers/net/ipa/ipa_smp2p.c
> index df7639c39d71..fa4a9f1c196a 100644
> --- a/drivers/net/ipa/ipa_smp2p.c
> +++ b/drivers/net/ipa/ipa_smp2p.c
> @@ -233,6 +233,10 @@ int ipa_smp2p_init(struct ipa *ipa, bool modem_init)
>   	u32 valid_bit;
>   	int ret;
>   
> +	/* IPA v2.6L and earlier do not use SMP2P interrupts */
> +	if (ipa->version <= IPA_VERSION_2_6L)
> +		return 0;
> +
>   	valid_state = qcom_smem_state_get(dev, "ipa-clock-enabled-valid",
>   					  &valid_bit);
>   	if (IS_ERR(valid_state))
> @@ -302,6 +306,9 @@ void ipa_smp2p_exit(struct ipa *ipa)
>   {
>   	struct ipa_smp2p *smp2p = ipa->smp2p;
>   
> +	if (!smp2p)
> +		return;
> +
>   	if (smp2p->setup_ready_irq)
>   		ipa_smp2p_irq_exit(smp2p, smp2p->setup_ready_irq);
>   	ipa_smp2p_panic_notifier_unregister(smp2p);
> @@ -317,7 +324,7 @@ void ipa_smp2p_disable(struct ipa *ipa)
>   {
>   	struct ipa_smp2p *smp2p = ipa->smp2p;
>   
> -	if (!smp2p->setup_ready_irq)
> +	if (!smp2p || !smp2p->setup_ready_irq)
>   		return;
>   
>   	mutex_lock(&smp2p->mutex);
> @@ -333,7 +340,7 @@ void ipa_smp2p_notify_reset(struct ipa *ipa)
>   	struct ipa_smp2p *smp2p = ipa->smp2p;
>   	u32 mask;
>   
> -	if (!smp2p->notified)
> +	if (!smp2p || !smp2p->notified)
>   		return;
>   
>   	ipa_smp2p_power_release(ipa);
> 


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware
  2021-09-20  3:08 ` [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware Sireesh Kodali
@ 2021-10-13 22:30   ` Alex Elder
  2021-10-18 18:35     ` Sireesh Kodali
  0 siblings, 1 reply; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:30 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> This commit adds the config for IPA v2.0, v2.5, v2.6L. IPA v2.5 is found
> on msm8996. IPA v2.6L hardware is found on the following SoCs: msm8920,
> msm8940, msm8952, msm8953, msm8956, msm8976, sdm630, sdm660. No
> SoC-specific configuration is required in the IPA driver.
> 
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>

I will not look at this in great detail right now.  It looks
good to me, but I didn't notice where "channel_name" got
defined.  I'm not sure what the BCR value represents either.

					-Alex

> ---
>   drivers/net/ipa/Makefile        |   7 +-
>   drivers/net/ipa/ipa_data-v2.c   | 369 ++++++++++++++++++++++++++++++++
>   drivers/net/ipa/ipa_data-v3.1.c |   2 +-
>   drivers/net/ipa/ipa_data.h      |   3 +
>   drivers/net/ipa/ipa_main.c      |  15 ++
>   drivers/net/ipa/ipa_sysfs.c     |   6 +
>   6 files changed, 398 insertions(+), 4 deletions(-)
>   create mode 100644 drivers/net/ipa/ipa_data-v2.c
> 
> diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
> index 4abebc667f77..858fbf76cff3 100644
> --- a/drivers/net/ipa/Makefile
> +++ b/drivers/net/ipa/Makefile
> @@ -7,6 +7,7 @@ ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
>   				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
>   				ipa_sysfs.o
>   
> -ipa-y			+=	ipa_data-v3.1.o ipa_data-v3.5.1.o \
> -				ipa_data-v4.2.o ipa_data-v4.5.o \
> -				ipa_data-v4.9.o ipa_data-v4.11.o
> +ipa-y			+=	ipa_data-v2.o ipa_data-v3.1.o \
> +				ipa_data-v3.5.1.o ipa_data-v4.2.o \
> +				ipa_data-v4.5.o ipa_data-v4.9.o \
> +				ipa_data-v4.11.o
> diff --git a/drivers/net/ipa/ipa_data-v2.c b/drivers/net/ipa/ipa_data-v2.c
> new file mode 100644
> index 000000000000..869b8a1a45d6
> --- /dev/null
> +++ b/drivers/net/ipa/ipa_data-v2.c
> @@ -0,0 +1,369 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
> + * Copyright (C) 2019-2020 Linaro Ltd.
> + */
> +
> +#include <linux/log2.h>
> +
> +#include "ipa_data.h"
> +#include "ipa_endpoint.h"
> +#include "ipa_mem.h"
> +
> +/* Endpoint configuration for the IPA v2 hardware. */
> +static const struct ipa_gsi_endpoint_data ipa_endpoint_data[] = {
> +	[IPA_ENDPOINT_AP_COMMAND_TX] = {
> +		.ee_id		= GSI_EE_AP,
> +		.channel_id	= 3,
> +		.endpoint_id	= 3,
> +		.channel_name	= "cmd_tx",
> +		.toward_ipa	= true,
> +		.channel = {
> +			.tre_count	= 256,
> +			.event_count	= 256,
> +			.tlv_count	= 20,
> +		},
> +		.endpoint = {
> +			.config	= {
> +				.dma_mode	= true,
> +				.dma_endpoint	= IPA_ENDPOINT_AP_LAN_RX,
> +			},
> +		},
> +	},
> +	[IPA_ENDPOINT_AP_LAN_RX] = {
> +		.ee_id		= GSI_EE_AP,
> +		.channel_id	= 2,
> +		.endpoint_id	= 2,
> +		.channel_name	= "ap_lan_rx",
> +		.channel = {
> +			.tre_count	= 256,
> +			.event_count	= 256,
> +			.tlv_count	= 8,
> +		},
> +		.endpoint	= {
> +			.config	= {
> +				.aggregation	= true,
> +				.status_enable	= true,
> +				.rx = {
> +					.pad_align	= ilog2(sizeof(u32)),
> +				},
> +			},
> +		},
> +	},
> +	[IPA_ENDPOINT_AP_MODEM_TX] = {
> +		.ee_id		= GSI_EE_AP,
> +		.channel_id	= 4,
> +		.endpoint_id	= 4,
> +		.channel_name	= "ap_modem_tx",
> +		.toward_ipa	= true,
> +		.channel = {
> +			.tre_count	= 256,
> +			.event_count	= 256,
> +			.tlv_count	= 8,
> +		},
> +		.endpoint	= {
> +			.config	= {
> +				.qmap		= true,
> +				.status_enable	= true,
> +				.tx = {
> +					.status_endpoint =
> +						IPA_ENDPOINT_AP_LAN_RX,
> +				},
> +			},
> +		},
> +	},
> +	[IPA_ENDPOINT_AP_MODEM_RX] = {
> +		.ee_id		= GSI_EE_AP,
> +		.channel_id	= 5,
> +		.endpoint_id	= 5,
> +		.channel_name	= "ap_modem_rx",
> +		.toward_ipa	= false,
> +		.channel = {
> +			.tre_count	= 256,
> +			.event_count	= 256,
> +			.tlv_count	= 8,
> +		},
> +		.endpoint	= {
> +			.config = {
> +				.aggregation	= true,
> +				.qmap		= true,
> +			},
> +		},
> +	},
> +	[IPA_ENDPOINT_MODEM_LAN_TX] = {
> +		.ee_id		= GSI_EE_MODEM,
> +		.channel_id	= 6,
> +		.endpoint_id	= 6,
> +		.channel_name	= "modem_lan_tx",
> +		.toward_ipa	= true,
> +	},
> +	[IPA_ENDPOINT_MODEM_COMMAND_TX] = {
> +		.ee_id		= GSI_EE_MODEM,
> +		.channel_id	= 7,
> +		.endpoint_id	= 7,
> +		.channel_name	= "modem_cmd_tx",
> +		.toward_ipa	= true,
> +	},
> +	[IPA_ENDPOINT_MODEM_LAN_RX] = {
> +		.ee_id		= GSI_EE_MODEM,
> +		.channel_id	= 8,
> +		.endpoint_id	= 8,
> +		.channel_name	= "modem_lan_rx",
> +		.toward_ipa	= false,
> +	},
> +	[IPA_ENDPOINT_MODEM_AP_RX] = {
> +		.ee_id		= GSI_EE_MODEM,
> +		.channel_id	= 9,
> +		.endpoint_id	= 9,
> +		.channel_name	= "modem_ap_rx",
> +		.toward_ipa	= false,
> +	},
> +};
> +
> +static struct ipa_interconnect_data ipa_interconnect_data[] = {
> +	{
> +		.name = "memory",
> +		.peak_bandwidth	= 1200000,	/* 1200 MBps */
> +		.average_bandwidth = 100000,	/* 100 MBps */
> +	},
> +	{
> +		.name = "imem",
> +		.peak_bandwidth	= 350000,	/* 350 MBps */
> +		.average_bandwidth  = 0,	/* unused */
> +	},
> +	{
> +		.name = "config",
> +		.peak_bandwidth	= 40000,	/* 40 MBps */
> +		.average_bandwidth = 0,		/* unused */
> +	},
> +};
> +
> +static struct ipa_power_data ipa_power_data = {
> +	.core_clock_rate	= 200 * 1000 * 1000,	/* Hz */
> +	.interconnect_count	= ARRAY_SIZE(ipa_interconnect_data),
> +	.interconnect_data	= ipa_interconnect_data,
> +};
> +
> +/* IPA-resident memory region configuration for v2.0 */
> +static const struct ipa_mem ipa_mem_local_data_v2_0[IPA_MEM_COUNT] = {
> +	[IPA_MEM_UC_SHARED] = {
> +		.offset         = 0,
> +		.size           = 0x80,
> +		.canary_count   = 0,
> +	},
> +	[IPA_MEM_V4_FILTER] = {
> +		.offset		= 0x0080,
> +		.size		= 0x0058,
> +		.canary_count	= 0,
> +	},
> +	[IPA_MEM_V6_FILTER] = {
> +		.offset		= 0x00e0,
> +		.size		= 0x0058,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_V4_ROUTE] = {
> +		.offset		= 0x0140,
> +		.size		= 0x002c,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_V6_ROUTE] = {
> +		.offset		= 0x0170,
> +		.size		= 0x002c,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_MODEM_HEADER] = {
> +		.offset		= 0x01a0,
> +		.size		= 0x0140,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_AP_HEADER] = {
> +		.offset		= 0x02e0,
> +		.size		= 0x0048,
> +		.canary_count	= 0,
> +	},
> +	[IPA_MEM_MODEM] = {
> +		.offset		= 0x032c,
> +		.size		= 0x0dcc,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_V4_FILTER_AP] = {
> +		.offset		= 0x10fc,
> +		.size		= 0x0780,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_V6_FILTER_AP] = {
> +		.offset		= 0x187c,
> +		.size		= 0x055c,
> +		.canary_count	= 0,
> +	},
> +	[IPA_MEM_UC_INFO] = {
> +		.offset		= 0x1ddc,
> +		.size		= 0x0124,
> +		.canary_count	= 1,
> +	},
> +};
> +
> +static struct ipa_mem_data ipa_mem_data_v2_0 = {
> +	.local		= ipa_mem_local_data_v2_0,
> +	.smem_id	= 497,
> +	.smem_size	= 0x00001f00,
> +};
> +
> +/* Configuration data for IPAv2.0 */
> +const struct ipa_data ipa_data_v2_0  = {
> +	.version	= IPA_VERSION_2_0,
> +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> +	.endpoint_data	= ipa_endpoint_data,
> +	.mem_data	= &ipa_mem_data_v2_0,
> +	.power_data	= &ipa_power_data,
> +};
> +
> +/* IPA-resident memory region configuration for v2.5 */
> +static const struct ipa_mem ipa_mem_local_data_v2_5[IPA_MEM_COUNT] = {
> +	[IPA_MEM_UC_SHARED] = {
> +		.offset         = 0,
> +		.size           = 0x80,
> +		.canary_count   = 0,
> +	},
> +	[IPA_MEM_UC_INFO] = {
> +		.offset		= 0x0080,
> +		.size		= 0x0200,
> +		.canary_count	= 0,
> +	},
> +	[IPA_MEM_V4_FILTER] = {
> +		.offset		= 0x0288,
> +		.size		= 0x0058,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_V6_FILTER] = {
> +		.offset		= 0x02e8,
> +		.size		= 0x0058,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_V4_ROUTE] = {
> +		.offset		= 0x0348,
> +		.size		= 0x003c,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_V6_ROUTE] = {
> +		.offset		= 0x0388,
> +		.size		= 0x003c,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_MODEM_HEADER] = {
> +		.offset		= 0x03c8,
> +		.size		= 0x0140,
> +		.canary_count	= 1,
> +	},
> +	[IPA_MEM_MODEM_PROC_CTX] = {
> +		.offset		= 0x0510,
> +		.size		= 0x0200,
> +		.canary_count	= 2,
> +	},
> +	[IPA_MEM_AP_PROC_CTX] = {
> +		.offset		= 0x0710,
> +		.size		= 0x0200,
> +		.canary_count	= 0,
> +	},
> +	[IPA_MEM_MODEM] = {
> +		.offset		= 0x0914,
> +		.size		= 0x16a8,
> +		.canary_count	= 1,
> +	},
> +};
> +
> +static struct ipa_mem_data ipa_mem_data_v2_5 = {
> +	.local		= ipa_mem_local_data_v2_5,
> +	.smem_id	= 497,
> +	.smem_size	= 0x00002000,
> +};
> +
> +/* Configuration data for IPAv2.5 */
> +const struct ipa_data ipa_data_v2_5  = {
> +	.version	= IPA_VERSION_2_5,
> +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> +	.endpoint_data	= ipa_endpoint_data,
> +	.mem_data	= &ipa_mem_data_v2_5,
> +	.power_data	= &ipa_power_data,
> +};
> +
> +/* IPA-resident memory region configuration for v2.6L */
> +static const struct ipa_mem ipa_mem_local_data_v2_6L[IPA_MEM_COUNT] = {
> +	{
> +		.id		= IPA_MEM_UC_SHARED,
> +		.offset         = 0,
> +		.size           = 0x80,
> +		.canary_count   = 0,
> +	},
> +	{
> +		.id 		= IPA_MEM_UC_INFO,
> +		.offset		= 0x0080,
> +		.size		= 0x0200,
> +		.canary_count	= 0,
> +	},
> +	{
> +		.id		= IPA_MEM_V4_FILTER,
> +		.offset		= 0x0288,
> +		.size		= 0x0058,
> +		.canary_count	= 2,
> +	},
> +	{
> +		.id		= IPA_MEM_V6_FILTER,
> +		.offset		= 0x02e8,
> +		.size		= 0x0058,
> +		.canary_count	= 2,
> +	},
> +	{
> +		.id		= IPA_MEM_V4_ROUTE,
> +		.offset		= 0x0348,
> +		.size		= 0x003c,
> +		.canary_count	= 2,
> +	},
> +	{
> +		.id		= IPA_MEM_V6_ROUTE,
> +		.offset		= 0x0388,
> +		.size		= 0x003c,
> +		.canary_count	= 1,
> +	},
> +	{
> +		.id		= IPA_MEM_MODEM_HEADER,
> +		.offset		= 0x03c8,
> +		.size		= 0x0140,
> +		.canary_count	= 1,
> +	},
> +	{
> +		.id		= IPA_MEM_ZIP,
> +		.offset		= 0x0510,
> +		.size		= 0x0200,
> +		.canary_count	= 2,
> +	},
> +	{
> +		.id		= IPA_MEM_MODEM,
> +		.offset		= 0x0714,
> +		.size		= 0x18e8,
> +		.canary_count	= 1,
> +	},
> +	{
> +		.id		= IPA_MEM_END_MARKER,
> +		.offset		= 0x2000,
> +		.size		= 0,
> +		.canary_count	= 1,
> +	},
> +};
> +
> +static struct ipa_mem_data ipa_mem_data_v2_6L = {
> +	.local		= ipa_mem_local_data_v2_6L,
> +	.smem_id	= 497,
> +	.smem_size	= 0x00002000,
> +};
> +
> +/* Configuration data for IPAv2.6L */
> +const struct ipa_data ipa_data_v2_6L  = {
> +	.version	= IPA_VERSION_2_6L,
> +	/* Unfortunately we don't know what this BCR value corresponds to */
> +	.backward_compat = 0x1fff7f,
> +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> +	.endpoint_data	= ipa_endpoint_data,
> +	.mem_data	= &ipa_mem_data_v2_6L,
> +	.power_data	= &ipa_power_data,
> +};
> diff --git a/drivers/net/ipa/ipa_data-v3.1.c b/drivers/net/ipa/ipa_data-v3.1.c
> index 06ddb85f39b2..12d231232756 100644
> --- a/drivers/net/ipa/ipa_data-v3.1.c
> +++ b/drivers/net/ipa/ipa_data-v3.1.c
> @@ -6,7 +6,7 @@
>   
>   #include <linux/log2.h>
>   
> -#include "gsi.h"
> +#include "ipa_dma.h"
>   #include "ipa_data.h"
>   #include "ipa_endpoint.h"
>   #include "ipa_mem.h"
> diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
> index 7d62d49f414f..e7ce2e9388b6 100644
> --- a/drivers/net/ipa/ipa_data.h
> +++ b/drivers/net/ipa/ipa_data.h
> @@ -301,6 +301,9 @@ struct ipa_data {
>   	const struct ipa_power_data *power_data;
>   };
>   
> +extern const struct ipa_data ipa_data_v2_0;
> +extern const struct ipa_data ipa_data_v2_5;
> +extern const struct ipa_data ipa_data_v2_6L;
>   extern const struct ipa_data ipa_data_v3_1;
>   extern const struct ipa_data ipa_data_v3_5_1;
>   extern const struct ipa_data ipa_data_v4_2;
> diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> index b437fbf95edf..3ae5c5c6734b 100644
> --- a/drivers/net/ipa/ipa_main.c
> +++ b/drivers/net/ipa/ipa_main.c
> @@ -560,6 +560,18 @@ static int ipa_firmware_load(struct device *dev)
>   }
>   
>   static const struct of_device_id ipa_match[] = {
> +	{
> +		.compatible	= "qcom,ipa-v2.0",
> +		.data		= &ipa_data_v2_0,
> +	},
> +	{
> +		.compatible	= "qcom,msm8996-ipa",
> +		.data		= &ipa_data_v2_5,
> +	},
> +	{
> +		.compatible	= "qcom,msm8953-ipa",
> +		.data		= &ipa_data_v2_6L,
> +	},
>   	{
>   		.compatible	= "qcom,msm8998-ipa",
>   		.data		= &ipa_data_v3_1,
> @@ -632,6 +644,9 @@ static void ipa_validate_build(void)
>   static bool ipa_version_valid(enum ipa_version version)
>   {
>   	switch (version) {
> +	case IPA_VERSION_2_0:
> +	case IPA_VERSION_2_5:
> +	case IPA_VERSION_2_6L:
>   	case IPA_VERSION_3_0:
>   	case IPA_VERSION_3_1:
>   	case IPA_VERSION_3_5:
> diff --git a/drivers/net/ipa/ipa_sysfs.c b/drivers/net/ipa/ipa_sysfs.c
> index ff61dbdd70d8..f5d159f6bc06 100644
> --- a/drivers/net/ipa/ipa_sysfs.c
> +++ b/drivers/net/ipa/ipa_sysfs.c
> @@ -14,6 +14,12 @@
>   static const char *ipa_version_string(struct ipa *ipa)
>   {
>   	switch (ipa->version) {
> +	case IPA_VERSION_2_0:
> +		return "2.0";
> +	case IPA_VERSION_2_5:
> +		return "2.5";
> +	case IPA_VERSION_2_6L:
> +		return "2.6L";
>   	case IPA_VERSION_3_0:
>   		return "3.0";
>   	case IPA_VERSION_3_1:
> 


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA
  2021-09-20  3:08 ` [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA Sireesh Kodali
  2021-09-23 12:42   ` Rob Herring
@ 2021-10-13 22:31   ` Alex Elder
  1 sibling, 0 replies; 46+ messages in thread
From: Alex Elder @ 2021-10-13 22:31 UTC (permalink / raw)
  To: Sireesh Kodali, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Andy Gross, Bjorn Andersson, David S. Miller, Jakub Kicinski,
	Rob Herring,
	open list:OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS

On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> MSM8996 uses IPA v2.5 and MSM8953 uses IPA v2.6L
> 
> Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>

This looks good.  And if it's good enough for Rob, it
*must* be good.

					-Alex

> ---
>   Documentation/devicetree/bindings/net/qcom,ipa.yaml | 2 ++
>   1 file changed, 2 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/net/qcom,ipa.yaml b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
> index b8a0b392b24e..e857827bfa54 100644
> --- a/Documentation/devicetree/bindings/net/qcom,ipa.yaml
> +++ b/Documentation/devicetree/bindings/net/qcom,ipa.yaml
> @@ -44,6 +44,8 @@ description:
>   properties:
>     compatible:
>       enum:
> +      - qcom,msm8953-ipa
> +      - qcom,msm8996-ipa
>         - qcom,msm8998-ipa
>         - qcom,sc7180-ipa
>         - qcom,sc7280-ipa
> 


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration
  2021-10-13 22:28   ` Alex Elder
@ 2021-10-18 16:12     ` Sireesh Kodali
  0 siblings, 0 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-10-18 16:12 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 3:58 AM IST, Alex Elder wrote:
> On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> > From: Vladimir Lypak <vladimir.lypak@gmail.com>
> > 
> > The values in the enumeration were defined as bitmasks (base-2
> > exponents of the actual opcodes). However, they are not used as
> > bitmasks in the ipa_endpoint_status_skip() and
> > ipa_status_format_packet() functions, which compare them directly
> > against the opcode from the status packet. This commit converts
> > these values to the actual hardware constants.
> > 
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_endpoint.c | 8 ++++----
> >   1 file changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> > index 5528d97110d5..29227de6661f 100644
> > --- a/drivers/net/ipa/ipa_endpoint.c
> > +++ b/drivers/net/ipa/ipa_endpoint.c
> > @@ -41,10 +41,10 @@
> >   
> >   /** enum ipa_status_opcode - status element opcode hardware values */
> >   enum ipa_status_opcode {
> > -	IPA_STATUS_OPCODE_PACKET		= 0x01,
> > -	IPA_STATUS_OPCODE_DROPPED_PACKET	= 0x04,
> > -	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 0x08,
> > -	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 0x40,
> > +	IPA_STATUS_OPCODE_PACKET		= 0,
> > +	IPA_STATUS_OPCODE_DROPPED_PACKET	= 2,
> > +	IPA_STATUS_OPCODE_SUSPENDED_PACKET	= 3,
> > +	IPA_STATUS_OPCODE_PACKET_2ND_PASS	= 6,
>
> I haven't looked at how these symbols are used (whether you
> changed it at all), but I'm pretty sure this is wrong.
>
> The downstream tends to define "soft" symbols that must
> be mapped to their hardware equivalent values. So for
> example you might find a function ipa_pkt_status_parse()
> that translates between the hardware status structure
> and the abstracted "soft" status structure. In that
> function you see, for example, that hardware status
> opcode 0x1 is translated to IPAHAL_PKT_STATUS_OPCODE_PACKET,
> which downstream is defined to have value 0.
>
> In many places the upstream code eliminates that layer
> of indirection where possible. So enumerated constants
> are assigned specific values that match what the hardware
> uses.
>
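The downstream translation layer described above might look roughly like the sketch below. This is a hedged illustration only: the `IPAHAL_*` names, the `pkt_status_parse_opcode()` helper, and the 0x01/0x04/0x08/0x40 hardware values are taken solely from the review text, not from any real downstream header.

```c
#include <assert.h>

/* Illustrative sketch of the downstream pattern: the raw opcode read
 * from the hardware status element is translated into an abstract
 * "soft" enumeration whose values need not match the hardware.
 */
enum ipahal_pkt_status_opcode {
	IPAHAL_PKT_STATUS_OPCODE_PACKET = 0,
	IPAHAL_PKT_STATUS_OPCODE_DROPPED_PACKET,
	IPAHAL_PKT_STATUS_OPCODE_SUSPENDED_PACKET,
	IPAHAL_PKT_STATUS_OPCODE_PACKET_2ND_PASS,
	IPAHAL_PKT_STATUS_OPCODE_UNKNOWN,
};

static enum ipahal_pkt_status_opcode
pkt_status_parse_opcode(unsigned int hw_opcode)
{
	/* Hardware values here are the pre-patch bitmask-style opcodes */
	switch (hw_opcode) {
	case 0x01: return IPAHAL_PKT_STATUS_OPCODE_PACKET;
	case 0x04: return IPAHAL_PKT_STATUS_OPCODE_DROPPED_PACKET;
	case 0x08: return IPAHAL_PKT_STATUS_OPCODE_SUSPENDED_PACKET;
	case 0x40: return IPAHAL_PKT_STATUS_OPCODE_PACKET_2ND_PASS;
	default:   return IPAHAL_PKT_STATUS_OPCODE_UNKNOWN;
	}
}
```

Upstream removes this indirection by making the enum values equal the hardware constants, which is what the patch under review attempts.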

Looking at these again, I realised this patch is indeed wrong...
The status values are different on v2 and v3+. I guess the correct
approach here would be to use an inline function and pick the correct
status opcode, like how it's handled for register definitions.
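A rough sketch of that inline-helper approach, selecting the hardware opcode by IPA version the way the driver already does for register offsets. Every name and value below is illustrative (the v2 values follow this patch, the v3+ values the pre-patch enumeration), not taken from a real header.

```c
#include <assert.h>

enum ipa_version { IPA_VERSION_2_6L, IPA_VERSION_3_0 };

/* Abstract status opcodes; the hardware values differ per version */
enum ipa_status_opcode {
	IPA_STATUS_OPCODE_PACKET,
	IPA_STATUS_OPCODE_PACKET_2ND_PASS,
};

static inline unsigned int
ipa_status_opcode_val(enum ipa_version version, enum ipa_status_opcode opcode)
{
	int v2 = version <= IPA_VERSION_2_6L;

	switch (opcode) {
	case IPA_STATUS_OPCODE_PACKET:
		return v2 ? 0 : 0x01;	/* v2 value vs v3+ value */
	case IPA_STATUS_OPCODE_PACKET_2ND_PASS:
		return v2 ? 6 : 0x40;
	}
	return 0;
}
```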

Regards,
Sireesh

> -Alex
>
> >   };
> >   
> >   /** enum ipa_status_exception - status element exception type */
> > 


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support
  2021-10-13 22:28   ` Alex Elder
@ 2021-10-18 16:16     ` Sireesh Kodali
  0 siblings, 0 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-10-18 16:16 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 3:58 AM IST, Alex Elder wrote:
> On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> > From: Vladimir Lypak <vladimir.lypak@gmail.com>
> > 
> > IPA v2.x is 32-bit. Having an IPA_TABLE_ENTRY_SIZE macro makes it
> > easier to support both 32-bit and 64-bit IPA versions.
>
> This looks reasonable. At this point filter/route tables aren't
> really used, so this is a simple fix. You use IPA_IS_64BIT()
> here, but it isn't defined until patch 7, which I expect is a
> build problem.

Oof, I probably messed this up while re-ordering the commits... will fix

Regards,
Sireesh
>
> -Alex
>
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_qmi.c   | 10 ++++++----
> >   drivers/net/ipa/ipa_table.c | 29 +++++++++++++----------------
> >   drivers/net/ipa/ipa_table.h |  4 ++++
> >   3 files changed, 23 insertions(+), 20 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
> > index 90f3aec55b36..7e2fe701cc4d 100644
> > --- a/drivers/net/ipa/ipa_qmi.c
> > +++ b/drivers/net/ipa/ipa_qmi.c
> > @@ -308,12 +308,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE);
> >   	req.v4_route_tbl_info_valid = 1;
> >   	req.v4_route_tbl_info.start = ipa->mem_offset + mem->offset;
> > -	req.v4_route_tbl_info.count = mem->size / sizeof(__le64);
> > +	req.v4_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE);
> >   	req.v6_route_tbl_info_valid = 1;
> >   	req.v6_route_tbl_info.start = ipa->mem_offset + mem->offset;
> > -	req.v6_route_tbl_info.count = mem->size / sizeof(__le64);
> > +	req.v6_route_tbl_info.count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER);
> >   	req.v4_filter_tbl_start_valid = 1;
> > @@ -352,7 +352,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
> >   		req.v4_hash_route_tbl_info_valid = 1;
> >   		req.v4_hash_route_tbl_info.start =
> >   				ipa->mem_offset + mem->offset;
> > -		req.v4_hash_route_tbl_info.count = mem->size / sizeof(__le64);
> > +		req.v4_hash_route_tbl_info.count =
> > +				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   	}
> >   
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V6_ROUTE_HASHED);
> > @@ -360,7 +361,8 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
> >   		req.v6_hash_route_tbl_info_valid = 1;
> >   		req.v6_hash_route_tbl_info.start =
> >   			ipa->mem_offset + mem->offset;
> > -		req.v6_hash_route_tbl_info.count = mem->size / sizeof(__le64);
> > +		req.v6_hash_route_tbl_info.count =
> > +				mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   	}
> >   
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V4_FILTER_HASHED);
> > diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
> > index 1da334f54944..96c467c80a2e 100644
> > --- a/drivers/net/ipa/ipa_table.c
> > +++ b/drivers/net/ipa/ipa_table.c
> > @@ -118,7 +118,8 @@
> >    * 32-bit all-zero rule list terminator.  The "zero rule" is simply an
> >    * all-zero rule followed by the list terminator.
> >    */
> > -#define IPA_ZERO_RULE_SIZE		(2 * sizeof(__le32))
> > +#define IPA_ZERO_RULE_SIZE(version) \
> > +	 (IPA_IS_64BIT(version) ? 2 * sizeof(__le32) : sizeof(__le32))
> >   
> >   /* Check things that can be validated at build time. */
> >   static void ipa_table_validate_build(void)
> > @@ -132,12 +133,6 @@ static void ipa_table_validate_build(void)
> >   	 */
> >   	BUILD_BUG_ON(sizeof(dma_addr_t) > sizeof(__le64));
> >   
> > -	/* A "zero rule" is used to represent no filtering or no routing.
> > -	 * It is a 64-bit block of zeroed memory.  Code in ipa_table_init()
> > -	 * assumes that it can be written using a pointer to __le64.
> > -	 */
> > -	BUILD_BUG_ON(IPA_ZERO_RULE_SIZE != sizeof(__le64));
> > -
> >   	/* Impose a practical limit on the number of routes */
> >   	BUILD_BUG_ON(IPA_ROUTE_COUNT_MAX > 32);
> >   	/* The modem must be allotted at least one route table entry */
> > @@ -236,7 +231,7 @@ static dma_addr_t ipa_table_addr(struct ipa *ipa, bool filter_mask, u16 count)
> >   	/* Skip over the zero rule and possibly the filter mask */
> >   	skip = filter_mask ? 1 : 2;
> >   
> > -	return ipa->table_addr + skip * sizeof(*ipa->table_virt);
> > +	return ipa->table_addr + skip * IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   }
> >   
> >   static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
> > @@ -255,8 +250,8 @@ static void ipa_table_reset_add(struct gsi_trans *trans, bool filter,
> >   	if (filter)
> >   		first++;	/* skip over bitmap */
> >   
> > -	offset = mem->offset + first * sizeof(__le64);
> > -	size = count * sizeof(__le64);
> > +	offset = mem->offset + first * IPA_TABLE_ENTRY_SIZE(ipa->version);
> > +	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   	addr = ipa_table_addr(ipa, false, count);
> >   
> >   	ipa_cmd_dma_shared_mem_add(trans, offset, size, addr, true);
> > @@ -434,11 +429,11 @@ static void ipa_table_init_add(struct gsi_trans *trans, bool filter,
> >   		count = 1 + hweight32(ipa->filter_map);
> >   		hash_count = hash_mem->size ? count : 0;
> >   	} else {
> > -		count = mem->size / sizeof(__le64);
> > -		hash_count = hash_mem->size / sizeof(__le64);
> > +		count = mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> > +		hash_count = hash_mem->size / IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   	}
> > -	size = count * sizeof(__le64);
> > -	hash_size = hash_count * sizeof(__le64);
> > +	size = count * IPA_TABLE_ENTRY_SIZE(ipa->version);
> > +	hash_size = hash_count * IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   
> >   	addr = ipa_table_addr(ipa, filter, count);
> >   	hash_addr = ipa_table_addr(ipa, filter, hash_count);
> > @@ -621,7 +616,8 @@ int ipa_table_init(struct ipa *ipa)
> >   	 * by dma_alloc_coherent() is guaranteed to be a power-of-2 number
> >   	 * of pages, which satisfies the rule alignment requirement.
> >   	 */
> > -	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
> > +	size = IPA_ZERO_RULE_SIZE(ipa->version) +
> > +	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   	virt = dma_alloc_coherent(dev, size, &addr, GFP_KERNEL);
> >   	if (!virt)
> >   		return -ENOMEM;
> > @@ -653,7 +649,8 @@ void ipa_table_exit(struct ipa *ipa)
> >   	struct device *dev = &ipa->pdev->dev;
> >   	size_t size;
> >   
> > -	size = IPA_ZERO_RULE_SIZE + (1 + count) * sizeof(__le64);
> > +	size = IPA_ZERO_RULE_SIZE(ipa->version) +
> > +	       (1 + count) * IPA_TABLE_ENTRY_SIZE(ipa->version);
> >   
> >   	dma_free_coherent(dev, size, ipa->table_virt, ipa->table_addr);
> >   	ipa->table_addr = 0;
> > diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
> > index b6a9a0d79d68..78a168ce6558 100644
> > --- a/drivers/net/ipa/ipa_table.h
> > +++ b/drivers/net/ipa/ipa_table.h
> > @@ -10,6 +10,10 @@
> >   
> >   struct ipa;
> >   
> > +/* The size of a filter or route table entry */
> > +#define IPA_TABLE_ENTRY_SIZE(version)	\
> > +	(IPA_IS_64BIT(version) ? sizeof(__le64) : sizeof(__le32))
> > +
> >   /* The maximum number of filter table entries (IPv4, IPv6; hashed or not) */
> >   #define IPA_FILTER_COUNT_MAX	14
> >   
> > 


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface
  2021-10-13 22:29   ` Alex Elder
@ 2021-10-18 16:45     ` Sireesh Kodali
  0 siblings, 0 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-10-18 16:45 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 3:59 AM IST, Alex Elder wrote:
> On 9/19/21 10:07 PM, Sireesh Kodali wrote:
> > From: Vladimir Lypak <vladimir.lypak@gmail.com>
> > 
> > Establish callback-based interface to abstract GSI and BAM DMA differences.
> > Interface is based on prototypes from ipa_dma.h (old gsi.h). Callbacks
> > are stored in struct ipa_dma (old struct gsi) and assigned in gsi_init.
>
> This is interesting and seems to have been fairly easy to abstract
> this way. The patch is actually pretty straightforward, much more
> so than I would have expected. I think I'll have more to say about
> how to separate GSI from BAM in the future, but not today.
>
> -Alex

Yes, the GSI code was fairly easy to abstract. Thankfully, the dmaengine API
maps very nicely onto the existing GSI API.  I'm not sure if this was
intentional or accidental, but it's nice either way.

Perhaps in the future it might make sense to move the GSI code into a separate
dmaengine driver as well? In practice that should mean the IPA driver would
simply call into the dmaengine API, with no knowledge of the underlying
transport, and would remove the need for the BAM/GSI abstraction layer, since
the abstraction would be handled by dmaengine. I'm not sure how easy that
would be, though.

Regards,
Sireesh
>
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/gsi.c          |  30 ++++++--
> >   drivers/net/ipa/ipa_dma.h      | 133 ++++++++++++++++++++++-----------
> >   drivers/net/ipa/ipa_endpoint.c |  28 +++----
> >   drivers/net/ipa/ipa_main.c     |  18 ++---
> >   drivers/net/ipa/ipa_power.c    |   4 +-
> >   drivers/net/ipa/ipa_trans.c    |   2 +-
> >   6 files changed, 138 insertions(+), 77 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
> > index 74ae0d07f859..39d9ca620a9f 100644
> > --- a/drivers/net/ipa/gsi.c
> > +++ b/drivers/net/ipa/gsi.c
> > @@ -99,6 +99,10 @@
> >   
> >   #define GSI_ISR_MAX_ITER		50	/* Detect interrupt storms */
> >   
> > +static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id);
> > +static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id);
> > +static void gsi_exit(struct ipa_dma *gsi);
> > +
> >   /* An entry in an event ring */
> >   struct gsi_event {
> >   	__le64 xfer_ptr;
> > @@ -869,7 +873,7 @@ static int __gsi_channel_start(struct ipa_channel *channel, bool resume)
> >   }
> >   
> >   /* Start an allocated GSI channel */
> > -int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
> > +static int gsi_channel_start(struct ipa_dma *gsi, u32 channel_id)
> >   {
> >   	struct ipa_channel *channel = &gsi->channel[channel_id];
> >   	int ret;
> > @@ -924,7 +928,7 @@ static int __gsi_channel_stop(struct ipa_channel *channel, bool suspend)
> >   }
> >   
> >   /* Stop a started channel */
> > -int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
> > +static int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
> >   {
> >   	struct ipa_channel *channel = &gsi->channel[channel_id];
> >   	int ret;
> > @@ -941,7 +945,7 @@ int gsi_channel_stop(struct ipa_dma *gsi, u32 channel_id)
> >   }
> >   
> >   /* Reset and reconfigure a channel, (possibly) enabling the doorbell engine */
> > -void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
> > +static void gsi_channel_reset(struct ipa_dma *gsi, u32 channel_id, bool doorbell)
> >   {
> >   	struct ipa_channel *channel = &gsi->channel[channel_id];
> >   
> > @@ -1931,7 +1935,7 @@ int gsi_setup(struct ipa_dma *gsi)
> >   }
> >   
> >   /* Inverse of gsi_setup() */
> > -void gsi_teardown(struct ipa_dma *gsi)
> > +static void gsi_teardown(struct ipa_dma *gsi)
> >   {
> >   	gsi_channel_teardown(gsi);
> >   	gsi_irq_teardown(gsi);
> > @@ -2194,6 +2198,18 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
> >   
> >   	gsi->dev = dev;
> >   	gsi->version = version;
> > +	gsi->setup = gsi_setup;
> > +	gsi->teardown = gsi_teardown;
> > +	gsi->exit = gsi_exit;
> > +	gsi->suspend = gsi_suspend;
> > +	gsi->resume = gsi_resume;
> > +	gsi->channel_tre_max = gsi_channel_tre_max;
> > +	gsi->channel_trans_tre_max = gsi_channel_trans_tre_max;
> > +	gsi->channel_start = gsi_channel_start;
> > +	gsi->channel_stop = gsi_channel_stop;
> > +	gsi->channel_reset = gsi_channel_reset;
> > +	gsi->channel_suspend = gsi_channel_suspend;
> > +	gsi->channel_resume = gsi_channel_resume;
> >   
> >   	/* GSI uses NAPI on all channels.  Create a dummy network device
> >   	 * for the channel NAPI contexts to be associated with.
> > @@ -2250,7 +2266,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
> >   }
> >   
> >   /* Inverse of gsi_init() */
> > -void gsi_exit(struct ipa_dma *gsi)
> > +static void gsi_exit(struct ipa_dma *gsi)
> >   {
> >   	mutex_destroy(&gsi->mutex);
> >   	gsi_channel_exit(gsi);
> > @@ -2277,7 +2293,7 @@ void gsi_exit(struct ipa_dma *gsi)
> >    * substantially reduce pool memory requirements.  The number we
> >    * reduce it by matches the number added in ipa_trans_pool_init().
> >    */
> > -u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
> > +static u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
> >   {
> >   	struct ipa_channel *channel = &gsi->channel[channel_id];
> >   
> > @@ -2286,7 +2302,7 @@ u32 gsi_channel_tre_max(struct ipa_dma *gsi, u32 channel_id)
> >   }
> >   
> >   /* Returns the maximum number of TREs in a single transaction for a channel */
> > -u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
> > +static u32 gsi_channel_trans_tre_max(struct ipa_dma *gsi, u32 channel_id)
> >   {
> >   	struct ipa_channel *channel = &gsi->channel[channel_id];
> >   
> > diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
> > index d053929ca3e3..1a23e6ac5785 100644
> > --- a/drivers/net/ipa/ipa_dma.h
> > +++ b/drivers/net/ipa/ipa_dma.h
> > @@ -163,64 +163,96 @@ struct ipa_dma {
> >   	struct completion completion;	/* for global EE commands */
> >   	int result;			/* Negative errno (generic commands) */
> >   	struct mutex mutex;		/* protects commands, programming */
> > +
> > +	int (*setup)(struct ipa_dma *dma_subsys);
> > +	void (*teardown)(struct ipa_dma *dma_subsys);
> > +	void (*exit)(struct ipa_dma *dma_subsys);
> > +	void (*suspend)(struct ipa_dma *dma_subsys);
> > +	void (*resume)(struct ipa_dma *dma_subsys);
> > +	u32 (*channel_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	u32 (*channel_trans_tre_max)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	int (*channel_start)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	int (*channel_stop)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	void (*channel_reset)(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
> > +	int (*channel_suspend)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	int (*channel_resume)(struct ipa_dma *dma_subsys, u32 channel_id);
> > +	void (*trans_commit)(struct ipa_trans *trans, bool ring_db);
> >   };
> >   
> >   /**
> > - * gsi_setup() - Set up the GSI subsystem
> > - * @gsi:	Address of GSI structure embedded in an IPA structure
> > + * ipa_dma_setup() - Set up the DMA subsystem
> > + * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
> >    *
> >    * Return:	0 if successful, or a negative error code
> >    *
> > - * Performs initialization that must wait until the GSI hardware is
> > + * Performs initialization that must wait until the GSI/BAM hardware is
> >    * ready (including firmware loaded).
> >    */
> > -int gsi_setup(struct ipa_dma *dma_subsys);
> > +static inline int ipa_dma_setup(struct ipa_dma *dma_subsys)
> > +{
> > +	return dma_subsys->setup(dma_subsys);
> > +}
> >   
> >   /**
> > - * gsi_teardown() - Tear down GSI subsystem
> > - * @gsi:	GSI address previously passed to a successful gsi_setup() call
> > + * ipa_dma_teardown() - Tear down DMA subsystem
> > + * @dma_subsys:	ipa_dma address previously passed to a successful ipa_dma_setup() call
> >    */
> > -void gsi_teardown(struct ipa_dma *dma_subsys);
> > +static inline void ipa_dma_teardown(struct ipa_dma *dma_subsys)
> > +{
> > +	dma_subsys->teardown(dma_subsys);
> > +}
> >   
> >   /**
> > - * gsi_channel_tre_max() - Channel maximum number of in-flight TREs
> > - * @gsi:	GSI pointer
> > + * ipa_channel_tre_max() - Channel maximum number of in-flight TREs
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel whose limit is to be returned
> >    *
> >    * Return:	 The maximum number of TREs oustanding on the channel
> >    */
> > -u32 gsi_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline u32 ipa_channel_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_tre_max(dma_subsys, channel_id);
> > +}
> >   
> >   /**
> > - * gsi_channel_trans_tre_max() - Maximum TREs in a single transaction
> > - * @gsi:	GSI pointer
> > + * ipa_channel_trans_tre_max() - Maximum TREs in a single transaction
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel whose limit is to be returned
> >    *
> >    * Return:	 The maximum TRE count per transaction on the channel
> >    */
> > -u32 gsi_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline u32 ipa_channel_trans_tre_max(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_trans_tre_max(dma_subsys, channel_id);
> > +}
> >   
> >   /**
> > - * gsi_channel_start() - Start an allocated GSI channel
> > - * @gsi:	GSI pointer
> > + * ipa_channel_start() - Start an allocated DMA channel
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel to start
> >    *
> >    * Return:	0 if successful, or a negative error code
> >    */
> > -int gsi_channel_start(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline int ipa_channel_start(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_start(dma_subsys, channel_id);
> > +}
> >   
> >   /**
> > - * gsi_channel_stop() - Stop a started GSI channel
> > - * @gsi:	GSI pointer returned by gsi_setup()
> > + * ipa_channel_stop() - Stop a started DMA channel
> > + * @dma_subsys:	pointer to ipa_dma structure returned by ipa_dma_setup()
> >    * @channel_id:	Channel to stop
> >    *
> >    * Return:	0 if successful, or a negative error code
> >    */
> > -int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline int ipa_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_stop(dma_subsys, channel_id);
> > +}
> >   
> >   /**
> > - * gsi_channel_reset() - Reset an allocated GSI channel
> > - * @gsi:	GSI pointer
> > + * ipa_channel_reset() - Reset an allocated DMA channel
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel to be reset
> >    * @doorbell:	Whether to (possibly) enable the doorbell engine
> >    *
> > @@ -230,41 +262,49 @@ int gsi_channel_stop(struct ipa_dma *dma_subsys, u32 channel_id);
> >    * GSI hardware relinquishes ownership of all pending receive buffer
> >    * transactions and they will complete with their cancelled flag set.
> >    */
> > -void gsi_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell);
> > +static inline void ipa_channel_reset(struct ipa_dma *dma_subsys, u32 channel_id, bool doorbell)
> > +{
> > +	 dma_subsys->channel_reset(dma_subsys, channel_id, doorbell);
> > +}
> >   
> > -/**
> > - * gsi_suspend() - Prepare the GSI subsystem for suspend
> > - * @gsi:	GSI pointer
> > - */
> > -void gsi_suspend(struct ipa_dma *dma_subsys);
> >   
> >   /**
> > - * gsi_resume() - Resume the GSI subsystem following suspend
> > - * @gsi:	GSI pointer
> > - */
> > -void gsi_resume(struct ipa_dma *dma_subsys);
> > -
> > -/**
> > - * gsi_channel_suspend() - Suspend a GSI channel
> > - * @gsi:	GSI pointer
> > + * ipa_channel_suspend() - Suspend a DMA channel
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel to suspend
> >    *
> >    * For IPA v4.0+, suspend is implemented by stopping the channel.
> >    */
> > -int gsi_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline int ipa_channel_suspend(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_suspend(dma_subsys, channel_id);
> > +}
> >   
> >   /**
> > - * gsi_channel_resume() - Resume a suspended GSI channel
> > - * @gsi:	GSI pointer
> > + * ipa_channel_resume() - Resume a suspended DMA channel
> > + * @dma_subsys:	pointer to ipa_dma structure
> >    * @channel_id:	Channel to resume
> >    *
> >    * For IPA v4.0+, the stopped channel is started again.
> >    */
> > -int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
> > +static inline int ipa_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id)
> > +{
> > +	return dma_subsys->channel_resume(dma_subsys, channel_id);
> > +}
> > +
> > +static inline void ipa_dma_suspend(struct ipa_dma *dma_subsys)
> > +{
> > +	return dma_subsys->suspend(dma_subsys);
> > +}
> > +
> > +static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
> > +{
> > +	return dma_subsys->resume(dma_subsys);
> > +}
> >   
> >   /**
> > - * gsi_init() - Initialize the GSI subsystem
> > - * @gsi:	Address of GSI structure embedded in an IPA structure
> > + * ipa_dma_init() - Initialize the GSI subsystem
> > + * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
> >    * @pdev:	IPA platform device
> >    * @version:	IPA hardware version (implies GSI version)
> >    * @count:	Number of entries in the configuration data array
> > @@ -275,14 +315,19 @@ int gsi_channel_resume(struct ipa_dma *dma_subsys, u32 channel_id);
> >    * Early stage initialization of the GSI subsystem, performing tasks
> >    * that can be done before the GSI hardware is ready to use.
> >    */
> > +
> >   int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
> >   	     enum ipa_version version, u32 count,
> >   	     const struct ipa_gsi_endpoint_data *data);
> >   
> >   /**
> > - * gsi_exit() - Exit the GSI subsystem
> > - * @gsi:	GSI address previously passed to a successful gsi_init() call
> > + * ipa_dma_exit() - Exit the DMA subsystem
> > + * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
> >    */
> > -void gsi_exit(struct ipa_dma *dma_subsys);
> > +static inline void ipa_dma_exit(struct ipa_dma *dma_subsys)
> > +{
> > +	if (dma_subsys)
> > +		dma_subsys->exit(dma_subsys);
> > +}
> >   
> >   #endif /* _GSI_H_ */
> > diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> > index 90d6880e8a25..dbef549c4537 100644
> > --- a/drivers/net/ipa/ipa_endpoint.c
> > +++ b/drivers/net/ipa/ipa_endpoint.c
> > @@ -1091,7 +1091,7 @@ static void ipa_endpoint_replenish(struct ipa_endpoint *endpoint, bool add_one)
> >   	 * try replenishing again if our backlog is *all* available TREs.
> >   	 */
> >   	gsi = &endpoint->ipa->dma_subsys;
> > -	if (backlog == gsi_channel_tre_max(gsi, endpoint->channel_id))
> > +	if (backlog == ipa_channel_tre_max(gsi, endpoint->channel_id))
> >   		schedule_delayed_work(&endpoint->replenish_work,
> >   				      msecs_to_jiffies(1));
> >   }
> > @@ -1107,7 +1107,7 @@ static void ipa_endpoint_replenish_enable(struct ipa_endpoint *endpoint)
> >   		atomic_add(saved, &endpoint->replenish_backlog);
> >   
> >   	/* Start replenishing if hardware currently has no buffers */
> > -	max_backlog = gsi_channel_tre_max(gsi, endpoint->channel_id);
> > +	max_backlog = ipa_channel_tre_max(gsi, endpoint->channel_id);
> >   	if (atomic_read(&endpoint->replenish_backlog) == max_backlog)
> >   		ipa_endpoint_replenish(endpoint, false);
> >   }
> > @@ -1432,13 +1432,13 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
> >   	 * active.  We'll re-enable the doorbell (if appropriate) when
> >   	 * we reset again below.
> >   	 */
> > -	gsi_channel_reset(gsi, endpoint->channel_id, false);
> > +	ipa_channel_reset(gsi, endpoint->channel_id, false);
> >   
> >   	/* Make sure the channel isn't suspended */
> >   	suspended = ipa_endpoint_program_suspend(endpoint, false);
> >   
> >   	/* Start channel and do a 1 byte read */
> > -	ret = gsi_channel_start(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_start(gsi, endpoint->channel_id);
> >   	if (ret)
> >   		goto out_suspend_again;
> >   
> > @@ -1461,7 +1461,7 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
> >   
> >   	gsi_trans_read_byte_done(gsi, endpoint->channel_id);
> >   
> > -	ret = gsi_channel_stop(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_stop(gsi, endpoint->channel_id);
> >   	if (ret)
> >   		goto out_suspend_again;
> >   
> > @@ -1470,14 +1470,14 @@ static int ipa_endpoint_reset_rx_aggr(struct ipa_endpoint *endpoint)
> >   	 * complete the channel reset sequence.  Finish by suspending the
> >   	 * channel again (if necessary).
> >   	 */
> > -	gsi_channel_reset(gsi, endpoint->channel_id, true);
> > +	ipa_channel_reset(gsi, endpoint->channel_id, true);
> >   
> >   	usleep_range(USEC_PER_MSEC, 2 * USEC_PER_MSEC);
> >   
> >   	goto out_suspend_again;
> >   
> >   err_endpoint_stop:
> > -	(void)gsi_channel_stop(gsi, endpoint->channel_id);
> > +	(void)ipa_channel_stop(gsi, endpoint->channel_id);
> >   out_suspend_again:
> >   	if (suspended)
> >   		(void)ipa_endpoint_program_suspend(endpoint, true);
> > @@ -1504,7 +1504,7 @@ static void ipa_endpoint_reset(struct ipa_endpoint *endpoint)
> >   	if (special && ipa_endpoint_aggr_active(endpoint))
> >   		ret = ipa_endpoint_reset_rx_aggr(endpoint);
> >   	else
> > -		gsi_channel_reset(&ipa->dma_subsys, channel_id, true);
> > +		ipa_channel_reset(&ipa->dma_subsys, channel_id, true);
> >   
> >   	if (ret)
> >   		dev_err(&ipa->pdev->dev,
> > @@ -1537,7 +1537,7 @@ int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
> >   	struct ipa_dma *gsi = &ipa->dma_subsys;
> >   	int ret;
> >   
> > -	ret = gsi_channel_start(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_start(gsi, endpoint->channel_id);
> >   	if (ret) {
> >   		dev_err(&ipa->pdev->dev,
> >   			"error %d starting %cX channel %u for endpoint %u\n",
> > @@ -1576,7 +1576,7 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
> >   	}
> >   
> >   	/* Note that if stop fails, the channel's state is not well-defined */
> > -	ret = gsi_channel_stop(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_stop(gsi, endpoint->channel_id);
> >   	if (ret)
> >   		dev_err(&ipa->pdev->dev,
> >   			"error %d attempting to stop endpoint %u\n", ret,
> > @@ -1598,7 +1598,7 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
> >   		(void)ipa_endpoint_program_suspend(endpoint, true);
> >   	}
> >   
> > -	ret = gsi_channel_suspend(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_suspend(gsi, endpoint->channel_id);
> >   	if (ret)
> >   		dev_err(dev, "error %d suspending channel %u\n", ret,
> >   			endpoint->channel_id);
> > @@ -1617,7 +1617,7 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
> >   	if (!endpoint->toward_ipa)
> >   		(void)ipa_endpoint_program_suspend(endpoint, false);
> >   
> > -	ret = gsi_channel_resume(gsi, endpoint->channel_id);
> > +	ret = ipa_channel_resume(gsi, endpoint->channel_id);
> >   	if (ret)
> >   		dev_err(dev, "error %d resuming channel %u\n", ret,
> >   			endpoint->channel_id);
> > @@ -1660,14 +1660,14 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
> >   	if (endpoint->ee_id != GSI_EE_AP)
> >   		return;
> >   
> > -	endpoint->trans_tre_max = gsi_channel_trans_tre_max(gsi, channel_id);
> > +	endpoint->trans_tre_max = ipa_channel_trans_tre_max(gsi, channel_id);
> >   	if (!endpoint->toward_ipa) {
> >   		/* RX transactions require a single TRE, so the maximum
> >   		 * backlog is the same as the maximum outstanding TREs.
> >   		 */
> >   		endpoint->replenish_enabled = false;
> >   		atomic_set(&endpoint->replenish_saved,
> > -			   gsi_channel_tre_max(gsi, endpoint->channel_id));
> > +			   ipa_channel_tre_max(gsi, endpoint->channel_id));
> >   		atomic_set(&endpoint->replenish_backlog, 0);
> >   		INIT_DELAYED_WORK(&endpoint->replenish_work,
> >   				  ipa_endpoint_replenish_work);
> > diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> > index 026f5555fa7d..6ab691ff1faf 100644
> > --- a/drivers/net/ipa/ipa_main.c
> > +++ b/drivers/net/ipa/ipa_main.c
> > @@ -98,13 +98,13 @@ int ipa_setup(struct ipa *ipa)
> >   	struct device *dev = &ipa->pdev->dev;
> >   	int ret;
> >   
> > -	ret = gsi_setup(&ipa->dma_subsys);
> > +	ret = ipa_dma_setup(&ipa->dma_subsys);
> >   	if (ret)
> >   		return ret;
> >   
> >   	ret = ipa_power_setup(ipa);
> >   	if (ret)
> > -		goto err_gsi_teardown;
> > +		goto err_dma_teardown;
> >   
> >   	ipa_endpoint_setup(ipa);
> >   
> > @@ -153,8 +153,8 @@ int ipa_setup(struct ipa *ipa)
> >   err_endpoint_teardown:
> >   	ipa_endpoint_teardown(ipa);
> >   	ipa_power_teardown(ipa);
> > -err_gsi_teardown:
> > -	gsi_teardown(&ipa->dma_subsys);
> > +err_dma_teardown:
> > +	ipa_dma_teardown(&ipa->dma_subsys);
> >   
> >   	return ret;
> >   }
> > @@ -179,7 +179,7 @@ static void ipa_teardown(struct ipa *ipa)
> >   	ipa_endpoint_disable_one(command_endpoint);
> >   	ipa_endpoint_teardown(ipa);
> >   	ipa_power_teardown(ipa);
> > -	gsi_teardown(&ipa->dma_subsys);
> > +	ipa_dma_teardown(&ipa->dma_subsys);
> >   }
> >   
> >   /* Configure bus access behavior for IPA components */
> > @@ -726,7 +726,7 @@ static int ipa_probe(struct platform_device *pdev)
> >   					    data->endpoint_data);
> >   	if (!ipa->filter_map) {
> >   		ret = -EINVAL;
> > -		goto err_gsi_exit;
> > +		goto err_dma_exit;
> >   	}
> >   
> >   	ret = ipa_table_init(ipa);
> > @@ -780,8 +780,8 @@ static int ipa_probe(struct platform_device *pdev)
> >   	ipa_table_exit(ipa);
> >   err_endpoint_exit:
> >   	ipa_endpoint_exit(ipa);
> > -err_gsi_exit:
> > -	gsi_exit(&ipa->dma_subsys);
> > +err_dma_exit:
> > +	ipa_dma_exit(&ipa->dma_subsys);
> >   err_mem_exit:
> >   	ipa_mem_exit(ipa);
> >   err_reg_exit:
> > @@ -824,7 +824,7 @@ static int ipa_remove(struct platform_device *pdev)
> >   	ipa_modem_exit(ipa);
> >   	ipa_table_exit(ipa);
> >   	ipa_endpoint_exit(ipa);
> > -	gsi_exit(&ipa->dma_subsys);
> > +	ipa_dma_exit(&ipa->dma_subsys);
> >   	ipa_mem_exit(ipa);
> >   	ipa_reg_exit(ipa);
> >   	kfree(ipa);
> > diff --git a/drivers/net/ipa/ipa_power.c b/drivers/net/ipa/ipa_power.c
> > index b1c6c0fcb654..096cfb8ae9a5 100644
> > --- a/drivers/net/ipa/ipa_power.c
> > +++ b/drivers/net/ipa/ipa_power.c
> > @@ -243,7 +243,7 @@ static int ipa_runtime_suspend(struct device *dev)
> >   	if (ipa->setup_complete) {
> >   		__clear_bit(IPA_POWER_FLAG_RESUMED, ipa->power->flags);
> >   		ipa_endpoint_suspend(ipa);
> > -		gsi_suspend(&ipa->gsi);
> > +		ipa_dma_suspend(&ipa->dma_subsys);
> >   	}
> >   
> >   	return ipa_power_disable(ipa);
> > @@ -260,7 +260,7 @@ static int ipa_runtime_resume(struct device *dev)
> >   
> >   	/* Endpoints aren't usable until setup is complete */
> >   	if (ipa->setup_complete) {
> > -		gsi_resume(&ipa->gsi);
> > +		ipa_dma_resume(&ipa->dma_subsys);
> >   		ipa_endpoint_resume(ipa);
> >   	}
> >   
> > diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
> > index b87936b18770..22755f3ce3da 100644
> > --- a/drivers/net/ipa/ipa_trans.c
> > +++ b/drivers/net/ipa/ipa_trans.c
> > @@ -747,7 +747,7 @@ int ipa_channel_trans_init(struct ipa_dma *gsi, u32 channel_id)
> >   	 * for transactions (including transaction structures) based on
> >   	 * this maximum number.
> >   	 */
> > -	tre_max = gsi_channel_tre_max(channel->dma_subsys, channel_id);
> > +	tre_max = ipa_channel_tre_max(channel->dma_subsys, channel_id);
> >   
> >   	/* Transactions are allocated one at a time. */
> >   	ret = ipa_trans_pool_init(&trans_info->pool, sizeof(struct ipa_trans),
> > 



* Re: [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait
  2021-10-13 22:29   ` Alex Elder
@ 2021-10-18 17:02     ` Sireesh Kodali
  0 siblings, 0 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-10-18 17:02 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 3:59 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > From: Vladimir Lypak <vladimir.lypak@gmail.com>
> > 
> > Sometimes the pipeline clear fails, and when it does, having a hang in
> > kernel is ugly. The timeout gives us a nice error message. Note that
> > this shouldn't actually hang, ever. It only hangs if there is a mistake
> > in the config, and the timeout is only useful when debugging.
> > 
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
>
> This is actually an item on my to-do list. All of the waits
> for GSI completions should have timeouts. The only reason it
> hasn't been implemented already is that I would like to be sure
> all paths that could have a timeout actually have a reasonable
> recovery.
>
> I'd say an error message after a timeout is better than a hung
> task panic, but if this does time out, I'm not sure the state
> of the hardware is well-defined.

Early on while wiring up BAM support, I hadn't quite figured out the
IPA init sequence, and some of the BAM opcode stuff. This caused the
driver to hang when it reached the completion. Since this particular
completion was waited for just before the probe function returned, it
hung up the kernel thread and prevented the module from being
`modprobe -r`ed.

Since then, I've properly fixed the BAM code and the completion always
returns, making the patch kinda useless for now. Since it's only for
debugging, I'll just drop this patch. I think the only error handling we
can do at this stage is to return -EIO and get the caller to handle
de-initing everything.

Regards,
Sireesh

>
> -Alex
>
> > ---
> >   drivers/net/ipa/ipa_cmd.c | 5 ++++-
> >   1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> > index 3db9e94e484f..0bdbc331fa78 100644
> > --- a/drivers/net/ipa/ipa_cmd.c
> > +++ b/drivers/net/ipa/ipa_cmd.c
> > @@ -658,7 +658,10 @@ u32 ipa_cmd_pipeline_clear_count(void)
> >   
> >   void ipa_cmd_pipeline_clear_wait(struct ipa *ipa)
> >   {
> > -	wait_for_completion(&ipa->completion);
> > +	unsigned long timeout_jiffies = msecs_to_jiffies(1000);
> > +
> > +	if (!wait_for_completion_timeout(&ipa->completion, timeout_jiffies))
> > +		dev_err(&ipa->pdev->dev, "%s time out\n", __func__);
> >   }
> >   
> >   void ipa_cmd_pipeline_clear(struct ipa *ipa)
> > 



* Re: [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions
  2021-10-13 22:29   ` Alex Elder
@ 2021-10-18 17:25     ` Sireesh Kodali
  0 siblings, 0 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-10-18 17:25 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 3:59 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > IPA v2.x is an older version of the IPA hardware, and is 32 bit.
> > 
> > Most of the registers were just shifted in newer IPA versions, but
> > the register fields have remained the same across IPA versions. This
> > means that only the register addresses needed to be added to the driver.
> > 
> > To handle the different IPA register addresses, static inline functions
> > have been defined that return the correct register address.
>
> Thank you for following the existing convention in implementing these.
> Even if it isn't perfect, it's good to remain consistent.
>
> You use:
> if (version <= IPA_VERSION_2_6L)
> but then also define and use
> if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> And the only new IPA versions are 2_0, 2_5, and 2_6L.
>
> I would stick with the former and don't define IPA_VERSION_RANGE().
> Nothing less than IPA v2.0 (or 3.0 currently) is supported, so
> "there is no version less than that."

Makes sense, thanks!
>
> Oh, and I noticed some local variables defined without the
> "reverse Christmas tree order" which, like it or not, is the
> convention used consistently throughout this driver.
>

I wasn't aware of this; it should be easy enough to fix.

> I might quibble with a few other minor things in these definitions
> but overall this looks fine.
>

Thanks,
Sireesh
> -Alex
>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_cmd.c      |   3 +-
> >   drivers/net/ipa/ipa_endpoint.c |  33 +++---
> >   drivers/net/ipa/ipa_main.c     |   8 +-
> >   drivers/net/ipa/ipa_mem.c      |   5 +-
> >   drivers/net/ipa/ipa_reg.h      | 184 +++++++++++++++++++++++++++------
> >   drivers/net/ipa/ipa_version.h  |  12 +++
> >   6 files changed, 195 insertions(+), 50 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> > index 0bdbc331fa78..7a104540dc26 100644
> > --- a/drivers/net/ipa/ipa_cmd.c
> > +++ b/drivers/net/ipa/ipa_cmd.c
> > @@ -326,7 +326,8 @@ static bool ipa_cmd_register_write_valid(struct ipa *ipa)
> >   	 * worst case (highest endpoint number) offset of that endpoint
> >   	 * fits in the register write command field(s) that must hold it.
> >   	 */
> > -	offset = IPA_REG_ENDP_STATUS_N_OFFSET(IPA_ENDPOINT_COUNT - 1);
> > +	offset = ipa_reg_endp_status_n_offset(ipa->version,
> > +			IPA_ENDPOINT_COUNT - 1);
> >   	name = "maximal endpoint status";
> >   	if (!ipa_cmd_register_write_offset_valid(ipa, name, offset))
> >   		return false;
> > diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> > index dbef549c4537..7d3ab61cd890 100644
> > --- a/drivers/net/ipa/ipa_endpoint.c
> > +++ b/drivers/net/ipa/ipa_endpoint.c
> > @@ -242,8 +242,8 @@ static struct ipa_trans *ipa_endpoint_trans_alloc(struct ipa_endpoint *endpoint,
> >   static bool
> >   ipa_endpoint_init_ctrl(struct ipa_endpoint *endpoint, bool suspend_delay)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_CTRL_N_OFFSET(endpoint->endpoint_id);
> >   	struct ipa *ipa = endpoint->ipa;
> > +	u32 offset = ipa_reg_endp_init_ctrl_n_offset(ipa->version, endpoint->endpoint_id);
> >   	bool state;
> >   	u32 mask;
> >   	u32 val;
> > @@ -410,7 +410,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
> >   		if (!(endpoint->ee_id == GSI_EE_MODEM && endpoint->toward_ipa))
> >   			continue;
> >   
> > -		offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
> > +		offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
> >   
> >   		/* Value written is 0, and all bits are updated.  That
> >   		 * means status is disabled on the endpoint, and as a
> > @@ -431,7 +431,8 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
> >   
> >   static void ipa_endpoint_init_cfg(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_CFG_N_OFFSET(endpoint->endpoint_id);
> > +	struct ipa *ipa = endpoint->ipa;
> > +	u32 offset = ipa_reg_endp_init_cfg_n_offset(ipa->version, endpoint->endpoint_id);
> >   	enum ipa_cs_offload_en enabled;
> >   	u32 val = 0;
> >   
> > @@ -523,8 +524,8 @@ ipa_qmap_header_size(enum ipa_version version, struct ipa_endpoint *endpoint)
> >    */
> >   static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_HDR_N_OFFSET(endpoint->endpoint_id);
> >   	struct ipa *ipa = endpoint->ipa;
> > +	u32 offset = ipa_reg_endp_init_hdr_n_offset(ipa->version, endpoint->endpoint_id);
> >   	u32 val = 0;
> >   
> >   	if (endpoint->data->qmap) {
> > @@ -565,9 +566,9 @@ static void ipa_endpoint_init_hdr(struct ipa_endpoint *endpoint)
> >   
> >   static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(endpoint->endpoint_id);
> > -	u32 pad_align = endpoint->data->rx.pad_align;
> >   	struct ipa *ipa = endpoint->ipa;
> > +	u32 offset = ipa_reg_endp_init_hdr_ext_n_offset(ipa->version, endpoint->endpoint_id);
> > +	u32 pad_align = endpoint->data->rx.pad_align;
> >   	u32 val = 0;
> >   
> >   	val |= HDR_ENDIANNESS_FMASK;		/* big endian */
> > @@ -609,6 +610,7 @@ static void ipa_endpoint_init_hdr_ext(struct ipa_endpoint *endpoint)
> >   
> >   static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
> >   {
> > +	enum ipa_version version = endpoint->ipa->version;
> >   	u32 endpoint_id = endpoint->endpoint_id;
> >   	u32 val = 0;
> >   	u32 offset;
> > @@ -616,7 +618,7 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
> >   	if (endpoint->toward_ipa)
> >   		return;		/* Register not valid for TX endpoints */
> >   
> > -	offset = IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(endpoint_id);
> > +	offset = ipa_reg_endp_init_hdr_metadata_mask_n_offset(version, endpoint_id);
> >   
> >   	/* Note that HDR_ENDIANNESS indicates big endian header fields */
> >   	if (endpoint->data->qmap)
> > @@ -627,7 +629,8 @@ static void ipa_endpoint_init_hdr_metadata_mask(struct ipa_endpoint *endpoint)
> >   
> >   static void ipa_endpoint_init_mode(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_MODE_N_OFFSET(endpoint->endpoint_id);
> > +	enum ipa_version version = endpoint->ipa->version;
> > +	u32 offset = ipa_reg_endp_init_mode_n_offset(version, endpoint->endpoint_id);
> >   	u32 val;
> >   
> >   	if (!endpoint->toward_ipa)
> > @@ -716,8 +719,8 @@ static u32 aggr_sw_eof_active_encoded(enum ipa_version version, bool enabled)
> >   
> >   static void ipa_endpoint_init_aggr(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_AGGR_N_OFFSET(endpoint->endpoint_id);
> >   	enum ipa_version version = endpoint->ipa->version;
> > +	u32 offset = ipa_reg_endp_init_aggr_n_offset(version, endpoint->endpoint_id);
> >   	u32 val = 0;
> >   
> >   	if (endpoint->data->aggregation) {
> > @@ -853,7 +856,7 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
> >   	u32 offset;
> >   	u32 val;
> >   
> > -	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(endpoint_id);
> > +	offset = ipa_reg_endp_init_hol_block_timer_n_offset(ipa->version, endpoint_id);
> >   	val = hol_block_timer_val(ipa, microseconds);
> >   	iowrite32(val, ipa->reg_virt + offset);
> >   }
> > @@ -861,12 +864,13 @@ static void ipa_endpoint_init_hol_block_timer(struct ipa_endpoint *endpoint,
> >   static void
> >   ipa_endpoint_init_hol_block_enable(struct ipa_endpoint *endpoint, bool enable)
> >   {
> > +	enum ipa_version version = endpoint->ipa->version;
> >   	u32 endpoint_id = endpoint->endpoint_id;
> >   	u32 offset;
> >   	u32 val;
> >   
> >   	val = enable ? HOL_BLOCK_EN_FMASK : 0;
> > -	offset = IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(endpoint_id);
> > +	offset = ipa_reg_endp_init_hol_block_en_n_offset(version, endpoint_id);
> >   	iowrite32(val, endpoint->ipa->reg_virt + offset);
> >   }
> >   
> > @@ -887,7 +891,8 @@ void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa)
> >   
> >   static void ipa_endpoint_init_deaggr(struct ipa_endpoint *endpoint)
> >   {
> > -	u32 offset = IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(endpoint->endpoint_id);
> > +	enum ipa_version version = endpoint->ipa->version;
> > +	u32 offset = ipa_reg_endp_init_deaggr_n_offset(version, endpoint->endpoint_id);
> >   	u32 val = 0;
> >   
> >   	if (!endpoint->toward_ipa)
> > @@ -979,7 +984,7 @@ static void ipa_endpoint_status(struct ipa_endpoint *endpoint)
> >   	u32 val = 0;
> >   	u32 offset;
> >   
> > -	offset = IPA_REG_ENDP_STATUS_N_OFFSET(endpoint_id);
> > +	offset = ipa_reg_endp_status_n_offset(ipa->version, endpoint_id);
> >   
> >   	if (endpoint->data->status_enable) {
> >   		val |= STATUS_EN_FMASK;
> > @@ -1384,7 +1389,7 @@ void ipa_endpoint_default_route_set(struct ipa *ipa, u32 endpoint_id)
> >   	val |= u32_encode_bits(endpoint_id, ROUTE_FRAG_DEF_PIPE_FMASK);
> >   	val |= ROUTE_DEF_RETAIN_HDR_FMASK;
> >   
> > -	iowrite32(val, ipa->reg_virt + IPA_REG_ROUTE_OFFSET);
> > +	iowrite32(val, ipa->reg_virt + ipa_reg_route_offset(ipa->version));
> >   }
> >   
> >   void ipa_endpoint_default_route_clear(struct ipa *ipa)
> > diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> > index 6ab691ff1faf..ba06e3ad554c 100644
> > --- a/drivers/net/ipa/ipa_main.c
> > +++ b/drivers/net/ipa/ipa_main.c
> > @@ -191,7 +191,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
> >   	if (ipa->version < IPA_VERSION_4_0)
> >   		return;
> >   
> > -	val = ioread32(ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
> > +	val = ioread32(ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
> >   
> >   	if (ipa->version == IPA_VERSION_4_0) {
> >   		val &= ~IPA_QMB_SELECT_CONS_EN_FMASK;
> > @@ -206,7 +206,7 @@ static void ipa_hardware_config_comp(struct ipa *ipa)
> >   	val |= GSI_MULTI_INORDER_RD_DIS_FMASK;
> >   	val |= GSI_MULTI_INORDER_WR_DIS_FMASK;
> >   
> > -	iowrite32(val, ipa->reg_virt + IPA_REG_COMP_CFG_OFFSET);
> > +	iowrite32(val, ipa->reg_virt + ipa_reg_comp_cfg_offset(ipa->version));
> >   }
> >   
> >   /* Configure DDR and (possibly) PCIe max read/write QSB values */
> > @@ -355,7 +355,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
> >   	/* IPA v4.5+ has no backward compatibility register */
> >   	if (version < IPA_VERSION_4_5) {
> >   		val = data->backward_compat;
> > -		iowrite32(val, ipa->reg_virt + IPA_REG_BCR_OFFSET);
> > +		iowrite32(val, ipa->reg_virt + ipa_reg_bcr_offset(ipa->version));
> >   	}
> >   
> >   	/* Implement some hardware workarounds */
> > @@ -384,7 +384,7 @@ static void ipa_hardware_config(struct ipa *ipa, const struct ipa_data *data)
> >   		/* Configure aggregation timer granularity */
> >   		granularity = ipa_aggr_granularity_val(IPA_AGGR_GRANULARITY);
> >   		val = u32_encode_bits(granularity, AGGR_GRANULARITY_FMASK);
> > -		iowrite32(val, ipa->reg_virt + IPA_REG_COUNTER_CFG_OFFSET);
> > +		iowrite32(val, ipa->reg_virt + ipa_reg_counter_cfg_offset(ipa->version));
> >   	} else {
> >   		ipa_qtime_config(ipa);
> >   	}
> > diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
> > index 16e5fdd5bd73..8acc88070a6f 100644
> > --- a/drivers/net/ipa/ipa_mem.c
> > +++ b/drivers/net/ipa/ipa_mem.c
> > @@ -113,7 +113,8 @@ int ipa_mem_setup(struct ipa *ipa)
> >   	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
> >   	offset = ipa->mem_offset + mem->offset;
> >   	val = proc_cntxt_base_addr_encoded(ipa->version, offset);
> > -	iowrite32(val, ipa->reg_virt + IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET);
> > +	iowrite32(val, ipa->reg_virt +
> > +		  ipa_reg_local_pkt_proc_cntxt_base_offset(ipa->version));
> >   
> >   	return 0;
> >   }
> > @@ -316,7 +317,7 @@ int ipa_mem_config(struct ipa *ipa)
> >   	u32 i;
> >   
> >   	/* Check the advertised location and size of the shared memory area */
> > -	val = ioread32(ipa->reg_virt + IPA_REG_SHARED_MEM_SIZE_OFFSET);
> > +	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
> >   
> >   	/* The fields in the register are in 8 byte units */
> >   	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> > diff --git a/drivers/net/ipa/ipa_reg.h b/drivers/net/ipa/ipa_reg.h
> > index a5b355384d4a..fcae0296cfa4 100644
> > --- a/drivers/net/ipa/ipa_reg.h
> > +++ b/drivers/net/ipa/ipa_reg.h
> > @@ -65,7 +65,17 @@ struct ipa;
> >    * of valid bits for the register.
> >    */
> >   
> > -#define IPA_REG_COMP_CFG_OFFSET				0x0000003c
> > +#define IPA_REG_COMP_SW_RESET_OFFSET		0x0000003c
> > +
> > +#define IPA_REG_V2_ENABLED_PIPES_OFFSET		0x000005dc
> > +
> > +static inline u32 ipa_reg_comp_cfg_offset(enum ipa_version version)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x38;
> > +
> > +	return 0x3c;
> > +}
> >   /* The next field is not supported for IPA v4.0+, not present for IPA v4.5+ */
> >   #define ENABLE_FMASK				GENMASK(0, 0)
> >   /* The next field is present for IPA v4.7+ */
> > @@ -124,6 +134,7 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
> >   	return u32_encode_bits(val, GENMASK(17, 17));
> >   }
> >   
> > +/* This register is only present on IPA v3.0 and above */
> >   #define IPA_REG_CLKON_CFG_OFFSET			0x00000044
> >   #define RX_FMASK				GENMASK(0, 0)
> >   #define PROC_FMASK				GENMASK(1, 1)
> > @@ -164,7 +175,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
> >   /* The next field is present for IPA v4.7+ */
> >   #define DRBIP_FMASK				GENMASK(31, 31)
> >   
> > -#define IPA_REG_ROUTE_OFFSET				0x00000048
> > +static inline u32 ipa_reg_route_offset(enum ipa_version version)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x44;
> > +
> > +	return 0x48;
> > +}
> > +
> >   #define ROUTE_DIS_FMASK				GENMASK(0, 0)
> >   #define ROUTE_DEF_PIPE_FMASK			GENMASK(5, 1)
> >   #define ROUTE_DEF_HDR_TABLE_FMASK		GENMASK(6, 6)
> > @@ -172,7 +190,14 @@ static inline u32 full_flush_rsc_closure_en_encoded(enum ipa_version version,
> >   #define ROUTE_FRAG_DEF_PIPE_FMASK		GENMASK(21, 17)
> >   #define ROUTE_DEF_RETAIN_HDR_FMASK		GENMASK(24, 24)
> >   
> > -#define IPA_REG_SHARED_MEM_SIZE_OFFSET			0x00000054
> > +static inline u32 ipa_reg_shared_mem_size_offset(enum ipa_version version)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x50;
> > +
> > +	return 0x54;
> > +}
> > +
> >   #define SHARED_MEM_SIZE_FMASK			GENMASK(15, 0)
> >   #define SHARED_MEM_BADDR_FMASK			GENMASK(31, 16)
> >   
> > @@ -219,7 +244,13 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
> >   }
> >   
> >   /* The next register is not present for IPA v4.5+ */
> > -#define IPA_REG_BCR_OFFSET				0x000001d0
> > +static inline u32 ipa_reg_bcr_offset(enum ipa_version version)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
> > +		return 0x5b0;
> > +
> > +	return 0x1d0;
> > +}
> >   /* The next two fields are not present for IPA v4.2+ */
> >   #define BCR_CMDQ_L_LACK_ONE_ENTRY_FMASK		GENMASK(0, 0)
> >   #define BCR_TX_NOT_USING_BRESP_FMASK		GENMASK(1, 1)
> > @@ -236,7 +267,14 @@ static inline u32 ipa_reg_state_aggr_active_offset(enum ipa_version version)
> >   #define BCR_ROUTER_PREFETCH_EN_FMASK		GENMASK(9, 9)
> >   
> >   /* The value of the next register must be a multiple of 8 (bottom 3 bits 0) */
> > -#define IPA_REG_LOCAL_PKT_PROC_CNTXT_OFFSET		0x000001e8
> > +static inline u32 ipa_reg_local_pkt_proc_cntxt_base_offset(enum ipa_version version)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x5e0;
> > +
> > +	return 0x1e8;
> > +}
> > +
> >   
> >   /* Encoded value for LOCAL_PKT_PROC_CNTXT register BASE_ADDR field */
> >   static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
> > @@ -252,7 +290,14 @@ static inline u32 proc_cntxt_base_addr_encoded(enum ipa_version version,
> >   #define IPA_REG_AGGR_FORCE_CLOSE_OFFSET			0x000001ec
> >   
> >   /* The next register is not present for IPA v4.5+ */
> > -#define IPA_REG_COUNTER_CFG_OFFSET			0x000001f0
> > +static inline u32 ipa_reg_counter_cfg_offset(enum ipa_version version)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_5, 2_6L))
> > +		return 0x5e8;
> > +
> > +	return 0x1f0;
> > +}
> > +
> >   /* The next field is not present for IPA v3.5+ */
> >   #define EOT_COAL_GRANULARITY			GENMASK(3, 0)
> >   #define AGGR_GRANULARITY_FMASK			GENMASK(8, 4)
> > @@ -349,15 +394,27 @@ enum ipa_pulse_gran {
> >   #define Y_MIN_LIM_FMASK				GENMASK(21, 16)
> >   #define Y_MAX_LIM_FMASK				GENMASK(29, 24)
> >   
> > -#define IPA_REG_ENDP_INIT_CTRL_N_OFFSET(ep) \
> > -					(0x00000800 + 0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_init_ctrl_n_offset(enum ipa_version version, u16 ep)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x70 + 0x4 * ep;
> > +
> > +	return 0x800 + 0x70 * ep;
> > +}
> > +
> >   /* Valid only for RX (IPA producer) endpoints (do not use for IPA v4.0+) */
> >   #define ENDP_SUSPEND_FMASK			GENMASK(0, 0)
> >   /* Valid only for TX (IPA consumer) endpoints */
> >   #define ENDP_DELAY_FMASK			GENMASK(1, 1)
> >   
> > -#define IPA_REG_ENDP_INIT_CFG_N_OFFSET(ep) \
> > -					(0x00000808 + 0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_init_cfg_n_offset(enum ipa_version version, u16 ep)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0xc0 + 0x4 * ep;
> > +
> > +	return 0x808 + 0x70 * ep;
> > +}
> > +
> >   #define FRAG_OFFLOAD_EN_FMASK			GENMASK(0, 0)
> >   #define CS_OFFLOAD_EN_FMASK			GENMASK(2, 1)
> >   #define CS_METADATA_HDR_OFFSET_FMASK		GENMASK(6, 3)
> > @@ -383,8 +440,14 @@ enum ipa_nat_en {
> >   	IPA_NAT_DST			= 0x2,
> >   };
> >   
> > -#define IPA_REG_ENDP_INIT_HDR_N_OFFSET(ep) \
> > -					(0x00000810 + 0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_init_hdr_n_offset(enum ipa_version version, u16 ep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x170 + 0x4 * ep;
> > +
> > +	return 0x810 + 0x70 * ep;
> > +}
> > +
> >   #define HDR_LEN_FMASK				GENMASK(5, 0)
> >   #define HDR_OFST_METADATA_VALID_FMASK		GENMASK(6, 6)
> >   #define HDR_OFST_METADATA_FMASK			GENMASK(12, 7)
> > @@ -440,8 +503,14 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
> >   	return val;
> >   }
> >   
> > -#define IPA_REG_ENDP_INIT_HDR_EXT_N_OFFSET(ep) \
> > -					(0x00000814 + 0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_init_hdr_ext_n_offset(enum ipa_version version, u16 ep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x1c0 + 0x4 * ep;
> > +
> > +	return 0x814 + 0x70 * ep;
> > +}
> > +
> >   #define HDR_ENDIANNESS_FMASK			GENMASK(0, 0)
> >   #define HDR_TOTAL_LEN_OR_PAD_VALID_FMASK	GENMASK(1, 1)
> >   #define HDR_TOTAL_LEN_OR_PAD_FMASK		GENMASK(2, 2)
> > @@ -454,12 +523,23 @@ static inline u32 ipa_metadata_offset_encoded(enum ipa_version version,
> >   #define HDR_ADDITIONAL_CONST_LEN_MSB_FMASK	GENMASK(21, 20)
> >   
> >   /* Valid only for RX (IPA producer) endpoints */
> > -#define IPA_REG_ENDP_INIT_HDR_METADATA_MASK_N_OFFSET(rxep) \
> > -					(0x00000818 + 0x0070 * (rxep))
> > +static inline u32 ipa_reg_endp_init_hdr_metadata_mask_n_offset(enum ipa_version version, u16 rxep)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x220 + 0x4 * rxep;
> > +
> > +	return 0x818 + 0x70 * rxep;
> > +}
> >   
> >   /* Valid only for TX (IPA consumer) endpoints */
> > -#define IPA_REG_ENDP_INIT_MODE_N_OFFSET(txep) \
> > -					(0x00000820 + 0x0070 * (txep))
> > +static inline u32 ipa_reg_endp_init_mode_n_offset(enum ipa_version version, u16 txep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x2c0 + 0x4 * txep;
> > +
> > +	return 0x820 + 0x70 * txep;
> > +}
> > +
> >   #define MODE_FMASK				GENMASK(2, 0)
> >   /* The next field is present for IPA v4.5+ */
> >   #define DCPH_ENABLE_FMASK			GENMASK(3, 3)
> > @@ -480,8 +560,14 @@ enum ipa_mode {
> >   	IPA_DMA				= 0x3,
> >   };
> >   
> > -#define IPA_REG_ENDP_INIT_AGGR_N_OFFSET(ep) \
> > -					(0x00000824 +  0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_init_aggr_n_offset(enum ipa_version version,
> > +						  u16 ep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x320 + 0x4 * ep;
> > +	return 0x824 + 0x70 * ep;
> > +}
> > +
> >   #define AGGR_EN_FMASK				GENMASK(1, 0)
> >   #define AGGR_TYPE_FMASK				GENMASK(4, 2)
> >   
> > @@ -543,14 +629,27 @@ enum ipa_aggr_type {
> >   };
> >   
> >   /* Valid only for RX (IPA producer) endpoints */
> > -#define IPA_REG_ENDP_INIT_HOL_BLOCK_EN_N_OFFSET(rxep) \
> > -					(0x0000082c +  0x0070 * (rxep))
> > +static inline u32 ipa_reg_endp_init_hol_block_en_n_offset(enum ipa_version version,
> > +							  u16 rxep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x3c0 + 0x4 * rxep;
> > +
> > +	return 0x82c + 0x70 * rxep;
> > +}
> > +
> >   #define HOL_BLOCK_EN_FMASK			GENMASK(0, 0)
> >   
> >   /* Valid only for RX (IPA producer) endpoints */
> > -#define IPA_REG_ENDP_INIT_HOL_BLOCK_TIMER_N_OFFSET(rxep) \
> > -					(0x00000830 +  0x0070 * (rxep))
> > -/* The next two fields are present for IPA v4.2 only */
> > +static inline u32 ipa_reg_endp_init_hol_block_timer_n_offset(enum ipa_version version, u16 rxep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x420 + 0x4 * rxep;
> > +
> > +	return 0x830 + 0x70 * rxep;
> > +}
> > +
> > +/* The next fields are present for IPA v4.2 only */
> >   #define BASE_VALUE_FMASK			GENMASK(4, 0)
> >   #define SCALE_FMASK				GENMASK(12, 8)
> >   /* The next two fields are present for IPA v4.5 */
> > @@ -558,8 +657,14 @@ enum ipa_aggr_type {
> >   #define GRAN_SEL_FMASK				GENMASK(8, 8)
> >   
> >   /* Valid only for TX (IPA consumer) endpoints */
> > -#define IPA_REG_ENDP_INIT_DEAGGR_N_OFFSET(txep) \
> > -					(0x00000834 + 0x0070 * (txep))
> > +static inline u32 ipa_reg_endp_init_deaggr_n_offset(enum ipa_version version, u16 txep)
> > +{
> > +	if (IPA_VERSION_RANGE(version, 2_0, 2_6L))
> > +		return 0x470 + 0x4 * txep;
> > +
> > +	return 0x834 + 0x70 * txep;
> > +}
> > +
> >   #define DEAGGR_HDR_LEN_FMASK			GENMASK(5, 0)
> >   #define SYSPIPE_ERR_DETECTION_FMASK		GENMASK(6, 6)
> >   #define PACKET_OFFSET_VALID_FMASK		GENMASK(7, 7)
> > @@ -629,8 +734,14 @@ enum ipa_seq_rep_type {
> >   	IPA_SEQ_REP_DMA_PARSER			= 0x08,
> >   };
> >   
> > -#define IPA_REG_ENDP_STATUS_N_OFFSET(ep) \
> > -					(0x00000840 + 0x0070 * (ep))
> > +static inline u32 ipa_reg_endp_status_n_offset(enum ipa_version version, u16 ep)
> > +{
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x4c0 + 0x4 * ep;
> > +
> > +	return 0x840 + 0x70 * ep;
> > +}
> > +
> >   #define STATUS_EN_FMASK				GENMASK(0, 0)
> >   #define STATUS_ENDP_FMASK			GENMASK(5, 1)
> >   /* The next field is not present for IPA v4.5+ */
> > @@ -662,6 +773,9 @@ enum ipa_seq_rep_type {
> >   static inline u32 ipa_reg_irq_stts_ee_n_offset(enum ipa_version version,
> >   					       u32 ee)
> >   {
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x00001008 + 0x1000 * ee;
> > +
> >   	if (version < IPA_VERSION_4_9)
> >   		return 0x00003008 + 0x1000 * ee;
> >   
> > @@ -675,6 +789,9 @@ static inline u32 ipa_reg_irq_stts_offset(enum ipa_version version)
> >   
> >   static inline u32 ipa_reg_irq_en_ee_n_offset(enum ipa_version version, u32 ee)
> >   {
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x0000100c + 0x1000 * ee;
> > +
> >   	if (version < IPA_VERSION_4_9)
> >   		return 0x0000300c + 0x1000 * ee;
> >   
> > @@ -688,6 +805,9 @@ static inline u32 ipa_reg_irq_en_offset(enum ipa_version version)
> >   
> >   static inline u32 ipa_reg_irq_clr_ee_n_offset(enum ipa_version version, u32 ee)
> >   {
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x00001010 + 0x1000 * ee;
> > +
> >   	if (version < IPA_VERSION_4_9)
> >   		return 0x00003010 + 0x1000 * ee;
> >   
> > @@ -776,6 +896,9 @@ enum ipa_irq_id {
> >   
> >   static inline u32 ipa_reg_irq_uc_ee_n_offset(enum ipa_version version, u32 ee)
> >   {
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x0000101c + 0x1000 * ee;
> > +
> >   	if (version < IPA_VERSION_4_9)
> >   		return 0x0000301c + 0x1000 * ee;
> >   
> > @@ -793,6 +916,9 @@ static inline u32 ipa_reg_irq_uc_offset(enum ipa_version version)
> >   static inline u32
> >   ipa_reg_irq_suspend_info_ee_n_offset(enum ipa_version version, u32 ee)
> >   {
> > +	if (version <= IPA_VERSION_2_6L)
> > +		return 0x00001098 + 0x1000 * ee;
> > +
> >   	if (version == IPA_VERSION_3_0)
> >   		return 0x00003098 + 0x1000 * ee;
> >   
> > diff --git a/drivers/net/ipa/ipa_version.h b/drivers/net/ipa/ipa_version.h
> > index 6c16c895d842..0d816de586ba 100644
> > --- a/drivers/net/ipa/ipa_version.h
> > +++ b/drivers/net/ipa/ipa_version.h
> > @@ -8,6 +8,9 @@
> >   
> >   /**
> >    * enum ipa_version
> > + * @IPA_VERSION_2_0:	IPA version 2.0
> > + * @IPA_VERSION_2_5:	IPA version 2.5/2.6
> > + * @IPA_VERSION_2_6L:	IPA version 2.6L
> >    * @IPA_VERSION_3_0:	IPA version 3.0/GSI version 1.0
> >    * @IPA_VERSION_3_1:	IPA version 3.1/GSI version 1.1
> >    * @IPA_VERSION_3_5:	IPA version 3.5/GSI version 1.2
> > @@ -25,6 +28,9 @@
> >    * new version is added.
> >    */
> >   enum ipa_version {
> > +	IPA_VERSION_2_0,
> > +	IPA_VERSION_2_5,
> > +	IPA_VERSION_2_6L,
> >   	IPA_VERSION_3_0,
> >   	IPA_VERSION_3_1,
> >   	IPA_VERSION_3_5,
> > @@ -38,4 +44,10 @@ enum ipa_version {
> >   	IPA_VERSION_4_11,
> >   };
> >   
> > +#define IPA_HAS_GSI(version) ((version) > IPA_VERSION_2_6L)
> > +#define IPA_IS_64BIT(version) ((version) > IPA_VERSION_2_6L)
> > +#define IPA_VERSION_RANGE(_version, _from, _to) \
> > +	((_version) >= (IPA_VERSION_##_from) &&  \
> > +	 (_version) <= (IPA_VERSION_##_to))
> > +
> >   #endif /* _IPA_VERSION_H_ */
> > 


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 17:30     ` Sireesh Kodali
  0 siblings, 0 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-10-18 17:30 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > BAM is used on IPA v2.x. Since BAM already has a nice dmaengine driver,
> > the IPA driver only makes calls to the dmaengine API.
> > Also add BAM transaction support to IPA's transaction abstraction layer.
> > 
> > BAM transactions should use NAPI just like GSI transactions, but just
> > use callbacks on each transaction for now.
>
> This is where things get a little more complicated. I'm not really
> familiar with the BAM interface and would really like to give this
> a much deeper review, and I won't be doing that now.
>
> At first glance, it looks reasonably clean to me, and it surprises
> me a little that this different system can be used with a relatively
> small amount of change. Much looks duplicated, so it could be that
> a little more work abstracting might avoid that (but I haven't looked
> that closely).
>

BAM is handled by the bam_dma driver, which supports the dmaengine API, so
all the functions are like so:

bam_function()
{
	bookkeeping();
	dmaengine_api_call();
	bookkeeping();
}

gsi_function()
{
	bookkeeping();
	gsi_register_rws();
	gsi_misc_ops();
	bookkeeping();
}

Some of the bookkeeping code is common between BAM and GSI, but the
current abstraction doesn't allow sharing that code. As I stated
previously, we might be able to share more code (or possibly all code)
between BAM and GSI if GSI was implemented as a dmaengine driver. But
reimplementing GSI like this might be rather time consuming, and there
might be easier solutions to improve code sharing between BAM and GSI.

Regards,
Sireesh

> -Alex
>
> > 
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/Makefile          |   2 +-
> >   drivers/net/ipa/bam.c             | 525 ++++++++++++++++++++++++++++++
> >   drivers/net/ipa/gsi.c             |   1 +
> >   drivers/net/ipa/ipa_data.h        |   1 +
> >   drivers/net/ipa/ipa_dma.h         |  18 +-
> >   drivers/net/ipa/ipa_dma_private.h |   2 +
> >   drivers/net/ipa/ipa_main.c        |  20 +-
> >   drivers/net/ipa/ipa_trans.c       |  14 +-
> >   drivers/net/ipa/ipa_trans.h       |   4 +
> >   9 files changed, 569 insertions(+), 18 deletions(-)
> >   create mode 100644 drivers/net/ipa/bam.c
> > 
> > diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
> > index 3cd021fb992e..4abebc667f77 100644
> > --- a/drivers/net/ipa/Makefile
> > +++ b/drivers/net/ipa/Makefile
> > @@ -2,7 +2,7 @@ obj-$(CONFIG_QCOM_IPA)	+=	ipa.o
> >   
> >   ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
> >   				ipa_table.o ipa_interrupt.o gsi.o ipa_trans.o \
> > -				ipa_gsi.o ipa_smp2p.o ipa_uc.o \
> > +				ipa_gsi.o ipa_smp2p.o ipa_uc.o bam.o \
> >   				ipa_endpoint.o ipa_cmd.o ipa_modem.o \
> >   				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
> >   				ipa_sysfs.o
> > diff --git a/drivers/net/ipa/bam.c b/drivers/net/ipa/bam.c
> > new file mode 100644
> > index 000000000000..0726e385fee5
> > --- /dev/null
> > +++ b/drivers/net/ipa/bam.c
> > @@ -0,0 +1,525 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/* Copyright (c) 2020, The Linux Foundation. All rights reserved.
> > + */
> > +
> > +#include <linux/completion.h>
> > +#include <linux/dma-mapping.h>
> > +#include <linux/dmaengine.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/io.h>
> > +#include <linux/kernel.h>
> > +#include <linux/mutex.h>
> > +#include <linux/netdevice.h>
> > +#include <linux/platform_device.h>
> > +
> > +#include "ipa_gsi.h"
> > +#include "ipa.h"
> > +#include "ipa_dma.h"
> > +#include "ipa_dma_private.h"
> > +#include "ipa_gsi.h"
> > +#include "ipa_trans.h"
> > +#include "ipa_data.h"
> > +
> > +/**
> > + * DOC: The IPA Smart Peripheral System Interface
> > + *
> > + * The Smart Peripheral System is a means to communicate over BAM pipes to
> > + * the IPA block. The Modem also uses BAM pipes to communicate with the IPA
> > + * core.
> > + *
> > + * Refer to the GSI documentation: BAM is a precursor to GSI and is
> > + * conceptually much the same (as far as can be told without public docs).
> > + *
> > + * Each channel here corresponds to 1 BAM pipe configured in BAM2BAM mode
> > + *
> > + * IPA cmds are transferred one at a time, each in one BAM transfer.
> > + */
> > +
> > +/* Get and configure the BAM DMA channel */
> > +int bam_channel_init_one(struct ipa_dma *bam,
> > +			 const struct ipa_gsi_endpoint_data *data, bool command)
> > +{
> > +	struct dma_slave_config bam_config;
> > +	u32 channel_id = data->channel_id;
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +	int ret;
> > +
> > +	/*TODO: if (!bam_channel_data_valid(bam, data))
> > +		return -EINVAL;*/
> > +
> > +	channel->dma_subsys = bam;
> > +	channel->dma_chan = dma_request_chan(bam->dev, data->channel_name);
> > +	channel->toward_ipa = data->toward_ipa;
> > +	channel->tlv_count = data->channel.tlv_count;
> > +	channel->tre_count = data->channel.tre_count;
> > +	if (IS_ERR(channel->dma_chan)) {
> > +		dev_err(bam->dev, "failed to request BAM channel %s: %d\n",
> > +				data->channel_name,
> > +				(int) PTR_ERR(channel->dma_chan));
> > +		return PTR_ERR(channel->dma_chan);
> > +	}
> > +
> > +	ret = ipa_channel_trans_init(bam, data->channel_id);
> > +	if (ret)
> > +		goto err_dma_chan_free;
> > +
> > +	if (data->toward_ipa) {
> > +		bam_config.direction = DMA_MEM_TO_DEV;
> > +		bam_config.dst_maxburst = channel->tlv_count;
> > +	} else {
> > +		bam_config.direction = DMA_DEV_TO_MEM;
> > +		bam_config.src_maxburst = channel->tlv_count;
> > +	}
> > +
> > +	dmaengine_slave_config(channel->dma_chan, &bam_config);
> > +
> > +	if (command)
> > +		ret = ipa_cmd_pool_init(channel, 256);
> > +
> > +	if (!ret)
> > +		return 0;
> > +
> > +err_dma_chan_free:
> > +	dma_release_channel(channel->dma_chan);
> > +	return ret;
> > +}
> > +
> > +static void bam_channel_exit_one(struct ipa_channel *channel)
> > +{
> > +	if (channel->dma_chan) {
> > +		dmaengine_terminate_sync(channel->dma_chan);
> > +		dma_release_channel(channel->dma_chan);
> > +	}
> > +}
> > +
> > +/* Get channels from BAM_DMA */
> > +int bam_channel_init(struct ipa_dma *bam, u32 count,
> > +		const struct ipa_gsi_endpoint_data *data)
> > +{
> > +	int ret = 0;
> > +	u32 i;
> > +
> > +	for (i = 0; i < count; ++i) {
> > +		bool command = i == IPA_ENDPOINT_AP_COMMAND_TX;
> > +
> > +		if (!data[i].channel_name || data[i].ee_id == GSI_EE_MODEM)
> > +			continue;
> > +
> > +		ret = bam_channel_init_one(bam, &data[i], command);
> > +		if (ret)
> > +			goto err_unwind;
> > +	}
> > +
> > +	return ret;
> > +
> > +err_unwind:
> > +	while (i--) {
> > +		if (ipa_gsi_endpoint_data_empty(&data[i]))
> > +			continue;
> > +
> > +		bam_channel_exit_one(&bam->channel[i]);
> > +	}
> > +	return ret;
> > +}
> > +
> > +/* Inverse of bam_channel_init() */
> > +void bam_channel_exit(struct ipa_dma *bam)
> > +{
> > +	u32 channel_id = BAM_CHANNEL_COUNT_MAX - 1;
> > +
> > +	do
> > +		bam_channel_exit_one(&bam->channel[channel_id]);
> > +	while (channel_id--);
> > +}
> > +
> > +/* Inverse of bam_init() */
> > +static void bam_exit(struct ipa_dma *bam)
> > +{
> > +	mutex_destroy(&bam->mutex);
> > +	bam_channel_exit(bam);
> > +}
> > +
> > +/* Return the channel id associated with a given channel */
> > +static u32 bam_channel_id(struct ipa_channel *channel)
> > +{
> > +	return channel - &channel->dma_subsys->channel[0];
> > +}
> > +
> > +static void
> > +bam_channel_tx_update(struct ipa_channel *channel, struct ipa_trans *trans)
> > +{
> > +	u64 byte_count = trans->byte_count + trans->len;
> > +	u64 trans_count = trans->trans_count + 1;
> > +
> > +	byte_count -= channel->compl_byte_count;
> > +	channel->compl_byte_count += byte_count;
> > +	trans_count -= channel->compl_trans_count;
> > +	channel->compl_trans_count += trans_count;
> > +
> > +	ipa_gsi_channel_tx_completed(channel->dma_subsys, bam_channel_id(channel),
> > +					   trans_count, byte_count);
> > +}
> > +
> > +static void
> > +bam_channel_rx_update(struct ipa_channel *channel, struct ipa_trans *trans)
> > +{
> > +	/* FIXME */
> > +	u64 byte_count = trans->byte_count + trans->len;
> > +
> > +	channel->byte_count += byte_count;
> > +	channel->trans_count++;
> > +}
> > +
> > +/* Consult hardware, move any newly completed transactions to completed list */
> > +static void bam_channel_update(struct ipa_channel *channel)
> > +{
> > +	struct ipa_trans *trans;
> > +	bool found = false;
> > +
> > +	list_for_each_entry(trans, &channel->trans_info.pending, links) {
> > +		enum dma_status trans_status =
> > +				dma_async_is_tx_complete(channel->dma_chan,
> > +					trans->cookie, NULL, NULL);
> > +		if (trans_status == DMA_COMPLETE) {
> > +			found = true;
> > +			break;
> > +		}
> > +	}
> > +
> > +	/* If nothing has completed, trans does not point at a valid
> > +	 * transaction and there is nothing to update.
> > +	 */
> > +	if (!found)
> > +		return;
> > +
> > +	/* Take a reference on the completed transaction to keep it from
> > +	 * completing before we give the events for this and previous
> > +	 * transactions back to the hardware.
> > +	 */
> > +	refcount_inc(&trans->refcount);
> > +
> > +	/* For RX channels, update each completed transaction with the number
> > +	 * of bytes that were actually received.  For TX channels, report
> > +	 * the number of transactions and bytes this completion represents
> > +	 * up the network stack.
> > +	 */
> > +	if (channel->toward_ipa)
> > +		bam_channel_tx_update(channel, trans);
> > +	else
> > +		bam_channel_rx_update(channel, trans);
> > +
> > +	ipa_trans_move_complete(trans);
> > +
> > +	ipa_trans_free(trans);
> > +}
> > +
> > +/**
> > + * bam_channel_poll_one() - Return a single completed transaction on a channel
> > + * @channel:	Channel to be polled
> > + *
> > + * Return:	Transaction pointer, or null if none are available
> > + *
> > + * This function returns the first entry on a channel's completed transaction
> > + * list.  If that list is empty, the hardware is consulted to determine
> > + * whether any new transactions have completed.  If so, they're moved to the
> > + * completed list and the new first entry is returned.  If there are no more
> > + * completed transactions, a null pointer is returned.
> > + */
> > +static struct ipa_trans *bam_channel_poll_one(struct ipa_channel *channel)
> > +{
> > +	struct ipa_trans *trans;
> > +
> > +	/* Get the first transaction from the completed list */
> > +	trans = ipa_channel_trans_complete(channel);
> > +	if (!trans) {
> > +		bam_channel_update(channel);
> > +		trans = ipa_channel_trans_complete(channel);
> > +	}
> > +
> > +	if (trans)
> > +		ipa_trans_move_polled(trans);
> > +
> > +	return trans;
> > +}
> > +
> > +/**
> > + * bam_channel_poll() - NAPI poll function for a channel
> > + * @napi:	NAPI structure for the channel
> > + * @budget:	Budget supplied by NAPI core
> > + *
> > + * Return:	Number of items polled (<= budget)
> > + *
> > + * Single transactions completed by hardware are polled until either
> > + * the budget is exhausted, or there are no more.  Each transaction
> > + * polled is passed to ipa_trans_complete(), to perform remaining
> > + * completion processing and retire/free the transaction.
> > + */
> > +static int bam_channel_poll(struct napi_struct *napi, int budget)
> > +{
> > +	struct ipa_channel *channel;
> > +	int count = 0;
> > +
> > +	channel = container_of(napi, struct ipa_channel, napi);
> > +	while (count < budget) {
> > +		struct ipa_trans *trans;
> > +
> > +		count++;
> > +		trans = bam_channel_poll_one(channel);
> > +		if (!trans)
> > +			break;
> > +		ipa_trans_complete(trans);
> > +	}
> > +
> > +	if (count < budget)
> > +		napi_complete(&channel->napi);
> > +
> > +	return count;
> > +}
> > +
> > +/* Setup function for a single channel */
> > +static void bam_channel_setup_one(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	if (!channel->dma_subsys)
> > +		return;	/* Ignore uninitialized channels */
> > +
> > +	if (channel->toward_ipa) {
> > +		netif_tx_napi_add(&bam->dummy_dev, &channel->napi,
> > +				  bam_channel_poll, NAPI_POLL_WEIGHT);
> > +	} else {
> > +		netif_napi_add(&bam->dummy_dev, &channel->napi,
> > +			       bam_channel_poll, NAPI_POLL_WEIGHT);
> > +	}
> > +	napi_enable(&channel->napi);
> > +}
> > +
> > +static void bam_channel_teardown_one(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	if (!channel->dma_subsys)
> > +		return;		/* Ignore uninitialized channels */
> > +
> > +	netif_napi_del(&channel->napi);
> > +}
> > +
> > +/* Setup function for channels */
> > +static int bam_channel_setup(struct ipa_dma *bam)
> > +{
> > +	u32 channel_id = 0;
> > +	int ret;
> > +
> > +	mutex_lock(&bam->mutex);
> > +
> > +	do
> > +		bam_channel_setup_one(bam, channel_id);
> > +	while (++channel_id < bam->channel_count);
> > +
> > +	/* Make sure no channels were defined that hardware does not support */
> > +	while (channel_id < BAM_CHANNEL_COUNT_MAX) {
> > +		struct ipa_channel *channel = &bam->channel[channel_id++];
> > +
> > +		if (!channel->dma_subsys)
> > +			continue;	/* Ignore uninitialized channels */
> > +
> > +		dev_err(bam->dev, "channel %u not supported by hardware\n",
> > +			channel_id - 1);
> > +		channel_id = BAM_CHANNEL_COUNT_MAX;
> > +		ret = -EINVAL;
> > +		goto err_unwind;
> > +	}
> > +
> > +	mutex_unlock(&bam->mutex);
> > +
> > +	return 0;
> > +
> > +err_unwind:
> > +	while (channel_id--)
> > +		bam_channel_teardown_one(bam, channel_id);
> > +
> > +	mutex_unlock(&bam->mutex);
> > +
> > +	return ret;
> > +}
> > +
> > +/* Inverse of bam_channel_setup() */
> > +static void bam_channel_teardown(struct ipa_dma *bam)
> > +{
> > +	u32 channel_id;
> > +
> > +	mutex_lock(&bam->mutex);
> > +
> > +	channel_id = BAM_CHANNEL_COUNT_MAX - 1;
> > +	do
> > +		bam_channel_teardown_one(bam, channel_id);
> > +	while (channel_id--);
> > +
> > +	mutex_unlock(&bam->mutex);
> > +}
> > +
> > +static int bam_setup(struct ipa_dma *bam)
> > +{
> > +	return bam_channel_setup(bam);
> > +}
> > +
> > +static void bam_teardown(struct ipa_dma *bam)
> > +{
> > +	bam_channel_teardown(bam);
> > +}
> > +
> > +static u32 bam_channel_tre_max(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	/* Hardware limit is channel->tre_count - 1 */
> > +	return channel->tre_count - (channel->tlv_count - 1);
> > +}
> > +
> > +static u32 bam_channel_trans_tre_max(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	return channel->tlv_count;
> > +}
> > +
> > +static int bam_channel_start(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	return 0;
> > +}
> > +
> > +static int bam_channel_stop(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	return dmaengine_terminate_sync(channel->dma_chan);
> > +}
> > +
> > +static void bam_channel_reset(struct ipa_dma *bam, u32 channel_id, bool doorbell)
> > +{
> > +	bam_channel_stop(bam, channel_id);
> > +}
> > +
> > +static int bam_channel_suspend(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	return dmaengine_pause(channel->dma_chan);
> > +}
> > +
> > +static int bam_channel_resume(struct ipa_dma *bam, u32 channel_id)
> > +{
> > +	struct ipa_channel *channel = &bam->channel[channel_id];
> > +
> > +	return dmaengine_resume(channel->dma_chan);
> > +}
> > +
> > +static void bam_suspend(struct ipa_dma *bam)
> > +{
> > +	/* No-op for now */
> > +}
> > +
> > +static void bam_resume(struct ipa_dma *bam)
> > +{
> > +	/* No-op for now */
> > +}
> > +
> > +static void bam_trans_callback(void *arg)
> > +{
> > +	ipa_trans_complete(arg);
> > +}
> > +
> > +static void bam_trans_commit(struct ipa_trans *trans, bool unused)
> > +{
> > +	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
> > +	enum ipa_cmd_opcode opcode = IPA_CMD_NONE;
> > +	struct ipa_cmd_info *info;
> > +	struct scatterlist *sg;
> > +	u32 byte_count = 0;
> > +	u32 i;
> > +	enum dma_transfer_direction direction;
> > +
> > +	if (channel->toward_ipa)
> > +		direction = DMA_MEM_TO_DEV;
> > +	else
> > +		direction = DMA_DEV_TO_MEM;
> > +
> > +	/* assert(trans->used > 0); */
> > +
> > +	info = trans->info ? &trans->info[0] : NULL;
> > +	for_each_sg(trans->sgl, sg, trans->used, i) {
> > +		bool last_tre = i == trans->used - 1;
> > +		dma_addr_t addr = sg_dma_address(sg);
> > +		u32 len = sg_dma_len(sg);
> > +		u32 dma_flags = 0;
> > +		struct dma_async_tx_descriptor *desc;
> > +
> > +		byte_count += len;
> > +		if (info)
> > +			opcode = info++->opcode;
> > +
> > +		if (opcode != IPA_CMD_NONE) {
> > +			len = opcode;
> > +			dma_flags |= DMA_PREP_IMM_CMD;
> > +		}
> > +
> > +		if (last_tre)
> > +			dma_flags |= DMA_PREP_INTERRUPT;
> > +
> > +		desc = dmaengine_prep_slave_single(channel->dma_chan, addr, len,
> > +				direction, dma_flags);
> > +
> > +		if (last_tre) {
> > +			desc->callback = bam_trans_callback;
> > +			desc->callback_param = trans;
> > +		}
> > +
> > +		desc->cookie = dmaengine_submit(desc);
> > +
> > +		if (last_tre)
> > +			trans->cookie = desc->cookie;
> > +
> > +		if (direction == DMA_DEV_TO_MEM)
> > +			dmaengine_desc_attach_metadata(desc, &trans->len, sizeof(trans->len));
> > +	}
> > +
> > +	if (channel->toward_ipa) {
> > +		/* We record TX bytes when they are sent */
> > +		trans->len = byte_count;
> > +		trans->trans_count = channel->trans_count;
> > +		trans->byte_count = channel->byte_count;
> > +		channel->trans_count++;
> > +		channel->byte_count += byte_count;
> > +	}
> > +
> > +	ipa_trans_move_pending(trans);
> > +
> > +	dma_async_issue_pending(channel->dma_chan);
> > +}
> > +
> > +/* Initialize the BAM DMA channels
> > + * Actual hw init is handled by the BAM_DMA driver
> > + */
> > +int bam_init(struct ipa_dma *bam, struct platform_device *pdev,
> > +		enum ipa_version version, u32 count,
> > +		const struct ipa_gsi_endpoint_data *data)
> > +{
> > +	struct device *dev = &pdev->dev;
> > +	int ret;
> > +
> > +	bam->dev = dev;
> > +	bam->version = version;
> > +	bam->setup = bam_setup;
> > +	bam->teardown = bam_teardown;
> > +	bam->exit = bam_exit;
> > +	bam->suspend = bam_suspend;
> > +	bam->resume = bam_resume;
> > +	bam->channel_tre_max = bam_channel_tre_max;
> > +	bam->channel_trans_tre_max = bam_channel_trans_tre_max;
> > +	bam->channel_start = bam_channel_start;
> > +	bam->channel_stop = bam_channel_stop;
> > +	bam->channel_reset = bam_channel_reset;
> > +	bam->channel_suspend = bam_channel_suspend;
> > +	bam->channel_resume = bam_channel_resume;
> > +	bam->trans_commit = bam_trans_commit;
> > +
> > +	init_dummy_netdev(&bam->dummy_dev);
> > +
> > +	ret = bam_channel_init(bam, count, data);
> > +	if (ret)
> > +		return ret;
> > +
> > +	mutex_init(&bam->mutex);
> > +
> > +	return 0;
> > +}
> > diff --git a/drivers/net/ipa/gsi.c b/drivers/net/ipa/gsi.c
> > index 39d9ca620a9f..ac0b9e748fa1 100644
> > --- a/drivers/net/ipa/gsi.c
> > +++ b/drivers/net/ipa/gsi.c
> > @@ -2210,6 +2210,7 @@ int gsi_init(struct ipa_dma *gsi, struct platform_device *pdev,
> >   	gsi->channel_reset = gsi_channel_reset;
> >   	gsi->channel_suspend = gsi_channel_suspend;
> >   	gsi->channel_resume = gsi_channel_resume;
> > +	gsi->trans_commit = gsi_trans_commit;
> >   
> >   	/* GSI uses NAPI on all channels.  Create a dummy network device
> >   	 * for the channel NAPI contexts to be associated with.
> > diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
> > index 6d329e9ce5d2..7d62d49f414f 100644
> > --- a/drivers/net/ipa/ipa_data.h
> > +++ b/drivers/net/ipa/ipa_data.h
> > @@ -188,6 +188,7 @@ struct ipa_gsi_endpoint_data {
> >   	u8 channel_id;
> >   	u8 endpoint_id;
> >   	bool toward_ipa;
> > +	const char *channel_name;	/* used only for BAM DMA channels */
> >   
> >   	struct gsi_channel_data channel;
> >   	struct ipa_endpoint_data endpoint;
> > diff --git a/drivers/net/ipa/ipa_dma.h b/drivers/net/ipa/ipa_dma.h
> > index 1a23e6ac5785..3000182ae689 100644
> > --- a/drivers/net/ipa/ipa_dma.h
> > +++ b/drivers/net/ipa/ipa_dma.h
> > @@ -17,7 +17,11 @@
> >   
> >   /* Maximum number of channels and event rings supported by the driver */
> >   #define GSI_CHANNEL_COUNT_MAX	23
> > +#define BAM_CHANNEL_COUNT_MAX	20
> >   #define GSI_EVT_RING_COUNT_MAX	24
> > +#define MAX(a, b)		(((a) > (b)) ? (a) : (b))
> > +#define IPA_CHANNEL_COUNT_MAX	MAX(GSI_CHANNEL_COUNT_MAX, \
> > +				    BAM_CHANNEL_COUNT_MAX)
> >   
> >   /* Maximum TLV FIFO size for a channel; 64 here is arbitrary (and high) */
> >   #define GSI_TLV_MAX		64
> > @@ -119,6 +123,8 @@ struct ipa_channel {
> >   	struct gsi_ring tre_ring;
> >   	u32 evt_ring_id;
> >   
> > +	struct dma_chan *dma_chan;
> > +
> >   	u64 byte_count;			/* total # bytes transferred */
> >   	u64 trans_count;		/* total # transactions */
> >   	/* The following counts are used only for TX endpoints */
> > @@ -154,7 +160,7 @@ struct ipa_dma {
> >   	u32 irq;
> >   	u32 channel_count;
> >   	u32 evt_ring_count;
> > -	struct ipa_channel channel[GSI_CHANNEL_COUNT_MAX];
> > +	struct ipa_channel channel[IPA_CHANNEL_COUNT_MAX];
> >   	struct gsi_evt_ring evt_ring[GSI_EVT_RING_COUNT_MAX];
> >   	u32 event_bitmap;		/* allocated event rings */
> >   	u32 modem_channel_bitmap;	/* modem channels to allocate */
> > @@ -303,7 +309,7 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
> >   }
> >   
> >   /**
> > - * ipa_dma_init() - Initialize the GSI subsystem
> > + * ipa_init/bam_init() - Initialize the GSI/BAM subsystem
> >    * @dma_subsys:	Address of ipa_dma structure embedded in an IPA structure
> >    * @pdev:	IPA platform device
> >    * @version:	IPA hardware version (implies GSI version)
> > @@ -312,14 +318,18 @@ static inline void ipa_dma_resume(struct ipa_dma *dma_subsys)
> >    *
> >    * Return:	0 if successful, or a negative error code
> >    *
> > - * Early stage initialization of the GSI subsystem, performing tasks
> > - * that can be done before the GSI hardware is ready to use.
> > + * Early stage initialization of the GSI/BAM subsystem, performing tasks
> > + * that can be done before the GSI/BAM hardware is ready to use.
> >    */
> >   
> >   int gsi_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
> >   	     enum ipa_version version, u32 count,
> >   	     const struct ipa_gsi_endpoint_data *data);
> >   
> > +int bam_init(struct ipa_dma *dma_subsys, struct platform_device *pdev,
> > +	     enum ipa_version version, u32 count,
> > +	     const struct ipa_gsi_endpoint_data *data);
> > +
> >   /**
> >    * ipa_dma_exit() - Exit the DMA subsystem
> >    * @dma_subsys:	ipa_dma address previously passed to a successful gsi_init() call
> > diff --git a/drivers/net/ipa/ipa_dma_private.h b/drivers/net/ipa/ipa_dma_private.h
> > index 40148a551b47..1db53e597a61 100644
> > --- a/drivers/net/ipa/ipa_dma_private.h
> > +++ b/drivers/net/ipa/ipa_dma_private.h
> > @@ -16,6 +16,8 @@ struct ipa_channel;
> >   
> >   #define GSI_RING_ELEMENT_SIZE	16	/* bytes; must be a power of 2 */
> >   
> > +void gsi_trans_commit(struct ipa_trans *trans, bool ring_db);
> > +
> >   /* Return the entry that follows one provided in a transaction pool */
> >   void *ipa_trans_pool_next(struct ipa_trans_pool *pool, void *element);
> >   
> > diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> > index ba06e3ad554c..ea6c4347f2c6 100644
> > --- a/drivers/net/ipa/ipa_main.c
> > +++ b/drivers/net/ipa/ipa_main.c
> > @@ -60,12 +60,15 @@
> >    * core.  The GSI implements a set of "channels" used for communication
> >    * between the AP and the IPA.
> >    *
> > - * The IPA layer uses GSI channels to implement its "endpoints".  And while
> > - * a GSI channel carries data between the AP and the IPA, a pair of IPA
> > - * endpoints is used to carry traffic between two EEs.  Specifically, the main
> > - * modem network interface is implemented by two pairs of endpoints:  a TX
> > + * The IPA layer uses GSI channels or BAM pipes to implement its "endpoints".
> > + * And while a GSI channel carries data between the AP and the IPA, a pair of
> > + * IPA endpoints is used to carry traffic between two EEs.  Specifically, the
> > + * main modem network interface is implemented by two pairs of endpoints:  a TX
> >    * endpoint on the AP coupled with an RX endpoint on the modem; and another
> >    * RX endpoint on the AP receiving data from a TX endpoint on the modem.
> > + *
> > + * For BAM based transport, a pair of BAM pipes are used for TX and RX between
> > + * the AP and IPA, and between IPA and other EEs.
> >    */
> >   
> >   /* The name of the GSI firmware file relative to /lib/firmware */
> > @@ -716,8 +719,13 @@ static int ipa_probe(struct platform_device *pdev)
> >   	if (ret)
> >   		goto err_reg_exit;
> >   
> > -	ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> > -		       data->endpoint_data);
> > +	if (IPA_HAS_GSI(ipa->version))
> > +		ret = gsi_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> > +			       data->endpoint_data);
> > +	else
> > +		ret = bam_init(&ipa->dma_subsys, pdev, ipa->version, data->endpoint_count,
> > +			       data->endpoint_data);
> > +
> >   	if (ret)
> >   		goto err_mem_exit;
> >   
> > diff --git a/drivers/net/ipa/ipa_trans.c b/drivers/net/ipa/ipa_trans.c
> > index 22755f3ce3da..444f44846da8 100644
> > --- a/drivers/net/ipa/ipa_trans.c
> > +++ b/drivers/net/ipa/ipa_trans.c
> > @@ -254,7 +254,7 @@ struct ipa_trans *ipa_channel_trans_complete(struct ipa_channel *channel)
> >   }
> >   
> >   /* Move a transaction from the allocated list to the pending list */
> > -static void ipa_trans_move_pending(struct ipa_trans *trans)
> > +void ipa_trans_move_pending(struct ipa_trans *trans)
> >   {
> >   	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
> >   	struct ipa_trans_info *trans_info = &channel->trans_info;
> > @@ -539,7 +539,7 @@ static void gsi_trans_tre_fill(struct gsi_tre *dest_tre, dma_addr_t addr,
> >    * pending list.  Finally, updates the channel ring pointer and optionally
> >    * rings the doorbell.
> >    */
> > -static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
> > +void gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
> >   {
> >   	struct ipa_channel *channel = &trans->dma_subsys->channel[trans->channel_id];
> >   	struct gsi_ring *ring = &channel->tre_ring;
> > @@ -604,9 +604,9 @@ static void __gsi_trans_commit(struct ipa_trans *trans, bool ring_db)
> >   /* Commit a GSI transaction */
> >   void ipa_trans_commit(struct ipa_trans *trans, bool ring_db)
> >   {
> > -	if (trans->used)
> > -		__gsi_trans_commit(trans, ring_db);
> > -	else
> > +	if (trans->used)
> > +		trans->dma_subsys->trans_commit(trans, ring_db);
> > +	else
> >   		ipa_trans_free(trans);
> >   }
> >   
> > @@ -618,7 +618,7 @@ void ipa_trans_commit_wait(struct ipa_trans *trans)
> >   
> >   	refcount_inc(&trans->refcount);
> >   
> > -	__gsi_trans_commit(trans, true);
> > +	trans->dma_subsys->trans_commit(trans, true);
> >   
> >   	wait_for_completion(&trans->completion);
> >   
> > @@ -638,7 +638,7 @@ int ipa_trans_commit_wait_timeout(struct ipa_trans *trans,
> >   
> >   	refcount_inc(&trans->refcount);
> >   
> > -	__gsi_trans_commit(trans, true);
> > +	trans->dma_subsys->trans_commit(trans, true);
> >   
> >   	remaining = wait_for_completion_timeout(&trans->completion,
> >   						timeout_jiffies);
> > diff --git a/drivers/net/ipa/ipa_trans.h b/drivers/net/ipa/ipa_trans.h
> > index b93342414360..5f41e3e6f92a 100644
> > --- a/drivers/net/ipa/ipa_trans.h
> > +++ b/drivers/net/ipa/ipa_trans.h
> > @@ -10,6 +10,7 @@
> >   #include <linux/refcount.h>
> >   #include <linux/completion.h>
> >   #include <linux/dma-direction.h>
> > +#include <linux/dmaengine.h>
> >   
> >   #include "ipa_cmd.h"
> >   
> > @@ -61,6 +62,7 @@ struct ipa_trans {
> >   	struct scatterlist *sgl;
> >   	struct ipa_cmd_info *info;	/* array of entries, or null */
> >   	enum dma_data_direction direction;
> > +	dma_cookie_t cookie;
> >   
> >   	refcount_t refcount;
> >   	struct completion completion;
> > @@ -149,6 +151,8 @@ struct ipa_trans *ipa_channel_trans_alloc(struct ipa_dma *dma_subsys, u32 channe
> >    */
> >   void ipa_trans_free(struct ipa_trans *trans);
> >   
> > +void ipa_trans_move_pending(struct ipa_trans *trans);
> > +
> >   /**
> >    * ipa_trans_cmd_add() - Add an immediate command to a transaction
> >    * @trans:	Transaction
> > 


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 18:13     ` Sireesh Kodali
  0 siblings, 0 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-10-18 18:13 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > IPA v2.x commands are different from later IPA revisions mostly because
> > of the fact that IPA v2.x is 32 bit. There are also other minor
> > differences in some of the command structs.
> > The tables again are only different because of the fact that IPA v2.x is
> > 32 bit.
>
> There's no "RFC" on this patch, but I assume it's just invisible.

Eep, I forgot to add the tag to this patch

>
> There are some things in here where some conventions used elsewhere
> in the driver aren't as well followed. One example is the use of
> symbol names with IPA version encoded in them; such cases usually
> have a macro that takes a version as argument.

Got it, I'll fix that

>
> And I don't especially like using a macro on the left hand side
> of an assignment expression.
>

That's fair, I'll try coming up with a cleaner solution here

Regards,
Sireesh
> I'm skimming now, but overall this looks OK.
>
> -Alex
>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > ---
> >   drivers/net/ipa/ipa.h       |   2 +-
> >   drivers/net/ipa/ipa_cmd.c   | 138 ++++++++++++++++++++++++++----------
> >   drivers/net/ipa/ipa_table.c |  29 ++++++--
> >   drivers/net/ipa/ipa_table.h |   2 +-
> >   4 files changed, 125 insertions(+), 46 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa.h b/drivers/net/ipa/ipa.h
> > index 80a83ac45729..63b2b368b588 100644
> > --- a/drivers/net/ipa/ipa.h
> > +++ b/drivers/net/ipa/ipa.h
> > @@ -81,7 +81,7 @@ struct ipa {
> >   	struct ipa_power *power;
> >   
> >   	dma_addr_t table_addr;
> > -	__le64 *table_virt;
> > +	void *table_virt;
> >   
> >   	struct ipa_interrupt *interrupt;
> >   	bool uc_powered;
> > diff --git a/drivers/net/ipa/ipa_cmd.c b/drivers/net/ipa/ipa_cmd.c
> > index 7a104540dc26..58dae4b3bf87 100644
> > --- a/drivers/net/ipa/ipa_cmd.c
> > +++ b/drivers/net/ipa/ipa_cmd.c
> > @@ -25,8 +25,8 @@
> >    * An immediate command is generally used to request the IPA do something
> >    * other than data transfer to another endpoint.
> >    *
> > - * Immediate commands are represented by GSI transactions just like other
> > - * transfer requests, represented by a single GSI TRE.  Each immediate
> > + * Immediate commands on IPA v3 are represented by GSI transactions just like
> > + * other transfer requests, represented by a single GSI TRE.  Each immediate
> >    * command has a well-defined format, having a payload of a known length.
> >    * This allows the transfer element's length field to be used to hold an
> >    * immediate command's opcode.  The payload for a command resides in DRAM
> > @@ -45,10 +45,16 @@ enum pipeline_clear_options {
> >   
> >   /* IPA_CMD_IP_V{4,6}_{FILTER,ROUTING}_INIT */
> >   
> > -struct ipa_cmd_hw_ip_fltrt_init {
> > -	__le64 hash_rules_addr;
> > -	__le64 flags;
> > -	__le64 nhash_rules_addr;
> > +union ipa_cmd_hw_ip_fltrt_init {
> > +	struct {
> > +		__le32 nhash_rules_addr;
> > +		__le32 flags;
> > +	} v2;
> > +	struct {
> > +		__le64 hash_rules_addr;
> > +		__le64 flags;
> > +		__le64 nhash_rules_addr;
> > +	} v3;
> >   };
> >   
> >   /* Field masks for ipa_cmd_hw_ip_fltrt_init structure fields */
> > @@ -56,13 +62,23 @@ struct ipa_cmd_hw_ip_fltrt_init {
> >   #define IP_FLTRT_FLAGS_HASH_ADDR_FMASK			GENMASK_ULL(27, 12)
> >   #define IP_FLTRT_FLAGS_NHASH_SIZE_FMASK			GENMASK_ULL(39, 28)
> >   #define IP_FLTRT_FLAGS_NHASH_ADDR_FMASK			GENMASK_ULL(55, 40)
> > +#define IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(11, 0)
> > +#define IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(27, 12)
> > +#define IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK		GENMASK_ULL(15, 0)
> > +#define IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK		GENMASK_ULL(31, 16)
> >   
> >   /* IPA_CMD_HDR_INIT_LOCAL */
> >   
> > -struct ipa_cmd_hw_hdr_init_local {
> > -	__le64 hdr_table_addr;
> > -	__le32 flags;
> > -	__le32 reserved;
> > +union ipa_cmd_hw_hdr_init_local {
> > +	struct {
> > +		__le32 hdr_table_addr;
> > +		__le32 flags;
> > +	} v2;
> > +	struct {
> > +		__le64 hdr_table_addr;
> > +		__le32 flags;
> > +		__le32 reserved;
> > +	} v3;
> >   };
> >   
> >   /* Field masks for ipa_cmd_hw_hdr_init_local structure fields */
> > @@ -109,14 +125,37 @@ struct ipa_cmd_ip_packet_init {
> >   #define DMA_SHARED_MEM_OPCODE_SKIP_CLEAR_FMASK		GENMASK(8, 8)
> >   #define DMA_SHARED_MEM_OPCODE_CLEAR_OPTION_FMASK	GENMASK(10, 9)
> >   
> > -struct ipa_cmd_hw_dma_mem_mem {
> > -	__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
> > -	__le16 size;
> > -	__le16 local_addr;
> > -	__le16 flags;
> > -	__le64 system_addr;
> > +union ipa_cmd_hw_dma_mem_mem {
> > +	struct {
> > +		__le16 reserved;
> > +		__le16 size;
> > +		__le32 system_addr;
> > +		__le16 local_addr;
> > +		__le16 flags; /* the least significant 14 bits are reserved */
> > +		__le32 padding;
> > +	} v2;
> > +	struct {
> > +		__le16 clear_after_read; /* 0 or DMA_SHARED_MEM_CLEAR_AFTER_READ */
> > +		__le16 size;
> > +		__le16 local_addr;
> > +		__le16 flags;
> > +		__le64 system_addr;
> > +	} v3;
> >   };
> >   
> > +#define CMD_FIELD(_version, _payload, _field)				\
> > +	*(((_version) > IPA_VERSION_2_6L) ?		    		\
> > +	  &(_payload->v3._field) :			    		\
> > +	  &(_payload->v2._field))
> > +
> > +#define SET_DMA_FIELD(_ver, _payload, _field, _value)			\
> > +	do {								\
> > +		if ((_ver) >= IPA_VERSION_3_0)				\
> > +			(_payload)->v3._field = cpu_to_le64(_value);	\
> > +		else							\
> > +			(_payload)->v2._field = cpu_to_le32(_value);	\
> > +	} while (0)
> > +
> >   /* Flag allowing atomic clear of target region after reading data (v4.0+)*/
> >   #define DMA_SHARED_MEM_CLEAR_AFTER_READ			GENMASK(15, 15)
> >   
> > @@ -132,15 +171,16 @@ struct ipa_cmd_ip_packet_tag_status {
> >   	__le64 tag;
> >   };
> >   
> > -#define IP_PACKET_TAG_STATUS_TAG_FMASK			GENMASK_ULL(63, 16)
> > +#define IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 32)
> > +#define IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK		GENMASK_ULL(63, 16)
> >   
> >   /* Immediate command payload */
> >   union ipa_cmd_payload {
> > -	struct ipa_cmd_hw_ip_fltrt_init table_init;
> > -	struct ipa_cmd_hw_hdr_init_local hdr_init_local;
> > +	union ipa_cmd_hw_ip_fltrt_init table_init;
> > +	union ipa_cmd_hw_hdr_init_local hdr_init_local;
> >   	struct ipa_cmd_register_write register_write;
> >   	struct ipa_cmd_ip_packet_init ip_packet_init;
> > -	struct ipa_cmd_hw_dma_mem_mem dma_shared_mem;
> > +	union ipa_cmd_hw_dma_mem_mem dma_shared_mem;
> >   	struct ipa_cmd_ip_packet_tag_status ip_packet_tag_status;
> >   };
> >   
> > @@ -154,6 +194,7 @@ static void ipa_cmd_validate_build(void)
> >   	 * of entries.
> >   	 */
> >   #define TABLE_SIZE	(TABLE_COUNT_MAX * sizeof(__le64))
> > +// TODO
> >   #define TABLE_COUNT_MAX	max_t(u32, IPA_ROUTE_COUNT_MAX, IPA_FILTER_COUNT_MAX)
> >   	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_HASH_SIZE_FMASK));
> >   	BUILD_BUG_ON(TABLE_SIZE > field_max(IP_FLTRT_FLAGS_NHASH_SIZE_FMASK));
> > @@ -405,15 +446,26 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
> >   {
> >   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
> >   	enum dma_data_direction direction = DMA_TO_DEVICE;
> > -	struct ipa_cmd_hw_ip_fltrt_init *payload;
> > +	union ipa_cmd_hw_ip_fltrt_init *payload;
> > +	enum ipa_version version = ipa->version;
> >   	union ipa_cmd_payload *cmd_payload;
> >   	dma_addr_t payload_addr;
> >   	u64 val;
> >   
> >   	/* Record the non-hash table offset and size */
> >   	offset += ipa->mem_offset;
> > -	val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
> > -	val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
> > +
> > +	if (version >= IPA_VERSION_3_0) {
> > +		val = u64_encode_bits(offset, IP_FLTRT_FLAGS_NHASH_ADDR_FMASK);
> > +		val |= u64_encode_bits(size, IP_FLTRT_FLAGS_NHASH_SIZE_FMASK);
> > +	} else if (opcode == IPA_CMD_IP_V4_FILTER_INIT ||
> > +		   opcode == IPA_CMD_IP_V4_ROUTING_INIT) {
> > +		val = u64_encode_bits(offset, IP_V2_IPV4_FLTRT_FLAGS_ADDR_FMASK);
> > +		val |= u64_encode_bits(size, IP_V2_IPV4_FLTRT_FLAGS_SIZE_FMASK);
> > +	} else { /* IPA <= v2.6L IPv6 */
> > +		val = u64_encode_bits(offset, IP_V2_IPV6_FLTRT_FLAGS_ADDR_FMASK);
> > +		val |= u64_encode_bits(size, IP_V2_IPV6_FLTRT_FLAGS_SIZE_FMASK);
> > +	}
> >   
> >   	/* The hash table offset and address are zero if its size is 0 */
> >   	if (hash_size) {
> > @@ -429,10 +481,10 @@ void ipa_cmd_table_init_add(struct ipa_trans *trans,
> >   	payload = &cmd_payload->table_init;
> >   
> >   	/* Fill in all offsets and sizes and the non-hash table address */
> > -	if (hash_size)
> > -		payload->hash_rules_addr = cpu_to_le64(hash_addr);
> > -	payload->flags = cpu_to_le64(val);
> > -	payload->nhash_rules_addr = cpu_to_le64(addr);
> > +	if (hash_size && version >= IPA_VERSION_3_0)
> > +		payload->v3.hash_rules_addr = cpu_to_le64(hash_addr);
> > +	SET_DMA_FIELD(version, payload, flags, val);
> > +	SET_DMA_FIELD(version, payload, nhash_rules_addr, addr);
> >   
> >   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> >   			  direction, opcode);
> > @@ -445,7 +497,7 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
> >   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
> >   	enum ipa_cmd_opcode opcode = IPA_CMD_HDR_INIT_LOCAL;
> >   	enum dma_data_direction direction = DMA_TO_DEVICE;
> > -	struct ipa_cmd_hw_hdr_init_local *payload;
> > +	union ipa_cmd_hw_hdr_init_local *payload;
> >   	union ipa_cmd_payload *cmd_payload;
> >   	dma_addr_t payload_addr;
> >   	u32 flags;
> > @@ -460,10 +512,10 @@ void ipa_cmd_hdr_init_local_add(struct ipa_trans *trans, u32 offset, u16 size,
> >   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
> >   	payload = &cmd_payload->hdr_init_local;
> >   
> > -	payload->hdr_table_addr = cpu_to_le64(addr);
> > +	SET_DMA_FIELD(ipa->version, payload, hdr_table_addr, addr);
> >   	flags = u32_encode_bits(size, HDR_INIT_LOCAL_FLAGS_TABLE_SIZE_FMASK);
> >   	flags |= u32_encode_bits(offset, HDR_INIT_LOCAL_FLAGS_HDR_ADDR_FMASK);
> > -	payload->flags = cpu_to_le32(flags);
> > +	CMD_FIELD(ipa->version, payload, flags) = cpu_to_le32(flags);
> >   
> >   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> >   			  direction, opcode);
> > @@ -509,8 +561,11 @@ void ipa_cmd_register_write_add(struct ipa_trans *trans, u32 offset, u32 value,
> >   
> >   	} else {
> >   		flags = 0;	/* SKIP_CLEAR flag is always 0 */
> > -		options = u16_encode_bits(clear_option,
> > -					  REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
> > +		if (ipa->version > IPA_VERSION_2_6L)
> > +			options = u16_encode_bits(clear_option,
> > +					REGISTER_WRITE_CLEAR_OPTIONS_FMASK);
> > +		else
> > +			options = 0;
> >   	}
> >   
> >   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
> > @@ -552,7 +607,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
> >   {
> >   	struct ipa *ipa = container_of(trans->dma_subsys, struct ipa, dma_subsys);
> >   	enum ipa_cmd_opcode opcode = IPA_CMD_DMA_SHARED_MEM;
> > -	struct ipa_cmd_hw_dma_mem_mem *payload;
> > +	enum ipa_version version = ipa->version;
> > +	union ipa_cmd_hw_dma_mem_mem *payload;
> >   	union ipa_cmd_payload *cmd_payload;
> >   	enum dma_data_direction direction;
> >   	dma_addr_t payload_addr;
> > @@ -571,8 +627,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
> >   	/* payload->clear_after_read was reserved prior to IPA v4.0.  It's
> >   	 * never needed for current code, so it's 0 regardless of version.
> >   	 */
> > -	payload->size = cpu_to_le16(size);
> > -	payload->local_addr = cpu_to_le16(offset);
> > +	CMD_FIELD(version, payload, size) = cpu_to_le16(size);
> > +	CMD_FIELD(version, payload, local_addr) = cpu_to_le16(offset);
> >   	/* payload->flags:
> >   	 *   direction:		0 = write to IPA, 1 read from IPA
> >   	 * Starting at v4.0 these are reserved; either way, all zero:
> > @@ -582,8 +638,8 @@ void ipa_cmd_dma_shared_mem_add(struct ipa_trans *trans, u32 offset, u16 size,
> >   	 * since both values are 0 we won't bother OR'ing them in.
> >   	 */
> >   	flags = toward_ipa ? 0 : DMA_SHARED_MEM_FLAGS_DIRECTION_FMASK;
> > -	payload->flags = cpu_to_le16(flags);
> > -	payload->system_addr = cpu_to_le64(addr);
> > +	CMD_FIELD(version, payload, flags) = cpu_to_le16(flags);
> > +	SET_DMA_FIELD(version, payload, system_addr, addr);
> >   
> >   	direction = toward_ipa ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
> >   
> > @@ -599,11 +655,17 @@ static void ipa_cmd_ip_tag_status_add(struct ipa_trans *trans)
> >   	struct ipa_cmd_ip_packet_tag_status *payload;
> >   	union ipa_cmd_payload *cmd_payload;
> >   	dma_addr_t payload_addr;
> > +	u64 tag_mask;
> > +
> > +	if (trans->dma_subsys->version <= IPA_VERSION_2_6L)
> > +		tag_mask = IPA_V2_IP_PACKET_TAG_STATUS_TAG_FMASK;
> > +	else
> > +		tag_mask = IPA_V3_IP_PACKET_TAG_STATUS_TAG_FMASK;
> >   
> >   	cmd_payload = ipa_cmd_payload_alloc(ipa, &payload_addr);
> >   	payload = &cmd_payload->ip_packet_tag_status;
> >   
> > -	payload->tag = le64_encode_bits(0, IP_PACKET_TAG_STATUS_TAG_FMASK);
> > +	payload->tag = le64_encode_bits(0, tag_mask);
> >   
> >   	ipa_trans_cmd_add(trans, payload, sizeof(*payload), payload_addr,
> >   			  direction, opcode);
> > diff --git a/drivers/net/ipa/ipa_table.c b/drivers/net/ipa/ipa_table.c
> > index d197959cc032..459fb4830244 100644
> > --- a/drivers/net/ipa/ipa_table.c
> > +++ b/drivers/net/ipa/ipa_table.c
> > @@ -8,6 +8,7 @@
> >   #include <linux/kernel.h>
> >   #include <linux/bits.h>
> >   #include <linux/bitops.h>
> > +#include <linux/module.h>
> >   #include <linux/bitfield.h>
> >   #include <linux/io.h>
> >   #include <linux/build_bug.h>
> > @@ -561,6 +562,19 @@ void ipa_table_config(struct ipa *ipa)
> >   	ipa_route_config(ipa, true);
> >   }
> >   
> > +static inline void *ipa_table_write(enum ipa_version version,
> > +				   void *virt, u64 value)
> > +{
> > +	if (IPA_IS_64BIT(version)) {
> > +		__le64 *ptr = virt;
> > +		*ptr = cpu_to_le64(value);
> > +	} else {
> > +		__le32 *ptr = virt;
> > +		*ptr = cpu_to_le32(value);
> > +	}
> > +	return virt + IPA_TABLE_ENTRY_SIZE(version);
> > +}
> > +
> >   /*
> >    * Initialize a coherent DMA allocation containing initialized filter and
> >    * route table data.  This is used when initializing or resetting the IPA
> > @@ -602,10 +616,11 @@ void ipa_table_config(struct ipa *ipa)
> >   int ipa_table_init(struct ipa *ipa)
> >   {
> >   	u32 count = max_t(u32, IPA_FILTER_COUNT_MAX, IPA_ROUTE_COUNT_MAX);
> > +	enum ipa_version version = ipa->version;
> >   	struct device *dev = &ipa->pdev->dev;
> > +	u64 filter_map = ipa->filter_map << 1;
> >   	dma_addr_t addr;
> > -	__le64 le_addr;
> > -	__le64 *virt;
> > +	void *virt;
> >   	size_t size;
> >   
> >   	ipa_table_validate_build();
> > @@ -626,19 +641,21 @@ int ipa_table_init(struct ipa *ipa)
> >   	ipa->table_addr = addr;
> >   
> >   	/* First slot is the zero rule */
> > -	*virt++ = 0;
> > +	virt = ipa_table_write(version, virt, 0);
> >   
> >   	/* Next is the filter table bitmap.  The "soft" bitmap value
> >   	 * must be converted to the hardware representation by shifting
> >   	 * it left one position.  (Bit 0 represents global filtering,
> >   	 * which is possible but not used.)
> >   	 */
> > -	*virt++ = cpu_to_le64((u64)ipa->filter_map << 1);
> > +	if (version <= IPA_VERSION_2_6L)
> > +		filter_map |= 1;
> > +
> > +	virt = ipa_table_write(version, virt, filter_map);
> >   
> >   	/* All the rest contain the DMA address of the zero rule */
> > -	le_addr = cpu_to_le64(addr);
> >   	while (count--)
> > -		*virt++ = le_addr;
> > +		virt = ipa_table_write(version, virt, addr);
> >   
> >   	return 0;
> >   }
> > diff --git a/drivers/net/ipa/ipa_table.h b/drivers/net/ipa/ipa_table.h
> > index 78a168ce6558..6e12fc49e45b 100644
> > --- a/drivers/net/ipa/ipa_table.h
> > +++ b/drivers/net/ipa/ipa_table.h
> > @@ -43,7 +43,7 @@ bool ipa_filter_map_valid(struct ipa *ipa, u32 filter_mask);
> >    */
> >   static inline bool ipa_table_hash_support(struct ipa *ipa)
> >   {
> > -	return ipa->version != IPA_VERSION_4_2;
> > +	return ipa->version != IPA_VERSION_4_2 && ipa->version > IPA_VERSION_2_6L;
> >   }
> >   
> >   /**
> > 


^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 18:17     ` Sireesh Kodali
  0 siblings, 0 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-10-18 18:17 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > IPA v2.x endpoints are the same as the endpoints on later versions. The
> > biggest change is the addition of the "skip_config" flag. The only
> > other change is the backlog limit, which is a fixed number for IPA v2.6L.
>
> Not much to say here. Your patches are reasonably small, which
> makes them easier to review (thank you).
>
> -Alex

I'm glad splitting them up paid off!

Regards,
Sireesh
>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_endpoint.c | 65 ++++++++++++++++++++++------------
> >   1 file changed, 43 insertions(+), 22 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
> > index 7d3ab61cd890..024cf3a0ded0 100644
> > --- a/drivers/net/ipa/ipa_endpoint.c
> > +++ b/drivers/net/ipa/ipa_endpoint.c
> > @@ -360,8 +360,10 @@ void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable)
> >   {
> >   	u32 endpoint_id;
> >   
> > -	/* DELAY mode doesn't work correctly on IPA v4.2 */
> > -	if (ipa->version == IPA_VERSION_4_2)
> > +	/* DELAY mode doesn't work correctly on IPA v4.2
> > +	 * Pausing is not supported on IPA v2.6L
> > +	 */
> > +	if (ipa->version == IPA_VERSION_4_2 || ipa->version <= IPA_VERSION_2_6L)
> >   		return;
> >   
> >   	for (endpoint_id = 0; endpoint_id < IPA_ENDPOINT_MAX; endpoint_id++) {
> > @@ -383,6 +385,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
> >   {
> >   	u32 initialized = ipa->initialized;
> >   	struct ipa_trans *trans;
> > +	u32 value = 0, value_mask = ~0;
> >   	u32 count;
> >   
> >   	/* We need one command per modem TX endpoint.  We can get an upper
> > @@ -398,6 +401,11 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
> >   		return -EBUSY;
> >   	}
> >   
> > +	if (ipa->version <= IPA_VERSION_2_6L) {
> > +		value = aggr_force_close_fmask(true);
> > +		value_mask = aggr_force_close_fmask(true);
> > +	}
> > +
> >   	while (initialized) {
> >   		u32 endpoint_id = __ffs(initialized);
> >   		struct ipa_endpoint *endpoint;
> > @@ -416,7 +424,7 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
> >   		 * means status is disabled on the endpoint, and as a
> >   		 * result all other fields in the register are ignored.
> >   		 */
> > -		ipa_cmd_register_write_add(trans, offset, 0, ~0, false);
> > +		ipa_cmd_register_write_add(trans, offset, value, value_mask, false);
> >   	}
> >   
> >   	ipa_cmd_pipeline_clear_add(trans);
> > @@ -1531,8 +1539,10 @@ static void ipa_endpoint_program(struct ipa_endpoint *endpoint)
> >   	ipa_endpoint_init_mode(endpoint);
> >   	ipa_endpoint_init_aggr(endpoint);
> >   	ipa_endpoint_init_deaggr(endpoint);
> > -	ipa_endpoint_init_rsrc_grp(endpoint);
> > -	ipa_endpoint_init_seq(endpoint);
> > +	if (endpoint->ipa->version > IPA_VERSION_2_6L) {
> > +		ipa_endpoint_init_rsrc_grp(endpoint);
> > +		ipa_endpoint_init_seq(endpoint);
> > +	}
> >   	ipa_endpoint_status(endpoint);
> >   }
> >   
> > @@ -1592,7 +1602,6 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
> >   {
> >   	struct device *dev = &endpoint->ipa->pdev->dev;
> >   	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
> > -	bool stop_channel;
> >   	int ret;
> >   
> >   	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
> > @@ -1613,7 +1622,6 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
> >   {
> >   	struct device *dev = &endpoint->ipa->pdev->dev;
> >   	struct ipa_dma *gsi = &endpoint->ipa->dma_subsys;
> > -	bool start_channel;
> >   	int ret;
> >   
> >   	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
> > @@ -1750,23 +1758,33 @@ int ipa_endpoint_config(struct ipa *ipa)
> >   	/* Find out about the endpoints supplied by the hardware, and ensure
> >   	 * the highest one doesn't exceed the number we support.
> >   	 */
> > -	val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
> > -
> > -	/* Our RX is an IPA producer */
> > -	rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
> > -	max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
> > -	if (max > IPA_ENDPOINT_MAX) {
> > -		dev_err(dev, "too many endpoints (%u > %u)\n",
> > -			max, IPA_ENDPOINT_MAX);
> > -		return -EINVAL;
> > -	}
> > -	rx_mask = GENMASK(max - 1, rx_base);
> > +	if (ipa->version <= IPA_VERSION_2_6L) {
> > +		// FIXME Not used anywhere?
> > +		if (ipa->version == IPA_VERSION_2_6L)
> > +			val = ioread32(ipa->reg_virt +
> > +					IPA_REG_V2_ENABLED_PIPES_OFFSET);
> > +		/* IPA v2.6L supports 20 pipes */
> > +		ipa->available = ipa->filter_map;
> > +		return 0;
> > +	} else {
> > +		val = ioread32(ipa->reg_virt + IPA_REG_FLAVOR_0_OFFSET);
> > +
> > +		/* Our RX is an IPA producer */
> > +		rx_base = u32_get_bits(val, IPA_PROD_LOWEST_FMASK);
> > +		max = rx_base + u32_get_bits(val, IPA_MAX_PROD_PIPES_FMASK);
> > +		if (max > IPA_ENDPOINT_MAX) {
> > +			dev_err(dev, "too many endpoints (%u > %u)\n",
> > +					max, IPA_ENDPOINT_MAX);
> > +			return -EINVAL;
> > +		}
> > +		rx_mask = GENMASK(max - 1, rx_base);
> >   
> > -	/* Our TX is an IPA consumer */
> > -	max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
> > -	tx_mask = GENMASK(max - 1, 0);
> > +		/* Our TX is an IPA consumer */
> > +		max = u32_get_bits(val, IPA_MAX_CONS_PIPES_FMASK);
> > +		tx_mask = GENMASK(max - 1, 0);
> >   
> > -	ipa->available = rx_mask | tx_mask;
> > +		ipa->available = rx_mask | tx_mask;
> > +	}
> >   
> >   	/* Check for initialized endpoints not supported by the hardware */
> >   	if (ipa->initialized & ~ipa->available) {
> > @@ -1865,6 +1883,9 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
> >   			filter_map |= BIT(data->endpoint_id);
> >   	}
> >   
> > +	if (ipa->version <= IPA_VERSION_2_6L)
> > +		filter_map = 0x1fffff;
> > +
> >   	if (!ipa_filter_map_valid(ipa, filter_map))
> >   		goto err_endpoint_exit;
> >   
> > 



* Re: [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 18:19     ` Sireesh Kodali
  0 siblings, 0 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-10-18 18:19 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > IPA v2.6L has an extra region to handle compression/decompression
> > acceleration. This region is used by some modems during modem init.
>
> So it has to be initialized? (I guess so.)

This is how downstream handles it, I haven't tested not initializing it.

>
> The memory size register apparently doesn't express things in
> units of 8 bytes either.
>

Indeed, with the hardware being 32 bits, it expresses things in units
of 4 bytes instead.
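
To illustrate the difference, here is a rough, self-contained sketch of
the decode (the field layout, masks, and helper names below are made up
for illustration; the driver itself uses u32_get_bits() with the real
FMASKs):

```c
#include <stdint.h>

/* Illustrative sketch only: decode a SHARED_MEM_SIZE-style register,
 * assuming (per the discussion) the fields are in 4-byte units on
 * IPA v2.x and 8-byte units on later versions.  The bit layout here
 * is hypothetical.
 */
#define SHARED_MEM_BADDR_MASK	0xffffu	/* bits 0-15: base offset */
#define SHARED_MEM_SIZE_SHIFT	16	/* bits 16-31: size */
#define SHARED_MEM_SIZE_MASK	0xffffu

/* Bytes per register unit, depending on hardware generation */
static uint32_t shared_mem_unit(int is_v2)
{
	return is_v2 ? 4 : 8;
}

static uint32_t shared_mem_offset(uint32_t val, int is_v2)
{
	return shared_mem_unit(is_v2) * (val & SHARED_MEM_BADDR_MASK);
}

static uint32_t shared_mem_size(uint32_t val, int is_v2)
{
	return shared_mem_unit(is_v2) *
	       ((val >> SHARED_MEM_SIZE_SHIFT) & SHARED_MEM_SIZE_MASK);
}
```

So the same register value yields a region half the size on v2.x
hardware compared to the 8-byte-unit interpretation.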

Regards,
Sireesh
> -Alex
>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_mem.c | 36 ++++++++++++++++++++++++++++++------
> >   drivers/net/ipa/ipa_mem.h |  5 ++++-
> >   2 files changed, 34 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_mem.c b/drivers/net/ipa/ipa_mem.c
> > index 8acc88070a6f..bfcdc7e08de2 100644
> > --- a/drivers/net/ipa/ipa_mem.c
> > +++ b/drivers/net/ipa/ipa_mem.c
> > @@ -84,7 +84,7 @@ int ipa_mem_setup(struct ipa *ipa)
> >   	/* Get a transaction to define the header memory region and to zero
> >   	 * the processing context and modem memory regions.
> >   	 */
> > -	trans = ipa_cmd_trans_alloc(ipa, 4);
> > +	trans = ipa_cmd_trans_alloc(ipa, 5);
> >   	if (!trans) {
> >   		dev_err(&ipa->pdev->dev, "no transaction for memory setup\n");
> >   		return -EBUSY;
> > @@ -107,8 +107,14 @@ int ipa_mem_setup(struct ipa *ipa)
> >   	ipa_mem_zero_region_add(trans, IPA_MEM_AP_PROC_CTX);
> >   	ipa_mem_zero_region_add(trans, IPA_MEM_MODEM);
> >   
> > +	ipa_mem_zero_region_add(trans, IPA_MEM_ZIP);
> > +
> >   	ipa_trans_commit_wait(trans);
> >   
> > +	/* On IPA version <=2.6L (except 2.5) there is no PROC_CTX.  */
> > +	if (ipa->version != IPA_VERSION_2_5 && ipa->version <= IPA_VERSION_2_6L)
> > +		return 0;
> > +
> >   	/* Tell the hardware where the processing context area is located */
> >   	mem = ipa_mem_find(ipa, IPA_MEM_MODEM_PROC_CTX);
> >   	offset = ipa->mem_offset + mem->offset;
> > @@ -147,6 +153,11 @@ static bool ipa_mem_id_valid(struct ipa *ipa, enum ipa_mem_id mem_id)
> >   	case IPA_MEM_END_MARKER:	/* pseudo region */
> >   		break;
> >   
> > +	case IPA_MEM_ZIP:
> > +		if (version == IPA_VERSION_2_6L)
> > +			return true;
> > +		break;
> > +
> >   	case IPA_MEM_STATS_TETHERING:
> >   	case IPA_MEM_STATS_DROP:
> >   		if (version < IPA_VERSION_4_0)
> > @@ -319,10 +330,15 @@ int ipa_mem_config(struct ipa *ipa)
> >   	/* Check the advertised location and size of the shared memory area */
> >   	val = ioread32(ipa->reg_virt + ipa_reg_shared_mem_size_offset(ipa->version));
> >   
> > -	/* The fields in the register are in 8 byte units */
> > -	ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> > -	/* Make sure the end is within the region's mapped space */
> > -	mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> > +	if (IPA_VERSION_RANGE(ipa->version, 2_0, 2_6L)) {
> > +		/* The fields in the register are in 8 byte units */
> > +		ipa->mem_offset = 8 * u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> > +		/* Make sure the end is within the region's mapped space */
> > +		mem_size = 8 * u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> > +	} else {
> > +		ipa->mem_offset = u32_get_bits(val, SHARED_MEM_BADDR_FMASK);
> > +		mem_size = u32_get_bits(val, SHARED_MEM_SIZE_FMASK);
> > +	}
> >   
> >   	/* If the sizes don't match, issue a warning */
> >   	if (ipa->mem_offset + mem_size < ipa->mem_size) {
> > @@ -564,6 +580,10 @@ static int ipa_smem_init(struct ipa *ipa, u32 item, size_t size)
> >   		return -EINVAL;
> >   	}
> >   
> > +	/* IPA v2.6L does not use IOMMU */
> > +	if (ipa->version <= IPA_VERSION_2_6L)
> > +		return 0;
> > +
> >   	domain = iommu_get_domain_for_dev(dev);
> >   	if (!domain) {
> >   		dev_err(dev, "no IOMMU domain found for SMEM\n");
> > @@ -591,6 +611,9 @@ static void ipa_smem_exit(struct ipa *ipa)
> >   	struct device *dev = &ipa->pdev->dev;
> >   	struct iommu_domain *domain;
> >   
> > +	if (ipa->version <= IPA_VERSION_2_6L)
> > +		return;
> > +
> >   	domain = iommu_get_domain_for_dev(dev);
> >   	if (domain) {
> >   		size_t size;
> > @@ -622,7 +645,8 @@ int ipa_mem_init(struct ipa *ipa, const struct ipa_mem_data *mem_data)
> >   	ipa->mem_count = mem_data->local_count;
> >   	ipa->mem = mem_data->local;
> >   
> > -	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, DMA_BIT_MASK(64));
> > +	ret = dma_set_mask_and_coherent(&ipa->pdev->dev, IPA_IS_64BIT(ipa->version) ?
> > +					DMA_BIT_MASK(64) : DMA_BIT_MASK(32));
> >   	if (ret) {
> >   		dev_err(dev, "error %d setting DMA mask\n", ret);
> >   		return ret;
> > diff --git a/drivers/net/ipa/ipa_mem.h b/drivers/net/ipa/ipa_mem.h
> > index 570bfdd99bff..be91cb38b6a8 100644
> > --- a/drivers/net/ipa/ipa_mem.h
> > +++ b/drivers/net/ipa/ipa_mem.h
> > @@ -47,8 +47,10 @@ enum ipa_mem_id {
> >   	IPA_MEM_UC_INFO,		/* 0 canaries */
> >   	IPA_MEM_V4_FILTER_HASHED,	/* 2 canaries */
> >   	IPA_MEM_V4_FILTER,		/* 2 canaries */
> > +	IPA_MEM_V4_FILTER_AP,		/* 2 canaries (IPA v2.0) */
> >   	IPA_MEM_V6_FILTER_HASHED,	/* 2 canaries */
> >   	IPA_MEM_V6_FILTER,		/* 2 canaries */
> > +	IPA_MEM_V6_FILTER_AP,		/* 0 canaries (IPA v2.0) */
> >   	IPA_MEM_V4_ROUTE_HASHED,	/* 2 canaries */
> >   	IPA_MEM_V4_ROUTE,		/* 2 canaries */
> >   	IPA_MEM_V6_ROUTE_HASHED,	/* 2 canaries */
> > @@ -57,7 +59,8 @@ enum ipa_mem_id {
> >   	IPA_MEM_AP_HEADER,		/* 0 canaries, optional */
> >   	IPA_MEM_MODEM_PROC_CTX,		/* 2 canaries */
> >   	IPA_MEM_AP_PROC_CTX,		/* 0 canaries */
> > -	IPA_MEM_MODEM,			/* 0/2 canaries */
> > +	IPA_MEM_ZIP,			/* 1 canary (IPA v2.6L) */
> > +	IPA_MEM_MODEM,			/* 0-2 canaries */
> >   	IPA_MEM_UC_EVENT_RING,		/* 1 canary, optional */
> >   	IPA_MEM_PDN_CONFIG,		/* 0/2 canaries (IPA v4.0+) */
> >   	IPA_MEM_STATS_QUOTA_MODEM,	/* 2/4 canaries (IPA v4.0+) */
> > 



* Re: [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 18:22     ` Sireesh Kodali
  0 siblings, 0 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-10-18 18:22 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: Vladimir Lypak, David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > On IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETED, so we have
> > to rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know when it's
> > ready. We add a function here that marks uc_ready = true. This function
> > is called by ipa_uc.c when IPA_UC_RESPONSE_INIT_COMPLETED is handled.
>
> This should use the new ipa_mem_find() interface for getting the
> memory information for the ZIP region.
>

Got it, thanks

> I don't know where the IPA_UC_RESPONSE_INIT_COMPLETED gets sent
> but I presume it ends up calling ipa_qmi_signal_uc_loaded().
>

IPA_UC_RESPONSE_INIT_COMPLETED is handled by the ipa_uc sub-driver. The
handler calls ipa_qmi_signal_uc_loaded() once the response is received,
at which point we know the uc has been inited.
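
As a rough sketch of that flow (the enum value and function names below
are illustrative stand-ins, not the actual ipa_uc.c symbols):

```c
#include <stdbool.h>

/* Illustrative model of the uc response dispatch described above:
 * when the microcontroller reports INIT_COMPLETED, the handler
 * forwards the event to the QMI code, which marks the uc ready.
 */
enum uc_response {
	UC_RESPONSE_NONE = 0,
	UC_RESPONSE_INIT_COMPLETED,
};

struct qmi_state {
	bool uc_ready;
};

/* Stand-in for ipa_qmi_signal_uc_loaded() */
static void qmi_signal_uc_loaded(struct qmi_state *qmi)
{
	qmi->uc_ready = true;
}

/* Stand-in for the uc interrupt/response handler in ipa_uc.c */
static void uc_response_handler(struct qmi_state *qmi, enum uc_response resp)
{
	if (resp == UC_RESPONSE_INIT_COMPLETED)
		qmi_signal_uc_loaded(qmi);
}
```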

> I think actually the DRIVER_INIT_COMPLETE message from the modem
> is saying "I finished initializing the microcontroller." And
> I've wondered why there is a duplicate mechanism. Maybe there
> was a race or something.
>

This makes sense. Given that some modems rely on the IPA block for
initialization, I wonder if Qualcomm decided it would be easier to allow
the modem to complete the uc initialization and send the signal instead.

Regards,
Sireesh
> -Alex
>
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
> > Signed-off-by: Vladimir Lypak <vladimir.lypak@gmail.com>
> > ---
> >   drivers/net/ipa/ipa_qmi.c | 27 ++++++++++++++++++++++++++-
> >   drivers/net/ipa/ipa_qmi.h | 10 ++++++++++
> >   2 files changed, 36 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/net/ipa/ipa_qmi.c b/drivers/net/ipa/ipa_qmi.c
> > index 7e2fe701cc4d..876e2a004f70 100644
> > --- a/drivers/net/ipa/ipa_qmi.c
> > +++ b/drivers/net/ipa/ipa_qmi.c
> > @@ -68,6 +68,11 @@
> >    * - The INDICATION_REGISTER request and INIT_COMPLETE indication are
> >    *   optional for non-initial modem boots, and have no bearing on the
> >    *   determination of when things are "ready"
> > + *
> > + * Note that on IPA v2.x, the modem doesn't send a DRIVER_INIT_COMPLETE
> > + * request. Thus, we rely on the uc's IPA_UC_RESPONSE_INIT_COMPLETED to know
> > + * when the uc is ready. The rest of the process is the same on IPA v2.x and
> > + * later IPA versions
> >    */
> >   
> >   #define IPA_HOST_SERVICE_SVC_ID		0x31
> > @@ -345,7 +350,12 @@ init_modem_driver_req(struct ipa_qmi *ipa_qmi)
> >   			req.hdr_proc_ctx_tbl_info.start + mem->size - 1;
> >   	}
> >   
> > -	/* Nothing to report for the compression table (zip_tbl_info) */
> > +	mem = &ipa->mem[IPA_MEM_ZIP];
> > +	if (mem->size) {
> > +		req.zip_tbl_info_valid = 1;
> > +		req.zip_tbl_info.start = ipa->mem_offset + mem->offset;
> > +		req.zip_tbl_info.end = ipa->mem_offset + mem->size - 1;
> > +	}
> >   
> >   	mem = ipa_mem_find(ipa, IPA_MEM_V4_ROUTE_HASHED);
> >   	if (mem->size) {
> > @@ -525,6 +535,21 @@ int ipa_qmi_setup(struct ipa *ipa)
> >   	return ret;
> >   }
> >   
> > +/* With IPA v2 modem is not required to send DRIVER_INIT_COMPLETE request to AP.
> > + * We start operation as soon as IPA_UC_RESPONSE_INIT_COMPLETED irq is triggered.
> > + */
> > +void ipa_qmi_signal_uc_loaded(struct ipa *ipa)
> > +{
> > +	struct ipa_qmi *ipa_qmi = &ipa->qmi;
> > +
> > +	/* This is needed only on IPA 2.x */
> > +	if (ipa->version > IPA_VERSION_2_6L)
> > +		return;
> > +
> > +	ipa_qmi->uc_ready = true;
> > +	ipa_qmi_ready(ipa_qmi);
> > +}
> > +
> >   /* Tear down IPA QMI handles */
> >   void ipa_qmi_teardown(struct ipa *ipa)
> >   {
> > diff --git a/drivers/net/ipa/ipa_qmi.h b/drivers/net/ipa/ipa_qmi.h
> > index 856ef629ccc8..4962d88b0d22 100644
> > --- a/drivers/net/ipa/ipa_qmi.h
> > +++ b/drivers/net/ipa/ipa_qmi.h
> > @@ -55,6 +55,16 @@ struct ipa_qmi {
> >    */
> >   int ipa_qmi_setup(struct ipa *ipa);
> >   
> > +/**
> > + * ipa_qmi_signal_uc_loaded() - Signal that the UC has been loaded
> > + * @ipa:		IPA pointer
> > + *
> > + * This is called when the uc indicates that it is ready. This exists because
> > + * on IPA v2.x, the modem does not send a DRIVER_INIT_COMPLETED. Thus we have
> > + * to rely on the uc's INIT_COMPLETED response to know if it was initialized
> > + */
> > +void ipa_qmi_signal_uc_loaded(struct ipa *ipa);
> > +
> >   /**
> >    * ipa_qmi_teardown() - Tear down IPA QMI handles
> >    * @ipa:		IPA pointer
> > 



* Re: [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware
  2021-10-13 22:30   ` Alex Elder
@ 2021-10-18 18:35     ` Sireesh Kodali
  0 siblings, 0 replies; 46+ messages in thread
From: Sireesh Kodali @ 2021-10-18 18:35 UTC (permalink / raw)
  To: Alex Elder, phone-devel, ~postmarketos/upstreaming, netdev,
	linux-kernel, linux-arm-msm, elder
  Cc: David S. Miller, Jakub Kicinski

On Thu Oct 14, 2021 at 4:00 AM IST, Alex Elder wrote:
> On 9/19/21 10:08 PM, Sireesh Kodali wrote:
> > This commit adds the config for IPA v2.0, v2.5, v2.6L. IPA v2.5 is found
> > on msm8996. IPA v2.6L hardware is found on following SoCs: msm8920,
> > msm8940, msm8952, msm8953, msm8956, msm8976, sdm630, sdm660. No
> > SoC-specific configuration in ipa driver is required.
> > 
> > Signed-off-by: Sireesh Kodali <sireeshkodali1@gmail.com>
>
> I will not look at this in great detail right now. It looks
> good to me, but I didn't notice where "channel_name" got
> defined. I'm not sure what the BCR value represents either.
>

I probably messed up while splitting the commits; it should be easy
enough to fix. As for the BCR, it was simply `#define`d in
downstream, with no comments, leaving us clueless as to what the magic
number means :(

Regards,
Sireesh
> -Alex
>
> > ---
> >   drivers/net/ipa/Makefile        |   7 +-
> >   drivers/net/ipa/ipa_data-v2.c   | 369 ++++++++++++++++++++++++++++++++
> >   drivers/net/ipa/ipa_data-v3.1.c |   2 +-
> >   drivers/net/ipa/ipa_data.h      |   3 +
> >   drivers/net/ipa/ipa_main.c      |  15 ++
> >   drivers/net/ipa/ipa_sysfs.c     |   6 +
> >   6 files changed, 398 insertions(+), 4 deletions(-)
> >   create mode 100644 drivers/net/ipa/ipa_data-v2.c
> > 
> > diff --git a/drivers/net/ipa/Makefile b/drivers/net/ipa/Makefile
> > index 4abebc667f77..858fbf76cff3 100644
> > --- a/drivers/net/ipa/Makefile
> > +++ b/drivers/net/ipa/Makefile
> > @@ -7,6 +7,7 @@ ipa-y			:=	ipa_main.o ipa_power.o ipa_reg.o ipa_mem.o \
> >   				ipa_resource.o ipa_qmi.o ipa_qmi_msg.o \
> >   				ipa_sysfs.o
> >   
> > -ipa-y			+=	ipa_data-v3.1.o ipa_data-v3.5.1.o \
> > -				ipa_data-v4.2.o ipa_data-v4.5.o \
> > -				ipa_data-v4.9.o ipa_data-v4.11.o
> > +ipa-y			+=	ipa_data-v2.o ipa_data-v3.1.o \
> > +				ipa_data-v3.5.1.o ipa_data-v4.2.o \
> > +				ipa_data-v4.5.o ipa_data-v4.9.o \
> > +				ipa_data-v4.11.o
> > diff --git a/drivers/net/ipa/ipa_data-v2.c b/drivers/net/ipa/ipa_data-v2.c
> > new file mode 100644
> > index 000000000000..869b8a1a45d6
> > --- /dev/null
> > +++ b/drivers/net/ipa/ipa_data-v2.c
> > @@ -0,0 +1,369 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +/* Copyright (c) 2012-2018, The Linux Foundation. All rights reserved.
> > + * Copyright (C) 2019-2020 Linaro Ltd.
> > + */
> > +
> > +#include <linux/log2.h>
> > +
> > +#include "ipa_data.h"
> > +#include "ipa_endpoint.h"
> > +#include "ipa_mem.h"
> > +
> > +/* Endpoint configuration for the IPA v2 hardware. */
> > +static const struct ipa_gsi_endpoint_data ipa_endpoint_data[] = {
> > +	[IPA_ENDPOINT_AP_COMMAND_TX] = {
> > +		.ee_id		= GSI_EE_AP,
> > +		.channel_id	= 3,
> > +		.endpoint_id	= 3,
> > +		.channel_name	= "cmd_tx",
> > +		.toward_ipa	= true,
> > +		.channel = {
> > +			.tre_count	= 256,
> > +			.event_count	= 256,
> > +			.tlv_count	= 20,
> > +		},
> > +		.endpoint = {
> > +			.config	= {
> > +				.dma_mode	= true,
> > +				.dma_endpoint	= IPA_ENDPOINT_AP_LAN_RX,
> > +			},
> > +		},
> > +	},
> > +	[IPA_ENDPOINT_AP_LAN_RX] = {
> > +		.ee_id		= GSI_EE_AP,
> > +		.channel_id	= 2,
> > +		.endpoint_id	= 2,
> > +		.channel_name	= "ap_lan_rx",
> > +		.channel = {
> > +			.tre_count	= 256,
> > +			.event_count	= 256,
> > +			.tlv_count	= 8,
> > +		},
> > +		.endpoint	= {
> > +			.config	= {
> > +				.aggregation	= true,
> > +				.status_enable	= true,
> > +				.rx = {
> > +					.pad_align	= ilog2(sizeof(u32)),
> > +				},
> > +			},
> > +		},
> > +	},
> > +	[IPA_ENDPOINT_AP_MODEM_TX] = {
> > +		.ee_id		= GSI_EE_AP,
> > +		.channel_id	= 4,
> > +		.endpoint_id	= 4,
> > +		.channel_name	= "ap_modem_tx",
> > +		.toward_ipa	= true,
> > +		.channel = {
> > +			.tre_count	= 256,
> > +			.event_count	= 256,
> > +			.tlv_count	= 8,
> > +		},
> > +		.endpoint	= {
> > +			.config	= {
> > +				.qmap		= true,
> > +				.status_enable	= true,
> > +				.tx = {
> > +					.status_endpoint =
> > +						IPA_ENDPOINT_AP_LAN_RX,
> > +				},
> > +			},
> > +		},
> > +	},
> > +	[IPA_ENDPOINT_AP_MODEM_RX] = {
> > +		.ee_id		= GSI_EE_AP,
> > +		.channel_id	= 5,
> > +		.endpoint_id	= 5,
> > +		.channel_name	= "ap_modem_rx",
> > +		.toward_ipa	= false,
> > +		.channel = {
> > +			.tre_count	= 256,
> > +			.event_count	= 256,
> > +			.tlv_count	= 8,
> > +		},
> > +		.endpoint	= {
> > +			.config = {
> > +				.aggregation	= true,
> > +				.qmap		= true,
> > +			},
> > +		},
> > +	},
> > +	[IPA_ENDPOINT_MODEM_LAN_TX] = {
> > +		.ee_id		= GSI_EE_MODEM,
> > +		.channel_id	= 6,
> > +		.endpoint_id	= 6,
> > +		.channel_name	= "modem_lan_tx",
> > +		.toward_ipa	= true,
> > +	},
> > +	[IPA_ENDPOINT_MODEM_COMMAND_TX] = {
> > +		.ee_id		= GSI_EE_MODEM,
> > +		.channel_id	= 7,
> > +		.endpoint_id	= 7,
> > +		.channel_name	= "modem_cmd_tx",
> > +		.toward_ipa	= true,
> > +	},
> > +	[IPA_ENDPOINT_MODEM_LAN_RX] = {
> > +		.ee_id		= GSI_EE_MODEM,
> > +		.channel_id	= 8,
> > +		.endpoint_id	= 8,
> > +		.channel_name	= "modem_lan_rx",
> > +		.toward_ipa	= false,
> > +	},
> > +	[IPA_ENDPOINT_MODEM_AP_RX] = {
> > +		.ee_id		= GSI_EE_MODEM,
> > +		.channel_id	= 9,
> > +		.endpoint_id	= 9,
> > +		.channel_name	= "modem_ap_rx",
> > +		.toward_ipa	= false,
> > +	},
> > +};
> > +
> > +static struct ipa_interconnect_data ipa_interconnect_data[] = {
> > +	{
> > +		.name = "memory",
> > +		.peak_bandwidth	= 1200000,	/* 1200 MBps */
> > +		.average_bandwidth = 100000,	/* 100 MBps */
> > +	},
> > +	{
> > +		.name = "imem",
> > +		.peak_bandwidth	= 350000,	/* 350 MBps */
> > +		.average_bandwidth  = 0,	/* unused */
> > +	},
> > +	{
> > +		.name = "config",
> > +		.peak_bandwidth	= 40000,	/* 40 MBps */
> > +		.average_bandwidth = 0,		/* unused */
> > +	},
> > +};
> > +
> > +static struct ipa_power_data ipa_power_data = {
> > +	.core_clock_rate	= 200 * 1000 * 1000,	/* Hz */
> > +	.interconnect_count	= ARRAY_SIZE(ipa_interconnect_data),
> > +	.interconnect_data	= ipa_interconnect_data,
> > +};
> > +
> > +/* IPA-resident memory region configuration for v2.0 */
> > +static const struct ipa_mem ipa_mem_local_data_v2_0[IPA_MEM_COUNT] = {
> > +	[IPA_MEM_UC_SHARED] = {
> > +		.offset         = 0,
> > +		.size           = 0x80,
> > +		.canary_count   = 0,
> > +	},
> > +	[IPA_MEM_V4_FILTER] = {
> > +		.offset		= 0x0080,
> > +		.size		= 0x0058,
> > +		.canary_count	= 0,
> > +	},
> > +	[IPA_MEM_V6_FILTER] = {
> > +		.offset		= 0x00e0,
> > +		.size		= 0x0058,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_V4_ROUTE] = {
> > +		.offset		= 0x0140,
> > +		.size		= 0x002c,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_V6_ROUTE] = {
> > +		.offset		= 0x0170,
> > +		.size		= 0x002c,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_MODEM_HEADER] = {
> > +		.offset		= 0x01a0,
> > +		.size		= 0x0140,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_AP_HEADER] = {
> > +		.offset		= 0x02e0,
> > +		.size		= 0x0048,
> > +		.canary_count	= 0,
> > +	},
> > +	[IPA_MEM_MODEM] = {
> > +		.offset		= 0x032c,
> > +		.size		= 0x0dcc,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_V4_FILTER_AP] = {
> > +		.offset		= 0x10fc,
> > +		.size		= 0x0780,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_V6_FILTER_AP] = {
> > +		.offset		= 0x187c,
> > +		.size		= 0x055c,
> > +		.canary_count	= 0,
> > +	},
> > +	[IPA_MEM_UC_INFO] = {
> > +		.offset		= 0x1ddc,
> > +		.size		= 0x0124,
> > +		.canary_count	= 1,
> > +	},
> > +};
> > +
> > +static struct ipa_mem_data ipa_mem_data_v2_0 = {
> > +	.local		= ipa_mem_local_data_v2_0,
> > +	.smem_id	= 497,
> > +	.smem_size	= 0x00001f00,
> > +};
> > +
> > +/* Configuration data for IPAv2.0 */
> > +const struct ipa_data ipa_data_v2_0  = {
> > +	.version	= IPA_VERSION_2_0,
> > +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> > +	.endpoint_data	= ipa_endpoint_data,
> > +	.mem_data	= &ipa_mem_data_v2_0,
> > +	.power_data	= &ipa_power_data,
> > +};
> > +
> > +/* IPA-resident memory region configuration for v2.5 */
> > +static const struct ipa_mem ipa_mem_local_data_v2_5[IPA_MEM_COUNT] = {
> > +	[IPA_MEM_UC_SHARED] = {
> > +		.offset         = 0,
> > +		.size           = 0x80,
> > +		.canary_count   = 0,
> > +	},
> > +	[IPA_MEM_UC_INFO] = {
> > +		.offset		= 0x0080,
> > +		.size		= 0x0200,
> > +		.canary_count	= 0,
> > +	},
> > +	[IPA_MEM_V4_FILTER] = {
> > +		.offset		= 0x0288,
> > +		.size		= 0x0058,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_V6_FILTER] = {
> > +		.offset		= 0x02e8,
> > +		.size		= 0x0058,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_V4_ROUTE] = {
> > +		.offset		= 0x0348,
> > +		.size		= 0x003c,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_V6_ROUTE] = {
> > +		.offset		= 0x0388,
> > +		.size		= 0x003c,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_MODEM_HEADER] = {
> > +		.offset		= 0x03c8,
> > +		.size		= 0x0140,
> > +		.canary_count	= 1,
> > +	},
> > +	[IPA_MEM_MODEM_PROC_CTX] = {
> > +		.offset		= 0x0510,
> > +		.size		= 0x0200,
> > +		.canary_count	= 2,
> > +	},
> > +	[IPA_MEM_AP_PROC_CTX] = {
> > +		.offset		= 0x0710,
> > +		.size		= 0x0200,
> > +		.canary_count	= 0,
> > +	},
> > +	[IPA_MEM_MODEM] = {
> > +		.offset		= 0x0914,
> > +		.size		= 0x16a8,
> > +		.canary_count	= 1,
> > +	},
> > +};
> > +
> > +static struct ipa_mem_data ipa_mem_data_v2_5 = {
> > +	.local		= ipa_mem_local_data_v2_5,
> > +	.smem_id	= 497,
> > +	.smem_size	= 0x00002000,
> > +};
> > +
> > +/* Configuration data for IPAv2.5 */
> > +const struct ipa_data ipa_data_v2_5  = {
> > +	.version	= IPA_VERSION_2_5,
> > +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> > +	.endpoint_data	= ipa_endpoint_data,
> > +	.mem_data	= &ipa_mem_data_v2_5,
> > +	.power_data	= &ipa_power_data,
> > +};
> > +
> > +/* IPA-resident memory region configuration for v2.6L */
> > +static const struct ipa_mem ipa_mem_local_data_v2_6L[IPA_MEM_COUNT] = {
> > +	{
> > +		.id		= IPA_MEM_UC_SHARED,
> > +		.offset		= 0,
> > +		.size		= 0x80,
> > +		.canary_count	= 0,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_UC_INFO,
> > +		.offset		= 0x0080,
> > +		.size		= 0x0200,
> > +		.canary_count	= 0,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_V4_FILTER,
> > +		.offset		= 0x0288,
> > +		.size		= 0x0058,
> > +		.canary_count	= 2,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_V6_FILTER,
> > +		.offset		= 0x02e8,
> > +		.size		= 0x0058,
> > +		.canary_count	= 2,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_V4_ROUTE,
> > +		.offset		= 0x0348,
> > +		.size		= 0x003c,
> > +		.canary_count	= 2,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_V6_ROUTE,
> > +		.offset		= 0x0388,
> > +		.size		= 0x003c,
> > +		.canary_count	= 1,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_MODEM_HEADER,
> > +		.offset		= 0x03c8,
> > +		.size		= 0x0140,
> > +		.canary_count	= 1,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_ZIP,
> > +		.offset		= 0x0510,
> > +		.size		= 0x0200,
> > +		.canary_count	= 2,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_MODEM,
> > +		.offset		= 0x0714,
> > +		.size		= 0x18e8,
> > +		.canary_count	= 1,
> > +	},
> > +	{
> > +		.id		= IPA_MEM_END_MARKER,
> > +		.offset		= 0x2000,
> > +		.size		= 0,
> > +		.canary_count	= 1,
> > +	},
> > +};
> > +
> > +static const struct ipa_mem_data ipa_mem_data_v2_6L = {
> > +	.local		= ipa_mem_local_data_v2_6L,
> > +	.smem_id	= 497,
> > +	.smem_size	= 0x00002000,
> > +};
> > +
> > +/* Configuration data for IPAv2.6L */
> > +const struct ipa_data ipa_data_v2_6L = {
> > +	.version	= IPA_VERSION_2_6L,
> > +	/* Unfortunately we don't know what this BCR value corresponds to */
> > +	.backward_compat = 0x1fff7f,
> > +	.endpoint_count	= ARRAY_SIZE(ipa_endpoint_data),
> > +	.endpoint_data	= ipa_endpoint_data,
> > +	.mem_data	= &ipa_mem_data_v2_6L,
> > +	.power_data	= &ipa_power_data,
> > +};
> > diff --git a/drivers/net/ipa/ipa_data-v3.1.c b/drivers/net/ipa/ipa_data-v3.1.c
> > index 06ddb85f39b2..12d231232756 100644
> > --- a/drivers/net/ipa/ipa_data-v3.1.c
> > +++ b/drivers/net/ipa/ipa_data-v3.1.c
> > @@ -6,7 +6,7 @@
> >   
> >   #include <linux/log2.h>
> >   
> > -#include "gsi.h"
> > +#include "ipa_dma.h"
> >   #include "ipa_data.h"
> >   #include "ipa_endpoint.h"
> >   #include "ipa_mem.h"
> > diff --git a/drivers/net/ipa/ipa_data.h b/drivers/net/ipa/ipa_data.h
> > index 7d62d49f414f..e7ce2e9388b6 100644
> > --- a/drivers/net/ipa/ipa_data.h
> > +++ b/drivers/net/ipa/ipa_data.h
> > @@ -301,6 +301,9 @@ struct ipa_data {
> >   	const struct ipa_power_data *power_data;
> >   };
> >   
> > +extern const struct ipa_data ipa_data_v2_0;
> > +extern const struct ipa_data ipa_data_v2_5;
> > +extern const struct ipa_data ipa_data_v2_6L;
> >   extern const struct ipa_data ipa_data_v3_1;
> >   extern const struct ipa_data ipa_data_v3_5_1;
> >   extern const struct ipa_data ipa_data_v4_2;
> > diff --git a/drivers/net/ipa/ipa_main.c b/drivers/net/ipa/ipa_main.c
> > index b437fbf95edf..3ae5c5c6734b 100644
> > --- a/drivers/net/ipa/ipa_main.c
> > +++ b/drivers/net/ipa/ipa_main.c
> > @@ -560,6 +560,18 @@ static int ipa_firmware_load(struct device *dev)
> >   }
> >   
> >   static const struct of_device_id ipa_match[] = {
> > +	{
> > +		.compatible	= "qcom,ipa-v2.0",
> > +		.data		= &ipa_data_v2_0,
> > +	},
> > +	{
> > +		.compatible	= "qcom,msm8996-ipa",
> > +		.data		= &ipa_data_v2_5,
> > +	},
> > +	{
> > +		.compatible	= "qcom,msm8953-ipa",
> > +		.data		= &ipa_data_v2_6L,
> > +	},
> >   	{
> >   		.compatible	= "qcom,msm8998-ipa",
> >   		.data		= &ipa_data_v3_1,
> > @@ -632,6 +644,9 @@ static void ipa_validate_build(void)
> >   static bool ipa_version_valid(enum ipa_version version)
> >   {
> >   	switch (version) {
> > +	case IPA_VERSION_2_0:
> > +	case IPA_VERSION_2_5:
> > +	case IPA_VERSION_2_6L:
> >   	case IPA_VERSION_3_0:
> >   	case IPA_VERSION_3_1:
> >   	case IPA_VERSION_3_5:
> > diff --git a/drivers/net/ipa/ipa_sysfs.c b/drivers/net/ipa/ipa_sysfs.c
> > index ff61dbdd70d8..f5d159f6bc06 100644
> > --- a/drivers/net/ipa/ipa_sysfs.c
> > +++ b/drivers/net/ipa/ipa_sysfs.c
> > @@ -14,6 +14,12 @@
> >   static const char *ipa_version_string(struct ipa *ipa)
> >   {
> >   	switch (ipa->version) {
> > +	case IPA_VERSION_2_0:
> > +		return "2.0";
> > +	case IPA_VERSION_2_5:
> > +		return "2.5";
> > +	case IPA_VERSION_2_6L:
> > +		return "2.6L";
> >   	case IPA_VERSION_3_0:
> >   		return "3.0";
> >   	case IPA_VERSION_3_1:
> > 


Thread overview: 46+ messages
2021-09-20  3:07 [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Sireesh Kodali
2021-09-20  3:07 ` [RFC PATCH 01/17] net: ipa: Correct ipa_status_opcode enumeration Sireesh Kodali
2021-10-13 22:28   ` Alex Elder
2021-10-18 16:12     ` Sireesh Kodali
2021-09-20  3:07 ` [RFC PATCH 02/17] net: ipa: revert to IPA_TABLE_ENTRY_SIZE for 32-bit IPA support Sireesh Kodali
2021-10-13 22:28   ` Alex Elder
2021-10-18 16:16     ` Sireesh Kodali
2021-09-20  3:07 ` [RFC PATCH 04/17] net: ipa: Establish ipa_dma interface Sireesh Kodali
2021-10-13 22:29   ` Alex Elder
2021-10-18 16:45     ` Sireesh Kodali
2021-09-20  3:07 ` [RFC PATCH 05/17] net: ipa: Check interrupts for availability Sireesh Kodali
2021-10-13 22:29   ` Alex Elder
2021-09-20  3:08 ` [RFC PATCH 06/17] net: ipa: Add timeout for ipa_cmd_pipeline_clear_wait Sireesh Kodali
2021-10-13 22:29   ` Alex Elder
2021-10-18 17:02     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 07/17] net: ipa: Add IPA v2.x register definitions Sireesh Kodali
2021-10-13 22:29   ` Alex Elder
2021-10-18 17:25     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 08/17] net: ipa: Add support for IPA v2.x interrupts Sireesh Kodali
2021-10-13 22:29   ` Alex Elder
2021-09-20  3:08 ` [RFC PATCH 09/17] net: ipa: Add support for using BAM as a DMA transport Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-10-18 17:30     ` Sireesh Kodali
2021-09-20  3:08 ` [PATCH 10/17] net: ipa: Add support for IPA v2.x commands and table init Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-10-18 18:13     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 11/17] net: ipa: Add support for IPA v2.x endpoints Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-10-18 18:17     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 12/17] net: ipa: Add support for IPA v2.x memory map Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-10-18 18:19     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 13/17] net: ipa: Add support for IPA v2.x in the driver's QMI interface Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-10-18 18:22     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 14/17] net: ipa: Add support for IPA v2 microcontroller Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-09-20  3:08 ` [RFC PATCH 15/17] net: ipa: Add IPA v2.6L initialization sequence support Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-09-20  3:08 ` [RFC PATCH 16/17] net: ipa: Add hw config describing IPA v2.x hardware Sireesh Kodali
2021-10-13 22:30   ` Alex Elder
2021-10-18 18:35     ` Sireesh Kodali
2021-09-20  3:08 ` [RFC PATCH 17/17] dt-bindings: net: qcom,ipa: Add support for MSM8953 and MSM8996 IPA Sireesh Kodali
2021-09-23 12:42   ` Rob Herring
2021-10-13 22:31   ` Alex Elder
2021-10-13 22:27 ` [RFC PATCH 00/17] net: ipa: Add support for IPA v2.x Alex Elder
