* [PATCH 00/12]  iwlwifi: updates intended for v5.10 2020-09-30
From: Luca Coelho @ 2020-09-30 13:31 UTC
  To: kvalo; +Cc: linux-wireless

From: Luca Coelho <luciano.coelho@intel.com>

Hi,

Here's the fourth set of patches intended for v5.10.  It's the usual
development, new features, cleanups and bugfixes.

The changes are:

* FTM updates;
* More new FW API version support;
* A bit of reorganization in the queue code;
* A few debugging infra improvements;
* Add support for new GTK rekeying;
* Some other small fixes and clean-ups;

As usual, I'm pushing this to a pending branch, for kbuild bot, and
will send a pull-request later.

Please review.

Cheers,
Luca.


Avraham Stern (3):
  iwlwifi: mvm: location: set the HLTK when PASN station is added
  iwlwifi: mvm: responder: allow to set only the HLTK for an associated
    station
  iwlwifi: mvm: initiator: add option for adding a PASN responder

Gil Adam (1):
  iwlwifi: thermal: support new temperature measurement API

Ilan Peer (1):
  iwlwifi: mvm: Add FTM initiator RTT smoothing logic

Johannes Berg (1):
  iwlwifi: mvm: d3: support GCMP ciphers

Mordechay Goodstein (4):
  iwlwifi: move all bus-independent TX functions to common code
  iwlwifi: dbg: remove no filter condition
  iwlwifi: dbg: run init_cfg function once per driver load
  iwlwifi: phy-ctxt: add new API VER 3 for phy context cmd

Nathan Errera (1):
  iwlwifi: mvm: support more GTK rekeying algorithms

Sara Sharon (1):
  iwlwifi: mvm: add d3 prints

 drivers/net/wireless/intel/iwlwifi/Makefile   |    1 +
 .../wireless/intel/iwlwifi/fw/api/phy-ctxt.h  |   32 +-
 .../net/wireless/intel/iwlwifi/fw/api/phy.h   |   13 +-
 .../net/wireless/intel/iwlwifi/iwl-dbg-tlv.c  |    8 +-
 .../net/wireless/intel/iwlwifi/iwl-debug.h    |    6 +-
 .../net/wireless/intel/iwlwifi/iwl-trans.c    |   19 +
 .../net/wireless/intel/iwlwifi/iwl-trans.h    |    1 +
 .../wireless/intel/iwlwifi/mvm/constants.h    |    6 +
 drivers/net/wireless/intel/iwlwifi/mvm/d3.c   |   60 +-
 .../intel/iwlwifi/mvm/ftm-initiator.c         |  300 +++-
 .../intel/iwlwifi/mvm/ftm-responder.c         |   62 +-
 drivers/net/wireless/intel/iwlwifi/mvm/fw.c   |    2 +
 .../net/wireless/intel/iwlwifi/mvm/mac80211.c |    7 +
 drivers/net/wireless/intel/iwlwifi/mvm/mvm.h  |   31 +-
 drivers/net/wireless/intel/iwlwifi/mvm/ops.c  |    1 +
 .../net/wireless/intel/iwlwifi/mvm/phy-ctxt.c |  126 +-
 drivers/net/wireless/intel/iwlwifi/mvm/tt.c   |   78 +-
 .../wireless/intel/iwlwifi/pcie/ctxt-info.c   |    2 +-
 .../wireless/intel/iwlwifi/pcie/internal.h    |  125 +-
 drivers/net/wireless/intel/iwlwifi/pcie/rx.c  |    2 +-
 .../wireless/intel/iwlwifi/pcie/trans-gen2.c  |    4 +-
 .../net/wireless/intel/iwlwifi/pcie/trans.c   |   59 +-
 .../net/wireless/intel/iwlwifi/pcie/tx-gen2.c | 1078 +------------
 drivers/net/wireless/intel/iwlwifi/pcie/tx.c  |  311 +---
 drivers/net/wireless/intel/iwlwifi/queue/tx.c | 1375 +++++++++++++++++
 drivers/net/wireless/intel/iwlwifi/queue/tx.h |  188 +++
 26 files changed, 2286 insertions(+), 1611 deletions(-)
 create mode 100644 drivers/net/wireless/intel/iwlwifi/queue/tx.c
 create mode 100644 drivers/net/wireless/intel/iwlwifi/queue/tx.h

-- 
2.28.0



* [PATCH 01/12] iwlwifi: mvm: Add FTM initiator RTT smoothing logic
From: Luca Coelho @ 2020-09-30 13:31 UTC
  To: kvalo; +Cc: linux-wireless

From: Ilan Peer <ilan.peer@intel.com>

To overcome instabilities in the RTT results, add smoothing logic
to the reported results. In short, the smoothing logic tracks the
RTT average of each responder for a period of time, and in case
a new RTT result is found to be a spur, the tracked RTT average
is reported instead of the current RTT measurement.

Smoothing logic debug configuration using iwl-dbg-cfg.ini:

- MVM_FTM_INITIATOR_ENABLE_SMOOTH: Set to 1 to enable smoothing logic
  (default=0).
- MVM_FTM_INITIATOR_SMOOTH_ALPHA: A value between 0 and 100, defining
  the weight of the current RTT result vs. the RTT average tracked
  based on the previous results. A value of 100 means use only the
  current RTT result.
- MVM_FTM_INITIATOR_SMOOTH_AGE_SEC: The maximal time in seconds for
  which the RTT average tracked based on previous results is
  considered valid.
- MVM_FTM_INITIATOR_SMOOTH_UNDERSHOOT: If the current RTT is positive
  and below the RTT average by at least this value, report the average
  RTT instead of the current one. In units of picoseconds.
- MVM_FTM_INITIATOR_SMOOTH_OVERSHOOT: If the current RTT is positive
  and above the RTT average by at least this value, report the average
  RTT instead of the current one. In units of picoseconds.
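
For illustration, a minimal user-space C sketch of the alpha-weighted
update and spur replacement described above (the constant values mirror
the defaults listed here, but the SMOOTH_* names and the standalone
setup are hypothetical; the actual in-kernel implementation is
iwl_mvm_ftm_rtt_smoothing() in the patch below):

  #include <stdint.h>
  #include <stdio.h>

  #define SMOOTH_ALPHA      40    /* weight of the current RTT, percent */
  #define SMOOTH_UNDERSHOOT 20016 /* picoseconds */
  #define SMOOTH_OVERSHOOT  20016 /* picoseconds */

  /* Update the tracked average and return the RTT to report. */
  static int64_t smooth_rtt(int64_t *rtt_avg, int64_t rtt)
  {
          /* the average is updated regardless of the spur check below */
          *rtt_avg = (SMOOTH_ALPHA * rtt +
                      (100 - SMOOTH_ALPHA) * *rtt_avg) / 100;

          /* a spur far below/above the average is replaced by it */
          if (*rtt_avg > rtt && *rtt_avg - rtt > SMOOTH_UNDERSHOOT)
                  return *rtt_avg;
          if (*rtt_avg < rtt && rtt - *rtt_avg > SMOOTH_OVERSHOOT)
                  return *rtt_avg;

          return rtt;
  }

  int main(void)
  {
          int64_t avg = 33000; /* ~33000 ps RTT tracked so far (~5 m) */

          /* 90000 ps spur: prints the updated average (55800), not 90000 */
          printf("%lld\n", (long long)smooth_rtt(&avg, 90000));
          return 0;
  }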

Signed-off-by: Ilan Peer <ilan.peer@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 .../wireless/intel/iwlwifi/mvm/constants.h    |   6 +
 .../intel/iwlwifi/mvm/ftm-initiator.c         | 123 ++++++++++++++++++
 drivers/net/wireless/intel/iwlwifi/mvm/fw.c   |   2 +
 .../net/wireless/intel/iwlwifi/mvm/mac80211.c |   2 +
 drivers/net/wireless/intel/iwlwifi/mvm/mvm.h  |   5 +
 5 files changed, 138 insertions(+)

diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
index 426ca1f86500..2487871eac73 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
@@ -159,5 +159,11 @@
 #define IWL_MVM_PHY_FILTER_CHAIN_B		0
 #define IWL_MVM_PHY_FILTER_CHAIN_C		0
 #define IWL_MVM_PHY_FILTER_CHAIN_D		0
+#define IWL_MVM_FTM_INITIATOR_ENABLE_SMOOTH     false
+#define IWL_MVM_FTM_INITIATOR_SMOOTH_ALPHA      40
+/*  20016 pSec is 6 meter RTT, meaning 3 meter range */
+#define IWL_MVM_FTM_INITIATOR_SMOOTH_UNDERSHOOT 20016
+#define IWL_MVM_FTM_INITIATOR_SMOOTH_OVERSHOOT  20016
+#define IWL_MVM_FTM_INITIATOR_SMOOTH_AGE_SEC    2
 
 #endif /* __MVM_CONSTANTS_H */
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
index 7efc6cb2d610..65dc443f37df 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
@@ -76,6 +76,13 @@ struct iwl_mvm_loc_entry {
 	u8 buf[];
 };
 
+struct iwl_mvm_smooth_entry {
+	struct list_head list;
+	u8 addr[ETH_ALEN];
+	s64 rtt_avg;
+	u64 host_time;
+};
+
 static void iwl_mvm_ftm_reset(struct iwl_mvm *mvm)
 {
 	struct iwl_mvm_loc_entry *e, *t;
@@ -84,6 +91,7 @@ static void iwl_mvm_ftm_reset(struct iwl_mvm *mvm)
 	mvm->ftm_initiator.req_wdev = NULL;
 	memset(mvm->ftm_initiator.responses, 0,
 	       sizeof(mvm->ftm_initiator.responses));
+
 	list_for_each_entry_safe(e, t, &mvm->ftm_initiator.loc_list, list) {
 		list_del(&e->list);
 		kfree(e);
@@ -120,6 +128,30 @@ void iwl_mvm_ftm_restart(struct iwl_mvm *mvm)
 	iwl_mvm_ftm_reset(mvm);
 }
 
+void iwl_mvm_ftm_initiator_smooth_config(struct iwl_mvm *mvm)
+{
+	INIT_LIST_HEAD(&mvm->ftm_initiator.smooth.resp);
+
+	IWL_DEBUG_INFO(mvm,
+		       "enable=%u, alpha=%u, age_jiffies=%u, thresh=(%u:%u)\n",
+			IWL_MVM_FTM_INITIATOR_ENABLE_SMOOTH,
+			IWL_MVM_FTM_INITIATOR_SMOOTH_ALPHA,
+			IWL_MVM_FTM_INITIATOR_SMOOTH_AGE_SEC * HZ,
+			IWL_MVM_FTM_INITIATOR_SMOOTH_OVERSHOOT,
+			IWL_MVM_FTM_INITIATOR_SMOOTH_UNDERSHOOT);
+}
+
+void iwl_mvm_ftm_initiator_smooth_stop(struct iwl_mvm *mvm)
+{
+	struct iwl_mvm_smooth_entry *se, *st;
+
+	list_for_each_entry_safe(se, st, &mvm->ftm_initiator.smooth.resp,
+				 list) {
+		list_del(&se->list);
+		kfree(se);
+	}
+}
+
 static int
 iwl_ftm_range_request_status_to_err(enum iwl_tof_range_request_status s)
 {
@@ -728,6 +760,95 @@ static int iwl_mvm_ftm_range_resp_valid(struct iwl_mvm *mvm, u8 request_id,
 	return 0;
 }
 
+static void iwl_mvm_ftm_rtt_smoothing(struct iwl_mvm *mvm,
+				      struct cfg80211_pmsr_result *res)
+{
+	struct iwl_mvm_smooth_entry *resp;
+	s64 rtt_avg, rtt = res->ftm.rtt_avg;
+	u32 undershoot, overshoot;
+	u8 alpha;
+	bool found;
+
+	if (!IWL_MVM_FTM_INITIATOR_ENABLE_SMOOTH)
+		return;
+
+	WARN_ON(rtt < 0);
+
+	if (res->status != NL80211_PMSR_STATUS_SUCCESS) {
+		IWL_DEBUG_INFO(mvm,
+			       ": %pM: ignore failed measurement. Status=%u\n",
+			       res->addr, res->status);
+		return;
+	}
+
+	found = false;
+	list_for_each_entry(resp, &mvm->ftm_initiator.smooth.resp, list) {
+		if (!memcmp(res->addr, resp->addr, ETH_ALEN)) {
+			found = true;
+			break;
+		}
+	}
+
+	if (!found) {
+		resp = kzalloc(sizeof(*resp), GFP_KERNEL);
+		if (!resp)
+			return;
+
+		memcpy(resp->addr, res->addr, ETH_ALEN);
+		list_add_tail(&resp->list, &mvm->ftm_initiator.smooth.resp);
+
+		resp->rtt_avg = rtt;
+
+		IWL_DEBUG_INFO(mvm, "new: %pM: rtt_avg=%lld\n",
+			       resp->addr, resp->rtt_avg);
+		goto update_time;
+	}
+
+	if (res->host_time - resp->host_time >
+	    IWL_MVM_FTM_INITIATOR_SMOOTH_AGE_SEC * 1000000000) {
+		resp->rtt_avg = rtt;
+
+		IWL_DEBUG_INFO(mvm, "expired: %pM: rtt_avg=%lld\n",
+			       resp->addr, resp->rtt_avg);
+		goto update_time;
+	}
+
+	/* Smooth the results based on the tracked RTT average */
+	undershoot = IWL_MVM_FTM_INITIATOR_SMOOTH_UNDERSHOOT;
+	overshoot = IWL_MVM_FTM_INITIATOR_SMOOTH_OVERSHOOT;
+	alpha = IWL_MVM_FTM_INITIATOR_SMOOTH_ALPHA;
+
+	rtt_avg = (alpha * rtt + (100 - alpha) * resp->rtt_avg) / 100;
+
+	IWL_DEBUG_INFO(mvm,
+		       "%pM: prev rtt_avg=%lld, new rtt_avg=%lld, rtt=%lld\n",
+		       resp->addr, resp->rtt_avg, rtt_avg, rtt);
+
+	/*
+	 * update the responder's average RTT results regardless of
+	 * the under/over shoot logic below
+	 */
+	resp->rtt_avg = rtt_avg;
+
+	/* smooth the results */
+	if (rtt_avg > rtt && (rtt_avg - rtt) > undershoot) {
+		res->ftm.rtt_avg = rtt_avg;
+
+		IWL_DEBUG_INFO(mvm,
+			       "undershoot: val=%lld\n",
+			       (rtt_avg - rtt));
+	} else if (rtt_avg < rtt && (rtt - rtt_avg) >
+		   overshoot) {
+		res->ftm.rtt_avg = rtt_avg;
+		IWL_DEBUG_INFO(mvm,
+			       "overshoot: val=%lld\n",
+			       (rtt - rtt_avg));
+	}
+
+update_time:
+	resp->host_time = res->host_time;
+}
+
 static void iwl_mvm_debug_range_resp(struct iwl_mvm *mvm, u8 index,
 				     struct cfg80211_pmsr_result *res)
 {
@@ -865,6 +986,8 @@ void iwl_mvm_ftm_range_resp(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
 
 		iwl_mvm_ftm_get_lci_civic(mvm, &result);
 
+		iwl_mvm_ftm_rtt_smoothing(mvm, &result);
+
 		cfg80211_pmsr_report(mvm->ftm_initiator.req_wdev,
 				     mvm->ftm_initiator.req,
 				     &result, GFP_KERNEL);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
index 4d4315bc669e..897249201b06 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
@@ -1512,6 +1512,8 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
 	iwl_mvm_tas_init(mvm);
 	iwl_mvm_leds_sync(mvm);
 
+	iwl_mvm_ftm_initiator_smooth_config(mvm);
+
 	IWL_DEBUG_INFO(mvm, "RT uCode started.\n");
 	return 0;
  error:
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
index 30e5a5b5664e..1c5f18d1b4c2 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
@@ -1211,6 +1211,8 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm)
 {
 	lockdep_assert_held(&mvm->mutex);
 
+	iwl_mvm_ftm_initiator_smooth_stop(mvm);
+
 	/* firmware counters are obviously reset now, but we shouldn't
 	 * partially track so also clear the fw_reset_accu counters.
 	 */
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
index c57d45090715..ba1b74d10577 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
@@ -1107,6 +1107,9 @@ struct iwl_mvm {
 		struct wireless_dev *req_wdev;
 		struct list_head loc_list;
 		int responses[IWL_MVM_TOF_MAX_APS];
+		struct {
+			struct list_head resp;
+		} smooth;
 	} ftm_initiator;
 
 	struct list_head resp_pasn_list;
@@ -2011,6 +2014,8 @@ void iwl_mvm_ftm_lc_notif(struct iwl_mvm *mvm,
 int iwl_mvm_ftm_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
 		      struct cfg80211_pmsr_request *request);
 void iwl_mvm_ftm_abort(struct iwl_mvm *mvm, struct cfg80211_pmsr_request *req);
+void iwl_mvm_ftm_initiator_smooth_config(struct iwl_mvm *mvm);
+void iwl_mvm_ftm_initiator_smooth_stop(struct iwl_mvm *mvm);
 
 /* TDLS */
 
-- 
2.28.0



* [PATCH 02/12] iwlwifi: mvm: location: set the HLTK when PASN station is added
From: Luca Coelho @ 2020-09-30 13:31 UTC
  To: kvalo; +Cc: linux-wireless

From: Avraham Stern <avraham.stern@intel.com>

When a PASN station is added, also set the HLTK in the FW.

Signed-off-by: Avraham Stern <avraham.stern@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 .../intel/iwlwifi/mvm/ftm-responder.c         | 43 ++++++++++++++-----
 drivers/net/wireless/intel/iwlwifi/mvm/mvm.h  | 15 +++++++
 2 files changed, 47 insertions(+), 11 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
index e24e5bc7b40c..e940ef138f55 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
@@ -306,6 +306,16 @@ iwl_mvm_ftm_responder_dyn_cfg_cmd(struct iwl_mvm *mvm,
 	return ret;
 }
 
+static void iwl_mvm_resp_del_pasn_sta(struct iwl_mvm *mvm,
+				      struct ieee80211_vif *vif,
+				      struct iwl_mvm_pasn_sta *sta)
+{
+	list_del(&sta->list);
+	iwl_mvm_rm_sta_id(mvm, vif, sta->int_sta.sta_id);
+	iwl_mvm_dealloc_int_sta(mvm, &sta->int_sta);
+	kfree(sta);
+}
+
 int iwl_mvm_ftm_respoder_add_pasn_sta(struct iwl_mvm *mvm,
 				      struct ieee80211_vif *vif,
 				      u8 *addr, u32 cipher, u8 *tk, u32 tk_len,
@@ -313,9 +323,26 @@ int iwl_mvm_ftm_respoder_add_pasn_sta(struct iwl_mvm *mvm,
 {
 	int ret;
 	struct iwl_mvm_pasn_sta *sta;
+	struct iwl_mvm_pasn_hltk_data hltk_data = {
+		.addr = addr,
+		.hltk = hltk,
+	};
+	u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, LOCATION_GROUP,
+					   TOF_RESPONDER_DYN_CONFIG_CMD, 2);
 
 	lockdep_assert_held(&mvm->mutex);
 
+	if (cmd_ver < 3) {
+		IWL_ERR(mvm, "Adding PASN station not supported by FW\n");
+		return -ENOTSUPP;
+	}
+
+	hltk_data.cipher = iwl_mvm_cipher_to_location_cipher(cipher);
+	if (hltk_data.cipher == IWL_LOCATION_CIPHER_INVALID) {
+		IWL_ERR(mvm, "invalid cipher: %u\n", cipher);
+		return -EINVAL;
+	}
+
 	sta = kmalloc(sizeof(*sta), GFP_KERNEL);
 	if (!sta)
 		return -ENOBUFS;
@@ -327,23 +354,17 @@ int iwl_mvm_ftm_respoder_add_pasn_sta(struct iwl_mvm *mvm,
 		return ret;
 	}
 
-	// TODO: set the HLTK to fw
+	ret = iwl_mvm_ftm_responder_dyn_cfg_v3(mvm, vif, NULL, &hltk_data);
+	if (ret) {
+		iwl_mvm_resp_del_pasn_sta(mvm, vif, sta);
+		return ret;
+	}
 
 	memcpy(sta->addr, addr, ETH_ALEN);
 	list_add_tail(&sta->list, &mvm->resp_pasn_list);
 	return 0;
 }
 
-static void iwl_mvm_resp_del_pasn_sta(struct iwl_mvm *mvm,
-				      struct ieee80211_vif *vif,
-				      struct iwl_mvm_pasn_sta *sta)
-{
-	list_del(&sta->list);
-	iwl_mvm_rm_sta_id(mvm, vif, sta->int_sta.sta_id);
-	iwl_mvm_dealloc_int_sta(mvm, &sta->int_sta);
-	kfree(sta);
-}
-
 int iwl_mvm_ftm_resp_remove_pasn_sta(struct iwl_mvm *mvm,
 				     struct ieee80211_vif *vif, u8 *addr)
 {
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
index ba1b74d10577..40e102f2017f 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
@@ -2161,4 +2161,19 @@ static inline int iwl_umac_scan_get_max_profiles(const struct iwl_fw *fw)
 	return (ver == IWL_FW_CMD_VER_UNKNOWN || ver < 3) ?
 		IWL_SCAN_MAX_PROFILES : IWL_SCAN_MAX_PROFILES_V2;
 }
+
+static inline
+enum iwl_location_cipher iwl_mvm_cipher_to_location_cipher(u32 cipher)
+{
+	switch (cipher) {
+	case WLAN_CIPHER_SUITE_CCMP:
+		return IWL_LOCATION_CIPHER_CCMP_128;
+	case WLAN_CIPHER_SUITE_GCMP:
+		return IWL_LOCATION_CIPHER_GCMP_128;
+	case WLAN_CIPHER_SUITE_GCMP_256:
+		return IWL_LOCATION_CIPHER_GCMP_256;
+	default:
+		return IWL_LOCATION_CIPHER_INVALID;
+	}
+}
 #endif /* __IWL_MVM_H__ */
-- 
2.28.0



* [PATCH 03/12] iwlwifi: mvm: responder: allow to set only the HLTK for an associated station
From: Luca Coelho @ 2020-09-30 13:31 UTC
  To: kvalo; +Cc: linux-wireless

From: Avraham Stern <avraham.stern@intel.com>

For secure ranging with an associated station, the driver only needs
to set the HLTK. There is no need to add an internal station for PMF,
since the FW will use the existing station, which already has the TK
installed.

Signed-off-by: Avraham Stern <avraham.stern@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 .../intel/iwlwifi/mvm/ftm-responder.c         | 23 +++++++++++--------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
index e940ef138f55..c794612c41d5 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
@@ -322,7 +322,7 @@ int iwl_mvm_ftm_respoder_add_pasn_sta(struct iwl_mvm *mvm,
 				      u8 *hltk, u32 hltk_len)
 {
 	int ret;
-	struct iwl_mvm_pasn_sta *sta;
+	struct iwl_mvm_pasn_sta *sta = NULL;
 	struct iwl_mvm_pasn_hltk_data hltk_data = {
 		.addr = addr,
 		.hltk = hltk,
@@ -343,20 +343,23 @@ int iwl_mvm_ftm_respoder_add_pasn_sta(struct iwl_mvm *mvm,
 		return -EINVAL;
 	}
 
-	sta = kmalloc(sizeof(*sta), GFP_KERNEL);
-	if (!sta)
-		return -ENOBUFS;
+	if (tk && tk_len) {
+		sta = kzalloc(sizeof(*sta), GFP_KERNEL);
+		if (!sta)
+			return -ENOBUFS;
 
-	ret = iwl_mvm_add_pasn_sta(mvm, vif, &sta->int_sta, addr, cipher, tk,
-				   tk_len);
-	if (ret) {
-		kfree(sta);
-		return ret;
+		ret = iwl_mvm_add_pasn_sta(mvm, vif, &sta->int_sta, addr,
+					   cipher, tk, tk_len);
+		if (ret) {
+			kfree(sta);
+			return ret;
+		}
 	}
 
 	ret = iwl_mvm_ftm_responder_dyn_cfg_v3(mvm, vif, NULL, &hltk_data);
 	if (ret) {
-		iwl_mvm_resp_del_pasn_sta(mvm, vif, sta);
+		if (sta)
+			iwl_mvm_resp_del_pasn_sta(mvm, vif, sta);
 		return ret;
 	}
 
-- 
2.28.0



* [PATCH 04/12] iwlwifi: mvm: initiator: add option for adding a PASN responder
From: Luca Coelho @ 2020-09-30 13:31 UTC
  To: kvalo; +Cc: linux-wireless

From: Avraham Stern <avraham.stern@intel.com>

Add an option for adding a PASN responder, specifying the HLTK and
TK (if not associated). When receiving a range request for a PASN
responder, the driver will ask for a secured measurement with the
specified HLTK and TK.

Signed-off-by: Avraham Stern <avraham.stern@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 .../intel/iwlwifi/mvm/ftm-initiator.c         | 177 +++++++++++++++++-
 drivers/net/wireless/intel/iwlwifi/mvm/mvm.h  |   5 +
 drivers/net/wireless/intel/iwlwifi/mvm/ops.c  |   1 +
 3 files changed, 179 insertions(+), 4 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
index 65dc443f37df..a0ce761d0c59 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-initiator.c
@@ -83,6 +83,96 @@ struct iwl_mvm_smooth_entry {
 	u64 host_time;
 };
 
+struct iwl_mvm_ftm_pasn_entry {
+	struct list_head list;
+	u8 addr[ETH_ALEN];
+	u8 hltk[HLTK_11AZ_LEN];
+	u8 tk[TK_11AZ_LEN];
+	u8 cipher;
+	u8 tx_pn[IEEE80211_CCMP_PN_LEN];
+	u8 rx_pn[IEEE80211_CCMP_PN_LEN];
+};
+
+int iwl_mvm_ftm_add_pasn_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+			     u8 *addr, u32 cipher, u8 *tk, u32 tk_len,
+			     u8 *hltk, u32 hltk_len)
+{
+	struct iwl_mvm_ftm_pasn_entry *pasn = kzalloc(sizeof(*pasn),
+						      GFP_KERNEL);
+	u32 expected_tk_len;
+
+	lockdep_assert_held(&mvm->mutex);
+
+	if (!pasn)
+		return -ENOBUFS;
+
+	pasn->cipher = iwl_mvm_cipher_to_location_cipher(cipher);
+
+	switch (pasn->cipher) {
+	case IWL_LOCATION_CIPHER_CCMP_128:
+	case IWL_LOCATION_CIPHER_GCMP_128:
+		expected_tk_len = WLAN_KEY_LEN_CCMP;
+		break;
+	case IWL_LOCATION_CIPHER_GCMP_256:
+		expected_tk_len = WLAN_KEY_LEN_GCMP_256;
+		break;
+	default:
+		goto out;
+	}
+
+	/*
+	 * If associated to this AP and already have security context,
+	 * the TK is already configured for this station, so it
+	 * shouldn't be set again here.
+	 */
+	if (vif->bss_conf.assoc &&
+	    !memcmp(addr, vif->bss_conf.bssid, ETH_ALEN)) {
+		struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+		struct ieee80211_sta *sta;
+
+		rcu_read_lock();
+		sta = rcu_dereference(mvm->fw_id_to_mac_id[mvmvif->ap_sta_id]);
+		if (!IS_ERR_OR_NULL(sta) && sta->mfp)
+			expected_tk_len = 0;
+		rcu_read_unlock();
+	}
+
+	if (tk_len != expected_tk_len || hltk_len != sizeof(pasn->hltk)) {
+		IWL_ERR(mvm, "Invalid key length: tk_len=%u hltk_len=%u\n",
+			tk_len, hltk_len);
+		goto out;
+	}
+
+	memcpy(pasn->addr, addr, sizeof(pasn->addr));
+	memcpy(pasn->hltk, hltk, sizeof(pasn->hltk));
+
+	if (tk && tk_len)
+		memcpy(pasn->tk, tk, sizeof(pasn->tk));
+
+	list_add_tail(&pasn->list, &mvm->ftm_initiator.pasn_list);
+	return 0;
+out:
+	kfree(pasn);
+	return -EINVAL;
+}
+
+void iwl_mvm_ftm_remove_pasn_sta(struct iwl_mvm *mvm, u8 *addr)
+{
+	struct iwl_mvm_ftm_pasn_entry *entry, *prev;
+
+	lockdep_assert_held(&mvm->mutex);
+
+	list_for_each_entry_safe(entry, prev, &mvm->ftm_initiator.pasn_list,
+				 list) {
+		if (memcmp(entry->addr, addr, sizeof(entry->addr)))
+			continue;
+
+		list_del(&entry->list);
+		kfree(entry);
+		return;
+	}
+}
+
 static void iwl_mvm_ftm_reset(struct iwl_mvm *mvm)
 {
 	struct iwl_mvm_loc_entry *e, *t;
@@ -595,6 +685,63 @@ static int iwl_mvm_ftm_start_v9(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
 	return iwl_mvm_ftm_send_cmd(mvm, &hcmd);
 }
 
+static void iter(struct ieee80211_hw *hw,
+		 struct ieee80211_vif *vif,
+		 struct ieee80211_sta *sta,
+		 struct ieee80211_key_conf *key,
+		 void *data)
+{
+	struct iwl_tof_range_req_ap_entry_v6 *target = data;
+
+	if (!sta || memcmp(sta->addr, target->bssid, ETH_ALEN))
+		return;
+
+	WARN_ON(!sta->mfp);
+
+	if (WARN_ON(key->keylen > sizeof(target->tk)))
+		return;
+
+	memcpy(target->tk, key->key, key->keylen);
+	target->cipher = iwl_mvm_cipher_to_location_cipher(key->cipher);
+	WARN_ON(target->cipher == IWL_LOCATION_CIPHER_INVALID);
+}
+
+static void
+iwl_mvm_ftm_set_secured_ranging(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+				struct iwl_tof_range_req_ap_entry_v7 *target)
+{
+	struct iwl_mvm_ftm_pasn_entry *entry;
+	u32 flags = le32_to_cpu(target->initiator_ap_flags);
+
+	if (!(flags & (IWL_INITIATOR_AP_FLAGS_NON_TB |
+		       IWL_INITIATOR_AP_FLAGS_TB)))
+		return;
+
+	lockdep_assert_held(&mvm->mutex);
+
+	list_for_each_entry(entry, &mvm->ftm_initiator.pasn_list, list) {
+		if (memcmp(entry->addr, target->bssid, sizeof(entry->addr)))
+			continue;
+
+		target->cipher = entry->cipher;
+		memcpy(target->hltk, entry->hltk, sizeof(target->hltk));
+
+		if (vif->bss_conf.assoc &&
+		    !memcmp(vif->bss_conf.bssid, target->bssid,
+			    sizeof(target->bssid)))
+			ieee80211_iter_keys(mvm->hw, vif, iter, target);
+		else
+			memcpy(target->tk, entry->tk, sizeof(target->tk));
+
+		memcpy(target->rx_pn, entry->rx_pn, sizeof(target->rx_pn));
+		memcpy(target->tx_pn, entry->tx_pn, sizeof(target->tx_pn));
+
+		target->initiator_ap_flags |=
+			cpu_to_le32(IWL_INITIATOR_AP_FLAGS_SECURED);
+		return;
+	}
+}
+
 static int iwl_mvm_ftm_start_v11(struct iwl_mvm *mvm,
 				 struct ieee80211_vif *vif,
 				 struct cfg80211_pmsr_request *req)
@@ -618,6 +765,8 @@ static int iwl_mvm_ftm_start_v11(struct iwl_mvm *mvm,
 		err = iwl_mvm_ftm_put_target(mvm, vif, peer, (void *)target);
 		if (err)
 			return err;
+
+		iwl_mvm_ftm_set_secured_ranging(mvm, vif, target);
 	}
 
 	return iwl_mvm_ftm_send_cmd(mvm, &hcmd);
@@ -868,6 +1017,24 @@ static void iwl_mvm_debug_range_resp(struct iwl_mvm *mvm, u8 index,
 	IWL_DEBUG_INFO(mvm, "\tdistance: %lld\n", rtt_avg);
 }
 
+static void
+iwl_mvm_ftm_pasn_update_pn(struct iwl_mvm *mvm,
+			   struct iwl_tof_range_rsp_ap_entry_ntfy_v6 *fw_ap)
+{
+	struct iwl_mvm_ftm_pasn_entry *entry;
+
+	lockdep_assert_held(&mvm->mutex);
+
+	list_for_each_entry(entry, &mvm->ftm_initiator.pasn_list, list) {
+		if (memcmp(fw_ap->bssid, entry->addr, sizeof(entry->addr)))
+			continue;
+
+		memcpy(entry->rx_pn, fw_ap->rx_pn, sizeof(entry->rx_pn));
+		memcpy(entry->tx_pn, fw_ap->tx_pn, sizeof(entry->tx_pn));
+		return;
+	}
+}
+
 void iwl_mvm_ftm_range_resp(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
 {
 	struct iwl_rx_packet *pkt = rxb_addr(rxb);
@@ -912,13 +1079,15 @@ void iwl_mvm_ftm_range_resp(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
 		int peer_idx;
 
 		if (new_api) {
-			if (mvm->cmd_ver.range_resp == 8)
+			if (mvm->cmd_ver.range_resp == 8) {
 				fw_ap = &fw_resp_v8->ap[i];
-			else if (fw_has_api(&mvm->fw->ucode_capa,
-					    IWL_UCODE_TLV_API_FTM_RTT_ACCURACY))
+				iwl_mvm_ftm_pasn_update_pn(mvm, fw_ap);
+			} else if (fw_has_api(&mvm->fw->ucode_capa,
+					      IWL_UCODE_TLV_API_FTM_RTT_ACCURACY)) {
 				fw_ap = (void *)&fw_resp_v7->ap[i];
-			else
+			} else {
 				fw_ap = (void *)&fw_resp_v6->ap[i];
+			}
 
 			result.final = fw_ap->last_burst;
 			result.ap_tsf = le32_to_cpu(fw_ap->start_tsf);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
index 40e102f2017f..1836589218fa 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
@@ -1110,6 +1110,7 @@ struct iwl_mvm {
 		struct {
 			struct list_head resp;
 		} smooth;
+		struct list_head pasn_list;
 	} ftm_initiator;
 
 	struct list_head resp_pasn_list;
@@ -2016,6 +2017,10 @@ int iwl_mvm_ftm_start(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
 void iwl_mvm_ftm_abort(struct iwl_mvm *mvm, struct cfg80211_pmsr_request *req);
 void iwl_mvm_ftm_initiator_smooth_config(struct iwl_mvm *mvm);
 void iwl_mvm_ftm_initiator_smooth_stop(struct iwl_mvm *mvm);
+int iwl_mvm_ftm_add_pasn_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+			     u8 *addr, u32 cipher, u8 *tk, u32 tk_len,
+			     u8 *hltk, u32 hltk_len);
+void iwl_mvm_ftm_remove_pasn_sta(struct iwl_mvm *mvm, u8 *addr);
 
 /* TDLS */
 
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
index c59ce3966807..737ef0fd6ff1 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
@@ -695,6 +695,7 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
 	INIT_LIST_HEAD(&mvm->async_handlers_list);
 	spin_lock_init(&mvm->time_event_lock);
 	INIT_LIST_HEAD(&mvm->ftm_initiator.loc_list);
+	INIT_LIST_HEAD(&mvm->ftm_initiator.pasn_list);
 	INIT_LIST_HEAD(&mvm->resp_pasn_list);
 
 	INIT_WORK(&mvm->async_handlers_wk, iwl_mvm_async_handlers_wk);
-- 
2.28.0



* [PATCH 05/12] iwlwifi: move all bus-independent TX functions to common code
From: Luca Coelho @ 2020-09-30 13:31 UTC
  To: kvalo; +Cc: linux-wireless

From: Mordechay Goodstein <mordechay.goodstein@intel.com>

After moving out all the Tx fields not related to the PCIe bus,
it's time to move the code to a common place.

We also rename the pcie prefix in the function names to txq.

Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 drivers/net/wireless/intel/iwlwifi/Makefile   |    1 +
 .../net/wireless/intel/iwlwifi/iwl-trans.c    |   19 +
 .../net/wireless/intel/iwlwifi/iwl-trans.h    |    1 +
 .../wireless/intel/iwlwifi/pcie/ctxt-info.c   |    2 +-
 .../wireless/intel/iwlwifi/pcie/internal.h    |  125 +-
 drivers/net/wireless/intel/iwlwifi/pcie/rx.c  |    2 +-
 .../wireless/intel/iwlwifi/pcie/trans-gen2.c  |    4 +-
 .../net/wireless/intel/iwlwifi/pcie/trans.c   |   59 +-
 .../net/wireless/intel/iwlwifi/pcie/tx-gen2.c | 1078 +------------
 drivers/net/wireless/intel/iwlwifi/pcie/tx.c  |  311 +---
 drivers/net/wireless/intel/iwlwifi/queue/tx.c | 1375 +++++++++++++++++
 drivers/net/wireless/intel/iwlwifi/queue/tx.h |  188 +++
 12 files changed, 1646 insertions(+), 1519 deletions(-)
 create mode 100644 drivers/net/wireless/intel/iwlwifi/queue/tx.c
 create mode 100644 drivers/net/wireless/intel/iwlwifi/queue/tx.h

diff --git a/drivers/net/wireless/intel/iwlwifi/Makefile b/drivers/net/wireless/intel/iwlwifi/Makefile
index fbcd1405aeea..85c6fed28f8e 100644
--- a/drivers/net/wireless/intel/iwlwifi/Makefile
+++ b/drivers/net/wireless/intel/iwlwifi/Makefile
@@ -13,6 +13,7 @@ iwlwifi-$(CONFIG_IWLDVM) += cfg/1000.o cfg/2000.o cfg/5000.o cfg/6000.o
 iwlwifi-$(CONFIG_IWLMVM) += cfg/7000.o cfg/8000.o cfg/9000.o cfg/22000.o
 iwlwifi-objs		+= iwl-dbg-tlv.o
 iwlwifi-objs		+= iwl-trans.o
+iwlwifi-objs		+= queue/tx.o
 
 iwlwifi-objs		+= fw/img.o fw/notif-wait.o
 iwlwifi-objs		+= fw/dbg.o
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
index 073efce47e74..a26da96763dd 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.c
@@ -66,6 +66,7 @@
 #include "iwl-trans.h"
 #include "iwl-drv.h"
 #include "iwl-fh.h"
+#include "queue/tx.h"
 #include <linux/dmapool.h>
 
 struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
@@ -150,11 +151,29 @@ struct iwl_trans *iwl_trans_alloc(unsigned int priv_size,
 
 	WARN_ON(!ops->wait_txq_empty && !ops->wait_tx_queues_empty);
 
+	trans->txqs.tso_hdr_page = alloc_percpu(struct iwl_tso_hdr_page);
+	if (!trans->txqs.tso_hdr_page) {
+		kmem_cache_destroy(trans->dev_cmd_pool);
+		return NULL;
+	}
+
 	return trans;
 }
 
 void iwl_trans_free(struct iwl_trans *trans)
 {
+	int i;
+
+	for_each_possible_cpu(i) {
+		struct iwl_tso_hdr_page *p =
+			per_cpu_ptr(trans->txqs.tso_hdr_page, i);
+
+		if (p->page)
+			__free_page(p->page);
+	}
+
+	free_percpu(trans->txqs.tso_hdr_page);
+
 	kmem_cache_destroy(trans->dev_cmd_pool);
 }
 
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
index 8fe720ac1c74..c3053fa3ff73 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
@@ -928,6 +928,7 @@ struct iwl_trans_txqs {
 	bool bc_table_dword;
 	u8 page_offs;
 	u8 dev_cmd_offs;
+	struct __percpu iwl_tso_hdr_page * tso_hdr_page;
 
 	struct {
 		u8 fifo;
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
index 23abfbd096b0..2597faea79c4 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info.c
@@ -73,7 +73,7 @@ static void *_iwl_pcie_ctxt_info_dma_alloc_coherent(struct iwl_trans *trans,
 	if (!result)
 		return NULL;
 
-	if (unlikely(iwl_pcie_crosses_4g_boundary(*phys, size))) {
+	if (unlikely(iwl_txq_crosses_4g_boundary(*phys, size))) {
 		void *old = result;
 		dma_addr_t oldphys = *phys;
 
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
index 22b4731ef511..1e6b988953ad 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
@@ -79,6 +79,7 @@
 #include "iwl-io.h"
 #include "iwl-op-mode.h"
 #include "iwl-drv.h"
+#include "queue/tx.h"
 
 /*
  * RX related structures and functions
@@ -240,16 +241,6 @@ struct iwl_rb_allocator {
 	struct work_struct rx_alloc;
 };
 
-/**
- * iwl_queue_inc_wrap - increment queue index, wrap back to beginning
- * @index -- current index
- */
-static inline int iwl_queue_inc_wrap(struct iwl_trans *trans, int index)
-{
-	return ++index &
-		(trans->trans_cfg->base_params->max_tfd_queue_size - 1);
-}
-
 /**
  * iwl_get_closed_rb_stts - get closed rb stts from different structs
  * @rxq - the rxq to get the rb stts from
@@ -268,28 +259,6 @@ static inline __le16 iwl_get_closed_rb_stts(struct iwl_trans *trans,
 	}
 }
 
-/**
- * iwl_queue_dec_wrap - decrement queue index, wrap back to end
- * @index -- current index
- */
-static inline int iwl_queue_dec_wrap(struct iwl_trans *trans, int index)
-{
-	return --index &
-		(trans->trans_cfg->base_params->max_tfd_queue_size - 1);
-}
-
-static inline dma_addr_t
-iwl_pcie_get_first_tb_dma(struct iwl_txq *txq, int idx)
-{
-	return txq->first_tb_dma +
-	       sizeof(struct iwl_pcie_first_tb_buf) * idx;
-}
-
-struct iwl_tso_hdr_page {
-	struct page *page;
-	u8 *pos;
-};
-
 #ifdef CONFIG_IWLWIFI_DEBUGFS
 /**
  * enum iwl_fw_mon_dbgfs_state - the different states of the monitor_data
@@ -427,8 +396,6 @@ struct iwl_trans_pcie {
 
 	struct net_device napi_dev;
 
-	struct __percpu iwl_tso_hdr_page *tso_hdr_page;
-
 	/* INT ICT Table */
 	__le32 *ict_tbl;
 	dma_addr_t ict_tbl_dma;
@@ -566,19 +533,7 @@ void iwl_pcie_disable_ict(struct iwl_trans *trans);
 /*****************************************************
 * TX / HCMD
 ******************************************************/
-/*
- * We need this inline in case dma_addr_t is only 32-bits - since the
- * hardware is always 64-bit, the issue can still occur in that case,
- * so use u64 for 'phys' here to force the addition in 64-bit.
- */
-static inline bool iwl_pcie_crosses_4g_boundary(u64 phys, u16 len)
-{
-	return upper_32_bits(phys) != upper_32_bits(phys + len);
-}
-
 int iwl_pcie_tx_init(struct iwl_trans *trans);
-int iwl_pcie_gen2_tx_init(struct iwl_trans *trans, int txq_id,
-			  int queue_size);
 void iwl_pcie_tx_start(struct iwl_trans *trans, u32 scd_base_addr);
 int iwl_pcie_tx_stop(struct iwl_trans *trans);
 void iwl_pcie_tx_free(struct iwl_trans *trans);
@@ -589,14 +544,10 @@ void iwl_trans_pcie_txq_disable(struct iwl_trans *trans, int queue,
 				bool configure_scd);
 void iwl_trans_pcie_txq_set_shared_mode(struct iwl_trans *trans, u32 txq_id,
 					bool shared_mode);
-void iwl_trans_pcie_log_scd_error(struct iwl_trans *trans,
-				  struct iwl_txq *txq);
 int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
 		      struct iwl_device_tx_cmd *dev_cmd, int txq_id);
 void iwl_pcie_txq_check_wrptrs(struct iwl_trans *trans);
 int iwl_trans_pcie_send_hcmd(struct iwl_trans *trans, struct iwl_host_cmd *cmd);
-void iwl_pcie_gen2_txq_inc_wr_ptr(struct iwl_trans *trans,
-				  struct iwl_txq *txq);
 void iwl_pcie_hcmd_complete(struct iwl_trans *trans,
 			    struct iwl_rx_cmd_buffer *rxb);
 void iwl_trans_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
@@ -787,20 +738,6 @@ static inline void iwl_enable_fw_load_int_ctx_info(struct iwl_trans *trans)
 	}
 }
 
-static inline u16 iwl_pcie_get_cmd_index(const struct iwl_txq *q, u32 index)
-{
-	return index & (q->n_window - 1);
-}
-
-static inline void *iwl_pcie_get_tfd(struct iwl_trans *trans,
-				     struct iwl_txq *txq, int idx)
-{
-	if (trans->trans_cfg->use_tfh)
-		idx = iwl_pcie_get_cmd_index(txq, idx);
-
-	return txq->tfds + trans->txqs.tfd.size * idx;
-}
-
 static inline const char *queue_name(struct device *dev,
 				     struct iwl_trans_pcie *trans_p, int i)
 {
@@ -852,37 +789,6 @@ static inline void iwl_enable_rfkill_int(struct iwl_trans *trans)
 
 void iwl_pcie_handle_rfkill_irq(struct iwl_trans *trans);
 
-static inline void iwl_wake_queue(struct iwl_trans *trans,
-				  struct iwl_txq *txq)
-{
-	if (test_and_clear_bit(txq->id, trans->txqs.queue_stopped)) {
-		IWL_DEBUG_TX_QUEUES(trans, "Wake hwq %d\n", txq->id);
-		iwl_op_mode_queue_not_full(trans->op_mode, txq->id);
-	}
-}
-
-static inline void iwl_stop_queue(struct iwl_trans *trans,
-				  struct iwl_txq *txq)
-{
-	if (!test_and_set_bit(txq->id, trans->txqs.queue_stopped)) {
-		iwl_op_mode_queue_full(trans->op_mode, txq->id);
-		IWL_DEBUG_TX_QUEUES(trans, "Stop hwq %d\n", txq->id);
-	} else
-		IWL_DEBUG_TX_QUEUES(trans, "hwq %d already stopped\n",
-				    txq->id);
-}
-
-static inline bool iwl_queue_used(const struct iwl_txq *q, int i)
-{
-	int index = iwl_pcie_get_cmd_index(q, i);
-	int r = iwl_pcie_get_cmd_index(q, q->read_ptr);
-	int w = iwl_pcie_get_cmd_index(q, q->write_ptr);
-
-	return w >= r ?
-		(index >= r && index < w) :
-		!(index < r && index >= w);
-}
-
 static inline bool iwl_is_rfkill_set(struct iwl_trans *trans)
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
@@ -949,23 +855,12 @@ bool iwl_pcie_check_hw_rf_kill(struct iwl_trans *trans);
 void iwl_trans_pcie_handle_stop_rfkill(struct iwl_trans *trans,
 				       bool was_in_rfkill);
 void iwl_pcie_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq);
-int iwl_queue_space(struct iwl_trans *trans, const struct iwl_txq *q);
 void iwl_pcie_apm_stop_master(struct iwl_trans *trans);
 void iwl_pcie_conf_msix_hw(struct iwl_trans_pcie *trans_pcie);
-int iwl_pcie_txq_init(struct iwl_trans *trans, struct iwl_txq *txq,
-		      int slots_num, bool cmd_queue);
-int iwl_pcie_txq_alloc(struct iwl_trans *trans,
-		       struct iwl_txq *txq, int slots_num,  bool cmd_queue);
 int iwl_pcie_alloc_dma_ptr(struct iwl_trans *trans,
 			   struct iwl_dma_ptr *ptr, size_t size);
 void iwl_pcie_free_dma_ptr(struct iwl_trans *trans, struct iwl_dma_ptr *ptr);
 void iwl_pcie_apply_destination(struct iwl_trans *trans);
-void iwl_pcie_free_tso_page(struct iwl_trans *trans,
-			    struct sk_buff *skb);
-#ifdef CONFIG_INET
-struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len,
-				      struct sk_buff *skb);
-#endif
 
 /* common functions that are used by gen3 transport */
 void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans, u8 max_power);
@@ -974,28 +869,10 @@ void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans, u8 max_power);
 int iwl_trans_pcie_gen2_start_fw(struct iwl_trans *trans,
 				 const struct fw_img *fw, bool run_in_rfkill);
 void iwl_trans_pcie_gen2_fw_alive(struct iwl_trans *trans, u32 scd_addr);
-void iwl_pcie_gen2_txq_free_memory(struct iwl_trans *trans,
-				   struct iwl_txq *txq);
-int iwl_trans_pcie_dyn_txq_alloc_dma(struct iwl_trans *trans,
-				     struct iwl_txq **intxq, int size,
-				     unsigned int timeout);
-int iwl_trans_pcie_txq_alloc_response(struct iwl_trans *trans,
-				      struct iwl_txq *txq,
-				      struct iwl_host_cmd *hcmd);
-int iwl_trans_pcie_dyn_txq_alloc(struct iwl_trans *trans,
-				 __le16 flags, u8 sta_id, u8 tid,
-				 int cmd_id, int size,
-				 unsigned int timeout);
-void iwl_trans_pcie_dyn_txq_free(struct iwl_trans *trans, int queue);
-int iwl_trans_pcie_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb,
-			   struct iwl_device_tx_cmd *dev_cmd, int txq_id);
 int iwl_trans_pcie_gen2_send_hcmd(struct iwl_trans *trans,
 				  struct iwl_host_cmd *cmd);
 void iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans);
 void _iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans);
-void iwl_pcie_gen2_txq_unmap(struct iwl_trans *trans, int txq_id);
-void iwl_pcie_gen2_tx_free(struct iwl_trans *trans);
-void iwl_pcie_gen2_tx_stop(struct iwl_trans *trans);
 void iwl_pcie_d3_complete_suspend(struct iwl_trans *trans,
 				  bool test, bool reset);
 #endif /* __iwl_trans_int_pcie_h__ */
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
index 9463c108aa96..94299f259518 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
@@ -1359,7 +1359,7 @@ static void iwl_pcie_rx_handle_rb(struct iwl_trans *trans,
 
 		sequence = le16_to_cpu(pkt->hdr.sequence);
 		index = SEQ_TO_INDEX(sequence);
-		cmd_index = iwl_pcie_get_cmd_index(txq, index);
+		cmd_index = iwl_txq_get_cmd_index(txq, index);
 
 		if (rxq->id == trans_pcie->def_rx_queue)
 			iwl_op_mode_rx(trans->op_mode, &rxq->napi,
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
index 97c9e9c87436..91ec9379c061 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
@@ -162,7 +162,7 @@ void _iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans)
 	if (test_and_clear_bit(STATUS_DEVICE_ENABLED, &trans->status)) {
 		IWL_DEBUG_INFO(trans,
 			       "DEVICE_ENABLED bit was set and is now cleared\n");
-		iwl_pcie_gen2_tx_stop(trans);
+		iwl_txq_gen2_tx_stop(trans);
 		iwl_pcie_rx_stop(trans);
 	}
 
@@ -245,7 +245,7 @@ static int iwl_pcie_gen2_nic_init(struct iwl_trans *trans)
 		return -ENOMEM;
 
 	/* Allocate or reset and init all Tx and Command queues */
-	if (iwl_pcie_gen2_tx_init(trans, trans->txqs.cmd.q_id, queue_size))
+	if (iwl_txq_gen2_init(trans, trans->txqs.cmd.q_id, queue_size))
 		return -ENOMEM;
 
 	/* enable shadow regs in HW */
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
index 52e61df6206e..61f91bd9050b 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
@@ -1955,7 +1955,7 @@ void iwl_trans_pcie_free(struct iwl_trans *trans)
 	iwl_pcie_synchronize_irqs(trans);
 
 	if (trans->trans_cfg->gen2)
-		iwl_pcie_gen2_tx_free(trans);
+		iwl_txq_gen2_tx_free(trans);
 	else
 		iwl_pcie_tx_free(trans);
 	iwl_pcie_rx_free(trans);
@@ -1979,15 +1979,6 @@ void iwl_trans_pcie_free(struct iwl_trans *trans)
 
 	iwl_pcie_free_fw_monitor(trans);
 
-	for_each_possible_cpu(i) {
-		struct iwl_tso_hdr_page *p =
-			per_cpu_ptr(trans_pcie->tso_hdr_page, i);
-
-		if (p->page)
-			__free_page(p->page);
-	}
-
-	free_percpu(trans_pcie->tso_hdr_page);
 	mutex_destroy(&trans_pcie->mutex);
 	iwl_trans_free(trans);
 }
@@ -2280,36 +2271,6 @@ static void iwl_trans_pcie_block_txq_ptrs(struct iwl_trans *trans, bool block)
 
 #define IWL_FLUSH_WAIT_MS	2000
 
-void iwl_trans_pcie_log_scd_error(struct iwl_trans *trans, struct iwl_txq *txq)
-{
-	u32 txq_id = txq->id;
-	u32 status;
-	bool active;
-	u8 fifo;
-
-	if (trans->trans_cfg->use_tfh) {
-		IWL_ERR(trans, "Queue %d is stuck %d %d\n", txq_id,
-			txq->read_ptr, txq->write_ptr);
-		/* TODO: access new SCD registers and dump them */
-		return;
-	}
-
-	status = iwl_read_prph(trans, SCD_QUEUE_STATUS_BITS(txq_id));
-	fifo = (status >> SCD_QUEUE_STTS_REG_POS_TXF) & 0x7;
-	active = !!(status & BIT(SCD_QUEUE_STTS_REG_POS_ACTIVE));
-
-	IWL_ERR(trans,
-		"Queue %d is %sactive on fifo %d and stuck for %u ms. SW [%d, %d] HW [%d, %d] FH TRB=0x0%x\n",
-		txq_id, active ? "" : "in", fifo,
-		jiffies_to_msecs(txq->wd_timeout),
-		txq->read_ptr, txq->write_ptr,
-		iwl_read_prph(trans, SCD_QUEUE_RDPTR(txq_id)) &
-			(trans->trans_cfg->base_params->max_tfd_queue_size - 1),
-			iwl_read_prph(trans, SCD_QUEUE_WRPTR(txq_id)) &
-			(trans->trans_cfg->base_params->max_tfd_queue_size - 1),
-			iwl_read_direct32(trans, FH_TX_TRB_REG(fifo)));
-}
-
 static int iwl_trans_pcie_rxq_dma_data(struct iwl_trans *trans, int queue,
 				       struct iwl_trans_rxq_dma_data *data)
 {
@@ -2378,7 +2339,7 @@ static int iwl_trans_pcie_wait_txq_empty(struct iwl_trans *trans, int txq_idx)
 	if (txq->read_ptr != txq->write_ptr) {
 		IWL_ERR(trans,
 			"fail to flush all tx fifo queues Q %d\n", txq_idx);
-		iwl_trans_pcie_log_scd_error(trans, txq);
+		iwl_txq_log_scd_error(trans, txq);
 		return -ETIMEDOUT;
 	}
 
@@ -3339,7 +3300,7 @@ static struct iwl_trans_dump_data
 		spin_lock_bh(&cmdq->lock);
 		ptr = cmdq->write_ptr;
 		for (i = 0; i < cmdq->n_window; i++) {
-			u8 idx = iwl_pcie_get_cmd_index(cmdq, ptr);
+			u8 idx = iwl_txq_get_cmd_index(cmdq, ptr);
 			u8 tfdidx;
 			u32 caplen, cmdlen;
 
@@ -3362,7 +3323,7 @@ static struct iwl_trans_dump_data
 				txcmd = (void *)((u8 *)txcmd->data + caplen);
 			}
 
-			ptr = iwl_queue_dec_wrap(trans, ptr);
+			ptr = iwl_txq_dec_wrap(trans, ptr);
 		}
 		spin_unlock_bh(&cmdq->lock);
 
@@ -3481,13 +3442,13 @@ static const struct iwl_trans_ops trans_ops_pcie_gen2 = {
 
 	.send_cmd = iwl_trans_pcie_gen2_send_hcmd,
 
-	.tx = iwl_trans_pcie_gen2_tx,
+	.tx = iwl_txq_gen2_tx,
 	.reclaim = iwl_trans_pcie_reclaim,
 
 	.set_q_ptrs = iwl_trans_pcie_set_q_ptrs,
 
-	.txq_alloc = iwl_trans_pcie_dyn_txq_alloc,
-	.txq_free = iwl_trans_pcie_dyn_txq_free,
+	.txq_alloc = iwl_txq_dyn_alloc,
+	.txq_free = iwl_txq_dyn_free,
 	.wait_txq_empty = iwl_trans_pcie_wait_txq_empty,
 	.rxq_dma_data = iwl_trans_pcie_rxq_dma_data,
 #ifdef CONFIG_IWLWIFI_DEBUGFS
@@ -3534,11 +3495,6 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev,
 	}
 	INIT_WORK(&trans_pcie->rba.rx_alloc, iwl_pcie_rx_allocator_work);
 
-	trans_pcie->tso_hdr_page = alloc_percpu(struct iwl_tso_hdr_page);
-	if (!trans_pcie->tso_hdr_page) {
-		ret = -ENOMEM;
-		goto out_no_pci;
-	}
 	trans_pcie->debug_rfkill = -1;
 
 	if (!cfg_trans->base_params->pcie_l1_allowed) {
@@ -3671,7 +3627,6 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev,
 out_free_ict:
 	iwl_pcie_free_ict(trans);
 out_no_pci:
-	free_percpu(trans_pcie->tso_hdr_page);
 	destroy_workqueue(trans_pcie->rba.alloc_wq);
 out_free_trans:
 	iwl_trans_free(trans);
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
index 5ed7852289d4..baa83a0b8593 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
@@ -58,746 +58,7 @@
 #include "iwl-io.h"
 #include "internal.h"
 #include "fw/api/tx.h"
-
- /*
- * iwl_pcie_gen2_tx_stop - Stop all Tx DMA channels
- */
-void iwl_pcie_gen2_tx_stop(struct iwl_trans *trans)
-{
-	int txq_id;
-
-	/*
-	 * This function can be called before the op_mode disabled the
-	 * queues. This happens when we have an rfkill interrupt.
-	 * Since we stop Tx altogether - mark the queues as stopped.
-	 */
-	memset(trans->txqs.queue_stopped, 0,
-	       sizeof(trans->txqs.queue_stopped));
-	memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used));
-
-	/* Unmap DMA from host system and free skb's */
-	for (txq_id = 0; txq_id < ARRAY_SIZE(trans->txqs.txq); txq_id++) {
-		if (!trans->txqs.txq[txq_id])
-			continue;
-		iwl_pcie_gen2_txq_unmap(trans, txq_id);
-	}
-}
-
-/*
- * iwl_pcie_txq_update_byte_tbl - Set up entry in Tx byte-count array
- */
-static void iwl_pcie_gen2_update_byte_tbl(struct iwl_trans *trans,
-					  struct iwl_txq *txq, u16 byte_cnt,
-					  int num_tbs)
-{
-	int idx = iwl_pcie_get_cmd_index(txq, txq->write_ptr);
-	u8 filled_tfd_size, num_fetch_chunks;
-	u16 len = byte_cnt;
-	__le16 bc_ent;
-
-	if (WARN(idx >= txq->n_window, "%d >= %d\n", idx, txq->n_window))
-		return;
-
-	filled_tfd_size = offsetof(struct iwl_tfh_tfd, tbs) +
-			  num_tbs * sizeof(struct iwl_tfh_tb);
-	/*
-	 * filled_tfd_size contains the number of filled bytes in the TFD.
-	 * Dividing it by 64 will give the number of chunks to fetch
-	 * to SRAM- 0 for one chunk, 1 for 2 and so on.
-	 * If, for example, TFD contains only 3 TBs then 32 bytes
-	 * of the TFD are used, and only one chunk of 64 bytes should
-	 * be fetched
-	 */
-	num_fetch_chunks = DIV_ROUND_UP(filled_tfd_size, 64) - 1;
-
-	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) {
-		struct iwl_gen3_bc_tbl *scd_bc_tbl_gen3 = txq->bc_tbl.addr;
-
-		/* Starting from AX210, the HW expects bytes */
-		WARN_ON(trans->txqs.bc_table_dword);
-		WARN_ON(len > 0x3FFF);
-		bc_ent = cpu_to_le16(len | (num_fetch_chunks << 14));
-		scd_bc_tbl_gen3->tfd_offset[idx] = bc_ent;
-	} else {
-		struct iwlagn_scd_bc_tbl *scd_bc_tbl = txq->bc_tbl.addr;
-
-		/* Before AX210, the HW expects DW */
-		WARN_ON(!trans->txqs.bc_table_dword);
-		len = DIV_ROUND_UP(len, 4);
-		WARN_ON(len > 0xFFF);
-		bc_ent = cpu_to_le16(len | (num_fetch_chunks << 12));
-		scd_bc_tbl->tfd_offset[idx] = bc_ent;
-	}
-}
-
-/*
- * iwl_pcie_gen2_txq_inc_wr_ptr - Send new write index to hardware
- */
-void iwl_pcie_gen2_txq_inc_wr_ptr(struct iwl_trans *trans,
-				  struct iwl_txq *txq)
-{
-	lockdep_assert_held(&txq->lock);
-
-	IWL_DEBUG_TX(trans, "Q:%d WR: 0x%x\n", txq->id, txq->write_ptr);
-
-	/*
-	 * if not in power-save mode, uCode will never sleep when we're
-	 * trying to tx (during RFKILL, we're not trying to tx).
-	 */
-	iwl_write32(trans, HBUS_TARG_WRPTR, txq->write_ptr | (txq->id << 16));
-}
-
-static u8 iwl_pcie_gen2_get_num_tbs(struct iwl_trans *trans,
-				    struct iwl_tfh_tfd *tfd)
-{
-	return le16_to_cpu(tfd->num_tbs) & 0x1f;
-}
-
-static void iwl_pcie_gen2_tfd_unmap(struct iwl_trans *trans,
-				    struct iwl_cmd_meta *meta,
-				    struct iwl_tfh_tfd *tfd)
-{
-	int i, num_tbs;
-
-	/* Sanity check on number of chunks */
-	num_tbs = iwl_pcie_gen2_get_num_tbs(trans, tfd);
-
-	if (num_tbs > trans->txqs.tfd.max_tbs) {
-		IWL_ERR(trans, "Too many chunks: %i\n", num_tbs);
-		return;
-	}
-
-	/* first TB is never freed - it's the bidirectional DMA data */
-	for (i = 1; i < num_tbs; i++) {
-		if (meta->tbs & BIT(i))
-			dma_unmap_page(trans->dev,
-				       le64_to_cpu(tfd->tbs[i].addr),
-				       le16_to_cpu(tfd->tbs[i].tb_len),
-				       DMA_TO_DEVICE);
-		else
-			dma_unmap_single(trans->dev,
-					 le64_to_cpu(tfd->tbs[i].addr),
-					 le16_to_cpu(tfd->tbs[i].tb_len),
-					 DMA_TO_DEVICE);
-	}
-
-	tfd->num_tbs = 0;
-}
-
-static void iwl_pcie_gen2_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq)
-{
-	/* rd_ptr is bounded by TFD_QUEUE_SIZE_MAX and
-	 * idx is bounded by n_window
-	 */
-	int idx = iwl_pcie_get_cmd_index(txq, txq->read_ptr);
-
-	lockdep_assert_held(&txq->lock);
-
-	iwl_pcie_gen2_tfd_unmap(trans, &txq->entries[idx].meta,
-				iwl_pcie_get_tfd(trans, txq, idx));
-
-	/* free SKB */
-	if (txq->entries) {
-		struct sk_buff *skb;
-
-		skb = txq->entries[idx].skb;
-
-		/* Can be called from irqs-disabled context
-		 * If skb is not NULL, it means that the whole queue is being
-		 * freed and that the queue is not empty - free the skb
-		 */
-		if (skb) {
-			iwl_op_mode_free_skb(trans->op_mode, skb);
-			txq->entries[idx].skb = NULL;
-		}
-	}
-}
-
-static int iwl_pcie_gen2_set_tb(struct iwl_trans *trans,
-				struct iwl_tfh_tfd *tfd, dma_addr_t addr,
-				u16 len)
-{
-	int idx = iwl_pcie_gen2_get_num_tbs(trans, tfd);
-	struct iwl_tfh_tb *tb;
-
-	/*
-	 * Only WARN here so we know about the issue, but we mess up our
-	 * unmap path because not every place currently checks for errors
-	 * returned from this function - it can only return an error if
-	 * there's no more space, and so when we know there is enough we
-	 * don't always check ...
-	 */
-	WARN(iwl_pcie_crosses_4g_boundary(addr, len),
-	     "possible DMA problem with iova:0x%llx, len:%d\n",
-	     (unsigned long long)addr, len);
-
-	if (WARN_ON(idx >= IWL_TFH_NUM_TBS))
-		return -EINVAL;
-	tb = &tfd->tbs[idx];
-
-	/* Each TFD can point to a maximum max_tbs Tx buffers */
-	if (le16_to_cpu(tfd->num_tbs) >= trans->txqs.tfd.max_tbs) {
-		IWL_ERR(trans, "Error can not send more than %d chunks\n",
-			trans->txqs.tfd.max_tbs);
-		return -EINVAL;
-	}
-
-	put_unaligned_le64(addr, &tb->addr);
-	tb->tb_len = cpu_to_le16(len);
-
-	tfd->num_tbs = cpu_to_le16(idx + 1);
-
-	return idx;
-}
-
-static struct page *get_workaround_page(struct iwl_trans *trans,
-					struct sk_buff *skb)
-{
-	struct page **page_ptr;
-	struct page *ret;
-
-	page_ptr = (void *)((u8 *)skb->cb + trans->txqs.page_offs);
-
-	ret = alloc_page(GFP_ATOMIC);
-	if (!ret)
-		return NULL;
-
-	/* set the chaining pointer to the previous page if there */
-	*(void **)(page_address(ret) + PAGE_SIZE - sizeof(void *)) = *page_ptr;
-	*page_ptr = ret;
-
-	return ret;
-}
-
-/*
- * Add a TB and if needed apply the FH HW bug workaround;
- * meta != NULL indicates that it's a page mapping and we
- * need to dma_unmap_page() and set the meta->tbs bit in
- * this case.
- */
-static int iwl_pcie_gen2_set_tb_with_wa(struct iwl_trans *trans,
-					struct sk_buff *skb,
-					struct iwl_tfh_tfd *tfd,
-					dma_addr_t phys, void *virt,
-					u16 len, struct iwl_cmd_meta *meta)
-{
-	dma_addr_t oldphys = phys;
-	struct page *page;
-	int ret;
-
-	if (unlikely(dma_mapping_error(trans->dev, phys)))
-		return -ENOMEM;
-
-	if (likely(!iwl_pcie_crosses_4g_boundary(phys, len))) {
-		ret = iwl_pcie_gen2_set_tb(trans, tfd, phys, len);
-
-		if (ret < 0)
-			goto unmap;
-
-		if (meta)
-			meta->tbs |= BIT(ret);
-
-		ret = 0;
-		goto trace;
-	}
-
-	/*
-	 * Work around a hardware bug. If (as expressed in the
-	 * condition above) the TB ends on a 32-bit boundary,
-	 * then the next TB may be accessed with the wrong
-	 * address.
-	 * To work around it, copy the data elsewhere and make
-	 * a new mapping for it so the device will not fail.
-	 */
-
-	if (WARN_ON(len > PAGE_SIZE - sizeof(void *))) {
-		ret = -ENOBUFS;
-		goto unmap;
-	}
-
-	page = get_workaround_page(trans, skb);
-	if (!page) {
-		ret = -ENOMEM;
-		goto unmap;
-	}
-
-	memcpy(page_address(page), virt, len);
-
-	phys = dma_map_single(trans->dev, page_address(page), len,
-			      DMA_TO_DEVICE);
-	if (unlikely(dma_mapping_error(trans->dev, phys)))
-		return -ENOMEM;
-	ret = iwl_pcie_gen2_set_tb(trans, tfd, phys, len);
-	if (ret < 0) {
-		/* unmap the new allocation as single */
-		oldphys = phys;
-		meta = NULL;
-		goto unmap;
-	}
-	IWL_WARN(trans,
-		 "TB bug workaround: copied %d bytes from 0x%llx to 0x%llx\n",
-		 len, (unsigned long long)oldphys, (unsigned long long)phys);
-
-	ret = 0;
-unmap:
-	if (meta)
-		dma_unmap_page(trans->dev, oldphys, len, DMA_TO_DEVICE);
-	else
-		dma_unmap_single(trans->dev, oldphys, len, DMA_TO_DEVICE);
-trace:
-	trace_iwlwifi_dev_tx_tb(trans->dev, skb, virt, phys, len);
-
-	return ret;
-}
-
-static int iwl_pcie_gen2_build_amsdu(struct iwl_trans *trans,
-				     struct sk_buff *skb,
-				     struct iwl_tfh_tfd *tfd, int start_len,
-				     u8 hdr_len,
-				     struct iwl_device_tx_cmd *dev_cmd)
-{
-#ifdef CONFIG_INET
-	struct iwl_tx_cmd_gen2 *tx_cmd = (void *)dev_cmd->payload;
-	struct ieee80211_hdr *hdr = (void *)skb->data;
-	unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room;
-	unsigned int mss = skb_shinfo(skb)->gso_size;
-	u16 length, amsdu_pad;
-	u8 *start_hdr;
-	struct iwl_tso_hdr_page *hdr_page;
-	struct tso_t tso;
-
-	trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd),
-			     &dev_cmd->hdr, start_len, 0);
-
-	ip_hdrlen = skb_transport_header(skb) - skb_network_header(skb);
-	snap_ip_tcp_hdrlen = 8 + ip_hdrlen + tcp_hdrlen(skb);
-	total_len = skb->len - snap_ip_tcp_hdrlen - hdr_len;
-	amsdu_pad = 0;
-
-	/* total amount of header we may need for this A-MSDU */
-	hdr_room = DIV_ROUND_UP(total_len, mss) *
-		(3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr));
-
-	/* Our device supports 9 segments at most, it will fit in 1 page */
-	hdr_page = get_page_hdr(trans, hdr_room, skb);
-	if (!hdr_page)
-		return -ENOMEM;
-
-	start_hdr = hdr_page->pos;
-
-	/*
-	 * Pull the ieee80211 header to be able to use TSO core,
-	 * we will restore it for the tx_status flow.
-	 */
-	skb_pull(skb, hdr_len);
-
-	/*
-	 * Remove the length of all the headers that we don't actually
-	 * have in the MPDU by themselves, but that we duplicate into
-	 * all the different MSDUs inside the A-MSDU.
-	 */
-	le16_add_cpu(&tx_cmd->len, -snap_ip_tcp_hdrlen);
-
-	tso_start(skb, &tso);
-
-	while (total_len) {
-		/* this is the data left for this subframe */
-		unsigned int data_left = min_t(unsigned int, mss, total_len);
-		struct sk_buff *csum_skb = NULL;
-		unsigned int tb_len;
-		dma_addr_t tb_phys;
-		u8 *subf_hdrs_start = hdr_page->pos;
-
-		total_len -= data_left;
-
-		memset(hdr_page->pos, 0, amsdu_pad);
-		hdr_page->pos += amsdu_pad;
-		amsdu_pad = (4 - (sizeof(struct ethhdr) + snap_ip_tcp_hdrlen +
-				  data_left)) & 0x3;
-		ether_addr_copy(hdr_page->pos, ieee80211_get_DA(hdr));
-		hdr_page->pos += ETH_ALEN;
-		ether_addr_copy(hdr_page->pos, ieee80211_get_SA(hdr));
-		hdr_page->pos += ETH_ALEN;
-
-		length = snap_ip_tcp_hdrlen + data_left;
-		*((__be16 *)hdr_page->pos) = cpu_to_be16(length);
-		hdr_page->pos += sizeof(length);
-
-		/*
-		 * This will copy the SNAP as well which will be considered
-		 * as MAC header.
-		 */
-		tso_build_hdr(skb, hdr_page->pos, &tso, data_left, !total_len);
-
-		hdr_page->pos += snap_ip_tcp_hdrlen;
-
-		tb_len = hdr_page->pos - start_hdr;
-		tb_phys = dma_map_single(trans->dev, start_hdr,
-					 tb_len, DMA_TO_DEVICE);
-		if (unlikely(dma_mapping_error(trans->dev, tb_phys))) {
-			dev_kfree_skb(csum_skb);
-			goto out_err;
-		}
-		/*
-		 * No need for _with_wa, this is from the TSO page and
-		 * we leave some space at the end of it so can't hit
-		 * the buggy scenario.
-		 */
-		iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb_len);
-		trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr,
-					tb_phys, tb_len);
-		/* add this subframe's headers' length to the tx_cmd */
-		le16_add_cpu(&tx_cmd->len, hdr_page->pos - subf_hdrs_start);
-
-		/* prepare the start_hdr for the next subframe */
-		start_hdr = hdr_page->pos;
-
-		/* put the payload */
-		while (data_left) {
-			int ret;
-
-			tb_len = min_t(unsigned int, tso.size, data_left);
-			tb_phys = dma_map_single(trans->dev, tso.data,
-						 tb_len, DMA_TO_DEVICE);
-			ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd,
-							   tb_phys, tso.data,
-							   tb_len, NULL);
-			if (ret) {
-				dev_kfree_skb(csum_skb);
-				goto out_err;
-			}
-
-			data_left -= tb_len;
-			tso_build_data(skb, &tso, tb_len);
-		}
-	}
-
-	/* re -add the WiFi header */
-	skb_push(skb, hdr_len);
-
-	return 0;
-
-out_err:
-#endif
-	return -EINVAL;
-}
-
-static struct
-iwl_tfh_tfd *iwl_pcie_gen2_build_tx_amsdu(struct iwl_trans *trans,
-					  struct iwl_txq *txq,
-					  struct iwl_device_tx_cmd *dev_cmd,
-					  struct sk_buff *skb,
-					  struct iwl_cmd_meta *out_meta,
-					  int hdr_len,
-					  int tx_cmd_len)
-{
-	int idx = iwl_pcie_get_cmd_index(txq, txq->write_ptr);
-	struct iwl_tfh_tfd *tfd = iwl_pcie_get_tfd(trans, txq, idx);
-	dma_addr_t tb_phys;
-	int len;
-	void *tb1_addr;
-
-	tb_phys = iwl_pcie_get_first_tb_dma(txq, idx);
-
-	/*
-	 * No need for _with_wa, the first TB allocation is aligned up
-	 * to a 64-byte boundary and thus can't be at the end or cross
-	 * a page boundary (much less a 2^32 boundary).
-	 */
-	iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE);
-
-	/*
-	 * The second TB (tb1) points to the remainder of the TX command
-	 * and the 802.11 header - dword aligned size
-	 * (This calculation modifies the TX command, so do it before the
-	 * setup of the first TB)
-	 */
-	len = tx_cmd_len + sizeof(struct iwl_cmd_header) + hdr_len -
-	      IWL_FIRST_TB_SIZE;
-
-	/* do not align A-MSDU to dword as the subframe header aligns it */
-
-	/* map the data for TB1 */
-	tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_FIRST_TB_SIZE;
-	tb_phys = dma_map_single(trans->dev, tb1_addr, len, DMA_TO_DEVICE);
-	if (unlikely(dma_mapping_error(trans->dev, tb_phys)))
-		goto out_err;
-	/*
-	 * No need for _with_wa(), we ensure (via alignment) that the data
-	 * here can never cross or end at a page boundary.
-	 */
-	iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, len);
-
-	if (iwl_pcie_gen2_build_amsdu(trans, skb, tfd,
-				      len + IWL_FIRST_TB_SIZE,
-				      hdr_len, dev_cmd))
-		goto out_err;
-
-	/* building the A-MSDU might have changed this data, memcpy it now */
-	memcpy(&txq->first_tb_bufs[idx], dev_cmd, IWL_FIRST_TB_SIZE);
-	return tfd;
-
-out_err:
-	iwl_pcie_gen2_tfd_unmap(trans, out_meta, tfd);
-	return NULL;
-}
-
-static int iwl_pcie_gen2_tx_add_frags(struct iwl_trans *trans,
-				      struct sk_buff *skb,
-				      struct iwl_tfh_tfd *tfd,
-				      struct iwl_cmd_meta *out_meta)
-{
-	int i;
-
-	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
-		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
-		dma_addr_t tb_phys;
-		unsigned int fragsz = skb_frag_size(frag);
-		int ret;
-
-		if (!fragsz)
-			continue;
-
-		tb_phys = skb_frag_dma_map(trans->dev, frag, 0,
-					   fragsz, DMA_TO_DEVICE);
-		ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys,
-						   skb_frag_address(frag),
-						   fragsz, out_meta);
-		if (ret)
-			return ret;
-	}
-
-	return 0;
-}
-
-static struct
-iwl_tfh_tfd *iwl_pcie_gen2_build_tx(struct iwl_trans *trans,
-				    struct iwl_txq *txq,
-				    struct iwl_device_tx_cmd *dev_cmd,
-				    struct sk_buff *skb,
-				    struct iwl_cmd_meta *out_meta,
-				    int hdr_len,
-				    int tx_cmd_len,
-				    bool pad)
-{
-	int idx = iwl_pcie_get_cmd_index(txq, txq->write_ptr);
-	struct iwl_tfh_tfd *tfd = iwl_pcie_get_tfd(trans, txq, idx);
-	dma_addr_t tb_phys;
-	int len, tb1_len, tb2_len;
-	void *tb1_addr;
-	struct sk_buff *frag;
-
-	tb_phys = iwl_pcie_get_first_tb_dma(txq, idx);
-
-	/* The first TB points to bi-directional DMA data */
-	memcpy(&txq->first_tb_bufs[idx], dev_cmd, IWL_FIRST_TB_SIZE);
-
-	/*
-	 * No need for _with_wa, the first TB allocation is aligned up
-	 * to a 64-byte boundary and thus can't be at the end or cross
-	 * a page boundary (much less a 2^32 boundary).
-	 */
-	iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE);
-
-	/*
-	 * The second TB (tb1) points to the remainder of the TX command
-	 * and the 802.11 header - dword aligned size
-	 * (This calculation modifies the TX command, so do it before the
-	 * setup of the first TB)
-	 */
-	len = tx_cmd_len + sizeof(struct iwl_cmd_header) + hdr_len -
-	      IWL_FIRST_TB_SIZE;
-
-	if (pad)
-		tb1_len = ALIGN(len, 4);
-	else
-		tb1_len = len;
-
-	/* map the data for TB1 */
-	tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_FIRST_TB_SIZE;
-	tb_phys = dma_map_single(trans->dev, tb1_addr, tb1_len, DMA_TO_DEVICE);
-	if (unlikely(dma_mapping_error(trans->dev, tb_phys)))
-		goto out_err;
-	/*
-	 * No need for _with_wa(), we ensure (via alignment) that the data
-	 * here can never cross or end at a page boundary.
-	 */
-	iwl_pcie_gen2_set_tb(trans, tfd, tb_phys, tb1_len);
-	trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), &dev_cmd->hdr,
-			     IWL_FIRST_TB_SIZE + tb1_len, hdr_len);
-
-	/* set up TFD's third entry to point to remainder of skb's head */
-	tb2_len = skb_headlen(skb) - hdr_len;
-
-	if (tb2_len > 0) {
-		int ret;
-
-		tb_phys = dma_map_single(trans->dev, skb->data + hdr_len,
-					 tb2_len, DMA_TO_DEVICE);
-		ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys,
-						   skb->data + hdr_len, tb2_len,
-						   NULL);
-		if (ret)
-			goto out_err;
-	}
-
-	if (iwl_pcie_gen2_tx_add_frags(trans, skb, tfd, out_meta))
-		goto out_err;
-
-	skb_walk_frags(skb, frag) {
-		int ret;
-
-		tb_phys = dma_map_single(trans->dev, frag->data,
-					 skb_headlen(frag), DMA_TO_DEVICE);
-		ret = iwl_pcie_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys,
-						   frag->data,
-						   skb_headlen(frag), NULL);
-		if (ret)
-			goto out_err;
-		if (iwl_pcie_gen2_tx_add_frags(trans, frag, tfd, out_meta))
-			goto out_err;
-	}
-
-	return tfd;
-
-out_err:
-	iwl_pcie_gen2_tfd_unmap(trans, out_meta, tfd);
-	return NULL;
-}
-
-static
-struct iwl_tfh_tfd *iwl_pcie_gen2_build_tfd(struct iwl_trans *trans,
-					    struct iwl_txq *txq,
-					    struct iwl_device_tx_cmd *dev_cmd,
-					    struct sk_buff *skb,
-					    struct iwl_cmd_meta *out_meta)
-{
-	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
-	int idx = iwl_pcie_get_cmd_index(txq, txq->write_ptr);
-	struct iwl_tfh_tfd *tfd = iwl_pcie_get_tfd(trans, txq, idx);
-	int len, hdr_len;
-	bool amsdu;
-
-	/* There must be data left over for TB1 or this code must be changed */
-	BUILD_BUG_ON(sizeof(struct iwl_tx_cmd_gen2) < IWL_FIRST_TB_SIZE);
-
-	memset(tfd, 0, sizeof(*tfd));
-
-	if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
-		len = sizeof(struct iwl_tx_cmd_gen2);
-	else
-		len = sizeof(struct iwl_tx_cmd_gen3);
-
-	amsdu = ieee80211_is_data_qos(hdr->frame_control) &&
-			(*ieee80211_get_qos_ctl(hdr) &
-			 IEEE80211_QOS_CTL_A_MSDU_PRESENT);
-
-	hdr_len = ieee80211_hdrlen(hdr->frame_control);
-
-	/*
-	 * Only build A-MSDUs here if doing so by GSO, otherwise it may be
-	 * an A-MSDU for other reasons, e.g. NAN or an A-MSDU having been
-	 * built in the higher layers already.
-	 */
-	if (amsdu && skb_shinfo(skb)->gso_size)
-		return iwl_pcie_gen2_build_tx_amsdu(trans, txq, dev_cmd, skb,
-						    out_meta, hdr_len, len);
-
-	return iwl_pcie_gen2_build_tx(trans, txq, dev_cmd, skb, out_meta,
-				      hdr_len, len, !amsdu);
-}
-
-int iwl_trans_pcie_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb,
-			   struct iwl_device_tx_cmd *dev_cmd, int txq_id)
-{
-	struct iwl_cmd_meta *out_meta;
-	struct iwl_txq *txq = trans->txqs.txq[txq_id];
-	u16 cmd_len;
-	int idx;
-	void *tfd;
-
-	if (WARN_ONCE(txq_id >= IWL_MAX_TVQM_QUEUES,
-		      "queue %d out of range", txq_id))
-		return -EINVAL;
-
-	if (WARN_ONCE(!test_bit(txq_id, trans->txqs.queue_used),
-		      "TX on unused queue %d\n", txq_id))
-		return -EINVAL;
-
-	if (skb_is_nonlinear(skb) &&
-	    skb_shinfo(skb)->nr_frags > IWL_TRANS_MAX_FRAGS(trans) &&
-	    __skb_linearize(skb))
-		return -ENOMEM;
-
-	spin_lock(&txq->lock);
-
-	if (iwl_queue_space(trans, txq) < txq->high_mark) {
-		iwl_stop_queue(trans, txq);
-
-		/* don't put the packet on the ring, if there is no room */
-		if (unlikely(iwl_queue_space(trans, txq) < 3)) {
-			struct iwl_device_tx_cmd **dev_cmd_ptr;
-
-			dev_cmd_ptr = (void *)((u8 *)skb->cb +
-					       trans->txqs.dev_cmd_offs);
-
-			*dev_cmd_ptr = dev_cmd;
-			__skb_queue_tail(&txq->overflow_q, skb);
-			spin_unlock(&txq->lock);
-			return 0;
-		}
-	}
-
-	idx = iwl_pcie_get_cmd_index(txq, txq->write_ptr);
-
-	/* Set up driver data for this TFD */
-	txq->entries[idx].skb = skb;
-	txq->entries[idx].cmd = dev_cmd;
-
-	dev_cmd->hdr.sequence =
-		cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) |
-			    INDEX_TO_SEQ(idx)));
-
-	/* Set up first empty entry in queue's array of Tx/cmd buffers */
-	out_meta = &txq->entries[idx].meta;
-	out_meta->flags = 0;
-
-	tfd = iwl_pcie_gen2_build_tfd(trans, txq, dev_cmd, skb, out_meta);
-	if (!tfd) {
-		spin_unlock(&txq->lock);
-		return -1;
-	}
-
-	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) {
-		struct iwl_tx_cmd_gen3 *tx_cmd_gen3 =
-			(void *)dev_cmd->payload;
-
-		cmd_len = le16_to_cpu(tx_cmd_gen3->len);
-	} else {
-		struct iwl_tx_cmd_gen2 *tx_cmd_gen2 =
-			(void *)dev_cmd->payload;
-
-		cmd_len = le16_to_cpu(tx_cmd_gen2->len);
-	}
-
-	/* Set up entry for this TFD in Tx byte-count array */
-	iwl_pcie_gen2_update_byte_tbl(trans, txq, cmd_len,
-				      iwl_pcie_gen2_get_num_tbs(trans, tfd));
-
-	/* start timer if queue currently empty */
-	if (txq->read_ptr == txq->write_ptr && txq->wd_timeout)
-		mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout);
-
-	/* Tell device the write index *just past* this latest filled TFD */
-	txq->write_ptr = iwl_queue_inc_wrap(trans, txq->write_ptr);
-	iwl_pcie_gen2_txq_inc_wr_ptr(trans, txq);
-	/*
-	 * At this point the frame is "transmitted" successfully
-	 * and we will get a TX status notification eventually.
-	 */
-	spin_unlock(&txq->lock);
-	return 0;
-}
+#include "queue/tx.h"
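
The block removed above is the whole gen2 data-TX path; it is re-added
essentially verbatim as the iwl_txq_gen2_* family in queue/tx.c further
down.  One detail in it worth a worked example is the sequence encoding
written into dev_cmd->hdr.sequence, which is what lets a TX response be
routed back to the right queue and TFD.  A minimal illustration,
assuming the macro layout from pcie/internal.h (queue id in bits 8..12,
ring index in bits 0..7):

	u16 seq = QUEUE_TO_SEQ(5) | INDEX_TO_SEQ(42);

	/* ...and on the completion side: */
	int txq_id = SEQ_TO_QUEUE(seq);	/* -> 5  */
	int idx = SEQ_TO_INDEX(seq);	/* -> 42 */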
 
 /*************** HOST COMMAND QUEUE FUNCTIONS   *****/
 
@@ -897,11 +158,11 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
 
 	spin_lock_bh(&txq->lock);
 
-	idx = iwl_pcie_get_cmd_index(txq, txq->write_ptr);
-	tfd = iwl_pcie_get_tfd(trans, txq, txq->write_ptr);
+	idx = iwl_txq_get_cmd_index(txq, txq->write_ptr);
+	tfd = iwl_txq_get_tfd(trans, txq, txq->write_ptr);
 	memset(tfd, 0, sizeof(*tfd));
 
-	if (iwl_queue_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) {
+	if (iwl_txq_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) {
 		spin_unlock_bh(&txq->lock);
 
 		IWL_ERR(trans, "No space in command queue\n");
@@ -979,8 +240,8 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
 	/* start the TFD with the minimum copy bytes */
 	tb0_size = min_t(int, copy_size, IWL_FIRST_TB_SIZE);
 	memcpy(&txq->first_tb_bufs[idx], out_cmd, tb0_size);
-	iwl_pcie_gen2_set_tb(trans, tfd, iwl_pcie_get_first_tb_dma(txq, idx),
-			     tb0_size);
+	iwl_txq_gen2_set_tb(trans, tfd, iwl_txq_get_first_tb_dma(txq, idx),
+			    tb0_size);
 
 	/* map first command fragment, if any remains */
 	if (copy_size > tb0_size) {
@@ -990,11 +251,11 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
 					   DMA_TO_DEVICE);
 		if (dma_mapping_error(trans->dev, phys_addr)) {
 			idx = -ENOMEM;
-			iwl_pcie_gen2_tfd_unmap(trans, out_meta, tfd);
+			iwl_txq_gen2_tfd_unmap(trans, out_meta, tfd);
 			goto out;
 		}
-		iwl_pcie_gen2_set_tb(trans, tfd, phys_addr,
-				     copy_size - tb0_size);
+		iwl_txq_gen2_set_tb(trans, tfd, phys_addr,
+				    copy_size - tb0_size);
 	}
 
 	/* map the remaining (adjusted) nocopy/dup fragments */
@@ -1012,10 +273,10 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
 					   cmdlen[i], DMA_TO_DEVICE);
 		if (dma_mapping_error(trans->dev, phys_addr)) {
 			idx = -ENOMEM;
-			iwl_pcie_gen2_tfd_unmap(trans, out_meta, tfd);
+			iwl_txq_gen2_tfd_unmap(trans, out_meta, tfd);
 			goto out;
 		}
-		iwl_pcie_gen2_set_tb(trans, tfd, phys_addr, cmdlen[i]);
+		iwl_txq_gen2_set_tb(trans, tfd, phys_addr, cmdlen[i]);
 	}
 
 	BUILD_BUG_ON(IWL_TFH_NUM_TBS > sizeof(out_meta->tbs) * BITS_PER_BYTE);
@@ -1032,8 +293,8 @@ static int iwl_pcie_gen2_enqueue_hcmd(struct iwl_trans *trans,
 
 	spin_lock_irqsave(&trans_pcie->reg_lock, flags);
 	/* Increment and update queue's write index */
-	txq->write_ptr = iwl_queue_inc_wrap(trans, txq->write_ptr);
-	iwl_pcie_gen2_txq_inc_wr_ptr(trans, txq);
+	txq->write_ptr = iwl_txq_inc_wrap(trans, txq->write_ptr);
+	iwl_txq_inc_wr_ptr(trans, txq);
 	spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
 
 out:
@@ -1164,316 +425,3 @@ int iwl_trans_pcie_gen2_send_hcmd(struct iwl_trans *trans,
 	return iwl_pcie_gen2_send_hcmd_sync(trans, cmd);
 }
 
-/*
- * iwl_pcie_gen2_txq_unmap -  Unmap any remaining DMA mappings and free skb's
- */
-void iwl_pcie_gen2_txq_unmap(struct iwl_trans *trans, int txq_id)
-{
-	struct iwl_txq *txq = trans->txqs.txq[txq_id];
-
-	spin_lock_bh(&txq->lock);
-	while (txq->write_ptr != txq->read_ptr) {
-		IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n",
-				   txq_id, txq->read_ptr);
-
-		if (txq_id != trans->txqs.cmd.q_id) {
-			int idx = iwl_pcie_get_cmd_index(txq, txq->read_ptr);
-			struct sk_buff *skb = txq->entries[idx].skb;
-
-			if (WARN_ON_ONCE(!skb))
-				continue;
-
-			iwl_pcie_free_tso_page(trans, skb);
-		}
-		iwl_pcie_gen2_free_tfd(trans, txq);
-		txq->read_ptr = iwl_queue_inc_wrap(trans, txq->read_ptr);
-	}
-
-	while (!skb_queue_empty(&txq->overflow_q)) {
-		struct sk_buff *skb = __skb_dequeue(&txq->overflow_q);
-
-		iwl_op_mode_free_skb(trans->op_mode, skb);
-	}
-
-	spin_unlock_bh(&txq->lock);
-
-	/* just in case - this queue may have been stopped */
-	iwl_wake_queue(trans, txq);
-}
-
-void iwl_pcie_gen2_txq_free_memory(struct iwl_trans *trans,
-				   struct iwl_txq *txq)
-{
-	struct device *dev = trans->dev;
-
-	/* De-alloc circular buffer of TFDs */
-	if (txq->tfds) {
-		dma_free_coherent(dev,
-				  trans->txqs.tfd.size * txq->n_window,
-				  txq->tfds, txq->dma_addr);
-		dma_free_coherent(dev,
-				  sizeof(*txq->first_tb_bufs) * txq->n_window,
-				  txq->first_tb_bufs, txq->first_tb_dma);
-	}
-
-	kfree(txq->entries);
-	if (txq->bc_tbl.addr)
-		dma_pool_free(trans->txqs.bc_pool, txq->bc_tbl.addr,
-			      txq->bc_tbl.dma);
-	kfree(txq);
-}
-
-/*
- * iwl_pcie_txq_free - Deallocate DMA queue.
- * @txq: Transmit queue to deallocate.
- *
- * Empty queue by removing and destroying all BD's.
- * Free all buffers.
- * 0-fill, but do not free "txq" descriptor structure.
- */
-static void iwl_pcie_gen2_txq_free(struct iwl_trans *trans, int txq_id)
-{
-	struct iwl_txq *txq;
-	int i;
-
-	if (WARN_ONCE(txq_id >= IWL_MAX_TVQM_QUEUES,
-		      "queue %d out of range", txq_id))
-		return;
-
-	txq = trans->txqs.txq[txq_id];
-
-	if (WARN_ON(!txq))
-		return;
-
-	iwl_pcie_gen2_txq_unmap(trans, txq_id);
-
-	/* De-alloc array of command/tx buffers */
-	if (txq_id == trans->txqs.cmd.q_id)
-		for (i = 0; i < txq->n_window; i++) {
-			kfree_sensitive(txq->entries[i].cmd);
-			kfree_sensitive(txq->entries[i].free_buf);
-		}
-	del_timer_sync(&txq->stuck_timer);
-
-	iwl_pcie_gen2_txq_free_memory(trans, txq);
-
-	trans->txqs.txq[txq_id] = NULL;
-
-	clear_bit(txq_id, trans->txqs.queue_used);
-}
-
-int iwl_trans_pcie_dyn_txq_alloc_dma(struct iwl_trans *trans,
-				     struct iwl_txq **intxq, int size,
-				     unsigned int timeout)
-{
-	size_t bc_tbl_size, bc_tbl_entries;
-	struct iwl_txq *txq;
-	int ret;
-
-	WARN_ON(!trans->txqs.bc_tbl_size);
-
-	bc_tbl_size = trans->txqs.bc_tbl_size;
-	bc_tbl_entries = bc_tbl_size / sizeof(u16);
-
-	if (WARN_ON(size > bc_tbl_entries))
-		return -EINVAL;
-
-	txq = kzalloc(sizeof(*txq), GFP_KERNEL);
-	if (!txq)
-		return -ENOMEM;
-
-	txq->bc_tbl.addr = dma_pool_alloc(trans->txqs.bc_pool, GFP_KERNEL,
-					  &txq->bc_tbl.dma);
-	if (!txq->bc_tbl.addr) {
-		IWL_ERR(trans, "Scheduler BC Table allocation failed\n");
-		kfree(txq);
-		return -ENOMEM;
-	}
-
-	ret = iwl_pcie_txq_alloc(trans, txq, size, false);
-	if (ret) {
-		IWL_ERR(trans, "Tx queue alloc failed\n");
-		goto error;
-	}
-	ret = iwl_pcie_txq_init(trans, txq, size, false);
-	if (ret) {
-		IWL_ERR(trans, "Tx queue init failed\n");
-		goto error;
-	}
-
-	txq->wd_timeout = msecs_to_jiffies(timeout);
-
-	*intxq = txq;
-	return 0;
-
-error:
-	iwl_pcie_gen2_txq_free_memory(trans, txq);
-	return ret;
-}
-
-int iwl_trans_pcie_txq_alloc_response(struct iwl_trans *trans,
-				      struct iwl_txq *txq,
-				      struct iwl_host_cmd *hcmd)
-{
-	struct iwl_tx_queue_cfg_rsp *rsp;
-	int ret, qid;
-	u32 wr_ptr;
-
-	if (WARN_ON(iwl_rx_packet_payload_len(hcmd->resp_pkt) !=
-		    sizeof(*rsp))) {
-		ret = -EINVAL;
-		goto error_free_resp;
-	}
-
-	rsp = (void *)hcmd->resp_pkt->data;
-	qid = le16_to_cpu(rsp->queue_number);
-	wr_ptr = le16_to_cpu(rsp->write_pointer);
-
-	if (qid >= ARRAY_SIZE(trans->txqs.txq)) {
-		WARN_ONCE(1, "queue index %d unsupported", qid);
-		ret = -EIO;
-		goto error_free_resp;
-	}
-
-	if (test_and_set_bit(qid, trans->txqs.queue_used)) {
-		WARN_ONCE(1, "queue %d already used", qid);
-		ret = -EIO;
-		goto error_free_resp;
-	}
-
-	txq->id = qid;
-	trans->txqs.txq[qid] = txq;
-	wr_ptr &= (trans->trans_cfg->base_params->max_tfd_queue_size - 1);
-
-	/* Place first TFD at index corresponding to start sequence number */
-	txq->read_ptr = wr_ptr;
-	txq->write_ptr = wr_ptr;
-
-	IWL_DEBUG_TX_QUEUES(trans, "Activate queue %d\n", qid);
-
-	iwl_free_resp(hcmd);
-	return qid;
-
-error_free_resp:
-	iwl_free_resp(hcmd);
-	iwl_pcie_gen2_txq_free_memory(trans, txq);
-	return ret;
-}
-
-int iwl_trans_pcie_dyn_txq_alloc(struct iwl_trans *trans,
-				 __le16 flags, u8 sta_id, u8 tid,
-				 int cmd_id, int size,
-				 unsigned int timeout)
-{
-	struct iwl_txq *txq = NULL;
-	struct iwl_tx_queue_cfg_cmd cmd = {
-		.flags = flags,
-		.sta_id = sta_id,
-		.tid = tid,
-	};
-	struct iwl_host_cmd hcmd = {
-		.id = cmd_id,
-		.len = { sizeof(cmd) },
-		.data = { &cmd, },
-		.flags = CMD_WANT_SKB,
-	};
-	int ret;
-
-	ret = iwl_trans_pcie_dyn_txq_alloc_dma(trans, &txq, size, timeout);
-	if (ret)
-		return ret;
-
-	cmd.tfdq_addr = cpu_to_le64(txq->dma_addr);
-	cmd.byte_cnt_addr = cpu_to_le64(txq->bc_tbl.dma);
-	cmd.cb_size = cpu_to_le32(TFD_QUEUE_CB_SIZE(size));
-
-	ret = iwl_trans_send_cmd(trans, &hcmd);
-	if (ret)
-		goto error;
-
-	return iwl_trans_pcie_txq_alloc_response(trans, txq, &hcmd);
-
-error:
-	iwl_pcie_gen2_txq_free_memory(trans, txq);
-	return ret;
-}
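
For orientation, a hypothetical caller of the allocation path removed
above (real users go through the iwl_trans_txq_alloc() wrapper from the
op modes; the flag, command id, queue size and timeout here are
illustrative):

	int qid = iwl_trans_pcie_dyn_txq_alloc(trans,
					       cpu_to_le16(TX_QUEUE_CFG_ENABLE_QUEUE),
					       sta_id, tid, SCD_QUEUE_CFG,
					       256, 5000 /* wd_timeout, ms */);
	if (qid < 0)
		return qid;	/* DMA alloc or firmware config failed */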
-
-void iwl_trans_pcie_dyn_txq_free(struct iwl_trans *trans, int queue)
-{
-	if (WARN(queue >= IWL_MAX_TVQM_QUEUES,
-		 "queue %d out of range", queue))
-		return;
-
-	/*
-	 * Upon HW Rfkill - we stop the device, and then stop the queues
-	 * in the op_mode. Just for the sake of the simplicity of the op_mode,
-	 * allow the op_mode to call txq_disable after it already called
-	 * stop_device.
-	 */
-	if (!test_and_clear_bit(queue, trans->txqs.queue_used)) {
-		WARN_ONCE(test_bit(STATUS_DEVICE_ENABLED, &trans->status),
-			  "queue %d not used", queue);
-		return;
-	}
-
-	iwl_pcie_gen2_txq_unmap(trans, queue);
-
-	iwl_pcie_gen2_txq_free_memory(trans, trans->txqs.txq[queue]);
-	trans->txqs.txq[queue] = NULL;
-
-	IWL_DEBUG_TX_QUEUES(trans, "Deactivate queue %d\n", queue);
-}
-
-void iwl_pcie_gen2_tx_free(struct iwl_trans *trans)
-{
-	int i;
-
-	memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used));
-
-	/* Free all TX queues */
-	for (i = 0; i < ARRAY_SIZE(trans->txqs.txq); i++) {
-		if (!trans->txqs.txq[i])
-			continue;
-
-		iwl_pcie_gen2_txq_free(trans, i);
-	}
-}
-
-int iwl_pcie_gen2_tx_init(struct iwl_trans *trans, int txq_id, int queue_size)
-{
-	struct iwl_txq *queue;
-	int ret;
-
-	/* alloc and init the tx queue */
-	if (!trans->txqs.txq[txq_id]) {
-		queue = kzalloc(sizeof(*queue), GFP_KERNEL);
-		if (!queue) {
-			IWL_ERR(trans, "Not enough memory for tx queue\n");
-			return -ENOMEM;
-		}
-		trans->txqs.txq[txq_id] = queue;
-		ret = iwl_pcie_txq_alloc(trans, queue, queue_size, true);
-		if (ret) {
-			IWL_ERR(trans, "Tx %d queue init failed\n", txq_id);
-			goto error;
-		}
-	} else {
-		queue = trans->txqs.txq[txq_id];
-	}
-
-	ret = iwl_pcie_txq_init(trans, queue, queue_size,
-				(txq_id == trans->txqs.cmd.q_id));
-	if (ret) {
-		IWL_ERR(trans, "Tx %d queue alloc failed\n", txq_id);
-		goto error;
-	}
-	trans->txqs.txq[txq_id]->id = txq_id;
-	set_bit(txq_id, trans->txqs.queue_used);
-
-	return 0;
-
-error:
-	iwl_pcie_gen2_tx_free(trans);
-	return ret;
-}
-
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
index 027b4e787ee6..9eee4a0e7668 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
@@ -102,60 +102,6 @@
  *
  ***************************************************/
 
-int iwl_queue_space(struct iwl_trans *trans, const struct iwl_txq *q)
-{
-	unsigned int max;
-	unsigned int used;
-
-	/*
-	 * To avoid ambiguity between empty and completely full queues, there
-	 * should always be less than max_tfd_queue_size elements in the queue.
-	 * If q->n_window is smaller than max_tfd_queue_size, there is no need
-	 * to reserve any queue entries for this purpose.
-	 */
-	if (q->n_window < trans->trans_cfg->base_params->max_tfd_queue_size)
-		max = q->n_window;
-	else
-		max = trans->trans_cfg->base_params->max_tfd_queue_size - 1;
-
-	/*
-	 * max_tfd_queue_size is a power of 2, so the following is equivalent to
-	 * modulo by max_tfd_queue_size and is well defined.
-	 */
-	used = (q->write_ptr - q->read_ptr) &
-		(trans->trans_cfg->base_params->max_tfd_queue_size - 1);
-
-	if (WARN_ON(used > max))
-		return 0;
-
-	return max - used;
-}
-
-/*
- * iwl_queue_init - Initialize queue's high/low-water and read/write indexes
- */
-static int iwl_queue_init(struct iwl_txq *q, int slots_num)
-{
-	q->n_window = slots_num;
-
-	/* slots_num must be power-of-two size, otherwise
-	 * iwl_pcie_get_cmd_index is broken. */
-	if (WARN_ON(!is_power_of_2(slots_num)))
-		return -EINVAL;
-
-	q->low_mark = q->n_window / 4;
-	if (q->low_mark < 4)
-		q->low_mark = 4;
-
-	q->high_mark = q->n_window / 8;
-	if (q->high_mark < 2)
-		q->high_mark = 2;
-
-	q->write_ptr = 0;
-	q->read_ptr = 0;
-
-	return 0;
-}
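
Both helpers removed here move to the new queue code as iwl_txq_space()
and iwl_txq_init().  The ring arithmetic relies on the queue size being
a power of two; a minimal, self-contained sketch (the real code
additionally caps "max" at n_window when the window is smaller than the
ring):

	static unsigned int toy_txq_space(unsigned int write_ptr,
					  unsigned int read_ptr,
					  unsigned int size) /* power of 2 */
	{
		unsigned int max = size - 1; /* keep one slot free: full != empty */
		unsigned int used = (write_ptr - read_ptr) & (size - 1);

		return max - used;
	}

	/*
	 * e.g. size = 256, read_ptr = 250, write_ptr = 4 (wrapped):
	 * used = (4 - 250) & 255 = 10, so space = 255 - 10 = 245
	 */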
 
 int iwl_pcie_alloc_dma_ptr(struct iwl_trans *trans,
 			   struct iwl_dma_ptr *ptr, size_t size)
@@ -180,24 +126,6 @@ void iwl_pcie_free_dma_ptr(struct iwl_trans *trans, struct iwl_dma_ptr *ptr)
 	memset(ptr, 0, sizeof(*ptr));
 }
 
-static void iwl_pcie_txq_stuck_timer(struct timer_list *t)
-{
-	struct iwl_txq *txq = from_timer(txq, t, stuck_timer);
-	struct iwl_trans *trans = txq->trans;
-
-	spin_lock(&txq->lock);
-	/* check if triggered erroneously */
-	if (txq->read_ptr == txq->write_ptr) {
-		spin_unlock(&txq->lock);
-		return;
-	}
-	spin_unlock(&txq->lock);
-
-	iwl_trans_pcie_log_scd_error(trans, txq);
-
-	iwl_force_nmi(trans);
-}
-
 /*
  * iwl_pcie_txq_update_byte_cnt_tbl - Set up entry in Tx byte-count array
  */
@@ -402,7 +330,7 @@ static void iwl_pcie_tfd_unmap(struct iwl_trans *trans,
 			       struct iwl_txq *txq, int index)
 {
 	int i, num_tbs;
-	void *tfd = iwl_pcie_get_tfd(trans, txq, index);
+	void *tfd = iwl_txq_get_tfd(trans, txq, index);
 
 	/* Sanity check on number of chunks */
 	num_tbs = iwl_pcie_tfd_get_num_tbs(trans, tfd);
@@ -459,7 +387,7 @@ void iwl_pcie_txq_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq)
 	 * idx is bounded by n_window
 	 */
 	int rd_ptr = txq->read_ptr;
-	int idx = iwl_pcie_get_cmd_index(txq, rd_ptr);
+	int idx = iwl_txq_get_cmd_index(txq, rd_ptr);
 
 	lockdep_assert_held(&txq->lock);
 
@@ -514,125 +442,6 @@ static int iwl_pcie_txq_build_tfd(struct iwl_trans *trans, struct iwl_txq *txq,
 	return num_tbs;
 }
 
-int iwl_pcie_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq,
-		       int slots_num, bool cmd_queue)
-{
-	size_t tfd_sz = trans->txqs.tfd.size *
-		trans->trans_cfg->base_params->max_tfd_queue_size;
-	size_t tb0_buf_sz;
-	int i;
-
-	if (WARN_ON(txq->entries || txq->tfds))
-		return -EINVAL;
-
-	if (trans->trans_cfg->use_tfh)
-		tfd_sz = trans->txqs.tfd.size * slots_num;
-
-	timer_setup(&txq->stuck_timer, iwl_pcie_txq_stuck_timer, 0);
-	txq->trans = trans;
-
-	txq->n_window = slots_num;
-
-	txq->entries = kcalloc(slots_num,
-			       sizeof(struct iwl_pcie_txq_entry),
-			       GFP_KERNEL);
-
-	if (!txq->entries)
-		goto error;
-
-	if (cmd_queue)
-		for (i = 0; i < slots_num; i++) {
-			txq->entries[i].cmd =
-				kmalloc(sizeof(struct iwl_device_cmd),
-					GFP_KERNEL);
-			if (!txq->entries[i].cmd)
-				goto error;
-		}
-
-	/* Circular buffer of transmit frame descriptors (TFDs),
-	 * shared with device */
-	txq->tfds = dma_alloc_coherent(trans->dev, tfd_sz,
-				       &txq->dma_addr, GFP_KERNEL);
-	if (!txq->tfds)
-		goto error;
-
-	BUILD_BUG_ON(IWL_FIRST_TB_SIZE_ALIGN != sizeof(*txq->first_tb_bufs));
-
-	tb0_buf_sz = sizeof(*txq->first_tb_bufs) * slots_num;
-
-	txq->first_tb_bufs = dma_alloc_coherent(trans->dev, tb0_buf_sz,
-					      &txq->first_tb_dma,
-					      GFP_KERNEL);
-	if (!txq->first_tb_bufs)
-		goto err_free_tfds;
-
-	return 0;
-err_free_tfds:
-	dma_free_coherent(trans->dev, tfd_sz, txq->tfds, txq->dma_addr);
-error:
-	if (txq->entries && cmd_queue)
-		for (i = 0; i < slots_num; i++)
-			kfree(txq->entries[i].cmd);
-	kfree(txq->entries);
-	txq->entries = NULL;
-
-	return -ENOMEM;
-
-}
-
-int iwl_pcie_txq_init(struct iwl_trans *trans, struct iwl_txq *txq,
-		      int slots_num, bool cmd_queue)
-{
-	int ret;
-	u32 tfd_queue_max_size =
-		trans->trans_cfg->base_params->max_tfd_queue_size;
-
-	txq->need_update = false;
-
-	/* max_tfd_queue_size must be power-of-two size, otherwise
-	 * iwl_queue_inc_wrap and iwl_queue_dec_wrap are broken. */
-	if (WARN_ONCE(tfd_queue_max_size & (tfd_queue_max_size - 1),
-		      "Max tfd queue size must be a power of two, but is %d",
-		      tfd_queue_max_size))
-		return -EINVAL;
-
-	/* Initialize queue's high/low-water marks, and head/tail indexes */
-	ret = iwl_queue_init(txq, slots_num);
-	if (ret)
-		return ret;
-
-	spin_lock_init(&txq->lock);
-
-	if (cmd_queue) {
-		static struct lock_class_key iwl_pcie_cmd_queue_lock_class;
-
-		lockdep_set_class(&txq->lock, &iwl_pcie_cmd_queue_lock_class);
-	}
-
-	__skb_queue_head_init(&txq->overflow_q);
-
-	return 0;
-}
-
-void iwl_pcie_free_tso_page(struct iwl_trans *trans,
-			    struct sk_buff *skb)
-{
-	struct page **page_ptr;
-	struct page *next;
-
-	page_ptr = (void *)((u8 *)skb->cb + trans->txqs.page_offs);
-	next = *page_ptr;
-	*page_ptr = NULL;
-
-	while (next) {
-		struct page *tmp = next;
-
-		next = *(void **)(page_address(next) + PAGE_SIZE -
-				  sizeof(void *));
-		__free_page(tmp);
-	}
-}
-
 static void iwl_pcie_clear_cmd_in_flight(struct iwl_trans *trans)
 {
 	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
@@ -668,10 +477,10 @@ static void iwl_pcie_txq_unmap(struct iwl_trans *trans, int txq_id)
 			if (WARN_ON_ONCE(!skb))
 				continue;
 
-			iwl_pcie_free_tso_page(trans, skb);
+			iwl_txq_free_tso_page(trans, skb);
 		}
 		iwl_pcie_txq_free_tfd(trans, txq);
-		txq->read_ptr = iwl_queue_inc_wrap(trans, txq->read_ptr);
+		txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);
 
 		if (txq->read_ptr == txq->write_ptr) {
 			unsigned long flags;
@@ -996,8 +805,8 @@ static int iwl_pcie_tx_alloc(struct iwl_trans *trans)
 			slots_num = max_t(u32, IWL_DEFAULT_QUEUE_SIZE,
 					  trans->cfg->min_256_ba_txq_size);
 		trans->txqs.txq[txq_id] = &trans_pcie->txq_memory[txq_id];
-		ret = iwl_pcie_txq_alloc(trans, trans->txqs.txq[txq_id],
-					 slots_num, cmd_queue);
+		ret = iwl_txq_alloc(trans, trans->txqs.txq[txq_id], slots_num,
+				    cmd_queue);
 		if (ret) {
 			IWL_ERR(trans, "Tx %d queue alloc failed\n", txq_id);
 			goto error;
@@ -1049,8 +858,8 @@ int iwl_pcie_tx_init(struct iwl_trans *trans)
 		else
 			slots_num = max_t(u32, IWL_DEFAULT_QUEUE_SIZE,
 					  trans->cfg->min_256_ba_txq_size);
-		ret = iwl_pcie_txq_init(trans, trans->txqs.txq[txq_id],
-					slots_num, cmd_queue);
+		ret = iwl_txq_init(trans, trans->txqs.txq[txq_id], slots_num,
+				   cmd_queue);
 		if (ret) {
 			IWL_ERR(trans, "Tx %d queue init failed\n", txq_id);
 			goto error;
@@ -1108,8 +917,8 @@ void iwl_trans_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
 			    struct sk_buff_head *skbs)
 {
 	struct iwl_txq *txq = trans->txqs.txq[txq_id];
-	int tfd_num = iwl_pcie_get_cmd_index(txq, ssn);
-	int read_ptr = iwl_pcie_get_cmd_index(txq, txq->read_ptr);
+	int tfd_num = iwl_txq_get_cmd_index(txq, ssn);
+	int read_ptr = iwl_txq_get_cmd_index(txq, txq->read_ptr);
 	int last_to_free;
 
 	/* This function is not meant to release cmd queue*/
@@ -1132,9 +941,9 @@ void iwl_trans_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
 
 	/*Since we free until index _not_ inclusive, the one before index is
 	 * the last we will free. This one must be used */
-	last_to_free = iwl_queue_dec_wrap(trans, tfd_num);
+	last_to_free = iwl_txq_dec_wrap(trans, tfd_num);
 
-	if (!iwl_queue_used(txq, last_to_free)) {
+	if (!iwl_txq_used(txq, last_to_free)) {
 		IWL_ERR(trans,
 			"%s: Read index for txq id (%d), last_to_free %d is out of range [0-%d] %d %d.\n",
 			__func__, txq_id, last_to_free,
@@ -1148,14 +957,14 @@ void iwl_trans_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
 
 	for (;
 	     read_ptr != tfd_num;
-	     txq->read_ptr = iwl_queue_inc_wrap(trans, txq->read_ptr),
-	     read_ptr = iwl_pcie_get_cmd_index(txq, txq->read_ptr)) {
+	     txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr),
+	     read_ptr = iwl_txq_get_cmd_index(txq, txq->read_ptr)) {
 		struct sk_buff *skb = txq->entries[read_ptr].skb;
 
 		if (WARN_ON_ONCE(!skb))
 			continue;
 
-		iwl_pcie_free_tso_page(trans, skb);
+		iwl_txq_free_tso_page(trans, skb);
 
 		__skb_queue_tail(skbs, skb);
 
@@ -1169,7 +978,7 @@ void iwl_trans_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
 
 	iwl_pcie_txq_progress(txq);
 
-	if (iwl_queue_space(trans, txq) > txq->low_mark &&
+	if (iwl_txq_space(trans, txq) > txq->low_mark &&
 	    test_bit(txq_id, trans->txqs.queue_stopped)) {
 		struct sk_buff_head overflow_skbs;
 
@@ -1203,13 +1012,13 @@ void iwl_trans_pcie_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
 
 			/*
 			 * Note that we can very well be overflowing again.
-			 * In that case, iwl_queue_space will be small again
+			 * In that case, iwl_txq_space will be small again
 			 * and we won't wake mac80211's queue.
 			 */
 			iwl_trans_tx(trans, skb, dev_cmd_ptr, txq_id);
 		}
 
-		if (iwl_queue_space(trans, txq) > txq->low_mark)
+		if (iwl_txq_space(trans, txq) > txq->low_mark)
 			iwl_wake_queue(trans, txq);
 
 		spin_lock_bh(&txq->lock);
@@ -1290,11 +1099,11 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
 
 	lockdep_assert_held(&txq->lock);
 
-	idx = iwl_pcie_get_cmd_index(txq, idx);
-	r = iwl_pcie_get_cmd_index(txq, txq->read_ptr);
+	idx = iwl_txq_get_cmd_index(txq, idx);
+	r = iwl_txq_get_cmd_index(txq, txq->read_ptr);
 
 	if (idx >= trans->trans_cfg->base_params->max_tfd_queue_size ||
-	    (!iwl_queue_used(txq, idx))) {
+	    (!iwl_txq_used(txq, idx))) {
 		WARN_ONCE(test_bit(txq_id, trans->txqs.queue_used),
 			  "%s: Read index for DMA queue txq id (%d), index %d is out of range [0-%d] %d %d.\n",
 			  __func__, txq_id, idx,
@@ -1303,9 +1112,9 @@ static void iwl_pcie_cmdq_reclaim(struct iwl_trans *trans, int txq_id, int idx)
 		return;
 	}
 
-	for (idx = iwl_queue_inc_wrap(trans, idx); r != idx;
-	     r = iwl_queue_inc_wrap(trans, r)) {
-		txq->read_ptr = iwl_queue_inc_wrap(trans, txq->read_ptr);
+	for (idx = iwl_txq_inc_wrap(trans, idx); r != idx;
+	     r = iwl_txq_inc_wrap(trans, r)) {
+		txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);
 
 		if (nfreed++ > 0) {
 			IWL_ERR(trans, "HCMD skipped: index (%d) %d %d\n",
@@ -1617,7 +1426,7 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
 
 	spin_lock_bh(&txq->lock);
 
-	if (iwl_queue_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) {
+	if (iwl_txq_space(trans, txq) < ((cmd->flags & CMD_ASYNC) ? 2 : 1)) {
 		spin_unlock_bh(&txq->lock);
 
 		IWL_ERR(trans, "No space in command queue\n");
@@ -1626,7 +1435,7 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
 		goto free_dup_buf;
 	}
 
-	idx = iwl_pcie_get_cmd_index(txq, txq->write_ptr);
+	idx = iwl_txq_get_cmd_index(txq, txq->write_ptr);
 	out_cmd = txq->entries[idx].cmd;
 	out_meta = &txq->entries[idx].meta;
 
@@ -1709,7 +1518,7 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
 	tb0_size = min_t(int, copy_size, IWL_FIRST_TB_SIZE);
 	memcpy(&txq->first_tb_bufs[idx], &out_cmd->hdr, tb0_size);
 	iwl_pcie_txq_build_tfd(trans, txq,
-			       iwl_pcie_get_first_tb_dma(txq, idx),
+			       iwl_txq_get_first_tb_dma(txq, idx),
 			       tb0_size, true);
 
 	/* map first command fragment, if any remains */
@@ -1773,7 +1582,7 @@ static int iwl_pcie_enqueue_hcmd(struct iwl_trans *trans,
 	}
 
 	/* Increment and update queue's write index */
-	txq->write_ptr = iwl_queue_inc_wrap(trans, txq->write_ptr);
+	txq->write_ptr = iwl_txq_inc_wrap(trans, txq->write_ptr);
 	iwl_pcie_txq_inc_wr_ptr(trans, txq);
 
 	spin_unlock_irqrestore(&trans_pcie->reg_lock, flags);
@@ -1818,7 +1627,7 @@ void iwl_pcie_hcmd_complete(struct iwl_trans *trans,
 
 	spin_lock_bh(&txq->lock);
 
-	cmd_index = iwl_pcie_get_cmd_index(txq, index);
+	cmd_index = iwl_txq_get_cmd_index(txq, index);
 	cmd = txq->entries[cmd_index].cmd;
 	meta = &txq->entries[cmd_index].meta;
 	group_id = cmd->hdr.group_id;
@@ -2045,51 +1854,6 @@ static int iwl_fill_data_tbs(struct iwl_trans *trans, struct sk_buff *skb,
 }
 
 #ifdef CONFIG_INET
-struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len,
-				      struct sk_buff *skb)
-{
-	struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
-	struct iwl_tso_hdr_page *p = this_cpu_ptr(trans_pcie->tso_hdr_page);
-	struct page **page_ptr;
-
-	page_ptr = (void *)((u8 *)skb->cb + trans->txqs.page_offs);
-
-	if (WARN_ON(*page_ptr))
-		return NULL;
-
-	if (!p->page)
-		goto alloc;
-
-	/*
-	 * Check if there's enough room on this page
-	 *
-	 * Note that we put a page chaining pointer *last* in the
-	 * page - we need it somewhere, and if it's there then we
-	 * avoid DMA mapping the last bits of the page which may
-	 * trigger the 32-bit boundary hardware bug.
-	 *
-	 * (see also get_workaround_page() in tx-gen2.c)
-	 */
-	if (p->pos + len < (u8 *)page_address(p->page) + PAGE_SIZE -
-			   sizeof(void *))
-		goto out;
-
-	/* We don't have enough room on this page, get a new one. */
-	__free_page(p->page);
-
-alloc:
-	p->page = alloc_page(GFP_ATOMIC);
-	if (!p->page)
-		return NULL;
-	p->pos = page_address(p->page);
-	/* set the chaining pointer to NULL */
-	*(void **)(page_address(p->page) + PAGE_SIZE - sizeof(void *)) = NULL;
-out:
-	*page_ptr = p->page;
-	get_page(p->page);
-	return p;
-}
-
 static void iwl_compute_pseudo_hdr_csum(void *iph, struct tcphdr *tcph,
 					bool ipv6, unsigned int len)
 {
@@ -2132,7 +1896,7 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
 		IEEE80211_CCMP_HDR_LEN : 0;
 
 	trace_iwlwifi_dev_tx(trans->dev, skb,
-			     iwl_pcie_get_tfd(trans, txq, txq->write_ptr),
+			     iwl_txq_get_tfd(trans, txq, txq->write_ptr),
 			     trans->txqs.tfd.size,
 			     &dev_cmd->hdr, IWL_FIRST_TB_SIZE + tb1_len, 0);
 
@@ -2355,11 +2119,11 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
 
 	spin_lock(&txq->lock);
 
-	if (iwl_queue_space(trans, txq) < txq->high_mark) {
-		iwl_stop_queue(trans, txq);
+	if (iwl_txq_space(trans, txq) < txq->high_mark) {
+		iwl_txq_stop(trans, txq);
 
 		/* don't put the packet on the ring, if there is no room */
-		if (unlikely(iwl_queue_space(trans, txq) < 3)) {
+		if (unlikely(iwl_txq_space(trans, txq) < 3)) {
 			struct iwl_device_tx_cmd **dev_cmd_ptr;
 
 			dev_cmd_ptr = (void *)((u8 *)skb->cb +
@@ -2392,7 +2156,7 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
 		cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) |
 			    INDEX_TO_SEQ(txq->write_ptr)));
 
-	tb0_phys = iwl_pcie_get_first_tb_dma(txq, txq->write_ptr);
+	tb0_phys = iwl_txq_get_first_tb_dma(txq, txq->write_ptr);
 	scratch_phys = tb0_phys + sizeof(struct iwl_cmd_header) +
 		       offsetof(struct iwl_tx_cmd, scratch);
 
@@ -2442,8 +2206,7 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
 	iwl_pcie_txq_build_tfd(trans, txq, tb1_phys, tb1_len, false);
 
 	trace_iwlwifi_dev_tx(trans->dev, skb,
-			     iwl_pcie_get_tfd(trans, txq,
-					      txq->write_ptr),
+			     iwl_txq_get_tfd(trans, txq, txq->write_ptr),
 			     trans->txqs.tfd.size,
 			     &dev_cmd->hdr, IWL_FIRST_TB_SIZE + tb1_len,
 			     hdr_len);
@@ -2476,7 +2239,7 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
 	/* building the A-MSDU might have changed this data, so memcpy it now */
 	memcpy(&txq->first_tb_bufs[txq->write_ptr], dev_cmd, IWL_FIRST_TB_SIZE);
 
-	tfd = iwl_pcie_get_tfd(trans, txq, txq->write_ptr);
+	tfd = iwl_txq_get_tfd(trans, txq, txq->write_ptr);
 	/* Set up entry for this TFD in Tx byte-count array */
 	iwl_pcie_txq_update_byte_cnt_tbl(trans, txq, le16_to_cpu(tx_cmd->len),
 					 iwl_pcie_tfd_get_num_tbs(trans, tfd));
@@ -2499,7 +2262,7 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
 	}
 
 	/* Tell device the write index *just past* this latest filled TFD */
-	txq->write_ptr = iwl_queue_inc_wrap(trans, txq->write_ptr);
+	txq->write_ptr = iwl_txq_inc_wrap(trans, txq->write_ptr);
 	if (!wait_write_ptr)
 		iwl_pcie_txq_inc_wr_ptr(trans, txq);
 
diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
new file mode 100644
index 000000000000..a6d03b75f5b7
--- /dev/null
+++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
@@ -0,0 +1,1375 @@
+/******************************************************************************
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2020 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2020 Intel Corporation
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *  * Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ *  * Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *  * Neither the name Intel Corporation nor the names of its
+ *    contributors may be used to endorse or promote products derived
+ *    from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *****************************************************************************/
+#include <net/tso.h>
+#include <linux/tcp.h>
+
+#include "iwl-debug.h"
+#include "iwl-io.h"
+#include "fw/api/tx.h"
+#include "queue/tx.h"
+#include "iwl-fh.h"
+#include "iwl-scd.h"
+#include <linux/dmapool.h>
+
+/*
+ * iwl_txq_gen2_tx_stop - Stop all Tx DMA channels
+ */
+void iwl_txq_gen2_tx_stop(struct iwl_trans *trans)
+{
+	int txq_id;
+
+	/*
+	 * This function can be called before the op_mode disabled the
+	 * queues. This happens when we have an rfkill interrupt.
+	 * Since we stop Tx altogether - mark the queues as stopped.
+	 */
+	memset(trans->txqs.queue_stopped, 0,
+	       sizeof(trans->txqs.queue_stopped));
+	memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used));
+
+	/* Unmap DMA from host system and free skb's */
+	for (txq_id = 0; txq_id < ARRAY_SIZE(trans->txqs.txq); txq_id++) {
+		if (!trans->txqs.txq[txq_id])
+			continue;
+		iwl_txq_gen2_unmap(trans, txq_id);
+	}
+}
+
+/*
+ * iwl_pcie_gen2_update_byte_tbl - Set up entry in Tx byte-count array
+ */
+static void iwl_pcie_gen2_update_byte_tbl(struct iwl_trans *trans,
+					  struct iwl_txq *txq, u16 byte_cnt,
+					  int num_tbs)
+{
+	int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr);
+	u8 filled_tfd_size, num_fetch_chunks;
+	u16 len = byte_cnt;
+	__le16 bc_ent;
+
+	if (WARN(idx >= txq->n_window, "%d >= %d\n", idx, txq->n_window))
+		return;
+
+	filled_tfd_size = offsetof(struct iwl_tfh_tfd, tbs) +
+			  num_tbs * sizeof(struct iwl_tfh_tb);
+	/*
+	 * filled_tfd_size contains the number of filled bytes in the TFD.
+	 * Dividing it by 64 will give the number of chunks to fetch
+	 * to SRAM- 0 for one chunk, 1 for 2 and so on.
+	 * If, for example, TFD contains only 3 TBs then 32 bytes
+	 * of the TFD are used, and only one chunk of 64 bytes should
+	 * be fetched
+	 */
+	num_fetch_chunks = DIV_ROUND_UP(filled_tfd_size, 64) - 1;
+
+	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) {
+		struct iwl_gen3_bc_tbl *scd_bc_tbl_gen3 = txq->bc_tbl.addr;
+
+		/* Starting from AX210, the HW expects bytes */
+		WARN_ON(trans->txqs.bc_table_dword);
+		WARN_ON(len > 0x3FFF);
+		bc_ent = cpu_to_le16(len | (num_fetch_chunks << 14));
+		scd_bc_tbl_gen3->tfd_offset[idx] = bc_ent;
+	} else {
+		struct iwlagn_scd_bc_tbl *scd_bc_tbl = txq->bc_tbl.addr;
+
+		/* Before AX210, the HW expects DW */
+		WARN_ON(!trans->txqs.bc_table_dword);
+		len = DIV_ROUND_UP(len, 4);
+		WARN_ON(len > 0xFFF);
+		bc_ent = cpu_to_le16(len | (num_fetch_chunks << 12));
+		scd_bc_tbl->tfd_offset[idx] = bc_ent;
+	}
+}
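
A worked example of the encoding above, reusing the 3-TB case from the
comment (filled_tfd_size = 32, so num_fetch_chunks = 0) and a 1500-byte
frame:

	/* AX210 and later: byte units, 14-bit length field */
	bc_ent = cpu_to_le16(1500 | (0 << 14));		/* 0x05dc */

	/* earlier families: dword units, 12-bit length field */
	len = DIV_ROUND_UP(1500, 4);			/* 375 */
	bc_ent = cpu_to_le16(375 | (0 << 12));		/* 0x0177 */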
+
+/*
+ * iwl_txq_inc_wr_ptr - Send new write index to hardware
+ */
+void iwl_txq_inc_wr_ptr(struct iwl_trans *trans, struct iwl_txq *txq)
+{
+	lockdep_assert_held(&txq->lock);
+
+	IWL_DEBUG_TX(trans, "Q:%d WR: 0x%x\n", txq->id, txq->write_ptr);
+
+	/*
+	 * if not in power-save mode, uCode will never sleep when we're
+	 * trying to tx (during RFKILL, we're not trying to tx).
+	 */
+	iwl_write32(trans, HBUS_TARG_WRPTR, txq->write_ptr | (txq->id << 16));
+}
+
+static u8 iwl_txq_gen2_get_num_tbs(struct iwl_trans *trans,
+				   struct iwl_tfh_tfd *tfd)
+{
+	return le16_to_cpu(tfd->num_tbs) & 0x1f;
+}
+
+void iwl_txq_gen2_tfd_unmap(struct iwl_trans *trans, struct iwl_cmd_meta *meta,
+			    struct iwl_tfh_tfd *tfd)
+{
+	int i, num_tbs;
+
+	/* Sanity check on number of chunks */
+	num_tbs = iwl_txq_gen2_get_num_tbs(trans, tfd);
+
+	if (num_tbs > trans->txqs.tfd.max_tbs) {
+		IWL_ERR(trans, "Too many chunks: %i\n", num_tbs);
+		return;
+	}
+
+	/* first TB is never freed - it's the bidirectional DMA data */
+	for (i = 1; i < num_tbs; i++) {
+		if (meta->tbs & BIT(i))
+			dma_unmap_page(trans->dev,
+				       le64_to_cpu(tfd->tbs[i].addr),
+				       le16_to_cpu(tfd->tbs[i].tb_len),
+				       DMA_TO_DEVICE);
+		else
+			dma_unmap_single(trans->dev,
+					 le64_to_cpu(tfd->tbs[i].addr),
+					 le16_to_cpu(tfd->tbs[i].tb_len),
+					 DMA_TO_DEVICE);
+	}
+
+	tfd->num_tbs = 0;
+}
+
+void iwl_txq_gen2_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq)
+{
+	/* txq->read_ptr is bounded by TFD_QUEUE_SIZE_MAX and
+	 * idx is bounded by n_window
+	 */
+	int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr);
+
+	lockdep_assert_held(&txq->lock);
+
+	iwl_txq_gen2_tfd_unmap(trans, &txq->entries[idx].meta,
+			       iwl_txq_get_tfd(trans, txq, idx));
+
+	/* free SKB */
+	if (txq->entries) {
+		struct sk_buff *skb;
+
+		skb = txq->entries[idx].skb;
+
+		/* Can be called from irqs-disabled context
+		 * If skb is not NULL, it means that the whole queue is being
+		 * freed and that the queue is not empty - free the skb
+		 */
+		if (skb) {
+			iwl_op_mode_free_skb(trans->op_mode, skb);
+			txq->entries[idx].skb = NULL;
+		}
+	}
+}
+
+int iwl_txq_gen2_set_tb(struct iwl_trans *trans, struct iwl_tfh_tfd *tfd,
+			dma_addr_t addr, u16 len)
+{
+	int idx = iwl_txq_gen2_get_num_tbs(trans, tfd);
+	struct iwl_tfh_tb *tb;
+
+	/*
+	 * Only WARN here so we know about the issue, but we mess up our
+	 * unmap path because not every place currently checks for errors
+	 * returned from this function - it can only return an error if
+	 * there's no more space, and so when we know there is enough we
+	 * don't always check ...
+	 */
+	WARN(iwl_txq_crosses_4g_boundary(addr, len),
+	     "possible DMA problem with iova:0x%llx, len:%d\n",
+	     (unsigned long long)addr, len);
+
+	if (WARN_ON(idx >= IWL_TFH_NUM_TBS))
+		return -EINVAL;
+	tb = &tfd->tbs[idx];
+
+	/* Each TFD can point to a maximum of max_tbs Tx buffers */
+	if (le16_to_cpu(tfd->num_tbs) >= trans->txqs.tfd.max_tbs) {
+		IWL_ERR(trans, "Error can not send more than %d chunks\n",
+			trans->txqs.tfd.max_tbs);
+		return -EINVAL;
+	}
+
+	put_unaligned_le64(addr, &tb->addr);
+	tb->tb_len = cpu_to_le16(len);
+
+	tfd->num_tbs = cpu_to_le16(idx + 1);
+
+	return idx;
+}
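
A caller-side sketch of the contract this function defines (the toy_*
name and is_page flag are illustrative, not driver code): the returned
TB index must be recorded in meta->tbs for page mappings, because that
bitmap is what lets iwl_txq_gen2_tfd_unmap() above choose
dma_unmap_page() vs dma_unmap_single() per TB:

	static int toy_attach_tb(struct iwl_trans *trans,
				 struct iwl_tfh_tfd *tfd,
				 struct iwl_cmd_meta *meta,
				 dma_addr_t phys, u16 len, bool is_page)
	{
		int idx = iwl_txq_gen2_set_tb(trans, tfd, phys, len);

		if (idx < 0)
			return idx;
		if (is_page)	/* mapped with dma_map_page()? */
			meta->tbs |= BIT(idx);
		return 0;
	}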
+
+static struct page *get_workaround_page(struct iwl_trans *trans,
+					struct sk_buff *skb)
+{
+	struct page **page_ptr;
+	struct page *ret;
+
+	page_ptr = (void *)((u8 *)skb->cb + trans->txqs.page_offs);
+
+	ret = alloc_page(GFP_ATOMIC);
+	if (!ret)
+		return NULL;
+
+	/* set the chaining pointer to the previous page if there */
+	*(void **)(page_address(ret) + PAGE_SIZE - sizeof(void *)) = *page_ptr;
+	*page_ptr = ret;
+
+	return ret;
+}
+
+/*
+ * Add a TB and if needed apply the FH HW bug workaround;
+ * meta != NULL indicates that it's a page mapping and we
+ * need to dma_unmap_page() and set the meta->tbs bit in
+ * this case.
+ */
+static int iwl_txq_gen2_set_tb_with_wa(struct iwl_trans *trans,
+				       struct sk_buff *skb,
+				       struct iwl_tfh_tfd *tfd,
+				       dma_addr_t phys, void *virt,
+				       u16 len, struct iwl_cmd_meta *meta)
+{
+	dma_addr_t oldphys = phys;
+	struct page *page;
+	int ret;
+
+	if (unlikely(dma_mapping_error(trans->dev, phys)))
+		return -ENOMEM;
+
+	if (likely(!iwl_txq_crosses_4g_boundary(phys, len))) {
+		ret = iwl_txq_gen2_set_tb(trans, tfd, phys, len);
+
+		if (ret < 0)
+			goto unmap;
+
+		if (meta)
+			meta->tbs |= BIT(ret);
+
+		ret = 0;
+		goto trace;
+	}
+
+	/*
+	 * Work around a hardware bug. If (as expressed in the
+	 * condition above) the TB ends on a 32-bit boundary,
+	 * then the next TB may be accessed with the wrong
+	 * address.
+	 * To work around it, copy the data elsewhere and make
+	 * a new mapping for it so the device will not fail.
+	 */
+
+	if (WARN_ON(len > PAGE_SIZE - sizeof(void *))) {
+		ret = -ENOBUFS;
+		goto unmap;
+	}
+
+	page = get_workaround_page(trans, skb);
+	if (!page) {
+		ret = -ENOMEM;
+		goto unmap;
+	}
+
+	memcpy(page_address(page), virt, len);
+
+	phys = dma_map_single(trans->dev, page_address(page), len,
+			      DMA_TO_DEVICE);
+	if (unlikely(dma_mapping_error(trans->dev, phys)))
+		return -ENOMEM;
+	ret = iwl_txq_gen2_set_tb(trans, tfd, phys, len);
+	if (ret < 0) {
+		/* unmap the new allocation as single */
+		oldphys = phys;
+		meta = NULL;
+		goto unmap;
+	}
+	IWL_WARN(trans,
+		 "TB bug workaround: copied %d bytes from 0x%llx to 0x%llx\n",
+		 len, (unsigned long long)oldphys, (unsigned long long)phys);
+
+	ret = 0;
+unmap:
+	if (meta)
+		dma_unmap_page(trans->dev, oldphys, len, DMA_TO_DEVICE);
+	else
+		dma_unmap_single(trans->dev, oldphys, len, DMA_TO_DEVICE);
+trace:
+	trace_iwlwifi_dev_tx_tb(trans->dev, skb, virt, phys, len);
+
+	return ret;
+}
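
The boundary test gating the workaround, iwl_txq_crosses_4g_boundary(),
lives in the new queue/tx.h; it is expected to reduce to a comparison
of the upper address halves, along these lines (a sketch, not the
header's verbatim code):

	static inline bool toy_crosses_4g_boundary(dma_addr_t phys, u16 len)
	{
		/* true when the first byte and the byte just past the
		 * end of the TB fall in different 4 GiB windows - this
		 * also catches a TB ending exactly on a 2^32 boundary,
		 * which is the buggy case described above
		 */
		return upper_32_bits(phys) != upper_32_bits(phys + len);
	}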
+
+#ifdef CONFIG_INET
+struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len,
+				      struct sk_buff *skb)
+{
+	struct iwl_tso_hdr_page *p = this_cpu_ptr(trans->txqs.tso_hdr_page);
+	struct page **page_ptr;
+
+	page_ptr = (void *)((u8 *)skb->cb + trans->txqs.page_offs);
+
+	if (WARN_ON(*page_ptr))
+		return NULL;
+
+	if (!p->page)
+		goto alloc;
+
+	/*
+	 * Check if there's enough room on this page
+	 *
+	 * Note that we put a page chaining pointer *last* in the
+	 * page - we need it somewhere, and if it's there then we
+	 * avoid DMA mapping the last bits of the page which may
+	 * trigger the 32-bit boundary hardware bug.
+	 *
+	 * (see also get_workaround_page() earlier in this file)
+	 */
+	if (p->pos + len < (u8 *)page_address(p->page) + PAGE_SIZE -
+			   sizeof(void *))
+		goto out;
+
+	/* We don't have enough room on this page, get a new one. */
+	__free_page(p->page);
+
+alloc:
+	p->page = alloc_page(GFP_ATOMIC);
+	if (!p->page)
+		return NULL;
+	p->pos = page_address(p->page);
+	/* set the chaining pointer to NULL */
+	*(void **)(page_address(p->page) + PAGE_SIZE - sizeof(void *)) = NULL;
+out:
+	*page_ptr = p->page;
+	get_page(p->page);
+	return p;
+}
+#endif
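
The page-chaining trick above deserves a picture.  Assuming 4 KiB
pages, each per-CPU TSO header page is laid out as:

	/*
	 *  page_address(p->page)            PAGE_SIZE - sizeof(void *)
	 *  |                                |
	 *  v                                v
	 *  [hdr][hdr][... p->pos grows ...][pointer to previous page]
	 */

Keeping the link in the last pointer-sized slot means that slot is
never DMA-mapped (so it cannot trip the 32-bit boundary bug), and it
gives iwl_txq_free_tso_page() a chain to walk from skb->cb when the
frame completes.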
+
+static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans,
+				    struct sk_buff *skb,
+				    struct iwl_tfh_tfd *tfd, int start_len,
+				    u8 hdr_len,
+				    struct iwl_device_tx_cmd *dev_cmd)
+{
+#ifdef CONFIG_INET
+	struct iwl_tx_cmd_gen2 *tx_cmd = (void *)dev_cmd->payload;
+	struct ieee80211_hdr *hdr = (void *)skb->data;
+	unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room;
+	unsigned int mss = skb_shinfo(skb)->gso_size;
+	u16 length, amsdu_pad;
+	u8 *start_hdr;
+	struct iwl_tso_hdr_page *hdr_page;
+	struct tso_t tso;
+
+	trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd),
+			     &dev_cmd->hdr, start_len, 0);
+
+	ip_hdrlen = skb_transport_header(skb) - skb_network_header(skb);
+	snap_ip_tcp_hdrlen = 8 + ip_hdrlen + tcp_hdrlen(skb);
+	total_len = skb->len - snap_ip_tcp_hdrlen - hdr_len;
+	amsdu_pad = 0;
+
+	/* total amount of header we may need for this A-MSDU */
+	hdr_room = DIV_ROUND_UP(total_len, mss) *
+		(3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr));
+
+	/* Our device supports 9 segments at most, so it will fit in 1 page */
+	hdr_page = get_page_hdr(trans, hdr_room, skb);
+	if (!hdr_page)
+		return -ENOMEM;
+
+	start_hdr = hdr_page->pos;
+
+	/*
+	 * Pull the ieee80211 header to be able to use TSO core,
+	 * we will restore it for the tx_status flow.
+	 */
+	skb_pull(skb, hdr_len);
+
+	/*
+	 * Remove the length of all the headers that we don't actually
+	 * have in the MPDU by themselves, but that we duplicate into
+	 * all the different MSDUs inside the A-MSDU.
+	 */
+	le16_add_cpu(&tx_cmd->len, -snap_ip_tcp_hdrlen);
+
+	tso_start(skb, &tso);
+
+	while (total_len) {
+		/* this is the data left for this subframe */
+		unsigned int data_left = min_t(unsigned int, mss, total_len);
+		struct sk_buff *csum_skb = NULL;
+		unsigned int tb_len;
+		dma_addr_t tb_phys;
+		u8 *subf_hdrs_start = hdr_page->pos;
+
+		total_len -= data_left;
+
+		memset(hdr_page->pos, 0, amsdu_pad);
+		hdr_page->pos += amsdu_pad;
+		amsdu_pad = (4 - (sizeof(struct ethhdr) + snap_ip_tcp_hdrlen +
+				  data_left)) & 0x3;
+		ether_addr_copy(hdr_page->pos, ieee80211_get_DA(hdr));
+		hdr_page->pos += ETH_ALEN;
+		ether_addr_copy(hdr_page->pos, ieee80211_get_SA(hdr));
+		hdr_page->pos += ETH_ALEN;
+
+		length = snap_ip_tcp_hdrlen + data_left;
+		*((__be16 *)hdr_page->pos) = cpu_to_be16(length);
+		hdr_page->pos += sizeof(length);
+
+		/*
+		 * This will copy the SNAP as well which will be considered
+		 * as MAC header.
+		 */
+		tso_build_hdr(skb, hdr_page->pos, &tso, data_left, !total_len);
+
+		hdr_page->pos += snap_ip_tcp_hdrlen;
+
+		tb_len = hdr_page->pos - start_hdr;
+		tb_phys = dma_map_single(trans->dev, start_hdr,
+					 tb_len, DMA_TO_DEVICE);
+		if (unlikely(dma_mapping_error(trans->dev, tb_phys))) {
+			dev_kfree_skb(csum_skb);
+			goto out_err;
+		}
+		/*
+		 * No need for _with_wa, this is from the TSO page and
+		 * we leave some space at the end of it so can't hit
+		 * the buggy scenario.
+		 */
+		iwl_txq_gen2_set_tb(trans, tfd, tb_phys, tb_len);
+		trace_iwlwifi_dev_tx_tb(trans->dev, skb, start_hdr,
+					tb_phys, tb_len);
+		/* add this subframe's headers' length to the tx_cmd */
+		le16_add_cpu(&tx_cmd->len, hdr_page->pos - subf_hdrs_start);
+
+		/* prepare the start_hdr for the next subframe */
+		start_hdr = hdr_page->pos;
+
+		/* put the payload */
+		while (data_left) {
+			int ret;
+
+			tb_len = min_t(unsigned int, tso.size, data_left);
+			tb_phys = dma_map_single(trans->dev, tso.data,
+						 tb_len, DMA_TO_DEVICE);
+			ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd,
+							  tb_phys, tso.data,
+							  tb_len, NULL);
+			if (ret) {
+				dev_kfree_skb(csum_skb);
+				goto out_err;
+			}
+
+			data_left -= tb_len;
+			tso_build_data(skb, &tso, tb_len);
+		}
+	}
+
+	/* re-add the WiFi header */
+	skb_push(skb, hdr_len);
+
+	return 0;
+
+out_err:
+#endif
+	return -EINVAL;
+}
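
The padding carried across loop iterations above keeps every subframe
after the first dword-aligned.  As a self-contained sketch (the toy_*
name is illustrative):

	static unsigned int toy_amsdu_pad(unsigned int subframe_len)
	{
		/* subframe_len = sizeof(struct ethhdr) +
		 * snap_ip_tcp_hdrlen + data_left, i.e. the unpadded
		 * subframe; the result is the zero-fill prepended to
		 * the *next* subframe.
		 * e.g. 14 + 48 + 1397 = 1459 bytes -> pad = 1
		 */
		return (4 - subframe_len) & 0x3;
	}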
+
+static struct
+iwl_tfh_tfd *iwl_txq_gen2_build_tx_amsdu(struct iwl_trans *trans,
+					 struct iwl_txq *txq,
+					 struct iwl_device_tx_cmd *dev_cmd,
+					 struct sk_buff *skb,
+					 struct iwl_cmd_meta *out_meta,
+					 int hdr_len,
+					 int tx_cmd_len)
+{
+	int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr);
+	struct iwl_tfh_tfd *tfd = iwl_txq_get_tfd(trans, txq, idx);
+	dma_addr_t tb_phys;
+	int len;
+	void *tb1_addr;
+
+	tb_phys = iwl_txq_get_first_tb_dma(txq, idx);
+
+	/*
+	 * No need for _with_wa, the first TB allocation is aligned up
+	 * to a 64-byte boundary and thus can't be at the end or cross
+	 * a page boundary (much less a 2^32 boundary).
+	 */
+	iwl_txq_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE);
+
+	/*
+	 * The second TB (tb1) points to the remainder of the TX command
+	 * and the 802.11 header - dword aligned size
+	 * (This calculation modifies the TX command, so do it before the
+	 * setup of the first TB)
+	 */
+	len = tx_cmd_len + sizeof(struct iwl_cmd_header) + hdr_len -
+	      IWL_FIRST_TB_SIZE;
+
+	/* do not align A-MSDU to dword as the subframe header aligns it */
+
+	/* map the data for TB1 */
+	tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_FIRST_TB_SIZE;
+	tb_phys = dma_map_single(trans->dev, tb1_addr, len, DMA_TO_DEVICE);
+	if (unlikely(dma_mapping_error(trans->dev, tb_phys)))
+		goto out_err;
+	/*
+	 * No need for _with_wa(), we ensure (via alignment) that the data
+	 * here can never cross or end at a page boundary.
+	 */
+	iwl_txq_gen2_set_tb(trans, tfd, tb_phys, len);
+
+	if (iwl_txq_gen2_build_amsdu(trans, skb, tfd, len + IWL_FIRST_TB_SIZE,
+				     hdr_len, dev_cmd))
+		goto out_err;
+
+	/* building the A-MSDU might have changed this data, memcpy it now */
+	memcpy(&txq->first_tb_bufs[idx], dev_cmd, IWL_FIRST_TB_SIZE);
+	return tfd;
+
+out_err:
+	iwl_txq_gen2_tfd_unmap(trans, out_meta, tfd);
+	return NULL;
+}
+
+static int iwl_txq_gen2_tx_add_frags(struct iwl_trans *trans,
+				     struct sk_buff *skb,
+				     struct iwl_tfh_tfd *tfd,
+				     struct iwl_cmd_meta *out_meta)
+{
+	int i;
+
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+		dma_addr_t tb_phys;
+		unsigned int fragsz = skb_frag_size(frag);
+		int ret;
+
+		if (!fragsz)
+			continue;
+
+		tb_phys = skb_frag_dma_map(trans->dev, frag, 0,
+					   fragsz, DMA_TO_DEVICE);
+		ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys,
+						  skb_frag_address(frag),
+						  fragsz, out_meta);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static struct
+iwl_tfh_tfd *iwl_txq_gen2_build_tx(struct iwl_trans *trans,
+				   struct iwl_txq *txq,
+				   struct iwl_device_tx_cmd *dev_cmd,
+				   struct sk_buff *skb,
+				   struct iwl_cmd_meta *out_meta,
+				   int hdr_len,
+				   int tx_cmd_len,
+				   bool pad)
+{
+	int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr);
+	struct iwl_tfh_tfd *tfd = iwl_txq_get_tfd(trans, txq, idx);
+	dma_addr_t tb_phys;
+	int len, tb1_len, tb2_len;
+	void *tb1_addr;
+	struct sk_buff *frag;
+
+	tb_phys = iwl_txq_get_first_tb_dma(txq, idx);
+
+	/* The first TB points to bi-directional DMA data */
+	memcpy(&txq->first_tb_bufs[idx], dev_cmd, IWL_FIRST_TB_SIZE);
+
+	/*
+	 * No need for _with_wa, the first TB allocation is aligned up
+	 * to a 64-byte boundary and thus can't be at the end or cross
+	 * a page boundary (much less a 2^32 boundary).
+	 */
+	iwl_txq_gen2_set_tb(trans, tfd, tb_phys, IWL_FIRST_TB_SIZE);
+
+	/*
+	 * The second TB (tb1) points to the remainder of the TX command
+	 * and the 802.11 header - dword aligned size
+	 * (This calculation modifies the TX command, so do it before the
+	 * setup of the first TB)
+	 */
+	len = tx_cmd_len + sizeof(struct iwl_cmd_header) + hdr_len -
+	      IWL_FIRST_TB_SIZE;
+
+	if (pad)
+		tb1_len = ALIGN(len, 4);
+	else
+		tb1_len = len;
+
+	/* map the data for TB1 */
+	tb1_addr = ((u8 *)&dev_cmd->hdr) + IWL_FIRST_TB_SIZE;
+	tb_phys = dma_map_single(trans->dev, tb1_addr, tb1_len, DMA_TO_DEVICE);
+	if (unlikely(dma_mapping_error(trans->dev, tb_phys)))
+		goto out_err;
+	/*
+	 * No need for _with_wa(), we ensure (via alignment) that the data
+	 * here can never cross or end at a page boundary.
+	 */
+	iwl_txq_gen2_set_tb(trans, tfd, tb_phys, tb1_len);
+	trace_iwlwifi_dev_tx(trans->dev, skb, tfd, sizeof(*tfd), &dev_cmd->hdr,
+			     IWL_FIRST_TB_SIZE + tb1_len, hdr_len);
+
+	/* set up TFD's third entry to point to remainder of skb's head */
+	tb2_len = skb_headlen(skb) - hdr_len;
+
+	if (tb2_len > 0) {
+		int ret;
+
+		tb_phys = dma_map_single(trans->dev, skb->data + hdr_len,
+					 tb2_len, DMA_TO_DEVICE);
+		ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys,
+						  skb->data + hdr_len, tb2_len,
+						  NULL);
+		if (ret)
+			goto out_err;
+	}
+
+	if (iwl_txq_gen2_tx_add_frags(trans, skb, tfd, out_meta))
+		goto out_err;
+
+	skb_walk_frags(skb, frag) {
+		int ret;
+
+		tb_phys = dma_map_single(trans->dev, frag->data,
+					 skb_headlen(frag), DMA_TO_DEVICE);
+		ret = iwl_txq_gen2_set_tb_with_wa(trans, skb, tfd, tb_phys,
+						  frag->data,
+						  skb_headlen(frag), NULL);
+		if (ret)
+			goto out_err;
+		if (iwl_txq_gen2_tx_add_frags(trans, frag, tfd, out_meta))
+			goto out_err;
+	}
+
+	return tfd;
+
+out_err:
+	iwl_txq_gen2_tfd_unmap(trans, out_meta, tfd);
+	return NULL;
+}
+
+static
+struct iwl_tfh_tfd *iwl_txq_gen2_build_tfd(struct iwl_trans *trans,
+					   struct iwl_txq *txq,
+					   struct iwl_device_tx_cmd *dev_cmd,
+					   struct sk_buff *skb,
+					   struct iwl_cmd_meta *out_meta)
+{
+	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+	int idx = iwl_txq_get_cmd_index(txq, txq->write_ptr);
+	struct iwl_tfh_tfd *tfd = iwl_txq_get_tfd(trans, txq, idx);
+	int len, hdr_len;
+	bool amsdu;
+
+	/* There must be data left over for TB1 or this code must be changed */
+	BUILD_BUG_ON(sizeof(struct iwl_tx_cmd_gen2) < IWL_FIRST_TB_SIZE);
+
+	memset(tfd, 0, sizeof(*tfd));
+
+	if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
+		len = sizeof(struct iwl_tx_cmd_gen2);
+	else
+		len = sizeof(struct iwl_tx_cmd_gen3);
+
+	amsdu = ieee80211_is_data_qos(hdr->frame_control) &&
+			(*ieee80211_get_qos_ctl(hdr) &
+			 IEEE80211_QOS_CTL_A_MSDU_PRESENT);
+
+	hdr_len = ieee80211_hdrlen(hdr->frame_control);
+
+	/*
+	 * Only build A-MSDUs here if doing so by GSO, otherwise it may be
+	 * an A-MSDU for other reasons, e.g. NAN or an A-MSDU having been
+	 * built in the higher layers already.
+	 */
+	if (amsdu && skb_shinfo(skb)->gso_size)
+		return iwl_txq_gen2_build_tx_amsdu(trans, txq, dev_cmd, skb,
+						    out_meta, hdr_len, len);
+	return iwl_txq_gen2_build_tx(trans, txq, dev_cmd, skb, out_meta,
+				      hdr_len, len, !amsdu);
+}
+
+int iwl_txq_space(struct iwl_trans *trans, const struct iwl_txq *q)
+{
+	unsigned int max;
+	unsigned int used;
+
+	/*
+	 * To avoid ambiguity between empty and completely full queues, there
+	 * should always be less than max_tfd_queue_size elements in the queue.
+	 * If q->n_window is smaller than max_tfd_queue_size, there is no need
+	 * to reserve any queue entries for this purpose.
+	 */
+	if (q->n_window < trans->trans_cfg->base_params->max_tfd_queue_size)
+		max = q->n_window;
+	else
+		max = trans->trans_cfg->base_params->max_tfd_queue_size - 1;
+
+	/*
+	 * max_tfd_queue_size is a power of 2, so the following is equivalent to
+	 * modulo by max_tfd_queue_size and is well defined.
+	 */
+	used = (q->write_ptr - q->read_ptr) &
+		(trans->trans_cfg->base_params->max_tfd_queue_size - 1);
+
+	if (WARN_ON(used > max))
+		return 0;
+
+	return max - used;
+}
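/*
 * Illustrative example (not part of the patch): the wrap-around
 * arithmetic above, with a hypothetical max_tfd_queue_size of 256.
 * Unsigned subtraction wraps, so masking with (size - 1) yields the
 * used-entry count even after write_ptr has wrapped past read_ptr:
 *
 *	unsigned int size = 256;	// power of two
 *	unsigned int write_ptr = 5, read_ptr = 250;
 *	unsigned int used = (write_ptr - read_ptr) & (size - 1);
 *	// used == 11 (entries 250..255 and 0..4)
 */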
+
+int iwl_txq_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb,
+		    struct iwl_device_tx_cmd *dev_cmd, int txq_id)
+{
+	struct iwl_cmd_meta *out_meta;
+	struct iwl_txq *txq = trans->txqs.txq[txq_id];
+	u16 cmd_len;
+	int idx;
+	void *tfd;
+
+	if (WARN_ONCE(txq_id >= IWL_MAX_TVQM_QUEUES,
+		      "queue %d out of range", txq_id))
+		return -EINVAL;
+
+	if (WARN_ONCE(!test_bit(txq_id, trans->txqs.queue_used),
+		      "TX on unused queue %d\n", txq_id))
+		return -EINVAL;
+
+	if (skb_is_nonlinear(skb) &&
+	    skb_shinfo(skb)->nr_frags > IWL_TRANS_MAX_FRAGS(trans) &&
+	    __skb_linearize(skb))
+		return -ENOMEM;
+
+	spin_lock(&txq->lock);
+
+	if (iwl_txq_space(trans, txq) < txq->high_mark) {
+		iwl_txq_stop(trans, txq);
+
+		/* don't put the packet on the ring, if there is no room */
+		if (unlikely(iwl_txq_space(trans, txq) < 3)) {
+			struct iwl_device_tx_cmd **dev_cmd_ptr;
+
+			dev_cmd_ptr = (void *)((u8 *)skb->cb +
+					       trans->txqs.dev_cmd_offs);
+
+			*dev_cmd_ptr = dev_cmd;
+			__skb_queue_tail(&txq->overflow_q, skb);
+			spin_unlock(&txq->lock);
+			return 0;
+		}
+	}
+
+	idx = iwl_txq_get_cmd_index(txq, txq->write_ptr);
+
+	/* Set up driver data for this TFD */
+	txq->entries[idx].skb = skb;
+	txq->entries[idx].cmd = dev_cmd;
+
+	dev_cmd->hdr.sequence =
+		cpu_to_le16((u16)(QUEUE_TO_SEQ(txq_id) |
+			    INDEX_TO_SEQ(idx)));
+
+	/* Set up first empty entry in queue's array of Tx/cmd buffers */
+	out_meta = &txq->entries[idx].meta;
+	out_meta->flags = 0;
+
+	tfd = iwl_txq_gen2_build_tfd(trans, txq, dev_cmd, skb, out_meta);
+	if (!tfd) {
+		spin_unlock(&txq->lock);
+		return -1;
+	}
+
+	if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) {
+		struct iwl_tx_cmd_gen3 *tx_cmd_gen3 =
+			(void *)dev_cmd->payload;
+
+		cmd_len = le16_to_cpu(tx_cmd_gen3->len);
+	} else {
+		struct iwl_tx_cmd_gen2 *tx_cmd_gen2 =
+			(void *)dev_cmd->payload;
+
+		cmd_len = le16_to_cpu(tx_cmd_gen2->len);
+	}
+
+	/* Set up entry for this TFD in Tx byte-count array */
+	iwl_pcie_gen2_update_byte_tbl(trans, txq, cmd_len,
+				      iwl_txq_gen2_get_num_tbs(trans, tfd));
+
+	/* start timer if queue currently empty */
+	if (txq->read_ptr == txq->write_ptr && txq->wd_timeout)
+		mod_timer(&txq->stuck_timer, jiffies + txq->wd_timeout);
+
+	/* Tell device the write index *just past* this latest filled TFD */
+	txq->write_ptr = iwl_txq_inc_wrap(trans, txq->write_ptr);
+	iwl_txq_inc_wr_ptr(trans, txq);
+	/*
+	 * At this point the frame is "transmitted" successfully
+	 * and we will get a TX status notification eventually.
+	 */
+	spin_unlock(&txq->lock);
+	return 0;
+}
+
+/*************** HOST COMMAND QUEUE FUNCTIONS   *****/
+
+/*
+ * iwl_txq_gen2_unmap -  Unmap any remaining DMA mappings and free skb's
+ */
+void iwl_txq_gen2_unmap(struct iwl_trans *trans, int txq_id)
+{
+	struct iwl_txq *txq = trans->txqs.txq[txq_id];
+
+	spin_lock_bh(&txq->lock);
+	while (txq->write_ptr != txq->read_ptr) {
+		IWL_DEBUG_TX_REPLY(trans, "Q %d Free %d\n",
+				   txq_id, txq->read_ptr);
+
+		if (txq_id != trans->txqs.cmd.q_id) {
+			int idx = iwl_txq_get_cmd_index(txq, txq->read_ptr);
+			struct sk_buff *skb = txq->entries[idx].skb;
+
+			if (WARN_ON_ONCE(!skb))
+				continue;
+
+			iwl_txq_free_tso_page(trans, skb);
+		}
+		iwl_txq_gen2_free_tfd(trans, txq);
+		txq->read_ptr = iwl_txq_inc_wrap(trans, txq->read_ptr);
+	}
+
+	while (!skb_queue_empty(&txq->overflow_q)) {
+		struct sk_buff *skb = __skb_dequeue(&txq->overflow_q);
+
+		iwl_op_mode_free_skb(trans->op_mode, skb);
+	}
+
+	spin_unlock_bh(&txq->lock);
+
+	/* just in case - this queue may have been stopped */
+	iwl_wake_queue(trans, txq);
+}
+
+static void iwl_txq_gen2_free_memory(struct iwl_trans *trans,
+				     struct iwl_txq *txq)
+{
+	struct device *dev = trans->dev;
+
+	/* De-alloc circular buffer of TFDs */
+	if (txq->tfds) {
+		dma_free_coherent(dev,
+				  trans->txqs.tfd.size * txq->n_window,
+				  txq->tfds, txq->dma_addr);
+		dma_free_coherent(dev,
+				  sizeof(*txq->first_tb_bufs) * txq->n_window,
+				  txq->first_tb_bufs, txq->first_tb_dma);
+	}
+
+	kfree(txq->entries);
+	if (txq->bc_tbl.addr)
+		dma_pool_free(trans->txqs.bc_pool,
+			      txq->bc_tbl.addr, txq->bc_tbl.dma);
+	kfree(txq);
+}
+
+/*
+ * iwl_txq_gen2_free - Deallocate DMA queue.
+ * @txq: Transmit queue to deallocate.
+ *
+ * Empty queue by removing and destroying all BD's.
+ * Free all buffers.
+ * 0-fill, but do not free "txq" descriptor structure.
+ */
+static void iwl_txq_gen2_free(struct iwl_trans *trans, int txq_id)
+{
+	struct iwl_txq *txq;
+	int i;
+
+	if (WARN_ONCE(txq_id >= IWL_MAX_TVQM_QUEUES,
+		      "queue %d out of range", txq_id))
+		return;
+
+	txq = trans->txqs.txq[txq_id];
+
+	if (WARN_ON(!txq))
+		return;
+
+	iwl_txq_gen2_unmap(trans, txq_id);
+
+	/* De-alloc array of command/tx buffers */
+	if (txq_id == trans->txqs.cmd.q_id)
+		for (i = 0; i < txq->n_window; i++) {
+			kfree_sensitive(txq->entries[i].cmd);
+			kfree_sensitive(txq->entries[i].free_buf);
+		}
+	del_timer_sync(&txq->stuck_timer);
+
+	iwl_txq_gen2_free_memory(trans, txq);
+
+	trans->txqs.txq[txq_id] = NULL;
+
+	clear_bit(txq_id, trans->txqs.queue_used);
+}
+
+/*
+ * iwl_queue_init - Initialize queue's high/low-water and read/write indexes
+ */
+static int iwl_queue_init(struct iwl_txq *q, int slots_num)
+{
+	q->n_window = slots_num;
+
+	/* slots_num must be power-of-two size, otherwise
+	 * iwl_txq_get_cmd_index is broken. */
+	if (WARN_ON(!is_power_of_2(slots_num)))
+		return -EINVAL;
+
+	q->low_mark = q->n_window / 4;
+	if (q->low_mark < 4)
+		q->low_mark = 4;
+
+	q->high_mark = q->n_window / 8;
+	if (q->high_mark < 2)
+		q->high_mark = 2;
+
+	q->write_ptr = 0;
+	q->read_ptr = 0;
+
+	return 0;
+}
+
+int iwl_txq_init(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
+		 bool cmd_queue)
+{
+	int ret;
+	u32 tfd_queue_max_size =
+		trans->trans_cfg->base_params->max_tfd_queue_size;
+
+	txq->need_update = false;
+
+	/* max_tfd_queue_size must be power-of-two size, otherwise
+	 * iwl_txq_inc_wrap and iwl_txq_dec_wrap are broken. */
+	if (WARN_ONCE(tfd_queue_max_size & (tfd_queue_max_size - 1),
+		      "Max tfd queue size must be a power of two, but is %d",
+		      tfd_queue_max_size))
+		return -EINVAL;
+
+	/* Initialize queue's high/low-water marks, and head/tail indexes */
+	ret = iwl_queue_init(txq, slots_num);
+	if (ret)
+		return ret;
+
+	spin_lock_init(&txq->lock);
+
+	if (cmd_queue) {
+		static struct lock_class_key iwl_txq_cmd_queue_lock_class;
+
+		lockdep_set_class(&txq->lock, &iwl_txq_cmd_queue_lock_class);
+	}
+
+	__skb_queue_head_init(&txq->overflow_q);
+
+	return 0;
+}
+
+void iwl_txq_free_tso_page(struct iwl_trans *trans, struct sk_buff *skb)
+{
+	struct page **page_ptr;
+	struct page *next;
+
+	page_ptr = (void *)((u8 *)skb->cb + trans->txqs.page_offs);
+	next = *page_ptr;
+	*page_ptr = NULL;
+
+	while (next) {
+		struct page *tmp = next;
+
+		next = *(void **)(page_address(next) + PAGE_SIZE -
+				  sizeof(void *));
+		__free_page(tmp);
+	}
+}
+
+void iwl_txq_log_scd_error(struct iwl_trans *trans, struct iwl_txq *txq)
+{
+	u32 txq_id = txq->id;
+	u32 status;
+	bool active;
+	u8 fifo;
+
+	if (trans->trans_cfg->use_tfh) {
+		IWL_ERR(trans, "Queue %d is stuck %d %d\n", txq_id,
+			txq->read_ptr, txq->write_ptr);
+		/* TODO: access new SCD registers and dump them */
+		return;
+	}
+
+	status = iwl_read_prph(trans, SCD_QUEUE_STATUS_BITS(txq_id));
+	fifo = (status >> SCD_QUEUE_STTS_REG_POS_TXF) & 0x7;
+	active = !!(status & BIT(SCD_QUEUE_STTS_REG_POS_ACTIVE));
+
+	IWL_ERR(trans,
+		"Queue %d is %sactive on fifo %d and stuck for %u ms. SW [%d, %d] HW [%d, %d] FH TRB=0x0%x\n",
+		txq_id, active ? "" : "in", fifo,
+		jiffies_to_msecs(txq->wd_timeout),
+		txq->read_ptr, txq->write_ptr,
+		iwl_read_prph(trans, SCD_QUEUE_RDPTR(txq_id)) &
+			(trans->trans_cfg->base_params->max_tfd_queue_size - 1),
+			iwl_read_prph(trans, SCD_QUEUE_WRPTR(txq_id)) &
+			(trans->trans_cfg->base_params->max_tfd_queue_size - 1),
+			iwl_read_direct32(trans, FH_TX_TRB_REG(fifo)));
+}
+
+static void iwl_txq_stuck_timer(struct timer_list *t)
+{
+	struct iwl_txq *txq = from_timer(txq, t, stuck_timer);
+	struct iwl_trans *trans = txq->trans;
+
+	spin_lock(&txq->lock);
+	/* check if triggered erroneously */
+	if (txq->read_ptr == txq->write_ptr) {
+		spin_unlock(&txq->lock);
+		return;
+	}
+	spin_unlock(&txq->lock);
+
+	iwl_txq_log_scd_error(trans, txq);
+
+	iwl_force_nmi(trans);
+}
+
+int iwl_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
+		  bool cmd_queue)
+{
+	size_t tfd_sz = trans->txqs.tfd.size *
+		trans->trans_cfg->base_params->max_tfd_queue_size;
+	size_t tb0_buf_sz;
+	int i;
+
+	if (WARN_ON(txq->entries || txq->tfds))
+		return -EINVAL;
+
+	if (trans->trans_cfg->use_tfh)
+		tfd_sz = trans->txqs.tfd.size * slots_num;
+
+	timer_setup(&txq->stuck_timer, iwl_txq_stuck_timer, 0);
+	txq->trans = trans;
+
+	txq->n_window = slots_num;
+
+	txq->entries = kcalloc(slots_num,
+			       sizeof(struct iwl_pcie_txq_entry),
+			       GFP_KERNEL);
+
+	if (!txq->entries)
+		goto error;
+
+	if (cmd_queue)
+		for (i = 0; i < slots_num; i++) {
+			txq->entries[i].cmd =
+				kmalloc(sizeof(struct iwl_device_cmd),
+					GFP_KERNEL);
+			if (!txq->entries[i].cmd)
+				goto error;
+		}
+
+	/* Circular buffer of transmit frame descriptors (TFDs),
+	 * shared with device */
+	txq->tfds = dma_alloc_coherent(trans->dev, tfd_sz,
+				       &txq->dma_addr, GFP_KERNEL);
+	if (!txq->tfds)
+		goto error;
+
+	BUILD_BUG_ON(sizeof(*txq->first_tb_bufs) != IWL_FIRST_TB_SIZE_ALIGN);
+
+	tb0_buf_sz = sizeof(*txq->first_tb_bufs) * slots_num;
+
+	txq->first_tb_bufs = dma_alloc_coherent(trans->dev, tb0_buf_sz,
+						&txq->first_tb_dma,
+						GFP_KERNEL);
+	if (!txq->first_tb_bufs)
+		goto err_free_tfds;
+
+	return 0;
+err_free_tfds:
+	dma_free_coherent(trans->dev, tfd_sz, txq->tfds, txq->dma_addr);
+error:
+	if (txq->entries && cmd_queue)
+		for (i = 0; i < slots_num; i++)
+			kfree(txq->entries[i].cmd);
+	kfree(txq->entries);
+	txq->entries = NULL;
+
+	return -ENOMEM;
+}
+
+static int iwl_txq_dyn_alloc_dma(struct iwl_trans *trans,
+				 struct iwl_txq **intxq, int size,
+				 unsigned int timeout)
+{
+	size_t bc_tbl_size, bc_tbl_entries;
+	struct iwl_txq *txq;
+	int ret;
+
+	WARN_ON(!trans->txqs.bc_tbl_size);
+
+	bc_tbl_size = trans->txqs.bc_tbl_size;
+	bc_tbl_entries = bc_tbl_size / sizeof(u16);
+
+	if (WARN_ON(size > bc_tbl_entries))
+		return -EINVAL;
+
+	txq = kzalloc(sizeof(*txq), GFP_KERNEL);
+	if (!txq)
+		return -ENOMEM;
+
+	txq->bc_tbl.addr = dma_pool_alloc(trans->txqs.bc_pool, GFP_KERNEL,
+					  &txq->bc_tbl.dma);
+	if (!txq->bc_tbl.addr) {
+		IWL_ERR(trans, "Scheduler BC Table allocation failed\n");
+		kfree(txq);
+		return -ENOMEM;
+	}
+
+	ret = iwl_txq_alloc(trans, txq, size, false);
+	if (ret) {
+		IWL_ERR(trans, "Tx queue alloc failed\n");
+		goto error;
+	}
+	ret = iwl_txq_init(trans, txq, size, false);
+	if (ret) {
+		IWL_ERR(trans, "Tx queue init failed\n");
+		goto error;
+	}
+
+	txq->wd_timeout = msecs_to_jiffies(timeout);
+
+	*intxq = txq;
+	return 0;
+
+error:
+	iwl_txq_gen2_free_memory(trans, txq);
+	return ret;
+}
+
+static int iwl_txq_alloc_response(struct iwl_trans *trans, struct iwl_txq *txq,
+				  struct iwl_host_cmd *hcmd)
+{
+	struct iwl_tx_queue_cfg_rsp *rsp;
+	int ret, qid;
+	u32 wr_ptr;
+
+	if (WARN_ON(iwl_rx_packet_payload_len(hcmd->resp_pkt) !=
+		    sizeof(*rsp))) {
+		ret = -EINVAL;
+		goto error_free_resp;
+	}
+
+	rsp = (void *)hcmd->resp_pkt->data;
+	qid = le16_to_cpu(rsp->queue_number);
+	wr_ptr = le16_to_cpu(rsp->write_pointer);
+
+	if (qid >= ARRAY_SIZE(trans->txqs.txq)) {
+		WARN_ONCE(1, "queue index %d unsupported", qid);
+		ret = -EIO;
+		goto error_free_resp;
+	}
+
+	if (test_and_set_bit(qid, trans->txqs.queue_used)) {
+		WARN_ONCE(1, "queue %d already used", qid);
+		ret = -EIO;
+		goto error_free_resp;
+	}
+
+	txq->id = qid;
+	trans->txqs.txq[qid] = txq;
+	wr_ptr &= (trans->trans_cfg->base_params->max_tfd_queue_size - 1);
+
+	/* Place first TFD at index corresponding to start sequence number */
+	txq->read_ptr = wr_ptr;
+	txq->write_ptr = wr_ptr;
+
+	IWL_DEBUG_TX_QUEUES(trans, "Activate queue %d\n", qid);
+
+	iwl_free_resp(hcmd);
+	return qid;
+
+error_free_resp:
+	iwl_free_resp(hcmd);
+	iwl_txq_gen2_free_memory(trans, txq);
+	return ret;
+}
+
+int iwl_txq_dyn_alloc(struct iwl_trans *trans, __le16 flags, u8 sta_id, u8 tid,
+		      int cmd_id, int size, unsigned int timeout)
+{
+	struct iwl_txq *txq = NULL;
+	struct iwl_tx_queue_cfg_cmd cmd = {
+		.flags = flags,
+		.sta_id = sta_id,
+		.tid = tid,
+	};
+	struct iwl_host_cmd hcmd = {
+		.id = cmd_id,
+		.len = { sizeof(cmd) },
+		.data = { &cmd, },
+		.flags = CMD_WANT_SKB,
+	};
+	int ret;
+
+	ret = iwl_txq_dyn_alloc_dma(trans, &txq, size, timeout);
+	if (ret)
+		return ret;
+
+	cmd.tfdq_addr = cpu_to_le64(txq->dma_addr);
+	cmd.byte_cnt_addr = cpu_to_le64(txq->bc_tbl.dma);
+	cmd.cb_size = cpu_to_le32(TFD_QUEUE_CB_SIZE(size));
+
+	ret = iwl_trans_send_cmd(trans, &hcmd);
+	if (ret)
+		goto error;
+
+	return iwl_txq_alloc_response(trans, txq, &hcmd);
+
+error:
+	iwl_txq_gen2_free_memory(trans, txq);
+	return ret;
+}
+
+void iwl_txq_dyn_free(struct iwl_trans *trans, int queue)
+{
+	if (WARN(queue >= IWL_MAX_TVQM_QUEUES,
+		 "queue %d out of range", queue))
+		return;
+
+	/*
+	 * Upon HW Rfkill - we stop the device, and then stop the queues
+	 * in the op_mode. Just for the sake of the simplicity of the op_mode,
+	 * allow the op_mode to call txq_disable after it already called
+	 * stop_device.
+	 */
+	if (!test_and_clear_bit(queue, trans->txqs.queue_used)) {
+		WARN_ONCE(test_bit(STATUS_DEVICE_ENABLED, &trans->status),
+			  "queue %d not used", queue);
+		return;
+	}
+
+	iwl_txq_gen2_unmap(trans, queue);
+
+	iwl_txq_gen2_free_memory(trans, trans->txqs.txq[queue]);
+
+	trans->txqs.txq[queue] = NULL;
+
+	IWL_DEBUG_TX_QUEUES(trans, "Deactivate queue %d\n", queue);
+}
+
+void iwl_txq_gen2_tx_free(struct iwl_trans *trans)
+{
+	int i;
+
+	memset(trans->txqs.queue_used, 0, sizeof(trans->txqs.queue_used));
+
+	/* Free all TX queues */
+	for (i = 0; i < ARRAY_SIZE(trans->txqs.txq); i++) {
+		if (!trans->txqs.txq[i])
+			continue;
+
+		iwl_txq_gen2_free(trans, i);
+	}
+}
+
+int iwl_txq_gen2_init(struct iwl_trans *trans, int txq_id, int queue_size)
+{
+	struct iwl_txq *queue;
+	int ret;
+
+	/* alloc and init the tx queue */
+	if (!trans->txqs.txq[txq_id]) {
+		queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+		if (!queue) {
+			IWL_ERR(trans, "Not enough memory for tx queue\n");
+			return -ENOMEM;
+		}
+		trans->txqs.txq[txq_id] = queue;
+		ret = iwl_txq_alloc(trans, queue, queue_size, true);
+		if (ret) {
+			IWL_ERR(trans, "Tx %d queue alloc failed\n", txq_id);
+			goto error;
+		}
+	} else {
+		queue = trans->txqs.txq[txq_id];
+	}
+
+	ret = iwl_txq_init(trans, queue, queue_size,
+			   (txq_id == trans->txqs.cmd.q_id));
+	if (ret) {
+		IWL_ERR(trans, "Tx %d queue init failed\n", txq_id);
+		goto error;
+	}
+	trans->txqs.txq[txq_id]->id = txq_id;
+	set_bit(txq_id, trans->txqs.queue_used);
+
+	return 0;
+
+error:
+	iwl_txq_gen2_tx_free(trans);
+	return ret;
+}
+
diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.h b/drivers/net/wireless/intel/iwlwifi/queue/tx.h
new file mode 100644
index 000000000000..4b08764d71bd
--- /dev/null
+++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.h
@@ -0,0 +1,188 @@
+/******************************************************************************
+ *
+ * This file is provided under a dual BSD/GPLv2 license.  When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * Copyright(c) 2020 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in the
+ * file called COPYING.
+ *
+ * Contact Information:
+ *  Intel Linux Wireless <linuxwifi@intel.com>
+ * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+ *
+ * BSD LICENSE
+ *
+ * Copyright(c) 2020 Intel Corporation
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *  * Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ *  * Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in
+ *    the documentation and/or other materials provided with the
+ *    distribution.
+ *  * Neither the name Intel Corporation nor the names of its
+ *    contributors may be used to endorse or promote products derived
+ *    from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ *****************************************************************************/
+#ifndef __iwl_trans_queue_tx_h__
+#define __iwl_trans_queue_tx_h__
+#include "iwl-fh.h"
+#include "fw/api/tx.h"
+
+struct iwl_tso_hdr_page {
+	struct page *page;
+	u8 *pos;
+};
+
+static inline dma_addr_t
+iwl_txq_get_first_tb_dma(struct iwl_txq *txq, int idx)
+{
+	return txq->first_tb_dma +
+	       sizeof(struct iwl_pcie_first_tb_buf) * idx;
+}
+
+static inline u16 iwl_txq_get_cmd_index(const struct iwl_txq *q, u32 index)
+{
+	return index & (q->n_window - 1);
+}
+
+void iwl_txq_gen2_unmap(struct iwl_trans *trans, int txq_id);
+
+static inline void iwl_wake_queue(struct iwl_trans *trans,
+				  struct iwl_txq *txq)
+{
+	if (test_and_clear_bit(txq->id, trans->txqs.queue_stopped)) {
+		IWL_DEBUG_TX_QUEUES(trans, "Wake hwq %d\n", txq->id);
+		iwl_op_mode_queue_not_full(trans->op_mode, txq->id);
+	}
+}
+
+static inline void *iwl_txq_get_tfd(struct iwl_trans *trans,
+				    struct iwl_txq *txq, int idx)
+{
+	if (trans->trans_cfg->use_tfh)
+		idx = iwl_txq_get_cmd_index(txq, idx);
+
+	return txq->tfds + trans->txqs.tfd.size * idx;
+}
+
+int iwl_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
+		  bool cmd_queue);
+/*
+ * We need this inline in case dma_addr_t is only 32-bits - since the
+ * hardware is always 64-bit, the issue can still occur in that case,
+ * so use u64 for 'phys' here to force the addition in 64-bit.
+ */
+static inline bool iwl_txq_crosses_4g_boundary(u64 phys, u16 len)
+{
+	return upper_32_bits(phys) != upper_32_bits(phys + len);
+}
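/*
 * Illustrative calls (not part of the patch) for the check above.
 * The u64 parameter forces 64-bit addition even when dma_addr_t is
 * 32 bits, and a buffer ending exactly on the 4 GiB line is flagged:
 *
 *	iwl_txq_crosses_4g_boundary(0xffffefffULL, 0x1000); // false
 *	iwl_txq_crosses_4g_boundary(0xfffff000ULL, 0x1000); // true
 */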
+
+int iwl_txq_space(struct iwl_trans *trans, const struct iwl_txq *q);
+
+static inline void iwl_txq_stop(struct iwl_trans *trans, struct iwl_txq *txq)
+{
+	if (!test_and_set_bit(txq->id, trans->txqs.queue_stopped)) {
+		iwl_op_mode_queue_full(trans->op_mode, txq->id);
+		IWL_DEBUG_TX_QUEUES(trans, "Stop hwq %d\n", txq->id);
+	} else {
+		IWL_DEBUG_TX_QUEUES(trans, "hwq %d already stopped\n",
+				    txq->id);
+	}
+}
+
+/**
+ * iwl_txq_inc_wrap - increment queue index, wrap back to beginning
+ * @index: current index
+ */
+static inline int iwl_txq_inc_wrap(struct iwl_trans *trans, int index)
+{
+	return ++index &
+		(trans->trans_cfg->base_params->max_tfd_queue_size - 1);
+}
+
+/**
+ * iwl_txq_dec_wrap - decrement queue index, wrap back to end
+ * @index: current index
+ */
+static inline int iwl_txq_dec_wrap(struct iwl_trans *trans, int index)
+{
+	return --index &
+		(trans->trans_cfg->base_params->max_tfd_queue_size - 1);
+}
+
+static inline bool iwl_txq_used(const struct iwl_txq *q, int i)
+{
+	int index = iwl_txq_get_cmd_index(q, i);
+	int r = iwl_txq_get_cmd_index(q, q->read_ptr);
+	int w = iwl_txq_get_cmd_index(q, q->write_ptr);
+
+	return w >= r ?
+		(index >= r && index < w) :
+		!(index < r && index >= w);
+}
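/*
 * Illustrative example (not part of the patch) of the wrapped-window
 * test above, assuming n_window == 8, read_ptr == 6, write_ptr == 2
 * (the window has wrapped), so slots 6, 7, 0 and 1 are in use:
 *
 *	iwl_txq_used(q, 7);	// true  (between read_ptr and the wrap)
 *	iwl_txq_used(q, 1);	// true  (after the wrap)
 *	iwl_txq_used(q, 3);	// false (free slot)
 */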
+
+void iwl_txq_free_tso_page(struct iwl_trans *trans, struct sk_buff *skb);
+
+void iwl_txq_log_scd_error(struct iwl_trans *trans, struct iwl_txq *txq);
+
+int iwl_txq_gen2_set_tb(struct iwl_trans *trans,
+			struct iwl_tfh_tfd *tfd, dma_addr_t addr,
+			u16 len);
+
+void iwl_txq_gen2_tfd_unmap(struct iwl_trans *trans,
+			    struct iwl_cmd_meta *meta,
+			    struct iwl_tfh_tfd *tfd);
+
+int iwl_txq_dyn_alloc(struct iwl_trans *trans,
+		      __le16 flags, u8 sta_id, u8 tid,
+		      int cmd_id, int size,
+		      unsigned int timeout);
+
+int iwl_txq_gen2_tx(struct iwl_trans *trans, struct sk_buff *skb,
+		    struct iwl_device_tx_cmd *dev_cmd, int txq_id);
+
+void iwl_txq_dyn_free(struct iwl_trans *trans, int queue);
+void iwl_txq_gen2_free_tfd(struct iwl_trans *trans, struct iwl_txq *txq);
+void iwl_txq_inc_wr_ptr(struct iwl_trans *trans, struct iwl_txq *txq);
+void iwl_txq_gen2_tx_stop(struct iwl_trans *trans);
+void iwl_txq_gen2_tx_free(struct iwl_trans *trans);
+int iwl_txq_init(struct iwl_trans *trans, struct iwl_txq *txq, int slots_num,
+		 bool cmd_queue);
+int iwl_txq_gen2_init(struct iwl_trans *trans, int txq_id, int queue_size);
+#ifdef CONFIG_INET
+struct iwl_tso_hdr_page *get_page_hdr(struct iwl_trans *trans, size_t len,
+				      struct sk_buff *skb);
+#endif
+#endif /* __iwl_trans_queue_tx_h__ */
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 06/12] iwlwifi: mvm: support more GTK rekeying algorithms
  2020-09-30 13:31 [PATCH 00/12] iwlwifi: updates intended for v5.10 2020-09-30 Luca Coelho
                   ` (4 preceding siblings ...)
  2020-09-30 13:31 ` [PATCH 05/12] iwlwifi: move all bus-independent TX functions to common code Luca Coelho
@ 2020-09-30 13:31 ` Luca Coelho
  2020-09-30 13:31 ` [PATCH 07/12] iwlwifi: mvm: d3: support GCMP ciphers Luca Coelho
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Luca Coelho @ 2020-09-30 13:31 UTC (permalink / raw)
  To: kvalo; +Cc: linux-wireless

From: Nathan Errera <nathan.errera@intel.com>

Add and use a new API version for GTK rekeying. This allows the
firmware to do GTK rekeying for more algorithms (GCMP-128,
GCMP-256, SAE).
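
For illustration, the cipher mapping that the patch spreads across the
key iterator boils down to the sketch below. The helper is hypothetical
(not driver code); the constant names are the ones used in the diff:

	/* illustrative only: map a mac80211 cipher to the FW cipher field */
	static __le32 gtk_cipher_to_fw(u32 cipher)
	{
		switch (cipher) {
		case WLAN_CIPHER_SUITE_TKIP:
			return cpu_to_le32(STA_KEY_FLG_TKIP);
		case WLAN_CIPHER_SUITE_CCMP:
			return cpu_to_le32(STA_KEY_FLG_CCM);
		case WLAN_CIPHER_SUITE_GCMP:
		case WLAN_CIPHER_SUITE_GCMP_256:
			return cpu_to_le32(STA_KEY_FLG_GCMP);
		default:
			return 0; /* cipher not handled here */
		}
	}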

Signed-off-by: Nathan Errera <nathan.errera@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 drivers/net/wireless/intel/iwlwifi/mvm/d3.c   | 29 +++++++++++++++----
 .../net/wireless/intel/iwlwifi/mvm/mac80211.c |  5 ++++
 drivers/net/wireless/intel/iwlwifi/mvm/mvm.h  |  6 +++-
 3 files changed, 33 insertions(+), 7 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
index b152f5a6ba0f..f027029553dd 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
@@ -81,8 +81,11 @@ void iwl_mvm_set_rekey_data(struct ieee80211_hw *hw,
 
 	mutex_lock(&mvm->mutex);
 
-	memcpy(mvmvif->rekey_data.kek, data->kek, NL80211_KEK_LEN);
-	memcpy(mvmvif->rekey_data.kck, data->kck, NL80211_KCK_LEN);
+	mvmvif->rekey_data.kek_len = data->kek_len;
+	mvmvif->rekey_data.kck_len = data->kck_len;
+	memcpy(mvmvif->rekey_data.kek, data->kek, data->kek_len);
+	memcpy(mvmvif->rekey_data.kck, data->kck, data->kck_len);
+	mvmvif->rekey_data.akm = data->akm & 0xFF;
 	mvmvif->rekey_data.replay_ctr =
 		cpu_to_le64(be64_to_cpup((__be64 *)data->replay_ctr));
 	mvmvif->rekey_data.valid = true;
@@ -157,6 +160,7 @@ static const u8 *iwl_mvm_find_max_pn(struct ieee80211_key_conf *key,
 struct wowlan_key_data {
 	struct iwl_wowlan_rsc_tsc_params_cmd *rsc_tsc;
 	struct iwl_wowlan_tkip_params_cmd *tkip;
+	struct iwl_wowlan_kek_kck_material_cmd_v3 *kek_kck_cmd;
 	bool error, use_rsc_tsc, use_tkip, configure_keys;
 	int wep_key_idx;
 };
@@ -233,7 +237,12 @@ static void iwl_mvm_wowlan_program_keys(struct ieee80211_hw *hw,
 	default:
 		data->error = true;
 		return;
+	case WLAN_CIPHER_SUITE_BIP_GMAC_256:
+	case WLAN_CIPHER_SUITE_BIP_GMAC_128:
+		data->kek_kck_cmd->igtk_cipher = cpu_to_le32(STA_KEY_FLG_GCMP);
+		return;
 	case WLAN_CIPHER_SUITE_AES_CMAC:
+		data->kek_kck_cmd->igtk_cipher = cpu_to_le32(STA_KEY_FLG_CCM);
 		/*
 		 * Ignore CMAC keys -- the WoWLAN firmware doesn't support them
 		 * but we also shouldn't abort suspend due to that. It does have
@@ -271,6 +280,8 @@ static void iwl_mvm_wowlan_program_keys(struct ieee80211_hw *hw,
 			  data->rsc_tsc->params.all_tsc_rsc.tkip.multicast_rsc;
 			rx_p1ks = data->tkip->rx_multi;
 			rx_mic_key = data->tkip->mic_keys.rx_mcast;
+			data->kek_kck_cmd->gtk_cipher =
+				cpu_to_le32(STA_KEY_FLG_TKIP);
 		}
 
 		/*
@@ -315,6 +326,10 @@ static void iwl_mvm_wowlan_program_keys(struct ieee80211_hw *hw,
 		} else {
 			aes_sc =
 			   data->rsc_tsc->params.all_tsc_rsc.aes.multicast_rsc;
+			data->kek_kck_cmd->gtk_cipher =
+				key->cipher == WLAN_CIPHER_SUITE_CCMP ?
+				cpu_to_le32(STA_KEY_FLG_CCM) :
+				cpu_to_le32(STA_KEY_FLG_GCMP);
 		}
 
 		/*
@@ -749,6 +764,7 @@ static int iwl_mvm_wowlan_config_key_params(struct iwl_mvm *mvm,
 		.use_rsc_tsc = false,
 		.tkip = &tkip_cmd,
 		.use_tkip = false,
+		.kek_kck_cmd = &kek_kck_cmd,
 	};
 	struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
 	int ret;
@@ -852,12 +868,13 @@ static int iwl_mvm_wowlan_config_key_params(struct iwl_mvm *mvm,
 
 		memset(&kek_kck_cmd, 0, sizeof(kek_kck_cmd));
 		memcpy(kek_kck_cmd.kck, mvmvif->rekey_data.kck,
-		       NL80211_KCK_LEN);
-		kek_kck_cmd.kck_len = cpu_to_le16(NL80211_KCK_LEN);
+		       mvmvif->rekey_data.kck_len);
+		kek_kck_cmd.kck_len = cpu_to_le16(mvmvif->rekey_data.kck_len);
 		memcpy(kek_kck_cmd.kek, mvmvif->rekey_data.kek,
-		       NL80211_KEK_LEN);
-		kek_kck_cmd.kek_len = cpu_to_le16(NL80211_KEK_LEN);
+		       mvmvif->rekey_data.kek_len);
+		kek_kck_cmd.kek_len = cpu_to_le16(mvmvif->rekey_data.kek_len);
 		kek_kck_cmd.replay_ctr = mvmvif->rekey_data.replay_ctr;
+		kek_kck_cmd.akm = cpu_to_le32(mvmvif->rekey_data.akm);
 
 		ret = iwl_mvm_send_cmd_pdu(mvm,
 					   WOWLAN_KEK_KCK_MATERIAL, cmd_flags,
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
index 1c5f18d1b4c2..5c9bde99ce19 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
@@ -666,6 +666,11 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm)
 			IWL_UCODE_TLV_CAPA_WFA_TPC_REP_IE_SUPPORT))
 		hw->wiphy->features |= NL80211_FEATURE_WFA_TPC_IE_IN_PROBES;
 
+	if (iwl_fw_lookup_cmd_ver(mvm->fw, IWL_ALWAYS_LONG_GROUP,
+				  WOWLAN_KEK_KCK_MATERIAL,
+				  IWL_FW_CMD_VER_UNKNOWN) == 3)
+		hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK;
+
 	if (fw_has_api(&mvm->fw->ucode_capa,
 		       IWL_UCODE_TLV_API_SCAN_TSF_REPORT)) {
 		wiphy_ext_feature_set(hw->wiphy,
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
index 1836589218fa..9187f8a1126d 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
@@ -416,7 +416,11 @@ struct iwl_mvm_vif {
 #ifdef CONFIG_PM
 	/* WoWLAN GTK rekey data */
 	struct {
-		u8 kck[NL80211_KCK_LEN], kek[NL80211_KEK_LEN];
+		u8 kck[NL80211_KCK_EXT_LEN];
+		u8 kek[NL80211_KEK_EXT_LEN];
+		size_t kek_len;
+		size_t kck_len;
+		u32 akm;
 		__le64 replay_ctr;
 		bool valid;
 	} rekey_data;
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 07/12] iwlwifi: mvm: d3: support GCMP ciphers
  2020-09-30 13:31 [PATCH 00/12] iwlwifi: updates intended for v5.10 2020-09-30 Luca Coelho
                   ` (5 preceding siblings ...)
  2020-09-30 13:31 ` [PATCH 06/12] iwlwifi: mvm: support more GTK rekeying algorithms Luca Coelho
@ 2020-09-30 13:31 ` Luca Coelho
  2020-09-30 13:31 ` [PATCH 08/12] iwlwifi: dbg: remove no filter condition Luca Coelho
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Luca Coelho @ 2020-09-30 13:31 UTC (permalink / raw)
  To: kvalo; +Cc: linux-wireless

From: Johannes Berg <johannes.berg@intel.com>

We really should support GCMP ciphers (both sizes) since
all the handling is identical to CCMP, except for the one
case where the key material is copied.
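
As a sketch of that "one case": the cipher-specific part reduces to the
number of key bytes copied when reinstalling the GTK. The helper below
is hypothetical, only condensing what the diff does inline:

	/* illustrative: key material bytes copied per cipher */
	static unsigned int gtk_key_copy_len(u32 cipher)
	{
		switch (cipher) {
		case WLAN_CIPHER_SUITE_CCMP:
		case WLAN_CIPHER_SUITE_GCMP:
			/* the diff asserts these two lengths are equal */
			return WLAN_KEY_LEN_CCMP;
		case WLAN_CIPHER_SUITE_GCMP_256:
			return WLAN_KEY_LEN_GCMP_256;
		case WLAN_CIPHER_SUITE_TKIP:
			return 16; /* TX MIC key left zeroed, as in the diff */
		default:
			return 0;
		}
	}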

Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 drivers/net/wireless/intel/iwlwifi/mvm/d3.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
index f027029553dd..5f6092d548cf 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
@@ -313,6 +313,8 @@ static void iwl_mvm_wowlan_program_keys(struct ieee80211_hw *hw,
 		data->use_rsc_tsc = true;
 		break;
 	case WLAN_CIPHER_SUITE_CCMP:
+	case WLAN_CIPHER_SUITE_GCMP:
+	case WLAN_CIPHER_SUITE_GCMP_256:
 		if (sta) {
 			u64 pn64;
 
@@ -1405,6 +1407,8 @@ static void iwl_mvm_set_key_rx_seq(struct iwl_mvm *mvm,
 
 	switch (key->cipher) {
 	case WLAN_CIPHER_SUITE_CCMP:
+	case WLAN_CIPHER_SUITE_GCMP:
+	case WLAN_CIPHER_SUITE_GCMP_256:
 		iwl_mvm_set_aes_rx_seq(mvm, rsc->aes.multicast_rsc, NULL, key);
 		break;
 	case WLAN_CIPHER_SUITE_TKIP:
@@ -1441,6 +1445,8 @@ static void iwl_mvm_d3_update_keys(struct ieee80211_hw *hw,
 		/* ignore WEP completely, nothing to do */
 		return;
 	case WLAN_CIPHER_SUITE_CCMP:
+	case WLAN_CIPHER_SUITE_GCMP:
+	case WLAN_CIPHER_SUITE_GCMP_256:
 	case WLAN_CIPHER_SUITE_TKIP:
 		/* we support these */
 		break;
@@ -1466,6 +1472,8 @@ static void iwl_mvm_d3_update_keys(struct ieee80211_hw *hw,
 
 		switch (key->cipher) {
 		case WLAN_CIPHER_SUITE_CCMP:
+		case WLAN_CIPHER_SUITE_GCMP:
+		case WLAN_CIPHER_SUITE_GCMP_256:
 			iwl_mvm_set_aes_rx_seq(data->mvm, sc->aes.unicast_rsc,
 					       sta, key);
 			atomic64_set(&key->tx_pn, le64_to_cpu(sc->aes.tsc.pn));
@@ -1548,11 +1556,21 @@ static bool iwl_mvm_setup_connection_keep(struct iwl_mvm *mvm,
 
 		switch (gtkdata.cipher) {
 		case WLAN_CIPHER_SUITE_CCMP:
+		case WLAN_CIPHER_SUITE_GCMP:
+			BUILD_BUG_ON(WLAN_KEY_LEN_CCMP != WLAN_KEY_LEN_GCMP);
+			BUILD_BUG_ON(sizeof(conf.key) < WLAN_KEY_LEN_CCMP);
 			conf.conf.keylen = WLAN_KEY_LEN_CCMP;
 			memcpy(conf.conf.key, status->gtk[0].key,
 			       WLAN_KEY_LEN_CCMP);
 			break;
+		case WLAN_CIPHER_SUITE_GCMP_256:
+			BUILD_BUG_ON(sizeof(conf.key) < WLAN_KEY_LEN_GCMP_256);
+			conf.conf.keylen = WLAN_KEY_LEN_GCMP_256;
+			memcpy(conf.conf.key, status->gtk[0].key,
+			       WLAN_KEY_LEN_GCMP_256);
+			break;
 		case WLAN_CIPHER_SUITE_TKIP:
+			BUILD_BUG_ON(sizeof(conf.key) < WLAN_KEY_LEN_TKIP);
 			conf.conf.keylen = WLAN_KEY_LEN_TKIP;
 			memcpy(conf.conf.key, status->gtk[0].key, 16);
 			/* leave TX MIC key zeroed, we don't use it anyway */
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 08/12] iwlwifi: dbg: remove no filter condition
  2020-09-30 13:31 [PATCH 00/12] iwlwifi: updates intended for v5.10 2020-09-30 Luca Coelho
                   ` (6 preceding siblings ...)
  2020-09-30 13:31 ` [PATCH 07/12] iwlwifi: mvm: d3: support GCMP ciphers Luca Coelho
@ 2020-09-30 13:31 ` Luca Coelho
  2020-09-30 13:31 ` [PATCH 09/12] iwlwifi: mvm: add d3 prints Luca Coelho
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Luca Coelho @ 2020-09-30 13:31 UTC (permalink / raw)
  To: kvalo; +Cc: linux-wireless

From: Mordechay Goodstein <mordechay.goodstein@intel.com>

Currently, if the group-id and command-id values are both zero, we
trigger and collect on every RX frame. This is not the right
behavior; a zero value should be handled like any other filter
value.
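
Pulled out as a standalone sketch, the corrected match is simply (types
as in the diff; the helper itself is illustrative):

	/* a zero group/command id is now a normal filter value, not a wildcard */
	static bool fw_pkt_matches(const struct iwl_cmd_header *hdr,
				   const struct iwl_cmd_header *wanted)
	{
		return hdr->cmd == wanted->cmd &&
		       hdr->group_id == wanted->group_id;
	}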

Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Fixes: 3ed34fbf9d3b ("iwlwifi: dbg_ini: support FW response/notification region type")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
index c44e61aa2aca..9b64a12e489d 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
@@ -954,9 +954,8 @@ static bool iwl_dbg_tlv_check_fw_pkt(struct iwl_fw_runtime *fwrt,
 	struct iwl_rx_packet *pkt = tp_data->fw_pkt;
 	struct iwl_cmd_header *wanted_hdr = (void *)&trig_data;
 
-	if (pkt && ((wanted_hdr->cmd == 0 && wanted_hdr->group_id == 0) ||
-		    (pkt->hdr.cmd == wanted_hdr->cmd &&
-		     pkt->hdr.group_id == wanted_hdr->group_id))) {
+	if (pkt && (pkt->hdr.cmd == wanted_hdr->cmd &&
+		    pkt->hdr.group_id == wanted_hdr->group_id)) {
 		struct iwl_rx_packet *fw_pkt =
 			kmemdup(pkt,
 				sizeof(*pkt) + iwl_rx_packet_payload_len(pkt),
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 09/12] iwlwifi: mvm: add d3 prints
  2020-09-30 13:31 [PATCH 00/12] iwlwifi: updates intended for v5.10 2020-09-30 Luca Coelho
                   ` (7 preceding siblings ...)
  2020-09-30 13:31 ` [PATCH 08/12] iwlwifi: dbg: remove no filter condition Luca Coelho
@ 2020-09-30 13:31 ` Luca Coelho
  2020-09-30 13:31 ` [PATCH 10/12] iwlwifi: dbg: run init_cfg function once per driver load Luca Coelho
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 14+ messages in thread
From: Luca Coelho @ 2020-09-30 13:31 UTC (permalink / raw)
  To: kvalo; +Cc: linux-wireless

From: Sara Sharon <sara.sharon@intel.com>

This is long overdue: add a special WOWLAN debug flag and D3
prints. It will ease debugging and enable test automation. Rather
than allocating a new bit, repurpose one that is currently unused
(IWL_DL_RPM).
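
Usage is the same as for any other debug level; with the repurposed
0x00000400 bit, a print is gated like this (example taken from the
diff):

	IWL_DEBUG_WOWLAN(mvm, "wakeup reason 0x%x\n",
			 le32_to_cpu(fw_status->wakeup_reasons));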

Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 drivers/net/wireless/intel/iwlwifi/iwl-debug.h |  6 +++---
 drivers/net/wireless/intel/iwlwifi/mvm/d3.c    | 13 +++++++++++++
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-debug.h b/drivers/net/wireless/intel/iwlwifi/iwl-debug.h
index 063d8add147f..528eba441926 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-debug.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-debug.h
@@ -2,7 +2,7 @@
 /******************************************************************************
  *
  * Copyright(c) 2003 - 2014 Intel Corporation. All rights reserved.
- * Copyright (C) 2018 Intel Corporation
+ * Copyright(c) 2018 - 2020 Intel Corporation
  *
  * Portions of this file are derived from the ipw3945 project.
  *
@@ -139,7 +139,7 @@ do {                                            			\
 /* 0x00000F00 - 0x00000100 */
 #define IWL_DL_POWER		0x00000100
 #define IWL_DL_TEMP		0x00000200
-#define IWL_DL_RPM		0x00000400
+#define IWL_DL_WOWLAN		0x00000400
 #define IWL_DL_SCAN		0x00000800
 /* 0x0000F000 - 0x00001000 */
 #define IWL_DL_ASSOC		0x00001000
@@ -205,7 +205,7 @@ do {                                            			\
 #define IWL_DEBUG_POWER(p, f, a...)	IWL_DEBUG(p, IWL_DL_POWER, f, ## a)
 #define IWL_DEBUG_11H(p, f, a...)	IWL_DEBUG(p, IWL_DL_11H, f, ## a)
 #define IWL_DEBUG_TPT(p, f, a...)	IWL_DEBUG(p, IWL_DL_TPT, f, ## a)
-#define IWL_DEBUG_RPM(p, f, a...)	IWL_DEBUG(p, IWL_DL_RPM, f, ## a)
+#define IWL_DEBUG_WOWLAN(p, f, a...)	IWL_DEBUG(p, IWL_DL_WOWLAN, f, ## a)
 #define IWL_DEBUG_LAR(p, f, a...)	IWL_DEBUG(p, IWL_DL_LAR, f, ## a)
 #define IWL_DEBUG_FW_INFO(p, f, a...)		\
 		IWL_DEBUG(p, IWL_DL_INFO | IWL_DL_FW, f, ## a)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
index 5f6092d548cf..20c30a6be259 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
@@ -377,6 +377,8 @@ static void iwl_mvm_wowlan_program_keys(struct ieee80211_hw *hw,
 		break;
 	}
 
+	IWL_DEBUG_WOWLAN(mvm, "GTK cipher %d\n", le32_to_cpu(data->kek_kck_cmd->gtk_cipher));
+
 	if (data->configure_keys) {
 		mutex_lock(&mvm->mutex);
 		/*
@@ -878,6 +880,9 @@ static int iwl_mvm_wowlan_config_key_params(struct iwl_mvm *mvm,
 		kek_kck_cmd.replay_ctr = mvmvif->rekey_data.replay_ctr;
 		kek_kck_cmd.akm = cpu_to_le32(mvmvif->rekey_data.akm);
 
+		IWL_DEBUG_WOWLAN(mvm, "setting akm %d\n",
+				 mvmvif->rekey_data.akm);
+
 		ret = iwl_mvm_send_cmd_pdu(mvm,
 					   WOWLAN_KEK_KCK_MATERIAL, cmd_flags,
 					   cmd_size,
@@ -1542,6 +1547,8 @@ static bool iwl_mvm_setup_connection_keep(struct iwl_mvm *mvm,
 	ieee80211_iter_keys(mvm->hw, vif,
 			    iwl_mvm_d3_update_keys, &gtkdata);
 
+	IWL_DEBUG_WOWLAN(mvm, "num of GTK rekeying %d\n",
+			 le32_to_cpu(status->num_of_gtk_rekeys));
 	if (status->num_of_gtk_rekeys) {
 		struct ieee80211_key_conf *key;
 		struct {
@@ -1554,6 +1561,9 @@ static bool iwl_mvm_setup_connection_keep(struct iwl_mvm *mvm,
 		};
 		__be64 replay_ctr;
 
+		IWL_DEBUG_WOWLAN(mvm,
+				 "Received from FW GTK cipher %d, key index %d\n",
+				 conf.conf.cipher, conf.conf.keyidx);
 		switch (gtkdata.cipher) {
 		case WLAN_CIPHER_SUITE_CCMP:
 		case WLAN_CIPHER_SUITE_GCMP:
@@ -1740,6 +1750,9 @@ static bool iwl_mvm_query_wakeup_reasons(struct iwl_mvm *mvm,
 	if (IS_ERR_OR_NULL(fw_status))
 		goto out_unlock;
 
+	IWL_DEBUG_WOWLAN(mvm, "wakeup reason 0x%x\n",
+			 le32_to_cpu(fw_status->wakeup_reasons));
+
 	status.pattern_number = le16_to_cpu(fw_status->pattern_number);
 	for (i = 0; i < 8; i++)
 		status.qos_seq_ctr[i] =
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 10/12] iwlwifi: dbg: run init_cfg function once per driver load
  2020-09-30 13:31 [PATCH 00/12] iwlwifi: updates intended for v5.10 2020-09-30 Luca Coelho
                   ` (8 preceding siblings ...)
  2020-09-30 13:31 ` [PATCH 09/12] iwlwifi: mvm: add d3 prints Luca Coelho
@ 2020-09-30 13:31 ` Luca Coelho
  2020-09-30 13:31 ` [PATCH 11/12] iwlwifi: thermal: support new temperature measurement API Luca Coelho
  2020-09-30 13:31 ` [PATCH 12/12] iwlwifi: phy-ctxt: add new API VER 3 for phy context cmd Luca Coelho
  11 siblings, 0 replies; 14+ messages in thread
From: Luca Coelho @ 2020-09-30 13:31 UTC (permalink / raw)
  To: kvalo; +Cc: linux-wireless

From: Mordechay Goodstein <mordechay.goodstein@intel.com>

Every time init_cfg is called, the driver appends the enabled
triggers to the active triggers list, but this should be done only
once per driver load.
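
The fix is a run-once guard: ini_dest doubles as an "already
configured" marker. A condensed sketch (the wrapper name is
hypothetical; the guard itself is the one added by the diff):

	static void iwl_dbg_tlv_init_cfg_once(struct iwl_fw_runtime *fwrt)
	{
		enum iwl_fw_ini_buffer_location *ini_dest =
			&fwrt->trans->dbg.ini_dest;

		/* already generated for this driver load, nothing to do */
		if (*ini_dest != IWL_FW_INI_LOCATION_INVALID)
			return;

		/* ... generate the active triggers list, set *ini_dest ... */
	}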

Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Fixes: 14124b25780d ("iwlwifi: dbg_ini: implement monitor allocation flow")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
index 9b64a12e489d..ab4b19412906 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
@@ -1018,6 +1018,9 @@ static void iwl_dbg_tlv_init_cfg(struct iwl_fw_runtime *fwrt)
 	enum iwl_fw_ini_buffer_location *ini_dest = &fwrt->trans->dbg.ini_dest;
 	int ret, i;
 
+	if (*ini_dest != IWL_FW_INI_LOCATION_INVALID)
+		return;
+
 	IWL_DEBUG_FW(fwrt,
 		     "WRT: Generating active triggers list, domain 0x%x\n",
 		     fwrt->trans->dbg.domains_bitmap);
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 11/12] iwlwifi: thermal: support new temperature measurement API
  2020-09-30 13:31 [PATCH 00/12] iwlwifi: updates intended for v5.10 2020-09-30 Luca Coelho
                   ` (9 preceding siblings ...)
  2020-09-30 13:31 ` [PATCH 10/12] iwlwifi: dbg: run init_cfg function once per driver load Luca Coelho
@ 2020-09-30 13:31 ` Luca Coelho
  2020-09-30 13:31 ` [PATCH 12/12] iwlwifi: phy-ctxt: add new API VER 3 for phy context cmd Luca Coelho
  11 siblings, 0 replies; 14+ messages in thread
From: Luca Coelho @ 2020-09-30 13:31 UTC (permalink / raw)
  To: kvalo; +Cc: linux-wireless

From: Gil Adam <gil.adam@intel.com>

The new API for temperature measurement (DTS_MEASUREMENT_TRIGGER)
returns an immediate response from the FW instead of waiting for a
notification as previous API versions do. Support the new API while
keeping backwards compatibility.
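
The dispatch in iwl_mvm_get_temp() then reduces to a version check; a
condensed view of the code in the diff, not new logic:

	cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, PHY_OPS_GROUP,
					CMD_DTS_MEASUREMENT_TRIGGER_WIDE,
					IWL_FW_CMD_VER_UNKNOWN);
	/* version 1: zero-length command, temperature in the response */
	if (cmd_ver == 1)
		return iwl_mvm_send_temp_cmd(mvm, true, temp);
	/* older versions: send without CMD_WANT_SKB and wait for the
	 * DTS_MEASUREMENT_NOTIF_WIDE notification */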

Signed-off-by: Gil Adam <gil.adam@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 .../net/wireless/intel/iwlwifi/fw/api/phy.h   | 13 +++-
 drivers/net/wireless/intel/iwlwifi/mvm/tt.c   | 78 ++++++++++++++++---
 2 files changed, 77 insertions(+), 14 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/phy.h b/drivers/net/wireless/intel/iwlwifi/fw/api/phy.h
index 8991ddffbf5e..0debca6dd037 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/phy.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/phy.h
@@ -8,7 +8,7 @@
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
- * Copyright(c) 2019 Intel Corporation
+ * Copyright(c) 2019 - 2020 Intel Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of version 2 of the GNU General Public License as
@@ -31,7 +31,7 @@
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
- * Copyright(c) 2019 Intel Corporation
+ * Copyright(c) 2019 - 2020 Intel Corporation
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -213,6 +213,15 @@ struct iwl_dts_measurement_notif_v2 {
 	__le32 threshold_idx;
 } __packed; /* TEMPERATURE_MEASUREMENT_TRIGGER_NTFY_S_VER_2 */
 
+/**
+ * struct iwl_dts_measurement_resp - measurements response
+ *
+ * @temp: the measured temperature
+ */
+struct iwl_dts_measurement_resp {
+	__le32 temp;
+} __packed; /* CMD_DTS_MEASUREMENT_RSP_API_S_VER_1 */
+
 /**
  * struct ct_kill_notif - CT-kill entry notification
  *
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tt.c b/drivers/net/wireless/intel/iwlwifi/mvm/tt.c
index 0c95663bf9ed..94e9b6de425e 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/tt.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/tt.c
@@ -228,24 +228,67 @@ void iwl_mvm_ct_kill_notif(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
 	iwl_mvm_enter_ctkill(mvm);
 }
 
-static int iwl_mvm_get_temp_cmd(struct iwl_mvm *mvm)
+/*
+ * send the DTS_MEASUREMENT_TRIGGER command with or without waiting for a
+ * response. If we get a response then the measurement is stored in 'temp'
+ */
+static int iwl_mvm_send_temp_cmd(struct iwl_mvm *mvm, bool response, s32 *temp)
 {
-	struct iwl_dts_measurement_cmd cmd = {
+	struct iwl_host_cmd cmd = {};
+	struct iwl_dts_measurement_cmd dts_cmd = {
 		.flags = cpu_to_le32(DTS_TRIGGER_CMD_FLAGS_TEMP),
 	};
-	struct iwl_ext_dts_measurement_cmd extcmd = {
+	struct iwl_ext_dts_measurement_cmd ext_cmd = {
 		.control_mode = cpu_to_le32(DTS_DIRECT_WITHOUT_MEASURE),
 	};
-	u32 cmdid;
+	struct iwl_dts_measurement_resp *resp;
+	void *cmd_ptr;
+	int ret;
+	u32 cmd_flags = 0;
+	u16 len;
+
+	/* Check which command format is used (regular/extended) */
+	if (fw_has_capa(&mvm->fw->ucode_capa,
+			IWL_UCODE_TLV_CAPA_EXTENDED_DTS_MEASURE)) {
+		len = sizeof(ext_cmd);
+		cmd_ptr = &ext_cmd;
+	} else {
+		len = sizeof(dts_cmd);
+		cmd_ptr = &dts_cmd;
+	}
+	/* The command version that returns a response takes a zero-length payload */
+	if (response) {
+		cmd_flags = CMD_WANT_SKB;
+		len = 0;
+	}
 
-	cmdid = iwl_cmd_id(CMD_DTS_MEASUREMENT_TRIGGER_WIDE,
-			   PHY_OPS_GROUP, 0);
+	cmd.id = WIDE_ID(PHY_OPS_GROUP, CMD_DTS_MEASUREMENT_TRIGGER_WIDE);
+	cmd.len[0] = len;
+	cmd.flags = cmd_flags;
+	cmd.data[0] = cmd_ptr;
 
-	if (!fw_has_capa(&mvm->fw->ucode_capa,
-			 IWL_UCODE_TLV_CAPA_EXTENDED_DTS_MEASURE))
-		return iwl_mvm_send_cmd_pdu(mvm, cmdid, 0, sizeof(cmd), &cmd);
+	IWL_DEBUG_TEMP(mvm,
+		       "Sending temperature measurement command - %s response\n",
+		       response ? "with" : "without");
+	ret = iwl_mvm_send_cmd(mvm, &cmd);
 
-	return iwl_mvm_send_cmd_pdu(mvm, cmdid, 0, sizeof(extcmd), &extcmd);
+	if (ret) {
+		IWL_ERR(mvm,
+			"Failed to send the temperature measurement command (err=%d)\n",
+			ret);
+		return ret;
+	}
+
+	if (response) {
+		resp = (void *)cmd.resp_pkt->data;
+		*temp = le32_to_cpu(resp->temp);
+		IWL_DEBUG_TEMP(mvm,
+			       "Got temperature measurement response: temp=%d\n",
+			       *temp);
+		iwl_free_resp(&cmd);
+	}
+
+	return ret;
 }
 
 int iwl_mvm_get_temp(struct iwl_mvm *mvm, s32 *temp)
@@ -254,6 +297,18 @@ int iwl_mvm_get_temp(struct iwl_mvm *mvm, s32 *temp)
 	static u16 temp_notif[] = { WIDE_ID(PHY_OPS_GROUP,
 					    DTS_MEASUREMENT_NOTIF_WIDE) };
 	int ret;
+	u8 cmd_ver;
+
+	/*
+	 * If command version is 1 we send the command and immediately get
+	 * a response. For older versions we send the command and wait for a
+	 * notification (no command TLV for previous versions).
+	 */
+	cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, PHY_OPS_GROUP,
+					CMD_DTS_MEASUREMENT_TRIGGER_WIDE,
+					IWL_FW_CMD_VER_UNKNOWN);
+	if (cmd_ver == 1)
+		return iwl_mvm_send_temp_cmd(mvm, true, temp);
 
 	lockdep_assert_held(&mvm->mutex);
 
@@ -261,9 +316,8 @@ int iwl_mvm_get_temp(struct iwl_mvm *mvm, s32 *temp)
 				   temp_notif, ARRAY_SIZE(temp_notif),
 				   iwl_mvm_temp_notif_wait, temp);
 
-	ret = iwl_mvm_get_temp_cmd(mvm);
+	ret = iwl_mvm_send_temp_cmd(mvm, false, temp);
 	if (ret) {
-		IWL_ERR(mvm, "Failed to get the temperature (err=%d)\n", ret);
 		iwl_remove_notification(&mvm->notif_wait, &wait_temp_notif);
 		return ret;
 	}
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH 12/12] iwlwifi: phy-ctxt: add new API VER 3 for phy context cmd
  2020-09-30 13:31 [PATCH 00/12] iwlwifi: updates intended for v5.10 2020-09-30 Luca Coelho
                   ` (10 preceding siblings ...)
  2020-09-30 13:31 ` [PATCH 11/12] iwlwifi: thermal: support new temperature measurement API Luca Coelho
@ 2020-09-30 13:31 ` Luca Coelho
  11 siblings, 0 replies; 14+ messages in thread
From: Luca Coelho @ 2020-09-30 13:31 UTC (permalink / raw)
  To: kvalo; +Cc: linux-wireless

From: Mordechay Goodstein <mordechay.goodstein@intel.com>

The new API version adds the ability to tell a CDB NIC which LMAC
ID the command belongs to.

Also, the driver always set apply_time to zero, so there is no need
to pass it as a parameter; the new API removes the field entirely as
unused.
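
A hedged sketch of filling the new command layout (field names from
the diff; the action value and zero lmac id are placeholders, not
taken from this patch):

	struct iwl_phy_context_cmd cmd = {
		.id_and_color =
			cpu_to_le32(FW_CMD_ID_AND_COLOR(ctxt->id, ctxt->color)),
		.action = cpu_to_le32(action),
		/* placeholder: real code derives the lmac id for CDB NICs */
		.lmac_id = cpu_to_le32(0),
		.dsp_cfg_flags = cpu_to_le32(0),
	};
	/* ci and rxchain_info are then filled from the chandef and the
	 * antenna configuration, as in the diff */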

Signed-off-by: Mordechay Goodstein <mordechay.goodstein@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
---
 .../wireless/intel/iwlwifi/fw/api/phy-ctxt.h  |  32 ++++-
 .../net/wireless/intel/iwlwifi/mvm/phy-ctxt.c | 126 +++++++++++++-----
 2 files changed, 116 insertions(+), 42 deletions(-)

diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h b/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h
index b833b80ea3d6..e6a069683462 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h
@@ -5,10 +5,9 @@
  *
  * GPL LICENSE SUMMARY
  *
- * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
- * Copyright(c) 2018        Intel Corporation
+ * Copyright(c) 2012 - 2014, 2018, 2020 Intel Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of version 2 of the GNU General Public License as
@@ -28,10 +27,9 @@
  *
  * BSD LICENSE
  *
- * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
- * Copyright(c) 2018        Intel Corporation
+ * Copyright(c) 2012 - 2014, 2018, 2020 Intel Corporation
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -181,15 +179,37 @@ struct iwl_phy_context_cmd_tail {
  * @ci: channel info
  * @tail: command tail
  */
-struct iwl_phy_context_cmd {
+struct iwl_phy_context_cmd_v1 {
 	/* COMMON_INDEX_HDR_API_S_VER_1 */
 	__le32 id_and_color;
 	__le32 action;
-	/* PHY_CONTEXT_DATA_API_S_VER_1 */
+	/* PHY_CONTEXT_DATA_API_S_VER_3 */
 	__le32 apply_time;
 	__le32 tx_param_color;
 	struct iwl_fw_channel_info ci;
 	struct iwl_phy_context_cmd_tail tail;
 } __packed; /* PHY_CONTEXT_CMD_API_VER_1 */
 
+/**
+ * struct iwl_phy_context_cmd - config of the PHY context
+ * ( PHY_CONTEXT_CMD = 0x8 )
+ * @id_and_color: ID and color of the relevant Binding
+ * @action: action to perform, one of FW_CTXT_ACTION_*
+ * @lmac_id: the lmac id the phy context belongs to
+ * @ci: channel info
+ * @rxchain_info: ???
+ * @dsp_cfg_flags: set to 0
+ * @reserved: reserved to align to 64 bit
+ */
+struct iwl_phy_context_cmd {
+	/* COMMON_INDEX_HDR_API_S_VER_1 */
+	__le32 id_and_color;
+	__le32 action;
+	/* PHY_CONTEXT_DATA_API_S_VER_3 */
+	struct iwl_fw_channel_info ci;
+	__le32 lmac_id;
+	__le32 rxchain_info;
+	__le32 dsp_cfg_flags;
+	__le32 reserved;
+} __packed; /* PHY_CONTEXT_CMD_API_VER_3 */
 #endif /* __iwl_fw_api_phy_ctxt_h__ */
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
index 0243dbe8ac49..a5da4106ba5a 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
@@ -7,8 +7,8 @@
  *
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
- * Copyright(c) 2017           Intel Deutschland GmbH
- * Copyright(c) 2018           Intel Corporation
+ * Copyright(c) 2017        Intel Deutschland GmbH
+ * Copyright(c) 2018 - 2020 Intel Corporation
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of version 2 of the GNU General Public License as
@@ -30,7 +30,7 @@
  *
  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
  * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
- * Copyright(c) 2018           Intel Corporation
+ * Copyright(c) 2018 - 2020 Intel Corporation
  * All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -125,30 +125,19 @@ u8 iwl_mvm_get_ctrl_pos(struct cfg80211_chan_def *chandef)
  */
 static void iwl_mvm_phy_ctxt_cmd_hdr(struct iwl_mvm_phy_ctxt *ctxt,
 				     struct iwl_phy_context_cmd *cmd,
-				     u32 action, u32 apply_time)
+				     u32 action)
 {
-	memset(cmd, 0, sizeof(struct iwl_phy_context_cmd));
-
 	cmd->id_and_color = cpu_to_le32(FW_CMD_ID_AND_COLOR(ctxt->id,
 							    ctxt->color));
 	cmd->action = cpu_to_le32(action);
-	cmd->apply_time = cpu_to_le32(apply_time);
 }
 
-/*
- * Add the phy configuration to the PHY context command
- */
-static void iwl_mvm_phy_ctxt_cmd_data(struct iwl_mvm *mvm,
-				      struct iwl_phy_context_cmd *cmd,
-				      struct cfg80211_chan_def *chandef,
-				      u8 chains_static, u8 chains_dynamic)
+static void iwl_mvm_phy_ctxt_set_rxchain(struct iwl_mvm *mvm,
+					 __le32 *rxchain_info,
+					 u8 chains_static,
+					 u8 chains_dynamic)
 {
 	u8 active_cnt, idle_cnt;
-	struct iwl_phy_context_cmd_tail *tail =
-		iwl_mvm_chan_info_cmd_tail(mvm, &cmd->ci);
-
-	/* Set the channel info data */
-	iwl_mvm_set_chan_info_chandef(mvm, &cmd->ci, chandef);
 
 	/* Set the rx chains */
 	idle_cnt = chains_static;
@@ -166,19 +155,58 @@ static void iwl_mvm_phy_ctxt_cmd_data(struct iwl_mvm *mvm,
 		active_cnt = 2;
 	}
 
-	tail->rxchain_info = cpu_to_le32(iwl_mvm_get_valid_rx_ant(mvm) <<
+	*rxchain_info = cpu_to_le32(iwl_mvm_get_valid_rx_ant(mvm) <<
 					PHY_RX_CHAIN_VALID_POS);
-	tail->rxchain_info |= cpu_to_le32(idle_cnt << PHY_RX_CHAIN_CNT_POS);
-	tail->rxchain_info |= cpu_to_le32(active_cnt <<
+	*rxchain_info |= cpu_to_le32(idle_cnt << PHY_RX_CHAIN_CNT_POS);
+	*rxchain_info |= cpu_to_le32(active_cnt <<
 					 PHY_RX_CHAIN_MIMO_CNT_POS);
 #ifdef CONFIG_IWLWIFI_DEBUGFS
 	if (unlikely(mvm->dbgfs_rx_phyinfo))
-		tail->rxchain_info = cpu_to_le32(mvm->dbgfs_rx_phyinfo);
+		*rxchain_info = cpu_to_le32(mvm->dbgfs_rx_phyinfo);
 #endif
+}
+
+/*
+ * Add the phy configuration to the PHY context command
+ */
+static void iwl_mvm_phy_ctxt_cmd_data_v1(struct iwl_mvm *mvm,
+					 struct iwl_phy_context_cmd_v1 *cmd,
+					 struct cfg80211_chan_def *chandef,
+					 u8 chains_static, u8 chains_dynamic)
+{
+	struct iwl_phy_context_cmd_tail *tail =
+		iwl_mvm_chan_info_cmd_tail(mvm, &cmd->ci);
+
+	/* Set the channel info data */
+	iwl_mvm_set_chan_info_chandef(mvm, &cmd->ci, chandef);
+
+	iwl_mvm_phy_ctxt_set_rxchain(mvm, &tail->rxchain_info,
+				     chains_static, chains_dynamic);
 
 	tail->txchain_info = cpu_to_le32(iwl_mvm_get_valid_tx_ant(mvm));
 }
 
+/*
+ * Add the phy configuration to the PHY context command
+ */
+static void iwl_mvm_phy_ctxt_cmd_data(struct iwl_mvm *mvm,
+				      struct iwl_phy_context_cmd *cmd,
+				      struct cfg80211_chan_def *chandef,
+				      u8 chains_static, u8 chains_dynamic)
+{
+	if (chandef->chan->band == NL80211_BAND_2GHZ ||
+	    !iwl_mvm_is_cdb_supported(mvm))
+		cmd->lmac_id = cpu_to_le32(IWL_LMAC_24G_INDEX);
+	else
+		cmd->lmac_id = cpu_to_le32(IWL_LMAC_5G_INDEX);
+
+	/* Set the channel info data */
+	iwl_mvm_set_chan_info_chandef(mvm, &cmd->ci, chandef);
+
+	iwl_mvm_phy_ctxt_set_rxchain(mvm, &cmd->rxchain_info,
+				     chains_static, chains_dynamic);
+}
+
 /*
 * Send a command to apply the current phy configuration. The command is sent
  * only if something in the configuration changed: in case that this is the
@@ -189,20 +217,46 @@ static int iwl_mvm_phy_ctxt_apply(struct iwl_mvm *mvm,
 				  struct iwl_mvm_phy_ctxt *ctxt,
 				  struct cfg80211_chan_def *chandef,
 				  u8 chains_static, u8 chains_dynamic,
-				  u32 action, u32 apply_time)
+				  u32 action)
 {
-	struct iwl_phy_context_cmd cmd;
 	int ret;
-	u16 len = sizeof(cmd) - iwl_mvm_chan_info_padding(mvm);
-
-	/* Set the command header fields */
-	iwl_mvm_phy_ctxt_cmd_hdr(ctxt, &cmd, action, apply_time);
+	int ver = iwl_fw_lookup_cmd_ver(mvm->fw, IWL_ALWAYS_LONG_GROUP,
+					PHY_CONTEXT_CMD, 1);
+
+	if (ver == 3) {
+		struct iwl_phy_context_cmd cmd = {};
+
+		/* Set the command header fields */
+		iwl_mvm_phy_ctxt_cmd_hdr(ctxt, &cmd, action);
+
+		/* Set the command data */
+		iwl_mvm_phy_ctxt_cmd_data(mvm, &cmd, chandef,
+					  chains_static,
+					  chains_dynamic);
+
+		ret = iwl_mvm_send_cmd_pdu(mvm, PHY_CONTEXT_CMD,
+					   0, sizeof(cmd), &cmd);
+	} else if (ver < 3) {
+		struct iwl_phy_context_cmd_v1 cmd = {};
+		u16 len = sizeof(cmd) - iwl_mvm_chan_info_padding(mvm);
+
+		/* Set the command header fields */
+		iwl_mvm_phy_ctxt_cmd_hdr(ctxt,
+					 (struct iwl_phy_context_cmd *)&cmd,
+					 action);
+
+		/* Set the command data */
+		iwl_mvm_phy_ctxt_cmd_data_v1(mvm, &cmd, chandef,
+					     chains_static,
+					     chains_dynamic);
+		ret = iwl_mvm_send_cmd_pdu(mvm, PHY_CONTEXT_CMD,
+					   0, len, &cmd);
+	} else {
+		IWL_ERR(mvm, "PHY ctxt cmd error ver %d not supported\n", ver);
+		return -EOPNOTSUPP;
+	}
 
-	/* Set the command data */
-	iwl_mvm_phy_ctxt_cmd_data(mvm, &cmd, chandef,
-				  chains_static, chains_dynamic);
 
-	ret = iwl_mvm_send_cmd_pdu(mvm, PHY_CONTEXT_CMD, 0, len, &cmd);
 	if (ret)
 		IWL_ERR(mvm, "PHY ctxt cmd error. ret=%d\n", ret);
 	return ret;
@@ -223,7 +277,7 @@ int iwl_mvm_phy_ctxt_add(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt,
 
 	return iwl_mvm_phy_ctxt_apply(mvm, ctxt, chandef,
 				      chains_static, chains_dynamic,
-				      FW_CTXT_ACTION_ADD, 0);
+				      FW_CTXT_ACTION_ADD);
 }
 
 /*
@@ -257,7 +311,7 @@ int iwl_mvm_phy_ctxt_changed(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt,
 		/* ... remove it here ...*/
 		ret = iwl_mvm_phy_ctxt_apply(mvm, ctxt, chandef,
 					     chains_static, chains_dynamic,
-					     FW_CTXT_ACTION_REMOVE, 0);
+					     FW_CTXT_ACTION_REMOVE);
 		if (ret)
 			return ret;
 
@@ -269,7 +323,7 @@ int iwl_mvm_phy_ctxt_changed(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt,
 	ctxt->width = chandef->width;
 	return iwl_mvm_phy_ctxt_apply(mvm, ctxt, chandef,
 				      chains_static, chains_dynamic,
-				      action, 0);
+				      action);
 }
 
 void iwl_mvm_phy_ctxt_unref(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt)
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH 01/12] iwlwifi: mvm: Add FTM initiator RTT smoothing logic
  2020-09-30 13:31 ` [PATCH 01/12] iwlwifi: mvm: Add FTM initiator RTT smoothing logic Luca Coelho
@ 2020-10-01 19:01   ` Luca Coelho
  0 siblings, 0 replies; 14+ messages in thread
From: Luca Coelho @ 2020-10-01 19:01 UTC (permalink / raw)
  To: Luca Coelho; +Cc: kvalo, linux-wireless

Luca Coelho <luca@coelho.fi> wrote:

> From: Ilan Peer <ilan.peer@intel.com>
> 
> To overcome instabilities in the RTT results, add smoothing logic
> to the reported results. In short, the smoothing logic tracks the
> RTT average of each responder for a period of time, and in case
> a new RTT result is found to be a spur, the tracked RTT average
> is reported instead of the current RTT measurement (a rough
> sketch of this logic follows below the quoted message).
> 
> Smoothing logic debug configuration using iwl-dbg-cfg.ini:
> 
> - MVM_FTM_INITIATOR_ENABLE_SMOOTH: Set to 1 to enable smoothing logic
>  (default=0).
> - MVM_FTM_INITIATOR_SMOOTH_ALPHA: A value between 0 and 100, defining
>   the weight of the current RTT result vs. the RTT average tracked
>   based on the previous results. A value of 100 means use only the
>   current RTT result.
> - MVM_FTM_INITIATOR_SMOOTH_AGE_SEC: The maximal time in seconds for
>   which the RTT average tracked based on previous results is
>   considered valid.
> - MVM_FTM_INITIATOR_SMOOTH_UNDERSHOOT: if the current RTT is positive
>   and below the RTT average by at least this value, report the average
>   RTT instead of the current one. In units of picoseconds.
> - MVM_FTM_INITIATOR_SMOOTH_OVERSHOOT: if the current RTT is positive
>   and above the RTT average by at least this value, report the average
>   RTT instead of the current one. In units of picoseconds.
> 
> Signed-off-by: Ilan Peer <ilan.peer@intel.com>
> Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
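
The smoothing described above reduces to roughly the sketch below.
This is an illustration only, not the driver code: the struct, the
function and its parameter names are hypothetical, and the actual
implementation lives in mvm/ftm-initiator.c.

	/* Illustration only; all names here are hypothetical. */
	struct rtt_avg_state {
		s64 avg_ps;		/* tracked RTT average, in ps */
		unsigned long updated;	/* time of last update, in sec */
		bool valid;
	};

	static s64 smooth_rtt(struct rtt_avg_state *st, s64 rtt_ps,
			      unsigned long now_sec, u32 alpha,
			      u32 age_sec, s64 undershoot_ps,
			      s64 overshoot_ps)
	{
		/*
		 * No average yet, or the tracked one is older than
		 * the allowed age: restart from the current result.
		 */
		if (!st->valid || now_sec - st->updated > age_sec) {
			st->avg_ps = rtt_ps;
			st->updated = now_sec;
			st->valid = true;
			return rtt_ps;
		}

		/*
		 * Spur: a positive RTT at least undershoot/overshoot
		 * picoseconds off the tracked average is replaced by
		 * that average (whether a spur is also folded into the
		 * average is our guess, not taken from the patch).
		 */
		if (rtt_ps > 0 && (st->avg_ps - rtt_ps >= undershoot_ps ||
				   rtt_ps - st->avg_ps >= overshoot_ps))
			return st->avg_ps;

		/*
		 * Weighted average; alpha is 0..100, and alpha=100
		 * means only the current result is used.
		 */
		st->avg_ps = (alpha * rtt_ps +
			      (100 - alpha) * st->avg_ps) / 100;
		st->updated = now_sec;

		return rtt_ps;
	}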

12 patches applied to iwlwifi-next.git, thanks.

b68bd2e3143a iwlwifi: mvm: Add FTM initiator RTT smoothing logic
890d814b1837 iwlwifi: mvm: location: set the HLTK when PASN station is added
68ad24742f17 iwlwifi: mvm: responder: allow to set only the HLTK for an associated station
0739a7d70e00 iwlwifi: mvm: initiator: add option for adding a PASN responder
0cd1ad2d7fd4 iwlwifi: move all bus-independent TX functions to common code
2a42aea79531 iwlwifi: mvm: support more GTK rekeying algorithms
c7f996eb894e iwlwifi: mvm: d3: support GCMP ciphers
bfdb157127da iwlwifi: dbg: remove no filter condition
19d9fa7ab9f3 iwlwifi: mvm: add d3 prints
42f8a2735cc2 iwlwifi: dbg: run init_cfg function once per driver load
762c523f95b8 iwlwifi: thermal: support new temperature measurement API
a86821069e87 iwlwifi: phy-ctxt: add new API VER 3 for phy context cmd


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2020-10-01 19:01 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-30 13:31 [PATCH 00/12] iwlwifi: updates intended for v5.10 2020-09-30 Luca Coelho
2020-09-30 13:31 ` [PATCH 01/12] iwlwifi: mvm: Add FTM initiator RTT smoothing logic Luca Coelho
2020-10-01 19:01   ` Luca Coelho
2020-09-30 13:31 ` [PATCH 02/12] iwlwifi: mvm: location: set the HLTK when PASN station is added Luca Coelho
2020-09-30 13:31 ` [PATCH 03/12] iwlwifi: mvm: responder: allow to set only the HLTK for an associated station Luca Coelho
2020-09-30 13:31 ` [PATCH 04/12] iwlwifi: mvm: initiator: add option for adding a PASN responder Luca Coelho
2020-09-30 13:31 ` [PATCH 05/12] iwlwifi: move all bus-independent TX functions to common code Luca Coelho
2020-09-30 13:31 ` [PATCH 06/12] iwlwifi: mvm: support more GTK rekeying algorithms Luca Coelho
2020-09-30 13:31 ` [PATCH 07/12] iwlwifi: mvm: d3: support GCMP ciphers Luca Coelho
2020-09-30 13:31 ` [PATCH 08/12] iwlwifi: dbg: remove no filter condition Luca Coelho
2020-09-30 13:31 ` [PATCH 09/12] iwlwifi: mvm: add d3 prints Luca Coelho
2020-09-30 13:31 ` [PATCH 10/12] iwlwifi: dbg: run init_cfg function once per driver load Luca Coelho
2020-09-30 13:31 ` [PATCH 11/12] iwlwifi: thermal: support new temperature measurement API Luca Coelho
2020-09-30 13:31 ` [PATCH 12/12] iwlwifi: phy-ctxt: add new API VER 3 for phy context cmd Luca Coelho
