netdev.vger.kernel.org archive mirror
* [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support
@ 2019-07-09  2:53 Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 01/11] nfp: tls: ignore queue limits for delete commands Jakub Kicinski
                   ` (11 more replies)
  0 siblings, 12 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem; +Cc: netdev, oss-drivers, alexei.starovoitov, Jakub Kicinski

Hi!

This series brings various fixes to nfp tls offload recently added
to net-next.

The first four patches revolve around device mailbox communication,
trying to make it more reliable. The next patch fixes a statistics
counter. Patch 6 improves TX resync handling when device
communication fails. Patch 7 makes sure we remove keys from memory
after talking to the FW. Patch 8 adds a missing TLS context
initialization; we fill in the context information from various
places based on the configuration, and it looks like we missed the
init in the case where TX is offloaded but RX hasn't been
initialized yet. Patches 9 and 10 make the nfp driver undo TLS
state changes if we need to drop the frame (e.g. due to a DMA
mapping error).

Last but not least, the TLS fallback path should not adjust socket
memory accounting after skb_orphan_partial(). This code will go
away once we forbid orphaning of skbs in need of crypto, but that's
"real" -next material, so let's do a quick fix for now.

Dirk van der Merwe (2):
  nfp: ccm: increase message limits
  net/tls: don't clear TX resync flag on error

Jakub Kicinski (9):
  nfp: tls: ignore queue limits for delete commands
  nfp: tls: move setting ipver_vlan to a helper
  nfp: tls: use unique connection ids instead of 4-tuple for TX
  nfp: tls: count TSO segments separately for the TLS offload
  nfp: tls: don't leave key material in freed FW cmsg skbs
  net/tls: add missing prot info init
  nfp: tls: avoid one of the ifdefs for TLS
  nfp: tls: undo TLS sequence tracking when dropping the frame
  net/tls: fix socket wmem accounting on fallback with netem

 .../mellanox/mlx5/core/en_accel/tls.c         |  8 +-
 drivers/net/ethernet/netronome/nfp/ccm.h      |  4 +
 drivers/net/ethernet/netronome/nfp/ccm_mbox.c | 31 ++++---
 .../net/ethernet/netronome/nfp/crypto/fw.h    |  2 +
 .../net/ethernet/netronome/nfp/crypto/tls.c   | 93 +++++++++++++------
 drivers/net/ethernet/netronome/nfp/nfp_net.h  |  3 +
 .../ethernet/netronome/nfp/nfp_net_common.c   | 32 ++++++-
 include/net/tls.h                             |  6 +-
 net/tls/tls_device.c                          | 10 +-
 net/tls/tls_device_fallback.c                 |  4 +
 10 files changed, 143 insertions(+), 50 deletions(-)

-- 
2.21.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH net-next 01/11] nfp: tls: ignore queue limits for delete commands
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
@ 2019-07-09  2:53 ` Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 02/11] nfp: tls: move setting ipver_vlan to a helper Jakub Kicinski
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem
  Cc: netdev, oss-drivers, alexei.starovoitov, Jakub Kicinski,
	Dirk van der Merwe

We need to do our best not to drop delete commands, otherwise
we will have stale entries in the connection table.  Ignore
the control message queue limits for delete commands.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/ccm.h      |  4 +++
 drivers/net/ethernet/netronome/nfp/ccm_mbox.c | 25 +++++++++++++------
 .../net/ethernet/netronome/nfp/crypto/tls.c   |  5 ++--
 3 files changed, 24 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/ccm.h b/drivers/net/ethernet/netronome/nfp/ccm.h
index da1b1e20df51..a460c75522be 100644
--- a/drivers/net/ethernet/netronome/nfp/ccm.h
+++ b/drivers/net/ethernet/netronome/nfp/ccm.h
@@ -118,6 +118,10 @@ bool nfp_ccm_mbox_fits(struct nfp_net *nn, unsigned int size);
 struct sk_buff *
 nfp_ccm_mbox_msg_alloc(struct nfp_net *nn, unsigned int req_size,
 		       unsigned int reply_size, gfp_t flags);
+int __nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
+			       enum nfp_ccm_type type,
+			       unsigned int reply_size,
+			       unsigned int max_reply_size, bool critical);
 int nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
 			     enum nfp_ccm_type type,
 			     unsigned int reply_size,
diff --git a/drivers/net/ethernet/netronome/nfp/ccm_mbox.c b/drivers/net/ethernet/netronome/nfp/ccm_mbox.c
index 02fccd90961d..d160ac794d98 100644
--- a/drivers/net/ethernet/netronome/nfp/ccm_mbox.c
+++ b/drivers/net/ethernet/netronome/nfp/ccm_mbox.c
@@ -515,13 +515,13 @@ nfp_ccm_mbox_msg_prepare(struct nfp_net *nn, struct sk_buff *skb,
 
 static int
 nfp_ccm_mbox_msg_enqueue(struct nfp_net *nn, struct sk_buff *skb,
-			 enum nfp_ccm_type type)
+			 enum nfp_ccm_type type, bool critical)
 {
 	struct nfp_ccm_hdr *hdr;
 
 	assert_spin_locked(&nn->mbox_cmsg.queue.lock);
 
-	if (nn->mbox_cmsg.queue.qlen >= NFP_CCM_MAX_QLEN) {
+	if (!critical && nn->mbox_cmsg.queue.qlen >= NFP_CCM_MAX_QLEN) {
 		nn_dp_warn(&nn->dp, "mailbox request queue too long\n");
 		return -EBUSY;
 	}
@@ -536,10 +536,10 @@ nfp_ccm_mbox_msg_enqueue(struct nfp_net *nn, struct sk_buff *skb,
 	return 0;
 }
 
-int nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
-			     enum nfp_ccm_type type,
-			     unsigned int reply_size,
-			     unsigned int max_reply_size)
+int __nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
+			       enum nfp_ccm_type type,
+			       unsigned int reply_size,
+			       unsigned int max_reply_size, bool critical)
 {
 	int err;
 
@@ -550,7 +550,7 @@ int nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
 
 	spin_lock_bh(&nn->mbox_cmsg.queue.lock);
 
-	err = nfp_ccm_mbox_msg_enqueue(nn, skb, type);
+	err = nfp_ccm_mbox_msg_enqueue(nn, skb, type, critical);
 	if (err)
 		goto err_unlock;
 
@@ -594,6 +594,15 @@ int nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
 	return err;
 }
 
+int nfp_ccm_mbox_communicate(struct nfp_net *nn, struct sk_buff *skb,
+			     enum nfp_ccm_type type,
+			     unsigned int reply_size,
+			     unsigned int max_reply_size)
+{
+	return __nfp_ccm_mbox_communicate(nn, skb, type, reply_size,
+					  max_reply_size, false);
+}
+
 static void nfp_ccm_mbox_post_runq_work(struct work_struct *work)
 {
 	struct sk_buff *skb;
@@ -650,7 +659,7 @@ int nfp_ccm_mbox_post(struct nfp_net *nn, struct sk_buff *skb,
 
 	spin_lock_bh(&nn->mbox_cmsg.queue.lock);
 
-	err = nfp_ccm_mbox_msg_enqueue(nn, skb, type);
+	err = nfp_ccm_mbox_msg_enqueue(nn, skb, type, false);
 	if (err)
 		goto err_unlock;
 
diff --git a/drivers/net/ethernet/netronome/nfp/crypto/tls.c b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
index 9f7ccb7da417..086bea0a7f2d 100644
--- a/drivers/net/ethernet/netronome/nfp/crypto/tls.c
+++ b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
@@ -112,8 +112,9 @@ nfp_net_tls_communicate_simple(struct nfp_net *nn, struct sk_buff *skb,
 	struct nfp_crypto_reply_simple *reply;
 	int err;
 
-	err = nfp_ccm_mbox_communicate(nn, skb, type,
-				       sizeof(*reply), sizeof(*reply));
+	err = __nfp_ccm_mbox_communicate(nn, skb, type,
+					 sizeof(*reply), sizeof(*reply),
+					 type == NFP_CCM_TYPE_CRYPTO_DEL);
 	if (err) {
 		nn_dp_warn(&nn->dp, "failed to %s TLS: %d\n", name, err);
 		return err;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 02/11] nfp: tls: move setting ipver_vlan to a helper
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 01/11] nfp: tls: ignore queue limits for delete commands Jakub Kicinski
@ 2019-07-09  2:53 ` Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 03/11] nfp: tls: use unique connection ids instead of 4-tuple for TX Jakub Kicinski
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem
  Cc: netdev, oss-drivers, alexei.starovoitov, Jakub Kicinski,
	Dirk van der Merwe

Long lines are ugly.  No functional changes.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/crypto/tls.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/crypto/tls.c b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
index 086bea0a7f2d..b13b3dbd4843 100644
--- a/drivers/net/ethernet/netronome/nfp/crypto/tls.c
+++ b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
@@ -147,6 +147,14 @@ static void nfp_net_tls_del_fw(struct nfp_net *nn, __be32 *fw_handle)
 				       NFP_CCM_TYPE_CRYPTO_DEL);
 }
 
+static void
+nfp_net_tls_set_ipver_vlan(struct nfp_crypto_req_add_front *front, u8 ipver)
+{
+	front->ipver_vlan = cpu_to_be16(FIELD_PREP(NFP_NET_TLS_IPVER, ipver) |
+					FIELD_PREP(NFP_NET_TLS_VLAN,
+						   NFP_NET_TLS_VLAN_UNUSED));
+}
+
 static struct nfp_crypto_req_add_back *
 nfp_net_tls_set_ipv4(struct nfp_crypto_req_add_v4 *req, struct sock *sk,
 		     int direction)
@@ -154,9 +162,6 @@ nfp_net_tls_set_ipv4(struct nfp_crypto_req_add_v4 *req, struct sock *sk,
 	struct inet_sock *inet = inet_sk(sk);
 
 	req->front.key_len += sizeof(__be32) * 2;
-	req->front.ipver_vlan = cpu_to_be16(FIELD_PREP(NFP_NET_TLS_IPVER, 4) |
-					    FIELD_PREP(NFP_NET_TLS_VLAN,
-						       NFP_NET_TLS_VLAN_UNUSED));
 
 	if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
 		req->src_ip = inet->inet_saddr;
@@ -177,9 +182,6 @@ nfp_net_tls_set_ipv6(struct nfp_crypto_req_add_v6 *req, struct sock *sk,
 	struct ipv6_pinfo *np = inet6_sk(sk);
 
 	req->front.key_len += sizeof(struct in6_addr) * 2;
-	req->front.ipver_vlan = cpu_to_be16(FIELD_PREP(NFP_NET_TLS_IPVER, 6) |
-					    FIELD_PREP(NFP_NET_TLS_VLAN,
-						       NFP_NET_TLS_VLAN_UNUSED));
 
 	if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
 		memcpy(req->src_ip, &np->saddr, sizeof(req->src_ip));
@@ -304,6 +306,8 @@ nfp_net_tls_add(struct net_device *netdev, struct sock *sk,
 	front->opcode = nfp_tls_1_2_dir_to_opcode(direction);
 	memset(front->resv, 0, sizeof(front->resv));
 
+	nfp_net_tls_set_ipver_vlan(front, ipv6 ? 6 : 4);
+
 	if (ipv6)
 		back = nfp_net_tls_set_ipv6((void *)skb->data, sk, direction);
 	else
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 03/11] nfp: tls: use unique connection ids instead of 4-tuple for TX
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 01/11] nfp: tls: ignore queue limits for delete commands Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 02/11] nfp: tls: move setting ipver_vlan to a helper Jakub Kicinski
@ 2019-07-09  2:53 ` Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 04/11] nfp: ccm: increase message limits Jakub Kicinski
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem
  Cc: netdev, oss-drivers, alexei.starovoitov, Jakub Kicinski,
	Dirk van der Merwe

Connection 4-tuple reuse is slightly problematic - the TLS socket
and context do not get destroyed until all the associated skbs have
left the system and all references are released. This leads to a
stale connection entry in the device, preventing addition of a new
one if the 4-tuple is reused quickly enough.

Instead of using the real 4-tuple as the key, use a unique ID.
Set the protocol to TCP and port to 0 to ensure no collisions
with real connections.
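
A condensed view of the approach, pulled from the helpers added in
the diff below (nn, front and back are the driver structures used
there; this is a sketch, not the patch code itself):

  /* the address bytes carry a unique 64-bit id instead of real IPs,
   * padded with zeroes
   */
  u64 id = atomic64_inc_return(&nn->ktls_conn_id_gen);
  u32 len = front->key_len - NFP_NET_TLS_NON_ADDR_KEY_LEN;

  memcpy(front->l3_addrs, &id, sizeof(id));
  memset(front->l3_addrs + sizeof(id), 0, len - sizeof(id));

  /* protocol stays TCP, ports are zeroed so the entry can never
   * collide with a real offloaded flow
   */
  back->src_port = 0;
  back->dst_port = 0;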

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 .../net/ethernet/netronome/nfp/crypto/fw.h    |  2 +
 .../net/ethernet/netronome/nfp/crypto/tls.c   | 43 +++++++++++++------
 drivers/net/ethernet/netronome/nfp/nfp_net.h  |  3 ++
 3 files changed, 34 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/crypto/fw.h b/drivers/net/ethernet/netronome/nfp/crypto/fw.h
index 192ba907d91b..67413d946c4a 100644
--- a/drivers/net/ethernet/netronome/nfp/crypto/fw.h
+++ b/drivers/net/ethernet/netronome/nfp/crypto/fw.h
@@ -31,6 +31,8 @@ struct nfp_crypto_req_add_front {
 	u8 key_len;
 	__be16 ipver_vlan __packed;
 	u8 l4_proto;
+#define NFP_NET_TLS_NON_ADDR_KEY_LEN	8
+	u8 l3_addrs[0];
 };
 
 struct nfp_crypto_req_add_back {
diff --git a/drivers/net/ethernet/netronome/nfp/crypto/tls.c b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
index b13b3dbd4843..b49405b4af55 100644
--- a/drivers/net/ethernet/netronome/nfp/crypto/tls.c
+++ b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
@@ -155,17 +155,30 @@ nfp_net_tls_set_ipver_vlan(struct nfp_crypto_req_add_front *front, u8 ipver)
 						   NFP_NET_TLS_VLAN_UNUSED));
 }
 
+static void
+nfp_net_tls_assign_conn_id(struct nfp_net *nn,
+			   struct nfp_crypto_req_add_front *front)
+{
+	u32 len;
+	u64 id;
+
+	id = atomic64_inc_return(&nn->ktls_conn_id_gen);
+	len = front->key_len - NFP_NET_TLS_NON_ADDR_KEY_LEN;
+
+	memcpy(front->l3_addrs, &id, sizeof(id));
+	memset(front->l3_addrs + sizeof(id), 0, len - sizeof(id));
+}
+
 static struct nfp_crypto_req_add_back *
-nfp_net_tls_set_ipv4(struct nfp_crypto_req_add_v4 *req, struct sock *sk,
-		     int direction)
+nfp_net_tls_set_ipv4(struct nfp_net *nn, struct nfp_crypto_req_add_v4 *req,
+		     struct sock *sk, int direction)
 {
 	struct inet_sock *inet = inet_sk(sk);
 
 	req->front.key_len += sizeof(__be32) * 2;
 
 	if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
-		req->src_ip = inet->inet_saddr;
-		req->dst_ip = inet->inet_daddr;
+		nfp_net_tls_assign_conn_id(nn, &req->front);
 	} else {
 		req->src_ip = inet->inet_daddr;
 		req->dst_ip = inet->inet_saddr;
@@ -175,8 +188,8 @@ nfp_net_tls_set_ipv4(struct nfp_crypto_req_add_v4 *req, struct sock *sk,
 }
 
 static struct nfp_crypto_req_add_back *
-nfp_net_tls_set_ipv6(struct nfp_crypto_req_add_v6 *req, struct sock *sk,
-		     int direction)
+nfp_net_tls_set_ipv6(struct nfp_net *nn, struct nfp_crypto_req_add_v6 *req,
+		     struct sock *sk, int direction)
 {
 #if IS_ENABLED(CONFIG_IPV6)
 	struct ipv6_pinfo *np = inet6_sk(sk);
@@ -184,8 +197,7 @@ nfp_net_tls_set_ipv6(struct nfp_crypto_req_add_v6 *req, struct sock *sk,
 	req->front.key_len += sizeof(struct in6_addr) * 2;
 
 	if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
-		memcpy(req->src_ip, &np->saddr, sizeof(req->src_ip));
-		memcpy(req->dst_ip, &sk->sk_v6_daddr, sizeof(req->dst_ip));
+		nfp_net_tls_assign_conn_id(nn, &req->front);
 	} else {
 		memcpy(req->src_ip, &sk->sk_v6_daddr, sizeof(req->src_ip));
 		memcpy(req->dst_ip, &np->saddr, sizeof(req->dst_ip));
@@ -205,8 +217,8 @@ nfp_net_tls_set_l4(struct nfp_crypto_req_add_front *front,
 	front->l4_proto = IPPROTO_TCP;
 
 	if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
-		back->src_port = inet->inet_sport;
-		back->dst_port = inet->inet_dport;
+		back->src_port = 0;
+		back->dst_port = 0;
 	} else {
 		back->src_port = inet->inet_dport;
 		back->dst_port = inet->inet_sport;
@@ -260,6 +272,7 @@ nfp_net_tls_add(struct net_device *netdev, struct sock *sk,
 	struct nfp_crypto_reply_add *reply;
 	struct sk_buff *skb;
 	size_t req_sz;
+	void *req;
 	bool ipv6;
 	int err;
 
@@ -302,16 +315,17 @@ nfp_net_tls_add(struct net_device *netdev, struct sock *sk,
 
 	front = (void *)skb->data;
 	front->ep_id = 0;
-	front->key_len = 8;
+	front->key_len = NFP_NET_TLS_NON_ADDR_KEY_LEN;
 	front->opcode = nfp_tls_1_2_dir_to_opcode(direction);
 	memset(front->resv, 0, sizeof(front->resv));
 
 	nfp_net_tls_set_ipver_vlan(front, ipv6 ? 6 : 4);
 
+	req = (void *)skb->data;
 	if (ipv6)
-		back = nfp_net_tls_set_ipv6((void *)skb->data, sk, direction);
+		back = nfp_net_tls_set_ipv6(nn, req, sk, direction);
 	else
-		back = nfp_net_tls_set_ipv4((void *)skb->data, sk, direction);
+		back = nfp_net_tls_set_ipv4(nn, req, sk, direction);
 
 	nfp_net_tls_set_l4(front, back, sk, direction);
 
@@ -329,7 +343,8 @@ nfp_net_tls_add(struct net_device *netdev, struct sock *sk,
 	err = nfp_ccm_mbox_communicate(nn, skb, NFP_CCM_TYPE_CRYPTO_ADD,
 				       sizeof(*reply), sizeof(*reply));
 	if (err) {
-		nn_dp_warn(&nn->dp, "failed to add TLS: %d\n", err);
+		nn_dp_warn(&nn->dp, "failed to add TLS: %d (%d)\n",
+			   err, direction == TLS_OFFLOAD_CTX_DIR_TX);
 		/* communicate frees skb on error */
 		goto err_conn_remove;
 	}
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h
index 0659756bf2bb..5d6c3738b494 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h
@@ -583,6 +583,7 @@ struct nfp_net_dp {
  * @tlv_caps:		Parsed TLV capabilities
  * @ktls_tx_conn_cnt:	Number of offloaded kTLS TX connections
  * @ktls_rx_conn_cnt:	Number of offloaded kTLS RX connections
+ * @ktls_conn_id_gen:	Trivial generator for kTLS connection ids (for TX)
  * @ktls_no_space:	Counter of firmware rejecting kTLS connection due to
  *			lack of space
  * @mbox_cmsg:		Common Control Message via vNIC mailbox state
@@ -670,6 +671,8 @@ struct nfp_net {
 	unsigned int ktls_tx_conn_cnt;
 	unsigned int ktls_rx_conn_cnt;
 
+	atomic64_t ktls_conn_id_gen;
+
 	atomic_t ktls_no_space;
 
 	struct {
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 04/11] nfp: ccm: increase message limits
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
                   ` (2 preceding siblings ...)
  2019-07-09  2:53 ` [PATCH net-next 03/11] nfp: tls: use unique connection ids instead of 4-tuple for TX Jakub Kicinski
@ 2019-07-09  2:53 ` Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 05/11] nfp: tls: count TSO segments separately for the TLS offload Jakub Kicinski
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem
  Cc: netdev, oss-drivers, alexei.starovoitov, Dirk van der Merwe,
	Jakub Kicinski

From: Dirk van der Merwe <dirk.vandermerwe@netronome.com>

Increase the batch limit to consume small message bursts more
effectively.  Practically, the effect on the 'add' messages is not
significant since the mailbox is sized such that the 'add' messages
are still limited to the same order of magnitude that the limit was
originally set for.

Furthermore, increase the queue size limit to 1024 entries. This further
improves the handling of bursts of small control messages.

Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/ccm_mbox.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/ccm_mbox.c b/drivers/net/ethernet/netronome/nfp/ccm_mbox.c
index d160ac794d98..f0783aa9e66e 100644
--- a/drivers/net/ethernet/netronome/nfp/ccm_mbox.c
+++ b/drivers/net/ethernet/netronome/nfp/ccm_mbox.c
@@ -13,7 +13,7 @@
  * form a batch.  Threads come in with CMSG formed in an skb, then
  * enqueue that skb onto the request queue.  If threads skb is first
  * in queue this thread will handle the mailbox operation.  It copies
- * up to 16 messages into the mailbox (making sure that both requests
+ * up to 64 messages into the mailbox (making sure that both requests
  * and replies will fit.  After FW is done processing the batch it
  * copies the data out and wakes waiting threads.
  * If a thread is waiting it either gets its the message completed
@@ -23,9 +23,9 @@
  * to limit potential cache line bounces.
  */
 
-#define NFP_CCM_MBOX_BATCH_LIMIT	16
+#define NFP_CCM_MBOX_BATCH_LIMIT	64
 #define NFP_CCM_TIMEOUT			(NFP_NET_POLL_TIMEOUT * 1000)
-#define NFP_CCM_MAX_QLEN		256
+#define NFP_CCM_MAX_QLEN		1024
 
 enum nfp_net_mbox_cmsg_state {
 	NFP_NET_MBOX_CMSG_STATE_QUEUED,
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 05/11] nfp: tls: count TSO segments separately for the TLS offload
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
                   ` (3 preceding siblings ...)
  2019-07-09  2:53 ` [PATCH net-next 04/11] nfp: ccm: increase message limits Jakub Kicinski
@ 2019-07-09  2:53 ` Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 06/11] net/tls: don't clear TX resync flag on error Jakub Kicinski
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem
  Cc: netdev, oss-drivers, alexei.starovoitov, Jakub Kicinski,
	Dirk van der Merwe

Count the number of successfully submitted TLS segments,
not skbs. This will make it easier to compare the TLS
encryption count against other counters.
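
Condensed form of the accounting change (the full hunk is in the
diff below):

  /* a GSO skb will be cut into gso_segs TLS records by the device,
   * so count segments rather than skbs
   */
  unsigned int segs = skb_is_gso(skb) ? skb_shinfo(skb)->gso_segs : 1;

  r_vec->hw_tls_tx += segs;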

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/nfp_net_common.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 270334427448..9a4421df9be9 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -880,7 +880,10 @@ nfp_net_tls_tx(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec,
 
 	if (datalen) {
 		u64_stats_update_begin(&r_vec->tx_sync);
-		r_vec->hw_tls_tx++;
+		if (!skb_is_gso(skb))
+			r_vec->hw_tls_tx++;
+		else
+			r_vec->hw_tls_tx += skb_shinfo(skb)->gso_segs;
 		u64_stats_update_end(&r_vec->tx_sync);
 	}
 
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 06/11] net/tls: don't clear TX resync flag on error
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
                   ` (4 preceding siblings ...)
  2019-07-09  2:53 ` [PATCH net-next 05/11] nfp: tls: count TSO segments separately for the TLS offload Jakub Kicinski
@ 2019-07-09  2:53 ` Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 07/11] nfp: tls: don't leave key material in freed FW cmsg skbs Jakub Kicinski
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem
  Cc: netdev, oss-drivers, alexei.starovoitov, Dirk van der Merwe,
	Jakub Kicinski

From: Dirk van der Merwe <dirk.vandermerwe@netronome.com>

Introduce a return code for the tls_dev_resync callback.

When the driver TX resync fails, the kernel can retry the resync
until it succeeds.  This prevents drivers from attempting to offload
TLS packets if the connection is known to be out of sync.

We don't worry about the RX resync since it will be retried
naturally as more encrypted records get received.

Signed-off-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com>
---
 .../net/ethernet/mellanox/mlx5/core/en_accel/tls.c  |  8 +++++---
 drivers/net/ethernet/netronome/nfp/crypto/tls.c     | 13 +++++++++----
 include/net/tls.h                                   |  6 +++---
 net/tls/tls_device.c                                |  8 ++++++--
 4 files changed, 23 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.c
index f8b93b62a7d2..ca07c86427a7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/tls.c
@@ -160,9 +160,9 @@ static void mlx5e_tls_del(struct net_device *netdev,
 				direction == TLS_OFFLOAD_CTX_DIR_TX);
 }
 
-static void mlx5e_tls_resync(struct net_device *netdev, struct sock *sk,
-			     u32 seq, u8 *rcd_sn_data,
-			     enum tls_offload_ctx_dir direction)
+static int mlx5e_tls_resync(struct net_device *netdev, struct sock *sk,
+			    u32 seq, u8 *rcd_sn_data,
+			    enum tls_offload_ctx_dir direction)
 {
 	struct tls_context *tls_ctx = tls_get_ctx(sk);
 	struct mlx5e_priv *priv = netdev_priv(netdev);
@@ -177,6 +177,8 @@ static void mlx5e_tls_resync(struct net_device *netdev, struct sock *sk,
 		    be64_to_cpu(rcd_sn));
 	mlx5_accel_tls_resync_rx(priv->mdev, rx_ctx->handle, seq, rcd_sn);
 	atomic64_inc(&priv->tls->sw_stats.rx_tls_resync_reply);
+
+	return 0;
 }
 
 static const struct tlsdev_ops mlx5e_tls_ops = {
diff --git a/drivers/net/ethernet/netronome/nfp/crypto/tls.c b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
index b49405b4af55..d448c6de8ea4 100644
--- a/drivers/net/ethernet/netronome/nfp/crypto/tls.c
+++ b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
@@ -403,7 +403,7 @@ nfp_net_tls_del(struct net_device *netdev, struct tls_context *tls_ctx,
 	nfp_net_tls_del_fw(nn, ntls->fw_handle);
 }
 
-static void
+static int
 nfp_net_tls_resync(struct net_device *netdev, struct sock *sk, u32 seq,
 		   u8 *rcd_sn, enum tls_offload_ctx_dir direction)
 {
@@ -412,11 +412,12 @@ nfp_net_tls_resync(struct net_device *netdev, struct sock *sk, u32 seq,
 	struct nfp_crypto_req_update *req;
 	struct sk_buff *skb;
 	gfp_t flags;
+	int err;
 
 	flags = direction == TLS_OFFLOAD_CTX_DIR_TX ? GFP_KERNEL : GFP_ATOMIC;
 	skb = nfp_net_tls_alloc_simple(nn, sizeof(*req), flags);
 	if (!skb)
-		return;
+		return -ENOMEM;
 
 	ntls = tls_driver_ctx(sk, direction);
 	req = (void *)skb->data;
@@ -428,13 +429,17 @@ nfp_net_tls_resync(struct net_device *netdev, struct sock *sk, u32 seq,
 	memcpy(req->rec_no, rcd_sn, sizeof(req->rec_no));
 
 	if (direction == TLS_OFFLOAD_CTX_DIR_TX) {
-		nfp_net_tls_communicate_simple(nn, skb, "sync",
-					       NFP_CCM_TYPE_CRYPTO_UPDATE);
+		err = nfp_net_tls_communicate_simple(nn, skb, "sync",
+						     NFP_CCM_TYPE_CRYPTO_UPDATE);
+		if (err)
+			return err;
 		ntls->next_seq = seq;
 	} else {
 		nfp_ccm_mbox_post(nn, skb, NFP_CCM_TYPE_CRYPTO_UPDATE,
 				  sizeof(struct nfp_crypto_reply_simple));
 	}
+
+	return 0;
 }
 
 static const struct tlsdev_ops nfp_net_tls_ops = {
diff --git a/include/net/tls.h b/include/net/tls.h
index 0279938386ab..0e4b9624361b 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -304,9 +304,9 @@ struct tlsdev_ops {
 	void (*tls_dev_del)(struct net_device *netdev,
 			    struct tls_context *ctx,
 			    enum tls_offload_ctx_dir direction);
-	void (*tls_dev_resync)(struct net_device *netdev,
-			       struct sock *sk, u32 seq, u8 *rcd_sn,
-			       enum tls_offload_ctx_dir direction);
+	int (*tls_dev_resync)(struct net_device *netdev,
+			      struct sock *sk, u32 seq, u8 *rcd_sn,
+			      enum tls_offload_ctx_dir direction);
 };
 
 enum tls_offload_sync_type {
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 40076f423dcb..56135f3ff4ff 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -214,6 +214,7 @@ static void tls_device_resync_tx(struct sock *sk, struct tls_context *tls_ctx,
 {
 	struct net_device *netdev;
 	struct sk_buff *skb;
+	int err = 0;
 	u8 *rcd_sn;
 
 	skb = tcp_write_queue_tail(sk);
@@ -225,9 +226,12 @@ static void tls_device_resync_tx(struct sock *sk, struct tls_context *tls_ctx,
 	down_read(&device_offload_lock);
 	netdev = tls_ctx->netdev;
 	if (netdev)
-		netdev->tlsdev_ops->tls_dev_resync(netdev, sk, seq, rcd_sn,
-						   TLS_OFFLOAD_CTX_DIR_TX);
+		err = netdev->tlsdev_ops->tls_dev_resync(netdev, sk, seq,
+							 rcd_sn,
+							 TLS_OFFLOAD_CTX_DIR_TX);
 	up_read(&device_offload_lock);
+	if (err)
+		return;
 
 	clear_bit_unlock(TLS_TX_SYNC_SCHED, &tls_ctx->flags);
 }
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 07/11] nfp: tls: don't leave key material in freed FW cmsg skbs
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
                   ` (5 preceding siblings ...)
  2019-07-09  2:53 ` [PATCH net-next 06/11] net/tls: don't clear TX resync flag on error Jakub Kicinski
@ 2019-07-09  2:53 ` Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 08/11] net/tls: add missing prot info init Jakub Kicinski
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem
  Cc: netdev, oss-drivers, alexei.starovoitov, Jakub Kicinski,
	Dirk van der Merwe

Make sure the contents of the skb which carried the key material
to the FW are cleared.
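
For context, the diff below uses memzero_explicit() rather than a
plain memset(); a minimal sketch of the difference follows
(wipe_key_material() is a hypothetical helper, not part of the
patch):

  #include <linux/string.h>

  /* A plain memset() on memory the compiler considers dead afterwards
   * may legally be optimized away, leaving key bytes behind in the
   * freed skb data; memzero_explicit() adds a barrier so the wipe
   * always runs.
   */
  static void wipe_key_material(u8 *key, size_t len)
  {
  	memzero_explicit(key, len);
  }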

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/crypto/tls.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/netronome/nfp/crypto/tls.c b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
index d448c6de8ea4..96a96b35c0ca 100644
--- a/drivers/net/ethernet/netronome/nfp/crypto/tls.c
+++ b/drivers/net/ethernet/netronome/nfp/crypto/tls.c
@@ -4,6 +4,7 @@
 #include <linux/bitfield.h>
 #include <linux/ipv6.h>
 #include <linux/skbuff.h>
+#include <linux/string.h>
 #include <net/tls.h>
 
 #include "../ccm.h"
@@ -340,8 +341,22 @@ nfp_net_tls_add(struct net_device *netdev, struct sock *sk,
 	memcpy(&back->salt, tls_ci->salt, TLS_CIPHER_AES_GCM_128_SALT_SIZE);
 	memcpy(back->rec_no, tls_ci->rec_seq, sizeof(tls_ci->rec_seq));
 
+	/* Get an extra ref on the skb so we can wipe the key after */
+	skb_get(skb);
+
 	err = nfp_ccm_mbox_communicate(nn, skb, NFP_CCM_TYPE_CRYPTO_ADD,
 				       sizeof(*reply), sizeof(*reply));
+	reply = (void *)skb->data;
+
+	/* We depend on CCM MBOX code not reallocating skb we sent
+	 * so we can clear the key material out of the memory.
+	 */
+	if (!WARN_ON_ONCE((u8 *)back < skb->head ||
+			  (u8 *)back > skb_end_pointer(skb)) &&
+	    !WARN_ON_ONCE((u8 *)&reply[1] > (u8 *)back))
+		memzero_explicit(back, sizeof(*back));
+	dev_consume_skb_any(skb); /* the extra ref from skb_get() above */
+
 	if (err) {
 		nn_dp_warn(&nn->dp, "failed to add TLS: %d (%d)\n",
 			   err, direction == TLS_OFFLOAD_CTX_DIR_TX);
@@ -349,7 +364,6 @@ nfp_net_tls_add(struct net_device *netdev, struct sock *sk,
 		goto err_conn_remove;
 	}
 
-	reply = (void *)skb->data;
 	err = -be32_to_cpu(reply->error);
 	if (err) {
 		if (err == -ENOSPC) {
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 08/11] net/tls: add missing prot info init
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
                   ` (6 preceding siblings ...)
  2019-07-09  2:53 ` [PATCH net-next 07/11] nfp: tls: don't leave key material in freed FW cmsg skbs Jakub Kicinski
@ 2019-07-09  2:53 ` Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 09/11] nfp: tls: avoid one of the ifdefs for TLS Jakub Kicinski
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem
  Cc: netdev, oss-drivers, alexei.starovoitov, Jakub Kicinski,
	Dirk van der Merwe

It turns out TLS_TX in HW offload mode does not initialize
tls_prot_info.  Since commit 9cd81988cce1 ("net/tls: use version
from prot") we actually use this field on the datapath.  Luckily we
always compare it to TLS 1.3, and assume 1.2 otherwise, so since
zero is not equal to 1.3, everything worked fine.
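
A minimal sketch of the datapath pattern the last sentence refers to
(tls_version_is_1_3() is a hypothetical helper for illustration, not
kernel code):

  static bool tls_version_is_1_3(const struct tls_prot_info *prot)
  {
  	/* the datapath only ever tests against TLS 1.3; anything else,
  	 * including an uninitialized 0, is handled as TLS 1.2, which is
  	 * why the missing init went unnoticed
  	 */
  	return prot->version == TLS_1_3_VERSION;
  }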

Fixes: 9cd81988cce1 ("net/tls: use version from prot")
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 net/tls/tls_device.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 56135f3ff4ff..06c30f677f7a 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -878,6 +878,8 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
 		goto free_offload_ctx;
 	}
 
+	prot->version = crypto_info->version;
+	prot->cipher_type = crypto_info->cipher_type;
 	prot->prepend_size = TLS_HEADER_SIZE + nonce_size;
 	prot->tag_size = tag_size;
 	prot->overhead_size = prot->prepend_size + prot->tag_size;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 09/11] nfp: tls: avoid one of the ifdefs for TLS
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
                   ` (7 preceding siblings ...)
  2019-07-09  2:53 ` [PATCH net-next 08/11] net/tls: add missing prot info init Jakub Kicinski
@ 2019-07-09  2:53 ` Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 10/11] nfp: tls: undo TLS sequence tracking when dropping the frame Jakub Kicinski
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem
  Cc: netdev, oss-drivers, alexei.starovoitov, Jakub Kicinski,
	Dirk van der Merwe

Move the #ifdef CONFIG_TLS_DEVICE a little so we can eliminate
the other one.

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 drivers/net/ethernet/netronome/nfp/nfp_net_common.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 9a4421df9be9..54dd98b2d645 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -822,11 +822,11 @@ static void nfp_net_tx_csum(struct nfp_net_dp *dp,
 	u64_stats_update_end(&r_vec->tx_sync);
 }
 
-#ifdef CONFIG_TLS_DEVICE
 static struct sk_buff *
 nfp_net_tls_tx(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec,
 	       struct sk_buff *skb, u64 *tls_handle, int *nr_frags)
 {
+#ifdef CONFIG_TLS_DEVICE
 	struct nfp_net_tls_offload_ctx *ntls;
 	struct sk_buff *nskb;
 	bool resync_pending;
@@ -889,9 +889,9 @@ nfp_net_tls_tx(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec,
 
 	memcpy(tls_handle, ntls->fw_handle, sizeof(ntls->fw_handle));
 	ntls->next_seq += datalen;
+#endif
 	return skb;
 }
-#endif
 
 static void nfp_net_tx_xmit_more_flush(struct nfp_net_tx_ring *tx_ring)
 {
@@ -985,13 +985,11 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev)
 		return NETDEV_TX_BUSY;
 	}
 
-#ifdef CONFIG_TLS_DEVICE
 	skb = nfp_net_tls_tx(dp, r_vec, skb, &tls_handle, &nr_frags);
 	if (unlikely(!skb)) {
 		nfp_net_tx_xmit_more_flush(tx_ring);
 		return NETDEV_TX_OK;
 	}
-#endif
 
 	md_bytes = nfp_net_prep_tx_meta(skb, tls_handle);
 	if (unlikely(md_bytes < 0))
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 10/11] nfp: tls: undo TLS sequence tracking when dropping the frame
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
                   ` (8 preceding siblings ...)
  2019-07-09  2:53 ` [PATCH net-next 09/11] nfp: tls: avoid one of the ifdefs for TLS Jakub Kicinski
@ 2019-07-09  2:53 ` Jakub Kicinski
  2019-07-09  2:53 ` [PATCH net-next 11/11] net/tls: fix socket wmem accounting on fallback with netem Jakub Kicinski
  2019-07-09  3:39 ` [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support David Miller
  11 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem
  Cc: netdev, oss-drivers, alexei.starovoitov, Jakub Kicinski,
	Dirk van der Merwe

If the driver has to drop a TLS frame it needs to undo the TCP
sequence tracking changes, otherwise the device will receive
segments out of order and drop them.
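
A rough sketch of the pairing between the tracking and the undo,
with ntls, seq and datalen as used in nfp_net_tls_tx() (condensed;
the real undo helper is in the diff below):

  /* nfp_net_tls_tx() advances the driver's idea of the next TCP
   * sequence number when it prepares a frame:
   */
  ntls->next_seq += datalen;

  /* ...so if that frame is later dropped (e.g. DMA mapping error),
   * the undo helper rewinds it:
   */
  if (ntls->next_seq == seq + datalen)
  	ntls->next_seq = seq;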

Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 .../ethernet/netronome/nfp/nfp_net_common.c   | 23 +++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 54dd98b2d645..9903805717da 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -893,6 +893,28 @@ nfp_net_tls_tx(struct nfp_net_dp *dp, struct nfp_net_r_vector *r_vec,
 	return skb;
 }
 
+static void nfp_net_tls_tx_undo(struct sk_buff *skb, u64 tls_handle)
+{
+#ifdef CONFIG_TLS_DEVICE
+	struct nfp_net_tls_offload_ctx *ntls;
+	u32 datalen, seq;
+
+	if (!tls_handle)
+		return;
+	if (WARN_ON_ONCE(!skb->sk || !tls_is_sk_tx_device_offloaded(skb->sk)))
+		return;
+
+	datalen = skb->len - (skb_transport_offset(skb) + tcp_hdrlen(skb));
+	seq = ntohl(tcp_hdr(skb)->seq);
+
+	ntls = tls_driver_ctx(skb->sk, TLS_OFFLOAD_CTX_DIR_TX);
+	if (ntls->next_seq == seq + datalen)
+		ntls->next_seq = seq;
+	else
+		WARN_ON_ONCE(1);
+#endif
+}
+
 static void nfp_net_tx_xmit_more_flush(struct nfp_net_tx_ring *tx_ring)
 {
 	wmb();
@@ -1102,6 +1124,7 @@ static int nfp_net_tx(struct sk_buff *skb, struct net_device *netdev)
 	u64_stats_update_begin(&r_vec->tx_sync);
 	r_vec->tx_errors++;
 	u64_stats_update_end(&r_vec->tx_sync);
+	nfp_net_tls_tx_undo(skb, tls_handle);
 	dev_kfree_skb_any(skb);
 	return NETDEV_TX_OK;
 }
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH net-next 11/11] net/tls: fix socket wmem accounting on fallback with netem
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
                   ` (9 preceding siblings ...)
  2019-07-09  2:53 ` [PATCH net-next 10/11] nfp: tls: undo TLS sequence tracking when dropping the frame Jakub Kicinski
@ 2019-07-09  2:53 ` Jakub Kicinski
  2019-07-09  3:39 ` [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support David Miller
  11 siblings, 0 replies; 13+ messages in thread
From: Jakub Kicinski @ 2019-07-09  2:53 UTC (permalink / raw)
  To: davem
  Cc: netdev, oss-drivers, alexei.starovoitov, Jakub Kicinski,
	Dirk van der Merwe

netem runs skb_orphan_partial() which "disconnects" the skb
from normal TCP write memory accounting.  We should not adjust
sk->sk_wmem_alloc on the fallback path for such skbs.
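
Roughly what skb_orphan_partial() does to such an skb (heavily
simplified, not the real implementation):

  /* the skb stops being charged against sk->sk_wmem_alloc and instead
   * just pins the socket with a plain reference; its destructor becomes
   * sock_efree, which is what the fallback path below now checks for
   */
  skb->destructor = sock_efree;
  sock_hold(sk);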

Fixes: e8f69799810c ("net/tls: Add generic NIC offload infrastructure")
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
Reviewed-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
---
 net/tls/tls_device_fallback.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
index 1d2d804ac633..9070d68a92a4 100644
--- a/net/tls/tls_device_fallback.c
+++ b/net/tls/tls_device_fallback.c
@@ -209,6 +209,10 @@ static void complete_skb(struct sk_buff *nskb, struct sk_buff *skb, int headln)
 
 	update_chksum(nskb, headln);
 
+	/* sock_efree means skb must gone through skb_orphan_partial() */
+	if (nskb->destructor == sock_efree)
+		return;
+
 	delta = nskb->truesize - skb->truesize;
 	if (likely(delta < 0))
 		WARN_ON_ONCE(refcount_sub_and_test(-delta, &sk->sk_wmem_alloc));
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support
  2019-07-09  2:53 [PATCH net-next 00/11] nfp: tls: fixes for initial TLS support Jakub Kicinski
                   ` (10 preceding siblings ...)
  2019-07-09  2:53 ` [PATCH net-next 11/11] net/tls: fix socket wmem accounting on fallback with netem Jakub Kicinski
@ 2019-07-09  3:39 ` David Miller
  11 siblings, 0 replies; 13+ messages in thread
From: David Miller @ 2019-07-09  3:39 UTC (permalink / raw)
  To: jakub.kicinski; +Cc: netdev, oss-drivers, alexei.starovoitov

From: Jakub Kicinski <jakub.kicinski@netronome.com>
Date: Mon,  8 Jul 2019 19:53:07 -0700

> This series brings various fixes to nfp tls offload recently added
> to net-next.

Series applied, thanks.

^ permalink raw reply	[flat|nested] 13+ messages in thread
