netdev.vger.kernel.org archive mirror
* [PATCH RFC net-next 0/6] RFC: TLS TX HW offload for Bond
@ 2020-12-29 11:40 Tariq Toukan
  2020-12-29 11:40 ` [PATCH RFC net-next 1/6] net: netdevice: Add operation ndo_sk_get_slave Tariq Toukan
                   ` (5 more replies)
  0 siblings, 6 replies; 7+ messages in thread
From: Tariq Toukan @ 2020-12-29 11:40 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: Saeed Mahameed, Boris Pismenny, netdev, Moshe Shemesh, andy,
	vfalico, j.vosburgh, Tariq Toukan, Tariq Toukan

Hi,

This is an RFC of the series that enables TLS TX HW offload for bond interfaces,
allowing them to benefit from capable slave devices.

To keep tracking of the HW and SW TLS contexts simple, we bind each socket to
a specific slave for the socket's whole lifetime. This is logically valid
(and similar to the SW kTLS behavior) in the following bond configuration,
so we restrict the offload support to it:

((mode == balance-xor) or (mode == 802.3ad))
and xmit_hash_policy == layer3+4.
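
For reference, this restriction boils down to a simple mode/policy test.
A minimal sketch of the check (the actual helper is added as bond_sk_check()
in patch 5):

static bool bond_sk_check(struct bonding *bond)
{
	/* TLS TX device offload is allowed only for balance-xor/802.3ad
	 * with xmit_hash_policy == layer3+4.
	 */
	switch (BOND_MODE(bond)) {
	case BOND_MODE_8023AD:
	case BOND_MODE_XOR:
		if (bond->params.xmit_policy == BOND_XMIT_POLICY_LAYER34)
			return true;
		fallthrough;
	default:
		return false;
	}
}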

Opens/points for discussion:
- A new ndo_sk_get_slave is introduced, as the existing ndo_get_xmit_slave returns
  a slave based on SKB fields, which in theory could give different results
  along the socket's lifetime (see the prototypes quoted after this list).

- In this design, the bond driver has no way to specify its own restrictions on
  when the TLS device offload is supported (depending on mode and xmit_policy).
  The best it can do today is return NULL in bond_sk_get_slave().
  This is not optimal: non-TLS cases might also want to call bond_sk_get_slave()
  in the future, and they might have other restrictions (for example, they might
  be fine working with an xmit_hash_policy other than LAYER34).
  That's another reason why I didn't combine ndo_sk_get_slave/ndo_get_xmit_slave
  into a single operation in this RFC.

- Patch #3 adds two explicit exceptions for bond devices in the TLS module.
  The first is needed because a bond device supports the device offload without
  having a tls_dev_ops structure of its own.
  The second is needed because the TLS context is totally unaware of the bond
  netdev, while SKBs still go through its xmit/validate functions.
  This is not very clean. What do you think?

- This bond design and implementation for the TLS device offload differ from the
  xfrm/ipsec offload: there, bond does have a struct xfrmdev_ops and callbacks of
  its own, and communication is done via the bond rather than directly with the
  lowest device.
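
Regarding the first point above, the difference between the two operations is
easiest to see from their prototypes (both appear in patch 1): the existing one
picks a slave based on SKB fields, while the new one is based only on socket
fields, which do not change over the socket's lifetime:

	struct net_device*	(*ndo_get_xmit_slave)(struct net_device *dev,
						      struct sk_buff *skb,
						      bool all_slaves);
	struct net_device*	(*ndo_sk_get_slave)(struct net_device *dev,
						    struct sock *sk);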

Regards,
Tariq


Tariq Toukan (6):
  net: netdevice: Add operation ndo_sk_get_slave
  net/tls: Device offload to use lowest netdevice in chain
  net/tls: Except bond interface from some TLS checks
  net/bonding: Take IP hash logic into a helper
  net/bonding: Implement ndo_sk_get_slave
  net/bonding: Support TLS TX device offload

 drivers/net/bonding/bond_main.c    | 134 +++++++++++++++++++++++++++--
 drivers/net/bonding/bond_options.c |  27 ++++--
 include/linux/netdevice.h          |   4 +
 include/net/bonding.h              |   2 +
 net/core/dev.c                     |  32 +++++++
 net/tls/tls_device.c               |   4 +-
 net/tls/tls_device_fallback.c      |   3 +-
 7 files changed, 194 insertions(+), 12 deletions(-)

-- 
2.21.0



* [PATCH RFC net-next 1/6] net: netdevice: Add operation ndo_sk_get_slave
  2020-12-29 11:40 [PATCH RFC net-next 0/6] RFC: TLS TX HW offload for Bond Tariq Toukan
@ 2020-12-29 11:40 ` Tariq Toukan
  2020-12-29 11:41 ` [PATCH RFC net-next 2/6] net/tls: Device offload to use lowest netdevice in chain Tariq Toukan
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Tariq Toukan @ 2020-12-29 11:40 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: Saeed Mahameed, Boris Pismenny, netdev, Moshe Shemesh, andy,
	vfalico, j.vosburgh, Tariq Toukan, Tariq Toukan

ndo_sk_get_slave returns a slave given a socket.
Additionally, we implement a helper netdev_sk_get_lowest_dev()
to get the lowest slave netdevice.
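
As a usage sketch, an upper layer that holds a socket and its cached route can
resolve the device to program like this (patch 2 wires exactly this into the
TLS code, in get_netdev_for_sock(); 'dst' and 'sk' below are the socket's
dst_entry and the socket itself):

	struct net_device *netdev = NULL;

	if (likely(dst)) {
		netdev = netdev_sk_get_lowest_dev(dst->dev, sk);
		dev_hold(netdev);
	}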

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 include/linux/netdevice.h |  4 ++++
 net/core/dev.c            | 32 ++++++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 7bf167993c05..5938769c5a97 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1412,6 +1412,8 @@ struct net_device_ops {
 	struct net_device*	(*ndo_get_xmit_slave)(struct net_device *dev,
 						      struct sk_buff *skb,
 						      bool all_slaves);
+	struct net_device*	(*ndo_sk_get_slave)(struct net_device *dev,
+						    struct sock *sk);
 	netdev_features_t	(*ndo_fix_features)(struct net_device *dev,
 						    netdev_features_t features);
 	int			(*ndo_set_features)(struct net_device *dev,
@@ -2876,6 +2878,8 @@ int init_dummy_netdev(struct net_device *dev);
 struct net_device *netdev_get_xmit_slave(struct net_device *dev,
 					 struct sk_buff *skb,
 					 bool all_slaves);
+struct net_device *netdev_sk_get_lowest_dev(struct net_device *dev,
+					    struct sock *sk);
 struct net_device *dev_get_by_index(struct net *net, int ifindex);
 struct net_device *__dev_get_by_index(struct net *net, int ifindex);
 struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex);
diff --git a/net/core/dev.c b/net/core/dev.c
index a46334906c94..a2101945363c 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -8102,6 +8102,38 @@ struct net_device *netdev_get_xmit_slave(struct net_device *dev,
 }
 EXPORT_SYMBOL(netdev_get_xmit_slave);
 
+static struct net_device *netdev_sk_get_slave(struct net_device *dev, struct sock *sk)
+{
+	const struct net_device_ops *ops = dev->netdev_ops;
+
+	if (!ops->ndo_sk_get_slave)
+		return NULL;
+	return ops->ndo_sk_get_slave(dev, sk);
+}
+
+/**
+ * netdev_sk_get_lowest_dev - Get the lowest device in chain given device and socket
+ * @dev: device
+ * @sk: the socket
+ *
+ * @dev is returned if no slave is found.
+ */
+
+struct net_device *netdev_sk_get_lowest_dev(struct net_device *dev,
+					    struct sock *sk)
+{
+	struct net_device *slave;
+
+	slave = netdev_sk_get_slave(dev, sk);
+	while (slave) {
+		dev = slave;
+		slave = netdev_sk_get_slave(dev, sk);
+	}
+
+	return dev;
+}
+EXPORT_SYMBOL(netdev_sk_get_lowest_dev);
+
 static void netdev_adjacent_add_links(struct net_device *dev)
 {
 	struct netdev_adjacent *iter;
-- 
2.21.0



* [PATCH RFC net-next 2/6] net/tls: Device offload to use lowest netdevice in chain
  2020-12-29 11:40 [PATCH RFC net-next 0/6] RFC: TLS TX HW offload for Bond Tariq Toukan
  2020-12-29 11:40 ` [PATCH RFC net-next 1/6] net: netdevice: Add operation ndo_sk_get_slave Tariq Toukan
@ 2020-12-29 11:41 ` Tariq Toukan
  2020-12-29 11:41 ` [PATCH RFC net-next 3/6] net/tls: Except bond interface from some TLS checks Tariq Toukan
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Tariq Toukan @ 2020-12-29 11:41 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: Saeed Mahameed, Boris Pismenny, netdev, Moshe Shemesh, andy,
	vfalico, j.vosburgh, Tariq Toukan, Tariq Toukan

Do not call the tls_dev_ops of upper devices. Instead, ask them
for the proper slave and communicate with it directly.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 net/tls/tls_device.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index f7fb7d2c1de1..75ceea0a41bf 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -113,7 +113,7 @@ static struct net_device *get_netdev_for_sock(struct sock *sk)
 	struct net_device *netdev = NULL;
 
 	if (likely(dst)) {
-		netdev = dst->dev;
+		netdev = netdev_sk_get_lowest_dev(dst->dev, sk);
 		dev_hold(netdev);
 	}
 
-- 
2.21.0



* [PATCH RFC net-next 3/6] net/tls: Except bond interface from some TLS checks
  2020-12-29 11:40 [PATCH RFC net-next 0/6] RFC: TLS TX HW offload for Bond Tariq Toukan
  2020-12-29 11:40 ` [PATCH RFC net-next 1/6] net: netdevice: Add operation ndo_sk_get_slave Tariq Toukan
  2020-12-29 11:41 ` [PATCH RFC net-next 2/6] net/tls: Device offload to use lowest netdevice in chain Tariq Toukan
@ 2020-12-29 11:41 ` Tariq Toukan
  2020-12-29 11:41 ` [PATCH RFC net-next 4/6] net/bonding: Take IP hash logic into a helper Tariq Toukan
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 7+ messages in thread
From: Tariq Toukan @ 2020-12-29 11:41 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: Saeed Mahameed, Boris Pismenny, netdev, Moshe Shemesh, andy,
	vfalico, j.vosburgh, Tariq Toukan, Tariq Toukan

In the tls_dev_event handler, do not enforce the tls_dev_ops requirement for
bond interfaces; they do not provide these ops, as the interaction is done
directly with the slave.

Also, make the validate function pass when it is called with the upper
bond interface.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 net/tls/tls_device.c          | 2 ++
 net/tls/tls_device_fallback.c | 3 ++-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 75ceea0a41bf..d9cd229aa111 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -1329,6 +1329,8 @@ static int tls_dev_event(struct notifier_block *this, unsigned long event,
 	switch (event) {
 	case NETDEV_REGISTER:
 	case NETDEV_FEAT_CHANGE:
+		if (netif_is_bond_master(dev))
+			return NOTIFY_DONE;
 		if ((dev->features & NETIF_F_HW_TLS_RX) &&
 		    !dev->tlsdev_ops->tls_dev_resync)
 			return NOTIFY_BAD;
diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
index d946817ed065..40e4cf321878 100644
--- a/net/tls/tls_device_fallback.c
+++ b/net/tls/tls_device_fallback.c
@@ -424,7 +424,8 @@ struct sk_buff *tls_validate_xmit_skb(struct sock *sk,
 				      struct net_device *dev,
 				      struct sk_buff *skb)
 {
-	if (dev == tls_get_ctx(sk)->netdev)
+	/* TODO: verify slave belongs to the master? */
+	if (dev == tls_get_ctx(sk)->netdev || netif_is_bond_master(dev))
 		return skb;
 
 	return tls_sw_fallback(sk, skb);
-- 
2.21.0



* [PATCH RFC net-next 4/6] net/bonding: Take IP hash logic into a helper
  2020-12-29 11:40 [PATCH RFC net-next 0/6] RFC: TLS TX HW offload for Bond Tariq Toukan
                   ` (2 preceding siblings ...)
  2020-12-29 11:41 ` [PATCH RFC net-next 3/6] net/tls: Except bond interface from some TLS checks Tariq Toukan
@ 2020-12-29 11:41 ` Tariq Toukan
  2020-12-29 11:41 ` [PATCH RFC net-next 5/6] net/bonding: Implement ndo_sk_get_slave Tariq Toukan
  2020-12-29 11:41 ` [PATCH RFC net-next 6/6] net/bonding: Support TLS TX device offload Tariq Toukan
  5 siblings, 0 replies; 7+ messages in thread
From: Tariq Toukan @ 2020-12-29 11:41 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: Saeed Mahameed, Boris Pismenny, netdev, Moshe Shemesh, andy,
	vfalico, j.vosburgh, Tariq Toukan, Tariq Toukan

The L3 hash logic will be used for one more use case in a downstream patch.
Move it into a helper function for better code reuse.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 drivers/net/bonding/bond_main.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 5fe5232cc3f3..8bc7629a2805 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -3539,6 +3539,16 @@ static bool bond_flow_dissect(struct bonding *bond, struct sk_buff *skb,
 	return true;
 }
 
+static u32 bond_ip_hash(u32 hash, struct flow_keys *flow)
+{
+	hash ^= (__force u32)flow_get_u32_dst(flow) ^
+		(__force u32)flow_get_u32_src(flow);
+	hash ^= (hash >> 16);
+	hash ^= (hash >> 8);
+	/* discard lowest hash bit to deal with the common even ports pattern */
+	return hash >> 1;
+}
+
 /**
  * bond_xmit_hash - generate a hash value based on the xmit policy
  * @bond: bonding device
@@ -3569,12 +3579,8 @@ u32 bond_xmit_hash(struct bonding *bond, struct sk_buff *skb)
 		else
 			memcpy(&hash, &flow.ports.ports, sizeof(hash));
 	}
-	hash ^= (__force u32)flow_get_u32_dst(&flow) ^
-		(__force u32)flow_get_u32_src(&flow);
-	hash ^= (hash >> 16);
-	hash ^= (hash >> 8);
 
-	return hash >> 1;
+	return bond_ip_hash(hash, &flow);
 }
 
 /*-------------------------- Device entry points ----------------------------*/
-- 
2.21.0



* [PATCH RFC net-next 5/6] net/bonding: Implement ndo_sk_get_slave
  2020-12-29 11:40 [PATCH RFC net-next 0/6] RFC: TLS TX HW offload for Bond Tariq Toukan
                   ` (3 preceding siblings ...)
  2020-12-29 11:41 ` [PATCH RFC net-next 4/6] net/bonding: Take IP hash logic into a helper Tariq Toukan
@ 2020-12-29 11:41 ` Tariq Toukan
  2020-12-29 11:41 ` [PATCH RFC net-next 6/6] net/bonding: Support TLS TX device offload Tariq Toukan
  5 siblings, 0 replies; 7+ messages in thread
From: Tariq Toukan @ 2020-12-29 11:41 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: Saeed Mahameed, Boris Pismenny, netdev, Moshe Shemesh, andy,
	vfalico, j.vosburgh, Tariq Toukan, Tariq Toukan

Support L3/4 sockets only, with xmit_hash_policy==LAYER34
and modes xor/802.3ad.

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 drivers/net/bonding/bond_main.c | 90 +++++++++++++++++++++++++++++++++
 1 file changed, 90 insertions(+)

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 8bc7629a2805..0303e43e5fcf 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -301,6 +301,19 @@ netdev_tx_t bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb,
 	return dev_queue_xmit(skb);
 }
 
+static bool bond_sk_check(struct bonding *bond)
+{
+	switch (BOND_MODE(bond)) {
+	case BOND_MODE_8023AD:
+	case BOND_MODE_XOR:
+		if (bond->params.xmit_policy == BOND_XMIT_POLICY_LAYER34)
+			return true;
+		fallthrough;
+	default:
+		return false;
+	}
+}
+
 /*---------------------------------- VLAN -----------------------------------*/
 
 /* In the following 2 functions, bond_vlan_rx_add_vid and bond_vlan_rx_kill_vid,
@@ -4553,6 +4566,82 @@ static struct net_device *bond_xmit_get_slave(struct net_device *master_dev,
 	return NULL;
 }
 
+static void bond_sk_to_flow(struct sock *sk, struct flow_keys *flow)
+{
+	switch (sk->sk_family) {
+#if IS_ENABLED(CONFIG_IPV6)
+	case AF_INET6:
+		if (sk->sk_ipv6only ||
+		    ipv6_addr_type(&sk->sk_v6_daddr) != IPV6_ADDR_MAPPED) {
+			flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+			flow->addrs.v6addrs.src = inet6_sk(sk)->saddr;
+			flow->addrs.v6addrs.dst = sk->sk_v6_daddr;
+			break;
+		}
+		fallthrough;
+#endif
+	default: /* AF_INET */
+		flow->control.addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+		flow->addrs.v4addrs.src = inet_sk(sk)->inet_rcv_saddr;
+		flow->addrs.v4addrs.dst = inet_sk(sk)->inet_daddr;
+		break;
+	}
+
+	flow->ports.src = inet_sk(sk)->inet_sport;
+	flow->ports.dst = inet_sk(sk)->inet_dport;
+}
+
+/**
+ * bond_sk_hash_l34 - generate a hash value based on the socket's L3 and L4 fields
+ * @sk: socket to use for headers
+ *
+ * This function will extract the necessary field from the socket and use
+ * them to generate a hash based on the LAYER34 xmit_policy.
+ * Assumes that sk is a TCP or UDP socket.
+ */
+static u32 bond_sk_hash_l34(struct sock *sk)
+{
+	struct flow_keys flow;
+	u32 hash;
+
+	bond_sk_to_flow(sk, &flow);
+
+	/* L4 */
+	memcpy(&hash, &flow.ports.ports, sizeof(hash));
+	/* L3 */
+	return bond_ip_hash(hash, &flow);
+}
+
+static struct net_device *__bond_sk_get_slave_dev(struct bonding *bond,
+						  struct sock *sk)
+{
+	struct bond_up_slave *slaves;
+	struct slave *slave;
+	unsigned int count;
+	u32 hash;
+
+	slaves = rcu_dereference(bond->usable_slaves);
+	count = slaves ? READ_ONCE(slaves->count) : 0;
+	if (unlikely(!count))
+		return NULL;
+
+	hash = bond_sk_hash_l34(sk);
+	slave = slaves->arr[hash % count];
+
+	return slave->dev;
+}
+
+static struct net_device *bond_sk_get_slave(struct net_device *master_dev,
+					    struct sock *sk)
+{
+	struct bonding *bond = netdev_priv(master_dev);
+
+	if (bond_sk_check(bond))
+		return __bond_sk_get_slave_dev(bond, sk);
+
+	return NULL;
+}
+
 static netdev_tx_t __bond_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct bonding *bond = netdev_priv(dev);
@@ -4689,6 +4778,7 @@ static const struct net_device_ops bond_netdev_ops = {
 	.ndo_fix_features	= bond_fix_features,
 	.ndo_features_check	= passthru_features_check,
 	.ndo_get_xmit_slave	= bond_xmit_get_slave,
+	.ndo_sk_get_slave	= bond_sk_get_slave,
 };
 
 static const struct device_type bond_type = {
-- 
2.21.0



* [PATCH RFC net-next 6/6] net/bonding: Support TLS TX device offload
  2020-12-29 11:40 [PATCH RFC net-next 0/6] RFC: TLS TX HW offload for Bond Tariq Toukan
                   ` (4 preceding siblings ...)
  2020-12-29 11:41 ` [PATCH RFC net-next 5/6] net/bonding: Implement ndo_sk_get_slave Tariq Toukan
@ 2020-12-29 11:41 ` Tariq Toukan
  5 siblings, 0 replies; 7+ messages in thread
From: Tariq Toukan @ 2020-12-29 11:41 UTC (permalink / raw)
  To: David S. Miller, Jakub Kicinski
  Cc: Saeed Mahameed, Boris Pismenny, netdev, Moshe Shemesh, andy,
	vfalico, j.vosburgh, Tariq Toukan, Tariq Toukan

Implement TLS TX device offload for bonding interfaces.
This allows kTLS sockets running on a bond to benefit from the
device offload on capable slaves.

To allow simple and fast maintenance of the TLS context in SW and in the
slave devices, we bind the TLS socket to a specific slave.
We ask the bond device for the socket's slave, and work with the lowest
device in the chain to call the tls_dev_ops operations.

To achieve behavior similar to SW kTLS, we support only the balance-xor
and 802.3ad modes, with xmit_hash_policy=layer3+4.
For this configuration, the SW implementation keeps picking the exact
same slave for all the socket's SKBs.

We keep the bond feature bit independent of the slaves' bits.
In case a non-capable slave is picked, the socket falls back to
SW kTLS.

netdev_update_features() is taken out of the XFRM function so it
is called only once (if needed).
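
Putting the above together, the TX path added here boils down to the
following (condensed from the diff below):

	/* in __bond_start_xmit(): route TLS-offloaded sockets to the slave
	 * that owns their TLS context
	 */
	if (skb->sk && tls_is_sk_tx_device_offloaded(skb->sk))
		return bond_tls_device_xmit(bond, skb, dev);

	/* bond_tls_device_xmit(): xmit via the context's netdev if it is
	 * still enslaved, otherwise drop
	 */
	if (likely(bond_get_slave_by_dev(bond, tls_get_ctx(skb->sk)->netdev)))
		return bond_dev_queue_xmit(bond, skb, tls_get_ctx(skb->sk)->netdev);
	return bond_tx_drop(dev, skb);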

Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
---
 drivers/net/bonding/bond_main.c    | 28 ++++++++++++++++++++++++++++
 drivers/net/bonding/bond_options.c | 27 ++++++++++++++++++++++-----
 include/net/bonding.h              |  2 ++
 3 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 0303e43e5fcf..574ffb147623 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -83,6 +83,9 @@
 #include <net/bonding.h>
 #include <net/bond_3ad.h>
 #include <net/bond_alb.h>
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+#include <net/tls.h>
+#endif
 
 #include "bonding_priv.h"
 
@@ -1225,6 +1228,11 @@ static netdev_features_t bond_fix_features(struct net_device *dev,
 	netdev_features_t mask;
 	struct slave *slave;
 
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+	if ((features & BOND_TLS_FEATURES) && !bond_sk_check(bond))
+		features &= ~BOND_TLS_FEATURES;
+#endif
+
 	mask = features;
 
 	features &= ~NETIF_F_ONE_FOR_ALL;
@@ -4642,6 +4650,16 @@ static struct net_device *bond_sk_get_slave(struct net_device *master_dev,
 	return NULL;
 }
 
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+static netdev_tx_t bond_tls_device_xmit(struct bonding *bond, struct sk_buff *skb,
+					struct net_device *dev)
+{
+	if (likely(bond_get_slave_by_dev(bond, tls_get_ctx(skb->sk)->netdev)))
+		return bond_dev_queue_xmit(bond, skb, tls_get_ctx(skb->sk)->netdev);
+	return bond_tx_drop(dev, skb);
+}
+#endif
+
 static netdev_tx_t __bond_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct bonding *bond = netdev_priv(dev);
@@ -4650,6 +4668,11 @@ static netdev_tx_t __bond_start_xmit(struct sk_buff *skb, struct net_device *dev
 	    !bond_slave_override(bond, skb))
 		return NETDEV_TX_OK;
 
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+	if (skb->sk && tls_is_sk_tx_device_offloaded(skb->sk))
+		return bond_tls_device_xmit(bond, skb, dev);
+#endif
+
 	switch (BOND_MODE(bond)) {
 	case BOND_MODE_ROUNDROBIN:
 		return bond_xmit_roundrobin(skb, dev);
@@ -4850,6 +4873,11 @@ void bond_setup(struct net_device *bond_dev)
 	if (BOND_MODE(bond) == BOND_MODE_ACTIVEBACKUP)
 		bond_dev->features |= BOND_XFRM_FEATURES;
 #endif /* CONFIG_XFRM_OFFLOAD */
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+	bond_dev->hw_features |= BOND_TLS_FEATURES;
+	if (bond_sk_check(bond))
+		bond_dev->features |= BOND_TLS_FEATURES;
+#endif
 }
 
 /* Destroy a bonding device.
diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
index a4e4e15f574d..8e5851289380 100644
--- a/drivers/net/bonding/bond_options.c
+++ b/drivers/net/bonding/bond_options.c
@@ -745,17 +745,22 @@ const struct bond_option *bond_opt_get(unsigned int option)
 	return &bond_opts[option];
 }
 
-static void bond_set_xfrm_features(struct net_device *bond_dev, u64 mode)
+static bool bond_set_xfrm_features(struct net_device *bond_dev, u64 mode)
 {
 	if (!IS_ENABLED(CONFIG_XFRM_OFFLOAD))
-		return;
+		return false;
 
 	if (mode == BOND_MODE_ACTIVEBACKUP)
 		bond_dev->wanted_features |= BOND_XFRM_FEATURES;
 	else
 		bond_dev->wanted_features &= ~BOND_XFRM_FEATURES;
 
-	netdev_update_features(bond_dev);
+	return true;
+}
+
+static bool bond_set_tls_features(struct net_device *bond_dev, u64 mode)
+{
+	return IS_ENABLED(CONFIG_TLS_DEVICE);
 }
 
 static int bond_option_mode_set(struct bonding *bond,
@@ -780,8 +785,15 @@ static int bond_option_mode_set(struct bonding *bond,
 	if (newval->value == BOND_MODE_ALB)
 		bond->params.tlb_dynamic_lb = 1;
 
-	if (bond->dev->reg_state == NETREG_REGISTERED)
-		bond_set_xfrm_features(bond->dev, newval->value);
+	if (bond->dev->reg_state == NETREG_REGISTERED) {
+		bool update = false;
+
+		update |= bond_set_xfrm_features(bond->dev, newval->value);
+		update |= bond_set_tls_features(bond->dev, newval->value);
+
+		if (update)
+			netdev_update_features(bond->dev);
+	}
 
 	/* don't cache arp_validate between modes */
 	bond->params.arp_validate = BOND_ARP_VALIDATE_NONE;
@@ -1219,6 +1231,11 @@ static int bond_option_xmit_hash_policy_set(struct bonding *bond,
 		   newval->string, newval->value);
 	bond->params.xmit_policy = newval->value;
 
+#if IS_ENABLED(CONFIG_TLS_DEVICE)
+	if (bond->dev->reg_state == NETREG_REGISTERED)
+		netdev_change_features(bond->dev);
+#endif
+
 	return 0;
 }
 
diff --git a/include/net/bonding.h b/include/net/bonding.h
index adc3da776970..60d91d7fdc3a 100644
--- a/include/net/bonding.h
+++ b/include/net/bonding.h
@@ -89,6 +89,8 @@
 #define BOND_XFRM_FEATURES (NETIF_F_HW_ESP | NETIF_F_HW_ESP_TX_CSUM | \
 			    NETIF_F_GSO_ESP)
 
+#define BOND_TLS_FEATURES (NETIF_F_HW_TLS_TX)
+
 #ifdef CONFIG_NET_POLL_CONTROLLER
 extern atomic_t netpoll_block_tx;
 
-- 
2.21.0


