* [PATCH 0/5] bonding: locks
@ 2016-05-05 15:14 Bernard Iremonger
  2016-05-05 15:14 ` [PATCH 1/5] bonding: replace spinlock with read/write lock Bernard Iremonger
                   ` (5 more replies)
  0 siblings, 6 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-05 15:14 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, Bernard Iremonger

Replace the spinlock with a read/write lock.
Add read/write locks where needed to protect
active_slave_count and active_slaves[].
With the read/write locks in place, remove the memcpy of slaves.
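
For reference, a minimal sketch of the locking pattern this series
applies (illustrative only, not part of the patches; the helper names
burst_path/control_path are invented, the fields match the code below):

#include <rte_ethdev.h>
#include <rte_rwlock.h>

struct bond_dev_private {
	rte_rwlock_t rwlock;	/* protects the active slave list */
	uint8_t active_slave_count;
	uint8_t active_slaves[RTE_MAX_ETHPORTS];
	/* ... */
};

/* data path: many lcores may read the slave list concurrently */
static void burst_path(struct bond_dev_private *internals)
{
	rte_rwlock_read_lock(&internals->rwlock);
	/* iterate active_slaves[0..active_slave_count) safely here */
	rte_rwlock_read_unlock(&internals->rwlock);
}

/* control path: exclusive access while mutating the slave list */
static void control_path(struct bond_dev_private *internals)
{
	rte_rwlock_write_lock(&internals->rwlock);
	/* update active_slaves[] / active_slave_count */
	rte_rwlock_write_unlock(&internals->rwlock);
}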

Bernard Iremonger (5):
  bonding: replace spinlock with read/write lock
  bonding: add read/write lock to rx/tx burst functions
  bonding: remove memcopy of slaves from rx/tx burst function
  bonding: add read/write lock to stop function
  bonding: add read/write lock to the link_update function

 drivers/net/bonding/rte_eth_bond_api.c     |  10 +-
 drivers/net/bonding/rte_eth_bond_pmd.c     | 219 ++++++++++++++++-------------
 drivers/net/bonding/rte_eth_bond_private.h |   6 +-
 3 files changed, 131 insertions(+), 104 deletions(-)

-- 
2.6.3

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH 1/5] bonding: replace spinlock with read/write lock
  2016-05-05 15:14 [PATCH 0/5] bonding: locks Bernard Iremonger
@ 2016-05-05 15:14 ` Bernard Iremonger
  2016-05-05 17:12   ` Stephen Hemminger
  2016-05-05 15:14 ` [PATCH 2/5] bonding: add read/write lock to rx/tx burst functions Bernard Iremonger
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-05 15:14 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, Bernard Iremonger

Fixes: a45b288ef21a ("bond: support link status polling")
Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/bonding/rte_eth_bond_api.c     | 10 +++---
 drivers/net/bonding/rte_eth_bond_pmd.c     | 57 +++++++++++++++---------------
 drivers/net/bonding/rte_eth_bond_private.h |  6 ++--
 3 files changed, 36 insertions(+), 37 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index d3bda35..c77626d 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -227,7 +227,7 @@ rte_eth_bond_create(const char *name, uint8_t mode, uint8_t socket_id)
 	eth_dev->data->drv_name = pmd_bond_driver_name;
 	eth_dev->data->numa_node =  socket_id;
 
-	rte_spinlock_init(&internals->lock);
+	rte_rwlock_init(&internals->rwlock);
 
 	internals->port_id = eth_dev->data->port_id;
 	internals->mode = BONDING_MODE_INVALID;
@@ -451,11 +451,11 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id)
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	rte_spinlock_lock(&internals->lock);
+	rte_rwlock_write_lock(&internals->rwlock);
 
 	retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
 
-	rte_spinlock_unlock(&internals->lock);
+	rte_rwlock_write_unlock(&internals->rwlock);
 
 	return retval;
 }
@@ -553,11 +553,11 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id)
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	internals = bonded_eth_dev->data->dev_private;
 
-	rte_spinlock_lock(&internals->lock);
+	rte_rwlock_write_lock(&internals->rwlock);
 
 	retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
 
-	rte_spinlock_unlock(&internals->lock);
+	rte_rwlock_write_unlock(&internals->rwlock);
 
 	return retval;
 }
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 129f04b..ed6245b 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -1750,37 +1750,36 @@ bond_ethdev_slave_link_status_change_monitor(void *cb_arg)
 		!internals->link_status_polling_enabled)
 		return;
 
-	/* If device is currently being configured then don't check slaves link
-	 * status, wait until next period */
-	if (rte_spinlock_trylock(&internals->lock)) {
-		if (internals->slave_count > 0)
-			polling_slave_found = 0;
+	rte_rwlock_read_lock(&internals->rwlock);
+	if (internals->slave_count > 0)
+		polling_slave_found = 0;
 
-		for (i = 0; i < internals->slave_count; i++) {
-			if (!internals->slaves[i].link_status_poll_enabled)
-				continue;
-
-			slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
-			polling_slave_found = 1;
-
-			/* Update slave link status */
-			(*slave_ethdev->dev_ops->link_update)(slave_ethdev,
-					internals->slaves[i].link_status_wait_to_complete);
-
-			/* if link status has changed since last checked then call lsc
-			 * event callback */
-			if (slave_ethdev->data->dev_link.link_status !=
-					internals->slaves[i].last_link_status) {
-				internals->slaves[i].last_link_status =
-						slave_ethdev->data->dev_link.link_status;
-
-				bond_ethdev_lsc_event_callback(internals->slaves[i].port_id,
-						RTE_ETH_EVENT_INTR_LSC,
-						&bonded_ethdev->data->port_id);
-			}
+	for (i = 0; i < internals->slave_count; i++) {
+		if (!internals->slaves[i].link_status_poll_enabled)
+			continue;
+
+		slave_ethdev = &rte_eth_devices[internals->slaves[i].port_id];
+		polling_slave_found = 1;
+
+		/* Update slave link status */
+		(*slave_ethdev->dev_ops->link_update)(slave_ethdev,
+			internals->slaves[i].link_status_wait_to_complete);
+
+		/* if link status has changed since last checked then call lsc
+		 * event callback
+		 */
+		if (slave_ethdev->data->dev_link.link_status !=
+			internals->slaves[i].last_link_status) {
+			internals->slaves[i].last_link_status =
+				slave_ethdev->data->dev_link.link_status;
+
+			bond_ethdev_lsc_event_callback(
+					internals->slaves[i].port_id,
+					RTE_ETH_EVENT_INTR_LSC,
+					&bonded_ethdev->data->port_id);
 		}
-		rte_spinlock_unlock(&internals->lock);
 	}
+	rte_rwlock_read_unlock(&internals->rwlock);
 
 	if (polling_slave_found)
 		/* Set alarm to continue monitoring link status of slave ethdev's */
diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
index 8312397..72ea679 100644
--- a/drivers/net/bonding/rte_eth_bond_private.h
+++ b/drivers/net/bonding/rte_eth_bond_private.h
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -35,7 +35,7 @@
 #define _RTE_ETH_BOND_PRIVATE_H_
 
 #include <rte_ethdev.h>
-#include <rte_spinlock.h>
+#include <rte_rwlock.h>
 
 #include "rte_eth_bond.h"
 #include "rte_eth_bond_8023ad_private.h"
@@ -115,7 +115,7 @@ struct bond_dev_private {
 	uint8_t port_id;					/**< Port Id of Bonded Port */
 	uint8_t mode;						/**< Link Bonding Mode */
 
-	rte_spinlock_t lock;
+	rte_rwlock_t rwlock;
 
 	uint8_t primary_port;				/**< Primary Slave Port */
 	uint8_t current_primary_port;		/**< Primary Slave Port */
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 2/5] bonding: add read/write lock to rx/tx burst functions
  2016-05-05 15:14 [PATCH 0/5] bonding: locks Bernard Iremonger
  2016-05-05 15:14 ` [PATCH 1/5] bonding: replace spinlock with read/write lock Bernard Iremonger
@ 2016-05-05 15:14 ` Bernard Iremonger
  2016-05-05 15:14 ` [PATCH 3/5] bonding: remove memcopy of slaves from rx/tx burst function Bernard Iremonger
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-05 15:14 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, Bernard Iremonger

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 112 +++++++++++++++++++++++++--------
 1 file changed, 85 insertions(+), 27 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index ed6245b..c3e772c 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -92,7 +92,7 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 	internals = bd_rx_q->dev_private;
 
-
+	rte_rwlock_read_lock(&internals->rwlock);
 	for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
 		/* Offset of pointer to *bufs increases as packets are received
 		 * from other slaves */
@@ -103,6 +103,7 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			nb_pkts -= num_rx_slave;
 		}
 	}
+	rte_rwlock_read_unlock(&internals->rwlock);
 
 	return num_rx_total;
 }
@@ -112,14 +113,20 @@ bond_ethdev_rx_burst_active_backup(void *queue, struct rte_mbuf **bufs,
 		uint16_t nb_pkts)
 {
 	struct bond_dev_private *internals;
+	uint16_t num_rx_total;
 
 	/* Cast to structure, containing bonded device's port id and queue id */
 	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
 
 	internals = bd_rx_q->dev_private;
+	rte_rwlock_read_lock(&internals->rwlock);
+
+	num_rx_total = rte_eth_rx_burst(internals->current_primary_port,
+					bd_rx_q->queue_id, bufs, nb_pkts);
 
-	return rte_eth_rx_burst(internals->current_primary_port,
-			bd_rx_q->queue_id, bufs, nb_pkts);
+	rte_rwlock_read_unlock(&internals->rwlock);
+
+	return num_rx_total;
 }
 
 static uint16_t
@@ -149,12 +156,17 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * slave_count);
 
-	for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
+	rte_rwlock_read_lock(&internals->rwlock);
+	for (i = 0; i < internals->active_slave_count && num_rx_total < nb_pkts;
+		 i++) {
 		j = num_rx_total;
-		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[i]], COLLECTING);
+		collecting = ACTOR_STATE(
+				&mode_8023ad_ports[internals->active_slaves[i]],
+				COLLECTING);
 
 		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[i], bd_rx_q->queue_id,
+		num_rx_total += rte_eth_rx_burst(internals->active_slaves[i],
+				bd_rx_q->queue_id,
 				&bufs[num_rx_total], nb_pkts - num_rx_total);
 
 		for (k = j; k < 2 && k < num_rx_total; k++)
@@ -175,7 +187,9 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 					!is_same_ether_addr(&bond_mac, &hdr->d_addr)))) {
 
 				if (hdr->ether_type == ether_type_slow_be) {
-					bond_mode_8023ad_handle_slow_pkt(internals, slaves[i],
+					bond_mode_8023ad_handle_slow_pkt(
+						internals,
+						internals->active_slaves[i],
 						bufs[j]);
 				} else
 					rte_pktmbuf_free(bufs[j]);
@@ -190,6 +204,7 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 				j++;
 		}
 	}
+	rte_rwlock_read_unlock(&internals->rwlock);
 
 	return num_rx_total;
 }
@@ -408,12 +423,14 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
+	rte_rwlock_read_lock(&internals->rwlock);
 	num_of_slaves = internals->active_slave_count;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * num_of_slaves);
-
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_rwlock_read_unlock(&internals->rwlock);
 		return num_tx_total;
+	}
 
 	/* Populate slaves mbuf with which packets are to be sent on it  */
 	for (i = 0; i < nb_pkts; i++) {
@@ -428,8 +445,10 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	/* Send packet burst on each slave device */
 	for (i = 0; i < num_of_slaves; i++) {
 		if (slave_nb_pkts[i] > 0) {
-			num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
-					slave_bufs[i], slave_nb_pkts[i]);
+			num_tx_slave = rte_eth_tx_burst(
+					internals->active_slaves[i],
+					bd_tx_q->queue_id, slave_bufs[i],
+					slave_nb_pkts[i]);
 
 			/* if tx burst fails move packets to end of bufs */
 			if (unlikely(num_tx_slave < slave_nb_pkts[i])) {
@@ -444,6 +463,7 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 			num_tx_total += num_tx_slave;
 		}
 	}
+	rte_rwlock_read_unlock(&internals->rwlock);
 
 	return num_tx_total;
 }
@@ -454,15 +474,23 @@ bond_ethdev_tx_burst_active_backup(void *queue,
 {
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
+	uint16_t num_tx_total;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	if (internals->active_slave_count < 1)
+	rte_rwlock_read_lock(&internals->rwlock);
+	if (internals->active_slave_count < 1) {
+		rte_rwlock_read_unlock(&internals->rwlock);
 		return 0;
+	}
+
+	num_tx_total = rte_eth_tx_burst(internals->current_primary_port,
+					bd_tx_q->queue_id, bufs, nb_pkts);
 
-	return rte_eth_tx_burst(internals->current_primary_port, bd_tx_q->queue_id,
-			bufs, nb_pkts);
+	rte_rwlock_read_unlock(&internals->rwlock);
+
+	return num_tx_total;
 }
 
 static inline uint16_t
@@ -693,16 +721,19 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 			&rte_eth_devices[internals->primary_port];
 	uint16_t num_tx_total = 0;
 	uint8_t i, j;
-
-	uint8_t num_of_slaves = internals->active_slave_count;
+	uint8_t num_of_slaves;
 	uint8_t slaves[RTE_MAX_ETHPORTS];
 
 	struct ether_hdr *ether_hdr;
 	struct ether_addr primary_slave_addr;
 	struct ether_addr active_slave_addr;
 
-	if (num_of_slaves < 1)
+	rte_rwlock_read_lock(&internals->rwlock);
+	num_of_slaves = internals->active_slave_count;
+	if (num_of_slaves < 1) {
+		rte_rwlock_read_unlock(&internals->rwlock);
 		return num_tx_total;
+	}
 
 	memcpy(slaves, internals->tlb_slaves_order,
 				sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
@@ -716,7 +747,8 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	}
 
 	for (i = 0; i < num_of_slaves; i++) {
-		rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+		rte_eth_macaddr_get(internals->tlb_slaves_order[i],
+				&active_slave_addr);
 		for (j = num_tx_total; j < nb_pkts; j++) {
 			if (j + 3 < nb_pkts)
 				rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -729,12 +761,15 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 		}
 
-		num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+		num_tx_total += rte_eth_tx_burst(
+				internals->tlb_slaves_order[i],
+				bd_tx_q->queue_id,
 				bufs + num_tx_total, nb_pkts - num_tx_total);
 
 		if (num_tx_total == nb_pkts)
 			break;
 	}
+	rte_rwlock_read_unlock(&internals->rwlock);
 
 	return num_tx_total;
 }
@@ -836,6 +871,8 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		internals->mode6.ntt = 0;
 	}
 
+	rte_rwlock_read_lock(&internals->rwlock);
+
 	/* Send ARP packets on proper slaves */
 	for (i = 0; i < RTE_MAX_ETHPORTS; i++) {
 		if (slave_bufs_pkts[i] > 0) {
@@ -876,6 +913,8 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 	}
 
+	rte_rwlock_read_unlock(&internals->rwlock);
+
 	/* Send non-ARP packets using tlb policy */
 	if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
 		num_send = bond_ethdev_tx_burst_tlb(queue,
@@ -916,12 +955,16 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
+	rte_rwlock_read_lock(&internals->rwlock);
 	num_of_slaves = internals->active_slave_count;
+
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * num_of_slaves);
 
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_rwlock_read_unlock(&internals->rwlock);
 		return num_tx_total;
+	}
 
 	/* Populate slaves mbuf with the packets which are to be sent on it  */
 	for (i = 0; i < nb_pkts; i++) {
@@ -935,7 +978,9 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	/* Send packet burst on each slave device */
 	for (i = 0; i < num_of_slaves; i++) {
 		if (slave_nb_pkts[i] > 0) {
-			num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+			num_tx_slave = rte_eth_tx_burst(
+					internals->active_slaves[i],
+					bd_tx_q->queue_id,
 					slave_bufs[i], slave_nb_pkts[i]);
 
 			/* if tx burst fails move packets to end of bufs */
@@ -952,6 +997,7 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 		}
 	}
 
+	rte_rwlock_read_unlock(&internals->rwlock);
 	return num_tx_total;
 }
 
@@ -986,15 +1032,20 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
+	rte_rwlock_read_lock(&internals->rwlock);
 	num_of_slaves = internals->active_slave_count;
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_rwlock_read_unlock(&internals->rwlock);
 		return num_tx_total;
+	}
 
 	memcpy(slaves, internals->active_slaves, sizeof(slaves[0]) * num_of_slaves);
 
 	distributing_count = 0;
 	for (i = 0; i < num_of_slaves; i++) {
-		struct port *port = &mode_8023ad_ports[slaves[i]];
+		struct port *port;
+
+		port = &mode_8023ad_ports[internals->active_slaves[i]];
 
 		slave_slow_nb_pkts[i] = rte_ring_dequeue_burst(port->tx_ring,
 				slow_pkts, BOND_MODE_8023AX_SLAVE_TX_PKTS);
@@ -1026,7 +1077,8 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 		if (slave_nb_pkts[i] == 0)
 			continue;
 
-		num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+		num_tx_slave = rte_eth_tx_burst(
+				internals->active_slaves[i], bd_tx_q->queue_id,
 				slave_bufs[i], slave_nb_pkts[i]);
 
 		/* If tx burst fails drop slow packets */
@@ -1044,6 +1096,7 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 		}
 	}
 
+	rte_rwlock_read_unlock(&internals->rwlock);
 	return num_tx_total;
 }
 
@@ -1067,12 +1120,15 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
+	rte_rwlock_read_lock(&internals->rwlock);
 	num_of_slaves = internals->active_slave_count;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * num_of_slaves);
 
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_rwlock_read_unlock(&internals->rwlock);
 		return 0;
+	}
 
 	/* Increment reference count on mbufs */
 	for (i = 0; i < nb_pkts; i++)
@@ -1080,8 +1136,9 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 
 	/* Transmit burst on each active slave */
 	for (i = 0; i < num_of_slaves; i++) {
-		slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
-					bufs, nb_pkts);
+		slave_tx_total[i] = rte_eth_tx_burst(
+				internals->active_slaves[i],
+				bd_tx_q->queue_id, bufs, nb_pkts);
 
 		if (unlikely(slave_tx_total[i] < nb_pkts))
 			tx_failed_flag = 1;
@@ -1104,6 +1161,7 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 				while (slave_tx_total[i] < nb_pkts)
 					rte_pktmbuf_free(bufs[slave_tx_total[i]++]);
 
+	rte_rwlock_read_unlock(&internals->rwlock);
 	return max_nb_of_tx_pkts;
 }
 
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 3/5] bonding: remove memcopy of slaves from rx/tx burst function
  2016-05-05 15:14 [PATCH 0/5] bonding: locks Bernard Iremonger
  2016-05-05 15:14 ` [PATCH 1/5] bonding: replace spinlock with read/write lock Bernard Iremonger
  2016-05-05 15:14 ` [PATCH 2/5] bonding: add read/write lock to rx/tx burst functions Bernard Iremonger
@ 2016-05-05 15:14 ` Bernard Iremonger
  2016-05-05 15:14 ` [PATCH 4/5] bonding: add read/write lock to stop function Bernard Iremonger
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-05 15:14 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, Bernard Iremonger

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 41 +---------------------------------
 1 file changed, 1 insertion(+), 40 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index c3e772c..c33a860 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -142,19 +142,11 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 
 	const uint16_t ether_type_slow_be = rte_be_to_cpu_16(ETHER_TYPE_SLOW);
 	uint16_t num_rx_total = 0;	/* Total number of received packets */
-	uint8_t slaves[RTE_MAX_ETHPORTS];
-	uint8_t slave_count;
-
 	uint8_t collecting;  /* current slave collecting status */
 	const uint8_t promisc = internals->promiscuous_en;
 	uint8_t i, j, k;
 
 	rte_eth_macaddr_get(internals->port_id, &bond_mac);
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
-	slave_count = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * slave_count);
 
 	rte_rwlock_read_lock(&internals->rwlock);
 	for (i = 0; i < internals->active_slave_count && num_rx_total < nb_pkts;
@@ -409,10 +401,7 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 
 	struct rte_mbuf *slave_bufs[RTE_MAX_ETHPORTS][nb_pkts];
 	uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
-
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
-
 	uint16_t num_tx_total = 0, num_tx_slave;
 
 	static int slave_idx = 0;
@@ -421,12 +410,8 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	rte_rwlock_read_lock(&internals->rwlock);
 	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
 	if (num_of_slaves < 1) {
 		rte_rwlock_read_unlock(&internals->rwlock);
 		return num_tx_total;
@@ -722,7 +707,6 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint16_t num_tx_total = 0;
 	uint8_t i, j;
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
 
 	struct ether_hdr *ether_hdr;
 	struct ether_addr primary_slave_addr;
@@ -735,10 +719,6 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		return num_tx_total;
 	}
 
-	memcpy(slaves, internals->tlb_slaves_order,
-				sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
-
-
 	ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
 
 	if (nb_pkts > 3) {
@@ -941,8 +921,6 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q;
 
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
-
 	uint16_t num_tx_total = 0, num_tx_slave = 0, tx_fail_total = 0;
 
 	int i, op_slave_id;
@@ -953,14 +931,8 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	rte_rwlock_read_lock(&internals->rwlock);
 	num_of_slaves = internals->active_slave_count;
-
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
-
 	if (num_of_slaves < 1) {
 		rte_rwlock_read_unlock(&internals->rwlock);
 		return num_tx_total;
@@ -1009,7 +981,7 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q;
 
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
+
 	 /* positions in slaves, not ID */
 	uint8_t distributing_offsets[RTE_MAX_ETHPORTS];
 	uint8_t distributing_count;
@@ -1030,8 +1002,6 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	rte_rwlock_read_lock(&internals->rwlock);
 	num_of_slaves = internals->active_slave_count;
 	if (num_of_slaves < 1) {
@@ -1039,8 +1009,6 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 		return num_tx_total;
 	}
 
-	memcpy(slaves, internals->active_slaves, sizeof(slaves[0]) * num_of_slaves);
-
 	distributing_count = 0;
 	for (i = 0; i < num_of_slaves; i++) {
 		struct port *port;
@@ -1108,8 +1076,6 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q;
 
 	uint8_t tx_failed_flag = 0, num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
-
 	uint16_t max_nb_of_tx_pkts = 0;
 
 	int slave_tx_total[RTE_MAX_ETHPORTS];
@@ -1118,13 +1084,8 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	rte_rwlock_read_lock(&internals->rwlock);
 	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
-
 	if (num_of_slaves < 1) {
 		rte_rwlock_read_unlock(&internals->rwlock);
 		return 0;
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 4/5] bonding: add read/write lock to stop function
  2016-05-05 15:14 [PATCH 0/5] bonding: locks Bernard Iremonger
                   ` (2 preceding siblings ...)
  2016-05-05 15:14 ` [PATCH 3/5] bonding: remove memcopy of slaves from rx/tx burst function Bernard Iremonger
@ 2016-05-05 15:14 ` Bernard Iremonger
  2016-05-05 15:15 ` [PATCH 5/5] bonding: add read/write lock to the link_update function Bernard Iremonger
  2016-05-26 16:38 ` [PATCH v2 0/6] bonding: locks Bernard Iremonger
  5 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-05 15:14 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, Bernard Iremonger

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index c33a860..6e1cc10 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1623,6 +1623,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 
 		bond_mode_8023ad_stop(eth_dev);
 
+		rte_rwlock_read_lock(&internals->rwlock);
 		/* Discard all messages to/from mode 4 state machines */
 		for (i = 0; i < internals->active_slave_count; i++) {
 			port = &mode_8023ad_ports[internals->active_slaves[i]];
@@ -1635,15 +1636,20 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 			while (rte_ring_dequeue(port->tx_ring, &pkt) != -ENOENT)
 				rte_pktmbuf_free(pkt);
 		}
+		rte_rwlock_read_unlock(&internals->rwlock);
 	}
 
 	if (internals->mode == BONDING_MODE_TLB ||
 			internals->mode == BONDING_MODE_ALB) {
 		bond_tlb_disable(internals);
+
+		rte_rwlock_read_lock(&internals->rwlock);
 		for (i = 0; i < internals->active_slave_count; i++)
 			tlb_last_obytets[internals->active_slaves[i]] = 0;
+		rte_rwlock_read_unlock(&internals->rwlock);
 	}
 
+	rte_rwlock_write_lock(&internals->rwlock);
 	internals->active_slave_count = 0;
 	internals->link_status_polling_enabled = 0;
 	for (i = 0; i < internals->slave_count; i++)
@@ -1651,6 +1657,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 
 	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
+	rte_rwlock_write_unlock(&internals->rwlock);
 }
 
 void
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH 5/5] bonding: add read/write lock to the link_update function
  2016-05-05 15:14 [PATCH 0/5] bonding: locks Bernard Iremonger
                   ` (3 preceding siblings ...)
  2016-05-05 15:14 ` [PATCH 4/5] bonding: add read/write lock to stop function Bernard Iremonger
@ 2016-05-05 15:15 ` Bernard Iremonger
  2016-05-26 16:38 ` [PATCH v2 0/6] bonding: locks Bernard Iremonger
  5 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-05 15:15 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, Bernard Iremonger

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 6e1cc10..fff6654 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1819,9 +1819,11 @@ bond_ethdev_link_update(struct rte_eth_dev *bonded_eth_dev,
 {
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
 
+	rte_rwlock_read_lock(&internals->rwlock);
 	if (!bonded_eth_dev->data->dev_started ||
 		internals->active_slave_count == 0) {
 		bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		rte_rwlock_read_unlock(&internals->rwlock);
 		return 0;
 	} else {
 		struct rte_eth_dev *slave_eth_dev;
@@ -1840,7 +1842,7 @@ bond_ethdev_link_update(struct rte_eth_dev *bonded_eth_dev,
 
 		bonded_eth_dev->data->dev_link.link_status = link_up;
 	}
-
+	rte_rwlock_read_unlock(&internals->rwlock);
 	return 0;
 }
 
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/5] bonding: replace spinlock with read/write lock
  2016-05-05 15:14 ` [PATCH 1/5] bonding: replace spinlock with read/write lock Bernard Iremonger
@ 2016-05-05 17:12   ` Stephen Hemminger
  2016-05-06 10:32     ` Declan Doherty
  0 siblings, 1 reply; 42+ messages in thread
From: Stephen Hemminger @ 2016-05-05 17:12 UTC (permalink / raw)
  To: Bernard Iremonger; +Cc: dev, declan.doherty

On Thu,  5 May 2016 16:14:56 +0100
Bernard Iremonger <bernard.iremonger@intel.com> wrote:

> Fixes: a45b288ef21a ("bond: support link status polling")
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>

You know an uncontested reader/writer lock is significantly slower
than a spinlock.
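
(A quick way to sanity-check that claim - an illustrative
micro-benchmark of the uncontended, single-thread case using
rte_rdtsc(); the function name bench_uncontended is invented and the
absolute numbers will vary by CPU and DPDK version:)

#include <stdio.h>
#include <rte_cycles.h>
#include <rte_spinlock.h>
#include <rte_rwlock.h>

#define ITERS 1000000

static rte_spinlock_t sl = RTE_SPINLOCK_INITIALIZER;
static rte_rwlock_t rwl = RTE_RWLOCK_INITIALIZER;

static void bench_uncontended(void)
{
	uint64_t t0, spin_c, rw_c;
	int i;

	/* cost of an uncontended spinlock lock/unlock pair */
	t0 = rte_rdtsc();
	for (i = 0; i < ITERS; i++) {
		rte_spinlock_lock(&sl);
		rte_spinlock_unlock(&sl);
	}
	spin_c = rte_rdtsc() - t0;

	/* cost of an uncontended rwlock read lock/unlock pair */
	t0 = rte_rdtsc();
	for (i = 0; i < ITERS; i++) {
		rte_rwlock_read_lock(&rwl);
		rte_rwlock_read_unlock(&rwl);
	}
	rw_c = rte_rdtsc() - t0;

	printf("spinlock %.1f cycles/op, rwlock read %.1f cycles/op\n",
	       (double)spin_c / ITERS, (double)rw_c / ITERS);
}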

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/5] bonding: replace spinlock with read/write lock
  2016-05-05 17:12   ` Stephen Hemminger
@ 2016-05-06 10:32     ` Declan Doherty
  2016-05-06 15:55       ` Stephen Hemminger
  0 siblings, 1 reply; 42+ messages in thread
From: Declan Doherty @ 2016-05-06 10:32 UTC (permalink / raw)
  To: Stephen Hemminger, Bernard Iremonger; +Cc: dev

On 05/05/16 18:12, Stephen Hemminger wrote:
> On Thu,  5 May 2016 16:14:56 +0100
> Bernard Iremonger <bernard.iremonger@intel.com> wrote:
>
>> Fixes: a45b288ef21a ("bond: support link status polling")
>> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
>
> You know an uncontested reader/writer lock is significantly slower
> than a spinlock.
>

As we can have multiple readers of the active slave list / primary 
slave, basically any tx/rx burst call needs to protect against a device 
being removed/closed during its operation now that we support 
hotplugging. In the worst case this could mean we have 2(rx+tx) * queues 
possibly using the active slave list simultaneously; in that case I 
would have thought that a spinlock would have a much more significant 
effect on performance?
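
(To illustrate the worst case described above: with one reader loop per
forwarding lcore, all of them can sit inside the read-side critical
section at once. A hedged sketch, not actual driver code; fwd_lcore and
keep_running are invented names:)

#include <rte_launch.h>
#include <rte_lcore.h>
#include <rte_rwlock.h>

static volatile int keep_running = 1;

/* each forwarding lcore polls under the read lock; reads proceed in
 * parallel, only slave add/remove (the writer) excludes them */
static int fwd_lcore(void *arg)
{
	struct bond_dev_private *internals = arg;

	while (keep_running) {
		rte_rwlock_read_lock(&internals->rwlock);
		/* rx/tx burst over internals->active_slaves[] */
		rte_rwlock_read_unlock(&internals->rwlock);
	}
	return 0;
}

/* launched from the master lcore: one concurrent reader per lcore */
unsigned lcore_id;
RTE_LCORE_FOREACH_SLAVE(lcore_id)
	rte_eal_remote_launch(fwd_lcore, internals, lcore_id);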

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/5] bonding: replace spinlock with read/write lock
  2016-05-06 10:32     ` Declan Doherty
@ 2016-05-06 15:55       ` Stephen Hemminger
  2016-05-13 17:10         ` Ananyev, Konstantin
  0 siblings, 1 reply; 42+ messages in thread
From: Stephen Hemminger @ 2016-05-06 15:55 UTC (permalink / raw)
  To: Declan Doherty; +Cc: Bernard Iremonger, dev

On Fri, 6 May 2016 11:32:19 +0100
Declan Doherty <declan.doherty@intel.com> wrote:

> On 05/05/16 18:12, Stephen Hemminger wrote:
> > On Thu,  5 May 2016 16:14:56 +0100
> > Bernard Iremonger <bernard.iremonger@intel.com> wrote:
> >
> >> Fixes: a45b288ef21a ("bond: support link status polling")
> >> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> >
> > You know an uncontested reader/writer lock is significantly slower
> > than a spinlock.
> >
> 
> As we can have multiple readers of the active slave list / primary 
> slave, basically any tx/rx burst call needs to protect against a device 
> being removed/closed during its operation now that we support 
> hotplugging. In the worst case this could mean we have 2(rx+tx) * queues 
> possibly using the active slave list simultaneously; in that case I 
> would have thought that a spinlock would have a much more significant 
> effect on performance?

Right, but the window where the shared variable is accessed is very small,
and it is actually faster to use a spinlock for that.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/5] bonding: replace spinlock with read/write lock
  2016-05-06 15:55       ` Stephen Hemminger
@ 2016-05-13 17:10         ` Ananyev, Konstantin
  2016-05-13 17:18           ` Ananyev, Konstantin
  0 siblings, 1 reply; 42+ messages in thread
From: Ananyev, Konstantin @ 2016-05-13 17:10 UTC (permalink / raw)
  To: Stephen Hemminger, Doherty, Declan; +Cc: Iremonger, Bernard, dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Stephen Hemminger
> Sent: Friday, May 06, 2016 4:56 PM
> To: Doherty, Declan
> Cc: Iremonger, Bernard; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/5] bonding: replace spinlock with read/write lock
> 
> On Fri, 6 May 2016 11:32:19 +0100
> Declan Doherty <declan.doherty@intel.com> wrote:
> 
> > On 05/05/16 18:12, Stephen Hemminger wrote:
> > > On Thu,  5 May 2016 16:14:56 +0100
> > > Bernard Iremonger <bernard.iremonger@intel.com> wrote:
> > >
> > >> Fixes: a45b288ef21a ("bond: support link status polling")
> > >> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> > >
> > > You know an uncontested reader/writer lock is significantly slower
> > > than a spinlock.
> > >
> >
> > As we can have multiple readers of the active slave list / primary
> > slave, basically any tx/rx burst call needs to protect against a device
> > being removed/closed during it's operation now that we support
> > hotplugging, in the worst case this could mean we have 2(rx+tx) * queues
> > possibly using the active slave list simultaneously, in that case I
> > would have thought that a spinlock would have a much more significant
> > affect on performance?
> 
> Right, but the window where the shared variable is accessed is very small,
> and it is actually faster to use spinlock for that.

I don't think the window we hold the lock is that small: let's say we have
a burst of 32 packets * (let's say) 50 cycles/pkt = ~1600 cycles - each IO thread would stall.
For me that's long enough to justify rwlock usage here, especially as the
DPDK rwlock price is not much bigger (as I remember) than a spinlock -
it is basically 1 CAS operation.
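
(Roughly what the DPDK read lock does, simplified for illustration -
the counter goes negative while a writer holds the lock, otherwise each
reader takes it with a single compare-and-set; the real implementation
also calls rte_pause() while spinning:)

#include <rte_atomic.h>
#include <rte_rwlock.h>

static inline void rwlock_read_lock_sketch(rte_rwlock_t *rwl)
{
	int32_t x;

	for (;;) {
		x = rwl->cnt;
		if (x < 0)	/* writer active, spin */
			continue;
		if (rte_atomic32_cmpset((volatile uint32_t *)&rwl->cnt,
					x, x + 1))
			break;	/* read lock taken: one CAS on success */
	}
}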

Konstantin
 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/5] bonding: replace spinlock with read/write lock
  2016-05-13 17:10         ` Ananyev, Konstantin
@ 2016-05-13 17:18           ` Ananyev, Konstantin
  2016-05-26 16:24             ` Iremonger, Bernard
  0 siblings, 1 reply; 42+ messages in thread
From: Ananyev, Konstantin @ 2016-05-13 17:18 UTC (permalink / raw)
  To: Ananyev, Konstantin, Stephen Hemminger, Doherty, Declan
  Cc: Iremonger, Bernard, dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Ananyev, Konstantin
> Sent: Friday, May 13, 2016 6:11 PM
> To: Stephen Hemminger; Doherty, Declan
> Cc: Iremonger, Bernard; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 1/5] bonding: replace spinlock with read/write lock
> 
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Stephen Hemminger
> > Sent: Friday, May 06, 2016 4:56 PM
> > To: Doherty, Declan
> > Cc: Iremonger, Bernard; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 1/5] bonding: replace spinlock with read/write lock
> >
> > On Fri, 6 May 2016 11:32:19 +0100
> > Declan Doherty <declan.doherty@intel.com> wrote:
> >
> > > On 05/05/16 18:12, Stephen Hemminger wrote:
> > > > On Thu,  5 May 2016 16:14:56 +0100
> > > > Bernard Iremonger <bernard.iremonger@intel.com> wrote:
> > > >
> > > >> Fixes: a45b288ef21a ("bond: support link status polling")
> > > >> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> > > >
> > > > You know an uncontested reader/writer lock is significantly slower
> > > > than a spinlock.
> > > >
> > >
> > > As we can have multiple readers of the active slave list / primary
> > > slave, basically any tx/rx burst call needs to protect against a device
> > > being removed/closed during its operation now that we support
> > > hotplugging. In the worst case this could mean we have 2(rx+tx) * queues
> > > possibly using the active slave list simultaneously; in that case I
> > > would have thought that a spinlock would have a much more significant
> > > effect on performance?
> >
> > Right, but the window where the shared variable is accessed is very small,
> > and it is actually faster to use a spinlock for that.
> 
> I don't think the window we hold the lock is that small: let's say we have
> a burst of 32 packets * (let's say) 50 cycles/pkt = ~1600 cycles - each IO thread would stall.
> For me that's long enough to justify rwlock usage here, especially as the
> DPDK rwlock price is not much bigger (as I remember) than a spinlock -
> it is basically 1 CAS operation.

As another alternative we can have a spinlock per queue; then different IO threads
doing RX/TX over different queues will not be contended at all.
Though the control thread would need to grab the locks for all configured queues :)
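
(A sketch of that per-queue alternative, for illustration - the IO
thread only ever touches its own queue's lock, while the control path
has to walk all of them; lock_all_rx_queues is an invented helper:)

#include <rte_ethdev.h>
#include <rte_spinlock.h>

struct bond_rx_queue {
	rte_spinlock_t lock;	/* per-queue lock */
	/* ... */
};

/* IO thread: uncontended unless the control path is active
 *
 *	rte_spinlock_lock(&bd_rx_q->lock);
 *	... rx burst on this queue only ...
 *	rte_spinlock_unlock(&bd_rx_q->lock);
 */

/* control thread: quiesce every queue before changing the slave list */
static void lock_all_rx_queues(struct rte_eth_dev *dev)
{
	uint16_t i;

	for (i = 0; i < dev->data->nb_rx_queues; i++) {
		struct bond_rx_queue *q = dev->data->rx_queues[i];

		rte_spinlock_lock(&q->lock);
	}
	/* ... add/remove slave, then unlock in the same loop order ... */
}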

Konstantin

> 
> Konstantin
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/5] bonding: replace spinlock with read/write lock
  2016-05-13 17:18           ` Ananyev, Konstantin
@ 2016-05-26 16:24             ` Iremonger, Bernard
  0 siblings, 0 replies; 42+ messages in thread
From: Iremonger, Bernard @ 2016-05-26 16:24 UTC (permalink / raw)
  To: Ananyev, Konstantin, Stephen Hemminger, Doherty, Declan
  Cc: dev, Iremonger, Bernard

Hi Konstantin, 

<snip>
> > > > On 05/05/16 18:12, Stephen Hemminger wrote:
> > > > > On Thu,  5 May 2016 16:14:56 +0100 Bernard Iremonger
> > > > > <bernard.iremonger@intel.com> wrote:
> > > > >
> > > > >> Fixes: a45b288ef21a ("bond: support link status polling")
> > > > >> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> > > > >
> > > > > You know an uncontested reader/writer lock is significantly
> > > > > slower than a spinlock.
> > > > >
> > > >
> > > > As we can have multiple readers of the active slave list / primary
> > > > slave, basically any tx/rx burst call needs to protect against a
> > > > device being removed/closed during its operation now that we
> > > > support hotplugging. In the worst case this could mean we have
> > > > 2(rx+tx) * queues possibly using the active slave list
> > > > simultaneously; in that case I would have thought that a spinlock
> > > > would have a much more significant effect on performance?
> > >
> > > Right, but the window where the shared variable is accessed is very
> > > small, and it is actually faster to use a spinlock for that.
> >
> > I don't think the window we hold the lock is that small: let's say
> > we have a burst of 32 packets * (let's say) 50 cycles/pkt = ~1600 cycles -
> > each IO thread would stall.
> > For me that's long enough to justify rwlock usage here, especially as
> > the DPDK rwlock price is not much bigger (as I remember) than a
> > spinlock - it is basically 1 CAS operation.
> 
> As another alternative we can have a spinlock per queue, then different IO
> threads doing RX/XTX over different queues will be uncontended at all.
> Though control thread would need to grab locks for all configured queues :)
> 
> Konstantin
> 

I am preparing a v2 patchset which uses a spinlock per queue.

Regards,

Bernard.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH v2 0/6] bonding: locks
  2016-05-05 15:14 [PATCH 0/5] bonding: locks Bernard Iremonger
                   ` (4 preceding siblings ...)
  2016-05-05 15:15 ` [PATCH 5/5] bonding: add read/write lock to the link_update function Bernard Iremonger
@ 2016-05-26 16:38 ` Bernard Iremonger
  2016-05-26 16:38   ` [PATCH v2 1/6] bonding: add spinlock to rx and tx queues Bernard Iremonger
                     ` (6 more replies)
  5 siblings, 7 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

Add a spinlock to the bonding rx and tx queues.
Take the queue spinlock in the rx and tx burst functions.
Take all queue spinlocks in the slave add and remove functions.
With the spinlocks in place, remove the memcpy of slaves.
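
A minimal sketch of the burst-path pattern used in this version
(rte_spinlock_trylock() returns 1 on success and 0 if the lock is
already held, so a burst simply returns no packets while a control-path
operation holds the queue lock; illustrative only, rx_burst_sketch is
an invented name):

static uint16_t
rx_burst_sketch(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
	struct bond_rx_queue *bd_rx_q = queue;
	uint16_t num_rx = 0;

	if (rte_spinlock_trylock(&bd_rx_q->lock) == 0)
		return 0;	/* control path owns the queue, retry next poll */

	/* ... receive from the active slaves into bufs ... */

	rte_spinlock_unlock(&bd_rx_q->lock);
	return num_rx;
}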

Changes in v2:
Replace patch 1.
Add patch 2 and reorder patches.
Add spinlock to bonding rx and tx queues.
Take all spinlocks in slave add and remove functions.
Replace readlocks with spinlocks.

Bernard Iremonger (6):
  bonding: add spinlock to rx and tx queues
  bonding: grab queue spinlocks in slave add and remove
  bonding: take queue spinlock in rx/tx burst functions
  bonding: add spinlock to stop function
  bonding: add spinlock to link update function
  bonding: remove memcpy from burst functions

 drivers/net/bonding/rte_eth_bond_api.c     |  51 +++++++-
 drivers/net/bonding/rte_eth_bond_pmd.c     | 196 ++++++++++++++++++-----------
 drivers/net/bonding/rte_eth_bond_private.h |   4 +-
 3 files changed, 172 insertions(+), 79 deletions(-)

-- 
2.6.3

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH v2 1/6] bonding: add spinlock to rx and tx queues
  2016-05-26 16:38 ` [PATCH v2 0/6] bonding: locks Bernard Iremonger
@ 2016-05-26 16:38   ` Bernard Iremonger
  2016-06-10 18:12     ` Ananyev, Konstantin
  2016-06-12 17:11     ` [PATCH v3 0/4] bonding: locks Bernard Iremonger
  2016-05-26 16:38   ` [PATCH v2 2/6] bonding: grab queue spinlocks in slave add and remove Bernard Iremonger
                     ` (5 subsequent siblings)
  6 siblings, 2 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c     | 4 ++++
 drivers/net/bonding/rte_eth_bond_private.h | 4 +++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 129f04b..2e624bb 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1676,6 +1676,8 @@ bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	if (bd_rx_q == NULL)
 		return -1;
 
+	rte_spinlock_init(&bd_rx_q->lock);
+
 	bd_rx_q->queue_id = rx_queue_id;
 	bd_rx_q->dev_private = dev->data->dev_private;
 
@@ -1701,6 +1703,8 @@ bond_ethdev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 	if (bd_tx_q == NULL)
 		return -1;
 
+	rte_spinlock_init(&bd_tx_q->lock);
+
 	bd_tx_q->queue_id = tx_queue_id;
 	bd_tx_q->dev_private = dev->data->dev_private;
 
diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
index 8312397..b6abcba 100644
--- a/drivers/net/bonding/rte_eth_bond_private.h
+++ b/drivers/net/bonding/rte_eth_bond_private.h
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -76,6 +76,7 @@ struct bond_rx_queue {
 	/**< Copy of RX configuration structure for queue */
 	struct rte_mempool *mb_pool;
 	/**< Reference to mbuf pool to use for RX queue */
+	rte_spinlock_t lock;
 };
 
 struct bond_tx_queue {
@@ -87,6 +88,7 @@ struct bond_tx_queue {
 	/**< Number of TX descriptors available for the queue */
 	struct rte_eth_txconf tx_conf;
 	/**< Copy of TX configuration structure for queue */
+	rte_spinlock_t lock;
 };
 
 /** Bonded slave devices structure */
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v2 2/6] bonding: grab queue spinlocks in slave add and remove
  2016-05-26 16:38 ` [PATCH v2 0/6] bonding: locks Bernard Iremonger
  2016-05-26 16:38   ` [PATCH v2 1/6] bonding: add spinlock to rx and tx queues Bernard Iremonger
@ 2016-05-26 16:38   ` Bernard Iremonger
  2016-06-10 18:14     ` Ananyev, Konstantin
  2016-05-26 16:38   ` [PATCH v2 3/6] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
                     ` (4 subsequent siblings)
  6 siblings, 1 reply; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/bonding/rte_eth_bond_api.c | 51 +++++++++++++++++++++++++++++++--
 1 file changed, 48 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 53df9fe..006c901 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -437,8 +437,10 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
-
+	struct bond_tx_queue *bd_tx_q;
+	struct bond_rx_queue *bd_rx_q;
 	int retval;
+	uint16_t i;
 
 	/* Verify that port id's are valid bonded and slave ports */
 	if (valid_bonded_port_id(bonded_port_id) != 0)
@@ -448,11 +450,30 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id)
 	internals = bonded_eth_dev->data->dev_private;
 
 	rte_spinlock_lock(&internals->lock);
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_lock(&bd_rx_q->lock);
+		}
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_lock(&bd_tx_q->lock);
+		}
+	}
 
 	retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
 
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_unlock(&bd_rx_q->lock);
+		}
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_unlock(&bd_tx_q->lock);
+		}
+	}
 	rte_spinlock_unlock(&internals->lock);
-
 	return retval;
 }
 
@@ -541,7 +562,10 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
+	struct bond_tx_queue *bd_tx_q;
+	struct bond_rx_queue *bd_rx_q;
 	int retval;
+	uint16_t i;
 
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
@@ -550,11 +574,32 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id)
 	internals = bonded_eth_dev->data->dev_private;
 
 	rte_spinlock_lock(&internals->lock);
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_lock(&bd_rx_q->lock);
+		}
+
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_lock(&bd_tx_q->lock);
+		}
+	}
 
 	retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
 
-	rte_spinlock_unlock(&internals->lock);
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_unlock(&bd_tx_q->lock);
+		}
 
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_unlock(&bd_rx_q->lock);
+		}
+	}
+	rte_spinlock_unlock(&internals->lock);
 	return retval;
 }
 
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v2 3/6] bonding: take queue spinlock in rx/tx burst functions
  2016-05-26 16:38 ` [PATCH v2 0/6] bonding: locks Bernard Iremonger
  2016-05-26 16:38   ` [PATCH v2 1/6] bonding: add spinlock to rx and tx queues Bernard Iremonger
  2016-05-26 16:38   ` [PATCH v2 2/6] bonding: grab queue spinlocks in slave add and remove Bernard Iremonger
@ 2016-05-26 16:38   ` Bernard Iremonger
  2016-06-10 18:14     ` Ananyev, Konstantin
  2016-05-26 16:38   ` [PATCH v2 4/6] bonding: add spinlock to stop function Bernard Iremonger
                     ` (3 subsequent siblings)
  6 siblings, 1 reply; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 116 ++++++++++++++++++++++++---------
 1 file changed, 84 insertions(+), 32 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 2e624bb..93043ef 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -92,16 +92,22 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 	internals = bd_rx_q->dev_private;
 
-
-	for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
-		/* Offset of pointer to *bufs increases as packets are received
-		 * from other slaves */
-		num_rx_slave = rte_eth_rx_burst(internals->active_slaves[i],
-				bd_rx_q->queue_id, bufs + num_rx_total, nb_pkts);
-		if (num_rx_slave) {
-			num_rx_total += num_rx_slave;
-			nb_pkts -= num_rx_slave;
+	if (rte_spinlock_trylock(&bd_rx_q->lock)) {
+		for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
+			/* Offset of pointer to *bufs increases as packets
+			 * are received from other slaves
+			 */
+			num_rx_slave = rte_eth_rx_burst(
+						internals->active_slaves[i],
+						bd_rx_q->queue_id,
+						bufs + num_rx_total,
+						nb_pkts);
+			if (num_rx_slave) {
+				num_rx_total += num_rx_slave;
+				nb_pkts -= num_rx_slave;
+			}
 		}
+		rte_spinlock_unlock(&bd_rx_q->lock);
 	}
 
 	return num_rx_total;
@@ -112,14 +118,19 @@ bond_ethdev_rx_burst_active_backup(void *queue, struct rte_mbuf **bufs,
 		uint16_t nb_pkts)
 {
 	struct bond_dev_private *internals;
+	uint16_t ret = 0;
 
 	/* Cast to structure, containing bonded device's port id and queue id */
 	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
 
 	internals = bd_rx_q->dev_private;
 
-	return rte_eth_rx_burst(internals->current_primary_port,
-			bd_rx_q->queue_id, bufs, nb_pkts);
+	if (rte_spinlock_trylock(&bd_rx_q->lock)) {
+		ret = rte_eth_rx_burst(internals->current_primary_port,
+					bd_rx_q->queue_id, bufs, nb_pkts);
+		rte_spinlock_unlock(&bd_rx_q->lock);
+	}
+	return ret;
 }
 
 static uint16_t
@@ -143,8 +154,10 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	uint8_t i, j, k;
 
 	rte_eth_macaddr_get(internals->port_id, &bond_mac);
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
+
+	if (rte_spinlock_trylock(&bd_rx_q->lock) == 0)
+		return num_rx_total;
+
 	slave_count = internals->active_slave_count;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * slave_count);
@@ -190,7 +203,7 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 				j++;
 		}
 	}
-
+	rte_spinlock_unlock(&bd_rx_q->lock);
 	return num_rx_total;
 }
 
@@ -406,14 +419,19 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+		return num_tx_total;
+
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
 	num_of_slaves = internals->active_slave_count;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * num_of_slaves);
 
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
+	}
 
 	/* Populate slaves mbuf with which packets are to be sent on it  */
 	for (i = 0; i < nb_pkts; i++) {
@@ -444,7 +462,7 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 			num_tx_total += num_tx_slave;
 		}
 	}
-
+	rte_spinlock_unlock(&bd_tx_q->lock);
 	return num_tx_total;
 }
 
@@ -454,15 +472,23 @@ bond_ethdev_tx_burst_active_backup(void *queue,
 {
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
+	uint16_t ret = 0;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	if (internals->active_slave_count < 1)
-		return 0;
+	if (rte_spinlock_trylock(&bd_tx_q->lock)) {
+		if (internals->active_slave_count < 1) {
+			rte_spinlock_unlock(&bd_tx_q->lock);
+			return 0;
+		}
 
-	return rte_eth_tx_burst(internals->current_primary_port, bd_tx_q->queue_id,
-			bufs, nb_pkts);
+		ret = rte_eth_tx_burst(internals->current_primary_port,
+					bd_tx_q->queue_id,
+					bufs, nb_pkts);
+		rte_spinlock_unlock(&bd_tx_q->lock);
+	}
+	return ret;
 }
 
 static inline uint16_t
@@ -694,20 +720,25 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint16_t num_tx_total = 0;
 	uint8_t i, j;
 
-	uint8_t num_of_slaves = internals->active_slave_count;
+	uint8_t num_of_slaves;
 	uint8_t slaves[RTE_MAX_ETHPORTS];
 
 	struct ether_hdr *ether_hdr;
 	struct ether_addr primary_slave_addr;
 	struct ether_addr active_slave_addr;
 
-	if (num_of_slaves < 1)
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
 		return num_tx_total;
 
+	num_of_slaves = internals->active_slave_count;
+	if (num_of_slaves < 1) {
+		rte_spinlock_unlock(&bd_tx_q->lock);
+		return num_tx_total;
+	}
+
 	memcpy(slaves, internals->tlb_slaves_order,
 				sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
 
-
 	ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
 
 	if (nb_pkts > 3) {
@@ -735,7 +766,7 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		if (num_tx_total == nb_pkts)
 			break;
 	}
-
+	rte_spinlock_unlock(&bd_tx_q->lock);
 	return num_tx_total;
 }
 
@@ -785,6 +816,9 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 	int i, j;
 
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+		return num_tx_total;
+
 	/* Search tx buffer for ARP packets and forward them to alb */
 	for (i = 0; i < nb_pkts; i++) {
 		eth_h = rte_pktmbuf_mtod(bufs[i], struct ether_hdr *);
@@ -875,6 +909,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 		}
 	}
+	rte_spinlock_unlock(&bd_tx_q->lock);
 
 	/* Send non-ARP packets using tlb policy */
 	if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
@@ -914,14 +949,19 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+		return num_tx_total;
+
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
 	num_of_slaves = internals->active_slave_count;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * num_of_slaves);
 
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
+	}
 
 	/* Populate slaves mbuf with the packets which are to be sent on it  */
 	for (i = 0; i < nb_pkts; i++) {
@@ -951,7 +991,7 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 			num_tx_total += num_tx_slave;
 		}
 	}
-
+	rte_spinlock_unlock(&bd_tx_q->lock);
 	return num_tx_total;
 }
 
@@ -984,17 +1024,24 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+		return num_tx_total;
+
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
 	num_of_slaves = internals->active_slave_count;
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
+	}
 
 	memcpy(slaves, internals->active_slaves, sizeof(slaves[0]) * num_of_slaves);
 
 	distributing_count = 0;
 	for (i = 0; i < num_of_slaves; i++) {
-		struct port *port = &mode_8023ad_ports[slaves[i]];
+		struct port *port;
+
+		port = &mode_8023ad_ports[internals->active_slaves[i]];
 
 		slave_slow_nb_pkts[i] = rte_ring_dequeue_burst(port->tx_ring,
 				slow_pkts, BOND_MODE_8023AX_SLAVE_TX_PKTS);
@@ -1043,7 +1090,7 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 				bufs[j] = slave_bufs[i][num_tx_slave];
 		}
 	}
-
+	rte_spinlock_unlock(&bd_tx_q->lock);
 	return num_tx_total;
 }
 
@@ -1065,14 +1112,19 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+		return 0;
+
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
 	num_of_slaves = internals->active_slave_count;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * num_of_slaves);
 
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_spinlock_unlock(&bd_tx_q->lock);
 		return 0;
+	}
 
 	/* Increment reference count on mbufs */
 	for (i = 0; i < nb_pkts; i++)
@@ -1093,6 +1145,7 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 			most_successful_tx_slave = i;
 		}
 	}
+	rte_spinlock_unlock(&bd_tx_q->lock);
 
 	/* if slaves fail to transmit packets from burst, the calling application
 	 * is not expected to know about multiple references to packets so we must
@@ -1819,7 +1872,6 @@ bond_ethdev_link_update(struct rte_eth_dev *bonded_eth_dev,
 
 		bonded_eth_dev->data->dev_link.link_status = link_up;
 	}
-
 	return 0;
 }
 
-- 
2.6.3

* [PATCH v2 4/6] bonding: add spinlock to stop function
  2016-05-26 16:38 ` [PATCH v2 0/6] bonding: locks Bernard Iremonger
                     ` (2 preceding siblings ...)
  2016-05-26 16:38   ` [PATCH v2 3/6] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
@ 2016-05-26 16:38   ` Bernard Iremonger
  2016-05-26 16:38   ` [PATCH v2 5/6] bonding: add spinlock to link update function Bernard Iremonger
                     ` (2 subsequent siblings)
  6 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 93043ef..55b37a5 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1651,6 +1651,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 	struct bond_dev_private *internals = eth_dev->data->dev_private;
 	uint8_t i;
 
+	rte_spinlock_lock(&internals->lock);
 	if (internals->mode == BONDING_MODE_8023AD) {
 		struct port *port;
 		void *pkt = NULL;
@@ -1672,7 +1673,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 	}
 
 	if (internals->mode == BONDING_MODE_TLB ||
-			internals->mode == BONDING_MODE_ALB) {
+		internals->mode == BONDING_MODE_ALB) {
 		bond_tlb_disable(internals);
 		for (i = 0; i < internals->active_slave_count; i++)
 			tlb_last_obytets[internals->active_slaves[i]] = 0;
@@ -1685,6 +1686,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 
 	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
+	rte_spinlock_unlock(&internals->lock);
 }
 
 void
-- 
2.6.3

* [PATCH v2 5/6] bonding: add spinlock to link update function
  2016-05-26 16:38 ` [PATCH v2 0/6] bonding: locks Bernard Iremonger
                     ` (3 preceding siblings ...)
  2016-05-26 16:38   ` [PATCH v2 4/6] bonding: add spinlock to stop function Bernard Iremonger
@ 2016-05-26 16:38   ` Bernard Iremonger
  2016-05-26 16:38   ` [PATCH v2 6/6] bonding: remove memcpy from burst functions Bernard Iremonger
  2016-06-10 14:45   ` [PATCH v2 0/6] bonding: locks Bruce Richardson
  6 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 55b37a5..474bfcc 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1853,9 +1853,11 @@ bond_ethdev_link_update(struct rte_eth_dev *bonded_eth_dev,
 {
 	struct bond_dev_private *internals = bonded_eth_dev->data->dev_private;
 
+	rte_spinlock_lock(&internals->lock);
 	if (!bonded_eth_dev->data->dev_started ||
 		internals->active_slave_count == 0) {
 		bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		rte_spinlock_unlock(&internals->lock);
 		return 0;
 	} else {
 		struct rte_eth_dev *slave_eth_dev;
@@ -1874,6 +1876,7 @@ bond_ethdev_link_update(struct rte_eth_dev *bonded_eth_dev,
 
 		bonded_eth_dev->data->dev_link.link_status = link_up;
 	}
+	rte_spinlock_unlock(&internals->lock);
 	return 0;
 }
 
-- 
2.6.3

* [PATCH v2 6/6] bonding: remove memcpy from burst functions
  2016-05-26 16:38 ` [PATCH v2 0/6] bonding: locks Bernard Iremonger
                     ` (4 preceding siblings ...)
  2016-05-26 16:38   ` [PATCH v2 5/6] bonding: add spinlock to link update function Bernard Iremonger
@ 2016-05-26 16:38   ` Bernard Iremonger
  2016-06-10 18:15     ` Ananyev, Konstantin
  2016-06-10 14:45   ` [PATCH v2 0/6] bonding: locks Bruce Richardson
  6 siblings, 1 reply; 42+ messages in thread
From: Bernard Iremonger @ 2016-05-26 16:38 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 71 ++++++++++++++--------------------
 1 file changed, 28 insertions(+), 43 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 474bfcc..d952658 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -146,7 +146,6 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 
 	const uint16_t ether_type_slow_be = rte_be_to_cpu_16(ETHER_TYPE_SLOW);
 	uint16_t num_rx_total = 0;	/* Total number of received packets */
-	uint8_t slaves[RTE_MAX_ETHPORTS];
 	uint8_t slave_count;
 
 	uint8_t collecting;  /* current slave collecting status */
@@ -159,15 +158,16 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 		return num_rx_total;
 
 	slave_count = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * slave_count);
 
 	for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
 		j = num_rx_total;
-		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[i]], COLLECTING);
+		collecting = ACTOR_STATE(
+				&mode_8023ad_ports[internals->active_slaves[i]],
+				COLLECTING);
 
 		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[i], bd_rx_q->queue_id,
+		num_rx_total += rte_eth_rx_burst(internals->active_slaves[i],
+				bd_rx_q->queue_id,
 				&bufs[num_rx_total], nb_pkts - num_rx_total);
 
 		for (k = j; k < 2 && k < num_rx_total; k++)
@@ -188,7 +188,9 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 					!is_same_ether_addr(&bond_mac, &hdr->d_addr)))) {
 
 				if (hdr->ether_type == ether_type_slow_be) {
-					bond_mode_8023ad_handle_slow_pkt(internals, slaves[i],
+					bond_mode_8023ad_handle_slow_pkt(
+						internals,
+						internals->active_slaves[i],
 						bufs[j]);
 				} else
 					rte_pktmbuf_free(bufs[j]);
@@ -409,8 +411,6 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
 
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
-
 	uint16_t num_tx_total = 0, num_tx_slave;
 
 	static int slave_idx = 0;
@@ -422,12 +422,7 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
 		return num_tx_total;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
-
 	if (num_of_slaves < 1) {
 		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
@@ -446,7 +441,9 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	/* Send packet burst on each slave device */
 	for (i = 0; i < num_of_slaves; i++) {
 		if (slave_nb_pkts[i] > 0) {
-			num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+			num_tx_slave = rte_eth_tx_burst(
+					internals->active_slaves[i],
+					bd_tx_q->queue_id,
 					slave_bufs[i], slave_nb_pkts[i]);
 
 			/* if tx burst fails move packets to end of bufs */
@@ -721,7 +718,6 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint8_t i, j;
 
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
 
 	struct ether_hdr *ether_hdr;
 	struct ether_addr primary_slave_addr;
@@ -736,9 +732,6 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		return num_tx_total;
 	}
 
-	memcpy(slaves, internals->tlb_slaves_order,
-				sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
-
 	ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
 
 	if (nb_pkts > 3) {
@@ -747,7 +740,8 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	}
 
 	for (i = 0; i < num_of_slaves; i++) {
-		rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+		rte_eth_macaddr_get(internals->tlb_slaves_order[i],
+					&active_slave_addr);
 		for (j = num_tx_total; j < nb_pkts; j++) {
 			if (j + 3 < nb_pkts)
 				rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -760,8 +754,11 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 		}
 
-		num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
-				bufs + num_tx_total, nb_pkts - num_tx_total);
+		num_tx_total += rte_eth_tx_burst(
+				internals->tlb_slaves_order[i],
+				bd_tx_q->queue_id,
+				bufs + num_tx_total,
+				nb_pkts - num_tx_total);
 
 		if (num_tx_total == nb_pkts)
 			break;
@@ -937,7 +934,6 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q;
 
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
 
 	uint16_t num_tx_total = 0, num_tx_slave = 0, tx_fail_total = 0;
 
@@ -952,12 +948,7 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
 		return num_tx_total;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
-
 	if (num_of_slaves < 1) {
 		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
@@ -975,7 +966,9 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	/* Send packet burst on each slave device */
 	for (i = 0; i < num_of_slaves; i++) {
 		if (slave_nb_pkts[i] > 0) {
-			num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+			num_tx_slave = rte_eth_tx_burst(
+					internals->active_slaves[i],
+					bd_tx_q->queue_id,
 					slave_bufs[i], slave_nb_pkts[i]);
 
 			/* if tx burst fails move packets to end of bufs */
@@ -1003,7 +996,6 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q;
 
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
 	 /* positions in slaves, not ID */
 	uint8_t distributing_offsets[RTE_MAX_ETHPORTS];
 	uint8_t distributing_count;
@@ -1027,16 +1019,12 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
 		return num_tx_total;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	num_of_slaves = internals->active_slave_count;
 	if (num_of_slaves < 1) {
 		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
 	}
 
-	memcpy(slaves, internals->active_slaves, sizeof(slaves[0]) * num_of_slaves);
-
 	distributing_count = 0;
 	for (i = 0; i < num_of_slaves; i++) {
 		struct port *port;
@@ -1073,7 +1061,9 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 		if (slave_nb_pkts[i] == 0)
 			continue;
 
-		num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+		num_tx_slave = rte_eth_tx_burst(
+				internals->active_slaves[i],
+				bd_tx_q->queue_id,
 				slave_bufs[i], slave_nb_pkts[i]);
 
 		/* If tx burst fails drop slow packets */
@@ -1102,8 +1092,6 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q;
 
 	uint8_t tx_failed_flag = 0, num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
-
 	uint16_t max_nb_of_tx_pkts = 0;
 
 	int slave_tx_total[RTE_MAX_ETHPORTS];
@@ -1115,12 +1103,7 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
 		return 0;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
-
 	if (num_of_slaves < 1) {
 		rte_spinlock_unlock(&bd_tx_q->lock);
 		return 0;
@@ -1132,8 +1115,10 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 
 	/* Transmit burst on each active slave */
 	for (i = 0; i < num_of_slaves; i++) {
-		slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
-					bufs, nb_pkts);
+		slave_tx_total[i] = rte_eth_tx_burst(
+				internals->active_slaves[i],
+				bd_tx_q->queue_id,
+				bufs, nb_pkts);
 
 		if (unlikely(slave_tx_total[i] < nb_pkts))
 			tx_failed_flag = 1;
-- 
2.6.3

* Re: [PATCH v2 0/6] bonding: locks
  2016-05-26 16:38 ` [PATCH v2 0/6] bonding: locks Bernard Iremonger
                     ` (5 preceding siblings ...)
  2016-05-26 16:38   ` [PATCH v2 6/6] bonding: remove memcpy from burst functions Bernard Iremonger
@ 2016-06-10 14:45   ` Bruce Richardson
  2016-06-10 18:24     ` Iremonger, Bernard
  6 siblings, 1 reply; 42+ messages in thread
From: Bruce Richardson @ 2016-06-10 14:45 UTC (permalink / raw)
  To: Bernard Iremonger; +Cc: dev, declan.doherty, konstantin.ananyev

On Thu, May 26, 2016 at 05:38:41PM +0100, Bernard Iremonger wrote:
> Add spinlock to bonding rx and tx queues.
> Take spinlock in rx and tx burst functions.
> Take all spinlocks in slave add and remove functions.
> With spinlocks in place remove memcpy of slaves.
> 
> Changes in v2:
> Replace patch 1.
> Add patch 2 and reorder patches.
> Add spinlock to bonding rx and tx queues.
> Take all spinlocks in slave add and remove functions.
> Replace readlocks with spinlocks.
> 
> Bernard Iremonger (6):
>   bonding: add spinlock to rx and tx queues
>   bonding: grab queue spinlocks in slave add and remove
>   bonding: take queue spinlock in rx/tx burst functions
>   bonding: add spinlock to stop function
>   bonding: add spinlock to link update function
>   bonding: remove memcpy from burst functions
> 
>  drivers/net/bonding/rte_eth_bond_api.c     |  52 +++++++-
>  drivers/net/bonding/rte_eth_bond_pmd.c     | 196 ++++++++++++++++++-----------
>  drivers/net/bonding/rte_eth_bond_private.h |   4 +-
>  3 files changed, 173 insertions(+), 79 deletions(-)
> 
> -- 

The patches in this set are missing any explanation for the reasons behind each
patch. The commit messages are empty for every patch.

I'm also concerned at the fact that this patchset is adding lock operations all
over the bonding PMD. While other PMDs need synchronization between control plane
and data plane threads so that e.g. you don't do IO on a stopped port, they
don't use locks so as to get max performance. Nowhere in the patchset is there
an explanation as to why bonding is so different that it needs locks where
other PMDs can do without them. This should also be explained in each individual
patch as to why the area covered by the patch needs locks in this PMD (again,
compared to other PMDs)

Thanks,
/Bruce

* Re: [PATCH v2 1/6] bonding: add spinlock to rx and tx queues
  2016-05-26 16:38   ` [PATCH v2 1/6] bonding: add spinlock to rx and tx queues Bernard Iremonger
@ 2016-06-10 18:12     ` Ananyev, Konstantin
  2016-06-12 17:11     ` [PATCH v3 0/4] bonding: locks Bernard Iremonger
  1 sibling, 0 replies; 42+ messages in thread
From: Ananyev, Konstantin @ 2016-06-10 18:12 UTC (permalink / raw)
  To: Iremonger, Bernard, dev; +Cc: Doherty, Declan


> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> ---
>  drivers/net/bonding/rte_eth_bond_pmd.c     | 4 ++++
>  drivers/net/bonding/rte_eth_bond_private.h | 4 +++-
>  2 files changed, 7 insertions(+), 1 deletion(-)
> 

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

* Re: [PATCH v2 2/6] bonding: grab queue spinlocks in slave add and remove
  2016-05-26 16:38   ` [PATCH v2 2/6] bonding: grab queue spinlocks in slave add and remove Bernard Iremonger
@ 2016-06-10 18:14     ` Ananyev, Konstantin
  0 siblings, 0 replies; 42+ messages in thread
From: Ananyev, Konstantin @ 2016-06-10 18:14 UTC (permalink / raw)
  To: Iremonger, Bernard, dev; +Cc: Doherty, Declan



> 
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

* Re: [PATCH v2 3/6] bonding: take queue spinlock in rx/tx burst functions
  2016-05-26 16:38   ` [PATCH v2 3/6] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
@ 2016-06-10 18:14     ` Ananyev, Konstantin
  0 siblings, 0 replies; 42+ messages in thread
From: Ananyev, Konstantin @ 2016-06-10 18:14 UTC (permalink / raw)
  To: Iremonger, Bernard, dev; +Cc: Doherty, Declan



> 
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

* Re: [PATCH v2 6/6] bonding: remove memcpy from burst functions
  2016-05-26 16:38   ` [PATCH v2 6/6] bonding: remove memcpy from burst functions Bernard Iremonger
@ 2016-06-10 18:15     ` Ananyev, Konstantin
  0 siblings, 0 replies; 42+ messages in thread
From: Ananyev, Konstantin @ 2016-06-10 18:15 UTC (permalink / raw)
  To: Iremonger, Bernard, dev; +Cc: Doherty, Declan

> 
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

* Re: [PATCH v2 0/6] bonding: locks
  2016-06-10 14:45   ` [PATCH v2 0/6] bonding: locks Bruce Richardson
@ 2016-06-10 18:24     ` Iremonger, Bernard
  0 siblings, 0 replies; 42+ messages in thread
From: Iremonger, Bernard @ 2016-06-10 18:24 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: dev, Doherty, Declan, Ananyev, Konstantin

Hi Bruce,

> -----Original Message-----
> From: Richardson, Bruce
> Sent: Friday, June 10, 2016 3:46 PM
> To: Iremonger, Bernard <bernard.iremonger@intel.com>
> Cc: dev@dpdk.org; Doherty, Declan <declan.doherty@intel.com>; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 0/6] bonding: locks
> 
> On Thu, May 26, 2016 at 05:38:41PM +0100, Bernard Iremonger wrote:
> > Add spinlock to bonding rx and tx queues.
> > Take spinlock in rx and tx burst functions.
> > Take all spinlocks in slave add and remove functions.
> > With spinlocks in place remove memcpy of slaves.
> >
> > Changes in v2:
> > Replace patch 1.
> > Add patch 2 and reorder patches.
> > Add spinlock to bonding rx and tx queues.
> > Take all spinlocks in slave add and remove functions.
> > Replace readlocks with spinlocks.
> >
> > Bernard Iremonger (6):
> >   bonding: add spinlock to rx and tx queues
> >   bonding: grab queue spinlocks in slave add and remove
> >   bonding: take queue spinlock in rx/tx burst functions
> >   bonding: add spinlock to stop function
> >   bonding: add spinlock to link update function
> >   bonding: remove memcpy from burst functions
> >
> >  drivers/net/bonding/rte_eth_bond_api.c     |  52 +++++++-
> >  drivers/net/bonding/rte_eth_bond_pmd.c     | 196
> ++++++++++++++++++-----------
> >  drivers/net/bonding/rte_eth_bond_private.h |   4 +-
> >  3 files changed, 173 insertions(+), 79 deletions(-)
> >
> > --
> 
> The patches in this set are missing any explanation for the reasons behind
> each patch. The commit messages are empty for every patch.
> 
> I'm also concerned at the fact that this patchset is adding lock operations all
> over the bonding PMD. While other PMDs need synchronization between
> control plane and data plane threads so that e.g. you don't do IO on a
> stopped port, they don't use locks so as to get max performance. Nowhere
> in the patchset is there an explanation as to why bonding is so different that
> it needs locks where other PMDs can do without them. This should also be
> explained in each individual patch as to why the area covered by the patch
> needs locks in this PMD (again, compared to other PMDs)
> 
> Thanks,
> /Bruce

I will be sending a v3 for this patchset.
The empty commit messages were an oversight on my part; this will be corrected in the v3.
I will also try to explain why the locks are needed.

Regards,

Bernard.

* [PATCH v3 0/4] bonding: locks
  2016-05-26 16:38   ` [PATCH v2 1/6] bonding: add spinlock to rx and tx queues Bernard Iremonger
  2016-06-10 18:12     ` Ananyev, Konstantin
@ 2016-06-12 17:11     ` Bernard Iremonger
  2016-06-12 17:11       ` [PATCH v3 1/4] bonding: add spinlock to rx and tx queues Bernard Iremonger
                         ` (3 more replies)
  1 sibling, 4 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-06-12 17:11 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

Add spinlock to bonding rx and tx queues.
Take spinlock in rx and tx burst functions.
Take all spinlocks in slave add and remove functions.
With spinlocks in place remove memcpy of slaves.
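
For anyone skimming the series, this is roughly the locking pattern the
patches below converge on (a condensed sketch, not the actual driver
code; the sketch_* names are stand-ins, the structures come from
rte_eth_bond_private.h, and error handling is omitted):

#include <rte_ethdev.h>
#include <rte_spinlock.h>
#include "rte_eth_bond_private.h"	/* bond_tx_queue, bond_dev_private */

/* Data plane: each rx/tx burst tries to take its queue lock and
 * returns an empty burst if the control plane currently holds it. */
static uint16_t
sketch_tx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
	struct bond_tx_queue *bd_tx_q = queue;
	struct bond_dev_private *internals = bd_tx_q->dev_private;
	uint16_t num_tx_total = 0;

	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
		return 0;	/* slave add/remove in progress */

	if (internals->active_slave_count >= 1)
		num_tx_total = rte_eth_tx_burst(internals->active_slaves[0],
				bd_tx_q->queue_id, bufs, nb_pkts);

	rte_spinlock_unlock(&bd_tx_q->lock);
	return num_tx_total;
}

/* Control plane: slave add/remove takes every queue lock first, so no
 * burst can be in flight while active_slaves[] is being changed. */
static void
sketch_lock_all_tx_queues(struct rte_eth_dev *bonded_eth_dev)
{
	struct bond_tx_queue *bd_tx_q;
	uint16_t i;

	for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
		bd_tx_q = bonded_eth_dev->data->tx_queues[i];
		rte_spinlock_lock(&bd_tx_q->lock);
	}
	/* ... update the slave list, then unlock every queue again ... */
}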

Changes in v3:
Rebase to latest master.
Drop patches 4 and 5 from v2 patchset.
Update commit messages on patches.

Changes in v2:
Replace patch 1.
Add patch 2 and reorder patches.
Add spinlock to bonding rx and tx queues.
Take all spinlocks in slave add and remove functions.
Replace readlocks with spinlocks.

Bernard Iremonger (4):
  bonding: add spinlock to rx and tx queues
  bonding: grab queue spinlocks in slave add and remove
  bonding: take queue spinlock in rx/tx burst functions
  bonding: remove memcpy from burst functions

 drivers/net/bonding/rte_eth_bond_api.c     |  52 +++++++-
 drivers/net/bonding/rte_eth_bond_pmd.c     | 189 ++++++++++++++++++-----------
 drivers/net/bonding/rte_eth_bond_private.h |   4 +-
 3 files changed, 167 insertions(+), 78 deletions(-)

-- 
2.6.3

* [PATCH v3 1/4] bonding: add spinlock to rx and tx queues
  2016-06-12 17:11     ` [PATCH v3 0/4] bonding: locks Bernard Iremonger
@ 2016-06-12 17:11       ` Bernard Iremonger
  2016-06-12 17:11       ` [PATCH v3 2/4] bonding: grab queue spinlocks in slave add and remove Bernard Iremonger
                         ` (2 subsequent siblings)
  3 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-06-12 17:11 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

At present it is possible to add and remove slave devices from the
bonding device while traffic is running. This can result in
segmentation faults occurring in the rx and tx burst functions.
To resolve this issue spinlocks have been added to the rx and tx
queues.

Now when a slave is added or removed the rx and tx queue spinlocks
must be held.

Fixes: 2efb58cbab6e ("bond: new link bonding library")

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c     | 4 ++++
 drivers/net/bonding/rte_eth_bond_private.h | 4 +++-
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 129f04b..2e624bb 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1676,6 +1676,8 @@ bond_ethdev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	if (bd_rx_q == NULL)
 		return -1;
 
+	rte_spinlock_init(&bd_rx_q->lock);
+
 	bd_rx_q->queue_id = rx_queue_id;
 	bd_rx_q->dev_private = dev->data->dev_private;
 
@@ -1701,6 +1703,8 @@ bond_ethdev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 	if (bd_tx_q == NULL)
 		return -1;
 
+	rte_spinlock_init(&bd_tx_q->lock);
+
 	bd_tx_q->queue_id = tx_queue_id;
 	bd_tx_q->dev_private = dev->data->dev_private;
 
diff --git a/drivers/net/bonding/rte_eth_bond_private.h b/drivers/net/bonding/rte_eth_bond_private.h
index 8312397..b6abcba 100644
--- a/drivers/net/bonding/rte_eth_bond_private.h
+++ b/drivers/net/bonding/rte_eth_bond_private.h
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -76,6 +76,7 @@ struct bond_rx_queue {
 	/**< Copy of RX configuration structure for queue */
 	struct rte_mempool *mb_pool;
 	/**< Reference to mbuf pool to use for RX queue */
+	rte_spinlock_t lock;
 };
 
 struct bond_tx_queue {
@@ -87,6 +88,7 @@ struct bond_tx_queue {
 	/**< Number of TX descriptors available for the queue */
 	struct rte_eth_txconf tx_conf;
 	/**< Copy of TX configuration structure for queue */
+	rte_spinlock_t lock;
 };
 
 /** Bonded slave devices structure */
-- 
2.6.3

* [PATCH v3 2/4] bonding: grab queue spinlocks in slave add and remove
  2016-06-12 17:11     ` [PATCH v3 0/4] bonding: locks Bernard Iremonger
  2016-06-12 17:11       ` [PATCH v3 1/4] bonding: add spinlock to rx and tx queues Bernard Iremonger
@ 2016-06-12 17:11       ` Bernard Iremonger
  2016-06-12 17:11       ` [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
  2016-06-12 17:11       ` [PATCH v3 4/4] bonding: remove memcpy from " Bernard Iremonger
  3 siblings, 0 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-06-12 17:11 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

When adding or removing a slave device from the bonding device
the rx and tx queue spinlocks should be held.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 drivers/net/bonding/rte_eth_bond_api.c | 52 ++++++++++++++++++++++++++++++++--
 1 file changed, 49 insertions(+), 3 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 53df9fe..006c901 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -437,8 +437,10 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
-
+	struct bond_tx_queue *bd_tx_q;
+	struct bond_rx_queue *bd_rx_q;
 	int retval;
+	uint16_t i;
 
 	/* Verify that port id's are valid bonded and slave ports */
 	if (valid_bonded_port_id(bonded_port_id) != 0)
@@ -448,11 +450,30 @@ rte_eth_bond_slave_add(uint8_t bonded_port_id, uint8_t slave_port_id)
 	internals = bonded_eth_dev->data->dev_private;
 
 	rte_spinlock_lock(&internals->lock);
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_lock(&bd_rx_q->lock);
+		}
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_lock(&bd_tx_q->lock);
+		}
+	}
 
 	retval = __eth_bond_slave_add_lock_free(bonded_port_id, slave_port_id);
 
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_unlock(&bd_rx_q->lock);
+		}
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_unlock(&bd_tx_q->lock);
+		}
+	}
 	rte_spinlock_unlock(&internals->lock);
-
 	return retval;
 }
 
@@ -541,7 +562,10 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id)
 {
 	struct rte_eth_dev *bonded_eth_dev;
 	struct bond_dev_private *internals;
+	struct bond_tx_queue *bd_tx_q;
+	struct bond_rx_queue *bd_rx_q;
 	int retval;
+	uint16_t i;
 
 	if (valid_bonded_port_id(bonded_port_id) != 0)
 		return -1;
@@ -550,11 +574,33 @@ rte_eth_bond_slave_remove(uint8_t bonded_port_id, uint8_t slave_port_id)
 	internals = bonded_eth_dev->data->dev_private;
 
 	rte_spinlock_lock(&internals->lock);
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_lock(&bd_rx_q->lock);
+		}
+
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_lock(&bd_tx_q->lock);
+		}
+	}
 
 	retval = __eth_bond_slave_remove_lock_free(bonded_port_id, slave_port_id);
 
-	rte_spinlock_unlock(&internals->lock);
+	if (bonded_eth_dev->data->dev_started) {
+		for (i = 0; i < bonded_eth_dev->data->nb_tx_queues; i++) {
+			bd_tx_q = bonded_eth_dev->data->tx_queues[i];
+			rte_spinlock_unlock(&bd_tx_q->lock);
+		}
 
+		for (i = 0; i < bonded_eth_dev->data->nb_rx_queues; i++) {
+			bd_rx_q = bonded_eth_dev->data->rx_queues[i];
+			rte_spinlock_unlock(&bd_rx_q->lock);
+		}
+	}
+
+	rte_spinlock_unlock(&internals->lock);
 	return retval;
 }
 
-- 
2.6.3

* [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2016-06-12 17:11     ` [PATCH v3 0/4] bonding: locks Bernard Iremonger
  2016-06-12 17:11       ` [PATCH v3 1/4] bonding: add spinlock to rx and tx queues Bernard Iremonger
  2016-06-12 17:11       ` [PATCH v3 2/4] bonding: grab queue spinlocks in slave add and remove Bernard Iremonger
@ 2016-06-12 17:11       ` Bernard Iremonger
  2016-06-13  9:18         ` Bruce Richardson
  2016-09-09 11:29         ` Ferruh Yigit
  2016-06-12 17:11       ` [PATCH v3 4/4] bonding: remove memcpy from " Bernard Iremonger
  3 siblings, 2 replies; 42+ messages in thread
From: Bernard Iremonger @ 2016-06-12 17:11 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

Use rte_spinlock_trylock() in the rx/tx burst functions to
take the queue spinlock.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 116 ++++++++++++++++++++++++---------
 1 file changed, 84 insertions(+), 32 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 2e624bb..93043ef 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1,7 +1,7 @@
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -92,16 +92,22 @@ bond_ethdev_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 	internals = bd_rx_q->dev_private;
 
-
-	for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
-		/* Offset of pointer to *bufs increases as packets are received
-		 * from other slaves */
-		num_rx_slave = rte_eth_rx_burst(internals->active_slaves[i],
-				bd_rx_q->queue_id, bufs + num_rx_total, nb_pkts);
-		if (num_rx_slave) {
-			num_rx_total += num_rx_slave;
-			nb_pkts -= num_rx_slave;
+	if (rte_spinlock_trylock(&bd_rx_q->lock)) {
+		for (i = 0; i < internals->active_slave_count && nb_pkts; i++) {
+			/* Offset of pointer to *bufs increases as packets
+			 * are received from other slaves
+			 */
+			num_rx_slave = rte_eth_rx_burst(
+						internals->active_slaves[i],
+						bd_rx_q->queue_id,
+						bufs + num_rx_total,
+						nb_pkts);
+			if (num_rx_slave) {
+				num_rx_total += num_rx_slave;
+				nb_pkts -= num_rx_slave;
+			}
 		}
+		rte_spinlock_unlock(&bd_rx_q->lock);
 	}
 
 	return num_rx_total;
@@ -112,14 +118,19 @@ bond_ethdev_rx_burst_active_backup(void *queue, struct rte_mbuf **bufs,
 		uint16_t nb_pkts)
 {
 	struct bond_dev_private *internals;
+	uint16_t ret = 0;
 
 	/* Cast to structure, containing bonded device's port id and queue id */
 	struct bond_rx_queue *bd_rx_q = (struct bond_rx_queue *)queue;
 
 	internals = bd_rx_q->dev_private;
 
-	return rte_eth_rx_burst(internals->current_primary_port,
-			bd_rx_q->queue_id, bufs, nb_pkts);
+	if (rte_spinlock_trylock(&bd_rx_q->lock)) {
+		ret = rte_eth_rx_burst(internals->current_primary_port,
+					bd_rx_q->queue_id, bufs, nb_pkts);
+		rte_spinlock_unlock(&bd_rx_q->lock);
+	}
+	return ret;
 }
 
 static uint16_t
@@ -143,8 +154,10 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	uint8_t i, j, k;
 
 	rte_eth_macaddr_get(internals->port_id, &bond_mac);
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
+
+	if (rte_spinlock_trylock(&bd_rx_q->lock) == 0)
+		return num_rx_total;
+
 	slave_count = internals->active_slave_count;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * slave_count);
@@ -190,7 +203,7 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 				j++;
 		}
 	}
-
+	rte_spinlock_unlock(&bd_rx_q->lock);
 	return num_rx_total;
 }
 
@@ -406,14 +419,19 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+		return num_tx_total;
+
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
 	num_of_slaves = internals->active_slave_count;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * num_of_slaves);
 
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
+	}
 
 	/* Populate slaves mbuf with which packets are to be sent on it  */
 	for (i = 0; i < nb_pkts; i++) {
@@ -444,7 +462,7 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 			num_tx_total += num_tx_slave;
 		}
 	}
-
+	rte_spinlock_unlock(&bd_tx_q->lock);
 	return num_tx_total;
 }
 
@@ -454,15 +472,23 @@ bond_ethdev_tx_burst_active_backup(void *queue,
 {
 	struct bond_dev_private *internals;
 	struct bond_tx_queue *bd_tx_q;
+	uint16_t ret = 0;
 
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
-	if (internals->active_slave_count < 1)
-		return 0;
+	if (rte_spinlock_trylock(&bd_tx_q->lock)) {
+		if (internals->active_slave_count < 1) {
+			rte_spinlock_unlock(&bd_tx_q->lock);
+			return 0;
+		}
 
-	return rte_eth_tx_burst(internals->current_primary_port, bd_tx_q->queue_id,
-			bufs, nb_pkts);
+		ret = rte_eth_tx_burst(internals->current_primary_port,
+					bd_tx_q->queue_id,
+					bufs, nb_pkts);
+		rte_spinlock_unlock(&bd_tx_q->lock);
+	}
+	return ret;
 }
 
 static inline uint16_t
@@ -694,20 +720,25 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint16_t num_tx_total = 0;
 	uint8_t i, j;
 
-	uint8_t num_of_slaves = internals->active_slave_count;
+	uint8_t num_of_slaves;
 	uint8_t slaves[RTE_MAX_ETHPORTS];
 
 	struct ether_hdr *ether_hdr;
 	struct ether_addr primary_slave_addr;
 	struct ether_addr active_slave_addr;
 
-	if (num_of_slaves < 1)
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
 		return num_tx_total;
 
+	num_of_slaves = internals->active_slave_count;
+	if (num_of_slaves < 1) {
+		rte_spinlock_unlock(&bd_tx_q->lock);
+		return num_tx_total;
+	}
+
 	memcpy(slaves, internals->tlb_slaves_order,
 				sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
 
-
 	ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
 
 	if (nb_pkts > 3) {
@@ -735,7 +766,7 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		if (num_tx_total == nb_pkts)
 			break;
 	}
-
+	rte_spinlock_unlock(&bd_tx_q->lock);
 	return num_tx_total;
 }
 
@@ -785,6 +816,9 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 	int i, j;
 
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+		return num_tx_total;
+
 	/* Search tx buffer for ARP packets and forward them to alb */
 	for (i = 0; i < nb_pkts; i++) {
 		eth_h = rte_pktmbuf_mtod(bufs[i], struct ether_hdr *);
@@ -875,6 +909,7 @@ bond_ethdev_tx_burst_alb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 		}
 	}
+	rte_spinlock_unlock(&bd_tx_q->lock);
 
 	/* Send non-ARP packets using tlb policy */
 	if (slave_bufs_pkts[RTE_MAX_ETHPORTS] > 0) {
@@ -914,14 +949,19 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+		return num_tx_total;
+
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
 	num_of_slaves = internals->active_slave_count;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * num_of_slaves);
 
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
+	}
 
 	/* Populate slaves mbuf with the packets which are to be sent on it  */
 	for (i = 0; i < nb_pkts; i++) {
@@ -951,7 +991,7 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 			num_tx_total += num_tx_slave;
 		}
 	}
-
+	rte_spinlock_unlock(&bd_tx_q->lock);
 	return num_tx_total;
 }
 
@@ -984,17 +1024,24 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+		return num_tx_total;
+
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
 	num_of_slaves = internals->active_slave_count;
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
+	}
 
 	memcpy(slaves, internals->active_slaves, sizeof(slaves[0]) * num_of_slaves);
 
 	distributing_count = 0;
 	for (i = 0; i < num_of_slaves; i++) {
-		struct port *port = &mode_8023ad_ports[slaves[i]];
+		struct port *port;
+
+		port = &mode_8023ad_ports[internals->active_slaves[i]];
 
 		slave_slow_nb_pkts[i] = rte_ring_dequeue_burst(port->tx_ring,
 				slow_pkts, BOND_MODE_8023AX_SLAVE_TX_PKTS);
@@ -1043,7 +1090,7 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 				bufs[j] = slave_bufs[i][num_tx_slave];
 		}
 	}
-
+	rte_spinlock_unlock(&bd_tx_q->lock);
 	return num_tx_total;
 }
 
@@ -1065,14 +1112,19 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	bd_tx_q = (struct bond_tx_queue *)queue;
 	internals = bd_tx_q->dev_private;
 
+	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
+		return 0;
+
 	/* Copy slave list to protect against slave up/down changes during tx
 	 * bursting */
 	num_of_slaves = internals->active_slave_count;
 	memcpy(slaves, internals->active_slaves,
 			sizeof(internals->active_slaves[0]) * num_of_slaves);
 
-	if (num_of_slaves < 1)
+	if (num_of_slaves < 1) {
+		rte_spinlock_unlock(&bd_tx_q->lock);
 		return 0;
+	}
 
 	/* Increment reference count on mbufs */
 	for (i = 0; i < nb_pkts; i++)
@@ -1093,6 +1145,7 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 			most_successful_tx_slave = i;
 		}
 	}
+	rte_spinlock_unlock(&bd_tx_q->lock);
 
 	/* if slaves fail to transmit packets from burst, the calling application
 	 * is not expected to know about multiple references to packets so we must
@@ -1819,7 +1872,6 @@ bond_ethdev_link_update(struct rte_eth_dev *bonded_eth_dev,
 
 		bonded_eth_dev->data->dev_link.link_status = link_up;
 	}
-
 	return 0;
 }
 
-- 
2.6.3

* [PATCH v3 4/4] bonding: remove memcpy from burst functions
  2016-06-12 17:11     ` [PATCH v3 0/4] bonding: locks Bernard Iremonger
                         ` (2 preceding siblings ...)
  2016-06-12 17:11       ` [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
@ 2016-06-12 17:11       ` Bernard Iremonger
  2016-09-11 12:39         ` Yuanhan Liu
  3 siblings, 1 reply; 42+ messages in thread
From: Bernard Iremonger @ 2016-06-12 17:11 UTC (permalink / raw)
  To: dev; +Cc: declan.doherty, konstantin.ananyev, Bernard Iremonger

Now that the queue spinlocks have been added to the rx and
tx burst functions, the memcpy of the slave data is no
longer necessary, so it has been removed.
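
For illustration, the shape of the change in each burst function is
(condensed; the real code is in the diff below):

	/* Before: snapshot the slave list on every burst. The copy kept
	 * the local array stable, but gave no protection against the
	 * slave itself being stopped or detached mid-burst. */
	num_of_slaves = internals->active_slave_count;
	memcpy(slaves, internals->active_slaves,
			sizeof(internals->active_slaves[0]) * num_of_slaves);

	/* After: the queue spinlock is already held at this point, so
	 * active_slaves[] cannot change and is simply read in place,
	 * e.g. rte_eth_tx_burst(internals->active_slaves[i], ...). */
	num_of_slaves = internals->active_slave_count;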

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 71 ++++++++++++++--------------------
 1 file changed, 28 insertions(+), 43 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 93043ef..ce46450 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -146,7 +146,6 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 
 	const uint16_t ether_type_slow_be = rte_be_to_cpu_16(ETHER_TYPE_SLOW);
 	uint16_t num_rx_total = 0;	/* Total number of received packets */
-	uint8_t slaves[RTE_MAX_ETHPORTS];
 	uint8_t slave_count;
 
 	uint8_t collecting;  /* current slave collecting status */
@@ -159,15 +158,16 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 		return num_rx_total;
 
 	slave_count = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * slave_count);
 
 	for (i = 0; i < slave_count && num_rx_total < nb_pkts; i++) {
 		j = num_rx_total;
-		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[i]], COLLECTING);
+		collecting = ACTOR_STATE(
+				&mode_8023ad_ports[internals->active_slaves[i]],
+				COLLECTING);
 
 		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[i], bd_rx_q->queue_id,
+		num_rx_total += rte_eth_rx_burst(internals->active_slaves[i],
+				bd_rx_q->queue_id,
 				&bufs[num_rx_total], nb_pkts - num_rx_total);
 
 		for (k = j; k < 2 && k < num_rx_total; k++)
@@ -188,7 +188,9 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 					!is_same_ether_addr(&bond_mac, &hdr->d_addr)))) {
 
 				if (hdr->ether_type == ether_type_slow_be) {
-					bond_mode_8023ad_handle_slow_pkt(internals, slaves[i],
+					bond_mode_8023ad_handle_slow_pkt(
+						internals,
+						internals->active_slaves[i],
 						bufs[j]);
 				} else
 					rte_pktmbuf_free(bufs[j]);
@@ -409,8 +411,6 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	uint16_t slave_nb_pkts[RTE_MAX_ETHPORTS] = { 0 };
 
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
-
 	uint16_t num_tx_total = 0, num_tx_slave;
 
 	static int slave_idx = 0;
@@ -422,12 +422,7 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
 		return num_tx_total;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
-
 	if (num_of_slaves < 1) {
 		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
@@ -446,7 +441,9 @@ bond_ethdev_tx_burst_round_robin(void *queue, struct rte_mbuf **bufs,
 	/* Send packet burst on each slave device */
 	for (i = 0; i < num_of_slaves; i++) {
 		if (slave_nb_pkts[i] > 0) {
-			num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+			num_tx_slave = rte_eth_tx_burst(
+					internals->active_slaves[i],
+					bd_tx_q->queue_id,
 					slave_bufs[i], slave_nb_pkts[i]);
 
 			/* if tx burst fails move packets to end of bufs */
@@ -721,7 +718,6 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	uint8_t i, j;
 
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
 
 	struct ether_hdr *ether_hdr;
 	struct ether_addr primary_slave_addr;
@@ -736,9 +732,6 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		return num_tx_total;
 	}
 
-	memcpy(slaves, internals->tlb_slaves_order,
-				sizeof(internals->tlb_slaves_order[0]) * num_of_slaves);
-
 	ether_addr_copy(primary_port->data->mac_addrs, &primary_slave_addr);
 
 	if (nb_pkts > 3) {
@@ -747,7 +740,8 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	}
 
 	for (i = 0; i < num_of_slaves; i++) {
-		rte_eth_macaddr_get(slaves[i], &active_slave_addr);
+		rte_eth_macaddr_get(internals->tlb_slaves_order[i],
+					&active_slave_addr);
 		for (j = num_tx_total; j < nb_pkts; j++) {
 			if (j + 3 < nb_pkts)
 				rte_prefetch0(rte_pktmbuf_mtod(bufs[j+3], void*));
@@ -760,8 +754,11 @@ bond_ethdev_tx_burst_tlb(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 		}
 
-		num_tx_total += rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
-				bufs + num_tx_total, nb_pkts - num_tx_total);
+		num_tx_total += rte_eth_tx_burst(
+				internals->tlb_slaves_order[i],
+				bd_tx_q->queue_id,
+				bufs + num_tx_total,
+				nb_pkts - num_tx_total);
 
 		if (num_tx_total == nb_pkts)
 			break;
@@ -937,7 +934,6 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q;
 
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
 
 	uint16_t num_tx_total = 0, num_tx_slave = 0, tx_fail_total = 0;
 
@@ -952,12 +948,7 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
 		return num_tx_total;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
-
 	if (num_of_slaves < 1) {
 		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
@@ -975,7 +966,9 @@ bond_ethdev_tx_burst_balance(void *queue, struct rte_mbuf **bufs,
 	/* Send packet burst on each slave device */
 	for (i = 0; i < num_of_slaves; i++) {
 		if (slave_nb_pkts[i] > 0) {
-			num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+			num_tx_slave = rte_eth_tx_burst(
+					internals->active_slaves[i],
+					bd_tx_q->queue_id,
 					slave_bufs[i], slave_nb_pkts[i]);
 
 			/* if tx burst fails move packets to end of bufs */
@@ -1003,7 +996,6 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q;
 
 	uint8_t num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
 	 /* positions in slaves, not ID */
 	uint8_t distributing_offsets[RTE_MAX_ETHPORTS];
 	uint8_t distributing_count;
@@ -1027,16 +1019,12 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
 		return num_tx_total;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	num_of_slaves = internals->active_slave_count;
 	if (num_of_slaves < 1) {
 		rte_spinlock_unlock(&bd_tx_q->lock);
 		return num_tx_total;
 	}
 
-	memcpy(slaves, internals->active_slaves, sizeof(slaves[0]) * num_of_slaves);
-
 	distributing_count = 0;
 	for (i = 0; i < num_of_slaves; i++) {
 		struct port *port;
@@ -1073,7 +1061,9 @@ bond_ethdev_tx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 		if (slave_nb_pkts[i] == 0)
 			continue;
 
-		num_tx_slave = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
+		num_tx_slave = rte_eth_tx_burst(
+				internals->active_slaves[i],
+				bd_tx_q->queue_id,
 				slave_bufs[i], slave_nb_pkts[i]);
 
 		/* If tx burst fails drop slow packets */
@@ -1102,8 +1092,6 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	struct bond_tx_queue *bd_tx_q;
 
 	uint8_t tx_failed_flag = 0, num_of_slaves;
-	uint8_t slaves[RTE_MAX_ETHPORTS];
-
 	uint16_t max_nb_of_tx_pkts = 0;
 
 	int slave_tx_total[RTE_MAX_ETHPORTS];
@@ -1115,12 +1103,7 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 	if (rte_spinlock_trylock(&bd_tx_q->lock) == 0)
 		return 0;
 
-	/* Copy slave list to protect against slave up/down changes during tx
-	 * bursting */
 	num_of_slaves = internals->active_slave_count;
-	memcpy(slaves, internals->active_slaves,
-			sizeof(internals->active_slaves[0]) * num_of_slaves);
-
 	if (num_of_slaves < 1) {
 		rte_spinlock_unlock(&bd_tx_q->lock);
 		return 0;
@@ -1132,8 +1115,10 @@ bond_ethdev_tx_burst_broadcast(void *queue, struct rte_mbuf **bufs,
 
 	/* Transmit burst on each active slave */
 	for (i = 0; i < num_of_slaves; i++) {
-		slave_tx_total[i] = rte_eth_tx_burst(slaves[i], bd_tx_q->queue_id,
-					bufs, nb_pkts);
+		slave_tx_total[i] = rte_eth_tx_burst(
+				internals->active_slaves[i],
+				bd_tx_q->queue_id,
+				bufs, nb_pkts);
 
 		if (unlikely(slave_tx_total[i] < nb_pkts))
 			tx_failed_flag = 1;
-- 
2.6.3

* Re: [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2016-06-12 17:11       ` [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
@ 2016-06-13  9:18         ` Bruce Richardson
  2016-06-13 12:28           ` Iremonger, Bernard
  2016-09-09 11:29         ` Ferruh Yigit
  1 sibling, 1 reply; 42+ messages in thread
From: Bruce Richardson @ 2016-06-13  9:18 UTC (permalink / raw)
  To: Bernard Iremonger; +Cc: dev, declan.doherty, konstantin.ananyev

On Sun, Jun 12, 2016 at 06:11:28PM +0100, Bernard Iremonger wrote:
> Use rte_spinlock_trylock() in the rx/tx burst functions to
> take the queue spinlock.
> 
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---

Why does this particular PMD need spinlocks when doing RX and TX, while other
device types do not? How is adding/removing devices from a bonded device different
to other control operations that can be done on physical PMDs? Is this not
similar to say bringing down or hotplugging out a physical port just before an
RX or TX operation takes place?
For all other PMDs we rely on the app to synchronise control and data plane
operation - why not here? 

/Bruce

* Re: [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2016-06-13  9:18         ` Bruce Richardson
@ 2016-06-13 12:28           ` Iremonger, Bernard
  2016-06-16 14:32             ` Bruce Richardson
  0 siblings, 1 reply; 42+ messages in thread
From: Iremonger, Bernard @ 2016-06-13 12:28 UTC (permalink / raw)
  To: Richardson, Bruce; +Cc: dev, Doherty, Declan, Ananyev, Konstantin

Hi Bruce,

<snip>

> Subject: Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx
> burst functions
> 
> On Sun, Jun 12, 2016 at 06:11:28PM +0100, Bernard Iremonger wrote:
> > Use rte_spinlock_trylock() in the rx/tx burst functions to take the
> > queue spinlock.
> >
> > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > ---
> 
> Why does this particular PMD need spinlocks when doing RX and TX, while
> other device types do not? How is adding/removing devices from a bonded
> device different to other control operations that can be done on physical
> PMDs? Is this not similar to say bringing down or hotplugging out a physical
> port just before an RX or TX operation takes place?
> For all other PMDs we rely on the app to synchronise control and data plane
> operation - why not here?
> 
> /Bruce

This issue arose during VM live migration testing. 
For VM live migration it is necessary (while traffic is running) to be able to remove a bonded slave device, stop it, close it and detach it.
If a slave device is removed from a bonded device while traffic is running, a segmentation fault may occur in the rx/tx burst function. The spinlock has been added to prevent this occurring.
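
Roughly the interleaving we were hitting (illustrative pseudo-code,
condensed from the pre-patch burst functions):

	/* rx/tx burst thread, no queue lock: */
	num_of_slaves = internals->active_slave_count;	/* sees e.g. 2 */
	memcpy(slaves, internals->active_slaves,
			sizeof(internals->active_slaves[0]) * num_of_slaves);

	/* ... control thread removes slaves[1] from the bond here,
	 * then stops, closes and detaches that port ... */

	rte_eth_tx_burst(slaves[1], bd_tx_q->queue_id, bufs, nb_pkts);
	/* bursts into a device that no longer exists -> crash */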

The bonding device already uses a spinlock to synchronise between the add and remove functionality and the slave_link_status_change_monitor code. 

Previously testpmd did not allow stop, close or detach of a PMD while traffic was running. Testpmd has been modified with the following patchset 

http://dpdk.org/dev/patchwork/patch/13472/

It now allows stop, close and detach of a PMD provided it is not forwarding and is not a slave of a bonded PMD.
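
With that patchset the sequence exercised during live migration looks
something like this at the testpmd prompt (port numbers are only an
example, with port 4 as the bonded device and port 1 as a slave):

testpmd> remove bonding slave 1 4
testpmd> port stop 1
testpmd> port close 1
testpmd> port detach 1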

 Regards,

Bernard.

* Re: [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2016-06-13 12:28           ` Iremonger, Bernard
@ 2016-06-16 14:32             ` Bruce Richardson
  2016-06-16 15:00               ` Thomas Monjalon
  0 siblings, 1 reply; 42+ messages in thread
From: Bruce Richardson @ 2016-06-16 14:32 UTC (permalink / raw)
  To: Iremonger, Bernard; +Cc: dev, Doherty, Declan, Ananyev, Konstantin

On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> Hi Bruce,
> 
> <snip>
> 
> > Subject: Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx
> > burst functions
> > 
> > On Sun, Jun 12, 2016 at 06:11:28PM +0100, Bernard Iremonger wrote:
> > > Use rte_spinlock_trylock() in the rx/tx burst functions to take the
> > > queue spinlock.
> > >
> > > Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> > > Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> > > ---
> > 
> > Why does this particular PMD need spinlocks when doing RX and TX, while
> > other device types do not? How is adding/removing devices from a bonded
> > device different to other control operations that can be done on physical
> > PMDs? Is this not similar to say bringing down or hotplugging out a physical
> > port just before an RX or TX operation takes place?
> > For all other PMDs we rely on the app to synchronise control and data plane
> > operation - why not here?
> > 
> > /Bruce
> 
> This issue arose during VM live migration testing. 
> For VM live migration it is necessary (while traffic is running) to be able to remove a bonded slave device, stop it, close it and detach it.
> If a slave device is removed from a bonded device while traffic is running, a segmentation fault may occur in the rx/tx burst function. The spinlock has been added to prevent this occurring.
> 
> The bonding device already uses a spinlock to synchronise between the add and remove functionality and the slave_link_status_change_monitor code. 
> 
> Previously testpmd did not allow stop, close or detach of a PMD while traffic was running. Testpmd has been modified with the following patchset:
> 
> http://dpdk.org/dev/patchwork/patch/13472/
> 
> It now allows stop, close and detach of a PMD provided it is not forwarding and is not a slave of a bonded PMD.
> 
I will admit to not being fully convinced, but if nobody else has any serious
objections, and since this patch has been reviewed and acked, I'm ok to merge it
in. I'll do so shortly.

/Bruce

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2016-06-16 14:32             ` Bruce Richardson
@ 2016-06-16 15:00               ` Thomas Monjalon
  2016-06-16 16:41                 ` Iremonger, Bernard
  0 siblings, 1 reply; 42+ messages in thread
From: Thomas Monjalon @ 2016-06-16 15:00 UTC (permalink / raw)
  To: Bruce Richardson, Iremonger, Bernard
  Cc: dev, Doherty, Declan, Ananyev, Konstantin

2016-06-16 15:32, Bruce Richardson:
> On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> > > Why does this particular PMD need spinlocks when doing RX and TX, while
> > > other device types do not? How is adding/removing devices from a bonded
> > > device different to other control operations that can be done on physical
> > > PMDs? Is this not similar to say bringing down or hotplugging out a physical
> > > port just before an RX or TX operation takes place?
> > > For all other PMDs we rely on the app to synchronise control and data plane
> > > operation - why not here?
> > > 
> > > /Bruce
> > 
> > This issue arose during VM live migration testing. 
> > For VM live migration it is necessary (while traffic is running) to be able to remove a bonded slave device, stop it, close it and detach it.
> > If a slave device is removed from a bonded device while traffic is running, a segmentation fault may occur in the rx/tx burst function. The spinlock has been added to prevent this occurring.
> > 
> > The bonding device already uses a spinlock to synchronise between the add and remove functionality and the slave_link_status_change_monitor code. 
> > 
> > Previously testpmd did not allow stop, close or detach of a PMD while traffic was running. Testpmd has been modified with the following patchset:
> > 
> > http://dpdk.org/dev/patchwork/patch/13472/
> > 
> > It now allows stop, close and detach of a PMD provided it is not forwarding and is not a slave of a bonded PMD.
> > 
> I will admit to not being fully convinced, but if nobody else has any serious
> objections, and since this patch has been reviewed and acked, I'm ok to merge it
> in. I'll do so shortly.

Please hold on.
Seeing locks introduced in the Rx/Tx path is an alert.
We clearly need a design document to explain where locks can be used
and what the responsibilities of the control plane are.
If everybody agrees in this document that DPDK can have some locks
in the fast path, then OK to merge it.

So I would say NACK for 16.07 and maybe postpone to 16.11.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2016-06-16 15:00               ` Thomas Monjalon
@ 2016-06-16 16:41                 ` Iremonger, Bernard
  2016-06-16 18:38                   ` Thomas Monjalon
  0 siblings, 1 reply; 42+ messages in thread
From: Iremonger, Bernard @ 2016-06-16 16:41 UTC (permalink / raw)
  To: Thomas Monjalon, Richardson, Bruce
  Cc: dev, Doherty, Declan, Ananyev, Konstantin, Mcnamara, John

Hi Thomas,
<snip>
> 2016-06-16 15:32, Bruce Richardson:
> > On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> > > > Why does this particular PMD need spinlocks when doing RX and TX,
> > > > while other device types do not? How is adding/removing devices
> > > > from a bonded device different to other control operations that
> > > > can be done on physical PMDs? Is this not similar to say bringing
> > > > down or hotplugging out a physical port just before an RX or TX
> operation takes place?
> > > > For all other PMDs we rely on the app to synchronise control and
> > > > data plane operation - why not here?
> > > >
> > > > /Bruce
> > >
> > > This issue arose during VM live migration testing.
> > > For VM live migration it is necessary (while traffic is running) to be able to
> remove a bonded slave device, stop it, close it and detach it.
> > > If a slave device is removed from a bonded device while traffic is running,
> a segmentation fault may occur in the rx/tx burst function. The spinlock has
> been added to prevent this occurring.
> > >
> > > The bonding device already uses a spinlock to synchronise between the
> add and remove functionality and the slave_link_status_change_monitor
> code.
> > >
> > > Previously testpmd did not allow stop, close or detach of a PMD while
> > > traffic was running. Testpmd has been modified with the following
> > > patchset
> > >
> > > http://dpdk.org/dev/patchwork/patch/13472/
> > >
> > > It now allows stop, close and detach of a PMD provided it is not
> forwarding and is not a slave of a bonded PMD.
> > >
> > I will admit to not being fully convinced, but if nobody else has any
> > serious objections, and since this patch has been reviewed and acked,
> > I'm ok to merge it in. I'll do so shortly.
> 
> Please hold on.
> Seeing locks introduced in the Rx/Tx path is an alert.
> We clearly need a design document to explain where locks can be used and
> what the responsibilities of the control plane are.
> If everybody agrees in this document that DPDK can have some locks in the
> fast path, then OK to merge it.
> 
> So I would say NACK for 16.07 and maybe postpone to 16.11.

Looking at the documentation for the bonding PMD.

http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.html

In section 10.2 it states the following:

Bonded devices support the dynamic addition and removal of slave devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs.

If a slave device is added or removed while traffic is running, there is the possibility of a segmentation fault in the rx/tx burst functions. This is most likely to occur in the round robin bonding mode.
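
The calls in question are sketched below; if a forwarding core is inside
the bond's rx/tx burst at the same moment, the slave array it iterates can
change underneath it (port ids illustrative):

#include <stdint.h>
#include <rte_eth_bond.h>

static void
swap_slave(uint8_t bond_port, uint8_t old_slave, uint8_t new_slave)
{
	/* Each call rewrites the bond's active slave list while
	 * data-plane cores may still be reading it. */
	rte_eth_bond_slave_remove(bond_port, old_slave);
	rte_eth_bond_slave_add(bond_port, new_slave);
}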

This patch set fixes what appears to be a bug in the bonding PMD.

Performance measurements have been made with this patch set applied and without the patches applied using 64 byte packets. 

With the patches applied the following drop in performance was observed:

% drop for fwd+io:	0.16%
% drop for fwd+mac:	0.39%

This patch set has been reviewed and ack'ed, so I think it should be applied in 16.07

Regards,

Bernard.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2016-06-16 16:41                 ` Iremonger, Bernard
@ 2016-06-16 18:38                   ` Thomas Monjalon
  2017-02-15 18:01                     ` Ferruh Yigit
  0 siblings, 1 reply; 42+ messages in thread
From: Thomas Monjalon @ 2016-06-16 18:38 UTC (permalink / raw)
  To: Iremonger, Bernard
  Cc: Richardson, Bruce, dev, Doherty, Declan, Ananyev, Konstantin,
	Mcnamara, John

2016-06-16 16:41, Iremonger, Bernard:
> Hi Thomas,
> <snip>
> > 2016-06-16 15:32, Bruce Richardson:
> > > On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> > > > > Why does this particular PMD need spinlocks when doing RX and TX,
> > > > > while other device types do not? How is adding/removing devices
> > > > > from a bonded device different to other control operations that
> > > > > can be done on physical PMDs? Is this not similar to say bringing
> > > > > down or hotplugging out a physical port just before an RX or TX
> > operation takes place?
> > > > > For all other PMDs we rely on the app to synchronise control and
> > > > > data plane operation - why not here?
> > > > >
> > > > > /Bruce
> > > >
> > > > This issue arose during VM live migration testing.
> > > > For VM live migration it is necessary (while traffic is running) to be able to
> > remove a bonded slave device, stop it, close it and detach it.
> > > > If a slave device is removed from a bonded device while traffic is running,
> > a segmentation fault may occur in the rx/tx burst function. The spinlock has
> > been added to prevent this occurring.
> > > >
> > > > The bonding device already uses a spinlock to synchronise between the
> > add and remove functionality and the slave_link_status_change_monitor
> > code.
> > > >
> > > > Previously testpmd did not allow stop, close or detach of a PMD while
> > > > traffic was running. Testpmd has been modified with the following
> > > > patchset
> > > >
> > > > http://dpdk.org/dev/patchwork/patch/13472/
> > > >
> > > > It now allows stop, close and detach of a PMD provided it is not
> > forwarding and is not a slave of a bonded PMD.
> > > >
> > > I will admit to not being fully convinced, but if nobody else has any
> > > serious objections, and since this patch has been reviewed and acked,
> > > I'm ok to merge it in. I'll do so shortly.
> > 
> > Please hold on.
> > Seeing locks introduced in the Rx/Tx path is an alert.
> > We clearly need a design document to explain where locks can be used and
> > what the responsibilities of the control plane are.
> > If everybody agrees in this document that DPDK can have some locks in the
> > fast path, then OK to merge it.
> > 
> > So I would say NACK for 16.07 and maybe postpone to 16.11.
> 
> Looking at the documentation for the bonding PMD.
> 
> http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
> 
> In section 10.2 it states the following:
> 
> Bonded devices support the dynamic addition and removal of slave devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs.
> 
> If a slave device is added or removed while traffic is running, there is the possibility of a segmentation fault in the rx/tx burst functions. This is most likely to occur in the round robin bonding mode.
> 
> This patch set fixes what appears to be a bug in the bonding PMD.

It can be fixed by removing this statement in the doc.

One of the design principles of DPDK is to avoid locks.

> Performance measurements have been made with this patch set applied and without the patches applied using 64 byte packets. 
> 
> With the patches applied the following drop in performance was observed:
> 
> % drop for fwd+io:	0.16%
> % drop for fwd+mac:	0.39%
> 
> This patch set has been reviewed and ack'ed, so I think it should be applied in 16.07

I understand your point of view and I gave mine.
Now we need more opinions from others.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2016-06-12 17:11       ` [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
  2016-06-13  9:18         ` Bruce Richardson
@ 2016-09-09 11:29         ` Ferruh Yigit
  1 sibling, 0 replies; 42+ messages in thread
From: Ferruh Yigit @ 2016-09-09 11:29 UTC (permalink / raw)
  To: Bernard Iremonger, dev; +Cc: Doherty, Declan, Ananyev, Konstantin

Hi Bernard,

This is an old patch; sorry for commenting after so long.

On 6/12/2016 6:11 PM, Bernard Iremonger wrote:
> Use rte_spinlock_trylock() in the rx/tx burst functions to
> take the queue spinlock.
> 
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---

...

>  static uint16_t
> @@ -143,8 +154,10 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
>         uint8_t i, j, k;
> 
>         rte_eth_macaddr_get(internals->port_id, &bond_mac);
> -       /* Copy slave list to protect against slave up/down changes during tx
> -        * bursting */

This piece,

...

>         for (i = 0; i < num_of_slaves; i++) {
> -               struct port *port = &mode_8023ad_ports[slaves[i]];
> +               struct port *port;
> +
> +               port = &mode_8023ad_ports[internals->active_slaves[i]];

And this piece seems like it needs to be moved into the next patch in the patchset.
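
For context, a rough reconstruction of the two patterns being discussed;
the struct and field names are assumptions for the sketch, not the exact
driver code:

#include <stdint.h>
#include <string.h>

#define SKETCH_MAX_SLAVES 32

struct bond_private_sketch {
	uint8_t active_slave_count;
	uint8_t active_slaves[SKETCH_MAX_SLAVES];
};

/* Old pattern: snapshot the slave list so a concurrent slave add/remove
 * cannot invalidate it mid-burst. */
static uint8_t
copy_slaves(const struct bond_private_sketch *internals,
	    uint8_t slaves[SKETCH_MAX_SLAVES])
{
	uint8_t n = internals->active_slave_count;

	memcpy(slaves, internals->active_slaves, n * sizeof(slaves[0]));
	return n;
}

/* New pattern: with the queue spinlock held for the whole burst, the
 * live array is indexed directly (internals->active_slaves[i]) and the
 * local copy goes away. */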

...

And if you send a new version of the patchset, there are a few
warnings from check-git-log.sh (a conforming example follows the list):

Wrong headline prefix:
        bonding: remove memcpy from burst functions
        bonding: take queue spinlock in rx/tx burst functions
        bonding: grab queue spinlocks in slave add and remove
        bonding: add spinlock to rx and tx queues
Wrong headline lowercase:
        bonding: take queue spinlock in rx/tx burst functions
        bonding: add spinlock to rx and tx queues
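
For example, a headline that would likely satisfy the checker, assuming it
wants the net/ driver prefix and Rx/Tx capitalisation, is:

        net/bonding: take queue spinlock in Rx/Tx burst functions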

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 4/4] bonding: remove memcpy from burst functions
  2016-06-12 17:11       ` [PATCH v3 4/4] bonding: remove memcpy from " Bernard Iremonger
@ 2016-09-11 12:39         ` Yuanhan Liu
  0 siblings, 0 replies; 42+ messages in thread
From: Yuanhan Liu @ 2016-09-11 12:39 UTC (permalink / raw)
  To: Bernard Iremonger
  Cc: dev, declan.doherty, konstantin.ananyev, Thomas Monjalon

On Sun, Jun 12, 2016 at 06:11:29PM +0100, Bernard Iremonger wrote:
> Now that the queue spinlocks have been added to the rx and
> tx burst functions the memcpy of the slave data is no
> longer necessary, so it has been removed.
> 
> Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

Hi,

FYI, my testrobot caught some errors when this patch was applied.

        --yliu

---
x86_64-native-linuxapp-clang: config-all-yes
============================================
grep: /lib/modules/4.6.0/build/include/generated/utsrelease.h: No such file or directory
grep: /lib/modules/4.6.0/build/include/generated/utsrelease.h: No such file or directory
/root/dpdk/drivers/net/bonding/rte_eth_bond_pmd.c:753:41: error: use of undeclared identifier 'slaves'
                                        mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
                                                                           ^
1 error generated.
make[6]: *** [rte_eth_bond_pmd.o] Error 1
make[5]: *** [bonding] Error 2
make[5]: *** Waiting for unfinished jobs....
make[4]: *** [net] Error 2
make[3]: *** [drivers] Error 2
make[2]: *** [all] Error 2
make[1]: *** [pre_install] Error 2
make: *** [install] Error 2
error: build failed
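
The undeclared 'slaves' identifier is the local copy that this patch
removes; a likely fix, assuming the active_slaves indexing used elsewhere
in the series, would be:

-					mode6_debug("TX IPv4:", ether_hdr, slaves[i], &burstnumberTX);
+					mode6_debug("TX IPv4:", ether_hdr, internals->active_slaves[i], &burstnumberTX);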

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2016-06-16 18:38                   ` Thomas Monjalon
@ 2017-02-15 18:01                     ` Ferruh Yigit
  2017-02-16  9:13                       ` Bruce Richardson
  0 siblings, 1 reply; 42+ messages in thread
From: Ferruh Yigit @ 2017-02-15 18:01 UTC (permalink / raw)
  To: Thomas Monjalon, Bernard Iremonger, Bruce Richardson, Ananyev,
	Konstantin, Declan Doherty
  Cc: DPDK

On 6/16/2016 7:38 PM, thomas.monjalon at 6wind.com (Thomas Monjalon) wrote:
> 2016-06-16 16:41, Iremonger, Bernard:
>> Hi Thomas,
>> <snip>
>>> 2016-06-16 15:32, Bruce Richardson:
>>>> On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
>>>>>> Why does this particular PMD need spinlocks when doing RX and TX,
>>>>>> while other device types do not? How is adding/removing devices
>>>>>> from a bonded device different to other control operations that
>>>>>> can be done on physical PMDs? Is this not similar to say bringing
>>>>>> down or hotplugging out a physical port just before an RX or TX
>>> operation takes place?
>>>>>> For all other PMDs we rely on the app to synchronise control and
>>>>>> data plane operation - why not here?
>>>>>>
>>>>>> /Bruce
>>>>>
>>>>> This issue arose during VM live migration testing.
>>>>> For VM live migration it is necessary (while traffic is running) to be able to
>>> remove a bonded slave device, stop it, close it and detach it.
>>>>> If a slave device is removed from a bonded device while traffic is running,
>>> a segmentation fault may occur in the rx/tx burst function. The spinlock has
>>> been added to prevent this occurring.
>>>>>
>>>>> The bonding device already uses a spinlock to synchronise between the
>>> add and remove functionality and the slave_link_status_change_monitor
>>> code.
>>>>>
>>>>> Previously testpmd did not allow stop, close or detach of a PMD while
>>>>> traffic was running. Testpmd has been modified with the following
>>>>> patchset
>>>>>
>>>>> http://dpdk.org/dev/patchwork/patch/13472/
>>>>>
>>>>> It now allows stop, close and detach of a PMD provided it is not
>>> forwarding and is not a slave of a bonded PMD.
>>>>>
>>>> I will admit to not being fully convinced, but if nobody else has any
>>>> serious objections, and since this patch has been reviewed and acked,
>>>> I'm ok to merge it in. I'll do so shortly.
>>>
>>> Please hold on.
>>> Seeing locks introduced in the Rx/Tx path is an alert.
>>> We clearly need a design document to explain where locks can be used and
>>> what the responsibilities of the control plane are.
>>> If everybody agrees in this document that DPDK can have some locks in the
>>> fast path, then OK to merge it.
>>>
>>> So I would say NACK for 16.07 and maybe postpone to 16.11.
>>
>> Looking at the documentation for the bonding PMD.
>>
>> http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
>>
>> In section 10.2 it states the following:
>>
>> Bonded devices support the dynamic addition and removal of slave devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove APIs.
>>
>> If a slave device is added or removed while traffic is running, there is the possibility of a segmentation fault in the rx/tx burst functions. This is most likely to occur in the round robin bonding mode.
>>
>> This patch set fixes what appears to be a bug in the bonding PMD.
> 
> It can be fixed by removing this statement in the doc.
> 
> One of the design principles of DPDK is to avoid locks.
> 
>> Performance measurements have been made with this patch set applied and without the patches applied using 64 byte packets. 
>>
>> With the patches applied the following drop in performance was observed:
>>
>> % drop for fwd+io:	0.16%
>> % drop for fwd+mac:	0.39%
>>
>> This patch set has been reviewed and ack'ed, so I think it should be applied in 16.07
> 
> I understand your point of view and I gave mine.
> Now we need more opinions from others.
> 

Hi,

These patches have been sitting in patchwork for a long time. The discussion
never concluded and the patches kept being deferred each release.

I think we should give a decision about them:

1- We can merge them in this release, they are fixing a valid problem,
and patches are already acked.

2- We can reject them; if not having them for more than six months has not
caused a problem, perhaps they are not really that required. And if
somebody needs them in the future, we can resurrect them from patchwork.

I vote for option 2, any comments?

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2017-02-15 18:01                     ` Ferruh Yigit
@ 2017-02-16  9:13                       ` Bruce Richardson
  2017-02-16 11:39                         ` Iremonger, Bernard
  0 siblings, 1 reply; 42+ messages in thread
From: Bruce Richardson @ 2017-02-16  9:13 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Thomas Monjalon, Bernard Iremonger, Ananyev, Konstantin,
	Declan Doherty, DPDK

On Wed, Feb 15, 2017 at 06:01:45PM +0000, Ferruh Yigit wrote:
> On 6/16/2016 7:38 PM, thomas.monjalon at 6wind.com (Thomas Monjalon) wrote:
> > 2016-06-16 16:41, Iremonger, Bernard:
> >> Hi Thomas,
> >> <snip>
> >>> 2016-06-16 15:32, Bruce Richardson:
> >>>> On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard wrote:
> >>>>>> Why does this particular PMD need spinlocks when doing RX and TX,
> >>>>>> while other device types do not? How is adding/removing devices
> >>>>>> from a bonded device different to other control operations that
> >>>>>> can be done on physical PMDs? Is this not similar to say bringing
> >>>>>> down or hotplugging out a physical port just before an RX or TX
> >>> operation takes place?
> >>>>>> For all other PMDs we rely on the app to synchronise control and
> >>>>>> data plane operation - why not here?
> >>>>>>
> >>>>>> /Bruce
> >>>>>
> >>>>> This issue arose during VM live migration testing.
> >>>>> For VM live migration it is necessary (while traffic is running) to be able to
> >>> remove a bonded slave device, stop it, close it and detach it.
> >>>>> If a slave device is removed from a bonded device while traffic is running,
> >>> a segmentation fault may occur in the rx/tx burst function. The spinlock has
> >>> been added to prevent this occurring.
> >>>>>
> >>>>> The bonding device already uses a spinlock to synchronise between the
> >>> add and remove functionality and the slave_link_status_change_monitor
> >>> code.
> >>>>>
> >>>>> Previously testpmd did not allow stop, close or detach of a PMD while
> >>>>> traffic was running. Testpmd has been modified with the following
> >>>>> patchset
> >>>>>
> >>>>> http://dpdk.org/dev/patchwork/patch/13472/
> >>>>>
> >>>>> It now allows stop, close and detach of a PMD provided it is not
> >>> forwarding and is not a slave of a bonded PMD.
> >>>>>
> >>>> I will admit to not being fully convinced, but if nobody else has any
> >>>> serious objections, and since this patch has been reviewed and acked,
> >>>> I'm ok to merge it in. I'll do so shortly.
> >>>
> >>> Please hold on.
> >>> Seeing locks introduced in the Rx/Tx path is an alert.
> >>> We clearly need a design document to explain where locks can be used and
> >>> what the responsibilities of the control plane are.
> >>> If everybody agrees in this document that DPDK can have some locks in the
> >>> fast path, then OK to merge it.
> >>>
> >>> So I would say NACK for 16.07 and maybe postpone to 16.11.
> >>
> >> Looking at the documentation for the bonding PMD.
> >>
> >> http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.html
> >>
> >> In section 10.2 it states the following:
> >>
> >> Bonded devices support the dynamic addition and removal of slave
> >>
> >> If a slave device is added or removed while traffic is running, there is the possibility of a segmentation fault in the rx/tx burst functions. This is most likely to occur in the round robin bonding mode.
> >>
> >> This patch set fixes what appears to be a bug in the bonding PMD.
> > 
> > It can be fixed by removing this statement in the doc.
> > 
> > One of the design principles of DPDK is to avoid locks.
> > 
> >> Performance measurements have been made with this patch set applied and without the patches applied using 64 byte packets. 
> >>
> >> With the patches applied the following drop in performance was observed:
> >>
> >> % drop for fwd+io:	0.16%
> >> % drop for fwd+mac:	0.39%
> >>
> >> This patch set has been reviewed and ack'ed, so I think it should be applied in 16.07
> > 
> > I understand your point of view and I gave mine.
> > Now we need more opinions from others.
> > 
> 
> Hi,
> 
> These patches have been sitting in patchwork for a long time. The discussion
> never concluded and the patches kept being deferred each release.
> 
> I think we should give a decision about them:
> 
> 1- We can merge them in this release, they are fixing a valid problem,
> and patches are already acked.
> 
> 2- We can reject them; if not having them for more than six months has not
> caused a problem, perhaps they are not really that required. And if
> somebody needs them in the future, we can resurrect them from patchwork.
> 
> I vote for option 2, any comments?
> 
+1 on option 2. They are obviously not badly needed if nobody has asked
for them in over six months.

	/Bruce

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2017-02-16  9:13                       ` Bruce Richardson
@ 2017-02-16 11:39                         ` Iremonger, Bernard
  2017-02-20 11:15                           ` Ferruh Yigit
  0 siblings, 1 reply; 42+ messages in thread
From: Iremonger, Bernard @ 2017-02-16 11:39 UTC (permalink / raw)
  To: Richardson, Bruce, Yigit, Ferruh
  Cc: Thomas Monjalon, Ananyev, Konstantin, Doherty, Declan, DPDK

Hi Ferruh,

> -----Original Message-----
> From: Richardson, Bruce
> Sent: Thursday, February 16, 2017 9:14 AM
> To: Yigit, Ferruh <ferruh.yigit@intel.com>
> Cc: Thomas Monjalon <thomas.monjalon@6wind.com>; Iremonger, Bernard
> <bernard.iremonger@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Doherty, Declan
> <declan.doherty@intel.com>; DPDK <dev@dpdk.org>
> Subject: Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx
> burst functions
> 
> On Wed, Feb 15, 2017 at 06:01:45PM +0000, Ferruh Yigit wrote:
> > On 6/16/2016 7:38 PM, thomas.monjalon at 6wind.com (Thomas Monjalon)
> wrote:
> > > 2016-06-16 16:41, Iremonger, Bernard:
> > >> Hi Thomas,
> > >> <snip>
> > >>> 2016-06-16 15:32, Bruce Richardson:
> > >>>> On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard
> wrote:
> > >>>>>> Why does this particular PMD need spinlocks when doing RX and
> > >>>>>> TX, while other device types do not? How is adding/removing
> > >>>>>> devices from a bonded device different to other control
> > >>>>>> operations that can be done on physical PMDs? Is this not
> > >>>>>> similar to say bringing down or hotplugging out a physical port
> > >>>>>> just before an RX or TX
> > >>> operation takes place?
> > >>>>>> For all other PMDs we rely on the app to synchronise control
> > >>>>>> and data plane operation - why not here?
> > >>>>>>
> > >>>>>> /Bruce
> > >>>>>
> > >>>>> This issue arose during VM live migration testing.
> > >>>>> For VM live migration it is necessary (while traffic is running)
> > >>>>> to be able to
> > >>> remove a bonded slave device, stop it, close it and detach it.
> > >>>>> If a slave device is removed from a bonded device while traffic
> > >>>>> is running,
> > >>> a segmentation fault may occur in the rx/tx burst function. The
> > >>> spinlock has been added to prevent this occurring.
> > >>>>>
> > >>>>> The bonding device already uses a spinlock to synchronise
> > >>>>> between the
> > >>> add and remove functionality and the
> > >>> slave_link_status_change_monitor code.
> > >>>>>
> > >>>>> Previously testpmd did not allow stop, close or detach of a PMD
> > >>>>> while traffic was running. Testpmd has been modified with the
> > >>>>> following patchset
> > >>>>>
> > >>>>> http://dpdk.org/dev/patchwork/patch/13472/
> > >>>>>
> > >>>>> It now allows stop, close and detach of a PMD provided it is
> > >>>>> not
> > >>> forwarding and is not a slave of a bonded PMD.
> > >>>>>
> > >>>> I will admit to not being fully convinced, but if nobody else has
> > >>>> any serious objections, and since this patch has been reviewed
> > >>>> and acked, I'm ok to merge it in. I'll do so shortly.
> > >>>
> > >>> Please hold on.
> > >>> Seeing locks introduced in the Rx/Tx path is an alert.
> > >>> We clearly need a design document to explain where locks can be
> > >>> used and what the responsibilities of the control plane are.
> > >>> If everybody agrees in this document that DPDK can have some locks
> > >>> in the fast path, then OK to merge it.
> > >>>
> > >>> So I would say NACK for 16.07 and maybe postpone to 16.11.
> > >>
> > >> Looking at the documentation for the bonding PMD.
> > >>
> > >>
> http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_li
> > >> b.html
> > >>
> > >> In section 10.2 it states the following:
> > >>
> > >> Bonded devices support the dynamic addition and removal of slave
> devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove
> APIs.
> > >>
> > >> If a slave device is added or removed while traffic is running, there is the
> possibility of a segmentation fault in the rx/tx burst functions. This is most
> likely to occur in the round robin bonding mode.
> > >>
> > >> This patch set fixes what appears to be a bug in the bonding PMD.
> > >
> > > It can be fixed by removing this statement in the doc.
> > >
> > > One of the design principles of DPDK is to avoid locks.
> > >
> > >> Performance measurements have been made with this patch set
> applied and without the patches applied using 64 byte packets.
> > >>
> > >> With the patches applied the following drop in performance was
> observed:
> > >>
> > >> % drop for fwd+io:	0.16%
> > >> % drop for fwd+mac:	0.39%
> > >>
> > >> This patch set has been reviewed and ack'ed, so I think it should
> > >> be applied in 16.07
> > >
> > > I understand your point of view and I gave mine.
> > > Now we need more opinions from others.
> > >
> >
> > Hi,
> >
> > These patches have been sitting in patchwork for a long time. The discussion
> > never concluded and the patches kept being deferred each release.
> >
> > I think we should give a decision about them:
> >
> > 1- We can merge them in this release, they are fixing a valid problem,
> > and patches are already acked.
> >
> > 2- We can reject them; if not having them for more than six months has not
> > caused a problem, perhaps they are not really that required. And if
> > somebody needs them in the future, we can resurrect them from
> patchwork.
> >
> > I vote for option 2, any comments?
> >
> +1 on option 2. They are obviously not badly needed if nobody has asked
> for them in over six months.
> 
> 	/Bruce

I am ok with option 2, provided they can be retrieved if needed.

Regards,

Bernard.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions
  2017-02-16 11:39                         ` Iremonger, Bernard
@ 2017-02-20 11:15                           ` Ferruh Yigit
  0 siblings, 0 replies; 42+ messages in thread
From: Ferruh Yigit @ 2017-02-20 11:15 UTC (permalink / raw)
  To: Iremonger, Bernard, Richardson, Bruce
  Cc: Thomas Monjalon, Ananyev, Konstantin, Doherty, Declan, DPDK

On 2/16/2017 11:39 AM, Iremonger, Bernard wrote:
> Hi Ferruh,
> 
>> -----Original Message-----
>> From: Richardson, Bruce
>> Sent: Thursday, February 16, 2017 9:14 AM
>> To: Yigit, Ferruh <ferruh.yigit@intel.com>
>> Cc: Thomas Monjalon <thomas.monjalon@6wind.com>; Iremonger, Bernard
>> <bernard.iremonger@intel.com>; Ananyev, Konstantin
>> <konstantin.ananyev@intel.com>; Doherty, Declan
>> <declan.doherty@intel.com>; DPDK <dev@dpdk.org>
>> Subject: Re: [dpdk-dev] [PATCH v3 3/4] bonding: take queue spinlock in rx/tx
>> burst functions
>>
>> On Wed, Feb 15, 2017 at 06:01:45PM +0000, Ferruh Yigit wrote:
>>> On 6/16/2016 7:38 PM, thomas.monjalon at 6wind.com (Thomas Monjalon)
>> wrote:
>>>> 2016-06-16 16:41, Iremonger, Bernard:
>>>>> Hi Thomas,
>>>>> <snip>
>>>>>> 2016-06-16 15:32, Bruce Richardson:
>>>>>>> On Mon, Jun 13, 2016 at 01:28:08PM +0100, Iremonger, Bernard
>> wrote:
>>>>>>>>> Why does this particular PMD need spinlocks when doing RX and
>>>>>>>>> TX, while other device types do not? How is adding/removing
>>>>>>>>> devices from a bonded device different to other control
>>>>>>>>> operations that can be done on physical PMDs? Is this not
>>>>>>>>> similar to say bringing down or hotplugging out a physical port
>>>>>>>>> just before an RX or TX
>>>>>> operation takes place?
>>>>>>>>> For all other PMDs we rely on the app to synchronise control
>>>>>>>>> and data plane operation - why not here?
>>>>>>>>>
>>>>>>>>> /Bruce
>>>>>>>>
>>>>>>>> This issue arose during VM live migration testing.
>>>>>>>> For VM live migration it is necessary (while traffic is running)
>>>>>>>> to be able to
>>>>>> remove a bonded slave device, stop it, close it and detach it.
>>>>>>>> If a slave device is removed from a bonded device while traffic
>>>>>>>> is running,
>>>>>> a segmentation fault may occur in the rx/tx burst function. The
>>>>>> spinlock has been added to prevent this occurring.
>>>>>>>>
>>>>>>>> The bonding device already uses a spinlock to synchronise
>>>>>>>> between the
>>>>>> add and remove functionality and the
>>>>>> slave_link_status_change_monitor code.
>>>>>>>>
>>>>>>>> Previously testpmd did not allow stop, close or detach of a PMD
>>>>>>>> while traffic was running. Testpmd has been modified with the
>>>>>>>> following patchset
>>>>>>>>
>>>>>>>> http://dpdk.org/dev/patchwork/patch/13472/
>>>>>>>>
>>>>>>>> It now allows stop, close and detach of a PMD provided it is
>>>>>>>> not
>>>>>> forwarding and is not a slave of a bonded PMD.
>>>>>>>>
>>>>>>> I will admit to not being fully convinced, but if nobody else has
>>>>>>> any serious objections, and since this patch has been reviewed
>>>>>>> and acked, I'm ok to merge it in. I'll do so shortly.
>>>>>>
>>>>>> Please hold on.
>>>>>> Seeing locks introduced in the Rx/Tx path is an alert.
>>>>>> We clearly need a design document to explain where locks can be
>>>>>> used and what the responsibilities of the control plane are.
>>>>>> If everybody agrees in this document that DPDK can have some locks
>>>>>> in the fast path, then OK to merge it.
>>>>>>
>>>>>> So I would say NACK for 16.07 and maybe postpone to 16.11.
>>>>>
>>>>> Looking at the documentation for the bonding PMD.
>>>>>
>>>>>
>> http://dpdk.org/doc/guides/prog_guide/link_bonding_poll_mode_drv_li
>>>>> b.html
>>>>>
>>>>> In section 10.2 it states the following:
>>>>>
>>>>> Bonded devices support the dynamic addition and removal of slave
>> devices using the rte_eth_bond_slave_add / rte_eth_bond_slave_remove
>> APIs.
>>>>>
>>>>> If a slave device is added or removed while traffic is running, there is the
>> possibility of a segmentation fault in the rx/tx burst functions. This is most
>> likely to occur in the round robin bonding mode.
>>>>>
>>>>> This patch set fixes what appears to be a bug in the bonding PMD.
>>>>
>>>> It can be fixed by removing this statement in the doc.
>>>>
>>>> One of the design principles of DPDK is to avoid locks.
>>>>
>>>>> Performance measurements have been made with this patch set
>> applied and without the patches applied using 64 byte packets.
>>>>>
>>>>> With the patches applied the following drop in performance was
>> observed:
>>>>>
>>>>> % drop for fwd+io:	0.16%
>>>>> % drop for fwd+mac:	0.39%
>>>>>
>>>>> This patch set has been reviewed and ack'ed, so I think it should
>>>>> be applied in 16.07
>>>>
>>>> I understand your point of view and I gave mine.
>>>> Now we need more opinions from others.
>>>>
>>>
>>> Hi,
>>>
>>> These patches have been sitting in patchwork for a long time. The discussion
>>> never concluded and the patches kept being deferred each release.
>>>
>>> I think we should give a decision about them:
>>>
>>> 1- We can merge them in this release, they are fixing a valid problem,
>>> and patches are already acked.
>>>
>>> 2- We can reject them; if not having them for more than six months has not
>>> caused a problem, perhaps they are not really that required. And if
>>> somebody needs them in the future, we can resurrect them from
>> patchwork.
>>>
>>> I vote for option 2, any comments?
>>>
>> +1 on option 2. They are obviously not badly needed if nobody has asked
>> for them in over six months.
>>
>> 	/Bruce
> 
> I am ok with option 2, provided they can be retrieved if needed.

Patches marked as rejected in patchwork.

For future reference, patchwork ids:
http://dpdk.org/dev/patchwork/patch/13482/
http://dpdk.org/dev/patchwork/patch/13483/
http://dpdk.org/dev/patchwork/patch/13484/
http://dpdk.org/dev/patchwork/patch/13485/

Thanks,
ferruh

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2017-02-20 11:15 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-05-05 15:14 [PATCH 0/5] bonding: locks Bernard Iremonger
2016-05-05 15:14 ` [PATCH 1/5] bonding: replace spinlock with read/write lock Bernard Iremonger
2016-05-05 17:12   ` Stephen Hemminger
2016-05-06 10:32     ` Declan Doherty
2016-05-06 15:55       ` Stephen Hemminger
2016-05-13 17:10         ` Ananyev, Konstantin
2016-05-13 17:18           ` Ananyev, Konstantin
2016-05-26 16:24             ` Iremonger, Bernard
2016-05-05 15:14 ` [PATCH 2/5] bonding: add read/write lock to rx/tx burst functions Bernard Iremonger
2016-05-05 15:14 ` [PATCH 3/5] bonding: remove memcopy of slaves from rx/tx burst function Bernard Iremonger
2016-05-05 15:14 ` [PATCH 4/5] bonding: add read/write lock to stop function Bernard Iremonger
2016-05-05 15:15 ` [PATCH 5/5] bonding: add read/write lock to the link_update function Bernard Iremonger
2016-05-26 16:38 ` [PATCH v2 0/6] bonding: locks Bernard Iremonger
2016-05-26 16:38   ` [PATCH v2 1/6] bonding: add spinlock to rx and tx queues Bernard Iremonger
2016-06-10 18:12     ` Ananyev, Konstantin
2016-06-12 17:11     ` [PATCH v3 0/4] bonding: locks Bernard Iremonger
2016-06-12 17:11       ` [PATCH v3 1/4] bonding: add spinlock to rx and tx queues Bernard Iremonger
2016-06-12 17:11       ` [PATCH v3 2/4] bonding: grab queue spinlocks in slave add and remove Bernard Iremonger
2016-06-12 17:11       ` [PATCH v3 3/4] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
2016-06-13  9:18         ` Bruce Richardson
2016-06-13 12:28           ` Iremonger, Bernard
2016-06-16 14:32             ` Bruce Richardson
2016-06-16 15:00               ` Thomas Monjalon
2016-06-16 16:41                 ` Iremonger, Bernard
2016-06-16 18:38                   ` Thomas Monjalon
2017-02-15 18:01                     ` Ferruh Yigit
2017-02-16  9:13                       ` Bruce Richardson
2017-02-16 11:39                         ` Iremonger, Bernard
2017-02-20 11:15                           ` Ferruh Yigit
2016-09-09 11:29         ` Ferruh Yigit
2016-06-12 17:11       ` [PATCH v3 4/4] bonding: remove memcpy from " Bernard Iremonger
2016-09-11 12:39         ` Yuanhan Liu
2016-05-26 16:38   ` [PATCH v2 2/6] bonding: grab queue spinlocks in slave add and remove Bernard Iremonger
2016-06-10 18:14     ` Ananyev, Konstantin
2016-05-26 16:38   ` [PATCH v2 3/6] bonding: take queue spinlock in rx/tx burst functions Bernard Iremonger
2016-06-10 18:14     ` Ananyev, Konstantin
2016-05-26 16:38   ` [PATCH v2 4/6] bonding: add spinlock to stop function Bernard Iremonger
2016-05-26 16:38   ` [PATCH v2 5/6] bonding: add spinlock to link update function Bernard Iremonger
2016-05-26 16:38   ` [PATCH v2 6/6] bonding: remove memcpy from burst functions Bernard Iremonger
2016-06-10 18:15     ` Ananyev, Konstantin
2016-06-10 14:45   ` [PATCH v2 0/6] bonding: locks Bruce Richardson
2016-06-10 18:24     ` Iremonger, Bernard
