* [patch 00/20] ibmveth update
@ 2010-08-23  0:09 Anton Blanchard
  2010-08-23  0:09 ` [patch 01/20] ibmveth: Remove integer divide caused by modulus Anton Blanchard
                   ` (20 more replies)
  0 siblings, 21 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

Here are a number of patches for the IBM Power virtual ethernet driver.
Patches 1-9 contain the significant changes, and patches 10-20 are style
and formatting changes to bring it more into line with Linux coding
standards.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [patch 01/20] ibmveth: Remove integer divide caused by modulus
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 02/20] ibmveth: batch rx buffer replacement Anton Blanchard
                   ` (19 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_modulus --]
[-- Type: text/plain, Size: 1320 bytes --]

Replace some modulus operators with an increment and compare to avoid
an integer divide.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:21.104406611 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:26.774004158 +1000
@@ -252,7 +252,9 @@ static void ibmveth_replenish_buffer_poo
 		}
 
 		free_index = pool->consumer_index;
-		pool->consumer_index = (pool->consumer_index + 1) % pool->size;
+		pool->consumer_index++;
+		if (pool->consumer_index >= pool->size)
+			pool->consumer_index = 0;
 		index = pool->free_map[free_index];
 
 		ibmveth_assert(index != IBM_VETH_INVALID_MAP);
@@ -377,9 +379,10 @@ static void ibmveth_remove_buffer_from_p
 			 DMA_FROM_DEVICE);
 
 	free_index = adapter->rx_buff_pool[pool].producer_index;
-	adapter->rx_buff_pool[pool].producer_index
-		= (adapter->rx_buff_pool[pool].producer_index + 1)
-		% adapter->rx_buff_pool[pool].size;
+	adapter->rx_buff_pool[pool].producer_index++;
+	if (adapter->rx_buff_pool[pool].producer_index >=
+	    adapter->rx_buff_pool[pool].size)
+		adapter->rx_buff_pool[pool].producer_index = 0;
 	adapter->rx_buff_pool[pool].free_map[free_index] = index;
 
 	mb();
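A quick user-space sanity check of the transformation (the function names here are hypothetical, not part of the driver): both forms advance a ring index identically, but the second avoids the integer divide that the `%` operator implies on POWER.

```c
#include <assert.h>

/* Old style: the modulus hides an integer divide. */
static unsigned int ring_advance_mod(unsigned int idx, unsigned int size)
{
	return (idx + 1) % size;
}

/* New style from the patch: increment, then compare and wrap. */
static unsigned int ring_advance_cmp(unsigned int idx, unsigned int size)
{
	idx++;
	if (idx >= size)
		idx = 0;
	return idx;
}
```

This only holds while the index stays in range between calls, which is true for the producer/consumer indices here.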




* [patch 02/20] ibmveth: batch rx buffer replacement
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
  2010-08-23  0:09 ` [patch 01/20] ibmveth: Remove integer divide caused by modulus Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 03/20] ibmveth: Remove LLTX Anton Blanchard
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_watermark --]
[-- Type: text/plain, Size: 2158 bytes --]

At the moment we try to replenish the receive ring on every rx interrupt.
We even have a pool->threshold but never use it.

To limit the maximum latency incurred when refilling, change the threshold
from 1/2 to 7/8 and reduce the largest rx pool from 768 buffers to 512,
which should still be more than enough.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:26.774004158 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:27.963919704 +1000
@@ -178,7 +178,7 @@ static void ibmveth_init_buffer_pool(str
 	pool->size = pool_size;
 	pool->index = pool_index;
 	pool->buff_size = buff_size;
-	pool->threshold = pool_size / 2;
+	pool->threshold = pool_size * 7 / 8;
 	pool->active = pool_active;
 }
 
@@ -315,10 +315,13 @@ static void ibmveth_replenish_task(struc
 
 	adapter->replenish_task_cycles++;
 
-	for (i = (IbmVethNumBufferPools - 1); i >= 0; i--)
-		if(adapter->rx_buff_pool[i].active)
-			ibmveth_replenish_buffer_pool(adapter,
-						     &adapter->rx_buff_pool[i]);
+	for (i = (IbmVethNumBufferPools - 1); i >= 0; i--) {
+		struct ibmveth_buff_pool *pool = &adapter->rx_buff_pool[i];
+
+		if (pool->active &&
+		    (atomic_read(&pool->available) < pool->threshold))
+			ibmveth_replenish_buffer_pool(adapter, pool);
+	}
 
 	adapter->rx_no_buffer = *(u64*)(((char*)adapter->buffer_list_addr) + 4096 - 8);
 }
Index: net-next-2.6/drivers/net/ibmveth.h
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.h	2010-08-23 08:52:16.164757227 +1000
+++ net-next-2.6/drivers/net/ibmveth.h	2010-08-23 08:52:27.963919704 +1000
@@ -102,7 +102,7 @@ static inline long h_illan_attributes(un
 #define IBMVETH_MAX_BUF_SIZE (1024 * 128)
 
 static int pool_size[] = { 512, 1024 * 2, 1024 * 16, 1024 * 32, 1024 * 64 };
-static int pool_count[] = { 256, 768, 256, 256, 256 };
+static int pool_count[] = { 256, 512, 256, 256, 256 };
 static int pool_active[] = { 1, 1, 0, 0, 0};
 
 #define IBM_VETH_INVALID_MAP ((u16)0xffff)
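The effect of the two hunks can be sketched in a few lines of user-space C (the helper names are mine, not the driver's): with the largest pool resized to 512 buffers, replenishing now triggers once fewer than 448 buffers remain available, rather than on every rx interrupt.

```c
#include <assert.h>

/* Mirrors the threshold set in the patched ibmveth_init_buffer_pool(). */
static unsigned int pool_threshold(unsigned int pool_size)
{
	return pool_size * 7 / 8;
}

/* Mirrors the new per-pool check in ibmveth_replenish_task(). */
static int pool_needs_replenish(unsigned int available, unsigned int pool_size)
{
	return available < pool_threshold(pool_size);
}
```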




* [patch 03/20] ibmveth: Remove LLTX
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
  2010-08-23  0:09 ` [patch 01/20] ibmveth: Remove integer divide caused by modulus Anton Blanchard
  2010-08-23  0:09 ` [patch 02/20] ibmveth: batch rx buffer replacement Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 04/20] ibmveth: Add tx_copybreak Anton Blanchard
                   ` (17 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_no_lltx --]
[-- Type: text/plain, Size: 2257 bytes --]

The ibmveth driver needs locking in the transmit routine to protect
the bounce_buffer, but it sets LLTX without adding any locking of its
own.

Just remove the deprecated LLTX option, and remove the now-unneeded
stats lock in the process.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:27.963919704 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:28.563877123 +1000
@@ -903,7 +903,6 @@ static netdev_tx_t ibmveth_start_xmit(st
 	union ibmveth_buf_desc desc;
 	unsigned long lpar_rc;
 	unsigned long correlator;
-	unsigned long flags;
 	unsigned int retry_count;
 	unsigned int tx_dropped = 0;
 	unsigned int tx_bytes = 0;
@@ -965,20 +964,18 @@ static netdev_tx_t ibmveth_start_xmit(st
 	} else {
 		tx_packets++;
 		tx_bytes += skb->len;
-		netdev->trans_start = jiffies; /* NETIF_F_LLTX driver :( */
 	}
 
 	if (!used_bounce)
 		dma_unmap_single(&adapter->vdev->dev, data_dma_addr,
 				 skb->len, DMA_TO_DEVICE);
 
-out:	spin_lock_irqsave(&adapter->stats_lock, flags);
+out:
 	netdev->stats.tx_dropped += tx_dropped;
 	netdev->stats.tx_bytes += tx_bytes;
 	netdev->stats.tx_packets += tx_packets;
 	adapter->tx_send_failed += tx_send_failed;
 	adapter->tx_map_failed += tx_map_failed;
-	spin_unlock_irqrestore(&adapter->stats_lock, flags);
 
 	dev_kfree_skb(skb);
 	return NETDEV_TX_OK;
@@ -1290,8 +1287,6 @@ static int __devinit ibmveth_probe(struc
 	netdev->netdev_ops = &ibmveth_netdev_ops;
 	netdev->ethtool_ops = &netdev_ethtool_ops;
 	SET_NETDEV_DEV(netdev, &dev->dev);
- 	netdev->features |= NETIF_F_LLTX;
-	spin_lock_init(&adapter->stats_lock);
 
 	memcpy(netdev->dev_addr, &adapter->mac_addr, netdev->addr_len);
 
Index: net-next-2.6/drivers/net/ibmveth.h
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.h	2010-08-23 08:52:27.963919704 +1000
+++ net-next-2.6/drivers/net/ibmveth.h	2010-08-23 08:52:28.563877123 +1000
@@ -158,7 +158,6 @@ struct ibmveth_adapter {
     u64 rx_no_buffer;
     u64 tx_map_failed;
     u64 tx_send_failed;
-    spinlock_t stats_lock;
 };
 
 struct ibmveth_buf_desc_fields {




* [patch 04/20] ibmveth: Add tx_copybreak
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (2 preceding siblings ...)
  2010-08-23  0:09 ` [patch 03/20] ibmveth: Remove LLTX Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 05/20] ibmveth: Add rx_copybreak Anton Blanchard
                   ` (16 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_tx_copybreak --]
[-- Type: text/plain, Size: 2038 bytes --]

Use the existing bounce buffer when transmitting a packet under a certain
size. This saves the overhead of a TCE map/unmap.

I can't see any reason for the wmb() in the bounce buffer case; if we
need a barrier it should be before we call h_send_logical_lan, and we
have nothing in the common case. Remove it.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:28.563877123 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:29.173833820 +1000
@@ -117,6 +117,11 @@ MODULE_DESCRIPTION("IBM i/pSeries Virtua
 MODULE_LICENSE("GPL");
 MODULE_VERSION(ibmveth_driver_version);
 
+static unsigned int tx_copybreak __read_mostly = 128;
+module_param(tx_copybreak, uint, 0644);
+MODULE_PARM_DESC(tx_copybreak,
+	"Maximum size of packet that is copied to a new buffer on transmit");
+
 struct ibmveth_stat {
 	char name[ETH_GSTRING_LEN];
 	int offset;
@@ -931,17 +936,24 @@ static netdev_tx_t ibmveth_start_xmit(st
 		buf[1] = 0;
 	}
 
-	data_dma_addr = dma_map_single(&adapter->vdev->dev, skb->data,
-				       skb->len, DMA_TO_DEVICE);
-	if (dma_mapping_error(&adapter->vdev->dev, data_dma_addr)) {
-		if (!firmware_has_feature(FW_FEATURE_CMO))
-			ibmveth_error_printk("tx: unable to map xmit buffer\n");
+	if (skb->len < tx_copybreak) {
+		used_bounce = 1;
+	} else {
+		data_dma_addr = dma_map_single(&adapter->vdev->dev, skb->data,
+					       skb->len, DMA_TO_DEVICE);
+		if (dma_mapping_error(&adapter->vdev->dev, data_dma_addr)) {
+			if (!firmware_has_feature(FW_FEATURE_CMO))
+				ibmveth_error_printk("tx: unable to map "
+						     "xmit buffer\n");
+			tx_map_failed++;
+			used_bounce = 1;
+		}
+	}
+
+	if (used_bounce) {
 		skb_copy_from_linear_data(skb, adapter->bounce_buffer,
 					  skb->len);
 		desc.fields.address = adapter->bounce_buffer_dma;
-		tx_map_failed++;
-		used_bounce = 1;
-		wmb();
 	} else
 		desc.fields.address = data_dma_addr;
 




* [patch 05/20] ibmveth: Add rx_copybreak
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (3 preceding siblings ...)
  2010-08-23  0:09 ` [patch 04/20] ibmveth: Add tx_copybreak Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 06/20] ibmveth: Use lighter weight read memory barrier in ibmveth_poll Anton Blanchard
                   ` (15 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_rx_copybreak --]
[-- Type: text/plain, Size: 2171 bytes --]

For small packets, create a new skb and copy the packet into it so we
avoid tearing down and creating a TCE entry.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:29.173833820 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:29.793789816 +1000
@@ -122,6 +122,11 @@ module_param(tx_copybreak, uint, 0644);
 MODULE_PARM_DESC(tx_copybreak,
 	"Maximum size of packet that is copied to a new buffer on transmit");
 
+static unsigned int rx_copybreak __read_mostly = 128;
+module_param(rx_copybreak, uint, 0644);
+MODULE_PARM_DESC(rx_copybreak,
+	"Maximum size of packet that is copied to a new buffer on receive");
+
 struct ibmveth_stat {
 	char name[ETH_GSTRING_LEN];
 	int offset;
@@ -1002,8 +1007,6 @@ static int ibmveth_poll(struct napi_stru
 
  restart_poll:
 	do {
-		struct sk_buff *skb;
-
 		if (!ibmveth_rxq_pending_buffer(adapter))
 			break;
 
@@ -1014,20 +1017,34 @@ static int ibmveth_poll(struct napi_stru
 			ibmveth_debug_printk("recycling invalid buffer\n");
 			ibmveth_rxq_recycle_buffer(adapter);
 		} else {
+			struct sk_buff *skb, *new_skb;
 			int length = ibmveth_rxq_frame_length(adapter);
 			int offset = ibmveth_rxq_frame_offset(adapter);
 			int csum_good = ibmveth_rxq_csum_good(adapter);
 
 			skb = ibmveth_rxq_get_buffer(adapter);
-			if (csum_good)
-				skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-			ibmveth_rxq_harvest_buffer(adapter);
+			new_skb = NULL;
+			if (length < rx_copybreak)
+				new_skb = netdev_alloc_skb(netdev, length);
+
+			if (new_skb) {
+				skb_copy_to_linear_data(new_skb,
+							skb->data + offset,
+							length);
+				skb = new_skb;
+				ibmveth_rxq_recycle_buffer(adapter);
+			} else {
+				ibmveth_rxq_harvest_buffer(adapter);
+				skb_reserve(skb, offset);
+			}
 
-			skb_reserve(skb, offset);
 			skb_put(skb, length);
 			skb->protocol = eth_type_trans(skb, netdev);
 
+			if (csum_good)
+				skb->ip_summed = CHECKSUM_UNNECESSARY;
+
 			netif_receive_skb(skb);	/* send it up */
 
 			netdev->stats.rx_packets++;




* [patch 06/20] ibmveth: Use lighter weight read memory barrier in ibmveth_poll
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (4 preceding siblings ...)
  2010-08-23  0:09 ` [patch 05/20] ibmveth: Add rx_copybreak Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 07/20] ibmveth: Add scatter-gather support Anton Blanchard
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_barriers --]
[-- Type: text/plain, Size: 749 bytes --]

We want to order the read in ibmveth_rxq_pending_buffer against the read
in ibmveth_rxq_buffer_valid; both are cacheable memory, so smp_rmb() is
sufficient.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:29.793789816 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:30.283755038 +1000
@@ -1010,7 +1010,7 @@ static int ibmveth_poll(struct napi_stru
 		if (!ibmveth_rxq_pending_buffer(adapter))
 			break;
 
-		rmb();
+		smp_rmb();
 		if (!ibmveth_rxq_buffer_valid(adapter)) {
 			wmb(); /* suggested by larson1 */
 			adapter->rx_invalid_buffer++;




* [patch 07/20] ibmveth: Add scatter-gather support
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (5 preceding siblings ...)
  2010-08-23  0:09 ` [patch 06/20] ibmveth: Use lighter weight read memory barrier in ibmveth_poll Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 08/20] ibmveth: Dont overallocate buffers Anton Blanchard
                   ` (13 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_sg --]
[-- Type: text/plain, Size: 7673 bytes --]

ibmveth can scatter-gather up to 6 segments. If we go over this we
have no option but to call skb_linearize(), as other drivers with
similar limitations do.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:30.283755038 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:30.633730198 +1000
@@ -897,6 +897,7 @@ static const struct ethtool_ops netdev_e
 	.get_strings		= ibmveth_get_strings,
 	.get_sset_count		= ibmveth_get_sset_count,
 	.get_ethtool_stats	= ibmveth_get_ethtool_stats,
+	.set_sg			= ethtool_op_set_sg,
 };
 
 static int ibmveth_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
@@ -906,96 +907,158 @@ static int ibmveth_ioctl(struct net_devi
 
 #define page_offset(v) ((unsigned long)(v) & ((1 << 12) - 1))
 
-static netdev_tx_t ibmveth_start_xmit(struct sk_buff *skb,
-				      struct net_device *netdev)
+static int ibmveth_send(struct ibmveth_adapter *adapter,
+			union ibmveth_buf_desc *descs)
 {
-	struct ibmveth_adapter *adapter = netdev_priv(netdev);
-	union ibmveth_buf_desc desc;
-	unsigned long lpar_rc;
 	unsigned long correlator;
 	unsigned int retry_count;
-	unsigned int tx_dropped = 0;
-	unsigned int tx_bytes = 0;
-	unsigned int tx_packets = 0;
-	unsigned int tx_send_failed = 0;
-	unsigned int tx_map_failed = 0;
-	int used_bounce = 0;
-	unsigned long data_dma_addr;
+	unsigned long ret;
+
+	/*
+	 * The retry count sets a maximum for the number of broadcast and
+	 * multicast destinations within the system.
+	 */
+	retry_count = 1024;
+	correlator = 0;
+	do {
+		ret = h_send_logical_lan(adapter->vdev->unit_address,
+					     descs[0].desc, descs[1].desc,
+					     descs[2].desc, descs[3].desc,
+					     descs[4].desc, descs[5].desc,
+					     correlator, &correlator);
+	} while ((ret == H_BUSY) && (retry_count--));
+
+	if (ret != H_SUCCESS && ret != H_DROPPED) {
+		ibmveth_error_printk("tx: h_send_logical_lan failed with "
+				     "rc=%ld\n", ret);
+		return 1;
+	}
+
+	return 0;
+}
 
-	desc.fields.flags_len = IBMVETH_BUF_VALID | skb->len;
+static netdev_tx_t ibmveth_start_xmit(struct sk_buff *skb,
+				      struct net_device *netdev)
+{
+	struct ibmveth_adapter *adapter = netdev_priv(netdev);
+	unsigned int desc_flags;
+	union ibmveth_buf_desc descs[6];
+	int last, i;
+	int force_bounce = 0;
+
+	/*
+	 * veth handles a maximum of 6 segments including the header, so
+	 * we have to linearize the skb if there are more than this.
+	 */
+	if (skb_shinfo(skb)->nr_frags > 5 && __skb_linearize(skb)) {
+		netdev->stats.tx_dropped++;
+		goto out;
+	}
 
+	/* veth can't checksum offload UDP */
 	if (skb->ip_summed == CHECKSUM_PARTIAL &&
 	    ip_hdr(skb)->protocol != IPPROTO_TCP && skb_checksum_help(skb)) {
 		ibmveth_error_printk("tx: failed to checksum packet\n");
-		tx_dropped++;
+		netdev->stats.tx_dropped++;
 		goto out;
 	}
 
+	desc_flags = IBMVETH_BUF_VALID;
+
 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
-		unsigned char *buf = skb_transport_header(skb) + skb->csum_offset;
+		unsigned char *buf = skb_transport_header(skb) +
+						skb->csum_offset;
 
-		desc.fields.flags_len |= (IBMVETH_BUF_NO_CSUM | IBMVETH_BUF_CSUM_GOOD);
+		desc_flags |= (IBMVETH_BUF_NO_CSUM | IBMVETH_BUF_CSUM_GOOD);
 
 		/* Need to zero out the checksum */
 		buf[0] = 0;
 		buf[1] = 0;
 	}
 
-	if (skb->len < tx_copybreak) {
-		used_bounce = 1;
-	} else {
-		data_dma_addr = dma_map_single(&adapter->vdev->dev, skb->data,
-					       skb->len, DMA_TO_DEVICE);
-		if (dma_mapping_error(&adapter->vdev->dev, data_dma_addr)) {
-			if (!firmware_has_feature(FW_FEATURE_CMO))
-				ibmveth_error_printk("tx: unable to map "
-						     "xmit buffer\n");
-			tx_map_failed++;
-			used_bounce = 1;
-		}
-	}
+retry_bounce:
+	memset(descs, 0, sizeof(descs));
 
-	if (used_bounce) {
+	/*
+	 * If a linear packet is below the rx threshold then
+	 * copy it into the static bounce buffer. This avoids the
+	 * cost of a TCE insert and remove.
+	 */
+	if (force_bounce || (!skb_is_nonlinear(skb) &&
+				(skb->len < tx_copybreak))) {
 		skb_copy_from_linear_data(skb, adapter->bounce_buffer,
 					  skb->len);
-		desc.fields.address = adapter->bounce_buffer_dma;
-	} else
-		desc.fields.address = data_dma_addr;
 
-	/* send the frame. Arbitrarily set retrycount to 1024 */
-	correlator = 0;
-	retry_count = 1024;
-	do {
-		lpar_rc = h_send_logical_lan(adapter->vdev->unit_address,
-					     desc.desc, 0, 0, 0, 0, 0,
-					     correlator, &correlator);
-	} while ((lpar_rc == H_BUSY) && (retry_count--));
+		descs[0].fields.flags_len = desc_flags | skb->len;
+		descs[0].fields.address = adapter->bounce_buffer_dma;
 
-	if(lpar_rc != H_SUCCESS && lpar_rc != H_DROPPED) {
-		ibmveth_error_printk("tx: h_send_logical_lan failed with rc=%ld\n", lpar_rc);
-		ibmveth_error_printk("tx: valid=%d, len=%d, address=0x%08x\n",
-				     (desc.fields.flags_len & IBMVETH_BUF_VALID) ? 1 : 0,
-				     skb->len, desc.fields.address);
-		tx_send_failed++;
-		tx_dropped++;
+		if (ibmveth_send(adapter, descs)) {
+			adapter->tx_send_failed++;
+			netdev->stats.tx_dropped++;
+		} else {
+			netdev->stats.tx_packets++;
+			netdev->stats.tx_bytes += skb->len;
+		}
+
+		goto out;
+	}
+
+	/* Map the header */
+	descs[0].fields.address = dma_map_single(&adapter->vdev->dev, skb->data,
+						 skb_headlen(skb),
+						 DMA_TO_DEVICE);
+	if (dma_mapping_error(&adapter->vdev->dev, descs[0].fields.address))
+		goto map_failed;
+
+	descs[0].fields.flags_len = desc_flags | skb_headlen(skb);
+
+	/* Map the frags */
+	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+		unsigned long dma_addr;
+		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+		dma_addr = dma_map_page(&adapter->vdev->dev, frag->page,
+					frag->page_offset, frag->size,
+					DMA_TO_DEVICE);
+
+		if (dma_mapping_error(&adapter->vdev->dev, dma_addr))
+			goto map_failed_frags;
+
+		descs[i+1].fields.flags_len = desc_flags | frag->size;
+		descs[i+1].fields.address = dma_addr;
+	}
+
+	if (ibmveth_send(adapter, descs)) {
+		adapter->tx_send_failed++;
+		netdev->stats.tx_dropped++;
 	} else {
-		tx_packets++;
-		tx_bytes += skb->len;
+		netdev->stats.tx_packets++;
+		netdev->stats.tx_bytes += skb->len;
 	}
 
-	if (!used_bounce)
-		dma_unmap_single(&adapter->vdev->dev, data_dma_addr,
-				 skb->len, DMA_TO_DEVICE);
+	for (i = 0; i < skb_shinfo(skb)->nr_frags + 1; i++)
+		dma_unmap_page(&adapter->vdev->dev, descs[i].fields.address,
+			       descs[i].fields.flags_len & IBMVETH_BUF_LEN_MASK,
+			       DMA_TO_DEVICE);
 
 out:
-	netdev->stats.tx_dropped += tx_dropped;
-	netdev->stats.tx_bytes += tx_bytes;
-	netdev->stats.tx_packets += tx_packets;
-	adapter->tx_send_failed += tx_send_failed;
-	adapter->tx_map_failed += tx_map_failed;
-
 	dev_kfree_skb(skb);
 	return NETDEV_TX_OK;
+
+map_failed_frags:
+	last = i+1;
+	for (i = 0; i < last; i++)
+		dma_unmap_page(&adapter->vdev->dev, descs[i].fields.address,
+			       descs[i].fields.flags_len & IBMVETH_BUF_LEN_MASK,
+			       DMA_TO_DEVICE);
+
+map_failed:
+	if (!firmware_has_feature(FW_FEATURE_CMO))
+		ibmveth_error_printk("tx: unable to map xmit buffer\n");
+	adapter->tx_map_failed++;
+	skb_linearize(skb);
+	force_bounce = 1;
+	goto retry_bounce;
 }
 
 static int ibmveth_poll(struct napi_struct *napi, int budget)
@@ -1316,6 +1379,7 @@ static int __devinit ibmveth_probe(struc
 	netdev->netdev_ops = &ibmveth_netdev_ops;
 	netdev->ethtool_ops = &netdev_ethtool_ops;
 	SET_NETDEV_DEV(netdev, &dev->dev);
+	netdev->features |= NETIF_F_SG;
 
 	memcpy(netdev->dev_addr, &adapter->mac_addr, netdev->addr_len);
 




* [patch 08/20] ibmveth: Dont overallocate buffers
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (6 preceding siblings ...)
  2010-08-23  0:09 ` [patch 07/20] ibmveth: Add scatter-gather support Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23 20:49   ` Robert Jennings
  2010-08-23  0:09 ` [patch 09/20] ibmveth: Add optional flush of rx buffer Anton Blanchard
                   ` (12 subsequent siblings)
  20 siblings, 1 reply; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_page_4k --]
[-- Type: text/plain, Size: 1517 bytes --]

We were allocating a whole page (64kB on many Power kernels), even though
we only ever want 4kB. Use kzalloc instead of get_zeroed_page.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:30.633730198 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:31.003703934 +1000
@@ -473,7 +473,7 @@ static void ibmveth_cleanup(struct ibmve
 					DMA_BIDIRECTIONAL);
 			adapter->buffer_list_dma = DMA_ERROR_CODE;
 		}
-		free_page((unsigned long)adapter->buffer_list_addr);
+		kfree(adapter->buffer_list_addr);
 		adapter->buffer_list_addr = NULL;
 	}
 
@@ -483,7 +483,7 @@ static void ibmveth_cleanup(struct ibmve
 					DMA_BIDIRECTIONAL);
 			adapter->filter_list_dma = DMA_ERROR_CODE;
 		}
-		free_page((unsigned long)adapter->filter_list_addr);
+		kfree(adapter->filter_list_addr);
 		adapter->filter_list_addr = NULL;
 	}
 
@@ -560,8 +560,8 @@ static int ibmveth_open(struct net_devic
 	for(i = 0; i<IbmVethNumBufferPools; i++)
 		rxq_entries += adapter->rx_buff_pool[i].size;
 
-	adapter->buffer_list_addr = (void*) get_zeroed_page(GFP_KERNEL);
-	adapter->filter_list_addr = (void*) get_zeroed_page(GFP_KERNEL);
+	adapter->buffer_list_addr = kzalloc(4096, GFP_KERNEL);
+	adapter->filter_list_addr = kzalloc(4096, GFP_KERNEL);
 
 	if(!adapter->buffer_list_addr || !adapter->filter_list_addr) {
 		ibmveth_error_printk("unable to allocate filter or buffer list pages\n");




* [patch 09/20] ibmveth: Add optional flush of rx buffer
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (7 preceding siblings ...)
  2010-08-23  0:09 ` [patch 08/20] ibmveth: Dont overallocate buffers Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 10/20] ibmveth: remove procfs code Anton Blanchard
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_flush_buffer --]
[-- Type: text/plain, Size: 2057 bytes --]

On some machines we can improve bandwidth by ensuring rx buffers are
not in the cache. Add a module option, disabled by default, that flushes
rx buffers on insertion.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:31.003703934 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:31.363678383 +1000
@@ -127,6 +127,10 @@ module_param(rx_copybreak, uint, 0644);
 MODULE_PARM_DESC(rx_copybreak,
 	"Maximum size of packet that is copied to a new buffer on receive");
 
+static unsigned int rx_flush __read_mostly = 0;
+module_param(rx_flush, uint, 0644);
+MODULE_PARM_DESC(rx_flush, "Flush receive buffers before use");
+
 struct ibmveth_stat {
 	char name[ETH_GSTRING_LEN];
 	int offset;
@@ -234,6 +238,14 @@ static int ibmveth_alloc_buffer_pool(str
 	return 0;
 }
 
+static inline void ibmveth_flush_buffer(void *addr, unsigned long length)
+{
+	unsigned long offset;
+
+	for (offset = 0; offset < length; offset += SMP_CACHE_BYTES)
+		asm("dcbfl %0,%1" :: "b" (addr), "r" (offset));
+}
+
 /* replenish the buffers for a pool.  note that we don't need to
  * skb_reserve these since they are used for incoming...
  */
@@ -286,6 +298,12 @@ static void ibmveth_replenish_buffer_poo
 		desc.fields.flags_len = IBMVETH_BUF_VALID | pool->buff_size;
 		desc.fields.address = dma_addr;
 
+		if (rx_flush) {
+			unsigned int len = min(pool->buff_size,
+						adapter->netdev->mtu +
+						IBMVETH_BUFF_OH);
+			ibmveth_flush_buffer(skb->data, len);
+		}
 		lpar_rc = h_add_logical_lan_buffer(adapter->vdev->unit_address, desc.desc);
 
 		if (lpar_rc != H_SUCCESS)
@@ -1095,6 +1113,9 @@ static int ibmveth_poll(struct napi_stru
 				skb_copy_to_linear_data(new_skb,
 							skb->data + offset,
 							length);
+				if (rx_flush)
+					ibmveth_flush_buffer(skb->data,
+						length + offset);
 				skb = new_skb;
 				ibmveth_rxq_recycle_buffer(adapter);
 			} else {




* [patch 10/20] ibmveth: remove procfs code
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (8 preceding siblings ...)
  2010-08-23  0:09 ` [patch 09/20] ibmveth: Add optional flush of rx buffer Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 11/20] ibmveth: Convert to netdev_alloc_skb Anton Blanchard
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_remove_proc --]
[-- Type: text/plain, Size: 6094 bytes --]

We export all the driver specific statistics via ethtool, so there is no need
to duplicate this in procfs.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:31.363678383 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:31.723652837 +1000
@@ -26,11 +26,6 @@
 /* ethernet NICs that are presented to the partition by the hypervisor.   */
 /*                                                                        */
 /**************************************************************************/
-/*
-  TODO:
-  - add support for sysfs
-  - possibly remove procfs support
-*/
 
 #include <linux/module.h>
 #include <linux/moduleparam.h>
@@ -47,18 +42,15 @@
 #include <linux/mm.h>
 #include <linux/pm.h>
 #include <linux/ethtool.h>
-#include <linux/proc_fs.h>
 #include <linux/in.h>
 #include <linux/ip.h>
 #include <linux/slab.h>
-#include <net/net_namespace.h>
 #include <asm/hvcall.h>
 #include <asm/atomic.h>
 #include <asm/vio.h>
 #include <asm/iommu.h>
 #include <asm/uaccess.h>
 #include <asm/firmware.h>
-#include <linux/seq_file.h>
 
 #include "ibmveth.h"
 
@@ -93,21 +85,12 @@ static int ibmveth_poll(struct napi_stru
 static int ibmveth_start_xmit(struct sk_buff *skb, struct net_device *dev);
 static void ibmveth_set_multicast_list(struct net_device *dev);
 static int ibmveth_change_mtu(struct net_device *dev, int new_mtu);
-static void ibmveth_proc_register_driver(void);
-static void ibmveth_proc_unregister_driver(void);
-static void ibmveth_proc_register_adapter(struct ibmveth_adapter *adapter);
-static void ibmveth_proc_unregister_adapter(struct ibmveth_adapter *adapter);
 static irqreturn_t ibmveth_interrupt(int irq, void *dev_instance);
 static void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter);
 static unsigned long ibmveth_get_desired_dma(struct vio_dev *vdev);
 static struct kobj_type ktype_veth_pool;
 
 
-#ifdef CONFIG_PROC_FS
-#define IBMVETH_PROC_DIR "ibmveth"
-static struct proc_dir_entry *ibmveth_proc_dir;
-#endif
-
 static const char ibmveth_driver_name[] = "ibmveth";
 static const char ibmveth_driver_string[] = "IBM i/pSeries Virtual Ethernet Driver";
 #define ibmveth_driver_version "1.03"
@@ -1451,8 +1434,6 @@ static int __devinit ibmveth_probe(struc
 
 	ibmveth_debug_printk("registered\n");
 
-	ibmveth_proc_register_adapter(adapter);
-
 	return 0;
 }
 
@@ -1467,103 +1448,12 @@ static int __devexit ibmveth_remove(stru
 
 	unregister_netdev(netdev);
 
-	ibmveth_proc_unregister_adapter(adapter);
-
 	free_netdev(netdev);
 	dev_set_drvdata(&dev->dev, NULL);
 
 	return 0;
 }
 
-#ifdef CONFIG_PROC_FS
-static void ibmveth_proc_register_driver(void)
-{
-	ibmveth_proc_dir = proc_mkdir(IBMVETH_PROC_DIR, init_net.proc_net);
-	if (ibmveth_proc_dir) {
-	}
-}
-
-static void ibmveth_proc_unregister_driver(void)
-{
-	remove_proc_entry(IBMVETH_PROC_DIR, init_net.proc_net);
-}
-
-static int ibmveth_show(struct seq_file *seq, void *v)
-{
-	struct ibmveth_adapter *adapter = seq->private;
-	char *current_mac = (char *) adapter->netdev->dev_addr;
-	char *firmware_mac = (char *) &adapter->mac_addr;
-
-	seq_printf(seq, "%s %s\n\n", ibmveth_driver_string, ibmveth_driver_version);
-
-	seq_printf(seq, "Unit Address:    0x%x\n", adapter->vdev->unit_address);
-	seq_printf(seq, "Current MAC:     %pM\n", current_mac);
-	seq_printf(seq, "Firmware MAC:    %pM\n", firmware_mac);
-
-	seq_printf(seq, "\nAdapter Statistics:\n");
-	seq_printf(seq, "  TX:  vio_map_single failres:      %lld\n", adapter->tx_map_failed);
-	seq_printf(seq, "       send failures:               %lld\n", adapter->tx_send_failed);
-	seq_printf(seq, "  RX:  replenish task cycles:       %lld\n", adapter->replenish_task_cycles);
-	seq_printf(seq, "       alloc_skb_failures:          %lld\n", adapter->replenish_no_mem);
-	seq_printf(seq, "       add buffer failures:         %lld\n", adapter->replenish_add_buff_failure);
-	seq_printf(seq, "       invalid buffers:             %lld\n", adapter->rx_invalid_buffer);
-	seq_printf(seq, "       no buffers:                  %lld\n", adapter->rx_no_buffer);
-
-	return 0;
-}
-
-static int ibmveth_proc_open(struct inode *inode, struct file *file)
-{
-	return single_open(file, ibmveth_show, PDE(inode)->data);
-}
-
-static const struct file_operations ibmveth_proc_fops = {
-	.owner	 = THIS_MODULE,
-	.open    = ibmveth_proc_open,
-	.read    = seq_read,
-	.llseek  = seq_lseek,
-	.release = single_release,
-};
-
-static void ibmveth_proc_register_adapter(struct ibmveth_adapter *adapter)
-{
-	struct proc_dir_entry *entry;
-	if (ibmveth_proc_dir) {
-		char u_addr[10];
-		sprintf(u_addr, "%x", adapter->vdev->unit_address);
-		entry = proc_create_data(u_addr, S_IFREG, ibmveth_proc_dir,
-					 &ibmveth_proc_fops, adapter);
-		if (!entry)
-			ibmveth_error_printk("Cannot create adapter proc entry");
-	}
-}
-
-static void ibmveth_proc_unregister_adapter(struct ibmveth_adapter *adapter)
-{
-	if (ibmveth_proc_dir) {
-		char u_addr[10];
-		sprintf(u_addr, "%x", adapter->vdev->unit_address);
-		remove_proc_entry(u_addr, ibmveth_proc_dir);
-	}
-}
-
-#else /* CONFIG_PROC_FS */
-static void ibmveth_proc_register_adapter(struct ibmveth_adapter *adapter)
-{
-}
-
-static void ibmveth_proc_unregister_adapter(struct ibmveth_adapter *adapter)
-{
-}
-static void ibmveth_proc_register_driver(void)
-{
-}
-
-static void ibmveth_proc_unregister_driver(void)
-{
-}
-#endif /* CONFIG_PROC_FS */
-
 static struct attribute veth_active_attr;
 static struct attribute veth_num_attr;
 static struct attribute veth_size_attr;
@@ -1736,15 +1626,12 @@ static int __init ibmveth_module_init(vo
 {
 	ibmveth_printk("%s: %s %s\n", ibmveth_driver_name, ibmveth_driver_string, ibmveth_driver_version);
 
-	ibmveth_proc_register_driver();
-
 	return vio_register_driver(&ibmveth_driver);
 }
 
 static void __exit ibmveth_module_exit(void)
 {
 	vio_unregister_driver(&ibmveth_driver);
-	ibmveth_proc_unregister_driver();
 }
 
 module_init(ibmveth_module_init);




* [patch 11/20] ibmveth: Convert to netdev_alloc_skb
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (9 preceding siblings ...)
  2010-08-23  0:09 ` [patch 10/20] ibmveth: remove procfs code Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 12/20] ibmveth: Remove redundant function prototypes Anton Blanchard
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_use_netdev_alloc_skb --]
[-- Type: text/plain, Size: 761 bytes --]

We were using alloc_skb, which reserves no headroom. Change it to use
netdev_alloc_skb to match most other drivers.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:31.723652837 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:32.113625155 +1000
@@ -248,7 +248,7 @@ static void ibmveth_replenish_buffer_poo
 	for(i = 0; i < count; ++i) {
 		union ibmveth_buf_desc desc;
 
-		skb = alloc_skb(pool->buff_size, GFP_ATOMIC);
+		skb = netdev_alloc_skb(adapter->netdev, pool->buff_size);
 
 		if(!skb) {
 			ibmveth_debug_printk("replenish: unable to allocate skb\n");




* [patch 12/20] ibmveth: Remove redundant function prototypes
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (10 preceding siblings ...)
  2010-08-23  0:09 ` [patch 11/20] ibmveth: Convert to netdev_alloc_skb Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 13/20] ibmveth: Convert driver specific debug to netdev_dbg Anton Blanchard
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_remove_prototype --]
[-- Type: text/plain, Size: 1180 bytes --]

These functions are defined before their first use, so the forward
prototypes are redundant and can be removed.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:32.113625155 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:32.523596050 +1000
@@ -78,16 +78,10 @@
 #define ibmveth_assert(expr)
 #endif
 
-static int ibmveth_open(struct net_device *dev);
-static int ibmveth_close(struct net_device *dev);
-static int ibmveth_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd);
-static int ibmveth_poll(struct napi_struct *napi, int budget);
-static int ibmveth_start_xmit(struct sk_buff *skb, struct net_device *dev);
-static void ibmveth_set_multicast_list(struct net_device *dev);
-static int ibmveth_change_mtu(struct net_device *dev, int new_mtu);
 static irqreturn_t ibmveth_interrupt(int irq, void *dev_instance);
 static void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter);
 static unsigned long ibmveth_get_desired_dma(struct vio_dev *vdev);
+
 static struct kobj_type ktype_veth_pool;
 
 




* [patch 13/20] ibmveth: Convert driver specific debug to netdev_dbg
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (11 preceding siblings ...)
  2010-08-23  0:09 ` [patch 12/20] ibmveth: Remove redundant function prototypes Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 14/20] ibmveth: Convert driver specific error functions to netdev_err Anton Blanchard
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_netdev_dbg --]
[-- Type: text/plain, Size: 5764 bytes --]

Use netdev_dbg to standardise the debug output.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:32.523596050 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:32.953565532 +1000
@@ -63,18 +63,12 @@
   printk(KERN_ERR "(%s:%3.3d ua:%x) ERROR: " fmt, __FILE__, __LINE__ , adapter->vdev->unit_address, ## args)
 
 #ifdef DEBUG
-#define ibmveth_debug_printk_no_adapter(fmt, args...) \
-  printk(KERN_DEBUG "(%s:%3.3d): " fmt, __FILE__, __LINE__ , ## args)
-#define ibmveth_debug_printk(fmt, args...) \
-  printk(KERN_DEBUG "(%s:%3.3d ua:%x): " fmt, __FILE__, __LINE__ , adapter->vdev->unit_address, ## args)
 #define ibmveth_assert(expr) \
   if(!(expr)) {                                   \
     printk(KERN_DEBUG "assertion failed (%s:%3.3d ua:%x): %s\n", __FILE__, __LINE__, adapter->vdev->unit_address, #expr); \
     BUG(); \
   }
 #else
-#define ibmveth_debug_printk_no_adapter(fmt, args...)
-#define ibmveth_debug_printk(fmt, args...)
 #define ibmveth_assert(expr)
 #endif
 
@@ -245,7 +239,8 @@ static void ibmveth_replenish_buffer_poo
 		skb = netdev_alloc_skb(adapter->netdev, pool->buff_size);
 
 		if(!skb) {
-			ibmveth_debug_printk("replenish: unable to allocate skb\n");
+			netdev_dbg(adapter->netdev,
+				   "replenish: unable to allocate skb\n");
 			adapter->replenish_no_mem++;
 			break;
 		}
@@ -437,7 +432,8 @@ static void ibmveth_rxq_recycle_buffer(s
 	lpar_rc = h_add_logical_lan_buffer(adapter->vdev->unit_address, desc.desc);
 
 	if(lpar_rc != H_SUCCESS) {
-		ibmveth_debug_printk("h_add_logical_lan_buffer failed during recycle rc=%ld", lpar_rc);
+		netdev_dbg(adapter->netdev, "h_add_logical_lan_buffer failed "
+			   "during recycle rc=%ld", lpar_rc);
 		ibmveth_remove_buffer_from_pool(adapter, adapter->rx_queue.queue_addr[adapter->rx_queue.index].correlator);
 	}
 
@@ -548,7 +544,7 @@ static int ibmveth_open(struct net_devic
 	int i;
 	struct device *dev;
 
-	ibmveth_debug_printk("open starting\n");
+	netdev_dbg(netdev, "open starting\n");
 
 	napi_enable(&adapter->napi);
 
@@ -604,9 +600,9 @@ static int ibmveth_open(struct net_devic
 	rxq_desc.fields.flags_len = IBMVETH_BUF_VALID | adapter->rx_queue.queue_len;
 	rxq_desc.fields.address = adapter->rx_queue.queue_dma;
 
-	ibmveth_debug_printk("buffer list @ 0x%p\n", adapter->buffer_list_addr);
-	ibmveth_debug_printk("filter list @ 0x%p\n", adapter->filter_list_addr);
-	ibmveth_debug_printk("receive q   @ 0x%p\n", adapter->rx_queue.queue_addr);
+	netdev_dbg(netdev, "buffer list @ 0x%p\n", adapter->buffer_list_addr);
+	netdev_dbg(netdev, "filter list @ 0x%p\n", adapter->filter_list_addr);
+	netdev_dbg(netdev, "receive q   @ 0x%p\n", adapter->rx_queue.queue_addr);
 
 	h_vio_signal(adapter->vdev->unit_address, VIO_IRQ_DISABLE);
 
@@ -636,7 +632,7 @@ static int ibmveth_open(struct net_devic
 		}
 	}
 
-	ibmveth_debug_printk("registering irq 0x%x\n", netdev->irq);
+	netdev_dbg(netdev, "registering irq 0x%x\n", netdev->irq);
 	if((rc = request_irq(netdev->irq, ibmveth_interrupt, 0, netdev->name, netdev)) != 0) {
 		ibmveth_error_printk("unable to request irq 0x%x, rc %d\n", netdev->irq, rc);
 		do {
@@ -666,12 +662,12 @@ static int ibmveth_open(struct net_devic
 		return -ENOMEM;
 	}
 
-	ibmveth_debug_printk("initial replenish cycle\n");
+	netdev_dbg(netdev, "initial replenish cycle\n");
 	ibmveth_interrupt(netdev->irq, netdev);
 
 	netif_start_queue(netdev);
 
-	ibmveth_debug_printk("open complete\n");
+	netdev_dbg(netdev, "open complete\n");
 
 	return 0;
 }
@@ -681,7 +677,7 @@ static int ibmveth_close(struct net_devi
 	struct ibmveth_adapter *adapter = netdev_priv(netdev);
 	long lpar_rc;
 
-	ibmveth_debug_printk("close starting\n");
+	netdev_dbg(netdev, "close starting\n");
 
 	napi_disable(&adapter->napi);
 
@@ -706,7 +702,7 @@ static int ibmveth_close(struct net_devi
 
 	ibmveth_cleanup(adapter);
 
-	ibmveth_debug_printk("close complete\n");
+	netdev_dbg(netdev, "close complete\n");
 
 	return 0;
 }
@@ -1072,7 +1068,7 @@ static int ibmveth_poll(struct napi_stru
 		if (!ibmveth_rxq_buffer_valid(adapter)) {
 			wmb(); /* suggested by larson1 */
 			adapter->rx_invalid_buffer++;
-			ibmveth_debug_printk("recycling invalid buffer\n");
+			netdev_dbg(netdev, "recycling invalid buffer\n");
 			ibmveth_rxq_recycle_buffer(adapter);
 		} else {
 			struct sk_buff *skb, *new_skb;
@@ -1324,8 +1320,8 @@ static int __devinit ibmveth_probe(struc
 	unsigned int *mcastFilterSize_p;
 
 
-	ibmveth_debug_printk_no_adapter("entering ibmveth_probe for UA 0x%x\n",
-					dev->unit_address);
+	dev_dbg(&dev->dev, "entering ibmveth_probe for UA 0x%x\n",
+		dev->unit_address);
 
 	mac_addr_p = (unsigned char *) vio_get_attribute(dev,
 						VETH_MAC_ADDR, NULL);
@@ -1394,13 +1390,13 @@ static int __devinit ibmveth_probe(struc
 			kobject_uevent(kobj, KOBJ_ADD);
 	}
 
-	ibmveth_debug_printk("adapter @ 0x%p\n", adapter);
+	netdev_dbg(netdev, "adapter @ 0x%p\n", adapter);
 
 	adapter->buffer_list_dma = DMA_ERROR_CODE;
 	adapter->filter_list_dma = DMA_ERROR_CODE;
 	adapter->rx_queue.queue_dma = DMA_ERROR_CODE;
 
-	ibmveth_debug_printk("registering netdev...\n");
+	netdev_dbg(netdev, "registering netdev...\n");
 
 	ret = h_illan_attributes(dev->unit_address, 0, 0, &ret_attr);
 
@@ -1421,12 +1417,12 @@ static int __devinit ibmveth_probe(struc
 	rc = register_netdev(netdev);
 
 	if(rc) {
-		ibmveth_debug_printk("failed to register netdev rc=%d\n", rc);
+		netdev_dbg(netdev, "failed to register netdev rc=%d\n", rc);
 		free_netdev(netdev);
 		return rc;
 	}
 
-	ibmveth_debug_printk("registered\n");
+	netdev_dbg(netdev, "registered\n");
 
 	return 0;
 }




* [patch 14/20] ibmveth: Convert driver specific error functions to netdev_err
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (12 preceding siblings ...)
  2010-08-23  0:09 ` [patch 13/20] ibmveth: Convert driver specific debug to netdev_dbg Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 15/20] ibmveth: Some formatting fixes Anton Blanchard
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_netdev_printk --]
[-- Type: text/plain, Size: 9792 bytes --]

Use netdev_err to standardise the error output.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:32.953565532 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:52:33.000000000 +1000
@@ -56,12 +56,6 @@
 
 #undef DEBUG
 
-#define ibmveth_printk(fmt, args...) \
-  printk(KERN_DEBUG "%s: " fmt, __FILE__, ## args)
-
-#define ibmveth_error_printk(fmt, args...) \
-  printk(KERN_ERR "(%s:%3.3d ua:%x) ERROR: " fmt, __FILE__, __LINE__ , adapter->vdev->unit_address, ## args)
-
 #ifdef DEBUG
 #define ibmveth_assert(expr) \
   if(!(expr)) {                                   \
@@ -555,7 +549,8 @@ static int ibmveth_open(struct net_devic
 	adapter->filter_list_addr = kzalloc(4096, GFP_KERNEL);
 
 	if(!adapter->buffer_list_addr || !adapter->filter_list_addr) {
-		ibmveth_error_printk("unable to allocate filter or buffer list pages\n");
+		netdev_err(netdev, "unable to allocate filter or buffer list "
+			   "pages\n");
 		ibmveth_cleanup(adapter);
 		napi_disable(&adapter->napi);
 		return -ENOMEM;
@@ -565,7 +560,7 @@ static int ibmveth_open(struct net_devic
 	adapter->rx_queue.queue_addr = kmalloc(adapter->rx_queue.queue_len, GFP_KERNEL);
 
 	if(!adapter->rx_queue.queue_addr) {
-		ibmveth_error_printk("unable to allocate rx queue pages\n");
+		netdev_err(netdev, "unable to allocate rx queue pages\n");
 		ibmveth_cleanup(adapter);
 		napi_disable(&adapter->napi);
 		return -ENOMEM;
@@ -584,7 +579,8 @@ static int ibmveth_open(struct net_devic
 	if ((dma_mapping_error(dev, adapter->buffer_list_dma)) ||
 	    (dma_mapping_error(dev, adapter->filter_list_dma)) ||
 	    (dma_mapping_error(dev, adapter->rx_queue.queue_dma))) {
-		ibmveth_error_printk("unable to map filter or buffer list pages\n");
+		netdev_err(netdev, "unable to map filter or buffer list "
+			   "pages\n");
 		ibmveth_cleanup(adapter);
 		napi_disable(&adapter->napi);
 		return -ENOMEM;
@@ -609,8 +605,10 @@ static int ibmveth_open(struct net_devic
 	lpar_rc = ibmveth_register_logical_lan(adapter, rxq_desc, mac_address);
 
 	if(lpar_rc != H_SUCCESS) {
-		ibmveth_error_printk("h_register_logical_lan failed with %ld\n", lpar_rc);
-		ibmveth_error_printk("buffer TCE:0x%llx filter TCE:0x%llx rxq desc:0x%llx MAC:0x%llx\n",
+		netdev_err(netdev, "h_register_logical_lan failed with %ld\n",
+			   lpar_rc);
+		netdev_err(netdev, "buffer TCE:0x%llx filter TCE:0x%llx rxq "
+			   "desc:0x%llx MAC:0x%llx\n",
 				     adapter->buffer_list_dma,
 				     adapter->filter_list_dma,
 				     rxq_desc.desc,
@@ -624,7 +622,7 @@ static int ibmveth_open(struct net_devic
 		if(!adapter->rx_buff_pool[i].active)
 			continue;
 		if (ibmveth_alloc_buffer_pool(&adapter->rx_buff_pool[i])) {
-			ibmveth_error_printk("unable to alloc pool\n");
+			netdev_err(netdev, "unable to alloc pool\n");
 			adapter->rx_buff_pool[i].active = 0;
 			ibmveth_cleanup(adapter);
 			napi_disable(&adapter->napi);
@@ -634,7 +632,8 @@ static int ibmveth_open(struct net_devic
 
 	netdev_dbg(netdev, "registering irq 0x%x\n", netdev->irq);
 	if((rc = request_irq(netdev->irq, ibmveth_interrupt, 0, netdev->name, netdev)) != 0) {
-		ibmveth_error_printk("unable to request irq 0x%x, rc %d\n", netdev->irq, rc);
+		netdev_err(netdev, "unable to request irq 0x%x, rc %d\n",
+			   netdev->irq, rc);
 		do {
 			rc = h_free_logical_lan(adapter->vdev->unit_address);
 		} while (H_IS_LONG_BUSY(rc) || (rc == H_BUSY));
@@ -647,7 +646,7 @@ static int ibmveth_open(struct net_devic
 	adapter->bounce_buffer =
 	    kmalloc(netdev->mtu + IBMVETH_BUFF_OH, GFP_KERNEL);
 	if (!adapter->bounce_buffer) {
-		ibmveth_error_printk("unable to allocate bounce buffer\n");
+		netdev_err(netdev, "unable to allocate bounce buffer\n");
 		ibmveth_cleanup(adapter);
 		napi_disable(&adapter->napi);
 		return -ENOMEM;
@@ -656,7 +655,7 @@ static int ibmveth_open(struct net_devic
 	    dma_map_single(&adapter->vdev->dev, adapter->bounce_buffer,
 			   netdev->mtu + IBMVETH_BUFF_OH, DMA_BIDIRECTIONAL);
 	if (dma_mapping_error(dev, adapter->bounce_buffer_dma)) {
-		ibmveth_error_printk("unable to map bounce buffer\n");
+		netdev_err(netdev, "unable to map bounce buffer\n");
 		ibmveth_cleanup(adapter);
 		napi_disable(&adapter->napi);
 		return -ENOMEM;
@@ -692,8 +691,8 @@ static int ibmveth_close(struct net_devi
 
 	if(lpar_rc != H_SUCCESS)
 	{
-		ibmveth_error_printk("h_free_logical_lan failed with %lx, continuing with close\n",
-				     lpar_rc);
+		netdev_err(netdev, "h_free_logical_lan failed with %lx, "
+			   "continuing with close\n", lpar_rc);
 	}
 
 	free_irq(netdev->irq, netdev);
@@ -794,8 +793,8 @@ static int ibmveth_set_csum_offload(stru
 
 		if (ret != H_SUCCESS) {
 			rc1 = -EIO;
-			ibmveth_error_printk("unable to change checksum offload settings."
-					     " %d rc=%ld\n", data, ret);
+			netdev_err(dev, "unable to change checksum offload "
+				   "settings. %d rc=%ld\n", data, ret);
 
 			ret = h_illan_attributes(adapter->vdev->unit_address,
 						 set_attr, clr_attr, &ret_attr);
@@ -803,8 +802,9 @@ static int ibmveth_set_csum_offload(stru
 			done(dev, data);
 	} else {
 		rc1 = -EIO;
-		ibmveth_error_printk("unable to change checksum offload settings."
-				     " %d rc=%ld ret_attr=%lx\n", data, ret, ret_attr);
+		netdev_err(dev, "unable to change checksum offload settings."
+				     " %d rc=%ld ret_attr=%lx\n", data, ret,
+				     ret_attr);
 	}
 
 	if (restart)
@@ -920,8 +920,8 @@ static int ibmveth_send(struct ibmveth_a
 	} while ((ret == H_BUSY) && (retry_count--));
 
 	if (ret != H_SUCCESS && ret != H_DROPPED) {
-		ibmveth_error_printk("tx: h_send_logical_lan failed with "
-				     "rc=%ld\n", ret);
+		netdev_err(adapter->netdev, "tx: h_send_logical_lan failed "
+			   "with rc=%ld\n", ret);
 		return 1;
 	}
 
@@ -949,7 +949,7 @@ static netdev_tx_t ibmveth_start_xmit(st
 	/* veth can't checksum offload UDP */
 	if (skb->ip_summed == CHECKSUM_PARTIAL &&
 	    ip_hdr(skb)->protocol != IPPROTO_TCP && skb_checksum_help(skb)) {
-		ibmveth_error_printk("tx: failed to checksum packet\n");
+		netdev_err(netdev, "tx: failed to checksum packet\n");
 		netdev->stats.tx_dropped++;
 		goto out;
 	}
@@ -1045,7 +1045,7 @@ map_failed_frags:
 
 map_failed:
 	if (!firmware_has_feature(FW_FEATURE_CMO))
-		ibmveth_error_printk("tx: unable to map xmit buffer\n");
+		netdev_err(netdev, "tx: unable to map xmit buffer\n");
 	adapter->tx_map_failed++;
 	skb_linearize(skb);
 	force_bounce = 1;
@@ -1161,7 +1161,8 @@ static void ibmveth_set_multicast_list(s
 					   IbmVethMcastDisableFiltering,
 					   0);
 		if(lpar_rc != H_SUCCESS) {
-			ibmveth_error_printk("h_multicast_ctrl rc=%ld when entering promisc mode\n", lpar_rc);
+			netdev_err(netdev, "h_multicast_ctrl rc=%ld when "
+				   "entering promisc mode\n", lpar_rc);
 		}
 	} else {
 		struct netdev_hw_addr *ha;
@@ -1172,7 +1173,9 @@ static void ibmveth_set_multicast_list(s
 					   IbmVethMcastClearFilterTable,
 					   0);
 		if(lpar_rc != H_SUCCESS) {
-			ibmveth_error_printk("h_multicast_ctrl rc=%ld when attempting to clear filter table\n", lpar_rc);
+			netdev_err(netdev, "h_multicast_ctrl rc=%ld when "
+				   "attempting to clear filter table\n",
+				   lpar_rc);
 		}
 		/* add the addresses to the filter table */
 		netdev_for_each_mc_addr(ha, netdev) {
@@ -1183,7 +1186,9 @@ static void ibmveth_set_multicast_list(s
 						   IbmVethMcastAddFilter,
 						   mcast_addr);
 			if(lpar_rc != H_SUCCESS) {
-				ibmveth_error_printk("h_multicast_ctrl rc=%ld when adding an entry to the filter table\n", lpar_rc);
+				netdev_err(netdev, "h_multicast_ctrl rc=%ld "
+					   "when adding an entry to the filter "
+					   "table\n", lpar_rc);
 			}
 		}
 
@@ -1192,7 +1197,8 @@ static void ibmveth_set_multicast_list(s
 					   IbmVethMcastEnableFiltering,
 					   0);
 		if(lpar_rc != H_SUCCESS) {
-			ibmveth_error_printk("h_multicast_ctrl rc=%ld when enabling filtering\n", lpar_rc);
+			netdev_err(netdev, "h_multicast_ctrl rc=%ld when "
+				   "enabling filtering\n", lpar_rc);
 		}
 	}
 }
@@ -1326,17 +1332,15 @@ static int __devinit ibmveth_probe(struc
 	mac_addr_p = (unsigned char *) vio_get_attribute(dev,
 						VETH_MAC_ADDR, NULL);
 	if(!mac_addr_p) {
-		printk(KERN_ERR "(%s:%3.3d) ERROR: Can't find VETH_MAC_ADDR "
-				"attribute\n", __FILE__, __LINE__);
+		dev_err(&dev->dev, "Can't find VETH_MAC_ADDR attribute\n");
 		return 0;
 	}
 
 	mcastFilterSize_p = (unsigned int *) vio_get_attribute(dev,
 						VETH_MCAST_FILTER_SIZE, NULL);
 	if(!mcastFilterSize_p) {
-		printk(KERN_ERR "(%s:%3.3d) ERROR: Can't find "
-				"VETH_MCAST_FILTER_SIZE attribute\n",
-				__FILE__, __LINE__);
+		dev_err(&dev->dev, "Can't find VETH_MCAST_FILTER_SIZE "
+			"attribute\n");
 		return 0;
 	}
 
@@ -1480,7 +1484,8 @@ const char * buf, size_t count)
 		if (value && !pool->active) {
 			if (netif_running(netdev)) {
 				if(ibmveth_alloc_buffer_pool(pool)) {
-					ibmveth_error_printk("unable to alloc pool\n");
+					netdev_err(netdev,
+						   "unable to alloc pool\n");
 					return -ENOMEM;
 				}
 				pool->active = 1;
@@ -1506,7 +1511,7 @@ const char * buf, size_t count)
 			}
 
 			if (i == IbmVethNumBufferPools) {
-				ibmveth_error_printk("no active pool >= MTU\n");
+				netdev_err(netdev, "no active pool >= MTU\n");
 				return -EPERM;
 			}
 
@@ -1614,7 +1619,8 @@ static struct vio_driver ibmveth_driver
 
 static int __init ibmveth_module_init(void)
 {
-	ibmveth_printk("%s: %s %s\n", ibmveth_driver_name, ibmveth_driver_string, ibmveth_driver_version);
+	printk(KERN_DEBUG "%s: %s %s\n", ibmveth_driver_name,
+	       ibmveth_driver_string, ibmveth_driver_version);
 
 	return vio_register_driver(&ibmveth_driver);
 }




* [patch 15/20] ibmveth: Some formatting fixes
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (13 preceding siblings ...)
  2010-08-23  0:09 ` [patch 14/20] ibmveth: Convert driver specific error functions to netdev_err Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 16/20] ibmveth: Coding style fixes Anton Blanchard
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_min_mtu --]
[-- Type: text/plain, Size: 6259 bytes --]

IbmVethNumBufferPools -> IBMVETH_NUM_BUFF_POOLS

Also change IBMVETH_MAX_MTU -> IBMVETH_MIN_MTU, since the constant
refers to the minimum MTU, not the maximum.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:52:33.000000000 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 08:53:23.040010330 +1000
@@ -309,7 +309,7 @@ static void ibmveth_replenish_task(struc
 
 	adapter->replenish_task_cycles++;
 
-	for (i = (IbmVethNumBufferPools - 1); i >= 0; i--) {
+	for (i = (IBMVETH_NUM_BUFF_POOLS - 1); i >= 0; i--) {
 		struct ibmveth_buff_pool *pool = &adapter->rx_buff_pool[i];
 
 		if (pool->active &&
@@ -361,7 +361,7 @@ static void ibmveth_remove_buffer_from_p
 	unsigned int free_index;
 	struct sk_buff *skb;
 
-	ibmveth_assert(pool < IbmVethNumBufferPools);
+	ibmveth_assert(pool < IBMVETH_NUM_BUFF_POOLS);
 	ibmveth_assert(index < adapter->rx_buff_pool[pool].size);
 
 	skb = adapter->rx_buff_pool[pool].skbuff[index];
@@ -394,7 +394,7 @@ static inline struct sk_buff *ibmveth_rx
 	unsigned int pool = correlator >> 32;
 	unsigned int index = correlator & 0xffffffffUL;
 
-	ibmveth_assert(pool < IbmVethNumBufferPools);
+	ibmveth_assert(pool < IBMVETH_NUM_BUFF_POOLS);
 	ibmveth_assert(index < adapter->rx_buff_pool[pool].size);
 
 	return adapter->rx_buff_pool[pool].skbuff[index];
@@ -410,7 +410,7 @@ static void ibmveth_rxq_recycle_buffer(s
 	union ibmveth_buf_desc desc;
 	unsigned long lpar_rc;
 
-	ibmveth_assert(pool < IbmVethNumBufferPools);
+	ibmveth_assert(pool < IBMVETH_NUM_BUFF_POOLS);
 	ibmveth_assert(index < adapter->rx_buff_pool[pool].size);
 
 	if(!adapter->rx_buff_pool[pool].active) {
@@ -484,7 +484,7 @@ static void ibmveth_cleanup(struct ibmve
 		adapter->rx_queue.queue_addr = NULL;
 	}
 
-	for(i = 0; i<IbmVethNumBufferPools; i++)
+	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
 		if (adapter->rx_buff_pool[i].active)
 			ibmveth_free_buffer_pool(adapter,
 						 &adapter->rx_buff_pool[i]);
@@ -542,7 +542,7 @@ static int ibmveth_open(struct net_devic
 
 	napi_enable(&adapter->napi);
 
-	for(i = 0; i<IbmVethNumBufferPools; i++)
+	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
 		rxq_entries += adapter->rx_buff_pool[i].size;
 
 	adapter->buffer_list_addr = kzalloc(4096, GFP_KERNEL);
@@ -618,7 +618,7 @@ static int ibmveth_open(struct net_devic
 		return -ENONET;
 	}
 
-	for(i = 0; i<IbmVethNumBufferPools; i++) {
+	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
 		if(!adapter->rx_buff_pool[i].active)
 			continue;
 		if (ibmveth_alloc_buffer_pool(&adapter->rx_buff_pool[i])) {
@@ -1211,14 +1211,14 @@ static int ibmveth_change_mtu(struct net
 	int i, rc;
 	int need_restart = 0;
 
-	if (new_mtu < IBMVETH_MAX_MTU)
+	if (new_mtu < IBMVETH_MIN_MTU)
 		return -EINVAL;
 
-	for (i = 0; i < IbmVethNumBufferPools; i++)
+	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
 		if (new_mtu_oh < adapter->rx_buff_pool[i].buff_size)
 			break;
 
-	if (i == IbmVethNumBufferPools)
+	if (i == IBMVETH_NUM_BUFF_POOLS)
 		return -EINVAL;
 
 	/* Deactivate all the buffer pools so that the next loop can activate
@@ -1231,7 +1231,7 @@ static int ibmveth_change_mtu(struct net
 	}
 
 	/* Look for an active buffer pool that can hold the new MTU */
-	for(i = 0; i<IbmVethNumBufferPools; i++) {
+	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
 		adapter->rx_buff_pool[i].active = 1;
 
 		if (new_mtu_oh < adapter->rx_buff_pool[i].buff_size) {
@@ -1285,7 +1285,7 @@ static unsigned long ibmveth_get_desired
 	ret = IBMVETH_BUFF_LIST_SIZE + IBMVETH_FILT_LIST_SIZE;
 	ret += IOMMU_PAGE_ALIGN(netdev->mtu);
 
-	for (i = 0; i < IbmVethNumBufferPools; i++) {
+	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
 		/* add the size of the active receive buffers */
 		if (adapter->rx_buff_pool[i].active)
 			ret +=
@@ -1381,7 +1381,7 @@ static int __devinit ibmveth_probe(struc
 
 	memcpy(netdev->dev_addr, &adapter->mac_addr, netdev->addr_len);
 
-	for(i = 0; i<IbmVethNumBufferPools; i++) {
+	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
 		struct kobject *kobj = &adapter->rx_buff_pool[i].kobj;
 		int error;
 
@@ -1437,7 +1437,7 @@ static int __devexit ibmveth_remove(stru
 	struct ibmveth_adapter *adapter = netdev_priv(netdev);
 	int i;
 
-	for(i = 0; i<IbmVethNumBufferPools; i++)
+	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
 		kobject_put(&adapter->rx_buff_pool[i].kobj);
 
 	unregister_netdev(netdev);
@@ -1501,7 +1501,7 @@ const char * buf, size_t count)
 			int i;
 			/* Make sure there is a buffer pool with buffers that
 			   can hold a packet of the size of the MTU */
-			for (i = 0; i < IbmVethNumBufferPools; i++) {
+			for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
 				if (pool == &adapter->rx_buff_pool[i])
 					continue;
 				if (!adapter->rx_buff_pool[i].active)
@@ -1510,7 +1510,7 @@ const char * buf, size_t count)
 					break;
 			}
 
-			if (i == IbmVethNumBufferPools) {
+			if (i == IBMVETH_NUM_BUFF_POOLS) {
 				netdev_err(netdev, "no active pool >= MTU\n");
 				return -EPERM;
 			}
Index: net-next-2.6/drivers/net/ibmveth.h
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.h	2010-08-23 08:52:28.000000000 +1000
+++ net-next-2.6/drivers/net/ibmveth.h	2010-08-23 08:52:52.492178650 +1000
@@ -92,10 +92,10 @@ static inline long h_illan_attributes(un
 #define h_change_logical_lan_mac(ua, mac) \
   plpar_hcall_norets(H_CHANGE_LOGICAL_LAN_MAC, ua, mac)
 
-#define IbmVethNumBufferPools 5
+#define IBMVETH_NUM_BUFF_POOLS 5
 #define IBMVETH_IO_ENTITLEMENT_DEFAULT 4243456 /* MTU of 1500 needs 4.2Mb */
 #define IBMVETH_BUFF_OH 22 /* Overhead: 14 ethernet header + 8 opaque handle */
-#define IBMVETH_MAX_MTU 68
+#define IBMVETH_MIN_MTU 68
 #define IBMVETH_MAX_POOL_COUNT 4096
 #define IBMVETH_BUFF_LIST_SIZE 4096
 #define IBMVETH_FILT_LIST_SIZE 4096
@@ -142,7 +142,7 @@ struct ibmveth_adapter {
     void * filter_list_addr;
     dma_addr_t buffer_list_dma;
     dma_addr_t filter_list_dma;
-    struct ibmveth_buff_pool rx_buff_pool[IbmVethNumBufferPools];
+    struct ibmveth_buff_pool rx_buff_pool[IBMVETH_NUM_BUFF_POOLS];
     struct ibmveth_rx_q rx_queue;
     int pool_config;
     int rx_csum;




* [patch 16/20] ibmveth: Coding style fixes
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (14 preceding siblings ...)
  2010-08-23  0:09 ` [patch 15/20] ibmveth: Some formatting fixes Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 17/20] ibmveth: Return -EINVAL on all ->probe errors Anton Blanchard
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_style --]
[-- Type: text/plain, Size: 30904 bytes --]

Fix most of the kernel coding style issues in ibmveth.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 08:53:23.040010330 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 09:23:11.982966815 +1000
@@ -1,31 +1,29 @@
-/**************************************************************************/
-/*                                                                        */
-/* IBM eServer i/pSeries Virtual Ethernet Device Driver                   */
-/* Copyright (C) 2003 IBM Corp.                                           */
-/*  Originally written by Dave Larson (larson1@us.ibm.com)                */
-/*  Maintained by Santiago Leon (santil@us.ibm.com)                       */
-/*                                                                        */
-/*  This program is free software; you can redistribute it and/or modify  */
-/*  it under the terms of the GNU General Public License as published by  */
-/*  the Free Software Foundation; either version 2 of the License, or     */
-/*  (at your option) any later version.                                   */
-/*                                                                        */
-/*  This program is distributed in the hope that it will be useful,       */
-/*  but WITHOUT ANY WARRANTY; without even the implied warranty of        */
-/*  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the         */
-/*  GNU General Public License for more details.                          */
-/*                                                                        */
-/*  You should have received a copy of the GNU General Public License     */
-/*  along with this program; if not, write to the Free Software           */
-/*  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  */
-/*                                                                   USA  */
-/*                                                                        */
-/* This module contains the implementation of a virtual ethernet device   */
-/* for use with IBM i/pSeries LPAR Linux.  It utilizes the logical LAN    */
-/* option of the RS/6000 Platform Architechture to interface with virtual */
-/* ethernet NICs that are presented to the partition by the hypervisor.   */
-/*                                                                        */
-/**************************************************************************/
+/*
+ * IBM eServer i/pSeries Virtual Ethernet Device Driver
+ * Copyright (C) 2003 IBM Corp.
+ *  Originally written by Dave Larson (larson1@us.ibm.com)
+ *  Maintained by Santiago Leon (santil@us.ibm.com)
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; either version 2 of the License, or
+ *  (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; if not, write to the Free Software
+ *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307
+ *                                                                   USA
+ *
+ * This module contains the implementation of a virtual ethernet device
+ * for use with IBM i/pSeries LPAR Linux.  It utilizes the logical LAN
+ * option of the RS/6000 Platform Architechture to interface with virtual
+ * ethernet NICs that are presented to the partition by the hypervisor.
+ */
 
 #include <linux/module.h>
 #include <linux/moduleparam.h>
@@ -58,7 +56,7 @@
 
 #ifdef DEBUG
 #define ibmveth_assert(expr) \
-  if(!(expr)) {                                   \
+  if (!(expr)) {                                   \
     printk(KERN_DEBUG "assertion failed (%s:%3.3d ua:%x): %s\n", __FILE__, __LINE__, adapter->vdev->unit_address, #expr); \
     BUG(); \
   }
@@ -74,7 +72,8 @@ static struct kobj_type ktype_veth_pool;
 
 
 static const char ibmveth_driver_name[] = "ibmveth";
-static const char ibmveth_driver_string[] = "IBM i/pSeries Virtual Ethernet Driver";
+static const char ibmveth_driver_string[] = "IBM i/pSeries Virtual Ethernet "
+					    "Driver";
 #define ibmveth_driver_version "1.03"
 
 MODULE_AUTHOR("Santiago Leon <santil@us.ibm.com>");
@@ -107,8 +106,10 @@ struct ibmveth_stat {
 struct ibmveth_stat ibmveth_stats[] = {
 	{ "replenish_task_cycles", IBMVETH_STAT_OFF(replenish_task_cycles) },
 	{ "replenish_no_mem", IBMVETH_STAT_OFF(replenish_no_mem) },
-	{ "replenish_add_buff_failure", IBMVETH_STAT_OFF(replenish_add_buff_failure) },
-	{ "replenish_add_buff_success", IBMVETH_STAT_OFF(replenish_add_buff_success) },
+	{ "replenish_add_buff_failure",
+			IBMVETH_STAT_OFF(replenish_add_buff_failure) },
+	{ "replenish_add_buff_success",
+			IBMVETH_STAT_OFF(replenish_add_buff_success) },
 	{ "rx_invalid_buffer", IBMVETH_STAT_OFF(rx_invalid_buffer) },
 	{ "rx_no_buffer", IBMVETH_STAT_OFF(rx_no_buffer) },
 	{ "tx_map_failed", IBMVETH_STAT_OFF(tx_map_failed) },
@@ -123,36 +124,39 @@ static inline u32 ibmveth_rxq_flags(stru
 
 static inline int ibmveth_rxq_toggle(struct ibmveth_adapter *adapter)
 {
-	return (ibmveth_rxq_flags(adapter) & IBMVETH_RXQ_TOGGLE) >> IBMVETH_RXQ_TOGGLE_SHIFT;
+	return (ibmveth_rxq_flags(adapter) & IBMVETH_RXQ_TOGGLE) >>
+			IBMVETH_RXQ_TOGGLE_SHIFT;
 }
 
 static inline int ibmveth_rxq_pending_buffer(struct ibmveth_adapter *adapter)
 {
-	return (ibmveth_rxq_toggle(adapter) == adapter->rx_queue.toggle);
+	return ibmveth_rxq_toggle(adapter) == adapter->rx_queue.toggle;
 }
 
 static inline int ibmveth_rxq_buffer_valid(struct ibmveth_adapter *adapter)
 {
-	return (ibmveth_rxq_flags(adapter) & IBMVETH_RXQ_VALID);
+	return ibmveth_rxq_flags(adapter) & IBMVETH_RXQ_VALID;
 }
 
 static inline int ibmveth_rxq_frame_offset(struct ibmveth_adapter *adapter)
 {
-	return (ibmveth_rxq_flags(adapter) & IBMVETH_RXQ_OFF_MASK);
+	return ibmveth_rxq_flags(adapter) & IBMVETH_RXQ_OFF_MASK;
 }
 
 static inline int ibmveth_rxq_frame_length(struct ibmveth_adapter *adapter)
 {
-	return (adapter->rx_queue.queue_addr[adapter->rx_queue.index].length);
+	return adapter->rx_queue.queue_addr[adapter->rx_queue.index].length;
 }
 
 static inline int ibmveth_rxq_csum_good(struct ibmveth_adapter *adapter)
 {
-	return (ibmveth_rxq_flags(adapter) & IBMVETH_RXQ_CSUM_GOOD);
+	return ibmveth_rxq_flags(adapter) & IBMVETH_RXQ_CSUM_GOOD;
 }
 
 /* setup the initial settings for a buffer pool */
-static void ibmveth_init_buffer_pool(struct ibmveth_buff_pool *pool, u32 pool_index, u32 pool_size, u32 buff_size, u32 pool_active)
+static void ibmveth_init_buffer_pool(struct ibmveth_buff_pool *pool,
+				     u32 pool_index, u32 pool_size,
+				     u32 buff_size, u32 pool_active)
 {
 	pool->size = pool_size;
 	pool->index = pool_index;
@@ -168,12 +172,11 @@ static int ibmveth_alloc_buffer_pool(str
 
 	pool->free_map = kmalloc(sizeof(u16) * pool->size, GFP_KERNEL);
 
-	if(!pool->free_map) {
+	if (!pool->free_map)
 		return -1;
-	}
 
 	pool->dma_addr = kmalloc(sizeof(dma_addr_t) * pool->size, GFP_KERNEL);
-	if(!pool->dma_addr) {
+	if (!pool->dma_addr) {
 		kfree(pool->free_map);
 		pool->free_map = NULL;
 		return -1;
@@ -181,7 +184,7 @@ static int ibmveth_alloc_buffer_pool(str
 
 	pool->skbuff = kcalloc(pool->size, sizeof(void *), GFP_KERNEL);
 
-	if(!pool->skbuff) {
+	if (!pool->skbuff) {
 		kfree(pool->dma_addr);
 		pool->dma_addr = NULL;
 
@@ -192,9 +195,8 @@ static int ibmveth_alloc_buffer_pool(str
 
 	memset(pool->dma_addr, 0, sizeof(dma_addr_t) * pool->size);
 
-	for(i = 0; i < pool->size; ++i) {
+	for (i = 0; i < pool->size; ++i)
 		pool->free_map[i] = i;
-	}
 
 	atomic_set(&pool->available, 0);
 	pool->producer_index = 0;
@@ -214,7 +216,8 @@ static inline void ibmveth_flush_buffer(
 /* replenish the buffers for a pool.  note that we don't need to
  * skb_reserve these since they are used for incoming...
  */
-static void ibmveth_replenish_buffer_pool(struct ibmveth_adapter *adapter, struct ibmveth_buff_pool *pool)
+static void ibmveth_replenish_buffer_pool(struct ibmveth_adapter *adapter,
+					  struct ibmveth_buff_pool *pool)
 {
 	u32 i;
 	u32 count = pool->size - atomic_read(&pool->available);
@@ -227,12 +230,12 @@ static void ibmveth_replenish_buffer_poo
 
 	mb();
 
-	for(i = 0; i < count; ++i) {
+	for (i = 0; i < count; ++i) {
 		union ibmveth_buf_desc desc;
 
 		skb = netdev_alloc_skb(adapter->netdev, pool->buff_size);
 
-		if(!skb) {
+		if (!skb) {
 			netdev_dbg(adapter->netdev,
 				   "replenish: unable to allocate skb\n");
 			adapter->replenish_no_mem++;
@@ -259,7 +262,7 @@ static void ibmveth_replenish_buffer_poo
 		pool->skbuff[index] = skb;
 
 		correlator = ((u64)pool->index << 32) | index;
-		*(u64*)skb->data = correlator;
+		*(u64 *)skb->data = correlator;
 
 		desc.fields.flags_len = IBMVETH_BUF_VALID | pool->buff_size;
 		desc.fields.address = dma_addr;
@@ -270,11 +273,12 @@ static void ibmveth_replenish_buffer_poo
 						IBMVETH_BUFF_OH);
 			ibmveth_flush_buffer(skb->data, len);
 		}
-		lpar_rc = h_add_logical_lan_buffer(adapter->vdev->unit_address, desc.desc);
+		lpar_rc = h_add_logical_lan_buffer(adapter->vdev->unit_address,
+						   desc.desc);
 
-		if (lpar_rc != H_SUCCESS)
+		if (lpar_rc != H_SUCCESS) {
 			goto failure;
-		else {
+		} else {
 			buffers_added++;
 			adapter->replenish_add_buff_success++;
 		}
@@ -317,21 +321,23 @@ static void ibmveth_replenish_task(struc
 			ibmveth_replenish_buffer_pool(adapter, pool);
 	}
 
-	adapter->rx_no_buffer = *(u64*)(((char*)adapter->buffer_list_addr) + 4096 - 8);
+	adapter->rx_no_buffer = *(u64 *)(((char*)adapter->buffer_list_addr) +
+						4096 - 8);
 }
 
 /* empty and free ana buffer pool - also used to do cleanup in error paths */
-static void ibmveth_free_buffer_pool(struct ibmveth_adapter *adapter, struct ibmveth_buff_pool *pool)
+static void ibmveth_free_buffer_pool(struct ibmveth_adapter *adapter,
+				     struct ibmveth_buff_pool *pool)
 {
 	int i;
 
 	kfree(pool->free_map);
 	pool->free_map = NULL;
 
-	if(pool->skbuff && pool->dma_addr) {
-		for(i = 0; i < pool->size; ++i) {
+	if (pool->skbuff && pool->dma_addr) {
+		for (i = 0; i < pool->size; ++i) {
 			struct sk_buff *skb = pool->skbuff[i];
-			if(skb) {
+			if (skb) {
 				dma_unmap_single(&adapter->vdev->dev,
 						 pool->dma_addr[i],
 						 pool->buff_size,
@@ -342,19 +348,20 @@ static void ibmveth_free_buffer_pool(str
 		}
 	}
 
-	if(pool->dma_addr) {
+	if (pool->dma_addr) {
 		kfree(pool->dma_addr);
 		pool->dma_addr = NULL;
 	}
 
-	if(pool->skbuff) {
+	if (pool->skbuff) {
 		kfree(pool->skbuff);
 		pool->skbuff = NULL;
 	}
 }
 
 /* remove a buffer from a pool */
-static void ibmveth_remove_buffer_from_pool(struct ibmveth_adapter *adapter, u64 correlator)
+static void ibmveth_remove_buffer_from_pool(struct ibmveth_adapter *adapter,
+					    u64 correlator)
 {
 	unsigned int pool  = correlator >> 32;
 	unsigned int index = correlator & 0xffffffffUL;
@@ -413,7 +420,7 @@ static void ibmveth_rxq_recycle_buffer(s
 	ibmveth_assert(pool < IBMVETH_NUM_BUFF_POOLS);
 	ibmveth_assert(index < adapter->rx_buff_pool[pool].size);
 
-	if(!adapter->rx_buff_pool[pool].active) {
+	if (!adapter->rx_buff_pool[pool].active) {
 		ibmveth_rxq_harvest_buffer(adapter);
 		ibmveth_free_buffer_pool(adapter, &adapter->rx_buff_pool[pool]);
 		return;
@@ -425,13 +432,13 @@ static void ibmveth_rxq_recycle_buffer(s
 
 	lpar_rc = h_add_logical_lan_buffer(adapter->vdev->unit_address, desc.desc);
 
-	if(lpar_rc != H_SUCCESS) {
+	if (lpar_rc != H_SUCCESS) {
 		netdev_dbg(adapter->netdev, "h_add_logical_lan_buffer failed "
 			   "during recycle rc=%ld", lpar_rc);
 		ibmveth_remove_buffer_from_pool(adapter, adapter->rx_queue.queue_addr[adapter->rx_queue.index].correlator);
 	}
 
-	if(++adapter->rx_queue.index == adapter->rx_queue.num_slots) {
+	if (++adapter->rx_queue.index == adapter->rx_queue.num_slots) {
 		adapter->rx_queue.index = 0;
 		adapter->rx_queue.toggle = !adapter->rx_queue.toggle;
 	}
@@ -441,7 +448,7 @@ static void ibmveth_rxq_harvest_buffer(s
 {
 	ibmveth_remove_buffer_from_pool(adapter, adapter->rx_queue.queue_addr[adapter->rx_queue.index].correlator);
 
-	if(++adapter->rx_queue.index == adapter->rx_queue.num_slots) {
+	if (++adapter->rx_queue.index == adapter->rx_queue.num_slots) {
 		adapter->rx_queue.index = 0;
 		adapter->rx_queue.toggle = !adapter->rx_queue.toggle;
 	}
@@ -452,7 +459,7 @@ static void ibmveth_cleanup(struct ibmve
 	int i;
 	struct device *dev = &adapter->vdev->dev;
 
-	if(adapter->buffer_list_addr != NULL) {
+	if (adapter->buffer_list_addr != NULL) {
 		if (!dma_mapping_error(dev, adapter->buffer_list_dma)) {
 			dma_unmap_single(dev, adapter->buffer_list_dma, 4096,
 					DMA_BIDIRECTIONAL);
@@ -462,7 +469,7 @@ static void ibmveth_cleanup(struct ibmve
 		adapter->buffer_list_addr = NULL;
 	}
 
-	if(adapter->filter_list_addr != NULL) {
+	if (adapter->filter_list_addr != NULL) {
 		if (!dma_mapping_error(dev, adapter->filter_list_dma)) {
 			dma_unmap_single(dev, adapter->filter_list_dma, 4096,
 					DMA_BIDIRECTIONAL);
@@ -472,7 +479,7 @@ static void ibmveth_cleanup(struct ibmve
 		adapter->filter_list_addr = NULL;
 	}
 
-	if(adapter->rx_queue.queue_addr != NULL) {
+	if (adapter->rx_queue.queue_addr != NULL) {
 		if (!dma_mapping_error(dev, adapter->rx_queue.queue_dma)) {
 			dma_unmap_single(dev,
 					adapter->rx_queue.queue_dma,
@@ -484,7 +491,7 @@ static void ibmveth_cleanup(struct ibmve
 		adapter->rx_queue.queue_addr = NULL;
 	}
 
-	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
+	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
 		if (adapter->rx_buff_pool[i].active)
 			ibmveth_free_buffer_pool(adapter,
 						 &adapter->rx_buff_pool[i]);
@@ -507,9 +514,11 @@ static int ibmveth_register_logical_lan(
 {
 	int rc, try_again = 1;
 
-	/* After a kexec the adapter will still be open, so our attempt to
-	* open it will fail. So if we get a failure we free the adapter and
-	* try again, but only once. */
+	/*
+	 * After a kexec the adapter will still be open, so our attempt to
+	 * open it will fail. So if we get a failure we free the adapter and
+	 * try again, but only once.
+	 */
 retry:
 	rc = h_register_logical_lan(adapter->vdev->unit_address,
 				    adapter->buffer_list_dma, rxq_desc.desc,
@@ -542,13 +551,13 @@ static int ibmveth_open(struct net_devic
 
 	napi_enable(&adapter->napi);
 
-	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
+	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
 		rxq_entries += adapter->rx_buff_pool[i].size;
 
 	adapter->buffer_list_addr = kzalloc(4096, GFP_KERNEL);
 	adapter->filter_list_addr = kzalloc(4096, GFP_KERNEL);
 
-	if(!adapter->buffer_list_addr || !adapter->filter_list_addr) {
+	if (!adapter->buffer_list_addr || !adapter->filter_list_addr) {
 		netdev_err(netdev, "unable to allocate filter or buffer list "
 			   "pages\n");
 		ibmveth_cleanup(adapter);
@@ -556,10 +565,12 @@ static int ibmveth_open(struct net_devic
 		return -ENOMEM;
 	}
 
-	adapter->rx_queue.queue_len = sizeof(struct ibmveth_rx_q_entry) * rxq_entries;
-	adapter->rx_queue.queue_addr = kmalloc(adapter->rx_queue.queue_len, GFP_KERNEL);
+	adapter->rx_queue.queue_len = sizeof(struct ibmveth_rx_q_entry) *
+						rxq_entries;
+	adapter->rx_queue.queue_addr = kmalloc(adapter->rx_queue.queue_len,
+						GFP_KERNEL);
 
-	if(!adapter->rx_queue.queue_addr) {
+	if (!adapter->rx_queue.queue_addr) {
 		netdev_err(netdev, "unable to allocate rx queue pages\n");
 		ibmveth_cleanup(adapter);
 		napi_disable(&adapter->napi);
@@ -593,7 +604,8 @@ static int ibmveth_open(struct net_devic
 	memcpy(&mac_address, netdev->dev_addr, netdev->addr_len);
 	mac_address = mac_address >> 16;
 
-	rxq_desc.fields.flags_len = IBMVETH_BUF_VALID | adapter->rx_queue.queue_len;
+	rxq_desc.fields.flags_len = IBMVETH_BUF_VALID |
+					adapter->rx_queue.queue_len;
 	rxq_desc.fields.address = adapter->rx_queue.queue_dma;
 
 	netdev_dbg(netdev, "buffer list @ 0x%p\n", adapter->buffer_list_addr);
@@ -604,7 +616,7 @@ static int ibmveth_open(struct net_devic
 
 	lpar_rc = ibmveth_register_logical_lan(adapter, rxq_desc, mac_address);
 
-	if(lpar_rc != H_SUCCESS) {
+	if (lpar_rc != H_SUCCESS) {
 		netdev_err(netdev, "h_register_logical_lan failed with %ld\n",
 			   lpar_rc);
 		netdev_err(netdev, "buffer TCE:0x%llx filter TCE:0x%llx rxq "
@@ -618,8 +630,8 @@ static int ibmveth_open(struct net_devic
 		return -ENONET;
 	}
 
-	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
-		if(!adapter->rx_buff_pool[i].active)
+	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
+		if (!adapter->rx_buff_pool[i].active)
 			continue;
 		if (ibmveth_alloc_buffer_pool(&adapter->rx_buff_pool[i])) {
 			netdev_err(netdev, "unable to alloc pool\n");
@@ -631,7 +643,9 @@ static int ibmveth_open(struct net_devic
 	}
 
 	netdev_dbg(netdev, "registering irq 0x%x\n", netdev->irq);
-	if((rc = request_irq(netdev->irq, ibmveth_interrupt, 0, netdev->name, netdev)) != 0) {
+	rc = request_irq(netdev->irq, ibmveth_interrupt, 0, netdev->name,
+			 netdev);
+	if (rc != 0) {
 		netdev_err(netdev, "unable to request irq 0x%x, rc %d\n",
 			   netdev->irq, rc);
 		do {
@@ -689,15 +703,15 @@ static int ibmveth_close(struct net_devi
 		lpar_rc = h_free_logical_lan(adapter->vdev->unit_address);
 	} while (H_IS_LONG_BUSY(lpar_rc) || (lpar_rc == H_BUSY));
 
-	if(lpar_rc != H_SUCCESS)
-	{
+	if (lpar_rc != H_SUCCESS) {
 		netdev_err(netdev, "h_free_logical_lan failed with %lx, "
 			   "continuing with close\n", lpar_rc);
 	}
 
 	free_irq(netdev->irq, netdev);
 
-	adapter->rx_no_buffer = *(u64*)(((char*)adapter->buffer_list_addr) + 4096 - 8);
+	adapter->rx_no_buffer = *(u64 *)(((char *)adapter->buffer_list_addr) +
+						4096 - 8);
 
 	ibmveth_cleanup(adapter);
 
@@ -706,9 +720,12 @@ static int ibmveth_close(struct net_devi
 	return 0;
 }
 
-static int netdev_get_settings(struct net_device *dev, struct ethtool_cmd *cmd) {
-	cmd->supported = (SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg | SUPPORTED_FIBRE);
-	cmd->advertising = (ADVERTISED_1000baseT_Full | ADVERTISED_Autoneg | ADVERTISED_FIBRE);
+static int netdev_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+	cmd->supported = (SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg |
+				SUPPORTED_FIBRE);
+	cmd->advertising = (ADVERTISED_1000baseT_Full | ADVERTISED_Autoneg |
+				ADVERTISED_FIBRE);
 	cmd->speed = SPEED_1000;
 	cmd->duplex = DUPLEX_FULL;
 	cmd->port = PORT_FIBRE;
@@ -720,12 +737,16 @@ static int netdev_get_settings(struct ne
 	return 0;
 }
 
-static void netdev_get_drvinfo (struct net_device *dev, struct ethtool_drvinfo *info) {
+static void netdev_get_drvinfo(struct net_device *dev,
+			       struct ethtool_drvinfo *info)
+{
 	strncpy(info->driver, ibmveth_driver_name, sizeof(info->driver) - 1);
-	strncpy(info->version, ibmveth_driver_version, sizeof(info->version) - 1);
+	strncpy(info->version, ibmveth_driver_version,
+		sizeof(info->version) - 1);
 }
 
-static u32 netdev_get_link(struct net_device *dev) {
+static u32 netdev_get_link(struct net_device *dev)
+{
 	return 1;
 }
 
@@ -733,15 +754,16 @@ static void ibmveth_set_rx_csum_flags(st
 {
 	struct ibmveth_adapter *adapter = netdev_priv(dev);
 
-	if (data)
+	if (data) {
 		adapter->rx_csum = 1;
-	else {
+	} else {
 		/*
-		 * Since the ibmveth firmware interface does not have the concept of
-		 * separate tx/rx checksum offload enable, if rx checksum is disabled
-		 * we also have to disable tx checksum offload. Once we disable rx
-		 * checksum offload, we are no longer allowed to send tx buffers that
-		 * are not properly checksummed.
+		 * Since the ibmveth firmware interface does not have the
+		 * concept of separate tx/rx checksum offload enable, if rx
+		 * checksum is disabled we also have to disable tx checksum
+		 * offload. Once we disable rx checksum offload, we are no
+		 * longer allowed to send tx buffers that are not properly
+		 * checksummed.
 		 */
 		adapter->rx_csum = 0;
 		dev->features &= ~NETIF_F_IP_CSUM;
@@ -755,8 +777,9 @@ static void ibmveth_set_tx_csum_flags(st
 	if (data) {
 		dev->features |= NETIF_F_IP_CSUM;
 		adapter->rx_csum = 1;
-	} else
+	} else {
 		dev->features &= ~NETIF_F_IP_CSUM;
+	}
 }
 
 static int ibmveth_set_csum_offload(struct net_device *dev, u32 data,
@@ -798,8 +821,9 @@ static int ibmveth_set_csum_offload(stru
 
 			ret = h_illan_attributes(adapter->vdev->unit_address,
 						 set_attr, clr_attr, &ret_attr);
-		} else
+		} else {
 			done(dev, data);
+		}
 	} else {
 		rc1 = -EIO;
 		netdev_err(dev, "unable to change checksum offload settings."
@@ -834,7 +858,8 @@ static int ibmveth_set_tx_csum(struct ne
 		return 0;
 
 	if (data && !adapter->rx_csum)
-		rc = ibmveth_set_csum_offload(dev, data, ibmveth_set_tx_csum_flags);
+		rc = ibmveth_set_csum_offload(dev, data,
+					      ibmveth_set_tx_csum_flags);
 	else
 		ibmveth_set_tx_csum_flags(dev, data);
 
@@ -1054,12 +1079,13 @@ map_failed:
 
 static int ibmveth_poll(struct napi_struct *napi, int budget)
 {
-	struct ibmveth_adapter *adapter = container_of(napi, struct ibmveth_adapter, napi);
+	struct ibmveth_adapter *adapter =
+			container_of(napi, struct ibmveth_adapter, napi);
 	struct net_device *netdev = adapter->netdev;
 	int frames_processed = 0;
 	unsigned long lpar_rc;
 
- restart_poll:
+restart_poll:
 	do {
 		if (!ibmveth_rxq_pending_buffer(adapter))
 			break;
@@ -1160,7 +1186,7 @@ static void ibmveth_set_multicast_list(s
 					   IbmVethMcastEnableRecv |
 					   IbmVethMcastDisableFiltering,
 					   0);
-		if(lpar_rc != H_SUCCESS) {
+		if (lpar_rc != H_SUCCESS) {
 			netdev_err(netdev, "h_multicast_ctrl rc=%ld when "
 				   "entering promisc mode\n", lpar_rc);
 		}
@@ -1172,20 +1198,20 @@ static void ibmveth_set_multicast_list(s
 					   IbmVethMcastDisableFiltering |
 					   IbmVethMcastClearFilterTable,
 					   0);
-		if(lpar_rc != H_SUCCESS) {
+		if (lpar_rc != H_SUCCESS) {
 			netdev_err(netdev, "h_multicast_ctrl rc=%ld when "
 				   "attempting to clear filter table\n",
 				   lpar_rc);
 		}
 		/* add the addresses to the filter table */
 		netdev_for_each_mc_addr(ha, netdev) {
-			// add the multicast address to the filter table
+			/* add the multicast address to the filter table */
 			unsigned long mcast_addr = 0;
 			memcpy(((char *)&mcast_addr)+2, ha->addr, 6);
 			lpar_rc = h_multicast_ctrl(adapter->vdev->unit_address,
 						   IbmVethMcastAddFilter,
 						   mcast_addr);
-			if(lpar_rc != H_SUCCESS) {
+			if (lpar_rc != H_SUCCESS) {
 				netdev_err(netdev, "h_multicast_ctrl rc=%ld "
 					   "when adding an entry to the filter "
 					   "table\n", lpar_rc);
@@ -1196,7 +1222,7 @@ static void ibmveth_set_multicast_list(s
 		lpar_rc = h_multicast_ctrl(adapter->vdev->unit_address,
 					   IbmVethMcastEnableFiltering,
 					   0);
-		if(lpar_rc != H_SUCCESS) {
+		if (lpar_rc != H_SUCCESS) {
 			netdev_err(netdev, "h_multicast_ctrl rc=%ld when "
 				   "enabling filtering\n", lpar_rc);
 		}
@@ -1231,7 +1257,7 @@ static int ibmveth_change_mtu(struct net
 	}
 
 	/* Look for an active buffer pool that can hold the new MTU */
-	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
+	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
 		adapter->rx_buff_pool[i].active = 1;
 
 		if (new_mtu_oh < adapter->rx_buff_pool[i].buff_size) {
@@ -1314,31 +1340,30 @@ static const struct net_device_ops ibmve
 #endif
 };
 
-static int __devinit ibmveth_probe(struct vio_dev *dev, const struct vio_device_id *id)
+static int __devinit ibmveth_probe(struct vio_dev *dev,
+				   const struct vio_device_id *id)
 {
 	int rc, i;
 	long ret;
 	struct net_device *netdev;
 	struct ibmveth_adapter *adapter;
 	unsigned long set_attr, ret_attr;
-
 	unsigned char *mac_addr_p;
 	unsigned int *mcastFilterSize_p;
 
-
 	dev_dbg(&dev->dev, "entering ibmveth_probe for UA 0x%x\n",
 		dev->unit_address);
 
-	mac_addr_p = (unsigned char *) vio_get_attribute(dev,
+	mac_addr_p = (unsigned char *)vio_get_attribute(dev,
 						VETH_MAC_ADDR, NULL);
-	if(!mac_addr_p) {
+	if (!mac_addr_p) {
 		dev_err(&dev->dev, "Can't find VETH_MAC_ADDR attribute\n");
 		return 0;
 	}
 
-	mcastFilterSize_p = (unsigned int *) vio_get_attribute(dev,
+	mcastFilterSize_p = (unsigned int *)vio_get_attribute(dev,
 						VETH_MCAST_FILTER_SIZE, NULL);
-	if(!mcastFilterSize_p) {
+	if (!mcastFilterSize_p) {
 		dev_err(&dev->dev, "Can't find VETH_MCAST_FILTER_SIZE "
 			"attribute\n");
 		return 0;
@@ -1346,7 +1371,7 @@ static int __devinit ibmveth_probe(struc
 
 	netdev = alloc_etherdev(sizeof(struct ibmveth_adapter));
 
-	if(!netdev)
+	if (!netdev)
 		return -ENOMEM;
 
 	adapter = netdev_priv(netdev);
@@ -1354,19 +1379,19 @@ static int __devinit ibmveth_probe(struc
 
 	adapter->vdev = dev;
 	adapter->netdev = netdev;
-	adapter->mcastFilterSize= *mcastFilterSize_p;
+	adapter->mcastFilterSize = *mcastFilterSize_p;
 	adapter->pool_config = 0;
 
 	netif_napi_add(netdev, &adapter->napi, ibmveth_poll, 16);
 
-	/* 	Some older boxes running PHYP non-natively have an OF that
-		returns a 8-byte local-mac-address field (and the first
-		2 bytes have to be ignored) while newer boxes' OF return
-		a 6-byte field. Note that IEEE 1275 specifies that
-		local-mac-address must be a 6-byte field.
-		The RPA doc specifies that the first byte must be 10b, so
-		we'll just look for it to solve this 8 vs. 6 byte field issue */
-
+	/*
+	 * Some older boxes running PHYP non-natively have an OF that returns
+	 * a 8-byte local-mac-address field (and the first 2 bytes have to be
+	 * ignored) while newer boxes' OF return a 6-byte field. Note that
+	 * IEEE 1275 specifies that local-mac-address must be a 6-byte field.
+	 * The RPA doc specifies that the first byte must be 10b, so we'll
+	 * just look for it to solve this 8 vs. 6 byte field issue
+	 */
 	if ((*mac_addr_p & 0x3) != 0x02)
 		mac_addr_p += 2;
 
@@ -1381,7 +1406,7 @@ static int __devinit ibmveth_probe(struc
 
 	memcpy(netdev->dev_addr, &adapter->mac_addr, netdev->addr_len);
 
-	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
+	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) {
 		struct kobject *kobj = &adapter->rx_buff_pool[i].kobj;
 		int error;
 
@@ -1409,18 +1434,21 @@ static int __devinit ibmveth_probe(struc
 	    (ret_attr & IBMVETH_ILLAN_PADDED_PKT_CSUM)) {
 		set_attr = IBMVETH_ILLAN_IPV4_TCP_CSUM;
 
-		ret = h_illan_attributes(dev->unit_address, 0, set_attr, &ret_attr);
+		ret = h_illan_attributes(dev->unit_address, 0, set_attr,
+					 &ret_attr);
 
 		if (ret == H_SUCCESS) {
 			adapter->rx_csum = 1;
 			netdev->features |= NETIF_F_IP_CSUM;
-		} else
-			ret = h_illan_attributes(dev->unit_address, set_attr, 0, &ret_attr);
+		} else {
+			ret = h_illan_attributes(dev->unit_address, set_attr,
+						 0, &ret_attr);
+		}
 	}
 
 	rc = register_netdev(netdev);
 
-	if(rc) {
+	if (rc) {
 		netdev_dbg(netdev, "failed to register netdev rc=%d\n", rc);
 		free_netdev(netdev);
 		return rc;
@@ -1437,7 +1465,7 @@ static int __devexit ibmveth_remove(stru
 	struct ibmveth_adapter *adapter = netdev_priv(netdev);
 	int i;
 
-	for(i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
+	for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++)
 		kobject_put(&adapter->rx_buff_pool[i].kobj);
 
 	unregister_netdev(netdev);
@@ -1452,8 +1480,8 @@ static struct attribute veth_active_attr
 static struct attribute veth_num_attr;
 static struct attribute veth_size_attr;
 
-static ssize_t veth_pool_show(struct kobject * kobj,
-                              struct attribute * attr, char * buf)
+static ssize_t veth_pool_show(struct kobject *kobj,
+			      struct attribute *attr, char *buf)
 {
 	struct ibmveth_buff_pool *pool = container_of(kobj,
 						      struct ibmveth_buff_pool,
@@ -1468,8 +1496,8 @@ static ssize_t veth_pool_show(struct kob
 	return 0;
 }
 
-static ssize_t veth_pool_store(struct kobject * kobj, struct attribute * attr,
-const char * buf, size_t count)
+static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
+			       const char *buf, size_t count)
 {
 	struct ibmveth_buff_pool *pool = container_of(kobj,
 						      struct ibmveth_buff_pool,
@@ -1483,7 +1511,7 @@ const char * buf, size_t count)
 	if (attr == &veth_active_attr) {
 		if (value && !pool->active) {
 			if (netif_running(netdev)) {
-				if(ibmveth_alloc_buffer_pool(pool)) {
+				if (ibmveth_alloc_buffer_pool(pool)) {
 					netdev_err(netdev,
 						   "unable to alloc pool\n");
 					return -ENOMEM;
@@ -1494,8 +1522,9 @@ const char * buf, size_t count)
 				adapter->pool_config = 0;
 				if ((rc = ibmveth_open(netdev)))
 					return rc;
-			} else
+			} else {
 				pool->active = 1;
+			}
 		} else if (!value && pool->active) {
 			int mtu = netdev->mtu + IBMVETH_BUFF_OH;
 			int i;
@@ -1526,9 +1555,9 @@ const char * buf, size_t count)
 			pool->active = 0;
 		}
 	} else if (attr == &veth_num_attr) {
-		if (value <= 0 || value > IBMVETH_MAX_POOL_COUNT)
+		if (value <= 0 || value > IBMVETH_MAX_POOL_COUNT) {
 			return -EINVAL;
-		else {
+		} else {
 			if (netif_running(netdev)) {
 				adapter->pool_config = 1;
 				ibmveth_close(netdev);
@@ -1536,13 +1565,14 @@ const char * buf, size_t count)
 				pool->size = value;
 				if ((rc = ibmveth_open(netdev)))
 					return rc;
-			} else
+			} else {
 				pool->size = value;
+			}
 		}
 	} else if (attr == &veth_size_attr) {
-		if (value <= IBMVETH_BUFF_OH || value > IBMVETH_MAX_BUF_SIZE)
+		if (value <= IBMVETH_BUFF_OH || value > IBMVETH_MAX_BUF_SIZE) {
 			return -EINVAL;
-		else {
+		} else {
 			if (netif_running(netdev)) {
 				adapter->pool_config = 1;
 				ibmveth_close(netdev);
@@ -1550,8 +1580,9 @@ const char * buf, size_t count)
 				pool->buff_size = value;
 				if ((rc = ibmveth_open(netdev)))
 					return rc;
-			} else
+			} else {
 				pool->buff_size = value;
+			}
 		}
 	}
 
@@ -1561,16 +1592,16 @@ const char * buf, size_t count)
 }
 
 
-#define ATTR(_name, _mode)      \
-        struct attribute veth_##_name##_attr = {               \
-        .name = __stringify(_name), .mode = _mode, \
-        };
+#define ATTR(_name, _mode)				\
+	struct attribute veth_##_name##_attr = {	\
+	.name = __stringify(_name), .mode = _mode,	\
+	};
 
 static ATTR(active, 0644);
 static ATTR(num, 0644);
 static ATTR(size, 0644);
 
-static struct attribute * veth_pool_attrs[] = {
+static struct attribute *veth_pool_attrs[] = {
 	&veth_active_attr,
 	&veth_num_attr,
 	&veth_size_attr,
@@ -1595,7 +1626,7 @@ static int ibmveth_resume(struct device
 	return 0;
 }
 
-static struct vio_device_id ibmveth_device_table[] __devinitdata= {
+static struct vio_device_id ibmveth_device_table[] __devinitdata = {
 	{ "network", "IBM,l-lan"},
 	{ "", "" }
 };




* [patch 17/20] ibmveth: Return -EINVAL on all ->probe errors.
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (15 preceding siblings ...)
  2010-08-23  0:09 ` [patch 16/20] ibmveth: Coding style fixes Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 18/20] ibmveth: Convert driver specific assert to BUG_ON Anton Blanchard
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_return_error --]
[-- Type: text/plain, Size: 919 bytes --]

We had a few error paths in ->probe that returned 0 (success) instead of an error code.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 09:23:11.982966815 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 09:23:32.781489458 +1000
@@ -1358,7 +1358,7 @@ static int __devinit ibmveth_probe(struc
 						VETH_MAC_ADDR, NULL);
 	if (!mac_addr_p) {
 		dev_err(&dev->dev, "Can't find VETH_MAC_ADDR attribute\n");
-		return 0;
+		return -EINVAL;
 	}
 
 	mcastFilterSize_p = (unsigned int *)vio_get_attribute(dev,
@@ -1366,7 +1366,7 @@ static int __devinit ibmveth_probe(struc
 	if (!mcastFilterSize_p) {
 		dev_err(&dev->dev, "Can't find VETH_MCAST_FILTER_SIZE "
 			"attribute\n");
-		return 0;
+		return -EINVAL;
 	}
 
 	netdev = alloc_etherdev(sizeof(struct ibmveth_adapter));




* [patch 18/20] ibmveth: Convert driver specific assert to BUG_ON
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (16 preceding siblings ...)
  2010-08-23  0:09 ` [patch 17/20] ibmveth: Return -EINVAL on all ->probe errors Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 19/20] ibmveth: Remove some unnecessary include files Anton Blanchard
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_BUG_ON --]
[-- Type: text/plain, Size: 3293 bytes --]

We had a driver-specific assert macro which wasn't enabled most of the
time. Convert the assertions to BUG_ON so they are enabled all the time.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 09:23:32.781489458 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 09:23:35.101324676 +1000
@@ -52,18 +52,6 @@
 
 #include "ibmveth.h"
 
-#undef DEBUG
-
-#ifdef DEBUG
-#define ibmveth_assert(expr) \
-  if (!(expr)) {                                   \
-    printk(KERN_DEBUG "assertion failed (%s:%3.3d ua:%x): %s\n", __FILE__, __LINE__, adapter->vdev->unit_address, #expr); \
-    BUG(); \
-  }
-#else
-#define ibmveth_assert(expr)
-#endif
-
 static irqreturn_t ibmveth_interrupt(int irq, void *dev_instance);
 static void ibmveth_rxq_harvest_buffer(struct ibmveth_adapter *adapter);
 static unsigned long ibmveth_get_desired_dma(struct vio_dev *vdev);
@@ -248,8 +236,8 @@ static void ibmveth_replenish_buffer_poo
 			pool->consumer_index = 0;
 		index = pool->free_map[free_index];
 
-		ibmveth_assert(index != IBM_VETH_INVALID_MAP);
-		ibmveth_assert(pool->skbuff[index] == NULL);
+		BUG_ON(index == IBM_VETH_INVALID_MAP);
+		BUG_ON(pool->skbuff[index] != NULL);
 
 		dma_addr = dma_map_single(&adapter->vdev->dev, skb->data,
 				pool->buff_size, DMA_FROM_DEVICE);
@@ -368,12 +356,12 @@ static void ibmveth_remove_buffer_from_p
 	unsigned int free_index;
 	struct sk_buff *skb;
 
-	ibmveth_assert(pool < IBMVETH_NUM_BUFF_POOLS);
-	ibmveth_assert(index < adapter->rx_buff_pool[pool].size);
+	BUG_ON(pool >= IBMVETH_NUM_BUFF_POOLS);
+	BUG_ON(index >= adapter->rx_buff_pool[pool].size);
 
 	skb = adapter->rx_buff_pool[pool].skbuff[index];
 
-	ibmveth_assert(skb != NULL);
+	BUG_ON(skb == NULL);
 
 	adapter->rx_buff_pool[pool].skbuff[index] = NULL;
 
@@ -401,8 +389,8 @@ static inline struct sk_buff *ibmveth_rx
 	unsigned int pool = correlator >> 32;
 	unsigned int index = correlator & 0xffffffffUL;
 
-	ibmveth_assert(pool < IBMVETH_NUM_BUFF_POOLS);
-	ibmveth_assert(index < adapter->rx_buff_pool[pool].size);
+	BUG_ON(pool >= IBMVETH_NUM_BUFF_POOLS);
+	BUG_ON(index >= adapter->rx_buff_pool[pool].size);
 
 	return adapter->rx_buff_pool[pool].skbuff[index];
 }
@@ -417,8 +405,8 @@ static void ibmveth_rxq_recycle_buffer(s
 	union ibmveth_buf_desc desc;
 	unsigned long lpar_rc;
 
-	ibmveth_assert(pool < IBMVETH_NUM_BUFF_POOLS);
-	ibmveth_assert(index < adapter->rx_buff_pool[pool].size);
+	BUG_ON(pool >= IBMVETH_NUM_BUFF_POOLS);
+	BUG_ON(index >= adapter->rx_buff_pool[pool].size);
 
 	if (!adapter->rx_buff_pool[pool].active) {
 		ibmveth_rxq_harvest_buffer(adapter);
@@ -1145,7 +1133,7 @@ restart_poll:
 		lpar_rc = h_vio_signal(adapter->vdev->unit_address,
 				       VIO_IRQ_ENABLE);
 
-		ibmveth_assert(lpar_rc == H_SUCCESS);
+		BUG_ON(lpar_rc != H_SUCCESS);
 
 		napi_complete(napi);
 
@@ -1169,7 +1157,7 @@ static irqreturn_t ibmveth_interrupt(int
 	if (napi_schedule_prep(&adapter->napi)) {
 		lpar_rc = h_vio_signal(adapter->vdev->unit_address,
 				       VIO_IRQ_DISABLE);
-		ibmveth_assert(lpar_rc == H_SUCCESS);
+		BUG_ON(lpar_rc != H_SUCCESS);
 		__napi_schedule(&adapter->napi);
 	}
 	return IRQ_HANDLED;




* [patch 19/20] ibmveth: Remove some unnecessary include files
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (17 preceding siblings ...)
  2010-08-23  0:09 ` [patch 18/20] ibmveth: Convert driver specific assert to BUG_ON Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  0:09 ` [patch 20/20] ibmveth: Update module information and version Anton Blanchard
  2010-08-23  1:57 ` [patch 00/20] ibmveth update David Miller
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_includes --]
[-- Type: text/plain, Size: 964 bytes --]

These files probably came across from the skeleton driver. Remove them.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 09:23:35.101324676 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 09:23:36.391233048 +1000
@@ -29,14 +29,12 @@
 #include <linux/moduleparam.h>
 #include <linux/types.h>
 #include <linux/errno.h>
-#include <linux/ioport.h>
 #include <linux/dma-mapping.h>
 #include <linux/kernel.h>
 #include <linux/netdevice.h>
 #include <linux/etherdevice.h>
 #include <linux/skbuff.h>
 #include <linux/init.h>
-#include <linux/delay.h>
 #include <linux/mm.h>
 #include <linux/pm.h>
 #include <linux/ethtool.h>
@@ -47,7 +45,6 @@
 #include <asm/atomic.h>
 #include <asm/vio.h>
 #include <asm/iommu.h>
-#include <asm/uaccess.h>
 #include <asm/firmware.h>
 
 #include "ibmveth.h"




* [patch 20/20] ibmveth: Update module information and version
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (18 preceding siblings ...)
  2010-08-23  0:09 ` [patch 19/20] ibmveth: Remove some unnecessary include files Anton Blanchard
@ 2010-08-23  0:09 ` Anton Blanchard
  2010-08-23  1:57 ` [patch 00/20] ibmveth update David Miller
  20 siblings, 0 replies; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  0:09 UTC (permalink / raw)
  To: brking, santil; +Cc: netdev

[-- Attachment #1: veth_fix_author --]
[-- Type: text/plain, Size: 7372 bytes --]

Add an entry to the MAINTAINERS file for ibmveth, clean up the copyright
notice and list all authors. Change the driver description to reflect the
product name used in recent years.

Considering all the changes we have made, bump the driver version.

Signed-off-by: Anton Blanchard <anton@samba.org>
---

Index: net-next-2.6/drivers/net/ibmveth.c
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.c	2010-08-23 09:23:36.391233048 +1000
+++ net-next-2.6/drivers/net/ibmveth.c	2010-08-23 09:23:37.561149945 +1000
@@ -1,28 +1,27 @@
 /*
- * IBM eServer i/pSeries Virtual Ethernet Device Driver
- * Copyright (C) 2003 IBM Corp.
- *  Originally written by Dave Larson (larson1@us.ibm.com)
- *  Maintained by Santiago Leon (santil@us.ibm.com)
+ * IBM Power Virtual Ethernet Device Driver
  *
- *  This program is free software; you can redistribute it and/or modify
- *  it under the terms of the GNU General Public License as published by
- *  the Free Software Foundation; either version 2 of the License, or
- *  (at your option) any later version.
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
  *
- *  This program is distributed in the hope that it will be useful,
- *  but WITHOUT ANY WARRANTY; without even the implied warranty of
- *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- *  GNU General Public License for more details.
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
  *
- *  You should have received a copy of the GNU General Public License
- *  along with this program; if not, write to the Free Software
- *  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307
- *                                                                   USA
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
  *
- * This module contains the implementation of a virtual ethernet device
- * for use with IBM i/pSeries LPAR Linux.  It utilizes the logical LAN
- * option of the RS/6000 Platform Architechture to interface with virtual
- * ethernet NICs that are presented to the partition by the hypervisor.
+ * Copyright (C) IBM Corporation, 2003, 2010
+ *
+ * Authors: Dave Larson <larson1@us.ibm.com>
+ *	    Santiago Leon <santil@linux.vnet.ibm.com>
+ *	    Brian King <brking@linux.vnet.ibm.com>
+ *	    Robert Jennings <rcj@linux.vnet.ibm.com>
+ *	    Anton Blanchard <anton@au.ibm.com>
  */
 
 #include <linux/module.h>
@@ -57,12 +56,11 @@ static struct kobj_type ktype_veth_pool;
 
 
 static const char ibmveth_driver_name[] = "ibmveth";
-static const char ibmveth_driver_string[] = "IBM i/pSeries Virtual Ethernet "
-					    "Driver";
-#define ibmveth_driver_version "1.03"
+static const char ibmveth_driver_string[] = "IBM Power Virtual Ethernet Driver";
+#define ibmveth_driver_version "1.04"
 
 MODULE_AUTHOR("Santiago Leon <santil@us.ibm.com>");
-MODULE_DESCRIPTION("IBM i/pSeries Virtual Ethernet Driver");
+MODULE_DESCRIPTION("IBM Power Virtual Ethernet Driver");
 MODULE_LICENSE("GPL");
 MODULE_VERSION(ibmveth_driver_version);
 
Index: net-next-2.6/MAINTAINERS
===================================================================
--- net-next-2.6.orig/MAINTAINERS	2010-08-23 09:22:56.514065611 +1000
+++ net-next-2.6/MAINTAINERS	2010-08-23 09:23:37.571149242 +1000
@@ -2849,6 +2849,12 @@ M:	Brian King <brking@us.ibm.com>
 S:	Supported
 F:	drivers/scsi/ipr.*
 
+IBM Power Virtual Ethernet Device Driver
+M:	Santiago Leon <santil@linux.vnet.ibm.com>
+L:	netdev@vger.kernel.org
+S:	Supported
+F:	drivers/net/ibmveth.*
+
 IBM ServeRAID RAID DRIVER
 P:	Jack Hammer
 M:	Dave Jeffery <ipslinux@adaptec.com>
Index: net-next-2.6/drivers/net/ibmveth.h
===================================================================
--- net-next-2.6.orig/drivers/net/ibmveth.h	2010-08-23 09:22:56.534064193 +1000
+++ net-next-2.6/drivers/net/ibmveth.h	2010-08-23 09:23:37.581148530 +1000
@@ -1,26 +1,28 @@
-/**************************************************************************/
-/*                                                                        */
-/* IBM eServer i/[Series Virtual Ethernet Device Driver                   */
-/* Copyright (C) 2003 IBM Corp.                                           */
-/*  Dave Larson (larson1@us.ibm.com)                                      */
-/*  Santiago Leon (santil@us.ibm.com)                                     */
-/*                                                                        */
-/*  This program is free software; you can redistribute it and/or modify  */
-/*  it under the terms of the GNU General Public License as published by  */
-/*  the Free Software Foundation; either version 2 of the License, or     */
-/*  (at your option) any later version.                                   */
-/*                                                                        */
-/*  This program is distributed in the hope that it will be useful,       */
-/*  but WITHOUT ANY WARRANTY; without even the implied warranty of        */
-/*  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the         */
-/*  GNU General Public License for more details.                          */
-/*                                                                        */
-/*  You should have received a copy of the GNU General Public License     */
-/*  along with this program; if not, write to the Free Software           */
-/*  Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  */
-/*                                                                   USA  */
-/*                                                                        */
-/**************************************************************************/
+/*
+ * IBM Power Virtual Ethernet Device Driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) IBM Corporation, 2003, 2010
+ *
+ * Authors: Dave Larson <larson1@us.ibm.com>
+ *	    Santiago Leon <santil@linux.vnet.ibm.com>
+ *	    Brian King <brking@linux.vnet.ibm.com>
+ *	    Robert Jennings <rcj@linux.vnet.ibm.com>
+ *	    Anton Blanchard <anton@au.ibm.com>
+ */
 
 #ifndef _IBMVETH_H
 #define _IBMVETH_H




* Re: [patch 00/20] ibmveth update
  2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
                   ` (19 preceding siblings ...)
  2010-08-23  0:09 ` [patch 20/20] ibmveth: Update module information and version Anton Blanchard
@ 2010-08-23  1:57 ` David Miller
  2010-08-23  2:07   ` Anton Blanchard
  20 siblings, 1 reply; 26+ messages in thread
From: David Miller @ 2010-08-23  1:57 UTC (permalink / raw)
  To: anton; +Cc: brking, santil, netdev

From: Anton Blanchard <anton@samba.org>
Date: Mon, 23 Aug 2010 10:09:30 +1000

> Here are a number of patches for the IBM Power virtual ethernet driver.
> Patches 1-9 contain the significant changes, and patches 10-20 are style
> and formatting changes to bring it more into line with Linux coding
> standards.

What about the patches posted by Robert Jennings the other week that
enable ipv6 checksum support et al.?

http://patchwork.ozlabs.org/patch/62054/
http://patchwork.ozlabs.org/patch/62055/
http://patchwork.ozlabs.org/patch/62056/

Might I suggest that we have a central person who gathers and
submits all of the ibmveth driver changes to me?  That way I
don't have to ask questions like this.

There is also no entry for this driver in MAINTAINERS, which I
also would like to see fixed quickly. :-)

Thanks.


* Re: [patch 00/20] ibmveth update
  2010-08-23  1:57 ` [patch 00/20] ibmveth update David Miller
@ 2010-08-23  2:07   ` Anton Blanchard
  2010-08-23  2:15     ` David Miller
  0 siblings, 1 reply; 26+ messages in thread
From: Anton Blanchard @ 2010-08-23  2:07 UTC (permalink / raw)
  To: David Miller; +Cc: brking, santil, netdev

 
Hi Dave,

> What about the patches posted by Robert Jennings the other week that
> enable ipv6 checksum support et al.?
> 
> http://patchwork.ozlabs.org/patch/62054/
> http://patchwork.ozlabs.org/patch/62055/
> http://patchwork.ozlabs.org/patch/62056/
> 
> Might I suggest that we have a central person who gathers and
> submits all of the ibmveth driver changes to me?  That way I
> don't have to ask questions like this.

I'll ask the ibmveth maintainer (Santi) to roll up Robert's and my patches
for submission.

> There is also no entry for this driver in MAINTAINERS, which I
> also would like to see fixed quickly. :-)

Agreed, my last patch fixes that very issue :)

Anton


* Re: [patch 00/20] ibmveth update
  2010-08-23  2:07   ` Anton Blanchard
@ 2010-08-23  2:15     ` David Miller
  0 siblings, 0 replies; 26+ messages in thread
From: David Miller @ 2010-08-23  2:15 UTC (permalink / raw)
  To: anton; +Cc: brking, santil, netdev

From: Anton Blanchard <anton@samba.org>
Date: Mon, 23 Aug 2010 12:07:13 +1000

>> Might I suggest that we have a central person who gathers and
>> submits all of the ibmveth driver changes to me?  That way I
>> don't have to ask questions like this.
> 
> I'll ask the ibmveth maintainter (Santi) to roll up Robert and my patches for
> submission.

Perfect.

>> There is also no entry for this driver in MAINTAINERS, which I
>> also would like to see fixed quickly. :-)
> 
> Agreed, my last patch fixes that very issue :)

Awesome :)

Thanks!


* Re: [patch 08/20] ibmveth: Dont overallocate buffers
  2010-08-23  0:09 ` [patch 08/20] ibmveth: Dont overallocate buffers Anton Blanchard
@ 2010-08-23 20:49   ` Robert Jennings
  2010-08-25 13:44     ` Brian King
  0 siblings, 1 reply; 26+ messages in thread
From: Robert Jennings @ 2010-08-23 20:49 UTC (permalink / raw)
  To: Anton Blanchard; +Cc: brking, santil, netdev

* Anton Blanchard (anton@samba.org) wrote:
> We were allocating a page, even though we always want 4k. Use kmalloc
> instead of get_zeroed_page.

I get a failure during device open when we have a 4k allocation rather
than a 64k page allocation.  This is running on a Power6 LPAR without
AMS.

(drivers/net/ibmveth.c:621 ua:3000000c) ERROR: h_register_logical_lan failed with -4
(drivers/net/ibmveth.c:626 ua:3000000c) ERROR: buffer TCE:0x300 filter TCE:0x2300 rxq desc:0x8000601000004000 MAC:0x763cbdd9d30c



* Re: [patch 08/20] ibmveth: Dont overallocate buffers
  2010-08-23 20:49   ` Robert Jennings
@ 2010-08-25 13:44     ` Brian King
  0 siblings, 0 replies; 26+ messages in thread
From: Brian King @ 2010-08-25 13:44 UTC (permalink / raw)
  To: Anton Blanchard, santil, netdev

On 08/23/2010 03:49 PM, Robert Jennings wrote:
> * Anton Blanchard (anton@samba.org) wrote:
>> We were allocating a page, even though we always want 4k. Use kmalloc
>> instead of get_zeroed_page.
> 
> I get a failure during device open when we have a 4k allocation rather
> than a 64k page allocation.  This is running on a Power6 LPAR without
> AMS.
> 
> (drivers/net/ibmveth.c:621 ua:3000000c) ERROR: h_register_logical_lan failed with -4
> (drivers/net/ibmveth.c:626 ua:3000000c) ERROR: buffer TCE:0x300 filter TCE:0x2300 rxq desc:0x8000601000004000 MAC:0x763cbdd9d30c

According to the architecture, these buffers need to be 4k aligned, which
is probably the cause of this failure.

-Brian

-- 
Brian King
Linux on Power Virtualization
IBM Linux Technology Center




end of thread, other threads:[~2010-08-25 13:44 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-08-23  0:09 [patch 00/20] ibmveth update Anton Blanchard
2010-08-23  0:09 ` [patch 01/20] ibmveth: Remove integer divide caused by modulus Anton Blanchard
2010-08-23  0:09 ` [patch 02/20] ibmveth: batch rx buffer replacement Anton Blanchard
2010-08-23  0:09 ` [patch 03/20] ibmveth: Remove LLTX Anton Blanchard
2010-08-23  0:09 ` [patch 04/20] ibmveth: Add tx_copybreak Anton Blanchard
2010-08-23  0:09 ` [patch 05/20] ibmveth: Add rx_copybreak Anton Blanchard
2010-08-23  0:09 ` [patch 06/20] ibmveth: Use lighter weight read memory barrier in ibmveth_poll Anton Blanchard
2010-08-23  0:09 ` [patch 07/20] ibmveth: Add scatter-gather support Anton Blanchard
2010-08-23  0:09 ` [patch 08/20] ibmveth: Dont overallocate buffers Anton Blanchard
2010-08-23 20:49   ` Robert Jennings
2010-08-25 13:44     ` Brian King
2010-08-23  0:09 ` [patch 09/20] ibmveth: Add optional flush of rx buffer Anton Blanchard
2010-08-23  0:09 ` [patch 10/20] ibmveth: remove procfs code Anton Blanchard
2010-08-23  0:09 ` [patch 11/20] ibmveth: Convert to netdev_alloc_skb Anton Blanchard
2010-08-23  0:09 ` [patch 12/20] ibmveth: Remove redundant function prototypes Anton Blanchard
2010-08-23  0:09 ` [patch 13/20] ibmveth: Convert driver specific debug to netdev_dbg Anton Blanchard
2010-08-23  0:09 ` [patch 14/20] ibmveth: Convert driver specific error functions to netdev_err Anton Blanchard
2010-08-23  0:09 ` [patch 15/20] ibmveth: Some formatting fixes Anton Blanchard
2010-08-23  0:09 ` [patch 16/20] ibmveth: Coding style fixes Anton Blanchard
2010-08-23  0:09 ` [patch 17/20] ibmveth: Return -EINVAL on all ->probe errors Anton Blanchard
2010-08-23  0:09 ` [patch 18/20] ibmveth: Convert driver specific assert to BUG_ON Anton Blanchard
2010-08-23  0:09 ` [patch 19/20] ibmveth: Remove some unnecessary include files Anton Blanchard
2010-08-23  0:09 ` [patch 20/20] ibmveth: Update module information and version Anton Blanchard
2010-08-23  1:57 ` [patch 00/20] ibmveth update David Miller
2010-08-23  2:07   ` Anton Blanchard
2010-08-23  2:15     ` David Miller
