* [PATCH] irlan: handle out of memory errors
@ 2006-12-19  8:56 Akinobu Mita
  2006-12-19 22:47 ` Samuel Ortiz
  0 siblings, 1 reply; 15+ messages in thread
From: Akinobu Mita @ 2006-12-19  8:56 UTC (permalink / raw)
  To: netdev; +Cc: Samuel Ortiz

This patch checks the return values of:

- irlmp_register_client()
- irlmp_register_service()
- irlan_open()

Cc: Samuel Ortiz <samuel@sortiz.org>
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>

---
 net/irda/irlan/irlan_common.c |   23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

Index: 2.6-mm/net/irda/irlan/irlan_common.c
===================================================================
--- 2.6-mm.orig/net/irda/irlan/irlan_common.c
+++ 2.6-mm/net/irda/irlan/irlan_common.c
@@ -144,12 +144,18 @@ static int __init irlan_init(void)
 	/* Register with IrLMP as a client */
 	ckey = irlmp_register_client(hints, &irlan_client_discovery_indication,
 				     NULL, NULL);
-	
+	if (!ckey)
+		goto err_ckey;
+
 	/* Register with IrLMP as a service */
- 	skey = irlmp_register_service(hints);
+	skey = irlmp_register_service(hints);
+	if (!skey)
+		goto err_skey;
 
 	/* Start the master IrLAN instance (the only one for now) */
- 	new = irlan_open(DEV_ADDR_ANY, DEV_ADDR_ANY);
+	new = irlan_open(DEV_ADDR_ANY, DEV_ADDR_ANY);
+	if (!new)
+		goto err_open;
 
 	/* The master will only open its (listen) control TSAP */
 	irlan_provider_open_ctrl_tsap(new);
@@ -158,6 +164,17 @@ static int __init irlan_init(void)
 	irlmp_discovery_request(DISCOVERY_DEFAULT_SLOTS);
 
 	return 0;
+
+err_open:
+	irlmp_unregister_service(skey);
+err_skey:
+	irlmp_unregister_client(ckey);
+err_ckey:
+#ifdef CONFIG_PROC_FS
+	remove_proc_entry("irlan", proc_irda);
+#endif /* CONFIG_PROC_FS */
+
+	return -ENOMEM;
 }
 
 static void __exit irlan_cleanup(void) 


* Re: [PATCH] irlan: handle out of memory errors
  2006-12-19  8:56 [PATCH] irlan: handle out of memory errors Akinobu Mita
@ 2006-12-19 22:47 ` Samuel Ortiz
  2007-02-07  8:12   ` David Miller
  0 siblings, 1 reply; 15+ messages in thread
From: Samuel Ortiz @ 2006-12-19 22:47 UTC (permalink / raw)
  To: Akinobu Mita; +Cc: netdev, David S. Miller

On Tue, Dec 19, 2006 at 05:56:01PM +0900, Akinobu Mita wrote:
> This patch checks return values:
> 
> - irlmp_register_client()
> - irlmp_register_service()
> - irlan_open()
> 
> Cc: Samuel Ortiz <samuel@sortiz.org>
> Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
This one seems correct as well, thanks again.

Signed-off-by: Samuel Ortiz <samuel@sortiz.org>

Cheers,
Samuel.


> 
> ---
>  net/irda/irlan/irlan_common.c |   23 ++++++++++++++++++++---
>  1 file changed, 20 insertions(+), 3 deletions(-)
> 
> Index: 2.6-mm/net/irda/irlan/irlan_common.c
> ===================================================================
> --- 2.6-mm.orig/net/irda/irlan/irlan_common.c
> +++ 2.6-mm/net/irda/irlan/irlan_common.c
> @@ -144,12 +144,18 @@ static int __init irlan_init(void)
>  	/* Register with IrLMP as a client */
>  	ckey = irlmp_register_client(hints, &irlan_client_discovery_indication,
>  				     NULL, NULL);
> -	
> +	if (!ckey)
> +		goto err_ckey;
> +
>  	/* Register with IrLMP as a service */
> - 	skey = irlmp_register_service(hints);
> +	skey = irlmp_register_service(hints);
> +	if (!skey)
> +		goto err_skey;
>  
>  	/* Start the master IrLAN instance (the only one for now) */
> - 	new = irlan_open(DEV_ADDR_ANY, DEV_ADDR_ANY);
> +	new = irlan_open(DEV_ADDR_ANY, DEV_ADDR_ANY);
> +	if (!new)
> +		goto err_open;
>  
>  	/* The master will only open its (listen) control TSAP */
>  	irlan_provider_open_ctrl_tsap(new);
> @@ -158,6 +164,17 @@ static int __init irlan_init(void)
>  	irlmp_discovery_request(DISCOVERY_DEFAULT_SLOTS);
>  
>  	return 0;
> +
> +err_open:
> +	irlmp_unregister_service(skey);
> +err_skey:
> +	irlmp_unregister_client(ckey);
> +err_ckey:
> +#ifdef CONFIG_PROC_FS
> +	remove_proc_entry("irlan", proc_irda);
> +#endif /* CONFIG_PROC_FS */
> +
> +	return -ENOMEM;
>  }
>  
>  static void __exit irlan_cleanup(void) 


* Re: [PATCH] irlan: handle out of memory errors
  2006-12-19 22:47 ` Samuel Ortiz
@ 2007-02-07  8:12   ` David Miller
  2007-02-07 10:59     ` [PATCH] NET : change layout of ehash table Eric Dumazet
  0 siblings, 1 reply; 15+ messages in thread
From: David Miller @ 2007-02-07  8:12 UTC (permalink / raw)
  To: samuel; +Cc: akinobu.mita, netdev

From: Samuel Ortiz <samuel@sortiz.org>
Date: Wed, 20 Dec 2006 00:47:26 +0200

> On Tue, Dec 19, 2006 at 05:56:01PM +0900, Akinobu Mita wrote:
> > This patch checks return values:
> > 
> > - irlmp_register_client()
> > - irlmp_register_service()
> > - irlan_open()
> > 
> > Cc: Samuel Ortiz <samuel@sortiz.org>
> > Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
> This one seems correct as well, thanks again.
> 
> Signed-off-by: Samuel Ortiz <samuel@sortiz.org>

Also applied, thanks a lot.


* [PATCH] NET : change layout of ehash table
  2007-02-07  8:12   ` David Miller
@ 2007-02-07 10:59     ` Eric Dumazet
  2007-02-08 22:56       ` David Miller
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2007-02-07 10:59 UTC (permalink / raw)
  To: David Miller; +Cc: linux, akepner, netdev

[-- Attachment #1: Type: text/plain, Size: 1467 bytes --]


The ehash table layout is currently as follows:

The first half of the table is used by sockets not in TIME_WAIT state.
The second half is used by sockets in TIME_WAIT state.

This is suboptimal because, for a given hash or socket, the two chain heads 
are located in separate cache lines.
Moreover, the locks of the second half are never used.

If, instead of this halving, we use two list heads in inet_ehash_bucket 
instead of only one, we can probably avoid one cache miss and reduce RAM 
usage, particularly if sizeof(rwlock_t) is big (various CONFIG_DEBUG_SPINLOCK, 
CONFIG_DEBUG_LOCK_ALLOC settings). So we still halve the table, but we keep 
related chains together to speed up lookups and socket state changes.

In this patch I did not try to align struct inet_ehash_bucket, but a future 
patch could try to give this structure a convenient size (a power of two 
or a multiple of L1_CACHE_SIZE).
I guess the rwlock will just vanish as soon as RCU is plugged into ehash :) , 
so maybe we don't need to scratch our heads to align the bucket...

Note: in case sizeof(struct inet_ehash_bucket) is not a power of two, we could 
probably change alloc_large_system_hash() (in case it uses __get_free_pages()) 
to free the unused space. It currently allocates a big zone, but the last 
quarter of it could be freed. Again, this should be a temporary 'problem'.

Patch tested on IPv4 TCP only, but should be OK for IPv6 and DCCP.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>

[-- Attachment #2: tcp_ehash_reorg.patch --]
[-- Type: text/plain, Size: 6222 bytes --]

--- linux-2.6.20/include/net/inet_hashtables.h	2007-02-07 09:19:22.000000000 +0100
+++ linux-2.6.20-ed/include/net/inet_hashtables.h	2007-02-07 12:20:47.000000000 +0100
@@ -34,12 +34,13 @@
 #include <asm/byteorder.h>
 
 /* This is for all connections with a full identity, no wildcards.
- * New scheme, half the table is for TIME_WAIT, the other half is
- * for the rest.  I'll experiment with dynamic table growth later.
+ * One chain is dedicated to TIME_WAIT sockets.
+ * I'll experiment with dynamic table growth later.
  */
 struct inet_ehash_bucket {
 	rwlock_t	  lock;
 	struct hlist_head chain;
+	struct hlist_head twchain;
 };
 
 /* There are a few simple rules, which allow for local port reuse by
@@ -97,8 +98,7 @@ struct inet_hashinfo {
 	 *
 	 *          TCP_ESTABLISHED <= sk->sk_state < TCP_CLOSE
 	 *
-	 * First half of the table is for sockets not in TIME_WAIT, second half
-	 * is for TIME_WAIT sockets only.
+	 * TIME_WAIT sockets use a separate chain (twchain).
 	 */
 	struct inet_ehash_bucket	*ehash;
 
@@ -369,7 +369,7 @@ static inline struct sock *
 	}
 
 	/* Must check for a TIME_WAIT'er before going to listener hash. */
-	sk_for_each(sk, node, &(head + hashinfo->ehash_size)->chain) {
+	sk_for_each(sk, node, &head->twchain) {
 		if (INET_TW_MATCH(sk, hash, acookie, saddr, daddr, ports, dif))
 			goto hit;
 	}
--- linux-2.6.20/net/ipv4/tcp_ipv4.c	2007-02-07 09:09:40.000000000 +0100
+++ linux-2.6.20-ed/net/ipv4/tcp_ipv4.c	2007-02-07 09:42:26.000000000 +0100
@@ -2051,7 +2051,7 @@ static void *established_get_first(struc
 		}
 		st->state = TCP_SEQ_STATE_TIME_WAIT;
 		inet_twsk_for_each(tw, node,
-				   &tcp_hashinfo.ehash[st->bucket + tcp_hashinfo.ehash_size].chain) {
+				   &tcp_hashinfo.ehash[st->bucket].twchain) {
 			if (tw->tw_family != st->family) {
 				continue;
 			}
@@ -2107,7 +2107,7 @@ get_tw:
 	}
 
 	st->state = TCP_SEQ_STATE_TIME_WAIT;
-	tw = tw_head(&tcp_hashinfo.ehash[st->bucket + tcp_hashinfo.ehash_size].chain);
+	tw = tw_head(&tcp_hashinfo.ehash[st->bucket].twchain);
 	goto get_tw;
 found:
 	cur = sk;
--- linux-2.6.20/net/ipv4/tcp.c	2007-02-07 09:09:40.000000000 +0100
+++ linux-2.6.20-ed/net/ipv4/tcp.c	2007-02-07 09:33:33.000000000 +0100
@@ -2415,10 +2415,11 @@ void __init tcp_init(void)
 					&tcp_hashinfo.ehash_size,
 					NULL,
 					0);
-	tcp_hashinfo.ehash_size = (1 << tcp_hashinfo.ehash_size) >> 1;
-	for (i = 0; i < (tcp_hashinfo.ehash_size << 1); i++) {
+	tcp_hashinfo.ehash_size = 1 << tcp_hashinfo.ehash_size;
+	for (i = 0; i < tcp_hashinfo.ehash_size; i++) {
 		rwlock_init(&tcp_hashinfo.ehash[i].lock);
 		INIT_HLIST_HEAD(&tcp_hashinfo.ehash[i].chain);
+		INIT_HLIST_HEAD(&tcp_hashinfo.ehash[i].twchain);
 	}
 
 	tcp_hashinfo.bhash =
@@ -2475,7 +2476,7 @@ void __init tcp_init(void)
 
 	printk(KERN_INFO "TCP: Hash tables configured "
 	       "(established %d bind %d)\n",
-	       tcp_hashinfo.ehash_size << 1, tcp_hashinfo.bhash_size);
+	       tcp_hashinfo.ehash_size, tcp_hashinfo.bhash_size);
 
 	tcp_register_congestion_control(&tcp_reno);
 }
--- linux-2.6.20/net/ipv4/inet_timewait_sock.c	2007-02-07 09:09:40.000000000 +0100
+++ linux-2.6.20-ed/net/ipv4/inet_timewait_sock.c	2007-02-07 09:33:33.000000000 +0100
@@ -78,8 +78,8 @@ void __inet_twsk_hashdance(struct inet_t
 	if (__sk_del_node_init(sk))
 		sock_prot_dec_use(sk->sk_prot);
 
-	/* Step 3: Hash TW into TIMEWAIT half of established hash table. */
-	inet_twsk_add_node(tw, &(ehead + hashinfo->ehash_size)->chain);
+	/* Step 3: Hash TW into TIMEWAIT chain. */
+	inet_twsk_add_node(tw, &ehead->twchain);
 	atomic_inc(&tw->tw_refcnt);
 
 	write_unlock(&ehead->lock);
--- linux-2.6.20/net/ipv4/inet_hashtables.c	2007-02-07 09:09:40.000000000 +0100
+++ linux-2.6.20-ed/net/ipv4/inet_hashtables.c	2007-02-07 09:33:33.000000000 +0100
@@ -212,7 +212,7 @@ static int __inet_check_established(stru
 	write_lock(&head->lock);
 
 	/* Check TIME-WAIT sockets first. */
-	sk_for_each(sk2, node, &(head + hinfo->ehash_size)->chain) {
+	sk_for_each(sk2, node, &head->twchain) {
 		tw = inet_twsk(sk2);
 
 		if (INET_TW_MATCH(sk2, hash, acookie, saddr, daddr, ports, dif)) {
--- linux-2.6.20/net/ipv4/inet_diag.c	2007-02-07 09:09:40.000000000 +0100
+++ linux-2.6.20-ed/net/ipv4/inet_diag.c	2007-02-07 09:33:33.000000000 +0100
@@ -775,7 +775,7 @@ next_normal:
 			struct inet_timewait_sock *tw;
 
 			inet_twsk_for_each(tw, node,
-				    &hashinfo->ehash[i + hashinfo->ehash_size].chain) {
+				    &head->twchain) {
 
 				if (num < s_num)
 					goto next_dying;
--- linux-2.6.20/net/dccp/proto.c	2007-02-07 10:50:36.000000000 +0100
+++ linux-2.6.20-ed/net/dccp/proto.c	2007-02-07 10:55:10.000000000 +0100
@@ -1024,7 +1024,6 @@ static int __init dccp_init(void)
 	do {
 		dccp_hashinfo.ehash_size = (1UL << ehash_order) * PAGE_SIZE /
 					sizeof(struct inet_ehash_bucket);
-		dccp_hashinfo.ehash_size >>= 1;
 		while (dccp_hashinfo.ehash_size &
 		       (dccp_hashinfo.ehash_size - 1))
 			dccp_hashinfo.ehash_size--;
@@ -1037,9 +1036,10 @@ static int __init dccp_init(void)
 		goto out_free_bind_bucket_cachep;
 	}
 
-	for (i = 0; i < (dccp_hashinfo.ehash_size << 1); i++) {
+	for (i = 0; i < dccp_hashinfo.ehash_size; i++) {
 		rwlock_init(&dccp_hashinfo.ehash[i].lock);
 		INIT_HLIST_HEAD(&dccp_hashinfo.ehash[i].chain);
+		INIT_HLIST_HEAD(&dccp_hashinfo.ehash[i].twchain);
 	}
 
 	bhash_order = ehash_order;
--- linux-2.6.20/net/ipv6/inet6_hashtables.c	2007-02-07 10:53:33.000000000 +0100
+++ linux-2.6.20-ed/net/ipv6/inet6_hashtables.c	2007-02-07 10:55:10.000000000 +0100
@@ -79,7 +79,7 @@ struct sock *__inet6_lookup_established(
 			goto hit; /* You sunk my battleship! */
 	}
 	/* Must check for a TIME_WAIT'er before going to listener hash. */
-	sk_for_each(sk, node, &(head + hashinfo->ehash_size)->chain) {
+	sk_for_each(sk, node, &head->twchain) {
 		const struct inet_timewait_sock *tw = inet_twsk(sk);
 
 		if(*((__portpair *)&(tw->tw_dport))	== ports	&&
@@ -183,7 +183,7 @@ static int __inet6_check_established(str
 	write_lock(&head->lock);
 
 	/* Check TIME-WAIT sockets first. */
-	sk_for_each(sk2, node, &(head + hinfo->ehash_size)->chain) {
+	sk_for_each(sk2, node, &head->twchain) {
 		const struct inet6_timewait_sock *tw6 = inet6_twsk(sk2);
 
 		tw = inet_twsk(sk2);


* Re: [PATCH] NET : change layout of ehash table
  2007-02-07 10:59     ` [PATCH] NET : change layout of ehash table Eric Dumazet
@ 2007-02-08 22:56       ` David Miller
  2007-02-09  9:18         ` Andi Kleen
  0 siblings, 1 reply; 15+ messages in thread
From: David Miller @ 2007-02-08 22:56 UTC (permalink / raw)
  To: dada1; +Cc: linux, akepner, netdev

From: Eric Dumazet <dada1@cosmosbay.com>
Date: Wed, 7 Feb 2007 11:59:34 +0100

> ehash table layout is currently this one :
> 
> First half of this table is used by sockets not in TIME_WAIT state
> Second half of it is used by sockets in TIME_WAIT state.
> 
> This is non optimal because of for a given hash or socket, the two chain heads 
> are located in separate cache lines.
> Moreover the locks of the second half are never used.
> 
> If instead of this halving, we use two list heads in inet_ehash_bucket instead 
> of only one, we probably can avoid one cache miss, and reduce ram usage, 
> particularly if sizeof(rwlock_t) is big (various CONFIG_DEBUG_SPINLOCK, 
> CONFIG_DEBUG_LOCK_ALLOC settings). So we still halves the table but we keep 
> together related chains to speedup lookups and socket state change.
> 
> In this patch I did not try to align struct inet_ehash_bucket, but a future 
> patch could try to make this structure have a convenient size (a power of two 
> or a multiple of L1_CACHE_SIZE).
> I guess rwlock will just vanish as soon as RCU is plugged into ehash :) , so 
> maybe we dont need to scratch our heads to align the bucket...
> 
> Note : In case struct inet_ehash_bucket is not a power of two, we could 
> probably change alloc_large_system_hash() (in case it use __get_free_pages()) 
> to free the unused space. It currently allocates a big zone, but the last 
> quarter of it could be freed. Again, this should be a temporary 'problem'.
> 
> Patch tested on ipv4 tcp only, but should be OK for IPV6 and DCCP.
> 
> Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>

I've applied this, but I _REALLY_ don't like the new multiply
instructions that are used now in the hash indexing paths when
CONFIG_SMP is set.

I think that's a higher cost than the memory waste.


* Re: [PATCH] NET : change layout of ehash table
  2007-02-09  9:18         ` Andi Kleen
@ 2007-02-09  8:40           ` David Miller
  2007-02-09  8:57             ` Andi Kleen
  2007-02-09  9:06             ` Eric Dumazet
  0 siblings, 2 replies; 15+ messages in thread
From: David Miller @ 2007-02-09  8:40 UTC (permalink / raw)
  To: ak; +Cc: dada1, linux, akepner, netdev

From: Andi Kleen <ak@suse.de>
Date: 09 Feb 2007 10:18:03 +0100

> David Miller <davem@davemloft.net> writes:
> > 
> > I've applied this, but I _REALLY_ don't like the new multiply
> > instructions that are used now in the hash indexing paths when
> > CONFIG_SMP is set.
> > 
> > I think that's a higher cost than the memory waste.
> 
> You're serious? multiply on a modern CPU is _much_ cheaper than a cache miss
> e.g. a K8 can do a arbitary 64bit multiplication in 3-7 cycles.
> Any cache miss will be in the three to four digits at least.

I'm not thinking of modern CPUs, I'm thinking of the little
guys :-)


* Re: [PATCH] NET : change layout of ehash table
  2007-02-09  8:40           ` David Miller
@ 2007-02-09  8:57             ` Andi Kleen
  2007-02-09  9:06             ` Eric Dumazet
  1 sibling, 0 replies; 15+ messages in thread
From: Andi Kleen @ 2007-02-09  8:57 UTC (permalink / raw)
  To: David Miller; +Cc: dada1, linux, akepner, netdev

On Friday 09 February 2007 09:40, David Miller wrote:
> From: Andi Kleen <ak@suse.de>
> Date: 09 Feb 2007 10:18:03 +0100
> 
> > David Miller <davem@davemloft.net> writes:
> > > 
> > > I've applied this, but I _REALLY_ don't like the new multiply
> > > instructions that are used now in the hash indexing paths when
> > > CONFIG_SMP is set.
> > > 
> > > I think that's a higher cost than the memory waste.
> > 
> > You're serious? multiply on a modern CPU is _much_ cheaper than a cache miss
> > e.g. a K8 can do a arbitary 64bit multiplication in 3-7 cycles.
> > Any cache miss will be in the three to four digits at least.
> 
> I'm not thinking of modern CPUs, I'm think of the little
> guys :-)

Even for them it's true, although the ratios are smaller (unless you go back to 
m68000 or VAX days) 

-Andi


* Re: [PATCH] NET : change layout of ehash table
  2007-02-09  8:40           ` David Miller
  2007-02-09  8:57             ` Andi Kleen
@ 2007-02-09  9:06             ` Eric Dumazet
  2007-02-09  9:15               ` David Miller
  1 sibling, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2007-02-09  9:06 UTC (permalink / raw)
  To: David Miller; +Cc: ak, linux, akepner, netdev

On Friday 09 February 2007 09:40, David Miller wrote:
> From: Andi Kleen <ak@suse.de>
> Date: 09 Feb 2007 10:18:03 +0100
>
> > David Miller <davem@davemloft.net> writes:
> > > I've applied this, but I _REALLY_ don't like the new multiply
> > > instructions that are used now in the hash indexing paths when
> > > CONFIG_SMP is set.
> > >
> > > I think that's a higher cost than the memory waste.
> >
> > You're serious? multiply on a modern CPU is _much_ cheaper than a cache
> > miss e.g. a K8 can do a arbitary 64bit multiplication in 3-7 cycles.
> > Any cache miss will be in the three to four digits at least.
>
> I'm not thinking of modern CPUs, I'm think of the little
> guys :-)

Yes, but a decent C compiler for such targets should not use a multiply 
instruction to perform a (idx * 12) operation... :)


* Re: [PATCH] NET : change layout of ehash table
  2007-02-09  9:06             ` Eric Dumazet
@ 2007-02-09  9:15               ` David Miller
  2007-02-09  9:36                 ` Eric Dumazet
  0 siblings, 1 reply; 15+ messages in thread
From: David Miller @ 2007-02-09  9:15 UTC (permalink / raw)
  To: dada1; +Cc: ak, linux, akepner, netdev

From: Eric Dumazet <dada1@cosmosbay.com>
Date: Fri, 9 Feb 2007 10:06:24 +0100

> Yes, but a decent C compiler for such targets should not use a
> multiply instruction to perform a (idx * 12) operation... :)

Good point.

Actually, I could never get GCC to avoid a divide on sparc64 for
certain kinds of pointer arithmetic when the elements were not
a power of two.  It probably has something to do with signedness.

I think I narrowed it down to the fact that you can't legally replace
a signed divide with shift/add/subtract.  But I could be remembering
things wrong.


* Re: [PATCH] NET : change layout of ehash table
  2007-02-08 22:56       ` David Miller
@ 2007-02-09  9:18         ` Andi Kleen
  2007-02-09  8:40           ` David Miller
  0 siblings, 1 reply; 15+ messages in thread
From: Andi Kleen @ 2007-02-09  9:18 UTC (permalink / raw)
  To: David Miller; +Cc: dada1, linux, akepner, netdev

David Miller <davem@davemloft.net> writes:
> 
> I've applied this, but I _REALLY_ don't like the new multiply
> instructions that are used now in the hash indexing paths when
> CONFIG_SMP is set.
> 
> I think that's a higher cost than the memory waste.

You're serious? A multiply on a modern CPU is _much_ cheaper than a cache miss:
e.g. a K8 can do an arbitrary 64-bit multiplication in 3-7 cycles.
Any cache miss will be in the three to four digits at least.

-Andi


* Re: [PATCH] NET : change layout of ehash table
  2007-02-09  9:15               ` David Miller
@ 2007-02-09  9:36                 ` Eric Dumazet
  2007-02-09  9:43                   ` David Miller
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2007-02-09  9:36 UTC (permalink / raw)
  To: David Miller; +Cc: ak, linux, akepner, netdev

On Friday 09 February 2007 10:15, David Miller wrote:
> From: Eric Dumazet <dada1@cosmosbay.com>
> Date: Fri, 9 Feb 2007 10:06:24 +0100
>
> > Yes, but a decent C compiler for such targets should not use a
> > multiply instruction to perform a (idx * 12) operation... :)
>
> Good point.
>
> Actually, I could never get GCC to avoid a divide on sparc64 for
> certain kinds of pointer arithmetic when the elements were not
> a power of two.  It probably has something to do with signedness.
>
> I think I narrowed is down to the fact that you can't legally replace
> a signed divide with shift/add/subtract.  But I could be remembering
> things wrong.

That's strange, because pointer arithmetic is unsigned...
I don't know when gcc started to use reciprocal division; maybe your gcc was 
very old?

$ cat div.c
struct s1 {        int pad[3];        };

unsigned long diffptr(struct s1 *a, struct s1 *b)
{
return a - b;
}

If compiled on i386, gcc-4.1.1:

$ gcc -O2 -fomit-frame-pointer -S div.c

diffptr:
        movl    4(%esp), %eax
        subl    8(%esp), %eax
        sarl    $2, %eax
        imull   $-1431655765, %eax, %eax
        ret

If compiled on x86_64, gcc-4.1.1:

diffptr:
        subq    %rsi, %rdi
        movabsq $-6148914691236517205, %rax
        sarq    $2, %rdi
        imulq   %rax, %rdi
        movq    %rdi, %rax
        ret


* Re: [PATCH] NET : change layout of ehash table
  2007-02-09  9:36                 ` Eric Dumazet
@ 2007-02-09  9:43                   ` David Miller
  2007-02-09 10:10                     ` Andi Kleen
  2007-02-09 17:44                     ` [PATCH] NET : UDP can use sk_hash to speedup lookups Eric Dumazet
  0 siblings, 2 replies; 15+ messages in thread
From: David Miller @ 2007-02-09  9:43 UTC (permalink / raw)
  To: dada1; +Cc: ak, linux, akepner, netdev

From: Eric Dumazet <dada1@cosmosbay.com>
Date: Fri, 9 Feb 2007 10:36:58 +0100

> Thats strange, because pointer arithmetic is unsigned...
> I dont know when gcc started to use reciprocal division, maybe your gcc was 
> very old ?

Yep, it was only on older gcc's.

And as the sparc gcc backend co-maintainer, I remember what the
problem was.  The insn costs for multiply and divide were not set
properly on UltraSPARC, so it used the defaults, which made gcc think
divides were very cheap :-)

Current gcc does the right thing, even for weird sizes like 56 and 52
which expands to many IALU operations.


* Re: [PATCH] NET : change layout of ehash table
  2007-02-09  9:43                   ` David Miller
@ 2007-02-09 10:10                     ` Andi Kleen
  2007-02-09 17:44                     ` [PATCH] NET : UDP can use sk_hash to speedup lookups Eric Dumazet
  1 sibling, 0 replies; 15+ messages in thread
From: Andi Kleen @ 2007-02-09 10:10 UTC (permalink / raw)
  To: David Miller; +Cc: dada1, linux, akepner, netdev

On Friday 09 February 2007 10:43, David Miller wrote:

> Current gcc does the right thing, even for weird sizes like 56 and 52
> which expands to many IALU operations.

... except if you use -Os, which at least on x86* is the default and is what
distros ship with.

-Andi



* [PATCH] NET : UDP can use sk_hash to speedup lookups
  2007-02-09  9:43                   ` David Miller
  2007-02-09 10:10                     ` Andi Kleen
@ 2007-02-09 17:44                     ` Eric Dumazet
  2007-02-09 23:45                       ` David Miller
  1 sibling, 1 reply; 15+ messages in thread
From: Eric Dumazet @ 2007-02-09 17:44 UTC (permalink / raw)
  To: David Miller; +Cc: netdev

[-- Attachment #1: Type: text/plain, Size: 609 bytes --]

In a prior patch, I introduced a sk_hash field (__sk_common.skc_hash) to let 
tcp lookups use one cache line per unmatched entry instead of two.

We can use sk_hash to speed up the UDP part as well. We store the hnum value 
in sk_hash, and use sk->sk_hash (same cache line as the 'next' pointer) 
instead of inet->num (different cache line).

Note: we still have a false-sharing problem on SMP machines, because 
sock_hold(sock) dirties the cache line containing the 'next' pointer. Not 
counting the udp_hash_lock rwlock. (Did someone mention RCU? :) )

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>

[-- Attachment #2: udp_can_use_sk_hash.patch --]
[-- Type: text/plain, Size: 2586 bytes --]

--- linux-2.6.20/net/ipv4/udp.c	2007-02-09 19:17:28.000000000 +0100
+++ linux-2.6.20-ed/net/ipv4/udp.c	2007-02-09 19:21:07.000000000 +0100
@@ -120,7 +120,7 @@ static inline int __udp_lib_lport_inuse(
 	struct hlist_node *node;
 
 	sk_for_each(sk, node, &udptable[num & (UDP_HTABLE_SIZE - 1)])
-		if (inet_sk(sk)->num == num)
+		if (sk->sk_hash == num)
 			return 1;
 	return 0;
 }
@@ -191,7 +191,7 @@ gotit:
 		head = &udptable[snum & (UDP_HTABLE_SIZE - 1)];
 
 		sk_for_each(sk2, node, head)
-			if (inet_sk(sk2)->num == snum                        &&
+			if (sk2->sk_hash == snum                             &&
 			    sk2 != sk                                        &&
 			    (!sk2->sk_reuse        || !sk->sk_reuse)         &&
 			    (!sk2->sk_bound_dev_if || !sk->sk_bound_dev_if
@@ -200,6 +200,7 @@ gotit:
 				goto fail;
 	}
 	inet_sk(sk)->num = snum;
+	sk->sk_hash = snum;
 	if (sk_unhashed(sk)) {
 		head = &udptable[snum & (UDP_HTABLE_SIZE - 1)];
 		sk_add_node(sk, head);
@@ -247,7 +248,7 @@ static struct sock *__udp4_lib_lookup(__
 	sk_for_each(sk, node, &udptable[hnum & (UDP_HTABLE_SIZE - 1)]) {
 		struct inet_sock *inet = inet_sk(sk);
 
-		if (inet->num == hnum && !ipv6_only_sock(sk)) {
+		if (sk->sk_hash == hnum && !ipv6_only_sock(sk)) {
 			int score = (sk->sk_family == PF_INET ? 1 : 0);
 			if (inet->rcv_saddr) {
 				if (inet->rcv_saddr != daddr)
@@ -296,7 +297,7 @@ static inline struct sock *udp_v4_mcast_
 	sk_for_each_from(s, node) {
 		struct inet_sock *inet = inet_sk(s);
 
-		if (inet->num != hnum					||
+		if (s->sk_hash != hnum					||
 		    (inet->daddr && inet->daddr != rmt_addr)		||
 		    (inet->dport != rmt_port && inet->dport)		||
 		    (inet->rcv_saddr && inet->rcv_saddr != loc_addr)	||
--- linux-2.6.20/net/ipv6/udp.c	2007-02-09 19:17:28.000000000 +0100
+++ linux-2.6.20-ed/net/ipv6/udp.c	2007-02-09 19:17:28.000000000 +0100
@@ -71,7 +71,7 @@ static struct sock *__udp6_lib_lookup(st
 	sk_for_each(sk, node, &udptable[hnum & (UDP_HTABLE_SIZE - 1)]) {
 		struct inet_sock *inet = inet_sk(sk);
 
-		if (inet->num == hnum && sk->sk_family == PF_INET6) {
+		if (sk->sk_hash == hnum && sk->sk_family == PF_INET6) {
 			struct ipv6_pinfo *np = inet6_sk(sk);
 			int score = 0;
 			if (inet->dport) {
@@ -309,7 +309,7 @@ static struct sock *udp_v6_mcast_next(st
 	sk_for_each_from(s, node) {
 		struct inet_sock *inet = inet_sk(s);
 
-		if (inet->num == num && s->sk_family == PF_INET6) {
+		if (s->sk_hash == num && s->sk_family == PF_INET6) {
 			struct ipv6_pinfo *np = inet6_sk(s);
 			if (inet->dport) {
 				if (inet->dport != rmt_port)


* Re: [PATCH] NET : UDP can use sk_hash to speedup lookups
  2007-02-09 17:44                     ` [PATCH] NET : UDP can use sk_hash to speedup lookups Eric Dumazet
@ 2007-02-09 23:45                       ` David Miller
  0 siblings, 0 replies; 15+ messages in thread
From: David Miller @ 2007-02-09 23:45 UTC (permalink / raw)
  To: dada1; +Cc: netdev

From: Eric Dumazet <dada1@cosmosbay.com>
Date: Fri, 9 Feb 2007 18:44:16 +0100

> In a prior patch, I introduced a sk_hash field (__sk_common.skc_hash)  to let 
> tcp lookups use one cache line per unmatched entry instead of two.
> 
> We can also use sk_hash to speedup UDP part as well. We store in sk_hash the 
> hnum value, and use sk->sk_hash (same cache line than 'next' pointer), 
> instead of inet->num (different cache line)
> 
> Note : We still have a false sharing problem for SMP machines, because 
> sock_hold(sock) dirties the cache line containing the 'next' pointer. Not 
> counting the udp_hash_lock rwlock. (did someone mentioned RCU ? :) )
> 
> Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>

Applied, thanks Eric.

