All of lore.kernel.org
 help / color / mirror / Atom feed
* [RFC v3 0/7] generic hashtable implementation
@ 2012-08-07  0:45 ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

There are quite a few places in the kernel which implement a hashtable
in a very similar way. Instead of having implementations of a hashtable
all over the kernel, we can re-use the code.

This patch series introduces a very simple hashtable implementation, and
modifies three (random) modules to use it. I've limited it to 3 only
so that it would be easy to review and modify, and to show that even
at this number we already eliminate a big amount of duplicated code.

If this basic hashtable looks ok, future code will include:

 - RCU support
 - Self locking (list_bl?)
 - Converting more code to use the hashtable


Changes in V3:

 - Address review comments by Tejun Heo, Josh Triplett, Eric Beiderman,
   Mathieu Desnoyers, Eric Dumazet and Linus Torvalds.
 - Removed hash_get due to being too Gandalf.
 - Rewrote the user namespaces hash implementation.
 - Hashtable went back to being a simple array of buckets, but without any
   of the macro tricks to get the size automatically.
 - Optimize hasing if key is 32 bits long.

Changes in V2:

 - Address review comments by Tejun Heo, Josh Triplett and Eric Beiderman (Thanks all!).
 - Rebase on top of latest master.
 - Convert more places to use the hashtable. Hopefully it will trigger more reviews by
 touching more subsystems.


Sasha Levin (7):
  hashtable: introduce a small and naive hashtable
  user_ns: use new hashtable implementation
  mm,ksm: use new hashtable implementation
  workqueue: use new hashtable implementation
  mm/huge_memory: use new hashtable implementation
  tracepoint: use new hashtable implementation
  net,9p: use new hashtable implementation

 include/linux/hashtable.h |   82 +++++++++++++++++++++++++++++++++++++++++
 kernel/tracepoint.c       |   26 +++++--------
 kernel/user.c             |   35 ++++++++----------
 kernel/workqueue.c        |   89 +++++++++------------------------------------
 mm/huge_memory.c          |   56 +++++++---------------------
 mm/ksm.c                  |   31 +++++++---------
 net/9p/error.c            |   21 +++++------
 7 files changed, 162 insertions(+), 178 deletions(-)
 create mode 100644 include/linux/hashtable.h

-- 
1.7.8.6


^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC v3 0/7] generic hashtable implementation
@ 2012-08-07  0:45 ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

There are quite a few places in the kernel which implement a hashtable
in a very similar way. Instead of having implementations of a hashtable
all over the kernel, we can re-use the code.

This patch series introduces a very simple hashtable implementation, and
modifies three (random) modules to use it. I've limited it to 3 only
so that it would be easy to review and modify, and to show that even
at this number we already eliminate a big amount of duplicated code.

If this basic hashtable looks ok, future code will include:

 - RCU support
 - Self locking (list_bl?)
 - Converting more code to use the hashtable


Changes in V3:

 - Address review comments by Tejun Heo, Josh Triplett, Eric Beiderman,
   Mathieu Desnoyers, Eric Dumazet and Linus Torvalds.
 - Removed hash_get due to being too Gandalf.
 - Rewrote the user namespaces hash implementation.
 - Hashtable went back to being a simple array of buckets, but without any
   of the macro tricks to get the size automatically.
 - Optimize hasing if key is 32 bits long.

Changes in V2:

 - Address review comments by Tejun Heo, Josh Triplett and Eric Beiderman (Thanks all!).
 - Rebase on top of latest master.
 - Convert more places to use the hashtable. Hopefully it will trigger more reviews by
 touching more subsystems.


Sasha Levin (7):
  hashtable: introduce a small and naive hashtable
  user_ns: use new hashtable implementation
  mm,ksm: use new hashtable implementation
  workqueue: use new hashtable implementation
  mm/huge_memory: use new hashtable implementation
  tracepoint: use new hashtable implementation
  net,9p: use new hashtable implementation

 include/linux/hashtable.h |   82 +++++++++++++++++++++++++++++++++++++++++
 kernel/tracepoint.c       |   26 +++++--------
 kernel/user.c             |   35 ++++++++----------
 kernel/workqueue.c        |   89 +++++++++------------------------------------
 mm/huge_memory.c          |   56 +++++++---------------------
 mm/ksm.c                  |   31 +++++++---------
 net/9p/error.c            |   21 +++++------
 7 files changed, 162 insertions(+), 178 deletions(-)
 create mode 100644 include/linux/hashtable.h

-- 
1.7.8.6

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [RFC v3 1/7] hashtable: introduce a small and naive hashtable
  2012-08-07  0:45 ` Sasha Levin
@ 2012-08-07  0:45   ` Sasha Levin
  -1 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

This hashtable implementation is using hlist buckets to provide a simple
hashtable to prevent it from getting reimplemented all over the kernel.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 include/linux/hashtable.h |   82 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 82 insertions(+), 0 deletions(-)
 create mode 100644 include/linux/hashtable.h

diff --git a/include/linux/hashtable.h b/include/linux/hashtable.h
new file mode 100644
index 0000000..394652b
--- /dev/null
+++ b/include/linux/hashtable.h
@@ -0,0 +1,82 @@
+/*
+ * Hash table implementation
+ * (C) 2012  Sasha Levin <levinsasha928@gmail.com>
+ */
+
+#ifndef _LINUX_HASHTABLE_H
+#define _LINUX_HASHTABLE_H
+
+#include <linux/list.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/hash.h>
+
+#define DEFINE_HASHTABLE(name, bits)					\
+	struct hlist_head name[HASH_SIZE(bits)];
+
+#define HASH_SIZE(bits) (1 << (bits))
+
+/* Use hash_32 when possible to allow for fast 32bit hashing in 64bit kernels. */
+#define hash_min(val, bits) ((sizeof(val)==4)?hash_32((val), (bits)):hash_long((val), (bits)))
+
+/**
+ * hash_init - initialize a hash table
+ * @hashtable: hashtable to be initialized
+ * @bits: bit count of hashing function
+ *
+ * Initializes a hash table with 2**bits buckets.
+ */
+static inline void hash_init(struct hlist_head *hashtable, int bits)
+{
+	int i;
+
+	for (i = 0; i < HASH_SIZE(bits); i++)
+		INIT_HLIST_HEAD(hashtable + i);
+}
+
+/**
+ * hash_add - add an object to a hashtable
+ * @hashtable: hashtable to add to
+ * @bits: bit count used for hashing
+ * @node: the &struct hlist_node of the object to be added
+ * @key: the key of the object to be added
+ */
+#define hash_add(hashtable, bits, node, key)				\
+	hlist_add_head(node, &hashtable[hash_min(key, bits)]);
+
+/**
+ * hash_del - remove an object from a hashtable
+ * @node: &struct hlist_node of the object to remove
+ */
+static inline void hash_del(struct hlist_node *node)
+{
+	hlist_del_init(node);
+}
+
+/**
+ * hash_for_each - iterate over a hashtable
+ * @name: hashtable to iterate
+ * @bits: bit count of hashing function of the hashtable
+ * @bkt: integer to use as bucket loop cursor
+ * @node: the &struct list_head to use as a loop cursor for each bucket
+ * @obj: the type * to use as a loop cursor for each bucket
+ * @member: the name of the hlist_node within the struct
+ */
+#define hash_for_each(name, bits, bkt, node, obj, member)		\
+	for (bkt = 0; bkt < HASH_SIZE(bits); bkt++)			\
+		hlist_for_each_entry(obj, node, &name[i], member)
+
+/**
+ * hash_for_each_possible - iterate over all possible objects for a giver key
+ * @name: hashtable to iterate
+ * @obj: the type * to use as a loop cursor for each bucke
+ * @bits: bit count of hashing function of the hashtable
+ * @node: the &struct list_head to use as a loop cursor for each bucket
+ * @member: the name of the hlist_node within the struct
+ * @key: the key of the objects to iterate over
+ */
+#define hash_for_each_possible(name, obj, bits, node, member, key)	\
+	hlist_for_each_entry(obj, node,					\
+		&name[hash_min(key, bits)], member)
+
+#endif
-- 
1.7.8.6


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 1/7] hashtable: introduce a small and naive hashtable
@ 2012-08-07  0:45   ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

This hashtable implementation is using hlist buckets to provide a simple
hashtable to prevent it from getting reimplemented all over the kernel.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 include/linux/hashtable.h |   82 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 82 insertions(+), 0 deletions(-)
 create mode 100644 include/linux/hashtable.h

diff --git a/include/linux/hashtable.h b/include/linux/hashtable.h
new file mode 100644
index 0000000..394652b
--- /dev/null
+++ b/include/linux/hashtable.h
@@ -0,0 +1,82 @@
+/*
+ * Hash table implementation
+ * (C) 2012  Sasha Levin <levinsasha928@gmail.com>
+ */
+
+#ifndef _LINUX_HASHTABLE_H
+#define _LINUX_HASHTABLE_H
+
+#include <linux/list.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/hash.h>
+
+#define DEFINE_HASHTABLE(name, bits)					\
+	struct hlist_head name[HASH_SIZE(bits)];
+
+#define HASH_SIZE(bits) (1 << (bits))
+
+/* Use hash_32 when possible to allow for fast 32bit hashing in 64bit kernels. */
+#define hash_min(val, bits) ((sizeof(val)==4)?hash_32((val), (bits)):hash_long((val), (bits)))
+
+/**
+ * hash_init - initialize a hash table
+ * @hashtable: hashtable to be initialized
+ * @bits: bit count of hashing function
+ *
+ * Initializes a hash table with 2**bits buckets.
+ */
+static inline void hash_init(struct hlist_head *hashtable, int bits)
+{
+	int i;
+
+	for (i = 0; i < HASH_SIZE(bits); i++)
+		INIT_HLIST_HEAD(hashtable + i);
+}
+
+/**
+ * hash_add - add an object to a hashtable
+ * @hashtable: hashtable to add to
+ * @bits: bit count used for hashing
+ * @node: the &struct hlist_node of the object to be added
+ * @key: the key of the object to be added
+ */
+#define hash_add(hashtable, bits, node, key)				\
+	hlist_add_head(node, &hashtable[hash_min(key, bits)]);
+
+/**
+ * hash_del - remove an object from a hashtable
+ * @node: &struct hlist_node of the object to remove
+ */
+static inline void hash_del(struct hlist_node *node)
+{
+	hlist_del_init(node);
+}
+
+/**
+ * hash_for_each - iterate over a hashtable
+ * @name: hashtable to iterate
+ * @bits: bit count of hashing function of the hashtable
+ * @bkt: integer to use as bucket loop cursor
+ * @node: the &struct list_head to use as a loop cursor for each bucket
+ * @obj: the type * to use as a loop cursor for each bucket
+ * @member: the name of the hlist_node within the struct
+ */
+#define hash_for_each(name, bits, bkt, node, obj, member)		\
+	for (bkt = 0; bkt < HASH_SIZE(bits); bkt++)			\
+		hlist_for_each_entry(obj, node, &name[i], member)
+
+/**
+ * hash_for_each_possible - iterate over all possible objects for a giver key
+ * @name: hashtable to iterate
+ * @obj: the type * to use as a loop cursor for each bucke
+ * @bits: bit count of hashing function of the hashtable
+ * @node: the &struct list_head to use as a loop cursor for each bucket
+ * @member: the name of the hlist_node within the struct
+ * @key: the key of the objects to iterate over
+ */
+#define hash_for_each_possible(name, obj, bits, node, member, key)	\
+	hlist_for_each_entry(obj, node,					\
+		&name[hash_min(key, bits)], member)
+
+#endif
-- 
1.7.8.6

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 2/7] user_ns: use new hashtable implementation
  2012-08-07  0:45 ` Sasha Levin
@ 2012-08-07  0:45   ` Sasha Levin
  -1 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch user_ns to use the new hashtable implementation. This reduces the amount of
generic unrelated code in user_ns.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 kernel/user.c |   35 +++++++++++++++--------------------
 1 files changed, 15 insertions(+), 20 deletions(-)

diff --git a/kernel/user.c b/kernel/user.c
index b815fef..400a23cf 100644
--- a/kernel/user.c
+++ b/kernel/user.c
@@ -16,6 +16,7 @@
 #include <linux/interrupt.h>
 #include <linux/export.h>
 #include <linux/user_namespace.h>
+#include <linux/hashtable.h>
 
 /*
  * userns count is 1 for root user, 1 for init_uts_ns,
@@ -52,13 +53,9 @@ EXPORT_SYMBOL_GPL(init_user_ns);
  */
 
 #define UIDHASH_BITS	(CONFIG_BASE_SMALL ? 3 : 7)
-#define UIDHASH_SZ	(1 << UIDHASH_BITS)
-#define UIDHASH_MASK		(UIDHASH_SZ - 1)
-#define __uidhashfn(uid)	(((uid >> UIDHASH_BITS) + uid) & UIDHASH_MASK)
-#define uidhashentry(uid)	(uidhash_table + __uidhashfn((__kuid_val(uid))))
 
 static struct kmem_cache *uid_cachep;
-struct hlist_head uidhash_table[UIDHASH_SZ];
+static DEFINE_HASHTABLE(uidhash_table, UIDHASH_BITS)
 
 /*
  * The uidhash_lock is mostly taken from process context, but it is
@@ -84,22 +81,24 @@ struct user_struct root_user = {
 /*
  * These routines must be called with the uidhash spinlock held!
  */
-static void uid_hash_insert(struct user_struct *up, struct hlist_head *hashent)
+static void uid_hash_insert(struct user_struct *up)
 {
-	hlist_add_head(&up->uidhash_node, hashent);
+	hash_add(uidhash_table, UIDHASH_BITS, &up->uidhash_node,
+		__kuid_val(up->uid));
 }
 
 static void uid_hash_remove(struct user_struct *up)
 {
-	hlist_del_init(&up->uidhash_node);
+	hash_del(&up->uidhash_node);
 }
 
-static struct user_struct *uid_hash_find(kuid_t uid, struct hlist_head *hashent)
+static struct user_struct *uid_hash_find(kuid_t uid)
 {
 	struct user_struct *user;
 	struct hlist_node *h;
 
-	hlist_for_each_entry(user, h, hashent, uidhash_node) {
+	hash_for_each_possible(uidhash_table, user, UIDHASH_BITS, h, uidhash_node,
+								__kuid_val(uid)) {
 		if (uid_eq(user->uid, uid)) {
 			atomic_inc(&user->__count);
 			return user;
@@ -135,7 +134,7 @@ struct user_struct *find_user(kuid_t uid)
 	unsigned long flags;
 
 	spin_lock_irqsave(&uidhash_lock, flags);
-	ret = uid_hash_find(uid, uidhashentry(uid));
+	ret = uid_hash_find(uid);
 	spin_unlock_irqrestore(&uidhash_lock, flags);
 	return ret;
 }
@@ -156,11 +155,10 @@ void free_uid(struct user_struct *up)
 
 struct user_struct *alloc_uid(kuid_t uid)
 {
-	struct hlist_head *hashent = uidhashentry(uid);
 	struct user_struct *up, *new;
 
 	spin_lock_irq(&uidhash_lock);
-	up = uid_hash_find(uid, hashent);
+	up = uid_hash_find(uid);
 	spin_unlock_irq(&uidhash_lock);
 
 	if (!up) {
@@ -176,13 +174,13 @@ struct user_struct *alloc_uid(kuid_t uid)
 		 * on adding the same user already..
 		 */
 		spin_lock_irq(&uidhash_lock);
-		up = uid_hash_find(uid, hashent);
+		up = uid_hash_find(uid);
 		if (up) {
 			key_put(new->uid_keyring);
 			key_put(new->session_keyring);
 			kmem_cache_free(uid_cachep, new);
 		} else {
-			uid_hash_insert(new, hashent);
+			uid_hash_insert(new);
 			up = new;
 		}
 		spin_unlock_irq(&uidhash_lock);
@@ -196,17 +194,14 @@ out_unlock:
 
 static int __init uid_cache_init(void)
 {
-	int n;
-
 	uid_cachep = kmem_cache_create("uid_cache", sizeof(struct user_struct),
 			0, SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
 
-	for(n = 0; n < UIDHASH_SZ; ++n)
-		INIT_HLIST_HEAD(uidhash_table + n);
+	hash_init(uidhash_table, UIDHASH_BITS);
 
 	/* Insert the root user immediately (init already runs as root) */
 	spin_lock_irq(&uidhash_lock);
-	uid_hash_insert(&root_user, uidhashentry(GLOBAL_ROOT_UID));
+	uid_hash_insert(&root_user);
 	spin_unlock_irq(&uidhash_lock);
 
 	return 0;
-- 
1.7.8.6


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 2/7] user_ns: use new hashtable implementation
@ 2012-08-07  0:45   ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch user_ns to use the new hashtable implementation. This reduces the amount of
generic unrelated code in user_ns.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 kernel/user.c |   35 +++++++++++++++--------------------
 1 files changed, 15 insertions(+), 20 deletions(-)

diff --git a/kernel/user.c b/kernel/user.c
index b815fef..400a23cf 100644
--- a/kernel/user.c
+++ b/kernel/user.c
@@ -16,6 +16,7 @@
 #include <linux/interrupt.h>
 #include <linux/export.h>
 #include <linux/user_namespace.h>
+#include <linux/hashtable.h>
 
 /*
  * userns count is 1 for root user, 1 for init_uts_ns,
@@ -52,13 +53,9 @@ EXPORT_SYMBOL_GPL(init_user_ns);
  */
 
 #define UIDHASH_BITS	(CONFIG_BASE_SMALL ? 3 : 7)
-#define UIDHASH_SZ	(1 << UIDHASH_BITS)
-#define UIDHASH_MASK		(UIDHASH_SZ - 1)
-#define __uidhashfn(uid)	(((uid >> UIDHASH_BITS) + uid) & UIDHASH_MASK)
-#define uidhashentry(uid)	(uidhash_table + __uidhashfn((__kuid_val(uid))))
 
 static struct kmem_cache *uid_cachep;
-struct hlist_head uidhash_table[UIDHASH_SZ];
+static DEFINE_HASHTABLE(uidhash_table, UIDHASH_BITS)
 
 /*
  * The uidhash_lock is mostly taken from process context, but it is
@@ -84,22 +81,24 @@ struct user_struct root_user = {
 /*
  * These routines must be called with the uidhash spinlock held!
  */
-static void uid_hash_insert(struct user_struct *up, struct hlist_head *hashent)
+static void uid_hash_insert(struct user_struct *up)
 {
-	hlist_add_head(&up->uidhash_node, hashent);
+	hash_add(uidhash_table, UIDHASH_BITS, &up->uidhash_node,
+		__kuid_val(up->uid));
 }
 
 static void uid_hash_remove(struct user_struct *up)
 {
-	hlist_del_init(&up->uidhash_node);
+	hash_del(&up->uidhash_node);
 }
 
-static struct user_struct *uid_hash_find(kuid_t uid, struct hlist_head *hashent)
+static struct user_struct *uid_hash_find(kuid_t uid)
 {
 	struct user_struct *user;
 	struct hlist_node *h;
 
-	hlist_for_each_entry(user, h, hashent, uidhash_node) {
+	hash_for_each_possible(uidhash_table, user, UIDHASH_BITS, h, uidhash_node,
+								__kuid_val(uid)) {
 		if (uid_eq(user->uid, uid)) {
 			atomic_inc(&user->__count);
 			return user;
@@ -135,7 +134,7 @@ struct user_struct *find_user(kuid_t uid)
 	unsigned long flags;
 
 	spin_lock_irqsave(&uidhash_lock, flags);
-	ret = uid_hash_find(uid, uidhashentry(uid));
+	ret = uid_hash_find(uid);
 	spin_unlock_irqrestore(&uidhash_lock, flags);
 	return ret;
 }
@@ -156,11 +155,10 @@ void free_uid(struct user_struct *up)
 
 struct user_struct *alloc_uid(kuid_t uid)
 {
-	struct hlist_head *hashent = uidhashentry(uid);
 	struct user_struct *up, *new;
 
 	spin_lock_irq(&uidhash_lock);
-	up = uid_hash_find(uid, hashent);
+	up = uid_hash_find(uid);
 	spin_unlock_irq(&uidhash_lock);
 
 	if (!up) {
@@ -176,13 +174,13 @@ struct user_struct *alloc_uid(kuid_t uid)
 		 * on adding the same user already..
 		 */
 		spin_lock_irq(&uidhash_lock);
-		up = uid_hash_find(uid, hashent);
+		up = uid_hash_find(uid);
 		if (up) {
 			key_put(new->uid_keyring);
 			key_put(new->session_keyring);
 			kmem_cache_free(uid_cachep, new);
 		} else {
-			uid_hash_insert(new, hashent);
+			uid_hash_insert(new);
 			up = new;
 		}
 		spin_unlock_irq(&uidhash_lock);
@@ -196,17 +194,14 @@ out_unlock:
 
 static int __init uid_cache_init(void)
 {
-	int n;
-
 	uid_cachep = kmem_cache_create("uid_cache", sizeof(struct user_struct),
 			0, SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
 
-	for(n = 0; n < UIDHASH_SZ; ++n)
-		INIT_HLIST_HEAD(uidhash_table + n);
+	hash_init(uidhash_table, UIDHASH_BITS);
 
 	/* Insert the root user immediately (init already runs as root) */
 	spin_lock_irq(&uidhash_lock);
-	uid_hash_insert(&root_user, uidhashentry(GLOBAL_ROOT_UID));
+	uid_hash_insert(&root_user);
 	spin_unlock_irq(&uidhash_lock);
 
 	return 0;
-- 
1.7.8.6

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 3/7] mm,ksm: use new hashtable implementation
  2012-08-07  0:45 ` Sasha Levin
@ 2012-08-07  0:45   ` Sasha Levin
  -1 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch ksm to use the new hashtable implementation. This reduces the amount of
generic unrelated code in the ksm module.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 mm/ksm.c |   31 +++++++++++++------------------
 1 files changed, 13 insertions(+), 18 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 47c8853..96d5f91 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -33,7 +33,7 @@
 #include <linux/mmu_notifier.h>
 #include <linux/swap.h>
 #include <linux/ksm.h>
-#include <linux/hash.h>
+#include <linux/hashtable.h>
 #include <linux/freezer.h>
 #include <linux/oom.h>
 
@@ -156,9 +156,8 @@ struct rmap_item {
 static struct rb_root root_stable_tree = RB_ROOT;
 static struct rb_root root_unstable_tree = RB_ROOT;
 
-#define MM_SLOTS_HASH_SHIFT 10
-#define MM_SLOTS_HASH_HEADS (1 << MM_SLOTS_HASH_SHIFT)
-static struct hlist_head mm_slots_hash[MM_SLOTS_HASH_HEADS];
+#define MM_SLOTS_HASH_BITS 10
+static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
 
 static struct mm_slot ksm_mm_head = {
 	.mm_list = LIST_HEAD_INIT(ksm_mm_head.mm_list),
@@ -275,26 +274,22 @@ static inline void free_mm_slot(struct mm_slot *mm_slot)
 
 static struct mm_slot *get_mm_slot(struct mm_struct *mm)
 {
-	struct mm_slot *mm_slot;
-	struct hlist_head *bucket;
 	struct hlist_node *node;
+	struct mm_slot *slot;
+
+	hash_for_each_possible(mm_slots_hash, slot, MM_SLOTS_HASH_BITS,
+					node, link, (unsigned long)mm) 
+		if (slot->mm == mm)
+			return slot;
 
-	bucket = &mm_slots_hash[hash_ptr(mm, MM_SLOTS_HASH_SHIFT)];
-	hlist_for_each_entry(mm_slot, node, bucket, link) {
-		if (mm == mm_slot->mm)
-			return mm_slot;
-	}
 	return NULL;
 }
 
 static void insert_to_mm_slots_hash(struct mm_struct *mm,
 				    struct mm_slot *mm_slot)
 {
-	struct hlist_head *bucket;
-
-	bucket = &mm_slots_hash[hash_ptr(mm, MM_SLOTS_HASH_SHIFT)];
 	mm_slot->mm = mm;
-	hlist_add_head(&mm_slot->link, bucket);
+	hash_add(mm_slots_hash, MM_SLOTS_HASH_BITS, &mm_slot->link, (long)mm);
 }
 
 static inline int in_stable_tree(struct rmap_item *rmap_item)
@@ -647,7 +642,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 		ksm_scan.mm_slot = list_entry(mm_slot->mm_list.next,
 						struct mm_slot, mm_list);
 		if (ksm_test_exit(mm)) {
-			hlist_del(&mm_slot->link);
+			hash_del(&mm_slot->link);
 			list_del(&mm_slot->mm_list);
 			spin_unlock(&ksm_mmlist_lock);
 
@@ -1385,7 +1380,7 @@ next_mm:
 		 * or when all VM_MERGEABLE areas have been unmapped (and
 		 * mmap_sem then protects against race with MADV_MERGEABLE).
 		 */
-		hlist_del(&slot->link);
+		hash_del(&slot->link);
 		list_del(&slot->mm_list);
 		spin_unlock(&ksm_mmlist_lock);
 
@@ -1548,7 +1543,7 @@ void __ksm_exit(struct mm_struct *mm)
 	mm_slot = get_mm_slot(mm);
 	if (mm_slot && ksm_scan.mm_slot != mm_slot) {
 		if (!mm_slot->rmap_list) {
-			hlist_del(&mm_slot->link);
+			hash_del(&mm_slot->link);
 			list_del(&mm_slot->mm_list);
 			easy_to_free = 1;
 		} else {
-- 
1.7.8.6


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 3/7] mm,ksm: use new hashtable implementation
@ 2012-08-07  0:45   ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch ksm to use the new hashtable implementation. This reduces the amount of
generic unrelated code in the ksm module.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 mm/ksm.c |   31 +++++++++++++------------------
 1 files changed, 13 insertions(+), 18 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 47c8853..96d5f91 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -33,7 +33,7 @@
 #include <linux/mmu_notifier.h>
 #include <linux/swap.h>
 #include <linux/ksm.h>
-#include <linux/hash.h>
+#include <linux/hashtable.h>
 #include <linux/freezer.h>
 #include <linux/oom.h>
 
@@ -156,9 +156,8 @@ struct rmap_item {
 static struct rb_root root_stable_tree = RB_ROOT;
 static struct rb_root root_unstable_tree = RB_ROOT;
 
-#define MM_SLOTS_HASH_SHIFT 10
-#define MM_SLOTS_HASH_HEADS (1 << MM_SLOTS_HASH_SHIFT)
-static struct hlist_head mm_slots_hash[MM_SLOTS_HASH_HEADS];
+#define MM_SLOTS_HASH_BITS 10
+static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
 
 static struct mm_slot ksm_mm_head = {
 	.mm_list = LIST_HEAD_INIT(ksm_mm_head.mm_list),
@@ -275,26 +274,22 @@ static inline void free_mm_slot(struct mm_slot *mm_slot)
 
 static struct mm_slot *get_mm_slot(struct mm_struct *mm)
 {
-	struct mm_slot *mm_slot;
-	struct hlist_head *bucket;
 	struct hlist_node *node;
+	struct mm_slot *slot;
+
+	hash_for_each_possible(mm_slots_hash, slot, MM_SLOTS_HASH_BITS,
+					node, link, (unsigned long)mm) 
+		if (slot->mm == mm)
+			return slot;
 
-	bucket = &mm_slots_hash[hash_ptr(mm, MM_SLOTS_HASH_SHIFT)];
-	hlist_for_each_entry(mm_slot, node, bucket, link) {
-		if (mm == mm_slot->mm)
-			return mm_slot;
-	}
 	return NULL;
 }
 
 static void insert_to_mm_slots_hash(struct mm_struct *mm,
 				    struct mm_slot *mm_slot)
 {
-	struct hlist_head *bucket;
-
-	bucket = &mm_slots_hash[hash_ptr(mm, MM_SLOTS_HASH_SHIFT)];
 	mm_slot->mm = mm;
-	hlist_add_head(&mm_slot->link, bucket);
+	hash_add(mm_slots_hash, MM_SLOTS_HASH_BITS, &mm_slot->link, (long)mm);
 }
 
 static inline int in_stable_tree(struct rmap_item *rmap_item)
@@ -647,7 +642,7 @@ static int unmerge_and_remove_all_rmap_items(void)
 		ksm_scan.mm_slot = list_entry(mm_slot->mm_list.next,
 						struct mm_slot, mm_list);
 		if (ksm_test_exit(mm)) {
-			hlist_del(&mm_slot->link);
+			hash_del(&mm_slot->link);
 			list_del(&mm_slot->mm_list);
 			spin_unlock(&ksm_mmlist_lock);
 
@@ -1385,7 +1380,7 @@ next_mm:
 		 * or when all VM_MERGEABLE areas have been unmapped (and
 		 * mmap_sem then protects against race with MADV_MERGEABLE).
 		 */
-		hlist_del(&slot->link);
+		hash_del(&slot->link);
 		list_del(&slot->mm_list);
 		spin_unlock(&ksm_mmlist_lock);
 
@@ -1548,7 +1543,7 @@ void __ksm_exit(struct mm_struct *mm)
 	mm_slot = get_mm_slot(mm);
 	if (mm_slot && ksm_scan.mm_slot != mm_slot) {
 		if (!mm_slot->rmap_list) {
-			hlist_del(&mm_slot->link);
+			hash_del(&mm_slot->link);
 			list_del(&mm_slot->mm_list);
 			easy_to_free = 1;
 		} else {
-- 
1.7.8.6

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 4/7] workqueue: use new hashtable implementation
  2012-08-07  0:45 ` Sasha Levin
@ 2012-08-07  0:45   ` Sasha Levin
  -1 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch workqueues to use the new hashtable implementation. This reduces the amount of
generic unrelated code in the workqueues.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 kernel/workqueue.c |   89 ++++++++++-----------------------------------------
 1 files changed, 18 insertions(+), 71 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 692d976..548fa87 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -41,6 +41,7 @@
 #include <linux/debug_locks.h>
 #include <linux/lockdep.h>
 #include <linux/idr.h>
+#include <linux/hashtable.h>
 
 #include "workqueue_sched.h"
 
@@ -82,8 +83,6 @@ enum {
 	NR_WORKER_POOLS		= 2,		/* # worker pools per gcwq */
 
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
-	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
-	BUSY_WORKER_HASH_MASK	= BUSY_WORKER_HASH_SIZE - 1,
 
 	MAX_IDLE_WORKERS_RATIO	= 4,		/* 1/4 of busy can be idle */
 	IDLE_WORKER_TIMEOUT	= 300 * HZ,	/* keep idle ones for 5 mins */
@@ -180,7 +179,7 @@ struct global_cwq {
 	unsigned int		flags;		/* L: GCWQ_* flags */
 
 	/* workers are chained either in busy_hash or pool idle_list */
-	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
+	DEFINE_HASHTABLE(busy_hash, BUSY_WORKER_HASH_ORDER);
 						/* L: hash of busy workers */
 
 	struct worker_pool	pools[2];	/* normal and highpri pools */
@@ -288,8 +287,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
 
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
-	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
-		hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
+	hash_for_each(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER, i, pos,	\
+		worker, hentry)
 
 static inline int __next_gcwq_cpu(int cpu, const struct cpumask *mask,
 				  unsigned int sw)
@@ -822,63 +821,6 @@ static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 }
 
 /**
- * busy_worker_head - return the busy hash head for a work
- * @gcwq: gcwq of interest
- * @work: work to be hashed
- *
- * Return hash head of @gcwq for @work.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to the hash head.
- */
-static struct hlist_head *busy_worker_head(struct global_cwq *gcwq,
-					   struct work_struct *work)
-{
-	const int base_shift = ilog2(sizeof(struct work_struct));
-	unsigned long v = (unsigned long)work;
-
-	/* simple shift and fold hash, do we need something better? */
-	v >>= base_shift;
-	v += v >> BUSY_WORKER_HASH_ORDER;
-	v &= BUSY_WORKER_HASH_MASK;
-
-	return &gcwq->busy_hash[v];
-}
-
-/**
- * __find_worker_executing_work - find worker which is executing a work
- * @gcwq: gcwq of interest
- * @bwh: hash head as returned by busy_worker_head()
- * @work: work to find worker for
- *
- * Find a worker which is executing @work on @gcwq.  @bwh should be
- * the hash head obtained by calling busy_worker_head() with the same
- * work.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to worker which is executing @work if found, NULL
- * otherwise.
- */
-static struct worker *__find_worker_executing_work(struct global_cwq *gcwq,
-						   struct hlist_head *bwh,
-						   struct work_struct *work)
-{
-	struct worker *worker;
-	struct hlist_node *tmp;
-
-	hlist_for_each_entry(worker, tmp, bwh, hentry)
-		if (worker->current_work == work)
-			return worker;
-	return NULL;
-}
-
-/**
  * find_worker_executing_work - find worker which is executing a work
  * @gcwq: gcwq of interest
  * @work: work to find worker for
@@ -897,8 +839,15 @@ static struct worker *__find_worker_executing_work(struct global_cwq *gcwq,
 static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 						 struct work_struct *work)
 {
-	return __find_worker_executing_work(gcwq, busy_worker_head(gcwq, work),
-					    work);
+	struct worker *worker;
+	struct hlist_node *tmp;
+
+	hash_for_each_possible(gcwq->busy_hash, worker, BUSY_WORKER_HASH_ORDER,
+						tmp, hentry, (unsigned long)work)
+		if (worker->current_work == work)
+			return worker;
+
+	return NULL;
 }
 
 /**
@@ -1932,7 +1881,6 @@ __acquires(&gcwq->lock)
 	struct cpu_workqueue_struct *cwq = get_work_cwq(work);
 	struct worker_pool *pool = worker->pool;
 	struct global_cwq *gcwq = pool->gcwq;
-	struct hlist_head *bwh = busy_worker_head(gcwq, work);
 	bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
 	work_func_t f = work->func;
 	int work_color;
@@ -1964,7 +1912,7 @@ __acquires(&gcwq->lock)
 	 * already processing the work.  If so, defer the work to the
 	 * currently executing one.
 	 */
-	collision = __find_worker_executing_work(gcwq, bwh, work);
+	collision = find_worker_executing_work(gcwq, work);
 	if (unlikely(collision)) {
 		move_linked_works(work, &collision->scheduled, NULL);
 		return;
@@ -1972,7 +1920,8 @@ __acquires(&gcwq->lock)
 
 	/* claim and process */
 	debug_work_deactivate(work);
-	hlist_add_head(&worker->hentry, bwh);
+	hash_add(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER, &worker->hentry,
+			(unsigned long)worker);
 	worker->current_work = work;
 	worker->current_cwq = cwq;
 	work_color = get_work_color(work);
@@ -2027,7 +1976,7 @@ __acquires(&gcwq->lock)
 		worker_clr_flags(worker, WORKER_CPU_INTENSIVE);
 
 	/* we're done with it, release */
-	hlist_del_init(&worker->hentry);
+	hash_del(&worker->hentry);
 	worker->current_work = NULL;
 	worker->current_cwq = NULL;
 	cwq_dec_nr_in_flight(cwq, work_color, false);
@@ -3690,7 +3639,6 @@ out_unlock:
 static int __init init_workqueues(void)
 {
 	unsigned int cpu;
-	int i;
 
 	cpu_notifier(workqueue_cpu_up_callback, CPU_PRI_WORKQUEUE_UP);
 	cpu_notifier(workqueue_cpu_down_callback, CPU_PRI_WORKQUEUE_DOWN);
@@ -3704,8 +3652,7 @@ static int __init init_workqueues(void)
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
-			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
+		hash_init(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER);
 
 		for_each_worker_pool(pool, gcwq) {
 			pool->gcwq = gcwq;
-- 
1.7.8.6


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 4/7] workqueue: use new hashtable implementation
@ 2012-08-07  0:45   ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch workqueues to use the new hashtable implementation. This reduces the amount of
generic unrelated code in the workqueues.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 kernel/workqueue.c |   89 ++++++++++-----------------------------------------
 1 files changed, 18 insertions(+), 71 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 692d976..548fa87 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -41,6 +41,7 @@
 #include <linux/debug_locks.h>
 #include <linux/lockdep.h>
 #include <linux/idr.h>
+#include <linux/hashtable.h>
 
 #include "workqueue_sched.h"
 
@@ -82,8 +83,6 @@ enum {
 	NR_WORKER_POOLS		= 2,		/* # worker pools per gcwq */
 
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
-	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
-	BUSY_WORKER_HASH_MASK	= BUSY_WORKER_HASH_SIZE - 1,
 
 	MAX_IDLE_WORKERS_RATIO	= 4,		/* 1/4 of busy can be idle */
 	IDLE_WORKER_TIMEOUT	= 300 * HZ,	/* keep idle ones for 5 mins */
@@ -180,7 +179,7 @@ struct global_cwq {
 	unsigned int		flags;		/* L: GCWQ_* flags */
 
 	/* workers are chained either in busy_hash or pool idle_list */
-	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
+	DEFINE_HASHTABLE(busy_hash, BUSY_WORKER_HASH_ORDER);
 						/* L: hash of busy workers */
 
 	struct worker_pool	pools[2];	/* normal and highpri pools */
@@ -288,8 +287,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
 
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
-	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
-		hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
+	hash_for_each(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER, i, pos,	\
+		worker, hentry)
 
 static inline int __next_gcwq_cpu(int cpu, const struct cpumask *mask,
 				  unsigned int sw)
@@ -822,63 +821,6 @@ static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 }
 
 /**
- * busy_worker_head - return the busy hash head for a work
- * @gcwq: gcwq of interest
- * @work: work to be hashed
- *
- * Return hash head of @gcwq for @work.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to the hash head.
- */
-static struct hlist_head *busy_worker_head(struct global_cwq *gcwq,
-					   struct work_struct *work)
-{
-	const int base_shift = ilog2(sizeof(struct work_struct));
-	unsigned long v = (unsigned long)work;
-
-	/* simple shift and fold hash, do we need something better? */
-	v >>= base_shift;
-	v += v >> BUSY_WORKER_HASH_ORDER;
-	v &= BUSY_WORKER_HASH_MASK;
-
-	return &gcwq->busy_hash[v];
-}
-
-/**
- * __find_worker_executing_work - find worker which is executing a work
- * @gcwq: gcwq of interest
- * @bwh: hash head as returned by busy_worker_head()
- * @work: work to find worker for
- *
- * Find a worker which is executing @work on @gcwq.  @bwh should be
- * the hash head obtained by calling busy_worker_head() with the same
- * work.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to worker which is executing @work if found, NULL
- * otherwise.
- */
-static struct worker *__find_worker_executing_work(struct global_cwq *gcwq,
-						   struct hlist_head *bwh,
-						   struct work_struct *work)
-{
-	struct worker *worker;
-	struct hlist_node *tmp;
-
-	hlist_for_each_entry(worker, tmp, bwh, hentry)
-		if (worker->current_work == work)
-			return worker;
-	return NULL;
-}
-
-/**
  * find_worker_executing_work - find worker which is executing a work
  * @gcwq: gcwq of interest
  * @work: work to find worker for
@@ -897,8 +839,15 @@ static struct worker *__find_worker_executing_work(struct global_cwq *gcwq,
 static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 						 struct work_struct *work)
 {
-	return __find_worker_executing_work(gcwq, busy_worker_head(gcwq, work),
-					    work);
+	struct worker *worker;
+	struct hlist_node *tmp;
+
+	hash_for_each_possible(gcwq->busy_hash, worker, BUSY_WORKER_HASH_ORDER,
+						tmp, hentry, (unsigned long)work)
+		if (worker->current_work == work)
+			return worker;
+
+	return NULL;
 }
 
 /**
@@ -1932,7 +1881,6 @@ __acquires(&gcwq->lock)
 	struct cpu_workqueue_struct *cwq = get_work_cwq(work);
 	struct worker_pool *pool = worker->pool;
 	struct global_cwq *gcwq = pool->gcwq;
-	struct hlist_head *bwh = busy_worker_head(gcwq, work);
 	bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
 	work_func_t f = work->func;
 	int work_color;
@@ -1964,7 +1912,7 @@ __acquires(&gcwq->lock)
 	 * already processing the work.  If so, defer the work to the
 	 * currently executing one.
 	 */
-	collision = __find_worker_executing_work(gcwq, bwh, work);
+	collision = find_worker_executing_work(gcwq, work);
 	if (unlikely(collision)) {
 		move_linked_works(work, &collision->scheduled, NULL);
 		return;
@@ -1972,7 +1920,8 @@ __acquires(&gcwq->lock)
 
 	/* claim and process */
 	debug_work_deactivate(work);
-	hlist_add_head(&worker->hentry, bwh);
+	hash_add(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER, &worker->hentry,
+			(unsigned long)worker);
 	worker->current_work = work;
 	worker->current_cwq = cwq;
 	work_color = get_work_color(work);
@@ -2027,7 +1976,7 @@ __acquires(&gcwq->lock)
 		worker_clr_flags(worker, WORKER_CPU_INTENSIVE);
 
 	/* we're done with it, release */
-	hlist_del_init(&worker->hentry);
+	hash_del(&worker->hentry);
 	worker->current_work = NULL;
 	worker->current_cwq = NULL;
 	cwq_dec_nr_in_flight(cwq, work_color, false);
@@ -3690,7 +3639,6 @@ out_unlock:
 static int __init init_workqueues(void)
 {
 	unsigned int cpu;
-	int i;
 
 	cpu_notifier(workqueue_cpu_up_callback, CPU_PRI_WORKQUEUE_UP);
 	cpu_notifier(workqueue_cpu_down_callback, CPU_PRI_WORKQUEUE_DOWN);
@@ -3704,8 +3652,7 @@ static int __init init_workqueues(void)
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
-			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
+		hash_init(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER);
 
 		for_each_worker_pool(pool, gcwq) {
 			pool->gcwq = gcwq;
-- 
1.7.8.6

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 4/7] workqueue: use new hashtable implementation
  2012-08-07  0:45 ` Sasha Levin
@ 2012-08-07  0:45   ` Sasha Levin
  -1 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

From: Sasha Levin <sasha.levin@oracle.com>

Switch workqueues to use the new hashtable implementation. This reduces the amount of
generic unrelated code in the workqueues.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 kernel/workqueue.c |   91 +++++++++++-----------------------------------------
 1 files changed, 19 insertions(+), 72 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 692d976..edc7fd0 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -41,6 +41,7 @@
 #include <linux/debug_locks.h>
 #include <linux/lockdep.h>
 #include <linux/idr.h>
+#include <linux/hashtable.h>
 
 #include "workqueue_sched.h"
 
@@ -82,8 +83,6 @@ enum {
 	NR_WORKER_POOLS		= 2,		/* # worker pools per gcwq */
 
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
-	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
-	BUSY_WORKER_HASH_MASK	= BUSY_WORKER_HASH_SIZE - 1,
 
 	MAX_IDLE_WORKERS_RATIO	= 4,		/* 1/4 of busy can be idle */
 	IDLE_WORKER_TIMEOUT	= 300 * HZ,	/* keep idle ones for 5 mins */
@@ -180,7 +179,7 @@ struct global_cwq {
 	unsigned int		flags;		/* L: GCWQ_* flags */
 
 	/* workers are chained either in busy_hash or pool idle_list */
-	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
+	DEFINE_HASHTABLE(busy_hash, BUSY_WORKER_HASH_ORDER);
 						/* L: hash of busy workers */
 
 	struct worker_pool	pools[2];	/* normal and highpri pools */
@@ -288,8 +287,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
 
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
-	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
-		hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
+	hash_for_each(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER, i, pos,	\
+		worker, hentry)
 
 static inline int __next_gcwq_cpu(int cpu, const struct cpumask *mask,
 				  unsigned int sw)
@@ -822,63 +821,6 @@ static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 }
 
 /**
- * busy_worker_head - return the busy hash head for a work
- * @gcwq: gcwq of interest
- * @work: work to be hashed
- *
- * Return hash head of @gcwq for @work.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to the hash head.
- */
-static struct hlist_head *busy_worker_head(struct global_cwq *gcwq,
-					   struct work_struct *work)
-{
-	const int base_shift = ilog2(sizeof(struct work_struct));
-	unsigned long v = (unsigned long)work;
-
-	/* simple shift and fold hash, do we need something better? */
-	v >>= base_shift;
-	v += v >> BUSY_WORKER_HASH_ORDER;
-	v &= BUSY_WORKER_HASH_MASK;
-
-	return &gcwq->busy_hash[v];
-}
-
-/**
- * __find_worker_executing_work - find worker which is executing a work
- * @gcwq: gcwq of interest
- * @bwh: hash head as returned by busy_worker_head()
- * @work: work to find worker for
- *
- * Find a worker which is executing @work on @gcwq.  @bwh should be
- * the hash head obtained by calling busy_worker_head() with the same
- * work.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to worker which is executing @work if found, NULL
- * otherwise.
- */
-static struct worker *__find_worker_executing_work(struct global_cwq *gcwq,
-						   struct hlist_head *bwh,
-						   struct work_struct *work)
-{
-	struct worker *worker;
-	struct hlist_node *tmp;
-
-	hlist_for_each_entry(worker, tmp, bwh, hentry)
-		if (worker->current_work == work)
-			return worker;
-	return NULL;
-}
-
-/**
  * find_worker_executing_work - find worker which is executing a work
  * @gcwq: gcwq of interest
  * @work: work to find worker for
@@ -897,8 +839,15 @@ static struct worker *__find_worker_executing_work(struct global_cwq *gcwq,
 static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 						 struct work_struct *work)
 {
-	return __find_worker_executing_work(gcwq, busy_worker_head(gcwq, work),
-					    work);
+	struct worker *worker;
+	struct hlist_node *tmp;
+
+	hash_for_each_possible(gcwq->busy_hash, worker, BUSY_WORKER_HASH_ORDER,
+								tmp, hentry, work)
+		if (worker->current_work == work)
+			return worker;
+
+	return NULL;
 }
 
 /**
@@ -1916,7 +1865,7 @@ static void cwq_dec_nr_in_flight(struct cpu_workqueue_struct *cwq, int color,
  * @worker: self
  * @work: work to process
  *
- * Process @work.  This function contains all the logics necessary to
+ * Process @work.  This? function contains all the logics necessary to
  * process a single work including synchronization against and
  * interaction with other workers on the same cpu, queueing and
  * flushing.  As long as context requirement is met, any worker can
@@ -1932,7 +1881,6 @@ __acquires(&gcwq->lock)
 	struct cpu_workqueue_struct *cwq = get_work_cwq(work);
 	struct worker_pool *pool = worker->pool;
 	struct global_cwq *gcwq = pool->gcwq;
-	struct hlist_head *bwh = busy_worker_head(gcwq, work);
 	bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
 	work_func_t f = work->func;
 	int work_color;
@@ -1964,7 +1912,7 @@ __acquires(&gcwq->lock)
 	 * already processing the work.  If so, defer the work to the
 	 * currently executing one.
 	 */
-	collision = __find_worker_executing_work(gcwq, bwh, work);
+	collision = find_worker_executing_work(gcwq, work);
 	if (unlikely(collision)) {
 		move_linked_works(work, &collision->scheduled, NULL);
 		return;
@@ -1972,7 +1920,8 @@ __acquires(&gcwq->lock)
 
 	/* claim and process */
 	debug_work_deactivate(work);
-	hlist_add_head(&worker->hentry, bwh);
+	hash_add(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER, &worker->hentry,
+			(unsigned long)worker);
 	worker->current_work = work;
 	worker->current_cwq = cwq;
 	work_color = get_work_color(work);
@@ -2027,7 +1976,7 @@ __acquires(&gcwq->lock)
 		worker_clr_flags(worker, WORKER_CPU_INTENSIVE);
 
 	/* we're done with it, release */
-	hlist_del_init(&worker->hentry);
+	hash_del(&worker->hentry);
 	worker->current_work = NULL;
 	worker->current_cwq = NULL;
 	cwq_dec_nr_in_flight(cwq, work_color, false);
@@ -3690,7 +3639,6 @@ out_unlock:
 static int __init init_workqueues(void)
 {
 	unsigned int cpu;
-	int i;
 
 	cpu_notifier(workqueue_cpu_up_callback, CPU_PRI_WORKQUEUE_UP);
 	cpu_notifier(workqueue_cpu_down_callback, CPU_PRI_WORKQUEUE_DOWN);
@@ -3704,8 +3652,7 @@ static int __init init_workqueues(void)
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
-			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
+		hash_init(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER);
 
 		for_each_worker_pool(pool, gcwq) {
 			pool->gcwq = gcwq;
-- 
1.7.8.6


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 4/7] workqueue: use new hashtable implementation
@ 2012-08-07  0:45   ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

From: Sasha Levin <sasha.levin@oracle.com>

Switch workqueues to use the new hashtable implementation. This reduces the amount of
generic unrelated code in the workqueues.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 kernel/workqueue.c |   91 +++++++++++-----------------------------------------
 1 files changed, 19 insertions(+), 72 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 692d976..edc7fd0 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -41,6 +41,7 @@
 #include <linux/debug_locks.h>
 #include <linux/lockdep.h>
 #include <linux/idr.h>
+#include <linux/hashtable.h>
 
 #include "workqueue_sched.h"
 
@@ -82,8 +83,6 @@ enum {
 	NR_WORKER_POOLS		= 2,		/* # worker pools per gcwq */
 
 	BUSY_WORKER_HASH_ORDER	= 6,		/* 64 pointers */
-	BUSY_WORKER_HASH_SIZE	= 1 << BUSY_WORKER_HASH_ORDER,
-	BUSY_WORKER_HASH_MASK	= BUSY_WORKER_HASH_SIZE - 1,
 
 	MAX_IDLE_WORKERS_RATIO	= 4,		/* 1/4 of busy can be idle */
 	IDLE_WORKER_TIMEOUT	= 300 * HZ,	/* keep idle ones for 5 mins */
@@ -180,7 +179,7 @@ struct global_cwq {
 	unsigned int		flags;		/* L: GCWQ_* flags */
 
 	/* workers are chained either in busy_hash or pool idle_list */
-	struct hlist_head	busy_hash[BUSY_WORKER_HASH_SIZE];
+	DEFINE_HASHTABLE(busy_hash, BUSY_WORKER_HASH_ORDER);
 						/* L: hash of busy workers */
 
 	struct worker_pool	pools[2];	/* normal and highpri pools */
@@ -288,8 +287,8 @@ EXPORT_SYMBOL_GPL(system_nrt_freezable_wq);
 	     (pool) < &(gcwq)->pools[NR_WORKER_POOLS]; (pool)++)
 
 #define for_each_busy_worker(worker, i, pos, gcwq)			\
-	for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)			\
-		hlist_for_each_entry(worker, pos, &gcwq->busy_hash[i], hentry)
+	hash_for_each(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER, i, pos,	\
+		worker, hentry)
 
 static inline int __next_gcwq_cpu(int cpu, const struct cpumask *mask,
 				  unsigned int sw)
@@ -822,63 +821,6 @@ static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 }
 
 /**
- * busy_worker_head - return the busy hash head for a work
- * @gcwq: gcwq of interest
- * @work: work to be hashed
- *
- * Return hash head of @gcwq for @work.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to the hash head.
- */
-static struct hlist_head *busy_worker_head(struct global_cwq *gcwq,
-					   struct work_struct *work)
-{
-	const int base_shift = ilog2(sizeof(struct work_struct));
-	unsigned long v = (unsigned long)work;
-
-	/* simple shift and fold hash, do we need something better? */
-	v >>= base_shift;
-	v += v >> BUSY_WORKER_HASH_ORDER;
-	v &= BUSY_WORKER_HASH_MASK;
-
-	return &gcwq->busy_hash[v];
-}
-
-/**
- * __find_worker_executing_work - find worker which is executing a work
- * @gcwq: gcwq of interest
- * @bwh: hash head as returned by busy_worker_head()
- * @work: work to find worker for
- *
- * Find a worker which is executing @work on @gcwq.  @bwh should be
- * the hash head obtained by calling busy_worker_head() with the same
- * work.
- *
- * CONTEXT:
- * spin_lock_irq(gcwq->lock).
- *
- * RETURNS:
- * Pointer to worker which is executing @work if found, NULL
- * otherwise.
- */
-static struct worker *__find_worker_executing_work(struct global_cwq *gcwq,
-						   struct hlist_head *bwh,
-						   struct work_struct *work)
-{
-	struct worker *worker;
-	struct hlist_node *tmp;
-
-	hlist_for_each_entry(worker, tmp, bwh, hentry)
-		if (worker->current_work == work)
-			return worker;
-	return NULL;
-}
-
-/**
  * find_worker_executing_work - find worker which is executing a work
  * @gcwq: gcwq of interest
  * @work: work to find worker for
@@ -897,8 +839,15 @@ static struct worker *__find_worker_executing_work(struct global_cwq *gcwq,
 static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
 						 struct work_struct *work)
 {
-	return __find_worker_executing_work(gcwq, busy_worker_head(gcwq, work),
-					    work);
+	struct worker *worker;
+	struct hlist_node *tmp;
+
+	hash_for_each_possible(gcwq->busy_hash, worker, BUSY_WORKER_HASH_ORDER,
+								tmp, hentry, work)
+		if (worker->current_work == work)
+			return worker;
+
+	return NULL;
 }
 
 /**
@@ -1916,7 +1865,7 @@ static void cwq_dec_nr_in_flight(struct cpu_workqueue_struct *cwq, int color,
  * @worker: self
  * @work: work to process
  *
- * Process @work.  This function contains all the logics necessary to
+ * Process @work.  This? function contains all the logics necessary to
  * process a single work including synchronization against and
  * interaction with other workers on the same cpu, queueing and
  * flushing.  As long as context requirement is met, any worker can
@@ -1932,7 +1881,6 @@ __acquires(&gcwq->lock)
 	struct cpu_workqueue_struct *cwq = get_work_cwq(work);
 	struct worker_pool *pool = worker->pool;
 	struct global_cwq *gcwq = pool->gcwq;
-	struct hlist_head *bwh = busy_worker_head(gcwq, work);
 	bool cpu_intensive = cwq->wq->flags & WQ_CPU_INTENSIVE;
 	work_func_t f = work->func;
 	int work_color;
@@ -1964,7 +1912,7 @@ __acquires(&gcwq->lock)
 	 * already processing the work.  If so, defer the work to the
 	 * currently executing one.
 	 */
-	collision = __find_worker_executing_work(gcwq, bwh, work);
+	collision = find_worker_executing_work(gcwq, work);
 	if (unlikely(collision)) {
 		move_linked_works(work, &collision->scheduled, NULL);
 		return;
@@ -1972,7 +1920,8 @@ __acquires(&gcwq->lock)
 
 	/* claim and process */
 	debug_work_deactivate(work);
-	hlist_add_head(&worker->hentry, bwh);
+	hash_add(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER, &worker->hentry,
+			(unsigned long)worker);
 	worker->current_work = work;
 	worker->current_cwq = cwq;
 	work_color = get_work_color(work);
@@ -2027,7 +1976,7 @@ __acquires(&gcwq->lock)
 		worker_clr_flags(worker, WORKER_CPU_INTENSIVE);
 
 	/* we're done with it, release */
-	hlist_del_init(&worker->hentry);
+	hash_del(&worker->hentry);
 	worker->current_work = NULL;
 	worker->current_cwq = NULL;
 	cwq_dec_nr_in_flight(cwq, work_color, false);
@@ -3690,7 +3639,6 @@ out_unlock:
 static int __init init_workqueues(void)
 {
 	unsigned int cpu;
-	int i;
 
 	cpu_notifier(workqueue_cpu_up_callback, CPU_PRI_WORKQUEUE_UP);
 	cpu_notifier(workqueue_cpu_down_callback, CPU_PRI_WORKQUEUE_DOWN);
@@ -3704,8 +3652,7 @@ static int __init init_workqueues(void)
 		gcwq->cpu = cpu;
 		gcwq->flags |= GCWQ_DISASSOCIATED;
 
-		for (i = 0; i < BUSY_WORKER_HASH_SIZE; i++)
-			INIT_HLIST_HEAD(&gcwq->busy_hash[i]);
+		hash_init(gcwq->busy_hash, BUSY_WORKER_HASH_ORDER);
 
 		for_each_worker_pool(pool, gcwq) {
 			pool->gcwq = gcwq;
-- 
1.7.8.6

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 5/7] mm/huge_memory: use new hashtable implementation
  2012-08-07  0:45 ` Sasha Levin
@ 2012-08-07  0:45   ` Sasha Levin
  -1 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch hugemem to use the new hashtable implementation. This reduces the amount of
generic unrelated code in the hugemem.

This also removes the dymanic allocation of the hash table. The size of the table is
constant so there's no point in paying the price of an extra dereference when accessing
it.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 mm/huge_memory.c |   56 +++++++++++++----------------------------------------
 1 files changed, 14 insertions(+), 42 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 57c4b93..e4c752d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -17,6 +17,7 @@
 #include <linux/khugepaged.h>
 #include <linux/freezer.h>
 #include <linux/mman.h>
+#include <linux/hashtable.h>
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
 #include "internal.h"
@@ -57,12 +58,12 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
 static unsigned int khugepaged_max_ptes_none __read_mostly = HPAGE_PMD_NR-1;
 
 static int khugepaged(void *none);
-static int mm_slots_hash_init(void);
 static int khugepaged_slab_init(void);
 static void khugepaged_slab_free(void);
 
-#define MM_SLOTS_HASH_HEADS 1024
-static struct hlist_head *mm_slots_hash __read_mostly;
+#define MM_SLOTS_HASH_BITS 10
+static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
+
 static struct kmem_cache *mm_slot_cache __read_mostly;
 
 /**
@@ -140,7 +141,7 @@ static int start_khugepaged(void)
 	int err = 0;
 	if (khugepaged_enabled()) {
 		int wakeup;
-		if (unlikely(!mm_slot_cache || !mm_slots_hash)) {
+		if (unlikely(!mm_slot_cache)) {
 			err = -ENOMEM;
 			goto out;
 		}
@@ -554,12 +555,6 @@ static int __init hugepage_init(void)
 	if (err)
 		goto out;
 
-	err = mm_slots_hash_init();
-	if (err) {
-		khugepaged_slab_free();
-		goto out;
-	}
-
 	/*
 	 * By default disable transparent hugepages on smaller systems,
 	 * where the extra memory used could hurt more than TLB overhead
@@ -1562,47 +1557,24 @@ static inline void free_mm_slot(struct mm_slot *mm_slot)
 	kmem_cache_free(mm_slot_cache, mm_slot);
 }
 
-static int __init mm_slots_hash_init(void)
-{
-	mm_slots_hash = kzalloc(MM_SLOTS_HASH_HEADS * sizeof(struct hlist_head),
-				GFP_KERNEL);
-	if (!mm_slots_hash)
-		return -ENOMEM;
-	return 0;
-}
-
-#if 0
-static void __init mm_slots_hash_free(void)
-{
-	kfree(mm_slots_hash);
-	mm_slots_hash = NULL;
-}
-#endif
-
 static struct mm_slot *get_mm_slot(struct mm_struct *mm)
 {
-	struct mm_slot *mm_slot;
-	struct hlist_head *bucket;
+	struct mm_slot *slot;
 	struct hlist_node *node;
 
-	bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
-				% MM_SLOTS_HASH_HEADS];
-	hlist_for_each_entry(mm_slot, node, bucket, hash) {
-		if (mm == mm_slot->mm)
-			return mm_slot;
-	}
+	hash_for_each_possible(mm_slots_hash, slot, MM_SLOTS_HASH_BITS, node,
+				hash, (unsigned long) mm)
+		if (slot->mm == mm)
+			return slot;
+
 	return NULL;
 }
 
 static void insert_to_mm_slots_hash(struct mm_struct *mm,
 				    struct mm_slot *mm_slot)
 {
-	struct hlist_head *bucket;
-
-	bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
-				% MM_SLOTS_HASH_HEADS];
 	mm_slot->mm = mm;
-	hlist_add_head(&mm_slot->hash, bucket);
+	hash_add(mm_slots_hash, MM_SLOTS_HASH_BITS, &mm_slot->hash, (long)mm);
 }
 
 static inline int khugepaged_test_exit(struct mm_struct *mm)
@@ -1675,7 +1647,7 @@ void __khugepaged_exit(struct mm_struct *mm)
 	spin_lock(&khugepaged_mm_lock);
 	mm_slot = get_mm_slot(mm);
 	if (mm_slot && khugepaged_scan.mm_slot != mm_slot) {
-		hlist_del(&mm_slot->hash);
+		hash_del(&mm_slot->hash);
 		list_del(&mm_slot->mm_node);
 		free = 1;
 	}
@@ -2089,7 +2061,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
 
 	if (khugepaged_test_exit(mm)) {
 		/* free mm_slot */
-		hlist_del(&mm_slot->hash);
+		hash_del(&mm_slot->hash);
 		list_del(&mm_slot->mm_node);
 
 		/*
-- 
1.7.8.6


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 5/7] mm/huge_memory: use new hashtable implementation
@ 2012-08-07  0:45   ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch hugemem to use the new hashtable implementation. This reduces the amount of
generic unrelated code in the hugemem.

This also removes the dymanic allocation of the hash table. The size of the table is
constant so there's no point in paying the price of an extra dereference when accessing
it.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 mm/huge_memory.c |   56 +++++++++++++----------------------------------------
 1 files changed, 14 insertions(+), 42 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 57c4b93..e4c752d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -17,6 +17,7 @@
 #include <linux/khugepaged.h>
 #include <linux/freezer.h>
 #include <linux/mman.h>
+#include <linux/hashtable.h>
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
 #include "internal.h"
@@ -57,12 +58,12 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
 static unsigned int khugepaged_max_ptes_none __read_mostly = HPAGE_PMD_NR-1;
 
 static int khugepaged(void *none);
-static int mm_slots_hash_init(void);
 static int khugepaged_slab_init(void);
 static void khugepaged_slab_free(void);
 
-#define MM_SLOTS_HASH_HEADS 1024
-static struct hlist_head *mm_slots_hash __read_mostly;
+#define MM_SLOTS_HASH_BITS 10
+static DEFINE_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
+
 static struct kmem_cache *mm_slot_cache __read_mostly;
 
 /**
@@ -140,7 +141,7 @@ static int start_khugepaged(void)
 	int err = 0;
 	if (khugepaged_enabled()) {
 		int wakeup;
-		if (unlikely(!mm_slot_cache || !mm_slots_hash)) {
+		if (unlikely(!mm_slot_cache)) {
 			err = -ENOMEM;
 			goto out;
 		}
@@ -554,12 +555,6 @@ static int __init hugepage_init(void)
 	if (err)
 		goto out;
 
-	err = mm_slots_hash_init();
-	if (err) {
-		khugepaged_slab_free();
-		goto out;
-	}
-
 	/*
 	 * By default disable transparent hugepages on smaller systems,
 	 * where the extra memory used could hurt more than TLB overhead
@@ -1562,47 +1557,24 @@ static inline void free_mm_slot(struct mm_slot *mm_slot)
 	kmem_cache_free(mm_slot_cache, mm_slot);
 }
 
-static int __init mm_slots_hash_init(void)
-{
-	mm_slots_hash = kzalloc(MM_SLOTS_HASH_HEADS * sizeof(struct hlist_head),
-				GFP_KERNEL);
-	if (!mm_slots_hash)
-		return -ENOMEM;
-	return 0;
-}
-
-#if 0
-static void __init mm_slots_hash_free(void)
-{
-	kfree(mm_slots_hash);
-	mm_slots_hash = NULL;
-}
-#endif
-
 static struct mm_slot *get_mm_slot(struct mm_struct *mm)
 {
-	struct mm_slot *mm_slot;
-	struct hlist_head *bucket;
+	struct mm_slot *slot;
 	struct hlist_node *node;
 
-	bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
-				% MM_SLOTS_HASH_HEADS];
-	hlist_for_each_entry(mm_slot, node, bucket, hash) {
-		if (mm == mm_slot->mm)
-			return mm_slot;
-	}
+	hash_for_each_possible(mm_slots_hash, slot, MM_SLOTS_HASH_BITS, node,
+				hash, (unsigned long) mm)
+		if (slot->mm == mm)
+			return slot;
+
 	return NULL;
 }
 
 static void insert_to_mm_slots_hash(struct mm_struct *mm,
 				    struct mm_slot *mm_slot)
 {
-	struct hlist_head *bucket;
-
-	bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
-				% MM_SLOTS_HASH_HEADS];
 	mm_slot->mm = mm;
-	hlist_add_head(&mm_slot->hash, bucket);
+	hash_add(mm_slots_hash, MM_SLOTS_HASH_BITS, &mm_slot->hash, (long)mm);
 }
 
 static inline int khugepaged_test_exit(struct mm_struct *mm)
@@ -1675,7 +1647,7 @@ void __khugepaged_exit(struct mm_struct *mm)
 	spin_lock(&khugepaged_mm_lock);
 	mm_slot = get_mm_slot(mm);
 	if (mm_slot && khugepaged_scan.mm_slot != mm_slot) {
-		hlist_del(&mm_slot->hash);
+		hash_del(&mm_slot->hash);
 		list_del(&mm_slot->mm_node);
 		free = 1;
 	}
@@ -2089,7 +2061,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
 
 	if (khugepaged_test_exit(mm)) {
 		/* free mm_slot */
-		hlist_del(&mm_slot->hash);
+		hash_del(&mm_slot->hash);
 		list_del(&mm_slot->mm_node);
 
 		/*
-- 
1.7.8.6

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 6/7] tracepoint: use new hashtable implementation
  2012-08-07  0:45 ` Sasha Levin
@ 2012-08-07  0:45   ` Sasha Levin
  -1 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch tracepoints to use the new hashtable implementation. This reduces the amount of
generic unrelated code in the tracepoints.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 kernel/tracepoint.c |   26 ++++++++++----------------
 1 files changed, 10 insertions(+), 16 deletions(-)

diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index d96ba22..5a44949 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -26,6 +26,7 @@
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/static_key.h>
+#include <linux/hashtable.h>
 
 extern struct tracepoint * const __start___tracepoints_ptrs[];
 extern struct tracepoint * const __stop___tracepoints_ptrs[];
@@ -49,8 +50,7 @@ static LIST_HEAD(tracepoint_module_list);
  * Protected by tracepoints_mutex.
  */
 #define TRACEPOINT_HASH_BITS 6
-#define TRACEPOINT_TABLE_SIZE (1 << TRACEPOINT_HASH_BITS)
-static struct hlist_head tracepoint_table[TRACEPOINT_TABLE_SIZE];
+static DEFINE_HASHTABLE(tracepoint_table, TRACEPOINT_HASH_BITS);
 
 /*
  * Note about RCU :
@@ -191,16 +191,16 @@ tracepoint_entry_remove_probe(struct tracepoint_entry *entry,
  */
 static struct tracepoint_entry *get_tracepoint(const char *name)
 {
-	struct hlist_head *head;
 	struct hlist_node *node;
 	struct tracepoint_entry *e;
 	u32 hash = jhash(name, strlen(name), 0);
 
-	head = &tracepoint_table[hash & (TRACEPOINT_TABLE_SIZE - 1)];
-	hlist_for_each_entry(e, node, head, hlist) {
+	hash_for_each_possible(tracepoint_table, e, TRACEPOINT_HASH_BITS,
+				node, hlist, hash) {
 		if (!strcmp(name, e->name))
 			return e;
 	}
+
 	return NULL;
 }
 
@@ -210,19 +210,13 @@ static struct tracepoint_entry *get_tracepoint(const char *name)
  */
 static struct tracepoint_entry *add_tracepoint(const char *name)
 {
-	struct hlist_head *head;
-	struct hlist_node *node;
 	struct tracepoint_entry *e;
 	size_t name_len = strlen(name) + 1;
 	u32 hash = jhash(name, name_len-1, 0);
 
-	head = &tracepoint_table[hash & (TRACEPOINT_TABLE_SIZE - 1)];
-	hlist_for_each_entry(e, node, head, hlist) {
-		if (!strcmp(name, e->name)) {
-			printk(KERN_NOTICE
-				"tracepoint %s busy\n", name);
-			return ERR_PTR(-EEXIST);	/* Already there */
-		}
+	if (get_tracepoint(name)) {
+		printk(KERN_NOTICE "tracepoint %s busy\n", name);
+		return ERR_PTR(-EEXIST);	/* Already there */
 	}
 	/*
 	 * Using kmalloc here to allocate a variable length element. Could
@@ -234,7 +228,7 @@ static struct tracepoint_entry *add_tracepoint(const char *name)
 	memcpy(&e->name[0], name, name_len);
 	e->funcs = NULL;
 	e->refcount = 0;
-	hlist_add_head(&e->hlist, head);
+	hash_add(tracepoint_table, TRACEPOINT_HASH_BITS, &e->hlist, hash);
 	return e;
 }
 
@@ -244,7 +238,7 @@ static struct tracepoint_entry *add_tracepoint(const char *name)
  */
 static inline void remove_tracepoint(struct tracepoint_entry *e)
 {
-	hlist_del(&e->hlist);
+	hash_del(&e->hlist);
 	kfree(e);
 }
 
-- 
1.7.8.6


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 6/7] tracepoint: use new hashtable implementation
@ 2012-08-07  0:45   ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch tracepoints to use the new hashtable implementation. This reduces the amount of
generic unrelated code in the tracepoints.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 kernel/tracepoint.c |   26 ++++++++++----------------
 1 files changed, 10 insertions(+), 16 deletions(-)

diff --git a/kernel/tracepoint.c b/kernel/tracepoint.c
index d96ba22..5a44949 100644
--- a/kernel/tracepoint.c
+++ b/kernel/tracepoint.c
@@ -26,6 +26,7 @@
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/static_key.h>
+#include <linux/hashtable.h>
 
 extern struct tracepoint * const __start___tracepoints_ptrs[];
 extern struct tracepoint * const __stop___tracepoints_ptrs[];
@@ -49,8 +50,7 @@ static LIST_HEAD(tracepoint_module_list);
  * Protected by tracepoints_mutex.
  */
 #define TRACEPOINT_HASH_BITS 6
-#define TRACEPOINT_TABLE_SIZE (1 << TRACEPOINT_HASH_BITS)
-static struct hlist_head tracepoint_table[TRACEPOINT_TABLE_SIZE];
+static DEFINE_HASHTABLE(tracepoint_table, TRACEPOINT_HASH_BITS);
 
 /*
  * Note about RCU :
@@ -191,16 +191,16 @@ tracepoint_entry_remove_probe(struct tracepoint_entry *entry,
  */
 static struct tracepoint_entry *get_tracepoint(const char *name)
 {
-	struct hlist_head *head;
 	struct hlist_node *node;
 	struct tracepoint_entry *e;
 	u32 hash = jhash(name, strlen(name), 0);
 
-	head = &tracepoint_table[hash & (TRACEPOINT_TABLE_SIZE - 1)];
-	hlist_for_each_entry(e, node, head, hlist) {
+	hash_for_each_possible(tracepoint_table, e, TRACEPOINT_HASH_BITS,
+				node, hlist, hash) {
 		if (!strcmp(name, e->name))
 			return e;
 	}
+
 	return NULL;
 }
 
@@ -210,19 +210,13 @@ static struct tracepoint_entry *get_tracepoint(const char *name)
  */
 static struct tracepoint_entry *add_tracepoint(const char *name)
 {
-	struct hlist_head *head;
-	struct hlist_node *node;
 	struct tracepoint_entry *e;
 	size_t name_len = strlen(name) + 1;
 	u32 hash = jhash(name, name_len-1, 0);
 
-	head = &tracepoint_table[hash & (TRACEPOINT_TABLE_SIZE - 1)];
-	hlist_for_each_entry(e, node, head, hlist) {
-		if (!strcmp(name, e->name)) {
-			printk(KERN_NOTICE
-				"tracepoint %s busy\n", name);
-			return ERR_PTR(-EEXIST);	/* Already there */
-		}
+	if (get_tracepoint(name)) {
+		printk(KERN_NOTICE "tracepoint %s busy\n", name);
+		return ERR_PTR(-EEXIST);	/* Already there */
 	}
 	/*
 	 * Using kmalloc here to allocate a variable length element. Could
@@ -234,7 +228,7 @@ static struct tracepoint_entry *add_tracepoint(const char *name)
 	memcpy(&e->name[0], name, name_len);
 	e->funcs = NULL;
 	e->refcount = 0;
-	hlist_add_head(&e->hlist, head);
+	hash_add(tracepoint_table, TRACEPOINT_HASH_BITS, &e->hlist, hash);
 	return e;
 }
 
@@ -244,7 +238,7 @@ static struct tracepoint_entry *add_tracepoint(const char *name)
  */
 static inline void remove_tracepoint(struct tracepoint_entry *e)
 {
-	hlist_del(&e->hlist);
+	hash_del(&e->hlist);
 	kfree(e);
 }
 
-- 
1.7.8.6

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 7/7] net,9p: use new hashtable implementation
  2012-08-07  0:45 ` Sasha Levin
@ 2012-08-07  0:45   ` Sasha Levin
  -1 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch 9p error table to use the new hashtable implementation. This reduces the amount of
generic unrelated code in 9p.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 net/9p/error.c |   21 ++++++++++-----------
 1 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/net/9p/error.c b/net/9p/error.c
index 2ab2de7..f712344d 100644
--- a/net/9p/error.c
+++ b/net/9p/error.c
@@ -34,7 +34,7 @@
 #include <linux/jhash.h>
 #include <linux/errno.h>
 #include <net/9p/9p.h>
-
+#include <linux/hashtable.h>
 /**
  * struct errormap - map string errors from Plan 9 to Linux numeric ids
  * @name: string sent over 9P
@@ -50,8 +50,8 @@ struct errormap {
 	struct hlist_node list;
 };
 
-#define ERRHASHSZ		32
-static struct hlist_head hash_errmap[ERRHASHSZ];
+#define ERR_HASH_BITS 5
+static DEFINE_HASHTABLE(hash_errmap, ERR_HASH_BITS);
 
 /* FixMe - reduce to a reasonable size */
 static struct errormap errmap[] = {
@@ -193,18 +193,17 @@ static struct errormap errmap[] = {
 int p9_error_init(void)
 {
 	struct errormap *c;
-	int bucket;
+	u32 hash;
 
 	/* initialize hash table */
-	for (bucket = 0; bucket < ERRHASHSZ; bucket++)
-		INIT_HLIST_HEAD(&hash_errmap[bucket]);
+	hash_init(hash_errmap, ERR_HASH_BITS);
 
 	/* load initial error map into hash table */
 	for (c = errmap; c->name != NULL; c++) {
 		c->namelen = strlen(c->name);
-		bucket = jhash(c->name, c->namelen, 0) % ERRHASHSZ;
+		hash = jhash(c->name, c->namelen, 0);
 		INIT_HLIST_NODE(&c->list);
-		hlist_add_head(&c->list, &hash_errmap[bucket]);
+		hash_add(hash_errmap, ERR_HASH_BITS, &c->list, hash);
 	}
 
 	return 1;
@@ -223,13 +222,13 @@ int p9_errstr2errno(char *errstr, int len)
 	int errno;
 	struct hlist_node *p;
 	struct errormap *c;
-	int bucket;
+	u32 hash;
 
 	errno = 0;
 	p = NULL;
 	c = NULL;
-	bucket = jhash(errstr, len, 0) % ERRHASHSZ;
-	hlist_for_each_entry(c, p, &hash_errmap[bucket], list) {
+	hash = jhash(errstr, len, 0);
+	hash_for_each_possible(hash_errmap, c, ERR_HASH_BITS, p, list, hash) {
 		if (c->namelen == len && !memcmp(c->name, errstr, len)) {
 			errno = c->val;
 			break;
-- 
1.7.8.6


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC v3 7/7] net,9p: use new hashtable implementation
@ 2012-08-07  0:45   ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  0:45 UTC (permalink / raw)
  To: torvalds
  Cc: tj, akpm, linux-kernel, linux-mm, paul.gortmaker, davem, rostedt,
	mingo, ebiederm, aarcange, ericvh, netdev, josh, eric.dumazet,
	mathieu.desnoyers, Sasha Levin

Switch 9p error table to use the new hashtable implementation. This reduces the amount of
generic unrelated code in 9p.

Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 net/9p/error.c |   21 ++++++++++-----------
 1 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/net/9p/error.c b/net/9p/error.c
index 2ab2de7..f712344d 100644
--- a/net/9p/error.c
+++ b/net/9p/error.c
@@ -34,7 +34,7 @@
 #include <linux/jhash.h>
 #include <linux/errno.h>
 #include <net/9p/9p.h>
-
+#include <linux/hashtable.h>
 /**
  * struct errormap - map string errors from Plan 9 to Linux numeric ids
  * @name: string sent over 9P
@@ -50,8 +50,8 @@ struct errormap {
 	struct hlist_node list;
 };
 
-#define ERRHASHSZ		32
-static struct hlist_head hash_errmap[ERRHASHSZ];
+#define ERR_HASH_BITS 5
+static DEFINE_HASHTABLE(hash_errmap, ERR_HASH_BITS);
 
 /* FixMe - reduce to a reasonable size */
 static struct errormap errmap[] = {
@@ -193,18 +193,17 @@ static struct errormap errmap[] = {
 int p9_error_init(void)
 {
 	struct errormap *c;
-	int bucket;
+	u32 hash;
 
 	/* initialize hash table */
-	for (bucket = 0; bucket < ERRHASHSZ; bucket++)
-		INIT_HLIST_HEAD(&hash_errmap[bucket]);
+	hash_init(hash_errmap, ERR_HASH_BITS);
 
 	/* load initial error map into hash table */
 	for (c = errmap; c->name != NULL; c++) {
 		c->namelen = strlen(c->name);
-		bucket = jhash(c->name, c->namelen, 0) % ERRHASHSZ;
+		hash = jhash(c->name, c->namelen, 0);
 		INIT_HLIST_NODE(&c->list);
-		hlist_add_head(&c->list, &hash_errmap[bucket]);
+		hash_add(hash_errmap, ERR_HASH_BITS, &c->list, hash);
 	}
 
 	return 1;
@@ -223,13 +222,13 @@ int p9_errstr2errno(char *errstr, int len)
 	int errno;
 	struct hlist_node *p;
 	struct errormap *c;
-	int bucket;
+	u32 hash;
 
 	errno = 0;
 	p = NULL;
 	c = NULL;
-	bucket = jhash(errstr, len, 0) % ERRHASHSZ;
-	hlist_for_each_entry(c, p, &hash_errmap[bucket], list) {
+	hash = jhash(errstr, len, 0);
+	hash_for_each_possible(hash_errmap, c, ERR_HASH_BITS, p, list, hash) {
 		if (c->namelen == len && !memcmp(c->name, errstr, len)) {
 			errno = c->val;
 			break;
-- 
1.7.8.6

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [RFC v3 1/7] hashtable: introduce a small and naive hashtable
  2012-08-07  0:45   ` Sasha Levin
  (?)
@ 2012-08-07  1:19     ` Joe Perches
  -1 siblings, 0 replies; 32+ messages in thread
From: Joe Perches @ 2012-08-07  1:19 UTC (permalink / raw)
  To: Sasha Levin
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev, josh,
	eric.dumazet, mathieu.desnoyers

On Tue, 2012-08-07 at 02:45 +0200, Sasha Levin wrote:
> This hashtable implementation is using hlist buckets to provide a simple
> hashtable to prevent it from getting reimplemented all over the kernel.

> diff --git a/include/linux/hashtable.h b/include/linux/hashtable.h

Just trivial style notes and a typo

> +/* Use hash_32 when possible to allow for fast 32bit hashing in 64bit kernels. */
> +#define hash_min(val, bits) ((sizeof(val)==4)?hash_32((val), (bits)):hash_long((val), (bits)))

This is a pretty long line.  It doesn't use normal kernel spacing
style and it has unnecessary parentheses.

Maybe:

#define hash_min(val, bits)						\
	(sizeof(val) == 4 ? hash_32(val, bits) : hash_long(val, bits))

> +
> +/**
> + * hash_init - initialize a hash table
> + * @hashtable: hashtable to be initialized
> + * @bits: bit count of hashing function
> + *
> + * Initializes a hash table with 2**bits buckets.
> + */
> +static inline void hash_init(struct hlist_head *hashtable, int bits)
> +{
> +	int i;
> +
> +	for (i = 0; i < HASH_SIZE(bits); i++)
> +		INIT_HLIST_HEAD(hashtable + i);
> +}

Maybe use a struct hlist_head *last_hash_entry as a loop variable

{
	struct hlist_head *eo_hash = hashtable + HASH_SIZE(bits);

	while (hashtable < eo_hash)
		INIT_HLIST_HEAD(hashtable++);
}

The compiler might generate the same code anyway...

[]

> +/**
> + * hash_for_each_possible - iterate over all possible objects for a giver key
> + * @name: hashtable to iterate
> + * @obj: the type * to use as a loop cursor for each bucke

bucket



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 1/7] hashtable: introduce a small and naive hashtable
@ 2012-08-07  1:19     ` Joe Perches
  0 siblings, 0 replies; 32+ messages in thread
From: Joe Perches @ 2012-08-07  1:19 UTC (permalink / raw)
  To: Sasha Levin
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev, josh,
	eric.dumazet, mathieu.desnoyers

On Tue, 2012-08-07 at 02:45 +0200, Sasha Levin wrote:
> This hashtable implementation is using hlist buckets to provide a simple
> hashtable to prevent it from getting reimplemented all over the kernel.

> diff --git a/include/linux/hashtable.h b/include/linux/hashtable.h

Just trivial style notes and a typo

> +/* Use hash_32 when possible to allow for fast 32bit hashing in 64bit kernels. */
> +#define hash_min(val, bits) ((sizeof(val)==4)?hash_32((val), (bits)):hash_long((val), (bits)))

This is a pretty long line.  It doesn't use normal kernel spacing
style and it has unnecessary parentheses.

Maybe:

#define hash_min(val, bits)						\
	(sizeof(val) == 4 ? hash_32(val, bits) : hash_long(val, bits))

> +
> +/**
> + * hash_init - initialize a hash table
> + * @hashtable: hashtable to be initialized
> + * @bits: bit count of hashing function
> + *
> + * Initializes a hash table with 2**bits buckets.
> + */
> +static inline void hash_init(struct hlist_head *hashtable, int bits)
> +{
> +	int i;
> +
> +	for (i = 0; i < HASH_SIZE(bits); i++)
> +		INIT_HLIST_HEAD(hashtable + i);
> +}

Maybe use a struct hlist_head *last_hash_entry as a loop variable

{
	struct hlist_head *eo_hash = hashtable + HASH_SIZE(bits);

	while (hashtable < eo_hash)
		INIT_HLIST_HEAD(hashtable++);
}

The compiler might generate the same code anyway...

[]

> +/**
> + * hash_for_each_possible - iterate over all possible objects for a giver key
> + * @name: hashtable to iterate
> + * @obj: the type * to use as a loop cursor for each bucke

bucket


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 1/7] hashtable: introduce a small and naive hashtable
@ 2012-08-07  1:19     ` Joe Perches
  0 siblings, 0 replies; 32+ messages in thread
From: Joe Perches @ 2012-08-07  1:19 UTC (permalink / raw)
  To: Sasha Levin
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev, josh,
	eric.dumazet, mathieu.desnoyers

On Tue, 2012-08-07 at 02:45 +0200, Sasha Levin wrote:
> This hashtable implementation is using hlist buckets to provide a simple
> hashtable to prevent it from getting reimplemented all over the kernel.

> diff --git a/include/linux/hashtable.h b/include/linux/hashtable.h

Just trivial style notes and a typo

> +/* Use hash_32 when possible to allow for fast 32bit hashing in 64bit kernels. */
> +#define hash_min(val, bits) ((sizeof(val)==4)?hash_32((val), (bits)):hash_long((val), (bits)))

This is a pretty long line.  It doesn't use normal kernel spacing
style and it has unnecessary parentheses.

Maybe:

#define hash_min(val, bits)						\
	(sizeof(val) == 4 ? hash_32(val, bits) : hash_long(val, bits))

> +
> +/**
> + * hash_init - initialize a hash table
> + * @hashtable: hashtable to be initialized
> + * @bits: bit count of hashing function
> + *
> + * Initializes a hash table with 2**bits buckets.
> + */
> +static inline void hash_init(struct hlist_head *hashtable, int bits)
> +{
> +	int i;
> +
> +	for (i = 0; i < HASH_SIZE(bits); i++)
> +		INIT_HLIST_HEAD(hashtable + i);
> +}

Maybe use a struct hlist_head *last_hash_entry as a loop variable

{
	struct hlist_head *eo_hash = hashtable + HASH_SIZE(bits);

	while (hashtable < eo_hash)
		INIT_HLIST_HEAD(hashtable++);
}

The compiler might generate the same code anyway...

[]

> +/**
> + * hash_for_each_possible - iterate over all possible objects for a giver key
> + * @name: hashtable to iterate
> + * @obj: the type * to use as a loop cursor for each bucke

bucket


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 4/7] workqueue: use new hashtable implementation
  2012-08-07  0:45   ` Sasha Levin
  (?)
@ 2012-08-07  1:19     ` Joe Perches
  -1 siblings, 0 replies; 32+ messages in thread
From: Joe Perches @ 2012-08-07  1:19 UTC (permalink / raw)
  To: Sasha Levin
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev, josh,
	eric.dumazet, mathieu.desnoyers, Sasha Levin

On Tue, 2012-08-07 at 02:45 +0200, Sasha Levin wrote:
> From: Sasha Levin <sasha.levin@oracle.com>
> 
> Switch workqueues to use the new hashtable implementation. This reduces the amount of
> generic unrelated code in the workqueues.

Just style trivia:

> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
[]
> @@ -897,8 +839,15 @@ static struct worker *__find_worker_executing_work(struct global_cwq *gcwq,
>  static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
>  						 struct work_struct *work)
>  {
> -	return __find_worker_executing_work(gcwq, busy_worker_head(gcwq, work),
> -					    work);
> +	struct worker *worker;
> +	struct hlist_node *tmp;
> +
> +	hash_for_each_possible(gcwq->busy_hash, worker, BUSY_WORKER_HASH_ORDER,
> +								tmp, hentry, work)
> +		if (worker->current_work == work)
> +			return worker;

braces please:

	hash_for_each_possible(gcwq->busy_hash, worker, BUSY_WORKER_HASH_ORDER,
			       tmp, hentry, work) {
		if (worker->current_work == work)
			return worker;
	}

[]

@@ -1916,7 +1865,7 @@ static void cwq_dec_nr_in_flight(struct cpu_workqueue_struct *cwq, int color,
>   * @worker: self
>   * @work: work to process
>   *
> - * Process @work.  This function contains all the logics necessary to
> + * Process @work.  This? function contains all the logics necessary to

Odd ? and the grammar also seems odd.

>   * process a single work including synchronization against and
>   * interaction with other workers on the same cpu, queueing and
>   * flushing.  As long as context requirement is met, any worker can



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 4/7] workqueue: use new hashtable implementation
@ 2012-08-07  1:19     ` Joe Perches
  0 siblings, 0 replies; 32+ messages in thread
From: Joe Perches @ 2012-08-07  1:19 UTC (permalink / raw)
  To: Sasha Levin
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev, josh,
	eric.dumazet, mathieu.desnoyers, Sasha Levin

On Tue, 2012-08-07 at 02:45 +0200, Sasha Levin wrote:
> From: Sasha Levin <sasha.levin@oracle.com>
> 
> Switch workqueues to use the new hashtable implementation. This reduces the amount of
> generic unrelated code in the workqueues.

Just style trivia:

> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
[]
> @@ -897,8 +839,15 @@ static struct worker *__find_worker_executing_work(struct global_cwq *gcwq,
>  static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
>  						 struct work_struct *work)
>  {
> -	return __find_worker_executing_work(gcwq, busy_worker_head(gcwq, work),
> -					    work);
> +	struct worker *worker;
> +	struct hlist_node *tmp;
> +
> +	hash_for_each_possible(gcwq->busy_hash, worker, BUSY_WORKER_HASH_ORDER,
> +								tmp, hentry, work)
> +		if (worker->current_work == work)
> +			return worker;

braces please:

	hash_for_each_possible(gcwq->busy_hash, worker, BUSY_WORKER_HASH_ORDER,
			       tmp, hentry, work) {
		if (worker->current_work == work)
			return worker;
	}

[]

@@ -1916,7 +1865,7 @@ static void cwq_dec_nr_in_flight(struct cpu_workqueue_struct *cwq, int color,
>   * @worker: self
>   * @work: work to process
>   *
> - * Process @work.  This function contains all the logics necessary to
> + * Process @work.  This? function contains all the logics necessary to

Odd ? and the grammar also seems odd.

>   * process a single work including synchronization against and
>   * interaction with other workers on the same cpu, queueing and
>   * flushing.  As long as context requirement is met, any worker can


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 4/7] workqueue: use new hashtable implementation
@ 2012-08-07  1:19     ` Joe Perches
  0 siblings, 0 replies; 32+ messages in thread
From: Joe Perches @ 2012-08-07  1:19 UTC (permalink / raw)
  To: Sasha Levin
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev, josh,
	eric.dumazet, mathieu.desnoyers, Sasha Levin

On Tue, 2012-08-07 at 02:45 +0200, Sasha Levin wrote:
> From: Sasha Levin <sasha.levin@oracle.com>
> 
> Switch workqueues to use the new hashtable implementation. This reduces the amount of
> generic unrelated code in the workqueues.

Just style trivia:

> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
[]
> @@ -897,8 +839,15 @@ static struct worker *__find_worker_executing_work(struct global_cwq *gcwq,
>  static struct worker *find_worker_executing_work(struct global_cwq *gcwq,
>  						 struct work_struct *work)
>  {
> -	return __find_worker_executing_work(gcwq, busy_worker_head(gcwq, work),
> -					    work);
> +	struct worker *worker;
> +	struct hlist_node *tmp;
> +
> +	hash_for_each_possible(gcwq->busy_hash, worker, BUSY_WORKER_HASH_ORDER,
> +								tmp, hentry, work)
> +		if (worker->current_work == work)
> +			return worker;

braces please:

	hash_for_each_possible(gcwq->busy_hash, worker, BUSY_WORKER_HASH_ORDER,
			       tmp, hentry, work) {
		if (worker->current_work == work)
			return worker;
	}

[]

@@ -1916,7 +1865,7 @@ static void cwq_dec_nr_in_flight(struct cpu_workqueue_struct *cwq, int color,
>   * @worker: self
>   * @work: work to process
>   *
> - * Process @work.  This function contains all the logics necessary to
> + * Process @work.  This? function contains all the logics necessary to

Odd ? and the grammar also seems odd.

>   * process a single work including synchronization against and
>   * interaction with other workers on the same cpu, queueing and
>   * flushing.  As long as context requirement is met, any worker can


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 1/7] hashtable: introduce a small and naive hashtable
  2012-08-07  0:45   ` Sasha Levin
@ 2012-08-07  1:48     ` Li Wei
  -1 siblings, 0 replies; 32+ messages in thread
From: Li Wei @ 2012-08-07  1:48 UTC (permalink / raw)
  To: Sasha Levin
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev, josh,
	eric.dumazet, mathieu.desnoyers

On 08/07/2012 08:45 AM, Sasha Levin wrote:
> This hashtable implementation is using hlist buckets to provide a simple
> hashtable to prevent it from getting reimplemented all over the kernel.
> 
> Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
> ---
>  include/linux/hashtable.h |   82 +++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 82 insertions(+), 0 deletions(-)
>  create mode 100644 include/linux/hashtable.h
> 
> diff --git a/include/linux/hashtable.h b/include/linux/hashtable.h
> new file mode 100644
> index 0000000..394652b
> --- /dev/null
> +++ b/include/linux/hashtable.h
> @@ -0,0 +1,82 @@
> +/*
> + * Hash table implementation
> + * (C) 2012  Sasha Levin <levinsasha928@gmail.com>
> + */
> +
> +#ifndef _LINUX_HASHTABLE_H
> +#define _LINUX_HASHTABLE_H
> +
> +#include <linux/list.h>
> +#include <linux/types.h>
> +#include <linux/kernel.h>
> +#include <linux/hash.h>
> +
> +#define DEFINE_HASHTABLE(name, bits)					\
> +	struct hlist_head name[HASH_SIZE(bits)];
> +
> +#define HASH_SIZE(bits) (1 << (bits))
> +
> +/* Use hash_32 when possible to allow for fast 32bit hashing in 64bit kernels. */
> +#define hash_min(val, bits) ((sizeof(val)==4)?hash_32((val), (bits)):hash_long((val), (bits)))
> +
> +/**
> + * hash_init - initialize a hash table
> + * @hashtable: hashtable to be initialized
> + * @bits: bit count of hashing function
> + *
> + * Initializes a hash table with 2**bits buckets.
> + */
> +static inline void hash_init(struct hlist_head *hashtable, int bits)
> +{
> +	int i;
> +
> +	for (i = 0; i < HASH_SIZE(bits); i++)
> +		INIT_HLIST_HEAD(hashtable + i);
> +}
> +
> +/**
> + * hash_add - add an object to a hashtable
> + * @hashtable: hashtable to add to
> + * @bits: bit count used for hashing
> + * @node: the &struct hlist_node of the object to be added
> + * @key: the key of the object to be added
> + */
> +#define hash_add(hashtable, bits, node, key)				\
> +	hlist_add_head(node, &hashtable[hash_min(key, bits)]);
> +
> +/**
> + * hash_del - remove an object from a hashtable
> + * @node: &struct hlist_node of the object to remove
> + */
> +static inline void hash_del(struct hlist_node *node)
> +{
> +	hlist_del_init(node);
> +}
> +
> +/**
> + * hash_for_each - iterate over a hashtable
> + * @name: hashtable to iterate
> + * @bits: bit count of hashing function of the hashtable
> + * @bkt: integer to use as bucket loop cursor
> + * @node: the &struct list_head to use as a loop cursor for each bucket
> + * @obj: the type * to use as a loop cursor for each bucket
> + * @member: the name of the hlist_node within the struct
> + */
> +#define hash_for_each(name, bits, bkt, node, obj, member)		\
> +	for (bkt = 0; bkt < HASH_SIZE(bits); bkt++)			\
> +		hlist_for_each_entry(obj, node, &name[i], member)

Where is the 'i' coming from? maybe &name[bkt]?

> +
> +/**
> + * hash_for_each_possible - iterate over all possible objects for a giver key
> + * @name: hashtable to iterate
> + * @obj: the type * to use as a loop cursor for each bucke
> + * @bits: bit count of hashing function of the hashtable
> + * @node: the &struct list_head to use as a loop cursor for each bucket
> + * @member: the name of the hlist_node within the struct
> + * @key: the key of the objects to iterate over
> + */
> +#define hash_for_each_possible(name, obj, bits, node, member, key)	\
> +	hlist_for_each_entry(obj, node,					\
> +		&name[hash_min(key, bits)], member)
> +
> +#endif

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 1/7] hashtable: introduce a small and naive hashtable
@ 2012-08-07  1:48     ` Li Wei
  0 siblings, 0 replies; 32+ messages in thread
From: Li Wei @ 2012-08-07  1:48 UTC (permalink / raw)
  To: Sasha Levin
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev, josh,
	eric.dumazet, mathieu.desnoyers

On 08/07/2012 08:45 AM, Sasha Levin wrote:
> This hashtable implementation is using hlist buckets to provide a simple
> hashtable to prevent it from getting reimplemented all over the kernel.
> 
> Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
> ---
>  include/linux/hashtable.h |   82 +++++++++++++++++++++++++++++++++++++++++++++
>  1 files changed, 82 insertions(+), 0 deletions(-)
>  create mode 100644 include/linux/hashtable.h
> 
> diff --git a/include/linux/hashtable.h b/include/linux/hashtable.h
> new file mode 100644
> index 0000000..394652b
> --- /dev/null
> +++ b/include/linux/hashtable.h
> @@ -0,0 +1,82 @@
> +/*
> + * Hash table implementation
> + * (C) 2012  Sasha Levin <levinsasha928@gmail.com>
> + */
> +
> +#ifndef _LINUX_HASHTABLE_H
> +#define _LINUX_HASHTABLE_H
> +
> +#include <linux/list.h>
> +#include <linux/types.h>
> +#include <linux/kernel.h>
> +#include <linux/hash.h>
> +
> +#define DEFINE_HASHTABLE(name, bits)					\
> +	struct hlist_head name[HASH_SIZE(bits)];
> +
> +#define HASH_SIZE(bits) (1 << (bits))
> +
> +/* Use hash_32 when possible to allow for fast 32bit hashing in 64bit kernels. */
> +#define hash_min(val, bits) ((sizeof(val)==4)?hash_32((val), (bits)):hash_long((val), (bits)))
> +
> +/**
> + * hash_init - initialize a hash table
> + * @hashtable: hashtable to be initialized
> + * @bits: bit count of hashing function
> + *
> + * Initializes a hash table with 2**bits buckets.
> + */
> +static inline void hash_init(struct hlist_head *hashtable, int bits)
> +{
> +	int i;
> +
> +	for (i = 0; i < HASH_SIZE(bits); i++)
> +		INIT_HLIST_HEAD(hashtable + i);
> +}
> +
> +/**
> + * hash_add - add an object to a hashtable
> + * @hashtable: hashtable to add to
> + * @bits: bit count used for hashing
> + * @node: the &struct hlist_node of the object to be added
> + * @key: the key of the object to be added
> + */
> +#define hash_add(hashtable, bits, node, key)				\
> +	hlist_add_head(node, &hashtable[hash_min(key, bits)]);
> +
> +/**
> + * hash_del - remove an object from a hashtable
> + * @node: &struct hlist_node of the object to remove
> + */
> +static inline void hash_del(struct hlist_node *node)
> +{
> +	hlist_del_init(node);
> +}
> +
> +/**
> + * hash_for_each - iterate over a hashtable
> + * @name: hashtable to iterate
> + * @bits: bit count of hashing function of the hashtable
> + * @bkt: integer to use as bucket loop cursor
> + * @node: the &struct list_head to use as a loop cursor for each bucket
> + * @obj: the type * to use as a loop cursor for each bucket
> + * @member: the name of the hlist_node within the struct
> + */
> +#define hash_for_each(name, bits, bkt, node, obj, member)		\
> +	for (bkt = 0; bkt < HASH_SIZE(bits); bkt++)			\
> +		hlist_for_each_entry(obj, node, &name[i], member)

Where is the 'i' coming from? maybe &name[bkt]?

> +
> +/**
> + * hash_for_each_possible - iterate over all possible objects for a giver key
> + * @name: hashtable to iterate
> + * @obj: the type * to use as a loop cursor for each bucke
> + * @bits: bit count of hashing function of the hashtable
> + * @node: the &struct list_head to use as a loop cursor for each bucket
> + * @member: the name of the hlist_node within the struct
> + * @key: the key of the objects to iterate over
> + */
> +#define hash_for_each_possible(name, obj, bits, node, member, key)	\
> +	hlist_for_each_entry(obj, node,					\
> +		&name[hash_min(key, bits)], member)
> +
> +#endif

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 1/7] hashtable: introduce a small and naive hashtable
  2012-08-07  1:48     ` Li Wei
@ 2012-08-07  1:54       ` Sasha Levin
  -1 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  1:54 UTC (permalink / raw)
  To: Li Wei
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev, josh,
	eric.dumazet, mathieu.desnoyers

On 08/07/2012 03:48 AM, Li Wei wrote:
> On 08/07/2012 08:45 AM, Sasha Levin wrote:
>> +/**
>> + * hash_for_each - iterate over a hashtable
>> + * @name: hashtable to iterate
>> + * @bits: bit count of hashing function of the hashtable
>> + * @bkt: integer to use as bucket loop cursor
>> + * @node: the &struct list_head to use as a loop cursor for each bucket
>> + * @obj: the type * to use as a loop cursor for each bucket
>> + * @member: the name of the hlist_node within the struct
>> + */
>> +#define hash_for_each(name, bits, bkt, node, obj, member)		\
>> +	for (bkt = 0; bkt < HASH_SIZE(bits); bkt++)			\
>> +		hlist_for_each_entry(obj, node, &name[i], member)
> 
> Where is the 'i' coming from? maybe &name[bkt]?

Heh, yeah. And the only place that uses this macro had 'i' declared as the loop counter, so it didn't trigger any issues during testing.

Thanks!


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 1/7] hashtable: introduce a small and naive hashtable
@ 2012-08-07  1:54       ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  1:54 UTC (permalink / raw)
  To: Li Wei
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev, josh,
	eric.dumazet, mathieu.desnoyers

On 08/07/2012 03:48 AM, Li Wei wrote:
> On 08/07/2012 08:45 AM, Sasha Levin wrote:
>> +/**
>> + * hash_for_each - iterate over a hashtable
>> + * @name: hashtable to iterate
>> + * @bits: bit count of hashing function of the hashtable
>> + * @bkt: integer to use as bucket loop cursor
>> + * @node: the &struct list_head to use as a loop cursor for each bucket
>> + * @obj: the type * to use as a loop cursor for each bucket
>> + * @member: the name of the hlist_node within the struct
>> + */
>> +#define hash_for_each(name, bits, bkt, node, obj, member)		\
>> +	for (bkt = 0; bkt < HASH_SIZE(bits); bkt++)			\
>> +		hlist_for_each_entry(obj, node, &name[i], member)
> 
> Where is the 'i' coming from? maybe &name[bkt]?

Heh, yeah. And the only place that uses this macro had 'i' declared as the loop counter, so it didn't trigger any issues during testing.

Thanks!

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 1/7] hashtable: introduce a small and naive hashtable
  2012-08-07  0:45   ` Sasha Levin
@ 2012-08-07  2:55     ` Josh Triplett
  -1 siblings, 0 replies; 32+ messages in thread
From: Josh Triplett @ 2012-08-07  2:55 UTC (permalink / raw)
  To: Sasha Levin
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev,
	eric.dumazet, mathieu.desnoyers

On Tue, Aug 07, 2012 at 02:45:10AM +0200, Sasha Levin wrote:
> +/**
> + * hash_add - add an object to a hashtable
> + * @hashtable: hashtable to add to
> + * @bits: bit count used for hashing
> + * @node: the &struct hlist_node of the object to be added
> + * @key: the key of the object to be added
> + */
> +#define hash_add(hashtable, bits, node, key)				\
> +	hlist_add_head(node, &hashtable[hash_min(key, bits)]);

Any particular reason to make this a macro rather than a static inline?

Also, even if you do make it a macro, don't include the semicolon.

> +/**
> + * hash_for_each_possible - iterate over all possible objects for a giver key

s/giver/given/

> + * @name: hashtable to iterate
> + * @obj: the type * to use as a loop cursor for each bucke

s/bucke/bucket/

- Josh Triplett

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 1/7] hashtable: introduce a small and naive hashtable
@ 2012-08-07  2:55     ` Josh Triplett
  0 siblings, 0 replies; 32+ messages in thread
From: Josh Triplett @ 2012-08-07  2:55 UTC (permalink / raw)
  To: Sasha Levin
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev,
	eric.dumazet, mathieu.desnoyers

On Tue, Aug 07, 2012 at 02:45:10AM +0200, Sasha Levin wrote:
> +/**
> + * hash_add - add an object to a hashtable
> + * @hashtable: hashtable to add to
> + * @bits: bit count used for hashing
> + * @node: the &struct hlist_node of the object to be added
> + * @key: the key of the object to be added
> + */
> +#define hash_add(hashtable, bits, node, key)				\
> +	hlist_add_head(node, &hashtable[hash_min(key, bits)]);

Any particular reason to make this a macro rather than a static inline?

Also, even if you do make it a macro, don't include the semicolon.

> +/**
> + * hash_for_each_possible - iterate over all possible objects for a giver key

s/giver/given/

> + * @name: hashtable to iterate
> + * @obj: the type * to use as a loop cursor for each bucke

s/bucke/bucket/

- Josh Triplett

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 1/7] hashtable: introduce a small and naive hashtable
  2012-08-07  2:55     ` Josh Triplett
@ 2012-08-07  9:49       ` Sasha Levin
  -1 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  9:49 UTC (permalink / raw)
  To: Josh Triplett
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev,
	eric.dumazet, mathieu.desnoyers

On 08/07/2012 04:55 AM, Josh Triplett wrote:
> On Tue, Aug 07, 2012 at 02:45:10AM +0200, Sasha Levin wrote:
>> +/**
>> + * hash_add - add an object to a hashtable
>> + * @hashtable: hashtable to add to
>> + * @bits: bit count used for hashing
>> + * @node: the &struct hlist_node of the object to be added
>> + * @key: the key of the object to be added
>> + */
>> +#define hash_add(hashtable, bits, node, key)				\
>> +	hlist_add_head(node, &hashtable[hash_min(key, bits)]);
> 
> Any particular reason to make this a macro rather than a static inline?

Yes. As Eric Dumazet pointed out, hash_64() is slower than hash_32() so we should be calling hash_32() if possible (if key size is 32bits long).

This way we can call hash_min() without knowing the key size. See also the definition of hash_min() above.




^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC v3 1/7] hashtable: introduce a small and naive hashtable
@ 2012-08-07  9:49       ` Sasha Levin
  0 siblings, 0 replies; 32+ messages in thread
From: Sasha Levin @ 2012-08-07  9:49 UTC (permalink / raw)
  To: Josh Triplett
  Cc: torvalds, tj, akpm, linux-kernel, linux-mm, paul.gortmaker,
	davem, rostedt, mingo, ebiederm, aarcange, ericvh, netdev,
	eric.dumazet, mathieu.desnoyers

On 08/07/2012 04:55 AM, Josh Triplett wrote:
> On Tue, Aug 07, 2012 at 02:45:10AM +0200, Sasha Levin wrote:
>> +/**
>> + * hash_add - add an object to a hashtable
>> + * @hashtable: hashtable to add to
>> + * @bits: bit count used for hashing
>> + * @node: the &struct hlist_node of the object to be added
>> + * @key: the key of the object to be added
>> + */
>> +#define hash_add(hashtable, bits, node, key)				\
>> +	hlist_add_head(node, &hashtable[hash_min(key, bits)]);
> 
> Any particular reason to make this a macro rather than a static inline?

Yes. As Eric Dumazet pointed out, hash_64() is slower than hash_32() so we should be calling hash_32() if possible (if key size is 32bits long).

This way we can call hash_min() without knowing the key size. See also the definition of hash_min() above.



--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2012-08-07  9:49 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-08-07  0:45 [RFC v3 0/7] generic hashtable implementation Sasha Levin
2012-08-07  0:45 ` Sasha Levin
2012-08-07  0:45 ` [RFC v3 1/7] hashtable: introduce a small and naive hashtable Sasha Levin
2012-08-07  0:45   ` Sasha Levin
2012-08-07  1:19   ` Joe Perches
2012-08-07  1:19     ` Joe Perches
2012-08-07  1:19     ` Joe Perches
2012-08-07  1:48   ` Li Wei
2012-08-07  1:48     ` Li Wei
2012-08-07  1:54     ` Sasha Levin
2012-08-07  1:54       ` Sasha Levin
2012-08-07  2:55   ` Josh Triplett
2012-08-07  2:55     ` Josh Triplett
2012-08-07  9:49     ` Sasha Levin
2012-08-07  9:49       ` Sasha Levin
2012-08-07  0:45 ` [RFC v3 2/7] user_ns: use new hashtable implementation Sasha Levin
2012-08-07  0:45   ` Sasha Levin
2012-08-07  0:45 ` [RFC v3 3/7] mm,ksm: " Sasha Levin
2012-08-07  0:45   ` Sasha Levin
2012-08-07  0:45 ` [RFC v3 4/7] workqueue: " Sasha Levin
2012-08-07  0:45   ` Sasha Levin
2012-08-07  0:45 ` Sasha Levin
2012-08-07  0:45   ` Sasha Levin
2012-08-07  1:19   ` Joe Perches
2012-08-07  1:19     ` Joe Perches
2012-08-07  1:19     ` Joe Perches
2012-08-07  0:45 ` [RFC v3 5/7] mm/huge_memory: " Sasha Levin
2012-08-07  0:45   ` Sasha Levin
2012-08-07  0:45 ` [RFC v3 6/7] tracepoint: " Sasha Levin
2012-08-07  0:45   ` Sasha Levin
2012-08-07  0:45 ` [RFC v3 7/7] net,9p: " Sasha Levin
2012-08-07  0:45   ` Sasha Levin

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.