From: Santosh Shilimkar <santosh.shilimkar@oracle.com>
To: netdev@vger.kernel.org
Cc: linux-rdma@vger.kernel.org, davem@davemloft.net,
	linux-kernel@vger.kernel.org, ssantosh@kernel.org,
	Santosh Shilimkar <santosh.shilimkar@oracle.com>
Subject: [PATCH v2 02/14] RDS: make socket bind/release locking scheme simple and more efficient
Date: Wed, 30 Sep 2015 13:24:21 -0400	[thread overview]
Message-ID: <1443633873-13359-3-git-send-email-santosh.shilimkar@oracle.com> (raw)
In-Reply-To: <1443633873-13359-1-git-send-email-santosh.shilimkar@oracle.com>

The RDS bind and release locking scheme is inefficient. It uses RCU
to maintain the bind hash-table, but it still needs to hold a spinlock
for rds_add_bound()/rds_remove_bound(), so for the overall use case
the concurrent hash-table lookup speedup doesn't pay off. Worse, the
blocking nature of synchronize_rcu() makes RDS socket shutdown too
slow, which hurts RDS performance since connection shutdown and
re-connect happen quite often to maintain the RC part of the protocol.

So we make the locking scheme simpler and more efficient by replacing
the spinlock with a reader/writer lock and getting rid of RCU for the
bind hash-table.
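
The resulting shape is the classic reader/writer pattern: lookups take
the lock for read, bind/unbind take it for write, and no grace period
is needed any more when a socket goes away. A minimal sketch of that
shape (simplified from the hunks below, not the exact RDS code;
matches() stands in for the addr/port needle comparison, and flags,
head and rs are assumed to be declared by the caller):

	static DEFINE_RWLOCK(bind_lock);

	/* reader side: plain hlist walk under the read lock */
	read_lock_irqsave(&bind_lock, flags);
	hlist_for_each_entry(rs, head, rs_bound_node) {
		if (matches(rs, addr, port))	/* found bound socket */
			break;
	}
	read_unlock_irqrestore(&bind_lock, flags);

	/*
	 * writer side: insert/remove under the write lock,
	 * no synchronize_rcu() on the release path any more
	 */
	write_lock_irqsave(&bind_lock, flags);
	hlist_add_head(&rs->rs_bound_node, head);	/* or hlist_del_init() */
	write_unlock_irqrestore(&bind_lock, flags);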

In a subsequent patch, we also convert the global lock to a per-bucket
lock to reduce global lock contention.
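
For reference, that per-bucket variant would roughly pair each hash
bucket with its own rwlock; the structure below is only an
illustration of the idea, not the final code from the later patch,
and it assumes hash_to_bucket() is reworked to return the bucket
struct instead of a bare hlist_head:

	struct bind_hash_bucket {
		struct hlist_head head;
		rwlock_t	  lock;
	};
	static struct bind_hash_bucket bind_hash_table[BIND_HASH_SIZE];

	/* writers contend only on the bucket the address hashes to */
	bucket = hash_to_bucket(addr, port);
	write_lock_irqsave(&bucket->lock, flags);
	hlist_add_head(&rs->rs_bound_node, &bucket->head);
	write_unlock_irqrestore(&bucket->lock, flags);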

Signed-off-by: Santosh Shilimkar <ssantosh@kernel.org>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
---
 net/rds/af_rds.c |  6 ------
 net/rds/bind.c   | 35 +++++++++++++++--------------------
 2 files changed, 15 insertions(+), 26 deletions(-)

diff --git a/net/rds/af_rds.c b/net/rds/af_rds.c
index a2f28a6..dc08766 100644
--- a/net/rds/af_rds.c
+++ b/net/rds/af_rds.c
@@ -72,13 +72,7 @@ static int rds_release(struct socket *sock)
 	rds_clear_recv_queue(rs);
 	rds_cong_remove_socket(rs);
 
-	/*
-	 * the binding lookup hash uses rcu, we need to
-	 * make sure we synchronize_rcu before we free our
-	 * entry
-	 */
 	rds_remove_bound(rs);
-	synchronize_rcu();
 
 	rds_send_drop_to(rs, NULL);
 	rds_rdma_drop_keys(rs);
diff --git a/net/rds/bind.c b/net/rds/bind.c
index dd666fb..01989e2 100644
--- a/net/rds/bind.c
+++ b/net/rds/bind.c
@@ -40,7 +40,7 @@
 
 #define BIND_HASH_SIZE 1024
 static struct hlist_head bind_hash_table[BIND_HASH_SIZE];
-static DEFINE_SPINLOCK(rds_bind_lock);
+static DEFINE_RWLOCK(rds_bind_lock);
 
 static struct hlist_head *hash_to_bucket(__be32 addr, __be16 port)
 {
@@ -48,6 +48,7 @@ static struct hlist_head *hash_to_bucket(__be32 addr, __be16 port)
 				  (BIND_HASH_SIZE - 1));
 }
 
+/* must hold either read or write lock (write lock for insert != NULL) */
 static struct rds_sock *rds_bind_lookup(__be32 addr, __be16 port,
 					struct rds_sock *insert)
 {
@@ -56,30 +57,24 @@ static struct rds_sock *rds_bind_lookup(__be32 addr, __be16 port,
 	u64 cmp;
 	u64 needle = ((u64)be32_to_cpu(addr) << 32) | be16_to_cpu(port);
 
-	rcu_read_lock();
-	hlist_for_each_entry_rcu(rs, head, rs_bound_node) {
+	hlist_for_each_entry(rs, head, rs_bound_node) {
 		cmp = ((u64)be32_to_cpu(rs->rs_bound_addr) << 32) |
 		      be16_to_cpu(rs->rs_bound_port);
 
-		if (cmp == needle) {
-			rcu_read_unlock();
+		if (cmp == needle)
 			return rs;
-		}
 	}
-	rcu_read_unlock();
 
 	if (insert) {
 		/*
 		 * make sure our addr and port are set before
-		 * we are added to the list, other people
-		 * in rcu will find us as soon as the
-		 * hlist_add_head_rcu is done
+		 * we are added to the list.
 		 */
 		insert->rs_bound_addr = addr;
 		insert->rs_bound_port = port;
 		rds_sock_addref(insert);
 
-		hlist_add_head_rcu(&insert->rs_bound_node, head);
+		hlist_add_head(&insert->rs_bound_node, head);
 	}
 	return NULL;
 }
@@ -93,8 +88,11 @@ static struct rds_sock *rds_bind_lookup(__be32 addr, __be16 port,
 struct rds_sock *rds_find_bound(__be32 addr, __be16 port)
 {
 	struct rds_sock *rs;
+	unsigned long flags;
 
+	read_lock_irqsave(&rds_bind_lock, flags);
 	rs = rds_bind_lookup(addr, port, NULL);
+	read_unlock_irqrestore(&rds_bind_lock, flags);
 
 	if (rs && !sock_flag(rds_rs_to_sk(rs), SOCK_DEAD))
 		rds_sock_addref(rs);
@@ -103,6 +101,7 @@ struct rds_sock *rds_find_bound(__be32 addr, __be16 port)
 
 	rdsdebug("returning rs %p for %pI4:%u\n", rs, &addr,
 		ntohs(port));
+
 	return rs;
 }
 
@@ -121,7 +120,7 @@ static int rds_add_bound(struct rds_sock *rs, __be32 addr, __be16 *port)
 		last = rover - 1;
 	}
 
-	spin_lock_irqsave(&rds_bind_lock, flags);
+	write_lock_irqsave(&rds_bind_lock, flags);
 
 	do {
 		if (rover == 0)
@@ -135,7 +134,7 @@ static int rds_add_bound(struct rds_sock *rs, __be32 addr, __be16 *port)
 		}
 	} while (rover++ != last);
 
-	spin_unlock_irqrestore(&rds_bind_lock, flags);
+	write_unlock_irqrestore(&rds_bind_lock, flags);
 
 	return ret;
 }
@@ -144,19 +143,19 @@ void rds_remove_bound(struct rds_sock *rs)
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&rds_bind_lock, flags);
+	write_lock_irqsave(&rds_bind_lock, flags);
 
 	if (rs->rs_bound_addr) {
 		rdsdebug("rs %p unbinding from %pI4:%d\n",
 		  rs, &rs->rs_bound_addr,
 		  ntohs(rs->rs_bound_port));
 
-		hlist_del_init_rcu(&rs->rs_bound_node);
+		hlist_del_init(&rs->rs_bound_node);
 		rds_sock_put(rs);
 		rs->rs_bound_addr = 0;
 	}
 
-	spin_unlock_irqrestore(&rds_bind_lock, flags);
+	write_unlock_irqrestore(&rds_bind_lock, flags);
 }
 
 int rds_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
@@ -200,9 +199,5 @@ int rds_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
 
 out:
 	release_sock(sk);
-
-	/* we might have called rds_remove_bound on error */
-	if (ret)
-		synchronize_rcu();
 	return ret;
 }
-- 
1.9.1


