From: Xie He <xie.he.0141@gmail.com>
To: Jakub Kicinski <kuba@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	linux-x25@vger.kernel.org, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Martin Schiller <ms@dev.tdt.de>
Cc: Xie He <xie.he.0141@gmail.com>
Subject: [PATCH net] net: x25: Correct locking for x25_kill_by_device and x25_kill_by_neigh
Date: Sat, 14 Nov 2020 02:36:25 -0800
Message-ID: <20201114103625.323919-1-xie.he.0141@gmail.com>

When the x25_connect and x25_disconnect functions decrease the refcnt
of "x25->neighbour" (struct x25_neigh) and reset this pointer to NULL,
they currently hold the x25_list_lock read lock. This is odd, because
x25_list_lock is meant to protect x25_list, and neither the refcnt of
"struct x25_neigh" nor the "x25->neighbour" pointer is related to
x25_list itself.
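For reference, the pattern in question (currently present in both
functions; it is exactly what the diff below removes) is:

	read_lock_bh(&x25_list_lock);
	x25_neigh_put(x25->neighbour);
	x25->neighbour = NULL;
	read_unlock_bh(&x25_list_lock);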

I checked the commit history. The author who added the locking in
x25_disconnect didn't explain why in the commit message; they probably
just copied the code from x25_connect. The author who added the locking
in x25_connect probably did so to protect the code from racing with
x25_kill_by_device.

However, I think this is not the correct way to protect against the
race between x25_connect and x25_kill_by_device. The correct way is to
let x25_kill_by_device hold the appropriate sock lock instead. For
x25_disconnect, holding x25_list_lock is not only incorrect, but also
causes a deadlock, because x25_disconnect is called by
x25_kill_by_device with the x25_list_lock write lock already held.
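For illustration, the deadlocking call path (simplified sketch, not the
exact code) is roughly:

	x25_kill_by_device(dev)
	    write_lock_bh(&x25_list_lock);
	    ...
	    x25_disconnect(s, ...)
	        read_lock_bh(&x25_list_lock);  /* deadlocks: the write lock
	                                          is already held */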

For x25_kill_by_neigh, the situation is the same as for
x25_kill_by_device.

This patch adds correct locking for x25_kill_by_device and
x25_kill_by_neigh, and removes the incorrect locking in x25_connect and
x25_disconnect.

Fixes: 4becb7ee5b3d ("net/x25: Fix x25_neigh refcnt leak when x25 disconnect")
Fixes: 95d6ebd53c79 ("net/x25: fix use-after-free in x25_device_event()")
Cc: Martin Schiller <ms@dev.tdt.de>
Signed-off-by: Xie He <xie.he.0141@gmail.com>
---
 net/x25/af_x25.c   | 12 ++++++++----
 net/x25/x25_subr.c |  2 --
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
index a10487e7574c..50f043f0c1d0 100644
--- a/net/x25/af_x25.c
+++ b/net/x25/af_x25.c
@@ -208,9 +208,12 @@ static void x25_kill_by_device(struct net_device *dev)
 
 	write_lock_bh(&x25_list_lock);
 
-	sk_for_each(s, &x25_list)
+	sk_for_each(s, &x25_list) {
+		bh_lock_sock(s);
 		if (x25_sk(s)->neighbour && x25_sk(s)->neighbour->dev == dev)
 			x25_disconnect(s, ENETUNREACH, 0, 0);
+		bh_unlock_sock(s);
+	}
 
 	write_unlock_bh(&x25_list_lock);
 }
@@ -826,10 +829,8 @@ static int x25_connect(struct socket *sock, struct sockaddr *uaddr,
 	rc = 0;
 out_put_neigh:
 	if (rc && x25->neighbour) {
-		read_lock_bh(&x25_list_lock);
 		x25_neigh_put(x25->neighbour);
 		x25->neighbour = NULL;
-		read_unlock_bh(&x25_list_lock);
 		x25->state = X25_STATE_0;
 	}
 out_put_route:
@@ -1773,9 +1774,12 @@ void x25_kill_by_neigh(struct x25_neigh *nb)
 
 	write_lock_bh(&x25_list_lock);
 
-	sk_for_each(s, &x25_list)
+	sk_for_each(s, &x25_list) {
+		bh_lock_sock(s);
 		if (x25_sk(s)->neighbour == nb)
 			x25_disconnect(s, ENETUNREACH, 0, 0);
+		bh_unlock_sock(s);
+	}
 
 	write_unlock_bh(&x25_list_lock);
 
diff --git a/net/x25/x25_subr.c b/net/x25/x25_subr.c
index 0285aaa1e93c..6c0f94257f7c 100644
--- a/net/x25/x25_subr.c
+++ b/net/x25/x25_subr.c
@@ -358,10 +358,8 @@ void x25_disconnect(struct sock *sk, int reason, unsigned char cause,
 		sock_set_flag(sk, SOCK_DEAD);
 	}
 	if (x25->neighbour) {
-		read_lock_bh(&x25_list_lock);
 		x25_neigh_put(x25->neighbour);
 		x25->neighbour = NULL;
-		read_unlock_bh(&x25_list_lock);
 	}
 }
 
-- 
2.27.0

