From mboxrd@z Thu Jan  1 00:00:00 1970
From: mwilck@suse.com
Subject: [PATCH 01/23] multipathd: uxlsnr: avoid deadlock on exit
Date: Thu, 24 Sep 2020 15:40:32 +0200
Message-ID: <20200924134054.14632-2-mwilck@suse.com>
References: <20200924134054.14632-1-mwilck@suse.com>
In-Reply-To: <20200924134054.14632-1-mwilck@suse.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: dm-devel-bounces@redhat.com
Errors-To: dm-devel-bounces@redhat.com
To: Christophe Varoqui, Benjamin Marzinski
Cc: lixiaokeng@huawei.com, dm-devel@redhat.com, Martin Wilck
List-Id: dm-devel.ids

From: Martin Wilck

The uxlsnr thread wouldn't always release the client lock when
cancelled, causing a deadlock in uxsock_cleanup(). While this wasn't
caused by commit 3d611a2 ("multipathd: cancel threads early during
shutdown"), the deadlock seems to have become much more likely after
that patch. Fixing it means that reallocation failure of the pollfd
array must be handled differently: we now simply ignore any clients
above the last valid pfd index. That is a minor problem, as we are in
an OOM situation anyway. Moreover, client_lock is not a "struct lock"
but a plain pthread_mutex_t, so it has to be released with
cleanup_mutex() rather than cleanup_lock().
Fixes: 3d611a2 ("multipathd: cancel threads early during shutdown")
Signed-off-by: Martin Wilck
---
 multipathd/uxlsnr.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/multipathd/uxlsnr.c b/multipathd/uxlsnr.c
index 1c5ce9d..d47ba1a 100644
--- a/multipathd/uxlsnr.c
+++ b/multipathd/uxlsnr.c
@@ -35,6 +35,7 @@
 #include "config.h"
 #include "mpath_cmd.h"
 #include "time-util.h"
+#include "util.h"
 #include "main.h"
 #include "cli.h"
@@ -116,7 +117,7 @@ static void _dead_client(struct client *c)
 
 static void dead_client(struct client *c)
 {
-	pthread_cleanup_push(cleanup_lock, &client_lock);
+	pthread_cleanup_push(cleanup_mutex, &client_lock);
 	pthread_mutex_lock(&client_lock);
 	_dead_client(c);
 	pthread_cleanup_pop(1);
@@ -306,6 +307,7 @@ void * uxsock_listen(uxsock_trigger_fn uxsock_trigger, long ux_sock,
 
 		/* setup for a poll */
 		pthread_mutex_lock(&client_lock);
+		pthread_cleanup_push(cleanup_mutex, &client_lock);
 		num_clients = 0;
 		list_for_each_entry(c, &clients, node) {
 			num_clients++;
@@ -322,14 +324,13 @@ void * uxsock_listen(uxsock_trigger_fn uxsock_trigger, long ux_sock,
 					     sizeof(struct pollfd));
 			}
 			if (!new) {
-				pthread_mutex_unlock(&client_lock);
 				condlog(0, "%s: failed to realloc %d poll fds",
 					"uxsock", 2 + num_clients);
-				sched_yield();
-				continue;
+				num_clients = old_clients;
+			} else {
+				old_clients = num_clients;
+				polls = new;
 			}
-			old_clients = num_clients;
-			polls = new;
 		}
 		polls[0].fd = ux_sock;
 		polls[0].events = POLLIN;
@@ -347,8 +348,10 @@ void * uxsock_listen(uxsock_trigger_fn uxsock_trigger, long ux_sock,
 			polls[i].fd = c->fd;
 			polls[i].events = POLLIN;
 			i++;
+			if (i >= 2 + num_clients)
+				break;
 		}
-		pthread_mutex_unlock(&client_lock);
+		pthread_cleanup_pop(1);
 
 		/* most of our life is spent in this call */
 		poll_count = ppoll(polls, i, &sleep_time, &mask);
-- 
2.28.0