dm-devel.redhat.com archive mirror
* [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes
@ 2021-01-12 23:52 Benjamin Marzinski
  2021-01-12 23:52 ` [dm-devel] [PATCH 1/3] libmultipath: make find_err_path_by_dev() static Benjamin Marzinski
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Benjamin Marzinski @ 2021-01-12 23:52 UTC (permalink / raw)
  To: Christophe Varoqui; +Cc: device-mapper development, Martin Wilck

I found an ABBA deadlock in the io_err_stat marginal path code, and in
the process of fixing it, noticed a potential crash on shutdown. This
patchset addresses both of the issues.

Benjamin Marzinski (3):
  libmultipath: make find_err_path_by_dev() static
  multipathd: avoid io_err_stat crash during shutdown
  multipathd: avoid io_err_stat ABBA deadlock

 libmultipath/io_err_stat.c | 159 +++++++++++++++++--------------------
 1 file changed, 73 insertions(+), 86 deletions(-)

-- 
2.17.2

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [dm-devel] [PATCH 1/3] libmultipath: make find_err_path_by_dev() static
  2021-01-12 23:52 [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes Benjamin Marzinski
@ 2021-01-12 23:52 ` Benjamin Marzinski
  2021-01-12 23:52 ` [dm-devel] [PATCH 2/3] multipathd: avoid io_err_stat crash during shutdown Benjamin Marzinski
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Benjamin Marzinski @ 2021-01-12 23:52 UTC (permalink / raw)
  To: Christophe Varoqui; +Cc: device-mapper development, Martin Wilck

Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
---
 libmultipath/io_err_stat.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/libmultipath/io_err_stat.c b/libmultipath/io_err_stat.c
index 5363049d..2e48ee81 100644
--- a/libmultipath/io_err_stat.c
+++ b/libmultipath/io_err_stat.c
@@ -88,7 +88,7 @@ static void rcu_unregister(__attribute__((unused)) void *param)
 	rcu_unregister_thread();
 }
 
-struct io_err_stat_path *find_err_path_by_dev(vector pathvec, char *dev)
+static struct io_err_stat_path *find_err_path_by_dev(vector pathvec, char *dev)
 {
 	int i;
 	struct io_err_stat_path *pp;
-- 
2.17.2


* [dm-devel] [PATCH 2/3] multipathd: avoid io_err_stat crash during shutdown
  2021-01-12 23:52 [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes Benjamin Marzinski
  2021-01-12 23:52 ` [dm-devel] [PATCH 1/3] libmultipath: make find_err_path_by_dev() static Benjamin Marzinski
@ 2021-01-12 23:52 ` Benjamin Marzinski
  2021-01-13 11:45   ` Martin Wilck
  2021-01-12 23:52 ` [dm-devel] [PATCH 3/3] multipathd: avoid io_err_stat ABBA deadlock Benjamin Marzinski
  2021-01-13 11:45 ` [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes Martin Wilck
  3 siblings, 1 reply; 8+ messages in thread
From: Benjamin Marzinski @ 2021-01-12 23:52 UTC (permalink / raw)
  To: Christophe Varoqui; +Cc: device-mapper development, Martin Wilck

The checker thread is responsible for enqueueing paths for the
io_err_stat thread to check. During shutdown, the io_err_stat thread is
shut down and cleaned up before the checker thread.  There is no code
to make sure that the checker thread isn't accessing the io_err_stat
pathvec or its mutex while they are being freed, which can lead to
memory corruption crashes.

To solve this, get rid of the io_err_stat_pathvec structure, and
statically define the mutex.  This means that the mutex is always valid
to access, and the io_err_stat pathvec can only be accessed while
holding it.  If the io_err_stat thread has already been cleaned up
when the checker tries to access the pathvec, it will be NULL, and the
checker will simply fail to enqueue the path.

This change also fixes a bug in free_io_err_pathvec(), which previously
only attempted to free the pathvec if it was not set, instead of when it
was set.
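
The locking pattern described above can be sketched as follows. This is a minimal illustration, not the multipath-tools API: a plain pointer array stands in for the vector type, and `enqueue_path`/`free_pathvec` are hypothetical names.

```c
#include <pthread.h>
#include <stdlib.h>

/* Statically initialized mutex: always valid to lock, even after the
 * worker thread's data has been torn down. */
static pthread_mutex_t pathvec_lock = PTHREAD_MUTEX_INITIALIZER;

/* NULL whenever the worker thread is not running. */
static char **pathvec;
static int pathvec_len;

/* Enqueue fails harmlessly if the vector is already gone.
 * (Capacity checks are omitted for brevity.) */
int enqueue_path(char *dev)
{
	int ret = -1;

	pthread_mutex_lock(&pathvec_lock);
	if (pathvec) {
		pathvec[pathvec_len++] = dev;
		ret = 0;
	}
	pthread_mutex_unlock(&pathvec_lock);
	return ret;
}

void free_pathvec(void)
{
	pthread_mutex_lock(&pathvec_lock);
	free(pathvec);
	pathvec = NULL;		/* later enqueue attempts see NULL and bail */
	pathvec_len = 0;
	pthread_mutex_unlock(&pathvec_lock);
}
```

Because the mutex is statically allocated, there is no window in which a thread can lock a mutex that has already been destroyed; the only state that comes and goes is the vector pointer itself.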

Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
---
 libmultipath/io_err_stat.c | 108 +++++++++++++++----------------------
 1 file changed, 44 insertions(+), 64 deletions(-)

diff --git a/libmultipath/io_err_stat.c b/libmultipath/io_err_stat.c
index 2e48ee81..4c6f7f08 100644
--- a/libmultipath/io_err_stat.c
+++ b/libmultipath/io_err_stat.c
@@ -46,12 +46,6 @@
 #define io_err_stat_log(prio, fmt, args...) \
 	condlog(prio, "io error statistic: " fmt, ##args)
 
-
-struct io_err_stat_pathvec {
-	pthread_mutex_t mutex;
-	vector		pathvec;
-};
-
 struct dio_ctx {
 	struct timespec	io_starttime;
 	unsigned int	blksize;
@@ -75,9 +69,10 @@ static pthread_t	io_err_stat_thr;
 
 static pthread_mutex_t io_err_thread_lock = PTHREAD_MUTEX_INITIALIZER;
 static pthread_cond_t io_err_thread_cond = PTHREAD_COND_INITIALIZER;
+static pthread_mutex_t io_err_pathvec_lock = PTHREAD_MUTEX_INITIALIZER;
 static int io_err_thread_running = 0;
 
-static struct io_err_stat_pathvec *paths;
+static vector io_err_pathvec;
 struct vectors *vecs;
 io_context_t	ioctx;
 
@@ -207,46 +202,28 @@ static void free_io_err_stat_path(struct io_err_stat_path *p)
 	FREE(p);
 }
 
-static struct io_err_stat_pathvec *alloc_pathvec(void)
+static void cleanup_unlock(void *arg)
 {
-	struct io_err_stat_pathvec *p;
-	int r;
-
-	p = (struct io_err_stat_pathvec *)MALLOC(sizeof(*p));
-	if (!p)
-		return NULL;
-	p->pathvec = vector_alloc();
-	if (!p->pathvec)
-		goto out_free_struct_pathvec;
-	r = pthread_mutex_init(&p->mutex, NULL);
-	if (r)
-		goto out_free_member_pathvec;
-
-	return p;
-
-out_free_member_pathvec:
-	vector_free(p->pathvec);
-out_free_struct_pathvec:
-	FREE(p);
-	return NULL;
+	pthread_mutex_unlock((pthread_mutex_t*) arg);
 }
 
-static void free_io_err_pathvec(struct io_err_stat_pathvec *p)
+static void free_io_err_pathvec(void)
 {
 	struct io_err_stat_path *path;
 	int i;
 
-	if (!p)
-		return;
-	pthread_mutex_destroy(&p->mutex);
-	if (!p->pathvec) {
-		vector_foreach_slot(p->pathvec, path, i) {
-			destroy_directio_ctx(path);
-			free_io_err_stat_path(path);
-		}
-		vector_free(p->pathvec);
+	pthread_mutex_lock(&io_err_pathvec_lock);
+	pthread_cleanup_push(cleanup_unlock, &io_err_pathvec_lock);
+	if (!io_err_pathvec)
+		goto out;
+	vector_foreach_slot(io_err_pathvec, path, i) {
+		destroy_directio_ctx(path);
+		free_io_err_stat_path(path);
 	}
-	FREE(p);
+	vector_free(io_err_pathvec);
+	io_err_pathvec = NULL;
+out:
+	pthread_cleanup_pop(1);
 }
 
 /*
@@ -258,13 +235,13 @@ static int enqueue_io_err_stat_by_path(struct path *path)
 {
 	struct io_err_stat_path *p;
 
-	pthread_mutex_lock(&paths->mutex);
-	p = find_err_path_by_dev(paths->pathvec, path->dev);
+	pthread_mutex_lock(&io_err_pathvec_lock);
+	p = find_err_path_by_dev(io_err_pathvec, path->dev);
 	if (p) {
-		pthread_mutex_unlock(&paths->mutex);
+		pthread_mutex_unlock(&io_err_pathvec_lock);
 		return 0;
 	}
-	pthread_mutex_unlock(&paths->mutex);
+	pthread_mutex_unlock(&io_err_pathvec_lock);
 
 	p = alloc_io_err_stat_path();
 	if (!p)
@@ -276,18 +253,18 @@ static int enqueue_io_err_stat_by_path(struct path *path)
 
 	if (setup_directio_ctx(p))
 		goto free_ioerr_path;
-	pthread_mutex_lock(&paths->mutex);
-	if (!vector_alloc_slot(paths->pathvec))
+	pthread_mutex_lock(&io_err_pathvec_lock);
+	if (!vector_alloc_slot(io_err_pathvec))
 		goto unlock_destroy;
-	vector_set_slot(paths->pathvec, p);
-	pthread_mutex_unlock(&paths->mutex);
+	vector_set_slot(io_err_pathvec, p);
+	pthread_mutex_unlock(&io_err_pathvec_lock);
 
 	io_err_stat_log(2, "%s: enqueue path %s to check",
 			path->mpp->alias, path->dev);
 	return 0;
 
 unlock_destroy:
-	pthread_mutex_unlock(&paths->mutex);
+	pthread_mutex_unlock(&io_err_pathvec_lock);
 	destroy_directio_ctx(p);
 free_ioerr_path:
 	free_io_err_stat_path(p);
@@ -412,9 +389,9 @@ static int delete_io_err_stat_by_addr(struct io_err_stat_path *p)
 {
 	int i;
 
-	i = find_slot(paths->pathvec, p);
+	i = find_slot(io_err_pathvec, p);
 	if (i != -1)
-		vector_del_slot(paths->pathvec, i);
+		vector_del_slot(io_err_pathvec, i);
 
 	destroy_directio_ctx(p);
 	free_io_err_stat_path(p);
@@ -585,7 +562,7 @@ static void poll_async_io_timeout(void)
 
 	if (clock_gettime(CLOCK_MONOTONIC, &curr_time) != 0)
 		return;
-	vector_foreach_slot(paths->pathvec, pp, i) {
+	vector_foreach_slot(io_err_pathvec, pp, i) {
 		for (j = 0; j < CONCUR_NR_EVENT; j++) {
 			rc = try_to_cancel_timeout_io(pp->dio_ctx_array + j,
 					&curr_time, pp->devname);
@@ -631,7 +608,7 @@ static void handle_async_io_done_event(struct io_event *io_evt)
 	int rc = PATH_UNCHECKED;
 	int i, j;
 
-	vector_foreach_slot(paths->pathvec, pp, i) {
+	vector_foreach_slot(io_err_pathvec, pp, i) {
 		for (j = 0; j < CONCUR_NR_EVENT; j++) {
 			ct = pp->dio_ctx_array + j;
 			if (&ct->io == io_evt->obj) {
@@ -665,19 +642,14 @@ static void service_paths(void)
 	struct io_err_stat_path *pp;
 	int i;
 
-	pthread_mutex_lock(&paths->mutex);
-	vector_foreach_slot(paths->pathvec, pp, i) {
+	pthread_mutex_lock(&io_err_pathvec_lock);
+	vector_foreach_slot(io_err_pathvec, pp, i) {
 		send_batch_async_ios(pp);
 		process_async_ios_event(TIMEOUT_NO_IO_NSEC, pp->devname);
 		poll_async_io_timeout();
 		poll_io_err_stat(vecs, pp);
 	}
-	pthread_mutex_unlock(&paths->mutex);
-}
-
-static void cleanup_unlock(void *arg)
-{
-	pthread_mutex_unlock((pthread_mutex_t*) arg);
+	pthread_mutex_unlock(&io_err_pathvec_lock);
 }
 
 static void cleanup_exited(__attribute__((unused)) void *arg)
@@ -736,9 +708,14 @@ int start_io_err_stat_thread(void *data)
 		io_err_stat_log(4, "io_setup failed");
 		return 1;
 	}
-	paths = alloc_pathvec();
-	if (!paths)
+
+	pthread_mutex_lock(&io_err_pathvec_lock);
+	io_err_pathvec = vector_alloc();
+	if (!io_err_pathvec) {
+		pthread_mutex_unlock(&io_err_pathvec_lock);
 		goto destroy_ctx;
+	}
+	pthread_mutex_unlock(&io_err_pathvec_lock);
 
 	setup_thread_attr(&io_err_stat_attr, 32 * 1024, 0);
 	pthread_mutex_lock(&io_err_thread_lock);
@@ -763,7 +740,10 @@ int start_io_err_stat_thread(void *data)
 	return 0;
 
 out_free:
-	free_io_err_pathvec(paths);
+	pthread_mutex_lock(&io_err_pathvec_lock);
+	vector_free(io_err_pathvec);
+	io_err_pathvec = NULL;
+	pthread_mutex_unlock(&io_err_pathvec_lock);
 destroy_ctx:
 	io_destroy(ioctx);
 	io_err_stat_log(0, "failed to start io_error statistic thread");
@@ -779,6 +759,6 @@ void stop_io_err_stat_thread(void)
 		pthread_cancel(io_err_stat_thr);
 
 	pthread_join(io_err_stat_thr, NULL);
-	free_io_err_pathvec(paths);
+	free_io_err_pathvec();
 	io_destroy(ioctx);
 }
-- 
2.17.2


* [dm-devel] [PATCH 3/3] multipathd: avoid io_err_stat ABBA deadlock
  2021-01-12 23:52 [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes Benjamin Marzinski
  2021-01-12 23:52 ` [dm-devel] [PATCH 1/3] libmultipath: make find_err_path_by_dev() static Benjamin Marzinski
  2021-01-12 23:52 ` [dm-devel] [PATCH 2/3] multipathd: avoid io_err_stat crash during shutdown Benjamin Marzinski
@ 2021-01-12 23:52 ` Benjamin Marzinski
  2021-01-13 11:45   ` Martin Wilck
  2021-01-13 11:45 ` [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes Martin Wilck
  3 siblings, 1 reply; 8+ messages in thread
From: Benjamin Marzinski @ 2021-01-12 23:52 UTC (permalink / raw)
  To: Christophe Varoqui; +Cc: device-mapper development, Martin Wilck

When the checker thread enqueues paths for the io_err_stat thread to
check, it calls enqueue_io_err_stat_by_path() with the vecs lock held.
start_io_err_stat_thread() is also called with the vecs lock held.
These two functions both lock io_err_pathvec_lock. When the io_err_stat
thread updates the paths in vecs->pathvec in poll_io_err_stat(), it has
the io_err_pathvec_lock held, and then locks the vecs lock. This can
cause an ABBA deadlock.

To solve this, service_paths() no longer updates the paths in
vecs->pathvec with the io_err_pathvec_lock held.  It does this by moving
the io_err_stat_path from io_err_pathvec to a local vector when it needs
to update the path. After releasing the io_err_pathvec_lock, it goes
through this temporary vector, updates the paths with the vecs lock
held, and then frees everything.

This change fixes a bug in service_paths() where elements were being
deleted from io_err_pathvec, without the index being decremented,
causing the loop to skip elements. Also, service_paths() could be
cancelled while holding the io_err_pathvec_lock, so it should have a
cleanup handler.
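
The two fixes above (dropping the inner lock before updating paths, and decrementing the index after an in-loop deletion) can be sketched like this. An int array stands in for the vector type and `time_up()` is a stand-in predicate; none of these names are the actual multipath-tools API.

```c
#include <pthread.h>
#include <string.h>

#define MAX_PATHS 8

static pthread_mutex_t io_err_pathvec_lock = PTHREAD_MUTEX_INITIALIZER;
static int io_err_pathvec[MAX_PATHS];
static int io_err_len;

static int time_up(int p) { return p % 2 == 0; }  /* stand-in predicate */

/* Collect finished entries under the inner lock, then process them
 * after dropping it, so the (hypothetical) vecs lock is never taken
 * while io_err_pathvec_lock is held. Returns the number moved. */
int service_paths(int *done, int ndone_max)
{
	int tmp[MAX_PATHS];
	int ntmp = 0, i;

	pthread_mutex_lock(&io_err_pathvec_lock);
	for (i = 0; i < io_err_len; i++) {
		if (time_up(io_err_pathvec[i])) {
			tmp[ntmp++] = io_err_pathvec[i];
			/* delete slot i, then decrement i so the element
			 * shifted into slot i is not skipped */
			memmove(&io_err_pathvec[i], &io_err_pathvec[i + 1],
				(io_err_len - i - 1) * sizeof(int));
			io_err_len--;
			i--;
		}
	}
	pthread_mutex_unlock(&io_err_pathvec_lock);

	/* Outside the inner lock: safe to take the other lock here. */
	for (i = 0; i < ntmp && i < ndone_max; i++)
		done[i] = tmp[i];
	return ntmp;
}
```

Since the inner lock is released before the moved entries are processed, the caller never holds both locks at once, which breaks the A-then-B / B-then-A cycle.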

Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
---
 libmultipath/io_err_stat.c | 55 +++++++++++++++++++++-----------------
 1 file changed, 31 insertions(+), 24 deletions(-)

diff --git a/libmultipath/io_err_stat.c b/libmultipath/io_err_stat.c
index 4c6f7f08..a222594e 100644
--- a/libmultipath/io_err_stat.c
+++ b/libmultipath/io_err_stat.c
@@ -385,20 +385,6 @@ recover:
 	return 0;
 }
 
-static int delete_io_err_stat_by_addr(struct io_err_stat_path *p)
-{
-	int i;
-
-	i = find_slot(io_err_pathvec, p);
-	if (i != -1)
-		vector_del_slot(io_err_pathvec, i);
-
-	destroy_directio_ctx(p);
-	free_io_err_stat_path(p);
-
-	return 0;
-}
-
 static void account_async_io_state(struct io_err_stat_path *pp, int rc)
 {
 	switch (rc) {
@@ -415,17 +401,26 @@ static void account_async_io_state(struct io_err_stat_path *pp, int rc)
 	}
 }
 
-static int poll_io_err_stat(struct vectors *vecs, struct io_err_stat_path *pp)
+static int io_err_stat_time_up(struct io_err_stat_path *pp)
 {
 	struct timespec currtime, difftime;
-	struct path *path;
-	double err_rate;
 
 	if (clock_gettime(CLOCK_MONOTONIC, &currtime) != 0)
-		return 1;
+		return 0;
 	timespecsub(&currtime, &pp->start_time, &difftime);
 	if (difftime.tv_sec < pp->total_time)
 		return 0;
+	return 1;
+}
+
+static void end_io_err_stat(struct io_err_stat_path *pp)
+{
+	struct timespec currtime;
+	struct path *path;
+	double err_rate;
+
+	if (clock_gettime(CLOCK_MONOTONIC, &currtime) != 0)
+		currtime = pp->start_time;
 
 	io_err_stat_log(4, "%s: check end", pp->devname);
 
@@ -464,10 +459,6 @@ static int poll_io_err_stat(struct vectors *vecs, struct io_err_stat_path *pp)
 				pp->devname);
 	}
 	lock_cleanup_pop(vecs->lock);
-
-	delete_io_err_stat_by_addr(pp);
-
-	return 0;
 }
 
 static int send_each_async_io(struct dio_ctx *ct, int fd, char *dev)
@@ -639,17 +630,33 @@ static void process_async_ios_event(int timeout_nsecs, char *dev)
 
 static void service_paths(void)
 {
+	struct _vector _pathvec = {0};
+	/* avoid gcc warnings that &_pathvec will never be NULL in vector ops */
+	vector tmp_pathvec = &_pathvec;
 	struct io_err_stat_path *pp;
 	int i;
 
 	pthread_mutex_lock(&io_err_pathvec_lock);
+	pthread_cleanup_push(cleanup_unlock, &io_err_pathvec_lock);
 	vector_foreach_slot(io_err_pathvec, pp, i) {
 		send_batch_async_ios(pp);
 		process_async_ios_event(TIMEOUT_NO_IO_NSEC, pp->devname);
 		poll_async_io_timeout();
-		poll_io_err_stat(vecs, pp);
+		if (io_err_stat_time_up(pp)) {
+			if (!vector_alloc_slot(tmp_pathvec))
+				continue;
+			vector_del_slot(io_err_pathvec, i--);
+			vector_set_slot(tmp_pathvec, pp);
+		}
 	}
-	pthread_mutex_unlock(&io_err_pathvec_lock);
+	pthread_cleanup_pop(1);
+	vector_foreach_slot_backwards(tmp_pathvec, pp, i) {
+		end_io_err_stat(pp);
+		vector_del_slot(tmp_pathvec, i);
+		destroy_directio_ctx(pp);
+		free_io_err_stat_path(pp);
+	}
+	vector_reset(tmp_pathvec);
 }
 
 static void cleanup_exited(__attribute__((unused)) void *arg)
-- 
2.17.2


* Re: [dm-devel] [PATCH 2/3] multipathd: avoid io_err_stat crash during shutdown
  2021-01-12 23:52 ` [dm-devel] [PATCH 2/3] multipathd: avoid io_err_stat crash during shutdown Benjamin Marzinski
@ 2021-01-13 11:45   ` Martin Wilck
  0 siblings, 0 replies; 8+ messages in thread
From: Martin Wilck @ 2021-01-13 11:45 UTC (permalink / raw)
  To: bmarzins, christophe.varoqui; +Cc: dm-devel

On Tue, 2021-01-12 at 17:52 -0600, Benjamin Marzinski wrote:
> The checker thread is responsible for enqueueing paths for the
> io_err_stat thread to check. During shutdown, the io_err_stat thread
> is
> shut down and cleaned up before the checker thread.  There is no code
> to make sure that the checker thread isn't accessing the io_err_stat
> pathvec or its mutex while they are being freed, which can lead to
> memory corruption crashes.
> 
> To solve this, get rid of the io_err_stat_pathvec structure, and
> statically define the mutex.  This means that the mutex is always
> valid
> to access, and the io_err_stat pathvec can only be accessed while
> holding it.  If the io_err_stat thread has already been cleaned up
> when the checker tries to access the pathvec, it will be NULL, and
> the
> checker will simply fail to enqueue the path.
> 
> This change also fixes a bug in free_io_err_pathvec(), which
> previously
> only attempted to free the pathvec if it was not set, instead of when
> it
> was set.
> 
> Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>

Looks good to me. A few minor notes below.

Regards,
Martin



> ---
>  libmultipath/io_err_stat.c | 108 +++++++++++++++--------------------
> --
>  1 file changed, 44 insertions(+), 64 deletions(-)
> 
> diff --git a/libmultipath/io_err_stat.c b/libmultipath/io_err_stat.c
> index 2e48ee81..4c6f7f08 100644
> --- a/libmultipath/io_err_stat.c
> +++ b/libmultipath/io_err_stat.c
> @@ -46,12 +46,6 @@
>  #define io_err_stat_log(prio, fmt, args...) \
>         condlog(prio, "io error statistic: " fmt, ##args)
>  
> -
> -struct io_err_stat_pathvec {
> -       pthread_mutex_t mutex;
> -       vector          pathvec;
> -};
> -
>  struct dio_ctx {
>         struct timespec io_starttime;
>         unsigned int    blksize;
> @@ -75,9 +69,10 @@ static pthread_t     io_err_stat_thr;
>  
>  static pthread_mutex_t io_err_thread_lock =
> PTHREAD_MUTEX_INITIALIZER;
>  static pthread_cond_t io_err_thread_cond = PTHREAD_COND_INITIALIZER;
> +static pthread_mutex_t io_err_pathvec_lock =
> PTHREAD_MUTEX_INITIALIZER;
>  static int io_err_thread_running = 0;
>  
> -static struct io_err_stat_pathvec *paths;
> +static vector io_err_pathvec;
>  struct vectors *vecs;
>  io_context_t   ioctx;
>  
> @@ -207,46 +202,28 @@ static void free_io_err_stat_path(struct
> io_err_stat_path *p)
>         FREE(p);
>  }
>  
> -static struct io_err_stat_pathvec *alloc_pathvec(void)
> +static void cleanup_unlock(void *arg)

Nitpick: we've got the cleanup_mutex() utility function for this now.

>  {
> -       struct io_err_stat_pathvec *p;
> -       int r;
> -
> -       p = (struct io_err_stat_pathvec *)MALLOC(sizeof(*p));
> -       if (!p)
> -               return NULL;
> -       p->pathvec = vector_alloc();
> -       if (!p->pathvec)
> -               goto out_free_struct_pathvec;
> -       r = pthread_mutex_init(&p->mutex, NULL);
> -       if (r)
> -               goto out_free_member_pathvec;
> -
> -       return p;
> -
> -out_free_member_pathvec:
> -       vector_free(p->pathvec);
> -out_free_struct_pathvec:
> -       FREE(p);
> -       return NULL;
> +       pthread_mutex_unlock((pthread_mutex_t*) arg);
>  }
>  
> -static void free_io_err_pathvec(struct io_err_stat_pathvec *p)
> +static void free_io_err_pathvec(void)
>  {
>         struct io_err_stat_path *path;
>         int i;
>  
> -       if (!p)
> -               return;
> -       pthread_mutex_destroy(&p->mutex);
> -       if (!p->pathvec) {
> -               vector_foreach_slot(p->pathvec, path, i) {
> -                       destroy_directio_ctx(path);
> -                       free_io_err_stat_path(path);

Note: We should call destroy_directio_ctx() (only) from
free_io_err_stat_path().

> -               }
> -               vector_free(p->pathvec);
> +       pthread_mutex_lock(&io_err_pathvec_lock);
> +       pthread_cleanup_push(cleanup_unlock, &io_err_pathvec_lock);
> +       if (!io_err_pathvec)
> +               goto out;
> +       vector_foreach_slot(io_err_pathvec, path, i) {
> +               destroy_directio_ctx(path);
> +               free_io_err_stat_path(path);
>         }
> -       FREE(p);
> +       vector_free(io_err_pathvec);
> +       io_err_pathvec = NULL;
> +out:
> +       pthread_cleanup_pop(1);
>  }
>  
>  /*
> @@ -258,13 +235,13 @@ static int enqueue_io_err_stat_by_path(struct
> path *path)
>  {
>         struct io_err_stat_path *p;
>  
> -       pthread_mutex_lock(&paths->mutex);
> -       p = find_err_path_by_dev(paths->pathvec, path->dev);
> +       pthread_mutex_lock(&io_err_pathvec_lock);
> +       p = find_err_path_by_dev(io_err_pathvec, path->dev);
>         if (p) {
> -               pthread_mutex_unlock(&paths->mutex);
> +               pthread_mutex_unlock(&io_err_pathvec_lock);
>                 return 0;
>         }
> -       pthread_mutex_unlock(&paths->mutex);
> +       pthread_mutex_unlock(&io_err_pathvec_lock);
>  
>         p = alloc_io_err_stat_path();
>         if (!p)
> @@ -276,18 +253,18 @@ static int enqueue_io_err_stat_by_path(struct
> path *path)
>  
>         if (setup_directio_ctx(p))
>                 goto free_ioerr_path;
> -       pthread_mutex_lock(&paths->mutex);
> -       if (!vector_alloc_slot(paths->pathvec))
> +       pthread_mutex_lock(&io_err_pathvec_lock);
> +       if (!vector_alloc_slot(io_err_pathvec))
>                 goto unlock_destroy;
> -       vector_set_slot(paths->pathvec, p);
> -       pthread_mutex_unlock(&paths->mutex);
> +       vector_set_slot(io_err_pathvec, p);
> +       pthread_mutex_unlock(&io_err_pathvec_lock);
>  
>         io_err_stat_log(2, "%s: enqueue path %s to check",
>                         path->mpp->alias, path->dev);

Another note: This is not a level 2 log message. IMO the log levels of
the io_err_stat code are generally too "low"; the only messages we want
to see at 2 would be if a path's "marginal" status changes. Internals
of the algorithm should log at level 3 and 4.

>         return 0;
>  
>  unlock_destroy:
> -       pthread_mutex_unlock(&paths->mutex);
> +       pthread_mutex_unlock(&io_err_pathvec_lock);
>         destroy_directio_ctx(p);
>  free_ioerr_path:
>         free_io_err_stat_path(p);
> @@ -412,9 +389,9 @@ static int delete_io_err_stat_by_addr(struct
> io_err_stat_path *p)
>  {
>         int i;
>  
> -       i = find_slot(paths->pathvec, p);
> +       i = find_slot(io_err_pathvec, p);
>         if (i != -1)
> -               vector_del_slot(paths->pathvec, i);
> +               vector_del_slot(io_err_pathvec, i);
>  
>         destroy_directio_ctx(p);
>         free_io_err_stat_path(p);
> @@ -585,7 +562,7 @@ static void poll_async_io_timeout(void)
>  
>         if (clock_gettime(CLOCK_MONOTONIC, &curr_time) != 0)
>                 return;
> -       vector_foreach_slot(paths->pathvec, pp, i) {
> +       vector_foreach_slot(io_err_pathvec, pp, i) {
>                 for (j = 0; j < CONCUR_NR_EVENT; j++) {
>                         rc = try_to_cancel_timeout_io(pp-
> >dio_ctx_array + j,
>                                         &curr_time, pp->devname);
> @@ -631,7 +608,7 @@ static void handle_async_io_done_event(struct
> io_event *io_evt)
>         int rc = PATH_UNCHECKED;
>         int i, j;
>  
> -       vector_foreach_slot(paths->pathvec, pp, i) {
> +       vector_foreach_slot(io_err_pathvec, pp, i) {
>                 for (j = 0; j < CONCUR_NR_EVENT; j++) {
>                         ct = pp->dio_ctx_array + j;
>                         if (&ct->io == io_evt->obj) {
> @@ -665,19 +642,14 @@ static void service_paths(void)
>         struct io_err_stat_path *pp;
>         int i;
>  
> -       pthread_mutex_lock(&paths->mutex);
> -       vector_foreach_slot(paths->pathvec, pp, i) {
> +       pthread_mutex_lock(&io_err_pathvec_lock);
> +       vector_foreach_slot(io_err_pathvec, pp, i) {
>                 send_batch_async_ios(pp);
>                 process_async_ios_event(TIMEOUT_NO_IO_NSEC, pp-
> >devname);

We should actually use pthread_cleanup_push() here (update: I see you
changed this in patch 3/3). We should also call pthread_testcancel()
before calling io_getevents(), which is not a cancellation point but
might block.
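
The pattern suggested here can be sketched as below. This is a hedged illustration: `cleanup_mutex` is a local stand-in for the utility function mentioned earlier, and the blocking io_getevents() call is elided.

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the cleanup_mutex() utility: runs if the thread is
 * cancelled (or on pthread_cleanup_pop(1)), so the mutex is never
 * left locked by a cancelled thread. */
static void cleanup_mutex(void *arg)
{
	pthread_mutex_unlock((pthread_mutex_t *)arg);
}

static void *worker(void *arg)
{
	pthread_mutex_lock(&lock);
	pthread_cleanup_push(cleanup_mutex, &lock);

	/* A call like io_getevents() is not a cancellation point, so add
	 * an explicit one before anything that may block indefinitely. */
	pthread_testcancel();
	/* ... blocking work would go here ... */

	pthread_cleanup_pop(1);	/* unlocks on the normal path too */
	return arg;
}
```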

>                 poll_async_io_timeout();
>                 poll_io_err_stat(vecs, pp);
>         }
> -       pthread_mutex_unlock(&paths->mutex);
> -}
> -
> -static void cleanup_unlock(void *arg)
> -{
> -       pthread_mutex_unlock((pthread_mutex_t*) arg);
> +       pthread_mutex_unlock(&io_err_pathvec_lock);
>  }
>  
>  static void cleanup_exited(__attribute__((unused)) void *arg)
> @@ -736,9 +708,14 @@ int start_io_err_stat_thread(void *data)
>                 io_err_stat_log(4, "io_setup failed");
>                 return 1;
>         }
> -       paths = alloc_pathvec();
> -       if (!paths)
> +
> +       pthread_mutex_lock(&io_err_pathvec_lock);
> +       io_err_pathvec = vector_alloc();
> +       if (!io_err_pathvec) {
> +               pthread_mutex_unlock(&io_err_pathvec_lock);
>                 goto destroy_ctx;
> +       }
> +       pthread_mutex_unlock(&io_err_pathvec_lock);
>  
>         setup_thread_attr(&io_err_stat_attr, 32 * 1024, 0);
>         pthread_mutex_lock(&io_err_thread_lock);
> @@ -763,7 +740,10 @@ int start_io_err_stat_thread(void *data)
>         return 0;
>  
>  out_free:
> -       free_io_err_pathvec(paths);
> +       pthread_mutex_lock(&io_err_pathvec_lock);
> +       vector_free(io_err_pathvec);
> +       io_err_pathvec = NULL;
> +       pthread_mutex_unlock(&io_err_pathvec_lock);
>  destroy_ctx:
>         io_destroy(ioctx);
>         io_err_stat_log(0, "failed to start io_error statistic
> thread");
> @@ -779,6 +759,6 @@ void stop_io_err_stat_thread(void)
>                 pthread_cancel(io_err_stat_thr);
>  
>         pthread_join(io_err_stat_thr, NULL);
> -       free_io_err_pathvec(paths);
> +       free_io_err_pathvec();
>         io_destroy(ioctx);
>  }

-- 
Dr. Martin Wilck <mwilck@suse.com>, Tel. +49 (0)911 74053 2107
SUSE Software Solutions Germany GmbH
HRB 36809, AG Nürnberg GF: Felix Imendörffer




* Re: [dm-devel] [PATCH 3/3] multipathd: avoid io_err_stat ABBA deadlock
  2021-01-12 23:52 ` [dm-devel] [PATCH 3/3] multipathd: avoid io_err_stat ABBA deadlock Benjamin Marzinski
@ 2021-01-13 11:45   ` Martin Wilck
  0 siblings, 0 replies; 8+ messages in thread
From: Martin Wilck @ 2021-01-13 11:45 UTC (permalink / raw)
  To: bmarzins, christophe.varoqui; +Cc: dm-devel

On Tue, 2021-01-12 at 17:52 -0600, Benjamin Marzinski wrote:
> When the checker thread enqueues paths for the io_err_stat thread to
> check, it calls enqueue_io_err_stat_by_path() with the vecs lock
> held.
> start_io_err_stat_thread() is also called with the vecs lock held.
> These two functions both lock io_err_pathvec_lock. When the
> io_err_stat
> thread updates the paths in vecs->pathvec in poll_io_err_stat(), it
> has
> the io_err_pathvec_lock held, and then locks the vecs lock. This can
> cause an ABBA deadlock.
> 
> To solve this, service_paths() no longer updates the paths in
> vecs->pathvec with the io_err_pathvec_lock held.  It does this by
> moving
> the io_err_stat_path from io_err_pathvec to a local vector when it
> needs
> to update the path. After releasing the io_err_pathvec_lock, it goes
> through this temporary vector, updates the paths with the vecs lock
> held, and then frees everything.
> 
> This change fixes a bug in service_paths() where elements were being
> deleted from io_err_pathvec, without the index being decremented,
> causing the loop to skip elements. Also, service_paths() could be
> cancelled while holding the io_err_pathvec_lock, so it should have a
> cleanup handler.
> 
> Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>

Looks good. Only two nits below.

> ---
>  libmultipath/io_err_stat.c | 55 +++++++++++++++++++++---------------
> --
>  1 file changed, 31 insertions(+), 24 deletions(-)
> 
> diff --git a/libmultipath/io_err_stat.c b/libmultipath/io_err_stat.c
> index 4c6f7f08..a222594e 100644
> --- a/libmultipath/io_err_stat.c
> +++ b/libmultipath/io_err_stat.c
> @@ -385,20 +385,6 @@ recover:
>         return 0;
>  }
>  
> -static int delete_io_err_stat_by_addr(struct io_err_stat_path *p)
> -{
> -       int i;
> -
> -       i = find_slot(io_err_pathvec, p);
> -       if (i != -1)
> -               vector_del_slot(io_err_pathvec, i);
> -
> -       destroy_directio_ctx(p);
> -       free_io_err_stat_path(p);
> -
> -       return 0;
> -}
> -
>  static void account_async_io_state(struct io_err_stat_path *pp, int
> rc)
>  {
>         switch (rc) {
> @@ -415,17 +401,26 @@ static void account_async_io_state(struct
> io_err_stat_path *pp, int rc)
>         }
>  }
>  
> -static int poll_io_err_stat(struct vectors *vecs, struct
> io_err_stat_path *pp)
> +static int io_err_stat_time_up(struct io_err_stat_path *pp)
>  {
>         struct timespec currtime, difftime;
> -       struct path *path;
> -       double err_rate;
>  
>         if (clock_gettime(CLOCK_MONOTONIC, &currtime) != 0)
> -               return 1;
> +               return 0;

This can't fail. Please change it to get_monotonic_time().

>         timespecsub(&currtime, &pp->start_time, &difftime);
>         if (difftime.tv_sec < pp->total_time)
>                 return 0;
> +       return 1;
> +}
> +
> +static void end_io_err_stat(struct io_err_stat_path *pp)
> +{
> +       struct timespec currtime;
> +       struct path *path;
> +       double err_rate;
> +
> +       if (clock_gettime(CLOCK_MONOTONIC, &currtime) != 0)
> +               currtime = pp->start_time;

See above.


>  
>         io_err_stat_log(4, "%s: check end", pp->devname);
>  
> @@ -464,10 +459,6 @@ static int poll_io_err_stat(struct vectors
> *vecs, struct io_err_stat_path *pp)
>                                 pp->devname);
>         }
>         lock_cleanup_pop(vecs->lock);
> -
> -       delete_io_err_stat_by_addr(pp);
> -
> -       return 0;
>  }
>  
>  static int send_each_async_io(struct dio_ctx *ct, int fd, char *dev)
> @@ -639,17 +630,33 @@ static void process_async_ios_event(int
> timeout_nsecs, char *dev)
>  
>  static void service_paths(void)
>  {
> +       struct _vector _pathvec = {0};
> +       /* avoid gcc warnings that &_pathvec will never be NULL in
> vector ops */
> +       vector tmp_pathvec = &_pathvec;
>         struct io_err_stat_path *pp;
>         int i;
>  
>         pthread_mutex_lock(&io_err_pathvec_lock);
> +       pthread_cleanup_push(cleanup_unlock, &io_err_pathvec_lock);
>         vector_foreach_slot(io_err_pathvec, pp, i) {
>                 send_batch_async_ios(pp);
>                 process_async_ios_event(TIMEOUT_NO_IO_NSEC, pp-
> >devname);
>                 poll_async_io_timeout();
> -               poll_io_err_stat(vecs, pp);
> +               if (io_err_stat_time_up(pp)) {
> +                       if (!vector_alloc_slot(tmp_pathvec))
> +                               continue;
> +                       vector_del_slot(io_err_pathvec, i--);
> +                       vector_set_slot(tmp_pathvec, pp);
> +               }
>         }
> -       pthread_mutex_unlock(&io_err_pathvec_lock);
> +       pthread_cleanup_pop(1);
> +       vector_foreach_slot_backwards(tmp_pathvec, pp, i) {
> +               end_io_err_stat(pp);
> +               vector_del_slot(tmp_pathvec, i);
> +               destroy_directio_ctx(pp);
> +               free_io_err_stat_path(pp);
> +       }
> +       vector_reset(tmp_pathvec);
>  }
>  
>  static void cleanup_exited(__attribute__((unused)) void *arg)

-- 
Dr. Martin Wilck <mwilck@suse.com>, Tel. +49 (0)911 74053 2107
SUSE Software Solutions Germany GmbH
HRB 36809, AG Nürnberg GF: Felix Imendörffer




* Re: [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes
  2021-01-12 23:52 [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes Benjamin Marzinski
                   ` (2 preceding siblings ...)
  2021-01-12 23:52 ` [dm-devel] [PATCH 3/3] multipathd: avoid io_err_stat ABBA deadlock Benjamin Marzinski
@ 2021-01-13 11:45 ` Martin Wilck
  2021-01-13 17:09   ` Benjamin Marzinski
  3 siblings, 1 reply; 8+ messages in thread
From: Martin Wilck @ 2021-01-13 11:45 UTC (permalink / raw)
  To: bmarzins, christophe.varoqui; +Cc: dm-devel

On Tue, 2021-01-12 at 17:52 -0600, Benjamin Marzinski wrote:
> I found an ABBA deadlock in the io_err_stat marginal path code, and in
> the process of fixing it, noticed a potential crash on shutdown. This
> patchset addresses both of the issues.
> 
> Benjamin Marzinski (3):
>   libmultipath: make find_err_path_by_dev() static
>   multipathd: avoid io_err_stat crash during shutdown
>   multipathd: avoid io_err_stat ABBA deadlock
> 
>  libmultipath/io_err_stat.c | 159 +++++++++++++++++--------------------
>  1 file changed, 73 insertions(+), 86 deletions(-)
> 

Thanks, the series looks good, I have only minor nits.

I've made some remarks about the io_err_stat code in the review. While
you're working at it, would you be willing to fix those issues too?

Cheers,
Martin

-- 
Dr. Martin Wilck <mwilck@suse.com>, Tel. +49 (0)911 74053 2107
SUSE Software Solutions Germany GmbH
HRB 36809, AG Nürnberg GF: Felix Imendörffer





^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes
  2021-01-13 11:45 ` [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes Martin Wilck
@ 2021-01-13 17:09   ` Benjamin Marzinski
  0 siblings, 0 replies; 8+ messages in thread
From: Benjamin Marzinski @ 2021-01-13 17:09 UTC (permalink / raw)
  To: Martin Wilck; +Cc: dm-devel

On Wed, Jan 13, 2021 at 11:45:55AM +0000, Martin Wilck wrote:
> On Tue, 2021-01-12 at 17:52 -0600, Benjamin Marzinski wrote:
> > I found an ABBA deadlock in the io_err_stat marginal path code, and in
> > the process of fixing it, noticed a potential crash on shutdown. This
> > patchset addresses both of the issues.
> > 
> > Benjamin Marzinski (3):
> >   libmultipath: make find_err_path_by_dev() static
> >   multipathd: avoid io_err_stat crash during shutdown
> >   multipathd: avoid io_err_stat ABBA deadlock
> > 
> >  libmultipath/io_err_stat.c | 159 +++++++++++++++++--------------------
> >  1 file changed, 73 insertions(+), 86 deletions(-)
> > 
> 
> Thanks, the series looks good, I have only minor nits.
> 
> I've made some remarks about the io_err_stat code in the review. While
> you're working at it, would you be willing to fix those issues too?

Sure. I'll send out a v2 patchset that addresses all your issues.

-Ben

> 
> Cheers,
> Martin
> 
> -- 
> Dr. Martin Wilck <mwilck@suse.com>, Tel. +49 (0)911 74053 2107
> SUSE Software Solutions Germany GmbH
> HRB 36809, AG Nürnberg GF: Felix Imendörffer
> 



^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2021-01-13 17:10 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-01-12 23:52 [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes Benjamin Marzinski
2021-01-12 23:52 ` [dm-devel] [PATCH 1/3] libmultipath: make find_err_path_by_dev() static Benjamin Marzinski
2021-01-12 23:52 ` [dm-devel] [PATCH 2/3] multipathd: avoid io_err_stat crash during shutdown Benjamin Marzinski
2021-01-13 11:45   ` Martin Wilck
2021-01-12 23:52 ` [dm-devel] [PATCH 3/3] multipathd: avoid io_err_stat ABBA deadlock Benjamin Marzinski
2021-01-13 11:45   ` Martin Wilck
2021-01-13 11:45 ` [dm-devel] [PATCH 0/3] Multipath io_err_stat fixes Martin Wilck
2021-01-13 17:09   ` Benjamin Marzinski
