* [PATCH 2/3] migration/multifd: fix destroyed mutex access in terminating multifd threads
       [not found] <20191023034738.10309-1-cenjiahui@huawei.com>
@ 2019-10-23  3:47 ` cenjiahui
  2020-01-09 10:08   ` Juan Quintela
From: cenjiahui @ 2019-10-23  3:47 UTC
  To: quintela, dgilbert; +Cc: fangying1, Jiahui Cen, qemu-devel, peterx, zhouyibo3

From: Jiahui Cen <cenjiahui@huawei.com>

One multifd thread will lock all the other multifd threads' IOChannel
mutexes to inform them to quit, by setting p->quit or by shutting down
p->c. In this scenario, if some multifd threads have already terminated
and multifd_load_cleanup/multifd_save_cleanup has destroyed their
mutexes, trying to lock those mutexes results in destroyed mutex access.
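
For reference, here is a simplified sketch of the racing side (loosely
modeled on the multifd_send_terminate_threads() loop shown in the
backtrace below; field names follow the patch, but this is an
approximation, not the exact upstream code):

    static void multifd_send_terminate_threads(Error *err)
    {
        int i;

        for (i = 0; i < migrate_multifd_channels(); i++) {
            MultiFDSendParams *p = &multifd_send_state->params[i];

            /* If multifd_save_cleanup() has already called
             * qemu_mutex_destroy(&p->mutex) for this channel, this lock
             * operates on a destroyed mutex and the assertion in
             * qemu_mutex_lock_impl() aborts, as in the backtrace below. */
            qemu_mutex_lock(&p->mutex);
            p->quit = true;
            qemu_mutex_unlock(&p->mutex);
        }
    }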

Here is the coredump stack:
    #0  0x00007f81a2794437 in raise () from /usr/lib64/libc.so.6
    #1  0x00007f81a2795b28 in abort () from /usr/lib64/libc.so.6
    #2  0x00007f81a278d1b6 in __assert_fail_base () from /usr/lib64/libc.so.6
    #3  0x00007f81a278d262 in __assert_fail () from /usr/lib64/libc.so.6
    #4  0x000055eb1bfadbd3 in qemu_mutex_lock_impl (mutex=0x55eb1e2d1988, file=<optimized out>, line=<optimized out>) at util/qemu-thread-posix.c:64
    #5  0x000055eb1bb4564a in multifd_send_terminate_threads (err=<optimized out>) at migration/ram.c:1015
    #6  0x000055eb1bb4bb7f in multifd_send_thread (opaque=0x55eb1e2d19f8) at migration/ram.c:1171
    #7  0x000055eb1bfad628 in qemu_thread_start (args=0x55eb1e170450) at util/qemu-thread-posix.c:502
    #8  0x00007f81a2b36df5 in start_thread () from /usr/lib64/libpthread.so.0
    #9  0x00007f81a286048d in clone () from /usr/lib64/libc.so.6

To fix it up, let's destroy the mutexes only after all the other multifd
threads have been terminated.

Signed-off-by: Jiahui Cen <cenjiahui@huawei.com>
Signed-off-by: Ying Fang <fangying1@huawei.com>
---
 migration/ram.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/migration/ram.c b/migration/ram.c
index dc63692..24a8906 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1032,6 +1032,10 @@ void multifd_save_cleanup(void)
         if (p->running) {
             qemu_thread_join(&p->thread);
         }
+    }
+    for (i = 0; i < migrate_multifd_channels(); i++) {
+        MultiFDSendParams *p = &multifd_send_state->params[i];
+
         socket_send_channel_destroy(p->c);
         p->c = NULL;
         qemu_mutex_destroy(&p->mutex);
@@ -1308,6 +1312,10 @@ int multifd_load_cleanup(Error **errp)
             qemu_sem_post(&p->sem_sync);
             qemu_thread_join(&p->thread);
         }
+    }
+    for (i = 0; i < migrate_multifd_channels(); i++) {
+        MultiFDRecvParams *p = &multifd_recv_state->params[i];
+
         object_unref(OBJECT(p->c));
         p->c = NULL;
         qemu_mutex_destroy(&p->mutex);
-- 
1.8.3.1

* Re: [PATCH 2/3] migration/multifd: fix destroyed mutex access in terminating multifd threads
  2019-10-23  3:47 ` [PATCH 2/3] migration/multifd: fix destroyed mutex access in terminating multifd threads cenjiahui
@ 2020-01-09 10:08   ` Juan Quintela
From: Juan Quintela @ 2020-01-09 10:08 UTC
  To: cenjiahui; +Cc: fangying1, zhouyibo3, dgilbert, peterx, qemu-devel

cenjiahui <cenjiahui@huawei.com> wrote:
> From: Jiahui Cen <cenjiahui@huawei.com>
>
> One multifd thread will lock all the other multifd threads' IOChannel
> mutexes to inform them to quit, by setting p->quit or by shutting down
> p->c. In this scenario, if some multifd threads have already terminated
> and multifd_load_cleanup/multifd_save_cleanup has destroyed their
> mutexes, trying to lock those mutexes results in destroyed mutex access.
>
> Here is the coredump stack:
>     #0  0x00007f81a2794437 in raise () from /usr/lib64/libc.so.6
>     #1  0x00007f81a2795b28 in abort () from /usr/lib64/libc.so.6
>     #2  0x00007f81a278d1b6 in __assert_fail_base () from /usr/lib64/libc.so.6
>     #3  0x00007f81a278d262 in __assert_fail () from /usr/lib64/libc.so.6
>     #4  0x000055eb1bfadbd3 in qemu_mutex_lock_impl (mutex=0x55eb1e2d1988, file=<optimized out>, line=<optimized out>) at util/qemu-thread-posix.c:64
>     #5  0x000055eb1bb4564a in multifd_send_terminate_threads (err=<optimized out>) at migration/ram.c:1015
>     #6  0x000055eb1bb4bb7f in multifd_send_thread (opaque=0x55eb1e2d19f8) at migration/ram.c:1171
>     #7  0x000055eb1bfad628 in qemu_thread_start (args=0x55eb1e170450) at util/qemu-thread-posix.c:502
>     #8  0x00007f81a2b36df5 in start_thread () from /usr/lib64/libpthread.so.0
>     #9  0x00007f81a286048d in clone () from /usr/lib64/libc.so.6
>
> To fix it up, let's destroy the mutexes only after all the other multifd
> threads have been terminated.
>
> Signed-off-by: Jiahui Cen <cenjiahui@huawei.com>
> Signed-off-by: Ying Fang <fangying1@huawei.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>


