* [Qemu-devel] [RFC PATCH] qemu_mutux: make the iothread recursive (MTTCG)
@ 2015-06-23 12:21 Alex Bennée
2015-06-23 12:32 ` Paolo Bonzini
2015-06-23 14:21 ` Frederic Konrad
0 siblings, 2 replies; 5+ messages in thread
From: Alex Bennée @ 2015-06-23 12:21 UTC (permalink / raw)
To: mttcg, mark.burton, fred.konrad, pbonzini
Cc: peter.maydell, Alex Bennée, qemu-devel
While I was testing multi-threaded TCG I discovered one consequence of
using locking around memory_region_dispatch: virtio transactions
could deadlock trying to grab the main mutex. This is due to the
virtio device writing data back into system memory:
#0 0x00007ffff119dcc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007ffff11a10d8 in __GI_abort () at abort.c:89
#2 0x00005555555f9b24 in error_exit (err=<optimised out>, msg=msg@entry=0x5555559f3710 <__func__.6011> "qemu_mutex_lock") at util/qemu-thread-posix.c:48
#3 0x000055555594d630 in qemu_mutex_lock (mutex=mutex@entry=0x555555e62e60 <qemu_global_mutex>) at util/qemu-thread-posix.c:79
#4 0x0000555555631a84 in qemu_mutex_lock_iothread () at /home/alex/lsrc/qemu/qemu.git/cpus.c:1128
#5 0x000055555560dd1a in stw_phys_internal (endian=DEVICE_LITTLE_ENDIAN, val=1, addr=<optimised out>, as=0x555555e08060 <address_space_memory>) at /home/alex/lsrc/qemu/qemu.git/exec.c:3010
#6 stw_le_phys (as=as@entry=0x555555e08060 <address_space_memory>, addr=<optimised out>, val=1) at /home/alex/lsrc/qemu/qemu.git/exec.c:3024
#7 0x0000555555696ae5 in virtio_stw_phys (vdev=<optimised out>, value=<optimised out>, pa=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/include/hw/virtio/virtio-access.h:61
#8 vring_avail_event (vq=0x55555648dc00, vq=0x55555648dc00, vq=0x55555648dc00, val=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/hw/virtio/virtio.c:214
#9 virtqueue_pop (vq=0x55555648dc00, elem=elem@entry=0x7fff1403fd98) at /home/alex/lsrc/qemu/qemu.git/hw/virtio/virtio.c:472
#10 0x0000555555653cd1 in virtio_blk_get_request (s=0x555556486830) at /home/alex/lsrc/qemu/qemu.git/hw/block/virtio-blk.c:122
#11 virtio_blk_handle_output (vdev=<optimised out>, vq=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/hw/block/virtio-blk.c:446
#12 0x00005555556414e1 in access_with_adjusted_size (addr=addr@entry=80, value=value@entry=0x7fffa93052b0, size=size@entry=4, access_size_min=<optimised out>,
access_size_max=<optimised out>, access=0x5555556413e0 <memory_region_write_accessor>, mr=0x555556b80388) at /home/alex/lsrc/qemu/qemu.git/memory.c:461
#13 0x00005555556471b7 in memory_region_dispatch_write (size=4, data=0, addr=80, mr=0x555556b80388) at /home/alex/lsrc/qemu/qemu.git/memory.c:1103
#14 io_mem_write (mr=mr@entry=0x555556b80388, addr=80, val=<optimised out>, size=size@entry=4) at /home/alex/lsrc/qemu/qemu.git/memory.c:2003
#15 0x000055555560ad6b in address_space_rw_internal (as=<optimised out>, addr=167788112, buf=buf@entry=0x7fffa9305380 "", len=4, is_write=is_write@entry=true, unlocked=<optimised out>,
unlocked@entry=false) at /home/alex/lsrc/qemu/qemu.git/exec.c:2318
#16 0x000055555560aea8 in address_space_rw (is_write=true, len=<optimised out>, buf=0x7fffa9305380 "", addr=<optimised out>, as=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/exec.c:2392
#17 address_space_write (len=<optimised out>, buf=0x7fffa9305380 "", addr=<optimised out>, as=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/exec.c:2404
#18 subpage_write (opaque=<optimised out>, addr=<optimised out>, value=<optimised out>, len=<optimised out>) at /home/alex/lsrc/qemu/qemu.git/exec.c:1963
#19 0x00005555556414e1 in access_with_adjusted_size (addr=addr@entry=592, value=value@entry=0x7fffa9305420, size=size@entry=4, access_size_min=<optimised out>,
access_size_max=<optimised out>, access=0x5555556413e0 <memory_region_write_accessor>, mr=0x555556bfca20) at /home/alex/lsrc/qemu/qemu.git/memory.c:461
#20 0x00005555556471b7 in memory_region_dispatch_write (size=4, data=0, addr=592, mr=0x555556bfca20) at /home/alex/lsrc/qemu/qemu.git/memory.c:1103
#21 io_mem_write (mr=mr@entry=0x555556bfca20, addr=addr@entry=592, val=val@entry=0, size=size@entry=4) at /home/alex/lsrc/qemu/qemu.git/memory.c:2003
#22 0x000055555564ce16 in io_writel (retaddr=140736492182797, addr=4027616848, val=0, physaddr=592, env=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/softmmu_template.h:386
#23 helper_le_stl_mmu (env=0x55555649e9b0, addr=<optimised out>, val=0, mmu_idx=<optimised out>, retaddr=140736492182797) at /home/alex/lsrc/qemu/qemu.git/softmmu_template.h:426
#24 0x00007fffc49f9d0f in code_gen_buffer ()
#25 0x00005555556109dc in cpu_tb_exec (tb_ptr=0x7fffc49f9c60 <code_gen_buffer+8371296> "A\213n\374\205\355\017\205\233\001", cpu=0x555556496750)
at /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:179
#26 cpu_arm_exec (env=env@entry=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/cpu-exec.c:524
#27 0x0000555555630f28 in tcg_cpu_exec (env=0x55555649e9b0) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1344
#28 tcg_exec_all (cpu=0x555556496750) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1392
#29 qemu_tcg_cpu_thread_fn (arg=0x555556496750) at /home/alex/lsrc/qemu/qemu.git/cpus.c:1037
#30 0x00007ffff1534182 in start_thread (arg=0x7fffa9306700) at pthread_create.c:312
#31 0x00007ffff126147d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111
The fix in this patch makes the global/iothread mutex recursive (re-entry
is only permitted from the thread that already holds it). As all other
threads remain blocked, memory stays consistent throughout.

This seems neater than having to do a trylock each time.
Tested-by: Alex Bennée <alex.bennee@linaro.org>
---
cpus.c | 2 +-
include/qemu/thread.h | 1 +
util/qemu-thread-posix.c | 12 ++++++++++++
util/qemu-thread-win32.c | 5 +++++
4 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/cpus.c b/cpus.c
index d161bb9..412ab04 100644
--- a/cpus.c
+++ b/cpus.c
@@ -804,7 +804,7 @@ void qemu_init_cpu_loop(void)
qemu_cond_init(&qemu_pause_cond);
qemu_cond_init(&qemu_work_cond);
qemu_cond_init(&qemu_io_proceeded_cond);
- qemu_mutex_init(&qemu_global_mutex);
+ qemu_mutex_init_recursive(&qemu_global_mutex);
qemu_thread_get_self(&io_thread);
}
diff --git a/include/qemu/thread.h b/include/qemu/thread.h
index c9389f4..4519d2f 100644
--- a/include/qemu/thread.h
+++ b/include/qemu/thread.h
@@ -22,6 +22,7 @@ typedef struct QemuThread QemuThread;
#define QEMU_THREAD_DETACHED 1
void qemu_mutex_init(QemuMutex *mutex);
+void qemu_mutex_init_recursive(QemuMutex *mutex);
void qemu_mutex_destroy(QemuMutex *mutex);
void __qemu_mutex_lock(QemuMutex *mutex, const char *func, int line);
#define qemu_mutex_lock(mutex) __qemu_mutex_lock(mutex, __func__, __LINE__)
diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
index 98eb0f0..ba2fb97 100644
--- a/util/qemu-thread-posix.c
+++ b/util/qemu-thread-posix.c
@@ -57,6 +57,18 @@ void qemu_mutex_init(QemuMutex *mutex)
error_exit(err, __func__);
}
+void qemu_mutex_init_recursive(QemuMutex *mutex)
+{
+ int err;
+ pthread_mutexattr_t attr;
+
+ pthread_mutexattr_init(&attr);
+ pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE_NP);
+ err = pthread_mutex_init(&mutex->lock, &attr);
+ if (err)
+ error_exit(err, __func__);
+}
+
void qemu_mutex_destroy(QemuMutex *mutex)
{
int err;
diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c
index 406b52f..f055067 100644
--- a/util/qemu-thread-win32.c
+++ b/util/qemu-thread-win32.c
@@ -44,6 +44,11 @@ void qemu_mutex_init(QemuMutex *mutex)
InitializeCriticalSection(&mutex->lock);
}
+void qemu_mutex_init_recursive(QemuMutex *mutex)
+{
+ error_exit(0, "%s: not implemented for win32\n", __func__);
+}
+
void qemu_mutex_destroy(QemuMutex *mutex)
{
assert(mutex->owner == 0);
--
2.4.3
* Re: [Qemu-devel] [RFC PATCH] qemu_mutux: make the iothread recursive (MTTCG)
2015-06-23 12:21 [Qemu-devel] [RFC PATCH] qemu_mutux: make the iothread recursive (MTTCG) Alex Bennée
@ 2015-06-23 12:32 ` Paolo Bonzini
2015-06-23 12:55 ` Alex Bennée
2015-06-23 14:21 ` Frederic Konrad
1 sibling, 1 reply; 5+ messages in thread
From: Paolo Bonzini @ 2015-06-23 12:32 UTC (permalink / raw)
To: Alex Bennée, mttcg, mark.burton, fred.konrad
Cc: peter.maydell, qemu-devel
On 23/06/2015 14:21, Alex Bennée wrote:
> While I was testing multi-threaded TCG I discovered one consequence of
> using locking around memory_region_dispatch: virtio transactions
> could deadlock trying to grab the main mutex. This is due to the
> virtio device writing data back into system memory:
Have you looked at the patches I posted last week?
http://thread.gmane.org/gmane.comp.emulators.qemu/345258
Paolo
* Re: [Qemu-devel] [RFC PATCH] qemu_mutux: make the iothread recursive (MTTCG)
2015-06-23 12:32 ` Paolo Bonzini
@ 2015-06-23 12:55 ` Alex Bennée
0 siblings, 0 replies; 5+ messages in thread
From: Alex Bennée @ 2015-06-23 12:55 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: mttcg, peter.maydell, mark.burton, qemu-devel, fred.konrad
Paolo Bonzini <pbonzini@redhat.com> writes:
> On 23/06/2015 14:21, Alex Bennée wrote:
>> While I was testing multi-threaded TCG I discovered one consequence of
>> using locking around memory_region_dispatch: virtio transactions
>> could deadlock trying to grab the main mutex. This is due to the
>> virtio device writing data back into system memory:
>
> Have you looked at the patches I posted last week?
>
> http://thread.gmane.org/gmane.comp.emulators.qemu/345258
No, but I will do now ;-)
I sent this to the list so Mark and Fred could see what I'd done when I
hit the virt-io problem.
--
Alex Bennée
* Re: [Qemu-devel] [RFC PATCH] qemu_mutux: make the iothread recursive (MTTCG)
2015-06-23 12:21 [Qemu-devel] [RFC PATCH] qemu_mutux: make the iothread recursive (MTTCG) Alex Bennée
2015-06-23 12:32 ` Paolo Bonzini
@ 2015-06-23 14:21 ` Frederic Konrad
2015-06-23 14:23 ` Paolo Bonzini
1 sibling, 1 reply; 5+ messages in thread
From: Frederic Konrad @ 2015-06-23 14:21 UTC (permalink / raw)
To: Alex Bennée, mttcg, mark.burton, pbonzini; +Cc: peter.maydell, qemu-devel
On 23/06/2015 14:21, Alex Bennée wrote:
> While I was testing multi-threaded TCG I discovered one consequence of
> using locking around memory_region_dispatch: virtio transactions
> could deadlock trying to grab the main mutex. This is due to the
> virtio device writing data back into system memory:
Hi,
Thanks for that.
Didn't QEMU abort in this case with a pthread error? Maybe that changed
since the last time I hit this error.
Thanks,
Fred
* Re: [Qemu-devel] [RFC PATCH] qemu_mutux: make the iothread recursive (MTTCG)
2015-06-23 14:21 ` Frederic Konrad
@ 2015-06-23 14:23 ` Paolo Bonzini
0 siblings, 0 replies; 5+ messages in thread
From: Paolo Bonzini @ 2015-06-23 14:23 UTC (permalink / raw)
To: Frederic Konrad, Alex Bennée, mttcg, mark.burton
Cc: peter.maydell, qemu-devel
On 23/06/2015 16:21, Frederic Konrad wrote:
>
>> While I was testing multi-threaded TCG I discovered one consequence of
>> using locking around memory_region_dispatch: virtio transactions
>> could deadlock trying to grab the main mutex. This is due to the
>> virtio device writing data back into system memory:
>
> Hi,
>
> Thanks for that.
> Didn't qemu abort in this case with a pthread error? Maybe that did
> change since
> the last time I had this error.
Unfortunately it had to change:
commit 24fa90499f8b24bcba2960a3316d797f9b80b5e9
Author: Paolo Bonzini <pbonzini@redhat.com>
Date: Thu Mar 5 16:47:14 2015 +0100
qemu-thread: do not use PTHREAD_MUTEX_ERRORCHECK
PTHREAD_MUTEX_ERRORCHECK is completely broken with respect to fork.
The way to safely do fork is to bring all threads to a quiescent
state by acquiring locks (either in callers---as we do for the
iothread mutex---or using pthread_atfork's prepare callbacks)
and then release them in the child.
The problem is that releasing error-checking locks in the child
fails under glibc with EPERM, because the mutex stores a different
owner tid than the duplicated thread in the child process. We
could make it work for locks acquired via pthread_atfork, by
recreating the mutex in the child instead of unlocking it
(we know that there are no other threads that could have taken
the mutex); but when the lock is acquired in fork's caller
that would not be possible.
The simplest solution is just to forgo error checking.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
However, I do revert that patch for my own testing, since it makes
debugging much easier when there's a deadlock.
Paolo