* [PATCH v1 0/2] Some fixes for pause and resume all vcpus
@ 2024-03-17  8:37 Keqian Zhu via
  2024-03-17  8:37 ` [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment Keqian Zhu via
  2024-03-17  8:37 ` [PATCH v1 2/2] system/cpus: Fix resume_all_vcpus() under vCPU hotplug condition Keqian Zhu via
  0 siblings, 2 replies; 13+ messages in thread
From: Keqian Zhu via @ 2024-03-17  8:37 UTC (permalink / raw)
  To: qemu-devel, Peter Maydell, Igor Mammedov, David Hildenbrand,
	Stefan Hajnoczi
  Cc: wanghaibin.wang, Zenghui Yu, jiangkunkun, salil.mehta

I hit these bugs while testing the RFC patches for the ARM vCPU hotplug feature.
These fixes have been verified to resolve them.

Keqian Zhu (2):
  system/cpus: Fix pause_all_vcpus() under concurrent environment
  system/cpus: Fix resume_all_vcpus() under vCPU hotplug condition

 system/cpus.c | 32 +++++++++++++++++++++++++++-----
 1 file changed, 27 insertions(+), 5 deletions(-)

-- 
2.33.0




* [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment
  2024-03-17  8:37 [PATCH v1 0/2] Some fixes for pause and resume all vcpus Keqian Zhu via
@ 2024-03-17  8:37 ` Keqian Zhu via
  2024-03-18 10:10   ` David Hildenbrand
  2024-03-17  8:37 ` [PATCH v1 2/2] system/cpus: Fix resume_all_vcpus() under vCPU hotplug condition Keqian Zhu via
  1 sibling, 1 reply; 13+ messages in thread
From: Keqian Zhu via @ 2024-03-17  8:37 UTC (permalink / raw)
  To: qemu-devel, Peter Maydell, Igor Mammedov, David Hildenbrand,
	Stefan Hajnoczi
  Cc: wanghaibin.wang, Zenghui Yu, jiangkunkun, salil.mehta

Both the main loop thread and vCPU threads are allowed to call
pause_all_vcpus(), and in general resume_all_vcpus() is called
after it. Two issues exist in pause_all_vcpus():

1. It is possible that, while thread T1 waits on qemu_pause_cond
with the BQL unlocked, another thread has called pause_all_vcpus()
and resume_all_vcpus(); thread T1 will then get stuck, because the
condition all_vcpus_paused() is never true.

2. After all_vcpus_paused() has been checked as true, we unlock
the BQL in order to lock replay_mutex. While the BQL was unlocked,
the vCPUs' state may have been changed by another thread, so we
must retry.

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 system/cpus.c | 29 ++++++++++++++++++++++++-----
 1 file changed, 24 insertions(+), 5 deletions(-)

diff --git a/system/cpus.c b/system/cpus.c
index 68d161d96b..4e41abe23e 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -571,12 +571,14 @@ static bool all_vcpus_paused(void)
     return true;
 }
 
-void pause_all_vcpus(void)
+static void request_pause_all_vcpus(void)
 {
     CPUState *cpu;
 
-    qemu_clock_enable(QEMU_CLOCK_VIRTUAL, false);
     CPU_FOREACH(cpu) {
+        if (cpu->stopped) {
+            continue;
+        }
         if (qemu_cpu_is_self(cpu)) {
             qemu_cpu_stop(cpu, true);
         } else {
@@ -584,6 +586,14 @@ void pause_all_vcpus(void)
             qemu_cpu_kick(cpu);
         }
     }
+}
+
+void pause_all_vcpus(void)
+{
+    qemu_clock_enable(QEMU_CLOCK_VIRTUAL, false);
+
+retry:
+    request_pause_all_vcpus();
 
     /* We need to drop the replay_lock so any vCPU threads woken up
      * can finish their replay tasks
@@ -592,14 +602,23 @@ void pause_all_vcpus(void)
 
     while (!all_vcpus_paused()) {
         qemu_cond_wait(&qemu_pause_cond, &bql);
-        CPU_FOREACH(cpu) {
-            qemu_cpu_kick(cpu);
-        }
+        /* During we waited on qemu_pause_cond the bql was unlocked,
+         * the vcpu's state may has been changed by other thread, so
+         * we must request the pause state on all vcpus again.
+         */
+        request_pause_all_vcpus();
     }
 
     bql_unlock();
     replay_mutex_lock();
     bql_lock();
+
+    /* During the bql was unlocked, the vcpu's state may has been
+     * changed by other thread, so we must retry.
+     */
+    if (!all_vcpus_paused()) {
+        goto retry;
+    }
 }
 
 void cpu_resume(CPUState *cpu)
-- 
2.33.0




* [PATCH v1 2/2] system/cpus: Fix resume_all_vcpus() under vCPU hotplug condition
  2024-03-17  8:37 [PATCH v1 0/2] Some fixes for pause and resume all vcpus Keqian Zhu via
  2024-03-17  8:37 ` [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment Keqian Zhu via
@ 2024-03-17  8:37 ` Keqian Zhu via
  2024-03-18 10:14   ` David Hildenbrand
  1 sibling, 1 reply; 13+ messages in thread
From: Keqian Zhu via @ 2024-03-17  8:37 UTC (permalink / raw)
  To: qemu-devel, Peter Maydell, Igor Mammedov, David Hildenbrand,
	Stefan Hajnoczi
  Cc: wanghaibin.wang, Zenghui Yu, jiangkunkun, salil.mehta

For a vCPU being hotplugged, qemu_init_vcpu() is called. In this
function, we set the vCPU state to stopped and then wait for the
vCPU thread to be created.

As the vCPU state is stopped, the thread will inform us that it has
been created and then wait on halt_cond. After we have realized the
vCPU object, we will resume the vCPU thread.

However, while we wait for the vCPU thread to be created, the BQL is
unlocked, and another thread is allowed to call resume_all_vcpus(),
which would resume the un-realized vCPU.

Fix the issue by filtering out un-realized vCPUs in
resume_all_vcpus().

Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
---
 system/cpus.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/system/cpus.c b/system/cpus.c
index 4e41abe23e..8871f5dfa9 100644
--- a/system/cpus.c
+++ b/system/cpus.c
@@ -638,6 +638,9 @@ void resume_all_vcpus(void)
 
     qemu_clock_enable(QEMU_CLOCK_VIRTUAL, true);
     CPU_FOREACH(cpu) {
+        if (!object_property_get_bool(OBJECT(cpu), "realized", &error_abort)) {
+            continue;
+        }
         cpu_resume(cpu);
     }
 }
-- 
2.33.0




* Re: [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment
  2024-03-17  8:37 ` [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment Keqian Zhu via
@ 2024-03-18 10:10   ` David Hildenbrand
  2024-03-19  5:06     ` 答复: " zhukeqian via
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2024-03-18 10:10 UTC (permalink / raw)
  To: Keqian Zhu, qemu-devel, Peter Maydell, Igor Mammedov, Stefan Hajnoczi
  Cc: wanghaibin.wang, Zenghui Yu, jiangkunkun, salil.mehta

On 17.03.24 09:37, Keqian Zhu via wrote:
> Both main loop thread and vCPU thread are allowed to call
> pause_all_vcpus(), and in general resume_all_vcpus() is called
> after it. Two issues live in pause_all_vcpus():

In general, calling pause_all_vcpus() from VCPU threads is quite dangerous.

Do we have reproducers for the cases below?

> 
> 1. There is possibility that during thread T1 waits on
> qemu_pause_cond with bql unlocked, other thread has called
> pause_all_vcpus() and resume_all_vcpus(), then thread T1 will
> stuck, because the condition all_vcpus_paused() is always false.

How can this happen?

Two threads calling pause_all_vcpus() is borderline broken, as you note.

IIRC, we should call pause_all_vcpus() only if some other mechanism 
prevents these races. For example, based on runstate changes.
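
To illustrate what I mean by "based on runstate changes" -- a rough
sketch modelled on the do_vm_stop() pattern in system/cpus.c (heavily
simplified; the real function also disables ticks, drains/flushes the
block layer and emits events). Because the runstate check and the
transition both happen under the BQL, only one caller can observe
"running" and reach pause_all_vcpus():

static int vm_stop_sketch(RunState state)
{
    /* Caller holds the BQL. */
    if (runstate_is_running()) {
        runstate_set(state);
        pause_all_vcpus();
        vm_state_notify(0, state);
    }
    return 0;
}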


Just imagine one thread calling pause_all_vcpus() while another one 
calls resume_all_vcpus(). It cannot possibly work.

> 
> 2. After all_vcpus_paused() has been checked as true, we will
> unlock bql to relock replay_mutex. During the bql was unlocked,
> the vcpu's state may has been changed by other thread, so we
> must retry.
> 
> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
> ---
>   system/cpus.c | 29 ++++++++++++++++++++++++-----
>   1 file changed, 24 insertions(+), 5 deletions(-)
> 
> diff --git a/system/cpus.c b/system/cpus.c
> index 68d161d96b..4e41abe23e 100644
> --- a/system/cpus.c
> +++ b/system/cpus.c
> @@ -571,12 +571,14 @@ static bool all_vcpus_paused(void)
>       return true;
>   }
>   
> -void pause_all_vcpus(void)
> +static void request_pause_all_vcpus(void)
>   {
>       CPUState *cpu;
>   
> -    qemu_clock_enable(QEMU_CLOCK_VIRTUAL, false);
>       CPU_FOREACH(cpu) {
> +        if (cpu->stopped) {
> +            continue;
> +        }
>           if (qemu_cpu_is_self(cpu)) {
>               qemu_cpu_stop(cpu, true);
>           } else {
> @@ -584,6 +586,14 @@ void pause_all_vcpus(void)
>               qemu_cpu_kick(cpu);
>           }
>       }
> +}
> +
> +void pause_all_vcpus(void)
> +{
> +    qemu_clock_enable(QEMU_CLOCK_VIRTUAL, false);
> +
> +retry:
> +    request_pause_all_vcpus();
>   
>       /* We need to drop the replay_lock so any vCPU threads woken up
>        * can finish their replay tasks
> @@ -592,14 +602,23 @@ void pause_all_vcpus(void)
>   
>       while (!all_vcpus_paused()) {
>           qemu_cond_wait(&qemu_pause_cond, &bql);
> -        CPU_FOREACH(cpu) {
> -            qemu_cpu_kick(cpu);
> -        }
> +        /* During we waited on qemu_pause_cond the bql was unlocked,
> +         * the vcpu's state may has been changed by other thread, so
> +         * we must request the pause state on all vcpus again.
> +         */
> +        request_pause_all_vcpus();
>       }
>   
>       bql_unlock();
>       replay_mutex_lock();
>       bql_lock();
> +
> +    /* During the bql was unlocked, the vcpu's state may has been
> +     * changed by other thread, so we must retry.
> +     */
> +    if (!all_vcpus_paused()) {
> +        goto retry;
> +    }
>   }
>   
>   void cpu_resume(CPUState *cpu)

-- 
Cheers,

David / dhildenb




* Re: [PATCH v1 2/2] system/cpus: Fix resume_all_vcpus() under vCPU hotplug condition
  2024-03-17  8:37 ` [PATCH v1 2/2] system/cpus: Fix resume_all_vcpus() under vCPU hotplug condition Keqian Zhu via
@ 2024-03-18 10:14   ` David Hildenbrand
  2024-03-19  5:11     ` 答复: " zhukeqian via
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2024-03-18 10:14 UTC (permalink / raw)
  To: Keqian Zhu, qemu-devel, Peter Maydell, Igor Mammedov, Stefan Hajnoczi
  Cc: wanghaibin.wang, Zenghui Yu, jiangkunkun, salil.mehta

On 17.03.24 09:37, Keqian Zhu via wrote:
> For vCPU being hotplugged, qemu_init_vcpu() is called. In this
> function, we set vcpu state as stopped, and then wait vcpu thread
> to be created.
> 
> As the vcpu state is stopped, it will inform us it has been created
> and then wait on halt_cond. After we has realized vcpu object, we
> will resume the vcpu thread.
> 
> However, during we wait vcpu thread to be created, the bql is
> unlocked, and other thread is allowed to call resume_all_vcpus(),
> which will resume the un-realized vcpu.
> 
> This fixes the issue by filter out un-realized vcpu during
> resume_all_vcpus().

Similar question: is there a reproducer?

How could we currently hotplug a VCPU and, while it is being created,
see pause_all_vcpus()/resume_all_vcpus() getting called?

If I am not getting this wrong, there seems to be some other mechanism 
missing that makes sure that this cannot happen. Dropping the BQL 
half-way through creating a VCPU might be the problem.
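
For reference, this is roughly the window I mean, paraphrased from
qemu_init_vcpu() in system/cpus.c (trimmed to the relevant lines, not
the exact upstream code): the cond-wait drops the BQL until the vCPU
thread signals that it has been created, and anything that grabs the
BQL in that window will see a stopped, not-yet-realized vCPU.

void qemu_init_vcpu(CPUState *cpu)
{
    /* ... topology and address-space setup ... */
    cpu->stopped = true;    /* the new vCPU starts out paused */

    cpus_accel->create_vcpu_thread(cpu);

    /* qemu_cond_wait() releases the BQL while waiting, so other
     * threads (e.g. one calling resume_all_vcpus()) can run here. */
    while (!cpu->created) {
        qemu_cond_wait(&qemu_cpu_cond, &bql);
    }
}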

-- 
Cheers,

David / dhildenb




* 答复: [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment
  2024-03-18 10:10   ` David Hildenbrand
@ 2024-03-19  5:06     ` zhukeqian via
  2024-03-19  9:24       ` David Hildenbrand
  0 siblings, 1 reply; 13+ messages in thread
From: zhukeqian via @ 2024-03-19  5:06 UTC (permalink / raw)
  To: David Hildenbrand, qemu-devel, Peter Maydell, Igor Mammedov,
	Stefan Hajnoczi
  Cc: Wanghaibin (D),
	yuzenghui, jiangkunkun, Salil Mehta, Jonathan Cameron,
	Zengtao (B)

Hi David,

Thanks for reviewing.

On 17.03.24 09:37, Keqian Zhu via wrote:
>> Both main loop thread and vCPU thread are allowed to call 
>> pause_all_vcpus(), and in general resume_all_vcpus() is called after 
>> it. Two issues live in pause_all_vcpus():
>
>In general, calling pause_all_vcpus() from VCPU threads is quite dangerous.
>
>Do we have reproducers for the cases below? 
>

I reproduced the issues by testing the ARM vCPU hotplug feature:
The QEMU changes for vCPU hotplug can be cloned from the site below:
     https://github.com/salil-mehta/qemu.git virt-cpuhp-armv8/rfc-v2
The guest kernel changes (by James Morse, ARM) are available here:
     https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git virtual_cpu_hotplug/rfc/v2

The procedure to reproduce the problems:
1. Start a Linux VM (e.g., called OS-vcpuhotplug) with 32 possible vCPUs and 16 current vCPUs.
2. Log in to the guest OS and run script[1] to continuously online/offline CPUs.
3. On the host side, run script[2] to continuously hotplug/unplug vCPUs.
After several minutes, we can hit these problems.

Script[1] to online/offline CPU:
for ((time=1;time<10000000;time++));
do
        for ((cpu=16;cpu<32;cpu++));
        do
                echo 1 > /sys/devices/system/cpu/cpu$cpu/online
        done

        for ((cpu=16;cpu<32;cpu++));
        do
                echo 0 > /sys/devices/system/cpu/cpu$cpu/online
        done
done

Script[2] to hotplug/unhotplug vCPU:
for ((time=1;time<1000000;time++));
do
        echo $time
        for ((cpu=16;cpu<=32;cpu++));
        do
                echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
                virsh setvcpus OS-vcpuhotplug --count  $cpu --live
                sleep 2
        done

        for ((cpu=32;cpu>=16;cpu--));
        do
                echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
                virsh setvcpus OS-vcpuhotplug --count  $cpu --live
                sleep 2
        done

        for ((cpu=16;cpu<=32;cpu+=2));
        do
                echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
                virsh setvcpus OS-vcpuhotplug --count  $cpu --live
                sleep 2
        done

        for ((cpu=32;cpu>=16;cpu-=2));
        do
                echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
                virsh setvcpus OS-vcpuhotplug --count  $cpu --live
                sleep 2
        done
done

Script[1] will call PSCI CPU_ON, which is emulated by QEMU and results in calling cpu_reset() on the vCPU thread.
For the ARM architecture, this needs to reset the GICC registers, which is only possible when all vCPUs are paused. So script[1]
will call pause_all_vcpus() in a vCPU thread.
Script[2] also calls cpu_reset() for the newly hotplugged vCPU, which is done in the main loop thread.
So this scenario causes the problems I describe in the commit message.

>> 
>> 1. There is possibility that during thread T1 waits on qemu_pause_cond 
>> with bql unlocked, other thread has called
>> pause_all_vcpus() and resume_all_vcpus(), then thread T1 will stuck, 
>> because the condition all_vcpus_paused() is always false.
>
>How can this happen?
>
>Two threads calling pause_all_vcpus() is borderline broken, as you note. 
>
>IIRC, we should call pause_all_vcpus() only if some other mechanism prevents these races. For example, based on runstate changes.
>

We already have the BQL to prevent concurrent calls of pause_all_vcpus() and resume_all_vcpus(). But pause_all_vcpus() will
unlock the BQL halfway through, which gives another thread a chance to call pause and resume. In the past, the code did not
consider this problem; now I add a retry mechanism to fix it.

>
>Just imagine one thread calling pause_all_vcpus() while another one 
>calls resume_all_vcpus(). It cannot possibly work.

With the BQL, we can make sure all vCPUs are paused after pause_all_vcpus() finishes, and all vCPUs are resumed after resume_all_vcpus() finishes.

For example, the following situation may occur:
Thread T1: lock bql -> pause_all_vcpus -> wait on cond and unlock bql -> wait for T2 to unlock bql -> lock bql && all_vcpus_paused -> success, do other work -> unlock bql
Thread T2: wait for T1 to unlock bql -> lock bql -> resume_all_vcpus -> success, do other work -> unlock bql

Thanks,
Keqian

>
>
>> 
>> 2. After all_vcpus_paused() has been checked as true, we will
>> unlock bql to relock replay_mutex. During the bql was unlocked,
>> the vcpu's state may has been changed by other thread, so we
>> must retry.
>> 
>> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
>> ---
>>   system/cpus.c | 29 ++++++++++++++++++++++++-----
>>   1 file changed, 24 insertions(+), 5 deletions(-)
>> 
> diff --git a/system/cpus.c b/system/cpus.c
> index 68d161d96b..4e41abe23e 100644
> --- a/system/cpus.c
> +++ b/system/cpus.c
> @@ -571,12 +571,14 @@ static bool all_vcpus_paused(void)
>       return true;
>   }
>   
> -void pause_all_vcpus(void)
> +static void request_pause_all_vcpus(void)
>   {
>       CPUState *cpu;
>   
> -    qemu_clock_enable(QEMU_CLOCK_VIRTUAL, false);
>       CPU_FOREACH(cpu) {
> +        if (cpu->stopped) {
> +            continue;
> +        }
>           if (qemu_cpu_is_self(cpu)) {
>               qemu_cpu_stop(cpu, true);
>           } else {
> @@ -584,6 +586,14 @@ void pause_all_vcpus(void)
>               qemu_cpu_kick(cpu);
>           }
>       }
> +}
> +
> +void pause_all_vcpus(void)
> +{
> +    qemu_clock_enable(QEMU_CLOCK_VIRTUAL, false);
> +
> +retry:
> +    request_pause_all_vcpus();
>   
>       /* We need to drop the replay_lock so any vCPU threads woken up
>        * can finish their replay tasks
> @@ -592,14 +602,23 @@ void pause_all_vcpus(void)
>   
>       while (!all_vcpus_paused()) {
>           qemu_cond_wait(&qemu_pause_cond, &bql);
> -        CPU_FOREACH(cpu) {
> -            qemu_cpu_kick(cpu);
> -        }
> +        /* During we waited on qemu_pause_cond the bql was unlocked,
> +         * the vcpu's state may has been changed by other thread, so
> +         * we must request the pause state on all vcpus again.
> +         */
> +        request_pause_all_vcpus();
>       }
>   
>       bql_unlock();
>       replay_mutex_lock();
>       bql_lock();
> +
> +    /* During the bql was unlocked, the vcpu's state may has been
> +     * changed by other thread, so we must retry.
> +     */
> +    if (!all_vcpus_paused()) {
> +        goto retry;
> +    }
>   }
>   
>   void cpu_resume(CPUState *cpu)

-- 
Cheers,

David / dhildenb



* 答复: [PATCH v1 2/2] system/cpus: Fix resume_all_vcpus() under vCPU hotplug condition
  2024-03-18 10:14   ` David Hildenbrand
@ 2024-03-19  5:11     ` zhukeqian via
  2024-03-19  9:25       ` David Hildenbrand
  0 siblings, 1 reply; 13+ messages in thread
From: zhukeqian via @ 2024-03-19  5:11 UTC (permalink / raw)
  To: David Hildenbrand, qemu-devel, Peter Maydell, Igor Mammedov,
	Stefan Hajnoczi
  Cc: Wanghaibin (D),
	yuzenghui, jiangkunkun, Salil Mehta, Jonathan Cameron,
	Zengtao (B)

Hi David,

On 17.03.24 09:37, Keqian Zhu via wrote:
>> For vCPU being hotplugged, qemu_init_vcpu() is called. In this 
>> function, we set vcpu state as stopped, and then wait vcpu thread to 
>> be created.
>> 
>> As the vcpu state is stopped, it will inform us it has been created 
>> and then wait on halt_cond. After we has realized vcpu object, we will 
>> resume the vcpu thread.
>> 
>> However, during we wait vcpu thread to be created, the bql is 
>> unlocked, and other thread is allowed to call resume_all_vcpus(), 
>> which will resume the un-realized vcpu.
>> 
>> This fixes the issue by filter out un-realized vcpu during 
>> resume_all_vcpus().
>
>Similar question: is there a reproducer? 
>
>How could we currently hotplug a VCPU, and while it is being created, see pause_all_vcpus()/resume_all_vcpus() getting claled. 
>
I described the reason for this in patch 1.

>If I am not getting this wrong, there seems to be some other mechanism missing that makes sure that this cannot happen. Dropping the BQL half-way through creating a VCPU might be the problem.
>
When we add the retry mechanism to pause_all_vcpus(), we can solve this problem with the semantics unchanged for the user, which means:
with the BQL, we can make sure all vCPUs are paused after pause_all_vcpus() finishes, and all vCPUs are resumed after resume_all_vcpus() finishes.

Thanks,
Keqian

>
>
--
Cheers,

David / dhildenb



* Re: 答复: [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment
  2024-03-19  5:06     ` 答复: " zhukeqian via
@ 2024-03-19  9:24       ` David Hildenbrand
  2024-03-19 13:23         ` David Hildenbrand
  2024-03-19 14:23         ` Peter Maydell
  0 siblings, 2 replies; 13+ messages in thread
From: David Hildenbrand @ 2024-03-19  9:24 UTC (permalink / raw)
  To: zhukeqian, qemu-devel, Peter Maydell, Igor Mammedov, Stefan Hajnoczi
  Cc: Wanghaibin (D),
	yuzenghui, jiangkunkun, Salil Mehta, Jonathan Cameron,
	Zengtao (B)

On 19.03.24 06:06, zhukeqian wrote:
> Hi David,
> 
> Thanks for reviewing.
> 
> On 17.03.24 09:37, Keqian Zhu via wrote:
>>> Both main loop thread and vCPU thread are allowed to call
>>> pause_all_vcpus(), and in general resume_all_vcpus() is called after
>>> it. Two issues live in pause_all_vcpus():
>>
>> In general, calling pause_all_vcpus() from VCPU threads is quite dangerous.
>>
>> Do we have reproducers for the cases below?
>>
> 
> I produce the issues by testing ARM vCPU hotplug feature:
> QEMU changes for vCPU hotplug could be cloned from below site,
>       https://github.com/salil-mehta/qemu.git virt-cpuhp-armv8/rfc-v2
> Guest Kernel changes (by James Morse, ARM) are available here:
>       https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git virtual_cpu_hotplug/rfc/v2
> 

Thanks for this info (it would be reasonable to include it in the cover letter).

Okay, so likely this is not actually a "fix" for upstream as it is. Understood.

> The procedure to produce problems:
> 1. Startup a Linux VM (e.g., called OS-vcpuhotplug) with 32 possible vCPUs and 16 current vCPUs.
> 2. Log in guestOS and run script[1] to continuously online/offline CPU.
> 3. At host side, run script[2] to continuously hotplug/unhotplug vCPU.
> After several minutes, we can hit these problems.
> 
> Script[1] to online/offline CPU:
> for ((time=1;time<10000000;time++));
> do
>          for ((cpu=16;cpu<32;cpu++));
>          do
>                  echo 1 > /sys/devices/system/cpu/cpu$cpu/online
>          done
> 
>          for ((cpu=16;cpu<32;cpu++));
>          do
>                  echo 0 > /sys/devices/system/cpu/cpu$cpu/online
>          done
> done
> 
> Script[2] to hotplug/unhotplug vCPU:
> for ((time=1;time<1000000;time++));
> do
>          echo $time
>          for ((cpu=16;cpu<=32;cpu++));
>          do
>                  echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
>                  virsh setvcpus OS-vcpuhotplug --count  $cpu --live
>                  sleep 2
>          done
> 
>          for ((cpu=32;cpu>=16;cpu--));
>          do
>                  echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
>                  virsh setvcpus OS-vcpuhotplug --count  $cpu --live
>                  sleep 2
>          done
> 
>          for ((cpu=16;cpu<=32;cpu+=2));
>          do
>                  echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
>                  virsh setvcpus OS-vcpuhotplug --count  $cpu --live
>                  sleep 2
>          done
> 
>          for ((cpu=32;cpu>=16;cpu-=2));
>          do
>                  echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
>                  virsh setvcpus OS-vcpuhotplug --count  $cpu --live
>                  sleep 2
>          done
> done
> 
> The script[1] will call PSCI CPU_ON which emulated by QEMU, which result in calling cpu_reset() on vCPU thread.

I spotted new pause_all_vcpus() / resume_all_vcpus() calls in hw/intc/arm_gicv3_kvm.c and
thought they would be the problematic bit.

Yeah, that's going to be problematic. Further note that a lot of code does not expect
that the BQL is suddenly dropped.

We had issues with that in a different context where we ended up wanting to use pause/resume from VCPU context:

https://lore.kernel.org/all/294a987d-b0ef-1b58-98ac-0d4d43075d6e@redhat.com/

This sounds like a bad idea. Read below.

> For ARM architecture, it needs to reset GICC registers, which is only possible when all vcpus paused. So script[1]
> will call pause_all_vcpus() in vCPU thread.
> The script[2] also calls cpu_reset() for newly hotplugged vCPU, which is done in main loop thread.
> So this scenario causes problems as I state in commit message.
> 
>>>
>>> 1. There is possibility that during thread T1 waits on qemu_pause_cond
>>> with bql unlocked, other thread has called
>>> pause_all_vcpus() and resume_all_vcpus(), then thread T1 will stuck,
>>> because the condition all_vcpus_paused() is always false.
>>
>> How can this happen?
>>
>> Two threads calling pause_all_vcpus() is borderline broken, as you note.
>>
>> IIRC, we should call pause_all_vcpus() only if some other mechanism prevents these races. For example, based on runstate changes.
>>
> 
> We already has bql to prevent concurrent calling of pause_all_vcpus() and resume_all_vcpus(). But pause_all_vcpus() will
> unlock bql in the half way, which gives change for other thread to call pause and resume. In the  past, code does not consider
> this problem, now I add retry mechanism to fix it.

Note that the BQL did not prevent concurrent calling of pause_all_vcpus(). There had to be something else. Likely that was runstate transitions.

> 
>>
>> Just imagine one thread calling pause_all_vcpus() while another one
>> calls resume_all_vcpus(). It cannot possibly work.
> 
> With bql, we can make sure all vcpus are paused after pause_all_vcpus() finish,  and all vcpus are resumed after resume_all_vcpus() finish.
> 
> For example, the following situation may occur:
> Thread T1:     lock bql  ->    pause_all_vcpus ->   wait on cond and unlock bql  ->   wait T2 unlock bql to lock bql                                            -> lock bql  &&  all_vcpu_paused ->   success and do other work -> unlock bql
> Thread T2:                             wait T1 unlock bql to lock bql            ->   lock bql    ->      resume_all_vcpus   ->   success  and do other work   -> unlock bql


Now throw in another thread and it all gets really complicated :)

Finding ways to avoid pause_all_vcpus() in the ARM reset code would be preferable.

I guess you simply want to do something similar to what KVM does to avoid messing
with pause_all_vcpus(): inhibiting certain IOCTLs.


commit f39b7d2b96e3e73c01bb678cd096f7baf0b9ab39
Author: David Hildenbrand <david@redhat.com>
Date:   Fri Nov 11 10:47:58 2022 -0500

     kvm: Atomic memslot updates
     
     If we update an existing memslot (e.g., resize, split), we temporarily
     remove the memslot to re-add it immediately afterwards. These updates
     are not atomic, especially not for KVM VCPU threads, such that we can
     get spurious faults.
     
     Let's inhibit most KVM ioctls while performing relevant updates, such
     that we can perform the update just as if it would happen atomically
     without additional kernel support.
     
     We capture the add/del changes and apply them in the notifier commit
     stage instead. There, we can check for overlaps and perform the ioctl
     inhibiting only if really required (-> overlap).
     
     To keep things simple we don't perform additional checks that wouldn't
     actually result in an overlap -- such as !RAM memory regions in some
     cases (see kvm_set_phys_mem()).
     
     To minimize cache-line bouncing, use a separate indicator
     (in_ioctl_lock) per CPU.  Also, make sure to hold the kvm_slots_lock
     while performing both actions (removing+re-adding).
     
     We have to wait until all IOCTLs were exited and block new ones from
     getting executed.
     
     This approach cannot result in a deadlock as long as the inhibitor does
     not hold any locks that might hinder an IOCTL from getting finished and
     exited - something fairly unusual. The inhibitor will always hold the BQL.
     
     AFAIKs, one possible candidate would be userfaultfd. If a page cannot be
     placed (e.g., during postcopy), because we're waiting for a lock, or if the
     userfaultfd thread cannot process a fault, because it is waiting for a
     lock, there could be a deadlock. However, the BQL is not applicable here,
     because any other guest memory access while holding the BQL would already
     result in a deadlock.
     
     Nothing else in the kernel should block forever and wait for userspace
     intervention.
     
     Note: pause_all_vcpus()/resume_all_vcpus() or
     start_exclusive()/end_exclusive() cannot be used, as they either drop
     the BQL or require to be called without the BQL - something inhibitors
     cannot handle. We need a low-level locking mechanism that is
     deadlock-free even when not releasing the BQL.
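
Transferring that idea to the ARM GIC reset path, a very rough sketch
(assuming the accel-blocker API from sysemu/accel-blocker.h; the
function/type names from hw/intc/arm_gicv3_kvm.c are only used for
illustration, and I have not checked how this fits into the hotplug
series) could look like:

static void kvm_arm_gicv3_reset_sketch(GICv3State *s)
{
    /*
     * Called with the BQL held. Instead of pause_all_vcpus(), block
     * new vCPU ioctls and wait for running ones to finish, so the
     * in-kernel GIC state can be rewritten without racing vCPUs and
     * without ever dropping the BQL.
     */
    accel_ioctl_inhibit_begin();

    kvm_arm_gicv3_put(s);    /* write the reset state to the kernel */

    accel_ioctl_inhibit_end();
}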

-- 
Cheers,

David / dhildenb




* Re: 答复: [PATCH v1 2/2] system/cpus: Fix resume_all_vcpus() under vCPU hotplug condition
  2024-03-19  5:11     ` 答复: " zhukeqian via
@ 2024-03-19  9:25       ` David Hildenbrand
  0 siblings, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2024-03-19  9:25 UTC (permalink / raw)
  To: zhukeqian, qemu-devel, Peter Maydell, Igor Mammedov, Stefan Hajnoczi
  Cc: Wanghaibin (D),
	yuzenghui, jiangkunkun, Salil Mehta, Jonathan Cameron,
	Zengtao (B)

On 19.03.24 06:11, zhukeqian wrote:
> Hi David,
> 
> On 17.03.24 09:37, Keqian Zhu via wrote:
>>> For vCPU being hotplugged, qemu_init_vcpu() is called. In this
>>> function, we set vcpu state as stopped, and then wait vcpu thread to
>>> be created.
>>>
>>> As the vcpu state is stopped, it will inform us it has been created
>>> and then wait on halt_cond. After we has realized vcpu object, we will
>>> resume the vcpu thread.
>>>
>>> However, during we wait vcpu thread to be created, the bql is
>>> unlocked, and other thread is allowed to call resume_all_vcpus(),
>>> which will resume the un-realized vcpu.
>>>
>>> This fixes the issue by filter out un-realized vcpu during
>>> resume_all_vcpus().
>>
>> Similar question: is there a reproducer?
>>
>> How could we currently hotplug a VCPU, and while it is being created, see pause_all_vcpus()/resume_all_vcpus() getting claled.
>>
> I described the reason for this at patch 1.
> 
>> If I am not getting this wrong, there seems to be some other mechanism missing that makes sure that this cannot happen. Dropping the BQL half-way through creating a VCPU might be the problem.
>>
> When we add retry mechanism in pause_all_vcpus(), we can solve this problem. With the sematic unchanged for user, which means:
> With bql, we can make sure all vcpus are paused after pause_all_vcpus() finish,  and all vcpus are resumed after resume_all_vcpus() finish.

Okay, got it. As I just replied to patch #1, please see if you can avoid
messing with pause_all_vcpus() by inhibiting KVM ioctls the way KVM
already does. That would be preferable.

-- 
Cheers,

David / dhildenb




* Re: 答复: [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment
  2024-03-19  9:24       ` David Hildenbrand
@ 2024-03-19 13:23         ` David Hildenbrand
  2024-03-19 14:23         ` Peter Maydell
  1 sibling, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2024-03-19 13:23 UTC (permalink / raw)
  To: zhukeqian, qemu-devel, Peter Maydell, Igor Mammedov, Stefan Hajnoczi
  Cc: Wanghaibin (D),
	yuzenghui, jiangkunkun, Salil Mehta, Jonathan Cameron,
	Zengtao (B)

On 19.03.24 10:24, David Hildenbrand wrote:
> On 19.03.24 06:06, zhukeqian wrote:
>> Hi David,
>>
>> Thanks for reviewing.
>>
>> On 17.03.24 09:37, Keqian Zhu via wrote:
>>>> Both main loop thread and vCPU thread are allowed to call
>>>> pause_all_vcpus(), and in general resume_all_vcpus() is called after
>>>> it. Two issues live in pause_all_vcpus():
>>>
>>> In general, calling pause_all_vcpus() from VCPU threads is quite dangerous.
>>>
>>> Do we have reproducers for the cases below?
>>>
>>
>> I produce the issues by testing ARM vCPU hotplug feature:
>> QEMU changes for vCPU hotplug could be cloned from below site,
>>        https://github.com/salil-mehta/qemu.git virt-cpuhp-armv8/rfc-v2
>> Guest Kernel changes (by James Morse, ARM) are available here:
>>        https://git.kernel.org/pub/scm/linux/kernel/git/morse/linux.git virtual_cpu_hotplug/rfc/v2
>>
> 
> Thanks for these infos (would be reasonable to include that in the cover letter).
> 
> Okay, so likely this is not actually a "fix" for upstream as it is. Understood.
> 
>> The procedure to produce problems:
>> 1. Startup a Linux VM (e.g., called OS-vcpuhotplug) with 32 possible vCPUs and 16 current vCPUs.
>> 2. Log in guestOS and run script[1] to continuously online/offline CPU.
>> 3. At host side, run script[2] to continuously hotplug/unhotplug vCPU.
>> After several minutes, we can hit these problems.
>>
>> Script[1] to online/offline CPU:
>> for ((time=1;time<10000000;time++));
>> do
>>           for ((cpu=16;cpu<32;cpu++));
>>           do
>>                   echo 1 > /sys/devices/system/cpu/cpu$cpu/online
>>           done
>>
>>           for ((cpu=16;cpu<32;cpu++));
>>           do
>>                   echo 0 > /sys/devices/system/cpu/cpu$cpu/online
>>           done
>> done
>>
>> Script[2] to hotplug/unhotplug vCPU:
>> for ((time=1;time<1000000;time++));
>> do
>>           echo $time
>>           for ((cpu=16;cpu<=32;cpu++));
>>           do
>>                   echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
>>                   virsh setvcpus OS-vcpuhotplug --count  $cpu --live
>>                   sleep 2
>>           done
>>
>>           for ((cpu=32;cpu>=16;cpu--));
>>           do
>>                   echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
>>                   virsh setvcpus OS-vcpuhotplug --count  $cpu --live
>>                   sleep 2
>>           done
>>
>>           for ((cpu=16;cpu<=32;cpu+=2));
>>           do
>>                   echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
>>                   virsh setvcpus OS-vcpuhotplug --count  $cpu --live
>>                   sleep 2
>>           done
>>
>>           for ((cpu=32;cpu>=16;cpu-=2));
>>           do
>>                   echo "virsh setvcpus OS-vcpuhotplug --count  $cpu --live"
>>                   virsh setvcpus OS-vcpuhotplug --count  $cpu --live
>>                   sleep 2
>>           done
>> done
>>
>> The script[1] will call PSCI CPU_ON which emulated by QEMU, which result in calling cpu_reset() on vCPU thread.
> 
> I spotted new pause_all_vcpus() / resume_all_vcpus() calls in hw/intc/arm_gicv3_kvm.c and
> thought they would be the problematic bit.
> 
> Yeah, that's going to be problematic. Further note that a lot of code does not expect
> that the BQL is suddenly dropped.
> 
> We had issues with that in different context where we ended up wanting to use pause/resume from VCPU context:
> 
> https://lore.kernel.org/all/294a987d-b0ef-1b58-98ac-0d4d43075d6e@redhat.com/
> 
> This sounds like a bad idea. Read below.
> 
>> For ARM architecture, it needs to reset GICC registers, which is only possible when all vcpus paused. So script[1]
>> will call pause_all_vcpus() in vCPU thread.
>> The script[2] also calls cpu_reset() for newly hotplugged vCPU, which is done in main loop thread.
>> So this scenario causes problems as I state in commit message.
>>
>>>>
>>>> 1. There is possibility that during thread T1 waits on qemu_pause_cond
>>>> with bql unlocked, other thread has called
>>>> pause_all_vcpus() and resume_all_vcpus(), then thread T1 will stuck,
>>>> because the condition all_vcpus_paused() is always false.
>>>
>>> How can this happen?
>>>
>>> Two threads calling pause_all_vcpus() is borderline broken, as you note.
>>>
>>> IIRC, we should call pause_all_vcpus() only if some other mechanism prevents these races. For example, based on runstate changes.
>>>
>>
>> We already has bql to prevent concurrent calling of pause_all_vcpus() and resume_all_vcpus(). But pause_all_vcpus() will
>> unlock bql in the half way, which gives change for other thread to call pause and resume. In the  past, code does not consider
>> this problem, now I add retry mechanism to fix it.
> 
> Note that BQL did not prevent concurrent calling of pause_all_vcpus(). There had to be something else. Likely that was runstate transitions.
> 
>>
>>>
>>> Just imagine one thread calling pause_all_vcpus() while another one
>>> calls resume_all_vcpus(). It cannot possibly work.
>>
>> With bql, we can make sure all vcpus are paused after pause_all_vcpus() finish,  and all vcpus are resumed after resume_all_vcpus() finish.
>>
>> For example, the following situation may occur:
>> Thread T1:     lock bql  ->    pause_all_vcpus ->   wait on cond and unlock bql  ->   wait T2 unlock bql to lock bql                                            -> lock bql  &&  all_vcpu_paused ->   success and do other work -> unlock bql
>> Thread T2:                             wait T1 unlock bql to lock bql            ->   lock bql    ->      resume_all_vcpus   ->   success  and do other work   -> unlock bql
> 
> 
> Now trow in another thread and it all gets really complicated :)
> 
> Finding ways to avoid pause_all_vcpus() on the ARM reset code would be preferable.
> 
> I guess you simply want to do something similar to what KVM does to avoid messing
> with pause_all_vcpus(): inhibiting certain IOCTLs.
> 
> 
> commit f39b7d2b96e3e73c01bb678cd096f7baf0b9ab39
> Author: David Hildenbrand <david@redhat.com>
> Date:   Fri Nov 11 10:47:58 2022 -0500
> 
>       kvm: Atomic memslot updates
>       
>       If we update an existing memslot (e.g., resize, split), we temporarily
>       remove the memslot to re-add it immediately afterwards. These updates
>       are not atomic, especially not for KVM VCPU threads, such that we can
>       get spurious faults.
>       
>       Let's inhibit most KVM ioctls while performing relevant updates, such
>       that we can perform the update just as if it would happen atomically
>       without additional kernel support.
>       
>       We capture the add/del changes and apply them in the notifier commit
>       stage instead. There, we can check for overlaps and perform the ioctl
>       inhibiting only if really required (-> overlap).
>       
>       To keep things simple we don't perform additional checks that wouldn't
>       actually result in an overlap -- such as !RAM memory regions in some
>       cases (see kvm_set_phys_mem()).
>       
>       To minimize cache-line bouncing, use a separate indicator
>       (in_ioctl_lock) per CPU.  Also, make sure to hold the kvm_slots_lock
>       while performing both actions (removing+re-adding).
>       
>       We have to wait until all IOCTLs were exited and block new ones from
>       getting executed.
>       
>       This approach cannot result in a deadlock as long as the inhibitor does
>       not hold any locks that might hinder an IOCTL from getting finished and
>       exited - something fairly unusual. The inhibitor will always hold the BQL.
>       
>       AFAIKs, one possible candidate would be userfaultfd. If a page cannot be
>       placed (e.g., during postcopy), because we're waiting for a lock, or if the
>       userfaultfd thread cannot process a fault, because it is waiting for a
>       lock, there could be a deadlock. However, the BQL is not applicable here,
>       because any other guest memory access while holding the BQL would already
>       result in a deadlock.
>       
>       Nothing else in the kernel should block forever and wait for userspace
>       intervention.
>       
>       Note: pause_all_vcpus()/resume_all_vcpus() or
>       start_exclusive()/end_exclusive() cannot be used, as they either drop
>       the BQL or require to be called without the BQL - something inhibitors
>       cannot handle. We need a low-level locking mechanism that is
>       deadlock-free even when not releasing the BQL.
> 

.. and the relevant prerequisites for that commit include:

commit bd688fc93120fb3e28aa70e3dfdf567ccc1e0bc1
Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date:   Fri Nov 11 10:47:56 2022 -0500

     accel: introduce accelerator blocker API
     
     This API allows the accelerators to prevent vcpus from issuing
     new ioctls while execting a critical section marked with the
     accel_ioctl_inhibit_begin/end functions.
     
     Note that all functions submitting ioctls must mark where the
     ioctl is being called with accel_{cpu_}ioctl_begin/end().
     
     This API requires the caller to always hold the BQL.
     API documentation is in sysemu/accel-blocker.h
     
     Internally, it uses a QemuLockCnt together with a per-CPU QemuLockCnt
     (to minimize cache line bouncing) to keep avoid that new ioctls
     run when the critical section starts, and a QemuEvent to wait
     that all running ioctls finish.


and

commit a27dd2de68f37ba96fe164a42121daa5f0750afc
Author: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date:   Fri Nov 11 10:47:57 2022 -0500

     KVM: keep track of running ioctls
     
     Using the new accel-blocker API, mark where ioctls are being called
     in KVM. Next, we will implement the critical section that will take
     care of performing memslots modifications atomically, therefore
     preventing any new ioctl from running and allowing the running ones
     to finish.


-- 
Cheers,

David / dhildenb




* Re: 答复: [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment
  2024-03-19  9:24       ` David Hildenbrand
  2024-03-19 13:23         ` David Hildenbrand
@ 2024-03-19 14:23         ` Peter Maydell
  2024-03-19 14:46           ` David Hildenbrand
  1 sibling, 1 reply; 13+ messages in thread
From: Peter Maydell @ 2024-03-19 14:23 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: zhukeqian, qemu-devel, Igor Mammedov, Stefan Hajnoczi,
	Wanghaibin (D),
	yuzenghui, jiangkunkun, Salil Mehta, Jonathan Cameron,
	Zengtao (B)

On Tue, 19 Mar 2024 at 09:24, David Hildenbrand <david@redhat.com> wrote:
> I spotted new pause_all_vcpus() / resume_all_vcpus() calls in hw/intc/arm_gicv3_kvm.c and
> thought they would be the problematic bit.
>
> Yeah, that's going to be problematic. Further note that a lot of code does not expect
> that the BQL is suddenly dropped.

Agreed; we already have one nasty set of bugs in the framebuffer
devices because a function drops the BQL briefly:
https://lore.kernel.org/qemu-devel/CAFEAcA9odnPo2LPip295Uztri7JfoVnQbkJ=Wn+k8dQneB_ynQ@mail.gmail.com/T/#u
so let's avoid introducing any more of a similar kind.

Side note, the pause_all_vcpus()/resume_all_vcpus() calls in
hw/i386/vapic.c are probably a bit suspect for similar reasons.

-- PMM



* Re: 答复: [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment
  2024-03-19 14:23         ` Peter Maydell
@ 2024-03-19 14:46           ` David Hildenbrand
  2024-03-19 14:56             ` Peter Maydell
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2024-03-19 14:46 UTC (permalink / raw)
  To: Peter Maydell
  Cc: zhukeqian, qemu-devel, Igor Mammedov, Stefan Hajnoczi,
	Wanghaibin (D),
	yuzenghui, jiangkunkun, Salil Mehta, Jonathan Cameron,
	Zengtao (B)

On 19.03.24 15:23, Peter Maydell wrote:
> On Tue, 19 Mar 2024 at 09:24, David Hildenbrand <david@redhat.com> wrote:
>> I spotted new pause_all_vcpus() / resume_all_vcpus() calls in hw/intc/arm_gicv3_kvm.c and
>> thought they would be the problematic bit.
>>
>> Yeah, that's going to be problematic. Further note that a lot of code does not expect
>> that the BQL is suddenly dropped.
> 
> Agreed; we already have one nasty set of bugs in the framebuffer
> devices because a function drops the BQL briefly:
> https://lore.kernel.org/qemu-devel/CAFEAcA9odnPo2LPip295Uztri7JfoVnQbkJ=Wn+k8dQneB_ynQ@mail.gmail.com/T/#u
> so let's avoid introducing any more of a similar kind.
> 
> Side note, the pause_all_vcpus()/resume_all_vcpus() calls in
> hw/i386/vapic.c are probably a bit suspect for similar reasons.

Exactly my thoughts. But there, it was less clear "why" it is even 
required. It's only performed for KVM.

Do we also just want to stop KVM threads from executing instructions, so
that blocking KVM ioctls might be a reasonable "replacement"? Really not sure.

-- 
Cheers,

David / dhildenb




* Re: 答复: [PATCH v1 1/2] system/cpus: Fix pause_all_vcpus() under concurrent environment
  2024-03-19 14:46           ` David Hildenbrand
@ 2024-03-19 14:56             ` Peter Maydell
  0 siblings, 0 replies; 13+ messages in thread
From: Peter Maydell @ 2024-03-19 14:56 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: zhukeqian, qemu-devel, Igor Mammedov, Stefan Hajnoczi,
	Wanghaibin (D),
	yuzenghui, jiangkunkun, Salil Mehta, Jonathan Cameron,
	Zengtao (B)

On Tue, 19 Mar 2024 at 14:46, David Hildenbrand <david@redhat.com> wrote:
>
> On 19.03.24 15:23, Peter Maydell wrote:
> > On Tue, 19 Mar 2024 at 09:24, David Hildenbrand <david@redhat.com> wrote:
> >> I spotted new pause_all_vcpus() / resume_all_vcpus() calls in hw/intc/arm_gicv3_kvm.c and
> >> thought they would be the problematic bit.
> >>
> >> Yeah, that's going to be problematic. Further note that a lot of code does not expect
> >> that the BQL is suddenly dropped.
> >
> > Agreed; we already have one nasty set of bugs in the framebuffer
> > devices because a function drops the BQL briefly:
> > https://lore.kernel.org/qemu-devel/CAFEAcA9odnPo2LPip295Uztri7JfoVnQbkJ=Wn+k8dQneB_ynQ@mail.gmail.com/T/#u
> > so let's avoid introducing any more of a similar kind.
> >
> > Side note, the pause_all_vcpus()/resume_all_vcpus() calls in
> > hw/i386/vapic.c are probably a bit suspect for similar reasons.
>
> Exactly my thoughts. But there, it was less clear "why" it is even
> required. It's only performed for KVM.
>
> Do we also just want to stop KVM threads from executing instructions?,
> so blocking KVM ioctls might be a reasonable "replacement"? Really not sure.

I think the vapic code wants to stop other threads from executing
instructions while it's patching them, yes.

-- PMM


