qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [RFC PATCH] Implement qemu_thread_yield for posix, use it in mttcg to handle EXCP_YIELD
@ 2019-07-17  5:46 Nicholas Piggin
  2019-07-17 11:48 ` David Gibson
  2019-12-20 13:11 ` Alex Bennée
  0 siblings, 2 replies; 7+ messages in thread
From: Nicholas Piggin @ 2019-07-17  5:46 UTC (permalink / raw)
  To: David Gibson
  Cc: Greg Kurz, Nicholas Piggin, qemu-devel, qemu-ppc, Cédric Le Goater

This is a bit of a proof of concept: if mttcg becomes more important,
yield could be handled like this. vCPUs can, by accident or deliberately,
be forced onto the same physical CPU, causing priority inversion when the
lock holder is preempted by the waiter. This is lightly tested, but not
to the point of measuring any performance difference.

I really consider the previous confer/prod patches more important for
providing a more complete guest environment and better test coverage
than for performance, but maybe someone wants to pursue this.

Thanks,
Nick
---
 cpus.c                   |  6 ++++++
 hw/ppc/spapr_hcall.c     | 14 +++++++-------
 include/qemu/thread.h    |  1 +
 util/qemu-thread-posix.c |  5 +++++
 util/qemu-thread-win32.c |  4 ++++
 5 files changed, 23 insertions(+), 7 deletions(-)

diff --git a/cpus.c b/cpus.c
index 927a00aa90..f036e062d9 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1760,6 +1760,12 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
                 qemu_mutex_unlock_iothread();
                 cpu_exec_step_atomic(cpu);
                 qemu_mutex_lock_iothread();
+                break;
+            case EXCP_YIELD:
+                qemu_mutex_unlock_iothread();
+                qemu_thread_yield();
+                qemu_mutex_lock_iothread();
+                break;
             default:
                 /* Ignore everything else? */
                 break;
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 57c1ee0fe1..9c24a64dfe 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -1162,13 +1162,13 @@ static target_ulong h_confer(PowerPCCPU *cpu, SpaprMachineState *spapr,
             return H_SUCCESS;
         }
 
-        /*
-         * The targeted confer does not do anything special beyond yielding
-         * the current vCPU, but even this should be better than nothing.
-         * At least for single-threaded tcg, it gives the target a chance to
-         * run before we run again. Multi-threaded tcg does not really do
-         * anything with EXCP_YIELD yet.
-         */
+       /*
+        * The targeted confer does not do anything special beyond yielding
+        * the current vCPU, but even this should be better than nothing.
+        * For single-threaded tcg, it gives the target a chance to run
+        * before we run again, multi-threaded tcg will yield the CPU to
+        * another vCPU.
+        */
     }
 
     cs->exception_index = EXCP_YIELD;
diff --git a/include/qemu/thread.h b/include/qemu/thread.h
index 55d83a907c..8525b0a70a 100644
--- a/include/qemu/thread.h
+++ b/include/qemu/thread.h
@@ -160,6 +160,7 @@ void qemu_thread_get_self(QemuThread *thread);
 bool qemu_thread_is_self(QemuThread *thread);
 void qemu_thread_exit(void *retval);
 void qemu_thread_naming(bool enable);
+void qemu_thread_yield(void);
 
 struct Notifier;
 /**
diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
index 1bf5e65dea..91b12a1082 100644
--- a/util/qemu-thread-posix.c
+++ b/util/qemu-thread-posix.c
@@ -573,3 +573,8 @@ void *qemu_thread_join(QemuThread *thread)
     }
     return ret;
 }
+
+void qemu_thread_yield(void)
+{
+    pthread_yield();
+}
diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c
index 572f88535d..72fe406bef 100644
--- a/util/qemu-thread-win32.c
+++ b/util/qemu-thread-win32.c
@@ -442,3 +442,7 @@ bool qemu_thread_is_self(QemuThread *thread)
 {
     return GetCurrentThreadId() == thread->tid;
 }
+
+void qemu_thread_yield(void)
+{
+}
-- 
2.20.1




* Re: [Qemu-devel] [RFC PATCH] Implement qemu_thread_yield for posix, use it in mttcg to handle EXCP_YIELD
  2019-07-17  5:46 [Qemu-devel] [RFC PATCH] Implement qemu_thread_yield for posix, use it in mttcg to handle EXCP_YIELD Nicholas Piggin
@ 2019-07-17 11:48 ` David Gibson
  2019-12-20 13:11 ` Alex Bennée
  1 sibling, 0 replies; 7+ messages in thread
From: David Gibson @ 2019-07-17 11:48 UTC (permalink / raw)
  To: Nicholas Piggin; +Cc: Greg Kurz, qemu-ppc, qemu-devel, Cédric Le Goater


On Wed, Jul 17, 2019 at 03:46:55PM +1000, Nicholas Piggin wrote:
> This is a bit of proof of concept in case mttcg becomes more important
> yield could be handled like this. You can have by accident or deliberately
> force vCPUs onto the same physical CPU and cause inversion issues when the
> lock holder was preempted by the waiter. This is lightly tested but not
> to the point of measuring performance difference.
> 
> I really consider the previous confer/prod patches more important just to
> provide a more complete guest environment and better test coverage, than
> performance, but maybe someone wants to persue this.
> 
> Thanks,
> Nick
> ---
>  cpus.c                   |  6 ++++++
>  hw/ppc/spapr_hcall.c     | 14 +++++++-------
>  include/qemu/thread.h    |  1 +
>  util/qemu-thread-posix.c |  5 +++++
>  util/qemu-thread-win32.c |  4 ++++
>  5 files changed, 23 insertions(+), 7 deletions(-)
> 
> diff --git a/cpus.c b/cpus.c
> index 927a00aa90..f036e062d9 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -1760,6 +1760,12 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>                  qemu_mutex_unlock_iothread();
>                  cpu_exec_step_atomic(cpu);
>                  qemu_mutex_lock_iothread();
> +                break;
> +            case EXCP_YIELD:
> +                qemu_mutex_unlock_iothread();
> +                qemu_thread_yield();
> +                qemu_mutex_lock_iothread();
> +                break;
>              default:
>                  /* Ignore everything else? */
>                  break;
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index 57c1ee0fe1..9c24a64dfe 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -1162,13 +1162,13 @@ static target_ulong h_confer(PowerPCCPU *cpu, SpaprMachineState *spapr,
>              return H_SUCCESS;
>          }
>  
> -        /*
> -         * The targeted confer does not do anything special beyond yielding
> -         * the current vCPU, but even this should be better than nothing.
> -         * At least for single-threaded tcg, it gives the target a chance to
> -         * run before we run again. Multi-threaded tcg does not really do
> -         * anything with EXCP_YIELD yet.
> -         */
> +       /*
> +        * The targeted confer does not do anything special beyond yielding
> +        * the current vCPU, but even this should be better than nothing.
> +        * For single-threaded tcg, it gives the target a chance to run
> +        * before we run again, multi-threaded tcg will yield the CPU to
> +        * another vCPU.
> +        */

Uh.. this looks like a whitespace fixup leaked in from your other patches.

>      }
>  
>      cs->exception_index = EXCP_YIELD;
> diff --git a/include/qemu/thread.h b/include/qemu/thread.h
> index 55d83a907c..8525b0a70a 100644
> --- a/include/qemu/thread.h
> +++ b/include/qemu/thread.h
> @@ -160,6 +160,7 @@ void qemu_thread_get_self(QemuThread *thread);
>  bool qemu_thread_is_self(QemuThread *thread);
>  void qemu_thread_exit(void *retval);
>  void qemu_thread_naming(bool enable);
> +void qemu_thread_yield(void);
>  
>  struct Notifier;
>  /**
> diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
> index 1bf5e65dea..91b12a1082 100644
> --- a/util/qemu-thread-posix.c
> +++ b/util/qemu-thread-posix.c
> @@ -573,3 +573,8 @@ void *qemu_thread_join(QemuThread *thread)
>      }
>      return ret;
>  }
> +
> +void qemu_thread_yield(void)
> +{
> +    pthread_yield();
> +}
> diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c
> index 572f88535d..72fe406bef 100644
> --- a/util/qemu-thread-win32.c
> +++ b/util/qemu-thread-win32.c
> @@ -442,3 +442,7 @@ bool qemu_thread_is_self(QemuThread *thread)
>  {
>      return GetCurrentThreadId() == thread->tid;
>  }
> +
> +void qemu_thread_yield(void)
> +{
> +}

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [RFC PATCH] Implement qemu_thread_yield for posix, use it in mttcg to handle EXCP_YIELD
  2019-07-17  5:46 [Qemu-devel] [RFC PATCH] Implement qemu_thread_yield for posix, use it in mttcg to handle EXCP_YIELD Nicholas Piggin
  2019-07-17 11:48 ` David Gibson
@ 2019-12-20 13:11 ` Alex Bennée
  2020-01-21 11:20   ` Nicholas Piggin
  1 sibling, 1 reply; 7+ messages in thread
From: Alex Bennée @ 2019-12-20 13:11 UTC (permalink / raw)
  To: qemu-devel
  Cc: Cédric Le Goater, qemu-ppc, Greg Kurz, Nicholas Piggin,
	David Gibson


Nicholas Piggin <npiggin@gmail.com> writes:

> This is a bit of proof of concept in case mttcg becomes more important
> yield could be handled like this. You can have by accident or deliberately
> force vCPUs onto the same physical CPU and cause inversion issues when the
> lock holder was preempted by the waiter. This is lightly tested but not
> to the point of measuring performance difference.

Sorry I'm so late replying.

Really this comes down to what the EXCP_YIELD semantics are meant to be.
For ARM it's a hint operation, because we also have WFE, which should halt
until there is some sort of change of state. In those cases exiting the
main loop and sitting in wait_for_io should be the correct response. If
a vCPU is suspended waiting on the halt condition, doesn't it have the
same effect?
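
Concretely, something like this instead of yielding the host thread
(untested sketch only, assuming the wait-for-io call at the bottom of
the vCPU loop does the actual waiting, as I read cpus.c):

            case EXCP_YIELD:
                /* Treat the yield hint like a halt: mark the vCPU idle
                 * and let the existing wait-for-io path park this thread
                 * until an interrupt or other state change wakes it.
                 * Sketch only, not tested. */
                cpu->halted = 1;
                break;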

>
> I really consider the previous confer/prod patches more important just to
> provide a more complete guest environment and better test coverage, than
> performance, but maybe someone wants to persue this.
>
> Thanks,
> Nick
> ---
>  cpus.c                   |  6 ++++++
>  hw/ppc/spapr_hcall.c     | 14 +++++++-------
>  include/qemu/thread.h    |  1 +
>  util/qemu-thread-posix.c |  5 +++++
>  util/qemu-thread-win32.c |  4 ++++
>  5 files changed, 23 insertions(+), 7 deletions(-)
>
> diff --git a/cpus.c b/cpus.c
> index 927a00aa90..f036e062d9 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -1760,6 +1760,12 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>                  qemu_mutex_unlock_iothread();
>                  cpu_exec_step_atomic(cpu);
>                  qemu_mutex_lock_iothread();
> +                break;
> +            case EXCP_YIELD:
> +                qemu_mutex_unlock_iothread();
> +                qemu_thread_yield();
> +                qemu_mutex_lock_iothread();
> +                break;
>              default:
>                  /* Ignore everything else? */
>                  break;
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index 57c1ee0fe1..9c24a64dfe 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -1162,13 +1162,13 @@ static target_ulong h_confer(PowerPCCPU *cpu, SpaprMachineState *spapr,
>              return H_SUCCESS;
>          }
>  
> -        /*
> -         * The targeted confer does not do anything special beyond yielding
> -         * the current vCPU, but even this should be better than nothing.
> -         * At least for single-threaded tcg, it gives the target a chance to
> -         * run before we run again. Multi-threaded tcg does not really do
> -         * anything with EXCP_YIELD yet.
> -         */
> +       /*
> +        * The targeted confer does not do anything special beyond yielding
> +        * the current vCPU, but even this should be better than nothing.
> +        * For single-threaded tcg, it gives the target a chance to run
> +        * before we run again, multi-threaded tcg will yield the CPU to
> +        * another vCPU.
> +        */
>      }
>  
>      cs->exception_index = EXCP_YIELD;
> diff --git a/include/qemu/thread.h b/include/qemu/thread.h
> index 55d83a907c..8525b0a70a 100644
> --- a/include/qemu/thread.h
> +++ b/include/qemu/thread.h
> @@ -160,6 +160,7 @@ void qemu_thread_get_self(QemuThread *thread);
>  bool qemu_thread_is_self(QemuThread *thread);
>  void qemu_thread_exit(void *retval);
>  void qemu_thread_naming(bool enable);
> +void qemu_thread_yield(void);
>  
>  struct Notifier;
>  /**
> diff --git a/util/qemu-thread-posix.c b/util/qemu-thread-posix.c
> index 1bf5e65dea..91b12a1082 100644
> --- a/util/qemu-thread-posix.c
> +++ b/util/qemu-thread-posix.c
> @@ -573,3 +573,8 @@ void *qemu_thread_join(QemuThread *thread)
>      }
>      return ret;
>  }
> +
> +void qemu_thread_yield(void)
> +{
> +    pthread_yield();
> +}
> diff --git a/util/qemu-thread-win32.c b/util/qemu-thread-win32.c
> index 572f88535d..72fe406bef 100644
> --- a/util/qemu-thread-win32.c
> +++ b/util/qemu-thread-win32.c
> @@ -442,3 +442,7 @@ bool qemu_thread_is_self(QemuThread *thread)
>  {
>      return GetCurrentThreadId() == thread->tid;
>  }
> +
> +void qemu_thread_yield(void)
> +{
> +}


-- 
Alex Bennée



* Re: [Qemu-devel] [RFC PATCH] Implement qemu_thread_yield for posix, use it in mttcg to handle EXCP_YIELD
  2019-12-20 13:11 ` Alex Bennée
@ 2020-01-21 11:20   ` Nicholas Piggin
  2020-01-21 14:37     ` Alex Bennée
  0 siblings, 1 reply; 7+ messages in thread
From: Nicholas Piggin @ 2020-01-21 11:20 UTC (permalink / raw)
  To: Alex Bennée, qemu-devel
  Cc: Greg Kurz, qemu-ppc, Cédric Le Goater, David Gibson

Alex Bennée's on December 20, 2019 11:11 pm:
> 
> Nicholas Piggin <npiggin@gmail.com> writes:
> 
>> This is a bit of proof of concept in case mttcg becomes more important
>> yield could be handled like this. You can have by accident or deliberately
>> force vCPUs onto the same physical CPU and cause inversion issues when the
>> lock holder was preempted by the waiter. This is lightly tested but not
>> to the point of measuring performance difference.
> 
> Sorry I'm so late replying.

That's fine if you'll also forgive me :)

> Really this comes down to what EXCP_YIELD semantics are meant to mean.
> For ARM it's a hint operation because we also have WFE which should halt
> until there is some sort of change of state. In those cases exiting the
> main-loop and sitting in wait_for_io should be the correct response. If
> a vCPU is suspended waiting on the halt condition doesn't it have the
> same effect?

For powerpc H_CONFER, the vCPU does not want to wait forever, but just to
give up some of its time slice on the physical CPU and allow other vCPUs
to run. But it is not required that another one actually runs (if they
are all sleeping, the hypervisor must prevent deadlock). How would you
wait on such a condition?
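
For reference, the guest-side pattern is roughly the following (purely
illustrative sketch; the names below are invented, this is not the
actual kernel code):

    /* Paravirtual spinlock slow path, illustrative only.  The waiter
     * donates its timeslice to the lock holder via H_CONFER, but never
     * blocks on a wakeup condition: if nothing else is runnable, the
     * hypervisor simply resumes the caller.
     */
    struct pv_spinlock {
        volatile unsigned int locked;
        int holder_vcpu;
    };

    extern int try_lock(struct pv_spinlock *lock);
    extern int vcpu_is_preempted(int vcpu);
    extern void hcall_confer(int target_vcpu);   /* wraps H_CONFER */

    static void pv_spin_lock(struct pv_spinlock *lock)
    {
        while (!try_lock(lock)) {
            if (vcpu_is_preempted(lock->holder_vcpu)) {
                hcall_confer(lock->holder_vcpu); /* yield to the holder */
            }
            /* otherwise keep spinning; no sleep, no wakeup condition */
        }
    }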

Thanks,
Nick



* Re: [Qemu-devel] [RFC PATCH] Implement qemu_thread_yield for posix, use it in mttcg to handle EXCP_YIELD
  2020-01-21 11:20   ` Nicholas Piggin
@ 2020-01-21 14:37     ` Alex Bennée
  2020-01-22  3:26       ` David Gibson
  0 siblings, 1 reply; 7+ messages in thread
From: Alex Bennée @ 2020-01-21 14:37 UTC (permalink / raw)
  To: Nicholas Piggin
  Cc: Greg Kurz, David Gibson, qemu-ppc, qemu-devel, Cédric Le Goater


Nicholas Piggin <npiggin@gmail.com> writes:

> Alex Bennée's on December 20, 2019 11:11 pm:
>>
>> Nicholas Piggin <npiggin@gmail.com> writes:
>>
>>> This is a bit of proof of concept in case mttcg becomes more important
>>> yield could be handled like this. You can have by accident or deliberately
>>> force vCPUs onto the same physical CPU and cause inversion issues when the
>>> lock holder was preempted by the waiter. This is lightly tested but not
>>> to the point of measuring performance difference.
>>
>> Sorry I'm so late replying.
>
> That's fine if you'll also forgive me :)
>
>> Really this comes down to what EXCP_YIELD semantics are meant to mean.
>> For ARM it's a hint operation because we also have WFE which should halt
>> until there is some sort of change of state. In those cases exiting the
>> main-loop and sitting in wait_for_io should be the correct response. If
>> a vCPU is suspended waiting on the halt condition doesn't it have the
>> same effect?
>
> For powerpc H_CONFER, the vCPU does not want to wait for ever, but just
> give up a some time slice on the physical CPU and allow other vCPUs to
> run. But it's not necessary that one does run (if they are all sleeping,
> the hypervisor must prevent deadlock). How would you wait on such a
> conditon?

Isn't H_CONFER a hypercall rather than an instruction, though? In QEMU's
TCG emulation case I would expect it just to exit to the (guest)
hypervisor, which then schedules the next (guest) vCPU. It shouldn't be
something QEMU has to deal with.

If you are running QEMU as a KVM monitor this is still outside of its
scope, as all the scheduling shenanigans are dealt with inside the
kernel.

From QEMU's TCG point of view we want to concern ourselves with what the
real hardware would do - which I think in this case is to drop to the
hypervisor and let it sort it out.

--
Alex Bennée



* Re: [Qemu-devel] [RFC PATCH] Implement qemu_thread_yield for posix, use it in mttcg to handle EXCP_YIELD
  2020-01-21 14:37     ` Alex Bennée
@ 2020-01-22  3:26       ` David Gibson
  2020-01-22 18:01         ` Richard Henderson
  0 siblings, 1 reply; 7+ messages in thread
From: David Gibson @ 2020-01-22  3:26 UTC (permalink / raw)
  To: Alex Bennée
  Cc: Greg Kurz, qemu-ppc, qemu-devel, Nicholas Piggin, Cédric Le Goater


On Tue, Jan 21, 2020 at 02:37:59PM +0000, Alex Bennée wrote:
> 
> Nicholas Piggin <npiggin@gmail.com> writes:
> 
> > Alex Bennée's on December 20, 2019 11:11 pm:
> >>
> >> Nicholas Piggin <npiggin@gmail.com> writes:
> >>
> >>> This is a bit of proof of concept in case mttcg becomes more important
> >>> yield could be handled like this. You can have by accident or deliberately
> >>> force vCPUs onto the same physical CPU and cause inversion issues when the
> >>> lock holder was preempted by the waiter. This is lightly tested but not
> >>> to the point of measuring performance difference.
> >>
> >> Sorry I'm so late replying.
> >
> > That's fine if you'll also forgive me :)
> >
> >> Really this comes down to what EXCP_YIELD semantics are meant to mean.
> >> For ARM it's a hint operation because we also have WFE which should halt
> >> until there is some sort of change of state. In those cases exiting the
> >> main-loop and sitting in wait_for_io should be the correct response. If
> >> a vCPU is suspended waiting on the halt condition doesn't it have the
> >> same effect?
> >
> > For powerpc H_CONFER, the vCPU does not want to wait for ever, but just
> > give up a some time slice on the physical CPU and allow other vCPUs to
> > run. But it's not necessary that one does run (if they are all sleeping,
> > the hypervisor must prevent deadlock). How would you wait on such a
> > conditon?
> 
> Isn't H_CONFER a hypercall rather than instruction though? In QEMU's TCG
> emulation case I would expect it just to exit to the (guest) hypervisor
> which then schedules the next (guest) vCPU. It shouldn't be something
> QEMU has to deal with.

That's true if you're emulating a whole system complete with
hypervisor under TCG, which is what the "pnv" machine type does.

However, a more common use of qemu is the "pseries" machine type,
which emulates only a guest (in the cpu architectural sense) with qemu
taking the place of the hypervisor as well as emulating the cpus.  In
that case the H_CONFER hypercall goes to qemu.

> If you are running QEMU as a KVM monitor this is still outside of it's
> scope as all the scheduling shenanigans are dealt with inside the
> kernel.
> 
> From QEMU's TCG point of view we want to concern ourselves with what the
> real hardware would do - which I think in this case is drop to the
> hypervisor and let it sort it out.

Right, but with the "pseries" machine type qemu *is* the hypervisor.
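
(For anyone following along: under "pseries" with TCG, the guest's "sc 1"
is steered back into QEMU and looked up in the hypercall table that
hw/ppc/spapr_hcall.c builds via spapr_register_hypercall(), which is how
H_CONFER ends up in the h_confer() touched by this patch.  Roughly, and
only roughly:

    /* Approximate shape of the dispatch; details paraphrased, and
     * lookup_hcall() is a placeholder name. */
    spapr_register_hypercall(H_CONFER, h_confer);       /* at init time */

    target_ulong spapr_hypercall(PowerPCCPU *cpu, target_ulong opcode,
                                 target_ulong *args)
    {
        spapr_hcall_fn fn = lookup_hcall(opcode);
        return fn ? fn(cpu, spapr, opcode, args) : H_FUNCTION;
    }

so the EXCP_YIELD that h_confer() sets is raised while QEMU is playing
the hypervisor's role.)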

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [Qemu-devel] [RFC PATCH] Implement qemu_thread_yield for posix, use it in mttcg to handle EXCP_YIELD
  2020-01-22  3:26       ` David Gibson
@ 2020-01-22 18:01         ` Richard Henderson
  0 siblings, 0 replies; 7+ messages in thread
From: Richard Henderson @ 2020-01-22 18:01 UTC (permalink / raw)
  To: David Gibson, Alex Bennée
  Cc: Cédric Le Goater, qemu-ppc, Greg Kurz, Nicholas Piggin, qemu-devel

On 1/21/20 5:26 PM, David Gibson wrote:
> However, a more common use of qemu is the "pseries" machine type,
> which emulates only a guest (in the cpu architectural sense) with qemu
> taking the place of the hypervisor as well as emulating the cpus.  In
> that case the H_CONFER hypercall goes to qemu.
> 
>> If you are running QEMU as a KVM monitor this is still outside of it's
>> scope as all the scheduling shenanigans are dealt with inside the
>> kernel.
>>
>> From QEMU's TCG point of view we want to concern ourselves with what the
>> real hardware would do - which I think in this case is drop to the
>> hypervisor and let it sort it out.
> 
> Right, but with the "pseries" machine type qemu *is* the hypervisor.

In which case this behaviour doesn't seem implausible.

I will note that "pthread_yield" isn't standardized; "sched_yield" is the one
in POSIX.  Though that says nothing about how that syscall might affect a
hypothetical many-to-one pthread implementation.  You could, I suppose, have a
configure test for pthread_yield.
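
Something along these lines, say (sketch only; CONFIG_PTHREAD_YIELD here
is a hypothetical configure-detected symbol, not an existing one):

    void qemu_thread_yield(void)
    {
    #ifdef CONFIG_PTHREAD_YIELD
        pthread_yield();    /* glibc/BSD extension, not in POSIX */
    #else
        sched_yield();      /* POSIX, declared in <sched.h> */
    #endif
    }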

Also, the win32 implementation would be SwitchToThread():

https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-switchtothread
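
i.e. roughly (untested sketch):

    void qemu_thread_yield(void)
    {
        /* Yields to any other ready thread on the current processor;
         * returns immediately if nothing else is runnable, which is
         * fine for a pure hint. */
        SwitchToThread();
    }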

It looks like one need do nothing for the single-threaded implementation,
qemu_tcg_rr_cpu_thread_fn, as any return to the main loop will select the next
round-robin cpu.  But a note to say that's been tested would be nice.


r~


