* [Qemu-devel] Multiple instances of Qemu on Windows multicore
@ 2011-11-02 15:38 Fabien Chouteau
2011-11-02 16:25 ` Paolo Bonzini
0 siblings, 1 reply; 16+ messages in thread
From: Fabien Chouteau @ 2011-11-02 15:38 UTC (permalink / raw)
To: qemu-devel
Hello fellow Qemu aficionados,
On Windows, Qemu sets the affinity mask in order to run all threads on
CPU0, with this comment in the code (os-win32.c:182):
/* Note: cpu_interrupt() is currently not SMP safe, so we force
QEMU to run on a single CPU */
This was added by Fabrice Bellard in 2006 (git show a8e5ac33d).
I can't find or understand any reason for this CPU affinity restriction,
so I'm asking the experts:
1 - Is this comment still applicable?
2 - If yes, what is the problem with cpu_interrupt on SMP?
3 - Why is this different on Linux?
This is a real drawback, especially when running automated tests.
Thanks in advance,
--
Fabien Chouteau
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-02 15:38 [Qemu-devel] Multiple instances of Qemu on Windows multicore Fabien Chouteau
@ 2011-11-02 16:25 ` Paolo Bonzini
2011-11-02 17:10 ` Fabien Chouteau
0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2011-11-02 16:25 UTC (permalink / raw)
To: qemu-devel
On 11/02/2011 04:38 PM, Fabien Chouteau wrote:
> Hello fellow Qemu aficionados,
>
> On Windows, Qemu sets the affinity mask in order to run all thread on
> CPU0, with this comment in the code (os-win32.c:182):
>
> /* Note: cpu_interrupt() is currently not SMP safe, so we force
> QEMU to run on a single CPU */
>
> This was added by Fabrice Bellard in 2006 (git show a8e5ac33d).
>
> I can't find/understand any reason for this CPU affinity restriction.
Have you tried looking for a justification in the mailing lists? Also,
I suppose you have tested without the affinity mask and it works?
Offhand I cannot think of why that would be needed.
Paolo
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-02 16:25 ` Paolo Bonzini
@ 2011-11-02 17:10 ` Fabien Chouteau
2011-11-02 17:16 ` malc
0 siblings, 1 reply; 16+ messages in thread
From: Fabien Chouteau @ 2011-11-02 17:10 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: qemu-devel
On 02/11/2011 17:25, Paolo Bonzini wrote:
> On 11/02/2011 04:38 PM, Fabien Chouteau wrote:
>> Hello fellow Qemu aficionados,
>>
>> On Windows, Qemu sets the affinity mask in order to run all thread on
>> CPU0, with this comment in the code (os-win32.c:182):
>>
>> /* Note: cpu_interrupt() is currently not SMP safe, so we force
>> QEMU to run on a single CPU */
>>
>> This was added by Fabrice Bellard in 2006 (git show a8e5ac33d).
>>
>> I can't find/understand any reason for this CPU affinity restriction.
>
> Have you tried looking for a justification in the mailing lists?
Yes, and I found a few mails from Fabrice Bellard and Konrad Schwarz in
the archives:
http://thread.gmane.org/gmane.comp.emulators.qemu/13804
and
http://thread.gmane.org/gmane.comp.emulators.qemu/13831/focus=13805
But they didn't provide more information about the problem.
>
> Also, I suppose you have tested without the affinity mask and it works?
>
Yes I did, it works pretty well. I had 1 unexpected failure among ~6000
tests. But I would like to have a substantial explanation.
>
> Offhand I cannot think of why that would be needed.
>
OK, thanks for your help.
--
Fabien Chouteau
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-02 17:10 ` Fabien Chouteau
@ 2011-11-02 17:16 ` malc
2011-11-02 17:45 ` Paolo Bonzini
0 siblings, 1 reply; 16+ messages in thread
From: malc @ 2011-11-02 17:16 UTC (permalink / raw)
To: Fabien Chouteau; +Cc: Paolo Bonzini, qemu-devel
On Wed, 2 Nov 2011, Fabien Chouteau wrote:
> On 02/11/2011 17:25, Paolo Bonzini wrote:
> > On 11/02/2011 04:38 PM, Fabien Chouteau wrote:
> >> Hello fellow Qemu aficionados,
> >>
> >> On Windows, Qemu sets the affinity mask in order to run all thread on
> >> CPU0, with this comment in the code (os-win32.c:182):
> >>
> >> /* Note: cpu_interrupt() is currently not SMP safe, so we force
> >> QEMU to run on a single CPU */
> >>
> >> This was added by Fabrice Bellard in 2006 (git show a8e5ac33d).
> >>
> >> I can't find/understand any reason for this CPU affinity restriction.
> >
> > Have you tried looking for a justification in the mailing lists?
>
> Yes, and I found few mails from Fabrice Bellard and Konrad Schwarz in
> the archives:
>
> http://thread.gmane.org/gmane.comp.emulators.qemu/13804
>
> and
>
> http://thread.gmane.org/gmane.comp.emulators.qemu/13831/focus=13805
>
> But it didn't provide more information about the problem.
>
> >
> > Also, I suppose you have tested without the affinity mask and it works?
> >
>
> Yes I did, it works pretty well. I had 1 unexpected failure among ~6000
> tests. But I would like to have a substantial explanation.
>
> >
> > Offhand I cannot think of why that would be needed.
> >
>
> OK, thanks for your help.
(mm)Timers can run on a thread of their own, which might be scheduled
on a different CPU from the thread that runs emulated code; unchaining
TBs from there can (and will) fail in this case.
--
mailto:av1474@comtv.ru
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-02 17:16 ` malc
@ 2011-11-02 17:45 ` Paolo Bonzini
2011-11-02 17:55 ` malc
2011-11-02 18:01 ` Peter Maydell
0 siblings, 2 replies; 16+ messages in thread
From: Paolo Bonzini @ 2011-11-02 17:45 UTC (permalink / raw)
To: malc; +Cc: qemu-devel, Fabien Chouteau
On 11/02/2011 06:16 PM, malc wrote:
> (mm)Timers have a possibility of running on a thread of their own which
> might be schedulled on the CPU different from the thread that runs
> emulated code, unchaining TBs and can (and will) fail in this case.
This should not be a problem with dynticks+iothread (i.e. it should work,
or fail, the same way on both platforms). Basically, we now run just this
when an alarm fires:
t->expired = t->pending = 1;
qemu_notify_event();
The rest is always done in the iothread. The iothread will then
suspend/resume the VCPU thread around the unchaining, so what matters is
(in Unix parlance) signal-safety of the unchaining, not thread-safety.
Paolo
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-02 17:45 ` Paolo Bonzini
@ 2011-11-02 17:55 ` malc
2011-11-02 18:01 ` Peter Maydell
1 sibling, 0 replies; 16+ messages in thread
From: malc @ 2011-11-02 17:55 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: qemu-devel, Fabien Chouteau
On Wed, 2 Nov 2011, Paolo Bonzini wrote:
> On 11/02/2011 06:16 PM, malc wrote:
> > (mm)Timers have a possibility of running on a thread of their own which
> > might be schedulled on the CPU different from the thread that runs
> > emulated code, unchaining TBs and can (and will) fail in this case.
>
> This should not be a problem with dynticks+iothread (i.e. it should work or
> not work equally). We now run just this basically when an alarm fires:
>
I was explaining the rationale behind the pinning at the time it was done.
> t->expired = t->pending = 1;
> qemu_notify_event();
>
> The rest is always done in the iothread. The iothread will then
> suspend/resume the VCPU thread around the unchaining, so what matters is (in
> Unix parlance) signal-safety of the unchaining, not thread-safety.
>
> Paolo
>
--
mailto:av1474@comtv.ru
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-02 17:45 ` Paolo Bonzini
2011-11-02 17:55 ` malc
@ 2011-11-02 18:01 ` Peter Maydell
2011-11-02 19:52 ` Paolo Bonzini
1 sibling, 1 reply; 16+ messages in thread
From: Peter Maydell @ 2011-11-02 18:01 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: qemu-devel, Fabien Chouteau
On 2 November 2011 17:45, Paolo Bonzini <pbonzini@redhat.com> wrote:
> The rest is always done in the iothread. The iothread will then
> suspend/resume the VCPU thread around the unchaining, so what matters is (in
> Unix parlance) signal-safety of the unchaining, not thread-safety.
The unchaining is neither signal-safe nor thread-safe...
-- PMM
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-02 18:01 ` Peter Maydell
@ 2011-11-02 19:52 ` Paolo Bonzini
2011-11-02 19:57 ` Peter Maydell
2011-11-03 9:54 ` Fabien Chouteau
0 siblings, 2 replies; 16+ messages in thread
From: Paolo Bonzini @ 2011-11-02 19:52 UTC (permalink / raw)
To: Peter Maydell; +Cc: qemu-devel, Fabien Chouteau
On 11/02/2011 07:01 PM, Peter Maydell wrote:
> On 2 November 2011 17:45, Paolo Bonzini<pbonzini@redhat.com> wrote:
>> The rest is always done in the iothread. The iothread will then
>> suspend/resume the VCPU thread around the unchaining, so what matters is (in
>> Unix parlance) signal-safety of the unchaining, not thread-safety.
>
> The unchaining is neither signal-safe nor thread-safe...
Yeah, but there's nothing Windows-specific in that.
(Also, the unchaining is safer in system mode, perhaps even completely
safe, than it is with pthreads.)
Paolo
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-02 19:52 ` Paolo Bonzini
@ 2011-11-02 19:57 ` Peter Maydell
2011-11-03 9:56 ` Fabien Chouteau
2011-11-03 9:54 ` Fabien Chouteau
1 sibling, 1 reply; 16+ messages in thread
From: Peter Maydell @ 2011-11-02 19:57 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: qemu-devel, Fabien Chouteau
On 2 November 2011 19:52, Paolo Bonzini <pbonzini@redhat.com> wrote:
> (Also, the unchaining is safer, or even completely safe in system mode than
> it is with pthreads).
I don't think it's completely safe, you're just a bit less likely
to get bitten than if you're trying to run a linux-user-mode
multithreaded guest binary.
-- PMM
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-02 19:52 ` Paolo Bonzini
2011-11-02 19:57 ` Peter Maydell
@ 2011-11-03 9:54 ` Fabien Chouteau
2011-11-03 10:10 ` Paolo Bonzini
1 sibling, 1 reply; 16+ messages in thread
From: Fabien Chouteau @ 2011-11-03 9:54 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Peter Maydell, qemu-devel
On 02/11/2011 20:52, Paolo Bonzini wrote:
> On 11/02/2011 07:01 PM, Peter Maydell wrote:
>> On 2 November 2011 17:45, Paolo Bonzini<pbonzini@redhat.com> wrote:
>>> The rest is always done in the iothread. The iothread will then
>>> suspend/resume the VCPU thread around the unchaining, so what matters is (in
>>> Unix parlance) signal-safety of the unchaining, not thread-safety.
>>
>> The unchaining is neither signal-safe nor thread-safe...
>
> Yeah, but there's nothing Windows-specific in that.
That's the key point: I don't see why it would be different between Linux
and Windows here. Also, why would running all the threads on the same CPU
make the code thread-safe?
--
Fabien Chouteau
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-02 19:57 ` Peter Maydell
@ 2011-11-03 9:56 ` Fabien Chouteau
0 siblings, 0 replies; 16+ messages in thread
From: Fabien Chouteau @ 2011-11-03 9:56 UTC (permalink / raw)
To: Peter Maydell; +Cc: Paolo Bonzini, qemu-devel
On 02/11/2011 20:57, Peter Maydell wrote:
> On 2 November 2011 19:52, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> (Also, the unchaining is safer, or even completely safe in system mode than
>> it is with pthreads).
>
> I don't think it's completely safe, you're just a bit less likely
> to get bitten than if you're trying to run a linux-user-mode
> multithreaded guest binary.
Do you think it's possible to make the code safe?
--
Fabien Chouteau
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-03 9:54 ` Fabien Chouteau
@ 2011-11-03 10:10 ` Paolo Bonzini
2011-11-03 10:29 ` Avi Kivity
0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2011-11-03 10:10 UTC (permalink / raw)
To: Fabien Chouteau; +Cc: Peter Maydell, qemu-devel
On 11/03/2011 10:54 AM, Fabien Chouteau wrote:
>>> >> The unchaining is neither signal-safe nor thread-safe...
>> >
>> > Yeah, but there's nothing Windows-specific in that.
> That's very important, I don't see why it is different between Linux and
> Windows here.
Yep, perhaps for timers it was the case a while ago, but with
iothread+dynticks it should not be a problem anymore. For unchaining,
Linux and Windows should have never been different.
> Also, why running all the threads on the same CPU would
> make the code thread-safe?
It would ensure that two mutators wouldn't run concurrently. In some
sense, signal-safe code could then be considered thread-safe too.
Paolo
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-03 10:10 ` Paolo Bonzini
@ 2011-11-03 10:29 ` Avi Kivity
2011-11-03 11:50 ` Paolo Bonzini
0 siblings, 1 reply; 16+ messages in thread
From: Avi Kivity @ 2011-11-03 10:29 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Peter Maydell, qemu-devel, Fabien Chouteau
On 11/03/2011 12:10 PM, Paolo Bonzini wrote:
>
>> Also, why running all the threads on the same CPU would
>> make the code thread-safe?
>
> It would ensure that two mutators wouldn't run concurrently. In some
> sense, signal-safe code could then be considered thread-safe too.
>
How so? The scheduler can switch between the two threads on every
instruction.
--
error compiling committee.c: too many arguments to function
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-03 10:29 ` Avi Kivity
@ 2011-11-03 11:50 ` Paolo Bonzini
2011-11-04 9:27 ` Fabien Chouteau
0 siblings, 1 reply; 16+ messages in thread
From: Paolo Bonzini @ 2011-11-03 11:50 UTC (permalink / raw)
To: Avi Kivity; +Cc: Peter Maydell, qemu-devel, Fabien Chouteau
On 11/03/2011 11:29 AM, Avi Kivity wrote:
> > It would ensure that two mutators wouldn't run concurrently. In some
> > sense, signal-safe code could then be considered thread-safe too.
>
> How so? The scheduler can switch between the two threads on every
> instruction.
In general signal-safe is more stringent than thread-safe, but with two
exceptions: memory barriers and locked memory access. On x86 (implied
by Windows...) you might also assume that the compiler will generate
arithmetic operations with a memory destination, which makes code like
void cpu_interrupt(CPUState *env, int mask)
{
    env->interrupt_request |= mask; /* <--- this */
    cpu_unlink_tb(env);
}
signal-safe in practice---and even "thread-safe" on non-SMP systems.
It's a huge assumption though, and I don't think it should be assumed
anymore. With iothread the architecture of the QEMU main loop is anyway
completely different.
Paolo
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-03 11:50 ` Paolo Bonzini
@ 2011-11-04 9:27 ` Fabien Chouteau
2011-11-04 9:34 ` Paolo Bonzini
0 siblings, 1 reply; 16+ messages in thread
From: Fabien Chouteau @ 2011-11-04 9:27 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Peter Maydell, Avi Kivity, qemu-devel
On 03/11/2011 12:50, Paolo Bonzini wrote:
> On 11/03/2011 11:29 AM, Avi Kivity wrote:
>> > It would ensure that two mutators wouldn't run concurrently. In some
>> > sense, signal-safe code could then be considered thread-safe too.
>>
>> How so? The scheduler can switch between the two threads on every
>> instruction.
>
> In general signal-safe is more stringent than thread-safe, but with two exceptions: memory barriers and locked memory access. On x86 (implied by Windows...) you might also assume that the compiler will generate arithmetic operations with a memory destination, which makes code like
>
> void cpu_interrupt(CPUState *env, int mask)
> {
>     env->interrupt_request |= mask; /* <--- this */
>     cpu_unlink_tb(env);
> }
>
> signal-safe in practice---and even "thread-safe" on non-SMP systems. It's a huge assumption though, and I don't think it should be assumed anymore.
What can we do to improve that?
>
> With iothread the architecture of the QEMU main loop is anyway completely different.
>
Are you saying that things are better or worse with iothread?
--
Fabien Chouteau
* Re: [Qemu-devel] Multiple instances of Qemu on Windows multicore
2011-11-04 9:27 ` Fabien Chouteau
@ 2011-11-04 9:34 ` Paolo Bonzini
0 siblings, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2011-11-04 9:34 UTC (permalink / raw)
To: Fabien Chouteau; +Cc: Peter Maydell, Avi Kivity, qemu-devel
On 11/04/2011 10:27 AM, Fabien Chouteau wrote:
>> It's a huge assumption though, and I don't think it should be assumed anymore.
>
> What can we do to improve that?
Remove the affinity change and see what breaks. :)
> > With iothread the architecture of the QEMU main loop is anyway completely different.
>
> Are you saying that things are better or worst with iothread?
I think better. Without iothread, cpu_interrupt was called by
qemu_notify_event. With iothread it is not. Perhaps that was the
original problem with timers?...
Paolo
Thread overview: 16 messages
2011-11-02 15:38 [Qemu-devel] Multiple instances of Qemu on Windows multicore Fabien Chouteau
2011-11-02 16:25 ` Paolo Bonzini
2011-11-02 17:10 ` Fabien Chouteau
2011-11-02 17:16 ` malc
2011-11-02 17:45 ` Paolo Bonzini
2011-11-02 17:55 ` malc
2011-11-02 18:01 ` Peter Maydell
2011-11-02 19:52 ` Paolo Bonzini
2011-11-02 19:57 ` Peter Maydell
2011-11-03 9:56 ` Fabien Chouteau
2011-11-03 9:54 ` Fabien Chouteau
2011-11-03 10:10 ` Paolo Bonzini
2011-11-03 10:29 ` Avi Kivity
2011-11-03 11:50 ` Paolo Bonzini
2011-11-04 9:27 ` Fabien Chouteau
2011-11-04 9:34 ` Paolo Bonzini