From: "Alex Bennée" <alex.bennee@linaro.org>
To: Nicholas Piggin <npiggin@gmail.com>
Cc: "Greg Kurz" <groug@kaod.org>,
	"David Gibson" <david@gibson.dropbear.id.au>,
	qemu-ppc@nongnu.org, qemu-devel@nongnu.org,
	"Cédric Le Goater" <clg@kaod.org>
Subject: Re: [Qemu-devel] [RFC PATCH] Implement qemu_thread_yield for posix, use it in mttcg to handle EXCP_YIELD
Date: Tue, 21 Jan 2020 14:37:59 +0000
Message-ID: <87ftg827ug.fsf@linaro.org>
In-Reply-To: <1579604990.qzk2f3181l.astroid@bobo.none>


Nicholas Piggin <npiggin@gmail.com> writes:

> Alex Bennée's on December 20, 2019 11:11 pm:
>>
>> Nicholas Piggin <npiggin@gmail.com> writes:
>>
>>> This is a bit of a proof of concept: in case mttcg becomes more
>>> important, yield could be handled like this. You can, by accident or
>>> deliberately, force vCPUs onto the same physical CPU and cause
>>> priority-inversion issues when the lock holder is preempted by the
>>> waiter. This is lightly tested, but not to the point of measuring any
>>> performance difference.
>>
>> Sorry I'm so late replying.
>
> That's fine if you'll also forgive me :)
>
>> Really this comes down to what EXCP_YIELD semantics are meant to mean.
>> For ARM it's a hint operation because we also have WFE which should halt
>> until there is some sort of change of state. In those cases exiting the
>> main-loop and sitting in wait_for_io should be the correct response. If
>> a vCPU is suspended waiting on the halt condition doesn't it have the
>> same effect?
>
> For powerpc H_CONFER, the vCPU does not want to wait forever, but just
> give up some of its time slice on the physical CPU and allow other
> vCPUs to run. But it's not guaranteed that one does run (if they are
> all sleeping, the hypervisor must prevent deadlock). How would you wait
> on such a condition?

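(For context, I'm assuming the POSIX side of your patch is essentially a
sched_yield() wrapper along the lines of the sketch below -- that's a
guess from the cover text rather than from reading the diff, so
apologies if the actual patch differs:

  #include <sched.h>

  void qemu_thread_yield(void)
  {
      /*
       * Give up the remainder of this thread's time slice if the host
       * has anything else runnable; return immediately otherwise.
       * sched_yield() never blocks, so there is no deadlock risk even
       * when every other vCPU thread is asleep.
       */
      sched_yield();
  }

If that is right, the never-blocking property does seem to match the
H_CONFER semantics you describe.)
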
Isn't H_CONFER a hypercall rather than an instruction though? In QEMU's
TCG emulation case I would expect it simply to exit to the (guest)
hypervisor, which then schedules the next (guest) vCPU. It shouldn't be
something QEMU has to deal with.

If you are running QEMU as a KVM monitor this is still outside of its
scope, as all the scheduling shenanigans are dealt with inside the
kernel.

From QEMU's TCG point of view we want to concern ourselves with what the
real hardware would do - which I think in this case is to drop to the
hypervisor and let it sort it out.
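
That said, if QEMU did end up acting on it (which is what the RFC
proposes for mttcg), I'd picture the exit handling in the per-vCPU
thread growing something like the sketch below. This is purely my guess
at the shape of it -- the helper name is made up and I haven't checked
it against your patch:

  /* Hypothetical sketch only -- names are my guesses, not the patch. */
  static void handle_yield_exit(int r)
  {
      if (r == EXCP_YIELD) {
          /*
           * The guest conferred its time slice (H_CONFER, or a yield
           * hint on other targets): nudge the host scheduler, but never
           * sleep, because no other vCPU is guaranteed to be runnable.
           */
          qemu_thread_yield();
      }
      /* EXCP_HALTED, EXCP_DEBUG, etc. keep their existing handling. */
  }

But the interesting question remains whether that belongs in QEMU at
all, or whether the guest hypervisor level should see it instead.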

--
Alex Bennée


Thread overview: 7+ messages
2019-07-17  5:46 [Qemu-devel] [RFC PATCH] Implement qemu_thread_yield for posix, use it in mttcg to handle EXCP_YIELD Nicholas Piggin
2019-07-17 11:48 ` David Gibson
2019-12-20 13:11 ` Alex Bennée
2020-01-21 11:20   ` Nicholas Piggin
2020-01-21 14:37     ` Alex Bennée [this message]
2020-01-22  3:26       ` David Gibson
2020-01-22 18:01         ` Richard Henderson
