From: Julien Grall <julien.grall@arm.com>
To: Andrii Anisov <andrii.anisov@gmail.com>, Jan Beulich <JBeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
Wei Liu <wei.liu2@citrix.com>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
George Dunlap <George.Dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
xen-devel <xen-devel@lists.xenproject.org>,
"andrii_anisov@epam.com" <andrii_anisov@epam.com>,
Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v3] xen: introduce VCPUOP_register_runstate_phys_memory_area hypercall
Date: Fri, 14 Jun 2019 16:24:24 +0100 [thread overview]
Message-ID: <c1094660-9c41-9883-8869-f04f95976728@arm.com> (raw)
In-Reply-To: <5e13f916-4ea7-05a6-3156-df6dc8bbd1d8@gmail.com>
Hi Andrii,
On 14/06/2019 16:11, Andrii Anisov wrote:
>
>
> On 14.06.19 17:39, Julien Grall wrote:
>> Why? What are the benefits for a guest to use the two interfaces together?
>
> I do not say the guest has to use both interfaces simultaneously. It is
> logically odd; doing so would only increase hypervisor overhead.
> But such an implementation will have simpler code, which is expected to be (a
> bit) faster.
> So the code simplicity would be a benefit for us. Lower hypervisor overhead is a
> benefit for sane guests, which use only one interface.
I hope you are aware that speaking about speed here is quite irrelevant. The
difference would be clearly lost in the noise of the rest of the context switch.
But, if you allow something, then most likely someone will use it. However, you
have to differentiate implementation from documentation.
In this case, I don't think the implementation should dictate what is going to
be exposed.
If you document that it can't happen, then you have room to forbid the mix in
the future (assuming this can't be done now).
In other words, the laxer the interface is, the more difficult it is to tighten
in the future.
I am not going to push for an implementation that forbids the mix. But I am
strongly going to push for documenting the expected interaction, so we don't
make our lives miserable later on.
>
> BTW, dropping the old interface implementation will be much easier in the
> future if it does not clash with the new one.
I am afraid we will never be able to remove the old interface.
>
>> After all they have exactly the same data...
>
> Yes, but normal guests should use only one interface.
See above.
Cheers,
--
Julien Grall