From: Juergen Gross <jgross@suse.com>
To: Ankur Arora <ankur.a.arora@oracle.com>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Cc: pbonzini@redhat.com, boris.ostrovsky@oracle.com,
	sstabellini@kernel.org, joao.m.martins@oracle.com,
	konrad.wilk@oracle.com
Subject: Re: [Xen-devel] [RFC PATCH 01/16] x86/xen: add xenhost_t interface
Date: Fri, 14 Jun 2019 13:52:51 +0200	[thread overview]
Message-ID: <c80886fb-583a-a78e-62cb-4a7944ab7fab@suse.com> (raw)
In-Reply-To: <199b7183-1872-7342-4283-af2925e780c5@oracle.com>

On 11.06.19 09:16, Ankur Arora wrote:
> On 2019-06-07 8:04 a.m., Juergen Gross wrote:
>> On 09.05.19 19:25, Ankur Arora wrote:
>>> Add xenhost_t which will serve as an abstraction over Xen interfaces.
>>> It co-exists with the PV/HVM/PVH abstractions (x86_init, hypervisor_x86,
>>> pv_ops etc) and is meant to capture mechanisms for communication with
>>> Xen so we could have different types of underlying Xen: regular, local,
>>> and nested.
>>>
>>> Also add xenhost_register() and stub registration in the various guest
>>> types.
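
(For context: a rough sketch of the shape this interface could take, inferred only from the usage visible in this patch and the titles of the later patches in the series; the real definitions live in the unquoted include/xen/xenhost.h, so the enumerator and field names below are assumptions.)

enum xenhost_type {
	xenhost_invalid = 0,
	xenhost_r0,	/* local: Xen interfaces backed within the domain itself */
	xenhost_r1,	/* regular: the hypervisor this kernel booted on */
	xenhost_r2,	/* nested: the L0 hypervisor underneath an L1 Xen */
};

typedef struct xenhost_ops {
	/* per-interface hooks (hypercall, cpuid, shared_info, vcpu_info,
	 * evtchn, grant-table, ...) get filled in by later patches. */
} xenhost_ops_t;

typedef struct {
	enum xenhost_type type;
	xenhost_ops_t *ops;
} xenhost_t;

void xenhost_register(enum xenhost_type type, xenhost_ops_t *ops);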
>>>
>>> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
>>> ---
>>>   arch/x86/xen/Makefile        |  1 +
>>>   arch/x86/xen/enlighten_hvm.c | 13 +++++
>>>   arch/x86/xen/enlighten_pv.c  | 16 ++++++
>>>   arch/x86/xen/enlighten_pvh.c | 12 +++++
>>>   arch/x86/xen/xenhost.c       | 75 ++++++++++++++++++++++++++++
>>>   include/xen/xen.h            |  3 ++
>>>   include/xen/xenhost.h        | 95 ++++++++++++++++++++++++++++++++++++
>>>   7 files changed, 215 insertions(+)
>>>   create mode 100644 arch/x86/xen/xenhost.c
>>>   create mode 100644 include/xen/xenhost.h
>>>
>>> diff --git a/arch/x86/xen/Makefile b/arch/x86/xen/Makefile
>>> index 084de77a109e..564b4dddbc15 100644
>>> --- a/arch/x86/xen/Makefile
>>> +++ b/arch/x86/xen/Makefile
>>> @@ -18,6 +18,7 @@ obj-y                += mmu.o
>>>   obj-y                += time.o
>>>   obj-y                += grant-table.o
>>>   obj-y                += suspend.o
>>> +obj-y                += xenhost.o
>>>   obj-$(CONFIG_XEN_PVHVM)        += enlighten_hvm.o
>>>   obj-$(CONFIG_XEN_PVHVM)        += mmu_hvm.o
>>> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
>>> index 0e75642d42a3..100452f4f44c 100644
>>> --- a/arch/x86/xen/enlighten_hvm.c
>>> +++ b/arch/x86/xen/enlighten_hvm.c
>>> @@ -5,6 +5,7 @@
>>>   #include <linux/kexec.h>
>>>   #include <linux/memblock.h>
>>> +#include <xen/xenhost.h>
>>>   #include <xen/features.h>
>>>   #include <xen/events.h>
>>>   #include <xen/interface/memory.h>
>>> @@ -82,6 +83,12 @@ static void __init xen_hvm_init_mem_mapping(void)
>>>       xen_vcpu_info_reset(0);
>>>   }
>>> +xenhost_ops_t xh_hvm_ops = {
>>> +};
>>> +
>>> +xenhost_ops_t xh_hvm_nested_ops = {
>>> +};
>>> +
>>>   static void __init init_hvm_pv_info(void)
>>>   {
>>>       int major, minor;
>>> @@ -179,6 +186,12 @@ static void __init xen_hvm_guest_init(void)
>>>   {
>>>       if (xen_pv_domain())
>>>           return;
>>> +    /*
>>> +     * We need only xenhost_r1 for HVM guests since they cannot be
>>> +     * driver domain (?) or dom0.
>>
>> I think even HVM guests could (in theory) be driver domains.
>>
>>> +     */
>>> +    if (!xen_pvh_domain())
>>> +        xenhost_register(xenhost_r1, &xh_hvm_ops);
>>>       init_hvm_pv_info();
>>> diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
>>> index c54a493e139a..bb6e811c1525 100644
>>> --- a/arch/x86/xen/enlighten_pv.c
>>> +++ b/arch/x86/xen/enlighten_pv.c
>>> @@ -36,6 +36,7 @@
>>>   #include <xen/xen.h>
>>>   #include <xen/events.h>
>>> +#include <xen/xenhost.h>
>>>   #include <xen/interface/xen.h>
>>>   #include <xen/interface/version.h>
>>>   #include <xen/interface/physdev.h>
>>> @@ -1188,6 +1189,12 @@ static void __init xen_dom0_set_legacy_features(void)
>>>       x86_platform.legacy.rtc = 1;
>>>   }
>>> +xenhost_ops_t xh_pv_ops = {
>>> +};
>>> +
>>> +xenhost_ops_t xh_pv_nested_ops = {
>>> +};
>>> +
>>>   /* First C function to be called on Xen boot */
>>>   asmlinkage __visible void __init xen_start_kernel(void)
>>>   {
>>> @@ -1198,6 +1205,15 @@ asmlinkage __visible void __init xen_start_kernel(void)
>>>       if (!xen_start_info)
>>>           return;
>>> +    xenhost_register(xenhost_r1, &xh_pv_ops);
>>> +
>>> +    /*
>>> +     * Detect in some implementation defined manner whether this is
>>> +     * nested or not.
>>> +     */
>>> +    if (xen_driver_domain() && xen_nested())
>>> +        xenhost_register(xenhost_r2, &xh_pv_nested_ops);
>>
>> I don't think a driver domain other than dom0 "knows" this in the
>> beginning. It will need to register xenhost_r2
> Right. No point in needlessly registering as xenhost_r2 without
> needing to handle any xenhost_r2 devices.
> 
>>  in case it learns about a pv device from L0 hypervisor.
> What's the mechanism you are thinking of for this?
> I'm guessing this PV device notification could arrive at an
> arbitrary point in time after the system has booted.

I'm not sure yet how this should be handled.

Maybe an easy solution would be the presence of a Xen PCI device
passed through from the L1 hypervisor to L1 dom0. OTOH this would
preclude nested Xen for an L1 hypervisor running in PVH mode. And for
L1 driver domains this would need either a shared PCI device,
multiple Xen PCI devices, or something new.
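
(To make that concrete, a minimal sketch of what such detection could look like, assuming the L0 hypervisor surfaces an extra Xen platform PCI device to L1 dom0. xenhost_register(), xenhost_r2 and xh_pv_nested_ops are names from this RFC, the PCI IDs are the existing Xen platform device IDs, and everything else -- including how the L0 device would be told apart from the L1 one -- is hypothetical.)

#include <linux/module.h>
#include <linux/pci.h>
#include <xen/xenhost.h>

/*
 * The existing Xen platform device is 5853:0001; an L0-provided
 * instance would presumably need a distinct (sub)device ID so it
 * doesn't get claimed by platform-pci in the L1 guest.
 */
static const struct pci_device_id xenhost_r2_pci_ids[] = {
	{ PCI_DEVICE(PCI_VENDOR_ID_XEN, PCI_DEVICE_ID_XEN_PLATFORM) },
	{ /* sentinel */ }
};

static int xenhost_r2_pci_probe(struct pci_dev *pdev,
				const struct pci_device_id *id)
{
	/*
	 * Register the nested xenhost only once a device from L0
	 * actually shows up; as noted above, that can happen at an
	 * arbitrary point after boot.
	 */
	xenhost_register(xenhost_r2, &xh_pv_nested_ops);
	return 0;
}

static struct pci_driver xenhost_r2_pci_driver = {
	.name     = "xenhost-r2",
	.id_table = xenhost_r2_pci_ids,
	.probe    = xenhost_r2_pci_probe,
};
module_pci_driver(xenhost_r2_pci_driver);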

There is a design session planned for this topic at the Xen developer
summit in July.


Juergen


Thread overview: 66+ messages
2019-05-09 17:25 [Xen-devel] [RFC PATCH 00/16] xenhost support Ankur Arora
2019-05-09 17:25 ` [Xen-devel] [RFC PATCH 01/16] x86/xen: add xenhost_t interface Ankur Arora
2019-06-07 15:04   ` Juergen Gross
2019-06-11  7:16     ` Ankur Arora
2019-06-14 11:52       ` Juergen Gross [this message]
2019-05-09 17:25 ` Ankur Arora
2019-05-09 17:25 ` [RFC PATCH 02/16] x86/xen: cpuid support in xenhost_t Ankur Arora
2019-05-09 17:25   ` [Xen-devel] " Ankur Arora
2019-06-12 21:09   ` Andrew Cooper
2019-05-09 17:25 ` [Xen-devel] [RFC PATCH 03/16] x86/xen: make hypercall_page generic Ankur Arora
2019-05-09 17:25 ` Ankur Arora
2019-05-09 17:25 ` [RFC PATCH 04/16] x86/xen: hypercall support for xenhost_t Ankur Arora
2019-05-09 17:25 ` [Xen-devel] " Ankur Arora
2019-06-12 21:15   ` Andrew Cooper
2019-06-14  7:20     ` Ankur Arora
2019-06-14  7:35       ` Juergen Gross
2019-06-14  8:00         ` Andrew Cooper
2019-05-09 17:25 ` [RFC PATCH 05/16] x86/xen: add feature support in xenhost_t Ankur Arora
2019-05-09 17:25 ` [Xen-devel] " Ankur Arora
2019-05-09 17:25 ` [Xen-devel] [RFC PATCH 06/16] x86/xen: add shared_info support to xenhost_t Ankur Arora
2019-06-07 15:08   ` Juergen Gross
2019-06-08  5:01     ` Ankur Arora
2019-05-09 17:25 ` Ankur Arora
2019-05-09 17:25 ` [RFC PATCH 07/16] x86/xen: make vcpu_info part of xenhost_t Ankur Arora
2019-05-09 17:25   ` [Xen-devel] " Ankur Arora
2019-06-14 11:53   ` Juergen Gross
2019-06-17  6:28     ` Ankur Arora
2019-05-09 17:25 ` [RFC PATCH 08/16] x86/xen: irq/upcall handling with multiple xenhosts Ankur Arora
2019-05-09 17:25 ` [Xen-devel] " Ankur Arora
2019-06-14 12:01   ` Juergen Gross
2019-05-09 17:25 ` [RFC PATCH 09/16] xen/evtchn: support evtchn in xenhost_t Ankur Arora
2019-05-09 17:25 ` [Xen-devel] " Ankur Arora
2019-06-14 12:04   ` Juergen Gross
2019-06-17  6:09     ` Ankur Arora
2019-05-09 17:25 ` [RFC PATCH 10/16] xen/balloon: support ballooning " Ankur Arora
2019-05-09 17:25 ` [Xen-devel] " Ankur Arora
2019-06-17  9:28   ` Juergen Gross
2019-06-19  2:24     ` Ankur Arora
2019-05-09 17:25 ` [RFC PATCH 11/16] xen/grant-table: make grant-table xenhost aware Ankur Arora
2019-05-09 17:25   ` [Xen-devel] " Ankur Arora
2019-06-17  9:36   ` Juergen Gross
2019-06-19  2:25     ` Ankur Arora
2019-05-09 17:25 ` [RFC PATCH 12/16] xen/xenbus: support xenbus frontend/backend with xenhost_t Ankur Arora
2019-05-09 17:25 ` [Xen-devel] " Ankur Arora
2019-06-17  9:50   ` Juergen Gross
2019-06-19  2:38     ` Ankur Arora
2019-05-09 17:25 ` [Xen-devel] [RFC PATCH 13/16] drivers/xen: gnttab, evtchn, xenbus API changes Ankur Arora
2019-06-17 10:07   ` Juergen Gross
2019-06-19  2:55     ` Ankur Arora
2019-05-09 17:25 ` Ankur Arora
2019-05-09 17:25 ` [Xen-devel] [RFC PATCH 14/16] xen/blk: " Ankur Arora
2019-06-17 10:14   ` Juergen Gross
2019-06-19  2:59     ` Ankur Arora
2019-05-09 17:25 ` Ankur Arora
2019-05-09 17:25 ` [RFC PATCH 15/16] xen/net: " Ankur Arora
2019-05-09 17:25 ` [Xen-devel] " Ankur Arora
2019-06-17 10:14   ` Juergen Gross
2019-05-09 17:25 ` [Xen-devel] [RFC PATCH 16/16] xen/grant-table: host_addr fixup in mapping on xenhost_r0 Ankur Arora
2019-06-17 10:55   ` Juergen Gross
2019-06-19  3:02     ` Ankur Arora
2019-05-09 17:25 ` Ankur Arora
2019-06-07 14:51 ` [Xen-devel] [RFC PATCH 00/16] xenhost support Juergen Gross
2019-06-07 15:22   ` Joao Martins
2019-06-07 16:21     ` Juergen Gross
2019-06-08  5:50       ` Ankur Arora
2019-06-08  5:33   ` Ankur Arora
