linux-kernel.vger.kernel.org archive mirror
From: Juergen Gross <jgross@suse.com>
To: Joao Martins <joao.m.martins@oracle.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	"Ankur Arora" <ankur.a.arora@oracle.com>,
	"Boris Ostrovsky" <boris.ostrovsky@oracle.com>,
	"Radim Krčmář" <rkrcmar@redhat.com>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"Ingo Molnar" <mingo@redhat.com>,
	"Borislav Petkov" <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	x86@kernel.org, "Stefano Stabellini" <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH RFC 00/39] x86/KVM: Xen HVM guest support
Date: Tue, 9 Apr 2019 07:04:26 +0200	[thread overview]
Message-ID: <59deb041-2b5d-8451-32c7-644fe36e053b@suse.com> (raw)
In-Reply-To: <97808492-58ee-337f-c894-900b34b7b1a5@oracle.com>

On 08/04/2019 19:31, Joao Martins wrote:
> On 4/8/19 11:42 AM, Juergen Gross wrote:
>> On 08/04/2019 12:36, Joao Martins wrote:
>>> On 4/8/19 7:44 AM, Juergen Gross wrote:
>>>> On 12/03/2019 18:14, Joao Martins wrote:
>>>>> On 2/22/19 4:59 PM, Paolo Bonzini wrote:
>>>>>> On 21/02/19 12:45, Joao Martins wrote:
>>>>>>> On 2/20/19 9:09 PM, Paolo Bonzini wrote:
>>>>>>>> On 20/02/19 21:15, Joao Martins wrote:
>>>>>>>>>  2. PV Driver support (patches 17 - 39)
>>>>>>>>>
>>>>>>>>>  We start by redirecting hypercalls from the backend to routines
>>>>>>>>>  which emulate the behaviour that PV backends expect i.e. grant
>>>>>>>>>  table and interdomain events. Next, we add support for late
>>>>>>>>>  initialization of xenbus, followed by implementing
>>>>>>>>>  frontend/backend communication mechanisms (i.e. grant tables and
>>>>>>>>>  interdomain event channels). Finally, we introduce xen-shim.ko,
>>>>>>>>>  which will set up a limited Xen environment. This uses the added
>>>>>>>>>  functionality of Xen specific shared memory (grant tables) and
>>>>>>>>>  notifications (event channels).
>>>>>>>>
>>>>>>>> I am a bit worried by the last patches; they seem really brittle and
>>>>>>>> prone to breakage.  I don't know Xen well enough to understand if the
>>>>>>>> lack of support for GNTMAP_host_map is fixable, but if not, you have to
>>>>>>>> define a completely different hypercall.
>>>>>>>>
>>>>>>> I guess Ankur already answered this; so just to stack this on top of his comment.
>>>>>>>
>>>>>>> The xen_shim_domain() is only meant to handle the case where the backend
>>>>>>> has/can-have full access to guest memory [i.e. netback and blkback would work
>>>>>>> with similar assumptions as vhost?]. For the normal case, where a backend *in a
>>>>>>> guest* maps and unmaps other guest memory, this is not applicable and these
>>>>>>> changes don't affect that case.
>>>>>>>
>>>>>>> IOW, the PV backend here sits on the hypervisor, and the hypercalls aren't
>>>>>>> actual hypercalls but rather invoking shim_hypercall(). The call chain would go
>>>>>>> more or less like:
>>>>>>>
>>>>>>> <netback|blkback|scsiback>
>>>>>>>  gnttab_map_refs(map_ops, pages)
>>>>>>>    HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,...)
>>>>>>>      shim_hypercall()
>>>>>>>        shim_hcall_gntmap()
>>>>>>>
>>>>>>> Our reasoning was that, given we are already in KVM, why map a page
>>>>>>> when the user (i.e. the kernel PV backend) is the kernel itself? The
>>>>>>> lack of GNTMAP_host_map is how the shim determines that its user
>>>>>>> doesn't want to map the page. Also, there's another issue: PV backends
>>>>>>> always need a struct page to reference the device inflight data, as
>>>>>>> Ankur pointed out.
>>>>>>
>>>>>> Ultimately it's up to the Xen people.  It does make their API uglier,
>>>>>> especially the in/out change for the parameter.  If you can at least
>>>>>> avoid that, it would alleviate my concerns quite a bit.
>>>>>
>>>>> In my view, we have two options overall:
>>>>>
>>>>> 1) Make explicit the changes we have to make to the PV drivers in
>>>>> order to support xen_shim_domain(). This could mean e.g. a) adding a
>>>>> callback argument to gnttab_map_refs() that is invoked for every page that
>>>>> gets looked up successfully; inside this callback the PV driver may update
>>>>> its tracking page. Here we no longer have this in/out parameter in
>>>>> gnttab_map_refs(), and all shim_domain-specific bits would be a little
>>>>> more abstracted from Xen PV backends. See the netback example below the
>>>>> scissors mark. Or b) have sort of a translate_gref() and put_gref() API
>>>>> that Xen PV drivers use, which makes it even more explicit that there are
>>>>> no grant ops involved. The latter is more invasive.
>>>>>
>>>>> 2) The second option is to support guest grant mapping/unmapping [*] to
>>>>> allow hosting PV backends inside the guest. This would remove the Xen
>>>>> changes in this series completely. But it would require another guest
>>>>> being used as netback/blkback/xenstored, and gives less performance than
>>>>> 1) (though, in theory, it would be equivalent to what Xen does with
>>>>> grants/events). The only change in Linux Xen code is adding xenstored
>>>>> domain support, but that is useful on its own outside the scope of this
>>>>> work.
>>>>>
>>>>> I think there's value in both; 1) is probably more familiar for KVM
>>>>> users (as it is similar to what vhost does?), while 2) equates to
>>>>> implementing Xen disaggregation capabilities in KVM.
>>>>>
>>>>> Thoughts? Xen maintainers what's your take on this?
>>>>
>>>> What I'd like best would be a new handle (e.g. xenhost_t *) used as an
>>>> abstraction layer for this kind of stuff. It should be passed to the
>>>> backends and those would pass it on to low-level Xen drivers (xenbus,
>>>> event channels, grant table, ...).
>>>>
>>> So, IIUC, backends would use the xenhost layer to access grants or frames
>>> referenced by grants, and that would hook into some of this. IOW, you would
>>> have two implementors of xenhost: one for nested remote/local events+grants
>>> and another for this "shim domain"?
>>
>> As I'd need that for nested Xen I guess that would make it 3 variants.
>> Probably the xen-shim variant would need more hooks, but that should be
>> no problem.
>>
> I probably messed up in the short description but "nested remote/local
> events+grants" was referring to nested Xen (FWIW remote meant L0 and local L1).
> So maybe only 2 variants are needed?

I need one xenhost variant for the "normal" case as today: talking to
the single hypervisor (or in nested case: to the L1 hypervisor).

Then I need a variant for the nested case talking to L0 hypervisor.

And you need a variant talking to xen-shim.

The first two variants can be active in the same system in case of
nested Xen: the backends of L2 dom0 are talking to L1 hypervisor,
while its frontends are talking with L0 hypervisor.

> 
>>>> I was planning to do that (the xenhost_t * stuff) soon in order to add
>>>> support for nested Xen using PV devices (you need two Xenstores for that
>>>> as the nested dom0 is acting as Xen backend server, while using PV
>>>> frontends for accessing the "real" world outside).
>>>>
>>>> The xenhost_t should be used for:
>>>>
>>>> - accessing Xenstore
>>>> - issuing and receiving events
>>>> - doing hypercalls
>>>> - grant table operations
>>>>
>>>
>>> In the text above, I sort of suggested a slice of this in 1.b) with a
>>> translate_gref() and put_gref() API -- to get the page from a gref. This was
>>> because of the flags|host_addr hurdle we depicted above wrt using grant
>>> maps/unmaps. Do you think some of the xenhost layer would be amenable to
>>> supporting this case?
>>
>> I think so, yes.
>>
>>>
>>>> So exactly the kind of stuff you want to do, too.
>>>>
>>> Cool idea!
>>
>> In the end you might make my life easier for nested Xen. :-)
>>
> Hehe :)
> 
>> Do you want to have a try with that idea or should I do that? I might be
>> able to start working on that in about a month.
>>
> Ankur (CC'ed) will give it a shot, and should start a new thread on this
> xenhost abstraction layer.

Great, looking forward to it!


Juergen


Thread overview: 126+ messages
2019-02-20 20:15 [PATCH RFC 00/39] x86/KVM: Xen HVM guest support Joao Martins
2019-02-20 20:15 ` [PATCH RFC 01/39] KVM: x86: fix Xen hypercall page msr handling Joao Martins
2019-02-22  1:30   ` Sean Christopherson
2019-02-22 11:47     ` Joao Martins
2019-02-22 12:51     ` Paolo Bonzini
2020-11-30 10:39       ` David Woodhouse
2020-11-30 11:03         ` Paolo Bonzini
2020-11-30 11:27           ` David Woodhouse
2019-02-20 20:15 ` [PATCH RFC 02/39] KVM: x86/xen: intercept xen hypercalls if enabled Joao Martins
2019-02-21 18:29   ` Sean Christopherson
2019-02-21 20:56     ` Joao Martins
2019-02-22  0:30       ` Sean Christopherson
2019-02-22 12:50         ` Paolo Bonzini
2020-12-01  9:48   ` David Woodhouse
2020-12-01 11:19     ` David Woodhouse
2020-12-02 11:17       ` Joao Martins
2020-12-02 12:12         ` David Woodhouse
2020-12-02  5:19     ` Ankur Arora
2020-12-02  8:03       ` David Woodhouse
2020-12-02 18:20         ` Ankur Arora
2019-02-20 20:15 ` [PATCH RFC 03/39] KVM: x86/xen: register shared_info page Joao Martins
2020-12-01 13:07   ` David Woodhouse
2020-12-02  0:40     ` Ankur Arora
2020-12-02  1:26       ` David Woodhouse
2020-12-02  5:17         ` Ankur Arora
2020-12-02 10:50           ` Joao Martins
2020-12-02 10:44       ` Joao Martins
2020-12-02 12:20         ` David Woodhouse
2020-12-02 20:32           ` Ankur Arora
2020-12-03 10:16             ` David Woodhouse
2020-12-04 17:30               ` Sean Christopherson
2020-12-02 20:33         ` Ankur Arora
2020-12-12 12:07       ` David Woodhouse
2019-02-20 20:15 ` [PATCH RFC 04/39] KVM: x86/xen: setup pvclock updates Joao Martins
2019-02-20 20:15 ` [PATCH RFC 05/39] KVM: x86/xen: update wallclock region Joao Martins
2019-02-20 20:15 ` [PATCH RFC 06/39] KVM: x86/xen: register vcpu info Joao Martins
2019-02-20 20:15 ` [PATCH RFC 07/39] KVM: x86/xen: register vcpu time info region Joao Martins
2019-02-20 20:15 ` [PATCH RFC 08/39] KVM: x86/xen: register steal clock Joao Martins
2019-02-20 20:15 ` [PATCH RFC 09/39] KVM: x86: declare Xen HVM guest capability Joao Martins
2019-02-20 20:15 ` [PATCH RFC 10/39] KVM: x86/xen: support upcall vector Joao Martins
2020-12-02 11:17   ` David Woodhouse
2020-12-02 13:12     ` Joao Martins
2020-12-02 16:47       ` David Woodhouse
2020-12-02 18:34         ` Joao Martins
2020-12-02 19:02           ` David Woodhouse
2020-12-02 20:12             ` Joao Martins
2020-12-02 20:37               ` David Woodhouse
2020-12-03  1:08             ` Ankur Arora
2020-12-08 16:08             ` David Woodhouse
2020-12-09  6:35               ` Ankur Arora
2020-12-09 10:27                 ` David Woodhouse
2020-12-09 10:51                   ` Joao Martins
2020-12-09 11:39                     ` David Woodhouse
2020-12-09 13:26                       ` Joao Martins
2020-12-09 15:41                         ` David Woodhouse
2020-12-09 16:12                           ` Joao Martins
2021-01-01 14:33           ` David Woodhouse
2021-01-05 12:11             ` Joao Martins
2021-01-05 13:23               ` David Woodhouse
2019-02-20 20:15 ` [PATCH RFC 11/39] KVM: x86/xen: evtchn signaling via eventfd Joao Martins
2020-11-30  9:41   ` David Woodhouse
2020-11-30 12:17     ` Joao Martins
2020-11-30 12:55       ` David Woodhouse
2020-11-30 15:08         ` Joao Martins
2020-11-30 16:48           ` David Woodhouse
2020-11-30 17:15             ` Joao Martins
2020-11-30 18:01               ` David Woodhouse
2020-11-30 18:41                 ` Joao Martins
2020-11-30 19:04                   ` David Woodhouse
2020-11-30 19:25                     ` Joao Martins
2021-11-23 13:15           ` David Woodhouse
2019-02-20 20:15 ` [PATCH RFC 12/39] KVM: x86/xen: store virq when assigning evtchn Joao Martins
     [not found]   ` <b750291466f3c89e0a393e48079c087704b217a5.camel@amazon.co.uk>
2022-02-10 12:17     ` Joao Martins
2022-02-10 15:23       ` [EXTERNAL] " David Woodhouse
2019-02-20 20:15 ` [PATCH RFC 13/39] KVM: x86/xen: handle PV timers oneshot mode Joao Martins
2019-02-20 20:15 ` [PATCH RFC 14/39] KVM: x86/xen: handle PV IPI vcpu yield Joao Martins
2019-02-20 20:15 ` [PATCH RFC 15/39] KVM: x86/xen: handle PV spinlocks slowpath Joao Martins
2022-02-08 12:36   ` David Woodhouse
2022-02-10 12:17     ` Joao Martins
2022-02-10 14:11       ` David Woodhouse
2019-02-20 20:15 ` [PATCH RFC 16/39] KVM: x86: declare Xen HVM evtchn offload capability Joao Martins
2019-02-20 20:15 ` [PATCH RFC 17/39] x86/xen: export vcpu_info and shared_info Joao Martins
2019-02-20 20:15 ` [PATCH RFC 18/39] x86/xen: make hypercall_page generic Joao Martins
2019-02-20 20:15 ` [PATCH RFC 19/39] xen/xenbus: xenbus uninit support Joao Martins
2019-02-20 20:15 ` [PATCH RFC 20/39] xen-blkback: module_exit support Joao Martins
2019-02-25 18:57   ` Konrad Rzeszutek Wilk
2019-02-26 11:20     ` Joao Martins
2019-02-20 20:15 ` [PATCH RFC 21/39] KVM: x86/xen: domid allocation Joao Martins
2019-02-20 20:15 ` [PATCH RFC 22/39] KVM: x86/xen: grant table init Joao Martins
2019-02-20 20:15 ` [PATCH RFC 23/39] KVM: x86/xen: grant table grow support Joao Martins
2019-02-20 20:15 ` [PATCH RFC 24/39] KVM: x86/xen: backend hypercall support Joao Martins
2019-02-20 20:15 ` [PATCH RFC 25/39] KVM: x86/xen: grant map support Joao Martins
2019-02-20 20:15 ` [PATCH RFC 26/39] KVM: x86/xen: grant unmap support Joao Martins
2019-02-20 20:15 ` [PATCH RFC 27/39] KVM: x86/xen: grant copy support Joao Martins
2019-02-20 20:15 ` [PATCH RFC 28/39] KVM: x86/xen: interdomain evtchn support Joao Martins
2019-02-20 20:15 ` [PATCH RFC 29/39] KVM: x86/xen: evtchn unmask support Joao Martins
2019-02-20 20:16 ` [PATCH RFC 30/39] KVM: x86/xen: add additional evtchn ops Joao Martins
2019-02-20 20:16 ` [PATCH RFC 31/39] xen-shim: introduce shim domain driver Joao Martins
2019-02-20 20:16 ` [PATCH RFC 32/39] xen/balloon: xen_shim_domain() support Joao Martins
2019-02-20 20:16 ` [PATCH RFC 33/39] xen/grant-table: " Joao Martins
2019-02-20 20:16 ` [PATCH RFC 34/39] xen/gntdev: " Joao Martins
2019-02-20 20:16 ` [PATCH RFC 35/39] xen/xenbus: " Joao Martins
2019-02-20 20:16 ` [PATCH RFC 36/39] drivers/xen: " Joao Martins
2019-02-20 20:16 ` [PATCH RFC 37/39] xen-netback: " Joao Martins
2019-02-20 20:16 ` [PATCH RFC 38/39] xen-blkback: " Joao Martins
2019-02-20 20:16 ` [PATCH RFC 39/39] KVM: x86: declare Xen HVM Dom0 capability Joao Martins
2019-02-20 21:09 ` [PATCH RFC 00/39] x86/KVM: Xen HVM guest support Paolo Bonzini
2019-02-21  0:29   ` Ankur Arora
2019-02-21 11:45   ` Joao Martins
2019-02-22 16:59     ` Paolo Bonzini
2019-03-12 17:14       ` Joao Martins
2019-04-08  6:44         ` Juergen Gross
2019-04-08 10:36           ` Joao Martins
2019-04-08 10:42             ` Juergen Gross
2019-04-08 17:31               ` Joao Martins
2019-04-09  0:35                 ` Stefano Stabellini
2019-04-10  5:50                   ` [Xen-devel] " Ankur Arora
2019-04-10 20:45                     ` Stefano Stabellini
2019-04-09  5:04                 ` Juergen Gross [this message]
2019-04-10  6:55                   ` Ankur Arora
2019-04-10  7:14                     ` Juergen Gross
2019-02-20 23:39 ` [Xen-devel] " Marek Marczykowski-Górecki
2019-02-21  0:31   ` Ankur Arora
2019-02-21  7:57   ` Juergen Gross
2019-02-21 12:00     ` Joao Martins
2019-02-21 11:55   ` Joao Martins
