From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Julien Grall <julien.grall@arm.com>
Cc: Juergen Gross <jgross@suse.com>,
	xen-devel@lists.xen.org, Andrii Anisov <andrii_anisov@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrii Anisov <andrii.anisov@gmail.com>
Subject: Re: [PATCH 0/2 for-4.12] Introduce runstate area registration with phys address
Date: Thu, 7 Mar 2019 18:15:53 +0100	[thread overview]
Message-ID: <20190307171553.nj3fteo7anst7bxm@Air-de-Roger> (raw)
In-Reply-To: <0f89f12b-0111-5da0-9119-c35ebdacdd2f@arm.com>

On Thu, Mar 07, 2019 at 04:36:59PM +0000, Julien Grall wrote:
> Hi Roger,
> 
> Thank you for the answer.
> 
> On 07/03/2019 16:16, Roger Pau Monné wrote:
> > On Thu, Mar 07, 2019 at 03:17:54PM +0000, Julien Grall wrote:
> > > Hi Andrii,
> > > 
> > > On 07/03/2019 14:34, Andrii Anisov wrote:
> > > > On 07.03.19 16:02, Julien Grall wrote:
> > > > > >    - IMHO, this implementation is simpler and cleaner than what I
> > > > > > have for runstate mapping on access.
> > > > > 
> > > > > Did you implement it using access_guest_memory_by_ipa?
> > > > Not exactly: access_guest_memory_by_ipa() has no x86 implementation,
> > > > but my code is built around the same approach.
> > > 
> > > For the HVM, the equivalent function is hvm_copy_to_guest_phys. I don't know
> > > what would be the interface for PV. Roger, any idea?
> > 
> > For PV I think you will have to use get_page_from_gfn, check the
> > permissions, map it, write and unmap it. The same flow would also work
> > for HVM, so I'm not sure if there's much point in using
> > hvm_copy_to_guest_phys. Or you can implement a generic
> > copy_to_guest_phys helper that works for both PV and HVM.
> > 
> > Note that for translated guests you will have to walk the HAP page
> > tables for each vCPU for each context switch, which I think will be
> > expensive in terms of performance (I might be wrong however, since I
> > have no proof of this).
> 
> AFAICT, we already walk the page-table with the current implementation. So
> this should be no different here, except we will not need to walk the
> guest-PT here. No?

Yes, the current implementation is even worse, because in the HVM
case it walks both the guest page tables and the HAP page tables.
IMO it would be interesting if we could avoid walking either of them.

I see you have concerns about permanently mapping the runstate area,
so I'm not going to oppose. That said, even with only 1GiB of VA
space you can map plenty of runstate areas, and given that this is
32-bit hardware I doubt you will ever have so many vCPUs that you run
out of VA space for them.

That being said, if the implementation turns out to be more
complicated because of the permanent mapping, walking the guest's HAP
page tables is certainly no worse than what's done at the moment.
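For illustration, the flow sketched earlier in the thread
(get_page_from_gfn, check the type/permissions, map, write, unmap)
for a generic copy_to_guest_phys helper might look roughly like the
following. This is only a sketch against Xen-internal interfaces; the
helper name is hypothetical and the exact signatures and error
handling may differ:

    /*
     * Sketch of a generic copy_to_guest_phys helper: look up the page
     * backing the guest physical address, check it is RAM, map it,
     * copy, then unmap and drop the reference.
     */
    static int copy_to_guest_phys(struct domain *d, paddr_t gpa,
                                  const void *buf, unsigned int len)
    {
        unsigned int off = gpa & ~PAGE_MASK;
        p2m_type_t t;
        struct page_info *page;
        void *va;

        /* Assume the copy does not cross a page boundary. */
        if ( off + len > PAGE_SIZE )
            return -EINVAL;

        page = get_page_from_gfn(d, paddr_to_pfn(gpa), &t, P2M_ALLOC);
        if ( !page )
            return -EINVAL;

        if ( !p2m_is_ram(t) )
        {
            put_page(page);
            return -EINVAL;
        }

        va = __map_domain_page(page);
        memcpy(va + off, buf, len);
        unmap_domain_page(va);

        put_page(page);

        return 0;
    }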

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Thread overview: 31+ messages
2019-03-05 13:14 [PATCH 0/2 for-4.12] Introduce runstate area registration with phys address Andrii Anisov
2019-03-05 13:14 ` [PATCH 1/2 for-4.12] xen: introduce VCPUOP_register_runstate_phys_memory_area hypercall Andrii Anisov
2019-03-14  8:45   ` Jan Beulich
2019-03-05 13:14 ` [PATCH 2/2 for-4.12] xen: implement VCPUOP_register_runstate_phys_memory_area Andrii Anisov
2019-03-14  9:05   ` Jan Beulich
2019-03-05 13:20 ` [PATCH 0/2 for-4.12] Introduce runstate area registration with phys address Juergen Gross
2019-03-05 13:32   ` Andrii Anisov
2019-03-05 13:39     ` Julien Grall
2019-03-05 14:11       ` Andrii Anisov
2019-03-05 14:30         ` Julien Grall
2019-03-07 13:07           ` Andrii Anisov
2019-03-05 13:44     ` Juergen Gross
2019-03-05 13:50       ` Andrii Anisov
2019-03-05 13:56 ` Julien Grall
2019-03-07 13:01   ` Andrii Anisov
2019-03-07 14:02     ` Julien Grall
2019-03-07 14:34       ` Andrii Anisov
2019-03-07 15:17         ` Julien Grall
2019-03-07 15:20           ` Julien Grall
2019-03-07 16:16           ` Roger Pau Monné
2019-03-07 16:36             ` Julien Grall
2019-03-07 17:15               ` Roger Pau Monné [this message]
2019-03-07 18:00                 ` Julien Grall
2019-03-08  6:28                   ` Juergen Gross
2019-03-08 10:15                     ` Julien Grall
2019-03-08 10:18                       ` Juergen Gross
2019-03-08 10:31                         ` Julien Grall
2019-03-18 11:31           ` Andrii Anisov
2019-03-18 12:25             ` Julien Grall
2019-03-18 13:38               ` Andrii Anisov
2019-03-21 19:05                 ` Julien Grall
