From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Kevin Tian" <kevin.tian@intel.com>,
	Julien Grall <julien@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] VMX: use a single, global APIC access page
Date: Thu, 11 Feb 2021 12:16:29 +0100
Message-ID: <YCUSDSYpS5X+AZco@Air-de-Roger>
In-Reply-To: <7abc515b-d652-3d39-6038-99966deafdf8@suse.com>

On Thu, Feb 11, 2021 at 11:36:59AM +0100, Jan Beulich wrote:
> On 11.02.2021 09:45, Roger Pau Monné wrote:
> > On Wed, Feb 10, 2021 at 05:48:26PM +0100, Jan Beulich wrote:
> >> I did further consider not allocating any real page at all, but just
> >> using the address of some unpopulated space (which would require
> >> announcing this page as reserved to Dom0, so it wouldn't put any PCI
> >> MMIO BARs there). But I thought this would be too controversial, because
> >> of the possible risks associated with this.
> > 
> > No, Xen is not capable of allocating a suitable unpopulated page IMO,
> > so let's not go down that route. Wasting one RAM page seems perfectly
> > fine to me.
> 
> Why would Xen not be able to, in principle? It may be difficult,
> but there may also be pages we could declare we know we can use
> for this purpose.

I was under the impression that there could always be bits in dynamic
ACPI tables reporting MMIO ranges that Xen isn't aware of, but those
should already be marked as reserved in the memory map anyway on
well-behaved systems.

> >>          return;
> >>  
> >>      ASSERT(cpu_has_vmx_virtualize_apic_accesses);
> >>  
> >>      virt_page_ma = page_to_maddr(vcpu_vlapic(v)->regs_page);
> >> -    apic_page_ma = mfn_to_maddr(v->domain->arch.hvm.vmx.apic_access_mfn);
> >> +    apic_page_ma = mfn_to_maddr(apic_access_mfn);
> >>  
> >>      vmx_vmcs_enter(v);
> >>      __vmwrite(VIRTUAL_APIC_PAGE_ADDR, virt_page_ma);
> > 
> > I would consider setting up the vmcs and adding the page to the p2m in
> > the same function, and likely call it from vlapic_init. We could have
> > a domain_setup_apic in hvm_function_table that takes care of all this.
> 
> Well, I'd prefer to do this just once per domain without needing
> to special case this on e.g. vCPU 0.

It seems more natural to me to do this setup together with the rest of
the vlapic initialization, but I'm not going to insist; I also
understand your point about calling the function only once.
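
Something along these lines is what I had in mind - just a rough
sketch with simplified stand-in types rather than the real Xen
structures, only to show the shape of the hook:

/*
 * Rough sketch only: the types and helpers below are simplified
 * stand-ins, not the real Xen definitions.  The idea is a per-domain
 * hook in hvm_function_table that inserts the APIC access page into
 * the p2m and records whatever the later VMCS programming needs,
 * invoked from vlapic_init().  The vcpu_id == 0 check is exactly the
 * special-casing you'd rather avoid.
 */
#include <stdbool.h>

struct domain {
    bool apic_access_set_up;      /* stand-in for the real per-domain state */
};

struct vcpu {
    unsigned int vcpu_id;
    struct domain *domain;
};

struct hvm_function_table {
    /* One-time per-domain APIC access page setup (p2m insertion etc.). */
    int (*domain_setup_apic)(struct domain *d);
};

static int vmx_domain_setup_apic(struct domain *d)
{
    /* Would insert the global APIC access page into the p2m here. */
    d->apic_access_set_up = true;
    return 0;
}

static struct hvm_function_table hvm_funcs = {
    .domain_setup_apic = vmx_domain_setup_apic,
};

static int vlapic_init(struct vcpu *v)
{
    /* Only the first vCPU triggers the per-domain setup. */
    if ( v->vcpu_id == 0 && hvm_funcs.domain_setup_apic )
        return hvm_funcs.domain_setup_apic(v->domain);

    return 0;
}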

> >> --- a/xen/include/asm-x86/p2m.h
> >> +++ b/xen/include/asm-x86/p2m.h
> >> @@ -935,6 +935,9 @@ static inline unsigned int p2m_get_iommu
> >>          flags = IOMMUF_readable;
> >>          if ( !rangeset_contains_singleton(mmio_ro_ranges, mfn_x(mfn)) )
> >>              flags |= IOMMUF_writable;
> >> +        /* VMX'es APIC access page is global and hence has no owner. */
> >> +        if ( mfn_valid(mfn) && !page_get_owner(mfn_to_page(mfn)) )
> >> +            flags = 0;
> > 
> > Is it fine to have this page accessible to devices if the page tables
> > are shared between the CPU and the IOMMU?
> 
> No, it's not, but what do you do? As said elsewhere, devices
> gaining more access than is helpful is the price we pay for
> being able to share page tables. But ...

I'm concerned about allowing devices to write to this shared page, as
it could be used as an unintended channel to exchange information
between domains.

> > Is it possible for devices to write to it?
> 
> ... thinking about it - they would be able to access it only
> when interrupt remapping is off. Otherwise the entire range
> 0xFEExxxxx gets treated differently altogether by the IOMMU,

Now that I think of it, the range 0xFEExxxxx must always get special
handling for device accesses, regardless of whether interrupt remapping
is enabled, or else devices wouldn't be capable of delivering MSI
messages?

So I assume that whatever gets mapped in the IOMMU page tables at
0xFEExxxxx simply gets ignored?

Or else mapping the lapic access page there when using shared page
tables would have prevented CPU#0 from receiving MSI messages.
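
To make my mental model explicit, here is a toy, self-contained
illustration of how I picture device writes to that range being
handled (constants and behaviour are simplified from my reading of the
VT-d handling, not actual Xen code):

/*
 * Toy illustration: a DMA write targeting 0xFEExxxxx is consumed as an
 * interrupt request (either delivered in compatibility format or looked
 * up in the interrupt remapping table) and never goes through the DMA
 * page tables, so a mapping of the APIC access page at that GFN would
 * simply be ignored for device-originated writes.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MSI_ADDR_BASE 0xfee00000UL
#define MSI_ADDR_SIZE 0x00100000UL   /* 0xfee00000 - 0xfeefffff */

static bool is_interrupt_range(uint64_t addr)
{
    return addr >= MSI_ADDR_BASE && addr < MSI_ADDR_BASE + MSI_ADDR_SIZE;
}

/* Classify a device write: interrupt request vs. regular DMA. */
static void classify_device_write(uint64_t addr, bool intremap_enabled)
{
    if ( is_interrupt_range(addr) )
        printf("%#lx: interrupt request (%s), DMA page tables not consulted\n",
               (unsigned long)addr,
               intremap_enabled ? "remapped via IRTE" : "compatibility format");
    else
        printf("%#lx: regular DMA write, translated via the page tables\n",
               (unsigned long)addr);
}

int main(void)
{
    classify_device_write(0xfee00000, true);   /* APIC access page GFN */
    classify_device_write(0x80000000, true);   /* ordinary RAM */
    return 0;
}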

Thanks, Roger.


Thread overview: 16+ messages
2021-02-10 16:48 [PATCH] VMX: use a single, global APIC access page Jan Beulich
2021-02-10 17:00 ` Andrew Cooper
2021-02-10 17:03   ` Jan Beulich
2021-03-01  2:08     ` Tian, Kevin
2021-02-10 17:16   ` Jan Beulich
2021-02-11  8:45 ` Roger Pau Monné
2021-02-11 10:36   ` Jan Beulich
2021-02-11 11:16     ` Roger Pau Monné [this message]
2021-02-11 11:22       ` Jan Beulich
2021-02-11 12:27         ` Roger Pau Monné
2021-03-01  2:18           ` Tian, Kevin
2021-03-01  8:15             ` Jan Beulich
2021-03-01  8:30               ` Tian, Kevin
2021-03-01  9:58                 ` Jan Beulich
2021-03-04  7:51                   ` Tian, Kevin
2021-02-11 13:53     ` Andrew Cooper
