From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
	"Stefano Stabellini" <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 02/12] x86/p2m: {,un}map_mmio_regions() are HVM-only
Date: Thu, 29 Apr 2021 16:48:27 +0200	[thread overview]
Message-ID: <YIrHO9N6YgQEOpJh@Air-de-Roger> (raw)
In-Reply-To: <7f8ca70d-8bbe-bd5d-533a-c5ea81dc91a2@suse.com>

On Mon, Apr 12, 2021 at 04:06:34PM +0200, Jan Beulich wrote:
> Mirror the "translated" check the functions do to do_domctl(), allowing
> the calls to be DCEd by the compiler. Add ASSERT_UNREACHABLE() to the
> original checks.
> 
> Also arrange for {set,clear}_mmio_p2m_entry() and
> {set,clear}_identity_p2m_entry() to respectively live next to each
> other, such that clear_mmio_p2m_entry() can also be covered by the
> #ifdef already covering set_mmio_p2m_entry().

Seeing the increase in HVM-specific regions, would it make sense to
split the HVM bits out into p2m-hvm.c or some such?

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Fix build.
> ---
> Arguably the original checks, returning success, could also be dropped
> at this point.
> 
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1352,52 +1352,6 @@ int set_mmio_p2m_entry(struct domain *d,
>                                 p2m_get_hostp2m(d)->default_access);
>  }
>  
> -#endif /* CONFIG_HVM */
> -
> -int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
> -                           p2m_access_t p2ma, unsigned int flag)
> -{
> -    p2m_type_t p2mt;
> -    p2m_access_t a;
> -    gfn_t gfn = _gfn(gfn_l);
> -    mfn_t mfn;
> -    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -    int ret;
> -
> -    if ( !paging_mode_translate(p2m->domain) )
> -    {
> -        if ( !is_iommu_enabled(d) )
> -            return 0;
> -        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
> -                                1ul << PAGE_ORDER_4K,
> -                                IOMMUF_readable | IOMMUF_writable);
> -    }
> -
> -    gfn_lock(p2m, gfn, 0);
> -
> -    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
> -
> -    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
> -        ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K,
> -                            p2m_mmio_direct, p2ma);
> -    else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma )
> -        ret = 0;
> -    else
> -    {
> -        if ( flag & XEN_DOMCTL_DEV_RDM_RELAXED )
> -            ret = 0;
> -        else
> -            ret = -EBUSY;
> -        printk(XENLOG_G_WARNING
> -               "Cannot setup identity map d%d:%lx,"
> -               " gfn already mapped to %lx.\n",
> -               d->domain_id, gfn_l, mfn_x(mfn));
> -    }
> -
> -    gfn_unlock(p2m, gfn, 0);
> -    return ret;
> -}
> -
>  /*
>   * Returns:
>   *    0        for success
> @@ -1447,6 +1401,52 @@ int clear_mmio_p2m_entry(struct domain *
>      return rc;
>  }
>  
> +#endif /* CONFIG_HVM */
> +
> +int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
> +                           p2m_access_t p2ma, unsigned int flag)
> +{
> +    p2m_type_t p2mt;
> +    p2m_access_t a;
> +    gfn_t gfn = _gfn(gfn_l);
> +    mfn_t mfn;
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    int ret;
> +
> +    if ( !paging_mode_translate(p2m->domain) )
> +    {
> +        if ( !is_iommu_enabled(d) )
> +            return 0;
> +        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
> +                                1ul << PAGE_ORDER_4K,
> +                                IOMMUF_readable | IOMMUF_writable);
> +    }
> +
> +    gfn_lock(p2m, gfn, 0);
> +
> +    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
> +
> +    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
> +        ret = p2m_set_entry(p2m, gfn, _mfn(gfn_l), PAGE_ORDER_4K,
> +                            p2m_mmio_direct, p2ma);
> +    else if ( mfn_x(mfn) == gfn_l && p2mt == p2m_mmio_direct && a == p2ma )
> +        ret = 0;
> +    else
> +    {
> +        if ( flag & XEN_DOMCTL_DEV_RDM_RELAXED )
> +            ret = 0;
> +        else
> +            ret = -EBUSY;
> +        printk(XENLOG_G_WARNING
> +               "Cannot setup identity map d%d:%lx,"
> +               " gfn already mapped to %lx.\n",
> +               d->domain_id, gfn_l, mfn_x(mfn));
> +    }
> +
> +    gfn_unlock(p2m, gfn, 0);
> +    return ret;
> +}
> +
>  int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
>  {
>      p2m_type_t p2mt;
> @@ -1892,6 +1892,8 @@ void *map_domain_gfn(struct p2m_domain *
>      return map_domain_page(*mfn);
>  }
>  
> +#ifdef CONFIG_HVM
> +
>  static unsigned int mmio_order(const struct domain *d,
>                                 unsigned long start_fn, unsigned long nr)
>  {
> @@ -1932,7 +1934,10 @@ int map_mmio_regions(struct domain *d,
>      unsigned int iter, order;
>  
>      if ( !paging_mode_translate(d) )
> +    {
> +        ASSERT_UNREACHABLE();
>          return 0;
> +    }
>  
>      for ( iter = i = 0; i < nr && iter < MAP_MMIO_MAX_ITER;
>            i += 1UL << order, ++iter )
> @@ -1964,7 +1969,10 @@ int unmap_mmio_regions(struct domain *d,
>      unsigned int iter, order;
>  
>      if ( !paging_mode_translate(d) )
> +    {
> +        ASSERT_UNREACHABLE();
>          return 0;

Maybe consider returning an error here now instead of silently
succeeding? This path isn't supposed to be reached, so getting here
likely means something else has gone wrong, and it would be best to
report an error to the caller.

The rest LGTM:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.

