From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Yu Zhang' <yu.c.zhang@linux.intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"zhiyuan.lv@intel.com" <zhiyuan.lv@intel.com>,
	Jan Beulich <jbeulich@suse.com>,
	George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v9 5/5] x86/ioreq server: Synchronously reset outstanding p2m_ioreq_server entries when an ioreq server unmaps.
Date: Tue, 21 Mar 2017 10:00:29 +0000
Message-ID: <5d845ae8dbee4b2f898e42d23ec84f93@AMSPEX02CL03.citrite.net>
In-Reply-To: <1490064773-26751-6-git-send-email-yu.c.zhang@linux.intel.com>

> -----Original Message-----
> From: Yu Zhang [mailto:yu.c.zhang@linux.intel.com]
> Sent: 21 March 2017 02:53
> To: xen-devel@lists.xen.org
> Cc: zhiyuan.lv@intel.com; Paul Durrant <Paul.Durrant@citrix.com>; Jan
> Beulich <jbeulich@suse.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>
> Subject: [PATCH v9 5/5] x86/ioreq server: Synchronously reset outstanding
> p2m_ioreq_server entries when an ioreq server unmaps.
> 
> After an ioreq server has unmapped, the remaining p2m_ioreq_server
> entries need to be reset back to p2m_ram_rw. This patch does this
> synchronously by iterating the p2m table.
> 
> The synchronous resetting is necessary because we need to guarantee
> the p2m table is clean before another ioreq server is mapped. Since
> sweeping the p2m table could be time consuming, it is done with
> hypercall continuation.
> 
> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> ---
> Cc: Paul Durrant <paul.durrant@citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> 
> changes in v2:
>   - According to comments from Jan and Andrew: do not use the
>     HVMOP type hypercall continuation method. Instead, add an
>     opaque field in xen_dm_op_map_mem_type_to_ioreq_server to
>     store the gfn.
>   - According to comments from Jan: change the routine's comments
>     and the parameter names of p2m_finish_type_change().
> 
> changes in v1:
>   - This patch is split from patch 4 of the last version.
>   - According to comments from Jan: update the gfn_start when
>     using hypercall continuation to reset the p2m type.
>   - According to comments from Jan: use min() to compare gfn_end
>     and the max mapped pfn in p2m_finish_type_change().
> ---
>  xen/arch/x86/hvm/dm.c     | 41 ++++++++++++++++++++++++++++++++++++++---
>  xen/arch/x86/mm/p2m.c     | 29 +++++++++++++++++++++++++++++
>  xen/include/asm-x86/p2m.h |  7 +++++++
>  3 files changed, 74 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> index 3f9484d..a24d0f8 100644
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -385,16 +385,51 @@ static int dm_op(domid_t domid,
> 
>      case XEN_DMOP_map_mem_type_to_ioreq_server:
>      {
> -        const struct xen_dm_op_map_mem_type_to_ioreq_server *data =
> +        struct xen_dm_op_map_mem_type_to_ioreq_server *data =
>              &op.u.map_mem_type_to_ioreq_server;
> +        unsigned long first_gfn = data->opaque;
> +        unsigned long last_gfn;
> +
> +        const_op = false;
> 
>          rc = -EOPNOTSUPP;
>          /* Only support for HAP enabled hvm. */
>          if ( !hap_enabled(d) )
>              break;
> 
> -        rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
> -                                              data->type, data->flags);
> +        if ( first_gfn == 0 )
> +            rc = hvm_map_mem_type_to_ioreq_server(d, data->id,
> +                                                  data->type, data->flags);
> +        /*
> +         * Iterate p2m table when an ioreq server unmaps from
> +         * p2m_ioreq_server, and reset the remaining p2m_ioreq_server
> +         * entries back to p2m_ram_rw.
> +         */
> +        if ( (first_gfn > 0) || (data->flags == 0 && rc == 0) )
> +        {
> +            struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +            while ( read_atomic(&p2m->ioreq.entry_count) &&
> +                    first_gfn <= p2m->max_mapped_pfn )
> +            {
> +                /* Iterate p2m table for 256 gfns each time. */
> +                last_gfn = first_gfn + 0xff;
> +

It might be worth a comment here to say that p2m_finish_type_change() limits last_gfn appropriately, because it looks wrong to be blindly calling it with first_gfn + 0xff. Or perhaps, rather than passing last_gfn, pass a 'max_nr' parameter of 256 instead; then you can drop last_gfn altogether. If you prefer the parameters as they are, then at least limit the scope of last_gfn to this while loop.
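E.g. something along these lines at the call site (untested sketch, just to illustrate the 'max_nr' idea):

        while ( read_atomic(&p2m->ioreq.entry_count) &&
                first_gfn <= p2m->max_mapped_pfn )
        {
            /*
             * Process at most 256 gfns per pass; the callee clamps to
             * max_mapped_pfn internally, so no last_gfn is needed here.
             */
            p2m_finish_type_change(d, first_gfn, 256,
                                   p2m_ioreq_server, p2m_ram_rw);

            first_gfn += 256;

            /* Check for continuation if it's not the last iteration. */
            if ( first_gfn <= p2m->max_mapped_pfn &&
                 hypercall_preempt_check() )
            {
                rc = -ERESTART;
                data->opaque = first_gfn;
                break;
            }
        }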

> +                p2m_finish_type_change(d, first_gfn, last_gfn,
> +                                       p2m_ioreq_server, p2m_ram_rw);
> +
> +                first_gfn = last_gfn + 1;
> +
> +                /* Check for continuation if it's not the last iteration. */
> +                if ( first_gfn <= p2m->max_mapped_pfn &&
> +                     hypercall_preempt_check() )
> +                {
> +                    rc = -ERESTART;
> +                    data->opaque = first_gfn;
> +                    break;
> +                }
> +            }
> +        }
> +
>          break;
>      }
> 
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index e3e54f1..0a2f276 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1038,6 +1038,35 @@ void p2m_change_type_range(struct domain *d,
>      p2m_unlock(p2m);
>  }
> 
> +/* Synchronously modify the p2m type for a range of gfns from ot to nt. */
> +void p2m_finish_type_change(struct domain *d,

As I said above, consider a 'max_nr' parameter here rather than last_gfn.
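For reference, the sort of thing I have in mind (untested sketch; the 'max_nr' name is just a suggestion, the body otherwise mirrors what you have):

    void p2m_finish_type_change(struct domain *d,
                                unsigned long first_gfn, unsigned long max_nr,
                                p2m_type_t ot, p2m_type_t nt)
    {
        struct p2m_domain *p2m = p2m_get_hostp2m(d);
        /* Clamp to both the caller's budget and the top of the p2m. */
        unsigned long last_gfn = min(first_gfn + max_nr - 1,
                                     p2m->max_mapped_pfn);
        unsigned long gfn = first_gfn;
        p2m_type_t t;

        ASSERT(ot != nt);
        ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));

        p2m_lock(p2m);

        while ( gfn <= last_gfn )
        {
            get_gfn_query_unlocked(d, gfn, &t);

            if ( t == ot )
                p2m_change_type_one(d, gfn, t, nt);

            gfn++;
        }

        p2m_unlock(p2m);
    }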

  Paul

> +                            unsigned long first_gfn, unsigned long last_gfn,
> +                            p2m_type_t ot, p2m_type_t nt)
> +{
> +    struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    p2m_type_t t;
> +    unsigned long gfn = first_gfn;
> +
> +    ASSERT(first_gfn <= last_gfn);
> +    ASSERT(ot != nt);
> +    ASSERT(p2m_is_changeable(ot) && p2m_is_changeable(nt));
> +
> +    p2m_lock(p2m);
> +
> +    last_gfn = min(last_gfn, p2m->max_mapped_pfn);
> +    while ( gfn <= last_gfn )
> +    {
> +        get_gfn_query_unlocked(d, gfn, &t);
> +
> +        if ( t == ot )
> +            p2m_change_type_one(d, gfn, t, nt);
> +
> +        gfn++;
> +    }
> +
> +    p2m_unlock(p2m);
> +}
> +
>  /*
>   * Returns:
>   *    0              for success
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 395f125..3d665e8 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -611,6 +611,13 @@ void p2m_change_type_range(struct domain *d,
>  int p2m_change_type_one(struct domain *d, unsigned long gfn,
>                          p2m_type_t ot, p2m_type_t nt);
> 
> +/* Synchronously change the p2m type for a range of gfns:
> + * [first_gfn ... last_gfn]. */
> +void p2m_finish_type_change(struct domain *d,
> +                            unsigned long first_gfn,
> +                            unsigned long last_gfn,
> +                            p2m_type_t ot, p2m_type_t nt);
> +
>  /* Report a change affecting memory types. */
>  void p2m_memory_type_changed(struct domain *d);
> 
> --
> 1.9.1

