From: "Jan Beulich" <JBeulich@suse.com>
To: Tim Deegan <tim@xen.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: open-coded page list manipulation in shadow code
Date: Fri, 30 Jan 2015 08:36:27 +0000
Message-ID: <54CB509B020000780005B264@mail.emea.novell.com>
In-Reply-To: <20150129173046.GG30353@deinos.phlegethon.org>

>>> On 29.01.15 at 18:30, <tim@xen.org> wrote:
> OK, here's what I have.  It keeps the mechanism the same, threading on
> the main page list entry, but tidies it up to use the page_list operations
> rather than open-coding.  I've tested it (lightly - basically just
> boot testing) with 32bit, 32pae and 64bit Windows VMs (all SMP), with
> bigmem=y and bigmem=n.
> 
> It was developed on top of your bigmem series, but it should apply OK
> before patch 3/4, removing the need to nobble shadow-paging in that
> patch.

Thanks, this looks quite okay; just a couple of remarks.

> @@ -1525,6 +1495,14 @@ mfn_t shadow_alloc(struct domain *d,
>      if ( shadow_type >= SH_type_min_shadow 
>           && shadow_type <= SH_type_max_shadow )
>          sp->u.sh.head = 1;
> +
> +#ifndef PAGE_LIST_NULL
> +    /* The temporary list-head is on our stack.  Blank out the
> +     * pointers to it in the shadows, just to get a clean failure if
> +     * we accidentally follow them. */
> +    tmp_list.next->prev = tmp_list.prev->next = NULL;
> +#endif

Perhaps better to use LIST_POISON{1,2} here, so we don't point into
PV VA space?
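Something along these lines, I suppose (untested sketch; LIST_POISON1 /
LIST_POISON2 being the generic poison values from xen/list.h):

#ifndef PAGE_LIST_NULL
    /* The temporary list-head is on our stack.  Poison the pointers to
     * it in the shadows, so that accidentally following them faults on
     * an address outside PV VA space. */
    tmp_list.next->prev = LIST_POISON2;
    tmp_list.prev->next = LIST_POISON1;
#endif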

> --- a/xen/arch/x86/mm/shadow/private.h
> +++ b/xen/arch/x86/mm/shadow/private.h
> @@ -318,6 +318,33 @@ static inline int mfn_oos_may_write(mfn_t gmfn)
>  }
>  #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
>  
> +/* Figure out the size (in pages) of a given shadow type */
> +static inline u32
> +shadow_size(unsigned int shadow_type) 
> +{
> +    static const u32 type_to_size[SH_type_unused] = {
> +        1, /* SH_type_none           */
> +        2, /* SH_type_l1_32_shadow   */
> +        2, /* SH_type_fl1_32_shadow  */
> +        4, /* SH_type_l2_32_shadow   */
> +        1, /* SH_type_l1_pae_shadow  */
> +        1, /* SH_type_fl1_pae_shadow */
> +        1, /* SH_type_l2_pae_shadow  */
> +        1, /* SH_type_l2h_pae_shadow */
> +        1, /* SH_type_l1_64_shadow   */
> +        1, /* SH_type_fl1_64_shadow  */
> +        1, /* SH_type_l2_64_shadow   */
> +        1, /* SH_type_l2h_64_shadow  */
> +        1, /* SH_type_l3_64_shadow   */
> +        1, /* SH_type_l4_64_shadow   */
> +        1, /* SH_type_p2m_table      */
> +        1, /* SH_type_monitor_table  */
> +        1  /* SH_type_oos_snapshot   */
> +        };
> +    ASSERT(shadow_type < SH_type_unused);
> +    return type_to_size[shadow_type];
> +}

Isn't this going to lead to multiple instances of that static array?
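One way to avoid that (only a rough sketch; sh_type_to_size is a made-up
name here) would be to keep just the lookup inline and let a single
translation unit, e.g. shadow/common.c, own the table:

/* private.h */
extern const u32 sh_type_to_size[SH_type_unused];

static inline u32
shadow_size(unsigned int shadow_type)
{
    ASSERT(shadow_type < SH_type_unused);
    return sh_type_to_size[shadow_type];
}

/* common.c */
const u32 sh_type_to_size[SH_type_unused] = {
    1, /* SH_type_none           */
    2, /* SH_type_l1_32_shadow   */
    /* ... remaining entries exactly as in the patch ... */
    1  /* SH_type_oos_snapshot   */
};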

> @@ -668,46 +697,40 @@ static inline int sh_pin(struct vcpu *v, mfn_t smfn)
>   * of pinned shadows, and release the extra ref. */
>  static inline void sh_unpin(struct vcpu *v, mfn_t smfn)
>  {
> -    struct page_list_head h, *pin_list;
> -    struct page_info *sp;
> -    
> +    struct page_list_head tmp_list, *pin_list;
> +    struct page_info *sp, *next;
> +    unsigned int i, head_type;
> +
>      ASSERT(mfn_valid(smfn));
>      sp = mfn_to_page(smfn);
> +    head_type = sp->u.sh.type;
>      ASSERT(sh_type_is_pinnable(v, sp->u.sh.type));
>      ASSERT(sp->u.sh.head);
>  
> -    /* Treat the up-to-four pages of the shadow as a unit in the list ops */
> -    h.next = h.tail = sp; 
> -    if ( sp->u.sh.type == SH_type_l2_32_shadow ) 
> -    {
> -        h.tail = pdx_to_page(h.tail->list.next);
> -        h.tail = pdx_to_page(h.tail->list.next);
> -        h.tail = pdx_to_page(h.tail->list.next);
> -        ASSERT(h.tail->u.sh.type == SH_type_l2_32_shadow); 
> -    }
> -    pin_list = &v->domain->arch.paging.shadow.pinned_shadows;
> -
>      if ( !sp->u.sh.pinned )
>          return;
> -
>      sp->u.sh.pinned = 0;
>  
> -    /* Cut the sub-list out of the list of pinned shadows */
> -    if ( pin_list->next == h.next && pin_list->tail == h.tail )
> -        pin_list->next = pin_list->tail = NULL;
> -    else 
> +    /* Cut the sub-list out of the list of pinned shadows,
> +     * stitching it back into a list fragment of its own. */
> +    pin_list = &v->domain->arch.paging.shadow.pinned_shadows;
> +    INIT_PAGE_LIST_HEAD(&tmp_list);
> +    for ( i = 0; i < shadow_size(head_type); i++ )
>      {
> -        if ( pin_list->next == h.next )
> -            pin_list->next = page_list_next(h.tail, pin_list);
> -        else
> -            page_list_prev(h.next, pin_list)->list.next = h.tail->list.next;
> -        if ( pin_list->tail == h.tail )
> -            pin_list->tail = page_list_prev(h.next, pin_list);
> -        else
> -            page_list_next(h.tail, pin_list)->list.prev = h.next->list.prev;
> +        ASSERT(sp->u.sh.type == head_type);
> +        ASSERT(!i || !sp->u.sh.head);
> +        next = page_list_next(sp, pin_list);
> +        page_list_del(sp, pin_list);
> +        page_list_add_tail(sp, &tmp_list);
> +        sp = next;
>      }
> -    h.tail->list.next = h.next->list.prev = PAGE_LIST_NULL;
> -    
> +#ifndef PAGE_LIST_NULL
> +    /* The temporary list-head is on our stack.  Blank out the
> +     * pointers to it in the shadows, just to get a clean failure if
> +     * we accidentally follow them. */
> +    tmp_list.next->prev = tmp_list.prev->next = NULL;
> +#endif

Same here. Perhaps worth putting into another inline helper?
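E.g. (again only a sketch, with an invented name), so both call sites
could share it:

static inline void
sh_terminate_list(struct page_list_head *tmp_list)
{
#ifndef PAGE_LIST_NULL
    /* The temporary list-head is on the caller's stack.  Poison the
     * pointers to it in the shadows, to get a clean failure if they
     * are ever followed. */
    tmp_list->next->prev = LIST_POISON2;
    tmp_list->prev->next = LIST_POISON1;
#endif
}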

Jan

Thread overview: 6 messages
2015-01-26 16:35 open-coded page list manipulation in shadow code Jan Beulich
2015-01-27 10:50 ` Tim Deegan
2015-01-29 17:30   ` Tim Deegan
2015-01-30  8:36     ` Jan Beulich [this message]
2015-01-30 10:20       ` Tim Deegan
2015-01-30 10:26         ` Jan Beulich
