From: "Jan Beulich" <JBeulich@suse.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: tim@xen.org, sstabellini@kernel.org, wei.liu2@citrix.com,
	George.Dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
	ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [PATCH v3 2/9] mm: Place unscrubbed pages at the end of pagelist
Date: Thu, 04 May 2017 04:17:21 -0600	[thread overview]
Message-ID: <590B1BD10200007800156B79@prv-mh.provo.novell.com> (raw)
In-Reply-To: <1492184258-3277-3-git-send-email-boris.ostrovsky@oracle.com>

>>> On 14.04.17 at 17:37, <boris.ostrovsky@oracle.com> wrote:
> @@ -678,6 +680,20 @@ static void check_low_mem_virq(void)
>      }
>  }
>  
> +/* Pages that need scrub are added to tail, otherwise to head. */
> +static void page_list_add_scrub(struct page_info *pg, unsigned int node,
> +                                unsigned int zone, unsigned int order,
> +                                bool need_scrub)
> +{
> +    PFN_ORDER(pg) = order;
> +    pg->u.free.dirty_head = need_scrub;
> +
> +    if ( need_scrub )
> +        page_list_add_tail(pg, &heap(node, zone, order));
> +    else
> +        page_list_add(pg, &heap(node, zone, order));
> +}
> +
>  /* Allocate 2^@order contiguous pages. */
>  static struct page_info *alloc_heap_pages(
>      unsigned int zone_lo, unsigned int zone_hi,
> @@ -802,7 +818,7 @@ static struct page_info *alloc_heap_pages(
>      while ( j != order )
>      {
>          PFN_ORDER(pg) = --j;
> -        page_list_add_tail(pg, &heap(node, zone, j));
> +        page_list_add(pg, &heap(node, zone, j));

Don't you need to replicate pg->u.free.dirty_head (and hence use
page_list_add_scrub()) here too?
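Something along these lines, perhaps (untested sketch; it
conservatively marks every split-off chunk dirty whenever the whole
buddy was, since we don't track which half the dirty pages live in):

    bool dirty = pg->u.free.dirty_head;

    while ( j != order )
    {
        --j;
        page_list_add_scrub(pg, node, zone, j, dirty);
        ...
    }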

> @@ -851,11 +867,14 @@ static int reserve_offlined_page(struct page_info *head)
>      int zone = page_to_zone(head), i, head_order = PFN_ORDER(head), count = 0;
>      struct page_info *cur_head;
>      int cur_order;
> +    bool need_scrub;

Please put this in the narrowest scope it's needed in.
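I.e. declare it in the block that actually uses it, e.g. (just a
sketch):

            merge:
            {
                bool need_scrub = false;
                ...
            }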

>      ASSERT(spin_is_locked(&heap_lock));
>  
>      cur_head = head;
>  
> +    head->u.free.dirty_head = false;
> +
>      page_list_del(head, &heap(node, zone, head_order));
>  
>      while ( cur_head < (head + (1 << head_order)) )
> @@ -892,8 +911,16 @@ static int reserve_offlined_page(struct page_info *head)
>              {
>              merge:
>                  /* We don't consider merging outside the head_order. */
> -                page_list_add_tail(cur_head, &heap(node, zone, cur_order));
> -                PFN_ORDER(cur_head) = cur_order;
> +
> +                /* See if any of the pages need scrubbing. */
> +                need_scrub = false;
> +                for ( i = 0; i < (1 << cur_order); i++ )
> +                    if ( test_bit(_PGC_need_scrub, &cur_head[i].count_info) )
> +                    {
> +                        need_scrub = true;
> +                        break;
> +                    }

Can't you skip this loop when the incoming chunk has
->u.free.dirty_head clear?
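I.e. latch the incoming value before it gets cleared further up and
use it as a guard; untested sketch, with "was_dirty" being just an
illustrative local name:

    bool was_dirty = head->u.free.dirty_head;

    head->u.free.dirty_head = false;
    ...
            merge:
                need_scrub = false;
                /* Only a buddy marked dirty can hold pages needing scrub. */
                if ( was_dirty )
                    for ( i = 0; i < (1 << cur_order); i++ )
                        if ( test_bit(_PGC_need_scrub, &cur_head[i].count_info) )
                        {
                            need_scrub = true;
                            break;
                        }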

> +static void scrub_free_pages(unsigned int node)
> +{
> +    struct page_info *pg;
> +    unsigned int zone, order;
> +    unsigned long i;

Here, too, I'd appreciate it if the local variables were moved into
the scopes they're actually needed in.

> +    ASSERT(spin_is_locked(&heap_lock));
> +
> +    if ( !node_need_scrub[node] )
> +        return;
> +
> +    for ( zone = 0; zone < NR_ZONES; zone++ )
> +    {
> +        order = MAX_ORDER;
> +        do {
> +            while ( !page_list_empty(&heap(node, zone, order)) )
> +            {
> +                /* Unscrubbed pages are always at the end of the list. */
> +                pg = page_list_last(&heap(node, zone, order));
> +                if ( !pg->u.free.dirty_head )
> +                    break;
> +
> +                for ( i = 0; i < (1UL << order); i++)
> +                {
> +                    if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
> +                    {
> +                        scrub_one_page(&pg[i]);
> +                        pg[i].count_info &= ~PGC_need_scrub;
> +                        node_need_scrub[node]--;
> +                    }
> +                }
> +
> +                page_list_del(pg, &heap(node, zone, order));
> +                merge_and_free_buddy(pg, node, zone, order, false);

Is there actually any merging involved here, i.e. can't you
simply put back the buddy at the list head?
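I.e. something along these lines (untested), re-using the helper
introduced earlier in this patch:

                page_list_del(pg, &heap(node, zone, order));
                /* Fully scrubbed now - back to the list head as a clean buddy. */
                page_list_add_scrub(pg, node, zone, order, false);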

> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -45,6 +45,8 @@ struct page_info
>          struct {
>              /* Do TLBs need flushing for safety before next page use? */
>              bool_t need_tlbflush;
> +            /* Set on a buddy head if the buddy has unscrubbed pages. */
> +            bool dirty_head;
>          } free;
>  
>      } u;
> @@ -115,6 +117,10 @@ struct page_info
>  #define PGC_count_width   PG_shift(9)
>  #define PGC_count_mask    ((1UL<<PGC_count_width)-1)
>  
> +/* Page needs to be scrubbed */
> +#define _PGC_need_scrub   PG_shift(10)
> +#define PGC_need_scrub    PG_mask(1, 10)

This is at least dangerous: you're borrowing a bit from
PGC_count_mask. That's presumably okay because you only ever set the
bit on free pages (and it gets zapped by alloc_heap_pages() setting
->count_info to PGC_state_inuse). But it would be more explicit that
you're borrowing a bit if you re-used one of the existing ones that
have no meaning for free pages (like PGC_allocated), doing so with a
straight #define to that other value.
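I.e. something like (sketch; this assumes the matching _PGC_allocated
bit number, which both architectures define alongside PGC_allocated):

/*
 * Scrub state only matters for free pages, while PGC_allocated only
 * matters for allocated ones, so the two can safely alias.
 */
#define _PGC_need_scrub   _PGC_allocated
#define PGC_need_scrub    PGC_allocated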

Jan

Thread overview (51+ messages):
2017-04-14 15:37 [PATCH v3 0/9] Memory scrubbing from idle loop Boris Ostrovsky
2017-04-14 15:37 ` [PATCH v3 1/9] mm: Separate free page chunk merging into its own routine Boris Ostrovsky
2017-05-04  9:45   ` Jan Beulich
2017-04-14 15:37 ` [PATCH v3 2/9] mm: Place unscrubbed pages at the end of pagelist Boris Ostrovsky
2017-05-04 10:17   ` Jan Beulich [this message]
2017-05-04 14:53     ` Boris Ostrovsky
2017-05-04 15:00       ` Jan Beulich
2017-05-08 16:41   ` George Dunlap
2017-05-08 16:59     ` Boris Ostrovsky
2017-04-14 15:37 ` [PATCH v3 3/9] mm: Scrub pages in alloc_heap_pages() if needed Boris Ostrovsky
2017-05-04 14:44   ` Jan Beulich
2017-05-04 15:04     ` Boris Ostrovsky
2017-05-04 15:36       ` Jan Beulich
2017-04-14 15:37 ` [PATCH v3 4/9] mm: Scrub memory from idle loop Boris Ostrovsky
2017-05-04 15:31   ` Jan Beulich
2017-05-04 17:09     ` Boris Ostrovsky
2017-05-05 10:21       ` Jan Beulich
2017-05-05 13:42         ` Boris Ostrovsky
2017-05-05 14:10           ` Jan Beulich
2017-05-05 14:14             ` Jan Beulich
2017-05-05 14:27               ` Boris Ostrovsky
2017-05-05 14:51                 ` Jan Beulich
2017-05-05 15:23                   ` Boris Ostrovsky
2017-05-05 16:05                     ` Jan Beulich
2017-05-05 16:49                       ` Boris Ostrovsky
2017-05-08  7:14                         ` Jan Beulich
2017-05-11 10:26   ` Dario Faggioli
2017-05-11 14:19     ` Boris Ostrovsky
2017-05-11 15:48       ` Dario Faggioli
2017-05-11 17:05         ` Boris Ostrovsky
2017-05-12  8:17           ` Dario Faggioli
2017-05-12 14:42             ` Boris Ostrovsky
2017-04-14 15:37 ` [PATCH v3 5/9] mm: Do not discard already-scrubbed pages if softirqs are pending Boris Ostrovsky
2017-05-04 15:43   ` Jan Beulich
2017-05-04 17:18     ` Boris Ostrovsky
2017-05-05 10:27       ` Jan Beulich
2017-05-05 13:51         ` Boris Ostrovsky
2017-05-05 14:13           ` Jan Beulich
2017-04-14 15:37 ` [PATCH v3 6/9] spinlock: Introduce spin_lock_cb() Boris Ostrovsky
2017-04-14 15:37 ` [PATCH v3 7/9] mm: Keep pages available for allocation while scrubbing Boris Ostrovsky
2017-05-04 16:03   ` Jan Beulich
2017-05-04 17:26     ` Boris Ostrovsky
2017-05-05 10:28       ` Jan Beulich
2017-04-14 15:37 ` [PATCH v3 8/9] mm: Print number of unscrubbed pages in 'H' debug handler Boris Ostrovsky
2017-04-14 15:37 ` [PATCH v3 9/9] mm: Make sure pages are scrubbed Boris Ostrovsky
2017-05-05 15:05   ` Jan Beulich
2017-05-08 15:48     ` Konrad Rzeszutek Wilk
2017-05-08 16:23       ` Boris Ostrovsky
2017-05-02 14:46 ` [PATCH v3 0/9] Memory scrubbing from idle loop Boris Ostrovsky
2017-05-02 14:58   ` Jan Beulich
2017-05-02 15:07     ` Boris Ostrovsky
