* Re: [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging
       [not found] <mailman.5929.1325754331.12970.xen-devel@lists.xensource.com>
@ 2012-01-07  3:51 ` Andres Lagar-Cavilla
  2012-01-07  8:38   ` Hongkaixing
  0 siblings, 1 reply; 5+ messages in thread
From: Andres Lagar-Cavilla @ 2012-01-07  3:51 UTC (permalink / raw)
  To: xen-devel; +Cc: olaf, tim, hongkaixing

> Date: Thu, 5 Jan 2012 09:05:16 +0000
> From: Tim Deegan <tim@xen.org>
> To: hongkaixing@huawei.com
> Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xensource.com
> Subject: Re: [Xen-devel] [PATCH 2 of 2] xenpaging:change page-in
> 	process to speed up page-in in xenpaging
> Message-ID: <20120105090516.GE15595@ocelot.phlegethon.org>
> Content-Type: text/plain; charset=iso-8859-1
>
> Hello,
>
> At 03:50 +0000 on 05 Jan (1325735430), hongkaixing@huawei.com wrote:
>> xenpaging: change the page-in process to speed up page-in in xenpaging
>> In this patch, we change the page-in process. First, we add a new
>> function, paging_in_trigger_sync(), to handle page-in requests
>> directly; once the request count reaches 32, the requests are handled
>> as a batch. Most importantly, we pass an increasing gfn to test_bit(),
>> which saves much time.
>> In p2m.c, we change p2m_mem_paging_populate() to return a value.
>> The following is a xenpaging test on suse11-64 with 4G of memory.
>>
>> Pages paged out   Page-out time   Page-in time (unstable)   Page-in time (this patch)
>> 512M (131072)     2.6s            540s                      4.7s
>> 2G (524288)       15.5s           2088s                     17.7s
>>
>
> Thanks for the patch!  That's an impressive difference.  You're changing
> quite a few things in this patch, though.  Can you send them as separate
> patches so they can be reviewed one at a time?  Is one of them in
> particular making the difference?  I suspect it's mostly the change to
> test_bit(), and the rest is not necessary.

I second that, on all counts. Impressive numbers; I am a bit puzzled as to
what actually made the difference.

I would also like to see changes to xenpaging teased out from changes to
the hypervisor.

I've been sitting on a patch to page in synchronously, which shortcuts the
page-in path even more aggressively: instead of calling populate, we go
straight into paging_load. This does not require an additional domctl, and
would save even more hypervisor<->pager control round-trips.
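
Roughly, the pager's page-in path would then collapse to something like the
sketch below. To be clear, this is illustrative only: read_page() is
xenpaging's existing pagefile helper, I'm assuming for simplicity that the
pagefile is indexed by gfn, and xc_mem_paging_load() is a placeholder name
for the libxc wrapper my patch would add.

static int page_in_sync(xenpaging_t *paging, int fd, unsigned long gfn)
{
    unsigned char page[PAGE_SIZE];

    /* Fetch the evicted page's contents from the pagefile. */
    if ( read_page(fd, page, gfn) < 0 )
        return -1;

    /* Hand the contents straight to the hypervisor, which allocates and
     * fills a fresh frame in one step: no populate request, no event-ring
     * round-trip, no separate resume. */
    return xc_mem_paging_load(paging->xc_handle,
                              paging->mem_event.domain_id, gfn, page);
}
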
Do you foresee any conflicts with your current approach?

Thanks!
Andres

>
> Cheers,
>
> Tim.
>
>> [...]
>


* Re: [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging
  2012-01-07  3:51 ` [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging Andres Lagar-Cavilla
@ 2012-01-07  8:38   ` Hongkaixing
  2012-01-10  3:33     ` Andres Lagar-Cavilla
  0 siblings, 1 reply; 5+ messages in thread
From: Hongkaixing @ 2012-01-07  8:38 UTC (permalink / raw)
  To: andres, xen-devel
  Cc: xiaowei.yang, olaf, hanweidong, yanqiangjun, tim, bicky.shi


> -----Original Message-----
> From: Andres Lagar-Cavilla [mailto:andres@lagarcavilla.org]
> Sent: Saturday, January 07, 2012 11:52 AM
> To: xen-devel@lists.xensource.com
> Cc: tim@xen.org; olaf@aepfle.de; hongkaixing@huawei.com
> Subject: Re: [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging
> 
> > Date: Thu, 5 Jan 2012 09:05:16 +0000
> > From: Tim Deegan <tim@xen.org>
> > To: hongkaixing@huawei.com
> > Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xensource.com
> > Subject: Re: [Xen-devel] [PATCH 2 of 2] xenpaging:change page-in
> > 	process to speed up page-in in xenpaging
> > Message-ID: <20120105090516.GE15595@ocelot.phlegethon.org>
> > Content-Type: text/plain; charset=iso-8859-1
> >
> > Hello,
> >
> > At 03:50 +0000 on 05 Jan (1325735430), hongkaixing@huawei.com wrote:
> >> xenpaging: change the page-in process to speed up page-in in xenpaging
> >> In this patch, we change the page-in process. First, we add a new
> >> function, paging_in_trigger_sync(), to handle page-in requests
> >> directly; once the request count reaches 32, the requests are handled
> >> as a batch. Most importantly, we pass an increasing gfn to test_bit(),
> >> which saves much time.
> >> In p2m.c, we change p2m_mem_paging_populate() to return a value.
> >> The following is a xenpaging test on suse11-64 with 4G of memory.
> >>
> >> Pages paged out   Page-out time   Page-in time (unstable)   Page-in time (this patch)
> >> 512M (131072)     2.6s            540s                      4.7s
> >> 2G (524288)       15.5s           2088s                     17.7s
> >>
> >
> > Thanks for the patch!  That's an impressive difference.  You're changing
> > quite a few things in this patch, though.  Can you send them as separate
> > patches so they can be reviewed one at a time?  Is one of them in
> > particular making the difference?  I suspect it's mostly the change to
> > test_bit(), and the rest is not necessary.
> 
> I second that, on all counts. Impressive numbers; I am a bit puzzled as to
> what actually made the difference.

Take the 512M case as an example: before this patch, the page-in process
took 540s; with this patch, only 4.7s. The biggest saving comes from
resuming the test_bit() scan at the last position (page_in_start_gfn)
instead of restarting it from gfn 0 for every batch of requests.

> 
> I would also like to see changes to xenpaging teased out from changes to
> the hypervisor.
> 
> I've been sitting on a patch to page in synchronously, which shortcuts the
> page-in path even more aggressively: instead of calling populate, we go
> straight into paging_load. This does not require an additional domctl, and
> would save even more hypervisor<->pager control round-trips.

Would you mind sharing the details?
I see. I have tried this too. My method was: when a page tests as paged
out, populate it directly without going through the event ring.
The problem is: there may be some concurrency problems, and some strange
blue screens.


> Do you foresee any conflicts with your current approach?

Our approach has been stable so far.
> 
> Thanks!
> Andres
> 
> >
> > Cheers,
> >
> > Tim.
> >
> >> [...]


* Re: [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging
  2012-01-07  8:38   ` Hongkaixing
@ 2012-01-10  3:33     ` Andres Lagar-Cavilla
  0 siblings, 0 replies; 5+ messages in thread
From: Andres Lagar-Cavilla @ 2012-01-10  3:33 UTC (permalink / raw)
  To: Hongkaixing
  Cc: xiaowei.yang, olaf, xen-devel, hanweidong, yanqiangjun, tim,
	bicky.shi, andres

>
>> -----Original Message-----
>> From: Andres Lagar-Cavilla [mailto:andres@lagarcavilla.org]
>> Sent: Saturday, January 07, 2012 11:52 AM
>> To: xen-devel@lists.xensource.com
>> Cc: tim@xen.org; olaf@aepfle.de; hongkaixing@huawei.com
>> Subject: Re: [PATCH 2 of 2] xenpaging:change page-in process to speed up
>> page-in in xenpaging
>>
>> > Date: Thu, 5 Jan 2012 09:05:16 +0000
>> > From: Tim Deegan <tim@xen.org>
>> > To: hongkaixing@huawei.com
>> > Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xensource.com
>> > Subject: Re: [Xen-devel] [PATCH 2 of 2] xenpaging:change page-in
>> > 	process to speed up page-in in xenpaging
>> > Message-ID: <20120105090516.GE15595@ocelot.phlegethon.org>
>> > Content-Type: text/plain; charset=iso-8859-1
>> >
>> > Hello,
>> >
>> > At 03:50 +0000 on 05 Jan (1325735430), hongkaixing@huawei.com wrote:
>> >> xenpaging: change the page-in process to speed up page-in in xenpaging
>> >> In this patch, we change the page-in process. First, we add a new
>> >> function, paging_in_trigger_sync(), to handle page-in requests
>> >> directly; once the request count reaches 32, the requests are handled
>> >> as a batch. Most importantly, we pass an increasing gfn to test_bit(),
>> >> which saves much time.
>> >> In p2m.c, we change p2m_mem_paging_populate() to return a value.
>> >> The following is a xenpaging test on suse11-64 with 4G of memory.
>> >>
>> >> Pages paged out   Page-out time   Page-in time (unstable)   Page-in time (this patch)
>> >> 512M (131072)     2.6s            540s                      4.7s
>> >> 2G (524288)       15.5s           2088s                     17.7s
>> >>
>> >
>> > Thanks for the patch!  That's an impressive difference.  You're changing
>> > quite a few things in this patch, though.  Can you send them as separate
>> > patches so they can be reviewed one at a time?  Is one of them in
>> > particular making the difference?  I suspect it's mostly the change to
>> > test_bit(), and the rest is not necessary.
>>
>> I second that, on all counts. Impressive numbers; I am a bit puzzled as to
>> what actually made the difference.
>
> Take the 512M case as an example: before this patch, the page-in process
> took 540s; with this patch, only 4.7s.
>
>>
>> I would also like to see changes to xenpaging teased out from changes to
>> the hypervisor.
>>
>> I've been sitting on a patch to page in synchronously, which shortcuts the
>> page-in path even more aggressively: instead of calling populate, we go
>> straight into paging_load. This does not require an additional domctl, and
>> would save even more hypervisor<->pager control round-trips.
>
> Would you mind sharing the details?
I just sent the patch; look for "[PATCH 1 of 2] x86/mm: Allow a page in
p2m_ram_paged_out state to be loaded".

This should allow xenpaging to populate pages directly in its page-in
thread, without needing a populate domctl (or a mmap).
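
With that in place, the page-in loop could handle each set bit inline,
along these lines (again just a sketch, reusing the hypothetical
xc_mem_paging_load() wrapper and the gfn-indexed-pagefile assumption from
my earlier mail; gfn, fd and page are as in that sketch):

if ( test_bit(gfn, paging->bitmap) )
{
    /* Pagefile slot assumed == gfn; error handling elided. */
    read_page(fd, page, gfn);
    if ( xc_mem_paging_load(paging->xc_handle,
                            paging->mem_event.domain_id, gfn, page) == 0 )
    {
        clear_bit(gfn, paging->bitmap);
        policy_notify_paged_in(gfn);
    }
}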

Andres
> I see. I have tried this too. My method was: when a page tests as paged
> out, populate it directly without going through the event ring.
> The problem is: there may be some concurrency problems, and some strange
> blue screens.
>
>
>> Do you foresee any conflicts with your current approach?
>
> Our approach has been stable so far.
>>
>> Thanks!
>> Andres
>>
>> >
>> > Cheers,
>> >
>> > Tim.
>> >
>> >> [...]


* Re: [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging
  2012-01-05  3:50 ` [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging hongkaixing
@ 2012-01-05  9:05   ` Tim Deegan
  0 siblings, 0 replies; 5+ messages in thread
From: Tim Deegan @ 2012-01-05  9:05 UTC (permalink / raw)
  To: hongkaixing; +Cc: Olaf Hering, xen-devel

Hello, 

At 03:50 +0000 on 05 Jan (1325735430), hongkaixing@huawei.com wrote:
> xenpaging: change the page-in process to speed up page-in in xenpaging
> In this patch, we change the page-in process. First, we add a new function,
> paging_in_trigger_sync(), to handle page-in requests directly; once the
> request count reaches 32, the requests are handled as a batch. Most
> importantly, we pass an increasing gfn to test_bit(), which saves much time.
> In p2m.c, we change p2m_mem_paging_populate() to return a value.
> The following is a xenpaging test on suse11-64 with 4G of memory.
> 
> Pages paged out   Page-out time   Page-in time (unstable)   Page-in time (this patch)
> 512M (131072)     2.6s            540s                      4.7s
> 2G (524288)       15.5s           2088s                     17.7s
> 

Thanks for the patch!  That's an impressive difference.  You're changing
quite a few things in this patch, though.  Can you send them as separate
patches so they can be reviewed one at a time?  Is one of them in
particular making the difference?  I suspect it's mostly the change to
test_bit(), and the rest is not necessary. 

Cheers,

Tim.

> Signed-off-by: hongkaixing<hongkaixing@huawei.com>,shizhen<bicky.shi@huawei.com>
> 
> diff -r 052727b8165c -r 978daceef147 tools/libxc/xc_mem_paging.c
> --- a/tools/libxc/xc_mem_paging.c	Thu Dec 29 17:08:24 2011 +0800
> +++ b/tools/libxc/xc_mem_paging.c	Thu Dec 29 19:41:39 2011 +0800
> @@ -73,6 +73,13 @@
>                                  NULL, NULL, gfn);
>  }
>  
> +int xc_mem_paging_in(xc_interface *xch, domid_t domain_id, unsigned long gfn)
> +{
> +    return xc_mem_event_control(xch, domain_id,
> +                                XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN,
> +                                XEN_DOMCTL_MEM_EVENT_OP_PAGING, 
> +                                NULL, NULL, gfn);
> +}
>  
>  /*
>   * Local variables:
> diff -r 052727b8165c -r 978daceef147 tools/libxc/xenctrl.h
> --- a/tools/libxc/xenctrl.h	Thu Dec 29 17:08:24 2011 +0800
> +++ b/tools/libxc/xenctrl.h	Thu Dec 29 19:41:39 2011 +0800
> @@ -1841,6 +1841,7 @@
>  int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn);
>  int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id,
>                           unsigned long gfn);
> +int xc_mem_paging_in(xc_interface *xch, domid_t domain_id,unsigned long gfn); 
>  
>  int xc_mem_access_enable(xc_interface *xch, domid_t domain_id,
>                          void *shared_page, void *ring_page);
> diff -r 052727b8165c -r 978daceef147 tools/xenpaging/xenpaging.c
> --- a/tools/xenpaging/xenpaging.c	Thu Dec 29 17:08:24 2011 +0800
> +++ b/tools/xenpaging/xenpaging.c	Thu Dec 29 19:41:39 2011 +0800
> @@ -594,6 +594,13 @@
>      return ret;
>  }
>  
> +static int paging_in_trigger_sync(xenpaging_t *paging,unsigned long gfn)
> +{
> +    int rc = 0;
> +    rc = xc_mem_paging_in(paging->xc_handle, paging->mem_event.domain_id,gfn);
> +    return rc;
> +}

This function is 

> +
>  int main(int argc, char *argv[])
>  {
>      struct sigaction act;
> @@ -605,6 +612,9 @@
>      int i;
>      int rc = -1;
>      int rc1;
> +    int request_count = 0;
> +    unsigned long page_in_start_gfn = 0;
> +    unsigned long real_page = 0;
>      xc_interface *xch;
>  
>      int open_flags = O_CREAT | O_TRUNC | O_RDWR;
> @@ -773,24 +783,51 @@
>          /* Write all pages back into the guest */
>          if ( interrupted == SIGTERM || interrupted == SIGINT )
>          {
> -            int num = 0;
> +            request_count = 0;
>              for ( i = 0; i < paging->domain_info->max_pages; i++ )
>              {
> -                if ( test_bit(i, paging->bitmap) )
> +                real_page = i + page_in_start_gfn;
> +                real_page %= paging->domain_info->max_pages;
> +                if ( test_bit(real_page, paging->bitmap) )
>                  {
> -                    paging->pagein_queue[num] = i;
> -                    num++;
> -                    if ( num == XENPAGING_PAGEIN_QUEUE_SIZE )
> -                        break;
> +                    rc = paging_in_trigger_sync(paging,real_page);
> +                    if ( 0 == rc )
> +                    {
> +                        request_count++;
> +                        /* If page_in requests up to 32 then handle them */
> +                        if( request_count >= 32 )
> +                        {
> +                            page_in_start_gfn=real_page + 1;
> +                            break;
> +                        }
> +                    }
> +                    else
> +                    {
> +                        /* If IO ring is full then handle requests to free space */
> +                        if( ENOSPC == errno )
> +                        {
> +                            page_in_start_gfn = real_page;
> +                            break;
> +                        }
> +                        /* If p2mt is not p2m_is_paging,then clear bitmap;
> +                        * e.g. a page is paged then it is dropped by balloon.
> +                        */
> +                        else if ( EINVAL == errno )
> +                        {
> +                            clear_bit(i,paging->bitmap);
> +                            policy_notify_paged_in(i);
> +                        }
> +                        /* If hypercall fails then go to teardown xenpaging */
> +                        else 
> +                        {
> +                            ERROR("Error paging in page");
> +                            goto out;
> +                        }
> +                    }
>                  }
>              }
> -            /*
> -             * One more round if there are still pages to process.
> -             * If no more pages to process, exit loop.
> -             */
> -            if ( num )
> -                page_in_trigger();
> -            else if ( i == paging->domain_info->max_pages )
> +            if( (i==paging->domain_info->max_pages) && 
> +                !RING_HAS_UNCONSUMED_REQUESTS(&paging->mem_event.back_ring) )
>                  break;
>          }
>          else
> diff -r 052727b8165c -r 978daceef147 xen/arch/x86/mm/mem_paging.c
> --- a/xen/arch/x86/mm/mem_paging.c	Thu Dec 29 17:08:24 2011 +0800
> +++ b/xen/arch/x86/mm/mem_paging.c	Thu Dec 29 19:41:39 2011 +0800
> @@ -57,7 +57,14 @@
>          return 0;
>      }
>      break;
> -
> +    
> +    case XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN:
> +    {
> +        unsigned long gfn = mec->gfn;
> +        return p2m_mem_paging_populate(d, gfn);
> +    }
> +    break;
> +    
>      default:
>          return -ENOSYS;
>          break;
> diff -r 052727b8165c -r 978daceef147 xen/arch/x86/mm/p2m.c
> --- a/xen/arch/x86/mm/p2m.c	Thu Dec 29 17:08:24 2011 +0800
> +++ b/xen/arch/x86/mm/p2m.c	Thu Dec 29 19:41:39 2011 +0800
> @@ -874,7 +874,7 @@
>   * already sent to the pager. In this case the caller has to try again until the
>   * gfn is fully paged in again.
>   */
> -void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
> +int p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
>  {
>      struct vcpu *v = current;
>      mem_event_request_t req;
> @@ -882,10 +882,12 @@
>      p2m_access_t a;
>      mfn_t mfn;
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +    int ret;
>  
>      /* Check that there's space on the ring for this request */
> +    ret = -ENOSPC;
>      if ( mem_event_check_ring(d, &d->mem_paging) )
> -        return;
> +        goto out;
>  
>      memset(&req, 0, sizeof(req));
>      req.type = MEM_EVENT_TYPE_PAGING;
> @@ -905,19 +907,27 @@
>      }
>      p2m_unlock(p2m);
>  
> +    ret = -EINVAL;
>      /* Pause domain if request came from guest and gfn has paging type */
> -    if (  p2m_is_paging(p2mt) && v->domain->domain_id == d->domain_id )
> +    if ( p2m_is_paging(p2mt) && v->domain->domain_id == d->domain_id )
>      {
>          vcpu_pause_nosync(v);
>          req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
>      }
>      /* No need to inform pager if the gfn is not in the page-out path */
> -    else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
> +    else if ( p2mt == p2m_ram_paging_in_start || p2mt == p2m_ram_paging_in )
>      {
>          /* gfn is already on its way back and vcpu is not paused */
>          mem_event_put_req_producers(&d->mem_paging);
> -        return;
> +        return 0;
>      }
> +    else if ( !p2m_is_paging(p2mt) )
> +    {
> +        /* please clear the bit in paging->bitmap; */
> +        mem_event_put_req_producers(&d->mem_paging);
> +        goto out;
> +    }
> +
>  
>      /* Send request to pager */
>      req.gfn = gfn;
> @@ -925,8 +935,13 @@
>      req.vcpu_id = v->vcpu_id;
>  
>      mem_event_put_request(d, &d->mem_paging, &req);
> +    
> +    ret = 0;
> + out:
> +    return ret;   
>  }
>  
> +
>  /**
>   * p2m_mem_paging_prep - Allocate a new page for the guest
>   * @d: guest domain
> diff -r 052727b8165c -r 978daceef147 xen/include/asm-x86/p2m.h
> --- a/xen/include/asm-x86/p2m.h	Thu Dec 29 17:08:24 2011 +0800
> +++ b/xen/include/asm-x86/p2m.h	Thu Dec 29 19:41:39 2011 +0800
> @@ -485,7 +485,7 @@
>  /* Tell xenpaging to drop a paged out frame */
>  void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn);
>  /* Start populating a paged out frame */
> -void p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
> +int p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
>  /* Prepare the p2m for paging a frame in */
>  int p2m_mem_paging_prep(struct domain *d, unsigned long gfn);
>  /* Resume normal operation (in case a domain was paused) */
> diff -r 052727b8165c -r 978daceef147 xen/include/public/domctl.h
> --- a/xen/include/public/domctl.h	Thu Dec 29 17:08:24 2011 +0800
> +++ b/xen/include/public/domctl.h	Thu Dec 29 19:41:39 2011 +0800
> @@ -721,6 +721,7 @@
>  #define XEN_DOMCTL_MEM_EVENT_OP_PAGING_EVICT      3
>  #define XEN_DOMCTL_MEM_EVENT_OP_PAGING_PREP       4
>  #define XEN_DOMCTL_MEM_EVENT_OP_PAGING_RESUME     5
> +#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN         6
>  
>  /*
>   * Access permissions.
> 



* [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging
  2012-01-05  3:50 [PATCH 0 of 2] xenpaging:speed up page-in hongkaixing
@ 2012-01-05  3:50 ` hongkaixing
  2012-01-05  9:05   ` Tim Deegan
  0 siblings, 1 reply; 5+ messages in thread
From: hongkaixing @ 2012-01-05  3:50 UTC (permalink / raw)
  To: Olaf Hering; +Cc: xen-devel


# HG changeset patch
# User hongkaixing<hongkaixing@huawei.com>
# Date 1325158899 -28800
# Node ID 978daceef147273920f298556489b60dc32ce458
# Parent  052727b8165ce6e05002184ae894096214c8b537
xenpaging: change the page-in process to speed up page-in in xenpaging
In this patch, we change the page-in process. First, we add a new function,
paging_in_trigger_sync(), to handle page-in requests directly; once the
request count reaches 32, the requests are handled as a batch. Most
importantly, we pass an increasing gfn to test_bit(), which saves much time.
In p2m.c, we change p2m_mem_paging_populate() to return a value.
The following is a xenpaging test on suse11-64 with 4G of memory.

Pages paged out   Page-out time   Page-in time (unstable)   Page-in time (this patch)
512M (131072)     2.6s            540s                      4.7s
2G (524288)       15.5s           2088s                     17.7s

Signed-off-by: hongkaixing<hongkaixing@huawei.com>,shizhen<bicky.shi@huawei.com>

diff -r 052727b8165c -r 978daceef147 tools/libxc/xc_mem_paging.c
--- a/tools/libxc/xc_mem_paging.c	Thu Dec 29 17:08:24 2011 +0800
+++ b/tools/libxc/xc_mem_paging.c	Thu Dec 29 19:41:39 2011 +0800
@@ -73,6 +73,13 @@
                                 NULL, NULL, gfn);
 }
 
+int xc_mem_paging_in(xc_interface *xch, domid_t domain_id, unsigned long gfn)
+{
+    return xc_mem_event_control(xch, domain_id,
+                                XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN,
+                                XEN_DOMCTL_MEM_EVENT_OP_PAGING, 
+                                NULL, NULL, gfn);
+}
 
 /*
  * Local variables:
diff -r 052727b8165c -r 978daceef147 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h	Thu Dec 29 17:08:24 2011 +0800
+++ b/tools/libxc/xenctrl.h	Thu Dec 29 19:41:39 2011 +0800
@@ -1841,6 +1841,7 @@
 int xc_mem_paging_prep(xc_interface *xch, domid_t domain_id, unsigned long gfn);
 int xc_mem_paging_resume(xc_interface *xch, domid_t domain_id,
                          unsigned long gfn);
+int xc_mem_paging_in(xc_interface *xch, domid_t domain_id,unsigned long gfn); 
 
 int xc_mem_access_enable(xc_interface *xch, domid_t domain_id,
                         void *shared_page, void *ring_page);
diff -r 052727b8165c -r 978daceef147 tools/xenpaging/xenpaging.c
--- a/tools/xenpaging/xenpaging.c	Thu Dec 29 17:08:24 2011 +0800
+++ b/tools/xenpaging/xenpaging.c	Thu Dec 29 19:41:39 2011 +0800
@@ -594,6 +594,13 @@
     return ret;
 }
 
+static int paging_in_trigger_sync(xenpaging_t *paging,unsigned long gfn)
+{
+    int rc = 0;
+    rc = xc_mem_paging_in(paging->xc_handle, paging->mem_event.domain_id,gfn);
+    return rc;
+}
+
 int main(int argc, char *argv[])
 {
     struct sigaction act;
@@ -605,6 +612,9 @@
     int i;
     int rc = -1;
     int rc1;
+    int request_count = 0;
+    unsigned long page_in_start_gfn = 0;
+    unsigned long real_page = 0;
     xc_interface *xch;
 
     int open_flags = O_CREAT | O_TRUNC | O_RDWR;
@@ -773,24 +783,51 @@
         /* Write all pages back into the guest */
         if ( interrupted == SIGTERM || interrupted == SIGINT )
         {
-            int num = 0;
+            request_count = 0;
             for ( i = 0; i < paging->domain_info->max_pages; i++ )
             {
-                if ( test_bit(i, paging->bitmap) )
+                real_page = i + page_in_start_gfn;
+                real_page %= paging->domain_info->max_pages;
+                if ( test_bit(real_page, paging->bitmap) )
                 {
-                    paging->pagein_queue[num] = i;
-                    num++;
-                    if ( num == XENPAGING_PAGEIN_QUEUE_SIZE )
-                        break;
+                    rc = paging_in_trigger_sync(paging,real_page);
+                    if ( 0 == rc )
+                    {
+                        request_count++;
+                        /* If page_in requests up to 32 then handle them */
+                        if( request_count >= 32 )
+                        {
+                            page_in_start_gfn=real_page + 1;
+                            break;
+                        }
+                    }
+                    else
+                    {
+                        /* If IO ring is full then handle requests to free space */
+                        if( ENOSPC == errno )
+                        {
+                            page_in_start_gfn = real_page;
+                            break;
+                        }
+                        /* If p2mt is not p2m_is_paging,then clear bitmap;
+                        * e.g. a page is paged then it is dropped by balloon.
+                        */
+                        else if ( EINVAL == errno )
+                        {
+                            clear_bit(i,paging->bitmap);
+                            policy_notify_paged_in(i);
+                        }
+                        /* If hypercall fails then go to teardown xenpaging */
+                        else 
+                        {
+                            ERROR("Error paging in page");
+                            goto out;
+                        }
+                    }
                 }
             }
-            /*
-             * One more round if there are still pages to process.
-             * If no more pages to process, exit loop.
-             */
-            if ( num )
-                page_in_trigger();
-            else if ( i == paging->domain_info->max_pages )
+            if( (i==paging->domain_info->max_pages) && 
+                !RING_HAS_UNCONSUMED_REQUESTS(&paging->mem_event.back_ring) )
                 break;
         }
         else
diff -r 052727b8165c -r 978daceef147 xen/arch/x86/mm/mem_paging.c
--- a/xen/arch/x86/mm/mem_paging.c	Thu Dec 29 17:08:24 2011 +0800
+++ b/xen/arch/x86/mm/mem_paging.c	Thu Dec 29 19:41:39 2011 +0800
@@ -57,7 +57,14 @@
         return 0;
     }
     break;
-
+    
+    case XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN:
+    {
+        unsigned long gfn = mec->gfn;
+        return p2m_mem_paging_populate(d, gfn);
+    }
+    break;
+    
     default:
         return -ENOSYS;
         break;
diff -r 052727b8165c -r 978daceef147 xen/arch/x86/mm/p2m.c
--- a/xen/arch/x86/mm/p2m.c	Thu Dec 29 17:08:24 2011 +0800
+++ b/xen/arch/x86/mm/p2m.c	Thu Dec 29 19:41:39 2011 +0800
@@ -874,7 +874,7 @@
  * already sent to the pager. In this case the caller has to try again until the
  * gfn is fully paged in again.
  */
-void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
+int p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
 {
     struct vcpu *v = current;
     mem_event_request_t req;
@@ -882,10 +882,12 @@
     p2m_access_t a;
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int ret;
 
     /* Check that there's space on the ring for this request */
+    ret = -ENOSPC;
     if ( mem_event_check_ring(d, &d->mem_paging) )
-        return;
+        goto out;
 
     memset(&req, 0, sizeof(req));
     req.type = MEM_EVENT_TYPE_PAGING;
@@ -905,19 +907,27 @@
     }
     p2m_unlock(p2m);
 
+    ret = -EINVAL;
     /* Pause domain if request came from guest and gfn has paging type */
-    if (  p2m_is_paging(p2mt) && v->domain->domain_id == d->domain_id )
+    if ( p2m_is_paging(p2mt) && v->domain->domain_id == d->domain_id )
     {
         vcpu_pause_nosync(v);
         req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
     }
     /* No need to inform pager if the gfn is not in the page-out path */
-    else if ( p2mt != p2m_ram_paging_out && p2mt != p2m_ram_paged )
+    else if ( p2mt == p2m_ram_paging_in_start || p2mt == p2m_ram_paging_in )
     {
         /* gfn is already on its way back and vcpu is not paused */
         mem_event_put_req_producers(&d->mem_paging);
-        return;
+        return 0;
     }
+    else if ( !p2m_is_paging(p2mt) )
+    {
+        /* please clear the bit in paging->bitmap; */
+        mem_event_put_req_producers(&d->mem_paging);
+        goto out;
+    }
+
 
     /* Send request to pager */
     req.gfn = gfn;
@@ -925,8 +935,13 @@
     req.vcpu_id = v->vcpu_id;
 
     mem_event_put_request(d, &d->mem_paging, &req);
+    
+    ret = 0;
+ out:
+    return ret;   
 }
 
+
 /**
  * p2m_mem_paging_prep - Allocate a new page for the guest
  * @d: guest domain
diff -r 052727b8165c -r 978daceef147 xen/include/asm-x86/p2m.h
--- a/xen/include/asm-x86/p2m.h	Thu Dec 29 17:08:24 2011 +0800
+++ b/xen/include/asm-x86/p2m.h	Thu Dec 29 19:41:39 2011 +0800
@@ -485,7 +485,7 @@
 /* Tell xenpaging to drop a paged out frame */
 void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn);
 /* Start populating a paged out frame */
-void p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
+int p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
 /* Prepare the p2m for paging a frame in */
 int p2m_mem_paging_prep(struct domain *d, unsigned long gfn);
 /* Resume normal operation (in case a domain was paused) */
diff -r 052727b8165c -r 978daceef147 xen/include/public/domctl.h
--- a/xen/include/public/domctl.h	Thu Dec 29 17:08:24 2011 +0800
+++ b/xen/include/public/domctl.h	Thu Dec 29 19:41:39 2011 +0800
@@ -721,6 +721,7 @@
 #define XEN_DOMCTL_MEM_EVENT_OP_PAGING_EVICT      3
 #define XEN_DOMCTL_MEM_EVENT_OP_PAGING_PREP       4
 #define XEN_DOMCTL_MEM_EVENT_OP_PAGING_RESUME     5
+#define XEN_DOMCTL_MEM_EVENT_OP_PAGING_IN         6
 
 /*
  * Access permissions.




end of thread, other threads:[~2012-01-10  3:33 UTC | newest]

Thread overview: 5+ messages
     [not found] <mailman.5929.1325754331.12970.xen-devel@lists.xensource.com>
2012-01-07  3:51 ` [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging Andres Lagar-Cavilla
2012-01-07  8:38   ` Hongkaixing
2012-01-10  3:33     ` Andres Lagar-Cavilla
2012-01-05  3:50 [PATCH 0 of 2] xenpaging:speed up page-in hongkaixing
2012-01-05  3:50 ` [PATCH 2 of 2] xenpaging:change page-in process to speed up page-in in xenpaging hongkaixing
2012-01-05  9:05   ` Tim Deegan
