* [PATCH] mem_sharing: fix race condition of nominate and unshare
@ 2011-01-06 16:11 Jui-Hao Chiang
  2011-01-06 16:54 ` Tim Deegan
  2011-01-07  3:14 ` tinnycloud
  0 siblings, 2 replies; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-06 16:11 UTC (permalink / raw)
  To: Tim Deegan; +Cc: tinnycloud, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 638 bytes --]

Hi, this patch does the following
(1) When updating/checking p2m type for mem_sharing, we must hold shr_lock
(2) For nominate operation, if the page is already nominated, return the
handle from page_info->shr_handle
(3) For unshare operation, it is possible that multiple users unshare a page
via hvm_hap_nested_page_fault() at the same time. If the page is already
un-shared by someone else, simply return success.

NOTE: we assume that nobody holds page_alloc_lock/p2m_lock before calling
nominate/share/unshare.

Signed-off-by: Jui-Hao Chiang <juihaochiang@gmail.com>
Signed-off-by: Han-Lin Li <Han-Lin.Li@itri.org.tw>

Bests,
Jui-Hao

[-- Attachment #1.2: Type: text/html, Size: 1575 bytes --]

[-- Attachment #2: mem_sharing_p2mt_race.patch --]
[-- Type: text/x-diff, Size: 2913 bytes --]

diff -r 7b4c82f07281 xen/arch/x86/mm/mem_sharing.c
--- a/xen/arch/x86/mm/mem_sharing.c	Wed Jan 05 23:54:15 2011 +0000
+++ b/xen/arch/x86/mm/mem_sharing.c	Thu Jan 06 23:46:28 2011 +0800
@@ -502,6 +502,7 @@ int mem_sharing_nominate_page(struct p2m
 
     *phandle = 0UL;
 
+    shr_lock(); 
     mfn = gfn_to_mfn(p2m, gfn, &p2mt);
 
     /* Check if mfn is valid */
@@ -509,29 +510,33 @@ int mem_sharing_nominate_page(struct p2m
     if (!mfn_valid(mfn))
         goto out;
 
+    /* Return the handle if the page is already shared */
+    page = mfn_to_page(mfn);
+    if (p2m_is_shared(p2mt)) {
+        *phandle = page->shr_handle;
+        ret = 0;
+        goto out;
+    }
+
     /* Check p2m type */
     if (!p2m_is_sharable(p2mt))
         goto out;
 
     /* Try to convert the mfn to the sharable type */
-    page = mfn_to_page(mfn);
     ret = page_make_sharable(d, page, expected_refcnt); 
     if(ret) 
         goto out;
 
     /* Create the handle */
     ret = -ENOMEM;
-    shr_lock(); 
     handle = next_handle++;  
     if((hash_entry = mem_sharing_hash_insert(handle, mfn)) == NULL)
     {
-        shr_unlock();
         goto out;
     }
     if((gfn_info = mem_sharing_gfn_alloc()) == NULL)
     {
         mem_sharing_hash_destroy(hash_entry);
-        shr_unlock();
         goto out;
     }
 
@@ -545,7 +550,6 @@ int mem_sharing_nominate_page(struct p2m
         BUG_ON(page_make_private(d, page) != 0);
         mem_sharing_hash_destroy(hash_entry);
         mem_sharing_gfn_destroy(gfn_info, 0);
-        shr_unlock();
         goto out;
     }
 
@@ -559,11 +563,11 @@ int mem_sharing_nominate_page(struct p2m
     gfn_info->domain = d->domain_id;
     page->shr_handle = handle;
     *phandle = handle;
-    shr_unlock();
 
     ret = 0;
 
 out:
+    shr_unlock();
     return ret;
 }
 
@@ -633,14 +637,21 @@ int mem_sharing_unshare_page(struct p2m_
     struct list_head *le;
     struct domain *d = p2m->domain;
 
+    mem_sharing_audit();
+    /* Remove the gfn_info from the list */
+    shr_lock();
+    
     mfn = gfn_to_mfn(p2m, gfn, &p2mt);
+    
+    /* Has someone already unshared it? */
+    if (!p2m_is_shared(p2mt)) {
+        shr_unlock();
+        return 0;
+    }
 
     page = mfn_to_page(mfn);
     handle = page->shr_handle;
  
-    mem_sharing_audit();
-    /* Remove the gfn_info from the list */
-    shr_lock();
     hash_entry = mem_sharing_hash_lookup(handle); 
     list_for_each(le, &hash_entry->gfns)
     {
@@ -707,7 +718,6 @@ private_page_found:
         mem_sharing_hash_delete(handle);
     else
         atomic_dec(&nr_saved_mfns);
-    shr_unlock();
 
     if(p2m_change_type(p2m, gfn, p2m_ram_shared, p2m_ram_rw) != 
                                                 p2m_ram_shared) 
@@ -718,6 +728,7 @@ private_page_found:
     /* Update m2p entry */
     set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), gfn);
 
+    shr_unlock();
     return 0;
 }
 


* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-06 16:11 [PATCH] mem_sharing: fix race condition of nominate and unshare Jui-Hao Chiang
@ 2011-01-06 16:54 ` Tim Deegan
  2011-01-07  3:54   ` Jui-Hao Chiang
  2011-01-07  3:14 ` tinnycloud
  1 sibling, 1 reply; 50+ messages in thread
From: Tim Deegan @ 2011-01-06 16:54 UTC (permalink / raw)
  To: Jui-Hao Chiang; +Cc: tinnycloud, xen-devel

At 16:11 +0000 on 06 Jan (1294330319), Jui-Hao Chiang wrote:
> Hi, this patch does the following
> (1) When updating/checking p2m type for mem_sharing, we must hold shr_lock
> (2) For nominate operation, if the page is already nominated, return the handle from page_info->shr_handle
> (3) For unshare operation, it is possible that multiple users unshare a page via hvm_hap_nested_page_fault() at the same time. If the page is already un-shared by someone else, simply return success.

I'm going to apply this, since it looks like an improvement, but I'm not
convinced it properly solves the problem.

> NOTE: we assume that nobody holds page_alloc_lock/p2m_lock before calling nominate/share/unshare.

As I told you earlier, that's not the case.  p2m_teardown() can call
mem_sharing_unshare_page() with the p2m lock held.

Cheers,

Tim.

> Signed-off-by: Jui-Hao Chiang <juihaochiang@gmail.com>
> Signed-off-by: Han-Lin Li <Han-Lin.Li@itri.org.tw>
> 
> Bests,
> Jui-Hao

> diff -r 7b4c82f07281 xen/arch/x86/mm/mem_sharing.c
> --- a/xen/arch/x86/mm/mem_sharing.c	Wed Jan 05 23:54:15 2011 +0000
> +++ b/xen/arch/x86/mm/mem_sharing.c	Thu Jan 06 23:46:28 2011 +0800
> @@ -502,6 +502,7 @@ int mem_sharing_nominate_page(struct p2m
>  
>      *phandle = 0UL;
>  
> +    shr_lock(); 
>      mfn = gfn_to_mfn(p2m, gfn, &p2mt);
>  
>      /* Check if mfn is valid */
> @@ -509,29 +510,33 @@ int mem_sharing_nominate_page(struct p2m
>      if (!mfn_valid(mfn))
>          goto out;
>  
> +    /* Return the handle if the page is already shared */
> +    page = mfn_to_page(mfn);
> +    if (p2m_is_shared(p2mt)) {
> +        *phandle = page->shr_handle;
> +        ret = 0;
> +        goto out;
> +    }
> +
>      /* Check p2m type */
>      if (!p2m_is_sharable(p2mt))
>          goto out;
>  
>      /* Try to convert the mfn to the sharable type */
> -    page = mfn_to_page(mfn);
>      ret = page_make_sharable(d, page, expected_refcnt); 
>      if(ret) 
>          goto out;
>  
>      /* Create the handle */
>      ret = -ENOMEM;
> -    shr_lock(); 
>      handle = next_handle++;  
>      if((hash_entry = mem_sharing_hash_insert(handle, mfn)) == NULL)
>      {
> -        shr_unlock();
>          goto out;
>      }
>      if((gfn_info = mem_sharing_gfn_alloc()) == NULL)
>      {
>          mem_sharing_hash_destroy(hash_entry);
> -        shr_unlock();
>          goto out;
>      }
>  
> @@ -545,7 +550,6 @@ int mem_sharing_nominate_page(struct p2m
>          BUG_ON(page_make_private(d, page) != 0);
>          mem_sharing_hash_destroy(hash_entry);
>          mem_sharing_gfn_destroy(gfn_info, 0);
> -        shr_unlock();
>          goto out;
>      }
>  
> @@ -559,11 +563,11 @@ int mem_sharing_nominate_page(struct p2m
>      gfn_info->domain = d->domain_id;
>      page->shr_handle = handle;
>      *phandle = handle;
> -    shr_unlock();
>  
>      ret = 0;
>  
>  out:
> +    shr_unlock();
>      return ret;
>  }
>  
> @@ -633,14 +637,21 @@ int mem_sharing_unshare_page(struct p2m_
>      struct list_head *le;
>      struct domain *d = p2m->domain;
>  
> +    mem_sharing_audit();
> +    /* Remove the gfn_info from the list */
> +    shr_lock();
> +    
>      mfn = gfn_to_mfn(p2m, gfn, &p2mt);
> +    
> +    /* Has someone already unshared it? */
> +    if (!p2m_is_shared(p2mt)) {
> +        shr_unlock();
> +        return 0;
> +    }
>  
>      page = mfn_to_page(mfn);
>      handle = page->shr_handle;
>   
> -    mem_sharing_audit();
> -    /* Remove the gfn_info from the list */
> -    shr_lock();
>      hash_entry = mem_sharing_hash_lookup(handle); 
>      list_for_each(le, &hash_entry->gfns)
>      {
> @@ -707,7 +718,6 @@ private_page_found:
>          mem_sharing_hash_delete(handle);
>      else
>          atomic_dec(&nr_saved_mfns);
> -    shr_unlock();
>  
>      if(p2m_change_type(p2m, gfn, p2m_ram_shared, p2m_ram_rw) != 
>                                                  p2m_ram_shared) 
> @@ -718,6 +728,7 @@ private_page_found:
>      /* Update m2p entry */
>      set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), gfn);
>  
> +    shr_unlock();
>      return 0;
>  }
>  


-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)


* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-06 16:11 [PATCH] mem_sharing: fix race condition of nominate and unshare Jui-Hao Chiang
  2011-01-06 16:54 ` Tim Deegan
@ 2011-01-07  3:14 ` tinnycloud
  2011-01-07  6:45   ` Jui-Hao Chiang
  1 sibling, 1 reply; 50+ messages in thread
From: tinnycloud @ 2011-01-07  3:14 UTC (permalink / raw)
  To: 'Jui-Hao Chiang', 'Tim Deegan'; +Cc: xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 3996 bytes --]

Hi Tim and Hao:

 

         The patch failed to fix the bug.

 

         I applied the patch and added some more log info.

 

in mem_sharing_unshare_page:

664     shr_lock();
665     mfn = gfn_to_mfn(d, gfn, &p2mt);
666     /* Has someone already unshared it? */
667     if (!p2m_is_shared(p2mt)) {
668         printk("===someone unshare mfn %lx\n", mfn);
669         shr_unlock();
670         return 0;
671     }

 

In mem_sharing_nominate_page:

508     shr_lock();
509     mfn = gfn_to_mfn(d, gfn, &p2mt);
510
511     /* Check if mfn is valid */
512     ret = -EINVAL;
513     if (!mfn_valid(mfn))
514         goto out;
515
516     if (p2m_is_shared(p2mt)) {
517         page = mfn_to_page(mfn);
518         printk("===page h %lu, mfx %lx is already shared\n", page->shr_handle, mfn);
519     }
520     /* Check p2m type */
521     if (!p2m_is_sharable(p2mt))
522         goto out;

 

 

Also in mem_sharing_share_pages, I print some free info

 

584     shr_lock();
585
586     ret = XEN_DOMCTL_MEM_SHARING_S_HANDLE_INVALID;
587     se = mem_sharing_hash_lookup(sh);
588     if(se == NULL) goto err_out;
589     ret = XEN_DOMCTL_MEM_SHARING_C_HANDLE_INVALID;
590     ce = mem_sharing_hash_lookup(ch);
591     if(ce == NULL) goto err_out;
592     spage = mfn_to_page(se->mfn);
593     cpage = mfn_to_page(ce->mfn);
594     printk("===will free cpage_mfn %lx spage_mfn %lx \n", ce->mfn, se->mfn);

634     ASSERT(list_empty(&ce->gfns));
635     mem_sharing_hash_delete(ch);
636     atomic_inc(&nr_saved_mfns);
637     /* Free the client page */
638     if(test_and_clear_bit(_PGC_allocated, &cpage->count_info)){
639         put_page(cpage);
640         printk("===free cpage_mfn %lx spage_mfn %lx \n", ce->mfn, se->mfn);
641     }
642     ret = 0;
643
644 err_out:
645     shr_unlock();
646
647     return ret;

 

 

Below is the serial output.

We can see that neither line 668 nor line 518 is printed.

 

blktap_sysfs_create: adding attributes for dev ffff88012148ee00
(XEN) ===will free cpage_mfn 1406fd spage_mfn 2df6a6
(XEN) ===free cpage_mfn 0 spage_mfn 2df6a6
(XEN) ===will free cpage_mfn 14083c spage_mfn 2df6a8
(XEN) ===free cpage_mfn 0 spage_mfn 2df6a8
(XEN) ===will free cpage_mfn 1409e5 spage_mfn 2df6a9
(XEN) ===free cpage_mfn 0 spage_mfn 2df6a9
(XEN) ===will free cpage_mfn 142eb4 spage_mfn 141c4b
(XEN) ===free cpage_mfn 0 spage_mfn 141c4b
(XEN) printk: 32 messages suppressed.
(XEN) mm.c:859:d0 Error getting mfn 2df6a8 (pfn fffffffffffffffe) from L1 entry 80000002df6a8627 for l1e_owner=0, pg_owner=2
(XEN) mm.c:859:d0 Error getting mfn 2df6a9 (pfn fffffffffffffffe) from L1 entry 80000002df6a9627 for l1e_owner=0, pg_owner=2
(XEN) ===will free cpage_mfn 219a1c spage_mfn 1421b1
(XEN) ===free cpage_mfn 0 spage_mfn 1421b1
(XEN) ===will free cpage_mfn 2dea05 spage_mfn 142124
(XEN) ===free cpage_mfn 0 spage_mfn 142124
(XEN) ===will free cpage_mfn 147127 spage_mfn 146cbb
(XEN) ===free cpage_mfn 0 spage_mfn 146cbb
(XEN) ===will free cpage_mfn 146ecf spage_mfn 14127f
(XEN) ===free cpage_mfn 0 spage_mfn 14127f

 

 

From: Jui-Hao Chiang [mailto:juihaochiang@gmail.com] 
Date:  2011.1.7. 0:12
TO: Tim Deegan
CC: tinnycloud; xen-devel@lists.xensource.com
Sub: [PATCH] mem_sharing: fix race condition of nominate and unshare

 

Hi, this patch does the following
(1) When updating/checking p2m type for mem_sharing, we must hold shr_lock
(2) For nominate operation, if the page is already nominated, return the
handle from page_info->shr_handle
(3) For unshare operation, it is possible that multiple users unshare a page
via hvm_hap_nested_page_fault() at the same time. If the page is already
un-shared by someone else, simply return success.

NOTE: we assume that nobody holds page_alloc_lock/p2m_lock before calling
nominate/share/unshare.

Signed-off-by: Jui-Hao Chiang <juihaochiang@gmail.com>
Signed-off-by: Han-Lin Li <Han-Lin.Li@itri.org.tw>

Bests,
Jui-Hao


[-- Attachment #1.2: Type: text/html, Size: 23221 bytes --]


* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-06 16:54 ` Tim Deegan
@ 2011-01-07  3:54   ` Jui-Hao Chiang
  2011-01-07  6:02     ` Jui-Hao Chiang
  0 siblings, 1 reply; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-07  3:54 UTC (permalink / raw)
  To: Tim Deegan; +Cc: tinnycloud, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 2214 bytes --]

Hi, Tim

On Fri, Jan 7, 2011 at 12:54 AM, Tim Deegan <Tim.Deegan@citrix.com> wrote:

> At 16:11 +0000 on 06 Jan (1294330319), Jui-Hao Chiang wrote:
> > Hi, this patch does the following
> > (1) When updating/checking p2m type for mem_sharing, we must hold
> shr_lock
> > (2) For nominate operation, if the page is already nominated, return the
> handle from page_info->shr_handle
> > (3) For unshare operation, it is possible that multiple users unshare a
> page via hvm_hap_nested_page_fault() at the same time. If the page is
> already un-shared by someone else, simply return success.
>
> I'm going to apply this, since it looks like an improvement, but I'm not
> convinced it properly solves the problem.
>
>
It seems tinnycloud's case is when dom0 tries to RW-map a shared page, which
should unshare it properly and change the type count.
But there is still a bug hidden in page_make_sharable(), which fails to
recover the type count when the call fails.
Now I am tracing it again, and have found something different from what we
discussed before.
I will track it down and submit a patch again.



>  > NOTE: we assume that nobody holds page_alloc_lock/p2m_lock before
> calling nominate/share/unshare.
>
> As I told you earlier, that's not the case.  p2m_teardown() can call
> mem_sharing_unshare_page() with the p2m lock held.
>
>
Oops, I forgot again.
After this change, unshare() has a potential deadlock between shr_lock and
p2m_lock, because they can be taken in different orders.
Assume two CPUs do the following:
CPU1: hvm_hap_nested_page_fault() => unshare() => p2m_change_type() (locking
order: shr_lock, p2m_lock)
CPU2: p2m_teardown() => unshare() (locking order: p2m_lock, shr_lock)
When CPU1 grabs shr_lock and CPU2 grabs p2m_lock, they deadlock later.

So it seems better to fix the following rules:
(1) Fix the locking order: p2m_lock ---> shr_lock
(2) Any function in mem_sharing that needs to modify/check a p2m entry must
hold p2m_lock and then shr_lock. Later on, when changing p2m entries, it must
not call any p2m function which locks the p2m again.

So for the p2m code, it seems better to provide some functions which don't
take p2m_lock again.
What do you think? If that's ok, I will do it this way.
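As a rough sketch of what (1) and (2) would mean inside unshare() (the
__p2m_change_type() name below is hypothetical, just to illustrate a variant
that assumes the p2m lock is already held by the caller):

p2m_lock(p2m);     /* rule (1): take the p2m lock first */
shr_lock();        /* ... and then the sharing lock */

mfn = gfn_to_mfn(p2m, gfn, &p2mt);   /* type checked with both locks held */

/* rule (2): update the entry without re-taking the p2m lock */
__p2m_change_type(p2m, gfn, p2m_ram_shared, p2m_ram_rw);

shr_unlock();
p2m_unlock(p2m);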

Bests,
Jui-Hao

[-- Attachment #1.2: Type: text/html, Size: 2855 bytes --]


* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-07  3:54   ` Jui-Hao Chiang
@ 2011-01-07  6:02     ` Jui-Hao Chiang
  2011-01-07 16:09       ` Tim Deegan
  2011-01-10  6:48       ` tinnycloud
  0 siblings, 2 replies; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-07  6:02 UTC (permalink / raw)
  To: Tim Deegan; +Cc: tinnycloud, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 1793 bytes --]

> Oops, I forgot again.
> After this change, unshare() has a potential problem of deadlock for
> shr_lock and p2m_lock with different locking order.
> Assume two CPUs do the following
> CPU1: hvm_hap_nested_page_fault() => unshare() => p2m_change_type()
> (locking order: shr_lock, p2m_lock)
> CPU2: p2m_teardown() => unshare() (locking order: p2m_lock, shr_lock)
> When CPU1 grabs shr_lock and CPU2 grabs p2m_lock, they deadlock later.
>
> So it seems better to fix the following rules
> (1) Fix locking order: p2m_lock ---> shr_lock
> (2) Any function in mem_sharing, if modifying/checking p2m entry is
> necessary, it must hold p2m_lock and then shr_lock. Later on, when changing
> p2m entries, don't call any p2m function which locks p2m again
>
> So for p2m functions, it seems better to provide some functions which don't
> call p2m_lock again.
> What do you think? If that's ok, I will do it in this way.
>
>
Hmm, after looking at it more deeply, I summarize as follows:
(1) It seems all the users of shr_lock (nominate/share/unshare) will
check/modify the p2m type.
- nominate: p2m_change_type()
- share: set_shared_p2m_entry()
- unshare: set_shared_p2m_entry() and p2m_change_type()
(2) The functions which call unshare():
- hvm_hap_nested_page_fault(): I don't see any p2m_lock held
- p2m_tear_down(): holds p2m_lock
- gfn_to_mfn_unshare(): I don't see any p2m_lock held

One possible solution is to:
(a) Simply replace shr_lock with p2m_lock.
(b) In unshare(), do the following: if (!p2m_locked_by_me(p2m)), call
p2m_lock; otherwise, don't lock it.
(c) p2m_change_type() and set_shared_p2m_entry() are pretty similar; we can
merge the functionality into one function which does NOT take p2m_lock, and
keep the original p2m_change_type() unchanged.

Correct me if I'm wrong.
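For (b), the locking around the existing unshare() body would look something
like this (just a sketch, not a tested patch):

int we_locked = 0;

if ( !p2m_locked_by_me(p2m) )
{
    p2m_lock(p2m);
    we_locked = 1;
}
shr_lock();

/* ... existing unshare logic, unchanged ... */

shr_unlock();
if ( we_locked )
    p2m_unlock(p2m);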

Bests,
Jui-Hao

[-- Attachment #1.2: Type: text/html, Size: 2149 bytes --]


* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-07  3:14 ` tinnycloud
@ 2011-01-07  6:45   ` Jui-Hao Chiang
  2011-01-07  7:35     ` tinnycloud
  0 siblings, 1 reply; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-07  6:45 UTC (permalink / raw)
  To: tinnycloud; +Cc: xen-devel, Tim Deegan


[-- Attachment #1.1: Type: text/plain, Size: 663 bytes --]

Hi, tinnycloud:

(XEN) mm.c:859:d0 Error getting mfn 2df6a8 (pfn fffffffffffffffe) from L1
> entry 80000002df6a8627 for l1e_owner=0, pg_owner=2
>
> (XEN) mm.c:859:d0 Error getting mfn 2df6a9 (pfn fffffffffffffffe) from L1
> entry 80000002df6a9627 for l1e_owner=0, pg_owner=2
>

Could you use dump_execution_state() in mm.c:859?
And in the unshare() function, could you move the printk outside the
(!p2m_is_shared(p2mt)) check?
If you put it inside, we never know whether the unshare() is being done or not
(please also print out the mfn, p2mt, gfn, domain_id).

Just out of curiosity, are you running a stubdom? Your HVM guest id = 2 is
pretty weird.

Thanks,
Jui-Hao

[-- Attachment #1.2: Type: text/html, Size: 1243 bytes --]


* re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-07  6:45   ` Jui-Hao Chiang
@ 2011-01-07  7:35     ` tinnycloud
  0 siblings, 0 replies; 50+ messages in thread
From: tinnycloud @ 2011-01-07  7:35 UTC (permalink / raw)
  To: 'Jui-Hao Chiang'; +Cc: xen-devel, 'Tim Deegan'


[-- Attachment #1.1: Type: text/plain, Size: 4904 bytes --]

Hi Jui-Hao:

 

         I have no stub-dom for HVM. The domain ID starts from 1 and grows
each time a new domain is created.

         Below is for you, thanks. 

 

---------code-----

 

664     shr_lock();
665     mfn = gfn_to_mfn(d, gfn, &p2mt);
666     /* Has someone already unshared it? */
667     printk("===will unshare mfn %lx p2mt %x gfn %lu did %d\n", mfn, p2mt, gfn, d->domain_id);
668     if (!p2m_is_shared(p2mt)) {
669         printk("===someone unshare mfn %lx p2mt %x gfn %lu did %d\n", mfn, p2mt, gfn, d->domain_id);
670         shr_unlock();
671         return 0;
672     }

 

 

------output -------

 

(XEN) ===will unshare mfn 1728ae p2mt d gfn 512686 did 1
(XEN) ===will unshare mfn 1728ef p2mt d gfn 512751 did 1
(XEN) ===will unshare mfn 1729aa p2mt d gfn 512938 did 1
(XEN) ===will unshare mfn 1728f6 p2mt d gfn 512758 did 1
(XEN) ===will unshare mfn 2de94a p2mt d gfn 39754 did 1
(XEN) ===will unshare mfn 2de94b p2mt d gfn 39755 did 1
(XEN) ===will unshare mfn 2de94c p2mt d gfn 39756 did 1
(XEN) printk: 32 messages suppressed.
(XEN) mm.c:859:d0 Error getting mfn 2de94c (pfn fffffffffffffffe) from L1 entry 80000002de94c627 for l1e_owner=0, pg_owner=1
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c48015d1d1>] get_page_from_l1e+0x351/0x4d0
(XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor
(XEN) rax: 007fffffffffffff   rbx: 0000000000000001   rcx: 0000000000000092
(XEN) rdx: 8000000000000002   rsi: 8000000000000003   rdi: ffff82f605bd2980
(XEN) rbp: 00000000002de94c   rsp: ffff82c48035fcd8   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 00000000fffffffb   r11: 0000000000000002
(XEN) r12: 0000000000000000   r13: fffffffffffffffe   r14: ffff82f605bd2980
(XEN) r15: 80000002de94c627   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 000000031b870000   cr2: 000000000098efa0
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c48035fcd8:
(XEN)    80000002de94c627 ffff830200000000 ffff82c400000001 ffff82c4801df3d9
(XEN)    ffff83033e944930 ffff8300bf554000 ffff83023ff40000 000000000000014c
(XEN)    ffffffffffffffff 0000000000800627 ffff8302dd6a0000 0000000000000001
(XEN)    ffff83031ccb26b8 000000000031ccb2 80000002de94c627 ffff82c48016288b
(XEN)    000082c480168170 0000000000000000 0000000000000004 ffff8300bf554000
(XEN)    0000000000000009 000000000031ccb2 ffff83023fe60000 80000002de94c627
(XEN)    000000000031ccb2 ffff82c48035fedc 0000000000000000 000000010000014c
(XEN)    ffff83031babc000 0000000000000000 0000000000000000 0000000000000001
(XEN)    ffff83031ccb26b8 000000000031ccb2 8000000009b4c627 ffff82c480163f36
(XEN)    0000000000000001 ffff82c480161a49 ffff82c48035fe88 ffff82c48035fe88
(XEN)    00007ff0fffffffe 0000000000000000 0000000100000000 ffff8300bf554000
(XEN)    00000001bf554000 0000000000000000 000000010000c178 ffff82f606399640
(XEN)    0000000000000006 ffff83023fe60000 ffff8302dd6a0000 ffff8300bf554000
(XEN)    0000000000000000 0000000000000000 0000000000000000 000000008035ff28
(XEN)    ffff8801208d5c18 0000000080251008 000000031ccb26b8 8000000009b4c627
(XEN)    ffff82c480251008 ffff82c480251000 0000000000000000 ffff82c480113d7e
(XEN)    0000000d00000000 0000000000000001 00000001ffffffff ffff8300bf554000
(XEN)    ffff8801208d5d68 0000000000000001 ffff880121dbd0a8 00007f4de18d7000
(XEN)    0000000000000001 ffff82c4801e3169 0000000000000001 00007f4de18d7000
(XEN)    ffff880121dbd0a8 0000000000000001 ffff8801208d5d68 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c48015d1d1>] get_page_from_l1e+0x351/0x4d0
(XEN)    [<ffff82c4801df3d9>] ept_get_entry+0xa9/0x1c0
(XEN)    [<ffff82c48016288b>] mod_l1_entry+0x37b/0x9a0
(XEN)    [<ffff82c480163f36>] do_mmu_update+0x9f6/0x1a70
(XEN)    [<ffff82c480161a49>] do_mmuext_op+0x859/0x1320
(XEN)    [<ffff82c480113d7e>] do_multicall+0x14e/0x340
(XEN)    [<ffff82c4801e3169>] syscall_enter+0xa9/0xae

 

From: Jui-Hao Chiang [mailto:juihaochiang@gmail.com] 
Date: 2011.1.1 14:45
To: tinnycloud
CC: Tim Deegan; xen-devel@lists.xensource.com
Sub: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare

 

Hi, tinnycloud:

(XEN) mm.c:859:d0 Error getting mfn 2df6a8 (pfn fffffffffffffffe) from L1
entry 80000002df6a8627 for l1e_owner=0, pg_owner=2

(XEN) mm.c:859:d0 Error getting mfn 2df6a9 (pfn fffffffffffffffe) from L1
entry 80000002df6a9627 for l1e_owner=0, pg_owner=2


Could you use dump_execution_state() in mm.c:859?
And in the unshare() function, could you move the printk outside the
(!p2m_is_shared(p2mt)) checking?
If you put inside it, we never know if the unshare() is being done or not
(please also print out the mfn, p2mt, gfn, domain_id).

Just out of curiosity, are you running stubdom? your HVM guest id =2 is
pretty weird.

Thanks,
Jui-Hao


[-- Attachment #1.2: Type: text/html, Size: 21059 bytes --]


* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-07  6:02     ` Jui-Hao Chiang
@ 2011-01-07 16:09       ` Tim Deegan
  2011-01-10  4:57         ` Jui-Hao Chiang
  2011-01-10  6:48       ` tinnycloud
  1 sibling, 1 reply; 50+ messages in thread
From: Tim Deegan @ 2011-01-07 16:09 UTC (permalink / raw)
  To: Jui-Hao Chiang; +Cc: tinnycloud, xen-devel

At 06:02 +0000 on 07 Jan (1294380120), Jui-Hao Chiang wrote:
> One of the solution is to
> (a) Simply replace shr_lock with p2m_lock.

I think this is the best choice.   If we find that the p2m lock is a
bottleneck we can address it later. 

Cheers,

Tim

> (b) In unshare(), apply the following: if (!p2m_locked_by_me(p2m)) call p2m_lock, otherwise, don't lock it.
> (c) p2m_change_type() and set_shared_p2m_entry() are pretty similar, we can merge the functionality into one function, which does NOT take p2m_lock, and keep the original p2m_change_type() unchanged.
> 
> Correct me if wrong.
> 
> Bests,
> Jui-Hao

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)


* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-07 16:09       ` Tim Deegan
@ 2011-01-10  4:57         ` Jui-Hao Chiang
  2011-01-10  4:58           ` Jui-Hao Chiang
  2011-01-10 10:30           ` Tim Deegan
  0 siblings, 2 replies; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-10  4:57 UTC (permalink / raw)
  To: Tim Deegan; +Cc: tinnycloud, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 751 bytes --]

Hi, Tim:

On Sat, Jan 8, 2011 at 12:09 AM, Tim Deegan <Tim.Deegan@citrix.com> wrote:

> At 06:02 +0000 on 07 Jan (1294380120), Jui-Hao Chiang wrote:
> > One of the solution is to
> > (a) Simply replace shr_lock with p2m_lock.
>
> I think this is the best choice.   If we find that the p2m lock is a
> bottleneck we can address it later.
>
>
Just to be skeptical:
Why doesn't mfn_to_gfn() take the p2m lock when querying the p2m type? Is there
any guarantee that the resulting type is correct and trustworthy?
For example:
(1) User1 query the p2m type:
mfn_to_gfn(...&p2mt);
if (p2mt == p2m_ram_rw) /* do something based on the p2m type result? */

(2) User2 modify the p2m type
p2m_lock(p2m);
set_p2m_entry(..... p2m_ram_rw);
p2m_unlock(p2m);

Thanks,
Jui-Hao

[-- Attachment #1.2: Type: text/html, Size: 1122 bytes --]


* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-10  4:57         ` Jui-Hao Chiang
@ 2011-01-10  4:58           ` Jui-Hao Chiang
  2011-01-10 10:30           ` Tim Deegan
  1 sibling, 0 replies; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-10  4:58 UTC (permalink / raw)
  To: Tim Deegan; +Cc: tinnycloud, xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 916 bytes --]

Sorry, typo

On Mon, Jan 10, 2011 at 12:57 PM, Jui-Hao Chiang <juihaochiang@gmail.com> wrote:

> Hi, Tim:
>
> On Sat, Jan 8, 2011 at 12:09 AM, Tim Deegan <Tim.Deegan@citrix.com> wrote:
>
>> At 06:02 +0000 on 07 Jan (1294380120), Jui-Hao Chiang wrote:
>> > One of the solution is to
>> > (a) Simply replace shr_lock with p2m_lock.
>>
>> I think this is the best choice.   If we find that the p2m lock is a
>> bottleneck we can address it later.
>>
>>
> Just to be skeptic.
> Why doesn't mfn_to_gfn() take p2m lock when querying the p2m type? Is there
> any guarantee that the resulting type is correct and
>

I mean gfn_to_mfn()


> trustworthy?
> For example:
> (1) User1 query the p2m type:
> mfn_to_gfn(...&p2mt);
> if (p2mt == p2m_ram_rw) /* do something based on the p2m type result? */
>
> (2) User2 modify the p2m type
> p2m_lock(p2m);
> set_p2m_entry(..... p2m_ram_rw);
> p2m_unlock(p2m);
>
> Thanks,
> Jui-Hao
>

[-- Attachment #1.2: Type: text/html, Size: 1737 bytes --]


* re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-07  6:02     ` Jui-Hao Chiang
  2011-01-07 16:09       ` Tim Deegan
@ 2011-01-10  6:48       ` tinnycloud
  2011-01-10  8:10         ` Jui-Hao Chiang
  1 sibling, 1 reply; 50+ messages in thread
From: tinnycloud @ 2011-01-10  6:48 UTC (permalink / raw)
  To: 'Jui-Hao Chiang', 'Tim Deegan'; +Cc: xen-devel


[-- Attachment #1.1: Type: text/plain, Size: 2900 bytes --]

 

 

From: Jui-Hao Chiang [mailto:juihaochiang@gmail.com]
Date: 2011-01-07 14:02
To: Tim Deegan
CC: tinnycloud; xen-devel@lists.xensource.com
Sub: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare

 

 

Oops, I forgot again.
After this change, unshare() has a potential problem of deadlock for
shr_lock and p2m_lock with different locking order.
Assume two CPUs do the following
CPU1: hvm_hap_nested_page_fault() => unshare() => p2m_change_type() (locking
order: shr_lock, p2m_lock)
CPU2: p2m_teardown() => unshare() (locking order: p2m_lock, shr_lock)
When CPU1 grabs shr_lock and CPU2 grabs p2m_lock, they deadlock later.

So it seems better to fix the following rules
(1) Fix locking order: p2m_lock ---> shr_lock
(2) Any function in mem_sharing, if modifying/checking p2m entry is
necessary, it must hold p2m_lock and then shr_lock. Later on, when changing
p2m entries, don't call any p2m function which locks p2m again

So for p2m functions, it seems better to provide some functions which don't
call p2m_lock again.
What do you think? If that's ok, I will do it in this way.


Hmm,  after looking it deeper, I summarize as the following
(1) It seems all the users of shr_lock, nominate/share/unshare, will
check/modify p2m type.
- nominate: p2m_change_type()
- share: set_shared_p2m_entry()
- unshare: set_shared_p2m_entry() and p2m_change_type()
(2) The functions which call unshare()
- hvm_hap_nested_page_fault(): I don't see any p2m_lock held
- p2m_tear_down(): holds p2m_lock
- gfn_to_mfn_unshare(): I don't see any p2m_lock held



 

Thanks for sharing the lock info.

I’ve gone through the code too.

 

1.       mem_sharing_unshare_page() can also be called from
gfn_to_mfn_unshare, which is called by gnttab_transfer.

Since there is no bug report on grant_table right now, I think this path is
safe for now.

Also, p2m_tear_down => mem_sharing_unshare_page(): its flag is
MEM_SHARING_DESTROY_GFN, so it won’t get the chance to
call set_shared_p2m_entry().

2.       As for p2m_change_type(), I found it is called lock-free in other
places, so it is safe too.

3.       set_shared_p2m_entry(), which calls set_p2m_entry(), is not under
p2m_lock, while in other code set_p2m_entry is called under p2m_lock, so here
I think it is a problem.

So I think at least set_p2m_entry should be put under p2m_lock.

I’ll do more investigation based on this.
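For example (illustration only; the exact signatures follow whatever
set_shared_p2m_entry() already has in scope), the set_p2m_entry() call inside
set_shared_p2m_entry() would be wrapped like this:

struct p2m_domain *p2m = d->arch.p2m;

p2m_lock(p2m);
rc = set_p2m_entry(d, gfn, mfn, 0, p2m_ram_shared);
p2m_unlock(p2m);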

 

One of the solution is to 
(a) Simply replace shr_lock with p2m_lock.
(b) In unshare(), apply the following: if (!p2m_locked_by_me(p2m)) call
p2m_lock, otherwise, don't lock it.
(c) p2m_change_type() and set_shared_p2m_entry() are pretty similar, we can
merge the functionality into one function, which does NOT take p2m_lock, and
keep the original p2m_change_type() unchanged.

Correct me if wrong.

Bests,
Jui-Hao


[-- Attachment #1.2: Type: text/html, Size: 10096 bytes --]


* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-10  6:48       ` tinnycloud
@ 2011-01-10  8:10         ` Jui-Hao Chiang
  2011-01-10 10:34           ` tinnycloud
  2011-01-12 10:03           ` Jui-Hao Chiang
  0 siblings, 2 replies; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-10  8:10 UTC (permalink / raw)
  To: tinnycloud; +Cc: xen-devel, Tim Deegan


[-- Attachment #1.1: Type: text/plain, Size: 2143 bytes --]

Hi, tinnycloud:

Thanks for your testing info.
I assume you have mem_sharing_unshare_page called with successful return
value, otherwise the mod_l1_entry won't be called.
Could you call mem_sharing_debug_gfn() before unshare() returns success?
In addition, are there multiple CPUs touching the same page? E.g. you can
print out the cpu id inside unshare() and at mm.c:859.
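For example, something like this right before unshare() returns in the early
and success paths (just a sketch; smp_processor_id() gives the cpu id):

printk("===mem_sharing_unshare_page cpu %d mfn %lx p2mt %x gfn %lu did %d\n",
       smp_processor_id(), mfn, p2mt, gfn, d->domain_id);
mem_sharing_debug_gfn(d, gfn);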


> After this change, unshare() has a potential problem of deadlock for
> shr_lock and p2m_lock with different locking order.
> Assume two CPUs do the following
> CPU1: hvm_hap_nested_page_fault() => unshare() => p2m_change_type()
> (locking order: shr_lock, p2m_lock)
> CPU2: p2m_teardown() => unshare() (locking order: p2m_lock, shr_lock)
> When CPU1 grabs shr_lock and CPU2 grabs p2m_lock, they deadlock later.
>
>  1.       mem_sharing_unshare_page() has the routine  called from
> gfn_to_mfn_unshare, which is called by gnttab_transfer
>
> Since no bug report on grant_table right now, so I think this is safe for
> now
>
> Also, p2m_tear_down => mem_sharing_unshare_page(): its flag is
> MEM_SHARING_DESTROY_GFN, so it won’t get the chance to
>
> call set_shared_p2m_entry()
>
>
>
Of course, p2m_teardown won't call set_shared_p2m_entry. But this does
not change my argument that p2m_teardown() holds p2m_lock while waiting on
shr_lock. Actually, after looking at it for a while, I rebut myself: the
deadlock scenario won't exist.
When p2m_teardown is called, the domain is dying in its last few steps
(device and irq are already released), and there is no way for
hvm_hap_nested_page_fault() to happen on the memory of the dying domain. If
this case is eliminated, then my patch should not have a deadlock problem.
Any comments?

3.       set_shared_p2m_entry() which call set_p2m_entry() is not in
> p2m_lock, and I found in other code set_p2m_entry is called in p2m_lock,
>
> so here I think it is a problem
>
>
>
> So I think at least set_p2m_entry should be put into p2m_lock.
>
> I’ll do more investigation base on this.
>
>
>

See this http://xenbits.xensource.com/xen-unstable.hg?rev/a8d69de8eb31

Thanks,
Jui-Hao

[-- Attachment #1.2: Type: text/html, Size: 4986 bytes --]


* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-10  4:57         ` Jui-Hao Chiang
  2011-01-10  4:58           ` Jui-Hao Chiang
@ 2011-01-10 10:30           ` Tim Deegan
  2011-01-11  1:49             ` MaoXiaoyun
  2011-01-12 11:50             ` Re: [PATCH] mem_sharing: fix race condition of nominate and unshare George Dunlap
  1 sibling, 2 replies; 50+ messages in thread
From: Tim Deegan @ 2011-01-10 10:30 UTC (permalink / raw)
  To: Jui-Hao Chiang; +Cc: tinnycloud, xen-devel

Hi, 

Can you please (both of you) sort out your mail clients to do proper
indenting of quoted text?  The plain-text versions don't have any
quote prefix, which makes them confusing to read.

At 04:57 +0000 on 10 Jan (1294635461), Jui-Hao Chiang wrote:
> Hi, Tim:
> 
> On Sat, Jan 8, 2011 at 12:09 AM, Tim Deegan <Tim.Deegan@citrix.com> wrote:
> At 06:02 +0000 on 07 Jan (1294380120), Jui-Hao Chiang wrote:
> > One of the solution is to
> > (a) Simply replace shr_lock with p2m_lock.
> 
> I think this is the best choice.   If we find that the p2m lock is a
> bottleneck we can address it later.
> 
> 
> Just to be skeptic.
> Why doesn't mfn_to_gfn() take p2m lock when querying the p2m type?

Because gfn->mfn lookups happen very frequently and requiring the lock
would be a performance bottleneck on multi-vcpu guests.

> Is there any guarantee that the resulting type is correct and trustworthy?

Yes.  It's not perfect (and as I said I need to overhaul the locking
here) but if the p2m lookup only reads each level once and the p2m
updates are careful about the order they change things in, the worst
that can happen is another CPU sees a slightly out-of-date value.

There is at least one issue there (now that some p2m code frees old p2m
pages there's a potential race against other readers that needs a
tlbflush-timestamp-style interlock), but TBH there are other things that
need fixing first.

Tim.

> For example:
> (1) User1 query the p2m type:
> mfn_to_gfn(...&p2mt);
> if (p2mt == p2m_ram_rw) /* do something based on the p2m type result? */
> 
> (2) User2 modify the p2m type
> p2m_lock(p2m);
> set_p2m_entry(..... p2m_ram_rw);
> p2m_unlock(p2m);
> 
> Thanks,
> Jui-Hao

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)


* re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-10  8:10         ` Jui-Hao Chiang
@ 2011-01-10 10:34           ` tinnycloud
  2011-01-12 10:03           ` Jui-Hao Chiang
  1 sibling, 0 replies; 50+ messages in thread
From: tinnycloud @ 2011-01-10 10:34 UTC (permalink / raw)
  To: 'Jui-Hao Chiang'; +Cc: xen-devel, 'Tim Deegan'


[-- Attachment #1.1: Type: text/plain, Size: 3914 bytes --]

Hi Jui-Hao:

 

         Under what situation will hvm_hap_nested_page_fault() be called?

 

         Attached is the latest log with CPU id.

 

         Before each call of unshare() I print out the caller.

         

         From the log you will find the error mfn is MFN=2df895, in line 488:

       Line 37: (XEN) ===>mem_sharing_share_pages mfn 2df895 gfn 520798 p2md d did 2

which is the log printed by mem_sharing_share_pages, at line 632 below:

 

618    /* gfn lists always have at least one entry => save to call list_entry */
619     mem_sharing_gfn_account(gfn_get_info(&ce->gfns), 1);
620     mem_sharing_gfn_account(gfn_get_info(&se->gfns), 1);
621     list_for_each_safe(le, te, &ce->gfns)
622     {
623         gfn = list_entry(le, struct gfn_info, list);
624         /* Get the source page and type, this should never fail 
625          * because we are under shr lock, and got non-null se */
626         BUG_ON(!get_page_and_type(spage, dom_cow, PGT_shared_page));
627         /* Move the gfn_info from ce list to se list */
628         list_del(&gfn->list);
629         d = get_domain_by_id(gfn->domain);
630         BUG_ON(!d);
631         gfn_to_mfn(d, gfn->gfn, &p2mt);
632         printk("===>mem_sharing_share_pages mfn %lx gfn %lu p2md %x did %d\n", se->mfn, gfn->gfn, p2mt, d->domain_id);
633         BUG_ON(set_shared_p2m_entry(d, gfn->gfn, se->mfn) == 0);
634         put_domain(d);
635         list_add(&gfn->list, &se->gfns);
636         put_page_and_type(cpage);
637     }

 

From: Jui-Hao Chiang [mailto:juihaochiang@gmail.com]
Date: 2011-01-10 16:10
To: tinnycloud
CC: Tim Deegan; xen-devel@lists.xensource.com
Sub: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare

 

Hi, tinnycloud:

Thanks for your testing info.
I assume you have mem_sharing_unshare_page called with successful return
value, otherwise the mod_l1_entry won't be called.
Could you call mem_sharing_debug_gfn() before unshare() return success?
In addition, are there multiple CPUs touching the same page? e.g. you can
print out the cpu id inside unshare() and the mm.c:859.


After this change, unshare() has a potential problem of deadlock for
shr_lock and p2m_lock with different locking order.
Assume two CPUs do the following
CPU1: hvm_hap_nested_page_fault() => unshare() => p2m_change_type() (locking
order: shr_lock, p2m_lock)
CPU2: p2m_teardown() => unshare() (locking order: p2m_lock, shr_lock)
When CPU1 grabs shr_lock and CPU2 grabs p2m_lock, they deadlock later.

 1.       mem_sharing_unshare_page() can also be called from
gfn_to_mfn_unshare, which is called by gnttab_transfer.

Since there is no bug report on grant_table right now, I think this path is
safe for now.

Also, p2m_tear_down => mem_sharing_unshare_page(): its flag is
MEM_SHARING_DESTROY_GFN, so it won’t get the chance to

call set_shared_p2m_entry()

 

Of course, p2m_teardown won't call set_shared_p2m_entry. But this does
not change my argument that p2m_teardown() holds p2m_lock while waiting on
shr_lock. Actually, after looking at it for a while, I rebut myself: the
deadlock scenario won't exist.
When p2m_teardown is called, the domain is dying in its last few steps
(device and irq are already released), and there is no way for
hvm_hap_nested_page_fault() to happen on the memory of the dying domain. If
this case is eliminated, then my patch should not have a deadlock problem.
Any comments?

3.       set_shared_p2m_entry(), which calls set_p2m_entry(), is not under
p2m_lock, while in other code set_p2m_entry is called under p2m_lock,

so here I think it is a problem

 

So I think at least set_p2m_entry should be put under p2m_lock.

I’ll do more investigation based on this.

 


See this http://xenbits.xensource.com/xen-unstable.hg?rev/a8d69de8eb31

Thanks,
Jui-Hao


[-- Attachment #1.2: Type: text/html, Size: 15051 bytes --]

[-- Attachment #2: log.txt --]
[-- Type: text/plain, Size: 74996 bytes --]

blktap_sysfs_create: adding attributes for dev ffff8801594c6a00
(XEN) ===>mem_sharing_share_pages mfn 2df893 gfn 521523 p2md d did 2
(XEN) ===hvm_hap_nested_page_fault mfn 170cf5 p2mt d gfn 521461 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 170cf5 p2mt d gfn 521461 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4f5, Debug page: MFN=170cf5 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 170d37 p2mt d gfn 521527 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 170d37 p2mt d gfn 521527 did 2
(XEN) ==>3Debug for domain=2, gfn=7f537, Debug page: MFN=170d37 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 170c8d p2mt d gfn 521357 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 170c8d p2mt d gfn 521357 did 2
(XEN) ==>3Debug for domain=2, gfn=7f48d, Debug page: MFN=170c8d is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 170c4e p2mt d gfn 521294 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 170c4e p2mt d gfn 521294 did 2
(XEN) ==>3Debug for domain=2, gfn=7f44e, Debug page: MFN=170c4e is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 170ca9 p2mt d gfn 521385 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 170ca9 p2mt d gfn 521385 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4a9, Debug page: MFN=170ca9 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 170cb4 p2mt d gfn 521396 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 170cb4 p2mt d gfn 521396 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4b4, Debug page: MFN=170cb4 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 170cf5 p2mt d gfn 521461 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 170cf5 p2mt d gfn 521461 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4f5, Debug page: MFN=170cf5 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 170cb6 p2mt d gfn 521398 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 170cb6 p2mt d gfn 521398 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4b6, Debug page: MFN=170cb6 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 2df893 p2mt d gfn 521523 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 2df893 p2mt d gfn 521523 did 2
(XEN) ===set p2mentry mfn 2df893 p2mt d gfn 521523 did 2
(XEN) ==>3Debug for domain=2, gfn=7f533, Debug page: MFN=2df893 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===hvm_hap_nested_page_fault mfn 170e54 p2mt d gfn 520788 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 170e54 p2mt d gfn 520788 did 2
(XEN) ==>3Debug for domain=2, gfn=7f254, Debug page: MFN=170e54 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 170e5d p2mt d gfn 520797 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 170e5d p2mt d gfn 520797 did 2
(XEN) ==>3Debug for domain=2, gfn=7f25d, Debug page: MFN=170e5d is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===>mem_sharing_share_pages mfn 2df895 gfn 520798 p2md d did 2
(XEN) ===gfn_to_mfn_unshare mfn 170c5d p2mt d gfn 521309 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c5d p2mt d gfn 521309 did 2
(XEN) ==>3Debug for domain=2, gfn=7f45d, Debug page: MFN=170c5d is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c9f p2mt d gfn 521375 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c9f p2mt d gfn 521375 did 2
(XEN) ==>3Debug for domain=2, gfn=7f49f, Debug page: MFN=170c9f is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170ca1 p2mt d gfn 521377 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170ca1 p2mt d gfn 521377 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4a1, Debug page: MFN=170ca1 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170ca3 p2mt d gfn 521379 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170ca3 p2mt d gfn 521379 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4a3, Debug page: MFN=170ca3 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===>mem_sharing_share_pages mfn 2df896 gfn 520868 p2md d did 2
(XEN) ===hvm_hap_nested_page_fault mfn 170e7d p2mt d gfn 520829 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 170e7d p2mt d gfn 520829 did 2
(XEN) ==>3Debug for domain=2, gfn=7f27d, Debug page: MFN=170e7d is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c7f p2mt d gfn 521343 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c7f p2mt d gfn 521343 did 2
(XEN) ==>3Debug for domain=2, gfn=7f47f, Debug page: MFN=170c7f is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170fc1 p2mt d gfn 521153 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170fc1 p2mt d gfn 521153 did 2
(XEN) ==>3Debug for domain=2, gfn=7f3c1, Debug page: MFN=170fc1 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c03 p2mt d gfn 521219 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c03 p2mt d gfn 521219 did 2
(XEN) ==>3Debug for domain=2, gfn=7f403, Debug page: MFN=170c03 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c05 p2mt d gfn 521221 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c05 p2mt d gfn 521221 did 2
(XEN) ==>3Debug for domain=2, gfn=7f405, Debug page: MFN=170c05 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170fc7 p2mt d gfn 521159 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170fc7 p2mt d gfn 521159 did 2
(XEN) ==>3Debug for domain=2, gfn=7f3c7, Debug page: MFN=170fc7 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c89 p2mt d gfn 521353 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c89 p2mt d gfn 521353 did 2
(XEN) ==>3Debug for domain=2, gfn=7f489, Debug page: MFN=170c89 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c4b p2mt d gfn 521291 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c4b p2mt d gfn 521291 did 2
(XEN) ==>3Debug for domain=2, gfn=7f44b, Debug page: MFN=170c4b is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c4d p2mt d gfn 521293 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c4d p2mt d gfn 521293 did 2
(XEN) ==>3Debug for domain=2, gfn=7f44d, Debug page: MFN=170c4d is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c0f p2mt d gfn 521231 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c0f p2mt d gfn 521231 did 2
(XEN) ==>3Debug for domain=2, gfn=7f40f, Debug page: MFN=170c0f is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c11 p2mt d gfn 521233 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c11 p2mt d gfn 521233 did 2
(XEN) ==>3Debug for domain=2, gfn=7f411, Debug page: MFN=170c11 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c53 p2mt d gfn 521299 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c53 p2mt d gfn 521299 did 2
(XEN) ==>3Debug for domain=2, gfn=7f453, Debug page: MFN=170c53 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c55 p2mt d gfn 521301 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c55 p2mt d gfn 521301 did 2
(XEN) ==>3Debug for domain=2, gfn=7f455, Debug page: MFN=170c55 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c17 p2mt d gfn 521239 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c17 p2mt d gfn 521239 did 2
(XEN) ==>3Debug for domain=2, gfn=7f417, Debug page: MFN=170c17 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c59 p2mt d gfn 521305 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c59 p2mt d gfn 521305 did 2
(XEN) ==>3Debug for domain=2, gfn=7f459, Debug page: MFN=170c59 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c1b p2mt d gfn 521243 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c1b p2mt d gfn 521243 did 2
(XEN) ==>3Debug for domain=2, gfn=7f41b, Debug page: MFN=170c1b is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170ca5 p2mt d gfn 521381 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170ca5 p2mt d gfn 521381 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4a5, Debug page: MFN=170ca5 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c67 p2mt d gfn 521319 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c67 p2mt d gfn 521319 did 2
(XEN) ==>3Debug for domain=2, gfn=7f467, Debug page: MFN=170c67 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c69 p2mt d gfn 521321 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c69 p2mt d gfn 521321 did 2
(XEN) ==>3Debug for domain=2, gfn=7f469, Debug page: MFN=170c69 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c6b p2mt d gfn 521323 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c6b p2mt d gfn 521323 did 2
(XEN) ==>3Debug for domain=2, gfn=7f46b, Debug page: MFN=170c6b is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c6d p2mt d gfn 521325 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c6d p2mt d gfn 521325 did 2
(XEN) ==>3Debug for domain=2, gfn=7f46d, Debug page: MFN=170c6d is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170caf p2mt d gfn 521391 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170caf p2mt d gfn 521391 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4af, Debug page: MFN=170caf is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c71 p2mt d gfn 521329 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c71 p2mt d gfn 521329 did 2
(XEN) ==>3Debug for domain=2, gfn=7f471, Debug page: MFN=170c71 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170cb5 p2mt d gfn 521397 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170cb5 p2mt d gfn 521397 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4b5, Debug page: MFN=170cb5 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170cf7 p2mt d gfn 521463 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170cf7 p2mt d gfn 521463 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4f7, Debug page: MFN=170cf7 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170cb9 p2mt d gfn 521401 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170cb9 p2mt d gfn 521401 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4b9, Debug page: MFN=170cb9 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170c7b p2mt d gfn 521339 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170c7b p2mt d gfn 521339 did 2
(XEN) ==>3Debug for domain=2, gfn=7f47b, Debug page: MFN=170c7b is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 170cbd p2mt d gfn 521405 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170cbd p2mt d gfn 521405 did 2
(XEN) ==>3Debug for domain=2, gfn=7f4bd, Debug page: MFN=170cbd is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1711ed p2mt d gfn 520685 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1711ed p2mt d gfn 520685 did 2
(XEN) ==>3Debug for domain=2, gfn=7f1ed, Debug page: MFN=1711ed is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 171300 p2mt d gfn 519936 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 171300 p2mt d gfn 519936 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef00, Debug page: MFN=171300 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17108f p2mt d gfn 520335 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17108f p2mt d gfn 520335 did 2
(XEN) ==>3Debug for domain=2, gfn=7f08f, Debug page: MFN=17108f is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171052 p2mt d gfn 520274 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171052 p2mt d gfn 520274 did 2
(XEN) ==>3Debug for domain=2, gfn=7f052, Debug page: MFN=171052 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171016 p2mt d gfn 520214 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171016 p2mt d gfn 520214 did 2
(XEN) ==>3Debug for domain=2, gfn=7f016, Debug page: MFN=171016 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171058 p2mt d gfn 520280 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171058 p2mt d gfn 520280 did 2
(XEN) ==>3Debug for domain=2, gfn=7f058, Debug page: MFN=171058 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1710d9 p2mt d gfn 520409 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1710d9 p2mt d gfn 520409 did 2
(XEN) ==>3Debug for domain=2, gfn=7f0d9, Debug page: MFN=1710d9 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17105c p2mt d gfn 520284 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17105c p2mt d gfn 520284 did 2
(XEN) ==>3Debug for domain=2, gfn=7f05c, Debug page: MFN=17105c is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17109d p2mt d gfn 520349 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17109d p2mt d gfn 520349 did 2
(XEN) ==>3Debug for domain=2, gfn=7f09d, Debug page: MFN=17109d is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17105e p2mt d gfn 520286 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17105e p2mt d gfn 520286 did 2
(XEN) ==>3Debug for domain=2, gfn=7f05e, Debug page: MFN=17105e is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1710df p2mt d gfn 520415 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1710df p2mt d gfn 520415 did 2
(XEN) ==>3Debug for domain=2, gfn=7f0df, Debug page: MFN=1710df is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1710a0 p2mt d gfn 520352 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1710a0 p2mt d gfn 520352 did 2
(XEN) ==>3Debug for domain=2, gfn=7f0a0, Debug page: MFN=1710a0 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1710e1 p2mt d gfn 520417 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1710e1 p2mt d gfn 520417 did 2
(XEN) ==>3Debug for domain=2, gfn=7f0e1, Debug page: MFN=1710e1 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1710e5 p2mt d gfn 520421 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1710e5 p2mt d gfn 520421 did 2
(XEN) ==>3Debug for domain=2, gfn=7f0e5, Debug page: MFN=1710e5 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713a6 p2mt d gfn 520102 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713a6 p2mt d gfn 520102 did 2
(XEN) ==>3Debug for domain=2, gfn=7efa6, Debug page: MFN=1713a6 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1710a9 p2mt d gfn 520361 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1710a9 p2mt d gfn 520361 did 2
(XEN) ==>3Debug for domain=2, gfn=7f0a9, Debug page: MFN=1710a9 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17107f p2mt d gfn 520319 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17107f p2mt d gfn 520319 did 2
(XEN) ==>3Debug for domain=2, gfn=7f07f, Debug page: MFN=17107f is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171280 p2mt d gfn 519808 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171280 p2mt d gfn 519808 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee80, Debug page: MFN=171280 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171381 p2mt d gfn 520065 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171381 p2mt d gfn 520065 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef81, Debug page: MFN=171381 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713c3 p2mt d gfn 520131 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713c3 p2mt d gfn 520131 did 2
(XEN) ==>3Debug for domain=2, gfn=7efc3, Debug page: MFN=1713c3 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171006 p2mt d gfn 520198 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171006 p2mt d gfn 520198 did 2
(XEN) ==>3Debug for domain=2, gfn=7f006, Debug page: MFN=171006 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713c8 p2mt d gfn 520136 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713c8 p2mt d gfn 520136 did 2
(XEN) ==>3Debug for domain=2, gfn=7efc8, Debug page: MFN=1713c8 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713ce p2mt d gfn 520142 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 1713ce p2mt d gfn 520142 did 2
(XEN) ==>3Debug for domain=2, gfn=7efce, Debug page: MFN=1713ce is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713d0 p2mt d gfn 520144 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713d0 p2mt d gfn 520144 did 2
(XEN) ==>3Debug for domain=2, gfn=7efd0, Debug page: MFN=1713d0 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171055 p2mt d gfn 520277 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171055 p2mt d gfn 520277 did 2
(XEN) ==>3Debug for domain=2, gfn=7f055, Debug page: MFN=171055 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171017 p2mt d gfn 520215 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171017 p2mt d gfn 520215 did 2
(XEN) ==>3Debug for domain=2, gfn=7f017, Debug page: MFN=171017 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17101b p2mt d gfn 520219 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17101b p2mt d gfn 520219 did 2
(XEN) ==>3Debug for domain=2, gfn=7f01b, Debug page: MFN=17101b is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713dc p2mt d gfn 520156 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713dc p2mt d gfn 520156 did 2
(XEN) ==>3Debug for domain=2, gfn=7efdc, Debug page: MFN=1713dc is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171022 p2mt d gfn 520226 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171022 p2mt d gfn 520226 did 2
(XEN) ==>3Debug for domain=2, gfn=7f022, Debug page: MFN=171022 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171024 p2mt d gfn 520228 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171024 p2mt d gfn 520228 did 2
(XEN) ==>3Debug for domain=2, gfn=7f024, Debug page: MFN=171024 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17103a p2mt d gfn 520250 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17103a p2mt d gfn 520250 did 2
(XEN) ==>3Debug for domain=2, gfn=7f03a, Debug page: MFN=17103a is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713ff p2mt d gfn 520191 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 1713ff p2mt d gfn 520191 did 2
(XEN) ==>3Debug for domain=2, gfn=7efff, Debug page: MFN=1713ff is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171200 p2mt d gfn 519680 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 171200 p2mt d gfn 519680 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee00, Debug page: MFN=171200 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171301 p2mt d gfn 519937 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 171301 p2mt d gfn 519937 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef01, Debug page: MFN=171301 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171303 p2mt d gfn 519939 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 171303 p2mt d gfn 519939 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef03, Debug page: MFN=171303 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1712c7 p2mt d gfn 519879 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 1712c7 p2mt d gfn 519879 did 2
(XEN) ==>3Debug for domain=2, gfn=7eec7, Debug page: MFN=1712c7 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17130e p2mt d gfn 519950 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17130e p2mt d gfn 519950 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef0e, Debug page: MFN=17130e is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171310 p2mt d gfn 519952 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171310 p2mt d gfn 519952 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef10, Debug page: MFN=171310 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171354 p2mt d gfn 520020 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171354 p2mt d gfn 520020 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef54, Debug page: MFN=171354 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17131c p2mt d gfn 519964 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17131c p2mt d gfn 519964 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef1c, Debug page: MFN=17131c is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713e1 p2mt d gfn 520161 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713e1 p2mt d gfn 520161 did 2
(XEN) ==>3Debug for domain=2, gfn=7efe1, Debug page: MFN=1713e1 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713a2 p2mt d gfn 520098 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713a2 p2mt d gfn 520098 did 2
(XEN) ==>3Debug for domain=2, gfn=7efa2, Debug page: MFN=1713a2 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171266 p2mt d gfn 519782 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 171266 p2mt d gfn 519782 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee66, Debug page: MFN=171266 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171035 p2mt d gfn 520245 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171035 p2mt d gfn 520245 did 2
(XEN) ==>3Debug for domain=2, gfn=7f035, Debug page: MFN=171035 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713b6 p2mt d gfn 520118 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713b6 p2mt d gfn 520118 did 2
(XEN) ==>3Debug for domain=2, gfn=7efb6, Debug page: MFN=1713b6 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171037 p2mt d gfn 520247 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171037 p2mt d gfn 520247 did 2
(XEN) ==>3Debug for domain=2, gfn=7f037, Debug page: MFN=171037 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713b8 p2mt d gfn 520120 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713b8 p2mt d gfn 520120 did 2
(XEN) ==>3Debug for domain=2, gfn=7efb8, Debug page: MFN=1713b8 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713ba p2mt d gfn 520122 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713ba p2mt d gfn 520122 did 2
(XEN) ==>3Debug for domain=2, gfn=7efba, Debug page: MFN=1713ba is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713fe p2mt d gfn 520190 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713fe p2mt d gfn 520190 did 2
(XEN) ==>3Debug for domain=2, gfn=7effe, Debug page: MFN=1713fe is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171300 p2mt d gfn 519936 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171300 p2mt d gfn 519936 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef00, Debug page: MFN=171300 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171372 p2mt d gfn 520050 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171372 p2mt d gfn 520050 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef72, Debug page: MFN=171372 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171033 p2mt d gfn 520243 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171033 p2mt d gfn 520243 did 2
(XEN) ==>3Debug for domain=2, gfn=7f033, Debug page: MFN=171033 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713b4 p2mt d gfn 520116 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713b4 p2mt d gfn 520116 did 2
(XEN) ==>3Debug for domain=2, gfn=7efb4, Debug page: MFN=1713b4 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1713be p2mt d gfn 520126 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1713be p2mt d gfn 520126 did 2
(XEN) ==>3Debug for domain=2, gfn=7efbe, Debug page: MFN=1713be is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17137f p2mt d gfn 520063 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17137f p2mt d gfn 520063 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef7f, Debug page: MFN=17137f is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171580 p2mt d gfn 519552 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171580 p2mt d gfn 519552 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed80, Debug page: MFN=171580 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171281 p2mt d gfn 519809 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 171281 p2mt d gfn 519809 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee81, Debug page: MFN=171281 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171242 p2mt d gfn 519746 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 171242 p2mt d gfn 519746 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee42, Debug page: MFN=171242 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171283 p2mt d gfn 519811 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 171283 p2mt d gfn 519811 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee83, Debug page: MFN=171283 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171284 p2mt d gfn 519812 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171284 p2mt d gfn 519812 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee84, Debug page: MFN=171284 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1712c5 p2mt d gfn 519877 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1712c5 p2mt d gfn 519877 did 2
(XEN) ==>3Debug for domain=2, gfn=7eec5, Debug page: MFN=1712c5 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171247 p2mt d gfn 519751 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171247 p2mt d gfn 519751 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee47, Debug page: MFN=171247 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171288 p2mt d gfn 519816 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171288 p2mt d gfn 519816 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee88, Debug page: MFN=171288 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171300 p2mt d gfn 519936 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171300 p2mt d gfn 519936 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef00, Debug page: MFN=171300 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171349 p2mt d gfn 520009 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171349 p2mt d gfn 520009 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef49, Debug page: MFN=171349 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17130a p2mt d gfn 519946 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17130a p2mt d gfn 519946 did 2
(XEN) ==>3Debug for domain=2, gfn=7ef0a, Debug
(XEN) ===hvm_hap_nested_page_fault mfn 1715ef p2mt d gfn 519663 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 1715ef p2mt d gfn 519663 did 2
(XEN) ==>3Debug for domain=2, gfn=7edef, Debug page: MFN=1715ef is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171516 p2mt d gfn 519446 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171516 p2mt d gfn 519446 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed16, Debug page: MFN=171516 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1715f1 p2mt d gfn 519665 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1715f1 p2mt d gfn 519665 did 2
(XEN) ==>3Debug for domain=2, gfn=7edf1, Debug page: MFN=1715f1 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1715b2 p2mt d gfn 519602 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1715b2 p2mt d gfn 519602 did 2
(XEN) ==>3Debug for domain=2, gfn=7edb2, Debug page: MFN=1715b2 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171233 p2mt d gfn 519731 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171233 p2mt d gfn 519731 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee33, Debug page: MFN=171233 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171234 p2mt d gfn 519732 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171234 p2mt d gfn 519732 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee34, Debug page: MFN=171234 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1715f5 p2mt d gfn 519669 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1715f5 p2mt d gfn 519669 did 2
(XEN) ==>3Debug for domain=2, gfn=7edf5, Debug page: MFN=1715f5 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1715b6 p2mt d gfn 519606 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1715b6 p2mt d gfn 519606 did 2
(XEN) ==>3Debug for domain=2, gfn=7edb6, Debug page: MFN=1715b6 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171277 p2mt d gfn 519799 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171277 p2mt d gfn 519799 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee77, Debug page: MFN=171277 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1715f9 p2mt d gfn 519673 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 1715f9 p2mt d gfn 519673 did 2
(XEN) ==>3Debug for domain=2, gfn=7edf9, Debug page: MFN=1715f9 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171557 p2mt d gfn 519511 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171557 p2mt d gfn 519511 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed57, Debug page: MFN=171557 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17157a p2mt d gfn 519546 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17157a p2mt d gfn 519546 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed7a, Debug page: MFN=17157a is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1715fb p2mt d gfn 519675 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1715fb p2mt d gfn 519675 did 2
(XEN) ==>3Debug for domain=2, gfn=7edfb, Debug page: MFN=1715fb is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1715bc p2mt d gfn 519612 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1715bc p2mt d gfn 519612 did 2
(XEN) ==>3Debug for domain=2, gfn=7edbc, Debug page: MFN=1715bc is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1714c4 p2mt d gfn 519364 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1714c4 p2mt d gfn 519364 did 2
(XEN) ==>3Debug for domain=2, gfn=7ecc4, Debug page: MFN=1714c4 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17148e p2mt d gfn 519310 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17148e p2mt d gfn 519310 did 2
(XEN) ==>3Debug for domain=2, gfn=7ec8e, Debug page: MFN=17148e is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171554 p2mt d gfn 519508 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171554 p2mt d gfn 519508 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed54, Debug page: MFN=171554 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1714d6 p2mt d gfn 519382 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1714d6 p2mt d gfn 519382 did 2
(XEN) ==>3Debug for domain=2, gfn=7ecd6, Debug page: MFN=1714d6 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171518 p2mt d gfn 519448 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171518 p2mt d gfn 519448 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed18, Debug page: MFN=171518 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17151a p2mt d gfn 519450 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17151a p2mt d gfn 519450 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed1a, Debug page: MFN=17151a is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17151e p2mt d gfn 519454 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17151e p2mt d gfn 519454 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed1e, Debug page: MFN=17151e is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171466 p2mt d gfn 519270 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171466 p2mt d gfn 519270 did 2
(XEN) ==>3Debug for domain=2, gfn=7ec66, Debug page: MFN=171466 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1715ea p2mt d gfn 519658 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1715ea p2mt d gfn 519658 did 2
(XEN) ==>3Debug for domain=2, gfn=7edea, Debug page: MFN=1715ea is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17152b p2mt d gfn 519467 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17152b p2mt d gfn 519467 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed2b, Debug page: MFN=17152b is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17156c p2mt d gfn 519532 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17156c p2mt d gfn 519532 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed6c, Debug page: MFN=17156c is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17122d p2mt d gfn 519725 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17122d p2mt d gfn 519725 did 2
(XEN) ==>3Debug for domain=2, gfn=7ee2d, Debug page: MFN=17122d is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17156e p2mt d gfn 519534 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17156e p2mt d gfn 519534 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed6e, Debug page: MFN=17156e is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171572 p2mt d gfn 519538 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171572 p2mt d gfn 519538 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed72, Debug page: MFN=171572 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1715b5 p2mt d gfn 519605 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1715b5 p2mt d gfn 519605 did 2
(XEN) ==>3Debug for domain=2, gfn=7edb5, Debug page: MFN=1715b5 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171496 p2mt d gfn 519318 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171496 p2mt d gfn 519318 did 2
(XEN) ==>3Debug for domain=2, gfn=7ec96, Debug page: MFN=171496 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171579 p2mt d gfn 519545 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171579 p2mt d gfn 519545 did 2
(XEN) ==>3Debug for domain=2, gfn=7ed79, Debug page: MFN=171579 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1715be p2mt d gfn 519614 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1715be p2mt d gfn 519614 did 2
(XEN) ==>3Debug for domain=2, gfn=7edbe, Debug page: MFN=1715be is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171441 p2mt d gfn 519233 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171441 p2mt d gfn 519233 did 2
(XEN) ==>3Debug for domain=2, gfn=7ec41, Debug page: MFN=171441 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 171402 p2mt d gfn 519170 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 171402 p2mt d gfn 519170 did 2
(XEN) ==>3Debug for domain=2, gfn=7ec02, Debug page: MFN=171402 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17149a p2mt d gfn 519322 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17149a p2mt d gfn 519322 did 2
(XEN) ==>3Debug for domain=2, gfn=7ec9a, Debug page: MFN=17149a is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1723dc p2mt d gfn 516060 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1723dc p2mt d gfn 516060 did 2
(XEN) ==>3Debug for domain=2, gfn=7dfdc, Debug page: MFN=1723dc is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17231d p2mt d gfn 515869 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17231d p2mt d gfn 515869 did 2
(XEN) ==>3Debug for domain=2, gfn=7df1d, Debug page: MFN=17231d is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17239e p2mt d gfn 515998 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17239e p2mt d gfn 515998 did 2
(XEN) ==>3Debug for domain=2, gfn=7df9e, Debug page: MFN=17239e is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17201f p2mt d gfn 516127 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17201f p2mt d gfn 516127 did 2
(XEN) ==>3Debug for domain=2, gfn=7e01f, Debug page: MFN=17201f is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172020 p2mt d gfn 516128 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172020 p2mt d gfn 516128 did 2
(XEN) ==>3Debug for domain=2, gfn=7e020, Debug page: MFN=172020 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1723e1 p2mt d gfn 516065 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1723e1 p2mt d gfn 516065 did 2
(XEN) ==>3Debug for domain=2, gfn=7dfe1, Debug page: MFN=1723e1 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172022 p2mt d gfn 516130 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172022 p2mt d gfn 516130 did 2
(XEN) ==>3Debug for domain=2, gfn=7e022, Debug page: MFN=172022 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1723a3 p2mt d gfn 516003 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1723a3 p2mt d gfn 516003 did 2
(XEN) ==>3Debug for domain=2, gfn=7dfa3, Debug page: MFN=1723a3 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172364 p2mt d gfn 515940 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172364 p2mt d gfn 515940 did 2
(XEN) ==>3Debug for domain=2, gfn=7df64, Debug page: MFN=172364 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172065 p2mt d gfn 516197 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172065 p2mt d gfn 516197 did 2
(XEN) ==>3Debug for domain=2, gfn=7e065, Debug page: MFN=172065 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172226 p2mt d gfn 515622 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172226 p2mt d gfn 515622 did 2
(XEN) ==>3Debug for domain=2, gfn=7de26, Debug page: MFN=172226 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172327 p2mt d gfn 515879 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172327 p2mt d gfn 515879 did 2
(XEN) ==>3Debug for domain=2, gfn=7df27, Debug page: MFN=172327 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1723a8 p2mt d gfn 516008 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1723a8 p2mt d gfn 516008 did 2
(XEN) ==>3Debug for domain=2, gfn=7dfa8, Debug page: MFN=1723a8 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172029 p2mt d gfn 516137 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172029 p2mt d gfn 516137 did 2
(XEN) ==>3Debug for domain=2, gfn=7e029, Debug page: MFN=172029 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===do_mmu_update 2df893 p2mt d gfn 40083 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 2df893 p2mt d gfn 40083 did 2
(XEN) ==>3Debug for domain=2, gfn=9c93, Debug page: MFN=2df893 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===do_mmu_update 2df894 p2mt d gfn 40084 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 2df894 p2mt d gfn 40084 did 2
(XEN) ==>3Debug for domain=2, gfn=9c94, Debug page: MFN=2df894 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===do_mmu_update 2df895 p2mt d gfn 40085 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 2df895 p2mt d gfn 40085 did 2
(XEN) ===set p2mentry mfn 2df895 p2mt d gfn 40085 did 2
(XEN) ==>3Debug for domain=2, gfn=9c95, Debug page: MFN=2df895 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) printk: 32 messages suppressed.
(XEN) mm.c:859:d0 Error getting mfn 2df895 (pfn fffffffffffffffe) from L1 entry 80000002df895627 for l1e_owner=0, pg_owner=2 cpu 0
(XEN) ===do_mmu_update 2df896 p2mt d gfn 40086 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 2df896 p2mt d gfn 40086 did 2
(XEN) ===set p2mentry mfn 2df896 p2mt d gfn 40086 did 2
(XEN) ==>3Debug for domain=2, gfn=9c96, Debug page: MFN=2df896 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) mm.c:859:d0 Error getting mfn 2df896 (pfn fffffffffffffffe) from L1 entry 80000002df896627 for l1e_owner=0, pg_owner=2 cpu 0
(XEN) ===do_mmu_update 2dfb73 p2mt d gfn 39795 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 2dfb73 p2mt d gfn 39795 did 2
(XEN) ==>3Debug for domain=2, gfn=9b73, Debug page: MFN=2dfb73 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172241 p2mt d gfn 515649 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 172241 p2mt d gfn 515649 did 2
(XEN) ==>3Debug for domain=2, gfn=7de41, Debug page: MFN=172241 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172362 p2mt d gfn 515938 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 172362 p2mt d gfn 515938 did 2
(XEN) ==>3Debug for domain=2, gfn=7df62, Debug page: MFN=172362 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172582 p2mt d gfn 515458 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 172582 p2mt d gfn 515458 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd82, Debug page: MFN=172582 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172597 p2mt d gfn 515479 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 172597 p2mt d gfn 515479 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd97, Debug page: MFN=172597 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172558 p2mt d gfn 515416 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 172558 p2mt d gfn 515416 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd58, Debug page: MFN=172558 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172559 p2mt d gfn 515417 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 172559 p2mt d gfn 515417 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd59, Debug page: MFN=172559 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17259a p2mt d gfn 515482 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 17259a p2mt d gfn 515482 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd9a, Debug page: MFN=17259a is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17221b p2mt d gfn 515611 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 17221b p2mt d gfn 515611 did 2
(XEN) ==>3Debug for domain=2, gfn=7de1b, Debug page: MFN=17221b is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17221c p2mt d gfn 515612 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 17221c p2mt d gfn 515612 did 2
(XEN) ==>3Debug for domain=2, gfn=7de1c, Debug page: MFN=17221c is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17259d p2mt d gfn 515485 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 17259d p2mt d gfn 515485 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd9d, Debug page: MFN=17259d is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17221e p2mt d gfn 515614 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 17221e p2mt d gfn 515614 did 2
(XEN) ==>3Debug for domain=2, gfn=7de1e, Debug page: MFN=17221e is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1722df p2mt d gfn 515807 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 1722df p2mt d gfn 515807 did 2
(XEN) ==>3Debug for domain=2, gfn=7dedf, Debug page: MFN=1722df is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1722e0 p2mt d gfn 515808 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 1722e0 p2mt d gfn 515808 did 2
(XEN) ==>3Debug for domain=2, gfn=7dee0, Debug page: MFN=1722e0 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172261 p2mt d gfn 515681 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 172261 p2mt d gfn 515681 did 2
(XEN) ==>3Debug for domain=2, gfn=7de61, Debug page: MFN=172261 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1722a2 p2mt d gfn 515746 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 1722a2 p2mt d gfn 515746 did 2
(XEN) ==>3Debug for domain=2, gfn=7dea2, Debug page: MFN=1722a2 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1725e3 p2mt d gfn 515555 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 1725e3 p2mt d gfn 515555 did 2
(XEN) ==>3Debug for domain=2, gfn=7dde3, Debug page: MFN=1725e3 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1725a4 p2mt d gfn 515492 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 1725a4 p2mt d gfn 515492 did 2
(XEN) ==>3Debug for domain=2, gfn=7dda4, Debug page: MFN=1725a4 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1724c0 p2mt d gfn 515264 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1724c0 p2mt d gfn 515264 did 2
(XEN) ==>3Debug for domain=2, gfn=7dcc0, Debug page: MFN=1724c0 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172581 p2mt d gfn 515457 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172581 p2mt d gfn 515457 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd81, Debug page: MFN=172581 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1724c2 p2mt d gfn 515266 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1724c2 p2mt d gfn 515266 did 2
(XEN) ==>3Debug for domain=2, gfn=7dcc2, Debug page: MFN=1724c2 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1725c3 p2mt d gfn 515523 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1725c3 p2mt d gfn 515523 did 2
(XEN) ==>3Debug for domain=2, gfn=7ddc3, Debug page: MFN=1725c3 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172504 p2mt d gfn 515332 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172504 p2mt d gfn 515332 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd04, Debug page: MFN=172504 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172545 p2mt d gfn 515397 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172545 p2mt d gfn 515397 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd45, Debug page: MFN=172545 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1725c6 p2mt d gfn 515526 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1725c6 p2mt d gfn 515526 did 2
(XEN) ==>3Debug for domain=2, gfn=7ddc6, Debug page: MFN=1725c6 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172587 p2mt d gfn 515463 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172587 p2mt d gfn 515463 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd87, Debug page: MFN=172587 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172288 p2mt d gfn 515720 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172288 p2mt d gfn 515720 did 2
(XEN) ==>3Debug for domain=2, gfn=7de88, Debug page: MFN=172288 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1722c9 p2mt d gfn 515785 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1722c9 p2mt d gfn 515785 did 2
(XEN) ==>3Debug for domain=2, gfn=7dec9, Debug page: MFN=1722c9 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17258a p2mt d gfn 515466 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17258a p2mt d gfn 515466 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd8a, Debug page: MFN=17258a is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1725cb p2mt d gfn 515531 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1725cb p2mt d gfn 515531 did 2
(XEN) ==>3Debug for domain=2, gfn=7ddcb, Debug page: MFN=1725cb is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 17254c p2mt d gfn 515404 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 17254c p2mt d gfn 515404 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd4c, Debug page: MFN=17254c is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172532 p2mt d gfn 515378 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 172532 p2mt d gfn 515378 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd32, Debug page: MFN=172532 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172213 p2mt d gfn 515603 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172213 p2mt d gfn 515603 did 2
(XEN) ==>3Debug for domain=2, gfn=7de13, Debug page: MFN=172213 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172594 p2mt d gfn 515476 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172594 p2mt d gfn 515476 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd94, Debug page: MFN=172594 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172595 p2mt d gfn 515477 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172595 p2mt d gfn 515477 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd95, Debug page: MFN=172595 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1724d6 p2mt d gfn 515286 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1724d6 p2mt d gfn 515286 did 2
(XEN) ==>3Debug for domain=2, gfn=7dcd6, Debug page: MFN=1724d6 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 172557 p2mt d gfn 515415 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 172557 p2mt d gfn 515415 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd57, Debug page: MFN=172557 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 173502 p2mt d gfn 511234 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 173502 p2mt d gfn 511234 did 2
(XEN) ==>3Debug for domain=2, gfn=7cd02, Debug page: MFN=173502 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1735b6 p2mt d gfn 511414 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1735b6 p2mt d gfn 511414 did 2
(XEN) ==>3Debug for domain=2, gfn=7cdb6, Debug page: MFN=1735b6 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 173406 p2mt d gfn 510982 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 173406 p2mt d gfn 510982 did 2
(XEN) ==>3Debug for domain=2, gfn=7cc06, Debug page: MFN=173406 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 173713 p2mt d gfn 510739 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 173713 p2mt d gfn 510739 did 2
(XEN) ==>3Debug for domain=2, gfn=7cb13, Debug page: MFN=173713 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1739d4 p2mt d gfn 510420 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1739d4 p2mt d gfn 510420 did 2
(XEN) ==>3Debug for domain=2, gfn=7c9d4, Debug page: MFN=1739d4 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 173bdf p2mt d gfn 509919 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 173bdf p2mt d gfn 509919 did 2
(XEN) ==>3Debug for domain=2, gfn=7c7df, Debug page: MFN=173bdf is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1725db p2mt d gfn 515547 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1725db p2mt d gfn 515547 did 2
(XEN) ==>3Debug for domain=2, gfn=7dddb, Debug page: MFN=1725db is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===hvm_hap_nested_page_fault mfn 1725dc p2mt d gfn 515548 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 1725dc p2mt d gfn 515548 did 2
(XEN) ==>3Debug for domain=2, gfn=7dddc, Debug page: MFN=1725dc is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 1725bf p2mt d gfn 515519 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1725bf p2mt d gfn 515519 did 2
(XEN) ==>3Debug for domain=2, gfn=7ddbf, Debug page: MFN=1725bf is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===>mem_sharing_share_pages mfn 1725b9 gfn 515519 p2md d did 2
(XEN) ===>mem_sharing_share_pages mfn 1724fa gfn 515264 p2md d did 2
(XEN) ===>mem_sharing_share_pages mfn 1725fb gfn 515457 p2md d did 2
(XEN) ===>mem_sharing_share_pages mfn 1725bc gfn 515266 p2md d did 2
(XEN) ===>mem_sharing_share_pages mfn 17227d gfn 515523 p2md d did 2
(XEN) ===>mem_sharing_share_pages mfn 17223e gfn 515332 p2md d did 2
(XEN) ===gfn_to_mfn_unshare mfn 17223e p2mt d gfn 515646 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 17223e p2mt d gfn 515646 did 2
(XEN) ===set p2mentry mfn 17223e p2mt d gfn 515646 did 2
(XEN) ==>3Debug for domain=2, gfn=7de3e, Debug page: MFN=17223e is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===hvm_hap_nested_page_fault mfn 1725bc p2mt d gfn 515516 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 1725bc p2mt d gfn 515516 did 2
(XEN) ===set p2mentry mfn 1725bc p2mt d gfn 515516 did 2
(XEN) ==>3Debug for domain=2, gfn=7ddbc, Debug page: MFN=1725bc is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===hvm_hap_nested_page_fault mfn 17227d p2mt d gfn 515709 did 2
(XEN) ===mem_sharing_unshare_page cpu 14 mfn 17227d p2mt d gfn 515709 did 2
(XEN) ===set p2mentry mfn 17227d p2mt d gfn 515709 did 2
(XEN) ==>3Debug for domain=2, gfn=7de7d, Debug page: MFN=17227d is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===gfn_to_mfn_unshare mfn 173b69 p2mt d gfn 509801 did 2
(XEN) ===mem_sharing_unshare_page cpu 4 mfn 173b69 p2mt d gfn 509801 did 2
(XEN) ==>3Debug for domain=2, gfn=7c769, Debug page: MFN=173b69 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 1724fa p2mt d gfn 515322 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1724fa p2mt d gfn 515322 did 2
(XEN) ===set p2mentry mfn 1724fa p2mt d gfn 515322 did 2
(XEN) ==>3Debug for domain=2, gfn=7dcfa, Debug page: MFN=1724fa is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===gfn_to_mfn_unshare mfn 1725fb p2mt d gfn 515579 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1725fb p2mt d gfn 515579 did 2
(XEN) ===set p2mentry mfn 1725fb p2mt d gfn 515579 did 2
(XEN) ==>3Debug for domain=2, gfn=7ddfb, Debug page: MFN=1725fb is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===gfn_to_mfn_unshare mfn 172518 p2mt d gfn 515352 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 172518 p2mt d gfn 515352 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd18, Debug page: MFN=172518 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 172519 p2mt d gfn 515353 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 172519 p2mt d gfn 515353 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd19, Debug page: MFN=172519 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 17255a p2mt d gfn 515418 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 17255a p2mt d gfn 515418 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd5a, Debug page: MFN=17255a is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 1725a3 p2mt d gfn 515491 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1725a3 p2mt d gfn 515491 did 2
(XEN) ==>3Debug for domain=2, gfn=7dda3, Debug page: MFN=1725a3 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 172564 p2mt d gfn 515428 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 172564 p2mt d gfn 515428 did 2
(XEN) ==>3Debug for domain=2, gfn=7dd64, Debug page: MFN=172564 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 172265 p2mt d gfn 515685 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 172265 p2mt d gfn 515685 did 2
(XEN) ==>3Debug for domain=2, gfn=7de65, Debug page: MFN=172265 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 173db1 p2mt d gfn 509361 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 173db1 p2mt d gfn 509361 did 2
(XEN) ==>3Debug for domain=2, gfn=7c5b1, Debug page: MFN=173db1 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 172278 p2mt d gfn 515704 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 172278 p2mt d gfn 515704 did 2
(XEN) ==>3Debug for domain=2, gfn=7de78, Debug page: MFN=172278 is ci=8000000000000001, ti=0, owner_id=2
(XEN) ===gfn_to_mfn_unshare mfn 1725b9 p2mt d gfn 515513 did 2
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1725b9 p2mt d gfn 515513 did 2
(XEN) ===set p2mentry mfn 1725b9 p2mt d gfn 515513 did 2
(XEN) ==>3Debug for domain=2, gfn=7ddb9, Debug page: MFN=1725b9 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 2)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 2)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 2)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 2)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 2)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 2)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 2)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 2)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 2)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 2)
blktap_sysfs_destroy
(XEN) ===p2m_teardown mfn 20018e p2mt d gfn 131 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20018e p2mt d gfn 131 did 2
(XEN) ==>1Debug for domain=2, gfn=83, Debug page: MFN=20018e is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20018d p2mt d gfn 132 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20018d p2mt d gfn 132 did 2
(XEN) ==>1Debug for domain=2, gfn=84, Debug page: MFN=20018d is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20018c p2mt d gfn 133 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20018c p2mt d gfn 133 did 2
(XEN) ==>1Debug for domain=2, gfn=85, Debug page: MFN=20018c is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20018b p2mt d gfn 134 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20018b p2mt d gfn 134 did 2
(XEN) ==>1Debug for domain=2, gfn=86, Debug page: MFN=20018b is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20018a p2mt d gfn 135 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20018a p2mt d gfn 135 did 2
(XEN) ==>1Debug for domain=2, gfn=87, Debug page: MFN=20018a is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200189 p2mt d gfn 136 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200189 p2mt d gfn 136 did 2
(XEN) ==>1Debug for domain=2, gfn=88, Debug page: MFN=200189 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200188 p2mt d gfn 137 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200188 p2mt d gfn 137 did 2
(XEN) ==>1Debug for domain=2, gfn=89, Debug page: MFN=200188 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200187 p2mt d gfn 138 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200187 p2mt d gfn 138 did 2
(XEN) ==>1Debug for domain=2, gfn=8a, Debug page: MFN=200187 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200186 p2mt d gfn 139 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200186 p2mt d gfn 139 did 2
(XEN) ==>1Debug for domain=2, gfn=8b, Debug page: MFN=200186 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200185 p2mt d gfn 140 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200185 p2mt d gfn 140 did 2
(XEN) ==>1Debug for domain=2, gfn=8c, Debug page: MFN=200185 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200184 p2mt d gfn 141 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200184 p2mt d gfn 141 did 2
(XEN) ==>1Debug for domain=2, gfn=8d, Debug page: MFN=200184 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200183 p2mt d gfn 142 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200183 p2mt d gfn 142 did 2
(XEN) ==>1Debug for domain=2, gfn=8e, Debug page: MFN=200183 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200182 p2mt d gfn 143 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200182 p2mt d gfn 143 did 2
(XEN) ==>1Debug for domain=2, gfn=8f, Debug page: MFN=200182 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200181 p2mt d gfn 144 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200181 p2mt d gfn 144 did 2
(XEN) ==>1Debug for domain=2, gfn=90, Debug page: MFN=200181 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200180 p2mt d gfn 145 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200180 p2mt d gfn 145 did 2
(XEN) ==>1Debug for domain=2, gfn=91, Debug page: MFN=200180 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20017f p2mt d gfn 146 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20017f p2mt d gfn 146 did 2
(XEN) ==>1Debug for domain=2, gfn=92, Debug page: MFN=20017f is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20017e p2mt d gfn 147 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20017e p2mt d gfn 147 did 2
(XEN) ==>1Debug for domain=2, gfn=93, Debug page: MFN=20017e is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200131 p2mt d gfn 256 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200131 p2mt d gfn 256 did 2
(XEN) ==>1Debug for domain=2, gfn=100, Debug page: MFN=200131 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200130 p2mt d gfn 257 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200130 p2mt d gfn 257 did 2
(XEN) ==>1Debug for domain=2, gfn=101, Debug page: MFN=200130 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20012f p2mt d gfn 258 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20012f p2mt d gfn 258 did 2
(XEN) ==>1Debug for domain=2, gfn=102, Debug page: MFN=20012f is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20012e p2mt d gfn 259 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20012e p2mt d gfn 259 did 2
(XEN) ==>1Debug for domain=2, gfn=103, Debug page: MFN=20012e is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20012d p2mt d gfn 260 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20012d p2mt d gfn 260 did 2
(XEN) ==>1Debug for domain=2, gfn=104, Debug page: MFN=20012d is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20012c p2mt d gfn 261 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20012c p2mt d gfn 261 did 2
(XEN) ==>1Debug for domain=2, gfn=105, Debug page: MFN=20012c is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20012b p2mt d gfn 262 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20012b p2mt d gfn 262 did 2
(XEN) ==>1Debug for domain=2, gfn=106, Debug page: MFN=20012b is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20012a p2mt d gfn 263 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20012a p2mt d gfn 263 did 2
(XEN) ==>1Debug for domain=2, gfn=107, Debug page: MFN=20012a is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200129 p2mt d gfn 264 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200129 p2mt d gfn 264 did 2
(XEN) ==>1Debug for domain=2, gfn=108, Debug page: MFN=200129 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200128 p2mt d gfn 265 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200128 p2mt d gfn 265 did 2
(XEN) ==>1Debug for domain=2, gfn=109, Debug page: MFN=200128 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200127 p2mt d gfn 266 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200127 p2mt d gfn 266 did 2
(XEN) ==>1Debug for domain=2, gfn=10a, Debug page: MFN=200127 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200126 p2mt d gfn 267 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200126 p2mt d gfn 267 did 2
(XEN) ==>1Debug for domain=2, gfn=10b, Debug page: MFN=200126 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200125 p2mt d gfn 268 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200125 p2mt d gfn 268 did 2
(XEN) ==>1Debug for domain=2, gfn=10c, Debug page: MFN=200125 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200124 p2mt d gfn 269 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200124 p2mt d gfn 269 did 2
(XEN) ==>1Debug for domain=2, gfn=10d, Debug page: MFN=200124 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200123 p2mt d gfn 270 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200123 p2mt d gfn 270 did 2
(XEN) ==>1Debug for domain=2, gfn=10e, Debug page: MFN=200123 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200122 p2mt d gfn 271 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200122 p2mt d gfn 271 did 2
(XEN) ==>1Debug for domain=2, gfn=10f, Debug page: MFN=200122 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200121 p2mt d gfn 272 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200121 p2mt d gfn 272 did 2
(XEN) ==>1Debug for domain=2, gfn=110, Debug page: MFN=200121 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200120 p2mt d gfn 273 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200120 p2mt d gfn 273 did 2
(XEN) ==>1Debug for domain=2, gfn=111, Debug page: MFN=200120 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20011f p2mt d gfn 274 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20011f p2mt d gfn 274 did 2
(XEN) ==>1Debug for domain=2, gfn=112, Debug page: MFN=20011f is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20011e p2mt d gfn 275 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20011e p2mt d gfn 275 did 2
(XEN) ==>1Debug for domain=2, gfn=113, Debug page: MFN=20011e is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20011d p2mt d gfn 276 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20011d p2mt d gfn 276 did 2
(XEN) ==>1Debug for domain=2, gfn=114, Debug page: MFN=20011d is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20011c p2mt d gfn 277 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20011c p2mt d gfn 277 did 2
(XEN) ==>1Debug for domain=2, gfn=115, Debug page: MFN=20011c is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20011b p2mt d gfn 278 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20011b p2mt d gfn 278 did 2
(XEN) ==>1Debug for domain=2, gfn=116, Debug page: MFN=20011b is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20011a p2mt d gfn 279 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20011a p2mt d gfn 279 did 2
(XEN) ==>1Debug for domain=2, gfn=117, Debug page: MFN=20011a is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200119 p2mt d gfn 280 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200119 p2mt d gfn 280 did 2
(XEN) ==>1Debug for domain=2, gfn=118, Debug page: MFN=200119 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200118 p2mt d gfn 281 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200118 p2mt d gfn 281 did 2
(XEN) ==>1Debug for domain=2, gfn=119, Debug page: MFN=200118 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200117 p2mt d gfn 282 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200117 p2mt d gfn 282 did 2
(XEN) ==>1Debug for domain=2, gfn=11a, Debug page: MFN=200117 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200116 p2mt d gfn 283 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200116 p2mt d gfn 283 did 2
(XEN) ==>1Debug for domain=2, gfn=11b, Debug page: MFN=200116 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200115 p2mt d gfn 284 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200115 p2mt d gfn 284 did 2
(XEN) ==>1Debug for domain=2, gfn=11c, Debug page: MFN=200115 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200114 p2mt d gfn 285 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200114 p2mt d gfn 285 did 2
(XEN) ==>1Debug for domain=2, gfn=11d, Debug page: MFN=200114 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200113 p2mt d gfn 286 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200113 p2mt d gfn 286 did 2
(XEN) ==>1Debug for domain=2, gfn=11e, Debug page: MFN=200113 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200112 p2mt d gfn 287 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200112 p2mt d gfn 287 did 2
(XEN) ==>1Debug for domain=2, gfn=11f, Debug page: MFN=200112 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200111 p2mt d gfn 288 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200111 p2mt d gfn 288 did 2
(XEN) ==>1Debug for domain=2, gfn=120, Debug page: MFN=200111 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200110 p2mt d gfn 289 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200110 p2mt d gfn 289 did 2
(XEN) ==>1Debug for domain=2, gfn=121, Debug page: MFN=200110 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20010f p2mt d gfn 290 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20010f p2mt d gfn 290 did 2
(XEN) ==>1Debug for domain=2, gfn=122, Debug page: MFN=20010f is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20010e p2mt d gfn 291 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20010e p2mt d gfn 291 did 2
(XEN) ==>1Debug for domain=2, gfn=123, Debug page: MFN=20010e is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20010d p2mt d gfn 292 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20010d p2mt d gfn 292 did 2
(XEN) ==>1Debug for domain=2, gfn=124, Debug page: MFN=20010d is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20010c p2mt d gfn 293 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20010c p2mt d gfn 293 did 2
(XEN) ==>1Debug for domain=2, gfn=125, Debug page: MFN=20010c is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20010b p2mt d gfn 294 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20010b p2mt d gfn 294 did 2
(XEN) ==>1Debug for domain=2, gfn=126, Debug page: MFN=20010b is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 20010a p2mt d gfn 295 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20010a p2mt d gfn 295 did 2
(XEN) ==>1Debug for domain=2, gfn=127, Debug page: MFN=20010a is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200109 p2mt d gfn 296 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200109 p2mt d gfn 296 did 2
(XEN) ==>1Debug for domain=2, gfn=128, Debug page: MFN=200109 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200108 p2mt d gfn 297 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200108 p2mt d gfn 297 did 2
(XEN) ==>1Debug for domain=2, gfn=129, Debug page: MFN=200108 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200107 p2mt d gfn 298 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200107 p2mt d gfn 298 did 2
(XEN) ==>1Debug for domain=2, gfn=12a, Debug page: MFN=200107 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200106 p2mt d gfn 299 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200106 p2mt d gfn 299 did 2
(XEN) ==>1Debug for domain=2, gfn=12b, Debug page: MFN=200106 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200105 p2mt d gfn 300 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200105 p2mt d gfn 300 did 2
(XEN) ==>1Debug for domain=2, gfn=12c, Debug page: MFN=200105 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200104 p2mt d gfn 301 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200104 p2mt d gfn 301 did 2
(XEN) ==>1Debug for domain=2, gfn=12d, Debug page: MFN=200104 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200103 p2mt d gfn 302 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200103 p2mt d gfn 302 did 2
(XEN) ==>1Debug for domain=2, gfn=12e, Debug page: MFN=200103 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200102 p2mt d gfn 303 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 200102 p2mt d gfn 303 did 2
(XEN) ==>1Debug for domain=2, gfn=12f, Debug page: MFN=200102 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===p2m_teardown mfn 200101 p2mt d gfn 304 did 2
(XEN) ===mem_sharing_unshare_page cpu 1 mfn 20010blktap_sysfs_create: adding attributes for dev ffff8801581d5c00

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-10 10:30           ` Tim Deegan
@ 2011-01-11  1:49             ` MaoXiaoyun
  2011-01-11  6:32               ` Jui-Hao Chiang
  2011-01-12  8:01               ` Fix mem_sharing on Xen 4.0.0 MaoXiaoyun
  2011-01-12 11:50             ` Re: [PATCH] mem_sharing: fix race condition of nominate and unshare George Dunlap
  1 sibling, 2 replies; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-11  1:49 UTC (permalink / raw)
  To: tim.deegan, juihaochiang; +Cc: xen devel


[-- Attachment #1.1: Type: text/plain, Size: 4141 bytes --]


Hi Tim:
 
      Sorry for the inconvenience; I think it's better now that I reply directly from hotmail.
      
      It looks like page_set_owner() is forgotten in unshare(), right?
      After I added this code, the error log disappeared.
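 
      Roughly what I added (just a sketch from memory, not the exact diff;
      page and d are the local variables of mem_sharing_unshare_page()):
 
          /* in mem_sharing_unshare_page(), after the page has been handed
           * back to the guest, set the owner explicitly so later owner
           * checks see the guest domain rather than the shared owner */
          page_set_owner(page, d);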
 
 
     But unfortunately, I hit an HVM blue screen (Windows) with the serial output below.
     Full log attached.
 
(XEN) ==>3Debug for domain=1, gfn=7de15, Debug page: MFN=171c15 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171fd6 p2mt d gfn 515542 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171fd6 p2mt d gfn 515542 did 1
(XEN) ==>3Debug for domain=1, gfn=7ddd6, Debug page: MFN=171fd6 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c57 p2mt d gfn 515671 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171c57 p2mt d gfn 515671 did 1
(XEN) ==>3Debug for domain=1, gfn=7de57, Debug page: MFN=171c57 is ci=8000000000000001, ti=0, owner_id=1
(XEN) printk: 32 messages suppressed.
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)

 
> Date: Mon, 10 Jan 2011 10:30:41 +0000
> From: Tim.Deegan@citrix.com
> To: juihaochiang@gmail.com
> CC: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> 
> Hi, 
> 
> Can you please (both of you) sort out your mail clients to do proper
> indenting of quoted text? The plain-text versions don't have any
> quote prefix, which makes them confusing to read.
> 
> At 04:57 +0000 on 10 Jan (1294635461), Jui-Hao Chiang wrote:
> > Hi, Tim:
> > 
> > On Sat, Jan 8, 2011 at 12:09 AM, Tim Deegan <Tim.Deegan@citrix.com> wrote:
> > At 06:02 +0000 on 07 Jan (1294380120), Jui-Hao Chiang wrote:
> > > One of the solution is to
> > > (a) Simply replace shr_lock with p2m_lock.
> > 
> > I think this is the best choice. If we find that the p2m lock is a
> > bottleneck we can address it later.
> > 
> > 
> > Just to be skeptic.
> > Why doesn't gfn_to_mfn() take the p2m lock when querying the p2m type?
> 
> Because gfn->mfn lookups happen very frequently and requiring the lock
> would be a performance bottleneck on multi-vcpu guests.
> 
> > Is there any guarantee that the resulting type is correct and trustworthy?
> 
> Yes. It's not perfect (and as I said I need to overhaul the locking
> here) but if the p2m lookup only reads each level once and the p2m
> updates are careful about the order they change things in, the worst
> that can happen is another CPU sees a slightly out-of-date value.
> 
> There is at least one issue there (now that some p2m code frees old p2m
> pages there's a potential race against other readers that needs a
> tlbflush-timestamp-style interlock), but TBH there are other things that
> need fixing first.
> 
> Tim.
> 
> > For example:
> > (1) User1 query the p2m type:
> > gfn_to_mfn(... &p2mt);
> > if (p2mt == p2m_ram_rw) /* do something based on the p2m type result? */
> > 
> > (2) User2 modify the p2m type
> > p2m_lock(p2m);
> > set_p2m_entry(..... p2m_ram_rw);
> > p2m_unlock(p2m);
> > 
> > Thanks,
> > Jui-Hao
> 
> -- 
> Tim Deegan <Tim.Deegan@citrix.com>
> Principal Software Engineer, Xen Platform Team
> Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 5058 bytes --]

[-- Attachment #2: log.txt --]
[-- Type: text/plain, Size: 46584 bytes --]

(XEN) ===>mem_sharing_share_pages mfn 2debd9 gfn 521457 p2md d did 1
(XEN) ===hvm_hap_nested_page_fault mfn 170733 p2mt d gfn 521523 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170733 p2mt d gfn 521523 did 1
(XEN) ==>3Debug for domain=1, gfn=7f533, Debug page: MFN=170733 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 1706f5 p2mt d gfn 521461 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 1706f5 p2mt d gfn 521461 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4f5, Debug page: MFN=1706f5 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 17068b p2mt d gfn 521355 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 17068b p2mt d gfn 521355 did 1
(XEN) ==>3Debug for domain=1, gfn=7f48b, Debug page: MFN=17068b is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 17068c p2mt d gfn 521356 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 17068c p2mt d gfn 521356 did 1
(XEN) ==>3Debug for domain=1, gfn=7f48c, Debug page: MFN=17068c is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170693 p2mt d gfn 521363 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170693 p2mt d gfn 521363 did 1
(XEN) ==>3Debug for domain=1, gfn=7f493, Debug page: MFN=170693 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 17065e p2mt d gfn 521310 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 17065e p2mt d gfn 521310 did 1
(XEN) ==>3Debug for domain=1, gfn=7f45e, Debug page: MFN=17065e is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 17065f p2mt d gfn 521311 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 17065f p2mt d gfn 521311 did 1
(XEN) ==>3Debug for domain=1, gfn=7f45f, Debug page: MFN=17065f is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 1706a0 p2mt d gfn 521376 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 1706a0 p2mt d gfn 521376 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4a0, Debug page: MFN=1706a0 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 2debd9 p2mt d gfn 521457 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 2debd9 p2mt d gfn 521457 did 1
(XEN) ===set p2mentry mfn 2debd9 p2mt d gfn 521457 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4f1, Debug page: MFN=2debd9 is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===hvm_hap_nested_page_fault mfn 1708ba p2mt d gfn 520890 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 1708ba p2mt d gfn 520890 did 1
(XEN) ==>3Debug for domain=1, gfn=7f2ba, Debug page: MFN=1708ba is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170bc7 p2mt d gfn 520647 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170bc7 p2mt d gfn 520647 did 1
(XEN) ==>3Debug for domain=1, gfn=7f1c7, Debug page: MFN=170bc7 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===>mem_sharing_share_pages mfn 2debdb gfn 520841 p2md d did 1
(XEN) ===gfn_to_mfn_unshare mfn 170660 p2mt d gfn 521312 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170660 p2mt d gfn 521312 did 1
(XEN) ==>3Debug for domain=1, gfn=7f460, Debug page: MFN=170660 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 1706a2 p2mt d gfn 521378 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1706a2 p2mt d gfn 521378 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4a2, Debug page: MFN=1706a2 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 1706a4 p2mt d gfn 521380 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 1706a4 p2mt d gfn 521380 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4a4, Debug page: MFN=1706a4 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170626 p2mt d gfn 521254 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170626 p2mt d gfn 521254 did 1
(XEN) ==>3Debug for domain=1, gfn=7f426, Debug page: MFN=170626 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 17064c p2mt d gfn 521292 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 17064c p2mt d gfn 521292 did 1
(XEN) ==>3Debug for domain=1, gfn=7f44c, Debug page: MFN=17064c is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170628 p2mt d gfn 521256 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170628 p2mt d gfn 521256 did 1
(XEN) ==>3Debug for domain=1, gfn=7f428, Debug page: MFN=170628 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===>mem_sharing_share_pages mfn 2debdc gfn 521256 p2md d did 1
(XEN) ===hvm_hap_nested_page_fault mfn 17087d p2mt d gfn 520829 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 17087d p2mt d gfn 520829 did 1
(XEN) ==>3Debug for domain=1, gfn=7f27d, Debug page: MFN=17087d is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170940 p2mt d gfn 521024 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170940 p2mt d gfn 521024 did 1
(XEN) ==>3Debug for domain=1, gfn=7f340, Debug page: MFN=170940 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170602 p2mt d gfn 521218 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170602 p2mt d gfn 521218 did 1
(XEN) ==>3Debug for domain=1, gfn=7f402, Debug page: MFN=170602 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170604 p2mt d gfn 521220 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170604 p2mt d gfn 521220 did 1
(XEN) ==>3Debug for domain=1, gfn=7f404, Debug page: MFN=170604 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170646 p2mt d gfn 521286 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170646 p2mt d gfn 521286 did 1
(XEN) ==>3Debug for domain=1, gfn=7f446, Debug page: MFN=170646 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170688 p2mt d gfn 521352 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170688 p2mt d gfn 521352 did 1
(XEN) ==>3Debug for domain=1, gfn=7f488, Debug page: MFN=170688 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 17064a p2mt d gfn 521290 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 17064a p2mt d gfn 521290 did 1
(XEN) ==>3Debug for domain=1, gfn=7f44a, Debug page: MFN=17064a is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 17060e p2mt d gfn 521230 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 17060e p2mt d gfn 521230 did 1
(XEN) ==>3Debug for domain=1, gfn=7f40e, Debug page: MFN=17060e is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170610 p2mt d gfn 521232 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170610 p2mt d gfn 521232 did 1
(XEN) ==>3Debug for domain=1, gfn=7f410, Debug page: MFN=170610 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170612 p2mt d gfn 521234 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170612 p2mt d gfn 521234 did 1
(XEN) ==>3Debug for domain=1, gfn=7f412, Debug page: MFN=170612 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170654 p2mt d gfn 521300 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170654 p2mt d gfn 521300 did 1
(XEN) ==>3Debug for domain=1, gfn=7f454, Debug page: MFN=170654 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170616 p2mt d gfn 521238 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170616 p2mt d gfn 521238 did 1
(XEN) ==>3Debug for domain=1, gfn=7f416, Debug page: MFN=170616 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170618 p2mt d gfn 521240 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170618 p2mt d gfn 521240 did 1
(XEN) ==>3Debug for domain=1, gfn=7f418, Debug page: MFN=170618 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 17061a p2mt d gfn 521242 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 17061a p2mt d gfn 521242 did 1
(XEN) ==>3Debug for domain=1, gfn=7f41a, Debug page: MFN=17061a is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 17065c p2mt d gfn 521308 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 17065c p2mt d gfn 521308 did 1
(XEN) ==>3Debug for domain=1, gfn=7f45c, Debug page: MFN=17065c is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 17065e p2mt d gfn 521310 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 17065e p2mt d gfn 521310 did 1
(XEN) ==>3Debug for domain=1, gfn=7f45e, Debug page: MFN=17065e is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 17066a p2mt d gfn 521322 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 17066a p2mt d gfn 521322 did 1
(XEN) ==>3Debug for domain=1, gfn=7f46a, Debug page: MFN=17066a is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 1706ac p2mt d gfn 521388 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1706ac p2mt d gfn 521388 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4ac, Debug page: MFN=1706ac is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 1706ee p2mt d gfn 521454 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1706ee p2mt d gfn 521454 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4ee, Debug page: MFN=1706ee is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 1706b0 p2mt d gfn 521392 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1706b0 p2mt d gfn 521392 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4b0, Debug page: MFN=1706b0 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 1706b2 p2mt d gfn 521394 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1706b2 p2mt d gfn 521394 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4b2, Debug page: MFN=1706b2 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 1706b4 p2mt d gfn 521396 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1706b4 p2mt d gfn 521396 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4b4, Debug page: MFN=1706b4 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 1706b6 p2mt d gfn 521398 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1706b6 p2mt d gfn 521398 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4b6, Debug page: MFN=1706b6 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 1706b8 p2mt d gfn 521400 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1706b8 p2mt d gfn 521400 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4b8, Debug page: MFN=1706b8 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 1706ba p2mt d gfn 521402 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1706ba p2mt d gfn 521402 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4ba, Debug page: MFN=1706ba is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 17067c p2mt d gfn 521340 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 17067c p2mt d gfn 521340 did 1
(XEN) ==>3Debug for domain=1, gfn=7f47c, Debug page: MFN=17067c is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 1706be p2mt d gfn 521406 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 1706be p2mt d gfn 521406 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4be, Debug page: MFN=1706be is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170bf1 p2mt d gfn 520689 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170bf1 p2mt d gfn 520689 did 1
(XEN) ==>3Debug for domain=1, gfn=7f1f1, Debug page: MFN=170bf1 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 170d80 p2mt d gfn 520064 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 170d80 p2mt d gfn 520064 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef80, Debug page: MFN=170d80 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a0f p2mt d gfn 520207 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170a0f p2mt d gfn 520207 did 1
(XEN) ==>3Debug for domain=1, gfn=7f00f, Debug page: MFN=170a0f is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a92 p2mt d gfn 520338 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170a92 p2mt d gfn 520338 did 1
(XEN) ==>3Debug for domain=1, gfn=7f092, Debug page: MFN=170a92 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a96 p2mt d gfn 520342 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170a96 p2mt d gfn 520342 did 1
(XEN) ==>3Debug for domain=1, gfn=7f096, Debug page: MFN=170a96 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a98 p2mt d gfn 520344 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170a98 p2mt d gfn 520344 did 1
(XEN) ==>3Debug for domain=1, gfn=7f098, Debug page: MFN=170a98 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a19 p2mt d gfn 520217 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170a19 p2mt d gfn 520217 did 1
(XEN) ==>3Debug for domain=1, gfn=7f019, Debug page: MFN=170a19 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170b1c p2mt d gfn 520476 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170b1c p2mt d gfn 520476 did 1
(XEN) ==>3Debug for domain=1, gfn=7f11c, Debug page: MFN=170b1c is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a9d p2mt d gfn 520349 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170a9d p2mt d gfn 520349 did 1
(XEN) ==>3Debug for domain=1, gfn=7f09d, Debug page: MFN=170a9d is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170b1e p2mt d gfn 520478 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170b1e p2mt d gfn 520478 did 1
(XEN) ==>3Debug for domain=1, gfn=7f11e, Debug page: MFN=170b1e is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a9f p2mt d gfn 520351 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170a9f p2mt d gfn 520351 did 1
(XEN) ==>3Debug for domain=1, gfn=7f09f, Debug page: MFN=170a9f is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ae0 p2mt d gfn 520416 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170ae0 p2mt d gfn 520416 did 1
(XEN) ==>3Debug for domain=1, gfn=7f0e0, Debug page: MFN=170ae0 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170aa5 p2mt d gfn 520357 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170aa5 p2mt d gfn 520357 did 1
(XEN) ==>3Debug for domain=1, gfn=7f0a5, Debug page: MFN=170aa5 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a66 p2mt d gfn 520294 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170a66 p2mt d gfn 520294 did 1
(XEN) ==>3Debug for domain=1, gfn=7f066, Debug page: MFN=170a66 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170aa9 p2mt d gfn 520361 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170aa9 p2mt d gfn 520361 did 1
(XEN) ==>3Debug for domain=1, gfn=7f0a9, Debug page: MFN=170aa9 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170cc0 p2mt d gfn 519872 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170cc0 p2mt d gfn 519872 did 1
(XEN) ==>3Debug for domain=1, gfn=7eec0, Debug page: MFN=170cc0 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d41 p2mt d gfn 520001 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d41 p2mt d gfn 520001 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef41, Debug page: MFN=170d41 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d07 p2mt d gfn 519943 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d07 p2mt d gfn 519943 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef07, Debug page: MFN=170d07 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a16 p2mt d gfn 520214 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170a16 p2mt d gfn 520214 did 1
(XEN) ==>3Debug for domain=1, gfn=7f016, Debug page: MFN=170a16 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170dd7 p2mt d gfn 520151 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170dd7 p2mt d gfn 520151 did 1
(XEN) ==>3Debug for domain=1, gfn=7efd7, Debug page: MFN=170dd7 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a1f p2mt d gfn 520223 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170a1f p2mt d gfn 520223 did 1
(XEN) ==>3Debug for domain=1, gfn=7f01f, Debug page: MFN=170a1f is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ab3 p2mt d gfn 520371 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170ab3 p2mt d gfn 520371 did 1
(XEN) ==>3Debug for domain=1, gfn=7f0b3, Debug page: MFN=170ab3 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170cc5 p2mt d gfn 519877 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170cc5 p2mt d gfn 519877 did 1
(XEN) ==>3Debug for domain=1, gfn=7eec5, Debug page: MFN=170cc5 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170c87 p2mt d gfn 519815 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170c87 p2mt d gfn 519815 did 1
(XEN) ==>3Debug for domain=1, gfn=7ee87, Debug page: MFN=170c87 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d0b p2mt d gfn 519947 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d0b p2mt d gfn 519947 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef0b, Debug page: MFN=170d0b is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d53 p2mt d gfn 520019 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d53 p2mt d gfn 520019 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef53, Debug page: MFN=170d53 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170cd9 p2mt d gfn 519897 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170cd9 p2mt d gfn 519897 did 1
(XEN) ==>3Debug for domain=1, gfn=7eed9, Debug page: MFN=170cd9 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d5d p2mt d gfn 520029 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d5d p2mt d gfn 520029 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef5d, Debug page: MFN=170d5d is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170dec p2mt d gfn 520172 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170dec p2mt d gfn 520172 did 1
(XEN) ==>3Debug for domain=1, gfn=7efec, Debug page: MFN=170dec is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170dad p2mt d gfn 520109 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170dad p2mt d gfn 520109 did 1
(XEN) ==>3Debug for domain=1, gfn=7efad, Debug page: MFN=170dad is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a6e p2mt d gfn 520302 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170a6e p2mt d gfn 520302 did 1
(XEN) ==>3Debug for domain=1, gfn=7f06e, Debug page: MFN=170a6e is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170df0 p2mt d gfn 520176 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170df0 p2mt d gfn 520176 did 1
(XEN) ==>3Debug for domain=1, gfn=7eff0, Debug page: MFN=170df0 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170df4 p2mt d gfn 520180 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170df4 p2mt d gfn 520180 did 1
(XEN) ==>3Debug for domain=1, gfn=7eff4, Debug page: MFN=170df4 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d80 p2mt d gfn 520064 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170d80 p2mt d gfn 520064 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef80, Debug page: MFN=170d80 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ce8 p2mt d gfn 519912 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170ce8 p2mt d gfn 519912 did 1
(XEN) ==>3Debug for domain=1, gfn=7eee8, Debug page: MFN=170ce8 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170daa p2mt d gfn 520106 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170daa p2mt d gfn 520106 did 1
(XEN) ==>3Debug for domain=1, gfn=7efaa, Debug page: MFN=170daa is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170db4 p2mt d gfn 520116 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170db4 p2mt d gfn 520116 did 1
(XEN) ==>3Debug for domain=1, gfn=7efb4, Debug page: MFN=170db4 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d75 p2mt d gfn 520053 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d75 p2mt d gfn 520053 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef75, Debug page: MFN=170d75 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170db6 p2mt d gfn 520118 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170db6 p2mt d gfn 520118 did 1
(XEN) ==>3Debug for domain=1, gfn=7efb6, Debug page: MFN=170db6 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170db7 p2mt d gfn 520119 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170db7 p2mt d gfn 520119 did 1
(XEN) ==>3Debug for domain=1, gfn=7efb7, Debug page: MFN=170db7 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170db8 p2mt d gfn 520120 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170db8 p2mt d gfn 520120 did 1
(XEN) ==>3Debug for domain=1, gfn=7efb8, Debug page: MFN=170db8 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d39 p2mt d gfn 519993 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d39 p2mt d gfn 519993 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef39, Debug page: MFN=170d39 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170dba p2mt d gfn 520122 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170dba p2mt d gfn 520122 did 1
(XEN) ==>3Debug for domain=1, gfn=7efba, Debug page: MFN=170dba is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170cfb p2mt d gfn 519931 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170cfb p2mt d gfn 519931 did 1
(XEN) ==>3Debug for domain=1, gfn=7eefb, Debug page: MFN=170cfb is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d7c p2mt d gfn 520060 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d7c p2mt d gfn 520060 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef7c, Debug page: MFN=170d7c is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170cfd p2mt d gfn 519933 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170cfd p2mt d gfn 519933 did 1
(XEN) ==>3Debug for domain=1, gfn=7eefd, Debug page: MFN=170cfd is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170cbf p2mt d gfn 519871 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170cbf p2mt d gfn 519871 did 1
(XEN) ==>3Debug for domain=1, gfn=7eebf, Debug page: MFN=170cbf is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170fc0 p2mt d gfn 519616 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170fc0 p2mt d gfn 519616 did 1
(XEN) ==>3Debug for domain=1, gfn=7edc0, Debug page: MFN=170fc0 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d80 p2mt d gfn 520064 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d80 p2mt d gfn 520064 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef80, Debug page: MFN=170d80 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170c41 p2mt d gfn 519745 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170c41 p2mt d gfn 519745 did 1
(XEN) ==>3Debug for domain=1, gfn=7ee41, Debug page: MFN=170c41 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170c82 p2mt d gfn 519810 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170c82 p2mt d gfn 519810 did 1
(XEN) ==>3Debug for domain=1, gfn=7ee82, Debug page: MFN=170c82 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170c43 p2mt d gfn 519747 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170c43 p2mt d gfn 519747 did 1
(XEN) ==>3Debug for domain=1, gfn=7ee43, Debug page: MFN=170c43 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170c84 p2mt d gfn 519812 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170c84 p2mt d gfn 519812 did 1
(XEN) ==>3Debug for domain=1, gfn=7ee84, Debug page: MFN=170c84 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170cc6 p2mt d gfn 519878 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170cc6 p2mt d gfn 519878 did 1
(XEN) ==>3Debug for domain=1, gfn=7eec6, Debug page: MFN=170cc6 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170c07 p2mt d gfn 519687 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170c07 p2mt d gfn 519687 did 1
(XEN) ==>3Debug for domain=1, gfn=7ee07, Debug page: MFN=170c07 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d08 p2mt d gfn 519944 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d08 p2mt d gfn 519944 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef08, Debug page: MFN=170d08 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d09 p2mt d gfn 519945 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d09 p2mt d gfn 519945 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef09, Debug page: MFN=170d09 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d0a p2mt d gfn 519946 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d0a p2mt d gfn 519946 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef0a, Debug page: MFN=170d0a is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170c8b p2mt d gfn 519819 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170c8b p2mt d gfn 519819 did 1
(XEN) ==>3Debug for domain=1, gfn=7ee8b, Debug page: MFN=170c8b is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ccc p2mt d gfn 519884 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170ccc p2mt d gfn 519884 did 1
(XEN) ==>3Debug for domain=1, gfn=7eecc, Debug page: MFN=170ccc is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170c8d p2mt d gfn 519821 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170c8d p2mt d gfn 519821 did 1
(XEN) ==>3Debug for domain=1, gfn=7ee8d, Debug page: MFN=170c8d is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170c8e p2mt d gfn 519822 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170c8e p2mt d gfn 519822 did 1
(XEN) ==>3Debug for domain=1, gfn=7ee8e, Debug page: MFN=170c8e is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170c4f p2mt d gfn 519759 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170c4f p2mt d gfn 519759 did 1
(XEN) ==>3Debug for domain=1, gfn=7ee4f, Debug page: MFN=170c4f is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d14 p2mt d gfn 519956 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d14 p2mt d gfn 519956 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef14, Debug page: MFN=170d14 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170a59 p2mt d gfn 520281 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170a59 p2mt d gfn 520281 did 1
(XEN) ==>3Debug for domain=1, gfn=7f059, Debug page: MFN=170a59 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170cda p2mt d gfn 519898 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170cda p2mt d gfn 519898 did 1
(XEN) ==>3Debug for domain=1, gfn=7eeda, Debug page: MFN=170cda is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170cdb p2mt d gfn 519899 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170cdb p2mt d gfn 519899 did 1
(XEN) ==>3Debug for domain=1, gfn=7eedb, Debug page: MFN=170cdb is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d5c p2mt d gfn 520028 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d5c p2mt d gfn 520028 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef5c, Debug page: MFN=170d5c is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170cdd p2mt d gfn 519901 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170cdd p2mt d gfn 519901 did 1
(XEN) ==>3Debug for domain=1, gfn=7eedd, Debug page: MFN=170cdd is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d5e p2mt d gfn 520030 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d5e p2mt d gfn 520030 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef5e, Debug page: MFN=170d5e is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170cdf p2mt d gfn 519903 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170cdf p2mt d gfn 519903 did 1
(XEN) ==>3Debug for domain=1, gfn=7eedf, Debug page: MFN=170cdf is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ce3 p2mt d gfn 519907 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170ce3 p2mt d gfn 519907 did 1
(XEN) ==>3Debug for domain=1, gfn=7eee3, Debug page: MFN=170ce3 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170d24 p2mt d gfn 519972 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170d24 p2mt d gfn 519972 did 1
(XEN) ==>3Debug for domain=1, gfn=7ef24, Debug page: MFN=170d24 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ce5 p2mt d gfn 519909 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170ce5 p2mt d gfn 519909 did 1
(XEN) ==>3Debug for domain=1, gfn=7eee5, Debug page: MFN=170ce5 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ca6 p(XEN) ===hvm_hap_nested_page_fault mfn 170ec5 p2mt d gfn 519365 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170ec5 p2mt d gfn 519365 did 1
(XEN) ==>3Debug for domain=1, gfn=7ecc5, Debug page: MFN=170ec5 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170f55 p2mt d gfn 519509 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170f55 p2mt d gfn 519509 did 1
(XEN) ==>3Debug for domain=1, gfn=7ed55, Debug page: MFN=170f55 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170f97 p2mt d gfn 519575 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170f97 p2mt d gfn 519575 did 1
(XEN) ==>3Debug for domain=1, gfn=7ed97, Debug page: MFN=170f97 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ed9 p2mt d gfn 519385 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170ed9 p2mt d gfn 519385 did 1
(XEN) ==>3Debug for domain=1, gfn=7ecd9, Debug page: MFN=170ed9 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 1703dc p2mt d gfn 522716 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 1703dc p2mt d gfn 522716 did 1
(XEN) ==>3Debug for domain=1, gfn=7f9dc, Debug page: MFN=1703dc is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ee8 p2mt d gfn 519400 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170ee8 p2mt d gfn 519400 did 1
(XEN) ==>3Debug for domain=1, gfn=7ece8, Debug page: MFN=170ee8 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170f6d p2mt d gfn 519533 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170f6d p2mt d gfn 519533 did 1
(XEN) ==>3Debug for domain=1, gfn=7ed6d, Debug page: MFN=170f6d is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170fef p2mt d gfn 519663 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170fef p2mt d gfn 519663 did 1
(XEN) ==>3Debug for domain=1, gfn=7edef, Debug page: MFN=170fef is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 1706f0 p2mt d gfn 521456 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 1706f0 p2mt d gfn 521456 did 1
(XEN) ==>3Debug for domain=1, gfn=7f4f0, Debug page: MFN=1706f0 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170fee p2mt d gfn 519662 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170fee p2mt d gfn 519662 did 1
(XEN) ==>3Debug for domain=1, gfn=7edee, Debug page: MFN=170fee is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ff2 p2mt d gfn 519666 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170ff2 p2mt d gfn 519666 did 1
(XEN) ==>3Debug for domain=1, gfn=7edf2, Debug page: MFN=170ff2 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170fb5 p2mt d gfn 519605 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170fb5 p2mt d gfn 519605 did 1
(XEN) ==>3Debug for domain=1, gfn=7edb5, Debug page: MFN=170fb5 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170f3b p2mt d gfn 519483 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170f3b p2mt d gfn 519483 did 1
(XEN) ==>3Debug for domain=1, gfn=7ed3b, Debug page: MFN=170f3b is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ec0 p2mt d gfn 519360 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 170ec0 p2mt d gfn 519360 did 1
(XEN) ==>3Debug for domain=1, gfn=7ecc0, Debug page: MFN=170ec0 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 170ec2 p2mt d gfn 519362 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 170ec2 p2mt d gfn 519362 did 1
(XEN) ==>3Debug for domain=1, gfn=7ecc2, Debug page: MFN=170ec2 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171cfd p2mt d gfn 515837 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171cfd p2mt d gfn 515837 did 1
(XEN) ==>3Debug for domain=1, gfn=7defd, Debug page: MFN=171cfd is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171cde p2mt d gfn 515806 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171cde p2mt d gfn 515806 did 1
(XEN) ==>3Debug for domain=1, gfn=7dede, Debug page: MFN=171cde is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171dbe p2mt d gfn 516030 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171dbe p2mt d gfn 516030 did 1
(XEN) ==>3Debug for domain=1, gfn=7dfbe, Debug page: MFN=171dbe is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c92 p2mt d gfn 515730 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171c92 p2mt d gfn 515730 did 1
(XEN) ==>3Debug for domain=1, gfn=7de92, Debug page: MFN=171c92 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171d53 p2mt d gfn 515923 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171d53 p2mt d gfn 515923 did 1
(XEN) ==>3Debug for domain=1, gfn=7df53, Debug page: MFN=171d53 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171cd4 p2mt d gfn 515796 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171cd4 p2mt d gfn 515796 did 1
(XEN) ==>3Debug for domain=1, gfn=7ded4, Debug page: MFN=171cd4 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c55 p2mt d gfn 515669 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171c55 p2mt d gfn 515669 did 1
(XEN) ==>3Debug for domain=1, gfn=7de55, Debug page: MFN=171c55 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c16 p2mt d gfn 515606 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171c16 p2mt d gfn 515606 did 1
(XEN) ==>3Debug for domain=1, gfn=7de16, Debug page: MFN=171c16 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c97 p2mt d gfn 515735 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171c97 p2mt d gfn 515735 did 1
(XEN) ==>3Debug for domain=1, gfn=7de97, Debug page: MFN=171c97 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171d98 p2mt d gfn 515992 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171d98 p2mt d gfn 515992 did 1
(XEN) ==>3Debug for domain=1, gfn=7df98, Debug page: MFN=171d98 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c19 p2mt d gfn 515609 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171c19 p2mt d gfn 515609 did 1
(XEN) ==>3Debug for domain=1, gfn=7de19, Debug page: MFN=171c19 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c1a p2mt d gfn 515610 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171c1a p2mt d gfn 515610 did 1
(XEN) ==>3Debug for domain=1, gfn=7de1a, Debug page: MFN=171c1a is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171fdb p2mt d gfn 515547 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171fdb p2mt d gfn 515547 did 1
(XEN) ==>3Debug for domain=1, gfn=7dddb, Debug page: MFN=171fdb is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171d5c p2mt d gfn 515932 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171d5c p2mt d gfn 515932 did 1
(XEN) ==>3Debug for domain=1, gfn=7df5c, Debug page: MFN=171d5c is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171f9d p2mt d gfn 515485 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171f9d p2mt d gfn 515485 did 1
(XEN) ==>3Debug for domain=1, gfn=7dd9d, Debug page: MFN=171f9d is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c1e p2mt d gfn 515614 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171c1e p2mt d gfn 515614 did 1
(XEN) ==>3Debug for domain=1, gfn=7de1e, Debug page: MFN=171c1e is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c9f p2mt d gfn 515743 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171c9f p2mt d gfn 515743 did 1
(XEN) ==>3Debug for domain=1, gfn=7de9f, Debug page: MFN=171c9f is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171f3f p2mt d gfn 515391 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171f3f p2mt d gfn 515391 did 1
(XEN) ==>3Debug for domain=1, gfn=7dd3f, Debug page: MFN=171f3f is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171fc0 p2mt d gfn 515520 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171fc0 p2mt d gfn 515520 did 1
(XEN) ==>3Debug for domain=1, gfn=7ddc0, Debug page: MFN=171fc0 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171f41 p2mt d gfn 515393 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171f41 p2mt d gfn 515393 did 1
(XEN) ==>3Debug for domain=1, gfn=7dd41, Debug page: MFN=171f41 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===>mem_sharing_share_pages mfn 171fed gfn 515788 p2md d did 1
(XEN) ===hvm_hap_nested_page_fault mfn 171c02 p2mt d gfn 515586 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171c02 p2mt d gfn 515586 did 1
(XEN) ==>3Debug for domain=1, gfn=7de02, Debug page: MFN=171c02 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171cc3 p2mt d gfn 515779 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171cc3 p2mt d gfn 515779 did 1
(XEN) ==>3Debug for domain=1, gfn=7dec3, Debug page: MFN=171cc3 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171cc4 p2mt d gfn 515780 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171cc4 p2mt d gfn 515780 did 1
(XEN) ==>3Debug for domain=1, gfn=7dec4, Debug page: MFN=171cc4 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c85 p2mt d gfn 515717 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171c85 p2mt d gfn 515717 did 1
(XEN) ==>3Debug for domain=1, gfn=7de85, Debug page: MFN=171c85 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171cc6 p2mt d gfn 515782 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171cc6 p2mt d gfn 515782 did 1
(XEN) ==>3Debug for domain=1, gfn=7dec6, Debug page: MFN=171cc6 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c07 p2mt d gfn 515591 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171c07 p2mt d gfn 515591 did 1
(XEN) ==>3Debug for domain=1, gfn=7de07, Debug page: MFN=171c07 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 171d13 p2mt d gfn 515859 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 171d13 p2mt d gfn 515859 did 1
(XEN) ==>3Debug for domain=1, gfn=7df13, Debug page: MFN=171d13 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===>mem_sharing_share_pages mfn 171fed gfn 515859 p2md d did 1
(XEN) ===hvm_hap_nested_page_fault mfn 171f74 p2mt d gfn 515444 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171f74 p2mt d gfn 515444 did 1
(XEN) ==>3Debug for domain=1, gfn=7dd74, Debug page: MFN=171f74 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171ef5 p2mt d gfn 515317 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171ef5 p2mt d gfn 515317 did 1
(XEN) ==>3Debug for domain=1, gfn=7dcf5, Debug page: MFN=171ef5 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c0e p2mt d gfn 515598 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171c0e p2mt d gfn 515598 did 1
(XEN) ==>3Debug for domain=1, gfn=7de0e, Debug page: MFN=171c0e is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171fcf p2mt d gfn 515535 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171fcf p2mt d gfn 515535 did 1
(XEN) ==>3Debug for domain=1, gfn=7ddcf, Debug page: MFN=171fcf is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c50 p2mt d gfn 515664 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171c50 p2mt d gfn 515664 did 1
(XEN) ==>3Debug for domain=1, gfn=7de50, Debug page: MFN=171c50 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 171c60 p2mt d gfn 515680 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 171c60 p2mt d gfn 515680 did 1
(XEN) ==>3Debug for domain=1, gfn=7de60, Debug page: MFN=171c60 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c51 p2mt d gfn 515665 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171c51 p2mt d gfn 515665 did 1
(XEN) ==>3Debug for domain=1, gfn=7de51, Debug page: MFN=171c51 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===>mem_sharing_share_pages mfn 171cf6 gfn 512604 p2md d did 1
(XEN) ===>mem_sharing_share_pages mfn 171c37 gfn 511837 p2md d did 1
(XEN) ===>mem_sharing_share_pages mfn 171c78 gfn 511966 p2md d did 1
(XEN) ===>mem_sharing_share_pages mfn 171c39 gfn 512159 p2md d did 1
(XEN) ===>mem_sharing_share_pages mfn 171c3a gfn 515680 p2md d did 1
(XEN) ===>mem_sharing_share_pages mfn 171c3b gfn 515681 p2md d did 1
(XEN) ===>mem_sharing_share_pages mfn 171dbc gfn 515682 p2md d did 1
(XEN) ===>mem_sharing_share_pages mfn 171c3d gfn 515747 p2md d did 1
(XEN) ===>mem_sharing_share_pages mfn 171cfe gfn 515556 p2md d did 1
(XEN) ===gfn_to_mfn_unshare mfn 171fed p2mt d gfn 515565 did 1
(XEN) ===mem_sharing_unshare_page cpu 0 mfn 171fed p2mt d gfn 515565 did 1
(XEN) ===set p2mentry mfn 171fed p2mt d gfn 515565 did 1
(XEN) ==>3Debug for domain=1, gfn=7dded, Debug page: MFN=171fed is ci=8000000000000003, ti=8400000000000002, owner_id=32755
(XEN) ===hvm_hap_nested_page_fault mfn 171c52 p2mt d gfn 515666 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171c52 p2mt d gfn 515666 did 1
(XEN) ==>3Debug for domain=1, gfn=7de52, Debug page: MFN=171c52 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c94 p2mt d gfn 515732 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171c94 p2mt d gfn 515732 did 1
(XEN) ==>3Debug for domain=1, gfn=7de94, Debug page: MFN=171c94 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===gfn_to_mfn_unshare mfn 171c3d p2mt d gfn 515747 did 1
(XEN) ===mem_sharing_unshare_page cpu 6 mfn 171c3d p2mt d gfn 515747 did 1
(XEN) ===set p2mentry mfn 171c3d p2mt d gfn 515747 did 1
(XEN) ==>3Debug for domain=1, gfn=7dea3, Debug page: MFN=171c3d is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===hvm_hap_nested_page_fault mfn 171fed p2mt d gfn 515788 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171fed p2mt d gfn 515788 did 1
(XEN) ===set p2mentry mfn 171fed p2mt d gfn 515788 did 1
(XEN) ==>3Debug for domain=1, gfn=7decc, Debug page: MFN=171fed is ci=8000000000000002, ti=8400000000000001, owner_id=32755
(XEN) ===hvm_hap_nested_page_fault mfn 171fed p2mt d gfn 515859 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171fed p2mt d gfn 515859 did 1
(XEN) ==>3Debug for domain=1, gfn=7df13, Debug page: MFN=171fed is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c15 p2mt d gfn 515605 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171c15 p2mt d gfn 515605 did 1
(XEN) ==>3Debug for domain=1, gfn=7de15, Debug page: MFN=171c15 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171fd6 p2mt d gfn 515542 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171fd6 p2mt d gfn 515542 did 1
(XEN) ==>3Debug for domain=1, gfn=7ddd6, Debug page: MFN=171fd6 is ci=8000000000000001, ti=0, owner_id=1
(XEN) ===hvm_hap_nested_page_fault mfn 171c57 p2mt d gfn 515671 did 1
(XEN) ===mem_sharing_unshare_page cpu 12 mfn 171c57 p2mt d gfn 515671 did 1
(XEN) ==>3Debug for domain=1, gfn=7de57, Debug page: MFN=171c57 is ci=8000000000000001, ti=0, owner_id=1
(XEN) printk: 32 messages suppressed.
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)
(XEN) grant_table.c:555:d0 Iomem mapping not permitted ffffffffffffffff (domain 1)


[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-11  1:49             ` MaoXiaoyun
@ 2011-01-11  6:32               ` Jui-Hao Chiang
  2011-01-11  6:46                 ` MaoXiaoyun
  2011-01-12  8:01               ` Fix mem_sharing on Xen 4.0.0 MaoXiaoyun
  1 sibling, 1 reply; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-11  6:32 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, tim.deegan

Hi, all:

I have disabled rich formatting in gmail; does the format look good now?
To tinnycloud: hvm_hap_nested_page_fault() will be called when a guest
writes to a shared page.
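
Roughly, the path I mean looks like this (a sketch only; the flags
argument and the return handling are simplified):

    /* sketch: in hvm_hap_nested_page_fault() */
    mfn = gfn_to_mfn(p2m, gfn, &p2mt);
    if ( p2m_is_shared(p2mt) )
    {
        /* a write hit a shared page: break the sharing, then let the
         * guest retry the access against its now-private copy */
        mem_sharing_unshare_page(p2m, gfn, 0);
        return 1;
    }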

2011/1/11 MaoXiaoyun <tinnycloud@hotmail.com>
>
> Hi Tim:
>
>       Sorry for the inconvenience, I think it's better now when I reply directly from hotmail.
>
>       It looks like page_set_owner() is forgotten in unshare(), right?

Inside unshare(), it does the following:
(1) page_make_private() --> page_set_owner(page, d) to update the
page_info structure, so there is no need for you to add page_set_owner()
explicitly.
(2) set_shared_p2m_entry() to update the page table.
From your log file, page_make_private() is not called/executed
properly. Could you also debug into it?
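
In other words, the owner hand-over in the unshare path is roughly this
(a simplified sketch; the copy-page branch, the list bookkeeping and the
exact p2m update call are left out):

    /* sketch of the private-restore step in mem_sharing_unshare_page() */
    page = mfn_to_page(mfn);
    ret = page_make_private(d, page);   /* on success this already calls   */
    if ( ret != 0 )                     /* page_set_owner(page, d) for us  */
        return ret;                     /* on failure the owner stays stale */
    /* afterwards the p2m entry for gfn is switched back to a writable
     * type so the guest can touch the now-private page again */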

Thanks,
Jui-Hao

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-11  6:32               ` Jui-Hao Chiang
@ 2011-01-11  6:46                 ` MaoXiaoyun
  0 siblings, 0 replies; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-11  6:46 UTC (permalink / raw)
  To: juihaochiang; +Cc: xen devel, tim.deegan


[-- Attachment #1.1: Type: text/plain, Size: 1360 bytes --]


Hi Jui-Hao:
 
    page_make_private() --> page_set_owner(page, d) sets the page owner only when page_make_private() succeeds.
    But in fact, I remember that when the error log shows up, page_make_private() has failed, so
    page_set_owner(page, d) is not called.
 
> Date: Tue, 11 Jan 2011 14:32:51 +0800
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> From: juihaochiang@gmail.com
> To: tinnycloud@hotmail.com
> CC: tim.deegan@citrix.com; xen-devel@lists.xensource.com
> 
> Hi, all:
> 
> I disabled rich formatting in Gmail; does the format look good now?
> To tinnycloud: hvm_hap_nested_page_fault() will be called when a guest
> writes to a shared page.
> 
> 2011/1/11 MaoXiaoyun <tinnycloud@hotmail.com>
> >
> > Hi Tim:
> >
> >       Sorry for the inconvenience, I think it's better now when I reply directly from hotmail.
> >
> >       It looks like in unshare(), page_set_owner is forgotten, right?
> 
> Inside unshare(), it does the following
> (1) page_make_private() --> page_set_owner(page, d) to update
> page_info structure. So there is no need for you to add page_set_owner
> explicitly.
> (2) set_shared_p2m_entry() to update the page table
> From your log file, the page_make_private() is not called/executed
> properly. Could you debug into it also?
> 
> Thanks,
> Jui-Hao
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 1944 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Fix mem_sharing on Xen 4.0.0
  2011-01-11  1:49             ` MaoXiaoyun
  2011-01-11  6:32               ` Jui-Hao Chiang
@ 2011-01-12  8:01               ` MaoXiaoyun
  1 sibling, 0 replies; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-12  8:01 UTC (permalink / raw)
  To: xen devel; +Cc: keir.fraser, tim.deegan, juihaochiang


[-- Attachment #1.1: Type: text/plain, Size: 851 bytes --]



 Hi:
 
      I've been trying to fix memory sharing on Xen for a few days.
      Now it works on my box, so I'd like to share the patches.
 
     Attached is the fix for the tools code in Xen. (Sorry, I don't know how to make git patches.)
     You should also apply the patch from http://lists.xensource.com/archives/html/xen-devel/2010-12/msg00149.html
     
     Currently I am able to start 24 HVMs (each with memory=512, maxmem=2048).
     After all the VMs start, the Xen info output shows that without mem_sharing the free memory is 7486 (MB),
     and with mem_sharing it is 9000, so about 1.5 GB of memory is saved.
 
    Remember, you should start blktap/drivers/blktapctrl first, since it enables hash sharing between the tapdisk processes.
    Also add memory_sharing to your HVM config file to enable sharing.
 
    thanks.

[-- Attachment #1.2: Type: text/html, Size: 1573 bytes --]

[-- Attachment #2: tools.patch --]
[-- Type: application/octet-stream, Size: 4091 bytes --]

diff -rbuN tools/blktap/drivers/blktapctrl.c tools.new/blktap/drivers/blktapctrl.c
--- tools/blktap/drivers/blktapctrl.c	2011-01-12 15:44:15.000000000 +0800
+++ tools.new/blktap/drivers/blktapctrl.c	2011-01-12 15:41:55.000000000 +0800
@@ -838,6 +838,8 @@
 	pid_t process;
 	char buf[128];
 
+    memshr_daemon_initialize();
+    sleep(86400);
 	__init_blkif();
 	snprintf(buf, sizeof(buf), "BLKTAPCTRL[%d]", getpid());
 	openlog(buf, LOG_CONS|LOG_ODELAY, LOG_DAEMON);
diff -rbuN tools/blktap2/drivers/Makefile tools.new/blktap2/drivers/Makefile
--- tools/blktap2/drivers/Makefile	2010-04-08 00:12:04.000000000 +0800
+++ tools.new/blktap2/drivers/Makefile	2011-01-12 15:45:41.000000000 +0800
@@ -57,7 +57,7 @@
 MEMSHR_DIR = $(XEN_ROOT)/tools/memshr
 
 MEMSHRLIBS :=
-ifeq ($(CONFIG_Linux), __fixme__)
+ifeq ($(CONFIG_Linux), y)
 CFLAGS += -DMEMSHR
 MEMSHRLIBS += $(MEMSHR_DIR)/libmemshr.a
 endif
diff -rbuN tools/blktap2/drivers/tapdisk-vbd.c tools.new/blktap2/drivers/tapdisk-vbd.c
--- tools/blktap2/drivers/tapdisk-vbd.c	2010-04-08 00:12:04.000000000 +0800
+++ tools.new/blktap2/drivers/tapdisk-vbd.c	2011-01-12 15:27:20.000000000 +0800
@@ -559,6 +559,8 @@
 	td_disk_id_t id;
 	struct  list_head *images;
 	td_driver_t *driver;
+    td_flag_t image_flags;
+    int level = 0;
 
 	images = calloc(1, sizeof(struct list_head));
 	INIT_LIST_HEAD(images);
@@ -568,8 +570,10 @@
 
 	for (;;) {
 		err   = -ENOMEM;
+        image_flags = level++ ? flags | TD_OPEN_RDONLY | TD_OPEN_SHAREABLE : flags;
+            
 		image = tapdisk_image_allocate(name, type,
-					       vbd->storage, flags, vbd);
+                           vbd->storage, image_flags, vbd);
 
 		/* free 'name' if it was created by td_get_parent_id() */
 		if (name != params) {
@@ -1463,8 +1467,10 @@
 		}
 	} else {
 #ifdef MEMSHR
-		if (treq.op == TD_OP_READ
-		   && td_flag_test(image->flags, TD_OPEN_RDONLY)) {
+        if (
+           treq.op == TD_OP_READ
+           && td_flag_test(image->flags, TD_OPEN_RDONLY)
+           && ((&vreq->req)->seg[treq.sidx].gref) ) {
 			uint64_t hnd  = treq.memshr_hnd;
 			uint16_t uid  = image->memshr_id;
 			blkif_request_t *breq = &vreq->req;
@@ -1496,6 +1502,7 @@
 
 	if (tapdisk_vbd_is_last_image(vbd, image)) {
 		memset(treq.buf, 0, treq.secs << SECTOR_SHIFT);
+        treq.memshr_hnd = 0;
 		td_complete_request(treq, 0);
 		goto done;
 	}
@@ -1517,6 +1524,7 @@
 			treq.secs   = 0;
 
 		memset(clone.buf, 0, clone.secs << SECTOR_SHIFT);
+        clone.memshr_hnd = 0;
 		td_complete_request(clone, 0);
 
 		if (!treq.secs)
@@ -1530,7 +1538,7 @@
 
 	case TD_OP_READ:
 #ifdef MEMSHR
-		if(td_flag_test(parent->flags, TD_OPEN_RDONLY)) {
+        if(td_flag_test(parent->flags, TD_OPEN_RDONLY) && ((&vreq->req)->seg[treq.sidx].gref)) {
 			int ret, seg = treq.sidx;
 			blkif_request_t *breq = &vreq->req;
         
diff -rbuN tools/memshr/interface.c tools.new/memshr/interface.c
--- tools/memshr/interface.c	2010-04-08 00:12:04.000000000 +0800
+++ tools.new/memshr/interface.c	2011-01-12 15:22:43.000000000 +0800
@@ -18,6 +18,7 @@
  */
 #include <string.h>
 #include <inttypes.h>
+#include <errno.h>
 
 #include "memshr.h"
 #include "memshr-priv.h"
@@ -180,15 +181,16 @@
         if(!ret) return 0;
         /* Handles failed to be shared => at least one of them must be invalid,
            remove the relevant ones from the map */
+        ret = -errno;
         switch(ret)
         {
             case XEN_DOMCTL_MEM_SHARING_S_HANDLE_INVALID:
                 ret = blockshr_shrhnd_remove(memshr.blks, s_hnd, NULL);
-                if(ret) DPRINTF("Could not rm invl s_hnd: %"PRId64"\n", s_hnd);
+                if(!ret) DPRINTF("Could not rm invl s_hnd: %"PRId64"\n", s_hnd);
                 break;
             case XEN_DOMCTL_MEM_SHARING_C_HANDLE_INVALID:
                 ret = blockshr_shrhnd_remove(memshr.blks, c_hnd, NULL);
-                if(ret) DPRINTF("Could not rm invl c_hnd: %"PRId64"\n", c_hnd);
+                if(!ret) DPRINTF("Could not rm invl c_hnd: %"PRId64"\n", c_hnd);
                 break;
             default:
                 break;

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-10  8:10         ` Jui-Hao Chiang
  2011-01-10 10:34           ` tinnycloud
@ 2011-01-12 10:03           ` Jui-Hao Chiang
  2011-01-12 10:54             ` Tim Deegan
  1 sibling, 1 reply; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-12 10:03 UTC (permalink / raw)
  To: Tim Deegan; +Cc: tinnycloud, xen-devel

[-- Attachment #1: Type: text/plain, Size: 2071 bytes --]

Hi, Tim:

On Mon, Jan 10, 2011 at 4:10 PM, Jui-Hao Chiang <juihaochiang@gmail.com> wrote:

>>
>> After this change, unshare() has a potential problem of deadlock for
>> shr_lock and p2m_lock with different locking order.
>> Assume two CPUs do the following
>> CPU1: hvm_hap_nested_page_fault() => unshare() => p2m_change_type()
>> (locking order: shr_lock, p2m_lock)
>> CPU2: p2m_teardown() => unshare() (locking order: p2m_lock, shr_lock)
>> When CPU1 grabs shr_lock and CPU2 grabs p2m_lock, they deadlock later.
>>
>>  1.       mem_sharing_unshare_page() is also called from
>> gfn_to_mfn_unshare, which is called by gnttab_transfer.
>>
>> Since there is no bug report on grant_table right now, I think this is
>> safe for now.
>>
>> Also p2m_teardown -> mem_sharing_unshare_page(): its flag is
>> MEM_SHARING_DESTROY_GFN, and it won't have the chance to
>> call set_shared_p2m_entry().
>>
>>
>
> Of course, p2m_teardown won't call set_shared_p2m_entry. But this does
> not change my argument that p2m_teardown() holds the p2m_lock while waiting
> on shr_lock. Actually, after looking at it for a while, I rebut myself: the
> deadlock scenario won't exist.
> When p2m_teardown is called, the domain is dying in its last few steps
> (devices and IRQs have been released), and there is no way for
> hvm_hap_nested_page_fault() to happen on the memory of the dying domain. If
> this case is eliminated, then my patch should not have a deadlock problem.
> Any comments?
>

After a discussion with tinnycloud, his test is working after applying
the previous patch
http://lists.xensource.com/archives/html/xen-devel/2010-12/txteWc7Bs5Yap.txt
(set_shared_p2m_entry is not executed since it is inside an ASSERT).

And after some more code tracing and testing, my own worry about the
deadlock between p2m_lock and shr_lock has actually disappeared, as per
the discussion above. So here I re-attach the patch again, which includes
another fix to recover the type count when nominate fails on a page (from
our previous discussions).

See if anything is wrong.

Bests,
Jui-Hao

[-- Attachment #2: mem_sharing_p2mt_race.patch --]
[-- Type: application/octet-stream, Size: 3509 bytes --]

diff -r 7b4c82f07281 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c	Wed Jan 05 23:54:15 2011 +0000
+++ b/xen/arch/x86/mm.c	Wed Jan 12 17:46:56 2011 +0800
@@ -2266,6 +2266,10 @@ static int __put_page_type(struct page_i
 
         if ( unlikely((nx & PGT_count_mask) == 0) )
         {
+            /* When nominate fails, recover the shared page to type none */
+            if ((x & PGT_type_mask) == PGT_shared_page)
+                nx = PGT_none;
+
             if ( unlikely((nx & PGT_type_mask) <= PGT_l4_page_table) &&
                  likely(nx & (PGT_validated|PGT_partial)) )
             {
diff -r 7b4c82f07281 xen/arch/x86/mm/mem_sharing.c
--- a/xen/arch/x86/mm/mem_sharing.c	Wed Jan 05 23:54:15 2011 +0000
+++ b/xen/arch/x86/mm/mem_sharing.c	Wed Jan 12 17:46:56 2011 +0800
@@ -502,6 +502,7 @@ int mem_sharing_nominate_page(struct p2m
 
     *phandle = 0UL;
 
+    shr_lock(); 
     mfn = gfn_to_mfn(p2m, gfn, &p2mt);
 
     /* Check if mfn is valid */
@@ -509,29 +510,33 @@ int mem_sharing_nominate_page(struct p2m
     if (!mfn_valid(mfn))
         goto out;
 
+    /* Return the handle if the page is already shared */
+    page = mfn_to_page(mfn);
+    if (p2m_is_shared(p2mt)) {
+        *phandle = page->shr_handle;
+        ret = 0;
+        goto out;
+    }
+
     /* Check p2m type */
     if (!p2m_is_sharable(p2mt))
         goto out;
 
     /* Try to convert the mfn to the sharable type */
-    page = mfn_to_page(mfn);
     ret = page_make_sharable(d, page, expected_refcnt); 
     if(ret) 
         goto out;
 
     /* Create the handle */
     ret = -ENOMEM;
-    shr_lock(); 
     handle = next_handle++;  
     if((hash_entry = mem_sharing_hash_insert(handle, mfn)) == NULL)
     {
-        shr_unlock();
         goto out;
     }
     if((gfn_info = mem_sharing_gfn_alloc()) == NULL)
     {
         mem_sharing_hash_destroy(hash_entry);
-        shr_unlock();
         goto out;
     }
 
@@ -545,7 +550,6 @@ int mem_sharing_nominate_page(struct p2m
         BUG_ON(page_make_private(d, page) != 0);
         mem_sharing_hash_destroy(hash_entry);
         mem_sharing_gfn_destroy(gfn_info, 0);
-        shr_unlock();
         goto out;
     }
 
@@ -559,11 +563,11 @@ int mem_sharing_nominate_page(struct p2m
     gfn_info->domain = d->domain_id;
     page->shr_handle = handle;
     *phandle = handle;
-    shr_unlock();
 
     ret = 0;
 
 out:
+    shr_unlock();
     return ret;
 }
 
@@ -633,14 +637,21 @@ int mem_sharing_unshare_page(struct p2m_
     struct list_head *le;
     struct domain *d = p2m->domain;
 
+    mem_sharing_audit();
+    /* Remove the gfn_info from the list */
+    shr_lock();
+    
     mfn = gfn_to_mfn(p2m, gfn, &p2mt);
+    
+    /* Has someone already unshared it? */
+    if (!p2m_is_shared(p2mt)) {
+        shr_unlock();
+        return 0;
+    }
 
     page = mfn_to_page(mfn);
     handle = page->shr_handle;
  
-    mem_sharing_audit();
-    /* Remove the gfn_info from the list */
-    shr_lock();
     hash_entry = mem_sharing_hash_lookup(handle); 
     list_for_each(le, &hash_entry->gfns)
     {
@@ -707,7 +718,6 @@ private_page_found:
         mem_sharing_hash_delete(handle);
     else
         atomic_dec(&nr_saved_mfns);
-    shr_unlock();
 
     if(p2m_change_type(p2m, gfn, p2m_ram_shared, p2m_ram_rw) != 
                                                 p2m_ram_shared) 
@@ -718,6 +728,7 @@ private_page_found:
     /* Update m2p entry */
     set_gpfn_from_mfn(mfn_x(page_to_mfn(page)), gfn);
 
+    shr_unlock();
     return 0;
 }
 

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-12 10:03           ` Jui-Hao Chiang
@ 2011-01-12 10:54             ` Tim Deegan
  2011-01-12 12:39               ` MaoXiaoyun
  0 siblings, 1 reply; 50+ messages in thread
From: Tim Deegan @ 2011-01-12 10:54 UTC (permalink / raw)
  To: Jui-Hao Chiang; +Cc: tinnycloud, xen-devel

[-- Attachment #1: Type: text/plain, Size: 1183 bytes --]

Hi, 

At 10:03 +0000 on 12 Jan (1294826637), Jui-Hao Chiang wrote:
> After a discussion with tinnycloud, his test is working after applying
> the previous patch
> http://lists.xensource.com/archives/html/xen-devel/2010-12/txteWc7Bs5Yap.txt
> (set_shared_p2m_entry is not executed since it is in ASSERT).

Great!

> And after a few code tracing and testing, my own worry about the
> deadlock between p2m_lock and shr_lock actually disappears as the
> above discussion. So here I re-attach the patch again which includes
> another fix to recover type count when nominate fails on a page (from
> our previous dicussions).

The locking parts of this patch are already applied to the staging tree,
thanks.

The "recover type count" I still think is wrong.  The page-sharing code
can't rely on the type of a page with typecount 0.  Also in this patch
you're changing the code of the core __put_page_type() function, rather
than the page-sharing code.

Can you try the attached patch instead?  If it fixes your problem I'll
apply it. 

Cheers,

Tim.

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

[-- Attachment #2: pgt.patch --]
[-- Type: text/x-diff, Size: 3861 bytes --]

# HG changeset patch
# User Tim Deegan <Tim.Deegan@citrix.com>
# Date 1294829498 0
# Node ID d8eef6e395a83f2af5324061725daa2ae1d72824
# Parent  e6d7d312dfd3b3b7b58bda0c936d37ae7e1d5b8a
x86/mm: make page-sharing use the proper typecount functions
instead of having its own cmpxhg loops.

This should remove some confusion about the use of PGT_none,
and also makes page-sharing participate properly in the TLB
flushing discipline.

Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>

diff -r e6d7d312dfd3 -r d8eef6e395a8 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c	Mon Jan 10 17:41:37 2011 +0000
+++ b/xen/arch/x86/mm.c	Wed Jan 12 10:51:38 2011 +0000
@@ -4167,32 +4167,25 @@ int page_make_sharable(struct domain *d,
                        struct page_info *page, 
                        int expected_refcnt)
 {
-    unsigned long x, nx, y;
-
-    /* Acquire ref first, so that the page doesn't dissapear from us */
-    if(!get_page(page, d))
+    spin_lock(&d->page_alloc_lock);
+
+    /* Change page type and count atomically */
+    if ( get_page_and_type(page, d, PGT_shared_page) )
+    {
+        spin_unlock(&d->page_alloc_lock);
         return -EINVAL;
-
-    spin_lock(&d->page_alloc_lock);
-
-    /* Change page type and count atomically */
-    y = page->u.inuse.type_info;
-    nx = PGT_shared_page | PGT_validated | 1; 
-    do {
-        x = y;
-        /* We can only change the type if count is zero, and 
-           type is PGT_none */
-        if((x & (PGT_type_mask | PGT_count_mask)) != PGT_none)
-        {
-            put_page(page);
-            spin_unlock(&d->page_alloc_lock);
-            return -EEXIST;
-        }
-        y = cmpxchg(&page->u.inuse.type_info, x, nx);
-    } while(x != y);
-
-    /* Check if the ref count is 2. The first from PGT_allocated, and the second
-     * from get_page at the top of this function */
+    }
+
+    /* Check it wasn't already sharable and undo if it was */
+    if ( (page->u.inuse.type_info & PGT_count_mask) != 1 )
+    {
+        put_page_and_type(page);
+        spin_unlock(&d->page_alloc_lock);
+        return -EEXIST;
+    }
+
+    /* Check if the ref count is 2. The first from PGT_allocated, and
+     * the second from get_page_and_type at the top of this function */
     if(page->count_info != (PGC_allocated | (2 + expected_refcnt)))
     {
         /* Return type count back to zero */
@@ -4205,39 +4198,27 @@ int page_make_sharable(struct domain *d,
     d->tot_pages--;
     page_list_del(page, &d->page_list);
     spin_unlock(&d->page_alloc_lock);
-
-    /* NOTE: We are not putting the page back. In effect this function acquires
-     * one ref and type ref for the caller */
-
     return 0;
 }
 
 int page_make_private(struct domain *d, struct page_info *page)
 {
-    unsigned long x, y;
-
     if(!get_page(page, dom_cow))
         return -EINVAL;
     
     spin_lock(&d->page_alloc_lock);
 
-    /* Change page type and count atomically */
-    y = page->u.inuse.type_info;
-    do {
-        x = y;
-        /* We can only change the type if count is one */
-        if((x & (PGT_type_mask | PGT_count_mask)) != 
-                (PGT_shared_page | 1))
-        {
-            put_page(page);
-            spin_unlock(&d->page_alloc_lock);
-            return -EEXIST;
-        }
-        y = cmpxchg(&page->u.inuse.type_info, x, PGT_none);
-    } while(x != y);
-
-    /* We dropped type ref above, drop one ref count too */
-    put_page(page);
+    /* We can only change the type if count is one */
+    if ( (page->u.inuse.type_info & (PGT_type_mask | PGT_count_mask))
+         != (PGT_shared_page | 1) )
+    {
+        put_page(page);
+        spin_unlock(&d->page_alloc_lock);
+        return -EEXIST;
+    }
+
+    /* Drop the final typecount */
+    put_page_and_type(page);
 
     /* Change the owner */
     ASSERT(page_get_owner(page) == dom_cow);

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-10 10:30           ` Tim Deegan
  2011-01-11  1:49             ` MaoXiaoyun
@ 2011-01-12 11:50             ` George Dunlap
  1 sibling, 0 replies; 50+ messages in thread
From: George Dunlap @ 2011-01-12 11:50 UTC (permalink / raw)
  To: Tim Deegan; +Cc: tinnycloud, xen-devel, Jui-Hao Chiang

On Mon, Jan 10, 2011 at 10:30 AM, Tim Deegan <Tim.Deegan@citrix.com> wrote:
>> Just to be skeptical:
>> Why doesn't gfn_to_mfn() take the p2m lock when querying the p2m type?
>
> Because gfn->mfn lookups happen very frequently and requiring the lock
> would be a performance bottleneck on multi-vcpu guests.

I think there's also a deadlock issue.  At some point a few months ago
I made ept_get_entry() grab the p2m lock, and it deadlocked with the
paging lock.  IIRC, the problem was that
* Sometimes the paging lock is grabbed after the p2m lock is taken
* Sometimes gfn_to_mfn() is called when the paging lock is held

So adding the p2m lock to gfn_to_mfn() gave you a circular lock
dependency, classic condition for deadlock.
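
A toy, self-contained illustration of that circular dependency, with pthread
mutexes standing in for the p2m and paging locks (purely hypothetical; it just
makes the AB-BA ordering visible):

    #include <pthread.h>

    static pthread_mutex_t p2m_lock    = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t paging_lock = PTHREAD_MUTEX_INITIALIZER;

    /* path 1: p2m lock first, paging lock second */
    static void *path_one(void *arg)
    {
        pthread_mutex_lock(&p2m_lock);
        pthread_mutex_lock(&paging_lock);   /* waits if path_two holds it */
        pthread_mutex_unlock(&paging_lock);
        pthread_mutex_unlock(&p2m_lock);
        return NULL;
    }

    /* path 2: paging lock first, then a gfn_to_mfn()-style lookup that
     * would take the p2m lock */
    static void *path_two(void *arg)
    {
        pthread_mutex_lock(&paging_lock);
        pthread_mutex_lock(&p2m_lock);      /* waits if path_one holds it */
        pthread_mutex_unlock(&p2m_lock);
        pthread_mutex_unlock(&paging_lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, path_one, NULL);
        pthread_create(&b, NULL, path_two, NULL);
        pthread_join(a, NULL);  /* may never return: each thread can end up
                                 * holding one lock and waiting on the other */
        pthread_join(b, NULL);
        return 0;
    }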

 -George

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-12 10:54             ` Tim Deegan
@ 2011-01-12 12:39               ` MaoXiaoyun
  2011-01-12 14:02                 ` Tim Deegan
  0 siblings, 1 reply; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-12 12:39 UTC (permalink / raw)
  To: tim.deegan, juihaochiang; +Cc: xen devel


[-- Attachment #1.1: Type: text/plain, Size: 5412 bytes --]


Hi Tim:
 
      It seems not to work; see the log below.
      Since the patch is for xen-unstable, I need to merge the code manually; I will check my code carefully later.
 
blktap_sysfs_create: adding attributes for dev ffff880102523c00
(XEN) Bad type in alloc_page_type 8000000000000000 t=8000000000000001 c=8000000000000005
(XEN) Xen BUG at mm.c:2094
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    3
(XEN) RIP:    e008:[<ffff82c48015cd08>] __get_page_type+0x4d8/0x1410
(XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: 0000000000000000   rcx: 0000000000000092
(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff82c48021e8c4
(XEN) rbp: ffff82f603663e80   rsp: ffff83023ff27b08   r8:  0000000000000001
(XEN) r9:  0000000000000001   r10: 00000000ffffffed   r11: ffff82c4801318d0
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 8000000000000001
(XEN) r15: ffff83023ff27bc8   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 0000000336800000   cr2: ffff88010545d008
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83023ff27b08:
(XEN)    0000000000000000 00000000000000ec 000000110000000e ffff83023ff27c1c
(XEN)    0000000100000002 ffff83023ff27be8 0000000800000001 0000000000000001
(XEN)    00000000001b31f4 8000000000000000 0000008700000100 0000000000000086
(XEN)    00972cd100000000 ffff82c480376980 0000000000000000 ffff880002b63dc8
(XEN)    0000000000000000 4c00000000000002 000000000000ffff 80000003276b0127
(XEN)    0000000000000000 ffff88002c11a258 0000000000000000 ffff8801060b00a8
(XEN)    0000000000000000 0000000000000000 0000000000000009 ffff82f603663e80
(XEN)    ffff8301bd350000 ffff8301bd350014 0000000000000003 ffff82f603663e80
(XEN)    00000000000099f4 ffff82c48015dc5b ffff82f603663e80 ffff82c48015e98e
(XEN)    ffff8301bd350000 00000000001b31f4 ffff8301bd350000 ffff83023ff27cc0
(XEN)    0000000000000003 ffff82c4801c02aa 0000000000000000 0000000000000092
(XEN)    00000000bf568000 0000000000000096 000000013ff27f28 ffff83023ff27e38
(XEN)    ffff8301bd350000 ffff83023ff27e28 0000000000305000 0000000000000006
(XEN)    0000000000000006 ffff82c4801c06a3 0000000000000000 0000000000000000
(XEN)    00000000000099f4 00000000006ee000 00000000006ee000 ffff82c4801457fd
(XEN)    ffff82c4801447da 0000000000000080 ffff83023ff27f28 ffff82c48015a1d8
(XEN)    0000000000000000 00000000000000fc 0000000000000000 0000000000000001
(XEN)    ffff83023ff27e28 ffff82c48015a1d8 0000000000000000 ffff82f6066d0000
(XEN)    0000000000000000 0000000000000000 00000000003369d0 0000000000000000
(XEN)    0000000000000000 ffff82f6066d3a00 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82c48015cd08>] __get_page_type+0x4d8/0x1410
(XEN)    [<ffff82c48015dc5b>] get_page_type+0xb/0x20
(XEN)    [<ffff82c48015e98e>] page_make_sharable+0x4e/0x1a0
(XEN)    [<ffff82c4801c02aa>] mem_sharing_nominate_page+0x18a/0x380
(XEN)    [<ffff82c4801c06a3>] mem_sharing_domctl+0x73/0x130
(XEN)    [<ffff82c4801457fd>] arch_do_domctl+0xdad/0x1f90
(XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
(XEN)    [<ffff82c48015a1d8>] get_page+0x28/0xf0
(XEN)    [<ffff82c48015a1d8>] get_page+0x28/0xf0
(XEN)    [<ffff82c48015dc5b>] get_page_type+0xb/0x20
(XEN)    [<ffff82c48015e1c3>] get_page_and_type_from_pagenr+0x93/0xf0
(XEN)    [<ffff82c480104373>] do_domctl+0x163/0x1000
(XEN)    [<ffff82c480162134>] do_mmuext_op+0xf34/0x1320
(XEN)    [<ffff82c48014717d>] vcpu_kick+0x1d/0x80
(XEN)    [<ffff82c4801e3169>] syscall_enter+0xa9/0xae
(XEN)    
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 3:
(XEN) Xen BUG at mm.c:2094
(XEN) ****************************************
(XEN) 
(XEN) Manual reset required ('noreboot' specified)

 
> Date: Wed, 12 Jan 2011 10:54:05 +0000
> From: Tim.Deegan@citrix.com
> To: juihaochiang@gmail.com
> CC: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> 
> Hi, 
> 
> At 10:03 +0000 on 12 Jan (1294826637), Jui-Hao Chiang wrote:
> > After a discussion with tinnycloud, his test is working after applying
> > the previous patch
> > http://lists.xensource.com/archives/html/xen-devel/2010-12/txteWc7Bs5Yap.txt
> > (set_shared_p2m_entry is not executed since it is in ASSERT).
> 
> Great!
> 
> > And after a few code tracing and testing, my own worry about the
> > deadlock between p2m_lock and shr_lock actually disappears as the
> > above discussion. So here I re-attach the patch again which includes
> > another fix to recover type count when nominate fails on a page (from
> > our previous dicussions).
> 
> The locking parts of this patch are already applied to the staging tree,
> thanks.
> 
> The "recover type count" I still think is wrong. The page-sharing code
> can't rely on the type of a page with typecount 0. Also in this patch
> you're changing the code of the core __put_page_type() function, rather
> than the page-sharing code.
> 
> Can you try the attached patch instead? If it fixes your problem I'll
> apply it. 
> 
> Cheers,
> 
> Tim.
> 
> -- 
> Tim Deegan <Tim.Deegan@citrix.com>
> Principal Software Engineer, Xen Platform Team
> Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 6956 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-12 12:39               ` MaoXiaoyun
@ 2011-01-12 14:02                 ` Tim Deegan
  2011-01-12 15:21                   ` MaoXiaoyun
  2011-01-13  1:48                   ` MaoXiaoyun
  0 siblings, 2 replies; 50+ messages in thread
From: Tim Deegan @ 2011-01-12 14:02 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, juihaochiang

At 12:39 +0000 on 12 Jan (1294835994), MaoXiaoyun wrote:
> Hi Tim:
> 
>       Seems not work. See log below.
>       Since the patch is for xen_unstable, I need to merge the code manually, I will check my code carefully later.
> 

I think you need this change as well:

diff -r d8eef6e395a8 xen/arch/x86/mm.c
--- a/xen/arch/x86/mm.c	Wed Jan 12 10:51:38 2011 +0000
+++ b/xen/arch/x86/mm.c	Wed Jan 12 14:01:11 2011 +0000
@@ -2367,7 +2367,7 @@ static int __get_page_type(struct page_i
 
                 /* No special validation needed for writable pages. */
                 /* Page tables and GDT/LDT need to be scanned for validity. */
-                if ( type == PGT_writable_page )
+                if ( type == PGT_writable_page || type == PGT_shared_page )
                     nx |= PGT_validated;
             }
         }


Cheers,

Tim.

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-12 14:02                 ` Tim Deegan
@ 2011-01-12 15:21                   ` MaoXiaoyun
  2011-01-13  2:26                     ` Jui-Hao Chiang
  2011-01-13  1:48                   ` MaoXiaoyun
  1 sibling, 1 reply; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-12 15:21 UTC (permalink / raw)
  To: tim.deegan; +Cc: xen devel, juihaochiang


[-- Attachment #1.1: Type: text/plain, Size: 1724 bytes --]


Hi Tim:
 
        That's it; I am running the test, and so far so good. I'll test more, thanks.
 
      Currently, from the tapdisk code, it looks like only *read only* IO with secs 8 has a
chance to be shared. So does that mean only the parent image can be shared, and it still
needs to be opened read-only, right?
 
      So it looks like the shared pages are the pages containing disk data that has been
loaded into the guest IO buffer, and that this is what page sharing in Xen refers to, right?
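
The check being referred to, as it appears in the tools patch posted earlier
in this thread, is roughly:

    /* tapdisk-vbd.c (from the patch above): only read-only, grant-backed
     * read requests are candidates for memshr sharing */
    if ( treq.op == TD_OP_READ
         && td_flag_test(image->flags, TD_OPEN_RDONLY)
         && ((&vreq->req)->seg[treq.sidx].gref) )
    {
        /* look up the block in the memshr hash and, on a hit, ask Xen to
         * share the guest page with the already-cached copy */
    }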
             
 
> Date: Wed, 12 Jan 2011 14:02:23 +0000
> From: Tim.Deegan@citrix.com
> To: tinnycloud@hotmail.com
> CC: juihaochiang@gmail.com; xen-devel@lists.xensource.com
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> 
> At 12:39 +0000 on 12 Jan (1294835994), MaoXiaoyun wrote:
> > Hi Tim:
> > 
> > Seems not work. See log below.
> > Since the patch is for xen_unstable, I need to merge the code manually, I will check my code carefully later.
> > 
> 
> I think you need this change as well:
> 
> diff -r d8eef6e395a8 xen/arch/x86/mm.c
> --- a/xen/arch/x86/mm.c Wed Jan 12 10:51:38 2011 +0000
> +++ b/xen/arch/x86/mm.c Wed Jan 12 14:01:11 2011 +0000
> @@ -2367,7 +2367,7 @@ static int __get_page_type(struct page_i
> 
> /* No special validation needed for writable pages. */
> /* Page tables and GDT/LDT need to be scanned for validity. */
> - if ( type == PGT_writable_page )
> + if ( type == PGT_writable_page || type == PGT_shared_page )
> nx |= PGT_validated;
> }
> }
> 
> 
> Cheers,
> 
> Tim.
> 
> -- 
> Tim Deegan <Tim.Deegan@citrix.com>
> Principal Software Engineer, Xen Platform Team
> Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 2473 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-12 14:02                 ` Tim Deegan
  2011-01-12 15:21                   ` MaoXiaoyun
@ 2011-01-13  1:48                   ` MaoXiaoyun
  2011-01-13 10:21                     ` Tim Deegan
  1 sibling, 1 reply; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-13  1:48 UTC (permalink / raw)
  To: tim.deegan; +Cc: xen devel, juihaochiang


[-- Attachment #1.1: Type: text/plain, Size: 4694 bytes --]


Hi Tim:
 
      More stress testing failed on a lock issue.
      thanks.
 
(XEN) Error: p2m lock held by p2m_change_type
(XEN) Xen BUG at p2m-ept.c:38
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    9
(XEN) RIP:    e008:[<ffff82c4801df45a>] ept_pod_check_and_populate+0x13a/0x150
(XEN) RFLAGS: 0000000000010282   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff830402750000   rcx: 0000000000000092
(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff82c48021e8c4
(XEN) rbp: ffff83023fed7f28   rsp: ffff83023fed7c18   r8:  0000000000000001
(XEN) r9:  0000000000000001   r10: ffff830000000000   r11: ffff82c4801318d0
(XEN) r12: ffff83058dda63e0   r13: 0000000000000001   r14: 0000000000000009
(XEN) r15: 000000000000f9c7   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 000000058ddfb000   cr2: 00000000e1258000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83023fed7c18:
(XEN)    0000000000000002 0000000000000001 0000000000000009 ffff830402750000
(XEN)    ffff83023fed7c78 0000000000000001 ffff83023fed7c70 ffff82c4801df5cf
(XEN)    0000000000000000 ffff83023fed7cc4 000000000000f9c7 000000000000f9c7
(XEN)    ffff83058dda6000 ffff830402750000 ffff83023fed7f28 000000000000f9c7
(XEN)    0000000000000002 0000000000000001 0000000000000030 ffff82c4801bcba4
(XEN)    ffff83058dda6000 000000043fed7f28 ffff83023fed7f28 000000000000f9c7
(XEN)    000000000003f7c7 0000000000000030 ffff83023fed7f28 ffff82c48019baf1
(XEN)    0000000000000000 00000001a8e46000 0000000000000000 0000000000000182
(XEN)    ffff8300a8e46000 ffff82c4801b3864 ffff830402750470 07008300a8e46000
(XEN)    ffff83023fe810e0 0000000000000009 ffff83023fe814c0 0000000000000040
(XEN)    ffff82c4801447da 0000000000000080 ffff83023fe814c0 0000000000000000
(XEN)    000000000f9c7000 000000000000e000 ffff83023fed7dc8 0000000000000080
(XEN)    ffff82c480251dd0 000000000000f9c7 00ff8304027504b0 ffff82c480251dc0
(XEN)    ffff82c480251080 ffff82c480251dc0 0000000000000080 ffff82c48011b3cc
(XEN)    ffff82c480263240 0000000000000040 ffff82c4801447da 0000000000000080
(XEN)    ffff83023fed7f28 0000000000000092 00000c2d0a7ff1db 00000000000000fc
(XEN)    0000000000000092 0000000000000009 ffff8304027504b0 0000000000000009
(XEN)    ffff82c480263100 ffff82c480251100 ffff82c480251100 0000000000000292
(XEN)    ffff8300a8e477f0 0000069731554460 0000000000000292 ffff82c4801a93c3
(XEN)    00000000000000d1 ffff8300a8e46000 ffff8300a8e46000 ffff8300a8e477e8
(XEN) Xen call trace:
(XEN)    [<ffff82c4801df45a>] ept_pod_check_and_populate+0x13a/0x150
(XEN)    [<ffff82c4801df5cf>] ept_get_entry+0x15f/0x1c0
(XEN)    [<ffff82c4801bcba4>] p2m_change_type+0x144/0x1b0
(XEN)    [<ffff82c48019baf1>] hvm_hap_nested_page_fault+0x121/0x190
(XEN)    [<ffff82c4801b3864>] vmx_vmexit_handler+0x304/0x1a90
(XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
(XEN)    [<ffff82c48011b3cc>] vcpu_runstate_get+0xec/0xf0
(XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
(XEN)    [<ffff82c4801a93c3>] pt_update_irq+0x33/0x1e0
(XEN)    [<ffff82c4801a6082>] vlapic_has_pending_irq+0x42/0x70
(XEN)    [<ffff82c4801a0cc8>] hvm_vcpu_has_pending_irq+0x88/0xa0
(XEN)    [<ffff82c4801b267b>] vmx_vmenter_helper+0x5b/0x150
(XEN)    [<ffff82c4801adaa3>] vmx_asm_do_vmentry+0x0/0xdd
(XEN)    
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 9:
 
> Date: Wed, 12 Jan 2011 14:02:23 +0000
> From: Tim.Deegan@citrix.com
> To: tinnycloud@hotmail.com
> CC: juihaochiang@gmail.com; xen-devel@lists.xensource.com
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> 
> At 12:39 +0000 on 12 Jan (1294835994), MaoXiaoyun wrote:
> > Hi Tim:
> > 
> > Seems not work. See log below.
> > Since the patch is for xen_unstable, I need to merge the code manually, I will check my code carefully later.
> > 
> 
> I think you need this change as well:
> 
> diff -r d8eef6e395a8 xen/arch/x86/mm.c
> --- a/xen/arch/x86/mm.c Wed Jan 12 10:51:38 2011 +0000
> +++ b/xen/arch/x86/mm.c Wed Jan 12 14:01:11 2011 +0000
> @@ -2367,7 +2367,7 @@ static int __get_page_type(struct page_i
> 
> /* No special validation needed for writable pages. */
> /* Page tables and GDT/LDT need to be scanned for validity. */
> - if ( type == PGT_writable_page )
> + if ( type == PGT_writable_page || type == PGT_shared_page )
> nx |= PGT_validated;
> }
> }
> 
> 
> Cheers,
> 
> Tim.
> 
> -- 
> Tim Deegan <Tim.Deegan@citrix.com>
> Principal Software Engineer, Xen Platform Team
> Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 6148 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-12 15:21                   ` MaoXiaoyun
@ 2011-01-13  2:26                     ` Jui-Hao Chiang
  2011-01-13  4:42                       ` MaoXiaoyun
  2011-01-13  9:24                       ` Tim Deegan
  0 siblings, 2 replies; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-13  2:26 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, tim.deegan

Hi, all:

I think there is still a problem.
(1) I think using get_page_and_type is definitely better, since it is a
function that is already implemented there.
There seems to be a typo:
"if ( get_page_and_type(page, d, PGT_shared_page) )" should be changed
to "if ( !get_page_and_type(page, d, PGT_shared_page) )", because the
function returns 1 on success (see the corrected snippet below the debug
output).

(2) The major problem is that __put_page_type() never handles the
special case for shared pages.

If (1) is changed as I said, the problem still exists, as the following shows:
/* Before nominating domain 1, gfn 0x63 */
(XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
ci=8000000000000002, ti=0, owner_id=1
/* After a failed nominate  [desired: ci=8000000000000002, ti=0]*/
(XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
ci=8000000000000002, ti=8400000000000000, owner_id=1
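
Going back to point (1): the corrected opening of page_make_sharable() would
read roughly as follows (based on Tim's patch above, with only the missing
negation added):

    spin_lock(&d->page_alloc_lock);

    /* Change page type and count atomically; get_page_and_type() returns
     * 1 on success, so failure is the negated case. */
    if ( !get_page_and_type(page, d, PGT_shared_page) )
    {
        spin_unlock(&d->page_alloc_lock);
        return -EINVAL;
    }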


2011/1/12 MaoXiaoyun <tinnycloud@hotmail.com>:
> Hi Tim:
>
>         That's it, I am running the test, so far so good, I'll test more,
> thanks.
>
>       Currently from the code of tapdisk, it indicates only *read only* IO
> with secs 8 has the
> chance to be shared, so does it mean only the parent image can be shared,
> still it needs to
> be opened read only, right?
>
>       So it looks like page sharing are refer to those pages contain disk
> data been loaded
> into Guest IO buffer, and this is the page sharing in Xen, right?
>
>

Bests,
Jui-Hao

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-13  2:26                     ` Jui-Hao Chiang
@ 2011-01-13  4:42                       ` MaoXiaoyun
  2011-01-13  9:55                         ` Tim Deegan
  2011-01-13  9:24                       ` Tim Deegan
  1 sibling, 1 reply; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-13  4:42 UTC (permalink / raw)
  To: xen devel; +Cc: tim.deegan, juihaochiang


[-- Attachment #1.1: Type: text/plain, Size: 1942 bytes --]


Well, I think the discussion is around get_page/put_page and get_page_type/put_page_type.
 
Could someone help explain their usage and the difference between them?
 
thanks.
 
> Date: Thu, 13 Jan 2011 10:26:55 +0800
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> From: juihaochiang@gmail.com
> To: tinnycloud@hotmail.com
> CC: tim.deegan@citrix.com; xen-devel@lists.xensource.com
> 
> Hi, all:
> 
> I think there is still a problem.
> (1) I think using the get_page_and_type is definitely better since
> it's a function already implemented there
> There seems a typo:
> "if ( get_page_and_type(page, d, PGT_shared_page) )" should be changed
> to "if ( !get_page_and_type(page, d, PGT_shared_page) )" because the
> function return 1 on success.
> 
> (2) The major problem is the __put_page_type() never handle the
> special case for shared pages.
> 
> If the (1) is changed as I said, the problem still exists as the following
> /* Before nominating domain 1, gfn 0x63 */
> (XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
> ci=8000000000000002, ti=0, owner_id=1
> /* After a failed nominate [desired: ci=8000000000000002, ti=0]*/
> (XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
> ci=8000000000000002, ti=8400000000000000, owner_id=1
> 
> 
> 2011/1/12 MaoXiaoyun <tinnycloud@hotmail.com>:
> > Hi Tim:
> >
> >         That's it, I am running the test, so far so good, I'll test more,
> > thanks.
> >
> >       Currently from the code of tapdisk, it indicates only *read only* IO
> > with secs 8 has the
> > chance to be shared, so does it mean only the parent image can be shared,
> > still it needs to
> > be opened read only, right?
> >
> >       So it looks like page sharing are refer to those pages contain disk
> > data been loaded
> > into Guest IO buffer, and this is the page sharing in Xen, right?
> >
> >
> 
> Bests,
> Jui-Hao
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 2683 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-13  2:26                     ` Jui-Hao Chiang
  2011-01-13  4:42                       ` MaoXiaoyun
@ 2011-01-13  9:24                       ` Tim Deegan
  2011-01-13 15:24                         ` Jui-Hao Chiang
  1 sibling, 1 reply; 50+ messages in thread
From: Tim Deegan @ 2011-01-13  9:24 UTC (permalink / raw)
  To: Jui-Hao Chiang; +Cc: MaoXiaoyun, xen devel

At 02:26 +0000 on 13 Jan (1294885615), Jui-Hao Chiang wrote:
> There seems a typo:
> "if ( get_page_and_type(page, d, PGT_shared_page) )" should be changed
> to "if ( !get_page_and_type(page, d, PGT_shared_page) )"

Oops!  Yes, thanks for that. :)

> (2) The major problem is the __put_page_type() never handle the
> special case for shared pages.
> 
> If the (1) is changed as I said, the problem still exists as the following
> /* Before nominating domain 1, gfn 0x63 */
> (XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
> ci=8000000000000002, ti=0, owner_id=1
> /* After a failed nominate  [desired: ci=8000000000000002, ti=0]*/
> (XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
> ci=8000000000000002, ti=8400000000000000, owner_id=1

Is this causing a real problem other than this printout?

One of the reasons to use get_page_and_type/put_page_and_type was that
it gets rid of the code that requires pages to have PGT_none before
they're shared. 

As I have been trying to explain, when a page has typecount 0 its type
is only relevant for the TLB flushing logic.  If there's still a place
in the page-sharing code that relies on (type == PGT_none && count == 0)
then AFAICS that's a bug. 
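
In code terms, the difference is roughly the following (a sketch reusing the
field and mask names from the patches in this thread; not an excerpt from any
particular tree):

    /* Over-strict: the old cmpxchg loop demanded that the stale type bits
     * be PGT_none as well as the typecount being zero. */
    if ( (page->u.inuse.type_info & (PGT_type_mask | PGT_count_mask))
         == PGT_none )
    {
        /* page treated as retypable only if it was last "no type" */
    }

    /* Sufficient: once the typecount is zero the page can be given a new
     * type (e.g. via get_page_and_type()), whatever stale type bits were
     * left behind by its previous use. */
    if ( (page->u.inuse.type_info & PGT_count_mask) == 0 )
    {
        /* safe to retype */
    }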

Cheers,

Tim.

> 2011/1/12 MaoXiaoyun <tinnycloud@hotmail.com>:
> > Hi Tim:
> >
> >         That's it, I am running the test, so far so good, I'll test more,
> > thanks.
> >
> >       Currently from the code of tapdisk, it indicates only *read only* IO
> > with secs 8 has the
> > chance to be shared, so does it mean only the parent image can be shared,
> > still it needs to
> > be opened read only, right?
> >
> >       So it looks like page sharing are refer to those pages contain disk
> > data been loaded
> > into Guest IO buffer, and this is the page sharing in Xen, right?
> >
> >
> 
> Bests,
> Jui-Hao

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-13  4:42                       ` MaoXiaoyun
@ 2011-01-13  9:55                         ` Tim Deegan
  0 siblings, 0 replies; 50+ messages in thread
From: Tim Deegan @ 2011-01-13  9:55 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, juihaochiang

At 04:42 +0000 on 13 Jan (1294893768), MaoXiaoyun wrote:
> Well, I think the discuss is around get_page/put_page, get_page_type/put_page_type
> 
> Could someone help to explain their usage and difference?

The reference counting mechanism is described at the top of
xen/arch/x86/mm.c.  get_page() takes a "TOT_COUNT" reference;
get_page_type() takes a "TYPE_COUNT" reference; get_page_and_type()
takes both. 
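
A rough sketch of how the three pairs are meant to be used (signatures as
they appear in this thread; illustrative only, not an authoritative excerpt):

    /* TOT_COUNT: a plain reference that keeps the frame allocated */
    if ( get_page(page, d) )
    {
        /* ... use the frame ... */
        put_page(page);
    }

    /* TYPE_COUNT: pins the page in one type while the count is non-zero */
    if ( get_page_type(page, PGT_writable_page) )
    {
        /* ... rely on the page keeping that type ... */
        put_page_type(page);
    }

    /* Both at once, as page_make_sharable() does in the patch above */
    if ( get_page_and_type(page, d, PGT_shared_page) )
    {
        /* ... */
        put_page_and_type(page);
    }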

Cheers,

Tim.

> > Date: Thu, 13 Jan 2011 10:26:55 +0800
> > Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> > From: juihaochiang@gmail.com
> > To: tinnycloud@hotmail.com
> > CC: tim.deegan@citrix.com; xen-devel@lists.xensource.com
> >
> > Hi, all:
> >
> > I think there is still a problem.
> > (1) I think using the get_page_and_type is definitely better since
> > it's a function already implemented there
> > There seems a typo:
> > "if ( get_page_and_type(page, d, PGT_shared_page) )" should be changed
> > to "if ( !get_page_and_type(page, d, PGT_shared_page) )" because the
> > function return 1 on success.
> >
> > (2) The major problem is the __put_page_type() never handle the
> > special case for shared pages.
> >
> > If the (1) is changed as I said, the problem still exists as the following
> > /* Before nominating domain 1, gfn 0x63 */
> > (XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
> > ci=8000000000000002, ti=0, owner_id=1
> > /* After a failed nominate [desired: ci=8000000000000002, ti=0]*/
> > (XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
> > ci=8000000000000002, ti=8400000000000000, owner_id=1
> >
> >
> > 2011/1/12 MaoXiaoyun <tinnycloud@hotmail.com>:
> > > Hi Tim:
> > >
> > >         That's it, I am running the test, so far so good, I'll test more,
> > > thanks.
> > >
> > >       Currently from the code of tapdisk, it indicates only *read only* IO
> > > with secs 8 has the
> > > chance to be shared, so does it mean only the parent image can be shared,
> > > still it needs to
> > > be opened read only, right?
> > >
> > >       So it looks like page sharing are refer to those pages contain disk
> > > data been loaded
> > > into Guest IO buffer, and this is the page sharing in Xen, right?
> > >
> > >
> >
> > Bests,
> > Jui-Hao

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-13  1:48                   ` MaoXiaoyun
@ 2011-01-13 10:21                     ` Tim Deegan
  0 siblings, 0 replies; 50+ messages in thread
From: Tim Deegan @ 2011-01-13 10:21 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, juihaochiang

[-- Attachment #1: Type: text/plain, Size: 427 bytes --]

At 01:48 +0000 on 13 Jan (1294883302), MaoXiaoyun wrote:
> Hi Tim:
> 
>       More stress testing failed on a lock issue.

Here's the fix for that.  It's a surprisingly old bug in the
populate-on-demand code for EPT.  I'll check it in as soon as we've
managed to tag 4.1.0 RC1

Cheers,

Tim.

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

[-- Attachment #2: ept-pod-locking --]
[-- Type: text/plain, Size: 1304 bytes --]

x86/mm: fix EPT PoD locking to match the normal p2m case.

This recursive-locking bug was fixed in the main p2m code in
20269:fd3d5d66c446 (in October 2009) but has lurked unseen in
the EPT side since then.  Copy the fix across.

Signed-off-by: Tim Deegan <Tim.Deegan@citrix.com>

diff -r ce208811f540 -r 35971b4c695b xen/arch/x86/mm/hap/p2m-ept.c
--- a/xen/arch/x86/mm/hap/p2m-ept.c	Thu Jan 13 01:26:44 2011 +0000
+++ b/xen/arch/x86/mm/hap/p2m-ept.c	Thu Jan 13 10:14:59 2011 +0000
@@ -45,19 +45,26 @@ static int ept_pod_check_and_populate(st
                                       ept_entry_t *entry, int order,
                                       p2m_query_t q)
 {
+    /* Only take the lock if we don't already have it.  Otherwise it
+     * wouldn't be safe to do p2m lookups with the p2m lock held */
+    int do_locking = !p2m_locked_by_me(p2m);
     int r;
-    p2m_lock(p2m);
+
+    if ( do_locking )
+        p2m_lock(p2m);
 
     /* Check to make sure this is still PoD */
     if ( entry->sa_p2mt != p2m_populate_on_demand )
     {
-        p2m_unlock(p2m);
+        if ( do_locking )
+            p2m_unlock(p2m);
         return 0;
     }
 
     r = p2m_pod_demand_populate(p2m, gfn, order, q);
 
-    p2m_unlock(p2m);
+    if ( do_locking )
+        p2m_unlock(p2m);
 
     return r;
 }

[-- Attachment #3: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-13  9:24                       ` Tim Deegan
@ 2011-01-13 15:24                         ` Jui-Hao Chiang
  2011-01-13 15:53                           ` Tim Deegan
  0 siblings, 1 reply; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-13 15:24 UTC (permalink / raw)
  To: Tim Deegan; +Cc: MaoXiaoyun, xen devel

Hi, Tim:

>> (2) The major problem is the __put_page_type() never handle the
>> special case for shared pages.
>>
>> If the (1) is changed as I said, the problem still exists as the following
>> /* Before nominating domain 1, gfn 0x63 */
>> (XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
>> ci=8000000000000002, ti=0, owner_id=1
>> /* After a failed nominate  [desired: ci=8000000000000002, ti=0]*/
>> (XEN) Debug for domain=1, gfn=63, Debug page: MFN=4836c7 is
>> ci=8000000000000002, ti=8400000000000000, owner_id=1
>
> Is this causing a real problem other than this printout?
>
> One of the reasons to use get_page_and_type/put_page_and_type was that
> it gets rid of the code that requires pages to have PGT_none before
> they're shared.
>
> As I have been trying to explain, when a page has typecount 0 its type
> is only relevant for the TLB flushing logic.  If there's still a place
> in the page-sharing code that relies on (type == PGT_none && count == 0)
> then AFAICS that's a bug.
>

Thanks for the clarification. As you said, the following is excerpted
from mm.c:
"* So, type_count is a count of the number of times a frame is being
 * referred to in its current incarnation. Therefore, a page can only
 * change its type when its type count is zero."

After testing the code with your patch, mem_sharing works OK.
And as that comment says, when (type_count & PGT_count_mask) is zero,
it is OK to change the page type (even when there is an old value in
type_count & PGT_type_mask, e.g., ti=8400000000000000).

Thanks,
Jui-Hao

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-13 15:24                         ` Jui-Hao Chiang
@ 2011-01-13 15:53                           ` Tim Deegan
  2011-01-14  2:04                             ` MaoXiaoyun
  2011-01-14  2:04                             ` MaoXiaoyun
  0 siblings, 2 replies; 50+ messages in thread
From: Tim Deegan @ 2011-01-13 15:53 UTC (permalink / raw)
  To: Jui-Hao Chiang; +Cc: MaoXiaoyun, xen devel

At 15:24 +0000 on 13 Jan (1294932299), Jui-Hao Chiang wrote:
> After testing the code with your patch, it's ok for the mem_sharing.
> And as the argument says, when (type_count & PGT_count_mask) is zero,
> it's ok for changing the page type. (even when there is a old value in
> type_count & PGT_type_mask, e.g., ti=8400000000000000)

Great, thanks.  I've applied that change as cset 22745:32b7a4f2d399
of xen-unstable and the EPT locking fix as 22744:b01ef59c8c80 .
They're in the staging tree and will hit the public tree the next time
the automatic regression tests pass. 

Cheers,

Tim.

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-13 15:53                           ` Tim Deegan
@ 2011-01-14  2:04                             ` MaoXiaoyun
  2011-01-14 17:00                               ` Jui-Hao Chiang
  2011-01-14  2:04                             ` MaoXiaoyun
  1 sibling, 1 reply; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-14  2:04 UTC (permalink / raw)
  To: xen devel; +Cc: tim.deegan, juihaochiang


[-- Attachment #1.1: Type: text/plain, Size: 4689 bytes --]


Hi Tim:
 
     Thanks for the patch. Xen panics on a more stressful test (12 HVMs, each of them rebooting every 30 minutes).
     Please refer to the log below.
 
blktap_sysfs_create: adding attributes for dev ffff8801044bc400
blktap_sysfs_create: adding attributes for dev ffff8801044bc200
__ratelimit: 4 callbacks suppressed
(XEN) Xen BUG at mem_sharing.c:454
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c4801bf52c>] mem_sharing_gfn_account+0x5c/0x70
(XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: 0000000000000001   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 000000000000005f   rdi: 000000000000005f
(XEN) rbp: ffff8305894f0fc0   rsp: ffff82c48035fc48   r8:  ffff82f600000000
(XEN) r9:  00007fffcdbd0fb8   r10: ffff82c4801f8c70   r11: 0000000000000282
(XEN) r12: ffff82c48035fe28   r13: ffff8303192a3bf0   r14: ffff82f60b966700
(XEN) r15: 0000000000000006   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 000000032ea58000   cr2: ffff880119c2e668
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c48035fc48:
(XEN)    00000000fffffff7 ffff82c4801bf8c0 0000000000553b86 ffff8305894f0fc0
(XEN)    ffff8302f4d12cf0 0000000000553b86 ffff82f603e28580 ffff82c48035fe38
(XEN)    ffff83023fe60000 ffff82c48035fe28 0000000000305000 0000000000000006
(XEN)    0000000000000006 ffff82c4801c0724 ffff82c4801447da 0000000000553b86
(XEN)    000000000001a938 00000000006ee000 00000000006ee000 ffff82c4801457fd
(XEN)    0000000000000096 0000000000000001 ffff82c48035fd30 0000000000000080
(XEN)    ffff82c480376980 ffff82c480251080 0000000000000292 ffff82c48011c519
(XEN)    ffff82c48035fe28 0000000000000080 0000000000000000 ffff8302ef312fa0
(XEN)    ffff8300b4aee000 ffff82c48025f080 ffff82c480251080 ffff82c480118351
(XEN)    0000000000000080 0000000000000000 ffff8300b4aef708 00000de9e9529c40
(XEN)    ffff8300b4aee000 0000000000000292 ffff8305cf9f09b8 0000000000000001
(XEN)    0000000000000001 0000000000000000 00000000002159e6 fffffffffffffff3
(XEN)    00000000006ee000 ffff82c48035fe28 0000000000305000 0000000000000006
(XEN)    0000000000000006 ffff82c480104373 ffff8305cf9f09c0 ffff82c4801a0b63
(XEN)    00000000159e6070 ffff8305cf9f0000 0000000000000007 ffff83023fe60180
(XEN)    0000000600000039 0000000000000000 00007fae14b30003 000000000054fdad
(XEN)    0000000000553b86 ffffffffff600429 000000004d2f26e8 0000000000088742
(XEN)    0000000000000000 00007fae14b30070 00007fae14b30000 00007fffcdbd0f50
(XEN)    00007fae14b30078 0000000000430e98 00007fffcdbd0fb8 0000000000cd39c8
(XEN)    0005aeb700000007 00007fae15bd2ab0 0000000000000000 0000000000000246
(XEN) Xen call trace:
(XEN)    [<ffff82c4801bf52c>] mem_sharing_gfn_account+0x5c/0x70
(XEN)    [<ffff82c4801bf8c0>] mem_sharing_share_pages+0x170/0x310
(XEN)    [<ffff82c4801c0724>] mem_sharing_domctl+0xe4/0x130
(XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
(XEN)    [<ffff82c4801457fd>] arch_do_domctl+0xdad/0x1f90
(XEN)    [<ffff82c48011c519>] cpumask_raise_softirq+0x89/0xa0
(XEN)    [<ffff82c480118351>] csched_vcpu_wake+0x101/0x1b0
(XEN)    [<ffff82c480104373>] do_domctl+0x163/0x1000
(XEN)    [<ffff82c4801a0b63>] hvm_set_callback_irq_level+0xe3/0x110
(XEN)    [<ffff82c4801e3169>] syscall_enter+0xa9/0xae
(XEN)    
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Xen BUG at mem_sharing.c:454
(XEN) ****************************************
(XEN) 
(XEN) Manual reset required ('noreboot' specified)
 
 
 

 
> Date: Thu, 13 Jan 2011 15:53:44 +0000
> From: Tim.Deegan@citrix.com
> To: juihaochiang@gmail.com
> CC: tinnycloud@hotmail.com; xen-devel@lists.xensource.com
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> 
> At 15:24 +0000 on 13 Jan (1294932299), Jui-Hao Chiang wrote:
> > After testing the code with your patch, it's ok for the mem_sharing.
> > And as the argument says, when (type_count & PGT_count_mask) is zero,
> > it's ok for changing the page type. (even when there is a old value in
> > type_count & PGT_type_mask, e.g., ti=8400000000000000)
> 
> Great, thanks. I've applied that change as cset 22745:32b7a4f2d399
> of xen-unstable and the EPT locking fix as 22744:b01ef59c8c80 .
> They're in the staging tree and will hit the public tree the next time
> the automatic regression tests pass. 
> 
> Cheers,
> 
> Tim.
> 
> -- 
> Tim Deegan <Tim.Deegan@citrix.com>
> Principal Software Engineer, Xen Platform Team
> Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 6072 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread


* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-14  2:04                             ` MaoXiaoyun
@ 2011-01-14 17:00                               ` Jui-Hao Chiang
  2011-01-17  6:00                                 ` MaoXiaoyun
  2011-01-17  8:43                                 ` MaoXiaoyun
  0 siblings, 2 replies; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-14 17:00 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, tim.deegan

Hi, all:

Is it possible that the domain is dying?
In mem_sharing_gfn_account(): could you try the following?

d = get_domain_by_id(gfn->domain);
if (!d) printk("null domain %x\n", gfn->domain); /* add this line to
see which domain id you see */
BUG_ON(!d);

When this domain id is printed out, could you check whether that domain
is dying?
If the domain is dying, then the question seems to be:
"Given a domain id from the gfn_info, how do we know that the domain is
dying? Or have we stored wrong information inside the hash list?"
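
For instance, a minimal probe along those lines (illustrative only; it reuses
the gfn_info fields and the get_domain_by_id()/put_domain() helpers already
used by this code, and is not meant as a fix) could be:

d = get_domain_by_id(gfn->domain);
if ( d == NULL )
    printk("null domain %x\n", gfn->domain);      /* owner has vanished entirely */
else
{
    if ( d->is_dying )                            /* owner is being torn down */
        printk("domain %d is dying\n", d->domain_id);
    put_domain(d);                                /* drop the ref taken above */
}
BUG_ON(d == NULL);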

2011/1/14 MaoXiaoyun <tinnycloud@hotmail.com>:
> Hi Tim:
>
>      Thanks for the patch, xen panic on more stressed test. ( 12HVMS, each
> of them reboot every 30minutes).
>      Please refer to below log.
>
> blktap_sysfs_create: adding attributes for dev ffff8801044bc400
> blktap_sysfs_create: adding attributes for dev ffff8801044bc200
> __ratelimit: 4 callbacks suppressed
> (XEN) Xen BUG at mem_sharing.c:454
> (XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82c4801bf52c>] mem_sharing_gfn_account+0x5c/0x70
> (XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
> (XEN) rax: 0000000000000000   rbx: 0000000000000001   rcx: 0000000000000000
> (XEN) rdx: 0000000000000000   rsi: 000000000000005f   rdi: 000000000000005f
> (XEN) rbp: ffff8305894f0fc0   rsp: ffff82c48035fc48   r8:  ffff82f600000000
> (XEN) r9:  00007fffcdbd0fb8   r10: ffff82c4801f8c70   r11: 0000000000000282
> (XEN) r12: ffff82c48035fe28   r13: ffff8303192a3bf0   r14: ffff82f60b966700
> (XEN) r15: 0000000000000006   cr0: 0000000080050033   cr4: 00000000000026f0
> (XEN) cr3: 000000032ea58000   cr2: ffff880119c2e668
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff82c48035fc48:
> (XEN)    00000000fffffff7 ffff82c4801bf8c0 0000000000553b86 ffff8305894f0fc0
> (XEN)    ffff8302f4d12cf0 0000000000553b86 ffff82f603e28580 ffff82c48035fe38
> (XEN)    ffff83023fe60000 ffff82c48035fe28 0000000000305000 0000000000000006
> (XEN)    0000000000000006 ffff82c4801c0724 ffff82c4801447da 0000000000553b86
> (XEN)    000000000001a938 00000000006ee000 00000000006ee000 ffff82c4801457fd
> (XEN)    0000000000000096 0000000000000001 ffff82c48035fd30 0000000000000080
> (XEN)    ffff82c480376980 ffff82c480251080 0000000000000292 ffff82c48011c519
> (XEN)    ffff82c48035fe28 0000000000000080 0000000000000000 ffff8302ef312fa0
> (XEN)    ffff8300b4aee000 ffff82c48025f080 ffff82c480251080 ffff82c480118351
> (XEN)    0000000000000080 0000000000000000 ffff8300b4aef708 00000de9e9529c40
> (XEN)    ffff8300b4aee000 0000000000000292 ffff8305cf9f09b8 0000000000000001
> (XEN)    0000000000000001 0000000000000000 00000000002159e6 fffffffffffffff3
> (XEN)    00000000006ee000 ffff82c48035fe28 0000000000305000 0000000000000006
> (XEN)    0000000000000006 ffff82c480104373 ffff8305cf9f09c0 ffff82c4801a0b63
> (XEN)    00000000159e6070 ffff8305cf9f0000 0000000000000007 ffff83023fe60180
> (XEN)    0000000600000039 0000000000000000 00007fae14b30003 000000000054fdad
> (XEN)    0000000000553b86 ffffffffff600429 000000004d2f26e8 0000000000088742
> (XEN)    0000000000000000 00007fae14b30070 00007fae14b30000 00007fffcdbd0f50
> (XEN)    00007fae14b30078 0000000000430e98 00007fffcdbd0fb8 0000000000cd39c8
> (XEN)    0005aeb700000007 00007fae15bd2ab0 0000000000000000 0000000000000246
> (XEN) Xen call trace:
> (XEN)    [<ffff82c4801bf52c>] mem_sharing_gfn_account+0x5c/0x70
> (XEN)    [<ffff82c4801bf8c0>] mem_sharing_share_pages+0x170/0x310
> (XEN)    [<ffff82c4801c0724>] mem_sharing_domctl+0xe4/0x130
> (XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
> (XEN)    [<ffff82c4801457fd>] arch_do_domctl+0xdad/0x1f90
> (XEN)    [<ffff82c48011c519>] cpumask_raise_softirq+0x89/0xa0
> (XEN)    [<ffff82c480118351>] csched_vcpu_wake+0x101/0x1b0
> (XEN)    [<ffff82c480104373>] do_domctl+0x163/0x1000
> (XEN)    [<ffff82c4801a0b63>] hvm_set_callback_irq_level+0xe3/0x110
> (XEN)    [<ffff82c4801e3169>] syscall_enter+0xa9/0xae
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Xen BUG at mem_sharing.c:454
> (XEN) ****************************************
> (XEN)
> (XEN) Manual reset required ('noreboot' specified)
>

Bests,
Jui-Hao

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-14 17:00                               ` Jui-Hao Chiang
@ 2011-01-17  6:00                                 ` MaoXiaoyun
  2011-01-17  8:43                                 ` MaoXiaoyun
  1 sibling, 0 replies; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-17  6:00 UTC (permalink / raw)
  To: juihaochiang, xen devel; +Cc: tim.deegan


[-- Attachment #1.1: Type: text/plain, Size: 6024 bytes --]


Hi Jui-Hao:
 
    Domain ID is 4.
   
   Well, the call chain is domain_destroy() -> complete_domain_destroy() -> arch_domain_destroy() ->
paging_final_teardown() -> hap_final_teardown() -> p2m_teardown() -> mem_sharing_unshare_page(),
 
   so it looks like it is possible that the domain is destroyed before the handle is removed from the hash table.
 
  Furthermore, I added the code below:
    if(mem_sharing_gfn_account(gfn_get_info(&ce->gfns), 1) == -1){
        printk("=====client not found, server %d client %d\n",
               gfn_get_info(&se->gfns)->domain, gfn_get_info(&ce->gfns)->domain);
        ret = XEN_DOMCTL_MEM_SHARING_C_HANDLE_INVALID;
        goto err_out;
    }

    if(mem_sharing_gfn_account(gfn_get_info(&se->gfns), 1) == -1){
        printk("=====server not found, server %d client %d\n",
               gfn_get_info(&se->gfns)->domain, gfn_get_info(&ce->gfns)->domain);
        ret = XEN_DOMCTL_MEM_SHARING_C_HANDLE_INVALID;
        goto err_out;
    }
 
     Those logs are printed out during the test. When all domains have been destroyed, I print out all hash entries and the table is empty,
     so that part is correct.
 
     What's your opinion?
 
> Date: Sat, 15 Jan 2011 01:00:27 +0800
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> From: juihaochiang@gmail.com
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; tim.deegan@citrix.com
> 
> Hi, all:
> 
> Is that possible that the domain is dying?
> In mem_sharing_gfn_account(): could you try the following?
> 
> d = get_domain_by_id(gfn->domain);
> if (!d) printk("null domain %x\n", gfn->domain); /* add this line to
> see which domain id you see */
> BUG_ON(!d);
> 
> When this domain id printed out, could you check if the printed domain
> id is dying?
> If the domain is dying, then the question seems to be:
> "Given a domain id from the gfn_info, how do we know the domain is
> dying? or we have stored a wrong information inside the hash list?"
> 
> 2011/1/14 MaoXiaoyun <tinnycloud@hotmail.com>:
> > Hi Tim:
> >
> >      Thanks for the patch, xen panic on more stressed test. ( 12HVMS, each
> > of them reboot every 30minutes).
> >      Please refer to below log.
> >
> > blktap_sysfs_create: adding attributes for dev ffff8801044bc400
> > blktap_sysfs_create: adding attributes for dev ffff8801044bc200
> > __ratelimit: 4 callbacks suppressed
> > (XEN) Xen BUG at mem_sharing.c:454
> > (XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
> > (XEN) CPU:    0
> > (XEN) RIP:    e008:[<ffff82c4801bf52c>] mem_sharing_gfn_account+0x5c/0x70
> > (XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
> > (XEN) rax: 0000000000000000   rbx: 0000000000000001   rcx: 0000000000000000
> > (XEN) rdx: 0000000000000000   rsi: 000000000000005f   rdi: 000000000000005f
> > (XEN) rbp: ffff8305894f0fc0   rsp: ffff82c48035fc48   r8:  ffff82f600000000
> > (XEN) r9:  00007fffcdbd0fb8   r10: ffff82c4801f8c70   r11: 0000000000000282
> > (XEN) r12: ffff82c48035fe28   r13: ffff8303192a3bf0   r14: ffff82f60b966700
> > (XEN) r15: 0000000000000006   cr0: 0000000080050033   cr4: 00000000000026f0
> > (XEN) cr3: 000000032ea58000   cr2: ffff880119c2e668
> > (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> > (XEN) Xen stack trace from rsp=ffff82c48035fc48:
> > (XEN)    00000000fffffff7 ffff82c4801bf8c0 0000000000553b86 ffff8305894f0fc0
> > (XEN)    ffff8302f4d12cf0 0000000000553b86 ffff82f603e28580 ffff82c48035fe38
> > (XEN)    ffff83023fe60000 ffff82c48035fe28 0000000000305000 0000000000000006
> > (XEN)    0000000000000006 ffff82c4801c0724 ffff82c4801447da 0000000000553b86
> > (XEN)    000000000001a938 00000000006ee000 00000000006ee000 ffff82c4801457fd
> > (XEN)    0000000000000096 0000000000000001 ffff82c48035fd30 0000000000000080
> > (XEN)    ffff82c480376980 ffff82c480251080 0000000000000292 ffff82c48011c519
> > (XEN)    ffff82c48035fe28 0000000000000080 0000000000000000 ffff8302ef312fa0
> > (XEN)    ffff8300b4aee000 ffff82c48025f080 ffff82c480251080 ffff82c480118351
> > (XEN)    0000000000000080 0000000000000000 ffff8300b4aef708 00000de9e9529c40
> > (XEN)    ffff8300b4aee000 0000000000000292 ffff8305cf9f09b8 0000000000000001
> > (XEN)    0000000000000001 0000000000000000 00000000002159e6 fffffffffffffff3
> > (XEN)    00000000006ee000 ffff82c48035fe28 0000000000305000 0000000000000006
> > (XEN)    0000000000000006 ffff82c480104373 ffff8305cf9f09c0 ffff82c4801a0b63
> > (XEN)    00000000159e6070 ffff8305cf9f0000 0000000000000007 ffff83023fe60180
> > (XEN)    0000000600000039 0000000000000000 00007fae14b30003 000000000054fdad
> > (XEN)    0000000000553b86 ffffffffff600429 000000004d2f26e8 0000000000088742
> > (XEN)    0000000000000000 00007fae14b30070 00007fae14b30000 00007fffcdbd0f50
> > (XEN)    00007fae14b30078 0000000000430e98 00007fffcdbd0fb8 0000000000cd39c8
> > (XEN)    0005aeb700000007 00007fae15bd2ab0 0000000000000000 0000000000000246
> > (XEN) Xen call trace:
> > (XEN)    [<ffff82c4801bf52c>] mem_sharing_gfn_account+0x5c/0x70
> > (XEN)    [<ffff82c4801bf8c0>] mem_sharing_share_pages+0x170/0x310
> > (XEN)    [<ffff82c4801c0724>] mem_sharing_domctl+0xe4/0x130
> > (XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
> > (XEN)    [<ffff82c4801457fd>] arch_do_domctl+0xdad/0x1f90
> > (XEN)    [<ffff82c48011c519>] cpumask_raise_softirq+0x89/0xa0
> > (XEN)    [<ffff82c480118351>] csched_vcpu_wake+0x101/0x1b0
> > (XEN)    [<ffff82c480104373>] do_domctl+0x163/0x1000
> > (XEN)    [<ffff82c4801a0b63>] hvm_set_callback_irq_level+0xe3/0x110
> > (XEN)    [<ffff82c4801e3169>] syscall_enter+0xa9/0xae
> > (XEN)
> > (XEN)
> > (XEN) ****************************************
> > (XEN) Panic on CPU 0:
> > (XEN) Xen BUG at mem_sharing.c:454
> > (XEN) ****************************************
> > (XEN)
> > (XEN) Manual reset required ('noreboot' specified)
> >
> 
> Bests,
> Jui-Hao
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 8839 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-14 17:00                               ` Jui-Hao Chiang
  2011-01-17  6:00                                 ` MaoXiaoyun
@ 2011-01-17  8:43                                 ` MaoXiaoyun
  2011-01-17  9:02                                   ` Jui-Hao Chiang
  1 sibling, 1 reply; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-17  8:43 UTC (permalink / raw)
  To: juihaochiang; +Cc: xen devel, tim.deegan


[-- Attachment #1.1: Type: text/plain, Size: 9678 bytes --]


Another failure, this time on the BUG() in mem_sharing_alloc_page():
 
 memset(&req, 0, sizeof(req));
 if(must_succeed) 
    {
        /* We do not support 'must_succeed' any more. External operations such
         * as grant table mappings may fail with OOM condition! 
         */
        BUG();===================>bug here
    }
    else
    {
        /* All foreign attempts to unshare pages should be handled through
         * 'must_succeed' case. */
        ASSERT(v->domain->domain_id == d->domain_id);
        vcpu_pause_nosync(v);
        req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
    }
        
log below:

(XEN) Xen BUG at mem_sharing.c:347
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c4801c0344>] mem_sharing_unshare_page+0x5d4/0x6e0
(XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff83030913eb78   rcx: 0000000000000000
(XEN) rdx: ffff82f6094d78e0   rsi: 0000000000000001   rdi: ffff82c48035f650
(XEN) rbp: ffff83048d950000   rsp: ffff82c48035f5e8   r8:  0000000000000000
(XEN) r9:  ffffffffffffffff   r10: 0000000000000001   r11: 0000000000000002
(XEN) r12: 000000000000ecbf   r13: ffff82c48035f628   r14: ffff83033f4fd760
(XEN) r15: ffff82f6025540c0   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 0000000339044000   cr2: ffff8801589fac78
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c48035f5e8:
(XEN)    ffff880159dcf000 ffff83033f4fd770 0000000000000000 ffff83030913eb60
(XEN)    0000000000235012 ffff8300bf554000 0000000100000009 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000d000000bf ffff83059b097000 000000000000ecbf
(XEN)    000000000012aa06 ffff82c48035f724 ffff8304f20d2a00 ffff8800b742bbc0
(XEN)    ffff83048d950000 ffff82c48010bfa9 ffff8304fca169b0 000000000053ea75
(XEN)    0000000000000000 ffff8304fca169b0 0000000000000000 ffff82c48035f6f8
(XEN)    0000000000000001 ffff8304fca169b0 ffff83023fe60000 ffff8304fca169b0
(XEN)    000002fd8035ff28 ffff8800b742bbc0 ffff880159dcf000 00003d3600000002
(XEN)    ffffffffffff003c 000000048cb88000 ffff82f60a7d4ea0 0000000d8010a650
(XEN)    0000000000000100 ffff83023fe60000 ffff83057cdbd8c0 ffffffffffffffea
(XEN)    ffff8800b742bb70 0000000000000000 ffff8800b742bbf0 ffff8800b742bbc0
(XEN)    0000000000000001 ffff82c48010da9b 0000000000000202 ffff82c48035fec8
(XEN)    ffff82c48035f7c8 00000000801880af ffff83023fe60010 0000000000000000
(XEN)    ffff82c400000001 ffff82c48035ff28 0000000000000008 ffff8800b742bbc0
(XEN)    ffff880159dcf000 0000000000000000 0000000000000000 00020000000002fd
(XEN)    000000000053ea75 ffff83021bdc47e8 ffff830310a20000 00000003370af718
(XEN)    0000000000000000 0000000000000000 001a000000000096 000000000053ea75
(XEN)    ffff83023fe514b0 ffff830310a20000 ffff880159d51000 0000000000000000
(XEN)    0000000000000000 00020000000005b6 0000000000509bb5 ffff83031884fdb0
(XEN) Xen call trace:
(XEN)    [<ffff82c4801c0344>] mem_sharing_unshare_page+0x5d4/0x6e0
(XEN)    [<ffff82c48010bfa9>] gnttab_map_grant_ref+0xbf9/0xe30
(XEN)    [<ffff82c48010da9b>] do_grant_table_op+0x14b/0x1080
(XEN)    [<ffff82c48015dc6b>] get_page_type+0xb/0x20
(XEN)    [<ffff82c48015df6d>] get_page_from_l1e+0x2ed/0x4c0
(XEN)    [<ffff82c480159ce9>] is_iomem_page+0x9/0x70
(XEN)    [<ffff82c48015b999>] put_page_from_l1e+0x59/0x150
(XEN)    [<ffff82c48015f6c5>] ptwr_emulated_update+0x2e5/0x420
(XEN)    [<ffff82c48015f946>] ptwr_emulated_write+0x86/0x90
(XEN)    [<ffff82c480159cb5>] ptwr_emulated_read+0x15/0x40
(XEN)    [<ffff82c48017511d>] x86_emulate+0x7ad/0x115d0
(XEN)    [<ffff82c48010fb44>] do_xen_version+0xb4/0x480
(XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
(XEN)    [<ffff82c48011c519>] cpumask_raise_softirq+0x89/0xa0
(XEN)    [<ffff82c4801872fc>] reprogram_hpet_evt_channel+0x8c/0x110
(XEN)    [<ffff82c4801880af>] handle_hpet_broadcast+0x16f/0x1d0
(XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
(XEN)    [<ffff82c480187622>] hpet_legacy_irq_tick+0x42/0x50
(XEN)    [<ffff82c48011c519>] cpumask_raise_softirq+0x89/0xa0
(XEN)    [<ffff82c4801478e2>] update_runstate_area+0x102/0x110
(XEN)    [<ffff82c480118351>] csched_vcpu_wake+0x101/0x1b0
(XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
(XEN)    [<ffff82c48015a1d8>] get_page+0x28/0xf0
(XEN)    [<ffff82c48015ed72>] do_update_descriptor+0x1d2/0x210
(XEN)    [<ffff82c480113d7e>] do_multicall+0x14e/0x340
(XEN)    [<ffff82c4801e3169>] syscall_enter+0xa9/0xae
(XEN)    
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Xen BUG at mem_sharing.c:347
(XEN) ****************************************
(XEN) 
(XEN) Manual reset required ('noreboot' specified)
 
> Date: Sat, 15 Jan 2011 01:00:27 +0800
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> From: juihaochiang@gmail.com
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; tim.deegan@citrix.com
> 
> Hi, all:
> 
> Is that possible that the domain is dying?
> In mem_sharing_gfn_account(): could you try the following?
> 
> d = get_domain_by_id(gfn->domain);
> if (!d) printk("null domain %x\n", gfn->domain); /* add this line to
> see which domain id you see */
> BUG_ON(!d);
> 
> When this domain id printed out, could you check if the printed domain
> id is dying?
> If the domain is dying, then the question seems to be:
> "Given a domain id from the gfn_info, how do we know the domain is
> dying? or we have stored a wrong information inside the hash list?"
> 
> 2011/1/14 MaoXiaoyun <tinnycloud@hotmail.com>:
> > Hi Tim:
> >
> >      Thanks for the patch, xen panic on more stressed test. ( 12HVMS, each
> > of them reboot every 30minutes).
> >      Please refer to below log.
> >
> > blktap_sysfs_create: adding attributes for dev ffff8801044bc400
> > blktap_sysfs_create: adding attributes for dev ffff8801044bc200
> > __ratelimit: 4 callbacks suppressed
> > (XEN) Xen BUG at mem_sharing.c:454
> > (XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
> > (XEN) CPU:    0
> > (XEN) RIP:    e008:[<ffff82c4801bf52c>] mem_sharing_gfn_account+0x5c/0x70
> > (XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
> > (XEN) rax: 0000000000000000   rbx: 0000000000000001   rcx: 0000000000000000
> > (XEN) rdx: 0000000000000000   rsi: 000000000000005f   rdi: 000000000000005f
> > (XEN) rbp: ffff8305894f0fc0   rsp: ffff82c48035fc48   r8:  ffff82f600000000
> > (XEN) r9:  00007fffcdbd0fb8   r10: ffff82c4801f8c70   r11: 0000000000000282
> > (XEN) r12: ffff82c48035fe28   r13: ffff8303192a3bf0   r14: ffff82f60b966700
> > (XEN) r15: 0000000000000006   cr0: 0000000080050033   cr4: 00000000000026f0
> > (XEN) cr3: 000000032ea58000   cr2: ffff880119c2e668
> > (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> > (XEN) Xen stack trace from rsp=ffff82c48035fc48:
> > (XEN)    00000000fffffff7 ffff82c4801bf8c0 0000000000553b86 ffff8305894f0fc0
> > (XEN)    ffff8302f4d12cf0 0000000000553b86 ffff82f603e28580 ffff82c48035fe38
> > (XEN)    ffff83023fe60000 ffff82c48035fe28 0000000000305000 0000000000000006
> > (XEN)    0000000000000006 ffff82c4801c0724 ffff82c4801447da 0000000000553b86
> > (XEN)    000000000001a938 00000000006ee000 00000000006ee000 ffff82c4801457fd
> > (XEN)    0000000000000096 0000000000000001 ffff82c48035fd30 0000000000000080
> > (XEN)    ffff82c480376980 ffff82c480251080 0000000000000292 ffff82c48011c519
> > (XEN)    ffff82c48035fe28 0000000000000080 0000000000000000 ffff8302ef312fa0
> > (XEN)    ffff8300b4aee000 ffff82c48025f080 ffff82c480251080 ffff82c480118351
> > (XEN)    0000000000000080 0000000000000000 ffff8300b4aef708 00000de9e9529c40
> > (XEN)    ffff8300b4aee000 0000000000000292 ffff8305cf9f09b8 0000000000000001
> > (XEN)    0000000000000001 0000000000000000 00000000002159e6 fffffffffffffff3
> > (XEN)    00000000006ee000 ffff82c48035fe28 0000000000305000 0000000000000006
> > (XEN)    0000000000000006 ffff82c480104373 ffff8305cf9f09c0 ffff82c4801a0b63
> > (XEN)    00000000159e6070 ffff8305cf9f0000 0000000000000007 ffff83023fe60180
> > (XEN)    0000000600000039 0000000000000000 00007fae14b30003 000000000054fdad
> > (XEN)    0000000000553b86 ffffffffff600429 000000004d2f26e8 0000000000088742
> > (XEN)    0000000000000000 00007fae14b30070 00007fae14b30000 00007fffcdbd0f50
> > (XEN)    00007fae14b30078 0000000000430e98 00007fffcdbd0fb8 0000000000cd39c8
> > (XEN)    0005aeb700000007 00007fae15bd2ab0 0000000000000000 0000000000000246
> > (XEN) Xen call trace:
> > (XEN)    [<ffff82c4801bf52c>] mem_sharing_gfn_account+0x5c/0x70
> > (XEN)    [<ffff82c4801bf8c0>] mem_sharing_share_pages+0x170/0x310
> > (XEN)    [<ffff82c4801c0724>] mem_sharing_domctl+0xe4/0x130
> > (XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
> > (XEN)    [<ffff82c4801457fd>] arch_do_domctl+0xdad/0x1f90
> > (XEN)    [<ffff82c48011c519>] cpumask_raise_softirq+0x89/0xa0
> > (XEN)    [<ffff82c480118351>] csched_vcpu_wake+0x101/0x1b0
> > (XEN)    [<ffff82c480104373>] do_domctl+0x163/0x1000
> > (XEN)    [<ffff82c4801a0b63>] hvm_set_callback_irq_level+0xe3/0x110
> > (XEN)    [<ffff82c4801e3169>] syscall_enter+0xa9/0xae
> > (XEN)
> > (XEN)
> > (XEN) ****************************************
> > (XEN) Panic on CPU 0:
> > (XEN) Xen BUG at mem_sharing.c:454
> > (XEN) ****************************************
> > (XEN)
> > (XEN) Manual reset required ('noreboot' specified)
> >
> 
> Bests,
> Jui-Hao
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 13165 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-17  8:43                                 ` MaoXiaoyun
@ 2011-01-17  9:02                                   ` Jui-Hao Chiang
  2011-01-17  9:15                                     ` MaoXiaoyun
                                                       ` (2 more replies)
  0 siblings, 3 replies; 50+ messages in thread
From: Jui-Hao Chiang @ 2011-01-17  9:02 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, tim.deegan

Hi, tinnycloud:

Do you have the xenpaging tool running properly?
I haven't gone through that one, but it seems you have run out of memory.
When this case happens, mem_sharing will request memory from the
xenpaging daemon, which pages guest memory out to free some up.
Otherwise, the allocation would fail.
Is this your scenario?

Bests,
Jui-Hao

2011/1/17 MaoXiaoyun <tinnycloud@hotmail.com>:
> Another failure on BUG() in mem_sharing_alloc_page()
>
>  memset(&req, 0, sizeof(req));
>  if(must_succeed)
>     {
>         /* We do not support 'must_succeed' any more. External operations
> such
>          * as grant table mappings may fail with OOM condition!
>          */
>         BUG();===================>bug here
>     }
>     else
>     {
>         /* All foreign attempts to unshare pages should be handled through
>          * 'must_succeed' case. */
>         ASSERT(v->domain->domain_id == d->domain_id);
>         vcpu_pause_nosync(v);
>         req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
>     }
>

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-17  9:02                                   ` Jui-Hao Chiang
@ 2011-01-17  9:15                                     ` MaoXiaoyun
  2011-01-18  9:42                                     ` MaoXiaoyun
  2011-01-20  7:19                                     ` [PATCH] mem_sharing: fix race condition of nominate and unshare MaoXiaoyun
  2 siblings, 0 replies; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-17  9:15 UTC (permalink / raw)
  To: juihaochiang; +Cc: xen devel, tim.deegan


[-- Attachment #1.1: Type: text/plain, Size: 1752 bytes --]


Hi Jui-Hao:
 
     I don't have xenpaging running. 
     Is it really necessary for memory sharing?
           
     But I don't think my box ran into OOM: I have been periodically 
checking memory through "xm info", and it shows about 10G of free memory. Also, 
this is the first time the test has hit this bug.
 
     Could you let me know how you set up your memory sharing scenario?

 
> Date: Mon, 17 Jan 2011 17:02:02 +0800
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> From: juihaochiang@gmail.com
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; tim.deegan@citrix.com
> 
> Hi, tinnycloud:
> 
> Do you have xenpaging tools running properly?
> I haven't gone through that one, but it seems you have run out of memory.
> When this case happens, mem_sharing will request memory to the
> xenpaging daemon, which tends to page out and free some memory.
> Otherwise, the allocation would fail.
> Is this your scenario?
> 
> Bests,
> Jui-Hao
> 
> 2011/1/17 MaoXiaoyun <tinnycloud@hotmail.com>:
> > Another failure on BUG() in mem_sharing_alloc_page()
> >
> >  memset(&req, 0, sizeof(req));
> >  if(must_succeed)
> >     {
> >         /* We do not support 'must_succeed' any more. External operations
> > such
> >          * as grant table mappings may fail with OOM condition!
> >          */
> >         BUG();===================>bug here
> >     }
> >     else
> >     {
> >         /* All foreign attempts to unshare pages should be handled through
> >          * 'must_succeed' case. */
> >         ASSERT(v->domain->domain_id == d->domain_id);
> >         vcpu_pause_nosync(v);
> >         req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
> >     }
> >
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 2895 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-17  9:02                                   ` Jui-Hao Chiang
  2011-01-17  9:15                                     ` MaoXiaoyun
@ 2011-01-18  9:42                                     ` MaoXiaoyun
  2011-01-18 12:05                                       ` MaoXiaoyun
  2011-01-20  7:19                                     ` [PATCH] mem_sharing: fix race condition of nominate and unshare MaoXiaoyun
  2 siblings, 1 reply; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-18  9:42 UTC (permalink / raw)
  To: juihaochiang, xen devel; +Cc: tim.deegan


[-- Attachment #1.1: Type: text/plain, Size: 8326 bytes --]


Hi Tim & Jui-Hao:
 
      When I use a Linux HVM instead of a Windows HVM, more bugs show up.
 
      I only start one VM, and when I destroy it, Xen crashes in mem_sharing_unshare_page():
at line 709 below, hash_entry is NULL. Later I found the handle had already been removed in 
mem_sharing_share_pages(); please refer to the logs below.
 
----mem_sharing_unshare_page
708     /* Remove the gfn_info from the list */
709     hash_entry = mem_sharing_hash_lookup(handle); 
710     list_for_each(le, &hash_entry->gfns)
711     {
712         gfn_info = list_entry(le, struct gfn_info, list);
713         if((gfn_info->gfn == gfn) && (gfn_info->domain == d->domain_id))
714             goto gfn_found;
715     }
716     printk("Could not find gfn_info for shared gfn: %lx\n", gfn);
717     BUG();
 
 
----mem_sharing_share_page
 
649     printk("del %lu\n", ch);
650     list_for_each_safe(le, te, &ce->gfns)
651     {
652         gfn = list_entry(le, struct gfn_info, list);
653         /* Get the source page and type, this should never fail 
654          * because we are under shr lock, and got non-null se */
655         BUG_ON(!get_page_and_type(spage, dom_cow, PGT_shared_page));
656         /* Move the gfn_info from ce list to se list */
657         list_del(&gfn->list);
658         d = get_domain_by_id(gfn->domain);
659         BUG_ON(!d);
660         BUG_ON(set_shared_p2m_entry(d, gfn->gfn, se->mfn) == 0);
661         put_domain(d);
662         list_add(&gfn->list, &se->gfns);
663         put_page_and_type(cpage);
664         mem_sharing_debug_gfn(d, gfn->gfn);
665     } 
666     ASSERT(list_empty(&ce->gfns));
667     mem_sharing_hash_delete(ch);
668     atomic_inc(&nr_saved_mfns);
669     /* Free the client page */
670     if(test_and_clear_bit(_PGC_allocated, &cpage->count_info))
671         put_page(cpage);
672     mem_sharing_debug_gfn(d, gfn->gfn);                                                                                                                  
673     ret = 0;
 
 
-------log------------
      
(XEN) del 31261
(XEN) Debug for domain=1, gfn=75fd5, Debug page: MFN=179fd5 is ci=8000000000000005, ti=8400000000000001, owner_id=32755
(XEN) Debug for domain=1, gfn=75fd5, Debug page: MFN=179fd5 is ci=4, ti=8400000000000001, owner_id=32755
(XEN) del 31262
(XEN) Debug for domain=1, gfn=75fd6, Debug page: MFN=179fd6 is ci=8000000000000005, ti=8400000000000001, owner_id=32755
(XEN) Debug for domain=1, gfn=75fd6, Debug page: MFN=179fd6 is ci=4, ti=8400000000000001, owner_id=32755
(XEN) del 31263
(XEN) Debug for domain=1, gfn=75fd7, Debug page: MFN=179fd7 is ci=8000000000000005, ti=8400000000000001, owner_id=32755
(XEN) Debug for domain=1, gfn=75fd7, Debug page: MFN=179fd7 is ci=4, ti=8400000000000001, owner_id=32755
(XEN) del 31264
(XEN) Debug for domain=1, gfn=75fd8, Debug page: MFN=179fd8 is ci=8000000000000005, ti=8400000000000001, owner_id=32755
(XEN) Debug for domain=1, gfn=75fd8, Debug page: MFN=179fd8 is ci=4, ti=8400000000000001, owner_id=32755
(XEN) del 31265
(XEN) Debug for domain=1, gfn=75fd9, Debug page: MFN=179fd9 is ci=8000000000000005, ti=8400000000000001, owner_id=32755
(XEN) Debug for domain=1, gfn=75fd9, Debug page: MFN=179fd9 is ci=4, ti=8400000000000001, owner_id=32755
(XEN) del 31266
(XEN) Debug for domain=1, gfn=75fda, Debug page: MFN=179fda is ci=8000000000000005, ti=8400000000000001, owner_id=32755
(XEN) Debug for domain=1, gfn=75fda, Debug page: MFN=179fda is ci=4, ti=8400000000000001, owner_id=32755
(XEN) del 31267
(XEN) Debug for domain=1, gfn=75fdb, Debug page: MFN=179fdb is ci=8000000000000005, ti=8400000000000001, owner_id=32755
(XEN) Debug for domain=1, gfn=75fdb, Debug page: MFN=179fdb is ci=4, ti=8400000000000001, owner_id=32755
(XEN) del 31268
(XEN) Debug for domain=1, gfn=75fdc, Debug page: MFN=179fdc is ci=8000000000000005, ti=8400000000000001, owner_id=32755
(XEN) Debug for domain=1, gfn=75fdc, Debug page: MFN=179fdc is ci=4, ti=8400000000000001, owner_id=32755
(XEN) del 31269
(XEN) Debug for domain=1, gfn=75fdd, Debug page: MFN=179fdd is ci=8000000000000005, ti=8400000000000001, owner_id=32755
(XEN) Debug for domain=1, gfn=75fdd, Debug page: MFN=179fdd is ci=4, ti=8400000000000001, owner_id=32755
(XEN) del 31270
(XEN) Debug for domain=1, gfn=75fde, Debug page: MFN=179fde is ci=8000000000000005, ti=8400000000000001, owner_id=32755
(XEN) Debug for domain=1, gfn=75fde, Debug page: MFN=179fde is ci=4, ti=8400000000000001, owner_id=32755
blktap_sysfs_destroy
(XEN) handle 31261
(XEN) Debug for domain=1, gfn=75fd5, Debug page: MFN=179fd5 is ci=1, ti=8400000000000001, owner_id=32755
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    1
(XEN) RIP:    e008:[<ffff82c4801bfeeb>] mem_sharing_unshare_page+0x1ab/0x740
(XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: 0000000000179fd5   rcx: 0000000000000082
(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff82c48021e9c4
(XEN) rbp: ffff8302dd6a0000   rsp: ffff83023ff3fcd0   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 00000000fffffff8   r11: 0000000000000005
(XEN) r12: 0000000000075fd5   r13: 0000000000000002   r14: 0000000000000000
(XEN) r15: ffff82f602f3faa0   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 000000031b83b000   cr2: 0000000000000018
(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff83023ff3fcd0:
(XEN)    ffff83023fe60000 ffff830173da4410 0000000100000000 0000000000000000
(XEN)    0000000000007a1d 0000000000000001 0000000000000009 ffff8302dd6a0000
(XEN)    ffff83023ff3fd40 ffff82c4801df7e9 ffff83023ff3ff28 ffff83023ff3fd94
(XEN)    0000000000075fd5 0000000d000001d5 ffff83033e950000 0000000000075fd5
(XEN)    ffff83063fde8ef0 ffff8302dd6a0000 ffff83023ff3fd94 ffff82c48037a980
(XEN)    ffff82c480253080 ffff82c4801b8949 ffff8302dd6a0180 ffff82c48037a980
(XEN)    0000000d80253080 ffff8302dd6a0000 ffff8302dd6a0000 00000000ffffffff
(XEN)    ffff8302dd6a0000 ffff82c4801b681c 0000000000000000 ffff82c480149b74
(XEN)    ffff8300bf560000 ffff8300bf560000 fffffffffffffff8 00000000ffffffff
(XEN)    ffff8302dd6a0000 ffff82c4801061fc ffff8300bf552060 0000000000000000
(XEN)    ffff82c4802531a0 0000000000000001 ffff82c480376980 ffff82c48012218c
(XEN)    0000000000000001 fffffffffffffffd ffff83023ff3ff28 ffff82c48011c588
(XEN)    ffff83023ff3ff28 ffff83063fdeb170 ffff83063fdeb230 ffff8300bf552000
(XEN)    000001831ea27db3 ffff82c480189c6a 7fffffffffffffff ffff82c4801441b5
(XEN)    ffff82c48037b7b0 ffff82c48011e474 0000000000000001 ffffffffffffffff
(XEN)    0000000000000000 0000000000000000 0000000080376980 00001833000116f2
(XEN)    ffffffffffffffff ffff83023ff3ff28 ffff82c480251b00 ffff83023ff3fe28
(XEN)    ffff8300bf552000 000001831ea27db3 ffff82c480253080 ffff82c480149ad6
(XEN)    0000000000000000 0000000000002000 ffff8300bf2fc000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 ffff88015f8f9f10
(XEN) Xen call trace:
(XEN)    [<ffff82c4801bfeeb>] mem_sharing_unshare_page+0x1ab/0x740
(XEN)    [<ffff82c4801df7e9>] ept_get_entry+0xa9/0x1c0
(XEN)    [<ffff82c4801b8949>] p2m_teardown+0x129/0x170
(XEN)    [<ffff82c4801b681c>] paging_final_teardown+0x2c/0x40
(XEN)    [<ffff82c480149b74>] arch_domain_destroy+0x44/0x170
(XEN)    [<ffff82c4801061fc>] complete_domain_destroy+0x6c/0x130
(XEN)    [<ffff82c48012218c>] rcu_process_callbacks+0xac/0x220
(XEN)    [<ffff82c48011c588>] __do_softirq+0x58/0x80
(XEN)    [<ffff82c480189c6a>] acpi_processor_idle+0x14a/0x740
(XEN)    [<ffff82c4801441b5>] reprogram_timer+0x55/0x90
(XEN)    [<ffff82c48011e474>] timer_softirq_action+0x1a4/0x360
(XEN)    [<ffff82c480149ad6>] idle_loop+0x26/0x80
(XEN)    
(XEN) Pagetable walk from 0000000000000018:
(XEN)  L4[0x000] = 00000001676bd067 0000000000156103
(XEN)  L3[0x000] = 000000031b947067 0000000000121c8d
(XEN)  L2[0x000] = 0000000000000000 ffffffffffffffff 
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 1:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 0000000000000018
(XEN) ****************************************
(XEN) 
(XEN) Manual reset required ('noreboot' specified)

[-- Attachment #1.2: Type: text/html, Size: 11547 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-18  9:42                                     ` MaoXiaoyun
@ 2011-01-18 12:05                                       ` MaoXiaoyun
  2011-01-19  4:44                                         ` MaoXiaoyun
  0 siblings, 1 reply; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-18 12:05 UTC (permalink / raw)
  To: juihaochiang, xen devel; +Cc: tim.deegan


[-- Attachment #1.1: Type: text/plain, Size: 5312 bytes --]


Hi:
 
 It was later found to be caused by the patch hunk below (I am using blktap2).
The handle returned here will later become ch in mem_sharing_share_pages(), and 
mem_sharing_share_pages() then ends up with ch == sh, which causes the problem.
 
+    /* Return the handle if the page is already shared */
+    page = mfn_to_page(mfn);
+    if (p2m_is_shared(p2mt)) {
+        *phandle = page->shr_handle;
+        ret = 0;
+        goto out;
+    }
+
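
To make the hazard concrete, a purely defensive sketch (not the accepted fix) would be for
mem_sharing_share_pages() to refuse a request whose two handles are identical, reusing its
existing error path (sh, ch, ret and err_out are the names already used in that function):

    /* Illustrative guard: if nominate handed back an existing handle, the
     * source and client of a share request can be the same hash entry;
     * "sharing" it with itself deletes the handle that later unshare
     * calls still expect to find. */
    if ( sh == ch )
    {
        ret = XEN_DOMCTL_MEM_SHARING_C_HANDLE_INVALID;
        goto err_out;
    }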
 
But after I removed that hunk, the test still failed, and the handle value in the log below does not make sense.
 
 
(XEN) ===>total handles 17834 total gfns 255853
(XEN) handle 13856642536914634
(XEN) Debug for domain=1, gfn=19fed, Debug page: MFN=349c0a is ci=8000000000000008, ti=8400000000000007, owner_id=32755
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    15
(XEN) RIP:    e008:[<ffff82c4801bff4b>] mem_sharing_unshare_page+0x19b/0x720
(XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff83063fc67f28   rcx: 0000000000000092
(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff82c48021e9c4
(XEN) rbp: ffff830440000000   rsp: ffff83063fc67c48   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 00000000fffffff8   r11: 0000000000000005
(XEN) r12: 0000000000019fed   r13: 0000000000000000   r14: 0000000000000000
(XEN) r15: ffff82f606938140   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 000000055513c000   cr2: 0000000000000018
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff83063fc67c48:
(XEN)    02c5f6c8b70fed66 39ef64058b487674 ffff82c4801a6082 0000000000000000
(XEN)    00313a8b00313eca 0000000000000001 0000000000000009 ffff830440000000
(XEN)    ffff83063fc67cb8 ffff82c4801df6f9 0000000000000040 ffff83063fc67d04
(XEN)    0000000000019fed 0000000d000001ed ffff83055458d000 ffff83063fc67f28
(XEN)    0000000000019fed 0000000000349c0a 0000000000000030 ffff83063fc67f28
(XEN)    0000000000000030 ffff82c48019baa6 ffff82c4802519c0 0000000d8016838e
(XEN)    0000000000000000 00000000000001aa ffff8300bf554000 ffff82c4801b3864
(XEN)    ffff830440000348 ffff8300bf554000 ffff8300bf5557f0 ffff8300bf5557e8
(XEN)    00000032027b81f2 ffff82c48026f080 ffff82c4801a9337 ffff8300bf448000
(XEN)    ffff8300bf554000 ffff830000000000 0000000019fed000 ffff8300bf2f2000
(XEN)    ffff82c48019985d 0000000000000080 ffff8300bf554000 0000000000019fed
(XEN)    ffff82c4801b08ba 000000000001e000 ffff82c48014931f ffff8305570c6d50
(XEN)    ffff82c480251080 00000032027b81f2 ffff8305570c6d50 ffff83052f3e2200
(XEN)    0000000f027b7de0 ffff82c48011e07a 000000000000000f ffff82c48026f0a0
(XEN)    0000000000000082 0000000000000000 0000000000000000 0000000000009e44
(XEN)    ffff8300bf554000 ffff8300bf2f2000 ffff82c48011e07a 000000000000000f
(XEN)    ffff8300bf555760 0000000000000292 ffff82c48011afca 00000032028a8fc0
(XEN)    0000000000000292 ffff82c4801a93c3 00000000000000ef ffff8300bf554000
(XEN)    ffff8300bf554000 ffff8300bf5557e8 ffff82c4801a6082 ffff8300bf554000
(XEN)    0000000000000000 ffff82c4801a0cc8 ffff8300bf554000 ffff8300bf554000
(XEN) Xen call trace:
(XEN)    [<ffff82c4801bff4b>] mem_sharing_unshare_page+0x19b/0x720
(XEN)    [<ffff82c4801a6082>] vlapic_has_pending_irq+0x42/0x70
(XEN)    [<ffff82c4801df6f9>] ept_get_entry+0xa9/0x1c0
(XEN)    [<ffff82c48019baa6>] hvm_hap_nested_page_fault+0xd6/0x190
(XEN)    [<ffff82c4801b3864>] vmx_vmexit_handler+0x304/0x1a90
(XEN)    [<ffff82c4801a9337>] pt_restore_timer+0x57/0xb0
(XEN)    [<ffff82c48019985d>] hvm_do_resume+0x1d/0x130
(XEN)    [<ffff82c4801b08ba>] vmx_do_resume+0x11a/0x1c0
(XEN)    [<ffff82c48014931f>] context_switch+0x76f/0xf00
(XEN)    [<ffff82c48011e07a>] add_entry+0x3a/0xb0
(XEN)    [<ffff82c48011e07a>] add_entry+0x3a/0xb0
(XEN)    [<ffff82c48011afca>] schedule+0x1ea/0x500
(XEN)    [<ffff82c4801a93c3>] pt_update_irq+0x33/0x1e0
(XEN)    [<ffff82c4801a6082>] vlapic_has_pending_irq+0x42/0x70
(XEN)    [<ffff82c4801a0cc8>] hvm_vcpu_has_pending_irq+0x88/0xa0
(XEN)    [<ffff82c4801b267b>] vmx_vmenter_helper+0x5b/0x150
(XEN)    [<ffff82c4801adaa3>] vmx_asm_do_vmentry+0x0/0xdd
(XEN)    
(XEN) Pagetable walk from 0000000000000018:
(XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 15:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 0000000000000018
(XEN) ****************************************
(XEN) 
(XEN) Manual reset required ('noreboot' specified)

 
 

 ---------------------------------------------------------------------------------------------------
>From: tinnycloud@hotmail.com
>To: juihaochiang@gmail.com; xen-devel@lists.xensource.com
>CC: tim.deegan@citrix.com
>Subject: RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
>Date: Tue, 18 Jan 2011 17:42:32 +0800




>Hi Tim & Jui-Hao:
 
 >     When I use Linux HVM instead of Windows HVM, more bug shows up.
 
>      I only start on VM, and when I destroy it , xen crashed on mem_sharing_unshare_page()
>which in line709, hash_entry is NULL. Later I found the handle has been removed in 
>mem_sharing_share_pages(), please refer logs below.
 
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 7206 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-18 12:05                                       ` MaoXiaoyun
@ 2011-01-19  4:44                                         ` MaoXiaoyun
  2011-01-19  9:11                                           ` George Dunlap
  0 siblings, 1 reply; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-19  4:44 UTC (permalink / raw)
  To: xen devel; +Cc: george.dunlap, tim.deegan, juihaochiang


[-- Attachment #1.1: Type: text/plain, Size: 6464 bytes --]


Hi George:
 
       I am working on Xen mem_sharing, and I think the bug below is related to PoD.
(Tests show that when PoD is enabled the bug is hit easily; when it is disabled, no bug occurs.)
 
As I understand it, when a domU starts with PoD it gets memory from the PoD cache, and in some
situations the PoD code will scan for zero pages to reuse (linking the page into the PoD
cache page list). From the page_info definition, the list field and the sharing handle occupy the
same position, so I think that when reusing a page PoD doesn't check the page type; if it is a
shared page, it can still be put into the PoD cache, and thus the handle gets overwritten.
      
      So maybe we need to check the page type before putting a page into the cache. What's your opinion?
      Thanks.
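
(For reference, the overlap described above looks roughly like this; a simplified sketch of the
Xen 4.0-era struct page_info in xen/include/asm-x86/mm.h, with details approximated, not the
exact header:)

struct page_info {
    union {
        /* Linkage used while the page sits on a free/PoD cache list. */
        struct page_list_entry list;
        /* Sharing handle used while the page is p2m_ram_shared. */
        uint64_t shr_handle;
    };
    /* Reference count and PGC_* flags. */
    unsigned long count_info;
    /* ... type/owner information, etc. ... */
};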
 
>--------------------------------------------------------------------------------
>From: tinnycloud@hotmail.com
>To: juihaochiang@gmail.com; xen-devel@lists.xensource.com
>CC: tim.deegan@citrix.com
>Subject: RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
>Date: Tue, 18 Jan 2011 20:05:16 +0800
>
>Hi:
> 
> It is later found that caused by below patch code and I am using the blktap2.
>The handle retruned from here will later become ch in mem_sharing_share_pages, and then 
>in mem_sharing_share_pages will have ch = sh, thus caused the problem.
> 
>+    /* Return the handle if the page is already shared */
>+    page = mfn_to_page(mfn);
>+    if (p2m_is_shared(p2mt)) {
>+        *phandle = page->shr_handle;
>+        ret = 0;
>+        goto out;
>+    }
>+
> 
>But. after I  removed the code, tests still failed, and this handle's value is not make sence.
> 
> 
>(XEN) ===>total handles 17834 total gfns 255853
>(XEN) handle 13856642536914634
>(XEN) Debug for domain=1, gfn=19fed, Debug page: MFN=349c0a is ci=8000000000000008, ti=8400000000000007, owner_id=32755
>(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
>(XEN) CPU:    15
>(XEN) RIP:    e008:[<ffff82c4801bff4b>] mem_sharing_unshare_page+0x19b/0x720
>(XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
>(XEN) rax: 0000000000000000   rbx: ffff83063fc67f28   rcx: 0000000000000092
>(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff82c48021e9c4
>(XEN) rbp: ffff830440000000   rsp: ffff83063fc67c48   r8:  0000000000000001
>(XEN) r9:  0000000000000000   r10: 00000000fffffff8   r11: 0000000000000005
>(XEN) r12: 0000000000019fed   r13: 0000000000000000   r14: 0000000000000000
>(XEN) r15: ffff82f606938140   cr0: 000000008005003b   cr4: 00000000000026f0
>(XEN) cr3: 000000055513c000   cr2: 0000000000000018
>(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
>(XEN) Xen stack trace from rsp=ffff83063fc67c48:
>(XEN)    02c5f6c8b70fed66 39ef64058b487674 ffff82c4801a6082 0000000000000000
>(XEN)    00313a8b00313eca 0000000000000001 0000000000000009 ffff830440000000
>(XEN)    ffff83063fc67cb8 ffff82c4801df6f9 0000000000000040 ffff83063fc67d04
>(XEN)    0000000000019fed 0000000d000001ed ffff83055458d000 ffff83063fc67f28
>(XEN)    0000000000019fed 0000000000349c0a 0000000000000030 ffff83063fc67f28
>(XEN)    0000000000000030 ffff82c48019baa6 ffff82c4802519c0 0000000d8016838e
>(XEN)    0000000000000000 00000000000001aa ffff8300bf554000 ffff82c4801b3864
>(XEN)    ffff830440000348 ffff8300bf554000 ffff8300bf5557f0 ffff8300bf5557e8
>(XEN)    00000032027b81f2 ffff82c48026f080 ffff82c4801a9337 ffff8300bf448000
>(XEN)    ffff8300bf554000 ffff830000000000 0000000019fed000 ffff8300bf2f2000
>(XEN)    ffff82c48019985d 0000000000000080 ffff8300bf554000 0000000000019fed
>(XEN)    ffff82c4801b08ba 000000000001e000 ffff82c48014931f ffff8305570c6d50
>(XEN)    ffff82c480251080 00000032027b81f2 ffff8305570c6d50 ffff83052f3e2200
>(XEN)    0000000f027b7de0 ffff82c48011e07a 000000000000000f ffff82c48026f0a0
>(XEN)    0000000000000082 0000000000000000 0000000000000000 0000000000009e44
>(XEN)    ffff8300bf554000 ffff8300bf2f2000 ffff82c48011e07a 000000000000000f
>(XEN)    ffff8300bf555760 0000000000000292 ffff82c48011afca 00000032028a8fc0
>(XEN)    0000000000000292 ffff82c4801a93c3 00000000000000ef ffff8300bf554000
>(XEN)    ffff8300bf554000 ffff8300bf5557e8 ffff82c4801a6082 ffff8300bf554000
>(XEN)    0000000000000000 ffff82c4801a0cc8 ffff8300bf554000 ffff8300bf554000
>(XEN) Xen call trace:
>(XEN)    [<ffff82c4801bff4b>] mem_sharing_unshare_page+0x19b/0x720
>(XEN)    [<ffff82c4801a6082>] vlapic_has_pending_irq+0x42/0x70
>(XEN)    [<ffff82c4801df6f9>] ept_get_entry+0xa9/0x1c0
>(XEN)    [<ffff82c48019baa6>] hvm_hap_nested_page_fault+0xd6/0x190
>(XEN)    [<ffff82c4801b3864>] vmx_vmexit_handler+0x304/0x1a90
>(XEN)    [<ffff82c4801a9337>] pt_restore_timer+0x57/0xb0
>(XEN)    [<ffff82c48019985d>] hvm_do_resume+0x1d/0x130
>(XEN)    [<ffff82c4801b08ba>] vmx_do_resume+0x11a/0x1c0
>(XEN)    [<ffff82c48014931f>] context_switch+0x76f/0xf00
>(XEN)    [<ffff82c48011e07a>] add_entry+0x3a/0xb0
>(XEN)    [<ffff82c48011e07a>] add_entry+0x3a/0xb0
>(XEN)    [<ffff82c48011afca>] schedule+0x1ea/0x500
>(XEN)    [<ffff82c4801a93c3>] pt_update_irq+0x33/0x1e0
>(XEN)    [<ffff82c4801a6082>] vlapic_has_pending_irq+0x42/0x70
>(XEN)    [<ffff82c4801a0cc8>] hvm_vcpu_has_pending_irq+0x88/0xa0
>(XEN)    [<ffff82c4801b267b>] vmx_vmenter_helper+0x5b/0x150
>(XEN)    [<ffff82c4801adaa3>] vmx_asm_do_vmentry+0x0/0xdd
>(XEN)    
>(XEN) Pagetable walk from 0000000000000018:
>(XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
>(XEN) 
>(XEN) ****************************************
>(XEN) Panic on CPU 15:
>(XEN) FATAL PAGE FAULT
>(XEN) [error_code=0000]
>(XEN) Faulting linear address: 0000000000000018
>(XEN) ****************************************
>(XEN) 
>(XEN) Manual reset required ('noreboot' specified)
>
> 
> 
>
> ---------------------------------------------------------------------------------------------------
>>From: tinnycloud@hotmail.com
>>To: juihaochiang@gmail.com; xen-devel@lists.xensource.com
>>CC: tim.deegan@citrix.com
>>Subject: RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
>>Date: Tue, 18 Jan 2011 17:42:32 +0800
>
>>Hi Tim & Jui-Hao:
> 
> >     When I use Linux HVM instead of Windows HVM, more bug shows up.
> 
>>      I only start on VM, and when I destroy it , xen crashed on mem_sharing_unshare_page()
>>which in line709, hash_entry is NULL. Later I found the handle has been removed in 
>>mem_sharing_share_pages(), please refer logs below.
>  		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 8979 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-19  4:44                                         ` MaoXiaoyun
@ 2011-01-19  9:11                                           ` George Dunlap
  2011-01-19 13:44                                             ` MaoXiaoyun
  2011-01-24 10:56                                             ` MaoXiaoyun
  0 siblings, 2 replies; 50+ messages in thread
From: George Dunlap @ 2011-01-19  9:11 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, tim.deegan, juihaochiang

Very likely.  If you look in xen/arch/x86/mm/p2m.c, the two functions
which check a page to see if it can be reclaimed are
"p2m_pod_zero_check*()".  A little ways into each function there's a
giant "if()" which has all of the conditions for reclaiming a page,
starting with p2m_is_ram().  The easiest way to fix it is to add
p2m_is_shared() to that "if" statement.

 -George
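
A rough sketch of the kind of change being suggested (approximated from the Xen 4.0-era
p2m_pod_zero_check() loop; the surrounding clauses and variable names may differ in the
actual tree, and note that the added test is negated so that shared pages are skipped
rather than reclaimed):

    /* Only map/reclaim this candidate if it is ordinary RAM, is NOT a
     * shared page, and is probably not mapped elsewhere; otherwise PoD
     * would queue a shared page and clobber its shr_handle, which
     * overlays the list field. */
    if ( p2m_is_ram(types[i]) && !p2m_is_shared(types[i])
         && ( (mfn_to_page(mfns[i])->count_info & PGC_allocated) != 0 )
         && ( (mfn_to_page(mfns[i])->count_info & PGC_count_mask) <= max_ref ) )
        map[i] = map_domain_page(mfn_x(mfns[i]));
    else
        map[i] = NULL;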

2011/1/19 MaoXiaoyun <tinnycloud@hotmail.com>:
> Hi George:
>
>        I am working on the xen mem_sharing,  I think the bug below is
> related to POD.
> (Test shows when POD is enable, it is easily hit the bug, when disabled, no
> bug occurs).
>
> As I know when domU starts will POD, it gets memory from POD cache, and in
> some
> situation, POD cached will scan from Zero pages for reusing(link the page
> into POD
> cache page list), and from the page_info define, list and handle share same
> posistion,
> I think when reusing the page, POD doest't check page type, and if it is a
> shared page
> , it still can be put into POD cache, and thus handle is been overwritten.
>
>       So maybe we need to check the page type before putting into cache,
> What's your opinion?
>       thanks.
>
>>--------------------------------------------------------------------------------
>>From: tinnycloud@hotmail.com
>>To: juihaochiang@gmail.com; xen-devel@lists.xensource.com
>>CC: tim.deegan@citrix.com
>>Subject: RE: [PATCH] mem_sharing: fix race condition of nominate and
>> unshare
>>Date: Tue, 18 Jan 2011 20:05:16 +0800
>>
>>Hi:
>>
>> It is later found that caused by below patch code and I am using the
>> blktap2.
>>The handle retruned from here will later become ch in
>> mem_sharing_share_pages, and then
>>in mem_sharing_share_pages will have ch = sh, thus caused the problem.
>>
>>+    /* Return the handle if the page is already shared */
>>+    page = mfn_to_page(mfn);
>>+    if (p2m_is_shared(p2mt)) {
>>+        *phandle = page->shr_handle;
>>+        ret = 0;
>>+        goto out;
>>+    }
>>+
>>
>>But after I removed the code, tests still failed, and this handle's value
>> does not make sense.
>>
>>
>>(XEN) ===>total handles 17834 total gfns 255853
>>(XEN) handle 13856642536914634
>>(XEN) Debug for domain=1, gfn=19fed, Debug page: MFN=349c0a is
>> ci=8000000000000008, ti=8400000000000007, owner_id=32755
>>(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
>>(XEN) CPU:    15
>>(XEN) RIP:    e008:[<ffff82c4801bff4b>]
>> mem_sharing_unshare_page+0x19b/0x720
>>(XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
>>(XEN) rax: 0000000000000000   rbx: ffff83063fc67f28   rcx:
>> 0000000000000092
>>(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff82c48021e9c4
>>(XEN) rbp: ffff830440000000   rsp: ffff83063fc67c48   r8:  0000000000000001
>>(XEN) r9:  0000000000000000   r10: 00000000fffffff8   r11: 0000000000000005
>>(XEN) r12: 0000000000019fed   r13: 0000000000000000   r14: 0000000000000000
>>(XEN) r15: ffff82f606938140   cr0: 000000008005003b   cr4: 00000000000026f0
>>(XEN) cr3: 000000055513c000   cr2: 0000000000000018
>>(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
>>(XEN) Xen stack trace from rsp=ffff83063fc67c48:
>>(XEN)    02c5f6c8b70fed66 39ef64058b487674 ffff82c4801a6082
>> 0000000000000000
>>(XEN)    00313a8b00313eca 0000000000000001 0000000000000009 ff
>> ff830440000000
>>(XEN)    ffff83063fc67cb8 ffff82c4801df6f9 0000000000000040
>> ffff83063fc67d04
>>(XEN)    0000000000019fed 0000000d000001ed ffff83055458d000
>> ffff83063fc67f28
>>(XEN)    0000000000019fed 0000000000349c0a 0000000000000030
>> ffff83063fc67f28
>>(XEN)    0000000000000030 ffff82c48019baa6 ffff82c4802519c0
>> 0000000d8016838e
>>(XEN)    0000000000000000 00000000000001aa ffff8300bf554000
>> ffff82c4801b3864
>>(XEN)    ffff830440000348 ffff8300bf554000 ffff8300bf5557f0
>> ffff8300bf5557e8
>>(XEN)    00000032027b81f2 ffff82c48026f080 ffff82c4801a9337
>> ffff8300bf448000
>>(XEN)    ffff8300bf554000 ffff830000000000 0000000019fed000
>> ffff8300bf2f2000
>>(XEN)    ffff82c48019985d 0000000000000080 ffff8300bf554000
>> 0000000000019fed
>>(XEN)    ffff82c4801b08ba 000000000001e000 ffff82c48014931f ff
>> ff8305570c6d50
>>(XEN)    ffff82c480251080 00000032027b81f2 ffff8305570c6d50
>> ffff83052f3e2200
>>(XEN)    0000000f027b7de0 ffff82c48011e07a 000000000000000f
>> ffff82c48026f0a0
>>(XEN)    0000000000000082 0000000000000000 0000000000000000
>> 0000000000009e44
>>(XEN)    ffff8300bf554000 ffff8300bf2f2000 ffff82c48011e07a
>> 000000000000000f
>>(XEN)    ffff8300bf555760 0000000000000292 ffff82c48011afca
>> 00000032028a8fc0
>>(XEN)    0000000000000292 ffff82c4801a93c3 00000000000000ef
>> ffff8300bf554000
>>(XEN)    ffff8300bf554000 ffff8300bf5557e8 ffff82c4801a6082
>> ffff8300bf554000
>>(XEN)    0000000000000000 ffff82c4801a0cc8 ffff8300bf554000
>> ffff8300bf554000
>>(XEN) Xen call trace:
>>(XEN)    [<ffff82c4801bff4b>] mem_sharing_unshare_page+0x19b/0x720
>>(XEN)    [<ffff82c4801a6082>] vlapic_has_pending_irq+0x42/0x70
>>(XEN)    [<ffff82c4801df6f9>] ept_get_entry+0xa9/0x1c0
>>(XEN)    [<ffff82c48019baa6>] hvm_hap_nested_page_fault+0xd6/0x190
>>(XEN)    [<ffff82c4801b3864>] vmx_vmexit_handler+0x304/0x1a90
>>(XEN)    [<ffff82c4801a9337>] pt_restore_timer+0x57/0xb0
>>(XEN)    [<ffff82c48019985d>] hvm_do_resume+0x1d/0x130
>>(XEN)    [<ffff82c4801b08ba>] vmx_do_resume+0x11a/0x1c0
>>(XEN)    [<ffff82c48014931f>] context_switch+0x76f/0xf00
>>(XEN)    [<ffff82c48011e07a>] add_entry+0x3a/0xb0
>>(XEN)    [<ffff82c48011e07a>] add_entry+0x3a/0xb0
>>(XEN)    [<ffff82c48011afca>] schedule+0x1ea/0x500
>>(XEN)    [<ffff82c4801a93c3>] pt_update_irq+0x33/0x1e0
>>(XEN)    [<ffff82c4801a6082>] vlapic_has_pending_irq+0x42/0x70
>>(XEN)    [<ffff82c4801a0cc8>] hvm_vcpu_has_pending_irq+0x88/0xa0
>>(XEN)    [<ffff82c4801b267b>] vmx_vmenter_helper+0x5b/0x150
>>(XEN)    [<ffff82c4801adaa3>] vmx_asm_do_vmentry+0x0/0xdd
>>(XEN)
>>(XEN) Pagetable walk from 0000000000000018:
>>(XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
>>(XEN)
>>(XEN) ****************************************
>>(XEN) Panic on CPU 15:
>>(XEN) FATAL PAGE FAULT
>>(XEN) [error_code=0000]
>>(XEN) Faulting linear address: 0000000000000018
>>(XEN) ****************************************
>>(XEN)
>>(XEN) Manual reset required ('noreboot' specified)
>>
>>
>>
>>
>>
>> ---------------------------------------------------------------------------------------------------
>>>From: tinnycloud@hotmail.com
>>>To: juihaochiang@gmail.com; xen-devel@lists.xensource.com
>>>CC: tim.deegan@citrix.com
>>>Subject: RE: [PATCH] mem_sharing: fix race condition of nominate and
>>> unshare
>>>Date: Tue, 18 Jan 2011 17:42:32 +0800
>>
>>>Hi Tim & Jui-Hao:
>>
>> >     When I use Linux HVM instead of Windows HVM, more bug shows up.
>>
>>>      I only start on VM, and when I destroy it , xen crashed on
>>> mem_sharing_unshare_page()
>>>which in line709, hash_entry is NULL. Later I found the handle has been
>>> removed in
>>>mem_sharing_share_pages(), please refer logs below.
>>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xensource.com
> http://lists.xensource.com/xen-devel
>
>

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-19  9:11                                           ` George Dunlap
@ 2011-01-19 13:44                                             ` MaoXiaoyun
  2011-01-24 10:56                                             ` MaoXiaoyun
  1 sibling, 0 replies; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-19 13:44 UTC (permalink / raw)
  To: xen devel; +Cc: george.dunlap, ian.campbell, keir, tim.deegan, juihaochiang


[-- Attachment #1.1: Type: text/plain, Size: 7191 bytes --]


Hi:
 
       Furthermore, as I found previously, some of the nominated pages are already shared in the Linux HVM test. 
       But that doesn't make sense. 
 
       In my setup the page is allocated by blkfront in the HVM guest, and its grant reference goes through blktap2
and tapdisk2, so it should be a fresh page that cannot already have the shared type: if a nominate fails,
the page is filled with I/O data from disk, that is to say, the page is written again and its previous
data is overwritten. Even with only one HVM guest, the test shows pages being nominated twice.
 
       I also tested the Windows HVM, where no page is nominated twice.
       So I think it is a problem in blkfront.
 
       Am I right? Could someone kindly offer some hints?
       Many thanks.
 
> >>--------------------------------------------------------------------------------
> >>From: tinnycloud@hotmail.com
> >>To: juihaochiang@gmail.com; xen-devel@lists.xensource.com
> >>CC: tim.deegan@citrix.com
> >>Subject: RE: [PATCH] mem_sharing: fix race condition of nominate and
> >> unshare
> >>Date: Tue, 18 Jan 2011 20:05:16 +0800
> >>
> >>Hi:
> >>
> >> It is later found that caused by below patch code and I am using the
> >> blktap2.
> >>The handle returned from here will later become ch in
> >> mem_sharing_share_pages, and then
> >>in mem_sharing_share_pages will have ch = sh, thus caused the problem.
> >>
> >>+    /* Return the handle if the page is already shared */
> >>+    page = mfn_to_page(mfn);
> >>+    if (p2m_is_shared(p2mt)) {
> >>+        *phandle = page->shr_handle;
> >>+        ret = 0;
> >>+        goto out;
> >>+    }
> >>+
> >>
> >>But after I removed the code, tests still failed, and this handle's value
> >> does not make sense.
> >>
> >>
> >>(XEN) ===>total handles 17834 total gfns 255853
> >>(XEN) handle 13856642536914634
> >>(XEN) Debug for domain=1, gfn=19fed, Debug page: MFN=349c0a is
> >> ci=8000000000000008, ti=8400000000000007, owner_id=32755
> >>(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
> >>(XEN) CPU:    15
> >>(XEN) RIP:    e008:[<ffff82c4801bff4b>]
> >> mem_sharing_unshare_page+0x19b/0x720
> >>(XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
> >>(XEN) rax: 0000000000000000   rbx: ffff83063fc67f28   rcx:
> >> 0000000000000092
> >>(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff82c48021e9c4
> >>(XEN) rbp: ffff830440000000   rsp: ffff83063fc67c48   r8:  0000000000000001
> >>(XEN) r9:  0000000000000000   r10: 00000000fffffff8   r11: 0000000000000005
> >>(XEN) r12: 0000000000019fed   r13: 0000000000000000   r14: 0000000000000000
> >>(XEN) r15: ffff82f606938140   cr0: 000000008005003b   cr4: 00000000000026f0
> >>(XEN) cr3: 000000055513c000   cr2: 0000000000000018
> >>(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> >>(XEN) Xen stack trace from rsp=ffff83063fc67c48:
> >>(XEN)    02c5f6c8b70fed66 39ef64058b487674 ffff82c4801a6082
> >> 0000000000000000
> >>(XEN)    00313a8b00313eca 0000000000000001 0000000000000009 ff
> >> ff830440000000
> >>(XEN)    ffff83063fc67cb8 ffff82c4801df6f9 0000000000000040
> >> ffff83063fc67d04
> >>(XEN)    0000000000019fed 0000000d000001ed ffff83055458d000
> >> ffff83063fc67f28
> >>(XEN)    0000000000019fed 0000000000349c0a 0000000000000030
> >> ffff83063fc67f28
> >>(XEN)    0000000000000030 ffff82c48019baa6 ffff82c4802519c0
> >> 0000000d8016838e
> >>(XEN)    0000000000000000 00000000000001aa ffff8300bf554000
> >> ffff82c4801b3864
> >>(XEN)    ffff830440000348 ffff8300bf554000 ffff8300bf5557f0
> >> ffff8300bf5557e8
> >>(XEN)    00000032027b81f2 ffff82c48026f080 ffff82c4801a9337
> >> ffff8300bf448000
> >>(XEN)    ffff8300bf554000 ffff830000000000 0000000019fed000
> >> ffff8300bf2f2000
> >>(XEN)    ffff82c48019985d 0000000000000080 ffff8300bf554000
> >> 0000000000019fed
> >>(XEN)    ffff82c4801b08ba 000000000001e000 ffff82c48014931f ff
> >> ff8305570c6d50
> >>(XEN)    ffff82c480251080 00000032027b81f2 ffff8305570c6d50
> >> ffff83052f3e2200
> >>(XEN)    0000000f027b7de0 ffff82c48011e07a 000000000000000f
> >> ffff82c48026f0a0
> >>(XEN)    0000000000000082 0000000000000000 0000000000000000
> >> 0000000000009e44
> >>(XEN)    ffff8300bf554000 ffff8300bf2f2000 ffff82c48011e07a
> >> 000000000000000f
> >>(XEN)    ffff8300bf555760 0000000000000292 ffff82c48011afca
> >> 00000032028a8fc0
> >>(XEN)    0000000000000292 ffff82c4801a93c3 00000000000000ef
> >> ffff8300bf554000
> >>(XEN)    ffff8300bf554000 ffff8300bf5557e8 ffff82c4801a6082
> >> ffff8300bf554000
> >>(XEN)    0000000000000000 ffff82c4801a0cc8 ffff8300bf554000
> >> ffff8300bf554000
> >>(XEN) Xen call trace:
> >>(XEN)    [<ffff82c4801bff4b>] mem_sharing_unshare_page+0x19b/0x720
> >>(XEN)    [<ffff82c4801a6082>] vlapic_has_pending_irq+0x42/0x70
> >>(XEN)    [<ffff82c4801df6f9>] ept_get_entry+0xa9/0x1c0
> >>(XEN)    [<ffff82c48019baa6>] hvm_hap_nested_page_fault+0xd6/0x190
> >>(XEN)    [<ffff82c4801b3864>] vmx_vmexit_handler+0x304/0x1a90
> >>(XEN)    [<ffff82c4801a9337>] pt_restore_timer+0x57/0xb0
> >>(XEN)    [<ffff82c48019985d>] hvm_do_resume+0x1d/0x130
> >>(XEN)    [<ffff82c4801b08ba>] vmx_do_resume+0x11a/0x1c0
> >>(XEN)    [<ffff82c48014931f>] context_switch+0x76f/0xf00
> >>(XEN)    [<ffff82c48011e07a>] add_entry+0x3a/0xb0
> >>(XEN)    [<ffff82c48011e07a>] add_entry+0x3a/0xb0
> >>(XEN)    [<ffff82c48011afca>] schedule+0x1ea/0x500
> >>(XEN)    [<ffff82c4801a93c3>] pt_update_irq+0x33/0x1e0
> >>(XEN)    [<ffff82c4801a6082>] vlapic_has_pending_irq+0x42/0x70
> >>(XEN)    [<ffff82c4801a0cc8>] hvm_vcpu_has_pending_irq+0x88/0xa0
> >>(XEN)    [<ffff82c4801b267b>] vmx_vmenter_helper+0x5b/0x150
> >>(XEN)    [<ffff82c4801adaa3>] vmx_asm_do_vmentry+0x0/0xdd
> >>(XEN)
> >>(XEN) Pagetable walk from 0000000000000018:
> >>(XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
> >>(XEN)
> >>(XEN) ****************************************
> >>(XEN) Panic on CPU 15:
> >>(XEN) FATAL PAGE FAULT
> >>(XEN) [error_code=0000]
> >>(XEN) Faulting linear address: 0000000000000018
> >>(XEN) ****************************************
> >>(XEN)
> >>(XEN) Manual reset required ('noreboot' specified)
> >>
> >>
> >>
> >>
> >>
> >> ---------------------------------------------------------------------------------------------------
> >>>From: tinnycloud@hotmail.com
> >>>To: juihaochiang@gmail.com; xen-devel@lists.xensource.com
> >>>CC: tim.deegan@citrix.com
> >>>Subject: RE: [PATCH] mem_sharing: fix race condition of nominate and
> >>> unshare
> >>>Date: Tue, 18 Jan 2011 17:42:32 +0800
> >>
> >>>Hi Tim & Jui-Hao:
> >>
> >> >     When I use Linux HVM instead of Windows HVM, more bug shows up.
> >>
> >>>      I only start on VM, and when I destroy it , xen crashed on
> >>> mem_sharing_unshare_page()
> >>>which in line709, hash_entry is NULL. Later I found the handle has been
> >>> removed in
> >>>mem_sharing_share_pages(), please refer logs below.
> >>
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xensource.com
> > http://lists.xensource.com/xen-devel
> >
> >
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 10554 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-17  9:02                                   ` Jui-Hao Chiang
  2011-01-17  9:15                                     ` MaoXiaoyun
  2011-01-18  9:42                                     ` MaoXiaoyun
@ 2011-01-20  7:19                                     ` MaoXiaoyun
  2011-01-20  9:19                                       ` Tim Deegan
  2 siblings, 1 reply; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-20  7:19 UTC (permalink / raw)
  To: xen devel; +Cc: tim.deegan, juihaochiang


[-- Attachment #1.1: Type: text/plain, Size: 6628 bytes --]


Hi:
 
            The latest BUG() is in mem_sharing_alloc_page(), called from mem_sharing_unshare_page().
            I printed the heap info, which shows plenty of memory left.
            Could the domain be NULL during unshare, or should it be locked with rcu_lock_domain_by_id()?
 
-----------code------------
422 extern void pagealloc_info(unsigned char key);
 423 static struct page_info* mem_sharing_alloc_page(struct domain *d, 
 424                                                 unsigned long gfn,
 425                                                 int must_succeed)
 426 {
 427     struct page_info* page;
 428     struct vcpu *v = current;
 429     mem_event_request_t req;
 430 
 431     page = alloc_domheap_page(d, 0); 
 432     if(page != NULL) return page;
 433 
 434     memset(&req, 0, sizeof(req));
 435     if(must_succeed) 
 436     {
 437         /* We do not support 'must_succeed' any more. External operations such
 438          * as grant table mappings may fail with OOM condition! 
 439          */
 440         pagealloc_info('m');
 441         BUG();
 442     }
 
-------------serial output-------
(XEN) Physical memory information:
(XEN)     Xen heap: 0kB free
(XEN)     heap[14]: 64480kB free
(XEN)     heap[15]: 131072kB free
(XEN)     heap[16]: 262144kB free
(XEN)     heap[17]: 524288kB free
(XEN)     heap[18]: 1048576kB free
(XEN)     heap[19]: 1037128kB free
(XEN)     heap[20]: 3035744kB free
(XEN)     heap[21]: 2610292kB free
(XEN)     heap[22]: 2866212kB free
(XEN)     Dom heap: 11579936kB free
(XEN) Xen BUG at mem_sharing.c:441
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c4801c0531>] mem_sharing_unshare_page+0x681/0x790
(XEN) RFLAGS: 0000000000010282   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff83040092d808   rcx: 0000000000000096
(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff82c48021eac4
(XEN) rbp: 0000000000000000   rsp: ffff82c48035f5e8   r8:  0000000000000001
(XEN) r9:  0000000000000001   r10: 00000000fffffff5   r11: 0000000000000008
(XEN) r12: ffff8305c61f3980   r13: ffff83040eff0000   r14: 000000000001610f
(XEN) r15: ffff82c48035f628   cr0: 000000008005003b   cr4: 00000000000026f0
(XEN) cr3: 000000052bc4f000   cr2: ffff880120126e88
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen stack trace from rsp=ffff82c48035f5e8:
(XEN)    ffff8305c61f3990 00018300bf2f0000 ffff82f604e6a4a0 000000002ab84078
(XEN)    ffff83040092d7f0 00000000001b9c9c ffff8300bf2f0000 000000010eff0000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000d0000010f ffff8305447ec000 000000000001610f
(XEN)    0000000000273525 ffff82c48035f724 ffff830502c705a0 ffff82f602c89a00
(XEN)    ffff83040eff0000 ffff82c48010bfa9 ffff830572c5dbf0 000000000029e07f
(XEN)    0000000000000000 ffff830572c5dbf0 000000008035fbe8 ffff82c48035f6f8
(XEN)    0000000100000002 ffff830572c5dbf0 ffff83063fc30000 ffff830572c5dbf0
(XEN)    0000035900000000 ffff88010d14bbe0 ffff880159e09000 00003f7e00000002
(XEN)    ffffffffffff0032 ffff88010d14bbb0 ffff830438dfa920 0000000d8010a650
(XEN)    0000000000000100 ffff83063fc30000 ffff8305f9203730 ffffffffffffffea
(XEN)    ffff88010d14bb70 0000000000000000 ffff88010d14bc10 ffff88010d14bbc0
(XEN)    0000000000000002 ffff82c48010da9b 0000000000000202 ffff82c48035fec8
(XEN)    ffff82c48035f7c8 00000000801880af ffff83063fc30010 0000000000000000
(XEN)    ffff82c400000008 ffff82c48035ff28 0000000000000000 ffff88010d14bbc0
(XEN)    ffff880159e08000 0000000000000000 0000000000000000 00020000000002d7
(XEN)    00000000003f2b38 ffff8305b1f4b6b8 ffff8305b30f0000 ffff880159e09000
(XEN)    0000000000000000 0000000000000000 000200000000008a 00000000003ed1f9
(XEN)    ffff83063fc26450 ffff8305b30f0000 ffff880159e0a000 0000000000000000
(XEN)    0000000000000000 00020000000001fa 000000000029e2ba ffff83063fc26fd0
(XEN) Xen call trace:
(XEN)    [<ffff82c4801c0531>] mem_sharing_unshare_page+0x681/0x790
(XEN)    [<ffff82c48010bfa9>] gnttab_map_grant_ref+0xbf9/0xe30
(XEN)    [<ffff82c48010da9b>] do_grant_table_op+0x14b/0x1080
(XEN)    [<ffff82c48010fb44>] do_xen_version+0xb4/0x480
(XEN)    [<ffff82c4801b8215>] set_p2m_entry+0x85/0xc0
(XEN)    [<ffff82c4801bc92e>] set_shared_p2m_entry+0x1be/0x2f0
(XEN)    [<ffff82c480121c4c>] xmem_pool_free+0x2c/0x310
(XEN)    [<ffff82c4801bfaf8>] mem_sharing_share_pages+0xd8/0x3d0
(XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
(XEN)    [<ffff82c48011c519>] cpumask_raise_softirq+0x89/0xa0
(XEN)    [<ffff82c480118351>] csched_vcpu_wake+0x101/0x1b0
(XEN)    [<ffff82c48014717d>] vcpu_kick+0x1d/0x80
(XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
(XEN)    [<ffff82c48015a1d8>] get_page+0x28/0xf0
(XEN)    [<ffff82c48015ed72>] do_update_descriptor+0x1d2/0x210
(XEN)    [<ffff82c480113d7e>] do_multicall+0x14e/0x340
(XEN)    [<ffff82c4801e3169>] syscall_enter+0xa9/0xae
(XEN)    
(XEN) 
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Xen BUG at mem_sharing.c:441
(XEN) ****************************************
(XEN) 
(XEN) Manual reset required ('noreboot' specified)
 
> Date: Mon, 17 Jan 2011 17:02:02 +0800
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> From: juihaochiang@gmail.com
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; tim.deegan@citrix.com
> 
> Hi, tinnycloud:
> 
> Do you have xenpaging tools running properly?
> I haven't gone through that one, but it seems you have run out of memory.
> When this case happens, mem_sharing will request memory to the
> xenpaging daemon, which tends to page out and free some memory.
> Otherwise, the allocation would fail.
> Is this your scenario?
> 
> Bests,
> Jui-Hao
> 
> 2011/1/17 MaoXiaoyun <tinnycloud@hotmail.com>:
> > Another failure on BUG() in mem_sharing_alloc_page()
> >
> >  memset(&req, 0, sizeof(req));
> >  if(must_succeed)
> >     {
> >         /* We do not support 'must_succeed' any more. External operations
> > such
> >          * as grant table mappings may fail with OOM condition!
> >          */
> >         BUG();===================>bug here
> >     }
> >     else
> >     {
> >         /* All foreign attempts to unshare pages should be handled through
> >          * 'must_succeed' case. */
> >         ASSERT(v->domain->domain_id == d->domain_id);
> >         vcpu_pause_nosync(v);
> >         req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
> >     }
> >
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 10088 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-20  7:19                                     ` [PATCH] mem_sharing: fix race condition of nominate and unshare MaoXiaoyun
@ 2011-01-20  9:19                                       ` Tim Deegan
  2011-01-20  9:37                                         ` MaoXiaoyun
  2011-01-21  6:10                                         ` MaoXiaoyun
  0 siblings, 2 replies; 50+ messages in thread
From: Tim Deegan @ 2011-01-20  9:19 UTC (permalink / raw)
  To: MaoXiaoyun; +Cc: xen devel, juihaochiang

At 07:19 +0000 on 20 Jan (1295507976), MaoXiaoyun wrote:
> Hi:
> 
>             The latest BUG in mem_sharing_alloc_page from mem_sharing_unshare_page.
>             I printed heap info, which shows plenty memory left.
>             Could domain be NULL during in unshare, or should it be locked by rcu_lock_domain_by_id ?
> 

'd' probably isn't NULL; more likely is that the domain is not allowed
to have any more memory.  You should look at the values of d->max_pages
and d->tot_pages when the failure happens.
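
For example, a hypothetical one-line diagnostic (a sketch, not a tested patch) placed just
before the BUG() in mem_sharing_alloc_page() would make those values visible in the serial
log when the allocation fails:

    /* Sketch: dump the domain's page accounting on allocation failure, so an
     * over-allocation (tot_pages against max_pages) or a dying domain shows up. */
    if ( page == NULL )
        printk("mem_sharing: alloc failed for dom%d: tot_pages=%u max_pages=%u is_dying=%d\n",
               d->domain_id, d->tot_pages, d->max_pages, d->is_dying);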

Cheers.

Tim.

> -----------code------------
> 422 extern void pagealloc_info(unsigned char key);
>  423 static struct page_info* mem_sharing_alloc_page(struct domain *d,
>  424                                                 unsigned long gfn,
>  425                                                 int must_succeed)
>  426 {
>  427     struct page_info* page;
>  428     struct vcpu *v = current;
>  429     mem_event_request_t req;
>  430
>  431     page = alloc_domheap_page(d, 0);
>  432     if(page != NULL) return page;
>  433
>  434     memset(&req, 0, sizeof(req));
>  435     if(must_succeed)
>  436     {
>  437         /* We do not support 'must_succeed' any more. External operations such
>  438          * as grant table mappings may fail with OOM condition!
>  439          */
>  440         pagealloc_info('m');
>  441         BUG();
>  442     }
> 
> -------------serial output-------
> (XEN) Physical memory information:
> (XEN)     Xen heap: 0kB free
> (XEN)     heap[14]: 64480kB free
> (XEN)     heap[15]: 131072kB free
> (XEN)     heap[16]: 262144kB free
> (XEN)     heap[17]: 524288kB free
> (XEN)     heap[18]: 1048576kB free
> (XEN)     heap[19]: 1037128kB free
> (XEN)     heap[20]: 3035744kB free
> (XEN)     heap[21]: 2610292kB free
> (XEN)     heap[22]: 2866212kB free
> (XEN)     Dom heap: 11579936kB free
> (XEN) Xen BUG at mem_sharing.c:441
> (XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82c4801c0531>] mem_sharing_unshare_page+0x681/0x790
> (XEN) RFLAGS: 0000000000010282   CONTEXT: hypervisor
> (XEN) rax: 0000000000000000   rbx: ffff83040092d808   rcx: 0000000000000096
> (XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff82c48021eac4
> (XEN) rbp: 0000000000000000   rsp: ffff82c48035f5e8   r8:  0000000000000001
> (XEN) r9:  0000000000000001   r10: 00000000fffffff5   r11: 0000000000000008
> (XEN) r12: ffff8305c61f3980   r13: ffff83040eff0000   r14: 000000000001610f
> (XEN) r15: ffff82c48035f628   cr0: 000000008005003b   cr4: 00000000000026f0
> (XEN) cr3: 000000052bc4f000   cr2: ffff880120126e88
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
> (XEN) Xen stack trace from rsp=ffff82c48035f5e8:
> (XEN)    ffff8305c61f3990 00018300bf2f0000 ffff82f604e6a4a0 000000002ab84078
> (XEN)    ffff83040092d7f0 00000000001b9c9c ffff8300bf2f0000 000000010eff0000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000d0000010f ffff8305447ec000 000000000001610f
> (XEN)    0000000000273525 ffff82c48035f724 ffff830502c705a0 ffff82f602c89a00
> (XEN)    ffff83040eff0000 ffff82c48010bfa9 ffff830572c5dbf0 000000000029e07f
> (XEN)    0000000000000000 ffff830572c5dbf0 000000008035fbe8 ffff82c48035f6f8
> (XEN)    0000000100000002 ffff830572c5dbf0 ffff83063fc30000 ffff830572c5dbf0
> (XEN)    0000035900000000 ffff88010d14bbe0 ffff880159e09000 00003f7e00000002
> (XEN)    ffffffffffff0032 ffff88010d14bbb0 ffff830438dfa920 0000000d8010a650
> (XEN)    0000000000000100 ffff83063fc30000 ffff8305f9203730 ffffffffffffffea
> (XEN)    ffff88010d14bb70 0000000000000000 ffff88010d14bc10 ffff88010d14bbc0
> (XEN)    0000000000000002 ffff82c48010da9b 0000000000000202 ffff82c48035fec8
> (XEN)    ffff82c48035f7c8 00000000801880af ffff83063fc30010 0000000000000000
> (XEN)    ffff82c400000008 ffff82c48035ff28 0000000000000000 ffff88010d14bbc0
> (XEN)    ffff880159e08000 0000000000000000 0000000000000000 00020000000002d7
> (XEN)    00000000003f2b38 ffff8305b1f4b6b8 ffff8305b30f0000 ffff880159e09000
> (XEN)    0000000000000000 0000000000000000 000200000000008a 00000000003ed1f9
> (XEN)    ffff83063fc26450 ffff8305b30f0000 ffff880159e0a000 0000000000000000
> (XEN)    0000000000000000 00020000000001fa 000000000029e2ba ffff83063fc26fd0
> (XEN) Xen call trace:
> (XEN)    [<ffff82c4801c0531>] mem_sharing_unshare_page+0x681/0x790
> (XEN)    [<ffff82c48010bfa9>] gnttab_map_grant_ref+0xbf9/0xe30
> (XEN)    [<ffff82c48010da9b>] do_grant_table_op+0x14b/0x1080
> (XEN)    [<ffff82c48010fb44>] do_xen_version+0xb4/0x480
> (XEN)    [<ffff82c4801b8215>] set_p2m_entry+0x85/0xc0
> (XEN)    [<ffff82c4801bc92e>] set_shared_p2m_entry+0x1be/0x2f0
> (XEN)    [<ffff82c480121c4c>] xmem_pool_free+0x2c/0x310
> (XEN)    [<ffff82c4801bfaf8>] mem_sharing_share_pages+0xd8/0x3d0
> (XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
> (XEN)    [<ffff82c48011c519>] cpumask_raise_softirq+0x89/0xa0
> (XEN)    [<ffff82c480118351>] csched_vcpu_wake+0x101/0x1b0
> (XEN)    [<ffff82c48014717d>] vcpu_kick+0x1d/0x80
> (XEN)    [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
> (XEN)    [<ffff82c48015a1d8>] get_page+0x28/0xf0
> (XEN)    [<ffff82c48015ed72>] do_update_descriptor+0x1d2/0x210
> (XEN)    [<ffff82c480113d7e>] do_multicall+0x14e/0x340
> (XEN)    [<ffff82c4801e3169>] syscall_enter+0xa9/0xae
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Xen BUG at mem_sharing.c:441
> (XEN) ****************************************
> (XEN)
> (XEN) Manual reset required ('noreboot' specified)
> 
> > Date: Mon, 17 Jan 2011 17:02:02 +0800
> > Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> > From: juihaochiang@gmail.com
> > To: tinnycloud@hotmail.com
> > CC: xen-devel@lists.xensource.com; tim.deegan@citrix.com
> >
> > Hi, tinnycloud:
> >
> > Do you have xenpaging tools running properly?
> > I haven't gone through that one, but it seems you have run out of memory.
> > When this case happens, mem_sharing will request memory to the
> > xenpaging daemon, which tends to page out and free some memory.
> > Otherwise, the allocation would fail.
> > Is this your scenario?
> >
> > Bests,
> > Jui-Hao
> >
> > 2011/1/17 MaoXiaoyun <tinnycloud@hotmail.com>:
> > > Another failure on BUG() in mem_sharing_alloc_page()
> > >
> > >  memset(&req, 0, sizeof(req));
> > >  if(must_succeed)
> > >     {
> > >         /* We do not support 'must_succeed' any more. External operations
> > > such
> > >          * as grant table mappings may fail with OOM condition!
> > >          */
> > >         BUG();===================>bug here
> > >     }
> > >     else
> > >     {
> > >         /* All foreign attempts to unshare pages should be handled through
> > >          * 'must_succeed' case. */
> > >         ASSERT(v->domain->domain_id == d->domain_id);
> > >         vcpu_pause_nosync(v);
> > >         req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
> > >     }
> > >

-- 
Tim Deegan <Tim.Deegan@citrix.com>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-20  9:19                                       ` Tim Deegan
@ 2011-01-20  9:37                                         ` MaoXiaoyun
  2011-01-21  6:10                                         ` MaoXiaoyun
  1 sibling, 0 replies; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-20  9:37 UTC (permalink / raw)
  To: tim.deegan; +Cc: xen devel, juihaochiang


[-- Attachment #1.1: Type: text/plain, Size: 9380 bytes --]


I'll do the check. Thanks.
Well, during the test I still hit two other failures:
 
1) When all domains are destroyed, the handle count in the hash table sometimes does not drop to 0.
I print the handle count; most of the time it is 0 after all domains are destroyed, but occasionally:
(XEN) ===>total handles 2 total gfns 2 next_handle: 713269
 
2) set_shared_p2m_entry() failed
 745     list_for_each_safe(le, te, &ce->gfns)
 746     {
 747         gfn = list_entry(le, struct gfn_info, list);
 748         /* Get the source page and type, this should never fail 
 749          * because we are under shr lock, and got non-null se */
 750         BUG_ON(!get_page_and_type(spage, dom_cow, PGT_shared_page));
 751         /* Move the gfn_info from ce list to se list */
 752         list_del(&gfn->list);
 753         d = get_domain_by_id(gfn->domain);
 754 //      mem_sharing_debug_gfn(d, gfn->gfn);
 755         BUG_ON(!d);
 756         BUG_ON(set_shared_p2m_entry(d, gfn->gfn, se->mfn) == 0);
 757         put_domain(d);
 758         list_add(&gfn->list, &se->gfns);
 759         put_page_and_type(cpage);
 760 //      mem_sharing_debug_gfn(d, gfn->gfn);   
 
 
(XEN) printk: 33 messages suppressed.
(XEN) p2m.c:2442:d0 set_mmio_p2m_entry: set_p2m_entry failed! mfn=0023dbb7
(XEN) Xen BUG at mem_sharing.c:756
(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82c4801bfd90>] mem_sharing_share_pages+0x370/0x3d0
(XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
(XEN) rax: 0000000000000000   rbx: ffff83040ed20000   rcx: 0000000000000092
(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff82c48021eac4
(XEN) rbp: ffff8305a4bbe1b0   rsp: ffff82c48035fc58   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 00000000fffffffb   r11: ffff82c4801318d0
(XEN) r12: ffff8305a4bbe1a0   r13: ffff8305a61d42a0   r14: ffff82f6047b76e0
(XEN) r15: ffff8304e5e918c8   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 00000005203fc000   cr2: 00000000027b8000


 
 
 
> Date: Thu, 20 Jan 2011 09:19:34 +0000
> From: Tim.Deegan@citrix.com
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; juihaochiang@gmail.com
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> 
> At 07:19 +0000 on 20 Jan (1295507976), MaoXiaoyun wrote:
> > Hi:
> > 
> > The latest BUG in mem_sharing_alloc_page from mem_sharing_unshare_page.
> > I printed heap info, which shows plenty memory left.
> > Could domain be NULL during in unshare, or should it be locked by rcu_lock_domain_by_id ?
> > 
> 
> 'd' probably isn't NULL; more likely is that the domain is not allowed
> to have any more memory. You should look at the values of d->max_pages
> and d->tot_pages when the failure happens.
> 
> Cheers.
> 
> Tim.
> 
> > -----------code------------
> > 422 extern void pagealloc_info(unsigned char key);
> > 423 static struct page_info* mem_sharing_alloc_page(struct domain *d,
> > 424 unsigned long gfn,
> > 425 int must_succeed)
> > 426 {
> > 427 struct page_info* page;
> > 428 struct vcpu *v = current;
> > 429 mem_event_request_t req;
> > 430
> > 431 page = alloc_domheap_page(d, 0);
> > 432 if(page != NULL) return page;
> > 433
> > 434 memset(&req, 0, sizeof(req));
> > 435 if(must_succeed)
> > 436 {
> > 437 /* We do not support 'must_succeed' any more. External operations such
> > 438 * as grant table mappings may fail with OOM condition!
> > 439 */
> > 440 pagealloc_info('m');
> > 441 BUG();
> > 442 }
> > 
> > -------------serial output-------
> > (XEN) Physical memory information:
> > (XEN) Xen heap: 0kB free
> > (XEN) heap[14]: 64480kB free
> > (XEN) heap[15]: 131072kB free
> > (XEN) heap[16]: 262144kB free
> > (XEN) heap[17]: 524288kB free
> > (XEN) heap[18]: 1048576kB free
> > (XEN) heap[19]: 1037128kB free
> > (XEN) heap[20]: 3035744kB free
> > (XEN) heap[21]: 2610292kB free
> > (XEN) heap[22]: 2866212kB free
> > (XEN) Dom heap: 11579936kB free
> > (XEN) Xen BUG at mem_sharing.c:441
> > (XEN) ----[ Xen-4.0.0 x86_64 debug=n Not tainted ]----
> > (XEN) CPU: 0
> > (XEN) RIP: e008:[<ffff82c4801c0531>] mem_sharing_unshare_page+0x681/0x790
> > (XEN) RFLAGS: 0000000000010282 CONTEXT: hypervisor
> > (XEN) rax: 0000000000000000 rbx: ffff83040092d808 rcx: 0000000000000096
> > (XEN) rdx: 000000000000000a rsi: 000000000000000a rdi: ffff82c48021eac4
> > (XEN) rbp: 0000000000000000 rsp: ffff82c48035f5e8 r8: 0000000000000001
> > (XEN) r9: 0000000000000001 r10: 00000000fffffff5 r11: 0000000000000008
> > (XEN) r12: ffff8305c61f3980 r13: ffff83040eff0000 r14: 000000000001610f
> > (XEN) r15: ffff82c48035f628 cr0: 000000008005003b cr4: 00000000000026f0
> > (XEN) cr3: 000000052bc4f000 cr2: ffff880120126e88
> > (XEN) ds: 0000 es: 0000 fs: 0000 gs: 0000 ss: e010 cs: e008
> > (XEN) Xen stack trace from rsp=ffff82c48035f5e8:
> > (XEN) ffff8305c61f3990 00018300bf2f0000 ffff82f604e6a4a0 000000002ab84078
> > (XEN) ffff83040092d7f0 00000000001b9c9c ffff8300bf2f0000 000000010eff0000
> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
> > (XEN) 0000000000000000 0000000d0000010f ffff8305447ec000 000000000001610f
> > (XEN) 0000000000273525 ffff82c48035f724 ffff830502c705a0 ffff82f602c89a00
> > (XEN) ffff83040eff0000 ffff82c48010bfa9 ffff830572c5dbf0 000000000029e07f
> > (XEN) 0000000000000000 ffff830572c5dbf0 000000008035fbe8 ffff82c48035f6f8
> > (XEN) 0000000100000002 ffff830572c5dbf0 ffff83063fc30000 ffff830572c5dbf0
> > (XEN) 0000035900000000 ffff88010d14bbe0 ffff880159e09000 00003f7e00000002
> > (XEN) ffffffffffff0032 ffff88010d14bbb0 ffff830438dfa920 0000000d8010a650
> > (XEN) 0000000000000100 ffff83063fc30000 ffff8305f9203730 ffffffffffffffea
> > (XEN) ffff88010d14bb70 0000000000000000 ffff88010d14bc10 ffff88010d14bbc0
> > (XEN) 0000000000000002 ffff82c48010da9b 0000000000000202 ffff82c48035fec8
> > (XEN) ffff82c48035f7c8 00000000801880af ffff83063fc30010 0000000000000000
> > (XEN) ffff82c400000008 ffff82c48035ff28 0000000000000000 ffff88010d14bbc0
> > (XEN) ffff880159e08000 0000000000000000 0000000000000000 00020000000002d7
> > (XEN) 00000000003f2b38 ffff8305b1f4b6b8 ffff8305b30f0000 ffff880159e09000
> > (XEN) 0000000000000000 0000000000000000 000200000000008a 00000000003ed1f9
> > (XEN) ffff83063fc26450 ffff8305b30f0000 ffff880159e0a000 0000000000000000
> > (XEN) 0000000000000000 00020000000001fa 000000000029e2ba ffff83063fc26fd0
> > (XEN) Xen call trace:
> > (XEN) [<ffff82c4801c0531>] mem_sharing_unshare_page+0x681/0x790
> > (XEN) [<ffff82c48010bfa9>] gnttab_map_grant_ref+0xbf9/0xe30
> > (XEN) [<ffff82c48010da9b>] do_grant_table_op+0x14b/0x1080
> > (XEN) [<ffff82c48010fb44>] do_xen_version+0xb4/0x480
> > (XEN) [<ffff82c4801b8215>] set_p2m_entry+0x85/0xc0
> > (XEN) [<ffff82c4801bc92e>] set_shared_p2m_entry+0x1be/0x2f0
> > (XEN) [<ffff82c480121c4c>] xmem_pool_free+0x2c/0x310
> > (XEN) [<ffff82c4801bfaf8>] mem_sharing_share_pages+0xd8/0x3d0
> > (XEN) [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
> > (XEN) [<ffff82c48011c519>] cpumask_raise_softirq+0x89/0xa0
> > (XEN) [<ffff82c480118351>] csched_vcpu_wake+0x101/0x1b0
> > (XEN) [<ffff82c48014717d>] vcpu_kick+0x1d/0x80
> > (XEN) [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
> > (XEN) [<ffff82c48015a1d8>] get_page+0x28/0xf0
> > (XEN) [<ffff82c48015ed72>] do_update_descriptor+0x1d2/0x210
> > (XEN) [<ffff82c480113d7e>] do_multicall+0x14e/0x340
> > (XEN) [<ffff82c4801e3169>] syscall_enter+0xa9/0xae
> > (XEN)
> > (XEN)
> > (XEN) ****************************************
> > (XEN) Panic on CPU 0:
> > (XEN) Xen BUG at mem_sharing.c:441
> > (XEN) ****************************************
> > (XEN)
> > (XEN) Manual reset required ('noreboot' specified)
> > 
> > > Date: Mon, 17 Jan 2011 17:02:02 +0800
> > > Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> > > From: juihaochiang@gmail.com
> > > To: tinnycloud@hotmail.com
> > > CC: xen-devel@lists.xensource.com; tim.deegan@citrix.com
> > >
> > > Hi, tinnycloud:
> > >
> > > Do you have xenpaging tools running properly?
> > > I haven't gone through that one, but it seems you have run out of memory.
> > > When this case happens, mem_sharing will request memory to the
> > > xenpaging daemon, which tends to page out and free some memory.
> > > Otherwise, the allocation would fail.
> > > Is this your scenario?
> > >
> > > Bests,
> > > Jui-Hao
> > >
> > > 2011/1/17 MaoXiaoyun <tinnycloud@hotmail.com>:
> > > > Another failure on BUG() in mem_sharing_alloc_page()
> > > >
> > > > memset(&req, 0, sizeof(req));
> > > > if(must_succeed)
> > > > {
> > > > /* We do not support 'must_succeed' any more. External operations
> > > > such
> > > > * as grant table mappings may fail with OOM condition!
> > > > */
> > > > BUG();===================>bug here
> > > > }
> > > > else
> > > > {
> > > > /* All foreign attempts to unshare pages should be handled through
> > > > * 'must_succeed' case. */
> > > > ASSERT(v->domain->domain_id == d->domain_id);
> > > > vcpu_pause_nosync(v);
> > > > req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
> > > > }
> > > >
> 
> -- 
> Tim Deegan <Tim.Deegan@citrix.com>
> Principal Software Engineer, Xen Platform Team
> Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 12280 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-20  9:19                                       ` Tim Deegan
  2011-01-20  9:37                                         ` MaoXiaoyun
@ 2011-01-21  6:10                                         ` MaoXiaoyun
  1 sibling, 0 replies; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-21  6:10 UTC (permalink / raw)
  To: xen devel; +Cc: george.dunlap, tim.deegan, juihaochiang


[-- Attachment #1.1: Type: text/plain, Size: 9380 bytes --]


It was later found that the domain is dying; page allocation is prohibited while a domain is dying.
 
 (XEN) ---domain is 1, max_pages 132096, total_pages 29736
 
The output above is from the printk added at line 914:
 
 909     old_page = page;
 910     page = mem_sharing_alloc_page(d, gfn, flags & MEM_SHARING_MUST_SUCCEED);
 911     if(!page)
 912     {
 913         mem_sharing_debug_gfn(d, gfn);
 914         printk("---domain is %d, max_pages %u, total_pages %u \n", d->is_dying, d->max_pages, d->tot_pages);
 915         BUG_ON(!d);
 
--------------
Well, the logic is a bit complicated; my fix is to set the gfn's mfn to INVALID_MFN:
 
 
 876     ret = page_make_private(d, page);                                                                                                                   
 877     /*last_gfn shoule able to be make_private*/
 878     /* last_gfn should be able to be made private */
 879     if(ret == 0) goto private_page_found;
 880 
 881     ld = rcu_lock_domain_by_id(d->domain_id);                                                                                                         
 882     BUG_ON(!ld);
 883     if(ld->is_dying )
 884     {
 885         if(!ld)
 886             printk("d is NULL %d\n", d->domain_id);
 887         else
 888             printk("d is dying %d %d\n", d->is_dying, d->domain_id);
 889 
 890         /*decrease page type count and destory gfn*/
 891         put_page_and_type(page);
 892         mem_sharing_gfn_destroy(gfn_info, !last_gfn);
 893 
 894         if(last_gfn) 
 895             mem_sharing_hash_delete(handle);
 896         else 
 897             /* Even though we don't allocate a private page, we have to account
 898              * for the MFN that originally backed this PFN. */
 899             atomic_dec(&nr_saved_mfns);
 900 
 901         /*set mfn invalid*/
 902         BUG_ON(set_shared_p2m_entry_invalid(d, gfn)==0);
 903         if(ld)
 904           rcu_unlock_domain(ld);
 905         shr_unlock();
 906         return 0;
 907     }
 
Any other suggestions?
 
 


> Date: Thu, 20 Jan 2011 09:19:34 +0000
> From: Tim.Deegan@citrix.com
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; juihaochiang@gmail.com
> Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> 
> At 07:19 +0000 on 20 Jan (1295507976), MaoXiaoyun wrote:
> > Hi:
> > 
> > The latest BUG in mem_sharing_alloc_page from mem_sharing_unshare_page.
> > I printed heap info, which shows plenty memory left.
> > Could domain be NULL during in unshare, or should it be locked by rcu_lock_domain_by_id ?
> > 
> 
> 'd' probably isn't NULL; more likely is that the domain is not allowed
> to have any more memory. You should look at the values of d->max_pages
> and d->tot_pages when the failure happens.
> 
> Cheers.
> 
> Tim.
> 
> > -----------code------------
> > 422 extern void pagealloc_info(unsigned char key);
> > 423 static struct page_info* mem_sharing_alloc_page(struct domain *d,
> > 424 unsigned long gfn,
> > 425 int must_succeed)
> > 426 {
> > 427 struct page_info* page;
> > 428 struct vcpu *v = current;
> > 429 mem_event_request_t req;
> > 430
> > 431 page = alloc_domheap_page(d, 0);
> > 432 if(page != NULL) return page;
> > 433
> > 434 memset(&req, 0, sizeof(req));
> > 435 if(must_succeed)
> > 436 {
> > 437 /* We do not support 'must_succeed' any more. External operations such
> > 438 * as grant table mappings may fail with OOM condition!
> > 439 */
> > 440 pagealloc_info('m');
> > 441 BUG();
> > 442 }
> > 
> > -------------serial output-------
> > (XEN) Physical memory information:
> > (XEN) Xen heap: 0kB free
> > (XEN) heap[14]: 64480kB free
> > (XEN) heap[15]: 131072kB free
> > (XEN) heap[16]: 262144kB free
> > (XEN) heap[17]: 524288kB free
> > (XEN) heap[18]: 1048576kB free
> > (XEN) heap[19]: 1037128kB free
> > (XEN) heap[20]: 3035744kB free
> > (XEN) heap[21]: 2610292kB free
> > (XEN) heap[22]: 2866212kB free
> > (XEN) Dom heap: 11579936kB free
> > (XEN) Xen BUG at mem_sharing.c:441
> > (XEN) ----[ Xen-4.0.0 x86_64 debug=n Not tainted ]----
> > (XEN) CPU: 0
> > (XEN) RIP: e008:[<ffff82c4801c0531>] mem_sharing_unshare_page+0x681/0x790
> > (XEN) RFLAGS: 0000000000010282 CONTEXT: hypervisor
> > (XEN) rax: 0000000000000000 rbx: ffff83040092d808 rcx: 0000000000000096
> > (XEN) rdx: 000000000000000a rsi: 000000000000000a rdi: ffff82c48021eac4
> > (XEN) rbp: 0000000000000000 rsp: ffff82c48035f5e8 r8: 0000000000000001
> > (XEN) r9: 0000000000000001 r10: 00000000fffffff5 r11: 0000000000000008
> > (XEN) r12: ffff8305c61f3980 r13: ffff83040eff0000 r14: 000000000001610f
> > (XEN) r15: ffff82c48035f628 cr0: 000000008005003b cr4: 00000000000026f0
> > (XEN) cr3: 000000052bc4f000 cr2: ffff880120126e88
> > (XEN) ds: 0000 es: 0000 fs: 0000 gs: 0000 ss: e010 cs: e008
> > (XEN) Xen stack trace from rsp=ffff82c48035f5e8:
> > (XEN) ffff8305c61f3990 00018300bf2f0000 ffff82f604e6a4a0 000000002ab84078
> > (XEN) ffff83040092d7f0 00000000001b9c9c ffff8300bf2f0000 000000010eff0000
> > (XEN) 0000000000000000 0000000000000000 0000000000000000 0000000000000000
> > (XEN) 0000000000000000 0000000d0000010f ffff8305447ec000 000000000001610f
> > (XEN) 0000000000273525 ffff82c48035f724 ffff830502c705a0 ffff82f602c89a00
> > (XEN) ffff83040eff0000 ffff82c48010bfa9 ffff830572c5dbf0 000000000029e07f
> > (XEN) 0000000000000000 ffff830572c5dbf0 000000008035fbe8 ffff82c48035f6f8
> > (XEN) 0000000100000002 ffff830572c5dbf0 ffff83063fc30000 ffff830572c5dbf0
> > (XEN) 0000035900000000 ffff88010d14bbe0 ffff880159e09000 00003f7e00000002
> > (XEN) ffffffffffff0032 ffff88010d14bbb0 ffff830438dfa920 0000000d8010a650
> > (XEN) 0000000000000100 ffff83063fc30000 ffff8305f9203730 ffffffffffffffea
> > (XEN) ffff88010d14bb70 0000000000000000 ffff88010d14bc10 ffff88010d14bbc0
> > (XEN) 0000000000000002 ffff82c48010da9b 0000000000000202 ffff82c48035fec8
> > (XEN) ffff82c48035f7c8 00000000801880af ffff83063fc30010 0000000000000000
> > (XEN) ffff82c400000008 ffff82c48035ff28 0000000000000000 ffff88010d14bbc0
> > (XEN) ffff880159e08000 0000000000000000 0000000000000000 00020000000002d7
> > (XEN) 00000000003f2b38 ffff8305b1f4b6b8 ffff8305b30f0000 ffff880159e09000
> > (XEN) 0000000000000000 0000000000000000 000200000000008a 00000000003ed1f9
> > (XEN) ffff83063fc26450 ffff8305b30f0000 ffff880159e0a000 0000000000000000
> > (XEN) 0000000000000000 00020000000001fa 000000000029e2ba ffff83063fc26fd0
> > (XEN) Xen call trace:
> > (XEN) [<ffff82c4801c0531>] mem_sharing_unshare_page+0x681/0x790
> > (XEN) [<ffff82c48010bfa9>] gnttab_map_grant_ref+0xbf9/0xe30
> > (XEN) [<ffff82c48010da9b>] do_grant_table_op+0x14b/0x1080
> > (XEN) [<ffff82c48010fb44>] do_xen_version+0xb4/0x480
> > (XEN) [<ffff82c4801b8215>] set_p2m_entry+0x85/0xc0
> > (XEN) [<ffff82c4801bc92e>] set_shared_p2m_entry+0x1be/0x2f0
> > (XEN) [<ffff82c480121c4c>] xmem_pool_free+0x2c/0x310
> > (XEN) [<ffff82c4801bfaf8>] mem_sharing_share_pages+0xd8/0x3d0
> > (XEN) [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
> > (XEN) [<ffff82c48011c519>] cpumask_raise_softirq+0x89/0xa0
> > (XEN) [<ffff82c480118351>] csched_vcpu_wake+0x101/0x1b0
> > (XEN) [<ffff82c48014717d>] vcpu_kick+0x1d/0x80
> > (XEN) [<ffff82c4801447da>] __find_next_bit+0x6a/0x70
> > (XEN) [<ffff82c48015a1d8>] get_page+0x28/0xf0
> > (XEN) [<ffff82c48015ed72>] do_update_descriptor+0x1d2/0x210
> > (XEN) [<ffff82c480113d7e>] do_multicall+0x14e/0x340
> > (XEN) [<ffff82c4801e3169>] syscall_enter+0xa9/0xae
> > (XEN)
> > (XEN)
> > (XEN) ****************************************
> > (XEN) Panic on CPU 0:
> > (XEN) Xen BUG at mem_sharing.c:441
> > (XEN) ****************************************
> > (XEN)
> > (XEN) Manual reset required ('noreboot' specified)
> > 
> > > Date: Mon, 17 Jan 2011 17:02:02 +0800
> > > Subject: Re: [PATCH] mem_sharing: fix race condition of nominate and unshare
> > > From: juihaochiang@gmail.com
> > > To: tinnycloud@hotmail.com
> > > CC: xen-devel@lists.xensource.com; tim.deegan@citrix.com
> > >
> > > Hi, tinnycloud:
> > >
> > > Do you have xenpaging tools running properly?
> > > I haven't gone through that one, but it seems you have run out of memory.
> > > When this case happens, mem_sharing will request memory to the
> > > xenpaging daemon, which tends to page out and free some memory.
> > > Otherwise, the allocation would fail.
> > > Is this your scenario?
> > >
> > > Bests,
> > > Jui-Hao
> > >
> > > 2011/1/17 MaoXiaoyun <tinnycloud@hotmail.com>:
> > > > Another failure on BUG() in mem_sharing_alloc_page()
> > > >
> > > > memset(&req, 0, sizeof(req));
> > > > if(must_succeed)
> > > > {
> > > > /* We do not support 'must_succeed' any more. External operations
> > > > such
> > > > * as grant table mappings may fail with OOM condition!
> > > > */
> > > > BUG();===================>bug here
> > > > }
> > > > else
> > > > {
> > > > /* All foreign attempts to unshare pages should be handled through
> > > > * 'must_succeed' case. */
> > > > ASSERT(v->domain->domain_id == d->domain_id);
> > > > vcpu_pause_nosync(v);
> > > > req.flags |= MEM_EVENT_FLAG_VCPU_PAUSED;
> > > > }
> > > >
> 
> -- 
> Tim Deegan <Tim.Deegan@citrix.com>
> Principal Software Engineer, Xen Platform Team
> Citrix Systems UK Ltd. (Company #02937203, SL9 0BG)
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 15122 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
  2011-01-19  9:11                                           ` George Dunlap
  2011-01-19 13:44                                             ` MaoXiaoyun
@ 2011-01-24 10:56                                             ` MaoXiaoyun
  2011-02-16 13:50                                               ` abnormal CPU utilization tinnycloud
  1 sibling, 1 reply; 50+ messages in thread
From: MaoXiaoyun @ 2011-01-24 10:56 UTC (permalink / raw)
  To: george.dunlap; +Cc: xen devel, tim.deegan, juihaochiang


[-- Attachment #1.1: Type: text/plain, Size: 11483 bytes --]


Hi George:
 
       Besides the p2m_is_shared() check needed in p2m_pod_zero_check(), 
in mem_sharing_nominate_page() there is a page type check at line 524,
and the page handle is set at line 566.
       So it looks like there is a race on the page type: mem_sharing_nominate_page()
may first find the page sharable while POD, at the same time, puts the page into its cache, 
and later mem_sharing_nominate_page() sets the page handle, thus damaging the page list
(a toy model of that damage follows the listing below). So we need to put mem_sharing_nominate_page() under p2m_lock protection, right?
 
   491 int mem_sharing_nominate_page(struct p2m_domain *p2m, 
   492                               unsigned long gfn,
   493                               int expected_refcnt,
   494                               shr_handle_t *phandle)
   495 {
   496     p2m_type_t p2mt;
   497     mfn_t mfn;
   498     struct page_info *page;
   499     int ret;
   500     shr_handle_t handle;
   501     shr_hash_entry_t *hash_entry;
   502     struct gfn_info *gfn_info;
   503     struct domain *d = p2m->domain;
   504
   505     *phandle = 0UL;
   506
   507     shr_lock(); 
   508     mfn = gfn_to_mfn(p2m, gfn, &p2mt);
   509
   510     /* Check if mfn is valid */
   511     ret = -EINVAL;
   512     if (!mfn_valid(mfn))
   513         goto out;
   514
   515     /* Return the handle if the page is already shared */
   516     page = mfn_to_page(mfn);
   517     if (p2m_is_shared(p2mt)) {
   518         *phandle = page->shr_handle;
   519         ret = 0;
   520         goto out;
   521     }
   522
   523     /* Check p2m type */
   524     if (!p2m_is_sharable(p2mt))
   525         goto out;
   526
   527     /* Try to convert the mfn to the sharable type */
   528     ret = page_make_sharable(d, page, expected_refcnt); 
   529     if(ret) 
   530         goto out;
   531
   532     /* Create the handle */
   533     ret = -ENOMEM;
   534     handle = next_handle++;  
   535     if((hash_entry = mem_sharing_hash_insert(handle, mfn)) == NULL)
   536     {
   537         goto out;
   538     }
   539     if((gfn_info = mem_sharing_gfn_alloc()) == NULL)
   540     {
   541         mem_sharing_hash_destroy(hash_entry);
   542         goto out;
   543     }
   544
   545     /* Change the p2m type */
   546     if(p2m_change_type(p2m, gfn, p2mt, p2m_ram_shared) != p2mt) 
   547     {
   548         /* This is unlikely, as the type must have changed since we've checked
   549          * it a few lines above.
   550          * The mfn needs to revert back to rw type. This should never fail,
   551          * since no-one knew that the mfn was temporarily sharable */
   552         BUG_ON(page_make_private(d, page) != 0);
   553         mem_sharing_hash_destroy(hash_entry);
   554         mem_sharing_gfn_destroy(gfn_info, 0);
   555         goto out;
   556     }
   557
   558     /* Update m2p entry to SHARED_M2P_ENTRY */
   559     set_gpfn_from_mfn(mfn_x(mfn), SHARED_M2P_ENTRY);
   560
   561     INIT_LIST_HEAD(&hash_entry->gfns);
   562     INIT_LIST_HEAD(&gfn_info->list);
   563     list_add(&gfn_info->list, &hash_entry->gfns);
   564     gfn_info->gfn = gfn;
   565     gfn_info->domain = d->domain_id;
   566     page->shr_handle = handle;
   567     *phandle = handle;
   568
   569     ret = 0;
   570
   571 out:
   572     shr_unlock();
   573     return ret;
   574 }
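
As a toy model of the damage such an interleaving could cause, here is a self-contained C
program; the union is a simplified stand-in for struct page_info, on the assumption (stated
earlier in the thread) that the POD cache list entry and shr_handle occupy the same storage.
The real layout is in the Xen headers, so treat this only as an illustration:

    #include <stdint.h>
    #include <stdio.h>

    struct list_head { struct list_head *next, *prev; };

    /* Simplified stand-in for struct page_info: the list linkage and the
     * sharing handle overlay the same bytes. */
    struct page_info_model {
        union {
            struct list_head list;   /* used while the page sits in the POD cache */
            uint64_t shr_handle;     /* used while the page is shared */
        } u;
    };

    int main(void)
    {
        struct page_info_model pg;
        struct list_head cache;

        /* POD zero-check links the page into its cache... */
        cache.next = cache.prev = &pg.u.list;
        pg.u.list.next = pg.u.list.prev = &cache;

        /* ...then nominate stores the handle, clobbering the list pointer. */
        pg.u.shr_handle = 0x1234;

        printf("list.next after handle write: %p\n", (void *)pg.u.list.next);
        return 0;
    }

If the two paths really can run concurrently on the same page, something (the p2m_lock, or
the extra p2m_is_shared() check in the POD zero-check path) has to order them.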


 
> Date: Wed, 19 Jan 2011 09:11:37 +0000
> Subject: Re: [Xen-devel] RE: [PATCH] mem_sharing: fix race condition of nominate and unshare
> From: George.Dunlap@eu.citrix.com
> To: tinnycloud@hotmail.com
> CC: xen-devel@lists.xensource.com; tim.deegan@citrix.com; juihaochiang@gmail.com
> 
> Very likely. If you look in xen/arch/x86/mm/p2m.c, the two functions
> which check a page to see if it can be reclaimed are
> "p2m_pod_zero_check*()". A little ways into each function there's a
> giant "if()" which has all of the conditions for reclaiming a page,
> starting with p2m_is_ram(). The easiest way to fix it is to add
> p2m_is_shared() to that "if" statement.
> 
> -George
> 
> 2011/1/19 MaoXiaoyun <tinnycloud@hotmail.com>:
> > Hi George:
> >
> >        I am working on the xen mem_sharing, and I think the bug below is
> > related to POD.
> > (Tests show that when POD is enabled the bug is easily hit; when it is disabled, no
> > bug occurs.)
> >
> > As I understand it, when a domU starts with POD it gets memory from the POD cache, and in
> > some situations POD scans for zero pages to reuse (linking the page
> > into the POD cache page list). From the page_info definition, the list and the handle share
> > the same position, and I think that when reusing a page POD doesn't check the page type,
> > so if it is a shared page it can still be put into the POD cache, and thus the handle is
> > overwritten.
> >
> >       So maybe we need to check the page type before putting a page into the cache.
> > What's your opinion?
> >       Thanks.
> >
> >>--------------------------------------------------------------------------------
> >>From: tinnycloud@hotmail.com
> >>To: juihaochiang@gmail.com; xen-devel@lists.xensource.com
> >>CC: tim.deegan@citrix.com
> >>Subject: RE: [PATCH] mem_sharing: fix race condition of nominate and
> >> unshare
> >>Date: Tue, 18 Jan 2011 20:05:16 +0800
> >>
> >>Hi:
> >>
> >> It is later found that caused by below patch code and I am using the
> >> blktap2.
> >>The handle returned from here will later become ch in
> >> mem_sharing_share_pages, and then
> >>in mem_sharing_share_pages will have ch = sh, thus caused the problem.
> >>
> >>+    /* Return the handle if the page is already shared */
> >>+    page = mfn_to_page(mfn);
> >>+    if (p2m_is_shared(p2mt)) {
> >>+        *phandle = page->shr_handle;
> >>+        ret = 0;
> >>+        goto out;
> >>+    }
> >>+
> >>
> >>But after I removed the code, tests still failed, and this handle's value
> >> does not make sense.
> >>
> >>
> >>(XEN) ===>total handles 17834 total gfns 255853
> >>(XEN) handle 13856642536914634
> >>(XEN) Debug for domain=1, gfn=19fed, Debug page: MFN=349c0a is
> >> ci=8000000000000008, ti=8400000000000007, owner_id=32755
> >>(XEN) ----[ Xen-4.0.0  x86_64  debug=n  Not tainted ]----
> >>(XEN) CPU:    15
> >>(XEN) RIP:    e008:[<ffff82c4801bff4b>]
> >> mem_sharing_unshare_page+0x19b/0x720
> >>(XEN) RFLAGS: 0000000000010246   CONTEXT: hypervisor
> >>(XEN) rax: 0000000000000000   rbx: ffff83063fc67f28   rcx:
> >> 0000000000000092
> >>(XEN) rdx: 000000000000000a   rsi: 000000000000000a   rdi: ffff82c48021e9c4
> >>(XEN) rbp: ffff830440000000   rsp: ffff83063fc67c48   r8:  0000000000000001
> >>(XEN) r9:  0000000000000000   r10: 00000000fffffff8   r11: 0000000000000005
> >>(XEN) r12: 0000000000019fed   r13: 0000000000000000   r14: 0000000000000000
> >>(XEN) r15: ffff82f606938140   cr0: 000000008005003b   cr4: 00000000000026f0
> >>(XEN) cr3: 000000055513c000   cr2: 0000000000000018
> >>(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> >>(XEN) Xen stack trace from rsp=ffff83063fc67c48:
> >>(XEN)    02c5f6c8b70fed66 39ef64058b487674 ffff82c4801a6082
> >> 0000000000000000
> >>(XEN)    00313a8b00313eca 0000000000000001 0000000000000009 ff
> >> ff830440000000
> >>(XEN)    ffff83063fc67cb8 ffff82c4801df6f9 0000000000000040
> >> ffff83063fc67d04
> >>(XEN)    0000000000019fed 0000000d000001ed ffff83055458d000
> >> ffff83063fc67f28
> >>(XEN)    0000000000019fed 0000000000349c0a 0000000000000030
> >> ffff83063fc67f28
> >>(XEN)    0000000000000030 ffff82c48019baa6 ffff82c4802519c0
> >> 0000000d8016838e
> >>(XEN)    0000000000000000 00000000000001aa ffff8300bf554000
> >> ffff82c4801b3864
> >>(XEN)    ffff830440000348 ffff8300bf554000 ffff8300bf5557f0
> >> ffff8300bf5557e8
> >>(XEN)    00000032027b81f2 ffff82c48026f080 ffff82c4801a9337
> >> ffff8300bf448000
> >>(XEN)    ffff8300bf554000 ffff830000000000 0000000019fed000
> >> ffff8300bf2f2000
> >>(XEN)    ffff82c48019985d 0000000000000080 ffff8300bf554000
> >> 0000000000019fed
> >>(XEN)    ffff82c4801b08ba 000000000001e000 ffff82c48014931f
> >> ffff8305570c6d50
> >>(XEN)    ffff82c480251080 00000032027b81f2 ffff8305570c6d50
> >> ffff83052f3e2200
> >>(XEN)    0000000f027b7de0 ffff82c48011e07a 000000000000000f
> >> ffff82c48026f0a0
> >>(XEN)    0000000000000082 0000000000000000 0000000000000000
> >> 0000000000009e44
> >>(XEN)    ffff8300bf554000 ffff8300bf2f2000 ffff82c48011e07a
> >> 000000000000000f
> >>(XEN)    ffff8300bf555760 0000000000000292 ffff82c48011afca
> >> 00000032028a8fc0
> >>(XEN)    0000000000000292 ffff82c4801a93c3 00000000000000ef
> >> ffff8300bf554000
> >>(XEN)    ffff8300bf554000 ffff8300bf5557e8 ffff82c4801a6082
> >> ffff8300bf554000
> >>(XEN)    0000000000000000 ffff82c4801a0cc8 ffff8300bf554000
> >> ffff8300bf554000
> >>(XEN) Xen call trace:
> >>(XEN)    [<ffff82c4801bff4b>] mem_sharing_unshare_page+0x19b/0x720
> >>(XEN)    [<ffff82c4801a6082>] vlapic_has_pending_irq+0x42/0x70
> >>(XEN)    [<ffff82c4801df6f9>] ept_get_entry+0xa9/0x1c0
> >>(XEN)    [<ffff82c48019baa6>] hvm_hap_nested_page_fault+0xd6/0x190
> >>(XEN)    [<ffff82c4801b3864>] vmx_vmexit_handler+0x304/0x1a90
> >>(XEN)    [<ffff82c4801a9337>] pt_restore_timer+0x57/0xb0
> >>(XEN)    [<ffff82c48019985d>] hvm_do_resume+0x1d/0x130
> >>(XEN)    [<ffff82c4801b08ba>] vmx_do_resume+0x11a/0x1c0
> >>(XEN)    [<ffff82c48014931f>] context_switch+0x76f/0xf00
> >>(XEN)    [<ffff82c48011e07a>] add_entry+0x3a/0xb0
> >>(XEN)    [<ffff82c48011e07a>] add_entry+0x3a/0xb0
> >>(XEN)    [<ffff82c48011afca>] schedule+0x1ea/0x500
> >>(XEN)    [<ffff82c4801a93c3>] pt_update_irq+0x33/0x1e0
> >>(XEN)    [<ffff82c4801a6082>] vlapic_has_pending_irq+0x42/0x70
> >>(XEN)    [<ffff82c4801a0cc8>] hvm_vcpu_has_pending_irq+0x88/0xa0
> >>(XEN)    [<ffff82c4801b267b>] vmx_vmenter_helper+0x5b/0x150
> >>(XEN)    [<ffff82c4801adaa3>] vmx_asm_do_vmentry+0x0/0xdd
> >>(XEN)
> >>(XEN) Pagetable walk from 0000000000000018:
> >>(XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
> >>(XEN)
> >>(XEN) ****************************************
> >>(XEN) Panic on CPU 15:
> >>(XEN) FATAL PAGE FAULT
> >>(XEN) [error_code=0000]
> >>(XEN) Faulting linear address: 0000000000000018
> >>(XEN) ****************************************
> >>(XEN)
> >>(XEN) Manual reset required ('noreboot' specified)
> >>
> >>
> >>
> >>
> >>
> >> ---------------------------------------------------------------------------------------------------
> >>>From: tinnycloud@hotmail.com
> >>>To: juihaochiang@gmail.com; xen-devel@lists.xensource.com
> >>>CC: tim.deegan@citrix.com
> >>>Subject: RE: [PATCH] mem_sharing: fix race condition of nominate and
> >>> unshare
> >>>Date: Tue, 18 Jan 2011 17:42:32 +0800
> >>
> >>>Hi Tim & Jui-Hao:
> >>
> >> >     When I use a Linux HVM instead of a Windows HVM, more bugs show up.
> >>
> >>>      I only start one VM, and when I destroy it, Xen crashes in
> >>> mem_sharing_unshare_page()
> >>>at line 709, where hash_entry is NULL. Later I found the handle had already
> >>> been removed in
> >>>mem_sharing_share_pages(); please refer to the logs below.
> >>
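A defensive check in that spirit, written as an illustration only (the lookup
helper's exact name is assumed from the surrounding code, not verified against
this tree):

    /* Sketch: bail out of mem_sharing_unshare_page() if the handle's hash
     * entry has already been removed (e.g. by a concurrent
     * mem_sharing_share_pages()), instead of dereferencing a NULL entry. */
    hash_entry = mem_sharing_hash_lookup(handle);
    if ( hash_entry == NULL )
    {
        shr_unlock();
        return -ESRCH;  /* handle no longer exists; caller must cope */
    }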
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xensource.com
> > http://lists.xensource.com/xen-devel
> >
> >
 		 	   		  

[-- Attachment #1.2: Type: text/html, Size: 22106 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

* abnormal CPU utilization
  2011-01-24 10:56                                             ` MaoXiaoyun
@ 2011-02-16 13:50                                               ` tinnycloud
  0 siblings, 0 replies; 50+ messages in thread
From: tinnycloud @ 2011-02-16 13:50 UTC (permalink / raw)
  To: 'xen devel'; +Cc: george.dunlap


[-- Attachment #1.1: Type: text/plain, Size: 1657 bytes --]

Hi George:

         We have observed abnormal CPU utilization behavior on our HVMs.

         There are two identical physical hosts, with exactly the same Xen and
dom0 kernel installed.

Each host has 24G of memory and 16 CPUs.

On each of the hosts, one HVM is running
(WIN2008, 16G MEM, 8 VCPUs, derived from the same base VHD image).

That is, we have two identical HVMs, HVMA and HVMB, on different hosts.

Some days later, HVMA started to behave quite abnormally: every time I start an
application inside the VM,

its CPU utilization spikes and drops very quickly (up to 80%), so the system
feels very laggy.

Meanwhile, HVMB works fine.

This makes me guess that processes on HVMA consume more CPU than those on HVMB.

Later, I ran superPI tests on both HVMs, calculating pi to 320 million digits.

Here is the result.

                                 HVMA              HVMB
End of initialization time:      2 min 54 sec      12 sec
1st time completion:             5 min 05 sec      49 sec
2nd time completion:             6 min 14 sec      1 min 32 sec
3rd time completion:             7 min 17 sec      2 min 14 sec
4th time completion:             8 min 22 sec      2 min 57 sec
5th time completion:             9 min 48 sec      3 min 40 sec
...                              ...               ...
24th time completion:            29 min 52 sec     16 min 59 sec

         From the data above, the initialization time of superPI on HVMA is much
longer than on HVMB,

         and every pi calculation pass on HVMA is a bit slower than on HVMB.
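
A rough per-pass estimate from the table above, assuming the completion times
are cumulative: HVMA takes about (29 min 52 s - 5 min 05 s) / 23 ~= 65 s per
pass, while HVMB takes about (16 min 59 s - 49 s) / 23 ~= 42 s per pass, so
HVMA is roughly 1.5x slower per pass and about 14x slower during
initialization (2 min 54 s vs 12 s).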

 

         How could this happen? Could you shed some light on it?

         Many thanks.


[-- Attachment #1.2: Type: text/html, Size: 12123 bytes --]

[-- Attachment #2: Type: text/plain, Size: 138 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http://lists.xensource.com/xen-devel

^ permalink raw reply	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2011-02-16 13:50 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-01-06 16:11 [PATCH] mem_sharing: fix race condition of nominate and unshare Jui-Hao Chiang
2011-01-06 16:54 ` Tim Deegan
2011-01-07  3:54   ` Jui-Hao Chiang
2011-01-07  6:02     ` Jui-Hao Chiang
2011-01-07 16:09       ` Tim Deegan
2011-01-10  4:57         ` Jui-Hao Chiang
2011-01-10  4:58           ` Jui-Hao Chiang
2011-01-10 10:30           ` Tim Deegan
2011-01-11  1:49             ` MaoXiaoyun
2011-01-11  6:32               ` Jui-Hao Chiang
2011-01-11  6:46                 ` MaoXiaoyun
2011-01-12  8:01               ` Fix mem_sharing on Xen 4.0.0 MaoXiaoyun
2011-01-12 11:50             ` Re: [PATCH] mem_sharing: fix race condition of nominate and unshare George Dunlap
2011-01-10  6:48       ` tinnycloud
2011-01-10  8:10         ` Jui-Hao Chiang
2011-01-10 10:34           ` tinnycloud
2011-01-12 10:03           ` Jui-Hao Chiang
2011-01-12 10:54             ` Tim Deegan
2011-01-12 12:39               ` MaoXiaoyun
2011-01-12 14:02                 ` Tim Deegan
2011-01-12 15:21                   ` MaoXiaoyun
2011-01-13  2:26                     ` Jui-Hao Chiang
2011-01-13  4:42                       ` MaoXiaoyun
2011-01-13  9:55                         ` Tim Deegan
2011-01-13  9:24                       ` Tim Deegan
2011-01-13 15:24                         ` Jui-Hao Chiang
2011-01-13 15:53                           ` Tim Deegan
2011-01-14  2:04                             ` MaoXiaoyun
2011-01-14 17:00                               ` Jui-Hao Chiang
2011-01-17  6:00                                 ` MaoXiaoyun
2011-01-17  8:43                                 ` MaoXiaoyun
2011-01-17  9:02                                   ` Jui-Hao Chiang
2011-01-17  9:15                                     ` MaoXiaoyun
2011-01-18  9:42                                     ` MaoXiaoyun
2011-01-18 12:05                                       ` MaoXiaoyun
2011-01-19  4:44                                         ` MaoXiaoyun
2011-01-19  9:11                                           ` George Dunlap
2011-01-19 13:44                                             ` MaoXiaoyun
2011-01-24 10:56                                             ` MaoXiaoyun
2011-02-16 13:50                                               ` abnormal CPU utilization tinnycloud
2011-01-20  7:19                                     ` [PATCH] mem_sharing: fix race condition of nominate and unshare MaoXiaoyun
2011-01-20  9:19                                       ` Tim Deegan
2011-01-20  9:37                                         ` MaoXiaoyun
2011-01-21  6:10                                         ` MaoXiaoyun
2011-01-14  2:04                             ` MaoXiaoyun
2011-01-13  1:48                   ` MaoXiaoyun
2011-01-13 10:21                     ` Tim Deegan
2011-01-07  3:14 ` tinnycloud
2011-01-07  6:45   ` Jui-Hao Chiang
2011-01-07  7:35     ` tinnycloud

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.