* [V2 PATCH 0/3]: Fixup error codes in some p2m paths.
@ 2014-04-10  0:18 Mukesh Rathor
  2014-04-10  0:19 ` [V2 PATCH 1/3] Rename public functions in p2m-pt.c Mukesh Rathor
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Mukesh Rathor @ 2014-04-10  0:18 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, JBeulich, George.Dunlap, tim, eddie.dong, keir.xen,
	jun.nakajima

Hi,

In reference to our discussion on propagating error codes here
 http://osdir.com/ml/general/2014-04/msg13083.html

please find the v2 patches, built on c/s ac0f56a.

thanks
Mukesh

* [V2 PATCH 1/3] Rename public functions in p2m-pt.c
  2014-04-10  0:18 [V2 PATCH 0/3]: Fixup error codes in some p2m paths Mukesh Rathor
@ 2014-04-10  0:19 ` Mukesh Rathor
  2014-04-10  0:19 ` [V2 PATCH 2/3] Rename "set_p2m_entry" to "p2m_set_entry" Mukesh Rathor
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Mukesh Rathor @ 2014-04-10  0:19 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, JBeulich, George.Dunlap, tim, eddie.dong, keir.xen,
	jun.nakajima

This patch renames "public" functions in p2m-pt.c. In addition to
making them more descriptive, it also frees up "p2m_set_entry" name
to be used later.  This patch doesn't change any functionality.
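
Concretely, the renamed functions are the ones wired into the p2m
function-pointer table, so the "p2m_pt_" prefix marks them as the
pagetable-format implementations (a condensed view of the diff below,
not new code):

    void p2m_pt_init(struct p2m_domain *p2m)
    {
        p2m->set_entry = p2m_pt_set_entry;   /* was p2m_set_entry */
        p2m->get_entry = p2m_pt_get_entry;   /* was p2m_gfn_to_mfn */
        p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
        /* ... */
    }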

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/mm/p2m-pt.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index a1d5650..0ad5057 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -277,8 +277,8 @@ p2m_next_level(struct p2m_domain *p2m, mfn_t *table_mfn, void **table,
 
 // Returns 0 on error (out of memory)
 static int
-p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, 
-              unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
+p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
+                 unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
 {
     // XXX -- this might be able to be faster iff current->domain == d
     mfn_t table_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m));
@@ -461,9 +461,9 @@ out:
 }
 
 static mfn_t
-p2m_gfn_to_mfn(struct p2m_domain *p2m, unsigned long gfn, 
-               p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
-               unsigned int *page_order)
+p2m_pt_get_entry(struct p2m_domain *p2m, unsigned long gfn,
+                 p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
+                 unsigned int *page_order)
 {
     mfn_t mfn;
     paddr_t addr = ((paddr_t)gfn) << PAGE_SHIFT;
@@ -604,8 +604,8 @@ pod_retry_l1:
 /* Walk the whole p2m table, changing any entries of the old type
  * to the new type.  This is used in hardware-assisted paging to 
  * quickly enable or diable log-dirty tracking */
-static void p2m_change_type_global(struct p2m_domain *p2m,
-                                   p2m_type_t ot, p2m_type_t nt)
+static void p2m_pt_change_entry_type_global(struct p2m_domain *p2m,
+                                            p2m_type_t ot, p2m_type_t nt)
 {
     unsigned long mfn, gfn, flags;
     l1_pgentry_t l1e_content;
@@ -870,9 +870,9 @@ long p2m_pt_audit_p2m(struct p2m_domain *p2m)
 /* Set up the p2m function pointers for pagetable format */
 void p2m_pt_init(struct p2m_domain *p2m)
 {
-    p2m->set_entry = p2m_set_entry;
-    p2m->get_entry = p2m_gfn_to_mfn;
-    p2m->change_entry_type_global = p2m_change_type_global;
+    p2m->set_entry = p2m_pt_set_entry;
+    p2m->get_entry = p2m_pt_get_entry;
+    p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
     p2m->write_p2m_entry = paging_write_p2m_entry;
 #if P2M_AUDIT
     p2m->audit_p2m = p2m_pt_audit_p2m;
-- 
1.8.3.1

* [V2 PATCH 2/3] Rename "set_p2m_entry" to "p2m_set_entry"
  2014-04-10  0:18 [V2 PATCH 0/3]: Fixup error codes in some p2m paths Mukesh Rathor
  2014-04-10  0:19 ` [V2 PATCH 1/3] Rename public functions in p2m-pt.c Mukesh Rathor
@ 2014-04-10  0:19 ` Mukesh Rathor
  2014-04-10  0:19 ` [V2 PATCH 3/3] P2M error code propagation Mukesh Rathor
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Mukesh Rathor @ 2014-04-10  0:19 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, JBeulich, George.Dunlap, tim, eddie.dong, keir.xen,
	jun.nakajima

This patch renames set_p2m_entry, defined in arch/x86/mm/p2m.c,
to p2m_set_entry, which makes it consistent with other functions
in that file. It also facilitates changing the function signature
to return an appropriate errno for failure cases. This patch doesn't
change any functionality.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/mm/hap/nested_hap.c |  2 +-
 xen/arch/x86/mm/mem_sharing.c    |  2 +-
 xen/arch/x86/mm/p2m-pod.c        | 32 +++++++++++++++------------
 xen/arch/x86/mm/p2m.c            | 47 ++++++++++++++++++++++------------------
 xen/include/asm-x86/p2m.h        |  2 +-
 5 files changed, 47 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 38e2327..78701dc 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -124,7 +124,7 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
         gfn = (L2_gpa >> PAGE_SHIFT) & mask;
         mfn = _mfn((L0_gpa >> PAGE_SHIFT) & mask);
 
-        rv = set_p2m_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
+        rv = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
     }
 
     p2m_unlock(p2m);
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 237d346..0968681 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1092,7 +1092,7 @@ int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle
         goto err_unlock;
     }
 
-    ret = set_p2m_entry(p2m, cgfn, smfn, PAGE_ORDER_4K, p2m_ram_shared, a);
+    ret = p2m_set_entry(p2m, cgfn, smfn, PAGE_ORDER_4K, p2m_ram_shared, a);
 
     /* Tempted to turn this into an assert */
     if ( !ret )
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index d14565d..3c86255 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -556,7 +556,8 @@ recount:
     {
         /* All PoD: Mark the whole region invalid and tell caller
          * we're done. */
-        set_p2m_entry(p2m, gpfn, _mfn(INVALID_MFN), order, p2m_invalid, p2m->default_access);
+        p2m_set_entry(p2m, gpfn, _mfn(INVALID_MFN), order, p2m_invalid,
+                      p2m->default_access);
         p2m->pod.entry_count-=(1<<order);
         BUG_ON(p2m->pod.entry_count < 0);
         ret = 1;
@@ -589,7 +590,8 @@ recount:
         mfn = p2m->get_entry(p2m, gpfn + i, &t, &a, 0, NULL);
         if ( t == p2m_populate_on_demand )
         {
-            set_p2m_entry(p2m, gpfn + i, _mfn(INVALID_MFN), 0, p2m_invalid, p2m->default_access);
+            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), 0, p2m_invalid,
+                          p2m->default_access);
             p2m->pod.entry_count--;
             BUG_ON(p2m->pod.entry_count < 0);
             pod--;
@@ -602,7 +604,8 @@ recount:
 
             page = mfn_to_page(mfn);
 
-            set_p2m_entry(p2m, gpfn + i, _mfn(INVALID_MFN), 0, p2m_invalid, p2m->default_access);
+            p2m_set_entry(p2m, gpfn + i, _mfn(INVALID_MFN), 0, p2m_invalid,
+                          p2m->default_access);
             set_gpfn_from_mfn(mfn_x(mfn), INVALID_M2P_ENTRY);
 
             p2m_pod_cache_add(p2m, page, 0);
@@ -721,7 +724,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
     }
 
     /* Try to remove the page, restoring old mapping if it fails. */
-    set_p2m_entry(p2m, gfn, _mfn(0), PAGE_ORDER_2M,
+    p2m_set_entry(p2m, gfn, _mfn(0), PAGE_ORDER_2M,
                   p2m_populate_on_demand, p2m->default_access);
 
     /* Make none of the MFNs are used elsewhere... for example, mapped
@@ -779,7 +782,7 @@ p2m_pod_zero_check_superpage(struct p2m_domain *p2m, unsigned long gfn)
 
 out_reset:
     if ( reset )
-        set_p2m_entry(p2m, gfn, mfn0, 9, type0, p2m->default_access);
+        p2m_set_entry(p2m, gfn, mfn0, 9, type0, p2m->default_access);
     
 out:
     gfn_unlock(p2m, gfn, SUPERPAGE_ORDER);
@@ -837,7 +840,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
         }
 
         /* Try to remove the page, restoring old mapping if it fails. */
-        set_p2m_entry(p2m, gfns[i], _mfn(0), PAGE_ORDER_4K,
+        p2m_set_entry(p2m, gfns[i], _mfn(0), PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
 
         /* See if the page was successfully unmapped.  (Allow one refcount
@@ -847,7 +850,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
             unmap_domain_page(map[i]);
             map[i] = NULL;
 
-            set_p2m_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
+            p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
                 types[i], p2m->default_access);
 
             continue;
@@ -870,7 +873,7 @@ p2m_pod_zero_check(struct p2m_domain *p2m, unsigned long *gfns, int count)
          * check timing.  */
         if ( j < PAGE_SIZE/sizeof(*map[i]) )
         {
-            set_p2m_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
+            p2m_set_entry(p2m, gfns[i], mfns[i], PAGE_ORDER_4K,
                 types[i], p2m->default_access);
         }
         else
@@ -1000,15 +1003,15 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
     {
         pod_unlock(p2m);
         gfn_aligned = (gfn >> order) << order;
-        /* Note that we are supposed to call set_p2m_entry() 512 times to 
+        /* Note that we are supposed to call p2m_set_entry() 512 times to
          * split 1GB into 512 2MB pages here. But We only do once here because
-         * set_p2m_entry() should automatically shatter the 1GB page into 
+         * p2m_set_entry() should automatically shatter the 1GB page into
          * 512 2MB pages. The rest of 511 calls are unnecessary.
          *
          * NOTE: In a fine-grained p2m locking scenario this operation
          * may need to promote its locking from gfn->1g superpage
          */
-        set_p2m_entry(p2m, gfn_aligned, _mfn(0), PAGE_ORDER_2M,
+        p2m_set_entry(p2m, gfn_aligned, _mfn(0), PAGE_ORDER_2M,
                       p2m_populate_on_demand, p2m->default_access);
         return 0;
     }
@@ -1037,7 +1040,8 @@ p2m_pod_demand_populate(struct p2m_domain *p2m, unsigned long gfn,
 
     gfn_aligned = (gfn >> order) << order;
 
-    set_p2m_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw, p2m->default_access);
+    p2m_set_entry(p2m, gfn_aligned, mfn, order, p2m_ram_rw,
+                  p2m->default_access);
 
     for( i = 0; i < (1UL << order); i++ )
     {
@@ -1091,7 +1095,7 @@ remap_and_retry:
      * need promoting the gfn lock from gfn->2M superpage */
     gfn_aligned = (gfn>>order)<<order;
     for(i=0; i<(1<<order); i++)
-        set_p2m_entry(p2m, gfn_aligned+i, _mfn(0), PAGE_ORDER_4K,
+        p2m_set_entry(p2m, gfn_aligned+i, _mfn(0), PAGE_ORDER_4K,
                       p2m_populate_on_demand, p2m->default_access);
     if ( tb_init_done )
     {
@@ -1146,7 +1150,7 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
     }
 
     /* Now, actually do the two-way mapping */
-    if ( !set_p2m_entry(p2m, gfn, _mfn(0), order,
+    if ( !p2m_set_entry(p2m, gfn, _mfn(0), order,
                         p2m_populate_on_demand, p2m->default_access) )
         rc = -EINVAL;
     else
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index c38f334..e16f31a 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -309,7 +309,7 @@ struct page_info *get_page_from_gfn_p2m(
 }
 
 
-int set_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, 
+int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
 {
     struct domain *d = p2m->domain;
@@ -417,7 +417,7 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     /* Initialise physmap tables for slot zero. Other code assumes this. */
     p2m->defer_nested_flush = 1;
-    if ( !set_p2m_entry(p2m, 0, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+    if ( !p2m_set_entry(p2m, 0, _mfn(INVALID_MFN), PAGE_ORDER_4K,
                         p2m_invalid, p2m->default_access) )
         goto error;
     p2m->defer_nested_flush = 0;
@@ -496,7 +496,8 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
             ASSERT( !p2m_is_valid(t) || mfn + i == mfn_x(mfn_return) );
         }
     }
-    set_p2m_entry(p2m, gfn, _mfn(INVALID_MFN), page_order, p2m_invalid, p2m->default_access);
+    p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), page_order, p2m_invalid,
+                  p2m->default_access);
 }
 
 void
@@ -635,7 +636,8 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
     /* Now, actually do the two-way mapping */
     if ( mfn_valid(_mfn(mfn)) ) 
     {
-        if ( !set_p2m_entry(p2m, gfn, _mfn(mfn), page_order, t, p2m->default_access) )
+        if ( !p2m_set_entry(p2m, gfn, _mfn(mfn), page_order, t,
+                            p2m->default_access) )
         {
             rc = -EINVAL;
             goto out; /* Failed to update p2m, bail without updating m2p. */
@@ -650,7 +652,7 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
     {
         gdprintk(XENLOG_WARNING, "Adding bad mfn to p2m map (%#lx -> %#lx)\n",
                  gfn, mfn);
-        if ( !set_p2m_entry(p2m, gfn, _mfn(INVALID_MFN), page_order, 
+        if ( !p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), page_order,
                             p2m_invalid, p2m->default_access) )
             rc = -EINVAL;
         else
@@ -685,7 +687,7 @@ p2m_type_t p2m_change_type(struct domain *d, unsigned long gfn,
 
     mfn = p2m->get_entry(p2m, gfn, &pt, &a, 0, NULL);
     if ( pt == ot )
-        set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, nt, p2m->default_access);
+        p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, nt, p2m->default_access);
 
     gfn_unlock(p2m, gfn, 0);
 
@@ -726,7 +728,7 @@ void p2m_change_type_range(struct domain *d,
                 order = PAGE_ORDER_4K;
         }
         if ( pt == ot )
-            set_p2m_entry(p2m, gfn, mfn, order, nt, a);
+            p2m_set_entry(p2m, gfn, mfn, order, nt, a);
         gfn += 1UL << order;
         gfn &= -1UL << order;
         if ( !gfn )
@@ -768,11 +770,12 @@ set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
     }
 
     P2M_DEBUG("set mmio %lx %lx\n", gfn, mfn_x(mfn));
-    rc = set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_mmio_direct, p2m->default_access);
+    rc = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_mmio_direct,
+                       p2m->default_access);
     gfn_unlock(p2m, gfn, 0);
     if ( 0 == rc )
         gdprintk(XENLOG_ERR,
-            "set_mmio_p2m_entry: set_p2m_entry failed! mfn=%08lx\n",
+            "set_mmio_p2m_entry: p2m_set_entry failed! mfn=%08lx\n",
             mfn_x(get_gfn_query_unlocked(p2m->domain, gfn, &ot)));
     return rc;
 }
@@ -799,7 +802,8 @@ clear_mmio_p2m_entry(struct domain *d, unsigned long gfn)
             "clear_mmio_p2m_entry: gfn_to_mfn failed! gfn=%08lx\n", gfn);
         goto out;
     }
-    rc = set_p2m_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K, p2m_invalid, p2m->default_access);
+    rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K, p2m_invalid,
+                       p2m->default_access);
 
 out:
     gfn_unlock(p2m, gfn, 0);
@@ -834,11 +838,12 @@ set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
         set_gpfn_from_mfn(mfn_x(omfn), INVALID_M2P_ENTRY);
 
     P2M_DEBUG("set shared %lx %lx\n", gfn, mfn_x(mfn));
-    rc = set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_shared, p2m->default_access);
+    rc = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_shared,
+                       p2m->default_access);
     gfn_unlock(p2m, gfn, 0);
     if ( 0 == rc )
         gdprintk(XENLOG_ERR,
-            "set_shared_p2m_entry: set_p2m_entry failed! mfn=%08lx\n",
+            "set_shared_p2m_entry: p2m_set_entry failed! mfn=%08lx\n",
             mfn_x(get_gfn_query_unlocked(p2m->domain, gfn, &ot)));
     return rc;
 }
@@ -896,7 +901,7 @@ int p2m_mem_paging_nominate(struct domain *d, unsigned long gfn)
         goto out;
 
     /* Fix p2m entry */
-    set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_out, a);
+    p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_out, a);
     ret = 0;
 
  out:
@@ -961,7 +966,7 @@ int p2m_mem_paging_evict(struct domain *d, unsigned long gfn)
         put_page(page);
 
     /* Remove mapping from p2m table */
-    set_p2m_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K, p2m_ram_paged, a);
+    p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K, p2m_ram_paged, a);
 
     /* Clear content before returning the page to Xen */
     scrub_one_page(page);
@@ -1070,7 +1075,7 @@ void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
         if ( p2mt == p2m_ram_paging_out )
             req.flags |= MEM_EVENT_FLAG_EVICT_FAIL;
 
-        set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_in, a);
+        p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_in, a);
     }
     gfn_unlock(p2m, gfn, 0);
 
@@ -1170,9 +1175,9 @@ int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
     /* Make the page already guest-accessible. If the pager still has a
      * pending resume operation, it will be idempotent p2m entry-wise,
      * but will unpause the vcpu */
-    set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, 
-                    paging_mode_log_dirty(d) ? p2m_ram_logdirty : 
-                    p2m_ram_rw, a);
+    p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
+                  paging_mode_log_dirty(d) ? p2m_ram_logdirty :
+                  p2m_ram_rw, a);
     set_gpfn_from_mfn(mfn_x(mfn), gfn);
 
     if ( !page_extant )
@@ -1222,9 +1227,9 @@ void p2m_mem_paging_resume(struct domain *d)
              * were nominated but not evicted */
             if ( mfn_valid(mfn) && (p2mt == p2m_ram_paging_in) )
             {
-                set_p2m_entry(p2m, rsp.gfn, mfn, PAGE_ORDER_4K, 
-                                paging_mode_log_dirty(d) ? p2m_ram_logdirty : 
-                                p2m_ram_rw, a);
+                p2m_set_entry(p2m, rsp.gfn, mfn, PAGE_ORDER_4K,
+                              paging_mode_log_dirty(d) ? p2m_ram_logdirty :
+                              p2m_ram_rw, a);
                 set_gpfn_from_mfn(mfn_x(mfn), rsp.gfn);
             }
             gfn_unlock(p2m, rsp.gfn, 0);
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index d644f82..0fdfbda 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -593,7 +593,7 @@ void p2m_free_ptp(struct p2m_domain *p2m, struct page_info *pg);
 
 /* Directly set a p2m entry: only for use by p2m code. Does not need
  * a call to put_gfn afterwards/ */
-int set_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, 
+int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma);
 
 /* Set up function pointers for PT implementation: only for use by p2m code */
-- 
1.8.3.1

* [V2 PATCH 3/3] P2M error code propagation
  2014-04-10  0:18 [V2 PATCH 0/3]: Fixup error codes in some p2m paths Mukesh Rathor
  2014-04-10  0:19 ` [V2 PATCH 1/3] Rename public functions in p2m-pt.c Mukesh Rathor
  2014-04-10  0:19 ` [V2 PATCH 2/3] Rename "set_p2m_entry" to "p2m_set_entry" Mukesh Rathor
@ 2014-04-10  0:19 ` Mukesh Rathor
  2014-04-10  7:23   ` Jan Beulich
  2014-04-10  6:57 ` [V2 PATCH 0/3]: Fixup error codes in some p2m paths Jan Beulich
  2014-04-13 20:43 ` Tim Deegan
  4 siblings, 1 reply; 8+ messages in thread
From: Mukesh Rathor @ 2014-04-10  0:19 UTC (permalink / raw)
  To: xen-devel
  Cc: kevin.tian, JBeulich, George.Dunlap, tim, eddie.dong, keir.xen,
	jun.nakajima

Because some of the leaf p2m functions return 0 for failure and
TRUE for success, the real errno is lost. We change some of those
functions to return a proper -errno. Also, any code in the immediate
vicinity that violates coding style is fixed up.

This patch doesn't change any functionality.
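
To illustrate, the conversion follows this general shape (a sketch
with a made-up callee name, not lines quoted from the patch):

    /* before: boolean-style return; the reason for failure is lost */
    if ( !do_update(p2m, gfn) )         /* 0 meant failure */
        return -EINVAL;                 /* caller must invent an error code */

    /* after: 0 for success, -errno for failure */
    rc = do_update(p2m, gfn);
    if ( rc )                           /* rc carries -ENOMEM, -ENOENT, ... */
        return rc;                      /* the real errno is propagated */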

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/domctl.c            |  21 +++++---
 xen/arch/x86/mm/hap/nested_hap.c |  11 +++--
 xen/arch/x86/mm/mem_sharing.c    |  10 ++--
 xen/arch/x86/mm/p2m-ept.c        |  17 ++++---
 xen/arch/x86/mm/p2m-pod.c        |   7 ++-
 xen/arch/x86/mm/p2m-pt.c         |  53 ++++++++++----------
 xen/arch/x86/mm/p2m.c            | 104 +++++++++++++++++++--------------------
 7 files changed, 116 insertions(+), 107 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 26635ff..9a1013c 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -671,13 +671,12 @@ long arch_do_domctl(
             if ( !ret && paging_mode_translate(d) )
             {
                 for ( i = 0; !ret && i < nr_mfns; i++ )
-                    if ( !set_mmio_p2m_entry(d, gfn + i, _mfn(mfn + i)) )
-                        ret = -EIO;
+                    ret = set_mmio_p2m_entry(d, gfn + i, _mfn(mfn + i));
                 if ( ret )
                 {
                     printk(XENLOG_G_WARNING
-                           "memory_map:fail: dom%d gfn=%lx mfn=%lx\n",
-                           d->domain_id, gfn + i, mfn + i);
+                           "memory_map:fail: dom%d gfn=%lx mfn=%lx ret:%ld\n",
+                           d->domain_id, gfn + i, mfn + i, ret);
                     while ( i-- )
                         clear_mmio_p2m_entry(d, gfn + i);
                     if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
@@ -690,20 +689,26 @@ long arch_do_domctl(
         }
         else
         {
+            int tmp_rc = 0;
+
             printk(XENLOG_G_INFO
                    "memory_map:remove: dom%d gfn=%lx mfn=%lx nr=%lx\n",
                    d->domain_id, gfn, mfn, nr_mfns);
 
             if ( paging_mode_translate(d) )
                 for ( i = 0; i < nr_mfns; i++ )
-                    add |= !clear_mmio_p2m_entry(d, gfn + i);
+                {
+                    ret = clear_mmio_p2m_entry(d, gfn + i);
+                    if ( ret )
+                        tmp_rc = ret;
+                }
             ret = iomem_deny_access(d, mfn, mfn + nr_mfns - 1);
-            if ( !ret && add )
-                ret = -EIO;
+            if ( !ret && tmp_rc )
+                ret = tmp_rc;
             if ( ret && is_hardware_domain(current->domain) )
                 printk(XENLOG_ERR
                        "memory_map: error %ld %s dom%d access to [%lx,%lx]\n",
-                       ret, add ? "removing" : "denying", d->domain_id,
+                       ret, tmp_rc ? "removing" : "denying", d->domain_id,
                        mfn, mfn + nr_mfns - 1);
         }
     }
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 78701dc..5c41725 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -103,7 +103,7 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
                   paddr_t L2_gpa, paddr_t L0_gpa,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
 {
-    int rv = 1;
+    int rc = 0;
     ASSERT(p2m);
     ASSERT(p2m->set_entry);
 
@@ -124,15 +124,16 @@ nestedhap_fix_p2m(struct vcpu *v, struct p2m_domain *p2m,
         gfn = (L2_gpa >> PAGE_SHIFT) & mask;
         mfn = _mfn((L0_gpa >> PAGE_SHIFT) & mask);
 
-        rv = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
+        rc = p2m_set_entry(p2m, gfn, mfn, page_order, p2mt, p2ma);
     }
 
     p2m_unlock(p2m);
 
-    if (rv == 0) {
+    if ( rc )
+    {
         gdprintk(XENLOG_ERR,
-		"failed to set entry for %#"PRIx64" -> %#"PRIx64"\n",
-		L2_gpa, L0_gpa);
+                 "failed to set entry for %#"PRIx64" -> %#"PRIx64" rc:%d\n",
+                 L2_gpa, L0_gpa, rc);
         BUG();
     }
 }
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 0968681..61f0b22 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1021,7 +1021,7 @@ int mem_sharing_share_pages(struct domain *sd, unsigned long sgfn, shr_handle_t
         put_page_and_type(cpage);
         d = get_domain_by_id(gfn->domain);
         BUG_ON(!d);
-        BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn) == 0);
+        BUG_ON(set_shared_p2m_entry(d, gfn->gfn, smfn));
         put_domain(d);
     }
     ASSERT(list_empty(&cpage->sharing->gfns));
@@ -1095,13 +1095,11 @@ int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle
     ret = p2m_set_entry(p2m, cgfn, smfn, PAGE_ORDER_4K, p2m_ram_shared, a);
 
     /* Tempted to turn this into an assert */
-    if ( !ret )
+    if ( ret )
     {
-        ret = -ENOENT;
         mem_sharing_gfn_destroy(spage, cd, gfn_info);
         put_page_and_type(spage);
     } else {
-        ret = 0;
         /* There is a chance we're plugging a hole where a paged out page was */
         if ( p2m_is_paging(cmfn_type) && (cmfn_type != p2m_ram_paging_out) )
         {
@@ -1232,7 +1230,7 @@ int __mem_sharing_unshare_page(struct domain *d,
     unmap_domain_page(s);
     unmap_domain_page(t);
 
-    BUG_ON(set_shared_p2m_entry(d, gfn, page_to_mfn(page)) == 0);
+    BUG_ON(set_shared_p2m_entry(d, gfn, page_to_mfn(page)));
     mem_sharing_gfn_destroy(old_page, d, gfn_info);
     mem_sharing_page_unlock(old_page);
     put_page_and_type(old_page);
@@ -1288,7 +1286,7 @@ int relinquish_shared_pages(struct domain *d)
              * we hold the p2m lock. */
             set_rc = p2m->set_entry(p2m, gfn, _mfn(0), PAGE_ORDER_4K,
                                     p2m_invalid, p2m_access_rwx);
-            ASSERT(set_rc != 0);
+            ASSERT(set_rc == 0);
             count += 0x10;
         }
         else
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 99a1084..a219f8b 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -273,6 +273,8 @@ static int ept_next_level(struct p2m_domain *p2m, bool_t read_only,
 /*
  * ept_set_entry() computes 'need_modify_vtd_table' for itself,
  * by observing whether any gfn->mfn translations are modified.
+ *
+ * Returns: 0 for success, -errno for failure
  */
 static int
 ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn, 
@@ -281,7 +283,7 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     ept_entry_t *table, *ept_entry = NULL;
     unsigned long gfn_remainder = gfn;
     int i, target = order / EPT_TABLE_ORDER;
-    int rv = 0;
+    int rc = 0;
     int ret = 0;
     bool_t direct_mmio = (p2mt == p2m_mmio_direct);
     uint8_t ipat = 0;
@@ -302,7 +304,7 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     if ( ((gfn | mfn_x(mfn)) & ((1UL << order) - 1)) ||
          ((u64)gfn >> ((ept_get_wl(ept) + 1) * EPT_TABLE_ORDER)) ||
          (order % EPT_TABLE_ORDER) )
-        return 0;
+        return -EINVAL;
 
     ASSERT((target == 2 && hvm_hap_has_1gb()) ||
            (target == 1 && hvm_hap_has_2mb()) ||
@@ -314,7 +316,10 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
     {
         ret = ept_next_level(p2m, 0, &table, &gfn_remainder, i);
         if ( !ret )
+        {
+            rc = -ENOENT;
             goto out;
+        }
         else if ( ret != GUEST_TABLE_NORMAL_PAGE )
             break;
     }
@@ -386,6 +391,7 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         if ( !ept_split_super_page(p2m, &split_ept_entry, i, target) )
         {
             ept_free_entry(p2m, &split_ept_entry, i);
+            rc = -ENOMEM;
             goto out;
         }
 
@@ -426,9 +432,6 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
          (gfn + (1UL << order) - 1 > p2m->max_mapped_pfn) )
         p2m->max_mapped_pfn = gfn + (1UL << order) - 1;
 
-    /* Success */
-    rv = 1;
-
 out:
     unmap_domain_page(table);
 
@@ -436,7 +439,7 @@ out:
         ept_sync_domain(p2m);
 
     /* For non-nested p2m, may need to change VT-d page table.*/
-    if ( rv && !p2m_is_nestedp2m(p2m) && need_iommu(d) &&
+    if ( rc == 0 && !p2m_is_nestedp2m(p2m) && need_iommu(d) &&
          need_modify_vtd_table )
     {
         if ( iommu_hap_pt_share )
@@ -460,7 +463,7 @@ out:
     if ( is_epte_present(&old_entry) )
         ept_free_entry(p2m, &old_entry, target);
 
-    return rv;
+    return rc;
 }
 
 /* Read ept p2m entries */
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 3c86255..68db937 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1150,10 +1150,9 @@ guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
     }
 
     /* Now, actually do the two-way mapping */
-    if ( !p2m_set_entry(p2m, gfn, _mfn(0), order,
-                        p2m_populate_on_demand, p2m->default_access) )
-        rc = -EINVAL;
-    else
+    rc = p2m_set_entry(p2m, gfn, _mfn(0), order, p2m_populate_on_demand,
+                       p2m->default_access);
+    if ( rc == 0 )
     {
         pod_lock(p2m);
         p2m->pod.entry_count += 1 << order;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 0ad5057..7b07240 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -154,6 +154,7 @@ static void p2m_add_iommu_flags(l1_pgentry_t *p2m_entry,
         l1e_add_flags(*p2m_entry, iommu_nlevel_to_flags(nlevel, flags));
 }
 
+/* Returns: 0 for success, -errno for failure */
 static int
 p2m_next_level(struct p2m_domain *p2m, mfn_t *table_mfn, void **table,
                unsigned long *gfn_remainder, unsigned long gfn, u32 shift,
@@ -167,7 +168,7 @@ p2m_next_level(struct p2m_domain *p2m, mfn_t *table_mfn, void **table,
 
     if ( !(p2m_entry = p2m_find_entry(*table, gfn_remainder, gfn,
                                       shift, max)) )
-        return 0;
+        return -ENOENT;
 
     /* PoD/paging: Not present doesn't imply empty. */
     if ( !l1e_get_flags(*p2m_entry) )
@@ -176,7 +177,7 @@ p2m_next_level(struct p2m_domain *p2m, mfn_t *table_mfn, void **table,
 
         pg = p2m_alloc_ptp(p2m, type);
         if ( pg == NULL )
-            return 0;
+            return -ENOMEM;
 
         new_entry = l1e_from_pfn(mfn_x(page_to_mfn(pg)),
                                  __PAGE_HYPERVISOR | _PAGE_USER);
@@ -210,7 +211,7 @@ p2m_next_level(struct p2m_domain *p2m, mfn_t *table_mfn, void **table,
 
         pg = p2m_alloc_ptp(p2m, PGT_l2_page_table);
         if ( pg == NULL )
-            return 0;
+            return -ENOMEM;
 
         flags = l1e_get_flags(*p2m_entry);
         pfn = l1e_get_pfn(*p2m_entry);
@@ -239,7 +240,7 @@ p2m_next_level(struct p2m_domain *p2m, mfn_t *table_mfn, void **table,
 
         pg = p2m_alloc_ptp(p2m, PGT_l1_page_table);
         if ( pg == NULL )
-            return 0;
+            return -ENOMEM;
 
         /* New splintered mappings inherit the flags of the old superpage, 
          * with a little reorganisation for the _PAGE_PSE_PAT bit. */
@@ -272,23 +273,23 @@ p2m_next_level(struct p2m_domain *p2m, mfn_t *table_mfn, void **table,
     unmap_domain_page(*table);
     *table = next;
 
-    return 1;
+    return 0;
 }
 
-// Returns 0 on error (out of memory)
+/* Returns: 0 for success, -errno for failure */
 static int
 p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
                  unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
 {
-    // XXX -- this might be able to be faster iff current->domain == d
+    /* XXX -- this might be able to be faster iff current->domain == d */
     mfn_t table_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m));
-    void *table =map_domain_page(mfn_x(table_mfn));
+    void *table = map_domain_page(mfn_x(table_mfn));
     unsigned long i, gfn_remainder = gfn;
     l1_pgentry_t *p2m_entry;
     l1_pgentry_t entry_content;
     l2_pgentry_t l2e_content;
     l3_pgentry_t l3e_content;
-    int rv=0;
+    int rc;
     unsigned int iommu_pte_flags = (p2mt == p2m_ram_rw) ?
                                    IOMMUF_readable|IOMMUF_writable:
                                    0; 
@@ -311,9 +312,10 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         __trace_var(TRC_MEM_SET_P2M_ENTRY, 0, sizeof(t), &t);
     }
 
-    if ( !p2m_next_level(p2m, &table_mfn, &table, &gfn_remainder, gfn,
-                         L4_PAGETABLE_SHIFT - PAGE_SHIFT,
-                         L4_PAGETABLE_ENTRIES, PGT_l3_page_table) )
+    rc = p2m_next_level(p2m, &table_mfn, &table, &gfn_remainder, gfn,
+                        L4_PAGETABLE_SHIFT - PAGE_SHIFT,
+                        L4_PAGETABLE_ENTRIES, PGT_l3_page_table);
+    if ( rc )
         goto out;
 
     /*
@@ -354,17 +356,21 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         if ( l1e_get_flags(old_entry) & _PAGE_PRESENT )
             p2m_free_entry(p2m, &old_entry, page_order);
     }
-    else if ( !p2m_next_level(p2m, &table_mfn, &table, &gfn_remainder, gfn,
-                              L3_PAGETABLE_SHIFT - PAGE_SHIFT,
-                              L3_PAGETABLE_ENTRIES,
-                              PGT_l2_page_table) )
-        goto out;
+    else 
+    {
+        rc = p2m_next_level(p2m, &table_mfn, &table, &gfn_remainder,
+                            gfn, L3_PAGETABLE_SHIFT - PAGE_SHIFT,
+                            L3_PAGETABLE_ENTRIES, PGT_l2_page_table);
+        if ( rc )
+            goto out;
+    }
 
     if ( page_order == PAGE_ORDER_4K )
     {
-        if ( !p2m_next_level(p2m, &table_mfn, &table, &gfn_remainder, gfn,
-                             L2_PAGETABLE_SHIFT - PAGE_SHIFT,
-                             L2_PAGETABLE_ENTRIES, PGT_l1_page_table) )
+        rc = p2m_next_level(p2m, &table_mfn, &table, &gfn_remainder, gfn,
+                            L2_PAGETABLE_SHIFT - PAGE_SHIFT,
+                            L2_PAGETABLE_ENTRIES, PGT_l1_page_table);
+        if ( rc )
             goto out;
 
         p2m_entry = p2m_find_entry(table, &gfn_remainder, gfn,
@@ -452,12 +458,9 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         }
     }
 
-    /* Success */
-    rv = 1;
-
-out:
+ out:
     unmap_domain_page(table);
-    return rv;
+    return rc;
 }
 
 static mfn_t
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index e16f31a..f3476da 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -308,14 +308,14 @@ struct page_info *get_page_from_gfn_p2m(
     return page;
 }
 
-
+/* Returns: 0 for success, -errno for failure */
 int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
 {
     struct domain *d = p2m->domain;
     unsigned long todo = 1ul << page_order;
     unsigned int order;
-    int rc = 1;
+    int set_rc, rc = 0;
 
     ASSERT(gfn_locked_by_me(p2m, gfn));
 
@@ -329,8 +329,10 @@ int p2m_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         else
             order = 0;
 
-        if ( !p2m->set_entry(p2m, gfn, mfn, order, p2mt, p2ma) )
-            rc = 0;
+        set_rc = p2m->set_entry(p2m, gfn, mfn, order, p2mt, p2ma);
+        if ( set_rc )
+            rc = set_rc;
+
         gfn += 1ul << order;
         if ( mfn_x(mfn) != INVALID_MFN )
             mfn = _mfn(mfn_x(mfn) + (1ul << order));
@@ -370,17 +372,19 @@ void p2m_free_ptp(struct p2m_domain *p2m, struct page_info *pg)
     return;
 }
 
-// Allocate a new p2m table for a domain.
-//
-// The structure of the p2m table is that of a pagetable for xen (i.e. it is
-// controlled by CONFIG_PAGING_LEVELS).
-//
-// Returns 0 for success or -errno.
-//
+/*
+ * Allocate a new p2m table for a domain.
+ *
+ * The structure of the p2m table is that of a pagetable for xen (i.e. it is
+ * controlled by CONFIG_PAGING_LEVELS).
+ *
+ * Returns 0 for success, -errno for failure.
+ */
 int p2m_alloc_table(struct p2m_domain *p2m)
 {
     struct page_info *p2m_top;
     struct domain *d = p2m->domain;
+    int rc = 0;
 
     p2m_lock(p2m);
 
@@ -417,8 +421,9 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     /* Initialise physmap tables for slot zero. Other code assumes this. */
     p2m->defer_nested_flush = 1;
-    if ( !p2m_set_entry(p2m, 0, _mfn(INVALID_MFN), PAGE_ORDER_4K,
-                        p2m_invalid, p2m->default_access) )
+    rc = p2m_set_entry(p2m, 0, _mfn(INVALID_MFN), PAGE_ORDER_4K,
+                       p2m_invalid, p2m->default_access);
+    if ( rc )
         goto error;
     p2m->defer_nested_flush = 0;
 
@@ -428,9 +433,9 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     spin_unlock(&p2m->domain->page_alloc_lock);
  error:
-    P2M_PRINTK("failed to initialise p2m table for slot zero\n");
+    P2M_PRINTK("failed to initialise p2m table for slot zero. rc:%d\n", rc);
     p2m_unlock(p2m);
-    return -ENOMEM;
+    return rc;
 }
 
 void p2m_teardown(struct p2m_domain *p2m)
@@ -636,12 +641,11 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
     /* Now, actually do the two-way mapping */
     if ( mfn_valid(_mfn(mfn)) ) 
     {
-        if ( !p2m_set_entry(p2m, gfn, _mfn(mfn), page_order, t,
-                            p2m->default_access) )
-        {
-            rc = -EINVAL;
+        rc = p2m_set_entry(p2m, gfn, _mfn(mfn), page_order, t,
+                           p2m->default_access);
+        if ( rc )
             goto out; /* Failed to update p2m, bail without updating m2p. */
-        }
+
         if ( !p2m_is_grant(t) )
         {
             for ( i = 0; i < (1UL << page_order); i++ )
@@ -652,10 +656,9 @@ guest_physmap_add_entry(struct domain *d, unsigned long gfn,
     {
         gdprintk(XENLOG_WARNING, "Adding bad mfn to p2m map (%#lx -> %#lx)\n",
                  gfn, mfn);
-        if ( !p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), page_order,
-                            p2m_invalid, p2m->default_access) )
-            rc = -EINVAL;
-        else
+        rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), page_order,
+                           p2m_invalid, p2m->default_access);
+        if ( rc == 0 )
         {
             pod_lock(p2m);
             p2m->pod.entry_count -= pod_count;
@@ -742,9 +745,8 @@ void p2m_change_type_range(struct domain *d,
 }
 
 
-
-int
-set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
+/* Returns: 0 for success, -errno for failure */
+int set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
 {
     int rc = 0;
     p2m_access_t a;
@@ -753,7 +755,7 @@ set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     if ( !paging_mode_translate(d) )
-        return 0;
+        return -EIO;
 
     gfn_lock(p2m, gfn, 0);
     omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, NULL);
@@ -761,7 +763,7 @@ set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
     {
         p2m_unlock(p2m);
         domain_crash(d);
-        return 0;
+        return -ENOENT;
     }
     else if ( p2m_is_ram(ot) )
     {
@@ -773,24 +775,24 @@ set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
     rc = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_mmio_direct,
                        p2m->default_access);
     gfn_unlock(p2m, gfn, 0);
-    if ( 0 == rc )
+    if ( rc )
         gdprintk(XENLOG_ERR,
-            "set_mmio_p2m_entry: p2m_set_entry failed! mfn=%08lx\n",
-            mfn_x(get_gfn_query_unlocked(p2m->domain, gfn, &ot)));
+                 "p2m_set_entry failed! mfn=%08lx rc:%d\n",
+                 mfn_x(get_gfn_query_unlocked(p2m->domain, gfn, &ot)), rc);
     return rc;
 }
 
-int
-clear_mmio_p2m_entry(struct domain *d, unsigned long gfn)
+/* Returns: 0 for success, -errno for failure */
+int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn)
 {
-    int rc = 0;
+    int rc = -EINVAL;
     mfn_t mfn;
     p2m_access_t a;
     p2m_type_t t;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     if ( !paging_mode_translate(d) )
-        return 0;
+        return -EIO;
 
     gfn_lock(p2m, gfn, 0);
     mfn = p2m->get_entry(p2m, gfn, &t, &a, 0, NULL);
@@ -799,20 +801,20 @@ clear_mmio_p2m_entry(struct domain *d, unsigned long gfn)
     if ( (INVALID_MFN == mfn_x(mfn)) || (t != p2m_mmio_direct) )
     {
         gdprintk(XENLOG_ERR,
-            "clear_mmio_p2m_entry: gfn_to_mfn failed! gfn=%08lx\n", gfn);
+                 "gfn_to_mfn failed! gfn=%08lx type:%d\n", gfn, t);
         goto out;
     }
     rc = p2m_set_entry(p2m, gfn, _mfn(INVALID_MFN), PAGE_ORDER_4K, p2m_invalid,
                        p2m->default_access);
 
-out:
+ out:
     gfn_unlock(p2m, gfn, 0);
 
     return rc;
 }
 
-int
-set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
+/* Returns: 0 for success, -errno for failure */
+int set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc = 0;
@@ -822,7 +824,7 @@ set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
     unsigned long pg_type;
 
     if ( !paging_mode_translate(p2m->domain) )
-        return 0;
+        return -EIO;
 
     gfn_lock(p2m, gfn, 0);
     omfn = p2m->get_entry(p2m, gfn, &ot, &a, 0, NULL);
@@ -841,10 +843,10 @@ set_shared_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
     rc = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_shared,
                        p2m->default_access);
     gfn_unlock(p2m, gfn, 0);
-    if ( 0 == rc )
+    if ( rc )
         gdprintk(XENLOG_ERR,
-            "set_shared_p2m_entry: p2m_set_entry failed! mfn=%08lx\n",
-            mfn_x(get_gfn_query_unlocked(p2m->domain, gfn, &ot)));
+                 "p2m_set_entry failed! mfn=%08lx rc:%d\n",
+                 mfn_x(get_gfn_query_unlocked(p2m->domain, gfn, &ot)), rc);
     return rc;
 }
 
@@ -1263,7 +1265,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla,
     if ( access_w && p2ma == p2m_access_rx2rw ) 
     {
         rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2mt, p2m_access_rw);
-        ASSERT(rc);
+        ASSERT(rc == 0);
         gfn_unlock(p2m, gfn, 0);
         return 1;
     }
@@ -1272,7 +1274,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla,
         ASSERT(access_w || access_r || access_x);
         rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
                             p2mt, p2m_access_rwx);
-        ASSERT(rc);
+        ASSERT(rc == 0);
     }
     gfn_unlock(p2m, gfn, 0);
 
@@ -1299,7 +1301,7 @@ bool_t p2m_mem_access_check(paddr_t gpa, bool_t gla_valid, unsigned long gla,
                  * gfn locked and just did a successful get_entry(). */
                 rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
                                     p2mt, p2m_access_rwx);
-                ASSERT(rc);
+                ASSERT(rc == 0);
             }
             gfn_unlock(p2m, gfn, 0);
             return 1;
@@ -1401,13 +1403,11 @@ long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
     for ( pfn += start; nr > start; ++pfn )
     {
         mfn = p2m->get_entry(p2m, pfn, &t, &_a, 0, NULL);
-        if ( p2m->set_entry(p2m, pfn, mfn, PAGE_ORDER_4K, t, a) == 0 )
-        {
-            rc = -ENOMEM;
+        rc = p2m->set_entry(p2m, pfn, mfn, PAGE_ORDER_4K, t, a);
+        if ( rc )
             break;
-        }
 
-        /* Check for continuation if it's not the last interation. */
+        /* Check for continuation if it's not the last iteration. */
         if ( nr > ++start && !(start & mask) && hypercall_preempt_check() )
         {
             rc = start;
-- 
1.8.3.1

* Re: [V2 PATCH 0/3]: Fixup error codes in some p2m paths.
  2014-04-10  0:18 [V2 PATCH 0/3]: Fixup error codes in some p2m paths Mukesh Rathor
                   ` (2 preceding siblings ...)
  2014-04-10  0:19 ` [V2 PATCH 3/3] P2M error code propagation Mukesh Rathor
@ 2014-04-10  6:57 ` Jan Beulich
  2014-04-10 22:04   ` Mukesh Rathor
  2014-04-13 20:43 ` Tim Deegan
  4 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2014-04-10  6:57 UTC (permalink / raw)
  To: Mukesh Rathor
  Cc: kevin.tian, George.Dunlap, tim, eddie.dong, keir.xen,
	jun.nakajima, xen-devel

>>> On 10.04.14 at 02:18, <mukesh.rathor@oracle.com> wrote:
> In reference to our discussion on propagating error codes here
>  http://osdir.com/ml/general/2014-04/msg13083.html 
> 
> please find v2 patches. Built on c/s: ac0f56a.

No information here or in the individual patches as to what changed
from v1?

Jan

* Re: [V2 PATCH 3/3] P2M error code propagation
  2014-04-10  0:19 ` [V2 PATCH 3/3] P2M error code propagation Mukesh Rathor
@ 2014-04-10  7:23   ` Jan Beulich
  0 siblings, 0 replies; 8+ messages in thread
From: Jan Beulich @ 2014-04-10  7:23 UTC (permalink / raw)
  To: Mukesh Rathor
  Cc: kevin.tian, George.Dunlap, tim, eddie.dong, keir.xen,
	jun.nakajima, xen-devel

>>> On 10.04.14 at 02:19, <mukesh.rathor@oracle.com> wrote:
>              ret = iomem_deny_access(d, mfn, mfn + nr_mfns - 1);
> -            if ( !ret && add )
> -                ret = -EIO;
> +            if ( !ret && tmp_rc )
> +                ret = tmp_rc;

This could be just "if ( !ret )"; no need to re-submit just because of this
though.
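
That is, since tmp_rc is zero unless one of the clear_mmio_p2m_entry()
calls failed, assigning it once ret is known to be zero is equivalent
(a minimal sketch of the suggested simplification):

            ret = iomem_deny_access(d, mfn, mfn + nr_mfns - 1);
            if ( !ret )
                ret = tmp_rc;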

>  static int
>  p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
>                   unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
>  {
> -    // XXX -- this might be able to be faster iff current->domain == d
> +    /* XXX -- this might be able to be faster iff current->domain == d */
>      mfn_t table_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m));
> -    void *table =map_domain_page(mfn_x(table_mfn));
> +    void *table = map_domain_page(mfn_x(table_mfn));

While I'm generally fine with you doing cleanup like this as you go, I
think this should have been done in the mechanical (renaming) patch
rather than in this one, which, while not intended to change
functionality, needs pretty close looking at to verify that it really
doesn't, and hence would benefit from carrying just the changes that
are needed (that said, adjusting style when you have to touch a line
anyway is still fine).

As the comments are only of stylistic nature, nevertheless
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan

* Re: [V2 PATCH 0/3]: Fixup error codes in some p2m paths.
  2014-04-10  6:57 ` [V2 PATCH 0/3]: Fixup error codes in some p2m paths Jan Beulich
@ 2014-04-10 22:04   ` Mukesh Rathor
  0 siblings, 0 replies; 8+ messages in thread
From: Mukesh Rathor @ 2014-04-10 22:04 UTC (permalink / raw)
  To: Jan Beulich
  Cc: kevin.tian, George.Dunlap, tim, eddie.dong, keir.xen,
	jun.nakajima, xen-devel

On Thu, 10 Apr 2014 07:57:56 +0100
"Jan Beulich" <JBeulich@suse.com> wrote:

> >>> On 10.04.14 at 02:18, <mukesh.rathor@oracle.com> wrote:
> > In reference to our discussion on propagating error codes here
> >  http://osdir.com/ml/general/2014-04/msg13083.html 
> > 
> > please find v2 patches. Built on c/s: ac0f56a.
> 
> No information here or in the individual patches as to what changed
> from v1?
> 
> Jan
> 

Changes in V2:

Patch 1: 
   - also rename p2m_pt_gfn_to_mfn to p2m_pt_get_entry
     and p2m_pt_change_type_global to p2m_pt_change_entry_type_global.


Patch 2: No change

Patch 3: 
  - split if ( (rc = f()) ) into two lines (see the sketch below)
  - fix interation to say iteration
  - change EPERM to EIO 
  - fix logic in arch_do_domctl() to make it functionally the same as
    it has been
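
For the first Patch 3 item, the split looks like this (a before/after
sketch using the changelog's placeholder f(), not lines quoted from
the patch):

    /* before: assignment buried inside the condition */
    if ( (rc = f(...)) )
        goto out;

    /* after: assignment on its own line, then the test */
    rc = f(...);
    if ( rc )
        goto out;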


Mukesh

* Re: [V2 PATCH 0/3]: Fixup error codes in some p2m paths.
  2014-04-10  0:18 [V2 PATCH 0/3]: Fixup error codes in some p2m paths Mukesh Rathor
                   ` (3 preceding siblings ...)
  2014-04-10  6:57 ` [V2 PATCH 0/3]: Fixup error codes in some p2m paths Jan Beulich
@ 2014-04-13 20:43 ` Tim Deegan
  4 siblings, 0 replies; 8+ messages in thread
From: Tim Deegan @ 2014-04-13 20:43 UTC (permalink / raw)
  To: Mukesh Rathor
  Cc: kevin.tian, jun.nakajima, George.Dunlap, eddie.dong, keir.xen,
	JBeulich, xen-devel

At 17:18 -0700 on 09 Apr (1397060339), Mukesh Rathor wrote:
> Hi,
> 
> In reference to our discussion on propogating error codes here
>  http://osdir.com/ml/general/2014-04/msg13083.html
> 
> please find v2 patches. Built on c/s: ac0f56a.
> 

Acked-by: Tim Deegan <tim@xen.org>

I'm AFK this week; I'll commit these next week unless someone else
does it first.

Cheers,

Tim.
