* [PATCH 0/4] x86: allow for more memory to be used
@ 2015-01-28  8:00 Jan Beulich
  2015-01-28  8:10 ` [PATCH 1/4] x86: skip further initialization for idle domains Jan Beulich
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Jan Beulich @ 2015-01-28  8:00 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Keir Fraser, Tim Deegan

1: skip further initialization for idle domains
2: mm: allow for building without shadow mode support
3: provide build time option to support up to 123Tb of memory
4: shadow: make some log-dirty handling functions static

Signed-off-by: Jan Beulich <jbeulich@suse.com>


* [PATCH 1/4] x86: skip further initialization for idle domains
  2015-01-28  8:00 [PATCH 0/4] x86: allow for more memory to be used Jan Beulich
@ 2015-01-28  8:10 ` Jan Beulich
  2015-01-28 11:00   ` Andrew Cooper
  2015-01-28  8:11 ` [PATCH 2/4] x86/mm: allow for building without shadow mode support Jan Beulich
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2015-01-28  8:10 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Keir Fraser, Tim Deegan


While in the end not really found necessary, early versions of the
patches to follow pointed out that we needlessly set up paging for idle
domains. Arranging for that to be skipped made me notice that we can at
once skip vMCE setup for them. Leverage the adjustment to further
re-arrange the way FPU setup gets skipped.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -427,12 +427,15 @@ int vcpu_initialise(struct vcpu *v)
     if ( rc )
         return rc;
 
-    paging_vcpu_init(v);
+    if ( !is_idle_domain(d) )
+    {
+        paging_vcpu_init(v);
 
-    if ( (rc = vcpu_init_fpu(v)) != 0 )
-        return rc;
+        if ( (rc = vcpu_init_fpu(v)) != 0 )
+            return rc;
 
-    vmce_init_vcpu(v);
+        vmce_init_vcpu(v);
+    }
 
     if ( has_hvm_container_domain(d) )
     {
@@ -559,12 +562,12 @@ int arch_domain_create(struct domain *d,
     HYPERVISOR_COMPAT_VIRT_START(d) =
         is_pv_domain(d) ? __HYPERVISOR_COMPAT_VIRT_START : ~0u;
 
-    if ( (rc = paging_domain_init(d, domcr_flags)) != 0 )
-        goto fail;
-    paging_initialised = 1;
-
     if ( !is_idle_domain(d) )
     {
+        if ( (rc = paging_domain_init(d, domcr_flags)) != 0 )
+            goto fail;
+        paging_initialised = 1;
+
         d->arch.cpuids = xmalloc_array(cpuid_input_t, MAX_CPUID_INPUT);
         rc = -ENOMEM;
         if ( d->arch.cpuids == NULL )
--- a/xen/arch/x86/i387.c
+++ b/xen/arch/x86/i387.c
@@ -303,12 +303,8 @@ void save_fpu_enable(void)
 /* Initialize FPU's context save area */
 int vcpu_init_fpu(struct vcpu *v)
 {
-    int rc = 0;
+    int rc;
     
-    /* Idle domain doesn't have FPU state allocated */
-    if ( is_idle_vcpu(v) )
-        goto done;
-
     if ( (rc = xstate_alloc_save_area(v)) != 0 )
         return rc;
 
@@ -318,13 +314,9 @@ int vcpu_init_fpu(struct vcpu *v)
     {
         v->arch.fpu_ctxt = _xzalloc(sizeof(v->arch.xsave_area->fpu_sse), 16);
         if ( !v->arch.fpu_ctxt )
-        {
             rc = -ENOMEM;
-            goto done;
-        }
     }
 
-done:
     return rc;
 }
 






* [PATCH 2/4] x86/mm: allow for building without shadow mode support
  2015-01-28  8:00 [PATCH 0/4] x86: allow for more memory to be used Jan Beulich
  2015-01-28  8:10 ` [PATCH 1/4] x86: skip further initialization for idle domains Jan Beulich
@ 2015-01-28  8:11 ` Jan Beulich
  2015-01-28 11:10   ` Andrew Cooper
  2015-01-28  8:11 ` [PATCH 3/4] x86: provide build time option to support up to 123Tb of memory Jan Beulich
  2015-01-28  8:12 ` [PATCH 4/4] x86/shadow: make some log-dirty handling functions static Jan Beulich
  3 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2015-01-28  8:11 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Keir Fraser, Tim Deegan


Considering the complexity of the code, it seems reasonable to allow
people to disable that code entirely, even beyond the immediate need
for this introduced by the next patch.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
There is one XXX being added to the code - while it doesn't look like
we need to set v->arch.paging.mode in shadow_vcpu_init(), it may be
deemed more secure to set up a table with stub function pointers.

--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -32,9 +32,13 @@ x86 := y
 x86_32 := n
 x86_64 := y
 
+shadow-paging ?= y
+
 CFLAGS += -mno-red-zone -mno-sse -fpic
 CFLAGS += -fno-asynchronous-unwind-tables
 # -fvisibility=hidden reduces -fpic cost, if it's available
 ifneq ($(call cc-option,$(CC),-fvisibility=hidden,n),n)
 CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE
 endif
+
+CFLAGS-$(shadow-paging) += -DCONFIG_SHADOW_PAGING
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -635,16 +635,16 @@ int paging_domain_init(struct domain *d,
      * don't want to leak any active log-dirty bitmaps */
     d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
 
-    /* The order of the *_init calls below is important, as the later
-     * ones may rewrite some common fields.  Shadow pagetables are the
-     * default... */
-    shadow_domain_init(d, domcr_flags);
-
-    /* ... but we will use hardware assistance if it's available. */
+    /*
+     * Shadow pagetables are the default, but we will use
+     * hardware assistance if it's available and enabled.
+     */
     if ( hap_enabled(d) )
         hap_domain_init(d);
+    else
+        rc = shadow_domain_init(d, domcr_flags);
 
-    return 0;
+    return rc;
 }
 
 /* vcpu paging struct initialization goes here */
@@ -822,12 +822,16 @@ int paging_enable(struct domain *d, u32 
  * and therefore its pagetables will soon be discarded */
 void pagetable_dying(struct domain *d, paddr_t gpa)
 {
+#ifdef CONFIG_SHADOW_PAGING
     struct vcpu *v;
 
     ASSERT(paging_mode_shadow(d));
 
     v = d->vcpu[0];
     v->arch.paging.mode->shadow.pagetable_dying(v, gpa);
+#else
+    BUG();
+#endif
 }
 
 /* Print paging-assistance info to the console */
--- a/xen/arch/x86/mm/shadow/Makefile
+++ b/xen/arch/x86/mm/shadow/Makefile
@@ -1,4 +1,5 @@
-obj-$(x86_64) += common.o guest_2.o guest_3.o guest_4.o
+obj-y                := none.o
+obj-$(shadow-paging) := common.o guest_2.o guest_3.o guest_4.o
 
 guest_%.o: multi.c Makefile
 	$(CC) $(CFLAGS) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -43,7 +43,7 @@ DEFINE_PER_CPU(uint32_t,trace_shadow_pat
 
 /* Set up the shadow-specific parts of a domain struct at start of day.
  * Called for every domain from arch_domain_create() */
-void shadow_domain_init(struct domain *d, unsigned int domcr_flags)
+int shadow_domain_init(struct domain *d, unsigned int domcr_flags)
 {
     INIT_PAGE_LIST_HEAD(&d->arch.paging.shadow.freelist);
     INIT_PAGE_LIST_HEAD(&d->arch.paging.shadow.pinned_shadows);
@@ -57,6 +57,8 @@ void shadow_domain_init(struct domain *d
     d->arch.paging.shadow.oos_off = (domcr_flags & DOMCRF_oos_off) ?  1 : 0;
 #endif
     d->arch.paging.shadow.pagetable_dying_op = 0;
+
+    return 0;
 }
 
 /* Setup the shadow-specfic parts of a vcpu struct. Note: The most important
--- /dev/null
+++ b/xen/arch/x86/mm/shadow/none.c	2015-01-26 17:23:04.000000000 +0100
@@ -0,0 +1,70 @@
+#include <xen/mm.h>
+#include <asm/shadow.h>
+
+static int _enable_log_dirty(struct domain *d, bool_t log_global)
+{
+    BUG_ON(!is_pv_domain(d));
+    return -EOPNOTSUPP;
+}
+
+static int _disable_log_dirty(struct domain *d)
+{
+    BUG_ON(!is_pv_domain(d));
+    return -EOPNOTSUPP;
+}
+
+static void _clean_dirty_bitmap(struct domain *d)
+{
+    BUG_ON(!is_pv_domain(d));
+}
+
+int shadow_domain_init(struct domain *d, unsigned int domcr_flags)
+{
+    paging_log_dirty_init(d, _enable_log_dirty,
+                          _disable_log_dirty, _clean_dirty_bitmap);
+    return is_pv_domain(d) ? 0 : -EOPNOTSUPP;
+}
+
+void shadow_vcpu_init(struct vcpu *v)
+{
+    BUG_ON(!is_pv_domain(v->domain));
+    /* XXX v->arch.paging.mode = ???; */
+}
+
+void shadow_teardown(struct domain *d)
+{
+    BUG_ON(!is_pv_domain(d));
+}
+
+void shadow_final_teardown(struct domain *d)
+{
+    BUG_ON(!is_pv_domain(d));
+}
+
+int shadow_enable(struct domain *d, u32 mode)
+{
+    BUG_ON(!is_pv_domain(d));
+    return -EOPNOTSUPP;
+}
+
+void sh_remove_shadows(struct vcpu *v, mfn_t gmfn, int fast, int all)
+{
+}
+
+void shadow_blow_tables_per_domain(struct domain *d)
+{
+}
+
+int shadow_track_dirty_vram(struct domain *d,
+                            unsigned long begin_pfn, unsigned long nr,
+                            XEN_GUEST_HANDLE_64(uint8) dirty_bitmap)
+{
+    BUG();
+    return -EOPNOTSUPP;
+}
+
+int shadow_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+{
+    return -EINVAL;
+}
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -88,6 +88,7 @@ void hypercall_page_initialise(struct do
 /*          shadow paging extension             */
 /************************************************/
 struct shadow_domain {
+#ifdef CONFIG_SHADOW_PAGING
     unsigned int      opt_flags;    /* runtime tunable optimizations on/off */
     struct page_list_head pinned_shadows;
 
@@ -117,9 +118,11 @@ struct shadow_domain {
 
     /* Has this domain ever used HVMOP_pagetable_dying? */
     bool_t pagetable_dying_op;
+#endif
 };
 
 struct shadow_vcpu {
+#ifdef CONFIG_SHADOW_PAGING
     /* PAE guests: per-vcpu shadow top-level table */
     l3_pgentry_t l3table[4] __attribute__((__aligned__(32)));
     /* PAE guests: per-vcpu cache of the top-level *guest* entries */
@@ -145,6 +148,7 @@ struct shadow_vcpu {
     } oos_fixup[SHADOW_OOS_PAGES];
 
     bool_t pagetable_dying;
+#endif
 };
 
 /************************************************/
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -55,7 +55,11 @@
 #define PG_external    (XEN_DOMCTL_SHADOW_ENABLE_EXTERNAL << PG_mode_shift)
 
 #define paging_mode_enabled(_d)   ((_d)->arch.paging.mode)
+#ifdef CONFIG_SHADOW_PAGING
 #define paging_mode_shadow(_d)    ((_d)->arch.paging.mode & PG_SH_enable)
+#else
+#define paging_mode_shadow(_d)    ((void)(_d), 0)
+#endif
 #define paging_mode_hap(_d)       ((_d)->arch.paging.mode & PG_HAP_enable)
 
 #define paging_mode_refcounts(_d) ((_d)->arch.paging.mode & PG_refcounts)
@@ -74,6 +78,7 @@
 
 struct sh_emulate_ctxt;
 struct shadow_paging_mode {
+#ifdef CONFIG_SHADOW_PAGING
     void          (*detach_old_tables     )(struct vcpu *v);
     int           (*x86_emulate_write     )(struct vcpu *v, unsigned long va,
                                             void *src, u32 bytes,
@@ -88,6 +93,7 @@ struct shadow_paging_mode {
     int           (*guess_wrmap           )(struct vcpu *v, 
                                             unsigned long vaddr, mfn_t gmfn);
     void          (*pagetable_dying       )(struct vcpu *v, paddr_t gpa);
+#endif
     /* For outsiders to tell what mode we're in */
     unsigned int shadow_levels;
 };
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -49,7 +49,7 @@
 
 /* Set up the shadow-specific parts of a domain struct at start of day.
  * Called from paging_domain_init(). */
-void shadow_domain_init(struct domain *d, unsigned int domcr_flags);
+int shadow_domain_init(struct domain *d, unsigned int domcr_flags);
 
 /* Setup the shadow-specific parts of a vcpu struct. It is called by
  * paging_vcpu_init() in paging.c */
--- a/xen/include/xen/paging.h
+++ b/xen/include/xen/paging.h
@@ -7,7 +7,7 @@
 #include <asm/paging.h>
 #include <asm/p2m.h>
 
-#elif defined CONFIG_SHADOW
+#elif defined CONFIG_SHADOW_PAGING
 
 #include <asm/shadow.h>
 



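With the Rules.mk change above, the new option should be controllable
straight from the make command line (the ?= assignment only sets a
default), along the lines of:

    # default build - shadow paging support included
    make -C xen

    # compile the shadow code out - the none.o stubs get linked instead
    make -C xen shadow-paging=n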


* [PATCH 3/4] x86: provide build time option to support up to 123Tb of memory
  2015-01-28  8:00 [PATCH 0/4] x86: allow for more memory to be used Jan Beulich
  2015-01-28  8:10 ` [PATCH 1/4] x86: skip further initialization for idle domains Jan Beulich
  2015-01-28  8:11 ` [PATCH 2/4] x86/mm: allow for building without shadow mode support Jan Beulich
@ 2015-01-28  8:11 ` Jan Beulich
  2015-01-28 12:10   ` Andrew Cooper
  2015-01-28  8:12 ` [PATCH 4/4] x86/shadow: make some log-dirty handling functions static Jan Beulich
  3 siblings, 1 reply; 13+ messages in thread
From: Jan Beulich @ 2015-01-28  8:11 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Keir Fraser, Tim Deegan


As this requires growing struct page_info from 32 to 48 bytes as well
as shrinking the always accessible direct mapped memory range from 5Tb
to 3.5Tb, this isn't being introduced as a general or default enabled
feature.

For now setting "bigmem=y" implies "shadow-paging=n", as the shadow
paging code otherwise fails to build (see
http://lists.xenproject.org/archives/html/xen-devel/2015-01/msg03165.html).

A side effect of the change to x86's mm.h is that asm/mm.h may no
longer be included directly. Hence in the few places where this was done,
xen/mm.h is being substituted (indirectly in the hvm/mtrr.h case).
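
As a rough cross-check of those numbers (assuming 4Kb pages and the
sizes from the config.h changes below): a 1.5Tb frame table divided by
48-byte entries yields 2^35 page_info slots, i.e. coverage for
2^35 * 4Kb = 128Tb of RAM, while the direct map shrinks to seven PML4
slots of 512Gb each, i.e. 3.5Tb. The gap between the 128Tb the frame
table could describe and the 123Tb in the title is presumably taken up
by the remaining address space reservations.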

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/Rules.mk
+++ b/xen/arch/x86/Rules.mk
@@ -32,7 +32,8 @@ x86 := y
 x86_32 := n
 x86_64 := y
 
-shadow-paging ?= y
+bigmem        ?= n
+shadow-paging ?= $(if $(filter y,$(bigmem)),n,y)
 
 CFLAGS += -mno-red-zone -mno-sse -fpic
 CFLAGS += -fno-asynchronous-unwind-tables
@@ -42,3 +43,4 @@ CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE
 endif
 
 CFLAGS-$(shadow-paging) += -DCONFIG_SHADOW_PAGING
+CFLAGS-$(bigmem)        += -DCONFIG_BIGMEM
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -18,13 +18,11 @@
  */
 
 #include <public/hvm/e820.h>
-#include <xen/types.h>
+#include <xen/domain_page.h>
 #include <asm/e820.h>
 #include <asm/iocap.h>
-#include <asm/mm.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
-#include <xen/domain_page.h>
 #include <asm/mtrr.h>
 #include <asm/hvm/support.h>
 #include <asm/hvm/cacheattr.h>
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -386,8 +386,13 @@ static void __init setup_max_pdx(unsigne
     if ( max_pdx > FRAMETABLE_NR )
         max_pdx = FRAMETABLE_NR;
 
+    if ( max_pdx > MPT_VIRT_SIZE / sizeof(unsigned long) )
+        max_pdx = MPT_VIRT_SIZE / sizeof(unsigned long);
+
+#ifdef PAGE_LIST_NULL
     if ( max_pdx >= PAGE_LIST_NULL )
         max_pdx = PAGE_LIST_NULL - 1;
+#endif
 
     max_page = pdx_to_pfn(max_pdx - 1) + 1;
 }
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -158,6 +158,7 @@ extern unsigned char boot_edid_info[128]
  *    High read-only compatibility machine-to-phys translation table.
  *  0xffff82d080000000 - 0xffff82d0bfffffff [1GB,   2^30 bytes, PML4:261]
  *    Xen text, static data, bss.
+#ifndef CONFIG_BIGMEM
  *  0xffff82d0c0000000 - 0xffff82dffbffffff [61GB - 64MB,       PML4:261]
  *    Reserved for future use.
  *  0xffff82dffc000000 - 0xffff82dfffffffff [64MB,  2^26 bytes, PML4:261]
@@ -166,6 +167,16 @@ extern unsigned char boot_edid_info[128]
  *    Page-frame information array.
  *  0xffff830000000000 - 0xffff87ffffffffff [5TB, 5*2^40 bytes, PML4:262-271]
  *    1:1 direct mapping of all physical memory.
+#else
+ *  0xffff82d0c0000000 - 0xffff82ffdfffffff [188.5GB,           PML4:261]
+ *    Reserved for future use.
+ *  0xffff82ffe0000000 - 0xffff82ffffffffff [512MB, 2^29 bytes, PML4:261]
+ *    Super-page information array.
+ *  0xffff830000000000 - 0xffff847fffffffff [1.5TB, 3*2^39 bytes, PML4:262-264]
+ *    Page-frame information array.
+ *  0xffff848000000000 - 0xffff87ffffffffff [3.5TB, 7*2^39 bytes, PML4:265-271]
+ *    1:1 direct mapping of all physical memory.
+#endif
  *  0xffff880000000000 - 0xffffffffffffffff [120TB,             PML4:272-511]
  *    PV: Guest-defined use.
  *  0xffff880000000000 - 0xffffff7fffffffff [119.5TB,           PML4:272-510]
@@ -234,21 +245,35 @@ extern unsigned char boot_edid_info[128]
 /* Slot 261: xen text, static data and bss (1GB). */
 #define XEN_VIRT_START          (HIRO_COMPAT_MPT_VIRT_END)
 #define XEN_VIRT_END            (XEN_VIRT_START + GB(1))
-/* Slot 261: superpage information array (64MB). */
+
+/* Slot 261: superpage information array (64MB or 512MB). */
 #define SPAGETABLE_VIRT_END     FRAMETABLE_VIRT_START
 #define SPAGETABLE_NR           (((FRAMETABLE_NR - 1) >> (SUPERPAGE_SHIFT - \
                                                           PAGE_SHIFT)) + 1)
 #define SPAGETABLE_SIZE         (SPAGETABLE_NR * sizeof(struct spage_info))
 #define SPAGETABLE_VIRT_START   ((SPAGETABLE_VIRT_END - SPAGETABLE_SIZE) & \
                                  (_AC(-1,UL) << SUPERPAGE_SHIFT))
+
+#ifndef CONFIG_BIGMEM
 /* Slot 261: page-frame information array (128GB). */
-#define FRAMETABLE_VIRT_END     DIRECTMAP_VIRT_START
 #define FRAMETABLE_SIZE         GB(128)
+#else
+/* Slot 262-264: page-frame information array (1.5TB). */
+#define FRAMETABLE_SIZE         GB(1536)
+#endif
+#define FRAMETABLE_VIRT_END     DIRECTMAP_VIRT_START
 #define FRAMETABLE_NR           (FRAMETABLE_SIZE / sizeof(*frame_table))
 #define FRAMETABLE_VIRT_START   (FRAMETABLE_VIRT_END - FRAMETABLE_SIZE)
+
+#ifndef CONFIG_BIGMEM
 /* Slot 262-271/510: A direct 1:1 mapping of all of physical memory. */
 #define DIRECTMAP_VIRT_START    (PML4_ADDR(262))
 #define DIRECTMAP_SIZE          (PML4_ENTRY_BYTES * (511 - 262))
+#else
+/* Slot 265-271/510: A direct 1:1 mapping of all of physical memory. */
+#define DIRECTMAP_VIRT_START    (PML4_ADDR(265))
+#define DIRECTMAP_SIZE          (PML4_ENTRY_BYTES * (511 - 265))
+#endif
 #define DIRECTMAP_VIRT_END      (DIRECTMAP_VIRT_START + DIRECTMAP_SIZE)
 
 #ifndef __ASSEMBLY__
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -17,6 +17,7 @@
  */
 #define PFN_ORDER(_pfn) ((_pfn)->v.free.order)
 
+#ifndef CONFIG_BIGMEM
 /*
  * This definition is solely for the use in struct page_info (and
  * struct page_list_head), intended to allow easy adjustment once x86-64
@@ -30,6 +31,9 @@ struct page_list_entry
 {
     __pdx_t next, prev;
 };
+#else
+#define __pdx_t unsigned long
+#endif
 
 struct page_sharing_info;
 
--- a/xen/include/asm-x86/mtrr.h
+++ b/xen/include/asm-x86/mtrr.h
@@ -1,8 +1,7 @@
 #ifndef __ASM_X86_MTRR_H__
 #define __ASM_X86_MTRR_H__
 
-#include <xen/config.h>
-#include <asm/mm.h>
+#include <xen/mm.h>
 
 /* These are the region types. They match the architectural specification. */
 #define MTRR_TYPE_UNCACHABLE 0

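
By analogy with the previous patch's knob, a bigmem hypervisor would
presumably be built as:

    # CONFIG_BIGMEM build; shadow-paging then defaults to n
    make -C xen bigmem=y

(Overriding with shadow-paging=y at the same time is expected to fail
to build until the linked-list issue in the referenced thread is
resolved.)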




* [PATCH 4/4] x86/shadow: make some log-dirty handling functions static
  2015-01-28  8:00 [PATCH 0/4] x86: allow for more memory to be used Jan Beulich
                   ` (2 preceding siblings ...)
  2015-01-28  8:11 ` [PATCH 3/4] x86: provide build time option to support up to 123Tb of memory Jan Beulich
@ 2015-01-28  8:12 ` Jan Beulich
  2015-01-28 11:18   ` Andrew Cooper
  2015-01-29 11:21   ` [PATCH 4/4] x86/shadow: make some log-dirty " Tim Deegan
  3 siblings, 2 replies; 13+ messages in thread
From: Jan Beulich @ 2015-01-28  8:12 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Keir Fraser, Tim Deegan


Noticed while introducing the stub replacement for disabling shadow
paging support at build time.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -41,6 +41,10 @@
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
 
+static int sh_enable_log_dirty(struct domain *, bool_t log_global);
+static int sh_disable_log_dirty(struct domain *);
+static void sh_clean_dirty_bitmap(struct domain *);
+
 /* Set up the shadow-specific parts of a domain struct at start of day.
  * Called for every domain from arch_domain_create() */
 int shadow_domain_init(struct domain *d, unsigned int domcr_flags)
@@ -49,8 +53,8 @@ int shadow_domain_init(struct domain *d,
     INIT_PAGE_LIST_HEAD(&d->arch.paging.shadow.pinned_shadows);
 
     /* Use shadow pagetables for log-dirty support */
-    paging_log_dirty_init(d, shadow_enable_log_dirty, 
-                          shadow_disable_log_dirty, shadow_clean_dirty_bitmap);
+    paging_log_dirty_init(d, sh_enable_log_dirty,
+                          sh_disable_log_dirty, sh_clean_dirty_bitmap);
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
     d->arch.paging.shadow.oos_active = 0;
@@ -3422,7 +3426,7 @@ shadow_write_p2m_entry(struct domain *d,
 /* Shadow specific code which is called in paging_log_dirty_enable().
  * Return 0 if no problem found.
  */
-int shadow_enable_log_dirty(struct domain *d, bool_t log_global)
+static int sh_enable_log_dirty(struct domain *d, bool_t log_global)
 {
     int ret;
 
@@ -3450,7 +3454,7 @@ int shadow_enable_log_dirty(struct domai
 }
 
 /* shadow specfic code which is called in paging_log_dirty_disable() */
-int shadow_disable_log_dirty(struct domain *d)
+static int sh_disable_log_dirty(struct domain *d)
 {
     int ret;
 
@@ -3464,7 +3468,7 @@ int shadow_disable_log_dirty(struct doma
 /* This function is called when we CLEAN log dirty bitmap. See 
  * paging_log_dirty_op() for details. 
  */
-void shadow_clean_dirty_bitmap(struct domain *d)
+static void sh_clean_dirty_bitmap(struct domain *d)
 {
     paging_lock(d);
     /* Need to revoke write access to the domain's pages again.
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -77,15 +77,6 @@ void shadow_teardown(struct domain *d);
 /* Call once all of the references to the domain have gone away */
 void shadow_final_teardown(struct domain *d);
 
-/* shadow code to call when log dirty is enabled */
-int shadow_enable_log_dirty(struct domain *d, bool_t log_global);
-
-/* shadow code to call when log dirty is disabled */
-int shadow_disable_log_dirty(struct domain *d);
-
-/* shadow code to call when bitmap is being cleaned */
-void shadow_clean_dirty_bitmap(struct domain *d);
-
 /* Update all the things that are derived from the guest's CR0/CR3/CR4.
  * Called to initialize paging structures if the paging mode
  * has changed, and when bringing up a VCPU for the first time. */






* Re: [PATCH 1/4] x86: skip further initialization for idle domains
  2015-01-28  8:10 ` [PATCH 1/4] x86: skip further initialization for idle domains Jan Beulich
@ 2015-01-28 11:00   ` Andrew Cooper
  0 siblings, 0 replies; 13+ messages in thread
From: Andrew Cooper @ 2015-01-28 11:00 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Tim Deegan, Keir Fraser

On 28/01/15 08:10, Jan Beulich wrote:
> While in the end not really found necessary, early versions of the
> patches to follow pointed out that we needlessly set up paging for idle
> domains. Arranging for that to be skipped made me notice that we can at
> once skip vMCE setup for them. Leverage the adjustment to further
> re-arrange the way FPU setup gets skipped.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -427,12 +427,15 @@ int vcpu_initialise(struct vcpu *v)
>      if ( rc )
>          return rc;
>  
> -    paging_vcpu_init(v);
> +    if ( !is_idle_domain(d) )
> +    {
> +        paging_vcpu_init(v);
>  
> -    if ( (rc = vcpu_init_fpu(v)) != 0 )
> -        return rc;
> +        if ( (rc = vcpu_init_fpu(v)) != 0 )
> +            return rc;
>  
> -    vmce_init_vcpu(v);
> +        vmce_init_vcpu(v);
> +    }
>  
>      if ( has_hvm_container_domain(d) )
>      {
> @@ -559,12 +562,12 @@ int arch_domain_create(struct domain *d,
>      HYPERVISOR_COMPAT_VIRT_START(d) =
>          is_pv_domain(d) ? __HYPERVISOR_COMPAT_VIRT_START : ~0u;
>  
> -    if ( (rc = paging_domain_init(d, domcr_flags)) != 0 )
> -        goto fail;
> -    paging_initialised = 1;
> -
>      if ( !is_idle_domain(d) )
>      {
> +        if ( (rc = paging_domain_init(d, domcr_flags)) != 0 )
> +            goto fail;
> +        paging_initialised = 1;
> +
>          d->arch.cpuids = xmalloc_array(cpuid_input_t, MAX_CPUID_INPUT);
>          rc = -ENOMEM;
>          if ( d->arch.cpuids == NULL )
> --- a/xen/arch/x86/i387.c
> +++ b/xen/arch/x86/i387.c
> @@ -303,12 +303,8 @@ void save_fpu_enable(void)
>  /* Initialize FPU's context save area */
>  int vcpu_init_fpu(struct vcpu *v)
>  {
> -    int rc = 0;
> +    int rc;
>      
> -    /* Idle domain doesn't have FPU state allocated */
> -    if ( is_idle_vcpu(v) )
> -        goto done;
> -
>      if ( (rc = xstate_alloc_save_area(v)) != 0 )
>          return rc;
>  
> @@ -318,13 +314,9 @@ int vcpu_init_fpu(struct vcpu *v)
>      {
>          v->arch.fpu_ctxt = _xzalloc(sizeof(v->arch.xsave_area->fpu_sse), 16);
>          if ( !v->arch.fpu_ctxt )
> -        {
>              rc = -ENOMEM;
> -            goto done;
> -        }
>      }
>  
> -done:
>      return rc;
>  }
>  
>
>
>


* Re: [PATCH 2/4] x86/mm: allow for building without shadow mode support
  2015-01-28  8:11 ` [PATCH 2/4] x86/mm: allow for building without shadow mode support Jan Beulich
@ 2015-01-28 11:10   ` Andrew Cooper
  2015-01-29 17:34     ` Tim Deegan
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Cooper @ 2015-01-28 11:10 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Tim Deegan, Keir Fraser

On 28/01/15 08:11, Jan Beulich wrote:
> Considering the complexity of the code, it seems reasonable to allow
> people to disable that code entirely, even beyond the immediate need
> for this introduced by the next patch.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> There is one XXX being added to the code - while it doesn't look like
> we need to set v->arch.paging.mode in shadow_vcpu_init(), it may be
> deemed more secure to set up a table with stub function pointers.

I would agree that setting up a function pointer table is the safer
course of action to take.

>
> --- a/xen/arch/x86/Rules.mk
> +++ b/xen/arch/x86/Rules.mk
> @@ -32,9 +32,13 @@ x86 := y
>  x86_32 := n
>  x86_64 := y
>  
> +shadow-paging ?= y
> +
>  CFLAGS += -mno-red-zone -mno-sse -fpic
>  CFLAGS += -fno-asynchronous-unwind-tables
>  # -fvisibility=hidden reduces -fpic cost, if it's available
>  ifneq ($(call cc-option,$(CC),-fvisibility=hidden,n),n)
>  CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE
>  endif
> +
> +CFLAGS-$(shadow-paging) += -DCONFIG_SHADOW_PAGING
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -635,16 +635,16 @@ int paging_domain_init(struct domain *d,
>       * don't want to leak any active log-dirty bitmaps */
>      d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
>  
> -    /* The order of the *_init calls below is important, as the later
> -     * ones may rewrite some common fields.  Shadow pagetables are the
> -     * default... */
> -    shadow_domain_init(d, domcr_flags);
> -
> -    /* ... but we will use hardware assistance if it's available. */
> +    /*
> +     * Shadow pagetables are the default, but we will use
> +     * hardware assistance if it's available and enabled.
> +     */
>      if ( hap_enabled(d) )
>          hap_domain_init(d);
> +    else
> +        rc = shadow_domain_init(d, domcr_flags);
>  
> -    return 0;
> +    return rc;
>  }
>  
>  /* vcpu paging struct initialization goes here */
> @@ -822,12 +822,16 @@ int paging_enable(struct domain *d, u32 
>   * and therefore its pagetables will soon be discarded */
>  void pagetable_dying(struct domain *d, paddr_t gpa)
>  {
> +#ifdef CONFIG_SHADOW_PAGING
>      struct vcpu *v;
>  
>      ASSERT(paging_mode_shadow(d));
>  
>      v = d->vcpu[0];
>      v->arch.paging.mode->shadow.pagetable_dying(v, gpa);
> +#else
> +    BUG();
> +#endif
>  }
>  
>  /* Print paging-assistance info to the console */
> --- a/xen/arch/x86/mm/shadow/Makefile
> +++ b/xen/arch/x86/mm/shadow/Makefile
> @@ -1,4 +1,5 @@
> -obj-$(x86_64) += common.o guest_2.o guest_3.o guest_4.o
> +obj-y                := none.o
> +obj-$(shadow-paging) := common.o guest_2.o guest_3.o guest_4.o
>  

Can this be

ifeq ($(shadow-paging),y)
obj-y := common.o guest_2.o guest_3.o guest_4.o
else
obj-y := none.o
endif

Rather than relying on the double := to clobber none.o and prevent a
link failure ?

~Andrew


* Re: [PATCH 4/4] x86/shadow: make some log-dirty handling functions static
  2015-01-28  8:12 ` [PATCH 4/4] x86/shadow: make some log-dirty handling functions static Jan Beulich
@ 2015-01-28 11:18   ` Andrew Cooper
  2015-01-29 11:21   ` [PATCH 4/4] x86/shadow: make some log?dirty " Tim Deegan
  1 sibling, 0 replies; 13+ messages in thread
From: Andrew Cooper @ 2015-01-28 11:18 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Tim Deegan, Keir Fraser

On 28/01/15 08:12, Jan Beulich wrote:
> Noticed while introducing the stub replacement for disabling shadow
> paging support at build time.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -41,6 +41,10 @@
>  
>  DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
>  
> +static int sh_enable_log_dirty(struct domain *, bool_t log_global);
> +static int sh_disable_log_dirty(struct domain *);
> +static void sh_clean_dirty_bitmap(struct domain *);
> +
>  /* Set up the shadow-specific parts of a domain struct at start of day.
>   * Called for every domain from arch_domain_create() */
>  int shadow_domain_init(struct domain *d, unsigned int domcr_flags)
> @@ -49,8 +53,8 @@ int shadow_domain_init(struct domain *d,
>      INIT_PAGE_LIST_HEAD(&d->arch.paging.shadow.pinned_shadows);
>  
>      /* Use shadow pagetables for log-dirty support */
> -    paging_log_dirty_init(d, shadow_enable_log_dirty, 
> -                          shadow_disable_log_dirty, shadow_clean_dirty_bitmap);
> +    paging_log_dirty_init(d, sh_enable_log_dirty,
> +                          sh_disable_log_dirty, sh_clean_dirty_bitmap);
>  
>  #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
>      d->arch.paging.shadow.oos_active = 0;
> @@ -3422,7 +3426,7 @@ shadow_write_p2m_entry(struct domain *d,
>  /* Shadow specific code which is called in paging_log_dirty_enable().
>   * Return 0 if no problem found.
>   */
> -int shadow_enable_log_dirty(struct domain *d, bool_t log_global)
> +static int sh_enable_log_dirty(struct domain *d, bool_t log_global)
>  {
>      int ret;
>  
> @@ -3450,7 +3454,7 @@ int shadow_enable_log_dirty(struct domai
>  }
>  
>  /* shadow specfic code which is called in paging_log_dirty_disable() */
> -int shadow_disable_log_dirty(struct domain *d)
> +static int sh_disable_log_dirty(struct domain *d)
>  {
>      int ret;
>  
> @@ -3464,7 +3468,7 @@ int shadow_disable_log_dirty(struct doma
>  /* This function is called when we CLEAN log dirty bitmap. See 
>   * paging_log_dirty_op() for details. 
>   */
> -void shadow_clean_dirty_bitmap(struct domain *d)
> +static void sh_clean_dirty_bitmap(struct domain *d)
>  {
>      paging_lock(d);
>      /* Need to revoke write access to the domain's pages again.
> --- a/xen/include/asm-x86/shadow.h
> +++ b/xen/include/asm-x86/shadow.h
> @@ -77,15 +77,6 @@ void shadow_teardown(struct domain *d);
>  /* Call once all of the references to the domain have gone away */
>  void shadow_final_teardown(struct domain *d);
>  
> -/* shadow code to call when log dirty is enabled */
> -int shadow_enable_log_dirty(struct domain *d, bool_t log_global);
> -
> -/* shadow code to call when log dirty is disabled */
> -int shadow_disable_log_dirty(struct domain *d);
> -
> -/* shadow code to call when bitmap is being cleaned */
> -void shadow_clean_dirty_bitmap(struct domain *d);
> -
>  /* Update all the things that are derived from the guest's CR0/CR3/CR4.
>   * Called to initialize paging structures if the paging mode
>   * has changed, and when bringing up a VCPU for the first time. */
>
>
>


* Re: [PATCH 3/4] x86: provide build time option to support up to 123Tb of memory
  2015-01-28  8:11 ` [PATCH 3/4] x86: provide build time option to support up to 123Tb of memory Jan Beulich
@ 2015-01-28 12:10   ` Andrew Cooper
  2015-01-28 14:16     ` Jan Beulich
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Cooper @ 2015-01-28 12:10 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Tim Deegan, Keir Fraser

On 28/01/15 08:11, Jan Beulich wrote:
> As this requires growing struct page_info from 32 to 48 bytes as well
> as shrinking the always accessible direct mapped memory range from 5Tb
> to 3.5Tb, this isn't being introduced as a general or default enabled
> feature.
>
> For now setting "bigmem=y" implies "shadow-paging=n", as the shadow
> paging code otherwise fails to build (see
> http://lists.xenproject.org/archives/html/xen-devel/2015-01/msg03165.html).

I presume that once the linked list issues are sorted, bigmem and
shadow-paging will no longer be mutually exclusive?

>
> A side effect of the change to x86's mm.h is that asm/mm.h may no
> longer be included directly. Hence in the few places where this was done,
> xen/mm.h is being substituted (indirectly in the hvm/mtrr.h case).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

The changes look surprisingly non-invasive.  Have you got a 123TB machine
to hand?

On a separate tack, I wonder whether it might be an idea to set about
stealing some virtual address space back from 64bit PV guests.  If we
make a start now, in a couple of years' time it might be a plausible
ABI breakage that vendors would choose to take, as current operating
systems start falling out of support.
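
(For reference, in the virtual memory map shown in the patch: PML4
slots 262-271 hold Xen's 5Tb direct map, while slots 272-511 - 120Tb -
are guest-defined for 64bit PV; that guest half is where any reclaimed
space would have to come from.)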

>
> --- a/xen/arch/x86/Rules.mk
> +++ b/xen/arch/x86/Rules.mk
> @@ -32,7 +32,8 @@ x86 := y
>  x86_32 := n
>  x86_64 := y
>  
> -shadow-paging ?= y
> +bigmem        ?= n
> +shadow-paging ?= $(if $(filter y,$(bigmem)),n,y)
>  
>  CFLAGS += -mno-red-zone -mno-sse -fpic
>  CFLAGS += -fno-asynchronous-unwind-tables
> @@ -42,3 +43,4 @@ CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE
>  endif
>  
>  CFLAGS-$(shadow-paging) += -DCONFIG_SHADOW_PAGING
> +CFLAGS-$(bigmem)        += -DCONFIG_BIGMEM
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -18,13 +18,11 @@
>   */
>  
>  #include <public/hvm/e820.h>
> -#include <xen/types.h>
> +#include <xen/domain_page.h>
>  #include <asm/e820.h>
>  #include <asm/iocap.h>
> -#include <asm/mm.h>
>  #include <asm/paging.h>
>  #include <asm/p2m.h>
> -#include <xen/domain_page.h>
>  #include <asm/mtrr.h>
>  #include <asm/hvm/support.h>
>  #include <asm/hvm/cacheattr.h>
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -386,8 +386,13 @@ static void __init setup_max_pdx(unsigne
>      if ( max_pdx > FRAMETABLE_NR )
>          max_pdx = FRAMETABLE_NR;
>  
> +    if ( max_pdx > MPT_VIRT_SIZE / sizeof(unsigned long) )
> +        max_pdx = MPT_VIRT_SIZE / sizeof(unsigned long);
> +
> +#ifdef PAGE_LIST_NULL

Why does PAGE_LIST_NULL become conditional?  It looks to be
unconditionally available from xen/mm.h.

~Andrew


* Re: [PATCH 3/4] x86: provide build time option to support up to 123Tb of memory
  2015-01-28 12:10   ` Andrew Cooper
@ 2015-01-28 14:16     ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2015-01-28 14:16 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Keir Fraser, Tim Deegan

>>> On 28.01.15 at 13:10, <andrew.cooper3@citrix.com> wrote:
> On 28/01/15 08:11, Jan Beulich wrote:
>> As this requires growing struct page_info from 32 to 48 bytes as well
>> as shrinking the always accessible direct mapped memory range from 5Tb
>> to 3.5Tb, this isn't being introduced as a general or default enabled
>> feature.
>>
>> For now setting "bigmem=y" implies "shadow-paging=n", as the shadow
>> paging code otherwise fails to build (see
>> http://lists.xenproject.org/archives/html/xen-devel/2015-01/msg03165.html).
> 
> I presume that once the linked list issues are sorted, bigmem and
> shadow-paging will no longer be mutually exclusive?

Of course.

>> A side effect of the change to x86's mm.h is that asm/mm.h may no
>> longer be included directly. Hence in the few places where this was done,
>> xen/mm.h is being substituted (indirectly in the hvm/mtrr.c case).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> The changes look surprisingly non-invasive.  Have you got a 123TB machine
> to hand?

Haha.

> On a separate tack, I wonder whether it might be worth setting about
> stealing some virtual address space back from 64bit PV guests.  If we
> make a start now then, in a couple of years' time, it might be a
> plausible ABI breakage that vendors would choose to take, as current
> operating systems start falling out of support.

Making such a change would need to start on the OS side though, and
the hypervisor wouldn't be able to make use of it until a couple of
(or really, many) years from now. Furthermore, with OSes using 1:1
mappings the way Linux does, shrinking their VA space isn't going to
be nice (all the more so since, at least currently, the same VA
layout gets used for native x86 kernels too, i.e. the shrinking would
affect them as well). I'd much rather hope for a 5th page table level
in the not too distant future...
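
To put numbers on that hope, a standalone back-of-the-envelope sketch;
the figures are the generic x86 ones (48 VA bits with 4-level paging,
57 with a 5th level), not Xen-specific constants:

#include <stdio.h>

int main(void)
{
    /* 4 levels of page tables: 2^48 bytes of virtual address space. */
    unsigned long long va_4level = 1ULL << 48;  /* 256 TiB */
    /* 5 levels of page tables: 2^57 bytes of virtual address space. */
    unsigned long long va_5level = 1ULL << 57;  /* 128 PiB */

    printf("4-level paging: %llu TiB of VA space\n", va_4level >> 40);
    printf("5-level paging: %llu PiB of VA space\n", va_5level >> 50);
    return 0;
}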

>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -386,8 +386,13 @@ static void __init setup_max_pdx(unsigne
>>      if ( max_pdx > FRAMETABLE_NR )
>>          max_pdx = FRAMETABLE_NR;
>>  
>> +    if ( max_pdx > MPT_VIRT_SIZE / sizeof(unsigned long) )
>> +        max_pdx = MPT_VIRT_SIZE / sizeof(unsigned long);
>> +
>> +#ifdef PAGE_LIST_NULL
> 
> Why does PAGE_LIST_NULL become conditional?  It looks to be
> unconditionally available from xen/mm.h.

It sits inside an "#ifndef page_list_entry" conditional, i.e. it gets
#define-d only when not using the normal struct list_head for linking.
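
A minimal sketch of the shape of that conditional, with simplified,
hypothetical types rather than the exact xen/mm.h definitions:

#include <stdint.h>

#ifndef page_list_entry
/* Compact case: pages are linked via 32-bit indices rather than full
 * pointers, so an explicit "null index" sentinel is needed and
 * PAGE_LIST_NULL gets #define-d. */
struct page_list_entry {
    uint32_t next, prev;
};
# define PAGE_LIST_NULL (~(uint32_t)0)
#endif
/* When an architecture instead #define-s page_list_entry to plain
 * struct list_head, PAGE_LIST_NULL never comes into existence --
 * hence the "#ifdef PAGE_LIST_NULL" guard in setup_max_pdx(). */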

Jan

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 4/4] x86/shadow: make some log-dirty handling functions static
  2015-01-28  8:12 ` [PATCH 4/4] x86/shadow: make some log-dirty handling functions static Jan Beulich
  2015-01-28 11:18   ` Andrew Cooper
@ 2015-01-29 11:21   ` Tim Deegan
  1 sibling, 0 replies; 13+ messages in thread
From: Tim Deegan @ 2015-01-29 11:21 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Keir Fraser, Andrew Cooper

At 08:12 +0000 on 28 Jan (1422429139), Jan Beulich wrote:
> Noticed while introducing the stub replacement for disabling shadow
> paging support at build time.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Applied, thanks.

Tim.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/4] x86/mm: allow for building without shadow mode support
  2015-01-28 11:10   ` Andrew Cooper
@ 2015-01-29 17:34     ` Tim Deegan
  2015-01-30  9:28       ` Jan Beulich
  0 siblings, 1 reply; 13+ messages in thread
From: Tim Deegan @ 2015-01-29 17:34 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: xen-devel, Keir Fraser, Jan Beulich

At 11:10 +0000 on 28 Jan (1422439811), Andrew Cooper wrote:
> On 28/01/15 08:11, Jan Beulich wrote:
> > Considering the complexity of the code, it seems reasonable to allow
> > people to disable that code entirely, even beyond the immediate need
> > for this in the next patch.
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > ---
> > There is one XXX being added to the code - while it doesn't look like
> > we need to set v->arch.paging.mode in shadow_vcpu_init(), it may be
> > deemed more secure to set up a table with stub function pointers.
> 
> I would agree that setting up a function pointer table is the safer
> course of action to take.

+1.
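
As a rough illustration of the idea, a hedged sketch of such a stub
table -- the names and hooks here are entirely hypothetical; the real
struct paging_mode carries many more function pointers:

#include <stdlib.h>

/* Hypothetical stand-in for a cut-down paging-mode vtable. */
struct paging_mode_stub {
    void (*update_cr3)(void *vcpu);
    int  (*page_fault)(void *vcpu, unsigned long va);
};

/* Stubs that fail loudly, so a stray call through an uninitialised
 * mode pointer aborts deterministically instead of dereferencing
 * whatever happens to be there. */
static void stub_update_cr3(void *vcpu) { (void)vcpu; abort(); }
static int stub_page_fault(void *vcpu, unsigned long va)
{
    (void)vcpu; (void)va;
    abort();
}

static const struct paging_mode_stub stub_paging_mode = {
    .update_cr3 = stub_update_cr3,
    .page_fault = stub_page_fault,
};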

> > --- a/xen/arch/x86/Rules.mk
> > +++ b/xen/arch/x86/Rules.mk
> > @@ -32,9 +32,13 @@ x86 := y
> >  x86_32 := n
> >  x86_64 := y
> >  
> > +shadow-paging ?= y
> > +
> >  CFLAGS += -mno-red-zone -mno-sse -fpic
> >  CFLAGS += -fno-asynchronous-unwind-tables
> >  # -fvisibility=hidden reduces -fpic cost, if it's available
> >  ifneq ($(call cc-option,$(CC),-fvisibility=hidden,n),n)
> >  CFLAGS += -DGCC_HAS_VISIBILITY_ATTRIBUTE
> >  endif
> > +
> > +CFLAGS-$(shadow-paging) += -DCONFIG_SHADOW_PAGING
> > --- a/xen/arch/x86/mm/paging.c
> > +++ b/xen/arch/x86/mm/paging.c
> > @@ -635,16 +635,16 @@ int paging_domain_init(struct domain *d,
> >       * don't want to leak any active log-dirty bitmaps */
> >      d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
> >  
> > -    /* The order of the *_init calls below is important, as the later
> > -     * ones may rewrite some common fields.  Shadow pagetables are the
> > -     * default... */
> > -    shadow_domain_init(d, domcr_flags);
> > -
> > -    /* ... but we will use hardware assistance if it's available. */
> > +    /*
> > +     * Shadow pagetables are the default, but we will use
> > +     * hardware assistance if it's available and enabled.
> > +     */
> >      if ( hap_enabled(d) )
> >          hap_domain_init(d);
> > +    else
> > +        rc = shadow_domain_init(d, domcr_flags);
> >  
> > -    return 0;
> > +    return rc;
> >  }
> >  
> >  /* vcpu paging struct initialization goes here */
> > @@ -822,12 +822,16 @@ int paging_enable(struct domain *d, u32 
> >   * and therefore its pagetables will soon be discarded */
> >  void pagetable_dying(struct domain *d, paddr_t gpa)
> >  {
> > +#ifdef CONFIG_SHADOW_PAGING
> >      struct vcpu *v;
> >  
> >      ASSERT(paging_mode_shadow(d));
> >  
> >      v = d->vcpu[0];
> >      v->arch.paging.mode->shadow.pagetable_dying(v, gpa);
> > +#else
> > +    BUG();
> > +#endif
> >  }
> >  
> >  /* Print paging-assistance info to the console */
> > --- a/xen/arch/x86/mm/shadow/Makefile
> > +++ b/xen/arch/x86/mm/shadow/Makefile
> > @@ -1,4 +1,5 @@
> > -obj-$(x86_64) += common.o guest_2.o guest_3.o guest_4.o
> > +obj-y                := none.o
> > +obj-$(shadow-paging) := common.o guest_2.o guest_3.o guest_4.o
> >  
> 
> Can this be
> 
> ifeq ($(shadow-paging),y)
> obj-y := common.o guest_2.o guest_3.o guest_4.o
> else
> obj-y := none.o
> endif
> 
> Rather than relying on the double := to clobber none.o and prevent a
> link failure?

+1 to this too, for readability.  Or else define a $(shadow-dummy)
that's set appropriately and use obj-$(shadow-dummy) := none.o ?

Cheers,

Tim

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 2/4] x86/mm: allow for building without shadow mode support
  2015-01-29 17:34     ` Tim Deegan
@ 2015-01-30  9:28       ` Jan Beulich
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Beulich @ 2015-01-30  9:28 UTC (permalink / raw)
  To: Andrew Cooper, Tim Deegan; +Cc: xen-devel, Keir Fraser

>>> On 29.01.15 at 18:34, <tim@xen.org> wrote:
> At 11:10 +0000 on 28 Jan (1422439811), Andrew Cooper wrote:
>> On 28/01/15 08:11, Jan Beulich wrote:
>> > --- a/xen/arch/x86/mm/shadow/Makefile
>> > +++ b/xen/arch/x86/mm/shadow/Makefile
>> > @@ -1,4 +1,5 @@
>> > -obj-$(x86_64) += common.o guest_2.o guest_3.o guest_4.o
>> > +obj-y                := none.o
>> > +obj-$(shadow-paging) := common.o guest_2.o guest_3.o guest_4.o
>> >  
>> 
>> Can this be
>> 
>> ifeq($(shadow-paging),y)
>> obj-y := common.o guest_2.o guest_3.o guest_4.o
>> else
>> obj-y := none.o
>> endif
>> 
>> Rather than relying on the double := to clobber none.o and prevent a
>> link failure ?
> 
> +1 to this too, for readability.

As you both ask for it I'll do it, but very reluctantly, as to me this
makes it worse to read, not better.

Jan

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2015-01-30  9:28 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-01-28  8:00 [PATCH 0/4] x86: allow for more memory to be used Jan Beulich
2015-01-28  8:10 ` [PATCH 1/4] x86: skip further initialization for idle domains Jan Beulich
2015-01-28 11:00   ` Andrew Cooper
2015-01-28  8:11 ` [PATCH 2/4] x86/mm: allow for building without shadow mode support Jan Beulich
2015-01-28 11:10   ` Andrew Cooper
2015-01-29 17:34     ` Tim Deegan
2015-01-30  9:28       ` Jan Beulich
2015-01-28  8:11 ` [PATCH 3/4] x86: provide build time option to support up to 123Tb of memory Jan Beulich
2015-01-28 12:10   ` Andrew Cooper
2015-01-28 14:16     ` Jan Beulich
2015-01-28  8:12 ` [PATCH 4/4] x86/shadow: make some log-dirty handling functions static Jan Beulich
2015-01-28 11:18   ` Andrew Cooper
2015-01-29 11:21   ` [PATCH 4/4] x86/shadow: make some log-dirty " Tim Deegan
