* [v3 00/10] PML (Page Modification Logging) support
@ 2015-04-24 8:19 Kai Huang
2015-04-24 8:19 ` [v3 01/10] vmx: add new boot parameter to control PML enabling Kai Huang
` (10 more replies)
0 siblings, 11 replies; 22+ messages in thread
From: Kai Huang @ 2015-04-24 8:19 UTC (permalink / raw)
To: andrew.cooper3, tim, jbeulich, kevin.tian, xen-devel; +Cc: Kai Huang
v2->v3:
- Merged v2 patch 02 (document change) to patch 01 as a single patch, and
changed new parameter description as suggested by Andrew.
- changed vmx_vcpu_flush_pml_buffer to call mark_dirty for all logged GFNs, and
call p2m_change_type_one regardless of return value.
- Added ASSERT for vcpu (being current, or being non-running and unrunnable) to
vmx_vcpu_flush_pml_buffer
- Other refinement in coding style, comments description, etc.
Live migration sanity tests passed both with and without PML.
Kai Huang (10):
vmx: add new boot parameter to control PML enabling
log-dirty: add new paging_mark_gfn_dirty
vmx: add PML definition and feature detection
vmx: add new data structure member to support PML
vmx: add help functions to support PML
vmx: handle PML buffer full VMEXIT
vmx: handle PML enabling in vmx_vcpu_initialise
vmx: disable PML in vmx_vcpu_destroy
log-dirty: refine common code to support PML
p2m/ept: enable PML in p2m-ept for log-dirty
docs/misc/xen-command-line.markdown | 15 +++
xen/arch/x86/hvm/vmx/vmcs.c | 227 ++++++++++++++++++++++++++++++++++++
xen/arch/x86/hvm/vmx/vmx.c | 35 ++++++
xen/arch/x86/mm/hap/hap.c | 29 ++++-
xen/arch/x86/mm/p2m-ept.c | 79 +++++++++++--
xen/arch/x86/mm/p2m.c | 36 ++++++
xen/arch/x86/mm/paging.c | 41 +++++--
xen/include/asm-x86/hvm/vmx/vmcs.h | 26 ++++-
xen/include/asm-x86/hvm/vmx/vmx.h | 4 +-
xen/include/asm-x86/p2m.h | 11 ++
xen/include/asm-x86/paging.h | 2 +
11 files changed, 482 insertions(+), 23 deletions(-)
--
2.1.0
^ permalink raw reply [flat|nested] 22+ messages in thread
* [v3 01/10] vmx: add new boot parameter to control PML enabling
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
@ 2015-04-24 8:19 ` Kai Huang
2015-04-24 10:46 ` Andrew Cooper
2015-04-24 14:33 ` Jan Beulich
2015-04-24 8:19 ` [v3 02/10] log-dirty: add new paging_mark_gfn_dirty Kai Huang
` (9 subsequent siblings)
10 siblings, 2 replies; 22+ messages in thread
From: Kai Huang @ 2015-04-24 8:19 UTC (permalink / raw)
To: andrew.cooper3, tim, jbeulich, kevin.tian, xen-devel; +Cc: Kai Huang
A top-level EPT parameter "ept=<options>" and a boolean sub-option
"opt_pml_enabled" are added to control PML. Further booleans can be added
later for other EPT-related features.
Documentation for the new parameter is also added.
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
docs/misc/xen-command-line.markdown | 15 +++++++++++++++
xen/arch/x86/hvm/vmx/vmcs.c | 30 ++++++++++++++++++++++++++++++
2 files changed, 45 insertions(+)
diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 1dda1f0..4889e27 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -685,6 +685,21 @@ requirement can be relaxed. This option is particularly useful for nested
virtualization, to allow the L1 hypervisor to use EPT even if the L0 hypervisor
does not provide VM\_ENTRY\_LOAD\_GUEST\_PAT.
+### ept (Intel)
+> `= List of ( pml<boolean> )`
+
+> Default: `false`
+
+Controls EPT-related features. Currently the only controllable feature is
+Page Modification Logging (PML), as a boolean.
+
+PML is a hardware feature of Intel's Broadwell Server and later platforms. It
+reduces the hypervisor overhead of the log-dirty mechanism by automatically
+recording GPAs (guest physical addresses) when guest memory becomes dirty,
+significantly reducing the number of EPT violations that would otherwise be
+caused by the write protection of guest memory, which the log-dirty
+mechanism required before PML.
+
### gdb
> `= <baud>[/<clock_hz>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]] | pci | amt ] `
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 63007a9..79efa42 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -64,6 +64,36 @@ integer_param("ple_gap", ple_gap);
static unsigned int __read_mostly ple_window = 4096;
integer_param("ple_window", ple_window);
+static bool_t __read_mostly opt_pml_enabled = 0;
+
+/*
+ * The 'ept' parameter controls functionalities that depend on, or impact the
+ * EPT mechanism. Optional comma separated value may contain:
+ *
+ * pml Enable PML
+ */
+static void __init parse_ept_param(char *s)
+{
+ char *ss;
+
+ do {
+ bool_t val = !!strncmp(s, "no-", 3);
+ if ( !val )
+ s += 3;
+
+ ss = strchr(s, ',');
+ if ( ss )
+ *ss = '\0';
+
+ if ( !strcmp(s, "pml") )
+ opt_pml_enabled = val;
+
+ s = ss + 1;
+ } while ( ss );
+}
+
+custom_param("ept", parse_ept_param);
+
/* Dynamic (run-time adjusted) execution control flags. */
u32 vmx_pin_based_exec_control __read_mostly;
u32 vmx_cpu_based_exec_control __read_mostly;
--
2.1.0
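As a standalone illustration (not Xen code), the comma-separated parsing loop above, with its "no-" prefix handling, can be sketched and exercised outside the hypervisor; the names mirror the patch but the surrounding scaffolding is assumed:

```c
#include <assert.h>
#include <string.h>

/*
 * Standalone sketch of the "ept=" option parsing in parse_ept_param():
 * tokens are comma-separated, and a "no-" prefix clears the option.
 * opt_pml_enabled here is a plain int standing in for Xen's bool_t.
 */
static int opt_pml_enabled;

static void parse_ept_param(char *s)
{
    char *ss;

    do {
        int val = strncmp(s, "no-", 3) != 0; /* "no-" prefix means disable */

        if ( !val )
            s += 3;

        ss = strchr(s, ',');
        if ( ss )
            *ss = '\0';                      /* terminate current token */

        if ( !strcmp(s, "pml") )
            opt_pml_enabled = val;

        s = ss + 1;                          /* advance past the comma */
    } while ( ss );
}
```

Note the string must be writable (the loop NUL-terminates tokens in place), which holds for Xen's boot command line buffer.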
* [v3 02/10] log-dirty: add new paging_mark_gfn_dirty
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
2015-04-24 8:19 ` [v3 01/10] vmx: add new boot parameter to control PML enabling Kai Huang
@ 2015-04-24 8:19 ` Kai Huang
2015-04-24 8:19 ` [v3 03/10] vmx: add PML definition and feature detection Kai Huang
` (8 subsequent siblings)
10 siblings, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-04-24 8:19 UTC (permalink / raw)
To: andrew.cooper3, tim, jbeulich, kevin.tian, xen-devel; +Cc: Kai Huang
PML logs GPAs in the PML buffer. The original paging_mark_dirty takes an MFN
as parameter, but internally converts it to the guest pfn, which it uses as
the index for looking up the radix log-dirty tree. When flushing the PML
buffer, calling paging_mark_dirty directly would introduce redundant p2m
lookups (gfn->mfn->gfn); therefore we introduce paging_mark_gfn_dirty, which
is the bulk of paging_mark_dirty but takes a guest pfn as parameter, and call
it directly when flushing the PML buffer. The original paging_mark_dirty then
simply becomes a wrapper around paging_mark_gfn_dirty.
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
xen/arch/x86/mm/paging.c | 31 +++++++++++++++++++++----------
xen/include/asm-x86/paging.h | 2 ++
2 files changed, 23 insertions(+), 10 deletions(-)
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index b54d76a..77c929b 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -266,24 +266,17 @@ static int paging_log_dirty_disable(struct domain *d, bool_t resuming)
return ret;
}
-/* Mark a page as dirty */
-void paging_mark_dirty(struct domain *d, unsigned long guest_mfn)
+/* Mark a page as dirty, taking the guest pfn as parameter */
+void paging_mark_gfn_dirty(struct domain *d, unsigned long pfn)
{
- unsigned long pfn;
- mfn_t gmfn;
int changed;
mfn_t mfn, *l4, *l3, *l2;
unsigned long *l1;
int i1, i2, i3, i4;
- gmfn = _mfn(guest_mfn);
-
- if ( !paging_mode_log_dirty(d) || !mfn_valid(gmfn) ||
- page_get_owner(mfn_to_page(gmfn)) != d )
+ if ( !paging_mode_log_dirty(d) )
return;
- /* We /really/ mean PFN here, even for non-translated guests. */
- pfn = get_gpfn_from_mfn(mfn_x(gmfn));
/* Shared MFNs should NEVER be marked dirty */
BUG_ON(SHARED_M2P(pfn));
@@ -351,6 +344,24 @@ out:
return;
}
+/* Mark a page as dirty */
+void paging_mark_dirty(struct domain *d, unsigned long guest_mfn)
+{
+ unsigned long pfn;
+ mfn_t gmfn;
+
+ gmfn = _mfn(guest_mfn);
+
+ if ( !paging_mode_log_dirty(d) || !mfn_valid(gmfn) ||
+ page_get_owner(mfn_to_page(gmfn)) != d )
+ return;
+
+ /* We /really/ mean PFN here, even for non-translated guests. */
+ pfn = get_gpfn_from_mfn(mfn_x(gmfn));
+
+ paging_mark_gfn_dirty(d, pfn);
+}
+
/* Is this guest page dirty? */
int paging_mfn_is_dirty(struct domain *d, mfn_t gmfn)
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index 53de715..c99324c 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -156,6 +156,8 @@ void paging_log_dirty_init(struct domain *d,
/* mark a page as dirty */
void paging_mark_dirty(struct domain *d, unsigned long guest_mfn);
+/* mark a page as dirty, taking the guest pfn as parameter */
+void paging_mark_gfn_dirty(struct domain *d, unsigned long pfn);
/* is this guest page dirty?
* This is called from inside paging code, with the paging lock held. */
--
2.1.0
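A minimal standalone model (not Xen code) of the refactoring above: the mfn-based entry point translates once and delegates to the pfn-based one, while a PML flush can call the pfn variant directly since it already has the guest-physical address. The mocked get_gpfn_from_mfn and the lookup counter are illustrative assumptions:

```c
#include <assert.h>

static int lookups;            /* counts m2p lookups in this toy model */
static unsigned long last_marked;

/* Fake m2p table: pretend mfn = pfn + 1000 */
static unsigned long get_gpfn_from_mfn(unsigned long mfn)
{
    lookups++;
    return mfn - 1000;
}

/* Stands in for the real radix-tree update in paging_mark_gfn_dirty() */
static void paging_mark_gfn_dirty(unsigned long pfn)
{
    last_marked = pfn;
}

/* The original entry point becomes a thin wrapper: one translation, then
 * delegate. PML flushing bypasses this and avoids gfn->mfn->gfn round trips. */
static void paging_mark_dirty(unsigned long mfn)
{
    paging_mark_gfn_dirty(get_gpfn_from_mfn(mfn));
}
```

Calling paging_mark_gfn_dirty directly performs zero translations, which is exactly the saving the patch is after when draining a PML buffer full of GPAs.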
* [v3 03/10] vmx: add PML definition and feature detection
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
2015-04-24 8:19 ` [v3 01/10] vmx: add new boot parameter to control PML enabling Kai Huang
2015-04-24 8:19 ` [v3 02/10] log-dirty: add new paging_mark_gfn_dirty Kai Huang
@ 2015-04-24 8:19 ` Kai Huang
2015-04-24 8:19 ` [v3 04/10] vmx: add new data structure member to support PML Kai Huang
` (7 subsequent siblings)
10 siblings, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-04-24 8:19 UTC (permalink / raw)
To: andrew.cooper3, tim, jbeulich, kevin.tian, xen-devel; +Cc: Kai Huang
The patch adds the PML definitions and feature detection. Note that PML will
not be detected if it has been disabled via the boot parameter. PML is also
disabled in construct_vmcs, as it will only be enabled when the domain is
switched to log-dirty mode.
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
xen/arch/x86/hvm/vmx/vmcs.c | 18 ++++++++++++++++++
xen/include/asm-x86/hvm/vmx/vmcs.h | 6 ++++++
xen/include/asm-x86/hvm/vmx/vmx.h | 1 +
3 files changed, 25 insertions(+)
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 79efa42..04fdca3 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -140,6 +140,7 @@ static void __init vmx_display_features(void)
P(cpu_has_vmx_virtual_intr_delivery, "Virtual Interrupt Delivery");
P(cpu_has_vmx_posted_intr_processing, "Posted Interrupt Processing");
P(cpu_has_vmx_vmcs_shadowing, "VMCS shadowing");
+ P(cpu_has_vmx_pml, "Page Modification Logging");
#undef P
if ( !printed )
@@ -237,6 +238,8 @@ static int vmx_init_vmcs_config(void)
opt |= SECONDARY_EXEC_ENABLE_VPID;
if ( opt_unrestricted_guest_enabled )
opt |= SECONDARY_EXEC_UNRESTRICTED_GUEST;
+ if ( opt_pml_enabled )
+ opt |= SECONDARY_EXEC_ENABLE_PML;
/*
* "APIC Register Virtualization" and "Virtual Interrupt Delivery"
@@ -283,6 +286,10 @@ static int vmx_init_vmcs_config(void)
*/
if ( !(_vmx_ept_vpid_cap & VMX_VPID_INVVPID_ALL_CONTEXT) )
_vmx_secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_VPID;
+
+ /* EPT A/D bits are required for PML */
+ if ( !(_vmx_ept_vpid_cap & VMX_EPT_AD_BIT) )
+ _vmx_secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
}
if ( _vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT )
@@ -303,6 +310,14 @@ static int vmx_init_vmcs_config(void)
SECONDARY_EXEC_UNRESTRICTED_GUEST);
}
+ /* PML cannot be supported if EPT is not used */
+ if ( !(_vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT) )
+ _vmx_secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
+
+ /* Turn off opt_pml_enabled if PML feature is not present */
+ if ( !(_vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_PML) )
+ opt_pml_enabled = 0;
+
if ( (_vmx_secondary_exec_control & SECONDARY_EXEC_PAUSE_LOOP_EXITING) &&
ple_gap == 0 )
{
@@ -1038,6 +1053,9 @@ static int construct_vmcs(struct vcpu *v)
__vmwrite(POSTED_INTR_NOTIFICATION_VECTOR, posted_intr_vector);
}
+ /* Disable PML here as it will only be enabled in log-dirty mode */
+ v->arch.hvm_vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
+
/* Host data selectors. */
__vmwrite(HOST_SS_SELECTOR, __HYPERVISOR_DS);
__vmwrite(HOST_DS_SELECTOR, __HYPERVISOR_DS);
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 6fce6aa..f831a78 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -215,6 +215,7 @@ extern u32 vmx_vmentry_control;
#define SECONDARY_EXEC_ENABLE_INVPCID 0x00001000
#define SECONDARY_EXEC_ENABLE_VMFUNC 0x00002000
#define SECONDARY_EXEC_ENABLE_VMCS_SHADOWING 0x00004000
+#define SECONDARY_EXEC_ENABLE_PML 0x00020000
extern u32 vmx_secondary_exec_control;
#define VMX_EPT_EXEC_ONLY_SUPPORTED 0x00000001
@@ -226,6 +227,7 @@ extern u32 vmx_secondary_exec_control;
#define VMX_EPT_INVEPT_INSTRUCTION 0x00100000
#define VMX_EPT_INVEPT_SINGLE_CONTEXT 0x02000000
#define VMX_EPT_INVEPT_ALL_CONTEXT 0x04000000
+#define VMX_EPT_AD_BIT 0x00200000
#define VMX_MISC_VMWRITE_ALL 0x20000000
@@ -274,6 +276,8 @@ extern u32 vmx_secondary_exec_control;
(vmx_pin_based_exec_control & PIN_BASED_POSTED_INTERRUPT)
#define cpu_has_vmx_vmcs_shadowing \
(vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VMCS_SHADOWING)
+#define cpu_has_vmx_pml \
+ (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_PML)
#define VMCS_RID_TYPE_MASK 0x80000000
@@ -318,6 +322,7 @@ enum vmcs_field {
GUEST_LDTR_SELECTOR = 0x0000080c,
GUEST_TR_SELECTOR = 0x0000080e,
GUEST_INTR_STATUS = 0x00000810,
+ GUEST_PML_INDEX = 0x00000812,
HOST_ES_SELECTOR = 0x00000c00,
HOST_CS_SELECTOR = 0x00000c02,
HOST_SS_SELECTOR = 0x00000c04,
@@ -331,6 +336,7 @@ enum vmcs_field {
VM_EXIT_MSR_STORE_ADDR = 0x00002006,
VM_EXIT_MSR_LOAD_ADDR = 0x00002008,
VM_ENTRY_MSR_LOAD_ADDR = 0x0000200a,
+ PML_ADDRESS = 0x0000200e,
TSC_OFFSET = 0x00002010,
VIRTUAL_APIC_PAGE_ADDR = 0x00002012,
APIC_ACCESS_ADDR = 0x00002014,
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 91c5e18..50f1bfc 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -185,6 +185,7 @@ static inline unsigned long pi_get_pir(struct pi_desc *pi_desc, int group)
#define EXIT_REASON_XSETBV 55
#define EXIT_REASON_APIC_WRITE 56
#define EXIT_REASON_INVPCID 58
+#define EXIT_REASON_PML_FULL 62
/*
* Interruption-information format
--
2.1.0
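The capability gating in vmx_init_vmcs_config can be modelled in isolation; this sketch is not Xen code, but the constants and the three checks mirror the patch (A/D bits required, EPT required, boot option cleared if the feature is unavailable):

```c
#include <assert.h>
#include <stdint.h>

/* Bit values as defined by the patch */
#define SECONDARY_EXEC_ENABLE_EPT 0x00000002u
#define SECONDARY_EXEC_ENABLE_PML 0x00020000u
#define VMX_EPT_AD_BIT            0x00200000u

/*
 * Mask SECONDARY_EXEC_ENABLE_PML out of the requested controls unless all
 * prerequisites hold, and clear the boot option if PML ends up unusable.
 */
static uint32_t gate_pml(uint32_t secondary_exec, uint32_t ept_vpid_cap,
                         int *opt_pml_enabled)
{
    /* EPT A/D bits are required for PML */
    if ( !(ept_vpid_cap & VMX_EPT_AD_BIT) )
        secondary_exec &= ~SECONDARY_EXEC_ENABLE_PML;

    /* PML cannot be supported if EPT is not used */
    if ( !(secondary_exec & SECONDARY_EXEC_ENABLE_EPT) )
        secondary_exec &= ~SECONDARY_EXEC_ENABLE_PML;

    /* Turn off the boot option if the feature is not present */
    if ( !(secondary_exec & SECONDARY_EXEC_ENABLE_PML) )
        *opt_pml_enabled = 0;

    return secondary_exec;
}
```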
* [v3 04/10] vmx: add new data structure member to support PML
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
` (2 preceding siblings ...)
2015-04-24 8:19 ` [v3 03/10] vmx: add PML definition and feature detection Kai Huang
@ 2015-04-24 8:19 ` Kai Huang
2015-04-24 8:19 ` [v3 05/10] vmx: add help functions " Kai Huang
` (6 subsequent siblings)
10 siblings, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-04-24 8:19 UTC (permalink / raw)
To: andrew.cooper3, tim, jbeulich, kevin.tian, xen-devel; +Cc: Kai Huang
A new 4K page pointer is added to arch_vmx_struct as the vcpu's PML buffer,
and a new 'status' field is added to vmx_domain to indicate whether PML is
enabled for the domain.
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
xen/include/asm-x86/hvm/vmx/vmcs.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index f831a78..441e974 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -70,8 +70,12 @@ struct ept_data {
cpumask_var_t synced_mask;
};
+#define _VMX_DOMAIN_PML_ENABLED 0
+#define VMX_DOMAIN_PML_ENABLED (1ul << _VMX_DOMAIN_PML_ENABLED)
struct vmx_domain {
unsigned long apic_access_mfn;
+ /* VMX_DOMAIN_* */
+ unsigned int status;
};
struct pi_desc {
@@ -85,6 +89,8 @@ struct pi_desc {
#define ept_get_eptp(ept) ((ept)->eptp)
#define ept_get_synced_mask(ept) ((ept)->synced_mask)
+#define NR_PML_ENTRIES 512
+
struct arch_vmx_struct {
/* Virtual address of VMCS. */
struct vmcs_struct *vmcs;
@@ -142,6 +148,8 @@ struct arch_vmx_struct {
/* Bitmap to control vmexit policy for Non-root VMREAD/VMWRITE */
struct page_info *vmread_bitmap;
struct page_info *vmwrite_bitmap;
+
+ struct page_info *pml_pg;
};
int vmx_create_vmcs(struct vcpu *v);
--
2.1.0
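A small standalone sketch (not Xen code) of the domain-wide status flag the patch introduces: a bit-position macro plus its mask, tested and set on an unsigned status word, matching the `_VMX_DOMAIN_PML_ENABLED` / `VMX_DOMAIN_PML_ENABLED` pair:

```c
#include <assert.h>

#define _VMX_DOMAIN_PML_ENABLED 0
#define VMX_DOMAIN_PML_ENABLED  (1ul << _VMX_DOMAIN_PML_ENABLED)

/* Cut-down model of struct vmx_domain from the patch */
struct vmx_domain {
    unsigned long apic_access_mfn;
    unsigned int status;            /* VMX_DOMAIN_* flags */
};

/* Test the flag, as vmx_domain_pml_enabled() later does */
static int pml_enabled(const struct vmx_domain *vmx)
{
    return !!(vmx->status & VMX_DOMAIN_PML_ENABLED);
}
```

Using a bit-position macro alongside the mask leaves room for future `VMX_DOMAIN_*` flags in the same word.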
* [v3 05/10] vmx: add help functions to support PML
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
` (3 preceding siblings ...)
2015-04-24 8:19 ` [v3 04/10] vmx: add new data structure member to support PML Kai Huang
@ 2015-04-24 8:19 ` Kai Huang
2015-04-24 8:19 ` [v3 06/10] vmx: handle PML buffer full VMEXIT Kai Huang
` (5 subsequent siblings)
10 siblings, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-04-24 8:19 UTC (permalink / raw)
To: andrew.cooper3, tim, jbeulich, kevin.tian, xen-devel; +Cc: Kai Huang
This patch adds helper functions to enable/disable PML and to flush the PML
buffer, for a single vcpu and for a whole domain, for later use.
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
xen/arch/x86/hvm/vmx/vmcs.c | 179 +++++++++++++++++++++++++++++++++++++
xen/include/asm-x86/hvm/vmx/vmcs.h | 9 ++
2 files changed, 188 insertions(+)
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 04fdca3..f797fde 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1323,6 +1323,185 @@ void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector)
&v->arch.hvm_vmx.eoi_exitmap_changed);
}
+bool_t vmx_vcpu_pml_enabled(const struct vcpu *v)
+{
+ return !!(v->arch.hvm_vmx.secondary_exec_control &
+ SECONDARY_EXEC_ENABLE_PML);
+}
+
+int vmx_vcpu_enable_pml(struct vcpu *v)
+{
+ if ( vmx_vcpu_pml_enabled(v) )
+ return 0;
+
+ v->arch.hvm_vmx.pml_pg = v->domain->arch.paging.alloc_page(v->domain);
+ if ( !v->arch.hvm_vmx.pml_pg )
+ return -ENOMEM;
+
+ vmx_vmcs_enter(v);
+
+ __vmwrite(PML_ADDRESS, page_to_mfn(v->arch.hvm_vmx.pml_pg) << PAGE_SHIFT);
+ __vmwrite(GUEST_PML_INDEX, NR_PML_ENTRIES - 1);
+
+ v->arch.hvm_vmx.secondary_exec_control |= SECONDARY_EXEC_ENABLE_PML;
+
+ __vmwrite(SECONDARY_VM_EXEC_CONTROL,
+ v->arch.hvm_vmx.secondary_exec_control);
+
+ vmx_vmcs_exit(v);
+
+ return 0;
+}
+
+void vmx_vcpu_disable_pml(struct vcpu *v)
+{
+ if ( !vmx_vcpu_pml_enabled(v) )
+ return;
+
+ /* Make sure we don't lose any logged GPAs */
+ vmx_vcpu_flush_pml_buffer(v);
+
+ vmx_vmcs_enter(v);
+
+ v->arch.hvm_vmx.secondary_exec_control &= ~SECONDARY_EXEC_ENABLE_PML;
+ __vmwrite(SECONDARY_VM_EXEC_CONTROL,
+ v->arch.hvm_vmx.secondary_exec_control);
+
+ vmx_vmcs_exit(v);
+
+ v->domain->arch.paging.free_page(v->domain, v->arch.hvm_vmx.pml_pg);
+ v->arch.hvm_vmx.pml_pg = NULL;
+}
+
+void vmx_vcpu_flush_pml_buffer(struct vcpu *v)
+{
+ uint64_t *pml_buf;
+ unsigned long pml_idx;
+
+ ASSERT((v == current) || (!vcpu_runnable(v) && !v->is_running));
+ ASSERT(vmx_vcpu_pml_enabled(v));
+
+ vmx_vmcs_enter(v);
+
+ __vmread(GUEST_PML_INDEX, &pml_idx);
+
+ /* Do nothing if PML buffer is empty */
+ if ( pml_idx == (NR_PML_ENTRIES - 1) )
+ goto out;
+
+ pml_buf = __map_domain_page(v->arch.hvm_vmx.pml_pg);
+
+ /*
+ * PML index can be either 2^16-1 (buffer is full), or 0 ~ NR_PML_ENTRIES-1
+ * (buffer is not full); in the latter case the PML index always points to
+ * the next available entry.
+ */
+ if ( pml_idx >= NR_PML_ENTRIES )
+ pml_idx = 0;
+ else
+ pml_idx++;
+
+ for ( ; pml_idx < NR_PML_ENTRIES; pml_idx++ )
+ {
+ unsigned long gfn = pml_buf[pml_idx] >> PAGE_SHIFT;
+
+ /*
+ * Need to change type from log-dirty to normal memory for logged GFN.
+ * hap_track_dirty_vram depends on it to work. And we mark all logged
+ * GFNs to be dirty, as we cannot be sure whether it's safe to ignore
+ * GFNs on which p2m_change_type_one returns failure. The failure cases
+ * are very rare, and additional cost is negligible, but a missing mark
+ * is extremely difficult to debug.
+ */
+ p2m_change_type_one(v->domain, gfn, p2m_ram_logdirty, p2m_ram_rw);
+ paging_mark_gfn_dirty(v->domain, gfn);
+ }
+
+ unmap_domain_page(pml_buf);
+
+ /* Reset PML index */
+ __vmwrite(GUEST_PML_INDEX, NR_PML_ENTRIES - 1);
+
+ out:
+ vmx_vmcs_exit(v);
+}
+
+bool_t vmx_domain_pml_enabled(const struct domain *d)
+{
+ return !!(d->arch.hvm_domain.vmx.status & VMX_DOMAIN_PML_ENABLED);
+}
+
+/*
+ * This function enables PML for a particular domain. It should be called
+ * when the domain is paused.
+ *
+ * PML needs to be enabled globally for all vcpus of the domain, as the PML
+ * buffer and PML index are per-vcpu, but the EPT table is shared by all
+ * vcpus, so enabling PML on only some of the vcpus won't work.
+ */
+int vmx_domain_enable_pml(struct domain *d)
+{
+ struct vcpu *v;
+ int rc;
+
+ ASSERT(atomic_read(&d->pause_count));
+
+ if ( vmx_domain_pml_enabled(d) )
+ return 0;
+
+ for_each_vcpu( d, v )
+ if ( (rc = vmx_vcpu_enable_pml(v)) != 0 )
+ goto error;
+
+ d->arch.hvm_domain.vmx.status |= VMX_DOMAIN_PML_ENABLED;
+
+ return 0;
+
+ error:
+ for_each_vcpu( d, v )
+ if ( vmx_vcpu_pml_enabled(v) )
+ vmx_vcpu_disable_pml(v);
+ return rc;
+}
+
+/*
+ * Disable PML for a particular domain. Called when the domain is paused.
+ *
+ * As with enabling PML for a domain, disabling it should be done for all
+ * vcpus at once.
+ */
+void vmx_domain_disable_pml(struct domain *d)
+{
+ struct vcpu *v;
+
+ ASSERT(atomic_read(&d->pause_count));
+
+ if ( !vmx_domain_pml_enabled(d) )
+ return;
+
+ for_each_vcpu( d, v )
+ vmx_vcpu_disable_pml(v);
+
+ d->arch.hvm_domain.vmx.status &= ~VMX_DOMAIN_PML_ENABLED;
+}
+
+/*
+ * Flush the PML buffers of all vcpus, and record the logged dirty pages in
+ * the log-dirty radix tree. Called when the domain is paused.
+ */
+void vmx_domain_flush_pml_buffers(struct domain *d)
+{
+ struct vcpu *v;
+
+ ASSERT(atomic_read(&d->pause_count));
+
+ if ( !vmx_domain_pml_enabled(d) )
+ return;
+
+ for_each_vcpu( d, v )
+ vmx_vcpu_flush_pml_buffer(v);
+}
+
int vmx_create_vmcs(struct vcpu *v)
{
struct arch_vmx_struct *arch_vmx = &v->arch.hvm_vmx;
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 441e974..6c13dbb 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -499,6 +499,15 @@ static inline int vmx_add_host_load_msr(u32 msr)
DECLARE_PER_CPU(bool_t, vmxon);
+bool_t vmx_vcpu_pml_enabled(const struct vcpu *v);
+int vmx_vcpu_enable_pml(struct vcpu *v);
+void vmx_vcpu_disable_pml(struct vcpu *v);
+void vmx_vcpu_flush_pml_buffer(struct vcpu *v);
+bool_t vmx_domain_pml_enabled(const struct domain *d);
+int vmx_domain_enable_pml(struct domain *d);
+void vmx_domain_disable_pml(struct domain *d);
+void vmx_domain_flush_pml_buffers(struct domain *d);
+
#endif /* ASM_X86_HVM_VMX_VMCS_H__ */
/*
--
2.1.0
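The index handling in vmx_vcpu_flush_pml_buffer is the subtle part of the patch: hardware decrements GUEST_PML_INDEX from NR_PML_ENTRIES-1 as it writes entries, and reports 2^16-1 once the buffer is full. A standalone model (not Xen code) of computing the first valid slot:

```c
#include <assert.h>

#define NR_PML_ENTRIES 512

/*
 * Given the GUEST_PML_INDEX value read from the VMCS, return the index of
 * the first logged entry, or NR_PML_ENTRIES if the buffer is empty.
 * Hardware writes at the current index and then decrements it, so the
 * logged entries occupy pml_idx+1 .. NR_PML_ENTRIES-1 (or the whole buffer
 * when the index has wrapped to 2^16-1).
 */
static unsigned long first_valid_slot(unsigned long pml_idx)
{
    if ( pml_idx == NR_PML_ENTRIES - 1 )  /* untouched: buffer is empty */
        return NR_PML_ENTRIES;

    return pml_idx >= NR_PML_ENTRIES ? 0 : pml_idx + 1;
}
```

The flush loop then walks from this slot to NR_PML_ENTRIES-1, marking each logged GFN dirty, and finally resets GUEST_PML_INDEX to NR_PML_ENTRIES-1.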
* [v3 06/10] vmx: handle PML buffer full VMEXIT
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
` (4 preceding siblings ...)
2015-04-24 8:19 ` [v3 05/10] vmx: add help functions " Kai Huang
@ 2015-04-24 8:19 ` Kai Huang
2015-04-24 8:19 ` [v3 07/10] vmx: handle PML enabling in vmx_vcpu_initialise Kai Huang
` (4 subsequent siblings)
10 siblings, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-04-24 8:19 UTC (permalink / raw)
To: andrew.cooper3, tim, jbeulich, kevin.tian, xen-devel; +Cc: Kai Huang
We need to flush the PML buffer when it becomes full.
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
xen/arch/x86/hvm/vmx/vmx.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 6c4f78c..f11ac46 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3178,6 +3178,10 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
vmx_handle_apic_write();
break;
+ case EXIT_REASON_PML_FULL:
+ vmx_vcpu_flush_pml_buffer(v);
+ break;
+
case EXIT_REASON_ACCESS_GDTR_OR_IDTR:
case EXIT_REASON_ACCESS_LDTR_OR_TR:
case EXIT_REASON_VMX_PREEMPTION_TIMER_EXPIRED:
--
2.1.0
* [v3 07/10] vmx: handle PML enabling in vmx_vcpu_initialise
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
` (5 preceding siblings ...)
2015-04-24 8:19 ` [v3 06/10] vmx: handle PML buffer full VMEXIT Kai Huang
@ 2015-04-24 8:19 ` Kai Huang
2015-04-24 8:19 ` [v3 08/10] vmx: disable PML in vmx_vcpu_destroy Kai Huang
` (3 subsequent siblings)
10 siblings, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-04-24 8:19 UTC (permalink / raw)
To: andrew.cooper3, tim, jbeulich, kevin.tian, xen-devel; +Cc: Kai Huang
It's possible that the domain is already in log-dirty mode when a vcpu is
created, in which case we should enable PML for this vcpu if PML has been
enabled for the domain.
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
xen/arch/x86/hvm/vmx/vmx.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f11ac46..e5471b8 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -117,6 +117,29 @@ static int vmx_vcpu_initialise(struct vcpu *v)
return rc;
}
+ /*
+ * It's rare but still possible that the domain is already in log-dirty
+ * mode when the vcpu is created (noted by Tim), in which case we should
+ * enable PML for this vcpu if PML has been enabled for the domain, and
+ * a failure to enable it results in failure to create the vcpu.
+ *
+ * Note that even with no vcpus created, vmx_domain_enable_pml returns
+ * success, in which case vmx_domain_pml_enabled also returns true. And
+ * even if this is the first vcpu created with vmx_domain_pml_enabled
+ * true, a failure to enable PML still fails vcpu creation, to avoid
+ * complicated logic for reverting a PML-style EPT table to a
+ * non-PML-style one.
+ */
+ if ( vmx_domain_pml_enabled(v->domain) )
+ {
+ if ( (rc = vmx_vcpu_enable_pml(v)) != 0 )
+ {
+ dprintk(XENLOG_ERR, "%pv: Failed to enable PML.\n", v);
+ vmx_destroy_vmcs(v);
+ return rc;
+ }
+ }
+
vpmu_initialise(v);
vmx_install_vlapic_mapping(v);
--
2.1.0
* [v3 08/10] vmx: disable PML in vmx_vcpu_destroy
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
` (6 preceding siblings ...)
2015-04-24 8:19 ` [v3 07/10] vmx: handle PML enabling in vmx_vcpu_initialise Kai Huang
@ 2015-04-24 8:19 ` Kai Huang
2015-04-24 8:19 ` [v3 09/10] log-dirty: refine common code to support PML Kai Huang
` (2 subsequent siblings)
10 siblings, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-04-24 8:19 UTC (permalink / raw)
To: andrew.cooper3, tim, jbeulich, kevin.tian, xen-devel; +Cc: Kai Huang
It's possible that the domain still remains in log-dirty mode when it is
about to be destroyed, in which case we should disable PML for it manually.
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
xen/arch/x86/hvm/vmx/vmx.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index e5471b8..e189424 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -153,6 +153,14 @@ static int vmx_vcpu_initialise(struct vcpu *v)
static void vmx_vcpu_destroy(struct vcpu *v)
{
+ /*
+ * There are cases where the domain still remains in log-dirty mode when
+ * it is about to be destroyed (e.g. the user types 'xl destroy <dom>'),
+ * in which case we should disable PML manually here. Note that
+ * vmx_vcpu_destroy is called prior to vmx_domain_destroy, so we need to
+ * disable PML for each vcpu separately here.
+ */
+ vmx_vcpu_disable_pml(v);
vmx_destroy_vmcs(v);
vpmu_destroy(v);
passive_domain_destroy(v);
--
2.1.0
* [v3 09/10] log-dirty: refine common code to support PML
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
` (7 preceding siblings ...)
2015-04-24 8:19 ` [v3 08/10] vmx: disable PML in vmx_vcpu_destroy Kai Huang
@ 2015-04-24 8:19 ` Kai Huang
2015-04-24 8:19 ` [v3 10/10] p2m/ept: enable PML in p2m-ept for log-dirty Kai Huang
2015-04-30 11:04 ` [v3 00/10] PML (Page Modification Logging) support Tim Deegan
10 siblings, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-04-24 8:19 UTC (permalink / raw)
To: andrew.cooper3, tim, jbeulich, kevin.tian, xen-devel; +Cc: Kai Huang
With PML, dirty GPAs may still be logged in vcpus' PML buffers when userspace
peeks at or clears dirty pages, so we need to flush them before reporting
dirty pages to userspace. This applies to both video RAM tracking and
paging_log_dirty_op.
This patch adds new p2m layer functions to enable/disable PML and flush PML
buffers. The new functions are named generically, to cover potential future
PML-like features on other platforms.
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
xen/arch/x86/mm/hap/hap.c | 29 +++++++++++++++++++++++++----
xen/arch/x86/mm/p2m.c | 36 ++++++++++++++++++++++++++++++++++++
xen/arch/x86/mm/paging.c | 10 ++++++++++
xen/include/asm-x86/p2m.h | 11 +++++++++++
4 files changed, 82 insertions(+), 4 deletions(-)
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 4ecb2e2..1099670 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -121,7 +121,10 @@ int hap_track_dirty_vram(struct domain *d,
p2m_change_type_range(d, ostart, oend,
p2m_ram_logdirty, p2m_ram_rw);
- /* set l1e entries of range within P2M table to be read-only. */
+ /*
+ * switch vram to log dirty mode, either by setting l1e entries of
+ * P2M table to be read-only, or via hardware-assisted log-dirty.
+ */
p2m_change_type_range(d, begin_pfn, begin_pfn + nr,
p2m_ram_rw, p2m_ram_logdirty);
@@ -135,6 +138,9 @@ int hap_track_dirty_vram(struct domain *d,
domain_pause(d);
+ /* flush dirty GFNs potentially cached by hardware */
+ p2m_flush_hardware_cached_dirty(d);
+
/* get the bitmap */
paging_log_dirty_range(d, begin_pfn, nr, dirty_bitmap);
@@ -190,9 +196,15 @@ static int hap_enable_log_dirty(struct domain *d, bool_t log_global)
d->arch.paging.mode |= PG_log_dirty;
paging_unlock(d);
+ /* enable hardware-assisted log-dirty if it is supported */
+ p2m_enable_hardware_log_dirty(d);
+
if ( log_global )
{
- /* set l1e entries of P2M table to be read-only. */
+ /*
+ * switch to log dirty mode, either by setting l1e entries of P2M table
+ * to be read-only, or via hardware-assisted log-dirty.
+ */
p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
flush_tlb_mask(d->domain_dirty_cpumask);
}
@@ -205,14 +217,23 @@ static int hap_disable_log_dirty(struct domain *d)
d->arch.paging.mode &= ~PG_log_dirty;
paging_unlock(d);
- /* set l1e entries of P2M table with normal mode */
+ /* disable hardware-assisted log-dirty if it is supported */
+ p2m_disable_hardware_log_dirty(d);
+
+ /*
+ * switch to normal mode, either by setting l1e entries of P2M table to
+ * normal mode, or via hardware-assisted log-dirty.
+ */
p2m_change_entry_type_global(d, p2m_ram_logdirty, p2m_ram_rw);
return 0;
}
static void hap_clean_dirty_bitmap(struct domain *d)
{
- /* set l1e entries of P2M table to be read-only. */
+ /*
+ * switch to log-dirty mode, either by setting l1e entries of P2M table to
+ * be read-only, or via hardware-assisted log-dirty.
+ */
p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
flush_tlb_mask(d->domain_dirty_cpumask);
}
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index df4a485..67edf89 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -239,6 +239,42 @@ void p2m_memory_type_changed(struct domain *d)
}
}
+void p2m_enable_hardware_log_dirty(struct domain *d)
+{
+ struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+ if ( p2m->enable_hardware_log_dirty )
+ {
+ p2m_lock(p2m);
+ p2m->enable_hardware_log_dirty(p2m);
+ p2m_unlock(p2m);
+ }
+}
+
+void p2m_disable_hardware_log_dirty(struct domain *d)
+{
+ struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+ if ( p2m->disable_hardware_log_dirty )
+ {
+ p2m_lock(p2m);
+ p2m->disable_hardware_log_dirty(p2m);
+ p2m_unlock(p2m);
+ }
+}
+
+void p2m_flush_hardware_cached_dirty(struct domain *d)
+{
+ struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+ if ( p2m->flush_hardware_cached_dirty )
+ {
+ p2m_lock(p2m);
+ p2m->flush_hardware_cached_dirty(p2m);
+ p2m_unlock(p2m);
+ }
+}
+
mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
unsigned int *page_order, bool_t locked)
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 77c929b..59d4720 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -422,7 +422,17 @@ static int paging_log_dirty_op(struct domain *d,
int i4, i3, i2;
if ( !resuming )
+ {
domain_pause(d);
+
+ /*
+ * Flush dirty GFNs potentially cached by hardware. This is only needed
+ * when not resuming, as the domain was already paused in the resuming
+ * case, so no new pages can have been dirtied since.
+ */
+ p2m_flush_hardware_cached_dirty(d);
+ }
+
paging_lock(d);
if ( !d->arch.paging.preempt.dom )
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 91fc099..5b5785e 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -233,6 +233,9 @@ struct p2m_domain {
p2m_access_t *p2ma,
p2m_query_t q,
unsigned int *page_order);
+ void (*enable_hardware_log_dirty)(struct p2m_domain *p2m);
+ void (*disable_hardware_log_dirty)(struct p2m_domain *p2m);
+ void (*flush_hardware_cached_dirty)(struct p2m_domain *p2m);
void (*change_entry_type_global)(struct p2m_domain *p2m,
p2m_type_t ot,
p2m_type_t nt);
@@ -507,6 +510,14 @@ void guest_physmap_remove_page(struct domain *d,
/* Set a p2m range as populate-on-demand */
int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
unsigned int order);
+/* Enable hardware-assisted log-dirty. */
+void p2m_enable_hardware_log_dirty(struct domain *d);
+
+/* Disable hardware-assisted log-dirty */
+void p2m_disable_hardware_log_dirty(struct domain *d);
+
+/* Flush hardware cached dirty GFNs */
+void p2m_flush_hardware_cached_dirty(struct domain *d);
/* Change types across all p2m entries in a domain */
void p2m_change_entry_type_global(struct domain *d,
--
2.1.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
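[Editorial note: the hook pattern added by the patch above — three optional p2m callbacks invoked under the p2m lock only when populated — can be modeled with a small Python sketch. Names mirror the patch; the lock and hook bodies are stand-ins, not Xen code.]

```python
class P2M:
    """Toy model of struct p2m_domain with the three optional
    hardware log-dirty hooks added by the patch above."""

    def __init__(self, enable=None, disable=None, flush=None):
        # Hooks stay None unless the platform provides hardware-assisted
        # log-dirty (e.g. PML on Intel), mirroring the NULL checks in C.
        self.enable_hardware_log_dirty = enable
        self.disable_hardware_log_dirty = disable
        self.flush_hardware_cached_dirty = flush

    def _call_locked(self, hook):
        if hook is None:
            return False      # no hardware assistance: silently do nothing
        # p2m_lock()/p2m_unlock() stand-in: the real code serializes here.
        hook(self)
        return True

def p2m_flush_hardware_cached_dirty(p2m):
    """Wrapper as in the patch: only acts if the hook is populated."""
    return p2m._call_locked(p2m.flush_hardware_cached_dirty)
```

A caller such as paging_log_dirty_op() can thus invoke the wrapper unconditionally; on hardware without PML it degrades to a no-op.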
* [v3 10/10] p2m/ept: enable PML in p2m-ept for log-dirty
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
` (8 preceding siblings ...)
2015-04-24 8:19 ` [v3 09/10] log-dirty: refine common code to support PML Kai Huang
@ 2015-04-24 8:19 ` Kai Huang
2015-04-30 11:04 ` [v3 00/10] PML (Page Modification Logging) support Tim Deegan
10 siblings, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-04-24 8:19 UTC (permalink / raw)
To: andrew.cooper3, tim, jbeulich, kevin.tian, xen-devel; +Cc: Kai Huang
This patch first enables EPT A/D bits if PML is used, as PML depends on EPT
A/D bits to work. The A bit is set for all present p2m types in middle and leaf
EPT entries, and the D bit is set for all writable types in leaf EPT entries,
except for the log-dirty type with PML.
With PML, for 4K pages, instead of setting the EPT entry to read-only, we only
need to clear the D bit in order to log that GFN. For superpages, we still need
to set them to read-only, as a superpage has to be split into 4K pages on EPT
violation.
Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
---
xen/arch/x86/mm/p2m-ept.c | 79 ++++++++++++++++++++++++++++++++++----
xen/include/asm-x86/hvm/vmx/vmcs.h | 3 +-
xen/include/asm-x86/hvm/vmx/vmx.h | 3 +-
3 files changed, 76 insertions(+), 9 deletions(-)
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 5e95a83..a1b9eaf 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -102,9 +102,20 @@ static int atomic_write_ept_entry(ept_entry_t *entryptr, ept_entry_t new,
return rc;
}
-static void ept_p2m_type_to_flags(ept_entry_t *entry, p2m_type_t type, p2m_access_t access)
+static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
+ p2m_type_t type, p2m_access_t access)
{
- /* First apply type permissions */
+ /*
+ * First apply type permissions.
+ *
+ * A/D bits are also set manually to avoid the overhead of the MMU having
+ * to set them later. Both bits are safe to update directly, as they are
+ * ignored by the processor if EPT A/D bits are not enabled.
+ *
+ * The A bit is set for all present p2m types in middle and leaf EPT
+ * entries. The D bit is set for all writable types in leaf EPT entries,
+ * except for the log-dirty type with PML.
+ */
switch(type)
{
case p2m_invalid:
@@ -118,27 +129,51 @@ static void ept_p2m_type_to_flags(ept_entry_t *entry, p2m_type_t type, p2m_acces
break;
case p2m_ram_rw:
entry->r = entry->w = entry->x = 1;
+ entry->a = entry->d = 1;
break;
case p2m_mmio_direct:
entry->r = entry->x = 1;
entry->w = !rangeset_contains_singleton(mmio_ro_ranges,
entry->mfn);
+ entry->a = 1;
+ entry->d = entry->w;
break;
case p2m_ram_logdirty:
+ entry->r = entry->x = 1;
+ /*
+ * With PML, a 4K page does not have to be write protected: clearing
+ * its D bit is enough to log it. A superpage still has to be write
+ * protected so that it can be split into 4K pages on EPT violation.
+ */
+ if ( vmx_domain_pml_enabled(p2m->domain)
+ && !is_epte_superpage(entry) )
+ entry->w = 1;
+ else
+ entry->w = 0;
+ entry->a = 1;
+ /* For both PML and non-PML cases we clear the D bit anyway */
+ entry->d = 0;
+ break;
case p2m_ram_ro:
case p2m_ram_shared:
entry->r = entry->x = 1;
entry->w = 0;
+ entry->a = 1;
+ entry->d = 0;
break;
case p2m_grant_map_rw:
case p2m_map_foreign:
entry->r = entry->w = 1;
entry->x = 0;
+ entry->a = entry->d = 1;
break;
case p2m_grant_map_ro:
case p2m_mmio_write_dm:
entry->r = 1;
entry->w = entry->x = 0;
+ entry->a = 1;
+ entry->d = 0;
break;
}
@@ -194,6 +229,8 @@ static int ept_set_middle_entry(struct p2m_domain *p2m, ept_entry_t *ept_entry)
ept_entry->access = p2m->default_access;
ept_entry->r = ept_entry->w = ept_entry->x = 1;
+ /* Manually set A bit to avoid overhead of MMU having to write it later. */
+ ept_entry->a = 1;
return 1;
}
@@ -244,10 +281,9 @@ static int ept_split_super_page(struct p2m_domain *p2m, ept_entry_t *ept_entry,
epte->sp = (level > 1);
epte->mfn += i * trunk;
epte->snp = (iommu_enabled && iommu_snoop);
- ASSERT(!epte->rsvd1);
ASSERT(!epte->avail3);
- ept_p2m_type_to_flags(epte, epte->sa_p2mt, epte->access);
+ ept_p2m_type_to_flags(p2m, epte, epte->sa_p2mt, epte->access);
if ( (level - 1) == target )
continue;
@@ -489,7 +525,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
{
e.sa_p2mt = p2m_is_logdirty_range(p2m, gfn + i, gfn + i)
? p2m_ram_logdirty : p2m_ram_rw;
- ept_p2m_type_to_flags(&e, e.sa_p2mt, e.access);
+ ept_p2m_type_to_flags(p2m, &e, e.sa_p2mt, e.access);
}
e.recalc = 0;
wrc = atomic_write_ept_entry(&epte[i], e, level);
@@ -541,7 +577,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
e.ipat = ipat;
e.recalc = 0;
if ( recalc && p2m_is_changeable(e.sa_p2mt) )
- ept_p2m_type_to_flags(&e, e.sa_p2mt, e.access);
+ ept_p2m_type_to_flags(p2m, &e, e.sa_p2mt, e.access);
wrc = atomic_write_ept_entry(&epte[i], e, level);
ASSERT(wrc == 0);
}
@@ -752,7 +788,7 @@ ept_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
if ( ept_entry->mfn == new_entry.mfn )
need_modify_vtd_table = 0;
- ept_p2m_type_to_flags(&new_entry, p2mt, p2ma);
+ ept_p2m_type_to_flags(p2m, &new_entry, p2mt, p2ma);
}
rc = atomic_write_ept_entry(ept_entry, new_entry, target);
@@ -1053,6 +1089,26 @@ void ept_sync_domain(struct p2m_domain *p2m)
__ept_sync_domain, p2m, 1);
}
+static void ept_enable_pml(struct p2m_domain *p2m)
+{
+ /*
+ * No need to check if vmx_domain_enable_pml has succeeded or not, as
+ * ept_p2m_type_to_flags will do the check, and write protection will be
+ * used if PML is not enabled.
+ */
+ vmx_domain_enable_pml(p2m->domain);
+}
+
+static void ept_disable_pml(struct p2m_domain *p2m)
+{
+ vmx_domain_disable_pml(p2m->domain);
+}
+
+static void ept_flush_pml_buffers(struct p2m_domain *p2m)
+{
+ vmx_domain_flush_pml_buffers(p2m->domain);
+}
+
int ept_p2m_init(struct p2m_domain *p2m)
{
struct ept_data *ept = &p2m->ept;
@@ -1070,6 +1126,15 @@ int ept_p2m_init(struct p2m_domain *p2m)
/* set EPT page-walk length, now it's actual walk length - 1, i.e. 3 */
ept->ept_wl = 3;
+ if ( cpu_has_vmx_pml )
+ {
+ /* Enable EPT A/D bits if we are going to use PML */
+ ept->ept_ad = 1;
+ p2m->enable_hardware_log_dirty = ept_enable_pml;
+ p2m->disable_hardware_log_dirty = ept_disable_pml;
+ p2m->flush_hardware_cached_dirty = ept_flush_pml_buffers;
+ }
+
if ( !zalloc_cpumask_var(&ept->synced_mask) )
return -ENOMEM;
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 6c13dbb..1104bda 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -62,7 +62,8 @@ struct ept_data {
struct {
u64 ept_mt :3,
ept_wl :3,
- rsvd :6,
+ ept_ad :1, /* bit 6 - enable EPT A/D bits */
+ rsvd :5,
asr :52;
};
u64 eptp;
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 50f1bfc..35f804a 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -37,7 +37,8 @@ typedef union {
emt : 3, /* bits 5:3 - EPT Memory type */
ipat : 1, /* bit 6 - Ignore PAT memory type */
sp : 1, /* bit 7 - Is this a superpage? */
- rsvd1 : 2, /* bits 9:8 - Reserved for future use */
+ a : 1, /* bit 8 - Access bit */
+ d : 1, /* bit 9 - Dirty bit */
recalc : 1, /* bit 10 - Software available 1 */
snp : 1, /* bit 11 - VT-d snoop control in shared
EPT/VT-d usage */
--
2.1.0
^ permalink raw reply related [flat|nested] 22+ messages in thread
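[Editorial note: the permission/A/D-bit policy that patch 10 applies in ept_p2m_type_to_flags() can be summarized by a small Python model. This is a sketch of the policy only; the type names and the (r, w, x, a, d) tuple layout are illustrative, not Xen's actual entry encoding.]

```python
def ept_type_to_flags(p2m_type, pml_enabled, superpage=False):
    """Return (r, w, x, a, d) for an EPT leaf entry.

    Policy from the patch: the A bit is set for every present type;
    the D bit is set for writable types, except log-dirty pages, whose
    D bit is cleared so hardware can log the first write via PML.
    """
    if p2m_type == "ram_rw":
        return (1, 1, 1, 1, 1)
    if p2m_type == "ram_logdirty":
        # With PML a 4K page stays writable (only D is cleared); a
        # superpage is still write-protected so an EPT violation can
        # split it into 4K pages.
        w = 1 if (pml_enabled and not superpage) else 0
        return (1, w, 1, 1, 0)
    if p2m_type in ("ram_ro", "ram_shared"):
        return (1, 0, 1, 1, 0)
    return (0, 0, 0, 0, 0)   # not-present / types not modelled here
```

Without PML (or for superpages) log-dirty pages fall back to the pre-existing write-protection path, exactly as the comment in ept_enable_pml() relies on.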
* Re: [v3 01/10] vmx: add new boot parameter to control PML enabling
2015-04-24 8:19 ` [v3 01/10] vmx: add new boot parameter to control PML enabling Kai Huang
@ 2015-04-24 10:46 ` Andrew Cooper
2015-04-24 14:33 ` Jan Beulich
1 sibling, 0 replies; 22+ messages in thread
From: Andrew Cooper @ 2015-04-24 10:46 UTC (permalink / raw)
To: Kai Huang, tim, jbeulich, kevin.tian, xen-devel
On 24/04/15 09:19, Kai Huang wrote:
> A top level EPT parameter "ept=<options>" and a sub boolean "opt_pml_enabled"
> are added to control PML. Other booleans can be further added for any other EPT
> related features.
>
> The document description for the new parameter is also added.
>
> Signed-off-by: Kai Huang <kai.huang@linux.intel.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> docs/misc/xen-command-line.markdown | 15 +++++++++++++++
> xen/arch/x86/hvm/vmx/vmcs.c | 30 ++++++++++++++++++++++++++++++
> 2 files changed, 45 insertions(+)
>
> diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
> index 1dda1f0..4889e27 100644
> --- a/docs/misc/xen-command-line.markdown
> +++ b/docs/misc/xen-command-line.markdown
> @@ -685,6 +685,21 @@ requirement can be relaxed. This option is particularly useful for nested
> virtualization, to allow the L1 hypervisor to use EPT even if the L0 hypervisor
> does not provide VM\_ENTRY\_LOAD\_GUEST\_PAT.
>
> +### ept (Intel)
> +> `= List of ( pml<boolean> )`
> +
> +> Default: `false`
> +
> +Controls EPT-related features. Currently the only controllable feature is
> +Page Modification Logging (PML), as a boolean.
> +
> +PML is a hardware feature on Intel's Broadwell Server and later platforms.
> +It reduces the hypervisor overhead of the log-dirty mechanism by automatically
> +recording GPAs (guest physical addresses) when guest memory becomes dirty,
> +significantly reducing the number of EPT violations caused by the write
> +protection of guest memory, which the log-dirty mechanism required before
> +PML.
> +
> ### gdb
> > `= <baud>[/<clock_hz>][,DPS[,<io-base>[,<irq>[,<port-bdf>[,<bridge-bdf>]]]] | pci | amt ] `
>
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index 63007a9..79efa42 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -64,6 +64,36 @@ integer_param("ple_gap", ple_gap);
> static unsigned int __read_mostly ple_window = 4096;
> integer_param("ple_window", ple_window);
>
> +static bool_t __read_mostly opt_pml_enabled = 0;
> +
> +/*
> + * The 'ept' parameter controls functionalities that depend on, or impact the
> + * EPT mechanism. Optional comma separated value may contain:
> + *
> + * pml Enable PML
> + */
> +static void __init parse_ept_param(char *s)
> +{
> + char *ss;
> +
> + do {
> + bool_t val = !!strncmp(s, "no-", 3);
> + if ( !val )
> + s += 3;
> +
> + ss = strchr(s, ',');
> + if ( ss )
> + *ss = '\0';
> +
> + if ( !strcmp(s, "pml") )
> + opt_pml_enabled = val;
> +
> + s = ss + 1;
> + } while ( ss );
> +}
> +
> +custom_param("ept", parse_ept_param);
> +
> /* Dynamic (run-time adjusted) execution control flags. */
> u32 vmx_pin_based_exec_control __read_mostly;
> u32 vmx_cpu_based_exec_control __read_mostly;
^ permalink raw reply [flat|nested] 22+ messages in thread
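[Editorial note: the option parser reviewed above accepts a comma-separated list in which each token may carry a "no-" prefix to negate it. Its behavior can be sketched in Python; this is a model of the parsing logic only, not the Xen implementation.]

```python
def parse_ept_param(s):
    """Model of parse_ept_param(): comma-separated tokens, where an
    optional 'no-' prefix clears the option instead of setting it."""
    opts = {"pml": False}        # default matches opt_pml_enabled = 0
    for tok in s.split(","):
        val = not tok.startswith("no-")
        if not val:
            tok = tok[3:]        # strip the 'no-' prefix, as the C code does
        if tok in opts:          # unknown tokens are silently ignored
            opts[tok] = val
    return opts
```

So booting with `ept=pml` enables PML, and later tokens win: `ept=pml,no-pml` leaves it disabled.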
* Re: [v3 01/10] vmx: add new boot parameter to control PML enabling
2015-04-24 8:19 ` [v3 01/10] vmx: add new boot parameter to control PML enabling Kai Huang
2015-04-24 10:46 ` Andrew Cooper
@ 2015-04-24 14:33 ` Jan Beulich
2015-04-25 15:00 ` Kai Huang
1 sibling, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2015-04-24 14:33 UTC (permalink / raw)
To: Kai Huang; +Cc: andrew.cooper3, kevin.tian, tim, xen-devel
>>> On 24.04.15 at 10:19, <kai.huang@linux.intel.com> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -64,6 +64,36 @@ integer_param("ple_gap", ple_gap);
> static unsigned int __read_mostly ple_window = 4096;
> integer_param("ple_window", ple_window);
>
> +static bool_t __read_mostly opt_pml_enabled = 0;
> +
> +/*
> + * The 'ept' parameter controls functionalities that depend on, or impact the
> + * EPT mechanism. Optional comma separated value may contain:
> + *
> + * pml Enable PML
> + */
> +static void __init parse_ept_param(char *s)
> +{
> + char *ss;
> +
> + do {
> + bool_t val = !!strncmp(s, "no-", 3);
> + if ( !val )
In case another round is needed, a blank line is missing above.
> + s += 3;
> +
> + ss = strchr(s, ',');
> + if ( ss )
> + *ss = '\0';
> +
> + if ( !strcmp(s, "pml") )
> + opt_pml_enabled = val;
> +
> + s = ss + 1;
> + } while ( ss );
> +}
> +
> +custom_param("ept", parse_ept_param);
And a superfluous blank line would want to be dropped here.
Jan
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v3 01/10] vmx: add new boot parameter to control PML enabling
2015-04-24 14:33 ` Jan Beulich
@ 2015-04-25 15:00 ` Kai Huang
2015-04-27 6:56 ` Jan Beulich
0 siblings, 1 reply; 22+ messages in thread
From: Kai Huang @ 2015-04-25 15:00 UTC (permalink / raw)
To: Jan Beulich; +Cc: Kai Huang, Andrew Cooper, Tian, Kevin, tim, xen-devel
On Fri, Apr 24, 2015 at 10:33 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>> On 24.04.15 at 10:19, <kai.huang@linux.intel.com> wrote:
>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>> @@ -64,6 +64,36 @@ integer_param("ple_gap", ple_gap);
>> static unsigned int __read_mostly ple_window = 4096;
>> integer_param("ple_window", ple_window);
>>
>> +static bool_t __read_mostly opt_pml_enabled = 0;
>> +
>> +/*
>> + * The 'ept' parameter controls functionalities that depend on, or impact the
>> + * EPT mechanism. Optional comma separated value may contain:
>> + *
>> + * pml Enable PML
>> + */
>> +static void __init parse_ept_param(char *s)
>> +{
>> + char *ss;
>> +
>> + do {
>> + bool_t val = !!strncmp(s, "no-", 3);
>> + if ( !val )
>
> In case another round is needed, a blank line is missing above.
>
>> + s += 3;
>> +
>> + ss = strchr(s, ',');
>> + if ( ss )
>> + *ss = '\0';
>> +
>> + if ( !strcmp(s, "pml") )
>> + opt_pml_enabled = val;
>> +
>> + s = ss + 1;
>> + } while ( ss );
>> +}
>> +
>> +custom_param("ept", parse_ept_param);
>
> And a superfluous blank line would want to be dropped here.
Sure. Will do both of your above comments if a further v4 is needed. Thanks.
And I suppose you are talking about the blank line before
custom_param("ept", parse_ept_param) ?
Thanks,
-Kai
>
> Jan
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
--
Thanks,
-Kai
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v3 01/10] vmx: add new boot parameter to control PML enabling
2015-04-25 15:00 ` Kai Huang
@ 2015-04-27 6:56 ` Jan Beulich
2015-05-04 7:46 ` Kai Huang
0 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2015-04-27 6:56 UTC (permalink / raw)
To: kaih.linux; +Cc: kai.huang, andrew.cooper3, kevin.tian, tim, xen-devel
>>> Kai Huang <kaih.linux@gmail.com> 04/25/15 5:00 PM >>>
>On Fri, Apr 24, 2015 at 10:33 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> On 24.04.15 at 10:19, <kai.huang@linux.intel.com> wrote:
>>> +}
>>> +
>>> +custom_param("ept", parse_ept_param);
>>
>> And a superfluous blank line would want to be dropped here.
>
>Sure. Will do both of your above comments if a further v4 is needed. Thanks.
>
>And I suppose you are talking about the blank line before
>custom_param("ept", parse_ept_param) ?
Yes.
Jan
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v3 00/10] PML (Page Modification Logging) support
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
` (9 preceding siblings ...)
2015-04-24 8:19 ` [v3 10/10] p2m/ept: enable PML in p2m-ept for log-dirty Kai Huang
@ 2015-04-30 11:04 ` Tim Deegan
2015-05-01 9:06 ` Kai Huang
2015-05-04 7:40 ` Tian, Kevin
10 siblings, 2 replies; 22+ messages in thread
From: Tim Deegan @ 2015-04-30 11:04 UTC (permalink / raw)
To: Kai Huang; +Cc: andrew.cooper3, kevin.tian, jbeulich, xen-devel
At 16:19 +0800 on 24 Apr (1429892368), Kai Huang wrote:
> v2->v3:
>
> - Merged v2 patch 02 (document change) to patch 01 as a single patch, and
> changed new parameter description as suggested by Andrew.
> - changed vmx_vcpu_flush_pml_buffer to call mark_dirty for all logged GFNs, and
> call p2m_change_type_one regardless of return value.
> - Added ASSERT for vcpu (being current, or being non-running and unrunnable) to
> vmx_vcpu_flush_pml_buffer
> - Other refinement in coding style, comments description, etc.
>
> Sanity test of live migration has been tested both with and without PML.
Acked-by: Tim Deegan <tim@xen.org>
Cheers,
Tim.
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v3 00/10] PML (Page Modification Logging) support
2015-04-30 11:04 ` [v3 00/10] PML (Page Modification Logging) support Tim Deegan
@ 2015-05-01 9:06 ` Kai Huang
2015-05-04 7:40 ` Tian, Kevin
1 sibling, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-05-01 9:06 UTC (permalink / raw)
To: Tim Deegan; +Cc: Kai Huang, Andrew Cooper, Tian, Kevin, Jan Beulich, xen-devel
Thanks Tim!
On Thu, Apr 30, 2015 at 7:04 PM, Tim Deegan <tim@xen.org> wrote:
> At 16:19 +0800 on 24 Apr (1429892368), Kai Huang wrote:
>> v2->v3:
>>
>> - Merged v2 patch 02 (document change) to patch 01 as a single patch, and
>> changed new parameter description as suggested by Andrew.
>> - changed vmx_vcpu_flush_pml_buffer to call mark_dirty for all logged GFNs, and
>> call p2m_change_type_one regardless of return value.
>> - Added ASSERT for vcpu (being current, or being non-running and unrunnable) to
>> vmx_vcpu_flush_pml_buffer
>> - Other refinement in coding style, comments description, etc.
>>
>> Sanity test of live migration has been tested both with and without PML.
>
> Acked-by: Tim Deegan <tim@xen.org>
>
> Cheers,
>
> Tim.
>
--
Thanks,
-Kai
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v3 00/10] PML (Page Modification Logging) support
2015-04-30 11:04 ` [v3 00/10] PML (Page Modification Logging) support Tim Deegan
2015-05-01 9:06 ` Kai Huang
@ 2015-05-04 7:40 ` Tian, Kevin
2015-05-04 7:46 ` Kai Huang
1 sibling, 1 reply; 22+ messages in thread
From: Tian, Kevin @ 2015-05-04 7:40 UTC (permalink / raw)
To: Tim Deegan, Kai Huang; +Cc: andrew.cooper3, jbeulich, xen-devel
> From: Tim Deegan [mailto:tim@xen.org]
> Sent: Thursday, April 30, 2015 7:04 PM
>
> At 16:19 +0800 on 24 Apr (1429892368), Kai Huang wrote:
> > v2->v3:
> >
> > - Merged v2 patch 02 (document change) to patch 01 as a single patch, and
> > changed new parameter description as suggested by Andrew.
> > - changed vmx_vcpu_flush_pml_buffer to call mark_dirty for all logged GFNs,
> and
> > call p2m_change_type_one regardless of return value.
> > - Added ASSERT for vcpu (being current, or being non-running and
> unrunnable) to
> > vmx_vcpu_flush_pml_buffer
> > - Other refinement in coding style, comments description, etc.
> >
> > Sanity test of live migration has been tested both with and without PML.
>
> Acked-by: Tim Deegan <tim@xen.org>
>
> Cheers,
>
> Tim.
Acked-by: Kevin Tian <kevin.tian@intel.com> for all VMX related changes.
Thanks
Kevin
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v3 00/10] PML (Page Modification Logging) support
2015-05-04 7:40 ` Tian, Kevin
@ 2015-05-04 7:46 ` Kai Huang
0 siblings, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-05-04 7:46 UTC (permalink / raw)
To: Tian, Kevin, Tim Deegan; +Cc: andrew.cooper3, jbeulich, xen-devel
On 05/04/2015 03:40 PM, Tian, Kevin wrote:
>> From: Tim Deegan [mailto:tim@xen.org]
>> Sent: Thursday, April 30, 2015 7:04 PM
>>
>> At 16:19 +0800 on 24 Apr (1429892368), Kai Huang wrote:
>>> v2->v3:
>>>
>>> - Merged v2 patch 02 (document change) to patch 01 as a single patch, and
>>> changed new parameter description as suggested by Andrew.
>>> - changed vmx_vcpu_flush_pml_buffer to call mark_dirty for all logged GFNs,
>> and
>>> call p2m_change_type_one regardless of return value.
>>> - Added ASSERT for vcpu (being current, or being non-running and
>> unrunnable) to
>>> vmx_vcpu_flush_pml_buffer
>>> - Other refinement in coding style, comments description, etc.
>>>
>>> Sanity test of live migration has been tested both with and without PML.
>> Acked-by: Tim Deegan <tim@xen.org>
>>
>> Cheers,
>>
>> Tim.
> Acked-by: Kevin Tian <kevin.tian@intel.com> for all VMX related changes.
Thanks Kevin.
Thanks,
-Kai
>
> Thanks
> Kevin
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v3 01/10] vmx: add new boot parameter to control PML enabling
2015-04-27 6:56 ` Jan Beulich
@ 2015-05-04 7:46 ` Kai Huang
2015-05-04 7:52 ` Jan Beulich
0 siblings, 1 reply; 22+ messages in thread
From: Kai Huang @ 2015-05-04 7:46 UTC (permalink / raw)
To: Jan Beulich, kaih.linux; +Cc: andrew.cooper3, kevin.tian, tim, xen-devel
On 04/27/2015 02:56 PM, Jan Beulich wrote:
>>>> Kai Huang <kaih.linux@gmail.com> 04/25/15 5:00 PM >>>
>> On Fri, Apr 24, 2015 at 10:33 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> On 24.04.15 at 10:19, <kai.huang@linux.intel.com> wrote:
>>>> +}
>>>> +
>>>> +custom_param("ept", parse_ept_param);
>>> And a superfluous blank line would want to be dropped here.
>> Sure. Will do both of your above comments if a further v4 is needed. Thanks.
>>
>> And I suppose you are talking about the blank line before
>> custom_param("ept", parse_ept_param) ?
> Yes.
Hi Jan,
Tim, Kevin and Andrew have provided their acks, so I think the v3 patch
series is OK to be merged?
For your comments above, if you think it is necessary, I can send another
incremental patch to address them, or you can simply do it for me. Is
this OK with you?
Thanks,
-Kai
>
> Jan
>
>
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v3 01/10] vmx: add new boot parameter to control PML enabling
2015-05-04 7:46 ` Kai Huang
@ 2015-05-04 7:52 ` Jan Beulich
2015-05-04 7:53 ` Kai Huang
0 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2015-05-04 7:52 UTC (permalink / raw)
To: kaih.linux, Kai Huang; +Cc: andrew.cooper3, kevin.tian, tim, xen-devel
>>> On 04.05.15 at 09:46, <kai.huang@linux.intel.com> wrote:
>
> On 04/27/2015 02:56 PM, Jan Beulich wrote:
>>>>> Kai Huang <kaih.linux@gmail.com> 04/25/15 5:00 PM >>>
>>> On Fri, Apr 24, 2015 at 10:33 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>> On 24.04.15 at 10:19, <kai.huang@linux.intel.com> wrote:
>>>>> +}
>>>>> +
>>>>> +custom_param("ept", parse_ept_param);
>>>> And a superfluous blank line would want to be dropped here.
>>> Sure. Will do both of your above comments if a further v4 is needed. Thanks.
>>>
>>> And I suppose you are talking about the blank line before
>>> custom_param("ept", parse_ept_param) ?
>> Yes.
> Tim, Kevin, Andrew have provided their acks, so I think the v3 patch
> series are OK to be merged?
I just got back to the office and would still like to make at least a brief
pass over the rest of the series before applying it.
> For your comments above, if you think is necessary, I can sent another
> incremental patch to address, or you can just simply do it for me. Is
> this OK to you?
Yes, I've taken notes to do these adjustments while committing.
Jan
^ permalink raw reply [flat|nested] 22+ messages in thread
* Re: [v3 01/10] vmx: add new boot parameter to control PML enabling
2015-05-04 7:52 ` Jan Beulich
@ 2015-05-04 7:53 ` Kai Huang
0 siblings, 0 replies; 22+ messages in thread
From: Kai Huang @ 2015-05-04 7:53 UTC (permalink / raw)
To: Jan Beulich, kaih.linux; +Cc: andrew.cooper3, kevin.tian, tim, xen-devel
On 05/04/2015 03:52 PM, Jan Beulich wrote:
>>>> On 04.05.15 at 09:46, <kai.huang@linux.intel.com> wrote:
>> On 04/27/2015 02:56 PM, Jan Beulich wrote:
>>>>>> Kai Huang <kaih.linux@gmail.com> 04/25/15 5:00 PM >>>
>>>> On Fri, Apr 24, 2015 at 10:33 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>>> On 24.04.15 at 10:19, <kai.huang@linux.intel.com> wrote:
>>>>>> +}
>>>>>> +
>>>>>> +custom_param("ept", parse_ept_param);
>>>>> And a superfluous blank line would want to be dropped here.
>>>> Sure. Will do both of your above comments if a further v4 is needed. Thanks.
>>>>
>>>> And I suppose you are talking about the blank line before
>>>> custom_param("ept", parse_ept_param) ?
>>> Yes.
>> Tim, Kevin, Andrew have provided their acks, so I think the v3 patch
>> series are OK to be merged?
> I just got back to the office and would still like to make at least a brief
> pass over the rest of the series before applying it.
Sure.
Thanks,
Kai
>
>> For your comments above, if you think is necessary, I can sent another
>> incremental patch to address, or you can just simply do it for me. Is
>> this OK to you?
> Yes, I've taken notes to do these adjustments while committing.
>
> Jan
>
>
^ permalink raw reply [flat|nested] 22+ messages in thread
Thread overview: 22+ messages
2015-04-24 8:19 [v3 00/10] PML (Page Modification Logging) support Kai Huang
2015-04-24 8:19 ` [v3 01/10] vmx: add new boot parameter to control PML enabling Kai Huang
2015-04-24 10:46 ` Andrew Cooper
2015-04-24 14:33 ` Jan Beulich
2015-04-25 15:00 ` Kai Huang
2015-04-27 6:56 ` Jan Beulich
2015-05-04 7:46 ` Kai Huang
2015-05-04 7:52 ` Jan Beulich
2015-05-04 7:53 ` Kai Huang
2015-04-24 8:19 ` [v3 02/10] log-dirty: add new paging_mark_gfn_dirty Kai Huang
2015-04-24 8:19 ` [v3 03/10] vmx: add PML definition and feature detection Kai Huang
2015-04-24 8:19 ` [v3 04/10] vmx: add new data structure member to support PML Kai Huang
2015-04-24 8:19 ` [v3 05/10] vmx: add help functions " Kai Huang
2015-04-24 8:19 ` [v3 06/10] vmx: handle PML buffer full VMEXIT Kai Huang
2015-04-24 8:19 ` [v3 07/10] vmx: handle PML enabling in vmx_vcpu_initialise Kai Huang
2015-04-24 8:19 ` [v3 08/10] vmx: disable PML in vmx_vcpu_destroy Kai Huang
2015-04-24 8:19 ` [v3 09/10] log-dirty: refine common code to support PML Kai Huang
2015-04-24 8:19 ` [v3 10/10] p2m/ept: enable PML in p2m-ept for log-dirty Kai Huang
2015-04-30 11:04 ` [v3 00/10] PML (Page Modification Logging) support Tim Deegan
2015-05-01 9:06 ` Kai Huang
2015-05-04 7:40 ` Tian, Kevin
2015-05-04 7:46 ` Kai Huang