xen-devel.lists.xenproject.org archive mirror
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH v4 14/26] x86/cpu: Rework AMD masking MSR setup
Date: Mon, 28 Mar 2016 14:55:23 -0400	[thread overview]
Message-ID: <20160328185523.GM17944@char.us.oracle.com> (raw)
In-Reply-To: <1458750989-28967-15-git-send-email-andrew.cooper3@citrix.com>

On Wed, Mar 23, 2016 at 04:36:17PM +0000, Andrew Cooper wrote:
> This patch is best reviewed as its end result rather than as a diff, as it
> rewrites almost all of the setup.
> 
> On the BSP, cpuid information is used to evaluate the potential available set
> of masking MSRs, and they are unconditionally probed, filling in the
> availability information and hardware defaults.
> 
> The command line parameters are then combined with the hardware defaults to
> further restrict the Xen default masking level.  Each cpu is then context
> switched into the default levelling state.

Context switched? Why not just say:

When booting up CPUs, we set the same MSR mask for each CPU.

The amd_ctxt_switch_levelling() function can also be used (in patch XYZ) to
swap levelling state at per-guest granularity.

> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <JBeulich@suse.com>
> ---
> v2:
>  * Provide extra information if opt_cpu_info
>  * Extra comment indicating the expected use of amd_ctxt_switch_levelling()
> v3:
>  * Fix the interaction of the fast-forward bits with the override MSRs.
>  * Style fixups.
> ---
>  xen/arch/x86/cpu/amd.c | 276 ++++++++++++++++++++++++++++++++-----------------
>  1 file changed, 179 insertions(+), 97 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
> index 5516777..0e1c8b9 100644
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -80,6 +80,13 @@ static inline int wrmsr_amd_safe(unsigned int msr, unsigned int lo,
>  	return err;
>  }
>  
> +static void wrmsr_amd(unsigned int msr, uint64_t val)
> +{
> +	asm volatile("wrmsr" ::
> +		     "c" (msr), "a" ((uint32_t)val),
> +		     "d" (val >> 32), "D" (0x9c5a203a));
> +}
> +
>  static const struct cpuidmask {
>  	uint16_t fam;
>  	char rev[2];
> @@ -126,126 +133,198 @@ static const struct cpuidmask *__init noinline get_cpuidmask(const char *opt)
>  }
>  
>  /*
> + * Sets caps in expected_levelling_cap, probes for the specified mask MSR, and
> + * set caps in levelling_caps if it is found.  Processors prior to Fam 10h
> + * required a 32-bit password for masking MSRs.  Returns the default value.
> + */
> +static uint64_t __init _probe_mask_msr(unsigned int msr, uint64_t caps)
> +{
> +	unsigned int hi, lo;
> +
> +	expected_levelling_cap |= caps;
> +
> +	if ((rdmsr_amd_safe(msr, &lo, &hi) == 0) &&
> +	    (wrmsr_amd_safe(msr, lo, hi) == 0))
> +		levelling_caps |= caps;
> +
> +	return ((uint64_t)hi << 32) | lo;
> +}
> +
> +/*
> + * Probe for the existance of the expected masking MSRs.  They might easily
> + * not be available if Xen is running virtualised.
> + */
> +static void __init noinline probe_masking_msrs(void)

Why noinline?

> +{
> +	const struct cpuinfo_x86 *c = &boot_cpu_data;
> +
> +	/*
> +	 * First, work out which masking MSRs we should have, based on
> +	 * revision and cpuid.
> +	 */
> +
> +	/* Fam11 doesn't support masking at all. */
> +	if (c->x86 == 0x11)
> +		return;
> +
> +	cpuidmask_defaults._1cd =
> +		_probe_mask_msr(MSR_K8_FEATURE_MASK, LCAP_1cd);
> +	cpuidmask_defaults.e1cd =
> +		_probe_mask_msr(MSR_K8_EXT_FEATURE_MASK, LCAP_e1cd);
> +
> +	if (c->cpuid_level >= 7)
> +		cpuidmask_defaults._7ab0 =
> +			_probe_mask_msr(MSR_AMD_L7S0_FEATURE_MASK, LCAP_7ab0);
> +
> +	if (c->x86 == 0x15 && c->cpuid_level >= 6 && cpuid_ecx(6))
> +		cpuidmask_defaults._6c =
> +			_probe_mask_msr(MSR_AMD_THRM_FEATURE_MASK, LCAP_6c);
> +
> +	/*
> +	 * Don't bother warning about a mismatch if virtualised.  These MSRs
> +	 * are not architectural and almost never virtualised.
> +	 */
> +	if ((expected_levelling_cap == levelling_caps) ||
> +	    cpu_has_hypervisor)
> +		return;
> +
> +	printk(XENLOG_WARNING "Mismatch between expected (%#x) "
> +	       "and real (%#x) levelling caps: missing %#x\n",
> +	       expected_levelling_cap, levelling_caps,
> +	       (expected_levelling_cap ^ levelling_caps) & levelling_caps);
> +	printk(XENLOG_WARNING "Fam %#x, model %#x level %#x\n",
> +	       c->x86, c->x86_model, c->cpuid_level);
> +	printk(XENLOG_WARNING
> +	       "If not running virtualised, please report a bug\n");

You already have a cpu_has_hypervisor check above? Or is that for
hypervisors which do not set that bit?

> +}
> +
> +/*
> + * Context switch levelling state to the next domain.  A parameter of NULL is
> + * used to context switch to the default host state, and is used by the BSP/AP
> + * startup code.

OK, how about:
> + */
> +static void amd_ctxt_switch_levelling(const struct domain *nextd)
> +{
> +	struct cpuidmasks *these_masks = &this_cpu(cpuidmasks);
> +	const struct cpuidmasks *masks = &cpuidmask_defaults;
> +

	ASSERT(!d && system_state != SYS_STATE_active); ?

> +#define LAZY(cap, msr, field)						\
> +	({								\
> +		if (unlikely(these_masks->field != masks->field) &&	\
> +		    ((levelling_caps & cap) == cap))			\
> +		{							\
> +			wrmsr_amd(msr, masks->field);			\
> +			these_masks->field = masks->field;		\
> +		}							\
> +	})
> +
> +	LAZY(LCAP_1cd,  MSR_K8_FEATURE_MASK,       _1cd);
> +	LAZY(LCAP_e1cd, MSR_K8_EXT_FEATURE_MASK,   e1cd);
> +	LAZY(LCAP_7ab0, MSR_AMD_L7S0_FEATURE_MASK, _7ab0);
> +	LAZY(LCAP_6c,   MSR_AMD_THRM_FEATURE_MASK, _6c);
> +
> +#undef LAZY
> +}
> +
> +/*
>   * Mask the features and extended features returned by CPUID.  Parameters are
>   * set from the boot line via two methods:
>   *
>   *   1) Specific processor revision string
>   *   2) User-defined masks
>   *
> - * The processor revision string parameter has precedene.
> + * The user-defined masks take precedence.
>   */
> -static void set_cpuidmask(const struct cpuinfo_x86 *c)
> +static void __init noinline amd_init_levelling(void)
>  {
> -	static unsigned int feat_ecx, feat_edx;
> -	static unsigned int extfeat_ecx, extfeat_edx;
> -	static unsigned int l7s0_eax, l7s0_ebx;
> -	static unsigned int thermal_ecx;
> -	static bool_t skip_feat, skip_extfeat;
> -	static bool_t skip_l7s0_eax_ebx, skip_thermal_ecx;
> -	static enum { not_parsed, no_mask, set_mask } status;
> -	unsigned int eax, ebx, ecx, edx;
> -
> -	if (status == no_mask)
> -		return;
> +	const struct cpuidmask *m = NULL;
>  
> -	if (status == set_mask)
> -		goto setmask;
> +	probe_masking_msrs();
>  
> -	ASSERT((status == not_parsed) && (c == &boot_cpu_data));
> -	status = no_mask;
> +	if (*opt_famrev != '\0') {
> +		m = get_cpuidmask(opt_famrev);
>  
> -	/* Fam11 doesn't support masking at all. */
> -	if (c->x86 == 0x11)
> -		return;
> +		if (!m)
> +			printk("Invalid processor string: %s\n", opt_famrev);
> +	}
>  
> -	if (~(opt_cpuid_mask_ecx & opt_cpuid_mask_edx &
> -	      opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx &
> -	      opt_cpuid_mask_l7s0_eax & opt_cpuid_mask_l7s0_ebx &
> -	      opt_cpuid_mask_thermal_ecx)) {
> -		feat_ecx = opt_cpuid_mask_ecx;
> -		feat_edx = opt_cpuid_mask_edx;
> -		extfeat_ecx = opt_cpuid_mask_ext_ecx;
> -		extfeat_edx = opt_cpuid_mask_ext_edx;
> -		l7s0_eax = opt_cpuid_mask_l7s0_eax;
> -		l7s0_ebx = opt_cpuid_mask_l7s0_ebx;
> -		thermal_ecx = opt_cpuid_mask_thermal_ecx;
> -	} else if (*opt_famrev == '\0') {
> -		return;
> -	} else {
> -		const struct cpuidmask *m = get_cpuidmask(opt_famrev);
> +	if ((levelling_caps & LCAP_1cd) == LCAP_1cd) {
> +		uint32_t ecx, edx, tmp;
>  
> -		if (!m) {
> -			printk("Invalid processor string: %s\n", opt_famrev);
> -			printk("CPUID will not be masked\n");
> -			return;
> +		cpuid(0x00000001, &tmp, &tmp, &ecx, &edx);
> +
> +		if(~(opt_cpuid_mask_ecx & opt_cpuid_mask_edx)) {

Why the missing space?
> +			ecx &= opt_cpuid_mask_ecx;
> +			edx &= opt_cpuid_mask_edx;
> +		} else if (m) {
> +			ecx &= m->ecx;
> +			edx &= m->edx;
>  		}
> -		feat_ecx = m->ecx;
> -		feat_edx = m->edx;
> -		extfeat_ecx = m->ext_ecx;
> -		extfeat_edx = m->ext_edx;
> +
> +		/* Fast-forward bits - Must be set. */
> +		if (ecx & cpufeat_mask(X86_FEATURE_XSAVE))
> +			ecx |= cpufeat_mask(X86_FEATURE_OSXSAVE);
> +		edx |= cpufeat_mask(X86_FEATURE_APIC);
> +
> +		/* Allow the HYPERVISOR bit to be set via guest policy. */
> +		ecx |= cpufeat_mask(X86_FEATURE_HYPERVISOR);

Hmm. The http://support.amd.com/TechDocs/52740_16h_Models_30h-3Fh_BKDG.pdf
pg 624 mentions this (bit 63) as 'Reserved'. Should we really set it?
Ah, but then earlier (pg 530) it says 'Reserved for use by hypervisor to
indicate guest status'.


> +
> +		cpuidmask_defaults._1cd = ((uint64_t)ecx << 32) | edx;

Considering the document mentions Reserved, should we preserve the bits that
are set by the initial call that fills out cpuidmask_defaults?

The original code also had:
>  	}
>  
> -        /* Setting bits in the CPUID mask MSR that are not set in the
> -         * unmasked CPUID response can cause those bits to be set in the
> -         * masked response.  Avoid that by explicitly masking in software. */

that comment in it. Would it make sense to include it (or a rework of it,
since I wasn't exactly sure what it was saying)?

> -        feat_ecx &= cpuid_ecx(0x00000001);
> -        feat_edx &= cpuid_edx(0x00000001);
> -        extfeat_ecx &= cpuid_ecx(0x80000001);
> -        extfeat_edx &= cpuid_edx(0x80000001);
> +	if ((levelling_caps & LCAP_e1cd) == LCAP_e1cd) {
> +		uint32_t ecx, edx, tmp;
>  
> -	status = set_mask;
> -	printk("Writing CPUID feature mask ECX:EDX -> %08Xh:%08Xh\n", 
> -	       feat_ecx, feat_edx);
> -	printk("Writing CPUID extended feature mask ECX:EDX -> %08Xh:%08Xh\n", 
> -	       extfeat_ecx, extfeat_edx);
> +		cpuid(0x80000001, &tmp, &tmp, &ecx, &edx);
>  
> -	if (c->cpuid_level >= 7)
> -		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
> -	else
> -		ebx = eax = 0;
> -	if ((eax | ebx) && ~(l7s0_eax & l7s0_ebx)) {
> -		if (l7s0_eax > eax)
> -			l7s0_eax = eax;
> -		l7s0_ebx &= ebx;
> -		printk("Writing CPUID leaf 7 subleaf 0 feature mask EAX:EBX -> %08Xh:%08Xh\n",
> -		       l7s0_eax, l7s0_ebx);
> -	} else
> -		skip_l7s0_eax_ebx = 1;
> -
> -	/* Only Fam15 has the respective MSR. */
> -	ecx = c->x86 == 0x15 && c->cpuid_level >= 6 ? cpuid_ecx(6) : 0;
> -	if (ecx && ~thermal_ecx) {
> -		thermal_ecx &= ecx;
> -		printk("Writing CPUID thermal/power feature mask ECX -> %08Xh\n",
> -		       thermal_ecx);
> -	} else
> -		skip_thermal_ecx = 1;
> -
> - setmask:
> -	/* AMD processors prior to family 10h required a 32-bit password */
> -	if (!skip_feat &&
> -	    wrmsr_amd_safe(MSR_K8_FEATURE_MASK, feat_edx, feat_ecx)) {
> -		skip_feat = 1;
> -		printk("Failed to set CPUID feature mask\n");
> +		if(~(opt_cpuid_mask_ext_ecx & opt_cpuid_mask_ext_edx)) {

Here the space also looks to be missing?
> +			ecx &= opt_cpuid_mask_ext_ecx;
> +			edx &= opt_cpuid_mask_ext_edx;
> +		} else if (m) {
> +			ecx &= m->ext_ecx;
> +			edx &= m->ext_edx;
> +		}
> +
> +		/* Fast-forward bits - Must be set. */
> +		edx |= cpufeat_mask(X86_FEATURE_APIC);
> +
> +		cpuidmask_defaults.e1cd = ((uint64_t)ecx << 32) | edx;

Should this be &= ?

>  	}
>  
> -	if (!skip_extfeat &&
> -	    wrmsr_amd_safe(MSR_K8_EXT_FEATURE_MASK, extfeat_edx, extfeat_ecx)) {
> -		skip_extfeat = 1;
> -		printk("Failed to set CPUID extended feature mask\n");
> +	if ((levelling_caps & LCAP_7ab0) == LCAP_7ab0) {
> +		uint32_t eax, ebx, tmp;
> +
> +		cpuid(0x00000007, &eax, &ebx, &tmp, &tmp);
> +
> +		if(~(opt_cpuid_mask_l7s0_eax & opt_cpuid_mask_l7s0_ebx)) {

Ditto.
> +			eax &= opt_cpuid_mask_l7s0_eax;
> +			ebx &= opt_cpuid_mask_l7s0_ebx;
> +		}
> +
> +		cpuidmask_defaults._7ab0 &= ((uint64_t)eax << 32) | ebx;
>  	}
>  
> -	if (!skip_l7s0_eax_ebx &&
> -	    wrmsr_amd_safe(MSR_AMD_L7S0_FEATURE_MASK, l7s0_ebx, l7s0_eax)) {
> -		skip_l7s0_eax_ebx = 1;
> -		printk("Failed to set CPUID leaf 7 subleaf 0 feature mask\n");
> +	if ((levelling_caps & LCAP_6c) == LCAP_6c) {
> +		uint32_t ecx = cpuid_ecx(6);
> +
> +		if (~opt_cpuid_mask_thermal_ecx)
> +			ecx &= opt_cpuid_mask_thermal_ecx;
> +
> +		cpuidmask_defaults._6c &= (~0ULL << 32) | ecx;


Is there any documentation about this? The BKDG from 03/2016 does not mention
this MSR (C001_1003). Ah, but it is mentioned in the docs for Family 15h. How nice.

>  	}
>  
> -	if (!skip_thermal_ecx &&
> -	    (rdmsr_amd_safe(MSR_AMD_THRM_FEATURE_MASK, &eax, &edx) ||
> -	     wrmsr_amd_safe(MSR_AMD_THRM_FEATURE_MASK, thermal_ecx, edx))){
> -		skip_thermal_ecx = 1;
> -		printk("Failed to set CPUID thermal/power feature mask\n");
> +	if (opt_cpu_info) {
> +		printk(XENLOG_INFO "Levelling caps: %#x\n", levelling_caps);
> +		printk(XENLOG_INFO
> +		       "MSR defaults: 1d 0x%08x, 1c 0x%08x, e1d 0x%08x, "
> +		       "e1c 0x%08x, 7a0 0x%08x, 7b0 0x%08x, 6c 0x%08x\n",
> +		       (uint32_t)cpuidmask_defaults._1cd,
> +		       (uint32_t)(cpuidmask_defaults._1cd >> 32),
> +		       (uint32_t)cpuidmask_defaults.e1cd,
> +		       (uint32_t)(cpuidmask_defaults.e1cd >> 32),
> +		       (uint32_t)(cpuidmask_defaults._7ab0 >> 32),
> +		       (uint32_t)cpuidmask_defaults._7ab0,
> +		       (uint32_t)cpuidmask_defaults._6c);

Why don't you bit shift cpuidmask_defaults._6c too?
>  	}
>  }
>  
> @@ -409,7 +488,10 @@ static void amd_get_topology(struct cpuinfo_x86 *c)
>  
>  static void early_init_amd(struct cpuinfo_x86 *c)
>  {
> -	set_cpuidmask(c);
> +	if (c == &boot_cpu_data)
> +		amd_init_levelling();
> +
> +	amd_ctxt_switch_levelling(NULL);
>  }
>  
>  static void init_amd(struct cpuinfo_x86 *c)
> -- 
> 2.1.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

Thread overview: 72+ messages
2016-03-23 16:36 [PATCH v4 00/26] x86: Improvements to cpuid handling for guests Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 01/26] xen/public: Export cpu featureset information in the public API Andrew Cooper
2016-03-24 14:08   ` Jan Beulich
2016-03-24 14:12     ` Andrew Cooper
2016-03-24 14:16       ` Jan Beulich
2016-03-23 16:36 ` [PATCH v4 02/26] xen/x86: Script to automatically process featureset information Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 03/26] xen/x86: Collect more cpuid feature leaves Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 04/26] xen/x86: Mask out unknown features from Xen's capabilities Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 05/26] xen/x86: Annotate special features Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 06/26] xen/x86: Annotate VM applicability in featureset Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 07/26] xen/x86: Calculate maximum host and guest featuresets Andrew Cooper
2016-03-29  8:57   ` Jan Beulich
2016-03-23 16:36 ` [PATCH v4 08/26] xen/x86: Generate deep dependencies of features Andrew Cooper
2016-03-24 16:16   ` Jan Beulich
2016-03-23 16:36 ` [PATCH v4 09/26] xen/x86: Clear dependent features when clearing a cpu cap Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 10/26] xen/x86: Improve disabling of features which have dependencies Andrew Cooper
2016-03-28 15:18   ` Konrad Rzeszutek Wilk
2016-03-23 16:36 ` [PATCH v4 11/26] xen/x86: Improvements to in-hypervisor cpuid sanity checks Andrew Cooper
2016-03-24 15:38   ` Andrew Cooper
2016-03-24 16:47   ` Jan Beulich
2016-03-24 17:01     ` Andrew Cooper
2016-03-24 17:11       ` Jan Beulich
2016-03-24 17:12         ` Andrew Cooper
2016-03-28 15:29   ` Konrad Rzeszutek Wilk
2016-04-05 15:25     ` Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 12/26] x86/cpu: Move set_cpumask() calls into c_early_init() Andrew Cooper
2016-03-28 15:55   ` Konrad Rzeszutek Wilk
2016-04-05 16:19     ` Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 13/26] x86/cpu: Sysctl and common infrastructure for levelling context switching Andrew Cooper
2016-03-24 16:58   ` Jan Beulich
2016-03-28 16:12   ` Konrad Rzeszutek Wilk
2016-04-05 16:33     ` Andrew Cooper
2016-03-28 17:37   ` Konrad Rzeszutek Wilk
2016-03-23 16:36 ` [PATCH v4 14/26] x86/cpu: Rework AMD masking MSR setup Andrew Cooper
2016-03-28 18:55   ` Konrad Rzeszutek Wilk [this message]
2016-04-05 16:44     ` Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 15/26] x86/cpu: Rework Intel masking/faulting setup Andrew Cooper
2016-03-28 19:14   ` Konrad Rzeszutek Wilk
2016-04-05 16:45     ` Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 16/26] x86/cpu: Context switch cpuid masks and faulting state in context_switch() Andrew Cooper
2016-03-28 19:27   ` Konrad Rzeszutek Wilk
2016-04-05 18:34     ` Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 17/26] x86/pv: Provide custom cpumasks for PV domains Andrew Cooper
2016-03-28 19:40   ` Konrad Rzeszutek Wilk
2016-04-05 16:55     ` Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 18/26] x86/domctl: Update PV domain cpumasks when setting cpuid policy Andrew Cooper
2016-03-24 17:04   ` Jan Beulich
2016-03-24 17:05     ` Andrew Cooper
2016-03-28 19:51   ` Konrad Rzeszutek Wilk
2016-03-23 16:36 ` [PATCH v4 19/26] xen+tools: Export maximum host and guest cpu featuresets via SYSCTL Andrew Cooper
2016-03-28 19:59   ` Konrad Rzeszutek Wilk
2016-03-23 16:36 ` [PATCH v4 20/26] tools/libxc: Modify bitmap operations to take void pointers Andrew Cooper
2016-03-28 20:05   ` Konrad Rzeszutek Wilk
2016-03-23 16:36 ` [PATCH v4 21/26] tools/libxc: Use public/featureset.h for cpuid policy generation Andrew Cooper
2016-03-28 20:07   ` Konrad Rzeszutek Wilk
2016-03-23 16:36 ` [PATCH v4 22/26] tools/libxc: Expose the automatically generated cpu featuremask information Andrew Cooper
2016-03-28 20:08   ` Konrad Rzeszutek Wilk
2016-03-23 16:36 ` [PATCH v4 23/26] tools: Utility for dealing with featuresets Andrew Cooper
2016-03-28 20:26   ` Konrad Rzeszutek Wilk
2016-03-23 16:36 ` [PATCH v4 24/26] tools/libxc: Wire a featureset through to cpuid policy logic Andrew Cooper
2016-03-28 20:39   ` Konrad Rzeszutek Wilk
2016-03-23 16:36 ` [PATCH v4 25/26] tools/libxc: Use featuresets rather than guesswork Andrew Cooper
2016-03-23 16:36 ` [PATCH v4 26/26] tools/libxc: Calculate xstate cpuid leaf from guest information Andrew Cooper
2016-03-24 17:20   ` Wei Liu
2016-03-31  7:48   ` Jan Beulich
2016-04-05 17:48     ` Andrew Cooper
2016-04-07  0:16       ` Jan Beulich
2016-04-07  0:40         ` Andrew Cooper
2016-04-07  0:56           ` Jan Beulich
2016-04-07 11:34             ` Andrew Cooper
2016-03-24 10:27 ` [PATCH v4 00/26] x86: Improvements to cpuid handling for guests Jan Beulich
2016-03-24 10:28   ` Andrew Cooper
