From: "Jan Beulich" <JBeulich@suse.com>
To: Razvan Cojocaru <rcojocaru@bitdefender.com>
Cc: Wei Liu <wei.liu2@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel@lists.xen.org, Paul Durrant <paul.durrant@citrix.com>,
	aisaila@bitdefender.com
Subject: Re: [PATCH v17 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func
Date: Tue, 28 Aug 2018 09:43:46 -0600	[thread overview]
Message-ID: <5B856DB202000078001E2B0C@prv1-mh.provo.novell.com> (raw)
In-Reply-To: <8d9cf4e2-e4cf-78d6-800d-9a9ae72802f5@bitdefender.com>

>>> On 28.08.18 at 17:33, <rcojocaru@bitdefender.com> wrote:
> On 8/22/18 5:02 PM, Alexandru Isaila wrote:
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -787,119 +787,126 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
>>  HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
>>                            hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);
>>  
>> -static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>> +static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
>>  {
>> -    struct vcpu *v;
>> -    struct hvm_hw_cpu ctxt;
>>      struct segment_register seg;
>> +    struct hvm_hw_cpu ctxt = {
>> +        .tsc = hvm_get_guest_tsc_fixed(v, v->domain->arch.hvm_domain.sync_tsc),
>> +        .msr_tsc_aux = hvm_msr_tsc_aux(v),
>> +        .rax = v->arch.user_regs.rax,
>> +        .rbx = v->arch.user_regs.rbx,
>> +        .rcx = v->arch.user_regs.rcx,
>> +        .rdx = v->arch.user_regs.rdx,
>> +        .rbp = v->arch.user_regs.rbp,
>> +        .rsi = v->arch.user_regs.rsi,
>> +        .rdi = v->arch.user_regs.rdi,
>> +        .rsp = v->arch.user_regs.rsp,
>> +        .rip = v->arch.user_regs.rip,
>> +        .rflags = v->arch.user_regs.rflags,
>> +        .r8  = v->arch.user_regs.r8,
>> +        .r9  = v->arch.user_regs.r9,
>> +        .r10 = v->arch.user_regs.r10,
>> +        .r11 = v->arch.user_regs.r11,
>> +        .r12 = v->arch.user_regs.r12,
>> +        .r13 = v->arch.user_regs.r13,
>> +        .r14 = v->arch.user_regs.r14,
>> +        .r15 = v->arch.user_regs.r15,
>> +        .dr0 = v->arch.debugreg[0],
>> +        .dr1 = v->arch.debugreg[1],
>> +        .dr2 = v->arch.debugreg[2],
>> +        .dr3 = v->arch.debugreg[3],
>> +        .dr6 = v->arch.debugreg[6],
>> +        .dr7 = v->arch.debugreg[7],
>> +    };
>>  
>> -    for_each_vcpu ( d, v )
>> +    /*
>> +     * We don't need to save state for a vcpu that is down; the restore
>> +     * code will leave it down if there is nothing saved.
>> +     */
>> +    if ( v->pause_flags & VPF_down )
>> +        return 0;
> 
> Actually, we'd like to remove this if() if possible - even if the VCPU
> is down, we should still be able to query it and receive whatever state
> it is in, according to the Intel SDM, "9.1.1 Processor State After
> Reset". Any objections to this?

Certainly not in this series; it would therefore be better to
discuss this in a separate thread rather than in this one.

I'm also not convinced the state you'd receive would
actually be "after reset" state. We may need to synthesize
this if we really wanted to allow this.
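
Concretely, synthesizing would mean something along the lines of the
sketch below - illustrative only, reusing the designated-initializer
style of the patch and the SDM 9.1.1 after-reset values; only a
subset of the struct hvm_hw_cpu fields is shown:

    /* Hypothetical: architectural reset state (Intel SDM Vol. 3, 9.1.1). */
    struct hvm_hw_cpu ctxt = {
        .rip      = 0xfff0,
        .rflags   = 0x2,            /* only the reserved, always-set bit */
        .cr0      = 0x60000010,     /* ET | CD | NW */
        .dr6      = 0xffff0ff0,
        .dr7      = 0x00000400,
        /*
         * CS comes up as f000:fff0 with base ffff0000, so the first
         * fetch hits the reset vector at physical 0xfffffff0.
         */
        .cs_sel   = 0xf000,
        .cs_base  = 0xffff0000,
        .cs_limit = 0xffff,
    };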

> Also, reading the comment and looking at the code, it appears that the
> end of hvm_load_cpu_ctxt() actually wakes the VCPU up:
> 
> 1152     v->arch.debugreg[7] = ctxt.dr7;
> 1153
> 1154     v->arch.vgc_flags = VGCF_online;
> 1155
> 1156     /* Auxiliary processors should be woken immediately. */
> 1157     v->is_initialised = 1;
> 1158     clear_bit(_VPF_down, &v->pause_flags);
> 1159     vcpu_wake(v);
> 1160
> 1161     return 0;
> 1162 }
> 
> which appears to be non-architectural behaviour, if we've interpreted
> section 8.4 of Volume 3 of the SDM correctly, as it basically says that
> VCPUs should be woken by IPIs (though this is outside the scope of this
> patch).

I'm not sure hardware behavior can be used as a reference here:
hardware doesn't allow loading state the way we do. Also, I don't
see how or when else you would propose the unpausing should happen,
in particular in the course of a (live) migration.
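
To make the pairing explicit: with the check in place the per-vCPU
helper emits no record for a down vCPU, so hvm_load_cpu_ctxt() is
never invoked for it on the restore side and the vCPU simply stays
paused. Roughly (a sketch of the wrapper shape, not the literal code
of the patch):

    static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;
        int err = 0;

        for_each_vcpu ( d, v )
        {
            /*
             * Returns 0 without writing a record when VPF_down is set,
             * so the restore code leaves that vCPU down.
             */
            err = hvm_save_cpu_ctxt_one(v, h);
            if ( err )
                break;
        }

        return err;
    }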

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


Thread overview: 31+ messages
2018-08-22 14:02 [PATCH v17 00/14] x86/domctl: Save info for one vcpu instance Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 01/13] x86/cpu: Introduce vmce_save_vcpu_ctxt_one() func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 02/13] x86/hvm: Introduce hvm_save_tsc_adjust_one() func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 03/13] x86/hvm: Introduce hvm_save_cpu_ctxt_one func Alexandru Isaila
2018-08-28 15:33   ` Razvan Cojocaru
2018-08-28 15:43     ` Jan Beulich [this message]
2018-08-22 14:02 ` [PATCH v17 04/13] x86/hvm: Introduce hvm_save_cpu_xsave_states_one Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 05/13] x86/hvm: Introduce hvm_save_cpu_msrs_one func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 06/13] x86/hvm: Introduce hvm_save_mtrr_msr_one func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 07/13] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 08/13] x86/hvm: Introduce lapic_save_hidden_one Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 09/13] x86/hvm: Introduce lapic_save_regs_one func Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 10/13] x86/hvm: Add handler for save_one funcs Alexandru Isaila
2018-08-22 14:02 ` [PATCH v17 11/13] x86/domctl: Use hvm_save_vcpu_handler Alexandru Isaila
2018-08-22 14:28   ` Roger Pau Monné
2018-08-27  8:39     ` Jan Beulich
2018-08-22 14:02 ` [PATCH v17 12/13] x86/hvm: Remove redundant save functions Alexandru Isaila
2018-08-22 15:04   ` Roger Pau Monné
2018-08-22 15:22     ` Isaila Alexandru
2018-08-22 15:26       ` Roger Pau Monné
2018-08-28 15:27   ` Jan Beulich
2018-08-22 14:02 ` [PATCH v17 13/13] x86/domctl: Don't pause the whole domain if only getting vcpu state Alexandru Isaila
2018-08-22 14:41   ` Roger Pau Monné
2018-08-22 15:15     ` Isaila Alexandru
2018-08-29 14:02       ` Isaila Alexandru
2018-08-29 14:13         ` Jan Beulich
2018-08-31 13:56           ` Isaila Alexandru
2018-08-31 15:23             ` Jan Beulich
2018-09-03 14:36             ` Roger Pau Monné
2018-09-03 14:42               ` Isaila Alexandru
2018-09-03 14:56                 ` Roger Pau Monné
