From: Chenxiao Zhao <chenxiao.zhao@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: julien.grall@arm.com,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: questions of vm save/restore on arm64
Date: Tue, 31 May 2016 17:28:22 -0700
Message-ID: <5f28430f-50ee-7d5a-2308-0d16f116ab0e@gmail.com>
In-Reply-To: <alpine.DEB.2.10.1605301233160.3896@sstabellini-ThinkPad-X260>



On 5/30/2016 4:40 AM, Stefano Stabellini wrote:
> On Fri, 27 May 2016, Chenxiao Zhao wrote:
>> Hi,
>>
>> My board is a HiKey, which has an octa-core ARM Cortex-A53. I have applied the patches in [1] to try VM save/restore on ARM.
>> These patches originally did not work on arm64, so I have made some changes based on patch set [2].
>
> Hello Chenxiao,
>
> thanks for your interest in Xen on ARM save/restore.

Hi Stefano,

Thanks for your advice.

I found that a possible cause of the restore failure is that Xen 
always fails the p2m_lookup for the guest domain.

I called dump_p2m_lookup() from p2m_lookup() and got the output below:

(XEN) dom1 IPA 0x0000000039001000
(XEN) P2M @ 0000000801e7ce80 mfn:0x79f3a
(XEN) Using concatenated root table 0
(XEN) 1ST[0x0] = 0x0040000079f3c77f
(XEN) 2ND[0x1c8] = 0x0000000000000000
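
For reference, here is how I read that walk; this is just a rough 
sketch in the spirit of a 3-level LPAE stage-2 lookup, not the actual 
Xen walker, and map_table() is a hypothetical helper:

    #include <stdint.h>

    /* Rough sketch of a 3-level stage-2 walk with a 4K granule. */
    #define LPAE_ENTRIES   512
    #define FIRST_SHIFT    30
    #define SECOND_SHIFT   21
    #define THIRD_SHIFT    12

    extern uint64_t *map_table(uint64_t entry);  /* hypothetical helper */

    static uint64_t walk_ipa(uint64_t *first, uint64_t ipa)
    {
        uint64_t e1 = first[(ipa >> FIRST_SHIFT) & (LPAE_ENTRIES - 1)];
        if ( !(e1 & 1) )                    /* bit 0 = valid */
            return 0;
        uint64_t *second = map_table(e1);
        uint64_t e2 = second[(ipa >> SECOND_SHIFT) & (LPAE_ENTRIES - 1)];
        if ( !(e2 & 1) )
            return 0;   /* IPA 0x39001000 fails here: 2ND[0x1c8] == 0 */
        /* ... continue to the 3RD level for a 4K mapping ... */
        return e2;
    }

In other words, the zero 2ND-level entry means the IPA is simply not 
mapped in the stage-2 tables after the restore.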

My questions are:

1. Who is responsible for restoring the p2m table, Xen or the guest 
kernel?
2. After restore, the VM always shows zero memory (see the xl list 
output below), but no error is reported during the restore process. Is 
the memory supposed to be requested by the guest kernel, or should it 
be allocated up front by the hypervisor?


Name                          ID   Mem VCPUs      State   Time(s)
Domain-0                       0  1024     8     r-----      15.0
guest                          1     0     1     --p---       0.0



>
>
>> What I have found so far is:
>>
>> 1. If I run 'xl save -p guest memState' to leave the guest in a suspended state and then run 'xl unpause guest',
>>     the guest resumes successfully, so I suppose the guest works fine on suspend/resume.
>>
>> 2. If I run 'xl restore -p memState' to restore the guest and use xenctx to dump all vCPU registers,
>>     all the registers are identical to the saved state. After I run 'xl unpause guest', I get no error but cannot connect to the console.
>> After the restore, the guest's PC is in a function called user_disable_single_step(), which is called by single_step_handler().
>>
>> My questions are:
>>
>> 1. How can I debug the guest during the restore process? Are there any tools available?
>
> Nothing special. You can use ctrl-AAA on the console to switch to the
> hypervisor console and see the state of the guest. You can also add some
> debug printks; if the console doesn't work you can use
> dom0_write_console in Linux to get messages out of your guest (you need
> to compile Xen with debug=y for that to work).
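
Thanks, I will try that. My plan is to drop markers like the sketch 
below into the guest's resume path. If I read drivers/tty/hvc/hvc_xen.c 
right, xen_raw_printk() from include/xen/hvc-console.h ends up in 
dom0_write_console() and the console_io hypercall, which a debug=y Xen 
will also print for an unprivileged domain; the helper name and its 
placement are only my assumption of how to use it:

    #include <xen/hvc-console.h>

    /*
     * Emit a marker straight to the Xen console, bypassing the PV
     * console ring, so it works even when xenconsole cannot attach.
     */
    static void restore_debug_marker(const char *where)
    {
        xen_raw_printk("guest restore checkpoint: %s\n", where);
    }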
>
>
>> 2. From my understanding, the restore does not work because some state is missing when saving.
>>    e.g. in cpu_save() Xen knows the domain is 64-bit, but in cpu_load() it always thinks it is a 32-bit domain, so I have hard-coded the domain type to
>> DOMAIN_64BIT.
>> Am I correct?
>
> If Xen thinks the domain is 32-bit at restore, it must be a bug.
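
OK, thanks. Rather than hard-coding DOMAIN_64BIT in cpu_load() as my 
patch below does, would it be acceptable to carry the width in the 
saved context? Just a sketch of what I have in mind; the is_64bit 
field is my own addition, not an existing part of hvm_hw_cpu:

    /* In struct hvm_hw_cpu (public/arch-arm/hvm/save.h): */
    uint32_t is_64bit;

    /* In cpu_save(): */
    ctxt.is_64bit = is_64bit_domain(d);

    /* In cpu_load(), mirroring where my patch sets the type today: */
    v->arch.type = ctxt.is_64bit ? DOMAIN_64BIT : DOMAIN_32BIT;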
>
>
>> 3. How can I dump all of the VM's state? I only found that xenctx can dump the vCPU registers.
>
> You can use the hypervisor console via the ctrl-aaa menu.
>
>
>> I have attached my patch and log below.
>>
>> Looking forward to your feedback.
>> Thanks
>>
>> xl list
>> Name                                        ID   Mem VCPUs      State   Time(s)
>> Domain-0                                     0  1024     8     r-----      11.7
>> root@linaro-alip:~# xl create guest.cfg
>> Parsing config from guest.cfg
>> [   39.238723] xen-blkback: ring-ref 8, event-channel 3, protocol 1 (arm-abi) persistent grants
>>
>> root@linaro-alip:~# xl save -p guest memState
>> Saving to memState new xl format (info 0x3/0x0/931)
>> xc: info: Saving domain 1, type ARM
>> (XEN) HVM1 save: VCPU
>> (XEN) HVM1 save: A15_TIMER
>> (XEN) HVM1 save: GICV2_GICD
>> (XEN) HVM1 save: GICV2_GICC
>> (XEN) HVM1 save: GICV3
>> root@linaro-alip:~# /usr/lib/xen/bin/xenctx -a 1
>> PC:       ffffffc0000ab028
>> LR:       ffffffc00050458c
>> ELR_EL1:  ffffffc000086b34
>> CPSR:     200001c5
>> SPSR_EL1: 60000145
>> SP_EL0:   0000007ff6f2a850
>> SP_EL1:   ffffffc0140a7ca0
>>
>>  x0: 0000000000000001    x1: 00000000deadbeef    x2: 0000000000000002
>>  x3: 0000000000000002    x4: 0000000000000004    x5: 0000000000000000
>>  x6: 000000000000001b    x7: 0000000000000001    x8: 000000618e589e00
>>  x9: 0000000000000000   x10: 0000000000000000   x11: 0000000000000000
>> x12: 00000000000001a3   x13: 000000001911a7d9   x14: 0000000000002ee0
>> x15: 0000000000000005   x16: 00000000deadbeef   x17: 0000000000000001
>> x18: 0000000000000007   x19: 0000000000000000   x20: ffffffc014163d58
>> x21: ffffffc014163cd8   x22: 0000000000000001   x23: 0000000000000140
>> x24: ffffffc000d5bb18   x25: ffffffc014163cd8   x26: 0000000000000000
>> x27: 0000000000000000   x28: 0000000000000000   x29: ffffffc0140a7ca0
>>
>> SCTLR: 34d5d91d
>> TTBCR: 00000032b5193519
>> TTBR0: 002d000054876000
>> TTBR1: 0000000040dcf000
>> root@linaro-alip:~# xl destroy guest
>> (XEN) mm.c:1265:d0v1 gnttab_mark_dirty not implemented yet
>> root@linaro-alip:~# xl restore -p memState
>> Loading new save file memState (new xl fmt info 0x3/0x0/931)
>>  Savefile contains xl domain config in JSON format
>> Parsing config from <saved>
>> xc: info: (XEN) HVM2 restore: VCPU 0
>> Found ARM domain from Xen 4.7
>> xc: info: Restoring domain
>> (XEN) HVM2 restore: A15_TIMER 0
>> (XEN) HVM2 restore: A15_TIMER 0
>> (XEN) HVM2 restore: GICV2_GICD 0
>> (XEN) HVM2 restore: GICV2_GICC 0
>> (XEN) GICH_LRs (vcpu 0) mask=0
>> (XEN)    VCPU_LR[0]=0
>> (XEN)    VCPU_LR[1]=0
>> (XEN)    VCPU_LR[2]=0
>> (XEN)    VCPU_LR[3]=0
>> xc: info: Restore successful
>> xc: info: XenStore: mfn 0x39001, dom 0, evt 1
>> xc: info: Console: mfn 0x39000, dom 0, evt 2
>> root@linaro-alip:~# /usr/lib/xen/bin/xenctx -a 2
>> PC:       ffffffc0000ab028
>> LR:       ffffffc00050458c
>> ELR_EL1:  ffffffc000086b34
>> CPSR:     200001c5
>> SPSR_EL1: 60000145
>> SP_EL0:   0000007ff6f2a850
>> SP_EL1:   ffffffc0140a7ca0
>>
>>  x0: 0000000000000000    x1: 00000000deadbeef    x2: 0000000000000002
>>  x3: 0000000000000002    x4: 0000000000000004    x5: 0000000000000000
>>  x6: 000000000000001b    x7: 0000000000000001    x8: 000000618e589e00
>>  x9: 0000000000000000   x10: 0000000000000000   x11: 0000000000000000
>> x12: 00000000000001a3   x13: 000000001911a7d9   x14: 0000000000002ee0
>> x15: 0000000000000005   x16: 00000000deadbeef   x17: 0000000000000001
>> x18: 0000000000000007   x19: 0000000000000000   x20: ffffffc014163d58
>> x21: ffffffc014163cd8   x22: 0000000000000001   x23: 0000000000000140
>> x24: ffffffc000d5bb18   x25: ffffffc014163cd8   x26: 0000000000000000
>> x27: 0000000000000000   x28: 0000000000000000   x29: ffffffc0140a7ca0
>>
>> SCTLR: 34d5d91d
>> TTBCR: 00000000b5193519
>> TTBR0: 002d000054876000
>> TTBR1: 0000000040dcf000
>> root@linaro-alip:~# xl unpause guest
>> root@linaro-alip:~# xl list
>> Name                                        ID   Mem VCPUs      State   Time(s)
>> Domain-0                                     0  1024     8     r-----      22.2
>> guest                                        2     0     1     r-----       4.8
>> root@linaro-alip:~# /usr/lib/xen/bin/xenctx -a 2
>> PC:       ffffffc000084a00
>> LR:       ffffffc00050458c
>> ELR_EL1:  ffffffc000084a00
>> CPSR:     000003c5
>> SPSR_EL1: 000003c5
>> SP_EL0:   0000007ff6f2a850
>> SP_EL1:   ffffffc0140a7ca0
>>
>>  x0: 0000000000000000    x1: 00000000deadbeef    x2: 0000000000000002
>>  x3: 0000000000000002    x4: 0000000000000004    x5: 0000000000000000
>>  x6: 000000000000001b    x7: 0000000000000001    x8: 000000618e589e00
>>  x9: 0000000000000000   x10: 0000000000000000   x11: 0000000000000000
>> x12: 00000000000001a3   x13: 000000001911a7d9   x14: 0000000000002ee0
>> x15: 0000000000000005   x16: 00000000deadbeef   x17: 0000000000000001
>> x18: 0000000000000007   x19: 0000000000000000   x20: ffffffc014163d58
>> x21: ffffffc014163cd8   x22: 0000000000000001   x23: 0000000000000140
>> x24: ffffffc000d5bb18   x25: ffffffc014163cd8   x26: 0000000000000000
>> x27: 0000000000000000   x28: 0000000000000000   x29: ffffffc0140a7ca0
>>
>> SCTLR: 34d5d91d
>> TTBCR: 00000000b5193519
>> TTBR0: 002d000054876000
>> TTBR1: 0000000040dcf000
>> root@linaro-alip:~# xl console guest
>> xenconsole: Could not read tty from store: Success
>> root@linaro-alip:~#
>>
>>
>> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
>> index aee3353..411bab4 100644
>> --- a/xen/arch/arm/hvm.c
>> +++ b/xen/arch/arm/hvm.c
>> @@ -120,7 +120,8 @@ static int cpu_save(struct domain *d, hvm_domain_context_t *h)
>>          ctxt.dfar = v->arch.dfar;
>>          ctxt.dfsr = v->arch.dfsr;
>>  #else
>> -        /* XXX 64-bit */
>> +       ctxt.far = v->arch.far;
>> +       ctxt.esr = v->arch.esr;
>>  #endif
>>
>>  #ifdef CONFIG_ARM_32
>> @@ -187,6 +188,9 @@ static int cpu_load(struct domain *d, hvm_domain_context_t *h)
>>      if ( hvm_load_entry(VCPU, h, &ctxt) != 0 )
>>          return -EINVAL;
>>
>> +#ifdef CONFIG_ARM64
>> +    v->arch.type = DOMAIN_64BIT;
>> +#endif
>>      v->arch.sctlr = ctxt.sctlr;
>>      v->arch.ttbr0 = ctxt.ttbr0;
>>      v->arch.ttbr1 = ctxt.ttbr1;
>> @@ -199,7 +203,8 @@ static int cpu_load(struct domain *d, hvm_domain_context_t *h)
>>      v->arch.dfar = ctxt.dfar;
>>      v->arch.dfsr = ctxt.dfsr;
>>  #else
>> -    /* XXX 64-bit */
>> +    v->arch.far = ctxt.far;
>> +    v->arch.esr = ctxt.esr;
>>  #endif
>>
>>  #ifdef CONFIG_ARM_32
>> diff --git a/xen/include/public/arch-arm/hvm/save.h b/xen/include/public/arch-arm/hvm/save.h
>> index db916b1..89e6e89 100644
>> --- a/xen/include/public/arch-arm/hvm/save.h
>> +++ b/xen/include/public/arch-arm/hvm/save.h
>> @@ -46,8 +46,12 @@ DECLARE_HVM_SAVE_TYPE(HEADER, 1, struct hvm_save_header);
>>
>>  struct hvm_hw_cpu
>>  {
>> +#ifdef CONFIG_ARM_32
>>      uint64_t vfp[34]; /* Vector floating pointer */
>>      /* VFP v3 state is 34x64 bit, VFP v4 is not yet supported */
>> +#else
>> +    uint64_t vfp[66];
>> +#endif
>>
>>      /* Guest core registers */
>>      struct vcpu_guest_core_regs core_regs;
>> @@ -60,6 +64,9 @@ struct hvm_hw_cpu
>>      uint32_t dacr;
>>      uint64_t par;
>>
>> +    uint64_t far;
>> +    uint64_t esr;
>> +
>>      uint64_t mair0, mair1;
>>      uint64_t tpidr_el0;
>>      uint64_t tpidr_el1;
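
A note on the vfp[66] sizing in the hunk above, since I did not explain 
it in the original mail: my assumption is 32 x 128-bit V registers, 
i.e. 64 x 64-bit halves, plus FPSR and FPCR, hence 66 slots. The layout 
I had in mind, with illustrative field names rather than the actual Xen 
vfp_state ones, is:

    union vfp_save_area_example {
        uint64_t vfp[66];
        struct {
            uint64_t v[64];   /* V0..V31, two 64-bit halves each */
            uint64_t fpsr;    /* stored widened to 64 bits */
            uint64_t fpcr;
        } regs;
    };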
>>
>>
>>
>> [1] http://lists.xen.org/archives/html/xen-devel/2015-12/msg01053.html
>> [2] http://lists.xen.org/archives/html/xen-devel/2014-04/msg01544.html
>>

