Date: Mon, 29 Apr 2019 17:22:37 +0100
From: "Dr. David Alan Gilbert"
Subject: Re: [Qemu-devel] Fwd: How live migration work for vhost-user
Message-ID: <20190429162236.GJ2748@work-vm>
References: <20190415145358.GA2893@work-vm> <20190416084738.GA3123@work-vm>
To: fengyd
Cc: qemu-devel@nongnu.org

* fengyd (fengyd81@gmail.com) wrote:
> Hi,
>
> For vhost, last_avail_idx is maintained in vhost_virtqueue,
> but during live migration last_avail_idx is fetched from VirtQueue.
> Do you know how these two last_avail_idx values are synchronized?
>
> virtio_load-related code which is called during live migration:
>
>     vdev->vq[i].inuse = (uint16_t)(vdev->vq[i].last_avail_idx -
>                                    vdev->vq[i].used_idx);
>     if (vdev->vq[i].inuse > vdev->vq[i].vring.num) {
>         error_report("VQ %d size 0x%x < last_avail_idx 0x%x - "
>                      "used_idx 0x%x",
>                      i, vdev->vq[i].vring.num,
>                      vdev->vq[i].last_avail_idx,

I don't know that code well, but I think the answer is that since the
queues themselves are in guest memory, that memory is migrated by the
normal migration code, and so the queues' version of last_avail_idx
should be correct.
The 'log' mechanism I previously mentioned will need to make sure the
queue pages are marked dirty, so that these are updated correctly.

Dave

> Thanks
>
> On Tue, 23 Apr 2019 at 14:20, fengyd wrote:
>
> > Hi,
> >
> > I want to add some logging to qemu-kvm-ev.
> > Do you know how to compile qemu-kvm-ev from source code?
> >
> > Thanks
> >
> > Yafeng
> >
> > On Tue, 16 Apr 2019 at 16:47, Dr. David Alan Gilbert wrote:
> >
> >> * fengyd (fengyd81@gmail.com) wrote:
> >> > ---------- Forwarded message ---------
> >> > From: fengyd
> >> > Date: Tue, 16 Apr 2019 at 09:17
> >> > Subject: Re: [Qemu-devel] How live migration work for vhost-user
> >> > To: Dr. David Alan Gilbert
> >> >
> >> > Hi,
> >> >
> >> > Does any special feature need to be supported by the guest driver?
> >> > I ask because it works for a standard Linux VM, but not for our VM,
> >> > where virtio is implemented by ourselves.
> >>
> >> I'm not sure; you do have to support that 'log' mechanism, but I don't
> >> know what else is needed.
> >>
> >> > And with qemu-kvm-ev-2.6, live migration does work with our VM where
> >> > virtio is implemented by ourselves.
> >>
> >> 2.6 is pretty old, so there have been a lot of changes - I'm not sure
> >> what's relevant.
> >>
> >> Dave
> >>
> >> > Thanks
> >> > Yafeng
> >> >
> >> > On Mon, 15 Apr 2019 at 22:54, Dr.
David Alan Gilbert <dgilbert@redhat.com> wrote:
> >> >
> >> > > * fengyd (fengyd81@gmail.com) wrote:
> >> > > > Hi,
> >> > > >
> >> > > > During live migration, the following log can be seen in
> >> > > > nova-compute.log in my environment:
> >> > > > ERROR nova.virt.libvirt.driver [req-039a85e1-e7a1-4a63-bc6d-c4b9a044aab6
> >> > > > 0cdab20dc79f4bc6ae5790e7b4a898ac 3363c319773549178acc67f32c78310e -
> >> > > > default default] [instance: 5ec719f4-1865-4afe-a207-3d9fae22c410]
> >> > > > Live Migration failure: internal error: qemu unexpectedly closed
> >> > > > the monitor: 2019-04-15T02:58:22.213897Z qemu-kvm: VQ 0
> >> > > > size 0x100 < last_avail_idx 0x1e - used_idx 0x23
> >> > > >
> >> > > > It works for a standard Linux VM, but not for our VM, where virtio
> >> > > > is implemented by ourselves.
> >> > > > KVM versions are as follows:
> >> > > > qemu-kvm-common-ev-2.12.0-18.el7_6.3.1.x86_64
> >> > > > qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
> >> > > > libvirt-daemon-kvm-3.9.0-14.2.el7.centos.ncir.8.x86_64
> >> > > >
> >> > > > Do you know what the difference is between virtio and vhost-user
> >> > > > during migration?
> >> > > > The function virtio_load in Qemu is called for both virtio and
> >> > > > vhost-user during migration.
> >> > > > For virtio, last_avail_idx and used_idx are stored in Qemu, and
> >> > > > Qemu is responsible for updating their values accordingly.
> >> > > > For vhost-user, last_avail_idx and used_idx are stored in the
> >> > > > vhost-user app, e.g. DPDK, not in Qemu?
> >> > > > How does migration work for vhost-user?
> >> > >
> >> > > I don't know the details, but my understanding is that vhost-user
> >> > > tells the vhost-user client about an area of 'log' memory, where the
> >> > > vhost-user client must mark pages as dirty.
> >> > >
> >> > > In the qemu source, see docs/interop/vhost-user.txt and see
> >> > > the VHOST_SET_LOG_BASE and VHOST_USER_SET_LOG_FD calls.
> >> > >
> >> > > If the client correctly marks the areas as dirty, then qemu
> >> > > should resend those pages across.
> >> > >
> >> > >
> >> > > Dave
> >> > >
> >> > > > Thanks in advance
> >> > > > Yafeng
> >> > > --
> >> > > Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >> > >
> >> --
> >> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> >>
> >
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK