From: Wen Congyang
Date: Fri, 3 Apr 2015 09:22:23 +0800
Message-ID: <551DEB4F.9050007@cn.fujitsu.com>
In-Reply-To: <20150402162601.GL15412@fam-t430.nay.redhat.com>
References: <1427981862-19783-1-git-send-email-pbonzini@redhat.com>
 <20150402143952.GJ15412@fam-t430.nay.redhat.com>
 <551D578B.5050102@redhat.com>
 <20150402151650.GK15412@fam-t430.nay.redhat.com>
 <551D5E61.7090305@redhat.com>
 <20150402162601.GL15412@fam-t430.nay.redhat.com>
Subject: Re: [Qemu-devel] [PATCH] virtio-blk: correctly dirty guest memory
To: Fam Zheng, Paolo Bonzini
Cc: kwolf@redhat.com, hangaohuai@huawei.com, zhang.zhanghailiang@huawei.com,
 lizhijian@cn.fujitsu.com, mst@redhat.com, qemu-devel@nongnu.org,
 dgilbert@redhat.com, arei.gonglei@huawei.com, stefanha@redhat.com,
 amit.shah@redhat.com, dgibson@redhat.com, peter.huangpeng@huawei.com

On 04/03/2015 12:26 AM, Fam Zheng wrote:
> On Thu, 04/02 17:21, Paolo Bonzini wrote:
>>
>>
>> On 02/04/2015 17:16, Fam Zheng wrote:
>>>>>>>>> After qemu_iovec_destroy, the QEMUIOVector's size is zeroed and
>>>>>>>>> the zero size ultimately is used to compute virtqueue_push's len
>>>>>>>>> argument. Therefore, reads from virtio-blk devices did not
>>>>>>>>> migrate their results correctly. (Writes were okay).
>>>>>>>
>>>>>>> Can't we move qemu_iovec_destroy to virtio_blk_free_request?
>>>>>
>>>>> You would still have to add more code to differentiate reads and
>>>>> writes---I think.
>>>
>>> Yeah, but the extra field will not be needed.
>>
>> Can you post an alternative patch? One small complication is that
>> is_write is in mrb but not in mrb->reqs[x]. virtio_blk_rw_complete is
>> already doing
>>
>>     int p = virtio_ldl_p(VIRTIO_DEVICE(req->dev), &req->out.type);
>>     bool is_read = !(p & VIRTIO_BLK_T_OUT);
>>
>> but only in a slow path.
>
> OK, so it looks like a new field is the simplest way to achieve this.
>
> There is another problem with your patch: read_size is not initialized in
> non-RW paths like scsi and flush.
>
> I think the optimization for writes is a separate thing, though. Shouldn't
> the patch below already fix the migration issue?
>
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index 000c38d..ee6e198 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -92,13 +92,6 @@ static void virtio_blk_rw_complete(void *opaque, int ret)
>          next = req->mr_next;
>          trace_virtio_blk_rw_complete(req, ret);
> 
> -        if (req->qiov.nalloc != -1) {
> -            /* If nalloc is != 1 req->qiov is a local copy of the original
> -             * external iovec. It was allocated in submit_merged_requests
> -             * to be able to merge requests. */
> -            qemu_iovec_destroy(&req->qiov);
> -        }
> -
>          if (ret) {
>              int p = virtio_ldl_p(VIRTIO_DEVICE(req->dev), &req->out.type);
>              bool is_read = !(p & VIRTIO_BLK_T_OUT);
> @@ -109,6 +102,13 @@ static void virtio_blk_rw_complete(void *opaque, int ret)
> 
>          virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
>          block_acct_done(blk_get_stats(req->dev->blk), &req->acct);
> +
> +        if (req->qiov.nalloc != -1) {
> +            /* This means req->qiov is a local copy of the original external
> +             * iovec. It was allocated in virtio_blk_submit_multireq in order
> +             * to merge requests. */
> +            qemu_iovec_destroy(&req->qiov);
> +        }

We will not get here on I/O failure, so this will cause a memory leak.

Thanks
Wen Congyang

>          virtio_blk_free_request(req);
>      }
>  }
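
For context, here is a minimal sketch of what Fam's earlier suggestion (moving
the destroy into virtio_blk_free_request) might look like against the same
2.3-era code. It is only an illustration, not necessarily what was eventually
merged upstream, and it assumes the VirtIOBlockReq layout used in the diff
quoted above. Requests that fail and are completed through
virtio_blk_handle_rw_error are also released with virtio_blk_free_request, so
the merged qiov would not leak on the error path Wen points out:

    /* Sketch only: free the merged qiov wherever the request is freed,
     * so both the success path and the error-completion path cover it.
     * Assumes the 2.3-era virtio_blk_free_request in hw/block/virtio-blk.c. */
    void virtio_blk_free_request(VirtIOBlockReq *req)
    {
        if (req) {
            if (req->qiov.nalloc != -1) {
                /* Merged request: req->qiov is the local copy allocated in
                 * virtio_blk_submit_multireq, so destroy it here on every
                 * completion path, not only on success. */
                qemu_iovec_destroy(&req->qiov);
            }
            g_free(req);
        }
    }

Assuming virtio_blk_req_complete still derives its virtqueue_push length from
req->qiov.size, deferring the destroy until virtio_blk_free_request (which
runs afterwards) keeps that length valid for reads, which is the property the
original fix is after; whether writes should avoid dirtying their full length
is, as Fam says, a separate optimization.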