Date: Tue, 23 Jul 2019 14:52:40 +0200
From: Cornelia Huck <cohuck@redhat.com>
To: Kirti Wankhede <kwankhede@nvidia.com>
Message-ID: <20190723145240.7270ace4.cohuck@redhat.com>
In-Reply-To: <1562665760-26158-6-git-send-email-kwankhede@nvidia.com>
References: <1562665760-26158-1-git-send-email-kwankhede@nvidia.com>
 <1562665760-26158-6-git-send-email-kwankhede@nvidia.com>
Organization: Red Hat GmbH
Subject: Re: [Qemu-devel] [PATCH v7 05/13] vfio: Add migration region
 initialization and finalize function
Cc: kevin.tian@intel.com, yi.l.liu@intel.com, cjia@nvidia.com,
 eskultet@redhat.com, ziye.yang@intel.com, qemu-devel@nongnu.org,
 Zhengxiao.zx@Alibaba-inc.com, shuangtai.tst@alibaba-inc.com,
 dgilbert@redhat.com, zhi.a.wang@intel.com, mlevitsk@redhat.com,
 pasic@linux.ibm.com, aik@ozlabs.ru, alex.williamson@redhat.com,
 eauger@redhat.com, felipe@nutanix.com, jonathan.davies@nutanix.com,
 yan.y.zhao@intel.com, changpeng.liu@intel.com, Ken.Xue@amd.com

On Tue, 9 Jul 2019 15:19:12 +0530
Kirti Wankhede <kwankhede@nvidia.com> wrote:

> - Migration functions are implemented for VFIO_DEVICE_TYPE_PCI device in this
>   patch series.
> - VFIO device supports migration or not is decided based of migration region
>   query. If migration region query is successful and migration region
>   initialization is successful then migration is supported else migration is
>   blocked.
>
> Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
> Reviewed-by: Neo Jia <cjia@nvidia.com>
> ---
>  hw/vfio/Makefile.objs         |   2 +-
>  hw/vfio/migration.c           | 145 ++++++++++++++++++++++++++++++++++++++++++
>  hw/vfio/trace-events          |   3 +
>  include/hw/vfio/vfio-common.h |  14 ++++
>  4 files changed, 163 insertions(+), 1 deletion(-)
>  create mode 100644 hw/vfio/migration.c
>
(...)

> diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
> new file mode 100644
> index 000000000000..a2cfbd5af2e1
> --- /dev/null
> +++ b/hw/vfio/migration.c
> @@ -0,0 +1,145 @@
> +/*
> + * Migration support for VFIO devices
> + *
> + * Copyright NVIDIA, Inc. 2019
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2. See
> + * the COPYING file in the top-level directory.
> + */
> +
> +#include "qemu/osdep.h"
> +#include <linux/vfio.h>
> +
> +#include "hw/vfio/vfio-common.h"
> +#include "cpu.h"
> +#include "migration/migration.h"
> +#include "migration/qemu-file.h"
> +#include "migration/register.h"
> +#include "migration/blocker.h"
> +#include "migration/misc.h"
> +#include "qapi/error.h"
> +#include "exec/ramlist.h"
> +#include "exec/ram_addr.h"
> +#include "pci.h"
> +#include "trace.h"
> +
> +static void vfio_migration_region_exit(VFIODevice *vbasedev)
> +{
> +    VFIOMigration *migration = vbasedev->migration;
> +
> +    if (!migration) {
> +        return;
> +    }
> +
> +    if (migration->region.buffer.size) {
> +        vfio_region_exit(&migration->region.buffer);
> +        vfio_region_finalize(&migration->region.buffer);
> +    }
> +}
> +
> +static int vfio_migration_region_init(VFIODevice *vbasedev)
> +{
> +    VFIOMigration *migration = vbasedev->migration;
> +    Object *obj = NULL;
> +    int ret = -EINVAL;
> +
> +    if (!migration) {

You're checking for vbasedev->migration here...

> +        return ret;
> +    }
> +
> +    if (!vbasedev->ops || !vbasedev->ops->vfio_get_object) {
> +        return ret;
> +    }
> +
> +    obj = vbasedev->ops->vfio_get_object(vbasedev);
> +    if (!obj) {
> +        return ret;
> +    }
> +
> +    ret = vfio_region_setup(obj, vbasedev, &migration->region.buffer,
> +                            migration->region.index, "migration");
> +    if (ret) {
> +        error_report("%s: Failed to setup VFIO migration region %d: %s",
> +                     vbasedev->name, migration->region.index, strerror(-ret));
> +        goto err;
> +    }
> +
> +    if (!migration->region.buffer.size) {
> +        ret = -EINVAL;
> +        error_report("%s: Invalid region size of VFIO migration region %d: %s",
> +                     vbasedev->name, migration->region.index, strerror(-ret));
> +        goto err;
> +    }
> +
> +    return 0;
> +
> +err:
> +    vfio_migration_region_exit(vbasedev);
> +    return ret;
> +}
> +
> +static int vfio_migration_init(VFIODevice *vbasedev,
> +                               struct vfio_region_info *info)
> +{
> +    int ret;
> +
> +    vbasedev->migration = g_new0(VFIOMigration, 1);

...but you always allocate it right before calling the function above. What am
I missing?
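(Just to illustrate what I mean, and assuming the allocation really is always
done by the single caller below: the check could probably become a plain

    assert(migration);

or go away entirely. But maybe you have other callers in mind for later
patches.)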
> +    vbasedev->migration->region.index = info->index;
> +
> +    ret = vfio_migration_region_init(vbasedev);
> +    if (ret) {
> +        error_report("%s: Failed to initialise migration region",
> +                     vbasedev->name);
> +        return ret;

It feels a bit odd that you don't free ->migration again here, but delay it
until finalize.

> +    }
> +
> +    return 0;
> +}
> +
> +/* ---------------------------------------------------------------------- */
> +
> +int vfio_migration_probe(VFIODevice *vbasedev, Error **errp)
> +{
> +    struct vfio_region_info *info;
> +    Error *local_err = NULL;
> +    int ret;
> +
> +    ret = vfio_get_dev_region_info(vbasedev, VFIO_REGION_TYPE_MIGRATION,
> +                                   VFIO_REGION_SUBTYPE_MIGRATION, &info);
> +    if (ret) {
> +        goto add_blocker;

So you don't even call init if the region is not present (which seems
reasonable)...

> +    }
> +
> +    ret = vfio_migration_init(vbasedev, info);
> +    if (ret) {
> +        goto add_blocker;
> +    }
> +
> +    trace_vfio_migration_probe(vbasedev->name, info->index);
> +    return 0;
> +
> +add_blocker:
> +    error_setg(&vbasedev->migration_blocker,
> +               "VFIO device doesn't support migration");
> +    ret = migrate_add_blocker(vbasedev->migration_blocker, &local_err);
> +    if (local_err) {
> +        error_propagate(errp, local_err);
> +        error_free(vbasedev->migration_blocker);
> +    }
> +    return ret;
> +}
> +
> +void vfio_migration_finalize(VFIODevice *vbasedev)
> +{
> +    if (!vbasedev->migration) {

...but you're doing a quick exit here in that case. Shouldn't you get rid of
the blocker here?

> +        return;
> +    }
> +
> +    if (vbasedev->migration_blocker) {
> +        migrate_del_blocker(vbasedev->migration_blocker);
> +        error_free(vbasedev->migration_blocker);
> +    }
> +
> +    vfio_migration_region_exit(vbasedev);
> +    g_free(vbasedev->migration);
> +}
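Something like the following (completely untested, just to illustrate the
ordering I have in mind) would also clean up the blocker for devices that
never got a migration region, i.e. when probe went down the add_blocker path
without allocating ->migration:

void vfio_migration_finalize(VFIODevice *vbasedev)
{
    /*
     * Drop the blocker first: it may have been added in the probe
     * error path even though ->migration was never allocated.
     */
    if (vbasedev->migration_blocker) {
        migrate_del_blocker(vbasedev->migration_blocker);
        error_free(vbasedev->migration_blocker);
        vbasedev->migration_blocker = NULL;
    }

    if (!vbasedev->migration) {
        return;
    }

    vfio_migration_region_exit(vbasedev);
    g_free(vbasedev->migration);
    vbasedev->migration = NULL;
}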