From: Kirti Wankhede <kwankhede@nvidia.com>
Subject: [PATCH v9 Kernel 0/5] Add KABIs to support migration for VFIO devices
Date: Tue, 12 Nov 2019 22:33:35 +0530
Message-ID: <1573578220-7530-1-git-send-email-kwankhede@nvidia.com>
List-ID: kvm@vger.kernel.org

Hi Alex,

To keep the kernel and QEMU patches in sync, this patch set keeps the v9 version number. Up to v8, the KABI was discussed as part of the QEMU patch series [1].

Following the discussion on earlier versions and the in-person discussion at KVM Forum, this patch set adds:

* A new ioctl, VFIO_IOMMU_GET_DIRTY_BITMAP, to get the dirty pages bitmap with respect to the IOMMU container rather than per device.

  All pages pinned by the vendor driver through the vfio_pin_pages external API have to be marked dirty during migration. When there are CPU writes, CPU dirty page tracking can identify dirtied pages, but any page pinned by the vendor driver can also be written by the device. As of now no device has hardware support for dirty page tracking, so all pages pinned by the vendor driver are reported as dirty.

* A new ioctl, VFIO_IOMMU_UNMAP_DMA_GET_BITMAP, to get the dirty pages bitmap before unmapping an IO virtual address range.
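The "all pinned pages are reported dirty" rule above can be modeled with a small sketch. The struct layout, helper names, and 4K page-size assumption below are illustrative only, not the actual uapi proposed by this series:

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SHIFT 12 /* assume 4K pages for this sketch */

/* Toy model of per-container dirty tracking. Field and function names
 * are hypothetical, chosen only to illustrate the semantics described
 * in the cover letter. */
struct dirty_tracker {
    uint64_t iova_base;  /* start of the tracked IOVA range */
    size_t   npages;     /* number of pages in the range */
    uint8_t *bitmap;     /* one bit per page */
};

/* Mirrors the rule for vfio_pin_pages(): with no hardware dirty
 * tracking, a pinned page must be reported dirty, because the device
 * may write it at any time. */
static void track_pin(struct dirty_tracker *t, uint64_t iova)
{
    uint64_t bit = (iova - t->iova_base) >> PAGE_SHIFT;
    t->bitmap[bit / 8] |= (uint8_t)(1u << (bit % 8));
}

/* Models what VFIO_IOMMU_GET_DIRTY_BITMAP hands back to userspace:
 * a copy of the container-level bitmap covering the queried range. */
static void get_dirty_bitmap(const struct dirty_tracker *t, uint8_t *out)
{
    memcpy(out, t->bitmap, (t->npages + 7) / 8);
}
```

Because the bitmap is kept per container, pins done by any vendor driver in that container land in the same bitmap that userspace later queries.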
  With a vIOMMU, during the pre-copy phase of migration, while vCPUs are still running, an IO virtual address unmap can happen while the device still holds references to guest pfns. Those pages should be reported as dirty before the unmap, so that the VFIO userspace application can copy their content from source to destination.

Still TODO: since no device has hardware support for tracking a dirty bitmap of system memory, there is currently no API from the vendor driver to the VFIO IOMMU module for reporting dirty pages. In future, when such hardware support is implemented, an API will be needed so that the vendor driver can report dirty pages to the VFIO module during the migration phases.

[1] https://www.mail-archive.com/qemu-devel@nongnu.org/msg640400.html

Revision history from the previous QEMU patch set, to show how the KABI has changed so far:

v8 -> v9:
- Split the patch set in two: kernel and QEMU.
- The dirty pages bitmap is now queried from the IOMMU container rather than per device from the vendor driver. Added 2 ioctls to achieve this.

v7 -> v8:
- Updated comments for the KABI.
- Added a BAR address validation check during load of a PCI device's config space, as suggested by Dr. David Alan Gilbert.
- Changed vfio_migration_set_state() to set or clear device state flags.
- Some nit fixes.

v6 -> v7:
- Fixed build failures.

v5 -> v6:
- Fixed a build failure.

v4 -> v5:
- Added a descriptive comment about the sequence in which members of struct vfio_device_migration_info should be accessed, based on Alex's suggestion.
- Updated the get-dirty-pages sequence.
- As per Cornelia Huck's suggestion, added get_object, save_config and load_config callbacks to VFIODeviceOps.
- Fixed multiple nitpicks.
- Tested live migration with multiple VFIO devices assigned to a VM.

v3 -> v4:
- Added one more bit so that the _RESUMING flag must be set explicitly.
- The data_offset field is read-only for the userspace application.
- data_size is read on every iteration before reading data from the migration region, removing the assumption that the data extends to the end of the migration region.
- If the vendor driver supports mappable sparse regions, those regions are mapped during the setup state of save/load and unmapped in the corresponding cleanup routines.
- Handled a race condition that caused data corruption in the migration region while saving device state, by adding a mutex and serializing the save_buffer and get_dirty_pages routines.
- Skip the get_dirty_pages routine for mapped MMIO regions of the device.
- Added trace events.
- Split into multiple functional patches.

v2 -> v3:
- Removed the enum of VFIO device states. The VFIO device state is now defined with 2 bits.
- Restructured vfio_device_migration_info to keep it minimal, and defined the action taken on read and write access of each of its members.

v1 -> v2:
- Defined the MIGRATION region type and sub-type, which should be used with the region type capability.
- Restructured vfio_device_migration_info. This structure is placed at offset 0 of the migration region.
- Replaced the ioctl with read/write for the trapped part of the migration region.
- Added support for both types of access, trapped or mmapped, for the data section of the region.
- Moved PCI device functions to the PCI file.
- Added iteration to get the dirty page bitmap until the bitmap for all requested pages is copied.

Thanks,
Kirti

Kirti Wankhede (5):
  vfio: KABI for migration interface for device state
  vfio iommu: Add ioctl definition to get dirty pages bitmap
  vfio iommu: Add ioctl definition to unmap IOVA and return dirty bitmap
  vfio iommu: Implementation of ioctl to get dirty pages bitmap
  vfio iommu: Implementation of ioctl to get dirty bitmap before unmap

 drivers/vfio/vfio_iommu_type1.c | 163 ++++++++++++++++++++++++++++++++++++++-
 include/uapi/linux/vfio.h       | 164 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 325 insertions(+), 2 deletions(-)

-- 
2.7.0