Subject: Re: [PATCH v9 Kernel 2/5] vfio iommu: Add ioctl defination to get dirty pages bitmap.
From: Kirti Wankhede
Date: Thu, 5 Dec 2019 11:12:23 +0530
To: Yan Zhao, Alex Williamson
Cc: "Zhengxiao.zx@Alibaba-inc.com", "Tian, Kevin", "Liu, Yi L",
    "cjia@nvidia.com", "kvm@vger.kernel.org", "eskultet@redhat.com",
    "Yang, Ziye", "qemu-devel@nongnu.org", "cohuck@redhat.com",
    "shuangtai.tst@alibaba-inc.com", "dgilbert@redhat.com", "Wang, Zhi A",
    "mlevitsk@redhat.com", "pasic@linux.ibm.com", "aik@ozlabs.ru",
    "eauger@redhat.com", "felipe@nutanix.com", "jonathan.davies@nutanix.com",
    "Liu, Changpeng", "Ken.Xue@amd.com"
In-Reply-To: <20191205012835.GB31791@joy-OptiPlex-7040>
References: <1573578220-7530-3-git-send-email-kwankhede@nvidia.com>
    <20191112153020.71406c44@x1.home>
    <324ce4f8-d655-ee37-036c-fc9ef9045bef@nvidia.com>
    <20191113130705.32c6b663@x1.home>
    <7f74a2a1-ba1c-9d4c-dc5e-343ecdd7d6d6@nvidia.com>
    <20191114140625.213e8a99@x1.home>
    <20191126005739.GA31144@joy-OptiPlex-7040>
    <20191203110412.055c38df@x1.home>
    <20191204113457.16c1316d@x1.home>
    <20191205012835.GB31791@joy-OptiPlex-7040>

On 12/5/2019 6:58 AM, Yan Zhao wrote:
> On Thu, Dec 05, 2019 at 02:34:57AM +0800, Alex Williamson wrote:
>> On Wed, 4 Dec 2019 23:40:25 +0530
>> Kirti Wankhede wrote:
>>
>>> On 12/3/2019 11:34 PM, Alex Williamson wrote:
>>>> On Mon, 25 Nov 2019 19:57:39 -0500
>>>> Yan Zhao wrote:
>>>>
>>>>> On Fri, Nov 15, 2019 at 05:06:25AM +0800, Alex Williamson wrote:
>>>>>> On Fri, 15 Nov 2019 00:26:07 +0530
>>>>>> Kirti Wankhede wrote:
>>>>>>
>>>>>>> On 11/14/2019 1:37 AM, Alex Williamson wrote:
>>>>>>>> On Thu, 14 Nov 2019 01:07:21 +0530
>>>>>>>> Kirti Wankhede wrote:
>>>>>>>>
>>>>>>>>> On 11/13/2019 4:00 AM, Alex Williamson wrote:
>>>>>>>>>> On Tue, 12 Nov 2019 22:33:37 +0530
>>>>>>>>>> Kirti Wankhede wrote:
>>>>>>>>>>
>>>>>>>>>>> All pages pinned by vendor driver through vfio_pin_pages API should be
>>>>>>>>>>> considered as dirty during migration. IOMMU container maintains a list of
>>>>>>>>>>> all such pinned pages. Added an ioctl defination to get bitmap of such
>>>>>>>>>>
>>>>>>>>>> definition
>>>>>>>>>>
>>>>>>>>>>> pinned pages for requested IO virtual address range.
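As a rough sketch of the kind of interface the quoted commit message describes,
userspace would name an IO virtual address range and hand the kernel a buffer
that is filled with one bit per dirtied page; a bit would be set for every page
the vendor driver pinned through vfio_pin_pages() while dirty tracking was
active. The structure and field names below are placeholders for illustration,
not the definitions added by the patch:

  #include <linux/types.h>

  /* Illustrative sketch only; names and layout are hypothetical. */
  struct example_iommu_dirty_bitmap {
          __u32 argsz;            /* total size of this structure */
          __u32 flags;
          __u64 iova;             /* start of the IO virtual address range */
          __u64 size;             /* length of the range in bytes */
          __u64 bitmap_size;      /* size of the user-supplied buffer in bytes */
          __u64 bitmap;           /* user pointer: one bit per page, set if dirty */
  };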
>>>>>>>>>>
>>>>>>>>>> Additionally, all mapped pages are considered dirty when physically
>>>>>>>>>> mapped through to an IOMMU, modulo we discussed devices opting in to
>>>>>>>>>> per page pinning to indicate finer granularity with a TBD mechanism to
>>>>>>>>>> figure out if any non-opt-in devices remain.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> You mean, in case of device direct assignment (device pass through)?
>>>>>>>>
>>>>>>>> Yes, or IOMMU backed mdevs. If vfio_dmas in the container are fully
>>>>>>>> pinned and mapped, then the correct dirty page set is all mapped pages.
>>>>>>>> We discussed using the vpfn list as a mechanism for vendor drivers to
>>>>>>>> reduce their migration footprint, but we also discussed that we would
>>>>>>>> need a way to determine that all participants in the container have
>>>>>>>> explicitly pinned their working pages or else we must consider the
>>>>>>>> entire potential working set as dirty.
>>>>>>>>
>>>>>>>
>>>>>>> How can the vendor driver tell this capability to the iommu module? Any suggestions?
>>>>>>
>>>>>> I think it does so by pinning pages. Is it acceptable that if the
>>>>>> vendor driver pins any pages, then from that point forward we consider
>>>>>> the IOMMU group dirty page scope to be limited to pinned pages? There
>>>>> we should also be aware that the dirty page scope is pinned pages + unpinned pages,
>>>>> which means that once a page is pinned, it should be regarded as dirty
>>>>> no matter whether it's unpinned later. Only after log_sync is called and
>>>>> the dirty info retrieved should its dirty state be cleared.
>>>>
>>>> Yes, good point. We can't just remove a vpfn when a page is unpinned
>>>> or else we'd lose information that the page potentially had been
>>>> dirtied while it was pinned. Maybe that vpfn needs to move to a dirty
>>>> list, and both the currently pinned vpfns and the dirty vpfns are walked
>>>> on a log_sync. The dirty vpfns list would be cleared after a log_sync.
>>>> The container would need to know that dirty tracking is enabled and
>>>> only manage the dirty vpfns list when necessary. Thanks,
>>>>
>>>
>>> If a page is unpinned, then that page is available in the free page pool
>>> for others to use, so how can we say that the unpinned page has valid data?
>>>
>>> Suppose driver A unpins a page, and driver B of some other device then
>>> gets that page, pins it, uses it, and unpins it; how can we say that page
>>> has valid data for driver A?
>>>
>>> Can you give one example where unpinned page data is considered reliable
>>> and valid?
>>
>> We can only pin pages that the user has already allocated* and mapped
>> through the vfio DMA API. Pinning a page simply locks the
>> page for the vendor driver to access it, and unpinning that page only
>> indicates that access is complete. Pages are not freed when a vendor
>> driver unpins them; they still exist, and at this point we're now
>> assuming the device dirtied the page while it was pinned. Thanks,
>>
>> Alex
>>
>> * An exception here is that the page might be demand allocated and the
>> act of pinning the page could actually allocate the backing page for
>> the user if they have not faulted the page to trigger that allocation
>> previously. That page remains mapped in the user's virtual address
>> space even after the unpinning though.
>>
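To make the pin/unpin semantics described above concrete, a vendor driver's
access to a guest page goes roughly like the sketch below. This is a minimal
sketch, assuming the vfio_pin_pages()/vfio_unpin_pages() kernel interfaces of
this era (pin by user PFN, receive a host PFN back); the function
example_access_guest_page() and its error handling are illustrative only:

  #include <linux/iommu.h>
  #include <linux/vfio.h>

  /*
   * Sketch: a vendor driver (e.g. an mdev driver) touching one guest page.
   * 'dev' is the mdev device and 'gfn' the guest page frame it needs.
   */
  static int example_access_guest_page(struct device *dev, unsigned long gfn)
  {
          unsigned long pfn;
          int ret;

          /*
           * Pinning locks the page so the vendor driver (or its device) can
           * access it; with dirty tracking enabled the container records it
           * as potentially dirty from this point on.
           */
          ret = vfio_pin_pages(dev, &gfn, 1, IOMMU_READ | IOMMU_WRITE, &pfn);
          if (ret != 1)
                  return ret < 0 ? ret : -EINVAL;

          /* ... map 'pfn' and let the device DMA to/from it ... */

          /*
           * Unpinning only says the access is complete; the page is not freed
           * and still has to be reported dirty until the next log_sync.
           */
          vfio_unpin_pages(dev, &gfn, 1);
          return 0;
  }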
>
> Yes, I can give an example in GVT.
> When a gem_object is allocated in the guest, before submitting it to the guest
> vGPU, gfx cmds in its ring buffer need to be pinned into GGTT to get a
> global graphics address for hardware access. At that time, we shadow
> those cmds and pin pages through vfio_pin_pages(), and submit the shadow
> gem_object to the physical hardware.
> After the guest driver thinks the submitted gem_object has completed hardware
> DMA, it unpins those pinned GGTT graphics memory addresses. Then in the
> host, we unpin the shadow pages through vfio_unpin_pages().
> But, at this point, the guest driver is still free to access the gem_object
> through vCPUs, and guest user space is probably still mapping an object
> into the gem_object in the guest driver.
> So, missing dirty page tracking for unpinned pages would cause
> data inconsistency.
>

If pages are accessed by the guest through vCPUs, then the RAM module in QEMU
will take care of tracking those pages as dirty. All unpinned pages might not
be in use, so tracking all unpinned pages over the VM or application lifetime
would also mean tracking lots of stale pages, even though they are no longer
being used. Reporting that growing set of unneeded pages inflates the
migration data and in turn increases migration downtime.

Thanks,
Kirti
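For reference, the container-side bookkeeping Alex suggests above (move a vpfn
to a dirty list when it is unpinned, walk both the pinned and dirty lists on
log_sync, then clear the dirty list) could look roughly like the sketch below.
All structure and function names here are placeholders rather than code from
the patch, and per-vpfn allocation/freeing is omitted:

  #include <linux/list.h>
  #include <linux/bitops.h>
  #include <linux/mm.h>         /* PAGE_SHIFT */

  struct example_vpfn {
          struct list_head node;
          unsigned long iova;
  };

  struct example_dma {
          struct list_head pinned_vpfns;  /* pages currently pinned by vendor drivers */
          struct list_head dirty_vpfns;   /* pages unpinned since the last log_sync */
          bool dirty_tracking;
  };

  /* Called when a vendor driver unpins a page. */
  static void example_unpin(struct example_dma *dma, struct example_vpfn *vpfn)
  {
          if (dma->dirty_tracking)
                  /* Keep the record: the page may have been dirtied while pinned. */
                  list_move(&vpfn->node, &dma->dirty_vpfns);
          else
                  list_del(&vpfn->node);  /* no tracking in progress, drop it */
  }

  /* Called on log_sync to fill the user-visible dirty bitmap. */
  static void example_log_sync(struct example_dma *dma, unsigned long *bitmap)
  {
          struct example_vpfn *vpfn, *tmp;

          /* Currently pinned pages are always reported as dirty... */
          list_for_each_entry(vpfn, &dma->pinned_vpfns, node)
                  set_bit(vpfn->iova >> PAGE_SHIFT, bitmap);

          /* ...and so are pages unpinned since the last sync, after which
           * the dirty list is cleared. */
          list_for_each_entry_safe(vpfn, tmp, &dma->dirty_vpfns, node) {
                  set_bit(vpfn->iova >> PAGE_SHIFT, bitmap);
                  list_del(&vpfn->node);
          }
  }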