From: Keqian Zhu
To: Alex Williamson, Kirti Wankhede, Cornelia Huck, Will Deacon,
	Marc Zyngier, Catalin Marinas
CC: Mark Rutland, James Morse, Robin Murphy, Joerg Roedel,
	Daniel Lezcano, Thomas Gleixner, Suzuki K Poulose, Julien Thierry,
	Andrew Morton, Alexios Zavras
Subject: [RESEND PATCH v2 1/2] vfio/iommu_type1: Populate full dirty when detach non-pinned group
Date: Fri, 15 Jan 2021 18:13:20 +0800
Message-ID: <20210115101321.12084-2-zhukeqian1@huawei.com>
In-Reply-To: <20210115101321.12084-1-zhukeqian1@huawei.com>
References: <20210115101321.12084-1-zhukeqian1@huawei.com>
X-Mailing-List: kvm@vger.kernel.org

If a group with non-pinned-page dirty scope is detached with dirty logging
enabled, we should fully populate the dirty bitmaps at the time it's removed
since we don't know the extent of its previous DMA, nor will the group be
present to trigger the full bitmap when the user retrieves the dirty bitmap.
Fixes: d6a4c185660c ("vfio iommu: Implementation of ioctl for dirty pages tracking")
Suggested-by: Alex Williamson
Signed-off-by: Keqian Zhu
---
 drivers/vfio/vfio_iommu_type1.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 0b4dedaa9128..c16924cd54e7 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -236,6 +236,19 @@ static void vfio_dma_populate_bitmap(struct vfio_dma *dma, size_t pgsize)
 	}
 }
 
+static void vfio_iommu_populate_bitmap_full(struct vfio_iommu *iommu)
+{
+	struct rb_node *n;
+	unsigned long pgshift = __ffs(iommu->pgsize_bitmap);
+
+	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
+		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
+
+		if (dma->iommu_mapped)
+			bitmap_set(dma->bitmap, 0, dma->size >> pgshift);
+	}
+}
+
 static int vfio_dma_bitmap_alloc_all(struct vfio_iommu *iommu, size_t pgsize)
 {
 	struct rb_node *n;
@@ -2415,8 +2428,12 @@ static void vfio_iommu_type1_detach_group(void *iommu_data,
 	 * Removal of a group without dirty tracking may allow the iommu scope
 	 * to be promoted.
 	 */
-	if (update_dirty_scope)
+	if (update_dirty_scope) {
 		update_pinned_page_dirty_scope(iommu);
+		/* Promote pinned_scope successfully during dirty tracking? */
+		if (iommu->dirty_page_tracking && iommu->pinned_page_dirty_scope)
+			vfio_iommu_populate_bitmap_full(iommu);
+	}
 	mutex_unlock(&iommu->lock);
 }
-- 
2.19.1
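
For context, a minimal userspace sketch of where this change becomes visible:
the bits set by vfio_iommu_populate_bitmap_full() are what the user reads back
through the existing VFIO_IOMMU_DIRTY_PAGES ioctl from linux/vfio.h. The helper
name get_dirty_bitmap(), the container_fd parameter, and the iova/size/pgsize
values are placeholders of my own, not something defined by this patch.

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Illustrative only: fetch the dirty bitmap for one IOVA range from a
 * type1 container fd. Dirty tracking must already have been started with
 * VFIO_IOMMU_DIRTY_PAGES_FLAG_START. With this patch applied, mappings
 * reachable by a non-pinned group that was detached in the meantime are
 * reported fully dirty here.
 */
static int get_dirty_bitmap(int container_fd, uint64_t iova, uint64_t size,
			    uint64_t pgsize, uint64_t *bitmap)
{
	size_t argsz = sizeof(struct vfio_iommu_type1_dirty_bitmap) +
		       sizeof(struct vfio_iommu_type1_dirty_bitmap_get);
	struct vfio_iommu_type1_dirty_bitmap *dbitmap = calloc(1, argsz);
	struct vfio_iommu_type1_dirty_bitmap_get *range;
	int ret;

	if (!dbitmap)
		return -1;

	dbitmap->argsz = argsz;
	dbitmap->flags = VFIO_IOMMU_DIRTY_PAGES_FLAG_GET_BITMAP;

	range = (struct vfio_iommu_type1_dirty_bitmap_get *)dbitmap->data;
	range->iova = iova;
	range->size = size;
	/* pgsize must be one of the IOMMU-supported page sizes */
	range->bitmap.pgsize = pgsize;
	/* one bit per page, buffer rounded up to a multiple of 8 bytes */
	range->bitmap.size = ((size / pgsize + 63) / 64) * 8;
	range->bitmap.data = bitmap;	/* caller-provided buffer of that size */

	ret = ioctl(container_fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
	free(dbitmap);
	return ret;
}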