From: Yi Sun
To: Keqian Zhu
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
	iommu@lists.linux-foundation.org, Will Deacon, Alex Williamson,
	Marc Zyngier, Catalin Marinas, Kirti Wankhede, Cornelia Huck,
	Mark Rutland, James Morse, Robin Murphy, Suzuki K Poulose,
	wanghaibin.wang@huawei.com, jiangkunkun@huawei.com,
	yuzenghui@huawei.com, lushenming@huawei.com, kevin.tian@intel.com,
	yan.y.zhao@intel.com, baolu.lu@linux.intel.com
Subject: Re: [RFC PATCH 10/11] vfio/iommu_type1: Optimize dirty bitmap population based on iommu HWDBM
Date: Sun, 7 Feb 2021 17:56:30 +0800
Message-ID: <20210207095630.GA28580@yi.y.sun>
References: <20210128151742.18840-1-zhukeqian1@huawei.com>
	<20210128151742.18840-11-zhukeqian1@huawei.com>
In-Reply-To: <20210128151742.18840-11-zhukeqian1@huawei.com>
User-Agent: Mutt/1.5.21 (2010-09-15)

Hi,

On 21-01-28 23:17:41, Keqian Zhu wrote:
[...]
> +static void vfio_dma_dirty_log_start(struct vfio_iommu *iommu,
> +				     struct vfio_dma *dma)
> +{
> +	struct vfio_domain *d;
> +
> +	list_for_each_entry(d, &iommu->domain_list, next) {
> +		/* Go through all domains even if we fail */
> +		iommu_split_block(d->domain, dma->iova, dma->size);
> +	}
> +}

This should be a generic switch that prepares for dirty log start,
not an ARM-specific block split. Per the Intel VT-d spec, the
Scalable-Mode PASID Table Entry defines an SLADE (Second Level
Access/Dirty Enable) field, which enables the Accessed/Dirty flags
in second-level paging entries. So a generic iommu interface is
better here: for the Intel iommu it would enable SLADE, and for ARM
it would split the blocks.
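
To make this concrete, here is a minimal sketch of what such a
generic interface could look like. The switch_dirty_log callback and
the iommu_switch_dirty_log() wrapper are made-up names for
illustration, not existing API:

	/* include/linux/iommu.h: one callback, implemented per driver */
	struct iommu_ops {
		/* ... existing callbacks ... */
		int (*switch_dirty_log)(struct iommu_domain *domain,
					bool enable, unsigned long iova,
					size_t size, int prot);
	};

	/* drivers/iommu/iommu.c: generic entry point for callers */
	int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
				   unsigned long iova, size_t size, int prot)
	{
		if (!domain->ops->switch_dirty_log)
			return -ENODEV;

		return domain->ops->switch_dirty_log(domain, enable, iova,
						     size, prot);
	}

The Intel driver would implement the callback by setting or clearing
SLADE in the scalable-mode PASID table entry, while the SMMU driver
would implement it with the block split/merge you have here.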
> +
> +static void vfio_dma_dirty_log_stop(struct vfio_iommu *iommu,
> +				    struct vfio_dma *dma)
> +{
> +	struct vfio_domain *d;
> +
> +	list_for_each_entry(d, &iommu->domain_list, next) {
> +		/* Go through all domains even if we fail */
> +		iommu_merge_page(d->domain, dma->iova, dma->size,
> +				 d->prot | dma->prot);
> +	}
> +}

Same as the comment above: a generic interface is required here as
well, so that Intel can disable SLADE where ARM merges pages back.
With an enable flag, both paths can share one entry point; see the
sketch at the end of this mail.

> +
> +static void vfio_iommu_dirty_log_switch(struct vfio_iommu *iommu, bool start)
> +{
> +	struct rb_node *n;
> +
> +	/* Split and merge even if all iommus don't support HWDBM now */
> +	for (n = rb_first(&iommu->dma_list); n; n = rb_next(n)) {
> +		struct vfio_dma *dma = rb_entry(n, struct vfio_dma, node);
> +
> +		if (!dma->iommu_mapped)
> +			continue;
> +
> +		/* Go through all dma ranges even if we fail */
> +		if (start)
> +			vfio_dma_dirty_log_start(iommu, dma);
> +		else
> +			vfio_dma_dirty_log_stop(iommu, dma);
> +	}
> +}
> +
>  static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
>  					unsigned long arg)
>  {
> @@ -2812,8 +2900,10 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
>  		pgsize = 1 << __ffs(iommu->pgsize_bitmap);
>  		if (!iommu->dirty_page_tracking) {
>  			ret = vfio_dma_bitmap_alloc_all(iommu, pgsize);
> -			if (!ret)
> +			if (!ret) {
>  				iommu->dirty_page_tracking = true;
> +				vfio_iommu_dirty_log_switch(iommu, true);
> +			}
>  		}
>  		mutex_unlock(&iommu->lock);
>  		return ret;
> @@ -2822,6 +2912,7 @@ static int vfio_iommu_type1_dirty_pages(struct vfio_iommu *iommu,
>  		if (iommu->dirty_page_tracking) {
>  			iommu->dirty_page_tracking = false;
>  			vfio_dma_bitmap_free_all(iommu);
> +			vfio_iommu_dirty_log_switch(iommu, false);
>  		}
>  		mutex_unlock(&iommu->lock);
>  		return 0;
> --
> 2.19.1
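
With such a generic entry point, vfio_dma_dirty_log_start() and
vfio_dma_dirty_log_stop() could also collapse into a single helper,
and vfio_iommu_dirty_log_switch() would just pass its start flag
through. A rough sketch, again using the made-up
iommu_switch_dirty_log() from above:

	static void vfio_dma_dirty_log_switch(struct vfio_iommu *iommu,
					      struct vfio_dma *dma,
					      bool enable)
	{
		struct vfio_domain *d;

		list_for_each_entry(d, &iommu->domain_list, next) {
			/* Go through all domains even if one fails */
			iommu_switch_dirty_log(d->domain, enable, dma->iova,
					       dma->size,
					       d->prot | dma->prot);
		}
	}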