From: Kirti Wankhede <kwankhede@nvidia.com>
Subject: [PATCH v16 Kernel 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages
Date: Wed, 25 Mar 2020 01:02:34 +0530
Message-ID: <1585078359-20124-3-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1585078359-20124-1-git-send-email-kwankhede@nvidia.com>
References: <1585078359-20124-1-git-send-email-kwankhede@nvidia.com>
X-Mailer: git-send-email 2.7.0
X-Mailing-List: kvm@vger.kernel.org

vfio_pfn.ref_count is always updated while holding iommu->lock, so using an atomic variable is overkill.
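
To make the reasoning concrete, below is a minimal sketch of the pattern (illustrative only; example_pfn, example_lock and the example_get()/example_put() helpers are stand-ins for the real vfio_pfn and iommu->lock, not code from vfio_iommu_type1.c):

	#include <linux/mutex.h>
	#include <linux/types.h>

	/*
	 * Illustrative only: every ref_count access below happens under
	 * example_lock, so a plain unsigned int is sufficient; the lock
	 * already provides the mutual exclusion an atomic_t would give.
	 */
	struct example_pfn {
		unsigned int ref_count;		/* protected by example_lock */
	};

	static DEFINE_MUTEX(example_lock);

	static void example_get(struct example_pfn *p)
	{
		mutex_lock(&example_lock);
		p->ref_count++;			/* plain increment, no atomic_inc() */
		mutex_unlock(&example_lock);
	}

	/* Returns true when the last reference was dropped. */
	static bool example_put(struct example_pfn *p)
	{
		bool last;

		mutex_lock(&example_lock);
		p->ref_count--;
		last = (p->ref_count == 0);
		mutex_unlock(&example_lock);

		return last;
	}

Because every reader and writer of ref_count takes the same lock, the lock's acquire/release already orders the counter updates; atomic_t would only add cost without adding safety.
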
Signed-off-by: Kirti Wankhede
Reviewed-by: Neo Jia
Reviewed-by: Eric Auger
---
 drivers/vfio/vfio_iommu_type1.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index 9fdfae1cb17a..70aeab921d0f 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -112,7 +112,7 @@ struct vfio_pfn {
 	struct rb_node		node;
 	dma_addr_t		iova;		/* Device address */
 	unsigned long		pfn;		/* Host pfn */
-	atomic_t		ref_count;
+	unsigned int		ref_count;
 };
 
 struct vfio_regions {
@@ -233,7 +233,7 @@ static int vfio_add_to_pfn_list(struct vfio_dma *dma, dma_addr_t iova,
 
 	vpfn->iova = iova;
 	vpfn->pfn = pfn;
-	atomic_set(&vpfn->ref_count, 1);
+	vpfn->ref_count = 1;
 	vfio_link_pfn(dma, vpfn);
 	return 0;
 }
@@ -251,7 +251,7 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
 	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
 
 	if (vpfn)
-		atomic_inc(&vpfn->ref_count);
+		vpfn->ref_count++;
 	return vpfn;
 }
 
@@ -259,7 +259,8 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
 {
 	int ret = 0;
 
-	if (atomic_dec_and_test(&vpfn->ref_count)) {
+	vpfn->ref_count--;
+	if (!vpfn->ref_count) {
 		ret = put_pfn(vpfn->pfn, dma->prot);
 		vfio_remove_from_pfn_list(dma, vpfn);
 	}
-- 
2.7.0