From: Kirti Wankhede <kwankhede@nvidia.com>
Subject: [PATCH Kernel v18 2/7] vfio iommu: Remove atomicity of ref_count of pinned pages
Date: Mon, 4 May 2020 21:28:54 +0530
Message-ID: <1588607939-26441-3-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1588607939-26441-1-git-send-email-kwankhede@nvidia.com>
References: <1588607939-26441-1-git-send-email-kwankhede@nvidia.com>
List-ID: kvm@vger.kernel.org

vfio_pfn.ref_count is always updated while holding iommu->lock; using an
atomic variable is overkill.
Signed-off-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Neo Jia
Reviewed-by: Eric Auger
Reviewed-by: Cornelia Huck
---
 drivers/vfio/vfio_iommu_type1.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index a0c60f895b24..fa735047b04d 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -112,7 +112,7 @@ struct vfio_pfn {
 	struct rb_node		node;
 	dma_addr_t		iova;		/* Device address */
 	unsigned long		pfn;		/* Host pfn */
-	atomic_t		ref_count;
+	unsigned int		ref_count;
 };
 
 struct vfio_regions {
@@ -233,7 +233,7 @@ static int vfio_add_to_pfn_list(struct vfio_dma *dma, dma_addr_t iova,
 
 	vpfn->iova = iova;
 	vpfn->pfn = pfn;
-	atomic_set(&vpfn->ref_count, 1);
+	vpfn->ref_count = 1;
 	vfio_link_pfn(dma, vpfn);
 	return 0;
 }
@@ -251,7 +251,7 @@ static struct vfio_pfn *vfio_iova_get_vfio_pfn(struct vfio_dma *dma,
 	struct vfio_pfn *vpfn = vfio_find_vpfn(dma, iova);
 
 	if (vpfn)
-		atomic_inc(&vpfn->ref_count);
+		vpfn->ref_count++;
 	return vpfn;
 }
 
@@ -259,7 +259,8 @@ static int vfio_iova_put_vfio_pfn(struct vfio_dma *dma, struct vfio_pfn *vpfn)
 {
 	int ret = 0;
 
-	if (atomic_dec_and_test(&vpfn->ref_count)) {
+	vpfn->ref_count--;
+	if (!vpfn->ref_count) {
 		ret = put_pfn(vpfn->pfn, dma->prot);
 		vfio_remove_from_pfn_list(dma, vpfn);
 	}
-- 
2.7.0
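
The reasoning behind the change is worth spelling out: every path that
touches vfio_pfn.ref_count already runs with iommu->lock held, so the lock,
not the counter type, provides the mutual exclusion, and plain integer
arithmetic is enough. Below is a minimal standalone sketch of the same
pattern in user-space C, with a pthread mutex standing in for iommu->lock;
the pinned_page and tracker_lock names are illustrative only, not taken from
the kernel.

/*
 * Standalone sketch (not kernel code): a reference count that is only
 * ever touched while a mutex is held does not need to be atomic.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct pinned_page {
	unsigned long pfn;
	unsigned int ref_count;		/* plain int: protected by tracker_lock */
};

static pthread_mutex_t tracker_lock = PTHREAD_MUTEX_INITIALIZER;

static struct pinned_page *page_get(struct pinned_page *p)
{
	pthread_mutex_lock(&tracker_lock);
	p->ref_count++;			/* no atomic_inc() needed under the lock */
	pthread_mutex_unlock(&tracker_lock);
	return p;
}

static int page_put(struct pinned_page *p)
{
	int freed = 0;

	pthread_mutex_lock(&tracker_lock);
	p->ref_count--;			/* mirrors vpfn->ref_count-- in the patch */
	if (!p->ref_count) {
		free(p);
		freed = 1;
	}
	pthread_mutex_unlock(&tracker_lock);
	return freed;
}

int main(void)
{
	struct pinned_page *p = calloc(1, sizeof(*p));

	if (!p)
		return 1;
	p->pfn = 0x1234;
	p->ref_count = 1;	/* initial reference, as vfio_add_to_pfn_list() sets */

	page_get(p);
	printf("freed after first put: %d\n", page_put(p));	/* prints 0 */
	printf("freed after second put: %d\n", page_put(p));	/* prints 1 */
	return 0;
}

The design point mirrors the patch: correctness comes from holding the lock
around every access, so the cheaper non-atomic increment/decrement is
preferred over atomic_inc()/atomic_dec_and_test().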