From: Lu Baolu
Date: Thu, 17 Jun 2021 15:18:03 +0800
Subject: Re: [PATCH v7 07/15] iommu: Hook up '->unmap_pages' driver callback
To: Georgi Djakov, will@kernel.org, robin.murphy@arm.com
Cc: baolu.lu@linux.intel.com, joro@8bytes.org, isaacm@codeaurora.org, pratikp@codeaurora.org, iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, djakov@kernel.org
Message-ID: <0cb188c0-defd-e179-ad0e-471f48dfb54e@linux.intel.com>
References: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com> <1623850736-389584-8-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To:
<1623850736-389584-8-git-send-email-quic_c_gdjako@quicinc.com>

On 6/16/21 9:38 PM, Georgi Djakov wrote:
> From: Will Deacon
>
> Extend iommu_pgsize() to populate an optional 'count' parameter so that
> we can direct unmapping operation to the ->unmap_pages callback if it
> has been provided by the driver.
>
> Signed-off-by: Will Deacon
> Signed-off-by: Isaac J. Manjarres
> Signed-off-by: Georgi Djakov
> ---
>  drivers/iommu/iommu.c | 59 +++++++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 50 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index 80e14c139d40..725622c7e603 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -2376,11 +2376,11 @@ phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
>  EXPORT_SYMBOL_GPL(iommu_iova_to_phys);
>  
>  static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
> -			   phys_addr_t paddr, size_t size)
> +			   phys_addr_t paddr, size_t size, size_t *count)
>  {
> -	unsigned int pgsize_idx;
> +	unsigned int pgsize_idx, pgsize_idx_next;
>  	unsigned long pgsizes;
> -	size_t pgsize;
> +	size_t offset, pgsize, pgsize_next;
>  	unsigned long addr_merge = paddr | iova;
>  
>  	/* Page sizes supported by the hardware and small enough for @size */
> @@ -2396,7 +2396,36 @@ static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
>  	/* Pick the biggest page size remaining */
>  	pgsize_idx = __fls(pgsizes);
>  	pgsize = BIT(pgsize_idx);
> +	if (!count)
> +		return pgsize;
>  
> +	/* Find the next biggest support page size, if it exists */
> +	pgsizes = domain->pgsize_bitmap & ~GENMASK(pgsize_idx, 0);
> +	if (!pgsizes)
> +		goto out_set_count;
> +
> +	pgsize_idx_next = __ffs(pgsizes);
> +	pgsize_next = BIT(pgsize_idx_next);
> +
> +	/*
> +	 * There's no point trying a bigger page size unless the virtual
> +	 * and physical addresses are similarly offset within the larger page.
> +	 */
> +	if ((iova ^ paddr) & (pgsize_next - 1))
> +		goto out_set_count;
> +
> +	/* Calculate the offset to the next page size alignment boundary */
> +	offset = pgsize_next - (addr_merge & (pgsize_next - 1));
> +
> +	/*
> +	 * If size is big enough to accommodate the larger page, reduce
> +	 * the number of smaller pages.
> +	 */
> +	if (offset + pgsize_next <= size)
> +		size = offset;
> +
> +out_set_count:
> +	*count = size >> pgsize_idx;
>  	return pgsize;
>  }
>  
> @@ -2434,7 +2463,7 @@ static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
>  	pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
>  
>  	while (size) {
> -		size_t pgsize = iommu_pgsize(domain, iova, paddr, size);
> +		size_t pgsize = iommu_pgsize(domain, iova, paddr, size, NULL);
>  
>  		pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx\n",
>  			 iova, &paddr, pgsize);
> @@ -2485,6 +2514,19 @@ int iommu_map_atomic(struct iommu_domain *domain, unsigned long iova,
>  }
>  EXPORT_SYMBOL_GPL(iommu_map_atomic);
>  
> +static size_t __iommu_unmap_pages(struct iommu_domain *domain,
> +				  unsigned long iova, size_t size,
> +				  struct iommu_iotlb_gather *iotlb_gather)
> +{
> +	const struct iommu_ops *ops = domain->ops;
> +	size_t pgsize, count;
> +
> +	pgsize = iommu_pgsize(domain, iova, iova, size, &count);
> +	return ops->unmap_pages ?
> +	       ops->unmap_pages(domain, iova, pgsize, count, iotlb_gather) :
> +	       ops->unmap(domain, iova, pgsize, iotlb_gather);
> +}
> +
>  static size_t __iommu_unmap(struct iommu_domain *domain,
>  			    unsigned long iova, size_t size,
>  			    struct iommu_iotlb_gather *iotlb_gather)
> @@ -2494,7 +2536,7 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
>  	unsigned long orig_iova = iova;
>  	unsigned int min_pagesz;
>  
> -	if (unlikely(ops->unmap == NULL ||
> +	if (unlikely(!(ops->unmap || ops->unmap_pages) ||
>  		     domain->pgsize_bitmap == 0UL))
>  		return 0;
>  
> @@ -2522,10 +2564,9 @@ static size_t __iommu_unmap(struct iommu_domain *domain,
>  	 * or we hit an area that isn't mapped.
>  	 */
>  	while (unmapped < size) {
> -		size_t pgsize;
> -
> -		pgsize = iommu_pgsize(domain, iova, iova, size - unmapped);
> -		unmapped_page = ops->unmap(domain, iova, pgsize, iotlb_gather);
> +		unmapped_page = __iommu_unmap_pages(domain, iova,
> +						    size - unmapped,
> +						    iotlb_gather);
>  		if (!unmapped_page)
>  			break;
>  

Reviewed-by: Lu Baolu

Best regards,
baolu

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel