From mboxrd@z Thu Jan 1 00:00:00 1970
From: Georgi Djakov
Subject: [PATCH v7 02/15] iommu: Add an unmap_pages() op for IOMMU drivers
Date: Wed, 16 Jun 2021 06:38:43 -0700
Message-ID: <1623850736-389584-3-git-send-email-quic_c_gdjako@quicinc.com>
In-Reply-To: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
References: <1623850736-389584-1-git-send-email-quic_c_gdjako@quicinc.com>
MIME-Version: 1.0
Content-Type: text/plain
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: "Isaac J. Manjarres"

Add a callback so that the IOMMU framework has a path to call into an
IOMMU driver, which can in turn call into the io-pgtable code, to unmap
a virtually contiguous range of pages of the same size.

For IOMMU drivers that do not provide an unmap_pages() callback, the
existing logic of unmapping memory one page block at a time is retained.

Signed-off-by: Isaac J. Manjarres
Suggested-by: Will Deacon
Signed-off-by: Will Deacon
Acked-by: Lu Baolu
Signed-off-by: Georgi Djakov
---
 include/linux/iommu.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 32d448050bf7..25a844121be5 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -181,6 +181,7 @@ struct iommu_iotlb_gather {
  * @detach_dev: detach device from an iommu domain
  * @map: map a physically contiguous memory region to an iommu domain
  * @unmap: unmap a physically contiguous memory region from an iommu domain
+ * @unmap_pages: unmap a number of pages of the same size from an iommu domain
  * @flush_iotlb_all: Synchronously flush all hardware TLBs for this domain
  * @iotlb_sync_map: Sync mappings created recently using @map to the hardware
  * @iotlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
@@ -231,6 +232,9 @@ struct iommu_ops {
 			   phys_addr_t paddr, size_t size, int prot, gfp_t gfp);
 	size_t (*unmap)(struct iommu_domain *domain, unsigned long iova,
 		     size_t size, struct iommu_iotlb_gather *iotlb_gather);
+	size_t (*unmap_pages)(struct iommu_domain *domain, unsigned long iova,
+			      size_t pgsize, size_t pgcount,
+			      struct iommu_iotlb_gather *iotlb_gather);
 	void (*flush_iotlb_all)(struct iommu_domain *domain);
 	void (*iotlb_sync_map)(struct iommu_domain *domain, unsigned long iova,
 			       size_t size);
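
[Editorial note, not part of the patch] For readers wiring this up in a driver,
a minimal sketch of what an unmap_pages() implementation could look like is
below. The my_iommu_* types and helpers are hypothetical stand-ins for a
driver's own domain structure and existing unmap path; a real driver backed by
io-pgtable would normally forward the whole (pgsize, pgcount) range to its
page-table code rather than loop per block.

#include <linux/kernel.h>
#include <linux/iommu.h>

/* Hypothetical driver domain embedding the core struct iommu_domain. */
struct my_iommu_domain {
	struct iommu_domain domain;
	/* page-table handle, locks, etc. would live here */
};

static struct my_iommu_domain *to_my_domain(struct iommu_domain *dom)
{
	return container_of(dom, struct my_iommu_domain, domain);
}

/* Hypothetical stand-in for the driver's existing single-block unmap path. */
static size_t my_hw_unmap_block(struct my_iommu_domain *dom, unsigned long iova,
				size_t size, struct iommu_iotlb_gather *gather)
{
	/* walk and clear the page tables, queue IOTLB invalidations via gather */
	return size;
}

static size_t my_iommu_unmap_pages(struct iommu_domain *domain,
				   unsigned long iova, size_t pgsize,
				   size_t pgcount,
				   struct iommu_iotlb_gather *gather)
{
	struct my_iommu_domain *dom = to_my_domain(domain);
	size_t unmapped = 0;

	/*
	 * Shown as a per-block loop for simplicity; a driver whose page-table
	 * code understands multiple blocks can hand the whole range down in
	 * one call so the table walk is only done once.
	 */
	while (pgcount--) {
		if (my_hw_unmap_block(dom, iova, pgsize, gather) != pgsize)
			break;
		iova += pgsize;
		unmapped += pgsize;
	}

	/* Return how many bytes were actually unmapped, as with @unmap. */
	return unmapped;
}

static const struct iommu_ops my_iommu_ops = {
	/* ... other callbacks (.map, .unmap, ...) ... */
	.unmap_pages	= my_iommu_unmap_pages,
};

The point of passing (pgsize, pgcount) rather than a single byte count is that
drivers which can unmap several same-sized blocks at once only pay for one
page-table walk and one set of IOTLB gather updates for the whole range.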