From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini, Konrad Rzeszutek Wilk, gross@suse.com, boris.ostrovsky@oracle.com
Cc: x86@kernel.org, linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/11] swiotlb-xen: simplify cache maintenance
Date: Thu, 5 Sep 2019 13:34:06 +0200
Message-Id: <20190905113408.3104-10-hch@lst.de>
In-Reply-To: <20190905113408.3104-1-hch@lst.de>
References: <20190905113408.3104-1-hch@lst.de>

Now that we know the dma-noncoherent.h helpers are always available on
architectures with support for non-coherent devices, we can call them
directly and remove the calls into the dma-direct routines, including
the awkward pattern of calling dma_direct_map_page but ignoring its
return value.
Instead we now have Xen wrappers for the arch_sync_dma_for_{device,cpu}
helpers that call the special Xen versions of those routines for
foreign pages.

Note that the new helpers get the physical address passed in addition
to the dma address to avoid another translation for the local cache
maintenance.  The pfn_valid checks remain on the dma address as in the
old code, even if that looks a little funny.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 arch/arm/xen/mm.c                        | 64 +++++++-----------------
 arch/x86/include/asm/xen/page-coherent.h | 14 ------
 drivers/xen/swiotlb-xen.c                | 20 ++++----
 include/xen/arm/page-coherent.h          | 63 -----------------------
 include/xen/swiotlb-xen.h                |  5 ++
 5 files changed, 32 insertions(+), 134 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 9d73fa4a5991..2b2c208408bb 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -60,63 +60,33 @@ static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
 	} while (size);
 }
 
-static void __xen_dma_page_dev_to_cpu(struct device *hwdev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir)
+/*
+ * Dom0 is mapped 1:1, and while the Linux page can span across multiple Xen
+ * pages, it is not possible for it to contain a mix of local and foreign Xen
+ * pages.  Calling pfn_valid on a foreign mfn will always return false, so if
+ * pfn_valid returns true the page is local and we can use the native
+ * dma-direct functions, otherwise we call the Xen specific version.
+ */
+void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
+		phys_addr_t paddr, size_t size, enum dma_data_direction dir)
 {
-	if (dir != DMA_TO_DEVICE)
+	if (pfn_valid(PFN_DOWN(handle)))
+		arch_sync_dma_for_cpu(dev, paddr, size, dir);
+	else if (dir != DMA_TO_DEVICE)
 		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
 }
 
-static void __xen_dma_page_cpu_to_dev(struct device *hwdev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir)
+void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
+		phys_addr_t paddr, size_t size, enum dma_data_direction dir)
 {
-	if (dir == DMA_FROM_DEVICE)
+	if (pfn_valid(PFN_DOWN(handle)))
+		arch_sync_dma_for_device(dev, paddr, size, dir);
+	else if (dir == DMA_FROM_DEVICE)
 		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
 	else
 		dma_cache_maint(handle, size, GNTTAB_CACHE_CLEAN);
 }
 
-void __xen_dma_map_page(struct device *hwdev, struct page *page,
-	     dma_addr_t dev_addr, unsigned long offset, size_t size,
-	     enum dma_data_direction dir, unsigned long attrs)
-{
-	if (dev_is_dma_coherent(hwdev))
-		return;
-	if (attrs & DMA_ATTR_SKIP_CPU_SYNC)
-		return;
-
-	__xen_dma_page_cpu_to_dev(hwdev, dev_addr, size, dir);
-}
-
-void __xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
-
-{
-	if (dev_is_dma_coherent(hwdev))
-		return;
-	if (attrs & DMA_ATTR_SKIP_CPU_SYNC)
-		return;
-
-	__xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
-}
-
-void __xen_dma_sync_single_for_cpu(struct device *hwdev,
-		dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-	if (dev_is_dma_coherent(hwdev))
-		return;
-	__xen_dma_page_dev_to_cpu(hwdev, handle, size, dir);
-}
-
-void __xen_dma_sync_single_for_device(struct device *hwdev,
-		dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-	if (dev_is_dma_coherent(hwdev))
-		return;
-	__xen_dma_page_cpu_to_dev(hwdev, handle, size, dir);
-}
-
 bool xen_arch_need_swiotlb(struct device *dev,
 			   phys_addr_t phys,
 			   dma_addr_t dev_addr)
diff --git a/arch/x86/include/asm/xen/page-coherent.h b/arch/x86/include/asm/xen/page-coherent.h
index 116777e7f387..63cd41b2e17a 100644
--- a/arch/x86/include/asm/xen/page-coherent.h
+++ b/arch/x86/include/asm/xen/page-coherent.h
@@ -21,18 +21,4 @@ static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
 	free_pages((unsigned long) cpu_addr, get_order(size));
 }
 
-static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
-	     dma_addr_t dev_addr, unsigned long offset, size_t size,
-	     enum dma_data_direction dir, unsigned long attrs) { }
-
-static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir,
-		unsigned long attrs) { }
-
-static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
-		dma_addr_t handle, size_t size, enum dma_data_direction dir) { }
-
-static inline void xen_dma_sync_single_for_device(struct device *hwdev,
-		dma_addr_t handle, size_t size, enum dma_data_direction dir) { }
-
 #endif /* _ASM_X86_XEN_PAGE_COHERENT_H */
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b8808677ae1d..f81031f0c1c7 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -28,6 +28,7 @@
 
 #include <linux/memblock.h>
 #include <linux/dma-direct.h>
+#include <linux/dma-noncoherent.h>
 #include <linux/export.h>
 #include <xen/swiotlb-xen.h>
 #include <xen/page.h>
@@ -391,6 +392,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	if (map == (phys_addr_t)DMA_MAPPING_ERROR)
 		return DMA_MAPPING_ERROR;
 
+	phys = map;
 	dev_addr = xen_phys_to_bus(map);
 
 	/*
@@ -402,14 +404,9 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 	}
 
-	page = pfn_to_page(map >> PAGE_SHIFT);
-	offset = map & ~PAGE_MASK;
 done:
-	/*
-	 * we are not interested in the dma_addr returned by xen_dma_map_page,
-	 * only in the potential cache flushes executed by the function.
-	 */
-	xen_dma_map_page(dev, page, dev_addr, offset, size, dir, attrs);
+	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		xen_dma_sync_for_device(dev, dev_addr, phys, size, dir);
 	return dev_addr;
 }
 
@@ -429,7 +426,8 @@ static void xen_unmap_single(struct device *hwdev, dma_addr_t dev_addr,
 
 	BUG_ON(dir == DMA_NONE);
 
-	xen_dma_unmap_page(hwdev, dev_addr, size, dir, attrs);
+	if (!dev_is_dma_coherent(hwdev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		xen_dma_sync_for_cpu(hwdev, dev_addr, paddr, size, dir);
 
 	/* NOTE: We use dev_addr here, not paddr! */
 	if (is_xen_swiotlb_buffer(dev_addr))
@@ -449,7 +447,8 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 {
 	phys_addr_t paddr = xen_bus_to_phys(dma_addr);
 
-	xen_dma_sync_single_for_cpu(dev, dma_addr, size, dir);
+	if (!dev_is_dma_coherent(dev))
+		xen_dma_sync_for_cpu(dev, dma_addr, paddr, size, dir);
 
 	if (is_xen_swiotlb_buffer(dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
@@ -464,7 +463,8 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 	if (is_xen_swiotlb_buffer(dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
 
-	xen_dma_sync_single_for_device(dev, dma_addr, size, dir);
+	if (!dev_is_dma_coherent(dev))
+		xen_dma_sync_for_device(dev, dma_addr, paddr, size, dir);
 }
 
 /*
diff --git a/include/xen/arm/page-coherent.h b/include/xen/arm/page-coherent.h
index a8d9c0678c27..b9cc11e887ed 100644
--- a/include/xen/arm/page-coherent.h
+++ b/include/xen/arm/page-coherent.h
@@ -5,17 +5,6 @@
 #include <linux/dma-mapping.h>
 #include <asm/page.h>
 
-void __xen_dma_map_page(struct device *hwdev, struct page *page,
-	     dma_addr_t dev_addr, unsigned long offset, size_t size,
-	     enum dma_data_direction dir, unsigned long attrs);
-void __xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir,
-		unsigned long attrs);
-void __xen_dma_sync_single_for_cpu(struct device *hwdev,
-		dma_addr_t handle, size_t size, enum dma_data_direction dir);
-void __xen_dma_sync_single_for_device(struct device *hwdev,
-		dma_addr_t handle, size_t size, enum dma_data_direction dir);
-
 static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
 		dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
 {
@@ -28,56 +17,4 @@ static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
 	dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
 }
 
-static inline void xen_dma_sync_single_for_cpu(struct device *hwdev,
-		dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-	unsigned long pfn = PFN_DOWN(handle);
-
-	if (pfn_valid(pfn))
-		dma_direct_sync_single_for_cpu(hwdev, handle, size, dir);
-	else
-		__xen_dma_sync_single_for_cpu(hwdev, handle, size, dir);
-}
-
-static inline void xen_dma_sync_single_for_device(struct device *hwdev,
-		dma_addr_t handle, size_t size, enum dma_data_direction dir)
-{
-	unsigned long pfn = PFN_DOWN(handle);
-	if (pfn_valid(pfn))
-		dma_direct_sync_single_for_device(hwdev, handle, size, dir);
-	else
-		__xen_dma_sync_single_for_device(hwdev, handle, size, dir);
-}
-
-static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
-	     dma_addr_t dev_addr, unsigned long offset, size_t size,
-	     enum dma_data_direction dir, unsigned long attrs)
-{
-	unsigned long pfn = PFN_DOWN(dev_addr);
-
-	/*
-	 * Dom0 is mapped 1:1, and while the Linux page can span across multiple
-	 * Xen pages, it is not possible for it to contain a mix of local and
-	 * foreign Xen pages.  Calling pfn_valid on a foreign mfn will always
-	 * return false, so if pfn_valid returns true the pages is local and we
-	 * can use the native dma-direct functions, otherwise we call the Xen
-	 * specific version.
-	 */
-	if (pfn_valid(pfn))
-		dma_direct_map_page(hwdev, page, offset, size, dir, attrs);
-	else
-		__xen_dma_map_page(hwdev, page, dev_addr, offset, size, dir, attrs);
-}
-
-static inline void xen_dma_unmap_page(struct device *hwdev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir, unsigned long attrs)
-{
-	unsigned long pfn = PFN_DOWN(handle);
-
-	if (pfn_valid(pfn))
-		dma_direct_unmap_page(hwdev, handle, size, dir, attrs);
-	else
-		__xen_dma_unmap_page(hwdev, handle, size, dir, attrs);
-}
-
 #endif /* _XEN_ARM_PAGE_COHERENT_H */
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index 5e4b83f83dbc..d71380f6ed0b 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -4,6 +4,11 @@
 
 #include <linux/swiotlb.h>
 
+void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
+		phys_addr_t paddr, size_t size, enum dma_data_direction dir);
+void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
+		phys_addr_t paddr, size_t size, enum dma_data_direction dir);
+
 extern int xen_swiotlb_init(int verbose, bool early);
 extern const struct dma_map_ops xen_swiotlb_dma_ops;
 
-- 
2.20.1
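
To see the shape of the new dispatch at a glance, it can be condensed into a
few lines.  The sketch below is plain user-space C, not kernel code:
is_local_page() is a made-up stand-in for pfn_valid(PFN_DOWN(handle))
(including the hypothetical address cutoff), the printf calls stand in for
the real maintenance operations, and only the branch structure mirrors
xen_dma_sync_for_device() from the patch.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Simplified stand-ins for the kernel types used by the patch. */
    typedef uint64_t dma_addr_t;
    typedef uint64_t phys_addr_t;
    enum dma_data_direction {
    	DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE, DMA_NONE
    };

    /*
     * Stand-in for pfn_valid(PFN_DOWN(handle)): Dom0 is mapped 1:1, so a
     * dma address with a valid pfn refers to a local page, while a foreign
     * grant mapping never has one.  Here we fake it with a demo cutoff.
     */
    static bool is_local_page(dma_addr_t handle)
    {
    	return handle < 0x80000000ULL; /* hypothetical boundary */
    }

    /*
     * Mirrors the branch structure of xen_dma_sync_for_device(): local
     * pages get native cache maintenance on the physical address, foreign
     * pages go through the grant-table path keyed on the dma address.
     */
    static void demo_sync_for_device(dma_addr_t handle, phys_addr_t paddr,
    		size_t size, enum dma_data_direction dir)
    {
    	if (is_local_page(handle))
    		printf("arch_sync_dma_for_device(paddr=%#llx, size=%zu)\n",
    		       (unsigned long long)paddr, size);
    	else if (dir == DMA_FROM_DEVICE)
    		printf("dma_cache_maint(handle=%#llx, GNTTAB_CACHE_INVAL)\n",
    		       (unsigned long long)handle);
    	else
    		printf("dma_cache_maint(handle=%#llx, GNTTAB_CACHE_CLEAN)\n",
    		       (unsigned long long)handle);
    }

    int main(void)
    {
    	/* Local page: native maintenance on the physical address. */
    	demo_sync_for_device(0x2000, 0x2000, 4096, DMA_TO_DEVICE);
    	/* Foreign page: grant-table flush keyed on the dma address. */
    	demo_sync_for_device(0xf0000000ULL, 0, 4096, DMA_FROM_DEVICE);
    	return 0;
    }

Built with a plain cc, the first call takes the native path on the physical
address and the second takes the grant-table flush path on the dma address,
matching the local/foreign split described in the Dom0 comment above.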
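
The drivers/xen/swiotlb-xen.c hunks show the other half of the cleanup: the
dev_is_dma_coherent() and DMA_ATTR_SKIP_CPU_SYNC checks that used to hide
inside the __xen_dma_{map,unmap}_page wrappers are now written out at each
call site.  A second minimal sketch of just that guard shape, again with
demo-only stand-ins (the DEMO_* flag value and both stub functions are
invented here, not the kernel definitions):

    #include <stdbool.h>
    #include <stdio.h>

    /* Demo-only flag; any distinct bit works for this illustration. */
    #define DEMO_ATTR_SKIP_CPU_SYNC (1UL << 5)

    static bool demo_dev_is_dma_coherent(void)
    {
    	return false; /* pretend the device is non-coherent */
    }

    static void demo_sync_for_device(void)
    {
    	puts("cache maintenance before handing the buffer to the device");
    }

    /* Tail of a map_page-style path: sync only when both guards allow. */
    static void demo_map_page_tail(unsigned long attrs)
    {
    	if (!demo_dev_is_dma_coherent() &&
    	    !(attrs & DEMO_ATTR_SKIP_CPU_SYNC))
    		demo_sync_for_device();
    	else
    		puts("no CPU-side maintenance needed");
    }

    int main(void)
    {
    	demo_map_page_tail(0);                       /* maintenance runs */
    	demo_map_page_tail(DEMO_ATTR_SKIP_CPU_SYNC); /* caller opted out */
    	return 0;
    }

Hoisting the guards to the call sites is what lets the patch delete the
per-arch wrapper layer entirely and call arch_sync_dma_for_{device,cpu}
(or the Xen-specific helpers) directly.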