From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752004AbdECErH (ORCPT );
	Wed, 3 May 2017 00:47:07 -0400
Received: from mail-pf0-f176.google.com ([209.85.192.176]:36718 "EHLO
	mail-pf0-f176.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751821AbdECEqy (ORCPT );
	Wed, 3 May 2017 00:46:54 -0400
From: Oza Pawandeep
To: Joerg Roedel, Robin Murphy
Cc: iommu@lists.linux-foundation.org, linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	devicetree@vger.kernel.org, bcm-kernel-feedback-list@broadcom.com,
	Oza Pawandeep, Oza Pawandeep
Subject: [PATCH 2/3] iommu/pci: reserve iova for PCI masters
Date: Wed, 3 May 2017 10:16:34 +0530
Message-Id: <1493786795-28153-2-git-send-email-oza.oza@broadcom.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1493786795-28153-1-git-send-email-oza.oza@broadcom.com>
References: <1493786795-28153-1-git-send-email-oza.oza@broadcom.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

This patch reserves IOVA ranges for PCI masters.

ARM64-based SoCs may have scattered memory banks. For example, an
iproc-based SoC has:

    <0x00000000 0x80000000 0x0 0x80000000>, /* 2G @ 2G */
    <0x00000008 0x80000000 0x3 0x80000000>, /* 14G @ 34G */
    <0x00000090 0x00000000 0x4 0x00000000>, /* 16G @ 576G */
    <0x000000a0 0x00000000 0x4 0x00000000>; /* 16G @ 640G */

However, the inbound PCI transaction addressing capability is limited by
the host bridge. For example, if the maximum inbound window capability is
512 GB, the banks at 0x00000090 and 0x000000a0 fall beyond it.

To address this, the IOMMU has to avoid allocating IOVAs that lie in the
reserved regions, so an IOVA that falls into such a hole is never handed
out.

Bug: SOC-5216
Change-Id: Icbfc99a045d730be143fef427098c937b9d46353
Signed-off-by: Oza Pawandeep
Reviewed-on: http://gerrit-ccxsw.broadcom.net/40760
Reviewed-by: vpx_checkpatch status
Reviewed-by: CCXSW
Tested-by: vpx_autobuild status
Tested-by: vpx_smoketest status
Tested-by: CCXSW
Reviewed-by: Scott Branden
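To make the reservation logic easier to follow in isolation, here is a
minimal, hypothetical user-space sketch of the same gap walk that the new
code in iova_reserve_pci_windows() below performs: the inbound windows are
assumed to be sorted, every hole between consecutive windows is reserved,
and so is the tail up to the DMA addressing limit. Everything in the
sketch is an illustrative stand-in and not part of the patch: struct
window, reserve(), win[], dma_limit and next are made up here, with
reserve() taking the place of reserve_iova(), which in the kernel operates
on IOVA page frames via iova_pfn(); the window values mirror the iproc
memory map from the commit message.

#include <stdint.h>
#include <stdio.h>

struct window {
	uint64_t start;	/* first bus address covered by an inbound window */
	uint64_t end;	/* last (inclusive) bus address of that window */
};

/* Stand-in for reserve_iova(); just report the hole being reserved. */
static void reserve(uint64_t lo, uint64_t hi)
{
	printf("reserve [0x%010llx - 0x%010llx]\n",
	       (unsigned long long)lo, (unsigned long long)hi);
}

int main(void)
{
	/* Sorted inbound windows, as would be parsed from dma-ranges. */
	static const struct window win[] = {
		{ 0x0000000080000000ULL, 0x00000000ffffffffULL }, /*  2G @   2G */
		{ 0x0000000880000000ULL, 0x0000000bffffffffULL }, /* 14G @  34G */
		{ 0x0000009000000000ULL, 0x00000093ffffffffULL }, /* 16G @ 576G */
		{ 0x000000a000000000ULL, 0x000000a3ffffffffULL }, /* 16G @ 640G */
	};
	const uint64_t dma_limit = ~0ULL;	/* DMA_BIT_MASK(64) equivalent */
	uint64_t next = 0;			/* plays the role of tmp_dma_addr */
	unsigned int i;

	for (i = 0; i < sizeof(win) / sizeof(win[0]); i++) {
		if (next > win[i].start) {
			fprintf(stderr, "dma-ranges are not sorted, giving up\n");
			return 1;
		}
		if (next != win[i].start)
			/* hole in front of this window: keep the allocator out */
			reserve(next, win[i].start - 1);
		next = win[i].end;
	}
	/* tail beyond the last window, up to the DMA addressing limit */
	if (next < dma_limit)
		reserve(next, dma_limit - 1);

	return 0;
}

With this example memory map and a 64-bit dma_addr_t, the sketch prints
the hole below 2G, the holes between the banks, and the region above the
last bank: the address space the host bridge cannot accept inbound and
that the IOVA allocator therefore has to avoid.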
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 48d36ce..08764b0 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -27,6 +27,7 @@
 #include <linux/iova.h>
 #include <linux/irq.h>
 #include <linux/mm.h>
+#include <linux/of_pci.h>
 #include <linux/pci.h>
 #include <linux/scatterlist.h>
 #include <linux/vmalloc.h>
@@ -171,8 +172,12 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
 		struct iova_domain *iovad)
 {
 	struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus);
+	struct device_node *np = bridge->dev.parent->of_node;
 	struct resource_entry *window;
 	unsigned long lo, hi;
+	int ret;
+	dma_addr_t tmp_dma_addr = 0, dma_addr;
+	LIST_HEAD(res);
 
 	resource_list_for_each_entry(window, &bridge->windows) {
 		if (resource_type(window->res) != IORESOURCE_MEM &&
@@ -183,6 +188,36 @@ static void iova_reserve_pci_windows(struct pci_dev *dev,
 		hi = iova_pfn(iovad, window->res->end - window->offset);
 		reserve_iova(iovad, lo, hi);
 	}
+
+	/* PCI inbound memory reservation. */
+	ret = of_pci_get_dma_ranges(np, &res);
+	if (!ret) {
+		resource_list_for_each_entry(window, &res) {
+			struct resource *res_dma = window->res;
+
+			dma_addr = res_dma->start - window->offset;
+			if (tmp_dma_addr > dma_addr) {
+				pr_warn("PCI: failed to reserve iovas; ranges should be sorted\n");
+				return;
+			}
+			if (tmp_dma_addr != dma_addr) {
+				lo = iova_pfn(iovad, tmp_dma_addr);
+				hi = iova_pfn(iovad, dma_addr - 1);
+				reserve_iova(iovad, lo, hi);
+			}
+			tmp_dma_addr = window->res->end - window->offset;
+		}
+		/*
+		 * Reserve the tail beyond the last dma-range, up to
+		 * the 32/64-bit DMA addressing limit.
+		 */
+		if (tmp_dma_addr < DMA_BIT_MASK(sizeof(dma_addr_t) * 8)) {
+			lo = iova_pfn(iovad, tmp_dma_addr);
+			hi = iova_pfn(iovad,
+				      DMA_BIT_MASK(sizeof(dma_addr_t) * 8) - 1);
+			reserve_iova(iovad, lo, hi);
+		}
+	}
 }
 
 /**
-- 
1.9.1