From: Tom Murphy <murphyt7@tcd.ie>
To: iommu@lists.linux-foundation.org
Cc: Heiko Stuebner <heiko@sntech.de>,
kvm@vger.kernel.org, David Airlie <airlied@linux.ie>,
Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
dri-devel@lists.freedesktop.org,
Bjorn Andersson <bjorn.andersson@linaro.org>,
linux-tegra@vger.kernel.org, Julien Grall <julien.grall@arm.com>,
Thierry Reding <thierry.reding@gmail.com>,
Will Deacon <will@kernel.org>,
Jean-Philippe Brucker <jean-philippe@linaro.org>,
linux-samsung-soc@vger.kernel.org, Marc Zyngier <maz@kernel.org>,
Krzysztof Kozlowski <krzk@kernel.org>,
Jonathan Hunter <jonathanh@nvidia.com>,
linux-rockchip@lists.infradead.org,
Andy Gross <agross@kernel.org>,
linux-arm-kernel@lists.infradead.org, linux-s390@vger.kernel.org,
linux-arm-msm@vger.kernel.org, intel-gfx@lists.freedesktop.org,
Jani Nikula <jani.nikula@linux.intel.com>,
Alex Williamson <alex.williamson@redhat.com>,
linux-mediatek@lists.infradead.org,
Rodrigo Vivi <rodrigo.vivi@intel.com>,
Matthias Brugger <matthias.bgg@gmail.com>,
Thomas Gleixner <tglx@linutronix.de>,
virtualization@lists.linux-foundation.org,
Gerald Schaefer <gerald.schaefer@de.ibm.com>,
David Woodhouse <dwmw2@infradead.org>,
Cornelia Huck <cohuck@redhat.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Kukjin Kim <kgene@kernel.org>, Daniel Vetter <daniel@ffwll.ch>,
Robin Murphy <robin.murphy@arm.com>
Subject: Re: [PATCH 3/8] iommu/vt-d: Remove IOVA handling code from non-dma_ops path
Date: Thu, 19 Mar 2020 23:30:51 -0700 [thread overview]
Message-ID: <CALQxJuuue2MCF+xAAAcWCW=301HHZ9yWBmYV-K-ubCxO4s5eqQ@mail.gmail.com> (raw)
In-Reply-To: <20191221150402.13868-4-murphyt7@tcd.ie>
Could we merge patches 1-3 from this series? They just clean up weird
code, and merging them will cover some of the work needed to move the
Intel IOMMU driver to the dma-iommu API in the future.
On Sat, 21 Dec 2019 at 07:04, Tom Murphy <murphyt7@tcd.ie> wrote:
>
> Remove all IOVA handling code from the non-dma_ops path in the intel
> iommu driver.
>
> There's no need for the non-dma_ops path to keep track of IOVAs. The
> whole point of the non-dma_ops path is that it allows the IOVAs to be
> handled separately. The IOVA handling code removed in this patch is
> pointless.
>
> Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
> ---
> drivers/iommu/intel-iommu.c | 89 ++++++++++++++-----------------------
> 1 file changed, 33 insertions(+), 56 deletions(-)
>
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 64b1a9793daa..8d72ea0fb843 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -1908,7 +1908,8 @@ static void domain_exit(struct dmar_domain *domain)
> domain_remove_dev_info(domain);
>
> /* destroy iovas */
> - put_iova_domain(&domain->iovad);
> + if (domain->domain.type == IOMMU_DOMAIN_DMA)
> + put_iova_domain(&domain->iovad);
>
> if (domain->pgd) {
> struct page *freelist;
> @@ -2671,19 +2672,9 @@ static struct dmar_domain *set_domain_for_dev(struct device *dev,
> }
>
> static int iommu_domain_identity_map(struct dmar_domain *domain,
> - unsigned long long start,
> - unsigned long long end)
> + unsigned long first_vpfn,
> + unsigned long last_vpfn)
> {
> - unsigned long first_vpfn = start >> VTD_PAGE_SHIFT;
> - unsigned long last_vpfn = end >> VTD_PAGE_SHIFT;
> -
> - if (!reserve_iova(&domain->iovad, dma_to_mm_pfn(first_vpfn),
> - dma_to_mm_pfn(last_vpfn))) {
> - pr_err("Reserving iova failed\n");
> - return -ENOMEM;
> - }
> -
> - pr_debug("Mapping reserved region %llx-%llx\n", start, end);
> /*
> * RMRR range might have overlap with physical memory range,
> * clear it first
> @@ -2760,7 +2751,8 @@ static int __init si_domain_init(int hw)
>
> for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
> ret = iommu_domain_identity_map(si_domain,
> - PFN_PHYS(start_pfn), PFN_PHYS(end_pfn));
> + mm_to_dma_pfn(start_pfn),
> + mm_to_dma_pfn(end_pfn));
> if (ret)
> return ret;
> }
> @@ -4593,58 +4585,37 @@ static int intel_iommu_memory_notifier(struct notifier_block *nb,
> unsigned long val, void *v)
> {
> struct memory_notify *mhp = v;
> - unsigned long long start, end;
> - unsigned long start_vpfn, last_vpfn;
> + unsigned long start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
> + unsigned long last_vpfn = mm_to_dma_pfn(mhp->start_pfn +
> + mhp->nr_pages - 1);
>
> switch (val) {
> case MEM_GOING_ONLINE:
> - start = mhp->start_pfn << PAGE_SHIFT;
> - end = ((mhp->start_pfn + mhp->nr_pages) << PAGE_SHIFT) - 1;
> - if (iommu_domain_identity_map(si_domain, start, end)) {
> - pr_warn("Failed to build identity map for [%llx-%llx]\n",
> - start, end);
> + if (iommu_domain_identity_map(si_domain, start_vpfn,
> + last_vpfn)) {
> + pr_warn("Failed to build identity map for [%lx-%lx]\n",
> + start_vpfn, last_vpfn);
> return NOTIFY_BAD;
> }
> break;
>
> case MEM_OFFLINE:
> case MEM_CANCEL_ONLINE:
> - start_vpfn = mm_to_dma_pfn(mhp->start_pfn);
> - last_vpfn = mm_to_dma_pfn(mhp->start_pfn + mhp->nr_pages - 1);
> - while (start_vpfn <= last_vpfn) {
> - struct iova *iova;
> + {
> struct dmar_drhd_unit *drhd;
> struct intel_iommu *iommu;
> struct page *freelist;
>
> - iova = find_iova(&si_domain->iovad, start_vpfn);
> - if (iova == NULL) {
> - pr_debug("Failed get IOVA for PFN %lx\n",
> - start_vpfn);
> - break;
> - }
> -
> - iova = split_and_remove_iova(&si_domain->iovad, iova,
> - start_vpfn, last_vpfn);
> - if (iova == NULL) {
> - pr_warn("Failed to split IOVA PFN [%lx-%lx]\n",
> - start_vpfn, last_vpfn);
> - return NOTIFY_BAD;
> - }
> -
> - freelist = domain_unmap(si_domain, iova->pfn_lo,
> - iova->pfn_hi);
> + freelist = domain_unmap(si_domain, start_vpfn,
> + last_vpfn);
>
> rcu_read_lock();
> for_each_active_iommu(iommu, drhd)
> iommu_flush_iotlb_psi(iommu, si_domain,
> - iova->pfn_lo, iova_size(iova),
> + start_vpfn, mhp->nr_pages,
> !freelist, 0);
> rcu_read_unlock();
> dma_free_pagelist(freelist);
> -
> - start_vpfn = iova->pfn_hi + 1;
> - free_iova_mem(iova);
> }
> break;
> }
> @@ -4672,8 +4643,9 @@ static void free_all_cpu_cached_iovas(unsigned int cpu)
> for (did = 0; did < cap_ndoms(iommu->cap); did++) {
> domain = get_iommu_domain(iommu, (u16)did);
>
> - if (!domain)
> + if (!domain || domain->domain.type != IOMMU_DOMAIN_DMA)
> continue;
> +
> free_cpu_cached_iovas(cpu, &domain->iovad);
> }
> }
> @@ -5095,9 +5067,6 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
> {
> int adjust_width;
>
> - init_iova_domain(&domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
> - domain_reserve_special_ranges(domain);
> -
> /* calculate AGAW */
> domain->gaw = guest_width;
> adjust_width = guestwidth_to_adjustwidth(guest_width);
> @@ -5116,6 +5085,18 @@ static int md_domain_init(struct dmar_domain *domain, int guest_width)
> return 0;
> }
>
> +static void intel_init_iova_domain(struct dmar_domain *dmar_domain)
> +{
> + init_iova_domain(&dmar_domain->iovad, VTD_PAGE_SIZE, IOVA_START_PFN);
> + copy_reserved_iova(&reserved_iova_list, &dmar_domain->iovad);
> +
> + if (init_iova_flush_queue(&dmar_domain->iovad, iommu_flush_iova,
> + iova_entry_free)) {
> + pr_warn("iova flush queue initialization failed\n");
> + intel_iommu_strict = 1;
> + }
> +}
> +
> static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
> {
> struct dmar_domain *dmar_domain;
> @@ -5136,12 +5117,8 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
> return NULL;
> }
>
> - if (type == IOMMU_DOMAIN_DMA &&
> - init_iova_flush_queue(&dmar_domain->iovad,
> - iommu_flush_iova, iova_entry_free)) {
> - pr_warn("iova flush queue initialization failed\n");
> - intel_iommu_strict = 1;
> - }
> + if (type == IOMMU_DOMAIN_DMA)
> + intel_init_iova_domain(dmar_domain);
>
> domain_update_iommu_cap(dmar_domain);
>
> --
> 2.20.1
>
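For anyone skimming the diff, the net effect is that per-domain IOVA state
now only exists for IOMMU_DOMAIN_DMA domains: it is initialized only in
that path and torn down only in that path. A standalone toy model of that
guard (stub types and names, not the real VT-d driver API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the domain-type guard from the patch: only
 * IOMMU_DOMAIN_DMA domains carry IOVA allocator state; other domain
 * types never touch it. Stub types only -- not the real driver code. */

enum iommu_domain_type {
	IOMMU_DOMAIN_DMA,
	IOMMU_DOMAIN_UNMANAGED,
};

struct toy_domain {
	enum iommu_domain_type type;
	bool iovad_initialized;	/* stands in for struct iova_domain iovad */
};

/* Mirrors intel_iommu_domain_alloc(): set up IOVA state only for DMA
 * domains, as intel_init_iova_domain() does in the patch. */
static struct toy_domain toy_domain_alloc(enum iommu_domain_type type)
{
	struct toy_domain d = { .type = type, .iovad_initialized = false };

	if (type == IOMMU_DOMAIN_DMA)
		d.iovad_initialized = true;
	return d;
}

/* Mirrors domain_exit(): tear down IOVA state only when the domain
 * type says it exists (put_iova_domain() in the patch). Returns true
 * when the IOVA state was actually freed. */
static bool toy_domain_exit(struct toy_domain *d)
{
	if (d->type == IOMMU_DOMAIN_DMA && d->iovad_initialized) {
		d->iovad_initialized = false;
		return true;
	}
	return false;
}
```

The same shape shows up in free_all_cpu_cached_iovas() above: skip any
domain whose type is not IOMMU_DOMAIN_DMA before touching its iovad.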
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu