From: Joerg Roedel <joro-zLv9SwRftAIdnm+yROfE0A@public.gmane.org>
To: Robin Murphy <robin.murphy-5wv7dgnIgG8@public.gmane.org>
Cc: laurent.pinchart+renesas-ryLnwIuWjnjg/C1BVhZhaw@public.gmane.org, arnd-r2nGTMty4D4@public.gmane.org, catalin.marinas-5wv7dgnIgG8@public.gmane.org, will.deacon-5wv7dgnIgG8@public.gmane.org, iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org, djkurtz-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org, linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org, thunder.leizhen-hv44wF8Li93QT0dZR+AlfA@public.gmane.org, yingjoe.chen-NuS5LvNUpcJWk0Htik3J/w@public.gmane.org, treding-DDmLM1+adcrQT0dZR+AlfA@public.gmane.org
Subject: Re: [PATCH v5 1/3] iommu: Implement common IOMMU ops for DMA mapping
Date: Fri, 7 Aug 2015 10:42:28 +0200
Message-ID: <20150807084228.GU14980@8bytes.org>
In-Reply-To: <6ce6b501501f611297ae0eae31e07b0d2060eaae.1438362603.git.robin.murphy-5wv7dgnIgG8@public.gmane.org>

On Fri, Jul 31, 2015 at 06:18:27PM +0100, Robin Murphy wrote:
> +int iommu_get_dma_cookie(struct iommu_domain *domain)
> +{
> +	struct iova_domain *iovad;
> +
> +	if (domain->dma_api_cookie)
> +		return -EEXIST;

Why do you call that dma_api_cookie? It is just a pointer to an iova
allocator, so name it as such, e.g. domain->iova.

> +static struct iova *__alloc_iova(struct iova_domain *iovad, size_t size,
> +		dma_addr_t dma_limit)

I think you also need a struct device here to take the segment boundary
and dma_mask into account.
> +/* The IOVA allocator knows what we mapped, so just unmap whatever that was */
> +static void __iommu_dma_unmap(struct iommu_domain *domain, dma_addr_t dma_addr)
> +{
> +	struct iova_domain *iovad = domain->dma_api_cookie;
> +	unsigned long shift = iova_shift(iovad);
> +	unsigned long pfn = dma_addr >> shift;
> +	struct iova *iova = find_iova(iovad, pfn);
> +	size_t size = iova_size(iova) << shift;
> +
> +	/* ...and if we can't, then something is horribly, horribly wrong */
> +	BUG_ON(iommu_unmap(domain, pfn << shift, size) < size);

This is a WARN_ON at most, not a BUG_ON condition, especially since this
type of bug is also caught by the dma-api debugging code.

> +static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
> +{
> +	struct page **pages;
> +	unsigned int i = 0, array_size = count * sizeof(*pages);
> +
> +	if (array_size <= PAGE_SIZE)
> +		pages = kzalloc(array_size, GFP_KERNEL);
> +	else
> +		pages = vzalloc(array_size);
> +	if (!pages)
> +		return NULL;
> +
> +	/* IOMMU can map any pages, so himem can also be used here */
> +	gfp |= __GFP_NOWARN | __GFP_HIGHMEM;
> +
> +	while (count) {
> +		struct page *page = NULL;
> +		int j, order = __fls(count);
> +
> +		/*
> +		 * Higher-order allocations are a convenience rather
> +		 * than a necessity, hence using __GFP_NORETRY until
> +		 * falling back to single-page allocations.
> +		 */
> +		for (order = min(order, MAX_ORDER); order > 0; order--) {
> +			page = alloc_pages(gfp | __GFP_NORETRY, order);
> +			if (!page)
> +				continue;
> +			if (PageCompound(page)) {
> +				if (!split_huge_page(page))
> +					break;
> +				__free_pages(page, order);
> +			} else {
> +				split_page(page, order);
> +				break;
> +			}
> +		}
> +		if (!page)
> +			page = alloc_page(gfp);
> +		if (!page) {
> +			__iommu_dma_free_pages(pages, i);
> +			return NULL;
> +		}
> +		j = 1 << order;
> +		count -= j;
> +		while (j--)
> +			pages[i++] = page++;
> +	}
> +	return pages;
> +}

Hmm, most dma-api implementations just try to allocate a single big enough
region from the page allocator. Is it implemented differently here to avoid
the use of CMA?


	Joerg