From: Roman Skakun <rm.skakun@gmail.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org,
	iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Andrii Anisov <andrii_anisov@epam.com>,
	Roman Skakun <Roman_Skakun@epam.com>
Subject: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses
Date: Fri, 16 Jul 2021 11:39:34 +0300
Message-ID: <20210716083934.154992-1-rm.skakun@gmail.com> (raw)
In-Reply-To: <20210715170011.GA17324@lst.de>

From: Roman Skakun <Roman_Skakun@epam.com>

This commit fixes an incorrect conversion from cpu_addr to a page
address for the case when the virtual address was allocated from the
vmalloc range. virt_to_page() cannot translate such an address and
returns a bogus page pointer. Detect vmalloc addresses and obtain the
page via vmalloc_to_page() instead.

Signed-off-by: Roman Skakun <roman_skakun@epam.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
---
Hi, Christoph!

This is the updated patch, following your and Stefano's suggestions.
 drivers/xen/swiotlb-xen.c   |  7 +------
 include/linux/dma-map-ops.h |  2 ++
 kernel/dma/ops_helpers.c    | 16 ++++++++++++++--
 3 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 92ee6eea30cd..c2f612a10a95 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -337,7 +337,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	int order = get_order(size);
 	phys_addr_t phys;
 	u64 dma_mask = DMA_BIT_MASK(32);
-	struct page *page;
+	struct page *page = cpu_addr_to_page(vaddr);
 
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
@@ -349,11 +349,6 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
-	if (is_vmalloc_addr(vaddr))
-		page = vmalloc_to_page(vaddr);
-	else
-		page = virt_to_page(vaddr);
-
 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
 		     range_straddles_page_boundary(phys, size)) &&
 	    TestClearPageXenRemapped(page))
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index a5f89fc4d6df..ce0edb0bb603 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -226,6 +226,8 @@ struct page *dma_alloc_from_pool(struct device *dev, size_t size,
 		bool (*phys_addr_ok)(struct device *, phys_addr_t, size_t));
 bool dma_free_from_pool(struct device *dev, void *start, size_t size);
 
+struct page *cpu_addr_to_page(void *cpu_addr);
+
 #ifdef CONFIG_ARCH_HAS_DMA_COHERENCE_H
 #include <asm/dma-coherence.h>
 #elif defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 910ae69cae77..472e861750d3 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -5,6 +5,17 @@
  */
 #include <linux/dma-map-ops.h>
 
+/*
+ * This helper converts virtual address to page address.
+ */
+struct page *cpu_addr_to_page(void *cpu_addr)
+{
+	if (is_vmalloc_addr(cpu_addr))
+		return vmalloc_to_page(cpu_addr);
+	else
+		return virt_to_page(cpu_addr);
+}
+
 /*
  * Create scatter-list for the already allocated DMA buffer.
  */
@@ -12,7 +23,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
 		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		 unsigned long attrs)
 {
-	struct page *page = virt_to_page(cpu_addr);
+	struct page *page = cpu_addr_to_page(cpu_addr);
 	int ret;
 
 	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
@@ -32,6 +43,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 	unsigned long user_count = vma_pages(vma);
 	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 	unsigned long off = vma->vm_pgoff;
+	struct page *page = cpu_addr_to_page(cpu_addr);
 	int ret = -ENXIO;
 
 	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
@@ -43,7 +55,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
 		return -ENXIO;
 
 	return remap_pfn_range(vma, vma->vm_start,
-			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
+			page_to_pfn(page) + vma->vm_pgoff,
 			user_count << PAGE_SHIFT, vma->vm_page_prot);
 #else
 	return -ENXIO;
-- 
2.27.0