* [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops @ 2021-06-11 9:55 ` Roman Skakun 0 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-06-11 9:55 UTC (permalink / raw) To: Konrad Rzeszutek Wilk, Boris Ostrovsky, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel Cc: Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Roman Skakun, Andrii Anisov This commit fixes an incorrect conversion from cpu_addr to a page address in cases when we get a virtual address which was allocated through xen_swiotlb_alloc_coherent() and can be mapped in the vmalloc range. As a result, virt_to_page() cannot convert this address properly and returns an incorrect page address. We need to detect such cases and obtain the page address using vmalloc_to_page() instead. The reference code was copied from kernel/dma/ops_helpers.c and modified to provide the additional detection described above. Signed-off-by: Roman Skakun <roman_skakun@epam.com> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> --- Also, I have observed that the original common code didn't make additional checks about the contiguity of the memory region represented by cpu_addr and size. Maybe this means that these functions can only get physically contiguous memory. Is this correct? Cheers! 
--- drivers/xen/swiotlb-xen.c | 51 +++++++++++++++++++++++++++++++++++++-- 1 file changed, 49 insertions(+), 2 deletions(-) diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 2b385c1b4a99..f99c98472927 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -563,6 +563,53 @@ xen_swiotlb_dma_supported(struct device *hwdev, u64 mask) return xen_virt_to_bus(hwdev, xen_io_tlb_end - 1) <= mask; } +static int +xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma, + void *cpu_addr, dma_addr_t dma_addr, size_t size, + unsigned long attrs) +{ + unsigned long user_count = vma_pages(vma); + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; + unsigned long off = vma->vm_pgoff; + struct page *page; + + if (is_vmalloc_addr(cpu_addr)) + page = vmalloc_to_page(cpu_addr); + else + page = virt_to_page(cpu_addr); + + vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); + + if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret)) + return -ENXIO; + + if (off >= count || user_count > count - off) + return -ENXIO; + + return remap_pfn_range(vma, vma->vm_start, + page_to_pfn(page) + vma->vm_pgoff, + user_count << PAGE_SHIFT, vma->vm_page_prot); +} + +static int +xen_swiotlb_dma_get_sgtable(struct device *dev, struct sg_table *sgt, + void *cpu_addr, dma_addr_t dma_addr, size_t size, + unsigned long attrs) +{ + struct page *page; + int ret; + + if (is_vmalloc_addr(cpu_addr)) + page = vmalloc_to_page(cpu_addr); + else + page = virt_to_page(cpu_addr); + + ret = sg_alloc_table(sgt, 1, GFP_KERNEL); + if (!ret) + sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0); + return ret; +} + const struct dma_map_ops xen_swiotlb_dma_ops = { .alloc = xen_swiotlb_alloc_coherent, .free = xen_swiotlb_free_coherent, @@ -575,8 +622,8 @@ const struct dma_map_ops xen_swiotlb_dma_ops = { .map_page = xen_swiotlb_map_page, .unmap_page = xen_swiotlb_unmap_page, .dma_supported = xen_swiotlb_dma_supported, - .mmap = dma_common_mmap, - 
.get_sgtable = dma_common_get_sgtable, + .mmap = xen_swiotlb_dma_mmap, + .get_sgtable = xen_swiotlb_dma_get_sgtable, .alloc_pages = dma_common_alloc_pages, .free_pages = dma_common_free_pages, }; -- 2.27.0 ^ permalink raw reply related [flat|nested] 63+ messages in thread
* Re: [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops 2021-06-11 9:55 ` Roman Skakun @ 2021-06-11 15:19 ` Boris Ostrovsky -1 siblings, 0 replies; 63+ messages in thread From: Boris Ostrovsky @ 2021-06-11 15:19 UTC (permalink / raw) To: Roman Skakun, Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel Cc: Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov On 6/11/21 5:55 AM, Roman Skakun wrote: > > +static int > +xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma, > + void *cpu_addr, dma_addr_t dma_addr, size_t size, > + unsigned long attrs) > +{ > + unsigned long user_count = vma_pages(vma); > + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; > + unsigned long off = vma->vm_pgoff; > + struct page *page; > + > + if (is_vmalloc_addr(cpu_addr)) > + page = vmalloc_to_page(cpu_addr); > + else > + page = virt_to_page(cpu_addr); > + > + vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); > + > + if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret)) > + return -ENXIO; > + > + if (off >= count || user_count > count - off) > + return -ENXIO; > + > + return remap_pfn_range(vma, vma->vm_start, > + page_to_pfn(page) + vma->vm_pgoff, > + user_count << PAGE_SHIFT, vma->vm_page_prot); > +} I suggest you create a helper for computing the page value and then revert 922659ea771b3fd728149262c5ea15608fab9719 and pass the result of the helper instead of cpu_addr. Here and in xen_swiotlb_dma_get_sgtable(). And use this new helper in xen_swiotlb_free_coherent() too. I am curious though why this was not a problem when Stefano was looking at the problem that introduced this vmalloc check (i.e. 8b1e868f66076490189a36d984fcce286cdd6295). Stefano? -boris ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops 2021-06-11 15:19 ` Boris Ostrovsky (?) @ 2021-06-14 12:47 ` Roman Skakun -1 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-06-14 12:47 UTC (permalink / raw) To: Boris Ostrovsky Cc: Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov Hey, Boris! Thanks for the review. I have an additional question about the current implementation that bothers me. I think we can have cases when the mapped memory is not physically contiguous. To handle this correctly, we would need to apply some additional steps to the current implementation as well. In mmap(): 1. Is the memory region physically contiguous? 2. Remap multiple ranges if it is not. In get_sgtable(): 1. Is the memory region physically contiguous? 2. Create an sgt that includes multiple contiguous ranges if it is not. What do you think about it? Cheers! Roman Fri, Jun 11, 2021 
at 18:20, Boris Ostrovsky <boris.ostrovsky@oracle.com>: > > > On 6/11/21 5:55 AM, Roman Skakun wrote: > > > > +static int > > +xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma, > > + void *cpu_addr, dma_addr_t dma_addr, size_t size, > > + unsigned long attrs) > > +{ > > + unsigned long user_count = vma_pages(vma); > > + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; > > + unsigned long off = vma->vm_pgoff; > > + struct page *page; > > + > > + if (is_vmalloc_addr(cpu_addr)) > > + page = vmalloc_to_page(cpu_addr); > > + else > > + page = virt_to_page(cpu_addr); > > + > > + vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); > > + > > + if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret)) > > + return -ENXIO; > > + > > + if (off >= count || user_count > count - off) > > + return -ENXIO; > > + > > + return remap_pfn_range(vma, vma->vm_start, > > + page_to_pfn(page) + vma->vm_pgoff, > > + user_count << PAGE_SHIFT, vma->vm_page_prot); > > +} > > > I suggest you create a helper for computing the page value and then revert 922659ea771b3fd728149262c5ea15608fab9719 and pass the result of the helper instead of cpu_addr. Here and in xen_swiotlb_dma_get_sgtable(). > > > And use this new helper in xen_swiotlb_free_coherent() too. I am curious though why this was not a problem when Stefano was looking at the problem that introduced this vmalloc check (i.e. 8b1e868f66076490189a36d984fcce286cdd6295). Stefano? > > > -boris -- Best Regards, Roman. ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops 2021-06-14 12:47 ` Roman Skakun @ 2021-06-14 15:45 ` Boris Ostrovsky -1 siblings, 0 replies; 63+ messages in thread From: Boris Ostrovsky @ 2021-06-14 15:45 UTC (permalink / raw) To: Roman Skakun Cc: Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov On 6/14/21 8:47 AM, Roman Skakun wrote: > Hey, Boris! > Thanks for the review. > > I have an additional question about the current implementation that bothers me. > I think we can have cases when the mapped memory is not > physically contiguous. > To handle this correctly, we would need to apply some additional steps > to the current implementation as well. > > In mmap(): > 1. Is the memory region physically contiguous? > 2. Remap multiple ranges if it is not. > > In get_sgtable(): > 1. Is the memory region physically contiguous? > 2. Create an sgt that includes multiple contiguous ranges if it is not. > > What do you think about it? We make sure that we allocate contiguous memory in xen_swiotlb_alloc_coherent(). -boris ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops 2021-06-14 15:45 ` Boris Ostrovsky (?) @ 2021-06-16 11:45 ` Roman Skakun -1 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-06-16 11:45 UTC (permalink / raw) To: Boris Ostrovsky Cc: Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov > We make sure that we allocate contiguous memory in xen_swiotlb_alloc_coherent(). I understood. Thanks! -- Best Regards, Roman. ^ permalink raw reply [flat|nested] 63+ messages in thread
* [PATCH 1/2] Revert "swiotlb-xen: remove xen_swiotlb_dma_mmap and xen_swiotlb_dma_get_sgtable" 2021-06-11 15:19 ` Boris Ostrovsky @ 2021-06-16 11:42 ` Roman Skakun -1 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-06-16 11:42 UTC (permalink / raw) To: Konrad Rzeszutek Wilk, Boris Ostrovsky, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel Cc: Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Roman Skakun This reverts commit 922659ea771b3fd728149262c5ea15608fab9719. Signed-off-by: Roman Skakun <roman_skakun@epam.com> --- drivers/xen/swiotlb-xen.c | 29 +++++++++++++++++++++++++++-- 1 file changed, 27 insertions(+), 2 deletions(-) diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 2b385c1b4a99..90bc5fc321bc 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -563,6 +563,31 @@ xen_swiotlb_dma_supported(struct device *hwdev, u64 mask) return xen_virt_to_bus(hwdev, xen_io_tlb_end - 1) <= mask; } +/* + * Create userspace mapping for the DMA-coherent memory. + * This function should be called with the pages from the current domain only, + * passing pages mapped from other domains would lead to memory corruption. + */ +static int +xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma, + void *cpu_addr, dma_addr_t dma_addr, size_t size, + unsigned long attrs) +{ + return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size, attrs); +} + +/* + * This function should be called with the pages from the current domain only, + * passing pages mapped from other domains would lead to memory corruption. 
+ */ +static int +xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt, + void *cpu_addr, dma_addr_t handle, size_t size, + unsigned long attrs) +{ + return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size, attrs); +} + const struct dma_map_ops xen_swiotlb_dma_ops = { .alloc = xen_swiotlb_alloc_coherent, .free = xen_swiotlb_free_coherent, @@ -575,8 +600,8 @@ const struct dma_map_ops xen_swiotlb_dma_ops = { .map_page = xen_swiotlb_map_page, .unmap_page = xen_swiotlb_unmap_page, .dma_supported = xen_swiotlb_dma_supported, - .mmap = dma_common_mmap, - .get_sgtable = dma_common_get_sgtable, + .mmap = xen_swiotlb_dma_mmap, + .get_sgtable = xen_swiotlb_get_sgtable, .alloc_pages = dma_common_alloc_pages, .free_pages = dma_common_free_pages, }; -- 2.25.1 ^ permalink raw reply related [flat|nested] 63+ messages in thread
* [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops 2021-06-16 11:42 ` Roman Skakun @ 2021-06-16 11:42 ` Roman Skakun -1 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-06-16 11:42 UTC (permalink / raw) To: Konrad Rzeszutek Wilk, Boris Ostrovsky, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel Cc: Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Roman Skakun, Andrii Anisov This commit fixes an incorrect conversion from cpu_addr to a page address in cases when we get a virtual address which was allocated through xen_swiotlb_alloc_coherent() and can be mapped in the vmalloc range. As a result, virt_to_page() cannot convert this address properly and returns an incorrect page address. We need to detect such cases and obtain the page address using vmalloc_to_page() instead. The reference code for mmap() and get_sgtable() was copied from kernel/dma/ops_helpers.c and modified to provide the additional detection described above. To simplify the code, a new cpu_addr_to_page() helper was added. 
Signed-off-by: Roman Skakun <roman_skakun@epam.com> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> --- drivers/xen/swiotlb-xen.c | 42 +++++++++++++++++++++++++++++++-------- 1 file changed, 34 insertions(+), 8 deletions(-) diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 90bc5fc321bc..9331a8500547 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -118,6 +118,14 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr) return 0; } +static struct page *cpu_addr_to_page(void *cpu_addr) +{ + if (is_vmalloc_addr(cpu_addr)) + return vmalloc_to_page(cpu_addr); + else + return virt_to_page(cpu_addr); +} + static int xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs) { @@ -337,7 +345,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, int order = get_order(size); phys_addr_t phys; u64 dma_mask = DMA_BIT_MASK(32); - struct page *page; + struct page *page = cpu_addr_to_page(vaddr); if (hwdev && hwdev->coherent_dma_mask) dma_mask = hwdev->coherent_dma_mask; @@ -349,11 +357,6 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, /* Convert the size to actually allocated. 
*/ size = 1UL << (order + XEN_PAGE_SHIFT); - if (is_vmalloc_addr(vaddr)) - page = vmalloc_to_page(vaddr); - else - page = virt_to_page(vaddr); - if (!WARN_ON((dev_addr + size - 1 > dma_mask) || range_straddles_page_boundary(phys, size)) && TestClearPageXenRemapped(page)) @@ -573,7 +576,23 @@ xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma, void *cpu_addr, dma_addr_t dma_addr, size_t size, unsigned long attrs) { - return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size, attrs); + unsigned long user_count = vma_pages(vma); + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; + unsigned long off = vma->vm_pgoff; + struct page *page = cpu_addr_to_page(cpu_addr); + int ret; + + vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); + + if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret)) + return ret; + + if (off >= count || user_count > count - off) + return -ENXIO; + + return remap_pfn_range(vma, vma->vm_start, + page_to_pfn(page) + vma->vm_pgoff, + user_count << PAGE_SHIFT, vma->vm_page_prot); } /* @@ -585,7 +604,14 @@ xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt, void *cpu_addr, dma_addr_t handle, size_t size, unsigned long attrs) { - return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size, attrs); + struct page *page = cpu_addr_to_page(cpu_addr); + int ret; + + ret = sg_alloc_table(sgt, 1, GFP_KERNEL); + if (!ret) + sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0); + + return ret; } const struct dma_map_ops xen_swiotlb_dma_ops = { -- 2.25.1 ^ permalink raw reply related [flat|nested] 63+ messages in thread
* [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops @ 2021-06-16 11:42 ` Roman Skakun 0 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-06-16 11:42 UTC (permalink / raw) To: Konrad Rzeszutek Wilk, Boris Ostrovsky, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel Cc: Andrii Anisov, Oleksandr Andrushchenko, Oleksandr Tyshchenko, Roman Skakun, Roman Skakun, Volodymyr Babchuk This commit fixes an incorrect conversion from cpu_addr to a page address in cases where we get a virtual address that was allocated through xen_swiotlb_alloc_coherent() and may be mapped in the vmalloc range. As a result, virt_to_page() cannot convert such an address properly and returns a wrong page address. These cases need to be detected so that the page address is obtained with vmalloc_to_page() instead. The reference code for mmap() and get_sgtable() was copied from kernel/dma/ops_helpers.c and modified to add the detection described above. A new cpu_addr_to_page() helper was added to simplify the code. 
Signed-off-by: Roman Skakun <roman_skakun@epam.com> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> --- drivers/xen/swiotlb-xen.c | 42 +++++++++++++++++++++++++++++++-------- 1 file changed, 34 insertions(+), 8 deletions(-) diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 90bc5fc321bc..9331a8500547 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -118,6 +118,14 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr) return 0; } +static struct page *cpu_addr_to_page(void *cpu_addr) +{ + if (is_vmalloc_addr(cpu_addr)) + return vmalloc_to_page(cpu_addr); + else + return virt_to_page(cpu_addr); +} + static int xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs) { @@ -337,7 +345,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, int order = get_order(size); phys_addr_t phys; u64 dma_mask = DMA_BIT_MASK(32); - struct page *page; + struct page *page = cpu_addr_to_page(vaddr); if (hwdev && hwdev->coherent_dma_mask) dma_mask = hwdev->coherent_dma_mask; @@ -349,11 +357,6 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, /* Convert the size to actually allocated. 
*/ size = 1UL << (order + XEN_PAGE_SHIFT); - if (is_vmalloc_addr(vaddr)) - page = vmalloc_to_page(vaddr); - else - page = virt_to_page(vaddr); - if (!WARN_ON((dev_addr + size - 1 > dma_mask) || range_straddles_page_boundary(phys, size)) && TestClearPageXenRemapped(page)) @@ -573,7 +576,23 @@ xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma, void *cpu_addr, dma_addr_t dma_addr, size_t size, unsigned long attrs) { - return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size, attrs); + unsigned long user_count = vma_pages(vma); + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; + unsigned long off = vma->vm_pgoff; + struct page *page = cpu_addr_to_page(cpu_addr); + int ret; + + vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); + + if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret)) + return ret; + + if (off >= count || user_count > count - off) + return -ENXIO; + + return remap_pfn_range(vma, vma->vm_start, + page_to_pfn(page) + vma->vm_pgoff, + user_count << PAGE_SHIFT, vma->vm_page_prot); } /* @@ -585,7 +604,14 @@ xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt, void *cpu_addr, dma_addr_t handle, size_t size, unsigned long attrs) { - return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size, attrs); + struct page *page = cpu_addr_to_page(cpu_addr); + int ret; + + ret = sg_alloc_table(sgt, 1, GFP_KERNEL); + if (!ret) + sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0); + + return ret; } const struct dma_map_ops xen_swiotlb_dma_ops = { -- 2.25.1 _______________________________________________ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu ^ permalink raw reply related [flat|nested] 63+ messages in thread
* Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops 2021-06-16 11:42 ` Roman Skakun @ 2021-06-16 14:12 ` Boris Ostrovsky -1 siblings, 0 replies; 63+ messages in thread From: Boris Ostrovsky @ 2021-06-16 14:12 UTC (permalink / raw) To: Roman Skakun, Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Christoph Hellwig Cc: Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov On 6/16/21 7:42 AM, Roman Skakun wrote: > This commit is dedicated to fix incorrect conversion from > cpu_addr to page address in cases when we get virtual > address which allocated through xen_swiotlb_alloc_coherent() > and can be mapped in the vmalloc range. > As the result, virt_to_page() cannot convert this address > properly and return incorrect page address. > > Need to detect such cases and obtains the page address using > vmalloc_to_page() instead. > > The reference code for mmap() and get_sgtable() was copied > from kernel/dma/ops_helpers.c and modified to provide > additional detections as described above. > > In order to simplify code there was added a new > dma_cpu_addr_to_page() helper. 
> > Signed-off-by: Roman Skakun <roman_skakun@epam.com> > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> > --- > drivers/xen/swiotlb-xen.c | 42 +++++++++++++++++++++++++++++++-------- > 1 file changed, 34 insertions(+), 8 deletions(-) > > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c > index 90bc5fc321bc..9331a8500547 100644 > --- a/drivers/xen/swiotlb-xen.c > +++ b/drivers/xen/swiotlb-xen.c > @@ -118,6 +118,14 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr) > return 0; > } > > +static struct page *cpu_addr_to_page(void *cpu_addr) > +{ > + if (is_vmalloc_addr(cpu_addr)) > + return vmalloc_to_page(cpu_addr); > + else > + return virt_to_page(cpu_addr); > +} > + > static int > xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs) > { > @@ -337,7 +345,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, > int order = get_order(size); > phys_addr_t phys; > u64 dma_mask = DMA_BIT_MASK(32); > - struct page *page; > + struct page *page = cpu_addr_to_page(vaddr); > > if (hwdev && hwdev->coherent_dma_mask) > dma_mask = hwdev->coherent_dma_mask; > @@ -349,11 +357,6 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, > /* Convert the size to actually allocated. 
*/ > size = 1UL << (order + XEN_PAGE_SHIFT); > > - if (is_vmalloc_addr(vaddr)) > - page = vmalloc_to_page(vaddr); > - else > - page = virt_to_page(vaddr); > - > if (!WARN_ON((dev_addr + size - 1 > dma_mask) || > range_straddles_page_boundary(phys, size)) && > TestClearPageXenRemapped(page)) > @@ -573,7 +576,23 @@ xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma, > void *cpu_addr, dma_addr_t dma_addr, size_t size, > unsigned long attrs) > { > - return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size, attrs); > + unsigned long user_count = vma_pages(vma); > + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; > + unsigned long off = vma->vm_pgoff; > + struct page *page = cpu_addr_to_page(cpu_addr); > + int ret; > + > + vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); > + > + if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret)) > + return ret; > + > + if (off >= count || user_count > count - off) > + return -ENXIO; > + > + return remap_pfn_range(vma, vma->vm_start, > + page_to_pfn(page) + vma->vm_pgoff, > + user_count << PAGE_SHIFT, vma->vm_page_prot); > } I wonder now whether we could avoid code duplication between here and dma_common_mmap()/dma_common_get_sgtable() and use your helper there. Christoph, would that work? I.e. 
something like diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c index 910ae69cae77..43411c2fa47b 100644 --- a/kernel/dma/ops_helpers.c +++ b/kernel/dma/ops_helpers.c @@ -12,7 +12,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, void *cpu_addr, dma_addr_t dma_addr, size_t size, unsigned long attrs) { - struct page *page = virt_to_page(cpu_addr); + struct page *page = cpu_addr_to_page(cpu_addr); int ret; ret = sg_alloc_table(sgt, 1, GFP_KERNEL); @@ -43,7 +43,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, return -ENXIO; return remap_pfn_range(vma, vma->vm_start, - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, + page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff, user_count << PAGE_SHIFT, vma->vm_page_prot); #else return -ENXIO; -boris > > /* > @@ -585,7 +604,14 @@ xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt, > void *cpu_addr, dma_addr_t handle, size_t size, > unsigned long attrs) > { > - return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size, attrs); > + struct page *page = cpu_addr_to_page(cpu_addr); > + int ret; > + > + ret = sg_alloc_table(sgt, 1, GFP_KERNEL); > + if (!ret) > + sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0); > + > + return ret; > } > > const struct dma_map_ops xen_swiotlb_dma_ops = { ^ permalink raw reply related [flat|nested] 63+ messages in thread
* Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops 2021-06-16 14:12 ` Boris Ostrovsky @ 2021-06-16 14:21 ` Christoph Hellwig -1 siblings, 0 replies; 63+ messages in thread From: Christoph Hellwig @ 2021-06-16 14:21 UTC (permalink / raw) To: Boris Ostrovsky Cc: Roman Skakun, Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Christoph Hellwig, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov On Wed, Jun 16, 2021 at 10:12:55AM -0400, Boris Ostrovsky wrote: > I wonder now whether we could avoid code duplication between here and dma_common_mmap()/dma_common_get_sgtable() and use your helper there. > > > Christoph, would that work? I.e. something like You should not duplicate the code at all, and just make the common helpers work with vmalloc addresses. ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops 2021-06-16 14:21 ` Christoph Hellwig @ 2021-06-16 15:33 ` Boris Ostrovsky -1 siblings, 0 replies; 63+ messages in thread From: Boris Ostrovsky @ 2021-06-16 15:33 UTC (permalink / raw) To: Christoph Hellwig Cc: Roman Skakun, Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov On 6/16/21 10:21 AM, Christoph Hellwig wrote: > On Wed, Jun 16, 2021 at 10:12:55AM -0400, Boris Ostrovsky wrote: >> I wonder now whether we could avoid code duplication between here and dma_common_mmap()/dma_common_get_sgtable() and use your helper there. >> >> >> Christoph, would that work? I.e. something like > You should not duplicate the code at all, and just make the common > helpers work with vmalloc addresses. Isn't the expectation of virt_to_page() that it only works on non-vmalloc'd addresses? (This is not a rhetorical question, I actually don't know). -boris ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops 2021-06-16 15:33 ` Boris Ostrovsky @ 2021-06-16 15:35 ` Christoph Hellwig -1 siblings, 0 replies; 63+ messages in thread From: Christoph Hellwig @ 2021-06-16 15:35 UTC (permalink / raw) To: Boris Ostrovsky Cc: Christoph Hellwig, Roman Skakun, Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov On Wed, Jun 16, 2021 at 11:33:50AM -0400, Boris Ostrovsky wrote: > Isn't the expectation of virt_to_page() that it only works on non-vmalloc'd addresses? (This is not a rhetorical question, I actually don't know). Yes. Thus is why I'd suggest to just do the vmalloc_to_page or virt_to_page dance in ops_helpers.c and just continue using that. ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops 2021-06-16 15:35 ` Christoph Hellwig @ 2021-06-16 15:39 ` Boris Ostrovsky -1 siblings, 0 replies; 63+ messages in thread From: Boris Ostrovsky @ 2021-06-16 15:39 UTC (permalink / raw) To: Christoph Hellwig Cc: Roman Skakun, Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov On 6/16/21 11:35 AM, Christoph Hellwig wrote: > On Wed, Jun 16, 2021 at 11:33:50AM -0400, Boris Ostrovsky wrote: >> Isn't the expectation of virt_to_page() that it only works on non-vmalloc'd addresses? (This is not a rhetorical question, I actually don't know). > Yes. Thus is why I'd suggest to just do the vmalloc_to_page or > virt_to_page dance in ops_helpers.c and just continue using that. Ah, OK, so something along the lines of what I suggested. (I thought by "helpers" you meant virt_to_page()). -boris ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops 2021-06-16 15:39 ` Boris Ostrovsky @ 2021-06-16 15:44 ` Christoph Hellwig -1 siblings, 0 replies; 63+ messages in thread From: Christoph Hellwig @ 2021-06-16 15:44 UTC (permalink / raw) To: Boris Ostrovsky Cc: Christoph Hellwig, Roman Skakun, Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov On Wed, Jun 16, 2021 at 11:39:07AM -0400, Boris Ostrovsky wrote: > > On 6/16/21 11:35 AM, Christoph Hellwig wrote: > > On Wed, Jun 16, 2021 at 11:33:50AM -0400, Boris Ostrovsky wrote: > >> Isn't the expectation of virt_to_page() that it only works on non-vmalloc'd addresses? (This is not a rhetorical question, I actually don't know). > > Yes. Thus is why I'd suggest to just do the vmalloc_to_page or > > virt_to_page dance in ops_helpers.c and just continue using that. > > > Ah, OK, so something along the lines of what I suggested. (I thought by "helpers" you meant virt_to_page()). Yes. Just keeping it contained in the common code without duplicating it into a xen-specific version. ^ permalink raw reply [flat|nested] 63+ messages in thread
* [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-06-16 15:44 ` Christoph Hellwig @ 2021-06-22 13:34 ` Roman Skakun -1 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-06-22 13:34 UTC (permalink / raw) To: Konrad Rzeszutek Wilk, Boris Ostrovsky, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel Cc: Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Roman Skakun, Andrii Anisov This commit fixes an incorrect conversion from cpu_addr to a page address in cases where we get a virtual address that was allocated in the vmalloc range. As a result, virt_to_page() cannot convert such an address properly and returns a wrong page address. These cases need to be detected so that the page address is obtained with vmalloc_to_page() instead. Signed-off-by: Roman Skakun <roman_skakun@epam.com> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> --- Hey! Thanks for the suggestions, Christoph! I updated the patch according to your advice. But I'm surprised that nobody has caught this problem in the common code before; it looks a bit strange to me. kernel/dma/ops_helpers.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c index 910ae69cae77..782728d8a393 100644 --- a/kernel/dma/ops_helpers.c +++ b/kernel/dma/ops_helpers.c @@ -5,6 +5,14 @@ */ #include <linux/dma-map-ops.h> +static struct page *cpu_addr_to_page(void *cpu_addr) +{ + if (is_vmalloc_addr(cpu_addr)) + return vmalloc_to_page(cpu_addr); + else + return virt_to_page(cpu_addr); +} + /* * Create scatter-list for the already allocated DMA buffer. 
*/ @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, void *cpu_addr, dma_addr_t dma_addr, size_t size, unsigned long attrs) { - struct page *page = virt_to_page(cpu_addr); + struct page *page = cpu_addr_to_page(cpu_addr); int ret; ret = sg_alloc_table(sgt, 1, GFP_KERNEL); @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, return -ENXIO; return remap_pfn_range(vma, vma->vm_start, - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, + page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff, user_count << PAGE_SHIFT, vma->vm_page_prot); #else return -ENXIO; -- 2.25.1 ^ permalink raw reply related [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-06-22 13:34 ` Roman Skakun @ 2021-07-14 0:15 ` Konrad Rzeszutek Wilk -1 siblings, 0 replies; 63+ messages in thread From: Konrad Rzeszutek Wilk @ 2021-07-14 0:15 UTC (permalink / raw) To: Roman Skakun Cc: Boris Ostrovsky, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov On Tue, Jun 22, 2021 at 04:34:14PM +0300, Roman Skakun wrote: > This commit is dedicated to fix incorrect conversion from > cpu_addr to page address in cases when we get virtual > address which allocated in the vmalloc range. > As the result, virt_to_page() cannot convert this address > properly and return incorrect page address. > > Need to detect such cases and obtains the page address using > vmalloc_to_page() instead. > > Signed-off-by: Roman Skakun <roman_skakun@epam.com> > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> > --- > Hey! > Thanks for suggestions, Christoph! > I updated the patch according to your advice. > But, I'm so surprised because nobody catches this problem > in the common code before. It looks a bit strange as for me. This looks like it wasn't picked up? Should it go in rc1? > > > kernel/dma/ops_helpers.c | 12 ++++++++++-- > 1 file changed, 10 insertions(+), 2 deletions(-) > > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c > index 910ae69cae77..782728d8a393 100644 > --- a/kernel/dma/ops_helpers.c > +++ b/kernel/dma/ops_helpers.c > @@ -5,6 +5,14 @@ > */ > #include <linux/dma-map-ops.h> > > +static struct page *cpu_addr_to_page(void *cpu_addr) > +{ > + if (is_vmalloc_addr(cpu_addr)) > + return vmalloc_to_page(cpu_addr); > + else > + return virt_to_page(cpu_addr); > +} > + > /* > * Create scatter-list for the already allocated DMA buffer. 
> */ > @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, > void *cpu_addr, dma_addr_t dma_addr, size_t size, > unsigned long attrs) > { > - struct page *page = virt_to_page(cpu_addr); > + struct page *page = cpu_addr_to_page(cpu_addr); > int ret; > > ret = sg_alloc_table(sgt, 1, GFP_KERNEL); > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > return -ENXIO; > > return remap_pfn_range(vma, vma->vm_start, > - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, > + page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff, > user_count << PAGE_SHIFT, vma->vm_page_prot); > #else > return -ENXIO; > -- > 2.25.1 > ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-07-14 0:15 ` Konrad Rzeszutek Wilk (?) @ 2021-07-15 7:39 ` Roman Skakun -1 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-07-15 7:39 UTC (permalink / raw) To: Konrad Rzeszutek Wilk Cc: Boris Ostrovsky, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov > This looks like it wasn't picked up? Should it go in rc1? Hi, Konrad! This looks like an unambiguous bug, and should be in rc1. Cheers! On Wed, Jul 14, 2021 at 03:15, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote: > > On Tue, Jun 22, 2021 at 04:34:14PM +0300, Roman Skakun wrote: > > This commit is dedicated to fix incorrect conversion from > > cpu_addr to page address in cases when we get virtual > > address which allocated in the vmalloc range. > > As the result, virt_to_page() cannot convert this address > > properly and return incorrect page address. > > > > Need to detect such cases and obtains the page address using > > vmalloc_to_page() instead. > > > > Signed-off-by: Roman Skakun <roman_skakun@epam.com> > > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> > > --- > > Hey! > > Thanks for suggestions, Christoph! > > I updated the patch according to your advice. > > But, I'm so surprised because nobody catches this problem > > in the common code before. It looks a bit strange as for me. > > This looks like it wasn't picked up? Should it go in rc1? 
> > > > > > kernel/dma/ops_helpers.c | 12 ++++++++++-- > > 1 file changed, 10 insertions(+), 2 deletions(-) > > > > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c > > index 910ae69cae77..782728d8a393 100644 > > --- a/kernel/dma/ops_helpers.c > > +++ b/kernel/dma/ops_helpers.c > > @@ -5,6 +5,14 @@ > > */ > > #include <linux/dma-map-ops.h> > > > > +static struct page *cpu_addr_to_page(void *cpu_addr) > > +{ > > + if (is_vmalloc_addr(cpu_addr)) > > + return vmalloc_to_page(cpu_addr); > > + else > > + return virt_to_page(cpu_addr); > > +} > > + > > /* > > * Create scatter-list for the already allocated DMA buffer. > > */ > > @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, > > void *cpu_addr, dma_addr_t dma_addr, size_t size, > > unsigned long attrs) > > { > > - struct page *page = virt_to_page(cpu_addr); > > + struct page *page = cpu_addr_to_page(cpu_addr); > > int ret; > > > > ret = sg_alloc_table(sgt, 1, GFP_KERNEL); > > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > > return -ENXIO; > > > > return remap_pfn_range(vma, vma->vm_start, > > - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, > > + page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff, > > user_count << PAGE_SHIFT, vma->vm_page_prot); > > #else > > return -ENXIO; > > -- > > 2.25.1 > > -- Best Regards, Roman. ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-07-15 7:39 ` Roman Skakun @ 2021-07-15 16:58 ` Boris Ostrovsky -1 siblings, 0 replies; 63+ messages in thread From: Boris Ostrovsky @ 2021-07-15 16:58 UTC (permalink / raw) To: Roman Skakun, Konrad Rzeszutek Wilk Cc: Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov, Christoph Hellwig On 7/15/21 3:39 AM, Roman Skakun wrote: >> This looks like it wasn't picked up? Should it go in rc1? > Hi, Konrad! > > This looks like an unambiguous bug, and should be in rc1. Looks like you didn't copy Christoph which could be part of the problem. Adding him. -boris > > Cheers! > > ср, 14 июл. 2021 г. в 03:15, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>: >> On Tue, Jun 22, 2021 at 04:34:14PM +0300, Roman Skakun wrote: >>> This commit is dedicated to fix incorrect conversion from >>> cpu_addr to page address in cases when we get virtual >>> address which allocated in the vmalloc range. >>> As the result, virt_to_page() cannot convert this address >>> properly and return incorrect page address. >>> >>> Need to detect such cases and obtains the page address using >>> vmalloc_to_page() instead. >>> >>> Signed-off-by: Roman Skakun <roman_skakun@epam.com> >>> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> >>> --- >>> Hey! >>> Thanks for suggestions, Christoph! >>> I updated the patch according to your advice. >>> But, I'm so surprised because nobody catches this problem >>> in the common code before. It looks a bit strange as for me. >> This looks like it wasn't picked up? Should it go in rc1? 
>>> >>> kernel/dma/ops_helpers.c | 12 ++++++++++-- >>> 1 file changed, 10 insertions(+), 2 deletions(-) >>> >>> diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c >>> index 910ae69cae77..782728d8a393 100644 >>> --- a/kernel/dma/ops_helpers.c >>> +++ b/kernel/dma/ops_helpers.c >>> @@ -5,6 +5,14 @@ >>> */ >>> #include <linux/dma-map-ops.h> >>> >>> +static struct page *cpu_addr_to_page(void *cpu_addr) >>> +{ >>> + if (is_vmalloc_addr(cpu_addr)) >>> + return vmalloc_to_page(cpu_addr); >>> + else >>> + return virt_to_page(cpu_addr); >>> +} >>> + >>> /* >>> * Create scatter-list for the already allocated DMA buffer. >>> */ >>> @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, >>> void *cpu_addr, dma_addr_t dma_addr, size_t size, >>> unsigned long attrs) >>> { >>> - struct page *page = virt_to_page(cpu_addr); >>> + struct page *page = cpu_addr_to_page(cpu_addr); >>> int ret; >>> >>> ret = sg_alloc_table(sgt, 1, GFP_KERNEL); >>> @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, >>> return -ENXIO; >>> >>> return remap_pfn_range(vma, vma->vm_start, >>> - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, >>> + page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff, >>> user_count << PAGE_SHIFT, vma->vm_page_prot); >>> #else >>> return -ENXIO; >>> -- >>> 2.25.1 >>> > > ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-07-15 16:58 ` Boris Ostrovsky @ 2021-07-15 17:00 ` Christoph Hellwig -1 siblings, 0 replies; 63+ messages in thread From: Christoph Hellwig @ 2021-07-15 17:00 UTC (permalink / raw) To: Boris Ostrovsky Cc: Roman Skakun, Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov, Christoph Hellwig On Thu, Jul 15, 2021 at 12:58:53PM -0400, Boris Ostrovsky wrote: > > On 7/15/21 3:39 AM, Roman Skakun wrote: > >> This looks like it wasn't picked up? Should it go in rc1? > > Hi, Konrad! > > > > This looks like an unambiguous bug, and should be in rc1. > > > Looks like you didn't copy Christoph which could be part of the problem. Adding him. Can someone just send a clean patch that I can review and hopefully apply? ^ permalink raw reply [flat|nested] 63+ messages in thread
* [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-07-15 17:00 ` Christoph Hellwig @ 2021-07-16 8:39 ` Roman Skakun -1 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-07-16 8:39 UTC (permalink / raw) To: Christoph Hellwig Cc: Boris Ostrovsky, Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Andrii Anisov, Roman Skakun, Roman Skakun, Roman Skakun From: Roman Skakun <Roman_Skakun@epam.com> This commit is dedicated to fix incorrect conversion from cpu_addr to page address in cases when we get virtual address which allocated in the vmalloc range. As the result, virt_to_page() cannot convert this address properly and return incorrect page address. Need to detect such cases and obtains the page address using vmalloc_to_page() instead. Signed-off-by: Roman Skakun <roman_skakun@epam.com> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> --- Hi, Christoph! It's updated patch in accordance with your and Stefano suggestions. drivers/xen/swiotlb-xen.c | 7 +------ include/linux/dma-map-ops.h | 2 ++ kernel/dma/ops_helpers.c | 16 ++++++++++++++-- 3 files changed, 17 insertions(+), 8 deletions(-) diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index 92ee6eea30cd..c2f612a10a95 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -337,7 +337,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, int order = get_order(size); phys_addr_t phys; u64 dma_mask = DMA_BIT_MASK(32); - struct page *page; + struct page *page = cpu_addr_to_page(vaddr); if (hwdev && hwdev->coherent_dma_mask) dma_mask = hwdev->coherent_dma_mask; @@ -349,11 +349,6 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, /* Convert the size to actually allocated. 
*/ size = 1UL << (order + XEN_PAGE_SHIFT); - if (is_vmalloc_addr(vaddr)) - page = vmalloc_to_page(vaddr); - else - page = virt_to_page(vaddr); - if (!WARN_ON((dev_addr + size - 1 > dma_mask) || range_straddles_page_boundary(phys, size)) && TestClearPageXenRemapped(page)) diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index a5f89fc4d6df..ce0edb0bb603 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -226,6 +226,8 @@ struct page *dma_alloc_from_pool(struct device *dev, size_t size, bool (*phys_addr_ok)(struct device *, phys_addr_t, size_t)); bool dma_free_from_pool(struct device *dev, void *start, size_t size); +struct page *cpu_addr_to_page(void *cpu_addr); + #ifdef CONFIG_ARCH_HAS_DMA_COHERENCE_H #include <asm/dma-coherence.h> #elif defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \ diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c index 910ae69cae77..472e861750d3 100644 --- a/kernel/dma/ops_helpers.c +++ b/kernel/dma/ops_helpers.c @@ -5,6 +5,17 @@ */ #include <linux/dma-map-ops.h> +/* + * This helper converts virtual address to page address. + */ +struct page *cpu_addr_to_page(void *cpu_addr) +{ + if (is_vmalloc_addr(cpu_addr)) + return vmalloc_to_page(cpu_addr); + else + return virt_to_page(cpu_addr); +} + /* * Create scatter-list for the already allocated DMA buffer. 
*/ @@ -12,7 +23,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, void *cpu_addr, dma_addr_t dma_addr, size_t size, unsigned long attrs) { - struct page *page = virt_to_page(cpu_addr); + struct page *page = cpu_addr_to_page(cpu_addr); int ret; ret = sg_alloc_table(sgt, 1, GFP_KERNEL); @@ -32,6 +43,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, unsigned long user_count = vma_pages(vma); unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; unsigned long off = vma->vm_pgoff; + struct page *page = cpu_addr_to_page(cpu_addr); int ret = -ENXIO; vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); @@ -43,7 +55,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, return -ENXIO; return remap_pfn_range(vma, vma->vm_start, - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, + page_to_pfn(page) + vma->vm_pgoff, user_count << PAGE_SHIFT, vma->vm_page_prot); #else return -ENXIO; -- 2.27.0 ^ permalink raw reply related [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-07-16 8:39 ` Roman Skakun @ 2021-07-16 9:35 ` Christoph Hellwig -1 siblings, 0 replies; 63+ messages in thread From: Christoph Hellwig @ 2021-07-16 9:35 UTC (permalink / raw) To: Roman Skakun Cc: Christoph Hellwig, Boris Ostrovsky, Konrad Rzeszutek Wilk, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Andrii Anisov, Roman Skakun Technically this looks good. But given that exposing a helper that does either vmalloc_to_page or virt_to_page is one of the never ending MM discussions I don't want to get into that discussion and just keep it local in the DMA code. Are you fine with me applying this version? --- From 40ac971eab89330d6153e7721e88acd2d98833f9 Mon Sep 17 00:00:00 2001 From: Roman Skakun <Roman_Skakun@epam.com> Date: Fri, 16 Jul 2021 11:39:34 +0300 Subject: dma-mapping: handle vmalloc addresses in dma_common_{mmap,get_sgtable} xen-swiotlb can use vmalloc backed addresses for dma coherent allocations and uses the common helpers. Properly handle them to unbreak Xen on ARM platforms. Fixes: 1b65c4e5a9af ("swiotlb-xen: use xen_alloc/free_coherent_pages") Signed-off-by: Roman Skakun <roman_skakun@epam.com> Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> [hch: split the patch, renamed the helpers] Signed-off-by: Christoph Hellwig <hch@lst.de> --- kernel/dma/ops_helpers.c | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c index 910ae69cae77..af4a6ef48ce0 100644 --- a/kernel/dma/ops_helpers.c +++ b/kernel/dma/ops_helpers.c @@ -5,6 +5,13 @@ */ #include <linux/dma-map-ops.h> +static struct page *dma_common_vaddr_to_page(void *cpu_addr) +{ + if (is_vmalloc_addr(cpu_addr)) + return vmalloc_to_page(cpu_addr); + return virt_to_page(cpu_addr); +} + /* * Create scatter-list for the already allocated DMA buffer. 
*/ @@ -12,7 +19,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, void *cpu_addr, dma_addr_t dma_addr, size_t size, unsigned long attrs) { - struct page *page = virt_to_page(cpu_addr); + struct page *page = dma_common_vaddr_to_page(cpu_addr); int ret; ret = sg_alloc_table(sgt, 1, GFP_KERNEL); @@ -32,6 +39,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, unsigned long user_count = vma_pages(vma); unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; unsigned long off = vma->vm_pgoff; + struct page *page = dma_common_vaddr_to_page(cpu_addr); int ret = -ENXIO; vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, return -ENXIO; return remap_pfn_range(vma, vma->vm_start, - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, + page_to_pfn(page) + vma->vm_pgoff, user_count << PAGE_SHIFT, vma->vm_page_prot); #else return -ENXIO; -- 2.30.2 ^ permalink raw reply related [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-07-16 9:35 ` Christoph Hellwig (?) @ 2021-07-16 12:53 ` Roman Skakun -1 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-07-16 12:53 UTC (permalink / raw) To: Christoph Hellwig, Stefano Stabellini Cc: Boris Ostrovsky, Konrad Rzeszutek Wilk, Juergen Gross, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Andrii Anisov, Roman Skakun, Roman Skakun > Technically this looks good. But given that exposing a helper > that does either vmalloc_to_page or virt_to_page is one of the > never ending MM discussions I don't want to get into that discussion > and just keep it local in the DMA code. > > Are you fine with me applying this version? Looks good to me, thanks! But, Stefano asked me about using created helper in the xen_swiotlb_free_coherent() and I created a patch according to this mention. We can merge this patch and create a new one for xen_swiotlb_free_coherent() later. пт, 16 июл. 2021 г. в 12:35, Christoph Hellwig <hch@lst.de>: > > Technically this looks good. But given that exposing a helper > that does either vmalloc_to_page or virt_to_page is one of the > never ending MM discussions I don't want to get into that discussion > and just keep it local in the DMA code. > > Are you fine with me applying this version? > > --- > From 40ac971eab89330d6153e7721e88acd2d98833f9 Mon Sep 17 00:00:00 2001 > From: Roman Skakun <Roman_Skakun@epam.com> > Date: Fri, 16 Jul 2021 11:39:34 +0300 > Subject: dma-mapping: handle vmalloc addresses in > dma_common_{mmap,get_sgtable} > > xen-swiotlb can use vmalloc backed addresses for dma coherent allocations > and uses the common helpers. Properly handle them to unbreak Xen on > ARM platforms. 
> > Fixes: 1b65c4e5a9af ("swiotlb-xen: use xen_alloc/free_coherent_pages") > Signed-off-by: Roman Skakun <roman_skakun@epam.com> > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> > [hch: split the patch, renamed the helpers] > Signed-off-by: Christoph Hellwig <hch@lst.de> > --- > kernel/dma/ops_helpers.c | 12 ++++++++++-- > 1 file changed, 10 insertions(+), 2 deletions(-) > > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c > index 910ae69cae77..af4a6ef48ce0 100644 > --- a/kernel/dma/ops_helpers.c > +++ b/kernel/dma/ops_helpers.c > @@ -5,6 +5,13 @@ > */ > #include <linux/dma-map-ops.h> > > +static struct page *dma_common_vaddr_to_page(void *cpu_addr) > +{ > + if (is_vmalloc_addr(cpu_addr)) > + return vmalloc_to_page(cpu_addr); > + return virt_to_page(cpu_addr); > +} > + > /* > * Create scatter-list for the already allocated DMA buffer. > */ > @@ -12,7 +19,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, > void *cpu_addr, dma_addr_t dma_addr, size_t size, > unsigned long attrs) > { > - struct page *page = virt_to_page(cpu_addr); > + struct page *page = dma_common_vaddr_to_page(cpu_addr); > int ret; > > ret = sg_alloc_table(sgt, 1, GFP_KERNEL); > @@ -32,6 +39,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > unsigned long user_count = vma_pages(vma); > unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; > unsigned long off = vma->vm_pgoff; > + struct page *page = dma_common_vaddr_to_page(cpu_addr); > int ret = -ENXIO; > > vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > return -ENXIO; > > return remap_pfn_range(vma, vma->vm_start, > - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, > + page_to_pfn(page) + vma->vm_pgoff, > user_count << PAGE_SHIFT, vma->vm_page_prot); > #else > return -ENXIO; > -- > 2.30.2 > -- Best Regards, Roman. ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-07-16 12:53 ` Roman Skakun (?) @ 2021-07-16 15:29 ` Stefano Stabellini -1 siblings, 0 replies; 63+ messages in thread From: Stefano Stabellini @ 2021-07-16 15:29 UTC (permalink / raw) To: Roman Skakun Cc: Christoph Hellwig, Stefano Stabellini, Boris Ostrovsky, Konrad Rzeszutek Wilk, Juergen Gross, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Andrii Anisov, Roman Skakun [-- Attachment #1: Type: text/plain, Size: 3852 bytes --] On Fri, 16 Jul 2021, Roman Skakun wrote: > > Technically this looks good. But given that exposing a helper > > that does either vmalloc_to_page or virt_to_page is one of the > > never ending MM discussions I don't want to get into that discussion > > and just keep it local in the DMA code. > > > > Are you fine with me applying this version? > > Looks good to me, thanks! > But, Stefano asked me about using created helper in the > xen_swiotlb_free_coherent() > and I created a patch according to this mention. > > We can merge this patch and create a new one for > xen_swiotlb_free_coherent() later. Yeah, no worries, I didn't know that exposing dma_common_vaddr_to_page was problematic. This patch is fine by me. > пт, 16 июл. 2021 г. в 12:35, Christoph Hellwig <hch@lst.de>: > > > > Technically this looks good. But given that exposing a helper > > that does either vmalloc_to_page or virt_to_page is one of the > > never ending MM discussions I don't want to get into that discussion > > and just keep it local in the DMA code. > > > > Are you fine with me applying this version? 
> > > > --- > > From 40ac971eab89330d6153e7721e88acd2d98833f9 Mon Sep 17 00:00:00 2001 > > From: Roman Skakun <Roman_Skakun@epam.com> > > Date: Fri, 16 Jul 2021 11:39:34 +0300 > > Subject: dma-mapping: handle vmalloc addresses in > > dma_common_{mmap,get_sgtable} > > > > xen-swiotlb can use vmalloc backed addresses for dma coherent allocations > > and uses the common helpers. Properly handle them to unbreak Xen on > > ARM platforms. > > > > Fixes: 1b65c4e5a9af ("swiotlb-xen: use xen_alloc/free_coherent_pages") > > Signed-off-by: Roman Skakun <roman_skakun@epam.com> > > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> > > [hch: split the patch, renamed the helpers] > > Signed-off-by: Christoph Hellwig <hch@lst.de> > > --- > > kernel/dma/ops_helpers.c | 12 ++++++++++-- > > 1 file changed, 10 insertions(+), 2 deletions(-) > > > > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c > > index 910ae69cae77..af4a6ef48ce0 100644 > > --- a/kernel/dma/ops_helpers.c > > +++ b/kernel/dma/ops_helpers.c > > @@ -5,6 +5,13 @@ > > */ > > #include <linux/dma-map-ops.h> > > > > +static struct page *dma_common_vaddr_to_page(void *cpu_addr) > > +{ > > + if (is_vmalloc_addr(cpu_addr)) > > + return vmalloc_to_page(cpu_addr); > > + return virt_to_page(cpu_addr); > > +} > > + > > /* > > * Create scatter-list for the already allocated DMA buffer. 
> > */ > > @@ -12,7 +19,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, > > void *cpu_addr, dma_addr_t dma_addr, size_t size, > > unsigned long attrs) > > { > > - struct page *page = virt_to_page(cpu_addr); > > + struct page *page = dma_common_vaddr_to_page(cpu_addr); > > int ret; > > > > ret = sg_alloc_table(sgt, 1, GFP_KERNEL); > > @@ -32,6 +39,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > > unsigned long user_count = vma_pages(vma); > > unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; > > unsigned long off = vma->vm_pgoff; > > + struct page *page = dma_common_vaddr_to_page(cpu_addr); > > int ret = -ENXIO; > > > > vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); > > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > > return -ENXIO; > > > > return remap_pfn_range(vma, vma->vm_start, > > - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, > > + page_to_pfn(page) + vma->vm_pgoff, > > user_count << PAGE_SHIFT, vma->vm_page_prot); > > #else > > return -ENXIO; > > -- > > 2.30.2 > > > > > -- > Best Regards, Roman. > ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-07-16 15:29 ` Stefano Stabellini (?) @ 2021-07-17 8:39 ` Roman Skakun -1 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-07-17 8:39 UTC (permalink / raw) To: Christoph Hellwig, Stefano Stabellini Cc: Boris Ostrovsky, Konrad Rzeszutek Wilk, Juergen Gross, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Andrii Anisov, Roman Skakun, Roman Skakun > We can merge this patch and create a new one for > xen_swiotlb_free_coherent() later. > Yeah, no worries, I didn't know that exposing dma_common_vaddr_to_page > was problematic. > > This patch is fine by me. Good. I'm agreed too. Waiting for Christoph. пт, 16 июл. 2021 г. в 18:29, Stefano Stabellini <sstabellini@kernel.org>: > > On Fri, 16 Jul 2021, Roman Skakun wrote: > > > Technically this looks good. But given that exposing a helper > > > that does either vmalloc_to_page or virt_to_page is one of the > > > never ending MM discussions I don't want to get into that discussion > > > and just keep it local in the DMA code. > > > > > > Are you fine with me applying this version? > > > > Looks good to me, thanks! > > But, Stefano asked me about using created helper in the > > xen_swiotlb_free_coherent() > > and I created a patch according to this mention. > > > > We can merge this patch and create a new one for > > xen_swiotlb_free_coherent() later. > > Yeah, no worries, I didn't know that exposing dma_common_vaddr_to_page > was problematic. > > This patch is fine by me. > > > > пт, 16 июл. 2021 г. в 12:35, Christoph Hellwig <hch@lst.de>: > > > > > > Technically this looks good. But given that exposing a helper > > > that does either vmalloc_to_page or virt_to_page is one of the > > > never ending MM discussions I don't want to get into that discussion > > > and just keep it local in the DMA code. > > > > > > Are you fine with me applying this version? 
> > > > > > --- > > > From 40ac971eab89330d6153e7721e88acd2d98833f9 Mon Sep 17 00:00:00 2001 > > > From: Roman Skakun <Roman_Skakun@epam.com> > > > Date: Fri, 16 Jul 2021 11:39:34 +0300 > > > Subject: dma-mapping: handle vmalloc addresses in > > > dma_common_{mmap,get_sgtable} > > > > > > xen-swiotlb can use vmalloc backed addresses for dma coherent allocations > > > and uses the common helpers. Properly handle them to unbreak Xen on > > > ARM platforms. > > > > > > Fixes: 1b65c4e5a9af ("swiotlb-xen: use xen_alloc/free_coherent_pages") > > > Signed-off-by: Roman Skakun <roman_skakun@epam.com> > > > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> > > > [hch: split the patch, renamed the helpers] > > > Signed-off-by: Christoph Hellwig <hch@lst.de> > > > --- > > > kernel/dma/ops_helpers.c | 12 ++++++++++-- > > > 1 file changed, 10 insertions(+), 2 deletions(-) > > > > > > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c > > > index 910ae69cae77..af4a6ef48ce0 100644 > > > --- a/kernel/dma/ops_helpers.c > > > +++ b/kernel/dma/ops_helpers.c > > > @@ -5,6 +5,13 @@ > > > */ > > > #include <linux/dma-map-ops.h> > > > > > > +static struct page *dma_common_vaddr_to_page(void *cpu_addr) > > > +{ > > > + if (is_vmalloc_addr(cpu_addr)) > > > + return vmalloc_to_page(cpu_addr); > > > + return virt_to_page(cpu_addr); > > > +} > > > + > > > /* > > > * Create scatter-list for the already allocated DMA buffer. 
> > > */ > > > @@ -12,7 +19,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, > > > void *cpu_addr, dma_addr_t dma_addr, size_t size, > > > unsigned long attrs) > > > { > > > - struct page *page = virt_to_page(cpu_addr); > > > + struct page *page = dma_common_vaddr_to_page(cpu_addr); > > > int ret; > > > > > > ret = sg_alloc_table(sgt, 1, GFP_KERNEL); > > > @@ -32,6 +39,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > > > unsigned long user_count = vma_pages(vma); > > > unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; > > > unsigned long off = vma->vm_pgoff; > > > + struct page *page = dma_common_vaddr_to_page(cpu_addr); > > > int ret = -ENXIO; > > > > > > vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); > > > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > > > return -ENXIO; > > > > > > return remap_pfn_range(vma, vma->vm_start, > > > - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, > > > + page_to_pfn(page) + vma->vm_pgoff, > > > user_count << PAGE_SHIFT, vma->vm_page_prot); > > > #else > > > return -ENXIO; > > > -- > > > 2.30.2 > > > > > > > > > -- > > Best Regards, Roman. > > -- Best Regards, Roman. ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses @ 2021-07-17 8:39 ` Roman Skakun 0 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-07-17 8:39 UTC (permalink / raw) To: Christoph Hellwig, Stefano Stabellini Cc: Boris Ostrovsky, Konrad Rzeszutek Wilk, Juergen Gross, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Andrii Anisov, Roman Skakun, Roman Skakun > We can merge this patch and create a new one for > xen_swiotlb_free_coherent() later. > Yeah, no worries, I didn't know that exposing dma_common_vaddr_to_page > was problematic. > > This patch is fine by me. Good. I agree too. Waiting for Christoph. On Fri, 16 Jul 2021 at 18:29, Stefano Stabellini <sstabellini@kernel.org> wrote: > > On Fri, 16 Jul 2021, Roman Skakun wrote: > > > Technically this looks good. But given that exposing a helper > > > that does either vmalloc_to_page or virt_to_page is one of the > > > never ending MM discussions I don't want to get into that discussion > > > and just keep it local in the DMA code. > > > > > > Are you fine with me applying this version? > > > > Looks good to me, thanks! > > But Stefano asked me about using the created helper in > > xen_swiotlb_free_coherent(), > > so I created a patch based on that suggestion. > > > > We can merge this patch and create a new one for > > xen_swiotlb_free_coherent() later. > > Yeah, no worries, I didn't know that exposing dma_common_vaddr_to_page > > was problematic. > > This patch is fine by me. > > > > On Fri, 16 Jul 2021 at 12:35, Christoph Hellwig <hch@lst.de> wrote: > > > > > > Technically this looks good. But given that exposing a helper > > > that does either vmalloc_to_page or virt_to_page is one of the > > > never ending MM discussions I don't want to get into that discussion > > > and just keep it local in the DMA code. > > > > > > Are you fine with me applying this version? 
> > > > > > --- > > > From 40ac971eab89330d6153e7721e88acd2d98833f9 Mon Sep 17 00:00:00 2001 > > > From: Roman Skakun <Roman_Skakun@epam.com> > > > Date: Fri, 16 Jul 2021 11:39:34 +0300 > > > Subject: dma-mapping: handle vmalloc addresses in > > > dma_common_{mmap,get_sgtable} > > > > > > xen-swiotlb can use vmalloc backed addresses for dma coherent allocations > > > and uses the common helpers. Properly handle them to unbreak Xen on > > > ARM platforms. > > > > > > Fixes: 1b65c4e5a9af ("swiotlb-xen: use xen_alloc/free_coherent_pages") > > > Signed-off-by: Roman Skakun <roman_skakun@epam.com> > > > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> > > > [hch: split the patch, renamed the helpers] > > > Signed-off-by: Christoph Hellwig <hch@lst.de> > > > --- > > > kernel/dma/ops_helpers.c | 12 ++++++++++-- > > > 1 file changed, 10 insertions(+), 2 deletions(-) > > > > > > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c > > > index 910ae69cae77..af4a6ef48ce0 100644 > > > --- a/kernel/dma/ops_helpers.c > > > +++ b/kernel/dma/ops_helpers.c > > > @@ -5,6 +5,13 @@ > > > */ > > > #include <linux/dma-map-ops.h> > > > > > > +static struct page *dma_common_vaddr_to_page(void *cpu_addr) > > > +{ > > > + if (is_vmalloc_addr(cpu_addr)) > > > + return vmalloc_to_page(cpu_addr); > > > + return virt_to_page(cpu_addr); > > > +} > > > + > > > /* > > > * Create scatter-list for the already allocated DMA buffer. 
> > > */ > > > @@ -12,7 +19,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, > > > void *cpu_addr, dma_addr_t dma_addr, size_t size, > > > unsigned long attrs) > > > { > > > - struct page *page = virt_to_page(cpu_addr); > > > + struct page *page = dma_common_vaddr_to_page(cpu_addr); > > > int ret; > > > > > > ret = sg_alloc_table(sgt, 1, GFP_KERNEL); > > > @@ -32,6 +39,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > > > unsigned long user_count = vma_pages(vma); > > > unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT; > > > unsigned long off = vma->vm_pgoff; > > > + struct page *page = dma_common_vaddr_to_page(cpu_addr); > > > int ret = -ENXIO; > > > > > > vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs); > > > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > > > return -ENXIO; > > > > > > return remap_pfn_range(vma, vma->vm_start, > > > - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, > > > + page_to_pfn(page) + vma->vm_pgoff, > > > user_count << PAGE_SHIFT, vma->vm_page_prot); > > > #else > > > return -ENXIO; > > > -- > > > 2.30.2 > > > > > > > > > -- > > Best Regards, Roman. > > -- Best Regards, Roman. ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-07-17 8:39 ` Roman Skakun @ 2021-07-19 9:22 ` Christoph Hellwig -1 siblings, 0 replies; 63+ messages in thread From: Christoph Hellwig @ 2021-07-19 9:22 UTC (permalink / raw) To: Roman Skakun Cc: Christoph Hellwig, Stefano Stabellini, Boris Ostrovsky, Konrad Rzeszutek Wilk, Juergen Gross, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Andrii Anisov, Roman Skakun On Sat, Jul 17, 2021 at 11:39:21AM +0300, Roman Skakun wrote: > > We can merge this patch and create a new one for > > xen_swiotlb_free_coherent() later. > > Yeah, no worries, I didn't know that exposing dma_common_vaddr_to_page > > was problematic. > > > > This patch is fine by me. > > Good. I agree too. Waiting for Christoph. Fine with me. I've queued up the modified patch. ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-07-19 9:22 ` Christoph Hellwig (?) @ 2021-07-21 18:39 ` Roman Skakun -1 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-07-21 18:39 UTC (permalink / raw) To: Christoph Hellwig Cc: Stefano Stabellini, Boris Ostrovsky, Konrad Rzeszutek Wilk, Juergen Gross, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Andrii Anisov, Roman Skakun, Roman Skakun > Fine with me. I've queued up the modified patch. Good. Thanks! > > On Sat, Jul 17, 2021 at 11:39:21AM +0300, Roman Skakun wrote: > > > We can merge this patch and create a new one for > > > xen_swiotlb_free_coherent() later. > > > Yeah, no worries, I didn't know that exposing dma_common_vaddr_to_page > > > was problematic. > > > > > > This patch is fine by me. > > > > Good. I agree too. Waiting for Christoph. > > Fine with me. I've queued up the modified patch. -- Best Regards, Roman. ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-06-22 13:34 ` Roman Skakun (?) @ 2021-07-14 1:23 ` Stefano Stabellini -1 siblings, 0 replies; 63+ messages in thread From: Stefano Stabellini @ 2021-07-14 1:23 UTC (permalink / raw) To: Roman Skakun Cc: Konrad Rzeszutek Wilk, Boris Ostrovsky, Juergen Gross, Stefano Stabellini, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov On Tue, 22 Jun 2021, Roman Skakun wrote: > This commit fixes an incorrect conversion from > cpu_addr to a page address in cases when we get a virtual > address that was allocated in the vmalloc range. > As a result, virt_to_page() cannot convert this address > properly and returns an incorrect page address. > > We need to detect such cases and obtain the page address using > vmalloc_to_page() instead. > > Signed-off-by: Roman Skakun <roman_skakun@epam.com> > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> > --- > Hey! > Thanks for the suggestions, Christoph! > I updated the patch according to your advice. > But I'm surprised that nobody caught this problem > in the common code before. It looks a bit strange to me. > > > kernel/dma/ops_helpers.c | 12 ++++++++++-- > 1 file changed, 10 insertions(+), 2 deletions(-) > > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c > index 910ae69cae77..782728d8a393 100644 > --- a/kernel/dma/ops_helpers.c > +++ b/kernel/dma/ops_helpers.c > @@ -5,6 +5,14 @@ > */ > #include <linux/dma-map-ops.h> > > +static struct page *cpu_addr_to_page(void *cpu_addr) > +{ > + if (is_vmalloc_addr(cpu_addr)) > + return vmalloc_to_page(cpu_addr); > + else > + return virt_to_page(cpu_addr); > +} We have the same thing in xen_swiotlb_free_coherent. Can we find a way to call cpu_addr_to_page() from xen_swiotlb_free_coherent? Maybe we can make cpu_addr_to_page non-static? No problem if it is too much trouble. 
> /* > * Create scatter-list for the already allocated DMA buffer. > */ > @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, > void *cpu_addr, dma_addr_t dma_addr, size_t size, > unsigned long attrs) > { > - struct page *page = virt_to_page(cpu_addr); > + struct page *page = cpu_addr_to_page(cpu_addr); > int ret; > > ret = sg_alloc_table(sgt, 1, GFP_KERNEL); > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > return -ENXIO; > > return remap_pfn_range(vma, vma->vm_start, > - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, > + page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff, > user_count << PAGE_SHIFT, vma->vm_page_prot); > #else > return -ENXIO; > -- > 2.25.1 > ^ permalink raw reply [flat|nested] 63+ messages in thread
* Re: [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses 2021-07-14 1:23 ` Stefano Stabellini (?) @ 2021-07-15 7:31 ` Roman Skakun -1 siblings, 0 replies; 63+ messages in thread From: Roman Skakun @ 2021-07-15 7:31 UTC (permalink / raw) To: Stefano Stabellini Cc: Konrad Rzeszutek Wilk, Boris Ostrovsky, Juergen Gross, xen-devel, iommu, linux-kernel, Oleksandr Tyshchenko, Oleksandr Andrushchenko, Volodymyr Babchuk, Roman Skakun, Andrii Anisov Hi, Stefano! > We have the same thing in xen_swiotlb_free_coherent. Can we find a way > to call cpu_addr_to_page() from xen_swiotlb_free_coherent? > Maybe we can make cpu_addr_to_page non-static? Yes, right. If we want to reuse this helper instead of duplicating the same code block in xen_swiotlb_free_coherent(), we need to make cpu_addr_to_page() global and add a declaration for it in include/linux/dma-map-ops.h. What do you think? Cheers! On Wed, 14 Jul 2021 at 04:23, Stefano Stabellini <sstabellini@kernel.org> wrote: > > On Tue, 22 Jun 2021, Roman Skakun wrote: > > This commit fixes an incorrect conversion from > > cpu_addr to a page address in cases when we get a virtual > > address that was allocated in the vmalloc range. > > As a result, virt_to_page() cannot convert this address > > properly and returns an incorrect page address. > > > > We need to detect such cases and obtain the page address using > > vmalloc_to_page() instead. > > > > Signed-off-by: Roman Skakun <roman_skakun@epam.com> > > Reviewed-by: Andrii Anisov <andrii_anisov@epam.com> > > --- > > Hey! > > Thanks for the suggestions, Christoph! > > I updated the patch according to your advice. > > But I'm surprised that nobody caught this problem > > in the common code before. It looks a bit strange as for me. 
> > > > > > kernel/dma/ops_helpers.c | 12 ++++++++++-- > > 1 file changed, 10 insertions(+), 2 deletions(-) > > > > diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c > > index 910ae69cae77..782728d8a393 100644 > > --- a/kernel/dma/ops_helpers.c > > +++ b/kernel/dma/ops_helpers.c > > @@ -5,6 +5,14 @@ > > */ > > #include <linux/dma-map-ops.h> > > > > +static struct page *cpu_addr_to_page(void *cpu_addr) > > +{ > > + if (is_vmalloc_addr(cpu_addr)) > > + return vmalloc_to_page(cpu_addr); > > + else > > + return virt_to_page(cpu_addr); > > +} > > We have the same thing in xen_swiotlb_free_coherent. Can we find a way > to call cpu_addr_to_page() from xen_swiotlb_free_coherent? Maybe we can > make cpu_addr_to_page non-static? No problem if it is too much trouble. > > > > /* > > * Create scatter-list for the already allocated DMA buffer. > > */ > > @@ -12,7 +20,7 @@ int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt, > > void *cpu_addr, dma_addr_t dma_addr, size_t size, > > unsigned long attrs) > > { > > - struct page *page = virt_to_page(cpu_addr); > > + struct page *page = cpu_addr_to_page(cpu_addr); > > int ret; > > > > ret = sg_alloc_table(sgt, 1, GFP_KERNEL); > > @@ -43,7 +51,7 @@ int dma_common_mmap(struct device *dev, struct vm_area_struct *vma, > > return -ENXIO; > > > > return remap_pfn_range(vma, vma->vm_start, > > - page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff, > > + page_to_pfn(cpu_addr_to_page(cpu_addr)) + vma->vm_pgoff, > > user_count << PAGE_SHIFT, vma->vm_page_prot); > > #else > > return -ENXIO; > > -- > > 2.25.1 > > -- Best Regards, Roman. ^ permalink raw reply [flat|nested] 63+ messages in thread
end of thread, other threads:[~2021-07-21 18:39 UTC | newest]

Thread overview: 63+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-11  9:55 [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops Roman Skakun
2021-06-11 15:19 ` Boris Ostrovsky
2021-06-14 12:47 ` Roman Skakun
2021-06-14 15:45 ` Boris Ostrovsky
2021-06-16 11:45 ` Roman Skakun
2021-06-16 11:42 ` [PATCH 1/2] Revert "swiotlb-xen: remove xen_swiotlb_dma_mmap and xen_swiotlb_dma_get_sgtable" Roman Skakun
2021-06-16 11:42 ` [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops Roman Skakun
2021-06-16 14:12 ` Boris Ostrovsky
2021-06-16 14:21 ` Christoph Hellwig
2021-06-16 15:33 ` Boris Ostrovsky
2021-06-16 15:35 ` Christoph Hellwig
2021-06-16 15:39 ` Boris Ostrovsky
2021-06-16 15:44 ` Christoph Hellwig
2021-06-22 13:34 ` [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses Roman Skakun
2021-07-14  0:15 ` Konrad Rzeszutek Wilk
2021-07-15  7:39 ` Roman Skakun
2021-07-15 16:58 ` Boris Ostrovsky
2021-07-15 17:00 ` Christoph Hellwig
2021-07-16  8:39 ` Roman Skakun
2021-07-16  9:35 ` Christoph Hellwig
2021-07-16 12:53 ` Roman Skakun
2021-07-16 15:29 ` Stefano Stabellini
2021-07-17  8:39 ` Roman Skakun
2021-07-19  9:22 ` Christoph Hellwig
2021-07-21 18:39 ` Roman Skakun
2021-07-14  1:23 ` Stefano Stabellini
2021-07-15  7:31 ` Roman Skakun