From: Roman Skakun <rm.skakun@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Roman Skakun <rm.skakun@gmail.com>,
	Roman Skakun <roman_skakun@epam.com>,
	Andrii Anisov <andrii_anisov@epam.com>
Subject: [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops
Date: Wed, 16 Jun 2021 14:42:05 +0300
Message-ID: <20210616114205.38902-2-roman_skakun@epam.com>
In-Reply-To: <20210616114205.38902-1-roman_skakun@epam.com>

This commit fixes an incorrect conversion from cpu_addr to a page
address for buffers allocated through xen_swiotlb_alloc_coherent():
the returned virtual address may lie in the vmalloc range, and
virt_to_page() cannot translate such an address, so it yields a
bogus page pointer.

Detect this case and obtain the page address with
vmalloc_to_page() instead.
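
As an illustration, a minimal sketch of the failure mode (variable
names are for the example only):

	void *cpu_addr = xen_swiotlb_alloc_coherent(hwdev, size,
						    &dma_handle,
						    GFP_KERNEL, 0);
	/* For buffers backed by the vmalloc range, virt_to_page()
	 * does plain linear-map arithmetic and yields a bogus
	 * struct page pointer... */
	struct page *bad = virt_to_page(cpu_addr);
	/* ...while vmalloc_to_page() walks the page tables and
	 * returns the real backing page. */
	struct page *good = is_vmalloc_addr(cpu_addr) ?
			vmalloc_to_page(cpu_addr) : virt_to_page(cpu_addr);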

The reference implementations of mmap() and get_sgtable() were
copied from kernel/dma/ops_helpers.c and modified to perform the
detection described above.
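
For example, dma_common_mmap() resolves the PFN with a bare
virt_to_page(); roughly (abbreviated sketch of the helper in
kernel/dma/ops_helpers.c):

	return remap_pfn_range(vma, vma->vm_start,
			page_to_pfn(virt_to_page(cpu_addr)) + vma->vm_pgoff,
			user_count << PAGE_SHIFT, vma->vm_page_prot);

The hunks below replace that lookup with the vmalloc-aware helper.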

To keep the code simple, a new cpu_addr_to_page() helper was added.

Signed-off-by: Roman Skakun <roman_skakun@epam.com>
Reviewed-by: Andrii Anisov <andrii_anisov@epam.com>
---
 drivers/xen/swiotlb-xen.c | 42 +++++++++++++++++++++++++++++++--------
 1 file changed, 34 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 90bc5fc321bc..9331a8500547 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -118,6 +118,14 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	return 0;
 }
 
+static struct page *cpu_addr_to_page(void *cpu_addr)
+{
+	if (is_vmalloc_addr(cpu_addr))
+		return vmalloc_to_page(cpu_addr);
+	else
+		return virt_to_page(cpu_addr);
+}
+
 static int
 xen_swiotlb_fixup(void *buf, size_t size, unsigned long nslabs)
 {
@@ -337,7 +345,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	int order = get_order(size);
 	phys_addr_t phys;
 	u64 dma_mask = DMA_BIT_MASK(32);
-	struct page *page;
+	struct page *page = cpu_addr_to_page(vaddr);
 
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
@@ -349,11 +357,6 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
-	if (is_vmalloc_addr(vaddr))
-		page = vmalloc_to_page(vaddr);
-	else
-		page = virt_to_page(vaddr);
-
 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
 		     range_straddles_page_boundary(phys, size)) &&
 	    TestClearPageXenRemapped(page))
@@ -573,7 +576,23 @@ xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
 		     void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		     unsigned long attrs)
 {
-	return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
+	unsigned long user_count = vma_pages(vma);
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+	unsigned long off = vma->vm_pgoff;
+	struct page *page = cpu_addr_to_page(cpu_addr);
+	int ret;
+
+	vma->vm_page_prot = dma_pgprot(dev, vma->vm_page_prot, attrs);
+
+	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
+		return ret;
+
+	if (off >= count || user_count > count - off)
+		return -ENXIO;
+
+	return remap_pfn_range(vma, vma->vm_start,
+			page_to_pfn(page) + vma->vm_pgoff,
+			user_count << PAGE_SHIFT, vma->vm_page_prot);
 }
 
 /*
@@ -585,7 +604,14 @@ xen_swiotlb_get_sgtable(struct device *dev, struct sg_table *sgt,
 			void *cpu_addr, dma_addr_t handle, size_t size,
 			unsigned long attrs)
 {
-	return dma_common_get_sgtable(dev, sgt, cpu_addr, handle, size, attrs);
+	struct page *page = cpu_addr_to_page(cpu_addr);
+	int ret;
+
+	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
+	if (!ret)
+		sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
+
+	return ret;
 }
 
 const struct dma_map_ops xen_swiotlb_dma_ops = {
-- 
2.25.1
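
As context for reviewers, these ops are reached through the generic
DMA API; a hedged, minimal sketch of a driver-side caller (the
demo_dev structure and all function names are hypothetical):

	#include <linux/dma-mapping.h>
	#include <linux/fs.h>
	#include <linux/mm.h>

	struct demo_dev {
		struct device *dev;	/* device using xen_swiotlb_dma_ops */
		void *cpu_addr;		/* from dma_alloc_coherent() */
		dma_addr_t dma_handle;
		size_t size;
	};

	/* Exports the coherent buffer to userspace; dispatches to
	 * xen_swiotlb_dma_mmap() when the device is under Xen swiotlb. */
	static int demo_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct demo_dev *d = file->private_data;

		return dma_mmap_coherent(d->dev, vma, d->cpu_addr,
					 d->dma_handle, d->size);
	}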


Thread overview: 63+ messages

2021-06-11  9:55 [PATCH] swiotlb-xen: override common mmap and get_sgtable dma ops Roman Skakun
2021-06-11 15:19 ` Boris Ostrovsky
2021-06-14 12:47   ` Roman Skakun
2021-06-14 15:45     ` Boris Ostrovsky
2021-06-16 11:45       ` Roman Skakun
2021-06-16 11:42   ` [PATCH 1/2] Revert "swiotlb-xen: remove xen_swiotlb_dma_mmap and xen_swiotlb_dma_get_sgtable" Roman Skakun
2021-06-16 11:42     ` [PATCH 2/2] swiotlb-xen: override common mmap and get_sgtable dma ops Roman Skakun [this message]
2021-06-16 14:12       ` Boris Ostrovsky
2021-06-16 14:21         ` Christoph Hellwig
2021-06-16 15:33           ` Boris Ostrovsky
2021-06-16 15:35             ` Christoph Hellwig
2021-06-16 15:39               ` Boris Ostrovsky
2021-06-16 15:44                 ` Christoph Hellwig
2021-06-22 13:34                   ` [PATCH v2] dma-mapping: use vmalloc_to_page for vmalloc addresses Roman Skakun
2021-07-14  0:15                     ` Konrad Rzeszutek Wilk
2021-07-15  7:39                       ` Roman Skakun
2021-07-15 16:58                         ` Boris Ostrovsky
2021-07-15 17:00                           ` Christoph Hellwig
2021-07-16  8:39                             ` Roman Skakun
2021-07-16  9:35                               ` Christoph Hellwig
2021-07-16 12:53                                 ` Roman Skakun
2021-07-16 15:29                                   ` Stefano Stabellini
2021-07-17  8:39                                     ` Roman Skakun
2021-07-19  9:22                                       ` Christoph Hellwig
2021-07-21 18:39                                         ` Roman Skakun
2021-07-14  1:23                     ` Stefano Stabellini
2021-07-15  7:31                       ` Roman Skakun
