linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/6] vme: DMA improvements
@ 2015-05-18 18:56 Dmitry Kalinkin
  2015-05-18 18:56 ` [PATCH 1/6] Documentation: mention vme_master_mmap() in VME API Dmitry Kalinkin
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Dmitry Kalinkin @ 2015-05-18 18:56 UTC (permalink / raw)
  To: linux-kernel, devel
  Cc: Martyn Welch, Manohar Vanga, Greg Kroah-Hartman, Igor Alekseev,
	Dmitry Kalinkin

The first item in this submission documents the previously introduced
vme_master_mmap() call. It is followed by three fixes for the tsi148
driver's DMA. One bug rendered it impossible to use DMA lists with more
than one item. Another concerned the place where dma_map_single was
called on the first DMA descriptor in the DMA list. The last bug caused
a DMA transfer not to stop when interrupted by a signal, which is a
possible DoS attack vector. I also made an attempt to fix the same
issue in the ca91cx42 driver. I don't have access to this hardware to
test, so that fix is based only on my understanding of the datasheet
(I checked the ca91c042 errata as well).

A new /sys/bus/vme/dma0 device, with a new ioctl for performing DMA
transfers, was introduced in the vme_user driver. Its logic is similar
to that found in comparable existing drivers.

One question I had while implementing this feature was whether we
should keep the vme_dma_attr objects around until the vme_dma_list_exec()
call. The API doesn't specify this, and the existing vme bridge drivers
copy all information from the attributes during vme_dma_list_add(), so
for simplicity this implementation frees the vme_dma_attr's before
vme_dma_list_exec() is done.

A simple test against an AVM16 board shows speeds of around 45 MiB/s
for A32/D32 reads with both BLT and MBLT (MBLT being slightly faster).


Dmitry Kalinkin (6):
  Documentation: mention vme_master_mmap() in VME API
  vme: tsi148: fix DMA lists longer than one item
  vme: tsi148: fix first DMA item mapping
  vme: stop DMA transfer on interruption
  staging: vme_user: refactor llseek to switch(){}
  staging: vme_user: provide DMA functionality

 Documentation/vme_api.txt              |   6 ++
 drivers/staging/vme/devices/vme_user.c | 188 +++++++++++++++++++++++++++++++--
 drivers/staging/vme/devices/vme_user.h |  11 ++
 drivers/vme/bridges/vme_ca91cx42.c     |  15 ++-
 drivers/vme/bridges/vme_tsi148.c       |  42 +++++---
 5 files changed, 237 insertions(+), 25 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/6] Documentation: mention vme_master_mmap() in VME API
  2015-05-18 18:56 [PATCH 0/6] vme: DMA improvements Dmitry Kalinkin
@ 2015-05-18 18:56 ` Dmitry Kalinkin
  2015-05-18 18:56 ` [PATCH 2/6] vme: tsi148: fix DMA lists longer than one item Dmitry Kalinkin
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Dmitry Kalinkin @ 2015-05-18 18:56 UTC (permalink / raw)
  To: linux-kernel, devel
  Cc: Martyn Welch, Manohar Vanga, Greg Kroah-Hartman, Igor Alekseev,
	Dmitry Kalinkin

Signed-off-by: Dmitry Kalinkin <dmitry.kalinkin@gmail.com>
Cc: Igor Alekseev <igor.alekseev@itep.ru>
---
 Documentation/vme_api.txt | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/Documentation/vme_api.txt b/Documentation/vme_api.txt
index ffe6e22..ca5b827 100644
--- a/Documentation/vme_api.txt
+++ b/Documentation/vme_api.txt
@@ -171,6 +171,12 @@ This functions by reading the offset, applying the mask. If the bits selected in
 the mask match with the values of the corresponding bits in the compare field,
 the value of swap is written the specified offset.
 
+Parts of a VME window can be mapped into user space memory using the following
+function:
+
+	int vme_master_mmap(struct vme_resource *resource,
+		struct vm_area_struct *vma)
+
 
 Slave windows
 =============
-- 
1.8.3.1



* [PATCH 2/6] vme: tsi148: fix DMA lists longer than one item
  2015-05-18 18:56 [PATCH 0/6] vme: DMA improvements Dmitry Kalinkin
  2015-05-18 18:56 ` [PATCH 1/6] Documentation: mention vme_master_mmap() in VME API Dmitry Kalinkin
@ 2015-05-18 18:56 ` Dmitry Kalinkin
  2015-05-18 18:56 ` [PATCH 3/6] vme: tsi148: fix first DMA item mapping Dmitry Kalinkin
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Dmitry Kalinkin @ 2015-05-18 18:56 UTC (permalink / raw)
  To: linux-kernel, devel
  Cc: Martyn Welch, Manohar Vanga, Greg Kroah-Hartman, Igor Alekseev,
	Dmitry Kalinkin

DMA lists on the tsi148 weren't processed past the first item because
of broken logic. This regression was introduced in:

ac1a4f2caf7b071 "Staging: VME: Ensure TSI148 link list descriptors..."

Signed-off-by: Dmitry Kalinkin <dmitry.kalinkin@gmail.com>
Cc: Igor Alekseev <igor.alekseev@itep.ru>
---
 drivers/vme/bridges/vme_tsi148.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/vme/bridges/vme_tsi148.c b/drivers/vme/bridges/vme_tsi148.c
index 895c2a3..1be4136 100644
--- a/drivers/vme/bridges/vme_tsi148.c
+++ b/drivers/vme/bridges/vme_tsi148.c
@@ -1844,8 +1844,8 @@ static int tsi148_dma_list_add(struct vme_dma_list *list,
 
 		reg_split((unsigned long long)entry->dma_handle, &address_high,
 			&address_low);
-		entry->descriptor.dnlau = cpu_to_be32(address_high);
-		entry->descriptor.dnlal = cpu_to_be32(address_low);
+		prev->descriptor.dnlau = cpu_to_be32(address_high);
+		prev->descriptor.dnlal = cpu_to_be32(address_low);
 
 	}
 
-- 
1.8.3.1



* [PATCH 3/6] vme: tsi148: fix first DMA item mapping
  2015-05-18 18:56 [PATCH 0/6] vme: DMA improvements Dmitry Kalinkin
  2015-05-18 18:56 ` [PATCH 1/6] Documentation: mention vme_master_mmap() in VME API Dmitry Kalinkin
  2015-05-18 18:56 ` [PATCH 2/6] vme: tsi148: fix DMA lists longer than one item Dmitry Kalinkin
@ 2015-05-18 18:56 ` Dmitry Kalinkin
  2015-05-18 18:56 ` [PATCH 4/6] vme: stop DMA transfer on interruption Dmitry Kalinkin
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Dmitry Kalinkin @ 2015-05-18 18:56 UTC (permalink / raw)
  To: linux-kernel, devel
  Cc: Martyn Welch, Manohar Vanga, Greg Kroah-Hartman, Igor Alekseev,
	Dmitry Kalinkin

This moves the DMA mapping of the first list element into vme_list_add,
the same place where the other elements are mapped. This prevents an
extra mapping, or over-unmapping, when vme_list_exec is called more or
fewer than one time, respectively.

This also adds a dma_mapping_error check.

Signed-off-by: Dmitry Kalinkin <dmitry.kalinkin@gmail.com>
Cc: Igor Alekseev <igor.alekseev@itep.ru>
---
 drivers/vme/bridges/vme_tsi148.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/drivers/vme/bridges/vme_tsi148.c b/drivers/vme/bridges/vme_tsi148.c
index 1be4136..03b0755 100644
--- a/drivers/vme/bridges/vme_tsi148.c
+++ b/drivers/vme/bridges/vme_tsi148.c
@@ -1833,17 +1833,21 @@ static int tsi148_dma_list_add(struct vme_dma_list *list,
 	/* Add to list */
 	list_add_tail(&entry->list, &list->entries);
 
+	entry->dma_handle = dma_map_single(tsi148_bridge->parent,
+		&entry->descriptor,
+		sizeof(struct tsi148_dma_descriptor), DMA_TO_DEVICE);
+	if (dma_mapping_error(tsi148_bridge->parent, entry->dma_handle)) {
+		dev_err(tsi148_bridge->parent, "DMA mapping error\n");
+		retval = -EINVAL;
+		goto err_dma;
+	}
+
 	/* Fill out previous descriptors "Next Address" */
 	if (entry->list.prev != &list->entries) {
-		prev = list_entry(entry->list.prev, struct tsi148_dma_entry,
-			list);
-		/* We need the bus address for the pointer */
-		entry->dma_handle = dma_map_single(tsi148_bridge->parent,
-			&entry->descriptor,
-			sizeof(struct tsi148_dma_descriptor), DMA_TO_DEVICE);
-
 		reg_split((unsigned long long)entry->dma_handle, &address_high,
 			&address_low);
+		prev = list_entry(entry->list.prev, struct tsi148_dma_entry,
+			list);
 		prev->descriptor.dnlau = cpu_to_be32(address_high);
 		prev->descriptor.dnlal = cpu_to_be32(address_low);
 
@@ -1851,6 +1855,7 @@ static int tsi148_dma_list_add(struct vme_dma_list *list,
 
 	return 0;
 
+err_dma:
 err_dest:
 err_source:
 err_align:
@@ -1921,10 +1926,6 @@ static int tsi148_dma_list_exec(struct vme_dma_list *list)
 	entry = list_first_entry(&list->entries, struct tsi148_dma_entry,
 		list);
 
-	entry->dma_handle = dma_map_single(tsi148_bridge->parent,
-		&entry->descriptor,
-		sizeof(struct tsi148_dma_descriptor), DMA_TO_DEVICE);
-
 	mutex_unlock(&ctrlr->mtx);
 
 	reg_split(entry->dma_handle, &bus_addr_high, &bus_addr_low);
-- 
1.8.3.1



* [PATCH 4/6] vme: stop DMA transfer on interruption
  2015-05-18 18:56 [PATCH 0/6] vme: DMA improvements Dmitry Kalinkin
                   ` (2 preceding siblings ...)
  2015-05-18 18:56 ` [PATCH 3/6] vme: tsi148: fix first DMA item mapping Dmitry Kalinkin
@ 2015-05-18 18:56 ` Dmitry Kalinkin
  2015-05-18 18:56 ` [PATCH 5/6] staging: vme_user: refactor llseek to switch(){} Dmitry Kalinkin
  2015-05-18 18:56 ` [PATCH 6/6] staging: vme_user: provide DMA functionality Dmitry Kalinkin
  5 siblings, 0 replies; 10+ messages in thread
From: Dmitry Kalinkin @ 2015-05-18 18:56 UTC (permalink / raw)
  To: linux-kernel, devel
  Cc: Martyn Welch, Manohar Vanga, Greg Kroah-Hartman, Igor Alekseev,
	Dmitry Kalinkin

Signed-off-by: Dmitry Kalinkin <dmitry.kalinkin@gmail.com>
Cc: Igor Alekseev <igor.alekseev@itep.ru>
---
 drivers/vme/bridges/vme_ca91cx42.c | 15 +++++++++++++--
 drivers/vme/bridges/vme_tsi148.c   | 15 +++++++++++++--
 2 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/drivers/vme/bridges/vme_ca91cx42.c b/drivers/vme/bridges/vme_ca91cx42.c
index 18078ec..ffb8e91 100644
--- a/drivers/vme/bridges/vme_ca91cx42.c
+++ b/drivers/vme/bridges/vme_ca91cx42.c
@@ -1192,7 +1192,7 @@ static int ca91cx42_dma_list_exec(struct vme_dma_list *list)
 {
 	struct vme_dma_resource *ctrlr;
 	struct ca91cx42_dma_entry *entry;
-	int retval = 0;
+	int retval;
 	dma_addr_t bus_addr;
 	u32 val;
 	struct device *dev;
@@ -1245,9 +1245,19 @@ static int ca91cx42_dma_list_exec(struct vme_dma_list *list)
 
 	iowrite32(val, bridge->base + DGCS);
 
-	wait_event_interruptible(bridge->dma_queue,
+	retval = wait_event_interruptible(bridge->dma_queue,
 		ca91cx42_dma_busy(ctrlr->parent));
 
+	if (retval) {
+		val = ioread32(bridge->base + DGCS);
+		iowrite32(val | CA91CX42_DGCS_STOP_REQ, bridge->base + DGCS);
+		/* Wait for the operation to abort */
+		wait_event(bridge->dma_queue,
+			ca91cx42_dma_busy(ctrlr->parent));
+		retval = -EINTR;
+		goto exit;
+	}
+
 	/*
 	 * Read status register, this register is valid until we kick off a
 	 * new transfer.
@@ -1261,6 +1271,7 @@ static int ca91cx42_dma_list_exec(struct vme_dma_list *list)
 		val = ioread32(bridge->base + DCTL);
 	}
 
+exit:
 	/* Remove list from running list */
 	mutex_lock(&ctrlr->mtx);
 	list_del(&list->list);
diff --git a/drivers/vme/bridges/vme_tsi148.c b/drivers/vme/bridges/vme_tsi148.c
index 03b0755..f6c455e 100644
--- a/drivers/vme/bridges/vme_tsi148.c
+++ b/drivers/vme/bridges/vme_tsi148.c
@@ -1892,7 +1892,7 @@ static int tsi148_dma_busy(struct vme_bridge *tsi148_bridge, int channel)
 static int tsi148_dma_list_exec(struct vme_dma_list *list)
 {
 	struct vme_dma_resource *ctrlr;
-	int channel, retval = 0;
+	int channel, retval;
 	struct tsi148_dma_entry *entry;
 	u32 bus_addr_high, bus_addr_low;
 	u32 val, dctlreg = 0;
@@ -1942,9 +1942,19 @@ static int tsi148_dma_list_exec(struct vme_dma_list *list)
 	iowrite32be(dctlreg | TSI148_LCSR_DCTL_DGO, bridge->base +
 		TSI148_LCSR_DMA[channel] + TSI148_LCSR_OFFSET_DCTL);
 
-	wait_event_interruptible(bridge->dma_queue[channel],
+	retval = wait_event_interruptible(bridge->dma_queue[channel],
 		tsi148_dma_busy(ctrlr->parent, channel));
 
+	if (retval) {
+		iowrite32be(dctlreg | TSI148_LCSR_DCTL_ABT, bridge->base +
+			TSI148_LCSR_DMA[channel] + TSI148_LCSR_OFFSET_DCTL);
+		/* Wait for the operation to abort */
+		wait_event(bridge->dma_queue[channel],
+			tsi148_dma_busy(ctrlr->parent, channel));
+		retval = -EINTR;
+		goto exit;
+	}
+
 	/*
 	 * Read status register, this register is valid until we kick off a
 	 * new transfer.
@@ -1957,6 +1967,7 @@ static int tsi148_dma_list_exec(struct vme_dma_list *list)
 		retval = -EIO;
 	}
 
+exit:
 	/* Remove list from running list */
 	mutex_lock(&ctrlr->mtx);
 	list_del(&list->list);
-- 
1.8.3.1



* [PATCH 5/6] staging: vme_user: refactor llseek to switch(){}
  2015-05-18 18:56 [PATCH 0/6] vme: DMA improvements Dmitry Kalinkin
                   ` (3 preceding siblings ...)
  2015-05-18 18:56 ` [PATCH 4/6] vme: stop DMA transfer on interruption Dmitry Kalinkin
@ 2015-05-18 18:56 ` Dmitry Kalinkin
  2015-05-18 18:56 ` [PATCH 6/6] staging: vme_user: provide DMA functionality Dmitry Kalinkin
  5 siblings, 0 replies; 10+ messages in thread
From: Dmitry Kalinkin @ 2015-05-18 18:56 UTC (permalink / raw)
  To: linux-kernel, devel
  Cc: Martyn Welch, Manohar Vanga, Greg Kroah-Hartman, Igor Alekseev,
	Dmitry Kalinkin

This makes vme_user_llseek ignore all minors that don't have an llseek
implementation.

Signed-off-by: Dmitry Kalinkin <dmitry.kalinkin@gmail.com>
Cc: Igor Alekseev <igor.alekseev@itep.ru>
---
 drivers/staging/vme/devices/vme_user.c | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/staging/vme/devices/vme_user.c b/drivers/staging/vme/devices/vme_user.c
index 19ba749..da828f4 100644
--- a/drivers/staging/vme/devices/vme_user.c
+++ b/drivers/staging/vme/devices/vme_user.c
@@ -430,15 +430,17 @@ static loff_t vme_user_llseek(struct file *file, loff_t off, int whence)
 	size_t image_size;
 	loff_t res;
 
-	if (minor == CONTROL_MINOR)
-		return -EINVAL;
-
-	mutex_lock(&image[minor].mutex);
-	image_size = vme_get_size(image[minor].resource);
-	res = fixed_size_llseek(file, off, whence, image_size);
-	mutex_unlock(&image[minor].mutex);
+	switch (type[minor]) {
+	case MASTER_MINOR:
+	case SLAVE_MINOR:
+		mutex_lock(&image[minor].mutex);
+		image_size = vme_get_size(image[minor].resource);
+		res = fixed_size_llseek(file, off, whence, image_size);
+		mutex_unlock(&image[minor].mutex);
+		return res;
+	}
 
-	return res;
+	return -EINVAL;
 }
 
 /*
-- 
1.8.3.1



* [PATCH 6/6] staging: vme_user: provide DMA functionality
  2015-05-18 18:56 [PATCH 0/6] vme: DMA improvements Dmitry Kalinkin
                   ` (4 preceding siblings ...)
  2015-05-18 18:56 ` [PATCH 5/6] staging: vme_user: refactor llseek to switch(){} Dmitry Kalinkin
@ 2015-05-18 18:56 ` Dmitry Kalinkin
  2015-05-19  9:18   ` Dan Carpenter
  5 siblings, 1 reply; 10+ messages in thread
From: Dmitry Kalinkin @ 2015-05-18 18:56 UTC (permalink / raw)
  To: linux-kernel, devel
  Cc: Martyn Welch, Manohar Vanga, Greg Kroah-Hartman, Igor Alekseev,
	Dmitry Kalinkin

This introduces a new dma device providing a single ioctl call that
exposes DMA read and write functionality to user space.

Signed-off-by: Dmitry Kalinkin <dmitry.kalinkin@gmail.com>
Cc: Igor Alekseev <igor.alekseev@itep.ru>
---
 drivers/staging/vme/devices/vme_user.c | 174 ++++++++++++++++++++++++++++++++-
 drivers/staging/vme/devices/vme_user.h |  11 +++
 2 files changed, 183 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/vme/devices/vme_user.c b/drivers/staging/vme/devices/vme_user.c
index da828f4..aca1e0b 100644
--- a/drivers/staging/vme/devices/vme_user.c
+++ b/drivers/staging/vme/devices/vme_user.c
@@ -79,13 +79,14 @@ static unsigned int bus_num;
  * We shall support 4 masters and 4 slaves with this driver.
  */
 #define VME_MAJOR	221	/* VME Major Device Number */
-#define VME_DEVS	9	/* Number of dev entries */
+#define VME_DEVS	10	/* Number of dev entries */
 
 #define MASTER_MINOR	0
 #define MASTER_MAX	3
 #define SLAVE_MINOR	4
 #define SLAVE_MAX	7
 #define CONTROL_MINOR	8
+#define DMA_MINOR	9
 
 #define PCI_BUF_SIZE  0x20000	/* Size of one slave image buffer */
 
@@ -125,7 +126,7 @@ static const int type[VME_DEVS] = {	MASTER_MINOR,	MASTER_MINOR,
 					MASTER_MINOR,	MASTER_MINOR,
 					SLAVE_MINOR,	SLAVE_MINOR,
 					SLAVE_MINOR,	SLAVE_MINOR,
-					CONTROL_MINOR
+					CONTROL_MINOR,	DMA_MINOR
 				};
 
 
@@ -443,6 +444,145 @@ static loff_t vme_user_llseek(struct file *file, loff_t off, int whence)
 	return -EINVAL;
 }
 
+static int vme_user_sg_to_dma_list(const struct vme_dma_op *dma_op,
+	struct sg_table *sgt, int sg_count, struct vme_dma_list *dma_list)
+{
+	ssize_t pos = 0;
+	struct scatterlist *sg;
+	int i, ret;
+
+	for_each_sg(sgt->sgl, sg, sg_count, i) {
+		struct vme_dma_attr *pci_attr, *vme_attr, *dest, *src;
+		dma_addr_t hw_address = sg_dma_address(sg);
+		unsigned int hw_len = sg_dma_len(sg);
+
+		vme_attr = vme_dma_vme_attribute(dma_op->vme_addr + pos,
+			dma_op->aspace, dma_op->cycle, dma_op->dwidth);
+		if (!vme_attr)
+			return -ENOMEM;
+
+		pci_attr = vme_dma_pci_attribute(hw_address);
+		if (!pci_attr) {
+			vme_dma_free_attribute(vme_attr);
+			return -ENOMEM;
+		}
+
+		if (dma_op->write) {
+			dest = vme_attr;
+			src = pci_attr;
+		} else {
+			dest = pci_attr;
+			src = vme_attr;
+		}
+
+		ret = vme_dma_list_add(dma_list, src, dest, hw_len);
+
+		/*
+		 * XXX VME API doesn't mention whether we should keep
+		 * attributes around
+		 */
+		vme_dma_free_attribute(vme_attr);
+		vme_dma_free_attribute(pci_attr);
+
+		if (ret)
+			return ret;
+
+		pos += hw_len;
+	}
+
+	WARN_ON(pos != dma_op->count);
+
+	return 0;
+}
+
+static ssize_t vme_user_dma_ioctl(unsigned int minor,
+	const struct vme_dma_op *dma_op)
+{
+	unsigned int offset = offset_in_page(dma_op->buf_vaddr);
+	unsigned long nr_pages;
+	enum dma_data_direction dir;
+	struct vme_dma_list *dma_list;
+	struct sg_table *sgt = NULL;
+	struct page **pages = NULL;
+	long got_pages;
+	int ret, sg_count;
+
+	/* Overflow check for nr_pages */
+	if (dma_op->count > U32_MAX - 2 * PAGE_SIZE)
+		return -EINVAL;
+
+	/* Prevent WARN from dma_map_sg */
+	if (dma_op->count == 0)
+		return 0;
+
+	nr_pages = (offset + dma_op->count + PAGE_SIZE - 1) >> PAGE_SHIFT;
+	dir = dma_op->write ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
+
+	pages = kmalloc_array(nr_pages, sizeof(pages[0]), GFP_KERNEL);
+	if (!pages) {
+		ret = -ENOMEM;
+		goto free;
+	}
+
+	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+	if (!sgt) {
+		ret = -ENOMEM;
+		goto free;
+	}
+
+	dma_list = vme_new_dma_list(image[minor].resource);
+	if (!dma_list) {
+		ret = -ENOMEM;
+		goto free;
+	}
+
+	got_pages = get_user_pages_fast(dma_op->buf_vaddr, nr_pages,
+		!dma_op->write, pages);
+	if (got_pages != nr_pages) {
+		pr_debug("Not all pages were pinned\n");
+		ret = (got_pages < 0) ? got_pages : -EFAULT;
+		goto release_pages;
+	}
+
+	ret = sg_alloc_table_from_pages(sgt, pages, nr_pages,
+		offset, dma_op->count, GFP_KERNEL);
+	if (ret)
+		goto release_pages;
+
+	sg_count = dma_map_sg(vme_user_bridge->dev.parent,
+		sgt->sgl, sgt->nents, dir);
+	if (!sg_count) {
+		pr_debug("DMA mapping error\n");
+		ret = -EFAULT;
+		goto free_sgt;
+	}
+
+	ret = vme_user_sg_to_dma_list(dma_op, sgt, sg_count, dma_list);
+	if (ret)
+		goto dma_unmap;
+
+	ret = vme_dma_list_exec(dma_list);
+
+dma_unmap:
+	dma_unmap_sg(vme_user_bridge->dev.parent, sgt->sgl, sgt->nents, dir);
+
+free_sgt:
+	sg_free_table(sgt);
+
+release_pages:
+	if (got_pages > 0)
+		release_pages(pages, got_pages, 0);
+
+	vme_dma_list_free(dma_list);
+
+free:
+	kfree(sgt);
+	kfree(pages);
+	if (ret)
+		return ret;
+	return dma_op->count;
+}
+
 /*
  * The ioctls provided by the old VME access method (the one at vmelinux.org)
  * are most certainly wrong as the effectively push the registers layout
@@ -459,6 +599,7 @@ static int vme_user_ioctl(struct inode *inode, struct file *file,
 	struct vme_master master;
 	struct vme_slave slave;
 	struct vme_irq_id irq_req;
+	struct vme_dma_op dma_op;
 	unsigned long copied;
 	unsigned int minor = MINOR(inode->i_rdev);
 	int retval;
@@ -569,6 +710,19 @@ static int vme_user_ioctl(struct inode *inode, struct file *file,
 			break;
 		}
 		break;
+	case DMA_MINOR:
+		switch (cmd) {
+		case VME_DO_DMA:
+			copied = copy_from_user(&dma_op, argp,
+				sizeof(dma_op));
+			if (copied != 0) {
+				pr_warn("Partial copy from userspace\n");
+				return -EFAULT;
+			}
+
+			return vme_user_dma_ioctl(minor, &dma_op);
+		}
+		break;
 	}
 
 	return -EINVAL;
@@ -842,6 +996,15 @@ static int vme_user_probe(struct vme_dev *vdev)
 		}
 	}
 
+	image[DMA_MINOR].resource = vme_dma_request(vme_user_bridge,
+		VME_DMA_VME_TO_MEM | VME_DMA_MEM_TO_VME);
+	if (image[DMA_MINOR].resource == NULL) {
+		dev_warn(&vdev->dev,
+			"Unable to allocate dma resource\n");
+		err = -ENOMEM;
+		goto err_master;
+	}
+
 	/* Create sysfs entries - on udev systems this creates the dev files */
 	vme_user_sysfs_class = class_create(THIS_MODULE, driver_name);
 	if (IS_ERR(vme_user_sysfs_class)) {
@@ -864,6 +1027,9 @@ static int vme_user_probe(struct vme_dev *vdev)
 		case SLAVE_MINOR:
 			name = "bus/vme/s%d";
 			break;
+		case DMA_MINOR:
+			name = "bus/vme/dma0";
+			break;
 		default:
 			err = -EINVAL;
 			goto err_sysfs;
@@ -888,6 +1054,8 @@ err_sysfs:
 	}
 	class_destroy(vme_user_sysfs_class);
 
+	vme_dma_free(image[DMA_MINOR].resource);
+
 	/* Ensure counter set correcty to unalloc all master windows */
 	i = MASTER_MAX + 1;
 err_master:
@@ -927,6 +1095,8 @@ static int vme_user_remove(struct vme_dev *dev)
 	}
 	class_destroy(vme_user_sysfs_class);
 
+	vme_dma_free(image[DMA_MINOR].resource);
+
 	for (i = MASTER_MINOR; i < (MASTER_MAX + 1); i++) {
 		kfree(image[i].kern_buf);
 		vme_master_free(image[i].resource);
diff --git a/drivers/staging/vme/devices/vme_user.h b/drivers/staging/vme/devices/vme_user.h
index b8cc7bc..2ae5d99 100644
--- a/drivers/staging/vme/devices/vme_user.h
+++ b/drivers/staging/vme/devices/vme_user.h
@@ -48,11 +48,22 @@ struct vme_irq_id {
 	__u8 statid;
 };
 
+struct vme_dma_op {
+	__u32 aspace;		/* Address Space */
+	__u32 cycle;		/* Cycle properties */
+	__u32 dwidth;		/* Data transfer width */
+	__u64 vme_addr;		/* Starting Address on the VMEbus */
+	__u64 buf_vaddr;	/* Pointer to userspace memory */
+	__u32 count;		/* Count of bytes to copy */
+	__u32 write;		/* Write flag */
+} __packed;
+
 #define VME_GET_SLAVE _IOR(VME_IOC_MAGIC, 1, struct vme_slave)
 #define VME_SET_SLAVE _IOW(VME_IOC_MAGIC, 2, struct vme_slave)
 #define VME_GET_MASTER _IOR(VME_IOC_MAGIC, 3, struct vme_master)
 #define VME_SET_MASTER _IOW(VME_IOC_MAGIC, 4, struct vme_master)
 #define VME_IRQ_GEN _IOW(VME_IOC_MAGIC, 5, struct vme_irq_id)
+#define VME_DO_DMA _IOW(VME_IOC_MAGIC, 7, struct vme_dma_op)
 
 #endif /* _VME_USER_H_ */
 
-- 
1.8.3.1



* Re: [PATCH 6/6] staging: vme_user: provide DMA functionality
  2015-05-18 18:56 ` [PATCH 6/6] staging: vme_user: provide DMA functionality Dmitry Kalinkin
@ 2015-05-19  9:18   ` Dan Carpenter
  2015-05-21 22:12     ` Dmitry Kalinkin
  0 siblings, 1 reply; 10+ messages in thread
From: Dan Carpenter @ 2015-05-19  9:18 UTC (permalink / raw)
  To: Dmitry Kalinkin
  Cc: linux-kernel, devel, Martyn Welch, Greg Kroah-Hartman,
	Manohar Vanga, Igor Alekseev

On Mon, May 18, 2015 at 09:56:33PM +0300, Dmitry Kalinkin wrote:
> +static int vme_user_sg_to_dma_list(const struct vme_dma_op *dma_op,
> +	struct sg_table *sgt, int sg_count, struct vme_dma_list *dma_list)
> +{
> +	ssize_t pos = 0;
> +	struct scatterlist *sg;
> +	int i, ret;
> +
> +	for_each_sg(sgt->sgl, sg, sg_count, i) {
> +		struct vme_dma_attr *pci_attr, *vme_attr, *dest, *src;
> +		dma_addr_t hw_address = sg_dma_address(sg);
> +		unsigned int hw_len = sg_dma_len(sg);
> +
> +		vme_attr = vme_dma_vme_attribute(dma_op->vme_addr + pos,
                                                 ^^^^^^^^^^^^^^^^^^^^^^

->vme_addr comes from the user and we don't seem to have done any
validation that it's correct.  This addition can overflow.  How is this
safe?  (This is not a rhetorical question, I am a newbie in this).

> +			dma_op->aspace, dma_op->cycle, dma_op->dwidth);
> +		if (!vme_attr)
> +			return -ENOMEM;
> +
> +		pci_attr = vme_dma_pci_attribute(hw_address);
> +		if (!pci_attr) {
> +			vme_dma_free_attribute(vme_attr);
> +			return -ENOMEM;
> +		}
> +
> +		if (dma_op->write) {
> +			dest = vme_attr;
> +			src = pci_attr;
> +		} else {
> +			dest = pci_attr;
> +			src = vme_attr;
> +		}
> +
> +		ret = vme_dma_list_add(dma_list, src, dest, hw_len);
> +
> +		/*
> +		 * XXX VME API doesn't mention whether we should keep
> +		 * attributes around
> +		 */
> +		vme_dma_free_attribute(vme_attr);
> +		vme_dma_free_attribute(pci_attr);
> +
> +		if (ret)
> +			return ret;
> +
> +		pos += hw_len;
> +	}
> +
> +	WARN_ON(pos != dma_op->count);
> +
> +	return 0;
> +}
> +
> +static ssize_t vme_user_dma_ioctl(unsigned int minor,
> +	const struct vme_dma_op *dma_op)
> +{
> +	unsigned int offset = offset_in_page(dma_op->buf_vaddr);
> +	unsigned long nr_pages;
> +	enum dma_data_direction dir;
> +	struct vme_dma_list *dma_list;
> +	struct sg_table *sgt = NULL;
> +	struct page **pages = NULL;
> +	long got_pages;
> +	int ret, sg_count;
> +
> +	/* Overflow check for nr_pages */
> +	if (dma_op->count > U32_MAX - 2 * PAGE_SIZE)
> +		return -EINVAL;
> +
> +	/* Prevent WARN from dma_map_sg */
> +	if (dma_op->count == 0)
> +		return 0;
> +
> +	nr_pages = (offset + dma_op->count + PAGE_SIZE - 1) >> PAGE_SHIFT;
> +	dir = dma_op->write ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
> +
> +	pages = kmalloc_array(nr_pages, sizeof(pages[0]), GFP_KERNEL);

This lets the user try to allocate huge amounts of RAM.  Is there no
reasonable maximum size we can use?

> +	if (!pages) {
> +		ret = -ENOMEM;
> +		goto free;
> +	}
> +
> +	sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
> +	if (!sgt) {
> +		ret = -ENOMEM;
> +		goto free;
> +	}
> +
> +	dma_list = vme_new_dma_list(image[minor].resource);
> +	if (!dma_list) {
> +		ret = -ENOMEM;
> +		goto free;
> +	}
> +
> +	got_pages = get_user_pages_fast(dma_op->buf_vaddr, nr_pages,
> +		!dma_op->write, pages);

This file is all indented poorly, and these patches add a bunch of new
badly indented lines, making a bad situation worse.

	got_pages = get_user_pages_fast(dma_op->buf_vaddr, nr_pages,
					!dma_op->write, pages);

You sometimes might have to use spaces to make things align correctly.

	got_pages = some_fake_name(dma_op->buf_vaddr, nr_pages,
				   !dma_op->write, pages);

[tab][tab][tab][tab][space][space][space][space]!dma_op->write, pages);

regards,
dan carpenter



* Re: [PATCH 6/6] staging: vme_user: provide DMA functionality
  2015-05-19  9:18   ` Dan Carpenter
@ 2015-05-21 22:12     ` Dmitry Kalinkin
  2015-05-22  7:59       ` Dan Carpenter
  0 siblings, 1 reply; 10+ messages in thread
From: Dmitry Kalinkin @ 2015-05-21 22:12 UTC (permalink / raw)
  To: Dan Carpenter
  Cc: linux-kernel, devel, Martyn Welch, Greg Kroah-Hartman,
	Manohar Vanga, Igor Alekseev

On Tue, May 19, 2015 at 12:18 PM, Dan Carpenter <dan.carpenter@oracle.com> wrote:
> On Mon, May 18, 2015 at 09:56:33PM +0300, Dmitry Kalinkin wrote:
>> 
>> +     for_each_sg(sgt->sgl, sg, sg_count, i) {
>> +             struct vme_dma_attr *pci_attr, *vme_attr, *dest, *src;
>> +             dma_addr_t hw_address = sg_dma_address(sg);
>> +             unsigned int hw_len = sg_dma_len(sg);
>> +
>> +             vme_attr = vme_dma_vme_attribute(dma_op->vme_addr + pos,
>                                                 ^^^^^^^^^^^^^^^^^^^^^^
> 
> ->vme_addr comes from the user and we don't seem to have done any
> validation that it's correct.  This addition can overflow.  How is this
> safe?  (This is not a rhetorical question, I am a newbie in this).
> 
This expression calculates an address on the VME bus, where we already
have full access, so there shouldn't be any security implications. Such
a transfer will most likely wrap or cause a DMA transfer error (giving
EIO).

We could add an additional check:

diff --git a/drivers/staging/vme/devices/vme_user.c b/drivers/staging/vme/devices/vme_user.c
index d8aa03b..cb0fc63 100644
--- a/drivers/staging/vme/devices/vme_user.c
+++ b/drivers/staging/vme/devices/vme_user.c
@@ -515,6 +515,11 @@ static ssize_t vme_user_dma_ioctl(unsigned int minor,
        if (dma_op->count == 0)
                return 0;
 
+       ret = vme_check_window(dma_op->aspace, dma_op->vme_addr,
+                              dma_op->count);
+       if (ret)
+               return ret;
+
        nr_pages = (offset + dma_op->count + PAGE_SIZE - 1) >> PAGE_SHIFT;
        dir = dma_op->write ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
 
diff --git a/drivers/vme/vme.c b/drivers/vme/vme.c
index 6bab2c4..50cabc3 100644
--- a/drivers/vme/vme.c
+++ b/drivers/vme/vme.c
@@ -177,7 +177,7 @@ size_t vme_get_size(struct vme_resource *resource)
 }
 EXPORT_SYMBOL(vme_get_size);
 
-static int vme_check_window(u32 aspace, unsigned long long vme_base,
+int vme_check_window(u32 aspace, unsigned long long vme_base,
        unsigned long long size)
 {
        int retval = 0;
@@ -199,10 +199,8 @@ static int vme_check_window(u32 aspace, unsigned long long vme_base,
                        retval = -EFAULT;
                break;
        case VME_A64:
-               /*
-                * Any value held in an unsigned long long can be used as the
-                * base
-                */
+               if (vme_base > VME_A64_MAX - size)
+                       retval = -EFAULT;
                break;
        case VME_CRCSR:
                if (((vme_base + size) > VME_CRCSR_MAX) ||
@@ -223,6 +221,7 @@ static int vme_check_window(u32 aspace, unsigned long long vme_base,
 
        return retval;
 }
+EXPORT_SYMBOL(vme_check_window);
 
 /*
  * Request a slave image with specific attributes, return some unique
diff --git a/include/linux/vme.h b/include/linux/vme.h
index 79242e9..cfff272 100644
--- a/include/linux/vme.h
+++ b/include/linux/vme.h
@@ -120,6 +120,8 @@ void vme_free_consistent(struct vme_resource *, size_t,  void *,
        dma_addr_t);
 
 size_t vme_get_size(struct vme_resource *);
+int vme_check_window(u32 aspace, unsigned long long vme_base,
+                     unsigned long long size);
 
 struct vme_resource *vme_slave_request(struct vme_dev *, u32, u32);
 int vme_slave_set(struct vme_resource *, int, unsigned long long,

>> +     nr_pages = (offset + dma_op->count + PAGE_SIZE - 1) >> PAGE_SHIFT;
>> +     dir = dma_op->write ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
>> +
>> +     pages = kmalloc_array(nr_pages, sizeof(pages[0]), GFP_KERNEL);
> 
> This lets the user try to allocate huge amounts of RAM.  Is there no
> reasonable maximum size we can use?
> 
We could limit the operation size to 64 MiB and do a partial read for any bigger request.

> This file is all indented poorly, and these patches add a bunch of new
> badly indented lines, making a bad situation worse.
> 
>        got_pages = get_user_pages_fast(dma_op->buf_vaddr, nr_pages,
>                                        !dma_op->write, pages);
> 
> You sometimes might have to use spaces to make things align correctly.
> 
>        got_pages = some_fake_name(dma_op->buf_vaddr, nr_pages,
>                                   !dma_op->write, pages);
> 
> [tab][tab][tab][tab][space][space][space][space]!dma_op->write, pages);
Will fix this.


Thank you.

Cheers,
Dmitry



* Re: [PATCH 6/6] staging: vme_user: provide DMA functionality
  2015-05-21 22:12     ` Dmitry Kalinkin
@ 2015-05-22  7:59       ` Dan Carpenter
  0 siblings, 0 replies; 10+ messages in thread
From: Dan Carpenter @ 2015-05-22  7:59 UTC (permalink / raw)
  To: Dmitry Kalinkin
  Cc: linux-kernel, devel, Martyn Welch, Greg Kroah-Hartman,
	Manohar Vanga, Igor Alekseev

On Fri, May 22, 2015 at 01:12:19AM +0300, Dmitry Kalinkin wrote:
> On Tue, May 19, 2015 at 12:18 PM, Dan Carpenter <dan.carpenter@oracle.com> wrote:
> > On Mon, May 18, 2015 at 09:56:33PM +0300, Dmitry Kalinkin wrote:
> >> 
> >> +     for_each_sg(sgt->sgl, sg, sg_count, i) {
> >> +             struct vme_dma_attr *pci_attr, *vme_attr, *dest, *src;
> >> +             dma_addr_t hw_address = sg_dma_address(sg);
> >> +             unsigned int hw_len = sg_dma_len(sg);
> >> +
> >> +             vme_attr = vme_dma_vme_attribute(dma_op->vme_addr + pos,
> >                                                 ^^^^^^^^^^^^^^^^^^^^^^
> > 
> > ->vme_addr comes from the user and we don't seem to have done any
> > validation that it's correct.  This addition can overflow.  How is this
> > safe?  (This is not a rhetorical question, I am a newbie in this).
> > 
> This expression calculates address on the VME bus, where we already have
> full access. There shouldn't have security implications. Such transfer will
> most likely wrap or cause DMA transfer error (gives EIO).

Ahh...   Thanks. Again, I was just asking because I'm a bit of a newbie
here, so there wasn't really a need to add the other check just to make
me happy.  But I do like the new check as well.

regards,
dan carpenter



end of thread, other threads:[~2015-05-22  7:59 UTC | newest]

Thread overview: 10+ messages
2015-05-18 18:56 [PATCH 0/6] vme: DMA improvements Dmitry Kalinkin
2015-05-18 18:56 ` [PATCH 1/6] Documentation: mention vme_master_mmap() in VME API Dmitry Kalinkin
2015-05-18 18:56 ` [PATCH 2/6] vme: tsi148: fix DMA lists longer than one item Dmitry Kalinkin
2015-05-18 18:56 ` [PATCH 3/6] vme: tsi148: fix first DMA item mapping Dmitry Kalinkin
2015-05-18 18:56 ` [PATCH 4/6] vme: stop DMA transfer on interruption Dmitry Kalinkin
2015-05-18 18:56 ` [PATCH 5/6] staging: vme_user: refactor llseek to switch(){} Dmitry Kalinkin
2015-05-18 18:56 ` [PATCH 6/6] staging: vme_user: provide DMA functionality Dmitry Kalinkin
2015-05-19  9:18   ` Dan Carpenter
2015-05-21 22:12     ` Dmitry Kalinkin
2015-05-22  7:59       ` Dan Carpenter
