* [PATCH v2 0/7] mtd: nand: sunxi: add support for DMA operations
From: Boris Brezillon @ 2016-03-30 15:39 UTC
  To: David Woodhouse, Brian Norris, linux-mtd, Andrew Morton, Dave Gordon
  Cc: Mark Brown, linux-spi, linux-arm-kernel, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi, Vinod Koul, Dan Williams, dmaengine,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree, Boris Brezillon, Richard Weinberger

Hello,

This patch series adds support for DMA-assisted operations to the
sunxi_nand driver.

The first 3 patches are reworks of the existing driver preparing for
DMA-based ->read/write_page() operations. They mainly re-arrange
existing functions and move some code into dedicated helpers so it can
be reused when adding the read/write_page() implementation.

Patch 4 is an attempt to generalize logic that is duplicated in a lot
of places. It provides a generic way to create an SG table from a
buffer (passed by virtual address) and its length.
This generic implementation tries to take all relevant constraints into
account, like:
- vmalloc'ed buffers
- alignment requirements/preferences
- maximum DMA transfer length

I may have missed other things (is there a need for a minimum DMA
transfer constraint?), so don't hesitate to point out problems or
missing elements in this implementation.
Note that other subsystems doing the same kind of thing (like SPI or
V4L) could use this implementation, which is why I've put the SPI and
V4L maintainers in Cc.

Patch 5 provides functions to map/unmap buffers for DMA operations at
the MTD level. This will hopefully limit the number of open-coded
implementations we're currently seeing in a lot of NAND drivers.
Of course, it makes use of sg_alloc_table_from_buf(), introduced in
patch 4.
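
The helper names themselves are only introduced in patch 5 (not quoted
in this cover letter); the sequence they wrap is essentially the
standard one sketched below (error handling trimmed, assuming dev, dir
and cons are already set up):

	struct sg_table sgt;
	int ret;

	ret = sg_alloc_table_from_buf(&sgt, buf, len, &cons, GFP_KERNEL);
	if (ret)
		return ret;

	if (!dma_map_sg(dev, sgt.sgl, sgt.nents, dir)) {
		sg_free_table(&sgt);
		return -ENOMEM;
	}

	/* launch the DMA transfer, wait for completion */

	dma_unmap_sg(dev, sgt.sgl, sgt.nents, dir);
	sg_free_table(&sgt);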

Patches 6 and 7 patch the sunxi NAND driver and its DT binding doc to
add DMA support.

I'm particularly interested in getting feedback on patches 4 and 5.
Is there a reason nobody ever tried to create such generic functions
(at the scatterlist and MTD levels), and if there is, could you detail
it?

Thanks,

Boris

Side note: the patches touching the sunxi NAND driver depend on this
series [1].

[1] https://lkml.org/lkml/2016/3/7/444

Changes since v1:
- reworked sg_alloc_table_from_buf() to avoid splitting contiguous
  vmalloc'ed areas
- fixed a bug in read_dma()
- fixed dma_direction flag in write_dma()

Boris Brezillon (7):
  mtd: nand: sunxi: move some ECC related operations to their own
    functions
  mtd: nand: sunxi: make OOB retrieval optional
  mtd: nand: sunxi: make cur_off parameter optional in extra oob helpers
  scatterlist: add sg_alloc_table_from_buf() helper
  mtd: provide helper to prepare buffers for DMA operations
  mtd: nand: sunxi: add support for DMA assisted operations
  mtd: nand: sunxi: update DT bindings

 .../devicetree/bindings/mtd/sunxi-nand.txt         |   4 +
 drivers/mtd/mtdcore.c                              |  66 +++
 drivers/mtd/nand/sunxi_nand.c                      | 505 ++++++++++++++++++---
 include/linux/mtd/mtd.h                            |  25 +
 include/linux/scatterlist.h                        |  24 +
 lib/scatterlist.c                                  | 161 +++++++
 6 files changed, 720 insertions(+), 65 deletions(-)

-- 
2.5.0


* [PATCH v2 1/7] mtd: nand: sunxi: move some ECC related operations to their own functions
From: Boris Brezillon @ 2016-03-30 15:39 UTC
  To: David Woodhouse, Brian Norris, linux-mtd, Andrew Morton, Dave Gordon
  Cc: Mark Brown, linux-spi, linux-arm-kernel, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi, Vinod Koul, Dan Williams, dmaengine,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree, Boris Brezillon, Richard Weinberger

In order to support DMA operations in a clean way, we need to extract
some of the logic coded in sunxi_nfc_hw_ecc_read/write_page() into
dedicated functions.
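
After the extraction, the correction and stats handling in the read
path boils down to the following (condensed from the diff below, not a
complete excerpt):

	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob, 0,
				       readl(nfc->regs + NFC_REG_ECC_ST),
				       &erased);
	if (erased)
		return 1;

	/* uncorrectable-error / bitflips-in-erased-page handling ... */

	sunxi_nfc_hw_ecc_update_stats(mtd, max_bitflips, ret);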

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/nand/sunxi_nand.c | 163 ++++++++++++++++++++++++++++--------------
 1 file changed, 108 insertions(+), 55 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index a71905c..f6ea0fb 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -776,6 +776,92 @@ static inline void sunxi_nfc_user_data_to_buf(u32 user_data, u8 *buf)
 	buf[3] = user_data >> 24;
 }
 
+static inline u32 sunxi_nfc_buf_to_user_data(const u8 *buf)
+{
+	return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
+}
+
+static void sunxi_nfc_hw_ecc_get_prot_oob_bytes(struct mtd_info *mtd, u8 *oob,
+						int step, bool bbm, int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+
+	sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(step)),
+				   oob);
+
+	/* De-randomize the Bad Block Marker. */
+	if (bbm && (nand->options & NAND_NEED_SCRAMBLING))
+		sunxi_nfc_randomize_bbm(mtd, page, oob);
+}
+
+static void sunxi_nfc_hw_ecc_set_prot_oob_bytes(struct mtd_info *mtd,
+						const u8 *oob, int step,
+						bool bbm, int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	u8 user_data[4];
+
+	/* Randomize the Bad Block Marker. */
+	if (bbm && (nand->options & NAND_NEED_SCRAMBLING)) {
+		memcpy(user_data, oob, sizeof(user_data));
+		sunxi_nfc_randomize_bbm(mtd, page, user_data);
+		oob = user_data;
+	}
+
+	writel(sunxi_nfc_buf_to_user_data(oob),
+	       nfc->regs + NFC_REG_USER_DATA(step));
+}
+
+static void sunxi_nfc_hw_ecc_update_stats(struct mtd_info *mtd,
+					  unsigned int *max_bitflips, int ret)
+{
+	if (ret < 0) {
+		mtd->ecc_stats.failed++;
+	} else {
+		mtd->ecc_stats.corrected += ret;
+		*max_bitflips = max_t(unsigned int, *max_bitflips, ret);
+	}
+}
+
+static int sunxi_nfc_hw_ecc_correct(struct mtd_info *mtd, u8 *data, u8 *oob,
+				    int step, u32 status, bool *erased)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	u32 tmp;
+
+	*erased = false;
+
+	if (status & NFC_ECC_ERR(step))
+		return -EBADMSG;
+
+	if (status & NFC_ECC_PAT_FOUND(step)) {
+		u8 pattern;
+
+		if (unlikely(!(readl(nfc->regs + NFC_REG_PAT_ID) & 0x1))) {
+			pattern = 0x0;
+		} else {
+			pattern = 0xff;
+			*erased = true;
+		}
+
+		if (data)
+			memset(data, pattern, ecc->size);
+
+		if (oob)
+			memset(oob, pattern, ecc->bytes + 4);
+
+		return 0;
+	}
+
+	tmp = readl(nfc->regs + NFC_REG_ECC_ERR_CNT(step));
+
+	return NFC_ECC_ERR_CNT(step, tmp);
+}
+
 static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 				       u8 *data, int data_off,
 				       u8 *oob, int oob_off,
@@ -787,7 +873,7 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
 	struct nand_ecc_ctrl *ecc = &nand->ecc;
 	int raw_mode = 0;
-	u32 status;
+	bool erased;
 	int ret;
 
 	if (*cur_off != data_off)
@@ -813,27 +899,13 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 
 	*cur_off = oob_off + ecc->bytes + 4;
 
-	status = readl(nfc->regs + NFC_REG_ECC_ST);
-	if (status & NFC_ECC_PAT_FOUND(0)) {
-		u8 pattern = 0xff;
-
-		if (unlikely(!(readl(nfc->regs + NFC_REG_PAT_ID) & 0x1)))
-			pattern = 0x0;
-
-		memset(data, pattern, ecc->size);
-		memset(oob, pattern, ecc->bytes + 4);
-
+	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob, 0,
+				       readl(nfc->regs + NFC_REG_ECC_ST),
+				       &erased);
+	if (erased)
 		return 1;
-	}
-
-	ret = NFC_ECC_ERR_CNT(0, readl(nfc->regs + NFC_REG_ECC_ERR_CNT(0)));
-
-	memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-	nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-	sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4, true, page);
-
-	if (status & NFC_ECC_ERR(0)) {
+	if (ret < 0) {
 		/*
 		 * Re-read the data with the randomizer disabled to identify
 		 * bitflips in erased pages.
@@ -841,35 +913,32 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 		if (nand->options & NAND_NEED_SCRAMBLING) {
 			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, data_off, -1);
 			nand->read_buf(mtd, data, ecc->size);
-			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-			nand->read_buf(mtd, oob, ecc->bytes + 4);
+		} else {
+			memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE,
+				      ecc->size);
 		}
 
+		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+		nand->read_buf(mtd, oob, ecc->bytes + 4);
+
 		ret = nand_check_erased_ecc_chunk(data,	ecc->size,
 						  oob, ecc->bytes + 4,
 						  NULL, 0, ecc->strength);
 		if (ret >= 0)
 			raw_mode = 1;
 	} else {
-		/*
-		 * The engine protects 4 bytes of OOB data per chunk.
-		 * Retrieve the corrected OOB bytes.
-		 */
-		sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(0)),
-					   oob);
+		memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-		/* De-randomize the Bad Block Marker. */
-		if (bbm && nand->options & NAND_NEED_SCRAMBLING)
-			sunxi_nfc_randomize_bbm(mtd, page, oob);
-	}
+		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+		sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
+					      true, page);
 
-	if (ret < 0) {
-		mtd->ecc_stats.failed++;
-	} else {
-		mtd->ecc_stats.corrected += ret;
-		*max_bitflips = max_t(unsigned int, *max_bitflips, ret);
+		sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
+						    bbm, page);
 	}
 
+	sunxi_nfc_hw_ecc_update_stats(mtd, max_bitflips, ret);
+
 	return raw_mode;
 }
 
@@ -898,11 +967,6 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 	*cur_off = mtd->oobsize + mtd->writesize;
 }
 
-static inline u32 sunxi_nfc_buf_to_user_data(const u8 *buf)
-{
-	return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
-}
-
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 					const u8 *data, int data_off,
 					const u8 *oob, int oob_off,
@@ -919,19 +983,6 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 
 	sunxi_nfc_randomizer_write_buf(mtd, data, ecc->size, false, page);
 
-	/* Fill OOB data in */
-	if ((nand->options & NAND_NEED_SCRAMBLING) && bbm) {
-		u8 user_data[4];
-
-		memcpy(user_data, oob, 4);
-		sunxi_nfc_randomize_bbm(mtd, page, user_data);
-		writel(sunxi_nfc_buf_to_user_data(user_data),
-		       nfc->regs + NFC_REG_USER_DATA(0));
-	} else {
-		writel(sunxi_nfc_buf_to_user_data(oob),
-		       nfc->regs + NFC_REG_USER_DATA(0));
-	}
-
 	if (data_off + ecc->size != oob_off)
 		nand->cmdfunc(mtd, NAND_CMD_RNDIN, oob_off, -1);
 
@@ -940,6 +991,8 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 		return ret;
 
 	sunxi_nfc_randomizer_enable(mtd);
+	sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, 0, bbm, page);
+
 	writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD |
 	       NFC_ACCESS_DIR | NFC_ECC_OP,
 	       nfc->regs + NFC_REG_CMD);
-- 
2.5.0


* [PATCH v2 2/7] mtd: nand: sunxi: make OOB retrieval optional
From: Boris Brezillon @ 2016-03-30 15:39 UTC
  To: David Woodhouse, Brian Norris, linux-mtd, Andrew Morton, Dave Gordon
  Cc: Mark Brown, linux-spi, linux-arm-kernel, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi, Vinod Koul, Dan Williams, dmaengine,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree, Boris Brezillon, Richard Weinberger

sunxi_nfc_hw_ecc_read_chunk() always retrieves the ECC and protected
free bytes, whether or not the user actually asked for them. This can
take a non-negligible amount of time, especially on NAND chips exposing
large OOB areas (> 1KB). Make the retrieval optional.
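
Callers just gain one boolean parameter; for example, the full-page
read path now forwards its own oob_required argument (excerpt matching
the hunk below):

	ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob,
					  oob_off + mtd->writesize,
					  &cur_off, &max_bitflips,
					  !i, oob_required, page);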

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/nand/sunxi_nand.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index f6ea0fb..3e7b919 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -867,7 +867,7 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 				       u8 *oob, int oob_off,
 				       int *cur_off,
 				       unsigned int *max_bitflips,
-				       bool bbm, int page)
+				       bool bbm, bool oob_required, int page)
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
 	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
@@ -899,7 +899,7 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 
 	*cur_off = oob_off + ecc->bytes + 4;
 
-	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob, 0,
+	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob_required ? oob : NULL, 0,
 				       readl(nfc->regs + NFC_REG_ECC_ST),
 				       &erased);
 	if (erased)
@@ -929,12 +929,14 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 	} else {
 		memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-		sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
-					      true, page);
+		if (oob_required) {
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
+						      true, page);
 
-		sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
-						    bbm, page);
+			sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
+							    bbm, page);
+		}
 	}
 
 	sunxi_nfc_hw_ecc_update_stats(mtd, max_bitflips, ret);
@@ -1048,7 +1050,7 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob,
 						  oob_off + mtd->writesize,
 						  &cur_off, &max_bitflips,
-						  !i, page);
+						  !i, oob_required, page);
 		if (ret < 0)
 			return ret;
 		else if (ret)
@@ -1086,8 +1088,8 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off,
 						  oob,
 						  oob_off + mtd->writesize,
-						  &cur_off, &max_bitflips,
-						  !i, page);
+						  &cur_off, &max_bitflips, !i,
+						  false, page);
 		if (ret < 0)
 			return ret;
 	}
@@ -1149,7 +1151,9 @@ static int sunxi_nfc_hw_syndrome_ecc_read_page(struct mtd_info *mtd,
 
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob,
 						  oob_off, &cur_off,
-						  &max_bitflips, !i, page);
+						  &max_bitflips, !i,
+						  oob_required,
+						  page);
 		if (ret < 0)
 			return ret;
 		else if (ret)
-- 
2.5.0


* [PATCH v2 3/7] mtd: nand: sunxi: make cur_off parameter optional in extra oob helpers
From: Boris Brezillon @ 2016-03-30 15:39 UTC
  To: David Woodhouse, Brian Norris, linux-mtd, Andrew Morton, Dave Gordon
  Cc: Mark Brown, linux-spi, linux-arm-kernel, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi, Vinod Koul, Dan Williams, dmaengine,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree, Boris Brezillon, Richard Weinberger

Allow NULL cur_off values when the caller does not know where the NAND
page register pointer points.
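
This lets a caller that has lost track of the column pointer (as a
DMA-based ->read_page() will) still use the extra-OOB helpers: passing
NULL simply forces the RNDOUT/RNDIN command to be re-issued. A sketch
(the argument list is abbreviated from the current helper prototype, so
treat it as illustrative only):

	/* cur_off == NULL: the helper unconditionally re-issues RNDOUT */
	sunxi_nfc_hw_ecc_read_extra_oob(mtd, chip->oob_poi, NULL, false, page);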

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/nand/sunxi_nand.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 3e7b919..6d6b166 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -956,7 +956,7 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 	if (len <= 0)
 		return;
 
-	if (*cur_off != offset)
+	if (!cur_off || *cur_off != offset)
 		nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
 			      offset + mtd->writesize, -1);
 
@@ -966,7 +966,8 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 		sunxi_nfc_randomizer_read_buf(mtd, oob + offset, len,
 					      false, page);
 
-	*cur_off = mtd->oobsize + mtd->writesize;
+	if (cur_off)
+		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
@@ -1021,13 +1022,14 @@ static void sunxi_nfc_hw_ecc_write_extra_oob(struct mtd_info *mtd,
 	if (len <= 0)
 		return;
 
-	if (*cur_off != offset)
+	if (!cur_off || *cur_off != offset)
 		nand->cmdfunc(mtd, NAND_CMD_RNDIN,
 			      offset + mtd->writesize, -1);
 
 	sunxi_nfc_randomizer_write_buf(mtd, oob + offset, len, false, page);
 
-	*cur_off = mtd->oobsize + mtd->writesize;
+	if (cur_off)
+		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
 static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
-- 
2.5.0


* [PATCH v2 4/7] scatterlist: add sg_alloc_table_from_buf() helper
From: Boris Brezillon @ 2016-03-30 15:39 UTC
  To: David Woodhouse, Brian Norris, linux-mtd, Andrew Morton, Dave Gordon
  Cc: Mark Brown, linux-spi, linux-arm-kernel, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi, Vinod Koul, Dan Williams, dmaengine,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree, Boris Brezillon, Richard Weinberger

sg_alloc_table_from_buf() provides an easy way to create an sg_table
from a virtual address pointer. This function takes care of dealing
with vmalloc'ed buffers, buffer alignment, and DMA engine limitations
(maximum DMA transfer size).

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 include/linux/scatterlist.h |  24 +++++++
 lib/scatterlist.c           | 161 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 185 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 556ec1e..4a75362 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -41,6 +41,27 @@ struct sg_table {
 	unsigned int orig_nents;	/* original size of list */
 };
 
+/**
+ * struct sg_constraints - SG constraints structure
+ *
+ * @max_chunk_len: maximum chunk buffer length. No SG entry may exceed
+ *		   this value. Zero means no constraint.
+ * @required_alignment: minimum alignment. Is used for both size and pointer
+ *			alignment. If this constraint is not met, the function
+ *			should return -EINVAL.
+ * @preferred_alignment: preferred alignment. Mainly used to optimize
+ *			 throughput when the DMA engine performs better when
+ *			 doing aligned accesses.
+ *
+ * This structure is here to help sg_alloc_table_from_buf() create the optimal
+ * SG list based on DMA engine constraints.
+ */
+struct sg_constraints {
+	size_t max_chunk_len;
+	size_t required_alignment;
+	size_t preferred_alignment;
+};
+
 /*
  * Notes on SG table design.
  *
@@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
 	struct page **pages, unsigned int n_pages,
 	unsigned long offset, unsigned long size,
 	gfp_t gfp_mask);
+int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
+			    const struct sg_constraints *constraints,
+			    gfp_t gfp_mask);
 
 size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
 		      size_t buflen, off_t skip, bool to_buffer);
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 004fc70..94776ff 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -433,6 +433,167 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages);
 
+static size_t sg_buf_chunk_len(const void *buf, size_t len,
+			       const struct sg_constraints *cons)
+{
+	size_t chunk_len = len;
+
+	if (cons->max_chunk_len)
+		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
+
+	if (is_vmalloc_addr(buf)) {
+		unsigned long off = offset_in_page(buf);
+		size_t contig_len = PAGE_SIZE - off;
+		unsigned long pfn = vmalloc_to_pfn(buf);
+		const void *contig_ptr = buf + contig_len;
+
+		/*
+		 * A vmalloc'ed buffer might be composed of several physically
+		 * contiguous pages. Avoid extra scattergather entries in
+		 * this case.
+		 */
+		while (contig_len < chunk_len) {
+			if (pfn + 1 != vmalloc_to_pfn(contig_ptr))
+				break;
+
+			contig_len += PAGE_SIZE;
+			contig_ptr += PAGE_SIZE;
+			pfn++;
+		}
+
+		chunk_len = min_t(size_t, chunk_len, contig_len);
+	}
+
+	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
+		const void *aligned_buf = PTR_ALIGN(buf,
+						    cons->preferred_alignment);
+		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
+
+		chunk_len = min_t(size_t, chunk_len, unaligned_len);
+	} else if (chunk_len > cons->preferred_alignment) {
+		chunk_len &= ~(cons->preferred_alignment - 1);
+	}
+
+	return chunk_len;
+}
+
+#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
+	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
+	     len;							\
+	     len -= chunk_len, buf += chunk_len,			\
+	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
+
+static int sg_check_constraints(struct sg_constraints *cons,
+				const void *buf, size_t len)
+{
+	if (!cons->required_alignment)
+		cons->required_alignment = 1;
+
+	if (!cons->preferred_alignment)
+		cons->preferred_alignment = cons->required_alignment;
+
+	/* The IS_ALIGNED()/masking logic only works for power of 2 values. */
+	if (!is_power_of_2(cons->required_alignment) ||
+	    !is_power_of_2(cons->preferred_alignment))
+		return -EINVAL;
+
+	/* Test if buf and len are properly aligned. */
+	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
+	    !IS_ALIGNED(len, cons->required_alignment))
+		return -EINVAL;
+
+	/*
+	 * If the buffer is vmalloc'ed, we cannot guarantee an alignment
+	 * larger than PAGE_SIZE for the underlying pages.
+	 */
+	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
+		return -EINVAL;
+
+	/*
+	 * max_chunk_len has to be aligned to required_alignment to
+	 * guarantee that all buffer chunks are aligned correctly.
+	 */
+	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
+		return -EINVAL;
+
+	/*
+	 * preferred_alignment has to be aligned to required_alignment
+	 * to avoid misalignment of buffer chunks.
+	 */
+	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * sg_alloc_table_from_buf - create an SG table from a buffer
+ *
+ * @sgt: SG table
+ * @buf: buffer you want to create this SG table from
+ * @len: length of buf
+ * @constraints: optional constraints to take into account when creating
+ *		 the SG table. Can be NULL if no specific constraints are
+ *		 required.
+ * @gfp_mask: type of allocation to use when creating the table
+ *
+ * This function creates an SG table from a buffer, its length and some
+ * SG constraints.
+ *
+ * Note: this function supports both vmalloc'ed and physically contiguous
+ * buffers.
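+ *
+ * Example (hypothetical caller; assumes a DMA engine limited to 1 MiB
+ * transfers and preferring 32-bit aligned accesses):
+ *
+ *	struct sg_constraints cons = {
+ *		.max_chunk_len = SZ_1M,
+ *		.preferred_alignment = 4,
+ *	};
+ *	struct sg_table sgt;
+ *	int ret;
+ *
+ *	ret = sg_alloc_table_from_buf(&sgt, buf, len, &cons, GFP_KERNEL);
+ *	if (ret)
+ *		return ret;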
+ */
+int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
+			    const struct sg_constraints *constraints,
+			    gfp_t gfp_mask)
+{
+	struct sg_constraints cons = { };
+	size_t remaining, chunk_len;
+	const void *sg_buf;
+	struct scatterlist *sg;
+	int nents, ret;
+
+	if (constraints)
+		cons = *constraints;
+
+	ret = sg_check_constraints(&cons, buf, len);
+	if (ret)
+		return ret;
+
+	/* First pass: count how many SG entries we need. */
+	sg_buf = buf;
+	remaining = len;
+	nents = 0;
+	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
+		nents++;
+
+	ret = sg_alloc_table(sgt, nents, gfp_mask);
+	if (ret)
+		return ret;
+
+	/*
+	 * Second pass: fill the entries. Walk the list with sg_next() so
+	 * that chained SG tables are handled correctly.
+	 */
+	sg_buf = buf;
+	remaining = len;
+	sg = sgt->sgl;
+	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
+		if (is_vmalloc_addr(sg_buf)) {
+			struct page *vm_page;
+
+			vm_page = vmalloc_to_page(sg_buf);
+			if (!vm_page) {
+				ret = -ENOMEM;
+				goto err_free_table;
+			}
+
+			sg_set_page(sg, vm_page, chunk_len,
+				    offset_in_page(sg_buf));
+		} else {
+			sg_set_buf(sg, sg_buf, chunk_len);
+		}
+
+		sg = sg_next(sg);
+	}
+
+	return 0;
+
+err_free_table:
+	sg_free_table(sgt);
+
+	return ret;
+}
+EXPORT_SYMBOL(sg_alloc_table_from_buf);
+
 void __sg_page_iter_start(struct sg_page_iter *piter,
 			  struct scatterlist *sglist, unsigned int nents,
 			  unsigned long pgoffset)
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH v2 5/7] mtd: provide helper to prepare buffers for DMA operations
@ 2016-03-30 15:39     ` Boris Brezillon
  0 siblings, 0 replies; 46+ messages in thread
From: Boris Brezillon @ 2016-03-30 15:39 UTC (permalink / raw)
  To: David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Andrew Morton,
	Dave Gordon
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Boris Brezillon,
	Richard Weinberger

Some NAND controller drivers use DMA to transfer data between the
controller and the buffer passed by the MTD user.
Provide generic mtd_map_buf()/mtd_unmap_buf() helpers to avoid open-coded
(and sometimes erroneous) implementations.
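
For illustration, a NAND driver read path would use these helpers roughly
as follows (hypothetical sketch: buf/len come from the MTD layer, and the
DMA queueing step is controller specific):

	struct sg_table sgt;
	int ret;

	/* Build and map an SG table covering the MTD-provided buffer. */
	ret = mtd_map_buf(mtd, dev, &sgt, buf, len, NULL, DMA_FROM_DEVICE);
	if (ret)
		return ret;

	/* ... hand sgt.sgl/sgt.nents to the DMA engine, run the transfer ... */

	mtd_unmap_buf(mtd, dev, &sgt, DMA_FROM_DEVICE);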

Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
---
 drivers/mtd/mtdcore.c   | 66 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/mtd/mtd.h | 25 +++++++++++++++++++
 2 files changed, 91 insertions(+)

diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
index 3096251..3c368f0 100644
--- a/drivers/mtd/mtdcore.c
+++ b/drivers/mtd/mtdcore.c
@@ -1253,6 +1253,72 @@ void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size)
 }
 EXPORT_SYMBOL_GPL(mtd_kmalloc_up_to);
 
+#ifdef CONFIG_HAS_DMA
+/**
+ * mtd_map_buf - create an SG table and prepare it for DMA operations
+ *
+ * @mtd: mtd device description object pointer
+ * @dev: device handling the DMA operation
+ * @sgt: SG table to fill
+ * @buf: buf used to create the SG table
+ * @len: length of buf
+ * @constraints: optional constraints to take into account when creating
+ *		 the SG table. Can be NULL if no specific constraints
+ *		 are required.
+ * @dir: direction of the DMA operation
+ *
+ * This function should be used when an MTD driver wants to do DMA operations
+ * on a buffer passed by the MTD layer. It takes care of vmalloc'ed buffer
+ * constraints and fills an sg_table that can safely be passed to the DMA
+ * engine.
+ */
+int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+		struct sg_table *sgt, const void *buf, size_t len,
+		const struct sg_constraints *constraints,
+		enum dma_data_direction dir)
+{
+	int ret;
+
+	ret = sg_alloc_table_from_buf(sgt, buf, len, constraints, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	ret = dma_map_sg(dev, sgt->sgl, sgt->nents, dir);
+	if (!ret)
+		ret = -ENOMEM;
+
+	if (ret < 0) {
+		sg_free_table(sgt);
+		return ret;
+	}
+
+	sgt->nents = ret;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mtd_map_buf);
+
+/**
+ * mtd_unmap_buf - unmap an SG table and release its resources
+ *
+ * @mtd: mtd device description object pointer
+ * @dev: device handling the DMA operation
+ * @sgt: SG table
+ * @dir: direction of the DMA operation
+ *
+ * This function unmaps a previously mapped SG table and releases the SG
+ * table resources. Should be called when your DMA operation is done.
+ */
+void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+		   struct sg_table *sgt, enum dma_data_direction dir)
+{
+	if (sgt->orig_nents) {
+		dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
+		sg_free_table(sgt);
+	}
+}
+EXPORT_SYMBOL_GPL(mtd_unmap_buf);
+#endif /* CONFIG_HAS_DMA */
+
 #ifdef CONFIG_PROC_FS
 
 /*====================================================================*/
diff --git a/include/linux/mtd/mtd.h b/include/linux/mtd/mtd.h
index 7712721..15cff85 100644
--- a/include/linux/mtd/mtd.h
+++ b/include/linux/mtd/mtd.h
@@ -24,6 +24,7 @@
 #include <linux/uio.h>
 #include <linux/notifier.h>
 #include <linux/device.h>
+#include <linux/dma-mapping.h>
 
 #include <mtd/mtd-abi.h>
 
@@ -410,6 +411,30 @@ extern void register_mtd_user (struct mtd_notifier *new);
 extern int unregister_mtd_user (struct mtd_notifier *old);
 void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size);
 
+#ifdef CONFIG_HAS_DMA
+int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+		struct sg_table *sgt, const void *buf, size_t len,
+		const struct sg_constraints *constraints,
+		enum dma_data_direction dir);
+void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+		   struct sg_table *sgt, enum dma_data_direction dir);
+#else
+static inline int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+			      struct sg_table *sgt, const void *buf,
+			      size_t len,
+			      const struct sg_constraints *constraints,
+			      enum dma_data_direction dir)
+{
+	return -ENOTSUPP;
+}
+
+static inline void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+				 struct sg_table *sgt,
+				 enum dma_data_direction dir)
+{
+}
+#endif
+
 void mtd_erase_callback(struct erase_info *instr);
 
 static inline int mtd_is_bitflip(int err) {
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH v2 6/7] mtd: nand: sunxi: add support for DMA assisted operations
@ 2016-03-30 15:39     ` Boris Brezillon
  0 siblings, 0 replies; 46+ messages in thread
From: Boris Brezillon @ 2016-03-30 15:39 UTC (permalink / raw)
  To: David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Andrew Morton,
	Dave Gordon
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Boris Brezillon,
	Richard Weinberger

The sunxi NAND controller can only pipeline ECC operations when operated
in DMA mode, which greatly improves NAND throughput while keeping CPU
usage low.
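
For reference, the error handling added below consumes the per-chunk ECC
status exposed in NFC_REG_ECC_ST. A minimal sketch of how those bits are
interpreted (nfc_report_ecc_errors() is a hypothetical helper, not part of
this patch):

	/* Chunk i failed hardware ECC correction if NFC_ECC_ERR(i) is set. */
	static void nfc_report_ecc_errors(u32 status, int nchunks)
	{
		int i;

		if (!(status & NFC_ECC_ERR_MSK))
			return; /* all chunks were clean or corrected */

		for (i = 0; i < nchunks; i++)
			if (status & NFC_ECC_ERR(i))
				pr_debug("ECC failure in chunk %d\n", i);
	}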

Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
---
 drivers/mtd/nand/sunxi_nand.c | 324 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 320 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 6d6b166..1029f28 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -154,6 +154,7 @@
 
 /* define bit use in NFC_ECC_ST */
 #define NFC_ECC_ERR(x)		BIT(x)
+#define NFC_ECC_ERR_MSK		GENMASK(15, 0)
 #define NFC_ECC_PAT_FOUND(x)	BIT(x + 16)
 #define NFC_ECC_ERR_CNT(b, x)	(((x) >> (((b) % 4) * 8)) & 0xff)
 
@@ -277,6 +278,7 @@ struct sunxi_nfc {
 	unsigned long clk_rate;
 	struct list_head chips;
 	struct completion complete;
+	struct dma_chan *dmac;
 };
 
 static inline struct sunxi_nfc *to_sunxi_nfc(struct nand_hw_control *ctrl)
@@ -369,6 +371,68 @@ static int sunxi_nfc_rst(struct sunxi_nfc *nfc)
 	return ret;
 }
 
+static int sunxi_nfc_dma_op_prepare(struct mtd_info *mtd, const void *buf,
+				    int chunksize, int nchunks,
+				    enum dma_data_direction ddir,
+				    struct sg_table *sgt)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct dma_async_tx_descriptor *dmad;
+	enum dma_transfer_direction tdir;
+	dma_cookie_t dmat;
+	int ret;
+
+	if (ddir == DMA_FROM_DEVICE)
+		tdir = DMA_DEV_TO_MEM;
+	else
+		tdir = DMA_MEM_TO_DEV;
+
+	ret = mtd_map_buf(mtd, nfc->dev, sgt, buf, nchunks * chunksize,
+			  NULL, ddir);
+	if (ret)
+		return ret;
+
+	dmad = dmaengine_prep_slave_sg(nfc->dmac, sgt->sgl, sgt->nents,
+				       tdir, DMA_CTRL_ACK);
+	if (IS_ERR(dmad)) {
+		ret = PTR_ERR(dmad);
+		goto err_unmap_buf;
+	}
+
+	writel(readl(nfc->regs + NFC_REG_CTL) | NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+	writel(nchunks, nfc->regs + NFC_REG_SECTOR_NUM);
+	writel(chunksize, nfc->regs + NFC_REG_CNT);
+	dmat = dmaengine_submit(dmad);
+
+	ret = dma_submit_error(dmat);
+	if (ret)
+		goto err_clr_dma_flag;
+
+	return 0;
+
+err_clr_dma_flag:
+	writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+
+err_unmap_buf:
+	mtd_unmap_buf(mtd, nfc->dev, sgt, ddir);
+	return ret;
+}
+
+static void sunxi_nfc_dma_op_cleanup(struct mtd_info *mtd,
+				     enum dma_data_direction ddir,
+				     struct sg_table *sgt)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+
+	mtd_unmap_buf(mtd, nfc->dev, sgt, ddir);
+	writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+}
+
 static int sunxi_nfc_dev_ready(struct mtd_info *mtd)
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
@@ -970,6 +1034,128 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
+static int sunxi_nfc_hw_ecc_read_chunks_dma(struct mtd_info *mtd, uint8_t *buf,
+					    int oob_required, int page,
+					    int nchunks)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	bool randomized = nand->options & NAND_NEED_SCRAMBLING;
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	unsigned int max_bitflips = 0;
+	int ret, i, raw_mode = 0;
+	struct sg_table sgt;
+	u32 status;
+
+	ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
+	if (ret)
+		return ret;
+
+	ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, nchunks,
+				       DMA_FROM_DEVICE, &sgt);
+	if (ret)
+		return ret;
+
+	sunxi_nfc_hw_ecc_enable(mtd);
+	sunxi_nfc_randomizer_config(mtd, page, false);
+	sunxi_nfc_randomizer_enable(mtd);
+
+	writel((NAND_CMD_RNDOUTSTART << 16) | (NAND_CMD_RNDOUT << 8) |
+	       NAND_CMD_READSTART, nfc->regs + NFC_REG_RCMD_SET);
+
+	dma_async_issue_pending(nfc->dmac);
+
+	writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD | NFC_DATA_TRANS,
+	       nfc->regs + NFC_REG_CMD);
+
+	ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
+	if (ret)
+		dmaengine_terminate_all(nfc->dmac);
+
+	sunxi_nfc_randomizer_disable(mtd);
+	sunxi_nfc_hw_ecc_disable(mtd);
+
+	sunxi_nfc_dma_op_cleanup(mtd, DMA_FROM_DEVICE, &sgt);
+
+	if (ret)
+		return ret;
+
+	status = readl(nfc->regs + NFC_REG_ECC_ST);
+
+	for (i = 0; i < nchunks; i++) {
+		int data_off = i * ecc->size;
+		int oob_off = i * (ecc->bytes + 4);
+		u8 *data = buf + data_off;
+		u8 *oob = nand->oob_poi + oob_off;
+		bool erased;
+
+		ret = sunxi_nfc_hw_ecc_correct(mtd, randomized ? data : NULL,
+					       oob_required ? oob : NULL,
+					       i, status, &erased);
+
+		/* ECC errors are handled in the second loop. */
+		if (ret < 0)
+			continue;
+
+		if (oob_required && !erased) {
+			/* TODO: use DMA to retrieve OOB */
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			nand->read_buf(mtd, oob, ecc->bytes + 4);
+
+			sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, i,
+							    !i, page);
+		}
+
+		if (erased)
+			raw_mode = 1;
+
+		sunxi_nfc_hw_ecc_update_stats(mtd, &max_bitflips, ret);
+	}
+
+	if (status & NFC_ECC_ERR_MSK) {
+		for (i = 0; i < nchunks; i++) {
+			int data_off = i * ecc->size;
+			int oob_off = i * (ecc->bytes + 4);
+			u8 *data = buf + data_off;
+			u8 *oob = nand->oob_poi + oob_off;
+
+			if (!(status & NFC_ECC_ERR(i)))
+				continue;
+
+			/*
+			 * Re-read the data with the randomizer disabled to
+			 * identify bitflips in erased pages.
+			 */
+			if (randomized) {
+				/* TODO: use DMA to read page in raw mode */
+				nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
+					      data_off, -1);
+				nand->read_buf(mtd, data, ecc->size);
+			}
+
+			/* TODO: use DMA to retrieve OOB */
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			nand->read_buf(mtd, oob, ecc->bytes + 4);
+
+			ret = nand_check_erased_ecc_chunk(data, ecc->size,
+							  oob, ecc->bytes + 4,
+							  NULL, 0,
+							  ecc->strength);
+			if (ret >= 0)
+				raw_mode = 1;
+
+			sunxi_nfc_hw_ecc_update_stats(mtd, &max_bitflips, ret);
+		}
+	}
+
+	if (oob_required)
+		sunxi_nfc_hw_ecc_read_extra_oob(mtd, nand->oob_poi,
+						NULL, !raw_mode,
+						page);
+
+	return max_bitflips;
+}
+
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 					const u8 *data, int data_off,
 					const u8 *oob, int oob_off,
@@ -1068,6 +1254,23 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
 	return max_bitflips;
 }
 
+static int sunxi_nfc_hw_ecc_read_page_dma(struct mtd_info *mtd,
+					  struct nand_chip *chip, u8 *buf,
+					  int oob_required, int page)
+{
+	int ret;
+
+	ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, oob_required, page,
+					       chip->ecc.steps);
+	if (ret >= 0)
+		return ret;
+
+	/* Fallback to PIO mode */
+	chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
+
+	return sunxi_nfc_hw_ecc_read_page(mtd, chip, buf, oob_required, page);
+}
+
 static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 					 struct nand_chip *chip,
 					 u32 data_offs, u32 readlen,
@@ -1101,6 +1304,25 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 	return max_bitflips;
 }
 
+static int sunxi_nfc_hw_ecc_read_subpage_dma(struct mtd_info *mtd,
+					     struct nand_chip *chip,
+					     u32 data_offs, u32 readlen,
+					     u8 *buf, int page)
+{
+	int nchunks = DIV_ROUND_UP(data_offs + readlen, chip->ecc.size);
+	int ret;
+
+	ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, false, page, nchunks);
+	if (ret >= 0)
+		return ret;
+
+	/* Fallback to PIO mode */
+	chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
+
+	return sunxi_nfc_hw_ecc_read_subpage(mtd, chip, data_offs, readlen,
+					     buf, page);
+}
+
 static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
 				       struct nand_chip *chip,
 				       const uint8_t *buf, int oob_required,
@@ -1133,6 +1355,69 @@ static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
 	return 0;
 }
 
+static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
+					   struct nand_chip *chip,
+					   const u8 *buf,
+					   int oob_required,
+					   int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	struct sg_table sgt;
+	int ret, i;
+
+	ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
+	if (ret)
+		return ret;
+
+	ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, ecc->steps,
+				       DMA_TO_DEVICE, &sgt);
+	if (ret)
+		goto pio_fallback;
+
+	for (i = 0; i < ecc->steps; i++) {
+		const u8 *oob = nand->oob_poi + (i * (ecc->bytes + 4));
+
+		sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, i, !i, page);
+	}
+
+	sunxi_nfc_hw_ecc_enable(mtd);
+	sunxi_nfc_randomizer_config(mtd, page, false);
+	sunxi_nfc_randomizer_enable(mtd);
+
+	writel((NAND_CMD_RNDIN << 8) | NAND_CMD_PAGEPROG,
+	       nfc->regs + NFC_REG_RCMD_SET);
+
+	dma_async_issue_pending(nfc->dmac);
+
+	writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD |
+	       NFC_DATA_TRANS | NFC_ACCESS_DIR,
+	       nfc->regs + NFC_REG_CMD);
+
+	ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
+	if (ret)
+		dmaengine_terminate_all(nfc->dmac);
+
+	sunxi_nfc_randomizer_disable(mtd);
+	sunxi_nfc_hw_ecc_disable(mtd);
+
+	sunxi_nfc_dma_op_cleanup(mtd, DMA_TO_DEVICE, &sgt);
+
+	if (ret)
+		return ret;
+
+	if (oob_required || (chip->options & NAND_NEED_SCRAMBLING))
+		/* TODO: use DMA to transfer extra OOB bytes? */
+		sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi,
+						 NULL, page);
+
+	return 0;
+
+pio_fallback:
+	return sunxi_nfc_hw_ecc_write_page(mtd, chip, buf, oob_required, page);
+}
+
 static int sunxi_nfc_hw_syndrome_ecc_read_page(struct mtd_info *mtd,
 					       struct nand_chip *chip,
 					       uint8_t *buf, int oob_required,
@@ -1506,6 +1791,9 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
 				       struct nand_ecc_ctrl *ecc,
 				       struct device_node *np)
 {
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(sunxi_nand->nand.controller);
 	struct nand_ecclayout *layout;
 	int nsectors;
 	int i, j;
@@ -1515,11 +1803,19 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
 	if (ret)
 		return ret;
 
-	ecc->read_page = sunxi_nfc_hw_ecc_read_page;
-	ecc->write_page = sunxi_nfc_hw_ecc_write_page;
+	if (nfc->dmac) {
+		ecc->read_page = sunxi_nfc_hw_ecc_read_page_dma;
+		ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage_dma;
+		ecc->write_page = sunxi_nfc_hw_ecc_write_page_dma;
+	} else {
+		ecc->read_page = sunxi_nfc_hw_ecc_read_page;
+		ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
+		ecc->write_page = sunxi_nfc_hw_ecc_write_page;
+	}
+
+	/* TODO: support DMA for raw accesses */
 	ecc->read_oob_raw = nand_read_oob_std;
 	ecc->write_oob_raw = nand_write_oob_std;
-	ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
 	layout = ecc->layout;
 	nsectors = mtd->writesize / ecc->size;
 
@@ -1893,16 +2189,34 @@ static int sunxi_nfc_probe(struct platform_device *pdev)
 	if (ret)
 		goto out_mod_clk_unprepare;
 
+	nfc->dmac = dma_request_slave_channel(dev, "rxtx");
+	if (nfc->dmac) {
+		struct dma_slave_config dmac_cfg = { };
+
+		dmac_cfg.src_addr = r->start + NFC_REG_IO_DATA;
+		dmac_cfg.dst_addr = dmac_cfg.src_addr;
+		dmac_cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+		dmac_cfg.dst_addr_width = dmac_cfg.src_addr_width;
+		dmac_cfg.src_maxburst = 4;
+		dmac_cfg.dst_maxburst = 4;
+		dmaengine_slave_config(nfc->dmac, &dmac_cfg);
+	} else {
+		dev_warn(dev, "failed to request rxtx DMA channel\n");
+	}
+
 	platform_set_drvdata(pdev, nfc);
 
 	ret = sunxi_nand_chips_init(dev, nfc);
 	if (ret) {
 		dev_err(dev, "failed to init nand chips\n");
-		goto out_mod_clk_unprepare;
+		goto out_release_dmac;
 	}
 
 	return 0;
 
+out_release_dmac:
+	if (nfc->dmac)
+		dma_release_channel(nfc->dmac);
 out_mod_clk_unprepare:
 	clk_disable_unprepare(nfc->mod_clk);
 out_ahb_clk_unprepare:
@@ -1916,6 +2230,8 @@ static int sunxi_nfc_remove(struct platform_device *pdev)
 	struct sunxi_nfc *nfc = platform_get_drvdata(pdev);
 
 	sunxi_nand_chips_cleanup(nfc);
+	if (nfc->dmac)
+		dma_release_channel(nfc->dmac);
 	clk_disable_unprepare(nfc->mod_clk);
 	clk_disable_unprepare(nfc->ahb_clk);
 
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [PATCH v2 7/7] mtd: nand: sunxi: update DT bindings
@ 2016-03-30 15:39     ` Boris Brezillon
  0 siblings, 0 replies; 46+ messages in thread
From: Boris Brezillon @ 2016-03-30 15:39 UTC (permalink / raw)
  To: David Woodhouse, Brian Norris, linux-mtd, Andrew Morton, Dave Gordon
  Cc: Mark Brown, linux-spi, linux-arm-kernel, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi, Vinod Koul, Dan Williams, dmaengine,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree, Boris Brezillon, Richard Weinberger

Document dmas and dma-names properties.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Acked-by: Rob Herring <robh@kernel.org>
---
 Documentation/devicetree/bindings/mtd/sunxi-nand.txt | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/Documentation/devicetree/bindings/mtd/sunxi-nand.txt b/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
index 086d6f4..6fdf8f6 100644
--- a/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
+++ b/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
@@ -11,6 +11,10 @@ Required properties:
     * "ahb" : AHB gating clock
     * "mod" : nand controller clock
 
+Optional properties:
+- dmas : shall reference the DMA channel associated with the NAND controller.
+- dma-names : shall be "rxtx".
+
 Optional children nodes:
 Children nodes represent the available nand chips.
 
-- 
2.5.0
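
For context, the driver side added in patch 6 requests this channel by
the name documented above, and simply falls back to PIO mode when the
property is absent:

	/* "rxtx" must match the dma-names value above; DMA stays optional. */
	nfc->dmac = dma_request_slave_channel(dev, "rxtx");
	if (!nfc->dmac)
		dev_warn(dev, "failed to request rxtx DMA channel\n");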


^ permalink raw reply related	[flat|nested] 46+ messages in thread

* Re: [PATCH v2 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-30 16:51         ` Mark Brown
  0 siblings, 0 replies; 46+ messages in thread
From: Mark Brown @ 2016-03-30 16:51 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: David Woodhouse, Brian Norris, linux-mtd, Andrew Morton,
	Dave Gordon, linux-spi, linux-arm-kernel, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi, Vinod Koul, Dan Williams, dmaengine,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree, Richard Weinberger

On Wed, Mar 30, 2016 at 05:39:51PM +0200, Boris Brezillon wrote:
> sg_alloc_table_from_buf() provides an easy solution to create an sg_table
> from a virtual address pointer. This function takes care of dealing with
> vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
> DMA transfer size).

This seems nice.  Should we also have a further helper on top of this
which gets its constraints from a dmaengine? It seems like it'd be a
common need.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v2 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-30 18:18             ` Boris Brezillon
  0 siblings, 0 replies; 46+ messages in thread
From: Boris Brezillon @ 2016-03-30 18:18 UTC (permalink / raw)
  To: Mark Brown
  Cc: David Woodhouse, Brian Norris, linux-mtd, Andrew Morton,
	Dave Gordon, linux-spi, linux-arm-kernel, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi, Vinod Koul, Dan Williams, dmaengine,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree, Richard Weinberger

On Wed, 30 Mar 2016 09:51:43 -0700
Mark Brown <broonie@kernel.org> wrote:

> On Wed, Mar 30, 2016 at 05:39:51PM +0200, Boris Brezillon wrote:
> > sg_alloc_table_from_buf() provides an easy solution to create an sg_table
> > from a virtual address pointer. This function takes care of dealing with
> > vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
> > DMA transfer size).
> 
> This seems nice.  Should we also have a further helper on top of this
> which will get constraints from a dmaengine, it seems like it'd be a
> common need?

Yep, we could create a wrapper extracting dma_slave caps info,
converting it to sg_constraints and calling sg_alloc_table_from_buf().
But let's try to get this function accepted first, and I'll send another
patch providing this wrapper.
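
Roughly along these lines -- a sketch only: the sg_constraints field
names below are placeholders for whatever patch 4 ends up with, while
dma_get_max_seg_size() and dma_get_cache_alignment() are existing
kernel helpers:

	static int sg_alloc_table_from_buf_for_chan(struct sg_table *sgt,
						    const void *buf, size_t len,
						    struct dma_chan *chan,
						    gfp_t gfp_mask)
	{
		struct sg_constraints cons = { };

		/* Cap chunks at what the DMA device accepts per segment. */
		cons.max_chunk_len = dma_get_max_seg_size(chan->device->dev);

		/* Stay cache-line aligned until the slave caps expose more. */
		cons.required_align = dma_get_cache_alignment();

		return sg_alloc_table_from_buf(sgt, buf, len, &cons, gfp_mask);
	}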

BTW, do you see other things that should be added in sg_constraints?

-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v2 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-30 18:34               ` Mark Brown
  0 siblings, 0 replies; 46+ messages in thread
From: Mark Brown @ 2016-03-30 18:34 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: David Woodhouse, Brian Norris, linux-mtd, Andrew Morton,
	Dave Gordon, linux-spi, linux-arm-kernel, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi, Vinod Koul, Dan Williams, dmaengine,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree, Richard Weinberger

On Wed, Mar 30, 2016 at 08:18:31PM +0200, Boris Brezillon wrote:

> BTW, do you see other things that should be added in sg_constraints?

It looked to do everything SPI does, which is everything I know about.
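
(For reference, the sizing logic in the SPI core's spi_map_buf() in
drivers/spi/spi.c -- paraphrased from memory, not a verbatim copy:)

	static int spi_count_sg_ents(struct spi_master *master,
				     const void *buf, size_t len)
	{
		/* vmalloc'ed buffers must be split at page granularity... */
		if (is_vmalloc_addr(buf))
			return DIV_ROUND_UP(len + offset_in_page(buf),
					    PAGE_SIZE);

		/* ...lowmem ones only at the controller's max DMA length. */
		return DIV_ROUND_UP(len, master->max_dma_len);
	}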

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [v2,4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-31  4:56       ` Vignesh R
  0 siblings, 0 replies; 46+ messages in thread
From: Vignesh R @ 2016-03-31  4:56 UTC (permalink / raw)
  To: Boris BREZILLON, David Woodhouse, Brian Norris, linux-mtd,
	Andrew Morton, Dave Gordon
  Cc: Mark Rutland, devicetree, Pawel Moll, Ian Campbell, Vinod Koul,
	Chen-Yu Tsai, Rob Herring, linux-spi, Richard Weinberger,
	linux-sunxi, Mark Brown, Hans Verkuil, Laurent Pinchart,
	Kumar Gala, dmaengine, Maxime Ripard, linux-media, Dan Williams,
	linux-arm-kernel, Mauro Carvalho Chehab

Hi,

On 03/30/2016 09:09 PM, Boris BREZILLON wrote:

[...]

> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask)
> +{
> +	struct sg_constraints cons = { };
> +	size_t remaining, chunk_len;
> +	const void *sg_buf;
> +	int i, ret;
> +
> +	if (constraints)
> +		cons = *constraints;
> +
> +	ret = sg_check_constraints(&cons, buf, len);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
> +		i++;
> +
> +	ret = sg_alloc_table(sgt, i, gfp_mask);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
> +		if (is_vmalloc_addr(sg_buf)) {
> +			struct page *vm_page;
> +
> +			vm_page = vmalloc_to_page(sg_buf);
> +			if (!vm_page) {
> +				ret = -ENOMEM;
> +				goto err_free_table;
> +			}
> +
> +			sg_set_page(&sgt->sgl[i], vm_page, chunk_len,
> +				    offset_in_page(sg_buf));
> +		} else {
> +			sg_set_buf(&sgt->sgl[i], sg_buf, chunk_len);
> +		}
> +

If the buf address is in the PKMAP_BASE - (PAGE_OFFSET - 1) region (I
have observed that JFFS2 provides buffers from that region to the MTD
layer), and CONFIG_DEBUG_SG is set, then sg_set_buf() will hit a
BUG_ON() because virt_addr_valid() returns false. Is there a sane way
to handle buffers from the PKMAP region with the sg_* APIs?
Or is sg_alloc_table_from_buf() simply not meant to be used with such
buffers?
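
For reference, the check that fires lives in sg_set_buf() itself
(include/linux/scatterlist.h), which only accepts linear-map addresses:

	static inline void sg_set_buf(struct scatterlist *sg, const void *buf,
				      unsigned int buflen)
	{
	#ifdef CONFIG_DEBUG_SG
		BUG_ON(!virt_addr_valid(buf));
	#endif
		sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf));
	}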


-- 
Regards
Vignesh

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [v2,4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-31  7:26           ` Boris Brezillon
  0 siblings, 0 replies; 46+ messages in thread
From: Boris Brezillon @ 2016-03-31  7:26 UTC (permalink / raw)
  To: Vignesh R
  Cc: David Woodhouse, Brian Norris, linux-mtd, Andrew Morton,
	Dave Gordon, Mark Rutland, devicetree, Pawel Moll, Ian Campbell,
	Vinod Koul, Chen-Yu Tsai, Rob Herring, linux-spi,
	Richard Weinberger, linux-sunxi, Mark Brown, Hans Verkuil,
	Laurent Pinchart, Kumar Gala, dmaengine, Maxime Ripard,
	linux-media, Dan Williams, linux-arm-kernel,
	Mauro Carvalho Chehab

Hi Vignesh,

On Thu, 31 Mar 2016 10:26:59 +0530
Vignesh R <vigneshr@ti.com> wrote:

> Hi,
> 
> On 03/30/2016 09:09 PM, Boris BREZILLON wrote:
> 
> [...]
> 
> > +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
> > +			    const struct sg_constraints *constraints,
> > +			    gfp_t gfp_mask)
> > +{
> > +	struct sg_constraints cons = { };
> > +	size_t remaining, chunk_len;
> > +	const void *sg_buf;
> > +	int i, ret;
> > +
> > +	if (constraints)
> > +		cons = *constraints;
> > +
> > +	ret = sg_check_constraints(&cons, buf, len);
> > +	if (ret)
> > +		return ret;
> > +
> > +	sg_buf = buf;
> > +	remaining = len;
> > +	i = 0;
> > +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
> > +		i++;
> > +
> > +	ret = sg_alloc_table(sgt, i, gfp_mask);
> > +	if (ret)
> > +		return ret;
> > +
> > +	sg_buf = buf;
> > +	remaining = len;
> > +	i = 0;
> > +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
> > +		if (is_vmalloc_addr(sg_buf)) {
> > +			struct page *vm_page;
> > +
> > +			vm_page = vmalloc_to_page(sg_buf);
> > +			if (!vm_page) {
> > +				ret = -ENOMEM;
> > +				goto err_free_table;
> > +			}
> > +
> > +			sg_set_page(&sgt->sgl[i], vm_page, chunk_len,
> > +				    offset_in_page(sg_buf));
> > +		} else {
> > +			sg_set_buf(&sgt->sgl[i], sg_buf, chunk_len);
> > +		}
> > +
> 
> If the buf address is in the PKMAP_BASE - (PAGE_OFFSET - 1) region (I
> have observed that JFFS2 provides buffers from that region to the MTD
> layer), and CONFIG_DEBUG_SG is set, then sg_set_buf() will hit a
> BUG_ON() because virt_addr_valid() returns false. Is there a sane way
> to handle buffers from the PKMAP region with the sg_* APIs?
> Or is sg_alloc_table_from_buf() simply not meant to be used with such
> buffers?

It should be usable with kmapped buffers too: I'll provide a new version
to support that.
That makes me realize I'm not checking the virtual address consistency
in sg_check_constraints().
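
Something like this should do it -- illustrative only, the actual v3
may differ; kmap_to_page() resolves PKMAP addresses and falls back to
virt_to_page() for lowmem ones:

	static struct page *sg_buf_to_page(const void *sg_buf)
	{
		if (is_vmalloc_addr(sg_buf))
			return vmalloc_to_page(sg_buf);

		/* Covers both kmapped highmem and linear-map addresses. */
		return kmap_to_page((void *)sg_buf);
	}

i.e. always go through sg_set_page() rather than sg_set_buf(), so the
CONFIG_DEBUG_SG virt_addr_valid() check never sees a PKMAP address.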

Thanks,

Boris

-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [PATCH v2 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-04-05  7:11               ` Dave Gordon
  0 siblings, 0 replies; 46+ messages in thread
From: Dave Gordon @ 2016-04-05  7:11 UTC (permalink / raw)
  To: Boris Brezillon, Mark Brown
  Cc: David Woodhouse, Brian Norris, linux-mtd, Andrew Morton,
	linux-spi, linux-arm-kernel, Maxime Ripard, Chen-Yu Tsai,
	linux-sunxi, Vinod Koul, Dan Williams, dmaengine,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree, Richard Weinberger

On 30/03/16 19:18, Boris Brezillon wrote:
> On Wed, 30 Mar 2016 09:51:43 -0700
> Mark Brown <broonie@kernel.org> wrote:
>
>> On Wed, Mar 30, 2016 at 05:39:51PM +0200, Boris Brezillon wrote:
>>> sg_alloc_table_from_buf() provides an easy solution to create an sg_table
>>> from a virtual address pointer. This function takes care of dealing with
>>> vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
>>> DMA transfer size).
>>
>> This seems nice.  Should we also have a further helper on top of this
>> which will get constraints from a dmaengine, it seems like it'd be a
>> common need?
>
> Yep, we could create a wrapper extracting dma_slave caps info,
> converting it to sg_constraints and calling sg_alloc_table_from_buf().
> But let's try to get this function accepted first, and I'll send another
> patch providing this wrapper.
>
> BTW, do you see other things that should be added in sg_constraints?
>

You could compare with the things Solaris uses to describe the 
restrictions on a DMA binding ...

http://docs.oracle.com/cd/E23824_01/html/821-1478/ddi-dma-attr-9s.html#REFMAN9Sddi-dma-attr-9s
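
Mapping a few ddi_dma_attr(9S) fields onto hypothetical sg_constraints
additions, purely as food for thought -- none of these fields exist
today:

	struct sg_dma_limits {
		dma_addr_t addr_lo;   /* ~ dma_attr_addr_lo: lowest usable address */
		dma_addr_t addr_hi;   /* ~ dma_attr_addr_hi: highest usable address */
		u64 seg_boundary;     /* ~ dma_attr_seg: no chunk may cross this */
		u32 max_sg_ents;      /* ~ dma_attr_sgllen: max scatter/gather entries */
		u32 granularity;      /* ~ dma_attr_granular: transfer granularity */
	};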

.Dave.

^ permalink raw reply	[flat|nested] 46+ messages in thread

end of thread, other threads:[~2016-04-05  7:11 UTC | newest]

Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-30 15:39 [PATCH v2 0/7] mtd: nand: sunxi: add support for DMA operations Boris Brezillon
2016-03-30 15:39 ` [PATCH v2 1/7] mtd: nand: sunxi: move some ECC related operations to their own functions Boris Brezillon
2016-03-30 15:39 ` [PATCH v2 2/7] mtd: nand: sunxi: make OOB retrieval optional Boris Brezillon
2016-03-30 15:39 ` [PATCH v2 3/7] mtd: nand: sunxi: make cur_off parameter optional in extra oob helpers Boris Brezillon
2016-03-30 15:39 ` [PATCH v2 4/7] scatterlist: add sg_alloc_table_from_buf() helper Boris Brezillon
2016-03-30 16:51   ` Mark Brown
2016-03-30 18:18     ` Boris Brezillon
2016-03-30 18:34       ` Mark Brown
2016-04-05  7:11       ` Dave Gordon
2016-03-31  4:56   ` [v2,4/7] " Vignesh R
2016-03-31  7:26     ` Boris Brezillon
2016-03-30 15:39 ` [PATCH v2 5/7] mtd: provide helper to prepare buffers for DMA operations Boris Brezillon
2016-03-30 15:39 ` [PATCH v2 6/7] mtd: nand: sunxi: add support for DMA assisted operations Boris Brezillon
2016-03-30 15:39 ` [PATCH v2 7/7] mtd: nand: sunxi: update DT bindings Boris Brezillon
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.