All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/7] mtd: nand: sunxi: add support for DMA operations
@ 2016-03-08 11:15 ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris, linux-mtd
  Cc: Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Boris Brezillon

Hello,

This patch series aims at adding support for DMA assisted operations to
the sunxi_nand driver.

The first 3 patches are just reworks in the existing driver preparing
things for DMA ->read/write_page() operations. Those ones are mainly
re-arranging existing functions, and moving some code into dedicated
functions so we can reuse them when adding the read/write_page()
implementation.

Patch 4 is an attempt to generalize some logic that are duplicated in a
lot of places. It provides a generic solution to create an SG table
from a buffer (passed by virtual address) and its length.
This generic implementation tries to take all possible constraints into
account, like:
- vmallocated buffers
- alignement requirement/preference
- maximum DMA transfer length

I may have missed other things (is there a need for a minimum DMA
transfer constraint?), so don't hesitate to point problems or missing
elements in this implementation.
Note that other subsystems doing the same kind of thing (like SPI of V4L)
could use this implementation. This is why I've put the SPI and V4L
maintainers in Cc.

Patch 5 is providing functions to map/unmap buffers for DMA operations
at the MTD level. This will hopefully limit the number of open-coded
implementations we're currently seeing in a lot of NAND drivers.
Of course, it's making use of sg_alloc_table_from_buf(), introduced in
patch 4.

Patch 6 and 7 are patching the sunxi NAND driver and its DT binding doc
to add DMA support.

I'm particularly interested in getting feedbacks on patch 4 and 5.
Is there a reason nobody ever tried to create such generic functions
(at the scatterlist and MTD levels), and if there are, could you detail
them?

Thanks,

Boris

Side note: patches touching the sunxi NAND driver are depending on
a series posted yesterday [1].

[1]https://lkml.org/lkml/2016/3/7/444

Boris Brezillon (7):
  mtd: nand: sunxi: move some ECC related operations to their own
    functions
  mtd: nand: sunxi: make OOB retrieval optional
  mtd: nand: sunxi: make cur_off parameter optional in extra oob helpers
  scatterlist: add sg_alloc_table_from_buf() helper
  mtd: provide helper to prepare buffers for DMA operations
  mtd: nand: sunxi: add support for DMA assisted operations
  mtd: nand: sunxi: update DT bindings

 .../devicetree/bindings/mtd/sunxi-nand.txt         |   4 +
 drivers/mtd/mtdcore.c                              |  66 +++
 drivers/mtd/nand/sunxi_nand.c                      | 483 ++++++++++++++++++---
 include/linux/mtd/mtd.h                            |  25 ++
 include/linux/scatterlist.h                        |  24 +
 lib/scatterlist.c                                  | 142 ++++++
 6 files changed, 679 insertions(+), 65 deletions(-)

-- 
2.1.4

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH 0/7] mtd: nand: sunxi: add support for DMA operations
@ 2016-03-08 11:15 ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Boris Brezillon

Hello,

This patch series aims at adding support for DMA assisted operations to
the sunxi_nand driver.

The first 3 patches are just reworks in the existing driver preparing
things for DMA ->read/write_page() operations. Those ones are mainly
re-arranging existing functions, and moving some code into dedicated
functions so we can reuse them when adding the read/write_page()
implementation.

Patch 4 is an attempt to generalize some logic that are duplicated in a
lot of places. It provides a generic solution to create an SG table
from a buffer (passed by virtual address) and its length.
This generic implementation tries to take all possible constraints into
account, like:
- vmallocated buffers
- alignement requirement/preference
- maximum DMA transfer length

I may have missed other things (is there a need for a minimum DMA
transfer constraint?), so don't hesitate to point problems or missing
elements in this implementation.
Note that other subsystems doing the same kind of thing (like SPI of V4L)
could use this implementation. This is why I've put the SPI and V4L
maintainers in Cc.

Patch 5 is providing functions to map/unmap buffers for DMA operations
at the MTD level. This will hopefully limit the number of open-coded
implementations we're currently seeing in a lot of NAND drivers.
Of course, it's making use of sg_alloc_table_from_buf(), introduced in
patch 4.

Patch 6 and 7 are patching the sunxi NAND driver and its DT binding doc
to add DMA support.

I'm particularly interested in getting feedbacks on patch 4 and 5.
Is there a reason nobody ever tried to create such generic functions
(at the scatterlist and MTD levels), and if there are, could you detail
them?

Thanks,

Boris

Side note: patches touching the sunxi NAND driver are depending on
a series posted yesterday [1].

[1]https://lkml.org/lkml/2016/3/7/444

Boris Brezillon (7):
  mtd: nand: sunxi: move some ECC related operations to their own
    functions
  mtd: nand: sunxi: make OOB retrieval optional
  mtd: nand: sunxi: make cur_off parameter optional in extra oob helpers
  scatterlist: add sg_alloc_table_from_buf() helper
  mtd: provide helper to prepare buffers for DMA operations
  mtd: nand: sunxi: add support for DMA assisted operations
  mtd: nand: sunxi: update DT bindings

 .../devicetree/bindings/mtd/sunxi-nand.txt         |   4 +
 drivers/mtd/mtdcore.c                              |  66 +++
 drivers/mtd/nand/sunxi_nand.c                      | 483 ++++++++++++++++++---
 include/linux/mtd/mtd.h                            |  25 ++
 include/linux/scatterlist.h                        |  24 +
 lib/scatterlist.c                                  | 142 ++++++
 6 files changed, 679 insertions(+), 65 deletions(-)

-- 
2.1.4

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH 0/7] mtd: nand: sunxi: add support for DMA operations
@ 2016-03-08 11:15 ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: linux-arm-kernel

Hello,

This patch series aims at adding support for DMA assisted operations to
the sunxi_nand driver.

The first 3 patches are just reworks in the existing driver preparing
things for DMA ->read/write_page() operations. Those ones are mainly
re-arranging existing functions, and moving some code into dedicated
functions so we can reuse them when adding the read/write_page()
implementation.

Patch 4 is an attempt to generalize some logic that are duplicated in a
lot of places. It provides a generic solution to create an SG table
from a buffer (passed by virtual address) and its length.
This generic implementation tries to take all possible constraints into
account, like:
- vmallocated buffers
- alignement requirement/preference
- maximum DMA transfer length

I may have missed other things (is there a need for a minimum DMA
transfer constraint?), so don't hesitate to point problems or missing
elements in this implementation.
Note that other subsystems doing the same kind of thing (like SPI of V4L)
could use this implementation. This is why I've put the SPI and V4L
maintainers in Cc.

Patch 5 is providing functions to map/unmap buffers for DMA operations
at the MTD level. This will hopefully limit the number of open-coded
implementations we're currently seeing in a lot of NAND drivers.
Of course, it's making use of sg_alloc_table_from_buf(), introduced in
patch 4.

Patch 6 and 7 are patching the sunxi NAND driver and its DT binding doc
to add DMA support.

I'm particularly interested in getting feedbacks on patch 4 and 5.
Is there a reason nobody ever tried to create such generic functions
(at the scatterlist and MTD levels), and if there are, could you detail
them?

Thanks,

Boris

Side note: patches touching the sunxi NAND driver are depending on
a series posted yesterday [1].

[1]https://lkml.org/lkml/2016/3/7/444

Boris Brezillon (7):
  mtd: nand: sunxi: move some ECC related operations to their own
    functions
  mtd: nand: sunxi: make OOB retrieval optional
  mtd: nand: sunxi: make cur_off parameter optional in extra oob helpers
  scatterlist: add sg_alloc_table_from_buf() helper
  mtd: provide helper to prepare buffers for DMA operations
  mtd: nand: sunxi: add support for DMA assisted operations
  mtd: nand: sunxi: update DT bindings

 .../devicetree/bindings/mtd/sunxi-nand.txt         |   4 +
 drivers/mtd/mtdcore.c                              |  66 +++
 drivers/mtd/nand/sunxi_nand.c                      | 483 ++++++++++++++++++---
 include/linux/mtd/mtd.h                            |  25 ++
 include/linux/scatterlist.h                        |  24 +
 lib/scatterlist.c                                  | 142 ++++++
 6 files changed, 679 insertions(+), 65 deletions(-)

-- 
2.1.4

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH 1/7] mtd: nand: sunxi: move some ECC related operations to their own functions
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris, linux-mtd
  Cc: Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Boris Brezillon

In order to support DMA operations in a clean way we need to extract some
of the logic coded in sunxi_nfc_hw_ecc_read/write_page() into their own
function.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/nand/sunxi_nand.c | 163 ++++++++++++++++++++++++++++--------------
 1 file changed, 108 insertions(+), 55 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 0616f3b..90c121d 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -776,6 +776,94 @@ static inline void sunxi_nfc_user_data_to_buf(u32 user_data, u8 *buf)
 	buf[3] = user_data >> 24;
 }
 
+static inline u32 sunxi_nfc_buf_to_user_data(const u8 *buf)
+{
+	return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
+}
+
+static void sunxi_nfc_hw_ecc_get_prot_oob_bytes(struct mtd_info *mtd, u8 *oob,
+						int step, bool bbm, int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+
+	sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(step)),
+				   oob);
+
+	/* De-randomize the Bad Block Marker. */
+	if (bbm && (nand->options & NAND_NEED_SCRAMBLING))
+		sunxi_nfc_randomize_bbm(mtd, page, oob);
+}
+
+static void sunxi_nfc_hw_ecc_set_prot_oob_bytes(struct mtd_info *mtd,
+						const u8 *oob, int step,
+						bool bbm, int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	u8 user_data[4];
+
+	/* Randomize the Bad Block Marker. */
+	if (bbm && (nand->options & NAND_NEED_SCRAMBLING)) {
+		memcpy(user_data, oob, sizeof(user_data));
+		sunxi_nfc_randomize_bbm(mtd, page, user_data);
+		oob = user_data;
+	}
+
+	writel(sunxi_nfc_buf_to_user_data(oob),
+	       nfc->regs + NFC_REG_USER_DATA(step));
+}
+
+static void sunxi_nfc_hw_ecc_update_stats(struct mtd_info *mtd,
+					  unsigned int *max_bitflips, int ret)
+{
+	if (ret < 0) {
+		mtd->ecc_stats.failed++;
+	} else {
+		mtd->ecc_stats.corrected += ret;
+		*max_bitflips = max_t(unsigned int, *max_bitflips, ret);
+	}
+}
+
+static int sunxi_nfc_hw_ecc_correct(struct mtd_info *mtd, u8 *data, u8 *oob,
+				    int step, bool *erased)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	u32 status, tmp;
+
+	*erased = false;
+
+	status = readl(nfc->regs + NFC_REG_ECC_ST);
+
+	if (status & NFC_ECC_ERR(step))
+		return -EBADMSG;
+
+	if (status & NFC_ECC_PAT_FOUND(step)) {
+		u8 pattern;
+
+		if (unlikely(!(readl(nfc->regs + NFC_REG_PAT_ID) & 0x1))) {
+			pattern = 0x0;
+		} else {
+			pattern = 0xff;
+			*erased = true;
+		}
+
+		if (data)
+			memset(data, pattern, ecc->size);
+
+		if (oob)
+			memset(oob, pattern, ecc->bytes + 4);
+
+		return 0;
+	}
+
+	tmp = readl(nfc->regs + NFC_REG_ECC_ERR_CNT(step));
+
+	return NFC_ECC_ERR_CNT(step, tmp);
+}
+
 static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 				       u8 *data, int data_off,
 				       u8 *oob, int oob_off,
@@ -787,7 +875,7 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
 	struct nand_ecc_ctrl *ecc = &nand->ecc;
 	int raw_mode = 0;
-	u32 status;
+	bool erased;
 	int ret;
 
 	if (*cur_off != data_off)
@@ -813,27 +901,11 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 
 	*cur_off = oob_off + ecc->bytes + 4;
 
-	status = readl(nfc->regs + NFC_REG_ECC_ST);
-	if (status & NFC_ECC_PAT_FOUND(0)) {
-		u8 pattern = 0xff;
-
-		if (unlikely(!(readl(nfc->regs + NFC_REG_PAT_ID) & 0x1)))
-			pattern = 0x0;
-
-		memset(data, pattern, ecc->size);
-		memset(oob, pattern, ecc->bytes + 4);
-
+	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob, 0, &erased);
+	if (erased)
 		return 1;
-	}
-
-	ret = NFC_ECC_ERR_CNT(0, readl(nfc->regs + NFC_REG_ECC_ERR_CNT(0)));
-
-	memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-	nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-	sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4, true, page);
-
-	if (status & NFC_ECC_ERR(0)) {
+	if (ret < 0) {
 		/*
 		 * Re-read the data with the randomizer disabled to identify
 		 * bitflips in erased pages.
@@ -841,35 +913,32 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 		if (nand->options & NAND_NEED_SCRAMBLING) {
 			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, data_off, -1);
 			nand->read_buf(mtd, data, ecc->size);
-			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-			nand->read_buf(mtd, oob, ecc->bytes + 4);
+		} else {
+			memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE,
+				      ecc->size);
 		}
 
+		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+		nand->read_buf(mtd, oob, ecc->bytes + 4);
+
 		ret = nand_check_erased_ecc_chunk(data,	ecc->size,
 						  oob, ecc->bytes + 4,
 						  NULL, 0, ecc->strength);
 		if (ret >= 0)
 			raw_mode = 1;
 	} else {
-		/*
-		 * The engine protects 4 bytes of OOB data per chunk.
-		 * Retrieve the corrected OOB bytes.
-		 */
-		sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(0)),
-					   oob);
+		memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-		/* De-randomize the Bad Block Marker. */
-		if (bbm && nand->options & NAND_NEED_SCRAMBLING)
-			sunxi_nfc_randomize_bbm(mtd, page, oob);
-	}
+		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+		sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
+					      true, page);
 
-	if (ret < 0) {
-		mtd->ecc_stats.failed++;
-	} else {
-		mtd->ecc_stats.corrected += ret;
-		*max_bitflips = max_t(unsigned int, *max_bitflips, ret);
+		sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
+						    bbm, page);
 	}
 
+	sunxi_nfc_hw_ecc_update_stats(mtd, max_bitflips, ret);
+
 	return raw_mode;
 }
 
@@ -898,11 +967,6 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 	*cur_off = mtd->oobsize + mtd->writesize;
 }
 
-static inline u32 sunxi_nfc_buf_to_user_data(const u8 *buf)
-{
-	return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
-}
-
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 					const u8 *data, int data_off,
 					const u8 *oob, int oob_off,
@@ -919,19 +983,6 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 
 	sunxi_nfc_randomizer_write_buf(mtd, data, ecc->size, false, page);
 
-	/* Fill OOB data in */
-	if ((nand->options & NAND_NEED_SCRAMBLING) && bbm) {
-		u8 user_data[4];
-
-		memcpy(user_data, oob, 4);
-		sunxi_nfc_randomize_bbm(mtd, page, user_data);
-		writel(sunxi_nfc_buf_to_user_data(user_data),
-		       nfc->regs + NFC_REG_USER_DATA(0));
-	} else {
-		writel(sunxi_nfc_buf_to_user_data(oob),
-		       nfc->regs + NFC_REG_USER_DATA(0));
-	}
-
 	if (data_off + ecc->size != oob_off)
 		nand->cmdfunc(mtd, NAND_CMD_RNDIN, oob_off, -1);
 
@@ -940,6 +991,8 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 		return ret;
 
 	sunxi_nfc_randomizer_enable(mtd);
+	sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, 0, bbm, page);
+
 	writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD |
 	       NFC_ACCESS_DIR | NFC_ECC_OP,
 	       nfc->regs + NFC_REG_CMD);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 1/7] mtd: nand: sunxi: move some ECC related operations to their own functions
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Boris Brezillon

In order to support DMA operations in a clean way we need to extract some
of the logic coded in sunxi_nfc_hw_ecc_read/write_page() into their own
function.

Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
---
 drivers/mtd/nand/sunxi_nand.c | 163 ++++++++++++++++++++++++++++--------------
 1 file changed, 108 insertions(+), 55 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 0616f3b..90c121d 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -776,6 +776,94 @@ static inline void sunxi_nfc_user_data_to_buf(u32 user_data, u8 *buf)
 	buf[3] = user_data >> 24;
 }
 
+static inline u32 sunxi_nfc_buf_to_user_data(const u8 *buf)
+{
+	return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
+}
+
+static void sunxi_nfc_hw_ecc_get_prot_oob_bytes(struct mtd_info *mtd, u8 *oob,
+						int step, bool bbm, int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+
+	sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(step)),
+				   oob);
+
+	/* De-randomize the Bad Block Marker. */
+	if (bbm && (nand->options & NAND_NEED_SCRAMBLING))
+		sunxi_nfc_randomize_bbm(mtd, page, oob);
+}
+
+static void sunxi_nfc_hw_ecc_set_prot_oob_bytes(struct mtd_info *mtd,
+						const u8 *oob, int step,
+						bool bbm, int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	u8 user_data[4];
+
+	/* Randomize the Bad Block Marker. */
+	if (bbm && (nand->options & NAND_NEED_SCRAMBLING)) {
+		memcpy(user_data, oob, sizeof(user_data));
+		sunxi_nfc_randomize_bbm(mtd, page, user_data);
+		oob = user_data;
+	}
+
+	writel(sunxi_nfc_buf_to_user_data(oob),
+	       nfc->regs + NFC_REG_USER_DATA(step));
+}
+
+static void sunxi_nfc_hw_ecc_update_stats(struct mtd_info *mtd,
+					  unsigned int *max_bitflips, int ret)
+{
+	if (ret < 0) {
+		mtd->ecc_stats.failed++;
+	} else {
+		mtd->ecc_stats.corrected += ret;
+		*max_bitflips = max_t(unsigned int, *max_bitflips, ret);
+	}
+}
+
+static int sunxi_nfc_hw_ecc_correct(struct mtd_info *mtd, u8 *data, u8 *oob,
+				    int step, bool *erased)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	u32 status, tmp;
+
+	*erased = false;
+
+	status = readl(nfc->regs + NFC_REG_ECC_ST);
+
+	if (status & NFC_ECC_ERR(step))
+		return -EBADMSG;
+
+	if (status & NFC_ECC_PAT_FOUND(step)) {
+		u8 pattern;
+
+		if (unlikely(!(readl(nfc->regs + NFC_REG_PAT_ID) & 0x1))) {
+			pattern = 0x0;
+		} else {
+			pattern = 0xff;
+			*erased = true;
+		}
+
+		if (data)
+			memset(data, pattern, ecc->size);
+
+		if (oob)
+			memset(oob, pattern, ecc->bytes + 4);
+
+		return 0;
+	}
+
+	tmp = readl(nfc->regs + NFC_REG_ECC_ERR_CNT(step));
+
+	return NFC_ECC_ERR_CNT(step, tmp);
+}
+
 static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 				       u8 *data, int data_off,
 				       u8 *oob, int oob_off,
@@ -787,7 +875,7 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
 	struct nand_ecc_ctrl *ecc = &nand->ecc;
 	int raw_mode = 0;
-	u32 status;
+	bool erased;
 	int ret;
 
 	if (*cur_off != data_off)
@@ -813,27 +901,11 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 
 	*cur_off = oob_off + ecc->bytes + 4;
 
-	status = readl(nfc->regs + NFC_REG_ECC_ST);
-	if (status & NFC_ECC_PAT_FOUND(0)) {
-		u8 pattern = 0xff;
-
-		if (unlikely(!(readl(nfc->regs + NFC_REG_PAT_ID) & 0x1)))
-			pattern = 0x0;
-
-		memset(data, pattern, ecc->size);
-		memset(oob, pattern, ecc->bytes + 4);
-
+	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob, 0, &erased);
+	if (erased)
 		return 1;
-	}
-
-	ret = NFC_ECC_ERR_CNT(0, readl(nfc->regs + NFC_REG_ECC_ERR_CNT(0)));
-
-	memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-	nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-	sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4, true, page);
-
-	if (status & NFC_ECC_ERR(0)) {
+	if (ret < 0) {
 		/*
 		 * Re-read the data with the randomizer disabled to identify
 		 * bitflips in erased pages.
@@ -841,35 +913,32 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 		if (nand->options & NAND_NEED_SCRAMBLING) {
 			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, data_off, -1);
 			nand->read_buf(mtd, data, ecc->size);
-			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-			nand->read_buf(mtd, oob, ecc->bytes + 4);
+		} else {
+			memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE,
+				      ecc->size);
 		}
 
+		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+		nand->read_buf(mtd, oob, ecc->bytes + 4);
+
 		ret = nand_check_erased_ecc_chunk(data,	ecc->size,
 						  oob, ecc->bytes + 4,
 						  NULL, 0, ecc->strength);
 		if (ret >= 0)
 			raw_mode = 1;
 	} else {
-		/*
-		 * The engine protects 4 bytes of OOB data per chunk.
-		 * Retrieve the corrected OOB bytes.
-		 */
-		sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(0)),
-					   oob);
+		memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-		/* De-randomize the Bad Block Marker. */
-		if (bbm && nand->options & NAND_NEED_SCRAMBLING)
-			sunxi_nfc_randomize_bbm(mtd, page, oob);
-	}
+		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+		sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
+					      true, page);
 
-	if (ret < 0) {
-		mtd->ecc_stats.failed++;
-	} else {
-		mtd->ecc_stats.corrected += ret;
-		*max_bitflips = max_t(unsigned int, *max_bitflips, ret);
+		sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
+						    bbm, page);
 	}
 
+	sunxi_nfc_hw_ecc_update_stats(mtd, max_bitflips, ret);
+
 	return raw_mode;
 }
 
@@ -898,11 +967,6 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 	*cur_off = mtd->oobsize + mtd->writesize;
 }
 
-static inline u32 sunxi_nfc_buf_to_user_data(const u8 *buf)
-{
-	return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
-}
-
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 					const u8 *data, int data_off,
 					const u8 *oob, int oob_off,
@@ -919,19 +983,6 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 
 	sunxi_nfc_randomizer_write_buf(mtd, data, ecc->size, false, page);
 
-	/* Fill OOB data in */
-	if ((nand->options & NAND_NEED_SCRAMBLING) && bbm) {
-		u8 user_data[4];
-
-		memcpy(user_data, oob, 4);
-		sunxi_nfc_randomize_bbm(mtd, page, user_data);
-		writel(sunxi_nfc_buf_to_user_data(user_data),
-		       nfc->regs + NFC_REG_USER_DATA(0));
-	} else {
-		writel(sunxi_nfc_buf_to_user_data(oob),
-		       nfc->regs + NFC_REG_USER_DATA(0));
-	}
-
 	if (data_off + ecc->size != oob_off)
 		nand->cmdfunc(mtd, NAND_CMD_RNDIN, oob_off, -1);
 
@@ -940,6 +991,8 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 		return ret;
 
 	sunxi_nfc_randomizer_enable(mtd);
+	sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, 0, bbm, page);
+
 	writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD |
 	       NFC_ACCESS_DIR | NFC_ECC_OP,
 	       nfc->regs + NFC_REG_CMD);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 1/7] mtd: nand: sunxi: move some ECC related operations to their own functions
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: linux-arm-kernel

In order to support DMA operations in a clean way we need to extract some
of the logic coded in sunxi_nfc_hw_ecc_read/write_page() into their own
function.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/nand/sunxi_nand.c | 163 ++++++++++++++++++++++++++++--------------
 1 file changed, 108 insertions(+), 55 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 0616f3b..90c121d 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -776,6 +776,94 @@ static inline void sunxi_nfc_user_data_to_buf(u32 user_data, u8 *buf)
 	buf[3] = user_data >> 24;
 }
 
+static inline u32 sunxi_nfc_buf_to_user_data(const u8 *buf)
+{
+	return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
+}
+
+static void sunxi_nfc_hw_ecc_get_prot_oob_bytes(struct mtd_info *mtd, u8 *oob,
+						int step, bool bbm, int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+
+	sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(step)),
+				   oob);
+
+	/* De-randomize the Bad Block Marker. */
+	if (bbm && (nand->options & NAND_NEED_SCRAMBLING))
+		sunxi_nfc_randomize_bbm(mtd, page, oob);
+}
+
+static void sunxi_nfc_hw_ecc_set_prot_oob_bytes(struct mtd_info *mtd,
+						const u8 *oob, int step,
+						bool bbm, int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	u8 user_data[4];
+
+	/* Randomize the Bad Block Marker. */
+	if (bbm && (nand->options & NAND_NEED_SCRAMBLING)) {
+		memcpy(user_data, oob, sizeof(user_data));
+		sunxi_nfc_randomize_bbm(mtd, page, user_data);
+		oob = user_data;
+	}
+
+	writel(sunxi_nfc_buf_to_user_data(oob),
+	       nfc->regs + NFC_REG_USER_DATA(step));
+}
+
+static void sunxi_nfc_hw_ecc_update_stats(struct mtd_info *mtd,
+					  unsigned int *max_bitflips, int ret)
+{
+	if (ret < 0) {
+		mtd->ecc_stats.failed++;
+	} else {
+		mtd->ecc_stats.corrected += ret;
+		*max_bitflips = max_t(unsigned int, *max_bitflips, ret);
+	}
+}
+
+static int sunxi_nfc_hw_ecc_correct(struct mtd_info *mtd, u8 *data, u8 *oob,
+				    int step, bool *erased)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	u32 status, tmp;
+
+	*erased = false;
+
+	status = readl(nfc->regs + NFC_REG_ECC_ST);
+
+	if (status & NFC_ECC_ERR(step))
+		return -EBADMSG;
+
+	if (status & NFC_ECC_PAT_FOUND(step)) {
+		u8 pattern;
+
+		if (unlikely(!(readl(nfc->regs + NFC_REG_PAT_ID) & 0x1))) {
+			pattern = 0x0;
+		} else {
+			pattern = 0xff;
+			*erased = true;
+		}
+
+		if (data)
+			memset(data, pattern, ecc->size);
+
+		if (oob)
+			memset(oob, pattern, ecc->bytes + 4);
+
+		return 0;
+	}
+
+	tmp = readl(nfc->regs + NFC_REG_ECC_ERR_CNT(step));
+
+	return NFC_ECC_ERR_CNT(step, tmp);
+}
+
 static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 				       u8 *data, int data_off,
 				       u8 *oob, int oob_off,
@@ -787,7 +875,7 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
 	struct nand_ecc_ctrl *ecc = &nand->ecc;
 	int raw_mode = 0;
-	u32 status;
+	bool erased;
 	int ret;
 
 	if (*cur_off != data_off)
@@ -813,27 +901,11 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 
 	*cur_off = oob_off + ecc->bytes + 4;
 
-	status = readl(nfc->regs + NFC_REG_ECC_ST);
-	if (status & NFC_ECC_PAT_FOUND(0)) {
-		u8 pattern = 0xff;
-
-		if (unlikely(!(readl(nfc->regs + NFC_REG_PAT_ID) & 0x1)))
-			pattern = 0x0;
-
-		memset(data, pattern, ecc->size);
-		memset(oob, pattern, ecc->bytes + 4);
-
+	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob, 0, &erased);
+	if (erased)
 		return 1;
-	}
-
-	ret = NFC_ECC_ERR_CNT(0, readl(nfc->regs + NFC_REG_ECC_ERR_CNT(0)));
-
-	memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-	nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-	sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4, true, page);
-
-	if (status & NFC_ECC_ERR(0)) {
+	if (ret < 0) {
 		/*
 		 * Re-read the data with the randomizer disabled to identify
 		 * bitflips in erased pages.
@@ -841,35 +913,32 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 		if (nand->options & NAND_NEED_SCRAMBLING) {
 			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, data_off, -1);
 			nand->read_buf(mtd, data, ecc->size);
-			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-			nand->read_buf(mtd, oob, ecc->bytes + 4);
+		} else {
+			memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE,
+				      ecc->size);
 		}
 
+		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+		nand->read_buf(mtd, oob, ecc->bytes + 4);
+
 		ret = nand_check_erased_ecc_chunk(data,	ecc->size,
 						  oob, ecc->bytes + 4,
 						  NULL, 0, ecc->strength);
 		if (ret >= 0)
 			raw_mode = 1;
 	} else {
-		/*
-		 * The engine protects 4 bytes of OOB data per chunk.
-		 * Retrieve the corrected OOB bytes.
-		 */
-		sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(0)),
-					   oob);
+		memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-		/* De-randomize the Bad Block Marker. */
-		if (bbm && nand->options & NAND_NEED_SCRAMBLING)
-			sunxi_nfc_randomize_bbm(mtd, page, oob);
-	}
+		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+		sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
+					      true, page);
 
-	if (ret < 0) {
-		mtd->ecc_stats.failed++;
-	} else {
-		mtd->ecc_stats.corrected += ret;
-		*max_bitflips = max_t(unsigned int, *max_bitflips, ret);
+		sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
+						    bbm, page);
 	}
 
+	sunxi_nfc_hw_ecc_update_stats(mtd, max_bitflips, ret);
+
 	return raw_mode;
 }
 
@@ -898,11 +967,6 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 	*cur_off = mtd->oobsize + mtd->writesize;
 }
 
-static inline u32 sunxi_nfc_buf_to_user_data(const u8 *buf)
-{
-	return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
-}
-
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 					const u8 *data, int data_off,
 					const u8 *oob, int oob_off,
@@ -919,19 +983,6 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 
 	sunxi_nfc_randomizer_write_buf(mtd, data, ecc->size, false, page);
 
-	/* Fill OOB data in */
-	if ((nand->options & NAND_NEED_SCRAMBLING) && bbm) {
-		u8 user_data[4];
-
-		memcpy(user_data, oob, 4);
-		sunxi_nfc_randomize_bbm(mtd, page, user_data);
-		writel(sunxi_nfc_buf_to_user_data(user_data),
-		       nfc->regs + NFC_REG_USER_DATA(0));
-	} else {
-		writel(sunxi_nfc_buf_to_user_data(oob),
-		       nfc->regs + NFC_REG_USER_DATA(0));
-	}
-
 	if (data_off + ecc->size != oob_off)
 		nand->cmdfunc(mtd, NAND_CMD_RNDIN, oob_off, -1);
 
@@ -940,6 +991,8 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 		return ret;
 
 	sunxi_nfc_randomizer_enable(mtd);
+	sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, 0, bbm, page);
+
 	writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD |
 	       NFC_ACCESS_DIR | NFC_ECC_OP,
 	       nfc->regs + NFC_REG_CMD);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 2/7] mtd: nand: sunxi: make OOB retrieval optional
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris, linux-mtd
  Cc: Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Boris Brezillon

sunxi_nfc_hw_ecc_read_chunk() always retrieves the ECC and protected free
bytes, no matter if the user really asked for it or not. This can take a
non negligible amount of time, especially on NAND chips exposing large OOB
areas (> 1KB). Make it optional.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/nand/sunxi_nand.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 90c121d..7b3ae72 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -869,7 +869,7 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 				       u8 *oob, int oob_off,
 				       int *cur_off,
 				       unsigned int *max_bitflips,
-				       bool bbm, int page)
+				       bool bbm, bool oob_required, int page)
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
 	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
@@ -901,7 +901,8 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 
 	*cur_off = oob_off + ecc->bytes + 4;
 
-	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob, 0, &erased);
+	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob_required ? oob : NULL, 0,
+				       &erased);
 	if (erased)
 		return 1;
 
@@ -929,12 +930,14 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 	} else {
 		memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-		sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
-					      true, page);
+		if (oob_required) {
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
+						      true, page);
 
-		sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
-						    bbm, page);
+			sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
+							    bbm, page);
+		}
 	}
 
 	sunxi_nfc_hw_ecc_update_stats(mtd, max_bitflips, ret);
@@ -1048,7 +1051,7 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob,
 						  oob_off + mtd->writesize,
 						  &cur_off, &max_bitflips,
-						  !i, page);
+						  !i, oob_required, page);
 		if (ret < 0)
 			return ret;
 		else if (ret)
@@ -1086,8 +1089,8 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off,
 						  oob,
 						  oob_off + mtd->writesize,
-						  &cur_off, &max_bitflips,
-						  !i, page);
+						  &cur_off, &max_bitflips, !i,
+						  false, page);
 		if (ret < 0)
 			return ret;
 	}
@@ -1149,7 +1152,9 @@ static int sunxi_nfc_hw_syndrome_ecc_read_page(struct mtd_info *mtd,
 
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob,
 						  oob_off, &cur_off,
-						  &max_bitflips, !i, page);
+						  &max_bitflips, !i,
+						  oob_required,
+						  page);
 		if (ret < 0)
 			return ret;
 		else if (ret)
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 2/7] mtd: nand: sunxi: make OOB retrieval optional
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Boris Brezillon

sunxi_nfc_hw_ecc_read_chunk() always retrieves the ECC and protected free
bytes, no matter if the user really asked for it or not. This can take a
non negligible amount of time, especially on NAND chips exposing large OOB
areas (> 1KB). Make it optional.

Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
---
 drivers/mtd/nand/sunxi_nand.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 90c121d..7b3ae72 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -869,7 +869,7 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 				       u8 *oob, int oob_off,
 				       int *cur_off,
 				       unsigned int *max_bitflips,
-				       bool bbm, int page)
+				       bool bbm, bool oob_required, int page)
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
 	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
@@ -901,7 +901,8 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 
 	*cur_off = oob_off + ecc->bytes + 4;
 
-	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob, 0, &erased);
+	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob_required ? oob : NULL, 0,
+				       &erased);
 	if (erased)
 		return 1;
 
@@ -929,12 +930,14 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 	} else {
 		memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-		sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
-					      true, page);
+		if (oob_required) {
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
+						      true, page);
 
-		sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
-						    bbm, page);
+			sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
+							    bbm, page);
+		}
 	}
 
 	sunxi_nfc_hw_ecc_update_stats(mtd, max_bitflips, ret);
@@ -1048,7 +1051,7 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob,
 						  oob_off + mtd->writesize,
 						  &cur_off, &max_bitflips,
-						  !i, page);
+						  !i, oob_required, page);
 		if (ret < 0)
 			return ret;
 		else if (ret)
@@ -1086,8 +1089,8 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off,
 						  oob,
 						  oob_off + mtd->writesize,
-						  &cur_off, &max_bitflips,
-						  !i, page);
+						  &cur_off, &max_bitflips, !i,
+						  false, page);
 		if (ret < 0)
 			return ret;
 	}
@@ -1149,7 +1152,9 @@ static int sunxi_nfc_hw_syndrome_ecc_read_page(struct mtd_info *mtd,
 
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob,
 						  oob_off, &cur_off,
-						  &max_bitflips, !i, page);
+						  &max_bitflips, !i,
+						  oob_required,
+						  page);
 		if (ret < 0)
 			return ret;
 		else if (ret)
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 2/7] mtd: nand: sunxi: make OOB retrieval optional
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: linux-arm-kernel

sunxi_nfc_hw_ecc_read_chunk() always retrieves the ECC and protected free
bytes, no matter if the user really asked for it or not. This can take a
non negligible amount of time, especially on NAND chips exposing large OOB
areas (> 1KB). Make it optional.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/nand/sunxi_nand.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 90c121d..7b3ae72 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -869,7 +869,7 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 				       u8 *oob, int oob_off,
 				       int *cur_off,
 				       unsigned int *max_bitflips,
-				       bool bbm, int page)
+				       bool bbm, bool oob_required, int page)
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
 	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
@@ -901,7 +901,8 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 
 	*cur_off = oob_off + ecc->bytes + 4;
 
-	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob, 0, &erased);
+	ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob_required ? oob : NULL, 0,
+				       &erased);
 	if (erased)
 		return 1;
 
@@ -929,12 +930,14 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
 	} else {
 		memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
 
-		nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
-		sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
-					      true, page);
+		if (oob_required) {
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
+						      true, page);
 
-		sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
-						    bbm, page);
+			sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
+							    bbm, page);
+		}
 	}
 
 	sunxi_nfc_hw_ecc_update_stats(mtd, max_bitflips, ret);
@@ -1048,7 +1051,7 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob,
 						  oob_off + mtd->writesize,
 						  &cur_off, &max_bitflips,
-						  !i, page);
+						  !i, oob_required, page);
 		if (ret < 0)
 			return ret;
 		else if (ret)
@@ -1086,8 +1089,8 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off,
 						  oob,
 						  oob_off + mtd->writesize,
-						  &cur_off, &max_bitflips,
-						  !i, page);
+						  &cur_off, &max_bitflips, !i,
+						  false, page);
 		if (ret < 0)
 			return ret;
 	}
@@ -1149,7 +1152,9 @@ static int sunxi_nfc_hw_syndrome_ecc_read_page(struct mtd_info *mtd,
 
 		ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob,
 						  oob_off, &cur_off,
-						  &max_bitflips, !i, page);
+						  &max_bitflips, !i,
+						  oob_required,
+						  page);
 		if (ret < 0)
 			return ret;
 		else if (ret)
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 3/7] mtd: nand: sunxi: make cur_off parameter optional in extra oob helpers
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris, linux-mtd
  Cc: Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Boris Brezillon

Allow for NULL cur_offs values when the caller does not know where the
NAND page register pointer point to.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/nand/sunxi_nand.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 7b3ae72..07c3af7 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -957,7 +957,7 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 	if (len <= 0)
 		return;
 
-	if (*cur_off != offset)
+	if (!cur_off || *cur_off != offset)
 		nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
 			      offset + mtd->writesize, -1);
 
@@ -967,7 +967,8 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 		sunxi_nfc_randomizer_read_buf(mtd, oob + offset, len,
 					      false, page);
 
-	*cur_off = mtd->oobsize + mtd->writesize;
+	if (cur_off)
+		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
@@ -1022,13 +1023,14 @@ static void sunxi_nfc_hw_ecc_write_extra_oob(struct mtd_info *mtd,
 	if (len <= 0)
 		return;
 
-	if (*cur_off != offset)
+	if (!cur_off || *cur_off != offset)
 		nand->cmdfunc(mtd, NAND_CMD_RNDIN,
 			      offset + mtd->writesize, -1);
 
 	sunxi_nfc_randomizer_write_buf(mtd, oob + offset, len, false, page);
 
-	*cur_off = mtd->oobsize + mtd->writesize;
+	if (cur_off)
+		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
 static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 3/7] mtd: nand: sunxi: make cur_off parameter optional in extra oob helpers
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Boris Brezillon

Allow for NULL cur_offs values when the caller does not know where the
NAND page register pointer point to.

Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
---
 drivers/mtd/nand/sunxi_nand.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 7b3ae72..07c3af7 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -957,7 +957,7 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 	if (len <= 0)
 		return;
 
-	if (*cur_off != offset)
+	if (!cur_off || *cur_off != offset)
 		nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
 			      offset + mtd->writesize, -1);
 
@@ -967,7 +967,8 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 		sunxi_nfc_randomizer_read_buf(mtd, oob + offset, len,
 					      false, page);
 
-	*cur_off = mtd->oobsize + mtd->writesize;
+	if (cur_off)
+		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
@@ -1022,13 +1023,14 @@ static void sunxi_nfc_hw_ecc_write_extra_oob(struct mtd_info *mtd,
 	if (len <= 0)
 		return;
 
-	if (*cur_off != offset)
+	if (!cur_off || *cur_off != offset)
 		nand->cmdfunc(mtd, NAND_CMD_RNDIN,
 			      offset + mtd->writesize, -1);
 
 	sunxi_nfc_randomizer_write_buf(mtd, oob + offset, len, false, page);
 
-	*cur_off = mtd->oobsize + mtd->writesize;
+	if (cur_off)
+		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
 static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 3/7] mtd: nand: sunxi: make cur_off parameter optional in extra oob helpers
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: linux-arm-kernel

Allow for NULL cur_offs values when the caller does not know where the
NAND page register pointer point to.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/nand/sunxi_nand.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 7b3ae72..07c3af7 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -957,7 +957,7 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 	if (len <= 0)
 		return;
 
-	if (*cur_off != offset)
+	if (!cur_off || *cur_off != offset)
 		nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
 			      offset + mtd->writesize, -1);
 
@@ -967,7 +967,8 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 		sunxi_nfc_randomizer_read_buf(mtd, oob + offset, len,
 					      false, page);
 
-	*cur_off = mtd->oobsize + mtd->writesize;
+	if (cur_off)
+		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
@@ -1022,13 +1023,14 @@ static void sunxi_nfc_hw_ecc_write_extra_oob(struct mtd_info *mtd,
 	if (len <= 0)
 		return;
 
-	if (*cur_off != offset)
+	if (!cur_off || *cur_off != offset)
 		nand->cmdfunc(mtd, NAND_CMD_RNDIN,
 			      offset + mtd->writesize, -1);
 
 	sunxi_nfc_randomizer_write_buf(mtd, oob + offset, len, false, page);
 
-	*cur_off = mtd->oobsize + mtd->writesize;
+	if (cur_off)
+		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
 static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris, linux-mtd
  Cc: Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Boris Brezillon

sg_alloc_table_from_buf() provides an easy solution to create an sg_table
from a virtual address pointer. This function takes care of dealing with
vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
DMA transfer size).

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 include/linux/scatterlist.h |  24 ++++++++
 lib/scatterlist.c           | 142 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 166 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 556ec1e..4a75362 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -41,6 +41,27 @@ struct sg_table {
 	unsigned int orig_nents;	/* original size of list */
 };
 
+/**
+ * struct sg_constraints - SG constraints structure
+ *
+ * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be smaller
+ *		   than this value. Zero means no constraint.
+ * @required_alignment: minimum alignment. Is used for both size and pointer
+ *			alignment. If this constraint is not met, the function
+ *			should return -EINVAL.
+ * @preferred_alignment: preferred alignment. Mainly used to optimize
+ *			 throughput when the DMA engine performs better when
+ *			 doing aligned accesses.
+ *
+ * This structure is here to help sg_alloc_table_from_buf() create the optimal
+ * SG list based on DMA engine constraints.
+ */
+struct sg_constraints {
+	size_t max_chunk_len;
+	size_t required_alignment;
+	size_t preferred_alignment;
+};
+
 /*
  * Notes on SG table design.
  *
@@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
 	struct page **pages, unsigned int n_pages,
 	unsigned long offset, unsigned long size,
 	gfp_t gfp_mask);
+int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
+			    const struct sg_constraints *constraints,
+			    gfp_t gfp_mask);
 
 size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
 		      size_t buflen, off_t skip, bool to_buffer);
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index bafa993..706b583 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages);
 
+static size_t sg_buf_chunk_len(const void *buf, size_t len,
+			       const struct sg_constraints *cons)
+{
+	size_t chunk_len = len;
+
+	if (cons->max_chunk_len)
+		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
+
+	if (is_vmalloc_addr(buf))
+		chunk_len = min_t(size_t, chunk_len,
+				  PAGE_SIZE - offset_in_page(buf));
+
+	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
+		const void *aligned_buf = PTR_ALIGN(buf,
+						    cons->preferred_alignment);
+		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
+
+		chunk_len = min_t(size_t, chunk_len, unaligned_len);
+	} else if (chunk_len > cons->preferred_alignment) {
+		chunk_len &= ~(cons->preferred_alignment - 1);
+	}
+
+	return chunk_len;
+}
+
+#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
+	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
+	     len;							\
+	     len -= chunk_len, buf += chunk_len,			\
+	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
+
+static int sg_check_constraints(struct sg_constraints *cons,
+				const void *buf, size_t len)
+{
+	if (!cons->required_alignment)
+		cons->required_alignment = 1;
+
+	if (!cons->preferred_alignment)
+		cons->preferred_alignment = cons->required_alignment;
+
+	/* Test if buf and len are properly aligned. */
+	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
+	    !IS_ALIGNED(len, cons->required_alignment))
+		return -EINVAL;
+
+	/*
+	 * if the buffer has been vmallocated and required_alignment is
+	 * more than PAGE_SIZE we cannot guarantee it.
+	 */
+	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
+		return -EINVAL;
+
+	/*
+	 * max_chunk_len has to be aligned to required_alignment to
+	 * guarantee that all buffer chunks are aligned correctly.
+	 */
+	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
+		return -EINVAL;
+
+	/*
+	 * preferred_alignment has to be aligned to required_alignment
+	 * to avoid misalignment of buffer chunks.
+	 */
+	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * sg_alloc_table_from_buf - create an SG table from a buffer
+ *
+ * @sgt: SG table
+ * @buf: buffer you want to create this SG table from
+ * @len: length of buf
+ * @constraints: optional constraints to take into account when creating
+ *		 the SG table. Can be NULL if no specific constraints are
+ *		 required.
+ * @gfp_mask: type of allocation to use when creating the table
+ *
+ * This function creates an SG table from a buffer, its length and some
+ * SG constraints.
+ *
+ * Note: This function supports vmallocated buffers.
+ */
+int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
+			    const struct sg_constraints *constraints,
+			    gfp_t gfp_mask)
+{
+	struct sg_constraints cons = { };
+	size_t remaining, chunk_len;
+	const void *sg_buf;
+	int i, ret;
+
+	if (constraints)
+		cons = *constraints;
+
+	ret = sg_check_constraints(&cons, buf, len);
+	if (ret)
+		return ret;
+
+	sg_buf = buf;
+	remaining = len;
+	i = 0;
+	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
+		i++;
+
+	ret = sg_alloc_table(sgt, i, gfp_mask);
+	if (ret)
+		return ret;
+
+	sg_buf = buf;
+	remaining = len;
+	i = 0;
+	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
+		if (is_vmalloc_addr(sg_buf)) {
+			struct page *vm_page;
+
+			vm_page = vmalloc_to_page(sg_buf);
+			if (!vm_page) {
+				ret = -ENOMEM;
+				goto err_free_table;
+			}
+
+			sg_set_page(&sgt->sgl[i], vm_page, chunk_len,
+				    offset_in_page(sg_buf));
+		} else {
+			sg_set_buf(&sgt->sgl[i], sg_buf, chunk_len);
+		}
+
+		i++;
+	}
+
+	return 0;
+
+err_free_table:
+	sg_free_table(sgt);
+
+	return ret;
+}
+EXPORT_SYMBOL(sg_alloc_table_from_buf);
+
 void __sg_page_iter_start(struct sg_page_iter *piter,
 			  struct scatterlist *sglist, unsigned int nents,
 			  unsigned long pgoffset)
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Boris Brezillon

sg_alloc_table_from_buf() provides an easy solution to create an sg_table
from a virtual address pointer. This function takes care of dealing with
vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
DMA transfer size).

Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
---
 include/linux/scatterlist.h |  24 ++++++++
 lib/scatterlist.c           | 142 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 166 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 556ec1e..4a75362 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -41,6 +41,27 @@ struct sg_table {
 	unsigned int orig_nents;	/* original size of list */
 };
 
+/**
+ * struct sg_constraints - SG constraints structure
+ *
+ * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be smaller
+ *		   than this value. Zero means no constraint.
+ * @required_alignment: minimum alignment. Is used for both size and pointer
+ *			alignment. If this constraint is not met, the function
+ *			should return -EINVAL.
+ * @preferred_alignment: preferred alignment. Mainly used to optimize
+ *			 throughput when the DMA engine performs better when
+ *			 doing aligned accesses.
+ *
+ * This structure is here to help sg_alloc_table_from_buf() create the optimal
+ * SG list based on DMA engine constraints.
+ */
+struct sg_constraints {
+	size_t max_chunk_len;
+	size_t required_alignment;
+	size_t preferred_alignment;
+};
+
 /*
  * Notes on SG table design.
  *
@@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
 	struct page **pages, unsigned int n_pages,
 	unsigned long offset, unsigned long size,
 	gfp_t gfp_mask);
+int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
+			    const struct sg_constraints *constraints,
+			    gfp_t gfp_mask);
 
 size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
 		      size_t buflen, off_t skip, bool to_buffer);
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index bafa993..706b583 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages);
 
+static size_t sg_buf_chunk_len(const void *buf, size_t len,
+			       const struct sg_constraints *cons)
+{
+	size_t chunk_len = len;
+
+	if (cons->max_chunk_len)
+		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
+
+	if (is_vmalloc_addr(buf))
+		chunk_len = min_t(size_t, chunk_len,
+				  PAGE_SIZE - offset_in_page(buf));
+
+	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
+		const void *aligned_buf = PTR_ALIGN(buf,
+						    cons->preferred_alignment);
+		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
+
+		chunk_len = min_t(size_t, chunk_len, unaligned_len);
+	} else if (chunk_len > cons->preferred_alignment) {
+		chunk_len &= ~(cons->preferred_alignment - 1);
+	}
+
+	return chunk_len;
+}
+
+#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
+	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
+	     len;							\
+	     len -= chunk_len, buf += chunk_len,			\
+	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
+
+static int sg_check_constraints(struct sg_constraints *cons,
+				const void *buf, size_t len)
+{
+	if (!cons->required_alignment)
+		cons->required_alignment = 1;
+
+	if (!cons->preferred_alignment)
+		cons->preferred_alignment = cons->required_alignment;
+
+	/* Test if buf and len are properly aligned. */
+	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
+	    !IS_ALIGNED(len, cons->required_alignment))
+		return -EINVAL;
+
+	/*
+	 * if the buffer has been vmallocated and required_alignment is
+	 * more than PAGE_SIZE we cannot guarantee it.
+	 */
+	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
+		return -EINVAL;
+
+	/*
+	 * max_chunk_len has to be aligned to required_alignment to
+	 * guarantee that all buffer chunks are aligned correctly.
+	 */
+	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
+		return -EINVAL;
+
+	/*
+	 * preferred_alignment has to be aligned to required_alignment
+	 * to avoid misalignment of buffer chunks.
+	 */
+	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * sg_alloc_table_from_buf - create an SG table from a buffer
+ *
+ * @sgt: SG table
+ * @buf: buffer you want to create this SG table from
+ * @len: length of buf
+ * @constraints: optional constraints to take into account when creating
+ *		 the SG table. Can be NULL if no specific constraints are
+ *		 required.
+ * @gfp_mask: type of allocation to use when creating the table
+ *
+ * This function creates an SG table from a buffer, its length and some
+ * SG constraints.
+ *
+ * Note: This function supports vmallocated buffers.
+ */
+int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
+			    const struct sg_constraints *constraints,
+			    gfp_t gfp_mask)
+{
+	struct sg_constraints cons = { };
+	size_t remaining, chunk_len;
+	const void *sg_buf;
+	int i, ret;
+
+	if (constraints)
+		cons = *constraints;
+
+	ret = sg_check_constraints(&cons, buf, len);
+	if (ret)
+		return ret;
+
+	sg_buf = buf;
+	remaining = len;
+	i = 0;
+	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
+		i++;
+
+	ret = sg_alloc_table(sgt, i, gfp_mask);
+	if (ret)
+		return ret;
+
+	sg_buf = buf;
+	remaining = len;
+	i = 0;
+	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
+		if (is_vmalloc_addr(sg_buf)) {
+			struct page *vm_page;
+
+			vm_page = vmalloc_to_page(sg_buf);
+			if (!vm_page) {
+				ret = -ENOMEM;
+				goto err_free_table;
+			}
+
+			sg_set_page(&sgt->sgl[i], vm_page, chunk_len,
+				    offset_in_page(sg_buf));
+		} else {
+			sg_set_buf(&sgt->sgl[i], sg_buf, chunk_len);
+		}
+
+		i++;
+	}
+
+	return 0;
+
+err_free_table:
+	sg_free_table(sgt);
+
+	return ret;
+}
+EXPORT_SYMBOL(sg_alloc_table_from_buf);
+
 void __sg_page_iter_start(struct sg_page_iter *piter,
 			  struct scatterlist *sglist, unsigned int nents,
 			  unsigned long pgoffset)
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: linux-arm-kernel

sg_alloc_table_from_buf() provides an easy solution to create an sg_table
from a virtual address pointer. This function takes care of dealing with
vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
DMA transfer size).

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 include/linux/scatterlist.h |  24 ++++++++
 lib/scatterlist.c           | 142 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 166 insertions(+)

diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
index 556ec1e..4a75362 100644
--- a/include/linux/scatterlist.h
+++ b/include/linux/scatterlist.h
@@ -41,6 +41,27 @@ struct sg_table {
 	unsigned int orig_nents;	/* original size of list */
 };
 
+/**
+ * struct sg_constraints - SG constraints structure
+ *
+ * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be smaller
+ *		   than this value. Zero means no constraint.
+ * @required_alignment: minimum alignment. Is used for both size and pointer
+ *			alignment. If this constraint is not met, the function
+ *			should return -EINVAL.
+ * @preferred_alignment: preferred alignment. Mainly used to optimize
+ *			 throughput when the DMA engine performs better when
+ *			 doing aligned accesses.
+ *
+ * This structure is here to help sg_alloc_table_from_buf() create the optimal
+ * SG list based on DMA engine constraints.
+ */
+struct sg_constraints {
+	size_t max_chunk_len;
+	size_t required_alignment;
+	size_t preferred_alignment;
+};
+
 /*
  * Notes on SG table design.
  *
@@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
 	struct page **pages, unsigned int n_pages,
 	unsigned long offset, unsigned long size,
 	gfp_t gfp_mask);
+int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
+			    const struct sg_constraints *constraints,
+			    gfp_t gfp_mask);
 
 size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
 		      size_t buflen, off_t skip, bool to_buffer);
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index bafa993..706b583 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
 }
 EXPORT_SYMBOL(sg_alloc_table_from_pages);
 
+static size_t sg_buf_chunk_len(const void *buf, size_t len,
+			       const struct sg_constraints *cons)
+{
+	size_t chunk_len = len;
+
+	if (cons->max_chunk_len)
+		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
+
+	if (is_vmalloc_addr(buf))
+		chunk_len = min_t(size_t, chunk_len,
+				  PAGE_SIZE - offset_in_page(buf));
+
+	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
+		const void *aligned_buf = PTR_ALIGN(buf,
+						    cons->preferred_alignment);
+		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
+
+		chunk_len = min_t(size_t, chunk_len, unaligned_len);
+	} else if (chunk_len > cons->preferred_alignment) {
+		chunk_len &= ~(cons->preferred_alignment - 1);
+	}
+
+	return chunk_len;
+}
+
+#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
+	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
+	     len;							\
+	     len -= chunk_len, buf += chunk_len,			\
+	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
+
+static int sg_check_constraints(struct sg_constraints *cons,
+				const void *buf, size_t len)
+{
+	if (!cons->required_alignment)
+		cons->required_alignment = 1;
+
+	if (!cons->preferred_alignment)
+		cons->preferred_alignment = cons->required_alignment;
+
+	/* Test if buf and len are properly aligned. */
+	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
+	    !IS_ALIGNED(len, cons->required_alignment))
+		return -EINVAL;
+
+	/*
+	 * if the buffer has been vmallocated and required_alignment is
+	 * more than PAGE_SIZE we cannot guarantee it.
+	 */
+	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
+		return -EINVAL;
+
+	/*
+	 * max_chunk_len has to be aligned to required_alignment to
+	 * guarantee that all buffer chunks are aligned correctly.
+	 */
+	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
+		return -EINVAL;
+
+	/*
+	 * preferred_alignment has to be aligned to required_alignment
+	 * to avoid misalignment of buffer chunks.
+	 */
+	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
+		return -EINVAL;
+
+	return 0;
+}
+
+/**
+ * sg_alloc_table_from_buf - create an SG table from a buffer
+ *
+ * @sgt: SG table
+ * @buf: buffer you want to create this SG table from
+ * @len: length of buf
+ * @constraints: optional constraints to take into account when creating
+ *		 the SG table. Can be NULL if no specific constraints are
+ *		 required.
+ * @gfp_mask: type of allocation to use when creating the table
+ *
+ * This function creates an SG table from a buffer, its length and some
+ * SG constraints.
+ *
+ * Note: This function supports vmallocated buffers.
+ */
+int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
+			    const struct sg_constraints *constraints,
+			    gfp_t gfp_mask)
+{
+	struct sg_constraints cons = { };
+	size_t remaining, chunk_len;
+	const void *sg_buf;
+	int i, ret;
+
+	if (constraints)
+		cons = *constraints;
+
+	ret = sg_check_constraints(&cons, buf, len);
+	if (ret)
+		return ret;
+
+	sg_buf = buf;
+	remaining = len;
+	i = 0;
+	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
+		i++;
+
+	ret = sg_alloc_table(sgt, i, gfp_mask);
+	if (ret)
+		return ret;
+
+	sg_buf = buf;
+	remaining = len;
+	i = 0;
+	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
+		if (is_vmalloc_addr(sg_buf)) {
+			struct page *vm_page;
+
+			vm_page = vmalloc_to_page(sg_buf);
+			if (!vm_page) {
+				ret = -ENOMEM;
+				goto err_free_table;
+			}
+
+			sg_set_page(&sgt->sgl[i], vm_page, chunk_len,
+				    offset_in_page(sg_buf));
+		} else {
+			sg_set_buf(&sgt->sgl[i], sg_buf, chunk_len);
+		}
+
+		i++;
+	}
+
+	return 0;
+
+err_free_table:
+	sg_free_table(sgt);
+
+	return ret;
+}
+EXPORT_SYMBOL(sg_alloc_table_from_buf);
+
 void __sg_page_iter_start(struct sg_page_iter *piter,
 			  struct scatterlist *sglist, unsigned int nents,
 			  unsigned long pgoffset)
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 5/7] mtd: provide helper to prepare buffers for DMA operations
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris, linux-mtd
  Cc: Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Boris Brezillon

Some NAND controller drivers are making use of DMA to transfer data from
the controller to the buffer passed by the MTD user.
Provide a generic mtd_map/unmap_buf() implementation to avoid open coded
(and sometime erroneous) implementations.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/mtdcore.c   | 66 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/mtd/mtd.h | 25 +++++++++++++++++++
 2 files changed, 91 insertions(+)

diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
index 3096251..3c368f0 100644
--- a/drivers/mtd/mtdcore.c
+++ b/drivers/mtd/mtdcore.c
@@ -1253,6 +1253,72 @@ void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size)
 }
 EXPORT_SYMBOL_GPL(mtd_kmalloc_up_to);
 
+#ifdef CONFIG_HAS_DMA
+/**
+ * mtd_map_buf - create an SG table and prepare it for DMA operations
+ *
+ * @mtd: mtd device description object pointer
+ * @dev: device handling the DMA operation
+ * @buf: buf used to create the SG table
+ * @len: length of buf
+ * @constraints: optional constraints to take into account when creating
+ *		 the SG table. Can be NULL if no specific constraints
+ *		 are required.
+ * @dir: direction of the DMA operation
+ *
+ * This function should be used when an MTD driver wants to do DMA operations
+ * on a buffer passed by the MTD layer. This functions takes care of
+ * vmallocated buffer constraints, and return and sg_table that you can safely
+ * use.
+ */
+int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+		struct sg_table *sgt, const void *buf, size_t len,
+		const struct sg_constraints *constraints,
+		enum dma_data_direction dir)
+{
+	int ret;
+
+	ret = sg_alloc_table_from_buf(sgt, buf, len, constraints, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	ret = dma_map_sg(dev, sgt->sgl, sgt->nents, dir);
+	if (!ret)
+		ret = -ENOMEM;
+
+	if (ret < 0) {
+		sg_free_table(sgt);
+		return ret;
+	}
+
+	sgt->nents = ret;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mtd_map_buf);
+
+/**
+ * mtd_map_buf - unmap an SG table and release its resources
+ *
+ * @mtd: mtd device description object pointer
+ * @dev: device handling the DMA operation
+ * @sgt: SG table
+ * @dir: direction of the DMA operation
+ *
+ * This function unmaps a previously mapped SG table and release SG table
+ * resources. Should be called when your DMA operation is done.
+ */
+void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+		   struct sg_table *sgt, enum dma_data_direction dir)
+{
+	if (sgt->orig_nents) {
+		dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
+		sg_free_table(sgt);
+	}
+}
+EXPORT_SYMBOL_GPL(mtd_unmap_buf);
+#endif /* !CONFIG_HAS_DMA */
+
 #ifdef CONFIG_PROC_FS
 
 /*====================================================================*/
diff --git a/include/linux/mtd/mtd.h b/include/linux/mtd/mtd.h
index cc84923..11c63f1 100644
--- a/include/linux/mtd/mtd.h
+++ b/include/linux/mtd/mtd.h
@@ -24,6 +24,7 @@
 #include <linux/uio.h>
 #include <linux/notifier.h>
 #include <linux/device.h>
+#include <linux/dma-mapping.h>
 
 #include <mtd/mtd-abi.h>
 
@@ -406,6 +407,30 @@ extern void register_mtd_user (struct mtd_notifier *new);
 extern int unregister_mtd_user (struct mtd_notifier *old);
 void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size);
 
+#ifdef CONFIG_HAS_DMA
+int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+		struct sg_table *sgt, const void *buf, size_t len,
+		const struct sg_constraints *constraints,
+		enum dma_data_direction dir);
+void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+		   struct sg_table *sgt, enum dma_data_direction dir);
+#else
+static inline int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+			      struct sg_table *sgt, const void *buf,
+			      size_t len,
+			      const struct sg_constraints *constraints
+			      enum dma_data_direction dir)
+{
+	return -ENOTSUPP;
+}
+
+static void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+			  struct sg_table *sgt, enum dma_data_direction dir)
+{
+	return -ENOTSUPP;
+}
+#endif
+
 void mtd_erase_callback(struct erase_info *instr);
 
 static inline int mtd_is_bitflip(int err) {
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 5/7] mtd: provide helper to prepare buffers for DMA operations
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Boris Brezillon

Some NAND controller drivers are making use of DMA to transfer data from
the controller to the buffer passed by the MTD user.
Provide a generic mtd_map/unmap_buf() implementation to avoid open coded
(and sometime erroneous) implementations.

Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
---
 drivers/mtd/mtdcore.c   | 66 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/mtd/mtd.h | 25 +++++++++++++++++++
 2 files changed, 91 insertions(+)

diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
index 3096251..3c368f0 100644
--- a/drivers/mtd/mtdcore.c
+++ b/drivers/mtd/mtdcore.c
@@ -1253,6 +1253,72 @@ void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size)
 }
 EXPORT_SYMBOL_GPL(mtd_kmalloc_up_to);
 
+#ifdef CONFIG_HAS_DMA
+/**
+ * mtd_map_buf - create an SG table and prepare it for DMA operations
+ *
+ * @mtd: mtd device description object pointer
+ * @dev: device handling the DMA operation
+ * @buf: buf used to create the SG table
+ * @len: length of buf
+ * @constraints: optional constraints to take into account when creating
+ *		 the SG table. Can be NULL if no specific constraints
+ *		 are required.
+ * @dir: direction of the DMA operation
+ *
+ * This function should be used when an MTD driver wants to do DMA operations
+ * on a buffer passed by the MTD layer. This functions takes care of
+ * vmallocated buffer constraints, and return and sg_table that you can safely
+ * use.
+ */
+int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+		struct sg_table *sgt, const void *buf, size_t len,
+		const struct sg_constraints *constraints,
+		enum dma_data_direction dir)
+{
+	int ret;
+
+	ret = sg_alloc_table_from_buf(sgt, buf, len, constraints, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	ret = dma_map_sg(dev, sgt->sgl, sgt->nents, dir);
+	if (!ret)
+		ret = -ENOMEM;
+
+	if (ret < 0) {
+		sg_free_table(sgt);
+		return ret;
+	}
+
+	sgt->nents = ret;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mtd_map_buf);
+
+/**
+ * mtd_map_buf - unmap an SG table and release its resources
+ *
+ * @mtd: mtd device description object pointer
+ * @dev: device handling the DMA operation
+ * @sgt: SG table
+ * @dir: direction of the DMA operation
+ *
+ * This function unmaps a previously mapped SG table and release SG table
+ * resources. Should be called when your DMA operation is done.
+ */
+void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+		   struct sg_table *sgt, enum dma_data_direction dir)
+{
+	if (sgt->orig_nents) {
+		dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
+		sg_free_table(sgt);
+	}
+}
+EXPORT_SYMBOL_GPL(mtd_unmap_buf);
+#endif /* !CONFIG_HAS_DMA */
+
 #ifdef CONFIG_PROC_FS
 
 /*====================================================================*/
diff --git a/include/linux/mtd/mtd.h b/include/linux/mtd/mtd.h
index cc84923..11c63f1 100644
--- a/include/linux/mtd/mtd.h
+++ b/include/linux/mtd/mtd.h
@@ -24,6 +24,7 @@
 #include <linux/uio.h>
 #include <linux/notifier.h>
 #include <linux/device.h>
+#include <linux/dma-mapping.h>
 
 #include <mtd/mtd-abi.h>
 
@@ -406,6 +407,30 @@ extern void register_mtd_user (struct mtd_notifier *new);
 extern int unregister_mtd_user (struct mtd_notifier *old);
 void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size);
 
+#ifdef CONFIG_HAS_DMA
+int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+		struct sg_table *sgt, const void *buf, size_t len,
+		const struct sg_constraints *constraints,
+		enum dma_data_direction dir);
+void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+		   struct sg_table *sgt, enum dma_data_direction dir);
+#else
+static inline int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+			      struct sg_table *sgt, const void *buf,
+			      size_t len,
+			      const struct sg_constraints *constraints
+			      enum dma_data_direction dir)
+{
+	return -ENOTSUPP;
+}
+
+static void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+			  struct sg_table *sgt, enum dma_data_direction dir)
+{
+	return -ENOTSUPP;
+}
+#endif
+
 void mtd_erase_callback(struct erase_info *instr);
 
 static inline int mtd_is_bitflip(int err) {
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 5/7] mtd: provide helper to prepare buffers for DMA operations
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: linux-arm-kernel

Some NAND controller drivers are making use of DMA to transfer data from
the controller to the buffer passed by the MTD user.
Provide a generic mtd_map/unmap_buf() implementation to avoid open coded
(and sometime erroneous) implementations.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/mtdcore.c   | 66 +++++++++++++++++++++++++++++++++++++++++++++++++
 include/linux/mtd/mtd.h | 25 +++++++++++++++++++
 2 files changed, 91 insertions(+)

diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
index 3096251..3c368f0 100644
--- a/drivers/mtd/mtdcore.c
+++ b/drivers/mtd/mtdcore.c
@@ -1253,6 +1253,72 @@ void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size)
 }
 EXPORT_SYMBOL_GPL(mtd_kmalloc_up_to);
 
+#ifdef CONFIG_HAS_DMA
+/**
+ * mtd_map_buf - create an SG table and prepare it for DMA operations
+ *
+ * @mtd: mtd device description object pointer
+ * @dev: device handling the DMA operation
+ * @buf: buf used to create the SG table
+ * @len: length of buf
+ * @constraints: optional constraints to take into account when creating
+ *		 the SG table. Can be NULL if no specific constraints
+ *		 are required.
+ * @dir: direction of the DMA operation
+ *
+ * This function should be used when an MTD driver wants to do DMA operations
+ * on a buffer passed by the MTD layer. This functions takes care of
+ * vmallocated buffer constraints, and return and sg_table that you can safely
+ * use.
+ */
+int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+		struct sg_table *sgt, const void *buf, size_t len,
+		const struct sg_constraints *constraints,
+		enum dma_data_direction dir)
+{
+	int ret;
+
+	ret = sg_alloc_table_from_buf(sgt, buf, len, constraints, GFP_KERNEL);
+	if (ret)
+		return ret;
+
+	ret = dma_map_sg(dev, sgt->sgl, sgt->nents, dir);
+	if (!ret)
+		ret = -ENOMEM;
+
+	if (ret < 0) {
+		sg_free_table(sgt);
+		return ret;
+	}
+
+	sgt->nents = ret;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(mtd_map_buf);
+
+/**
+ * mtd_map_buf - unmap an SG table and release its resources
+ *
+ * @mtd: mtd device description object pointer
+ * @dev: device handling the DMA operation
+ * @sgt: SG table
+ * @dir: direction of the DMA operation
+ *
+ * This function unmaps a previously mapped SG table and release SG table
+ * resources. Should be called when your DMA operation is done.
+ */
+void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+		   struct sg_table *sgt, enum dma_data_direction dir)
+{
+	if (sgt->orig_nents) {
+		dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, dir);
+		sg_free_table(sgt);
+	}
+}
+EXPORT_SYMBOL_GPL(mtd_unmap_buf);
+#endif /* !CONFIG_HAS_DMA */
+
 #ifdef CONFIG_PROC_FS
 
 /*====================================================================*/
diff --git a/include/linux/mtd/mtd.h b/include/linux/mtd/mtd.h
index cc84923..11c63f1 100644
--- a/include/linux/mtd/mtd.h
+++ b/include/linux/mtd/mtd.h
@@ -24,6 +24,7 @@
 #include <linux/uio.h>
 #include <linux/notifier.h>
 #include <linux/device.h>
+#include <linux/dma-mapping.h>
 
 #include <mtd/mtd-abi.h>
 
@@ -406,6 +407,30 @@ extern void register_mtd_user (struct mtd_notifier *new);
 extern int unregister_mtd_user (struct mtd_notifier *old);
 void *mtd_kmalloc_up_to(const struct mtd_info *mtd, size_t *size);
 
+#ifdef CONFIG_HAS_DMA
+int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+		struct sg_table *sgt, const void *buf, size_t len,
+		const struct sg_constraints *constraints,
+		enum dma_data_direction dir);
+void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+		   struct sg_table *sgt, enum dma_data_direction dir);
+#else
+static inline int mtd_map_buf(struct mtd_info *mtd, struct device *dev,
+			      struct sg_table *sgt, const void *buf,
+			      size_t len,
+			      const struct sg_constraints *constraints
+			      enum dma_data_direction dir)
+{
+	return -ENOTSUPP;
+}
+
+static void mtd_unmap_buf(struct mtd_info *mtd, struct device *dev,
+			  struct sg_table *sgt, enum dma_data_direction dir)
+{
+	return -ENOTSUPP;
+}
+#endif
+
 void mtd_erase_callback(struct erase_info *instr);
 
 static inline int mtd_is_bitflip(int err) {
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 6/7] mtd: nand: sunxi: add support for DMA assisted operations
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris, linux-mtd
  Cc: Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Boris Brezillon

The sunxi NAND controller is able to pipeline ECC operations only when
operated in DMA mode, which improves a lot NAND throughput while keeping
CPU usage low.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/nand/sunxi_nand.c | 301 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 297 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 07c3af7..7ba285e 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -277,6 +277,7 @@ struct sunxi_nfc {
 	unsigned long clk_rate;
 	struct list_head chips;
 	struct completion complete;
+	struct dma_chan *dmac;
 };
 
 static inline struct sunxi_nfc *to_sunxi_nfc(struct nand_hw_control *ctrl)
@@ -369,6 +370,68 @@ static int sunxi_nfc_rst(struct sunxi_nfc *nfc)
 	return ret;
 }
 
+static int sunxi_nfc_dma_op_prepare(struct mtd_info *mtd, const void *buf,
+				    int chunksize, int nchunks,
+				    enum dma_data_direction ddir,
+				    struct sg_table *sgt)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct dma_async_tx_descriptor *dmad;
+	enum dma_transfer_direction tdir;
+	dma_cookie_t dmat;
+	int ret;
+
+	if (ddir == DMA_FROM_DEVICE)
+		tdir = DMA_DEV_TO_MEM;
+	else
+		tdir = DMA_MEM_TO_DEV;
+
+	ret = mtd_map_buf(mtd, nfc->dev, sgt, buf, nchunks * chunksize,
+			  NULL, ddir);
+	if (ret)
+		return ret;
+
+	dmad = dmaengine_prep_slave_sg(nfc->dmac, sgt->sgl, sgt->nents,
+				       tdir, DMA_CTRL_ACK);
+	if (IS_ERR(dmad)) {
+		ret = PTR_ERR(dmad);
+		goto err_unmap_buf;
+	}
+
+	writel(readl(nfc->regs + NFC_REG_CTL) | NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+	writel(nchunks, nfc->regs + NFC_REG_SECTOR_NUM);
+	writel(chunksize, nfc->regs + NFC_REG_CNT);
+	dmat = dmaengine_submit(dmad);
+
+	ret = dma_submit_error(dmat);
+	if (ret)
+		goto err_clr_dma_flag;
+
+	return 0;
+
+err_clr_dma_flag:
+	writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+
+err_unmap_buf:
+	mtd_unmap_buf(mtd, nfc->dev, sgt, ddir);
+	return ret;
+}
+
+static void sunxi_nfc_dma_op_cleanup(struct mtd_info *mtd,
+				     enum dma_data_direction ddir,
+				     struct sg_table *sgt)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+
+	mtd_unmap_buf(mtd, nfc->dev, sgt, ddir);
+	writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+}
+
 static int sunxi_nfc_dev_ready(struct mtd_info *mtd)
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
@@ -971,6 +1034,106 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
+static int sunxi_nfc_hw_ecc_read_chunks_dma(struct mtd_info *mtd, uint8_t *buf,
+					    int oob_required, int page,
+					    int nchunks)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	unsigned int max_bitflips = 0;
+	int ret, i, raw_mode = 0;
+	struct sg_table sgt;
+
+	ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
+	if (ret)
+		return ret;
+
+	ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, nchunks,
+				       DMA_FROM_DEVICE, &sgt);
+	if (ret)
+		return ret;
+
+	sunxi_nfc_hw_ecc_enable(mtd);
+	sunxi_nfc_randomizer_config(mtd, page, false);
+	sunxi_nfc_randomizer_enable(mtd);
+
+	writel((NAND_CMD_RNDOUTSTART << 16) | (NAND_CMD_RNDOUT << 8) |
+	       NAND_CMD_READSTART, nfc->regs + NFC_REG_RCMD_SET);
+
+	dma_async_issue_pending(nfc->dmac);
+
+	writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD | NFC_DATA_TRANS,
+	       nfc->regs + NFC_REG_CMD);
+
+	ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
+	if (ret)
+		dmaengine_terminate_all(nfc->dmac);
+
+	sunxi_nfc_randomizer_disable(mtd);
+	sunxi_nfc_hw_ecc_disable(mtd);
+
+	sunxi_nfc_dma_op_cleanup(mtd, DMA_FROM_DEVICE, &sgt);
+
+	if (ret)
+		return ret;
+
+	for (i = 0; i < nchunks; i++) {
+		int data_off = i * ecc->size;
+		int oob_off = i * (ecc->bytes + 4);
+		u8 *data = buf + data_off;
+		u8 *oob = nand->oob_poi + oob_off;
+		bool randomized = nand->options & NAND_NEED_SCRAMBLING;
+		bool erased;
+
+		ret = sunxi_nfc_hw_ecc_correct(mtd, randomized ? data : NULL,
+				(randomized && oob_required) ? oob : NULL,
+				i, &erased);
+		if (ret < 0) {
+			/*
+			 * Re-read the data with the randomizer disabled to
+			 * identify bitflips in erased pages.
+			 */
+			if (nand->options & NAND_NEED_SCRAMBLING) {
+				/* TODO: use DMA to read page in raw mode */
+				nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
+					      data_off, -1);
+				nand->read_buf(mtd, data, ecc->size);
+			}
+
+			/* TODO: use DMA to retrieve OOB */
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			nand->read_buf(mtd, oob, ecc->bytes + 4);
+
+			ret = nand_check_erased_ecc_chunk(data,	ecc->size,
+							  oob, ecc->bytes + 4,
+							  NULL, 0,
+							  ecc->strength);
+			if (ret >= 0)
+				raw_mode = 1;
+		} else if (oob_required && !erased) {
+			/* TODO: use DMA to retrieve OOB */
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			nand->read_buf(mtd, oob, ecc->bytes + 4);
+
+			sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, i,
+							    !i, page);
+		}
+
+		if (erased)
+			raw_mode = 1;
+
+		sunxi_nfc_hw_ecc_update_stats(mtd, &max_bitflips, ret);
+	}
+
+	if (oob_required)
+		sunxi_nfc_hw_ecc_read_extra_oob(mtd, nand->oob_poi,
+						NULL, !raw_mode,
+						page);
+
+	return max_bitflips;
+}
+
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 					const u8 *data, int data_off,
 					const u8 *oob, int oob_off,
@@ -1069,6 +1232,23 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
 	return max_bitflips;
 }
 
+static int sunxi_nfc_hw_ecc_read_page_dma(struct mtd_info *mtd,
+					  struct nand_chip *chip, u8 *buf,
+					  int oob_required, int page)
+{
+	int ret;
+
+	ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, oob_required, page,
+					       chip->ecc.steps);
+	if (ret >= 0)
+		return ret;
+
+	/* Fallback to PIO mode */
+	chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
+
+	return sunxi_nfc_hw_ecc_read_page(mtd, chip, buf, oob_required, page);
+}
+
 static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 					 struct nand_chip *chip,
 					 u32 data_offs, u32 readlen,
@@ -1102,6 +1282,25 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 	return max_bitflips;
 }
 
+static int sunxi_nfc_hw_ecc_read_subpage_dma(struct mtd_info *mtd,
+					     struct nand_chip *chip,
+					     u32 data_offs, u32 readlen,
+					     u8 *buf, int page)
+{
+	int nchunks = DIV_ROUND_UP(data_offs + readlen, chip->ecc.size);
+	int ret;
+
+	ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, false, page, nchunks);
+	if (ret >= 0)
+		return ret;
+
+	/* Fallback to PIO mode */
+	chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
+
+	return sunxi_nfc_hw_ecc_read_subpage(mtd, chip, data_offs, readlen,
+					     buf, page);
+}
+
 static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
 				       struct nand_chip *chip,
 				       const uint8_t *buf, int oob_required,
@@ -1134,6 +1333,69 @@ static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
 	return 0;
 }
 
+static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
+					   struct nand_chip *chip,
+					   const u8 *buf,
+					   int oob_required,
+					   int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	struct sg_table sgt;
+	int ret, i;
+
+	ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
+	if (ret)
+		return ret;
+
+	ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, ecc->steps,
+				       DMA_TO_DEVICE, &sgt);
+	if (ret)
+		goto pio_fallback;
+
+	for (i = 0; i < ecc->steps; i++) {
+		const u8 *oob = nand->oob_poi + (i * (ecc->bytes + 4));
+
+		sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, i, !i, page);
+	}
+
+	sunxi_nfc_hw_ecc_enable(mtd);
+	sunxi_nfc_randomizer_config(mtd, page, false);
+	sunxi_nfc_randomizer_enable(mtd);
+
+	writel((NAND_CMD_RNDIN << 8) | NAND_CMD_PAGEPROG,
+	       nfc->regs + NFC_REG_RCMD_SET);
+
+	dma_async_issue_pending(nfc->dmac);
+
+	writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD |
+	       NFC_DATA_TRANS | NFC_ACCESS_DIR,
+	       nfc->regs + NFC_REG_CMD);
+
+	ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
+	if (ret)
+		dmaengine_terminate_all(nfc->dmac);
+
+	sunxi_nfc_randomizer_disable(mtd);
+	sunxi_nfc_hw_ecc_disable(mtd);
+
+	sunxi_nfc_dma_op_cleanup(mtd, DMA_FROM_DEVICE, &sgt);
+
+	if (ret)
+		return ret;
+
+	if (oob_required || (chip->options & NAND_NEED_SCRAMBLING))
+		/* TODO: use DMA to transfer extra OOB bytes ? */
+		sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi,
+						 NULL, page);
+
+	return 0;
+
+pio_fallback:
+	return sunxi_nfc_hw_ecc_write_page(mtd, chip, buf, oob_required, page);
+}
+
 static int sunxi_nfc_hw_syndrome_ecc_read_page(struct mtd_info *mtd,
 					       struct nand_chip *chip,
 					       uint8_t *buf, int oob_required,
@@ -1501,6 +1763,9 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
 				       struct nand_ecc_ctrl *ecc,
 				       struct device_node *np)
 {
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(sunxi_nand->nand.controller);
 	struct nand_ecclayout *layout;
 	int nsectors;
 	int i, j;
@@ -1510,11 +1775,19 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
 	if (ret)
 		return ret;
 
-	ecc->read_page = sunxi_nfc_hw_ecc_read_page;
-	ecc->write_page = sunxi_nfc_hw_ecc_write_page;
+	if (nfc->dmac) {
+		ecc->read_page = sunxi_nfc_hw_ecc_read_page_dma;
+		ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage_dma;
+		ecc->write_page = sunxi_nfc_hw_ecc_write_page_dma;
+	} else {
+		ecc->read_page = sunxi_nfc_hw_ecc_read_page;
+		ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
+		ecc->write_page = sunxi_nfc_hw_ecc_write_page;
+	}
+
+	/* TODO: support DMA for raw accesses */
 	ecc->read_oob_raw = nand_read_oob_std;
 	ecc->write_oob_raw = nand_write_oob_std;
-	ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
 	layout = ecc->layout;
 	nsectors = mtd->writesize / ecc->size;
 
@@ -1888,16 +2161,34 @@ static int sunxi_nfc_probe(struct platform_device *pdev)
 	if (ret)
 		goto out_mod_clk_unprepare;
 
+	nfc->dmac = dma_request_slave_channel(dev, "rxtx");
+	if (nfc->dmac) {
+		struct dma_slave_config dmac_cfg = { };
+
+		dmac_cfg.src_addr = r->start + NFC_REG_IO_DATA;
+		dmac_cfg.dst_addr = dmac_cfg.src_addr;
+		dmac_cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+		dmac_cfg.dst_addr_width = dmac_cfg.src_addr_width;
+		dmac_cfg.src_maxburst = 4;
+		dmac_cfg.dst_maxburst = 4;
+		dmaengine_slave_config(nfc->dmac, &dmac_cfg);
+	} else {
+		dev_warn(dev, "failed to request rxtx DMA channel\n");
+	}
+
 	platform_set_drvdata(pdev, nfc);
 
 	ret = sunxi_nand_chips_init(dev, nfc);
 	if (ret) {
 		dev_err(dev, "failed to init nand chips\n");
-		goto out_mod_clk_unprepare;
+		goto out_release_dmac;
 	}
 
 	return 0;
 
+out_release_dmac:
+	if (nfc->dmac)
+		dma_release_channel(nfc->dmac);
 out_mod_clk_unprepare:
 	clk_disable_unprepare(nfc->mod_clk);
 out_ahb_clk_unprepare:
@@ -1911,6 +2202,8 @@ static int sunxi_nfc_remove(struct platform_device *pdev)
 	struct sunxi_nfc *nfc = platform_get_drvdata(pdev);
 
 	sunxi_nand_chips_cleanup(nfc);
+	if (nfc->dmac)
+		dma_release_channel(nfc->dmac);
 	clk_disable_unprepare(nfc->mod_clk);
 	clk_disable_unprepare(nfc->ahb_clk);
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 6/7] mtd: nand: sunxi: add support for DMA assisted operations
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Boris Brezillon

The sunxi NAND controller is able to pipeline ECC operations only when
operated in DMA mode, which improves a lot NAND throughput while keeping
CPU usage low.

Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
---
 drivers/mtd/nand/sunxi_nand.c | 301 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 297 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 07c3af7..7ba285e 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -277,6 +277,7 @@ struct sunxi_nfc {
 	unsigned long clk_rate;
 	struct list_head chips;
 	struct completion complete;
+	struct dma_chan *dmac;
 };
 
 static inline struct sunxi_nfc *to_sunxi_nfc(struct nand_hw_control *ctrl)
@@ -369,6 +370,68 @@ static int sunxi_nfc_rst(struct sunxi_nfc *nfc)
 	return ret;
 }
 
+static int sunxi_nfc_dma_op_prepare(struct mtd_info *mtd, const void *buf,
+				    int chunksize, int nchunks,
+				    enum dma_data_direction ddir,
+				    struct sg_table *sgt)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct dma_async_tx_descriptor *dmad;
+	enum dma_transfer_direction tdir;
+	dma_cookie_t dmat;
+	int ret;
+
+	if (ddir == DMA_FROM_DEVICE)
+		tdir = DMA_DEV_TO_MEM;
+	else
+		tdir = DMA_MEM_TO_DEV;
+
+	ret = mtd_map_buf(mtd, nfc->dev, sgt, buf, nchunks * chunksize,
+			  NULL, ddir);
+	if (ret)
+		return ret;
+
+	dmad = dmaengine_prep_slave_sg(nfc->dmac, sgt->sgl, sgt->nents,
+				       tdir, DMA_CTRL_ACK);
+	if (IS_ERR(dmad)) {
+		ret = PTR_ERR(dmad);
+		goto err_unmap_buf;
+	}
+
+	writel(readl(nfc->regs + NFC_REG_CTL) | NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+	writel(nchunks, nfc->regs + NFC_REG_SECTOR_NUM);
+	writel(chunksize, nfc->regs + NFC_REG_CNT);
+	dmat = dmaengine_submit(dmad);
+
+	ret = dma_submit_error(dmat);
+	if (ret)
+		goto err_clr_dma_flag;
+
+	return 0;
+
+err_clr_dma_flag:
+	writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+
+err_unmap_buf:
+	mtd_unmap_buf(mtd, nfc->dev, sgt, ddir);
+	return ret;
+}
+
+static void sunxi_nfc_dma_op_cleanup(struct mtd_info *mtd,
+				     enum dma_data_direction ddir,
+				     struct sg_table *sgt)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+
+	mtd_unmap_buf(mtd, nfc->dev, sgt, ddir);
+	writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+}
+
 static int sunxi_nfc_dev_ready(struct mtd_info *mtd)
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
@@ -971,6 +1034,106 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
+static int sunxi_nfc_hw_ecc_read_chunks_dma(struct mtd_info *mtd, uint8_t *buf,
+					    int oob_required, int page,
+					    int nchunks)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	unsigned int max_bitflips = 0;
+	int ret, i, raw_mode = 0;
+	struct sg_table sgt;
+
+	ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
+	if (ret)
+		return ret;
+
+	ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, nchunks,
+				       DMA_FROM_DEVICE, &sgt);
+	if (ret)
+		return ret;
+
+	sunxi_nfc_hw_ecc_enable(mtd);
+	sunxi_nfc_randomizer_config(mtd, page, false);
+	sunxi_nfc_randomizer_enable(mtd);
+
+	writel((NAND_CMD_RNDOUTSTART << 16) | (NAND_CMD_RNDOUT << 8) |
+	       NAND_CMD_READSTART, nfc->regs + NFC_REG_RCMD_SET);
+
+	dma_async_issue_pending(nfc->dmac);
+
+	writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD | NFC_DATA_TRANS,
+	       nfc->regs + NFC_REG_CMD);
+
+	ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
+	if (ret)
+		dmaengine_terminate_all(nfc->dmac);
+
+	sunxi_nfc_randomizer_disable(mtd);
+	sunxi_nfc_hw_ecc_disable(mtd);
+
+	sunxi_nfc_dma_op_cleanup(mtd, DMA_FROM_DEVICE, &sgt);
+
+	if (ret)
+		return ret;
+
+	for (i = 0; i < nchunks; i++) {
+		int data_off = i * ecc->size;
+		int oob_off = i * (ecc->bytes + 4);
+		u8 *data = buf + data_off;
+		u8 *oob = nand->oob_poi + oob_off;
+		bool randomized = nand->options & NAND_NEED_SCRAMBLING;
+		bool erased;
+
+		ret = sunxi_nfc_hw_ecc_correct(mtd, randomized ? data : NULL,
+				(randomized && oob_required) ? oob : NULL,
+				i, &erased);
+		if (ret < 0) {
+			/*
+			 * Re-read the data with the randomizer disabled to
+			 * identify bitflips in erased pages.
+			 */
+			if (nand->options & NAND_NEED_SCRAMBLING) {
+				/* TODO: use DMA to read page in raw mode */
+				nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
+					      data_off, -1);
+				nand->read_buf(mtd, data, ecc->size);
+			}
+
+			/* TODO: use DMA to retrieve OOB */
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			nand->read_buf(mtd, oob, ecc->bytes + 4);
+
+			ret = nand_check_erased_ecc_chunk(data,	ecc->size,
+							  oob, ecc->bytes + 4,
+							  NULL, 0,
+							  ecc->strength);
+			if (ret >= 0)
+				raw_mode = 1;
+		} else if (oob_required && !erased) {
+			/* TODO: use DMA to retrieve OOB */
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			nand->read_buf(mtd, oob, ecc->bytes + 4);
+
+			sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, i,
+							    !i, page);
+		}
+
+		if (erased)
+			raw_mode = 1;
+
+		sunxi_nfc_hw_ecc_update_stats(mtd, &max_bitflips, ret);
+	}
+
+	if (oob_required)
+		sunxi_nfc_hw_ecc_read_extra_oob(mtd, nand->oob_poi,
+						NULL, !raw_mode,
+						page);
+
+	return max_bitflips;
+}
+
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 					const u8 *data, int data_off,
 					const u8 *oob, int oob_off,
@@ -1069,6 +1232,23 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
 	return max_bitflips;
 }
 
+static int sunxi_nfc_hw_ecc_read_page_dma(struct mtd_info *mtd,
+					  struct nand_chip *chip, u8 *buf,
+					  int oob_required, int page)
+{
+	int ret;
+
+	ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, oob_required, page,
+					       chip->ecc.steps);
+	if (ret >= 0)
+		return ret;
+
+	/* Fallback to PIO mode */
+	chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
+
+	return sunxi_nfc_hw_ecc_read_page(mtd, chip, buf, oob_required, page);
+}
+
 static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 					 struct nand_chip *chip,
 					 u32 data_offs, u32 readlen,
@@ -1102,6 +1282,25 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 	return max_bitflips;
 }
 
+static int sunxi_nfc_hw_ecc_read_subpage_dma(struct mtd_info *mtd,
+					     struct nand_chip *chip,
+					     u32 data_offs, u32 readlen,
+					     u8 *buf, int page)
+{
+	int nchunks = DIV_ROUND_UP(data_offs + readlen, chip->ecc.size);
+	int ret;
+
+	ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, false, page, nchunks);
+	if (ret >= 0)
+		return ret;
+
+	/* Fallback to PIO mode */
+	chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
+
+	return sunxi_nfc_hw_ecc_read_subpage(mtd, chip, data_offs, readlen,
+					     buf, page);
+}
+
 static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
 				       struct nand_chip *chip,
 				       const uint8_t *buf, int oob_required,
@@ -1134,6 +1333,69 @@ static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
 	return 0;
 }
 
+static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
+					   struct nand_chip *chip,
+					   const u8 *buf,
+					   int oob_required,
+					   int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	struct sg_table sgt;
+	int ret, i;
+
+	ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
+	if (ret)
+		return ret;
+
+	ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, ecc->steps,
+				       DMA_TO_DEVICE, &sgt);
+	if (ret)
+		goto pio_fallback;
+
+	for (i = 0; i < ecc->steps; i++) {
+		const u8 *oob = nand->oob_poi + (i * (ecc->bytes + 4));
+
+		sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, i, !i, page);
+	}
+
+	sunxi_nfc_hw_ecc_enable(mtd);
+	sunxi_nfc_randomizer_config(mtd, page, false);
+	sunxi_nfc_randomizer_enable(mtd);
+
+	writel((NAND_CMD_RNDIN << 8) | NAND_CMD_PAGEPROG,
+	       nfc->regs + NFC_REG_RCMD_SET);
+
+	dma_async_issue_pending(nfc->dmac);
+
+	writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD |
+	       NFC_DATA_TRANS | NFC_ACCESS_DIR,
+	       nfc->regs + NFC_REG_CMD);
+
+	ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
+	if (ret)
+		dmaengine_terminate_all(nfc->dmac);
+
+	sunxi_nfc_randomizer_disable(mtd);
+	sunxi_nfc_hw_ecc_disable(mtd);
+
+	sunxi_nfc_dma_op_cleanup(mtd, DMA_FROM_DEVICE, &sgt);
+
+	if (ret)
+		return ret;
+
+	if (oob_required || (chip->options & NAND_NEED_SCRAMBLING))
+		/* TODO: use DMA to transfer extra OOB bytes ? */
+		sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi,
+						 NULL, page);
+
+	return 0;
+
+pio_fallback:
+	return sunxi_nfc_hw_ecc_write_page(mtd, chip, buf, oob_required, page);
+}
+
 static int sunxi_nfc_hw_syndrome_ecc_read_page(struct mtd_info *mtd,
 					       struct nand_chip *chip,
 					       uint8_t *buf, int oob_required,
@@ -1501,6 +1763,9 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
 				       struct nand_ecc_ctrl *ecc,
 				       struct device_node *np)
 {
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(sunxi_nand->nand.controller);
 	struct nand_ecclayout *layout;
 	int nsectors;
 	int i, j;
@@ -1510,11 +1775,19 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
 	if (ret)
 		return ret;
 
-	ecc->read_page = sunxi_nfc_hw_ecc_read_page;
-	ecc->write_page = sunxi_nfc_hw_ecc_write_page;
+	if (nfc->dmac) {
+		ecc->read_page = sunxi_nfc_hw_ecc_read_page_dma;
+		ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage_dma;
+		ecc->write_page = sunxi_nfc_hw_ecc_write_page_dma;
+	} else {
+		ecc->read_page = sunxi_nfc_hw_ecc_read_page;
+		ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
+		ecc->write_page = sunxi_nfc_hw_ecc_write_page;
+	}
+
+	/* TODO: support DMA for raw accesses */
 	ecc->read_oob_raw = nand_read_oob_std;
 	ecc->write_oob_raw = nand_write_oob_std;
-	ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
 	layout = ecc->layout;
 	nsectors = mtd->writesize / ecc->size;
 
@@ -1888,16 +2161,34 @@ static int sunxi_nfc_probe(struct platform_device *pdev)
 	if (ret)
 		goto out_mod_clk_unprepare;
 
+	nfc->dmac = dma_request_slave_channel(dev, "rxtx");
+	if (nfc->dmac) {
+		struct dma_slave_config dmac_cfg = { };
+
+		dmac_cfg.src_addr = r->start + NFC_REG_IO_DATA;
+		dmac_cfg.dst_addr = dmac_cfg.src_addr;
+		dmac_cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+		dmac_cfg.dst_addr_width = dmac_cfg.src_addr_width;
+		dmac_cfg.src_maxburst = 4;
+		dmac_cfg.dst_maxburst = 4;
+		dmaengine_slave_config(nfc->dmac, &dmac_cfg);
+	} else {
+		dev_warn(dev, "failed to request rxtx DMA channel\n");
+	}
+
 	platform_set_drvdata(pdev, nfc);
 
 	ret = sunxi_nand_chips_init(dev, nfc);
 	if (ret) {
 		dev_err(dev, "failed to init nand chips\n");
-		goto out_mod_clk_unprepare;
+		goto out_release_dmac;
 	}
 
 	return 0;
 
+out_release_dmac:
+	if (nfc->dmac)
+		dma_release_channel(nfc->dmac);
 out_mod_clk_unprepare:
 	clk_disable_unprepare(nfc->mod_clk);
 out_ahb_clk_unprepare:
@@ -1911,6 +2202,8 @@ static int sunxi_nfc_remove(struct platform_device *pdev)
 	struct sunxi_nfc *nfc = platform_get_drvdata(pdev);
 
 	sunxi_nand_chips_cleanup(nfc);
+	if (nfc->dmac)
+		dma_release_channel(nfc->dmac);
 	clk_disable_unprepare(nfc->mod_clk);
 	clk_disable_unprepare(nfc->ahb_clk);
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 6/7] mtd: nand: sunxi: add support for DMA assisted operations
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: linux-arm-kernel

The sunxi NAND controller is able to pipeline ECC operations only when
operated in DMA mode, which improves a lot NAND throughput while keeping
CPU usage low.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 drivers/mtd/nand/sunxi_nand.c | 301 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 297 insertions(+), 4 deletions(-)

diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 07c3af7..7ba285e 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -277,6 +277,7 @@ struct sunxi_nfc {
 	unsigned long clk_rate;
 	struct list_head chips;
 	struct completion complete;
+	struct dma_chan *dmac;
 };
 
 static inline struct sunxi_nfc *to_sunxi_nfc(struct nand_hw_control *ctrl)
@@ -369,6 +370,68 @@ static int sunxi_nfc_rst(struct sunxi_nfc *nfc)
 	return ret;
 }
 
+static int sunxi_nfc_dma_op_prepare(struct mtd_info *mtd, const void *buf,
+				    int chunksize, int nchunks,
+				    enum dma_data_direction ddir,
+				    struct sg_table *sgt)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct dma_async_tx_descriptor *dmad;
+	enum dma_transfer_direction tdir;
+	dma_cookie_t dmat;
+	int ret;
+
+	if (ddir == DMA_FROM_DEVICE)
+		tdir = DMA_DEV_TO_MEM;
+	else
+		tdir = DMA_MEM_TO_DEV;
+
+	ret = mtd_map_buf(mtd, nfc->dev, sgt, buf, nchunks * chunksize,
+			  NULL, ddir);
+	if (ret)
+		return ret;
+
+	dmad = dmaengine_prep_slave_sg(nfc->dmac, sgt->sgl, sgt->nents,
+				       tdir, DMA_CTRL_ACK);
+	if (IS_ERR(dmad)) {
+		ret = PTR_ERR(dmad);
+		goto err_unmap_buf;
+	}
+
+	writel(readl(nfc->regs + NFC_REG_CTL) | NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+	writel(nchunks, nfc->regs + NFC_REG_SECTOR_NUM);
+	writel(chunksize, nfc->regs + NFC_REG_CNT);
+	dmat = dmaengine_submit(dmad);
+
+	ret = dma_submit_error(dmat);
+	if (ret)
+		goto err_clr_dma_flag;
+
+	return 0;
+
+err_clr_dma_flag:
+	writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+
+err_unmap_buf:
+	mtd_unmap_buf(mtd, nfc->dev, sgt, ddir);
+	return ret;
+}
+
+static void sunxi_nfc_dma_op_cleanup(struct mtd_info *mtd,
+				     enum dma_data_direction ddir,
+				     struct sg_table *sgt)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+
+	mtd_unmap_buf(mtd, nfc->dev, sgt, ddir);
+	writel(readl(nfc->regs + NFC_REG_CTL) & ~NFC_RAM_METHOD,
+	       nfc->regs + NFC_REG_CTL);
+}
+
 static int sunxi_nfc_dev_ready(struct mtd_info *mtd)
 {
 	struct nand_chip *nand = mtd_to_nand(mtd);
@@ -971,6 +1034,106 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
 		*cur_off = mtd->oobsize + mtd->writesize;
 }
 
+static int sunxi_nfc_hw_ecc_read_chunks_dma(struct mtd_info *mtd, uint8_t *buf,
+					    int oob_required, int page,
+					    int nchunks)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	unsigned int max_bitflips = 0;
+	int ret, i, raw_mode = 0;
+	struct sg_table sgt;
+
+	ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
+	if (ret)
+		return ret;
+
+	ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, nchunks,
+				       DMA_FROM_DEVICE, &sgt);
+	if (ret)
+		return ret;
+
+	sunxi_nfc_hw_ecc_enable(mtd);
+	sunxi_nfc_randomizer_config(mtd, page, false);
+	sunxi_nfc_randomizer_enable(mtd);
+
+	writel((NAND_CMD_RNDOUTSTART << 16) | (NAND_CMD_RNDOUT << 8) |
+	       NAND_CMD_READSTART, nfc->regs + NFC_REG_RCMD_SET);
+
+	dma_async_issue_pending(nfc->dmac);
+
+	writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD | NFC_DATA_TRANS,
+	       nfc->regs + NFC_REG_CMD);
+
+	ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
+	if (ret)
+		dmaengine_terminate_all(nfc->dmac);
+
+	sunxi_nfc_randomizer_disable(mtd);
+	sunxi_nfc_hw_ecc_disable(mtd);
+
+	sunxi_nfc_dma_op_cleanup(mtd, DMA_FROM_DEVICE, &sgt);
+
+	if (ret)
+		return ret;
+
+	for (i = 0; i < nchunks; i++) {
+		int data_off = i * ecc->size;
+		int oob_off = i * (ecc->bytes + 4);
+		u8 *data = buf + data_off;
+		u8 *oob = nand->oob_poi + oob_off;
+		bool randomized = nand->options & NAND_NEED_SCRAMBLING;
+		bool erased;
+
+		ret = sunxi_nfc_hw_ecc_correct(mtd, randomized ? data : NULL,
+				(randomized && oob_required) ? oob : NULL,
+				i, &erased);
+		if (ret < 0) {
+			/*
+			 * Re-read the data with the randomizer disabled to
+			 * identify bitflips in erased pages.
+			 */
+			if (nand->options & NAND_NEED_SCRAMBLING) {
+				/* TODO: use DMA to read page in raw mode */
+				nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
+					      data_off, -1);
+				nand->read_buf(mtd, data, ecc->size);
+			}
+
+			/* TODO: use DMA to retrieve OOB */
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			nand->read_buf(mtd, oob, ecc->bytes + 4);
+
+			ret = nand_check_erased_ecc_chunk(data,	ecc->size,
+							  oob, ecc->bytes + 4,
+							  NULL, 0,
+							  ecc->strength);
+			if (ret >= 0)
+				raw_mode = 1;
+		} else if (oob_required && !erased) {
+			/* TODO: use DMA to retrieve OOB */
+			nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+			nand->read_buf(mtd, oob, ecc->bytes + 4);
+
+			sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, i,
+							    !i, page);
+		}
+
+		if (erased)
+			raw_mode = 1;
+
+		sunxi_nfc_hw_ecc_update_stats(mtd, &max_bitflips, ret);
+	}
+
+	if (oob_required)
+		sunxi_nfc_hw_ecc_read_extra_oob(mtd, nand->oob_poi,
+						NULL, !raw_mode,
+						page);
+
+	return max_bitflips;
+}
+
 static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
 					const u8 *data, int data_off,
 					const u8 *oob, int oob_off,
@@ -1069,6 +1232,23 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
 	return max_bitflips;
 }
 
+static int sunxi_nfc_hw_ecc_read_page_dma(struct mtd_info *mtd,
+					  struct nand_chip *chip, u8 *buf,
+					  int oob_required, int page)
+{
+	int ret;
+
+	ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, oob_required, page,
+					       chip->ecc.steps);
+	if (ret >= 0)
+		return ret;
+
+	/* Fallback to PIO mode */
+	chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
+
+	return sunxi_nfc_hw_ecc_read_page(mtd, chip, buf, oob_required, page);
+}
+
 static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 					 struct nand_chip *chip,
 					 u32 data_offs, u32 readlen,
@@ -1102,6 +1282,25 @@ static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
 	return max_bitflips;
 }
 
+static int sunxi_nfc_hw_ecc_read_subpage_dma(struct mtd_info *mtd,
+					     struct nand_chip *chip,
+					     u32 data_offs, u32 readlen,
+					     u8 *buf, int page)
+{
+	int nchunks = DIV_ROUND_UP(data_offs + readlen, chip->ecc.size);
+	int ret;
+
+	ret = sunxi_nfc_hw_ecc_read_chunks_dma(mtd, buf, false, page, nchunks);
+	if (ret >= 0)
+		return ret;
+
+	/* Fallback to PIO mode */
+	chip->cmdfunc(mtd, NAND_CMD_RNDOUT, 0, -1);
+
+	return sunxi_nfc_hw_ecc_read_subpage(mtd, chip, data_offs, readlen,
+					     buf, page);
+}
+
 static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
 				       struct nand_chip *chip,
 				       const uint8_t *buf, int oob_required,
@@ -1134,6 +1333,69 @@ static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
 	return 0;
 }
 
+static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
+					   struct nand_chip *chip,
+					   const u8 *buf,
+					   int oob_required,
+					   int page)
+{
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+	struct nand_ecc_ctrl *ecc = &nand->ecc;
+	struct sg_table sgt;
+	int ret, i;
+
+	ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
+	if (ret)
+		return ret;
+
+	ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, ecc->steps,
+				       DMA_TO_DEVICE, &sgt);
+	if (ret)
+		goto pio_fallback;
+
+	for (i = 0; i < ecc->steps; i++) {
+		const u8 *oob = nand->oob_poi + (i * (ecc->bytes + 4));
+
+		sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, i, !i, page);
+	}
+
+	sunxi_nfc_hw_ecc_enable(mtd);
+	sunxi_nfc_randomizer_config(mtd, page, false);
+	sunxi_nfc_randomizer_enable(mtd);
+
+	writel((NAND_CMD_RNDIN << 8) | NAND_CMD_PAGEPROG,
+	       nfc->regs + NFC_REG_RCMD_SET);
+
+	dma_async_issue_pending(nfc->dmac);
+
+	writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD |
+	       NFC_DATA_TRANS | NFC_ACCESS_DIR,
+	       nfc->regs + NFC_REG_CMD);
+
+	ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
+	if (ret)
+		dmaengine_terminate_all(nfc->dmac);
+
+	sunxi_nfc_randomizer_disable(mtd);
+	sunxi_nfc_hw_ecc_disable(mtd);
+
+	sunxi_nfc_dma_op_cleanup(mtd, DMA_FROM_DEVICE, &sgt);
+
+	if (ret)
+		return ret;
+
+	if (oob_required || (chip->options & NAND_NEED_SCRAMBLING))
+		/* TODO: use DMA to transfer extra OOB bytes ? */
+		sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi,
+						 NULL, page);
+
+	return 0;
+
+pio_fallback:
+	return sunxi_nfc_hw_ecc_write_page(mtd, chip, buf, oob_required, page);
+}
+
 static int sunxi_nfc_hw_syndrome_ecc_read_page(struct mtd_info *mtd,
 					       struct nand_chip *chip,
 					       uint8_t *buf, int oob_required,
@@ -1501,6 +1763,9 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
 				       struct nand_ecc_ctrl *ecc,
 				       struct device_node *np)
 {
+	struct nand_chip *nand = mtd_to_nand(mtd);
+	struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
+	struct sunxi_nfc *nfc = to_sunxi_nfc(sunxi_nand->nand.controller);
 	struct nand_ecclayout *layout;
 	int nsectors;
 	int i, j;
@@ -1510,11 +1775,19 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
 	if (ret)
 		return ret;
 
-	ecc->read_page = sunxi_nfc_hw_ecc_read_page;
-	ecc->write_page = sunxi_nfc_hw_ecc_write_page;
+	if (nfc->dmac) {
+		ecc->read_page = sunxi_nfc_hw_ecc_read_page_dma;
+		ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage_dma;
+		ecc->write_page = sunxi_nfc_hw_ecc_write_page_dma;
+	} else {
+		ecc->read_page = sunxi_nfc_hw_ecc_read_page;
+		ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
+		ecc->write_page = sunxi_nfc_hw_ecc_write_page;
+	}
+
+	/* TODO: support DMA for raw accesses */
 	ecc->read_oob_raw = nand_read_oob_std;
 	ecc->write_oob_raw = nand_write_oob_std;
-	ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
 	layout = ecc->layout;
 	nsectors = mtd->writesize / ecc->size;
 
@@ -1888,16 +2161,34 @@ static int sunxi_nfc_probe(struct platform_device *pdev)
 	if (ret)
 		goto out_mod_clk_unprepare;
 
+	nfc->dmac = dma_request_slave_channel(dev, "rxtx");
+	if (nfc->dmac) {
+		struct dma_slave_config dmac_cfg = { };
+
+		dmac_cfg.src_addr = r->start + NFC_REG_IO_DATA;
+		dmac_cfg.dst_addr = dmac_cfg.src_addr;
+		dmac_cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
+		dmac_cfg.dst_addr_width = dmac_cfg.src_addr_width;
+		dmac_cfg.src_maxburst = 4;
+		dmac_cfg.dst_maxburst = 4;
+		dmaengine_slave_config(nfc->dmac, &dmac_cfg);
+	} else {
+		dev_warn(dev, "failed to request rxtx DMA channel\n");
+	}
+
 	platform_set_drvdata(pdev, nfc);
 
 	ret = sunxi_nand_chips_init(dev, nfc);
 	if (ret) {
 		dev_err(dev, "failed to init nand chips\n");
-		goto out_mod_clk_unprepare;
+		goto out_release_dmac;
 	}
 
 	return 0;
 
+out_release_dmac:
+	if (nfc->dmac)
+		dma_release_channel(nfc->dmac);
 out_mod_clk_unprepare:
 	clk_disable_unprepare(nfc->mod_clk);
 out_ahb_clk_unprepare:
@@ -1911,6 +2202,8 @@ static int sunxi_nfc_remove(struct platform_device *pdev)
 	struct sunxi_nfc *nfc = platform_get_drvdata(pdev);
 
 	sunxi_nand_chips_cleanup(nfc);
+	if (nfc->dmac)
+		dma_release_channel(nfc->dmac);
 	clk_disable_unprepare(nfc->mod_clk);
 	clk_disable_unprepare(nfc->ahb_clk);
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 7/7] mtd: nand: sunxi: update DT bindings
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris, linux-mtd
  Cc: Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree,
	Boris Brezillon

Document dmas and dma-names properties.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 Documentation/devicetree/bindings/mtd/sunxi-nand.txt | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/Documentation/devicetree/bindings/mtd/sunxi-nand.txt b/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
index 086d6f4..6fdf8f6 100644
--- a/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
+++ b/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
@@ -11,6 +11,10 @@ Required properties:
     * "ahb" : AHB gating clock
     * "mod" : nand controller clock
 
+Optional properties:
+- dmas : shall reference DMA channel associated to the NAND controller.
+- dma-names : shall be "rxtx".
+
 Optional children nodes:
 Children nodes represent the available nand chips.
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 7/7] mtd: nand: sunxi: update DT bindings
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA, Boris Brezillon

Document dmas and dma-names properties.

Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
---
 Documentation/devicetree/bindings/mtd/sunxi-nand.txt | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/Documentation/devicetree/bindings/mtd/sunxi-nand.txt b/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
index 086d6f4..6fdf8f6 100644
--- a/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
+++ b/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
@@ -11,6 +11,10 @@ Required properties:
     * "ahb" : AHB gating clock
     * "mod" : nand controller clock
 
+Optional properties:
+- dmas : shall reference DMA channel associated to the NAND controller.
+- dma-names : shall be "rxtx".
+
 Optional children nodes:
 Children nodes represent the available nand chips.
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* [PATCH 7/7] mtd: nand: sunxi: update DT bindings
@ 2016-03-08 11:15   ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 11:15 UTC (permalink / raw)
  To: linux-arm-kernel

Document dmas and dma-names properties.

Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
---
 Documentation/devicetree/bindings/mtd/sunxi-nand.txt | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/Documentation/devicetree/bindings/mtd/sunxi-nand.txt b/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
index 086d6f4..6fdf8f6 100644
--- a/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
+++ b/Documentation/devicetree/bindings/mtd/sunxi-nand.txt
@@ -11,6 +11,10 @@ Required properties:
     * "ahb" : AHB gating clock
     * "mod" : nand controller clock
 
+Optional properties:
+- dmas : shall reference DMA channel associated to the NAND controller.
+- dma-names : shall be "rxtx".
+
 Optional children nodes:
 Children nodes represent the available nand chips.
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 45+ messages in thread

* Re: [PATCH 5/7] mtd: provide helper to prepare buffers for DMA operations
@ 2016-03-08 14:18     ` Vinod Koul
  0 siblings, 0 replies; 45+ messages in thread
From: Vinod Koul @ 2016-03-08 14:18 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd, Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Dan Williams,
	dmaengine, Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree

On Tue, Mar 08, 2016 at 12:15:13PM +0100, Boris Brezillon wrote:
>  
> +#ifdef CONFIG_HAS_DMA

Shouldn't this be CONFIG_DMA_ENGINE as you are preparing these descriptors
for DMA transfer?

-- 
~Vinod

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 5/7] mtd: provide helper to prepare buffers for DMA operations
@ 2016-03-08 14:18     ` Vinod Koul
  0 siblings, 0 replies; 45+ messages in thread
From: Vinod Koul @ 2016-03-08 14:18 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Mark Brown,
	linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Dan Williams,
	dmaengine-u79uwXL29TY76Z2rM5mHXA, Mauro Carvalho Chehab,
	Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TZUIDd8j+nm9g

On Tue, Mar 08, 2016 at 12:15:13PM +0100, Boris Brezillon wrote:
>  
> +#ifdef CONFIG_HAS_DMA

Shouldn't this be CONFIG_DMA_ENGINE as you are preparing these descriptors
for DMA transfer?

-- 
~Vinod
--
To unsubscribe from this list: send the line "unsubscribe linux-spi" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH 5/7] mtd: provide helper to prepare buffers for DMA operations
@ 2016-03-08 14:18     ` Vinod Koul
  0 siblings, 0 replies; 45+ messages in thread
From: Vinod Koul @ 2016-03-08 14:18 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Mar 08, 2016 at 12:15:13PM +0100, Boris Brezillon wrote:
>  
> +#ifdef CONFIG_HAS_DMA

Shouldn't this be CONFIG_DMA_ENGINE as you are preparing these descriptors
for DMA transfer?

-- 
~Vinod

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 5/7] mtd: provide helper to prepare buffers for DMA operations
  2016-03-08 14:18     ` Vinod Koul
  (?)
@ 2016-03-08 14:46       ` Boris Brezillon
  -1 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 14:46 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd, Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Dan Williams,
	dmaengine, Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree

On Tue, 8 Mar 2016 19:48:53 +0530
Vinod Koul <vinod.koul@intel.com> wrote:

> On Tue, Mar 08, 2016 at 12:15:13PM +0100, Boris Brezillon wrote:
> >  
> > +#ifdef CONFIG_HAS_DMA
> 
> Shouldn't this be CONFIG_DMA_ENGINE as you are preparing these descriptors
> for DMA transfer?
> 

Nope, scatterlist users are not necessarily using a dmaengine, some IPs
are directly embedding a dedicated DMA engine, which has no driver in
drivers/dma/.

-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 5/7] mtd: provide helper to prepare buffers for DMA operations
@ 2016-03-08 14:46       ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 14:46 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Mark Brown,
	linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Dan Williams,
	dmaengine-u79uwXL29TY76Z2rM5mHXA, Mauro Carvalho Chehab,
	Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA

On Tue, 8 Mar 2016 19:48:53 +0530
Vinod Koul <vinod.koul-ral2JQCrhuEAvxtiuMwx3w@public.gmane.org> wrote:

> On Tue, Mar 08, 2016 at 12:15:13PM +0100, Boris Brezillon wrote:
> >  
> > +#ifdef CONFIG_HAS_DMA
> 
> Shouldn't this be CONFIG_DMA_ENGINE as you are preparing these descriptors
> for DMA transfer?
> 

Nope, scatterlist users are not necessarily using a dmaengine, some IPs
are directly embedding a dedicated DMA engine, which has no driver in
drivers/dma/.

-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH 5/7] mtd: provide helper to prepare buffers for DMA operations
@ 2016-03-08 14:46       ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 14:46 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, 8 Mar 2016 19:48:53 +0530
Vinod Koul <vinod.koul@intel.com> wrote:

> On Tue, Mar 08, 2016 at 12:15:13PM +0100, Boris Brezillon wrote:
> >  
> > +#ifdef CONFIG_HAS_DMA
> 
> Shouldn't this be CONFIG_DMA_ENGINE as you are preparing these descriptors
> for DMA transfer?
> 

Nope, scatterlist users are not necessarily using a dmaengine, some IPs
are directly embedding a dedicated DMA engine, which has no driver in
drivers/dma/.

-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-08 16:51     ` Laurent Pinchart
  0 siblings, 0 replies; 45+ messages in thread
From: Laurent Pinchart @ 2016-03-08 16:51 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd, Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala, devicetree

Hi Boris,

Thank you for the patch.

On Tuesday 08 March 2016 12:15:12 Boris Brezillon wrote:
> sg_alloc_table_from_buf() provides an easy solution to create an sg_table
> from a virtual address pointer. This function takes care of dealing with
> vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
> DMA transfer size).
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
> ---
>  include/linux/scatterlist.h |  24 ++++++++
>  lib/scatterlist.c           | 142 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 166 insertions(+)
> 
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 556ec1e..4a75362 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -41,6 +41,27 @@ struct sg_table {
>  	unsigned int orig_nents;	/* original size of list */
>  };
> 
> +/**
> + * struct sg_constraints - SG constraints structure
> + *
> + * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be
> smaller
> + *		   than this value. Zero means no constraint.
> + * @required_alignment: minimum alignment. Is used for both size and
> pointer
> + *			alignment. If this constraint is not met, the function
> + *			should return -EINVAL.
> + * @preferred_alignment: preferred alignment. Mainly used to optimize
> + *			 throughput when the DMA engine performs better when
> + *			 doing aligned accesses.
> + *
> + * This structure is here to help sg_alloc_table_from_buf() create the
> optimal
> + * SG list based on DMA engine constraints.
> + */
> +struct sg_constraints {
> +	size_t max_chunk_len;
> +	size_t required_alignment;
> +	size_t preferred_alignment;
> +};
> +
>  /*
>   * Notes on SG table design.
>   *
> @@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  	struct page **pages, unsigned int n_pages,
>  	unsigned long offset, unsigned long size,
>  	gfp_t gfp_mask);
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t
> len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask);
> 
>  size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void
> *buf, size_t buflen, off_t skip, bool to_buffer);
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index bafa993..706b583 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  }
>  EXPORT_SYMBOL(sg_alloc_table_from_pages);
> 
> +static size_t sg_buf_chunk_len(const void *buf, size_t len,
> +			       const struct sg_constraints *cons)
> +{
> +	size_t chunk_len = len;
> +
> +	if (cons->max_chunk_len)
> +		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
> +
> +	if (is_vmalloc_addr(buf))
> +		chunk_len = min_t(size_t, chunk_len,
> +				  PAGE_SIZE - offset_in_page(buf));

This will lead to page-sized scatter-gather entries even for pages of the 
vmalloc memory that happen to be physically contiguous. That works, but I 
wonder whether we'd want to be more efficient.

> +	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
> +		const void *aligned_buf = PTR_ALIGN(buf,
> +						    cons->preferred_alignment);
> +		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
> +
> +		chunk_len = min_t(size_t, chunk_len, unaligned_len);
> +	} else if (chunk_len > cons->preferred_alignment) {
> +		chunk_len &= ~(cons->preferred_alignment - 1);
> +	}
> +
> +	return chunk_len;
> +}
> +
> +#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
> +	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
> +	     len;							\
> +	     len -= chunk_len, buf += chunk_len,			\
> +	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
> +
> +static int sg_check_constraints(struct sg_constraints *cons,
> +				const void *buf, size_t len)
> +{
> +	if (!cons->required_alignment)
> +		cons->required_alignment = 1;
> +
> +	if (!cons->preferred_alignment)
> +		cons->preferred_alignment = cons->required_alignment;
> +
> +	/* Test if buf and len are properly aligned. */
> +	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
> +	    !IS_ALIGNED(len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * if the buffer has been vmallocated and required_alignment is
> +	 * more than PAGE_SIZE we cannot guarantee it.
> +	 */
> +	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
> +		return -EINVAL;
> +
> +	/*
> +	 * max_chunk_len has to be aligned to required_alignment to
> +	 * guarantee that all buffer chunks are aligned correctly.
> +	 */
> +	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * preferred_alignment has to be aligned to required_alignment
> +	 * to avoid misalignment of buffer chunks.
> +	 */
> +	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +/**
> + * sg_alloc_table_from_buf - create an SG table from a buffer
> + *
> + * @sgt: SG table
> + * @buf: buffer you want to create this SG table from
> + * @len: length of buf
> + * @constraints: optional constraints to take into account when creating
> + *		 the SG table. Can be NULL if no specific constraints are
> + *		 required.
> + * @gfp_mask: type of allocation to use when creating the table
> + *
> + * This function creates an SG table from a buffer, its length and some
> + * SG constraints.
> + *
> + * Note: This function supports vmallocated buffers.

What other types of memory does it support ? kmalloc() quite obviously, are 
there others ? I think you should explicitly list the memory types that the 
function intends to support.

> + */
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t
> len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask)
> +{
> +	struct sg_constraints cons = { };
> +	size_t remaining, chunk_len;
> +	const void *sg_buf;
> +	int i, ret;
> +
> +	if (constraints)
> +		cons = *constraints;
> +
> +	ret = sg_check_constraints(&cons, buf, len);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
> +		i++;
> +
> +	ret = sg_alloc_table(sgt, i, gfp_mask);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
> +		if (is_vmalloc_addr(sg_buf)) {
> +			struct page *vm_page;
> +
> +			vm_page = vmalloc_to_page(sg_buf);
> +			if (!vm_page) {
> +				ret = -ENOMEM;
> +				goto err_free_table;
> +			}
> +
> +			sg_set_page(&sgt->sgl[i], vm_page, chunk_len,
> +				    offset_in_page(sg_buf));
> +		} else {
> +			sg_set_buf(&sgt->sgl[i], sg_buf, chunk_len);
> +		}
> +
> +		i++;
> +	}
> +
> +	return 0;
> +
> +err_free_table:
> +	sg_free_table(sgt);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(sg_alloc_table_from_buf);
> +
>  void __sg_page_iter_start(struct sg_page_iter *piter,
>  			  struct scatterlist *sglist, unsigned int nents,
>  			  unsigned long pgoffset)

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-08 16:51     ` Laurent Pinchart
  0 siblings, 0 replies; 45+ messages in thread
From: Laurent Pinchart @ 2016-03-08 16:51 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Mark Brown,
	linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA

Hi Boris,

Thank you for the patch.

On Tuesday 08 March 2016 12:15:12 Boris Brezillon wrote:
> sg_alloc_table_from_buf() provides an easy solution to create an sg_table
> from a virtual address pointer. This function takes care of dealing with
> vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
> DMA transfer size).
> 
> Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
> ---
>  include/linux/scatterlist.h |  24 ++++++++
>  lib/scatterlist.c           | 142 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 166 insertions(+)
> 
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 556ec1e..4a75362 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -41,6 +41,27 @@ struct sg_table {
>  	unsigned int orig_nents;	/* original size of list */
>  };
> 
> +/**
> + * struct sg_constraints - SG constraints structure
> + *
> + * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be
> smaller
> + *		   than this value. Zero means no constraint.
> + * @required_alignment: minimum alignment. Is used for both size and
> pointer
> + *			alignment. If this constraint is not met, the function
> + *			should return -EINVAL.
> + * @preferred_alignment: preferred alignment. Mainly used to optimize
> + *			 throughput when the DMA engine performs better when
> + *			 doing aligned accesses.
> + *
> + * This structure is here to help sg_alloc_table_from_buf() create the
> optimal
> + * SG list based on DMA engine constraints.
> + */
> +struct sg_constraints {
> +	size_t max_chunk_len;
> +	size_t required_alignment;
> +	size_t preferred_alignment;
> +};
> +
>  /*
>   * Notes on SG table design.
>   *
> @@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  	struct page **pages, unsigned int n_pages,
>  	unsigned long offset, unsigned long size,
>  	gfp_t gfp_mask);
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t
> len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask);
> 
>  size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void
> *buf, size_t buflen, off_t skip, bool to_buffer);
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index bafa993..706b583 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  }
>  EXPORT_SYMBOL(sg_alloc_table_from_pages);
> 
> +static size_t sg_buf_chunk_len(const void *buf, size_t len,
> +			       const struct sg_constraints *cons)
> +{
> +	size_t chunk_len = len;
> +
> +	if (cons->max_chunk_len)
> +		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
> +
> +	if (is_vmalloc_addr(buf))
> +		chunk_len = min_t(size_t, chunk_len,
> +				  PAGE_SIZE - offset_in_page(buf));

This will lead to page-sized scatter-gather entries even for pages of the 
vmalloc memory that happen to be physically contiguous. That works, but I 
wonder whether we'd want to be more efficient.

> +	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
> +		const void *aligned_buf = PTR_ALIGN(buf,
> +						    cons->preferred_alignment);
> +		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
> +
> +		chunk_len = min_t(size_t, chunk_len, unaligned_len);
> +	} else if (chunk_len > cons->preferred_alignment) {
> +		chunk_len &= ~(cons->preferred_alignment - 1);
> +	}
> +
> +	return chunk_len;
> +}
> +
> +#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
> +	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
> +	     len;							\
> +	     len -= chunk_len, buf += chunk_len,			\
> +	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
> +
> +static int sg_check_constraints(struct sg_constraints *cons,
> +				const void *buf, size_t len)
> +{
> +	if (!cons->required_alignment)
> +		cons->required_alignment = 1;
> +
> +	if (!cons->preferred_alignment)
> +		cons->preferred_alignment = cons->required_alignment;
> +
> +	/* Test if buf and len are properly aligned. */
> +	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
> +	    !IS_ALIGNED(len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * if the buffer has been vmallocated and required_alignment is
> +	 * more than PAGE_SIZE we cannot guarantee it.
> +	 */
> +	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
> +		return -EINVAL;
> +
> +	/*
> +	 * max_chunk_len has to be aligned to required_alignment to
> +	 * guarantee that all buffer chunks are aligned correctly.
> +	 */
> +	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * preferred_alignment has to be aligned to required_alignment
> +	 * to avoid misalignment of buffer chunks.
> +	 */
> +	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +/**
> + * sg_alloc_table_from_buf - create an SG table from a buffer
> + *
> + * @sgt: SG table
> + * @buf: buffer you want to create this SG table from
> + * @len: length of buf
> + * @constraints: optional constraints to take into account when creating
> + *		 the SG table. Can be NULL if no specific constraints are
> + *		 required.
> + * @gfp_mask: type of allocation to use when creating the table
> + *
> + * This function creates an SG table from a buffer, its length and some
> + * SG constraints.
> + *
> + * Note: This function supports vmallocated buffers.

What other types of memory does it support ? kmalloc() quite obviously, are 
there others ? I think you should explicitly list the memory types that the 
function intends to support.

> + */
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t
> len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask)
> +{
> +	struct sg_constraints cons = { };
> +	size_t remaining, chunk_len;
> +	const void *sg_buf;
> +	int i, ret;
> +
> +	if (constraints)
> +		cons = *constraints;
> +
> +	ret = sg_check_constraints(&cons, buf, len);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
> +		i++;
> +
> +	ret = sg_alloc_table(sgt, i, gfp_mask);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
> +		if (is_vmalloc_addr(sg_buf)) {
> +			struct page *vm_page;
> +
> +			vm_page = vmalloc_to_page(sg_buf);
> +			if (!vm_page) {
> +				ret = -ENOMEM;
> +				goto err_free_table;
> +			}
> +
> +			sg_set_page(&sgt->sgl[i], vm_page, chunk_len,
> +				    offset_in_page(sg_buf));
> +		} else {
> +			sg_set_buf(&sgt->sgl[i], sg_buf, chunk_len);
> +		}
> +
> +		i++;
> +	}
> +
> +	return 0;
> +
> +err_free_table:
> +	sg_free_table(sgt);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(sg_alloc_table_from_buf);
> +
>  void __sg_page_iter_start(struct sg_page_iter *piter,
>  			  struct scatterlist *sglist, unsigned int nents,
>  			  unsigned long pgoffset)

-- 
Regards,

Laurent Pinchart

--
To unsubscribe from this list: send the line "unsubscribe linux-spi" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-08 16:51     ` Laurent Pinchart
  0 siblings, 0 replies; 45+ messages in thread
From: Laurent Pinchart @ 2016-03-08 16:51 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Boris,

Thank you for the patch.

On Tuesday 08 March 2016 12:15:12 Boris Brezillon wrote:
> sg_alloc_table_from_buf() provides an easy solution to create an sg_table
> from a virtual address pointer. This function takes care of dealing with
> vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
> DMA transfer size).
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
> ---
>  include/linux/scatterlist.h |  24 ++++++++
>  lib/scatterlist.c           | 142 +++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 166 insertions(+)
> 
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 556ec1e..4a75362 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -41,6 +41,27 @@ struct sg_table {
>  	unsigned int orig_nents;	/* original size of list */
>  };
> 
> +/**
> + * struct sg_constraints - SG constraints structure
> + *
> + * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be
> smaller
> + *		   than this value. Zero means no constraint.
> + * @required_alignment: minimum alignment. Is used for both size and
> pointer
> + *			alignment. If this constraint is not met, the function
> + *			should return -EINVAL.
> + * @preferred_alignment: preferred alignment. Mainly used to optimize
> + *			 throughput when the DMA engine performs better when
> + *			 doing aligned accesses.
> + *
> + * This structure is here to help sg_alloc_table_from_buf() create the
> optimal
> + * SG list based on DMA engine constraints.
> + */
> +struct sg_constraints {
> +	size_t max_chunk_len;
> +	size_t required_alignment;
> +	size_t preferred_alignment;
> +};
> +
>  /*
>   * Notes on SG table design.
>   *
> @@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  	struct page **pages, unsigned int n_pages,
>  	unsigned long offset, unsigned long size,
>  	gfp_t gfp_mask);
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t
> len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask);
> 
>  size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void
> *buf, size_t buflen, off_t skip, bool to_buffer);
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index bafa993..706b583 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  }
>  EXPORT_SYMBOL(sg_alloc_table_from_pages);
> 
> +static size_t sg_buf_chunk_len(const void *buf, size_t len,
> +			       const struct sg_constraints *cons)
> +{
> +	size_t chunk_len = len;
> +
> +	if (cons->max_chunk_len)
> +		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
> +
> +	if (is_vmalloc_addr(buf))
> +		chunk_len = min_t(size_t, chunk_len,
> +				  PAGE_SIZE - offset_in_page(buf));

This will lead to page-sized scatter-gather entries even for pages of the 
vmalloc memory that happen to be physically contiguous. That works, but I 
wonder whether we'd want to be more efficient.

> +	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
> +		const void *aligned_buf = PTR_ALIGN(buf,
> +						    cons->preferred_alignment);
> +		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
> +
> +		chunk_len = min_t(size_t, chunk_len, unaligned_len);
> +	} else if (chunk_len > cons->preferred_alignment) {
> +		chunk_len &= ~(cons->preferred_alignment - 1);
> +	}
> +
> +	return chunk_len;
> +}
> +
> +#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
> +	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
> +	     len;							\
> +	     len -= chunk_len, buf += chunk_len,			\
> +	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
> +
> +static int sg_check_constraints(struct sg_constraints *cons,
> +				const void *buf, size_t len)
> +{
> +	if (!cons->required_alignment)
> +		cons->required_alignment = 1;
> +
> +	if (!cons->preferred_alignment)
> +		cons->preferred_alignment = cons->required_alignment;
> +
> +	/* Test if buf and len are properly aligned. */
> +	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
> +	    !IS_ALIGNED(len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * if the buffer has been vmallocated and required_alignment is
> +	 * more than PAGE_SIZE we cannot guarantee it.
> +	 */
> +	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
> +		return -EINVAL;
> +
> +	/*
> +	 * max_chunk_len has to be aligned to required_alignment to
> +	 * guarantee that all buffer chunks are aligned correctly.
> +	 */
> +	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * preferred_alignment has to be aligned to required_alignment
> +	 * to avoid misalignment of buffer chunks.
> +	 */
> +	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +/**
> + * sg_alloc_table_from_buf - create an SG table from a buffer
> + *
> + * @sgt: SG table
> + * @buf: buffer you want to create this SG table from
> + * @len: length of buf
> + * @constraints: optional constraints to take into account when creating
> + *		 the SG table. Can be NULL if no specific constraints are
> + *		 required.
> + * @gfp_mask: type of allocation to use when creating the table
> + *
> + * This function creates an SG table from a buffer, its length and some
> + * SG constraints.
> + *
> + * Note: This function supports vmallocated buffers.

What other types of memory does it support ? kmalloc() quite obviously, are 
there others ? I think you should explicitly list the memory types that the 
function intends to support.

> + */
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t
> len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask)
> +{
> +	struct sg_constraints cons = { };
> +	size_t remaining, chunk_len;
> +	const void *sg_buf;
> +	int i, ret;
> +
> +	if (constraints)
> +		cons = *constraints;
> +
> +	ret = sg_check_constraints(&cons, buf, len);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
> +		i++;
> +
> +	ret = sg_alloc_table(sgt, i, gfp_mask);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
> +		if (is_vmalloc_addr(sg_buf)) {
> +			struct page *vm_page;
> +
> +			vm_page = vmalloc_to_page(sg_buf);
> +			if (!vm_page) {
> +				ret = -ENOMEM;
> +				goto err_free_table;
> +			}
> +
> +			sg_set_page(&sgt->sgl[i], vm_page, chunk_len,
> +				    offset_in_page(sg_buf));
> +		} else {
> +			sg_set_buf(&sgt->sgl[i], sg_buf, chunk_len);
> +		}
> +
> +		i++;
> +	}
> +
> +	return 0;
> +
> +err_free_table:
> +	sg_free_table(sgt);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(sg_alloc_table_from_buf);
> +
>  void __sg_page_iter_start(struct sg_page_iter *piter,
>  			  struct scatterlist *sglist, unsigned int nents,
>  			  unsigned long pgoffset)

-- 
Regards,

Laurent Pinchart

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
  2016-03-08 16:51     ` Laurent Pinchart
  (?)
@ 2016-03-08 16:57       ` Boris Brezillon
  -1 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 16:57 UTC (permalink / raw)
  To: Laurent Pinchart, devicetree
  Cc: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd, Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	linux-media, Rob Herring, Pawel Moll, Mark Rutland, Ian Campbell,
	Kumar Gala

Hi Laurent,

On Tue, 08 Mar 2016 18:51:49 +0200
Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:

> Hi Boris,
> 
> Thank you for the patch.
> 
> On Tuesday 08 March 2016 12:15:12 Boris Brezillon wrote:
> > sg_alloc_table_from_buf() provides an easy solution to create an sg_table
> > from a virtual address pointer. This function takes care of dealing with
> > vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
> > DMA transfer size).
> > 
> > Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
> > ---
> >  include/linux/scatterlist.h |  24 ++++++++
> >  lib/scatterlist.c           | 142 +++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 166 insertions(+)
> > 
> > diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> > index 556ec1e..4a75362 100644
> > --- a/include/linux/scatterlist.h
> > +++ b/include/linux/scatterlist.h
> > @@ -41,6 +41,27 @@ struct sg_table {
> >  	unsigned int orig_nents;	/* original size of list */
> >  };
> > 
> > +/**
> > + * struct sg_constraints - SG constraints structure
> > + *
> > + * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be
> > smaller
> > + *		   than this value. Zero means no constraint.
> > + * @required_alignment: minimum alignment. Is used for both size and
> > pointer
> > + *			alignment. If this constraint is not met, the function
> > + *			should return -EINVAL.
> > + * @preferred_alignment: preferred alignment. Mainly used to optimize
> > + *			 throughput when the DMA engine performs better when
> > + *			 doing aligned accesses.
> > + *
> > + * This structure is here to help sg_alloc_table_from_buf() create the
> > optimal
> > + * SG list based on DMA engine constraints.
> > + */
> > +struct sg_constraints {
> > +	size_t max_chunk_len;
> > +	size_t required_alignment;
> > +	size_t preferred_alignment;
> > +};
> > +
> >  /*
> >   * Notes on SG table design.
> >   *
> > @@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
> >  	struct page **pages, unsigned int n_pages,
> >  	unsigned long offset, unsigned long size,
> >  	gfp_t gfp_mask);
> > +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t
> > len,
> > +			    const struct sg_constraints *constraints,
> > +			    gfp_t gfp_mask);
> > 
> >  size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void
> > *buf, size_t buflen, off_t skip, bool to_buffer);
> > diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> > index bafa993..706b583 100644
> > --- a/lib/scatterlist.c
> > +++ b/lib/scatterlist.c
> > @@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
> >  }
> >  EXPORT_SYMBOL(sg_alloc_table_from_pages);
> > 
> > +static size_t sg_buf_chunk_len(const void *buf, size_t len,
> > +			       const struct sg_constraints *cons)
> > +{
> > +	size_t chunk_len = len;
> > +
> > +	if (cons->max_chunk_len)
> > +		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
> > +
> > +	if (is_vmalloc_addr(buf))
> > +		chunk_len = min_t(size_t, chunk_len,
> > +				  PAGE_SIZE - offset_in_page(buf));
> 
> This will lead to page-sized scatter-gather entries even for pages of the 
> vmalloc memory that happen to be physically contiguous. That works, but I 
> wonder whether we'd want to be more efficient.

Hm, I thought dma_map_sg() was taking care of merging pĥysically
contiguous memory regions, but maybe I'm wrong. Anyway, that's
definitely something I can add at this level.

> 
> > +	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
> > +		const void *aligned_buf = PTR_ALIGN(buf,
> > +						    cons->preferred_alignment);
> > +		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
> > +
> > +		chunk_len = min_t(size_t, chunk_len, unaligned_len);
> > +	} else if (chunk_len > cons->preferred_alignment) {
> > +		chunk_len &= ~(cons->preferred_alignment - 1);
> > +	}
> > +
> > +	return chunk_len;
> > +}
> > +
> > +#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
> > +	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
> > +	     len;							\
> > +	     len -= chunk_len, buf += chunk_len,			\
> > +	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
> > +
> > +static int sg_check_constraints(struct sg_constraints *cons,
> > +				const void *buf, size_t len)
> > +{
> > +	if (!cons->required_alignment)
> > +		cons->required_alignment = 1;
> > +
> > +	if (!cons->preferred_alignment)
> > +		cons->preferred_alignment = cons->required_alignment;
> > +
> > +	/* Test if buf and len are properly aligned. */
> > +	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
> > +	    !IS_ALIGNED(len, cons->required_alignment))
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * if the buffer has been vmallocated and required_alignment is
> > +	 * more than PAGE_SIZE we cannot guarantee it.
> > +	 */
> > +	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * max_chunk_len has to be aligned to required_alignment to
> > +	 * guarantee that all buffer chunks are aligned correctly.
> > +	 */
> > +	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * preferred_alignment has to be aligned to required_alignment
> > +	 * to avoid misalignment of buffer chunks.
> > +	 */
> > +	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
> > +		return -EINVAL;
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * sg_alloc_table_from_buf - create an SG table from a buffer
> > + *
> > + * @sgt: SG table
> > + * @buf: buffer you want to create this SG table from
> > + * @len: length of buf
> > + * @constraints: optional constraints to take into account when creating
> > + *		 the SG table. Can be NULL if no specific constraints are
> > + *		 required.
> > + * @gfp_mask: type of allocation to use when creating the table
> > + *
> > + * This function creates an SG table from a buffer, its length and some
> > + * SG constraints.
> > + *
> > + * Note: This function supports vmallocated buffers.
> 
> What other types of memory does it support ? kmalloc() quite obviously, are 
> there others ? I think you should explicitly list the memory types that the 
> function intends to support.

Sure, I can explicitly list all memory types. Do you see any other kind
of memory other than the vmalloced and physically-contiguous ones?

Thanks for your review.

Boris
-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-08 16:57       ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 16:57 UTC (permalink / raw)
  To: Laurent Pinchart, devicetree-u79uwXL29TY76Z2rM5mHXA
  Cc: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Mark Brown,
	linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala

Hi Laurent,

On Tue, 08 Mar 2016 18:51:49 +0200
Laurent Pinchart <laurent.pinchart-ryLnwIuWjnjg/C1BVhZhaw@public.gmane.org> wrote:

> Hi Boris,
> 
> Thank you for the patch.
> 
> On Tuesday 08 March 2016 12:15:12 Boris Brezillon wrote:
> > sg_alloc_table_from_buf() provides an easy solution to create an sg_table
> > from a virtual address pointer. This function takes care of dealing with
> > vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
> > DMA transfer size).
> > 
> > Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
> > ---
> >  include/linux/scatterlist.h |  24 ++++++++
> >  lib/scatterlist.c           | 142 +++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 166 insertions(+)
> > 
> > diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> > index 556ec1e..4a75362 100644
> > --- a/include/linux/scatterlist.h
> > +++ b/include/linux/scatterlist.h
> > @@ -41,6 +41,27 @@ struct sg_table {
> >  	unsigned int orig_nents;	/* original size of list */
> >  };
> > 
> > +/**
> > + * struct sg_constraints - SG constraints structure
> > + *
> > + * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be
> > smaller
> > + *		   than this value. Zero means no constraint.
> > + * @required_alignment: minimum alignment. Is used for both size and
> > pointer
> > + *			alignment. If this constraint is not met, the function
> > + *			should return -EINVAL.
> > + * @preferred_alignment: preferred alignment. Mainly used to optimize
> > + *			 throughput when the DMA engine performs better when
> > + *			 doing aligned accesses.
> > + *
> > + * This structure is here to help sg_alloc_table_from_buf() create the
> > optimal
> > + * SG list based on DMA engine constraints.
> > + */
> > +struct sg_constraints {
> > +	size_t max_chunk_len;
> > +	size_t required_alignment;
> > +	size_t preferred_alignment;
> > +};
> > +
> >  /*
> >   * Notes on SG table design.
> >   *
> > @@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
> >  	struct page **pages, unsigned int n_pages,
> >  	unsigned long offset, unsigned long size,
> >  	gfp_t gfp_mask);
> > +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t
> > len,
> > +			    const struct sg_constraints *constraints,
> > +			    gfp_t gfp_mask);
> > 
> >  size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void
> > *buf, size_t buflen, off_t skip, bool to_buffer);
> > diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> > index bafa993..706b583 100644
> > --- a/lib/scatterlist.c
> > +++ b/lib/scatterlist.c
> > @@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
> >  }
> >  EXPORT_SYMBOL(sg_alloc_table_from_pages);
> > 
> > +static size_t sg_buf_chunk_len(const void *buf, size_t len,
> > +			       const struct sg_constraints *cons)
> > +{
> > +	size_t chunk_len = len;
> > +
> > +	if (cons->max_chunk_len)
> > +		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
> > +
> > +	if (is_vmalloc_addr(buf))
> > +		chunk_len = min_t(size_t, chunk_len,
> > +				  PAGE_SIZE - offset_in_page(buf));
> 
> This will lead to page-sized scatter-gather entries even for pages of the 
> vmalloc memory that happen to be physically contiguous. That works, but I 
> wonder whether we'd want to be more efficient.

Hm, I thought dma_map_sg() was taking care of merging pĥysically
contiguous memory regions, but maybe I'm wrong. Anyway, that's
definitely something I can add at this level.

> 
> > +	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
> > +		const void *aligned_buf = PTR_ALIGN(buf,
> > +						    cons->preferred_alignment);
> > +		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
> > +
> > +		chunk_len = min_t(size_t, chunk_len, unaligned_len);
> > +	} else if (chunk_len > cons->preferred_alignment) {
> > +		chunk_len &= ~(cons->preferred_alignment - 1);
> > +	}
> > +
> > +	return chunk_len;
> > +}
> > +
> > +#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
> > +	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
> > +	     len;							\
> > +	     len -= chunk_len, buf += chunk_len,			\
> > +	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
> > +
> > +static int sg_check_constraints(struct sg_constraints *cons,
> > +				const void *buf, size_t len)
> > +{
> > +	if (!cons->required_alignment)
> > +		cons->required_alignment = 1;
> > +
> > +	if (!cons->preferred_alignment)
> > +		cons->preferred_alignment = cons->required_alignment;
> > +
> > +	/* Test if buf and len are properly aligned. */
> > +	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
> > +	    !IS_ALIGNED(len, cons->required_alignment))
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * if the buffer has been vmallocated and required_alignment is
> > +	 * more than PAGE_SIZE we cannot guarantee it.
> > +	 */
> > +	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * max_chunk_len has to be aligned to required_alignment to
> > +	 * guarantee that all buffer chunks are aligned correctly.
> > +	 */
> > +	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * preferred_alignment has to be aligned to required_alignment
> > +	 * to avoid misalignment of buffer chunks.
> > +	 */
> > +	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
> > +		return -EINVAL;
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * sg_alloc_table_from_buf - create an SG table from a buffer
> > + *
> > + * @sgt: SG table
> > + * @buf: buffer you want to create this SG table from
> > + * @len: length of buf
> > + * @constraints: optional constraints to take into account when creating
> > + *		 the SG table. Can be NULL if no specific constraints are
> > + *		 required.
> > + * @gfp_mask: type of allocation to use when creating the table
> > + *
> > + * This function creates an SG table from a buffer, its length and some
> > + * SG constraints.
> > + *
> > + * Note: This function supports vmallocated buffers.
> 
> What other types of memory does it support ? kmalloc() quite obviously, are 
> there others ? I think you should explicitly list the memory types that the 
> function intends to support.

Sure, I can explicitly list all memory types. Do you see any other kind
of memory other than the vmalloced and physically-contiguous ones?

Thanks for your review.

Boris
-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

-- 
You received this message because you are subscribed to the Google Groups "linux-sunxi" group.
To unsubscribe from this group and stop receiving emails from it, send an email to linux-sunxi+unsubscribe-/JYPxA39Uh5TLH3MbocFF+G/Ez6ZCGd0@public.gmane.org
For more options, visit https://groups.google.com/d/optout.

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-08 16:57       ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-08 16:57 UTC (permalink / raw)
  To: linux-arm-kernel

Hi Laurent,

On Tue, 08 Mar 2016 18:51:49 +0200
Laurent Pinchart <laurent.pinchart@ideasonboard.com> wrote:

> Hi Boris,
> 
> Thank you for the patch.
> 
> On Tuesday 08 March 2016 12:15:12 Boris Brezillon wrote:
> > sg_alloc_table_from_buf() provides an easy solution to create an sg_table
> > from a virtual address pointer. This function takes care of dealing with
> > vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
> > DMA transfer size).
> > 
> > Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
> > ---
> >  include/linux/scatterlist.h |  24 ++++++++
> >  lib/scatterlist.c           | 142 +++++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 166 insertions(+)
> > 
> > diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> > index 556ec1e..4a75362 100644
> > --- a/include/linux/scatterlist.h
> > +++ b/include/linux/scatterlist.h
> > @@ -41,6 +41,27 @@ struct sg_table {
> >  	unsigned int orig_nents;	/* original size of list */
> >  };
> > 
> > +/**
> > + * struct sg_constraints - SG constraints structure
> > + *
> > + * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be
> > smaller
> > + *		   than this value. Zero means no constraint.
> > + * @required_alignment: minimum alignment. Is used for both size and
> > pointer
> > + *			alignment. If this constraint is not met, the function
> > + *			should return -EINVAL.
> > + * @preferred_alignment: preferred alignment. Mainly used to optimize
> > + *			 throughput when the DMA engine performs better when
> > + *			 doing aligned accesses.
> > + *
> > + * This structure is here to help sg_alloc_table_from_buf() create the
> > optimal
> > + * SG list based on DMA engine constraints.
> > + */
> > +struct sg_constraints {
> > +	size_t max_chunk_len;
> > +	size_t required_alignment;
> > +	size_t preferred_alignment;
> > +};
> > +
> >  /*
> >   * Notes on SG table design.
> >   *
> > @@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
> >  	struct page **pages, unsigned int n_pages,
> >  	unsigned long offset, unsigned long size,
> >  	gfp_t gfp_mask);
> > +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t
> > len,
> > +			    const struct sg_constraints *constraints,
> > +			    gfp_t gfp_mask);
> > 
> >  size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void
> > *buf, size_t buflen, off_t skip, bool to_buffer);
> > diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> > index bafa993..706b583 100644
> > --- a/lib/scatterlist.c
> > +++ b/lib/scatterlist.c
> > @@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
> >  }
> >  EXPORT_SYMBOL(sg_alloc_table_from_pages);
> > 
> > +static size_t sg_buf_chunk_len(const void *buf, size_t len,
> > +			       const struct sg_constraints *cons)
> > +{
> > +	size_t chunk_len = len;
> > +
> > +	if (cons->max_chunk_len)
> > +		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
> > +
> > +	if (is_vmalloc_addr(buf))
> > +		chunk_len = min_t(size_t, chunk_len,
> > +				  PAGE_SIZE - offset_in_page(buf));
> 
> This will lead to page-sized scatter-gather entries even for pages of the 
> vmalloc memory that happen to be physically contiguous. That works, but I 
> wonder whether we'd want to be more efficient.

Hm, I thought dma_map_sg() was taking care of merging p?ysically
contiguous memory regions, but maybe I'm wrong. Anyway, that's
definitely something I can add at this level.

> 
> > +	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
> > +		const void *aligned_buf = PTR_ALIGN(buf,
> > +						    cons->preferred_alignment);
> > +		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
> > +
> > +		chunk_len = min_t(size_t, chunk_len, unaligned_len);
> > +	} else if (chunk_len > cons->preferred_alignment) {
> > +		chunk_len &= ~(cons->preferred_alignment - 1);
> > +	}
> > +
> > +	return chunk_len;
> > +}
> > +
> > +#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
> > +	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
> > +	     len;							\
> > +	     len -= chunk_len, buf += chunk_len,			\
> > +	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
> > +
> > +static int sg_check_constraints(struct sg_constraints *cons,
> > +				const void *buf, size_t len)
> > +{
> > +	if (!cons->required_alignment)
> > +		cons->required_alignment = 1;
> > +
> > +	if (!cons->preferred_alignment)
> > +		cons->preferred_alignment = cons->required_alignment;
> > +
> > +	/* Test if buf and len are properly aligned. */
> > +	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
> > +	    !IS_ALIGNED(len, cons->required_alignment))
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * if the buffer has been vmallocated and required_alignment is
> > +	 * more than PAGE_SIZE we cannot guarantee it.
> > +	 */
> > +	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * max_chunk_len has to be aligned to required_alignment to
> > +	 * guarantee that all buffer chunks are aligned correctly.
> > +	 */
> > +	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * preferred_alignment has to be aligned to required_alignment
> > +	 * to avoid misalignment of buffer chunks.
> > +	 */
> > +	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
> > +		return -EINVAL;
> > +
> > +	return 0;
> > +}
> > +
> > +/**
> > + * sg_alloc_table_from_buf - create an SG table from a buffer
> > + *
> > + * @sgt: SG table
> > + * @buf: buffer you want to create this SG table from
> > + * @len: length of buf
> > + * @constraints: optional constraints to take into account when creating
> > + *		 the SG table. Can be NULL if no specific constraints are
> > + *		 required.
> > + * @gfp_mask: type of allocation to use when creating the table
> > + *
> > + * This function creates an SG table from a buffer, its length and some
> > + * SG constraints.
> > + *
> > + * Note: This function supports vmallocated buffers.
> 
> What other types of memory does it support ? kmalloc() quite obviously, are 
> there others ? I think you should explicitly list the memory types that the 
> function intends to support.

Sure, I can explicitly list all memory types. Do you see any other kind
of memory other than the vmalloced and physically-contiguous ones?

Thanks for your review.

Boris
-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 6/7] mtd: nand: sunxi: add support for DMA assisted operations
@ 2016-03-17 14:06     ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-17 14:06 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris, linux-mtd
  Cc: Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree

On Tue,  8 Mar 2016 12:15:14 +0100
Boris Brezillon <boris.brezillon@free-electrons.com> wrote:

> The sunxi NAND controller is able to pipeline ECC operations only when
> operated in DMA mode, which improves a lot NAND throughput while keeping
> CPU usage low.
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
> ---
>  drivers/mtd/nand/sunxi_nand.c | 301 +++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 297 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
> index 07c3af7..7ba285e 100644
> --- a/drivers/mtd/nand/sunxi_nand.c
> +++ b/drivers/mtd/nand/sunxi_nand.c

[...]

> +static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
> +					   struct nand_chip *chip,
> +					   const u8 *buf,
> +					   int oob_required,
> +					   int page)
> +{
> +	struct nand_chip *nand = mtd_to_nand(mtd);
> +	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
> +	struct nand_ecc_ctrl *ecc = &nand->ecc;
> +	struct sg_table sgt;
> +	int ret, i;
> +
> +	ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
> +	if (ret)
> +		return ret;
> +
> +	ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, ecc->steps,
> +				       DMA_TO_DEVICE, &sgt);
> +	if (ret)
> +		goto pio_fallback;
> +
> +	for (i = 0; i < ecc->steps; i++) {
> +		const u8 *oob = nand->oob_poi + (i * (ecc->bytes + 4));
> +
> +		sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, i, !i, page);
> +	}
> +
> +	sunxi_nfc_hw_ecc_enable(mtd);
> +	sunxi_nfc_randomizer_config(mtd, page, false);
> +	sunxi_nfc_randomizer_enable(mtd);
> +
> +	writel((NAND_CMD_RNDIN << 8) | NAND_CMD_PAGEPROG,
> +	       nfc->regs + NFC_REG_RCMD_SET);
> +
> +	dma_async_issue_pending(nfc->dmac);
> +
> +	writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD |
> +	       NFC_DATA_TRANS | NFC_ACCESS_DIR,
> +	       nfc->regs + NFC_REG_CMD);
> +
> +	ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
> +	if (ret)
> +		dmaengine_terminate_all(nfc->dmac);
> +
> +	sunxi_nfc_randomizer_disable(mtd);
> +	sunxi_nfc_hw_ecc_disable(mtd);
> +
> +	sunxi_nfc_dma_op_cleanup(mtd, DMA_FROM_DEVICE, &sgt);

		Should be DMA_TO_DEVICE here ^

> +
> +	if (ret)
> +		return ret;
> +
> +	if (oob_required || (chip->options & NAND_NEED_SCRAMBLING))
> +		/* TODO: use DMA to transfer extra OOB bytes ? */
> +		sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi,
> +						 NULL, page);
> +
> +	return 0;
> +
> +pio_fallback:
> +	return sunxi_nfc_hw_ecc_write_page(mtd, chip, buf, oob_required, page);
> +}




-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 6/7] mtd: nand: sunxi: add support for DMA assisted operations
@ 2016-03-17 14:06     ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-17 14:06 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA

On Tue,  8 Mar 2016 12:15:14 +0100
Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org> wrote:

> The sunxi NAND controller is able to pipeline ECC operations only when
> operated in DMA mode, which improves a lot NAND throughput while keeping
> CPU usage low.
> 
> Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
> ---
>  drivers/mtd/nand/sunxi_nand.c | 301 +++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 297 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
> index 07c3af7..7ba285e 100644
> --- a/drivers/mtd/nand/sunxi_nand.c
> +++ b/drivers/mtd/nand/sunxi_nand.c

[...]

> +static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
> +					   struct nand_chip *chip,
> +					   const u8 *buf,
> +					   int oob_required,
> +					   int page)
> +{
> +	struct nand_chip *nand = mtd_to_nand(mtd);
> +	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
> +	struct nand_ecc_ctrl *ecc = &nand->ecc;
> +	struct sg_table sgt;
> +	int ret, i;
> +
> +	ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
> +	if (ret)
> +		return ret;
> +
> +	ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, ecc->steps,
> +				       DMA_TO_DEVICE, &sgt);
> +	if (ret)
> +		goto pio_fallback;
> +
> +	for (i = 0; i < ecc->steps; i++) {
> +		const u8 *oob = nand->oob_poi + (i * (ecc->bytes + 4));
> +
> +		sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, i, !i, page);
> +	}
> +
> +	sunxi_nfc_hw_ecc_enable(mtd);
> +	sunxi_nfc_randomizer_config(mtd, page, false);
> +	sunxi_nfc_randomizer_enable(mtd);
> +
> +	writel((NAND_CMD_RNDIN << 8) | NAND_CMD_PAGEPROG,
> +	       nfc->regs + NFC_REG_RCMD_SET);
> +
> +	dma_async_issue_pending(nfc->dmac);
> +
> +	writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD |
> +	       NFC_DATA_TRANS | NFC_ACCESS_DIR,
> +	       nfc->regs + NFC_REG_CMD);
> +
> +	ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
> +	if (ret)
> +		dmaengine_terminate_all(nfc->dmac);
> +
> +	sunxi_nfc_randomizer_disable(mtd);
> +	sunxi_nfc_hw_ecc_disable(mtd);
> +
> +	sunxi_nfc_dma_op_cleanup(mtd, DMA_FROM_DEVICE, &sgt);

		Should be DMA_TO_DEVICE here ^

> +
> +	if (ret)
> +		return ret;
> +
> +	if (oob_required || (chip->options & NAND_NEED_SCRAMBLING))
> +		/* TODO: use DMA to transfer extra OOB bytes ? */
> +		sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi,
> +						 NULL, page);
> +
> +	return 0;
> +
> +pio_fallback:
> +	return sunxi_nfc_hw_ecc_write_page(mtd, chip, buf, oob_required, page);
> +}




-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH 6/7] mtd: nand: sunxi: add support for DMA assisted operations
@ 2016-03-17 14:06     ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-17 14:06 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue,  8 Mar 2016 12:15:14 +0100
Boris Brezillon <boris.brezillon@free-electrons.com> wrote:

> The sunxi NAND controller is able to pipeline ECC operations only when
> operated in DMA mode, which improves a lot NAND throughput while keeping
> CPU usage low.
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
> ---
>  drivers/mtd/nand/sunxi_nand.c | 301 +++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 297 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
> index 07c3af7..7ba285e 100644
> --- a/drivers/mtd/nand/sunxi_nand.c
> +++ b/drivers/mtd/nand/sunxi_nand.c

[...]

> +static int sunxi_nfc_hw_ecc_write_page_dma(struct mtd_info *mtd,
> +					   struct nand_chip *chip,
> +					   const u8 *buf,
> +					   int oob_required,
> +					   int page)
> +{
> +	struct nand_chip *nand = mtd_to_nand(mtd);
> +	struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
> +	struct nand_ecc_ctrl *ecc = &nand->ecc;
> +	struct sg_table sgt;
> +	int ret, i;
> +
> +	ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
> +	if (ret)
> +		return ret;
> +
> +	ret = sunxi_nfc_dma_op_prepare(mtd, buf, ecc->size, ecc->steps,
> +				       DMA_TO_DEVICE, &sgt);
> +	if (ret)
> +		goto pio_fallback;
> +
> +	for (i = 0; i < ecc->steps; i++) {
> +		const u8 *oob = nand->oob_poi + (i * (ecc->bytes + 4));
> +
> +		sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, i, !i, page);
> +	}
> +
> +	sunxi_nfc_hw_ecc_enable(mtd);
> +	sunxi_nfc_randomizer_config(mtd, page, false);
> +	sunxi_nfc_randomizer_enable(mtd);
> +
> +	writel((NAND_CMD_RNDIN << 8) | NAND_CMD_PAGEPROG,
> +	       nfc->regs + NFC_REG_RCMD_SET);
> +
> +	dma_async_issue_pending(nfc->dmac);
> +
> +	writel(NFC_PAGE_OP | NFC_DATA_SWAP_METHOD |
> +	       NFC_DATA_TRANS | NFC_ACCESS_DIR,
> +	       nfc->regs + NFC_REG_CMD);
> +
> +	ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
> +	if (ret)
> +		dmaengine_terminate_all(nfc->dmac);
> +
> +	sunxi_nfc_randomizer_disable(mtd);
> +	sunxi_nfc_hw_ecc_disable(mtd);
> +
> +	sunxi_nfc_dma_op_cleanup(mtd, DMA_FROM_DEVICE, &sgt);

		Should be DMA_TO_DEVICE here ^

> +
> +	if (ret)
> +		return ret;
> +
> +	if (oob_required || (chip->options & NAND_NEED_SCRAMBLING))
> +		/* TODO: use DMA to transfer extra OOB bytes ? */
> +		sunxi_nfc_hw_ecc_write_extra_oob(mtd, chip->oob_poi,
> +						 NULL, page);
> +
> +	return 0;
> +
> +pio_fallback:
> +	return sunxi_nfc_hw_ecc_write_page(mtd, chip, buf, oob_required, page);
> +}




-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 7/7] mtd: nand: sunxi: update DT bindings
  2016-03-08 11:15   ` Boris Brezillon
  (?)
@ 2016-03-17 15:28     ` Rob Herring
  -1 siblings, 0 replies; 45+ messages in thread
From: Rob Herring @ 2016-03-17 15:28 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd, Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Pawel Moll, Mark Rutland,
	Ian Campbell, Kumar Gala, devicetree

On Tue, Mar 08, 2016 at 12:15:15PM +0100, Boris Brezillon wrote:
> Document dmas and dma-names properties.
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
> ---
>  Documentation/devicetree/bindings/mtd/sunxi-nand.txt | 4 ++++
>  1 file changed, 4 insertions(+)

Acked-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 7/7] mtd: nand: sunxi: update DT bindings
@ 2016-03-17 15:28     ` Rob Herring
  0 siblings, 0 replies; 45+ messages in thread
From: Rob Herring @ 2016-03-17 15:28 UTC (permalink / raw)
  To: Boris Brezillon
  Cc: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd, Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Pawel Moll, Mark Rutland,
	Ian Campbell, Kumar Gala, devicetree

On Tue, Mar 08, 2016 at 12:15:15PM +0100, Boris Brezillon wrote:
> Document dmas and dma-names properties.
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
> ---
>  Documentation/devicetree/bindings/mtd/sunxi-nand.txt | 4 ++++
>  1 file changed, 4 insertions(+)

Acked-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH 7/7] mtd: nand: sunxi: update DT bindings
@ 2016-03-17 15:28     ` Rob Herring
  0 siblings, 0 replies; 45+ messages in thread
From: Rob Herring @ 2016-03-17 15:28 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Mar 08, 2016 at 12:15:15PM +0100, Boris Brezillon wrote:
> Document dmas and dma-names properties.
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
> ---
>  Documentation/devicetree/bindings/mtd/sunxi-nand.txt | 4 ++++
>  1 file changed, 4 insertions(+)

Acked-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-21 15:46     ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-21 15:46 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris, linux-mtd
  Cc: Mark Brown, linux-spi, linux-kernel, linux-arm-kernel,
	Maxime Ripard, Chen-Yu Tsai, linux-sunxi, Vinod Koul,
	Dan Williams, dmaengine, Mauro Carvalho Chehab, Hans Verkuil,
	Laurent Pinchart, linux-media, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala, devicetree

On Tue,  8 Mar 2016 12:15:12 +0100
Boris Brezillon <boris.brezillon@free-electrons.com> wrote:

> sg_alloc_table_from_buf() provides an easy solution to create an sg_table
> from a virtual address pointer. This function takes care of dealing with
> vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
> DMA transfer size).
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
> ---
>  include/linux/scatterlist.h |  24 ++++++++
>  lib/scatterlist.c           | 142 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 166 insertions(+)
> 
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 556ec1e..4a75362 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -41,6 +41,27 @@ struct sg_table {
>  	unsigned int orig_nents;	/* original size of list */
>  };
>  
> +/**
> + * struct sg_constraints - SG constraints structure
> + *
> + * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be smaller
> + *		   than this value. Zero means no constraint.
> + * @required_alignment: minimum alignment. Is used for both size and pointer
> + *			alignment. If this constraint is not met, the function
> + *			should return -EINVAL.
> + * @preferred_alignment: preferred alignment. Mainly used to optimize
> + *			 throughput when the DMA engine performs better when
> + *			 doing aligned accesses.
> + *
> + * This structure is here to help sg_alloc_table_from_buf() create the optimal
> + * SG list based on DMA engine constraints.
> + */
> +struct sg_constraints {
> +	size_t max_chunk_len;
> +	size_t required_alignment;
> +	size_t preferred_alignment;
> +};
> +

Hm, we seem to have another case in NAND controller drivers. Some
drivers do not support scattergather accesses and have to provide a
single physically contiguous memory region to transfer the whole
NAND page.

Could we add a ->max_entries field to the sg_contraints struct to allow
those drivers to use the generic mtd_map_buf() function, and fallback
to a bounce buffer approach (or PIO access approach) when
sg_alloc_table_from_buf() fails to fulfill this constraint.

We could also define another 'non-sg' function to do the 'physically
contiguous' check, but in any case, I'd like to avoid letting each
driver code its own checking function.

What do you think?

>  /*
>   * Notes on SG table design.
>   *
> @@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  	struct page **pages, unsigned int n_pages,
>  	unsigned long offset, unsigned long size,
>  	gfp_t gfp_mask);
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask);
>  
>  size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
>  		      size_t buflen, off_t skip, bool to_buffer);
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index bafa993..706b583 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  }
>  EXPORT_SYMBOL(sg_alloc_table_from_pages);
>  
> +static size_t sg_buf_chunk_len(const void *buf, size_t len,
> +			       const struct sg_constraints *cons)
> +{
> +	size_t chunk_len = len;
> +
> +	if (cons->max_chunk_len)
> +		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
> +
> +	if (is_vmalloc_addr(buf))
> +		chunk_len = min_t(size_t, chunk_len,
> +				  PAGE_SIZE - offset_in_page(buf));
> +
> +	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
> +		const void *aligned_buf = PTR_ALIGN(buf,
> +						    cons->preferred_alignment);
> +		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
> +
> +		chunk_len = min_t(size_t, chunk_len, unaligned_len);
> +	} else if (chunk_len > cons->preferred_alignment) {
> +		chunk_len &= ~(cons->preferred_alignment - 1);
> +	}
> +
> +	return chunk_len;
> +}
> +
> +#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
> +	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
> +	     len;							\
> +	     len -= chunk_len, buf += chunk_len,			\
> +	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
> +
> +static int sg_check_constraints(struct sg_constraints *cons,
> +				const void *buf, size_t len)
> +{
> +	if (!cons->required_alignment)
> +		cons->required_alignment = 1;
> +
> +	if (!cons->preferred_alignment)
> +		cons->preferred_alignment = cons->required_alignment;
> +
> +	/* Test if buf and len are properly aligned. */
> +	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
> +	    !IS_ALIGNED(len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * if the buffer has been vmallocated and required_alignment is
> +	 * more than PAGE_SIZE we cannot guarantee it.
> +	 */
> +	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
> +		return -EINVAL;
> +
> +	/*
> +	 * max_chunk_len has to be aligned to required_alignment to
> +	 * guarantee that all buffer chunks are aligned correctly.
> +	 */
> +	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * preferred_alignment has to be aligned to required_alignment
> +	 * to avoid misalignment of buffer chunks.
> +	 */
> +	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +/**
> + * sg_alloc_table_from_buf - create an SG table from a buffer
> + *
> + * @sgt: SG table
> + * @buf: buffer you want to create this SG table from
> + * @len: length of buf
> + * @constraints: optional constraints to take into account when creating
> + *		 the SG table. Can be NULL if no specific constraints are
> + *		 required.
> + * @gfp_mask: type of allocation to use when creating the table
> + *
> + * This function creates an SG table from a buffer, its length and some
> + * SG constraints.
> + *
> + * Note: This function supports vmallocated buffers.
> + */
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask)
> +{
> +	struct sg_constraints cons = { };
> +	size_t remaining, chunk_len;
> +	const void *sg_buf;
> +	int i, ret;
> +
> +	if (constraints)
> +		cons = *constraints;
> +
> +	ret = sg_check_constraints(&cons, buf, len);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
> +		i++;
> +
> +	ret = sg_alloc_table(sgt, i, gfp_mask);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
> +		if (is_vmalloc_addr(sg_buf)) {
> +			struct page *vm_page;
> +
> +			vm_page = vmalloc_to_page(sg_buf);
> +			if (!vm_page) {
> +				ret = -ENOMEM;
> +				goto err_free_table;
> +			}
> +
> +			sg_set_page(&sgt->sgl[i], vm_page, chunk_len,
> +				    offset_in_page(sg_buf));
> +		} else {
> +			sg_set_buf(&sgt->sgl[i], sg_buf, chunk_len);
> +		}
> +
> +		i++;
> +	}
> +
> +	return 0;
> +
> +err_free_table:
> +	sg_free_table(sgt);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(sg_alloc_table_from_buf);
> +
>  void __sg_page_iter_start(struct sg_page_iter *piter,
>  			  struct scatterlist *sglist, unsigned int nents,
>  			  unsigned long pgoffset)



-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

* Re: [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-21 15:46     ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-21 15:46 UTC (permalink / raw)
  To: Andrew Morton, Dave Gordon, David Woodhouse, Brian Norris,
	linux-mtd-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r
  Cc: Mark Brown, linux-spi-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r, Maxime Ripard,
	Chen-Yu Tsai, linux-sunxi-/JYPxA39Uh5TLH3MbocFFw, Vinod Koul,
	Dan Williams, dmaengine-u79uwXL29TY76Z2rM5mHXA,
	Mauro Carvalho Chehab, Hans Verkuil, Laurent Pinchart,
	linux-media-u79uwXL29TY76Z2rM5mHXA, Rob Herring, Pawel Moll,
	Mark Rutland, Ian Campbell, Kumar Gala,
	devicetree-u79uwXL29TY76Z2rM5mHXA

On Tue,  8 Mar 2016 12:15:12 +0100
Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org> wrote:

> sg_alloc_table_from_buf() provides an easy solution to create an sg_table
> from a virtual address pointer. This function takes care of dealing with
> vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
> DMA transfer size).
> 
> Signed-off-by: Boris Brezillon <boris.brezillon-wi1+55ScJUtKEb57/3fJTNBPR1lH4CV8@public.gmane.org>
> ---
>  include/linux/scatterlist.h |  24 ++++++++
>  lib/scatterlist.c           | 142 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 166 insertions(+)
> 
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 556ec1e..4a75362 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -41,6 +41,27 @@ struct sg_table {
>  	unsigned int orig_nents;	/* original size of list */
>  };
>  
> +/**
> + * struct sg_constraints - SG constraints structure
> + *
> + * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be smaller
> + *		   than this value. Zero means no constraint.
> + * @required_alignment: minimum alignment. Is used for both size and pointer
> + *			alignment. If this constraint is not met, the function
> + *			should return -EINVAL.
> + * @preferred_alignment: preferred alignment. Mainly used to optimize
> + *			 throughput when the DMA engine performs better when
> + *			 doing aligned accesses.
> + *
> + * This structure is here to help sg_alloc_table_from_buf() create the optimal
> + * SG list based on DMA engine constraints.
> + */
> +struct sg_constraints {
> +	size_t max_chunk_len;
> +	size_t required_alignment;
> +	size_t preferred_alignment;
> +};
> +

Hm, we seem to have another case in NAND controller drivers. Some
drivers do not support scattergather accesses and have to provide a
single physically contiguous memory region to transfer the whole
NAND page.

Could we add a ->max_entries field to the sg_contraints struct to allow
those drivers to use the generic mtd_map_buf() function, and fallback
to a bounce buffer approach (or PIO access approach) when
sg_alloc_table_from_buf() fails to fulfill this constraint.

We could also define another 'non-sg' function to do the 'physically
contiguous' check, but in any case, I'd like to avoid letting each
driver code its own checking function.

What do you think?

>  /*
>   * Notes on SG table design.
>   *
> @@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  	struct page **pages, unsigned int n_pages,
>  	unsigned long offset, unsigned long size,
>  	gfp_t gfp_mask);
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask);
>  
>  size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
>  		      size_t buflen, off_t skip, bool to_buffer);
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index bafa993..706b583 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  }
>  EXPORT_SYMBOL(sg_alloc_table_from_pages);
>  
> +static size_t sg_buf_chunk_len(const void *buf, size_t len,
> +			       const struct sg_constraints *cons)
> +{
> +	size_t chunk_len = len;
> +
> +	if (cons->max_chunk_len)
> +		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
> +
> +	if (is_vmalloc_addr(buf))
> +		chunk_len = min_t(size_t, chunk_len,
> +				  PAGE_SIZE - offset_in_page(buf));
> +
> +	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
> +		const void *aligned_buf = PTR_ALIGN(buf,
> +						    cons->preferred_alignment);
> +		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
> +
> +		chunk_len = min_t(size_t, chunk_len, unaligned_len);
> +	} else if (chunk_len > cons->preferred_alignment) {
> +		chunk_len &= ~(cons->preferred_alignment - 1);
> +	}
> +
> +	return chunk_len;
> +}
> +
> +#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
> +	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
> +	     len;							\
> +	     len -= chunk_len, buf += chunk_len,			\
> +	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
> +
> +static int sg_check_constraints(struct sg_constraints *cons,
> +				const void *buf, size_t len)
> +{
> +	if (!cons->required_alignment)
> +		cons->required_alignment = 1;
> +
> +	if (!cons->preferred_alignment)
> +		cons->preferred_alignment = cons->required_alignment;
> +
> +	/* Test if buf and len are properly aligned. */
> +	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
> +	    !IS_ALIGNED(len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * if the buffer has been vmallocated and required_alignment is
> +	 * more than PAGE_SIZE we cannot guarantee it.
> +	 */
> +	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
> +		return -EINVAL;
> +
> +	/*
> +	 * max_chunk_len has to be aligned to required_alignment to
> +	 * guarantee that all buffer chunks are aligned correctly.
> +	 */
> +	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * preferred_alignment has to be aligned to required_alignment
> +	 * to avoid misalignment of buffer chunks.
> +	 */
> +	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +/**
> + * sg_alloc_table_from_buf - create an SG table from a buffer
> + *
> + * @sgt: SG table
> + * @buf: buffer you want to create this SG table from
> + * @len: length of buf
> + * @constraints: optional constraints to take into account when creating
> + *		 the SG table. Can be NULL if no specific constraints are
> + *		 required.
> + * @gfp_mask: type of allocation to use when creating the table
> + *
> + * This function creates an SG table from a buffer, its length and some
> + * SG constraints.
> + *
> + * Note: This function supports vmallocated buffers.
> + */
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask)
> +{
> +	struct sg_constraints cons = { };
> +	size_t remaining, chunk_len;
> +	const void *sg_buf;
> +	int i, ret;
> +
> +	if (constraints)
> +		cons = *constraints;
> +
> +	ret = sg_check_constraints(&cons, buf, len);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
> +		i++;
> +
> +	ret = sg_alloc_table(sgt, i, gfp_mask);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
> +		if (is_vmalloc_addr(sg_buf)) {
> +			struct page *vm_page;
> +
> +			vm_page = vmalloc_to_page(sg_buf);
> +			if (!vm_page) {
> +				ret = -ENOMEM;
> +				goto err_free_table;
> +			}
> +
> +			sg_set_page(&sgt->sgl[i], vm_page, chunk_len,
> +				    offset_in_page(sg_buf));
> +		} else {
> +			sg_set_buf(&sgt->sgl[i], sg_buf, chunk_len);
> +		}
> +
> +		i++;
> +	}
> +
> +	return 0;
> +
> +err_free_table:
> +	sg_free_table(sgt);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(sg_alloc_table_from_buf);
> +
>  void __sg_page_iter_start(struct sg_page_iter *piter,
>  			  struct scatterlist *sglist, unsigned int nents,
>  			  unsigned long pgoffset)



-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

* [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper
@ 2016-03-21 15:46     ` Boris Brezillon
  0 siblings, 0 replies; 45+ messages in thread
From: Boris Brezillon @ 2016-03-21 15:46 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue,  8 Mar 2016 12:15:12 +0100
Boris Brezillon <boris.brezillon@free-electrons.com> wrote:

> sg_alloc_table_from_buf() provides an easy solution to create an sg_table
> from a virtual address pointer. This function takes care of dealing with
> vmallocated buffers, buffer alignment, or DMA engine limitations (maximum
> DMA transfer size).
> 
> Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
> ---
>  include/linux/scatterlist.h |  24 ++++++++
>  lib/scatterlist.c           | 142 ++++++++++++++++++++++++++++++++++++++++++++
>  2 files changed, 166 insertions(+)
> 
> diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
> index 556ec1e..4a75362 100644
> --- a/include/linux/scatterlist.h
> +++ b/include/linux/scatterlist.h
> @@ -41,6 +41,27 @@ struct sg_table {
>  	unsigned int orig_nents;	/* original size of list */
>  };
>  
> +/**
> + * struct sg_constraints - SG constraints structure
> + *
> + * @max_chunk_len: maximum chunk buffer length. Each SG entry has to be smaller
> + *		   than this value. Zero means no constraint.
> + * @required_alignment: minimum alignment. Is used for both size and pointer
> + *			alignment. If this constraint is not met, the function
> + *			should return -EINVAL.
> + * @preferred_alignment: preferred alignment. Mainly used to optimize
> + *			 throughput when the DMA engine performs better when
> + *			 doing aligned accesses.
> + *
> + * This structure is here to help sg_alloc_table_from_buf() create the optimal
> + * SG list based on DMA engine constraints.
> + */
> +struct sg_constraints {
> +	size_t max_chunk_len;
> +	size_t required_alignment;
> +	size_t preferred_alignment;
> +};
> +

Hm, we seem to have another case in NAND controller drivers. Some
drivers do not support scattergather accesses and have to provide a
single physically contiguous memory region to transfer the whole
NAND page.

Could we add a ->max_entries field to the sg_contraints struct to allow
those drivers to use the generic mtd_map_buf() function, and fallback
to a bounce buffer approach (or PIO access approach) when
sg_alloc_table_from_buf() fails to fulfill this constraint.

We could also define another 'non-sg' function to do the 'physically
contiguous' check, but in any case, I'd like to avoid letting each
driver code its own checking function.

What do you think?

>  /*
>   * Notes on SG table design.
>   *
> @@ -265,6 +286,9 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  	struct page **pages, unsigned int n_pages,
>  	unsigned long offset, unsigned long size,
>  	gfp_t gfp_mask);
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask);
>  
>  size_t sg_copy_buffer(struct scatterlist *sgl, unsigned int nents, void *buf,
>  		      size_t buflen, off_t skip, bool to_buffer);
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index bafa993..706b583 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -433,6 +433,148 @@ int sg_alloc_table_from_pages(struct sg_table *sgt,
>  }
>  EXPORT_SYMBOL(sg_alloc_table_from_pages);
>  
> +static size_t sg_buf_chunk_len(const void *buf, size_t len,
> +			       const struct sg_constraints *cons)
> +{
> +	size_t chunk_len = len;
> +
> +	if (cons->max_chunk_len)
> +		chunk_len = min_t(size_t, chunk_len, cons->max_chunk_len);
> +
> +	if (is_vmalloc_addr(buf))
> +		chunk_len = min_t(size_t, chunk_len,
> +				  PAGE_SIZE - offset_in_page(buf));
> +
> +	if (!IS_ALIGNED((unsigned long)buf, cons->preferred_alignment)) {
> +		const void *aligned_buf = PTR_ALIGN(buf,
> +						    cons->preferred_alignment);
> +		size_t unaligned_len = (unsigned long)(aligned_buf - buf);
> +
> +		chunk_len = min_t(size_t, chunk_len, unaligned_len);
> +	} else if (chunk_len > cons->preferred_alignment) {
> +		chunk_len &= ~(cons->preferred_alignment - 1);
> +	}
> +
> +	return chunk_len;
> +}
> +
> +#define sg_for_each_chunk_in_buf(buf, len, chunk_len, constraints)	\
> +	for (chunk_len = sg_buf_chunk_len(buf, len, constraints);	\
> +	     len;							\
> +	     len -= chunk_len, buf += chunk_len,			\
> +	     chunk_len = sg_buf_chunk_len(buf, len, constraints))
> +
> +static int sg_check_constraints(struct sg_constraints *cons,
> +				const void *buf, size_t len)
> +{
> +	if (!cons->required_alignment)
> +		cons->required_alignment = 1;
> +
> +	if (!cons->preferred_alignment)
> +		cons->preferred_alignment = cons->required_alignment;
> +
> +	/* Test if buf and len are properly aligned. */
> +	if (!IS_ALIGNED((unsigned long)buf, cons->required_alignment) ||
> +	    !IS_ALIGNED(len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * if the buffer has been vmallocated and required_alignment is
> +	 * more than PAGE_SIZE we cannot guarantee it.
> +	 */
> +	if (is_vmalloc_addr(buf) && cons->required_alignment > PAGE_SIZE)
> +		return -EINVAL;
> +
> +	/*
> +	 * max_chunk_len has to be aligned to required_alignment to
> +	 * guarantee that all buffer chunks are aligned correctly.
> +	 */
> +	if (!IS_ALIGNED(cons->max_chunk_len, cons->required_alignment))
> +		return -EINVAL;
> +
> +	/*
> +	 * preferred_alignment has to be aligned to required_alignment
> +	 * to avoid misalignment of buffer chunks.
> +	 */
> +	if (!IS_ALIGNED(cons->preferred_alignment, cons->required_alignment))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +/**
> + * sg_alloc_table_from_buf - create an SG table from a buffer
> + *
> + * @sgt: SG table
> + * @buf: buffer you want to create this SG table from
> + * @len: length of buf
> + * @constraints: optional constraints to take into account when creating
> + *		 the SG table. Can be NULL if no specific constraints are
> + *		 required.
> + * @gfp_mask: type of allocation to use when creating the table
> + *
> + * This function creates an SG table from a buffer, its length and some
> + * SG constraints.
> + *
> + * Note: This function supports vmallocated buffers.
> + */
> +int sg_alloc_table_from_buf(struct sg_table *sgt, const void *buf, size_t len,
> +			    const struct sg_constraints *constraints,
> +			    gfp_t gfp_mask)
> +{
> +	struct sg_constraints cons = { };
> +	size_t remaining, chunk_len;
> +	const void *sg_buf;
> +	int i, ret;
> +
> +	if (constraints)
> +		cons = *constraints;
> +
> +	ret = sg_check_constraints(&cons, buf, len);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons)
> +		i++;
> +
> +	ret = sg_alloc_table(sgt, i, gfp_mask);
> +	if (ret)
> +		return ret;
> +
> +	sg_buf = buf;
> +	remaining = len;
> +	i = 0;
> +	sg_for_each_chunk_in_buf(sg_buf, remaining, chunk_len, &cons) {
> +		if (is_vmalloc_addr(sg_buf)) {
> +			struct page *vm_page;
> +
> +			vm_page = vmalloc_to_page(sg_buf);
> +			if (!vm_page) {
> +				ret = -ENOMEM;
> +				goto err_free_table;
> +			}
> +
> +			sg_set_page(&sgt->sgl[i], vm_page, chunk_len,
> +				    offset_in_page(sg_buf));
> +		} else {
> +			sg_set_buf(&sgt->sgl[i], sg_buf, chunk_len);
> +		}
> +
> +		i++;
> +	}
> +
> +	return 0;
> +
> +err_free_table:
> +	sg_free_table(sgt);
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL(sg_alloc_table_from_buf);
> +
>  void __sg_page_iter_start(struct sg_page_iter *piter,
>  			  struct scatterlist *sglist, unsigned int nents,
>  			  unsigned long pgoffset)



-- 
Boris Brezillon, Free Electrons
Embedded Linux and Kernel engineering
http://free-electrons.com

^ permalink raw reply	[flat|nested] 45+ messages in thread

end of thread, other threads:[~2016-03-21 15:46 UTC | newest]

Thread overview: 45+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-08 11:15 [PATCH 0/7] mtd: nand: sunxi: add support for DMA operations Boris Brezillon
2016-03-08 11:15 ` Boris Brezillon
2016-03-08 11:15 ` Boris Brezillon
2016-03-08 11:15 ` [PATCH 1/7] mtd: nand: sunxi: move some ECC related operations to their own functions Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 11:15 ` [PATCH 2/7] mtd: nand: sunxi: make OOB retrieval optional Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 11:15 ` [PATCH 3/7] mtd: nand: sunxi: make cur_off parameter optional in extra oob helpers Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 11:15 ` [PATCH 4/7] scatterlist: add sg_alloc_table_from_buf() helper Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 16:51   ` Laurent Pinchart
2016-03-08 16:51     ` Laurent Pinchart
2016-03-08 16:51     ` Laurent Pinchart
2016-03-08 16:57     ` Boris Brezillon
2016-03-08 16:57       ` Boris Brezillon
2016-03-08 16:57       ` Boris Brezillon
2016-03-21 15:46   ` Boris Brezillon
2016-03-21 15:46     ` Boris Brezillon
2016-03-21 15:46     ` Boris Brezillon
2016-03-08 11:15 ` [PATCH 5/7] mtd: provide helper to prepare buffers for DMA operations Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 14:18   ` Vinod Koul
2016-03-08 14:18     ` Vinod Koul
2016-03-08 14:18     ` Vinod Koul
2016-03-08 14:46     ` Boris Brezillon
2016-03-08 14:46       ` Boris Brezillon
2016-03-08 14:46       ` Boris Brezillon
2016-03-08 11:15 ` [PATCH 6/7] mtd: nand: sunxi: add support for DMA assisted operations Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-17 14:06   ` Boris Brezillon
2016-03-17 14:06     ` Boris Brezillon
2016-03-17 14:06     ` Boris Brezillon
2016-03-08 11:15 ` [PATCH 7/7] mtd: nand: sunxi: update DT bindings Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-08 11:15   ` Boris Brezillon
2016-03-17 15:28   ` Rob Herring
2016-03-17 15:28     ` Rob Herring
2016-03-17 15:28     ` Rob Herring

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.