All of lore.kernel.org
* [RFC,v4,0/5] Add Mediatek SPI Nand controller and convert ECC driver
@ 2021-11-30  8:31 ` Xiangsheng Hou
  0 siblings, 0 replies; 36+ messages in thread
From: Xiangsheng Hou @ 2021-11-30  8:31 UTC (permalink / raw)
  To: miquel.raynal, broonie
  Cc: xiangsheng.hou, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

The Mediatek SPI NAND controller supports multiple SPI protocols and can,
in theory, drive other SPI devices as well. The controller can also work
together with the HW ECC engine for high performance in the pipelined
ECC case.

This RFC v4 series fixes coding style issues and moves some ECC-related
code from the snfi driver into the ECC driver, based on the RFC v3
comments. It also tries to resolve the read/write OOB issue in AUTO mode,
which in turn resolves the data format issue with the Mediatek ECC engine.

The RFC v1 and v2 patches only tried to fetch the NAND info and ECC
status in the SPI driver; this is now handled by the pipelined ECC design.

The RFC v3 patches implemented the HW ECC engine for the pipelined case.

Only the mt7622 platform is used for the dts node example.

Xiangsheng Hou (5):
  mtd: nand: ecc: Move mediatek ECC driver
  mtd: nand: ecc: mtk: Convert to the ECC infrastructure
  spi: mtk: Add mediatek SPI Nand Flash interface driver
  mtd: spinand: Move set/get OOB databytes to each ECC engines
  arm64: dts: mtk: Add snfi node

 arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts |   16 +
 arch/arm64/boot/dts/mediatek/mt7622.dtsi     |   13 +
 drivers/mtd/nand/Kconfig                     |    9 +
 drivers/mtd/nand/Makefile                    |    1 +
 drivers/mtd/nand/ecc-mtk.c                   | 1207 ++++++++++++++++++
 drivers/mtd/nand/ecc-sw-bch.c                |   71 +-
 drivers/mtd/nand/ecc-sw-hamming.c            |   71 +-
 drivers/mtd/nand/raw/Kconfig                 |    1 +
 drivers/mtd/nand/raw/Makefile                |    2 +-
 drivers/mtd/nand/raw/mtk_ecc.c               |  593 ---------
 drivers/mtd/nand/raw/mtk_ecc.h               |   47 -
 drivers/mtd/nand/raw/mtk_nand.c              |    2 +-
 drivers/mtd/nand/spi/core.c                  |   93 +-
 drivers/spi/Kconfig                          |   11 +
 drivers/spi/Makefile                         |    1 +
 drivers/spi/spi-mtk-snfi.c                   | 1117 ++++++++++++++++
 include/linux/mtd/nand-ecc-mtk.h             |  115 ++
 include/linux/mtd/nand-ecc-sw-bch.h          |    4 +
 include/linux/mtd/nand-ecc-sw-hamming.h      |    4 +
 include/linux/mtd/spinand.h                  |    4 +
 20 files changed, 2691 insertions(+), 691 deletions(-)
 create mode 100644 drivers/mtd/nand/ecc-mtk.c
 delete mode 100644 drivers/mtd/nand/raw/mtk_ecc.c
 delete mode 100644 drivers/mtd/nand/raw/mtk_ecc.h
 create mode 100644 drivers/spi/spi-mtk-snfi.c
 create mode 100644 include/linux/mtd/nand-ecc-mtk.h

-- 
2.25.1


______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/



* [RFC,v4,1/5] mtd: nand: ecc: Move mediatek ECC driver
  2021-11-30  8:31 ` Xiangsheng Hou
@ 2021-11-30  8:31   ` Xiangsheng Hou
  -1 siblings, 0 replies; 36+ messages in thread
From: Xiangsheng Hou @ 2021-11-30  8:31 UTC (permalink / raw)
  To: miquel.raynal, broonie
  Cc: xiangsheng.hou, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Move the Mediatek on-host ECC driver so that it complies with the generic
ECC framework. The ECC engine can then be used by both the Mediatek raw
NAND and SPI NAND controllers.

Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
---
 drivers/mtd/nand/Kconfig                                 | 9 +++++++++
 drivers/mtd/nand/Makefile                                | 1 +
 drivers/mtd/nand/{raw/mtk_ecc.c => ecc-mtk.c}            | 2 +-
 drivers/mtd/nand/raw/Kconfig                             | 1 +
 drivers/mtd/nand/raw/Makefile                            | 2 +-
 drivers/mtd/nand/raw/mtk_nand.c                          | 2 +-
 .../raw/mtk_ecc.h => include/linux/mtd/nand-ecc-mtk.h    | 0
 7 files changed, 14 insertions(+), 3 deletions(-)
 rename drivers/mtd/nand/{raw/mtk_ecc.c => ecc-mtk.c} (99%)
 rename drivers/mtd/nand/raw/mtk_ecc.h => include/linux/mtd/nand-ecc-mtk.h (100%)

diff --git a/drivers/mtd/nand/Kconfig b/drivers/mtd/nand/Kconfig
index 8431292ff49d..a96fddff5ba5 100644
--- a/drivers/mtd/nand/Kconfig
+++ b/drivers/mtd/nand/Kconfig
@@ -52,6 +52,15 @@ config MTD_NAND_ECC_MXIC
 	help
 	  This enables support for the hardware ECC engine from Macronix.
 
+config MTD_NAND_ECC_MTK
+	bool "Mediatek hardware ECC engine"
+	select MTD_NAND_ECC
+	help
+	  This enables support for the Mediatek hardware ECC engine,
+	  which is used for error correction. The correction strength
+	  depends on the SoC. The ECC engine can be used with the
+	  Mediatek raw NAND and SPI NAND controller drivers.
+
 endmenu
 
 endmenu
diff --git a/drivers/mtd/nand/Makefile b/drivers/mtd/nand/Makefile
index a4e6b7ae0614..686f0d635ddf 100644
--- a/drivers/mtd/nand/Makefile
+++ b/drivers/mtd/nand/Makefile
@@ -11,3 +11,4 @@ nandcore-$(CONFIG_MTD_NAND_ECC) += ecc.o
 nandcore-$(CONFIG_MTD_NAND_ECC_SW_HAMMING) += ecc-sw-hamming.o
 nandcore-$(CONFIG_MTD_NAND_ECC_SW_BCH) += ecc-sw-bch.o
 nandcore-$(CONFIG_MTD_NAND_ECC_MXIC) += ecc-mxic.o
+nandcore-$(CONFIG_MTD_NAND_ECC_MTK) += ecc-mtk.o
diff --git a/drivers/mtd/nand/raw/mtk_ecc.c b/drivers/mtd/nand/ecc-mtk.c
similarity index 99%
rename from drivers/mtd/nand/raw/mtk_ecc.c
rename to drivers/mtd/nand/ecc-mtk.c
index 1b47964cb6da..31d7c77d5c59 100644
--- a/drivers/mtd/nand/raw/mtk_ecc.c
+++ b/drivers/mtd/nand/ecc-mtk.c
@@ -16,7 +16,7 @@
 #include <linux/of_platform.h>
 #include <linux/mutex.h>
 
-#include "mtk_ecc.h"
+#include <linux/mtd/nand-ecc-mtk.h>
 
 #define ECC_IDLE_MASK		BIT(0)
 #define ECC_IRQ_EN		BIT(0)
diff --git a/drivers/mtd/nand/raw/Kconfig b/drivers/mtd/nand/raw/Kconfig
index 67b7cb67c030..c90bc166034b 100644
--- a/drivers/mtd/nand/raw/Kconfig
+++ b/drivers/mtd/nand/raw/Kconfig
@@ -362,6 +362,7 @@ config MTD_NAND_MTK
 	tristate "MTK NAND controller"
 	depends on ARCH_MEDIATEK || COMPILE_TEST
 	depends on HAS_IOMEM
+	select MTD_NAND_ECC_MTK
 	help
 	  Enables support for NAND controller on MTK SoCs.
 	  This controller is found on mt27xx, mt81xx, mt65xx SoCs.
diff --git a/drivers/mtd/nand/raw/Makefile b/drivers/mtd/nand/raw/Makefile
index 2f97958c3a33..49d3946c166b 100644
--- a/drivers/mtd/nand/raw/Makefile
+++ b/drivers/mtd/nand/raw/Makefile
@@ -48,7 +48,7 @@ obj-$(CONFIG_MTD_NAND_SUNXI)		+= sunxi_nand.o
 obj-$(CONFIG_MTD_NAND_HISI504)	        += hisi504_nand.o
 obj-$(CONFIG_MTD_NAND_BRCMNAND)		+= brcmnand/
 obj-$(CONFIG_MTD_NAND_QCOM)		+= qcom_nandc.o
-obj-$(CONFIG_MTD_NAND_MTK)		+= mtk_ecc.o mtk_nand.o
+obj-$(CONFIG_MTD_NAND_MTK)		+= mtk_nand.o
 obj-$(CONFIG_MTD_NAND_MXIC)		+= mxic_nand.o
 obj-$(CONFIG_MTD_NAND_TEGRA)		+= tegra_nand.o
 obj-$(CONFIG_MTD_NAND_STM32_FMC2)	+= stm32_fmc2_nand.o
diff --git a/drivers/mtd/nand/raw/mtk_nand.c b/drivers/mtd/nand/raw/mtk_nand.c
index 66f04c693c87..d540454cbbdf 100644
--- a/drivers/mtd/nand/raw/mtk_nand.c
+++ b/drivers/mtd/nand/raw/mtk_nand.c
@@ -17,7 +17,7 @@
 #include <linux/iopoll.h>
 #include <linux/of.h>
 #include <linux/of_device.h>
-#include "mtk_ecc.h"
+#include <linux/mtd/nand-ecc-mtk.h>
 
 /* NAND controller register definition */
 #define NFI_CNFG		(0x00)
diff --git a/drivers/mtd/nand/raw/mtk_ecc.h b/include/linux/mtd/nand-ecc-mtk.h
similarity index 100%
rename from drivers/mtd/nand/raw/mtk_ecc.h
rename to include/linux/mtd/nand-ecc-mtk.h
-- 
2.25.1





* [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
  2021-11-30  8:31 ` Xiangsheng Hou
@ 2021-11-30  8:31   ` Xiangsheng Hou
  -1 siblings, 0 replies; 36+ messages in thread
From: Xiangsheng Hou @ 2021-11-30  8:31 UTC (permalink / raw)
  To: miquel.raynal, broonie
  Cc: xiangsheng.hou, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Convert the Mediatek HW ECC engine to the generic ECC infrastructure as
a pipelined engine.

Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
---
 drivers/mtd/nand/ecc-mtk.c       | 614 +++++++++++++++++++++++++++++++
 include/linux/mtd/nand-ecc-mtk.h |  68 ++++
 2 files changed, 682 insertions(+)

diff --git a/drivers/mtd/nand/ecc-mtk.c b/drivers/mtd/nand/ecc-mtk.c
index 31d7c77d5c59..c44499b3d0a5 100644
--- a/drivers/mtd/nand/ecc-mtk.c
+++ b/drivers/mtd/nand/ecc-mtk.c
@@ -16,6 +16,7 @@
 #include <linux/of_platform.h>
 #include <linux/mutex.h>
 
+#include <linux/mtd/nand.h>
 #include <linux/mtd/nand-ecc-mtk.h>
 
 #define ECC_IDLE_MASK		BIT(0)
@@ -41,11 +42,17 @@
 #define ECC_IDLE_REG(op)	((op) == ECC_ENCODE ? ECC_ENCIDLE : ECC_DECIDLE)
 #define ECC_CTL_REG(op)		((op) == ECC_ENCODE ? ECC_ENCCON : ECC_DECCON)
 
+#define OOB_FREE_MAX_SIZE 8
+#define OOB_FREE_MIN_SIZE 1
+
 struct mtk_ecc_caps {
 	u32 err_mask;
 	const u8 *ecc_strength;
 	const u32 *ecc_regs;
 	u8 num_ecc_strength;
+	const u8 *spare_size;
+	u8 num_spare_size;
+	u32 max_section_size;
 	u8 ecc_mode_shift;
 	u32 parity_bits;
 	int pg_irq_sel;
@@ -79,6 +86,12 @@ static const u8 ecc_strength_mt7622[] = {
 	4, 6, 8, 10, 12, 14, 16
 };
 
+/* spare size for each section that each IP supports */
+static const u8 spare_size_mt7622[] = {
+	16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51,
+	52, 62, 61, 63, 64, 67, 74
+};
+
 enum mtk_ecc_regs {
 	ECC_ENCPAR00,
 	ECC_ENCIRQ_EN,
@@ -447,6 +460,604 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc)
 }
 EXPORT_SYMBOL(mtk_ecc_get_parity_bits);
 
+static inline int mtk_ecc_data_off(struct nand_device *nand, int i)
+{
+	int eccsize = nand->ecc.ctx.conf.step_size;
+
+	return i * eccsize;
+}
+
+static inline int mtk_ecc_oob_free_position(struct nand_device *nand, int i)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	int position;
+
+	if (i < eng->bbm_ctl.section)
+		position = (i + 1) * eng->oob_free;
+	else if (i == eng->bbm_ctl.section)
+		position = 0;
+	else
+		position = i * eng->oob_free;
+
+	return position;
+}
+
+static inline int mtk_ecc_data_len(struct nand_device *nand)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	int eccsize = nand->ecc.ctx.conf.step_size;
+	int eccbytes = eng->oob_ecc;
+
+	return eccsize + eng->oob_free + eccbytes;
+}
+
+static inline u8 *mtk_ecc_section_ptr(struct nand_device *nand,  int i)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+
+	return eng->bounce_page_buf + i * mtk_ecc_data_len(nand);
+}
+
+static inline u8 *mtk_ecc_oob_free_ptr(struct nand_device *nand, int i)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	int eccsize = nand->ecc.ctx.conf.step_size;
+
+	return eng->bounce_page_buf + i * mtk_ecc_data_len(nand) + eccsize;
+}
+
+static void mtk_ecc_no_bbm_swap(struct nand_device *a, u8 *b, u8 *c)
+{
+	/* nop */
+}
+
+static void mtk_ecc_bbm_swap(struct nand_device *nand, u8 *databuf, u8 *oobbuf)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	int step_size = nand->ecc.ctx.conf.step_size;
+	u32 bbm_pos = eng->bbm_ctl.position;
+
+	bbm_pos += eng->bbm_ctl.section * step_size;
+
+	swap(oobbuf[0], databuf[bbm_pos]);
+}
+
+static void mtk_ecc_set_bbm_ctl(struct mtk_ecc_bbm_ctl *bbm_ctl,
+				struct nand_device *nand)
+{
+	if (nanddev_page_size(nand) == 512) {
+		bbm_ctl->bbm_swap = mtk_ecc_no_bbm_swap;
+	} else {
+		bbm_ctl->bbm_swap = mtk_ecc_bbm_swap;
+		bbm_ctl->section = nanddev_page_size(nand) /
+				   mtk_ecc_data_len(nand);
+		bbm_ctl->position = nanddev_page_size(nand) %
+				    mtk_ecc_data_len(nand);
+	}
+}
+
+static int mtk_ecc_ooblayout_free(struct mtd_info *mtd, int section,
+				  struct mtd_oob_region *oob_region)
+{
+	struct nand_device *nand = mtd_to_nanddev(mtd);
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+	u32 eccsteps, bbm_bytes = 0;
+
+	eccsteps = mtd->writesize / conf->step_size;
+
+	if (section >= eccsteps)
+		return -ERANGE;
+
+	/* Reserve 1 byte for BBM only for section 0 */
+	if (section == 0)
+		bbm_bytes = 1;
+
+	oob_region->length = eng->oob_free - bbm_bytes;
+	oob_region->offset = section * eng->oob_free + bbm_bytes;
+
+	return 0;
+}
+
+static int mtk_ecc_ooblayout_ecc(struct mtd_info *mtd, int section,
+				 struct mtd_oob_region *oob_region)
+{
+	struct nand_device *nand = mtd_to_nanddev(mtd);
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+
+	if (section)
+		return -ERANGE;
+
+	oob_region->offset = eng->oob_free * eng->nsteps;
+	oob_region->length = mtd->oobsize - oob_region->offset;
+
+	return 0;
+}
+
+static const struct mtd_ooblayout_ops mtk_ecc_ooblayout_ops = {
+	.free = mtk_ecc_ooblayout_free,
+	.ecc = mtk_ecc_ooblayout_ecc,
+};
+
+const struct mtd_ooblayout_ops *mtk_ecc_get_ooblayout(void)
+{
+	return &mtk_ecc_ooblayout_ops;
+}
+
+static struct device *mtk_ecc_get_engine_dev(struct device *dev)
+{
+	struct platform_device *eccpdev;
+	struct device_node *np;
+
+	/*
+	 * In the pipelined case, the device node is the host
+	 * controller, not the actual ECC engine.
+	 */
+	np = of_parse_phandle(dev->of_node, "nand-ecc-engine", 0);
+	if (!np)
+		return NULL;
+
+	eccpdev = of_find_device_by_node(np);
+	if (!eccpdev) {
+		of_node_put(np);
+		return NULL;
+	}
+
+	platform_device_put(eccpdev);
+	of_node_put(np);
+
+	return &eccpdev->dev;
+}
+
+/*
+ * mtk_ecc_data_format() - Convert to/from MTK ECC on-flash data format
+ *
+ * The MTK ECC engine organizes page data by section; the on-flash format is:
+ * ||          section 0         ||          section 1          || ...
+ * || data | OOB free | OOB ECC || data | OOB free | OOB ECC || ...
+ *
+ * Therefore, it's necessary to convert the data when reading/writing in raw mode.
+ */
+static void mtk_ecc_data_format(struct nand_device *nand,
+				struct nand_page_io_req *req)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	int step_size = nand->ecc.ctx.conf.step_size;
+	void *databuf, *oobbuf;
+	int i;
+
+	if (req->type == NAND_PAGE_WRITE) {
+		databuf = (void *)req->databuf.out;
+		oobbuf = (void *)req->oobbuf.out;
+
+		/*
+		 * Convert the source databuf and oobbuf to MTK ECC
+		 * on-flash data format.
+		 */
+		for (i = 0; i < eng->nsteps; i++) {
+			if (i == eng->bbm_ctl.section)
+				eng->bbm_ctl.bbm_swap(nand,
+						      databuf, oobbuf);
+			memcpy(mtk_ecc_section_ptr(nand, i),
+			       databuf + mtk_ecc_data_off(nand, i),
+			       step_size);
+
+			memcpy(mtk_ecc_oob_free_ptr(nand, i),
+			       oobbuf + mtk_ecc_oob_free_position(nand, i),
+			       eng->oob_free);
+
+			memcpy(mtk_ecc_oob_free_ptr(nand, i) + eng->oob_free,
+			       oobbuf + eng->oob_free * eng->nsteps +
+			       i * eng->oob_ecc,
+			       eng->oob_ecc);
+		}
+
+		req->databuf.out = eng->bounce_page_buf;
+		req->oobbuf.out = eng->bounce_oob_buf;
+	} else {
+		databuf = req->databuf.in;
+		oobbuf = req->oobbuf.in;
+
+		/*
+		 * Convert the on-flash MTK ECC data format to
+		 * destination databuf and oobbuf.
+		 */
+		memcpy(eng->bounce_page_buf, databuf,
+		       nanddev_page_size(nand));
+		memcpy(eng->bounce_oob_buf, oobbuf,
+		       nanddev_per_page_oobsize(nand));
+
+		for (i = 0; i < eng->nsteps; i++) {
+			memcpy(databuf + mtk_ecc_data_off(nand, i),
+			       mtk_ecc_section_ptr(nand, i), step_size);
+
+			memcpy(oobbuf + mtk_ecc_oob_free_position(nand, i),
+			       mtk_ecc_section_ptr(nand, i) + step_size,
+			       eng->oob_free);
+
+			memcpy(oobbuf + eng->oob_free * eng->nsteps +
+			       i * eng->oob_ecc,
+			       mtk_ecc_section_ptr(nand, i) + step_size
+			       + eng->oob_free,
+			       eng->oob_ecc);
+
+			if (i == eng->bbm_ctl.section)
+				eng->bbm_ctl.bbm_swap(nand,
+						      databuf, oobbuf);
+		}
+	}
+}
+
+static void mtk_ecc_oob_free_shift(struct nand_device *nand,
+				   u8 *dst_buf, u8 *src_buf, bool write)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	u32 position;
+	int i;
+
+	for (i = 0; i < eng->nsteps; i++) {
+		if (i < eng->bbm_ctl.section)
+			position = (i + 1) * eng->oob_free;
+		else if (i == eng->bbm_ctl.section)
+			position = 0;
+		else
+			position = i * eng->oob_free;
+
+		if (write)
+			memcpy(dst_buf + i * eng->oob_free, src_buf + position,
+			       eng->oob_free);
+		else
+			memcpy(dst_buf + position, src_buf + i * eng->oob_free,
+			       eng->oob_free);
+	}
+}
+
+static void mtk_ecc_set_section_size_and_strength(struct nand_device *nand)
+{
+	struct nand_ecc_props *reqs = &nand->ecc.requirements;
+	struct nand_ecc_props *user = &nand->ecc.user_conf;
+	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+
+	/* Configure the correction depending on the NAND device topology */
+	if (user->step_size && user->strength) {
+		conf->step_size = user->step_size;
+		conf->strength = user->strength;
+	} else if (reqs->step_size && reqs->strength) {
+		conf->step_size = reqs->step_size;
+		conf->strength = reqs->strength;
+	}
+
+	/*
+	 * Align ECC strength and ECC size.
+	 * The MTK HW ECC engine only support 512 and 1024 ECC size.
+	 */
+	if (conf->step_size < 1024) {
+		if (nanddev_page_size(nand) > 512 &&
+		    eng->ecc->caps->max_section_size > 512) {
+			conf->step_size = 1024;
+			conf->strength <<= 1;
+		} else {
+			conf->step_size = 512;
+		}
+	} else {
+		conf->step_size = 1024;
+	}
+
+	eng->section_size = conf->step_size;
+}
+
+static int mtk_ecc_set_spare_per_section(struct nand_device *nand)
+{
+	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	const u8 *spare = eng->ecc->caps->spare_size;
+	u32 i, closest_spare = 0;
+
+	eng->nsteps = nanddev_page_size(nand) / conf->step_size;
+	eng->oob_per_section = nanddev_per_page_oobsize(nand) / eng->nsteps;
+
+	if (conf->step_size == 1024)
+		eng->oob_per_section >>= 1;
+
+	if (eng->oob_per_section < spare[0]) {
+		dev_err(eng->ecc->dev, "OOB size per section too small %d\n",
+			eng->oob_per_section);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < eng->ecc->caps->num_spare_size; i++) {
+		if (eng->oob_per_section >= spare[i] &&
+		    spare[i] >= spare[closest_spare]) {
+			closest_spare = i;
+			if (eng->oob_per_section == spare[i])
+				break;
+		}
+	}
+
+	eng->oob_per_section = spare[closest_spare];
+	eng->oob_per_section_idx = closest_spare;
+
+	if (conf->step_size == 1024)
+		eng->oob_per_section <<= 1;
+
+	return 0;
+}
+
+int mtk_ecc_prepare_io_req_pipelined(struct nand_device *nand,
+				     struct nand_page_io_req *req)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	int ret;
+
+	nand_ecc_tweak_req(&eng->req_ctx, req);
+
+	/* Save the source buffers so the caller's data is not modified */
+	if (req->type == NAND_PAGE_WRITE) {
+		if (req->datalen)
+			memcpy(eng->src_page_buf + req->dataoffs,
+			       req->databuf.out,
+			       req->datalen);
+
+		if (req->ooblen)
+			memcpy(eng->src_oob_buf + req->ooboffs,
+			       req->oobbuf.out,
+			       req->ooblen);
+	}
+
+	if (req->mode == MTD_OPS_RAW) {
+		if (req->type == NAND_PAGE_WRITE)
+			mtk_ecc_data_format(nand, req);
+
+		return 0;
+	}
+
+	eng->ecc_cfg.mode = ECC_NFI_MODE;
+	eng->ecc_cfg.sectors = eng->nsteps;
+	eng->ecc_cfg.op = ECC_DECODE;
+
+	if (req->type == NAND_PAGE_READ)
+		return mtk_ecc_enable(eng->ecc, &eng->ecc_cfg);
+
+	memset(eng->bounce_oob_buf, 0xff, nanddev_per_page_oobsize(nand));
+	if (req->ooblen) {
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_set_databytes(mtd,
+							  req->oobbuf.out,
+							  eng->bounce_oob_buf,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(eng->bounce_oob_buf + req->ooboffs,
+			       req->oobbuf.out,
+			       req->ooblen);
+		}
+	}
+
+	eng->bbm_ctl.bbm_swap(nand, (void *)req->databuf.out,
+			      eng->bounce_oob_buf);
+	mtk_ecc_oob_free_shift(nand, (void *)req->oobbuf.out,
+			       eng->bounce_oob_buf, true);
+
+	eng->ecc_cfg.op = ECC_ENCODE;
+
+	return mtk_ecc_enable(eng->ecc, &eng->ecc_cfg);
+}
+
+int mtk_ecc_finish_io_req_pipelined(struct nand_device *nand,
+				    struct nand_page_io_req *req)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	struct mtk_ecc_stats stats;
+	int ret;
+
+	if (req->type == NAND_PAGE_WRITE) {
+		/* Restore the source buffer data */
+		if (req->datalen)
+			memcpy((void *)req->databuf.out,
+			       eng->src_page_buf + req->dataoffs,
+			       req->datalen);
+
+		if (req->ooblen)
+			memcpy((void *)req->oobbuf.out,
+			       eng->src_oob_buf + req->ooboffs,
+			       req->ooblen);
+
+		if (req->mode != MTD_OPS_RAW)
+			mtk_ecc_disable(eng->ecc);
+
+		nand_ecc_restore_req(&eng->req_ctx, req);
+
+		return 0;
+	}
+
+	if (req->mode == MTD_OPS_RAW) {
+		mtk_ecc_data_format(nand, req);
+		nand_ecc_restore_req(&eng->req_ctx, req);
+
+		return 0;
+	}
+
+	ret = mtk_ecc_wait_done(eng->ecc, ECC_DECODE);
+	if (ret) {
+		ret = -ETIMEDOUT;
+		goto out;
+	}
+
+	if (eng->read_empty) {
+		memset(req->databuf.in, 0xff, nanddev_page_size(nand));
+		memset(req->oobbuf.in, 0xff, nanddev_per_page_oobsize(nand));
+		ret = 0;
+
+		goto out;
+	}
+
+	mtk_ecc_get_stats(eng->ecc, &stats, eng->nsteps);
+	mtd->ecc_stats.corrected += stats.corrected;
+	mtd->ecc_stats.failed += stats.failed;
+
+	/*
+	 * Return -EBADMSG on an uncorrectable ECC error.
+	 * Otherwise, return the number of bitflips.
+	 */
+	if (stats.failed)
+		ret = -EBADMSG;
+	else
+		ret = stats.bitflips;
+
+	memset(eng->bounce_oob_buf, 0xff, nanddev_per_page_oobsize(nand));
+	mtk_ecc_oob_free_shift(nand, eng->bounce_oob_buf, req->oobbuf.in, false);
+	eng->bbm_ctl.bbm_swap(nand, req->databuf.in, eng->bounce_oob_buf);
+
+	if (req->ooblen) {
+		if (req->mode == MTD_OPS_AUTO_OOB)
+			ret = mtd_ooblayout_get_databytes(mtd,
+							  req->oobbuf.in,
+							  eng->bounce_oob_buf,
+							  req->ooboffs,
+							  mtd->oobavail);
+		else
+			memcpy(req->oobbuf.in,
+			       eng->bounce_oob_buf + req->ooboffs,
+			       req->ooblen);
+	}
+
+out:
+	mtk_ecc_disable(eng->ecc);
+	nand_ecc_restore_req(&eng->req_ctx, req);
+
+	return ret;
+}
+
+int mtk_ecc_init_ctx_pipelined(struct nand_device *nand)
+{
+	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	struct mtk_ecc_engine *eng;
+	struct device *dev;
+	int free, ret;
+
+	/*
+	 * In the case of a pipelined engine, the device registering the ECC
+	 * engine is not the actual ECC engine device but the host controller.
+	 */
+	dev = mtk_ecc_get_engine_dev(nand->ecc.engine->dev);
+	if (!dev)
+		return -EINVAL;
+
+	eng = devm_kzalloc(dev, sizeof(*eng), GFP_KERNEL);
+	if (!eng)
+		return -ENOMEM;
+
+	nand->ecc.ctx.priv = eng;
+	nand->ecc.engine->priv = eng;
+
+	eng->ecc = dev_get_drvdata(dev);
+
+	mtk_ecc_set_section_size_and_strength(nand);
+
+	ret = mtk_ecc_set_spare_per_section(nand);
+	if (ret)
+		return ret;
+
+	clk_prepare_enable(eng->ecc->clk);
+	mtk_ecc_hw_init(eng->ecc);
+
+	/* Calculate the OOB free bytes, excluding the ECC parity data */
+	free = (conf->strength * mtk_ecc_get_parity_bits(eng->ecc)
+	       + 7) >> 3;
+	free = eng->oob_per_section - free;
+
+	/*
+	 * Increase the ECC strength if the remaining OOB is bigger than
+	 * the max FDM size, or reduce it if the OOB size is not enough
+	 * for the ECC parity data.
+	 */
+	if (free > OOB_FREE_MAX_SIZE)
+		eng->oob_ecc = eng->oob_per_section - OOB_FREE_MAX_SIZE;
+	else if (free < 0)
+		eng->oob_ecc = eng->oob_per_section - OOB_FREE_MIN_SIZE;
+
+	/* Calculate and adjust the ECC strength based on the OOB ECC bytes */
+	conf->strength = (eng->oob_ecc << 3) /
+			 mtk_ecc_get_parity_bits(eng->ecc);
+	mtk_ecc_adjust_strength(eng->ecc, &conf->strength);
+
+	eng->oob_ecc = DIV_ROUND_UP(conf->strength *
+		       mtk_ecc_get_parity_bits(eng->ecc), 8);
+
+	eng->oob_free = eng->oob_per_section - eng->oob_ecc;
+	if (eng->oob_free > OOB_FREE_MAX_SIZE)
+		eng->oob_free = OOB_FREE_MAX_SIZE;
+
+	eng->oob_free_protected = OOB_FREE_MIN_SIZE;
+
+	eng->oob_ecc = eng->oob_per_section - eng->oob_free;
+
+	if (!mtd->ooblayout)
+		mtd_set_ooblayout(mtd, mtk_ecc_get_ooblayout());
+
+	ret = nand_ecc_init_req_tweaking(&eng->req_ctx, nand);
+	if (ret)
+		return ret;
+
+	eng->src_page_buf = kmalloc(nanddev_page_size(nand) +
+			    nanddev_per_page_oobsize(nand), GFP_KERNEL);
+	eng->bounce_page_buf = kmalloc(nanddev_page_size(nand) +
+			       nanddev_per_page_oobsize(nand), GFP_KERNEL);
+	if (!eng->src_page_buf || !eng->bounce_page_buf) {
+		ret = -ENOMEM;
+		goto cleanup_req_tweak;
+	}
+
+	eng->src_oob_buf = eng->src_page_buf + nanddev_page_size(nand);
+	eng->bounce_oob_buf = eng->bounce_page_buf + nanddev_page_size(nand);
+
+	mtk_ecc_set_bbm_ctl(&eng->bbm_ctl, nand);
+	eng->ecc_cfg.strength = conf->strength;
+	eng->ecc_cfg.len = conf->step_size + eng->oob_free_protected;
+	mtd->bitflip_threshold = conf->strength;
+
+	return 0;
+
+cleanup_req_tweak:
+	nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
+
+	return ret;
+}
+
+void mtk_ecc_cleanup_ctx_pipelined(struct nand_device *nand)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+
+	if (eng) {
+		nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
+		kfree(eng->src_page_buf);
+		kfree(eng->bounce_page_buf);
+	}
+}
+
+/*
+ * The MTK ECC engine work at pipelined situation,
+ * will be registered by the drivers that wrap it.
+ */
+static struct nand_ecc_engine_ops mtk_ecc_engine_pipelined_ops = {
+	.init_ctx = mtk_ecc_init_ctx_pipelined,
+	.cleanup_ctx = mtk_ecc_cleanup_ctx_pipelined,
+	.prepare_io_req = mtk_ecc_prepare_io_req_pipelined,
+	.finish_io_req = mtk_ecc_finish_io_req_pipelined,
+};
+
+struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void)
+{
+	return &mtk_ecc_engine_pipelined_ops;
+}
+EXPORT_SYMBOL(mtk_ecc_get_pipelined_ops);
+
 static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
 	.err_mask = 0x3f,
 	.ecc_strength = ecc_strength_mt2701,
@@ -472,6 +1083,9 @@ static const struct mtk_ecc_caps mtk_ecc_caps_mt7622 = {
 	.ecc_strength = ecc_strength_mt7622,
 	.ecc_regs = mt7622_ecc_regs,
 	.num_ecc_strength = 7,
+	.spare_size = spare_size_mt7622,
+	.num_spare_size = 19,
+	.max_section_size = 1024,
 	.ecc_mode_shift = 4,
 	.parity_bits = 13,
 	.pg_irq_sel = 0,
diff --git a/include/linux/mtd/nand-ecc-mtk.h b/include/linux/mtd/nand-ecc-mtk.h
index 0e48c36e6ca0..6d550032cbd9 100644
--- a/include/linux/mtd/nand-ecc-mtk.h
+++ b/include/linux/mtd/nand-ecc-mtk.h
@@ -33,6 +33,61 @@ struct mtk_ecc_config {
 	u32 len;
 };
 
+/**
+ * struct mtk_ecc_bbm_ctl - Information relative to the BBM swap
+ * @bbm_swap: BBM swap function
+ * @section: Section number in data area for swap
+ * @position: Position in @section for swap with BBM
+ */
+struct mtk_ecc_bbm_ctl {
+	void (*bbm_swap)(struct nand_device *nand, u8 *databuf, u8 *oobbuf);
+	u32 section;
+	u32 position;
+};
+
+/**
+ * struct mtk_ecc_engine - Information relative to the ECC
+ * @req_ctx: Save request context and tweak the original request to fit the
+ *           engine needs
+ * @oob_per_section: OOB size for each section to store OOB free/ECC bytes
+ * @oob_per_section_idx: The index for @oob_per_section in spare size array
+ * @oob_ecc: OOB size for each section to store the ECC parity
+ * @oob_free: OOB size for each section to store the OOB free bytes
+ * @oob_free_protected: OOB free bytes will be protected by the ECC engine
+ * @section_size: The size of each section
+ * @read_empty: Indicate whether empty page for one read operation
+ * @nsteps: The number of the sections
+ * @src_page_buf: Buffer used to store source data buffer when write
+ * @src_oob_buf: Buffer used to store source OOB buffer when write
+ * @bounce_page_buf: Data bounce buffer
+ * @bounce_oob_buf: OOB bounce buffer
+ * @ecc: The ECC engine private data structure
+ * @ecc_cfg: The configuration of each ECC operation
+ * @bbm_ctl: Information relative to the BBM swap
+ */
+struct mtk_ecc_engine {
+	struct nand_ecc_req_tweak_ctx req_ctx;
+
+	u32 oob_per_section;
+	u32 oob_per_section_idx;
+	u32 oob_ecc;
+	u32 oob_free;
+	u32 oob_free_protected;
+	u32 section_size;
+
+	bool read_empty;
+	u32 nsteps;
+
+	u8 *src_page_buf;
+	u8 *src_oob_buf;
+	u8 *bounce_page_buf;
+	u8 *bounce_oob_buf;
+
+	struct mtk_ecc *ecc;
+	struct mtk_ecc_config ecc_cfg;
+	struct mtk_ecc_bbm_ctl bbm_ctl;
+};
+
 int mtk_ecc_encode(struct mtk_ecc *, struct mtk_ecc_config *, u8 *, u32);
 void mtk_ecc_get_stats(struct mtk_ecc *, struct mtk_ecc_stats *, int);
 int mtk_ecc_wait_done(struct mtk_ecc *, enum mtk_ecc_operation);
@@ -44,4 +99,17 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc);
 struct mtk_ecc *of_mtk_ecc_get(struct device_node *);
 void mtk_ecc_release(struct mtk_ecc *);
 
+#if IS_ENABLED(CONFIG_MTD_NAND_ECC_MTK)
+
+struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void);
+
+#else /* !CONFIG_MTD_NAND_ECC_MTK */
+
+struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_MTD_NAND_ECC_MTK */
+
 #endif
-- 
2.25.1


______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
@ 2021-11-30  8:31   ` Xiangsheng Hou
  0 siblings, 0 replies; 36+ messages in thread
From: Xiangsheng Hou @ 2021-11-30  8:31 UTC (permalink / raw)
  To: miquel.raynal, broonie
  Cc: xiangsheng.hou, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Convert the MediaTek HW ECC engine to the ECC infrastructure as a
pipelined engine.

Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
---
 drivers/mtd/nand/ecc-mtk.c       | 614 +++++++++++++++++++++++++++++++
 include/linux/mtd/nand-ecc-mtk.h |  68 ++++
 2 files changed, 682 insertions(+)

diff --git a/drivers/mtd/nand/ecc-mtk.c b/drivers/mtd/nand/ecc-mtk.c
index 31d7c77d5c59..c44499b3d0a5 100644
--- a/drivers/mtd/nand/ecc-mtk.c
+++ b/drivers/mtd/nand/ecc-mtk.c
@@ -16,6 +16,7 @@
 #include <linux/of_platform.h>
 #include <linux/mutex.h>
 
+#include <linux/mtd/nand.h>
 #include <linux/mtd/nand-ecc-mtk.h>
 
 #define ECC_IDLE_MASK		BIT(0)
@@ -41,11 +42,17 @@
 #define ECC_IDLE_REG(op)	((op) == ECC_ENCODE ? ECC_ENCIDLE : ECC_DECIDLE)
 #define ECC_CTL_REG(op)		((op) == ECC_ENCODE ? ECC_ENCCON : ECC_DECCON)
 
+#define OOB_FREE_MAX_SIZE 8
+#define OOB_FREE_MIN_SIZE 1
+
 struct mtk_ecc_caps {
 	u32 err_mask;
 	const u8 *ecc_strength;
 	const u32 *ecc_regs;
 	u8 num_ecc_strength;
+	const u8 *spare_size;
+	u8 num_spare_size;
+	u32 max_section_size;
 	u8 ecc_mode_shift;
 	u32 parity_bits;
 	int pg_irq_sel;
@@ -79,6 +86,12 @@ static const u8 ecc_strength_mt7622[] = {
 	4, 6, 8, 10, 12, 14, 16
 };
 
+/* spare size for each section that each IP supports */
+static const u8 spare_size_mt7622[] = {
+	16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51,
+	52, 62, 61, 63, 64, 67, 74
+};
+
 enum mtk_ecc_regs {
 	ECC_ENCPAR00,
 	ECC_ENCIRQ_EN,
@@ -447,6 +460,604 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc)
 }
 EXPORT_SYMBOL(mtk_ecc_get_parity_bits);
 
+static inline int mtk_ecc_data_off(struct nand_device *nand, int i)
+{
+	int eccsize = nand->ecc.ctx.conf.step_size;
+
+	return i * eccsize;
+}
+
+static inline int mtk_ecc_oob_free_position(struct nand_device *nand, int i)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	int position;
+
+	if (i < eng->bbm_ctl.section)
+		position = (i + 1) * eng->oob_free;
+	else if (i == eng->bbm_ctl.section)
+		position = 0;
+	else
+		position = i * eng->oob_free;
+
+	return position;
+}
+
+static inline int mtk_ecc_data_len(struct nand_device *nand)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	int eccsize = nand->ecc.ctx.conf.step_size;
+	int eccbytes = eng->oob_ecc;
+
+	return eccsize + eng->oob_free + eccbytes;
+}
+
+static inline u8 *mtk_ecc_section_ptr(struct nand_device *nand,  int i)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+
+	return eng->bounce_page_buf + i * mtk_ecc_data_len(nand);
+}
+
+static inline u8 *mtk_ecc_oob_free_ptr(struct nand_device *nand, int i)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	int eccsize = nand->ecc.ctx.conf.step_size;
+
+	return eng->bounce_page_buf + i * mtk_ecc_data_len(nand) + eccsize;
+}
+
+static void mtk_ecc_no_bbm_swap(struct nand_device *a, u8 *b, u8 *c)
+{
+	/* nop */
+}
+
+static void mtk_ecc_bbm_swap(struct nand_device *nand, u8 *databuf, u8 *oobbuf)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	int step_size = nand->ecc.ctx.conf.step_size;
+	u32 bbm_pos = eng->bbm_ctl.position;
+
+	bbm_pos += eng->bbm_ctl.section * step_size;
+
+	swap(oobbuf[0], databuf[bbm_pos]);
+}
+
+static void mtk_ecc_set_bbm_ctl(struct mtk_ecc_bbm_ctl *bbm_ctl,
+				struct nand_device *nand)
+{
+	if (nanddev_page_size(nand) == 512) {
+		bbm_ctl->bbm_swap = mtk_ecc_no_bbm_swap;
+	} else {
+		bbm_ctl->bbm_swap = mtk_ecc_bbm_swap;
+		bbm_ctl->section = nanddev_page_size(nand) /
+				   mtk_ecc_data_len(nand);
+		bbm_ctl->position = nanddev_page_size(nand) %
+				    mtk_ecc_data_len(nand);
+	}
+}
+
+static int mtk_ecc_ooblayout_free(struct mtd_info *mtd, int section,
+				  struct mtd_oob_region *oob_region)
+{
+	struct nand_device *nand = mtd_to_nanddev(mtd);
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+	u32 eccsteps, bbm_bytes = 0;
+
+	eccsteps = mtd->writesize / conf->step_size;
+
+	if (section >= eccsteps)
+		return -ERANGE;
+
+	/* Reserve 1 byte for BBM only for section 0 */
+	if (section == 0)
+		bbm_bytes = 1;
+
+	oob_region->length = eng->oob_free - bbm_bytes;
+	oob_region->offset = section * eng->oob_free + bbm_bytes;
+
+	return 0;
+}
+
+static int mtk_ecc_ooblayout_ecc(struct mtd_info *mtd, int section,
+				 struct mtd_oob_region *oob_region)
+{
+	struct nand_device *nand = mtd_to_nanddev(mtd);
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+
+	if (section)
+		return -ERANGE;
+
+	oob_region->offset = eng->oob_free * eng->nsteps;
+	oob_region->length = mtd->oobsize - oob_region->offset;
+
+	return 0;
+}
+
+static const struct mtd_ooblayout_ops mtk_ecc_ooblayout_ops = {
+	.free = mtk_ecc_ooblayout_free,
+	.ecc = mtk_ecc_ooblayout_ecc,
+};
+
+const struct mtd_ooblayout_ops *mtk_ecc_get_ooblayout(void)
+{
+	return &mtk_ecc_ooblayout_ops;
+}
+
+static struct device *mtk_ecc_get_engine_dev(struct device *dev)
+{
+	struct platform_device *eccpdev;
+	struct device_node *np;
+
+	/*
+	 * In the pipelined case, the device node belongs to the host
+	 * controller, not to the actual ECC engine.
+	 */
+	np = of_parse_phandle(dev->of_node, "nand-ecc-engine", 0);
+	if (!np)
+		return NULL;
+
+	eccpdev = of_find_device_by_node(np);
+	if (!eccpdev) {
+		of_node_put(np);
+		return NULL;
+	}
+
+	platform_device_put(eccpdev);
+	of_node_put(np);
+
+	return &eccpdev->dev;
+}
+
+/*
+ * mtk_ecc_data_format() - Convert to/from MTK ECC on-flash data format
+ *
+ * The MTK ECC engine organizes page data by section; the on-flash format
+ * is as below:
+ * ||          section 0          ||          section 1          || ...
+ * || data | OOB free | OOB ECC  || data | OOB free | OOB ECC  || ...
+ *
+ * Therefore, it is necessary to convert the data when reading/writing in
+ * raw mode.
+ */
+static void mtk_ecc_data_format(struct nand_device *nand,
+				struct nand_page_io_req *req)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	int step_size = nand->ecc.ctx.conf.step_size;
+	void *databuf, *oobbuf;
+	int i;
+
+	if (req->type == NAND_PAGE_WRITE) {
+		databuf = (void *)req->databuf.out;
+		oobbuf = (void *)req->oobbuf.out;
+
+		/*
+		 * Convert the source databuf and oobbuf to MTK ECC
+		 * on-flash data format.
+		 */
+		for (i = 0; i < eng->nsteps; i++) {
+			if (i == eng->bbm_ctl.section)
+				eng->bbm_ctl.bbm_swap(nand,
+						      databuf, oobbuf);
+			memcpy(mtk_ecc_section_ptr(nand, i),
+			       databuf + mtk_ecc_data_off(nand, i),
+			       step_size);
+
+			memcpy(mtk_ecc_oob_free_ptr(nand, i),
+			       oobbuf + mtk_ecc_oob_free_position(nand, i),
+			       eng->oob_free);
+
+			memcpy(mtk_ecc_oob_free_ptr(nand, i) + eng->oob_free,
+			       oobbuf + eng->oob_free * eng->nsteps +
+			       i * eng->oob_ecc,
+			       eng->oob_ecc);
+		}
+
+		req->databuf.out = eng->bounce_page_buf;
+		req->oobbuf.out = eng->bounce_oob_buf;
+	} else {
+		databuf = req->databuf.in;
+		oobbuf = req->oobbuf.in;
+
+		/*
+		 * Convert the on-flash MTK ECC data format to
+		 * destination databuf and oobbuf.
+		 */
+		memcpy(eng->bounce_page_buf, databuf,
+		       nanddev_page_size(nand));
+		memcpy(eng->bounce_oob_buf, oobbuf,
+		       nanddev_per_page_oobsize(nand));
+
+		for (i = 0; i < eng->nsteps; i++) {
+			memcpy(databuf + mtk_ecc_data_off(nand, i),
+			       mtk_ecc_section_ptr(nand, i), step_size);
+
+			memcpy(oobbuf + mtk_ecc_oob_free_position(nand, i),
+			       mtk_ecc_section_ptr(nand, i) + step_size,
+			       eng->oob_free);
+
+			memcpy(oobbuf + eng->oob_free * eng->nsteps +
+			       i * eng->oob_ecc,
+			       mtk_ecc_section_ptr(nand, i) + step_size
+			       + eng->oob_free,
+			       eng->oob_ecc);
+
+			if (i == eng->bbm_ctl.section)
+				eng->bbm_ctl.bbm_swap(nand,
+						      databuf, oobbuf);
+		}
+	}
+}
+
+static void mtk_ecc_oob_free_shift(struct nand_device *nand,
+				   u8 *dst_buf, u8 *src_buf, bool write)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	u32 position;
+	int i;
+
+	for (i = 0; i < eng->nsteps; i++) {
+		if (i < eng->bbm_ctl.section)
+			position = (i + 1) * eng->oob_free;
+		else if (i == eng->bbm_ctl.section)
+			position = 0;
+		else
+			position = i * eng->oob_free;
+
+		if (write)
+			memcpy(dst_buf + i * eng->oob_free, src_buf + position,
+			       eng->oob_free);
+		else
+			memcpy(dst_buf + position, src_buf + i * eng->oob_free,
+			       eng->oob_free);
+	}
+}
+
+static void mtk_ecc_set_section_size_and_strength(struct nand_device *nand)
+{
+	struct nand_ecc_props *reqs = &nand->ecc.requirements;
+	struct nand_ecc_props *user = &nand->ecc.user_conf;
+	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+
+	/* Configure the correction depending on the NAND device topology */
+	if (user->step_size && user->strength) {
+		conf->step_size = user->step_size;
+		conf->strength = user->strength;
+	} else if (reqs->step_size && reqs->strength) {
+		conf->step_size = reqs->step_size;
+		conf->strength = reqs->strength;
+	}
+
+	/*
+	 * Align ECC strength and ECC size.
+	 * The MTK HW ECC engine only supports 512- and 1024-byte ECC sizes.
+	 */
+	if (conf->step_size < 1024) {
+		if (nanddev_page_size(nand) > 512 &&
+		    eng->ecc->caps->max_section_size > 512) {
+			conf->step_size = 1024;
+			conf->strength <<= 1;
+		} else {
+			conf->step_size = 512;
+		}
+	} else {
+		conf->step_size = 1024;
+	}
+
+	eng->section_size = conf->step_size;
+}
+
+static int mtk_ecc_set_spare_per_section(struct nand_device *nand)
+{
+	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	const u8 *spare = eng->ecc->caps->spare_size;
+	u32 i, closest_spare = 0;
+
+	eng->nsteps = nanddev_page_size(nand) / conf->step_size;
+	eng->oob_per_section = nanddev_per_page_oobsize(nand) / eng->nsteps;
+
+	if (conf->step_size == 1024)
+		eng->oob_per_section >>= 1;
+
+	if (eng->oob_per_section < spare[0]) {
+		dev_err(eng->ecc->dev, "OOB size per section too small %d\n",
+			eng->oob_per_section);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < eng->ecc->caps->num_spare_size; i++) {
+		if (eng->oob_per_section >= spare[i] &&
+		    spare[i] >= spare[closest_spare]) {
+			closest_spare = i;
+			if (eng->oob_per_section == spare[i])
+				break;
+		}
+	}
+
+	eng->oob_per_section = spare[closest_spare];
+	eng->oob_per_section_idx = closest_spare;
+
+	if (conf->step_size == 1024)
+		eng->oob_per_section <<= 1;
+
+	return 0;
+}
+
+int mtk_ecc_prepare_io_req_pipelined(struct nand_device *nand,
+				     struct nand_page_io_req *req)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	int ret;
+
+	nand_ecc_tweak_req(&eng->req_ctx, req);
+
+	/* Save the source buffers so the caller's data is not modified */
+	if (req->type == NAND_PAGE_WRITE) {
+		if (req->datalen)
+			memcpy(eng->src_page_buf + req->dataoffs,
+			       req->databuf.out,
+			       req->datalen);
+
+		if (req->ooblen)
+			memcpy(eng->src_oob_buf + req->ooboffs,
+			       req->oobbuf.out,
+			       req->ooblen);
+	}
+
+	if (req->mode == MTD_OPS_RAW) {
+		if (req->type == NAND_PAGE_WRITE)
+			mtk_ecc_data_format(nand, req);
+
+		return 0;
+	}
+
+	eng->ecc_cfg.mode = ECC_NFI_MODE;
+	eng->ecc_cfg.sectors = eng->nsteps;
+	eng->ecc_cfg.op = ECC_DECODE;
+
+	if (req->type == NAND_PAGE_READ)
+		return mtk_ecc_enable(eng->ecc, &eng->ecc_cfg);
+
+	memset(eng->bounce_oob_buf, 0xff, nanddev_per_page_oobsize(nand));
+	if (req->ooblen) {
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_set_databytes(mtd,
+							  req->oobbuf.out,
+							  eng->bounce_oob_buf,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(eng->bounce_oob_buf + req->ooboffs,
+			       req->oobbuf.out,
+			       req->ooblen);
+		}
+	}
+
+	eng->bbm_ctl.bbm_swap(nand, (void *)req->databuf.out,
+			      eng->bounce_oob_buf);
+	mtk_ecc_oob_free_shift(nand, (void *)req->oobbuf.out,
+			       eng->bounce_oob_buf, true);
+
+	eng->ecc_cfg.op = ECC_ENCODE;
+
+	return mtk_ecc_enable(eng->ecc, &eng->ecc_cfg);
+}
+
+int mtk_ecc_finish_io_req_pipelined(struct nand_device *nand,
+				    struct nand_page_io_req *req)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	struct mtk_ecc_stats stats;
+	int ret;
+
+	if (req->type == NAND_PAGE_WRITE) {
+		/* Restore the source buffer data */
+		if (req->datalen)
+			memcpy((void *)req->databuf.out,
+			       eng->src_page_buf + req->dataoffs,
+			       req->datalen);
+
+		if (req->ooblen)
+			memcpy((void *)req->oobbuf.out,
+			       eng->src_oob_buf + req->ooboffs,
+			       req->ooblen);
+
+		if (req->mode != MTD_OPS_RAW)
+			mtk_ecc_disable(eng->ecc);
+
+		nand_ecc_restore_req(&eng->req_ctx, req);
+
+		return 0;
+	}
+
+	if (req->mode == MTD_OPS_RAW) {
+		mtk_ecc_data_format(nand, req);
+		nand_ecc_restore_req(&eng->req_ctx, req);
+
+		return 0;
+	}
+
+	ret = mtk_ecc_wait_done(eng->ecc, ECC_DECODE);
+	if (ret) {
+		ret = -ETIMEDOUT;
+		goto out;
+	}
+
+	if (eng->read_empty) {
+		memset(req->databuf.in, 0xff, nanddev_page_size(nand));
+		memset(req->oobbuf.in, 0xff, nanddev_per_page_oobsize(nand));
+		ret = 0;
+
+		goto out;
+	}
+
+	mtk_ecc_get_stats(eng->ecc, &stats, eng->nsteps);
+	mtd->ecc_stats.corrected += stats.corrected;
+	mtd->ecc_stats.failed += stats.failed;
+
+	/*
+	 * Return -EBADMSG on an uncorrectable ECC error.
+	 * Otherwise, return the number of bitflips.
+	 */
+	if (stats.failed)
+		ret = -EBADMSG;
+	else
+		ret = stats.bitflips;
+
+	memset(eng->bounce_oob_buf, 0xff, nanddev_per_page_oobsize(nand));
+	mtk_ecc_oob_free_shift(nand, eng->bounce_oob_buf, req->oobbuf.in, false);
+	eng->bbm_ctl.bbm_swap(nand, req->databuf.in, eng->bounce_oob_buf);
+
+	if (req->ooblen) {
+		if (req->mode == MTD_OPS_AUTO_OOB)
+			ret = mtd_ooblayout_get_databytes(mtd,
+							  req->oobbuf.in,
+							  eng->bounce_oob_buf,
+							  req->ooboffs,
+							  mtd->oobavail);
+		else
+			memcpy(req->oobbuf.in,
+			       eng->bounce_oob_buf + req->ooboffs,
+			       req->ooblen);
+	}
+
+out:
+	mtk_ecc_disable(eng->ecc);
+	nand_ecc_restore_req(&eng->req_ctx, req);
+
+	return ret;
+}
+
+int mtk_ecc_init_ctx_pipelined(struct nand_device *nand)
+{
+	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
+	struct mtd_info *mtd = nanddev_to_mtd(nand);
+	struct mtk_ecc_engine *eng;
+	struct device *dev;
+	int free, ret;
+
+	/*
+	 * In the case of a pipelined engine, the device registering the ECC
+	 * engine is not the actual ECC engine device but the host controller.
+	 */
+	dev = mtk_ecc_get_engine_dev(nand->ecc.engine->dev);
+	if (!dev)
+		return -EINVAL;
+
+	eng = devm_kzalloc(dev, sizeof(*eng), GFP_KERNEL);
+	if (!eng)
+		return -ENOMEM;
+
+	nand->ecc.ctx.priv = eng;
+	nand->ecc.engine->priv = eng;
+
+	eng->ecc = dev_get_drvdata(dev);
+
+	mtk_ecc_set_section_size_and_strength(nand);
+
+	ret = mtk_ecc_set_spare_per_section(nand);
+	if (ret)
+		return ret;
+
+	clk_prepare_enable(eng->ecc->clk);
+	mtk_ecc_hw_init(eng->ecc);
+
+	/* Calculate the OOB free bytes, excluding the ECC parity data */
+	free = (conf->strength * mtk_ecc_get_parity_bits(eng->ecc)
+	       + 7) >> 3;
+	free = eng->oob_per_section - free;
+
+	/*
+	 * Increase the ECC strength if the OOB bytes left over are larger
+	 * than the max FDM size, or reduce it if the OOB size is not
+	 * enough to hold the ECC parity data.
+	 */
+	if (free > OOB_FREE_MAX_SIZE)
+		eng->oob_ecc = eng->oob_per_section - OOB_FREE_MAX_SIZE;
+	else if (free < 0)
+		eng->oob_ecc = eng->oob_per_section - OOB_FREE_MIN_SIZE;
+
+	/* Calculate and adjust the ECC strength based on the OOB ECC bytes */
+	conf->strength = (eng->oob_ecc << 3) /
+			 mtk_ecc_get_parity_bits(eng->ecc);
+	mtk_ecc_adjust_strength(eng->ecc, &conf->strength);
+
+	eng->oob_ecc = DIV_ROUND_UP(conf->strength *
+		       mtk_ecc_get_parity_bits(eng->ecc), 8);
+
+	eng->oob_free = eng->oob_per_section - eng->oob_ecc;
+	if (eng->oob_free > OOB_FREE_MAX_SIZE)
+		eng->oob_free = OOB_FREE_MAX_SIZE;
+
+	eng->oob_free_protected = OOB_FREE_MIN_SIZE;
+
+	eng->oob_ecc = eng->oob_per_section - eng->oob_free;
+
+	if (!mtd->ooblayout)
+		mtd_set_ooblayout(mtd, mtk_ecc_get_ooblayout());
+
+	ret = nand_ecc_init_req_tweaking(&eng->req_ctx, nand);
+	if (ret)
+		return ret;
+
+	eng->src_page_buf = kmalloc(nanddev_page_size(nand) +
+			    nanddev_per_page_oobsize(nand), GFP_KERNEL);
+	eng->bounce_page_buf = kmalloc(nanddev_page_size(nand) +
+			       nanddev_per_page_oobsize(nand), GFP_KERNEL);
+	if (!eng->src_page_buf || !eng->bounce_page_buf) {
+		ret = -ENOMEM;
+		goto cleanup_req_tweak;
+	}
+
+	eng->src_oob_buf = eng->src_page_buf + nanddev_page_size(nand);
+	eng->bounce_oob_buf = eng->bounce_page_buf + nanddev_page_size(nand);
+
+	mtk_ecc_set_bbm_ctl(&eng->bbm_ctl, nand);
+	eng->ecc_cfg.strength = conf->strength;
+	eng->ecc_cfg.len = conf->step_size + eng->oob_free_protected;
+	mtd->bitflip_threshold = conf->strength;
+
+	return 0;
+
+cleanup_req_tweak:
+	nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
+
+	return ret;
+}
+
+void mtk_ecc_cleanup_ctx_pipelined(struct nand_device *nand)
+{
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+
+	if (eng) {
+		nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
+		kfree(eng->src_page_buf);
+		kfree(eng->bounce_page_buf);
+	}
+}
+
+/*
+ * The MTK ECC engine works in the pipelined case and is
+ * registered by the drivers that wrap it.
+ */
+static struct nand_ecc_engine_ops mtk_ecc_engine_pipelined_ops = {
+	.init_ctx = mtk_ecc_init_ctx_pipelined,
+	.cleanup_ctx = mtk_ecc_cleanup_ctx_pipelined,
+	.prepare_io_req = mtk_ecc_prepare_io_req_pipelined,
+	.finish_io_req = mtk_ecc_finish_io_req_pipelined,
+};
+
+struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void)
+{
+	return &mtk_ecc_engine_pipelined_ops;
+}
+EXPORT_SYMBOL(mtk_ecc_get_pipelined_ops);
+
 static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
 	.err_mask = 0x3f,
 	.ecc_strength = ecc_strength_mt2701,
@@ -472,6 +1083,9 @@ static const struct mtk_ecc_caps mtk_ecc_caps_mt7622 = {
 	.ecc_strength = ecc_strength_mt7622,
 	.ecc_regs = mt7622_ecc_regs,
 	.num_ecc_strength = 7,
+	.spare_size = spare_size_mt7622,
+	.num_spare_size = 19,
+	.max_section_size = 1024,
 	.ecc_mode_shift = 4,
 	.parity_bits = 13,
 	.pg_irq_sel = 0,
diff --git a/include/linux/mtd/nand-ecc-mtk.h b/include/linux/mtd/nand-ecc-mtk.h
index 0e48c36e6ca0..6d550032cbd9 100644
--- a/include/linux/mtd/nand-ecc-mtk.h
+++ b/include/linux/mtd/nand-ecc-mtk.h
@@ -33,6 +33,61 @@ struct mtk_ecc_config {
 	u32 len;
 };
 
+/**
+ * struct mtk_ecc_bbm_ctl - Information related to the BBM swap
+ * @bbm_swap: BBM swap function
+ * @section: Section number in data area for swap
+ * @position: Position in @section for swap with BBM
+ */
+struct mtk_ecc_bbm_ctl {
+	void (*bbm_swap)(struct nand_device *nand, u8 *databuf, u8 *oobbuf);
+	u32 section;
+	u32 position;
+};
+
+/**
+ * struct mtk_ecc_engine - Information related to the ECC engine
+ * @req_ctx: Save request context and tweak the original request to fit the
+ *           engine needs
+ * @oob_per_section: OOB size for each section to store OOB free/ECC bytes
+ * @oob_per_section_idx: The index for @oob_per_section in spare size array
+ * @oob_ecc: OOB size for each section to store the ECC parity
+ * @oob_free: OOB size for each section to store the OOB free bytes
+ * @oob_free_protected: OOB free bytes protected by the ECC engine
+ * @section_size: The size of each section
+ * @read_empty: Indicates whether the page was empty on the last read operation
+ * @nsteps: The number of sections
+ * @src_page_buf: Buffer used to save the source data buffer on write
+ * @src_oob_buf: Buffer used to save the source OOB buffer on write
+ * @bounce_page_buf: Data bounce buffer
+ * @bounce_oob_buf: OOB bounce buffer
+ * @ecc: The ECC engine private data structure
+ * @ecc_cfg: The configuration of each ECC operation
+ * @bbm_ctl: Information related to the BBM swap
+ */
+struct mtk_ecc_engine {
+	struct nand_ecc_req_tweak_ctx req_ctx;
+
+	u32 oob_per_section;
+	u32 oob_per_section_idx;
+	u32 oob_ecc;
+	u32 oob_free;
+	u32 oob_free_protected;
+	u32 section_size;
+
+	bool read_empty;
+	u32 nsteps;
+
+	u8 *src_page_buf;
+	u8 *src_oob_buf;
+	u8 *bounce_page_buf;
+	u8 *bounce_oob_buf;
+
+	struct mtk_ecc *ecc;
+	struct mtk_ecc_config ecc_cfg;
+	struct mtk_ecc_bbm_ctl bbm_ctl;
+};
+
 int mtk_ecc_encode(struct mtk_ecc *, struct mtk_ecc_config *, u8 *, u32);
 void mtk_ecc_get_stats(struct mtk_ecc *, struct mtk_ecc_stats *, int);
 int mtk_ecc_wait_done(struct mtk_ecc *, enum mtk_ecc_operation);
@@ -44,4 +99,17 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc);
 struct mtk_ecc *of_mtk_ecc_get(struct device_node *);
 void mtk_ecc_release(struct mtk_ecc *);
 
+#if IS_ENABLED(CONFIG_MTD_NAND_ECC_MTK)
+
+struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void);
+
+#else /* !CONFIG_MTD_NAND_ECC_MTK */
+
+static inline struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void)
+{
+	return NULL;
+}
+
+#endif /* CONFIG_MTD_NAND_ECC_MTK */
+
 #endif
-- 
2.25.1


_______________________________________________
Linux-mediatek mailing list
Linux-mediatek@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-mediatek


* [RFC,v4,3/5] spi: mtk: Add mediatek SPI Nand Flash interface driver
  2021-11-30  8:31 ` Xiangsheng Hou
@ 2021-11-30  8:32   ` Xiangsheng Hou
  -1 siblings, 0 replies; 36+ messages in thread
From: Xiangsheng Hou @ 2021-11-30  8:32 UTC (permalink / raw)
  To: miquel.raynal, broonie
  Cc: xiangsheng.hou, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

The SPI NAND Flash interface driver coworks with the MediaTek
pipelined HW ECC engine.

Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
---
 drivers/spi/Kconfig        |   11 +
 drivers/spi/Makefile       |    1 +
 drivers/spi/spi-mtk-snfi.c | 1117 ++++++++++++++++++++++++++++++++++++
 3 files changed, 1129 insertions(+)
 create mode 100644 drivers/spi/spi-mtk-snfi.c

diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
index 596705d24400..9cb6a173b1ef 100644
--- a/drivers/spi/Kconfig
+++ b/drivers/spi/Kconfig
@@ -535,6 +535,17 @@ config SPI_MT65XX
 	  say Y or M here.If you are not sure, say N.
 	  SPI drivers for Mediatek MT65XX and MT81XX series ARM SoCs.
 
+config SPI_MTK_SNFI
+	tristate "MediaTek SPI NAND interface"
+	depends on MTD
+	select MTD_SPI_NAND
+	select MTD_NAND_ECC_MTK
+	help
+	  This selects the SPI NAND Flash interface (SNFI),
+	  which can be found on MediaTek SoCs.
+	  Say Y or M here. If you are not sure, say N.
+	  Note that parallel NAND and SPI NAND are mutually
+	  exclusive on MediaTek SoCs.
+
 config SPI_MT7621
 	tristate "MediaTek MT7621 SPI Controller"
 	depends on RALINK || COMPILE_TEST
diff --git a/drivers/spi/Makefile b/drivers/spi/Makefile
index dd7393a6046f..57d11eecf662 100644
--- a/drivers/spi/Makefile
+++ b/drivers/spi/Makefile
@@ -73,6 +73,7 @@ obj-$(CONFIG_SPI_MPC52xx)		+= spi-mpc52xx.o
 obj-$(CONFIG_SPI_MT65XX)                += spi-mt65xx.o
 obj-$(CONFIG_SPI_MT7621)		+= spi-mt7621.o
 obj-$(CONFIG_SPI_MTK_NOR)		+= spi-mtk-nor.o
+obj-$(CONFIG_SPI_MTK_SNFI)              += spi-mtk-snfi.o
 obj-$(CONFIG_SPI_MXIC)			+= spi-mxic.o
 obj-$(CONFIG_SPI_MXS)			+= spi-mxs.o
 obj-$(CONFIG_SPI_NPCM_FIU)		+= spi-npcm-fiu.o
diff --git a/drivers/spi/spi-mtk-snfi.c b/drivers/spi/spi-mtk-snfi.c
new file mode 100644
index 000000000000..b4dce6d78176
--- /dev/null
+++ b/drivers/spi/spi-mtk-snfi.c
@@ -0,0 +1,1117 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Driver for the MediaTek SPI NAND Flash interface
+ *
+ * Copyright (C) 2021 MediaTek Inc.
+ * Authors:	Xiangsheng Hou	<xiangsheng.hou@mediatek.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/nand-ecc-mtk.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/spi/spi.h>
+#include <linux/spi/spi-mem.h>
+
+/* Registers used by the driver */
+#define NFI_CNFG		(0x00)
+#define		CNFG_DMA		BIT(0)
+#define		CNFG_READ_EN		BIT(1)
+#define		CNFG_DMA_BURST_EN	BIT(2)
+#define		CNFG_HW_ECC_EN		BIT(8)
+#define		CNFG_AUTO_FMT_EN	BIT(9)
+#define		CNFG_OP_CUST		GENMASK(14, 13)
+#define NFI_PAGEFMT		(0x04)
+#define		PAGEFMT_512_2K		(0)
+#define		PAGEFMT_2K_4K		(1)
+#define		PAGEFMT_4K_8K		(2)
+#define		PAGEFMT_8K_16K		(3)
+#define		PAGEFMT_PAGE_MASK	GENMASK(1, 0)
+#define		PAGEFMT_SEC_SEL_512	BIT(2)
+#define		PAGEFMT_FDM_SHIFT	(8)
+#define		PAGEFMT_FDM_ECC_SHIFT	(12)
+#define		PAGEFMT_SPARE_SHIFT	(16)
+#define		PAGEFMT_SPARE_MASK	GENMASK(21, 16)
+#define NFI_CON			(0x08)
+#define		CON_FIFO_FLUSH		BIT(0)
+#define		CON_NFI_RST		BIT(1)
+#define		CON_BRD			BIT(8)
+#define		CON_BWR			BIT(9)
+#define		CON_SEC_SHIFT		(12)
+#define		CON_SEC_MASK		GENMASK(16, 12)
+#define NFI_INTR_EN		(0x10)
+#define		INTR_CUS_PROG_EN	BIT(7)
+#define		INTR_CUS_READ_EN	BIT(8)
+#define		INTR_IRQ_EN		BIT(31)
+#define NFI_INTR_STA		(0x14)
+#define NFI_CMD			(0x20)
+#define		CMD_DUMMY		(0x00)
+#define NFI_STRDATA		(0x40)
+#define		STAR_EN			BIT(0)
+#define NFI_STA			(0x60)
+#define		NFI_FSM_MASK		GENMASK(19, 16)
+#define		STA_EMP_PAGE		BIT(12)
+#define NFI_ADDRCNTR		(0x70)
+#define		CNTR_MASK		GENMASK(16, 12)
+#define		ADDRCNTR_SEC_SHIFT	(12)
+#define		ADDRCNTR_SEC(val) \
+		(((val) & CNTR_MASK) >> ADDRCNTR_SEC_SHIFT)
+#define NFI_STRADDR		(0x80)
+#define NFI_BYTELEN		(0x84)
+#define NFI_FDML(x)		(0xA0 + (x) * sizeof(u32) * 2)
+#define NFI_FDMM(x)		(0xA4 + (x) * sizeof(u32) * 2)
+#define NFI_MASTERSTA		(0x224)
+#define		AHB_BUS_BUSY		GENMASK(1, 0)
+#define SNFI_MAC_CTL		(0x500)
+#define		MAC_WIP			BIT(0)
+#define		MAC_WIP_READY		BIT(1)
+#define		MAC_TRIG		BIT(2)
+#define		MAC_EN			BIT(3)
+#define		MAC_SIO_SEL		BIT(4)
+#define SNFI_MAC_OUTL		(0x504)
+#define SNFI_MAC_INL		(0x508)
+#define SNFI_RD_CTL2		(0x510)
+#define		RD_CMD_MASK		GENMASK(7, 0)
+#define		RD_DUMMY_SHIFT		(8)
+#define SNFI_RD_CTL3		(0x514)
+#define		RD_ADDR_MASK		GENMASK(16, 0)
+#define SNFI_PG_CTL1		(0x524)
+#define		WR_LOAD_CMD_MASK	GENMASK(15, 8)
+#define		WR_LOAD_CMD_SHIFT	(8)
+#define SNFI_PG_CTL2		(0x528)
+#define		WR_LOAD_ADDR_MASK	GENMASK(15, 0)
+#define SNFI_MISC_CTL		(0x538)
+#define		RD_CUSTOM_EN		BIT(6)
+#define		WR_CUSTOM_EN		BIT(7)
+#define		LATCH_LAT_SHIFT		(8)
+#define		LATCH_LAT_MASK		GENMASK(9, 8)
+#define		RD_MODE_X2		BIT(16)
+#define		RD_MODE_X4		BIT(17)
+#define		RD_MODE_DQUAL		BIT(18)
+#define		RD_MODE_MASK		GENMASK(18, 16)
+#define		WR_X4_EN		BIT(20)
+#define		SW_RST			BIT(28)
+#define SNFI_MISC_CTL2		(0x53c)
+#define		WR_LEN_SHIFT		(16)
+#define SNFI_DLY_CTL3		(0x548)
+#define		SAM_DLY_MASK		GENMASK(5, 0)
+#define SNFI_STA_CTL1		(0x550)
+#define		SPI_STATE		GENMASK(3, 0)
+#define		CUS_READ_DONE		BIT(27)
+#define		CUS_PROG_DONE		BIT(28)
+#define SNFI_CNFG		(0x55c)
+#define		SNFI_MODE_EN		BIT(0)
+#define SNFI_GPRAM_DATA		(0x800)
+#define		SNFI_GPRAM_MAX_LEN	(160)
+
+#define MTK_SNFI_TIMEOUT		(500000)
+#define MTK_SNFI_RESET_TIMEOUT		(1000000)
+#define MTK_SNFI_AUTOSUSPEND_DELAY	(1000)
+#define KB(x)				((x) * 1024UL)
+
+struct mtk_snfi_caps {
+	u8 pageformat_spare_shift;
+};
+
+struct mtk_snfi {
+	struct device *dev;
+	struct completion done;
+	void __iomem *regs;
+	const struct mtk_snfi_caps *caps;
+
+	struct clk *nfi_clk;
+	struct clk *snfi_clk;
+	struct clk *hclk;
+
+	struct nand_ecc_engine *engine;
+
+	u32 sample_delay;
+	u32 read_latency;
+
+	void *tx_buf;
+	dma_addr_t dma_addr;
+};
+
+static struct mtk_ecc_engine *mtk_snfi_to_ecc_engine(struct mtk_snfi *snfi)
+{
+	return snfi->engine->priv;
+}
+
+static void mtk_snfi_mac_enable(struct mtk_snfi *snfi)
+{
+	u32 val;
+
+	val = readl(snfi->regs + SNFI_MAC_CTL);
+	val &= ~MAC_SIO_SEL;
+	val |= MAC_EN;
+
+	writel(val, snfi->regs + SNFI_MAC_CTL);
+}
+
+static int mtk_snfi_mac_trigger(struct mtk_snfi *snfi)
+{
+	int ret;
+	u32 val;
+
+	val = readl(snfi->regs + SNFI_MAC_CTL);
+	val |= MAC_TRIG;
+	writel(val, snfi->regs + SNFI_MAC_CTL);
+
+	ret = readl_poll_timeout_atomic(snfi->regs + SNFI_MAC_CTL,
+					val, val & MAC_WIP_READY,
+					0, MTK_SNFI_TIMEOUT);
+	if (ret < 0) {
+		dev_err(snfi->dev, "wait for wip ready timeout\n");
+		return -EIO;
+	}
+
+	ret = readl_poll_timeout_atomic(snfi->regs + SNFI_MAC_CTL,
+					val, !(val & MAC_WIP), 0,
+					MTK_SNFI_TIMEOUT);
+	if (ret < 0) {
+		dev_err(snfi->dev, "command write timeout\n");
+		return -EIO;
+	}
+
+	return 0;
+}
+
+static void mtk_snfi_mac_disable(struct mtk_snfi *snfi)
+{
+	u32 val;
+
+	val = readl(snfi->regs + SNFI_MAC_CTL);
+	val &= ~(MAC_TRIG | MAC_EN);
+	writel(val, snfi->regs + SNFI_MAC_CTL);
+}
+
+static int mtk_snfi_mac_op(struct mtk_snfi *snfi)
+{
+	int ret;
+
+	mtk_snfi_mac_enable(snfi);
+	ret = mtk_snfi_mac_trigger(snfi);
+	mtk_snfi_mac_disable(snfi);
+
+	return ret;
+}
+
+static inline void mtk_snfi_read_oob_free(struct mtk_snfi *snfi,
+					  const struct spi_mem_op *op)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	u8 *oobptr = op->data.buf.in;
+	u32 vall, valm;
+	int i, j;
+
+	oobptr += eng->section_size * eng->nsteps;
+	for (i = 0; i < eng->nsteps; i++) {
+		vall = readl(snfi->regs + NFI_FDML(i));
+		valm = readl(snfi->regs + NFI_FDMM(i));
+
+		for (j = 0; j < eng->oob_free; j++)
+			oobptr[j] = (j >= 4 ? valm : vall) >> ((j % 4) * 8);
+
+		oobptr += eng->oob_free;
+	}
+}
+
+static inline void mtk_snfi_write_oob_free(struct mtk_snfi *snfi,
+					   const struct spi_mem_op *op)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	const u8 *oobptr = op->data.buf.out;
+	u32 vall, valm;
+	int i, j;
+
+	oobptr += eng->section_size * eng->nsteps;
+	for (i = 0; i < eng->nsteps; i++) {
+		vall = 0;
+		valm = 0;
+		for (j = 0; j < 8; j++) {
+			if (j < 4)
+				vall |= (j < eng->oob_free ? oobptr[j] : 0xff)
+					<< (j * 8);
+			else
+				valm |= (j < eng->oob_free ? oobptr[j] : 0xff)
+					<< ((j - 4) * 8);
+		}
+
+		writel(vall, snfi->regs + NFI_FDML(i));
+		writel(valm, snfi->regs + NFI_FDMM(i));
+		oobptr += eng->oob_free;
+	}
+}
+
+static irqreturn_t mtk_snfi_irq(int irq, void *id)
+{
+	struct mtk_snfi *snfi = id;
+	u32 sta, ien;
+
+	sta = readl(snfi->regs + NFI_INTR_STA);
+	ien = readl(snfi->regs + NFI_INTR_EN);
+
+	if (!(sta & ien))
+		return IRQ_NONE;
+
+	writel(0, snfi->regs + NFI_INTR_EN);
+	complete(&snfi->done);
+
+	return IRQ_HANDLED;
+}
+
+static int mtk_snfi_enable_clk(struct device *dev, struct mtk_snfi *snfi)
+{
+	int ret;
+
+	ret = clk_prepare_enable(snfi->nfi_clk);
+	if (ret) {
+		dev_err(dev, "failed to enable nfi clk\n");
+		return ret;
+	}
+
+	ret = clk_prepare_enable(snfi->snfi_clk);
+	if (ret) {
+		dev_err(dev, "failed to enable snfi clk\n");
+		clk_disable_unprepare(snfi->nfi_clk);
+		return ret;
+	}
+
+	ret = clk_prepare_enable(snfi->hclk);
+	if (ret) {
+		dev_err(dev, "failed to enable hclk\n");
+		clk_disable_unprepare(snfi->nfi_clk);
+		clk_disable_unprepare(snfi->snfi_clk);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void mtk_snfi_disable_clk(struct mtk_snfi *snfi)
+{
+	clk_disable_unprepare(snfi->hclk);
+	clk_disable_unprepare(snfi->snfi_clk);
+	clk_disable_unprepare(snfi->nfi_clk);
+}
+
+static int mtk_snfi_reset(struct mtk_snfi *snfi)
+{
+	u32 val;
+	int ret;
+
+	val = readl(snfi->regs + SNFI_MISC_CTL) | SW_RST;
+	writel(val, snfi->regs + SNFI_MISC_CTL);
+
+	ret = readw_poll_timeout(snfi->regs + SNFI_STA_CTL1, val,
+				 !(val & SPI_STATE), 0,
+				 MTK_SNFI_RESET_TIMEOUT);
+	if (ret) {
+		dev_warn(snfi->dev, "wait spi idle timeout 0x%x\n", val);
+		return ret;
+	}
+
+	val = readl(snfi->regs + SNFI_MISC_CTL);
+	val &= ~SW_RST;
+	writel(val, snfi->regs + SNFI_MISC_CTL);
+
+	writew(CON_FIFO_FLUSH | CON_NFI_RST, snfi->regs + NFI_CON);
+	ret = readw_poll_timeout(snfi->regs + NFI_STA, val,
+				 !(val & NFI_FSM_MASK), 0,
+				 MTK_SNFI_RESET_TIMEOUT);
+	if (ret) {
+		dev_warn(snfi->dev, "wait nfi fsm idle timeout 0x%x\n", val);
+		return ret;
+	}
+
+	val = readl(snfi->regs + NFI_STRDATA);
+	val &= ~STAR_EN;
+	writew(val, snfi->regs + NFI_STRDATA);
+
+	return 0;
+}
+
+static int mtk_snfi_init(struct mtk_snfi *snfi)
+{
+	int ret;
+	u32 val;
+
+	ret = mtk_snfi_reset(snfi);
+	if (ret)
+		return ret;
+
+	writel(SNFI_MODE_EN, snfi->regs + SNFI_CNFG);
+
+	if (snfi->sample_delay) {
+		val = readl(snfi->regs + SNFI_DLY_CTL3);
+		val &= ~SAM_DLY_MASK;
+		val |= snfi->sample_delay;
+		writel(val, snfi->regs + SNFI_DLY_CTL3);
+	}
+
+	if (snfi->read_latency) {
+		val = readl(snfi->regs + SNFI_MISC_CTL);
+		val &= ~LATCH_LAT_MASK;
+		val |= (snfi->read_latency << LATCH_LAT_SHIFT);
+		writel(val, snfi->regs + SNFI_MISC_CTL);
+	}
+
+	return 0;
+}
+
+static void mtk_snfi_prepare_for_tx(struct mtk_snfi *snfi,
+				    const struct spi_mem_op *op)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	u32 val;
+
+	val = readl(snfi->regs + SNFI_PG_CTL1);
+	val &= ~WR_LOAD_CMD_MASK;
+	val |= op->cmd.opcode << WR_LOAD_CMD_SHIFT;
+	writel(val, snfi->regs + SNFI_PG_CTL1);
+
+	writel(op->addr.val & WR_LOAD_ADDR_MASK,
+	       snfi->regs + SNFI_PG_CTL2);
+
+	val = readl(snfi->regs + SNFI_MISC_CTL);
+	val |= WR_CUSTOM_EN;
+	if (op->data.buswidth == 4)
+		val |= WR_X4_EN;
+	writel(val, snfi->regs + SNFI_MISC_CTL);
+
+	val = eng->nsteps * (eng->oob_per_section + eng->section_size);
+	writel(val << WR_LEN_SHIFT, snfi->regs + SNFI_MISC_CTL2);
+
+	writel(INTR_CUS_PROG_EN | INTR_IRQ_EN, snfi->regs + NFI_INTR_EN);
+}
+
+static void mtk_snfi_prepare_for_rx(struct mtk_snfi *snfi,
+				    const struct spi_mem_op *op)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	u32 val, dummy_cycle;
+
+	dummy_cycle = (op->dummy.nbytes << 3) >>
+			(ffs(op->dummy.buswidth) - 1);
+	val = (op->cmd.opcode & RD_CMD_MASK) |
+		  (dummy_cycle << RD_DUMMY_SHIFT);
+	writel(val, snfi->regs + SNFI_RD_CTL2);
+
+	writel(op->addr.val & RD_ADDR_MASK,
+	       snfi->regs + SNFI_RD_CTL3);
+
+	val = readl(snfi->regs + SNFI_MISC_CTL);
+	val |= RD_CUSTOM_EN;
+	val &= ~RD_MODE_MASK;
+	if (op->data.buswidth == 4)
+		val |= RD_MODE_X4;
+	else if (op->data.buswidth == 2)
+		val |= RD_MODE_X2;
+
+	if (op->addr.buswidth != 1)
+		val |= RD_MODE_DQUAL;
+
+	writel(val, snfi->regs + SNFI_MISC_CTL);
+
+	val = eng->nsteps * (eng->oob_per_section + eng->section_size);
+	writel(val, snfi->regs + SNFI_MISC_CTL2);
+
+	writel(INTR_CUS_READ_EN | INTR_IRQ_EN, snfi->regs + NFI_INTR_EN);
+}
+
+static int mtk_snfi_prepare(struct mtk_snfi *snfi,
+			    const struct spi_mem_op *op, bool rx)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	dma_addr_t addr;
+	int ret;
+	u32 val;
+
+	addr = dma_map_single(snfi->dev,
+			      op->data.buf.in, op->data.nbytes,
+			      rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	ret = dma_mapping_error(snfi->dev, addr);
+	if (ret) {
+		dev_err(snfi->dev, "dma mapping error\n");
+		return -EINVAL;
+	}
+
+	snfi->dma_addr = addr;
+	writel(lower_32_bits(addr), snfi->regs + NFI_STRADDR);
+
+	if (op->ecc_en && !rx)
+		mtk_snfi_write_oob_free(snfi, op);
+
+	val = readw(snfi->regs + NFI_CNFG);
+	val |= CNFG_DMA | CNFG_DMA_BURST_EN | CNFG_OP_CUST;
+	val |= rx ? CNFG_READ_EN : 0;
+
+	if (op->ecc_en)
+		val |= CNFG_HW_ECC_EN | CNFG_AUTO_FMT_EN;
+
+	writew(val, snfi->regs + NFI_CNFG);
+
+	writel(eng->nsteps << CON_SEC_SHIFT, snfi->regs + NFI_CON);
+
+	init_completion(&snfi->done);
+
+	/* trigger state machine to custom op mode */
+	writel(CMD_DUMMY, snfi->regs + NFI_CMD);
+
+	if (rx)
+		mtk_snfi_prepare_for_rx(snfi, op);
+	else
+		mtk_snfi_prepare_for_tx(snfi, op);
+
+	return 0;
+}
+
+static void mtk_snfi_trigger(struct mtk_snfi *snfi,
+			     const struct spi_mem_op *op, bool rx)
+{
+	u32 val;
+
+	val = readl(snfi->regs + NFI_CON);
+	val |= rx ? CON_BRD : CON_BWR;
+	writew(val, snfi->regs + NFI_CON);
+
+	writew(STAR_EN, snfi->regs + NFI_STRDATA);
+}
+
+static int mtk_snfi_wait_done(struct mtk_snfi *snfi,
+			      const struct spi_mem_op *op, bool rx)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	struct device *dev = snfi->dev;
+	u32 val;
+	int ret;
+
+	ret = wait_for_completion_timeout(&snfi->done, msecs_to_jiffies(500));
+	if (!ret) {
+		dev_err(dev, "wait for %s completion timeout\n",
+			rx ? "read" : "write");
+		return -ETIMEDOUT;
+	}
+
+	if (rx) {
+		ret = readl_poll_timeout_atomic(snfi->regs + NFI_BYTELEN,
+						val,
+						ADDRCNTR_SEC(val) >=
+						eng->nsteps,
+						0, MTK_SNFI_TIMEOUT);
+		if (ret) {
+			dev_err(dev, "wait for rx section count timeout\n");
+			return -ETIMEDOUT;
+		}
+
+		ret = readl_poll_timeout_atomic(snfi->regs + NFI_MASTERSTA,
+						val,
+						!(val & AHB_BUS_BUSY),
+						0, MTK_SNFI_TIMEOUT);
+		if (ret) {
+			dev_err(dev, "wait for bus busy timeout\n");
+			return -ETIMEDOUT;
+		}
+	} else {
+		ret = readl_poll_timeout_atomic(snfi->regs + NFI_ADDRCNTR,
+						val,
+						ADDRCNTR_SEC(val) >=
+						eng->nsteps,
+						0, MTK_SNFI_TIMEOUT);
+		if (ret) {
+			dev_err(dev, "wait for tx section count timeout\n");
+			return -ETIMEDOUT;
+		}
+	}
+
+	return 0;
+}
+
+static void mtk_snfi_complete(struct mtk_snfi *snfi,
+			      const struct spi_mem_op *op, bool rx)
+{
+	u32 val;
+
+	dma_unmap_single(snfi->dev,
+			 snfi->dma_addr, op->data.nbytes,
+			 rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE);
+
+	if (op->ecc_en && rx)
+		mtk_snfi_read_oob_free(snfi, op);
+
+	val = readl(snfi->regs + SNFI_MISC_CTL);
+	val &= rx ? ~RD_CUSTOM_EN : ~WR_CUSTOM_EN;
+	writel(val, snfi->regs + SNFI_MISC_CTL);
+
+	val = readl(snfi->regs + SNFI_STA_CTL1);
+	val |= rx ? CUS_READ_DONE : CUS_PROG_DONE;
+	writew(val, snfi->regs + SNFI_STA_CTL1);
+	val &= rx ? ~CUS_READ_DONE : ~CUS_PROG_DONE;
+	writew(val, snfi->regs + SNFI_STA_CTL1);
+
+	/* Disable interrupt */
+	val = readl(snfi->regs + NFI_INTR_EN);
+	val &= rx ? ~INTR_CUS_READ_EN : ~INTR_CUS_PROG_EN;
+	writew(val, snfi->regs + NFI_INTR_EN);
+
+	writew(0, snfi->regs + NFI_CNFG);
+	writew(0, snfi->regs + NFI_CON);
+}
+
+static int mtk_snfi_transfer_dma(struct mtk_snfi *snfi,
+				 const struct spi_mem_op *op, bool rx)
+{
+	int ret;
+
+	ret = mtk_snfi_prepare(snfi, op, rx);
+	if (ret)
+		return ret;
+
+	mtk_snfi_trigger(snfi, op, rx);
+
+	ret = mtk_snfi_wait_done(snfi, op, rx);
+
+	mtk_snfi_complete(snfi, op, rx);
+
+	return ret;
+}
+
+static int mtk_snfi_transfer_mac(struct mtk_snfi *snfi,
+				 const u8 *txbuf, u8 *rxbuf,
+				 const u32 txlen, const u32 rxlen)
+{
+	u32 i, j, val, tmp;
+	u8 *p_tmp = (u8 *)(&tmp);
+	u32 offset = 0;
+	int ret = 0;
+
+	/* Move tx data to gpram in snfi mac mode */
+	for (i = 0; i < txlen; ) {
+		for (j = 0, tmp = 0; i < txlen && j < 4; i++, j++)
+			p_tmp[j] = txbuf[i];
+
+		writel(tmp, snfi->regs + SNFI_GPRAM_DATA + offset);
+		offset += 4;
+	}
+
+	writel(txlen, snfi->regs + SNFI_MAC_OUTL);
+	writel(rxlen, snfi->regs + SNFI_MAC_INL);
+
+	ret = mtk_snfi_mac_op(snfi);
+	if (ret) {
+		dev_warn(snfi->dev, "snfi mac operation fail\n");
+		return ret;
+	}
+
+	/* Get rx data from gpram in snfi mac mode */
+	if (rxlen)
+		for (i = 0, offset = rounddown(txlen, 4); i < rxlen; ) {
+			val = readl(snfi->regs +
+				    SNFI_GPRAM_DATA + offset);
+			for (j = 0; i < rxlen && j < 4; i++, j++, rxbuf++) {
+				if (i == 0)
+					j = txlen % 4;
+				*rxbuf = (val >> (j * 8)) & 0xff;
+			}
+			offset += 4;
+		}
+
+	return ret;
+}
+
+static int mtk_snfi_exec_op(struct spi_mem *mem,
+			    const struct spi_mem_op *op)
+{
+	struct mtk_snfi *snfi = spi_controller_get_devdata(mem->spi->master);
+	u8 *buf, *txbuf = snfi->tx_buf, *rxbuf = NULL;
+	u32 txlen = 0, rxlen = 0;
+	int i, ret = 0;
+	bool rx;
+
+	rx = op->data.dir == SPI_MEM_DATA_IN;
+
+	ret = mtk_snfi_reset(snfi);
+	if (ret) {
+		dev_warn(snfi->dev, "snfi reset fail\n");
+		return ret;
+	}
+
+	/*
+	 * Use snfi DMA mode when the data buswidth is greater than 1.
+	 * Otherwise, fall back to snfi mac mode.
+	 */
+	if (op->data.buswidth != 1 && op->data.buswidth != 0) {
+		ret = mtk_snfi_transfer_dma(snfi, op, rx);
+		if (ret)
+			dev_warn(snfi->dev, "snfi dma transfer %d fail %d\n",
+				 rx, ret);
+		return ret;
+	}
+
+	txbuf[txlen++] = op->cmd.opcode;
+
+	if (op->addr.nbytes)
+		for (i = 0; i < op->addr.nbytes; i++)
+			txbuf[txlen++] = op->addr.val >>
+					(8 * (op->addr.nbytes - i - 1));
+
+	txlen += op->dummy.nbytes;
+
+	if (op->data.dir == SPI_MEM_DATA_OUT) {
+		buf = (u8 *)op->data.buf.out;
+		for (i = 0; i < op->data.nbytes; i++)
+			txbuf[txlen++] = buf[i];
+	}
+
+	if (op->data.dir == SPI_MEM_DATA_IN) {
+		rxbuf = (u8 *)op->data.buf.in;
+		rxlen = op->data.nbytes;
+	}
+
+	ret = mtk_snfi_transfer_mac(snfi, txbuf, rxbuf, txlen, rxlen);
+	if (ret)
+		dev_warn(snfi->dev, "snfi mac transfer %d fail %d\n",
+			 op->data.dir, ret);
+
+	return ret;
+}
+
+static int mtk_snfi_check_buswidth(u8 width)
+{
+	switch (width) {
+	case 1:
+	case 2:
+	case 4:
+		return 0;
+
+	default:
+		break;
+	}
+
+	return -EOPNOTSUPP;
+}
+
+static bool mtk_snfi_supports_op(struct spi_mem *mem,
+				 const struct spi_mem_op *op)
+{
+	int ret = 0;
+
+	if (!spi_mem_default_supports_op(mem, op))
+		return false;
+
+	if (op->cmd.buswidth != 1)
+		return false;
+
+	/*
+	 * Operations with a data buswidth of 0/1 are executed in snfi
+	 * mac mode. However, the HW ECC engine can not be used in mac
+	 * mode.
+	 */
+	if (op->ecc_en && op->data.buswidth == 1 &&
+	    op->data.nbytes >= SNFI_GPRAM_MAX_LEN)
+		return false;
+
+	switch (op->data.dir) {
+	case SPI_MEM_DATA_IN:
+		/*
+		 * For spi mem data in, can support 1/2/4 buswidth
+		 * for addr/dummy/data.
+		 */
+		if (op->addr.nbytes)
+			ret |= mtk_snfi_check_buswidth(op->addr.buswidth);
+
+		if (op->dummy.nbytes)
+			ret |= mtk_snfi_check_buswidth(op->dummy.buswidth);
+
+		if (op->data.nbytes)
+			ret |= mtk_snfi_check_buswidth(op->data.buswidth);
+
+		if (ret)
+			return false;
+
+		break;
+	case SPI_MEM_DATA_OUT:
+		/*
+		 * For spi mem data out, can support 0/1 buswidth
+		 * for addr/dummy and 1/4 buswidth for data.
+		 */
+		if (op->addr.buswidth != 0 && op->addr.buswidth != 1)
+			return false;
+
+		if (op->dummy.buswidth != 0 && op->dummy.buswidth != 1)
+			return false;
+
+		if (op->data.buswidth != 1 && op->data.buswidth != 4)
+			return false;
+
+		break;
+	default:
+		break;
+	}
+
+	return true;
+}
+
+static int mtk_snfi_adjust_op_size(struct spi_mem *mem,
+				   struct spi_mem_op *op)
+{
+	u32 len, max_len;
+
+	/*
+	 * Ops with a data buswidth of 0/1 go through the snfi mac mode
+	 * GPRAM, which limits the op size to SNFI_GPRAM_MAX_LEN bytes.
+	 * Otherwise, in DMA mode, the snfi supports up to 16KB.
+	 */
+	if (op->data.buswidth == 1 || op->data.buswidth == 0)
+		max_len = SNFI_GPRAM_MAX_LEN;
+	else
+		max_len = KB(16);
+
+	len = op->cmd.nbytes + op->addr.nbytes + op->dummy.nbytes;
+	if (len > max_len)
+		return -EOPNOTSUPP;
+
+	if ((len + op->data.nbytes) > max_len)
+		op->data.nbytes = max_len - len;
+
+	return 0;
+}
+
+static const struct mtk_snfi_caps mtk_snfi_caps_mt7622 = {
+	.pageformat_spare_shift = 16,
+};
+
+static const struct spi_controller_mem_ops mtk_snfi_ops = {
+	.adjust_op_size = mtk_snfi_adjust_op_size,
+	.supports_op = mtk_snfi_supports_op,
+	.exec_op = mtk_snfi_exec_op,
+};
+
+static const struct of_device_id mtk_snfi_id_table[] = {
+	{ .compatible = "mediatek,mt7622-snfi",
+	  .data = &mtk_snfi_caps_mt7622,
+	},
+	{  /* sentinel */ }
+};
+
+/* ECC wrapper */
+static struct mtk_snfi *mtk_nand_to_spi(struct nand_device *nand)
+{
+	struct device *dev = nand->ecc.engine->dev;
+	struct spi_master *master = dev_get_drvdata(dev);
+	struct mtk_snfi *snfi = spi_master_get_devdata(master);
+
+	return snfi;
+}
+
+static int mtk_snfi_config(struct nand_device *nand,
+			   struct mtk_snfi *snfi)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	u32 val;
+
+	switch (nanddev_page_size(nand)) {
+	case 512:
+		val = PAGEFMT_512_2K | PAGEFMT_SEC_SEL_512;
+		break;
+	case KB(2):
+		if (eng->section_size == 512)
+			val = PAGEFMT_2K_4K | PAGEFMT_SEC_SEL_512;
+		else
+			val = PAGEFMT_512_2K;
+		break;
+	case KB(4):
+		if (eng->section_size == 512)
+			val = PAGEFMT_4K_8K | PAGEFMT_SEC_SEL_512;
+		else
+			val = PAGEFMT_2K_4K;
+		break;
+	case KB(8):
+		if (eng->section_size == 512)
+			val = PAGEFMT_8K_16K | PAGEFMT_SEC_SEL_512;
+		else
+			val = PAGEFMT_4K_8K;
+		break;
+	case KB(16):
+		val = PAGEFMT_8K_16K;
+		break;
+	default:
+		dev_err(snfi->dev, "invalid page size: %d\n",
+			nanddev_page_size(nand));
+		return -EINVAL;
+	}
+
+	val |= eng->oob_per_section_idx << PAGEFMT_SPARE_SHIFT;
+	val |= eng->oob_free << PAGEFMT_FDM_SHIFT;
+	val |= eng->oob_free_protected << PAGEFMT_FDM_ECC_SHIFT;
+	writel(val, snfi->regs + NFI_PAGEFMT);
+
+	return 0;
+}
+
+static int mtk_snfi_ecc_init_ctx(struct nand_device *nand)
+{
+	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
+
+	return ops->init_ctx(nand);
+}
+
+static void mtk_snfi_ecc_cleanup_ctx(struct nand_device *nand)
+{
+	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
+
+	ops->cleanup_ctx(nand);
+}
+
+static int mtk_snfi_ecc_prepare_io_req(struct nand_device *nand,
+				       struct nand_page_io_req *req)
+{
+	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
+	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
+	int ret;
+
+	ret = mtk_snfi_config(nand, snfi);
+	if (ret)
+		return ret;
+
+	return ops->prepare_io_req(nand, req);
+}
+
+static int mtk_snfi_ecc_finish_io_req(struct nand_device *nand,
+				      struct nand_page_io_req *req)
+{
+	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
+
+	if (req->mode != MTD_OPS_RAW)
+		eng->read_empty = readl(snfi->regs + NFI_STA) & STA_EMP_PAGE;
+
+	return ops->finish_io_req(nand, req);
+}
+
+static struct nand_ecc_engine_ops mtk_snfi_ecc_engine_pipelined_ops = {
+	.init_ctx = mtk_snfi_ecc_init_ctx,
+	.cleanup_ctx = mtk_snfi_ecc_cleanup_ctx,
+	.prepare_io_req = mtk_snfi_ecc_prepare_io_req,
+	.finish_io_req = mtk_snfi_ecc_finish_io_req,
+};
+
+static int mtk_snfi_ecc_probe(struct platform_device *pdev,
+			      struct mtk_snfi *snfi)
+{
+	struct nand_ecc_engine *ecceng;
+
+	if (!mtk_ecc_get_pipelined_ops())
+		return -EOPNOTSUPP;
+
+	ecceng = devm_kzalloc(&pdev->dev, sizeof(*ecceng), GFP_KERNEL);
+	if (!ecceng)
+		return -ENOMEM;
+
+	ecceng->dev = &pdev->dev;
+	ecceng->ops = &mtk_snfi_ecc_engine_pipelined_ops;
+
+	nand_ecc_register_on_host_hw_engine(ecceng);
+
+	snfi->engine = ecceng;
+
+	return 0;
+}
+
+static int mtk_snfi_probe(struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct spi_controller *ctlr;
+	struct mtk_snfi *snfi;
+	struct resource *res;
+	int ret, irq;
+	u32 val = 0;
+
+	ctlr = spi_alloc_master(&pdev->dev, sizeof(*snfi));
+	if (!ctlr)
+		return -ENOMEM;
+
+	snfi = spi_controller_get_devdata(ctlr);
+	snfi->dev = &pdev->dev;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	snfi->regs = devm_ioremap_resource(snfi->dev, res);
+	if (IS_ERR(snfi->regs)) {
+		ret = PTR_ERR(snfi->regs);
+		goto err_put_master;
+	}
+
+	ret = of_property_read_u32(np, "sample-delay", &val);
+	if (!ret)
+		snfi->sample_delay = val;
+
+	ret = of_property_read_u32(np, "read-latency", &val);
+	if (!ret)
+		snfi->read_latency = val;
+
+	snfi->nfi_clk = devm_clk_get(snfi->dev, "nfi_clk");
+	if (IS_ERR(snfi->nfi_clk)) {
+		dev_err(snfi->dev, "not found nfi clk\n");
+		ret = PTR_ERR(snfi->nfi_clk);
+		goto err_put_master;
+	}
+
+	snfi->snfi_clk = devm_clk_get(snfi->dev, "snfi_clk");
+	if (IS_ERR(snfi->snfi_clk)) {
+		dev_err(snfi->dev, "not found snfi clk\n");
+		ret = PTR_ERR(snfi->snfi_clk);
+		goto err_put_master;
+	}
+
+	snfi->hclk = devm_clk_get(snfi->dev, "hclk");
+	if (IS_ERR(snfi->hclk)) {
+		dev_err(snfi->dev, "not found hclk\n");
+		ret = PTR_ERR(snfi->hclk);
+		goto err_put_master;
+	}
+
+	ret = mtk_snfi_enable_clk(snfi->dev, snfi);
+	if (ret)
+		goto err_put_master;
+
+	snfi->caps = of_device_get_match_data(snfi->dev);
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(snfi->dev, "not found snfi irq resource\n");
+		ret = -EINVAL;
+		goto clk_disable;
+	}
+
+	ret = devm_request_irq(snfi->dev, irq, mtk_snfi_irq,
+			       0, "mtk-snfi", snfi);
+	if (ret) {
+		dev_err(snfi->dev, "failed to request snfi irq\n");
+		goto clk_disable;
+	}
+
+	ret = dma_set_mask(snfi->dev, DMA_BIT_MASK(32));
+	if (ret) {
+		dev_err(snfi->dev, "failed to set dma mask\n");
+		goto clk_disable;
+	}
+
+	snfi->tx_buf = kzalloc(SNFI_GPRAM_MAX_LEN, GFP_KERNEL);
+	if (!snfi->tx_buf) {
+		ret = -ENOMEM;
+		goto clk_disable;
+	}
+
+	ctlr->dev.of_node = np;
+	ctlr->mem_ops = &mtk_snfi_ops;
+	ctlr->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_QUAD;
+	ctlr->auto_runtime_pm = true;
+
+	dev_set_drvdata(snfi->dev, ctlr);
+
+	ret = mtk_snfi_init(snfi);
+	if (ret) {
+		dev_err(snfi->dev, "failed to init snfi\n");
+		goto free_buf;
+	}
+
+	ret = mtk_snfi_ecc_probe(pdev, snfi);
+	if (ret) {
+		dev_warn(snfi->dev, "ECC engine not available\n");
+		goto free_buf;
+	}
+
+	pm_runtime_enable(snfi->dev);
+
+	ret = devm_spi_register_master(snfi->dev, ctlr);
+	if (ret) {
+		dev_err(snfi->dev, "failed to register spi master\n");
+		goto disable_pm_runtime;
+	}
+
+	return 0;
+
+disable_pm_runtime:
+	pm_runtime_disable(snfi->dev);
+
+free_buf:
+	kfree(snfi->tx_buf);
+
+clk_disable:
+	mtk_snfi_disable_clk(snfi);
+
+err_put_master:
+	spi_master_put(ctlr);
+
+	return ret;
+}
+
+static int mtk_snfi_remove(struct platform_device *pdev)
+{
+	struct spi_controller *ctlr = dev_get_drvdata(&pdev->dev);
+	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
+	struct nand_ecc_engine *eng = snfi->engine;
+
+	pm_runtime_disable(snfi->dev);
+	nand_ecc_unregister_on_host_hw_engine(eng);
+	kfree(snfi->tx_buf);
+	spi_master_put(ctlr);
+
+	return 0;
+}
+
+#ifdef CONFIG_PM
+static int mtk_snfi_runtime_suspend(struct device *dev)
+{
+	struct spi_controller *ctlr = dev_get_drvdata(dev);
+	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
+
+	mtk_snfi_disable_clk(snfi);
+
+	return 0;
+}
+
+static int mtk_snfi_runtime_resume(struct device *dev)
+{
+	struct spi_controller *ctlr = dev_get_drvdata(dev);
+	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
+	int ret;
+
+	ret = mtk_snfi_enable_clk(dev, snfi);
+	if (ret)
+		return ret;
+
+	ret = mtk_snfi_init(snfi);
+	if (ret)
+		dev_err(dev, "failed to init snfi\n");
+
+	return ret;
+}
+#endif /* CONFIG_PM */
+
+static const struct dev_pm_ops mtk_snfi_pm_ops = {
+	SET_RUNTIME_PM_OPS(mtk_snfi_runtime_suspend,
+			   mtk_snfi_runtime_resume, NULL)
+	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+				pm_runtime_force_resume)
+};
+
+static struct platform_driver mtk_snfi_driver = {
+	.driver = {
+		.name	= "mtk-snfi",
+		.of_match_table = mtk_snfi_id_table,
+		.pm = &mtk_snfi_pm_ops,
+	},
+	.probe		= mtk_snfi_probe,
+	.remove		= mtk_snfi_remove,
+};
+
+module_platform_driver(mtk_snfi_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Xiangsheng Hou <xiangsheng.hou@mediatek.com>");
+MODULE_DESCRIPTION("MediaTek SPI NAND Flash interface driver");
-- 
2.25.1


______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/


* [RFC,v4,3/5] spi: mtk: Add mediatek SPI Nand Flash interface driver
@ 2021-11-30  8:32   ` Xiangsheng Hou
  0 siblings, 0 replies; 36+ messages in thread
From: Xiangsheng Hou @ 2021-11-30  8:32 UTC (permalink / raw)
  To: miquel.raynal, broonie
  Cc: xiangsheng.hou, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

The SPI NAND Flash interface driver works together with the MediaTek
pipelined HW ECC engine.

Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
---
 drivers/spi/Kconfig        |   11 +
 drivers/spi/Makefile       |    1 +
 drivers/spi/spi-mtk-snfi.c | 1117 ++++++++++++++++++++++++++++++++++++
 3 files changed, 1129 insertions(+)
 create mode 100644 drivers/spi/spi-mtk-snfi.c

diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
index 596705d24400..9cb6a173b1ef 100644
--- a/drivers/spi/Kconfig
+++ b/drivers/spi/Kconfig
@@ -535,6 +535,17 @@ config SPI_MT65XX
 	  say Y or M here.If you are not sure, say N.
 	  SPI drivers for Mediatek MT65XX and MT81XX series ARM SoCs.
 
+config SPI_MTK_SNFI
+	tristate "MediaTek SPI NAND interface"
+	depends on MTD
+	select MTD_SPI_NAND
+	select MTD_NAND_ECC_MTK
+	help
+	  This selects the SPI NAND Flash interface (SNFI),
+	  which can be found on MediaTek SoCs.
+	  Say Y or M here. If you are not sure, say N.
+	  Note that parallel NAND and SPI NAND are mutually exclusive on MediaTek SoCs.
+
 config SPI_MT7621
 	tristate "MediaTek MT7621 SPI Controller"
 	depends on RALINK || COMPILE_TEST
diff --git a/drivers/spi/Makefile b/drivers/spi/Makefile
index dd7393a6046f..57d11eecf662 100644
--- a/drivers/spi/Makefile
+++ b/drivers/spi/Makefile
@@ -73,6 +73,7 @@ obj-$(CONFIG_SPI_MPC52xx)		+= spi-mpc52xx.o
 obj-$(CONFIG_SPI_MT65XX)                += spi-mt65xx.o
 obj-$(CONFIG_SPI_MT7621)		+= spi-mt7621.o
 obj-$(CONFIG_SPI_MTK_NOR)		+= spi-mtk-nor.o
+obj-$(CONFIG_SPI_MTK_SNFI)              += spi-mtk-snfi.o
 obj-$(CONFIG_SPI_MXIC)			+= spi-mxic.o
 obj-$(CONFIG_SPI_MXS)			+= spi-mxs.o
 obj-$(CONFIG_SPI_NPCM_FIU)		+= spi-npcm-fiu.o
diff --git a/drivers/spi/spi-mtk-snfi.c b/drivers/spi/spi-mtk-snfi.c
new file mode 100644
index 000000000000..b4dce6d78176
--- /dev/null
+++ b/drivers/spi/spi-mtk-snfi.c
@@ -0,0 +1,1117 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Driver for MediaTek SPI NAND Flash interface
+ *
+ * Copyright (C) 2021 MediaTek Inc.
+ * Authors:	Xiangsheng Hou	<xiangsheng.hou@mediatek.com>
+ */
+
+#include <linux/clk.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/nand-ecc-mtk.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/spi/spi.h>
+#include <linux/spi/spi-mem.h>
+
+	u32 sta, ien;
+
+	sta = readl(snfi->regs + NFI_INTR_STA);
+	ien = readl(snfi->regs + NFI_INTR_EN);
+
+	if (!(sta & ien))
+		return IRQ_NONE;
+
+	writel(0, snfi->regs + NFI_INTR_EN);
+	complete(&snfi->done);
+
+	return IRQ_HANDLED;
+}
+
+static int mtk_snfi_enable_clk(struct device *dev, struct mtk_snfi *snfi)
+{
+	int ret;
+
+	ret = clk_prepare_enable(snfi->nfi_clk);
+	if (ret) {
+		dev_err(dev, "failed to enable nfi clk\n");
+		return ret;
+	}
+
+	ret = clk_prepare_enable(snfi->snfi_clk);
+	if (ret) {
+		dev_err(dev, "failed to enable snfi clk\n");
+		clk_disable_unprepare(snfi->nfi_clk);
+		return ret;
+	}
+
+	ret = clk_prepare_enable(snfi->hclk);
+	if (ret) {
+		dev_err(dev, "failed to enable hclk\n");
+		clk_disable_unprepare(snfi->nfi_clk);
+		clk_disable_unprepare(snfi->snfi_clk);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void mtk_snfi_disable_clk(struct mtk_snfi *snfi)
+{
+	clk_disable_unprepare(snfi->nfi_clk);
+	clk_disable_unprepare(snfi->snfi_clk);
+	clk_disable_unprepare(snfi->hclk);
+}
+
+static int mtk_snfi_reset(struct mtk_snfi *snfi)
+{
+	u32 val;
+	int ret;
+
+	val = readl(snfi->regs + SNFI_MISC_CTL) | SW_RST;
+	writel(val, snfi->regs + SNFI_MISC_CTL);
+
+	ret = readw_poll_timeout(snfi->regs + SNFI_STA_CTL1, val,
+				 !(val & SPI_STATE), 0,
+				 MTK_SNFI_RESET_TIMEOUT);
+	if (ret) {
+		dev_warn(snfi->dev, "wait spi idle timeout 0x%x\n", val);
+		return ret;
+	}
+
+	val = readl(snfi->regs + SNFI_MISC_CTL);
+	val &= ~SW_RST;
+	writel(val, snfi->regs + SNFI_MISC_CTL);
+
+	writew(CON_FIFO_FLUSH | CON_NFI_RST, snfi->regs + NFI_CON);
+	ret = readw_poll_timeout(snfi->regs + NFI_STA, val,
+				 !(val & NFI_FSM_MASK), 0,
+				 MTK_SNFI_RESET_TIMEOUT);
+	if (ret) {
+		dev_warn(snfi->dev, "wait nfi fsm idle timeout 0x%x\n", val);
+		return ret;
+	}
+
+	val = readl(snfi->regs + NFI_STRDATA);
+	val &= ~STAR_EN;
+	writew(val, snfi->regs + NFI_STRDATA);
+
+	return 0;
+}
+
+static int mtk_snfi_init(struct mtk_snfi *snfi)
+{
+	int ret;
+	u32 val;
+
+	ret = mtk_snfi_reset(snfi);
+	if (ret)
+		return ret;
+
+	writel(SNFI_MODE_EN, snfi->regs + SNFI_CNFG);
+
+	if (snfi->sample_delay) {
+		val = readl(snfi->regs + SNFI_DLY_CTL3);
+		val &= ~SAM_DLY_MASK;
+		val |= snfi->sample_delay;
+		writel(val, snfi->regs + SNFI_DLY_CTL3);
+	}
+
+	if (snfi->read_latency) {
+		val = readl(snfi->regs + SNFI_MISC_CTL);
+		val &= ~LATCH_LAT_MASK;
+		val |= (snfi->read_latency << LATCH_LAT_SHIFT);
+		writel(val, snfi->regs + SNFI_MISC_CTL);
+	}
+
+	return 0;
+}
+
+static void mtk_snfi_prepare_for_tx(struct mtk_snfi *snfi,
+				    const struct spi_mem_op *op)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	u32 val;
+
+	val = readl(snfi->regs + SNFI_PG_CTL1);
+	val &= ~WR_LOAD_CMD_MASK;
+	val |= op->cmd.opcode << WR_LOAD_CMD_SHIFT;
+	writel(val, snfi->regs + SNFI_PG_CTL1);
+
+	writel(op->addr.val & WR_LOAD_ADDR_MASK,
+	       snfi->regs + SNFI_PG_CTL2);
+
+	val = readl(snfi->regs + SNFI_MISC_CTL);
+	val |= WR_CUSTOM_EN;
+	if (op->data.buswidth == 4)
+		val |= WR_X4_EN;
+	writel(val, snfi->regs + SNFI_MISC_CTL);
+
+	val = eng->nsteps * (eng->oob_per_section + eng->section_size);
+	writel(val << WR_LEN_SHIFT, snfi->regs + SNFI_MISC_CTL2);
+
+	writel(INTR_CUS_PROG_EN | INTR_IRQ_EN, snfi->regs + NFI_INTR_EN);
+}
+
+static void mtk_snfi_prepare_for_rx(struct mtk_snfi *snfi,
+				    const struct spi_mem_op *op)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	u32 val, dummy_cycle;
+
+	dummy_cycle = (op->dummy.nbytes << 3) >>
+			(ffs(op->dummy.buswidth) - 1);
+	val = (op->cmd.opcode & RD_CMD_MASK) |
+		  (dummy_cycle << RD_DUMMY_SHIFT);
+	writel(val, snfi->regs + SNFI_RD_CTL2);
+
+	writel(op->addr.val & RD_ADDR_MASK,
+	       snfi->regs + SNFI_RD_CTL3);
+
+	val = readl(snfi->regs + SNFI_MISC_CTL);
+	val |= RD_CUSTOM_EN;
+	val &= ~RD_MODE_MASK;
+	if (op->data.buswidth == 4)
+		val |= RD_MODE_X4;
+	else if (op->data.buswidth == 2)
+		val |= RD_MODE_X2;
+
+	if (op->addr.buswidth != 1)
+		val |= RD_MODE_DQUAL;
+
+	writel(val, snfi->regs + SNFI_MISC_CTL);
+
+	val = eng->nsteps * (eng->oob_per_section + eng->section_size);
+	writel(val, snfi->regs + SNFI_MISC_CTL2);
+
+	writel(INTR_CUS_READ_EN | INTR_IRQ_EN, snfi->regs + NFI_INTR_EN);
+}
+
+static int mtk_snfi_prepare(struct mtk_snfi *snfi,
+			    const struct spi_mem_op *op, bool rx)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	dma_addr_t addr;
+	int ret;
+	u32 val;
+
+	addr = dma_map_single(snfi->dev,
+			      op->data.buf.in, op->data.nbytes,
+			      rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	ret = dma_mapping_error(snfi->dev, addr);
+	if (ret) {
+		dev_err(snfi->dev, "dma mapping error\n");
+		return -EINVAL;
+	}
+
+	snfi->dma_addr = addr;
+	writel(lower_32_bits(addr), snfi->regs + NFI_STRADDR);
+
+	if (op->ecc_en && !rx)
+		mtk_snfi_write_oob_free(snfi, op);
+
+	val = readw(snfi->regs + NFI_CNFG);
+	val |= CNFG_DMA | CNFG_DMA_BURST_EN | CNFG_OP_CUST;
+	val |= rx ? CNFG_READ_EN : 0;
+
+	if (op->ecc_en)
+		val |= CNFG_HW_ECC_EN | CNFG_AUTO_FMT_EN;
+
+	writew(val, snfi->regs + NFI_CNFG);
+
+	writel(eng->nsteps << CON_SEC_SHIFT, snfi->regs + NFI_CON);
+
+	init_completion(&snfi->done);
+
+	/* trigger state machine to custom op mode */
+	writel(CMD_DUMMY, snfi->regs + NFI_CMD);
+
+	if (rx)
+		mtk_snfi_prepare_for_rx(snfi, op);
+	else
+		mtk_snfi_prepare_for_tx(snfi, op);
+
+	return 0;
+}
+
+static void mtk_snfi_trigger(struct mtk_snfi *snfi,
+			     const struct spi_mem_op *op, bool rx)
+{
+	u32 val;
+
+	val = readl(snfi->regs + NFI_CON);
+	val |= rx ? CON_BRD : CON_BWR;
+	writew(val, snfi->regs + NFI_CON);
+
+	writew(STAR_EN, snfi->regs + NFI_STRDATA);
+}
+
+static int mtk_snfi_wait_done(struct mtk_snfi *snfi,
+			      const struct spi_mem_op *op, bool rx)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	struct device *dev = snfi->dev;
+	u32 val;
+	int ret;
+
+	ret = wait_for_completion_timeout(&snfi->done, msecs_to_jiffies(500));
+	if (!ret) {
+		dev_err(dev, "wait for %s completion timeout\n",
+			rx ? "read" : "write");
+		return -ETIMEDOUT;
+	}
+
+	if (rx) {
+		ret = readl_poll_timeout_atomic(snfi->regs + NFI_BYTELEN,
+						val,
+						ADDRCNTR_SEC(val) >=
+						eng->nsteps,
+						0, MTK_SNFI_TIMEOUT);
+		if (ret) {
+			dev_err(dev, "wait for rx section count timeout\n");
+			return -ETIMEDOUT;
+		}
+
+		ret = readl_poll_timeout_atomic(snfi->regs + NFI_MASTERSTA,
+						val,
+						!(val & AHB_BUS_BUSY),
+						0, MTK_SNFI_TIMEOUT);
+		if (ret) {
+			dev_err(dev, "wait for bus busy timeout\n");
+			return -ETIMEDOUT;
+		}
+	} else {
+		ret = readl_poll_timeout_atomic(snfi->regs + NFI_ADDRCNTR,
+						val,
+						ADDRCNTR_SEC(val) >=
+						eng->nsteps,
+						0, MTK_SNFI_TIMEOUT);
+		if (ret) {
+			dev_err(dev, "wait for tx section count timeout\n");
+			return -ETIMEDOUT;
+		}
+	}
+
+	return 0;
+}
+
+static void mtk_snfi_complete(struct mtk_snfi *snfi,
+			      const struct spi_mem_op *op, bool rx)
+{
+	u32 val;
+
+	dma_unmap_single(snfi->dev,
+			 snfi->dma_addr, op->data.nbytes,
+			 rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE);
+
+	if (op->ecc_en && rx)
+		mtk_snfi_read_oob_free(snfi, op);
+
+	val = readl(snfi->regs + SNFI_MISC_CTL);
+	val &= rx ? ~RD_CUSTOM_EN : ~WR_CUSTOM_EN;
+	writel(val, snfi->regs + SNFI_MISC_CTL);
+
+	val = readl(snfi->regs + SNFI_STA_CTL1);
+	val |= rx ? CUS_READ_DONE : CUS_PROG_DONE;
+	writew(val, snfi->regs + SNFI_STA_CTL1);
+	val &= rx ? ~CUS_READ_DONE : ~CUS_PROG_DONE;
+	writew(val, snfi->regs + SNFI_STA_CTL1);
+
+	/* Disable interrupt */
+	val = readl(snfi->regs + NFI_INTR_EN);
+	val &= rx ? ~INTR_CUS_READ_EN : ~INTR_CUS_PROG_EN;
+	writew(val, snfi->regs + NFI_INTR_EN);
+
+	writew(0, snfi->regs + NFI_CNFG);
+	writew(0, snfi->regs + NFI_CON);
+}
+
+static int mtk_snfi_transfer_dma(struct mtk_snfi *snfi,
+				 const struct spi_mem_op *op, bool rx)
+{
+	int ret;
+
+	ret = mtk_snfi_prepare(snfi, op, rx);
+	if (ret)
+		return ret;
+
+	mtk_snfi_trigger(snfi, op, rx);
+
+	ret = mtk_snfi_wait_done(snfi, op, rx);
+
+	mtk_snfi_complete(snfi, op, rx);
+
+	return ret;
+}
+
+static int mtk_snfi_transfer_mac(struct mtk_snfi *snfi,
+				 const u8 *txbuf, u8 *rxbuf,
+				 const u32 txlen, const u32 rxlen)
+{
+	u32 i, j, val, tmp;
+	u8 *p_tmp = (u8 *)(&tmp);
+	u32 offset = 0;
+	int ret = 0;
+
+	/* Move tx data to gpram in snfi mac mode */
+	for (i = 0; i < txlen; ) {
+		for (j = 0, tmp = 0; i < txlen && j < 4; i++, j++)
+			p_tmp[j] = txbuf[i];
+
+		writel(tmp, snfi->regs + SNFI_GPRAM_DATA + offset);
+		offset += 4;
+	}
+
+	writel(txlen, snfi->regs + SNFI_MAC_OUTL);
+	writel(rxlen, snfi->regs + SNFI_MAC_INL);
+
+	ret = mtk_snfi_mac_op(snfi);
+	if (ret) {
+		dev_warn(snfi->dev, "snfi mac operation fail\n");
+		return ret;
+	}
+
+	/* Get rx data from gpram in snfi mac mode */
+	if (rxlen)
+		for (i = 0, offset = rounddown(txlen, 4); i < rxlen; ) {
+			val = readl(snfi->regs +
+				    SNFI_GPRAM_DATA + offset);
+			for (j = 0; i < rxlen && j < 4; i++, j++, rxbuf++) {
+				if (i == 0)
+					j = txlen % 4;
+				*rxbuf = (val >> (j * 8)) & 0xff;
+			}
+			offset += 4;
+		}
+
+	return ret;
+}
+
+static int mtk_snfi_exec_op(struct spi_mem *mem,
+			    const struct spi_mem_op *op)
+{
+	struct mtk_snfi *snfi = spi_controller_get_devdata(mem->spi->master);
+	u8 *buf, *txbuf = snfi->tx_buf, *rxbuf = NULL;
+	u32 txlen = 0, rxlen = 0;
+	int i, ret = 0;
+	bool rx;
+
+	rx = op->data.dir == SPI_MEM_DATA_IN;
+
+	ret = mtk_snfi_reset(snfi);
+	if (ret) {
+		dev_warn(snfi->dev, "snfi reset fail\n");
+		return ret;
+	}
+
+	/*
+	 * Use snfi DMA mode when the data buswidth is larger than 1;
+	 * otherwise, fall back to snfi mac mode.
+	 */
+	if (op->data.buswidth != 1 && op->data.buswidth != 0) {
+		ret = mtk_snfi_transfer_dma(snfi, op, rx);
+		if (ret)
+			dev_warn(snfi->dev, "snfi dma transfer %d fail %d\n",
+				 rx, ret);
+		return ret;
+	}
+
+	txbuf[txlen++] = op->cmd.opcode;
+
+	if (op->addr.nbytes)
+		for (i = 0; i < op->addr.nbytes; i++)
+			txbuf[txlen++] = op->addr.val >>
+					(8 * (op->addr.nbytes - i - 1));
+
+	txlen += op->dummy.nbytes;
+
+	if (op->data.dir == SPI_MEM_DATA_OUT) {
+		buf = (u8 *)op->data.buf.out;
+		for (i = 0; i < op->data.nbytes; i++)
+			txbuf[txlen++] = buf[i];
+	}
+
+	if (op->data.dir == SPI_MEM_DATA_IN) {
+		rxbuf = (u8 *)op->data.buf.in;
+		rxlen = op->data.nbytes;
+	}
+
+	ret = mtk_snfi_transfer_mac(snfi, txbuf, rxbuf, txlen, rxlen);
+	if (ret)
+		dev_warn(snfi->dev, "snfi mac transfer %d fail %d\n",
+			 op->data.dir, ret);
+
+	return ret;
+}
+
+static int mtk_snfi_check_buswidth(u8 width)
+{
+	switch (width) {
+	case 1:
+	case 2:
+	case 4:
+		return 0;
+
+	default:
+		break;
+	}
+
+	return -EOPNOTSUPP;
+}
+
+static bool mtk_snfi_supports_op(struct spi_mem *mem,
+				 const struct spi_mem_op *op)
+{
+	int ret = 0;
+
+	if (!spi_mem_default_supports_op(mem, op))
+		return false;
+
+	if (op->cmd.buswidth != 1)
+		return false;
+
+	/*
+	 * Operations with data buswidth 0/1 are executed in snfi mac
+	 * mode, where the HW ECC engine cannot be used.
+	 */
+	if (op->ecc_en && op->data.buswidth == 1 &&
+	    op->data.nbytes >= SNFI_GPRAM_MAX_LEN)
+		return false;
+
+	switch (op->data.dir) {
+	/* For spi mem data in, 1/2/4 buswidth is supported */
+	case SPI_MEM_DATA_IN:
+		if (op->addr.nbytes)
+			ret |= mtk_snfi_check_buswidth(op->addr.buswidth);
+
+		if (op->dummy.nbytes)
+			ret |= mtk_snfi_check_buswidth(op->dummy.buswidth);
+
+		if (op->data.nbytes)
+			ret |= mtk_snfi_check_buswidth(op->data.buswidth);
+
+		if (ret)
+			return false;
+
+		break;
+	case SPI_MEM_DATA_OUT:
+		/*
+		 * For spi mem data out, only 0/1 buswidth is supported
+		 * for addr/dummy and 1/4 buswidth for data.
+		 */
+		if (op->addr.buswidth != 0 && op->addr.buswidth != 1)
+			return false;
+
+		if (op->dummy.buswidth != 0 && op->dummy.buswidth != 1)
+			return false;
+
+		if (op->data.buswidth != 1 && op->data.buswidth != 4)
+			return false;
+
+		break;
+	default:
+		break;
+	}
+
+	return true;
+}
+
+static int mtk_snfi_adjust_op_size(struct spi_mem *mem,
+				   struct spi_mem_op *op)
+{
+	u32 len, max_len;
+
+	/*
+	 * Ops with data buswidth 0/1 run in snfi mac mode and are
+	 * limited to SNFI_GPRAM_MAX_LEN bytes. Otherwise, snfi DMA
+	 * mode supports up to 16KB.
+	 */
+	if (op->data.buswidth == 1 || op->data.buswidth == 0)
+		max_len = SNFI_GPRAM_MAX_LEN;
+	else
+		max_len = KB(16);
+
+	len = op->cmd.nbytes + op->addr.nbytes + op->dummy.nbytes;
+	if (len > max_len)
+		return -EOPNOTSUPP;
+
+	if ((len + op->data.nbytes) > max_len)
+		op->data.nbytes = max_len - len;
+
+	return 0;
+}
+
+static const struct mtk_snfi_caps mtk_snfi_caps_mt7622 = {
+	.pageformat_spare_shift = 16,
+};
+
+static const struct spi_controller_mem_ops mtk_snfi_ops = {
+	.adjust_op_size = mtk_snfi_adjust_op_size,
+	.supports_op = mtk_snfi_supports_op,
+	.exec_op = mtk_snfi_exec_op,
+};
+
+static const struct of_device_id mtk_snfi_id_table[] = {
+	{ .compatible = "mediatek,mt7622-snfi",
+	  .data = &mtk_snfi_caps_mt7622,
+	},
+	{  /* sentinel */ }
+};
+
+/* ECC wrapper */
+static struct mtk_snfi *mtk_nand_to_spi(struct nand_device *nand)
+{
+	struct device *dev = nand->ecc.engine->dev;
+	struct spi_master *master = dev_get_drvdata(dev);
+	struct mtk_snfi *snfi = spi_master_get_devdata(master);
+
+	return snfi;
+}
+
+static int mtk_snfi_config(struct nand_device *nand,
+			   struct mtk_snfi *snfi)
+{
+	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
+	u32 val;
+
+	switch (nanddev_page_size(nand)) {
+	case 512:
+		val = PAGEFMT_512_2K | PAGEFMT_SEC_SEL_512;
+		break;
+	case KB(2):
+		if (eng->section_size == 512)
+			val = PAGEFMT_2K_4K | PAGEFMT_SEC_SEL_512;
+		else
+			val = PAGEFMT_512_2K;
+		break;
+	case KB(4):
+		if (eng->section_size == 512)
+			val = PAGEFMT_4K_8K | PAGEFMT_SEC_SEL_512;
+		else
+			val = PAGEFMT_2K_4K;
+		break;
+	case KB(8):
+		if (eng->section_size == 512)
+			val = PAGEFMT_8K_16K | PAGEFMT_SEC_SEL_512;
+		else
+			val = PAGEFMT_4K_8K;
+		break;
+	case KB(16):
+		val = PAGEFMT_8K_16K;
+		break;
+	default:
+		dev_err(snfi->dev, "invalid page len: %d\n",
+			nanddev_page_size(nand));
+		return -EINVAL;
+	}
+
+	val |= eng->oob_per_section_idx << PAGEFMT_SPARE_SHIFT;
+	val |= eng->oob_free << PAGEFMT_FDM_SHIFT;
+	val |= eng->oob_free_protected << PAGEFMT_FDM_ECC_SHIFT;
+	writel(val, snfi->regs + NFI_PAGEFMT);
+
+	return 0;
+}
+
+static int mtk_snfi_ecc_init_ctx(struct nand_device *nand)
+{
+	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
+
+	return ops->init_ctx(nand);
+}
+
+static void mtk_snfi_ecc_cleanup_ctx(struct nand_device *nand)
+{
+	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
+
+	ops->cleanup_ctx(nand);
+}
+
+static int mtk_snfi_ecc_prepare_io_req(struct nand_device *nand,
+				       struct nand_page_io_req *req)
+{
+	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
+	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
+	int ret;
+
+	ret = mtk_snfi_config(nand, snfi);
+	if (ret)
+		return ret;
+
+	return ops->prepare_io_req(nand, req);
+}
+
+static int mtk_snfi_ecc_finish_io_req(struct nand_device *nand,
+				      struct nand_page_io_req *req)
+{
+	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
+	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
+	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
+
+	if (req->mode != MTD_OPS_RAW)
+		eng->read_empty = readl(snfi->regs + NFI_STA) & STA_EMP_PAGE;
+
+	return ops->finish_io_req(nand, req);
+}
+
+static struct nand_ecc_engine_ops mtk_snfi_ecc_engine_pipelined_ops = {
+	.init_ctx = mtk_snfi_ecc_init_ctx,
+	.cleanup_ctx = mtk_snfi_ecc_cleanup_ctx,
+	.prepare_io_req = mtk_snfi_ecc_prepare_io_req,
+	.finish_io_req = mtk_snfi_ecc_finish_io_req,
+};
+
+static int mtk_snfi_ecc_probe(struct platform_device *pdev,
+			      struct mtk_snfi *snfi)
+{
+	struct nand_ecc_engine *ecceng;
+
+	if (!mtk_ecc_get_pipelined_ops())
+		return -EOPNOTSUPP;
+
+	ecceng = devm_kzalloc(&pdev->dev, sizeof(*ecceng), GFP_KERNEL);
+	if (!ecceng)
+		return -ENOMEM;
+
+	ecceng->dev = &pdev->dev;
+	ecceng->ops = &mtk_snfi_ecc_engine_pipelined_ops;
+
+	nand_ecc_register_on_host_hw_engine(ecceng);
+
+	snfi->engine = ecceng;
+
+	return 0;
+}
+
+static int mtk_snfi_probe(struct platform_device *pdev)
+{
+	struct device_node *np = pdev->dev.of_node;
+	struct spi_controller *ctlr;
+	struct mtk_snfi *snfi;
+	struct resource *res;
+	int ret, irq;
+	u32 val = 0;
+
+	ctlr = spi_alloc_master(&pdev->dev, sizeof(*snfi));
+	if (!ctlr)
+		return -ENOMEM;
+
+	snfi = spi_controller_get_devdata(ctlr);
+	snfi->dev = &pdev->dev;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	snfi->regs = devm_ioremap_resource(snfi->dev, res);
+	if (IS_ERR(snfi->regs)) {
+		ret = PTR_ERR(snfi->regs);
+		goto err_put_master;
+	}
+
+	ret = of_property_read_u32(np, "sample-delay", &val);
+	if (!ret)
+		snfi->sample_delay = val;
+
+	ret = of_property_read_u32(np, "read-latency", &val);
+	if (!ret)
+		snfi->read_latency = val;
+
+	snfi->nfi_clk = devm_clk_get(snfi->dev, "nfi_clk");
+	if (IS_ERR(snfi->nfi_clk)) {
+		dev_err(snfi->dev, "failed to get nfi clk\n");
+		ret = PTR_ERR(snfi->nfi_clk);
+		goto err_put_master;
+	}
+
+	snfi->snfi_clk = devm_clk_get(snfi->dev, "snfi_clk");
+	if (IS_ERR(snfi->snfi_clk)) {
+		dev_err(snfi->dev, "failed to get snfi clk\n");
+		ret = PTR_ERR(snfi->snfi_clk);
+		goto err_put_master;
+	}
+
+	snfi->hclk = devm_clk_get(snfi->dev, "hclk");
+	if (IS_ERR(snfi->hclk)) {
+		dev_err(snfi->dev, "failed to get hclk\n");
+		ret = PTR_ERR(snfi->hclk);
+		goto err_put_master;
+	}
+
+	ret = mtk_snfi_enable_clk(snfi->dev, snfi);
+	if (ret)
+		goto err_put_master;
+
+	snfi->caps = of_device_get_match_data(snfi->dev);
+
+	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		dev_err(snfi->dev, "failed to get snfi irq\n");
+		ret = -EINVAL;
+		goto clk_disable;
+	}
+
+	ret = devm_request_irq(snfi->dev, irq, mtk_snfi_irq,
+			       0, "mtk-snfi", snfi);
+	if (ret) {
+		dev_err(snfi->dev, "failed to request snfi irq\n");
+		goto clk_disable;
+	}
+
+	ret = dma_set_mask(snfi->dev, DMA_BIT_MASK(32));
+	if (ret) {
+		dev_err(snfi->dev, "failed to set dma mask\n");
+		goto clk_disable;
+	}
+
+	snfi->tx_buf = kzalloc(SNFI_GPRAM_MAX_LEN, GFP_KERNEL);
+	if (!snfi->tx_buf) {
+		ret = -ENOMEM;
+		goto clk_disable;
+	}
+
+	ctlr->dev.of_node = np;
+	ctlr->mem_ops = &mtk_snfi_ops;
+	ctlr->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_QUAD;
+	ctlr->auto_runtime_pm = true;
+
+	dev_set_drvdata(snfi->dev, ctlr);
+
+	ret = mtk_snfi_init(snfi);
+	if (ret) {
+		dev_err(snfi->dev, "failed to init snfi\n");
+		goto free_buf;
+	}
+
+	ret = mtk_snfi_ecc_probe(pdev, snfi);
+	if (ret) {
+		dev_warn(snfi->dev, "ECC engine not available\n");
+		goto free_buf;
+	}
+
+	pm_runtime_enable(snfi->dev);
+
+	ret = devm_spi_register_master(snfi->dev, ctlr);
+	if (ret) {
+		dev_err(snfi->dev, "failed to register spi master\n");
+		goto disable_pm_runtime;
+	}
+
+	return 0;
+
+disable_pm_runtime:
+	pm_runtime_disable(snfi->dev);
+
+free_buf:
+	kfree(snfi->tx_buf);
+
+clk_disable:
+	mtk_snfi_disable_clk(snfi);
+
+err_put_master:
+	spi_master_put(ctlr);
+
+	return ret;
+}
+
+static int mtk_snfi_remove(struct platform_device *pdev)
+{
+	struct spi_controller *ctlr = dev_get_drvdata(&pdev->dev);
+	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
+	struct nand_ecc_engine *eng = snfi->engine;
+
+	pm_runtime_disable(snfi->dev);
+	nand_ecc_unregister_on_host_hw_engine(eng);
+	kfree(snfi->tx_buf);
+	spi_master_put(ctlr);
+
+	return 0;
+}
+
+#ifdef CONFIG_PM
+static int mtk_snfi_runtime_suspend(struct device *dev)
+{
+	struct spi_controller *ctlr = dev_get_drvdata(dev);
+	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
+
+	mtk_snfi_disable_clk(snfi);
+
+	return 0;
+}
+
+static int mtk_snfi_runtime_resume(struct device *dev)
+{
+	struct spi_controller *ctlr = dev_get_drvdata(dev);
+	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
+	int ret;
+
+	ret = mtk_snfi_enable_clk(dev, snfi);
+	if (ret)
+		return ret;
+
+	ret = mtk_snfi_init(snfi);
+	if (ret)
+		dev_err(dev, "failed to init snfi\n");
+
+	return ret;
+}
+#endif /* CONFIG_PM */
+
+static const struct dev_pm_ops mtk_snfi_pm_ops = {
+	SET_RUNTIME_PM_OPS(mtk_snfi_runtime_suspend,
+			   mtk_snfi_runtime_resume, NULL)
+	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+				pm_runtime_force_resume)
+};
+
+static struct platform_driver mtk_snfi_driver = {
+	.driver = {
+		.name	= "mtk-snfi",
+		.of_match_table = mtk_snfi_id_table,
+		.pm = &mtk_snfi_pm_ops,
+	},
+	.probe		= mtk_snfi_probe,
+	.remove		= mtk_snfi_remove,
+};
+
+module_platform_driver(mtk_snfi_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Xiangsheng Hou <xiangsheng.hou@mediatek.com>");
+MODULE_DESCRIPTION("Mediatek SPI Nand Flash interface driver");
-- 
2.25.1




* [RFC, v4, 4/5] mtd: spinand: Move set/get OOB databytes to each ECC engines
  2021-11-30  8:31 ` Xiangsheng Hou
@ 2021-11-30  8:32   ` Xiangsheng Hou
From: Xiangsheng Hou @ 2021-11-30  8:32 UTC (permalink / raw)
  To: miquel.raynal, broonie
  Cc: xiangsheng.hou, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Move the set/get of OOB data bytes into each ECC engine for AUTO mode.
For reads and writes in AUTO mode, the OOB bytes include not only the
free data bytes but also the ECC data bytes. Moreover, for some ECC
engines the data bytes in OOB may be mixed with main data; for example,
the Mediatek ECC engine swaps one extra main data byte with the BBM.
So, handle these operations in each ECC engine to account for the
differences.

Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
---
 drivers/mtd/nand/ecc-sw-bch.c           | 71 ++++++++++++++++---
 drivers/mtd/nand/ecc-sw-hamming.c       | 71 ++++++++++++++++---
 drivers/mtd/nand/spi/core.c             | 93 +++++++++++++++++--------
 include/linux/mtd/nand-ecc-sw-bch.h     |  4 ++
 include/linux/mtd/nand-ecc-sw-hamming.h |  4 ++
 include/linux/mtd/spinand.h             |  4 ++
 6 files changed, 198 insertions(+), 49 deletions(-)

diff --git a/drivers/mtd/nand/ecc-sw-bch.c b/drivers/mtd/nand/ecc-sw-bch.c
index 405552d014a8..bda31ef8f0b8 100644
--- a/drivers/mtd/nand/ecc-sw-bch.c
+++ b/drivers/mtd/nand/ecc-sw-bch.c
@@ -238,7 +238,9 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
 	engine_conf->code_size = code_size;
 	engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
 	engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
-	if (!engine_conf->calc_buf || !engine_conf->code_buf) {
+	engine_conf->oob_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
+	if (!engine_conf->calc_buf || !engine_conf->code_buf ||
+	    !engine_conf->oob_buf) {
 		ret = -ENOMEM;
 		goto free_bufs;
 	}
@@ -267,6 +269,7 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
 	nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
 	kfree(engine_conf->calc_buf);
 	kfree(engine_conf->code_buf);
+	kfree(engine_conf->oob_buf);
 free_engine_conf:
 	kfree(engine_conf);
 
@@ -283,6 +286,7 @@ void nand_ecc_sw_bch_cleanup_ctx(struct nand_device *nand)
 		nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
 		kfree(engine_conf->calc_buf);
 		kfree(engine_conf->code_buf);
+		kfree(engine_conf->oob_buf);
 		kfree(engine_conf);
 	}
 }
@@ -299,22 +303,42 @@ static int nand_ecc_sw_bch_prepare_io_req(struct nand_device *nand,
 	int total = nand->ecc.ctx.total;
 	u8 *ecccalc = engine_conf->calc_buf;
 	const u8 *data;
-	int i;
+	int i, ret = 0;
 
 	/* Nothing to do for a raw operation */
 	if (req->mode == MTD_OPS_RAW)
 		return 0;
 
-	/* This engine does not provide BBM/free OOB bytes protection */
-	if (!req->datalen)
-		return 0;
-
 	nand_ecc_tweak_req(&engine_conf->req_ctx, req);
 
 	/* No more preparation for page read */
 	if (req->type == NAND_PAGE_READ)
 		return 0;
 
+	if (req->ooblen) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_set_databytes(mtd, req->oobbuf.out,
+							  engine_conf->oob_buf,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf + req->ooboffs,
+			       req->oobbuf.out, req->ooblen);
+		}
+
+		engine_conf->src_oob_buf = (void *)req->oobbuf.out;
+		req->oobbuf.out = engine_conf->oob_buf;
+	}
+
+	/* This engine does not provide BBM/free OOB bytes protection */
+	if (!req->datalen)
+		return 0;
+
 	/* Preparation for page write: derive the ECC bytes and place them */
 	for (i = 0, data = req->databuf.out;
 	     eccsteps;
@@ -344,12 +368,36 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand,
 	if (req->mode == MTD_OPS_RAW)
 		return 0;
 
-	/* This engine does not provide BBM/free OOB bytes protection */
-	if (!req->datalen)
-		return 0;
-
 	/* No more preparation for page write */
 	if (req->type == NAND_PAGE_WRITE) {
+		if (req->ooblen)
+			req->oobbuf.out = engine_conf->src_oob_buf;
+
+		nand_ecc_restore_req(&engine_conf->req_ctx, req);
+		return 0;
+	}
+
+	if (req->ooblen) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_get_databytes(mtd,
+							  engine_conf->oob_buf,
+							  req->oobbuf.in,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf,
+			       req->oobbuf.in + req->ooboffs, req->ooblen);
+		}
+	}
+
+	/* This engine does not provide BBM/free OOB bytes protection */
+	if (!req->datalen) {
+		req->oobbuf.in = engine_conf->oob_buf;
 		nand_ecc_restore_req(&engine_conf->req_ctx, req);
 		return 0;
 	}
@@ -379,6 +427,9 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand,
 		}
 	}
 
+	if (req->ooblen)
+		req->oobbuf.in = engine_conf->oob_buf;
+
 	nand_ecc_restore_req(&engine_conf->req_ctx, req);
 
 	return max_bitflips;
diff --git a/drivers/mtd/nand/ecc-sw-hamming.c b/drivers/mtd/nand/ecc-sw-hamming.c
index 254db2e7f8bb..c90ff31e9656 100644
--- a/drivers/mtd/nand/ecc-sw-hamming.c
+++ b/drivers/mtd/nand/ecc-sw-hamming.c
@@ -507,7 +507,9 @@ int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand)
 	engine_conf->code_size = 3;
 	engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
 	engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
-	if (!engine_conf->calc_buf || !engine_conf->code_buf) {
+	engine_conf->oob_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
+	if (!engine_conf->calc_buf || !engine_conf->code_buf ||
+	    !engine_conf->oob_buf) {
 		ret = -ENOMEM;
 		goto free_bufs;
 	}
@@ -522,6 +524,7 @@ int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand)
 	nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
 	kfree(engine_conf->calc_buf);
 	kfree(engine_conf->code_buf);
+	kfree(engine_conf->oob_buf);
 free_engine_conf:
 	kfree(engine_conf);
 
@@ -537,6 +540,7 @@ void nand_ecc_sw_hamming_cleanup_ctx(struct nand_device *nand)
 		nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
 		kfree(engine_conf->calc_buf);
 		kfree(engine_conf->code_buf);
+		kfree(engine_conf->oob_buf);
 		kfree(engine_conf);
 	}
 }
@@ -553,22 +557,42 @@ static int nand_ecc_sw_hamming_prepare_io_req(struct nand_device *nand,
 	int total = nand->ecc.ctx.total;
 	u8 *ecccalc = engine_conf->calc_buf;
 	const u8 *data;
-	int i;
+	int i, ret;
 
 	/* Nothing to do for a raw operation */
 	if (req->mode == MTD_OPS_RAW)
 		return 0;
 
-	/* This engine does not provide BBM/free OOB bytes protection */
-	if (!req->datalen)
-		return 0;
-
 	nand_ecc_tweak_req(&engine_conf->req_ctx, req);
 
 	/* No more preparation for page read */
 	if (req->type == NAND_PAGE_READ)
 		return 0;
 
+	if (req->ooblen) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_set_databytes(mtd, req->oobbuf.out,
+							  engine_conf->oob_buf,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf + req->ooboffs,
+			       req->oobbuf.out, req->ooblen);
+		}
+
+		engine_conf->src_oob_buf = (void *)req->oobbuf.out;
+		req->oobbuf.out = engine_conf->oob_buf;
+	}
+
+	/* This engine does not provide BBM/free OOB bytes protection */
+	if (!req->datalen)
+		return 0;
+
 	/* Preparation for page write: derive the ECC bytes and place them */
 	for (i = 0, data = req->databuf.out;
 	     eccsteps;
@@ -598,12 +622,36 @@ static int nand_ecc_sw_hamming_finish_io_req(struct nand_device *nand,
 	if (req->mode == MTD_OPS_RAW)
 		return 0;
 
-	/* This engine does not provide BBM/free OOB bytes protection */
-	if (!req->datalen)
-		return 0;
-
 	/* No more preparation for page write */
 	if (req->type == NAND_PAGE_WRITE) {
+		if (req->ooblen)
+			req->oobbuf.out = engine_conf->src_oob_buf;
+
+		nand_ecc_restore_req(&engine_conf->req_ctx, req);
+		return 0;
+	}
+
+	if (req->ooblen) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_get_databytes(mtd,
+							  engine_conf->oob_buf,
+							  req->oobbuf.in,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf,
+			       req->oobbuf.in + req->ooboffs, req->ooblen);
+		}
+	}
+
+	/* This engine does not provide BBM/free OOB bytes protection */
+	if (!req->datalen) {
+		req->oobbuf.in = engine_conf->oob_buf;
 		nand_ecc_restore_req(&engine_conf->req_ctx, req);
 		return 0;
 	}
@@ -633,6 +681,9 @@ static int nand_ecc_sw_hamming_finish_io_req(struct nand_device *nand,
 		}
 	}
 
+	if (req->ooblen)
+		req->oobbuf.in = engine_conf->oob_buf;
+
 	nand_ecc_restore_req(&engine_conf->req_ctx, req);
 
 	return max_bitflips;
diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
index c58f558302c4..9033036086f2 100644
--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -267,6 +267,12 @@ static int spinand_ondie_ecc_init_ctx(struct nand_device *nand)
 	if (!engine_conf)
 		return -ENOMEM;
 
+	engine_conf->oob_buf = kzalloc(nand->memorg.oobsize, GFP_KERNEL);
+	if (!engine_conf->oob_buf) {
+		kfree(engine_conf);
+		return -ENOMEM;
+	}
+
 	nand->ecc.ctx.priv = engine_conf;
 
 	if (spinand->eccinfo.ooblayout)
@@ -279,16 +285,40 @@ static int spinand_ondie_ecc_init_ctx(struct nand_device *nand)
 
 static void spinand_ondie_ecc_cleanup_ctx(struct nand_device *nand)
 {
-	kfree(nand->ecc.ctx.priv);
+	struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv;
+
+	kfree(engine_conf->oob_buf);
+	kfree(engine_conf);
 }
 
 static int spinand_ondie_ecc_prepare_io_req(struct nand_device *nand,
 					    struct nand_page_io_req *req)
 {
+	struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv;
 	struct spinand_device *spinand = nand_to_spinand(nand);
+	struct mtd_info *mtd = spinand_to_mtd(spinand);
 	bool enable = (req->mode != MTD_OPS_RAW);
+	int ret;
 
-	memset(spinand->oobbuf, 0xff, nanddev_per_page_oobsize(nand));
+	if (req->ooblen && req->type == NAND_PAGE_WRITE) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_set_databytes(mtd, req->oobbuf.out,
+							  engine_conf->oob_buf,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf + req->ooboffs,
+			       req->oobbuf.out, req->ooblen);
+		}
+
+		engine_conf->src_oob_buf = (void *)req->oobbuf.out;
+		req->oobbuf.out = engine_conf->oob_buf;
+	}
 
 	/* Only enable or disable the engine */
 	return spinand_ecc_enable(spinand, enable);
@@ -306,8 +336,32 @@ static int spinand_ondie_ecc_finish_io_req(struct nand_device *nand,
 		return 0;
 
 	/* Nothing to do when finishing a page write */
-	if (req->type == NAND_PAGE_WRITE)
+	if (req->type == NAND_PAGE_WRITE) {
+		if (req->ooblen)
+			req->oobbuf.out = engine_conf->src_oob_buf;
+
 		return 0;
+	}
+
+	if (req->ooblen) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_get_databytes(mtd,
+							  engine_conf->oob_buf,
+							  req->oobbuf.in,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf,
+			       req->oobbuf.in + req->ooboffs, req->ooblen);
+		}
+
+		req->oobbuf.in = engine_conf->oob_buf;
+	}
 
 	/* Finish a page read: check the status, report errors/bitflips */
 	ret = spinand_check_ecc_status(spinand, engine_conf->status);
@@ -360,7 +414,6 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand,
 				      const struct nand_page_io_req *req)
 {
 	struct nand_device *nand = spinand_to_nand(spinand);
-	struct mtd_info *mtd = spinand_to_mtd(spinand);
 	struct spi_mem_dirmap_desc *rdesc;
 	unsigned int nbytes = 0;
 	void *buf = NULL;
@@ -403,16 +456,9 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand,
 		memcpy(req->databuf.in, spinand->databuf + req->dataoffs,
 		       req->datalen);
 
-	if (req->ooblen) {
-		if (req->mode == MTD_OPS_AUTO_OOB)
-			mtd_ooblayout_get_databytes(mtd, req->oobbuf.in,
-						    spinand->oobbuf,
-						    req->ooboffs,
-						    req->ooblen);
-		else
-			memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs,
-			       req->ooblen);
-	}
+	if (req->ooblen)
+		memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs,
+		       req->ooblen);
 
 	return 0;
 }
@@ -421,7 +467,6 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand,
 				     const struct nand_page_io_req *req)
 {
 	struct nand_device *nand = spinand_to_nand(spinand);
-	struct mtd_info *mtd = spinand_to_mtd(spinand);
 	struct spi_mem_dirmap_desc *wdesc;
 	unsigned int nbytes, column = 0;
 	void *buf = spinand->databuf;
@@ -433,27 +478,17 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand,
 	 * must fill the page cache entirely even if we only want to program
 	 * the data portion of the page, otherwise we might corrupt the BBM or
 	 * user data previously programmed in OOB area.
-	 *
-	 * Only reset the data buffer manually, the OOB buffer is prepared by
-	 * ECC engines ->prepare_io_req() callback.
 	 */
 	nbytes = nanddev_page_size(nand) + nanddev_per_page_oobsize(nand);
-	memset(spinand->databuf, 0xff, nanddev_page_size(nand));
+	memset(spinand->databuf, 0xff, nbytes);
 
 	if (req->datalen)
 		memcpy(spinand->databuf + req->dataoffs, req->databuf.out,
 		       req->datalen);
 
-	if (req->ooblen) {
-		if (req->mode == MTD_OPS_AUTO_OOB)
-			mtd_ooblayout_set_databytes(mtd, req->oobbuf.out,
-						    spinand->oobbuf,
-						    req->ooboffs,
-						    req->ooblen);
-		else
-			memcpy(spinand->oobbuf + req->ooboffs, req->oobbuf.out,
-			       req->ooblen);
-	}
+	if (req->ooblen)
+		memcpy(spinand->oobbuf + req->ooboffs, req->oobbuf.out,
+		       req->ooblen);
 
 	if (req->mode == MTD_OPS_RAW)
 		wdesc = spinand->dirmaps[req->pos.plane].wdesc;
diff --git a/include/linux/mtd/nand-ecc-sw-bch.h b/include/linux/mtd/nand-ecc-sw-bch.h
index 9da9969505a8..c4730badb77b 100644
--- a/include/linux/mtd/nand-ecc-sw-bch.h
+++ b/include/linux/mtd/nand-ecc-sw-bch.h
@@ -18,6 +18,8 @@
  * @code_size: Number of bytes needed to store a code (one code per step)
  * @calc_buf: Buffer to use when calculating ECC bytes
  * @code_buf: Buffer to use when reading (raw) ECC bytes from the chip
+ * @oob_buf: Buffer used to handle data in the OOB area
+ * @src_oob_buf: Pointer used to store the source OOB buffer pointer on write
  * @bch: BCH control structure
  * @errloc: error location array
  * @eccmask: XOR ecc mask, allows erased pages to be decoded as valid
@@ -27,6 +29,8 @@ struct nand_ecc_sw_bch_conf {
 	unsigned int code_size;
 	u8 *calc_buf;
 	u8 *code_buf;
+	u8 *oob_buf;
+	void *src_oob_buf;
 	struct bch_control *bch;
 	unsigned int *errloc;
 	unsigned char *eccmask;
diff --git a/include/linux/mtd/nand-ecc-sw-hamming.h b/include/linux/mtd/nand-ecc-sw-hamming.h
index c6c71894c575..88788d53b911 100644
--- a/include/linux/mtd/nand-ecc-sw-hamming.h
+++ b/include/linux/mtd/nand-ecc-sw-hamming.h
@@ -19,6 +19,8 @@
  * @code_size: Number of bytes needed to store a code (one code per step)
  * @calc_buf: Buffer to use when calculating ECC bytes
  * @code_buf: Buffer to use when reading (raw) ECC bytes from the chip
+ * @oob_buf: Buffer used to handle data in the OOB area
+ * @src_oob_buf: Pointer used to store the source OOB buffer pointer on write
  * @sm_order: Smart Media special ordering
  */
 struct nand_ecc_sw_hamming_conf {
@@ -26,6 +28,8 @@ struct nand_ecc_sw_hamming_conf {
 	unsigned int code_size;
 	u8 *calc_buf;
 	u8 *code_buf;
+	u8 *oob_buf;
+	void *src_oob_buf;
 	unsigned int sm_order;
 };
 
diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h
index 3aa28240a77f..23b86941fbf6 100644
--- a/include/linux/mtd/spinand.h
+++ b/include/linux/mtd/spinand.h
@@ -312,9 +312,13 @@ struct spinand_ecc_info {
  * struct spinand_ondie_ecc_conf - private SPI-NAND on-die ECC engine structure
  * @status: status of the last wait operation that will be used in case
  *          ->get_status() is not populated by the spinand device.
+ * @oob_buf: Buffer used to handle data in the OOB area
+ * @src_oob_buf: Pointer used to store the source OOB buffer pointer on write
  */
 struct spinand_ondie_ecc_conf {
 	u8 status;
+	u8 *oob_buf;
+	void *src_oob_buf;
 };
 
 /**
-- 
2.25.1


______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC, v4, 4/5] mtd: spinand: Move set/get OOB databytes to each ECC engines
@ 2021-11-30  8:32   ` Xiangsheng Hou
  0 siblings, 0 replies; 36+ messages in thread
From: Xiangsheng Hou @ 2021-11-30  8:32 UTC (permalink / raw)
  To: miquel.raynal, broonie
  Cc: xiangsheng.hou, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Move the set/get of OOB data bytes into each ECC engine when operating
in AUTO mode. For reads and writes in AUTO mode, the OOB bytes include
not only the free data bytes but also the ECC data bytes. For some
special ECC engines, the data bytes in the OOB area may also be mixed
with main data; for example, the Mediatek ECC engine swaps one extra
main data byte with the BBM. So move these operations into each ECC
engine so that each one can handle its own layout differences.

Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
---
 drivers/mtd/nand/ecc-sw-bch.c           | 71 ++++++++++++++++---
 drivers/mtd/nand/ecc-sw-hamming.c       | 71 ++++++++++++++++---
 drivers/mtd/nand/spi/core.c             | 93 +++++++++++++++++--------
 include/linux/mtd/nand-ecc-sw-bch.h     |  4 ++
 include/linux/mtd/nand-ecc-sw-hamming.h |  4 ++
 include/linux/mtd/spinand.h             |  4 ++
 6 files changed, 198 insertions(+), 49 deletions(-)

diff --git a/drivers/mtd/nand/ecc-sw-bch.c b/drivers/mtd/nand/ecc-sw-bch.c
index 405552d014a8..bda31ef8f0b8 100644
--- a/drivers/mtd/nand/ecc-sw-bch.c
+++ b/drivers/mtd/nand/ecc-sw-bch.c
@@ -238,7 +238,9 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
 	engine_conf->code_size = code_size;
 	engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
 	engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
-	if (!engine_conf->calc_buf || !engine_conf->code_buf) {
+	engine_conf->oob_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
+	if (!engine_conf->calc_buf || !engine_conf->code_buf ||
+	    !engine_conf->oob_buf) {
 		ret = -ENOMEM;
 		goto free_bufs;
 	}
@@ -267,6 +269,7 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
 	nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
 	kfree(engine_conf->calc_buf);
 	kfree(engine_conf->code_buf);
+	kfree(engine_conf->oob_buf);
 free_engine_conf:
 	kfree(engine_conf);
 
@@ -283,6 +286,7 @@ void nand_ecc_sw_bch_cleanup_ctx(struct nand_device *nand)
 		nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
 		kfree(engine_conf->calc_buf);
 		kfree(engine_conf->code_buf);
+		kfree(engine_conf->oob_buf);
 		kfree(engine_conf);
 	}
 }
@@ -299,22 +303,42 @@ static int nand_ecc_sw_bch_prepare_io_req(struct nand_device *nand,
 	int total = nand->ecc.ctx.total;
 	u8 *ecccalc = engine_conf->calc_buf;
 	const u8 *data;
-	int i;
+	int i, ret = 0;
 
 	/* Nothing to do for a raw operation */
 	if (req->mode == MTD_OPS_RAW)
 		return 0;
 
-	/* This engine does not provide BBM/free OOB bytes protection */
-	if (!req->datalen)
-		return 0;
-
 	nand_ecc_tweak_req(&engine_conf->req_ctx, req);
 
 	/* No more preparation for page read */
 	if (req->type == NAND_PAGE_READ)
 		return 0;
 
+	if (req->ooblen) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_set_databytes(mtd, req->oobbuf.out,
+							  engine_conf->oob_buf,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf + req->ooboffs,
+			       req->oobbuf.out, req->ooblen);
+		}
+
+		engine_conf->src_oob_buf = (void *)req->oobbuf.out;
+		req->oobbuf.out = engine_conf->oob_buf;
+	}
+
+	/* This engine does not provide BBM/free OOB bytes protection */
+	if (!req->datalen)
+		return 0;
+
 	/* Preparation for page write: derive the ECC bytes and place them */
 	for (i = 0, data = req->databuf.out;
 	     eccsteps;
@@ -344,12 +368,36 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand,
 	if (req->mode == MTD_OPS_RAW)
 		return 0;
 
-	/* This engine does not provide BBM/free OOB bytes protection */
-	if (!req->datalen)
-		return 0;
-
 	/* No more preparation for page write */
 	if (req->type == NAND_PAGE_WRITE) {
+		if (req->ooblen)
+			req->oobbuf.out = engine_conf->src_oob_buf;
+
+		nand_ecc_restore_req(&engine_conf->req_ctx, req);
+		return 0;
+	}
+
+	if (req->ooblen) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_get_databytes(mtd,
+							  engine_conf->oob_buf,
+							  req->oobbuf.in,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf,
+			       req->oobbuf.in + req->ooboffs, req->ooblen);
+		}
+	}
+
+	/* This engine does not provide BBM/free OOB bytes protection */
+	if (!req->datalen) {
+		req->oobbuf.in = engine_conf->oob_buf;
 		nand_ecc_restore_req(&engine_conf->req_ctx, req);
 		return 0;
 	}
@@ -379,6 +427,9 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand,
 		}
 	}
 
+	if (req->ooblen)
+		req->oobbuf.in = engine_conf->oob_buf;
+
 	nand_ecc_restore_req(&engine_conf->req_ctx, req);
 
 	return max_bitflips;
diff --git a/drivers/mtd/nand/ecc-sw-hamming.c b/drivers/mtd/nand/ecc-sw-hamming.c
index 254db2e7f8bb..c90ff31e9656 100644
--- a/drivers/mtd/nand/ecc-sw-hamming.c
+++ b/drivers/mtd/nand/ecc-sw-hamming.c
@@ -507,7 +507,9 @@ int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand)
 	engine_conf->code_size = 3;
 	engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
 	engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
-	if (!engine_conf->calc_buf || !engine_conf->code_buf) {
+	engine_conf->oob_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
+	if (!engine_conf->calc_buf || !engine_conf->code_buf ||
+	    !engine_conf->oob_buf) {
 		ret = -ENOMEM;
 		goto free_bufs;
 	}
@@ -522,6 +524,7 @@ int nand_ecc_sw_hamming_init_ctx(struct nand_device *nand)
 	nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
 	kfree(engine_conf->calc_buf);
 	kfree(engine_conf->code_buf);
+	kfree(engine_conf->oob_buf);
 free_engine_conf:
 	kfree(engine_conf);
 
@@ -537,6 +540,7 @@ void nand_ecc_sw_hamming_cleanup_ctx(struct nand_device *nand)
 		nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
 		kfree(engine_conf->calc_buf);
 		kfree(engine_conf->code_buf);
+		kfree(engine_conf->oob_buf);
 		kfree(engine_conf);
 	}
 }
@@ -553,22 +557,42 @@ static int nand_ecc_sw_hamming_prepare_io_req(struct nand_device *nand,
 	int total = nand->ecc.ctx.total;
 	u8 *ecccalc = engine_conf->calc_buf;
 	const u8 *data;
-	int i;
+	int i, ret;
 
 	/* Nothing to do for a raw operation */
 	if (req->mode == MTD_OPS_RAW)
 		return 0;
 
-	/* This engine does not provide BBM/free OOB bytes protection */
-	if (!req->datalen)
-		return 0;
-
 	nand_ecc_tweak_req(&engine_conf->req_ctx, req);
 
 	/* No more preparation for page read */
 	if (req->type == NAND_PAGE_READ)
 		return 0;
 
+	if (req->ooblen) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_set_databytes(mtd, req->oobbuf.out,
+							  engine_conf->oob_buf,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf + req->ooboffs,
+			       req->oobbuf.out, req->ooblen);
+		}
+
+		engine_conf->src_oob_buf = (void *)req->oobbuf.out;
+		req->oobbuf.out = engine_conf->oob_buf;
+	}
+
+	/* This engine does not provide BBM/free OOB bytes protection */
+	if (!req->datalen)
+		return 0;
+
 	/* Preparation for page write: derive the ECC bytes and place them */
 	for (i = 0, data = req->databuf.out;
 	     eccsteps;
@@ -598,12 +622,36 @@ static int nand_ecc_sw_hamming_finish_io_req(struct nand_device *nand,
 	if (req->mode == MTD_OPS_RAW)
 		return 0;
 
-	/* This engine does not provide BBM/free OOB bytes protection */
-	if (!req->datalen)
-		return 0;
-
 	/* No more preparation for page write */
 	if (req->type == NAND_PAGE_WRITE) {
+		if (req->ooblen)
+			req->oobbuf.out = engine_conf->src_oob_buf;
+
+		nand_ecc_restore_req(&engine_conf->req_ctx, req);
+		return 0;
+	}
+
+	if (req->ooblen) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_get_databytes(mtd,
+							  engine_conf->oob_buf,
+							  req->oobbuf.in,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf,
+			       req->oobbuf.in + req->ooboffs, req->ooblen);
+		}
+	}
+
+	/* This engine does not provide BBM/free OOB bytes protection */
+	if (!req->datalen) {
+		req->oobbuf.in = engine_conf->oob_buf;
 		nand_ecc_restore_req(&engine_conf->req_ctx, req);
 		return 0;
 	}
@@ -633,6 +681,9 @@ static int nand_ecc_sw_hamming_finish_io_req(struct nand_device *nand,
 		}
 	}
 
+	if (req->ooblen)
+		req->oobbuf.in = engine_conf->oob_buf;
+
 	nand_ecc_restore_req(&engine_conf->req_ctx, req);
 
 	return max_bitflips;
diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
index c58f558302c4..9033036086f2 100644
--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -267,6 +267,12 @@ static int spinand_ondie_ecc_init_ctx(struct nand_device *nand)
 	if (!engine_conf)
 		return -ENOMEM;
 
+	engine_conf->oob_buf = kzalloc(nand->memorg.oobsize, GFP_KERNEL);
+	if (!engine_conf->oob_buf) {
+		kfree(engine_conf);
+		return -ENOMEM;
+	}
+
 	nand->ecc.ctx.priv = engine_conf;
 
 	if (spinand->eccinfo.ooblayout)
@@ -279,16 +285,40 @@ static int spinand_ondie_ecc_init_ctx(struct nand_device *nand)
 
 static void spinand_ondie_ecc_cleanup_ctx(struct nand_device *nand)
 {
-	kfree(nand->ecc.ctx.priv);
+	struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv;
+
+	kfree(engine_conf->oob_buf);
+	kfree(engine_conf);
 }
 
 static int spinand_ondie_ecc_prepare_io_req(struct nand_device *nand,
 					    struct nand_page_io_req *req)
 {
+	struct spinand_ondie_ecc_conf *engine_conf = nand->ecc.ctx.priv;
 	struct spinand_device *spinand = nand_to_spinand(nand);
+	struct mtd_info *mtd = spinand_to_mtd(spinand);
 	bool enable = (req->mode != MTD_OPS_RAW);
+	int ret;
 
-	memset(spinand->oobbuf, 0xff, nanddev_per_page_oobsize(nand));
+	if (req->ooblen && req->type == NAND_PAGE_WRITE) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_set_databytes(mtd, req->oobbuf.out,
+							  engine_conf->oob_buf,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf + req->ooboffs,
+			       req->oobbuf.out, req->ooblen);
+		}
+
+		engine_conf->src_oob_buf = (void *)req->oobbuf.out;
+		req->oobbuf.out = engine_conf->oob_buf;
+	}
 
 	/* Only enable or disable the engine */
 	return spinand_ecc_enable(spinand, enable);
@@ -306,8 +336,32 @@ static int spinand_ondie_ecc_finish_io_req(struct nand_device *nand,
 		return 0;
 
 	/* Nothing to do when finishing a page write */
-	if (req->type == NAND_PAGE_WRITE)
+	if (req->type == NAND_PAGE_WRITE) {
+		if (req->ooblen)
+			req->oobbuf.out = engine_conf->src_oob_buf;
+
 		return 0;
+	}
+
+	if (req->ooblen) {
+		memset(engine_conf->oob_buf, 0xff,
+		       nanddev_per_page_oobsize(nand));
+
+		if (req->mode == MTD_OPS_AUTO_OOB) {
+			ret = mtd_ooblayout_get_databytes(mtd,
+							  engine_conf->oob_buf,
+							  req->oobbuf.in,
+							  req->ooboffs,
+							  mtd->oobavail);
+			if (ret)
+				return ret;
+		} else {
+			memcpy(engine_conf->oob_buf,
+			       req->oobbuf.in + req->ooboffs, req->ooblen);
+		}
+
+		req->oobbuf.in = engine_conf->oob_buf;
+	}
 
 	/* Finish a page read: check the status, report errors/bitflips */
 	ret = spinand_check_ecc_status(spinand, engine_conf->status);
@@ -360,7 +414,6 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand,
 				      const struct nand_page_io_req *req)
 {
 	struct nand_device *nand = spinand_to_nand(spinand);
-	struct mtd_info *mtd = spinand_to_mtd(spinand);
 	struct spi_mem_dirmap_desc *rdesc;
 	unsigned int nbytes = 0;
 	void *buf = NULL;
@@ -403,16 +456,9 @@ static int spinand_read_from_cache_op(struct spinand_device *spinand,
 		memcpy(req->databuf.in, spinand->databuf + req->dataoffs,
 		       req->datalen);
 
-	if (req->ooblen) {
-		if (req->mode == MTD_OPS_AUTO_OOB)
-			mtd_ooblayout_get_databytes(mtd, req->oobbuf.in,
-						    spinand->oobbuf,
-						    req->ooboffs,
-						    req->ooblen);
-		else
-			memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs,
-			       req->ooblen);
-	}
+	if (req->ooblen)
+		memcpy(req->oobbuf.in, spinand->oobbuf + req->ooboffs,
+		       req->ooblen);
 
 	return 0;
 }
@@ -421,7 +467,6 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand,
 				     const struct nand_page_io_req *req)
 {
 	struct nand_device *nand = spinand_to_nand(spinand);
-	struct mtd_info *mtd = spinand_to_mtd(spinand);
 	struct spi_mem_dirmap_desc *wdesc;
 	unsigned int nbytes, column = 0;
 	void *buf = spinand->databuf;
@@ -433,27 +478,17 @@ static int spinand_write_to_cache_op(struct spinand_device *spinand,
 	 * must fill the page cache entirely even if we only want to program
 	 * the data portion of the page, otherwise we might corrupt the BBM or
 	 * user data previously programmed in OOB area.
-	 *
-	 * Only reset the data buffer manually, the OOB buffer is prepared by
-	 * ECC engines ->prepare_io_req() callback.
 	 */
 	nbytes = nanddev_page_size(nand) + nanddev_per_page_oobsize(nand);
-	memset(spinand->databuf, 0xff, nanddev_page_size(nand));
+	memset(spinand->databuf, 0xff, nbytes);
 
 	if (req->datalen)
 		memcpy(spinand->databuf + req->dataoffs, req->databuf.out,
 		       req->datalen);
 
-	if (req->ooblen) {
-		if (req->mode == MTD_OPS_AUTO_OOB)
-			mtd_ooblayout_set_databytes(mtd, req->oobbuf.out,
-						    spinand->oobbuf,
-						    req->ooboffs,
-						    req->ooblen);
-		else
-			memcpy(spinand->oobbuf + req->ooboffs, req->oobbuf.out,
-			       req->ooblen);
-	}
+	if (req->ooblen)
+		memcpy(spinand->oobbuf + req->ooboffs, req->oobbuf.out,
+		       req->ooblen);
 
 	if (req->mode == MTD_OPS_RAW)
 		wdesc = spinand->dirmaps[req->pos.plane].wdesc;
diff --git a/include/linux/mtd/nand-ecc-sw-bch.h b/include/linux/mtd/nand-ecc-sw-bch.h
index 9da9969505a8..c4730badb77b 100644
--- a/include/linux/mtd/nand-ecc-sw-bch.h
+++ b/include/linux/mtd/nand-ecc-sw-bch.h
@@ -18,6 +18,8 @@
  * @code_size: Number of bytes needed to store a code (one code per step)
  * @calc_buf: Buffer to use when calculating ECC bytes
  * @code_buf: Buffer to use when reading (raw) ECC bytes from the chip
+ * @oob_buf: Buffer used to handle data in the OOB area
+ * @src_oob_buf: Pointer used to store the source OOB buffer pointer on write
  * @bch: BCH control structure
  * @errloc: error location array
  * @eccmask: XOR ecc mask, allows erased pages to be decoded as valid
@@ -27,6 +29,8 @@ struct nand_ecc_sw_bch_conf {
 	unsigned int code_size;
 	u8 *calc_buf;
 	u8 *code_buf;
+	u8 *oob_buf;
+	void *src_oob_buf;
 	struct bch_control *bch;
 	unsigned int *errloc;
 	unsigned char *eccmask;
diff --git a/include/linux/mtd/nand-ecc-sw-hamming.h b/include/linux/mtd/nand-ecc-sw-hamming.h
index c6c71894c575..88788d53b911 100644
--- a/include/linux/mtd/nand-ecc-sw-hamming.h
+++ b/include/linux/mtd/nand-ecc-sw-hamming.h
@@ -19,6 +19,8 @@
  * @code_size: Number of bytes needed to store a code (one code per step)
  * @calc_buf: Buffer to use when calculating ECC bytes
  * @code_buf: Buffer to use when reading (raw) ECC bytes from the chip
+ * @oob_buf: Buffer used to handle data in the OOB area
+ * @src_oob_buf: Pointer used to store the source OOB buffer pointer on write
  * @sm_order: Smart Media special ordering
  */
 struct nand_ecc_sw_hamming_conf {
@@ -26,6 +28,8 @@ struct nand_ecc_sw_hamming_conf {
 	unsigned int code_size;
 	u8 *calc_buf;
 	u8 *code_buf;
+	u8 *oob_buf;
+	void *src_oob_buf;
 	unsigned int sm_order;
 };
 
diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h
index 3aa28240a77f..23b86941fbf6 100644
--- a/include/linux/mtd/spinand.h
+++ b/include/linux/mtd/spinand.h
@@ -312,9 +312,13 @@ struct spinand_ecc_info {
  * struct spinand_ondie_ecc_conf - private SPI-NAND on-die ECC engine structure
  * @status: status of the last wait operation that will be used in case
  *          ->get_status() is not populated by the spinand device.
+ * @oob_buf: Buffer used to handle data in the OOB area
+ * @src_oob_buf: Pointer used to store the source OOB buffer pointer on write
  */
 struct spinand_ondie_ecc_conf {
 	u8 status;
+	u8 *oob_buf;
+	void *src_oob_buf;
 };
 
 /**
-- 
2.25.1


_______________________________________________
Linux-mediatek mailing list
Linux-mediatek@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-mediatek

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [RFC,v4,5/5] arm64: dts: mtk: Add snfi node
  2021-11-30  8:31 ` Xiangsheng Hou
@ 2021-11-30  8:32   ` Xiangsheng Hou
  -1 siblings, 0 replies; 36+ messages in thread
From: Xiangsheng Hou @ 2021-11-30  8:32 UTC (permalink / raw)
  To: miquel.raynal, broonie
  Cc: xiangsheng.hou, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Add an snfi node for the SPI NAND controller.
Only the MT7622 is taken as an example for now.

Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
---
 arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts | 16 ++++++++++++++++
 arch/arm64/boot/dts/mediatek/mt7622.dtsi     | 13 +++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
index 596c073d8b05..1a5bf553f3a3 100644
--- a/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
+++ b/arch/arm64/boot/dts/mediatek/mt7622-rfb1.dts
@@ -530,6 +530,22 @@ &spi0 {
 	status = "okay";
 };
 
+&snfi {
+	pinctrl-names = "default";
+	pinctrl-0 = <&snfi_pins>;
+	nand-ecc-engine = <&bch>;
+	status = "disabled";
+
+	spi_nand@0 {
+		compatible = "spi-nand";
+		reg = <0>;
+		spi-max-frequency = <104000000>;
+		spi-tx-bus-width = <4>;
+		spi-rx-bus-width = <4>;
+		nand-ecc-engine = <&snfi>;
+	};
+};
+
 &spi1 {
 	pinctrl-names = "default";
 	pinctrl-0 = <&spic1_pins>;
diff --git a/arch/arm64/boot/dts/mediatek/mt7622.dtsi b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
index 6f8cb3ad1e84..229ec2a3a65e 100644
--- a/arch/arm64/boot/dts/mediatek/mt7622.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt7622.dtsi
@@ -497,6 +497,19 @@ spi0: spi@1100a000 {
 		status = "disabled";
 	};
 
+	snfi: spi@1100d000 {
+		compatible = "mediatek,mt7622-snfi";
+		reg = <0 0x1100d000 0 0x1000>;
+		interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_LOW>;
+		clocks = <&infracfg_ao CLK_INFRA_AO_NFI_BCLK_CK_SET>,
+			 <&infracfg_ao CLK_INFRA_AO_NFI_INFRA_BCLK_CK_SET>,
+			 <&infracfg_ao CLK_INFRA_AO_NFI_HCLK_CK_SET>;
+		clock-names = "nfi_clk", "snfi_clk", "hclk";
+		#address-cells = <1>;
+		#size-cells = <0>;
+		status = "disabled";
+	};
+
 	thermal: thermal@1100b000 {
 		#thermal-sensor-cells = <1>;
 		compatible = "mediatek,mt7622-thermal";
-- 
2.25.1



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [RFC,v4,3/5] spi: mtk: Add mediatek SPI Nand Flash interface driver
  2021-11-30  8:32   ` Xiangsheng Hou
@ 2021-12-09 10:20     ` Miquel Raynal
  -1 siblings, 0 replies; 36+ messages in thread
From: Miquel Raynal @ 2021-12-09 10:20 UTC (permalink / raw)
  To: Xiangsheng Hou
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Xiangsheng,

xiangsheng.hou@mediatek.com wrote on Tue, 30 Nov 2021 16:32:00 +0800:

> The SPI Nand Flash interface driver cowork with Mediatek pipelined
> HW ECC engine.
> 
> Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
> ---
>  drivers/spi/Kconfig        |   11 +
>  drivers/spi/Makefile       |    1 +
>  drivers/spi/spi-mtk-snfi.c | 1117 ++++++++++++++++++++++++++++++++++++
>  3 files changed, 1129 insertions(+)
>  create mode 100644 drivers/spi/spi-mtk-snfi.c
> 
> diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
> index 596705d24400..9cb6a173b1ef 100644
> --- a/drivers/spi/Kconfig
> +++ b/drivers/spi/Kconfig
> @@ -535,6 +535,17 @@ config SPI_MT65XX
>  	  say Y or M here.If you are not sure, say N.
>  	  SPI drivers for Mediatek MT65XX and MT81XX series ARM SoCs.
>  
> +config SPI_MTK_SNFI
> +	tristate "MediaTek SPI NAND interface"
> +	depends on MTD
> +	select MTD_SPI_NAND
> +	select MTD_NAND_ECC_MTK
> +	help
> +	  This selects the SPI NAND FLASH interface(SNFI),
> +	  which could be found on MediaTek Soc.
> +	  Say Y or M here.If you are not sure, say N.
> +	  Note Parallel Nand and SPI NAND is alternative on MediaTek SoCs.
> +
>  config SPI_MT7621
>  	tristate "MediaTek MT7621 SPI Controller"
>  	depends on RALINK || COMPILE_TEST
> diff --git a/drivers/spi/Makefile b/drivers/spi/Makefile
> index dd7393a6046f..57d11eecf662 100644
> --- a/drivers/spi/Makefile
> +++ b/drivers/spi/Makefile
> @@ -73,6 +73,7 @@ obj-$(CONFIG_SPI_MPC52xx)		+= spi-mpc52xx.o
>  obj-$(CONFIG_SPI_MT65XX)                += spi-mt65xx.o
>  obj-$(CONFIG_SPI_MT7621)		+= spi-mt7621.o
>  obj-$(CONFIG_SPI_MTK_NOR)		+= spi-mtk-nor.o
> +obj-$(CONFIG_SPI_MTK_SNFI)              += spi-mtk-snfi.o
>  obj-$(CONFIG_SPI_MXIC)			+= spi-mxic.o
>  obj-$(CONFIG_SPI_MXS)			+= spi-mxs.o
>  obj-$(CONFIG_SPI_NPCM_FIU)		+= spi-npcm-fiu.o
> diff --git a/drivers/spi/spi-mtk-snfi.c b/drivers/spi/spi-mtk-snfi.c
> new file mode 100644
> index 000000000000..b4dce6d78176
> --- /dev/null
> +++ b/drivers/spi/spi-mtk-snfi.c
> @@ -0,0 +1,1117 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Driver for MediaTek SPI Nand Flash interface
> + *
> + * Copyright (C) 2021 MediaTek Inc.
> + * Authors:	Xiangsheng Hou	<xiangsheng.hou@mediatek.com>
> + */
> +
> +#include <linux/clk.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/interrupt.h>
> +#include <linux/iopoll.h>
> +#include <linux/module.h>
> +#include <linux/mtd/nand.h>
> +#include <linux/mtd/nand-ecc-mtk.h>
> +#include <linux/of.h>
> +#include <linux/of_device.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/spi/spi.h>
> +#include <linux/spi/spi-mem.h>
> +
> +/* Registers used by the driver */
> +#define NFI_CNFG		(0x00)
> +#define		CNFG_DMA		BIT(0)
> +#define		CNFG_READ_EN		BIT(1)
> +#define		CNFG_DMA_BURST_EN	BIT(2)
> +#define		CNFG_HW_ECC_EN		BIT(8)
> +#define		CNFG_AUTO_FMT_EN	BIT(9)
> +#define		CNFG_OP_CUST		GENMASK(14, 13)
> +#define NFI_PAGEFMT		(0x04)
> +#define		PAGEFMT_512_2K		(0)
> +#define		PAGEFMT_2K_4K		(1)
> +#define		PAGEFMT_4K_8K		(2)
> +#define		PAGEFMT_8K_16K		(3)
> +#define		PAGEFMT_PAGE_MASK	GENMASK(1, 0)
> +#define		PAGEFMT_SEC_SEL_512	BIT(2)
> +#define		PAGEFMT_FDM_SHIFT	(8)
> +#define		PAGEFMT_FDM_ECC_SHIFT	(12)
> +#define		PAGEFMT_SPARE_SHIFT	(16)
> +#define		PAGEFMT_SPARE_MASK	GENMASK(21, 16)
> +#define NFI_CON			(0x08)
> +#define		CON_FIFO_FLUSH		BIT(0)
> +#define		CON_NFI_RST		BIT(1)
> +#define		CON_BRD			BIT(8)
> +#define		CON_BWR			BIT(9)
> +#define		CON_SEC_SHIFT		(12)
> +#define		CON_SEC_MASK		GENMASK(16, 12)
> +#define NFI_INTR_EN		(0x10)
> +#define		INTR_CUS_PROG_EN	BIT(7)
> +#define		INTR_CUS_READ_EN	BIT(8)
> +#define		INTR_IRQ_EN		BIT(31)
> +#define NFI_INTR_STA		(0x14)
> +#define NFI_CMD			(0x20)
> +#define		CMD_DUMMY		(0x00)
> +#define NFI_STRDATA		(0x40)
> +#define		STAR_EN			BIT(0)
> +#define NFI_STA			(0x60)
> +#define		NFI_FSM_MASK		GENMASK(19, 16)
> +#define		STA_EMP_PAGE		BIT(12)
> +#define NFI_ADDRCNTR		(0x70)
> +#define		CNTR_MASK		GENMASK(16, 12)
> +#define		ADDRCNTR_SEC_SHIFT	(12)
> +#define		ADDRCNTR_SEC(val) \
> +		(((val) & CNTR_MASK) >> ADDRCNTR_SEC_SHIFT)
> +#define NFI_STRADDR		(0x80)
> +#define NFI_BYTELEN		(0x84)
> +#define NFI_FDML(x)		(0xA0 + (x) * sizeof(u32) * 2)
> +#define NFI_FDMM(x)		(0xA4 + (x) * sizeof(u32) * 2)
> +#define NFI_MASTERSTA		(0x224)
> +#define		AHB_BUS_BUSY		GENMASK(1, 0)
> +#define SNFI_MAC_CTL		(0x500)
> +#define		MAC_WIP			BIT(0)
> +#define		MAC_WIP_READY		BIT(1)
> +#define		MAC_TRIG		BIT(2)
> +#define		MAC_EN			BIT(3)
> +#define		MAC_SIO_SEL		BIT(4)
> +#define SNFI_MAC_OUTL		(0x504)
> +#define SNFI_MAC_INL		(0x508)
> +#define SNFI_RD_CTL2		(0x510)
> +#define		RD_CMD_MASK		GENMASK(7, 0)
> +#define		RD_DUMMY_SHIFT		(8)
> +#define SNFI_RD_CTL3		(0x514)
> +#define		RD_ADDR_MASK		GENMASK(16, 0)
> +#define SNFI_PG_CTL1		(0x524)
> +#define		WR_LOAD_CMD_MASK	GENMASK(15, 8)
> +#define		WR_LOAD_CMD_SHIFT	(8)
> +#define SNFI_PG_CTL2		(0x528)
> +#define		WR_LOAD_ADDR_MASK	GENMASK(15, 0)
> +#define SNFI_MISC_CTL		(0x538)
> +#define		RD_CUSTOM_EN		BIT(6)
> +#define		WR_CUSTOM_EN		BIT(7)
> +#define		LATCH_LAT_SHIFT		(8)
> +#define		LATCH_LAT_MASK		GENMASK(9, 8)
> +#define		RD_MODE_X2		BIT(16)
> +#define		RD_MODE_X4		BIT(17)
> +#define		RD_MODE_DQUAL		BIT(18)
> +#define		RD_MODE_MASK		GENMASK(18, 16)
> +#define		WR_X4_EN		BIT(20)
> +#define		SW_RST			BIT(28)
> +#define SNFI_MISC_CTL2		(0x53c)
> +#define		WR_LEN_SHIFT		(16)
> +#define SNFI_DLY_CTL3		(0x548)
> +#define		SAM_DLY_MASK		GENMASK(5, 0)
> +#define SNFI_STA_CTL1		(0x550)
> +#define		SPI_STATE		GENMASK(3, 0)
> +#define		CUS_READ_DONE		BIT(27)
> +#define		CUS_PROG_DONE		BIT(28)
> +#define SNFI_CNFG		(0x55c)
> +#define		SNFI_MODE_EN		BIT(0)
> +#define SNFI_GPRAM_DATA		(0x800)
> +#define		SNFI_GPRAM_MAX_LEN	(160)
> +
> +#define MTK_SNFI_TIMEOUT		(500000)
> +#define MTK_SNFI_RESET_TIMEOUT		(1000000)
> +#define MTK_SNFI_AUTOSUSPEND_DELAY	(1000)
> +#define KB(x)				((x) * 1024UL)
> +
> +struct mtk_snfi_caps {
> +	u8 pageformat_spare_shift;
> +};
> +
> +struct mtk_snfi {
> +	struct device *dev;
> +	struct completion done;
> +	void __iomem *regs;
> +	const struct mtk_snfi_caps *caps;
> +
> +	struct clk *nfi_clk;
> +	struct clk *snfi_clk;
> +	struct clk *hclk;
> +
> +	struct nand_ecc_engine *engine;
> +
> +	u32 sample_delay;
> +	u32 read_latency;
> +
> +	void *tx_buf;
> +	dma_addr_t dma_addr;
> +};
> +
> +static struct mtk_ecc_engine *mtk_snfi_to_ecc_engine(struct mtk_snfi *snfi)
> +{
> +	return snfi->engine->priv;
> +}
> +
> +static void mtk_snfi_mac_enable(struct mtk_snfi *snfi)
> +{
> +	u32 val;
> +
> +	val = readl(snfi->regs + SNFI_MAC_CTL);
> +	val &= ~MAC_SIO_SEL;
> +	val |= MAC_EN;
> +
> +	writel(val, snfi->regs + SNFI_MAC_CTL);
> +}
> +
> +static int mtk_snfi_mac_trigger(struct mtk_snfi *snfi)
> +{
> +	int ret;
> +	u32 val;
> +
> +	val = readl(snfi->regs + SNFI_MAC_CTL);
> +	val |= MAC_TRIG;
> +	writel(val, snfi->regs + SNFI_MAC_CTL);
> +
> +	ret = readl_poll_timeout_atomic(snfi->regs + SNFI_MAC_CTL,
> +					val, val & MAC_WIP_READY,
> +					0, MTK_SNFI_TIMEOUT);
> +	if (ret < 0) {
> +		dev_err(snfi->dev, "wait for wip ready timeout\n");
> +		return -EIO;
> +	}
> +
> +	ret = readl_poll_timeout_atomic(snfi->regs + SNFI_MAC_CTL,
> +					val, !(val & MAC_WIP), 0,
> +					MTK_SNFI_TIMEOUT);
> +	if (ret < 0) {
> +		dev_err(snfi->dev, "command write timeout\n");
> +		return -EIO;
> +	}
> +
> +	return 0;
> +}
> +
> +static void mtk_snfi_mac_disable(struct mtk_snfi *snfi)
> +{
> +	u32 val;
> +
> +	val = readl(snfi->regs + SNFI_MAC_CTL);
> +	val &= ~(MAC_TRIG | MAC_EN);
> +	writel(val, snfi->regs + SNFI_MAC_CTL);
> +}
> +
> +static int mtk_snfi_mac_op(struct mtk_snfi *snfi)
> +{
> +	int ret;
> +
> +	mtk_snfi_mac_enable(snfi);
> +	ret = mtk_snfi_mac_trigger(snfi);
> +	mtk_snfi_mac_disable(snfi);
> +
> +	return ret;
> +}
> +
> +static inline void mtk_snfi_read_oob_free(struct mtk_snfi *snfi,
> +					  const struct spi_mem_op *op)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	u8 *oobptr = op->data.buf.in;
> +	u32 vall, valm;
> +	int i, j;
> +
> +	oobptr += eng->section_size * eng->nsteps;
> +	for (i = 0; i < eng->nsteps; i++) {
> +		vall = readl(snfi->regs + NFI_FDML(i));
> +		valm = readl(snfi->regs + NFI_FDMM(i));
> +
> +		for (j = 0; j < eng->oob_free; j++)
> +			oobptr[j] = (j >= 4 ? valm : vall) >> ((j % 4) * 8);
> +
> +		oobptr += eng->oob_free;
> +	}
> +}
> +
> +static inline void mtk_snfi_write_oob_free(struct mtk_snfi *snfi,
> +					   const struct spi_mem_op *op)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	const u8 *oobptr = op->data.buf.out;
> +	u32 vall, valm;
> +	int i, j;
> +
> +	oobptr += eng->section_size * eng->nsteps;
> +	for (i = 0; i < eng->nsteps; i++) {
> +		vall = 0;
> +		valm = 0;
> +		for (j = 0; j < 8; j++) {
> +			if (j < 4)
> +				vall |= (j < eng->oob_free ? oobptr[j] : 0xff)
> +					<< (j * 8);
> +			else
> +				valm |= (j < eng->oob_free ? oobptr[j] : 0xff)
> +					<< ((j - 4) * 8);
> +		}
> +
> +		writel(vall, snfi->regs + NFI_FDML(i));
> +		writel(valm, snfi->regs + NFI_FDMM(i));
> +		oobptr += eng->oob_free;
> +	}
> +}
> +
> +static irqreturn_t mtk_snfi_irq(int irq, void *id)
> +{
> +	struct mtk_snfi *snfi = id;
> +	u32 sta, ien;
> +
> +	sta = readl(snfi->regs + NFI_INTR_STA);
> +	ien = readl(snfi->regs + NFI_INTR_EN);
> +
> +	if (!(sta & ien))
> +		return IRQ_NONE;
> +
> +	writel(0, snfi->regs + NFI_INTR_EN);
> +	complete(&snfi->done);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static int mtk_snfi_enable_clk(struct device *dev, struct mtk_snfi *snfi)
> +{
> +	int ret;
> +
> +	ret = clk_prepare_enable(snfi->nfi_clk);
> +	if (ret) {
> +		dev_err(dev, "failed to enable nfi clk\n");
> +		return ret;
> +	}
> +
> +	ret = clk_prepare_enable(snfi->snfi_clk);
> +	if (ret) {
> +		dev_err(dev, "failed to enable snfi clk\n");
> +		clk_disable_unprepare(snfi->nfi_clk);
> +		return ret;
> +	}
> +
> +	ret = clk_prepare_enable(snfi->hclk);
> +	if (ret) {
> +		dev_err(dev, "failed to enable hclk\n");
> +		clk_disable_unprepare(snfi->nfi_clk);
> +		clk_disable_unprepare(snfi->snfi_clk);

definitely deserves goto statements :)

> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static void mtk_snfi_disable_clk(struct mtk_snfi *snfi)
> +{
> +	clk_disable_unprepare(snfi->nfi_clk);
> +	clk_disable_unprepare(snfi->snfi_clk);
> +	clk_disable_unprepare(snfi->hclk);
> +}
> +
> +static int mtk_snfi_reset(struct mtk_snfi *snfi)
> +{
> +	u32 val;
> +	int ret;
> +
> +	val = readl(snfi->regs + SNFI_MISC_CTL) | SW_RST;
> +	writel(val, snfi->regs + SNFI_MISC_CTL);
> +
> +	ret = readw_poll_timeout(snfi->regs + SNFI_STA_CTL1, val,
> +				 !(val & SPI_STATE), 0,
> +				 MTK_SNFI_RESET_TIMEOUT);
> +	if (ret) {
> +		dev_warn(snfi->dev, "wait spi idle timeout 0x%x\n", val);
> +		return ret;
> +	}
> +
> +	val = readl(snfi->regs + SNFI_MISC_CTL);
> +	val &= ~SW_RST;
> +	writel(val, snfi->regs + SNFI_MISC_CTL);
> +
> +	writew(CON_FIFO_FLUSH | CON_NFI_RST, snfi->regs + NFI_CON);
> +	ret = readw_poll_timeout(snfi->regs + NFI_STA, val,
> +				 !(val & NFI_FSM_MASK), 0,
> +				 MTK_SNFI_RESET_TIMEOUT);
> +	if (ret) {
> +		dev_warn(snfi->dev, "wait nfi fsm idle timeout 0x%x\n", val);
> +		return ret;
> +	}
> +
> +	val = readl(snfi->regs + NFI_STRDATA);
> +	val &= ~STAR_EN;
> +	writew(val, snfi->regs + NFI_STRDATA);
> +
> +	return 0;
> +}
> +
> +static int mtk_snfi_init(struct mtk_snfi *snfi)
> +{
> +	int ret;
> +	u32 val;
> +
> +	ret = mtk_snfi_reset(snfi);
> +	if (ret)
> +		return ret;
> +
> +	writel(SNFI_MODE_EN, snfi->regs + SNFI_CNFG);
> +
> +	if (snfi->sample_delay) {
> +		val = readl(snfi->regs + SNFI_DLY_CTL3);
> +		val &= ~SAM_DLY_MASK;
> +		val |= snfi->sample_delay;
> +		writel(val, snfi->regs + SNFI_DLY_CTL3);
> +	}
> +
> +	if (snfi->read_latency) {
> +		val = readl(snfi->regs + SNFI_MISC_CTL);
> +		val &= ~LATCH_LAT_MASK;
> +		val |= (snfi->read_latency << LATCH_LAT_SHIFT);
> +		writel(val, snfi->regs + SNFI_MISC_CTL);
> +	}
> +
> +	return 0;
> +}
> +
> +static void mtk_snfi_prepare_for_tx(struct mtk_snfi *snfi,
> +				    const struct spi_mem_op *op)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	u32 val;
> +
> +	val = readl(snfi->regs + SNFI_PG_CTL1);
> +	val &= ~WR_LOAD_CMD_MASK;
> +	val |= op->cmd.opcode << WR_LOAD_CMD_SHIFT;
> +	writel(val, snfi->regs + SNFI_PG_CTL1);
> +
> +	writel(op->addr.val & WR_LOAD_ADDR_MASK,
> +	       snfi->regs + SNFI_PG_CTL2);
> +
> +	val = readl(snfi->regs + SNFI_MISC_CTL);
> +	val |= WR_CUSTOM_EN;
> +	if (op->data.buswidth == 4)
> +		val |= WR_X4_EN;
> +	writel(val, snfi->regs + SNFI_MISC_CTL);
> +
> +	val = eng->nsteps * (eng->oob_per_section + eng->section_size);
> +	writel(val << WR_LEN_SHIFT, snfi->regs + SNFI_MISC_CTL2);
> +
> +	writel(INTR_CUS_PROG_EN | INTR_IRQ_EN, snfi->regs + NFI_INTR_EN);
> +}
> +
> +static void mtk_snfi_prepare_for_rx(struct mtk_snfi *snfi,
> +				    const struct spi_mem_op *op)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	u32 val, dummy_cycle;
> +
> +	dummy_cycle = (op->dummy.nbytes << 3) >>
> +			(ffs(op->dummy.buswidth) - 1);
> +	val = (op->cmd.opcode & RD_CMD_MASK) |
> +		  (dummy_cycle << RD_DUMMY_SHIFT);
> +	writel(val, snfi->regs + SNFI_RD_CTL2);
> +
> +	writel(op->addr.val & RD_ADDR_MASK,
> +	       snfi->regs + SNFI_RD_CTL3);
> +
> +	val = readl(snfi->regs + SNFI_MISC_CTL);
> +	val |= RD_CUSTOM_EN;
> +	val &= ~RD_MODE_MASK;
> +	if (op->data.buswidth == 4)
> +		val |= RD_MODE_X4;
> +	else if (op->data.buswidth == 2)
> +		val |= RD_MODE_X2;
> +
> +	if (op->addr.buswidth != 1)
> +		val |= RD_MODE_DQUAL;
> +
> +	writel(val, snfi->regs + SNFI_MISC_CTL);
> +
> +	val = eng->nsteps * (eng->oob_per_section + eng->section_size);
> +	writel(val, snfi->regs + SNFI_MISC_CTL2);
> +
> +	writel(INTR_CUS_READ_EN | INTR_IRQ_EN, snfi->regs + NFI_INTR_EN);
> +}
> +
> +static int mtk_snfi_prepare(struct mtk_snfi *snfi,
> +			    const struct spi_mem_op *op, bool rx)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	dma_addr_t addr;
> +	int ret;
> +	u32 val;
> +
> +	addr = dma_map_single(snfi->dev,
> +			      op->data.buf.in, op->data.nbytes,
> +			      rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	ret = dma_mapping_error(snfi->dev, addr);
> +	if (ret) {
> +		dev_err(snfi->dev, "dma mapping error\n");
> +		return -EINVAL;
> +	}
> +
> +	snfi->dma_addr = addr;
> +	writel(lower_32_bits(addr), snfi->regs + NFI_STRADDR);
> +
> +	if (op->ecc_en && !rx)
> +		mtk_snfi_write_oob_free(snfi, op);
> +
> +	val = readw(snfi->regs + NFI_CNFG);
> +	val |= CNFG_DMA | CNFG_DMA_BURST_EN | CNFG_OP_CUST;
> +	val |= rx ? CNFG_READ_EN : 0;
> +
> +	if (op->ecc_en)
> +		val |= CNFG_HW_ECC_EN | CNFG_AUTO_FMT_EN;
> +
> +	writew(val, snfi->regs + NFI_CNFG);
> +
> +	writel(eng->nsteps << CON_SEC_SHIFT, snfi->regs + NFI_CON);
> +
> +	init_completion(&snfi->done);
> +
> +	/* trigger state machine to custom op mode */
> +	writel(CMD_DUMMY, snfi->regs + NFI_CMD);
> +
> +	if (rx)
> +		mtk_snfi_prepare_for_rx(snfi, op);
> +	else
> +		mtk_snfi_prepare_for_tx(snfi, op);
> +
> +	return 0;
> +}
> +
> +static void mtk_snfi_trigger(struct mtk_snfi *snfi,
> +			     const struct spi_mem_op *op, bool rx)
> +{
> +	u32 val;
> +
> +	val = readl(snfi->regs + NFI_CON);
> +	val |= rx ? CON_BRD : CON_BWR;
> +	writew(val, snfi->regs + NFI_CON);
> +
> +	writew(STAR_EN, snfi->regs + NFI_STRDATA);
> +}
> +
> +static int mtk_snfi_wait_done(struct mtk_snfi *snfi,
> +			      const struct spi_mem_op *op, bool rx)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	struct device *dev = snfi->dev;
> +	u32 val;
> +	int ret;
> +
> +	ret = wait_for_completion_timeout(&snfi->done, msecs_to_jiffies(500));
> +	if (!ret) {
> +		dev_err(dev, "wait for %d completion done timeout\n", rx);
> +		return -ETIMEDOUT;
> +	}
> +
> +	if (rx) {
> +		ret = readl_poll_timeout_atomic(snfi->regs + NFI_BYTELEN,
> +						val,
> +						ADDRCNTR_SEC(val) >=
> +						eng->nsteps,
> +						0, MTK_SNFI_TIMEOUT);
> +		if (ret) {
> +			dev_err(dev, "wait for rx section count timeout\n");
> +			return -ETIMEDOUT;
> +		}
> +
> +		ret = readl_poll_timeout_atomic(snfi->regs + NFI_MASTERSTA,
> +						val,
> +						!(val & AHB_BUS_BUSY),
> +						0, MTK_SNFI_TIMEOUT);
> +		if (ret) {
> +			dev_err(dev, "wait for bus busy timeout\n");
> +			return -ETIMEDOUT;
> +		}
> +	} else {
> +		ret = readl_poll_timeout_atomic(snfi->regs + NFI_ADDRCNTR,
> +						val,
> +						ADDRCNTR_SEC(val) >=
> +						eng->nsteps,
> +						0, MTK_SNFI_TIMEOUT);
> +		if (ret) {
> +			dev_err(dev, "wait for tx section count timeout\n");
> +			return -ETIMEDOUT;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static void mtk_snfi_complete(struct mtk_snfi *snfi,
> +			      const struct spi_mem_op *op, bool rx)
> +{
> +	u32 val;
> +
> +	dma_unmap_single(snfi->dev,
> +			 snfi->dma_addr, op->data.nbytes,
> +			 rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +
> +	if (op->ecc_en && rx)
> +		mtk_snfi_read_oob_free(snfi, op);
> +
> +	val = readl(snfi->regs + SNFI_MISC_CTL);
> +	val &= rx ? ~RD_CUSTOM_EN : ~WR_CUSTOM_EN;
> +	writel(val, snfi->regs + SNFI_MISC_CTL);
> +
> +	val = readl(snfi->regs + SNFI_STA_CTL1);
> +	val |= rx ? CUS_READ_DONE : CUS_PROG_DONE;
> +	writew(val, snfi->regs + SNFI_STA_CTL1);
> +	val &= rx ? ~CUS_READ_DONE : ~CUS_PROG_DONE;
> +	writew(val, snfi->regs + SNFI_STA_CTL1);
> +
> +	/* Disable interrupt */
> +	val = readl(snfi->regs + NFI_INTR_EN);
> +	val &= rx ? ~INTR_CUS_READ_EN : ~INTR_CUS_PROG_EN;
> +	writew(val, snfi->regs + NFI_INTR_EN);
> +
> +	writew(0, snfi->regs + NFI_CNFG);
> +	writew(0, snfi->regs + NFI_CON);
> +}
> +
> +static int mtk_snfi_transfer_dma(struct mtk_snfi *snfi,
> +				 const struct spi_mem_op *op, bool rx)
> +{
> +	int ret;
> +
> +	ret = mtk_snfi_prepare(snfi, op, rx);
> +	if (ret)
> +		return ret;
> +
> +	mtk_snfi_trigger(snfi, op, rx);
> +
> +	ret = mtk_snfi_wait_done(snfi, op, rx);
> +
> +	mtk_snfi_complete(snfi, op, rx);
> +
> +	return ret;
> +}
> +
> +static int mtk_snfi_transfer_mac(struct mtk_snfi *snfi,
> +				 const u8 *txbuf, u8 *rxbuf,
> +				 const u32 txlen, const u32 rxlen)
> +{
> +	u32 i, j, val, tmp;
> +	u8 *p_tmp = (u8 *)(&tmp);
> +	u32 offset = 0;
> +	int ret = 0;
> +
> +	/* Move tx data to gpram in snfi mac mode */
> +	for (i = 0; i < txlen; ) {
> +		for (j = 0, tmp = 0; i < txlen && j < 4; i++, j++)
> +			p_tmp[j] = txbuf[i];
> +
> +		writel(tmp, snfi->regs + SNFI_GPRAM_DATA + offset);
> +		offset += 4;
> +	}
> +
> +	writel(txlen, snfi->regs + SNFI_MAC_OUTL);
> +	writel(rxlen, snfi->regs + SNFI_MAC_INL);
> +
> +	ret = mtk_snfi_mac_op(snfi);
> +	if (ret) {
> +		dev_warn(snfi->dev, "snfi mac operation fail\n");
> +		return ret;
> +	}
> +
> +	/* Get tx data from gpram in snfi mac mode */
> +	if (rxlen)
> +		for (i = 0, offset = rounddown(txlen, 4); i < rxlen; ) {
> +			val = readl(snfi->regs +
> +				    SNFI_GPRAM_DATA + offset);
> +			for (j = 0; i < rxlen && j < 4; i++, j++, rxbuf++) {
> +				if (i == 0)
> +					j = txlen % 4;
> +				*rxbuf = (val >> (j * 8)) & 0xff;
> +			}
> +			offset += 4;
> +		}
> +
> +	return ret;
> +}
> +
> +static int mtk_snfi_exec_op(struct spi_mem *mem,
> +			    const struct spi_mem_op *op)
> +{
> +	struct mtk_snfi *snfi = spi_controller_get_devdata(mem->spi->master);
> +	u8 *buf, *txbuf = snfi->tx_buf, *rxbuf = NULL;
> +	u32 txlen = 0, rxlen = 0;
> +	int i, ret = 0;
> +	bool rx;
> +
> +	rx = op->data.dir == SPI_MEM_DATA_IN;
> +
> +	ret = mtk_snfi_reset(snfi);
> +	if (ret) {
> +		dev_warn(snfi->dev, "snfi reset fail\n");
> +		return ret;
> +	}
> +
> +	/*
> +	 * If tx/rx data buswidth is not 0/1, use snfi DMA mode.
> +	 * Otherwise, use snfi mac mode.
> +	 */
> +	if (op->data.buswidth != 1 && op->data.buswidth != 0) {
> +		ret = mtk_snfi_transfer_dma(snfi, op, rx);
> +		if (ret)
> +			dev_warn(snfi->dev, "snfi dma transfer %d fail %d\n",
> +				 rx, ret);
> +		return ret;
> +	}
> +
> +	txbuf[txlen++] = op->cmd.opcode;
> +
> +	if (op->addr.nbytes)
> +		for (i = 0; i < op->addr.nbytes; i++)
> +			txbuf[txlen++] = op->addr.val >>
> +					(8 * (op->addr.nbytes - i - 1));
> +
> +	txlen += op->dummy.nbytes;
> +
> +	if (op->data.dir == SPI_MEM_DATA_OUT) {
> +		buf = (u8 *)op->data.buf.out;
> +		for (i = 0; i < op->data.nbytes; i++)
> +			txbuf[txlen++] = buf[i];
> +	}
> +
> +	if (op->data.dir == SPI_MEM_DATA_IN) {
> +		rxbuf = (u8 *)op->data.buf.in;
> +		rxlen = op->data.nbytes;
> +	}
> +
> +	ret = mtk_snfi_transfer_mac(snfi, txbuf, rxbuf, txlen, rxlen);
> +	if (ret)
> +		dev_warn(snfi->dev, "snfi mac transfer %d fail %d\n",
> +			 op->data.dir, ret);
> +
> +	return ret;
> +}
> +
> +static int mtk_snfi_check_buswidth(u8 width)
> +{
> +	switch (width) {
> +	case 1:
> +	case 2:
> +	case 4:
> +		return 0;
> +
> +	default:
> +		break;
> +	}
> +
> +	return -EOPNOTSUPP;
> +}
> +
> +static bool mtk_snfi_supports_op(struct spi_mem *mem,
> +				 const struct spi_mem_op *op)
> +{
> +	int ret = 0;
> +
> +	if (!spi_mem_default_supports_op(mem, op))

With the integration properly set, this call would always return false
when ecc_en = true. You should switch to

	spi_mem_generic_supports_op(mem, op, false, true);

> +		return false;
> +
> +	if (op->cmd.buswidth != 1)
> +		return false;
> +
> +	/*
> +	 * An operation uses snfi mac mode when the data buswidth
> +	 * is 0/1. However, the HW ECC engine cannot be used in
> +	 * mac mode.
> +	 */
> +	if (op->ecc_en && op->data.buswidth == 1 &&
> +	    op->data.nbytes >= SNFI_GPRAM_MAX_LEN)
> +		return false;
> +
> +	switch (op->data.dir) {
> +	/* spi mem data in supports 1/2/4 buswidth */
> +	case SPI_MEM_DATA_IN:
> +		if (op->addr.nbytes)
> +			ret |= mtk_snfi_check_buswidth(op->addr.buswidth);
> +
> +		if (op->dummy.nbytes)
> +			ret |= mtk_snfi_check_buswidth(op->dummy.buswidth);
> +
> +		if (op->data.nbytes)
> +			ret |= mtk_snfi_check_buswidth(op->data.buswidth);
> +
> +		if (ret)
> +			return false;
> +
> +		break;
> +	case SPI_MEM_DATA_OUT:
> +		/*
> +		 * For spi mem data out, can support 0/1 buswidth
> +		 * for addr/dummy and 1/4 buswidth for data.
> +		 */
> +		if (op->addr.buswidth != 0 && op->addr.buswidth != 1)
> +			return false;
> +
> +		if (op->dummy.buswidth != 0 && op->dummy.buswidth != 1)
> +			return false;
> +
> +		if (op->data.buswidth != 1 && op->data.buswidth != 4)
> +			return false;
> +
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return true;
> +}
> +
> +static int mtk_snfi_adjust_op_size(struct spi_mem *mem,
> +				   struct spi_mem_op *op)
> +{
> +	u32 len, max_len;
> +
> +	/*
> +	 * The op size is limited to SNFI_GPRAM_MAX_LEN when the
> +	 * data buswidth is 0/1 (snfi mac mode). Otherwise, the
> +	 * snfi supports up to 16KB.
> +	 */
> +	if (op->data.buswidth == 1 || op->data.buswidth == 0)
> +		max_len = SNFI_GPRAM_MAX_LEN;
> +	else
> +		max_len = KB(16);
> +
> +	len = op->cmd.nbytes + op->addr.nbytes + op->dummy.nbytes;
> +	if (len > max_len)
> +		return -EOPNOTSUPP;
> +
> +	if ((len + op->data.nbytes) > max_len)
> +		op->data.nbytes = max_len - len;
> +
> +	return 0;
> +}
> +
> +static const struct mtk_snfi_caps mtk_snfi_caps_mt7622 = {
> +	.pageformat_spare_shift = 16,
> +};
> +
> +static const struct spi_controller_mem_ops mtk_snfi_ops = {
> +	.adjust_op_size = mtk_snfi_adjust_op_size,
> +	.supports_op = mtk_snfi_supports_op,
> +	.exec_op = mtk_snfi_exec_op,
> +};
> +
> +static const struct of_device_id mtk_snfi_id_table[] = {
> +	{ .compatible = "mediatek,mt7622-snfi",
> +	  .data = &mtk_snfi_caps_mt7622,
> +	},
> +	{  /* sentinel */ }
> +};
> +
> +/* ECC wrapper */
> +static struct mtk_snfi *mtk_nand_to_spi(struct nand_device *nand)
> +{
> +	struct device *dev = nand->ecc.engine->dev;
> +	struct spi_master *master = dev_get_drvdata(dev);
> +	struct mtk_snfi *snfi = spi_master_get_devdata(master);
> +
> +	return snfi;
> +}
> +
> +static int mtk_snfi_config(struct nand_device *nand,
> +			   struct mtk_snfi *snfi)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	u32 val;
> +
> +	switch (nanddev_page_size(nand)) {
> +	case 512:
> +		val = PAGEFMT_512_2K | PAGEFMT_SEC_SEL_512;
> +		break;
> +	case KB(2):
> +		if (eng->section_size == 512)
> +			val = PAGEFMT_2K_4K | PAGEFMT_SEC_SEL_512;
> +		else
> +			val = PAGEFMT_512_2K;
> +		break;
> +	case KB(4):
> +		if (eng->section_size == 512)
> +			val = PAGEFMT_4K_8K | PAGEFMT_SEC_SEL_512;
> +		else
> +			val = PAGEFMT_2K_4K;
> +		break;
> +	case KB(8):
> +		if (eng->section_size == 512)
> +			val = PAGEFMT_8K_16K | PAGEFMT_SEC_SEL_512;
> +		else
> +			val = PAGEFMT_4K_8K;
> +		break;
> +	case KB(16):
> +		val = PAGEFMT_8K_16K;
> +		break;
> +	default:
> +		dev_err(snfi->dev, "invalid page len: %d\n",
> +			nanddev_page_size(nand));
> +		return -EINVAL;
> +	}
> +
> +	val |= eng->oob_per_section_idx << PAGEFMT_SPARE_SHIFT;
> +	val |= eng->oob_free << PAGEFMT_FDM_SHIFT;
> +	val |= eng->oob_free_protected << PAGEFMT_FDM_ECC_SHIFT;
> +	writel(val, snfi->regs + NFI_PAGEFMT);

Shouldn't this be calculated only once?

> +
> +	return 0;
> +}
> +
> +static int mtk_snfi_ecc_init_ctx(struct nand_device *nand)
> +{
> +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> +
> +	return ops->init_ctx(nand);
> +}
> +
> +static void mtk_snfi_ecc_cleanup_ctx(struct nand_device *nand)
> +{
> +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> +
> +	ops->cleanup_ctx(nand);
> +}
> +
> +static int mtk_snfi_ecc_prepare_io_req(struct nand_device *nand,
> +				       struct nand_page_io_req *req)
> +{
> +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> +	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
> +	int ret;
> +
> +	ret = mtk_snfi_config(nand, snfi);
> +	if (ret)
> +		return ret;
> +
> +	return ops->prepare_io_req(nand, req);
> +}
> +
> +static int mtk_snfi_ecc_finish_io_req(struct nand_device *nand,
> +				      struct nand_page_io_req *req)
> +{
> +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
> +
> +	if (req->mode != MTD_OPS_RAW)
> +		eng->read_empty = readl(snfi->regs + NFI_STA) & STA_EMP_PAGE;
> +
> +	return ops->finish_io_req(nand, req);
> +}
> +
> +static struct nand_ecc_engine_ops mtk_snfi_ecc_engine_pipelined_ops = {
> +	.init_ctx = mtk_snfi_ecc_init_ctx,
> +	.cleanup_ctx = mtk_snfi_ecc_cleanup_ctx,
> +	.prepare_io_req = mtk_snfi_ecc_prepare_io_req,
> +	.finish_io_req = mtk_snfi_ecc_finish_io_req,
> +};
> +
> +static int mtk_snfi_ecc_probe(struct platform_device *pdev,
> +			      struct mtk_snfi *snfi)
> +{
> +	struct nand_ecc_engine *ecceng;
> +
> +	if (!mtk_ecc_get_pipelined_ops())
> +		return -EOPNOTSUPP;
> +
> +	ecceng = devm_kzalloc(&pdev->dev, sizeof(*ecceng), GFP_KERNEL);
> +	if (!ecceng)
> +		return -ENOMEM;
> +
> +	ecceng->dev = &pdev->dev;
> +	ecceng->ops = &mtk_snfi_ecc_engine_pipelined_ops;

You need to tell the core that this is a pipelined engine (look at the
integration entry).

> +
> +	nand_ecc_register_on_host_hw_engine(ecceng);
> +
> +	snfi->engine = ecceng;
> +
> +	return 0;
> +}
> +
> +static int mtk_snfi_probe(struct platform_device *pdev)
> +{
> +	struct device_node *np = pdev->dev.of_node;
> +	struct spi_controller *ctlr;
> +	struct mtk_snfi *snfi;
> +	struct resource *res;
> +	int ret, irq;
> +	u32 val = 0;
> +
> +	ctlr = spi_alloc_master(&pdev->dev, sizeof(*snfi));
> +	if (!ctlr)
> +		return -ENOMEM;
> +
> +	snfi = spi_controller_get_devdata(ctlr);
> +	snfi->dev = &pdev->dev;
> +
> +	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +	snfi->regs = devm_ioremap_resource(snfi->dev, res);
> +	if (IS_ERR(snfi->regs)) {
> +		ret = PTR_ERR(snfi->regs);
> +		goto err_put_master;
> +	}
> +
> +	ret = of_property_read_u32(np, "sample-delay", &val);
> +	if (!ret)
> +		snfi->sample_delay = val;
> +
> +	ret = of_property_read_u32(np, "read-latency", &val);
> +	if (!ret)
> +		snfi->read_latency = val;
> +
> +	snfi->nfi_clk = devm_clk_get(snfi->dev, "nfi_clk");
> +	if (IS_ERR(snfi->nfi_clk)) {
> +		dev_err(snfi->dev, "not found nfi clk\n");
> +		ret = PTR_ERR(snfi->nfi_clk);
> +		goto err_put_master;
> +	}
> +
> +	snfi->snfi_clk = devm_clk_get(snfi->dev, "snfi_clk");
> +	if (IS_ERR(snfi->snfi_clk)) {
> +		dev_err(snfi->dev, "not found snfi clk\n");
> +		ret = PTR_ERR(snfi->snfi_clk);
> +		goto err_put_master;
> +	}
> +
> +	snfi->hclk = devm_clk_get(snfi->dev, "hclk");
> +	if (IS_ERR(snfi->hclk)) {
> +		dev_err(snfi->dev, "not found hclk\n");
> +		ret = PTR_ERR(snfi->hclk);
> +		goto err_put_master;
> +	}
> +
> +	ret = mtk_snfi_enable_clk(snfi->dev, snfi);
> +	if (ret)
> +		goto err_put_master;
> +
> +	snfi->caps = of_device_get_match_data(snfi->dev);
> +
> +	irq = platform_get_irq(pdev, 0);
> +	if (irq < 0) {
> +		dev_err(snfi->dev, "not found snfi irq resource\n");
> +		ret = -EINVAL;
> +		goto clk_disable;
> +	}
> +
> +	ret = devm_request_irq(snfi->dev, irq, mtk_snfi_irq,
> +			       0, "mtk-snfi", snfi);
> +	if (ret) {
> +		dev_err(snfi->dev, "failed to request snfi irq\n");
> +		goto clk_disable;
> +	}
> +
> +	ret = dma_set_mask(snfi->dev, DMA_BIT_MASK(32));
> +	if (ret) {
> +		dev_err(snfi->dev, "failed to set dma mask\n");
> +		goto clk_disable;
> +	}
> +
> +	snfi->tx_buf = kzalloc(SNFI_GPRAM_MAX_LEN, GFP_KERNEL);
> +	if (!snfi->tx_buf) {
> +		ret = -ENOMEM;
> +		goto clk_disable;
> +	}
> +
> +	ctlr->dev.of_node = np;
> +	ctlr->mem_ops = &mtk_snfi_ops;
> +	ctlr->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_QUAD;
> +	ctlr->auto_runtime_pm = true;
> +
> +	dev_set_drvdata(snfi->dev, ctlr);
> +
> +	ret = mtk_snfi_init(snfi);
> +	if (ret) {
> +		dev_err(snfi->dev, "failed to init snfi\n");
> +		goto free_buf;
> +	}
> +
> +	ret = mtk_snfi_ecc_probe(pdev, snfi);
> +	if (ret) {
> +		dev_warn(snfi->dev, "ECC engine not available\n");
> +		goto free_buf;
> +	}
> +
> +	pm_runtime_enable(snfi->dev);
> +
> +	ret = devm_spi_register_master(snfi->dev, ctlr);
> +	if (ret) {
> +		dev_err(snfi->dev, "failed to register spi master\n");
> +		goto disable_pm_runtime;
> +	}
> +
> +	return 0;
> +
> +disable_pm_runtime:
> +	pm_runtime_disable(snfi->dev);
> +
> +free_buf:
> +	kfree(snfi->tx_buf);
> +
> +clk_disable:
> +	mtk_snfi_disable_clk(snfi);
> +
> +err_put_master:
> +	spi_master_put(ctlr);
> +
> +	return ret;
> +}
> +
> +static int mtk_snfi_remove(struct platform_device *pdev)
> +{
> +	struct spi_controller *ctlr = dev_get_drvdata(&pdev->dev);
> +	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
> +	struct nand_ecc_engine *eng = snfi->engine;
> +
> +	pm_runtime_disable(snfi->dev);
> +	nand_ecc_unregister_on_host_hw_engine(eng);
> +	kfree(snfi->tx_buf);
> +	spi_master_put(ctlr);
> +
> +	return 0;
> +}
> +
> +#ifdef CONFIG_PM
> +static int mtk_snfi_runtime_suspend(struct device *dev)
> +{
> +	struct spi_controller *ctlr = dev_get_drvdata(dev);
> +	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
> +
> +	mtk_snfi_disable_clk(snfi);
> +
> +	return 0;
> +}
> +
> +static int mtk_snfi_runtime_resume(struct device *dev)
> +{
> +	struct spi_controller *ctlr = dev_get_drvdata(dev);
> +	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
> +	int ret;
> +
> +	ret = mtk_snfi_enable_clk(dev, snfi);
> +	if (ret)
> +		return ret;
> +
> +	ret = mtk_snfi_init(snfi);
> +	if (ret)
> +		dev_err(dev, "failed to init snfi\n");
> +
> +	return ret;
> +}
> +#endif /* CONFIG_PM */
> +
> +static const struct dev_pm_ops mtk_snfi_pm_ops = {
> +	SET_RUNTIME_PM_OPS(mtk_snfi_runtime_suspend,
> +			   mtk_snfi_runtime_resume, NULL)
> +	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
> +				pm_runtime_force_resume)
> +};
> +
> +static struct platform_driver mtk_snfi_driver = {
> +	.driver = {
> +		.name	= "mtk-snfi",
> +		.of_match_table = mtk_snfi_id_table,
> +		.pm = &mtk_snfi_pm_ops,
> +	},
> +	.probe		= mtk_snfi_probe,
> +	.remove		= mtk_snfi_remove,
> +};
> +
> +module_platform_driver(mtk_snfi_driver);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_AUTHOR("Xiangsheng Hou <xiangsheng.hou@mediatek.com>");
> +MODULE_DESCRIPTION("Mediatek SPI Nand Flash interface driver");

Otherwise looks good, I believe you can drop the RFC prefix now.

Thanks,
Miquèl

______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC,v4,3/5] spi: mtk: Add mediatek SPI Nand Flash interface driver
@ 2021-12-09 10:20     ` Miquel Raynal
  0 siblings, 0 replies; 36+ messages in thread
From: Miquel Raynal @ 2021-12-09 10:20 UTC (permalink / raw)
  To: Xiangsheng Hou
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Xiangsheng,

xiangsheng.hou@mediatek.com wrote on Tue, 30 Nov 2021 16:32:00 +0800:

> The SPI Nand Flash interface driver cowork with Mediatek pipelined
> HW ECC engine.
> 
> Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
> ---
>  drivers/spi/Kconfig        |   11 +
>  drivers/spi/Makefile       |    1 +
>  drivers/spi/spi-mtk-snfi.c | 1117 ++++++++++++++++++++++++++++++++++++
>  3 files changed, 1129 insertions(+)
>  create mode 100644 drivers/spi/spi-mtk-snfi.c
> 
> diff --git a/drivers/spi/Kconfig b/drivers/spi/Kconfig
> index 596705d24400..9cb6a173b1ef 100644
> --- a/drivers/spi/Kconfig
> +++ b/drivers/spi/Kconfig
> @@ -535,6 +535,17 @@ config SPI_MT65XX
>  	  say Y or M here.If you are not sure, say N.
>  	  SPI drivers for Mediatek MT65XX and MT81XX series ARM SoCs.
>  
> +config SPI_MTK_SNFI
> +	tristate "MediaTek SPI NAND interface"
> +	depends on MTD
> +	select MTD_SPI_NAND
> +	select MTD_NAND_ECC_MTK
> +	help
> +	  This selects the SPI NAND Flash interface (SNFI),
> +	  which can be found on MediaTek SoCs.
> +	  Say Y or M here. If you are not sure, say N.
> +	  Note that parallel NAND and SPI NAND are mutually
> +	  exclusive alternatives on MediaTek SoCs.
> +
>  config SPI_MT7621
>  	tristate "MediaTek MT7621 SPI Controller"
>  	depends on RALINK || COMPILE_TEST
> diff --git a/drivers/spi/Makefile b/drivers/spi/Makefile
> index dd7393a6046f..57d11eecf662 100644
> --- a/drivers/spi/Makefile
> +++ b/drivers/spi/Makefile
> @@ -73,6 +73,7 @@ obj-$(CONFIG_SPI_MPC52xx)		+= spi-mpc52xx.o
>  obj-$(CONFIG_SPI_MT65XX)                += spi-mt65xx.o
>  obj-$(CONFIG_SPI_MT7621)		+= spi-mt7621.o
>  obj-$(CONFIG_SPI_MTK_NOR)		+= spi-mtk-nor.o
> +obj-$(CONFIG_SPI_MTK_SNFI)              += spi-mtk-snfi.o
>  obj-$(CONFIG_SPI_MXIC)			+= spi-mxic.o
>  obj-$(CONFIG_SPI_MXS)			+= spi-mxs.o
>  obj-$(CONFIG_SPI_NPCM_FIU)		+= spi-npcm-fiu.o
> diff --git a/drivers/spi/spi-mtk-snfi.c b/drivers/spi/spi-mtk-snfi.c
> new file mode 100644
> index 000000000000..b4dce6d78176
> --- /dev/null
> +++ b/drivers/spi/spi-mtk-snfi.c
> @@ -0,0 +1,1117 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Driver for MediaTek SPI Nand Flash interface
> + *
> + * Copyright (C) 2021 MediaTek Inc.
> + * Authors:	Xiangsheng Hou	<xiangsheng.hou@mediatek.com>
> + */
> +
> +#include <linux/clk.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/interrupt.h>
> +#include <linux/iopoll.h>
> +#include <linux/module.h>
> +#include <linux/mtd/nand.h>
> +#include <linux/mtd/nand-ecc-mtk.h>
> +#include <linux/of.h>
> +#include <linux/of_device.h>
> +#include <linux/platform_device.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/spi/spi.h>
> +#include <linux/spi/spi-mem.h>
> +
> +/* Registers used by the driver */
> +#define NFI_CNFG		(0x00)
> +#define		CNFG_DMA		BIT(0)
> +#define		CNFG_READ_EN		BIT(1)
> +#define		CNFG_DMA_BURST_EN	BIT(2)
> +#define		CNFG_HW_ECC_EN		BIT(8)
> +#define		CNFG_AUTO_FMT_EN	BIT(9)
> +#define		CNFG_OP_CUST		GENMASK(14, 13)
> +#define NFI_PAGEFMT		(0x04)
> +#define		PAGEFMT_512_2K		(0)
> +#define		PAGEFMT_2K_4K		(1)
> +#define		PAGEFMT_4K_8K		(2)
> +#define		PAGEFMT_8K_16K		(3)
> +#define		PAGEFMT_PAGE_MASK	GENMASK(1, 0)
> +#define		PAGEFMT_SEC_SEL_512	BIT(2)
> +#define		PAGEFMT_FDM_SHIFT	(8)
> +#define		PAGEFMT_FDM_ECC_SHIFT	(12)
> +#define		PAGEFMT_SPARE_SHIFT	(16)
> +#define		PAGEFMT_SPARE_MASK	GENMASK(21, 16)
> +#define NFI_CON			(0x08)
> +#define		CON_FIFO_FLUSH		BIT(0)
> +#define		CON_NFI_RST		BIT(1)
> +#define		CON_BRD			BIT(8)
> +#define		CON_BWR			BIT(9)
> +#define		CON_SEC_SHIFT		(12)
> +#define		CON_SEC_MASK		GENMASK(16, 12)
> +#define NFI_INTR_EN		(0x10)
> +#define		INTR_CUS_PROG_EN	BIT(7)
> +#define		INTR_CUS_READ_EN	BIT(8)
> +#define		INTR_IRQ_EN		BIT(31)
> +#define NFI_INTR_STA		(0x14)
> +#define NFI_CMD			(0x20)
> +#define		CMD_DUMMY		(0x00)
> +#define NFI_STRDATA		(0x40)
> +#define		STAR_EN			BIT(0)
> +#define NFI_STA			(0x60)
> +#define		NFI_FSM_MASK		GENMASK(19, 16)
> +#define		STA_EMP_PAGE		BIT(12)
> +#define NFI_ADDRCNTR		(0x70)
> +#define		CNTR_MASK		GENMASK(16, 12)
> +#define		ADDRCNTR_SEC_SHIFT	(12)
> +#define		ADDRCNTR_SEC(val) \
> +		(((val) & CNTR_MASK) >> ADDRCNTR_SEC_SHIFT)
> +#define NFI_STRADDR		(0x80)
> +#define NFI_BYTELEN		(0x84)
> +#define NFI_FDML(x)		(0xA0 + (x) * sizeof(u32) * 2)
> +#define NFI_FDMM(x)		(0xA4 + (x) * sizeof(u32) * 2)
> +#define NFI_MASTERSTA		(0x224)
> +#define		AHB_BUS_BUSY		GENMASK(1, 0)
> +#define SNFI_MAC_CTL		(0x500)
> +#define		MAC_WIP			BIT(0)
> +#define		MAC_WIP_READY		BIT(1)
> +#define		MAC_TRIG		BIT(2)
> +#define		MAC_EN			BIT(3)
> +#define		MAC_SIO_SEL		BIT(4)
> +#define SNFI_MAC_OUTL		(0x504)
> +#define SNFI_MAC_INL		(0x508)
> +#define SNFI_RD_CTL2		(0x510)
> +#define		RD_CMD_MASK		GENMASK(7, 0)
> +#define		RD_DUMMY_SHIFT		(8)
> +#define SNFI_RD_CTL3		(0x514)
> +#define		RD_ADDR_MASK		GENMASK(16, 0)
> +#define SNFI_PG_CTL1		(0x524)
> +#define		WR_LOAD_CMD_MASK	GENMASK(15, 8)
> +#define		WR_LOAD_CMD_SHIFT	(8)
> +#define SNFI_PG_CTL2		(0x528)
> +#define		WR_LOAD_ADDR_MASK	GENMASK(15, 0)
> +#define SNFI_MISC_CTL		(0x538)
> +#define		RD_CUSTOM_EN		BIT(6)
> +#define		WR_CUSTOM_EN		BIT(7)
> +#define		LATCH_LAT_SHIFT		(8)
> +#define		LATCH_LAT_MASK		GENMASK(9, 8)
> +#define		RD_MODE_X2		BIT(16)
> +#define		RD_MODE_X4		BIT(17)
> +#define		RD_MODE_DQUAL		BIT(18)
> +#define		RD_MODE_MASK		GENMASK(18, 16)
> +#define		WR_X4_EN		BIT(20)
> +#define		SW_RST			BIT(28)
> +#define SNFI_MISC_CTL2		(0x53c)
> +#define		WR_LEN_SHIFT		(16)
> +#define SNFI_DLY_CTL3		(0x548)
> +#define		SAM_DLY_MASK		GENMASK(5, 0)
> +#define SNFI_STA_CTL1		(0x550)
> +#define		SPI_STATE		GENMASK(3, 0)
> +#define		CUS_READ_DONE		BIT(27)
> +#define		CUS_PROG_DONE		BIT(28)
> +#define SNFI_CNFG		(0x55c)
> +#define		SNFI_MODE_EN		BIT(0)
> +#define SNFI_GPRAM_DATA		(0x800)
> +#define		SNFI_GPRAM_MAX_LEN	(160)
> +
> +#define MTK_SNFI_TIMEOUT		(500000)
> +#define MTK_SNFI_RESET_TIMEOUT		(1000000)
> +#define MTK_SNFI_AUTOSUSPEND_DELAY	(1000)
> +#define KB(x)				((x) * 1024UL)
> +
> +struct mtk_snfi_caps {
> +	u8 pageformat_spare_shift;
> +};
> +
> +struct mtk_snfi {
> +	struct device *dev;
> +	struct completion done;
> +	void __iomem *regs;
> +	const struct mtk_snfi_caps *caps;
> +
> +	struct clk *nfi_clk;
> +	struct clk *snfi_clk;
> +	struct clk *hclk;
> +
> +	struct nand_ecc_engine *engine;
> +
> +	u32 sample_delay;
> +	u32 read_latency;
> +
> +	void *tx_buf;
> +	dma_addr_t dma_addr;
> +};
> +
> +static struct mtk_ecc_engine *mtk_snfi_to_ecc_engine(struct mtk_snfi *snfi)
> +{
> +	return snfi->engine->priv;
> +}
> +
> +static void mtk_snfi_mac_enable(struct mtk_snfi *snfi)
> +{
> +	u32 val;
> +
> +	val = readl(snfi->regs + SNFI_MAC_CTL);
> +	val &= ~MAC_SIO_SEL;
> +	val |= MAC_EN;
> +
> +	writel(val, snfi->regs + SNFI_MAC_CTL);
> +}
> +
> +static int mtk_snfi_mac_trigger(struct mtk_snfi *snfi)
> +{
> +	int ret;
> +	u32 val;
> +
> +	val = readl(snfi->regs + SNFI_MAC_CTL);
> +	val |= MAC_TRIG;
> +	writel(val, snfi->regs + SNFI_MAC_CTL);
> +
> +	ret = readl_poll_timeout_atomic(snfi->regs + SNFI_MAC_CTL,
> +					val, val & MAC_WIP_READY,
> +					0, MTK_SNFI_TIMEOUT);
> +	if (ret < 0) {
> +		dev_err(snfi->dev, "wait for wip ready timeout\n");
> +		return -EIO;
> +	}
> +
> +	ret = readl_poll_timeout_atomic(snfi->regs + SNFI_MAC_CTL,
> +					val, !(val & MAC_WIP), 0,
> +					MTK_SNFI_TIMEOUT);
> +	if (ret < 0) {
> +		dev_err(snfi->dev, "command write timeout\n");
> +		return -EIO;
> +	}
> +
> +	return 0;
> +}
> +
> +static void mtk_snfi_mac_disable(struct mtk_snfi *snfi)
> +{
> +	u32 val;
> +
> +	val = readl(snfi->regs + SNFI_MAC_CTL);
> +	val &= ~(MAC_TRIG | MAC_EN);
> +	writel(val, snfi->regs + SNFI_MAC_CTL);
> +}
> +
> +static int mtk_snfi_mac_op(struct mtk_snfi *snfi)
> +{
> +	int ret;
> +
> +	mtk_snfi_mac_enable(snfi);
> +	ret = mtk_snfi_mac_trigger(snfi);
> +	mtk_snfi_mac_disable(snfi);
> +
> +	return ret;
> +}
> +
> +static inline void mtk_snfi_read_oob_free(struct mtk_snfi *snfi,
> +					  const struct spi_mem_op *op)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	u8 *oobptr = op->data.buf.in;
> +	u32 vall, valm;
> +	int i, j;
> +
> +	oobptr += eng->section_size * eng->nsteps;
> +	for (i = 0; i < eng->nsteps; i++) {
> +		vall = readl(snfi->regs + NFI_FDML(i));
> +		valm = readl(snfi->regs + NFI_FDMM(i));
> +
> +		for (j = 0; j < eng->oob_free; j++)
> +			oobptr[j] = (j >= 4 ? valm : vall) >> ((j % 4) * 8);
> +
> +		oobptr += eng->oob_free;
> +	}
> +}
> +
> +static inline void mtk_snfi_write_oob_free(struct mtk_snfi *snfi,
> +					   const struct spi_mem_op *op)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	const u8 *oobptr = op->data.buf.out;
> +	u32 vall, valm;
> +	int i, j;
> +
> +	oobptr += eng->section_size * eng->nsteps;
> +	for (i = 0; i < eng->nsteps; i++) {
> +		vall = 0;
> +		valm = 0;
> +		for (j = 0; j < 8; j++) {
> +			if (j < 4)
> +				vall |= (j < eng->oob_free ? oobptr[j] : 0xff)
> +					<< (j * 8);
> +			else
> +				valm |= (j < eng->oob_free ? oobptr[j] : 0xff)
> +					<< ((j - 4) * 8);
> +		}
> +
> +		writel(vall, snfi->regs + NFI_FDML(i));
> +		writel(valm, snfi->regs + NFI_FDMM(i));
> +		oobptr += eng->oob_free;
> +	}
> +}
> +
> +static irqreturn_t mtk_snfi_irq(int irq, void *id)
> +{
> +	struct mtk_snfi *snfi = id;
> +	u32 sta, ien;
> +
> +	sta = readl(snfi->regs + NFI_INTR_STA);
> +	ien = readl(snfi->regs + NFI_INTR_EN);
> +
> +	if (!(sta & ien))
> +		return IRQ_NONE;
> +
> +	writel(0, snfi->regs + NFI_INTR_EN);
> +	complete(&snfi->done);
> +
> +	return IRQ_HANDLED;
> +}
> +
> +static int mtk_snfi_enable_clk(struct device *dev, struct mtk_snfi *snfi)
> +{
> +	int ret;
> +
> +	ret = clk_prepare_enable(snfi->nfi_clk);
> +	if (ret) {
> +		dev_err(dev, "failed to enable nfi clk\n");
> +		return ret;
> +	}
> +
> +	ret = clk_prepare_enable(snfi->snfi_clk);
> +	if (ret) {
> +		dev_err(dev, "failed to enable snfi clk\n");
> +		clk_disable_unprepare(snfi->nfi_clk);
> +		return ret;
> +	}
> +
> +	ret = clk_prepare_enable(snfi->hclk);
> +	if (ret) {
> +		dev_err(dev, "failed to enable hclk\n");
> +		clk_disable_unprepare(snfi->nfi_clk);
> +		clk_disable_unprepare(snfi->snfi_clk);

definitely deserves goto statements :)

> +		return ret;
> +	}
> +
> +	return 0;
> +}
> +
> +static void mtk_snfi_disable_clk(struct mtk_snfi *snfi)
> +{
> +	clk_disable_unprepare(snfi->nfi_clk);
> +	clk_disable_unprepare(snfi->snfi_clk);
> +	clk_disable_unprepare(snfi->hclk);
> +}
> +
> +static int mtk_snfi_reset(struct mtk_snfi *snfi)
> +{
> +	u32 val;
> +	int ret;
> +
> +	val = readl(snfi->regs + SNFI_MISC_CTL) | SW_RST;
> +	writel(val, snfi->regs + SNFI_MISC_CTL);
> +
> +	ret = readw_poll_timeout(snfi->regs + SNFI_STA_CTL1, val,
> +				 !(val & SPI_STATE), 0,
> +				 MTK_SNFI_RESET_TIMEOUT);
> +	if (ret) {
> +		dev_warn(snfi->dev, "wait spi idle timeout 0x%x\n", val);
> +		return ret;
> +	}
> +
> +	val = readl(snfi->regs + SNFI_MISC_CTL);
> +	val &= ~SW_RST;
> +	writel(val, snfi->regs + SNFI_MISC_CTL);
> +
> +	writew(CON_FIFO_FLUSH | CON_NFI_RST, snfi->regs + NFI_CON);
> +	ret = readw_poll_timeout(snfi->regs + NFI_STA, val,
> +				 !(val & NFI_FSM_MASK), 0,
> +				 MTK_SNFI_RESET_TIMEOUT);
> +	if (ret) {
> +		dev_warn(snfi->dev, "wait nfi fsm idle timeout 0x%x\n", val);
> +		return ret;
> +	}
> +
> +	val = readl(snfi->regs + NFI_STRDATA);
> +	val &= ~STAR_EN;
> +	writew(val, snfi->regs + NFI_STRDATA);
> +
> +	return 0;
> +}
> +
> +static int mtk_snfi_init(struct mtk_snfi *snfi)
> +{
> +	int ret;
> +	u32 val;
> +
> +	ret = mtk_snfi_reset(snfi);
> +	if (ret)
> +		return ret;
> +
> +	writel(SNFI_MODE_EN, snfi->regs + SNFI_CNFG);
> +
> +	if (snfi->sample_delay) {
> +		val = readl(snfi->regs + SNFI_DLY_CTL3);
> +		val &= ~SAM_DLY_MASK;
> +		val |= snfi->sample_delay;
> +		writel(val, snfi->regs + SNFI_DLY_CTL3);
> +	}
> +
> +	if (snfi->read_latency) {
> +		val = readl(snfi->regs + SNFI_MISC_CTL);
> +		val &= ~LATCH_LAT_MASK;
> +		val |= (snfi->read_latency << LATCH_LAT_SHIFT);
> +		writel(val, snfi->regs + SNFI_MISC_CTL);
> +	}
> +
> +	return 0;
> +}
> +
> +static void mtk_snfi_prepare_for_tx(struct mtk_snfi *snfi,
> +				    const struct spi_mem_op *op)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	u32 val;
> +
> +	val = readl(snfi->regs + SNFI_PG_CTL1);
> +	val &= ~WR_LOAD_CMD_MASK;
> +	val |= op->cmd.opcode << WR_LOAD_CMD_SHIFT;
> +	writel(val, snfi->regs + SNFI_PG_CTL1);
> +
> +	writel(op->addr.val & WR_LOAD_ADDR_MASK,
> +	       snfi->regs + SNFI_PG_CTL2);
> +
> +	val = readl(snfi->regs + SNFI_MISC_CTL);
> +	val |= WR_CUSTOM_EN;
> +	if (op->data.buswidth == 4)
> +		val |= WR_X4_EN;
> +	writel(val, snfi->regs + SNFI_MISC_CTL);
> +
> +	val = eng->nsteps * (eng->oob_per_section + eng->section_size);
> +	writel(val << WR_LEN_SHIFT, snfi->regs + SNFI_MISC_CTL2);
> +
> +	writel(INTR_CUS_PROG_EN | INTR_IRQ_EN, snfi->regs + NFI_INTR_EN);
> +}
> +
> +static void mtk_snfi_prepare_for_rx(struct mtk_snfi *snfi,
> +				    const struct spi_mem_op *op)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	u32 val, dummy_cycle;
> +
> +	dummy_cycle = (op->dummy.nbytes << 3) >>
> +			(ffs(op->dummy.buswidth) - 1);
> +	val = (op->cmd.opcode & RD_CMD_MASK) |
> +		  (dummy_cycle << RD_DUMMY_SHIFT);
> +	writel(val, snfi->regs + SNFI_RD_CTL2);
> +
> +	writel(op->addr.val & RD_ADDR_MASK,
> +	       snfi->regs + SNFI_RD_CTL3);
> +
> +	val = readl(snfi->regs + SNFI_MISC_CTL);
> +	val |= RD_CUSTOM_EN;
> +	val &= ~RD_MODE_MASK;
> +	if (op->data.buswidth == 4)
> +		val |= RD_MODE_X4;
> +	else if (op->data.buswidth == 2)
> +		val |= RD_MODE_X2;
> +
> +	if (op->addr.buswidth != 1)
> +		val |= RD_MODE_DQUAL;
> +
> +	writel(val, snfi->regs + SNFI_MISC_CTL);
> +
> +	val = eng->nsteps * (eng->oob_per_section + eng->section_size);
> +	writel(val, snfi->regs + SNFI_MISC_CTL2);
> +
> +	writel(INTR_CUS_READ_EN | INTR_IRQ_EN, snfi->regs + NFI_INTR_EN);
> +}
> +
> +static int mtk_snfi_prepare(struct mtk_snfi *snfi,
> +			    const struct spi_mem_op *op, bool rx)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	dma_addr_t addr;
> +	int ret;
> +	u32 val;
> +
> +	addr = dma_map_single(snfi->dev,
> +			      op->data.buf.in, op->data.nbytes,
> +			      rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	ret = dma_mapping_error(snfi->dev, addr);
> +	if (ret) {
> +		dev_err(snfi->dev, "dma mapping error\n");
> +		return -EINVAL;
> +	}
> +
> +	snfi->dma_addr = addr;
> +	writel(lower_32_bits(addr), snfi->regs + NFI_STRADDR);
> +
> +	if (op->ecc_en && !rx)
> +		mtk_snfi_write_oob_free(snfi, op);
> +
> +	val = readw(snfi->regs + NFI_CNFG);
> +	val |= CNFG_DMA | CNFG_DMA_BURST_EN | CNFG_OP_CUST;
> +	val |= rx ? CNFG_READ_EN : 0;
> +
> +	if (op->ecc_en)
> +		val |= CNFG_HW_ECC_EN | CNFG_AUTO_FMT_EN;
> +
> +	writew(val, snfi->regs + NFI_CNFG);
> +
> +	writel(eng->nsteps << CON_SEC_SHIFT, snfi->regs + NFI_CON);
> +
> +	init_completion(&snfi->done);
> +
> +	/* trigger state machine to custom op mode */
> +	writel(CMD_DUMMY, snfi->regs + NFI_CMD);
> +
> +	if (rx)
> +		mtk_snfi_prepare_for_rx(snfi, op);
> +	else
> +		mtk_snfi_prepare_for_tx(snfi, op);
> +
> +	return 0;
> +}
> +
> +static void mtk_snfi_trigger(struct mtk_snfi *snfi,
> +			     const struct spi_mem_op *op, bool rx)
> +{
> +	u32 val;
> +
> +	val = readl(snfi->regs + NFI_CON);
> +	val |= rx ? CON_BRD : CON_BWR;
> +	writew(val, snfi->regs + NFI_CON);
> +
> +	writew(STAR_EN, snfi->regs + NFI_STRDATA);
> +}
> +
> +static int mtk_snfi_wait_done(struct mtk_snfi *snfi,
> +			      const struct spi_mem_op *op, bool rx)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	struct device *dev = snfi->dev;
> +	u32 val;
> +	int ret;
> +
> +	ret = wait_for_completion_timeout(&snfi->done, msecs_to_jiffies(500));
> +	if (!ret) {
> +		dev_err(dev, "wait for %d completion done timeout\n", rx);
> +		return -ETIMEDOUT;
> +	}
> +
> +	if (rx) {
> +		ret = readl_poll_timeout_atomic(snfi->regs + NFI_BYTELEN,
> +						val,
> +						ADDRCNTR_SEC(val) >=
> +						eng->nsteps,
> +						0, MTK_SNFI_TIMEOUT);
> +		if (ret) {
> +			dev_err(dev, "wait for rx section count timeout\n");
> +			return -ETIMEDOUT;
> +		}
> +
> +		ret = readl_poll_timeout_atomic(snfi->regs + NFI_MASTERSTA,
> +						val,
> +						!(val & AHB_BUS_BUSY),
> +						0, MTK_SNFI_TIMEOUT);
> +		if (ret) {
> +			dev_err(dev, "wait for bus busy timeout\n");
> +			return -ETIMEDOUT;
> +		}
> +	} else {
> +		ret = readl_poll_timeout_atomic(snfi->regs + NFI_ADDRCNTR,
> +						val,
> +						ADDRCNTR_SEC(val) >=
> +						eng->nsteps,
> +						0, MTK_SNFI_TIMEOUT);
> +		if (ret) {
> +			dev_err(dev, "wait for tx section count timeout\n");
> +			return -ETIMEDOUT;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
> +static void mtk_snfi_complete(struct mtk_snfi *snfi,
> +			      const struct spi_mem_op *op, bool rx)
> +{
> +	u32 val;
> +
> +	dma_unmap_single(snfi->dev,
> +			 snfi->dma_addr, op->data.nbytes,
> +			 rx ? DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +
> +	if (op->ecc_en && rx)
> +		mtk_snfi_read_oob_free(snfi, op);
> +
> +	val = readl(snfi->regs + SNFI_MISC_CTL);
> +	val &= rx ? ~RD_CUSTOM_EN : ~WR_CUSTOM_EN;
> +	writel(val, snfi->regs + SNFI_MISC_CTL);
> +
> +	val = readl(snfi->regs + SNFI_STA_CTL1);
> +	val |= rx ? CUS_READ_DONE : CUS_PROG_DONE;
> +	writew(val, snfi->regs + SNFI_STA_CTL1);
> +	val &= rx ? ~CUS_READ_DONE : ~CUS_PROG_DONE;
> +	writew(val, snfi->regs + SNFI_STA_CTL1);
> +
> +	/* Disable interrupt */
> +	val = readl(snfi->regs + NFI_INTR_EN);
> +	val &= rx ? ~INTR_CUS_READ_EN : ~INTR_CUS_PROG_EN;
> +	writew(val, snfi->regs + NFI_INTR_EN);
> +
> +	writew(0, snfi->regs + NFI_CNFG);
> +	writew(0, snfi->regs + NFI_CON);
> +}
> +
> +static int mtk_snfi_transfer_dma(struct mtk_snfi *snfi,
> +				 const struct spi_mem_op *op, bool rx)
> +{
> +	int ret;
> +
> +	ret = mtk_snfi_prepare(snfi, op, rx);
> +	if (ret)
> +		return ret;
> +
> +	mtk_snfi_trigger(snfi, op, rx);
> +
> +	ret = mtk_snfi_wait_done(snfi, op, rx);
> +
> +	mtk_snfi_complete(snfi, op, rx);
> +
> +	return ret;
> +}
> +
> +static int mtk_snfi_transfer_mac(struct mtk_snfi *snfi,
> +				 const u8 *txbuf, u8 *rxbuf,
> +				 const u32 txlen, const u32 rxlen)
> +{
> +	u32 i, j, val, tmp;
> +	u8 *p_tmp = (u8 *)(&tmp);
> +	u32 offset = 0;
> +	int ret = 0;
> +
> +	/* Move tx data to gpram in snfi mac mode */
> +	for (i = 0; i < txlen; ) {
> +		for (j = 0, tmp = 0; i < txlen && j < 4; i++, j++)
> +			p_tmp[j] = txbuf[i];
> +
> +		writel(tmp, snfi->regs + SNFI_GPRAM_DATA + offset);
> +		offset += 4;
> +	}
> +
> +	writel(txlen, snfi->regs + SNFI_MAC_OUTL);
> +	writel(rxlen, snfi->regs + SNFI_MAC_INL);
> +
> +	ret = mtk_snfi_mac_op(snfi);
> +	if (ret) {
> +		dev_warn(snfi->dev, "snfi mac operation fail\n");
> +		return ret;
> +	}
> +
> +	/* Get rx data from gpram in snfi mac mode */
> +	if (rxlen)
> +		for (i = 0, offset = rounddown(txlen, 4); i < rxlen; ) {
> +			val = readl(snfi->regs +
> +				    SNFI_GPRAM_DATA + offset);
> +			for (j = 0; i < rxlen && j < 4; i++, j++, rxbuf++) {
> +				if (i == 0)
> +					j = txlen % 4;
> +				*rxbuf = (val >> (j * 8)) & 0xff;
> +			}
> +			offset += 4;
> +		}
> +
> +	return ret;
> +}
> +
> +static int mtk_snfi_exec_op(struct spi_mem *mem,
> +			    const struct spi_mem_op *op)
> +{
> +	struct mtk_snfi *snfi = spi_controller_get_devdata(mem->spi->master);
> +	u8 *buf, *txbuf = snfi->tx_buf, *rxbuf = NULL;
> +	u32 txlen = 0, rxlen = 0;
> +	int i, ret = 0;
> +	bool rx;
> +
> +	rx = op->data.dir == SPI_MEM_DATA_IN;
> +
> +	ret = mtk_snfi_reset(snfi);
> +	if (ret) {
> +		dev_warn(snfi->dev, "snfi reset fail\n");
> +		return ret;
> +	}
> +
> +	/*
> +	 * If tx/rx data buswidth is not 0/1, use snfi DMA mode.
> +	 * Otherwise, use snfi mac mode.
> +	 */
> +	if (op->data.buswidth != 1 && op->data.buswidth != 0) {
> +		ret = mtk_snfi_transfer_dma(snfi, op, rx);
> +		if (ret)
> +			dev_warn(snfi->dev, "snfi dma transfer %d fail %d\n",
> +				 rx, ret);
> +		return ret;
> +	}
> +
> +	txbuf[txlen++] = op->cmd.opcode;
> +
> +	if (op->addr.nbytes)
> +		for (i = 0; i < op->addr.nbytes; i++)
> +			txbuf[txlen++] = op->addr.val >>
> +					(8 * (op->addr.nbytes - i - 1));
> +
> +	txlen += op->dummy.nbytes;
> +
> +	if (op->data.dir == SPI_MEM_DATA_OUT) {
> +		buf = (u8 *)op->data.buf.out;
> +		for (i = 0; i < op->data.nbytes; i++)
> +			txbuf[txlen++] = buf[i];
> +	}
> +
> +	if (op->data.dir == SPI_MEM_DATA_IN) {
> +		rxbuf = (u8 *)op->data.buf.in;
> +		rxlen = op->data.nbytes;
> +	}
> +
> +	ret = mtk_snfi_transfer_mac(snfi, txbuf, rxbuf, txlen, rxlen);
> +	if (ret)
> +		dev_warn(snfi->dev, "snfi mac transfer %d fail %d\n",
> +			 op->data.dir, ret);
> +
> +	return ret;
> +}
> +
> +static int mtk_snfi_check_buswidth(u8 width)
> +{
> +	switch (width) {
> +	case 1:
> +	case 2:
> +	case 4:
> +		return 0;
> +
> +	default:
> +		break;
> +	}
> +
> +	return -EOPNOTSUPP;
> +}
> +
> +static bool mtk_snfi_supports_op(struct spi_mem *mem,
> +				 const struct spi_mem_op *op)
> +{
> +	int ret = 0;
> +
> +	if (!spi_mem_default_supports_op(mem, op))

With the integration properly set, this call would always return false
when ecc_en = true. You should switch to

	spi_mem_generic_supports_op(mem, op, false, true);

> +		return false;
> +
> +	if (op->cmd.buswidth != 1)
> +		return false;
> +
> +	/*
> +	 * An operation uses snfi mac mode when the data buswidth
> +	 * is 0/1. However, the HW ECC engine cannot be used in
> +	 * mac mode.
> +	 */
> +	if (op->ecc_en && op->data.buswidth == 1 &&
> +	    op->data.nbytes >= SNFI_GPRAM_MAX_LEN)
> +		return false;
> +
> +	switch (op->data.dir) {
> +	/* For spi mem data in, can support 1/2/4 buswidth */
> +	case SPI_MEM_DATA_IN:
> +		if (op->addr.nbytes)
> +			ret |= mtk_snfi_check_buswidth(op->addr.buswidth);
> +
> +		if (op->dummy.nbytes)
> +			ret |= mtk_snfi_check_buswidth(op->dummy.buswidth);
> +
> +		if (op->data.nbytes)
> +			ret |= mtk_snfi_check_buswidth(op->data.buswidth);
> +
> +		if (ret)
> +			return false;
> +
> +		break;
> +	case SPI_MEM_DATA_OUT:
> +		/*
> +		 * For spi mem data out, can support 0/1 buswidth
> +		 * for addr/dummy and 1/4 buswidth for data.
> +		 */
> +		if (op->addr.buswidth != 0 && op->addr.buswidth != 1)
> +			return false;
> +
> +		if (op->dummy.buswidth != 0 && op->dummy.buswidth != 1)
> +			return false;
> +
> +		if (op->data.buswidth != 1 && op->data.buswidth != 4)
> +			return false;
> +
> +		break;
> +	default:
> +		break;
> +	}
> +
> +	return true;
> +}
> +
> +static int mtk_snfi_adjust_op_size(struct spi_mem *mem,
> +				   struct spi_mem_op *op)
> +{
> +	u32 len, max_len;
> +
> +	/*
> +	 * The op size is limited to SNFI_GPRAM_MAX_LEN when the data
> +	 * buswidth is 0/1 (snfi mac mode). Otherwise, the snfi can
> +	 * support up to 16KB.
> +	 */
> +	if (op->data.buswidth == 1 || op->data.buswidth == 0)
> +		max_len = SNFI_GPRAM_MAX_LEN;
> +	else
> +		max_len = KB(16);
> +
> +	len = op->cmd.nbytes + op->addr.nbytes + op->dummy.nbytes;
> +	if (len > max_len)
> +		return -EOPNOTSUPP;
> +
> +	if ((len + op->data.nbytes) > max_len)
> +		op->data.nbytes = max_len - len;
> +
> +	return 0;
> +}
> +
> +static const struct mtk_snfi_caps mtk_snfi_caps_mt7622 = {
> +	.pageformat_spare_shift = 16,
> +};
> +
> +static const struct spi_controller_mem_ops mtk_snfi_ops = {
> +	.adjust_op_size = mtk_snfi_adjust_op_size,
> +	.supports_op = mtk_snfi_supports_op,
> +	.exec_op = mtk_snfi_exec_op,
> +};
> +
> +static const struct of_device_id mtk_snfi_id_table[] = {
> +	{ .compatible = "mediatek,mt7622-snfi",
> +	  .data = &mtk_snfi_caps_mt7622,
> +	},
> +	{  /* sentinel */ }
> +};
> +
> +/* ECC wrapper */
> +static struct mtk_snfi *mtk_nand_to_spi(struct nand_device *nand)
> +{
> +	struct device *dev = nand->ecc.engine->dev;
> +	struct spi_master *master = dev_get_drvdata(dev);
> +	struct mtk_snfi *snfi = spi_master_get_devdata(master);
> +
> +	return snfi;
> +}
> +
> +static int mtk_snfi_config(struct nand_device *nand,
> +			   struct mtk_snfi *snfi)
> +{
> +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> +	u32 val;
> +
> +	switch (nanddev_page_size(nand)) {
> +	case 512:
> +		val = PAGEFMT_512_2K | PAGEFMT_SEC_SEL_512;
> +		break;
> +	case KB(2):
> +		if (eng->section_size == 512)
> +			val = PAGEFMT_2K_4K | PAGEFMT_SEC_SEL_512;
> +		else
> +			val = PAGEFMT_512_2K;
> +		break;
> +	case KB(4):
> +		if (eng->section_size == 512)
> +			val = PAGEFMT_4K_8K | PAGEFMT_SEC_SEL_512;
> +		else
> +			val = PAGEFMT_2K_4K;
> +		break;
> +	case KB(8):
> +		if (eng->section_size == 512)
> +			val = PAGEFMT_8K_16K | PAGEFMT_SEC_SEL_512;
> +		else
> +			val = PAGEFMT_4K_8K;
> +		break;
> +	case KB(16):
> +		val = PAGEFMT_8K_16K;
> +		break;
> +	default:
> +		dev_err(snfi->dev, "invalid page len: %d\n",
> +			nanddev_page_size(nand));
> +		return -EINVAL;
> +	}
> +
> +	val |= eng->oob_per_section_idx << PAGEFMT_SPARE_SHIFT;
> +	val |= eng->oob_free << PAGEFMT_FDM_SHIFT;
> +	val |= eng->oob_free_protected << PAGEFMT_FDM_ECC_SHIFT;
> +	writel(val, snfi->regs + NFI_PAGEFMT);

Shouldn't this be calculated only once?

> +
> +	return 0;
> +}
> +
> +static int mtk_snfi_ecc_init_ctx(struct nand_device *nand)
> +{
> +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> +
> +	return ops->init_ctx(nand);
> +}
> +
> +static void mtk_snfi_ecc_cleanup_ctx(struct nand_device *nand)
> +{
> +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> +
> +	ops->cleanup_ctx(nand);
> +}
> +
> +static int mtk_snfi_ecc_prepare_io_req(struct nand_device *nand,
> +				       struct nand_page_io_req *req)
> +{
> +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> +	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
> +	int ret;
> +
> +	ret = mtk_snfi_config(nand, snfi);
> +	if (ret)
> +		return ret;
> +
> +	return ops->prepare_io_req(nand, req);
> +}
> +
> +static int mtk_snfi_ecc_finish_io_req(struct nand_device *nand,
> +				      struct nand_page_io_req *req)
> +{
> +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
> +
> +	if (req->mode != MTD_OPS_RAW)
> +		eng->read_empty = readl(snfi->regs + NFI_STA) & STA_EMP_PAGE;
> +
> +	return ops->finish_io_req(nand, req);
> +}
> +
> +static struct nand_ecc_engine_ops mtk_snfi_ecc_engine_pipelined_ops = {
> +	.init_ctx = mtk_snfi_ecc_init_ctx,
> +	.cleanup_ctx = mtk_snfi_ecc_cleanup_ctx,
> +	.prepare_io_req = mtk_snfi_ecc_prepare_io_req,
> +	.finish_io_req = mtk_snfi_ecc_finish_io_req,
> +};
> +
> +static int mtk_snfi_ecc_probe(struct platform_device *pdev,
> +			      struct mtk_snfi *snfi)
> +{
> +	struct nand_ecc_engine *ecceng;
> +
> +	if (!mtk_ecc_get_pipelined_ops())
> +		return -EOPNOTSUPP;
> +
> +	ecceng = devm_kzalloc(&pdev->dev, sizeof(*ecceng), GFP_KERNEL);
> +	if (!ecceng)
> +		return -ENOMEM;
> +
> +	ecceng->dev = &pdev->dev;
> +	ecceng->ops = &mtk_snfi_ecc_engine_pipelined_ops;

You need to tell the core that this is a pipelined engine (look at the
integration entry).

> +
> +	nand_ecc_register_on_host_hw_engine(ecceng);
> +
> +	snfi->engine = ecceng;
> +
> +	return 0;
> +}
> +
> +static int mtk_snfi_probe(struct platform_device *pdev)
> +{
> +	struct device_node *np = pdev->dev.of_node;
> +	struct spi_controller *ctlr;
> +	struct mtk_snfi *snfi;
> +	struct resource *res;
> +	int ret, irq;
> +	u32 val = 0;
> +
> +	ctlr = spi_alloc_master(&pdev->dev, sizeof(*snfi));
> +	if (!ctlr)
> +		return -ENOMEM;
> +
> +	snfi = spi_controller_get_devdata(ctlr);
> +	snfi->dev = &pdev->dev;
> +
> +	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> +	snfi->regs = devm_ioremap_resource(snfi->dev, res);
> +	if (IS_ERR(snfi->regs)) {
> +		ret = PTR_ERR(snfi->regs);
> +		goto err_put_master;
> +	}
> +
> +	ret = of_property_read_u32(np, "sample-delay", &val);
> +	if (!ret)
> +		snfi->sample_delay = val;
> +
> +	ret = of_property_read_u32(np, "read-latency", &val);
> +	if (!ret)
> +		snfi->read_latency = val;
> +
> +	snfi->nfi_clk = devm_clk_get(snfi->dev, "nfi_clk");
> +	if (IS_ERR(snfi->nfi_clk)) {
> +		dev_err(snfi->dev, "not found nfi clk\n");
> +		ret = PTR_ERR(snfi->nfi_clk);
> +		goto err_put_master;
> +	}
> +
> +	snfi->snfi_clk = devm_clk_get(snfi->dev, "snfi_clk");
> +	if (IS_ERR(snfi->snfi_clk)) {
> +		dev_err(snfi->dev, "not found snfi clk\n");
> +		ret = PTR_ERR(snfi->snfi_clk);
> +		goto err_put_master;
> +	}
> +
> +	snfi->hclk = devm_clk_get(snfi->dev, "hclk");
> +	if (IS_ERR(snfi->hclk)) {
> +		dev_err(snfi->dev, "not found hclk\n");
> +		ret = PTR_ERR(snfi->hclk);
> +		goto err_put_master;
> +	}
> +
> +	ret = mtk_snfi_enable_clk(snfi->dev, snfi);
> +	if (ret)
> +		goto err_put_master;
> +
> +	snfi->caps = of_device_get_match_data(snfi->dev);
> +
> +	irq = platform_get_irq(pdev, 0);
> +	if (irq < 0) {
> +		dev_err(snfi->dev, "not found snfi irq resource\n");
> +		ret = -EINVAL;
> +		goto clk_disable;
> +	}
> +
> +	ret = devm_request_irq(snfi->dev, irq, mtk_snfi_irq,
> +			       0, "mtk-snfi", snfi);
> +	if (ret) {
> +		dev_err(snfi->dev, "failed to request snfi irq\n");
> +		goto clk_disable;
> +	}
> +
> +	ret = dma_set_mask(snfi->dev, DMA_BIT_MASK(32));
> +	if (ret) {
> +		dev_err(snfi->dev, "failed to set dma mask\n");
> +		goto clk_disable;
> +	}
> +
> +	snfi->tx_buf = kzalloc(SNFI_GPRAM_MAX_LEN, GFP_KERNEL);
> +	if (!snfi->tx_buf) {
> +		ret = -ENOMEM;
> +		goto clk_disable;
> +	}
> +
> +	ctlr->dev.of_node = np;
> +	ctlr->mem_ops = &mtk_snfi_ops;
> +	ctlr->mode_bits = SPI_RX_DUAL | SPI_RX_QUAD | SPI_TX_QUAD;
> +	ctlr->auto_runtime_pm = true;
> +
> +	dev_set_drvdata(snfi->dev, ctlr);
> +
> +	ret = mtk_snfi_init(snfi);
> +	if (ret) {
> +		dev_err(snfi->dev, "failed to init snfi\n");
> +		goto free_buf;
> +	}
> +
> +	ret = mtk_snfi_ecc_probe(pdev, snfi);
> +	if (ret) {
> +		dev_warn(snfi->dev, "ECC engine not available\n");
> +		goto free_buf;
> +	}
> +
> +	pm_runtime_enable(snfi->dev);
> +
> +	ret = devm_spi_register_master(snfi->dev, ctlr);
> +	if (ret) {
> +		dev_err(snfi->dev, "failed to register spi master\n");
> +		goto disable_pm_runtime;
> +	}
> +
> +	return 0;
> +
> +disable_pm_runtime:
> +	pm_runtime_disable(snfi->dev);
> +
> +free_buf:
> +	kfree(snfi->tx_buf);
> +
> +clk_disable:
> +	mtk_snfi_disable_clk(snfi);
> +
> +err_put_master:
> +	spi_master_put(ctlr);
> +
> +	return ret;
> +}
> +
> +static int mtk_snfi_remove(struct platform_device *pdev)
> +{
> +	struct spi_controller *ctlr = dev_get_drvdata(&pdev->dev);
> +	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
> +	struct nand_ecc_engine *eng = snfi->engine;
> +
> +	pm_runtime_disable(snfi->dev);
> +	nand_ecc_unregister_on_host_hw_engine(eng);
> +	kfree(snfi->tx_buf);
> +	spi_master_put(ctlr);
> +
> +	return 0;
> +}
> +
> +#ifdef CONFIG_PM
> +static int mtk_snfi_runtime_suspend(struct device *dev)
> +{
> +	struct spi_controller *ctlr = dev_get_drvdata(dev);
> +	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
> +
> +	mtk_snfi_disable_clk(snfi);
> +
> +	return 0;
> +}
> +
> +static int mtk_snfi_runtime_resume(struct device *dev)
> +{
> +	struct spi_controller *ctlr = dev_get_drvdata(dev);
> +	struct mtk_snfi *snfi = spi_controller_get_devdata(ctlr);
> +	int ret;
> +
> +	ret = mtk_snfi_enable_clk(dev, snfi);
> +	if (ret)
> +		return ret;
> +
> +	ret = mtk_snfi_init(snfi);
> +	if (ret)
> +		dev_err(dev, "failed to init snfi\n");
> +
> +	return ret;
> +}
> +#endif /* CONFIG_PM */
> +
> +static const struct dev_pm_ops mtk_snfi_pm_ops = {
> +	SET_RUNTIME_PM_OPS(mtk_snfi_runtime_suspend,
> +			   mtk_snfi_runtime_resume, NULL)
> +	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
> +				pm_runtime_force_resume)
> +};
> +
> +static struct platform_driver mtk_snfi_driver = {
> +	.driver = {
> +		.name	= "mtk-snfi",
> +		.of_match_table = mtk_snfi_id_table,
> +		.pm = &mtk_snfi_pm_ops,
> +	},
> +	.probe		= mtk_snfi_probe,
> +	.remove		= mtk_snfi_remove,
> +};
> +
> +module_platform_driver(mtk_snfi_driver);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_AUTHOR("Xiangsheng Hou <xiangsheng.hou@mediatek.com>");
> +MODULE_DESCRIPTION("Mediatek SPI Nand Flash interface driver");

Otherwise looks good, I believe you can drop the RFC prefix now.

Thanks,
Miquèl

_______________________________________________
Linux-mediatek mailing list
Linux-mediatek@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-mediatek


* Re: [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
  2021-11-30  8:31   ` Xiangsheng Hou
@ 2021-12-09 10:32     ` Miquel Raynal
  -1 siblings, 0 replies; 36+ messages in thread
From: Miquel Raynal @ 2021-12-09 10:32 UTC (permalink / raw)
  To: Xiangsheng Hou
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Xiangsheng,

xiangsheng.hou@mediatek.com wrote on Tue, 30 Nov 2021 16:31:59 +0800:

> Convert the Mediatek HW ECC engine to the ECC infrastructure for the
> pipelined case.
> 
> Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
> ---
>  drivers/mtd/nand/ecc-mtk.c       | 614 +++++++++++++++++++++++++++++++
>  include/linux/mtd/nand-ecc-mtk.h |  68 ++++
>  2 files changed, 682 insertions(+)
> 
> diff --git a/drivers/mtd/nand/ecc-mtk.c b/drivers/mtd/nand/ecc-mtk.c
> index 31d7c77d5c59..c44499b3d0a5 100644
> --- a/drivers/mtd/nand/ecc-mtk.c
> +++ b/drivers/mtd/nand/ecc-mtk.c
> @@ -16,6 +16,7 @@
>  #include <linux/of_platform.h>
>  #include <linux/mutex.h>
>  
> +#include <linux/mtd/nand.h>
>  #include <linux/mtd/nand-ecc-mtk.h>
>  
>  #define ECC_IDLE_MASK		BIT(0)
> @@ -41,11 +42,17 @@
>  #define ECC_IDLE_REG(op)	((op) == ECC_ENCODE ? ECC_ENCIDLE : ECC_DECIDLE)
>  #define ECC_CTL_REG(op)		((op) == ECC_ENCODE ? ECC_ENCCON : ECC_DECCON)
>  
> +#define OOB_FREE_MAX_SIZE 8
> +#define OOB_FREE_MIN_SIZE 1
> +
>  struct mtk_ecc_caps {
>  	u32 err_mask;
>  	const u8 *ecc_strength;
>  	const u32 *ecc_regs;
>  	u8 num_ecc_strength;
> +	const u8 *spare_size;
> +	u8 num_spare_size;
> +	u32 max_section_size;
>  	u8 ecc_mode_shift;
>  	u32 parity_bits;
>  	int pg_irq_sel;
> @@ -79,6 +86,12 @@ static const u8 ecc_strength_mt7622[] = {
>  	4, 6, 8, 10, 12, 14, 16
>  };
>  
> +/* spare size for each section that each IP supports */
> +static const u8 spare_size_mt7622[] = {
> +	16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51,
> +	52, 62, 61, 63, 64, 67, 74
> +};
> +
>  enum mtk_ecc_regs {
>  	ECC_ENCPAR00,
>  	ECC_ENCIRQ_EN,
> @@ -447,6 +460,604 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc)
>  }
>  EXPORT_SYMBOL(mtk_ecc_get_parity_bits);
>  
> +static inline int mtk_ecc_data_off(struct nand_device *nand, int i)
> +{
> +	int eccsize = nand->ecc.ctx.conf.step_size;
> +
> +	return i * eccsize;
> +}
> +
> +static inline int mtk_ecc_oob_free_position(struct nand_device *nand, int i)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	int position;
> +
> +	if (i < eng->bbm_ctl.section)
> +		position = (i + 1) * eng->oob_free;
> +	else if (i == eng->bbm_ctl.section)
> +		position = 0;
> +	else
> +		position = i * eng->oob_free;
> +
> +	return position;
> +}
> +
> +static inline int mtk_ecc_data_len(struct nand_device *nand)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	int eccsize = nand->ecc.ctx.conf.step_size;
> +	int eccbytes = eng->oob_ecc;
> +
> +	return eccsize + eng->oob_free + eccbytes;
> +}
> +
> +static inline u8 *mtk_ecc_section_ptr(struct nand_device *nand, int i)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +
> +	return eng->bounce_page_buf + i * mtk_ecc_data_len(nand);
> +}
> +
> +static inline u8 *mtk_ecc_oob_free_ptr(struct nand_device *nand, int i)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	int eccsize = nand->ecc.ctx.conf.step_size;
> +
> +	return eng->bounce_page_buf + i * mtk_ecc_data_len(nand) + eccsize;
> +}
> +
> +static void mtk_ecc_no_bbm_swap(struct nand_device *a, u8 *b, u8 *c)
> +{
> +	/* nop */

Is this really useful?

> +}
> +
> +static void mtk_ecc_bbm_swap(struct nand_device *nand, u8 *databuf, u8 *oobbuf)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	int step_size = nand->ecc.ctx.conf.step_size;
> +	u32 bbm_pos = eng->bbm_ctl.position;
> +
> +	bbm_pos += eng->bbm_ctl.section * step_size;
> +
> +	swap(oobbuf[0], databuf[bbm_pos]);
> +}
> +
> +static void mtk_ecc_set_bbm_ctl(struct mtk_ecc_bbm_ctl *bbm_ctl,
> +				struct nand_device *nand)
> +{
> +	if (nanddev_page_size(nand) == 512) {
> +		bbm_ctl->bbm_swap = mtk_ecc_no_bbm_swap;
> +	} else {
> +		bbm_ctl->bbm_swap = mtk_ecc_bbm_swap;
> +		bbm_ctl->section = nanddev_page_size(nand) /
> +				   mtk_ecc_data_len(nand);
> +		bbm_ctl->position = nanddev_page_size(nand) %
> +				    mtk_ecc_data_len(nand);
> +	}
> +}
> +
> +static int mtk_ecc_ooblayout_free(struct mtd_info *mtd, int section,
> +				  struct mtd_oob_region *oob_region)
> +{
> +	struct nand_device *nand = mtd_to_nanddev(mtd);
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
> +	u32 eccsteps, bbm_bytes = 0;
> +
> +	eccsteps = mtd->writesize / conf->step_size;
> +
> +	if (section >= eccsteps)
> +		return -ERANGE;
> +
> +	/* Reserve 1 byte for the BBM in section 0 only */
> +	if (section == 0)
> +		bbm_bytes = 1;
> +
> +	oob_region->length = eng->oob_free - bbm_bytes;
> +	oob_region->offset = section * eng->oob_free + bbm_bytes;
> +
> +	return 0;
> +}
> +
> +static int mtk_ecc_ooblayout_ecc(struct mtd_info *mtd, int section,
> +				 struct mtd_oob_region *oob_region)
> +{
> +	struct nand_device *nand = mtd_to_nanddev(mtd);
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +
> +	if (section)
> +		return -ERANGE;
> +
> +	oob_region->offset = eng->oob_free * eng->nsteps;
> +	oob_region->length = mtd->oobsize - oob_region->offset;
> +
> +	return 0;
> +}
> +
> +static const struct mtd_ooblayout_ops mtk_ecc_ooblayout_ops = {
> +	.free = mtk_ecc_ooblayout_free,
> +	.ecc = mtk_ecc_ooblayout_ecc,
> +};
> +
> +const struct mtd_ooblayout_ops *mtk_ecc_get_ooblayout(void)
> +{
> +	return &mtk_ecc_ooblayout_ops;
> +}
> +
> +static struct device *mtk_ecc_get_engine_dev(struct device *dev)
> +{
> +	struct platform_device *eccpdev;
> +	struct device_node *np;
> +
> +	/*
> +	 * In the pipelined case, the device node is the host
> +	 * controller, not the actual ECC engine.
> +	 */
> +	np = of_parse_phandle(dev->of_node, "nand-ecc-engine", 0);
> +	if (!np)
> +		return NULL;
> +
> +	eccpdev = of_find_device_by_node(np);
> +	if (!eccpdev) {
> +		of_node_put(np);
> +		return NULL;
> +	}
> +
> +	platform_device_put(eccpdev);
> +	of_node_put(np);
> +
> +	return &eccpdev->dev;
> +}

As this will be the exact same function for all the pipelined engines,
I am tempted to put this in the core. I'll soon send an iteration, stay
tuned.

> +/*
> + * mtk_ecc_data_format() - Convert to/from MTK ECC on-flash data format
> + *
> + * The MTK ECC engine organizes page data by section; the on-flash format is as below:
> + * ||          section 0          ||          section 1          || ...
> + * || data | OOB free | OOB ECC || data | OOB free | OOB ECC || ...
> + *
> + * Therefore, it's necessary to convert data when reading/writing in raw mode.
> + */
> +static void mtk_ecc_data_format(struct nand_device *nand,

mtk_ecc_reorganize_data_layout()?

> +				struct nand_page_io_req *req)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	int step_size = nand->ecc.ctx.conf.step_size;
> +	void *databuf, *oobbuf;
> +	int i;
> +
> +	if (req->type == NAND_PAGE_WRITE) {
> +		databuf = (void *)req->databuf.out;
> +		oobbuf = (void *)req->oobbuf.out;
> +
> +		/*
> +		 * Convert the source databuf and oobbuf to MTK ECC
> +		 * on-flash data format.
> +		 */
> +		for (i = 0; i < eng->nsteps; i++) {
> +			if (i == eng->bbm_ctl.section)
> +				eng->bbm_ctl.bbm_swap(nand,
> +						      databuf, oobbuf);

Do you really need this swap? Isn't the overall move enough to put the
BBM at the right place?

> +			memcpy(mtk_ecc_section_ptr(nand, i),
> +			       databuf + mtk_ecc_data_off(nand, i),
> +			       step_size);
> +
> +			memcpy(mtk_ecc_oob_free_ptr(nand, i),
> +			       oobbuf + mtk_ecc_oob_free_position(nand, i),
> +			       eng->oob_free);
> +
> +			memcpy(mtk_ecc_oob_free_ptr(nand, i) + eng->oob_free,
> +			       oobbuf + eng->oob_free * eng->nsteps +
> +			       i * eng->oob_ecc,
> +			       eng->oob_ecc);
> +		}
> +
> +		req->databuf.out = eng->bounce_page_buf;
> +		req->oobbuf.out = eng->bounce_oob_buf;
> +	} else {
> +		databuf = req->databuf.in;
> +		oobbuf = req->oobbuf.in;
> +
> +		/*
> +		 * Convert the on-flash MTK ECC data format to
> +		 * destination databuf and oobbuf.
> +		 */
> +		memcpy(eng->bounce_page_buf, databuf,
> +		       nanddev_page_size(nand));
> +		memcpy(eng->bounce_oob_buf, oobbuf,
> +		       nanddev_per_page_oobsize(nand));
> +
> +		for (i = 0; i < eng->nsteps; i++) {
> +			memcpy(databuf + mtk_ecc_data_off(nand, i),
> +			       mtk_ecc_section_ptr(nand, i), step_size);
> +
> +			memcpy(oobbuf + mtk_ecc_oob_free_position(nand, i),
> +			       mtk_ecc_section_ptr(nand, i) + step_size,
> +			       eng->oob_free);
> +
> +			memcpy(oobbuf + eng->oob_free * eng->nsteps +
> +			       i * eng->oob_ecc,
> +			       mtk_ecc_section_ptr(nand, i) + step_size
> +			       + eng->oob_free,
> +			       eng->oob_ecc);
> +
> +			if (i == eng->bbm_ctl.section)
> +				eng->bbm_ctl.bbm_swap(nand,
> +						      databuf, oobbuf);
> +		}
> +	}
> +}
> +
> +static void mtk_ecc_oob_free_shift(struct nand_device *nand,
> +				   u8 *dst_buf, u8 *src_buf, bool write)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	u32 position;
> +	int i;
> +
> +	for (i = 0; i < eng->nsteps; i++) {
> +		if (i < eng->bbm_ctl.section)
> +			position = (i + 1) * eng->oob_free;
> +		else if (i == eng->bbm_ctl.section)
> +			position = 0;
> +		else
> +			position = i * eng->oob_free;
> +
> +		if (write)
> +			memcpy(dst_buf + i * eng->oob_free, src_buf + position,
> +			       eng->oob_free);
> +		else
> +			memcpy(dst_buf + position, src_buf + i * eng->oob_free,
> +			       eng->oob_free);
> +	}
> +}
> +
> +static void mtk_ecc_set_section_size_and_strength(struct nand_device *nand)
> +{
> +	struct nand_ecc_props *reqs = &nand->ecc.requirements;
> +	struct nand_ecc_props *user = &nand->ecc.user_conf;
> +	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +
> +	/* Configure the correction depending on the NAND device topology */
> +	if (user->step_size && user->strength) {
> +		conf->step_size = user->step_size;
> +		conf->strength = user->strength;
> +	} else if (reqs->step_size && reqs->strength) {
> +		conf->step_size = reqs->step_size;
> +		conf->strength = reqs->strength;
> +	}
> +
> +	/*
> +	 * Align ECC strength and ECC size.
> +	 * The MTK HW ECC engine only supports 512 and 1024 byte ECC sizes.
> +	 */
> +	if (conf->step_size < 1024) {

I prefer stronger checks than '<'.

> +		if (nanddev_page_size(nand) > 512 &&
> +		    eng->ecc->caps->max_section_size > 512) {
> +			conf->step_size = 1024;
> +			conf->strength <<= 1;

the operation "<<= 1" is more readable as "* 2" IMHO.

Same below in both directions.

> +		} else {
> +			conf->step_size = 512;
> +		}
> +	} else {
> +		conf->step_size = 1024;
> +	}
> +
> +	eng->section_size = conf->step_size;
> +}
> +
> +static int mtk_ecc_set_spare_per_section(struct nand_device *nand)
> +{
> +	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	const u8 *spare = eng->ecc->caps->spare_size;
> +	u32 i, closest_spare = 0;
> +
> +	eng->nsteps = nanddev_page_size(nand) / conf->step_size;
> +	eng->oob_per_section = nanddev_per_page_oobsize(nand) / eng->nsteps;
> +
> +	if (conf->step_size == 1024)
> +		eng->oob_per_section >>= 1;
> +
> +	if (eng->oob_per_section < spare[0]) {
> +		dev_err(eng->ecc->dev, "OOB size per section too small %d\n",
> +			eng->oob_per_section);
> +		return -EINVAL;
> +	}
> +
> +	for (i = 0; i < eng->ecc->caps->num_spare_size; i++) {
> +		if (eng->oob_per_section >= spare[i] &&
> +		    spare[i] >= spare[closest_spare]) {
> +			closest_spare = i;
> +			if (eng->oob_per_section == spare[i])
> +				break;
> +		}
> +	}
> +
> +	eng->oob_per_section = spare[closest_spare];
> +	eng->oob_per_section_idx = closest_spare;
> +
> +	if (conf->step_size == 1024)
> +		eng->oob_per_section <<= 1;
> +
> +	return 0;
> +}
> +
> +int mtk_ecc_prepare_io_req_pipelined(struct nand_device *nand,
> +				     struct nand_page_io_req *req)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	struct mtd_info *mtd = nanddev_to_mtd(nand);
> +	int ret;
> +
> +	nand_ecc_tweak_req(&eng->req_ctx, req);
> +
> +	/* Store the source buffer data to avoid modifying the source data */
> +	if (req->type == NAND_PAGE_WRITE) {
> +		if (req->datalen)
> +			memcpy(eng->src_page_buf + req->dataoffs,
> +			       req->databuf.out,
> +			       req->datalen);
> +
> +		if (req->ooblen)
> +			memcpy(eng->src_oob_buf + req->ooboffs,
> +			       req->oobbuf.out,
> +			       req->ooblen);
> +	}
> +
> +	if (req->mode == MTD_OPS_RAW) {
> +		if (req->type == NAND_PAGE_WRITE)
> +			mtk_ecc_data_format(nand, req);
> +
> +		return 0;
> +	}
> +
> +	eng->ecc_cfg.mode = ECC_NFI_MODE;
> +	eng->ecc_cfg.sectors = eng->nsteps;
> +	eng->ecc_cfg.op = ECC_DECODE;
> +
> +	if (req->type == NAND_PAGE_READ)
> +		return mtk_ecc_enable(eng->ecc, &eng->ecc_cfg);
> +
> +	memset(eng->bounce_oob_buf, 0xff, nanddev_per_page_oobsize(nand));
> +	if (req->ooblen) {
> +		if (req->mode == MTD_OPS_AUTO_OOB) {
> +			ret = mtd_ooblayout_set_databytes(mtd,
> +							  req->oobbuf.out,
> +							  eng->bounce_oob_buf,
> +							  req->ooboffs,
> +							  mtd->oobavail);
> +			if (ret)
> +				return ret;
> +		} else {
> +			memcpy(eng->bounce_oob_buf + req->ooboffs,
> +			       req->oobbuf.out,
> +			       req->ooblen);
> +		}
> +	}
> +
> +	eng->bbm_ctl.bbm_swap(nand, (void *)req->databuf.out,
> +			      eng->bounce_oob_buf);
> +	mtk_ecc_oob_free_shift(nand, (void *)req->oobbuf.out,
> +			       eng->bounce_oob_buf, true);
> +
> +	eng->ecc_cfg.op = ECC_ENCODE;
> +
> +	return mtk_ecc_enable(eng->ecc, &eng->ecc_cfg);
> +}
> +
> +int mtk_ecc_finish_io_req_pipelined(struct nand_device *nand,
> +				    struct nand_page_io_req *req)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	struct mtd_info *mtd = nanddev_to_mtd(nand);
> +	struct mtk_ecc_stats stats;
> +	int ret;
> +
> +	if (req->type == NAND_PAGE_WRITE) {
> +		/* Restore the source buffer data */
> +		if (req->datalen)
> +			memcpy((void *)req->databuf.out,
> +			       eng->src_page_buf + req->dataoffs,
> +			       req->datalen);
> +
> +		if (req->ooblen)
> +			memcpy((void *)req->oobbuf.out,
> +			       eng->src_oob_buf + req->ooboffs,
> +			       req->ooblen);
> +
> +		if (req->mode != MTD_OPS_RAW)
> +			mtk_ecc_disable(eng->ecc);
> +
> +		nand_ecc_restore_req(&eng->req_ctx, req);
> +
> +		return 0;
> +	}
> +
> +	if (req->mode == MTD_OPS_RAW) {
> +		mtk_ecc_data_format(nand, req);
> +		nand_ecc_restore_req(&eng->req_ctx, req);
> +
> +		return 0;
> +	}
> +
> +	ret = mtk_ecc_wait_done(eng->ecc, ECC_DECODE);
> +	if (ret) {
> +		ret = -ETIMEDOUT;
> +		goto out;
> +	}
> +
> +	if (eng->read_empty) {
> +		memset(req->databuf.in, 0xff, nanddev_page_size(nand));
> +		memset(req->oobbuf.in, 0xff, nanddev_per_page_oobsize(nand));
> +		ret = 0;
> +
> +		goto out;
> +	}
> +
> +	mtk_ecc_get_stats(eng->ecc, &stats, eng->nsteps);
> +	mtd->ecc_stats.corrected += stats.corrected;
> +	mtd->ecc_stats.failed += stats.failed;
> +
> +	/*
> +	 * Return -EBADMSG on an uncorrectable ECC error.
> +	 * Otherwise, return the bitflips.
> +	 */
> +	if (stats.failed)
> +		ret = -EBADMSG;
> +	else
> +		ret = stats.bitflips;
> +
> +	memset(eng->bounce_oob_buf, 0xff, nanddev_per_page_oobsize(nand));
> +	mtk_ecc_oob_free_shift(nand, eng->bounce_oob_buf, req->oobbuf.in, false);
> +	eng->bbm_ctl.bbm_swap(nand, req->databuf.in, eng->bounce_oob_buf);
> +
> +	if (req->ooblen) {
> +		if (req->mode == MTD_OPS_AUTO_OOB)
> +			ret = mtd_ooblayout_get_databytes(mtd,
> +							  req->oobbuf.in,
> +							  eng->bounce_oob_buf,
> +							  req->ooboffs,
> +							  mtd->oobavail);
> +		else
> +			memcpy(req->oobbuf.in,
> +			       eng->bounce_oob_buf + req->ooboffs,
> +			       req->ooblen);
> +	}
> +
> +out:
> +	mtk_ecc_disable(eng->ecc);
> +	nand_ecc_restore_req(&eng->req_ctx, req);
> +
> +	return ret;
> +}
> +
> +int mtk_ecc_init_ctx_pipelined(struct nand_device *nand)
> +{
> +	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
> +	struct mtd_info *mtd = nanddev_to_mtd(nand);
> +	struct mtk_ecc_engine *eng;
> +	struct device *dev;
> +	int free, ret;
> +
> +	/*
> +	 * In the case of a pipelined engine, the device registering the ECC
> +	 * engine is not the actual ECC engine device but the host controller.
> +	 */
> +	dev = mtk_ecc_get_engine_dev(nand->ecc.engine->dev);
> +	if (!dev)
> +		return -EINVAL;
> +
> +	eng = devm_kzalloc(dev, sizeof(*eng), GFP_KERNEL);
> +	if (!eng)
> +		return -ENOMEM;
> +
> +	nand->ecc.ctx.priv = eng;
> +	nand->ecc.engine->priv = eng;
> +
> +	eng->ecc = dev_get_drvdata(dev);
> +
> +	mtk_ecc_set_section_size_and_strength(nand);
> +
> +	ret = mtk_ecc_set_spare_per_section(nand);
> +	if (ret)
> +		return ret;
> +
> +	clk_prepare_enable(eng->ecc->clk);
> +	mtk_ecc_hw_init(eng->ecc);
> +
> +	/* Calculate OOB free bytes, excluding ECC parity data */
> +	free = (conf->strength * mtk_ecc_get_parity_bits(eng->ecc)
> +	       + 7) >> 3;
> +	free = eng->oob_per_section - free;
> +
> +	/*
> +	 * Increase the ECC strength if the OOB bytes left exceed the
> +	 * max FDM size, or reduce it if the OOB size is not enough
> +	 * for the ECC parity data.
> +	 */
> +	if (free > OOB_FREE_MAX_SIZE)
> +		eng->oob_ecc = eng->oob_per_section - OOB_FREE_MAX_SIZE;
> +	else if (free < 0)
> +		eng->oob_ecc = eng->oob_per_section - OOB_FREE_MIN_SIZE;
> +
> +	/* Calculate and adjust ECC strength based on OOB ECC bytes */
> +	conf->strength = (eng->oob_ecc << 3) /
> +			 mtk_ecc_get_parity_bits(eng->ecc);
> +	mtk_ecc_adjust_strength(eng->ecc, &conf->strength);
> +
> +	eng->oob_ecc = DIV_ROUND_UP(conf->strength *
> +		       mtk_ecc_get_parity_bits(eng->ecc), 8);
> +
> +	eng->oob_free = eng->oob_per_section - eng->oob_ecc;
> +	if (eng->oob_free > OOB_FREE_MAX_SIZE)
> +		eng->oob_free = OOB_FREE_MAX_SIZE;
> +
> +	eng->oob_free_protected = OOB_FREE_MIN_SIZE;
> +
> +	eng->oob_ecc = eng->oob_per_section - eng->oob_free;
> +
> +	if (!mtd->ooblayout)
> +		mtd_set_ooblayout(mtd, mtk_ecc_get_ooblayout());
> +
> +	ret = nand_ecc_init_req_tweaking(&eng->req_ctx, nand);
> +	if (ret)
> +		return ret;
> +
> +	eng->src_page_buf = kmalloc(nanddev_page_size(nand) +
> +			    nanddev_per_page_oobsize(nand), GFP_KERNEL);
> +	eng->bounce_page_buf = kmalloc(nanddev_page_size(nand) +
> +			       nanddev_per_page_oobsize(nand), GFP_KERNEL);
> +	if (!eng->src_page_buf || !eng->bounce_page_buf) {
> +		ret = -ENOMEM;
> +		goto cleanup_req_tweak;
> +	}
> +
> +	eng->src_oob_buf = eng->src_page_buf + nanddev_page_size(nand);
> +	eng->bounce_oob_buf = eng->bounce_page_buf + nanddev_page_size(nand);
> +
> +	mtk_ecc_set_bbm_ctl(&eng->bbm_ctl, nand);
> +	eng->ecc_cfg.strength = conf->strength;
> +	eng->ecc_cfg.len = conf->step_size + eng->oob_free_protected;
> +	mtd->bitflip_threshold = conf->strength;
> +
> +	return 0;
> +
> +cleanup_req_tweak:
> +	nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
> +
> +	return ret;
> +}
> +
> +void mtk_ecc_cleanup_ctx_pipelined(struct nand_device *nand)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +
> +	if (eng) {
> +		nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
> +		kfree(eng->src_page_buf);
> +		kfree(eng->bounce_page_buf);
> +	}
> +}
> +
> +/*
> + * The MTK ECC engine works in the pipelined case and will be
> + * registered by the drivers that wrap it.
> + */
> +static struct nand_ecc_engine_ops mtk_ecc_engine_pipelined_ops = {
> +	.init_ctx = mtk_ecc_init_ctx_pipelined,
> +	.cleanup_ctx = mtk_ecc_cleanup_ctx_pipelined,
> +	.prepare_io_req = mtk_ecc_prepare_io_req_pipelined,
> +	.finish_io_req = mtk_ecc_finish_io_req_pipelined,
> +};
> +
> +struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void)
> +{
> +	return &mtk_ecc_engine_pipelined_ops;
> +}
> +EXPORT_SYMBOL(mtk_ecc_get_pipelined_ops);
> +
>  static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
>  	.err_mask = 0x3f,
>  	.ecc_strength = ecc_strength_mt2701,
> @@ -472,6 +1083,9 @@ static const struct mtk_ecc_caps mtk_ecc_caps_mt7622 = {
>  	.ecc_strength = ecc_strength_mt7622,
>  	.ecc_regs = mt7622_ecc_regs,
>  	.num_ecc_strength = 7,
> +	.spare_size = spare_size_mt7622,
> +	.num_spare_size = 19,
> +	.max_section_size = 1024,
>  	.ecc_mode_shift = 4,
>  	.parity_bits = 13,
>  	.pg_irq_sel = 0,
> diff --git a/include/linux/mtd/nand-ecc-mtk.h b/include/linux/mtd/nand-ecc-mtk.h
> index 0e48c36e6ca0..6d550032cbd9 100644
> --- a/include/linux/mtd/nand-ecc-mtk.h
> +++ b/include/linux/mtd/nand-ecc-mtk.h
> @@ -33,6 +33,61 @@ struct mtk_ecc_config {
>  	u32 len;
>  };
>  
> +/**
> + * struct mtk_ecc_bbm_ctl - Information relative to the BBM swap
> + * @bbm_swap: BBM swap function
> + * @section: Section number in data area for swap
> + * @position: Position in @section for swap with BBM
> + */
> +struct mtk_ecc_bbm_ctl {
> +	void (*bbm_swap)(struct nand_device *nand, u8 *databuf, u8 *oobbuf);
> +	u32 section;
> +	u32 position;
> +};
> +
> +/**
> + * struct mtk_ecc_engine - Information relative to the ECC
> + * @req_ctx: Save request context and tweak the original request to fit the
> + *           engine needs
> + * @oob_per_section: OOB size for each section to store OOB free/ECC bytes
> + * @oob_per_section_idx: The index for @oob_per_section in spare size array
> + * @oob_ecc: OOB size for each section to store the ECC parity
> + * @oob_free: OOB size for each section to store the OOB free bytes
> + * @oob_free_protected: OOB free bytes will be protected by the ECC engine
> + * @section_size: The size of each section
> + * @read_empty: Indicate whether empty page for one read operation
> + * @nsteps: The number of the sections
> + * @src_page_buf: Buffer used to store source data buffer when write
> + * @src_oob_buf: Buffer used to store source OOB buffer when write
> + * @bounce_page_buf: Data bounce buffer
> + * @bounce_oob_buf: OOB bounce buffer
> + * @ecc: The ECC engine private data structure
> + * @ecc_cfg: The configuration of each ECC operation
> + * @bbm_ctl: Information relative to the BBM swap
> + */
> +struct mtk_ecc_engine {
> +	struct nand_ecc_req_tweak_ctx req_ctx;
> +
> +	u32 oob_per_section;
> +	u32 oob_per_section_idx;
> +	u32 oob_ecc;
> +	u32 oob_free;
> +	u32 oob_free_protected;
> +	u32 section_size;
> +
> +	bool read_empty;
> +	u32 nsteps;
> +
> +	u8 *src_page_buf;
> +	u8 *src_oob_buf;
> +	u8 *bounce_page_buf;
> +	u8 *bounce_oob_buf;
> +
> +	struct mtk_ecc *ecc;
> +	struct mtk_ecc_config ecc_cfg;
> +	struct mtk_ecc_bbm_ctl bbm_ctl;
> +};

This and above should not be exported and be located in the driver.

> +
>  int mtk_ecc_encode(struct mtk_ecc *, struct mtk_ecc_config *, u8 *, u32);
>  void mtk_ecc_get_stats(struct mtk_ecc *, struct mtk_ecc_stats *, int);
>  int mtk_ecc_wait_done(struct mtk_ecc *, enum mtk_ecc_operation);
> @@ -44,4 +99,17 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc);
>  struct mtk_ecc *of_mtk_ecc_get(struct device_node *);
>  void mtk_ecc_release(struct mtk_ecc *);
>  
> +#if IS_ENABLED(CONFIG_MTD_NAND_ECC_MTK)
> +
> +struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void);
> +
> +#else /* !CONFIG_MTD_NAND_ECC_MTK */
> +
> +static inline struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void)
> +{
> +	return NULL;
> +}
> +
> +#endif /* CONFIG_MTD_NAND_ECC_MTK */
> +
>  #endif


Thanks,
Miquèl



* Re: [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
@ 2021-12-09 10:32     ` Miquel Raynal
  0 siblings, 0 replies; 36+ messages in thread
From: Miquel Raynal @ 2021-12-09 10:32 UTC (permalink / raw)
  To: Xiangsheng Hou
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Xiangsheng,

xiangsheng.hou@mediatek.com wrote on Tue, 30 Nov 2021 16:31:59 +0800:

> Convert the Mediatek HW ECC engine to the ECC infrastructure with
> pipelined case.
> 
> Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
> ---
>  drivers/mtd/nand/ecc-mtk.c       | 614 +++++++++++++++++++++++++++++++
>  include/linux/mtd/nand-ecc-mtk.h |  68 ++++
>  2 files changed, 682 insertions(+)
> 
> diff --git a/drivers/mtd/nand/ecc-mtk.c b/drivers/mtd/nand/ecc-mtk.c
> index 31d7c77d5c59..c44499b3d0a5 100644
> --- a/drivers/mtd/nand/ecc-mtk.c
> +++ b/drivers/mtd/nand/ecc-mtk.c
> @@ -16,6 +16,7 @@
>  #include <linux/of_platform.h>
>  #include <linux/mutex.h>
>  
> +#include <linux/mtd/nand.h>
>  #include <linux/mtd/nand-ecc-mtk.h>
>  
>  #define ECC_IDLE_MASK		BIT(0)
> @@ -41,11 +42,17 @@
>  #define ECC_IDLE_REG(op)	((op) == ECC_ENCODE ? ECC_ENCIDLE : ECC_DECIDLE)
>  #define ECC_CTL_REG(op)		((op) == ECC_ENCODE ? ECC_ENCCON : ECC_DECCON)
>  
> +#define OOB_FREE_MAX_SIZE 8
> +#define OOB_FREE_MIN_SIZE 1
> +
>  struct mtk_ecc_caps {
>  	u32 err_mask;
>  	const u8 *ecc_strength;
>  	const u32 *ecc_regs;
>  	u8 num_ecc_strength;
> +	const u8 *spare_size;
> +	u8 num_spare_size;
> +	u32 max_section_size;
>  	u8 ecc_mode_shift;
>  	u32 parity_bits;
>  	int pg_irq_sel;
> @@ -79,6 +86,12 @@ static const u8 ecc_strength_mt7622[] = {
>  	4, 6, 8, 10, 12, 14, 16
>  };
>  
> +/* spare size for each section that each IP supports */
> +static const u8 spare_size_mt7622[] = {
> +	16, 26, 27, 28, 32, 36, 40, 44, 48, 49, 50, 51,
> +	52, 62, 61, 63, 64, 67, 74
> +};
> +
>  enum mtk_ecc_regs {
>  	ECC_ENCPAR00,
>  	ECC_ENCIRQ_EN,
> @@ -447,6 +460,604 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc)
>  }
>  EXPORT_SYMBOL(mtk_ecc_get_parity_bits);
>  
> +static inline int mtk_ecc_data_off(struct nand_device *nand, int i)
> +{
> +	int eccsize = nand->ecc.ctx.conf.step_size;
> +
> +	return i * eccsize;
> +}
> +
> +static inline int mtk_ecc_oob_free_position(struct nand_device *nand, int i)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	int position;
> +
> +	if (i < eng->bbm_ctl.section)
> +		position = (i + 1) * eng->oob_free;
> +	else if (i == eng->bbm_ctl.section)
> +		position = 0;
> +	else
> +		position = i * eng->oob_free;
> +
> +	return position;
> +}
> +
> +static inline int mtk_ecc_data_len(struct nand_device *nand)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	int eccsize = nand->ecc.ctx.conf.step_size;
> +	int eccbytes = eng->oob_ecc;
> +
> +	return eccsize + eng->oob_free + eccbytes;
> +}
> +
> +static inline u8 *mtk_ecc_section_ptr(struct nand_device *nand,  int i)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +
> +	return eng->bounce_page_buf + i * mtk_ecc_data_len(nand);
> +}
> +
> +static inline u8 *mtk_ecc_oob_free_ptr(struct nand_device *nand, int i)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	int eccsize = nand->ecc.ctx.conf.step_size;
> +
> +	return eng->bounce_page_buf + i * mtk_ecc_data_len(nand) + eccsize;
> +}
> +
> +static void mtk_ecc_no_bbm_swap(struct nand_device *a, u8 *b, u8 *c)
> +{
> +	/* nop */

Is this really useful?

> +}
> +
> +static void mtk_ecc_bbm_swap(struct nand_device *nand, u8 *databuf, u8 *oobbuf)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	int step_size = nand->ecc.ctx.conf.step_size;
> +	u32 bbm_pos = eng->bbm_ctl.position;
> +
> +	bbm_pos += eng->bbm_ctl.section * step_size;
> +
> +	swap(oobbuf[0], databuf[bbm_pos]);
> +}
> +
> +static void mtk_ecc_set_bbm_ctl(struct mtk_ecc_bbm_ctl *bbm_ctl,
> +				struct nand_device *nand)
> +{
> +	if (nanddev_page_size(nand) == 512) {
> +		bbm_ctl->bbm_swap = mtk_ecc_no_bbm_swap;
> +	} else {
> +		bbm_ctl->bbm_swap = mtk_ecc_bbm_swap;
> +		bbm_ctl->section = nanddev_page_size(nand) /
> +				   mtk_ecc_data_len(nand);
> +		bbm_ctl->position = nanddev_page_size(nand) %
> +				    mtk_ecc_data_len(nand);
> +	}
> +}
> +
> +static int mtk_ecc_ooblayout_free(struct mtd_info *mtd, int section,
> +				  struct mtd_oob_region *oob_region)
> +{
> +	struct nand_device *nand = mtd_to_nanddev(mtd);
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
> +	u32 eccsteps, bbm_bytes = 0;
> +
> +	eccsteps = mtd->writesize / conf->step_size;
> +
> +	if (section >= eccsteps)
> +		return -ERANGE;
> +
> +	/* Reserve 1 byte for BBM only for section 0 */
> +	if (section == 0)
> +		bbm_bytes = 1;
> +
> +	oob_region->length = eng->oob_free - bbm_bytes;
> +	oob_region->offset = section * eng->oob_free + bbm_bytes;
> +
> +	return 0;
> +}
> +
> +static int mtk_ecc_ooblayout_ecc(struct mtd_info *mtd, int section,
> +				 struct mtd_oob_region *oob_region)
> +{
> +	struct nand_device *nand = mtd_to_nanddev(mtd);
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +
> +	if (section)
> +		return -ERANGE;
> +
> +	oob_region->offset = eng->oob_free * eng->nsteps;
> +	oob_region->length = mtd->oobsize - oob_region->offset;
> +
> +	return 0;
> +}
> +
> +static const struct mtd_ooblayout_ops mtk_ecc_ooblayout_ops = {
> +	.free = mtk_ecc_ooblayout_free,
> +	.ecc = mtk_ecc_ooblayout_ecc,
> +};
> +
> +const struct mtd_ooblayout_ops *mtk_ecc_get_ooblayout(void)
> +{
> +	return &mtk_ecc_ooblayout_ops;
> +}
> +
> +static struct device *mtk_ecc_get_engine_dev(struct device *dev)
> +{
> +	struct platform_device *eccpdev;
> +	struct device_node *np;
> +
> +	/*
> +	 * The device node is only the host controller,
> +	 * not the actual ECC engine when pipelined case.
> +	 */
> +	np = of_parse_phandle(dev->of_node, "nand-ecc-engine", 0);
> +	if (!np)
> +		return NULL;
> +
> +	eccpdev = of_find_device_by_node(np);
> +	if (!eccpdev) {
> +		of_node_put(np);
> +		return NULL;
> +	}
> +
> +	platform_device_put(eccpdev);
> +	of_node_put(np);
> +
> +	return &eccpdev->dev;
> +}

As this will be the exact same function for all the pipelined engines,
I am tempted to put this in the core. I'll soon send a iteration, stay
tuned.

> +/*
> + * mtk_ecc_data_format() - Convert to/from MTK ECC on-flash data format
> + *
> + * MTK ECC engine organize page data by section, the on-flash format as bellow:
> + * ||          section 0         ||          section 1          || ...
> + * || data | OOB free | OOB ECC || data || OOB free | OOB ECC || ...
> + *
> + * Terefore, it`s necessary to convert data when reading/writing in raw mode.
> + */
> +static void mtk_ecc_data_format(struct nand_device *nand,

mtk_ecc_reorganize_data_layout()?

> +				struct nand_page_io_req *req)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	int step_size = nand->ecc.ctx.conf.step_size;
> +	void *databuf, *oobbuf;
> +	int i;
> +
> +	if (req->type == NAND_PAGE_WRITE) {
> +		databuf = (void *)req->databuf.out;
> +		oobbuf = (void *)req->oobbuf.out;
> +
> +		/*
> +		 * Convert the source databuf and oobbuf to MTK ECC
> +		 * on-flash data format.
> +		 */
> +		for (i = 0; i < eng->nsteps; i++) {
> +			if (i == eng->bbm_ctl.section)
> +				eng->bbm_ctl.bbm_swap(nand,
> +						      databuf, oobbuf);

Do you really need this swap? Isn't the overall move enough to put the
BBM at the right place?

> +			memcpy(mtk_ecc_section_ptr(nand, i),
> +			       databuf + mtk_ecc_data_off(nand, i),
> +			       step_size);
> +
> +			memcpy(mtk_ecc_oob_free_ptr(nand, i),
> +			       oobbuf + mtk_ecc_oob_free_position(nand, i),
> +			       eng->oob_free);
> +
> +			memcpy(mtk_ecc_oob_free_ptr(nand, i) + eng->oob_free,
> +			       oobbuf + eng->oob_free * eng->nsteps +
> +			       i * eng->oob_ecc,
> +			       eng->oob_ecc);
> +		}
> +
> +		req->databuf.out = eng->bounce_page_buf;
> +		req->oobbuf.out = eng->bounce_oob_buf;
> +	} else {
> +		databuf = req->databuf.in;
> +		oobbuf = req->oobbuf.in;
> +
> +		/*
> +		 * Convert the on-flash MTK ECC data format to
> +		 * destination databuf and oobbuf.
> +		 */
> +		memcpy(eng->bounce_page_buf, databuf,
> +		       nanddev_page_size(nand));
> +		memcpy(eng->bounce_oob_buf, oobbuf,
> +		       nanddev_per_page_oobsize(nand));
> +
> +		for (i = 0; i < eng->nsteps; i++) {
> +			memcpy(databuf + mtk_ecc_data_off(nand, i),
> +			       mtk_ecc_section_ptr(nand, i), step_size);
> +
> +			memcpy(oobbuf + mtk_ecc_oob_free_position(nand, i),
> +			       mtk_ecc_section_ptr(nand, i) + step_size,
> +			       eng->oob_free);
> +
> +			memcpy(oobbuf + eng->oob_free * eng->nsteps +
> +			       i * eng->oob_ecc,
> +			       mtk_ecc_section_ptr(nand, i) + step_size
> +			       + eng->oob_free,
> +			       eng->oob_ecc);
> +
> +			if (i == eng->bbm_ctl.section)
> +				eng->bbm_ctl.bbm_swap(nand,
> +						      databuf, oobbuf);
> +		}
> +	}
> +}
> +
> +static void mtk_ecc_oob_free_shift(struct nand_device *nand,
> +				   u8 *dst_buf, u8 *src_buf, bool write)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	u32 position;
> +	int i;
> +
> +	for (i = 0; i < eng->nsteps; i++) {
> +		if (i < eng->bbm_ctl.section)
> +			position = (i + 1) * eng->oob_free;
> +		else if (i == eng->bbm_ctl.section)
> +			position = 0;
> +		else
> +			position = i * eng->oob_free;
> +
> +		if (write)
> +			memcpy(dst_buf + i * eng->oob_free, src_buf + position,
> +			       eng->oob_free);
> +		else
> +			memcpy(dst_buf + position, src_buf + i * eng->oob_free,
> +			       eng->oob_free);
> +	}
> +}
> +
> +static void mtk_ecc_set_section_size_and_strength(struct nand_device *nand)
> +{
> +	struct nand_ecc_props *reqs = &nand->ecc.requirements;
> +	struct nand_ecc_props *user = &nand->ecc.user_conf;
> +	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +
> +	/* Configure the correction depending on the NAND device topology */
> +	if (user->step_size && user->strength) {
> +		conf->step_size = user->step_size;
> +		conf->strength = user->strength;
> +	} else if (reqs->step_size && reqs->strength) {
> +		conf->step_size = reqs->step_size;
> +		conf->strength = reqs->strength;
> +	}
> +
> +	/*
> +	 * Align the ECC strength and step size.
> +	 * The MTK HW ECC engine only supports 512- and 1024-byte steps.
> +	 */
> +	if (conf->step_size < 1024) {

I prefer stronger checks than '<'.
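For instance, a stricter variant could check the two supported sizes explicitly. A rough, untested sketch (the helper name is mine, not the driver's):

```c
/* Hypothetical helper: accept only the two supported step sizes and
 * fall back to the closest supported one otherwise. */
static int mtk_ecc_align_step_size(int step_size)
{
	switch (step_size) {
	case 512:
	case 1024:
		return step_size;
	default:
		return step_size < 1024 ? 512 : 1024;
	}
}
```

This makes the supported sizes self-documenting instead of being implied by '<' comparisons.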

> +		if (nanddev_page_size(nand) > 512 &&
> +		    eng->ecc->caps->max_section_size > 512) {
> +			conf->step_size = 1024;
> +			conf->strength <<= 1;

The operation "<<= 1" is more readable as "* 2" IMHO.

Same below in both directions.

> +		} else {
> +			conf->step_size = 512;
> +		}
> +	} else {
> +		conf->step_size = 1024;
> +	}
> +
> +	eng->section_size = conf->step_size;
> +}
> +
> +static int mtk_ecc_set_spare_per_section(struct nand_device *nand)
> +{
> +	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	const u8 *spare = eng->ecc->caps->spare_size;
> +	u32 i, closest_spare = 0;
> +
> +	eng->nsteps = nanddev_page_size(nand) / conf->step_size;
> +	eng->oob_per_section = nanddev_per_page_oobsize(nand) / eng->nsteps;
> +
> +	if (conf->step_size == 1024)
> +		eng->oob_per_section >>= 1;
> +
> +	if (eng->oob_per_section < spare[0]) {
> +		dev_err(eng->ecc->dev, "OOB size per section too small %d\n",
> +			eng->oob_per_section);
> +		return -EINVAL;
> +	}
> +
> +	for (i = 0; i < eng->ecc->caps->num_spare_size; i++) {
> +		if (eng->oob_per_section >= spare[i] &&
> +		    spare[i] >= spare[closest_spare]) {
> +			closest_spare = i;
> +			if (eng->oob_per_section == spare[i])
> +				break;
> +		}
> +	}
> +
> +	eng->oob_per_section = spare[closest_spare];
> +	eng->oob_per_section_idx = closest_spare;
> +
> +	if (conf->step_size == 1024)
> +		eng->oob_per_section <<= 1;
> +
> +	return 0;
> +}
> +
> +int mtk_ecc_prepare_io_req_pipelined(struct nand_device *nand,
> +				     struct nand_page_io_req *req)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	struct mtd_info *mtd = nanddev_to_mtd(nand);
> +	int ret;
> +
> +	nand_ecc_tweak_req(&eng->req_ctx, req);
> +
> +	/* Store the source buffer data to avoid modifying the source */
> +	if (req->type == NAND_PAGE_WRITE) {
> +		if (req->datalen)
> +			memcpy(eng->src_page_buf + req->dataoffs,
> +			       req->databuf.out,
> +			       req->datalen);
> +
> +		if (req->ooblen)
> +			memcpy(eng->src_oob_buf + req->ooboffs,
> +			       req->oobbuf.out,
> +			       req->ooblen);
> +	}
> +
> +	if (req->mode == MTD_OPS_RAW) {
> +		if (req->type == NAND_PAGE_WRITE)
> +			mtk_ecc_data_format(nand, req);
> +
> +		return 0;
> +	}
> +
> +	eng->ecc_cfg.mode = ECC_NFI_MODE;
> +	eng->ecc_cfg.sectors = eng->nsteps;
> +	eng->ecc_cfg.op = ECC_DECODE;
> +
> +	if (req->type == NAND_PAGE_READ)
> +		return mtk_ecc_enable(eng->ecc, &eng->ecc_cfg);
> +
> +	memset(eng->bounce_oob_buf, 0xff, nanddev_per_page_oobsize(nand));
> +	if (req->ooblen) {
> +		if (req->mode == MTD_OPS_AUTO_OOB) {
> +			ret = mtd_ooblayout_set_databytes(mtd,
> +							  req->oobbuf.out,
> +							  eng->bounce_oob_buf,
> +							  req->ooboffs,
> +							  mtd->oobavail);
> +			if (ret)
> +				return ret;
> +		} else {
> +			memcpy(eng->bounce_oob_buf + req->ooboffs,
> +			       req->oobbuf.out,
> +			       req->ooblen);
> +		}
> +	}
> +
> +	eng->bbm_ctl.bbm_swap(nand, (void *)req->databuf.out,
> +			      eng->bounce_oob_buf);
> +	mtk_ecc_oob_free_shift(nand, (void *)req->oobbuf.out,
> +			       eng->bounce_oob_buf, true);
> +
> +	eng->ecc_cfg.op = ECC_ENCODE;
> +
> +	return mtk_ecc_enable(eng->ecc, &eng->ecc_cfg);
> +}
> +
> +int mtk_ecc_finish_io_req_pipelined(struct nand_device *nand,
> +				    struct nand_page_io_req *req)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +	struct mtd_info *mtd = nanddev_to_mtd(nand);
> +	struct mtk_ecc_stats stats;
> +	int ret;
> +
> +	if (req->type == NAND_PAGE_WRITE) {
> +		/* Restore the source buffer data */
> +		if (req->datalen)
> +			memcpy((void *)req->databuf.out,
> +			       eng->src_page_buf + req->dataoffs,
> +			       req->datalen);
> +
> +		if (req->ooblen)
> +			memcpy((void *)req->oobbuf.out,
> +			       eng->src_oob_buf + req->ooboffs,
> +			       req->ooblen);
> +
> +		if (req->mode != MTD_OPS_RAW)
> +			mtk_ecc_disable(eng->ecc);
> +
> +		nand_ecc_restore_req(&eng->req_ctx, req);
> +
> +		return 0;
> +	}
> +
> +	if (req->mode == MTD_OPS_RAW) {
> +		mtk_ecc_data_format(nand, req);
> +		nand_ecc_restore_req(&eng->req_ctx, req);
> +
> +		return 0;
> +	}
> +
> +	ret = mtk_ecc_wait_done(eng->ecc, ECC_DECODE);
> +	if (ret) {
> +		ret = -ETIMEDOUT;
> +		goto out;
> +	}
> +
> +	if (eng->read_empty) {
> +		memset(req->databuf.in, 0xff, nanddev_page_size(nand));
> +		memset(req->oobbuf.in, 0xff, nanddev_per_page_oobsize(nand));
> +		ret = 0;
> +
> +		goto out;
> +	}
> +
> +	mtk_ecc_get_stats(eng->ecc, &stats, eng->nsteps);
> +	mtd->ecc_stats.corrected += stats.corrected;
> +	mtd->ecc_stats.failed += stats.failed;
> +
> +	/*
> +	 * Return -EBADMSG on an uncorrectable ECC error.
> +	 * Otherwise, return the bitflips.
> +	 */
> +	if (stats.failed)
> +		ret = -EBADMSG;
> +	else
> +		ret = stats.bitflips;
> +
> +	memset(eng->bounce_oob_buf, 0xff, nanddev_per_page_oobsize(nand));
> +	mtk_ecc_oob_free_shift(nand, eng->bounce_oob_buf, req->oobbuf.in, false);
> +	eng->bbm_ctl.bbm_swap(nand, req->databuf.in, eng->bounce_oob_buf);
> +
> +	if (req->ooblen) {
> +		if (req->mode == MTD_OPS_AUTO_OOB)
> +			ret = mtd_ooblayout_get_databytes(mtd,
> +							  req->oobbuf.in,
> +							  eng->bounce_oob_buf,
> +							  req->ooboffs,
> +							  mtd->oobavail);
> +		else
> +			memcpy(req->oobbuf.in,
> +			       eng->bounce_oob_buf + req->ooboffs,
> +			       req->ooblen);
> +	}
> +
> +out:
> +	mtk_ecc_disable(eng->ecc);
> +	nand_ecc_restore_req(&eng->req_ctx, req);
> +
> +	return ret;
> +}
> +
> +int mtk_ecc_init_ctx_pipelined(struct nand_device *nand)
> +{
> +	struct nand_ecc_props *conf = &nand->ecc.ctx.conf;
> +	struct mtd_info *mtd = nanddev_to_mtd(nand);
> +	struct mtk_ecc_engine *eng;
> +	struct device *dev;
> +	int free, ret;
> +
> +	/*
> +	 * In the case of a pipelined engine, the device registering the ECC
> +	 * engine is not the actual ECC engine device but the host controller.
> +	 */
> +	dev = mtk_ecc_get_engine_dev(nand->ecc.engine->dev);
> +	if (!dev)
> +		return -EINVAL;
> +
> +	eng = devm_kzalloc(dev, sizeof(*eng), GFP_KERNEL);
> +	if (!eng)
> +		return -ENOMEM;
> +
> +	nand->ecc.ctx.priv = eng;
> +	nand->ecc.engine->priv = eng;
> +
> +	eng->ecc = dev_get_drvdata(dev);
> +
> +	mtk_ecc_set_section_size_and_strength(nand);
> +
> +	ret = mtk_ecc_set_spare_per_section(nand);
> +	if (ret)
> +		return ret;
> +
> +	clk_prepare_enable(eng->ecc->clk);
> +	mtk_ecc_hw_init(eng->ecc);
> +
> +	/* Calculate the OOB free bytes, excluding the ECC parity data */
> +	free = (conf->strength * mtk_ecc_get_parity_bits(eng->ecc)
> +	       + 7) >> 3;
> +	free = eng->oob_per_section - free;
> +
> +	/*
> +	 * Increase the ECC strength if the remaining OOB is bigger than
> +	 * the max FDM size, or reduce it if the OOB size is not enough
> +	 * for the ECC parity data.
> +	 */
> +	if (free > OOB_FREE_MAX_SIZE)
> +		eng->oob_ecc = eng->oob_per_section - OOB_FREE_MAX_SIZE;
> +	else if (free < 0)
> +		eng->oob_ecc = eng->oob_per_section - OOB_FREE_MIN_SIZE;
> +
> +	/* Calculate and adjust the ECC strength based on the OOB ECC bytes */
> +	conf->strength = (eng->oob_ecc << 3) /
> +			 mtk_ecc_get_parity_bits(eng->ecc);
> +	mtk_ecc_adjust_strength(eng->ecc, &conf->strength);
> +
> +	eng->oob_ecc = DIV_ROUND_UP(conf->strength *
> +		       mtk_ecc_get_parity_bits(eng->ecc), 8);
> +
> +	eng->oob_free = eng->oob_per_section - eng->oob_ecc;
> +	if (eng->oob_free > OOB_FREE_MAX_SIZE)
> +		eng->oob_free = OOB_FREE_MAX_SIZE;
> +
> +	eng->oob_free_protected = OOB_FREE_MIN_SIZE;
> +
> +	eng->oob_ecc = eng->oob_per_section - eng->oob_free;
> +
> +	if (!mtd->ooblayout)
> +		mtd_set_ooblayout(mtd, mtk_ecc_get_ooblayout());
> +
> +	ret = nand_ecc_init_req_tweaking(&eng->req_ctx, nand);
> +	if (ret)
> +		return ret;
> +
> +	eng->src_page_buf = kmalloc(nanddev_page_size(nand) +
> +			    nanddev_per_page_oobsize(nand), GFP_KERNEL);
> +	eng->bounce_page_buf = kmalloc(nanddev_page_size(nand) +
> +			       nanddev_per_page_oobsize(nand), GFP_KERNEL);
> +	if (!eng->src_page_buf || !eng->bounce_page_buf) {
> +		ret = -ENOMEM;
> +		goto cleanup_req_tweak;
> +	}
> +
> +	eng->src_oob_buf = eng->src_page_buf + nanddev_page_size(nand);
> +	eng->bounce_oob_buf = eng->bounce_page_buf + nanddev_page_size(nand);
> +
> +	mtk_ecc_set_bbm_ctl(&eng->bbm_ctl, nand);
> +	eng->ecc_cfg.strength = conf->strength;
> +	eng->ecc_cfg.len = conf->step_size + eng->oob_free_protected;
> +	mtd->bitflip_threshold = conf->strength;
> +
> +	return 0;
> +
> +cleanup_req_tweak:
> +	nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
> +
> +	return ret;
> +}
> +
> +void mtk_ecc_cleanup_ctx_pipelined(struct nand_device *nand)
> +{
> +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> +
> +	if (eng) {
> +		nand_ecc_cleanup_req_tweaking(&eng->req_ctx);
> +		kfree(eng->src_page_buf);
> +		kfree(eng->bounce_page_buf);
> +	}
> +}
> +
> +/*
> + * The MTK ECC engine working in the pipelined case
> + * will be registered by the drivers that wrap it.
> + */
> +static struct nand_ecc_engine_ops mtk_ecc_engine_pipelined_ops = {
> +	.init_ctx = mtk_ecc_init_ctx_pipelined,
> +	.cleanup_ctx = mtk_ecc_cleanup_ctx_pipelined,
> +	.prepare_io_req = mtk_ecc_prepare_io_req_pipelined,
> +	.finish_io_req = mtk_ecc_finish_io_req_pipelined,
> +};
> +
> +struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void)
> +{
> +	return &mtk_ecc_engine_pipelined_ops;
> +}
> +EXPORT_SYMBOL(mtk_ecc_get_pipelined_ops);
> +
>  static const struct mtk_ecc_caps mtk_ecc_caps_mt2701 = {
>  	.err_mask = 0x3f,
>  	.ecc_strength = ecc_strength_mt2701,
> @@ -472,6 +1083,9 @@ static const struct mtk_ecc_caps mtk_ecc_caps_mt7622 = {
>  	.ecc_strength = ecc_strength_mt7622,
>  	.ecc_regs = mt7622_ecc_regs,
>  	.num_ecc_strength = 7,
> +	.spare_size = spare_size_mt7622,
> +	.num_spare_size = 19,
> +	.max_section_size = 1024,
>  	.ecc_mode_shift = 4,
>  	.parity_bits = 13,
>  	.pg_irq_sel = 0,
> diff --git a/include/linux/mtd/nand-ecc-mtk.h b/include/linux/mtd/nand-ecc-mtk.h
> index 0e48c36e6ca0..6d550032cbd9 100644
> --- a/include/linux/mtd/nand-ecc-mtk.h
> +++ b/include/linux/mtd/nand-ecc-mtk.h
> @@ -33,6 +33,61 @@ struct mtk_ecc_config {
>  	u32 len;
>  };
>  
> +/**
> + * struct mtk_ecc_bbm_ctl - Information relative to the BBM swap
> + * @bbm_swap: BBM swap function
> + * @section: Section number in data area for swap
> + * @position: Position in @section for swap with BBM
> + */
> +struct mtk_ecc_bbm_ctl {
> +	void (*bbm_swap)(struct nand_device *nand, u8 *databuf, u8 *oobbuf);
> +	u32 section;
> +	u32 position;
> +};
> +
> +/**
> + * struct mtk_ecc_engine - Information relative to the ECC
> + * @req_ctx: Save request context and tweak the original request to fit the
> + *           engine needs
> + * @oob_per_section: OOB size for each section to store OOB free/ECC bytes
> + * @oob_per_section_idx: The index for @oob_per_section in spare size array
> + * @oob_ecc: OOB size for each section to store the ECC parity
> + * @oob_free: OOB size for each section to store the OOB free bytes
> + * @oob_free_protected: OOB free bytes will be protected by the ECC engine
> + * @section_size: The size of each section
> + * @read_empty: Indicates whether the page was empty for one read operation
> + * @nsteps: The number of the sections
> + * @src_page_buf: Buffer used to store source data buffer when write
> + * @src_oob_buf: Buffer used to store source OOB buffer when write
> + * @bounce_page_buf: Data bounce buffer
> + * @bounce_oob_buf: OOB bounce buffer
> + * @ecc: The ECC engine private data structure
> + * @ecc_cfg: The configuration of each ECC operation
> + * @bbm_ctl: Information relative to the BBM swap
> + */
> +struct mtk_ecc_engine {
> +	struct nand_ecc_req_tweak_ctx req_ctx;
> +
> +	u32 oob_per_section;
> +	u32 oob_per_section_idx;
> +	u32 oob_ecc;
> +	u32 oob_free;
> +	u32 oob_free_protected;
> +	u32 section_size;
> +
> +	bool read_empty;
> +	u32 nsteps;
> +
> +	u8 *src_page_buf;
> +	u8 *src_oob_buf;
> +	u8 *bounce_page_buf;
> +	u8 *bounce_oob_buf;
> +
> +	struct mtk_ecc *ecc;
> +	struct mtk_ecc_config ecc_cfg;
> +	struct mtk_ecc_bbm_ctl bbm_ctl;
> +};

This and above should not be exported and be located in the driver.

> +
>  int mtk_ecc_encode(struct mtk_ecc *, struct mtk_ecc_config *, u8 *, u32);
>  void mtk_ecc_get_stats(struct mtk_ecc *, struct mtk_ecc_stats *, int);
>  int mtk_ecc_wait_done(struct mtk_ecc *, enum mtk_ecc_operation);
> @@ -44,4 +99,17 @@ unsigned int mtk_ecc_get_parity_bits(struct mtk_ecc *ecc);
>  struct mtk_ecc *of_mtk_ecc_get(struct device_node *);
>  void mtk_ecc_release(struct mtk_ecc *);
>  
> +#if IS_ENABLED(CONFIG_MTD_NAND_ECC_MTK)
> +
> +struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void);
> +
> +#else /* !CONFIG_MTD_NAND_ECC_MTK */
> +
> +struct nand_ecc_engine_ops *mtk_ecc_get_pipelined_ops(void)
> +{
> +	return NULL;
> +}
> +
> +#endif /* CONFIG_MTD_NAND_ECC_MTK */
> +
>  #endif


Thanks,
Miquèl

______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
  2021-12-09 10:32     ` Miquel Raynal
@ 2021-12-10  9:09       ` xiangsheng.hou
  -1 siblings, 0 replies; 36+ messages in thread
From: xiangsheng.hou @ 2021-12-10  9:09 UTC (permalink / raw)
  To: Miquel Raynal
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Miquel,

On Thu, 2021-12-09 at 11:32 +0100, Miquel Raynal wrote:
> Hi Xiangsheng,
> 
> xiangsheng.hou@mediatek.com wrote on Tue, 30 Nov 2021 16:31:59 +0800:
> 
> > 
> > +static void mtk_ecc_no_bbm_swap(struct nand_device *a, u8 *b, u8
> > *c)
> > +{
> > +	/* nop */
> 
> Is this really useful?

For a 512-byte page size, there is no need to do the BBM swap because
the ECC engine step size will be 512 bytes.

However, 512-byte page SLC NAND devices have existed historically,
although I have not seen such a SPI/parallel NAND device so far.

Do you think there is no need to consider these small-page devices?
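For reference, the section/position pair computed by mtk_ecc_set_bbm_ctl() above can be sketched as follows (untested; the helper name is mine). For large pages, the BBM byte at raw-page offset page_size, i.e. the first OOB byte, falls inside one of the data sections once the layout is reorganized:

```c
/* Sketch: locate where raw-page offset 'page_size' (the standard BBM
 * position) falls in the MTK per-section layout, mirroring
 * bbm_ctl.section / bbm_ctl.position. 'data_len' is the assumed
 * per-section length on flash. */
static void bbm_locate(int page_size, int data_len, int *section, int *pos)
{
	*section = page_size / data_len;	/* section holding that offset */
	*pos = page_size % data_len;		/* offset within that section */
}
```

For example, with a 2048-byte page and a 533-byte per-section length, the marker lands in section 3 at offset 449, which is why a swap with OOB byte 0 is needed there.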

> 
> > +}
> > +
> > +static void mtk_ecc_bbm_swap(struct nand_device *nand, u8
> > *databuf, u8 *oobbuf)
> > +{
> > +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> > +	int step_size = nand->ecc.ctx.conf.step_size;
> > +	u32 bbm_pos = eng->bbm_ctl.position;
> > +
> > +	bbm_pos += eng->bbm_ctl.section * step_size;
> > +
> > +	swap(oobbuf[0], databuf[bbm_pos]);
> > +}
> > +
> > +static void mtk_ecc_set_bbm_ctl(struct mtk_ecc_bbm_ctl *bbm_ctl,
> > +				struct nand_device *nand)
> > +{
> > +	if (nanddev_page_size(nand) == 512) {
> > +		bbm_ctl->bbm_swap = mtk_ecc_no_bbm_swap;
> > +	} else {
> > +		bbm_ctl->bbm_swap = mtk_ecc_bbm_swap;
> > +		bbm_ctl->section = nanddev_page_size(nand) /
> > +				   mtk_ecc_data_len(nand);
> > +		bbm_ctl->position = nanddev_page_size(nand) %
> > +				    mtk_ecc_data_len(nand);
> > +	}
> > +}
> > 
> > +
> > +static struct device *mtk_ecc_get_engine_dev(struct device *dev)
> > +{
> > +	struct platform_device *eccpdev;
> > +	struct device_node *np;
> > +
> > +	/*
> > +	 * The device node is only the host controller,
> > +	 * not the actual ECC engine in the pipelined case.
> > +	 */
> > +	np = of_parse_phandle(dev->of_node, "nand-ecc-engine", 0);
> > +	if (!np)
> > +		return NULL;
> > +
> > +	eccpdev = of_find_device_by_node(np);
> > +	if (!eccpdev) {
> > +		of_node_put(np);
> > +		return NULL;
> > +	}
> > +
> > +	platform_device_put(eccpdev);
> > +	of_node_put(np);
> > +
> > +	return &eccpdev->dev;
> > +}
> 
> As this will be the exact same function for all the pipelined
> engines,
> I am tempted to put this in the core. I'll soon send an iteration,
> stay
> tuned.
> 

Looking forward to that function.

> > +/*
> > + * mtk_ecc_data_format() - Convert to/from MTK ECC on-flash data
> > format
> > + *
> > + * The MTK ECC engine organizes page data by section; the on-flash
> > + * format is as below:
> > + * ||          section 0         ||          section 1          ||
> > ...
> > + * || data | OOB free | OOB ECC || data || OOB free | OOB ECC ||
> > ...
> > + *
> > + * Therefore, it's necessary to convert data when reading/writing
> > + * in raw mode.
> > + */
> > +static void mtk_ecc_data_format(struct nand_device *nand,
> 
> mtk_ecc_reorganize_data_layout()?

Will be changed.

> 
> > +				struct nand_page_io_req *req)
> > +{
> > +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> > +	int step_size = nand->ecc.ctx.conf.step_size;
> > +	void *databuf, *oobbuf;
> > +	int i;
> > +
> > +	if (req->type == NAND_PAGE_WRITE) {
> > +		databuf = (void *)req->databuf.out;
> > +		oobbuf = (void *)req->oobbuf.out;
> > +
> > +		/*
> > +		 * Convert the source databuf and oobbuf to MTK ECC
> > +		 * on-flash data format.
> > +		 */
> > +		for (i = 0; i < eng->nsteps; i++) {
> > +			if (i == eng->bbm_ctl.section)
> > +				eng->bbm_ctl.bbm_swap(nand,
> > +						      databuf, oobbuf);
> 
> Do you really need this swap? Isn't the overall move enough to put
> the
> BBM at the right place?
> 

In OPS_RAW mode, the flash data needs to be organized in the MTK ECC
engine data format. The other operations in this function only organize
the data by section and do not include the BBM swap.

In the other modes, this function will not be called.
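To illustrate the per-section layout quoted above ("|| data | OOB free | OOB ECC || ..."), here is a small, untested sketch of the offset arithmetic (the helper names are mine, not the driver's):

```c
/* Illustrative sketch of the MTK on-flash layout:
 * || data | OOB free | OOB ECC || data | OOB free | OOB ECC || ...
 * section_size, oob_free and oob_ecc are the per-section sizes. */
static int section_start(int i, int section_size, int oob_free, int oob_ecc)
{
	return i * (section_size + oob_free + oob_ecc);
}

static int oob_free_start(int i, int section_size, int oob_free, int oob_ecc)
{
	/* The free OOB bytes directly follow the section's data bytes. */
	return section_start(i, section_size, oob_free, oob_ecc) + section_size;
}
```

With a 512-byte step, 8 free OOB bytes and 13 ECC bytes per section, section 1 starts at byte 533 and its free OOB bytes at byte 1045, which is the reshuffling the raw-mode conversion performs.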

> > +			memcpy(mtk_ecc_section_ptr(nand, i),
> > +			       databuf + mtk_ecc_data_off(nand, i),
> > +			       step_size);
> > +
> > +			memcpy(mtk_ecc_oob_free_ptr(nand, i),
> > +			       oobbuf + mtk_ecc_oob_free_position(nand,
> > i),
> > +			       eng->oob_free);
> > +
> > +			memcpy(mtk_ecc_oob_free_ptr(nand, i) + eng-
> > >oob_free,
> > +			       oobbuf + eng->oob_free * eng->nsteps +
> > +			       i * eng->oob_ecc,
> > +			       eng->oob_ecc);
> > +		}
> > +
> > +		req->databuf.out = eng->bounce_page_buf;
> > +		req->oobbuf.out = eng->bounce_oob_buf;
> > +	} else {
> > +		databuf = req->databuf.in;
> > +		oobbuf = req->oobbuf.in;
> > +
> > +		/*
> > +		 * Convert the on-flash MTK ECC data format to
> > +		 * destination databuf and oobbuf.
> > +		 */
> > +		memcpy(eng->bounce_page_buf, databuf,
> > +		       nanddev_page_size(nand));
> > +		memcpy(eng->bounce_oob_buf, oobbuf,
> > +		       nanddev_per_page_oobsize(nand));
> > +
> > +		for (i = 0; i < eng->nsteps; i++) {
> > +			memcpy(databuf + mtk_ecc_data_off(nand, i),
> > +			       mtk_ecc_section_ptr(nand, i),
> > step_size);
> > +
> > +			memcpy(oobbuf + mtk_ecc_oob_free_position(nand,
> > i),
> > +			       mtk_ecc_section_ptr(nand, i) +
> > step_size,
> > +			       eng->oob_free);
> > +
> > +			memcpy(oobbuf + eng->oob_free * eng->nsteps +
> > +			       i * eng->oob_ecc,
> > +			       mtk_ecc_section_ptr(nand, i) + step_size
> > +			       + eng->oob_free,
> > +			       eng->oob_ecc);
> > +
> > +			if (i == eng->bbm_ctl.section)
> > +				eng->bbm_ctl.bbm_swap(nand,
> > +						      databuf, oobbuf);
> > +		}
> > +	}
> > +}
> > +
> > 
> > 
> > +/**
> > + * struct mtk_ecc_engine - Information relative to the ECC
> > + * @req_ctx: Save request context and tweak the original request
> > to fit the
> > + *           engine needs
> > + * @oob_per_section: OOB size for each section to store OOB
> > free/ECC bytes
> > + * @oob_per_section_idx: The index for @oob_per_section in spare
> > size array
> > + * @oob_ecc: OOB size for each section to store the ECC parity
> > + * @oob_free: OOB size for each section to store the OOB free
> > bytes
> > + * @oob_free_protected: OOB free bytes will be protected by the
> > ECC engine
> > + * @section_size: The size of each section
> > + * @read_empty: Indicate whether empty page for one read operation
> > + * @nsteps: The number of the sections
> > + * @src_page_buf: Buffer used to store source data buffer when
> > write
> > + * @src_oob_buf: Buffer used to store source OOB buffer when write
> > + * @bounce_page_buf: Data bounce buffer
> > + * @bounce_oob_buf: OOB bounce buffer
> > + * @ecc: The ECC engine private data structure
> > + * @ecc_cfg: The configuration of each ECC operation
> > + * @bbm_ctl: Information relative to the BBM swap
> > + */
> > +struct mtk_ecc_engine {
> > +	struct nand_ecc_req_tweak_ctx req_ctx;
> > +
> > +	u32 oob_per_section;
> > +	u32 oob_per_section_idx;
> > +	u32 oob_ecc;
> > +	u32 oob_free;
> > +	u32 oob_free_protected;
> > +	u32 section_size;
> > +
> > +	bool read_empty;
> > +	u32 nsteps;
> > +
> > +	u8 *src_page_buf;
> > +	u8 *src_oob_buf;
> > +	u8 *bounce_page_buf;
> > +	u8 *bounce_oob_buf;
> > +
> > +	struct mtk_ecc *ecc;
> > +	struct mtk_ecc_config ecc_cfg;
> > +	struct mtk_ecc_bbm_ctl bbm_ctl;
> > +};
> 
> This and above should not be exported and be located in the driver.
> 

I will fix this.

Thanks
Xiangsheng Hou

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC,v4,3/5] spi: mtk: Add mediatek SPI Nand Flash interface driver
  2021-12-09 10:20     ` Miquel Raynal
@ 2021-12-10  9:09       ` xiangsheng.hou
  -1 siblings, 0 replies; 36+ messages in thread
From: xiangsheng.hou @ 2021-12-10  9:09 UTC (permalink / raw)
  To: Miquel Raynal
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Miquel,

On Thu, 2021-12-09 at 11:20 +0100, Miquel Raynal wrote:
> Hi Xiangsheng,
> 
> xiangsheng.hou@mediatek.com wrote on Tue, 30 Nov 2021 16:32:00 +0800:
> 
> > 
> > +
> > +static int mtk_snfi_config(struct nand_device *nand,
> > +			   struct mtk_snfi *snfi)
> > +{
> > +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> > +	u32 val;
> > +
> > +	switch (nanddev_page_size(nand)) {
> > +	case 512:
> > +		val = PAGEFMT_512_2K | PAGEFMT_SEC_SEL_512;
> > +		break;
> > +	case KB(2):
> > +		if (eng->section_size == 512)
> > +			val = PAGEFMT_2K_4K | PAGEFMT_SEC_SEL_512;
> > +		else
> > +			val = PAGEFMT_512_2K;
> > +		break;
> > +	case KB(4):
> > +		if (eng->section_size == 512)
> > +			val = PAGEFMT_4K_8K | PAGEFMT_SEC_SEL_512;
> > +		else
> > +			val = PAGEFMT_2K_4K;
> > +		break;
> > +	case KB(8):
> > +		if (eng->section_size == 512)
> > +			val = PAGEFMT_8K_16K | PAGEFMT_SEC_SEL_512;
> > +		else
> > +			val = PAGEFMT_4K_8K;
> > +		break;
> > +	case KB(16):
> > +		val = PAGEFMT_8K_16K;
> > +		break;
> > +	default:
> > +		dev_err(snfi->dev, "invalid page len: %d\n",
> > +			nanddev_page_size(nand));
> > +		return -EINVAL;
> > +	}
> > +
> > +	val |= eng->oob_per_section_idx << PAGEFMT_SPARE_SHIFT;
> > +	val |= eng->oob_free << PAGEFMT_FDM_SHIFT;
> > +	val |= eng->oob_free_protected << PAGEFMT_FDM_ECC_SHIFT;
> > +	writel(val, snfi->regs + NFI_PAGEFMT);
> 
> Shouldn't this be calculated only once?

Yes, mtk_snfi_config only needs to run the first time prepare_io_req is
called. I will add a flag to record whether the configuration has been
done, so the value is not recalculated on every request.

> 
> > +
> > +	return 0;
> > +}
> > +
> > +static int mtk_snfi_ecc_init_ctx(struct nand_device *nand)
> > +{
> > +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> > +
> > +	return ops->init_ctx(nand);
> > +}
> > +
> > +static void mtk_snfi_ecc_cleanup_ctx(struct nand_device *nand)
> > +{
> > +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> > +
> > +	ops->cleanup_ctx(nand);
> > +}
> > +
> > +static int mtk_snfi_ecc_prepare_io_req(struct nand_device *nand,
> > +				       struct nand_page_io_req *req)
> > +{
> > +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> > +	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
> > +	int ret;
> > +
> > +	ret = mtk_snfi_config(nand, snfi);
> > +	if (ret)
> > +		return ret;
> > +
> > +	return ops->prepare_io_req(nand, req);
> > +}
> > +
> > +static int mtk_snfi_ecc_finish_io_req(struct nand_device *nand,
> > +				      struct nand_page_io_req *req)
> > +{
> > +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> > +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> > +	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
> > +
> > +	if (req->mode != MTD_OPS_RAW)
> > +		eng->read_empty = readl(snfi->regs + NFI_STA) &
> > STA_EMP_PAGE;
> > +
> > +	return ops->finish_io_req(nand, req);
> > +}
> > +
> > +
> > +MODULE_LICENSE("GPL v2");
> > +MODULE_AUTHOR("Xiangsheng Hou <xiangsheng.hou@mediatek.com>");
> > +MODULE_DESCRIPTION("Mediatek SPI Nand Flash interface driver");
> 
> Otherwise looks good, I believe you can drop the RFC prefix now.
> 

I will prepare the formal patch and send it for review after internal
review and testing.

Thanks
Xiangsheng Hou

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
  2021-12-10  9:09       ` xiangsheng.hou
@ 2021-12-10  9:34         ` Miquel Raynal
  -1 siblings, 0 replies; 36+ messages in thread
From: Miquel Raynal @ 2021-12-10  9:34 UTC (permalink / raw)
  To: xiangsheng.hou
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hello,

xiangsheng.hou@mediatek.com wrote on Fri, 10 Dec 2021 17:09:14 +0800:

> Hi Miquel,
> 
> On Thu, 2021-12-09 at 11:32 +0100, Miquel Raynal wrote:
> > Hi Xiangsheng,
> > 
> > xiangsheng.hou@mediatek.com wrote on Tue, 30 Nov 2021 16:31:59 +0800:
> >   
> > > 
> > > +static void mtk_ecc_no_bbm_swap(struct nand_device *a, u8 *b, u8
> > > *c)
> > > +{
> > > +	/* nop */  
> > 
> > Is this really useful?  
> 
> For a 512-byte page size there is no need to do the BBM swap, because
> the ECC engine step size will also be 512 bytes.
> 
> However, 512-byte page SLC NAND parts have existed historically, even
> though no such SPI/parallel NAND device has been seen so far.
> 
> Do you think there is no need to consider these small-page devices?

Actually I was talking about the empty helper itself. But let's keep
that aside for now, it's fine.

> 
> >   
> > > +}
> > > +
> > > +static void mtk_ecc_bbm_swap(struct nand_device *nand, u8
> > > *databuf, u8 *oobbuf)
> > > +{
> > > +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> > > +	int step_size = nand->ecc.ctx.conf.step_size;
> > > +	u32 bbm_pos = eng->bbm_ctl.position;
> > > +
> > > +	bbm_pos += eng->bbm_ctl.section * step_size;
> > > +
> > > +	swap(oobbuf[0], databuf[bbm_pos]);
> > > +}
> > > +
> > > +static void mtk_ecc_set_bbm_ctl(struct mtk_ecc_bbm_ctl *bbm_ctl,
> > > +				struct nand_device *nand)
> > > +{
> > > +	if (nanddev_page_size(nand) == 512) {
> > > +		bbm_ctl->bbm_swap = mtk_ecc_no_bbm_swap;
> > > +	} else {
> > > +		bbm_ctl->bbm_swap = mtk_ecc_bbm_swap;
> > > +		bbm_ctl->section = nanddev_page_size(nand) /
> > > +				   mtk_ecc_data_len(nand);
> > > +		bbm_ctl->position = nanddev_page_size(nand) %
> > > +				    mtk_ecc_data_len(nand);
> > > +	}
> > > +}
> > > 
> > > +
> > > +static struct device *mtk_ecc_get_engine_dev(struct device *dev)
> > > +{
> > > +	struct platform_device *eccpdev;
> > > +	struct device_node *np;
> > > +
> > > +	/*
> > > +	 * The device node is only the host controller,
> > > +	 * not the actual ECC engine when pipelined case.
> > > +	 */
> > > +	np = of_parse_phandle(dev->of_node, "nand-ecc-engine", 0);
> > > +	if (!np)
> > > +		return NULL;
> > > +
> > > +	eccpdev = of_find_device_by_node(np);
> > > +	if (!eccpdev) {
> > > +		of_node_put(np);
> > > +		return NULL;
> > > +	}
> > > +
> > > +	platform_device_put(eccpdev);
> > > +	of_node_put(np);
> > > +
> > > +	return &eccpdev->dev;
> > > +}  
> > 
> > As this will be the exact same function for all the pipelined
> > engines, I am tempted to put this in the core. I'll soon send an
> > iteration, stay tuned.
> >   
> 
> Look forward to the function.

I sent the new version yesterday but I
* forgot to CC: you
* forgot about that function as well

Let's ignore this comment for now, send your driver with the same
function in it and I'll clean that up later.

Here is the new iteration, sorry for forgetting to send it to you as
well:
https://lore.kernel.org/linux-mtd/20211209174046.535229-1-miquel.raynal@bootlin.com/T/
And here is a Github branch as well:
https://github.com/miquelraynal/linux/tree/ecc-engine

> > > +				struct nand_page_io_req *req)
> > > +{
> > > +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> > > +	int step_size = nand->ecc.ctx.conf.step_size;
> > > +	void *databuf, *oobbuf;
> > > +	int i;
> > > +
> > > +	if (req->type == NAND_PAGE_WRITE) {
> > > +		databuf = (void *)req->databuf.out;
> > > +		oobbuf = (void *)req->oobbuf.out;
> > > +
> > > +		/*
> > > +		 * Convert the source databuf and oobbuf to MTK ECC
> > > +		 * on-flash data format.
> > > +		 */
> > > +		for (i = 0; i < eng->nsteps; i++) {
> > > +			if (i == eng->bbm_ctl.section)
> > > +				eng->bbm_ctl.bbm_swap(nand,
> > > +						      databuf, oobbuf);  
> > 
> > Do you really need this swap? Isn't the overall move enough to put
> > the
> > BBM at the right place?
> >   
> 
> For OPS_RAW mode, the flash data needs to be organized in the MTK ECC
> engine data format. The other operations in this function only arrange
> the data by section and do not include the BBM swap.
> 
> For the other modes, this function will not be called.

Can you try to explain this with an ascii schema again? I'm sorry but I
don't follow it. Is the BBM placed in the first bytes of the first oob
area by the engine? Or is it placed somewhere else?


Thanks,
Miquèl

______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC,v4,3/5] spi: mtk: Add mediatek SPI Nand Flash interface driver
  2021-12-10  9:09       ` xiangsheng.hou
@ 2021-12-10  9:40         ` Miquel Raynal
  -1 siblings, 0 replies; 36+ messages in thread
From: Miquel Raynal @ 2021-12-10  9:40 UTC (permalink / raw)
  To: xiangsheng.hou
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream


xiangsheng.hou@mediatek.com wrote on Fri, 10 Dec 2021 17:09:31 +0800:

> Hi Miquel,
> 
> On Thu, 2021-12-09 at 11:20 +0100, Miquel Raynal wrote:
> > Hi Xiangsheng,
> > 
> > xiangsheng.hou@mediatek.com wrote on Tue, 30 Nov 2021 16:32:00 +0800:
> >   
> > > 
> > > +
> > > +static int mtk_snfi_config(struct nand_device *nand,
> > > +			   struct mtk_snfi *snfi)
> > > +{
> > > +	struct mtk_ecc_engine *eng = mtk_snfi_to_ecc_engine(snfi);
> > > +	u32 val;
> > > +
> > > +	switch (nanddev_page_size(nand)) {
> > > +	case 512:
> > > +		val = PAGEFMT_512_2K | PAGEFMT_SEC_SEL_512;
> > > +		break;
> > > +	case KB(2):
> > > +		if (eng->section_size == 512)
> > > +			val = PAGEFMT_2K_4K | PAGEFMT_SEC_SEL_512;
> > > +		else
> > > +			val = PAGEFMT_512_2K;
> > > +		break;
> > > +	case KB(4):
> > > +		if (eng->section_size == 512)
> > > +			val = PAGEFMT_4K_8K | PAGEFMT_SEC_SEL_512;
> > > +		else
> > > +			val = PAGEFMT_2K_4K;
> > > +		break;
> > > +	case KB(8):
> > > +		if (eng->section_size == 512)
> > > +			val = PAGEFMT_8K_16K | PAGEFMT_SEC_SEL_512;
> > > +		else
> > > +			val = PAGEFMT_4K_8K;
> > > +		break;
> > > +	case KB(16):
> > > +		val = PAGEFMT_8K_16K;
> > > +		break;
> > > +	default:
> > > +		dev_err(snfi->dev, "invalid page len: %d\n",
> > > +			nanddev_page_size(nand));
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	val |= eng->oob_per_section_idx << PAGEFMT_SPARE_SHIFT;
> > > +	val |= eng->oob_free << PAGEFMT_FDM_SHIFT;
> > > +	val |= eng->oob_free_protected << PAGEFMT_FDM_ECC_SHIFT;
> > > +	writel(val, snfi->regs + NFI_PAGEFMT);  
> > 
> > Shouldn't this be calculated only once?  
> 
> Yes, mtk_snfi_config only needs to run the first time prepare_io_req is
> called. I will add a flag to record whether the configuration has been
> done, so the value is not recalculated on every request.

No, it's fine to write down the configuration in the registers each
time you enter ->prepare() because you do not know if the engine was
used for another device or not. But what you should not have to do is to
recalculate everything before this register write. You need to move the
entire calculation to ->init_ctx() which is called only once for a
given device and then apply the configuration each time you enter
->prepare().
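
In other words, something along these lines -- a rough sketch with
simplified and partly hypothetical PAGEFMT_* values (the real driver has
more register fields and uses writel() instead of returning the value):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for the PAGEFMT_* macros in the patch. */
#define PAGEFMT_512_2K      0u
#define PAGEFMT_2K_4K       1u
#define PAGEFMT_4K_8K       2u
#define PAGEFMT_SEC_SEL_512 (1u << 2)

struct snfi_ctx {
	uint32_t pagefmt;	/* cached NFI_PAGEFMT value */
};

/* Called once from ->init_ctx(): compute and cache the register value. */
static int snfi_compute_pagefmt(struct snfi_ctx *ctx,
				unsigned int page_size,
				unsigned int section_size)
{
	uint32_t val;

	switch (page_size) {
	case 512:
		val = PAGEFMT_512_2K | PAGEFMT_SEC_SEL_512;
		break;
	case 2048:
		val = (section_size == 512) ?
		      (PAGEFMT_2K_4K | PAGEFMT_SEC_SEL_512) : PAGEFMT_512_2K;
		break;
	case 4096:
		val = (section_size == 512) ?
		      (PAGEFMT_4K_8K | PAGEFMT_SEC_SEL_512) : PAGEFMT_2K_4K;
		break;
	default:
		return -1;
	}

	ctx->pagefmt = val;
	return 0;
}

/* Called from every ->prepare_io_req(): only replay the cached value. */
static uint32_t snfi_apply_pagefmt(const struct snfi_ctx *ctx)
{
	/* Real driver: writel(ctx->pagefmt, snfi->regs + NFI_PAGEFMT); */
	return ctx->pagefmt;
}
```

This keeps the per-request path cheap while still reprogramming the
register each time, in case another device used the engine in between.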

> > > +	return 0;
> > > +}
> > > +
> > > +static int mtk_snfi_ecc_init_ctx(struct nand_device *nand)
> > > +{
> > > +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> > > +
> > > +	return ops->init_ctx(nand);
> > > +}
> > > +
> > > +static void mtk_snfi_ecc_cleanup_ctx(struct nand_device *nand)
> > > +{
> > > +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> > > +
> > > +	ops->cleanup_ctx(nand);
> > > +}
> > > +
> > > +static int mtk_snfi_ecc_prepare_io_req(struct nand_device *nand,
> > > +				       struct nand_page_io_req *req)
> > > +{
> > > +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> > > +	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
> > > +	int ret;
> > > +
> > > +	ret = mtk_snfi_config(nand, snfi);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	return ops->prepare_io_req(nand, req);
> > > +}
> > > +
> > > +static int mtk_snfi_ecc_finish_io_req(struct nand_device *nand,
> > > +				      struct nand_page_io_req *req)
> > > +{
> > > +	struct nand_ecc_engine_ops *ops = mtk_ecc_get_pipelined_ops();
> > > +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> > > +	struct mtk_snfi *snfi = mtk_nand_to_spi(nand);
> > > +
> > > +	if (req->mode != MTD_OPS_RAW)
> > > +		eng->read_empty = readl(snfi->regs + NFI_STA) &
> > > STA_EMP_PAGE;
> > > +
> > > +	return ops->finish_io_req(nand, req);
> > > +}
> > > +
> > > +
> > > +MODULE_LICENSE("GPL v2");
> > > +MODULE_AUTHOR("Xiangsheng Hou <xiangsheng.hou@mediatek.com>");
> > > +MODULE_DESCRIPTION("Mediatek SPI Nand Flash interface driver");  
> > 
> > Otherwise looks good, I believe you can drop the RFC prefix now.
> >   
> 
> I will prepare the formal patch and send for review after internal
> review and test.
> 
> Thanks
> Xiangsheng Hou


Thanks,
Miquèl


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
  2021-12-10  9:34         ` Miquel Raynal
@ 2021-12-11  3:25           ` xiangsheng.hou
  -1 siblings, 0 replies; 36+ messages in thread
From: xiangsheng.hou @ 2021-12-11  3:25 UTC (permalink / raw)
  To: Miquel Raynal
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Miquel,

On Fri, 2021-12-10 at 10:34 +0100, Miquel Raynal wrote:
> Hello,
> 
> xiangsheng.hou@mediatek.com wrote on Fri, 10 Dec 2021 17:09:14 +0800:
> > 
> > > As this will be the exact same function for all the pipelined
> > > engines, I am tempted to put this in the core. I'll soon send an
> > > iteration, stay tuned.
> > >   
> > 
> > Look forward to the function.
> 
> I sent the new version yesterday but I
> * forgot to CC: you
> * forgot about that function as well
> 
> Let's ignore this comment for now, send your driver with the same
> function in it and I'll clean that up later.
> 
> Here is the new iteration, sorry for forgetting to send it to you as
> well:
> https://lore.kernel.org/linux-mtd/20211209174046.535229-1-miquel.raynal@bootlin.com/T/
> And here is a Github branch as well:
> https://github.com/miquelraynal/linux/tree/ecc-engine

Got it, Thanks.

> 
> > > > +				struct nand_page_io_req *req)
> > > > +{
> > > > +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> > > > +	int step_size = nand->ecc.ctx.conf.step_size;
> > > > +	void *databuf, *oobbuf;
> > > > +	int i;
> > > > +
> > > > +	if (req->type == NAND_PAGE_WRITE) {
> > > > +		databuf = (void *)req->databuf.out;
> > > > +		oobbuf = (void *)req->oobbuf.out;
> > > > +
> > > > +		/*
> > > > +		 * Convert the source databuf and oobbuf to MTK
> > > > ECC
> > > > +		 * on-flash data format.
> > > > +		 */
> > > > +		for (i = 0; i < eng->nsteps; i++) {
> > > > +			if (i == eng->bbm_ctl.section)
> > > > +				eng->bbm_ctl.bbm_swap(nand,
> > > > +						      databuf,
> > > > oobbuf);  
> > > 
> > > Do you really need this swap? Isn't the overall move enough to
> > > put
> > > the
> > > BBM at the right place?
> > >   
> > 
> > For OPS_RAW mode, the flash data needs to be organized in the MTK
> > ECC engine data format. The other operations in this function only
> > arrange the data by section and do not include the BBM swap.
> > 
> > For the other modes, this function will not be called.
> 
> Can you try to explain this with an ascii schema again? I'm sorry but
> I don't follow it. Is the BBM placed in the first bytes of the first
> oob area by the engine? Or is it placed somewhere else?
> 

Yes, after the BBM swap the BBM will be in the first OOB area, as in the
NAND standard layout.

0. Difference in the on-flash data layouts

NAND standard page layout
+------------------------------+------------+
|                              |            |
|            main area         |   OOB area |
|                              |            |
+------------------------------+------------+

MTK ECC on-flash page layout (2 sections for example)
+------------+--------+------------+--------+
|            |        |            |        |    
| section(0) | OOB(0) | section(1) | OOB(1) |
|            |        |            |        | 
+------------+--------+------------+--------+

The standard BBM position falls inside the section(1) main data, so the
BBM swap operation is needed.
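
The bookkeeping behind the swap can be sketched as follows (hypothetical
names; data_len stands in for mtk_ecc_data_len(), whose exact value is
driver-specific):

```c
#include <assert.h>
#include <stdint.h>

struct bbm_ctl {
	unsigned int section;	/* section containing the standard BBM */
	unsigned int position;	/* offset of the BBM inside that section */
};

/* Mirrors mtk_ecc_set_bbm_ctl() for the large-page case. */
static void bbm_ctl_init(struct bbm_ctl *ctl,
			 unsigned int page_size, unsigned int data_len)
{
	ctl->section = page_size / data_len;
	ctl->position = page_size % data_len;
}

/* Swap the BBM byte in the main-data buffer with the first OOB byte. */
static void bbm_swap(const struct bbm_ctl *ctl, unsigned int step_size,
		     uint8_t *databuf, uint8_t *oobbuf)
{
	unsigned int bbm_pos = ctl->position + ctl->section * step_size;
	uint8_t tmp = databuf[bbm_pos];

	databuf[bbm_pos] = oobbuf[0];
	oobbuf[0] = tmp;
}
```

After the swap, the byte that lands at the standard BBM offset on flash
is the one a bad-block scan expects to find.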

The request buffers comprise req->databuf and req->oobbuf.
+----------------------------+
|                            |
|     req->databuf           |
|                            |
+----------------------------+

+-------------+
|             |
| req->oobbuf |
|             |
+-------------+

1. For OPS_RAW mode

The on-flash data format is expected to follow the MTK ECC layout.
The snfi controller writes spi_mem_op->data.buf.out to the flash
as-is.

Therefore, in OPS_RAW mode the ECC engine has to reorganize the
request data and OOB buffers into 4 parts for each section:

1) BBM swap, only for the section that needs the swap
2) section main data
3) OOB free data
4) OOB ECC data

The BBM swap ensures that the BBM position in the MTK ECC on-flash
layout is the same as in the NAND standard layout in OPS_RAW mode.

for (i = 0; i < eng->nsteps; i++) {

        /* part 1: BBM swap */
        if (i == eng->bbm_ctl.section)
                eng->bbm_ctl.bbm_swap(nand,
                                      databuf, oobbuf);

        /* part 2: main data in this section */
        memcpy(mtk_ecc_section_ptr(nand, i),
               databuf + mtk_ecc_data_off(nand, i),
               step_size);

        /* part 3: OOB free data */
        memcpy(mtk_ecc_oob_free_ptr(nand, i),
               oobbuf + mtk_ecc_oob_free_position(nand, i),
               eng->oob_free);

        /* part 4: OOB ECC data */
        memcpy(mtk_ecc_oob_free_ptr(nand, i) + eng->oob_free,
               oobbuf + eng->oob_free * eng->nsteps +
               i * eng->oob_ecc,
               eng->oob_ecc);
}
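
For reference, the per-section on-flash offsets behind helpers such as
mtk_ecc_section_ptr() could be derived along these lines (a sketch; the
real helper names and layout parameters live in the driver):

```c
#include <assert.h>

/*
 * Offset sketch for the interleaved MTK layout shown earlier:
 * | section(0) | OOB(0) | section(1) | OOB(1) | ...
 * oob_per_section = oob_free + oob_ecc bytes trailing each section.
 */
static unsigned int section_flash_off(unsigned int i,
				      unsigned int step_size,
				      unsigned int oob_per_section)
{
	return i * (step_size + oob_per_section);
}

static unsigned int oob_flash_off(unsigned int i,
				  unsigned int step_size,
				  unsigned int oob_per_section)
{
	/* The OOB of section i directly follows its main data. */
	return section_flash_off(i, step_size, oob_per_section) + step_size;
}
```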

2. For the non-OPS_RAW modes

The snfi has a feature called auto format when ECC is enabled. With
it, the snfi controller automatically rearranges the request data and
OOB data into the MTK ECC page layout, except for the BBM position.

Therefore, for a write operation the ECC engine only needs to do the
BBM swap after setting the OOB data bytes in OPS_AUTO, or after
copying the OOB data in OPS_PLACE_OOB.

The BBM swap likewise ensures that the BBM position in the MTK ECC
on-flash layout matches the NAND standard layout in the non-OPS_RAW
modes.

if (req->ooblen) {
        if (req->mode == MTD_OPS_AUTO_OOB) {
                ret = mtd_ooblayout_set_databytes(mtd,
                                                  req->oobbuf.out,
                                                  eng->bounce_oob_buf,
                                                  req->ooboffs,
                                                  mtd->oobavail);
                if (ret)
                        return ret;
        } else {
                memcpy(eng->bounce_oob_buf + req->ooboffs,
                       req->oobbuf.out,
                       req->ooblen);
        }
}

eng->bbm_ctl.bbm_swap(nand, (void *)req->databuf.out,
                      eng->bounce_oob_buf);

Thanks
Xiangsheng Hou

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
  2021-12-11  3:25           ` xiangsheng.hou
@ 2021-12-13  9:29             ` Miquel Raynal
  -1 siblings, 0 replies; 36+ messages in thread
From: Miquel Raynal @ 2021-12-13  9:29 UTC (permalink / raw)
  To: xiangsheng.hou
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Xiangsheng,

xiangsheng.hou@mediatek.com wrote on Sat, 11 Dec 2021 11:25:46 +0800:

> Hi Miquel,
> 
> On Fri, 2021-12-10 at 10:34 +0100, Miquel Raynal wrote:
> > Hello,
> > 
> > xiangsheng.hou@mediatek.com wrote on Fri, 10 Dec 2021 17:09:14 +0800:  
> > >   
> > > > As this will be the exact same function for all the pipelined
> > > > engines,
> > > > I am tempted to put this in the core. I'll soon send a iteration,
> > > > stay
> > > > tuned.
> > > >     
> > > 
> > > Look forward to the function.  
> > 
> > I sent the new version yesterday but I
> > * forgot to CC: you
> > * forgot about that function as well
> > 
> > Let's ignore this comment for now, send your driver with the same
> > function in it and I'll clean that up later.
> > 
> > Here is the new iteration, sorry for forgetting to send it to you as
> > well:
> >   
> https://lore.kernel.org/linux-mtd/20211209174046.535229-1-miquel.raynal@bootlin.com/T/
> > And here is a Github branch as well:
> > https://github.com/miquelraynal/linux/tree/ecc-engine  
> 
> Got it, Thanks.
> 
> >   
> > > > > +				struct nand_page_io_req *req)
> > > > > +{
> > > > > +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> > > > > +	int step_size = nand->ecc.ctx.conf.step_size;
> > > > > +	void *databuf, *oobbuf;
> > > > > +	int i;
> > > > > +
> > > > > +	if (req->type == NAND_PAGE_WRITE) {
> > > > > +		databuf = (void *)req->databuf.out;
> > > > > +		oobbuf = (void *)req->oobbuf.out;
> > > > > +
> > > > > +		/*
> > > > > +		 * Convert the source databuf and oobbuf to MTK
> > > > > ECC
> > > > > +		 * on-flash data format.
> > > > > +		 */
> > > > > +		for (i = 0; i < eng->nsteps; i++) {
> > > > > +			if (i == eng->bbm_ctl.section)
> > > > > +				eng->bbm_ctl.bbm_swap(nand,
> > > > > +						      databuf,
> > > > > oobbuf);    
> > > > 
> > > > Do you really need this swap? Isn't the overall move enough to
> > > > put
> > > > the
> > > > BBM at the right place?
> > > >     
> > > 
> > > For OPS_RAW mode, need organize flash data in the MTK ECC engine
> > > data
> > > format. Other operation in this function only organize data by
> > > section
> > > and not include BBM swap.
> > > 
> > > For other mode, this function will not be called.  
> > 
> > Can you try to explain this with an ascii schema again? I'm sorry but
> > I
> > don't follow it. Is the BBM placed in the first bytes of the first
> > oob
> > area by the engine? Or is it place somewhere else?
> >   
> 
> Yes, the BBM will at the first OOB area in NAND standard layout after
> BBM swap.
> 
> 0. Differential on-flash data layout
> 
> NAND standard page layout
> +------------------------------+------------+
> |                              |            |
> |            main area         |   OOB area |
> |                              |            |
> +------------------------------+------------+
> 
> MTK ECC on-flash page layout (2 section for example)
> +------------+--------+------------+--------+
> |            |        |            |        |    
> | section(0) | OOB(0) | section(1) | OOB(1) |
> |            |        |            |        | 
> +------------+--------+------------+--------+

I think we are aligned on that part.

The BBM is purely a user convention; it is not something wired into the
hardware. What I mean is: why do *you* think the BBM should be located
in the middle of section #1?

There is one layout: the layout from the NAND/MTD perspective.
There is another layout: the layout of your ECC engine.

Just consider that the BBM should be at byte 0 of OOB #0 and you will
not need any BBM swap operation anymore. I don't understand why you
absolutely want to put it in section #1.

> The standard BBM position will be section(1) main data,
> need do the BBM swap operation.
> 
> request buffer include req->databuf and req->oobbuf.
> +----------------------------+
> |                            |
> |     req->databuf           |
> |                            |
> +----------------------------+
> 
> +-------------+
> |             |
> | req->oobbuf |
> |             |
> +-------------+
> 
> 1. For the OPS_RAW mode
> 
> Expect the on-flash data format is like MTK ECC layout.
> The snfi controller will put the on-flash data as is
> spi_mem_op->data.buf.out.
> 
> Therefore, the ECC engine have to reorganize the request
> data and OOB buffer in 4 part for each section in
> OPS_RAW mode.
> 
> 1) BBM swap, only for the section need do the swap
> 2) section main data
> 3) OOB free data
> 4) OOB ECC data
> 
> The BBM swap will ensure the BBM position in MTK ECC
> on-flash layout is same as NAND standard layout in
> OPS_RAW mode.
> 
> for (i = 0; i < eng->nsteps; i++) {
> 
>         /* part 1: BBM swap */
>         if (i == eng->bbm_ctl.section)
>                 eng->bbm_ctl.bbm_swap(nand,
>                                       databuf, oobbuf);
> 
> 	/* part 2: main data in this section */
>         memcpy(mtk_ecc_section_ptr(nand, i),
>                databuf + mtk_ecc_data_off(nand, i),
>                step_size);
> 
>         /* part 3: OOB free data */
>         memcpy(mtk_ecc_oob_free_ptr(nand, i),
>                oobbuf + mtk_ecc_oob_free_position(nand, i),
>                eng->oob_free);
> 
>         /* part 4: OOB ECC data */
>         memcpy(mtk_ecc_oob_free_ptr(nand, i) + eng->oob_free,
>                oobbuf + eng->oob_free * eng->nsteps +
>                i * eng->oob_ecc,
>                eng->oob_ecc);
> }
> 
> 2. For non OPS_RAW mode
> 
> The snfi have a function called auto format with ECC enable.
> This will auto reorganize the request data and oob data in
> MTK ECC page layout by the snfi controller except the BBM position.
> 
> Therefore, the ECC engine only need do the BBM swap after set OOB data
> bytes in OPS_AUTO or after memcpy oob data in OPS_PLACE_OOB for write
> operation.
> 
> The BBM swap also ensure the BBM position in MTK ECC on-flash
> layout is
> same as NAND standard layout in non OPS_RAW mode.
> 
> if (req->ooblen) {
>         if (req->mode == MTD_OPS_AUTO_OOB) {
>                 ret = mtd_ooblayout_set_databytes(mtd,
>                                                   req->oobbuf.out,
>                                                   eng->bounce_oob_buf,
>                                                   req->ooboffs,
>                                                   mtd->oobavail);
>                 if (ret)
>                         return ret;
>         } else {
>                 memcpy(eng->bounce_oob_buf + req->ooboffs,
>                        req->oobbuf.out,
>                        req->ooblen);
>         }
> }
> 
> eng->bbm_ctl.bbm_swap(nand, (void *)req->databuf.out,
>                       eng->bounce_oob_buf);
> 
> Thanks
> Xiangsheng Hou


Thanks,
Miquèl

_______________________________________________
Linux-mediatek mailing list
Linux-mediatek@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-mediatek

^ permalink raw reply	[flat|nested] 36+ messages in thread


* Re: [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
  2021-12-13  9:29             ` Miquel Raynal
@ 2021-12-14  3:32               ` xiangsheng.hou
  -1 siblings, 0 replies; 36+ messages in thread
From: xiangsheng.hou @ 2021-12-14  3:32 UTC (permalink / raw)
  To: Miquel Raynal
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Miquel,


On Mon, 2021-12-13 at 10:29 +0100, Miquel Raynal wrote:
> Hi Xiangsheng,
> 
> xiangsheng.hou@mediatek.com wrote on Sat, 11 Dec 2021 11:25:46 +0800:
> > 
> > >   
> > > > > > +				struct nand_page_io_req *req)
> > > > > > +{
> > > > > > +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> > > > > > +	int step_size = nand->ecc.ctx.conf.step_size;
> > > > > > +	void *databuf, *oobbuf;
> > > > > > +	int i;
> > > > > > +
> > > > > > +	if (req->type == NAND_PAGE_WRITE) {
> > > > > > +		databuf = (void *)req->databuf.out;
> > > > > > +		oobbuf = (void *)req->oobbuf.out;
> > > > > > +
> > > > > > +		/*
> > > > > > +		 * Convert the source databuf and oobbuf to MTK
> > > > > > ECC
> > > > > > +		 * on-flash data format.
> > > > > > +		 */
> > > > > > +		for (i = 0; i < eng->nsteps; i++) {
> > > > > > +			if (i == eng->bbm_ctl.section)
> > > > > > +				eng->bbm_ctl.bbm_swap(nand,
> > > > > > +						      databuf,
> > > > > > oobbuf);    
> > > > > 
> > > > > Do you really need this swap? Isn't the overall move enough
> > > > > to
> > > > > put
> > > > > the
> > > > > BBM at the right place?
> > > > >     
> > > > 
> > > > For OPS_RAW mode, need organize flash data in the MTK ECC
> > > > engine
> > > > data
> > > > format. Other operation in this function only organize data by
> > > > section
> > > > and not include BBM swap.
> > > > 
> > > > For other mode, this function will not be called.  
> > > 
> > > Can you try to explain this with an ascii schema again? I'm sorry
> > > but
> > > I
> > > don't follow it. Is the BBM placed in the first bytes of the
> > > first
> > > oob
> > > area by the engine? Or is it place somewhere else?
> > >   
> > 
> > Yes, the BBM will at the first OOB area in NAND standard layout
> > after
> > BBM swap.
> > 
> > 0. Differential on-flash data layout
> > 
> > NAND standard page layout
> > +------------------------------+------------+
> > |                              |            |
> > |            main area         |   OOB area |
> > |                              |            |
> > +------------------------------+------------+
> > 
> > MTK ECC on-flash page layout (2 section for example)
> > +------------+--------+------------+--------+
> > |            |        |            |        |
> > | section(0) | OOB(0) | section(1) | OOB(1) |
> > |            |        |            |        |
> > +------------+--------+------------+--------+
> 
> I think we are aligned on that part.
> 
> The BBM is purely a user conception, it is not something wired in the
> hardware. What I mean is: why do *you* think the BBM should be
> located
> in the middle of section #1 ?

Take a 2KB NAND page with 64 bytes of OOB for example.

From the NAND perspective, the BBM is located at byte 0 of the OOB
area, i.e. at column address 2048 within the page, and this is more
than just a user convention.

Whatever the layout, the BBM position must stay at column 2048
(OOB #0), because the NAND device specification arranges it so.

That is, the BBM position of a worn bad block (marked by the user)
must be consistent with that of a factory bad block (marked by the
flash vendor).

The MTK ECC engine reorganizes data in units of sections. With a 2KB
page and 64 bytes of OOB, the step size is 1024 bytes and each
section gets 32 bytes of OOB.

In the MTK ECC engine's on-flash page data layout, column 2048 falls
in the main data in the middle of section #1:
+----------------+-------+----------------+-------+
|                |       |                |       |
|     1024B      |  32B  |    1024B       |  32B  |
|                |       |                |       |
+----------------+-------+----------------+-------+

> 
> There is one layout: the layout from the NAND/MTD perspective.
> There is another layout: the layout of your ECC engine.
> 
> Just consider that the BBM should be at byte 0 of OOB #0 and you will
> not need any BBM swap operation anymore. I don't understand why you
> absolutely want to put it in section #1.

If both reads and writes go through OPS_RAW mode, the data layout is
organized by the ECC engine, and any consistent layout would work.

However, things become chaotic when OPS_RAW operations are mixed with
OPS_AUTO_OOB/OPS_PLACE_OOB ones.

That is, the OPS_RAW on-flash data layout must match the
OPS_AUTO_OOB/OPS_PLACE_OOB one; that is the purpose of the reorganize
function.

Take mtd/tests/nandbiterrs.c for example: it writes data in
OPS_PLACE_OOB mode, rewrites it in OPS_RAW mode after inserting one
bitflip, then reads it back in OPS_PLACE_OOB mode to check the
bitflips.

> > The standard BBM position will be section(1) main data,
> > need do the BBM swap operation.
> > 
> > request buffer include req->databuf and req->oobbuf.
> > +----------------------------+
> > |                            |
> > |     req->databuf           |
> > |                            |
> > +----------------------------+
> > 
> > +-------------+
> > |             |
> > | req->oobbuf |
> > |             |
> > +-------------+
> > 
> > 1. For the OPS_RAW mode
> > 
> > Expect the on-flash data format is like MTK ECC layout.
> > The snfi controller will put the on-flash data as is
> > spi_mem_op->data.buf.out.
> > 
> > Therefore, the ECC engine have to reorganize the request
> > data and OOB buffer in 4 part for each section in
> > OPS_RAW mode.
> > 
> > 1) BBM swap, only for the section need do the swap
> > 2) section main data
> > 3) OOB free data
> > 4) OOB ECC data
> > 
> > The BBM swap will ensure the BBM position in MTK ECC
> > on-flash layout is same as NAND standard layout in
> > OPS_RAW mode.
> > 
> > for (i = 0; i < eng->nsteps; i++) {
> > 
> >         /* part 1: BBM swap */
> >         if (i == eng->bbm_ctl.section)
> >                 eng->bbm_ctl.bbm_swap(nand,
> >                                       databuf, oobbuf);
> > 
> > 	/* part 2: main data in this section */
> >         memcpy(mtk_ecc_section_ptr(nand, i),
> >                databuf + mtk_ecc_data_off(nand, i),
> >                step_size);
> > 
> >         /* part 3: OOB free data */
> >         memcpy(mtk_ecc_oob_free_ptr(nand, i),
> >                oobbuf + mtk_ecc_oob_free_position(nand, i),
> >                eng->oob_free);
> > 
> >         /* part 4: OOB ECC data */
> >         memcpy(mtk_ecc_oob_free_ptr(nand, i) + eng->oob_free,
> >                oobbuf + eng->oob_free * eng->nsteps +
> >                i * eng->oob_ecc,
> >                eng->oob_ecc);
> > }
> > 
> > 2. For non OPS_RAW mode
> > 
> > The snfi have a function called auto format with ECC enable.
> > This will auto reorganize the request data and oob data in
> > MTK ECC page layout by the snfi controller except the BBM position.
> > 
> > Therefore, the ECC engine only need do the BBM swap after set OOB
> > data
> > bytes in OPS_AUTO or after memcpy oob data in OPS_PLACE_OOB for
> > write
> > operation.
> > 
> > The BBM swap also ensure the BBM position in MTK ECC on-flash
> > layout is
> > same as NAND standard layout in non OPS_RAW mode.
> > 
> > if (req->ooblen) {
> >         if (req->mode == MTD_OPS_AUTO_OOB) {
> >                 ret = mtd_ooblayout_set_databytes(mtd,
> >                                                   req->oobbuf.out,
> >                                                   eng-
> > >bounce_oob_buf,
> >                                                   req->ooboffs,
> >                                                   mtd->oobavail);
> >                 if (ret)
> >                         return ret;
> >         } else {
> >                 memcpy(eng->bounce_oob_buf + req->ooboffs,
> >                        req->oobbuf.out,
> >                        req->ooblen);
> >         }
> > }
> > 
> > eng->bbm_ctl.bbm_swap(nand, (void *)req->databuf.out,
> >                       eng->bounce_oob_buf);
> > 

Thanks
Xiangsheng Hou
_______________________________________________
Linux-mediatek mailing list
Linux-mediatek@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-mediatek

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
@ 2021-12-14  3:32               ` xiangsheng.hou
  0 siblings, 0 replies; 36+ messages in thread
From: xiangsheng.hou @ 2021-12-14  3:32 UTC (permalink / raw)
  To: Miquel Raynal
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Miquel,


On Mon, 2021-12-13 at 10:29 +0100, Miquel Raynal wrote:
> Hi Xiangsheng,
> 
> xiangsheng.hou@mediatek.com wrote on Sat, 11 Dec 2021 11:25:46 +0800:
> > 
> > >   
> > > > > > +				struct nand_page_io_req *req)
> > > > > > +{
> > > > > > +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> > > > > > +	int step_size = nand->ecc.ctx.conf.step_size;
> > > > > > +	void *databuf, *oobbuf;
> > > > > > +	int i;
> > > > > > +
> > > > > > +	if (req->type == NAND_PAGE_WRITE) {
> > > > > > +		databuf = (void *)req->databuf.out;
> > > > > > +		oobbuf = (void *)req->oobbuf.out;
> > > > > > +
> > > > > > +		/*
> > > > > > +		 * Convert the source databuf and oobbuf to MTK
> > > > > > ECC
> > > > > > +		 * on-flash data format.
> > > > > > +		 */
> > > > > > +		for (i = 0; i < eng->nsteps; i++) {
> > > > > > +			if (i == eng->bbm_ctl.section)
> > > > > > +				eng->bbm_ctl.bbm_swap(nand,
> > > > > > +						      databuf,
> > > > > > oobbuf);    
> > > > > 
> > > > > Do you really need this swap? Isn't the overall move enough
> > > > > to
> > > > > put
> > > > > the
> > > > > BBM at the right place?
> > > > >     
> > > > 
> > > > For OPS_RAW mode, need organize flash data in the MTK ECC
> > > > engine
> > > > data
> > > > format. Other operation in this function only organize data by
> > > > section
> > > > and not include BBM swap.
> > > > 
> > > > For other mode, this function will not be called.  
> > > 
> > > Can you try to explain this with an ascii schema again? I'm sorry
> > > but
> > > I
> > > don't follow it. Is the BBM placed in the first bytes of the
> > > first
> > > oob
> > > area by the engine? Or is it place somewhere else?
> > >   
> > 
> > Yes, the BBM will at the first OOB area in NAND standard layout
> > after
> > BBM swap.
> > 
> > 0. Differential on-flash data layout
> > 
> > NAND standard page layout
> > +------------------------------+------------+
> > >                              |            |
> > >            main area         |   OOB area |
> > >                              |            |
> > 
> > +------------------------------+------------+
> > 
> > MTK ECC on-flash page layout (2 section for example)
> > +------------+--------+------------+--------+
> > >            |        |            |        |    
> > > section(0) | OOB(0) | section(1) | OOB(1) |
> > >            |        |            |        | 
> > 
> > +------------+--------+------------+--------+
> 
> I think we are aligned on that part.
> 
> The BBM is purely a user conception, it is not something wired in the
> hardware. What I mean is: why do *you* think the BBM should be
> located
> in the middle of section #1 ?

Take a NAND page of 2KB with 64 bytes of OOB for example.

From the NAND perspective, the BBM is located at OOB #0, i.e. column
address 2048 within the page, so this is not only a user conception.

No matter what the layout is, the BBM needs to sit at column 2048
(OOB #0), because the NAND device specification requires it.

That is, the BBM position of a worn bad block (user mark) needs to be
consistent with that of a factory bad block (flash vendor mark).

The MTK ECC engine reorganizes data section by section. With a 2KB NAND
page and 64 bytes of OOB, the step size will be 1024 bytes and the OOB
area for each section will be 32 bytes.

In the on-flash page data layout of the MTK ECC engine, column 2048
falls on main data in the middle of section #1:
+----------------+-------+----------------+-------+
|                |       |                |       |
|     1024B      |  32B  |    1024B       |  32B  |
|                |       |                |       |
+----------------+-------+----------------+-------+
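To make the arithmetic above concrete, here is a small stand-alone C
model of the swap. The helper names and constants are illustrative, not
the actual driver's; the real code derives them from
nand->ecc.ctx.conf.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Toy model: 2KB page + 64B OOB, split by the engine into 2 sections of
 * 1024B main data followed by 32B OOB each.  On flash, NAND column 2048
 * (byte 0 of the standard OOB area, where the BBM lives) lands inside
 * section #1's main data.
 */
#define STEP_SIZE	1024	/* main data bytes per section */
#define OOB_PER_STEP	32	/* OOB bytes per section */
#define NSTEPS		2
#define PAGESIZE	(STEP_SIZE * NSTEPS)	/* 2048 */

/* Section whose on-flash main data covers NAND column PAGESIZE (2048) */
static int bbm_section(void)
{
	return PAGESIZE / (STEP_SIZE + OOB_PER_STEP);	/* 2048 / 1056 = 1 */
}

/*
 * Offset of NAND column 2048 inside the contiguous request data buffer:
 * on flash it sits 992 bytes into section #1's main data, i.e. at
 * databuf offset 1024 + 992 = 2016 = PAGESIZE - section * OOB_PER_STEP.
 */
static size_t bbm_data_off(void)
{
	return PAGESIZE - (size_t)bbm_section() * OOB_PER_STEP;
}

/* Swap the engine-layout BBM byte with byte 0 of the request OOB buffer */
static void bbm_swap(uint8_t *databuf, uint8_t *oobbuf)
{
	size_t off = bbm_data_off();
	uint8_t tmp = databuf[off];

	databuf[off] = oobbuf[0];
	oobbuf[0] = tmp;
}
```

After the swap, the byte the engine stores at on-flash column 2048 is
the user's BBM byte, so a factory-marked bad block reads back at the
standard position under both layouts.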

> 
> There is one layout: the layout from the NAND/MTD perspective.
> There is another layout: the layout of your ECC engine.
> 
> Just consider that the BBM should be at byte 0 of OOB #0 and you will
> not need any BBM swap operation anymore. I don't understand why you
> absolutely want to put it in section #1.

When both read and write use OPS_RAW mode, the data layout is organized
by the ECC engine and any layout convention would be workable.

However, it becomes chaotic when OPS_RAW operations are mixed with
OPS_AUTO_OOB/OPS_PLACE_OOB.

That is, the OPS_RAW on-flash data layout needs to be compatible with
OPS_AUTO_OOB/OPS_PLACE_OOB; that is the purpose of the reorganize
function.

Take mtd/tests/nandbiterrs.c for example: it writes data in
OPS_PLACE_OOB mode, rewrites it in OPS_RAW mode after inserting one
bitflip, and then reads it back in OPS_PLACE_OOB mode to check the
bitflips.

> > The standard BBM position will be section(1) main data,
> > need do the BBM swap operation.
> > 
> > request buffer include req->databuf and req->oobbuf.
> > +----------------------------+
> > >                            |
> > >     req->databuf           |
> > >                            |
> > 
> > +----------------------------+
> > 
> > +-------------+
> > >             |
> > > req->oobbuf |
> > >             |
> > 
> > +-------------+
> > 
> > 1. For the OPS_RAW mode
> > 
> > Expect the on-flash data format is like MTK ECC layout.
> > The snfi controller will put the on-flash data as is
> > spi_mem_op->data.buf.out.
> > 
> > Therefore, the ECC engine have to reorganize the request
> > data and OOB buffer in 4 part for each section in
> > OPS_RAW mode.
> > 
> > 1) BBM swap, only for the section need do the swap
> > 2) section main data
> > 3) OOB free data
> > 4) OOB ECC data
> > 
> > The BBM swap will ensure the BBM position in MTK ECC
> > on-flash layout is same as NAND standard layout in
> > OPS_RAW mode.
> > 
> > for (i = 0; i < eng->nsteps; i++) {
> > 
> >         /* part 1: BBM swap */
> >         if (i == eng->bbm_ctl.section)
> >                 eng->bbm_ctl.bbm_swap(nand,
> >                                       databuf, oobbuf);
> > 
> > 	/* part 2: main data in this section */
> >         memcpy(mtk_ecc_section_ptr(nand, i),
> >                databuf + mtk_ecc_data_off(nand, i),
> >                step_size);
> > 
> >         /* part 3: OOB free data */
> >         memcpy(mtk_ecc_oob_free_ptr(nand, i),
> >                oobbuf + mtk_ecc_oob_free_position(nand, i),
> >                eng->oob_free);
> > 
> >         /* part 4: OOB ECC data */
> >         memcpy(mtk_ecc_oob_free_ptr(nand, i) + eng->oob_free,
> >                oobbuf + eng->oob_free * eng->nsteps +
> >                i * eng->oob_ecc,
> >                eng->oob_ecc);
> > }
> > 
> > 2. For non OPS_RAW mode
> > 
> > The snfi have a function called auto format with ECC enable.
> > This will auto reorganize the request data and oob data in
> > MTK ECC page layout by the snfi controller except the BBM position.
> > 
> > Therefore, the ECC engine only need do the BBM swap after set OOB
> > data
> > bytes in OPS_AUTO or after memcpy oob data in OPS_PLACE_OOB for
> > write
> > operation.
> > 
> > The BBM swap also ensure the BBM position in MTK ECC on-flash
> > layout is
> > same as NAND standard layout in non OPS_RAW mode.
> > 
> > if (req->ooblen) {
> >         if (req->mode == MTD_OPS_AUTO_OOB) {
> >                 ret = mtd_ooblayout_set_databytes(mtd,
> >                                                   req->oobbuf.out,
> >                                                   eng-
> > >bounce_oob_buf,
> >                                                   req->ooboffs,
> >                                                   mtd->oobavail);
> >                 if (ret)
> >                         return ret;
> >         } else {
> >                 memcpy(eng->bounce_oob_buf + req->ooboffs,
> >                        req->oobbuf.out,
> >                        req->ooblen);
> >         }
> > }
> > 
> > eng->bbm_ctl.bbm_swap(nand, (void *)req->databuf.out,
> >                       eng->bounce_oob_buf);
> > 

Thanks
Xiangsheng Hou
______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/


* Re: [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure
  2021-12-14  3:32               ` xiangsheng.hou
@ 2021-12-14  9:47                 ` Miquel Raynal
  -1 siblings, 0 replies; 36+ messages in thread
From: Miquel Raynal @ 2021-12-14  9:47 UTC (permalink / raw)
  To: xiangsheng.hou
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi xiangsheng.hou,

xiangsheng.hou@mediatek.com wrote on Tue, 14 Dec 2021 11:32:14 +0800:

> Hi Miquel,
> 
> 
> On Mon, 2021-12-13 at 10:29 +0100, Miquel Raynal wrote:
> > Hi Xiangsheng,
> > 
> > xiangsheng.hou@mediatek.com wrote on Sat, 11 Dec 2021 11:25:46 +0800:  
> > >   
> > > >     
> > > > > > > +				struct nand_page_io_req *req)
> > > > > > > +{
> > > > > > > +	struct mtk_ecc_engine *eng = nand_to_ecc_ctx(nand);
> > > > > > > +	int step_size = nand->ecc.ctx.conf.step_size;
> > > > > > > +	void *databuf, *oobbuf;
> > > > > > > +	int i;
> > > > > > > +
> > > > > > > +	if (req->type == NAND_PAGE_WRITE) {
> > > > > > > +		databuf = (void *)req->databuf.out;
> > > > > > > +		oobbuf = (void *)req->oobbuf.out;
> > > > > > > +
> > > > > > > +		/*
> > > > > > > +		 * Convert the source databuf and oobbuf to MTK
> > > > > > > ECC
> > > > > > > +		 * on-flash data format.
> > > > > > > +		 */
> > > > > > > +		for (i = 0; i < eng->nsteps; i++) {
> > > > > > > +			if (i == eng->bbm_ctl.section)
> > > > > > > +				eng->bbm_ctl.bbm_swap(nand,
> > > > > > > +						      databuf,
> > > > > > > oobbuf);      
> > > > > > 
> > > > > > Do you really need this swap? Isn't the overall move enough
> > > > > > to
> > > > > > put
> > > > > > the
> > > > > > BBM at the right place?
> > > > > >       
> > > > > 
> > > > > For OPS_RAW mode, need organize flash data in the MTK ECC
> > > > > engine
> > > > > data
> > > > > format. Other operation in this function only organize data by
> > > > > section
> > > > > and not include BBM swap.
> > > > > 
> > > > > For other mode, this function will not be called.    
> > > > 
> > > > Can you try to explain this with an ascii schema again? I'm sorry
> > > > but
> > > > I
> > > > don't follow it. Is the BBM placed in the first bytes of the
> > > > first
> > > > oob
> > > > area by the engine? Or is it place somewhere else?
> > > >     
> > > 
> > > Yes, the BBM will at the first OOB area in NAND standard layout
> > > after
> > > BBM swap.
> > > 
> > > 0. Differential on-flash data layout
> > > 
> > > NAND standard page layout
> > > +------------------------------+------------+  
> > > >                              |            |
> > > >            main area         |   OOB area |
> > > >                              |            |  
> > > 
> > > +------------------------------+------------+
> > > 
> > > MTK ECC on-flash page layout (2 section for example)
> > > +------------+--------+------------+--------+  
> > > >            |        |            |        |    
> > > > section(0) | OOB(0) | section(1) | OOB(1) |
> > > >            |        |            |        |   
> > > 
> > > +------------+--------+------------+--------+  
> > 
> > I think we are aligned on that part.
> > 
> > The BBM is purely a user conception, it is not something wired in the
> > hardware. What I mean is: why do *you* think the BBM should be
> > located
> > in the middle of section #1 ?  
> 
> Take NAND page 2KB and OOB 64 bytes for example.
> 
> For the NAND perspective the BBM is located at OOB #0, the column
> address is 2048 in one page, and this should not only a user
> conception.
> 
> No matter what layout, the BBM position need at column 2048(OOB #0).
> Because of the NAND device specification arrange this.
> 
> That is, The BBM position of a worn bad block(user mark) need
> consistent with a factory bad block(flash vendor mark).

Yeah actually you're right, I guess the factory bad blocks will be
located at the "wrong" position regarding the engine layout...

Fine for this function.

I'll send a v5 of the series today with a new helper,
nand_ecc_get_engine_dev() (so you don't need to have your own), as well
as a change to the API of spi_mem_generic_supports_op() that you must
use.

Thanks,
Miquèl

> For the MTK ECC engine reorganize data by section in unit.
> The step size will be 1024 bytes and the OOB area for each section will
> be 32 bytes with NAND page 2KB and OOB 64 bytes.
> 
> The on-flash page data layout for MTK ECC engine,
> column 2048 will be main data in the middle of section #1.
> +----------------+-------+----------------+-------+
> |                |       |                |       |
> |     1024B      |  32B  |    1024B       |  32B  |
> |                |       |                |       |
> +----------------+-------+----------------+-------+
> 
> > 
> > There is one layout: the layout from the NAND/MTD perspective.
> > There is another layout: the layout of your ECC engine.
> > 
> > Just consider that the BBM should be at byte 0 of OOB #0 and you will
> > not need any BBM swap operation anymore. I don't understand why you
> > absolutely want to put it in section #1.  
> 
> For the read/write both in OPS_RAW mode, the data layout will be
> organized by the ECC engine, no matter how the layout comply with, this
> will be workable.
> 
> However, it will be chaotic when mix OPS_RAW operation with
> OPS_AUTO_OOB/OPS_PLACE_OOB.
> 
> That is, the OPS_RAW mode on-flash data layout need comply with
> OPS_AUTO_OOB/OPS_PLACE_OOB, this is the reorganize function purpose.
> 
> Just take the mtd/tests/nandbiterrs.c for example.
> It will write data in OPS_PLACE_OOB mode and rewrite data in OPS_RAW
> mode after insert one bitflip. Then read in OPS_PLACE_OOB mode to check
> the bitflips.
> 
> > > The standard BBM position will be section(1) main data,
> > > need do the BBM swap operation.
> > > 
> > > request buffer include req->databuf and req->oobbuf.
> > > +----------------------------+  
> > > >                            |
> > > >     req->databuf           |
> > > >                            |  
> > > 
> > > +----------------------------+
> > > 
> > > +-------------+  
> > > >             |
> > > > req->oobbuf |
> > > >             |  
> > > 
> > > +-------------+
> > > 
> > > 1. For the OPS_RAW mode
> > > 
> > > Expect the on-flash data format is like MTK ECC layout.
> > > The snfi controller will put the on-flash data as is
> > > spi_mem_op->data.buf.out.
> > > 
> > > Therefore, the ECC engine have to reorganize the request
> > > data and OOB buffer in 4 part for each section in
> > > OPS_RAW mode.
> > > 
> > > 1) BBM swap, only for the section need do the swap
> > > 2) section main data
> > > 3) OOB free data
> > > 4) OOB ECC data
> > > 
> > > The BBM swap will ensure the BBM position in MTK ECC
> > > on-flash layout is same as NAND standard layout in
> > > OPS_RAW mode.
> > > 
> > > for (i = 0; i < eng->nsteps; i++) {
> > > 
> > >         /* part 1: BBM swap */
> > >         if (i == eng->bbm_ctl.section)
> > >                 eng->bbm_ctl.bbm_swap(nand,
> > >                                       databuf, oobbuf);
> > > 
> > > 	/* part 2: main data in this section */
> > >         memcpy(mtk_ecc_section_ptr(nand, i),
> > >                databuf + mtk_ecc_data_off(nand, i),
> > >                step_size);
> > > 
> > >         /* part 3: OOB free data */
> > >         memcpy(mtk_ecc_oob_free_ptr(nand, i),
> > >                oobbuf + mtk_ecc_oob_free_position(nand, i),
> > >                eng->oob_free);
> > > 
> > >         /* part 4: OOB ECC data */
> > >         memcpy(mtk_ecc_oob_free_ptr(nand, i) + eng->oob_free,
> > >                oobbuf + eng->oob_free * eng->nsteps +
> > >                i * eng->oob_ecc,
> > >                eng->oob_ecc);
> > > }
> > > 
> > > 2. For non OPS_RAW mode
> > > 
> > > The snfi have a function called auto format with ECC enable.
> > > This will auto reorganize the request data and oob data in
> > > MTK ECC page layout by the snfi controller except the BBM position.
> > > 
> > > Therefore, the ECC engine only need do the BBM swap after set OOB
> > > data
> > > bytes in OPS_AUTO or after memcpy oob data in OPS_PLACE_OOB for
> > > write
> > > operation.
> > > 
> > > The BBM swap also ensure the BBM position in MTK ECC on-flash
> > > layout is
> > > same as NAND standard layout in non OPS_RAW mode.
> > > 
> > > if (req->ooblen) {
> > >         if (req->mode == MTD_OPS_AUTO_OOB) {
> > >                 ret = mtd_ooblayout_set_databytes(mtd,
> > >                                                   req->oobbuf.out,
> > >                                                   eng-  
> > > >bounce_oob_buf,  
> > >                                                   req->ooboffs,
> > >                                                   mtd->oobavail);
> > >                 if (ret)
> > >                         return ret;
> > >         } else {
> > >                 memcpy(eng->bounce_oob_buf + req->ooboffs,
> > >                        req->oobbuf.out,
> > >                        req->ooblen);
> > >         }
> > > }
> > > 
> > > eng->bbm_ctl.bbm_swap(nand, (void *)req->databuf.out,
> > >                       eng->bounce_oob_buf);
> > >   
> 
> Thanks
> Xiangsheng Hou

_______________________________________________
Linux-mediatek mailing list
Linux-mediatek@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-mediatek



* Re: [RFC,v4,4/5] mtd: spinand: Move set/get OOB databytes to each ECC engines
  2021-11-30  8:32   ` Xiangsheng Hou
@ 2021-12-14 11:41     ` Miquel Raynal
  -1 siblings, 0 replies; 36+ messages in thread
From: Miquel Raynal @ 2021-12-14 11:41 UTC (permalink / raw)
  To: Xiangsheng Hou
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Xiangsheng,

xiangsheng.hou@mediatek.com wrote on Tue, 30 Nov 2021 16:32:01 +0800:

> Move set/get OOB databytes to each ECC engines when in AUTO mode.
> For read/write in AUTO mode, the OOB bytes include not only free
> date bytes but also ECC data bytes.

This is more or less ok, I would rephrase it to give more details about
the issue:

Move the OOB data bytes handling in AUTO mode to the ECC drivers: only
they know what is in the OOB buffer, and we must make sure that the ECC
bytes do not smash the data provided by the user in this mode. So the
ECC drivers must take care of any data that could be present at the
beginning of the buffer before generating the ECC bytes.

> And for some special ECC engine,
> the data bytes in OOB may be mixed with main data. For example,
> mediatek ECC engine will be one more main data byte swap with BBM.
> So, just put these operation in each ECC engine to distinguish
> the differentiation.

But this is not related to your change, please drop it.

Needs a Fixes tag here, I believe.

> Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
> ---
>  drivers/mtd/nand/ecc-sw-bch.c           | 71 ++++++++++++++++---
>  drivers/mtd/nand/ecc-sw-hamming.c       | 71 ++++++++++++++++---
>  drivers/mtd/nand/spi/core.c             | 93 +++++++++++++++++--------
>  include/linux/mtd/nand-ecc-sw-bch.h     |  4 ++
>  include/linux/mtd/nand-ecc-sw-hamming.h |  4 ++
>  include/linux/mtd/spinand.h             |  4 ++
>  6 files changed, 198 insertions(+), 49 deletions(-)
> 
> diff --git a/drivers/mtd/nand/ecc-sw-bch.c b/drivers/mtd/nand/ecc-sw-bch.c
> index 405552d014a8..bda31ef8f0b8 100644
> --- a/drivers/mtd/nand/ecc-sw-bch.c
> +++ b/drivers/mtd/nand/ecc-sw-bch.c
> @@ -238,7 +238,9 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
>  	engine_conf->code_size = code_size;
>  	engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
>  	engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
> -	if (!engine_conf->calc_buf || !engine_conf->code_buf) {
> +	engine_conf->oob_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
> +	if (!engine_conf->calc_buf || !engine_conf->code_buf ||
> +	    !engine_conf->oob_buf) {
>  		ret = -ENOMEM;
>  		goto free_bufs;
>  	}
> @@ -267,6 +269,7 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
>  	nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
>  	kfree(engine_conf->calc_buf);
>  	kfree(engine_conf->code_buf);
> +	kfree(engine_conf->oob_buf);
>  free_engine_conf:
>  	kfree(engine_conf);
>  
> @@ -283,6 +286,7 @@ void nand_ecc_sw_bch_cleanup_ctx(struct nand_device *nand)
>  		nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
>  		kfree(engine_conf->calc_buf);
>  		kfree(engine_conf->code_buf);
> +		kfree(engine_conf->oob_buf);
>  		kfree(engine_conf);
>  	}
>  }
> @@ -299,22 +303,42 @@ static int nand_ecc_sw_bch_prepare_io_req(struct nand_device *nand,
>  	int total = nand->ecc.ctx.total;
>  	u8 *ecccalc = engine_conf->calc_buf;
>  	const u8 *data;
> -	int i;
> +	int i, ret = 0;

int i, ret;

>  
>  	/* Nothing to do for a raw operation */
>  	if (req->mode == MTD_OPS_RAW)
>  		return 0;
>  
> -	/* This engine does not provide BBM/free OOB bytes protection */
> -	if (!req->datalen)
> -		return 0;
> -

Why? Please drop this removal here and below, it has nothing to do with
the current fix, right?

>  	nand_ecc_tweak_req(&engine_conf->req_ctx, req);
>  
>  	/* No more preparation for page read */
>  	if (req->type == NAND_PAGE_READ)
>  		return 0;
>  
> +	if (req->ooblen) {
> +		memset(engine_conf->oob_buf, 0xff,
> +		       nanddev_per_page_oobsize(nand));

I think this is only needed in the AUTO case.

> +
> +		if (req->mode == MTD_OPS_AUTO_OOB) {
> +			ret = mtd_ooblayout_set_databytes(mtd, req->oobbuf.out,
> +							  engine_conf->oob_buf,
> +							  req->ooboffs,
> +							  mtd->oobavail);
> +			if (ret)
> +				return ret;
> +		} else {
> +			memcpy(engine_conf->oob_buf + req->ooboffs,
> +			       req->oobbuf.out, req->ooblen);
> +		}
> +
> +		engine_conf->src_oob_buf = (void *)req->oobbuf.out;
> +		req->oobbuf.out = engine_conf->oob_buf;
> +	}
> +
> +	/* This engine does not provide BBM/free OOB bytes protection */
> +	if (!req->datalen)
> +		return 0;
> +

I believe we can do something simpler:

if (req->ooblen && req->mode == AUTO) {
	memcpy(oobbuf, req->oobbuf.out...
	mtd_ooblayout_set_databytes(using oobbuf)
}

But I believe this should be done before tweaking the request so that
you can avoid messing with src_oob_buf, right?
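As a side note, the layout-aware scatter performed by
mtd_ooblayout_set_databytes() is exactly why the AUTO path cannot be a
plain memcpy(). A toy model of that scatter, with a made-up layout (2
sections, 32B OOB each, first 8 bytes free) rather than any engine's
real mtd_ooblayout_ops:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative layout, not a real engine's: 2 sections of 32B OOB,
 * of which the first 8 bytes per section are "free" (user-visible). */
#define NSECTIONS	 2
#define OOB_PER_SECTION	 32
#define FREE_PER_SECTION 8
#define OOBAVAIL	 (NSECTIONS * FREE_PER_SECTION)	/* 16 */

/*
 * AUTO-mode write path: scatter up to 'len' user bytes into the
 * free-byte positions of the full OOB buffer, leaving the ECC byte
 * positions untouched (mirrors what mtd_ooblayout_set_databytes()
 * does with the engine's real layout callbacks).
 */
static void oob_set_databytes(uint8_t *oob, const uint8_t *user, size_t len)
{
	size_t i;

	for (i = 0; i < len && i < OOBAVAIL; i++)
		oob[(i / FREE_PER_SECTION) * OOB_PER_SECTION +
		    (i % FREE_PER_SECTION)] = user[i];
}
```

With this layout, user byte 8 lands at OOB offset 32 (the start of
section #1's free area) rather than at offset 8, which is an ECC byte
position that the PLACE path's memcpy() would have clobbered.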

>  	/* Preparation for page write: derive the ECC bytes and place them */
>  	for (i = 0, data = req->databuf.out;
>  	     eccsteps;
> @@ -344,12 +368,36 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand,
>  	if (req->mode == MTD_OPS_RAW)
>  		return 0;
>  
> -	/* This engine does not provide BBM/free OOB bytes protection */
> -	if (!req->datalen)
> -		return 0;
> -
>  	/* No more preparation for page write */
>  	if (req->type == NAND_PAGE_WRITE) {
> +		if (req->ooblen)
> +			req->oobbuf.out = engine_conf->src_oob_buf;
> +
> +		nand_ecc_restore_req(&engine_conf->req_ctx, req);
> +		return 0;
> +	}
> +
> +	if (req->ooblen) {
> +		memset(engine_conf->oob_buf, 0xff,
> +		       nanddev_per_page_oobsize(nand));
> +
> +		if (req->mode == MTD_OPS_AUTO_OOB) {
> +			ret = mtd_ooblayout_get_databytes(mtd,
> +							  engine_conf->oob_buf,
> +							  req->oobbuf.in,
> +							  req->ooboffs,
> +							  mtd->oobavail);
> +			if (ret)
> +				return ret;
> +		} else {
> +			memcpy(engine_conf->oob_buf,
> +			       req->oobbuf.in + req->ooboffs, req->ooblen);
> +		}
> +	}
> +
> +	/* This engine does not provide BBM/free OOB bytes protection */
> +	if (!req->datalen) {
> +		req->oobbuf.in = engine_conf->oob_buf;
>  		nand_ecc_restore_req(&engine_conf->req_ctx, req);
>  		return 0;
>  	}
> @@ -379,6 +427,9 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand,
>  		}
>  	}
>  
> +	if (req->ooblen)
> +		req->oobbuf.in = engine_conf->oob_buf;
> +
>  	nand_ecc_restore_req(&engine_conf->req_ctx, req);
>  
>  	return max_bitflips;

Same applies to the two other engines.

Thanks,
Miquèl



* Re: [RFC,v4,4/5] mtd: spinand: Move set/get OOB databytes to each ECC engines
@ 2021-12-14 11:41     ` Miquel Raynal
  0 siblings, 0 replies; 36+ messages in thread
From: Miquel Raynal @ 2021-12-14 11:41 UTC (permalink / raw)
  To: Xiangsheng Hou
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Xiangsheng,

xiangsheng.hou@mediatek.com wrote on Tue, 30 Nov 2021 16:32:01 +0800:

> Move set/get OOB databytes to each ECC engines when in AUTO mode.
> For read/write in AUTO mode, the OOB bytes include not only free
> data bytes but also ECC data bytes.

This is more or less ok, but I would rephrase it to give more details
about the issue:

Move the data bytes handling in AUTO mode to the ECC drivers: only
they have the knowledge of what's in the OOB buffer, and we should
make sure that the ECC bytes do not smash the data provided by the
user in this mode. So the ECC drivers must take care of any data that
could be present at the beginning of the buffer before generating the
ECC bytes.

> And for some special ECC engine,
> the data bytes in OOB may be mixed with main data. For example,
> mediatek ECC engine will be one more main data byte swap with BBM.
> So, just put these operation in each ECC engine to distinguish
> the differentiation.

But this is not related to your change, please drop it.

This needs a Fixes: tag here, I believe.

> Signed-off-by: Xiangsheng Hou <xiangsheng.hou@mediatek.com>
> ---
>  drivers/mtd/nand/ecc-sw-bch.c           | 71 ++++++++++++++++---
>  drivers/mtd/nand/ecc-sw-hamming.c       | 71 ++++++++++++++++---
>  drivers/mtd/nand/spi/core.c             | 93 +++++++++++++++++--------
>  include/linux/mtd/nand-ecc-sw-bch.h     |  4 ++
>  include/linux/mtd/nand-ecc-sw-hamming.h |  4 ++
>  include/linux/mtd/spinand.h             |  4 ++
>  6 files changed, 198 insertions(+), 49 deletions(-)
> 
> diff --git a/drivers/mtd/nand/ecc-sw-bch.c b/drivers/mtd/nand/ecc-sw-bch.c
> index 405552d014a8..bda31ef8f0b8 100644
> --- a/drivers/mtd/nand/ecc-sw-bch.c
> +++ b/drivers/mtd/nand/ecc-sw-bch.c
> @@ -238,7 +238,9 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
>  	engine_conf->code_size = code_size;
>  	engine_conf->calc_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
>  	engine_conf->code_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
> -	if (!engine_conf->calc_buf || !engine_conf->code_buf) {
> +	engine_conf->oob_buf = kzalloc(mtd->oobsize, GFP_KERNEL);
> +	if (!engine_conf->calc_buf || !engine_conf->code_buf ||
> +	    !engine_conf->oob_buf) {
>  		ret = -ENOMEM;
>  		goto free_bufs;
>  	}
> @@ -267,6 +269,7 @@ int nand_ecc_sw_bch_init_ctx(struct nand_device *nand)
>  	nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
>  	kfree(engine_conf->calc_buf);
>  	kfree(engine_conf->code_buf);
> +	kfree(engine_conf->oob_buf);
>  free_engine_conf:
>  	kfree(engine_conf);
>  
> @@ -283,6 +286,7 @@ void nand_ecc_sw_bch_cleanup_ctx(struct nand_device *nand)
>  		nand_ecc_cleanup_req_tweaking(&engine_conf->req_ctx);
>  		kfree(engine_conf->calc_buf);
>  		kfree(engine_conf->code_buf);
> +		kfree(engine_conf->oob_buf);
>  		kfree(engine_conf);
>  	}
>  }
> @@ -299,22 +303,42 @@ static int nand_ecc_sw_bch_prepare_io_req(struct nand_device *nand,
>  	int total = nand->ecc.ctx.total;
>  	u8 *ecccalc = engine_conf->calc_buf;
>  	const u8 *data;
> -	int i;
> +	int i, ret = 0;

int i, ret;

>  
>  	/* Nothing to do for a raw operation */
>  	if (req->mode == MTD_OPS_RAW)
>  		return 0;
>  
> -	/* This engine does not provide BBM/free OOB bytes protection */
> -	if (!req->datalen)
> -		return 0;
> -

Why? Please drop this removal here and below, it has nothing to do with
the current fix, right?

>  	nand_ecc_tweak_req(&engine_conf->req_ctx, req);
>  
>  	/* No more preparation for page read */
>  	if (req->type == NAND_PAGE_READ)
>  		return 0;
>  
> +	if (req->ooblen) {
> +		memset(engine_conf->oob_buf, 0xff,
> +		       nanddev_per_page_oobsize(nand));

I think this is only needed in the AUTO case.

> +
> +		if (req->mode == MTD_OPS_AUTO_OOB) {
> +			ret = mtd_ooblayout_set_databytes(mtd, req->oobbuf.out,
> +							  engine_conf->oob_buf,
> +							  req->ooboffs,
> +							  mtd->oobavail);
> +			if (ret)
> +				return ret;
> +		} else {
> +			memcpy(engine_conf->oob_buf + req->ooboffs,
> +			       req->oobbuf.out, req->ooblen);
> +		}
> +
> +		engine_conf->src_oob_buf = (void *)req->oobbuf.out;
> +		req->oobbuf.out = engine_conf->oob_buf;
> +	}
> +
> +	/* This engine does not provide BBM/free OOB bytes protection */
> +	if (!req->datalen)
> +		return 0;
> +

I believe we can do something simpler:

if (req->ooblen && req->mode == AUTO) {
	memcpy(oobbuf, req->oobbuf.out...
	mtd_ooblayout_set_databytes(using oobbuf)
}

But I believe this should be done before tweaking the request so that
you can avoid messing with src_oob_buf, right?

>  	/* Preparation for page write: derive the ECC bytes and place them */
>  	for (i = 0, data = req->databuf.out;
>  	     eccsteps;
> @@ -344,12 +368,36 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand,
>  	if (req->mode == MTD_OPS_RAW)
>  		return 0;
>  
> -	/* This engine does not provide BBM/free OOB bytes protection */
> -	if (!req->datalen)
> -		return 0;
> -
>  	/* No more preparation for page write */
>  	if (req->type == NAND_PAGE_WRITE) {
> +		if (req->ooblen)
> +			req->oobbuf.out = engine_conf->src_oob_buf;
> +
> +		nand_ecc_restore_req(&engine_conf->req_ctx, req);
> +		return 0;
> +	}
> +
> +	if (req->ooblen) {
> +		memset(engine_conf->oob_buf, 0xff,
> +		       nanddev_per_page_oobsize(nand));
> +
> +		if (req->mode == MTD_OPS_AUTO_OOB) {
> +			ret = mtd_ooblayout_get_databytes(mtd,
> +							  engine_conf->oob_buf,
> +							  req->oobbuf.in,
> +							  req->ooboffs,
> +							  mtd->oobavail);
> +			if (ret)
> +				return ret;
> +		} else {
> +			memcpy(engine_conf->oob_buf,
> +			       req->oobbuf.in + req->ooboffs, req->ooblen);
> +		}
> +	}
> +
> +	/* This engine does not provide BBM/free OOB bytes protection */
> +	if (!req->datalen) {
> +		req->oobbuf.in = engine_conf->oob_buf;
>  		nand_ecc_restore_req(&engine_conf->req_ctx, req);
>  		return 0;
>  	}
> @@ -379,6 +427,9 @@ static int nand_ecc_sw_bch_finish_io_req(struct nand_device *nand,
>  		}
>  	}
>  
> +	if (req->ooblen)
> +		req->oobbuf.in = engine_conf->oob_buf;
> +
>  	nand_ecc_restore_req(&engine_conf->req_ctx, req);
>  
>  	return max_bitflips;

Same applies to the two other engines.

Thanks,
Miquèl

______________________________________________________
Linux MTD discussion mailing list
http://lists.infradead.org/mailman/listinfo/linux-mtd/


* Re: [RFC,v4,4/5] mtd: spinand: Move set/get OOB databytes to each ECC engines
  2021-12-14 11:41     ` Miquel Raynal
@ 2021-12-20  7:37       ` xiangsheng.hou
  -1 siblings, 0 replies; 36+ messages in thread
From: xiangsheng.hou @ 2021-12-20  7:37 UTC (permalink / raw)
  To: Miquel Raynal
  Cc: broonie, benliang.zhao, dandan.he, guochun.mao, bin.zhang,
	sanny.chen, mao.zhong, yingjoe.chen, donghunt, rdlee, linux-mtd,
	linux-mediatek, srv_heupstream

Hi Miquel,

On Tue, 2021-12-14 at 12:41 +0100, Miquel Raynal wrote:
> 
> > 
> > @@ -299,22 +303,42 @@ static int
> > nand_ecc_sw_bch_prepare_io_req(struct nand_device *nand,
> >  	int total = nand->ecc.ctx.total;
> >  	u8 *ecccalc = engine_conf->calc_buf;
> >  	const u8 *data;
> > -	int i;
> > +	int i, ret = 0;
> 
> int i, ret;
> 
> >  
> >  	/* Nothing to do for a raw operation */
> >  	if (req->mode == MTD_OPS_RAW)
> >  		return 0;
> >  
> > -	/* This engine does not provide BBM/free OOB bytes protection
> > */
> > -	if (!req->datalen)
> > -		return 0;
> > -
> 
> Why? Please drop this removal here and below, it has nothing to do
> with
> the current fix, right?
> 

For a write operation in MTD_OPS_AUTO_OOB mode with req->datalen == 0
and req->ooblen != 0, the OOB data bytes still need to be set, so I
moved this check further down.

However, for MTD_OPS_RAW and MTD_OPS_PLACE_OOB modes it is unnecessary.

I will try to fix this.

> >  	nand_ecc_tweak_req(&engine_conf->req_ctx, req);
> >  
> >  	/* No more preparation for page read */
> >  	if (req->type == NAND_PAGE_READ)
> >  		return 0;
> >  
> > +	if (req->ooblen) {
> > +		memset(engine_conf->oob_buf, 0xff,
> > +		       nanddev_per_page_oobsize(nand));
> 
> I think this is only needed in the AUTO case.
> 
> > +
> > +		if (req->mode == MTD_OPS_AUTO_OOB) {
> > +			ret = mtd_ooblayout_set_databytes(mtd, req-
> > >oobbuf.out,
> > +							  engine_conf-
> > >oob_buf,
> > +							  req->ooboffs,
> > +							  mtd-
> > >oobavail);
> > +			if (ret)
> > +				return ret;
> > +		} else {
> > +			memcpy(engine_conf->oob_buf + req->ooboffs,
> > +			       req->oobbuf.out, req->ooblen);
> > +		}
> > +
> > +		engine_conf->src_oob_buf = (void *)req->oobbuf.out;
> > +		req->oobbuf.out = engine_conf->oob_buf;
> > +	}
> > +
> > +	/* This engine does not provide BBM/free OOB bytes protection
> > */
> > +	if (!req->datalen)
> > +		return 0;
> > +
> 
> I believe we can do something simpler:
> 
> if (req->ooblen && req->mode == AUTO) {
> 	memcpy(oobbuf, req->oobbuf.out...
> 	mtd_ooblayout_set_databytes(using oobbuf)
> }
> 
> But I believe this should be done before tweaking the request so that
> you can avoid messing with src_oob_buf, right?

Yes, you are right. In MTD_OPS_AUTO_OOB mode, the maximal request OOB
length is mtd->oobavail, which is smaller than nand->memorg.oobsize.

Therefore, this will use a bounce OOB buffer when tweaking the request.

> 
> Same applies to the two other engines.

I will.

Thanks
Xiangsheng Hou
_______________________________________________
Linux-mediatek mailing list
Linux-mediatek@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-mediatek



end of thread, other threads:[~2021-12-20  7:48 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-30  8:31 [RFC,v4,0/5] Add Mediatek SPI Nand controller and convert ECC driver Xiangsheng Hou
2021-11-30  8:31 ` Xiangsheng Hou
2021-11-30  8:31 ` [RFC,v4,1/5] mtd: nand: ecc: Move mediatek " Xiangsheng Hou
2021-11-30  8:31   ` Xiangsheng Hou
2021-11-30  8:31 ` [RFC,v4,2/5] mtd: nand: ecc: mtk: Convert to the ECC infrastructure Xiangsheng Hou
2021-11-30  8:31   ` Xiangsheng Hou
2021-12-09 10:32   ` Miquel Raynal
2021-12-09 10:32     ` Miquel Raynal
2021-12-10  9:09     ` xiangsheng.hou
2021-12-10  9:09       ` xiangsheng.hou
2021-12-10  9:34       ` Miquel Raynal
2021-12-10  9:34         ` Miquel Raynal
2021-12-11  3:25         ` xiangsheng.hou
2021-12-11  3:25           ` xiangsheng.hou
2021-12-13  9:29           ` Miquel Raynal
2021-12-13  9:29             ` Miquel Raynal
2021-12-14  3:32             ` xiangsheng.hou
2021-12-14  3:32               ` xiangsheng.hou
2021-12-14  9:47               ` Miquel Raynal
2021-12-14  9:47                 ` Miquel Raynal
2021-11-30  8:32 ` [RFC,v4,3/5] spi: mtk: Add mediatek SPI Nand Flash interface driver Xiangsheng Hou
2021-11-30  8:32   ` Xiangsheng Hou
2021-12-09 10:20   ` Miquel Raynal
2021-12-09 10:20     ` Miquel Raynal
2021-12-10  9:09     ` xiangsheng.hou
2021-12-10  9:09       ` xiangsheng.hou
2021-12-10  9:40       ` Miquel Raynal
2021-12-10  9:40         ` Miquel Raynal
2021-11-30  8:32 ` [RFC, v4, 4/5] mtd: spinand: Move set/get OOB databytes to each ECC engines Xiangsheng Hou
2021-11-30  8:32   ` Xiangsheng Hou
2021-12-14 11:41   ` [RFC,v4,4/5] " Miquel Raynal
2021-12-14 11:41     ` Miquel Raynal
2021-12-20  7:37     ` xiangsheng.hou
2021-12-20  7:37       ` xiangsheng.hou
2021-11-30  8:32 ` [RFC,v4,5/5] arm64: dts: mtk: Add snfi node Xiangsheng Hou
2021-11-30  8:32   ` Xiangsheng Hou
