From: <patrice.chotard@foss.st.com>
To: Mark Brown <broonie@kernel.org>,
Miquel Raynal <miquel.raynal@bootlin.com>,
Vignesh Raghavendra <vigneshr@ti.com>,
Boris Brezillon <boris.brezillon@collabora.com>,
<linux-mtd@lists.infradead.org>,
Alexandre Torgue <alexandre.torgue@foss.st.com>,
<linux-spi@vger.kernel.org>,
<linux-stm32@st-md-mailman.stormreply.com>,
<linux-arm-kernel@lists.infradead.org>,
<linux-kernel@vger.kernel.org>
Cc: <patrice.chotard@foss.st.com>, <christophe.kerello@foss.st.com>
Subject: [PATCH v2 1/3] spi: spi-mem: add automatic poll status functions
Date: Fri, 7 May 2021 15:17:54 +0200 [thread overview]
Message-ID: <20210507131756.17028-2-patrice.chotard@foss.st.com> (raw)
In-Reply-To: <20210507131756.17028-1-patrice.chotard@foss.st.com>
From: Patrice Chotard <patrice.chotard@foss.st.com>
With the STM32 QSPI controller, it is possible to poll the status register of the device.
This can be used to offload the CPU during an operation (erasing or
programming a SPI NAND, for example).
The spi_mem_poll_status() API has been added to handle this feature.
This new function takes care of both the offload and non-offload cases.
In the non-offload case, read_poll_timeout() is used to poll the status in
order to release the CPU during this phase.
Signed-off-by: Patrice Chotard <patrice.chotard@foss.st.com>
Signed-off-by: Christophe Kerello <christophe.kerello@foss.st.com>
---
Changes in v2:
- Indicate the spi_mem_poll_status() timeout unit
- Use a 2-byte wide status register
- Add spi_mem_supports_op() call in spi_mem_poll_status()
- Add completion management in spi_mem_poll_status()
- Add offload/non-offload case management in spi_mem_poll_status()
- Optimize the non-offload case by using read_poll_timeout()
drivers/spi/spi-mem.c | 71 +++++++++++++++++++++++++++++++++++++
include/linux/spi/spi-mem.h | 10 ++++++
2 files changed, 81 insertions(+)
diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
index 1513553e4080..3f29c604df7d 100644
--- a/drivers/spi/spi-mem.c
+++ b/drivers/spi/spi-mem.c
@@ -6,6 +6,7 @@
* Author: Boris Brezillon <boris.brezillon@bootlin.com>
*/
#include <linux/dmaengine.h>
+#include <linux/iopoll.h>
#include <linux/pm_runtime.h>
#include <linux/spi/spi.h>
#include <linux/spi/spi-mem.h>
@@ -743,6 +744,75 @@ static inline struct spi_mem_driver *to_spi_mem_drv(struct device_driver *drv)
return container_of(drv, struct spi_mem_driver, spidrv.driver);
}
+/**
+ * spi_mem_finalize_op - report completion of spi_mem_op
+ * @ctlr: the controller reporting completion
+ *
+ * Called by SPI drivers using the spi-mem spi_mem_poll_status()
+ * implementation to notify it that the current spi_mem_op has
+ * finished.
+ */
+void spi_mem_finalize_op(struct spi_controller *ctlr)
+{
+ complete(&ctlr->xfer_completion);
+}
+EXPORT_SYMBOL_GPL(spi_mem_finalize_op);
+
+/**
+ * spi_mem_poll_status() - Poll memory device status
+ * @mem: SPI memory device
+ * @op: the memory operation to execute
+ * @mask: status bitmask to check
+ * @match: (status & mask) expected value
+ * @timeout_ms: timeout in milliseconds
+ *
+ * This function sends a polling status request to the controller driver.
+ *
+ * Return: 0 in case of success, -ETIMEDOUT in case of error,
+ * -EOPNOTSUPP if not supported.
+ */
+int spi_mem_poll_status(struct spi_mem *mem,
+ const struct spi_mem_op *op,
+ u16 mask, u16 match, u16 timeout_ms)
+{
+ struct spi_controller *ctlr = mem->spi->controller;
+ unsigned long ms;
+ int ret = -EOPNOTSUPP;
+ int exec_op_ret;
+ u16 *status;
+
+ if (!spi_mem_supports_op(mem, op))
+ return ret;
+
+ if (ctlr->mem_ops && ctlr->mem_ops->poll_status) {
+ ret = spi_mem_access_start(mem);
+ if (ret)
+ return ret;
+
+ reinit_completion(&ctlr->xfer_completion);
+
+ ret = ctlr->mem_ops->poll_status(mem, op, mask, match,
+ timeout_ms);
+
+ ms = wait_for_completion_timeout(&ctlr->xfer_completion,
+ msecs_to_jiffies(timeout_ms));
+
+ spi_mem_access_end(mem);
+ if (!ms)
+ return -ETIMEDOUT;
+ } else {
+ status = (u16 *)op->data.buf.in;
+ ret = read_poll_timeout(spi_mem_exec_op, exec_op_ret,
+ ((*status) & mask) == match, 20,
+ timeout_ms * 1000, false, mem, op);
+ if (exec_op_ret)
+ return exec_op_ret;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(spi_mem_poll_status);
+
static int spi_mem_probe(struct spi_device *spi)
{
struct spi_mem_driver *memdrv = to_spi_mem_drv(spi->dev.driver);
@@ -763,6 +833,7 @@ static int spi_mem_probe(struct spi_device *spi)
if (IS_ERR_OR_NULL(mem->name))
return PTR_ERR_OR_ZERO(mem->name);
+ init_completion(&ctlr->xfer_completion);
spi_set_drvdata(spi, mem);
return memdrv->probe(mem);
diff --git a/include/linux/spi/spi-mem.h b/include/linux/spi/spi-mem.h
index 2b65c9edc34e..0fbf5d0a3d31 100644
--- a/include/linux/spi/spi-mem.h
+++ b/include/linux/spi/spi-mem.h
@@ -250,6 +250,7 @@ static inline void *spi_mem_get_drvdata(struct spi_mem *mem)
* the currently mapped area), and the caller of
* spi_mem_dirmap_write() is responsible for calling it again in
* this case.
+ * @poll_status: poll memory device status
*
* This interface should be implemented by SPI controllers providing an
* high-level interface to execute SPI memory operation, which is usually the
@@ -274,6 +275,9 @@ struct spi_controller_mem_ops {
u64 offs, size_t len, void *buf);
ssize_t (*dirmap_write)(struct spi_mem_dirmap_desc *desc,
u64 offs, size_t len, const void *buf);
+ int (*poll_status)(struct spi_mem *mem,
+ const struct spi_mem_op *op,
+ u16 mask, u16 match, unsigned long timeout_ms);
};
/**
@@ -369,6 +373,12 @@ devm_spi_mem_dirmap_create(struct device *dev, struct spi_mem *mem,
void devm_spi_mem_dirmap_destroy(struct device *dev,
struct spi_mem_dirmap_desc *desc);
+void spi_mem_finalize_op(struct spi_controller *ctlr);
+
+int spi_mem_poll_status(struct spi_mem *mem,
+ const struct spi_mem_op *op,
+ u16 mask, u16 match, u16 timeout_ms);
+
int spi_mem_driver_register_with_owner(struct spi_mem_driver *drv,
struct module *owner);
--
2.17.1
Thread overview: 16+ messages
2021-05-07 13:17 [PATCH v2 0/3] MTD: spinand: Add spi_mem_poll_status() support patrice.chotard
2021-05-07 13:17 ` patrice.chotard [this message]
2021-05-08 7:55 ` [PATCH v2 1/3] spi: spi-mem: add automatic poll status functions Boris Brezillon
2021-05-10 8:46 ` Patrice CHOTARD
2021-05-10 9:22 ` Boris Brezillon
2021-05-17 7:29 ` Patrice CHOTARD
2021-05-17 7:41 ` Boris Brezillon
2021-05-17 9:24 ` Patrice CHOTARD
2021-05-17 11:25 ` Boris Brezillon
2021-05-17 11:59 ` Patrice CHOTARD
2021-05-17 12:24 ` Boris Brezillon
2021-05-17 12:04 ` Patrice CHOTARD
2021-05-07 13:17 ` [PATCH v2 2/3] mtd: spinand: use the spi-mem poll status APIs patrice.chotard
2021-05-07 13:17 ` [PATCH v2 3/3] spi: stm32-qspi: add automatic poll status feature patrice.chotard
2021-05-07 19:29 ` kernel test robot
2021-05-07 22:16 ` kernel test robot