All of lore.kernel.org
* [Patch V5 0/3] Tegra TPM driver with HW flow control
@ 2023-02-27 12:06 Krishna Yarlagadda
  2023-02-27 12:07 ` [Patch V5 1/3] spi: Add TPM HW flow flag Krishna Yarlagadda
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Krishna Yarlagadda @ 2023-02-27 12:06 UTC (permalink / raw)
  To: robh+dt, broonie, peterhuewe, jgg, jarkko,
	krzysztof.kozlowski+dt, linux-spi, linux-tegra, linux-integrity,
	linux-kernel
  Cc: thierry.reding, jonathanh, skomatineni, ldewangan, Krishna Yarlagadda

The TPM interface spec defines flow control in which the TPM device
drives MISO on the same cycle as the last address bit sent by the
controller on MOSI. This wait state can be detected by software reading
the MISO line or by controller hardware. Support sending transfers to
the controller in a single message and handling flow control in
hardware. Half duplex controllers have to support flow control in
hardware.

Tegra234 and Tegra241 chips have a QSPI controller that supports TPM
Interface Specification (TIS) flow control.
Since the controller only supports half duplex, the SW wait polling
(flow control using full duplex transfers) method implemented in
tpm_tis_spi_main.c will not work, and HW flow control has to be used.

Updates in this patchset
 - Tegra QSPI identifies itself as half duplex.
 - The TPM TIS SPI driver skips flow control for half duplex and sends
   transfers in a single message for the controller to handle.
 - The TPM device identifies itself as a TPM device so the controller
   can detect it and enable the HW TPM wait poll feature.

Verified with a TPM device on Tegra241 ref board using TPM2 tools.

V5:
 - No SPI bus locking.
V4:
 - Split api change to different patch.
 - Describe TPM HW flow control.
V3:
 - Use SPI device mode flag and SPI controller flags.
 - Drop usage of device tree flags.
 - Generic TPM half duplex controller handling.
 - HW & SW flow control for TPM. Drop additional driver.
V2:
 - Fix dt schema errors.

Krishna Yarlagadda (3):
  spi: Add TPM HW flow flag
  tpm_tis-spi: Support hardware wait polling
  spi: tegra210-quad: Enable TPM wait polling

 drivers/char/tpm/tpm_tis_spi_main.c | 92 ++++++++++++++++++++++++++++-
 drivers/spi/spi-tegra210-quad.c     | 21 +++++++
 include/linux/spi/spi.h             |  7 ++-
 3 files changed, 115 insertions(+), 5 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [Patch V5 1/3] spi: Add TPM HW flow flag
  2023-02-27 12:06 [Patch V5 0/3] Tegra TPM driver with HW flow control Krishna Yarlagadda
@ 2023-02-27 12:07 ` Krishna Yarlagadda
  2023-02-27 12:07 ` [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling Krishna Yarlagadda
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 16+ messages in thread
From: Krishna Yarlagadda @ 2023-02-27 12:07 UTC (permalink / raw)
  To: robh+dt, broonie, peterhuewe, jgg, jarkko,
	krzysztof.kozlowski+dt, linux-spi, linux-tegra, linux-integrity,
	linux-kernel
  Cc: thierry.reding, jonathanh, skomatineni, ldewangan, Krishna Yarlagadda

The TPM spec defines flow control over SPI. The client device can insert
a wait state on MISO when the address is transmitted by the controller
on MOSI. This can work only on full duplex.
Half duplex controllers need to implement flow control in HW.
Add a flag for TPM to indicate that flow control is expected in the
controller.

Signed-off-by: Krishna Yarlagadda <kyarlagadda@nvidia.com>
---
 include/linux/spi/spi.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/spi/spi.h b/include/linux/spi/spi.h
index 4fa26b9a3572..6b32c90e9e20 100644
--- a/include/linux/spi/spi.h
+++ b/include/linux/spi/spi.h
@@ -184,8 +184,9 @@ struct spi_device {
 	u8			chip_select;
 	u8			bits_per_word;
 	bool			rt;
-#define SPI_NO_TX	BIT(31)		/* No transmit wire */
-#define SPI_NO_RX	BIT(30)		/* No receive wire */
+#define SPI_NO_TX		BIT(31)		/* No transmit wire */
+#define SPI_NO_RX		BIT(30)		/* No receive wire */
+#define SPI_TPM_HW_FLOW		BIT(29)		/* TPM flow control */
 	/*
 	 * All bits defined above should be covered by SPI_MODE_KERNEL_MASK.
 	 * The SPI_MODE_KERNEL_MASK has the SPI_MODE_USER_MASK counterpart,
@@ -195,7 +196,7 @@ struct spi_device {
 	 * These bits must not overlap. A static assert check should make sure of that.
 	 * If adding extra bits, make sure to decrease the bit index below as well.
 	 */
-#define SPI_MODE_KERNEL_MASK	(~(BIT(30) - 1))
+#define SPI_MODE_KERNEL_MASK	(~(BIT(29) - 1))
 	u32			mode;
 	int			irq;
 	void			*controller_state;
-- 
2.17.1



* [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-02-27 12:06 [Patch V5 0/3] Tegra TPM driver with HW flow control Krishna Yarlagadda
  2023-02-27 12:07 ` [Patch V5 1/3] spi: Add TPM HW flow flag Krishna Yarlagadda
@ 2023-02-27 12:07 ` Krishna Yarlagadda
  2023-02-28  2:36   ` Jarkko Sakkinen
  2023-02-27 12:07 ` [Patch V5 3/3] spi: tegra210-quad: Enable TPM " Krishna Yarlagadda
  2023-02-28  2:28 ` [Patch V5 0/3] Tegra TPM driver with HW flow control Jarkko Sakkinen
  3 siblings, 1 reply; 16+ messages in thread
From: Krishna Yarlagadda @ 2023-02-27 12:07 UTC (permalink / raw)
  To: robh+dt, broonie, peterhuewe, jgg, jarkko,
	krzysztof.kozlowski+dt, linux-spi, linux-tegra, linux-integrity,
	linux-kernel
  Cc: thierry.reding, jonathanh, skomatineni, ldewangan, Krishna Yarlagadda

TPM devices raise a wait signal on the last address cycle. A software
driver can detect it by reading the MISO line on the same clock, which
requires full duplex support. For half duplex controllers, wait
detection has to be implemented in HW.
Support hardware wait state detection by sending the entire message and
letting the controller handle flow control.
The QSPI controllers in Tegra234 & Tegra241 implement TPM wait polling.

Signed-off-by: Krishna Yarlagadda <kyarlagadda@nvidia.com>
---
 drivers/char/tpm/tpm_tis_spi_main.c | 92 ++++++++++++++++++++++++++++-
 1 file changed, 90 insertions(+), 2 deletions(-)

diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
index a0963a3e92bd..5f66448ee09e 100644
--- a/drivers/char/tpm/tpm_tis_spi_main.c
+++ b/drivers/char/tpm/tpm_tis_spi_main.c
@@ -71,8 +71,74 @@ static int tpm_tis_spi_flow_control(struct tpm_tis_spi_phy *phy,
 	return 0;
 }
 
-int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
-			 u8 *in, const u8 *out)
+/*
+ * Half duplex controllers with TPM wait state detection support, like
+ * Tegra241, need cmd, addr & data sent in a single message to manage HW
+ * flow control. Each phase is sent in a separate transfer for the
+ * controller to identify the phase.
+ */
+int tpm_tis_spi_hw_flow_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
+				 u8 *in, const u8 *out)
+{
+	struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
+	struct spi_transfer spi_xfer[3];
+	struct spi_message m;
+	u8 transfer_len;
+	int ret = 0;
+
+	while (len) {
+		transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE);
+
+		spi_message_init(&m);
+		phy->iobuf[0] = (in ? 0x80 : 0) | (transfer_len - 1);
+		phy->iobuf[1] = 0xd4;
+		phy->iobuf[2] = addr >> 8;
+		phy->iobuf[3] = addr;
+
+		memset(&spi_xfer, 0, sizeof(spi_xfer));
+
+		spi_xfer[0].tx_buf = phy->iobuf;
+		spi_xfer[0].len = 1;
+		spi_message_add_tail(&spi_xfer[0], &m);
+
+		spi_xfer[1].tx_buf = phy->iobuf + 1;
+		spi_xfer[1].len = 3;
+		spi_message_add_tail(&spi_xfer[1], &m);
+
+		if (out) {
+			spi_xfer[2].tx_buf = &phy->iobuf[4];
+			spi_xfer[2].rx_buf = NULL;
+			memcpy(&phy->iobuf[4], out, transfer_len);
+			out += transfer_len;
+		}
+
+		if (in) {
+			spi_xfer[2].tx_buf = NULL;
+			spi_xfer[2].rx_buf = &phy->iobuf[4];
+		}
+
+		spi_xfer[2].len = transfer_len;
+		spi_message_add_tail(&spi_xfer[2], &m);
+
+		reinit_completion(&phy->ready);
+
+		ret = spi_sync_locked(phy->spi_device, &m);
+		if (ret < 0)
+			return ret;
+
+		if (in) {
+			memcpy(in, &phy->iobuf[4], transfer_len);
+			in += transfer_len;
+		}
+
+		len -= transfer_len;
+	}
+
+	return ret;
+}
+
+int tpm_tis_spi_sw_flow_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
+				 u8 *in, const u8 *out)
 {
 	struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
 	int ret = 0;
@@ -140,6 +206,28 @@ int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
 	return ret;
 }
 
+int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
+			 u8 *in, const u8 *out)
+{
+	struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
+	struct spi_controller *ctlr = phy->spi_device->controller;
+
+	/*
+	 * TPM flow control over SPI requires full duplex support.
+	 * Send the entire message to a half duplex controller and let
+	 * the controller handle wait polling.
+	 * Set the TPM HW flow control flag.
+	 */
+	if (ctlr->flags & SPI_CONTROLLER_HALF_DUPLEX) {
+		phy->spi_device->mode |= SPI_TPM_HW_FLOW;
+		return tpm_tis_spi_hw_flow_transfer(data, addr, len, in,
+						    out);
+	} else {
+		return tpm_tis_spi_sw_flow_transfer(data, addr, len, in,
+						    out);
+	}
+}
+
 static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr,
 				  u16 len, u8 *result, enum tpm_tis_io_mode io_mode)
 {
-- 
2.17.1



* [Patch V5 3/3] spi: tegra210-quad: Enable TPM wait polling
  2023-02-27 12:06 [Patch V5 0/3] Tegra TPM driver with HW flow control Krishna Yarlagadda
  2023-02-27 12:07 ` [Patch V5 1/3] spi: Add TPM HW flow flag Krishna Yarlagadda
  2023-02-27 12:07 ` [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling Krishna Yarlagadda
@ 2023-02-27 12:07 ` Krishna Yarlagadda
  2023-02-28  2:28 ` [Patch V5 0/3] Tegra TPM driver with HW flow control Jarkko Sakkinen
  3 siblings, 0 replies; 16+ messages in thread
From: Krishna Yarlagadda @ 2023-02-27 12:07 UTC (permalink / raw)
  To: robh+dt, broonie, peterhuewe, jgg, jarkko,
	krzysztof.kozlowski+dt, linux-spi, linux-tegra, linux-integrity,
	linux-kernel
  Cc: thierry.reding, jonathanh, skomatineni, ldewangan, Krishna Yarlagadda

The Trusted Platform Module requires flow control. As defined in the TPM
interface specification, the client drives the MISO line on the same
cycle as the last address bit on MOSI.
The Tegra241 QSPI controller has a TPM wait state detection feature
which is enabled for TPM client devices reported in the SPI device mode
bits.
Set the half duplex flag so the TPM driver detects it and sends the
entire message to the controller in one shot.

Signed-off-by: Krishna Yarlagadda <kyarlagadda@nvidia.com>
---
 drivers/spi/spi-tegra210-quad.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
index b967576b6c96..fe15fa6eecd1 100644
--- a/drivers/spi/spi-tegra210-quad.c
+++ b/drivers/spi/spi-tegra210-quad.c
@@ -142,6 +142,7 @@
 
 #define QSPI_GLOBAL_CONFIG			0X1a4
 #define QSPI_CMB_SEQ_EN				BIT(0)
+#define QSPI_TPM_WAIT_POLL_EN			BIT(1)
 
 #define QSPI_CMB_SEQ_ADDR			0x1a8
 #define QSPI_ADDRESS_VALUE_SET(X)		(((x) & 0xFFFF) << 0)
@@ -164,6 +165,7 @@
 struct tegra_qspi_soc_data {
 	bool has_dma;
 	bool cmb_xfer_capable;
+	bool tpm_wait_poll;
 	unsigned int cs_count;
 };
 
@@ -991,6 +993,14 @@ static void tegra_qspi_dump_regs(struct tegra_qspi *tqspi)
 	dev_dbg(tqspi->dev, "TRANS_STAT:  0x%08x | FIFO_STATUS: 0x%08x\n",
 		tegra_qspi_readl(tqspi, QSPI_TRANS_STATUS),
 		tegra_qspi_readl(tqspi, QSPI_FIFO_STATUS));
+	dev_dbg(tqspi->dev, "GLOBAL_CFG: 0x%08x\n",
+		tegra_qspi_readl(tqspi, QSPI_GLOBAL_CONFIG));
+	dev_dbg(tqspi->dev, "CMB_CMD: 0x%08x | CMB_CMD_CFG: 0x%08x\n",
+		tegra_qspi_readl(tqspi, QSPI_CMB_SEQ_CMD),
+		tegra_qspi_readl(tqspi, QSPI_CMB_SEQ_CMD_CFG));
+	dev_dbg(tqspi->dev, "CMB_ADDR: 0x%08x | CMB_ADDR_CFG: 0x%08x\n",
+		tegra_qspi_readl(tqspi, QSPI_CMB_SEQ_ADDR),
+		tegra_qspi_readl(tqspi, QSPI_CMB_SEQ_ADDR_CFG));
 }
 
 static void tegra_qspi_handle_error(struct tegra_qspi *tqspi)
@@ -1065,6 +1075,12 @@ static int tegra_qspi_combined_seq_xfer(struct tegra_qspi *tqspi,
 
 	/* Enable Combined sequence mode */
 	val = tegra_qspi_readl(tqspi, QSPI_GLOBAL_CONFIG);
+	if (spi->mode & SPI_TPM_HW_FLOW) {
+		if (tqspi->soc_data->tpm_wait_poll)
+			val |= QSPI_TPM_WAIT_POLL_EN;
+		else
+			return -EIO;
+	}
 	val |= QSPI_CMB_SEQ_EN;
 	tegra_qspi_writel(tqspi, val, QSPI_GLOBAL_CONFIG);
 	/* Process individual transfer list */
@@ -1192,6 +1208,7 @@ static int tegra_qspi_non_combined_seq_xfer(struct tegra_qspi *tqspi,
 	/* Disable Combined sequence mode */
 	val = tegra_qspi_readl(tqspi, QSPI_GLOBAL_CONFIG);
 	val &= ~QSPI_CMB_SEQ_EN;
+	val &= ~QSPI_TPM_WAIT_POLL_EN;
 	tegra_qspi_writel(tqspi, val, QSPI_GLOBAL_CONFIG);
 	list_for_each_entry(transfer, &msg->transfers, transfer_list) {
 		struct spi_transfer *xfer = transfer;
@@ -1450,24 +1467,28 @@ static irqreturn_t tegra_qspi_isr_thread(int irq, void *context_data)
 static struct tegra_qspi_soc_data tegra210_qspi_soc_data = {
 	.has_dma = true,
 	.cmb_xfer_capable = false,
+	.tpm_wait_poll = false,
 	.cs_count = 1,
 };
 
 static struct tegra_qspi_soc_data tegra186_qspi_soc_data = {
 	.has_dma = true,
 	.cmb_xfer_capable = true,
+	.tpm_wait_poll = false,
 	.cs_count = 1,
 };
 
 static struct tegra_qspi_soc_data tegra234_qspi_soc_data = {
 	.has_dma = false,
 	.cmb_xfer_capable = true,
+	.tpm_wait_poll = true,
 	.cs_count = 1,
 };
 
 static struct tegra_qspi_soc_data tegra241_qspi_soc_data = {
 	.has_dma = false,
 	.cmb_xfer_capable = true,
+	.tpm_wait_poll = true,
 	.cs_count = 4,
 };
 
-- 
2.17.1



* Re: [Patch V5 0/3] Tegra TPM driver with HW flow control
  2023-02-27 12:06 [Patch V5 0/3] Tegra TPM driver with HW flow control Krishna Yarlagadda
                   ` (2 preceding siblings ...)
  2023-02-27 12:07 ` [Patch V5 3/3] spi: tegra210-quad: Enable TPM " Krishna Yarlagadda
@ 2023-02-28  2:28 ` Jarkko Sakkinen
  3 siblings, 0 replies; 16+ messages in thread
From: Jarkko Sakkinen @ 2023-02-28  2:28 UTC (permalink / raw)
  To: Krishna Yarlagadda
  Cc: robh+dt, broonie, peterhuewe, jgg, krzysztof.kozlowski+dt,
	linux-spi, linux-tegra, linux-integrity, linux-kernel,
	thierry.reding, jonathanh, skomatineni, ldewangan

On Mon, Feb 27, 2023 at 05:36:59PM +0530, Krishna Yarlagadda wrote:
> TPM interface spec defines flow control where TPM device would drive
> MISO at same cycle as last address bit sent by controller on MOSI. This
> state of wait can be detected by software reading the MISO line or
> by controller hardware. Support sending transfers to controller in
> single message and handle flow control in hardware. Half duplex
> controllers have to support flow control in hardware.
> 
> Tegra234 and Tegra241 chips have QSPI controller that supports TPM
> Interface Specification (TIS) flow control.
> Since the controller only supports half duplex, SW wait polling
> (flow control using full duplex transfers) method implemented in
> tpm_tis_spi_main.c will not work, and HW flow control has to be used.
> 
> Updates in this patchset 
>  - Tegra QSPI identifies itself as half duplex.
>  - TPM TIS SPI driver skips flow control for half duplex and send
>    transfers in single message for controller to handle it.
>  - TPM device identifies as TPM device for controller to detect and
>    enable HW TPM wait poll feature.
> 
> Verified with a TPM device on Tegra241 ref board using TPM2 tools.
> 
> V5:
>  - No SPI bus locking.
> V4:
>  - Split api change to different patch.
>  - Describe TPM HW flow control.
> V3:
>  - Use SPI device mode flag and SPI controller flags.
>  - Drop usage of device tree flags.
>  - Generic TPM half duplex controller handling.
>  - HW & SW flow control for TPM. Drop additional driver.
> V2:
>  - Fix dt schema errors.
> 
> Krishna Yarlagadda (3):
>   spi: Add TPM HW flow flag
>   tpm_tis-spi: Support hardware wait polling
>   spi: tegra210-quad: Enable TPM wait polling
> 
>  drivers/char/tpm/tpm_tis_spi_main.c | 92 ++++++++++++++++++++++++++++-
>  drivers/spi/spi-tegra210-quad.c     | 21 +++++++
>  include/linux/spi/spi.h             |  7 ++-
>  3 files changed, 115 insertions(+), 5 deletions(-)
> 
> -- 
> 2.17.1
> 

Funny that this is already at v5; I'm seeing it for the very first time.

BR, Jarkko


* Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-02-27 12:07 ` [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling Krishna Yarlagadda
@ 2023-02-28  2:36   ` Jarkko Sakkinen
  2023-02-28  3:32     ` Krishna Yarlagadda
  2023-02-28 12:28     ` Jason Gunthorpe
  0 siblings, 2 replies; 16+ messages in thread
From: Jarkko Sakkinen @ 2023-02-28  2:36 UTC (permalink / raw)
  To: Krishna Yarlagadda
  Cc: robh+dt, broonie, peterhuewe, jgg, krzysztof.kozlowski+dt,
	linux-spi, linux-tegra, linux-integrity, linux-kernel,
	thierry.reding, jonathanh, skomatineni, ldewangan

On Mon, Feb 27, 2023 at 05:37:01PM +0530, Krishna Yarlagadda wrote:
> TPM devices raise wait signal on last addr cycle. This can be detected
> by software driver by reading MISO line on same clock which requires
> full duplex support. In case of half duplex controllers wait detection
> has to be implemented in HW.
> Support hardware wait state detection by sending entire message and let
> controller handle flow control.

When a sentence is started with the word "support", it translates to "I'm
too lazy to write a proper and verbose description of the implementation"
:-)

It has some abstract ideas of the implementation, I'll give you that, but
do you honestly think anyone will ever get any value from reading it? A
slightly more concrete description of the change helps, e.g. when
bisecting bugs.

> QSPI controller in Tegra234 & Tegra241 implement TPM wait polling.
> 
> Signed-off-by: Krishna Yarlagadda <kyarlagadda@nvidia.com>
> ---
>  drivers/char/tpm/tpm_tis_spi_main.c | 92 ++++++++++++++++++++++++++++-
>  1 file changed, 90 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/char/tpm/tpm_tis_spi_main.c b/drivers/char/tpm/tpm_tis_spi_main.c
> index a0963a3e92bd..5f66448ee09e 100644
> --- a/drivers/char/tpm/tpm_tis_spi_main.c
> +++ b/drivers/char/tpm/tpm_tis_spi_main.c
> @@ -71,8 +71,74 @@ static int tpm_tis_spi_flow_control(struct tpm_tis_spi_phy *phy,
>  	return 0;
>  }
>  
> -int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> -			 u8 *in, const u8 *out)
> +/*
> + * Half duplex controller with support for TPM wait state detection like
> + * Tegra241 need cmd, addr & data sent in single message to manage HW flow
> > + * control. Each phase sent in different transfer for controller to identify
> + * phase.
> + */
> +int tpm_tis_spi_hw_flow_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> +				 u8 *in, const u8 *out)
> +{
> +	struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> +	struct spi_transfer spi_xfer[3];
> +	struct spi_message m;
> +	u8 transfer_len;
> +	int ret;
> +
> +	while (len) {
> +		transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE);
> +
> +		spi_message_init(&m);
> +		phy->iobuf[0] = (in ? 0x80 : 0) | (transfer_len - 1);
> +		phy->iobuf[1] = 0xd4;
> +		phy->iobuf[2] = addr >> 8;
> +		phy->iobuf[3] = addr;
> +
> +		memset(&spi_xfer, 0, sizeof(spi_xfer));
> +
> +		spi_xfer[0].tx_buf = phy->iobuf;
> +		spi_xfer[0].len = 1;
> +		spi_message_add_tail(&spi_xfer[0], &m);
> +
> +		spi_xfer[1].tx_buf = phy->iobuf + 1;
> +		spi_xfer[1].len = 3;
> +		spi_message_add_tail(&spi_xfer[1], &m);
> +
> +		if (out) {
> +			spi_xfer[2].tx_buf = &phy->iobuf[4];
> +			spi_xfer[2].rx_buf = NULL;
> +			memcpy(&phy->iobuf[4], out, transfer_len);
> +			out += transfer_len;
> +		}
> +
> +		if (in) {
> +			spi_xfer[2].tx_buf = NULL;
> +			spi_xfer[2].rx_buf = &phy->iobuf[4];
> +		}
> +
> +		spi_xfer[2].len = transfer_len;
> +		spi_message_add_tail(&spi_xfer[2], &m);
> +
> +		reinit_completion(&phy->ready);
> +
> +		ret = spi_sync_locked(phy->spi_device, &m);
> +		if (ret < 0)
> +			return ret;
> +
> +		if (in) {
> +			memcpy(in, &phy->iobuf[4], transfer_len);
> +			in += transfer_len;
> +		}
> +
> +		len -= transfer_len;
> +	}
> +
> +	return ret;
> +}
> +
> +int tpm_tis_spi_sw_flow_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> +				 u8 *in, const u8 *out)
>  {
>  	struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
>  	int ret = 0;
> @@ -140,6 +206,28 @@ int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
>  	return ret;
>  }
>  
> +int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> +			 u8 *in, const u8 *out)
> +{
> +	struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> +	struct spi_controller *ctlr = phy->spi_device->controller;
> +
> +	/*
> +	 * TPM flow control over SPI requires full duplex support.
> +	 * Send entire message to a half duplex controller to handle
> +	 * wait polling in controller.
> +	 * Set TPM HW flow control flag..
> +	 */
> +	if (ctlr->flags & SPI_CONTROLLER_HALF_DUPLEX) {
> +		phy->spi_device->mode |= SPI_TPM_HW_FLOW;
> +		return tpm_tis_spi_hw_flow_transfer(data, addr, len, in,
> +						    out);
> +	} else {
> +		return tpm_tis_spi_sw_flow_transfer(data, addr, len, in,
> +						    out);
> +	}
> +}
> +
>  static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr,
>  				  u16 len, u8 *result, enum tpm_tis_io_mode io_mode)
>  {
> -- 
> 2.17.1
> 

Looking pretty good but do you really want to export tpm_tis_spi_{hw,sw}_flow_transfer?

BR, Jarkko


* RE: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-02-28  2:36   ` Jarkko Sakkinen
@ 2023-02-28  3:32     ` Krishna Yarlagadda
  2023-03-01 23:17       ` Jarkko Sakkinen
  2023-02-28 12:28     ` Jason Gunthorpe
  1 sibling, 1 reply; 16+ messages in thread
From: Krishna Yarlagadda @ 2023-02-28  3:32 UTC (permalink / raw)
  To: Jarkko Sakkinen
  Cc: robh+dt, broonie, peterhuewe, jgg, krzysztof.kozlowski+dt,
	linux-spi, linux-tegra, linux-integrity, linux-kernel,
	thierry.reding, Jonathan Hunter, Sowjanya Komatineni,
	Laxman Dewangan

> -----Original Message-----
> From: Jarkko Sakkinen <jarkko@kernel.org>
> Sent: 28 February 2023 08:06
> To: Krishna Yarlagadda <kyarlagadda@nvidia.com>
> Cc: robh+dt@kernel.org; broonie@kernel.org; peterhuewe@gmx.de;
> jgg@ziepe.ca; krzysztof.kozlowski+dt@linaro.org; linux-spi@vger.kernel.org;
> linux-tegra@vger.kernel.org; linux-integrity@vger.kernel.org; linux-
> kernel@vger.kernel.org; thierry.reding@gmail.com; Jonathan Hunter
> <jonathanh@nvidia.com>; Sowjanya Komatineni
> <skomatineni@nvidia.com>; Laxman Dewangan <ldewangan@nvidia.com>
> Subject: Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
> 
> External email: Use caution opening links or attachments
> 
> 
> On Mon, Feb 27, 2023 at 05:37:01PM +0530, Krishna Yarlagadda wrote:
> > TPM devices raise wait signal on last addr cycle. This can be detected
> > by software driver by reading MISO line on same clock which requires
> > full duplex support. In case of half duplex controllers wait detection
> > has to be implemented in HW.
> > Support hardware wait state detection by sending entire message and let
> > controller handle flow control.
> 
> When a sentence is started with the word "support", it translates to "I'm
> too lazy to write a proper and verbose description of the implementation"
> :-)
> 
> It has some abstract ideas of the implementation, I'll give you that, but
> do you honestly think anyone will ever get any value from reading it? A
> slightly more concrete description of the change helps, e.g. when
> bisecting bugs.
> 
I presented why we are making the change. Will add an explanation of how
it is implemented as well.

> > QSPI controller in Tegra234 & Tegra241 implement TPM wait polling.
> >
> > Signed-off-by: Krishna Yarlagadda <kyarlagadda@nvidia.com>
> > ---
> >  drivers/char/tpm/tpm_tis_spi_main.c | 92
> ++++++++++++++++++++++++++++-
> >  1 file changed, 90 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/char/tpm/tpm_tis_spi_main.c
> b/drivers/char/tpm/tpm_tis_spi_main.c
> > index a0963a3e92bd..5f66448ee09e 100644
> > --- a/drivers/char/tpm/tpm_tis_spi_main.c
> > +++ b/drivers/char/tpm/tpm_tis_spi_main.c
> > @@ -71,8 +71,74 @@ static int tpm_tis_spi_flow_control(struct
> tpm_tis_spi_phy *phy,
> >       return 0;
> >  }
> >
> > -int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > -                      u8 *in, const u8 *out)
> > +/*
> > + * Half duplex controller with support for TPM wait state detection like
> > + * Tegra241 need cmd, addr & data sent in single message to manage HW
> flow
> > + * control. Each phase sent in different transfer for controller to identify
> > + * phase.
> > + */
> > +int tpm_tis_spi_hw_flow_transfer(struct tpm_tis_data *data, u32 addr,
> u16 len,
> > +                              u8 *in, const u8 *out)
> > +{
> > +     struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > +     struct spi_transfer spi_xfer[3];
> > +     struct spi_message m;
> > +     u8 transfer_len;
> > +     int ret;
> > +
> > +     while (len) {
> > +             transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE);
> > +
> > +             spi_message_init(&m);
> > +             phy->iobuf[0] = (in ? 0x80 : 0) | (transfer_len - 1);
> > +             phy->iobuf[1] = 0xd4;
> > +             phy->iobuf[2] = addr >> 8;
> > +             phy->iobuf[3] = addr;
> > +
> > +             memset(&spi_xfer, 0, sizeof(spi_xfer));
> > +
> > +             spi_xfer[0].tx_buf = phy->iobuf;
> > +             spi_xfer[0].len = 1;
> > +             spi_message_add_tail(&spi_xfer[0], &m);
> > +
> > +             spi_xfer[1].tx_buf = phy->iobuf + 1;
> > +             spi_xfer[1].len = 3;
> > +             spi_message_add_tail(&spi_xfer[1], &m);
> > +
> > +             if (out) {
> > +                     spi_xfer[2].tx_buf = &phy->iobuf[4];
> > +                     spi_xfer[2].rx_buf = NULL;
> > +                     memcpy(&phy->iobuf[4], out, transfer_len);
> > +                     out += transfer_len;
> > +             }
> > +
> > +             if (in) {
> > +                     spi_xfer[2].tx_buf = NULL;
> > +                     spi_xfer[2].rx_buf = &phy->iobuf[4];
> > +             }
> > +
> > +             spi_xfer[2].len = transfer_len;
> > +             spi_message_add_tail(&spi_xfer[2], &m);
> > +
> > +             reinit_completion(&phy->ready);
> > +
> > +             ret = spi_sync_locked(phy->spi_device, &m);
> > +             if (ret < 0)
> > +                     return ret;
> > +
> > +             if (in) {
> > +                     memcpy(in, &phy->iobuf[4], transfer_len);
> > +                     in += transfer_len;
> > +             }
> > +
> > +             len -= transfer_len;
> > +     }
> > +
> > +     return ret;
> > +}
> > +
> > +int tpm_tis_spi_sw_flow_transfer(struct tpm_tis_data *data, u32 addr,
> u16 len,
> > +                              u8 *in, const u8 *out)
> >  {
> >       struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> >       int ret = 0;
> > @@ -140,6 +206,28 @@ int tpm_tis_spi_transfer(struct tpm_tis_data
> *data, u32 addr, u16 len,
> >       return ret;
> >  }
> >
> > +int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > +                      u8 *in, const u8 *out)
> > +{
> > +     struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > +     struct spi_controller *ctlr = phy->spi_device->controller;
> > +
> > +     /*
> > +      * TPM flow control over SPI requires full duplex support.
> > +      * Send entire message to a half duplex controller to handle
> > +      * wait polling in controller.
> > +      * Set TPM HW flow control flag..
> > +      */
> > +     if (ctlr->flags & SPI_CONTROLLER_HALF_DUPLEX) {
> > +             phy->spi_device->mode |= SPI_TPM_HW_FLOW;
> > +             return tpm_tis_spi_hw_flow_transfer(data, addr, len, in,
> > +                                                 out);
> > +     } else {
> > +             return tpm_tis_spi_sw_flow_transfer(data, addr, len, in,
> > +                                                 out);
> > +     }
> > +}
> > +
> >  static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr,
> >                                 u16 len, u8 *result, enum tpm_tis_io_mode io_mode)
> >  {
> > --
> > 2.17.1
> >
> 
> Looking pretty good but do you really want to export
> tpm_tis_spi_{hw,sw}_flow_transfer?
> 
> BR, Jarkko
No need to export tpm_tis_spi_{hw,sw}_flow_transfer either.
I will update this in the next version.

KY


* Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-02-28  2:36   ` Jarkko Sakkinen
  2023-02-28  3:32     ` Krishna Yarlagadda
@ 2023-02-28 12:28     ` Jason Gunthorpe
  2023-03-01 11:56       ` Krishna Yarlagadda
  1 sibling, 1 reply; 16+ messages in thread
From: Jason Gunthorpe @ 2023-02-28 12:28 UTC (permalink / raw)
  To: Jarkko Sakkinen
  Cc: Krishna Yarlagadda, robh+dt, broonie, peterhuewe,
	krzysztof.kozlowski+dt, linux-spi, linux-tegra, linux-integrity,
	linux-kernel, thierry.reding, jonathanh, skomatineni, ldewangan

On Tue, Feb 28, 2023 at 04:36:26AM +0200, Jarkko Sakkinen wrote:
> On Mon, Feb 27, 2023 at 05:37:01PM +0530, Krishna Yarlagadda wrote:
> > TPM devices raise wait signal on last addr cycle. This can be detected
> > by software driver by reading MISO line on same clock which requires
> > full duplex support. In case of half duplex controllers wait detection
> > has to be implemented in HW.
> > Support hardware wait state detection by sending entire message and let
> > controller handle flow control.
> 
> When a sentence is started with the word "support", it translates to "I'm
> too lazy to write a proper and verbose description of the implementation"
> :-)
> 
> It has some abstract ideas of the implementation, I'll give you that, but
> do you honestly think anyone will ever get any value from reading it? A
> slightly more concrete description of the change helps, e.g. when
> bisecting bugs.

I would expect SPI_TPM_HW_FLOW to be documented in the kdocs to a
level that any other HW could implement it as well.

> > +int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > +			 u8 *in, const u8 *out)
> > +{
> > +	struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > +	struct spi_controller *ctlr = phy->spi_device->controller;
> > +
> > +	/*
> > +	 * TPM flow control over SPI requires full duplex support.
> > +	 * Send entire message to a half duplex controller to handle
> > +	 * wait polling in controller.
> > +	 * Set TPM HW flow control flag..
> > +	 */
> > +	if (ctlr->flags & SPI_CONTROLLER_HALF_DUPLEX) {
> > +		phy->spi_device->mode |= SPI_TPM_HW_FLOW;

Shouldn't we check that this special flow is supported when the SPI
device is bound to the tpm in the first place?

Jason


* RE: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-02-28 12:28     ` Jason Gunthorpe
@ 2023-03-01 11:56       ` Krishna Yarlagadda
  2023-03-01 12:27         ` Jason Gunthorpe
  0 siblings, 1 reply; 16+ messages in thread
From: Krishna Yarlagadda @ 2023-03-01 11:56 UTC (permalink / raw)
  To: Jason Gunthorpe, Jarkko Sakkinen
  Cc: robh+dt, broonie, peterhuewe, krzysztof.kozlowski+dt, linux-spi,
	linux-tegra, linux-integrity, linux-kernel, thierry.reding,
	Jonathan Hunter, Sowjanya Komatineni, Laxman Dewangan


> -----Original Message-----
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: 28 February 2023 17:58
> To: Jarkko Sakkinen <jarkko@kernel.org>
> Cc: Krishna Yarlagadda <kyarlagadda@nvidia.com>; robh+dt@kernel.org;
> broonie@kernel.org; peterhuewe@gmx.de;
> krzysztof.kozlowski+dt@linaro.org; linux-spi@vger.kernel.org; linux-
> tegra@vger.kernel.org; linux-integrity@vger.kernel.org; linux-
> kernel@vger.kernel.org; thierry.reding@gmail.com; Jonathan Hunter
> <jonathanh@nvidia.com>; Sowjanya Komatineni
> <skomatineni@nvidia.com>; Laxman Dewangan <ldewangan@nvidia.com>
> Subject: Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
> 
> On Tue, Feb 28, 2023 at 04:36:26AM +0200, Jarkko Sakkinen wrote:
> > On Mon, Feb 27, 2023 at 05:37:01PM +0530, Krishna Yarlagadda wrote:
> > > TPM devices raise wait signal on last addr cycle. This can be detected
> > > by software driver by reading MISO line on same clock which requires
> > > full duplex support. In case of half duplex controllers wait detection
> > > has to be implemented in HW.
> > > Support hardware wait state detection by sending entire message and let
> > > controller handle flow control.
> >
> > When a sentence is started with the word "support", it translates to "I'm
> > too lazy to write a proper and verbose description of the implementation"
> > :-)
> >
> > It has some abstract ideas of the implementation, I give you that, but do
> > you think anyone ever will get any value of reading that honestly? A bit
> > more concrete description of the change helps e.g. when bisecting bugs.
> 
> I would expect SPI_TPM_HW_FLOW to be documented in the kdocs to a
> level that any other HW could implement it as well.
The HW implementation can be controller specific. I will add comments in the
header saying that CMD-ADDR-DATA is sent as a single message when this flag is set.
> 
> > > +int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > > +			 u8 *in, const u8 *out)
> > > +{
> > > +	struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > > +	struct spi_controller *ctlr = phy->spi_device->controller;
> > > +
> > > +	/*
> > > +	 * TPM flow control over SPI requires full duplex support.
> > > +	 * Send entire message to a half duplex controller to handle
> > > +	 * wait polling in controller.
> > > +	 * Set the TPM HW flow control flag.
> > > +	 */
> > > +	if (ctlr->flags & SPI_CONTROLLER_HALF_DUPLEX) {
> > > +		phy->spi_device->mode |= SPI_TPM_HW_FLOW;
> 
> Shouldn't we check that this special flow is supported when the SPI
> device is bound to the tpm in the first place?
A TPM device connected behind a half duplex controller can only work
this way, so no additional flag needs to be checked.
KY
> 
> Jason


* Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-03-01 11:56       ` Krishna Yarlagadda
@ 2023-03-01 12:27         ` Jason Gunthorpe
  2023-03-01 12:37           ` Mark Brown
  0 siblings, 1 reply; 16+ messages in thread
From: Jason Gunthorpe @ 2023-03-01 12:27 UTC (permalink / raw)
  To: Krishna Yarlagadda
  Cc: Jarkko Sakkinen, robh+dt, broonie, peterhuewe,
	krzysztof.kozlowski+dt, linux-spi, linux-tegra, linux-integrity,
	linux-kernel, thierry.reding, Jonathan Hunter,
	Sowjanya Komatineni, Laxman Dewangan

On Wed, Mar 01, 2023 at 11:56:53AM +0000, Krishna Yarlagadda wrote:

> > > > +int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > > > +			 u8 *in, const u8 *out)
> > > > +{
> > > > +	struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > > > +	struct spi_controller *ctlr = phy->spi_device->controller;
> > > > +
> > > > +	/*
> > > > +	 * TPM flow control over SPI requires full duplex support.
> > > > +	 * Send entire message to a half duplex controller to handle
> > > > +	 * wait polling in controller.
> > > > +	 * Set the TPM HW flow control flag.
> > > > +	 */
> > > > +	if (ctlr->flags & SPI_CONTROLLER_HALF_DUPLEX) {
> > > > +		phy->spi_device->mode |= SPI_TPM_HW_FLOW;
> > 
> > Shouldn't we check that this special flow is supported when the SPI
> > device is bound to the tpm in the first place?
> TPM device connected behind half duplex controller can only work
> this way. So, no additional flag needed to check.

Just because a DT hooks it up this way doesn't mean the kernel driver
can support it, eg support hasn't been implemented in an older SPI
driver or something.

If the failure mode is anything other than the TPM doesn't probe we
will need to check for support.

Jason


* Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-03-01 12:27         ` Jason Gunthorpe
@ 2023-03-01 12:37           ` Mark Brown
  2023-03-01 13:39             ` Jason Gunthorpe
  0 siblings, 1 reply; 16+ messages in thread
From: Mark Brown @ 2023-03-01 12:37 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Krishna Yarlagadda, Jarkko Sakkinen, robh+dt, peterhuewe,
	krzysztof.kozlowski+dt, linux-spi, linux-tegra, linux-integrity,
	linux-kernel, thierry.reding, Jonathan Hunter,
	Sowjanya Komatineni, Laxman Dewangan


On Wed, Mar 01, 2023 at 08:27:45AM -0400, Jason Gunthorpe wrote:
> On Wed, Mar 01, 2023 at 11:56:53AM +0000, Krishna Yarlagadda wrote:

> > TPM device connected behind half duplex controller can only work
> > this way. So, no additional flag needed to check.

> Just because a DT hooks it up this way doesn't mean the kernel driver
> can support it, eg support hasn't been implemented in an older SPI
> driver or something.

> If the failure mode is anything other than the TPM doesn't probe we
> will need to check for support.

It's not like these buses are hot pluggable - someone would have to
design and manufacture a board which doesn't work.  It's probably
reasonable for this to fail with the SPI subsystem saying it can't
support things when the operation is tried.



* Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-03-01 12:37           ` Mark Brown
@ 2023-03-01 13:39             ` Jason Gunthorpe
  2023-03-01 13:45               ` Mark Brown
  2023-03-01 14:09               ` Thierry Reding
  0 siblings, 2 replies; 16+ messages in thread
From: Jason Gunthorpe @ 2023-03-01 13:39 UTC (permalink / raw)
  To: Mark Brown
  Cc: Krishna Yarlagadda, Jarkko Sakkinen, robh+dt, peterhuewe,
	krzysztof.kozlowski+dt, linux-spi, linux-tegra, linux-integrity,
	linux-kernel, thierry.reding, Jonathan Hunter,
	Sowjanya Komatineni, Laxman Dewangan

On Wed, Mar 01, 2023 at 12:37:27PM +0000, Mark Brown wrote:
> On Wed, Mar 01, 2023 at 08:27:45AM -0400, Jason Gunthorpe wrote:
> > On Wed, Mar 01, 2023 at 11:56:53AM +0000, Krishna Yarlagadda wrote:
> 
> > > TPM device connected behind half duplex controller can only work
> > > this way. So, no additional flag needed to check.
> 
> > Just because a DT hooks it up this way doesn't mean the kernel driver
> > can support it, eg support hasn't been implemented in an older SPI
> > driver or something.
> 
> > If the failure mode is anything other than the TPM doesn't probe we
> > will need to check for support.
> 
> It's not like these buses are hot pluggable - someone would have to
> design and manufacture a board which doesn't work.  It's probably
> reasonable for this to fail with the SPI subsystem saying it can't
> support things when the operation is tried.

If the spi subsystem fails this request with these flags that would be
great, it would cause the TPM to fail probing reliably.

But does this patch do that? It looks like non-supporting half duplex
drivers will just ignore the new flag?

Jason


* Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-03-01 13:39             ` Jason Gunthorpe
@ 2023-03-01 13:45               ` Mark Brown
  2023-03-01 14:09               ` Thierry Reding
  1 sibling, 0 replies; 16+ messages in thread
From: Mark Brown @ 2023-03-01 13:45 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Krishna Yarlagadda, Jarkko Sakkinen, robh+dt, peterhuewe,
	krzysztof.kozlowski+dt, linux-spi, linux-tegra, linux-integrity,
	linux-kernel, thierry.reding, Jonathan Hunter,
	Sowjanya Komatineni, Laxman Dewangan


On Wed, Mar 01, 2023 at 09:39:28AM -0400, Jason Gunthorpe wrote:
> On Wed, Mar 01, 2023 at 12:37:27PM +0000, Mark Brown wrote:

> > It's not like these buses are hot pluggable - someone would have to
> > design and manufacture a board which doesn't work.  It's probably
> > reasonable for this to fail with the SPI subsystem saying it can't
> > support things when the operation is tried.

> If the spi subsystem fails this request with these flags that would be
> great, it would cause the TPM to fail probing reliably.

> But does this patch do that? It looks like non-supporting half duplex
> drivers will just ignore the new flag?

That's something we can fix up in SPI, we shouldn't worry about it for
the client drivers.



* Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-03-01 13:39             ` Jason Gunthorpe
  2023-03-01 13:45               ` Mark Brown
@ 2023-03-01 14:09               ` Thierry Reding
  2023-03-01 15:38                 ` Jason Gunthorpe
  1 sibling, 1 reply; 16+ messages in thread
From: Thierry Reding @ 2023-03-01 14:09 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Mark Brown, Krishna Yarlagadda, Jarkko Sakkinen, robh+dt,
	peterhuewe, krzysztof.kozlowski+dt, linux-spi, linux-tegra,
	linux-integrity, linux-kernel, Jonathan Hunter,
	Sowjanya Komatineni, Laxman Dewangan


On Wed, Mar 01, 2023 at 09:39:28AM -0400, Jason Gunthorpe wrote:
> On Wed, Mar 01, 2023 at 12:37:27PM +0000, Mark Brown wrote:
> > On Wed, Mar 01, 2023 at 08:27:45AM -0400, Jason Gunthorpe wrote:
> > > On Wed, Mar 01, 2023 at 11:56:53AM +0000, Krishna Yarlagadda wrote:
> > 
> > > > TPM device connected behind half duplex controller can only work
> > > > this way. So, no additional flag needed to check.
> > 
> > > Just because a DT hooks it up this way doesn't mean the kernel driver
> > > can support it, eg support hasn't been implemented in an older SPI
> > > driver or something.
> > 
> > > If the failure mode is anything other than the TPM doesn't probe we
> > > will need to check for support.
> > 
> > It's not like these buses are hot pluggable - someone would have to
> > design and manufacture a board which doesn't work.  It's probably
> > reasonable for this to fail with the SPI subsystem saying it can't
> > support things when the operation is tried.
> 
> If the spi subsystem fails this request with these flags that would be
> great, it would cause the TPM to fail probing reliably.
> 
> But does this patch do that? It looks like non-supporting half duplex
> drivers will just ignore the new flag?

I think the assumption is that there are currently no half duplex
drivers that would be impacted by this. If I understand correctly, the
TPM driver currently supports only full duplex controllers, because
that's required in order to detect the wait state in software.

So, yes, half duplex controllers would ignore this flag, but since they
couldn't have supported TPM flow control before anyway it doesn't make a
difference.

Thierry



* Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-03-01 14:09               ` Thierry Reding
@ 2023-03-01 15:38                 ` Jason Gunthorpe
  0 siblings, 0 replies; 16+ messages in thread
From: Jason Gunthorpe @ 2023-03-01 15:38 UTC (permalink / raw)
  To: Thierry Reding
  Cc: Mark Brown, Krishna Yarlagadda, Jarkko Sakkinen, robh+dt,
	peterhuewe, krzysztof.kozlowski+dt, linux-spi, linux-tegra,
	linux-integrity, linux-kernel, Jonathan Hunter,
	Sowjanya Komatineni, Laxman Dewangan

On Wed, Mar 01, 2023 at 03:09:24PM +0100, Thierry Reding wrote:
> On Wed, Mar 01, 2023 at 09:39:28AM -0400, Jason Gunthorpe wrote:
> > On Wed, Mar 01, 2023 at 12:37:27PM +0000, Mark Brown wrote:
> > > On Wed, Mar 01, 2023 at 08:27:45AM -0400, Jason Gunthorpe wrote:
> > > > On Wed, Mar 01, 2023 at 11:56:53AM +0000, Krishna Yarlagadda wrote:
> > > 
> > > > > TPM device connected behind half duplex controller can only work
> > > > > this way. So, no additional flag needed to check.
> > > 
> > > > Just because a DT hooks it up this way doesn't mean the kernel driver
> > > > can support it, eg support hasn't been implemented in an older SPI
> > > > driver or something.
> > > 
> > > > If the failure mode is anything other than the TPM doesn't probe we
> > > > will need to check for support.
> > > 
> > > It's not like these buses are hot pluggable - someone would have to
> > > design and manufacture a board which doesn't work.  It's probably
> > > reasonable for this to fail with the SPI subsystem saying it can't
> > > support things when the operation is tried.
> > 
> > If the spi subsystem fails this request with these flags that would be
> > great, it would cause the TPM to fail probing reliably.
> > 
> > But does this patch do that? It looks like non-supporting half duplex
> > drivers will just ignore the new flag?
> 
> I think the assumption is that there are currently no half duplex
> drivers that would be impacted by this. If I understand correctly, the
> TPM driver currently supports only full duplex controllers, because
> that's required in order to detect the wait state in software.
> 
> So, yes, half duplex controllers would ignore this flag, but since they
> couldn't have supported TPM flow control before anyway it doesn't make a
> difference.

If more HW uses this feature it will likely look a lot like these
Tegra drivers where an existing supported SPI driver gains a HW bit to
do the flow. Meaning DTs will exist configuring a TPM to a half duplex
SPI and kernels will exist that don't have the HW driver that
implements it.

So, I would like it if old kernels running against a new DT do not
mis-operate the SPI because their SPI driver does not support TPM
operation. Either because the spi layer refuses the request as
unsupported or the TPM layer refuses to use the spi driver as
unsupported.

I do not like the idea that the SPI subsystem will take a request from
a client driver and silently mis-execute it.

Jason


* Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
  2023-02-28  3:32     ` Krishna Yarlagadda
@ 2023-03-01 23:17       ` Jarkko Sakkinen
  0 siblings, 0 replies; 16+ messages in thread
From: Jarkko Sakkinen @ 2023-03-01 23:17 UTC (permalink / raw)
  To: Krishna Yarlagadda
  Cc: robh+dt, broonie, peterhuewe, jgg, krzysztof.kozlowski+dt,
	linux-spi, linux-tegra, linux-integrity, linux-kernel,
	thierry.reding, Jonathan Hunter, Sowjanya Komatineni,
	Laxman Dewangan

On Tue, Feb 28, 2023 at 03:32:24AM +0000, Krishna Yarlagadda wrote:
> > -----Original Message-----
> > From: Jarkko Sakkinen <jarkko@kernel.org>
> > Sent: 28 February 2023 08:06
> > To: Krishna Yarlagadda <kyarlagadda@nvidia.com>
> > Cc: robh+dt@kernel.org; broonie@kernel.org; peterhuewe@gmx.de;
> > jgg@ziepe.ca; krzysztof.kozlowski+dt@linaro.org; linux-spi@vger.kernel.org;
> > linux-tegra@vger.kernel.org; linux-integrity@vger.kernel.org; linux-
> > kernel@vger.kernel.org; thierry.reding@gmail.com; Jonathan Hunter
> > <jonathanh@nvidia.com>; Sowjanya Komatineni
> > <skomatineni@nvidia.com>; Laxman Dewangan <ldewangan@nvidia.com>
> > Subject: Re: [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling
> > 
> > External email: Use caution opening links or attachments
> > 
> > 
> > On Mon, Feb 27, 2023 at 05:37:01PM +0530, Krishna Yarlagadda wrote:
> > > TPM devices raise wait signal on last addr cycle. This can be detected
> > > by software driver by reading MISO line on same clock which requires
> > > full duplex support. In case of half duplex controllers wait detection
> > > has to be implemented in HW.
> > > Support hardware wait state detection by sending entire message and let
> > > controller handle flow control.
> > 
> > When a sentence is started with the word "support", it translates to "I'm
> > too lazy to write a proper and verbose description of the implementation"
> > :-)
> > 
> > It has some abstract ideas of the implementation, I give you that, but do
> > you think anyone ever will get any value of reading that honestly? A bit
> > more concrete description of the change helps e.g. when bisecting bugs.
> > 
> I described why we are making the change. I will add an explanation of how
> it is implemented as well.

OK, cool, thank you.

> 
> > > The QSPI controllers in Tegra234 & Tegra241 implement TPM wait polling.
> > >
> > > Signed-off-by: Krishna Yarlagadda <kyarlagadda@nvidia.com>
> > > ---
> > >  drivers/char/tpm/tpm_tis_spi_main.c | 92
> > ++++++++++++++++++++++++++++-
> > >  1 file changed, 90 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/char/tpm/tpm_tis_spi_main.c
> > b/drivers/char/tpm/tpm_tis_spi_main.c
> > > index a0963a3e92bd..5f66448ee09e 100644
> > > --- a/drivers/char/tpm/tpm_tis_spi_main.c
> > > +++ b/drivers/char/tpm/tpm_tis_spi_main.c
> > > @@ -71,8 +71,74 @@ static int tpm_tis_spi_flow_control(struct
> > tpm_tis_spi_phy *phy,
> > >       return 0;
> > >  }
> > >
> > > -int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > > -                      u8 *in, const u8 *out)
> > > +/*
> > > + * Half duplex controllers with support for TPM wait state detection,
> > > + * such as Tegra241, need cmd, addr & data sent in a single message to
> > > + * manage HW flow control. Each phase is sent in a separate transfer so
> > > + * the controller can identify the phase.
> > > + */
> > > +int tpm_tis_spi_hw_flow_transfer(struct tpm_tis_data *data, u32 addr,
> > u16 len,
> > > +                              u8 *in, const u8 *out)
> > > +{
> > > +     struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > > +     struct spi_transfer spi_xfer[3];
> > > +     struct spi_message m;
> > > +     u8 transfer_len;
> > > +     int ret;
> > > +
> > > +     while (len) {
> > > +             transfer_len = min_t(u16, len, MAX_SPI_FRAMESIZE);
> > > +
> > > +             spi_message_init(&m);
> > > +             phy->iobuf[0] = (in ? 0x80 : 0) | (transfer_len - 1);
> > > +             phy->iobuf[1] = 0xd4;
> > > +             phy->iobuf[2] = addr >> 8;
> > > +             phy->iobuf[3] = addr;
> > > +
> > > +             memset(&spi_xfer, 0, sizeof(spi_xfer));
> > > +
> > > +             spi_xfer[0].tx_buf = phy->iobuf;
> > > +             spi_xfer[0].len = 1;
> > > +             spi_message_add_tail(&spi_xfer[0], &m);
> > > +
> > > +             spi_xfer[1].tx_buf = phy->iobuf + 1;
> > > +             spi_xfer[1].len = 3;
> > > +             spi_message_add_tail(&spi_xfer[1], &m);
> > > +
> > > +             if (out) {
> > > +                     spi_xfer[2].tx_buf = &phy->iobuf[4];
> > > +                     spi_xfer[2].rx_buf = NULL;
> > > +                     memcpy(&phy->iobuf[4], out, transfer_len);
> > > +                     out += transfer_len;
> > > +             }
> > > +
> > > +             if (in) {
> > > +                     spi_xfer[2].tx_buf = NULL;
> > > +                     spi_xfer[2].rx_buf = &phy->iobuf[4];
> > > +             }
> > > +
> > > +             spi_xfer[2].len = transfer_len;
> > > +             spi_message_add_tail(&spi_xfer[2], &m);
> > > +
> > > +             reinit_completion(&phy->ready);
> > > +
> > > +             ret = spi_sync_locked(phy->spi_device, &m);
> > > +             if (ret < 0)
> > > +                     return ret;
> > > +
> > > +             if (in) {
> > > +                     memcpy(in, &phy->iobuf[4], transfer_len);
> > > +                     in += transfer_len;
> > > +             }
> > > +
> > > +             len -= transfer_len;
> > > +     }
> > > +
> > > +     return ret;
> > > +}
> > > +
> > > +int tpm_tis_spi_sw_flow_transfer(struct tpm_tis_data *data, u32 addr,
> > u16 len,
> > > +                              u8 *in, const u8 *out)
> > >  {
> > >       struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > >       int ret = 0;
> > > @@ -140,6 +206,28 @@ int tpm_tis_spi_transfer(struct tpm_tis_data
> > *data, u32 addr, u16 len,
> > >       return ret;
> > >  }
> > >
> > > +int tpm_tis_spi_transfer(struct tpm_tis_data *data, u32 addr, u16 len,
> > > +                      u8 *in, const u8 *out)
> > > +{
> > > +     struct tpm_tis_spi_phy *phy = to_tpm_tis_spi_phy(data);
> > > +     struct spi_controller *ctlr = phy->spi_device->controller;
> > > +
> > > +     /*
> > > +      * TPM flow control over SPI requires full duplex support.
> > > +      * Send entire message to a half duplex controller to handle
> > > +      * wait polling in controller.
> > > +      * Set the TPM HW flow control flag.
> > > +      */
> > > +     if (ctlr->flags & SPI_CONTROLLER_HALF_DUPLEX) {
> > > +             phy->spi_device->mode |= SPI_TPM_HW_FLOW;
> > > +             return tpm_tis_spi_hw_flow_transfer(data, addr, len, in,
> > > +                                                 out);
> > > +     } else {
> > > +             return tpm_tis_spi_sw_flow_transfer(data, addr, len, in,
> > > +                                                 out);
> > > +     }
> > > +}
> > > +
> > >  static int tpm_tis_spi_read_bytes(struct tpm_tis_data *data, u32 addr,
> > >                                 u16 len, u8 *result, enum tpm_tis_io_mode io_mode)
> > >  {
> > > --
> > > 2.17.1
> > >
> > 
> > Looking pretty good but do you really want to export
> > tpm_tis_spi_{hw,sw}_flow_transfer?
> > 
> > BR, Jarkko
> Right, there is no need to export tpm_tis_spi_{hw,sw}_flow_transfer.
> I will update this in the next version.

Great.

BR, Jarkko


end of thread, other threads:[~2023-03-01 23:17 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
2023-02-27 12:06 [Patch V5 0/3] Tegra TPM driver with HW flow control Krishna Yarlagadda
2023-02-27 12:07 ` [Patch V5 1/3] spi: Add TPM HW flow flag Krishna Yarlagadda
2023-02-27 12:07 ` [Patch V5 2/3] tpm_tis-spi: Support hardware wait polling Krishna Yarlagadda
2023-02-28  2:36   ` Jarkko Sakkinen
2023-02-28  3:32     ` Krishna Yarlagadda
2023-03-01 23:17       ` Jarkko Sakkinen
2023-02-28 12:28     ` Jason Gunthorpe
2023-03-01 11:56       ` Krishna Yarlagadda
2023-03-01 12:27         ` Jason Gunthorpe
2023-03-01 12:37           ` Mark Brown
2023-03-01 13:39             ` Jason Gunthorpe
2023-03-01 13:45               ` Mark Brown
2023-03-01 14:09               ` Thierry Reding
2023-03-01 15:38                 ` Jason Gunthorpe
2023-02-27 12:07 ` [Patch V5 3/3] spi: tegra210-quad: Enable TPM " Krishna Yarlagadda
2023-02-28  2:28 ` [Patch V5 0/3] Tegra TPM driver with HW flow control Jarkko Sakkinen
