* [PATCH] [RFC] tpm_tis: tpm_tcg_flush() after iowrite*()s
@ 2017-08-04 21:56 Haris Okanovic
From: Haris Okanovic @ 2017-08-04 21:56 UTC
  To: linux-rt-users, linux-kernel
  Cc: haris.okanovic, harisokn, julia.cartwright, gratian.crisan,
	scott.hartman, chris.graf, brad.mouring, jonathan.david

I'm seeing a latency issue with an SPI-based TPM chip driven by the tpm_tis
driver: accesses from a non-RT usermode application induce ~400 us latency
spikes in cyclictest (Intel Atom E3940 system, PREEMPT_RT_FULL kernel).

The spikes are caused by a stalling ioread8() operation that follows a
sequence of 30+ iowrite8()s to the same address. I believe this happens
because the writes are buffered (in the CPU or somewhere along the bus), and
that backlog gets flushed on the first LOAD instruction (ioread*()) that
follows.
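
For illustration, here is a minimal sketch of the read-back idiom the change
below applies; the helper name and STATUS_REG offset are hypothetical, not
taken from the driver:

/*
 * Hypothetical sketch, not from the driver: a posted MMIO write can be
 * forced to complete by reading back a side-effect-free register in the
 * same device region before the CPU moves on.
 */
static void write_and_flush(void __iomem *base, unsigned long off, u8 val)
{
	iowrite8(val, base + off);          /* write may be posted/buffered */
	(void)ioread8(base + STATUS_REG);   /* read-back pushes it out to the device */
}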

The enclosed change appears to fix this issue: read the TPM chip's
access register (status code) after every iowrite*() operation.

I believe this works because it amortizes the cost of flushing data to the
chip across multiple instructions. However, I don't have any direct evidence
to support this theory.
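
(Rough arithmetic under that theory, using the numbers above: if draining a
backlog of 30+ posted writes accounts for the ~400 us stall seen at the first
ioread8(), then flushing after each individual iowrite8() should split it into
roughly 400/30 ≈ 13 us per write, spread across the write sequence.)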

Does this seem like a reasonable theory?

Any feedback on the change (a better way to do it, perhaps)?

Thanks,
Haris Okanovic

https://github.com/harisokanovic/linux/tree/dev/hokanovi/tpm-latency-spike-fix-rfc
---
 drivers/char/tpm/tpm_tis.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
index c7e1384f1b08..5cdbfec0ad67 100644
--- a/drivers/char/tpm/tpm_tis.c
+++ b/drivers/char/tpm/tpm_tis.c
@@ -89,6 +89,19 @@ static inline int is_itpm(struct acpi_device *dev)
 }
 #endif
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+/*
+ * Flush previous iowrite*() operations out to the chip so that a
+ * subsequent ioread*() won't stall the CPU.
+ */
+static void tpm_tcg_flush(struct tpm_tis_tcg_phy *phy)
+{
+	ioread8(phy->iobase + TPM_ACCESS(0));
+}
+#else
+#define tpm_tcg_flush(phy) do { } while (0)
+#endif
+
 static int tpm_tcg_read_bytes(struct tpm_tis_data *data, u32 addr, u16 len,
 			      u8 *result)
 {
@@ -104,8 +117,10 @@ static int tpm_tcg_write_bytes(struct tpm_tis_data *data, u32 addr, u16 len,
 {
 	struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
 
-	while (len--)
+	while (len--) {
 		iowrite8(*value++, phy->iobase + addr);
+		tpm_tcg_flush(phy);
+	}
 	return 0;
 }
 
@@ -130,6 +145,7 @@ static int tpm_tcg_write32(struct tpm_tis_data *data, u32 addr, u32 value)
 	struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
 
 	iowrite32(value, phy->iobase + addr);
+	tpm_tcg_flush(phy);
 	return 0;
 }
 
-- 
2.13.2

