tpmdd-devel.lists.sourceforge.net archive mirror
* [PATCH] tpm_tis: fix stall after iowrite*()s
       [not found] <20170804215651.29247-1-haris.okanovic@ni.com>
@ 2017-08-14 22:53 ` Haris Okanovic
  2017-08-15  6:11   ` Alexander Stein
       [not found] ` <20170804215651.29247-1-haris.okanovic-acOepvfBmUk@public.gmane.org>
  1 sibling, 1 reply; 10+ messages in thread
From: Haris Okanovic @ 2017-08-14 22:53 UTC (permalink / raw)
  To: linux-rt-users, linux-kernel
  Cc: tpmdd-devel, haris.okanovic, harisokn, julia.cartwright,
	gratian.crisan, scott.hartman, chris.graf, brad.mouring,
	jonathan.david, peterhuewe, tpmdd, jarkko.sakkinen, jgunthorpe,
	eric.gardiner

ioread8() operations to TPM MMIO addresses can stall the cpu when
immediately following a sequence of iowrite*()'s to the same region.

For example, cyclictest measures ~400us latency spikes when a non-RT
usermode application communicates with an SPI-based TPM chip (Intel Atom
E3940 system, PREEMPT_RT_FULL kernel). The spikes are caused by a
stalling ioread8() operation following a sequence of 30+ iowrite8()s to
the same address. I believe this happens because the write sequence is
buffered (in cpu or somewhere along the bus), and gets flushed on the
first LOAD instruction (ioread*()) that follows.

The enclosed change appears to fix this issue: read the TPM chip's
access register (status code) after every iowrite*() operation to
amortize the cost of flushing data to the chip across multiple instructions.

Signed-off-by: Haris Okanovic <haris.okanovic@ni.com>
---
https://patchwork.kernel.org/patch/9882119/
https://github.com/harisokanovic/linux/tree/dev/hokanovi/tpm-latency-spike-fix
---
 drivers/char/tpm/tpm_tis.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
index c7e1384f1b08..3be2755d0514 100644
--- a/drivers/char/tpm/tpm_tis.c
+++ b/drivers/char/tpm/tpm_tis.c
@@ -52,6 +52,22 @@ static inline struct tpm_tis_tcg_phy *to_tpm_tis_tcg_phy(struct tpm_tis_data *da
 	return container_of(data, struct tpm_tis_tcg_phy, priv);
 }
 
+static inline void tpm_tis_iowrite8(u8 b, void __iomem *iobase, u32 addr)
+{
+	iowrite8(b, iobase + addr);
+#ifdef CONFIG_PREEMPT_RT_FULL
+	ioread8(iobase + TPM_ACCESS(0));
+#endif
+}
+
+static inline void tpm_tis_iowrite32(u32 b, void __iomem *iobase, u32 addr)
+{
+	iowrite32(b, iobase + addr);
+#ifdef CONFIG_PREEMPT_RT_FULL
+	ioread8(iobase + TPM_ACCESS(0));
+#endif
+}
+
 static bool interrupts = true;
 module_param(interrupts, bool, 0444);
 MODULE_PARM_DESC(interrupts, "Enable interrupts");
@@ -105,7 +121,7 @@ static int tpm_tcg_write_bytes(struct tpm_tis_data *data, u32 addr, u16 len,
 	struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
 
 	while (len--)
-		iowrite8(*value++, phy->iobase + addr);
+		tpm_tis_iowrite8(*value++, phy->iobase, addr);
 	return 0;
 }
 
@@ -129,7 +145,7 @@ static int tpm_tcg_write32(struct tpm_tis_data *data, u32 addr, u32 value)
 {
 	struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
 
-	iowrite32(value, phy->iobase + addr);
+	tpm_tis_iowrite32(value, phy->iobase, addr);
 	return 0;
 }
 
-- 
2.13.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread
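
For readers new to the failure mode described above, a minimal sketch of the
access pattern that produces the stall (illustrative only, not part of the
patch; the demo_* name and DEMO_* register offsets are hypothetical): a burst
of posted iowrite8()s returns immediately, and the first ioread8() that
follows pays for draining the entire burst.

#include <linux/io.h>
#include <linux/types.h>

#define DEMO_FIFO_REG   0x24	/* hypothetical data FIFO offset */
#define DEMO_STATUS_REG 0x18	/* hypothetical status register offset */

static u8 demo_burst_then_read(void __iomem *regs, const u8 *buf, int len)
{
	int i;

	/* Posted MMIO writes: each iowrite8() returns before the data
	 * reaches the device, so the loop itself is cheap. */
	for (i = 0; i < len; i++)
		iowrite8(buf[i], regs + DEMO_FIFO_REG);

	/* The first load after the burst stalls until every buffered
	 * write has been flushed to the device. */
	return ioread8(regs + DEMO_STATUS_REG);
}

The patch spreads that flush cost across the writes by reading TPM_ACCESS(0)
after each one instead of paying for the whole burst at once.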

* Re: [PATCH] tpm_tis: fix stall after iowrite*()s
  2017-08-14 22:53 ` [PATCH] tpm_tis: fix stall after iowrite*()s Haris Okanovic
@ 2017-08-15  6:11   ` Alexander Stein
  2017-08-15 20:10     ` Haris Okanovic
  0 siblings, 1 reply; 10+ messages in thread
From: Alexander Stein @ 2017-08-15  6:11 UTC (permalink / raw)
  To: Haris Okanovic
  Cc: linux-rt-users, linux-kernel, tpmdd-devel, harisokn,
	julia.cartwright, gratian.crisan, scott.hartman, chris.graf,
	brad.mouring, jonathan.david, peterhuewe, tpmdd, jarkko.sakkinen,
	jgunthorpe, eric.gardiner

On Monday 14 August 2017 17:53:47, Haris Okanovic wrote:
> --- a/drivers/char/tpm/tpm_tis.c
> +++ b/drivers/char/tpm/tpm_tis.c
> @@ -52,6 +52,22 @@ static inline struct tpm_tis_tcg_phy *to_tpm_tis_tcg_phy(struct tpm_tis_data *da
>  	return container_of(data, struct tpm_tis_tcg_phy, priv);
>  }
> 
> +static inline void tpm_tis_iowrite8(u8 b, void __iomem *iobase, u32 addr)
> +{
> +	iowrite8(b, iobase + addr);
> +#ifdef CONFIG_PREEMPT_RT_FULL
> +	ioread8(iobase + TPM_ACCESS(0));
> +#endif
> +}

Maybe add some comment why an ioread8 is actually required after each write on 
RT. Currently it is rather obvious why this additional read is necessary. But 
is this still the case in a year?

> +static inline void tpm_tis_iowrite32(u32 b, void __iomem *iobase, u32 addr)
> +{
> +	iowrite32(b, iobase + addr);
> +#ifdef CONFIG_PREEMPT_RT_FULL
> +	ioread8(iobase + TPM_ACCESS(0));
> +#endif
> +}

Same applies here. Or add a comment above both functions describing their 
purpose.

Just my 2 cents

Best regards,
Alexander

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] tpm_tis: fix stall after iowrite*()s
  2017-08-15  6:11   ` Alexander Stein
@ 2017-08-15 20:10     ` Haris Okanovic
  0 siblings, 0 replies; 10+ messages in thread
From: Haris Okanovic @ 2017-08-15 20:10 UTC (permalink / raw)
  To: Alexander Stein
  Cc: linux-rt-users, linux-kernel, tpmdd-devel, harisokn,
	julia.cartwright, gratian.crisan, scott.hartman, chris.graf,
	brad.mouring, jonathan.david, peterhuewe, tpmdd, jarkko.sakkinen,
	jgunthorpe, eric.gardiner



On 08/15/2017 01:11 AM, Alexander Stein wrote:
> On Monday 14 August 2017 17:53:47, Haris Okanovic wrote:
>> --- a/drivers/char/tpm/tpm_tis.c
>> +++ b/drivers/char/tpm/tpm_tis.c
>> @@ -52,6 +52,22 @@ static inline struct tpm_tis_tcg_phy *to_tpm_tis_tcg_phy(struct tpm_tis_data *da
>>  	return container_of(data, struct tpm_tis_tcg_phy, priv);
>>  }
>>
>> +static inline void tpm_tis_iowrite8(u8 b, void __iomem *iobase, u32 addr)
>> +{
>> +	iowrite8(b, iobase + addr);
>> +#ifdef CONFIG_PREEMPT_RT_FULL
>> +	ioread8(iobase + TPM_ACCESS(0));
>> +#endif
>> +}
> 
> Maybe add some comment why an ioread8 is actually required after each write on
> RT. Currently it is rather obvious why this additional read is necessary. But
> is this still the case in a year?
> 

Sure. I re-added the tpm_tis_flush() function with comments to explain 
what's going on. It's called from tpm_tis_iowrite8() and tpm_tis_iowrite32().

Will post a PATCH v2 shortly.

>> +static inline void tpm_tis_iowrite32(u32 b, void __iomem *iobase, u32 addr)
>> +{
>> +	iowrite32(b, iobase + addr);
>> +#ifdef CONFIG_PREEMPT_RT_FULL
>> +	ioread8(iobase + TPM_ACCESS(0));
>> +#endif
>> +}
> 
> Same applies here. Or add a comment above both functions describing their
> purpose.
> 
> Just my 2 cents
> 
> Best regards,
> Alexander
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH v2] tpm_tis: fix stall after iowrite*()s
       [not found] ` <20170804215651.29247-1-haris.okanovic-acOepvfBmUk@public.gmane.org>
@ 2017-08-15 20:13   ` Haris Okanovic
  2017-08-16 21:15     ` [tpmdd-devel] " Ken Goldman
  0 siblings, 1 reply; 10+ messages in thread
From: Haris Okanovic @ 2017-08-15 20:13 UTC (permalink / raw)
  To: linux-rt-users-u79uwXL29TY76Z2rM5mHXA,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA
  Cc: haris.okanovic-acOepvfBmUk, julia.cartwright-acOepvfBmUk,
	harisokn-Re5JQEeQqe8AvxtiuMwx3w,
	jarkko.sakkinen-VuQAYsv1563Yd54FQh9/CA,
	eric.gardiner-acOepvfBmUk,
	jgunthorpe-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/,
	jonathan.david-acOepvfBmUk,
	tpmdd-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	scott.hartman-acOepvfBmUk, chris.graf-acOepvfBmUk,
	gratian.crisan-acOepvfBmUk, brad.mouring-acOepvfBmUk

ioread8() operations to TPM MMIO addresses can stall the cpu when
immediately following a sequence of iowrite*()'s to the same region.

For example, cyclictest measures ~400us latency spikes when a non-RT
usermode application communicates with an SPI-based TPM chip (Intel Atom
E3940 system, PREEMPT_RT_FULL kernel). The spikes are caused by a
stalling ioread8() operation following a sequence of 30+ iowrite8()s to
the same address. I believe this happens because the write sequence is
buffered (in cpu or somewhere along the bus), and gets flushed on the
first LOAD instruction (ioread*()) that follows.

The enclosed change appears to fix this issue: read the TPM chip's
access register (status code) after every iowrite*() operation to
amortize the cost of flushing data to the chip across multiple instructions.

Signed-off-by: Haris Okanovic <haris.okanovic-acOepvfBmUk@public.gmane.org>
---
[PATCH v2] Add tpm_tis_flush() function with comment explaining
the ioread8() after write, per Alexander.

https://patchwork.kernel.org/patch/9882119/
https://github.com/harisokanovic/linux/tree/dev/hokanovi/tpm-latency-spike-fix-v2
---
 drivers/char/tpm/tpm_tis.c | 29 +++++++++++++++++++++++++++--
 1 file changed, 27 insertions(+), 2 deletions(-)

diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
index c7e1384f1b08..3fdec13eba8d 100644
--- a/drivers/char/tpm/tpm_tis.c
+++ b/drivers/char/tpm/tpm_tis.c
@@ -52,6 +52,31 @@ static inline struct tpm_tis_tcg_phy *to_tpm_tis_tcg_phy(struct tpm_tis_data *da
 	return container_of(data, struct tpm_tis_tcg_phy, priv);
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+/*
+ * Flushes previous write operations to chip so that a subsequent
+ * ioread*()s won't stall a cpu.
+ */
+static inline void tpm_tis_flush(void __iomem *iobase)
+{
+	ioread8(iobase + TPM_ACCESS(0));
+}
+#else
+#define tpm_tis_flush(iobase) do { } while (0)
+#endif
+
+static inline void tpm_tis_iowrite8(u8 b, void __iomem *iobase, u32 addr)
+{
+	iowrite8(b, iobase + addr);
+	tpm_tis_flush(iobase);
+}
+
+static inline void tpm_tis_iowrite32(u32 b, void __iomem *iobase, u32 addr)
+{
+	iowrite32(b, iobase + addr);
+	tpm_tis_flush(iobase);
+}
+
 static bool interrupts = true;
 module_param(interrupts, bool, 0444);
 MODULE_PARM_DESC(interrupts, "Enable interrupts");
@@ -105,7 +130,7 @@ static int tpm_tcg_write_bytes(struct tpm_tis_data *data, u32 addr, u16 len,
 	struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
 
 	while (len--)
-		iowrite8(*value++, phy->iobase + addr);
+		tpm_tis_iowrite8(*value++, phy->iobase, addr);
 	return 0;
 }
 
@@ -129,7 +154,7 @@ static int tpm_tcg_write32(struct tpm_tis_data *data, u32 addr, u32 value)
 {
 	struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
 
-	iowrite32(value, phy->iobase + addr);
+	tpm_tis_iowrite32(value, phy->iobase, addr);
 	return 0;
 }
 
-- 
2.13.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [tpmdd-devel] [PATCH v2] tpm_tis: fix stall after iowrite*()s
  2017-08-15 20:13   ` [PATCH v2] " Haris Okanovic
@ 2017-08-16 21:15     ` Ken Goldman
  2017-08-17  5:57       ` Alexander Stein
  2017-08-17 10:38       ` Sebastian Andrzej Siewior
  0 siblings, 2 replies; 10+ messages in thread
From: Ken Goldman @ 2017-08-16 21:15 UTC (permalink / raw)
  To: linux-rt-users, linux-kernel; +Cc: tpmdd-devel

On 8/15/2017 4:13 PM, Haris Okanovic wrote:
> ioread8() operations to TPM MMIO addresses can stall the cpu when
> immediately following a sequence of iowrite*()'s to the same region.
> 
> For example, cyclictest measures ~400us latency spikes when a non-RT
> usermode application communicates with an SPI-based TPM chip (Intel Atom
> E3940 system, PREEMPT_RT_FULL kernel). The spikes are caused by a
> stalling ioread8() operation following a sequence of 30+ iowrite8()s to
> the same address. I believe this happens because the write sequence is
> buffered (in cpu or somewhere along the bus), and gets flushed on the
> first LOAD instruction (ioread*()) that follows.
> 
> The enclosed change appears to fix this issue: read the TPM chip's
> access register (status code) after every iowrite*() operation to
> amortize the cost of flushing data to the chip across multiple instructions.

I worry a bit about "appears to fix".  It seems odd that the TPM device 
driver would be the first code to uncover this.  Can anyone confirm that 
the chipset does indeed have this bug?

I'd also like an indication of the performance penalty.  We're doing a 
lot of work to improve the performance and I worry that "do a read after 
every write" will have a performance impact.


^ permalink raw reply	[flat|nested] 10+ messages in thread
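
Ken's question about the performance penalty is fair. One rough way to
quantify it (a hypothetical micro-benchmark sketch, not from this thread; the
demo_* name is invented) is to time a burst of byte writes with and without
the read-back that the patch adds, and compare the totals for a typical
command length:

#include <linux/io.h>
#include <linux/ktime.h>
#include <linux/types.h>

/* Time 'n' byte writes to 'addr', optionally following each write with a
 * read-back of 'flush_addr' (the patch uses TPM_ACCESS(0) for this). */
static u64 demo_time_writes(void __iomem *iobase, u32 addr, u32 flush_addr,
			    int n, bool flush)
{
	ktime_t t0 = ktime_get();
	int i;

	for (i = 0; i < n; i++) {
		iowrite8(0, iobase + addr);
		if (flush)
			ioread8(iobase + flush_addr);
	}
	return ktime_to_ns(ktime_sub(ktime_get(), t0));
}

The difference between the flushed and unflushed timings gives the added cost
per command of the read-after-write approach.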

* Re: [tpmdd-devel] [PATCH v2] tpm_tis: fix stall after iowrite*()s
  2017-08-16 21:15     ` [tpmdd-devel] " Ken Goldman
@ 2017-08-17  5:57       ` Alexander Stein
  2017-08-17 10:38       ` Sebastian Andrzej Siewior
  1 sibling, 0 replies; 10+ messages in thread
From: Alexander Stein @ 2017-08-17  5:57 UTC (permalink / raw)
  To: Ken Goldman; +Cc: linux-rt-users, linux-kernel, tpmdd-devel

On Wednesday 16 August 2017 17:15:55, Ken Goldman wrote:
> On 8/15/2017 4:13 PM, Haris Okanovic wrote:
> > ioread8() operations to TPM MMIO addresses can stall the cpu when
> > immediately following a sequence of iowrite*()'s to the same region.
> > 
> > For example, cyclictest measures ~400us latency spikes when a non-RT
> > usermode application communicates with an SPI-based TPM chip (Intel Atom
> > E3940 system, PREEMPT_RT_FULL kernel). The spikes are caused by a
> > stalling ioread8() operation following a sequence of 30+ iowrite8()s to
> > the same address. I believe this happens because the write sequence is
> > buffered (in cpu or somewhere along the bus), and gets flushed on the
> > first LOAD instruction (ioread*()) that follows.
> > 
> > The enclosed change appears to fix this issue: read the TPM chip's
> > access register (status code) after every iowrite*() operation to
> > amortize the cost of flushing data to the chip across multiple instructions.
> 
> I worry a bit about "appears to fix".  It seems odd that the TPM device
> driver would be the first code to uncover this.  Can anyone confirm that
> the chipset does indeed have this bug?

No, there was already a similar problem in e1000e where a PCIe read stalled 
the CPU, so no interrupts were serviced. See 
https://www.spinics.net/lists/linux-rt-users/msg14077.html
AFAIK there was no outcome though.

> I'd also like an indication of the performance penalty.  We're doing a
> lot of work to improve the performance and I worry that "do a read after
> every write" will have a performance impact.

Realtime will always affect performance, but IMHO the latter is much more 
important.

Best regards,
Alexander

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [tpmdd-devel] [PATCH v2] tpm_tis: fix stall after iowrite*()s
  2017-08-16 21:15     ` [tpmdd-devel] " Ken Goldman
  2017-08-17  5:57       ` Alexander Stein
@ 2017-08-17 10:38       ` Sebastian Andrzej Siewior
       [not found]         ` <20170817103807.ubrbylnud6wxod3s-hfZtesqFncYOwBW4kG4KsQ@public.gmane.org>
  1 sibling, 1 reply; 10+ messages in thread
From: Sebastian Andrzej Siewior @ 2017-08-17 10:38 UTC (permalink / raw)
  To: Ken Goldman, Haris Okanovic
  Cc: linux-rt-users, linux-kernel, tpmdd-devel, harisokn,
	julia.cartwright, gratian.crisan, scott.hartman, chris.graf,
	brad.mouring, jonathan.david, peterhuewe, tpmdd, jarkko.sakkinen,
	jgunthorpe, eric.gardiner

On 2017-08-16 17:15:55 [-0400], Ken Goldman wrote:
> On 8/15/2017 4:13 PM, Haris Okanovic wrote:
> > ioread8() operations to TPM MMIO addresses can stall the cpu when
> > immediately following a sequence of iowrite*()'s to the same region.
> > 
> > For example, cyclictest measures ~400us latency spikes when a non-RT
> > usermode application communicates with an SPI-based TPM chip (Intel Atom
> > E3940 system, PREEMPT_RT_FULL kernel). The spikes are caused by a
> > stalling ioread8() operation following a sequence of 30+ iowrite8()s to
> > the same address. I believe this happens because the write sequence is
> > buffered (in cpu or somewhere along the bus), and gets flushed on the
> > first LOAD instruction (ioread*()) that follows.
> > 
> > The enclosed change appears to fix this issue: read the TPM chip's
> > access register (status code) after every iowrite*() operation to
> > amortize the cost of flushing data to the chip across multiple instructions.

Haris, could you try a wmb() instead of the read?

> I worry a bit about "appears to fix".  It seems odd that the TPM device
> driver would be the first code to uncover this.  Can anyone confirm that the
> chipset does indeed have this bug?

What Haris says makes sense. It is just that not all architectures
accumulate/batch writes to HW.

> I'd also like an indication of the performance penalty.  We're doing a lot
> of work to improve the performance and I worry that "do a read after every
> write" will have a performance impact.
So powerpc (for instance) has a sync operation after each write to HW. I
am wondering if we might need something like that on x86.

Sebastian

^ permalink raw reply	[flat|nested] 10+ messages in thread
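
The wmb() substitution Sebastian asks about would presumably look something
like the sketch below (a hypothetical variant of the v2 helper with the
read-back swapped for a barrier; it is not code from the thread, and Haris
reports further down that neither wmb() nor mb() removed the spikes):

#include <linux/io.h>
#include <linux/types.h>
#include <asm/barrier.h>

static inline void tpm_tis_iowrite8_wmb(u8 b, void __iomem *iobase, u32 addr)
{
	iowrite8(b, iobase + addr);
#ifdef CONFIG_PREEMPT_RT_FULL
	/* Orders this store against later stores, but does not force the
	 * posted write to reach the device the way a read-back does. */
	wmb();
#endif
}

This is consistent with Jason's point below: a memory barrier constrains
ordering, while only a read from the device guarantees that earlier posted
writes have actually drained.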

* Re: [PATCH v2] tpm_tis: fix stall after iowrite*()s
       [not found]         ` <20170817103807.ubrbylnud6wxod3s-hfZtesqFncYOwBW4kG4KsQ@public.gmane.org>
@ 2017-08-17 17:17           ` Jason Gunthorpe
       [not found]             ` <20170817171732.GA22792-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
  2017-08-19 17:03             ` [tpmdd-devel] " Jarkko Sakkinen
  0 siblings, 2 replies; 10+ messages in thread
From: Jason Gunthorpe @ 2017-08-17 17:17 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Haris Okanovic, julia.cartwright-acOepvfBmUk,
	linux-rt-users-u79uwXL29TY76Z2rM5mHXA,
	harisokn-Re5JQEeQqe8AvxtiuMwx3w,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	jarkko.sakkinen-VuQAYsv1563Yd54FQh9/CA,
	eric.gardiner-acOepvfBmUk, jonathan.david-acOepvfBmUk,
	tpmdd-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	scott.hartman-acOepvfBmUk, chris.graf-acOepvfBmUk,
	gratian.crisan-acOepvfBmUk, brad.mouring-acOepvfBmUk

On Thu, Aug 17, 2017 at 12:38:07PM +0200, Sebastian Andrzej Siewior wrote:

> > I worry a bit about "appears to fix".  It seems odd that the TPM device
> > driver would be the first code to uncover this.  Can anyone confirm that the
> > chipset does indeed have this bug?
> 
> What Haris says makes sense. It is just not all architectures
> accumulate/ batch writes to HW.

It doesn't seem that odd to me.. In modern Intel chipsets the physical
LPC bus is used for very little. Maybe some flash and possibly a
winbond super IO at worst?  Plus the TPM.

I can't confirm what Intel has done, but if writes are posted, then it
is not a 'bug', but expected operation for a PCI/LPC bridge device to
have an ordered queue of posted writes, and thus higher latency when
processing reads due to ordering requirements.

Other drivers may not see it because most LPC usages would not be
write heavy, or might use IO instructions which are not posted..

I can confirm that my ARM systems with a custom PCI-LPC bridge will
have exactly the same problem, and that the readl is the only
solution.

This is because writes to LPC are posted over PCI and will be buffered
in the root complex, device end port and internally in the LPC
bridge. Since they are posted there is no way for the CPU to know when
they complete and when it would be 'low latency' to issue a read.

> So powerpc (for instance) has a sync operation after each write to HW. I
> am wondering if we could need something like that on x86.

Even on something like PPC 'sync' is not defined to globally flush
posted writes, and will not help. WMB is probably similar.

Jason

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH v2] tpm_tis: fix stall after iowrite*()s
       [not found]             ` <20170817171732.GA22792-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
@ 2017-08-17 20:12               ` Haris Okanovic
  0 siblings, 0 replies; 10+ messages in thread
From: Haris Okanovic @ 2017-08-17 20:12 UTC (permalink / raw)
  To: Jason Gunthorpe, Sebastian Andrzej Siewior
  Cc: julia.cartwright-acOepvfBmUk,
	linux-rt-users-u79uwXL29TY76Z2rM5mHXA,
	harisokn-Re5JQEeQqe8AvxtiuMwx3w,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	jarkko.sakkinen-VuQAYsv1563Yd54FQh9/CA,
	eric.gardiner-acOepvfBmUk, jonathan.david-acOepvfBmUk,
	tpmdd-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f,
	scott.hartman-acOepvfBmUk, chris.graf-acOepvfBmUk,
	gratian.crisan-acOepvfBmUk, brad.mouring-acOepvfBmUk

Neither wmb() nor mb() has any effect when substituted for 
ioread8(iobase + TPM_ACCESS(0)) in tpm_tis_flush(). I still see 300-400 us 
spikes in cyclictest when invoking my TPM chip's RNG.

-- Haris


On 08/17/2017 12:17 PM, Jason Gunthorpe wrote:
> On Thu, Aug 17, 2017 at 12:38:07PM +0200, Sebastian Andrzej Siewior wrote:
> 
>>> I worry a bit about "appears to fix".  It seems odd that the TPM device
>>> driver would be the first code to uncover this.  Can anyone confirm that the
>>> chipset does indeed have this bug?
>>
>> What Haris says makes sense. It is just that not all architectures
>> accumulate/batch writes to HW.
> 
> It doesn't seem that odd to me.. In modern Intel chipsets the physical
> LPC bus is used for very little. Maybe some flash and possibly a
> winbond super IO at worst?  Plus the TPM.
> 
> I can't confirm what Intel has done, but if writes are posted, then it
> is not a 'bug', but expected operation for a PCI/LPC bridge device to
> have an ordered queue of posted writes, and thus higher latency when
> processing reads due to ordering requirements.
> 
> Other drivers may not see it because most LPC usages would not be
> write heavy, or might use IO instructions which are not posted..
> 
> I can confirm that my ARM systems with a custom PCI-LPC bridge will
> have exactly the same problem, and that the readl is the only
> solution.
> 
> This is because writes to LPC are posted over PCI and will be buffered
> in the root complex, device end port and internally in the LPC
> bridge. Since they are posted there is no way for the CPU to know when
> they complete and when it would be 'low latency' to issue a read.
> 
>> So powerpc (for instance) has a sync operation after each write to HW. I
>> am wondering if we could need something like that on x86.
> 
> Even on something like PPC 'sync' is not defined to globally flush
> posted writes, and will not help. WMB is probably similar.
> 
> Jason
> 

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [tpmdd-devel] [PATCH v2] tpm_tis: fix stall after iowrite*()s
  2017-08-17 17:17           ` Jason Gunthorpe
       [not found]             ` <20170817171732.GA22792-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
@ 2017-08-19 17:03             ` Jarkko Sakkinen
  1 sibling, 0 replies; 10+ messages in thread
From: Jarkko Sakkinen @ 2017-08-19 17:03 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: Sebastian Andrzej Siewior, Ken Goldman, Haris Okanovic,
	linux-rt-users, linux-kernel, tpmdd-devel, harisokn,
	julia.cartwright, gratian.crisan, scott.hartman, chris.graf,
	brad.mouring, jonathan.david, peterhuewe, tpmdd, eric.gardiner

On Thu, Aug 17, 2017 at 11:17:32AM -0600, Jason Gunthorpe wrote:
> On Thu, Aug 17, 2017 at 12:38:07PM +0200, Sebastian Andrzej Siewior wrote:
> 
> > > I worry a bit about "appears to fix".  It seems odd that the TPM device
> > > driver would be the first code to uncover this.  Can anyone confirm that the
> > > chipset does indeed have this bug?
> > 
> > What Haris says makes sense. It is just that not all architectures
> > accumulate/batch writes to HW.
> 
> It doesn't seem that odd to me.. In modern Intel chipsets the physical
> LPC bus is used for very little. Maybe some flash and possibly a
> winbond super IO at worst?  Plus the TPM.
> 
> I can't confirm what Intel has done, but if writes are posted, then it
> is not a 'bug', but expected operation for a PCI/LPC bridge device to
> have an ordered queue of posted writes, and thus higher latency when
> processing reads due to ordering requirements.
> 
> Other drivers may not see it because most LPC usages would not be
> write heavy, or might use IO instructions which are not posted..
> 
> I can confirm that my ARM systems with a custom PCI-LPC bridge will
> have exactly the same problem, and that the readl is the only
> solution.
> 
> This is because writes to LPC are posted over PCI and will be buffered
> in the root complex, device end port and internally in the LPC
> bridge. Since they are posted there is no way for the CPU to know when
> they complete and when it would be 'low latency' to issue a read.

What you say makes sense to me, but wasn't the patch tried with an SPI TPM,
not LPC? Anyway, what you're saying probably still applies.

/Jarkko

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2017-08-19 17:03 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <20170804215651.29247-1-haris.okanovic@ni.com>
2017-08-14 22:53 ` [PATCH] tpm_tis: fix stall after iowrite*()s Haris Okanovic
2017-08-15  6:11   ` Alexander Stein
2017-08-15 20:10     ` Haris Okanovic
     [not found] ` <20170804215651.29247-1-haris.okanovic-acOepvfBmUk@public.gmane.org>
2017-08-15 20:13   ` [PATCH v2] " Haris Okanovic
2017-08-16 21:15     ` [tpmdd-devel] " Ken Goldman
2017-08-17  5:57       ` Alexander Stein
2017-08-17 10:38       ` Sebastian Andrzej Siewior
     [not found]         ` <20170817103807.ubrbylnud6wxod3s-hfZtesqFncYOwBW4kG4KsQ@public.gmane.org>
2017-08-17 17:17           ` Jason Gunthorpe
     [not found]             ` <20170817171732.GA22792-ePGOBjL8dl3ta4EC/59zMFaTQe2KTcn/@public.gmane.org>
2017-08-17 20:12               ` Haris Okanovic
2017-08-19 17:03             ` [tpmdd-devel] " Jarkko Sakkinen
