* Question about the TPM driver
@ 2018-09-13 13:14 Martin Galvan
  2018-09-13 14:22 ` James Bottomley
  2018-09-16 19:16 ` Jarkko Sakkinen
  0 siblings, 2 replies; 12+ messages in thread
From: Martin Galvan @ 2018-09-13 13:14 UTC (permalink / raw)
  To: linux-integrity, jarkko.sakkinen

Hi all,

I noticed that, after a command is done, the TPM driver only allows
for a single read operation (for a limited time). If I wanted to e.g.
check a command's response code before attempting to read the rest of
the response, my next read would fail. Same happens if I take too long
to read a command's results.

I can understand the timeout, but I'm curious about this single-read
policy, especially since some commands such as GetCapability don't
guarantee how much data will actually be returned. Does anyone know
why it was implemented this way?
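
For concreteness, here's roughly the pattern I had in mind (just a
sketch on my part, not working code from any TSS; the second read is
the one that fails with the current driver):

#include <endian.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* TPM2 response header: tag (2 bytes) + responseSize (4) + responseCode (4) */
#define TPM_HEADER_SIZE 10

static int read_response(int fd, uint8_t *buf, size_t buflen)
{
	uint32_t size, rc;

	/* First read: just the header, to check the response code. */
	if (read(fd, buf, TPM_HEADER_SIZE) != TPM_HEADER_SIZE)
		return -1;

	memcpy(&size, buf + 2, sizeof(size));
	memcpy(&rc, buf + 6, sizeof(rc));
	size = be32toh(size);
	rc = be32toh(rc);

	if (rc != 0 || size < TPM_HEADER_SIZE || size > buflen)
		return -1;

	/* Second read: the rest of the response, sized from the header.
	 * Today this returns nothing, since the driver only allows one
	 * read per command. */
	if (size > TPM_HEADER_SIZE &&
	    read(fd, buf + TPM_HEADER_SIZE, size - TPM_HEADER_SIZE) !=
	    (ssize_t)(size - TPM_HEADER_SIZE))
		return -1;

	return 0;
}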

Thanks


* Re: Question about the TPM driver
  2018-09-13 13:14 Question about the TPM driver Martin Galvan
@ 2018-09-13 14:22 ` James Bottomley
  2018-09-13 14:31   ` Martin Galvan
  2018-10-11 15:17   ` Ken Goldman
  2018-09-16 19:16 ` Jarkko Sakkinen
  1 sibling, 2 replies; 12+ messages in thread
From: James Bottomley @ 2018-09-13 14:22 UTC (permalink / raw)
  To: Martin Galvan, linux-integrity, jarkko.sakkinen

On Thu, 2018-09-13 at 10:14 -0300, Martin Galvan wrote:
> Hi all,
> 
> I noticed that, after a command is done, the TPM driver only allows
> for a single read operation (for a limited time). If I wanted to e.g.
> check a command's response code before attempting to read the rest of
> the response, my next read would fail. Same happens if I take too
> long to read a command's results.
> 
> I can understand the timeout, but I'm curious about this single-read
> policy, especially since some commands such as GetCapability don't
> guarantee how much data will actually be returned. Does anyone know
> why it was implemented this way?

Internally the TPM has a single message buffer.  It can't be reused for
the next command until the previous response has been fully read out.
For efficiency's sake you should have a pending read as soon as the
command is sent, so the return of the read tells you the TPM has
finished and the TPM can get on to the next user.  The timeout is
merely a courtesy for the case where the command completes before you
can queue a read.  It's short by design; otherwise you give users the
ability to DoS the TPM.

Exact buffer size hasn't been thought to be an issue since you know
that any response will fit into MAX_RESPONSE_SIZE which the current
implementation defines to be 4096.
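
As a sketch of the usual user space pattern (not code from any
particular TSS; TPM_MAX_RESPONSE is just a local define matching that
4096): write the command, then do one read with a worst-case buffer
and let the read's return value tell you how long the response is.

#include <stdint.h>
#include <unistd.h>

#define TPM_MAX_RESPONSE 4096	/* worst case; matches the driver's buffer */

static ssize_t tpm_transmit(int fd, const uint8_t *cmd, size_t cmd_len,
			    uint8_t *rsp)
{
	/* fd is an open descriptor on /dev/tpm0 */
	if (write(fd, cmd, cmd_len) != (ssize_t)cmd_len)
		return -1;

	/* One read for the whole response; the return value is the
	 * actual response length, so no second read is needed. */
	return read(fd, rsp, TPM_MAX_RESPONSE);
}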

James


* Re: Question about the TPM driver
  2018-09-13 14:22 ` James Bottomley
@ 2018-09-13 14:31   ` Martin Galvan
  2018-09-13 15:11     ` Javier Martinez Canillas
  2018-10-11 15:17   ` Ken Goldman
  1 sibling, 1 reply; 12+ messages in thread
From: Martin Galvan @ 2018-09-13 14:31 UTC (permalink / raw)
  To: James.Bottomley; +Cc: linux-integrity, jarkko.sakkinen

Hi James, thanks for your answer.

On Thu, Sep 13, 2018 at 11:22, James Bottomley
(<James.Bottomley@hansenpartnership.com>) wrote:
>
> On Thu, 2018-09-13 at 10:14 -0300, Martin Galvan wrote:
> Exact buffer size hasn't been thought to be an issue since you know
> that any response will fit into MAX_RESPONSE_SIZE which the current
> implementation defines to be 4096.

Interesting, I didn't know about this max. I guess you're referring to
enum tpm_const's TPM_BUFSIZE constant, since I can't find this max in
the spec. In any case, what would be the effect of removing this
limitation and allowing a user to read the response in chunks rather
than having to allocate 4k every time? I don't think things would
break, though maybe I'm missing something.


* Re: Question about the TPM driver
  2018-09-13 14:31   ` Martin Galvan
@ 2018-09-13 15:11     ` Javier Martinez Canillas
  2018-09-13 15:21       ` Tadeusz Struk
  0 siblings, 1 reply; 12+ messages in thread
From: Javier Martinez Canillas @ 2018-09-13 15:11 UTC (permalink / raw)
  To: Martin Galvan, James.Bottomley
  Cc: linux-integrity, jarkko.sakkinen, Tadeusz Struk

[adding Tadeusz Struk to CC list]

On 9/13/18 4:31 PM, Martin Galvan wrote:
> Hi James, thanks for your answer.
> 
> On Thu, Sep 13, 2018 at 11:22, James Bottomley
> (<James.Bottomley@hansenpartnership.com>) wrote:
>>
>> On Thu, 2018-09-13 at 10:14 -0300, Martin Galvan wrote:
>> Exact buffer size hasn't been thought to be an issue since you know
>> that any response will fit into MAX_RESPONSE_SIZE which the current
>> implementation defines to be 4096.
> 
> Interesting, I didn't know about this max. I guess you're referring to
> enum tpm_const's TPM_BUFSIZE constant, since I can't find this max in
> the spec. In any case, what would be the effect of removing this
> limitation and allowing a user to read the response in chunks rather
> than having to allocate 4k every time? I don't think things would
> break, though maybe I'm missing something.
> 

There was an attempt to add partial read support some time ago [0],
but it was nacked due to it being an ABI break, IIRC.

[0]: https://lkml.org/lkml/2018/7/19/618

Best regards,
-- 
Javier Martinez Canillas
Software Engineer - Desktop Hardware Enablement
Red Hat


* Re: Question about the TPM driver
  2018-09-13 15:11     ` Javier Martinez Canillas
@ 2018-09-13 15:21       ` Tadeusz Struk
  2018-09-13 15:26         ` Martin Galvan
  0 siblings, 1 reply; 12+ messages in thread
From: Tadeusz Struk @ 2018-09-13 15:21 UTC (permalink / raw)
  To: Javier Martinez Canillas, Martin Galvan, James.Bottomley
  Cc: linux-integrity, jarkko.sakkinen

On 09/13/2018 08:11 AM, Javier Martinez Canillas wrote:
> There was an attempt to add partial read support some time ago [0],
> but it was nacked due to it being an ABI break, IIRC.
> 
> [0]: https://lkml.org/lkml/2018/7/19/618

I'm planning to give it another try, based on the feedback,
after the async patch is merged.
Thanks,
-- 
Tadeusz


* Re: Question about the TPM driver
  2018-09-13 15:21       ` Tadeusz Struk
@ 2018-09-13 15:26         ` Martin Galvan
  0 siblings, 0 replies; 12+ messages in thread
From: Martin Galvan @ 2018-09-13 15:26 UTC (permalink / raw)
  To: tadeusz.struk; +Cc: javierm, James.Bottomley, linux-integrity, jarkko.sakkinen

On Thu, Sep 13, 2018 at 12:21, Tadeusz Struk
(<tadeusz.struk@intel.com>) wrote:
> I'm planning to give it another try, based on the feedback,
> after the async patch is merged.
> Thanks,

That's great, thanks. Let me know if I can be of assistance.


* Re: Question about the TPM driver
  2018-09-13 13:14 Question about the TPM driver Martin Galvan
  2018-09-13 14:22 ` James Bottomley
@ 2018-09-16 19:16 ` Jarkko Sakkinen
  2018-09-17 13:32   ` Martin Galvan
  1 sibling, 1 reply; 12+ messages in thread
From: Jarkko Sakkinen @ 2018-09-16 19:16 UTC (permalink / raw)
  To: Martin Galvan; +Cc: linux-integrity

On Thu, Sep 13, 2018 at 10:14:30AM -0300, Martin Galvan wrote:
> Hi all,
> 
> I noticed that, after a command is done, the TPM driver only allows
> for a single read operation (for a limited time). If I wanted to e.g.
> check a command's response code before attempting to read the rest of
> the response, my next read would fail. Same happens if I take too long
> to read a command's results.
> 
> I can understand the timeout, but I'm curious about this single-read
> policy, especially since some commands such as GetCapability don't
> guarantee how much data will actually be returned. Does anyone know
> why it was implemented this way?
> 
> Thanks

I understand your concerns but without a concrete workload there is no
problem with this behavior.

/Jarkko


* Re: Question about the TPM driver
  2018-09-16 19:16 ` Jarkko Sakkinen
@ 2018-09-17 13:32   ` Martin Galvan
  2018-09-17 14:45     ` James Bottomley
  2018-09-17 21:20     ` Jarkko Sakkinen
  0 siblings, 2 replies; 12+ messages in thread
From: Martin Galvan @ 2018-09-17 13:32 UTC (permalink / raw)
  To: jarkko.sakkinen; +Cc: linux-integrity

On Sun, Sep 16, 2018 at 16:16, Jarkko Sakkinen
(<jarkko.sakkinen@linux.intel.com>) wrote:
> I understand your concerns but without a concrete workload there is no
> problem with this behavior.

IMHO it's a bit excessive to allocate 4k to end up storing less than
100 bytes. Beyond that, it's a pretty big gotcha for someone who's
writing software which talks to the driver :)


* Re: Question about the TPM driver
  2018-09-17 13:32   ` Martin Galvan
@ 2018-09-17 14:45     ` James Bottomley
  2018-09-17 21:23       ` Jarkko Sakkinen
  2018-09-17 21:20     ` Jarkko Sakkinen
  1 sibling, 1 reply; 12+ messages in thread
From: James Bottomley @ 2018-09-17 14:45 UTC (permalink / raw)
  To: Martin Galvan, jarkko.sakkinen; +Cc: linux-integrity

On Mon, 2018-09-17 at 10:32 -0300, Martin Galvan wrote:
> On Sun, Sep 16, 2018 at 16:16, Jarkko Sakkinen
> (<jarkko.sakkinen@linux.intel.com>) wrote:
> > I understand your concerns but without a concrete workload there is no
> > problem with this behavior.
> 
> IMHO it's a bit excessive to allocate 4k to end up storing less than
> 100 bytes. Beyond that, it's a pretty big gotcha for someone who's
> writing software which talks to the driver :)

It's what we do in the kernel, which is one of our most memory
constrained environments.

You have to remember that sub-page-size buffers aren't always managed
well at any level (they usually fragment the heap), so even in a
constrained memory environment a 4k buffer (4k aligned) is usually
preferable.
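
If you want the same shape in user space, something like
posix_memalign() (a sketch; the 4096 is hard-coded here purely for
illustration) gives you a page-sized, page-aligned buffer:

#include <stdlib.h>

static void *alloc_tpm_buffer(void)
{
	void *buf = NULL;

	/* 4k buffer on a 4k boundary */
	if (posix_memalign(&buf, 4096, 4096) != 0)
		return NULL;
	return buf;
}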

James


* Re: Question about the TPM driver
  2018-09-17 13:32   ` Martin Galvan
  2018-09-17 14:45     ` James Bottomley
@ 2018-09-17 21:20     ` Jarkko Sakkinen
  1 sibling, 0 replies; 12+ messages in thread
From: Jarkko Sakkinen @ 2018-09-17 21:20 UTC (permalink / raw)
  To: Martin Galvan; +Cc: linux-integrity

On Mon, Sep 17, 2018 at 10:32:18AM -0300, Martin Galvan wrote:
> On Sun, Sep 16, 2018 at 16:16, Jarkko Sakkinen
> (<jarkko.sakkinen@linux.intel.com>) wrote:
> > I understand your concerns but without a concrete workload there is no
> > problem with this behavior.
> 
> IMHO it's a bit excessive to allocate 4k to end up storing less than
> 100 bytes. Beyond that, it's a pretty big gotcha for someone who's writing
> software which talks to the driver :)

I remember the patch from Tadeusz now. Yeah, it does make sense,
assuming we can land a solution that does not cause ABI breaks.

/Jarkko


* Re: Question about the TPM driver
  2018-09-17 14:45     ` James Bottomley
@ 2018-09-17 21:23       ` Jarkko Sakkinen
  0 siblings, 0 replies; 12+ messages in thread
From: Jarkko Sakkinen @ 2018-09-17 21:23 UTC (permalink / raw)
  To: James Bottomley; +Cc: Martin Galvan, linux-integrity

On Mon, Sep 17, 2018 at 07:45:32AM -0700, James Bottomley wrote:
> On Mon, 2018-09-17 at 10:32 -0300, Martin Galvan wrote:
> > On Sun, Sep 16, 2018 at 16:16, Jarkko Sakkinen
> > (<jarkko.sakkinen@linux.intel.com>) wrote:
> > > I understand your concerns but without a concrete workload there is no
> > > problem with this behavior.
> > 
> > IMHO it's a bit excessive to allocate 4k to end up storing less than
> > 100 bytes. Beyond that, it's a pretty big gotcha for someone who's
> > writing software which talks to the driver :)
> 
> It's what we do in the kernel, which is one of our most memory
> constrained environments.
> 
> You have to remember that sub page size buffers aren't always managed
> the best at any level (they usually fragment the heap) so even in a
> constrained memory environment, a 4k buffer (4k aligned) is usually
> preferable.
> 
> James

If the commit is not too intrusive I could still consider merging it,
even if only to simplify implementing a TPM user space.

/Jarkko


* Re: Question about the TPM driver
  2018-09-13 14:22 ` James Bottomley
  2018-09-13 14:31   ` Martin Galvan
@ 2018-10-11 15:17   ` Ken Goldman
  1 sibling, 0 replies; 12+ messages in thread
From: Ken Goldman @ 2018-10-11 15:17 UTC (permalink / raw)
  To: linux-integrity

On 9/13/2018 10:22 AM, James Bottomley wrote:
> 
> Exact buffer size hasn't been thought to be an issue since you know
> that any response will fit into MAX_RESPONSE_SIZE which the current
> implementation defines to be 4096.

If a caller wishes to optimize, TPM 2.0 has these two GetCapability properties:

TPM_PT_MAX_COMMAND_SIZE - the maximum value for commandSize in a command
TPM_PT_MAX_RESPONSE_SIZE - the maximum value for responseSize in a response
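
For example, with the tpm2-tss ESAPI (a rough sketch, error handling
mostly omitted; assumes the Esys_GetCapability() call and the
TPM2_PT_MAX_* constants from that library):

#include <stdio.h>
#include <tss2/tss2_esys.h>

static int query_pt(ESYS_CONTEXT *ctx, TPM2_PT pt, UINT32 *value)
{
	TPMS_CAPABILITY_DATA *caps = NULL;
	TPMI_YES_NO more;

	/* One GetCapability call for a single TPM_PT_* property. */
	if (Esys_GetCapability(ctx, ESYS_TR_NONE, ESYS_TR_NONE, ESYS_TR_NONE,
			       TPM2_CAP_TPM_PROPERTIES, pt, 1, &more,
			       &caps) != TSS2_RC_SUCCESS)
		return -1;

	if (caps->data.tpmProperties.count < 1) {
		Esys_Free(caps);
		return -1;
	}

	*value = caps->data.tpmProperties.tpmProperty[0].value;
	Esys_Free(caps);
	return 0;
}

int main(void)
{
	ESYS_CONTEXT *ctx = NULL;
	UINT32 max_cmd, max_rsp;

	if (Esys_Initialize(&ctx, NULL, NULL) != TSS2_RC_SUCCESS)
		return 1;

	if (query_pt(ctx, TPM2_PT_MAX_COMMAND_SIZE, &max_cmd) == 0 &&
	    query_pt(ctx, TPM2_PT_MAX_RESPONSE_SIZE, &max_rsp) == 0)
		printf("max command %u, max response %u\n", max_cmd, max_rsp);

	Esys_Finalize(&ctx);
	return 0;
}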

