From: David Tolnay <dtolnay@gmail.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Peter Huewe <peterhuewe@gmx.de>,
	Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	linux-integrity@vger.kernel.org, Jason Wang <jasowang@redhat.com>,
	virtualization@lists.linux-foundation.org, dgreid@chromium.org,
	apronin@chromium.org
Subject: Re: [PATCH] tpm: Add driver for TPM over virtio
Date: Fri, 22 Feb 2019 17:23:48 -0800
Message-ID: <ba2bb76f-461f-147e-8081-42f231adefda@gmail.com>
In-Reply-To: <20190222170817-mutt-send-email-mst@kernel.org>

On 2/22/19 2:24 PM, Michael S. Tsirkin wrote:
> On Fri, Feb 22, 2019 at 01:40:25PM -0800, David Tolnay wrote:
>> On 2/21/19 9:51 PM, Michael S. Tsirkin wrote:
>>>
>>> Maintainer entry?
>>
>> Good call, I will add one.
>>
>> Could you let me know what workflow would work best for you? I can direct
>> patches toward Chrome OS's kernel tree and only a Chrome OS list, then send them
>> your way only once they have been reviewed and accepted there. Or I can Cc one
>> or both of the linux-integrity@vger.kernel.org and
>> virtualization@lists.linux-foundation.org lists along with a list for Chrome OS
>> reviewers.
>>
>> Either way, we'll want eyes from people who know virtio and from people who know
>> TPM on most changes.
> 
> Well, if you are changing a host/guest interface then you also need to copy
> virtio-dev. That one is subscriber-only, so that would
> imply sending there after it's reviewed in Chrome OS.
> As an extra bonus, reviewed code is hopefully higher quality,
> so less review work for me ;)
> So the first option sounds like a better plan.
> 
> But I hope "accepted" above just means it goes on some branch,
> such that we won't end up with production code that
> is set in stone and needs to be maintained forever?

Understood. We will avoid considering anything canonical before
upstream has a chance to weigh in. Thanks for the caution.


>>>> + *
>>>> + * The intended hypervisor-side implementation is as follows.
>>>> + *
>>>> + *     while true:
>>>> + *         await next virtio buffer.
>>>> + *         expect first descriptor in chain to be guest-to-host.
>>>> + *         read tpm command from that buffer.
>>>> + *         synchronously perform TPM work determined by the command.
>>>> + *         expect second descriptor in chain to be host-to-guest.
>>>> + *         write TPM response into that buffer.
>>>> + *         place buffer on virtio used queue indicating how many bytes written.
>>>
>>> That's fine, I think, except that generally it should be legal for the guest
>>> to split each buffer into several segments.
>>
>> Acknowledged. I will adjust this comment.
>>
>> To clarify, this means the hypervisor will need to accept a single descriptor
>> chain consisting of one or more guest-to-host descriptors which together form
>> the command, followed by one or more host-to-guest descriptors into which the
>> response may be written. Is that what you had in mind?
> 
> Exactly.
> 
>> TPM commands and responses are both capped at a moderate size (4096 bytes). With
>> that in mind, is it still necessary to allow split buffers? We won't be dealing
>> with multi-megabyte transfers.
> 
> This has been a general virtio rule, yes. See for example "2.6.4 Message
> Framing" in the v1.1 draft.
> 
> It's generally not hard to implement and generally easier than arguing about
> whether it's necessary. If you think it's a big problem, it's a good
> idea to take it up with the virtio TC. Address this to the virtio-comment
> list.

I appreciate the explanation. As you said, it is easy to implement,
so I'll fix this comment to allow for buffers split into segments. I
didn't know about this general virtio rule, but it makes sense to me!
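
To make sure we implement the same framing, here is a minimal sketch
of the host-side gather I have in mind. The iovec-based interface and
the gather_command helper are hypothetical (our device model has its
own descriptor walker), but the idea is to linearize however many
guest-to-host segments the guest chose to use:

    #include <stddef.h>
    #include <string.h>
    #include <sys/uio.h>

    #define TPM_MAX_COMMAND 4096

    /* Copy a TPM command that may span several guest-to-host
     * segments into one contiguous buffer, truncating at the
     * protocol's maximum command size. */
    static size_t gather_command(const struct iovec *out_iov,
                                 size_t n_segments,
                                 unsigned char cmd[TPM_MAX_COMMAND])
    {
        size_t copied = 0;

        for (size_t i = 0; i < n_segments; i++) {
            size_t len = out_iov[i].iov_len;

            if (len > TPM_MAX_COMMAND - copied)
                len = TPM_MAX_COMMAND - copied;
            memcpy(cmd + copied, out_iov[i].iov_base, len);
            copied += len;
            if (copied == TPM_MAX_COMMAND)
                break;
        }
        return copied;
    }

The response side would be symmetric: scatter the response bytes
across the host-to-guest segments and report the total number of
bytes written in the used ring entry.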


>>>> +/*
>>>> + * Timeout duration when waiting on the hypervisor to complete its end of the
>>>> + * TPM operation. This timeout is relatively high because certain TPM operations
>>>> + * can take dozens of seconds.
>>>> + */
>>>> +#define TPM_VIRTIO_TIMEOUT (120 * HZ)
>>>
>>> Should we read this from the device? E.g. a busy hypervisor might
>>> take longer to respond ...
>>
>> That's true. Do you have an example of an existing virtio driver that reads a
>> timeout from the hypervisor?
>>
>> To understand your perspective, do you see one of the following as being the
>> bigger advantage? -- either that we can use a timeout tighter than 120s for
>> hypervisors that believe they can respond quickly to any command, or that we can
>> relax the timeout beyond 120s for an especially overburdened hypervisor. The
>> first is nice because the caller in the guest will receive their error code
>> sooner in certain failure modes, but maybe that would be better addressed by
>> some other way of poking the host to find out whether it is still alive and
>> working. For the second, 120s is pretty generous and in our use case we would
>> just want to give up and retry much later at that point; I don't know how much
>> further it would be reasonable to grow a timeout.
> 
> I think the whole idea around timeout handling needs a bit more
> thought. What kind of reasons for the timeout do you envision
> that require the extra kicks?

The hesitation I had with responding to timeouts more aggressively
than by just kicking again was that we don't want to end up in a
situation where the guest driver performs some kind of reset while
the hypervisor is still reading or writing one of the driver's
buffers. If the timeout triggers a full remove and probe, including
kfree on the vtpm_device containing the buffer, that seems likely to
lead to memory corruption in the guest.

As you commented elsewhere, it sounds like NEEDS_RESET is the way to
go. If the guest driver times out and observes that the NEEDS_RESET
bit is set, then we can assume the hypervisor is no longer writing
guest memory, so a reset is safe.
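
Concretely, I imagine the timeout path checking the status byte
before any teardown, along these lines (a minimal sketch; the
vtpm_needs_reset helper is hypothetical, while get_status and
VIRTIO_CONFIG_S_NEEDS_RESET are the existing virtio interfaces):

    #include <linux/virtio.h>
    #include <linux/virtio_config.h>

    /* After a command times out, check whether the device has
     * signaled NEEDS_RESET. Only in that case do we assume the
     * host is done touching our buffers and proceed to reset. */
    static bool vtpm_needs_reset(struct virtio_device *vdev)
    {
        u8 status = vdev->config->get_status(vdev);

        return status & VIRTIO_CONFIG_S_NEEDS_RESET;
    }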

Do you have guidance for how to safely handle a timeout in which
NEEDS_RESET has not been set?
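
Separately, if we do end up reading the timeout from the device as
you suggested, I would expect the config read to look roughly like
this (a sketch only; struct virtio_tpm_config, its timeout_sec
field, and the VIRTIO_TPM_F_TIMEOUT feature bit are all hypothetical,
while virtio_has_feature and virtio_cread are the existing helpers):

    #include <linux/virtio.h>
    #include <linux/virtio_config.h>

    #define VIRTIO_TPM_F_TIMEOUT 0  /* hypothetical feature bit */

    struct virtio_tpm_config {
        __u32 timeout_sec;          /* hypothetical config field */
    };

    static unsigned long vtpm_timeout_jiffies(struct virtio_device *vdev)
    {
        u32 timeout_sec = 120;      /* current hardcoded default */

        /* Only touch config space if the device advertises the
         * (hypothetical) timeout feature. */
        if (virtio_has_feature(vdev, VIRTIO_TPM_F_TIMEOUT))
            virtio_cread(vdev, struct virtio_tpm_config,
                         timeout_sec, &timeout_sec);
        return timeout_sec * HZ;
    }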
