From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: Ram Pai <linuxram@us.ibm.com>
Cc: andmike@us.ibm.com, mst@redhat.com, mdroth@linux.vnet.ibm.com,
linux-kernel@vger.kernel.org, ram.n.pai@gmail.com, cai@lca.pw,
tglx@linutronix.de, sukadev@linux.vnet.ibm.com,
linuxppc-dev@lists.ozlabs.org, hch@lst.de,
bauerman@linux.ibm.com, david@gibson.dropbear.id.au
Subject: Re: [PATCH v4 1/2] powerpc/pseries/iommu: Share the per-cpu TCE page with the hypervisor.
Date: Tue, 3 Dec 2019 15:24:37 +1100 [thread overview]
Message-ID: <a0f19e65-81eb-37bd-928b-7a57a8660e3d@ozlabs.ru> (raw)
In-Reply-To: <20191203040509.GB12354@oc0525413822.ibm.com>
On 03/12/2019 15:05, Ram Pai wrote:
> On Tue, Dec 03, 2019 at 01:15:04PM +1100, Alexey Kardashevskiy wrote:
>>
>>
>> On 03/12/2019 13:08, Ram Pai wrote:
>>> On Tue, Dec 03, 2019 at 11:56:43AM +1100, Alexey Kardashevskiy wrote:
>>>>
>>>>
>>>> On 02/12/2019 17:45, Ram Pai wrote:
>>>>> H_PUT_TCE_INDIRECT hcall uses a page filled with TCE entries, as one of
>>>>> its parameters. One page is dedicated per cpu, for the lifetime of the
>>>>> kernel for this purpose. On secure VMs, contents of this page, when
>>>>> accessed by the hypervisor, retrieves encrypted TCE entries. Hypervisor
>>>>> needs to know the unencrypted entries, to update the TCE table
>>>>> accordingly. There is nothing secret or sensitive about these entries.
>>>>> Hence share the page with the hypervisor.
>>>>
>>>> This unsecures a page in the guest at a random location, which creates
>>>> an additional attack surface; it is admittedly hard to exploit, but it
>>>> is there nevertheless.
>>>> A safer option would be not to use the
>>>> hcall-multi-tce hypertas option (which translates to FW_FEATURE_MULTITCE
>>>> in the guest).
>>>
>>>
>>> Hmm... How do we not use it? AFAICT the hcall-multi-tce option gets used
>>> automatically when the IOMMU option is enabled.
>>
>> It is advertised by QEMU but the guest does not have to use it.
>
> Are you suggesting that even a normal guest should not use hcall-multi-tce,
> or just a secure guest?
Just secure.
>
>>
>>> This happens even
>>> on a normal VM when IOMMU is enabled.
>>>
>>>
>>>>
>>>> Also what is this for anyway?
>>>
>>> This is for sending indirect-TCE entries to the hypervisor.
>>> The hypervisor must be able to read those TCE entries, so that it can
>>> use those entries to populate the TCE table with the correct mappings.
>>>
>>>> if I understand things right, you cannot
>>>> map any random guest memory, you should only be mapping that 64MB-ish
>>>> bounce buffer array but 1) I do not see that happening (I may have
>>>> missed it) 2) it should be done once and it takes a little time for
>>>> whatever memory size we allow for bounce buffers anyway. Thanks,
>>>
>>> Any random guest memory can be shared by the guest.
>>
>> Yes but we do not want this to be this random.
>
> It is not sharing some random page. It is sharing a page that is
> earmarked for communicating TCE entries. Yes, the address of the page
> can be random, depending on where the allocator decides to allocate it.
> The purpose of the page is not random.
I was talking about the location.
> That page is used for one specific purpose; to communicate the TCE
> entries to the hypervisor.
>
>> I thought the whole idea
>> of swiotlb was to restrict the amount of shared memory to bare minimum,
>> what do I miss?
>
> I think you are making an incorrect connection between this patch and
> SWIOTLB. This patch has nothing to do with SWIOTLB.
I can see this and this is the confusing part.
>>
>>> Maybe you are confusing this with the SWIOTLB bounce buffers used by
>>> PCI devices, to transfer data to the hypervisor?
>>
>> Isn't this for pci+swiotlb?
>
>
> No. This patch is NOT for PCI+SWIOTLB. The SWIOTLB pages are a
> different set of pages allocated and earmarked for bounce buffering.
>
> This patch is purely to help the hypervisor set up the TCE table, in the
> presence of an IOMMU.
Then the hypervisor should be able to access the guest pages mapped for
DMA and these pages should be made unsecure for this to work. Where/when
does this happen?
>> The cover letter suggests it is for
>> virtio-scsi-_pci_ with iommu_platform=on which makes it a
>> normal pci device just like emulated XHCI. Thanks,
>
> Well, I guess the cover letter is probably confusing. There are two
> patches, which together enable virtio on secure guests in the presence
> of an IOMMU.
>
> The second patch enables virtio, in the presence of an IOMMU, to use the
> DMA ops + SWIOTLB infrastructure to correctly route I/O to virtio
> devices.
The second patch does nothing in relation to the problem being solved.
> However, that by itself won't work if the TCE entries are not correctly
> set up in the TCE tables. The first patch, i.e. this patch, helps
> accomplish that.
> Hope this clears up the confusion.
--
Alexey