LinuxPPC-Dev Archive on lore.kernel.org
From: Ram Pai <linuxram@us.ibm.com>
To: Alexey Kardashevskiy <aik@ozlabs.ru>
Cc: andmike@us.ibm.com, mst@redhat.com, mdroth@linux.vnet.ibm.com,
	linux-kernel@vger.kernel.org, ram.n.pai@gmail.com, cai@lca.pw,
	tglx@linutronix.de, sukadev@linux.vnet.ibm.com,
	linuxppc-dev@lists.ozlabs.org, hch@lst.de,
	bauerman@linux.ibm.com,
	David Gibson <david@gibson.dropbear.id.au>
Subject: RE: [PATCH v4 1/2] powerpc/pseries/iommu: Share the per-cpu TCE page with the hypervisor
Date: Fri, 6 Dec 2019 15:10:13 -0800
Message-ID: <20191206231013.GA5709@oc0525413822.ibm.com>
In-Reply-To: <c2dda233-2a11-a066-5d44-68e2a0b5121e@ozlabs.ru>

On Thu, Dec 05, 2019 at 09:26:14AM +1100, Alexey Kardashevskiy wrote:
> 
> 
> On 05/12/2019 07:42, Ram Pai wrote:
.snip...
> >>>> Do you still think a secure VM should use H_PUT_TCE and not
> >>>> H_PUT_TCE_INDIRECT, and a normal VM should use H_PUT_TCE_INDIRECT?
> >>>> Is there any advantage to special-casing it for secure VMs?
> >>>
> >>>
> >>> Reducing the amount of insecure memory at random locations.
> >>
> >> The other approach we could use for that, which would still allow
> >> H_PUT_TCE_INDIRECT, would be to allocate the TCE buffer page from the
> >> same pool that we use for the bounce buffers.  I assume there must
> >> already be some sort of allocator for that?
> > 
> > The allocator for swiotlb is buried deep in the swiotlb code; it is
> > not exposed to the world outside swiotlb. We would have to do major
> > surgery to expose it.
> > 
> > I was thinking: maybe we share the page, finish the H_PUT_TCE_INDIRECT
> > call, and unshare the page.  This will address Alexey's concern about
> > having shared pages at random locations, and will also give me my
> > performance optimization.  Alexey: ok?
> 
> 
> I really do not see the point. I really think we should do 1:1 mapping
> of swiotlb buffers using the default 32-bit window using H_PUT_TCE, and
> this should be more than enough. I do not think the amount of code will
> be dramatically different compared to unsecuring and securing a page or
> using one of the swiotlb pages for this purpose. Thanks,

OK. There are three different issues to be addressed here.

(a) How do we put the TCE entries into the TCE table? Should we use
    H_PUT_TCE, or H_PUT_TCE_INDIRECT?

(b) How much of the guest memory must be mapped in the TCE table? Should
    it be the entire guest memory, or only the memory used by the swiotlb?

(c) Which mapping window should be used? The 64-bit DDW, or the default
    32-bit window?

Regardless of how we resolve issues (b) and (c), we still need to
resolve (a).  The main concern you have with H_PUT_TCE_INDIRECT is
that random pages get permanently shared, something that can be
exploited and cause security issues.  I tend to agree this is
possible, though I am not sure how.  But yes, we need to address this
concern, since security is paramount for Secure Virtual Machines.

The ways to resolve (a) are:

(i)   grab a page from the swiotlb pool and use H_PUT_TCE_INDIRECT,
	OR
(ii)  simply use H_PUT_TCE,
	OR
(iii) share the page prior to H_PUT_TCE_INDIRECT, and unshare the
	page once done.

Solution (i) is a layering violation, as Christoph alluded to in his
previous reply. The swiotlb buffers are meant for I/O and DMA-related
activity; we would be abusing these swiotlb pages to communicate TCE
entries to the hypervisor.  Secondly, the IOMMU code has no idea where
its pages are sourced from, and should not need to know either.  I am
uncomfortable going this route.  There is some upstream discussion
about having a separate pool of shared pages on a secure VM,
https://lkml.org/lkml/2019/11/14/381.  That solution, when ready, may
be leveraged here.

I have coded solution (ii) and it works, but the boot path slows down
significantly: it generates a huge number of H_PUT_TCE hcalls, which
hurts badly.
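
For reference, solution (ii) boils down to a loop like the one below,
modeled on tce_build_pSeriesLP() in arch/powerpc/platforms/pseries/iommu.c
(a sketch only; the function name here is made up and error handling
is elided):

/*
 * Solution (ii): one H_PUT_TCE hcall per IOMMU page.  Nothing is ever
 * shared with the hypervisor, but the hcall-per-page cost is what
 * slows the boot path down.
 */
static void map_tces_one_by_one(unsigned long liobn, long tcenum,
				long npages, unsigned long uaddr,
				u64 proto_tce)
{
	u64 rpn, tce;

	while (npages--) {
		rpn = __pa(uaddr) >> TCE_SHIFT;
		tce = proto_tce | (rpn & TCE_RPN_MASK) << TCE_RPN_SHIFT;
		plpar_tce_put((u64)liobn, (u64)tcenum << 12, tce);
		tcenum++;
		uaddr += TCE_PAGE_SIZE;
	}
}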

I strongly think solution (iii) is the right way to go. I have coded
it, it works, and the boot path is much faster.  However, I am not
sure whether you still have a concern with this solution.
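
To make (iii) concrete, the core of it looks roughly like this (again
a sketch: the wrapper name is made up, but uv_share_page() and
uv_unshare_page() are the existing ultravisor helpers from
asm/ultravisor.h, and plpar_tce_put_indirect() is the existing wrapper
from plpar_wrappers.h):

/*
 * Solution (iii): transiently share the page holding the TCE list
 * with the hypervisor around the H_PUT_TCE_INDIRECT hcall, then
 * immediately unshare it, so no page stays shared at a random
 * location.
 */
static long put_tces_shared(unsigned long liobn, unsigned long ioba,
			    __be64 *tcep, unsigned long npages)
{
	unsigned long pfn = PHYS_PFN(__pa(tcep));
	long rc;

	rc = uv_share_page(pfn, 1);	/* expose the TCE list page */
	if (rc)
		return rc;

	rc = plpar_tce_put_indirect(liobn, ioba, __pa(tcep), npages);

	uv_unshare_page(pfn, 1);	/* pull it back into secure memory */

	return rc;
}

The share/unshare ucalls add a fixed cost per H_PUT_TCE_INDIRECT, but
one pair per call (which covers up to 512 TCEs) is still far cheaper
than 512 individual H_PUT_TCE hcalls.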

In any case, I am sending my next version of the patch based on solution
(iii).

Once this is addressed, I will address (b) and (c).

RP

