From: Jason Gunthorpe <jgg@ziepe.ca>
To: Tom Lendacky <thomas.lendacky@amd.com>
Cc: "Kalra, Ashish" <ashish.kalra@amd.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	linux-kernel@vger.kernel.org, iommu@lists.linux.dev,
	joro@8bytes.org, robin.murphy@arm.com, vasant.hegde@amd.com,
	jon.grimm@amd.com
Subject: Re: [PATCH 1/4] iommu/amd: Introduce Protection-domain flag VFIO
Date: Fri, 20 Jan 2023 20:09:04 -0400
Message-ID: <Y8stIO6oAw8wEgEI@ziepe.ca>
In-Reply-To: <1e1e98ea-5215-be21-b732-58c67e9c8fd6@amd.com>

On Fri, Jan 20, 2023 at 04:42:26PM -0600, Tom Lendacky wrote:
> On 1/20/23 13:55, Kalra, Ashish wrote:
> > On 1/20/2023 11:50 AM, Jason Gunthorpe wrote:
> > > On Fri, Jan 20, 2023 at 11:01:21AM -0600, Kalra, Ashish wrote:
> > > 
> > > > We basically get the RMP #PF from the IOMMU because there is a page size
> > > > mismatch between the RMP table and the IOMMU page table. The RMP table's
> > > > large page entry has been smashed to 4K PTEs to handle page
> > > > state change to
> > > > shared on 4K mappings, so this change has to be synced up with the IOMMU
> > > > page table, otherwise there is now a page size mismatch between RMP table
> > > > and IOMMU page table which causes the RMP #PF.
> > > 
> > > I understand that, you haven't answered my question:
> > > 
> > > Why is the IOMMU being programmed with pages it cannot access in the
> > > first place?
> > > 
> > 
> > I believe the IOMMU page tables are setup as part of device pass-through
> > to be able to do DMA to all of the guest memory, but I am not an IOMMU
> > expert, so I will let Suravee elaborate on this.
> 
> Right. And what I believe Jason is saying is that, for SNP, since we know we
> can only DMA into shared pages, there is no need to set up the initial IOMMU
> page tables for all of guest memory. Instead, wait and set up IOMMU mappings
> when we do a page state change to shared and remove any mappings when we do
> a page state change to private.

Correct.
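The flow Tom describes could be sketched like this (a toy model with invented names, not the actual kernel iommu_map()/iommu_unmap() call chain): the IOMMU mapping for a guest page exists only while that page is in the shared state.

```c
/* Toy model of "map on page-state change" (invented names, not real
 * kernel API). The IOMMU only ever holds mappings for pages the guest
 * has made shared, so it can never touch a private page. */
#include <stdbool.h>
#include <stddef.h>

#define NPAGES 8

static bool iommu_mapped[NPAGES]; /* stands in for the IOMMU page table */

/* Page-state change to shared: install a mapping for this pfn. */
static void psc_to_shared(size_t pfn)
{
	iommu_mapped[pfn] = true;
}

/* Page-state change to private: tear the mapping down again. */
static void psc_to_private(size_t pfn)
{
	iommu_mapped[pfn] = false;
}
```

Nothing is mapped until the guest shares a page (e.g. a bounce buffer), and the mapping disappears as soon as the page goes back to private.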

I don't know the details of how shared/private works on AMD, e.g. if
the hypervisor even knows of the private/shared transitions..

At the very worst, I suppose if you just turn on the vIOMMU it should
just start working, as the vIOMMU mode should make the paging
dynamic; e.g. virtio-iommu or something general might even do the job.

Pinning pages is expensive, breaks swap and defragmentation, and
wastes a lot of iommu memory. Given that the bounce buffers really
shouldn't be reallocated constantly, I'd expect vIOMMU to be
performance OK.

This solves the page size mismatch issue because the iommu never has
a PFN installed that would generate an RMP #PF. The iommu can continue
to use large page sizes whenever possible.
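A minimal illustration of the mismatch condition (a toy predicate, not the hardware's actual RMP walk): the fault case is an IOMMU mapping whose page size is larger than the granularity of the RMP entries covering it, e.g. a 2M IOMMU PTE over an RMP large page that was smashed to 4K entries.

```c
/* Toy predicate for the RMP/IOMMU page-size mismatch (not the real
 * hardware check): a DMA walk faults with an RMP #PF when the IOMMU
 * mapping is larger than the RMP entry granularity for that range. */
#define SZ_4K (4UL * 1024)
#define SZ_2M (2UL * 1024 * 1024)

static int rmp_size_mismatch(unsigned long iommu_map_size,
			     unsigned long rmp_entry_size)
{
	return iommu_map_size > rmp_entry_size;
}
```

If the IOMMU never maps pages that could become private, the smashed-to-4K case never coexists with a large IOMMU PTE and the predicate never fires.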

It seems to me the current approach of just stuffing all memory into
the iommu is just a shortcut to get something working.

Otherwise, what I said before is still the case: only the VMM knows
what it is doing. It knows if it is using a model where it programs
private memory into the IOMMU so it knows if it should ask the kernel
to use a 4k iommu page size.

Trying to have the kernel guess what userspace is doing based on the
kvm is simply architecturally wrong.

Jason


Thread overview: 24+ messages
2023-01-10 14:31 [PATCH 0/4] iommu/amd: Force SNP-enabled VFIO domain to 4K page size Suravee Suthikulpanit
2023-01-10 14:31 ` [PATCH 1/4] iommu/amd: Introduce Protection-domain flag VFIO Suravee Suthikulpanit
2023-01-11  3:31   ` kernel test robot
2023-01-13 15:33   ` Jason Gunthorpe
2023-01-19  8:54     ` Kalra, Ashish
2023-01-19 17:44       ` Jason Gunthorpe
2023-01-20 15:12         ` Kalra, Ashish
2023-01-20 16:13           ` Jason Gunthorpe
2023-01-20 17:01             ` Kalra, Ashish
2023-01-20 17:50               ` Jason Gunthorpe
2023-01-20 19:55                 ` Kalra, Ashish
2023-01-20 22:42                   ` Tom Lendacky
2023-01-21  0:09                     ` Jason Gunthorpe [this message]
2023-01-10 14:31 ` [PATCH 2/4] iommu/amd: Introduce structure amd_iommu_svm_ops.is_snp_guest() Suravee Suthikulpanit
2023-01-10 14:31 ` [PATCH 3/4] iommu: Introduce IOMMU call-back for processing struct KVM assigned to VFIO Suravee Suthikulpanit
2023-01-10 15:11   ` Robin Murphy
2023-01-17  4:20     ` Suthikulpanit, Suravee
2023-01-17 12:51       ` Robin Murphy
2023-01-13 15:35   ` Jason Gunthorpe
2023-01-17  5:31     ` Suthikulpanit, Suravee
2023-01-17 14:19       ` Jason Gunthorpe
2023-01-10 14:31 ` [PATCH 4/4] iommu/amd: Force SNP-enabled VFIO domain to 4K page size Suravee Suthikulpanit
2023-01-17 13:10   ` Eric van Tassell
2023-01-16 13:17 ` [PATCH 0/4] " Eric van Tassell
