From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: Dev Audsin <dev.devaqemu@gmail.com>, qemu-devel@nongnu.org
Subject: Re: [PATCH] make vfio and DAX cache work together
Date: Tue, 27 Apr 2021 20:00:42 +0100
Message-ID: <YIhfWoRgJtaKZhh8@work-vm>
In-Reply-To: <20210427121850.68d2a8dd@redhat.com>

* Alex Williamson (alex.williamson@redhat.com) wrote:
> On Tue, 27 Apr 2021 17:29:37 +0100
> Dev Audsin <dev.devaqemu@gmail.com> wrote:
> 
> > Hi Alex
> > 
> > Based on your comments, and after thinking a bit, I wonder if it makes
> > sense to allow the DMA map for the DAX cache but treat unexpected
> > mapping failures as non-fatal.  Please let me know your thoughts.
> 
> I think you're still working on the assumption that simply making the
> VM boot is an improvement; it's not.  If there's a risk that a possible
> DMA target for the device cannot be mapped, it's better that the VM
> fail to boot than to expose that risk.  Performance cannot compromise
> correctness.
> 
> We do allow DMA mappings to other device memory regions to fail
> non-fatally with the logic that peer-to-peer DMA is often not trusted
> to work by drivers and therefore support would be probed before
> assuming that it works.  I don't think that same logic applies here.
> 
> Is there something about the definition of this particular region that
> precludes it from being a DMA target for an assigned device?

It's never really the RAM that's used.
This area is really a chunk of VMA that's mmap'd over by (chunks of)
normal files in the underlying exported filesystem.  The actual RAM
block itself is just a placeholder for the VMA, and is normally mapped
PROT_NONE until an actual file is mapped on top of it.
That cache bar is a mapping containing multiple separate file-chunk
mappings.

So I guess the problems for VFIO are:
  a) At the start it's unmapped, inaccessible, unallocated RAM.
  b) Later it's arbitrary chunks of on-disk files.

[on a bad day, and it's bad even without vfio, someone truncates the
file mapping]

Dave

> Otherwise if it's initially unpopulated, maybe something like the
> RamDiscardManager could be used to insert DMA mappings as the region
> becomes populated.
> 
> Simply disabling mapping to boot with both features together, without
> analyzing how that missing mapping affects their interaction, is not
> acceptable.  Thanks,
> 
> Alex
> 
> > On Mon, Apr 26, 2021 at 10:22 PM Alex Williamson
> > <alex.williamson@redhat.com> wrote:
> > >
> > > On Mon, 26 Apr 2021 21:50:38 +0100
> > > Dev Audsin <dev.devaqemu@gmail.com> wrote:
> > >  
> > > > Hi Alex and David
> > > >
> > > > @Alex:
> > > >
> > > > Justification on why this region cannot be a DMA target for the device,
> > > >
> > > > virtio-fs with DAX is currently not compatible with NIC pass-through.
> > > > When an SR-IOV VF attaches to a QEMU process, vfio tries to pin the
> > > > entire DAX window, but the window is empty when the guest boots, so
> > > > the pin fails.  One way to make VFIO and DAX work together is to have
> > > > vfio skip the DAX cache.
> > > >
> > > > Currently the DAX cache needs to be set to 0 for an SR-IOV VF to be
> > > > attached to Kata Containers.  Enabling SR-IOV VFs and DAX together
> > > > will potentially improve performance for workloads that are both
> > > > I/O and network intensive.  
> > >
> > > Sorry, there's no actual justification described here.  You're enabling
> > > a VM with both features, virtio-fs DAX and VFIO, but there's no
> > > evidence that they "work together" or that your use case is simply
> > > avoiding a scenario where the device might attempt to DMA into the area
> > > with this designation.  With this change, if the device were to attempt
> > > to DMA into this region, it would be blocked by the IOMMU, which might
> > > result in data loss within the VM.  Justification of this change
> > > needs to prove that this region can never be a DMA target for the
> > > device, not simply that both features can be enabled and we hope that
> > > they don't interact.  Thanks,
> > >
> > > Alex
> > >  
> > 
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



Thread overview: 17+ messages
2021-04-26 20:50 [PATCH] make vfio and DAX cache work together Dev Audsin
2021-04-26 21:22 ` Alex Williamson
2021-04-27 16:29   ` Dev Audsin
2021-04-27 18:18     ` Alex Williamson
2021-04-27 19:00       ` Dr. David Alan Gilbert [this message]
2021-04-28 12:04         ` Dev Audsin
2021-04-28 19:17           ` Dr. David Alan Gilbert
2021-04-28 19:37             ` Alex Williamson
2021-04-29  8:44               ` Dr. David Alan Gilbert
2021-04-29 13:09                 ` Alex Williamson
2021-04-29 17:55                   ` Dr. David Alan Gilbert
2021-04-30 17:41                     ` Dev Audsin
2021-05-04  9:58                       ` Dr. David Alan Gilbert
