From: "Tian, Kevin" <kevin.tian@intel.com>
To: Chia-I Wu <olvaffe@gmail.com>,
	"Christopherson, Sean J" <sean.j.christopherson@intel.com>
Cc: Wanpeng Li <wanpengli@tencent.com>,
	kvm list <kvm@vger.kernel.org>, Joerg Roedel <joro@8bytes.org>,
	ML dri-devel <dri-devel@lists.freedesktop.org>,
	Gurchetan Singh <gurchetansingh@chromium.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Jim Mattson <jmattson@google.com>
Subject: RE: [RFC PATCH 0/3] KVM: x86: honor guest memory type
Date: Tue, 25 Feb 2020 01:29:09 +0000	[thread overview]
Message-ID: <AADFC41AFE54684AB9EE6CBC0274A5D19D79A7BE@SHSMSX104.ccr.corp.intel.com> (raw)
In-Reply-To: <CAPaKu7Qjnur=ntTXmGn7L38UaCoNjf6avWBk7xTvO6eDkZbWFQ@mail.gmail.com>

> From: Chia-I Wu <olvaffe@gmail.com>
> Sent: Saturday, February 22, 2020 2:21 AM
> 
> On Fri, Feb 21, 2020 at 7:59 AM Sean Christopherson
> <sean.j.christopherson@intel.com> wrote:
> >
> > On Thu, Feb 20, 2020 at 09:39:05PM -0800, Tian, Kevin wrote:
> > > > From: Chia-I Wu <olvaffe@gmail.com>
> > > > Sent: Friday, February 21, 2020 12:51 PM
> > > > If you think it is best for KVM to inspect the hva to determine the
> > > > memory type with page granularity, that is reasonable and should
> > > > work for us too.  The userspace can do something (e.g., add a GPU
> > > > driver dependency to the hypervisor such that the dma-buf is
> > > > imported as a GPU memory and mapped using vkMapMemory), or I can
> > > > work with dma-buf maintainers to see if dma-buf's semantics can be
> > > > changed.
> > >
> > > I think you need to consider the live migration requirement, as Paolo
> > > pointed out.  The migration thread needs to read/write the region, so
> > > it must use the same memory type as the GPU process and the guest when
> > > accessing it.  In that case, the hva mapped by Qemu should have the
> > > same type as the guest desires.  However, adding a GPU driver
> > > dependency to Qemu might trigger some concern.  I'm not sure whether
> > > there is a generic mechanism to share a dma-buf fd between the GPU
> > > process and Qemu while allowing Qemu to follow the desired type
> > > without using vkMapMemory...
> >
> > Alternatively, KVM could make KVM_MEM_DMA and KVM_MEM_LOG_DIRTY_PAGES
> > mutually exclusive, i.e. force a transition to WB memtype for the guest
> > (with appropriate zapping) when migration is activated.  I think that
> > would work?
> Hm, virtio-gpu does not allow live migration when the 3D function
> (virgl=on) is enabled.  This is the relevant code in qemu:
> 
>     if (virtio_gpu_virgl_enabled(g->conf)) {
>         error_setg(&g->migration_blocker, "virgl is not yet migratable");
> 
> Although we (virtio-gpu and virglrenderer projects) plan to make host
> GPU buffers available to the guest via memslots, those buffers should
> be considered a part of the "GPU state".  The migration thread should
> work with virglrenderer and let virglrenderer save/restore them, if
> live migration is to be supported.

Thanks for your explanation. Your RFC makes more sense now.

One remaining open question: although for live migration we can
explicitly state that the migration thread itself must not access the
dma-buf region, how do we warn other usages that may simply walk every
memslot and access its content through the mmap'ed virtual address?
We may need a flag to mark a memslot whose mmap exists only for KVM to
retrieve the page-table mapping, but not for direct access in Qemu.

> 
> QEMU already depends on GPU drivers when configured with
> --enable-virglrenderer.  There is vhost-user-gpu, which can move the
> dependency into a separate GPU process.  But there will still be cases
> (e.g., nVidia's proprietary driver does not support dma-buf) where
> QEMU cannot avoid a GPU driver dependency.
> 
> > > Note this is orthogonal to whether we introduce a new uapi or
> > > implicitly check the hva to honor the guest memory type.  It's purely
> > > about Qemu itself.  Ideally anyone with the desire to access a dma-buf
> > > object should follow the expected semantics.  It's interesting that
> > > the dma-buf sub-system doesn't provide centralized synchronization of
> > > memory type between multiple mmap paths.


Thread overview: 32+ messages
2020-02-13 21:30 [RFC PATCH 0/3] KVM: x86: honor guest memory type Chia-I Wu
2020-02-13 21:30 ` [RFC PATCH 1/3] KVM: vmx: rewrite the comment in vmx_get_mt_mask Chia-I Wu
2020-02-14  9:36   ` Paolo Bonzini
2020-02-13 21:30 ` [RFC PATCH 2/3] RFC: KVM: add KVM_MEM_DMA Chia-I Wu
2020-02-13 21:30 ` [RFC PATCH 3/3] RFC: KVM: x86: support KVM_CAP_DMA_MEM Chia-I Wu
2020-02-13 21:41 ` [RFC PATCH 0/3] KVM: x86: honor guest memory type Paolo Bonzini
2020-02-13 22:18   ` Chia-I Wu
2020-02-14 10:26     ` Paolo Bonzini
2020-02-14 19:52       ` Sean Christopherson
2020-02-14 21:47         ` Chia-I Wu
2020-02-14 21:56           ` Jim Mattson
2020-02-14 22:03             ` Sean Christopherson
2020-02-18 16:28               ` Paolo Bonzini
2020-02-18 22:58                 ` Sean Christopherson
2020-02-19  9:52                 ` Tian, Kevin
2020-02-19 19:36                   ` Chia-I Wu
2020-02-20  2:04                     ` Tian, Kevin
2020-02-20  2:38                       ` Tian, Kevin
2020-02-20 22:23                         ` Chia-I Wu
2020-02-21  0:23                           ` Tian, Kevin
2020-02-21  4:45                             ` Chia-I Wu
2020-02-21  4:51                               ` Chia-I Wu
2020-02-21  5:39                                 ` Tian, Kevin
2020-02-21 15:59                                   ` Sean Christopherson
2020-02-21 18:21                                     ` Chia-I Wu
2020-02-25  1:29                                       ` Tian, Kevin [this message]
2020-02-14 21:15       ` Chia-I Wu
2020-02-19 10:00         ` Tian, Kevin
2020-02-19 19:18           ` Chia-I Wu
2020-02-20  2:13             ` Tian, Kevin
2020-02-20 23:02               ` Chia-I Wu
2020-02-24 10:57               ` Gerd Hoffmann
