From: Yan Zhao <yan.y.zhao@intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: <kvm@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<dri-devel@lists.freedesktop.org>, <pbonzini@redhat.com>,
	<seanjc@google.com>, <olvaffe@gmail.com>, <kevin.tian@intel.com>,
	<zhiyuan.lv@intel.com>, <zhenyu.z.wang@intel.com>,
	<yongwei.ma@intel.com>, <vkuznets@redhat.com>,
	<wanpengli@tencent.com>, <jmattson@google.com>, <joro@8bytes.org>,
	<gurchetansingh@chromium.org>, <kraxel@redhat.com>,
	<zzyiwei@google.com>, <ankita@nvidia.com>,
	<alex.williamson@redhat.com>, <maz@kernel.org>,
	<oliver.upton@linux.dev>, <james.morse@arm.com>,
	<suzuki.poulose@arm.com>, <yuzenghui@huawei.com>
Subject: Re: [PATCH 0/4] KVM: Honor guest memory types for virtio GPU devices
Date: Tue, 9 Jan 2024 10:11:23 +0800
Message-ID: <ZZyrS4RiHvktDZXb@yzhao56-desk.sh.intel.com>
In-Reply-To: <20240109002220.GA439767@nvidia.com>

On Mon, Jan 08, 2024 at 08:22:20PM -0400, Jason Gunthorpe wrote:
> On Tue, Jan 09, 2024 at 07:36:22AM +0800, Yan Zhao wrote:
> > On Mon, Jan 08, 2024 at 10:02:50AM -0400, Jason Gunthorpe wrote:
> > > On Mon, Jan 08, 2024 at 02:02:57PM +0800, Yan Zhao wrote:
> > > > On Fri, Jan 05, 2024 at 03:55:51PM -0400, Jason Gunthorpe wrote:
> > > > > On Fri, Jan 05, 2024 at 05:12:37PM +0800, Yan Zhao wrote:
> > > > > > This series allows user space to notify KVM of noncoherent DMA status so as
> > > > > > to let KVM honor guest memory types in specified memory slot ranges.
> > > > > > 
> > > > > > Motivation
> > > > > > ===
> > > > > > A virtio GPU device may want to configure GPU hardware to work in
> > > > > > noncoherent mode, i.e. some of its DMAs do not snoop CPU caches.
> > > > > 
> > > > > Does this mean some DMA reads do not snoop the caches or does it
> > > > > include DMA writes not synchronizing the caches too?
> > > > Neither DMA reads nor DMA writes are snooped.
> > > 
> > > Oh that sounds really dangerous.
> > >
> > But the IOMMU for the Intel GPU does not do force-snoop, no matter whether
> > KVM honors the guest memory type or not.
> 
> Yes, I know. Sounds dangerous!
> 
> > > Not just migration. Any point where KVM revokes the page from the
> > > VM. Ie just tearing down the VM still has to make the cache coherent
> > > with physical or there may be problems.
> > I'm not sure what problem you mean during KVM's revoking of the page.
> > On the host side,
> > - If the memory type is WB, as in the Intel GPU passthrough case,
> >   the mismatch can only happen when the guest memory type is UC/WC/WT/WP, all
> >   stronger than WB.
> >   So, even after KVM revokes the page, the host will not get delayed
> >   data from the cache.
> > - If the memory type is WC, as in the virtio GPU case, after KVM revokes
> >   the page, the page is still held on the virtio host side.
> >   Even though an uncooperative guest can cause wrong data in the page,
> >   the guest could achieve the same purpose in a more straightforward way, i.e.
> >   by writing wrong data directly to the page.
> >   So, I don't see a problem in this case either.
> 
> You can't let cache-incoherent memory leak back into the hypervisor
> for other uses, or who knows what can happen. In many cases something
> will zero the page and you can probably reliably argue that will make
> the cache coherent, but there are still all sorts of cases where pages
> are write-protected and then used in the hypervisor context, e.g. page-out
> or something, where the incoherence is a big problem.
> 
> E.g. RAID parity and mirror calculations become at risk of
> malfunction. Storage CRCs stop working reliably, etc.
> 
> It is certainly a big enough problem that a generic KVM switch to
> allow incoherence should be treated with a lot of skepticism. You can't
> argue that the only use of the generic switch will be with GPUs that
> exclude all the troublesome cases!
>
You are right. It's safer with only one view of memory.
But even if something zeroes the page, as long as that happens before
the page is returned to the host, isn't the impact constrained to the VM's
scope? E.g. for a write-protected page, the hypervisor cannot rely on the
page content being correct or expected anyway.

For virtio GPU's use case, do you think a better way would be for KVM to
pull the memory type from the host page table for the specified memslot?

But for noncoherent DMA device passthrough, we can't pull the host memory
type, because we rely on the guest device driver to do cache flushes
properly, and if the guest device driver thinks a memory region is uncached
while it's effectively cached, the device cannot work properly.
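
For reference, a minimal sketch of what "pulling" the host memory type could
look like, assuming x86 and judging from the VMA that backs the memslot. The
helper name and the true/false convention are made up for illustration; this
is not what the series actually does, which instead has userspace declare
noncoherent DMA per memslot:

#include <linux/mm.h>
#include <linux/kvm_host.h>

/*
 * Return true if the host mapping backing the memslot is not plain
 * write-back, judged from the PAT/PCD/PWT bits in the VMA's page
 * protection. Hypothetical helper, for illustration only.
 */
static bool host_mapping_is_noncached(struct kvm *kvm,
				      struct kvm_memory_slot *slot)
{
	struct vm_area_struct *vma;
	bool noncached = false;

	mmap_read_lock(kvm->mm);
	vma = find_vma(kvm->mm, slot->userspace_addr);
	if (vma)
		noncached = (pgprot_val(vma->vm_page_prot) &
			     _PAGE_CACHE_MASK) != 0;
	mmap_read_unlock(kvm->mm);

	return noncached;
}

(One caveat with this kind of lookup is that vm_page_prot may not reflect
the final PAT attributes for every mapping type, which is part of why an
explicit userspace flag is simpler.)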

> > > > In this case, will this security attack impact other guests?
> > > 
> > > It impacts the hypervisor potentially. It depends..
> > Could you elaborate more on how it will impact the hypervisor?
> > We can try to fix it if it's really an issue.
> 
> Well, for instance, when you install pages into KVM, the hypervisor
> will have taken kernel memory, then zeroed it with cacheable writes;
> however, the VM can read it incoherently with DMA and access the
> pre-zeroed data, since the zeroing writes potentially haven't left the
> cache. That is an information-leakage exploit.
This makes sense.
How about KVM doing a cache flush before installing/revoking the
page if the guest memory type is honored?
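
A rough sketch of what I have in mind, assuming x86 and the memslot flag
proposed in patch 1/4 (the helper name is made up; whether to flush on
install, on revoke, or both is exactly the question):

#include <asm/cacheflush.h>	/* clflush_cache_range() */
#include <linux/highmem.h>	/* kmap_local_pfn()/kunmap_local() */
#include <linux/kvm_host.h>

/*
 * Flush the CPU cache lines of a guest page before it is mapped into,
 * or reclaimed from, a memslot registered for noncoherent DMA, so that
 * stale cacheable writes can neither leak into nor linger after the
 * guest's noncached view of the page. Hypothetical helper.
 */
static void kvm_noncoherent_flush_page(struct kvm_memory_slot *slot,
				       kvm_pfn_t pfn)
{
	void *va;

	if (!(slot->flags & KVM_MEM_NON_COHERENT_DMA))
		return;

	va = kmap_local_pfn(pfn);
	clflush_cache_range(va, PAGE_SIZE);
	kunmap_local(va);
}

KVM would call something like this from the fault path before installing
the SPTE and from the zap path before handing the page back.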

> Who knows what else you can get up to if you are creative. The whole
> security model assumes there is only one view of memory, not two.
> 
