From: Jason Gunthorpe <jgg@nvidia.com>
To: Yan Zhao <yan.y.zhao@intel.com>
Cc: wanpengli@tencent.com, kvm@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	kraxel@redhat.com, maz@kernel.org, joro@8bytes.org,
	zzyiwei@google.com, yuzenghui@huawei.com, olvaffe@gmail.com,
	kevin.tian@intel.com, suzuki.poulose@arm.com,
	alex.williamson@redhat.com, yongwei.ma@intel.com,
	zhiyuan.lv@intel.com, gurchetansingh@chromium.org,
	jmattson@google.com, zhenyu.z.wang@intel.com, seanjc@google.com,
	ankita@nvidia.com, oliver.upton@linux.dev, james.morse@arm.com,
	pbonzini@redhat.com, vkuznets@redhat.com
Subject: Re: [PATCH 0/4] KVM: Honor guest memory types for virtio GPU devices
Date: Mon, 15 Jan 2024 12:30:50 -0400
Message-ID: <20240115163050.GI734935@nvidia.com>
In-Reply-To: <ZZyrS4RiHvktDZXb@yzhao56-desk.sh.intel.com>

On Tue, Jan 09, 2024 at 10:11:23AM +0800, Yan Zhao wrote:

> > Well, for instance, when you install pages into KVM the hypervisor
> > will have taken kernel memory and zeroed it with cacheable writes;
> > however, the VM can read it incoherently with DMA and access the
> > pre-zeroed data, since the zeroing writes potentially haven't left
> > the cache. That is an information leakage exploit.
>
> This makes sense.
> How about KVM doing a cache flush before installing/revoking the
> page if the guest memory type is honored?

I think if you are going to allow the guest to bypass the cache in any
way, then KVM should fully flush the cache before allowing the guest to
access the memory, and it should fully flush the cache again after
removing the memory from the guest.
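
To make the ordering concrete, something like the sketch below is what I
have in mind (the helper and the call sites are made up, not real KVM
code; wbinvd_on_all_cpus() is the existing x86 primitive, and needing it
on every install/remove is part of why I expect this to be slow):

/* Sketch only, assuming x86; helper and call-site names are made up. */
#include <asm/smp.h>		/* wbinvd_on_all_cpus() */

static void flush_for_noncoherent_guest(void)
{
	/*
	 * Write back and invalidate all CPU caches so that nothing the
	 * host wrote (e.g. the zeroing of the pages) is still sitting
	 * dirty in a cache line the guest can bypass with UC/WC
	 * accesses or non-coherent DMA.
	 */
	wbinvd_on_all_cpus();
}

static void install_noncoherent_memory(void)
{
	flush_for_noncoherent_guest();
	/* ... only now map the pages into EPT/stage-2 ... */
}

static void remove_noncoherent_memory(void)
{
	/* ... zap the EPT/stage-2 (and VFIO) mappings first ... */
	flush_for_noncoherent_guest();
	/* ... only then can the host safely reuse the memory ... */
}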

Note that fully removing the memory now includes VFIO too, which is
going to be very hard to coordinate between KVM and VFIO.

ARM already has the hooks for most of this in the common code, so it
should not be outrageous to do, but I suspect it will be slow.
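
For reference, the per-page clean on the arm64 side would look roughly
like the below (the wiring around it is simplified and hypothetical;
dcache_clean_inval_poc() is the real arm64 cache maintenance helper):

/* Simplified sketch, assuming arm64. */
#include <linux/mm.h>		/* PAGE_SIZE */
#include <asm/cacheflush.h>	/* dcache_clean_inval_poc() */

static void clean_guest_page(void *va)
{
	/*
	 * Clean and invalidate the page to the point of coherency so a
	 * guest mapping it with a non-cacheable type (or reaching it
	 * via non-coherent DMA) sees what the host wrote, and the host
	 * is not left with stale lines afterwards.
	 */
	dcache_clean_inval_poc((unsigned long)va,
			       (unsigned long)va + PAGE_SIZE);
}

Doing that by-VA over every page of a slot on install and removal is
where the cost would come from.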

Jason

Thread overview: 33+ messages
2024-01-05  9:12 [PATCH 0/4] KVM: Honor guest memory types for virtio GPU devices Yan Zhao
2024-01-05  9:13 ` [PATCH 1/4] KVM: Introduce a new memslot flag KVM_MEM_NON_COHERENT_DMA Yan Zhao
2024-01-05  9:14 ` [PATCH 2/4] KVM: x86: Add a new param "slot" to op get_mt_mask in kvm_x86_ops Yan Zhao
2024-01-05  9:15 ` [PATCH 3/4] KVM: VMX: Honor guest PATs for memslots of flag KVM_MEM_NON_COHERENT_DMA Yan Zhao
2024-01-05  9:16 ` [PATCH 4/4] KVM: selftests: Set KVM_MEM_NON_COHERENT_DMA as a supported memslot flag Yan Zhao
2024-01-05 19:55 ` [PATCH 0/4] KVM: Honor guest memory types for virtio GPU devices Jason Gunthorpe
2024-01-08  6:02   ` Yan Zhao
2024-01-08 14:02     ` Jason Gunthorpe
2024-01-08 15:25       ` Daniel Vetter
2024-01-08 15:38         ` Jason Gunthorpe
2024-01-08 23:36       ` Yan Zhao
2024-01-09  0:22         ` Jason Gunthorpe
2024-01-09  2:11           ` Yan Zhao
2024-01-15 16:30             ` Jason Gunthorpe [this message]
2024-01-16  0:45               ` Tian, Kevin
2024-01-16  4:05               ` Tian, Kevin
2024-01-16 12:54                 ` Jason Gunthorpe
