From: Tom Roeder <tmroeder@google.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Sean Christopherson" <sean.j.christopherson@intel.com>,
"Radim Krčmář" <rkrcmar@redhat.com>,
"Liran Alon" <liran.alon@oracle.com>,
"Thomas Gleixner" <tglx@linutronix.de>,
"Ingo Molnar" <mingo@redhat.com>,
"Borislav Petkov" <bp@alien8.de>,
"H . Peter Anvin" <hpa@zytor.com>,
x86@kernel.org, kvm@vger.kernel.org,
linux-kernel@vger.kernel.org,
syzbot+ded1696f6b50b615b630@syzkaller.appspotmail.com
Subject: Re: [RFC PATCH] kvm: x86/vmx: Use kzalloc for cached_vmcs12
Date: Wed, 23 Jan 2019 10:25:40 -0800 [thread overview]
Message-ID: <20190123182540.GA160275@google.com> (raw)
In-Reply-To: <2177074d-f610-0d86-7399-e63ba851346c@redhat.com>
On Tue, Jan 15, 2019 at 11:15:51AM +0100, Paolo Bonzini wrote:
> On 15/01/19 03:43, Sean Christopherson wrote:
> >> - vmx->nested.cached_vmcs12 = kmalloc(VMCS12_SIZE, GFP_KERNEL);
> >> + vmx->nested.cached_vmcs12 = kzalloc(VMCS12_SIZE, GFP_KERNEL);
> >> if (!vmx->nested.cached_vmcs12)
> >> goto out_cached_vmcs12;
> > Obviously not your code, but why do we allocate VMCS12_SIZE instead of
> > sizeof(struct vmcs12)? I get why we require userspace to reserve the
> > full 4k, but I don't understand why KVM needs to allocate the reserved
> > bytes internally.
>
> It's just cleaner and shorter code to copy everything in and out,
> instead of having to explicitly zero the slack.
Could you please clarify? I don't see code that copies everything in and
out, but it depends on what you mean by "everything". In the context of
this email exchange, I assumed that "everything" was "all 4k
(VMCS12_SIZE)".
But it looks to me like the code doesn't copy 4k in and out, but rather
only ever copies sizeof(struct vmcs12) in and out. The copy_from_user
and copy_to_user cases in nested.c use sizeof(*vmcs12), which is
sizeof(struct vmcs12).
So maybe we can switch to allocating sizeof(struct vmcs12). Is this
correct, or is there some other reason to allocate the larger size?
Thread overview: 16+ messages
2018-11-07 1:38 KMSAN: kernel-infoleak in kvm_vcpu_write_guest_page syzbot
2018-11-07 12:10 ` Alexander Potapenko
2018-11-07 12:47 ` Paolo Bonzini
2018-11-07 12:58 ` Liran Alon
2018-11-07 13:37 ` Paolo Bonzini
2019-01-14 23:47 ` [RFC PATCH] kvm: x86/vmx: Use kzalloc for cached_vmcs12 Tom Roeder
2019-01-15 0:03 ` Jim Mattson
2019-01-15 2:43 ` Sean Christopherson
2019-01-15 10:15 ` Paolo Bonzini
2019-01-23 18:25 ` Tom Roeder [this message]
2019-01-24 1:17 ` Paolo Bonzini
2019-01-15 17:51 ` Tom Roeder
2019-01-23 18:33 ` Tom Roeder
2019-01-24 1:18 ` Paolo Bonzini
2019-01-24 21:46 ` Tom Roeder
2018-11-07 12:52 ` KMSAN: kernel-infoleak in kvm_vcpu_write_guest_page Liran Alon