From: Sean Christopherson <sean.j.christopherson@intel.com>
To: Brijesh Singh <brijesh.singh@amd.com>, Borislav Petkov <bp@suse.de>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org, "Tom Lendacky" <thomas.lendacky@amd.com>,
	"Thomas Gleixner" <tglx@linutronix.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Radim Krčmář" <rkrcmar@redhat.com>
Subject: Re: [PATCH v6 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active
Date: Mon, 10 Sep 2018 06:29:12 -0700
Message-ID: <1536586152.11460.40.camel@intel.com>
In-Reply-To: <026d5ca5-7b77-de6c-477e-ff39f0291ac0@amd.com>

On Mon, 2018-09-10 at 08:15 -0500, Brijesh Singh wrote:
> 
> On 9/10/18 7:27 AM, Borislav Petkov wrote:
> > 
> > On Fri, Sep 07, 2018 at 12:57:30PM -0500, Brijesh Singh wrote:
> > > 
> > > diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
> > > index 376fd3a..6086b56 100644
> > > --- a/arch/x86/kernel/kvmclock.c
> > > +++ b/arch/x86/kernel/kvmclock.c
> > > @@ -65,6 +65,15 @@ static struct pvclock_vsyscall_time_info
> > >  static struct pvclock_wall_clock wall_clock __decrypted;
> > >  static DEFINE_PER_CPU(struct pvclock_vsyscall_time_info *, hv_clock_per_cpu);
> > >  
> > > +#ifdef CONFIG_AMD_MEM_ENCRYPT
> > > +/*
> > > + * The auxiliary array will be used when SEV is active. In non-SEV case,
> > > + * it will be freed by free_decrypted_mem().
> > > + */
> > > +static struct pvclock_vsyscall_time_info
> > > +			hv_clock_aux[NR_CPUS] __decrypted_aux;
> > Hmm, so worst case that's 64 4K pages:
> > 
> > (8192*32)/4096 = 64 4K pages.
> We can minimize the worst-case memory usage. The number of VCPUs
> supported by KVM may be less than NR_CPUS, e.g. currently KVM_MAX_VCPUS
> is set to 288.

KVM_MAX_VCPUS is a property of the host, whereas this code runs in the
guest, e.g. KVM_MAX_VCPUS could be 2048 in the host for all we know.

> (288 * 64)/4096 = 4 4K pages.
> 
> (pvclock_vsyscall_time_info is cache aligned so it will be 64 bytes)

Ah, I was wondering why my calculations were always different than
yours.  I was looking at struct pvclock_vcpu_time_info, which is 32
bytes.
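
To make the size math concrete, here's a rough userspace sketch.  The
struct layouts are written from memory and the 8192/288 values are just
the ones used in this thread, so treat it as an illustration rather than
the actual kernel headers:

/* sizes.c - standalone illustration, not the kernel code */
#include <stdint.h>
#include <stdio.h>

/* 32-byte packed ABI struct shared with the hypervisor */
struct pvclock_vcpu_time_info {
	uint32_t version;
	uint32_t pad0;
	uint64_t tsc_timestamp;
	uint64_t system_time;
	uint32_t tsc_to_system_mul;
	int8_t   tsc_shift;
	uint8_t  flags;
	uint8_t  pad[2];
} __attribute__((__packed__));

/* per-vCPU wrapper, padded out to a 64-byte cache line */
struct pvclock_vsyscall_time_info {
	struct pvclock_vcpu_time_info pvti;
} __attribute__((__aligned__(64)));

int main(void)
{
	size_t e = sizeof(struct pvclock_vsyscall_time_info);	/* 64 */

	printf("element size: %zu bytes\n", e);
	printf("NR_CPUS=8192: %zu bytes, %zu 4K pages\n",
	       8192 * e, (8192 * e + 4095) / 4096);	/* 128 pages */
	printf("KVM_MAX_VCPUS=288: %zu bytes, %zu 4K pages\n",
	       288 * e, (288 * e + 4095) / 4096);	/* 5 pages */
	return 0;
}

Running it shows why our numbers didn't line up: the cache-aligned
element doubles the 32-byte figures.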

> #if NR_CPUS > KVM_MAX_VCPUS
> #define HV_AUX_ARRAY_SIZE  KVM_MAX_VCPUS
> #else
> #define HV_AUX_ARRAY_SIZE NR_CPUS
> #endif
> 
> static struct pvclock_vsyscall_time_info
>                         hv_clock_aux[HV_AUX_ARRAY_SIZE] __decrypted_aux;
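
For completeness, here's how that bound plays out in a standalone
sketch (the constants and the stand-in struct are assumptions for
illustration, not the real kernel config):

/* bound.c - illustration only: NR_CPUS/KVM_MAX_VCPUS are assumed
 * values and fake_pvti is just a 64-byte stand-in for
 * pvclock_vsyscall_time_info.
 */
#include <stdio.h>

#define NR_CPUS		8192
#define KVM_MAX_VCPUS	288

#if NR_CPUS > KVM_MAX_VCPUS
#define HV_AUX_ARRAY_SIZE	KVM_MAX_VCPUS
#else
#define HV_AUX_ARRAY_SIZE	NR_CPUS
#endif

struct fake_pvti {
	char pad[64];			/* one cache line per vCPU entry */
};

/* 288 entries * 64 bytes = 18432 bytes reserved up front */
static struct fake_pvti hv_clock_aux[HV_AUX_ARRAY_SIZE];

int main(void)
{
	printf("%d entries, %zu bytes\n", HV_AUX_ARRAY_SIZE,
	       sizeof(hv_clock_aux));	/* 288 entries, 18432 bytes */
	return 0;
}

That caps the static footprint, but it does bake a host-side constant
into the guest image, per the KVM_MAX_VCPUS point above.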



Thread overview: 37+ messages
2018-09-07 17:57 [PATCH v6 0/5] x86: Fix SEV guest regression Brijesh Singh
2018-09-07 17:57 ` [PATCH v6 1/5] x86/mm: Restructure sme_encrypt_kernel() Brijesh Singh
2018-09-10 11:32   ` Borislav Petkov
2018-09-07 17:57 ` [PATCH v6 2/5] x86/mm: fix sme_populate_pgd() to update page flags Brijesh Singh
2018-09-10 11:36   ` Borislav Petkov
2018-09-10 12:28     ` Brijesh Singh
2018-09-10 12:32       ` Borislav Petkov
2018-09-07 17:57 ` [PATCH v6 3/5] x86/mm: add .data..decrypted section to hold shared variables Brijesh Singh
2018-09-10 11:54   ` Borislav Petkov
2018-09-10 12:33     ` Brijesh Singh
2018-09-07 17:57 ` [PATCH v6 4/5] x86/kvm: use __decrypted attribute in " Brijesh Singh
2018-09-10 12:04   ` Borislav Petkov
2018-09-10 13:15     ` Sean Christopherson
2018-09-10 13:29       ` Thomas Gleixner
2018-09-10 15:34       ` Borislav Petkov
2018-09-10 12:29   ` Paolo Bonzini
2018-09-10 12:33     ` Borislav Petkov
2018-09-10 12:46       ` Paolo Bonzini
2018-09-07 17:57 ` [PATCH v6 5/5] x86/kvm: Avoid dynamic allocation of pvclock data when SEV is active Brijesh Singh
2018-09-10 12:27   ` Borislav Petkov
2018-09-10 13:15     ` Brijesh Singh
2018-09-10 13:29       ` Sean Christopherson [this message]
2018-09-10 15:10         ` Brijesh Singh
2018-09-10 15:28           ` Sean Christopherson
2018-09-10 15:30             ` Brijesh Singh
2018-09-10 16:48               ` Borislav Petkov
2018-09-11  9:26                 ` Paolo Bonzini
2018-09-11 10:01                   ` Borislav Petkov
2018-09-11 10:19                     ` Paolo Bonzini
2018-09-11 10:25                       ` Borislav Petkov
2018-09-11 11:07                         ` Paolo Bonzini
2018-09-11 13:55                           ` Borislav Petkov
2018-09-11 14:00                             ` Paolo Bonzini
2018-09-10 15:53       ` Borislav Petkov
2018-09-10 16:13         ` Sean Christopherson
2018-09-10 16:14         ` Brijesh Singh
2018-09-10 12:28   ` Paolo Bonzini
