From: Steve Rutherford <srutherford@google.com>
To: Ashish Kalra <Ashish.Kalra@amd.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Joerg Roedel <joro@8bytes.org>, Borislav Petkov <bp@suse.de>,
	Tom Lendacky <Thomas.Lendacky@amd.com>, X86 ML <x86@kernel.org>,
	KVM list <kvm@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	David Rientjes <rientjes@google.com>,
	Venu Busireddy <venu.busireddy@oracle.com>,
	Brijesh Singh <brijesh.singh@amd.com>
Subject: Re: [PATCH v8 09/18] KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl
Date: Fri, 29 May 2020 19:05:57 -0700
Message-ID: <CABayD+enZfQjEvJ6ysmVkkN8G=Cu64YaWCr27=U+FpJ-jfWz2A@mail.gmail.com>
In-Reply-To: <d4680b02ff88b2687cbb807afd8eb47784dd3a33.1588711355.git.ashish.kalra@amd.com>

On Tue, May 5, 2020 at 2:17 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>
> From: Brijesh Singh <Brijesh.Singh@amd.com>
>
> The ioctl can be used to retrieve page encryption bitmap for a given
> gfn range.
>
> Return the correct bitmap as per the number of pages requested by the
> user. Ensure that we only copy bmap->num_pages bits to the userspace
> buffer; if bmap->num_pages is not byte-aligned, we read the trailing
> bits from userspace and copy those bits back as-is.
>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Borislav Petkov <bp@suse.de>
> Cc: Tom Lendacky <thomas.lendacky@amd.com>
> Cc: x86@kernel.org
> Cc: kvm@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Venu Busireddy <venu.busireddy@oracle.com>
> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
> ---
>  Documentation/virt/kvm/api.rst  | 27 +++++++++++++
>  arch/x86/include/asm/kvm_host.h |  2 +
>  arch/x86/kvm/svm/sev.c          | 70 +++++++++++++++++++++++++++++++++
>  arch/x86/kvm/svm/svm.c          |  1 +
>  arch/x86/kvm/svm/svm.h          |  1 +
>  arch/x86/kvm/x86.c              | 12 ++++++
>  include/uapi/linux/kvm.h        | 12 ++++++
>  7 files changed, 125 insertions(+)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index efbbe570aa9b..ecad84086892 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -4636,6 +4636,33 @@ This ioctl resets VCPU registers and control structures according to
>  the clear cpu reset definition in the POP. However, the cpu is not put
>  into ESA mode. This reset is a superset of the initial reset.
>
> +4.125 KVM_GET_PAGE_ENC_BITMAP (vm ioctl)
> +----------------------------------------
> +
> +:Capability: basic
> +:Architectures: x86
> +:Type: vm ioctl
> +:Parameters: struct kvm_page_enc_bitmap (in/out)
> +:Returns: 0 on success, -1 on error
> +
> +/* for KVM_GET_PAGE_ENC_BITMAP */
> +struct kvm_page_enc_bitmap {
> +       __u64 start_gfn;
> +       __u64 num_pages;
> +       union {
> +               void __user *enc_bitmap; /* one bit per page */
> +               __u64 padding2;
> +       };
> +};
> +
> +Encrypted VMs have the concept of private and shared pages. Private
> +pages are encrypted with a guest-specific key, while shared pages may
> +be encrypted with the hypervisor key. The KVM_GET_PAGE_ENC_BITMAP ioctl
> +can be used to get a bitmap indicating whether a guest page is private
> +or shared. The bitmap can be used during guest migration: if a page
> +is private, userspace needs to use SEV migration commands to transmit
> +the page.
> +
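For anyone wiring this up in a migration tool, a rough sketch of how
userspace might drive the new ioctl (the struct layout and ioctl name
are from this patch; the helper name, vm_fd, and the buffer sizing are
illustrative, and error handling is minimal):

	#include <stdlib.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Fetch the encryption bitmap for [start_gfn, start_gfn + num_pages). */
	static void *get_enc_bitmap(int vm_fd, __u64 start_gfn, __u64 num_pages)
	{
		struct kvm_page_enc_bitmap bmap = {};

		bmap.start_gfn = start_gfn;
		bmap.num_pages = num_pages;
		/* one bit per page, rounded up to whole bytes */
		bmap.enc_bitmap = calloc(1, (num_pages + 7) / 8);

		if (!bmap.enc_bitmap ||
		    ioctl(vm_fd, KVM_GET_PAGE_ENC_BITMAP, &bmap) < 0) {
			free(bmap.enc_bitmap);
			return NULL;
		}
		/* set bit: private page, goes through the SEV SEND_* commands;
		 * clear bit: shared page, can be sent as regular memory. */
		return bmap.enc_bitmap;
	}
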
>
>  4.125 KVM_S390_PV_COMMAND
>  -------------------------
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 4a8ee22f4f5b..9e428befb6a4 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1256,6 +1256,8 @@ struct kvm_x86_ops {
>         int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
>         int (*page_enc_status_hc)(struct kvm *kvm, unsigned long gpa,
>                                   unsigned long sz, unsigned long mode);
> +       int (*get_page_enc_bitmap)(struct kvm *kvm,
> +                               struct kvm_page_enc_bitmap *bmap);
>  };
>
>  struct kvm_x86_init_ops {
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index f088467708f0..387045902470 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -1434,6 +1434,76 @@ int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
>         return 0;
>  }
>
> +int svm_get_page_enc_bitmap(struct kvm *kvm,
> +                                  struct kvm_page_enc_bitmap *bmap)
> +{
> +       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
> +       unsigned long gfn_start, gfn_end;
> +       unsigned long sz, i, sz_bytes;
> +       unsigned long *bitmap;
> +       int ret, n;
> +
> +       if (!sev_guest(kvm))
> +               return -ENOTTY;
> +
> +       gfn_start = bmap->start_gfn;
> +       gfn_end = gfn_start + bmap->num_pages;
> +
> +       sz = ALIGN(bmap->num_pages, BITS_PER_LONG) / BITS_PER_BYTE;
> +       bitmap = kmalloc(sz, GFP_KERNEL);
> +       if (!bitmap)
> +               return -ENOMEM;
> +
> +       /* by default all pages are marked encrypted */
> +       memset(bitmap, 0xff, sz);
> +
> +       mutex_lock(&kvm->lock);
> +       if (sev->page_enc_bmap) {
> +               i = gfn_start;
> +               for_each_clear_bit_from(i, sev->page_enc_bmap,
> +                                     min(sev->page_enc_bmap_size, gfn_end))
gfn_end is not a size? I believe you want either gfn_end - gfn_start
or bmap->num_pages.

> +                       clear_bit(i - gfn_start, bitmap);
> +       }
> +       mutex_unlock(&kvm->lock);
> +
> +       ret = -EFAULT;
> +
> +       n = bmap->num_pages % BITS_PER_BYTE;
> +       sz_bytes = ALIGN(bmap->num_pages, BITS_PER_BYTE) / BITS_PER_BYTE;
> +
> +       /*
> +        * Return the correct bitmap as per the number of pages being
> +        * requested by the user. Ensure that we only copy bmap->num_pages
> +        * bytes in the userspace buffer, if bmap->num_pages is not byte
> +        * aligned we read the trailing bits from the userspace and copy
Nit: "userspace" instead of "the userspace".

> +        * those bits as is.
> +        */
> +
> +       if (n) {
> +               unsigned char *bitmap_kernel = (unsigned char *)bitmap;
> +               unsigned char bitmap_user;
> +               unsigned long offset, mask;
> +
> +               offset = bmap->num_pages / BITS_PER_BYTE;
> +               if (copy_from_user(&bitmap_user, bmap->enc_bitmap + offset,
> +                               sizeof(unsigned char)))
> +                       goto out;
> +
> +               mask = GENMASK(n - 1, 0);
> +               bitmap_user &= ~mask;
> +               bitmap_kernel[offset] &= mask;
> +               bitmap_kernel[offset] |= bitmap_user;
> +       }
> +
> +       if (copy_to_user(bmap->enc_bitmap, bitmap, sz_bytes))
> +               goto out;
> +
> +       ret = 0;
> +out:
> +       kfree(bitmap);
> +       return ret;
> +}
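To make the trailing-bit handling concrete: with, say, bmap->num_pages
== 13, n == 13 % 8 == 5, offset == 1 and sz_bytes == 2, so the second
byte carries pages 8-12 in its low five bits while bits 5-7 still
belong to userspace. A self-contained sketch of the same merge, with
made-up byte values:

	unsigned char kern = 0x15;	/* kernel's bits for pages 8-12 */
	unsigned char user = 0xe0;	/* byte read back from userspace */
	unsigned char mask = 0x1f;	/* GENMASK(4, 0), n == 5 valid bits */

	unsigned char merged = (kern & mask) | (user & ~mask);
	/* merged == 0xf5: pages 8-12 come from the kernel, bits 5-7 are
	 * exactly what userspace had there before the call. */
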
> +
>  int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>  {
>         struct kvm_sev_cmd sev_cmd;
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 1013ef0f4ce2..588709a9f68e 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -4016,6 +4016,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
>         .check_nested_events = svm_check_nested_events,
>
>         .page_enc_status_hc = svm_page_enc_status_hc,
> +       .get_page_enc_bitmap = svm_get_page_enc_bitmap,
>  };
>
>  static struct kvm_x86_init_ops svm_init_ops __initdata = {
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 6a562f5928a2..f087fa7b380c 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -404,6 +404,7 @@ int svm_check_nested_events(struct kvm_vcpu *vcpu);
>  int nested_svm_exit_special(struct vcpu_svm *svm);
>  int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
>                                   unsigned long npages, unsigned long enc);
> +int svm_get_page_enc_bitmap(struct kvm *kvm, struct kvm_page_enc_bitmap *bmap);
>
>  /* avic.c */
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5f5ddb5765e2..937797cfaf9a 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5208,6 +5208,18 @@ long kvm_arch_vm_ioctl(struct file *filp,
>         case KVM_SET_PMU_EVENT_FILTER:
>                 r = kvm_vm_ioctl_set_pmu_event_filter(kvm, argp);
>                 break;
> +       case KVM_GET_PAGE_ENC_BITMAP: {
> +               struct kvm_page_enc_bitmap bitmap;
> +
> +               r = -EFAULT;
> +               if (copy_from_user(&bitmap, argp, sizeof(bitmap)))
> +                       goto out;
> +
> +               r = -ENOTTY;
> +               if (kvm_x86_ops.get_page_enc_bitmap)
> +                       r = kvm_x86_ops.get_page_enc_bitmap(kvm, &bitmap);
> +               break;
> +       }
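One practical consequence of the dispatch above: when the hook is absent
(or, per the sev.c change, for non-SEV guests) the ioctl fails with
ENOTTY, so a migration tool can probe for support up front. A
hypothetical probe, assuming a zero-length request succeeds on SEV
guests as the code above suggests:

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* vm_fd is a KVM VM file descriptor; returns 1 if supported. */
	static int have_page_enc_bitmap(int vm_fd)
	{
		struct kvm_page_enc_bitmap probe = { .num_pages = 0 };

		return ioctl(vm_fd, KVM_GET_PAGE_ENC_BITMAP, &probe) == 0;
	}
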
>         default:
>                 r = -ENOTTY;
>         }
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 0fe1d206d750..af62f2afaa5d 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -505,6 +505,16 @@ struct kvm_dirty_log {
>         };
>  };
>
> +/* for KVM_GET_PAGE_ENC_BITMAP */
> +struct kvm_page_enc_bitmap {
> +       __u64 start_gfn;
> +       __u64 num_pages;
> +       union {
> +               void __user *enc_bitmap; /* one bit per page */
> +               __u64 padding2;
> +       };
> +};
> +
>  /* for KVM_CLEAR_DIRTY_LOG */
>  struct kvm_clear_dirty_log {
>         __u32 slot;
> @@ -1518,6 +1528,8 @@ struct kvm_pv_cmd {
>  /* Available with KVM_CAP_S390_PROTECTED */
>  #define KVM_S390_PV_COMMAND            _IOWR(KVMIO, 0xc5, struct kvm_pv_cmd)
>
> +#define KVM_GET_PAGE_ENC_BITMAP        _IOW(KVMIO, 0xc6, struct kvm_page_enc_bitmap)
> +
>  /* Secure Encrypted Virtualization command */
>  enum sev_cmd_id {
>         /* Guest initialization commands */
> --
> 2.17.1
>
