From: Brijesh Singh <brijesh.singh@amd.com>
To: Steve Rutherford <srutherford@google.com>,
	Ashish Kalra <ashish.kalra@amd.com>
Cc: brijesh.singh@amd.com, Paolo Bonzini <pbonzini@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Joerg Roedel <joro@8bytes.org>, Borislav Petkov <bp@suse.de>,
	Tom Lendacky <thomas.lendacky@amd.com>, X86 ML <x86@kernel.org>,
	KVM list <kvm@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	David Rientjes <rientjes@google.com>,
	Andy Lutomirski <luto@kernel.org>
Subject: Re: [PATCH v6 08/14] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall
Date: Tue, 7 Apr 2020 19:29:44 -0500
Message-ID: <d67a104e-6a01-a766-63b2-3f8b6026ca4c@amd.com>
In-Reply-To: <CABayD+cNdEJxoSHee3s0toy6-nO6Bm4-OsrbBdS8mCWoMBSqLQ@mail.gmail.com>


On 4/7/20 7:01 PM, Steve Rutherford wrote:
> On Mon, Apr 6, 2020 at 10:27 PM Ashish Kalra <ashish.kalra@amd.com> wrote:
>> Hello Steve,
>>
>> On Mon, Apr 06, 2020 at 07:17:37PM -0700, Steve Rutherford wrote:
>>> On Sun, Mar 29, 2020 at 11:22 PM Ashish Kalra <Ashish.Kalra@amd.com> wrote:
>>>> From: Brijesh Singh <Brijesh.Singh@amd.com>
>>>>
>>>> This hypercall is used by the SEV guest to notify a change in the page
>>>> encryption status to the hypervisor. The hypercall should be invoked
>>>> only when the encryption attribute is changed from encrypted -> decrypted
>>>> and vice versa. By default all guest pages are considered encrypted.
>>>>
>>>> Cc: Thomas Gleixner <tglx@linutronix.de>
>>>> Cc: Ingo Molnar <mingo@redhat.com>
>>>> Cc: "H. Peter Anvin" <hpa@zytor.com>
>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>>>> Cc: "Radim Krčmář" <rkrcmar@redhat.com>
>>>> Cc: Joerg Roedel <joro@8bytes.org>
>>>> Cc: Borislav Petkov <bp@suse.de>
>>>> Cc: Tom Lendacky <thomas.lendacky@amd.com>
>>>> Cc: x86@kernel.org
>>>> Cc: kvm@vger.kernel.org
>>>> Cc: linux-kernel@vger.kernel.org
>>>> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
>>>> Signed-off-by: Ashish Kalra <ashish.kalra@amd.com>
>>>> ---
>>>>  Documentation/virt/kvm/hypercalls.rst | 15 +++++
>>>>  arch/x86/include/asm/kvm_host.h       |  2 +
>>>>  arch/x86/kvm/svm.c                    | 95 +++++++++++++++++++++++++++
>>>>  arch/x86/kvm/vmx/vmx.c                |  1 +
>>>>  arch/x86/kvm/x86.c                    |  6 ++
>>>>  include/uapi/linux/kvm_para.h         |  1 +
>>>>  6 files changed, 120 insertions(+)
>>>>
>>>> diff --git a/Documentation/virt/kvm/hypercalls.rst b/Documentation/virt/kvm/hypercalls.rst
>>>> index dbaf207e560d..ff5287e68e81 100644
>>>> --- a/Documentation/virt/kvm/hypercalls.rst
>>>> +++ b/Documentation/virt/kvm/hypercalls.rst
>>>> @@ -169,3 +169,18 @@ a0: destination APIC ID
>>>>
>>>>  :Usage example: When sending a call-function IPI-many to vCPUs, yield if
>>>>                 any of the IPI target vCPUs was preempted.
>>>> +
>>>> +
>>>> +8. KVM_HC_PAGE_ENC_STATUS
>>>> +-------------------------
>>>> +:Architecture: x86
>>>> +:Status: active
>>>> +:Purpose: Notify the encryption status changes in guest page table (SEV guest)
>>>> +
>>>> +a0: the guest physical address of the start page
>>>> +a1: the number of pages
>>>> +a2: encryption attribute
>>>> +
>>>> +   Where:
>>>> +       * 1: Encryption attribute is set
>>>> +       * 0: Encryption attribute is cleared
>>>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>>>> index 98959e8cd448..90718fa3db47 100644
>>>> --- a/arch/x86/include/asm/kvm_host.h
>>>> +++ b/arch/x86/include/asm/kvm_host.h
>>>> @@ -1267,6 +1267,8 @@ struct kvm_x86_ops {
>>>>
>>>>         bool (*apic_init_signal_blocked)(struct kvm_vcpu *vcpu);
>>>>         int (*enable_direct_tlbflush)(struct kvm_vcpu *vcpu);
>>>> +       int (*page_enc_status_hc)(struct kvm *kvm, unsigned long gpa,
>>>> +                                 unsigned long sz, unsigned long mode);
>>> Nit: spell out size instead of sz.
>>>>  };
>>>>
>>>>  struct kvm_arch_async_pf {
>>>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>>>> index 7c2721e18b06..1d8beaf1bceb 100644
>>>> --- a/arch/x86/kvm/svm.c
>>>> +++ b/arch/x86/kvm/svm.c
>>>> @@ -136,6 +136,8 @@ struct kvm_sev_info {
>>>>         int fd;                 /* SEV device fd */
>>>>         unsigned long pages_locked; /* Number of pages locked */
>>>>         struct list_head regions_list;  /* List of registered regions */
>>>> +       unsigned long *page_enc_bmap;
>>>> +       unsigned long page_enc_bmap_size;
>>>>  };
>>>>
>>>>  struct kvm_svm {
>>>> @@ -1991,6 +1993,9 @@ static void sev_vm_destroy(struct kvm *kvm)
>>>>
>>>>         sev_unbind_asid(kvm, sev->handle);
>>>>         sev_asid_free(sev->asid);
>>>> +
>>>> +       kvfree(sev->page_enc_bmap);
>>>> +       sev->page_enc_bmap = NULL;
>>>>  }
>>>>
>>>>  static void avic_vm_destroy(struct kvm *kvm)
>>>> @@ -7593,6 +7598,94 @@ static int sev_receive_finish(struct kvm *kvm, struct kvm_sev_cmd *argp)
>>>>         return ret;
>>>>  }
>>>>
>>>> +static int sev_resize_page_enc_bitmap(struct kvm *kvm, unsigned long new_size)
>>>> +{
>>>> +       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
>>>> +       unsigned long *map;
>>>> +       unsigned long sz;
>>>> +
>>>> +       if (sev->page_enc_bmap_size >= new_size)
>>>> +               return 0;
>>>> +
>>>> +       sz = ALIGN(new_size, BITS_PER_LONG) / 8;
>>>> +
>>>> +       map = vmalloc(sz);
>>>> +       if (!map) {
>>>> +               pr_err_once("Failed to allocate encrypted bitmap size %lx\n",
>>>> +                               sz);
>>>> +               return -ENOMEM;
>>>> +       }
>>>> +
>>>> +       /* mark the page encrypted (by default) */
>>>> +       memset(map, 0xff, sz);
>>>> +
>>>> +       bitmap_copy(map, sev->page_enc_bmap, sev->page_enc_bmap_size);
>>>> +       kvfree(sev->page_enc_bmap);
>>>> +
>>>> +       sev->page_enc_bmap = map;
>>>> +       sev->page_enc_bmap_size = new_size;
>>>> +
>>>> +       return 0;
>>>> +}
>>>> +
>>>> +static int svm_page_enc_status_hc(struct kvm *kvm, unsigned long gpa,
>>>> +                                 unsigned long npages, unsigned long enc)
>>>> +{
>>>> +       struct kvm_sev_info *sev = &to_kvm_svm(kvm)->sev_info;
>>>> +       kvm_pfn_t pfn_start, pfn_end;
>>>> +       gfn_t gfn_start, gfn_end;
>>>> +       int ret;
>>>> +
>>>> +       if (!sev_guest(kvm))
>>>> +               return -EINVAL;
>>>> +
>>>> +       if (!npages)
>>>> +               return 0;
>>>> +
>>>> +       gfn_start = gpa_to_gfn(gpa);
>>>> +       gfn_end = gfn_start + npages;
>>>> +
>>>> +       /* out of bound access error check */
>>>> +       if (gfn_end <= gfn_start)
>>>> +               return -EINVAL;
>>>> +
>>>> +       /* lets make sure that gpa exist in our memslot */
>>>> +       pfn_start = gfn_to_pfn(kvm, gfn_start);
>>>> +       pfn_end = gfn_to_pfn(kvm, gfn_end);
>>>> +
>>>> +       if (is_error_noslot_pfn(pfn_start) && !is_noslot_pfn(pfn_start)) {
>>>> +               /*
>>>> +                * Allow guest MMIO range(s) to be added
>>>> +                * to the page encryption bitmap.
>>>> +                */
>>>> +               return -EINVAL;
>>>> +       }
>>>> +
>>>> +       if (is_error_noslot_pfn(pfn_end) && !is_noslot_pfn(pfn_end)) {
>>>> +               /*
>>>> +                * Allow guest MMIO range(s) to be added
>>>> +                * to the page encryption bitmap.
>>>> +                */
>>>> +               return -EINVAL;
>>>> +       }
>>>> +
>>>> +       mutex_lock(&kvm->lock);
>>>> +       ret = sev_resize_page_enc_bitmap(kvm, gfn_end);
>>>> +       if (ret)
>>>> +               goto unlock;
>>>> +
>>>> +       if (enc)
>>>> +               __bitmap_set(sev->page_enc_bmap, gfn_start,
>>>> +                               gfn_end - gfn_start);
>>>> +       else
>>>> +               __bitmap_clear(sev->page_enc_bmap, gfn_start,
>>>> +                               gfn_end - gfn_start);
>>>> +
>>>> +unlock:
>>>> +       mutex_unlock(&kvm->lock);
>>>> +       return ret;
>>>> +}
>>>> +
>>>>  static int svm_mem_enc_op(struct kvm *kvm, void __user *argp)
>>>>  {
>>>>         struct kvm_sev_cmd sev_cmd;
>>>> @@ -7995,6 +8088,8 @@ static struct kvm_x86_ops svm_x86_ops __ro_after_init = {
>>>>         .need_emulation_on_page_fault = svm_need_emulation_on_page_fault,
>>>>
>>>>         .apic_init_signal_blocked = svm_apic_init_signal_blocked,
>>>> +
>>>> +       .page_enc_status_hc = svm_page_enc_status_hc,
>>>>  };
>>>>
>>>>  static int __init svm_init(void)
>>>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>>>> index 079d9fbf278e..f68e76ee7f9c 100644
>>>> --- a/arch/x86/kvm/vmx/vmx.c
>>>> +++ b/arch/x86/kvm/vmx/vmx.c
>>>> @@ -8001,6 +8001,7 @@ static struct kvm_x86_ops vmx_x86_ops __ro_after_init = {
>>>>         .nested_get_evmcs_version = NULL,
>>>>         .need_emulation_on_page_fault = vmx_need_emulation_on_page_fault,
>>>>         .apic_init_signal_blocked = vmx_apic_init_signal_blocked,
>>>> +       .page_enc_status_hc = NULL,
>>>>  };
>>>>
>>>>  static void vmx_cleanup_l1d_flush(void)
>>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>>> index cf95c36cb4f4..68428eef2dde 100644
>>>> --- a/arch/x86/kvm/x86.c
>>>> +++ b/arch/x86/kvm/x86.c
>>>> @@ -7564,6 +7564,12 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
>>>>                 kvm_sched_yield(vcpu->kvm, a0);
>>>>                 ret = 0;
>>>>                 break;
>>>> +       case KVM_HC_PAGE_ENC_STATUS:
>>>> +               ret = -KVM_ENOSYS;
>>>> +               if (kvm_x86_ops->page_enc_status_hc)
>>>> +                       ret = kvm_x86_ops->page_enc_status_hc(vcpu->kvm,
>>>> +                                       a0, a1, a2);
>>>> +               break;
>>>>         default:
>>>>                 ret = -KVM_ENOSYS;
>>>>                 break;
>>>> diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
>>>> index 8b86609849b9..847b83b75dc8 100644
>>>> --- a/include/uapi/linux/kvm_para.h
>>>> +++ b/include/uapi/linux/kvm_para.h
>>>> @@ -29,6 +29,7 @@
>>>>  #define KVM_HC_CLOCK_PAIRING           9
>>>>  #define KVM_HC_SEND_IPI                10
>>>>  #define KVM_HC_SCHED_YIELD             11
>>>> +#define KVM_HC_PAGE_ENC_STATUS         12
>>>>
>>>>  /*
>>>>   * hypercalls use architecture specific
>>>> --
>>>> 2.17.1
>>>>
>>> I'm still not excited by the dynamic resizing. I believe the guest
>>> hypercall can be called in atomic contexts, which makes me
>>> particularly unexcited to see a potentially large vmalloc on the host
>>> followed by filling the buffer. Particularly when the buffer might be
>>> non-trivial in size (~1MB per 32GB, per some back of the envelope
>>> math).
>>>
>> I think looking at more practical situations, most hypercalls will
>> happen during the boot stage, when device specific initializations are
>> happening, so typically the maximum page encryption bitmap size would
>> be allocated early enough.
>>
>> In fact, initial hypercalls made by OVMF will probably allocate the
>> maximum page bitmap size even before the kernel comes up, especially
>> as they will be setting up page enc/dec status for MMIO, ROM, ACPI
>> regions, PCI device memory, etc., and most importantly for
>> "non-existent" high memory range (which will probably be the
>> maximum size page encryption bitmap allocated/resized).
>>
>> Let me know if you have different thoughts on this ?
> Hi Ashish,
>
> If this is not an issue in practice, we can just move past this. If we
> are basically guaranteed that OVMF will trigger hypercalls that expand
> the bitmap beyond the top of memory, then, yes, that should work. That
> leaves me slightly nervous that OVMF might regress since it's not
> obvious that calling a hypercall beyond the top of memory would be
> "required" for avoiding a somewhat indirectly related issue in guest
> kernels.


If possible, we should try to avoid growing/shrinking the bitmap.
Today OVMF may not access beyond the top of memory, but a malicious
guest could send down a hypercall that triggers a huge memory
allocation on the host side and eventually causes a denial of service
for others. I am in favor of finding a solution to handle this case.
How about Steve's suggestion of the VMM making a call down to the
kernel to tell it how big the bitmap should be? Initially it would be
sized for the guest RAM, and if the VMM ever expands guest memory it
can send down another notification to grow the bitmap.
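
Something along these lines is what I have in mind (just a rough
sketch, completely untested; KVM_CAP_SEV_PAGE_ENC_BITMAP and the
handler name are made-up placeholders, not something this series
defines). Userspace would pass the guest RAM size in pages -- one bit
per 4K page, so roughly 1MB of bitmap per 32GB of guest RAM, matching
Steve's estimate -- and we allocate the whole thing once, up front:

/* Placeholder capability number, for illustration only. */
#define KVM_CAP_SEV_PAGE_ENC_BITMAP	500

static int svm_vm_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap)
{
	int r;

	switch (cap->cap) {
	case KVM_CAP_SEV_PAGE_ENC_BITMAP:
		if (!sev_guest(kvm))
			return -EINVAL;
		mutex_lock(&kvm->lock);
		/* args[0] = number of guest pages, i.e. bits in the bitmap. */
		r = sev_resize_page_enc_bitmap(kvm, cap->args[0]);
		mutex_unlock(&kvm->lock);
		return r;
	default:
		return -EINVAL;
	}
}

With that in place, svm_page_enc_status_hc() would simply fail (or
ignore) requests beyond the configured size instead of calling
sev_resize_page_enc_bitmap() itself, so a malicious guest can no
longer force a large host-side allocation.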

Alternatively, instead of adding a new ioctl, I was wondering if we
can extend kvm_arch_prepare_memory_region() with an SVM-specific
x86_ops callback which reads the userspace-provided memory region,
calculates the amount of guest RAM managed by KVM, and grows/shrinks
the bitmap based on that information. I have not looked deeply enough
to see whether it's doable, but if it works we can avoid adding yet
another ioctl.
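
Roughly what I am thinking of (again untested, and the hook name is
made up for illustration): kvm_arch_prepare_memory_region() would call
into an svm-specific op with the userspace memory region, and we grow
the bitmap to cover the highest gfn the new slot maps (shrinking on
slot deletion would need more thought):

/* Hypothetical x86_ops hook, called from kvm_arch_prepare_memory_region(). */
static int svm_prepare_memory_region(struct kvm *kvm,
				     const struct kvm_userspace_memory_region *mem,
				     enum kvm_mr_change change)
{
	unsigned long last_gfn;
	int ret;

	if (!sev_guest(kvm))
		return 0;

	if (change != KVM_MR_CREATE && change != KVM_MR_MOVE)
		return 0;

	/* One bit per gfn; size the bitmap up to the end of this slot. */
	last_gfn = (mem->guest_phys_addr + mem->memory_size) >> PAGE_SHIFT;

	mutex_lock(&kvm->lock);
	ret = sev_resize_page_enc_bitmap(kvm, last_gfn);
	mutex_unlock(&kvm->lock);

	return ret;
}

and in kvm_arch_prepare_memory_region():

	if (kvm_x86_ops->prepare_memory_region)
		return kvm_x86_ops->prepare_memory_region(kvm, mem, change);

That keeps the bitmap sizing driven by what userspace actually hands
to KVM rather than by guest hypercalls, without a new ioctl.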

>
> Adding a kvm_enable_cap doesn't seem particularly complicated and side
> steps all of these concerns, so I still prefer it. Caveat, haven't
> reviewed the patches about the feature bits yet: the enable cap would
> also make it possible for kernels that support live migration to avoid
> advertising live migration if host usermode does not want it to be
> advertised. This seems pretty important, since hosts that don't plan
> to live migrate should have the ability to tell the guest to stop
> calling.
>
> Thanks,
> Steve

Thread overview: 107+ messages
2020-03-30  6:19 [PATCH v6 00/14] Add AMD SEV guest live migration support Ashish Kalra
2020-03-30  6:19 ` [PATCH v6 01/14] KVM: SVM: Add KVM_SEV SEND_START command Ashish Kalra
2020-04-02  6:27   ` Venu Busireddy
2020-04-02 12:59     ` Brijesh Singh
2020-04-02 16:37       ` Venu Busireddy
2020-04-02 18:04         ` Brijesh Singh
2020-04-02 18:57           ` Venu Busireddy
2020-04-02 19:17             ` Brijesh Singh
2020-04-02 19:43               ` Venu Busireddy
2020-04-02 20:04                 ` Brijesh Singh
2020-04-02 20:19                   ` Venu Busireddy
2020-04-02 17:51   ` Krish Sadhukhan
2020-04-02 18:38     ` Brijesh Singh
2020-03-30  6:20 ` [PATCH v6 02/14] KVM: SVM: Add KVM_SEND_UPDATE_DATA command Ashish Kalra
2020-04-02 17:55   ` Venu Busireddy
2020-04-02 20:13   ` Krish Sadhukhan
2020-03-30  6:20 ` [PATCH v6 03/14] KVM: SVM: Add KVM_SEV_SEND_FINISH command Ashish Kalra
2020-04-02 18:17   ` Venu Busireddy
2020-04-02 20:15   ` Krish Sadhukhan
2020-03-30  6:21 ` [PATCH v6 04/14] KVM: SVM: Add support for KVM_SEV_RECEIVE_START command Ashish Kalra
2020-04-02 21:35   ` Venu Busireddy
2020-04-02 22:09   ` Krish Sadhukhan
2020-03-30  6:21 ` [PATCH v6 05/14] KVM: SVM: Add KVM_SEV_RECEIVE_UPDATE_DATA command Ashish Kalra
2020-04-02 22:25   ` Krish Sadhukhan
2020-04-02 22:29   ` Venu Busireddy
2020-04-07  0:49     ` Steve Rutherford
2020-03-30  6:21 ` [PATCH v6 06/14] KVM: SVM: Add KVM_SEV_RECEIVE_FINISH command Ashish Kalra
2020-04-02 22:24   ` Venu Busireddy
2020-04-02 22:27   ` Krish Sadhukhan
2020-04-07  0:57     ` Steve Rutherford
2020-03-30  6:21 ` [PATCH v6 07/14] KVM: x86: Add AMD SEV specific Hypercall3 Ashish Kalra
2020-04-02 22:36   ` Venu Busireddy
2020-04-02 23:54   ` Krish Sadhukhan
2020-04-07  1:22     ` Steve Rutherford
2020-03-30  6:22 ` [PATCH v6 08/14] KVM: X86: Introduce KVM_HC_PAGE_ENC_STATUS hypercall Ashish Kalra
2020-04-03  0:00   ` Venu Busireddy
2020-04-03  1:31   ` Krish Sadhukhan
2020-04-03  1:57     ` Ashish Kalra
2020-04-03  2:58       ` Ashish Kalra
2020-04-06 22:27         ` Krish Sadhukhan
2020-04-07  2:17   ` Steve Rutherford
2020-04-07  5:27     ` Ashish Kalra
2020-04-08  0:01       ` Steve Rutherford
2020-04-08  0:29         ` Brijesh Singh [this message]
2020-04-08  0:35           ` Steve Rutherford
2020-04-08  1:17             ` Ashish Kalra
2020-04-08  1:38               ` Steve Rutherford
2020-04-08  2:34                 ` Brijesh Singh
2020-04-08  3:18                   ` Ashish Kalra
2020-04-09 16:18                     ` Ashish Kalra
2020-04-09 20:41                       ` Steve Rutherford
2020-03-30  6:22 ` [PATCH v6 09/14] KVM: x86: Introduce KVM_GET_PAGE_ENC_BITMAP ioctl Ashish Kalra
2020-04-03 18:30   ` Venu Busireddy
2020-04-03 20:18   ` Krish Sadhukhan
2020-04-03 20:47     ` Ashish Kalra
2020-04-06 22:07       ` Krish Sadhukhan
2020-04-03 20:55     ` Venu Busireddy
2020-04-03 21:01       ` Ashish Kalra
2020-03-30  6:22 ` [PATCH v6 10/14] mm: x86: Invoke hypercall when page encryption status is changed Ashish Kalra
2020-04-03 21:07   ` Krish Sadhukhan
2020-04-03 21:30     ` Ashish Kalra
2020-04-03 21:36   ` Venu Busireddy
2020-03-30  6:22 ` [PATCH v6 11/14] KVM: x86: Introduce KVM_SET_PAGE_ENC_BITMAP ioctl Ashish Kalra
2020-04-03 21:10   ` Krish Sadhukhan
2020-04-03 21:46   ` Venu Busireddy
2020-04-08  0:26   ` Steve Rutherford
2020-04-08  1:48     ` Ashish Kalra
2020-04-10  0:06       ` Steve Rutherford
2020-04-10  1:23         ` Ashish Kalra
2020-04-10 18:08           ` Steve Rutherford
2020-03-30  6:23 ` [PATCH v6 12/14] KVM: x86: Introduce KVM_PAGE_ENC_BITMAP_RESET ioctl Ashish Kalra
2020-04-03 21:14   ` Krish Sadhukhan
2020-04-03 21:45     ` Ashish Kalra
2020-04-06 18:52       ` Krish Sadhukhan
2020-04-08  1:25         ` Steve Rutherford
2020-04-08  1:52           ` Ashish Kalra
2020-04-10  0:59             ` Steve Rutherford
2020-04-10  1:34               ` Ashish Kalra
2020-04-10 18:14                 ` Steve Rutherford
2020-04-10 20:16                   ` Steve Rutherford
2020-04-10 20:18                     ` Steve Rutherford
2020-04-10 20:55                       ` Kalra, Ashish
2020-04-10 21:42                         ` Brijesh Singh
2020-04-10 21:46                           ` Sean Christopherson
2020-04-10 21:58                             ` Brijesh Singh
2020-04-10 22:02                         ` Brijesh Singh
2020-04-11  0:35                           ` Ashish Kalra
2020-04-03 22:01   ` Venu Busireddy
2020-03-30  6:23 ` [PATCH v6 13/14] KVM: x86: Introduce new KVM_FEATURE_SEV_LIVE_MIGRATION feature & Custom MSR Ashish Kalra
2020-03-30 15:52   ` Brijesh Singh
2020-03-30 16:42     ` Ashish Kalra
     [not found]     ` <20200330162730.GA21567@ashkalra_ubuntu_server>
     [not found]       ` <1de5e95f-4485-f2ff-aba8-aa8b15564796@amd.com>
     [not found]         ` <20200331171336.GA24050@ashkalra_ubuntu_server>
     [not found]           ` <20200401070931.GA8562@ashkalra_ubuntu_server>
2020-04-02 23:29             ` Ashish Kalra
2020-04-03 23:46   ` Krish Sadhukhan
2020-03-30  6:23 ` [PATCH v6 14/14] KVM: x86: Add kexec support for SEV Live Migration Ashish Kalra
2020-03-30 16:00   ` Brijesh Singh
2020-03-30 16:45     ` Ashish Kalra
2020-03-31 14:26       ` Brijesh Singh
2020-04-02 23:34         ` Ashish Kalra
2020-04-03 12:57   ` Dave Young
2020-04-04  0:55   ` Krish Sadhukhan
2020-04-04 21:57     ` Ashish Kalra
2020-04-06 18:37       ` Krish Sadhukhan
2020-03-30 17:24 ` [PATCH v6 00/14] Add AMD SEV guest live migration support Venu Busireddy
2020-03-30 18:28   ` Ashish Kalra
2020-03-30 19:13     ` Venu Busireddy
2020-03-30 21:52       ` Ashish Kalra
2020-03-31 14:42         ` Venu Busireddy
