All of lore.kernel.org
 help / color / mirror / Atom feed
From: Thomas Huth <thuth@redhat.com>
To: "Cédric Le Goater" <clg@redhat.com>,
	"Cédric Le Goater" <clg@kaod.org>,
	qemu-s390x@nongnu.org
Cc: qemu-devel@nongnu.org, Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@linux.ibm.com>,
	Claudio Imbrenda <imbrenda@linux.ibm.com>,
	frankja@linux.ibm.com, David Hildenbrand <david@redhat.com>,
	Ilya Leoshkevich <iii@linux.ibm.com>,
	Eric Farman <farman@linux.ibm.com>
Subject: Re: [PATCH v2 1/4] s390x/pv: Implement a CGS check helper
Date: Mon, 9 Jan 2023 15:12:28 +0100	[thread overview]
Message-ID: <7f5de5f7-2bf1-869d-7b9b-ef44cbf78116@redhat.com> (raw)
In-Reply-To: <cd41a799-3f5b-7503-66d7-c5a8c99611f9@redhat.com>

On 09/01/2023 14.57, Cédric Le Goater wrote:
> On 1/9/23 14:34, Thomas Huth wrote:
>> On 06/01/2023 08.53, Cédric Le Goater wrote:
>>> From: Cédric Le Goater <clg@redhat.com>
>>>
>>> When a protected VM is started with the maximum number of CPUs (248),
>>> the service call providing information on the CPUs requires more
>>> buffer space than allocated and QEMU disgracefully aborts :
>>>
>>>      LOADPARM=[........]
>>>      Using virtio-blk.
>>>      Using SCSI scheme.
>>>      
>>> ................................................................................... 
>>>
>>>      qemu-system-s390x: KVM_S390_MEM_OP failed: Argument list too long
>>>
>>> When protected virtualization is initialized, compute the maximum
>>> number of vCPUs supported by the machine and return useful information
>>> to the user before the machine starts in case of error.
>>>
>>> Suggested-by: Thomas Huth <thuth@redhat.com>
>>> Signed-off-by: Cédric Le Goater <clg@redhat.com>
>>> ---
>>>   hw/s390x/pv.c | 40 ++++++++++++++++++++++++++++++++++++++++
>>>   1 file changed, 40 insertions(+)
>>>
>>> diff --git a/hw/s390x/pv.c b/hw/s390x/pv.c
>>> index 8dfe92d8df..8a1c71436b 100644
>>> --- a/hw/s390x/pv.c
>>> +++ b/hw/s390x/pv.c
>>> @@ -20,6 +20,7 @@
>>>   #include "exec/confidential-guest-support.h"
>>>   #include "hw/s390x/ipl.h"
>>>   #include "hw/s390x/pv.h"
>>> +#include "hw/s390x/sclp.h"
>>>   #include "target/s390x/kvm/kvm_s390x.h"
>>>   static bool info_valid;
>>> @@ -249,6 +250,41 @@ struct S390PVGuestClass {
>>>       ConfidentialGuestSupportClass parent_class;
>>>   };
>>> +/*
>>> + * If protected virtualization is enabled, the amount of data that the
>>> + * Read SCP Info Service Call can use is limited to one page. The
>>> + * available space also depends on the Extended-Length SCCB (ELS)
>>> + * feature which can take more buffer space to store feature
>>> + * information. This impacts the maximum number of CPUs supported in
>>> + * the machine.
>>> + */
>>> +static uint32_t s390_pv_get_max_cpus(void)
>>> +{
>>> +    int offset_cpu = s390_has_feat(S390_FEAT_EXTENDED_LENGTH_SCCB) ?
>>> +        offsetof(ReadInfo, entries) : SCLP_READ_SCP_INFO_FIXED_CPU_OFFSET;
>>> +
>>> +    return (TARGET_PAGE_SIZE - offset_cpu) / sizeof(CPUEntry);
>>> +}
>>> +
>>> +static bool s390_pv_check_cpus(Error **errp)
>>> +{
>>> +    MachineState *ms = MACHINE(qdev_get_machine());
>>> +    uint32_t pv_max_cpus = s390_pv_get_max_cpus();
>>> +
>>> +    if (ms->smp.max_cpus > pv_max_cpus) {
>>> +        error_setg(errp, "Protected VMs support a maximum of %d CPUs",
>>> +                   pv_max_cpus);
>>> +        return false;
>>> +    }
>>> +
>>> +    return true;
>>> +}
>>> +
>>> +static bool s390_pv_guest_check(ConfidentialGuestSupport *cgs, Error 
>>> **errp)
>>> +{
>>> +    return s390_pv_check_cpus(errp);
>>> +}
>>> +
>>>   int s390_pv_kvm_init(ConfidentialGuestSupport *cgs, Error **errp)
>>>   {
>>>       if (!object_dynamic_cast(OBJECT(cgs), TYPE_S390_PV_GUEST)) {
>>> @@ -261,6 +297,10 @@ int s390_pv_kvm_init(ConfidentialGuestSupport *cgs, 
>>> Error **errp)
>>>           return -1;
>>>       }
>>> +    if (!s390_pv_guest_check(cgs, errp)) {
>>> +        return -1;
>>> +    }
>>> +
>>>       cgs->ready = true;
>>>       return 0;
>>
>> Looks good to me now.
>>
>> Reviewed-by: Thomas Huth <thuth@redhat.com>
> 
> I think we could move the huge page test in s390_pv_guest_check() also.
> We are finishing a discussion with Janosch on the runtime test and I will
> send a v3.

Core question is likely: What if the hypervisor admin does not know whether 
the guest will run in protected mode or not, and thus always wants to enable 
the feature (so that the owner of the guest can decide)? So we cannot know 
right from the start whether we have a confidential guest or not? ... should 
we then really check the condition at the beginning, or is it better to 
check when the guest tries to switch to protected mode?

  Thomas



  reply	other threads:[~2023-01-09 15:11 UTC|newest]

Thread overview: 21+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2023-01-06  7:53 [PATCH v2 0/4] s390x/pv: Improve protected VM support Cédric Le Goater
2023-01-06  7:53 ` [PATCH v2 1/4] s390x/pv: Implement a CGS check helper Cédric Le Goater
2023-01-09 13:34   ` Thomas Huth
2023-01-09 13:57     ` Cédric Le Goater
2023-01-09 14:12       ` Thomas Huth [this message]
2023-01-09 14:28         ` Cédric Le Goater
2023-01-06  7:53 ` [PATCH v2 2/4] s390x/pv: Check for support on the host Cédric Le Goater
2023-01-09  8:45   ` Janosch Frank
2023-01-09  9:44     ` Cédric Le Goater
2023-01-09 10:49       ` Janosch Frank
2023-01-06  7:53 ` [PATCH v2 3/4] s390x/pv: Introduce a s390_pv_check() helper for runtime Cédric Le Goater
2023-01-09  9:04   ` Janosch Frank
2023-01-09  9:27     ` Cédric Le Goater
2023-01-09  9:49       ` Janosch Frank
2023-01-09 13:30         ` Cédric Le Goater
2023-01-09 13:45           ` Janosch Frank
2023-01-09 13:53             ` Cédric Le Goater
2023-01-09 14:31               ` Janosch Frank
2023-01-09 14:52                 ` Janosch Frank
2023-01-09 15:24                   ` Cédric Le Goater
2023-01-06  7:53 ` [PATCH v2 4/4] s390x/pv: Move check on hugepage under s390_pv_guest_check() Cédric Le Goater

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=7f5de5f7-2bf1-869d-7b9b-ef44cbf78116@redhat.com \
    --to=thuth@redhat.com \
    --cc=borntraeger@linux.ibm.com \
    --cc=clg@kaod.org \
    --cc=clg@redhat.com \
    --cc=david@redhat.com \
    --cc=farman@linux.ibm.com \
    --cc=frankja@linux.ibm.com \
    --cc=iii@linux.ibm.com \
    --cc=imbrenda@linux.ibm.com \
    --cc=pasic@linux.ibm.com \
    --cc=qemu-devel@nongnu.org \
    --cc=qemu-s390x@nongnu.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.