From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: Thomas Huth <thuth@redhat.com>
Cc: "Cédric Le Goater" <clg@kaod.org>,
	qemu-s390x@nongnu.org, qemu-devel@nongnu.org,
	"Halil Pasic" <pasic@linux.ibm.com>,
	"Christian Borntraeger" <borntraeger@linux.ibm.com>,
	"Richard Henderson" <richard.henderson@linaro.org>,
	"David Hildenbrand" <david@redhat.com>,
	"Ilya Leoshkevich" <iii@linux.ibm.com>,
	"Eric Farman" <farman@linux.ibm.com>,
	"Cédric Le Goater" <clg@redhat.com>
Subject: Re: [PATCH 2/5] s390x/pv: Implement CGS check handler
Date: Thu, 5 Jan 2023 14:58:23 +0100
Message-ID: <20230105145823.6a7345e6@p-imbrenda>
In-Reply-To: <61d70e15-770b-7f62-54aa-3cc0ac3b3a35@redhat.com>

On Thu, 5 Jan 2023 12:42:54 +0100
Thomas Huth <thuth@redhat.com> wrote:

> On 04/01/2023 12.51, Cédric Le Goater wrote:
> > From: Cédric Le Goater <clg@redhat.com>
> > 
> > When a protected VM is started with the maximum number of CPUs (248),
> > the service call providing information on the CPUs requires more
> > buffer space than allocated, and QEMU aborts ungracefully:
> > 
> >      LOADPARM=[........]
> >      Using virtio-blk.
> >      Using SCSI scheme.
> >      ...................................................................................
> >      qemu-system-s390x: KVM_S390_MEM_OP failed: Argument list too long
> > 
> > Implement a test for this limitation in the ConfidentialGuestSupportClass
> > check handler and report a meaningful error to the user before the
> > machine starts.
> > 
> > Signed-off-by: Cédric Le Goater <clg@redhat.com>
> > ---
> >   hw/s390x/pv.c | 23 +++++++++++++++++++++++
> >   1 file changed, 23 insertions(+)
> > 
> > diff --git a/hw/s390x/pv.c b/hw/s390x/pv.c
> > index 8dfe92d8df..3a7ec70634 100644
> > --- a/hw/s390x/pv.c
> > +++ b/hw/s390x/pv.c
> > @@ -266,6 +266,26 @@ int s390_pv_kvm_init(ConfidentialGuestSupport *cgs, Error **errp)
> >       return 0;
> >   }
> >   
> > +static bool s390_pv_check_cpus(Error **errp)
> > +{
> > +    MachineState *ms = MACHINE(qdev_get_machine());
> > +    MachineClass *mc = MACHINE_GET_CLASS(ms);
> > +    uint32_t pv_max_cpus = mc->max_cpus - 1;  
> 
> Not sure whether "mc->max_cpus - 1" is the right approach here. I think it
> would be better to calculate the number of CPUs that we can support.
> 
> So AFAIK the problem is the SCLP information that is gathered during
> read_SCP_info() in hw/s390x/sclp.c. If protected virtualization is enabled,
> everything has to fit into one page (i.e. 4096 bytes) there.
> 
> So we have space for
> 
>   (TARGET_PAGE_SIZE - offset_cpu) / sizeof(CPUEntry)
> 
> CPUs.
> 
> With S390_FEAT_EXTENDED_LENGTH_SCCB enabled, offset_cpu is 144 (see struct 
> ReadInfo in sclp.h), otherwise it is 128.
> 
> That means, with S390_FEAT_EXTENDED_LENGTH_SCCB we can have a maximum of:
> 
>   (4096 - 144) / 16 = 247 CPUs
> 
> which is what you were trying to check with the mc->max_cpus - 1 here.
> 
> But with "-cpu els=off", offset_cpu is only 128, so (4096 - 128) / 16 = 248,
> which suggests we could fit all 248 CPUs even with protected VMs? Could you
> please give it a try?
> 
> Anyway, instead of using "pv_max_cpus = mc->max_cpus - 1", I'd suggest
> something like this:
> 
>   int offset_cpu = s390_has_feat(S390_FEAT_EXTENDED_LENGTH_SCCB) ?
>                       offsetof(ReadInfo, entries) :
>                       SCLP_READ_SCP_INFO_FIXED_CPU_OFFSET;
>   pv_max_cpus = (TARGET_PAGE_SIZE - offset_cpu) / sizeof(CPUEntry);

I agree with Thomas here
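
For illustration, the whole check handler could then look roughly like this
(untested sketch based on the hunk quoted above; the error message wording is
only an example, and ReadInfo/CPUEntry would need "hw/s390x/sclp.h" to be
visible in pv.c):

  static bool s390_pv_check_cpus(Error **errp)
  {
      MachineState *ms = MACHINE(qdev_get_machine());
      int offset_cpu = s390_has_feat(S390_FEAT_EXTENDED_LENGTH_SCCB) ?
                           offsetof(ReadInfo, entries) :
                           SCLP_READ_SCP_INFO_FIXED_CPU_OFFSET;
      /* number of CPU entries that still fit into the one-page SCCB */
      uint32_t pv_max_cpus = (TARGET_PAGE_SIZE - offset_cpu) / sizeof(CPUEntry);

      if (ms->smp.max_cpus > pv_max_cpus) {
          error_setg(errp, "Protected VMs support a maximum of %u CPUs",
                     pv_max_cpus);
          return false;
      }
      return true;
  }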

> 
>    Thomas
> 
> 



Thread overview: 18+ messages
2023-01-04 11:51 [PATCH 0/5] s390x/pv: Improve protected VM support Cédric Le Goater
2023-01-04 11:51 ` [PATCH 1/5] confidential guest support: Introduce a 'check' class handler Cédric Le Goater
2023-01-05  8:46   ` Thomas Huth
2023-01-05 10:34     ` Philippe Mathieu-Daudé
2023-01-05 13:56     ` Cédric Le Goater
2023-01-04 11:51 ` [PATCH 2/5] s390x/pv: Implement CGS check handler Cédric Le Goater
2023-01-05 11:42   ` Thomas Huth
2023-01-05 13:58     ` Claudio Imbrenda [this message]
2023-01-05 14:47       ` Cédric Le Goater
2023-01-05 14:53         ` Thomas Huth
2023-01-05 17:12           ` Cédric Le Goater
2023-01-04 11:51 ` [PATCH 3/5] s390x/pv: Check for support on the host Cédric Le Goater
2023-01-05 11:46   ` Thomas Huth
2023-01-04 11:51 ` [PATCH 4/5] s390x/pv: Introduce a s390_pv_check() helper for runtime Cédric Le Goater
2023-01-05 12:33   ` Thomas Huth
2023-01-05 14:24   ` Claudio Imbrenda
2023-01-04 11:51 ` [PATCH 5/5] s390x/pv: Move check on hugepage under s390_pv_guest_check() Cédric Le Goater
2023-01-05 12:37   ` Thomas Huth
