From: Claudio Imbrenda <imbrenda@linux.ibm.com>
To: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: David Hildenbrand <david@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	kvm@vger.kernel.org, frankja@linux.ibm.com, thuth@redhat.com,
	pasic@linux.ibm.com, linux-s390@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v1 00/11] KVM: s390: pv: implement lazy destroy
Date: Tue, 18 May 2021 19:00:49 +0200
Message-ID: <20210518190049.7e6e661f@ibm-vm>
In-Reply-To: <e66400c5-a1b6-c5fe-d715-c08b166a7b54@de.ibm.com>

On Tue, 18 May 2021 18:55:56 +0200
Christian Borntraeger <borntraeger@de.ibm.com> wrote:

> On 18.05.21 18:31, Claudio Imbrenda wrote:
> > On Tue, 18 May 2021 18:22:42 +0200
> > David Hildenbrand <david@redhat.com> wrote:
> >   
> >> On 18.05.21 18:19, Claudio Imbrenda wrote:  
> >>> On Tue, 18 May 2021 18:04:11 +0200
> >>> Cornelia Huck <cohuck@redhat.com> wrote:
> >>>      
> >>>> On Tue, 18 May 2021 17:36:24 +0200
> >>>> Claudio Imbrenda <imbrenda@linux.ibm.com> wrote:
> >>>>     
> >>>>> On Tue, 18 May 2021 17:05:37 +0200
> >>>>> Cornelia Huck <cohuck@redhat.com> wrote:
> >>>>>         
> >>>>>> On Mon, 17 May 2021 22:07:47 +0200
> >>>>>> Claudio Imbrenda <imbrenda@linux.ibm.com> wrote:  
> >>>>     
> >>>>>>> This means that the same address space can have memory
> >>>>>>> belonging to more than one protected guest, although only one
> >>>>>>> will be running; the others will in fact not even have any
> >>>>>>> CPUs.  
> >>>>>>
> >>>>>> Are those set-aside-but-not-yet-cleaned-up pages still possibly
> >>>>>> accessible in any way? I would assume that they only belong to
> >>>>>> the  
> >>>>>
> >>>>> in case of reboot: yes, they are still in the address space of
> >>>>> the guest, and can be swapped if needed
> >>>>>         
> >>>>>> 'zombie' guests, and any new or rebooted guest is a new entity
> >>>>>> that needs to get new pages?  
> >>>>>
> >>>>> the rebooted guest (normal or secure) will re-use the same pages
> >>>>> of the old guest (before or after cleanup, which is the reason
> >>>>> for patches 3 and 4)  
> >>>>
> >>>> Took a look at those patches, makes sense.
> >>>>     
> >>>>>
> >>>>> the KVM guest is not affected in case of reboot, so the
> >>>>> userspace address space is not touched.  
> >>>>
> >>>> 'guest' is a bit ambiguous here -- do you mean the vm here, and
> >>>> the actual guest above?
> >>>>     
> >>>
> >>> yes, this is tricky: there is the guest OS, which terminates or
> >>> reboots; then there is the "secure configuration" entity, handled
> >>> by the Ultravisor; and then there is the KVM VM
> >>>
> >>> when a secure guest reboots, the "secure configuration" is
> >>> dismantled (in this case, in a deferred way), and the KVM VM (and
> >>> its memory) is not directly affected
> >>>
> >>> what happened before was that the secure configuration was
> >>> dismantled synchronously, and then re-created.
> >>>
> >>> now instead, a new secure configuration is created using the same
> >>> KVM VM (and thus the same mm), before the old secure configuration
> >>> has been completely dismantled. hence the same KVM VM can have
> >>> multiple secure configurations associated with it, all sharing the
> >>> same address space.
> >>>
> >>> of course, only the newest one is actually running, the other ones
> >>> are "zombies", without CPUs.
> >>>      
> >>
> >> Can a guest trigger a DoS?  
> > 
> > I don't see how
> > 
> > a guest can fill its memory and then reboot, and then fill its
> > memory again and then reboot... but that will take time: filling
> > the memory will itself clean up leftover pages from previous boots.
> >  
> 
> In essence this guest will then synchronously wait for the page to be
> exported and reimported, correct?

correct
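
conceptually, the fault path on the new guest does something like
this (a rough sketch of the "export before import" idea from patch 7;
every helper name below is made up for illustration, these are not
the real kernel symbols):

	static int make_page_secure_sketch(struct page *page,
					   unsigned long new_handle)
	{
		/*
		 * if the page still belongs to an old (zombie) secure
		 * configuration, the Ultravisor will refuse the import;
		 * export it first, then import it for the new guest.
		 */
		if (page_is_secure(page) &&
		    !page_owned_by(page, new_handle))
			export_page(page);	/* uv_convert_from_secure-like */

		/* the faulting guest waits synchronously for this */
		return import_page(page, new_handle);
	}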

> > "normal" reboot loops will be fast, because there won't be much
> > memory to process
> > 
> > I have actually tested mixed reboot/shutdown loops, and the system
> > behaved as you would expect when under load.  
> 
> I guess the memory will continue to be accounted to the memcg?
> Correct?

for the reboot case, yes, since the mm is not directly affected.
for the shutdown case, I'm not sure.
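
to make the lazy destroy idea above a bit more concrete, the
bookkeeping is roughly the following (an illustrative sketch only;
every name below is made up, the real code is in patches 8 and 9):

	struct zombie_config {
		struct list_head list;
		unsigned long handle;	/* UV handle of the old config */
	};

	static void lazy_destroy_on_reboot_sketch(struct kvm *kvm)
	{
		struct zombie_config *z = kmalloc(sizeof(*z), GFP_KERNEL);

		if (!z)
			return;	/* fall back to synchronous destroy */

		/*
		 * park the old secure configuration instead of tearing
		 * it down synchronously; it keeps no CPUs, only its
		 * handle, and its pages stay in the shared mm.
		 */
		z->handle = old_config_handle(kvm);
		list_add(&z->list, leftover_configs(kvm));

		/*
		 * the new secure configuration is created immediately
		 * on the same mm; the zombies are dismantled later in
		 * the background, or page by page on demand.
		 */
		create_new_config(kvm);
		schedule_background_cleanup(kvm);
	}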


Thread overview: 34+ messages
2021-05-17 20:07 [PATCH v1 00/11] KVM: s390: pv: implement lazy destroy Claudio Imbrenda
2021-05-17 20:07 ` [PATCH v1 01/11] KVM: s390: pv: leak the ASCE page when destroy fails Claudio Imbrenda
2021-05-18 10:26   ` Janosch Frank
2021-05-18 10:40     ` Claudio Imbrenda
2021-05-18 12:00       ` Janosch Frank
2021-05-17 20:07 ` [PATCH v1 02/11] KVM: s390: pv: properly handle page flags for protected guests Claudio Imbrenda
2021-05-17 20:07 ` [PATCH v1 03/11] KVM: s390: pv: handle secure storage violations " Claudio Imbrenda
2021-05-17 20:07 ` [PATCH v1 04/11] KVM: s390: pv: handle secure storage exceptions for normal guests Claudio Imbrenda
2021-05-17 20:07 ` [PATCH v1 05/11] KVM: s390: pv: refactor s390_reset_acc Claudio Imbrenda
2021-05-26 12:11   ` Janosch Frank
2021-05-17 20:07 ` [PATCH v1 06/11] KVM: s390: pv: usage counter instead of flag Claudio Imbrenda
2021-05-27  9:29   ` Janosch Frank
2021-05-17 20:07 ` [PATCH v1 07/11] KVM: s390: pv: add export before import Claudio Imbrenda
2021-05-26 11:56   ` Janosch Frank
2021-05-17 20:07 ` [PATCH v1 08/11] KVM: s390: pv: lazy destroy for reboot Claudio Imbrenda
2021-05-27  9:43   ` Janosch Frank
2021-05-17 20:07 ` [PATCH v1 09/11] KVM: s390: pv: extend lazy destroy to handle shutdown Claudio Imbrenda
2021-05-17 20:07 ` [PATCH v1 10/11] KVM: s390: pv: module parameter to fence lazy destroy Claudio Imbrenda
2021-05-27 10:35   ` Janosch Frank
2021-05-17 20:07 ` [PATCH v1 11/11] KVM: s390: pv: add support for UV feature bits Claudio Imbrenda
2021-05-18 15:05 ` [PATCH v1 00/11] KVM: s390: pv: implement lazy destroy Cornelia Huck
2021-05-18 15:36   ` Claudio Imbrenda
2021-05-18 15:45     ` Christian Borntraeger
2021-05-18 15:52       ` Cornelia Huck
2021-05-18 16:13       ` Claudio Imbrenda
2021-05-18 16:20         ` Christian Borntraeger
2021-05-18 16:34           ` Claudio Imbrenda
2021-05-18 16:35             ` Christian Borntraeger
2021-05-18 16:04     ` Cornelia Huck
2021-05-18 16:19       ` Claudio Imbrenda
2021-05-18 16:22         ` David Hildenbrand
2021-05-18 16:31           ` Claudio Imbrenda
2021-05-18 16:55             ` Christian Borntraeger
2021-05-18 17:00               ` Claudio Imbrenda [this message]
