From: "Jan Beulich" <JBeulich@suse.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Wei Liu <wei.liu2@citrix.com>
Subject: Re: [PATCH] libxc: don't fail domain creation when unpacking initrd fails
Date: Fri, 20 Oct 2017 09:47:29 -0600	[thread overview]
Message-ID: <59EA36B10200007800188D0F@prv-mh.provo.novell.com> (raw)
In-Reply-To: <23016.48995.657539.136790@mariner.uk.xensource.com>

>>> On 19.10.17 at 17:06, <ian.jackson@eu.citrix.com> wrote:
> Jan Beulich writes ("Re: [PATCH] libxc: don't fail domain creation when 
> unpacking initrd fails"):
>> On 16.10.17 at 18:43, <ian.jackson@eu.citrix.com> wrote:
>> > I'm afraid I still find the patch less clear than it could be.
>> > The new semantics of xc_dom_ramdisk_check_size are awkward.  And
>> > looking at it briefly, I think it might be possible to try the unzip
>> > even if the size is too large.
>> 
>> I don't think so - xc_dom_ramdisk_check_size() returns 1
>> whenever the decompressed size is above the limit. What I do
>> admit is that in the case where the compressed size is larger
>> than the uncompressed one, with the limit lying in between, and
>> with decompression failing, we may accept something that's
>> above the limit. I'm not sure how bad that is, though, as the
>> limit is pretty arbitrary anyway.
> 
> Conceptually what you are trying to do is have two alternative
> strategies.  Those two strategies have different limits.  So "the
> limit" is not a meaningful concept.
> 
>> > What you are really trying to do here is to pursue two strategies in
>> > parallel.  And ideally they would not be entangled.
>> 
>> I would have wanted to do things in sequence rather than in
>> parallel. I can't see how that could work though, in particular
>> when considering the case mentioned above (uncompressed size
>> smaller than compressed) - as the space allocation in the guest
>> can't be reverted, I need to allocate the larger of the two sizes
>> anyway.
> 
> I don't think it can work.  I think you need to pursue them in
> parallel and keep separate records, for each one, of whether we are
> still pursuing it or whether it has failed (and of course its
> necessary locals).
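
(Reduced to bare bones - every name below is made up for illustration
and doesn't exist in libxc - I read that suggestion as: give each
strategy its own record of whether it is still being pursued plus the
space it needs, and size the one-shot guest allocation to cover
whichever still-viable strategy needs more:)

#include <stdbool.h>
#include <stddef.h>

struct strategy {
    bool viable;   /* still being pursued, or already failed? */
    size_t need;   /* space this strategy would occupy in the guest */
};

/* The guest space allocation cannot be reverted, so it has to cover
 * every strategy that is still in play. */
static size_t space_needed(const struct strategy *unzip,
                           const struct strategy *raw)
{
    size_t need = 0;

    if ( unzip->viable && unzip->need > need )
        need = unzip->need;
    if ( raw->viable && raw->need > need )
        need = raw->need;

    return need;
}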

So before I do another pointless round of backporting (for the
change to be tested in the environment where it is needed),
does the below new function (with xc_dom_ramdisk_check_size()
dropped altogether) look any better to you?

Thanks, Jan

static int xc_dom_build_ramdisk(struct xc_dom_image *dom)
{
    size_t unziplen, ramdisklen;
    void *ramdiskmap;

    if ( !dom->ramdisk_seg.vstart )
        unziplen = xc_dom_check_gzip(dom->xch,
                                     dom->ramdisk_blob, dom->ramdisk_size);
    else
        unziplen = 0;

    ramdisklen = max(unziplen, dom->ramdisk_size);
    if ( dom->max_ramdisk_size )
    {
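        /*
         * If the larger of the two candidate sizes exceeds the limit,
         * retry with only the smaller one, dropping the decompression
         * attempt if its output is what no longer fits.
         */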
        if ( unziplen && ramdisklen > dom->max_ramdisk_size )
        {
            ramdisklen = min(unziplen, dom->ramdisk_size);
            if ( unziplen > ramdisklen )
                unziplen = 0;
        }
        if ( ramdisklen > dom->max_ramdisk_size )
        {
            xc_dom_panic(dom->xch, XC_INVALID_KERNEL,
                         "ramdisk image too large");
            goto err;
        }
    }

    if ( xc_dom_alloc_segment(dom, &dom->ramdisk_seg, "ramdisk",
                              dom->ramdisk_seg.vstart, ramdisklen) != 0 )
        goto err;
    ramdiskmap = xc_dom_seg_to_ptr(dom, &dom->ramdisk_seg);
    if ( ramdiskmap == NULL )
    {
        DOMPRINTF("%s: xc_dom_seg_to_ptr(dom, &dom->ramdisk_seg) => NULL",
                  __FUNCTION__);
        goto err;
    }
    if ( unziplen )
    {
        if ( xc_dom_do_gunzip(dom->xch, dom->ramdisk_blob, dom->ramdisk_size,
                              ramdiskmap, unziplen) != -1 )
            return 0;
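        /*
         * Decompression failed.  If the segment was sized for the
         * (smaller) decompressed image only, the raw blob cannot serve
         * as a fallback either.
         */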
        if ( dom->ramdisk_size > ramdisklen )
            goto err;
    }

    /* Fall back to handing over the raw blob. */
    memcpy(ramdiskmap, dom->ramdisk_blob, dom->ramdisk_size);
    /* If an unzip attempt was made, the buffer may no longer be all zero. */
    if ( unziplen > dom->ramdisk_size )
        memset(ramdiskmap + dom->ramdisk_size, 0,
               unziplen - dom->ramdisk_size);

    return 0;

 err:
    return -1;
}
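
(The call site isn't included above; roughly, the function would be
invoked from the existing ramdisk handling in xc_dom_build_image()
along the lines below - shown only as a sketch to make the error path
visible, not as part of the patch itself:)

    /* sketch of the presumed caller, not part of the function above */
    if ( dom->ramdisk_blob && xc_dom_build_ramdisk(dom) != 0 )
        goto err;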


