xen-devel.lists.xenproject.org archive mirror
From: Jan Beulich <jbeulich@suse.com>
To: Julien Grall <julien@xen.org>
Cc: bertrand.marquis@arm.com, Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH 0/2] xen/arm: Couple of bug fixes when decompressing kernels
Date: Tue, 6 Apr 2021 09:45:52 +0200	[thread overview]
Message-ID: <7a65f71b-e5a6-22aa-d360-4045b266229e@suse.com> (raw)
In-Reply-To: <20210402152105.29387-1-julien@xen.org>

On 02.04.2021 17:21, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Hi all,
> 
> The main goal of this series is to address the bug report [1]. It is not
> possible

...?

> While testing the series, I also noticed that it is not possible to
> re-use the same compressed kernel for multiple domains as the module
> will be overwritten during the first decompression.
> 
> I am still a bit undecided on how to fix it. We could either create a
> new module for the uncompressed kernel and link it with the compressed
> kernel, or we could decompress every time.
> 
> One inconvenience of only decompressing once is that we have to keep
> the uncompressed kernel until all the domains have booted. This may
> mean a lot of memory is wasted just to keep the uncompressed kernel
> around (on my setup, it takes ~3 times more space).
> 
> Any opinions on how to approach it?

Well, it's not "until all the domains have booted", but until all the
domains have had their kernel image placed in the designated piece of
memory. So while for the time being multiple decompression may indeed
be a reasonable approach, longer term one could populate all the
domains' memory in stages - first just the kernel space for all of
them, then load the kernel(s), then populate the rest of the memory.
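
As a rough sketch of that staged ordering (the helper names below are
made-up stand-ins for the real domain-construction steps, so this is
only an illustration of the idea, not actual Xen code):

#include <stddef.h>

struct domain_cfg {
    size_t kernel_size;   /* size of the uncompressed kernel image */
    void *kernel_dest;    /* where the image gets placed for this domain */
};

/* Stand-ins (not real Xen functions) for the existing steps. */
static void *alloc_kernel_region(size_t size)
{
    (void)size;
    return NULL;
}

static void decompress_into(const void *gz, size_t gz_len, void *dest)
{
    (void)gz; (void)gz_len; (void)dest;
}

static void populate_remaining_ram(struct domain_cfg *d)
{
    (void)d;
}

static void build_all_domains(struct domain_cfg *doms, size_t n,
                              const void *gz_kernel, size_t gz_len)
{
    size_t i;

    /* 1) Reserve just the kernel space for every domain. */
    for ( i = 0; i < n; i++ )
        doms[i].kernel_dest = alloc_kernel_region(doms[i].kernel_size);

    /* 2) Decompress the shared compressed image into each destination;
     *    after this loop the compressed module is no longer needed.   */
    for ( i = 0; i < n; i++ )
        decompress_into(gz_kernel, gz_len, doms[i].kernel_dest);

    /* 3) Only now populate the rest of each domain's memory. */
    for ( i = 0; i < n; i++ )
        populate_remaining_ram(&doms[i]);
}

The compressed module only needs to stay around for the second loop, so
it could be dropped before the bulk of the domains' RAM is populated.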

Jan


Thread overview: 10+ messages
2021-04-02 15:21 [PATCH 0/2] xen/arm: Couple of bug fixes when decompressing kernels Julien Grall
2021-04-02 15:21 ` [PATCH 1/2] xen/arm: kernel: Propagate the error if we fail to decompress the kernel Julien Grall
2021-04-06 19:15   ` Julien Grall
2021-04-02 15:21 ` [PATCH 2/2] xen/gunzip: Allow perform_gunzip() to be called multiple times Julien Grall
2021-04-06  7:40   ` Jan Beulich
2021-04-07 10:39   ` Jan Beulich
2021-04-07 18:18     ` Julien Grall
2021-04-06  7:45 ` Jan Beulich [this message]
2021-04-06 14:13   ` [PATCH 0/2] xen/arm: Couple of bug fixes when decompressing kernels Julien Grall
2021-04-06 18:31 ` Julien Grall
