From: Juergen Gross <jgross@suse.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	HW42 <hw42@ipsumj.de>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v3] xen/balloon: don't online new memory initially
Date: Tue, 24 Oct 2017 09:47:32 +0200	[thread overview]
Message-ID: <37f323bd-8f2b-b7d6-e867-1b3faaa3c3cd@suse.com> (raw)
In-Reply-To: <a35f18d7-3047-ab19-179c-470ea8f3ef3e@oracle.com>

On 03/10/17 23:33, Boris Ostrovsky wrote:
> On 10/02/2017 05:37 PM, HW42 wrote:
>> Juergen Gross:
>>> When setting up the Xenstore watch for the memory target size, the
>>> new watch will fire at once. Don't try to reach the configured
>>> target size by onlining new memory in this case, as the current
>>> memory size will be smaller in almost all cases due to e.g. BIOS
>>> reserved pages.
>>>
>>> Onlining new memory will lead to more problems, e.g. undesired
>>> conflicts with NVMe devices meant to be operated as block devices.
>>>
>>> Instead remember the difference between target size and current size
>>> when the watch fires for the first time and apply it to any further
>>> size changes, too.
>>>
>>> In order to avoid races between balloon.c and xen-balloon.c init
>>> calls, do the xen-balloon.c initialization from balloon.c.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
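
(For reference, the first-fire logic of this patch boils down to
something like the simplified sketch of the watch handler below; this
is not the literal diff, just the idea as described above:)

  /* Simplified from drivers/xen/xen-balloon.c. */
  static void watch_target(struct xenbus_watch *watch,
                           const char *path, const char *token)
  {
      static bool watch_fired;
      static long target_diff;
      unsigned long long new_target;

      if (xenbus_scanf(XBT_NIL, "memory", "target", "%llu",
                       &new_target) != 1)
          return;    /* Ok (e.g. for dom0) - just ignore. */

      /* memory/target is in KiB, convert it to pages. */
      new_target >>= PAGE_SHIFT - 10;

      if (!watch_fired) {
          /* First fire: only remember the difference between the
           * configured target and the current size instead of
           * onlining new memory to reach the target. */
          watch_fired = true;
          target_diff = new_target - balloon_stats.target_pages;
          return;
      }

      balloon_set_new_target(new_target - target_diff);
  }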
>> This patch seems to introduce a regression: if I boot an HVM or PVH
>> domain with memory != maxmem, then the kernel inside the domain
>> reports that it has maxmem available, even though Xen reports only
>> what is set as memory. Sooner or later Xen logs "out of PoD memory!"
>> and kills the domain. If I revert the corresponding commit
>> (96edd61d), then everything works as expected.
>>
>> Tested this with Xen 4.9.0 and Linux 4.13.4.
>>
> 
> 
> Yes, this indeed doesn't look like it's doing the right thing (although
> I haven't seen the "out of memory" error).

You need to actually use enough memory inside the guest (e.g. via
memhog) to exhaust the PoD pool.

> I wonder whether target_diff should be computed against xenstore's
> "static-max" rather than "target", and whether the memory should be
> ballooned down immediately rather than on a subsequent watch firing.

Right. And we need to keep target_diff = 0 for PV domains.
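
Something like this, I think (untested sketch only, not the final
patch; static_max would be a new local in watch_target()):

  if (!watch_fired) {
      watch_fired = true;

      /* Use memory/static-max for the first-fire difference. */
      if (xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
                       &static_max) != 1)
          static_max = new_target;
      else
          static_max >>= PAGE_SHIFT - 10;

      /* A PV domain already starts with the correct amount of
       * memory, so no correction is wanted there. */
      target_diff = xen_pv_domain()
                    ? 0 : static_max - balloon_stats.target_pages;
  }

  /* Balloon down immediately instead of waiting for the next
   * watch to fire. */
  balloon_set_new_target(new_target - target_diff);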

Patch coming soon.


Juergen


Thread overview:
2017-07-10  8:10 [PATCH v3] xen/balloon: don't online new memory initially Juergen Gross
2017-07-18 16:08 ` Boris Ostrovsky
2017-07-18 16:12   ` Juergen Gross
2017-07-18 16:33     ` Boris Ostrovsky
2017-10-02 21:37 ` [Xen-devel] " HW42
2017-10-03 21:33   ` [Xen-devel] " Boris Ostrovsky
2017-10-24  7:47     ` Juergen Gross [this message]
