* Re: [Xen-devel] [Patch V3 00/15] xen: support pv-domains larger than 512GB
[not found] <1429507420-18201-1-git-send-email-jgross@suse.com>
@ 2015-05-19 10:11 ` David Vrabel
[not found] ` <1429507420-18201-15-git-send-email-jgross@suse.com>
1 sibling, 0 replies; 4+ messages in thread
From: David Vrabel @ 2015-05-19 10:11 UTC (permalink / raw)
To: Juergen Gross, linux-kernel, xen-devel, konrad.wilk,
david.vrabel, boris.ostrovsky
On 20/04/15 06:23, Juergen Gross wrote:
> Support 64 bit pv-domains with more than 512GB of memory.
Reviewed-by: David Vrabel <david.vrabel@citrix.com>
I'll try and queue this for 4.2.
David
^ permalink raw reply [flat|nested] 4+ messages in thread
* Re: [Xen-devel] [Patch V3 14/15] xen: allow more than 512 GB of RAM for 64 bit pv-domains
[not found] ` <1429507420-18201-15-git-send-email-jgross@suse.com>
@ 2015-05-27 16:25 ` David Vrabel
2015-05-27 17:05 ` David Vrabel
0 siblings, 1 reply; 4+ messages in thread
From: David Vrabel @ 2015-05-27 16:25 UTC (permalink / raw)
To: Juergen Gross, linux-kernel, xen-devel, konrad.wilk,
david.vrabel, boris.ostrovsky
On 20/04/15 06:23, Juergen Gross wrote:
> 64-bit pv-domains under Xen are currently limited to 512 GB of RAM.
> The main reason was the 3-level p2m tree, which has been replaced by
> the virtually mapped linear p2m list. In parallel to the p2m list
> used by the kernel itself, there is a 3-level mfn tree for use by
> the Xen tools and possibly for crash dump analysis. The linear p2m
> list can serve as a replacement for this tree, too. As the kernel
> can't know whether the tools are capable of dealing with the p2m
> list instead of the mfn tree, the 512 GB limit can't be dropped in
> all cases.
>
> This patch replaces the hard limit with a kernel parameter which
> tells the kernel whether to obey the 512 GB limit. The default is
> selected by a configuration parameter which specifies whether the
> 512 GB limit should be active by default for domUs (domain
> save/restore/migration and crash dump analysis are affected).
>
> Memory above the domain limit is returned to the hypervisor instead
> of being identity mapped, which was wrong anyway.
>
> The kernel configuration parameter specifying the maximum size of a
> domain can be deleted, as it is no longer relevant.
Something in this patch breaks the hvc console in my test domU.
kernel BUG at /local/davidvr/work/k.org/tip/drivers/tty/hvc/hvc_xen.c:153
Which suggests the hvc driver mapped the wrong console ring frame.
David
* Re: [Xen-devel] [Patch V3 14/15] xen: allow more than 512 GB of RAM for 64 bit pv-domains
2015-05-27 16:25 ` [Xen-devel] [Patch V3 14/15] xen: allow more than 512 GB of RAM for 64 bit pv-domains David Vrabel
@ 2015-05-27 17:05 ` David Vrabel
2015-06-08 5:09 ` Juergen Gross
0 siblings, 1 reply; 4+ messages in thread
From: David Vrabel @ 2015-05-27 17:05 UTC (permalink / raw)
To: David Vrabel, Juergen Gross, linux-kernel, xen-devel,
konrad.wilk, boris.ostrovsky
On 27/05/15 17:25, David Vrabel wrote:
> On 20/04/15 06:23, Juergen Gross wrote:
>> 64-bit pv-domains under Xen are currently limited to 512 GB of RAM.
>> The main reason was the 3-level p2m tree, which has been replaced by
>> the virtually mapped linear p2m list. In parallel to the p2m list
>> used by the kernel itself, there is a 3-level mfn tree for use by
>> the Xen tools and possibly for crash dump analysis. The linear p2m
>> list can serve as a replacement for this tree, too. As the kernel
>> can't know whether the tools are capable of dealing with the p2m
>> list instead of the mfn tree, the 512 GB limit can't be dropped in
>> all cases.
>>
>> This patch replaces the hard limit with a kernel parameter which
>> tells the kernel whether to obey the 512 GB limit. The default is
>> selected by a configuration parameter which specifies whether the
>> 512 GB limit should be active by default for domUs (domain
>> save/restore/migration and crash dump analysis are affected).
>>
>> Memory above the domain limit is returned to the hypervisor instead
>> of being identity mapped, which was wrong anyway.
>>
>> The kernel configuration parameter specifying the maximum size of a
>> domain can be deleted, as it is no longer relevant.
>
> Something in this patch breaks the hvc console in my test domU.
>
> kernel BUG at /local/davidvr/work/k.org/tip/drivers/tty/hvc/hvc_xen.c:153
>
> Which suggests the hvc driver mapped the wrong console ring frame.
Sorry, it's patch #13 (xen: move p2m list if conflicting with e820 map)
that seems to be bad.
David
* Re: [Xen-devel] [Patch V3 14/15] xen: allow more than 512 GB of RAM for 64 bit pv-domains
2015-05-27 17:05 ` David Vrabel
@ 2015-06-08 5:09 ` Juergen Gross
0 siblings, 0 replies; 4+ messages in thread
From: Juergen Gross @ 2015-06-08 5:09 UTC (permalink / raw)
To: David Vrabel, linux-kernel, xen-devel, konrad.wilk, boris.ostrovsky
On 05/27/2015 07:05 PM, David Vrabel wrote:
> On 27/05/15 17:25, David Vrabel wrote:
>> On 20/04/15 06:23, Juergen Gross wrote:
>>> 64-bit pv-domains under Xen are currently limited to 512 GB of RAM.
>>> The main reason was the 3-level p2m tree, which has been replaced by
>>> the virtually mapped linear p2m list. In parallel to the p2m list
>>> used by the kernel itself, there is a 3-level mfn tree for use by
>>> the Xen tools and possibly for crash dump analysis. The linear p2m
>>> list can serve as a replacement for this tree, too. As the kernel
>>> can't know whether the tools are capable of dealing with the p2m
>>> list instead of the mfn tree, the 512 GB limit can't be dropped in
>>> all cases.
>>>
>>> This patch replaces the hard limit with a kernel parameter which
>>> tells the kernel whether to obey the 512 GB limit. The default is
>>> selected by a configuration parameter which specifies whether the
>>> 512 GB limit should be active by default for domUs (domain
>>> save/restore/migration and crash dump analysis are affected).
>>>
>>> Memory above the domain limit is returned to the hypervisor instead
>>> of being identity mapped, which was wrong anyway.
>>>
>>> The kernel configuration parameter specifying the maximum size of a
>>> domain can be deleted, as it is no longer relevant.
>>
>> Something in this patch breaks the hvc console in my test domU.
>>
>> kernel BUG at /local/davidvr/work/k.org/tip/drivers/tty/hvc/hvc_xen.c:153
>>
>> Which suggests the hvc driver mapped the wrong console ring frame.
>
> Sorry, it's patch #13 (xen: move p2m list if conflicting with e820 map)
> that seems to be bad.
I think I've found the reason: the console frame is no longer marked
as "reserved". When moving the p2m list I had to change the
memblock_reserve() call for it. Before that patch the call covered the
p2m list, the start_info page, the xenstore page and the console page.
I added a memblock_reserve() for start_info, but failed to do so for
the xenstore and console pages.
I'll modify the patch and respin.
I have to check why I didn't hit this issue. Maybe my test machine was
so large that the memory in question didn't get reused before my test
finished.
Thanks for testing,
Juergen