Date: Tue, 16 Jul 2013 20:06:03 +0200
From: Andrea Arcangeli
To: Paolo Bonzini
Cc: qemu-stable@nongnu.org, Eduardo Habkost, Gleb Natapov, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] fix guest physical bits to match host, to go beyond 1TB guests
Message-ID: <20130716180603.GD23279@redhat.com>
In-Reply-To: <51E5876B.2040500@redhat.com>

On Tue, Jul 16, 2013 at 07:48:27PM +0200, Paolo Bonzini wrote:
> On 16/07/2013 19:46, Paolo Bonzini wrote:
> >>> >> (see PUD with bit >= 40 set)
> >> >
> >> > I am not sure I understand what caused this: if we are advertising 40
> >> > physical bits to the guest, why are we ending up with a PUD with
> >> > bit >= 40 set?
> >
> > Because we create a guest that has bigger memory than what we advertise
> > in CPUID.
>
> Also, note that the guest does not really care about this CPUID. It is
> only used by KVM itself to decide which bits in the page tables are
> reserved.

Yes, I suppose guests assume that if there's >1TB of RAM, there are
enough bits in the pagetables to map whatever RAM "range" was found.

About migrating >1TB guests: it will become possible as soon as qemu
starts using the remap_anon_pages+MADV_USERFAULT features I posted on
linux-kernel recently.
With those two features, qemu can provide postcopy live migration by
default: once the pre-copy phase completes, it will never need to freeze
the guest on the source node again.