From: "Jan Beulich"
Subject: Re: (v2) Design proposal for RMRR fix
Date: Fri, 09 Jan 2015 10:35:01 +0000
Message-ID: <54AFBCE502000078000530F3@mail.emea.novell.com>
References: <54AE9A2F0200007800052ACF@mail.emea.novell.com>
 <54AFAB90020000780005303C@mail.emea.novell.com>
To: Kevin Tian
Cc: "wei.liu2@citrix.com", "ian.campbell@citrix.com",
 "stefano.stabellini@eu.citrix.com", "tim@xen.org",
 "ian.jackson@eu.citrix.com", "xen-devel@lists.xen.org",
 Yang Z Zhang, Tiejun Chen
List-Id: xen-devel@lists.xenproject.org

>>> On 09.01.15 at 11:10, wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Boot time device assignment is different: The question isn't whether
>> an assigned device works; instead the proper analogy is whether a
>> device is _present_. If a device doesn't work on bare metal, it will
>> still be discoverable. Yet if device assignment fails, that's not going
>> to be the case - for security reasons, the guest would not see any
>> notion of the device.
>
> The question is whether we want such a device assignment to fail due
> to an RMRR conflict, and that failure decision should be made when
> Xen handles the actual assignment, not when the domain builder
> prepares the reserved regions.

Detecting the failure only in the hypervisor has the downside of
potentially leaving the user with few clues as to what went wrong.
Sending messages to the hypervisor log in that case is questionable,
yet the tool stack (namely libxc) is known to not always do a good
job of error propagation.

>> The question isn't about migrating with devices assigned, but about
>> assigning devices after migration (consider a dual vif + SR-IOV NIC
>> guest setup where the SR-IOV NIC gets hot-removed before
>> migration and a new one hot-plugged afterwards).
>>
>> Furthermore any tying of the guest memory layout to the host's
>> where the guest first boots is awkward, as post-migration there's
>> not going to be any reliable correlation between the guest layout
>> and the new host's.
>
> How can you solve this? In the example above, a NIC on node A leaves
> a reserved region in the guest e820. Now it's hot-removed and the
> guest is migrated to node B. There's no way to update the e820 again,
> since it's a boot-time-only structure, so the user will still see
> such awkward regions. Since that's unavoidable, the report-all
> approach from the summary mail doesn't look like it introduces a new
> problem.

The solution to this is reserved regions specified in the guest
config, independent of host characteristics.

Jan
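
(Purely to illustrate the boot-time structure being argued about: the
sketch below - which is not Xen's domain builder or hvmloader code -
shows how an RMRR range would surface to a guest, namely as a fixed
E820_RESERVED entry in its e820 map, set up once at boot and not
updatable afterwards. The struct follows the conventional e820 entry
layout; the helper name, table size and address range are made up for
the example.)

/*
 * Illustrative sketch only: an RMRR range shows up in the guest e820
 * as a reserved entry, written once when the memory map is built.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define E820_RAM      1
#define E820_RESERVED 2

struct e820entry {
    uint64_t addr;   /* start of the region */
    uint64_t size;   /* length in bytes */
    uint32_t type;   /* E820_RAM, E820_RESERVED, ... */
};

/* Hypothetical helper: append a reserved entry covering an RMRR. */
static int add_reserved_region(struct e820entry *map, unsigned int *nr,
                               unsigned int max_entries,
                               uint64_t start, uint64_t end)
{
    if (*nr >= max_entries || end <= start)
        return -1;
    map[*nr].addr = start;
    map[*nr].size = end - start;
    map[*nr].type = E820_RESERVED;
    ++*nr;
    return 0;
}

int main(void)
{
    struct e820entry map[8];
    unsigned int nr = 0, i;

    /* RAM below an illustrative RMRR at 0xad800000-0xad900000. */
    map[nr].addr = 0;
    map[nr].size = 0xad800000ULL;
    map[nr].type = E820_RAM;
    ++nr;
    add_reserved_region(map, &nr, 8, 0xad800000ULL, 0xad900000ULL);

    for (i = 0; i < nr; ++i)
        printf("%016" PRIx64 " - %016" PRIx64 " (%s)\n",
               map[i].addr, map[i].addr + map[i].size,
               map[i].type == E820_RESERVED ? "reserved" : "RAM");
    return 0;
}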