From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:36115) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1Un9T5-0006Sy-Mi for qemu-devel@nongnu.org; Thu, 13 Jun 2013 11:29:48 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1Un9Sz-0001G5-Dl for qemu-devel@nongnu.org; Thu, 13 Jun 2013 11:29:43 -0400
Received: from smtp.citrix.com ([66.165.176.89]:29303) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1Un9Sz-0001Fi-91 for qemu-devel@nongnu.org; Thu, 13 Jun 2013 11:29:37 -0400
Message-ID: <51B9E550.3010900@eu.citrix.com>
Date: Thu, 13 Jun 2013 16:29:20 +0100
From: George Dunlap
MIME-Version: 1.0
References: <51B1FF50.90406@eu.citrix.com> <403610A45A2B5242BD291EDAE8B37D3010E56731@SHSMSX102.ccr.corp.intel.com> <51B83E7A02000078000DD6E9@nat28.tlf.novell.com> <51B847E3.5010604@eu.citrix.com> <51B9CF26.1080707@eu.citrix.com>
In-Reply-To:
Content-Type: text/plain; charset="ISO-8859-1"; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [Xen-devel] [BUG 1747]Guest could't find bootable device with memory more than 3600M
To: Stefano Stabellini
Cc: Tim Deegan, Yongjie Ren, "xen-devel@lists.xensource.com", Keir Fraser, Ian Campbell, hanweidong@huawei.com, Xudong Hao, yanqiangjun@huawei.com, luonengjun@huawei.com, qemu-devel@nongnu.org, wangzhenguo@huawei.com, xiaowei.yang@huawei.com, arei.gonglei@huawei.com, Jan Beulich, Paolo Bonzini, YongweiX Xu, SongtaoX Liu

On 13/06/13 15:50, Stefano Stabellini wrote:
> Keep in mind that if we start the pci hole at 0xe0000000, the number of
> cases for which any workarounds are needed is going to be dramatically
> decreased to the point that I don't think we need a workaround anymore.

You don't think anyone is going to want to pass through a card with
1GiB+ of RAM?
> The algorithm is going to work like this in detail:
>
> - the PCI hole size is set to 0xfc000000 - 0xe0000000 = 448MB
> - we calculate the total MMIO size; if it's bigger than the PCI hole we
>   raise a 64-bit relocation flag
> - if the 64-bit relocation is enabled, we relocate above 4G the first
>   device that is 64-bit capable and has an MMIO size greater than or
>   equal to 512MB
> - if the PCI hole is now big enough for the remaining devices we stop
>   the above-4G relocation, otherwise we keep relocating devices that
>   are 64-bit capable and have an MMIO size greater than or equal to
>   512MB
> - if one or more devices don't fit we print an error and continue (it's
>   not a critical failure, one device won't be used)
>
> We could have a xenstore flag somewhere that enables the old behaviour
> so that people can revert back to qemu-xen-traditional and make the PCI
> hole below 4G even bigger than 448MB, but I think that keeping the old
> behaviour around is going to make the code more difficult to maintain.

We'll only need to do that for one release, until we have a chance to
fix it properly.

> Also it's difficult for people to realize that they need the workaround
> because hvmloader logs aren't enabled by default and only go to the Xen
> serial console.

Well, if key people know about it (Pasi, David Techer, &c), and we put
it on the wikis related to VGA pass-through, I think the information
will get around.

> The value of this workaround is pretty low in my view.
> Finally it's worth noting that Windows XP is going EOL in less than a
> year.

That's one year during which a configuration with a currently-supported
OS won't work for Xen 4.3 that worked for 4.2.  Apart from that, one of
the reasons for doing virtualization in the first place is to be able to
run older, unsupported OSes on current hardware; so "XP isn't important"
doesn't really cut it for me.
:-)

>> I thought that what we had proposed was to have an option in xenstore,
>> that libxl would set, which would instruct hvmloader whether to expand
>> the MMIO hole and whether to relocate devices above 64-bit?
>
> I think it's right to have this discussion in public on the mailing
> list, rather than behind closed doors.
> Also I don't agree on the need for a workaround, as explained above.

I see -- you thought it was a bad idea and so were letting someone else
bring it up -- or maybe hoping no one would remember to bring it up. :-)

(Obviously the decision needs to be made in public, but sometimes having
technical solutions hashed out in a face-to-face meeting is more
efficient.)

 -George