From: "Jan Beulich"
Subject: Re: [V10 PATCH 0/4] pvh dom0 patches...
Date: Thu, 08 May 2014 11:44:37 +0100
To: Roger Pau Monné
Cc: George.Dunlap@eu.citrix.com, xen-devel@lists.xenproject.org,
 keir.xen@gmail.com, tim@xen.org

>>> On 08.05.14 at 12:27, wrote:
> On 07/05/14 13:34, Jan Beulich wrote:
>>>>> On 07.05.14 at 11:48, wrote:
>>> @@ -369,6 +374,89 @@ static __init void pvh_map_all_iomem(struct domain *d)
>>>              nump = end_pfn - start_pfn;
>>>              pvh_add_mem_mapping(d, start_pfn, start_pfn, nump);
>>>          }
>>> +
>>> +    /*
>>> +     * Add the memory removed by the holes at the end of the
>>> +     * memory map.
>>> +     */
>>> +    for ( i = 0, entry = e820.map; i < e820.nr_map; i++, entry++ )
>>> +    {
>>> +        if ( entry->type != E820_RAM )
>>> +            continue;
>>> +
>>> +        end_pfn = PFN_UP(entry->addr + entry->size);
>>> +        if ( end_pfn <= nr_pages )
>>> +            continue;
>>> +
>>> +        navail = end_pfn - nr_pages;
>>> +        nmap = navail > nr_holes ? nr_holes : navail;
>>> +        start_pfn = PFN_DOWN(entry->addr) < nr_pages ?
>>> +                    nr_pages : PFN_DOWN(entry->addr);
>>> +        page = alloc_domheap_pages(d, get_order_from_pages(nmap), 0);
>>> +        if ( !page )
>>> +            panic("Not enough RAM for domain 0");
>>> +        for ( j = 0; j < nmap; j++ )
>>> +        {
>>> +            rc = guest_physmap_add_page(d, start_pfn + j, page_to_mfn(page), 0);
>>
>> I realize that this interface isn't the most flexible one in terms of
>> what page orders it allows to be passed in, but could you at least use
>> order 9 here when the allocation order above is 9 or higher?
>
> I've changed it to be:
>
> guest_physmap_add_page(d, start_pfn, page_to_mfn(page), order);
>
> and removed the loop.

Honestly I'd be very surprised if this worked - the P2M backend
functions generally accept only orders matching up with what the
hardware supports. I.e. I specifically didn't suggest removing the
loop altogether.

>>> +static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
>>> +{
>>> +    struct e820entry *entry;
>>> +    unsigned int i;
>>> +    unsigned long pages, cur_pages = 0;
>>> +
>>> +    /*
>>> +     * Craft the e820 memory map for Dom0 based on the hardware e820 map.
>>> +     */
>>> +    d->arch.e820 = xzalloc_array(struct e820entry, e820.nr_map);
>>> +    if ( !d->arch.e820 )
>>> +        panic("Unable to allocate memory for Dom0 e820 map");
>>> +
>>> +    memcpy(d->arch.e820, e820.map, sizeof(struct e820entry) * e820.nr_map);
>>> +    d->arch.nr_e820 = e820.nr_map;
>>> +
>>> +    /* Clamp e820 memory map to match the memory assigned to Dom0 */
>>> +    for ( i = 0, entry = d->arch.e820; i < d->arch.nr_e820; i++, entry++ )
>>> +    {
>>> +        if ( entry->type != E820_RAM )
>>> +            continue;
>>> +
>>> +        if ( nr_pages == cur_pages )
>>> +        {
>>> +            /*
>>> +             * We already have all the assigned memory,
>>> +             * mark this region as reserved.
>>> +             */
>>> +            entry->type = E820_RESERVED;
>>
>> That seems undesirable, as it will prevent Linux from using that space
>> for MMIO purposes. Plus keeping the region here is consistent with
>> simply shrinking the range below (yielding the remainder uncovered
>> by any E820 entry). I think you ought to zap the entry here.
>
> I don't think these regions can be used for MMIO; the gpfns in the
> guest p2m are not set as p2m_mmio_direct.

Not at the point you build the domain, but such a use can't and
shouldn't be precluded once the domain is up.

Jan
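
For illustration, a minimal sketch of the loop shape being asked for
above: keep iterating over the allocated chunk, but use an order-9
(2MB) mapping whenever gfn/mfn alignment and the remaining page count
allow, falling back to order 0 (4KB) otherwise. The helper name is
hypothetical; it assumes the guest_physmap_add_page() interface and
the PAGE_ORDER_4K/PAGE_ORDER_2M constants visible in the quoted
patch's Xen tree.

/*
 * Hypothetical sketch, not the actual patch: populate the range
 * [gfn, gfn + nr_pages) with [mfn, mfn + nr_pages) using only page
 * orders the hardware P2M supports - 2MB superpages where alignment
 * permits, 4KB pages otherwise.
 */
static int __init pvh_populate_range(struct domain *d, unsigned long gfn,
                                     unsigned long mfn,
                                     unsigned long nr_pages)
{
    while ( nr_pages )
    {
        unsigned int order = PAGE_ORDER_4K;
        int rc;

        /* A 2MB mapping needs both addresses 512-page aligned. */
        if ( !((gfn | mfn) & ((1UL << PAGE_ORDER_2M) - 1)) &&
             nr_pages >= (1UL << PAGE_ORDER_2M) )
            order = PAGE_ORDER_2M;

        rc = guest_physmap_add_page(d, gfn, mfn, order);
        if ( rc )
            return rc;

        gfn += 1UL << order;
        mfn += 1UL << order;
        nr_pages -= 1UL << order;
    }

    return 0;
}

The point is that every individual call still passes an order the P2M
backends know how to handle; handing them an arbitrary order derived
from get_order_from_pages(nmap) is what the reply above warns against.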
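
Similarly, "zap the entry" in the second comment means dropping the
E820_RAM entry from Dom0's copy of the map rather than retyping it as
E820_RESERVED, so the address range ends up covered by no entry at
all. A sketch under the same assumptions (the d->arch.e820 and
d->arch.nr_e820 fields from the quoted patch; the helper name is again
hypothetical):

/*
 * Hypothetical sketch: remove entry 'idx' from Dom0's e820 copy,
 * shifting the remaining entries down so the range it covered is
 * simply absent from the map.
 */
static void __init pvh_e820_zap_entry(struct domain *d, unsigned int idx)
{
    memmove(&d->arch.e820[idx], &d->arch.e820[idx + 1],
            (d->arch.nr_e820 - idx - 1) * sizeof(*d->arch.e820));
    d->arch.nr_e820--;
}

A caller zapping the current entry from inside the clamping loop would
also need to step i and entry back by one, so the loop increment
doesn't skip the entry shifted into the freed slot.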