From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Jan Beulich"
Subject: Re: [PATCH v7 15/15] x86/hvm: track large memory mapped accesses by buffer offset
Date: Thu, 09 Jul 2015 16:46:05 +0100
Message-ID: <559EB35D020000780008EF0B@mail.emea.novell.com>
References: <1436447455-11524-1-git-send-email-paul.durrant@citrix.com>
 <1436447455-11524-16-git-send-email-paul.durrant@citrix.com>
In-Reply-To: <1436447455-11524-16-git-send-email-paul.durrant@citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Paul Durrant
Cc: Keir Fraser, xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

>>> On 09.07.15 at 15:10, wrote:
> @@ -635,13 +605,49 @@ static int hvmemul_phys_mmio_access(
>      return rc;
>  }
>  
> +/*
> + * Multi-cycle MMIO handling is based upon the assumption that emulation
> + * of the same instruction will not access the same MMIO region more
> + * than once. Hence we can deal with re-emulation (for secondary or
> + * subsequent cycles) by looking up the result or previous I/O in a
> + * cache indexed by linear MMIO address.
> + */
> +static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
> +    struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir)
> +{
> +    unsigned int i;
> +    struct hvm_mmio_cache *cache;
> +
> +    for ( i = 0; i < vio->mmio_cache_count; i ++ )
> +    {
> +        cache = &vio->mmio_cache[i];
> +
> +        if ( gla == cache->gla &&
> +             dir == cache->dir )
> +            return cache;
> +    }
> +
> +    i = vio->mmio_cache_count++;
> +    if( i == ARRAY_SIZE(vio->mmio_cache) )
> +        domain_crash(current->domain);

But you mustn't continue here, or at least force i into range so you
don't corrupt other data. And while doing that please also add the
missing space on the if() line.

Jan