From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Tian, Kevin"
Subject: Re: [PATCH RFC V7 4/5] xen, libxc: Request page fault injection via libxc
Date: Wed, 27 Aug 2014 00:54:57 +0000
Message-ID:
References: <1407943689-9249-1-git-send-email-rcojocaru@bitdefender.com>
 <1407943689-9249-4-git-send-email-rcojocaru@bitdefender.com>
 <53FCB226020000780002DA7B@mail.emea.novell.com>
 <53FC98AD.6010104@bitdefender.com>
 <53FCB954020000780002DB08@mail.emea.novell.com>
 <53FCA00C.3070404@bitdefender.com>
 <53FCC88D020000780002DC05@mail.emea.novell.com>
 <53FCBCEE.5090700@bitdefender.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <53FCBCEE.5090700@bitdefender.com>
Content-Language: en-US
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Razvan Cojocaru, Jan Beulich, Tim Deegan
Cc: "ian.campbell@citrix.com", "stefano.stabellini@eu.citrix.com",
 "andrew.cooper3@citrix.com", "Dong, Eddie", "xen-devel@lists.xen.org",
 "Nakajima, Jun", "ian.jackson@eu.citrix.com"
List-Id: xen-devel@lists.xenproject.org

> From: Razvan Cojocaru [mailto:rcojocaru@bitdefender.com]
> Sent: Tuesday, August 26, 2014 9:59 AM
>
> On 08/26/14 18:49, Jan Beulich wrote:
> >>>> On 26.08.14 at 16:56, wrote:
> >> On 08/26/2014 05:44 PM, Jan Beulich wrote:
> >>>>>> On 26.08.14 at 16:24, wrote:
> >>>> On 08/26/2014 05:13 PM, Jan Beulich wrote:
> >>>>>>>> On 13.08.14 at 17:28, wrote:
> >>>>>> --- a/xen/include/asm-x86/hvm/domain.h
> >>>>>> +++ b/xen/include/asm-x86/hvm/domain.h
> >>>>>> @@ -141,6 +141,14 @@ struct hvm_domain {
> >>>>>>       */
> >>>>>>      uint64_t sync_tsc;
> >>>>>>
> >>>>>> +    /* Memory introspection page fault injection data. */
> >>>>>> +    struct {
> >>>>>> +        uint64_t address_space;
> >>>>>> +        uint64_t virtual_address;
> >>>>>> +        uint32_t errcode;
> >>>>>> +        bool_t valid;
> >>>>>> +    } fault_info;
> >>>>>
> >>>>> Sorry for noticing this only now, but how can this be a per-domain
> >>>>> thing rather than a per-vCPU one?
> >>>>
> >>>> The requirement for our introspection application has simply been to
> >>>> bring back in a swapped-out page, regardless of which VCPU ends up
> >>>> actually doing it.
> >>>
> >>> But please remember that what you add to the public code base
> >>> shouldn't be tied to the specific needs of your application; it should
> >>> be coded in a generally useful way.
> >>
> >> Of course, perhaps I should have written "the scenario we're working
> >> with" rather than "the requirement for our application". I'm just trying
> >> to understand all the usual cases for this.
> >>
> >>> Furthermore, how would this work if you have 2 vCPU-s hit such
> >>> a condition, and you need to bring in 2 pages in parallel?
> >>
> >> Since this is all happening in the context of processing mem_events,
> >> it's not really possible for two VCPUs to need to do this in parallel,
> >> since processing mem_events is done sequentially. A VCPU needs to
> >> put a mem_event in the ring buffer and pause before this hypercall can
> >> be called from userspace.
> >
> > I'd certainly want to hear Tim's opinion here before settling on
> > either model. Considering that this is at least mem-event related,
> > it's slightly odd you didn't copy him in the first place.
>
> Sorry about that, scripts/get_maintainer.pl did not list him and I
> forgot to CC him.

From the code, this info seems to be a condition for #PF injection rather
than a record of a vCPU's faulting state, so it looks OK as a per-domain
structure. The structure name 'fault_info' is too generic, though...

Thanks
Kevin