From mboxrd@z Thu Jan 1 00:00:00 1970
From: Vijay Kilari
Subject: Re: [PATCH v4 05/17] xen/arm: ITS: implement hw_irq_controller for LPIs
Date: Wed, 15 Jul 2015 22:31:26 +0530
Message-ID:
References: <1436514172-3263-1-git-send-email-vijay.kilari@gmail.com>
 <1436514172-3263-6-git-send-email-vijay.kilari@gmail.com>
 <55A42B1C.2030600@citrix.com>
 <55A61940.9010408@citrix.com>
 <1436952729.11153.28.camel@citrix.com>
 <1436970531.32371.96.camel@citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1436970531.32371.96.camel@citrix.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Ian Campbell
Cc: Stefano Stabellini, Prasun Kapoor, Vijaya Kumar K, Tim Deegan,
 "xen-devel@lists.xen.org", Julien Grall, Stefano Stabellini,
 manish.jaggi@caviumnetworks.com
List-Id: xen-devel@lists.xenproject.org

On Wed, Jul 15, 2015 at 7:58 PM, Ian Campbell wrote:
> On Wed, 2015-07-15 at 19:45 +0530, Vijay Kilari wrote:
>> On Wed, Jul 15, 2015 at 3:02 PM, Ian Campbell wrote:
>> > On Wed, 2015-07-15 at 10:26 +0200, Julien Grall wrote:
>> >
>> >> >>> @@ -149,7 +173,7 @@ int gic_route_irq_to_guest(struct domain *d, unsigned
>> >> >>> int virq,
>> >> >>>          test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) )
>> >> >>>          goto out;
>> >> >>>
>> >> >>> -    desc->handler = gic_hw_ops->gic_guest_irq_type;
>> >> >>> +    desc->handler = get_guest_hw_irq_controller(desc->irq);
>> >> >>>      set_bit(_IRQ_GUEST, &desc->status);
>> >> >>>
>> >> >>>      gic_set_irq_properties(desc, cpumask_of(v_target->processor),
>> >> >>>                             priority);
>> >> >>> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
>> >> >>> index 2dd43ee..ba8528a 100644
>> >> >>> --- a/xen/arch/arm/irq.c
>> >> >>> +++ b/xen/arch/arm/irq.c
>> >> >>> @@ -35,7 +35,13 @@ static DEFINE_SPINLOCK(local_irqs_type_lock);
>> >> >>>  struct irq_guest
>> >> >>>  {
>> >> >>>      struct domain *d;
>> >> >>> -    unsigned int virq;
>> >> >>> +    union
>> >> >>> +    {
>> >> >>> +        /* virq refers to the virtual irq in case of SPI */
>> >> >>> +        unsigned int virq;
>> >> >>> +        /* vid refers to the event ID in case of LPI */
>> >> >>> +        unsigned int vid;
>> >> >>
>> >> >> Why can't we store the event ID in the irq_guest? As said on v3, this is not
>> >> >
>> >> > Are you referring to irq_desc in above statement?
>> >>
>> >> Yes sorry.
>> >
>> > I'm afraid I don't follow your suggestion here, are you suggesting that
>> > the vid field added above should be moved to irq_desc?
>> >
>> > But the vid _is_ domain specific, it is the virtual event ID which is
>> > per-domain (it's the thing looked up in the ITT to get a vLPI to be
>> > injected). I think it is a pretty direct analogue of the virq field used
>> > for non-LPI irq_guest structs.
>> >
>> > If we had need for the physical event id then that would likely belong in
>> > the irq_desc.
>> >
>> > Your proposal on v3 looks to be around moving the its_device pointer to
>> > the irq_desc, which appears to have been done here, along with turning
>> > the virq+vid into a union as requested there too.
>> >
>> >> >> It has been suggested by Ian to move col_id into its_device in the
>> >> >> previous version [4]. Any reason not to do it?
>> >> >
>> >> > Each plpi is attached to a col_id in round-robin fashion, so storing
>> >> > it in its_device is not possible. In Linux, the latest col_id for which
>> >> > set_affinity was called is stored in the its_device structure.
>> >
>> > Are you saying that in Linux all Events/LPIs associated with a given ITS
>> > device are routed to the same collection?
>>
>> Not associated with the same collection. Events are associated with the cpu
>> for which cpu_mask is set, and the collection id of that cpu is stored
>> in the its_device,
>
> Surely storing the collection ID is precisely equivalent to storing the
> cpu mask of the one CPU to which that collection ID is routed?
>
> But that is orthogonal to my question anyway, so let me try again:
>
> Are you saying that in Linux all Events/LPIs associated with a given ITS
> device are routed to the same CPU?
>
>> which is later used for SYNC.
>
>> So effectively it_device
>> does not store collection id associated for all Events of that device.
>
> Right above you said "and the collection id of that cpu is stored in the
> its_device", which contradicts what you now say here. (Unless it_device
> is not a typo of its_device but a separate thing?)

Sorry, I may not have been clear. In Linux, when MSI-X is enabled, the
device is created first and its_device->collection is set to the first
online cpu's collection in its_create_device(), then mapvi is called with
that collection, as below:

struct its_device *its_create_device(struct its_node *its, u32 dev_id,
                                     int nvecs)
{
    ....
    /* Bind the device to the first possible CPU */
    cpu = cpumask_first(cpu_online_mask);
    dev->collection = &its->collections[cpu];
    ....
}

int its_alloc_device_irq(struct its_device *dev, u32 id,
                         int *hwirq, unsigned int *irq)
{
    ...
    /* Map the GIC irq ID to the device */
    its_send_mapvi(dev, *hwirq, id);
    ...
}

When affinity is set, a MOVI is sent with the collection id selected for
the cpu_mask:

static int its_set_affinity(struct irq_data *d,
                            const struct cpumask *mask_val, bool force)
{
    unsigned int cpu = cpumask_any_and(mask_val, cpu_online_mask);
    ...
    target_col = &its_dev->its->collections[cpu];
    ....
    its_send_movi(its_dev, target_col, id);
    its_dev->collection = target_col;
    ...
}

So the collection id to Event/LPI mapping is not stored.

Regards
Vijay