From: Jan Kiszka
Subject: Re: [RFC PATCH 3/4] KVM: x86: Add EOI exit bitmap inference
Date: Wed, 13 May 2015 10:10:42 +0200
Message-ID: <55530702.5000407@siemens.com>
In-Reply-To: <555305A5.8060601@redhat.com>
References: <1431481652-27268-1-git-send-email-srutherford@google.com>
 <1431481652-27268-3-git-send-email-srutherford@google.com>
 <5552EB35.2070806@siemens.com> <555305A5.8060601@redhat.com>
To: Paolo Bonzini, Steve Rutherford, kvm@vger.kernel.org
Cc: ahonig@google.com

On 2015-05-13 10:04, Paolo Bonzini wrote:
>
>
> On 13/05/2015 08:12, Jan Kiszka wrote:
>>> +void kvm_scan_ioapic_routes(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
>>> +{
>>> +	struct kvm *kvm = vcpu->kvm;
>>> +	struct kvm_kernel_irq_routing_entry *entry;
>>> +	struct kvm_irq_routing_table *table;
>>> +	u32 i, nr_rt_entries;
>>> +
>>> +	mutex_lock(&kvm->irq_lock);
>
> This only needs irq_srcu protection, not irq_lock, so the lookup cost
> becomes much smaller (all CPUs can proceed in parallel).
>
> You would need to put an smp_mb here, to ensure that irq_routing is read
> after KVM_SCAN_IOAPIC is cleared.  You can introduce
> smp_mb__after_srcu_read_lock in order to elide it.
>
> The matching memory barrier would be a smp_mb__before_atomic in
> kvm_make_scan_ioapic_request.
>
>>> +	table = kvm->irq_routing;
>>> +	nr_rt_entries = min_t(u32, table->nr_rt_entries, IOAPIC_NUM_PINS);
>>> +	for (i = 0; i < nr_rt_entries; ++i) {
>>> +		hlist_for_each_entry(entry, &table->map[i], link) {
>>> +			u32 dest_id, dest_mode;
>>> +
>>> +			if (entry->type != KVM_IRQ_ROUTING_MSI)
>>> +				continue;
>>> +			dest_id = (entry->msi.address_lo >> 12) & 0xff;
>>> +			dest_mode = (entry->msi.address_lo >> 2) & 0x1;
>>> +			if (kvm_apic_match_dest(vcpu, NULL, 0, dest_id,
>>> +						dest_mode)) {
>>> +				u32 vector = entry->msi.data & 0xff;
>>> +
>>> +				__set_bit(vector,
>>> +					  (unsigned long *) eoi_exit_bitmap);
>>> +			}
>>> +		}
>>> +	}
>>> +	mutex_unlock(&kvm->irq_lock);
>>> +}
>>>
>>
>> This looks a bit frightening regarding the lookup costs. Do we really
>> have to run through the complete routing table to find the needed
>> information? There can be way more "real" MSI entries than IOAPIC pins.
>
> It does at most IOAPIC_NUM_PINS iterations, however.
>
>> There can even be multiple IOAPICs (thanks to your patches overcoming
>> the single in-kernel instance).
>
> With multiple IOAPICs you have more than 24 GSIs per IOAPIC. That means

I don't think that the number of pins per IOAPIC increases - at least
not in the devices I've seen so far.

> that the above loop is broken for multiple IOAPICs.

The worst case remains #IOAPICs * 24 iterations - provided we have a
means to stop after the IOAPIC entries instead of iterating over all
routes.

>
> But perhaps when enabling KVM_SPLIT_IRQCHIP we can use args[0] to pass
> the number of IOAPIC routes that will cause EOI exits?

And you need to ensure that those routes can be found directly in the
table. Given IOAPIC hotplug, they may not be the first entries there...

Jan

-- 
Siemens AG, Corporate Technology, CT RTC ITP SES-DE
Corporate Competence Center Embedded Linux
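
For concreteness, a rough sketch of what the scan could look like under
irq_srcu instead of irq_lock, following the review comments above. This is
not the posted patch; the barrier placement and the pairing with
kvm_make_scan_ioapic_request()/KVM_REQ_SCAN_IOAPIC are assumptions drawn
from those comments:

/*
 * Sketch only: the same IOAPIC route scan, but under the irq_srcu
 * read-side lock so that concurrent vCPUs do not serialize on irq_lock.
 */
void kvm_scan_ioapic_routes(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
{
	struct kvm *kvm = vcpu->kvm;
	struct kvm_kernel_irq_routing_entry *entry;
	struct kvm_irq_routing_table *table;
	u32 i, nr_rt_entries;
	int idx;

	idx = srcu_read_lock(&kvm->irq_srcu);

	/*
	 * Assumption per the review: make sure irq_routing is read only
	 * after KVM_REQ_SCAN_IOAPIC has been cleared; pairs with
	 * smp_mb__before_atomic() in kvm_make_scan_ioapic_request().
	 */
	smp_mb();

	table = srcu_dereference(kvm->irq_routing, &kvm->irq_srcu);
	nr_rt_entries = min_t(u32, table->nr_rt_entries, IOAPIC_NUM_PINS);
	for (i = 0; i < nr_rt_entries; ++i) {
		hlist_for_each_entry(entry, &table->map[i], link) {
			u32 dest_id, dest_mode;

			if (entry->type != KVM_IRQ_ROUTING_MSI)
				continue;
			dest_id = (entry->msi.address_lo >> 12) & 0xff;
			dest_mode = (entry->msi.address_lo >> 2) & 0x1;
			if (kvm_apic_match_dest(vcpu, NULL, 0, dest_id,
						dest_mode)) {
				u32 vector = entry->msi.data & 0xff;

				__set_bit(vector,
					  (unsigned long *) eoi_exit_bitmap);
			}
		}
	}

	srcu_read_unlock(&kvm->irq_srcu, idx);
}

With this shape, only route updates (kvm_set_irq_routing, which already
synchronizes irq_srcu) would serialize against the scan, while vCPUs
rebuilding their EOI exit bitmaps could proceed in parallel.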