From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:37679) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1UHWiS-0001Cb-Oy for qemu-devel@nongnu.org; Mon, 18 Mar 2013 05:50:56 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1UHWiN-0008CO-HQ for qemu-devel@nongnu.org; Mon, 18 Mar 2013 05:50:52 -0400
Received: from mailpro.odiso.net ([89.248.209.98]:36312) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1UHWiN-00087Z-8M for qemu-devel@nongnu.org; Mon, 18 Mar 2013 05:50:47 -0400
Date: Mon, 18 Mar 2013 10:50:19 +0100 (CET)
From: Alexandre DERUMIER
Message-ID: <0c8d28e0-07d9-499f-920d-7ceca4155aab@mailpro>
In-Reply-To: <20130317090816.GB28528@redhat.com>
Content-Type: text/plain; charset=utf-8
MIME-Version: 1.0
Subject: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores
To: "Michael S. Tsirkin"
Cc: Peter Lieven, Stefan Hajnoczi, qemu-devel@nongnu.org, Davide Guerri, Jan Kiszka, Peter Lieven, Dietmar Maurer

Hello,

About this bug: the latest Proxmox release uses the 2.6.32 RHEL 6.3 kernel plus qemu 1.4, and has this problem with guests running a 2.6.32 kernel.

Do you think that adding +x2apic to the guest CPU flags could help? (I believe it is enabled by default in RHEV/oVirt, but not in Proxmox.)

----- Original Message -----
From: "Michael S. Tsirkin"
To: "Peter Lieven"
Cc: "Davide Guerri", "Alexandre DERUMIER", "Stefan Hajnoczi", qemu-devel@nongnu.org, "Jan Kiszka", "Peter Lieven", "Dietmar Maurer"
Sent: Sunday, 17 March 2013 10:08:17
Subject: Re: [Qemu-devel] slow virtio network with vhost=on and multiple cores

On Fri, Mar 15, 2013 at 08:23:44AM +0100, Peter Lieven wrote:
> On 15.03.2013 00:04, Davide Guerri wrote:
> >Yes this is definitely an option :)
> >
> >Just for curiosity, what is the effect of "in-kernel irqchip"?
>
> It emulates the irqchip in-kernel (in the KVM kernel module), which
> avoids userspace exits to qemu. In your particular case I remember
> that it made all IRQs get delivered to vcpu0 only. So I think this is a workaround
> and not the real fix. I think Michael is right that it is a
> guest kernel bug. It would be good to find out what it is and ask
> the 2.6.32 maintainers to include it. I have further seen that
> with more recent kernels and in-kernel irqchip the IRQs are delivered
> to vcpu0 only again (without multiqueue).
>
> >Is it possible to disable it on a "live" domain?
>
> Try it, I don't know. You definitely have to do a live migration for it,
> but I have no clue if the VM will survive it.
>
> Peter

I doubt you can migrate VMs between irqchip/non-irqchip configurations.

> >
> >Cheers,
> > Davide
> >
> >
> >On 14/mar/2013, at 19:21, Peter Lieven wrote:
> >
> >>
> >>On 14.03.2013 at 19:15, Davide Guerri wrote:
> >>
> >>>Of course I can do some tests but a kernel upgrade is not an option here :(
> >>
> >>Disabling the in-kernel irqchip (default since 1.2.0) should also help; maybe this is an option.
> >>
> >>Peter
> >>
> >>
> >
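For readers of the archive: the two workarounds discussed in this thread can be sketched as below. This is a hedged illustration, not the reporters' actual setup; the qemu command lines are shown as comments, and the /proc/interrupts content is a made-up sample used only to demonstrate how the "all IRQs on vcpu0" symptom shows up.

```shell
# Workaround 1 (Alexandre's question): expose x2APIC to the guest.
# Illustrative host invocation:
#   qemu-system-x86_64 -enable-kvm -cpu host,+x2apic ...

# Workaround 2 (Peter's suggestion): fall back to the userspace irqchip.
# Note the thread's caveat: you likely cannot live-migrate between
# irqchip and non-irqchip configurations.
#   qemu-system-x86_64 -enable-kvm -machine kernel_irqchip=off ...

# Inside the guest, the symptom is visible in /proc/interrupts: all
# virtio-net interrupt counts pile up in the CPU0 column. A sample
# (fabricated for illustration) of what that looks like:
cat <<'EOF' > /tmp/interrupts.sample
           CPU0       CPU1       CPU2       CPU3
 24:     901223          0          0          0   PCI-MSI-edge  virtio0-input.0
 25:     455120          0          0          0   PCI-MSI-edge  virtio0-output.0
EOF

# Print IRQ number, CPU0 count, and device name for virtio lines;
# nonzero counts only under CPU0 indicate the vcpu0 pile-up.
awk '/virtio/ {print $1, $2, $NF}' /tmp/interrupts.sample
```

On a real guest you would run the awk line against /proc/interrupts itself (twice, a few seconds apart, to see which counters are actually moving).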