Date: Fri, 29 Sep 2017 17:17:50 -0300
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Paolo Bonzini
Cc: Peter Zijlstra, Konrad Rzeszutek Wilk, mingo@redhat.com,
 kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Thomas Gleixner
Subject: Re: [patch 3/3] x86: kvm guest side support for KVM_HC_RT_PRIO hypercall
Message-ID: <20170929201747.GB12447@amt.cnet>
References: <20170925091316.bnwpiscs2bvpdxk5@hirez.programming.kicks-ass.net>
 <00ff8cbf-4e41-a950-568c-3bd95e155d4b@redhat.com>
 <20170926224925.GA9119@amt.cnet>
 <6f4afefd-8726-13ff-371e-0d3896b4cf6a@redhat.com>
 <20170928004452.GA30040@amt.cnet>
 <10635834-459a-9ec1-624d-febd6b5af243@redhat.com>
 <20170928213508.GA14053@amt.cnet>
 <06b714d8-7b66-6e03-a992-e359241abf84@redhat.com>
 <20170929164006.GC29391@amt.cnet>

On Fri, Sep 29, 2017 at 07:05:41PM +0200, Paolo Bonzini wrote:
> On 29/09/2017 18:40, Marcelo Tosatti wrote:
> >> If you know you have this kind of disk workload, you must use
> >> virtio-blk or virtio-scsi with iothreads and place the iothreads
> >> on their own physical CPUs.
> >>
> >> Among "run arbitrary workloads", "run real-time workloads", "pack
> >> stuff into as few physical CPUs as possible", you can only pick
> >> two.
> > 
> > That's not the state of things (userspace on vcpu-0 is not
> > specially tailored to avoid violating latencies on vcpu-1): that
> > is, not all user-triggered actions can be verified.
> > 
> > Think "updatedb", and so on...
> 
> _Which_ spinlock is it that can cause unwanted latency while running
> updatedb on VCPU0 and a real-time workload on VCPU1, and does so only
> on virt because of the emulator thread?

Hundreds of them (the one being hit here is in timer_interrupt): I went
to check, and there are hundreds of raw spinlocks shared between the
kernel threads that run on the isolated CPUs and vcpu-0.

> Is this still broken if you set up priorities for the emulator thread
> correctly and use PI mutexes in QEMU?

I don't see why it would not be: you still have to schedule the
emulator thread to process and inject I/O interrupts, for example.
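To spell out what "correct priorities plus PI mutexes" would mean
here, a minimal userspace sketch (illustrative only, not QEMU's actual
code; io_lock, init_pi_lock and set_emulator_prio are made-up names):
the emulator thread gets a fixed SCHED_FIFO priority below the vcpu
threads, and any lock it shares with them is created with
PTHREAD_PRIO_INHERIT.

#include <pthread.h>
#include <sched.h>
#include <string.h>

/* Stand-in for any lock shared between the emulator thread and the
 * vcpu threads. */
static pthread_mutex_t io_lock;

static int init_pi_lock(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	/* The lock holder is boosted to the priority of the highest
	 * priority waiter, avoiding priority inversion. */
	pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
	return pthread_mutex_init(&io_lock, &attr);
}

static int set_emulator_prio(pthread_t thread, int prio)
{
	struct sched_param sp;

	memset(&sp, 0, sizeof(sp));
	sp.sched_priority = prio;	/* e.g. vcpu priority - 1 */
	return pthread_setschedparam(thread, SCHED_FIFO, &sp);
}

Even with that in place, an interrupt for the guest still means waking
the emulator thread and waiting for it to be scheduled, which is the
latency source in question.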
> And if so, what is the cause of interruptions in the emulator thread
> and how are these interruptions causing the jitter?

Interrupt injections.

> Priorities and priority inheritance (or lack of them) is a _known_
> issue. Jan was doing his KVM-RT things in 2009 and he was talking
> about priorities[1] back then. The effect of correct priorities is
> to _lower_ jitter, not to make it worse, and anyway certainly not
> worse than a SCHED_NORMAL I/O thread. Once that's fixed, we can look
> at other problems.
> 
> Paolo
> 
> [1] http://static.lwn.net/images/conf/rtlws11/papers/proc/p18.pdf
> which also mentions pv scheduling
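P.S.: for anyone reading the archive without the rest of the thread,
the interface under discussion is a guest-to-host hypercall with which
the guest asks the host to raise, and later restore, the scheduling
priority of the vcpu it runs on around latency-sensitive sections.
The guest side reduces to roughly the sketch below; kvm_hypercall1()
is the existing guest helper, while the KVM_HC_RT_PRIO value and its
one-argument convention are taken from this unmerged series and shown
only for illustration.

#include <linux/kvm_para.h>	/* kvm_hypercall1() */

#define KVM_HC_RT_PRIO	11	/* illustrative; not a merged ABI */

/* enable=1: ask the host to boost this vcpu to an RT priority;
 * enable=0: ask the host to drop the boost again. */
static void kvm_set_rt_prio(unsigned long enable)
{
	kvm_hypercall1(KVM_HC_RT_PRIO, enable);
}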