Date: Tue, 26 Sep 2017 20:59:19 -0300
From: Marcelo Tosatti <mtosatti@redhat.com>
To: Jan Kiszka
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [patch 0/3] KVM KVM_HC_RT_PRIO hypercall support
Message-ID: <20170926235916.GC10809@amt.cnet>
References: <20170921113835.031375194@redhat.com> <0e9df6b6-f8ea-ad55-3308-9e583128cf46@siemens.com> <20170922011857.GC20133@amt.cnet> <1663b883-a59e-2093-5ccb-308cc7f0bda5@siemens.com>
In-Reply-To: <1663b883-a59e-2093-5ccb-308cc7f0bda5@siemens.com>

On Fri, Sep 22, 2017 at 08:23:02AM +0200, Jan Kiszka wrote:
> On 2017-09-22 03:19, Marcelo Tosatti wrote:
> > On Thu, Sep 21, 2017 at 07:45:32PM +0200, Jan Kiszka wrote:
> >> On 2017-09-21 13:38, Marcelo Tosatti wrote:
> >>> Guest vcpu-0 executes with FIFO:1 priority, which is necessary to
> >>> deal with the following situation:
> >>>
> >>>    VCPU-0 (housekeeping VCPU)       VCPU-1 (realtime VCPU)
> >>>
> >>>    raw_spin_lock(A)
> >>>    interrupted, schedule task T-1   raw_spin_lock(A) (spin)
> >>>
> >>>    raw_spin_unlock(A)
> >>>
> >>> Certain operations, however, must be able to interrupt guest vcpu-0
> >>> (see trace below). To fix this issue, change guest vcpu-0 to FIFO
> >>> priority only during spinlock critical sections (see patch).
> >>>
> >>> Hang trace
> >>> ==========
> >>>
> >>> Without FIFO priority:
> >>>
> >>> qemu-kvm-6705  [002] ....1.. 767785.648964: kvm_exit: reason IO_INSTRUCTION rip 0xe8fe info 1f00039 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648965: kvm_exit: reason IO_INSTRUCTION rip 0xe911 info 3f60008 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648968: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648971: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648974: kvm_exit: reason IO_INSTRUCTION rip 0xb514 info 3f60000 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648977: kvm_exit: reason PENDING_INTERRUPT rip 0x8052 info 0 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648980: kvm_exit: reason IO_INSTRUCTION rip 0xeee6 info 200040 0
> >>> qemu-kvm-6705  [002] ....1.. 767785.648999: kvm_exit: reason EPT_MISCONFIG rip 0x2120 info 0 0
> >>>
> >>> With FIFO priority:
> >>>
> >>> qemu-kvm-7636  [002] ....1.. 768218.205065: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> >>> qemu-kvm-7636  [002] ....1.. 768218.205068: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
> >>> qemu-kvm-7636  [002] ....1.. 768218.205071: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> >>> qemu-kvm-7636  [002] ....1.. 768218.205074: kvm_exit: reason IO_INSTRUCTION rip 0x8984 info 608000b 0
> >>> qemu-kvm-7636  [002] ....1.. 768218.205077: kvm_exit: reason IO_INSTRUCTION rip 0xb313 info 1f70008 0
> >>> ..
> >>>
> >>> Performance numbers (kernel compilation with make -j2)
> >>> ======================================================
> >>>
> >>> With hypercall:    4:40 (make -j2).
> >>> Without hypercall: 3:38 (make -j2).
> >>>
> >>> Note that for NFV workloads spinlock performance is not relevant,
> >>> since DPDK should not enter the kernel (and housekeeping vcpu
> >>> performance is far from a key factor).
> >>>
> >>> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
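
(For reference, a minimal sketch of what the guest-side usage of such
a hypercall would look like. This is illustrative only: the hypercall
number and the wrapper names below are placeholders I made up, not the
interface from the actual patches.)

#include <linux/spinlock.h>
#include <linux/kvm_para.h>	/* kvm_hypercall1() */

/* Placeholder hypercall number; the real one comes from the patches. */
#define KVM_HC_RT_PRIO		12

/* Ask the host to raise this vcpu to its configured SCHED_FIFO prio. */
static inline void kvm_rt_prio_enter(void)
{
	kvm_hypercall1(KVM_HC_RT_PRIO, 1);
}

/* Ask the host to drop this vcpu back to SCHED_NORMAL. */
static inline void kvm_rt_prio_exit(void)
{
	kvm_hypercall1(KVM_HC_RT_PRIO, 0);
}

/* Usage around a critical section that VCPU-1 may spin on: */
static void vcpu0_critical_section(raw_spinlock_t *a)
{
	kvm_rt_prio_enter();	/* boost before taking the lock */
	raw_spin_lock(a);
	/* ... critical section ... */
	raw_spin_unlock(a);
	kvm_rt_prio_exit();	/* unboost once the lock is released */
}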
> >>
> >> That sounds familiar, though not yet the same: :)
> >>
> >> http://git.kiszka.org/?p=linux-kvm.git;a=shortlog;h=refs/heads/queues/paravirt-sched
> >> (paper: http://lwn.net/images/conf/rtlws11/papers/proc/p18.pdf)
> >>
> >> I suppose your goal is not to enable the host to follow the guest
> >> scheduler priority completely, but only to have prio-ceiling for such
> >> short critical sections. It may still be useful to think ahead about
> >> future extensions when actually introducing such an interface.
> >
> > Hi Jan!
> >
> > Hum... I'll take a look at your interface/paper and get back to you.
> >
> >> But shouldn't there be some limit on the maximum prio the guest can
> >> select?
> >
> > The SCHED_FIFO prio is fixed, selectable when QEMU starts. Do you
> > envision any use case other than a fixed priority value selectable
> > at QEMU initialization?
>
> Oh, indeed, this is a pure prio-ceiling variant with a host-defined
> ceiling value.
>
> But it's very inefficient to use a hypercall for entering and leaving
> each and every section. I would strongly recommend using a lazy scheme
> where the guest writes the desired state into a shared memory page and
> the host only evaluates that prior to taking a scheduling decision, or
> at least only on real vmexits. We're using such a scheme successfully
> to accelerate the fast path of prio-ceiling for pthread mutexes in the
> Xenomai real-time extension.

Yes, a faster scheme was envisioned, but not developed.
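
Something along the lines of the sketch below, presumably. To make the
lazy scheme concrete: the guest flips a flag in a page it shares with
the host instead of exiting, and the host samples that flag on its next
vmexit, before making a scheduling decision. The structure layout and
all names here are hypothetical, as is the deboost hypercall on the
slow path (same placeholder as in the sketch above).

#include <linux/types.h>
#include <linux/compiler.h>	/* READ_ONCE() / WRITE_ONCE() */
#include <linux/kvm_para.h>

#define KVM_HC_RT_PRIO		12	/* placeholder, as above */

/* Hypothetical guest/host shared page layout. */
struct kvm_rt_prio_shared {
	__u32 boost_requested;	/* guest -> host: wants SCHED_FIFO   */
	__u32 boost_active;	/* host -> guest: boost is in effect */
};

static struct kvm_rt_prio_shared *rt_shared;	/* mapped at init */

/* Entering the critical section costs one store, no vmexit. */
static inline void kvm_rt_prio_enter_lazy(void)
{
	WRITE_ONCE(rt_shared->boost_requested, 1);
}

/*
 * Leaving is also exit-free in the common case; only if the host
 * actually applied the boost while the lock was held does the guest
 * take the slow path and ask to be deboosted immediately.
 */
static inline void kvm_rt_prio_exit_lazy(void)
{
	WRITE_ONCE(rt_shared->boost_requested, 0);
	if (READ_ONCE(rt_shared->boost_active))
		kvm_hypercall1(KVM_HC_RT_PRIO, 0);
}

The host side would then check boost_requested in its vmexit path and
apply or remove the SCHED_FIFO ceiling there, so the uncontended case
never leaves the guest.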