From: Andy Lutomirski
Date: Tue, 15 Mar 2016 11:27:16 -0700
Subject: Re: [PATCH 1/1] KVM: don't allow irq_fpu_usable when the VCPU's XCR0 is loaded
To: Paolo Bonzini
Cc: David Matlack, "linux-kernel@vger.kernel.org", X86 ML, kvm list,
 Ingo Molnar, Andrew Lutomirski, "H. Peter Anvin", Eric Northup
In-Reply-To: <56E6BA06.7000907@redhat.com>
References: <1457729240-3846-1-git-send-email-dmatlack@google.com>
 <1457729240-3846-2-git-send-email-dmatlack@google.com>
 <56E6BA06.7000907@redhat.com>

On Mon, Mar 14, 2016 at 6:17 AM, Paolo Bonzini wrote:
>
>
> On 11/03/2016 22:33, David Matlack wrote:
>> > Is this better than just always keeping the host's XCR0 loaded outside
>> > of the KVM interrupts-disabled region?
>>
>> Probably not. AFAICT KVM does not rely on it being loaded outside that
>> region. xsetbv isn't insanely expensive, is it? Maybe it was put outside
>> to minimize the time spent with interrupts disabled.
>>
>> I do like that your solution would be contained to KVM.
>
> I agree with Andy. We do want a fix for recent kernels because of the
> !eager_fpu case that Guangrong mentioned.
>
> Paolo
>
> ps: while Andy is planning to kill lazy FPU, I want to benchmark it with
> KVM... Remember that with a single pre-xsave host in your cluster, your
> virt management might happily default your VMs to a Westmere or Nehalem
> CPU model. GCC might be a pretty good testbench for this (e.g. a kernel
> compile with a very high make -j), because outside of the lexer (which
> plays SIMD games) it never uses the FPU.

Aren't pre-xsave CPUs really, really old? A brief search suggests that
Intel added xsave somewhere in the middle of the Core cycle.

For pre-xsave CPUs, eager FPU could indeed hurt performance a tiny bit
under workloads that use the FPU and then stop completely, because the
xsaveopt and init optimizations aren't available. But even that is
probably a very small effect, especially because pre-xsave CPUs have
smaller FPU state sizes.

--
Andy Lutomirski
AMA Capital Management, LLC
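
(For anyone who wants to check which hosts are actually pre-xsave before
benchmarking, here is a minimal user-space sketch; it is not KVM or kernel
code, and the helper names are made up for this example. It checks
CPUID.1:ECX bit 26 for XSAVE and bit 27 for OSXSAVE, and reads XCR0 with
xgetbv when the OS has enabled it.)

#include <stdint.h>
#include <stdio.h>
#include <cpuid.h>          /* GCC/clang CPUID helpers */

/* CPUID.1:ECX.XSAVE (bit 26): the CPU implements xsave/xrstor/xsetbv. */
static int cpu_has_xsave(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;
    return !!(ecx & (1u << 26));
}

/* CPUID.1:ECX.OSXSAVE (bit 27): the OS set CR4.OSXSAVE, so xgetbv works. */
static int os_enabled_xsave(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 0;
    return !!(ecx & (1u << 27));
}

/* xgetbv with ECX=0 returns XCR0; legal in user mode once OSXSAVE is set. */
static uint64_t read_xcr0(void)
{
    uint32_t lo, hi;

    __asm__ volatile("xgetbv" : "=a" (lo), "=d" (hi) : "c" (0));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    if (!cpu_has_xsave()) {
        puts("pre-xsave CPU (no CPUID.1:ECX.XSAVE)");
        return 0;
    }
    if (os_enabled_xsave())
        printf("xsave present, XCR0 = %#llx\n",
               (unsigned long long)read_xcr0());
    else
        puts("xsave present but CR4.OSXSAVE is not set");
    return 0;
}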