From: Andy Lutomirski
Subject: Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE
Date: Mon, 7 Dec 2020 08:53:59 -0800
Message-Id: <905DFDCE-71A5-4711-A31B-9FCFEA2CFC52@amacapital.net>
In-Reply-To: <87a6up606r.fsf@nanos.tec.linutronix.de>
References: <87a6up606r.fsf@nanos.tec.linutronix.de>
To: Thomas Gleixner
Cc: Maxim Levitsky, kvm@vger.kernel.org, "H. Peter Anvin", Paolo Bonzini,
 Jonathan Corbet, Jim Mattson, Wanpeng Li,
 "open list:KERNEL SELFTEST FRAMEWORK", Vitaly Kuznetsov, Marcelo Tosatti,
 Sean Christopherson, open list, Ingo Molnar,
 "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)", Joerg Roedel,
 Borislav Petkov, Shuah Khan, Andrew Jones, Oliver Upton,
 "open list:DOCUMENTATION"
X-Mailing-List: linux-kernel@vger.kernel.org

> On Dec 7, 2020, at 8:38 AM, Thomas Gleixner wrote:
> 
> On Mon, Dec 07 2020 at 14:16, Maxim Levitsky wrote:
>>> On Sun, 2020-12-06 at 17:19 +0100, Thomas Gleixner wrote:
>>> From a timekeeping POV and the guest's expectation of TSC this is
>>> fundamentally wrong:
>>> 
>>>     tscguest = scaled(hosttsc) + offset
>>> 
>>> The TSC has to be viewed systemwide and not per CPU. It's used
>>> systemwide for timekeeping, and for that to work it has to be
>>> synchronized.
>>> 
>>> Why would this be different on virt? Just because it's virt, or what?
>>> 
>>> Migration is a guest-wide thing and you're not migrating single vCPUs.
>>> 
>>> This hackery just papers over the underlying design fail that KVM looks
>>> at the TSC per vCPU, which is the root cause, and that needs to be fixed.
>> 
>> I don't disagree with you.
>> As far as I know, the main reasons that KVM tracks TSC per guest are:
>> 
>> 1. Cases when the host TSC is not stable
>>    (hopefully rare now, and I don't mind making the new API just refuse
>>    to work when this is detected, and reverting to the old way of doing
>>    things).
> 
> That's a trainwreck to begin with and I really would just not support it
> for anything new which aims to be more precise and correct. TSC has
> become pretty reliable over the years.
> 
>> 2. The (theoretical) ability of the guest to introduce a per-core TSC
>>    offset, either by using TSC_ADJUST (for which I recently got an idea
>>    to stop advertising this feature to the guest) or by writing the TSC
>>    directly, which is allowed by Intel's PRM:
> 
> For anything halfway modern, a write to the TSC is reflected in
> TSC_ADJUST, which means you get the precise offset.
> 
> The general principle still applies from a system POV.
> 
> TSC base (systemwide view) - the sane case:
> 
>     TSC CPU = TSC base + TSC_ADJUST
> 
> The guest TSC base is a per-guest constant offset to the host TSC:
> 
>     TSC guest base = TSC host base + guest base offset
> 
> If the guest wants this to differ per vCPU by writing to the MSR or to
> TSC_ADJUST, then you can still have a per-vCPU offset in TSC_ADJUST which
> is the offset to the TSC base of the guest.

How about this: if the guest wants to write TSC_ADJUST, it can turn off all
paravirt features and keep both pieces?
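
[Editorial note: for readers following the offset arithmetic in the quoted
text, a minimal C sketch of the relationships being described is given
below. This is illustrative only, not KVM's actual implementation; the
struct and function names are hypothetical.]

	/*
	 * Sketch of the model in the quoted mail:
	 *
	 *   TSC guest base = scaled(TSC host) + guest base offset  (guest-wide)
	 *   TSC CPU        = TSC guest base + TSC_ADJUST           (per vCPU)
	 */
	#include <stdint.h>

	struct guest_tsc {
		uint64_t scale_mult;		/* fixed-point frequency scaling multiplier */
		unsigned int scale_shift;	/* fixed-point scaling shift */
		int64_t base_offset;		/* guest-wide offset vs. scaled host TSC */
	};

	struct vcpu_tsc {
		int64_t tsc_adjust;		/* per-vCPU deviation the guest created */
	};

	/* Scale the host TSC into the guest's TSC frequency. */
	static uint64_t scale_host_tsc(const struct guest_tsc *g, uint64_t host_tsc)
	{
		return (uint64_t)(((unsigned __int128)host_tsc * g->scale_mult)
				  >> g->scale_shift);
	}

	/* Guest-wide TSC base: a single constant offset for the whole guest. */
	static uint64_t guest_tsc_base(const struct guest_tsc *g, uint64_t host_tsc)
	{
		return scale_host_tsc(g, host_tsc) + (uint64_t)g->base_offset;
	}

	/* What a given vCPU reads: the guest base plus its own TSC_ADJUST. */
	static uint64_t vcpu_tsc_read(const struct guest_tsc *g,
				      const struct vcpu_tsc *v, uint64_t host_tsc)
	{
		return guest_tsc_base(g, host_tsc) + (uint64_t)v->tsc_adjust;
	}

On this model, migration would only need to transfer the single guest-wide
base offset (plus each vCPU's TSC_ADJUST), rather than a separate TSC value
per vCPU.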