Date: Thu, 21 Apr 2022 14:46:23 +0000
From: Sean Christopherson
To: Peter Gonda
Cc: Mingwei Zhang, kvm, LKML
Subject: Re: [PATCH] KVM: SEV: Add cond_resched() to loop in sev_clflush_pages()
References: <20220330164306.2376085-1-pgonda@google.com>

On Thu, Apr 21, 2022, Peter Gonda wrote:
> On Mon, Apr 18, 2022 at 9:48 AM Sean Christopherson wrote:
> >
> > On Wed, Apr 06, 2022, Peter Gonda wrote:
> > > On Wed, Apr 6, 2022 at 12:26 PM Sean Christopherson wrote:
> > > >
> > > > On Wed, Apr 06, 2022, Mingwei Zhang wrote:
> > > > > Hi Sean,
> > > > >
> > > > > > > > diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> > > > > > > > index 75fa6dd268f0..c2fe89ecdb2d 100644
> > > > > > > > --- a/arch/x86/kvm/svm/sev.c
> > > > > > > > +++ b/arch/x86/kvm/svm/sev.c
> > > > > > > > @@ -465,6 +465,7 @@ static void sev_clflush_pages(struct page *pages[], unsigned long npages)
> > > > > > > >  		page_virtual = kmap_atomic(pages[i]);
> > > > > > > >  		clflush_cache_range(page_virtual, PAGE_SIZE);
> > > > > > > >  		kunmap_atomic(page_virtual);
> > > > > > > > +		cond_resched();
> > > > > > >
> > > > > > > If you add cond_resched() here, the frequency (once per 4K) might be
> > > > > > > too high. You may want to do it once per X pages, where X could be
> > > > > > > something like 1G/4K?
> > > > > >
> > > > > > No, every iteration is perfectly ok. The "cond"itional part means that
> > > > > > this will reschedule if and only if it actually needs to be rescheduled,
> > > > > > e.g. if the task's timeslice has expired. The check for a needed
> > > > > > reschedule is cheap; using cond_resched() in tight-ish loops is ok and
> > > > > > intended, e.g. KVM does a resched check prior to entering the guest.
> > > > >
> > > > > I double-checked the code again. I think the point is not the flag check
> > > > > itself; obviously branch prediction could really help there. The point,
> > > > > I think, is the 'call' to cond_resched(). Depending on the kernel
> > > > > configuration, cond_resched() may not always be inlined, at least that
> > > > > is my understanding so far. So if that is true, it still might not
> > > > > always be best to call cond_resched() that often.
> > > >
> > > > Eh, compared to the cost of 64 back-to-back CLFLUSHOPTs, the cost of
> > > > __cond_resched() is peanuts. Even accounting for the rcu_all_qs() work,
> > > > it's still dwarfed by the cost of flushing data from the cache. E.g.
> > > > based on Agner Fog's wonderful uop latencies[*], the actual flush time
> > > > for a single page is going to be upwards of 10k cycles, whereas
> > > > __cond_resched() is going to be well under 100 cycles in the happy case
> > > > of no work. Even if those throughput numbers are off by an order of
> > > > magnitude, e.g. if CLFLUSHOPT can complete in 15 cycles, that's still
> > > > ~1k cycles.
> > > >
> > > > Peter, don't we also theoretically need cond_resched() in the loops in
> > > > sev_launch_update_data()? AFAICT, there's no artificial restriction on
> > > > the size of the payload, i.e. the kernel is effectively relying on
> > > > userspace to not update large swaths of memory.
> > >
> > > Yeah, we probably do want a cond_resched() in the for loop inside of
> > > sev_launch_update_data(). I think in sev_dbg_crypt() userspace could
> > > request a large number of pages to be decrypted/encrypted for
> > > debugging, but we have a call to sev_pin_memory() in the loop, so that
> > > will have a cond_resched() inside of __get_user_pages(). Or should we
> > > have a cond_resched() inside of the loop in sev_dbg_crypt() too?
> >
> > I believe sev_dbg_crypt() needs a cond_resched() of its own;
> > sev_pin_memory() isn't guaranteed to get into the slow path of
> > internal_get_user_pages_fast().
>
> Ah, understood, thanks. I'll send out patches for those two paths. I
> personally haven't seen any warning logs from them though.

Do you have test cases that deliberately attempt to decrypt+read pages upon
pages of guest memory at a time? Unless someone has wired up a VMM to do a
full dump of guest memory, I highly doubt a "real" VMM will do more than
read a handful of bytes at a time.