From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 9 Aug 2022 23:51:41 +0000
From: Sean Christopherson
To: David Matlack
Cc: Vipin Sharma, Paolo Bonzini, kvm list, LKML
Subject: Re: [PATCH] KVM: x86/mmu: Make page tables for eager page splitting NUMA aware
References: <20220801151928.270380-1-vipinsh@google.com> <4ccbafb5-9157-ec73-c751-ec71164f8688@redhat.com>

On Tue, Aug 09, 2022, David Matlack wrote:
> On Fri, Aug 5, 2022 at 4:30 PM Vipin Sharma wrote:
> > Approach B:
> > Ask page from the specific node on fault path with option to fallback
> > to the original cache and default task policy.
> >
> > This is what Sean's rough patch looks like.
>
> This would definitely be a simpler approach but could increase the
> amount of time a vCPU thread holds the MMU lock when handling a fault,
> since KVM would start performing GFP_NOWAIT allocations under the
> lock. So my preference would be to try the cache approach first and
> see how complex it turns out to be.

Ya, as discussed off-list, I don't like my idea either :-)

The pfn and thus node information is available before mmu_lock is
acquired, so I don't see any reason to defer the allocation other than
to reduce the memory footprint, and that's a solvable problem one way
or another.
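[Editorial note: for readers following along, below is a rough, untested
sketch of what the "Approach B" fault-path allocation being discussed
could look like.  The function name spt_alloc_node() and the exact gfp
flags are illustrative assumptions, not the actual rough patch referenced
in the thread.]

#include <linux/gfp.h>
#include <linux/kvm_host.h>

/*
 * Illustrative sketch only.  Attempt a GFP_NOWAIT allocation from the
 * node backing the faulting pfn; if that node has no free pages (or
 * nid is NUMA_NO_NODE), fall back to the pre-filled memory cache,
 * i.e. to the original behavior under the task's default policy.
 */
static void *spt_alloc_node(struct kvm_vcpu *vcpu, int nid)
{
	struct page *page;

	page = alloc_pages_node(nid, GFP_NOWAIT | __GFP_ZERO | __GFP_ACCOUNT, 0);
	if (page)
		return page_address(page);

	/* Fallback: the cache is topped up outside mmu_lock as today. */
	return kvm_mmu_memory_cache_alloc(&vcpu->arch.mmu_shadow_page_cache);
}

[The node id itself could be derived from the pfn, e.g. via pfn_to_nid(),
before mmu_lock is taken, which is the point above about the node
information being available before the lock is acquired.]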