Subject: Ping: [PATCH] x86/AMD: also determine L3 cache size
From: Jan Beulich
To: Andrew Cooper
Cc: Wei Liu, Roger Pau Monné, "xen-devel@lists.xenproject.org"
Date: Fri, 7 May 2021 10:25:43 +0200
On 29.04.2021 11:21, Jan Beulich wrote:
> On 16.04.2021 16:21, Andrew Cooper wrote:
>> On 16/04/2021 14:20, Jan Beulich wrote:
>>> For Intel CPUs we record L3 cache size, hence we should also do so
>>> for AMD and the like.
>>>
>>> While making these additions, also make sure (throughout the
>>> function) that we don't needlessly overwrite prior values when the
>>> new value to be stored is zero.
>>>
>>> Signed-off-by: Jan Beulich
>>> ---
>>> I have to admit though that I'm not convinced the sole real use of
>>> the field (in flush_area_local()) is a good one - flushing an entire
>>> L3's worth of lines via CLFLUSH may not be more efficient than using
>>> WBINVD. But I didn't measure it (yet).
>>
>> WBINVD always needs a broadcast IPI to work correctly.
>>
>> CLFLUSH and friends let you do this from a single CPU, using cache
>> coherency to DTRT with the line, wherever it is.
>>
>> Looking at that logic in flush_area_local(), I don't see how it can
>> be correct.  The WBINVD path is a decomposition inside the IPI, but
>> in the higher level helpers, I don't see how the "area too big,
>> convert to WBINVD" conversion can be safe.
>>
>> All users of FLUSH_CACHE are flush_all(), except two PCI
>> Passthrough-restricted cases.  MMUEXT_FLUSH_CACHE_GLOBAL looks to be
>> safe, while vmx_do_resume() has very dubious reasoning, and is dead
>> code I think, because I'm not aware of a VT-x capable CPU without
>> WBINVD-exiting.
>
> Besides my prior question on your reply, may I also ask what all of
> this means for the patch itself?  After all, you've so far been
> replying only to the post-commit-message remark.

As with the other patch I've just pinged again: unless I hear back on
the patch itself by then, I intend to commit this the week after next,
if need be without any acks.

Jan
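For reference, the L3 size the patch records is what CPUID extended leaf
0x80000006 reports on AMD (and compatible) parts: EDX[31:18] gives the L3
size in 512 KiB units. The sketch below is only a minimal, standalone
illustration of that decoding, not the patch itself; the function name and
the user-space framing are invented for the example.

```c
#include <stdint.h>
#include <stdio.h>
#include <cpuid.h>   /* GCC/clang __get_cpuid() */

/*
 * Return the L3 cache size in KiB as reported by CPUID leaf 0x80000006,
 * or 0 if the leaf is unavailable or no L3 is present.  EDX[31:18]
 * holds the L3 size in 512 KiB units on AMD and compatible CPUs.
 */
static unsigned int l3_cache_size_kb(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(0x80000006, &eax, &ebx, &ecx, &edx))
        return 0;

    return (edx >> 18) * 512;
}

int main(void)
{
    unsigned int l3 = l3_cache_size_kb();

    if (l3)
        printf("L3 cache: %u KiB\n", l3);
    else
        printf("no L3 reported\n");
    return 0;
}
```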
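The trade-off discussed above (a CLFLUSH loop over an L3-sized area versus a
global WBINVD) boils down to a loop like the one sketched here: CLFLUSH walks
the region one line at a time and relies on cache coherency to evict each
line from whichever CPU's cache holds it, so it works from a single CPU,
whereas WBINVD only affects the executing CPU's caches and therefore has to
be broadcast by IPI. This is an illustrative user-space sketch assuming a
64-byte line size and the SSE2 intrinsics, not Xen's flush_area_local()
implementation.

```c
#include <stddef.h>
#include <stdint.h>
#include <emmintrin.h>   /* _mm_clflush(), _mm_mfence() (SSE2) */

#define CACHE_LINE 64    /* assumed line size; real code reads it from CPUID */

/*
 * Flush every cache line backing [addr, addr + size) from the cache
 * hierarchy.  Coherency evicts each line no matter which CPU currently
 * holds it, so no IPI is needed -- but the cost scales with the number
 * of lines, which is the trade-off against a single broadcast WBINVD
 * once the area approaches the L3 size.
 */
static void flush_range(const void *addr, size_t size)
{
    const char *p = (const char *)((uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1));
    const char *end = (const char *)addr + size;

    _mm_mfence();                     /* order prior stores before flushing */
    for (; p < end; p += CACHE_LINE)
        _mm_clflush(p);
    _mm_mfence();                     /* ensure the flushes have completed */
}
```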