Date: Tue, 10 Oct 2023 10:57:52 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Ingo Molnar
Cc: Peter Zijlstra, Raghavendra K T, K Prateek Nayak, Bharata B Rao,
	Ingo Molnar, LKML, Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH 6/6] sched/numa: Complete scanning of inactive VMAs when
	there is no alternative
Message-ID: <20231010095752.yueqcseg7p3xg5ui@techsingularity.net>
References: <20231010083143.19593-1-mgorman@techsingularity.net>
	<20231010083143.19593-7-mgorman@techsingularity.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline

On Tue, Oct 10, 2023 at 11:23:00AM +0200, Ingo Molnar wrote:
> 
> * Mel Gorman <mgorman@techsingularity.net> wrote:
> 
> > On a 2-socket Cascade Lake test machine, the time to complete the
> > workload is as follows;
> > 
> >                                          6.6.0-rc2              6.6.0-rc2
> >                                sched-numabtrace-v1 sched-numabselective-v1
> > Min      elsp-NUMA01_THREADLOCAL    174.22 (   0.00%)    117.64 (  32.48%)
> > Amean    elsp-NUMA01_THREADLOCAL    175.68 (   0.00%)    123.34 *  29.79%*
> > Stddev   elsp-NUMA01_THREADLOCAL      1.20 (   0.00%)      4.06 (-238.20%)
> > CoeffVar elsp-NUMA01_THREADLOCAL      0.68 (   0.00%)      3.29 (-381.70%)
> > Max      elsp-NUMA01_THREADLOCAL    177.18 (   0.00%)    128.03 (  27.74%)
> > 
> > The time to complete the workload is reduced by almost 30%.
> > 
> >                              6.6.0-rc2              6.6.0-rc2
> >                    sched-numabtrace-v1 sched-numabselective-v1
> > Duration User             91201.80               63506.64
> > Duration System            2015.53                1819.78
> > Duration Elapsed           1234.77                 868.37
> > 
> > In this specific case, system CPU time was not increased but it's not
> > universally true.
> > 
> > From vmstat, the NUMA scanning and fault activity is as follows;
> > 
> >                                          6.6.0-rc2              6.6.0-rc2
> >                                sched-numabtrace-v1 sched-numabselective-v1
> > Ops NUMA base-page range updates     64272.00            26374386.00
> > Ops NUMA PTE updates                 36624.00               55538.00
> > Ops NUMA PMD updates                    54.00               51404.00
> > Ops NUMA hint faults                 15504.00               75786.00
> > Ops NUMA hint local faults %         14860.00               56763.00
> > Ops NUMA hint local percent             95.85                  74.90
> > Ops NUMA pages migrated               1629.00             6469222.00
> > 
> > Both the number of PTE updates and the number of hint faults are
> > dramatically increased. While this is superficially unfortunate, it
> > represents ranges that were simply skipped without the patch. As a
> > result of the scanning and hinting faults, many more pages were also
> > migrated but, as the time to completion is reduced, the overhead is
> > offset by the gain.
> 
> Nice! I've applied your series to tip:sched/core with a few non-functional
> edits to comment/changelog formatting/clarity.
> 

Thanks.
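
As an aside for anyone cross-checking the vmstat figures quoted above (my
own arithmetic, assuming x86-64 with 4K base pages, i.e. 512 PTEs per PMD):
the base-page range counter is the PTE and PMD counters combined, so
36624 + 54*512 = 64272 for the vanilla kernel and 55538 + 51404*512 =
26374386 with the series applied. The three counters are internally
consistent.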

> Btw., was any previous analysis done on the size of the pids_active[] hash
> and the hash collision rate?
> 

Not that I'm aware of, but I also think it would be difficult to design
something representative in terms of a benchmark. New PIDs are typically
sequential, so most benchmarks are not going to show many collisions unless
the hash algorithm ignores the lower bits. Maybe it does; I didn't actually
check the hash algorithm and, if it does, that is likely the patch
justification right there -- threads created at similar times would be
almost certain to collide (a quick user-space illustration is appended
after the signature). As it was Peter that suggested the hash, I assumed he
had considered collisions due to lower bits, but that is also lazy on my
part.

If the lower bits are used, it poses the question -- does it matter? The
intent of the bitmap is for threads to prefer updating PTEs within
task-active VMAs, but ultimately all VMAs should be scanned anyway, so some
of the overhead is wasted regardless. While collisions may occur, it's
still better than scanning within VMAs that are definitely *not* of
interest. It would suggest that a sensible direction would be to scan in
passes, much like load balancing uses fbq_type in find_busiest_queue() to
filter what types of tasks should be considered for moving. So, maybe the
passes would look like

1. Task-active
2. Multiple tasks active
3. Any task active
4. Inactive

(a rough sketch of this ordering is also appended below). The objective
would be that PTE updates are as relevant as possible and, hopefully, by
the time only inactive VMAs are considered, there is a relatively small
amount of wasted work.

> 64 (BITS_PER_LONG) feels a bit small, especially on larger machines running
> threaded workloads, and the kmalloc of numab_state likely allocates a full
> cacheline anyway, so we could double the hash size from 16 bytes (2x1 longs)
> to 32 bytes (2x2 longs) with very little real cost, and still have a long
> field left to spare?
> 

You're right, we could, and it's relatively cheap. I would worry that, as
the storage overhead is per-VMA, workloads on large machines may also have
lots of VMAs without necessarily using many threads. As I would struggle to
provide supporting data justifying the change, I would also be hesitant to
try merging it because, if I was reviewing such a patch from someone else,
the first question I would ask is "is there any performance benefit that
you can show?". I would expect a first patch to provide some telemetry on
collisions and a second patch to use that data to justify the resize.

-- 
Mel Gorman
SUSE Labs
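
P.S. To make the collision question concrete, below is a quick user-space
approximation. It rests on an assumption I have not verified against the
tree while writing this: that the per-VMA bit is derived with
hash_32(pid, ilog2(BITS_PER_LONG)), with GOLDEN_RATIO_32 copied from
include/linux/hash.h. If that holds, the hash is multiplicative and keeps
the *high* bits of the product, so it does not simply discard the low bits
of sequential PIDs; collisions then come from the pigeonhole effect of
packing more threads than bits, not from PID adjacency.

	#include <stdio.h>
	#include <stdint.h>

	#define GOLDEN_RATIO_32 0x61C88647u

	/* Multiplicative hash as in include/linux/hash.h (assumed) */
	static unsigned int hash_32(uint32_t val, unsigned int bits)
	{
		return (val * GOLDEN_RATIO_32) >> (32 - bits);
	}

	int main(void)
	{
		unsigned int counts[64] = { 0 }, collisions = 0;
		uint32_t pid;

		/* 128 threads with sequential PIDs into a 64-bit map */
		for (pid = 1000; pid < 1128; pid++) {
			unsigned int bit = hash_32(pid, 6); /* ilog2(64) */

			if (counts[bit]++)
				collisions++;
		}
		printf("collisions: %u/128\n", collisions);
		return 0;
	}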
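
P.P.S. A purely illustrative sketch of the scan-pass ordering mentioned
above. None of these identifiers exist in the kernel; the shape only
mirrors how fbq_type filters candidate runqueues in find_busiest_queue().

	#include <stdbool.h>

	enum numab_scan_pass {
		NUMAB_PASS_TASK_ACTIVE,	 /* this task marked the VMA active */
		NUMAB_PASS_MULTI_ACTIVE, /* several tasks marked it active */
		NUMAB_PASS_ANY_ACTIVE,	 /* at least one task marked it */
		NUMAB_PASS_INACTIVE,	 /* everything else, scanned last */
	};

	/* Would a VMA with this pids_active mask be scanned in this pass? */
	static bool vma_allowed_this_pass(enum numab_scan_pass pass,
					  unsigned long active_pids,
					  unsigned long my_pid_bit)
	{
		switch (pass) {
		case NUMAB_PASS_TASK_ACTIVE:
			return active_pids & my_pid_bit;
		case NUMAB_PASS_MULTI_ACTIVE:
			return __builtin_popcountl(active_pids) > 1;
		case NUMAB_PASS_ANY_ACTIVE:
			return active_pids != 0;
		case NUMAB_PASS_INACTIVE:
		default:
			return true;
		}
	}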