Date: Fri, 16 Apr 2021 21:47:37 +0000
From: Dennis Zhou
To: Pratik Sampat
Cc: Roman Gushchin, Tejun Heo, Christoph Lameter, Andrew Morton,
 Vlastimil Babka, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 pratik.r.sampat@gmail.com
Subject: Re: [PATCH v3 0/6] percpu: partial chunk depopulation
In-Reply-To: <2a0d371d-79f6-e7aa-6dcd-3b29264e1feb@linux.ibm.com>
References: <20210408035736.883861-1-guro@fb.com>
 <25c78660-9f4c-34b3-3a05-68c313661a46@linux.ibm.com>
 <7a001bf6-5708-fb04-4970-367d9845ccb9@linux.ibm.com>
 <8ea7c616-95e8-e391-5373-ebaf10836d2c@linux.ibm.com>
 <09a8d1eb-280d-9ee9-3d68-d065db47a516@linux.ibm.com>
 <2a0d371d-79f6-e7aa-6dcd-3b29264e1feb@linux.ibm.com>

Hello,

On Sat, Apr 17, 2021 at 01:14:03AM +0530, Pratik Sampat wrote:
> 
> On 17/04/21 12:39 am, Roman Gushchin wrote:
> > On Sat, Apr 17, 2021 at 12:11:37AM +0530, Pratik Sampat wrote:
> > > 
> > > On 17/04/21 12:04 am, Roman Gushchin wrote:
> > > > On Fri, Apr 16, 2021 at 11:57:03PM +0530, Pratik Sampat wrote:
> > > > > On 16/04/21 10:43 pm, Roman Gushchin wrote:
> > > > > > On Fri, Apr 16, 2021 at 08:58:33PM +0530, Pratik Sampat wrote:
> > > > > > > Hello Dennis,
> > > > > > > 
> > > > > > > I apologize for the clutter of logs before; I'm pasting the logs from
> > > > > > > before and after the percpu test, for the patchset applied on 5.12-rc6
> > > > > > > and for the vanilla 5.12-rc6 kernel.
> > > > > > > 
> > > > > > > On 16/04/21 7:48 pm, Dennis Zhou wrote:
> > > > > > > > Hello,
> > > > > > > > 
> > > > > > > > On Fri, Apr 16, 2021 at 06:26:15PM +0530, Pratik Sampat wrote:
> > > > > > > > > Hello Roman,
> > > > > > > > > 
> > > > > > > > > I've tried the v3 patch series on a POWER9 and an x86 KVM setup.
> > > > > > > > > 
> > > > > > > > > My results of the percpu_test are as follows:
> > > > > > > > > Intel KVM 4CPU:4G
> > > > > > > > > Vanilla 5.12-rc6
> > > > > > > > > # ./percpu_test.sh
> > > > > > > > > Percpu:             1952 kB
> > > > > > > > > Percpu:           219648 kB
> > > > > > > > > Percpu:           219648 kB
> > > > > > > > > 
> > > > > > > > > 5.12-rc6 + patchset applied
> > > > > > > > > # ./percpu_test.sh
> > > > > > > > > Percpu:             2080 kB
> > > > > > > > > Percpu:           219712 kB
> > > > > > > > > Percpu:            72672 kB
> > > > > > > > > 
> > > > > > > > > I'm able to see an improvement comparable to the one you're seeing,
> > > > > > > > > too.
> > > > > > > > > 
> > > > > > > > > However, on POWERPC I'm unable to reproduce these improvements with
> > > > > > > > > the patchset in the same configuration.
> > > > > > > > > 
> > > > > > > > > POWER9 KVM 4CPU:4G
> > > > > > > > > Vanilla 5.12-rc6
> > > > > > > > > # ./percpu_test.sh
> > > > > > > > > Percpu:             5888 kB
> > > > > > > > > Percpu:           118272 kB
> > > > > > > > > Percpu:           118272 kB
> > > > > > > > > 
> > > > > > > > > 5.12-rc6 + patchset applied
> > > > > > > > > # ./percpu_test.sh
> > > > > > > > > Percpu:             6144 kB
> > > > > > > > > Percpu:           119040 kB
> > > > > > > > > Percpu:           119040 kB
> > > > > > > > > 
> > > > > > > > > I'm wondering if there's any architecture-specific code that needs
> > > > > > > > > plumbing here?
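For context, percpu_test.sh itself never appears in this thread, so the numbers
above can only be read as /proc/meminfo "Percpu:" samples taken around a burst
of per-CPU allocations. Purely as a hypothetical sketch (the module name, object
count, and structure below are invented and are not the real test), the kind of
churn being measured could be driven like this:

/*
 * pcpu_churn.c: hypothetical stand-in for the test module driven by
 * percpu_test.sh; NOT the actual test from this thread.  Allocate a burst
 * of small per-CPU objects at insmod time and free them all at rmmod time.
 * Sampling "Percpu:" from /proc/meminfo before insmod, after insmod, and
 * after rmmod gives a three-number pattern like the one above.
 */
#include <linux/module.h>
#include <linux/percpu.h>
#include <linux/vmalloc.h>

#define NR_OBJS 100000          /* arbitrary burst size */

static u64 __percpu **objs;

static int __init pcpu_churn_init(void)
{
        size_t i;

        objs = vmalloc(NR_OBJS * sizeof(*objs));
        if (!objs)
                return -ENOMEM;

        /* Grow per-CPU usage; each allocation consumes space in every per-CPU unit. */
        for (i = 0; i < NR_OBJS; i++)
                objs[i] = alloc_percpu(u64);

        return 0;
}

static void __exit pcpu_churn_exit(void)
{
        size_t i;

        /* Drop everything again; free_percpu(NULL) is a no-op, so failed allocs are fine. */
        for (i = 0; i < NR_OBJS; i++)
                free_percpu(objs[i]);

        vfree(objs);
}

module_init(pcpu_churn_init);
module_exit(pcpu_churn_exit);
MODULE_LICENSE("GPL");

Whether the freed pages are actually returned afterwards is what the partial
depopulation series is meant to improve, which is why the third sample differs
between the vanilla and patched kernels in the Intel results above.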
> > > > > > > > There shouldn't be. Can you send me the percpu_stats debug output
> > > > > > > > before and after?
> > > > > > > I'll paste the whole debug stats before and after here.
> > > > > > > 5.12-rc6 + patchset
> > > > > > > -----BEFORE-----
> > > > > > > Percpu Memory Statistics
> > > > > > > Allocation Info:
> > > > > > Hm, this looks highly suspicious. Here are your stats in a more compact
> > > > > > form:
> > > > > > 
> > > > > > Vanilla
> > > > > > 
> > > > > > nr_alloc        :   9038      nr_alloc        :  97046
> > > > > > nr_dealloc      :   6992      nr_dealloc      :  94237
> > > > > > nr_cur_alloc    :   2046      nr_cur_alloc    :   2809
> > > > > > nr_max_alloc    :   2178      nr_max_alloc    :  90054
> > > > > > nr_chunks       :      3      nr_chunks       :     11
> > > > > > nr_max_chunks   :      3      nr_max_chunks   :     47
> > > > > > min_alloc_size  :      4      min_alloc_size  :      4
> > > > > > max_alloc_size  :   1072      max_alloc_size  :   1072
> > > > > > empty_pop_pages :      5      empty_pop_pages :     29
> > > > > > 
> > > > > > Patched
> > > > > > 
> > > > > > nr_alloc        :   9040      nr_alloc        :  97048
> > > > > > nr_dealloc      :   6994      nr_dealloc      :  95002
> > > > > > nr_cur_alloc    :   2046      nr_cur_alloc    :   2046
> > > > > > nr_max_alloc    :   2208      nr_max_alloc    :  90054
> > > > > > nr_chunks       :      3      nr_chunks       :     48
> > > > > > nr_max_chunks   :      3      nr_max_chunks   :     48
> > > > > > min_alloc_size  :      4      min_alloc_size  :      4
> > > > > > max_alloc_size  :   1072      max_alloc_size  :   1072
> > > > > > empty_pop_pages :     12      empty_pop_pages :     61
> > > > > > 
> > > > > > So it looks like the number of chunks got bigger, as well as the number
> > > > > > of empty_pop_pages? This contradicts what you wrote, so can you please
> > > > > > make sure that the data is correct and we're not mixing up two cases?
> > > > > > 
> > > > > > So it looks like for some reason sidelined (depopulated) chunks are not
> > > > > > getting freed completely. But I struggle to explain why the initial
> > > > > > empty_pop_pages is bigger with the same number of chunks.
> > > > > > 
> > > > > > So can you please apply the following patch and provide updated
> > > > > > statistics?
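Both the compact tables above and the full dumps later in the thread come from
the percpu allocator's statistics output (CONFIG_PERCPU_STATS). As a small,
hypothetical convenience that is not part of the thread's tooling (and assuming
debugfs is mounted at /sys/kernel/debug, where the stats file is normally
exposed as percpu_stats), a userspace helper along these lines can capture the
"Percpu:" meminfo counter together with the full stats dump, so before/after
runs can simply be diffed:

/* pcpu_snapshot.c: hypothetical helper for capturing before/after stats. */
#include <stdio.h>
#include <string.h>

/* Print the lines of @path; if @prefix is non-NULL, print only matching lines. */
static void dump_file(const char *path, const char *prefix)
{
        char line[512];
        FILE *f = fopen(path, "r");

        if (!f) {
                perror(path);
                return;
        }
        while (fgets(line, sizeof(line), f))
                if (!prefix || !strncmp(line, prefix, strlen(prefix)))
                        fputs(line, stdout);
        fclose(f);
}

int main(void)
{
        /* The counter the percpu_test.sh output above appears to sample. */
        dump_file("/proc/meminfo", "Percpu:");
        /* Global counters plus one block per chunk, as pasted in this thread. */
        dump_file("/sys/kernel/debug/percpu_stats", NULL);
        return 0;
}

Running this once before and once after the test, then diffing the two captures,
gives the kind of side-by-side comparison condensed above.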
> > > > > Unfortunately, I'm not completely well versed in this area, but yes, the
> > > > > empty pop pages number doesn't make sense to me either.
> > > > > 
> > > > > I re-ran the numbers, trying to make sure my experiment setup is sane, but
> > > > > the results remain the same.
> > > > > 
> > > > > Vanilla
> > > > > 
> > > > > nr_alloc        :   9040      nr_alloc        :  97048
> > > > > nr_dealloc      :   6994      nr_dealloc      :  94404
> > > > > nr_cur_alloc    :   2046      nr_cur_alloc    :   2644
> > > > > nr_max_alloc    :   2169      nr_max_alloc    :  90054
> > > > > nr_chunks       :      3      nr_chunks       :     10
> > > > > nr_max_chunks   :      3      nr_max_chunks   :     47
> > > > > min_alloc_size  :      4      min_alloc_size  :      4
> > > > > max_alloc_size  :   1072      max_alloc_size  :   1072
> > > > > empty_pop_pages :      4      empty_pop_pages :     32
> > > > > 
> > > > > With the patchset + debug patch the results are as follows:
> > > > > 
> > > > > Patched
> > > > > 
> > > > > nr_alloc        :   9040      nr_alloc        :  97048
> > > > > nr_dealloc      :   6994      nr_dealloc      :  94349
> > > > > nr_cur_alloc    :   2046      nr_cur_alloc    :   2699
> > > > > nr_max_alloc    :   2194      nr_max_alloc    :  90054
> > > > > nr_chunks       :      3      nr_chunks       :     48
> > > > > nr_max_chunks   :      3      nr_max_chunks   :     48
> > > > > min_alloc_size  :      4      min_alloc_size  :      4
> > > > > max_alloc_size  :   1072      max_alloc_size  :   1072
> > > > > empty_pop_pages :     12      empty_pop_pages :     54
> > > > > 
> > > > > With the extra tracing I can see 39 entries of "Chunk (sidelined)" after
> > > > > the test was run. I don't see any entries for "Chunk (to depopulate)".
> > > > > 
> > > > > I've snipped the results of the sidelined chunks because they went on for
> > > > > ~600 lines; if you need the full logs, let me know.
> > > > Yes, please! That's the most interesting part!
> > > Got it. Pasting the full logs from after the percpu experiment completed.
> > Thanks!
> > 
> > Would you mind applying the following patch and testing again?
> > 
> > --
> > 
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index ded3a7541cb2..532c6a7ebdfd 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -2296,6 +2296,9 @@ void free_percpu(void __percpu *ptr)
> >  			need_balance = true;
> >  			break;
> >  		}
> > +
> > +		chunk->depopulated = false;
> > +		pcpu_chunk_relocate(chunk, -1);
> >  	} else if (chunk != pcpu_first_chunk && chunk != pcpu_reserved_chunk &&
> >  		   !chunk->isolated &&
> >  		   (pcpu_nr_empty_pop_pages[pcpu_chunk_type(chunk)] >
> > 
> Sure thing.
> 
> I see much lower sidelined chunks. In one such test run I saw zero
> occurrences of sidelined chunks.
> 
> Pasting the full logs as an example:
> 
> BEFORE
> Percpu Memory Statistics
> Allocation Info:
> ----------------------------------------
>   unit_size       : 655360
>   static_size     : 608920
>   reserved_size   : 0
>   dyn_size        : 46440
>   atom_size       : 65536
>   alloc_size      : 655360
> 
> Global Stats:
> ----------------------------------------
>   nr_alloc        : 9038
>   nr_dealloc      : 6992
>   nr_cur_alloc    : 2046
>   nr_max_alloc    : 2200
>   nr_chunks       : 3
>   nr_max_chunks   : 3
>   min_alloc_size  : 4
>   max_alloc_size  : 1072
>   empty_pop_pages : 12
> 
> Per Chunk Stats:
> ----------------------------------------
> Chunk: <- First Chunk
>   nr_alloc        : 1092
>   max_alloc_size  : 1072
>   empty_pop_pages : 0
>   first_bit       : 16247
>   free_bytes      : 4
>   contig_bytes    : 4
>   sum_frag        : 4
>   max_frag        : 4
>   cur_min_alloc   : 4
>   cur_med_alloc   : 8
>   cur_max_alloc   : 1072
>   memcg_aware     : 0
> 
> Chunk:
>   nr_alloc        : 594
>   max_alloc_size  : 992
>   empty_pop_pages : 8
>   first_bit       : 456
>   free_bytes      : 645008
>   contig_bytes    : 319984
>   sum_frag        : 325024
>   max_frag        : 318680
>   cur_min_alloc   : 4
>   cur_med_alloc   : 8
>   cur_max_alloc   : 424
>   memcg_aware     : 0
> 
> Chunk:
>   nr_alloc        : 360
>   max_alloc_size  : 1072
>   empty_pop_pages : 4
>   first_bit       : 26595
>   free_bytes      : 506640
>   contig_bytes    : 506540
>   sum_frag        : 100
>   max_frag        : 32
>   cur_min_alloc   : 4
>   cur_med_alloc   : 156
>   cur_max_alloc   : 1072
>   memcg_aware     : 1
> 
> 
> AFTER
> Percpu Memory Statistics
> Allocation Info:
> ----------------------------------------
>   unit_size       : 655360
>   static_size     : 608920
>   reserved_size   : 0
>   dyn_size        : 46440
>   atom_size       : 65536
>   alloc_size      : 655360
> 
> Global Stats:
> ----------------------------------------
>   nr_alloc        : 97046
>   nr_dealloc      : 94304
>   nr_cur_alloc    : 2742
>   nr_max_alloc    : 90054
>   nr_chunks       : 11
>   nr_max_chunks   : 47
>   min_alloc_size  : 4
>   max_alloc_size  : 1072
>   empty_pop_pages : 18
> 
> Per Chunk Stats:
> ----------------------------------------
> Chunk: <- First Chunk
>   nr_alloc        : 1092
>   max_alloc_size  : 1072
>   empty_pop_pages : 0
>   first_bit       : 16247
>   free_bytes      : 4
>   contig_bytes    : 4
>   sum_frag        : 4
>   max_frag        : 4
>   cur_min_alloc   : 4
>   cur_med_alloc   : 8
>   cur_max_alloc   : 1072
>   memcg_aware     : 0
> 
> Chunk:
>   nr_alloc        : 838
>   max_alloc_size  : 1072
>   empty_pop_pages : 7
>   first_bit       : 464
>   free_bytes      : 640476
>   contig_bytes    : 290672
>   sum_frag        : 349804
>   max_frag        : 304344
>   cur_min_alloc   : 4
>   cur_med_alloc   : 8
>   cur_max_alloc   : 1072
>   memcg_aware     : 0
> 
> Chunk:
>   nr_alloc        : 90
>   max_alloc_size  : 1072
>   empty_pop_pages : 0
>   first_bit       : 536
>   free_bytes      : 595752
>   contig_bytes    : 26164
>   sum_frag        : 575132
>   max_frag        : 26164
>   cur_min_alloc   : 156
>   cur_med_alloc   : 1072
>   cur_max_alloc   : 1072
>   memcg_aware     : 1
> 
> Chunk:
>   nr_alloc        : 90
>   max_alloc_size  : 1072
>   empty_pop_pages : 0
>   first_bit       : 0
>   free_bytes      : 597428
>   contig_bytes    : 26164
>   sum_frag        : 596848
>   max_frag        : 26164
>   cur_min_alloc   : 156
>   cur_med_alloc   : 312
>   cur_max_alloc   : 1072
>   memcg_aware     : 1
> 
> Chunk:
>   nr_alloc        : 92
>   max_alloc_size  : 1072
>   empty_pop_pages : 0
>   first_bit       : 0
>   free_bytes      : 595284
>   contig_bytes    : 26164
>   sum_frag        : 590360
>   max_frag        : 26164
>   cur_min_alloc   : 156
>   cur_med_alloc   : 312
>   cur_max_alloc   : 1072
>   memcg_aware     : 1
> 
> Chunk:
>   nr_alloc        : 92
>   max_alloc_size  : 1072
>   empty_pop_pages : 0
>   first_bit       : 0
>   free_bytes      : 595284
>   contig_bytes    : 26164
>   sum_frag        : 583768
>   max_frag        : 26164
>   cur_min_alloc   : 156
>   cur_med_alloc   : 312
>   cur_max_alloc   : 1072
>   memcg_aware     : 1
> 
> Chunk:
>   nr_alloc        : 360
>   max_alloc_size  : 1072
>   empty_pop_pages : 7
>   first_bit       : 26595
>   free_bytes      : 506640
>   contig_bytes    : 506540
>   sum_frag        : 100
>   max_frag        : 32
>   cur_min_alloc   : 4
>   cur_med_alloc   : 156
>   cur_max_alloc   : 1072
>   memcg_aware     : 1
> 
> Chunk:
>   nr_alloc        : 12
>   max_alloc_size  : 1072
>   empty_pop_pages : 3
>   first_bit       : 0
>   free_bytes      : 647524
>   contig_bytes    : 563492
>   sum_frag        : 57872
>   max_frag        : 26164
>   cur_min_alloc   : 156
>   cur_med_alloc   : 312
>   cur_max_alloc   : 1072
>   memcg_aware     : 1
> 
> Chunk:
>   nr_alloc        : 0
>   max_alloc_size  : 1072
>   empty_pop_pages : 1
>   first_bit       : 0
>   free_bytes      : 655360
>   contig_bytes    : 655360
>   sum_frag        : 0
>   max_frag        : 0
>   cur_min_alloc   : 0
>   cur_med_alloc   : 0
>   cur_max_alloc   : 0
>   memcg_aware     : 1
> 
> Chunk (sidelined):
>   nr_alloc        : 72
>   max_alloc_size  : 1072
>   empty_pop_pages : 0
>   first_bit       : 0
>   free_bytes      : 608344
>   contig_bytes    : 145552
>   sum_frag        : 590340
>   max_frag        : 145552
>   cur_min_alloc   : 156
>   cur_med_alloc   : 312
>   cur_max_alloc   : 1072
>   memcg_aware     : 1
> 
> Chunk (sidelined):
>   nr_alloc        : 4
>   max_alloc_size  : 1072
>   empty_pop_pages : 0
>   first_bit       : 0
>   free_bytes      : 652748
>   contig_bytes    : 426720
>   sum_frag        : 426720
>   max_frag        : 426720
>   cur_min_alloc   : 156
>   cur_med_alloc   : 312
>   cur_max_alloc   : 1072
>   memcg_aware     : 1
> 

Thank you, Pratik, for testing this and working with us to resolve it. I
greatly appreciate it!

Thanks,
Dennis