Message-ID: <1571842093.5937.84.camel@lca.pw>
Subject: Re: [PATCH] mm/vmstat: Reduce zone lock hold time when reading /proc/pagetypeinfo
From: Qian Cai
To: Waiman Long, Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Johannes Weiner,
 Michal Hocko, Roman Gushchin, Vlastimil Babka, Konstantin Khlebnikov,
 Jann Horn, Song Liu, Greg Kroah-Hartman, Rafael Aquini
Date: Wed, 23 Oct 2019 10:48:13 -0400
In-Reply-To: <2236495a-ead0-e08e-3fb6-f3ab906b75b6@redhat.com>
References: <20191022162156.17316-1-longman@redhat.com>
 <20191022145902.d9c4a719c0b32175e06e4eee@linux-foundation.org>
 <2236495a-ead0-e08e-3fb6-f3ab906b75b6@redhat.com>

On Wed, 2019-10-23 at 10:30 -0400, Waiman Long wrote:
> On 10/22/19 5:59 PM, Andrew Morton wrote:
> > On Tue, 22 Oct 2019 12:21:56 -0400 Waiman Long wrote:
> > 
> > > The pagetypeinfo_showfree_print() function prints out the number of
> > > free blocks for each of the page orders and migrate types. The current
> > > code just iterates each of the free lists to get the counts. There are
> > > bug reports about hard lockup panics when reading the /proc/pagetypeinfo
> > > file just because it takes too long to iterate all the free lists within
> > > a zone while holding the zone lock with irq disabled.
> > > 
> > > Given the fact that /proc/pagetypeinfo is readable by all, the possibility
> > > of crashing a system by the simple act of reading /proc/pagetypeinfo
> > > by any user is a security problem that needs to be addressed.
> > 
> > Yes.
> > 
> > > There is a free_area structure associated with each page order. There
> > > is also a nr_free count within the free_area for all the different
> > > migration types combined. Tracking the number of free list entries
> > > for each migration type will probably add some overhead to the fast
> > > paths like moving pages from one migration type to another, which may
> > > not be desirable.
> > > 
> > > We can actually skip iterating the list of one of the migration types
> > > and use nr_free to compute the missing count. Since MIGRATE_MOVABLE
> > > is usually the largest one on large memory systems, this is the one
> > > to be skipped. Since the printing order is migration-type => order, we
> > > will have to store the counts in an internal 2D array before printing
> > > them out.
> > > 
> > > Even by skipping the MIGRATE_MOVABLE pages, we may still be holding the
> > > zone lock for too long, blocking other zone lock waiters from being
> > > run. This can be problematic for systems with large amounts of memory.
> > > So a check is added to temporarily release the lock and reschedule if
> > > more than 64k of list entries have been iterated for each order. With
> > > a MAX_ORDER of 11, the worst case will be iterating about 700k of list
> > > entries before releasing the lock.
> > > 
> > > ...
> > > 
> > > --- a/mm/vmstat.c
> > > +++ b/mm/vmstat.c
> > > @@ -1373,23 +1373,54 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
> > >  					pg_data_t *pgdat, struct zone *zone)
> > >  {
> > >  	int order, mtype;
> > > +	unsigned long nfree[MAX_ORDER][MIGRATE_TYPES];
> > 
> > 600+ bytes is a bit much.  I guess it's OK in this situation.
> > 
> 
> This function is called by reading /proc/pagetypeinfo. The call stack is
> rather shallow:
> 
> PID: 58188   TASK: ffff938a4d4f1fa0   CPU: 2   COMMAND: "sosreport"
>  #0 [ffff9483bf488e48] crash_nmi_callback at ffffffffb8c551d7
>  #1 [ffff9483bf488e58] nmi_handle at ffffffffb931d8cc
>  #2 [ffff9483bf488eb0] do_nmi at ffffffffb931dba8
>  #3 [ffff9483bf488ef0] end_repeat_nmi at ffffffffb931cd69
>     [exception RIP: pagetypeinfo_showfree_print+0x73]
>     RIP: ffffffffb8db7173  RSP: ffff938b9fcbfda0  RFLAGS: 00000006
>     RAX: fffff0c9946d7020  RBX: ffff96073ffd5528  RCX: 0000000000000000
>     RDX: 00000000001c7764  RSI: ffffffffb9676ab1  RDI: 0000000000000000
>     RBP: ffff938b9fcbfdd0   R8: 000000000000000a   R9: 00000000fffffffe
>     R10: 0000000000000000  R11: ffff938b9fcbfc36  R12: ffff942b97758240
>     R13: ffffffffb942f730  R14: ffff96073ffd5000  R15: ffff96073ffd5180
>     ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
> --- ---
>  #4 [ffff938b9fcbfda0] pagetypeinfo_showfree_print at ffffffffb8db7173
>  #5 [ffff938b9fcbfdd8] walk_zones_in_node at ffffffffb8db74df
>  #6 [ffff938b9fcbfe20] pagetypeinfo_show at ffffffffb8db7a29
>  #7 [ffff938b9fcbfe48] seq_read at ffffffffb8e45c3c
>  #8 [ffff938b9fcbfeb8] proc_reg_read at ffffffffb8e95070
>  #9 [ffff938b9fcbfed8] vfs_read at ffffffffb8e1f2af
> #10 [ffff938b9fcbff08] sys_read at ffffffffb8e2017f
> #11 [ffff938b9fcbff50] system_call_fastpath at ffffffffb932579b
> 
> So we should not be in any risk of overflowing the stack.
> 
> > > -	for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> > > -		seq_printf(m, "Node %4d, zone %8s, type %12s ",
> > > -					pgdat->node_id,
> > > -					zone->name,
> > > -					migratetype_names[mtype]);
> > > -		for (order = 0; order < MAX_ORDER; ++order) {
> > > +	lockdep_assert_held(&zone->lock);
> > > +	lockdep_assert_irqs_disabled();
> > > +
> > > +	/*
> > > +	 * MIGRATE_MOVABLE is usually the largest one in large memory
> > > +	 * systems. We skip iterating that list. Instead, we compute it by
> > > +	 * subtracting the total of the rest from free_area->nr_free.
> > > +	 */
> > > +	for (order = 0; order < MAX_ORDER; ++order) {
> > > +		unsigned long nr_total = 0;
> > > +		struct free_area *area = &(zone->free_area[order]);
> > > +
> > > +		for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> > >  			unsigned long freecount = 0;
> > > -			struct free_area *area;
> > >  			struct list_head *curr;
> > >  
> > > -			area = &(zone->free_area[order]);
> > > -
> > > +			if (mtype == MIGRATE_MOVABLE)
> > > +				continue;
> > >  			list_for_each(curr, &area->free_list[mtype])
> > >  				freecount++;
> > > -			seq_printf(m, "%6lu ", freecount);
> > > +			nfree[order][mtype] = freecount;
> > > +			nr_total += freecount;
> > >  		}
> > > +		nfree[order][MIGRATE_MOVABLE] = area->nr_free - nr_total;
> > > +
> > > +		/*
> > > +		 * If we have already iterated more than 64k of list
> > > +		 * entries, we might have held the zone lock for too long.
> > > +		 * Temporarily release the lock and reschedule before
> > > +		 * continuing so that other lock waiters have a chance
> > > +		 * to run.
> > > +		 */
> > > +		if (nr_total > (1 << 16)) {
> > > +			spin_unlock_irq(&zone->lock);
> > > +			cond_resched();
> > > +			spin_lock_irq(&zone->lock);
> > > +		}
> > > +	}
> > > +
> > > +	for (mtype = 0; mtype < MIGRATE_TYPES; mtype++) {
> > > +		seq_printf(m, "Node %4d, zone %8s, type %12s ",
> > > +					pgdat->node_id,
> > > +					zone->name,
> > > +					migratetype_names[mtype]);
> > > +		for (order = 0; order < MAX_ORDER; ++order)
> > > +			seq_printf(m, "%6lu ", nfree[order][mtype]);
> > >  		seq_putc(m, '\n');
> > 
> > This is not exactly a thing of beauty :(  Presumably there might still
> > be situations where the irq-off times remain excessive.
> 
> Yes, that is still possible.
> 
> > Why are we actually holding zone->lock so much?  Can we get away with
> > holding it across the list_for_each() loop and nothing else?  If so,
> 
> We can certainly do that with the risk that the counts will be less
> reliable for a given order. I can send a v2 patch if you think this is
> safer.
> 
> > this still isn't a bulletproof fix.  Maybe just terminate the list
> > walk if freecount reaches 1024.  Would anyone really care?
> > 
> > Sigh.  I wonder if anyone really uses this thing for anything
> > important.  Can we just remove it all?
> 
> Removing it will be a breakage of the kernel API.

Who cares about breaking this part of the API when essentially nobody will
use this file?

> Another alternative is to mark the migration type in the page structure
> so that we can do per-migration-type nr_free tracking. That will be a
> major change to the mm code.
> 
> I consider this patch the lesser of the two evils.
> 
> Cheers,
> Longman
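
For illustration only, a minimal sketch of the "terminate the list walk
early" alternative Andrew floats above could look like the helper below.
The helper name count_free_capped() and the cap value are assumptions made
up for this sketch, not part of the posted patch; the caller would keep the
existing zone->lock/irq handling and treat a returned value equal to the
cap as "at least this many".

/*
 * Sketch only: walk one free list but stop counting once the count
 * reaches 'cap', so the zone lock is never held across an unbounded
 * iteration of a huge free list.
 */
static unsigned long count_free_capped(struct free_area *area, int mtype,
				       unsigned long cap)
{
	struct list_head *curr;
	unsigned long freecount = 0;

	list_for_each(curr, &area->free_list[mtype]) {
		if (++freecount >= cap)
			break;	/* saturated; caller reports ">= cap" */
	}
	return freecount;
}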