From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 10 Mar 2024 11:08:49 +0000
From: Matthew Wilcox <willy@infradead.org>
To: Ryan Roberts
Cc: Zi Yan, Andrew Morton, linux-mm@kvack.org, Yang Shi, Huang Ying
Subject: Re: [PATCH v3 10/18] mm: Allow non-hugetlb large folios to be batch processed
References: <090c9d68-9296-4338-9afa-5369bb1db66c@arm.com> <95f91a7f-13dd-4212-97ce-cf0b4060828a@arm.com> <08c01e9d-beda-435c-93ac-f303a89379df@arm.com> <59098a73-636a-497d-bc20-0abc90b4868c@arm.com>
In-Reply-To: <59098a73-636a-497d-bc20-0abc90b4868c@arm.com>

On Sun, Mar 10, 2024 at 08:23:12AM +0000, Ryan Roberts wrote:
> It doesn't sound completely impossible to me that there is a rare
> error path that accidentally folio_put()s an extra time...

Your debug below seems to prove that it's an extra folio_put()
somewhere.
> >  	list_for_each_entry_safe(folio, next, &ds_queue->split_queue,
> >  					_deferred_list) {
> > +		VM_BUG_ON_FOLIO(folio_nid(folio) != sc->nid, folio);
> > +		VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio);
> >  		list_del_init(&folio->_deferred_list);
> > 
> > (also testing the hypothesis that somehow a split folio has ended up
> > on the deferred split list)
> 
> OK, ran with these checks, and get the following oops:
> 
> [ 411.719461] page:0000000059c1826b refcount:0 mapcount:0 mapping:0000000000000000 index:0x1 pfn:0x8c6a40
> [ 411.720807] page:0000000059c1826b refcount:0 mapcount:-128 mapping:0000000000000000 index:0x1 pfn:0x8c6a40
> [ 411.721792] flags: 0xbfffc0000000000(node=0|zone=2|lastcpupid=0xffff)
> [ 411.722453] page_type: 0xffffff7f(buddy)
> [ 411.722870] raw: 0bfffc0000000000 fffffc001227e808 fffffc002a857408 0000000000000000
> [ 411.723672] raw: 0000000000000001 0000000000000004 00000000ffffff7f 0000000000000000
> [ 411.724470] page dumped because: VM_BUG_ON_FOLIO(!folio_test_large(folio))
> [ 411.725176] ------------[ cut here ]------------
> [ 411.725642] kernel BUG at include/linux/mm.h:1191!
> [ 411.726341] Internal error: Oops - BUG: 00000000f2000800 [#1] PREEMPT SMP
> [ 411.727021] Modules linked in:
> [ 411.727329] CPU: 40 PID: 2704 Comm: usemem Not tainted 6.8.0-rc5-00391-g44b0dc848590-dirty #45
> [ 411.728179] Hardware name: linux,dummy-virt (DT)
> [ 411.728657] pstate: 604000c5 (nZCv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
> [ 411.729381] pc : __dump_page+0x450/0x4a8
> [ 411.729789] lr : __dump_page+0x450/0x4a8
> [ 411.730187] sp : ffff80008b97b6f0
> [ 411.730525] x29: ffff80008b97b6f0 x28: 00000000000000e2 x27: ffff80008b97b988
> [ 411.731227] x26: ffff80008b97b988 x25: ffff800082105000 x24: 0000000000000001
> [ 411.731926] x23: 0000000000000000 x22: 0000000000000001 x21: fffffc00221a9000
> [ 411.732630] x20: fffffc00221a9000 x19: fffffc00221a9000 x18: ffffffffffffffff
> [ 411.733331] x17: 3030303030303030 x16: 2066376666666666 x15: 076c076f07660721
> [ 411.734035] x14: 0728074f0749074c x13: 076c076f07660721 x12: 0000000000000000
> [ 411.734757] x11: 0720072007200729 x10: ffff0013f5e756c0 x9 : ffff80008014b604
> [ 411.735473] x8 : 00000000ffffbfff x7 : ffff0013f5e756c0 x6 : 0000000000000000
> [ 411.736198] x5 : ffff0013a5a24d88 x4 : 0000000000000000 x3 : 0000000000000000
> [ 411.736923] x2 : 0000000000000000 x1 : ffff0000c2849b80 x0 : 000000000000003e
> [ 411.737621] Call trace:
> [ 411.737870]  __dump_page+0x450/0x4a8
> [ 411.738229]  dump_page+0x2c/0x70
> [ 411.738551]  deferred_split_scan+0x258/0x368
> [ 411.738973]  do_shrink_slab+0x184/0x750
> 
> The new VM_BUG_ON_FOLIO(folio_order(folio) < 2, folio); is firing, but
> then when dump_page() does this:
> 
>         if (compound) {
>                 pr_warn("head:%p order:%u entire_mapcount:%d nr_pages_mapped:%d pincount:%d\n",
>                                 head, compound_order(head),
>                                 folio_entire_mapcount(folio),
>                                 folio_nr_pages_mapped(folio),
>                                 atomic_read(&folio->_pincount));
>         }
> 
> the VM_BUG_ON_FOLIO(!folio_test_large(folio), folio); inside
> folio_entire_mapcount() fires, so we have a nested oops.

Ah.
I'm not sure what 44b0dc848590 is -- probably a local commit, but I
guess you don't have fae7d834c43c in it, which would prevent the nested
oops.  Nevertheless, the nested oops does tell us something interesting.

> So the very first line is from the first oops and the rest is from the
> second.  I guess we are racing with the page being freed?  I find the
> change in mapcount interesting; 0 -> -128.  Not sure why this would
> happen?

That's PG_buddy being set in PageType.

> Given the NID check didn't fire, I wonder if this points more towards
> extra folio_put than corrupt folio nid?

Must be, if PG_buddy got set.  But we're still left with the question of
how the page gets freed while still being on the deferred list without
triggering bad_page(page, "still on deferred list") ...

Anyway, we've made some progress.  We now understand how a freed page
gets its deferred list overwritten -- we found a split page on the
deferred list with refcount 0, _assumed_ it was still intact, and
overwrote a different page's ->mapping.  And it makes sense that my
patch opened the window wider to hit this problem.

I just checked that free_unref_folios() still does the right thing, and
that also relies on the page not yet being split:

		unsigned int order = folio_order(folio);

		if (order > 0 && folio_test_large_rmappable(folio))
			folio_undo_large_rmappable(folio);
		if (!free_unref_page_prepare(&folio->page, pfn, order))
			continue;

so there shouldn't be a point in the page-freeing process where the
folio is split before we take it off the deferred list.
split_huge_page_to_list_to_order() is also very careful to take the
ds_queue->split_queue_lock before freezing the folio ref, so it's not a
race with that.  I don't see what it is yet.