From: Ryan Roberts <ryan.roberts@arm.com>
To: Matthew Wilcox
Cc: Andrew Morton, linux-mm@kvack.org
Subject: Re: [PATCH v3 10/18] mm: Allow non-hugetlb large folios to be batch processed
Date: Sun, 10 Mar 2024 11:14:33 +0000
Message-ID: <19d5bb96-cfdf-4ae8-a62b-0dfad638532c@arm.com>
In-Reply-To: <8cd67a3d-81a7-4127-9d17-a1d465c3f9e8@arm.com>
References: <20240227174254.710559-1-willy@infradead.org>
 <20240227174254.710559-11-willy@infradead.org>
 <367a14f7-340e-4b29-90ae-bc3fcefdd5f4@arm.com>
 <8cd67a3d-81a7-4127-9d17-a1d465c3f9e8@arm.com>
On 10/03/2024 11:01, Ryan Roberts wrote:
> On 06/03/2024 16:09, Matthew Wilcox wrote:
>> On Wed, Mar 06, 2024 at 01:42:06PM +0000, Ryan Roberts wrote:
>>> When running some swap tests with this change (which is in mm-stable)
>>> present, I see BadThings(TM). Usually I see a "bad page state"
>>> followed by a delay of a few seconds, followed by an oops or NULL
>>> pointer deref. Bisect points to this change, and if I revert it,
>>> the problem goes away.
>>
>> That oops is really messed up ;-( We've clearly got two CPUs oopsing
>> at the same time and it's all interleaved. That said, I can pick some
>> nuggets out of it.
>>
>>> [   76.239466] BUG: Bad page state in process usemem  pfn:2554a0
>>> [   76.240196] kernel BUG at include/linux/mm.h:1120!
>>
>> These are the two different BUGs being called simultaneously ...
>>
>> The first one is bad_page() in page_alloc.c, and the second is the
>> check in put_page_testzero():
>>
>> 	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
>>
>> I'm sure it's significant that both of these are the same page (pfn
>> 2554a0). Feels like we have two CPUs calling folio_put() at the same
>> time, and one of them underflows. It probably doesn't matter which
>> call trace ends up in bad_page() and which in put_page_testzero().
>>
>> One of them is coming from deferred_split_scan(), which is weird,
>> because we can see the folio_try_get() earlier in the function. So
>> whatever this folio was, we found it on the deferred split list, got
>> its refcount, moved it to the local list, and then either failed to
>> get the lock, or successfully got the lock, split it, unlocked it and
>> put it.
>>
>> (I can see this was invoked from a page fault -> memcg shrinking.
>> That's probably irrelevant, but it explains some of the functions in
>> the backtrace.)
>>
>> The other call trace comes from migrate_folio_done(), where we're
>> putting the _source_ folio. That was called from
>> migrate_pages_batch(), which was called from kcompactd.
>>
>> Um. Where do we handle the deferred list in the migration code?
>>
>> I've also tried looking at this from a different angle -- what is it
>> about this commit that produces this problem? It's a fairly small
>> commit:
>>
>> -		if (folio_test_large(folio)) {
>> +		/* hugetlb has its own memcg */
>> +		if (folio_test_hugetlb(folio)) {
>> 			if (lruvec) {
>> 				unlock_page_lruvec_irqrestore(lruvec, flags);
>> 				lruvec = NULL;
>> 			}
>> -			__folio_put_large(folio);
>> +			free_huge_folio(folio);
>>
>> So all that's changed is that large non-hugetlb folios no longer call
>> __folio_put_large(). As a reminder, that function does:
>>
>> 	if (!folio_test_hugetlb(folio))
>> 		page_cache_release(folio);
>> 	destroy_large_folio(folio);
>>
>> and destroy_large_folio() does:
>>
>> 	if (folio_test_large_rmappable(folio))
>> 		folio_undo_large_rmappable(folio);
>>
>> 	mem_cgroup_uncharge(folio);
>> 	free_the_page(&folio->page, folio_order(folio));
>>
>> So after my patch, instead of calling (in order):
>>
>> 	page_cache_release(folio);
>> 	folio_undo_large_rmappable(folio);
>> 	mem_cgroup_uncharge(folio);
>> 	free_unref_page()
>>
>> it calls:
>>
>> 	__page_cache_release(folio, &lruvec, &flags);
>> 	mem_cgroup_uncharge_folios()
>> 	folio_undo_large_rmappable(folio)
>
> I was just looking at this again, and something pops out...
>
> You have swapped the order of folio_undo_large_rmappable() and
> mem_cgroup_uncharge(). But folio_undo_large_rmappable() calls
> get_deferred_split_queue(), which tries to get the split queue from
> folio_memcg(folio) first, and falls back to the pgdat queue otherwise.
> If you are now calling mem_cgroup_uncharge_folios() first, will that
> remove the folio from the cgroup? Then we would be operating on the
> wrong list? (Just a guess based on the name of the function...)

In fact, looking at mem_cgroup_uncharge_folios(), that's exactly what it
does: it calls uncharge_folio(), which zeroes memcg_data. And this is
completely consistent with the behaviour I've seen, including the
original bisection result. It also explains why the "workaround to
re-narrow the window" is 100% successful - it's reverting the ordering
to be correct again.
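
To convince myself the mechanism fits, here's a toy userspace model of
the ordering hazard. To be clear, everything below is invented for
illustration - the struct, queue and function names are mine, and it
only mirrors the *shape* of get_deferred_split_queue(),
folio_undo_large_rmappable() and uncharge_folio(), not the kernel code
(locking is elided):

/* toy_deferred_split.c - invented names, illustration only */
#include <stdio.h>

struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add(struct list_head *n, struct list_head *h)
{
	n->next = h->next;
	n->prev = h;
	h->next->prev = n;
	h->next = n;
}

static void list_del_init(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	list_init(n);
}

/* stands in for a deferred_split queue (split_queue_lock elided) */
struct queue { struct list_head list; int len; };

struct folio {
	struct queue *memcg;		/* stands in for folio_memcg() */
	struct list_head deferred;	/* stands in for _deferred_list */
};

static struct queue memcg_q, pgdat_q;

/* mirrors get_deferred_split_queue(): per-memcg queue if the folio is
 * still charged, otherwise fall back to the per-node queue */
static struct queue *get_queue(struct folio *f)
{
	return f->memcg ? f->memcg : &pgdat_q;
}

/* mirrors folio_undo_large_rmappable(); in the kernel this also takes
 * q's split_queue_lock, so picking the wrong q means no exclusion
 * against deferred_split_scan() walking the memcg queue */
static void undo_deferred(struct folio *f)
{
	struct queue *q = get_queue(f);

	if (!list_empty(&f->deferred)) {
		q->len--;	/* buggy order: this is the pgdat queue,
				 * not the memcg queue we're linked on */
		list_del_init(&f->deferred);
	}
}

/* mirrors uncharge_folio() zeroing memcg_data */
static void uncharge(struct folio *f)
{
	f->memcg = NULL;
}

int main(void)
{
	struct folio f = { .memcg = &memcg_q };

	list_init(&memcg_q.list);
	list_init(&pgdat_q.list);
	list_init(&f.deferred);

	/* deferred_split_folio() queued it on the memcg queue */
	list_add(&f.deferred, &memcg_q.list);
	memcg_q.len = 1;

	uncharge(&f);		/* new order: uncharge first ... */
	undo_deferred(&f);	/* ... so this operates on the pgdat queue */

	printf("memcg_q.len=%d pgdat_q.len=%d\n", memcg_q.len, pgdat_q.len);
	return 0;
}

This prints memcg_q.len=1 pgdat_q.len=-1: the accounting for both
queues is now wrong. In the kernel, the same wrong-queue choice also
means taking the wrong split_queue_lock, so the free path and
deferred_split_scan() are no longer serialised against each other -
which would look a lot like the concurrent folio_put() underflow on pfn
2554a0 above.
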
Assuming you agree, I'll leave you to work up the patch(es).

>> So have I simply widened the window for this race, whatever it is
>> exactly? Something involving mis-handling of the deferred list?
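
To spell out what I think the fix looks like (a sketch only, not a
tested patch - it just restores the old relative order of the two calls
in the new batched path):

	__page_cache_release(folio, &lruvec, &flags);
	folio_undo_large_rmappable()	/* while folio_memcg() is still set */
	mem_cgroup_uncharge_folios()

i.e. undo the deferred-split linkage before the uncharge zeroes
memcg_data, so that get_deferred_split_queue() still finds the memcg
queue the folio is actually on.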