From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 30 Oct 2020 23:31:59 +0200
From: Mike Rapoport
To: Ard Biesheuvel
Subject: Re: [PATCH] ARM: highmem: avoid clobbering non-page aligned memory reservations
Message-ID: <20201030213159.GA14584@linux.ibm.com>
References: <20201029110334.4118-1-ardb@kernel.org> <013f82d6-d20f-1242-2cdd-9ea9c2ab9f9c@gmail.com> <20201030151822.GA16907@linux.ibm.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To:
Cc: "# 3.4.x" , Linus Walleij , Florian Fainelli , Russell King , Linux ARM
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

On Fri, Oct 30, 2020 at 04:22:37PM +0100, Ard Biesheuvel wrote:
> On Fri, 30 Oct 2020 at 16:18, Mike Rapoport wrote:
> >
> > Hi Ard,
> >
> > > > On 10/29/2020 4:14 AM, Ard Biesheuvel wrote:
> > > > > On Thu, 29 Oct 2020 at 12:03, Ard Biesheuvel wrote:
> > > > >>
> > > > >> free_highpages() iterates over the free memblock regions in high
> > > > >> memory, and marks each page as available for the memory management
> > > > >> system. However, as it rounds the end of each region downwards, we
> > > > >> may end up freeing a page that is memblock_reserve()d, resulting
> > > > >> in memory corruption. So align the end of the range to the next
> > > > >> page instead.
> > > > >>
> > > > >> Cc:
> > > > >> Signed-off-by: Ard Biesheuvel
> > > > >> ---
> > > > >>  arch/arm/mm/init.c | 2 +-
> > > > >>  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > >>
> > > > >> diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
> > > > >> index a391804c7ce3..d41781cb5496 100644
> > > > >> --- a/arch/arm/mm/init.c
> > > > >> +++ b/arch/arm/mm/init.c
> > > > >> @@ -354,7 +354,7 @@ static void __init free_highpages(void)
> > > > >>  	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> > > > >>  				&range_start, &range_end, NULL) {
> > > > >>  		unsigned long start = PHYS_PFN(range_start);
> > > > >> -		unsigned long end = PHYS_PFN(range_end);
> > > > >> +		unsigned long end = PHYS_PFN(PAGE_ALIGN(range_end));
> > > > >>
> > > > >
> > > > > Apologies, this should be
> > > > >
> > > > > -		unsigned long start = PHYS_PFN(range_start);
> > > > > +		unsigned long start = PHYS_PFN(PAGE_ALIGN(range_start));
> > > > > 		unsigned long end = PHYS_PFN(range_end);
> > > > >
> > > > > Strangely enough, the wrong version above also fixed the issue I was
> > > > > seeing, but it is start that needs rounding up, not end.
> > > >
> > > > Is there a particular commit that you identified which could be used as
> > > > Fixes: tag to ease the back porting of such a change?
> > >
> > > Ah hold on. This appears to be a very recent regression, in
> > > cddb5ddf2b76debdb8cad1728ad0a9321383d933, added in v5.10-rc1.
> > >
> > > The old code was
> > >
> > > 	unsigned long start = memblock_region_memory_base_pfn(mem);
> > >
> > > which uses PFN_UP() to round up, whereas the new code rounds down.
> > >
> > > Looks like this is broken on a lot of platforms.
> > >
> > > Mike?
> >
> > I've reviewed again the whole series and it seems that only highmem
> > initialization on arm and xtensa (that copied this code from arm) have
> > this problem. I might have missed something again, though.
> >
> > So, to restore the original behaviour I think the fix should be
> >
> > 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
> > 				&range_start, &range_end, NULL) {
> > 		unsigned long start = PHYS_UP(range_start);
> > 		unsigned long end = PHYS_DOWN(range_end);
> >
>
> PHYS_UP and PHYS_DOWN don't exist.
>
> Could you please send a patch that fixes this everywhere where it's broken?

Argh, this should have been PFN_{UP,DOWN}.

With the patch below qemu-system-arm boots for me. Does it fix your setup
as well?

I kept your authorship as you did the heavy lifting here :)

With acks from ARM and xtensa maintainers I can take it via the memblock
tree.

>From 5399699b9f8de405819c59c3feddecaac0ed1399 Mon Sep 17 00:00:00 2001
From: Ard Biesheuvel
Date: Fri, 30 Oct 2020 22:53:02 +0200
Subject: [PATCH] ARM, xtensa: highmem: avoid clobbering non-page aligned
 memory reservations

free_highpages() iterates over the free memblock regions in high
memory, and marks each page as available for the memory management
system. Until commit cddb5ddf2b76 ("arm, xtensa: simplify initialization
of high memory pages") it rounded the beginning of each region upwards
and the end of each region downwards. However, since that commit
free_highpages() rounds both the beginning and the end of each region
downwards, and we may end up freeing a page that is memblock_reserve()d,
resulting in memory corruption. Restore the original rounding of the
region boundaries to avoid freeing reserved pages.
Fixes: cddb5ddf2b76 ("arm, xtensa: simplify initialization of high memory pages")
Signed-off-by: Ard Biesheuvel
Co-developed-by: Mike Rapoport
Signed-off-by: Mike Rapoport
---
 arch/arm/mm/init.c    | 4 ++--
 arch/xtensa/mm/init.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index d57112a276f5..c23dbf8bebee 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -354,8 +354,8 @@ static void __init free_highpages(void)
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);

 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index c6fc83efee0c..8731b7ad9308 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -89,8 +89,8 @@ static void __init free_highpages(void)
 	/* set highmem page free */
 	for_each_free_mem_range(i, NUMA_NO_NODE, MEMBLOCK_NONE,
 				&range_start, &range_end, NULL) {
-		unsigned long start = PHYS_PFN(range_start);
-		unsigned long end = PHYS_PFN(range_end);
+		unsigned long start = PFN_UP(range_start);
+		unsigned long end = PFN_DOWN(range_end);

 		/* Ignore complete lowmem entries */
 		if (end <= max_low)
--
2.28.0

--
Sincerely yours,
Mike.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel