Subject: Re: [PATCH v9 11/18] arm64: kexec: arm64_relocate_new_kernel clean-ups
To: Pavel Tatashin
From: James Morse
Date: Thu, 7 May 2020 17:22:13 +0100
References: <20200326032420.27220-1-pasha.tatashin@soleen.com> <20200326032420.27220-12-pasha.tatashin@soleen.com>
In-Reply-To: <20200326032420.27220-12-pasha.tatashin@soleen.com>
Cc: sashal@kernel.org, mark.rutland@arm.com, vladimir.murzin@arm.com, corbet@lwn.net, catalin.marinas@arm.com, bhsharma@redhat.com, steve.capper@arm.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, jmorris@namei.org, linux-mm@kvack.org, rfontana@redhat.com, ebiederm@xmission.com,
 maz@kernel.org, matthias.bgg@gmail.com, tglx@linutronix.de, will@kernel.org, selindag@gmail.com, linux-arm-kernel@lists.infradead.org

Hi Pavel,

On 26/03/2020 03:24, Pavel Tatashin wrote:
> Remove excessive empty lines from arm64_relocate_new_kernel.

To make it harder to read? Or just for the churn...

> Also, use comments on the same lines with instructions where
> appropriate.

Churn,

> Change ENDPROC to END it never returns.

It might be more useful to convert this to the new-style annotations,
which should be a separate patch. See Documentation/asm-annotations.rst

> copy_page(dest, src, tmps...)
> Increments dest and src by PAGE_SIZE, so no need to store dest
> prior to calling copy_page and increment it after. Also, src is not
> used after a copy, not need to copy either.

This bit sounds like cleanup, but I can't isolate it from the noise
below...

> Call raw_dcache_line_size() only when relocation is actually going to
> happen.

Why? The pattern in this code is to set up registers that don't change
at the top, then do all the work. I think this was an attempt to make it
more readable. Nothing branches back to that label, so this is fine;
it's just less obviously correct.

> Since '.align 3' is intended to align globals at the end of the file,
> move it there.

Please don't make noisy changes to whitespace and comments; it's never
worth it.

Thanks,

James

> diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
> index c1d7db71a726..e9c974ea4717 100644
> --- a/arch/arm64/kernel/relocate_kernel.S
> +++ b/arch/arm64/kernel/relocate_kernel.S
> @@ -8,7 +8,6 @@
>
> #include 
> #include 
> -
> #include 
> #include 
> #include 
> @@ -17,25 +16,21 @@
> /*
> * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it.
> *
> - * The memory that the old kernel occupies may be overwritten when coping the
> + * The memory that the old kernel occupies may be overwritten when copying the
> * new image to its final location. To assure that the
> * arm64_relocate_new_kernel routine which does that copy is not overwritten,
> * all code and data needed by arm64_relocate_new_kernel must be between the
> * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The
> * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec
> - * control_code_page, a special page which has been set up to be preserved
> - * during the copy operation.
> + * safe memory that has been set up to be preserved during the copy operation.
> */
> ENTRY(arm64_relocate_new_kernel)
> -
> /* Setup the list loop variables. */
> mov x18, x2 /* x18 = dtb address */
> mov x17, x1 /* x17 = kimage_start */
> mov x16, x0 /* x16 = kimage_head */
> - raw_dcache_line_size x15, x0 /* x15 = dcache line size */
> mov x14, xzr /* x14 = entry ptr */
> mov x13, xzr /* x13 = copy dest */
> -
> /* Clear the sctlr_el2 flags. */
> mrs x0, CurrentEL
> cmp x0, #CurrentEL_EL2
> @@ -46,14 +41,11 @@ ENTRY(arm64_relocate_new_kernel)
> pre_disable_mmu_workaround
> msr sctlr_el2, x0
> isb
> -1:
> -
> - /* Check if the new image needs relocation. */
> +1: /* Check if the new image needs relocation. */
> tbnz x16, IND_DONE_BIT, .Ldone
> -
> + raw_dcache_line_size x15, x1 /* x15 = dcache line size */
> .Lloop:
> and x12, x16, PAGE_MASK /* x12 = addr */
> -
> /* Test the entry flags.
*/ > .Ltest_source: > tbz x16, IND_SOURCE_BIT, .Ltest_indirection > @@ -69,34 +61,18 @@ ENTRY(arm64_relocate_new_kernel) > b.lo 2b > dsb sy > > - mov x20, x13 > - mov x21, x12 > - copy_page x20, x21, x0, x1, x2, x3, x4, x5, x6, x7 > - > - /* dest += PAGE_SIZE */ > - add x13, x13, PAGE_SIZE > + copy_page x13, x12, x0, x1, x2, x3, x4, x5, x6, x7 > b .Lnext > - > .Ltest_indirection: > tbz x16, IND_INDIRECTION_BIT, .Ltest_destination > - > - /* ptr = addr */ > - mov x14, x12 > + mov x14, x12 /* ptr = addr */ > b .Lnext > - > .Ltest_destination: > tbz x16, IND_DESTINATION_BIT, .Lnext > - > - /* dest = addr */ > - mov x13, x12 > - > + mov x13, x12 /* dest = addr */ > .Lnext: > - /* entry = *ptr++ */ > - ldr x16, [x14], #8 > - > - /* while (!(entry & DONE)) */ > - tbz x16, IND_DONE_BIT, .Lloop > - > + ldr x16, [x14], #8 /* entry = *ptr++ */ > + tbz x16, IND_DONE_BIT, .Lloop /* while (!(entry & DONE)) */ > .Ldone: > /* wait for writes from copy_page to finish */ > dsb nsh > @@ -110,16 +86,12 @@ ENTRY(arm64_relocate_new_kernel) > mov x2, xzr > mov x3, xzr > br x17 > - > -ENDPROC(arm64_relocate_new_kernel) > - > .ltorg > - > -.align 3 /* To keep the 64-bit values below naturally aligned. */ > +END(arm64_relocate_new_kernel) > > .Lcopy_end: > .org KEXEC_CONTROL_PAGE_SIZE > - > +.align 3 /* To keep the 64-bit values below naturally aligned. */ > /* > * arm64_relocate_new_kernel_size - Number of bytes to copy to the > * control_code_page. > _______________________________________________ linux-arm-kernel mailing list linux-arm-kernel@lists.infradead.org http://lists.infradead.org/mailman/listinfo/linux-arm-kernel