From mboxrd@z Thu Jan 1 00:00:00 1970
From: Amit Daniel Kachhap
To: linux-kernel@vger.kernel.org
Cc: Christoph Hellwig, Vincenzo Frascino, Kevin Brodsky, linux-fsdevel,
	kexec, Amit Daniel Kachhap, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86
Subject: [RFC PATCH 04/14] x86/crash_dump_64: Use the new interface copy_oldmem_page_buf
Date: Fri, 3 Dec 2021 16:12:21 +0530
Message-Id: <20211203104231.17597-5-amit.kachhap@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20211203104231.17597-1-amit.kachhap@arm.com>
References: <20211203104231.17597-1-amit.kachhap@arm.com>

The current interface copy_oldmem_page() passes the user pointer without
the __user annotation, so its implementation has to do unnecessary
user/kernel pointer conversions. Implement the new interface
copy_oldmem_page_buf() to avoid this issue.
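For illustration only (not part of this patch), a caller that already holds
the user/kernel pointer pair can pass both straight through and let the arch
code choose between copy_to_user() and memcpy(), with no (__force char *)
cast. The helper below is a hypothetical sketch; it assumes that
copy_oldmem_page_buf() is declared in <linux/crash_dump.h> next to the
existing helpers, as earlier patches in this series presumably arrange:

#include <linux/crash_dump.h>	/* copy_oldmem_page_buf(), copy_oldmem_page_encrypted() */
#include <linux/types.h>

/* Hypothetical caller: exactly one of @ubuf/@kbuf is expected to be non-NULL. */
static ssize_t example_copy_oldmem(unsigned long pfn, char __user *ubuf,
				   char *kbuf, size_t csize,
				   unsigned long offset, bool encrypted)
{
	if (encrypted)
		return copy_oldmem_page_encrypted(pfn, ubuf, kbuf, csize,
						  offset);

	return copy_oldmem_page_buf(pfn, ubuf, kbuf, csize, offset);
}

The NULL/non-NULL convention mirrors the one copy_oldmem_page_encrypted()
already uses in the hunks below.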
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: x86
Signed-off-by: Amit Daniel Kachhap
---
 arch/x86/kernel/crash_dump_64.c | 44 +++++++++++++++------------------
 1 file changed, 20 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 99cd505628fa..7a6fa797260f 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -12,9 +12,9 @@
 #include <linux/io.h>
 #include <linux/cc_platform.h>
 
-static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
-				  unsigned long offset, int userbuf,
-				  bool encrypted)
+static ssize_t __copy_oldmem_page(unsigned long pfn, char __user *ubuf,
+				  char *kbuf, size_t csize,
+				  unsigned long offset, bool encrypted)
 {
 	void *vaddr;
 
@@ -29,13 +29,13 @@ static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
 	if (!vaddr)
 		return -ENOMEM;
 
-	if (userbuf) {
-		if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
+	if (ubuf) {
+		if (copy_to_user(ubuf, vaddr + offset, csize)) {
 			iounmap((void __iomem *)vaddr);
 			return -EFAULT;
 		}
 	} else
-		memcpy(buf, vaddr + offset, csize);
+		memcpy(kbuf, vaddr + offset, csize);
 
 	set_iounmap_nonlazy();
 	iounmap((void __iomem *)vaddr);
@@ -43,39 +43,35 @@ static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
 }
 
 /**
- * copy_oldmem_page - copy one page of memory
+ * copy_oldmem_page_buf - copy one page of memory
  * @pfn: page frame number to be copied
- * @buf: target memory address for the copy; this can be in kernel address
- *	space or user address space (see @userbuf)
+ * @ubuf: target user memory pointer for the copy; use copy_to_user() if this
+ *	  pointer is not NULL
+ * @kbuf: target kernel memory pointer for the copy; use memcpy() if this
+ *	  pointer is not NULL
  * @csize: number of bytes to copy
  * @offset: offset in bytes into the page (based on pfn) to begin the copy
- * @userbuf: if set, @buf is in user address space, use copy_to_user(),
- *	otherwise @buf is in kernel address space, use memcpy().
  *
- * Copy a page from the old kernel's memory. For this page, there is no pte
- * mapped in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ * Copy a page from the old kernel's memory into the buffer pointed either by
+ * @ubuf or @kbuf. For this page, there is no pte mapped in the current kernel.
+ * We stitch up a pte, similar to kmap_atomic.
  */
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
-			 unsigned long offset, int userbuf)
+ssize_t copy_oldmem_page_buf(unsigned long pfn, char __user *ubuf, char *kbuf,
+			     size_t csize, unsigned long offset)
 {
-	return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, false);
+	return __copy_oldmem_page(pfn, ubuf, kbuf, csize, offset, false);
 }
 
 /**
- * copy_oldmem_page_encrypted - same as copy_oldmem_page() above but ioremap the
- * memory with the encryption mask set to accommodate kdump on SME-enabled
+ * copy_oldmem_page_encrypted - same as copy_oldmem_page_buf() above but ioremap
+ * the memory with the encryption mask set to accommodate kdump on SME-enabled
  * machines.
  */
 ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char __user *ubuf,
 				   char *kbuf, size_t csize, unsigned long offset)
 {
-	if (ubuf)
-		return __copy_oldmem_page(pfn, (__force char *)ubuf, csize,
-					  offset, 1, true);
-	else
-		return __copy_oldmem_page(pfn, kbuf, csize,
-					  offset, 0, true);
+	return __copy_oldmem_page(pfn, ubuf, kbuf, csize, offset, true);
 }
 
 ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
-- 
2.17.1