From mboxrd@z Thu Jan 1 00:00:00 1970
From: ebiederm@xmission.com (Eric W. Biederman)
To: Arnd Bergmann
Cc: linux-arch, Christoph Hellwig, Alexander Viro, Andrew Morton,
	Borislav Petkov, Brian Gerst, Ingo Molnar, "H. Peter Anvin",
	Thomas Gleixner, Linux ARM, Linux Kernel Mailing List, Linux-MM,
	kexec@lists.infradead.org
Subject: Re: [PATCH v3 1/4] kexec: simplify compat_sys_kexec_load
Date: Tue, 18 May 2021 17:45:23 -0500
References: <20210517203343.3941777-1-arnd@kernel.org>
	<20210517203343.3941777-2-arnd@kernel.org>
In-Reply-To: (Arnd Bergmann's message of "Tue, 18 May 2021 16:17:53 +0200")
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain
X-Mailing-List: linux-kernel@vger.kernel.org

Arnd Bergmann writes:

> On Tue, May 18, 2021 at 4:05 PM Arnd Bergmann wrote:
>>
>> On Tue, May 18, 2021 at 3:41 PM Eric W. Biederman wrote:
>> >
>> > Arnd Bergmann writes:
>> >
>> > > From: Arnd Bergmann
>> > >
>> > > The compat version of sys_kexec_load() uses compat_alloc_user_space
>> > > to convert the user-provided arguments into the native format.
>> > >
>> > > Move the conversion into the regular implementation with
>> > > an in_compat_syscall() check to simplify it and avoid the
>> > > compat_alloc_user_space() call.
>> > >
>> > > compat_sys_kexec_load() now behaves the same as sys_kexec_load().
>> >
>> > Nacked-by: "Eric W. Biederman"
>> >
>> > The patch is wrong.
>> >
>> > The logic between the compat entry point and the ordinary entry point
>> > is by necessity different.
>> > This unifies the logic and breaks the compat entry point.
>> >
>> > The fundamental necessity is that the code being loaded needs to know
>> > which mode the kernel is running in so it can safely transition to
>> > the new kernel.
>> >
>> > Given that the two entry points fundamentally need different logic,
>> > and that this patchset's goal was to unify what fundamentally needs
>> > to be different, I don't think this patch series makes any sense for
>> > kexec.
>>
>> Sorry, I'm not following that explanation.  Can you clarify what
>> different modes of the kernel you are referring to here, and how my
>> patch changes this?

I think something like the untested diff below is enough to get rid of
compat_alloc_user_space() cleanly.  Certainly it should be enough to
give an idea of what I am thinking.

diff --git a/kernel/kexec.c b/kernel/kexec.c
index c82c6c06f051..ce69a5d68023 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -19,26 +19,21 @@
 
 #include "kexec_internal.h"
 
-static int copy_user_segment_list(struct kimage *image,
+static void copy_user_segment_list(struct kimage *image,
 				   unsigned long nr_segments,
-				   struct kexec_segment __user *segments)
+				   struct kexec_segment *segments)
 {
-	int ret;
 	size_t segment_bytes;
 
 	/* Read in the segments */
 	image->nr_segments = nr_segments;
 	segment_bytes = nr_segments * sizeof(*segments);
-	ret = copy_from_user(image->segment, segments, segment_bytes);
-	if (ret)
-		ret = -EFAULT;
-
-	return ret;
+	memcpy(image->segment, segments, segment_bytes);
 }
 
 static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
 			     unsigned long nr_segments,
-			     struct kexec_segment __user *segments,
+			     struct kexec_segment *segments,
 			     unsigned long flags)
 {
 	int ret;
@@ -59,9 +54,7 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
 
 	image->start = entry;
 
-	ret = copy_user_segment_list(image, nr_segments, segments);
-	if (ret)
-		goto out_free_image;
+	copy_user_segment_list(image, nr_segments, segments);
 
 	if (kexec_on_panic) {
 		/* Enable special crash kernel control page alloc policy. */
@@ -103,8 +96,8 @@ static int kimage_alloc_init(struct kimage **rimage, unsigned long entry,
 	return ret;
 }
 
-static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
-		struct kexec_segment __user *segments, unsigned long flags)
+static int do_kexec_load_locked(unsigned long entry, unsigned long nr_segments,
+		struct kexec_segment *segments, unsigned long flags)
 {
 	struct kimage **dest_image, *image;
 	unsigned long i;
@@ -174,6 +167,27 @@ static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
 	return ret;
 }
 
+static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
+		struct kexec_segment *segments, unsigned long flags)
+{
+	int result;
+
+	/* Because we write directly to the reserved memory
+	 * region when loading crash kernels we need a mutex here to
+	 * prevent multiple crash kernels from attempting to load
+	 * simultaneously, and to prevent a crash kernel from loading
+	 * over the top of an in-use crash kernel.
+	 *
+	 * KISS: always take the mutex.
+	 */
+	if (!mutex_trylock(&kexec_mutex))
+		return -EBUSY;
+
+	result = do_kexec_load_locked(entry, nr_segments, segments, flags);
+	mutex_unlock(&kexec_mutex);
+	return result;
+}
+
 /*
  * Exec Kernel system call: for obvious reasons only root may call it.
  *
@@ -224,6 +238,11 @@ static inline int kexec_load_check(unsigned long nr_segments,
 	if ((flags & KEXEC_FLAGS) != (flags & ~KEXEC_ARCH_MASK))
 		return -EINVAL;
 
+	/* Verify we are on the appropriate architecture */
+	if (((flags & KEXEC_ARCH_MASK) != KEXEC_ARCH) &&
+	    ((flags & KEXEC_ARCH_MASK) != KEXEC_ARCH_DEFAULT))
+		return -EINVAL;
+
 	/* Put an artificial cap on the number
 	 * of segments passed to kexec_load.
 	 */
@@ -236,33 +255,29 @@ static inline int kexec_load_check(unsigned long nr_segments,
 SYSCALL_DEFINE4(kexec_load, unsigned long, entry, unsigned long, nr_segments,
 		struct kexec_segment __user *, segments, unsigned long, flags)
 {
-	int result;
+	struct kexec_segment *ksegments;
+	unsigned long bytes, result;
 
 	result = kexec_load_check(nr_segments, flags);
 	if (result)
 		return result;
 
-	/* Verify we are on the appropriate architecture */
-	if (((flags & KEXEC_ARCH_MASK) != KEXEC_ARCH) &&
-	    ((flags & KEXEC_ARCH_MASK) != KEXEC_ARCH_DEFAULT))
-		return -EINVAL;
-
-	/* Because we write directly to the reserved memory
-	 * region when loading crash kernels we need a mutex here to
-	 * prevent multiple crash kernels from attempting to load
-	 * simultaneously, and to prevent a crash kernel from loading
-	 * over the top of a in use crash kernel.
-	 *
-	 * KISS: always take the mutex.
-	 */
-	if (!mutex_trylock(&kexec_mutex))
-		return -EBUSY;
+	bytes = nr_segments * sizeof(ksegments[0]);
+	ksegments = kmalloc(bytes, GFP_KERNEL);
+	if (!ksegments)
+		return -ENOMEM;
+
+	result = copy_from_user(ksegments, segments, bytes);
+	if (result) {
+		result = -EFAULT;
+		goto fail;
+	}
 
-	result = do_kexec_load(entry, nr_segments, segments, flags);
-
-	mutex_unlock(&kexec_mutex);
-
+	result = do_kexec_load(entry, nr_segments, ksegments, flags);
+fail:
+	kfree(ksegments);
 	return result;
 }
 
 #ifdef CONFIG_COMPAT
@@ -272,9 +287,9 @@ COMPAT_SYSCALL_DEFINE4(kexec_load, compat_ulong_t, entry,
 		       compat_ulong_t, flags)
 {
 	struct compat_kexec_segment in;
-	struct kexec_segment out, __user *ksegments;
-	unsigned long i, result;
+	struct kexec_segment *ksegments;
+	unsigned long bytes, i, result;
 
 	result = kexec_load_check(nr_segments, flags);
 	if (result)
 		return result;
@@ -285,37 +300,26 @@ COMPAT_SYSCALL_DEFINE4(kexec_load, compat_ulong_t, entry,
 	if ((flags & KEXEC_ARCH_MASK) == KEXEC_ARCH_DEFAULT)
 		return -EINVAL;
 
-	ksegments = compat_alloc_user_space(nr_segments * sizeof(out));
+	bytes = nr_segments * sizeof(ksegments[0]);
+	ksegments = kmalloc(bytes, GFP_KERNEL);
+	if (!ksegments)
+		return -ENOMEM;
+
 	for (i = 0; i < nr_segments; i++) {
 		result = copy_from_user(&in, &segments[i], sizeof(in));
-		if (result)
-			return -EFAULT;
-
-		out.buf = compat_ptr(in.buf);
-		out.bufsz = in.bufsz;
-		out.mem = in.mem;
-		out.memsz = in.memsz;
+		if (result) {
+			result = -EFAULT;
+			goto fail;
+		}
 
-		result = copy_to_user(&ksegments[i], &out, sizeof(out));
-		if (result)
-			return -EFAULT;
+		ksegments[i].buf = compat_ptr(in.buf);
+		ksegments[i].bufsz = in.bufsz;
+		ksegments[i].mem = in.mem;
+		ksegments[i].memsz = in.memsz;
 	}
 
-	/* Because we write directly to the reserved memory
-	 * region when loading crash kernels we need a mutex here to
-	 * prevent multiple crash kernels from attempting to load
-	 * simultaneously, and to prevent a crash kernel from loading
-	 * over the top of a in use crash kernel.
-	 *
-	 * KISS: always take the mutex.
-	 */
-	if (!mutex_trylock(&kexec_mutex))
-		return -EBUSY;
-
 	result = do_kexec_load(entry, nr_segments, ksegments, flags);
-	mutex_unlock(&kexec_mutex);
-
+fail:
+	kfree(ksegments);
 	return result;
 }
 #endif
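
For illustration, the layout difference that the compat loop above has to
bridge can be shown in plain userspace C.  This is a sketch, not kernel
code: the two structs below are simplified stand-ins for the kernel's
struct kexec_segment and struct compat_kexec_segment, and the helper
mirrors the field-by-field widening done in the compat entry point.

/*
 * Minimal sketch (assumed, simplified layouts -- not the kernel's own
 * definitions): on a 64-bit kernel a 32-bit caller hands in segments
 * whose buf member is a 32-bit pointer, so the native and compat
 * structs differ in size and a raw memcpy() of the user buffer would
 * misinterpret every field.
 */
#include <stdint.h>
#include <stdio.h>

struct compat_kexec_segment_demo {      /* what a 32-bit caller passes */
        uint32_t buf;                   /* 32-bit user pointer */
        uint32_t bufsz;
        uint32_t mem;
        uint32_t memsz;
};

struct kexec_segment_demo {             /* native 64-bit layout */
        void *buf;                      /* 64-bit pointer */
        size_t bufsz;
        uint64_t mem;
        size_t memsz;
};

/* Field-by-field widening, as the compat entry point's loop does. */
static void convert_segment(struct kexec_segment_demo *out,
                            const struct compat_kexec_segment_demo *in)
{
        out->buf = (void *)(uintptr_t)in->buf;  /* compat_ptr() analogue */
        out->bufsz = in->bufsz;
        out->mem = in->mem;
        out->memsz = in->memsz;
}

int main(void)
{
        struct compat_kexec_segment_demo in = { 0x1000, 4096, 0x200000, 4096 };
        struct kexec_segment_demo out;

        convert_segment(&out, &in);
        printf("compat segment: %zu bytes, native segment: %zu bytes\n",
               sizeof(in), sizeof(out));
        printf("buf=%p mem=0x%llx memsz=%zu\n",
               out.buf, (unsigned long long)out.mem, out.memsz);
        return 0;
}

Built with a 64-bit toolchain this prints a 16-byte compat segment
against a 32-byte native segment, which is why nr_segments * sizeof()
differs between the two entry points and why the compat path cannot
simply share the native copy_from_user() of the segment array.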