From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753466AbcDNWcq (ORCPT );
	Thu, 14 Apr 2016 18:32:46 -0400
Received: from mail-pf0-f173.google.com ([209.85.192.173]:35369 "EHLO
	mail-pf0-f173.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752420AbcDNW3a (ORCPT );
	Thu, 14 Apr 2016 18:29:30 -0400
From: Kees Cook 
To: Ingo Molnar 
Cc: Kees Cook , Baoquan He , Yinghai Lu , Ard Biesheuvel ,
	Matt Redfearn , x86@kernel.org, "H. Peter Anvin" , Ingo Molnar ,
	Borislav Petkov , Vivek Goyal , Andy Lutomirski ,
	lasse.collin@tukaani.org, Andrew Morton , Dave Young ,
	kernel-hardening@lists.openwall.com, LKML 
Subject: [PATCH v5 06/21] x86, KASLR: Update description for decompressor worst case size
Date: Thu, 14 Apr 2016 15:28:59 -0700
Message-Id: <1460672954-32567-7-git-send-email-keescook@chromium.org>
X-Mailer: git-send-email 2.6.3
In-Reply-To: <1460672954-32567-1-git-send-email-keescook@chromium.org>
References: <1460672954-32567-1-git-send-email-keescook@chromium.org>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

From: Baoquan He 

The comment that describes the analysis for the size of the decompressor
code only took gzip into account (there are 6 other decompressors that
could be used). The actual z_extract_offset calculation in code was
already handling the correct maximum size, but this documentation hadn't
been updated. This updates the documentation and fixes several typos.

Signed-off-by: Baoquan He 
[kees: rewrote changelog, cleaned up comment style]
Signed-off-by: Kees Cook 
---
 arch/x86/boot/compressed/misc.c | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index e2a998f8c304..31e2d6155643 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -19,11 +19,13 @@
  */
 
 /*
- * Getting to provable safe in place decompression is hard.
- * Worst case behaviours need to be analyzed.
- * Background information:
+ * Getting to provable safe in place decompression is hard. Worst case
+ * behaviours need be analyzed. Here let's take decompressing gzip-compressed
+ * kernel as example to illustrate it.
+ *
+ * The file layout of gzip compressed kernel is as follows. For more
+ * information, please refer to RFC 1951 and RFC 1952.
  *
- * The file layout is:
  *	magic[2]
  *	method[1]
  *	flags[1]
@@ -70,13 +72,13 @@
  * of 5 bytes per 32767 bytes.
  *
  * The worst case internal to a compressed block is very hard to figure.
- * The worst case can at least be boundined by having one bit that represents
+ * The worst case can at least be bounded by having one bit that represents
  * 32764 bytes and then all of the rest of the bytes representing the very
  * very last byte.
  *
  * All of which is enough to compute an amount of extra data that is required
  * to be safe. To avoid problems at the block level allocating 5 extra bytes
- * per 32767 bytes of data is sufficient. To avoind problems internal to a
+ * per 32767 bytes of data is sufficient. To avoid problems internal to a
  * block adding an extra 32767 bytes (the worst case uncompressed block size)
  * is sufficient, to ensure that in the worst case the decompressed data for
  * block will stop the byte before the compressed data for a block begins.
@@ -88,11 +90,17 @@
  * Adding 8 bytes per 32K is a bit excessive but much easier to calculate.
  * Adding 32768 instead of 32767 just makes for round numbers.
  *
+ * Above analysis is for decompressing gzip compressed kernel only. Up to
+ * now 6 different decompressor are supported all together. And among them
+ * xz stores data in chunks and has maximum chunk of 64K. Hence safety
+ * margin should be updated to cover all decompressors so that we don't
+ * need to deal with each of them separately. Please check
+ * the description in lib/decompressor_xxx.c for specific information.
+ *
+ * extra_bytes = (uncompressed_size >> 12) + 65536 + 128.
+ *
  */
 
-/*
- * gzip declarations
- */
 #define STATIC static
 
 #undef memcpy
-- 
2.6.3
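
To make the gzip worst-case arithmetic in the patch's comment concrete: 8 bytes of bookkeeping per 32 KiB of output (rounded up from 5 bytes per 32767), plus one worst-case 32 KiB uncompressed block that may still be in flight, gives the gzip-only safety margin. Below is a minimal stand-alone sketch of that bound; the function name and the 20 MiB example size are hypothetical and are not part of the kernel sources.

#include <stdio.h>

/*
 * Hypothetical illustration of the gzip-only bound described in the
 * comment: (size >> 12) is 8 bytes per 32768 bytes of output, and the
 * extra 32768 covers one worst-case uncompressed block.
 */
static unsigned long gzip_worst_case_margin(unsigned long uncompressed_size)
{
	return (uncompressed_size >> 12) + 32768;
}

int main(void)
{
	unsigned long size = 20UL << 20;	/* assume a 20 MiB uncompressed kernel */

	printf("gzip-only margin: %lu bytes\n", gzip_worst_case_margin(size));
	return 0;
}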
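
The added comment text then generalizes this bound to cover all 6 supported decompressors, with xz's 64 KiB maximum chunk replacing gzip's 32 KiB block as the dominant term. The sketch below only restates the formula given in the comment, extra_bytes = (uncompressed_size >> 12) + 65536 + 128; the real bound is the z_extract_offset calculation mentioned in the changelog, and this helper is purely illustrative.

#include <stdio.h>

/*
 * Sketch of the generalized margin from the added comment: per-32K
 * bookkeeping, plus xz's maximum 64 KiB chunk, plus a small 128-byte
 * slack.  Illustration only, not the kernel's actual code.
 */
static unsigned long extra_bytes(unsigned long uncompressed_size)
{
	return (uncompressed_size >> 12) + 65536 + 128;
}

int main(void)
{
	unsigned long size = 20UL << 20;	/* same example 20 MiB uncompressed kernel */

	printf("all-decompressor margin: %lu bytes\n", extra_bytes(size));
	return 0;
}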