From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 06 Aug 2020 23:18:20 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, alex.popov@linux.com, cl@linux.com, guro@fb.com,
 iamjoonsoo.kim@lge.com, jannh@google.com, keescook@chromium.org,
 linux-mm@kvack.org, mjg59@google.com, mm-commits@vger.kernel.org,
 penberg@kernel.org, rientjes@google.com, torvalds@linux-foundation.org,
 vbabka@suse.cz, vinmenon@codeaurora.org, vjitta@codeaurora.org
Subject: [patch 024/163] mm/slab: expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB
Message-ID: <20200807061820.jrCao-wcB%akpm@linux-foundation.org>
In-Reply-To: <20200806231643.a2711a608dd0f18bff2caf2b@linux-foundation.org>
User-Agent: s-nail v14.8.16
Sender: mm-commits-owner@vger.kernel.org
Precedence: bulk
Reply-To: linux-kernel@vger.kernel.org
X-Mailing-List: mm-commits@vger.kernel.org

From: Kees Cook
Subject: mm/slab: expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB

Patch series "mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB"

In reviewing Vlastimil Babka's latest slub debug series, I realized[1]
that several checks under CONFIG_SLAB_FREELIST_HARDENED weren't being
applied to SLAB.  Fix this by expanding the Kconfig coverage and adding a
simple double-free test for SLAB.

This patch (of 2):

Include SLAB caches when performing kmem_cache pointer verification.  A
defense against such corruption[1] should be applied to all the
allocators.
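For reference, the verification in question is the cache-membership check
performed at free time (cache_from_obj() in mm/slab.h).  The snippet below
is a simplified sketch of the idea rather than the exact kernel code: the
object being freed is mapped back to the cache that actually owns it, and
a warning fires if that does not match the cache the caller passed in.

  /*
   * Simplified sketch of the free-time cache membership check; the real
   * implementation lives in cache_from_obj() in mm/slab.h.
   */
  static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
  {
  	struct kmem_cache *cachep;

  	/* Only pay for the lookup when the hardening option is enabled. */
  	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED))
  		return s;

  	/* Map the freed object back to the cache that actually owns it. */
  	cachep = virt_to_cache(x);

  	/* Complain if it is not the cache the caller claims to free to. */
  	WARN(cachep && cachep != s,
  	     "%s: Wrong slab cache. %s but object is from %s\n",
  	     __func__, s->name, cachep->name);
  	return cachep;
  }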
With this added, the "SLAB_FREE_CROSS" and "SLAB_FREE_PAGE" LKDTM tests
now pass on SLAB:

  lkdtm: Performing direct entry SLAB_FREE_CROSS
  lkdtm: Attempting cross-cache slab free ...
  ------------[ cut here ]------------
  cache_from_obj: Wrong slab cache. lkdtm-heap-b but object is from lkdtm-heap-a
  WARNING: CPU: 2 PID: 2195 at mm/slab.h:530 kmem_cache_free+0x8d/0x1d0
  ...
  lkdtm: Performing direct entry SLAB_FREE_PAGE
  lkdtm: Attempting non-Slab slab free ...
  ------------[ cut here ]------------
  virt_to_cache: Object is not a Slab page!
  WARNING: CPU: 1 PID: 2202 at mm/slab.h:489 kmem_cache_free+0x196/0x1d0

Additionally clean up neighboring Kconfig entries for clarity,
readability, and redundant option removal.

[1] https://github.com/ThomasKing2014/slides/raw/master/Building%20universal%20Android%20rooting%20with%20a%20type%20confusion%20vulnerability.pdf

Link: http://lkml.kernel.org/r/20200625215548.389774-1-keescook@chromium.org
Link: http://lkml.kernel.org/r/20200625215548.389774-2-keescook@chromium.org
Fixes: 598a0717a816 ("mm/slab: validate cache membership under freelist hardening")
Signed-off-by: Kees Cook
Acked-by: Vlastimil Babka
Cc: Alexander Popov
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Jann Horn
Cc: Joonsoo Kim
Cc: Matthew Garrett
Cc: Pekka Enberg
Cc: Roman Gushchin
Cc: Vijayanand Jitta
Cc: Vinayak Menon
Signed-off-by: Andrew Morton
---

 init/Kconfig |    9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

--- a/init/Kconfig~mm-expand-config_slab_freelist_hardened-to-include-slab
+++ a/init/Kconfig
@@ -1913,9 +1913,8 @@ config SLAB_MERGE_DEFAULT
 	  command line.
 
 config SLAB_FREELIST_RANDOM
-	default n
+	bool "Randomize slab freelist"
 	depends on SLAB || SLUB
-	bool "SLAB freelist randomization"
 	help
 	  Randomizes the freelist order used on creating new pages. This
 	  security feature reduces the predictability of the kernel slab
@@ -1923,12 +1922,14 @@ config SLAB_FREELIST_RANDOM
 
 config SLAB_FREELIST_HARDENED
 	bool "Harden slab freelist metadata"
-	depends on SLUB
+	depends on SLAB || SLUB
 	help
 	  Many kernel heap attacks try to target slab cache metadata and
 	  other infrastructure. This options makes minor performance
 	  sacrifices to harden the kernel slab allocator against common
-	  freelist exploit methods.
+	  freelist exploit methods. Some slab implementations have more
+	  sanity-checking than others. This option is most effective with
+	  CONFIG_SLUB.
 
 config SHUFFLE_PAGE_ALLOCATOR
 	bool "Page allocator randomization"
_
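For completeness, the LKDTM tests exercised in the log above are added by
the second patch in this series.  The snippet below is a simplified sketch
of what they do, not the exact LKDTM code; the cache names and log
messages are taken from the output quoted earlier:

  /* Two small caches, created at init time as "lkdtm-heap-a"/"lkdtm-heap-b". */
  static struct kmem_cache *a_cache;
  static struct kmem_cache *b_cache;

  void lkdtm_SLAB_FREE_CROSS(void)
  {
  	void *val = kmem_cache_alloc(a_cache, GFP_KERNEL);

  	if (!val)
  		return;

  	/* Free an object from cache A into cache B: cache_from_obj() should warn. */
  	pr_info("Attempting cross-cache slab free ...\n");
  	kmem_cache_free(b_cache, val);
  }

  void lkdtm_SLAB_FREE_PAGE(void)
  {
  	unsigned long p = __get_free_page(GFP_KERNEL);

  	/* Pass a non-slab page to kmem_cache_free(): virt_to_cache() should warn. */
  	pr_info("Attempting non-Slab slab free ...\n");
  	kmem_cache_free(NULL, (void *)p);
  	free_page(p);
  }

With LKDTM built in and debugfs mounted, the tests are triggered by
writing their names to /sys/kernel/debug/provoke-crash/DIRECT.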